PLEX86 x86 Virtual Machine (VM) Program

Solaris vs. Linux network stack comparison: a developer's perspective

Introductory note - linux kernel developers from the inner circle (whom I usually perceive as an arrogant bunch of conceited individuals) claim that by putting the most frequently used functions in header files they achieve great inlining benefits and a performance boost. Little do they know that it pains mercenary programmers to view their often inefficient, bloated and stylistically lacking code.

2 answers - inlining is not necessarily the boon they make it out to be; it can turn into a performance problem. Also, that was a valid argument before the arrival of compilers that can do interprocedural optimization (IPO), but not any more. A source pool is much cleaner and shorter if headers are shared between platforms and the ugly implementation details are hidden in differently named C files. The build mechanism would pick out the right C files for each platform and IPO would take care of the performance issues. IPO might be tricky to get working for modules, though, and is probably unnecessary in that case.

1. Insertion of modules into the stack. a) Solaris does it through the establishment of autopush dependencies; see the sad(7D) man page. If hooking is at layer 2, then fstat() must be performed on the /dev entry to obtain the correct major/minor number; no other method exists. Hooking just above layer 2 is also a possibility.

pros: 1. well thought out modular architecture, very stable across releases 2. there is no evidence that performance suffers due to memory-bound processing; processing is not any more "memory bound" than on linux.

cons: 1. the "protocol" (i.e. state machine) obeyed by streams components can easily be broken by a newly inserted module which decides to send a probe up- or down-stream to obtain some information; an intermediate module may not expect the probe or the reply and consume it, or do some other stupid thing with it before it reaches the intended recipient. For instance, if there is a process above the stream head awaiting an ACK which gets consumed by an intermediate module (which just sent a probe downstream), then the process remains stuck and unkillable forever. 2. service routines are dangerous because they can engender out-of-order messages that recipients are not expecting and are unable to process correctly. 3. module insertion and activation are 2 separate steps, because the module init routine must return 0 and thus be marked as inserted before autopush can be done. Therefore an entity external to the module must take care of activation. This is not entirely true, since a timeout thread can be used for that purpose, but this is a hack, not an officially sanctioned technique. Insertion & activation ought to be done in 1 step. 4. it's more difficult to track plumbing of new interfaces on solaris than on linux.


Diversion: the timeout() facility is more sophisticated on linux (init_timer(), add_timer(), etc.) because timeout(9F) on solaris theoretically can't be guaranteed to succeed in out-of-memory situations and there is no return code to indicate failure; this is a critical flaw if resource reclamation happens in the routine to be called on expiry. The second reason has to do with multiprocessor races.


b) Just above L2, linux does it by registering an L3 protocol handler into a linked list of handlers hanging off a protocol table. Alternate methods exist, but you'll get lynched by conceited linux developers if you start doing that (e.g. trap handlers that steal function prologues, a.k.a. hot patching). Hooks in pendable context happen by calling a socket function and obtaining private, unexported pointers from it, which are thereupon replaced.


pros: 1. "protocol" (i.e. state machine) breakdown less likely due to mostly unidirectional processing. 2. large single-chain skb's allow for faster processing. 3. the iptables stateful firewall is somewhat more advanced than solaris ipfilter, but who knows for how long. 4. fine-grained kernel parameter tuning from application sockets, such as "priority".

cons: 1. linux headers are a mess and have evolved in an ad-hoc, haphazard way. 2. everything and the kitchen sink has been accommodated in important structs, making them much too large and wasteful. 3. immature and unfriendly framework for 3rd-party binary drivers; a bunch of zealots brimming over with GPL fervor decide what's exported and what isn't, forcing vendors to resort to all kinds of precarious hacks to get what they want. 4. the "fast" NAPI path falls apart if you happen to maintain a data struct such as a dictionary that is often entered from, say, pendable *and* BH contexts. The frequent local_bh_disable() effect on a busy web server may kill the performance of NAPI (and the whole machine) on the chosen CPU. 5. due to the multi-distro nature of linux, userspace is effectively outside of the kernel developers' control, which forces them to implement everything in the kernel. For example, there is no good reason why the spanning tree protocol should reside in the kernel. The old design principle that the kernel should be as lean as possible still holds true: only the IO hooks need to be in the kernel; the rest of the logic can and should be moved to userspace. Linux obviously has a lot to learn from micro-kernel architectures. 6. there is way too much repetition in the linux network stack. TCP resets are generated in at least 3-4 different places. Why not create just 1 function for resets and use it everywhere?
Since GPL zealots refuse to provide friendly APIs you see too much code like this: look up the route, allocate an skb, set the route, set the skb fields, memcpy(skb_put(...), mydata, len), pass to dev_queue_xmit(). Why repeat code like this in tens of modules? Device drivers for video cards and other devices similarly go through too much repetition of VM code to expose physical memory to user space.

Additional unrelated notes: services as of solaris 10 are more advanced than linux (init.d), and privileges are more advanced than selinux & friends. Memory-allocator debugging on linux is turned off in production environments, making it impossible to track corruption (which the monolithic architecture of linux invites) without compiling a debug kernel, which most admins don't know how to do. On solaris a global variable change is all that's needed to track down memory corruption. Also, on linux crash dump facilities are a joke. Netdump, if you get one at all, is much more difficult to analyze than unix/vmcore pairs on solaris, which have incidentally existed for at least 10 years (SunOS 4.1.x had them). I'd read sparcv9 or x86 assembly and decode memory locations any day rather than try to make sense of a netdump file. Debugging linux driver problems without an ICE is often an exercise in futility. Rather than recognize the inherent instability resulting from its monolithic architecture, linux leadership has insisted that debug facilities are not important. Not surprisingly, most debug tools are not terribly useful, or fail to keep pace with the newest releases...
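The "global variable change" referred to is, as far as I know, the kmem_flags tunable: set it in /etc/system and reboot, and the kernel allocator starts auditing allocations so mdb can pinpoint corruption from a crash dump. A sketch of the fragment (exact flag values worth checking against the Solaris docs for the release in question):

```
* /etc/system fragment: enable kernel memory allocator auditing
* (audit, test, redzone and contents flags combined)
set kmem_flags=0xf
```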

Lastly, let me find a fault or 2 with solaris kernel development. Sometimes you find code like this: int pfil_precheck(queue_t *q, mblk_t **mp, int flags, qif_t *qif), which shows that things have gone terribly wrong somewhere, possibly the result of developers coding around flaws and exhibiting the opposite of teamwork rather than dealing with the root cause. Also, occasionally, solaris kernel developers are hard pressed to explain notions such as a mutex that also keeps interrupts away; by contrast, linux developers can write better code because they are more likely to understand in depth the low-level primitives they use. My email addr is bogus, don't bother emailing me.
