Buffers
Whenever a network adapter receives a frame of data off the network, there
must be an available memory buffer to store the contents of the frame; if no
buffer is available, the received data is discarded, or lost. A lost
transmission usually results in one or more delayed retransmissions of the
same data from the originating machine. This is especially true when a
reliable transport protocol (such as TCP) is carrying the data that was lost.
The TCP/IP protocol stack requires the LSL to provide and manage receive
buffers on its behalf; the MCP/IPX protocol stack supplies its own buffers
independently of the LSL services.
Specifying 32 buffers provides near-certain assurance that a single
transmission from each of 30 workstations will get through to the server
even if all 30 transmissions are attempted at the same time (i.e., within the
same fraction of a second). This is a worst-case scenario, but it can happen;
and when it does, this configuration will keep overall performance at its
theoretical best (as seen from a workstation's point of view).
Can the number of buffers be reduced? Yes. While we do not have an exact
statistical analysis, it is probable that this number could be reduced by 10
(i.e., to a value of 22) on a 30-user system with little, if any, degradation
in performance. The only motivation for reducing the buffer count would be to
free DOS memory that might be needed for other purposes.
Can 30 users access the server if there are only 20 receive buffers? Yes. As
mentioned above, the worst-case scenario is not a frequent occurrence. Usually
there will be more than enough buffers to handle the incoming frames, and these
buffers are typically processed and emptied in time spans measured in
milliseconds (thousandths of a second).
It should also be understood that these buffers are not assigned to receive
any particular workstation's data; under peak conditions the workstation
requests will simply compete for the available buffers, and all requests will
eventually get through. Statistically, the competition will spread any delays
among all of the competing workstations, introducing only a minor loss of
performance unless the buffer shortage is quite severe. Therefore, for best
results, do not reduce the number of buffers unless DOS memory optimization
constraints force you to.
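The statistical claim above can be illustrated with a small Monte Carlo
sketch. The model is an assumption of ours, not part of the original
configuration: each workstation independently issues a request within one
buffer-drain interval with some probability, and a frame is lost only when
more requests arrive than there are free buffers.

```python
import random

def overflow_probability(n_users, n_buffers, p_active=0.2, trials=100_000):
    """Estimate how often simultaneous requests exceed the buffer pool.

    p_active is an assumed per-interval activity rate (hypothetical,
    chosen for illustration).  An overflow occurs only when the number
    of arrivals in one drain interval exceeds the buffer count.
    """
    overflows = 0
    for _ in range(trials):
        # Count how many of the n_users transmit in this interval.
        arrivals = sum(random.random() < p_active for _ in range(n_users))
        if arrivals > n_buffers:
            overflows += 1
    return overflows / trials

# With 32 buffers, even all 30 stations at once cannot overflow the pool:
print(overflow_probability(30, 32))  # -> 0.0
# With 20 buffers, overflow is possible under heavy load, but rare:
print(overflow_probability(30, 20, p_active=0.5))
```

Under this toy model the 32-buffer configuration can never drop a frame from
30 workstations, while a 20-buffer pool drops frames only in the rare
intervals when more than 20 stations transmit simultaneously.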
The performance relevance of the receive buffer count really only comes into
play when several workstations are making many requests in rapid succession,
and that only happens when a large number of users attempt to load predefined
pages at approximately the same time.
If you find you must reduce the number of buffers, try to retain at least as many buffers as the maximum number of concurrent users. A couple of spares
won't hurt, either.
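The sizing rule above reduces to simple arithmetic; a minimal sketch (the
function name is ours, but the rule and the numbers come from the text):

```python
def recommended_buffers(max_concurrent_users, spares=2):
    """Rule of thumb: at least one receive buffer per concurrent user,
    plus a couple of spares."""
    return max_concurrent_users + spares

# For the 30-user system discussed in the text:
print(recommended_buffers(30))  # -> 32, the configured value
```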
Why are the buffers specified to be only 638 bytes? In short, because that is
as small as LSL.COM will allow. A typical workstation request is less than
1/10th of the buffer size. To pre-allocate larger buffers would waste precious DOS
memory. Larger requests are found only in the context of various database
maintenance activities, such as importing historical data or modifying historical
data after a stock-split. Provisions are in place to ensure that all frames
transmitted by workstations or other AspeNet utilities fit within the smaller
buffer size.
The default buffer size used by LSL.COM is 1500 bytes. Why not just leave
it at that? You could under certain low-demand conditions; but, except for a
small advantage in the infrequent case of database maintenance, most of that
buffer space would be wasted, even if there were only a few active users. The
need to provide more buffers, and hence more consistently high performance, to
the maximum number of concurrent users is of greater overall importance; and,
as will shortly be seen, the memory saved by the smaller buffers is needed
elsewhere by the LSL.
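The savings can be put in concrete terms. The buffer count and both sizes
come from the text; the arithmetic is straightforward:

```python
BUFFER_COUNT = 32
SMALL_BUFFER = 638     # bytes; the minimum LSL.COM allows
DEFAULT_BUFFER = 1500  # bytes; the LSL.COM default

small_total = BUFFER_COUNT * SMALL_BUFFER      # 20,416 bytes
default_total = BUFFER_COUNT * DEFAULT_BUFFER  # 48,000 bytes
saved = default_total - small_total            # 27,584 bytes

print(f"638-byte pool:    {small_total:,} bytes")
print(f"1500-byte pool:   {default_total:,} bytes")
print(f"DOS memory saved: {saved:,} bytes (~{saved // 1024} KB)")
```

Roughly 27 KB of conventional DOS memory is recovered by the smaller buffer
size, while still keeping one buffer per workstation plus spares.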