Answer by Raffzahn for Which computer system defined the IPv4 576 byte datagram limit

Which computer system defined the IPv4 576 byte datagram limit

The question seems to be asked under the impression that this is a maximum limit defined by some machine that needed to take part. It is not.

It is a minimum requirement: any hardware/software wanting to participate must be able to handle a packet of at least this size. This establishes a useful minimum packet size that can be transferred across all stations involved without prior negotiation.

The number itself isn't defined by some hardware, but by a balancing of values that makes sense. Looking closely, 576 decimal is in binary a number with only two bits set: 2^9 and 2^6, i.e. 512 + 64. Doesn't that look quite like a useful data block plus some generic header block?
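
Just to make that bit pattern concrete - a trivial check in C, nothing more than the arithmetic above:

    #include <stdio.h>

    int main(void) {
        /* 576 has exactly two bits set: bit 9 (= 512) and bit 6 (= 64) */
        unsigned min_datagram = (1u << 9) | (1u << 6);
        printf("%u\n", min_datagram);   /* prints 576 */
        return 0;
    }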

When coming up with a data size for transmission blocks, one has to balance several requirements:

  1. Maximum payload should be reasonably large to keep fragmentation low
  2. Header size should be large enough to allow expansion
  3. Both (*1) need to allow variable size
  4. Dedicated buffer space (in adapters/small nodes) is small
  5. Memory management at large nodes should be fast
  6. Memory management at large nodes should not be wasteful.

While for #1 disk blocks are a good indication and 512 bytes a middle-of-the-road approach (*2), the real antagonists are

  • variable payload size vs.
  • max payload size vs.
  • header size

in terms of memory management in systems that handle multiple blocks at a time. Variable block size is a must to increase line utilization with short blocks. This means neither a linked list nor a fixed-size list would be great, as the first quickly runs into fragmentation, while the second is extremely wasteful - keep in mind, systems back then did not have many megabytes of memory to waste.

In the end, only a memory management with fixed-size blocks will do: blocks small enough to keep fragmentation low, but big enough to keep the number of blocks low. A size that nicely fits the maximum block while not wasting much on smaller ones.

Which is exactly what these two sizes, 512+64, give. Using 64 bytes as the allocation size:

  • The maximum payload will fit into 8 of these blocks.
  • The maximum header will fit into 1 of them.
  • Fragmentation is kept in check.
  • Waste is kept in check.

A memory management handling each message as a single block will see block sizes of 1..9 chunks, creating 9 types, giving acceptable fragmentation with a good chance of little need for defragmentation. At the same time, waste is limited to a maximum of 63 bytes per message.
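
As a small sketch of that chunk arithmetic (the names are mine, purely illustrative, not from any particular implementation):

    #include <stdio.h>

    #define CHUNK 64   /* allocation unit: 64 bytes */

    /* number of 64-byte chunks needed for a message of 'len' bytes */
    static unsigned chunks_needed(unsigned len) {
        return (len + CHUNK - 1) / CHUNK;   /* ceiling division: 1..9 for 1..576 */
    }

    int main(void) {
        unsigned lens[] = { 576, 512, 100, 1 };
        for (unsigned i = 0; i < 4; i++) {
            unsigned chunks = chunks_needed(lens[i]);
            unsigned waste  = chunks * CHUNK - lens[i];
            printf("%3u bytes -> %u chunks, %2u bytes wasted\n",
                   lens[i], chunks, waste);
        }
        return 0;
    }

For any length from 1 to 576 bytes this yields between 1 and 9 chunks, and the slack per message can never exceed 63 bytes.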

A memory management handling them as two lists (header and payload kept separately) will have one fewer fragmentation size, resulting in even better utilization (*3), but with a trade-off of up to 2x63 bytes of waste (*4).

Now, since there are only 9 different message sizes (in terms of 64-byte chunks), a memory management could use 9 (or 8) free lists, one for each of these sizes, resulting in lightning-fast memory management.
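
A minimal sketch of what such a scheme could look like (names and structure are assumptions of mine, not taken from any actual stack of the period): one free list per size class of 1..9 chunks, so both allocation and release are a single pointer operation once the size class is known.

    #include <stddef.h>
    #include <stdlib.h>

    #define CHUNK      64     /* allocation unit in bytes                */
    #define MAX_CHUNKS 9      /* 576 / 64: chunks of a full-size message */

    /* a free buffer stores the link to the next free buffer of its size class */
    struct buf { struct buf *next; };

    static struct buf *freelist[MAX_CHUNKS + 1];   /* index = size in chunks, 1..9 */

    /* hand out a buffer large enough for a message of 'len' bytes (1..576) */
    static void *msg_alloc(size_t len) {
        size_t sz = (len + CHUNK - 1) / CHUNK;     /* size class: 1..9 chunks */
        struct buf *b = freelist[sz];
        if (b) {                                   /* reuse: pop from this class's list */
            freelist[sz] = b->next;
            return b;
        }
        return malloc(sz * CHUNK);                 /* grow the pool on demand */
    }

    /* take back a buffer that held 'len' bytes: push it onto its class's list */
    static void msg_free(void *p, size_t len) {
        size_t sz = (len + CHUNK - 1) / CHUNK;
        struct buf *b = p;
        b->next = freelist[sz];
        freelist[sz] = b;
    }

    int main(void) {
        void *hdr = msg_alloc(40);    /* e.g. a 40-byte header -> 1 chunk  */
        void *pay = msg_alloc(512);   /* a full payload        -> 8 chunks */
        msg_free(pay, 512);
        msg_free(hdr, 40);
        return 0;
    }

Both paths are O(1), with no searching and no coalescing - which is what makes such a scheme so fast. A real implementation would of course carve the lists out of a pre-allocated pool rather than falling back to malloc().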


Long story short, it's all about a sensible minimum message length allowing useful transmission while keeping memory management in check.


*1 - Well, or at least payload

*2 - Common disk block sizes at that time were 128, 256, 512, 1024 and 2048 bytes.

*3 - Well, a binary-tree approach with block sizes of 63, 128, 255 and 512 would reduce that even more.

*4 - A bit more complex, as a header is for sure larger than 1 byte, but still a larger waste.

