Before going into the details of row batching, let's talk about some protocol details, in particular packets and buffers. Like virtually any other networking-enabled software, Firebird is packet oriented, i.e. it sends and receives logically bounded chunks of bytes. So it caches the bytes to be sent in a buffer until the packet is complete or until the buffer is full, and only then transmits the data.
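As a rough illustration of that buffering behaviour, here is a minimal sketch in C (hypothetical names, not Firebird's actual code): bytes are appended to a fixed-size buffer, and the socket is written to only when a logical packet is complete or the buffer has no room left.

    /* A minimal sketch of sender-side protocol buffering (hypothetical names,
       not Firebird's actual code). Bytes are cached in a fixed-size buffer and
       written to the socket only when a logical packet is complete or the
       buffer has no room left. */
    #include <string.h>
    #include <sys/types.h>
    #include <sys/socket.h>

    #define PROTO_BUF_SIZE 8192          /* the 8KB default mentioned below */

    struct proto_buffer {
        int    sock;                     /* connected TCP socket */
        size_t used;                     /* bytes currently cached */
        char   data[PROTO_BUF_SIZE];
    };

    /* Push everything cached so far onto the wire. */
    static int proto_flush(struct proto_buffer *pb)
    {
        size_t off = 0;
        while (off < pb->used) {
            ssize_t n = send(pb->sock, pb->data + off, pb->used - off, 0);
            if (n < 0)
                return -1;               /* real code would retry on EINTR etc. */
            off += (size_t) n;
        }
        pb->used = 0;
        return 0;
    }

    /* Append a piece of a logical packet; flush first if it would not fit.
       For simplicity, chunks are assumed to never exceed PROTO_BUF_SIZE. */
    static int proto_put(struct proto_buffer *pb, const void *chunk, size_t len)
    {
        if (pb->used + len > PROTO_BUF_SIZE && proto_flush(pb) < 0)
            return -1;
        memcpy(pb->data + pb->used, chunk, len);
        pb->used += len;
        return 0;
    }

When a logical packet is finished (or a reply is expected), the caller flushes explicitly; that is the moment the cached bytes actually hit the network.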
Usually, logical packets are quite small, so the protocol buffer is always large enough to fit one entirely. The rule of thumb here is that it must be no smaller than the MSS (maximum segment size) of the underlying transport. For example, Ethernet has an MTU (maximum transmission unit) of 1500 bytes, which after the IPv4 and TCP headers (and the usual TCP options) leaves roughly 1448 bytes of payload per segment, as broken down below. If the protocol buffer is smaller than this value, the network bandwidth will not be utilized completely.
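One common way of arriving at that figure (the 12-byte item is the widely used TCP timestamps option; other option combinations give slightly different numbers):

      1500 bytes   Ethernet MTU
    -   20 bytes   IPv4 header
    -   20 bytes   TCP header              (1460 bytes = the classic TCP MSS)
    -   12 bytes   TCP timestamps option
      ----------
      1448 bytes   of payload per TCP segment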
Just for the record, Firebird supports TCP buffer sizes in the range between 1448 (min) and 32768 (max) bytes; the default is 8KB. For NetBEUI (named pipes), the buffer size is hardcoded to 2KB.
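For reference, this is the TcpRemoteBufferSize setting in firebird.conf, shown here with its default value:

    # firebird.conf (excerpt)
    # TCP protocol buffer size, in bytes (allowed range: 1448..32768)
    TcpRemoteBufferSize = 8192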
On the other hand, some packets can be quite large, and the fetch response is a good example. Also, multiple logical packets can be sent together within a single transport layer transmission. So a larger protocol buffer sounds like a good idea at first glance. And, provided that we don't need a reply immediately, it also becomes possible to replace a few application layer round-trips with a single one. But we already know that any packet longer than the MSS will be split into multiple smaller packets. Actually, this is not as bad as it sounds, because they can be transmitted more or less asynchronously and reassembled on the other side, i.e. the performance should be limited mostly by the bandwidth (which is commonly quite good nowadays) rather than the latency (which can be bad for global networks). However, TCP requires regular acknowledgements from the other side to make sure that everything is delivered properly. This is necessary to avoid resending the whole [possibly long] message in the case of networking issues (lost frames, etc). So every few outgoing TCP packets get ACKed with a small reply packet. This introduces another level of round-trips that depends on the network only and cannot be tuned at the application layer (*). This leads us to the conclusion that a larger protocol buffer on the sender's side helps reduce the number of application layer round-trips, but it's unlikely to significantly affect the number of transport layer round-trips, so the performance impact would be minimal.
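To put rough numbers on this: with the default 8KB buffer and about 1448 payload bytes per Ethernet segment, flushing one full protocol buffer in a single send still turns into 8192 / 1448 ≈ 6 TCP segments on the wire, and the peer acknowledges them at whatever rate its TCP stack chooses, regardless of how the application batched its writes.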
So far we have been discussing the sender side; what about the receiver side? The protocol buffer size does not matter much there, because the received bytes are already cached at the transport layer (the TCP/IP stack driver) anyway, so a bigger protocol buffer just means fewer OS calls, which is not that important compared to network delays. However, there is one situation where the client side protocol buffer size matters a lot. It will be mentioned in the next blog post, so stay tuned.
(*) A good reading on the subject is here: http://smallvoid.com/article/tcpip-rwin-size.html.
This article, as well as many others, suggests that the intensity of ACKs (i.e. how many received bytes get ACKed) depends exclusively on the RWIN (TCP window size) setting on the receiving side, and that it can be altered both at the system level (in the registry) and at the application level. My box has the registry limit set to 256KB. For a remote Internet connection, the network dynamically selects an RWIN of 8KB (at least that's what getsockopt(SO_RCVBUF) reports). But ACKs are sent for every two TCP segments received from the remote host. Trying to raise RWIN to 32KB with setsockopt(SO_RCVBUF) succeeds and the new value is reported as 32KB, but ACKs are still sent for every two TCP segments. I suppose this is related to the Delayed ACK algorithm, as described here:
http://www.stuartcheshire.org/papers/NagleDelayedAck/.
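A minimal sketch of such a check in C (not the exact code used for the experiment above; error handling is omitted and the socket is assumed to be already connected):

    /* Query the current receive buffer (RWIN hint), try to raise it to 32KB,
       and read it back. Minimal sketch only: error handling is omitted and
       "sock" is assumed to be an already connected TCP socket. The char*
       casts keep it compatible with the Winsock prototypes as well. */
    #include <stdio.h>
    #include <sys/socket.h>

    static void probe_rcvbuf(int sock)
    {
        int rwin = 0;
        socklen_t len = sizeof(rwin);

        getsockopt(sock, SOL_SOCKET, SO_RCVBUF, (char *) &rwin, &len);
        printf("SO_RCVBUF before: %d bytes\n", rwin);

        rwin = 32 * 1024;            /* ask for a 32KB receive buffer */
        setsockopt(sock, SOL_SOCKET, SO_RCVBUF, (const char *) &rwin, sizeof(rwin));

        len = sizeof(rwin);
        getsockopt(sock, SOL_SOCKET, SO_RCVBUF, (char *) &rwin, &len);
        printf("SO_RCVBUF after:  %d bytes\n", rwin);
    }

Even when the second getsockopt() reports 32KB, a packet capture still shows an ACK for roughly every two incoming segments, which is consistent with the Delayed ACK behaviour described in the paper above.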
Comment: Wasn't NetBEUI dropped from Windows XP (http://support.microsoft.com/kb/306059)? I guess it can be removed in Firebird 3.x.
Reply: Firebird actually uses Named Pipes, which are part of the Windows API and thus unlikely to ever be dropped. They could be built on top of NetBEUI or natively over TCP; that's just an implementation detail. So in fact this doesn't change anything for Firebird.