May 30, 2012

A few words about shared memory and files

Even when running in Classic (isolated process) mode, Firebird needs some data to be available to all the running server processes. There are four kinds of information that must be shared:
  • Lock table. It's a vital part of the lock manager, and its primary goal is to synchronize access to various resources that can be read or modified simultaneously.
  • Event table. Every time a posted event is committed, the server needs to find all the subscribers and redirect the event delivery to the processes handling the appropriate user connections.
  • Monitoring snapshot. It keeps the latest known state of all the running worker processes and gets updated whenever a user connection accesses the monitoring tables in a new transaction.
  • Trace configuration. It contains the information the worker processes need to react to the currently active tracing events and log the appropriate notifications.
The shared memory regions are backed by files on disk, i.e. those files are mapped into the address space of the worker processes. The backing files can grow if necessary, causing the shared memory regions to be remapped. The backing files and shared memory regions are created when the first worker process initializes the corresponding subsystem and are then reused by the other worker processes. Once there are no active users left, the backing files are either removed by the last worker process (Firebird 2.5 and above) or left on disk to be reused later (prior to Firebird 2.5). They're also left on disk in the case of a server crash. The initial size of the shared memory regions is user configurable; see the LockMemSize and EventMemSize settings in firebird.conf.
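
As an illustration of the underlying technique, here's a minimal POSIX sketch of a file-backed shared region. The file name and size are hypothetical and this is not Firebird's actual code (which also handles initialization races, growth/remapping and cleanup by the last user), but the key idea is the same: the file is mapped with MAP_SHARED, so every process mapping the same file sees the same bytes.

    #include <cstddef>
    #include <cstdio>
    #include <fcntl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main()
    {
        const size_t regionSize = 1024 * 1024;  // cf. LockMemSize

        // The backing file; every worker process opens the same path.
        const int fd = open("/tmp/fb_lock_table", O_RDWR | O_CREAT, 0660);
        if (fd < 0)
            return 1;

        // Make sure the file is large enough to back the whole region.
        if (ftruncate(fd, regionSize) != 0)
            return 1;

        // MAP_SHARED is the key: changes made by one process are visible
        // to every other process that maps the same file.
        void* region = mmap(NULL, regionSize, PROT_READ | PROT_WRITE,
                            MAP_SHARED, fd, 0);
        if (region == MAP_FAILED)
            return 1;

        std::printf("shared region mapped at %p\n", region);

        munmap(region, regionSize);
        close(fd);
        return 0;
    }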

March 27, 2012

Firebird vs Windows: file-system caching issue

This article is going to shed some light on the facts related to the issue registered as CORE-3791 in the project bug tracker. Let's start with some links.

The origin of the problem is described here:
http://support.microsoft.com/kb/2549369

A lot of additional useful information can be found here (especially in the comments):
http://blogs.msdn.com/b/ntdebugging/archive/2007/11/27/too-much-cache.aspx

In fact, this issue consists of two slightly different but interrelated parts.

The first one is that the file-system cache size is unlimited by default on 64-bit Windows, so its working set can consume all the available RAM, forcing almost everything else to swap and eventually starting to swap itself. The visible symptoms are slow system response and lots of HDD activity. The suggested solution is to limit the file-system cache size with the SetSystemFileCacheSize API function, and Firebird 2.5 supports this solution via the FileSystemCacheSize setting in firebird.conf.
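
For reference, here's a minimal sketch (example sizes only; not Firebird's code) of what capping the cache via this API involves. Note that SetSystemFileCacheSize requires the SeIncreaseQuotaPrivilege to be enabled in the calling process token, and the token APIs need advapi32.lib at link time.

    #include <windows.h>
    #include <cstdio>

    // Enable SeIncreaseQuotaPrivilege, which SetSystemFileCacheSize requires.
    static bool enableQuotaPrivilege()
    {
        HANDLE token;
        if (!OpenProcessToken(GetCurrentProcess(),
                              TOKEN_ADJUST_PRIVILEGES | TOKEN_QUERY, &token))
            return false;

        TOKEN_PRIVILEGES tp = {};
        tp.PrivilegeCount = 1;
        tp.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED;
        if (!LookupPrivilegeValue(NULL, SE_INCREASE_QUOTA_NAME,
                                  &tp.Privileges[0].Luid))
        {
            CloseHandle(token);
            return false;
        }

        const bool ok = AdjustTokenPrivileges(token, FALSE, &tp, 0, NULL, NULL)
            && GetLastError() == ERROR_SUCCESS;
        CloseHandle(token);
        return ok;
    }

    int main()
    {
        if (!enableQuotaPrivilege())
        {
            std::printf("cannot acquire SeIncreaseQuotaPrivilege\n");
            return 1;
        }

        // Example values only: keep the cache between 16 MB and 256 MB,
        // with the upper bound enforced as a hard limit.
        const SIZE_T minCache = 16u * 1024 * 1024;
        const SIZE_T maxCache = 256u * 1024 * 1024;
        if (!SetSystemFileCacheSize(minCache, maxCache,
                                    FILE_CACHE_MAX_HARD_ENABLE))
        {
            std::printf("SetSystemFileCacheSize failed: %lu\n", GetLastError());
            return 1;
        }
        return 0;
    }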

Unfortunately, we have reports that the problem may still occur for some customers. I have done some research, and the findings are the following:

March 15, 2012

Records batching

As explained previously, the fetch response includes multiple resulting records batched together. But how many records should be batched? Sending too few records increases the number of round-trips. Sending too many records causes the client to stall until it can acknowledge the delivery. It's also possible that the client doesn't need that many records and was going to close the cursor after receiving the first one. So some compromise is required here. The minimal practical batch size depends on the protocol buffer size, i.e. on how many records can be cached before sending and then transmitted as a single packet. The buffer size for the TCP protocol is defined by the TcpRemoteBufferSize setting in firebird.conf. However, it often makes sense to send more records (i.e. a few protocol buffers) without waiting for an ACK, because the available CPU power may allow more records to be processed while the network transmits the next batch.

Firebird has its batching logic optimized to transfer between 8 and 16 packets at once. The client library sends this number to the server, waits for the requested number of records to be transmitted, then caches the received records and returns them to the client application. The tricky thing here is that the batch size is expressed in records, and this value is calculated from the batch size in packets and the expanded (unpacked) record length. As soon as the record gets packed or compressed in some way, the calculation becomes wrong and results in fewer packets being sent than expected. Also, the last packet could be sent incomplete. Currently, the only "compression" available is the trimming of VARCHAR values, so the batching could be either effective or somewhat sub-optimal depending on how many VARCHAR values are fetched and how long the actual strings are compared to their declared lengths.
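
A simplified version of that calculation might look as follows. The names and example values are hypothetical (the real client library logic is more involved), but they show why the record count is derived from the unpacked length:

    #include <algorithm>
    #include <cstdio>

    int main()
    {
        // Hypothetical example values:
        const unsigned bufferSize = 8192;     // TcpRemoteBufferSize default
        const unsigned packetsPerBatch = 16;  // upper bound of the 8..16 range
        const unsigned recordLength = 520;    // expanded (unpacked) record size

        // The batch size is requested in records, derived from the unpacked
        // length. If VARCHARs are trimmed on the wire, the actual packets
        // carry more records than assumed, so fewer packets get filled.
        const unsigned batchRecords =
            std::max(1u, packetsPerBatch * bufferSize / recordLength);

        std::printf("requesting %u records per fetch\n", batchRecords);
        return 0;
    }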

March 14, 2012

Firebird roadmap has been updated

The project roadmap has been updated a bit. The change brings the v2.1.5 and v2.5.2 releases forward at the cost of slightly delaying the v3.0 Alpha release.

Firebird 2.1.4 was released exactly one year ago, so now is the promised time for v2.1.5. It has 53 bugs fixed and no critical issues remaining unresolved. Firebird 2.5.1 was released more than 5 months ago, and the expected release date for v2.5.2 is approaching next month. It has 45 issues resolved to date and a few more in the pipeline. So it makes a lot of sense to release them sooner rather than later.

The v3.0 Alpha release will be going through the preparation stage while all three release candidates (v2.0.7, v2.1.5, v2.5.2) are being field tested, so it's likely to appear shortly after the aforementioned releases, in the second quarter.

Thanks for your understanding.

February 7, 2012

Protocol packets and buffers

Before going into the records batching details, let's speak about some protocol details, in particular packets and buffers. Like virtually any other kind of networking software, Firebird is packet-oriented, i.e. it sends and receives logically bound chunks of bytes. So it caches the bytes to be sent in a buffer until the packet is complete or until the buffer is full, and only then transmits the data.
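
As an illustration, here's a bare-bones sketch of that buffering scheme. The names and structure are hypothetical (this is not Firebird's actual wire code); it only demonstrates the rule stated above: accumulate bytes until the logical packet is complete or the buffer fills up, then flush.

    #include <algorithm>
    #include <cstddef>
    #include <cstring>
    #include <vector>

    // Accumulates outgoing bytes and transmits only a full buffer or
    // a completed logical packet.
    class SendBuffer
    {
    public:
        explicit SendBuffer(size_t capacity) : buffer(capacity), used(0) {}

        void put(const void* data, size_t len)
        {
            const char* src = static_cast<const char*>(data);
            while (len)
            {
                const size_t chunk = std::min(len, buffer.size() - used);
                std::memcpy(&buffer[used], src, chunk);
                used += chunk;
                src += chunk;
                len -= chunk;
                if (used == buffer.size())
                    flush();  // buffer full: transmit what we have
            }
        }

        void endPacket()
        {
            flush();          // logical packet complete: transmit
        }

    private:
        void flush()
        {
            if (used)
            {
                // A real implementation would call send() on the socket here.
                used = 0;
            }
        }

        std::vector<char> buffer;
        size_t used;
    };

    int main()
    {
        SendBuffer out(8192);  // cf. the default TCP buffer size below
        const char payload[] = "some packet bytes";
        out.put(payload, sizeof(payload));
        out.endPacket();
        return 0;
    }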

Usually, logical packets are quite small, so the protocol buffer is always large enough to fit them entirely. The rule of thumb here is that the buffer must be no smaller than the MSS (maximum segment size) of the underlying transport. For example, Ethernet has an MTU (maximum transmission unit) of 1500 bytes; subtracting the IPv4 and TCP headers (20 bytes each) leaves an MSS of 1460 bytes, and TCP options (e.g. the 12-byte timestamps option) reduce the usable payload to 1448 bytes. If the protocol buffer is smaller than this value, the network bandwidth will not be utilized completely.
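
As a quick sanity check, that arithmetic can be written down in a couple of lines (C++11 or later; the 12-byte figure assumes options like timestamps are enabled):

    // Compile-time check of the sizes above:
    constexpr int mtu = 1500;                       // Ethernet
    constexpr int ipv4Header = 20, tcpHeader = 20;  // fixed header sizes
    constexpr int tcpOptions = 12;                  // e.g. the timestamps option
    static_assert(mtu - ipv4Header - tcpHeader == 1460, "basic MSS");
    static_assert(1460 - tcpOptions == 1448, "MSS minus TCP options");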

Just for the record, Firebird supports TCP buffer sizes in the range between 1448 (min) and 32768 (max) bytes, the minimum matching the MSS value above. The default setting is 8KB. For NetBEUI (named pipes), the buffer size is hardcoded at 2KB.