As most of you already know, NexusDB has a very fast engine at its heart and performs very well out of the box. Its flexibility and tuning options still leave room for substantial improvement, though. This short article shows the impact of different settings on write access to your data, especially on bulk insert speed.

 

The basic insert loop

 

Let's start with the typical piece of code to insert records into a table.

 

nxTable1.EmptyTable;
nxTable1.Open;
StartTime;
for I := 1 to 100000 do    // insert 100,000 records
begin
  nxTable1.Append;
  nxTable1.FindField('FileName').AsString := 'TestFileName';
  nxTable1.FindField('Directory').AsString := 'TestDirectory';
  nxTable1.FindField('FType').AsString := 'TestType';
  nxTable1.FindField('FSize').AsInteger := 1234567890;
  nxTable1.FindField('FDate').AsDateTime := Now;
  nxTable1.FindField('Attributes').AsInteger := 654321;
  nxTable1.FindField('ParentID').AsInteger := -1;
  nxTable1.Post;
  if (I mod 10) = 0 then
    SetCounter(I);    // update the UI progress display
end;
SetCounter(100000);
PrintTime('Base');
nxTable1.Close;

 

The methods StartTime, SetCounter and PrintTime are just simple helpers that drive the UI progress display and measure the time taken; they are of no further interest here. As you can see, all we do is insert 100,000 records into a table. We first look at how this performs with an embedded server engine; remote servers are covered later on.
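One general Delphi note before we start tuning the engine itself: the loop above calls FindField for all seven fields on every iteration, which means 700,000 name lookups in total. Caching the TField references outside the loop (the TField type comes from the DB unit) avoids that cost. All timings below were measured with the loop exactly as shown above, so treat this as an extra, unmeasured win:

var
  FileNameField, DirectoryField, FTypeField: TField;
  FSizeField, FDateField, AttributesField, ParentIDField: TField;
  I: Integer;
begin
  // Resolve each field once instead of seven name lookups per record
  FileNameField   := nxTable1.FindField('FileName');
  DirectoryField  := nxTable1.FindField('Directory');
  FTypeField      := nxTable1.FindField('FType');
  FSizeField      := nxTable1.FindField('FSize');
  FDateField      := nxTable1.FindField('FDate');
  AttributesField := nxTable1.FindField('Attributes');
  ParentIDField   := nxTable1.FindField('ParentID');
  for I := 1 to 100000 do
  begin
    nxTable1.Append;
    FileNameField.AsString := 'TestFileName';
    DirectoryField.AsString := 'TestDirectory';
    FTypeField.AsString := 'TestType';
    FSizeField.AsInteger := 1234567890;
    FDateField.AsDateTime := Now;
    AttributesField.AsInteger := 654321;
    ParentIDField.AsInteger := -1;
    nxTable1.Post;
  end;
end;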

 

Let's take a look at the structure of the table. The record size is rather large, mainly because of a 1024 byte string field.

 

The table also has indices defined for all fields except the Directory field.
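For reference, here is a sketch of how a table along those lines could be created in code. Only the 1024 byte string field is stated in the article (we assume it is the Directory field); the other field sizes, the index names and the CreateTable call are illustrative assumptions, and the exact API may differ between NexusDB versions:

// Assumed layout; only the 1024 byte string field is given in the article
nxTable1.FieldDefs.Clear;
nxTable1.FieldDefs.Add('FileName', ftString, 64, False);
nxTable1.FieldDefs.Add('Directory', ftString, 1024, False);  // dominates the record size
nxTable1.FieldDefs.Add('FType', ftString, 16, False);
nxTable1.FieldDefs.Add('FSize', ftInteger, 0, False);
nxTable1.FieldDefs.Add('FDate', ftDateTime, 0, False);
nxTable1.FieldDefs.Add('Attributes', ftInteger, 0, False);
nxTable1.FieldDefs.Add('ParentID', ftInteger, 0, False);
nxTable1.IndexDefs.Clear;
// indices on every field except Directory
nxTable1.IndexDefs.Add('idxFileName', 'FileName', []);
nxTable1.IndexDefs.Add('idxFType', 'FType', []);
nxTable1.IndexDefs.Add('idxFSize', 'FSize', []);
nxTable1.IndexDefs.Add('idxFDate', 'FDate', []);
nxTable1.IndexDefs.Add('idxAttributes', 'Attributes', []);
nxTable1.IndexDefs.Add('idxParentID', 'ParentID', []);
nxTable1.CreateTable;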

 

Running this loop on a current machine (Athlon 2800, 1 GB RAM, fast 7200 rpm disk system) takes about 61 seconds, giving roughly 1640 records/sec. In other words, NexusDB inserts one record per 0.6 milliseconds. Each insert includes writing and safely flushing the data to disk, updating the different indices and, again, writing and safely flushing the index blocks to disk as well. Not bad, is it? But can we do better? Sure we can!

 

Let's look at transactions:

 

Transactions

 

A transaction essentially wraps a number of calls into one logical block and executes them as ONE unit. For our example we just need to make some small changes to the code to use transactions explicitly:

 

nxTable1.EmptyTable;
nxTable1.Open;
StartTime;
nxTable1.Database.StartTransaction;
for I := 1 to 100000 do    // insert 100,000 records
begin
  nxTable1.Append;
  nxTable1.FindField('FileName').AsString := 'TestFileName';
  nxTable1.FindField('Directory').AsString := 'TestDirectory';
  nxTable1.FindField('FType').AsString := 'TestType';
  nxTable1.FindField('FSize').AsInteger := 1234567890;
  nxTable1.FindField('FDate').AsDateTime := Now;
  nxTable1.FindField('Attributes').AsInteger := 654321;
  nxTable1.FindField('ParentID').AsInteger := -1;
  nxTable1.Post;
  if (I mod aSize) = 0 then
  begin
    // commit the pending inserts as one unit and open a new transaction
    nxTable1.Database.Commit;
    nxTable1.Database.StartTransaction;
    SetCounter(I);
  end;
end;
nxTable1.Database.Commit;    // commit the remaining inserts
SetCounter(100000);
PrintTime('Transaction (' + IntToStr(aSize) + '):');
nxTable1.Close;

 

Note the aSize variable. It defines the transaction size (here in number of loop iterations). Using, for example, aSize=1000 means that the server caches 1000 insert operations in memory and then writes ALL of them to disk at once. This drastically reduces write and flush accesses to the much slower file system. For this to work efficiently, the server must have enough memory to handle the selected transaction size. (Also, when you design your system for multiple users, keep in mind that a number of transactions may be active at any single point in time, which affects the overall memory requirements.)
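For brevity the loop omits error handling. In real code you would normally guard the open transaction so that a failure rolls it back instead of leaving it dangling; a minimal sketch, assuming TnxDatabase exposes the usual InTransaction/Rollback members:

nxTable1.Database.StartTransaction;
try
  // ... Append/Post loop with periodic Commit + StartTransaction, as above ...
  nxTable1.Database.Commit;
except
  if nxTable1.Database.InTransaction then
    nxTable1.Database.Rollback;  // undo the partially filled transaction
  raise;                         // let the caller see the original error
end;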

 

Running our example again with different transaction sizes yields the following results:

 

Loops per transaction   Time
1000                    11.5 seconds
10000                   9.4 seconds
20000                   9.7 seconds
100000                  10.1 seconds

 

Now that is quite a lot faster, isn't it? In the fastest case we are now inserting around 10640 records/sec, an average of 0.09 milliseconds per record.

 

Please note that a larger transaction size does not always mean faster execution. As the block organization complexity rises, the total speed falls again. So keep in mind that you might have to experiment a bit to find the optimal size for your table(s) within an application.
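If you want to automate that experiment, a trivial driver can simply re-run the loop for a set of candidate sizes. InsertWithTransaction below is a hypothetical wrapper around the transaction loop shown above, taking aSize as its parameter:

procedure RunTransactionSizeTests;
const
  CandidateSizes: array[0..3] of Integer = (1000, 10000, 20000, 100000);
var
  i: Integer;
begin
  // each run empties the table, inserts 100,000 records and prints its timing
  for i := Low(CandidateSizes) to High(CandidateSizes) do
    InsertWithTransaction(CandidateSizes[i]);
end;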

 

All this raises the question: can we squeeze out any further speed? The answer is "Yes"!

 

Batch mode

 

NexusDB has a batch insert mode that wraps a number of Posts into one single operation, combining the record data into one so-called BatchPost command.

 

To use this we just need to make some slight changes to our original loop:

 

nxTable1.EmptyTable;
nxTable1.Open;
StartTime;
// aSize is given in MByte here; BeginBatchAppend expects bytes
nxTable1.BeginBatchAppend(Trunc(aSize * 1024 * 1024));
for I := 1 to 100000 do    // insert 100,000 records
begin
  nxTable1.Append;
  nxTable1.FindField('FileName').AsString := 'TestFileName';
  nxTable1.FindField('Directory').AsString := 'TestDirectory';
  nxTable1.FindField('FType').AsString := 'TestType';
  nxTable1.FindField('FSize').AsInteger := 1234567890;
  nxTable1.FindField('FDate').AsDateTime := Now;
  nxTable1.FindField('Attributes').AsInteger := 654321;
  nxTable1.FindField('ParentID').AsInteger := -1;
  nxTable1.Post;
  if (I mod 10000) = 0 then
    SetCounter(I);
end;
SetCounter(100000);
nxTable1.EndBatchAppend;
PrintTime('Batch (' + FloatToStr(aSize) + 'M)');
nxTable1.Close;

 

Again note the aSize variable. This time it defines the size of the batch buffer in MByte (NexusDB expects the value in bytes, which is why we multiply by 1024*1024). Each Post just caches the record data; once the buffer reaches the given aSize, the changes are posted and written/flushed to disk in one go. This reduces write and flush accesses to the much slower file system. It is essential that the server has enough memory to hold these buffers: if the server needs to swap memory to disk, the effect reverses and makes your operation slower instead of faster!
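One practical point the loop above glosses over: EndBatchAppend also sends whatever is still sitting in the buffer (the last, partial batch), so it should be executed even when an exception interrupts the loop. A minimal sketch:

nxTable1.BeginBatchAppend(Trunc(aSize * 1024 * 1024));
try
  // ... Append/Post loop as above ...
finally
  nxTable1.EndBatchAppend;  // flushes the remaining buffered records
end;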

 

Running our example again with different batch sizes gives us these results:

 

Batch size   Time
512 kByte    11.45 seconds
1 MByte      11.25 seconds
10 MByte     9.2 seconds
15 MByte     8.7 seconds
20 MByte     9.3 seconds

 

This time we even reach 11494 records/second, an average of 0.087 milliseconds per record.

 

Again, note that bigger is not necessarily better. With increasing batch size the internal management complexity rises, so the timings get worse beyond an optimal batch size.

 

Conclusion

 

Our original loop took 61 seconds. By tweaking the code we were able to reduce that to 8.7 seconds, a sevenfold speedup. Many real-world bulk operations touch millions of records, and cutting the time for these to 1/7th, or less, is very appealing indeed.

 

Looking at the final table size (including indexes) of 201,105,408 bytes, we are talking about 22.6 MByte/second raw disk write speed. Using Passmark's PerformanceTest, the same partition reaches 33.2 MByte/second in the Sequential Write test. In other words: in batch mode NexusDB performs at about 2/3 of the raw disk speed. We think that's a very good speed :)

 

Please note that the speed of write operations rises and falls with the speed of your I/O subsystem. Running the same tests on a machine with triple-striped 15k rpm hard disks took 4.3 seconds for the 15 MByte batch (about 45 MByte/sec raw speed). Passmark's PerformanceTest returned 68 MByte/sec for this setup.

 

Considerations about remote servers

 

The above tests were all run with an embedded server. But how does all of this apply to remote servers?

 

On networks the main performance killer is network speed, and especially latency. Let's take a brief look at this now.

 

The most common network speed nowadays is 100 MBit. On the machines used for the tests, a "ping NexusDBServer -l 1276" (a ping to the server machine with 1276 byte packets, equal to the record size) takes an average of 8 ms. Now let's look a bit closer at what the VCL does on Append/Post. It first positions at the end of the table, so we send a message to the server and wait for a reply. Then we post the data to the server and again wait for an answer. In total we pay the network latency 4 times per loop iteration. Together this gives a best-case total latency of 4 * 100,000 * 8 ms = 3,200,000 ms, or 3,200 seconds. In plain words: our original base loop will perform completely unacceptably even at optimally achievable network speeds.

 

Well, that's under optimal circumstances. In real life the loop took about 15,700 seconds, or roughly 260 minutes. Not a very good state of affairs.

 

Let's wrap this into transactions and see what happens.

 

Loops per transaction   Time
1000                    603 seconds
10000                   604 seconds
20000                   602 seconds
100000                  634 seconds

 

Still not really fast. But then, transactions only slightly reduce the message count and do nothing about the amount of data transferred.

 

So now let's look at batch operations. Using batch mode means that we massively reduce the number of messages. For example, a 1 MByte batch size reduces the number of messages sent to the server from 200,000 to about 130 (roughly 128 MByte of data transferred in total); with 10 MByte it is roughly 15 messages. (Please note that the VCL does not position the cursor at the end before each Post when using batch mode!)
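As a back-of-the-envelope check of those message counts (the 128 MByte payload figure comes from the text; the handful of extra messages on top of the data batches is presumably control traffic):

function BatchMessageCount(TotalBytes, BatchBytes: Int64): Int64;
begin
  // ceiling division: number of batches needed to move TotalBytes
  Result := (TotalBytes + BatchBytes - 1) div BatchBytes;
end;

// 128 MByte in 1 MByte batches -> 128, close to the ~130 observed above
Writeln(BatchMessageCount(128 * 1024 * 1024, 1024 * 1024));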

 

Batch size   Time
512 kByte    73 seconds
1 MByte      65 seconds
5 MByte      65 seconds
10 MByte     64 seconds

 

It looks like the limiting factor here is no longer latency, but the need to transfer about 128 MByte of data over the network. NexusDB has built-in message compression, so let's take a look at that. We add nxllZipCompressor to our uses clause and set the CompressionType property of the transport to 9 (highest zip compression). In this case (optimal, because every record contains the same data) the amount of data transferred is reduced from 128 MByte to 400 KByte.
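In code that amounts to something like the following; the transport component name below is our own example, so use whichever transport component your client actually connects through:

uses
  nxllZipCompressor;  // linking this unit makes the zip compressor available

// ...
// 0 = no compression ... 9 = highest zip compression
nxTransport1.CompressionType := 9;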

 

Batch size   Time
512 kByte    71 seconds
1 MByte      40 seconds
5 MByte      31 seconds
10 MByte     41 seconds

 

OK, that's much better, isn't it? We've cut the time from 15,700 seconds to 31 seconds, which is more than 500 times faster.

 

The limiting factor is now the CPU time spent compressing and decompressing the data. Mind you, we are looking at a 100 MBit network here. The slower the network, the more important it becomes to reduce the message count and transfer volume.

 

As an example, let's look at a real-world 802.11b network. Same loop and same server machine as above, but this time we use the wireless network for the test. We really didn't want to run the base and transaction loops, as this would have taken far too long.

 

Here are the results without compression. Again, the limiting factor is not just latency but also the throughput of the network: transferring 128 MByte simply takes time.

 

Batch size   Time
512 kByte    280 seconds
1 MByte      248 seconds
5 MByte      243 seconds
10 MByte     256 seconds
20 MByte     261 seconds

 

And here are the (excellent) results with maximum zip compression.

 

Batch size   Time
512 kByte    72 seconds
1 MByte      56 seconds
5 MByte      41 seconds
10 MByte     40 seconds
20 MByte     48 seconds

 

We've now seen a lot of numbers and achieved some impressive improvements.

 

You can imagine that much of what has been said applies in a similar way to read operations. Keep these things in mind when designing an application that needs to do bulk inserts or lots of operations over a network (especially slow networks like the internet), and you should be able to build a superbly performing solution and keep your clients happy.

 

Don't just use the standard out-of-the-box methods; use NexusDB's flexibility and built-in features to fine-tune and improve the performance of your applications! Develop like the NexusDB professionals do!
