Memory tables in NexusDB are, to be blunt, nothing more than normal tables that are RAM resident.  A Nexus memory table has no limitations compared with a disk-based Nexus table.  Consequently, you can treat your memory tables with exactly the same code, and in exactly the same way, as your normal tables.  This example shows you how to create and populate memory tables quickly and easily.

Create a new project with a single form as shown in the screen snapshot below.  Connect up the NexusDB components as per Section 6 above.  Be aware that we have hardcoded the "uses nx1xAllEngines" registration rather than using the Tnx1xAllEngines component.
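The hardcoded registration amounts to a single entry in the uses clause; a minimal sketch (the nxdb unit name is assumed from a standard NexusDB install, so verify it against your version):

```pascal
uses
  nxdb,            { TnxTable, TnxDatabase, TnxSession, ... }
  nx1xAllEngines;  { pulling this unit in registers all v1 engines, so no
                     Tnx1xAllEngines component is needed on the form }
```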




Connect up the following code to the appropriate button click events:

Ancillary Code:


procedure TMainformMemTablesDialog.nxLoadTables;
begin
  { body reconstructed: refresh the list of tables on the server into the
    list box; GetTableNames fills a TStrings with all table names }
  ListBox1.Items.BeginUpdate;
  try
    ListBox1.Items.Clear;
    nxDatabase1.GetTableNames(ListBox1.Items);
  finally
    ListBox1.Items.EndUpdate;
  end;
end;
procedure TMainformMemTablesDialog.FormCreate(Sender: TObject);
begin
  { body reconstructed: open the connection and show the available tables }
  nxDatabase1.Open;
  nxLoadTables;
end;

The top button copies a normal table into a memory table:

procedure TMainformMemTablesDialog.Button1Click(Sender: TObject);
begin
  if not nxTable1.Active then
    ShowMessage('No table is open')
  else if nxTable1.TableName[1] = '<' then
    ShowMessage('This is already a memory table')
  else
    with nxTable2 do begin
      { body reconstructed: take the structure of the open table, rename it
        with angle braces to mark it as in-memory, create and fill it }
      Close;
      FieldDefs.Assign(nxTable1.FieldDefs);
      IndexDefs.Assign(nxTable1.IndexDefs);
      TableName := '<' + nxTable1.TableName + '>';
      CreateTable;
      Open;
      { CopyRecords copies all records across in one operation; check the
        exact parameter list in your NexusDB version's help }
      CopyRecords(nxTable1, True);
      nxLoadTables;
    end;
end;

The bottom button displays data from the selected table in the TDBGrid:


procedure TMainformMemTablesDialog.Button2Click(Sender: TObject);
begin
  if ListBox1.ItemIndex < 0 then
    ShowMessage('No Table Selected')
  else begin
    { body reconstructed: point the grid's table at the selected name;
      the TDBGrid is connected to nxTable1 via a TDataSource }
    nxTable1.Close;
    nxTable1.TableName := ListBox1.Items[ListBox1.ItemIndex];
    nxTable1.Open;
  end;
end;

This example has shown you, very quickly, how to create and populate a memory table from an existing Nexus table.  This is the simplest way to do it: associate a TnxTable component with an existing Nexus table, rename the table with angle braces (< and >), and then create the table.  The CopyRecords method of the table then copies all the records across in one operation.


For those who use the excellent kbmMemTable and want an in-memory structure with database-like facilities: a memory table in NexusDB is basically the same thing, with the added advantages that a NexusDB in-memory table can be used by multiple threads in the same way as a normal NexusDB table, and that it can use all the capabilities of the SQL engine. There is also the advantage that, as required, NexusDB can swap blocks of the tables out into temporary storage if you put more information in than fits within the MaxRAM setting; kbmMemTable has to depend on the normal Windows page file, which is not always an optimal solution.


Is a memory table simply a way of caching a normal Nexus table?


More the other way around: a "normal" table is technically the same as an in-memory table with the following additional functionality: blocks are read from disk as needed; when a transaction commits, the modified blocks are written to disk; and if MaxRAM is reached, blocks can simply be discarded (and later read in again) instead of being written out to temporary storage. These are the only real differences between an in-memory table and a persisted table.
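Because the API is identical, creating and using an in-memory table looks exactly like working with a disk table. A minimal sketch, assuming a connected TnxDatabase named nxDatabase1 and the angle-brace naming used in the example above (the field layout is purely illustrative):

```pascal
nxTable2.Database  := nxDatabase1;
nxTable2.TableName := '<Scratch>';   { angle braces mark the table as in-memory }
nxTable2.FieldDefs.Clear;
nxTable2.FieldDefs.Add('ID', ftInteger, 0, True);
nxTable2.FieldDefs.Add('Name', ftString, 40, False);
nxTable2.CreateTable;
nxTable2.Open;   { from here on, use it exactly like any other Nexus table }
```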


Does NexusDB aggressively cache?


So long as your data fits into the available memory...


That's the huge advantage a proper pure C/S implementation (the server opens files exclusively, and locks are handled with custom structures inside the server instead of OS locks on shared files) has over file-sharing databases, whether direct file-sharing or file-sharing with a remoting layer on top, which some people then call C/S.


The server can always be sure its cached data is consistent. All changes to the files go through the same layer (the BufferManager in NexusDB), which is responsible for caching. As long as the server has more memory available than the size of the table files, every block will only be read once and stays in the buffer manager from then on. The BufferManager sits far below the point in the design where different users/sessions are distinguished; in a multi-user environment all sessions share the same buffer manager, and a block needs to be read only the first time a user accesses it. In a file-sharing design, if one session writes to a table, all other sessions need to read that changed block in again (actually, they have to read the blocks in all the time, because it's unknown whether the block in question has changed or not).


Difference between a NexusDB in-memory table and a normal table


The only difference between a NexusDB in-memory table and a normal table is that in-memory tables don't have a file assigned and the buffer manager never writes changes for them out to disk. For all intents and purposes, once a normal table is completely present in the buffer manager, and as long as only read access takes place, it will perform exactly like an in-memory table.


How is a table hosted by an embedded server very similar to an in-memory server scenario?


The engine never accesses files directly in any way. Instead, it uses a function that looks like this:



type
  TnxBaseTransaction = class;
  TnxBaseFile = class;

  TnxBlockNumber = Cardinal;
  TnxBlock = array [TnxWord16] of TnxByte8;
  PnxBlock = ^TnxBlock;
  TnxReleaseMethod = procedure(var aBlock : PnxBlock) of object;

function GetBlock(aTrans         : TnxBaseTransaction;
                  aFile          : TnxBaseFile;
                  aBlockNumber   : TnxBlockNumber;
                  aMarkDirty     : Boolean;
              var aReleaseMethod : TnxReleaseMethod)
                                 : PnxBlock;


and the way that function is used looks roughly like this:

var
  Block         : PnxBlock;
  ReleaseMethod : TnxReleaseMethod;
begin
  Block := GetBlock(aTrans, aFile, 1234, True, ReleaseMethod);
  try
    { use Block to read/write that specific block }
  finally
    ReleaseMethod(Block);
  end;
end;




Now take a quick look at the following diagram:

[Image: NexusDB buffer manager block diagram]


Pay special attention to "read only block", "modified block" and "snapshot block".


If GetBlock is called with:

- aTrans = nil, then the returned block is the "read only block" (the newest committed version of the block).
- aTrans being a snapshot transaction, then the returned block is either the "read only block" or a "snapshot block", if the block has been changed by the commit of another transaction that modified it since the snapshot transaction was started.
- aTrans being a normal transaction, then the returned block will be the newest modified block (or the read only block if the block hasn't been modified in this transaction yet). If aMarkDirty is True, and there either is no modified block yet or the newest modified block has a lower level than the current transaction, then a new modified block is created, the contents of the prior block (newest modified or read only) are copied into it, and the new modified block is returned.

When a transaction is rolled back, all modified blocks for the current transaction level are simply discarded.

When a nested transaction is committed, all the modified blocks for the current transaction level are added to the next lower transaction level, replacing existing modified blocks if required.

When a non-nested transaction is committed, all the read only blocks are replaced by the modified blocks; if there are any snapshot transactions active, the read only blocks are moved into snapshot blocks as required instead of being discarded. If the "file" is actually backed by a physical file on disk, all the modified blocks are now written to disk, if requested (failsafe transactions) using a two-phase commit system.

When a snapshot transaction ends, all snapshot blocks that are no longer required are discarded.

The buffer manager keeps track of all these blocks: the time they were last accessed, whether they are currently "in use" (meaning GetBlock has been called but the ReleaseMethod hasn't been yet), and how much memory they require. Whenever the buffer manager needs to allocate an additional buffer (e.g. because a block that isn't yet present in memory has been requested, a new modified block needs to be created, a new snapshot block needs to be created...), it checks whether all current blocks together exceed the MaxRAM setting. If yes, it starts checking the oldest blocks to see if their memory can be freed up for other use, until enough memory has been freed to stay below MaxRAM:

- The block can't currently be "in use".
- Read only blocks can simply be discarded, because they can be read from disk again. *If* the "file" for this block isn't backed by a real physical file (i.e. an in-memory table), the block has to be written to temporary storage instead.
- All other blocks are written into temporary storage.
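The eviction rules above can be sketched as follows. This is an illustrative reconstruction, not the actual NexusDB source; every class, method, and property name in it is hypothetical:

```pascal
procedure TSketchBufferManager.FreeMemoryBelowMaxRAM;
var
  Block : TSketchBlock;
begin
  { walk the blocks from oldest to newest until enough memory is freed }
  Block := OldestBlock;
  while (TotalBlockMemory > MaxRAM) and Assigned(Block) do begin
    if not Block.InUse then
      if Block.IsReadOnly and Block.HasBackingFile then
        Discard(Block)                   { can simply be re-read from disk }
      else
        WriteToTemporaryStorage(Block);  { in-memory or modified blocks }
    Block := NextOldest(Block);
  end;
end;
```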


When a block is requested, the buffer manager reads it from disk or temporary storage, as required, if it isn't already present in memory.


The *only* difference between a persisted table and an in-memory table is that an in-memory table is not backed by a physical file on disk. That means:

When committing a non-nested transaction:

- Persisted tables have to write the modified blocks into their physical file before returning.
- In-memory tables simply do nothing.

When MaxRAM is reached and the oldest blocks are used to free up memory:

- Persisted tables can just discard the read only blocks, as they are identical to the version on disk.
- In-memory tables have to write the blocks into temporary storage.


Given your description of what you want to do, I don't see much point at all in using in-memory tables. As long as your MaxRAM setting is larger than all the tables together, sooner or later they will be cached in memory completely. If they are larger than MaxRAM, the most often used blocks will be cached in memory, with blocks being read from disk as required and the oldest blocks simply discarded. With in-memory tables, everything that doesn't fit into memory would need to be written into temporary storage. You can force the data into memory by just opening a TnxTable and doing a "while not Eof do Next;". After that, all the data blocks and the blocks for the current index should be in memory (again, assuming MaxRAM is larger than the table).
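The warm-up trick just described can be wrapped in a small helper; a minimal sketch using only standard TnxTable navigation methods:

```pascal
procedure WarmCache(aTable: TnxTable);
begin
  { visiting every record pulls each data block, and the blocks of the
    current index, into the buffer manager }
  aTable.Open;
  aTable.First;
  while not aTable.Eof do
    aTable.Next;
end;
```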


