Is loading a Database in Memory taking too long? How to troubleshoot and fix

Document created by dmarocco Employee on Mar 15, 2018

Board uses a database technology named HBMP (Hybrid Bitwise Memory Patterns). HBMP relies on internal algorithms for managing multidimensional data that make much heavier use of RAM. Combined with large databases, this can significantly increase the time needed to load a Database in Memory, turning a simple action like a Service restart into a lengthy task.


The first step is to understand where the time goes by looking into the file "c:\Board\Dataset\Log\HBMP_dbname_inMemory.log", which reports the Memory Absorption and the InMem Loading Time for every Object of a specific Database.
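To find the slowest objects quickly, you can sort the log entries by loading time instead of scanning the file by eye. The sketch below is a minimal example; the `sample_log` layout and the `Memory=`/`LoadTime=` field names are assumptions for illustration, since the exact format of the HBMP log may differ between Board versions.

```python
import re

# Hypothetical sample of the HBMP log layout; the real file at
# c:\Board\Dataset\Log\HBMP_dbname_inMemory.log may use a different format,
# so adapt the regular expression to what you actually see in it.
sample_log = """\
Cube Sales;Memory=512 MB;LoadTime=42.5 s
Entity Customer;Memory=8 MB;LoadTime=0.3 s
Cube Inventory;Memory=1024 MB;LoadTime=95.0 s
"""

def slowest_objects(log_text, top=3):
    """Return (object name, seconds) pairs sorted by loading time, slowest first."""
    rows = []
    for line in log_text.splitlines():
        m = re.match(r"(.+?);Memory=.*?;LoadTime=([\d.]+) s", line)
        if m:
            rows.append((m.group(1), float(m.group(2))))
    return sorted(rows, key=lambda r: r[1], reverse=True)[:top]

for name, seconds in slowest_objects(sample_log):
    print(f"{name}: {seconds} s")
```

In practice you would replace `sample_log` with the contents of the real log file and inspect the top entries: those are the objects worth tuning first.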




It is likely that most of the time goes into loading Info-Cubes and Sparse structures.

If this is the case, a simple first step is to switch the BoardServer setting to "Hybrid".




Then open the database, or a Layout working on that database, and record how long it takes. In Hybrid mode, when the Database is opened only the Entities and Relationships are loaded into RAM; other Objects are loaded on demand. If the opening time improves, note by how much, then make additional tuning: set to "inRAM" the cubes that end-users access most frequently, so that those Info-Cubes perform as if fully in-memory.



Note that the inRAM setting applies at version level, not at Info-Cube level, so you can be more granular. If you want to save some RAM (and loading time), then even for a frequently used cube you should set to inRAM only the versions that are very granular or large.



For example, you can sort the *.BM5 files of the C:\Board\Database\database_name folder by size and leave anything below 200 MB off the inRAM list: a version that small will still perform reasonably well even on disk (roughly 20 to 25% slower, but when the absolute numbers are small the loss is negligible).
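The size check above can be scripted rather than done by hand in Explorer. This is a small sketch that lists the *.BM5 version files at or above the 200 MB threshold, largest first, as candidates for inRAM; the folder path follows the example above, and the idea that each *.BM5 file corresponds to a cube version is taken from the text.

```python
from pathlib import Path

# Folder from the example above; adjust database_name to your database.
DB_FOLDER = Path(r"C:\Board\Database\database_name")
THRESHOLD = 200 * 1024 * 1024  # 200 MB, the cut-off suggested in the article

def inram_candidates(folder, threshold=THRESHOLD):
    """Return (path, size) pairs for *.BM5 files >= threshold, largest first."""
    folder = Path(folder)
    if not folder.is_dir():
        return []
    sized = [(f, f.stat().st_size) for f in folder.glob("*.BM5")]
    big = [(f, s) for f, s in sized if s >= threshold]
    return sorted(big, key=lambda pair: pair[1], reverse=True)

for path, size in inram_candidates(DB_FOLDER):
    print(f"{path.name}: {size / 1024 / 1024:.0f} MB -> consider inRAM")
```

Versions that do not appear in the output can safely stay on disk: by the estimate above, the absolute slowdown for files under 200 MB is small.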