How to empty a cube slice faster?


Hi all,

We are currently loading our Board environment with data from newly acquired companies, so we have to load (and reload) significant amounts of data many times.

The process is quite slow to begin with, and most of the processing time is taken by the sub-procedure that empties the cube slice before data is loaded into it again.

Even when we reload only one or two companies (a few thousand lines; the data reader takes 5 seconds to complete), the emptying part can take up to 20 or 30 minutes, because it clears each cube (6 cubes for sales values) and each version (4 to 7 versions per cube).

Narrowing the selection to empty only a small slice does not improve the clearing speed much.

Below is an extract of our DB log:

DC 20180503 17:43 fde FAST_001 DataClear S&GM_Quantity Details [Ver:001] 00h00m36s [0/0]Mb 
DC 20180503 17:44 fde FAST_001 DataClear S&GM_Quantity Details [Ver:002] 00h01m19s [0/0]Mb
DC 20180503 17:44 fde FAST_001 DataClear S&GM_Quantity Reporting [Ver:001] 00h00m36s [0/0]Mb
DC 20180503 17:45 fde FAST_001 DataClear S&GM_Quantity Reporting [Ver:002] 00h01m19s [0/0]Mb
DC 20180503 17:46 fde FAST_001 DataClear S&GM_Amount GC [Ver:001] 00h00m36s [0/0]Mb
DC 20180503 17:47 fde FAST_001 DataClear S&GM_Amount GC [Ver:002] 00h01m49s [0/0]Mb
DC 20180503 17:48 fde FAST_001 DataClear S&GM_Amount GC [Ver:003] 00h02m29s [0/0]Mb
DC 20180503 17:48 fde FAST_001 DataClear S&GM_Amount GC [Ver:004] 00h03m09s [0/0]Mb
DC 20180503 17:49 fde FAST_001 DataClear S&GM_Amount GC Act@N-1 [Ver:001] 00h00m36s [0/0]Mb
DC 20180503 17:50 fde FAST_001 DataClear S&GM_Amount GC Act@N-1 [Ver:002] 00h01m37s [0/0]Mb
DC 20180503 17:51 fde FAST_001 DataClear S&GM_Amount GC Act@N-1 [Ver:003] 00h02m16s [0/0]Mb
DC 20180503 17:51 fde FAST_001 DataClear S&GM_Amount GC Act@N-1 [Ver:004] 00h02m54s [0/0]Mb
DC 20180503 17:52 fde FAST_001 DataClear S&GM_Amount GC Act@Bud [Ver:001] 00h00m35s [0/0]Mb
DC 20180503 17:53 fde FAST_001 DataClear S&GM_Amount GC Act@Bud [Ver:002] 00h01m37s [0/0]Mb
DC 20180503 17:53 fde FAST_001 DataClear S&GM_Amount GC Act@Bud [Ver:003] 00h02m16s [0/0]Mb
DC 20180503 17:54 fde FAST_001 DataClear S&GM_Amount GC Act@Bud [Ver:004] 00h02m54s [0/0]Mb
DC 20180503 17:55 fde FAST_001 DataClear S&GM_Amount RC [Ver:001] 00h00m35s [0/0]Mb
DC 20180503 17:56 fde FAST_001 DataClear S&GM_Amount RC [Ver:002] 00h01m48s [0/0]Mb
DC 20180503 17:57 fde FAST_001 DataClear S&GM_Amount RC [Ver:003] 00h02m29s [0/0]Mb
DC 20180503 17:57 fde FAST_001 DataClear S&GM_Amount RC [Ver:004] 00h03m08s [0/0]Mb
DC 20180503 17:58 fde FAST_001 DataClear S&GM_Amount RC Act@N-1 [Ver:001] 00h00m36s [0/0]Mb
DC 20180503 17:59 fde FAST_001 DataClear S&GM_Amount RC Act@N-1 [Ver:002] 00h01m37s [0/0]Mb
DC 20180503 18:00 fde FAST_001 DataClear S&GM_Amount RC Act@N-1 [Ver:003] 00h02m17s [0/0]Mb
DC 20180503 18:00 fde FAST_001 DataClear S&GM_Amount RC Act@N-1 [Ver:004] 00h02m56s [0/0]Mb
DC 20180503 18:01 fde FAST_001 DataClear S&GM_Amount RC Act@Bud [Ver:001] 00h00m36s [0/0]Mb
DC 20180503 18:02 fde FAST_001 DataClear S&GM_Amount RC Act@Bud [Ver:002] 00h01m38s [0/0]Mb
DC 20180503 18:02 fde FAST_001 DataClear S&GM_Amount RC Act@Bud [Ver:003] 00h02m17s [0/0]Mb
DC 20180503 18:03 fde FAST_001 DataClear S&GM_Amount RC Act@Bud [Ver:004] 00h02m56s [0/0]Mb
DC 20180503 18:04 fde FAST_001 DataClear S&GM_Amount OC [Ver:001] 00h00m35s [0/0]Mb
DC 20180503 18:04 fde FAST_001 DataClear S&GM_Amount OC [Ver:002] 00h01m14s [0/0]Mb
FR 20180503 18:04 fde FAST_001 S&GM Reading Amount GC - RC 00h00m04s FAST_-9904 16416 16416 0 [0/0]Mb

 

Any ideas to improve the process?

I've tried copying 0 into the cube and making an "a=b" dataflow with an empty cube, but the times are the same (not surprisingly). Also, the "replace" option of the data reader is not an option, because we load only one phase (actual or budget) at a time for a given month, and sometimes only one company.

 

Thanks,
Etienne

Answers

  • Domenico Panetta

    Hi Etienne,

     

    You could try the Clear Cube action with the "Use current selection" option.

     

    Make some tests (always after a backup!) and check if it works for you.

     

    Regards

    Domenico

  • Etienne CAUSSE

    Hi Domenico Panetta, unfortunately that's exactly what we are currently doing; sorry for not mentioning that in my post.

  • Domenico Panetta

    I'm sorry... your DB log extract shows exactly this (and I didn't see it).

    Which Board version are you using?

    How big are those cubes?

    Can you share a screenshot of the structure (entities, dense, sparse, versions) of one of the cubes?

     

    Domenico

  • Etienne CAUSSE

    Hi Domenico,

    We are currently using Board 10.1.2.

    Below is a screenshot of the cube structure; except for the quantities cubes, the structures are all the same, since we use them in the same reports most of the time. The physical size is currently 3.2 GB (another series has cubes of around 6 GB and even longer clearing times).

    [screenshot: cube structure]

  • Helmut Heimann

    Hi Etienne,

     

    It looks as if the 128-bit sparsity is causing your problem. Since I assume you've already tried everything to avoid such structures, the usual dataflow will probably not become any faster in the current situation--and you've tried different possibilities to speed things up, with no luck so far.

    I would try to "empty the cube" via a data reader process based on an ASCII file--loading zeroes via replace. I'm not sure if that'll do the trick, but I think it's worth a try.
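
    A minimal PowerShell sketch of generating such a zero-value ASCII file (the column layout Company;Month;Phase;Value, the member codes and the file path are assumptions to be adapted to the actual data reader layout):

        # Hypothetical sketch: build a semicolon-separated ASCII file of zeroes
        # for the slice to be cleared. Column layout and member codes are assumed.
        $companies = @("C001", "C002")          # companies to clear (assumption)
        $months    = @("2018.01")               # months to clear (assumption)
        $phase     = "ACT"                      # phase to clear (assumption)

        $lines = foreach ($c in $companies) {
            foreach ($m in $months) {
                "$c;$m;$phase;0"                # one zero value per combination
            }
        }
        $lines | Set-Content -Path "C:\Board\Dataread\clear_slice.csv" -Encoding ASCII

    Reading this file with a "replace" data reader should then overwrite the corresponding slice with zeroes.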

     

    Kind Regards,

    Helmut

  • Etienne CAUSSE

    Thanks Helmut Heimann. Indeed, I have not tried that yet, because the "replace" mode clears the whole time dimension; however, I have not yet tried executing a "replace" data reader with the "Use current selection" option. I'll test and let you know.

  • Off-topic question regarding the cube dimensions: are the "H" entities (ME concat H, MLA concat H, Material concat H) there only for relationship historization purposes? Or do they have a business meaning which is not covered by the other dimensions?

  • Etienne CAUSSE

    Yes, these "H" entities are the "slow moving hierarchy" keys that we discussed in another topic. They keep track of the time-dependent hierarchies (5 to 10 parents each time) for each of the basic dimensions.

  • Etienne CAUSSE

    Hi Helmut Heimann

    I tested the data reader with zeroes but it's not really faster; I guess the "replace" process uses the same mechanics as the standard dataclear.

    I did try comparing 64-bit sparse with 128-bit sparse cubes, and the clearing seems much faster indeed. I am not sure if this is due to the alignment or the clearing itself, as Antonio Speca mentioned; I have some further tests to do.

  • Unknown

    This seems like a good candidate for parallel processing. Is there a way to split the clearing commands into the components you see in the log? Since this is a dataclear process, it should be fine to run in parallel from a conceptual perspective. If you reuse your dataflow setting a=0 across a selected set, and then copy that procedure a dozen times, you should be able to get them to run in parallel. The maximum degree of parallelism is probably governed by the number of logical CPU cores on your server. In other tools, I use parallelism up to (n-1) cores, so we always leave one core for commands and operating system operations. 

     

    Triggering the multiple threads may be a bit tricky. I suspect you could use the BOARD Procedure Launcher from a batch file to kick them off as separate threads.

     

    Here is some pseudocode to articulate what I'm proposing (a PowerShell sketch follows the list):

    1. Create a BOARD procedure with two steps
      1. Select
      2. Cube Clear OR Dataflow a=0
    2. Copy the procedure for as many CPU cores as available on the server
    3. Adjust each Select step to reference a different surface area
    4. Set up a batch file as follows (PowerShell would work too, with Start-Job syntax)
      1. For each of the dataclear procedures, use this command to call the procedure launcher. This ensures they start in a new process, so the batch file can continue without waiting.
        start "" BoardProcedureLauncher.exe
    5. Run the batch file by double-clicking it or via the Windows scheduler
    6. Watch the Task Manager CPU usage to see whether all cores are busy
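
    For step 4, a minimal PowerShell sketch of the same idea using Start-Job (the launcher path, its command-line arguments and the procedure names are assumptions, not the documented BoardProcedureLauncher syntax):

        # Hypothetical sketch: launch several copies of the clearing procedure in parallel.
        $launcher   = "C:\Board\BoardProcedureLauncher.exe"              # assumed path
        $procedures = @("CLEAR_SLICE_01", "CLEAR_SLICE_02", "CLEAR_SLICE_03")

        $jobs = foreach ($proc in $procedures) {
            # Each Start-Job runs in its own background process, so the clears overlap.
            Start-Job -ScriptBlock {
                param($exe, $name)
                & $exe $name    # assumed: the launcher takes the procedure name as an argument
            } -ArgumentList $launcher, $proc
        }

        # Block until every clear has finished, then show their output.
        $jobs | Wait-Job | Receive-Job

    Waiting on the jobs at the end gives a single point where completion can be checked before the data readers are started.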

     

    Good luck,

  • Looks like I underestimated the 128-bit payload in align cube, good job Helmut Heimann!

    Etienne CAUSSE keep us posted

  • Helmut Heimann

    Thank you, Antonio Speca!

  • Etienne CAUSSE

    Hi Bob,

    Thanks for your input. Indeed, in this case it looks like some parallel execution would be nice; however, since there is no native way to trigger parallel tasks, the method you suggest seems difficult to implement in real life. The main issue I see is that, in our context, this procedure must be triggered by a key user from an administration capsule. How would you maintain a correct interactive flow of events to make sure the user can launch the process and then be informed of the correct result before going on to launch the data readers?

     

    Antonio Speca, any idea if parallel jobs will be a feature of future versions?

  • Etienne CAUSSE

    Hi Antonio Speca, Helmut Heimann

    Below are my latest results. I built a test case with new cubes that allows me to compare the performance of several structures before going live with these cubes.

    Structures are as follows for my 4 cubes:

    1. 64-bit sparse for V1, no additional versions.
    2. 64-bit sparse for V1, 2 additional versions with 64-bit sparse as well.
    3. 128-bit sparse for V1, 2 additional versions with 64-bit sparse.
    4. 128-bit sparse for V1, 128-bit sparse for V2, 64-bit sparse for V3.

     

    The versions are all built on the same dimensions of course.

     

    I imported a full year of data for 1 phase (the "Phase" entity distinguishes Actual / Budget / ReForecast / Restated); the target is ~10 years x phases in these cubes.

    [screenshot: resulting cube sizes]

    Of course, the 64-bit versions use more space because of the dense dimensions, which are significant, so I want to know whether the speed-versus-memory trade-off is worth it before going live.

     

    I made a simple procedure for dataclear using selection, since I concluded that the other ways we discussed to empty the cubes are not significantly faster.

    [screenshot: dataclear procedure]

     

    I launched it for one month and one phase; here are the results:

    DC 20180509 10:24 eca FAST_001_ECA DataClear Reprocessed ROPA GC S64 3 versions [Ver:002] 00h00m01s [100634/155690]Mb
    DC 20180509 10:24 eca FAST_001_ECA DataClear Reprocessed ROPA GC S64 3 versions [Ver:003] 00h00m02s [100634/155665]Mb
    DC 20180509 10:25 eca FAST_001_ECA DataClear Reprocessed ROPA GC V1 S128 V2 S64 V3 S64 [Ver:001] 00h00m39s [100187/156142]Mb
    DC 20180509 10:25 eca FAST_001_ECA DataClear Reprocessed ROPA GC V1 S128 V2 S64 V3 S64 [Ver:002] 00h01m14s [100187/156141]Mb
    DC 20180509 10:26 eca FAST_001_ECA DataClear Reprocessed ROPA GC V1 S128 V2 S64 V3 S64 [Ver:003] 00h01m50s [100187/156130]Mb
    DC 20180509 10:27 eca FAST_001_ECA DataClear Reprocessed ROPA GC V1 S128 V2 S128 V3 S64 [Ver:001] 00h00m39s [100187/156130]Mb
    DC 20180509 10:27 eca FAST_001_ECA DataClear Reprocessed ROPA GC V1 S128 V2 S128 V3 S64 [Ver:002] 00h01m16s [100187/156129]Mb
    DC 20180509 10:28 eca FAST_001_ECA DataClear Reprocessed ROPA GC V1 S128 V2 S128 V3 S64 [Ver:003] 00h01m52s [100187/156127]Mb

     

    Observations:

    • Clearing of the first cube (no additional versions) did not even show up in the log... is that normal?
    • Clearing of the fully 64-bit cube took 2 seconds (I can live with that!)
    • Clearing of the two cubes with a 128-bit V1 took around 110 seconds each, with only a 2-second difference for V2 (which was 64-bit in the first case and 128-bit in the second).

     

    Conclusions:

    The biggest performance factor for the dataclear seems to be the 64-bit vs 128-bit sparsity of version 1 of the cube, as suggested by Helmut Heimann.

    This impacts not only the clearing speed of version 1, but also the clearing and / or alignment of the aggregated versions in the same proportions.

     

    Hope this will be useful.

     

    Cheers,


    Etienne

  • Helmut Heimann

    Hi Etienne CAUSSE,

     

    Thanks for the effort of examining this subject--I think it will be useful to many of us.

    Glad it worked out.

     

    Kind Regards,

    Helmut