Hi everyone,
We noticed a few months back, while analyzing the performance of our data loading processes, that the information available in the logs was not always accurate.
Yesterday I noticed a strange behavior: in the DB log, the "Elapsed" time reported is roughly half of what it should be.
Action Code | Date | Time | UserName | DbName | Operation-Title | D.Flow Mode | Target | Elapsed | File | RecordNr | Validated | Rejected | RAM Status | ErrCode |
FR | 20180129 | 19:15 | eca | FAST_001 | Standard Prices | | | 00h18m43s | FAST_-9541 | 938043 | 938043 | 0 | [0/0]Mb | |
FR | 20180129 | 19:58 | eca | FAST_001 | Standard Prices - Detailed Costs | | | 00h21m13s | FAST_-9540 | 823410 | 823410 | 0 | [0/0]Mb | |
FR | 20180129 | 20:06 | eca | FAST_001 | Standard Prices - Calculation Dates | | | 00h03m27s | FAST_-9533 | 397962 | 397344 | 618 | [0/0]Mb | |
The procedure was launched around 18:40 and the first entry is logged at 19:15, so I know the first data reader actually took around 35 minutes to load, which is confirmed by the data reader screen:
[screenshot of the data reader screen]
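To make the discrepancy concrete, here is a quick Python sketch (assuming the pipe-delimited log format above and the approximate 18:40 launch time) that compares each logged "Elapsed" value with the wall-clock time between consecutive log entries; the ratio comes out close to 2 for every data reader:

```python
from datetime import datetime, timedelta

# DB log lines copied from above (pipe-delimited).
LOG_LINES = [
    "FR | 20180129 | 19:15 | eca | FAST_001 | Standard Prices | | | 00h18m43s | FAST_-9541 | 938043 | 938043 | 0 | [0/0]Mb | |",
    "FR | 20180129 | 19:58 | eca | FAST_001 | Standard Prices - Detailed Costs | | | 00h21m13s | FAST_-9540 | 823410 | 823410 | 0 | [0/0]Mb | |",
    "FR | 20180129 | 20:06 | eca | FAST_001 | Standard Prices - Calculation Dates | | | 00h03m27s | FAST_-9533 | 397962 | 397344 | 618 | [0/0]Mb | |",
]
LAUNCH_TIME = datetime(2018, 1, 29, 18, 40)  # approximate start of the procedure

def parse_entry(line):
    """Return (title, end timestamp, logged elapsed) for one log line."""
    cols = [c.strip() for c in line.split("|")]
    end = datetime.strptime(cols[1] + " " + cols[2], "%Y%m%d %H:%M")
    hours, rest = cols[8].split("h")
    minutes, seconds = rest.rstrip("s").split("m")
    elapsed = timedelta(hours=int(hours), minutes=int(minutes), seconds=int(seconds))
    return cols[5], end, elapsed

previous_end = LAUNCH_TIME
for line in LOG_LINES:
    title, end, elapsed = parse_entry(line)
    wall_clock = end - previous_end  # time between consecutive log entries
    print(f"{title}: logged {elapsed}, wall clock ~{wall_clock}, "
          f"ratio {wall_clock / elapsed:.2f}")
    previous_end = end
```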
These data readers use an SAP connector. When I look in the connector log, I see that each "extractor" has been launched twice. I suspect this is due to the "replace" option, which needs to scan the whole cube to obtain the time entities used (which means loading 1M lines just to find out that only 2018 is present, instead of simply reading the time field!). That double run would also explain the timings: the logged "Elapsed" seems to count only one of the two extractions, hence roughly half the real duration.
So, a few questions:
- Does anybody have the same issue?
- How can we obtain the correct timings in the DB logs?
- Is the "replace" option a good choice in this case, or should I first clear part of the cube before loading in normal mode? Do you have any suggestions for making sure the procedure clears exactly what is needed (not more and not less) before loading? A sketch of the kind of approach I have in mind follows below.
Thanks!
Etienne