Hi All, thank you for this feedback, it is very valuable as we are starting our migration.
We have 9 data models, including some huge ones (>80 GB), so optimization is a big topic for us. Do you have any experience regarding the performance / memory usage / disk space trade-off? In our last upgrade, we added a dimension to our biggest cubes (>5 GB each) and had to switch to a 128-bit sparsity in order to keep the cube size reasonable. We also noticed that keeping too many dense dimensions led to huge increases in disk space usage and memory footprint.
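For anyone wondering why dense dimensions blow up cube size so quickly, here is a rough back-of-envelope sketch. This is purely illustrative and not Board internals: the dimension cardinalities, the 8-byte cell size, the 16-byte key overhead, and the fill rate are all assumptions you would replace with your own figures.

```python
# Rough dense vs. sparse cube sizing comparison (illustrative only).
from math import prod

dim_sizes = [2000, 500, 120, 36, 12]   # assumed dimension cardinalities
bytes_per_cell = 8                     # assumed size of one stored value
fill_rate = 0.001                      # assumed fraction of populated combinations

# Dense storage materializes every combination of every dimension member.
dense_cells = prod(dim_sizes)
dense_gb = dense_cells * bytes_per_cell / 1024**3

# Sparse storage only materializes populated combinations, plus some
# per-cell overhead for the coordinate keys (assumed 16 bytes here).
sparse_cells = int(dense_cells * fill_rate)
sparse_gb = sparse_cells * (bytes_per_cell + 16) / 1024**3

print(f"dense:  {dense_cells:,} cells ~ {dense_gb:,.1f} GB")
print(f"sparse: {sparse_cells:,} cells ~ {sparse_gb:,.1f} GB")
```

With these assumed numbers the dense layout comes out around a few hundred GB while the sparse one stays close to 1 GB, which is roughly the kind of gap we observed when too many dimensions were kept dense.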
@Bettina Clausen, @Dominik Borchert, any experience with redesigning sparsities in V12 vs V10?
Thanks,
Etienne
-------------------------------------------
In the v12.5 Spring Release, we noticed that setting the max item number to auto had an adverse effect on many existing dataflows. These were simple dataflows (e.g. c=a*b, adding a single dimension into the target cube). In fact, to get the dataflows to perform in the same time as in v10, we had to decrease the max item numbers significantly from their original v10 settings. I'm not sure whether anyone else has seen this in the latest v12, but it has at least cautioned me against setting all max item numbers to auto without major regression testing.
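For the regression testing part, a simple comparison of execution times before and after the change already catches most of these cases. A minimal sketch below, assuming you export dataflow durations (e.g. from the database log) to two CSV files with columns "dataflow" and "seconds"; the file names, column names, and 25% threshold are hypothetical and should be adapted to whatever your log export actually looks like.

```python
# Flag dataflows that got noticeably slower between two timing exports.
import csv

def load_times(path):
    with open(path, newline="") as f:
        return {row["dataflow"]: float(row["seconds"]) for row in csv.DictReader(f)}

before = load_times("v10_timings.csv")        # hypothetical export from v10
after = load_times("v12_auto_timings.csv")    # hypothetical export after the change

THRESHOLD = 1.25  # flag anything more than 25% slower (arbitrary cut-off)

for name, old in sorted(before.items()):
    new = after.get(name)
    if new is not None and new > old * THRESHOLD:
        print(f"{name}: {old:.1f}s -> {new:.1f}s (+{(new / old - 1) * 100:.0f}%)")
```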
Hi Audrey,
I would not generally set all max item numbers to “auto”. As a best practice, I set huge entities that grow fast to “auto”, so I don't need to adjust them each year. Please note that many other factors have an impact on dataflow performance: tuple extension? Big sparsities? Time functions? Selections? etc.
I suggest diving deeper into the analysis of the individual dataflows. In your example you mention that you are adding a dimension to the target cube (c=a*b). In an optimized setup this should run as a “join” dataflow type (check the database log). A conceptual sketch of what that means is below.
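The following is only a conceptual illustration in pandas, not Board's engine: the point is that a join-style execution computes a*b once per combination that actually exists in the source cubes and then carries the result over the added dimension, instead of iterating over the full cross-product of the target cube. The dimension names and sample values are made up.

```python
# Conceptual sketch: c = a * b where the target cube has one extra dimension.
import pandas as pd

a = pd.DataFrame({"entity": ["E1", "E1", "E2"],
                  "month": ["Jan", "Feb", "Jan"],
                  "a": [100.0, 120.0, 80.0]})
b = pd.DataFrame({"entity": ["E1", "E1", "E2"],
                  "month": ["Jan", "Feb", "Jan"],
                  "b": [0.2, 0.25, 0.3]})
scenarios = pd.DataFrame({"scenario": ["Budget", "Forecast"]})  # the added dimension

# Join on the shared dimensions only, then extend over the new dimension.
c = (a.merge(b, on=["entity", "month"])   # join-type step: existing combinations only
       .merge(scenarios, how="cross"))    # extend the result over the added dimension
c["c"] = c["a"] * c["b"]
print(c[["entity", "month", "scenario", "c"]])
```

If the database log shows a different execution mode than a join for such a simple flow, that is usually the first place to look before touching the max item numbers.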
Kind regards,
Bettina