Experiences upgrading from Board 10.x to Board 12.x

Unknown
edited February 2022 in Platform
Hi Everyone,

I'm relatively new to Board and, as luck would have it, my first project is to upgrade one of our clients from version 10.6 to version 12.2, with more clients to follow for the remainder of the year. I've read the instructions for upgrading and regression testing, but I'm interested in hearing from people who have been through this process. I ask because the client I'm upgrading has a very large database, and this has caused memory issues on the new server when importing the cubes extracted from the previous version; I suspect we need to add more memory (the new server currently has 16 GB).

Any other tips or experiences you can share would also be greatly appreciated.

------------------------------
Eric Rizk
Business Intelligence Analyst
LIGHTARC PTY LTD
Australia
------------------------------

Answers

  • Bettina Clausen (Employee)
    edited February 2022
    Hi Eric,

    We recently migrated a customer from B10.5 to B12.1. The database used to be 10 GB, but thanks to some optimizations we managed to reduce it to 4 GB.

    In our case, when we initially created the data model in B10, we had to set the biggest entity (~300k elements) as a dense dimension in the cubes; otherwise we would have created a 128-bit sparsity, which is not recommended when planning in these cubes. With the migration to B12, however, we were able to include this dimension in the sparsity as well (the threshold is higher in B12). Moreover, we could set the max item number of this entity to "auto", which is then calculated automatically as the maximum number of elements that still keeps us in a 64-bit sparsity. Of course, we needed to adjust the sparsity of each cube that had this dimension, which took quite some effort.
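    To make the effect of the higher threshold tangible, here is a back-of-the-envelope sketch (plain Python, not Board functionality); the other dimension sizes are invented, only the ~300k entity and the B10/B12 limits I describe further down in this thread are real:

      # Illustrative check only, not Board functionality. The other dimension sizes
      # are invented; the thresholds are ~10^15 sparse combinations per sparsity in
      # B10 versus 2^64 in B12.
      other_sparse_dims = 10_000 * 2_000 * 365   # hypothetical existing sparse dimensions
      big_entity = 300_000                       # the ~300k-element entity

      combinations = other_sparse_dims * big_entity
      print(f"{combinations:.2e} possible sparse combinations")
      print("fits 64-bit sparsity in B10:", combinations <= 10**15)   # False -> 128-bit needed
      print("fits 64-bit sparsity in B12:", combinations <= 2**64)    # True  -> stays 64-bit
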
    In some procedures, we decided to use virtual cubes instead of regular cubes as temporary cubes. The advantage is that they are "destroyed" once the procedure finishes and are not stored in the database.
    All in all, with these few optimizations we managed to reduce the whole currency calculation process from ~30 minutes to only ~4 minutes.
    In the end, I think our biggest gain was making use of the bigger sparsity threshold.

    Of course, this experience cannot serve as a template for all migration projects, and there are probably more things one could do, but it might give you some inspiration for making use of the B12 enhancements.

    Please note that there are many other things to consider in the migration process (extend tuples? did I use sparsity beforehand to limit the result? extract cubes/trees? is a migration of the frontend necessary? ...).

    Kind regards
    Bettina

    ------------------------------
    Bettina Clausen
    Consultant
    Board Community
    Switzerland
    ------------------------------
  • Robert-Jan van Kuppeveld (Active Partner)
    edited May 2022
    Hi Eric,

    We have helped various Board users with the upgrade from 10.x to 12.x, and for this we have created the Board 12 competence center.

    So please feel free to contact us if you want to know more about how we can help you.

    Best regards,

    Robert-Jan van Kuppeveld

    ------------------------------
    Robert-Jan van Kuppeveld
    Partner
    Planpulse B.V.
    Netherlands
    ------------------------------
  • Dominik Borchert (Customer)
    edited February 2022
    Hi Eric,

    We migrated from 11.x to 12.1 and had some bigger issues with data flows.

    - The "extend calculation on new tuples"-functionality works a bit different as in 11 (or 10). The older versions seemed to "forgive" mistakes here - we needed to set many more extensions as before to make things work again.
    - We also had some issues with temporary cubes that worked in 11 and not anymore in 12(.1) - but I guess this is solved in 12.2
    - And the biggest issue: the extract cubes / trees functionality created suddendly files with a different order. This structure then didn't hit the expected structure for the belonging data reader (we do this export / import thing quite often for master data to fix missing relations).
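    For what it's worth, one way to catch this before the data readers run is to compare the header of each extracted file with the column order the data reader expects. The file name, delimiter and expected column list in this sketch are placeholders, not anything Board generates for you:

      import csv

      # File name, delimiter and expected column list are placeholders to adapt.
      expected = ["Customer", "Product", "Depot", "Amount"]   # order the data reader assumes

      with open("cube_extract.csv", newline="", encoding="utf-8") as f:
          actual = next(csv.reader(f, delimiter=";"))         # header row of the extract

      if actual != expected:
          print("column order changed")
          print("  expected:", expected)
          print("  actual:  ", actual)
      else:
          print("column order unchanged")
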

    As always, you need to do a full regression test and really test every data flow.

    Best regards

    Dominik



    ------------------------------
    Dominik Borchert
    ------------------------------
  • Unknown
    edited February 2022
    Hi Bettina,

    The B12 enhancements certainly sound inspiring. Thank you for sharing that.

    I found the points you shared in your last paragraph to be of particular interest. If there is anything else you can add to that list, I would be most grateful.

    kind regards,


    ------------------------------
    Eric Rizk
    Business Intelligence Analyst
    LIGHTARC PTY LTD
    Australia
    ------------------------------
  • Unknown
    edited February 2022
    Hi Dominik,

    It is becoming apparent that a few things will need to be fixed along the way and that, yes, the regression testing will have to be quite extensive.

    Thank you for sharing that.

    ------------------------------
    Eric Rizk
    Business Intelligence Analyst
    LIGHTARC PTY LTD
    Australia
    ------------------------------
  • Unknown
    edited February 2022
    Hi Robert,

    Yes, I am definitely interested in finding out what you can offer with regard to this process.

    kind regards,

    ------------------------------
    Eric Rizk
    Business Intelligence Analyst
    LIGHTARC PTY LTD
    Australia
    ------------------------------
  • Bettina Clausen (Employee)
    edited February 2022
    Hi Eric,
    • Dataflows (check every single dataflow!):
      • Do I need to extend the tuples because a dimension is added, or because I write a constant or an entity straight into the cube?
      • Do I have an if-statement in my dataflow that might or might not need an extension? E.g. the algorithm looks like d=if(c=0,b,a): depending on the outcome you might need to extend the tuples, because b or a does not cover a target dimension (see the sketch after this list).
      • Did I previously rely on sparsity to write the result only into the relevant combinations rather than all possibilities? Do I need to limit the tuples now?
    • Extracts
      • Cube: the order of the columns can now be defined. Do I need to adjust all extract steps so the order is exactly the same as before?
      • Tree: the order now follows a different rule. You cannot adjust it manually, so you need to adjust the interface of the extracted file (e.g. another data reader in Board, a SQL task, etc.).
    • CPSX vs BCPS
      • Was the frontend implemented in the Windows client?
      • If yes, are functionalities used that are not (yet) available in Board Web (Trellis, Cockpit, ...)?
      • Do I need to adjust the screen size (lock size vs. fit-to-width, etc.)?
    • General limitations B10 vs B12
      • MXC cubes are no longer available; aggregation functions in the frontend are available instead
      • No expressions available
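    To illustrate the if-statement point, here is a small toy model (plain Python, not Board syntax, and not how Board evaluates dataflows internally). It only mimics the idea that the calculation by default runs on combinations that already exist, and that a cube missing one of the target's dimensions (here, b has no Month) may only reach all target cells once the calculation is extended to new tuples:

      from itertools import product

      months = ["Jan", "Feb"]
      products = ["P1", "P2"]

      a = {("Jan", "P1"): 10.0}                      # cube a: Month x Product
      b = {("P1",): 5.0, ("P2",): 7.0}               # cube b: Product only (no Month)
      c = {("Jan", "P1"): 0.0, ("Feb", "P2"): 0.0}   # cube c: Month x Product

      def calc(domain):
          """d = if(c = 0, b, a), evaluated on a given set of (month, product) tuples."""
          d = {}
          for m, p in domain:
              c_val = c.get((m, p), 0.0)
              d[(m, p)] = b.get((p,), 0.0) if c_val == 0 else a.get((m, p), 0.0)
          return d

      # Default: calculate only on tuples that already exist in the Month x Product cubes.
      default_domain = set(a) | set(c)
      # "Extend on new tuples": every Month x Product combination, so cells that only
      # b would feed (e.g. ("Feb", "P1")) get written as well.
      extended_domain = set(product(months, products))

      print(calc(default_domain))
      print(calc(extended_domain))
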
    Kind regards
    Bettina

    ------------------------------
    Bettina Clausen
    Consultant
    Board Community
    Switzerland
    ------------------------------
  • Unknown
    edited February 2022
    Hi Bettina,

    Thank you so much for that. Very much appreciated.

    Last but not least, I was wondering how long it took you to do the regression testing and complete the fixes. I know this varies greatly depending on the size of the database, the number of screens and so on, but I'm currently trying to work out a suitable estimate for this project.

    kind regards,

    ------------------------------
    Eric Rizk
    Business Intelligence Analyst
    LIGHTARC PTY LTD
    Australia
    ------------------------------
  • Robert-Jan van Kuppeveld (Active Partner)
    edited February 2022
    Hi Eric,

    Thank you for your message and please feel free to contact me directly so we can talk about this.

    My contact details are:
    Robert-Jan van Kuppeveld
    robert-jan.vankuppeveld@planpulse.eu

    Best regards,

    RJ

    ------------------------------
    Robert-Jan van Kuppeveld
    Partner
    Planpulse B.V.
    Netherlands
    ------------------------------
  • Bettina Clausen (Employee)
    edited February 2022
    Hi Eric,

    I think it makes sense to write down beforehand how many dataflows, extracts, screens etc. you have and then estimate them with a factor (e.g. 5 minutes of adjustment per dataflow, 15 minutes per extract, ...). Of course, each function must be tested extensively (more effort to consider, depending on the application). Then I'd include some buffer for optional optimizations (max item numbers, new sparsity definitions, virtual vs. real cubes, ...) or for other issues that were not taken into account. Plus, I recommend having someone on the migration team who knows the database and the application very well and knows how it should work.
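    As a back-of-the-envelope illustration of such an estimate (every count and factor below is a placeholder to replace with your own inventory):

      # All counts and minute factors below are placeholders, not recommendations.
      counts = {"dataflows": 220, "extract steps": 35, "screens": 60}
      minutes_each = {"dataflows": 5, "extract steps": 15, "screens": 20}

      adjustment = sum(counts[k] * minutes_each[k] for k in counts)   # minutes
      testing = adjustment                       # assume testing takes about as long again
      buffer = 0.3 * (adjustment + testing)      # optimizations and surprises

      print(f"total estimate: {(adjustment + testing + buffer) / 60:.0f} hours")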

    Kind regards
    Bettina

    ------------------------------
    Bettina Clausen
    Consultant
    Board Community
    Switzerland
    ------------------------------
  • Unknown
    edited February 2022
    Hi Bettina,

    I'll take this to the team and we'll go over all the considerations that you provided.

    A huge thank you for all your help. You are a major asset to the Board community!

    kind regards,

    ------------------------------
    Eric Rizk
    Business Intelligence Analyst
    LIGHTARC PTY LTD
    Australia
    ------------------------------
  • Etienne CAUSSE (Customer)
    edited February 2022

    Hi All,
    Thank you for this feedback; it is very valuable as we are starting our own migration.

    We have 9 data models, including some huge ones (>80 GB), and optimization is a big topic for us.
    Do you have any experience regarding the performance / memory usage / disk space ratio?

    In our last upgrade, we added a dimension to our biggest cubes (>5 GB each) and had to switch to a 128-bit sparsity in order to keep a reasonable cube size. We noticed that keeping too many dense dimensions led to huge increases in disk space usage and memory footprint.


    @Bettina Clausen, @Dominik Borchert: any experience with redesigning sparsities in V12 vs V10?

    Thanks,
    Etienne

  • Bettina Clausen (Employee)
    edited February 2022
    Hi Etienne,

    Before designing a data model, I create a table of the dimensions I will need plus their max item numbers.
    Based on this information, I can simulate where my sparsity limit is and whether there is any room for improvement.
    For instance:
    [Image: sparsity simulation table]
    In this example, I have 5 dimensions with the given max item numbers. In B10 you would jump into a 128-bit sparsity once you exceed the limit of 10^15 possible sparse combinations. In B12 the limit is much higher (2^64). So, as you can see in my example, I was now able to include all dimensions in the 64-bit sparsity, whereas in B10 I could only put 3 of them into it.

    If you use "auto" as the max item number setting, Board will automatically calculate the value based on the used sparsity combinations and the 64-bit limit (2^64). In general, I'd strictly avoid a 128-bit sparsity. If this is still the case in B12, you might need to redesign the whole data model (e.g. by using technical keys and concatenations of trees), but that's another topic...
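    As a sketch of such a simulation (the dimension names and max item numbers below are hypothetical; only the two thresholds come from the explanation above):

      B10_LIMIT = 10**15   # sparse combinations before jumping to 128-bit in B10
      B12_LIMIT = 2**64    # the higher limit in B12

      max_items = {        # hypothetical candidate sparse dimensions
          "Customer": 300_000,
          "Product": 50_000,
          "Depot": 500,
          "Currency": 60,
          "Scenario": 20,
      }

      combinations = 1
      for items in max_items.values():
          combinations *= items

      print(f"possible sparse combinations: {combinations:.2e}")
      print("stays 64-bit in B10:", combinations <= B10_LIMIT)
      print("stays 64-bit in B12:", combinations <= B12_LIMIT)

      # Rough idea behind "auto": the largest max item number for Customer that still
      # keeps the whole structure within the 64-bit limit.
      others = combinations // max_items["Customer"]
      print("max Customer items for 64-bit:", B12_LIMIT // others)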

    If you plan on redesigning your sparsity, I suggest analyzing your possibilities with such a matrix before adjusting anything in Board.

    Kind regards
    Bettina

    ------------------------------
    Bettina Clausen
    Consultant
    Board Community
    Switzerland
    ------------------------------
  • Etienne CAUSSE (Customer)
    edited February 2022
    Hi @Bettina Clausen, and thanks for your input. I had a similar Excel file in the past, but I was not aware of this difference between Board 10 and Board 12 (10^15 vs 2^64). I'll review my model with that in mind.

    Regarding cube size, did you see any difference between B10 and B12 with the same dense/sparse structure?

    Etienne
  • Dominik Borchert (Customer)
    edited February 2022
    Thanks for your interesting insights! The idea of changing the sparse structure in B12 (to decrease cube sizes) was also new to me. As far as I remember, we basically didn't change anything here during the migration; the cube structures stayed as they were in B11/10. Accordingly, our cubes didn't really change in size (a total of 8 GB in B10/11 and a total of 8 GB in B12). Nevertheless, we have seen lower CPU and RAM consumption during complex procedures under B12 (e.g. 32 GB -> 16 GB for the biggest procedure)...

    ------------------------------
    Dominik Borchert
    ------------------------------
  • Tobias Feldmann (Customer)
    edited February 2022
    Hello to all,
    We are currently migrating from 10 to 12. Our first step was to extract the metadata from Board 10 and, using XSLT, turn all procedure steps into list form. We started with the AsciiExportAction, ExtractCubeAction, ExtractEntityAction and ExtractTreeAction steps, which belong together with the AsciiDataReaderAction steps that have to be tested. The layouts of the AsciiExportAction are particularly problematic here: we have used many layouts that require a lot of computing power, which naturally leads to an enormous CPU load on the server. First we have to optimise all layouts; only then can we check all the readers.
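    For anyone attempting a similar inventory without XSLT, here is a hedged sketch in Python; the export file name and the XML element/attribute names are assumptions to adapt to whatever your Board 10 metadata export actually contains:

      import xml.etree.ElementTree as ET
      from collections import Counter

      # The file name and the <Procedure>/<Action> layout are assumptions; only the
      # action type names come from the post above.
      ACTIONS_TO_REVIEW = {
          "AsciiExportAction", "ExtractCubeAction", "ExtractEntityAction",
          "ExtractTreeAction", "AsciiDataReaderAction",
      }

      tree = ET.parse("procedures_export.xml")        # hypothetical export file
      counts = Counter()

      for proc in tree.getroot().iter("Procedure"):   # assumed element name
          for step in proc.iter("Action"):            # assumed element name
              action_type = step.get("type", "")
              if action_type in ACTIONS_TO_REVIEW:
                  counts[action_type] += 1
                  print(proc.get("name"), "->", action_type)

      print(dict(counts))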

    The Excel sparsity calculation is also a good indicator in any case; thanks to @Bettina Clausen.


    ------------------------------
    Tobias Feldmann
    Senior Controller
    Weber GmbH & Co. KG Kunststofftechnik und Formenbau
    Germany
    ------------------------------
  • Audrey Nobles (Active Partner)
    edited March 2023

    In the v12.5 Spring Release, we noticed that setting the max item number to auto had an adverse effect on many existing dataflows. These were simple dataflows (e.g. c=a*b, adding a single dimension to the target cube). In fact, to get the dataflows to perform in the same time as in v10, we had to decrease the max item numbers significantly below their original v10 settings. I'm not sure whether this has been anyone else's experience on the latest v12, but it has at least cautioned me away from setting all max item numbers to auto without major regression testing.

  • Bettina Clausen (Employee)
    Hi Audrey,

    I would not generally set all max item numbers to “auto”. As a best practice, I try to set huge entities that grow fast to “auto” so I don't need to adjust them each year. Please note that there are many other factors that have an impact on a dataflow's performance: tuple extension? big sparsities? time functions? selections? etc.

    I suggest diving deeper into the analysis of the individual dataflows. In your example you mention that you are adding a dimension to the target cube (c=a*b). In an optimized setup this should run as a “join” dataflow type (check the db log).

    Kind regards,

    Bettina