Data flow no longer working


Hi community

 

I have a data flow using a join that works correctly with version 10.1.2 and is not working with 10.3.0 (the latest release).

 

See the attached picture.


Answers

  • Hi Daniele Di Lorenzo,

     

    Your matrix cube (block b) should have at least one dimension fewer than your source cube (block a) for the join dataflow to work. In your example, both the matrix and the source cube have 3 dimensions.

    See also the answer to this topic: Dataflows: c=a*b versus c=join(a*b).

     

    Hope this helps

     

    Best regards

    Bettina

  • Daniele Di Lorenzo

    Thank you, Bettina Clausen.

     

    I'll try adding a dummy dimension to the 1st cube, but I don't think this is the problem.

    The fact is that this data flow processes data correctly with version 10.1.2 (this procedure was created months ago and has always worked perfectly) and is not working with 10.3.0.51220.

     

    I also have another procedure with a join (and here the 1st cube has more dimensions than the matrix cube), and the behaviour is the same.

     

    Finally, what I can see is that join data flows do not seem to work with 10.3.

     

    I'll keep you updated on further testing results.

     

    thanks!

  • Daniele Di Lorenzo

    Bettina Clausen, I made some tests.

     

    I don't think the number of dimensions affects the calculation; as Michele Roscelli explained, the join allows expanding the number of dimensions in the target. Here the matrix has the dimension to add to the target, so in my opinion this is correct. This is the configuration:

     

    a) Source cube: Month (D), Entity 1 (S), Entity 2 (S)
    b) Matrix cube: Month, Entity 2 (S), Entity 3 (S)
    c) Target cube: Month (D), Entity 1 (S), Entity 2 (S), Entity 3 (S), Entity 4 (S)

     

    Moreover, I verified this strange behaviour: if the target has already been calculated (in this case with 10.1.2), the data flow seems to run because the target values remain unchanged. BUT if I delete some data (I manually deleted some cells for testing), when I run the data flow again, all the data in the process selection are reset.
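
    To make my understanding of the join concrete, here is a rough sketch in plain Python of what I expect c=join(a*b) to do with the cubes above (this is only a conceptual illustration, not Board syntax; the cell values are invented and Entity 4 is left out):

        # Conceptual sketch only, NOT Board syntax: cubes are modeled as dicts
        # keyed by tuples of dimension members, values are the cell values.

        # a) Source cube: Month, Entity 1, Entity 2
        source = {
            ("2020.01", "E1_A", "E2_X"): 100.0,
            ("2020.02", "E1_A", "E2_X"): 200.0,
        }

        # b) Matrix cube: Month, Entity 2, Entity 3 (carries the Entity 3
        #    dimension that the join should add to the target)
        matrix = {
            ("2020.01", "E2_X", "E3_P"): 0.4,
            ("2020.01", "E2_X", "E3_Q"): 0.6,
            ("2020.02", "E2_X", "E3_P"): 1.0,
        }

        # c) Target cube: Month, Entity 1, Entity 2, Entity 3 (Entity 4 omitted)
        target = {}
        for (month, e1, e2), a_val in source.items():
            for (m_month, m_e2, e3), b_val in matrix.items():
                # match on the dimensions shared by source and matrix
                if (m_month, m_e2) == (month, e2):
                    # the join expands the result along Entity 3
                    target[(month, e1, e2, e3)] = a_val * b_val

        # target now holds one cell per (Month, Entity 1, Entity 2, Entity 3)
        # combination present in the matrix, e.g.
        # ("2020.01", "E1_A", "E2_X", "E3_P") -> 40.0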

  • Unknown
    Unknown Active Partner

    I don't think there is an issue with the join matrix syntax either.

     

    You probably tried it already, but did you clear the data dictionary cache and unload/reload your database?

     

    Did you try disabling the cache parameter in the AdulaParams?

     

    Did you try clearing and reloading your cubes?

     

    Did you try with a different target cube with the same entity structure?

     

    When a dataflow stops working for magical reasons, one of the abovementioned countermeasures usually seals the deal for me.

     

    Cheers,

    Jonathan

  • Daniele Di Lorenzo

    Hi Jonathan Baetens

     

    thanks a lot

     

    I tried all the options (no change in behaviour) except "disabling the cache parameter in the AdulaParams".

    How can I do this?

    What is its impact?

     

    Anyway, I made these data flows work by removing hpm and join; they are less efficient, but I made the running time sustainable by reducing the process time selection.

    So, for now I have a workaround until I upgrade the version in the live environment.

     

    But this does not fix the issue.

  • Unknown
    Unknown Active Partner
    edited March 2020

    Hi Daniele,

     

    The AdulaParams file can be found under the BOARD Server folder. You would have to change that file and restart the service.

     

    Typically I set those flags to false because, due to imprecise caching, some data entries/selections sometimes point to the wrong cells in the database. I am actually not 100% sure whether it has any impact on a dataflow, but I apply this change when I start seeing odd behaviour in the system. The only possible impact is a negligible performance decrease.

    Cheers,
    Jonathan

  • Daniele Di Lorenzo

    Hi Jonathan Baetens

     

    I tried this, but it is still not working.

     

    thanks for the support

  • Hello Daniele,

     

    The join algorithm is only supported in InRam mode.

    If you are running Hybrid, you need to set the three involved cubes as InRam and restart the service, or run the service in full InRam mode.

     

    Jonathan Baetens, the AdulaParams caches are only used for data entry.

     

     

    Ciao

     

    Antonio

  • Daniele Di Lorenzo

    Hi Antonio Speca

     

    Yes, this is the reason.

    My test environment was set to Hybrid mode.

     

    thanks a lot!