I've seen a few cases now where checking the Open Sparsity option on a dataflow is what it takes to get data into the destination cube correctly. I was hoping someone could help articulate when and why this option should be used or avoided.
As I understand it, the Open Sparsity option removes any assumptions about where valid data intersections exist in the target. It forces the dataflow to execute for every possible intersection of the target cube; wherever a non-zero value is computed, it is saved and that intersection becomes populated.

Without the option, the dataflow only calculates results for intersections that already exist in the target cube. So the dataflow can compute a non-zero number, but if the target intersection is not valid, the result cannot be saved. The behaviour looks like the dataflow works fine, yet some numbers just never show up in the target.

The trade-off is speed: with the option checked, the dataflow takes longer because there are more intersections to calculate. Is that correct? Please feel free to correct me where I've misunderstood the option.
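To make the mechanics concrete, here's a rough Python sketch of the two iteration strategies as I picture them. This is purely illustrative: Board doesn't expose dataflows as code, and the cube contents, dimension names, and the `run_dataflow` function are all made up for the example.

```python
from itertools import product

# Hypothetical target cube, stored sparsely: only populated
# intersections have entries.
target = {("ProductA", "Jan"): 100.0}

# Values the dataflow computes for each intersection. Note that
# ("ProductA", "Feb") does not yet exist in the target cube.
computed = {
    ("ProductA", "Jan"): 25.0,
    ("ProductA", "Feb"): 50.0,
}

products = ["ProductA", "ProductB"]
months = ["Jan", "Feb"]

def run_dataflow(open_sparsity: bool) -> dict:
    result = dict(target)
    if open_sparsity:
        # Open Sparsity ON: visit every possible intersection of the
        # target dimensions, so brand-new cells can be created.
        cells = product(products, months)
    else:
        # Open Sparsity OFF: visit only intersections that already
        # exist in the target; results aimed anywhere else are lost.
        cells = list(target.keys())
    for cell in cells:
        value = computed.get(cell, 0.0)
        if value != 0.0:
            result[cell] = value
    return result

print(run_dataflow(open_sparsity=False))  # ("ProductA", "Feb") is dropped
print(run_dataflow(open_sparsity=True))   # ("ProductA", "Feb") is written
```

The sketch also shows where the extra runtime comes from: the dense loop visits every combination of dimension members, while the sparse loop only visits cells that already hold data.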