Featuring: Community Captain Leone Scaburri, Solution Architect—internal Center of Excellence for Professional Services, Board
If you have ever reviewed a Board cube and felt uneasy after seeing 128-bit sparsity, you are not alone. For many architects and modelers, 64-bit sparsity is perceived as the “safe zone,” while 128-bit triggers immediate concerns about performance, memory usage, and design quality. The instinctive reaction is usually the same: “How do we get this back to 64-bit?” However, that reaction, while understandable, is not always justified.
To truly understand whether 128-bit sparsity is a problem, we need to step back and reconsider what sparsity is meant to achieve in the first place—and what actually drives performance in Board.
Sparsity is a Consequence, Not a Goal
Sparsity is not a feature to be “optimized” in isolation. It is the natural outcome of dimensional design choices.
Every cube reflects a balance between:
- The number of dimensions involved.
- Their cardinality.
- The number of combinations that make sense from a business perspective.
When this balance is handled correctly—by identifying which entities should be dense and which should be sparse—the cube starts to resemble reality more closely. In some cases, that realistic representation naturally exceeds what can be addressed with a 64-bit pointer. When that happens, Board simply scales to 128-bit sparsity.
This is not a failure of the engine, nor an indication of instability. It is a supported and expected behavior.
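To make the threshold concrete, here is a back-of-the-envelope sketch (a conceptual illustration, not Board internals): if each stored combination must be addressable by the internal pointer, a 64-bit pointer can distinguish at most 2^64 combinations, so a combination space beyond that requires a wider pointer. The entity cardinalities below are hypothetical.

```python
# Conceptual sketch: when does a cube's combination space outgrow 64 bits?
# Cardinalities are hypothetical, not taken from any real Board model.
from math import prod

def pointer_bits_needed(cardinalities):
    """Return 64 or 128 depending on whether the theoretical
    combination space fits within a 64-bit address range."""
    combinations = prod(cardinalities)
    return 64 if combinations <= 2**64 else 128

# customers x products x days: fits comfortably in 64 bits
small_cube = [10_000, 5_000, 366]
# a much wider model: customers x products x days x scenarios x versions
large_cube = [5_000_000, 2_000_000, 3_660, 50, 20]

print(pointer_bits_needed(small_cube))   # 64
print(pointer_bits_needed(large_cube))   # 128
```

The point of the sketch is that the switch is purely a function of how large the combination space is; nothing about the design is "wrong" when the second case occurs.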
What Really Changes Between 64-Bit and 128-Bit
A common fear is that 128-bit sparsity will dramatically slow down procedures, calculations, or screen interactions. In practice, this fear is often misplaced. The core engine logic does not change when moving from 64-bit to 128-bit sparsity. What changes is primarily the width of the internal pointer, which affects memory footprint and, consequently, raw execution time.
However, what truly affects performance is not the pointer size, but:
- How many combinations are actually stored.
- How many of those combinations are meaningful.
- How much unnecessary data the cube is carrying.
A cube with fewer, well-defined combinations, even if managed with 128-bit sparsity, will often perform better than a bloated cube artificially constrained to 64-bit.
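A simplified cost model makes this trade-off visible (this is an illustration of the principle, not Board's actual storage format): index memory grows with the number of *stored* combinations, but only linearly with pointer width, so doubling the pointer is cheap compared to carrying hundreds of millions of meaningless combinations.

```python
# Illustrative cost model (NOT Board's actual storage format):
# memory scales with stored combinations; pointer width only doubles
# the per-combination cost.

def approx_index_bytes(stored_combinations, pointer_bits):
    """Rough index size: one pointer per stored combination."""
    return stored_combinations * (pointer_bits // 8)

# A bloated cube kept dense to stay on 64-bit pointers...
bloated_64 = approx_index_bytes(stored_combinations=500_000_000, pointer_bits=64)
# ...versus a lean cube storing only real combinations on 128-bit.
lean_128 = approx_index_bytes(stored_combinations=20_000_000, pointer_bits=128)

print(f"bloated 64-bit index: {bloated_64 / 1e9:.1f} GB")  # 4.0 GB
print(f"lean 128-bit index:   {lean_128 / 1e9:.1f} GB")    # 0.3 GB
```

Under these hypothetical numbers, the "safe" 64-bit cube costs more than ten times the memory of the wider-pointer cube, which is exactly the inversion the article describes.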
The Hidden Risk of “Forcing” 64-Bit Sparsity
Trying to stay within 64-bit sparsity at all costs can be counterproductive. Common strategies—such as keeping high-cardinality entities dense "just in case", avoiding sparse structures where they are logically required, and preserving combinations that never occur in real data—may help remain under the 64-bit threshold, but they do so by inflating the cube with meaningless combinations. This leads to higher memory consumption, a larger footprint on disk, and ultimately worse performance.
In other words, forcing a cube to remain 64-bit can be far more damaging than allowing it to move naturally to 128-bit.
How to Design Cubes for Correct Sparsity
A healthier design mindset is to treat 64-bit sparsity as a preference, not a constraint.
The recommended approach is simple in principle:
- Start from the business reality and identify meaningful combinations.
- Apply sparsity consistently to entities that do not interact fully with others.
- Reduce unnecessary dimensions and unused entities.
- Observe the resulting cube size and data density.
If this process results in 64-bit sparsity, that is ideal. If it results in 128-bit sparsity, that is still acceptable, provided the cube is smaller, cleaner, and more representative of actual data.
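The "observe data density" step can be approximated with a quick check on source data before the cube is built: what fraction of the theoretical combination space actually occurs? The function and figures below are hypothetical illustrations, not a Board API.

```python
# Sketch of a pre-design density check on transaction data:
# what share of the theoretical combination space really occurs?
# (Hypothetical numbers; not a Board API.)
from math import prod

def density(observed_combinations, cardinalities):
    """Share of the theoretical combination space that actually occurs."""
    return observed_combinations / prod(cardinalities)

# e.g. 2M real customer/product pairs out of 10,000 x 50,000 possible ones
d = density(2_000_000, [10_000, 50_000])
print(f"density: {d:.2%}")  # 0.40%
```

A density this low suggests the entity pair should be handled as sparse, whatever pointer width that leads to; a density close to 100% suggests the entities genuinely interact fully and can stay dense.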
When 128-Bit Sparsity Is a Warning Sign
There are cases where 128-bit sparsity should raise questions, but the questions should be about modeling choices, not about the engine.
It is worth reviewing the design if:
- Many dimensions are rarely used or completely unused.
- Sparse entities were added without validating real data interactions.
- The cube stores a large number of empty or meaningless combinations.
In these situations, the issue lies in dimensional design, not in the sparsity level itself.
Final Thoughts
The concern around 128-bit sparsity often comes from treating it as a warning sign rather than what it actually is: a natural consequence of dimensional design. When sparsity is applied correctly, cubes stop storing artificial combinations and start representing real business structures. In many cases, this makes the model significantly more efficient, even if it moves beyond the traditional 64-bit range.
From a modeling perspective, this is the right trade-off. The real danger is not reaching 128-bit sparsity. The real danger is distorting a model just to avoid it, keeping unnecessary combinations or avoiding proper sparse structures. 128-bit sparsity is not something to be afraid of. It is not an error, a limitation, or a death sentence for performance. The real goal is not to "stay at 64-bit," but to store only what truly matters.
So, the next time you see 128-bit sparsity, resist the instinct to panic. Instead, ask a simpler question: Does the cube reflect the business correctly? If the answer is yes, then the model is doing exactly what it should.