Best Of
Re: FlexGrid - Wrap column names
Hi
Actually, we can remove the aggregate labels by checking the option below;
the calculation is then removed.
Rgds
Khaleelah
Re: Presentations Modules - Make independent links to screens to the folders in which they are stored
Hi, is there any update on this topic?
Re: Dataflow performance
Just be aware that the JOIN() function is deprecated from Board 12.6 onwards, as Board now applies such functions automatically without you needing to call them explicitly. Leaving them in won't cause any harm, but they are unnecessary and will be ignored by the engine.
How to display images stored in the Azure Storage
CONTENT
1.1 Display an image stored in the Azure Storage
Displaying images stored in Azure Storage directly in Board is possible by dynamically constructing the URL according to the File Storage public access schema. The integration leverages algorithm blocks to concatenate the relevant URL components.
Each image is accessed through this URL structure:
https://$STORAGE_ACCOUNT.file.core.windows.net/$SHARE_NAME/$DIRECTORY_PATH/$FILE?$SAS_TOKEN
Note that only HTTP or HTTPS protocols are supported for image rendering in Board.
1.2 How to Build It in Board
Define URL Components in Separate Text Algorithms
Create five text algorithms in your layout to represent each part of the Azure URL:
| Variable | Description | Example |
|---|---|---|
| $STORAGE_ACCOUNT | Base account endpoint | "https://myaccount.file.core.windows.net/" |
| $SHARE_NAME | Azure File Share or Blob Container | "product-images/" |
| $DIRECTORY_PATH | Folder/subfolder hierarchy | "electronics/" |
| $FILE | Image filename, including extension | "laptop123.png" |
| $SAS_TOKEN | Shared Access Signature token (with ? prefix) | "?sp=rl&st=2025-02-10T10:00Z&se=2025-12-31T10:00Z&sv=2023-11-03&sr=f&sig=xxxxxx" |
Each algorithm block returns a single text value.
Build the Dynamic Picture algorithm that concatenates the five text algorithms:
$STORAGE_ACCOUNT & $SHARE_NAME & $DIRECTORY_PATH & $FILE & $SAS_TOKEN
Use this algorithm in the layout of a Label or Card and your Azure image appears dynamically in Board.
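The concatenation performed by the Dynamic Picture algorithm can be sketched in Python. All values below are illustrative placeholders (not real Azure resources), and Board's `&` string-concatenation operator is mirrored here by `+`:

```python
# The five variables mirror the five text algorithms defined in the layout.
# All values are hypothetical examples, not real Azure resources.
STORAGE_ACCOUNT = "https://myaccount.file.core.windows.net/"
SHARE_NAME = "product-images/"
DIRECTORY_PATH = "electronics/"
FILE = "laptop123.png"
SAS_TOKEN = "?sp=rl&sv=2023-11-03&sr=f&sig=xxxxxx"

def build_image_url(account, share, directory, file, sas):
    """Concatenate the five URL components, like '&' in a Board algorithm."""
    return account + share + directory + file + sas

url = build_image_url(STORAGE_ACCOUNT, SHARE_NAME, DIRECTORY_PATH, FILE, SAS_TOKEN)
print(url)
```

Note that the trailing slashes live inside the component values themselves, so the concatenation is a plain join with no extra separators.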
Below is an example of a Layout configuration that displays the stored image:
Retrieve the SAS token from the SAS URI
!!! Be cautious: Displaying or exposing a SAS token in text is a potential security risk, because it effectively acts as a temporary key that grants access to the Azure Storage resources.
1.3 Conclusion
Displaying images from Azure Storage in Board is a good alternative to storing them directly in the data model. By dynamically building HTTPS URLs through text algorithms or cubes, you can keep your data model lighter while still enriching dashboards with secure, cloud-hosted visual content.
Board 14.5 is here!
We are pleased to announce that the Board 14.5 release is here! 🚀
The new Board 14.5 release includes significant improvements for Developers, Administrators, and Planners (End Users).
Board 14.5 proudly introduces a new semantic layer to your data – the Datasets! Furthermore, we made improvements to the Data Reader Action Group in Procedures, and in the Integrated ALM (Application Lifecycle Management).
Various other enhancements have also been made to the overall Platform experience, notably to the Administration section of Presentations and to the Presentations experience for end users. Administrators can now also access a centralized view of all Send To configurations and Subscriptions. Finally, for users of the Flex Grid Object, we have implemented a new formatting option for Row Style Templates: the double border.
Please find all the enablement materials linked below.
What to expect in the new Board 14.5 release
Improvements to the Error Management in Data Reader Action Group
We’ve introduced new configuration options that give developers greater flexibility in how Data Reader Procedure steps handle errors, including a new “NONE (Continue Procedure Execution)” option that preserves legacy behavior and a “Get back” flag that lets a Procedure return to the main flow after an Error Group runs. Together, these updates make it easier to choose whether execution should continue as usual or follow a defined error-handling path.
Datasets - A New Semantic Layer to Your Data
Datasets is a new object that provides a business-friendly semantic layer for accessing and exploring data, giving users a simplified view without needing to understand the underlying Data Model; it is currently available only to joint Board/Foresight customers. It allows developers to create purpose-built queries, reuse existing screen layouts, and empower business users with self-service data exploration.
Improvements to the Integrated ALM (Application Lifecycle Management) - Transporter enhancements
Datasets can now be fully managed through the Transporter, allowing you to move, update, or remove them across Data Models or Board environments just like any other Data Model object. The Transporter also runs an automatic dependency check before package creation to ensure everything remains consistent and conflict-free.
Presentations
New Archive and Unarchive options let users clean up their Presentation lists by hiding items without affecting sharing or content, while unarchiving restores them when needed.
Administrators gain streamlined management tools—including centralized menu actions, improved grids, direct Chat/Email/Delete options, and more control over author access and membership. Users also benefit from features like Leave Sharing to manage their own workspace and a new Slide List view that simplifies navigating and reviewing complex Presentations.
Send To
A new Send To panel in System Administration gives Administrators a centralized view of all Send To configurations across the platform, allowing them to audit usage, monitor schedules, and manage execution through run, enable, disable, and delete actions. Users continue to control detailed configurations, while Admins can review settings and adjust availability without altering the underlying setup.
Subscriptions
The Subscriptions panel in System Administration centralizes oversight of all automated content deliveries, giving Administrators visibility into what is being sent, when, and to whom while leaving detailed configuration under the control of each Subscription’s creator.
Admins can run, enable, disable, edit, or delete Subscriptions, individually or in bulk, to manage execution without altering user-defined settings.
Row Template Style - Double Border
A new double-border formatting option is now available for Row Style Templates, enabling clearer and more polished financial statement layouts.
This enhancement is supported across Flex Grid, Data View, printable reports, and Excel exports.
Board 14 Family FAQs
Looking for answers? Read our FAQs about the Board 14 release (requires Community login).
Board 14.5 Enablement Materials
- Review the [Official] Board 14.5 Release Deck attached at the end of this post.
- Explore the B14.5 training course here.
Release Notes and Bug Fixes
You can review the full Board 14.5 release notes now on the Board Knowledge Base as well as on the dedicated page here on Community. Read more details about the most recent bug fixes on our Community’s dedicated page.
Install Files
To download the latest installer files, please visit the Board 14.5 Download page. You must sign in to view and download the files.
Upgrade Instructions
The upgrade instructions are the same as for 14.1. Visit the upgrade instructions page of the Board Knowledge Base to learn about the upgrade process.
Review the Official Board 14.5 Release Deck below!
Smart Import Object sample use-cases and functionalities.
Author: Amruta Shaha, Senior Consultant for Board.
This article takes a deep dive into the Smart Import Object facility in Board, exploring how and when to use it in place of Data Readers, the different modes of import available, and practical scenarios where formula-based imports can unlock real value. While existing resources already cover the fundamentals, this piece is designed to go further—highlighting additional areas and advanced applications. The aim is to help readers configure Smart Import not just for reading Cube or Entity values, but to fully leverage its solution-driven capabilities in tackling complex business cases.
Existing Useful Articles as a pre-read:
- Smart import Object
- New training for the Smart Import Objects tool - Board Community
- How to hide/ignore columns of an Excel data source when using the Smart Import Object - Board Community
When to use: Smart Import vs. Data Reader
Useful for:
- End Users in Play mode, who can populate entity members or cubes through an Excel-like interface by uploading a file or copy-pasting on screen from their local machine
- Users who need to see errors on screen when an upload fails
Not useful for:
- Admin activities for mass uploads/scheduled batch uploads
- Procedure based data reading
- Migration activities/multiple period and cubes massive updates/application-level updates or changes needing huge database updates
Smart Import modes of data upload
A frequent question about Smart Import is: “What happens if I upload the same file more than once?”
The answer depends on the configuration mode—Add, Replace, or Merge—and each handles repeated imports differently. Using an inventory stock cube with dimensions Month and Category, the table below shows how the data behaves under each mode when the same file is imported multiple times.
Upload 1
| Month | Category | Quantity |
|---|---|---|
| 202501 | 1 | 100 |
| 202501 | 2 | 100 |
| 202501 | 3 | 100 |
Upload 2
| Month | Category | Quantity |
|---|---|---|
| 202501 | 1 | 200 |
| 202501 | 3 | 100 |
Resulting Cube Value
| Data Behavior | Month | Category | Quantity |
|---|---|---|---|
| Add | 202501 | 1 | 300 |
| Add | 202501 | 2 | 100 |
| Add | 202501 | 3 | 200 |

| Data Behavior | Month | Category | Quantity |
|---|---|---|---|
| Replace | 202501 | 1 | 200 |
| Replace | 202501 | 3 | 100 |

| Data Behavior | Month | Category | Quantity |
|---|---|---|---|
| Merge | 202501 | 1 | 200 |
| Merge | 202501 | 2 | 100 |
| Merge | 202501 | 3 | 100 |
From the above, it is clear that the Merge option is generally the preferred choice when configuring Smart Import.
The Add option should be used when users understand that each upload increments the existing values (they can then correct figures by uploading negative values).
Replace mode must be used carefully, because any entity combination that already holds a value but is not present in the upload is zeroed. It is recommended to use Replace together with active selections and the "discard outside selection" mode turned on, so that entity members outside the active selections are not zeroed.
Note, however, that even with "discard outside selection" enabled, any combination inside the selection that is absent from the new upload will still zero out the existing value on that combination.
In the example above, if the month selection is 202501 and the selected categories are 1, 2, and 3, the second upload will zero out the value stored on the combination of month 202501 and category 2.
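The three data behaviors described above can be sketched as a small simulation. The cube is modeled as a dictionary keyed by (Month, Category); the function name and the selection handling are illustrative assumptions, not Board internals:

```python
def apply_upload(cube, upload, mode, selection=None):
    """Simulate Smart Import data behaviors on a cube stored as
    {(month, category): quantity}. 'selection' is the set of keys
    covered by the active selection (relevant for Replace)."""
    cube = dict(cube)
    if mode == "add":
        # Each upload increments the existing values.
        for key, qty in upload.items():
            cube[key] = cube.get(key, 0) + qty
    elif mode == "replace":
        # Every combination inside the selection that is absent from the
        # upload is zeroed; uploaded combinations take the new value.
        scope = selection if selection is not None else set(cube) | set(upload)
        for key in scope:
            cube[key] = upload.get(key, 0)
    elif mode == "merge":
        # Uploaded combinations overwrite; everything else is kept.
        cube.update(upload)
    return cube

upload1 = {("202501", 1): 100, ("202501", 2): 100, ("202501", 3): 100}
upload2 = {("202501", 1): 200, ("202501", 3): 100}

print(apply_upload(upload1, upload2, "add"))      # category 1 -> 300, 2 -> 100, 3 -> 200
print(apply_upload(upload1, upload2, "replace"))  # category 2 is zeroed
print(apply_upload(upload1, upload2, "merge"))    # category 2 keeps its old value
```

Running the two uploads from the tables above through each mode reproduces the three result tables, including the zeroing of category 2 under Replace.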
Use case: advanced auto-incrementation / serialized numbering (using a procedure trigger to run a Data Reader from Smart Import)
When an entity is configured as auto-incremental, each new upload continues the numbering from the last upload: the entity's member list is checked in the backend so that the new Smart Import upload places the increment at the next number.
Example: creating a new Employee ID. The Employee ID is to be system-generated and auto-incremented, but must also include the department. This can be achieved with a Smart Import into a staging cube dimensioned by Department, the auto-incrementing entity, and Name, with a fixed cube value of 1.
Additionally, a trigger can be set on the Smart Import object so that, on the save action, a procedure extracts the cube and data-reads the Employee ID entity, concatenating the increment ID and the Department ID into the required name via a mapping cube.
Auto-incrementation can also be handled in the backend data read where the serialization needs to start from a higher number. For example, if the series starts at "5000000" and the first 100 records are uploaded, the formula in the triggered Data Reader procedure can be 5000000 plus the auto-increment column, applied in the Data Reader's ETL function.
Note that the auto-incremental entity used in the Smart Import should have the same code width as the final series entity, so that the addition above also updates the series at higher numbers (for example, when it reaches 5999999, incrementing continues at 6000000).
The setup in ETL will look as below:
And result as:
This addresses a Smart Import limitation: only existing column headers of the Excel-like Play-mode layout can be referenced in the [@....] formula, while the auto-incremental entity cannot reference any Play-mode column header and must instead be mapped as fixed or as a formula in the configuration.
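As a rough illustration of the offset-and-concatenate logic described above (the function name and the department prefix format are assumptions for the sketch, not Board syntax):

```python
# Hypothetical sketch of the serialized-numbering logic: the triggered
# Data Reader adds a start offset to the auto-incremented column and
# combines it with the Department code to form the Employee ID.

START = 5000000  # series start used in the ETL formula

def make_employee_id(increment_id, department_code):
    """Mimics '5000000 + <auto-increment column>' plus a Department prefix."""
    return f"{department_code}-{START + increment_id}"

print(make_employee_id(1, "HR"))    # first uploaded record
print(make_employee_id(100, "IT"))  # hundredth uploaded record
```

Because the offset is a plain addition, the code width of the auto-increment entity must match the final series entity, as noted above, so the sum lands in the intended range.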
Multi step calculations
Beyond auto-incrementation, users can overcome the Smart Import limitation of only being able to reference front-end column headers in formulas (i.e., [@Column Header ...]) by combining front-end Excel-like column formulas with the backend mapping configuration.
Consider the following example, where we need to calculate a Net Amount with a slab discount:
Step 1: Calculate Basic Amount
- Qty × Rate = Amount
- e.g., 15 units × 120 = 1800
Step 2: Apply Slab Discount
- If Amount > 1500, discount = 10% of Amount
- Net Amount = Amount – Discount
- e.g., 1800 – 180 = 1620
We can add the pre-discount Basic Amount to the front end of the Smart Import (in this example, users upload Qty and Rate).
In design mode, the amount column will be added with formula as below to simplify the steps:
In backend mapping configuration of the smart import, the cube can be mapped to formula as:
=if([@Amount (Calc)]>1500,[@Amount (Calc)]*90%,[@Amount (Calc)])
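The two-step calculation can be sketched end-to-end in Python. The function name and parameter defaults are illustrative; the logic mirrors the front-end Amount column plus the backend mapping formula above:

```python
def net_amount(qty, rate, threshold=1500, discount=0.10):
    """Two-step calculation: basic amount computed in the front-end
    column, slab discount applied by the backend mapping formula."""
    amount = qty * rate                  # Step 1: the [@Amount (Calc)] column
    if amount > threshold:               # Step 2: slab discount over the threshold
        return amount * (1 - discount)   # equivalent to [@Amount (Calc)] * 90%
    return amount

print(net_amount(15, 120))  # 15 x 120 = 1800 > 1500, so 1800 * 90% = 1620
print(net_amount(10, 120))  # 1200 is under the slab, so no discount
```

Splitting the work this way keeps the front-end formula simple while the conditional logic lives in the backend mapping.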
Conditional value based on other columns value (Dependent Dynamic value)
There are many uses for formulas that reference another column.
Example: a default value for blanks. If Department is not provided, the default value should be "Operations" (ensure that the Department entity contains the member "Operations").
While populating the cube's Department dimension, if the Play-mode column header is "Dept.", you can use the formula =if([@Dept.]="","Operations",[@Dept.])
The above populates a default entity member. The same approach can be used to populate cube values and dependent calculated cube values through formulas.
For example, if the column headers populating the cubes are "Qty" and "Amount", the Rate cube can be derived using the formula =[@Amount]/[@Qty]
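Both patterns above (default for blanks, derived column) can be sketched as plain functions; names are illustrative, and the division assumes Rate is derived from the uploaded Amount and Qty columns:

```python
def dept_or_default(dept):
    """Mirrors =if([@Dept.]="","Operations",[@Dept.])"""
    return "Operations" if dept == "" else dept

def derived_rate(amount, qty):
    """Derived column: Rate computed from two uploaded columns,
    mirroring =[@Amount]/[@Qty]."""
    return amount / qty

print(dept_or_default(""))        # blank falls back to the default member
print(dept_or_default("Sales"))   # provided values pass through unchanged
print(derived_rate(1800, 15))     # 1800 / 15 = 120.0
```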
Importing dates in non-Board formats can also be handled in the Smart Import configuration.
This is especially useful when Users need to upload files downloaded from their ERP or other systems.
Example: suppose we need to populate a cube at month level from data at date level. In the samples below, each row represents 2 January 2025 in a different date format; the formulas produce the Board month identifier:
| Incoming data (Column Header "Date") | Date Format | Required Board Format (result of Formula) | Formula |
|---|---|---|---|
| 01-02-2025 | MM-DD-YYYY | 202501 | =RIGHT([@Date],4)&LEFT([@Date],2) |
| 02-01-2025 | DD-MM-YYYY | 202501 | =RIGHT([@Date],4)&MID([@Date],4,2) |
| 02/01/2025 | DD/MM/YYYY | 202501 | =RIGHT([@Date],4)&MID([@Date],4,2) |
| 01/02/2025 | MM/DD/YYYY | 202501 | =RIGHT([@Date],4)&LEFT([@Date],2) |
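The string-slicing in the table's formulas can be sketched in Python (RIGHT maps to a negative slice, LEFT to a prefix slice, and Excel's 1-based MID(text,4,2) to the 0-based slice [3:5]):

```python
def to_board_month(date_text, fmt):
    """Convert a text date to Board's YYYYMM month identifier,
    mirroring the RIGHT/LEFT/MID formulas from the table above."""
    year = date_text[-4:]              # RIGHT([@Date], 4)
    if fmt in ("MM-DD-YYYY", "MM/DD/YYYY"):
        month = date_text[:2]          # LEFT([@Date], 2)
    else:                              # DD-MM-YYYY or DD/MM/YYYY
        month = date_text[3:5]         # MID([@Date], 4, 2), 1-based in Excel
    return year + month

for sample, fmt in [("01-02-2025", "MM-DD-YYYY"),
                    ("02-01-2025", "DD-MM-YYYY"),
                    ("02/01/2025", "DD/MM/YYYY"),
                    ("01/02/2025", "MM/DD/YYYY")]:
    print(to_board_month(sample, fmt))  # 202501 in every case
```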
Use Cases for Smart Import Validations:
Smart Import validations are a useful tool to further guide and control incoming data with enhanced error messages. The Validation option is found on the right panel:
Validations can be used to check cube or entity values. The reference follows the same syntax: [@Column header].
For example, to restrict the Month value so that only data from 202501 onwards is allowed, the configuration looks as follows:
Similarly, cube values can be validated, for example to disallow negatives (>0 as True), with the user message: "Negative Values not allowed".
Multiple validations can run in parallel, but note that a validation can only reference column headers from Play mode. Any dimension or cube populated other than through a column header reference (for example, mapped as fixed or formula) cannot be used in a validation: the [@....] reference always points to a column header of the Play-mode format of the Smart Import.
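The parallel-validation behavior described above can be sketched as a list of (column, predicate, message) rules; the structure and function name are illustrative, not Board's configuration format:

```python
def validate_rows(rows, rules):
    """Run all validations in parallel over uploaded rows, collecting
    one message per failure. Each rule references a Play-mode column
    header, like [@Column header] in the Smart Import configuration."""
    errors = []
    for i, row in enumerate(rows, start=1):
        for column, predicate, message in rules:
            if not predicate(row[column]):
                errors.append(f"Row {i}: {message}")
    return errors

# Two validations from the examples above: month floor and no negatives.
rules = [
    ("Month", lambda m: m >= "202501", "Only 202501 onwards is allowed"),
    ("Quantity", lambda q: q > 0, "Negative Values not allowed"),
]

rows = [{"Month": "202412", "Quantity": 10},
        {"Month": "202502", "Quantity": -5}]
print(validate_rows(rows, rules))  # one failure per bad row
```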
Suggested Values
Suggested Values can be used to pop up a drop-down list to select values from; this works for both cubes and entities. Note that if a parent hierarchy is added as a suggested value and the child entity is mismatched in the Smart Import, the relationship will be overwritten according to the Smart Import. Also, adding the parent as a preceding column will not dynamically limit the suggested values of the child entity column.
Smart Import is not just a convenience feature; it’s a design choice that can fundamentally change how you manage data in Board. By learning when to use it instead of Data Readers, mastering the behaviors of Add, Replace, and Merge, and applying advanced formulas with validation, you’ve gained the ability to transform Smart Import into a solution engine rather than a simple input tool.
The aim of this article was to show that Smart Import is not only for reading Cube or Entity values as-is, but also a tool that can support more complex business cases. As a next step, try applying these concepts in your own models and see the difference. And if you've discovered unique ways to use Smart Import, share them in the comments; your ideas could help others explore new possibilities.
Re: How to ensure correct rule behavior when using entity sorting
Thanks @Anastasia Vladimirova for this content.
How to ensure correct rule behavior when using entity sorting
1. Abstract
Rules applied to an Entity are widely used in Board to support aggregation logic within reporting structures. A common example is a P&L Reporting Line entity, where rules allow users to calculate and aggregate specific reporting categories.
When working with such rules, it is critical to understand how entity sorting impacts both the display of members and the results of rule calculations.
Although entity members may be loaded in one order, a SORT setting can reorder them, creating discrepancies between the underlying member sequence and the visible order. This can affect:
- How members appear to the user
- Which members are included in rule-based operations
- How data moves between cubes during procedures, especially via Dataflows
In particular, rules that use a SUM range (e.g., SUM([Member1]:[MemberN])) can produce inconsistent results when sorting differs across models or cubes.
2. Context
Assume we have a P&L report based on a cube where a rule is used to calculate aggregated results. We now want to create a dashboard that uses a new set of dashboard cubes, and the rule logic needs to work identically in this new environment.
If the original rule relies on positional logic—such as SUM ranges—the calculations in the dashboard cubes may differ from those in the original report due to differences in entity sorting or cube structures.
3. Content
To ensure rule evaluations behave consistently when data flows from one cube to another (e.g., Reporting → Dashboard), rules should be constructed using explicit, member-by-member aggregation rather than SUM ranges.
Example (Avoid):
SUM([Member1]:[MemberN]) → This depends on the entity's internal order, which may be modified by SORT settings.
Example (Recommended):
[Member A] + [Member B] + [Member C] + … → This method is order-independent and ensures stable results across cubes, regardless of sorting differences.
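The difference between the two styles can be demonstrated with a short sketch. The member names and values are hypothetical; a SUM range is modeled as a slice over the entity's current sort order, while the explicit sum names its members directly:

```python
def sum_range(members, values, start, end):
    """SUM([start]:[end]) evaluated over the entity's current sort order."""
    i, j = members.index(start), members.index(end)
    return sum(values[m] for m in members[i:j + 1])

def sum_explicit(values, *names):
    """[Member A] + [Member B] + ... : order-independent."""
    return sum(values[n] for n in names)

values = {"Revenue": 100, "COGS": -40, "Opex": -20, "Other": 5}

loaded_order = ["Revenue", "COGS", "Opex", "Other"]
sorted_order = ["COGS", "Opex", "Other", "Revenue"]  # after a SORT setting

# The range result changes when the sort order changes...
print(sum_range(loaded_order, values, "Revenue", "Opex"))  # 100 - 40 - 20 = 40
print(sum_range(sorted_order, values, "Revenue", "Opex"))  # empty slice -> 0

# ...while the explicit member sum is stable under any ordering.
print(sum_explicit(values, "Revenue", "COGS", "Opex"))     # always 40
```

The same members produce different range totals purely because SORT moved them, which is exactly the failure mode the explicit form avoids.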
HINT:
If you need to transition existing rules, one approach is to use ChatGPT with a prompt similar to the one shown in the image:
Please note:
- It is VERY important to tweak the prompt to fit the structure of your file. Check whether a different coding structure is used, or whether formulas other than SUM appear.
- ALWAYS check the output file, since AI isn’t perfect
4. Conclusion
- Avoid SUM range functions in Rules: Do not use SUM([Member1]:[MemberN]) when defining rules for entities that may have custom sorting. SUM ranges rely on positional order, which can break when sorting differs between cubes or models.
- Use explicit member aggregation: Always write rules as [Member A] + [Member B] + [Member C] + … This ensures consistency and prevents incorrect aggregation in Dataflows or dashboard cubes.
- Validate entity sorting before creating Rules: Confirm whether an entity uses a custom SORT configuration. Since SORT can change the runtime ordering of members, it may influence which members are included in rule calculations.
- Test rule behavior across cubes: When data is pushed from a source cube to a target cube (e.g., Reporting → Dashboard), verify that the rule’s output matches expectations in both cubes.
Incorrect use of SUM ranges can lead to unexpected results when saving a rule to a new cube. By adopting explicit member-by-member aggregation and validating entity configurations, users can ensure that rule results remain consistent across cubes and throughout Dataflows.
Re: December CommunityCast and Badge Opportunity
@Karry Schupp I've had some good BBQ in Kansas, so there's that!
Thrilled to hear about all the Board wins from this year! Thanks for sharing :)
Enjoy your family time and 'the good life!'