
QueBIT Blog: IBM Planning Analytics/TM1 Performance Assessments

Written by James Miller | Nov 13, 2019 4:14:53 PM

Way back in 2014, I offered some ideas for when there was a need to conduct a performance assessment of a TM1 model with limited time or budget. This scenario often still arises, even though TM1 has evolved into IBM Planning Analytics, so I thought it might be interesting to revisit some of my original recommendations to see if they may still be helpful.

Don’t think you need a performance assessment just yet? Then you’re in great shape! But for the rest of us, those waiting for cube views to open, TurboIntegrator processes to complete, or reports to refresh (to mention just a few symptoms), it’s time to perform at least a cursory assessment of our models.

Just as I pointed out back in 2014, I will start by asserting that a “proper” performance review would be extensive and take significant time, so if your time is scarce, it is more productive to focus on a few specific areas (until a more formal review is possible).

These areas include:

  1. Locking and Concurrency. Try to ensure that there are no restrictions or limitations indirectly built into the model that prevent users from performing tasks in one part of the application because other users are “busy” exercising the application somewhere else. In other words, try to expose any functionality that allows one user or feature to adversely affect another.
  2. Batch (TurboIntegrator) Processing Time. This is the time it takes for TurboIntegrator scripts to complete. Calculate the average completion time and determine whether the application is “offline” or unusable for more than a reasonable amount of time while processes execute. If so, “zero in” on those processes.
  3. Application Size. Review the overall size of the application to determine whether it is “of a reasonable size”, based upon the current and future expectations of the application (there are many factors that can cause memory consumption to be less than optimal, but even a cursory check can identify potential offenders). Is the server start time excessive? In addition, are there views and subsets that are slow to open?

Based upon timing and the above areas of focus, you might spend your time by:

  • Taking a quick look at the server configuration settings (tm1s.cfg) and following up on any non-default settings (commonly tuned parameters include MTQ and PersistentFeeders, for example): find out who changed them and why. Do they make sense? Were they validated as having the expected effects?
  • Reviewing each of the cubes and their dimensions. Are there any “excessive” views or subsets? Are there any cubes with an extraordinary number of dimensions (more than 7)? Have the cube dimension orders been optimized? Is any dimension particularly large?
  • Looking at how security has been implemented. It can be complex and restrictive or simple and more open. Given limited time, try to check the number of roles (groups) vs. the number of users (clients) – hint: you should never have more groups than clients – as well as things like naming conventions and how security is maintained. In addition, I am always uneasy if cell-level security is implemented; if it is, try to understand why it is needed.
  • Reviewing all TurboIntegrator processes that are considered part of critical or “routine” application processing. As part of an abbreviated review, I always recommend leveraging a file-search tool such as grep (or similar), which provides the ability to search and examine all of the application’s TurboIntegrator script (.pro) files for a specific function or logical pattern. For example, using this type of tool you can easily generate a list of all processes that “touch” a particular cube or use a special function, such as SAVEDATAALL, CUBESAVEDATA or VIEWZEROOUT (see the one-line example after this list).
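
As a quick illustration (a sketch that assumes you are in the TM1 server data directory, where each TurboIntegrator process is stored as a .pro file), a single grep invocation can list every process that calls a given function:

    # list the process (.pro) files that mention SAVEDATAALL, ignoring case
    grep -il "SaveDataAll" *.pro

Substituting a cube name for the function name produces a quick inventory of every process that “touches” that cube.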

General Things to Look for

There are many approaches to assessing a model and not all apply to every model. Here are just a few “generic” areas to focus on:

  1. Quickly scan the TI processes and determine if any call the SAVEDATAALL function to force in-memory changes to be committed to disk. SAVEDATAALL commits the in-memory changes of every cube in the instance or server, creating a period of time during which the server can be locked to all users, and the time the function takes to complete varies with the volume of changes made since the last save. Rather than using SAVEDATAALL, consider the CUBESAVEDATA function to serialize, or commit, in-memory changes for (only) a specific cube, or at least work out a schedule of the most efficient times to save data (see the first sketch after this list).
  2. The server may be logging all changes as transactions for all cubes. Transaction logging can be used to recover in the event of a server crash or other abnormal scenario; however, an application typically does not need to log all transactions for all cubes, and logging can impact overall server performance and increase processing time when changes are being “batched”. A best-practice recommendation is to turn off logging for any cube that does not require TM1 to recover lost changes. In other situations, it may be better to leave logging on for a particular cube but temporarily turn it off at the start of a (TurboIntegrator) process and then turn it back on after the process completes successfully (see the second sketch after this list).
  3. Look for use of the VIEWDESTROY and SUBSETDESTROY functions, as they are memory intensive and can cause locking/rollback situations that impact performance. Instead, use VIEWEXISTS and SUBSETEXISTS and, if the view or subset already exists, update it as required for the particular purpose (see the third sketch after this list). It is also a good idea, in the Epilog section of a process, to insert a single leaf element into each of the view’s subsets to reduce its overall size in case a user “inadvertently” opens any “not for user” or system views.
  4. Even the best model designs have limits on the volume of data that can be loaded and retained indefinitely in any single cube. How much data is currently held in the model? An optimal approach is to limit a cube to 3 years of “active” data, with prior years available in archival cubes or elsewhere. This will reduce the size of the entire application and improve performance. Note - older data can still be made available (sourced from 1 or more archival cubes, for example) using a utility process that both archives and removes specific years of data (see the fourth sketch after this list).
  5. TM1 caches views of data in memory for faster access. Once data changes, cached views become invalid and must be re-cached before use. Look for opportunities where it may be beneficial to use the VIEWCONSTRUCT function to “force” a pre-cache of a view before it is needed. This function is useful for pre-calculating and storing large views, so they can be accessed quickly after a data load or update (see the fifth sketch after this list).
  6. Generally speaking, consolidation is one of TM1’s greatest features: the TM1 engine will perform in-memory consolidations and drill-downs faster than any rule or TI process. All data that is not loaded from an external source should be maintained by the most efficient means available: a consolidation, a TurboIntegrator process, or a rule. From a performance and resource-usage perspective, consolidations are the fastest and require the least memory, while rules are the slowest and require the most memory. Simply put, any data that cannot be calculated by a TM1 consolidation should be seriously evaluated to determine its change tempo (slow moving or fast moving). For example, data that changes little, changes only during a set period of time, or changes “on demand” is a good candidate for TI processing (or near real-time maintenance) rather than a rule. Wherever possible, moving rule calculation logic into a TurboIntegrator process should be the method used to maintain as much data as possible; this will reduce the size of the overall TM1 application and improve overall processing performance as well (see the last sketch after this list).
  7. One of my favorite recommendations is to organize an application’s logical components as separate and distinct from one another. This is known as “encapsulation” of business purpose. Each component should be purpose-based and optimized to best solve its individual purpose or need. For example, the part of the application that performs calculation and processing of information should be separated from the part (of the application) that supports the consumption of, or reporting on, information. Applications whose architecture does not separate by purpose are more difficult (costlier) to maintain and typically develop performance issues over time.
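
To make a few of these items concrete, here are some minimal TurboIntegrator sketches; every cube, view, dimension and element name in them is an invented placeholder, not something from a real model. For item 1, committing a single cube instead of the whole server:

    # Epilog of a nightly batch: commit in-memory changes for just the
    # cubes that were updated, avoiding the server-wide lock that a
    # SaveDataAll call can create ('Sales' and 'Expenses' are placeholders)
    CubeSaveData( 'Sales' );
    CubeSaveData( 'Expenses' );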
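
For item 2, a common pattern is to suspend transaction logging around a bulk load and restore it afterwards:

    # Prolog: temporarily turn off transaction logging for the target cube
    sCube = 'Sales';                    # placeholder cube name
    CubeSetLogChanges( sCube, 0 );      # 0 = logging off

    # ... the Data tab performs the bulk load ...

    # Epilog: turn logging back on once the load completes successfully
    CubeSetLogChanges( sCube, 1 );      # 1 = logging on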
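
For item 3, testing whether a view and subset already exist and reusing them, rather than destroying and rebuilding:

    # Reuse a load view and subset if they already exist
    sCube = 'Sales';        # placeholder names throughout
    sView = 'zLoadView';
    sDim  = 'Period';
    sSub  = 'zLoadSub';

    If( SubsetExists( sDim, sSub ) = 0 );
      SubsetCreate( sDim, sSub );
    EndIf;
    SubsetDeleteAllElements( sDim, sSub );
    SubsetElementInsert( sDim, sSub, '2019-Oct', 1 );

    If( ViewExists( sCube, sView ) = 0 );
      ViewCreate( sCube, sView );
    EndIf;
    ViewSubsetAssign( sCube, sView, sDim, sSub );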
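
For item 4, one possible shape for an archive-and-remove utility process (the Data tab, not shown, would read each source value and CellPutN it into the archive cube):

    # Prolog: scope a source view to the single year being archived
    sYear = '2016';          # placeholder: year to move out of the active cube
    sCube = 'Sales';         # placeholder active cube
    sView = 'zArchive';
    sDim  = 'Year';

    If( SubsetExists( sDim, sView ) = 0 );
      SubsetCreate( sDim, sView );
    EndIf;
    SubsetDeleteAllElements( sDim, sView );
    SubsetElementInsert( sDim, sView, sYear, 1 );

    If( ViewExists( sCube, sView ) = 0 );
      ViewCreate( sCube, sView );
    EndIf;
    ViewSubsetAssign( sCube, sView, sDim, sView );
    ViewExtractSkipZeroesSet( sCube, sView, 1 );

    # Epilog: once the copy has succeeded, clear the archived year
    ViewZeroOut( sCube, sView );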
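
For item 5, pre-caching a large reporting view after a load:

    # Epilog of a load process: warm the cache for a heavily used view so
    # the first user to open it does not pay the calculation cost
    sCube = 'Sales';              # placeholder names
    sView = 'Monthly Report';
    If( ViewExists( sCube, sView ) = 1 );
      ViewConstruct( sCube, sView );
    EndIf;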
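
And for item 6, a tiny illustration of moving slow-changing rule logic into TI. Suppose a cube carries the rule ['Revenue'] = N: ['Units'] * ['Price'];. If units and prices only change during a nightly load, the rule can be removed and the same result written once by the load process instead of being recalculated on every query:

    # Data tab of a nightly process driven by a view of product/month
    # pairs (vProduct and vMonth are assumed data source variables);
    # the Revenue rule has been removed, so the value is stored once here
    nUnits = CellGetN( 'Sales', vProduct, vMonth, 'Units' );
    nPrice = CellGetN( 'Sales', vProduct, vMonth, 'Price' );
    CellPutN( nUnits * nPrice, 'Sales', vProduct, vMonth, 'Revenue' );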

Conclusion

Overall, most model assessment efforts will result in typical “to-dos” such as fine-tuning specific feeders and other overall optimizations. These tasks may already be “in process” or scheduled to occur as part of an existing project plan. If an extended performance assessment is not feasible right away, each of the suggestions mentioned here should be reviewed and discussed to determine its individual feasibility, expected level of effort to implement, and effect on the overall design approach (keeping in mind these are only high-level, but important, suggestions). Lastly, a performance assessment should not be considered a finite effort; it really should be made part of an ongoing, routine “care and feeding” effort. This will ensure that the model continues to perform and remains sustainable even as the user community and the business grow.