IBM Planning Analytics (TM1) gives us the ability to quickly and effectively prototype a potential business solution, demonstrating a design's proposed functionality and capabilities to stakeholders and others rather than relying on discussion and theory alone.
In some cases, it may seem like a good idea for your prototype to become the first iteration of the delivered solution. However, once the “aha moment” has passed, it is critical to understand the difference between “it can do this” and “it will do this”.
During prototyping, concepts and “what ifs” are rapidly explored to determine “if” or “how” requirements can be met; during development, architectural best practices should be strictly followed to ensure that what is built (and ultimately delivered) “will” meet or exceed requirements in a robust and sustainable way.
Practical Architectural Concepts
Here are some basic architectural best practice concepts that should be kept in mind when evolving a prototype into a delivered solution.
When solving any business problem, you should always look for “natural joints”. These will be your “break points” which will allow you to “modularize” the solution into “simple solution steps”.
- Identify (a “view of data” to be processed)
- Process (an identified view of data)
- Present (the results of processing an identified view of data)
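The “simple solution steps” above can be sketched as three small, independent functions, each with its own narrow contract. This is a hypothetical, language-agnostic illustration (the record fields and function names are invented for the example), not TM1-specific code:

```python
# Illustrative sketch of the Identify -> Process -> Present breakdown.
# Each step is its own module with a narrow contract.

def identify(records, year):
    """Identify: select the view of data to be processed."""
    return [r for r in records if r["year"] == year]

def process(view):
    """Process: act on the identified view (here, total the amounts)."""
    return sum(r["amount"] for r in view)

def present(result):
    """Present: format the processed result for consumption."""
    return f"Total: {result:,.2f}"

records = [
    {"year": 2023, "amount": 100.0},
    {"year": 2024, "amount": 250.0},
    {"year": 2024, "amount": 150.0},
]
print(present(process(identify(records, 2024))))  # Total: 400.00
```

Because each step only depends on the output of the previous one, any step can be replaced or tested on its own - the essence of a modular “break point”.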
With Planning Analytics’ ETL tool, TurboIntegrator, a bit of simple abstraction already takes place through the separation of a process into the Prolog, Metadata, Data and Epilog tabs.
For example, metadata maintenance is performed in the Metadata “step”, separating (or “abstracting”) it from any logic coded in the Data step.
Consider a process that loads daily product sales transactions. In the Metadata tab, product codes are validated; if a new product code is found, it is programmatically added to the product dimension hierarchy. Then, in the Data tab, the sales dollars for that product code can be loaded.
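The two-pass pattern can be sketched as follows. This is a hypothetical illustration of the idea, with a set and a dictionary standing in for the TM1 product dimension and sales cube (the names `product_dim`, `sales_cube`, and the sample transactions are invented for the example):

```python
# Sketch of the "metadata step, then data step" pattern:
# pass 1 adds unknown product codes, pass 2 loads the sales values.

product_dim = {"P100", "P200"}   # stand-in for the product dimension
sales_cube = {}                  # stand-in cube: (product, measure) -> value

def metadata_step(row):
    # Validate the product code; add it to the "dimension" if it is new.
    if row["product"] not in product_dim:
        product_dim.add(row["product"])

def data_step(row):
    # Load sales dollars against the (now guaranteed) product element.
    key = (row["product"], "Sales")
    sales_cube[key] = sales_cube.get(key, 0.0) + row["amount"]

transactions = [{"product": "P100", "amount": 50.0},
                {"product": "P300", "amount": 75.0}]  # P300 is new

for row in transactions:   # metadata pass
    metadata_step(row)
for row in transactions:   # data pass
    data_step(row)
```

The point is the ordering: by the time the data pass runs, every element it needs is guaranteed to exist, so the load logic never has to handle structural maintenance.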
Encapsulation and Information Hiding
Solution modules should not “know about the internals of another”. Building upon the above TurboIntegrator example, the solution module (or process) responsible for processing (a view of data) should not “care about” or “in any way be connected to” the modules that identify, reform or present data.
In other words, strive to separate the data transformation steps from the logic that processes the transformed data; use separate processes: one to transform the data, another to process it (once it is correctly transformed). Each module (or, in this case, process) should specialize in solving its “own problem”.
Solution modules should use a single, common “interface point”. That is, design modules to exchange information with other modules (or other applications) through one common “gateway”, removing the complexity of “translating information” from each solution module. In other words, solve that problem once.
For example, if applications commonly interface with an enterprise data warehouse, build a reusable interface module with that purpose in mind. What about loading GL transactions from different general ledgers into a single Profit and Loss cube?
A good architectural design might include a generic “mapping cube” that allows load processes to look up and translate account codes to the common reporting code used by a consolidated reporting system. The mapping cube would be the “interface point” where account translations are maintained and that all load processes can leverage (rather than each building and maintaining its own translation utility), ensuring that all GL codes, from any source, are correctly translated.
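A minimal sketch of such an interface point, with the mapping cube modeled as a shared lookup table (the account codes, system names, and `translate` function are invented for illustration):

```python
# Hypothetical "mapping cube" modeled as a lookup table. Every load
# process calls translate() instead of maintaining its own map.

ACCOUNT_MAP = {
    ("GL-A", "4000"): "Revenue",
    ("GL-A", "5000"): "COGS",
    ("GL-B", "R100"): "Revenue",
}

def translate(source_system, account_code):
    """Single interface point: translate a source GL account to the
    common reporting code, raising on unmapped codes rather than
    silently loading bad data."""
    try:
        return ACCOUNT_MAP[(source_system, account_code)]
    except KeyError:
        raise ValueError(f"Unmapped account {account_code!r} "
                         f"from {source_system!r}")
```

Because translation lives in one place, adding a new general ledger means adding mappings, not writing (and later debugging) another translation routine.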
Scope & Life of Variables
Programming commonly requires the use of variables. It is important to have a strict policy governing the definition and use of all implemented variables. Some guidelines are:
- Know “when” and “when not” to initialize a variable
- Use naming conventions - to indicate the variable “scope” (public/private, global/local, etc.)
- Know its “role” – counter, comparator or accumulator?
There is perhaps no bigger challenge - when moving from prototype to development - than rereading, and trying to interpret, one’s program code written during prototyping “without convention”.
Simply taking the time to follow standard guidelines during prototyping will reduce the time required to “understand again” when revisiting the logic. It will also reduce errors caused by misinterpreting variables and prevent having to break down and rebuild cubes just to rename objects.
Understand Murphy’s Law
Always practice “defensive programming” - especially in a distributed environment!
Solution modules should never “assume”. Make sure each module in your solution can handle a “less than perfect” state. That is, that it performs an appropriate interrogation of all information it consumes, processes or presents and “knows what to do” with any exceptions.
As mentioned earlier, load processes should check for the existence of key data: not only new product codes, but also “supporting data” such as other dimensional points (perhaps company codes, regions, departments/divisions, etc.). Finally, validation is important, and it is recommended that the system include “Validators”, which can verify data points by checking each value against a known list or even testing it with well-defined business rules.
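A “Validator” of this kind can be sketched as a function that returns the exceptions found for each incoming record (the region list, field names, and business rule below are invented for illustration):

```python
# Hypothetical Validator: check each incoming data point against a
# known list and a simple business rule before it is consumed.

VALID_REGIONS = {"NA", "EMEA", "APAC"}

def validate(row):
    """Return a list of exceptions for the row; empty means clean."""
    errors = []
    if row.get("region") not in VALID_REGIONS:       # list check
        errors.append(f"unknown region: {row.get('region')}")
    if row.get("amount", 0) < 0:                     # business rule
        errors.append("amount must be non-negative")
    return errors
```

Returning the exceptions (rather than failing outright) lets the calling module decide what to do - skip the row, log it, or route it to an error file - which keeps the module “knowing what to do” with a less-than-perfect state.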
Remember, application requirements will almost always change and so will your application – but by using architectural best practices, when changes come, you’ll be adjusting and not starting over.