As we mentioned in a previous blog, Predictive Analytics Is More than a Magic Act, one of the impediments to predictive analytics success is company-wide misconceptions. These misconceptions can take root before a predictive analytics rollout even begins.
Predictive analytics certainly has the power to influence major decisions at the top. For instance, if a company is looking into a merger, executives may want to forecast the financial risks involved or evaluate the impact the move could have on consumer demand. Or if decision-makers are contemplating a change in their current business model, predictive analytics can help them identify new opportunities that could come with the change.
The way analytics are described in some marketing and sales circles, it almost seems like you just push a button, numbers spit out, and your company gains an amazing knowledge transfusion for making better decisions. It’s like having a direct link to a magic eight ball that actually works. As James Taylor (yes, you guessed it, not that James Taylor) explained, there is much more work involved in realizing the value that predictive analytics brings. Companies whose predictive analytics projects fail tend to have lost sight of the most essential steps.
Taylor—the CEO of Decision Management Solutions—explained how to avoid common pitfalls of predictive analytics in the MIT Sloan Review article The Four Traps of Predictive Analytics (http://sloanreview.mit.edu/article/the-four-traps-of-predictive-analytics/).
The impact of a predictive data model largely depends on the software technology behind it. But both the models and the software can only be so effective without a clear business purpose behind them. If the goals of a predictive analysis aren’t established up front, the technology can’t drive the results you are looking for. Nate Silver stressed that the data programs, the analytics tools, and the goals of the findings should all be intertwined:
“Tools are important and efficient code is important, but at the same time, the attitudes you adopt toward this, and a solid understanding of what your goals are… those are more fundamental issues than which software you’re using.”
Back in 2012, The Memphis Daily News revealed some noteworthy results from a McKinsey Quarterly survey of 2,207 executives. Only 28% of participants said that the quality of strategic decisions at their companies was generally good, and 60% thought that bad decisions were about as frequent as good ones. Think about that last statistic for a second. If those bad decisions translate into equally bad outcomes, there’s no telling how many failed projects, failed hires, and failed experiments have resulted. So what gives?
Believe it or not, plenty of biases get in the way of would-be objective data analysis, and those biases seriously impair decision-making. That’s an especially troublesome prospect for business leaders who count on well-founded information.
Along with trying to find the highest probabilities for particular business outcomes, it is equally important to put your data to the test once it’s ready. In that regard, Nate Silver recommends a “trial and error” approach to derive the most value from large data sets.
What Silver really advocates is bypassing the theory stage, jumping into the heart of advanced analytics and decision-making, and not being afraid of the missteps that could follow. It’s definitely a scary thought. But without making mistakes, how can a company get better?
Is data a source of right answers, or a tool for weeding out the obviously wrong conclusions? If you believe the latter, you are probably on to something, and data and statistics guru Nate Silver would agree. In many instances there is no single magic right answer to a business problem or scenario; there can be many answers that could be construed as right or justifiable. The goal of any CFO or CIO is to eliminate as many incorrect or unusable data points as possible. It’s like taking an exam with five answer options: eliminate the clearly wrong ones, then deduce the best answer from the two or three viable choices that remain.
Getting back to a basic (yet highly important) point we made in the blog Big Data Perception vs. Reality: Is It Value or Noise?, predictive models and advanced analytics deliver probabilities that can cut down on the frequency of irrelevant data points popping up. Probabilities enhance the chances that a good decision can be made under a certain set of circumstances; they don’t guarantee it will be. Once decision-makers start blurring the line between probabilities and certainties, they get caught flat-footed.
As we discussed in our last blog, big data is highly influential and is certainly changing the decision-making process across the business landscape. More organizations are adopting tools such as Hadoop and YARN to get in on the data-crunching fun. According to an IDC survey, 32% of businesses have already deployed a Hadoop solution, and another 31% said they plan to deploy one within the next year. However, organizations looking to use Hadoop to run their own version of Moneyball may be expecting too much.
Great data points that stand out on their own (out of context) can certainly look impressive and convincing enough for decision-makers to make a bold move. The problem is that the big data universe holds plenty of those data points if you look closely enough. Do all of those findings equate to prime business opportunities? Nate Silver—one of the foremost statisticians, forecasters, and vocal big data experts—says no. Businesses need to discern the signal from all of the noise that big data brings and logically assess the information in front of them. As Silver puts it, businesses need to stop “cherry-picking the results they want to see.”
So how should businesses go about running the most optimal predictive models and analytics to uncover the truth about their data?