
Q&A: Lack of trust in predictive analytics models can snarl projects

Beth Stackpole

Just because a company’s senior management has signed off on investing in predictive analytics models and tools doesn’t mean it’s a slam dunk to get broader organizational buy-in for using the technology to drive operational decision making. Proving the accuracy of predictive analytics to business managers and workers can be a big hurdle, according to Eric Siegel, president of consulting firm Prediction Impact Inc. in San Francisco and program chair of the Predictive Analytics World conference.

SearchBusinessAnalytics.com recently spoke with Siegel to get his perspective on strategies for building trust in the findings of predictive models and encouraging more analytical decision making within an organization. Excerpts from the interview follow:

Beyond the technical complexities associated with predictive analytics, how much of a challenge is changing the mind-set and culture of an organization to drive decision making less by gut instinct and more by what analytical models reveal?

Eric Siegel: It’s a tremendous issue, and one could argue it’s the primary bottleneck that keeps pervasive adoption [of predictive analytics tools] at bay. For example, in many large organizations, only a small portion of predictive analytics’ potential is being tapped. We’re talking not only about cultural changes but about making changes to the way operations are conducted and the way customer decisions are driven.

Using data to drive decisions requires a greater trust in the technology and the math than has existed when people execute by way of gut reactions. This trust may be established in the organization from the bottom up, since proponents and practitioners can put together one-off, proof-of-principle [demonstrations] of predictive analytics’ value. But ultimately, the culture shift must be driven from the top down. You really need executive buy-in to create the kind of culture required across the organization.

How common is it for companies to embark down the path of predictive analytics only to find that their employees don’t fully understand, and hence don’t trust, the output of the models?

Siegel: I think a meaningful portion of projects stall because of that problem -- anywhere from 5% to 15%. Most practitioners know it’s important to spend time getting organizational buy-in before spending time on the technology. The proof is in the pudding, and a preliminary deployment can be executed in a risk-averse, incremental manner to engender further understanding and trust in the technology. Generate predictive models that, for example, score customers on the expected chance they will make a purchase or turn out to be a bad credit risk, and put the model’s predictive scores into action to drive decisions. That helps build confidence.
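As a rough illustration of the kind of scoring Siegel describes -- not something from the interview -- the sketch below trains a simple propensity model and turns its predicted purchase probabilities into a decision. The data, feature names and 0.7 cutoff are hypothetical placeholders.

```python
# Hypothetical sketch: score customers on their expected chance of making a
# purchase. The synthetic data, feature names and 0.7 cutoff are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in features, e.g. recency, frequency and spend, with a synthetic label.
X = rng.normal(size=(1000, 3))
y = (X[:, 1] + 0.5 * X[:, 2] + rng.normal(size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression().fit(X_train, y_train)

# Predictive scores: the probability that each customer will buy.
scores = model.predict_proba(X_test)[:, 1]

# Put the scores into action, e.g. flag high scorers for a marketing campaign.
flagged = scores > 0.7
print(f"Customers flagged for follow-up: {flagged.sum()} of {len(flagged)}")
```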

You need to gain buy-in not just among executives but among the people who are in charge of operations, where decision making will be newly driven or informed by predictive scores. Is there a willingness to make a change in operations? Part of the organizational process around the project needs to explore and iterate on that.

Any other thoughts on how best to foster business buy-in?

Siegel: Another way is to guide people through the inner workings of a predictive model, helping them make sense of the often revealing business rules embedded therein. It can certainly help, psychologically, to show people the inner workings of the models. Opening up the “black box” speaks to people’s curiosity and addresses their skepticism.

One technique for mitigating risk when going to deployment is, instead of changing operations in one fell swoop, to initially make the change for only a small portion of cases. That is, continue doing some things the old-fashioned way, in the existing manner, and make the adoption of process changes incremental. That way, the new and old ways of doing things can be compared head-to-head in parallel. In the worst-case scenario, if there’s a bug or analytical mistake and the model turns out to perform poorly, the problem is exposed during the trial deployment. In turn, you’ll gain buy-in as people see the model succeed at doing what was promised.
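One way to picture the incremental rollout Siegel outlines -- a sketch under assumed names and numbers, not a prescription from the interview -- is to randomly route only a small share of cases to the model-driven process while the rest stay on the existing one, tagging each case so the two groups can be compared later.

```python
# Hypothetical sketch of an incremental rollout: only a small share of cases is
# handled the new, model-driven way; the rest continue through the old process.
import random

ROLLOUT_SHARE = 0.10  # assumed starting share of cases on the new process

def existing_process(case):
    """Stand-in for the current, manually driven decision."""
    return "contact" if case.get("priority", 0) > 5 else "skip"

def route_case(case, model_score, threshold=0.7):
    """Route a case to the model-driven or existing process and tag its group."""
    if random.random() < ROLLOUT_SHARE:
        group = "model"
        decision = "contact" if model_score > threshold else "skip"
    else:
        group = "control"
        decision = existing_process(case)
    return group, decision

# Example: a single hypothetical case with an assumed score from the model.
print(route_case({"priority": 7}, model_score=0.82))
```

Tagging each case with its group is what makes the later head-to-head comparison possible.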

How important is that kind of head-to-head comparison to ensuring that predictive analytics can deliver on its intended promise of more insightful decision making?

Siegel: It’s extremely valuable. Without it, people are essentially back to making a decision -- in this case, whether the predictive model is helping -- based on gut [instinct]. If you don’t have a head-to-head comparison between the old way and the new way, you might see an improvement with the model in play, but there’s no way to know for sure how much of it can be attributed to the model. The only way to know is to do some sort of head-to-head comparison.
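To make that attribution concrete -- with counts invented purely for illustration, not results Siegel cites -- the two groups run in parallel can be compared directly as response rates and lift.

```python
# Hypothetical sketch: compare the model-driven group against the control group
# run in parallel. All counts below are made up for illustration only.
model_contacts, model_purchases = 1_000, 74
control_contacts, control_purchases = 9_000, 450

model_rate = model_purchases / model_contacts        # response rate, new way
control_rate = control_purchases / control_contacts  # response rate, old way
lift = model_rate / control_rate

print(f"Model-driven response rate: {model_rate:.1%}")
print(f"Existing-process response rate: {control_rate:.1%}")
print(f"Lift attributable to the model (head-to-head): {lift:.2f}x")
```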

What are the biggest missteps companies make when trying to establish a data-driven culture and what are your suggestions for avoiding them?

Siegel: The classic error companies make when embarking on a project is to start doing the analytics before getting buy-in on the business side, including agreement on how operational decision making is intended to change by leveraging predictions. You have to establish a clear understanding of what’s going to change in operations in order to act upon these predictions. That’s the thing people overlook. They get excited, they get enough buy-in to devote resources to the technology and analytics side, and then they come up with a “cool” model that’s fascinating and interesting but doesn’t necessarily provide [business] value -- and the whole project stalls.

Beth Stackpole is a freelance writer who has been covering the intersection of technology and business for 25-plus years for a variety of trade and business publications and websites.