The idea that “better access to data must result in better decision making” is a myth.
Access alone does not necessarily lead to better decisions. We’ve all seen the impact of poor decisions made in spite of the available data. The subprime mortgage debacle, which contributed in part to the global economic slowdown of the late 2000s, is an instructive example. Financial institutions had access to decades of data on lending and repayment habits. They had invested heavily in reporting systems and old-style BI. Their staff had access to the data, but they were still blinded by herd thinking and one-dimensional measurement, myopically following a single idea to its predictable endpoint. The revenue KPI was good, right? “Just keep the ‘make more money’ dial turned up to the max, don’t let that number drop!” Evidently, just because data is made available doesn’t mean that it’s being used actively or effectively.
In this type of situation it seems that humans are prone to a form of inattentional blindness*, and stop considering (or even noticing) information outside of their current focus. Even the best data discovery software in the world isn’t going to overcome human perceptual weaknesses, but giving people the chance to easily flex their view of data and navigate through it helps enormously, by lowering the barriers to asking innovative questions that less flexible representations of data put in place. Asking the right questions, and getting better at decision making, requires repeated immersion in the data. Only then can people acquire the ability to question accepted measures and step outside of dangerous preconceptions.
In the end, as with any skill, practice is what matters. Only the frequent use of BI will improve decision competency. Frequent iteration of analysis – of being in the data – is what will deliver insight (and, of course, raise more questions and avenues of exploration). But this will only happen if BI technology user experiences prompt this behavior; that is, they are enjoyable to use, fast to respond, and inherently flexible when it comes to data.
So adoption of BI tools is the answer right? Kind of, but it’s important to differentiate between adoption and usage. I’ve often heard it said that user adoption is the only true measure of the success of a BI deployment. I disagree. After all, I can adopt a BI tool by just passively consuming a financial report once a month. Check! Adoption secured. But insight delivered? Not much. Worse, I can adopt a BI tool if I just open a report and export the data to Excel. Check! Adoption secured. What a tragedy – when this happens the investment in BI is utterly negated, and chaos reigns.
In the end, if we’re to really help improve organizational decision-making it comes down to providing software that people will use every day. Software they’ll practice with and learn through using together, to the point that they begin to reach the threshold for true expertise in understanding their data and its meaning: “In fact, researchers have settled on what they believe is the magic number for true expertise: ten thousand hours.” (Malcolm Gladwell, ‘Outliers: The Story of Success’)
*For an amazing illustration of inattentional blindness see BBC Horizon’s restaging of the seminal experiment run in 1999 by Daniel Simons of the University of Illinois at Urbana-Champaign and Christopher Chabris of Harvard University. You won’t believe your eyes. Literally.