Executives tend to distrust organizational data. Determining the cost of bad data to your organization is the first step in gaining executive trust.

Why Executives Distrust Their Data

A broad set of indicators determines the value of your enterprise data, ranging from accuracy to understandability to community contributions, and deficiencies in any of these indicators are common. A single problem in any one of them can turn your data from an asset that provides strategic competitive advantage into an unknown liability that leads to poor decisions, missed opportunities, damaged reputations, customer dissatisfaction, and exposure to additional risk and expense. For example, inaccurate information can skew summary data or bias data science models.

Errors like these can heavily influence business decisions and ultimately affect the bottom line. As data volumes and sources grow, so does the importance of managing quality. Unfortunately, errors like the examples given are not uncommon, and they breed mistrust of data. In fact, according to a Harvard Business Review study, only 16% of managers fully trust their data.

A recent study by New Vantage Partners uncovers more reasons for executive concern, especially among executives leading transformations to data-driven cultures. The study cites cultural resistance and a lack of organizational alignment and agility as barriers to adopting new data management technologies. What stands out most is that 95% of the executives surveyed said the biggest challenges with data management changes are cultural, stemming from people and process. There is a clear lack of, and need for, tools that can be easily adopted and that improve data management processes.

The Cost of Poor Data

While trust in data is low, executives recognize the importance of data quality, and organizations are beginning to understand the high cost of getting it wrong. A recent study by Experian Plc. found that bad data costs companies 23% of revenue worldwide. Even more eye-opening, IBM estimates that poor data quality costs the U.S. economy $3.1 trillion per year.

The cost mainly comes from initial errors whose downstream effects force expensive, reactive fixes. For example, in a survey by 451 Research, 44.5% of respondents said their approach to data quality management was to find errors in reports and then take corrective action after the fact, while 37.5% relied on a manual data cleansing process.

But it doesn’t stop there: highly skilled data analysts in IT groups are spending valuable time manually analyzing and fixing errors. According to a study by Syncsort, 38% of analysts in data-driven roles spend more than 30% of their time manually remediating data.

Likewise, a recent MIT study found that knowledge workers waste up to 50% of their time dealing with mundane quality issues, and for data scientists that figure can reach 80%. This is time that could be better spent uncovering business-transforming insights, solving complex business challenges, or creating revenue drivers instead of revenue drains.