
Debunking the 15 Biggest Myths About Data Quality

by Zac Amos, March 8th, 2024

Too Long; Didn't Read

To avoid the negative impacts of data quality myths, companies should do their best to quash any misconceptions. These include: Data quality is only for large enterprises; Data cleaning only needs to happen once; Internal data does not need cleaning; and more.

Many data scientists, business professionals, and machine learning engineers do not realize that some of their long-held beliefs are misconceptions. Here is the truth about 15 of the most commonly accepted myths regarding data quality.

1. Other Companies Do Not Struggle With Data Quality

Unbelievable metrics and market valuations often showcase the promising business outcomes data quality practices can generate. Many enterprise and small business leaders believe they are doing something wrong since they do not achieve the same results.


Many companies struggle with maintaining data quality. According to one global survey, 50% of business IT professionals say their employers have often or occasionally faced issues with it. Many also admit they are unable to drive results, rating their strategies as only somewhat or slightly successful.

2. Data Quality Is Only for Enterprises

A common misconception is that data quality practices and policies are only for enterprises. In reality, small business leaders should hold themselves to the same standards even if they have less data to manage. To generate actionable insights, they must adequately clean, transform, and analyze information regardless of volume.
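Even for a small data set, a cleaning pass like the one above can be straightforward. The sketch below is a minimal, illustrative example (the field names and normalization rules are assumptions, not a standard): it trims whitespace, standardizes casing, and drops exact duplicates before analysis.

```python
def clean_records(records):
    """Normalize a list of contact dicts and drop exact duplicates.

    Hypothetical 'name' and 'email' fields; real pipelines would
    handle many more fields and edge cases.
    """
    seen = set()
    cleaned = []
    for rec in records:
        normalized = {
            "name": rec["name"].strip().title(),   # standardize casing
            "email": rec["email"].strip().lower(), # emails are case-insensitive
        }
        key = (normalized["name"], normalized["email"])
        if key not in seen:  # skip exact duplicates after normalization
            seen.add(key)
            cleaned.append(normalized)
    return cleaned


rows = [
    {"name": " ada lovelace ", "email": "ADA@example.com"},
    {"name": "Ada Lovelace", "email": "ada@example.com"},  # duplicate once normalized
]
result = clean_records(rows)
print(result)  # a single normalized record remains
```

The point is that the same clean-transform-analyze discipline applies at any scale; small volume just means the pass finishes faster.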

3. Poor Data Quality Is Nothing to Worry About

Poor data quality leads to unactionable insights and financial losses. In 2023, industry professionals reported it affected 31% of their revenue on average, up from 26% the year prior. Although many shrug off minor errors, they can have a substantial impact.

4. High-Quality Data Is Always Accurate

While many business professionals assume a thorough cleaning guarantees accurate results, data can still be unreliable. Unexpected events and emerging details can make output inaccurate at any time. While companies should keep using insights for guidance, they should not rely on them as their sole driver.

5. Most Issues Stem From Data Entry

Many people wrongly assume sourcing and entry are responsible for quality issues. In reality, someone could collect, merge, and scrub data to perfection but still have problems. Timeliness, relevancy, and consistency are conditional: minor changes can occur at any time, making clean information outdated, irrelevant, or inconsistent.

6. Preparation and Strategy Aren’t Necessary

Many business professionals assume they can navigate issues as they arise, choosing to forgo preparation. This choice leads to poor data quality, which causes enterprises to lose an average of $15 million annually. Inaccurate insights and unexpectedly lengthy resolutions result in missed business opportunities, low customer confidence, and a poor market reputation.


The concept of preparation also applies to machine learning applications. Data scientists and machine learning engineers should consider how to align their goals with their data sourcing, collection, and transformation techniques before beginning development. This way, they avoid costly mistakes.

7. Data Quality Is the IT Team’s Responsibility

Although the IT team shoulders most of the technical aspects of data quality, the duty should not be theirs alone. Because insight accuracy shapes business outcomes across the entire organization, leaders in every department, even purportedly unrelated ones, should take on some responsibility to maximize the chances of success.

8. All Data Is Valuable and Worth Analyzing

Sometimes, seemingly valuable sources aren’t worth the effort. In fact, many companies have too many — more than they know what to do with. In these cases, they waste effort merging, cleaning, and transforming data that ultimately goes unanalyzed. Professionals’ time is better spent on fewer, more impactful sources.

9. Data Cleaning Only Needs to Happen Once

Data is constantly changing, so professionals should not expect to clean it once and be done with it. Even if they automate the process, human oversight is a necessity. They must be able to catch anything from a minor entry error to a categorization change before it impacts insights.


The concept of ongoing maintenance is especially applicable to machine learning applications. Information changes over time, potentially introducing errors despite a thorough initial cleaning. Professionals must scrub repeatedly to maintain prediction accuracy and performance.
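One way to combine automation with the human oversight described above is a recurring validation pass that flags suspect records for review instead of silently "fixing" them. The sketch below is illustrative only; the field names, rules, and category list are assumptions.

```python
def validate(records, known_categories):
    """Return (record, reason) pairs that need human review."""
    flagged = []
    for rec in records:
        if rec.get("amount") is None or rec["amount"] < 0:
            flagged.append((rec, "missing or negative amount"))
        elif rec.get("category") not in known_categories:
            # e.g. a taxonomy change upstream introduced a new category
            flagged.append((rec, "unrecognized category"))
    return flagged


batch = [
    {"amount": 12.5, "category": "hardware"},
    {"amount": None, "category": "hardware"},    # entry error
    {"amount": 3.0, "category": "wearables"},    # new category upstream
]
issues = validate(batch, known_categories={"hardware", "software"})
print(len(issues))  # 2 records flagged for human review
```

Running a pass like this on every batch catches both entry errors and structural drift, which is exactly the kind of change a one-time cleaning would miss.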

10. 100% Data Quality Is the End Goal

Perfection is generally unachievable in any respect, and reaching 100% quality is almost impossible when dealing with large volumes of data. Many business and data professionals wrongly assume their end goal is to fix every duplicate, missing value, or inconsistency. Instead, they should view data quality as an ongoing process and prioritize reliability and performance.
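Treating quality as an ongoing process often means tracking a score over time rather than chasing 100%. The sketch below is one illustrative way to do that; the completeness and uniqueness metrics (and their equal weighting) are assumptions, not an industry standard.

```python
def quality_score(records, required_fields):
    """Average of completeness and uniqueness, as a rough quality signal."""
    if not records:
        return 0.0
    # completeness: share of required fields that are actually filled in
    filled = sum(
        1 for rec in records for f in required_fields if rec.get(f) not in (None, "")
    )
    completeness = filled / (len(records) * len(required_fields))
    # uniqueness: share of rows that are not exact duplicates
    unique = len({tuple(sorted(rec.items())) for rec in records})
    uniqueness = unique / len(records)
    return round((completeness + uniqueness) / 2, 3)


rows = [
    {"id": 1, "email": "a@example.com"},
    {"id": 1, "email": "a@example.com"},  # exact duplicate
    {"id": 2, "email": ""},               # missing email
]
print(quality_score(rows, ["id", "email"]))  # 0.75
```

A team can watch this number trend upward (or alert when it drops) without ever expecting it to hit 1.0.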

11. Internal Data Does Not Need Cleaning

Many professionals mistakenly assume information collected internally will not need cleaning. Realistically, it is just as full of errors as other data sets. Even something as simple as recording “first name, last name” instead of “last name, first name” can cause significant inconsistencies.


The same concept applies to synthetic data sets generated by algorithms for machine learning applications. While they might seem error-free at first glance, there is a high likelihood of inconsistencies, duplicates, and missing values.
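The name-order inconsistency mentioned above is a classic internal-data problem. A sketch of one possible fix is below; the comma heuristic is an assumption and would need careful review against real names (suffixes, multi-part surnames, and so on) before production use.

```python
def normalize_name(raw):
    """Convert 'Last, First' entries to 'First Last'; leave others unchanged."""
    raw = raw.strip()
    if "," in raw:
        # assume a single 'Last, First' comma; split only on the first one
        last, first = (part.strip() for part in raw.split(",", 1))
        return f"{first} {last}"
    return raw


print(normalize_name("Lovelace, Ada"))  # "Ada Lovelace"
print(normalize_name("Ada Lovelace"))   # already consistent, unchanged
```

Even a simple normalizer like this prevents the same person from appearing twice under two formats, which is how "clean" internal data quietly accumulates duplicates.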

12. Having Quality Data Guarantees Success

Data scientists generally assume their efforts will result in success. Unfortunately, the reality is sometimes different. While data-driven insights are valuable, they cannot guarantee positive business outcomes on their own. A solid strategy and organization-wide support are also essential.

13. Data Cleaning Won’t Take Much Time

Cleaning is an unbelievably time-intensive task — especially when dealing with large volumes of data. Some claim data scientists spend 80% of their workweek on it. In other words, they can only dedicate 20% of their time to transformation, analysis, or insight generation. Companies should not assume they will be able to generate insights immediately.

14. Minor Errors Are Insignificant

Some people mistakenly assume a handful of duplicates or missing values is acceptable. Although data scientists cannot achieve perfection, nor should they strive for it, they should not simply accept errors. Ignoring minor errors to save time can ultimately result in poor insights and unhappy clients.

15. Data Cleaning Should Take Priority

While professionals often prioritize data cleaning, problems frequently stem from poor management instead. Experts believe companies spend 10%-30% of their revenue on dealing with these problems. Resolving misguided decisions, mismanagement, and poor planning is costly, so companies should consider reprioritizing protocols and oversight.

Professionals Should Quash These Data Quality Myths

These data quality myths negatively impact business outcomes, return on investment, reputation, and client satisfaction. Professionals should do their best to quash any misconceptions in their organization.