Many people confuse data precision with accuracy, but it’s important to understand each term and the differences between them, especially as they apply to data quality. Precision is defined as the exactness of a measurement. A highly precise television would reflect minute differences in color with incredibly high pixel resolution. In data quality, precision assesses the depth of detail encoded in the data. To put the definition to work, ask yourself, “how tightly can my data be defined?”
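To make this concrete, here is a minimal sketch (the readings and the two-decimal requirement are invented for illustration, not drawn from the Qualytics product) of a precision check that counts how many decimal places a column actually records and compares that against the scale the business expects:

```python
from decimal import Decimal

def decimal_places(value: str) -> int:
    """Return the number of digits recorded after the decimal point."""
    exponent = Decimal(value).as_tuple().exponent
    return -exponent if exponent < 0 else 0

# Hypothetical column of recorded temperatures; assume the business expects 2-decimal precision.
readings = ["98.6", "98.60", "98.601", "99"]
required_scale = 2

for r in readings:
    scale = decimal_places(r)
    status = "ok" if scale >= required_scale else "too coarse"
    print(f"{r}: {scale} decimal places -> {status}")
```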

Qualytics, the leading platform for data quality enterprise solutions, announced the addition of technology maven and business strategist Bill Murphy to its Board of Directors. Bill’s expertise will enable Qualytics to further enhance its product strategy and grow its market share. Bill’s experience in board service includes technology and growth-driven advisory for numerous companies including Accurics, Cherre, Phantom Cyber, iLevel, and Carbon Black.

Qualytics announced today the appointment of data and analytics veteran Stewart Bryson to the role of Chief Customer Officer. Bryson has nearly three decades of experience delivering professional services to hundreds of organizations. He spent the last 12 years as CEO of consulting companies building analytics solutions for Fortune 100 and Fortune 300 companies, including Google and LinkedIn.

Qualytics, the Data Quality Platform for the Enterprise, announces Eric Simmerman as Chief Technology Officer (CTO). Simmerman brings nearly 25 years of experience building software products and software teams as the CTO or VP of Engineering at Interos, HealthPrize, Social Tables, FolderGrid and Pascal Metrics. Simmerman has a passion for applying machine learning and data science to risk management, which is fundamental to the Qualytics Platform strategy.

In October, we started a Behind the Scenes initiative at Qualytics to share monthly updates on some of our product features. We wanted to offer insight into our amazing team of hardworking characters and give customers, followers, and other interested parties a look at what we are working on. Get to know us along with our products.

According to DataCadamia, a definition of consistency is, “It specifies that two data values drawn from separate data sets must not conflict with each other, although consistency does not necessarily imply correctness.”

Data consistency means that a value is the same across all datastores within the organization. The data belongs together and describes a specific process at a specific time, meaning that it is not changed during processing or transfers. Without consistency, there is no way to guarantee that a piece of data, once moved, is correct and identical across every place it is stored.
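As a rough sketch of the idea (the tables, column names, and values are hypothetical, and this is not the Qualytics Compare API), a basic consistency check might compare the same field as it appears in two datastores and flag any records where the copies disagree:

```python
import pandas as pd

# Hypothetical extracts of the same customers from two datastores.
crm = pd.DataFrame({"customer_id": [1, 2, 3], "email": ["a@x.com", "b@x.com", "c@x.com"]})
warehouse = pd.DataFrame({"customer_id": [1, 2, 3], "email": ["a@x.com", "b@y.com", "c@x.com"]})

# Join on the shared key and keep rows where the two copies of the field disagree.
merged = crm.merge(warehouse, on="customer_id", suffixes=("_crm", "_warehouse"))
inconsistent = merged[merged["email_crm"] != merged["email_warehouse"]]

print(inconsistent)  # customer 2 has conflicting emails across the two stores
```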

The term big data is thrown around a lot these days, but one of the main areas where it truly applies is large industrial units (manufacturing facilities, refineries, vehicle assembly plants, etc.). With the advent of digital technologies and advanced sensors, the amount of data being collected every day is astounding. This creates a challenge of its own: these datasets are prone to numerous errors and issues.

Timeliness is a measure of how often data is available when it’s expected. It can be calculated as the difference between the time information should be available and the time it actually becomes available. Informed business decisions depend upon consistent and timely information. Therefore, critical measures of data quality include tests specifying how quickly data must be propagated and compliance with other timeliness constraints such as periodic availability.
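As a minimal sketch under assumed values (the 06:00 deadline and the timestamps are invented for illustration), a timeliness check might compute exactly that difference and compare it against the agreed deadline:

```python
from datetime import datetime

# Assumed contract: the nightly orders extract should be loaded by 06:00.
expected_available = datetime(2021, 8, 12, 6, 0)
actually_available = datetime(2021, 8, 12, 7, 25)  # when the load actually finished

lag = actually_available - expected_available
on_time = lag.total_seconds() <= 0

print(f"lag: {lag}, on time: {on_time}")  # lag: 1:25:00, on time: False
```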

As the CEO of Red Pill Analytics, I led our company through a journey similar to the one we now lead customers through. We founded the company in 2014 with a focus on building on-prem analytics stacks, which were still all the rage then, with the individual components of those stacks being primarily Oracle products. Although our name was inspired by the revolutionary Matrix film (and exactly one of the sequels) and the metaphor that data can free our minds and offer us the truth, with a nod and a wink we were also acknowledging the color most associated with Oracle.

As mentioned, with Qualytics Compare, you can ensure consistency throughout your data. Our product works for you to identify incorrect data and the root cause of the error. Additionally, with Qualytics Protect, you can capture anomalies in data pipelines and quarantine records, or identify and alert on anomalies in your historical data. With our products, businesses are alerted to problems within their data so the problems can be solved.

We’re thrilled to announce that we will be attending our first (virtual) conference as a start-up-level sponsor at the 2021 Ai4 Conference. With three days, 200 influential speakers, and more than 21 industry-specific tracks discussing the use of AI and ML, it’s an event we can’t miss. If you’re not sure whether to attend, tracks can be customized to personalize your agenda and are built for both technical and non-technical audiences.

Why an AI & ML Conference?

As AI is crucial to the success of the Qualytics Data Firewall, we thought we’d take the opportunity to step into the event world and join colleagues, data practitioners, and industry leaders. And as a startup walking into a relatively new and cutting-edge field, we need to get the word out about not only our product but also how we are approaching Data Confidence. In today’s world, where data rivals oil as a resource, we want to share our message: Quality of Data matters, and it matters a lot.

This year, AI usage across businesses is set to create $2.9 trillion worth of business value. Our product, the Qualytics Data Firewall, similarly uses AI to ensure Data Quality for the industry. It does this through innovative features that take advantage of machine learning and artificial intelligence.

Data Quality is a problem for many. As company owners and operators, we make thousands of decisions every day – anywhere from the C-suite to the mailroom – by looking at data that may live in our home-grown or SaaS products, in databases or data warehouses, raw or aggregated into KPIs. As we grow more dependent on data in the modern age, there is a growing need to ensure that the data we look at is of “some” quality. In this article, we take a 5W1H approach to data quality monitoring.

A century ago, the most valuable resource was oil. Companies rushed to extract it, process it, sell it, and foster dependence on it, ultimately growing the macroeconomy and other industries through the additional mobility gained by consumers. The oil of the 21st century is data.

At its most basic, a firewall is the barrier that sits between a private internal network and the public internet. It was invented in the 1980s and soon became the most important line of defense for organizations against cyber attacks. Its main purpose is to keep dangerous traffic out. We took this concept and applied it to data. This means our Data Firewall’s main purpose is to keep bad data out. We profile and analyze your data, ultimately using our understanding to improve your data’s quality by filtering and quarantining the bad data.
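To illustrate the pattern in the simplest possible terms (this is not the Qualytics implementation; the field, thresholds, and data are all invented), a data firewall first profiles trusted data to learn what normal looks like, then screens incoming records against that profile and quarantines anything that falls outside it:

```python
import pandas as pd

# Profile a trusted historical sample to learn the expected range of a field.
history = pd.DataFrame({"order_total": [20.0, 35.5, 42.0, 18.75, 55.0]})
low, high = history["order_total"].quantile([0.01, 0.99])

# Incoming batch: records outside the learned range are quarantined, the rest pass through.
incoming = pd.DataFrame({"order_id": [101, 102, 103], "order_total": [30.0, -5.0, 5_000.0]})
out_of_bounds = ~incoming["order_total"].between(low, high)

quarantined = incoming[out_of_bounds]
passed = incoming[~out_of_bounds]

print("passed:\n", passed)
print("quarantined:\n", quarantined)
```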

Bad data. It sounds simple; it’s just inaccurate data or data that goes to the wrong place, right? Not quite. Even true data can be bad data. It may be correct in every way, yet duplicated, placed in the wrong field, or simply not what you’re looking for. That is still bad data. These small glitches in the system are where huge mistakes can arise. In a world that relies so heavily on data, bad data needs to be monitored to prevent it from spiraling into financial, operational, and reputational damage.
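As one small illustration (the table is invented), a record can be accurate in every field and still be bad data simply because it was loaded twice; a duplicate check catches it even though every individual value is true:

```python
import pandas as pd

# Every row here is factually correct, but customer 1's order was loaded twice.
orders = pd.DataFrame({
    "order_id": [500, 500, 501],
    "customer_id": [1, 1, 2],
    "amount": [99.99, 99.99, 45.00],
})

duplicates = orders[orders.duplicated(keep="first")]
print(duplicates)  # the second copy of order 500: accurate, but still bad data
```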