What is Qualytics Protect?

Leveraging machine-learning methods to infer data quality rules from historic data, the Qualytics Data Firewall captures erroneous data both in-flight and at-rest in your data stores. You can catch anomalies in data pipelines and quarantine the offending records, or identify and alert on anomalies already present in your historical data.

How does it work?

STEP 1

Connect to a Data Store

Connect to any database, data warehouse, data lake, pipeline, or source system through standards-based methods.

STEP 2

Profile

Using proprietary algorithms, the Data Firewall infers rich metadata by deeply profiling the historic data in the data store.
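To make the profiling step concrete, here is a minimal, purely illustrative sketch of the kind of per-field metadata a profiler collects (null rate, distinct count, observed range). The function and field names are hypothetical; this is not Qualytics' actual implementation.

```python
# Illustrative profiling sketch: collect basic metadata for one field
# across historic records. Hypothetical helper, not the product's API.
def profile(records, field):
    values = [r.get(field) for r in records]
    present = [v for v in values if v is not None]
    return {
        "null_rate": 1 - len(present) / len(values),  # fraction missing
        "distinct": len(set(present)),                # cardinality
        "min": min(present),                          # observed range
        "max": max(present),
    }

# Hypothetical historic data for an "amount" field.
orders = [{"amount": 12.5}, {"amount": 40.0}, {"amount": None}, {"amount": 7.25}]
metadata = profile(orders, "amount")
# metadata -> {'null_rate': 0.25, 'distinct': 3, 'min': 7.25, 'max': 40.0}
```

A real profiler would also infer types, formats, and distributions, but the principle is the same: turn historic data into metadata that rules can be built on.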

STEP 3

Infer & Author Rules

The Data Firewall uses inductive learning along with unsupervised learning methods to automatically infer data quality rules. You can also author your own rules on top of the inferred metadata.
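As a simplified illustration of rule inference (not Qualytics' actual learning methods), a profiler's observed range can be turned into a padded range check, so values slightly outside historic bounds do not immediately false-alarm. All names here are hypothetical.

```python
# Illustrative sketch: infer a range rule from historic values.
# Hypothetical; real inference uses richer statistics and learning.
def infer_range_rule(values, pad=0.1):
    lo, hi = min(values), max(values)
    span = hi - lo
    # Pad the historic range by 10% on each side.
    return {"low": lo - pad * span, "high": hi + pad * span}

historic_temps = [18, 21, 25, 30, 34]     # hypothetical sensor readings
rule = infer_range_rule(historic_temps)

def check(value, r=rule):
    return r["low"] <= value <= r["high"]
```

A value of 20 passes the inferred rule; a value of 50 would be flagged as anomalous.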

Screenshot: the Protect feature of the Qualytics Data Firewall
STEP 4

Catch Anomalies

The Data Firewall does what a firewall should: stop bad actors in their tracks by applying the data quality rules in a flexible manner. Anomalies in flight can be quarantined or alerted on; anomalies at rest can be identified and alerted on. Ultimately, you Protect your data store.
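The quarantine step described above can be sketched generically: apply a set of named rules to each in-flight record and route any record that fails to a quarantine list along with the rules it broke. The rules and record shapes below are invented for illustration; this is not the product's API.

```python
# Illustrative sketch: route records that fail any rule to quarantine.
def apply_rules(records, rules):
    passed, quarantined = [], []
    for rec in records:
        failures = [name for name, rule in rules.items() if not rule(rec)]
        if failures:
            quarantined.append({"record": rec, "failed": failures})
        else:
            passed.append(rec)
    return passed, quarantined

# Hypothetical rules over a payment stream.
rules = {
    "amount_nonnegative": lambda r: r["amount"] >= 0,
    "currency_present": lambda r: bool(r.get("currency")),
}
stream = [
    {"amount": 10, "currency": "USD"},
    {"amount": -5, "currency": "USD"},
    {"amount": 3, "currency": ""},
]
passed, quarantined = apply_rules(stream, rules)
```

Here one record passes and two are quarantined, each carrying the names of the rules it failed so downstream tools can act on them.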

STEP 5

Optional Enrichment

Data observability is the starting point for detecting anomalies. Qualytics lets you take confidence in your data to the next level with Enrichment.

Enrich your target data stores with anomalies and metadata in separate tables, enabling your team to take corrective actions with existing data tools.

STEP 6

Learn, Learn, Learn

Data is always changing, and so should your rules. The Data Firewall leverages unsupervised learning to detect model and data drift in metadata, while supervised learning ensures rules stay aligned with user needs.
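A deliberately simplified sketch of drift detection on profiled metadata: compare a statistic between profiling runs and flag drift when the relative shift exceeds a tolerance. Real drift detection uses far richer methods; the threshold, window names, and data here are all invented.

```python
# Illustrative drift check: flag when the mean of a field shifts
# by more than `tolerance` relative to the baseline window.
from statistics import mean

def drifted(baseline, current, tolerance=0.2):
    b, c = mean(baseline), mean(current)
    return abs(c - b) / abs(b) > tolerance

last_week = [100, 110, 95, 105]   # hypothetical daily row counts
today = [160, 170, 150, 165]      # volume has jumped ~60%
```

Comparing `today` against `last_week` flags drift, a signal that the inferred rules may need to be re-learned.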

How do we qualify your data?

The Qualytics 8

Qualytics uses these 8 fundamental categories to assess data quality.

Completeness
required fields are fully populated
Coverage
availability and uniqueness of expected records
Conformity
alignment of the content to required standards, schemas, and formats
Consistency
the value is the same across all data stores within the organization
Precision
the data has the resolution that is expected; how tightly can you define your data?
Timeliness
data is available when expected
Volumetrics
data has the same size and shape across similar cycles
Accuracy
data represents the real-world values it is expected to model
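Two of the eight categories lend themselves to a short, purely illustrative sketch: completeness as the fraction of records with all required fields populated, and conformity as a format match. The helpers and the email pattern are hypothetical, not Qualytics' checks.

```python
# Illustrative checks for two of the eight categories.
import re

def completeness(records, required):
    """Fraction of records where every required field is populated."""
    ok = sum(all(r.get(f) not in (None, "") for f in required) for r in records)
    return ok / len(records)

def conforms(value, pattern):
    """Conformity: the value matches the required format."""
    return re.fullmatch(pattern, value) is not None

rows = [{"id": "A1", "email": "a@x.io"}, {"id": "A2", "email": ""}]
score = completeness(rows, ["id", "email"])   # 0.5: one of two rows complete
ok = conforms("a@x.io", r"[^@]+@[^@]+\.[^@]+")
```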

Are you ready to embrace data confidence?