Data Integration Architecture

What is Data Integration Architecture?

Data integration architecture defines the flow of data from source systems through connectors and transformations into enterprise systems within an organization. For a typical enterprise-scale application, data often comes from multiple databases, file systems, internal and external applications, cloud resources, and streaming devices. To orchestrate and manage this complexity, a data integration architecture structures the workflow required to capture, aggregate, cleanse, normalize, synthesize, and store the data in a form useful for processing. Such an architecture typically includes change data capture (CDC) and extract/transform/load (ETL) processes as well as data persistence for storage. It also needs to address governance policies, security and privacy, and data quality requirements.
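
To make the ETL portion of this workflow concrete, the sketch below extracts records from two hypothetical sources, normalizes them to one canonical schema, and loads them into a database. The source names, field mappings, and schema are illustrative assumptions, not part of any particular product.

    # Minimal ETL sketch (Python): extract, transform, load.
    # All source names and field mappings here are illustrative assumptions.
    import sqlite3

    def extract():
        # In practice these records would come from databases, files,
        # applications, or streams; here they are hard-coded for clarity.
        crm_rows = [{"cust_id": 1, "full_name": "Ada Lovelace "}]
        erp_rows = [{"customer_no": "0001", "name": "Ada Lovelace", "region": "EMEA"}]
        return crm_rows, erp_rows

    def transform(crm_rows, erp_rows):
        # Cleanse and normalize both feeds into a single canonical record shape.
        canonical = []
        for row in crm_rows:
            canonical.append({"id": str(row["cust_id"]),
                              "name": row["full_name"].strip(),
                              "region": None})
        for row in erp_rows:
            canonical.append({"id": row["customer_no"].lstrip("0"),
                              "name": row["name"].strip(),
                              "region": row["region"]})
        return canonical

    def load(records):
        # Persist the unified records so downstream processing can query them.
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE customers (id TEXT, name TEXT, region TEXT)")
        conn.executemany("INSERT INTO customers VALUES (:id, :name, :region)", records)
        return conn

    if __name__ == "__main__":
        crm, erp = extract()
        conn = load(transform(crm, erp))
        print(conn.execute("SELECT * FROM customers").fetchall())

A production pipeline would add CDC to pick up only changed source records, plus validation and monitoring, but the extract/transform/load separation shown here is the structural core.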


Why is Data Integration Architecture Important?

Artificial intelligence applications produce the most value when they have access to a comprehensive view of every dimension of an enterprise’s operations, customers, and markets. A data integration architecture is required to create a unified, federated image of data from disparate sources, while meeting scale, reliability, and responsiveness requirements and keeping that unified data current to deliver insights.


How C3 AI Enables Organizations to Use Data Integration Architecture

The foundation of the C3 AI® Suite is a model-driven architecture that provides an abstraction layer to simplify the complexities of implementing AI-driven applications, from data integration through ML model development to the end-user interface. This abstraction layer simplifies the data integration architecture by requiring only a one-time integration for each data source, with more than 200 prebuilt connectors to commonly used data stores, data warehouses, and applications.

Because the data integration architecture is decoupled from application logic, no changes are required to the existing application in order to add new sources within existing schemas and data models. This shortens the time needed to add new data sources for improved performance, or to add new use cases that extract further value from the available operational data. A generic sketch of this decoupling follows below.
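
The sketch below illustrates the decoupling idea in generic terms: application logic depends only on a canonical data model, and a new source is added by registering a connector that maps source fields onto that model. The registry, function names, and field names are hypothetical illustrations, not the C3 AI Suite API.

    # Sketch of decoupled integration: the application queries one canonical
    # record shape; new sources register a field mapping onto that shape.
    # All names here are hypothetical.
    CONNECTORS = {}

    def register_connector(source_name, field_map):
        # field_map translates source-specific field names to canonical ones.
        CONNECTORS[source_name] = field_map

    def to_canonical(source_name, record):
        # Apply the registered mapping to produce a canonical record.
        field_map = CONNECTORS[source_name]
        return {canon: record.get(src) for src, canon in field_map.items()}

    # Application logic depends only on the canonical fields...
    def emea_customer_names(records):
        return [r["name"] for r in records if r.get("region") == "EMEA"]

    # ...so adding a new source is a registration, not an application change.
    register_connector("new_billing_system",
                       {"acct": "id", "acct_name": "name", "territory": "region"})
    rows = [to_canonical("new_billing_system",
                         {"acct": "42", "acct_name": "Grace Hopper", "territory": "EMEA"})]
    print(emea_customer_names(rows))  # ['Grace Hopper']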