Data from CAD systems and databases (BIM) are among the most sophisticated and frequently updated data sources in a construction company's business. These applications describe a project not only through geometry but also through multiple layers of textual information: volumes, material properties, room assignments, energy efficiency levels, tolerances, life expectancies and other attributes.
Attributes assigned to entities in CAD models are created at the design stage and become the basis for downstream business processes, including costing, scheduling, life cycle assessment and integration with ERP and CAFM systems, where process efficiency largely depends on the quality of data coming from design departments.
The traditional approach to attribute validation in CAD (BIM) models relies on manual checking (Fig. 7.2-1), which becomes a long and costly process as the volume of models grows. Given the scale and number of modern construction projects and their regular updates, manual data validation and transformation becomes unsustainable and unaffordable.
General contractors and project managers face the need to process large amounts of project data, including multiple versions and fragments of the same models. The data comes from design organizations in RVT, DWG, DGN, IFC, NWD and other formats (Fig. 3.1-14) and requires regular review for compliance with industry and corporate standards.
Dependence on manual actions and specialized software makes data validation a bottleneck in workflows built on company-wide model data. Automation and the use of structured requirements can eliminate this dependency, dramatically increasing the speed and reliability of validation (Fig. 7.3-7).

The CAD data validation process begins with data extraction (the Extract stage of ETL) from closed proprietary formats (RVT, DWG, DGN, NWS, etc.) or open semi-structured and parametric formats (IFC, CPXML, USD). Rule tables can then be applied to each attribute and its values (the Transform stage) using regular expressions (RegEx) (Fig. 7.3-8), a process we discussed in detail in the fourth part of the book.
The process concludes (the Load stage) with the generation of a PDF error report and the output of successfully validated records in structured formats, so that only validated entities are passed on to further processes.
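The three stages above can be sketched in a few lines of Python. This is a minimal illustration, assuming the model attributes have already been exported to a flat table and that the rule table maps each attribute name to a RegEx its values must match; all field names and file names here are illustrative, not taken from a specific tool.

```python
# Sketch of the Extract -> Transform -> Load chain described above.
import re
import pandas as pd

# Extract: in practice this table would come from an RVT/IFC export
records = pd.DataFrame([
    {"ElementID": 101, "FireRating": "R60",  "Material": "Concrete"},
    {"ElementID": 102, "FireRating": "bad!", "Material": "Steel"},
    {"ElementID": 103, "FireRating": "R90",  "Material": ""},
])

# Transform: rule table of RegEx patterns, one per attribute
rules = {
    "FireRating": r"R\d{2,3}",  # e.g. R60, R120
    "Material":   r"\S.*",      # must not be empty
}

def row_is_valid(row):
    # Every attribute value must fully match its pattern
    return all(re.fullmatch(p, str(row[attr])) for attr, p in rules.items())

mask = records.apply(row_is_valid, axis=1)
valid, errors = records[mask], records[~mask]

# Load: only validated entities move on; rejected rows feed the error report
valid.to_csv("validated_records.csv", index=False)
errors.to_csv("error_report.csv", index=False)
print(len(valid), len(errors))  # 1 valid record, 2 rejected
```

In a production pipeline the in-memory DataFrame would be replaced by a real extractor for each source format, and the error report would be rendered to PDF instead of CSV.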

Automating the validation of CAD (BIM) data with structured requirements and streaming new data through ETL pipelines (Fig. 7.3-9) reduces the need for manual involvement in the validation process (both the validation process and the data requirements themselves were discussed in previous chapters).

Traditionally, validation of models provided by contractors and CAD (BIM) specialists can take days to weeks. With the introduction of automated ETL processes, however, this can be reduced to a few minutes. In a typical situation the contractor states: “The model is validated and compliant.” This statement starts a chain of verification of the contractor’s data quality claim:
Project Manager – “The contractor states: ‘The model has been tested, everything is fine.’”
Data Manager – load validation against the rule:
Category: OST_StructuralColumns, Parameter: FireRating IS NULL.
Generate the list of violation IDs → export to Excel/PDF.
A simple script in Pandas detects the violation in seconds:
import pandas as pd

model_data = pd.read_excel("model_export.xlsx")  # hypothetical attribute export from the model
df = model_data[model_data["Category"] == "OST_StructuralColumns"]  # filter structural columns
issues = df[df["FireRating"].isnull()]  # rows with empty FireRating values
issues[["ElementID"]].to_excel("fire_rating_issues.xlsx", index=False)  # export violating IDs
Data Manager to Project Manager – “The check shows that 18 columns do not have the FireRating parameter filled in.”
Project Manager to contractor – “The model is returned for revision: the FireRating parameter is mandatory; without it, acceptance is impossible.”
As a result, the CAD model fails validation, automation eliminates disputes, and the contractor almost instantly receives a structured report listing the IDs of the problematic elements. The validation process thus becomes transparent, repeatable and protected from human error (Fig. 7.3-10).
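The structured report mentioned above is easy to assemble from the raw list of violations. The sketch below is hypothetical: the column names and element IDs are illustrative, and in practice the violations table would be produced by the validation script itself.

```python
# Turning a flat list of violations into a grouped summary report.
import pandas as pd

violations = pd.DataFrame([
    {"ElementID": 2001, "Category": "OST_StructuralColumns", "Parameter": "FireRating"},
    {"ElementID": 2002, "Category": "OST_StructuralColumns", "Parameter": "FireRating"},
    {"ElementID": 3001, "Category": "OST_Walls",             "Parameter": "Material"},
])

# One row per violated rule: how many elements, and which IDs
summary = (violations
           .groupby(["Category", "Parameter"])
           .agg(Count=("ElementID", "size"),
                ElementIDs=("ElementID", list))
           .reset_index())
print(summary)
```

Such a summary, exported to Excel or PDF, gives the contractor both the scale of the problem per rule and the exact element IDs to fix.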
This approach turns data validation into an engineering function rather than a manual quality control process. It not only increases productivity, but also makes it possible to apply the same logic across all of the company’s projects, enabling end-to-end digital transformation of processes, from design to operations.

Through the use of automated pipelines (Fig. 7.3-10), users who expect quality data from CAD (BIM) systems can instantly obtain the output they need – tables, documents, images – and quickly integrate it into their work tasks.
The automation of control, processing and analysis is changing the approach to construction project management, especially the interoperability of different systems, without the use of complex and expensive modular proprietary systems or closed vendor solutions.
While concepts and marketing acronyms come and go, the data requirements validation processes themselves will forever remain an integral part of business processes. Rather than creating more and more specialized formats and standards, the construction industry should look to tools that have already been proven effective in other industries. Today, there are powerful platforms for automating data processing and process integration that allow companies to significantly reduce time for routine operations and minimize errors in Extract, Transform and Load.
One popular example of a solution for automating and orchestrating ETL processes is Apache Airflow, which makes it possible to organize complex computational workflows and manage ETL pipelines. Alongside Airflow, similar solutions such as Apache NiFi for data routing and streaming and n8n for business process automation are also actively used.
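In an orchestrator such as Airflow, each ETL stage becomes a task in a DAG scheduled and retried independently. The plain-Python sketch below only illustrates the stage boundaries such a DAG would formalize; the data and function names are illustrative, not from a real project.

```python
# The three stages an orchestrator like Apache Airflow would schedule as
# separate tasks (extract >> transform >> load); chained directly here.
def extract():
    # In practice: read attribute records from an IFC/RVT export
    return [{"ElementID": 1, "FireRating": "R60"},
            {"ElementID": 2, "FireRating": None}]

def transform(rows):
    # Keep only entities whose mandatory attributes are filled
    return [r for r in rows if r["FireRating"]]

def load(rows):
    # In practice: write validated records to a database or report
    return len(rows)

loaded = load(transform(extract()))
print(loaded)  # number of validated records passed downstream
```

Splitting the pipeline along these boundaries is what lets an orchestrator rerun a failed Transform without re-extracting the source model.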