As the number of digital systems within companies grows, so does the need for data consistency between them. Managers responsible for different IT systems often find themselves unable to keep up with the increasing volume of information and the variety of formats. In such circumstances, they are forced to ask specialists to create data in a form suitable for use in other applications and platforms.
This, in turn, requires engineers and the staff who produce the data to adapt to a multitude of requirements, often without visibility into where and how the data will later be used. The lack of standardized approaches to handling information leads to inefficiencies and increased costs during the verification phase, which is often performed manually because of the complexity and non-standardized nature of the data.
The issue of data standardization is not just a matter of convenience or automation. It is a direct financial loss. According to a 2016 IBM report, the annual loss from poor data quality in the US is $3.1 trillion (Harvard Business Review, “Bad Data Costs the U.S. $3 Trillion Per Year,” September 22, 2016). Additionally, studies by MIT and analytical consulting firms show that the cost of poor data quality can reach 15-25% of a company’s revenue (Delpha, “Impacts of Data Quality,” 1 Jan. 2025).
Under these conditions, it becomes critical to have clearly defined data requirements: descriptions of which parameters, in what format, and at what level of detail must be included in the created objects. Without formalizing these requirements, it is impossible to guarantee the quality and compatibility of data between systems and project stages (Fig. 4.2-4).
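Such a formalized requirement can be made machine-readable, so that conformance is checked automatically rather than manually. The following is a minimal sketch, assuming a hypothetical "Wall" element; the attribute names, types, and the fixed unit are illustrative, not taken from any standard or from the source text.

```python
# A hypothetical formal requirement for a "Wall" element: which
# attributes must be present, and in what format (type and unit).
WALL_REQUIREMENT = {
    "material":     {"type": str,   "required": True},   # e.g. "concrete"
    "thickness_mm": {"type": float, "required": True},   # unit fixed up front
    "fire_rating":  {"type": str,   "required": False},  # optional detail
}

def check_element(element: dict, requirement: dict) -> list[str]:
    """Return a list of violations; an empty list means the element conforms."""
    errors = []
    for attr, rule in requirement.items():
        if attr not in element:
            if rule["required"]:
                errors.append(f"missing required attribute: {attr}")
            continue
        if not isinstance(element[attr], rule["type"]):
            errors.append(f"wrong type for {attr}: expected {rule['type'].__name__}")
    return errors

# A thickness given as a string instead of a number is flagged immediately:
print(check_element({"material": "concrete", "thickness_mm": "200"}, WALL_REQUIREMENT))
```

With a requirement in this form, verification becomes a repeatable automated step instead of the manual inspection described above.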

In order to formulate correct data requirements, you need to understand the business processes at the data level. Construction projects vary in type, scope, and number of participants, and each system – be it modeling (CAD/BIM), scheduling (ERP 4D), costing (ERP 5D), or logistics (SCM) – requires its own set of parameters for its inputs (input entity-elements).
Depending on these needs, business managers must either design new data structures to meet the requirements or adapt existing tables and databases. The quality of the data created will directly depend on how precisely and correctly the requirements are formulated (Fig. 4.2-5).

Since each system has its own specific data requirements, the first step in formulating general requirements should be to categorize all elements involved in business processes. In practice, this means sorting objects into classes and groups of classes that correspond to specific systems or application tasks. For each such group, separate requirements for data structure, attributes, and quality are developed.
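This grouping can be sketched as a simple lookup: each consuming system declares which element classes it cares about and which attributes those classes must carry. The system names (CAD/BIM, ERP 4D, ERP 5D, SCM) follow the text; the class and attribute names below are hypothetical examples, not a real classification.

```python
# Per-system requirements: which attributes each element class must
# carry for each consuming system. All names are illustrative.
REQUIREMENTS_BY_SYSTEM = {
    "CAD_BIM": {"Wall": ["geometry", "material"],
                "Door": ["geometry", "fire_rating"]},
    "ERP_4D":  {"Wall": ["task_id", "duration_days"]},
    "ERP_5D":  {"Wall": ["cost_code", "unit_price"]},
    "SCM":     {"Door": ["supplier", "delivery_date"]},
}

def required_attributes(element_class: str) -> dict[str, list[str]]:
    """Collect, per system, the attributes an element class must carry."""
    return {system: classes[element_class]
            for system, classes in REQUIREMENTS_BY_SYSTEM.items()
            if element_class in classes}

# A single "Wall" element must simultaneously satisfy the modeling,
# scheduling, and costing systems:
print(required_attributes("Wall"))
```

The point of the sketch is that one physical element accumulates requirements from several systems at once, which is why the grouping must be done before requirements are written, not after.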
In practice, however, this approach faces a major challenge: the lack of a common language for grouping data. Disparate classifications, duplicate identifiers, and incompatible formats mean that each company, each software product, and even each project forms its own isolated data models and classes. The result is a digital “Tower of Babel” in which transferring information between systems requires multiple conversions to the right data models and classes, often performed manually. This barrier can only be overcome by moving to universal classifiers and standardized sets of requirements.
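The role of a universal classifier can be illustrated with a mapping table: each source system keeps its own codes, and a single shared classifier serves as the common target, so any pair of systems needs only one conversion each instead of a bespoke conversion per pair. All source names and codes below are invented for illustration.

```python
# Hypothetical mapping from source-specific class codes to a shared
# (universal) classifier. In reality this table would come from a
# published classification standard, not be hand-written per project.
TO_UNIVERSAL = {
    ("vendor_a", "WAL-01"): "U.Wall.Exterior",
    ("vendor_b", "22.11"):  "U.Wall.Exterior",
    ("vendor_b", "22.12"):  "U.Wall.Interior",
}

def to_universal(source: str, code: str) -> str:
    """Translate a source-specific class code into the universal classifier."""
    try:
        return TO_UNIVERSAL[(source, code)]
    except KeyError:
        raise ValueError(f"no mapping for {code!r} from {source!r}")

# Two different vendor codes resolve to the same universal class,
# so the receiving system needs to understand only one vocabulary:
assert to_universal("vendor_a", "WAL-01") == to_universal("vendor_b", "22.11")
```

With N systems, pairwise conversion requires on the order of N² mappings, while a shared classifier requires only N, which is the practical argument for universal classifiers made above.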