Data in information systems are organized in different ways, depending on the tasks and requirements for storing, processing and transmitting information. The key difference between data models, that is, the forms in which information is stored, lies in the degree of structuring and in the way relationships between elements are described.
Structured data has a clear, repeatable schema: it is organized as tables with fixed columns. This format provides predictability, ease of processing and efficiency when performing SQL queries, filtering and aggregation. Typical examples are relational databases (RDBMS), Excel spreadsheets and CSV files.
Semi-structured (loosely structured) data allows a flexible structure: different elements can contain different sets of attributes and can be stored as hierarchies. Examples are JSON, XML and other document formats. Such data is convenient when nested objects and the relations between them need to be modeled, but this flexibility complicates analysis and standardization (Fig. 3.2-6).

The choice of the appropriate format depends on the objectives:
- If speed of filtering and analytics is important, relational tables (SQL databases, CSV files, RDBMS, columnar databases) are a good fit.
- If flexibility of structure is required, JSON or XML is the better choice.
- If the data has complex relationships, graph databases provide clarity and scalability.
In classical relational databases (RDBMS), each entity (e.g., a door) is represented by a row and its properties by table columns. For example, a table of items from the category “Doors” may contain the fields ID, Height, Width, Fire Resistance and Room ID, the last of which indicates the room the door belongs to (Fig. 3.2-7). In this tabular format, each row is a separate element (a door with its unique identifier and attributes), and the relationship with the room is expressed through the “Room ID” field.
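A minimal sketch of such a table, built here with Python’s standard sqlite3 module. The columns follow the fields named above (adapted to SQL-style names), while the sample values for dimensions, fire ratings and room numbers are illustrative assumptions, not data from the figures.

```python
import sqlite3

# In-memory database with a "doors" table mirroring the fields from the text:
# ID, Height, Width, Fire Resistance, Room ID (sample values are hypothetical).
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE doors (
        id INTEGER PRIMARY KEY,   -- unique door identifier
        height_mm INTEGER,        -- Height
        width_mm INTEGER,         -- Width
        fire_resistance TEXT,     -- Fire Resistance rating
        room_id TEXT              -- reference to the room the door belongs to
    )
""")
conn.executemany(
    "INSERT INTO doors VALUES (?, ?, ?, ?, ?)",
    [
        (1001, 2100, 900, "EI30", "Room 101"),
        (1002, 2100, 800, "EI30", "Room 101"),
        (1003, 2000, 900, "EI60", "Room 102"),
    ],
)

# Fixed columns make filtering and aggregation straightforward:
for row in conn.execute("SELECT room_id, COUNT(*) FROM doors GROUP BY room_id"):
    print(row)  # e.g. ('Room 101', 2), ('Room 102', 1)
```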

In semi-structured formats such as JSON or XML, data is stored in a hierarchical or nested form: elements may contain other objects, and their structure may vary. This makes it possible to model complex relationships between elements. The same door information recorded in structured form (Fig. 3.2-7) can be represented in a semi-structured format (JSON) so that the doors become objects nested within their rooms (Fig. 3.2-8), which logically reflects the hierarchy.
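A sketch of how the same doors might look as a nested JSON document, with doors grouped under their rooms. The key names and values are illustrative assumptions rather than the exact schema shown in the figures.

```python
import json

# Doors nested inside their rooms: each room owns a list of door objects,
# so the separate "Room ID" column disappears and the hierarchy itself
# carries the door-to-room relationship.
rooms = {
    "Rooms": [
        {
            "ID": "Room 101",
            "Doors": [
                {"ID": 1001, "Height": 2100, "Width": 900, "FireResistance": "EI30"},
                {"ID": 1002, "Height": 2100, "Width": 800, "FireResistance": "EI30"},
            ],
        },
        {
            "ID": "Room 102",
            "Doors": [
                {"ID": 1003, "Height": 2000, "Width": 900, "FireResistance": "EI60"},
            ],
        },
    ]
}

print(json.dumps(rooms, indent=2))
```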

In a graph model, data is represented as nodes (vertices) and links (edges) between them. This makes it possible to visualize complex relationships between objects and their attributes. For the door and room data in the project, the graph representation is as follows:
- Nodes (vertices) represent the main entities: rooms (Room 101, Room 102) and doors (ID1001, ID1002, ID1003)
- Edges (links) show the relationships between these entities, e.g., the fact that a door belongs to a certain room
- Attributes are attached to nodes and hold entity properties (height, width, fire resistance for doors)

In the graph data model of the door description, each room and each door is a separate node. Doors are linked to rooms through edges indicating which room a door belongs to. The attributes of the doors (height, width, fire resistance) are stored as properties of the corresponding nodes. Graph formats, and how graph semantics appeared in the construction industry, are discussed in more detail in the chapter “The emergence of semantics and ontology in construction”.
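A minimal sketch of this graph using the networkx library (assuming it is available). The node names mirror the example above; the attribute values and the “BELONGS_TO” relation label are illustrative assumptions.

```python
import networkx as nx

G = nx.DiGraph()

# Room and door nodes; door attributes are stored as node properties.
G.add_node("Room 101", type="Room")
G.add_node("Room 102", type="Room")
G.add_node("ID1001", type="Door", height=2100, width=900, fire_resistance="EI30")
G.add_node("ID1002", type="Door", height=2100, width=800, fire_resistance="EI30")
G.add_node("ID1003", type="Door", height=2000, width=900, fire_resistance="EI60")

# Edges express the "door belongs to room" relationship.
G.add_edge("ID1001", "Room 101", relation="BELONGS_TO")
G.add_edge("ID1002", "Room 101", relation="BELONGS_TO")
G.add_edge("ID1003", "Room 102", relation="BELONGS_TO")

# Traversing relationships: which doors belong to Room 101?
doors_101 = [door for door, room in G.edges() if room == "Room 101"]
print(doors_101)                    # ['ID1001', 'ID1002']
print(G.nodes["ID1001"]["height"])  # 2100
```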
Graph databases are effective in cases where it is not so much the data itself that matters as the relationships between the data, for example in recommender systems, routing, or when modeling complex relationships in facility management projects. The graph format simplifies the creation of new relationships: new data and new relation types can be added to the graph without changing the storage structure. However, compared to relational tables and structured formats, a graph does not in itself add connectivity: transferring data from a two-dimensional database into a graph does not increase the number of relationships and does not yield new information.
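Continuing the graph sketch above (the same G object), a new kind of relationship can be introduced on the fly; the “ADJACENT_TO” relation is a hypothetical example, not part of the original model.

```python
# A new relation type is just another edge with a different label; no schema
# migration is needed (in an RDBMS this would typically require a new table
# or column).
G.add_edge("Room 101", "Room 102", relation="ADJACENT_TO")
```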
The form and schema of the data should be tailored to the specific use case and the tasks to be solved. To work effectively in business processes, it is important to use the tools and data models that help deliver results as quickly and easily as possible.

Today, most large companies face the problem of excessive data complexity. Each of hundreds or thousands of applications uses its own data model: an individual model is often dozens of times more complex than necessary, and the aggregate of all models is thousands of times more complex. This excess complexity significantly hampers the work of both developers and end users.
Such complexity imposes serious limitations on the development and maintenance of the company’s systems. Each new element in the model requires additional code, implementation of new logic, thorough testing and adaptation to existing solutions. All this increases costs and slows down the work of the automation team in the company, turning even simple tasks into costly and time-consuming processes.
Complexity affects all levels of data architecture. In relational databases, it is expressed in the growing number of tables and columns, often redundant. In object-oriented systems, complexity is increased by the multiplicity of classes and interrelated properties. In formats like XML or JSON, complexity is manifested through confusing nested structures, unique keys, and inconsistent schemas.
The excessive complexity of data models makes systems not only less efficient but also harder to understand for end users and, in the future, for large language models and LLM agents. It is precisely this problem of understanding, and of the complexity of data models and data processing, that raises the question: how can data be made simple enough to use that it actually becomes useful quickly?
Even when data models are chosen wisely, their utility is dramatically reduced if access to the data is limited. Proprietary formats and closed platforms hinder integration, complicate automation, and take away companies’ control over their own information, creating not just a silo of new data, but a locked silo that can only be accessed with the permission of the vendor. To understand the scale of the problem, it’s important to consider exactly how closed systems affect digital processes in construction.