Key takeaways:
- Most problems arise not from the protocol itself, but from assigning the wrong role to communication within the machine or installation architecture. That decision is best made at the concept stage by defining the data owner, the consequences of communication failure, and the boundaries of system responsibility.
- The choice between MQTT, OPC UA, and PLC communication affects the architecture, implementation costs, supplier responsibilities, and the pace of acceptance.
- It is not about a “better” protocol, but about matching the model to the function: monitoring, integration, control, or system development.
- Direct communication with the PLC speeds up startup, but ties the application to a specific controller, memory layout, and the manufacturer’s implementation.
- MQTT supports lightweight data distribution, while OPC UA provides interoperability and structure, but both require a good data model.
- If communication affects machine motion, sequencing, or interlocks, the choice must be tied to the risk assessment and the consequences of a loss of communication.
The choice between MQTT, OPC UA, and direct PLC communication is no longer a purely technical decision. Today, it affects system architecture, commissioning costs, the allocation of supplier responsibilities, and the pace of acceptance all at once. In practice, the question is not which protocol is “better,” but which data exchange model fits the project’s actual function: whether you need simple signal integration from a single machine, line supervision, data exchange with higher-level systems, or distributed control that will be expanded over the coming years. A mistake at this stage usually does not become visible immediately in the lab. It appears later: during startup, during validation, when the PLC supplier changes, or when the maintenance department tries to trace the cause of a disturbance and discovers that the data is inconsistent, delayed, or stripped of context.
From a project perspective, the most dangerous approach is to adopt a communication model “out of habit.” Direct PLC communication can be tempting because it provides fast access to registers and often shortens the first stage of implementation. The problem is that this choice can easily tie the higher-level application to a specific controller, memory addressing scheme, and the manufacturer’s implementation method. When the software version changes, the hardware is migrated, or the line is expanded, the cost comes back in the form of modifications, regression testing, and disputes over responsibility for process data. MQTT, in turn, works well where lightweight information distribution and separation of senders from receivers matter, but it requires a deliberate definition of data semantics, delivery quality, and broker maintenance rules. OPC UA is often chosen as a compromise between interoperability and information structure, but it does not solve problems by itself either: if the data model is wrong, formally correct communication still leads to poor operational decisions.
A practical decision criterion is simple, although often overlooked: first, determine whether a given exchange concerns information, control, or a function that affects machine safety. If the channel is used only for monitoring, reporting, or transferring recipes in a controlled mode, solutions can be compared in terms of maintenance, scalability, and integration. However, if the same path is also intended to carry commands that affect motion, the operating sequence, interlocks, or the equipment’s ready state, the issue immediately stops being purely an IT matter. At that point, you need to assess not only transmission delay and reliability, but also the predictability of behaviour after loss of communication, after a system restart, after a software version change, and after incorrect interpretation of status by an external system. This is the point at which the issue naturally turns into a practical risk analysis of how communication affects machine safety, informing design decisions, and sometimes also into a question of protection against unexpected start-up.
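The classification described above can be sketched in a few lines. The sketch below (Python, with hypothetical names) shows the kind of rule it implies: a value from an external channel is only actionable while it is fresh, and only monitoring data may still be used after its freshness budget expires.

```python
import time
from dataclasses import dataclass
from enum import Enum

class ChannelRole(Enum):
    INFORMATION = "information"   # monitoring, reporting, dashboards
    CONTROL = "control"           # setpoints, recipes, mode changes
    SAFETY_RELATED = "safety"     # anything touching motion or interlocks

@dataclass
class ChannelValue:
    value: float
    updated_at: float             # epoch seconds of the last confirmed update
    role: ChannelRole

def actionable(v: ChannelValue, now: float, max_age_s: float) -> bool:
    """Information channels may display stale data (flagged elsewhere);
    control and safety-related channels must never act on it."""
    if v.role is ChannelRole.INFORMATION:
        return True
    return (now - v.updated_at) <= max_age_s

# Example: a 2 s old setpoint with a 5 s freshness budget is actionable;
# the same value 30 s old is not.
now = time.time()
fresh_setpoint = ChannelValue(42.0, now - 2.0, ChannelRole.CONTROL)
stale_setpoint = ChannelValue(42.0, now - 30.0, ChannelRole.CONTROL)
print(actionable(fresh_setpoint, now, max_age_s=5.0))  # True
print(actionable(stale_setpoint, now, max_age_s=5.0))  # False
```

The point of the sketch is not the code itself but the forced decision: someone must state the freshness budget and the role of each channel explicitly, which is exactly what a habit-driven integration skips.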
A typical example from manufacturing plants follows a familiar pattern: at first, the goal is only to read data from a machine into a visualisation or reporting system, so the team opts for a quick connection directly to the PLC. A few months later, the same channel starts handling setpoint writes, alarm acknowledgements, and eventually remote service commands as well. Formally, the system still “works,” but the architecture no longer reflects the actual allocation of responsibility. It is no longer clear which layer is the source of truth for the machine state, who is responsible for authorising changes, and how to demonstrate that external communication does not create a path to unintended start-up. At this point, the questions are no longer only about the protocol, but also about the division of functions between the control, supervision, and safety layers and, in scenarios involving direct PLC communication, about the consequences for the electrical layer and machine connections.
The normative and compliance significance of this choice therefore arises when the data exchange model affects machine behaviour in normal and fault conditions. Not every integration immediately falls within the scope of requirements for safety functions, but every integration should be assessed in terms of the consequences of error, loss of communication, and incorrect action sequences. If an external interface can change the machine state, release an interlock, resume a cycle, or bypass logic intended to remain local, then the communication decision must be linked to the risk assessment and, where relevant, also to requirements for preventing unexpected start-up. That is why this issue requires a decision now, at the assumptions and architecture stage, not only during commissioning. This is exactly when measurable criteria can still be defined: who owns the data model, what consequence of communication loss is acceptable, how many integration points will have to be maintained after a PLC change, and how it will be demonstrated that communication does not shift responsibility beyond the system’s planned scope.
Where cost or risk most often increases
Most problems do not stem from choosing between MQTT, OPC UA, and direct PLC communication as such, but from assigning the wrong role to that communication within the machine or plant architecture. A project starts to become expensive when a channel intended for auxiliary data exchange begins carrying operational decisions that affect process continuity, equipment condition, or operator behaviour. In practice, this means the team implements a solution that appears cheaper and faster, then later adds workarounds: extra hardwired signals, local interlocks, exceptions in the controller logic, and separate mechanisms for acknowledgements and state recovery after a loss of communication. These late corrections are what generate cost, delay, and disputes over responsibility between the integrator, the software supplier, and the end user. A practical assessment criterion is simple: determine whether, after communication is lost, the system should merely “stop reporting” or whether it could enter an unsafe state, a technologically incorrect state, or one that is costly from a production standpoint.
In models based on direct PLC communication, the risk usually increases where the interface becomes dependent on a specific manufacturer, software version, and controller memory structure. At commissioning, this often works well, but the cost appears when the controller is changed, the line is upgraded, or another supervisory system is added. Each such change requires data remapping, verification of types, addresses, permissions, and behaviour in the event of a transmission error. From the product owner’s perspective, this matters because maintenance stops being predictable: documentation quickly becomes outdated, knowledge remains with the contractor, and responsibility for data correctness becomes fragmented. If the team cannot identify the owner of the data model and the procedure for changing it after a PLC update, then the cost of future integration is already built into the project, even if it is not yet visible today.
With MQTT and OPC UA, the most common mistake is different: it is assumed that the communication layer will solve the problem of data semantics and decision reliability on its own. MQTT handles events and telemetry well, but without carefully defined topics, quality of service (QoS) levels, retention, and source identification, it is easy to create a situation in which the recipient receives data that is formally correct but useless or too late for the process. OPC UA, in turn, structures the information model and improves interoperability, but its implementation effort is often underestimated if the devices do not have a consistently prepared structure of objects, states, and methods. A practical example arises with recipes, batch confirmation, or remote cycle restart: if it has not been clearly defined which side confirms receipt of the command, which confirms execution, and which only records it in the log, then after the first incident it is difficult to show whether the error originated in the application layer, the communication layer, or the machine logic. A good decision criterion here is not how “modern” the protocol is, but whether the state, command source, validity conditions, and method of restoring operation after a disturbance can be described unambiguously.
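One way to pin down command source, validity conditions, and traceability, as the paragraph recommends, is to give every command an explicit envelope rather than publishing a bare value. The sketch below is illustrative Python with assumed field names, independent of any particular broker or protocol library.

```python
import json
import time
import uuid

def make_command(source: str, name: str, value, ttl_s: float) -> str:
    """Wrap a command in an envelope that carries its own context: who
    issued it, when, a unique id for acknowledgement, and how long it
    remains valid. All field names here are illustrative."""
    now = time.time()
    return json.dumps({
        "id": str(uuid.uuid4()),
        "source": source,
        "name": name,
        "value": value,
        "issued_at": now,
        "valid_until": now + ttl_s,
    })

REQUIRED_FIELDS = {"id", "source", "name", "value", "issued_at", "valid_until"}

def accept_command(payload: str, now: float) -> bool:
    """Receiver-side gate: reject payloads that cannot be traced or that
    arrived after their validity window expired."""
    try:
        cmd = json.loads(payload)
    except ValueError:
        return False
    if not isinstance(cmd, dict) or not REQUIRED_FIELDS <= cmd.keys():
        return False
    return now <= cmd["valid_until"]

payload = make_command("mes/line1", "setpoint_flow", 12.5, ttl_s=5.0)
print(accept_command(payload, time.time()))          # True: inside its window
print(accept_command(payload, time.time() + 60.0))   # False: expired command
```

With an envelope like this, a value that a retained MQTT message or a reconnecting client replays late is rejected rather than silently acted upon, which addresses the "formally correct but too late" failure described above.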
A separate source of cost is mixing operational requirements with safety and compliance requirements. If MQTT, OPC UA, or direct access to the PLC can influence machine motion, release interlocks, the start-up sequence, or parameters with a protective function, then the issue is no longer purely an IT matter. At this point, the topic naturally moves into a practical risk analysis of how communication affects machine safety, which then drives design decisions: what must be assessed is not the protocol itself, but the consequences of an incorrect command, stale data, unauthorised setpoint changes, and inconsistencies between the local and external state. In actuator systems, including hydraulic ones, the communication decision can affect how stop, unloading, motion blocking, and safe return to operation functions are implemented, so it may be linked to the design requirements applied during conformity assessment. If the external interface begins to affect protective functions or behaviours that the operator perceives as part of the safeguarding, it must be treated as part of the safety architecture, not as a convenient integration add-on.
From a project management perspective, the safest decision is one that can be defended not only technically, but organisationally as well. That is why, before selecting a data exchange model, it is worth defining several measurable criteria: the time needed to restore correct operation after a loss of communication, the number of places where data mapping must be maintained, the method of versioning the information model, the scope of regression testing after a PLC change, and evidence that external communication does not bypass local protective mechanisms. When the answers to these questions are unclear, the project has usually already entered an area in which the communication decision itself should be covered by a more formal risk assessment and, in some applications, also coordinated with communication decisions made at the machine architecture stage. This is the point at which the choice between MQTT, OPC UA, and direct communication stops being a matter of technical preference and becomes a decision about maintenance cost, the boundary of responsibility, and the resilience of the entire solution to error.
How to approach the issue in practice
In practice, the choice between MQTT, OPC UA, and direct PLC communication should start not with the technology itself, but with a simple question: what operational outcome is the data exchange meant to deliver, and who is accountable for the result. If the data is used only for monitoring, reporting, or feeding higher-level systems, the priority will be an integration that is resilient to change and based on a clear information model. If, however, the other side starts issuing commands that affect the cycle, recipes, operating states, or start conditions, the decision is no longer purely an IT matter. At that point, the communication method directly affects the boundary of responsibility between the integrator, the machine manufacturer, maintenance, and the process owner. This has direct consequences for the project: a different scope of acceptance testing, different change documentation, a different scale of regression after modifying the controller program, and a different post-implementation maintenance cost.
A good decision criterion is where the single source of truth should sit for machine status and the logic that permits operation. Direct PLC communication can be justified where a simple execution path, a small number of intermediaries, and fully predictable controller-side behaviour matter most. The trade-off is usually a strong dependency on a specific PLC program, its data addressing, and one supplier’s practices. OPC UA is a sensible choice when the project needs a more stable data model, better separation between the application layer and the controller program, and clearer signal semantics. MQTT works best when data needs to be distributed to multiple recipients beyond a single system-to-controller relationship, and when an indirect communication model is acceptable. However, this is not a neutral choice: the more intermediate layers, brokers, translators, and mappings involved, the larger the error surface becomes, and the harder it is to demonstrate that a change on the integration side does not violate the assumptions adopted for local control. In practice, this should be considered as part of the broader communication architecture in industrial automation.
A typical design mistake is choosing the model that is most convenient for integration during commissioning, only to discover the maintenance cost later. A practical example: a higher-level system is expected to write recipes and switch operating modes for several stations. If this is done by writing directly to PLC memory areas, the solution will be quick at first, but every change to the data structure in the controller will trigger testing on both sides of the interface, and responsibility for recipe consistency will start to blur. If the same case is based on clearly defined objects and states on the OPC UA side, it becomes easier to separate a change in the machine program from a change in the higher-level system, but the information model and its versioning must then be maintained. Using MQTT for this scenario only makes sense if data distribution to multiple systems is genuinely needed and if control over delays, delivery confirmation, and state recovery after loss of connection has been defined and verified in tests. Otherwise, the team is buying flexibility it will never use and is left with additional points of failure.
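The receipt/execution/logging distinction from the recipe example can be made explicit with a small state machine, so that "sent" is never silently treated as "executed" on either side of the interface. A minimal sketch in Python, with hypothetical state names:

```python
from enum import Enum, auto

class CommandState(Enum):
    SENT = auto()       # higher-level system issued the recipe write
    RECEIVED = auto()   # machine side confirmed receipt of the command
    EXECUTED = auto()   # machine side confirmed the recipe is active
    FAILED = auto()

# Legal transitions: receipt must be confirmed before execution, and
# terminal states cannot be left. This is the contract both suppliers
# would agree on, not an implementation of any specific protocol.
VALID_TRANSITIONS = {
    CommandState.SENT: {CommandState.RECEIVED, CommandState.FAILED},
    CommandState.RECEIVED: {CommandState.EXECUTED, CommandState.FAILED},
    CommandState.EXECUTED: set(),
    CommandState.FAILED: set(),
}

class RecipeWrite:
    """Tracks one recipe write across the interface boundary."""
    def __init__(self, recipe_id: str):
        self.recipe_id = recipe_id
        self.state = CommandState.SENT

    def advance(self, new_state: CommandState) -> None:
        if new_state not in VALID_TRANSITIONS[self.state]:
            raise ValueError(
                f"illegal transition {self.state.name} -> {new_state.name}")
        self.state = new_state

w = RecipeWrite("recipe-42")
w.advance(CommandState.RECEIVED)
w.advance(CommandState.EXECUTED)
print(w.state.name)  # EXECUTED
```

After an incident, a log of these transitions shows exactly where a command stalled, which is precisely the evidence the paragraph says is otherwise hard to produce.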
This is also the point where the topic naturally shifts into a practical risk analysis of design decisions. If the communication channel can change the machine state, release a sequence, resume operation after loss of connection, or indirectly affect actuators, it is necessary to assess not only transmission reliability, but also the consequences of an incorrect or delayed command. In some applications, this already overlaps with requirements for protection against unexpected start-up, because even a technically correct integration must not create a path that bypasses local lockout measures or energy isolation procedures. Within that scope, the communication choice should be coordinated with the control architecture, the electrical layer, and the rules for software changes, rather than made as a standalone integration decision. From a manager’s perspective, this leads to a simple rule: the data exchange model is appropriate only if it is possible to show who approves the change, how the safe state is restored after a failure, and which KPIs will be measured after implementation, for example time to restore operation, the number of incidents after changes, and the number of locations where data mapping is maintained.
What to watch out for during implementation
During implementation, the biggest risk does not come from the choice between MQTT, OPC UA, and direct PLC communication itself, but from the hidden assumptions the team adopts without formal confirmation. In project practice, the most expensive situations are those in which the data exchange model is selected for a functional demonstration rather than for the actual operating mode, maintenance model, and allocation of change responsibility. MQTT is sometimes implemented on the assumption that it will only provide simple data transfer to higher-level systems, only to start carrying operational commands a few months later. OPC UA is chosen as a “universal” solution, but without defining which services, data models, and authorisation mechanisms will actually be used. Direct PLC communication seems like the shortest path until it turns out that every additional data consumer requires separate mapping, regression testing, and coordination with the controller supplier. For a manager, the consequence is straightforward: the cost of implementation does not end when the connection is brought online, but extends across the entire cycle of changes, failures, and technical acceptance.
The key decision question should therefore not be “what can we get running fastest,” but “where does responsibility for the meaning of the data and the consequences of its use end.” If communication is used solely to monitor the process, the assessment criteria will be different than when the same path is intended to affect recipes, operating parameters, interlocks, or control sequences. At this point, the topic naturally moves into a practical risk analysis of communication that affects machine safety, which then informs design decisions: you need to assess not only the likelihood of losing connectivity, but also whether an incorrect value, a delayed update, or ambiguous variable mapping could cause improper operation of the machine or line. If the answer is yes, the communication architecture is no longer just an integration issue. It becomes an element that affects the control function, system acceptance, and the integrator’s responsibility when connecting subsystems. This often needs to be aligned with the division of functions between the control, supervision, and process-line layers.
In practice, this is easy to see in a simple scenario: the supervisory system is supposed to read statuses from several controllers, and once the project is up and running, the user also asks for remote setpoint changes. With direct communication to the PLC, this often ends with additional registers, exceptions, and workarounds tied to a specific manufacturer. In MQTT, the problem is often a loss of unambiguity: the message arrives, but without a well-defined context the recipient does not know whether the value is current, confirmed, or which operating mode it comes from. In OPC UA, the trap is not the protocol itself, but the overly optimistic assumption that the information model on the device side matches what the supervisory application requires. That is why the practical assessment criteria should cover three things: who owns the data semantics, how the validity and freshness of values are confirmed, and what the change procedure looks like after commissioning. If the team answers any of these questions only in general terms or in a way that depends on the vendor, it means the cost of future modifications has just been pushed into the maintenance stage.
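The three criteria above (ownership of semantics, confirmed freshness, a change procedure) can be turned into something checkable by recording them per signal and comparing both sides of the interface before trusting values. A hedged sketch in Python, with illustrative field names:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TagDefinition:
    """One row of a signal list that makes the three questions explicit:
    who owns the meaning, what freshness budget applies, and which
    interface version the mapping belongs to. Names are illustrative."""
    name: str
    unit: str
    owner: str              # party accountable for the semantics
    max_age_s: float        # agreed freshness budget for the value
    interface_version: str  # bumped whenever the mapping changes

def compatible(local: TagDefinition, remote: TagDefinition) -> bool:
    """Both sides must agree on name, unit, and interface version before
    values are trusted; a silent mismatch is the failure mode that
    otherwise surfaces months later in maintenance."""
    return (local.name == remote.name
            and local.unit == remote.unit
            and local.interface_version == remote.interface_version)

plc_side = TagDefinition("line1.flow", "l/min", "machine OEM", 2.0, "v3")
scada_side = TagDefinition("line1.flow", "l/min", "machine OEM", 2.0, "v2")
print(compatible(plc_side, scada_side))  # False: mapping versions diverged
```

Running a comparison like this at startup turns "the vendor will tell us" into an automatic refusal to exchange data until both sides have adopted the same interface version.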
A separate trap is underestimating the physical and installation impact. In projects where the choice of data exchange model affects the location of intermediary devices, additional power supply, cable routing, or network segmentation, the issue starts to overlap with the design of the electrical layer and machine connections. This applies in particular to solutions with additional communication gateways, industrial PCs, or switches that look harmless in the documentation, but in the control cabinet mean space, cooling, protection, service, and additional points of failure. In that case, the communication decision cannot be separated from the detailed design. The team should be able to indicate what happens when the intermediary device loses power, how the communication state will be restored, and whether a failure in the transmission layer could create an ambiguous picture of the machine status for the operator or the supervisory system.
A reference to compliance requirements appears only when the data exchange channel affects the control function, the way the machine is used, or the boundaries of responsibility between suppliers. Within that scope, it is not enough to state that a protocol is “industrial” or widely used. It must be demonstrated that the adopted architecture has been assessed in the context of foreseeable fault conditions, operational changes, and interfaces between subsystems, which in practice leads to a methodical risk assessment consistent with the adopted project scope. If the system is assembled from ready-made modules, controllers, and communication layers from different parties, the formal assignment of the integrator’s responsibility also becomes more important. This is usually the point at which it is worth pausing the project and reviewing not only the data exchange scheme, but also the limits of modifications after acceptance, the rules for validating changes, and maintenance KPIs: communication recovery time, the number of incidents after updates, and the number of interfaces requiring manual mapping. In such cases, the choice should also fit the wider IT/OT and safety-by-design approach for industrial software.
MQTT, OPC UA, or direct PLC communication—how do you choose the right data exchange model for an industrial project?
Is one of the three options simply better than the others?
No. The article indicates that the choice should match the project’s function: simple signal reading serves one set of needs, while line monitoring, integration with higher-level systems, or distributed control serve others.
When does direct PLC communication become a problem?
This happens when direct register access starts tying the application to a specific controller, memory addressing scheme, and the manufacturer’s implementation. The problem usually comes back when the program is changed, the hardware is migrated, or the line is expanded.
What do MQTT and OPC UA each do well?
MQTT is well suited to lightweight information distribution and decoupling senders from receivers, but it requires deliberate definition of data semantics and clear broker maintenance rules. OPC UA can be a compromise between interoperability and information structure, but it will not fix a poorly designed data model.
When must the communication choice be tied to a risk assessment?
This applies when the same channel carries commands that affect machine motion, the work sequence, interlocks, or the machine’s ready state. In such a case, the response to loss of communication, restart, and incorrect state interpretation by the external system must also be assessed.
Why should the decision be made at the concept stage?
That is when you can still define communication roles, the data model owner, and the acceptable consequences of a loss of connectivity. The article emphasizes that late-stage changes usually increase costs, delays, and disputes over liability.