Key takeaways:
Delays and disputes mainly stem from unclear boundaries of responsibility between the integrator, the software house, and the maintenance department. Agreeing early on the architecture, testing, change management, and system handover reduces technical, budget, and compliance risk.
- The collaboration model must be agreed when defining the scope, not only after conflicts arise.
- Risk increases most at the interface between automation, the application, and operation when there is no single decision owner.
- Early involvement of maintenance reveals service and diagnostic requirements, as well as post-failure recovery procedures.
- Costs rise because key decisions are postponed: the communication architecture, the boundaries of control logic, post-change testing, and system takeover.
- For critical functions, it is advisable to assign separate responsibility for specifying the requirement, implementation, and acceptance.
Why this matters today
Collaboration between the integrator, the software house, and the maintenance department is no longer just a matter of organizational convenience. In practice, it now determines whether a project can be commissioned without scope disputes, whether a software change will delay technical acceptance, and whether the plant will be able to maintain the solution safely after deployment. The more process logic moves into the software layer, and the less remains in the standard functions of controllers and devices, the more important responsibility boundaries become. If they are not defined at the outset, project costs usually do not rise linearly. Instead, they grow through rework carried out in the wrong place: the integrator revises interfaces, the software house rebuilds business logic, and the maintenance department reveals, only at the end, operating requirements that no one documented earlier.
This is also a budget issue, not just a technical one. In many projects, the question of cooperation between the parties quickly turns into the question of what dedicated industrial software actually represents in the investment budget: a capital item, a maintenance cost, or a combination of both. If the solution architecture assumes that key process, reporting, recipe management, batch tracking, or plant-system integration functions will be developed outside the standard automation scope, this needs to be identified before the order is placed, not after the first prototype. The practical test is simple: if the lack of a single decision owner for the boundary between automation, the application, and operations means requirements, tests, and change costs cannot be assigned clearly, then the project has already entered a higher-risk zone and the cooperation model needs to be corrected.
This is easiest to see in a line modernization project where the integrator is responsible for control and commissioning, the software house is responsible for the application layer and data exchange, and the maintenance department is then expected to take over the system for continuous operation. If the maintenance team is only brought in at the acceptance stage, the issues that usually emerge are not “defects” but decision gaps: no recovery procedure after a failure, no requirements for service accounts, no agreed update windows, an unforeseen dependency on an external supplier, or insufficient error visibility. At that point, the dispute is no longer about code quality or whether the control cabinet is correct, but about who should bear the cost of adapting the system to the plant’s actual operating conditions. This is where the issue naturally connects with hidden project and compliance costs, because delayed acceptance or late changes to safety functions, technical documentation, or validation are often the result of poorly organized cooperation rather than a single execution error.
The compliance aspect arises when the division of work affects product characteristics, safety-related functions, documentation, or how the solution is put into use. Not every application-to-machine integration triggers the same obligations, but even uncertainty about who is responsible for the functional description, change management, and complete documentation is already a warning sign. This applies in particular to projects carried out within the company’s own plant, phased modernization projects, and solutions built for in-house use, where the boundary between “maintenance work” and manufacturing or a substantial modification can be legally significant. That is why the decision on the cooperation model should be made not when the first conflict appears, but when the scope is being defined: who describes the operational requirements, who approves the architecture, who is responsible for cross-layer testing, and who takes over the system after commissioning together with the actual ability to maintain it.
Where cost or risk most often increases
In projects run jointly by the integrator, the software house, and the maintenance department, costs rarely increase because of one major mistake. They usually build up at the interfaces between responsibilities, in other words, where no one has full ownership of carrying the matter through to completion. The most expensive issues are not technical errors in themselves, but decisions that are postponed or made without a clear owner: no agreed communication architecture, undefined boundaries between control logic and the application layer, no agreed method for post-change testing, and handover of the system into operation without a real ability to maintain it. In practice, this means rework after commissioning, disputes over contractual scope, and responsibility for downtime being pushed into the phase where every change is most expensive. A simple way to assess the situation is to ask whether, for every critical function, one party can be identified as responsible for the requirement, one for execution, and one for acceptance. If the answer is “it depends,” the project is already carrying organizational risk.
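The ownership question at the end of this paragraph can be expressed as a simple check. The sketch below is illustrative only: the function names, party names, and record shape are assumptions, not a standard. The point is that "it depends" shows up in data as an unassigned or shared owner.

```python
# Minimal sketch of the ownership check: for every critical function,
# exactly one party must own the requirement, the execution, and the
# acceptance. All names below are illustrative.

ROLES = ("requirement", "execution", "acceptance")

def unowned(functions):
    """Return (function, role) pairs where no single owner is assigned."""
    gaps = []
    for name, owners in functions.items():
        for role in ROLES:
            party = owners.get(role)
            # "It depends" in practice means no owner or several owners.
            if party is None or isinstance(party, (list, tuple, set)):
                gaps.append((name, role))
    return gaps

critical_functions = {
    "recipe_download": {"requirement": "maintenance",
                        "execution": "software_house",
                        "acceptance": "integrator"},
    "line_stop_logic": {"requirement": "maintenance",
                        "execution": ["integrator", "software_house"],  # shared: a gap
                        "acceptance": None},                            # unassigned: a gap
}

print(unowned(critical_functions))
# [('line_stop_logic', 'execution'), ('line_stop_logic', 'acceptance')]
```

A non-empty result is exactly the organizational risk described above, made visible before commissioning rather than during it.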
A second area of loss arises when design decisions are made without maintenance involvement or, conversely, when maintenance imposes service-friendly solutions that do not fit the system architecture. The integrator usually looks at the project through the lens of commissioning and device interoperability, the software house through business logic and interfaces, and maintenance through availability, diagnostics, and time to restore operation. If these perspectives do not come together when requirements are defined, they return later as change costs: additional signals, reworked permissions, missing event logging for diagnostics, no safe way to perform updates, or no workaround procedure for a failure. This is the point where the discussion naturally shifts to the role of the engineering project manager, because the issue is no longer a single technical decision but the management of dependencies, agreement deadlines, and responsibility for escalation.
A typical practical example is an implementation in which the supervisory application is expected to manage orders, recipes, and reporting, while the integrator is responsible for the controller, drives, and machine sequence. If the boundary of responsibility is described only in functional terms, without defining intermediate states, error conditions, and behavior after loss of communication, each party will build its own version of “safe” assumptions. The software house will assume that no acknowledgment means the command should be resent, the integrator will assume the command is one-shot, and maintenance will end up with a system that cannot be diagnosed during downtime. The outcome is predictable: a long commissioning phase, ambiguous faults, interface rework, and tension around the question of who is responsible for an unplanned stop. When assessing such a situation, it is worth measuring not only the delivery date but also the number of interface changes after design approval, the number of defects detected only on site, and the time needed to reconstruct the cause of a failure. If these indicators are rising despite visible progress, the problem usually lies in how the cooperation is organized, not in the performance of any single supplier.
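The acknowledgment mismatch described above can be made concrete with a short sketch. This is not any party's actual protocol; the sequence-number scheme and all names are illustrative assumptions. It shows why "no acknowledgment means resend" and "the command is one-shot" cannot both be left implicit.

```python
# Sketch of the interface mismatch: the sender resends commands that
# were not acknowledged, while a naive receiver executes every message
# it gets. Tagging each command with a sequence number makes the
# resend safe. All names are illustrative.

class NaiveReceiver:
    """Executes every command: a resend causes a duplicate action."""
    def __init__(self):
        self.executed = []
    def handle(self, cmd):
        self.executed.append(cmd["action"])

class IdempotentReceiver:
    """Ignores commands whose sequence number was already processed."""
    def __init__(self):
        self.executed = []
        self.seen = set()
    def handle(self, cmd):
        if cmd["seq"] in self.seen:
            return  # duplicate resend: do nothing
        self.seen.add(cmd["seq"])
        self.executed.append(cmd["action"])

def send_with_retry(receiver, action, seq, retries=1):
    """The sender's view: no ack arrived, so the command is sent again."""
    for _ in range(1 + retries):
        receiver.handle({"action": action, "seq": seq})

naive, safe = NaiveReceiver(), IdempotentReceiver()
send_with_retry(naive, "start_batch", seq=1)
send_with_retry(safe, "start_batch", seq=1)
print(naive.executed)  # ['start_batch', 'start_batch'] — batch started twice
print(safe.executed)   # ['start_batch'] — resend was filtered out
```

Which side carries the deduplication, and what the acknowledgment actually confirms, is precisely the kind of decision that belongs in the agreed interface description rather than in each party's assumptions.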
A separate source of risk is treating testing and documentation as a by-product of commissioning. Wherever the system affects machine operation, operator access, diagnostics, process parameters, or safety-related functions, a late change is not just a routine programming fix. It may require a renewed review of the design assumptions, an update to the technical documentation, repetition of part of the testing, and, in certain factual circumstances, a fresh analysis of the obligations on the part of the user or the entity making the change. This cannot be decided in the abstract in the same way for every project, but the practical rule is simple: the more a change affects system behavior in normal and abnormal states, the less it can be handled through “working-level agreements.” This is also where the area of typical mistakes encountered in machine construction and modernization begins: no interlocks against incorrect configuration, no enforced sequence of actions, and no mechanisms to prevent operator or service errors. If such safeguards are not included in the scope from the outset, they come back later as cost, downtime, or a dispute over responsibility.
How to approach this in practice
In practice, cooperation between the integrator, the software house, and the maintenance department should not be organized around companies, but around the boundaries of responsibility for specific technical decisions. That determines who is responsible for control logic, who for the application layer and communication, and who for service conditions, backups, disaster recovery, and the safe deployment of changes on site. If those boundaries remain broadly defined, the project starts running on assumptions: the integrator assumes the plant will provide operating requirements, the software house assumes the process logic has already been finalized, and maintenance receives a system that cannot be effectively maintained without the code author. The consequence is not merely organizational. Commissioning costs increase, fault resolution takes longer, and in the event of a dispute it becomes harder to determine whether the problem results from an implementation error, incomplete assumptions, or an uncontrolled change after acceptance.
That is why the first decision should not be the choice of tool or the workshop schedule, but the adoption of a shared responsibility model for the entire lifecycle of the solution. For a manager, the practical criterion is simple: every function that affects the operation of a machine or line must have a named owner in four project states — design, commissioning, acceptance, and maintenance. If, for a given function, it is not possible to answer clearly who approves the requirement, who makes the change, who tests the effects, and who is responsible for restoring operation after a failure, then the scope is not ready for execution. This is where the role of the engineering project manager naturally appears: not as the person “in charge of deadlines,” but as the owner of decision-making order across disciplines and suppliers.
Most problems arise at the interface between control systems and dedicated software. A typical example is an application that changes how recipes are selected, parameterizes the operating sequence, or affects operator permissions. To a software house, this may look like a routine functional change, but to the integrator and maintenance team it is an intervention in system behavior, diagnostics, and changeover procedures. If it was not agreed before implementation where responsibility for the interface ends and responsibility for process logic begins, a correction made “on the live system” may require repeat trials, instruction updates, and sometimes a redesign of service procedures. This is also where the budget issue comes in: the cost of dedicated industrial software in the investment budget is driven not only by writing the code, but by how much responsibility the project shifts to validation, documentation, and later maintenance.
To prevent this, it is worth assessing the project not on the basis of supplier declarations, but on artifacts that can be verified. The minimum set includes an agreed interface list, a versioning description, a procedure for reporting and authorizing changes, acceptance test scenarios, and a post-commissioning maintenance plan. One short decision filter works well here:
- does the change affect process logic, operating parameters, or operator behavior,
- can it be reproduced, tested, and rolled back without the involvement of the solution author,
- does the post-implementation documentation allow the plant to maintain the system without relying on knowledge hidden in the contractor’s email inbox.
If the answer to any of these questions is “no,” the project needs a clearer scope, not faster execution. Only at this stage does it make sense to refer to formal requirements: not to add generic disclaimers to the contract, but to check whether the nature of the changes already affects documentation, acceptance, or the assessment of obligations on the user side or the party introducing the change. This is particularly important where the plant co-develops the solution itself, develops it with its own resources, or builds elements of the system as solutions built for in-house use. In that case, cooperation between the three parties stops being only a matter of project organization and also enters the area of the plant’s legal obligations.
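The three-question filter above can also be written down as code, so that the answers are recorded per change rather than decided informally. The field names and the pass/fail rule below are one possible reading of the filter, not a standard: a change that affects the process must also be independently reproducible and documented.

```python
# A minimal sketch of the decision filter, expressed as code so the
# three questions can be answered and logged per change. Field names
# are illustrative.

from dataclasses import dataclass

@dataclass
class ChangeAssessment:
    affects_process_or_operator: bool   # logic, parameters, or operator behavior
    reproducible_without_author: bool   # testable and reversible independently
    documented_for_plant: bool          # maintainable without hidden knowledge

def scope_ready(a: ChangeAssessment) -> bool:
    """A change that touches the process must also be independently
    reproducible and documented; otherwise the scope needs work."""
    if not a.affects_process_or_operator:
        return True  # low-impact change: the filter passes
    return a.reproducible_without_author and a.documented_for_plant

print(scope_ready(ChangeAssessment(True, True, True)))   # True
print(scope_ready(ChangeAssessment(True, False, True)))  # False: clearer scope needed
```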
What to watch out for during implementation
Most problems do not arise when the team lacks competence, but when the project parties work correctly within their own boundaries and no one manages the interface between them. In a project where the integrator is responsible for the execution layer and connections to automation, the software house for the application layer, and maintenance for keeping the plant running, poor implementation planning almost always shifts risk into the commissioning stage. That is where it becomes clear whether project decisions were made with the entire lifecycle of the solution in mind, or only to let individual contractors close out their own scope. For the project, this usually means one of three things: costly fixes after start-up, a dispute over responsibility for failures, or delayed acceptance because the system works only under laboratory conditions, not in the real process.
The key trap is that implementation is often treated as a technical phase, although in practice it is the point at which organizational decisions are verified. If the integrator can make changes in the control system without full knowledge of the effects on the application side, the software house develops functions without confirmation of equipment and industrial network constraints, and maintenance is brought in only at acceptance, then the problem is not communication but a flawed allocation of responsibilities. The practical assessment criterion is simple: before going on site, each party should be able to state which changes it can make independently, which require joint authorization, and who decides to stop the work if a risk arises for the process, safety, or configuration recoverability. If the answer depends on “case-by-case arrangements,” the implementation is not yet prepared, even if the schedule is formally on track.
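The pre-site question in this paragraph can be answered with a simple authorization table: for each change type, which party may act alone and who may stop the work. The change types, party names, and table contents below are illustrative assumptions; the point is that the default for anything not explicitly listed should be joint authorization.

```python
# Sketch of a change-authorization lookup. Contents are illustrative.

AUTHORIZATION = {
    # change type -> parties allowed to act alone (others need joint sign-off)
    "hmi_text":         {"software_house"},
    "drive_parameters": {"integrator"},
    "sequence_logic":   set(),  # always a joint decision
}

STOP_AUTHORITY = {"maintenance", "project_manager"}  # may halt work on risk

def may_change_alone(party: str, change_type: str) -> bool:
    allowed = AUTHORIZATION.get(change_type)
    if allowed is None:
        return False  # unknown change type: default to joint authorization
    return party in allowed

print(may_change_alone("software_house", "hmi_text"))    # True
print(may_change_alone("integrator", "sequence_logic"))  # False: joint review
```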
A typical example concerns a seemingly minor modification: a change in the operating sequence of a workstation which, from the software house’s perspective, is a logic correction, for the integrator means different equipment response times, and for maintenance affects diagnostics and post-failure procedures. If such a change reaches the site without a joint review of its effects, it becomes difficult after start-up to determine whether the source of the problem is the code, the controller configuration, the drive parameters, or the way the operator handles the system. In that case, the cost increases not only because of the correction itself, but also because of downtime, additional testing, and the involvement of people who previously did not need to take part in the analysis. That is why it is worth measuring not only the commissioning date, but also the number of implementation changes made without a full approval path, the time needed to restore the previous version, and the share of defects detected only after the system has been handed over for operation. This gives a real picture of whether cooperation between the three parties is being managed or merely sustained on an ad hoc basis.
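The indicators suggested above can be computed from even a rudimentary change and defect log. The record shapes and field names below are assumptions for illustration; what matters is that the numbers exist at all, so the trend can be watched alongside the commissioning date.

```python
# Sketch: two of the suggested indicators computed from a simple log.
# Record shapes are illustrative assumptions.

def commissioning_indicators(changes, defects):
    """changes: dicts with 'approved' (bool); defects: dicts with
    'found_after_handover' (bool). Returns the two shares as floats."""
    unapproved = sum(1 for c in changes if not c["approved"])
    late = sum(1 for d in defects if d["found_after_handover"])
    return {
        "unapproved_change_share": unapproved / len(changes) if changes else 0.0,
        "late_defect_share": late / len(defects) if defects else 0.0,
    }

changes = [{"approved": True}, {"approved": False},
           {"approved": False}, {"approved": True}]
defects = [{"found_after_handover": True}, {"found_after_handover": False}]
print(commissioning_indicators(changes, defects))
# {'unapproved_change_share': 0.5, 'late_defect_share': 0.5}
```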
At this stage, a natural boundary also emerges between a standard implementation and a situation in which the plant begins to co-create the solution in a way that affects its formal obligations. If the maintenance department does more than review the concept and actually modifies the logic, selects system components, or takes over part of the design decisions, the issue is no longer limited to organizing cooperation and also extends into the area of solutions built for in-house use. This cannot be decided by a single rule for every project; what matters is the scope of the intervention, the plant’s degree of independence, and who is actually shaping the characteristics of the solution. The same applies to risk analysis: if the change affects the process function, operator behavior, service intervention conditions, or the sequence of emergency states, then it is no longer just a question of “whether to implement it,” but also “whether the risk assessment must be repeated and the acceptance assumptions updated.” In practice, this is exactly where the role of the person leading the project is most visible: not as an intermediary handling status updates, but as the owner of the decision on when a convenient simplification ends and technical and legal responsibility begins.
How can you organize collaboration between the integrator, the software house, and the maintenance department within a single project?
Ideally, this should be done when the project scope is being defined, not only when the first conflict arises. At that stage, it should be specified who defines the requirements, who approves the architecture, who is responsible for testing, and who takes over the system for operation.
Why should the maintenance department be involved early rather than only at acceptance?
Because involving this party late usually reveals operational gaps, not just faults. This includes, among other things, post-failure recovery procedures, service accounts, maintenance windows, and error diagnostics.
Where do costs and risks most often increase in such projects?
Most often at the interface between responsibilities, when no single person owns the decision. That is when post-commissioning revisions, scope disputes, and costly changes made too late start to appear.
What is a warning sign that the cooperation model needs to be corrected?
A warning sign is any situation in which requirements, tests, and change costs cannot be assigned unambiguously. The same applies when, for a critical function, no single party can be identified as responsible for the requirement, implementation, and acceptance.
Is a general functional breakdown of responsibilities enough to define the interface?
A general functional breakdown is not enough. You must also define intermediate states, fault conditions, behavior after loss of communication, and the method of post-change testing.
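What "defining intermediate states" can mean in practice is easiest to show as a tiny state table: the agreed transitions, an explicit fault state for anything not agreed, and explicit behavior on loss of communication. The state and event names below are illustrative, not taken from any specific system.

```python
# Minimal sketch of an interface state table: allowed transitions,
# a defined fault state, and explicit comm-loss behavior. All names
# are illustrative.

TRANSITIONS = {
    ("idle",     "start"): "starting",
    ("starting", "ack"):   "running",
    ("running",  "stop"):  "stopping",
    ("stopping", "ack"):   "idle",
}

def next_state(state, event):
    if event == "comm_lost":
        # Agreed behavior on loss of communication: hold in a defined
        # fault state instead of each party assuming something different.
        return "comm_fault"
    # Any transition that was never agreed is a fault, not a guess.
    return TRANSITIONS.get((state, event), "fault")

print(next_state("idle", "start"))        # 'starting'
print(next_state("running", "comm_lost")) # 'comm_fault'
print(next_state("idle", "ack"))          # 'fault': never agreed
```

A table like this is short enough to review jointly before implementation, which is exactly when the integrator, the software house, and maintenance still hold the same assumptions.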