One of the most common misconceptions about Open Process Automation (OPA) is that it is simply a new control system product or a repackaging of existing DCS concepts. In reality, OPA represents something more fundamental: a re-architecture of how industrial control systems are designed, assembled, and evolved over time.
At the center of that shift is the OPA Reference Architecture. It defines not just what components exist in an open control system, but — just as importantly — how responsibility, interoperability, and lifecycle boundaries are structured.
Understanding what this architecture changes — and what it deliberately excludes — is critical to understanding why end users are driving OPA adoption.
From Monolithic Stacks to Modular Architecture
Traditional control systems were built as vertically integrated stacks. Hardware, operating systems, control logic, networking, and lifecycle tooling were tightly coupled, often proprietary, and designed to be upgraded as a unit. This approach simplified vendor accountability, but at a cost: rigidity.
Don Bartusiak, Former Chief Engineer for Process Control at ExxonMobil, describes the limitation clearly:
“When you have closed, tightly coupled, proprietarily connected components, you can’t upgrade component pieces.”
— Don Bartusiak, Former Chief Engineer, Process Control at ExxonMobil (Why End Users Are Driving the Open Process Automation Standard, ~05:47)
The OPA Reference Architecture breaks this coupling by separating compute, I/O, networking, and applications into distinct, standards-based layers. This separation is not academic — it directly addresses the operational pain of disruptive upgrades and vendor lock-in.
The Core Layers of the OPA Reference Architecture
While implementations vary, the reference architecture consistently defines three major architectural layers.
1. Distributed Compute Nodes (DCNs)
At the foundation of the architecture are Distributed Compute Nodes (DCNs). These are where control applications execute — closer to the process, with modern compute capability.
As Bartusiak explains:
“DCNs are where we envision the actual edge layer of computing technology.”
— Don Bartusiak, Former Chief Engineer, Process Control at ExxonMobil (Why End Users Are Driving the Open Process Automation Standard, ~07:34)
By decoupling compute from I/O and from specific vendors, DCNs allow operators to:
- Upgrade compute power independently of field wiring
- Run modern control, analytics, or optimization workloads
- Avoid rip-and-replace hardware cycles
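The decoupling described above can be made concrete with a small sketch. The Python below is purely illustrative (the class names, tag names like `TT101.PV`, and the trivial control law are invented for this example, not part of O-PAS): a control application is written against an abstract I/O interface, so the compute hosting it can be upgraded or swapped without touching the field-wiring side.

```python
from abc import ABC, abstractmethod

class IOChannel(ABC):
    """Abstract I/O boundary: field wiring lives behind this interface."""
    @abstractmethod
    def read(self, tag: str) -> float: ...
    @abstractmethod
    def write(self, tag: str, value: float) -> None: ...

class SimulatedIO(IOChannel):
    """Stand-in for a real I/O subsystem; values keyed by tag name."""
    def __init__(self, values: dict[str, float]):
        self.values = values
    def read(self, tag: str) -> float:
        return self.values[tag]
    def write(self, tag: str, value: float) -> None:
        self.values[tag] = value

def proportional_control(io: IOChannel, pv_tag: str, out_tag: str,
                         setpoint: float, gain: float) -> float:
    """A trivial proportional-only control step. Because it depends only
    on the IOChannel interface, the compute node running it can be
    replaced independently of the field side."""
    error = setpoint - io.read(pv_tag)
    output = gain * error
    io.write(out_tag, output)
    return output

io = SimulatedIO({"TT101.PV": 48.0, "FV101.OUT": 0.0})
out = proportional_control(io, "TT101.PV", "FV101.OUT", setpoint=50.0, gain=2.0)
print(out)  # 4.0
```

The point of the sketch is the boundary, not the control law: any `IOChannel` implementation can sit behind the same application code, which is the upgrade independence the DCN layer is meant to provide.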
2. O-PAS Connectivity Framework
This is the standardized communication layer of Open Process Automation. It enables secure, interoperable data exchange between multi-vendor control systems, devices, and applications by using OPC UA over industry-standard Ethernet, together with standardized information models.
“We want to use Ethernet-based industry standards technologies to achieve interoperability.”
— Don Bartusiak, Former Chief Engineer, Process Control at ExxonMobil (Why End Users Are Driving the Open Process Automation Standard, ~07:55)
This layer enables multi-vendor communication without forcing devices to “speak” only to components from the same manufacturer — a foundational requirement for true openness.
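To make the shared-language idea concrete, here is a minimal sketch, assuming nothing about the actual O-PAS APIs (the toy class and the node identifier `ns=2;s=TT101.PV` are illustrative). It mimics OPC UA-style namespace-qualified addressing: once two servers expose data under the same standardized information model, one generic client routine can read from either, regardless of which vendor built the server.

```python
class MiniAddressSpace:
    """Toy stand-in for an OPC UA server address space: nodes are keyed
    by namespace-qualified identifiers such as 'ns=2;s=TT101.PV'."""
    def __init__(self):
        self.nodes: dict[str, float] = {}
    def add(self, node_id: str, value: float) -> None:
        self.nodes[node_id] = value
    def read(self, node_id: str) -> float:
        # Any client that knows the standardized node ID can read it,
        # independent of which vendor implemented the server.
        return self.nodes[node_id]

# Two "vendors" expose the same measurement under a shared model.
vendor_a = MiniAddressSpace()
vendor_a.add("ns=2;s=TT101.PV", 48.0)

vendor_b = MiniAddressSpace()
vendor_b.add("ns=2;s=TT101.PV", 51.5)

# One generic client routine works against either server.
def read_temperature(server: MiniAddressSpace) -> float:
    return server.read("ns=2;s=TT101.PV")

print(read_temperature(vendor_a), read_temperature(vendor_b))  # 48.0 51.5
```

A real connectivity framework adds transport, security, and discovery on top of this, but the interoperability mechanism is the same: agreement on the address space and information model, not on the vendor.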
3. Advanced Compute Platform (ACP)
At the top of the architecture is the Advanced Compute Platform (ACP) — scalable, data-center-class compute adapted for industrial environments.
“ACP is a way of realizing highly scalable compute power, akin to what IT companies do now in data centers and in the cloud, but in a small on-prem footprint.”
— Don Bartusiak, Former Chief Engineer, Process Control at ExxonMobil (Why End Users Are Driving the Open Process Automation Standard, ~08:12)
The ACP is where higher-level capabilities live:
- System management and orchestration
- Human-machine interfaces (HMI)
- Historians and data services
- Advanced control, optimization, and AI workloads
This layered approach allows compute-intensive innovation to occur without destabilizing core control functions.
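As one illustration of the kind of data service that might live on the ACP, the sketch below implements a minimal in-memory historian (the class name and API are invented for illustration, and real ACP-hosted historians would add persistence, compression, and access control): it records timestamped samples separately from the control loop, so analytics can query history without touching core control functions.

```python
from bisect import bisect_left, bisect_right

class MiniHistorian:
    """Toy historian: stores (timestamp, value) samples per tag and
    answers time-range queries. Samples are assumed to arrive in
    time order, so the timestamp list stays sorted for bisect."""
    def __init__(self):
        self.series: dict[str, list[tuple[float, float]]] = {}
    def record(self, tag: str, t: float, value: float) -> None:
        self.series.setdefault(tag, []).append((t, value))
    def query(self, tag: str, t_start: float, t_end: float) -> list[float]:
        samples = self.series.get(tag, [])
        times = [t for t, _ in samples]
        lo = bisect_left(times, t_start)
        hi = bisect_right(times, t_end)
        return [v for _, v in samples[lo:hi]]

hist = MiniHistorian()
for t, v in [(0.0, 48.0), (1.0, 49.2), (2.0, 50.1), (3.0, 50.0)]:
    hist.record("TT101.PV", t, v)

print(hist.query("TT101.PV", 1.0, 2.5))  # [49.2, 50.1]
```

Because this service reads recorded data rather than live control state, compute-hungry queries and analytics can scale up on the ACP without perturbing the control layer below it.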
What the OPA Architecture Deliberately Leaves Out
Equally important is what the OPA standard does not attempt to cover.
Bartusiak is explicit about scope boundaries:
“The field devices, the communications to sensors and final control elements, the business computing systems, and the safety instrumented systems are out of scope.”
— Don Bartusiak, Former Chief Engineer, Process Control at ExxonMobil (Why End Users Are Driving the Open Process Automation Standard, ~09:01)
This is not a weakness — it is a design choice.
By not redefining everything, OPA avoids:
- Disrupting proven safety system practices
- Replacing well-established field instrumentation ecosystems
- Entangling business IT systems into control standards
Instead, it focuses on the layer where flexibility and innovation matter most: the control and compute infrastructure that binds systems together.
Why This Matters to End Users
From an end-user perspective, the OPA Reference Architecture delivers three structural advantages:
- Incremental modernization: systems can evolve component by component, rather than through disruptive platform migrations.
- Reduced lifecycle risk: obsolescence in one layer does not force replacement of the entire system.
- Future readiness: modern compute, open data access, and scalable architecture enable advanced control, analytics, and AI when and where they deliver value.
As Julie Smith of DuPont notes, end users are no longer willing to accept architectures that constrain innovation:
“We need to figure out a way to do the same things IT has already figured out how to do.”
— Julie Smith, DuPont (Why End Users Are Driving the Open Process Automation Standard, ~14:32)
Architecture as the Enabler of Ecosystems
The OPA Reference Architecture does not prescribe a single vendor or solution. Instead, it creates the conditions for ecosystem-driven innovation — where multiple suppliers compete and collaborate within a shared, standards-based framework.
That shift is foundational to everything that follows: interoperability, cybersecurity by design, lifecycle flexibility, and ultimately production deployment.
In the next post, we’ll explore why interoperability itself — not any individual component — is the real breakthrough, and how ecosystems consistently out-innovate proprietary platforms.