In the late 1990s and early 2000s, OPC spread like a weed. OPC servers were everywhere. Kepware and Matrikon and others deployed gazillions (well, thousands anyway) of OPC servers in every corner of the automation industry. Every type of industry. Every type of application.
But all wasn’t well with OPC, or OPC Classic as I now call it. Security issues, real and perceived, plagued OPC Classic.
Why? Mostly people, but also COM (and DCOM, its distributed variant), the Microsoft technology underpinning OPC Classic. Let me explain.
COM is difficult to maintain and understand without significant training. And how you use it, configure it and set up authorizations varies slightly from one version of Microsoft Windows to the next. There are many ways to screw it up. And when you screw it up, your OPC Classic server stops transferring data and the people who want the data start screaming.
It’s actually more insidious than that. If a well-meaning but undertrained individual (not usually understanding that they are undertrained) goes in and fiddles with some COM or DCOM parameters, nothing happens. Literally, there is no effect. The OPC server keeps working.
For a while anyway.
Eventually, a week, a month or three months later, there’s a reboot. Maintenance powered down the manufacturing cell, a year-end shutdown, whatever, but the machine is powered down. Now, when you bring the machine back up, all of a sudden, for “no reason at all,” DCOM stops working. The memory of that guy fiddling with the DCOM parameters is long forgotten. Instead, the OPC Classic server is blamed. It just stopped working.
It’s a nightmare. The system needs to be fixed and another well-meaning, undertrained person starts fooling around. First thing they do: remove all the security. Generally, that makes it work. “Whew,” they say. “Glad I could get that fixed.”
Now you’re set up for real trouble. There’s a security hole in one of your servers that can lead a hacker down a path to who knows where.
And that’s the kind of thing that makes the VP of IT lie in bed at night staring at the ceiling. He’s the one whose butt is going to be called on the carpet if that hacker ever does attack.
And it does happen. Stuxnet is, of course, the classic case. Someone without malicious intent finds a “blank” USB stick and eventually plugs it into this now-unprotected server. Malware then starts looking for paths to specific automation equipment and all of a sudden you have a much bigger problem on your hands.
But in truth, the lack of DCOM knowledge and the seemingly inconsequential act of plugging in a data stick are really not OPC Classic problems. They’re management issues. Management didn’t dictate that a checklist be in place when an OPC Classic server stops communicating. Management didn’t have a certification program in place to ensure that the people maintaining OPC Classic servers were well trained in COM and DCOM and in troubleshooting OPC Classic server problems.
It’s just more convenient to blame the technology than ourselves and our management. So OPC Classic has taken some hits over the past few years in its public perception.
Probably not warranted, but what do they always say? “Perception is reality.”
But there’s another more pervasive problem with OPC Classic. One that can’t be blamed on management. It’s the deficiencies that come with dependency on Microsoft and Windows.
COM, the base technology for OPC Classic, is a Microsoft product. It runs only on Microsoft platforms. Not Linux, not VxWorks, not anything else. And that’s a problem.
Microsoft has a well-deserved bad reputation, especially in industrial automation. In this industry, we generally build automation processes to last. There are a few products that are short-lived, but it’s much more common to build production processes for diapers, soap, tea and hundreds of other products that we’re going to run for the next five, ten or twenty years.
Microsoft products and PCs aren’t suited for that kind of environment. Every time you buy a new laptop, you are hopelessly obsolete in, what, six months? How do you maintain an OPC Classic server on a Microsoft Windows platform for the next ten years?
So there’s been a desire to run OPC Classic servers on other platforms. Platforms with longer lives and stable hardware that will last. Platforms that are smaller. Embedded platforms. The questions being asked were: “Why do I have to be tied to Microsoft? What about Linux? Why can’t my flow meter and lots of other devices be OPC servers?”
None of that is possible with Microsoft and DCOM.
There’s also been dissatisfaction with OPC Classic in the way that data gets to all those data-hungry servers in the upper echelons of the factory automation system and at the enterprise.
Most data is passed to those systems through a PLC. Possibly it starts in an RFID reader passing a pallet number, ID code and weight from the RFID reader to the PLC. From the PLC, it gets read by an OPC Classic server and passed to another application in the PC that transfers it to a logging database in the enterprise.
The real problem with OPC Classic is that this is an expensive and inefficient way to get data from a device (RFID reader) into that database. There’s a PC involved – someone has to set it up, maintain it, validate that it is secure, etc. Initial hardware and labor and more ongoing labor.
But more than that, it’s very inefficient and provides incomplete data. The data has to be carefully managed all the way from the RFID reader to the server to make sure that the different systems use the correct data types, that resolution is maintained, the endian order (which byte is first) is proper for that system. It’s not easy.
And every time you decide you want a new piece of data, you have to touch multiple systems without breaking any of them. “Yuck” is the technical term for it.
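To make the endian problem concrete, here’s a minimal sketch using Python’s standard struct module (the value and formats are my own invention, purely for illustration): the same four bytes decode to completely different numbers depending on the byte order each system assumes.

```python
import struct

# Pack a 32-bit float the way one system might transmit it: little-endian.
value = 72.5
wire_bytes = struct.pack("<f", value)

# A receiver that assumes the correct byte order recovers the value...
right = struct.unpack("<f", wire_bytes)[0]

# ...while one that assumes big-endian recovers meaningless garbage.
wrong = struct.unpack(">f", wire_bytes)[0]

print(right)  # 72.5
print(wrong)  # a tiny, meaningless number
```

Multiply that little detail by every data item, every data type and every system in the chain and you see why the hand-off is so fragile.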
Plus you don’t get any of the meta-data. Meta-data is the associated data that provides the semantics for what you are transferring. Meta-data includes things like units, scaling and descriptions: the context that lets you work with the data without guessing what it is.
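As a hypothetical illustration (the class and field names here are mine, not any standard), compare a bare register value with the same value carried alongside its meta-data:

```python
from dataclasses import dataclass

# A bare value, as it comes out of a register:
raw_value = 72.5  # Degrees F? Degrees C? A scaled integer? No way to know.

# The same value with the meta-data that gives it meaning:
@dataclass
class TemperatureReading:
    value: float
    units: str          # e.g. "degF"
    scale: float        # raw-to-engineering-units multiplier
    description: str

reading = TemperatureReading(72.5, "degF", 1.0, "Zone 3 oven temperature")
print(f"{reading.value} {reading.units}: {reading.description}")
```

With the meta-data attached, a consumer never has to guess at units or scaling.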
So, even though OPC Classic is wildly successful and works well when managed right, there was enough dissatisfaction with the security issues, platform issues and data inconsistencies that a successor was planned for it.
What is OPC UA?
That sounds like a very simple question. The answer, when you are discussing a complex technology like OPC UA, isn’t as simple.
OPC UA, which I will refer to as UA throughout this article, is the next generation of OPC technology. UA is a more secure, open, reliable mechanism for transferring information between servers and clients. It provides more open transports, better security and a more complete information model than the original OPC, “OPC Classic.” UA provides a very flexible and adaptable mechanism for moving data between enterprise-type systems and the kinds of controls, monitoring devices and sensors that interact with real world data.
Why a totally new communication architecture? OPC Classic is limited and not well suited for today’s requirements to move data between enterprise/Internet systems and the systems that control real processes that generate and monitor live data. These limitations include:
- Platform dependence on Microsoft – OPC Classic is built around DCOM (Distributed COM), an older communication technology that is being de-emphasized by Microsoft
- Insufficient data models – OPC Classic lacks the ability to adequately represent the kinds of data, information and relationships between data items and systems that are important in today’s connected world
- Inadequate security – Microsoft and DCOM are perceived by many users to lack the kind of security needed in a connected world with sophisticated threats from viruses and malware.
UA is the first communication technology built specifically to live in that “no man’s land” where data must traverse firewalls, specialized platforms and security barriers to arrive at a place where that data can be turned into information. UA is designed to connect databases, analytic tools, Enterprise Resource Planning (ERP) systems and other enterprise systems with real-world data from low-end controllers, sensors, actuators and monitoring devices that interact with real processes that control and generate real-world data.
UA uses scalable platforms, multiple security models, multiple transport layers and a sophisticated information model to allow the smallest dedicated controller to freely interact with complex, high-end server applications. UA can communicate anything from simple downtime status to massive amounts of highly complex plant-wide information.
UA is a sophisticated, scalable and flexible mechanism for establishing secure connections between clients and servers. Features of this unique technology include:
Scalability – UA is scalable and platform independent. It can be supported on high-end servers and on low-end sensors. UA uses discoverable profiles to include tiny embedded platforms as servers in a UA system.
A Flexible Address Space – The UA address space is organized around the concept of an object. Objects are entities that consist of variables and methods and provide a standard way for servers to transfer information to clients.
Common Transports and Encodings – UA uses standard transports and encodings to ensure that connectivity can be easily achieved in both the embedded and enterprise environments.
Security – UA implements a sophisticated security model that ensures the authentication of clients and servers, the authentication of users and the integrity of their communication.
Internet Capability – UA is fully capable of moving data over the Internet.
A Robust Set of Services – UA provides a full suite of services for eventing, alarming, reading, writing, discovery and more.
Certified Interoperability – UA certifies profiles such that connectivity between a client and server using a defined profile can be guaranteed.
A Sophisticated Information Model – UA provides more than just an object model. UA is designed to connect objects in such a way that true information can be shared between clients and servers.
Sophisticated Alarming and Event Management – UA provides a highly configurable mechanism for providing alarms and event notifications to interested clients. The alarming and event mechanisms go well beyond the standard change-in-value type alarming found in most protocols.
Integration with Standard Industry-Specific Data Models – The OPC Foundation is working with a number of industry trade groups to define specific information models for their industries and to support those information models within UA.
How OPC UA Differs from Plant Floor Systems
I’ve studied this technology for a long time now. And yet there is a question that I almost shrink from. In fact, I sometimes hate to answer it.
It’s not because I don’t understand what it is. It’s not that I don’t understand how it works. And it’s not that I don’t believe that it is a very valuable tool to almost every plant floor system.
It’s just hard to put it into context when there isn’t anything to compare it to. For example, when Profinet IO came out, I could tell people that it was the equivalent of EtherNet/IP for Siemens Controllers. Same kind of technology. Basically the same kind of functionality. Easy to explain.
But how do I explain UA when it doesn’t have an equivalent? You could say that it is Web services for automation systems. Or that it’s SOA for automation systems, an even more arcane term. SOA is “Service Oriented Architecture,” basically the same thing as Web services. That’s fine if you’re an IT guy (or gal) and you understand those terms. You have some context.
But if you’re a plant floor guy, it’s likely that even though you use Web services (they’re the plumbing of the Internet), you don’t know what that term means.
So the reason I get skittish about answering this question is that people always follow up with another question that makes me cringe: “Why do we need another protocol? Modbus TCP, EtherNet/IP and Profinet IO work just fine.”
So I have to start with the fact that it’s not like EtherNet/IP, Profinet IO or Modbus TCP. It’s a completely new paradigm for plant floor communications. It’s like trying to explain EtherNet/IP to a PLC programmer in 1982. With nothing to compare it to, it’s impossible to understand.
That’s where I am when I try to explain OPC UA.
The people I’m trying to reach have lived with the PLC networking paradigm for so long that it’s second nature. You have a PLC, it is a master kind of device and it moves data in and out of slave devices. It uses really simple, transaction-type messaging or some kind of connected messaging.
In either case, there is this buffer of output data in a thing called a programmable controller. There is a buffer of input data in a bunch of devices called servers, slaves or nodes. The buffer of input data moves to the programmable controller. The output data buffers move from the programmable controller to the devices. Repeat. Forever. Done.
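That buffer-exchange paradigm can be sketched in a few lines. All the names below are invented for illustration; a real PLC does this in firmware, cyclically and deterministically:

```python
# Data produced by the slave devices (inputs) and by the PLC (outputs).
slave_inputs = {"sensor_1": 0, "sensor_2": 0}
slave_outputs = {"valve_1": False}

def scan_cycle(controller_image: dict) -> None:
    # Inputs move from the devices into the controller's data image...
    controller_image.update(slave_inputs)
    # ...the control logic runs (omitted here)...
    # ...then outputs move from the controller's image back to the devices.
    for name in slave_outputs:
        slave_outputs[name] = controller_image.get(name, slave_outputs[name])

controller_image = {"valve_1": True}
scan_cycle(controller_image)  # repeat forever in a real controller
```

Inputs in, logic, outputs out. Repeat. Forever. Done.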
That’s really easy to wrap your mind around. Really easy to see how it fits into your manufacturing environment and really easy to architect.
OPC UA lives outside that paradigm. Well, really, that’s not true. OPC UA lives in parallel with that paradigm. It doesn’t replace it. It extends it. Adds on to it. Brings it new functionality and creates new use cases and drives new applications. In the end, it increases productivity, enhances quality and lowers costs by providing not only more data, but also information, and the right kind of information to the production, maintenance, and IT systems that need that information when they need it.
Pretty powerful, huh?
Our current mechanisms for moving plant floor data – and few or no systems move information – are brittle. It takes massive amounts of human and computing resources to get anything done. And in the process we lose lots of important meta-data, we lose resolution and we create fragile systems that are nightmares to support.
And don’t even ask about the security holes they create. Because when there are problems, and there always are, the first thing everyone does is to remove the security and reboot.
These systems are a fragile house of cards. They need to be knocked down.
And because of all this, opportunities to mine the factory floor for quality data, interrogate and build databases of maintenance data, feed dashboard-reporting systems, gather historical data and feed enterprise analytic systems are lost. Opportunities to improve maintenance procedures, reduce downtime, compare performance at various plants, lines and cells across the enterprise are all lost.
This is the gap that OPC UA fills. It’s not something Profinet IO can do, even though the devoted acolytes would contest that statement. It’s not something that EtherNet/IP can do. And it’d be a joke to talk about Modbus TCP in this context.
So I’m back to the original question: “What exactly is OPC UA?”
OPC UA is about reliably, securely and most of all, easily, modeling “objects” and making those objects available around the plant floor, to enterprise applications and throughout the corporation. The idea behind it is infinitely broader than anything most of us have ever thought about before.
And it all starts with an object. An object that could be as simple as a single piece of data or as sophisticated as a process, a system or an entire plant.
It might be a combination of data values, meta-data and relationships. Take a dual loop controller. The dual loop controller object would relate variables for the setpoints and actual values for each loop. Those variables would reference other variables that contain meta-data like the temperature units, high and low setpoints and text descriptions. The object might also make available subscriptions to get notifications on changes to the data values or the meta-data for that data value. A client accessing that one object can get as little data as it wants (single data value) or an extremely rich set of information that describes that controller and its operation in great detail.
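A rough sketch of that dual loop controller object, using invented Python class names rather than any actual UA API, might look like this:

```python
from dataclasses import dataclass, field

@dataclass
class Variable:
    value: float
    metadata: dict = field(default_factory=dict)  # units, limits, description

@dataclass
class LoopControllerObject:
    # loop name -> {"setpoint": Variable, "actual": Variable}
    loops: dict

controller = LoopControllerObject(loops={
    "loop_1": {
        "setpoint": Variable(350.0, {"units": "degF", "high_limit": 400.0}),
        "actual":   Variable(348.2, {"units": "degF"}),
    },
    "loop_2": {
        "setpoint": Variable(212.0, {"units": "degF", "low_limit": 32.0}),
        "actual":   Variable(211.7, {"units": "degF"}),
    },
})

# A client can ask for as little or as much as it wants:
just_a_value = controller.loops["loop_1"]["actual"].value
rich_info = controller.loops["loop_1"]["setpoint"].metadata
```

The client decides how deep to go: a single value, or the whole structure with all of its meta-data.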
OPC UA is, like its factory floor cousins, composed of a client and a server. The client device requests information. The server device provides it. But as we can see from the loop controller example, what the UA server does is much more sophisticated than what an EtherNet/IP, Modbus TCP or Profinet IO server does.
An OPC UA server models data, information, processes and systems as objects and presents those objects to clients in ways that are useful to vastly different types of client applications. And better yet, the UA server provides sophisticated services that the client can use, including:
Discovery Services – Services that clients can use to know what objects are available, how they are linked to other objects, what kind of data and what type is available, and what meta-data is available that can be used to organize, classify and describe those objects and values
Subscription Services – Services that the clients can use to identify what kind of data is available for notifications. Services that clients can use to decide how little, how much and when they wish to be notified about changes, not only to data values but to the meta-data and structure of objects
Query Services – Services that deliver bulk data to a client, like historical data for a data value
Node Services – Services that clients can use to create, delete and modify the structure of the data maintained by the server
Method Services – Services that the clients can use to make function calls associated with objects
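To give a feel for the discovery and subscription ideas, here is a toy sketch with invented names; it is not the actual UA service definitions, just the shape of the interaction:

```python
class TinyServer:
    def __init__(self):
        self._nodes = {"temp": 72.5, "pressure": 14.7}
        self._subs = {}  # node name -> list of notification callbacks

    def browse(self):                       # discovery: what's available?
        return list(self._nodes)

    def subscribe(self, node, callback):    # subscription: tell me on change
        self._subs.setdefault(node, []).append(callback)

    def write(self, node, value):           # a change triggers notifications
        if self._nodes.get(node) != value:
            self._nodes[node] = value
            for cb in self._subs.get(node, []):
                cb(node, value)

notifications = []
server = TinyServer()
server.subscribe("temp", lambda n, v: notifications.append((n, v)))
server.write("temp", 75.0)   # value changed: subscriber is notified
server.write("temp", 75.0)   # unchanged: no new notification
```

The client browses what exists, then asks to be told about changes instead of polling for them.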
Unlike the standard industrial protocols, an OPC UA server is a data engine that gathers information and presents it in ways that are useful to various types of OPC UA client devices, devices that could be located on the factory floor like an HMI, a proprietary control program like a recipe manager, or a database, dashboard or sophisticated analytics program that might be located on an enterprise server.
Even more interesting, this data is not necessarily limited to a single physical node. Objects can reference other objects, data variables, data types and more that exist in nodes off someplace else in the subnet or someplace else in the architecture or even someplace else on the Internet.
OPC UA organizes processes, systems, data and information in a way that is absolutely unique to the experience of the industrial automation industry. It is a unique tool that attacks a completely different problem than that solved by the EtherNet/IP, Modbus TCP and Profinet IO Ethernet protocols. UA is an information-modeling and delivery tool that provides access to that information to clients throughout the enterprise.
OPC UA Terminology
One of the things that you should know about OPC UA is that the terminology is a little different than what you’re used to seeing. The terms used in many UA documents are similar to what you might expect, but the designers twisted the meanings slightly. It’s probably because UA is the first protocol that really crosses the line between the enterprise and the factory floor. Because it has a foot in both worlds, the terms can be confusing to well-versed individuals in both the IT and factory floor worlds.
Another reason why the terms appear at first glance to be a little confusing is the scope of a UA discussion. In IA (industrial automation), we generally talk about interfaces between software components cohabiting in a processor. Or we talk about devices on the same subnet communicating over very well-defined and very restrictive interfaces (EtherNet/IP, Modbus TCP or Profinet IO). In the Internet world, people talk about generic services with much more flexibility and capability than the interfaces between factory floor devices.
Here’s a dictionary of the most important OPC UA Terms.
UA APPLICATION – In industrial networking, we generally draw a distinction between the end user application and the protocol stack. The end user application implements some set of defined functionality. The protocol stack moves well-defined data between the application and some external device using a very restrictive interface. Not quite the same in UA. In UA, I have found that the UA application references the end user application, the UA object model and the set of UA services implemented by the UA device. This is a much more encompassing use of the term.
UA CLIENT – A UA client endpoint is the side of a UA communication that initiates a communication session. Clients in UA are much more flexible than other network clients. UA Clients have the capability to search out and discover UA servers, discover how to communicate with the UA server, discover what capabilities the UA servers have, and configure the UA server to deliver specific pieces of data when and how they want it. UA clients will generally support many different protocol mappings so that they can communicate with all different types of servers.
UA SERVER – A UA server endpoint is the side of a UA communication that provides data to a UA client. There is no standard UA server in functionality, performance or device type. Devices from small sensors to massive chillers may be UA servers. Some servers may host just a couple of data points. Others might have thousands. Some UA servers may use high-security but lower-performance XML mappings, while others may communicate without security using high-performance UA Binary encoding. Some servers may be completely configurable and offer the client the option to configure data model views, alarms and events. Others may be completely fixed.
BLOB (Binary Large Object) – “BLOBs” provide a way to transfer data that has no UA data definition. Normally, all UA data is referenced by some sort of data definition which explains the data format. BLOB data is used when the application wishes to transfer data that has no UA definition. BLOB data is user defined and could be anything: video, audio, data files or anything else.
PROTOCOL STACK OR STACK – A protocol stack like EtherNet/IP or Profinet IO in industrial networking generally implements the data model and the services of that protocol. An API connects that data model and service model to the data of the end user application. Though protocol stack vendors can implement this in many different ways, in general a UA protocol stack is comprised of three components: data encoding, security and network transport. Note that, unlike IA (industrial automation) protocol stacks, the data model and service model for the device are not necessarily included in the protocol stack.
ENCODINGS – A data encoding is a specific way to convert an OPC request or response into a stream of bytes for transmission. Two encodings are currently supported in OPC UA: UA Binary and XML. UA Binary is a much more compact encoding with smaller messages, less buffer space and better performance. XML is a more generic encoding that is used in many enterprise systems. XML is easier for enterprise servers to process, but requires more processing power, larger messages and more buffer space.
SECURITY PROTOCOL – A security protocol is the way to ensure the integrity and privacy of messages being transferred across a connection. UA uses the same type of security used on the Internet for privacy and security: certificates.
TRANSPORTS – A transport is the mechanism that moves a UA message between a client and server. This is another term that at first glance can be confusing. All UA messages are delivered over a TCP/IP connection. Within TCP, there is what I would call a session, though the word “session” is not specifically used anywhere that I’ve found in my study of the technology. There are two kinds of these sessions that carry messages over TCP, and in UA they are called transports. They are UA TCP and SOAP/HTTP.
MAPPINGS – This is an interesting term. The UA specifications are very abstract, unlike, say, a Modbus RTU specification. Modbus RTU runs over multi-drop RS485 and that fact is inherent in the specifications. It’s not that way with UA. The specifications for UA operation are very abstract and done that way to maintain the ability to take advantage of future technologies. A mapping refers to how those abstract specifications are mapped onto a specific technology. For example, a security mapping describes how the UA Secure Channel Layer is implemented using WS Secure Conversation. A UA Binary Encoding mapping describes one way that UA data structures are mapped into a stream of bytes.
API (Application Program Interface) – An API is the set of the software interfaces that allow one software application to use the services of another software application. In the industrial world, this normally refers to the interface between two pieces of software cohabitating in the same processor. In the Ethernet world, the API can refer to the interfaces a client device needs to access the services available from some remote web service. In UA, the API generally refers to the set of interfaces that a UA toolkit vendor provides to a device developer. Because different toolkits are designed differently, the APIs work differently. The API may include interfaces to the data model. In other cases, the API may only interface the three main components of UA: the encoding layer, the security layer and the transport layer. The UA data and service model may be part of the user application.
WEB SERVICES – Web services is a generic term for loosely coupling Internet services (applications) in a structured way. The majority of Internet applications today are built using Web services. With Web services, you can easily find services, obtain the interfaces and characteristics of the interfaces and then bind to them. HTTP, SOAP, XML are the basic technologies of Web services applications and are some of the technologies that can be used by OPC UA clients and servers.
SERIALIZATION – This is an easy term to comprehend. Serialization is the process of taking a service request, like a read-attribute request, and creating the series of bytes that a UA server can process in order to return the value of an attribute. Serialization dictates how data elements like a floating point value are transformed into a series of bytes that can be sent serially over a wire. Two types of serial encoding are currently supported by UA: UA Binary and UA XML.
UA XML ENCODING – XML encoding is a way to serialize data using Extensible Markup Language (XML). An encoding is a specific way of mapping a data type to the actual data that appears on the wire. In XML encoding, data is mapped to the highly-structured, ASCII character representation used by XML. XML can be cumbersome, large and inhibit performance, but the encoding is used because a large number of enterprise application programs support XML by default.
UA BINARY ENCODING – UA Binary encoding is a way to serialize data using an IEEE binary encoding standard. An encoding is a specific way of mapping a data type to the actual data that appears on the wire. In Binary encoding, data is mapped to a very compact binary representation that uses fewer bytes and is more efficient for embedded systems to transfer and process. Binary encoding is widely used by industrial automation systems, but less common among enterprise applications.
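As a rough, illustrative comparison (these are not the actual UA wire formats), here is the size difference between one float serialized as compact binary and the same float wrapped in XML:

```python
import struct
import xml.etree.ElementTree as ET

value = 348.2

# Compact binary: a 32-bit float is exactly 4 bytes on the wire.
binary = struct.pack("<f", value)

# XML: the same value as a tagged ASCII document.
elem = ET.Element("Value")
elem.text = str(value)
xml_bytes = ET.tostring(elem)  # b'<Value>348.2</Value>'

print(len(binary), len(xml_bytes))  # 4 bytes vs. 20 bytes
```

Five times the bytes for one value, before any SOAP or HTTP overhead, which is why embedded devices favor binary and enterprise systems tolerate XML.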
SECURITY PROTOCOL – A security protocol protects the privacy and integrity of messages. OPC UA takes advantage of several standard, well-known security protocols. The selected security protocol for a specific application is a combination of the security requirements for the installation and the encoding and transports selected for OPC UA implementation.
TRANSPORT PROTOCOL – A transport protocol (also referred to as a “transport”) provides the end-to-end transfer of UA messages between UA clients and servers. Once a UA service message is encoded and passes through securitization, it is ready for transport. Two transports are currently defined for OPC UA: UA TCP and SOAP/HTTP. The underlying technology for both these transports is standard TCP. TCP provides the socket-level communication between clients and servers.
UA TCP TRANSPORT – UA TCP transport is essentially a small protocol that establishes a low-level communication channel between a client and a server. Most of what the UA TCP transport does is to negotiate maximum buffer sizes so both sides understand the limits of the other. The advantage of UA TCP is its size and negligible impact on throughput.
HTTP (Hypertext Transfer Protocol) – HTTP is part of the basic plumbing of the Internet. It is the low-level protocol that allows a client application like your browser to request a web page from a web server. HTTP messages request data or send data in a very standard format supported by every Internet-aware application.
XML (Extensible Markup Language) – XML is a highly structured way of specifying data such that applications can easily communicate. XML transfers all data as ASCII – the one commonly understood data format for all computer systems. XML uses a grammar to define the specific data tags that are used by an application to pass data.
SOAP (Simple Object Access Protocol) – SOAP extends XML and provides a higher level of functionality. Among other things, SOAP adds the ability to make remote procedure calls within an XML structure.
HTTP/SOAP UA TRANSPORT – The HTTP/SOAP transport is the second transport currently supported in OPC UA. This transport requires larger messages, bigger buffers and more processing, but is used because HTTP and SOAP are supported by almost all (if not all) enterprise applications. It is a standard way of moving serialized OPC UA messages between a client and a server.
For more information on OPC UA, contact Real Time Automation, Inc.