Jason K. Firth, C.E.T.

Instrumentation, Control, and Automation

New Software Release: Schneider Unity Pro 9 -- I mean 10 -- I mean 11!


January 14, 2016

I'm Jason Firth.

You will recall that last year I posted about the release of Unity Pro v8.1.

Well, I got an email from a vendor this week about the opportunity to learn about the new version of Unity Pro -- Version 10!

(Wait, did I miss something? What happened to 9?)

I have absolutely no idea what happened to 9. They skipped it. Maybe to keep in step with Windows, which also skipped 9?

 

Unity Pro V10 supports:

M580 features

*CCOTF (Configuration Change On The Fly) on M580 local I/O

*Cybersecurity: Events log, Data Integrity, Enable/Disable Services

*System Time Stamping of Application Variables

*Device Integration: Network Manager

Quantum Platform Features

*HART on X80 remote drops

*New Quantum firmware v3.3

Full Excel Import/export tool

Audit Trail

*Logging to Syslog

ANY_BOOL Data Type

Supports:

Win 7 32 and 64 bit;

Win 8.1 32 and 64 bit;

Win 10 32 and 64 bit; and

Windows Server 2012.

How exciting, right? Well, I went to the Schneider Electric website to download the latest version, and was shocked to discover that version 10 isn't the latest version!

 

Yes, there's a version 11, released right before Christmas.

Unity Pro V11 supports new Modicon M580 controllers:

Support for the new Modicon M580 High End controllers

Support for the new Modicon M580 HSBY (hot standby) CPUs

Support for the LL984 language on Modicon M580

Quantum Ethernet I/O drops are now supported on Modicon M580

Supports:

Win 7 32 and 64 bit;

Win 8.1 32 and 64 bit;

Win 10 32 and 64 bit; and

Windows Server 2012.

 

To be honest, these seem like awfully incremental improvements to justify major software version increments.

Alongside Unity Pro v10, there are new firmware images for the Modicon Quantum CPUs, and a major revision of the M580 firmware images for all the new features.

Alongside Unity Pro v11, there are new firmware images for the M580 platform.

Thanks for reading!

Happy New Year! Also, Virtual Machines


January 10, 2016

I'm Jason Firth.

 

Happy New Year to everyone!

I promised to talk more about virtual machines last time, and I'm finally following up on that.

First, some definitions.

A virtual machine is basically a machine (a computing machine, for what we're about to talk about) implemented in software.

A virtual machine can be very simple. Most programmers could bang one out in a few minutes to do basic operations, such as taking a file and carrying out the operations described in that file.
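
To make that concrete, here is a minimal sketch (in Python, purely for illustration; the instruction names are made up) of the kind of toy virtual machine I'm describing: it reads simple text instructions, as if loaded from a file, and executes them against a small stack.

```python
# A toy stack-based virtual machine: reads instructions from a list
# (imagine them loaded from a file) and executes them one at a time.
def run(program):
    stack = []
    for line in program:
        op, *args = line.split()
        if op == "PUSH":        # push a number onto the stack
            stack.append(int(args[0]))
        elif op == "ADD":       # pop two numbers, push their sum
            stack.append(stack.pop() + stack.pop())
        elif op == "PRINT":     # show the top of the stack
            print(stack[-1])
    return stack

# "Machine code" for our imaginary machine:
run(["PUSH 2", "PUSH 3", "ADD", "PRINT"])   # prints 5
```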

There are also much more complicated virtual machines. For example, the Java programming language is compiled into something called 'bytecode'. This bytecode is not executable on your computer by itself. However, your computer has a virtual machine designed specifically for running Java bytecode loaded on it: the Java Virtual Machine (JVM), which comes as part of the Java Runtime Environment (JRE). That's how Java programs run. One of the benefits of this is that you can write Java code once to run on a virtual machine, and then as long as you have a working virtual machine, you should be able to run that program on any platform, no matter the processor or the operating system.

A much more complicated version of a virtual machine is to have a PC create a virtual machine pretending to be another PC in software. This is sometimes called emulation. There are a number of software packages that do this for us. VMWare is probably the most popular one. Windows Server 2008 and onwards has the ability to run virtual machines built in. Oracle has VirtualBox. Bochs is a popular open source x86 emulator.

Intel has included special hardware acceleration of virtual machines for server machines. It helps provide direct access to hardware and CPU resources without having a software layer in between.

There is a clear benefit to using virtual machines on servers. Basically, you can have one incredibly powerful computer acting like 100 different computers. You can also have the virtual machine running in parallel on different computers, so if there's a hardware failure, a different parallel system can immediately take over. In terms of resource utilization, virtual machines have useful benefits: not every virtual server will be using all its resources at all times, so you can run multiple virtual servers, called "instances", on the same amount of hardware that might run a single server if you had dedicated hardware for each one. Also, if you're selling instances, you can assign more or fewer resources to an instance depending on demand, allowing you to set your prices based on resourcing.

So, how can this possibly apply to PLCs?

Well, the idea is that instead of running PLC logic on a PLC processor, you could run the software in an instance in a data center.

There's no reason why this can't work. Most PLCs today run a proprietary operating system called VxWorks by a company called Wind River. This operating system is probably the most popular operating system you've never heard of. It's used in spacecraft, in military and commercial aircraft, in communication hardware, and in large scale electrical infrastructure, to name a few. This operating system runs just fine in VMWare, according to some sources I found on the Internet. It's extremely likely that a PLC controller can be made to run in a Virtual Machine on a server.

Of course, there are reasons why it's not a great idea. After all, a PLC is a purpose-built piece of hardware. It is designed to handle dirty power, extreme temperatures, and humidity. It is designed to fail gracefully when it does fail. By contrast, consumer-grade or even business-grade servers are much less robust, are designed with a much shorter running life in mind, and are usually designed with performance in mind more than reliability.

There are PLCs that do make their decisions on a PC. They've existed for decades. They're called "Soft PLCs". NASA uses a soft PLC product, National Instruments LabVIEW. So there is a place for these, but as you can see it's not a simple one-size-fits-all answer.

Now for the trick I mentioned last time.

See, for all this talk of "Should PLCs use Virtual Machines", there's one simple fact it's easy to forget: PLCs are Virtual Machines!

Inside a PLC, there is no giant rack of relays. The original purpose of a PLC was to take that giant cabinet filled with relays and replace it with a CPU running a program that emulates the function of that relay cabinet -- the definition of a virtual machine.
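
As an illustration of what that emulation amounts to (a made-up example, not any vendor's actual firmware), here's a rough Python sketch of a scan loop: read inputs, evaluate "rungs" of relay-style logic, write outputs, repeat.

```python
# A bare-bones "soft relay cabinet": each rung is evaluated once per scan,
# just like the hard-wired relay logic it replaces. All names are invented
# for the example.
def scan_once(inputs, outputs):
    # Rung 1: a classic seal-in circuit. The motor runs if start is pressed
    # or it was already running, as long as stop is not pressed.
    outputs["motor"] = (inputs["start"] or outputs["motor"]) and not inputs["stop"]
    # Rung 2: the alarm light follows the high-level switch.
    outputs["alarm_light"] = inputs["level_high"]
    return outputs

outputs = {"motor": False, "alarm_light": False}
inputs = {"start": True, "stop": False, "level_high": False}
outputs = scan_once(inputs, outputs)   # a real PLC repeats this forever
print(outputs)   # {'motor': True, 'alarm_light': False}
```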

Some PLCs even have a special custom chip, called an ASIC, designed to run the virtual machine logic. This means that however we do it, a PLC does, and almost always will, run a virtual machine!

"Almost"?

Well, there's one more spanner in the works. Not every PLC is a virtual machine.

How would this work? Well, if you had a microcontroller, and wrote a compiler designed to produce native code from ladder logic or FBD, then uploaded the compiled code to the microcontroller, you'd have an ordinary controller running native code, and not a virtual machine. On the other hand, no major PLC brand today does that. They all use virtual machines.
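
As a rough illustration of that approach (a toy I've made up, not any vendor's toolchain), here's a Python sketch that "compiles" rungs into C source you could build and flash to a microcontroller, so the logic ends up running as native code with no interpreter in between.

```python
# Toy ladder-to-native translator: each rung becomes one line of C. Compile
# the generated source with the microcontroller's toolchain and the logic
# runs natively; no virtual machine is left on the target.
rungs = [
    ("motor",       "(start || motor) && !stop"),   # seal-in start/stop circuit
    ("alarm_light", "level_high"),                  # alarm follows level switch
]

c_lines = ["void scan(void) {"]
c_lines += [f"    {out} = {expr};" for out, expr in rungs]
c_lines.append("}")

print("\n".join(c_lines))
# void scan(void) {
#     motor = (start || motor) && !stop;
#     alarm_light = level_high;
# }
```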

 

 

Thanks for reading!

The Cloud, Virtual Machines, and Arduino (Oh my!)


November 4, 2015

I'm Jason Firth.

 

I've noticed a number of discussions lately about using different technologies that are less expensive than typical controls equipment.

"Why not use Arduino? Why not use virtualized PCs? Why not use the cloud?"

These questions really trouble me.

"Cloud computing" generally relates to computing resources being pooled in third party data centers and their operation being shown as a "cloud" on network diagrams.

Three current examples of cloud computing are Microsoft Azure, Amazon AWS, and Google Cloud Platform.

Some of the benefits of cloud computing are:

You can immediately spin up or down capacity as required with no capital expenditure. For example, if you're running a website and your website becomes more and more popular, you can press a button and Amazon will dedicate additional computing resources to your website.

Your resources aren't tied to a single point of failure like a single server. Most virtualized computers can be swapped between different servers to allow failover without any disruption of services. This can mean that if server hardware is failing, it can be replaced with no impact on the provided services.

Another company can focus on the specifics of managing large IT infrastructure and can carry the risks of investing in individual pieces of IT infrastructure.

By having many huge companies pooling their data infrastructure requirements together, the total pool increases, which provides some safety. For example, it's very difficult to DDoS Amazon AWS compared to a single corporate file server located in a company head office. In addition, in the event of such attacks, a single company specializing in network infrastructure can have experts on hand to deal with such risks, where an individual company may not be able to afford an expert for a few servers.

So those are some benefits, but it really troubles me when people honestly suggest this solution. The reason is that it shows a lack of appreciation for the gravity of your control devices.

What do I mean by this?

Well there's a number of things.

Before we start with any of the reasons, I need to explain the difference between two types of failure: Revealed and Unrevealed failures.

Revealed failures are failures that are immediately apparent. They shut down devices, or they immediately cause a process upset. These failures are immediately found, and likely immediately fixed.

Unrevealed failures, by contrast, are not immediately apparent.

A few years back I was facilitating Reliability Centered Maintenance analyses. In many of the scenarios we covered, a motor would overheat and fail, and the control systems would shut it down. A relay would fail, and the control system would shut it down. A boiler would reach overpressure, and the control system would shut it down.

Then we started covering the control systems themselves. A failure wouldn't immediately harm anything, but when other problems occurred, that's when the trouble would start. Suddenly, the control systems wouldn't be shutting down failing equipment. In fact, they might be actively working to make things worse.

I've witnessed the trouble caused when control systems go down. Operators rely on their control systems more than they realize to tell them exactly what's going on. If the alarm doesn't sound, if the screen doesn't light up, if the flashing light doesn't flash, then often they'll have no way to know something bad is happening until something terrible has happened.

So now we have this critical piece of infrastructure we've sent to another site, perhaps thousands of kilometers away, that we're relying on the Internet to maintain connectivity to.

The ability to spin up or spin down CPU resources is sort of a non-starter in terms of relevant benefits. CPU resources are not usually a bottleneck in PLC or DCS systems. You might have an issue if your PLC scan times are getting too high, but PLCs are already using CPUs that have been obsolete for 15 years -- if CPU resources were a primary consideration, that wouldn't be the case.

The risks associated with controls infrastructure are not the same risks associated with IT infrastructure. That being the case, I don't think companies like Microsoft, Google, or Amazon are better prepared to manage those risks than companies that deal with the real risks every day. We're not talking about having to issue refunds to some customers, we're potentially talking about dead people and a pile of smoking rubble where your plant used to be.

Whether the data center side of your infrastructure is safe from DDoS attack or not, your plant likely is not. Therefore, a potential attack, or a potential loss of connectivity has the capability not just of taking down your email and intranet, but your plant.

Along the same lines, industry best practice for security is a "defense in depth" strategy, where your core devices are protected by layers and layers of security. By contrast, a cloud strategy puts your control data directly onto the public internet. If your system has a vulnerability (and it will), then you're going to be risking your people, the environment, your plant, and your reputation.

Industrial network security generally places availability as the highest priority, with security important but secondary. If your controller can cause an unrevealed release of toxic gas when it gets shut down, it's important that the controller continues to function. If that controller stops controlling because a security setting told it to, that's going to be a question to answer later.

A single point of failure can be a good thing or a horrible thing. In September of this year, Amazon AWS suffered an outage. Half a dozen companies that rely on it suffered reduced availability as a result. By contrast, not a single system in my plant noticed. What if you were using AWS to control your plant?

Now that we've covered cloud computing, let's look at Arduino.

Arduino is an open source hardware platform based on Atmel microcontrollers. An Arduino provides a circuit board that lets people use breadboard materials to connect to the microcontroller; a USB programming interface; and a software package that makes it relatively simple to write software for the controller and send your program to the microcontroller.

Benefits? Well, an Arduino comes cheap. You can buy an Arduino board for 20 Euro from arduino.cc.

Downsides? There's a lot, actually.

Let's start from the outside and work our way in.

Arduino I/O isn't protected with optoisolators. The analog I/O isn't filtered. None of it is certified for industrial use, and I'm guessing any insurance company would tear you apart for directly connecting an Arduino to plant devices, especially safety-related ones.

Arduino is "modular" in a sense that you can plug devices called "shields" onto the main board, but this is not modular the same way a Schneider Quantum or an Allen Bradley ControlLogix. You can get a shield onto the board. Once it's on, it's on until you power down, and don't expect to get 16 different kinds of card mounted together.

Arduino isn't hardened on the power side, either. Noisy power could cause problems, and unless you break out a soldering iron and start building a custom solution, multiple power supplies aren't an option.

Arduino isn't protected physically. Admittedly, most PLCs are only IP20 rated, but at least that means you can't stick your fingers on the exposed board, which isn't something you can say for Arduino.

Arduino has a fairly limited amount of logic capacity. The number of things you can do with one are limited to begin with, and become more limited as you add things like TCP/IP for HMI access.

Speaking of logic, don't expect to change logic on the fly without shutting down the controller, or view logic as it is running. That's just not possible.

With Arduino, you're likely to be reinventing the wheel in places. Want Modbus TCP? Hope you've brushed up on your network coding, because you're going to be writing a Modbus TCP routine for everything you'd like to do.
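
To give a feel for what that involves, here's a hypothetical Python sketch of just the framing for a single Modbus TCP "read holding registers" request; on an Arduino you'd be hand-rolling the equivalent in C, plus the socket handling, error checking, and response parsing around it.

```python
import struct

def read_holding_registers_request(transaction_id, unit_id, start_addr, count):
    """Build the raw bytes of a Modbus TCP 'read holding registers' (function 3) request."""
    pdu = struct.pack(">BHH", 3, start_addr, count)  # function code, start address, register count
    # MBAP header: transaction ID, protocol ID (always 0), length (unit ID + PDU), unit ID
    mbap = struct.pack(">HHHB", transaction_id, 0, len(pdu) + 1, unit_id)
    return mbap + pdu

# Ask unit 1 for 10 registers starting at address 0:
frame = read_holding_registers_request(1, 1, 0, 10)
print(frame.hex())   # 00010000000601030000000a
```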

As well, congratulations on your new Arduino installation! The guy who wrote the software quit. Can a regular electrician or instrument guy muddle their way through the software?

I'm not saying you can't build a PLC using an Arduino, or that you can't control a process with one. What I am saying is that if you're doing industrial control, it probably isn't the right tool for the job.

I'm going to save Virtual Machines for another time, because there's a trick in there I want to expand on a bit more.

 

Thanks for reading!

Unity Pro 8.1 Released


December 14, 2014

I'm Jason Firth.

Because of an ongoing support issue we've been having, the folks at Schneider made sure to e-mail me about a recently released point release of Schneider's Unity Pro software.

 

Additions include:

-support for new M580 devices

-Device Integration improvements

-references

-implicit conversions (no more real_to_int and int_to_real everywhere)

-security improvements

 

Interestingly, Unity Pro 8.1 supports Windows 8.1 but not Windows 8.0. Windows XP is no longer supported either.

It's not mentioned anywhere, but there have been other bug fixes as well. I had a problem with animation tables when I built changes to the program, and installing Unity Pro 8.1 solved those problems.

We also had a problem where the program would slow to a crawl if the variable properties window (accessed by pressing ctrl-enter while a variable is selected) is open. This version solves that problem.

Thanks for reading!