
Happy New Year! Also, Virtual Machines

I’m Jason Firth.

Happy New Year to everyone!

I promised to talk more about virtual machines last time, and I’m finally following up on that.

First, some definitions.

A Virtual Machine is basically a machine (a computing machine, for our purposes) implemented in software.

A virtual machine can be very simple. Most programmers could bang one out in a few minutes to do basic operations, such as reading a file and acting on the instructions contained in it.

There are also much more complicated virtual machines. For example, the Java programming language is compiled into something called ‘bytecode’. This bytecode is not executable on your computer by itself. However, your computer can have a virtual machine designed specifically for running Java bytecode installed on it, called the Java Runtime Environment (JRE). That’s how Java programs run. One of the benefits of this is that you can write Java code once to run on a virtual machine, and then, as long as you have a working virtual machine, you should be able to run that program on any platform, no matter the processor or the operating system.
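
To make the idea concrete, here's a minimal sketch of a virtual machine in the simplest sense: a toy stack-based interpreter for a made-up ‘bytecode’. It has nothing to do with real Java bytecode; it's just an illustration of a machine implemented in software.

```c
#include <stdio.h>

/* A toy stack-based virtual machine for a made-up "bytecode" with five
 * instructions. This is only a sketch of the idea, not any real VM's format. */
enum { OP_PUSH, OP_ADD, OP_MUL, OP_PRINT, OP_HALT };

void run(const int *code)
{
    int stack[64];
    int sp = 0;             /* stack pointer   */
    int pc = 0;             /* program counter */

    for (;;) {
        switch (code[pc++]) {
        case OP_PUSH:  stack[sp++] = code[pc++]; break;
        case OP_ADD:   sp--; stack[sp - 1] += stack[sp]; break;
        case OP_MUL:   sp--; stack[sp - 1] *= stack[sp]; break;
        case OP_PRINT: printf("%d\n", stack[--sp]); break;
        case OP_HALT:  return;
        }
    }
}

int main(void)
{
    /* A "program" for the virtual machine: compute (2 + 3) * 4 and print it. */
    int program[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD,
                      OP_PUSH, 4, OP_MUL, OP_PRINT, OP_HALT };
    run(program);
    return 0;
}
```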

A much more complicated version of a virtual machine is to have a PC create a virtual machine pretending to be another PC in software. This is sometimes called emulation. There are a number of software packages that do this for us. VMware is probably the most popular one. Windows Server 2008 and onwards has the ability to run virtual machines built in, through Hyper-V. Oracle has VirtualBox. Bochs is a popular open source emulator.

Intel has included special hardware acceleration for virtual machines (the VT-x extensions) in many of its processors. It helps provide more direct access to hardware and CPU resources without a software layer in between.

There is a clear benefit to using virtual machines on servers. Basically, you can have one incredibly powerful computer acting like 100 different computers. You can also have the virtual machine running in parallel on different physical computers, so if there’s a hardware failure, a parallel system can immediately take over. In terms of resource utilization, virtual machines have useful benefits: not every virtual server will be using all of its resources at all times, so you can run multiple virtual servers, called “instances”, on the same amount of hardware that might run one server if you dedicated hardware to each. Also, if you’re selling instances, you can assign more or fewer resources to an instance depending on demand, allowing you to set your prices based on the resources used.

So, how can this possibly apply to PLCs?

Well, the idea is that instead of running PLC logic on a PLC processor, you could run the software in an instance in a data center.

There’s no reason why this can’t work. Most PLCs today run a proprietary operating system called VxWorks, by a company called Wind River. It’s probably the most popular operating system you’ve never heard of: it’s used in spacecraft, in military and commercial aircraft, in communication hardware, and in large-scale electrical infrastructure, to name a few. This operating system runs just fine in VMware, according to some sources I found on the Internet. It’s extremely likely that a PLC controller could be made to run in a virtual machine on a server.

Of course, there are reasons why it’s not a great idea. After all, a PLC controller is a purpose-built piece of hardware. It is designed to handle dirty power, extreme temperatures, and humidity. It is designed to fail gracefully when it does fail. By contrast, consumer grade or even business grade servers are much less robust, are designed with a much shorter running life in mind, and are usually designed with more of an eye to performance than to reliability.

There are PLCs that do make their decisions on a PC. They’ve existed for decades; they’re called “soft PLCs”. NASA, for one, uses a soft PLC platform: National Instruments LabVIEW. So there is a place for these, but as you can see, it’s not a simple one-size-fits-all answer.

Now for the trick I mentioned last time.

See, for all this talk of “Should PLCs use Virtual Machines”, there’s one simple fact it’s easy to forget: PLCs are Virtual Machines!

Inside a PLC, there is no giant rack of relays. The original purpose of a PLC was to take that giant cabinet filled with relays and replace it with a CPU running a program that emulates the function of that relay cabinet — the definition of a virtual machine.
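
To make that concrete, here's a minimal sketch of the idea — not any vendor's actual runtime — of a relay cabinet emulated in software: a scan loop that reads inputs, evaluates a little table of "rungs" (each one "output = input A AND input B"), writes outputs, and repeats. The I/O routines are stubbed out.

```c
#include <stdbool.h>

/* A toy "relay cabinet in software". Real PLC runtimes are far richer, but
 * the scan-loop structure is the same: read inputs, solve the logic, write
 * outputs, repeat forever. */

#define NUM_RUNGS 2

struct rung {
    int in_a;     /* index into the input image table  */
    int in_b;
    int out;      /* index into the output image table */
};

static bool inputs[8];    /* input image table  */
static bool outputs[8];   /* output image table */

static const struct rung program[NUM_RUNGS] = {
    { 0, 1, 0 },   /* output 0 = input 0 AND input 1 */
    { 2, 3, 1 },   /* output 1 = input 2 AND input 3 */
};

static void read_inputs(void)   { /* sample field wiring into inputs[] (stubbed) */ }
static void write_outputs(void) { /* drive field wiring from outputs[] (stubbed)  */ }

int main(void)
{
    for (;;) {                      /* the scan cycle */
        read_inputs();
        for (int i = 0; i < NUM_RUNGS; i++)
            outputs[program[i].out] =
                inputs[program[i].in_a] && inputs[program[i].in_b];
        write_outputs();
        /* a real controller would also service communications, timers, etc. */
    }
    return 0;
}
```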

Some PLCs even have a special custom chip, called an ASIC, designed to run that virtual machine logic. This means that however we slice it, a PLC does, and almost always will, run a virtual machine!

“Almost”?

Well, there’s one more spanner in the works. Not every PLC has to be a virtual machine.

How would this work? Well, if you had a microcontroller, and a compiler designed to turn ladder logic or FBD into native code, and you uploaded that compiled code to the microcontroller, you’d have a controller running native code, not a virtual machine. In practice, though, no major PLC brand today does that. They all use virtual machines.

Thanks for reading!


The Cloud, Virtual Machines, and Arduino (Oh my!)

I’m Jason Firth.

I’ve noticed a number of discussions lately about using different technologies that are less expensive than typical controls equipment.

“Why not use Arduino? Why not use virtualized PCs? Why not use the cloud?”

These questions really trouble me.

“Cloud computing” generally refers to computing resources being pooled in third-party data centers, with their operation shown as a “cloud” on network diagrams.

Three current examples of cloud computing platforms are Microsoft Azure, Amazon AWS, and Google Cloud Platform.

Some of the benefits of cloud computing are:

You can immediately spin up or down capacity as required with no capital expenditure. For example, if you’re running a website and your website becomes more and more popular, you can press a button and Amazon will dedicate additional computing resources to your website.

Your resources aren’t tied to a single point of failure like a single server. Most virtualized computers can be swapped between different servers to allow failover without any disruption of services. This can mean that if server hardware is failing, it can be replaced with no impact on the provided services.

Another company can focus on the specifics of managing large IT infrastructure and can carry the risks of investing in individual pieces of IT infrastructure.

By having many huge companies pooling their data infrastructure requirements together, the total pool increases, which provides some safety. For example, it’s very difficult to DDOS Amazon AWS compared to a single corporate file server located in a company head office. In addition, in the event of such attacks, a single company specializing in network infrastructure can have experts on hand to work on such risks, where a single company may not be able to afford an expert for a few servers.

So those are some benefits, but it really troubles me when people honestly suggest this solution, because it shows a lack of appreciation for the gravity of what your control devices do.

What do I mean by this?

Well there’s a number of things.

Before we start with any of the reasons, I need to explain the difference between two types of failure: Revealed and Unrevealed failures.

Revealed failures are failures that are immediately apparent. They shut down devices, or they immediately cause a process upset. These failures are immediately found, and likely immediately fixed.

Unrevealed failures, by contrast, are not immediately apparent.

A few years back I was facilitating Reliability Centered Maintenance analyses. In many of the scenarios we covered, a motor would overheat and fail, and the control systems would shut it down. A relay would fail, and the control system would shut it down. A boiler would reach overpressure, and the control system would shut it down.

Then we started covering the control systems themselves. A failure there wouldn’t immediately harm anything, but when other problems occurred, that’s when the trouble started. Suddenly, the control systems wouldn’t be shutting down failing equipment. In fact, they might be actively working to make things worse.

I’ve witnessed the trouble caused when control systems go down. Operators rely on their control systems more than they realize to tell them exactly what’s going on. If the alarm doesn’t sound, if the screen doesn’t light up, if the flashing light doesn’t flash, then often they’ll have no way to know something bad is happening until something terrible has happened.

So now we have this critical piece of infrastructure we’ve sent to another site, perhaps thousands of kilometers away, that we’re relying on the Internet to maintain connectivity to.

The ability to spin up or spin down CPU resources is sort of a non-starter, in terms of relevant benefits. CPU resources are not usually a bottleneck in PLC or DCS systems. You might have an issue if your PLC scan times are getting too high, but PLCs are already using CPUs that have been obsolete for 15 years; if CPU resources were a primary consideration, that wouldn’t be the case.

The risks associated with controls infrastructure are not the same risks associated with IT infrastructure. That being the case, I don’t think companies like Microsoft, Google, or Amazon are better prepared to manage those risks than companies that deal with the real risks every day. We’re not talking about having to issue refunds to some customers, we’re potentially talking about dead people and a pile of smoking rubble where your plant used to be.

Whether the data center side of your infrastructure is safe from DDoS attack or not, your plant likely is not. Therefore, a potential attack, or a potential loss of connectivity has the capability not just of taking down your email and intranet, but your plant.

Along the same lines, industry best practice for security is a “defense in depth” strategy, where your core devices are protected by layers and layers of security. By contrast, a cloud strategy puts your control data directly onto the public Internet. If your system has a vulnerability (and it will), then you’re risking your people, the environment, your plant, and your reputation.

Industrial network security generally places availability at the highest priority, and security as important, but secondary. If your controller can cause an unrevealed release of toxic gas if it gets shut down, it’s important that the controller continues to function. If that controller stops controlling because a security setting told it to, then that’s going to be a question to answer for later.

A single point of failure can be a good thing or a horrible thing. In September of this year, Amazon AWS suffered an outage. Half a dozen companies that rely on it suffered a reduction in availability as a result. By contrast, not a single system in my plant noticed. What if you were using AWS to control your plant?

Now that we’ve covered cloud computing, let’s look at Arduino.

Arduino is an open source hardware platform based on Atmel microcontrollers. An Arduino provides a circuit board that lets people use breadboard materials to connect to the microcontroller; a USB programming interface; and a software package that makes it relatively simple to write software for the controller and send your program to it.

Benefits? Well, an Arduino comes cheap. You can buy an Arduino board for about 20 euros from arduino.cc.

Downsides? There’s a lot, actually.

Let’s start from the outside and work our way in.

Arduino I/O isn’t protected with optoisolators. The analog I/O isn’t filtered. None of it is certified for industrial use, and I’m guessing any insurance company would tear you apart for directly connecting an Arduino to plant devices, especially safety-related ones.

Arduino is “modular” in the sense that you can plug boards called “shields” onto the main board, but this is not modular the same way a Schneider Quantum or an Allen-Bradley ControlLogix is. You can get a shield onto the board, but once it’s on, it’s on until you power down, and don’t expect to get 16 different kinds of card mounted together.

Arduino isn’t hardened on the power side, either. Noisy power could cause problems, and unless you break out a soldering iron and start building a custom solution, multiple power supplies aren’t an option.

Arduino isn’t protected physically. Admittedly, most PLCs are only IP20 rated, but at least that means you can’t stick your fingers on the exposed board, which isn’t something you can say for Arduino.

Arduino has a fairly limited amount of logic capacity. The number of things you can do with one are limited to begin with, and become more limited as you add things like TCP/IP for HMI access.

Speaking of logic, don’t expect to change logic on the fly without shutting down the controller, or view logic as it is running. That’s just not possible.

With Arduino, you’re likely to be reinventing the wheel in places. Want Modbus TCP? Hope you’ve brushed up on your network coding, because you’re going to be writing a Modbus TCP routine for everything you’d like to do.
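
To give a sense of the wheel you'd be reinventing, here's a rough sketch (in C, since Arduino sketches are essentially C/C++) of just the framing for a single Modbus TCP request type: read holding registers. It leaves out socket handling, timeouts, retries, exception responses, and the response parser you'd also need; the register address and count in main() are arbitrary examples.

```c
#include <stdint.h>
#include <stdio.h>

/* Build a Modbus TCP "read holding registers" (function 0x03) request frame:
 * MBAP header (7 bytes) + PDU (5 bytes) = 12 bytes. Multi-byte fields are
 * big-endian. */
static int build_read_holding_regs(uint8_t *buf, uint16_t transaction_id,
                                   uint8_t unit_id, uint16_t start_addr,
                                   uint16_t quantity)
{
    buf[0]  = transaction_id >> 8;   /* transaction identifier        */
    buf[1]  = transaction_id & 0xFF;
    buf[2]  = 0;                     /* protocol identifier: 0 = Modbus */
    buf[3]  = 0;
    buf[4]  = 0;                     /* length: unit id + PDU = 6 bytes */
    buf[5]  = 6;
    buf[6]  = unit_id;               /* unit identifier               */
    buf[7]  = 0x03;                  /* function: read holding registers */
    buf[8]  = start_addr >> 8;
    buf[9]  = start_addr & 0xFF;
    buf[10] = quantity >> 8;
    buf[11] = quantity & 0xFF;
    return 12;                       /* bytes to send over the TCP socket */
}

int main(void)
{
    uint8_t frame[12];
    int len = build_read_holding_regs(frame, 1, 1, 0, 10);

    for (int i = 0; i < len; i++)    /* dump the frame so you can inspect it */
        printf("%02X ", frame[i]);
    printf("\n");
    return 0;
}
```

And that's only one request type, one direction, with no error handling: exactly the kind of plumbing a commercial PLC gives you for free.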

As well, congratulations on your new Arduino installation! The guy who wrote the software quit. Can a regular electrician or instrument guy muddle their way through the software?

I’m not saying you can’t build a PLC using an Arduino, or that you can’t control a process with one. What I am saying is that if you’re doing industrial control, it probably isn’t the right tool for the job.

I’m going to save Virtual Machines for another time, because there’s a trick in there I want to expand on a bit more.

Thanks for reading!


Really Disappointing

[Note: This situation changed since I wrote this in 2014/2015]

I’m Jason Firth.

I want to talk a bit about a trend I’ve been seeing.

First, some background to understand where I’m coming from.

Downloading a piece of software on the Internet is a real gamble.

Of course, some software contains viruses meant to destroy your PC, but that’s a relatively small amount. To really understand where the software that will really harm you comes from, you need to follow the money. Some people make money stealing proprietary secrets from companies, or credit card numbers, or passwords for paid accounts on services like Netflix. Other people make money selling advertising displayed through unscrupulous means. Still others make their money controlling “botnets”: large numbers of computers (owned by normal people) infected with worms that take control of them, letting buyers direct those machines to flood a server with specially crafted packets intended to tie up resources like bandwidth and memory and prevent it from operating, in an attack called a “Distributed Denial of Service”, or DDoS.

There are a few ways you can be infected (these are called ‘vectors’). In 1999, my first website was hosted on a free web host, and I visited my own site to find it was trying to install a spyware program on my PC (I soon after started paying for ad-free hosting). In 2000, the ILOVEYOU worm spread through e-mail, taking advantage of how readily Outlook at the time would run a script arriving in a message. In 2004, the Sasser worm could infect a PC that was simply connected to the Internet, without the owner of the computer doing anything wrong. Along the way, programs that were (or appeared to be) useful started being bundled with a new kind of pest that served advertising content even when the program wasn’t running. Some applications installed this software without any indication that they were doing anything. Others would try to hide the request to install it behind deceptive wording or deceptive button placement. Either way, once installed, the new program would display advertising on your PC even when you weren’t using the application that installed it; in fact, sometimes even after the original program was uninstalled.

My first job was as a registered apprentice helpdesk support analyst for a school board. One of the challenges they faced at the time was removing the programs installed by peer to peer file sharing programs (it was a more innocent time before such computers were completely locked down). These programs would cause web advertisements to appear, and used resources that the old Pentium 133MHz machines with 32MB of RAM couldn’t really afford to give up.

If you want to be assured that software from the Internet is safe, you have only a few options. You can download only from trusted proprietary sources like Microsoft. You can just not download anything at all. A third option, for a long time, was free and open source software. This software is written by hobbyists or by companies who want to build an open platform, and is licensed so that anyone who wants to use it can, and anyone can access the source code and modify it, so long as they release their modifications for use by others. (There are other free licenses which are broader, but we’ll discuss this one for now.) These programs didn’t usually come with problem software, because anyone could check the code and compile a clean version from scratch.

Unfortunately, something has changed. The largest distributor of open source software, SourceForge, changed owners a few years back, and now they encourage top projects to try to install this problem software on their users’ computers.

I’ve contributed to open source projects over the years. I’ve written documentation, I’ve written code, I’ve even started projects. I believe in the idea. To see big projects that people like me have put their heart and soul into and to have it used to try to unscrupulously infect other people’s computers, I consider that criminal, and I consider it a tragedy.

I consider it criminal because any consent could only possibly be gained through deception: no reasonable person would allow such software to be installed on their computer. “Hey, want me to put advertising on every webpage you visit, even the ones without advertising? Want me to randomly pop up advertising all the time? Want all this to happen even when you’re not using my program?” Of course the answer is ‘no’. I’ve seen them get an “I agree” click by using deceptive language (“Click if you don’t want to unagree that we won’t not install the software”), or by using deceptive button placement (“next next next next next I agree to be infected with a virus”). I don’t consider this access to one’s computer legitimately authorized, and thus taking over people’s computers this way is a cyber-crime. Governments around the world appear to agree with me, because laws are being passed all the time to make it perfectly clear to the courts that this sort of behaviour isn’t acceptable.

As for why I consider it a tragedy, imagine the passion of people contributing to their favourite open source projects, working for no compensation. Their reward? To have the project maintainer try to take over their computer for monetary gain. That’s tragic. It really is the tragedy of the commons here, where someone realized they could burn it all to the ground to make a buck. That’s very sad.

Anyway, back to instrumentation next time. I just had to vent about the end of an era: The era where you could at least trust an open source project.

Thanks for reading!


sqlcmd: A means of running SQL commands from the command line

I’m Jason Firth.

Last time, I posted about openness. One of the ways openness can help everyone is by providing flexibility to do things that may not have otherwise been possible.

At this point, many software packages use Microsoft SQL Server as a front-end. PI Historian and Wonderware Historian, for example, both use SQL Server as a front-end.

This provides some really neat opportunities. You can automate the retrieval or analysis of data from the historian, for example. Visual Studio Express is available for free, and includes all the APIs for communicating with an SQL server.

Let’s say you don’t want to do anything that complicated. What if you just want to run a simple query and spit out a simple table?

If you’re running a computer with SQL Server, or the free-to-download-and-use SQL Server Express, you can use the sqlcmd command from the command line.

You can use the command line “sqlcmd -S [protocol:]server[\instance_name][,port] -U [userid] -P [password]” to connect to an interactive command-line session, but the really interesting part is that you can use “-i input_file[,input_file2…]” and “-o output_file” to automate the running of certain queries.

The input file is a script written in Transact-SQL (that “SELECT * FROM … WHERE” stuff).
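
As a quick example (the server name, credentials, file names, and the table and column names here are all made up; check your historian’s documentation for its actual schema), you might put a query like this in a file called query.sql:

```sql
-- query.sql: pull the ten most recent values for one tag
-- (table and column names are illustrative only)
SELECT TOP 10 TagName, DateTime, Value
FROM History
WHERE TagName = 'FIC101.PV'
ORDER BY DateTime DESC;
```

and then run it, sending the results to a text file, with something like:

```
sqlcmd -S MYSERVER -U myuser -P mypassword -i query.sql -o results.txt
```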

Knowing this, you can pull and manipulate data from the command line, or from batch files. It may not be something you use for everything on a regular basis, but it’s a great little tool to have in your back pocket for those times when you need to get a little script going quickly.

Thanks for reading!


On Openness

I’m Jason Firth.

Merry Christmas, and happy new year!

On my “About me” page, I wrote: “With this blog, I have a few goals: I’m hoping to get some of that information together so control professionals from all over can use it. I’m hoping to take some of the extremely cryptic academic work out there and simplify it for industry.”

Recently, I was speaking with someone from the aaOpenSource project, which was started in part by the guy at the Archestranaut blog over at Avid Solutions. I definitely recommend the blog. It isn’t always updated, but when it is, there’s some great information there.

One thing we both agreed on was that this industry needs more openness and sharing.

I started my “programming career”, such as it is, in open source. I started off by learning GW-BASIC, then progressed to QBasic, then learned Visual Basic and C++ and a bunch of other programming languages afterwards. It might be a bit sacrilegious to the hardcore programmers out there, but I always enjoyed BASIC, because compared to many other programming environments, you don’t need to micromanage as much. The runtime library contains most things you’d need for simple programs, so you don’t need to manage library binaries or header files. Eventually, I ended up using the FreeBASIC project. It’s very much like a C++ compiler with a very comprehensive runtime library built in. I ended up contributing a small amount of code, and working as much as I could to improve the documentation for new users.

No matter what programming language I was learning, whether it was GW-BASIC or C++ or assembly or PHP, open code was a crucial piece of my learning experience. It was much easier to ask “what does correct code look like?” than to try to decipher sometimes archaic documentation. Having a library of code snippets to call upon means you can focus on solving the novel parts of your problem, rather than reinventing the wheel.

Two pieces of code I was particularly proud of improving upon back in high school were a graphics routine and a keyboard handler.

In DOS programming, and particularly real-mode DOS programming, you end up having to handle your graphics manually to a large degree. I found some code demonstrating a simple pixel-set routine for 320×240, a video mode called “Mode X”. It has some really cool features, such as allowing you to draw to an off-screen part of video memory while showing a different part of it. This is called “double buffering” when there is one onscreen page and one offscreen page, but Mode X supports two offscreen pages and one onscreen page, called “triple buffering”. The most difficult part of making this run quickly is that there’s all sorts of insanity in how you write pixels properly. You have four “planes” you have to write to, and each plane has the graphic laid out in an odd way. The original code showed me how to initialize the video mode, but its code for placing a dot on the screen involved calculating the memory location (involving a multiply and a divide) and setting the plane for every single pixel. After months of staring at the code, I came up with a clever way of writing an entire plane’s pixels in one pass using only additions, so I could write an entire screen with only four plane switches. Without the original code showing how Mode X worked, I would have had nothing to start from, and I probably wouldn’t have gone near such an arcane video mode without some sample code. Without open documentation, the person who wrote that code probably never would have had a place to start, either.
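
For anyone curious, here's roughly what the core of that kind of pixel routine looks like — a hedged sketch, not my optimized version, assuming a Borland-style real-mode DOS compiler that provides outportb() and MK_FP():

```c
#include <dos.h>   /* outportb(), MK_FP() - Borland-style real-mode compiler assumed */

#define SC_INDEX   0x3C4           /* VGA sequencer index port    */
#define MAP_MASK   0x02            /* sequencer map mask register */
#define SCREEN_W   320

static unsigned char far *vga = (unsigned char far *)MK_FP(0xA000, 0);

/* Plot one pixel in unchained 320x240 "Mode X". The screen is split across
 * four planes: plane = x mod 4, and each plane holds every fourth pixel,
 * so a row is only 80 bytes wide within each plane. */
void putpixel_modex(int x, int y, unsigned char color)
{
    outportb(SC_INDEX, MAP_MASK);           /* select the map mask register   */
    outportb(SC_INDEX + 1, 1 << (x & 3));   /* enable writes to plane x mod 4 */
    vga[y * (SCREEN_W / 4) + (x >> 2)] = color;
}
```

The plane switch on every pixel is exactly the overhead my optimized version avoided by filling a whole plane at a time.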

Another challenge is key detection. For multikey applications, you have to capture each key press and release to determine the keyboard state. To accomplish this, you must install an interrupt handler to replace the existing DOS key handler, which only captures one key at a time, and then continuously poll the recorded key states from your program. I found some novel tweaks to the code that allowed more accurate polling, recording, and retrieval of multikey values. Without the original code showing how raw keyboard handling worked, I would have had nothing to start from, and I probably wouldn’t have attempted any sort of continuous multikey detection without some sample code. Without open documentation, the person who wrote that code probably never would have had a place to start, either.
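
Again as a sketch rather than the exact code, and again assuming a Borland-style real-mode DOS compiler, a classic replacement INT 9 handler looks something like this:

```c
#include <dos.h>   /* getvect(), setvect(), inportb(), outportb(), interrupt keyword */

#define KEY_PORT  0x60             /* keyboard controller data port         */
#define PIC_CMD   0x20             /* programmable interrupt controller     */
#define EOI       0x20             /* end-of-interrupt command              */

static volatile unsigned char key_down[128];   /* one flag per scancode     */
static void interrupt (*old_int9)(void);

/* Replacement INT 9 handler: records every press and release so the main
 * program can see the whole keyboard state, not just one key at a time. */
static void interrupt new_int9(void)
{
    unsigned char scancode = inportb(KEY_PORT);

    if (scancode & 0x80)
        key_down[scancode & 0x7F] = 0;   /* high bit set: key released */
    else
        key_down[scancode] = 1;          /* key pressed                */

    outportb(PIC_CMD, EOI);              /* tell the PIC we handled it */
}

void install_keyboard(void) { old_int9 = getvect(9); setvect(9, new_int9); }
void restore_keyboard(void) { setvect(9, old_int9); }
```

The main loop then just reads key_down[] whenever it wants the current keyboard state.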

These are small programming problems, but they’re how I started to learn. Without the documentation and open code, I never would have had a place to start, and never would have learned the fundamentals I use to solve problems on a regular basis today, a decade later.

Our industry, too, is built upon certain open standards: the PID, for example, or Ziegler-Nichols tuning, or 4-20 mA, or 3-15 PSI. Everyone who learns the trade needs to learn these things, and by learning them, doesn’t need to reinvent the wheel later.
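
As a tiny illustration of how universal those shared standards are, here's a sketch of the linear scaling calculation behind every 4-20 mA loop (the 0-150 degree range is just an example):

```c
#include <stdio.h>

/* Convert a 4-20 mA transmitter signal to engineering units by linear
 * scaling: 4 mA = low range value, 20 mA = high range value. */
double scale_4_20(double milliamps, double range_lo, double range_hi)
{
    return range_lo + (milliamps - 4.0) * (range_hi - range_lo) / 16.0;
}

int main(void)
{
    /* A 12 mA signal on a 0-150 degC transmitter is exactly mid-scale. */
    printf("%.1f degC\n", scale_4_20(12.0, 0.0, 150.0));   /* prints 75.0 */
    return 0;
}
```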

One thing that should be immediately obvious is that all those standards are from 40 years ago. In some ways, it’s like our trade hit a time warp, and although we’re seeing more and more new technology, it’s all a black box. Some specialized experts understand them, but they’re in the minority.

I come from a few industries where people believe that if you hoard information, and ration it out in little bits, that’s how you stay valuable. I don’t believe that. I believe that the way we stay relevant is by proving to the world all the interesting ways we can provide value to their organizations. We’re tube benders, but we’re not just tube benders. We’re cable pullers, but we’re not just cable pullers. We’re calibrators, but we’re not just calibrators. We’re documenters, but we’re not just documenters. We’re programmers, but we’re not just programmers. We’re electronics techs, but we’re not just electronics techs. I could go on all day, because our trade and profession is so broad that we end up with a view that is equally broad. Instead of being jealous and trying to protect this information, we should be teachers, trying to help each other, and the other disciplines, become better. If we try to go it alone, to fend for ourselves, then we’re going to be swept away by the tide of all the new stuff we need to keep on top of.

That’s one big reason why I wanted to start writing this blog, because I was able to build upon the work of others, and I’d like to continue to do that. Together, we can build the future.

Thanks for reading!