Jason K. Firth, C.E.T.

Instrumentation, Control, and Automation

Unacceptable.

Oct 23, 2017

I'm Jason Firth.

 

I'm not normally the sort to comment on current news stories (apparently I'm making a habit of it, though). I would rather my blog not become overtly political, and personally I think the idea of 'picking sides' in partisan nonsense is a great way to lose friends and make enemies.

 

However, when it comes to the stories coming out about Harvey Weinstein's reprehensible behaviour, I'm willing to make an exception.

 

In years past, I wrote about the environment women in technology and the trades might find themselves in. When I said "it won't be fair", I was referring to more passive behaviours I saw: looks that weren't appropriate for the workplace, comments behind people's backs and behind closed doors that weren't the most professional. What I was absolutely NOT referring to is overt sexual harassment or sexual assault in the workplace.

 

I want to be crystal clear: Overt sexual harassment or sexual assault is not acceptable in the workplace, period. There is no "Oh, work your way through it" in such situations -- such a person should face the legal and practical consequences of their actions, and nobody -- male or female -- has any reason to work under such conditions. It is against the law for a supervisor or manager to sexually harass a worker under any and all circumstances. There is no excuse.

In fact, in Ontario, "wanting it" isn't a defence. If a supervisor or manager is making advances toward workers, that's unlawful harassment under the Occupational Health and Safety Act. Speak up. If that supervisor won't listen, another will. We have a legal duty to act.

Thanks for reading.

Do not pass go, do not collect $200.

Aug 29, 2017

I'm Jason Firth.

I don't make it a habit of commenting on local news stories, but this one really got under my skin: A car dealership demanded additional money from a customer after the sale concluded, and when the purchaser refused to comply, they remotely disabled the vehicle.

A consumer rights organization weighed in on consumer protection law, but let's call a spade a spade here: this is a criminal act. Someone should be going to jail over this.

Perhaps you think I'm being melodramatic about this, but hear me out. This dealer accessed computer equipment they had sold -- equipment they no longer owned and were not authorized to access. They did so for the express purpose of following up on a threat they'd made: "either pay us, or we will hack and disable your vehicle."

This is exactly the modus operandi of the WannaCry hackers. They took over systems they did not own, and issued an ultimatum: pay us or lose access to these systems we do not own.

Besides the thinnest veneer of respectability, there is no difference between the two.

Well, there is one difference, but it is without distinction for legal purposes: whereas the WannaCry hackers had to force their way into systems, the auto dealership left a bomb in the car they once owned.

On a few occasions, disgruntled former employees have used old usernames and passwords to get into the systems of former employers. That's still very illegal, and the fact that they had a username and password does not mean they were magically authorized to enter systems they no longer had any business entering.

Both the WannaCry hackers and disgruntled former employees would go to jail for their crimes. The responsible people at this dealership ought to as well.

In the grand scheme of things, this should also be a warning to those of us who are in charge of digital systems: if a car dealership can commit extortion, if they can use a well-laid trap to demand more money, then so can former employees. It's important to revoke permissions immediately when people leave the company, and to do routine audits to find hidden bombs before they can turn into a threat down the line.
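
To make that concrete, here's a minimal sketch of what a routine access audit might look like, assuming you can export the active account list and an HR roster to plain text files. The file names and formats here are invented for illustration:

```python
# A minimal sketch of a routine access audit: compare the accounts that can
# log in against the people who should still have access. The file names and
# formats are invented stand-ins for whatever your systems can export.

active_accounts = set(open("system_accounts.txt").read().split())
current_staff = set(open("hr_roster.txt").read().split())

# Anything active that isn't on the roster is a potential hidden bomb.
for account in sorted(active_accounts - current_staff):
    print(f"Revoke or investigate: {account}")
```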

 

Thanks for reading!

 

Blue skies, green fields

Aug 24, 2017

I'm Jason Firth.

One commonality I notice when people ask me to help solve a problem is that quite often they explicitly limit solutions to "what sort of control systems can we install?"-type queries.

I immediately force myself to ignore the question as presented, because of the limits it puts on the creativity we can use to solve problems.

Occasionally, we can introduce a new and innovative control system to solve a problem, but just as often, we need to take a step back and re-examine the problem. Sometimes we can solve a problem by providing more data to operators, or by making it easier to follow procedure using their current user interface. Sometimes we need to inform rather than control. Sometimes we need to analyze in a new way. Sometimes it's a maintenance problem, and fixing a chronic issue will help. Sometimes there's no problem at all, and things must be operated a certain way for safety or operational reasons.

By looking at problems outside of their ostensible technical scope, we can see the systems involved. We can ask questions we might not have asked otherwise: systems involve processes, equipment, operators, procedures, user interfaces, and control systems. Sometimes the answer comes from looking at the whole picture rather than a small piece.

Looking at problems this way also provides new opportunities. A few years back, I was asked to investigate problems with a certain historian in gathering process-critical data. What I discovered was that we were asking the historian to do something incompatible with its design. Historians consist of dozens of working parts, all of which need to function for data to be saved and retrieved. Instead of fighting the historian to conform, we created a new system consisting of a single simple program with one purpose. Instead of requiring dozens of subsystems to work, suddenly we only needed two: retrieval and storage. Once we created this new system, we were able to extend it to automatically produce files for regulatory reporting -- an unexpected boon which saved the site time and increased accuracy.
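
For illustration, a bare-bones version of that "one simple program, one purpose" idea might look something like this sketch. The data source and file name are invented stand-ins, not the actual system from this story:

```python
# A minimal sketch of a single-purpose logger: retrieve a value, store it.
# read_process_value() stands in for whatever retrieval mechanism the real
# system used (an OPC read, a Modbus poll, etc.).

import csv
import time
from datetime import datetime

def read_process_value():
    # Stand-in for the real data source.
    return 42.0

with open("history.csv", "a", newline="") as f:
    writer = csv.writer(f)
    for _ in range(3):  # a real logger would loop forever
        writer.writerow([datetime.now().isoformat(), read_process_value()])
        time.sleep(1)
```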

This approach also opens new doors for a shop. Many people want their shop to limit its influence to "what control systems can we install?", but a strategy that embraces increased responsibility and increased work in service to other groups creates new opportunities, because it's all connected.

Everyone wants to find a new and innovative and cool control system, but sometimes you need to step back from that well-trodden lot and look in the areas nobody else is looking, where there are blue skies and green fields waiting for someone.

Thanks for reading!

The next road

Aug 19, 2017

I'm Jason Firth.

It's been a long while since I updated, because I've been transitioning into a new role: planning and supervising the instrument shop, and supervising the gas fitters.

The transition from front line worker to front line supervision has meant a whole new set of challenges, and a whole new viewpoint.

As a worker, roadblocks are a nuisance. "They really ought to make this easier," I'd say. We'd all say it. Now, navigating those roadblocks and keeping workers away from them is a big part of my raison d'être. The more I can keep my guys working on jobs, the better job I'm doing.

There are a lot of roadblocks out there, too. From inception, the question of whether work should even be done ought to be answered by supervision and management before a worker is ever close to being assigned the job.

In maintenance planning, there are a number of processes that should exist and be followed to ensure a job is properly vetted. For corrective work, risk analysis can help justify the work. For preventative maintenance, a methodology like Reliability Centred Maintenance can define and justify which work shall be done. For proactive maintenance, there are a number of failure-mode analysis tools which can help dictate what work should be done in response to different unmanaged failures.

Following processes like these can help on two fronts: it helps ensure that front line workers aren't wasting their time on work that is going to be immediately vetoed, and it helps ensure that supervision and management have their finger on the pulse of exactly what is going on and why. Besides that, it ensures that appropriate documentation to support work exists so you can go back as part of a living program and see how your assumptions worked out.

Next up are planning roadblocks. Ideally, you should have all the parts kitted for the job, all the steps identified and correctly documented, and permits prepared in advance as much as possible. If you can also schedule the job and coordinate with operations to get the equipment in question, that's another major roadblock that front-line folks won't have to deal with.

During execution, your best people will have their better nature working against them. People will want help with their personal priorities, but the problem is that if you're focusing on everything, you're focusing on nothing. It's important to keep your people on the task at hand. Those with personal priorities need to enter their work into whatever work management process you have.

Looking at the big picture, the work management process is your most important tool. See the work, prioritize it, plan it, schedule it, execute it. This requires teamwork not just amongst your team, but amongst your site.

The "hey buddy" system is any time someone sidesteps the work management process and tries to get their work done through side channels. This is sometimes appropriate for highly critical work, but usually it isn't. Every job that gets done on the "hey buddy" system is another job that went through the proper channels and got delayed. Every time someone successfully gets their job done this way, it reduces the credibility of the process and increases the number of "hey buddy" jobs.

This is the easiest roadblock for great workers to hit: the traffic jam. A hundred uncontrolled jobs hit at once, and in trying to keep everyone happy by focusing on all of them, none but the simplest jobs get done.

If I'm doing my job right, then everyone should win: the workers should be less stressed out because they can focus just on doing the work safely. Operations should have the right work happening at the right time. Supervision and management can complete their due diligence in preparing work, and a system of continuous improvement should help make the process consistently smoother.

To be honest, although I took the career track change for professional reasons, the reason I get out of bed in the morning (and one of the big reasons I applied for the job) is knowing how difficult life is on the front line when you don't have someone there willing to handle these problems.

As for a different perspective: you get to peek out from the front line and see (or even steer) the path ahead. Instead of being a passive observer of what's coming down the line, you become an active participant.

I'm sure I'll have plenty more to say in the future, but this is what I've learned so far in my crash course on supervision.

Thanks for reading!

All you need to know about PID controllers

Feb 27, 2017

I'm Jason Firth.

 

I recently commissioned this article explaining the function of a PID controller by freelance writer Sophia O'Connor. It's one of a few pieces I've commissioned recently. It's partially a test to see how well commissioning freelancers can work, and partially a public service to get some stuff written about some basic concepts. Enjoy!

 

A proportional-integral-derivative (PID) controller is an instrument used mainly in industrial control applications. A PID controller combines three control actions -- proportional (P), integral (I), and derivative (D) -- to produce a single control signal. The main purpose of using a PID controller is to control speed, temperature, pressure, flow, and other process variables. It can be installed near the control and regulation devices, and it is often monitored through a SCADA system.

How a PID controller works:

As explained above, a PID controller combines three different control actions to perform its task. The main purpose of installing a PID controller is to control a process. A simple machine with only ON and OFF states can sometimes serve this purpose, but when it comes to anything more complex, a PID controller is the tool of choice, providing much finer control over the overall system.

A PID controller is responsible for steering the output so that the process reaches and holds the desired value. The three basic control actions each have their own role in the PID controller, but they all work together toward a common goal. The workings of these actions are explained below:

Functions of the Proportional controller:

The P-controller provides an output proportional to the current error value. It works by comparing the desired setpoint with the actual value obtained through feedback. If the error is zero, the output of the proportional controller is also zero. On its own, a proportional controller leaves a steady-state offset, so it requires manual resetting.

Functions of the Integral controller:

There are certain limitations of the P-controller that the I-controller addresses. It integrates the error over a period of time, providing the corrective action needed to eliminate the steady-state error and drive the error to zero.

Functions of the Derivative controller:

A control system often needs to anticipate the future behaviour of the error as well, which neither the P-controller nor the I-controller can do. The D-controller solves this problem: its output depends on the rate of change of the error with time. It acts as a kick-start for the output, improving the system's response.

 

All of these actions work together to form a complete controller suitable for process control applications.
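
To make the three actions concrete, here's a minimal sketch of a discrete PID loop in Python, assuming a fixed sample time. The gains, setpoint, and toy process model are invented for illustration, not taken from any particular product:

```python
# A minimal sketch of a discrete PID loop. Each update computes the error,
# then sums the proportional, integral, and derivative actions.

class PID:
    def __init__(self, kp, ki, kd, setpoint, dt):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.dt = dt                # fixed sample time in seconds
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, measurement):
        error = self.setpoint - measurement
        self.integral += error * self.dt                    # I: accumulate error over time
        derivative = (error - self.prev_error) / self.dt    # D: rate of change of error
        self.prev_error = error
        # The control signal is the sum of the three actions.
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: drive a crude simulated process toward a setpoint of 50.
pid = PID(kp=2.0, ki=0.5, kd=0.1, setpoint=50.0, dt=0.1)
pv = 20.0
for _ in range(100):
    output = pid.update(pv)
    pv += (output - pv) * 0.05  # toy stand-in for real process dynamics
print(round(pv, 1))
```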

 

 

Thanks for reading!

What does success look like?

Jan 1, 2017

I'm Jason Firth.

 

I recently wrapped up a fairly major project, which I spoke of earlier: implementing a large software package for two sites.

The story of this project can be seen as a tale of two cities, or of two different groups with fundamentally different requirements and, importantly, fundamentally different ideas of what success looks like.

In one city, we have the 10,000 km view. From this viewpoint, the project was a huge success. We successfully designed the project, trained all the users, and deployed the software on time and under budget, and our metrics look great -- thousands of work orders created, thousands closed, a large increase in the number of active users of the software -- and we can all pat ourselves on the back for a job well done.

In another city, we have the close-up view. From this viewpoint, the project was much less successful. The design was clunky and complicated, the training was incomplete and in some cases meaningless, the go-live day was a mess that never really got cleaned up, and time and budget are irrelevant in light of all that, as are the metrics. Congratulations on foisting a broken system on a bunch of unwilling users, who are upset that we've taken their original tools away!

 

Let's look at another project.

Implementing a new control system, we received the spec from engineering. We followed the spec completely, successfully implemented it, documented it, and patted ourselves on the back.

The problem? The specs were for a control system that wasn't going to work. After being implemented, the system was never put into service for any appreciable amount of time, because it didn't correctly control the process.

 

So, which viewpoint is correct?

Both. It all depends on how you define success. That's why it's important to define success properly, encompassing both viewpoints: the micro and the macro. Is the project successful as a project, as something with a beginning, middle, and end, with a budget and concrete goals? Is it successful as an ongoing operation afterwards: will it actually be used, is it acceptably free of defects, does it actually do the intended job? Is it structurally sound: is there ongoing documentation and training, and is continuous improvement set up in the systems at your facility?

 

If you're able to succeed at both the micro level and the macro level, then you've got something that's going to make you look good over time.

Thanks for reading!

Therac-25, a study in the potential risks of software bugs

Dec 6, 2016

I'm Jason Firth.

 

It's unfortunately common to find that people don't appreciate the risks involved with software, as if the fact that the controls are managed by bits and bytes changes the lethal consequences of failure.

A counterpoint to this is the Therac-25, a radiation therapy machine produced by Atomic Energy of Canada Limited -- AECL, for short.

The system had a number of modes, and while switching modes, the operator could continue entering information into the system. If the operator switched modes too quickly, then key steps would not take place, and the system would not be physically prepared to safely administer a dose of radiation to a patient.

Previous models had hardware interlocks which would prevent radiation from being administered if the system was not physically in place. This newer model relied solely on software interlocks to prevent unsafe conditions.

There were at least six accidents involving the Therac-25. Some of these accidents permanently crippled patients or resulted in the need for surgical intervention, and several resulted in deaths from radiation poisoning or radiation burns. One patient had their brain and brainstem burned by radiation, resulting in their death soon after.

There were a number of contributing factors in this tragedy: poor development practices, lack of code review, lack of testing, and of course the bugs themselves. However, rather than focus on the specifics of what caused the tragedy, what I want to show is that what we do is not just computers -- it's where the rubber meets the road, where what happens in our computers meets reality. People who would never dream of opening a relay cabinet and starting to rewire things will think nothing of opening a PLC programming terminal and starting to 'play'.

Secondly, part of the problem was people who didn't realise they were controlling a real physical device. There are things to remember when dealing with physical devices: no matter how quick your control system is, valves can only open and close so fast, motors can only turn so fast, and your amazing control system is only as good as the devices it controls. Because the programmer forgot these are real devices, they failed to take that into account, and people died as a result. This holistic knowledge is why journeyman instrument technicians and certified engineering technologists in the field of instrumentation engineering technology are so valuable. They don't just train on how to use the PLC; they train on how the measurements work, how the signalling works, how the controllers work (whether digital or analog in nature), how final control elements work, and how processes work.

When it comes to control systems, just because you're playing with pretty graphics on the screen doesn't mean you aren't dealing with something very real, and something that can be very lethal if it's not treated with respect.

Another point that's near and dear to my heart comes from one of the details of the failures: when there was a problem, the HMI would display "MALFUNCTION" followed by a number. A major problem was that no operator documentation existed saying what each malfunction number meant. I've said for a long time, in response to people who say "the operator should know their equipment," that we as control professionals ought to make the information available for them to know their equipment. If we don't, we can't expect them to know what's going on under the surface. If the programmer had properly documented his code and the user interface, operators might have understood the problem earlier, preventing lethal consequences.
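
As an illustration of that documentation point, here's a minimal sketch of pairing every fault code with an operator-facing description at the source, so an HMI can never display a bare number. The codes and messages below are invented for illustration; they are not the actual Therac-25 malfunction codes:

```python
# A minimal sketch: every fault code carries a title and an operator action,
# so the display is self-documenting. Codes and text are invented examples.

FAULTS = {
    54: ("Dose rate out of range",
         "Delivered dose differs from setup. Do not re-treat; call physics."),
    64: ("Mode/energy mismatch",
         "Mode changed before hardware finished moving. Abort and re-verify setup."),
}

def describe_fault(code):
    title, action = FAULTS.get(
        code, ("Unknown fault", "Stop and consult the service manual."))
    return f"MALFUNCTION {code}: {title}. {action}"

print(describe_fault(54))
```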

 

Thanks for reading!

 


New Software Release: Schneider Unity Pro 9 -- I mean 10 -- I mean 11!

Jan 14, 2016

I'm Jason Firth.

You will recall that last year I posted about the release of Unity Pro v8.1.

Well, I got an email from a vendor this week about the opportunity to learn about the new version of Unity Pro -- Version 10!

(Wait, did I miss something? What happened to 9?)

I have absolutely no idea what happened to 9. They skipped it. Maybe to keep on track with Windows?

 

Unity Pro V10 supports:

M580 features:

* CCOTF (Configuration Change On The Fly) on M580 local I/O
* Cybersecurity: events log, data integrity, enable/disable services
* System time stamping of application variables
* Device integration: Network Manager

Quantum platform features:

* HART on X80 remote drops
* New Quantum firmware v3.3

Full Excel import/export tool

Audit trail:

* Log in Syslog

ANY_BOOL data type

Supports:

* Windows 7, 32- and 64-bit
* Windows 8.1, 32- and 64-bit
* Windows 10, 32- and 64-bit
* Windows Server 2012

How exciting, right? Well, I went onto the Schneider Electric website to download the latest version, and was shocked to discover that version 10 isn't the latest version!

 

Yes, there's a version 11, out right before Christmas.

Unity Pro V11 supports the new Modicon M580 controllers:

* Support for the new Modicon M580 High End CPUs
* Support for the new Modicon M580 HSBY (hot standby) CPUs
* Support for the LL984 language on Modicon M580
* Quantum Ethernet I/O drops now supported on Modicon M580

Supports:

* Windows 7, 32- and 64-bit
* Windows 8.1, 32- and 64-bit
* Windows 10, 32- and 64-bit
* Windows Server 2012

 

To be honest, these seem like awfully incremental improvements to justify major software version increments.

Alongside Unity Pro v10, there are new firmware images for the Modicon Quantum CPUs, and a major revision of the M580 firmware images for all the new features.

Alongside Unity Pro v11, there are new firmware images for the M580 platform.

Thanks for reading!

Happy New Year! Also, Virtual Machines

Jan 10, 2016

I'm Jason Firth.

 

Happy New Year to everyone!

I promised to talk more about virtual machines last time, and I'm finally following up on that.

First, some definitions.

A virtual machine is basically a machine -- for our purposes, a computing machine -- implemented in software.

A virtual machine may be very simple. Most programmers could bang a simple one out in a few minutes to do basic operations, such as taking a file and performing operations on its contents.
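
As an illustration, here's a toy stack-based virtual machine in Python, of the sort a programmer might bang out in a few minutes. The instruction set is invented for illustration:

```python
# A minimal sketch of a stack-based virtual machine. The "bytecode" is just
# data that this interpreter walks through -- not native machine code.

def run(program):
    stack = []
    for op, arg in program:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == "PRINT":
            print(stack[-1])
    return stack

# "Bytecode" for (2 + 3) * 4 -- prints 20.
run([("PUSH", 2), ("PUSH", 3), ("ADD", None),
     ("PUSH", 4), ("MUL", None), ("PRINT", None)])
```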

There are also much more complicated virtual machines. For example, the Java programming language is compiled into something called 'bytecode'. This bytecode is not executable on your computer by itself; however, your computer has a virtual machine designed specifically for running Java bytecode, the Java Runtime Environment (JRE). That's how Java programs run. One of the benefits is that you can write Java code once for a virtual machine, and then, as long as you have a working virtual machine, you can run that program on any platform, no matter the processor or the operating system.

A much more complicated version of a virtual machine is to have a PC create a virtual machine pretending to be another PC in software. This is sometimes called emulation. There are a number of software packages that do this for us. VMware is probably the most popular one. Windows Server 2008 and onwards has the ability to run virtual machines built in, via Hyper-V. Oracle has VirtualBox. Bochs is a popular open-source emulator.

Intel has included special hardware acceleration for virtual machines (VT-x) in many of its processors. It helps provide direct access to hardware and CPU resources without a software layer in between.

There is a clear benefit to using virtual machines on servers. Basically, you can have one incredibly powerful computer acting like 100 different computers. You can also have a virtual machine running in parallel on different computers, so if there's a hardware failure, a parallel system can immediately take over. In terms of resource utilization, virtual machines have useful benefits: not every virtual server will be using all its resources at all times, so you can run multiple virtual servers, called "instances", on the same amount of hardware that might otherwise run a single dedicated server. Also, if you're selling instances, you can assign more or fewer resources to an instance depending on demand, allowing you to set your prices based on resourcing.

So, how can this possibly apply to PLCs?

Well, the idea is that instead of running PLC logic on a PLC processor, you could run the software in an instance in a data center.

There's no reason why this can't work. Most PLCs today run a proprietary operating system called VxWorks, by a company called Wind River. It's probably the most popular operating system you've never heard of: it's used in spacecraft, in military and commercial aircraft, in communication hardware, and in large-scale electrical infrastructure, to name a few. According to some sources I found on the Internet, this operating system runs just fine in VMware. It's extremely likely that a PLC controller could be made to run in a virtual machine on a server.

Of course, there are reasons why it's not a great idea. After all, a PLC is a purpose-built piece of hardware. It is designed to handle dirty power and extreme temperatures and humidity, and to fail gracefully when it does fail. By contrast, consumer-grade or even business-grade servers are much less robust, are designed with a much shorter running life in mind, and are usually designed with performance in mind over reliability.

There are PLCs that make their decisions on a PC. They've existed for decades; they're called "soft PLCs". NASA, for example, uses National Instruments LabVIEW in this role. So there is a place for these, but as you can see, it's not a simple one-size-fits-all answer.

Now for the trick I mentioned last time.

See, for all this talk of "Should PLCs use Virtual Machines", there's one simple fact it's easy to forget: PLCs are Virtual Machines!

Inside a PLC, there is no giant rack of relays. The original purpose of a PLC was to take that giant cabinet filled with relays and replace it with a CPU running a program that emulates the function of that relay cabinet -- the definition of a virtual machine.
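As a sketch of what that emulation amounts to, here's a toy scan cycle in Python: read inputs, evaluate the rung logic, write outputs, repeat. The I/O names and the single seal-in rung are invented for illustration:

```python
# A minimal sketch of a PLC scan cycle -- the "virtual relay cabinet".

def read_inputs():
    # In a real PLC, this reads the physical input modules.
    return {"start_pb": True, "stop_pb": False}

def write_outputs(outputs):
    # In a real PLC, this drives the physical output modules.
    print(outputs)

outputs = {"motor": False}
for _ in range(3):  # a real PLC repeats this scan forever
    inputs = read_inputs()
    # One "rung": a seal-in circuit. The motor runs if start is pressed or
    # it is already running, as long as stop is not pressed.
    outputs["motor"] = (inputs["start_pb"] or outputs["motor"]) and not inputs["stop_pb"]
    write_outputs(outputs)
```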

Some PLCs have a special custom chip (an ASIC) designed to run the virtual machine logic. However we do it, then, a PLC does, and will almost always, run a virtual machine!

"Almost"?

Well, there's one more spanner in the works. Not every PLC is a virtual machine.

How would this work? Well, if you had a microcontroller and programmed a compiler designed to produce native code from ladder logic or FBD, then uploaded the compiled code to the microcontroller, you'd have a controller running native code, not a virtual machine. That said, no major PLC brand today does that; they all use virtual machines.

 

 

Thanks for reading!