Jason K. Firth, C.E.T.

Instrumentation, Control, and Automation

The next road

August 19, 2017

I'm Jason Firth.

It's been a long while since I updated, because I've been transitioning into a new role: planning and supervising the instrument shop, and supervising the gas fitters.

The transition from front line worker to front line supervision has meant a whole new set of challenges, and a whole new viewpoint.

As a worker, roadblocks are a nuisance. "They really ought to make this easier", I'd say. We'd all say it. Now, navigating those roadblocks and keeping workers away from them is a big part of my raison d'être. The more I can keep my guys working on jobs, the better job I'm doing.

There are a lot of roadblocks out there, too. From inception, the question of whether work should even be done ought to be answered by supervision and management before a worker is ever even close to being assigned the job.

In maintenance planning, there are a lot of processes that should exist and be followed to ensure the job is properly vetted. For corrective work, risk analysis can help justify work. For preventative maintenance, a methodology like Reliability Centred Maintenance can define and justify which work shall be done. For proactive maintenance, there are a number of failure mode analysis tools which can help dictate what work should be done in response to different unmanaged failures.

Following processes like these helps on two fronts: it helps ensure that front line workers aren't wasting their time on work that is going to be immediately vetoed, and it helps ensure that supervision and management have their finger on the pulse of exactly what is going on and why. Besides that, it ensures that appropriate documentation exists to support the work, so you can go back as part of a living program and see how your assumptions worked out.

Next up are planning roadblocks. Ideally, you should have all the parts kitted for the job, all the steps identified and correctly documented, and permits prepared in advance as much as possible. If you can schedule the job as well and coordinate with operations to get the equipment in question, that's another major roadblock that front-line folks won't have to deal with.

During execution, your best people will have their better nature working against them. People will want help with their personal priorities, but the problem is that if you're focusing on everything, you're focusing on nothing. It's important to keep your people on the task at hand. Those who have personal priorities need to enter their work into whatever work management process you have.

Looking at the big picture, the work management process is your most important tool. See the work, prioritize it, plan it, schedule it, execute it. This requires teamwork not just amongst your team, but amongst your site.

The "hey buddy system" is any time where someone sidetracks the work management process and tried to get their work done through side channels. This is sometimes appropriate for high criticality work, but usually it isn't appropriate. Every job that gets done on the "hey buddy system" is another job that went through the proper channels that got delayed. When someone successfully gets their job done this way, it reduces the credibility of the process, and increases the number of "hey buddy" jobs done.

This is the easiest roadblock for great workers to hit: the traffic jam. A hundred uncontrolled jobs hit at once, and in trying to keep everyone happy by focusing on all these jobs, none but the simplest jobs get done.

If I'm doing my job right, then everyone should win: the workers should be less stressed out because they can focus just on doing the work safely. Operations should have the right work happening at the right time. Supervision and management can complete their due diligence in preparing work, and a system of continuous improvement should help make the process consistently smoother.

To be honest, although I took the career track change for professional reasons, the reason I get out of bed in the morning (and one of the big reasons I applied for the job) is knowing how difficult life is on the front line when you don't have someone there willing to handle these problems.

As for the different perspective: you get to peek out from the front line and see (or even steer) the path ahead. Instead of being a passive observer of what's coming down the line, you can become an active participant.

I'm sure I'll have plenty more to say in the future, but this is what I've learned so far in my crash course on supervision.

Thanks for reading!

All you need to know about PID controllers


February 27, 2017

I'm Jason Firth.

 

I recently commissioned this article explaining the function of a PID controller by freelance writer Sophia O'Connor. It's one of a few pieces I've commissioned recently. It's partially a test to see how well commissioning freelancers can work, and partially a public service to get some stuff written about some basic concepts. Enjoy!

 

A proportional-integral-derivative (PID) controller is an instrument used mainly in industrial control applications. A PID controller combines three control actions, proportional (P), integral (I), and derivative (D), into a single control signal. The main purpose of using a PID controller is to control speed, temperature, pressure, flow, and other process variables. It can be installed close to the devices it regulates, and it is commonly monitored through a SCADA system.

How a PID controller works:

As explained above, a PID controller combines three different control actions that work together to perform the control task. The main purpose of installing a PID controller is to keep the process under control. For a simple application, a machine with basic ON/OFF control can do the job. When it comes to something more complex, however, a PID controller is the tool to reach for, because it provides much finer control over the overall system.

A PID controller is responsible for controlling the output and driving the process toward the desired value. The three basic control actions each do their own job within the PID controller, but they all work together to achieve a common goal. The function of each is explained below:

Functions of the Proportional controller:

The proportional action provides an output that is proportional to the current error value. Its main job is to compare the desired setpoint with the actual value obtained through the feedback process. If the error is zero, the proportional output is also zero, which is why a proportional-only controller leaves a steady-state offset and requires a manual reset.

Functions of the Derivative controller:

A control system also needs to anticipate future behaviour, which neither the proportional nor the integral action can do. The derivative action solves this problem: its output depends on the rate of change of the error over time. It acts as a kick start for the output, improving the system's response.

Functions of the Integral controller:

The proportional action has certain limitations that the integral action makes up for. The integral action is needed because it provides the corrective action required to eliminate the steady-state error: it integrates the error over time and keeps adjusting the output until the error is driven to zero.

 

All of these actions work together to form a complete controller that can be used in process control applications.
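As a rough illustration of how the three terms combine, here is a minimal sketch of a discrete PID loop in Python. The gains, sample time, and the toy process at the bottom are made-up placeholders for illustration, not values tuned for any real loop.

    class PID:
        """Minimal discrete PID controller (illustrative sketch only)."""

        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.dt = dt                 # sample time in seconds
            self.integral = 0.0
            self.prev_error = 0.0

        def update(self, setpoint, measurement):
            error = setpoint - measurement
            p = self.kp * error                                  # proportional: current error
            self.integral += error * self.dt                     # integral: accumulated error,
            i = self.ki * self.integral                          #   removes steady-state offset
            d = self.kd * (error - self.prev_error) / self.dt    # derivative: rate of change of error
            self.prev_error = error
            return p + i + d

    # Hypothetical usage: drive a made-up first-order process toward a setpoint of 50.
    pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=1.0)
    pv = 20.0
    for _ in range(10):
        output = pid.update(setpoint=50.0, measurement=pv)
        pv += 0.05 * output    # crude stand-in for the real process response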

 

 

Thanks for reading!

What does success look like?


Jan 1st, 2017

I'm Jason Firth.

 

I recently wrapped up a fairly major project, which I spoke of earlier: Implementing a large software package for 2 sites.

The story of this project can be seen as a tale of two cities, or of two different groups with fundamentally different requirements, and importantly fundamentally different ideas of what success looks like.

In one city, we have the 10,000 km view. From this viewpoint, the project was a huge success. We successfully designed the project, successfully trained all the users, successfully deployed the software on time and under budget, and our metrics look great -- thousands of work orders created, thousands closed, a large increase in the number of active users of the software, and we can all pat ourselves on the back for a job well done.

In another city, we have the close-up view. From this viewpoint, the project was much less successful. The design was clunky and complicated, the training was incomplete and in some cases meaningless, the go-live day was a mess which never really got cleaned up, time and budget are irrelevant because of the former, as are the metrics. Congratulations on foisting a broken system on a bunch of unwilling users, who are upset that we've taken their original tools away from them!

 

Let's look at another project.

Implementing a new control system, the spec comes back from engineering. We followed the spec completely, successfully implemented it, documented it, and patted ourselves on the back.

The problem? The specs were for a control system that wasn't going to work. After being implemented, the system was never put into service for any appreciable amount of time, because it didn't correctly control the process.

 

So, which viewpoint is correct?

Both. It all depends on how you define success. That's why it's important to define success properly to encompass both viewpoints: the micro and the macro. Is the project successful as a project, as something with a beginning, middle, and end, with a budget and concrete goals? Is it successful as an ongoing operation afterwards: will it actually be used, is it acceptably free of defects, does it actually do the intended job? Is it structurally sound: is there ongoing documentation and training, and is continuous improvement set up in the systems at the facility you're at?

 

If you're able to succeed on the micro level, and at the macro level, then you've got something that's going to make you look good over time.

Thanks for reading!

Therac-25, a study in the potential risks of software bugs


December 6th, 2016

I'm Jason Firth.

 

It's unfortunately common to find that people don't appreciate the risks involved with software, as if the fact that the controls are managed by bits and bytes changes the lethal consequences of failure.

A counterpoint to this is the Therac-25, a radiation therapy machine produced by Atomic Energy of Canada Limited -- AECL, for short.

The system had a number of modes, and while switching modes, the operator could continue entering information into the system. If the operator switched modes too quickly, then key steps would not take place, and the system would not be physically prepared to safely administer a dose of radiation to a patient.

Previous models had hardware interlocks which would prevent radiation from being administered if the system was not physically in place. This newer model relied solely on software interlocks to prevent unsafe conditions.
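To make the idea concrete, here is a deliberately simplified sketch, in Python, of the general class of bug being described: a software-only interlock that checks its own internal variables rather than the physical state of the machine. This is not the actual Therac-25 code or its real failure mechanism in detail, just a hypothetical illustration of why a check in software is not the same as an interlock in hardware.

    import threading
    import time

    # Hypothetical illustration only: a software interlock that trusts its own
    # variables while the physical hardware lags behind the operator's edits.

    hardware_position = "electron"   # where the beam-shaping hardware physically is
    selected_mode = "electron"       # what the operator has selected
    assumed_position = "electron"    # where the software assumes the hardware is

    def move_hardware(target):
        """The physical hardware is slow; it takes seconds to reach the target."""
        global hardware_position
        time.sleep(2.0)
        hardware_position = target

    def operator_selects(mode):
        """The operator edits the mode; the software optimistically updates its own state."""
        global selected_mode, assumed_position
        selected_mode = mode
        assumed_position = mode      # updated when the move is commanded, not when it completes
        threading.Thread(target=move_hardware, args=(mode,)).start()

    def fire_beam():
        # Software-only "interlock": compares two software variables, never the hardware.
        if selected_mode == assumed_position:
            print("interlock satisfied; firing while hardware is actually at", hardware_position)
        else:
            print("interlock blocked the beam")

    operator_selects("x-ray")   # operator switches modes quickly...
    fire_beam()                 # ...and proceeds before the hardware has caught up

A hardware interlock, by contrast, measures the physical position and does not care what the software believes.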

There were at least 6 accidents involving the Therac-25. Some of these accidents permanently crippled the patients or resulted in the need for surgical intervention, and several resulted in deaths by radiation poisoning or radiation burns. One patient had their brain and brainstem burned by radiation, resulting in their death soon after.

There were a number of contributing factors in this tragedy: poor development practices, lack of code review, lack of testing, and of course the bugs themselves. However, rather than focus on the specifics of what caused the tragedy, what I want to show is that what we do is not just computers -- it's where rubber meets road, and where what happens in our computers meets reality. People who would never dream of opening a relay cabinet and starting to rewire things would think nothing of opening a PLC programming terminal and starting to 'play'.

Secondly, part of the problem was people who didn't realise that they were controlling a real physical device. There are things to remember when dealing with physical devices: for example, that no matter how quick your control system is, valves can only open and close so fast, motors can only turn so fast, and your amazing control system is only as good as the devices it controls. Because the programmer forgot that these are real devices, that reality wasn't taken into account, and people died as a result. This holistic knowledge is why journeyman instrument technicians and certified engineering technologists in the field of instrumentation engineering technology are so valuable. They don't just train on how to use the PLC; they train on how the measurements work, how the signalling works, how the controllers work (whether they are digital or analog in nature), how final control elements work, and how processes work.

When it comes to control systems, just because you're playing with pretty graphics on the screen doesn't mean you aren't dealing with something very real, and something that can be very lethal if it's not treated with respect.

Another point that's near and dear to my heart comes in one of the details of the failures: when there was a problem, the HMI would display "MALFUNCTION" followed by a number. A major problem with this is that no operator documentation existed saying what each malfunction number meant. I've said for a long time, in response to people who say "the operator should know their equipment", that we as control professionals ought to make the information available for them to know their equipment. If we don't, we can't expect them to know what's going on under the surface. If the programmer had properly documented the code and the user interface, there may have been a chance operators would have understood the problem earlier, preventing lethal consequences.

 

Thanks for reading!

 

full report

New Software Release: Schneider Unity Pro 9 -- I mean 10 -- I mean 11!


January 14, 2016

I'm Jason Firth.

You will recall that last year I posted about the release of Unity Pro v8.1.

Well, I got an email from a vendor this week about the opportunity to learn about the new version of Unity Pro -- Version 10!

(Wait, did I miss something? What happened to 9?)

I have absolutely no idea what happened to 9. They skipped it. Maybe to keep on track with Windows?

 

Unity Pro V10 supports

M580 features

* CCOTF (Configuration Change On The Fly) on M580 local I/Os

* Cybersecurity: Events log, Data Integrity, Enable/Disable Services

* System Time Stamping of Application Variables

* Device Integration: Network Manager

Quantum Platform Features

* HART on X80 remote drops

* New Quantum firmware v3.3

Full Excel Import/export tool

Audit Trail

* Log in Syslog

ANY_BOOL Data Type

Supports:

Win 7 32 and 64 bit;

Win 8.1 32 and 64 bit;

Win 10 32 and 64 bit; and

Windows Server 2012.

How exciting, right? Well, I went onto the schneider-electric website to download the latest version, and was shocked to discover that version 10 isn't the latest version!

 

Yes, there's a version 11, out right before Christmas.

Unity Pro V11 supports the new Modicon M580 controllers:

Support new Modicon M580 High End

Support new Modicon M580 HSBY CPUs

Support LL984 language on Modicon M580

Quantum Ethernet I/O drops are now supported on Modicon M580

Supports:

Win 7 32 and 64 bit;

Win 8.1 32 and 64 bit;

Win 10 32 and 64 bit; and

Windows Server 2012.

 

To be honest, these seem like awfully incremental improvements to justify major software version number increments.

Alongside Unity Pro v10, there are new firmware images for the Modicon Quantum CPUs, and a major revision of the M580 firmware images for all the new features.

Alongside Unity Pro v11, there are new firmware images for the M580 platform.

Thanks for reading!

Happy New Year! Also, Virtual Machines


January 10, 2016

I'm Jason Firth.

 

Happy New Year to everyone!

I promised to talk more about virtual machines last time, and I'm finally following up on that.

First, some definitions.

A virtual machine is basically a machine (a computing machine, in the context we're about to talk about) implemented in software.

A virtual machine may be very simple. Most programmers could bang a simple one out in a few minutes to do very simple operations, such as taking a file and carrying out the operations described in that file.
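To make that concrete, here is a sketch of the kind of toy virtual machine a programmer might bang out quickly: a tiny interpreter, written here in Python, for a handful of made-up instructions. The opcodes are invented purely for illustration.

    # A toy "virtual machine": a tiny interpreter for a made-up instruction set.
    def run(program):
        stack = []
        for op, *args in program:
            if op == "PUSH":        # push a literal value onto the stack
                stack.append(args[0])
            elif op == "ADD":       # pop two values, push their sum
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
            elif op == "MUL":       # pop two values, push their product
                b, a = stack.pop(), stack.pop()
                stack.append(a * b)
            elif op == "PRINT":     # print the value on top of the stack
                print(stack[-1])
        return stack

    # Run a small "program": (2 + 3) * 4
    run([("PUSH", 2), ("PUSH", 3), ("ADD",), ("PUSH", 4), ("MUL",), ("PRINT",)])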

There are also much more complicated virtual machines. For example, the Java programming language is compiled into something called 'bytecode'. This bytecode is not executable on your computer by itself. However, your computer has a virtual machine designed specifically for running Java bytecode loaded on it: the Java Virtual Machine (JVM), which ships as part of the Java Runtime Environment (JRE). That's how Java programs run. One of the benefits of this is that you can write Java code once to run on a virtual machine, and then, as long as you have a working virtual machine, you should be able to run that program on any platform, no matter the processor or the operating system.

A much more complicated version of a virtual machine is to have a PC create a virtual machine pretending to be another PC in software. This is sometimes called emulation. There are a number of software packages that do this for us. VMware is probably the most popular one. Windows Server 2008 and onwards has the ability to run virtual machines built in. Oracle has VirtualBox. Bochs is a popular open source program for virtualization.

Intel has included special hardware acceleration of virtual machines for server machines. It helps provide direct access to hardware and CPU resources without having a software layer in between.

There is a clear benefit to using virtual machines on servers. Basically, you can have one incredibly powerful computer that is acting like 100 different computers. You can also have the virtual machine running in parallel on different computers so if there's a hardware failure, a different parallel system can immediately take over. In terms of resource utilization, virtual machines have useful benefits: not every virtual server will be using all resources at all times, so you can run multiple virtual servers, called "instances" on the same amount of hardware that might run one server if you had dedicated hardware for each server. Also, if you're selling instances, you can assign more or less resources to that instance depending on the demand, allowing you to set your prices based on resourcing.

So, how can this possibly apply to PLCs?

Well, the idea is that instead of running PLC logic on a PLC processor, you could run the software in an instance in a data center.

There's no reason why this can't work. Most PLCs today run a proprietary operating system called VxWorks by a company called Wind River. This operating system is probably the most popular operating system you've never heard of. It's used in spacecraft, in military and commercial aircraft, in communication hardware, and in large scale electrical infrastructure, to name a few. This operating system runs just fine in VMWare, according to some sources I found on the Internet. It's extremely likely that a PLC controller can be made to run in a Virtual Machine on a server.

Of course, there are reasons why it's not a great idea. After all, a PLC controller is a purpose-built piece of hardware. It is designed to handle dirty power, extreme temperatures, and humidity. It is designed to fail gracefully when it does fail. By contrast, consumer grade or even business grade servers are much less robust, are designed with a much shorter running life in mind, and are usually designed with more emphasis on performance than reliability.

There are PLCs that do make decisions from a PC. They've existed for decades. They're called "Soft PLCs". NASA uses a Soft PLC named "National Instruments LabVIEW". So there is a place for these, but as you can see, it's not a simple one-size-fits-all answer.

Now for the trick I mentioned last time.

See, for all this talk of "Should PLCs use Virtual Machines", there's one simple fact it's easy to forget: PLCs are Virtual Machines!

Inside a PLC, there is no giant rack of relays. The original purpose of a PLC was to take that giant cabinet filled with relays and replace it with a CPU running a program that emulates the function of that relay cabinet -- the definition of a virtual machine.
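As a rough sketch of that idea, here is what a miniature software "relay cabinet" could look like in Python: a scan routine that evaluates rungs of boolean logic over an image of the inputs. The tag names and rungs are invented for illustration; a real PLC runtime does far more.

    # Illustrative sketch of a PLC-style scan: each "rung" is boolean logic
    # evaluated over the current input image, the way banks of relays once did it.
    inputs = {"start_pb": True, "stop_pb": False, "level_high": False}
    outputs = {"pump_run": False}

    def scan(inputs, outputs):
        # Rung 1: start/stop seal-in circuit for the pump
        outputs["pump_run"] = (inputs["start_pb"] or outputs["pump_run"]) and not inputs["stop_pb"]
        # Rung 2: a high level overrides everything and stops the pump
        if inputs["level_high"]:
            outputs["pump_run"] = False
        return outputs

    # A real PLC repeats this scan continuously, typically every few milliseconds.
    print(scan(inputs, outputs))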

Some PLCs have a special custom chip, called an ASIC, designed to run the virtual machine logic. This means that however it's implemented, a PLC does, and almost always will, run a virtual machine!

"Almost"?

Well, there's one more spanner in the works. Not every PLC is a virtual machine.

How would this work? Well, if you had a microcontroller and wrote a compiler designed to produce native code from ladder logic or FBD, then uploaded the compiled code to the microcontroller, you'd have a normal controller running native code, and not a virtual machine. On the other hand, no major PLC brand today does that. They all use virtual machines.

 

 

Thanks for reading!

The Cloud, Virtual Machines, and Arduino (Oh my!)


November 4, 2015

I'm Jason Firth.

 

I've noticed a number of discussions lately about using different technologies that are less expensive than typical controls equipment.

"Why not use Arduino? Why not use virtualized PCs? Why not use the cloud?"

These questions really trouble me.

"Cloud computing" generally relates to computing resources being pooled in third party data centers and their operation being shown as a "cloud" on network diagrams.

Three current examples of cloud computing are solutions from Microsoft Azure, Amazon AWS, and Google Cloud Platform.

Some of the benefits of cloud computing are:

You can immediately spin up or down capacity as required with no capital expenditure. For example, if you're running a website and your website becomes more and more popular, you can press a button and Amazon will dedicate additional computing resources to your website.

Your resources aren't tied to a single point of failure like a single server. Most virtualized computers can be swapped between different servers to allow failover without any disruption of services. This can mean that if server hardware is failing, it can be replaced with no impact on the provided services.

Another company can focus on the specifics of managing large IT infrastructure and can carry the risks of investing in individual pieces of IT infrastructure.

By having many huge companies pooling their data infrastructure requirements together, the total pool increases, which provides some safety. For example, it's very difficult to DDOS Amazon AWS compared to a single corporate file server located in a company head office. In addition, in the event of such attacks, a single company specializing in network infrastructure can have experts on hand to work on such risks, where a single company may not be able to afford an expert for a few servers.

So those are some benefits, but people honestly suggesting this solution really troubles me. The reason is that they show a lack of appreciation for the gravity of your control devices.

What do I mean by this?

Well there's a number of things.

Before we start with any of the reasons, I need to explain the difference between two types of failure: Revealed and Unrevealed failures.

Revealed failures are failures that are immediately apparent. They shut down devices, or they immediately cause a process upset. These failures are immediately found, and likely immediately fixed.

Unrevealed failures, by contrast, are not immediately apparent.

A few years back I was facilitating Reliability Centered Maintenance analyses. In many of the scenarios we covered, a motor would overheat and fail, and the control systems would shut it down. A relay would fail, and the control system would shut it down. A boiler would reach overpressure, and the control system would shut it down.

Then we started covering failures of the control systems themselves. A failure wouldn't immediately harm anything, but when other problems occurred, that's when the trouble would start. Suddenly, the control systems wouldn't be shutting down failing equipment. In fact, they might be actively working to make things worse.

I've witnessed the trouble caused when control systems go down. Operators rely on their control systems more than they realize to tell them exactly what's going on. If the alarm doesn't sound, if the screen doesn't light up, if the flashing light doesn't flash, then often they'll have no way to know something bad is happening until something terrible has happened.

So now we have this critical piece of infrastructure we've sent to another site, perhaps thousands of kilometers away, that we're relying on the Internet to maintain connectivity to.

The ability to spin up or spin down CPU resources is sort of a non-starter in terms of relevant benefits. CPU resources are not usually a bottleneck in PLC or DCS systems. You might have an issue if your PLC scan times are getting too high, but PLCs are already using CPUs that have been obsolete for 15 years -- if CPU resources were a primary consideration, then that wouldn't be the case.

The risks associated with controls infrastructure are not the same risks associated with IT infrastructure. That being the case, I don't think companies like Microsoft, Google, or Amazon are better prepared to manage those risks than companies that deal with the real risks every day. We're not talking about having to issue refunds to some customers, we're potentially talking about dead people and a pile of smoking rubble where your plant used to be.

Whether the data center side of your infrastructure is safe from DDoS attack or not, your plant likely is not. Therefore, a potential attack, or a potential loss of connectivity has the capability not just of taking down your email and intranet, but your plant.

Along the same lines, industry best practice for security uses a "defense in depth" strategy, where your core devices are protected by layers and layers of security. By contrast, a cloud strategy puts your control data directly onto the public internet. If your system has a vulnerability (and it will), then you're going to be risking your people, the environment, your plant, and your reputation.

Industrial network security generally places availability at the highest priority, and security as important, but secondary. If your controller can cause an unrevealed release of toxic gas if it gets shut down, it's important that the controller continues to function. If that controller stops controlling because a security setting told it to, then that's going to be a question to answer for later.

A single point of failure can be a good thing, or a horrible thing. In September of this year, Amazon AWS suffered an outage. Half a dozen companies that rely on them suffered a reduction in availability as a result. By contrast, not a single system in my plant noticed. What if you were using AWS to control your plant?

Now that we've covered cloud computing, let's look at Arduino.

Arduino is an open source hardware platform based on Atmel microcontrollers. An Arduino provides a circuit board that lets people use breadboard materials to connect to the microcontroller; a USB programming interface; and a software package that makes it relatively simple to write software for the controller and send your program to the microcontroller.

Benefits? Well, an Arduino comes cheap. You can buy an Arduino board for 20 euros from arduino.cc.

Downsides? There's a lot, actually.

Let's start from the outside and work our way in.

Arduino I/O isn't protected with optoisolators. The analog I/O isn't filtered. None of it is certified for industrial use, and I'm guessing any insurance company would tear you apart for directly connecting an Arduino to plant devices, especially safety-related ones.

Arduino is "modular" in a sense that you can plug devices called "shields" onto the main board, but this is not modular the same way a Schneider Quantum or an Allen Bradley ControlLogix. You can get a shield onto the board. Once it's on, it's on until you power down, and don't expect to get 16 different kinds of card mounted together.

Arduino isn't hardened on the power side, either. Noisy power could cause problems, and unless you break out a soldering iron and start building a custom solution, multiple power supplies aren't an option.

Arduino isn't protected physically. Admittedly, most PLCs are only IP20 rated, but at least that means you can't stick your fingers on the exposed board, which isn't something you can say for Arduino.

Arduino has a fairly limited amount of logic capacity. The number of things you can do with one is limited to begin with, and becomes more limited as you add things like TCP/IP for HMI access.

Speaking of logic, don't expect to change logic on the fly without shutting down the controller, or view logic as it is running. That's just not possible.

With Arduino, you're likely to be reinventing the wheel in places. Want Modbus TCP? Hope you've brushed up on your network coding, because you're going to be writing a Modbus TCP routine for everything you'd like to do.
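To give a feel for the kind of wheel you end up reinventing, here is a minimal sketch, in Python, of hand-rolling a single Modbus TCP "read holding registers" request over a raw socket. The host, unit ID, and register range are hypothetical placeholders, and a real implementation would also need retries, error handling, and exception-response parsing.

    import socket
    import struct

    # Minimal Modbus TCP "read holding registers" request, built by hand.
    # Host, unit ID, and register range below are hypothetical placeholders.
    def read_holding_registers(host, unit_id, start_addr, count, port=502):
        # PDU: function code 0x03, starting address, quantity of registers
        pdu = struct.pack(">BHH", 0x03, start_addr, count)
        # MBAP header: transaction ID, protocol ID (0), remaining length, unit ID
        mbap = struct.pack(">HHHB", 1, 0, len(pdu) + 1, unit_id)
        with socket.create_connection((host, port), timeout=5) as sock:
            sock.sendall(mbap + pdu)
            response = sock.recv(256)
        # Skip the 7-byte MBAP header and function code; byte 8 is the byte count
        byte_count = response[8]
        return struct.unpack(">" + "H" * (byte_count // 2), response[9:9 + byte_count])

    # Example call against a made-up device address:
    # print(read_holding_registers("192.0.2.10", unit_id=1, start_addr=0, count=4))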

As well, congratulations on your new Arduino installation! The guy who wrote the software quit. Can a regular electrician or instrument guy muddle their way through the software?

I'm not saying you can't build a PLC using an Arduino, or that you can't control a process with one. What I am saying is that if you're doing industrial control, it probably isn't the right tool for the job.

I'm going to save Virtual Machines for another time, because there's a trick in there I want to expand on a bit more.

 

Thanks for reading!

This one is for the travellers


September 1, 2015

I'm Jason Firth.

 

Fly-in fly-out jobs are becoming commonplace in Canada, because the places where resources happen to be are not the places people want to live, or places where it's not practical to put a town, or simply because the skills you need are not in the same place as where you need them.

I've been flying a lot lately in Ontario, and so I've been flying a lot on Porter Airlines.

I noticed something interesting in their frequent flier program. They have a "goal meter" on their website for their VIPorter program. It shows the $1500 level before you get to the Passport level, and the $3000 level before you get to the Priority level, but after that it shows $10,000.

How bizarre! I decided to look further into it.

It turns out Porter has another level to their frequent flier program. It's invitation only, and requires a $10,000 annual spend on flights. It's called the "VIPorter First" program.

It looks like it has everything the VIPorter Priority status has, and in addition:

 

2 Free checked bags

Free premium seat selection

Free last seat guarantee on sold-out flights

Free same day changes to reservations

 

Now, another note for people who travel very frequently:

Your spending shows up on your VIPorter level when you have completed the flight, not when you book your flight. By contrast, your VIPorter level when you book the flight is the level the flight will be treated as in their computer system, not the level you're at when you fly.

This means that if you buy 6 months of tickets in advance in January, you won't get VIPorter status for any of those flights, in spite of potentially spending more than the $3000 level with them, even once you've taken $3000 worth of flights. Your new status won't kick in until you've purchased a flight AFTER your status has activated.

 

Thanks for reading!

5 reasons I gave up being an instrument guy (for now)


August 15, 2015

I'm Jason Firth.

 

Recently, I accepted a new role within my organization for a while: I'm assisting with a project to deploy a new Enterprise Resource Planning (ERP) system. My role is to bring maintenance expertise to the project specific to the plant I'm stationed at.

When I told my team about it, my partner was confused: "Why don't you want to be an instrument guy anymore?", he asked.

In fact, it's exactly because I'm an instrument guy and because I plan to continue being an instrument guy that I accepted the role!

Let's look at some of my reasons.

 

1. Instrument technicians are information mongers.

There's no two ways to put this: Instrument Techs, or at least good instrument techs, are information mongers. Every piece of information we gather is another tool in our belt that we might make use of.

This project is going to leave me and my shop with comprehensive lists regarding equipment and maintenance for everything on site. We're going to have access to more people and more information than we ever would have had as just instrument guys. That sort of information is invaluable at moments you'd never have imagined -- because until you have it, you never consider it.

Everything is connected, and the more information you have, the easier you can make the connections yourself. The more you can make those connections on your own, the more effective you can be.

 

2. Potential synergies between these two roles.

I always hate to use the S-word, but it's absolutely true. I'll explain.

Work that's been done as part of other completely unrelated projects suddenly becomes relevant, and you don't have to do anything but cross-reference. This means the project benefits from that work. Work I previously did on SCADA, establishing equipment taxonomies and working within constraints, suddenly becomes extremely important because it's already done.

As well, work that's done today may have a dramatic impact on future trades projects. In particular, having a say as to how business critical systems are set up on Day 1 means those same systems are structured in a way that might facilitate the information's re-use later. All it takes is a little vision, and you can make everyone's life easier in those future projects.

 

3. The wrong person doing this can sink entire shops.

The Computerized Maintenance Management System (CMMS) aspect of ERP is one of the most important elements of a modern shop. It facilitates communication between operations and trades, it stores information about the history of work done, it keeps track of costs and of time spent on jobs, and it plays a key role in material management.

If the wrong person is setting up the system, communication between operations and trades may become ineffective, history may be lost or unusable, costs can't be tracked, and materials can't be found. All this adds up to a skilled worker not spending time using their skills.

 

4. The right person doing this can let us spend more time being tradesmen.

Along the same lines, if the right person sets up the system to its potential, communications between operations and trades (and between trades and other trades) may be enhanced, history may become a key tool in predicting failures or detecting current failures, costs and time spent on jobs can be effectively measured and managed, and materials will always be where they need to be when they're needed.

Modern ERP systems also allow supervisors to assign jobs to certain individuals, and to allow those individuals to see their work queues on mobile devices, so the work is always at their fingertips. They can allow test results to be stored and historized immediately without additional paperwork. They can allow work completion comments to be added directly when a job is complete, increasing the speed and accuracy of the history. They can even allow documentation to be carried around for instant access to key information.

All this adds up to one thing: Tradesmen spending less time on paperwork, and more time on trades. That's good for the business, it's good for the tradesmen, and it makes everyone's life a little easier.

 

5. Ultimately, you need a voice from the front.

There's a lot of perfectly reasonable sounding suggestions out there.

It's easy to sit around a desk and come up with this stuff, and the technology is amazing: you can implement anything you want. The problem is, how will it affect someone on the front line? Those are the people who are going to keep your plant running day to day, and even smart people, with the best of intentions, can make a decision that works very well in theory but is disastrous in practice. Someone from the front line is absolutely necessary. You need a canary to tell you when things could get bad.

 

Thanks for reading!