December 6th, 2016
I'm Jason Firth.
It's unfortunately common to find that people don't appreciate the risks involved with software, as if the fact that the controls are managed by bits and bytes changes the lethal consequences of failure.
A case in point is the Therac-25, a radiation therapy machine produced by Atomic Energy of Canada Limited -- AECL, for short.
The system had a number of modes, and while switching modes, the operator could continue entering information into the system. If the operator switched modes too quickly, then key steps would not take place, and the system would not be physically prepared to safely administer a dose of radiation to a patient.
Previous models had hardware interlocks which would prevent radiation from being administered if the system was not physically in place. This newer model relied solely on software interlocks to prevent unsafe conditions.
There were at least six accidents involving the Therac-25. Some of these accidents permanently crippled the patients or resulted in the need for surgical intervention, and several resulted in deaths from radiation poisoning or radiation burns. One patient had their brain and brainstem burned by radiation, resulting in their death soon after.
There were a number of contributing factors in this tragedy: poor development practices, lack of code review, lack of testing, and of course the bugs themselves. However, rather than focus on the specifics of what caused the tragedy, what I want to show is that what we do is not just computers -- it's where the rubber meets the road, where what happens in our computers meets reality. People who would never dream of opening a relay cabinet and starting to rewire things would think nothing of opening a PLC programming terminal and starting to 'play'.
Second, part of the problem was people who didn't realise they were controlling a real physical device. There are things to remember when dealing with physical devices: no matter how quick your control system, valves can only open and close so fast, motors can only turn so fast, and your amazing control system is only as good as the devices it controls. Because the programmers forgot these were real devices, they failed to take those limits into account, and people died as a result. This holistic knowledge is why journeyman instrument technicians and certified engineering technologists in the field of instrumentation engineering technology are so valuable. They don't just train on how to use the PLC; they train on how the measurements work, how the signalling works, how the controllers work (whether digital or analog in nature), how final control elements work, and how processes work.
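One concrete way control code can respect those physical limits is to slew-limit its commands. Here's a minimal Python sketch of the idea (the function and the 5%-of-stroke-per-scan figure are invented for illustration, not taken from any real system):

```python
def slew_limit(command, previous, max_change_per_scan):
    """Limit how far a commanded output can move in one scan, so the
    control system never demands more travel than the valve or motor
    can physically deliver."""
    change = command - previous
    if change > max_change_per_scan:
        return previous + max_change_per_scan
    if change < -max_change_per_scan:
        return previous - max_change_per_scan
    return command

# A hypothetical valve that can only travel 5% of its stroke per scan:
position = 0.0
for _ in range(4):
    position = slew_limit(100.0, position, 5.0)
print(position)  # → 20.0
```

The point isn't the arithmetic; it's that the program acknowledges the device on the other end of the wire has physical limits.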
When it comes to control systems, just because you're playing with pretty graphics on the screen doesn't mean you aren't dealing with something very real, and something that can be very lethal if it's not treated with respect.
Another point that's near and dear to my heart comes in one of the details of the failures: when there was a problem, the HMI would display "MALFUNCTION" followed by a number. A major problem with this is that no operator documentation existed saying what each malfunction number meant. In response to people who say "the operator should know their equipment", I've long said that we as controls professionals ought to make the information available for them to know their equipment. If we don't, we can't expect them to know what's going on under the surface. If the programmer had properly documented his code and the user interface, there might have been a chance operators would have understood the problem earlier, preventing lethal consequences.
Thanks for reading!
January 14, 2016
I'm Jason Firth.
You will recall that last year I posted about the release of Unity Pro v8.1.
Well, I got an email from a vendor this week about the opportunity to learn about the new version of Unity Pro -- Version 10!
(Wait, did I miss something? What happened to 9?)
I have absolutely no idea what happened to 9. They skipped it. Maybe to keep on track with Windows?
Unity Pro V10 supports:
*CCOTF (Configuration Change On The Fly) on M580 local I/O
*Cybersecurity: events log, data integrity, enable/disable services
*System time stamping of application variables
*Device integration: Network Manager
Quantum platform features:
*HART on X80 remote drops
*New Quantum firmware v3.3
*Full Excel import/export tool
*Log in Syslog
*ANY_BOOL data type
Supported operating systems:
*Win 7 32 and 64 bit
*Win 8.1 32 and 64 bit
*Win 10 32 and 64 bit
*Windows Server 2012
How exciting, right? Well, I went to the Schneider Electric website to download the latest version, and was shocked to discover that version 10 isn't the latest version!
Yes, there's a version 11, out right before Christmas.
Unity Pro V11 supports the new Modicon M580 controllers:
*Support for the new Modicon M580 High End CPUs
*Support for the new Modicon M580 HSBY CPUs
*Support for the LL984 language on Modicon M580
*Quantum Ethernet I/O drops now supported on Modicon M580
Supported operating systems:
*Win 7 32 and 64 bit
*Win 8.1 32 and 64 bit
*Win 10 32 and 64 bit
*Windows Server 2012
To be honest, these seem like awfully incremental improvements to justify major version number increments.
Alongside Unity Pro v10, there are new firmware images for the Modicon Quantum CPUs, and a major revision of the M580 firmware images for all the new features.
Alongside Unity Pro v11, there are new firmware images for the M580 platform.
Thanks for reading!
January 10, 2016
I'm Jason Firth.
Happy New year to everyone!
I promised to talk more about virtual machines last time, and I'm finally following up on that.
First, some definitions.
A virtual machine is basically a machine -- for our purposes, a computing machine -- implemented in software.
A virtual machine may be very simple. Most programmers could bang a simple one out in a few minutes to do very simple operations, such as taking a file and doing operations with what's contained in that file.
There are also much more complicated virtual machines. For example, the Java programming language is compiled into something called 'bytecode'. This bytecode is not executable on your computer by itself. However, your computer has a virtual machine designed specifically for running Java bytecode, the Java Runtime Environment (JRE). That's how Java programs run. One of the benefits of this is that you can write Java code once to run on a virtual machine, and then as long as you have a working virtual machine, you should be able to run that program on any platform, no matter the processor or the operating system.
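To make "most programmers could bang one out in a few minutes" concrete, here's a toy stack-based virtual machine in Python. The instruction set is entirely made up for illustration -- it's not Java bytecode or any real format:

```python
# A toy stack-based virtual machine with an invented instruction set.
def run(bytecode):
    stack = []
    for op, arg in bytecode:
        if op == "PUSH":        # push a constant onto the stack
            stack.append(arg)
        elif op == "ADD":       # pop two values, push their sum
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":       # pop two values, push their product
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()          # the result is whatever is left on top

# (2 + 3) * 4 expressed as "bytecode" for this toy machine:
program = [("PUSH", 2), ("PUSH", 3), ("ADD", None), ("PUSH", 4), ("MUL", None)]
print(run(program))  # → 20
```

The same "program" runs identically on any computer with this interpreter -- which is exactly the portability argument for the JVM, scaled way down.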
A much more complicated version of a virtual machine is to have a PC create a virtual machine pretending to be another PC in software. This is sometimes called emulation. There are a number of software packages that do this for us. VMWare is probably the most popular one. Windows Server 2008 and onwards has the ability to run virtual machines built in. Oracle has VirtualBox. Bochs is a popular open source emulator.
Intel has included special hardware acceleration of virtual machines (VT-x) in many of its processors, including its server chips. It helps provide direct access to hardware and CPU resources without a software layer in between.
There is a clear benefit to using virtual machines on servers. Basically, you can have one incredibly powerful computer that is acting like 100 different computers. You can also have the virtual machine running in parallel on different computers so if there's a hardware failure, a different parallel system can immediately take over. In terms of resource utilization, virtual machines have useful benefits: not every virtual server will be using all resources at all times, so you can run multiple virtual servers, called "instances" on the same amount of hardware that might run one server if you had dedicated hardware for each server. Also, if you're selling instances, you can assign more or less resources to that instance depending on the demand, allowing you to set your prices based on resourcing.
So, how can this possibly apply to PLCs?
Well, the idea is that instead of running PLC logic on a PLC processor, you could run the software in an instance in a data center.
There's no reason why this can't work. Most PLCs today run a proprietary operating system called VxWorks by a company called Wind River. This operating system is probably the most popular operating system you've never heard of. It's used in spacecraft, in military and commercial aircraft, in communication hardware, and in large scale electrical infrastructure, to name a few. This operating system runs just fine in VMWare, according to some sources I found on the Internet. It's extremely likely that a PLC controller can be made to run in a Virtual Machine on a server.
Of course, there are reasons why it's not a great idea. After all, a PLC controller is a purpose-built piece of hardware. It is designed to handle dirty power and extreme temperatures and humidity. It is designed to fail gracefully when it does fail. By contrast, consumer-grade or even business-grade servers are much less robust, are designed with a much shorter running life in mind, and are usually designed with performance in mind more than reliability.
There are PLCs that do make their decisions from a PC. They've existed for decades. They're called "soft PLCs". NASA uses a soft PLC platform, National Instruments LabVIEW. So there is a place for these, but as you can see, it's not a simple one-size-fits-all answer.
Now for the trick I mentioned last time.
See, for all this talk of "Should PLCs use Virtual Machines", there's one simple fact it's easy to forget: PLCs are Virtual Machines!
Inside a PLC, there is no giant rack of relays. The original purpose of a PLC was to take that giant cabinet filled with relays and replace it with a CPU running a program that emulates the function of that relay cabinet -- the definition of a virtual machine.
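To make that concrete, here's a sketch in Python of the classic start/stop seal-in rung being evaluated in software, the way a PLC's scan does it (the tag names are invented for illustration):

```python
# One rung of relay logic evaluated in software, as a PLC scan does.
# Rung: (start_pb OR motor) AND NOT stop_pb  ->  motor
def scan(inputs, coils):
    coils["motor"] = (inputs["start_pb"] or coils["motor"]) and not inputs["stop_pb"]
    return coils

coils = {"motor": False}
scan({"start_pb": True, "stop_pb": False}, coils)   # operator presses start
print(coils["motor"])  # → True
scan({"start_pb": False, "stop_pb": False}, coils)  # start released; seal-in holds
print(coils["motor"])  # → True
scan({"start_pb": False, "stop_pb": True}, coils)   # stop pressed
print(coils["motor"])  # → False
```

A real PLC performs the same read-evaluate-write cycle, just for thousands of rungs, over and over, forever.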
Some PLCs have a special custom chip, an ASIC, designed to run the virtual machine logic in hardware. This means that however we do it, a PLC does, and almost always will, run a virtual machine!
Well, there's one more spanner in the works. Not every PLC is a virtual machine.
How would this work? Well, if you had a microcontroller, and programmed a compiler designed to produce native code from ladder logic or FBD, then uploaded the compiled code to the microcontroller, you'd have a normal controller running native code, and not a virtual machine. On the other hand, no major PLC brand today does that. They all use virtual machines.
Thanks for reading!
November 4, 2015
I'm Jason Firth.
I've noticed a number of discussions lately about using different technologies that are less expensive than typical controls equipment.
"Why not use Arduino? Why not use virtualized PCs? Why not use the cloud?"
These questions really trouble me.
"Cloud computing" generally relates to computing resources being pooled in third party data centers and their operation being shown as a "cloud" on network diagrams.
Three current examples of cloud computing are solutions from Microsoft Azure, Amazon AWS, and Google Cloud Platform.
Some of the benefits of cloud computing are:
You can immediately spin up or down capacity as required with no capital expenditure. For example, if you're running a website and your website becomes more and more popular, you can press a button and Amazon will dedicate additional computing resources to your website.
Your resources aren't tied to a single point of failure like a single server. Most virtualized computers can be swapped between different servers to allow failover without any disruption of services. This can mean that if server hardware is failing, it can be replaced with no impact on the provided services.
Another company can focus on the specifics of managing large IT infrastructure, and can carry the risks of investing in individual pieces of IT infrastructure.
By having many huge companies pooling their data infrastructure requirements together, the total pool increases, which provides some safety. For example, it's very difficult to DDOS Amazon AWS compared to a single corporate file server located in a company head office. In addition, in the event of such attacks, a single company specializing in network infrastructure can have experts on hand to work on such risks, where a single company may not be able to afford an expert for a few servers.
So those are some benefits, but it really troubles me when people honestly suggest this solution for control systems. The reason is that it shows a lack of appreciation for the gravity of your control devices.
What do I mean by this?
Well there's a number of things.
Before we start with any of the reasons, I need to explain the difference between two types of failure: Revealed and Unrevealed failures.
Revealed failures are failures that are immediately apparent. They shut down devices, or they immediately cause a process upset. These failures are immediately found, and likely immediately fixed.
Unrevealed failures, by contrast, are not immediately apparent.
A few years back I was facilitating Reliability Centered Maintenance analyses. In many of the scenarios we covered, a motor would overheat and fail, and the control systems would shut it down. A relay would fail, and the control system would shut it down. A boiler would reach overpressure, and the control system would shut it down.
Then we started covering different control systems. A failure wouldn't immediately harm anything, but when other problems occurred, that's when the trouble would start. Suddenly, the control systems wouldn't be shutting down failing systems. In fact, they might be actively working to make things worse.
I've witnessed the trouble caused when control systems go down. Operators rely on their control systems more than they realize to tell them exactly what's going on. If the alarm doesn't sound, if the screen doesn't light up, if the flashing light doesn't flash, then often they'll have no way to know something bad is happening until something terrible has happened.
So now we have this critical piece of infrastructure we've sent to another site, perhaps thousands of kilometers away, that we're relying on the Internet to maintain connectivity to.
The ability to spin up or spin down CPU resources is sort of a non-starter, in terms of relevant benefits. CPU resources are not usually a bottleneck in PLC or DCS systems. You might have an issue if your PLC scan times are getting too high, but PLCs are already using CPUs that have been obsolete for 15 years -- if CPU resources were a primary consideration, that wouldn't be the case.
The risks associated with controls infrastructure are not the same risks associated with IT infrastructure. That being the case, I don't think companies like Microsoft, Google, or Amazon are better prepared to manage those risks than companies that deal with the real risks every day. We're not talking about having to issue refunds to some customers, we're potentially talking about dead people and a pile of smoking rubble where your plant used to be.
Whether the data center side of your infrastructure is safe from DDoS attack or not, your plant likely is not. Therefore, a potential attack, or a potential loss of connectivity has the capability not just of taking down your email and intranet, but your plant.
Along the same lines, industry best practices for security uses a "Defense in depth" strategy, where your core devices are protected by layers and layers of security. By contrast, a cloud strategy puts your control data directly onto the public internet. If your system has a vulnerability (and it will), then you're going to be risking your people, and the environment, and your plant, and your reputation.
Industrial network security generally places availability at the highest priority, and security as important, but secondary. If your controller can cause an unrevealed release of toxic gas if it gets shut down, it's important that the controller continues to function. If that controller stops controlling because a security setting told it to, then that's going to be a question to answer for later.
A single point of failure can be a good thing, or a horrible thing. In September of this year, Amazon AWS suffered an outage. Half a dozen companies that rely on them suffered reduction in availability as a result. By contrast, not a single system in my plant realized it. What if you were using AWS to control your plant?
Now that we've covered cloud computing, let's look at Arduino.
Arduino is an open source hardware platform based on Atmel microcontrollers. An Arduino provides a circuit board that lets people use breadboard materials to connect to the microcontroller; a USB programming interface; and a software package that makes it relatively simple to write software for the controller and send your program to the microcontroller.
Benefits? Well, an Arduino comes cheap. You can buy an Arduino board for 20 euros from arduino.cc.
Downsides? There's a lot, actually.
Let's start from the outside and work our way in.
Arduino I/O isn't protected with optoisolators. The analog I/O isn't filtered. None of it is certified for industrial use, and I'm guessing any insurance company would tear you apart for directly connecting an Arduino to plant devices, especially safety-related ones.
Arduino is "modular" in the sense that you can plug devices called "shields" onto the main board, but this is not modular the same way a Schneider Quantum or an Allen-Bradley ControlLogix is. You can get a shield onto the board, but once it's on, it's on until you power down, and don't expect to get 16 different kinds of card mounted together.
Arduino isn't hardened on the power side, either. Noisy power could cause problems, and unless you break out a soldering iron and start building a custom solution, multiple power supplies aren't an option.
Arduino isn't protected physically. Admittedly, most PLCs are only IP20 rated, but at least that means you can't stick your fingers on the exposed board, which isn't something you can say for Arduino.
Arduino has a fairly limited amount of logic capacity. The number of things you can do with one is limited to begin with, and becomes more limited as you add things like TCP/IP for HMI access.
Speaking of logic, don't expect to change logic on the fly without shutting down the controller, or view logic as it is running. That's just not possible.
With Arduino, you're likely to be reinventing the wheel in places. Want Modbus TCP? Hope you've brushed up on your network coding, because you're going to be writing a Modbus TCP routine for everything you'd like to do.
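To give a feel for what writing that routine involves, here's a minimal Python sketch of just the request half of a Modbus TCP "read holding registers" exchange (function code 0x03). A usable implementation would also need socket handling, response parsing, timeouts, and error handling:

```python
import struct

def read_holding_registers_request(transaction_id, unit_id, start_addr, count):
    """Build a Modbus TCP request frame for function 0x03 (read holding
    registers): MBAP header (transaction id, protocol id 0, length, unit id)
    followed by the PDU (function code, starting address, register count)."""
    pdu = struct.pack(">BHH", 0x03, start_addr, count)
    mbap = struct.pack(">HHHB", transaction_id, 0x0000, len(pdu) + 1, unit_id)
    return mbap + pdu

# Ask unit 0xFF for ten registers starting at address 0:
frame = read_holding_registers_request(1, 0xFF, 0, 10)
print(frame.hex())  # → 000100000006ff030000000a
```

And that's only one function code -- coils, discrete inputs, and writes each need their own framing, which is why "just write it yourself" adds up quickly.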
As well, congratulations on your new Arduino installation! The guy who wrote the software quit. Can a regular electrician or instrument guy muddle their way through the software?
I'm not saying you can't build a PLC using an Arduino, or that you can't control a process with one. What I am saying is that if you're doing industrial control, it probably isn't the right tool for the job.
I'm going to save Virtual Machines for another time, because there's a trick in there I want to expand on a bit more.
Thanks for reading!
September 1, 2015
I'm Jason Firth.
Fly-in fly-out jobs are becoming commonplace in Canada, because the places where resources happen to be are not the places people want to live, or are places where it's not practical to build a town, or simply because the skills you need are not in the same place as where you need them.
I've been flying a lot lately in Ontario, and so I've been flying a lot on Porter Airlines.
I noticed something interesting in their frequent flier program. They have a "goal meter" on their website for their VIPorter program. It shows the $1500 level before you get to the Passport level, and the $3000 level before you get to the Priority level, but after that it shows $10,000.
How bizarre! I decided to look further into it.
It turns out Porter has another level to their frequent flier program. It's invitation only, and requires a $10,000 annual spend on flights. It's called the "VIPorter First" program.
It looks like it has everything the VIPorter Priority status has, and in addition:
2 Free checked bags
Free premium seat selection
Free last seat guarantee on sold-out flights
Free same day changes to reservations
Now, another note for people who travel very frequently:
Your spending shows up on your VIPorter level when you have completed the flight, not when you book your flight. By contrast, your VIPorter level when you book the flight is the level the flight will be treated as in their computer system, not the level you're at when you fly.
This means that if you buy six months of tickets in advance in January, you won't have VIPorter status for any of those flights, in spite of potentially spending more than the $3000 level with them, even once you've taken $3000 worth of flights. Your new status won't kick in until you've purchased a flight AFTER your status has activated.
Thanks for reading!
August 15, 2015
I'm Jason Firth.
Recently, I accepted a new role within my organization for a while: I'm assisting with a project to deploy a new Enterprise Resource Planning (ERP) system. My role is to bring maintenance expertise to the project specific to the plant I'm stationed at.
When I told my team about it, my partner was confused: "Why don't you want to be an instrument guy anymore?", he asked.
In fact, it's exactly because I'm an instrument guy and because I plan to continue being an instrument guy that I accepted the role!
Let's look at some of my reasons.
1. Instrument technicians are information mongers.
There's no two ways to put this: Instrument Techs, or at least good instrument techs, are information mongers. Every piece of information we gather is another tool in our belt that we might make use of.
This project is going to leave me and my shop with comprehensive lists regarding equipment and maintenance for everything on site. We're going to have access to more people and more information than we ever would have had as just instrument guys. That sort of information is invaluable at moments you'd never have imagined -- because until you have it, you never consider it.
Everything is connected, and the more information you have, the easier you can make the connections yourself. The more you can make those connections on your own, the more effective you can be.
2. Potential synergies between these two roles.
I always hate to use the S-word, but it's absolutely true. I'll explain.
Work that's been done as part of other completely unrelated projects suddenly becomes relevant, and you don't have to do anything but cross-reference. This means the project benefits from that work. Work I previously did on SCADA, establishing equipment taxonomies and working within constraints, suddenly becomes extremely important because it's already done.
As well, work that's done today may have a dramatic impact on future trades projects. In particular, having a say as to how business critical systems are set up on Day 1 means those same systems are structured in a way that might facilitate the information's re-use later. All it takes is a little vision, and you can make everyone's life easier in those future projects.
3. The wrong person doing this can sink entire shops.
The Computerized Maintenance Management System (CMMS) aspect of ERP is one of the most important elements of a modern shop. It facilitates communication between operations and trades, it stores information about the history of work done, it keeps track of costs and of time spent on jobs, and it plays a key role in material management.
If the wrong person is setting up the system, communication between operations and trades may become ineffective, history may be lost or unusable, costs can't be tracked, and materials can't be found. All this adds up to a skilled worker not spending time using their skills.
4. The right person doing this can let us spend more time being tradesmen.
Along the same lines, if the right person sets up the system to its potential, communications between operations and trades (and between trades and other trades) may be enhanced, history may become a key tool in predicting failures or detecting current failures, costs and time spent on jobs can be effectively measured and managed, and materials will always be where they need to be when they're needed.
Modern ERP systems also allow supervisors to assign jobs to certain individuals, and to allow those individuals to see their work queues on mobile devices, so the work is always at their fingertips. They can allow test results to be stored and historized immediately without additional paperwork. They can allow work completion comments to be added directly when a job is complete, increasing the speed and accuracy of the history. They can even allow documentation to be carried around for instant access to key information.
All this adds up to one thing: Tradesmen spending less time on paperwork, and more time on trades. That's good for the business, it's good for the tradesmen, and it makes everyone's life a little easier.
5. Ultimately, you need a voice from the front.
There's a lot of perfectly reasonable sounding suggestions out there.
It's easy to sit around a desk and come up with this stuff; the technology is amazing, and you can implement anything you want. The problem is, how will it affect someone on the front line? Those are the people who are going to keep your plant running day to day, and even smart people, with the best of intentions, can make a decision that works very well in theory but is disastrous in practice. Someone from the front line is absolutely necessary. You need a canary to tell you when things could get bad.
Thanks for reading!
April 6, 2015
I'm Jason Firth.
One part of the OACETT and CTTAM codes of conduct is a responsibility to learn the laws and codes applicable to your field. To that end, I often load up CanLII and search for information relating to the field of engineering technology, and to people certified as engineering technologists.
I found this 1996 case, and it's interesting to me.
This case relates to the construction of a paper mill in Alberta in the late 80s. There are 3 main groups: A contractor called Dilcon, an engineering company called NLK, and a corporation created solely for the purpose of the construction of this new paper mill, called ANC.
What is disputed is the amount of money owed to Dilcon by ANC. The amount is quite substantial, with Dilcon claiming they are owed a further $20 million, and ANC claiming they have overpaid by $10 million.
At first glance, Dilcon sort of accepted a dangerously vague contract: They placed a bid on a job for which the detailed engineering was only around 10% complete. They came up with a fixed bid for a contract for which scope wasn't even remotely defined yet. On the basic merits, this turned out to be as bad an idea as it sounds: In one area, the amount of work turned out to be literally twice as much as originally proposed, and in another area, it was 25% greater.
Luckily for them, the engineering company was set as the facilitator in contract measures, and acted in a fair and reasonable manner, allowing different additions to be considered as "extras" despite the fact that ANC ostensibly asked for a "no extras" contract.
The case provides a fairly in-depth look into how one major contract was negotiated and carried out, including a lot of detail about what looks like a fairly well done change management process. (the case centered around changes, but I don't think the problems related to the processes in place)
In addition, it gives painstaking details about the consequences of not considering lead times.
I started my career as an engineering technologist working in an engineering department, designing and managing projects and ordering parts. Later on, as a technician, I had broad latitude to manage my projects and order parts as well.
One lesson I learned is this: stakeholders want to ignore lead times, and will pressure you to do so. However, especially if you're working at a remote site or in a remote community, failing to pay attention to lead times will result in looking like an idiot and wasting the company's money.
In this case, NLK took too long to deliver certified engineering documents, which caused several vendors to be delayed in delivering materials. In other cases, the vendors simply took long to deliver, with one vendor taking 5 months longer than scheduled to deliver a key part. In another case, Dilcon was fully mobilized on site for months before an acceptable number of pipes had been delivered. By not properly accounting for the amount of time it would take for tasks to get done and for parts to arrive, the entire project suffered massive cost overruns in the millions of dollars.
Of course, you could say "But you should be able to plan based on the best possible information!", which seems like a great idea on paper. In practice, if you don't have any good reason to believe that a part will be at your site, I don't think it's a good idea to start lining up huge resources. All it takes is a lack of a single critical component and suddenly you're not just not done, but you're down.
That's what happened in this case. The engineering wasn't complete, the shipping wasn't complete, the parts hadn't arrived, and they'd just mobilized a bunch of trades workers to stand around doing nothing for months at a time.
This sort of bad planning has massive consequences. In this case, Dilcon did manage to get their contract completed in time, but at huge cost: the court found that they were owed a full $10 million more by ANC -- considering the original contract was under $20M, that's a huge cost increase -- and all that was needed to avoid it was for someone to say "Slow down! We're going to make sure we're ready before we start signing contracts and hiring people."
Thanks for reading!
February 20, 2015
I'm Jason Firth.
A few months back, I had a discussion with someone about how to communicate with a PC over Modbus Plus without breaking the bank. I decided to take the highlights of that conversation and bring them together in a blog post so others might be able to use the information.
Modbus Plus is still a widely used protocol in industry where Schneider Modicon PLCs are in use. However, it's not a cheap protocol to work with. A USB Modbus Plus dongle for your PC will run you over $2000!
That's a lot of money if you just want to look at some bits in the PLC. Today, I want to look at some other options.
Modbus Plus is a completely different protocol from Modbus:
*Modbus Plus uses a proprietary signalling standard, where Modbus generally uses RS-232 or RS-485.
*Modbus Plus supports routing, where Modbus has no networking features.
*Modbus Plus requires a DSP to handle the communications, where Modbus can use a standard UART.
*Modbus Plus is peer-to-peer, where Modbus is master/slave.
*Modbus Plus uses a token passing system to ensure everyone gets a turn on the line -- you just set your addresses, and all the devices start talking -- where Modbus has no such mechanism (hence the master/slave architecture, where the master dictates who will speak).
A device called a Bridge Multiplexer can act as a gateway between the two protocols.
Modbus Plus allows for routing between devices called Bridge Pluses. The way you do this is by defining the Modbus Plus address of each Bridge Plus you pass through. For example, if you have node 1 on a Modbus Plus network with a Bridge Plus at node 64, and there's a node 32 on the other side you want to communicate with, you'd be communicating with 64.32.0.0.0.
So to make sure your Modbus devices can talk over a Modbus Plus network, you need to map the Modbus slave addresses to Modbus Plus routing addresses.
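As a sketch of that mapping (assuming the five-byte routing convention described above, with unused bytes padded with zeros), a small helper in Python might look like:

```python
def routing_path(*nodes):
    """Build a five-byte Modbus Plus routing path string from the Bridge
    Plus addresses to pass through, ending with the destination node.
    Unused positions are padded with zero; node addresses are 1-64."""
    if len(nodes) > 5:
        raise ValueError("a Modbus Plus routing path holds at most five nodes")
    for n in nodes:
        if not 1 <= n <= 64:
            raise ValueError(f"invalid Modbus Plus node address: {n}")
    path = list(nodes) + [0] * (5 - len(nodes))
    return ".".join(str(n) for n in path)

# Reach node 32 through a Bridge Plus at node 64:
print(routing_path(64, 32))  # → 64.32.0.0.0
```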
Modbus Plus requires a DSP. This means that no matter what, you're going to need a dedicated piece of hardware to communicate on the network. You can't just slap together an RS-232 pigtail and hope it will work.
Here are two potential options: first, if you get a PC with an ISA slot, the cards are quite inexpensive. I found some on eBay for $100-200 USD. A PICMG backplane can provide an ISA slot, and industrial mainboards are still available. The downside is that you may be stuck using a very slow PC just to communicate with your one PLC.
Another option is a bridge multiplexer, which converts Modbus Plus to Modbus. It takes a bit to put together, but it should work. I found a bridge MUX on eBay for $100 USD when I first investigated this option. Today, I've found them for under $250 on eBay.
The manual really overcomplicates things.
There are models listed on page 10 of the manual -- NW-BM85-000, NW-BM85C000, and NW-BM85D008 -- which don't need a special program. You don't need to write a C++ program: you connect to one of the serial ports (I believe the second one), put the bridge into programming mode by flipping a DIP switch on the back, and it gives you a fairly nice and easy menu-based interface to configure what it does.
You'll need to figure out the master/slave arrangement, because Modbus is master/slave but Modbus Plus is peer-to-peer. Still, it should be very doable.
The master in a Modbus interaction is the device that actually sends the commands. For example, I programmed a Modbus TCP library, and in that case the device that initiates the connection is the master: you connect, then tell the device you want to read or write a coil or register, and the slave device you connected to responds with either a success/fail message for a write, or the data you wanted, or an error message.
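As an illustration of what the master side of that exchange looks like on the wire, here's a minimal sketch of a Modbus TCP "Read Holding Registers" (function 0x03) request. The framing follows the public Modbus specification (MBAP header plus PDU); the function name itself is just my own helper, not any library's API.

```python
import struct

def read_holding_registers_request(transaction_id, unit_id, start_addr, count):
    """Build a Modbus TCP 'Read Holding Registers' (function 0x03) request.

    The MBAP header carries a transaction id, a protocol id (always 0 for
    Modbus), the byte length of what follows, and the unit (slave) id.
    The PDU carries the function code, starting register, and register count.
    All fields are big-endian per the Modbus spec.
    """
    pdu = struct.pack(">BHH", 0x03, start_addr, count)
    mbap = struct.pack(">HHHB", transaction_id, 0x0000, len(pdu) + 1, unit_id)
    return mbap + pdu

# Ask unit 2 for ten registers starting at address 0:
frame = read_holding_registers_request(1, 2, 0, 10)
print(frame.hex())  # 00010000000602030000000a
```

The slave's reply reuses the same transaction id, which is how the master matches responses to requests when several are in flight.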
The peer-to-peer feature is relevant because it means nothing on the Modbus Plus network is a master or a slave. The devices just take care of communications on their own. In my tests, the only problem I could cause on the Modbus Plus network was setting my bridge mux to the same Modbus Plus address as something else on the network. If you set a wrong Modbus setting in the text interface, all you're going to do is fail to communicate with the Modbus Plus network over the Modbus port.
Here's how I managed to talk to a PLC using my PLC software and a BM85 connected to my serial port.
1. I set the Modbus Plus address of the BM85 to 2 by turning the first switch on and the rest off.
2. I entered config mode using the DIP switch on the back.
3. Once the screen loaded, I entered the following commands: E1 01 20 00 00 00 00
4. I pressed Y to confirm.
5. I turned the BM85 off and returned the config switch to run mode.
One key thing seems to be the RS-232 cable from the PC to the BM85. I used a 990NAA26320, but the wiring diagram should let you make a similar cable.
So what you'll have is:
Your PLC and BM85 daisy-chained together on the Modbus Plus network.
Your PC plugged into the BM85 on Modbus port 1.
Your BM85 set to a free Modbus Plus address.
Your PLC set to whatever address it already has (no change).
Your BM85 configured using the keystrokes above, in run mode.
I use FasTrak SoftWorks, so I opened it up and set it to look at the COM port. Next, I did a PLC connect and ran a port scan. I saw the BM85 and the PLC. I chose the PLC from the list and hit connect, and I was talking to the PLC!
So what's going on? In this situation, there are two types of communication: the Modbus Plus communication and the Modbus communication. The PC acts as the master in the Modbus communication and sends commands to the bridge mux. The bridge mux, although configured with the master option, actually acts as a slave on the Modbus port: it only responds to commands the PC master sends. The BM85 receives each message and saves it so it can send a corresponding message to the PLC over the Modbus Plus network.
Now, you have another network: the Modbus Plus network. There, each device in turn receives the token, which is its turn to speak; it says what it has to say on the Modbus Plus network, then passes the token to the next device.
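The scheduling idea behind token passing can be pictured as a simple round-robin loop. This is only an illustration of the concept -- the real Modbus Plus token protocol lives in the DSP firmware -- and the node addresses here are made up.

```python
from itertools import cycle, islice

def token_rotation(nodes, turns):
    """Yield which node holds the token on each turn.

    The token circulates through the nodes in address order, round-robin,
    so every node is guaranteed a turn to speak on the line.
    """
    return list(islice(cycle(sorted(nodes)), turns))

# A hypothetical small network: a PLC at 1, a BM85 at 2, a Bridge Plus at 64.
print(token_rotation([1, 2, 64], 7))  # [1, 2, 64, 1, 2, 64, 1]
```

The point of the model is that no node ever has to be asked to speak, which is why the bridge mux can translate between the master/slave world on its serial port and the peer-to-peer world on the Modbus Plus side.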
If the PLC and the BM85 are all configured, all this is transparent behind the scenes.
So let's talk troubleshooting.
If you seem to have all the right pieces connected in the right way, I'd start from the outside and work my way in.
Can you connect to a PLC directly using the serial cable? If so, then you've proven the converter and the cable. I know those serial converters can be flaky -- if you plug them into different USB ports, they'll use different COM addresses. Check Device Manager to make sure you're configured for the COM port you think you're using.
What is your BM85's Modbus Plus LED doing? The following are the flash codes for Modbus Plus:
Six flashes/second: Normal operating state. All nodes on a healthy network flash this pattern.
One flash/second: The node is off-line. After being in this state for 5 seconds, the node attempts to go to its normal operating state.
Two flashes, then OFF for 2 seconds: The node detects the network token being passed among other nodes, but it never receives the token.
Three flashes, then OFF for 1.7 seconds: The node does not detect any token passing on the network.
Four flashes, then OFF for 1.4 seconds: The node has detected another node using the same address.
If the light is flashing six times per second, that suggests your Modbus Plus network is working correctly.
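For quick reference at the panel, the flash codes above can be captured in a small lookup table. This is just a convenience sketch of the table above -- the BM85 itself provides nothing like it, and the pattern keys are my own shorthand.

```python
# Modbus Plus LED flash codes, keyed by a shorthand for the pattern.
FLASH_CODES = {
    "6/second":        "Normal operating state; all healthy nodes flash this.",
    "1/second":        "Node is off-line; retries normal operation after 5 seconds.",
    "2 then off 2.0s": "Node sees the token passed among other nodes but never receives it.",
    "3 then off 1.7s": "Node does not detect any token passing on the network.",
    "4 then off 1.4s": "Node has detected another node using the same address.",
}

def diagnose(pattern):
    """Translate an observed flash pattern into its meaning."""
    return FLASH_CODES.get(pattern, "Unknown flash pattern")

print(diagnose("4 then off 1.4s"))
```

The duplicate-address case is the one worth memorizing, since (as noted earlier) it's the one misconfiguration that can actually disturb the rest of the network.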
Thanks for reading!
February 17, 2015
I'm Jason Firth.
There's one thing most people don't know about the law that they should: The law isn't the same everywhere.
Often, people will talk about "how things are", as if their experience in their location describes everyone's. That's incorrect, and it's quite dangerous.
In an earlier entry, I talked about what it takes to become a Certified Engineering Technologist, and in another entry, I talked about what it takes to become a red seal Journeyman. I know first-hand about these things because I went through the process in 2013.
However, in 2013 I also made a mistake. I applied for, and achieved, my Certified Engineering Technologist designation in Manitoba. At the time, I didn't know whether it was recognized nationwide, so I called the Canadian Council of Technicians and Technologists and asked if I could use my designation across the country. They told me it was fine.
They were not being entirely truthful. In 2010, the governments of British Columbia, Alberta, Saskatchewan, and Ontario split from the Canadian Council of Technicians and Technologists to create Technology Professionals Canada, a new organization dedicated to the profession of Engineering Technology in Canada.
As a result of that split, and of the wording of Section 11 of the Ontario Association of Certified Engineering Technicians and Technologists Act, 1998, S.O. 1998, c. Pr7, use of the CET designation is restricted in Ontario, and it is an offence for anyone who is not a full member of OACETT to use the title.
Not realizing that the title didn't automatically transfer like a red seal, I used my CET title in Ontario, only to receive a Cease and Desist letter from OACETT's lawyers.
In my case, I asked about my options as a member of CTTAM, and the lawyer told me:
1. You can maintain your primary membership in Manitoba and apply to OACETT as an out-of-province member. You will pay full dues to Manitoba. You will need to pay out-of-province member's dues in Ontario which are one-third of what a regular member pays;
2. You can transfer your membership to Ontario; or
3. You can transfer your membership to Ontario and maintain out-of-province status with Manitoba (assuming Manitoba has this provision).
I ended up taking the third option: transferring my primary membership to the province I practice in, and keeping an out-of-province membership (at a cost of about $100/yr) in Manitoba. After paying a small fee, I was able to transfer my membership to Ontario without any further difficulty. It took about a month, during which I stopped using my designation in Ontario.
Something to keep in mind!
Thanks for reading!