Quick Primer On Cascade Control

Definition

A cascade closed loop system uses two or more nested process control loops: a ‘slower’ outer control loop and a ‘faster’ inner control loop, where the output of the outer (slower) loop serves as the setpoint of the inner loop. As in any closed loop, the controlled variable in each loop is measured and that measurement is used to manipulate a process variable.

 

Example

The easiest way to explain closed loop control is to take a blank sheet of paper and put a dot in the middle of the page….now put your finger on the dot…easy right? This is an example of closed loop control where your eyes provide the feedback information you need to move your finger onto the dot.  The target is defined and there is a measurement process loop to control the processing action.

Moving to an industrial example of a steam-heated heat exchanger: a temperature sensor measures the temperature of the liquid flowing out, that reading is compared to the desired temperature (known as the setpoint), and the controller increases or decreases the steam valve opening accordingly, affecting the flow of steam. Taking this example a little further, with Cascade Control there are two or more controllers where one controller’s output drives the setpoint of another controller.

The controller that drives the setpoint of the system, in this example that is the output temperature of the process fluid, is considered the primary, or outer, control loop. The second control loop, in this example that is the flow of heating steam, is considered the secondary, or inner, control loop and is the “faster” control loop.
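This nesting can be sketched in a few lines of code. In the minimal simulation below, a slow outer (primary) temperature PI loop produces the setpoint for a fast inner (secondary) flow PI loop; the gains, time constants, and first-order process models are illustrative assumptions, not values from this example.

```python
# Minimal cascade-control sketch. All numeric values are illustrative
# assumptions: a fast flow loop (tau = 1) nested inside a slow
# temperature loop (tau = 20).

class PI:
    """Textbook PI controller in positional form."""
    def __init__(self, kc, ti, dt):
        self.kc, self.ti, self.dt = kc, ti, dt
        self.integral = 0.0

    def update(self, sp, pv):
        error = sp - pv
        self.integral += error * self.dt
        return self.kc * (error + self.integral / self.ti)

def run_cascade(temp_sp=80.0, steps=4000, dt=0.1):
    outer = PI(kc=2.0, ti=50.0, dt=dt)   # slow temperature (primary) loop
    inner = PI(kc=1.0, ti=2.0, dt=dt)    # fast steam-flow (secondary) loop
    temp, flow = 20.0, 0.0               # initial process state
    for _ in range(steps):
        flow_sp = outer.update(temp_sp, temp)  # outer output = inner setpoint
        valve = inner.update(flow_sp, flow)
        # crude first-order process responses (assumed dynamics)
        flow += (valve - flow) * dt / 1.0
        temp += (20.0 + 0.5 * flow - temp) * dt / 20.0
    return temp
```

Because the inner flow loop is roughly twenty times faster than the outer temperature loop, disturbances in the steam flow are corrected before they ever reach the temperature loop, which is exactly the benefit described below.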

There are several advantages of cascade control and most come down to isolating a dynamically slow control loop from nonlinearities and disturbances in the final control element. Cascade control should always be used if you have a process with relatively slow dynamics (like level, temperature, composition, humidity) and a liquid or gas flow, or some other relatively-fast process, has to be manipulated to control the slow process. As in the above example, where modifying the steam flow rate is used to control heat exchanger outlet temperature, the steam flow control loop is used as the inner loop in a cascade arrangement.

It should be noted that cascade control does have some disadvantages. Firstly, it requires an additional measurement and an additional controller to work and that second controller will require tuning. Also, the control strategy is more complex. These disadvantages have to be weighed against the benefits of the expected improvement in control to decide if cascade control should be implemented.

Cascade control is beneficial only if the dynamics of the inner loop are fast compared to those of the outer loop and, as a rule of thumb, should not be used if the inner loop is not at least three times faster than the outer loop, because the improved performance may not justify the added complexity. Additionally, when the inner loop is not significantly faster than the outer loop, there is a risk of interaction between the two loops that could result in instability – especially if the inner loop is tuned very aggressively.

With these concepts and principles in mind, we can determine the basic criteria for the design and implementation of cascade control.

  • Cascade control is desirable when single loop control cannot provide sufficient control performance, and
  • When a measurable second variable is available.

If these criteria are met then cascade control can be considered and there are three additional criteria that must now be satisfied.

  • The secondary variable must indicate the occurrence of an important disturbance
  • There must be a causal relationship between the manipulated and secondary variables
  • The secondary variable dynamics must be faster than the primary variable dynamics

Mathematical Model

As a starting point, let’s put the example scenario into an engineering line diagram, labelled at each test, measurement and control junction. It should look something like this:

You will note that the stirrer / impeller has been removed since it has no bearing on the cascade control. In the above model, CV represents the controlled variable (CV1 is the temperature of the process fluid leaving the reactor and CV2 is the flow of steam); MV represents the manipulated variable; SP represents the different set points for the different controllers; F represents the flow of material at the various measurement points; P represents the pressure at various measurement points; and T represents the temperature at various measurement points.

While it may seem odd to use two closed loop controllers to achieve the same process goal, considering the degrees of freedom of the system shows that cascade control is legitimate, since the analysis below leaves zero remaining degrees of freedom.

The mass and energy balances for the heating loop follow the equations:
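One plausible form of these balances, assuming a well-mixed vessel with constant volume and physical properties, and a heating medium that leaves at approximately the process temperature (V here denotes the vessel liquid hold-up volume, an assumed parameter), is:

```latex
F_1 = F_0 \quad \text{(mass balance)}

V \rho C_p \frac{dT}{dt} = \rho C_p F_0 \,(T_0 - T) + \rho_h C_{ph} F_h \,(T_{h,in} - T)
\quad \text{(energy balance)}
```

The exact exit-temperature assumption for the heating medium would depend on the exchanger details.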

The heating flow is related to the valve position (v) according to the general equation:
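A standard form of this valve equation, using the pressures P0 and P1 and the flow coefficient Cv listed in the parameters below, is:

```latex
F_h = C_v \, v \, \sqrt{\frac{P_0 - P_1}{\rho_h}}
```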

where we have to assume that the pressures and the coefficient Cv are constant.

The final equations are the two cascade controllers:
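With the controller gains and integral times listed in the parameters below, the two PI control laws take the standard form:

```latex
(F_h)_{sp} = K_{c1}\left[(T_{sp} - T) + \frac{1}{T_{I1}}\int_0^t (T_{sp} - T)\,dt'\right] + I_{F_h}

v = K_{c2}\left[\left((F_h)_{sp} - F_h\right) + \frac{1}{T_{I2}}\int_0^t \left((F_h)_{sp} - F_h\right)dt'\right] + I_v
```

where the constants are set by the initial conditions of the problem.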

Degrees of Freedom = 5 − 5 = 0

Variables: F1, Fh, T, (Fh)sp, v

External Variables: F0, T0, Thin, Tsp

Parameters:
v = valve stem position (equivalent to the percent open)
ρ = density of the process fluid
Cp = heat capacity of the process fluid at constant pressure
ρh = density of the heating medium
Cph = heat capacity of the heating medium at constant pressure
Cv = valve flow coefficient (a valve characteristic that relates pressure, orifice opening and flow through the orifice)
P0 = pressure at measurement point 0
P1 = pressure at measurement point 1
Kc1 = feedback controller gain for the first controller
TI1 = integral time of the first controller
Kc2 = feedback controller gain for the second controller
TI2 = integral time of the second controller
IFh = constant determined by the initial conditions of the problem
Iv = constant determined by the initial conditions of the problem

The number of degrees of freedom equals the number of variables minus the number of equations; the system is therefore exactly specified once the outer / primary temperature controller set point is defined.

The number of parameters in a cascade system, which include the primary control loop dynamics, secondary control loop dynamics and disturbance dynamics, makes general performance correlations difficult to work with. The block diagram below shows the structure of a cascade control system; it summarizes the “flow” of information through the system and can be used to determine key properties such as the stability and frequency response of the individual control loops.

Block diagram of the standard cascade control structure.

Transfer functions can be derived from this block diagram for the relationships between the controlled variable, CV1(s), of the primary / outer loop and the secondary disturbance, D2(s), the primary loop disturbance D1(s), and the primary loop set point, SP1(s), as follows:
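These follow the standard textbook form for the cascade block diagram (subscript 1 for the primary loop, subscript 2 for the secondary loop):

```latex
\frac{CV_1(s)}{SP_1(s)} = \frac{G_{c1}G_{c2}G_{v}G_{p2}G_{p1}}
{1 + G_{c2}G_{v}G_{p2}G_{s2} + G_{c1}G_{c2}G_{v}G_{p2}G_{p1}G_{s1}}

\frac{CV_1(s)}{D_2(s)} = \frac{G_{d2}G_{p1}}
{1 + G_{c2}G_{v}G_{p2}G_{s2} + G_{c1}G_{c2}G_{v}G_{p2}G_{p1}G_{s1}}

\frac{CV_1(s)}{D_1(s)} = \frac{G_{d1}\left(1 + G_{c2}G_{v}G_{p2}G_{s2}\right)}
{1 + G_{c2}G_{v}G_{p2}G_{s2} + G_{c1}G_{c2}G_{v}G_{p2}G_{p1}G_{s1}}
```

Note that all three share the same characteristic denominator, which is why the inner loop tuning affects the stability of the whole cascade.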

where each G(s) denotes a transfer function; the argument (s) is the Laplace (complex frequency) variable of the continuous system.

Common transfer functions include:
Gc(s) = feedback controller function
Gd(s) = disturbance transfer function
Gp(s) = feedback process transfer function
Gs(s) = sensor transfer function
Gv(s) = valve (or final element) transfer function

As stated earlier, the key factor in cascade control is the relative dynamic behaviour of the secondary and primary processes, with emphasis on the disturbances in the secondary process. If we assume the transfer functions for the sensors and valve are taken as 1.0 and the relative dynamics between the secondary and primary processes are defined by a variable η, then the feedback process transfer functions boil down to:
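One common simplification, and an illustrative assumption here since the specific forms are not given, is to model both processes as first order with the secondary time constant scaled by η:

```latex
G_{p2}(s) = \frac{K_{p2}}{\eta\,\tau s + 1}, \qquad
G_{p1}(s) = \frac{K_{p1}}{\tau s + 1}
```

with η ≪ 1 corresponding to an inner loop much faster than the outer loop, consistent with the rule of thumb above.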

Quick Primer On Single Zone Closed Loop PID Control

Quick Primer On Closed Loop Control

 Definition

A closed loop system is one where the controlled variable is measured and this measurement is used to manipulate a process variable.

 

Example

The easiest way to explain closed loop control is to take a blank sheet of paper and put a dot in the middle of the page….now put your finger on the dot…easy right? This is an example of closed loop control where your eyes provide the feedback information you need to move your finger onto the dot.  The target is defined and there is a measurement process loop to control the processing action.

In this example a temperature sensor measures the temperature of the liquid flowing out and that temperature reading is compared to the desired temperature (known as the setpoint) and the controller will increase or decrease the steam valve opening accordingly, affecting the flow of steam.

The amount of opening, or closing, of the steam valve is determined by the algorithms used by the controller which have, hopefully, been properly tuned to how the process reacts. There are five types of mathematical models that are used to determine the system response and the ‘weight’ given to each model will determine the effectiveness of the controller response to the system.

These five models are simple On/Off, Proportional response, Proportional with Integral response (PI), Proportional with Derivative response (PD), and Proportional Integral Derivative (PID) response…

  1. On / Off. On-Off control has two states, fully off and fully on. To prevent rapid cycling, some hysteresis is added to the switching function. In operation, the controller output is on from start-up until temperature set value is achieved. After overshoot, the temperature then falls to the hysteresis limit and power is reapplied.

On-Off control can be used where:
a) The process is underpowered and the heater has very little storage capacity.
b) Where some temperature oscillation is permissible.
c) On electromechanical systems (compressors) where cycling must be minimized.
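The switching behaviour described above can be sketched as a small decision function; the 2-degree hysteresis band is an illustrative assumption:

```python
# On-off control with hysteresis: power stays on until the set value is
# reached, and is reapplied only after the temperature has fallen back
# to the lower hysteresis limit. Band width (2.0) is an assumed value.

def on_off(temp, setpoint, heater_on, hysteresis=2.0):
    """Return the new heater state for one control interval."""
    if heater_on and temp >= setpoint:
        return False          # reached set value: switch fully off
    if not heater_on and temp <= setpoint - hysteresis:
        return True           # fell to the hysteresis limit: reapply power
    return heater_on          # inside the band: hold the current state
```

The hysteresis band is what prevents the rapid cycling mentioned above: between the two thresholds the output simply holds its last state.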

  2. Proportional.  Proportional controllers modulate power to the process by adjusting their output power within a proportional band. The proportional band is expressed as a percentage of the instrument span and is centered over the setpoint. At the lower proportional band edge and below, power output is 100%. As the temperature rises through the band, power is proportionately reduced so that at the upper band edge and above, power output is 0%.

Proportional controllers can have two adjustments:

  a) Manual Reset. Allows positioning the band with respect to the setpoint so that more or less power is applied at setpoint to eliminate the offset error inherent in proportional control.
  b) Bandwidth (Gain). Permits changing the modulating bandwidth to accommodate various process characteristics. High-gain, fast processes require a wide band for good control without oscillation. Low-gain, slow-moving processes can be managed well with a narrow band, approaching on-off control. The relationship between gain and bandwidth is expressed inversely:
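The inverse relationship referred to above is conventionally written as:

```latex
\text{Gain} = \frac{100\%}{\text{Proportional Band}\ (\%)}
```

so, for example, a proportional band of 25% of span corresponds to a gain of 4.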

         

Proportional-only controllers may be used where the process load is fairly constant and the setpoint is not frequently changed. Proportional control and controllers are not frequently used.

  3. Proportional with Integral (PI), automatic reset. Integral action moves the proportional band to increase or decrease power in response to temperature deviation from setpoint. The integrator slowly changes power output until zero deviation is achieved. Integral action cannot be faster than process response time or oscillation will occur. Proportional with Integral control is perhaps the most widely used type of control.
  4. Proportional with Derivative (PD), rate action. Derivative moves the proportional band to provide more or less output power in response to rapidly changing temperature. Its effect is to add lead during temperature change. It also reduces overshoot on start-up. Proportional with Derivative control and controllers are not frequently used but have found popularity for controlling servomotors.
  5. Proportional Integral Derivative (PID). This type of control is useful on difficult processes. Its Integral action eliminates offset error, while Derivative action rapidly changes output in response to load changes. Full PID control is surprisingly only used occasionally and, as stated, for ‘difficult’ processes.

Here’s a simplified block diagram of what the PID controller does:

The principle of operation in its most basic form is as follows:

The process value (PV) is subtracted from the setpoint (SP) to create the error. The error is then multiplied by one, two, or all three of the calculated P, I and D actions (depending on which ones are turned on), and the resulting “error x control action” terms are added together and sent to the controller output.
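That description maps directly onto a discrete-time PID update, sketched below; the gains and timestep are illustrative assumptions:

```python
# Discrete PID: the error (SP - PV) feeds the P, I and D actions, and
# their sum becomes the controller output. Gains/timestep are assumed.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def update(self, sp, pv):
        error = sp - pv                      # SP - PV
        self.integral += error * self.dt     # accumulate for the I action
        derivative = 0.0
        if self.prev_error is not None:
            derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        # sum of the three actions is sent to the controller output
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)
```

Turning an action “off” simply means setting its gain to zero, which is how the P, PI and PD variants above relate to full PID.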

 

 

Mathematical Model

PID control is named after its three correcting terms, whose sum constitutes the manipulated variable (MV). The proportional, integral, and derivative terms are summed to calculate the output of the PID controller. Defining u(t) as the controller output, the final form of the PID algorithm is:
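In the standard parallel form, consistent with the gains defined below:

```latex
u(t) = K_p \, e(t) + K_i \int_0^{t} e(\tau)\, d\tau + K_d \frac{de(t)}{dt}
```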

where

Kp is the proportional gain, a tuning parameter,
Ki is the integral gain, a tuning parameter,
Kd is the derivative gain, a tuning parameter,
e(t) = SP − PV(t) is the error (SP is the setpoint, and PV(t) is the process variable),
t is the time or instantaneous time (the present),
τ is the variable of integration (it takes values from time 0 to the present t).

The steady state and dynamic behavior of a system can be determined by solving the differential equation representing the system. This may be a long and tedious task, especially when there are many elements in the system. One technique for solving such differential equations uses the Laplace transformation. Laplace transformations, useful as they are, can only be applied to linear differential equations; the problem is restated in terms of a second variable, s, which allows it to be solved algebraically, and transforming back to the original independent variable then yields the solution to the original differential equation. Equivalently, the Laplace transfer function of the PID controller is:
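In the same parallel form as the time-domain equation:

```latex
L(s) = K_p + \frac{K_i}{s} + K_d \, s
```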

Where  s  is the complex frequency.

Proportional term

The proportional term produces an output value that is proportional to the current error value. The proportional response can be adjusted by multiplying the error by a constant Kp, called the proportional gain constant.

The proportional term is given by
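In the notation above:

```latex
P_{\text{out}} = K_p \, e(t)
```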

Response of PV to step change of SP vs time, for three values of Kp (Ki and Kd held constant)

A high proportional gain results in a large change in the output for a given change in the error. If the proportional gain is too high, the system can become unstable. In contrast, a small gain results in a small output response to a large input error, and a less responsive or less sensitive controller. If the proportional gain is too low, the control action may be too small when responding to system disturbances. Tuning theory and industrial practice indicate that the proportional term should contribute the bulk of the output change.

Steady-state error

The steady-state error is the difference between the desired final output and the actual one. Because a non-zero error is required to drive it, a proportional controller generally operates with a steady-state error. Steady-state error (SSE) is proportional to the process gain and inversely proportional to proportional gain. SSE may be mitigated by adding a compensating bias term to the setpoint AND output, or corrected dynamically by adding an integral term.

Integral term

The contribution from the integral term is proportional to both the magnitude of the error and the duration of the error. The integral in a PID controller is the sum of the instantaneous error over time and gives the accumulated offset that should have been corrected previously. The accumulated error is then multiplied by the integral gain (Ki) and added to the controller output.

The integral term is given by:
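In the notation above:

```latex
I_{\text{out}} = K_i \int_0^{t} e(\tau)\, d\tau
```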

Response of PV to step change of SP vs time, for three values of Ki (Kp and Kd held constant)

The integral term accelerates the movement of the process towards setpoint and eliminates the residual steady-state error that occurs with a pure proportional controller. However, since the integral term responds to accumulated errors from the past, it can cause the present value to overshoot the setpoint value.

Derivative term

The derivative of the process error is calculated by determining the slope of the error over time; multiplying this rate of change by the derivative gain Kd gives the magnitude of the derivative term’s contribution to the overall control action.

The derivative term is given by
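In the notation above:

```latex
D_{\text{out}} = K_d \frac{de(t)}{dt}
```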

Response of PV to step change of SP vs time, for three values of Kd (Kp and Ki held constant)

Derivative action predicts system behavior and thus improves the settling time and stability of the system. An ideal derivative is not causal, so practical implementations of PID controllers include additional low-pass filtering of the derivative term to limit high-frequency gain and noise. Derivative action is seldom used in practice because of its variable impact on system stability in real-world applications.

Quick Primer On Open Loop Control


 Definition

An open loop control system is one where there is no information about the ‘controlled’ variable, which in this example is the temperature of the liquid out.

 

Example

The easiest way to explain open loop control is to take a blank sheet of paper and put a dot in the middle of the page….now close your eyes and put your finger on the dot…chances are that you missed the dot in the middle since you don’t have any visual feedback or guide, but this is open loop! The goal is defined but there is no feedback information to let you know how well the goal is being achieved.

In the example shown below, the position of the steam valve controls the process, and since there is no measurement of the temperature of the outgoing liquid, any process control is based on the familiarity and experience of the operator adjusting the steam valve.

 

Mathematical model

Since there is no measurement or feedback, the control portion is entirely subjective and consequently there can be no mathematical model for this type of control.

Tighten your process control with different types of control

TIGHTENING CONTROL OF YOUR PROCESS BY UNDERSTANDING DIFFERENT TYPES OF CONTROL

Process control covers a multitude of disciplines and technologies. Whether it be mass flow, fluid flow, pressure, position or temperature, the principles of control follow the same basic rules of measurement followed by adjustment of a process variable to maintain control of the process. The more precise the control, the more uniform and higher quality the end product, and, more importantly, the less the waste and the higher the profitability.
Let’s start with basic descriptions of the types of control, starting with open loop control. The easiest way to explain open loop control is to take a blank sheet of paper and put a dot in the middle of the page….now close your eyes and put your finger on the dot…chances are that you missed the dot in the middle since you don’t have any visual feedback or guide, but this is open loop! The goal is defined but there is no feedback information to let you know how well the goal is being achieved.
Now open your eyes and repeat this test; chances are that you hit the dot precisely since you have the visual feedback of where the dot is; this is a closed loop process. Another analogy for a closed loop control process involves your car. When you push the accelerator, the car’s velocity increases and your experience and eyes tell you when to stop accelerating and ease up on the accelerator; this is an example of closed loop control where you, the driver, close the speed loop. When you have reached your desired speed and let the car ‘do the driving’ you can turn on the cruise control where the speed of the car is measured and monitored with the car occasionally accelerating or decelerating to maintain a constant cruising speed – this is an example of an automatic closed loop process.
Taking this analogy a little further with some of today’s cars that have automatic driving options, where the distance between your car and those around it is measured and monitored, as well as measuring and monitoring how well you stay in your lane, is an example of a cascade control loop where multiple, nested process control loops have an effect on the primary variable, which in this example is the speed of the car.
Now let’s put these examples into the context of an industrial process and have a look at these different types of process control and the advantages, and disadvantages, of each. I’ll use an example of a simple heat exchanger vessel to help explain these different types of industrial control.
Let’s first look at a simple open loop control system:
As I stated above, the definition of an open loop control system is one where there is no information about the ‘controlled’ variable, which in this example is the temperature of the liquid out. In this example, the position of the steam valve controls the process, and since there is no measurement of the temperature of the outgoing liquid, any process control is based on the familiarity and experience of the operator adjusting the steam valve.

The advantage of this type of control is that it is cheap from an equipment expenditure perspective. The disadvantages, however, are numerous: inaccurate and widely variable output temperature, large waste costs, the need for experienced operators to maintain the process, and the inability to resolve changes, or disturbances, to the process quickly, leading to more waste and unnecessary expense.
Next, let’s consider the use of a single automatic closed loop controller on the process. The definition of a closed loop system is where the controlled variable (temperature of the liquid flowing out of the vessel) is measured and this measurement is used to manipulate a process variable. In this example a temperature sensor measures the temperature of the liquid flowing out and that temperature reading is compared to the desired temperature (known as the setpoint) and the controller will increase or decrease the steam valve opening accordingly, affecting the flow of steam.


The amount of opening, or closing, of the steam valve is determined by the algorithms used by the controller which have, hopefully, been properly tuned to how the process reacts. There are five types of mathematical models that are used to determine the system response and the ‘weight’ given to each model will determine the effectiveness of the controller response to the system.
These five models are simple On/Off, Proportional response, Proportional with Integral response (PI), Proportional with Derivative response (PD), and Proportional Integral Derivative (PID) response. Let’s investigate these a little further…

1. ON / OFF. On-Off control has two states, fully off and fully on. To prevent rapid cycling, some hysteresis is added to the switching function. In operation, the controller output is on from start-up until temperature set value is achieved. After overshoot, the temperature then falls to the hysteresis limit and power is reapplied.
On-Off control can be used where:
a) The process is underpowered and the heater has very little storage capacity.
b) Where some temperature oscillation is permissible.
c) On electromechanical systems (compressors) where cycling must be minimized.

On/Off control is surprisingly widely used (think of your home thermostat) but not so much in industrial processes.

2. PROPORTIONAL. Proportional controllers modulate power to the process by adjusting their output power within a proportional band. The proportional band is expressed as a percentage of the instrument span and is centered over the setpoint. At the lower proportional band edge and below, power output is 100%. As the temperature rises through the band, power is proportionately reduced so that at the upper band edge and above, power output is 0%.

Proportional controllers can have two adjustments:
a) Manual Reset. Allows positioning the band with respect to the setpoint so that more or less power is applied at setpoint to eliminate the offset error inherent in proportional control.
b) Bandwidth (Gain). Permits changing the modulating bandwidth to accommodate various process characteristics. High-gain, fast processes require a wide band for good control without oscillation. Low-gain, slow-moving processes can be managed well with a narrow band, approaching on-off control. The relationship between gain and bandwidth is expressed inversely:

Proportional-only controllers may be used where the process load is fairly constant and the setpoint is not frequently changed. Proportional control and controllers are not frequently used.

3. PROPORTIONAL WITH INTEGRAL (PI), automatic reset. Integral action moves the proportional band to increase or decrease power in response to temperature deviation from setpoint. The integrator slowly changes power output until zero deviation is achieved. Integral action cannot be faster than process response time or oscillation will occur. Proportional with Integral control is perhaps the most widely used type of control.

4. PROPORTIONAL WITH DERIVATIVE (PD), RATE ACTION. Derivative moves the proportional band to provide more or less output power in response to rapidly changing temperature. Its effect is to add lead during temperature change. It also reduces overshoot on start-up. Proportional with Derivative control and controllers are not frequently used but have found popularity for controlling servomotors.

5. PROPORTIONAL INTEGRAL DERIVATIVE (PID). This type of control is useful on difficult processes. Its Integral action eliminates offset error, while Derivative action rapidly changes output in response to load changes. Full PID control is surprisingly only used occasionally and, as stated, for ‘difficult’ processes.

Here’s a simplified block diagram of what the PID controller does:


The principle of operation in its most basic form is as follows:
The process value (PV) is subtracted from the setpoint (SP) to create the Error. The error is simply multiplied by one, two or all of the calculated P, I and D actions (depending which ones are turned on). Then the resulting “error x control actions” are added together and sent to the controller output.

The advantages of using a closed loop control process are numerous and include reduced waste of the process variable (in our example, steam), tighter control and accuracy of the controlled variable (in this case the temperature of the liquid flow out), and automatic control, meaning no significant human involvement, which allows the process to be located in inaccessible or remote locations. This example shows one process variable being controlled, but the addition of multiple controllers for different process variables only increases the degree of process control, reliability, accuracy, repeatability and safety. The only disadvantage I can conceive is the cost of a reliable controller, but this disadvantage is essentially eliminated given the savings in process variable waste of an open loop system.

Taking this concept of multiple controllers a little further introduces other types of control that further increase the degree of process control and all the advantages associated with this. The first type is cascade control, where the output of one controller serves as the setpoint for a second controller. In the example here, the temperature of the outflowing liquid is the primary feedback loop, and the output from this temperature controller serves as the setpoint (the desired steam flow) for the second controller, which monitors and adjusts the flow of steam via the steam input valve. It is also worth noting that the two control loops are nested and that any variations in the secondary, or inner, loop are effectively removed from the primary, or outer, control loop. Cascade control is particularly useful for processes with a slower primary / outer control loop and a faster secondary / inner control loop.

In this example, any variation (often referred to as a disturbance) in the flow of heating steam is controlled, and compensated for, by the secondary inner loop before it affects the primary outer temperature loop. The primary outer loop is, however, affected by other variables and disturbances from the heat exchange vessel, the heated liquid flow and so on, and it responds more slowly, since it is the slower of the two process loops.

The advantages of a cascade control loop are as follows:
• The slow, primary, outer loop is isolated from variations and disturbances within the fast, secondary, inner loop.
• If the process has a non-linear response, as most do, the process can be stabilized using a cascade loop.
• Effectively ‘doubling up’ on the measurement and feedback loops increases the efficiency of the process.
This “doubling up” can be done using two discrete automatic process controllers, more commonly referred to as single zone controllers, or, more cost effectively, with a multizone controller.
The second type of control that can be implemented with two controllers is feedforward control. Feedforward control, sometimes called predictive or anticipatory control, involves a multiple input process where the inputs are measured. Feedforward control uses these input measurements and their relationships with the input and output to adjust the process so that variations and disturbances of the input are minimized or eliminated on the process output. In the example below, the measurements of temperature and flow of the incoming liquid, together with the knowledge of the process, are used to adjust the amount of steam being applied to the process.
It should be noted that feedforward control can rarely fulfill all the control requirements of a process and usually incorporates a feedback control loop as well.

Like cascade control, feedforward control can be achieved using a couple of discrete single zone closed loop controllers, or a single multizone controller. The more control loops the process requires, the more single zone controllers are needed, or, alternatively, a single controller with a higher loop / zone count can be used.

Examples of a single loop controller are as follows….

Shown here are the Athena model 19C, 16C, 1ZC, 25C and 18C DIN sized controllers, each capable of PID control of one zone / control loop.

Examples of multi zone controllers that are suitable for both cascade control and feedforward control are as follows…..

Shown here are the Athena models Foundation 20, the Foundation 40 and the Foundation 50 along with some of the many display options available for these controllers. The Foundation 20 is a two zone controller, the Foundation 40 is a four zone controller and the Foundation 50 is an eight zone controller. Also shown here are a few examples of the different types of displays that are available for the Foundation series.

Options for Industrial Temperature Measurement

This primer covers thermocouples, the most widely used temperature measurement devices, and also explores other instruments and devices used to measure, and subsequently control, temperature in industrial processes.

The versatile thermocouple is the most widely used industrial temperature-measurement device. …

Click here to read the final installment of a two part article as published in Process Cooling magazine

Classification of Temperature Measurement Devices

The materials of construction, temperature range, accuracy and control requirements for each type of temperature-measuring device varies, but the key function remains the same.

Temperature is a critical — and constantly measured — controllable process variable for engineers with applications ranging from water feed to a boiler to the temperature inside an induction furnace. In fact, temperature measurement in industrial processes covers a diverse universe of needs….

Click here to read the first installment of a two part article as published in Process Cooling magazine

Today’s Challenges Of Maintaining Legacy Control Systems

The struggle of maintaining aging automation platforms is very real. According to ARC Advisory Group, there are $65 billion worth of installed distributed control systems (DCSs) nearing their end of life, with many of those systems over 25 years old. Unfortunately, manufacturers experience a much higher rate of failure with aging components, along with a host of other associated issues and risks, not least of which is the scarcity of suitable replacement components. It should be noted that most electronic components have a usable life of ten to twelve years before they begin to degrade or fail, so overcoming these obstacles and finding the best path forward toward a more effective automation solution is key to future success. We believe today’s biggest challenges are as follows:

No Spare Parts

Sourcing spare parts becomes increasingly difficult as control system suppliers can no longer obtain the component parts needed to build their control systems or to repair existing installed systems. Suppliers may choose not to redesign old circuit boards with new components, either because of significantly increased costs or because recertification would be impractical and cost prohibitive. This forces users to rely on the aftermarket for used parts or remanufactured components, which simply don’t have the reliability of new parts. Failures in systems without redundancy often cause immediate production downtime, and even systems with redundancy will eventually experience failure rates high enough to impact production, as multiple failures occur before parts can be replaced.

Fortunately for Athena’s customers, we maintain a large stock of component parts for our most popular controllers and are able to manufacture replacement boards with relative ease. In the case where components are made obsolete by our suppliers, we typically make a substantial last buy and our engineering teams start to develop direct replacement boards using newer parts and components.

Tribal Knowledge

Not only do parts become difficult to find; having personnel knowledgeable about the legacy platform is also a challenge. Again, according to ARC, over 20% of personnel familiar with legacy DCS platforms have retired, with many more approaching retirement, leaving many facilities without people able to modify or even maintain the control system. The options for replacing this tribal knowledge are limited because DCS suppliers often no longer provide training on older platforms, most commonly due to a lack of demand. Even if training is offered, millennials are none too excited about learning a “new” technology that will not give them skills to enhance their careers or find a job in the future.

Documentation, a major part of this tribal knowledge, allows new developers to understand the system. A lack of documentation turns a system into uncharted territory for a new developer; in such cases, mission critical applications suffer huge losses, as even small batch work takes a very long time when the people who actually developed the application and process are no longer with the company or have retired.

At Athena we have always maintained a superior documentation system, and when any of our personnel retire, their in-depth product knowledge is captured and transcribed into engineering notes for each product, ready for tomorrow’s replacements to pick up where the retirees left off.

Scalability issues

Though not faced by all legacy systems, scalability can certainly be a big hiccup. When additional workload is presented to the system, additional hardware resources should be efficiently utilized to service the increased load. With older control systems, however, hardware availability is not the only issue: new hardware may be incompatible with older hardware, and the knowledge of how to integrate old and new hardware may be hard to come by.

For Athena’s customers, we make a concerted effort to make all our new products backwards compatible so that they can talk with older controllers. If this is not feasible because the new and old technologies are totally incompatible, then conventional wisdom indicates that the embedded control system is long overdue for an upgrade.

Code fragmentation

Up to this point, we have only considered hardware concerns and hardware support; there is another critical part of control system operation that needs to be considered, namely software and firmware. Over the years, multiple pieces of code are implemented around the core software by different developers, resulting in fragmented code. Because incorporating new functionality into the core system is difficult, people start building new code around it or adding middleware and front-end systems, which increases the complexity of the system. The result is a tangle of code that requires far more manpower to maintain. A lot of redundant code also accumulates as a byproduct, making errors even harder to fix when they appear. Beyond the fragmentation itself, newly added functionality and code can make the resulting program more bug-prone and more difficult to test and prove, especially for newer developers unfamiliar with the original code.

At Athena, good code design and structure are paying dividends. Firmware in Athena’s controllers is structured so that key routines are compartmentalized; if, for example, the code that addresses how an analog-to-digital converter (ADC) operates needs to be updated, it can be worked on, altered, tested and implemented without affecting any other part of the firmware ‘system’. Additionally, while Athena’s code is sophisticated, it has been designed, and well documented, such that it will be easily understood by future generations.
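The compartmentalization principle described above can be sketched in a few lines. This is an illustrative example only, not Athena’s actual firmware: the class names and the conversion formula are assumptions chosen to show how code that depends only on a narrow interface is unaffected when the driver behind that interface is swapped.

```python
# Illustrative sketch of compartmentalized design: the rest of the
# system depends only on a narrow ADC interface, so a driver can be
# replaced and re-tested without touching any other module.

from abc import ABC, abstractmethod

class ADC(ABC):
    @abstractmethod
    def read_counts(self) -> int:
        """Return a raw conversion result in counts."""

class LegacyADC(ADC):
    def read_counts(self) -> int:
        return 2048  # stand-in for the old converter driver

class ReplacementADC(ADC):
    def read_counts(self) -> int:
        return 2048  # new driver, honoring the same contract

def counts_to_celsius(adc: ADC) -> float:
    # Hypothetical scaling: 12-bit ADC spanning 0-500 degrees C.
    # This logic never changes when the driver underneath does.
    return adc.read_counts() * (500.0 / 4095.0)
```

Because `counts_to_celsius` touches only the `ADC` interface, swapping `LegacyADC` for `ReplacementADC` requires no change, and no re-testing, anywhere else in the program.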

Limited connectivity

Most legacy systems were developed before the concept of the internet was introduced to the masses, and they were designed to work without it. Introducing web solutions to such legacy systems is a daunting task, fraught with issues such as security, new business requirements and compatibility. The limited ability of legacy systems to interact with other systems also poses a challenge when expanding the scope of the business. Interoperability issues are mostly tackled with workarounds, which are not foolproof and are prone to errors. Integrating the existing system with today’s network of mobile, cloud, web services, etc. is again very challenging and results in even more fragmented code.

Athena’s more recent products have communications capabilities and can talk, using familiar protocols, with MES and ERP software. It should be noted that while the Internet of Things (IoT) and Industry 4.0 are the subject of much current hype, in reality implementation of these tools has been slow, driven primarily by concerns over data security and data ownership. This suggests we should be asking whether the upside of upgrading to Industry 4.0 technologies is worth the investment; the hype promises all kinds of plant capabilities, data reporting and data analyses, but how much of this potential will actually be used has yet to be seen – after all, who uses all the awesome power of Excel or Word?
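One of the “familiar protocols” widely used for this kind of controller-to-software communication is Modbus RTU. The sketch below builds a standard “read holding registers” request frame; the device address and register numbers are hypothetical, and a real deployment would use an established Modbus library rather than hand-built frames.

```python
# Illustrative sketch: constructing a Modbus RTU "read holding
# registers" (function 0x03) request by hand, including the
# standard Modbus CRC-16 (polynomial 0xA001, initial value 0xFFFF).

def crc16_modbus(data: bytes) -> int:
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0xA001
            else:
                crc >>= 1
    return crc

def read_holding_registers(unit: int, start: int, count: int) -> bytes:
    # Frame: unit address, function code, start register, register count
    frame = bytes([unit, 0x03,
                   start >> 8, start & 0xFF,
                   count >> 8, count & 0xFF])
    crc = crc16_modbus(frame)
    return frame + bytes([crc & 0xFF, crc >> 8])  # CRC sent low byte first

# Hypothetical request: read 2 registers starting at 0 from device 1
request = read_holding_registers(unit=1, start=0, count=2)
```

A useful property of this CRC is that recomputing it over a frame that already includes its own CRC bytes yields zero, which is how a receiver validates an incoming frame.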

What’s the Risk?

The risks of keeping a legacy automation system are numerous. As the failure rate of components increases, so does the impact on production. A significant control system failure can cause facility outages lasting several weeks. The risk escalates when failures are combined with a lack of personnel able to troubleshoot and make repairs. The cost of this lost production quickly exceeds the cost of upgrading the control system to a modern platform. And nearly every production upset comes with associated safety and environmental risks.

Fortunately for Athena’s customers, our ability to keep legacy automation systems running by replacing older boards and systems means that this risk, while not eliminated, is significantly reduced.

Hidden Costs

Even if the legacy system is still working, there are hidden costs to keeping it around. OEM parts and support costs are higher for older platforms, and there is often a lack of functionality compared to a modern distributed control system. Limitations in the older technology prevent open communication with smart field devices, subsystems and higher-level enterprise resource planning (ERP) systems. New operators are less effective using the older-style human-machine interfaces (HMIs) in legacy DCS platforms, and their response to abnormal situations is inhibited by unfamiliar legacy alarm systems. Additionally, older platforms often run unsupported operating systems and slower technologies with early iterations of communication capabilities, and may be more vulnerable to cyber-attacks, with limited options to adequately secure them.

Modernization Is Key

What can be done to mitigate all these risks and find the best path forward? Some users will adopt a strategy of accumulating a quantity of spare parts, hoping to extend the life of their system, but this approach still leaves them vulnerable to all the risks previously mentioned.

The other option for long-term operational efficiency is to modernize the automation system. Modernization is best done in a planned, disciplined fashion. As with any project, you will want to utilize proven best practices and implementation resources that will deliver value throughout the new system’s entire lifecycle. Of utmost importance is to begin with a front-end loading engineering effort for successful planning and budgeting. This will allow you to:

  • define a scope aligned with business needs and facility requirements
  • evaluate and select the best platform and project options
  • develop an execution plan and schedule
  • develop an accurate cost estimate and associated justification

Looked at closely, all the issues overlap in nature, but their solutions are largely independent of one another; each issue must be fixed individually, which implies that a significant investment is required. Sticking with legacy systems and software and trying to squeeze out every drop of service can save a cash-strapped organization a significant investment, but this strategy may prove shortsighted in the long run. The Industrial Internet of Things (IIoT) and Industry 4.0 got off to a very rocky start, with major fears about security, privacy and data ownership making headlines almost every day, but the advantages and efficiencies that IIoT and Industry 4.0 offer are too enticing to ignore and are here to stay (and be developed); the learning curve will be a shallow one.

All said, however, the scenario is not grim. There are effective solutions available in the market to ensure that your legacy applications can be converted into modern, updated applications without significant losses in investment or downtime.