Course Modules



The more advanced modules assume knowledge of many of the basic modules. Whitehouse's courses are intensive and it is therefore strongly recommended that delegates attend the pre-requisite modules.

Course, module and pre-requisite modules:

Modern Control Techniques
Introduction: no pre-requisites
Process Dynamics: no pre-requisites
PID Algorithm: Process Dynamics
Signal Conditioning: Process Dynamics, PID Control
Level Control: Process Dynamics, PID Control, Signal Conditioning
Feedforward Control: Process Dynamics, PID Control, Signal Conditioning

Advanced Control Techniques
Deadtime Compensation: Process Dynamics, PID Control
Non-linear Control: Process Dynamics, PID Control, Signal Conditioning
Constraint Control: Process Dynamics, PID Control, Signal Conditioning, Level Control, Feedforward Control

Further modules
Inferential Properties: Process Dynamics
Statistics for Control Engineers: Process Dynamics
Optimisation: Process Dynamics, PID Control, Signal Conditioning, Level Control, Feedforward Control, Constraint Control
Project Execution: no pre-requisites
Steam Boiler and Fired Heater Control: Process Dynamics, PID Control, Signal Conditioning, Level Control, Feedforward Control
Compressor Control: Process Dynamics, PID Control, Signal Conditioning
Distillation Control: Process Dynamics, PID Control, Signal Conditioning, Level Control, Feedforward Control
Gasoline Blending: Process Dynamics

A detailed description of Whitehouse's course modules follows. Whitehouse regularly generates new modules to meet client demand. Those currently available include:

Introduction
course introduction

Good process control can have a substantial impact on process performance – potentially doubling the profitability of some continuous processes. This module aims to explain the source of these improvements and introduce the technologies involved in their capture. It includes an introduction to the course itself, followed by an introduction to the subject of process control.

The format of the course is first described. It emphasises that the lectures should be treated very informally and questions encouraged. It gives details as to how the approach is very practical, minimising the use of control theory. It encourages students to contact the tutor after the course should they encounter difficulties applying what has been covered.

Moving on to process control in general, it first outlines the hierarchical levels of control – specifically defining what is meant by regulatory control, constraint control and optimisation, and how these operate together.

It then identifies the key technologies applicable to regulatory control, such as signal conditioning, the PID algorithm, feedforward techniques and dynamic compensation. It similarly lists the techniques applicable to constraint control and optimisation.

The module then describes the main sources of benefits that may be captured by regulatory control, such as those arising from stable operation, faster change of operating conditions, maintenance savings, better use of the operator, etc. It also shows how better regulatory control permits a closer approach to operating constraints and how process non-linearities can be a major source of profit improvement. It then takes a similar approach to show what benefits may be captured by constraint control and optimisation.

The remainder of the module is a detailed description of the case study process that will be used throughout the course. This addresses the operating objectives and how these are satisfied by the planned control schemes.

benefits of improved control
regulatory control
constraint control
closed loop optimisation
terminology
hierarchy of control
case study description

 

Process Dynamics
4 to 5 hours
process gain, deadtime and lag

Understanding process dynamics is key to the successful implementation of virtually all process control technologies, and to the successful application of the techniques included later in the course, so this module is seen as an essential first step. It typically fills the remainder of the day following the Introduction module and includes a number of paper exercises and one hands-on simulation exercise. Its aim is to give the student a clear understanding of process dynamics and how they may be obtained from plant tests.

It uses a simple process response to explain the concepts of process gain, process deadtime and process lag. It also describes what is meant by the order of the process and how, in most cases, a simplifying approximation allows most processes to be treated as first order.

It then describes the effect that changes in gain, deadtime, lag and order have on the response of processes. In particular the phenomenon of inverse response is explained.

Guidelines on how to successfully execute a plant test are presented. Two methods of analysing the response (Ziegler-Nichols and Whitehouse) are presented. Attention is paid to the impact of process non-linearity and the correct choice of engineering units. The student is then given the task of quantifying the process dynamic constants for a number of process response curves. Many of these cover problems that are likely to be encountered on a real process. The results of this work are reviewed so that students will be aware of such problems and their solution.
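As a hedged illustration of this kind of step-test analysis (not Whitehouse's own method), the sketch below fits first-order-plus-deadtime parameters to a recorded response using the common two-point (28%/63%) approach; the data, function name and variable names are hypothetical.

```python
# Minimal sketch: estimating first-order-plus-deadtime (FOPDT) parameters from a
# step test, assuming a known step in the MV and a settled response.
import numpy as np

def fit_fopdt(t, pv, mv_step, pv_initial):
    """Estimate process gain, deadtime and lag from a step response."""
    pv_final = np.mean(pv[-5:])                # settled value at the end of the test
    gain = (pv_final - pv_initial) / mv_step   # process gain, PV units per MV unit
    span = pv_final - pv_initial
    # times at which the response reaches 28.3% and 63.2% of its total change
    t28 = np.interp(pv_initial + 0.283 * span, pv, t)
    t63 = np.interp(pv_initial + 0.632 * span, pv, t)
    lag = 1.5 * (t63 - t28)                    # two-point estimate of the time constant
    deadtime = max(t63 - lag, 0.0)             # remainder attributed to deadtime
    return gain, deadtime, lag

# synthetic example: gain 2, deadtime 1 min, lag 3 min, MV stepped by 5
t = np.linspace(0, 30, 301)
pv = 50 + 2 * 5 * np.clip(1 - np.exp(-(t - 1) / 3), 0, None)
print(fit_fopdt(t, pv, mv_step=5, pv_initial=50))
```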

The module concludes with a practical session in which the students obtain process dynamics of a simulated process heater over a range of different operating conditions.

concept of order
simplifying approximations
obtaining dynamics from plant tests
linearity
non-self-regulating process

 

PID Algorithm
10 to 12 hours
development of control algorithm

Although now more than 70 years old, the PID controller remains the key technology in most basic controllers. In the last 30 years, as DCS has become commonplace, a broad range of modifications to the original algorithm has appeared – most of which are available as standard in most DCS. However, industry has yet to appreciate the value of these modified versions and is still largely using the standard version. Further, the majority of controller tuning is completed by trial and error. This is not only time-consuming but often results in poorly tuned controllers.

If presented on site, this module benefits greatly from applying what is covered to real controllers. As well as having an immediate positive impact on process performance, the improvement is apparent to operations personnel who will then be strong supporters of further improvements. The module would be extended by one or two days if this approach is taken.

This module aims to develop an appreciation of the full range of PID controllers and provide a method by which their tuning can be quickly optimised. It is broken into six sessions, each separated by practical work in which the students apply what they have learnt to a simulation of a fired heater.

It starts with a definition of the basic terms, e.g. set-point (SP), process variable (PV), error (E) and manipulated variable (MV). It defines feedback and stresses its importance. It describes the symbology of process control drawings – both block diagrams and P&I style drawings.

It defines what is meant by on-off control and explains why this has limited application to most process industries.

It then moves on to the development of the PID algorithm. This is developed first as proportional-only controller – defining what is meant by controller gain, proportional band, normal/reverse action, full-position/incremental forms and manual reset. It shows that the main purpose of proportional action is to generate a “proportional kick” in response to SP changes. It describes how the problem of “offset” arises and how it might be addressed.

Integral action is then introduced and defined in both analog and digital terms. Its main purpose is explained. The meaning of the integral tuning constant is described and how it may be quantified in minutes, seconds or repeats/minute. Its effect on controller performance is shown.

Derivative action is added, first to a proportional-only controller. Again it is presented in analog and digital form, and its main purpose explained. Similarly its tuning constant is presented in units of both time and reciprocal time. The benefit of applying derivative action to processes with long deadtimes is explained. The negative effect of applying it where measurement noise is present is described and possible solutions presented. The impact that analog-to-digital conversion has on the applicability of derivative action is described.
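For readers who prefer to see the algorithm written out, here is a minimal sketch of a discrete, non-interacting PID in its incremental ("velocity") form with the derivative acting on PV rather than on error; the class name, tuning values and scan interval are assumptions, not any vendor's implementation.

```python
# Illustrative sketch only: discrete non-interacting PID, incremental form,
# derivative on PV. Times are in seconds.

class IncrementalPID:
    def __init__(self, kc, ti, td, dt):
        self.kc, self.ti, self.td, self.dt = kc, ti, td, dt
        self.prev_error = 0.0
        self.prev_pv = None
        self.prev_prev_pv = None

    def update(self, sp, pv):
        error = sp - pv
        if self.prev_pv is None:
            self.prev_pv = self.prev_prev_pv = pv
        # incremental form: returns the change in controller output this scan
        d_mv = self.kc * (
            (error - self.prev_error)                                          # proportional action
            + (self.dt / self.ti) * error                                      # integral action
            - (self.td / self.dt) * (pv - 2 * self.prev_pv + self.prev_prev_pv)  # derivative on PV
        )
        self.prev_error = error
        self.prev_prev_pv, self.prev_pv = self.prev_pv, pv
        return d_mv

# usage: the output change is added to the current MV and clamped each scan
pid = IncrementalPID(kc=0.8, ti=120.0, td=15.0, dt=2.0)
mv = 50.0
mv = min(max(mv + pid.update(sp=175.0, pv=172.4), 0.0), 100.0)
```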

Once the students have understood the theory of PID control they are given the opportunity to tune a controller on a simulated process. This is first done by trial and error to help demonstrate how time-consuming this can be. It also enables the student to develop a benchmark performance against which other tuning methods may be evaluated.

Up to this point only a fast approach to SP has been addressed. The module then moves on to the effect that this may have on the MV and describes how good tuning is usually a compromise between quickly reducing controller error and not excessively overshooting the steady-state MV required to eliminate the error. It identifies the circumstances under which a problem may arise with rapid changes in MV and how this may be dealt with.

The module then moves on to the multiple versions of the PID algorithm that are in common use. It first describes the difference between the “parallel” or “non-interactive” and the “series” or “interactive” versions and the effect that the choice has on controller tuning and tuning method.

It then covers the “derivative on PV” version of the algorithm, describing how this is important in eliminating the derivative spike that arises from the use of the traditional version of the algorithm.

It also describes the “proportional on PV” algorithm. It shows why this version of the algorithm should be used in most situations since it can be tuned to respond to process disturbances much more effectively than the traditional version.

At this stage the module moves on to tuning methods. Firstly tuning criteria are described - including quarter-decay, IAE, ISE, ITAE and Lambda.

Of the several hundred published methods, some specimen examples are covered. These include Ziegler-Nichols (both closed loop and open loop), Cohen-Coon, Smith-Murrill and IMC (also known as Lambda or Direct Synthesis). The module also includes tuning charts and the use of tuning optimisation software – both developed by Whitehouse.
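To make the idea concrete, the sketch below applies two of the published rules mentioned above – the open-loop Ziegler-Nichols rule and IMC (Lambda) PI tuning – to an assumed set of first-order-plus-deadtime parameters. The formulas are the commonly quoted textbook versions and are shown only for illustration, not as the course's recommended settings.

```python
# Quick sketch of two published tuning rules applied to FOPDT parameters
# (gain kp, deadtime theta, lag tau).

def ziegler_nichols_open_loop_pid(kp, theta, tau):
    kc = 1.2 * tau / (kp * theta)
    return {"Kc": kc, "Ti": 2.0 * theta, "Td": 0.5 * theta}

def imc_pi(kp, theta, tau, lam):
    """IMC / Lambda tuning for a PI controller; lam is the desired closed-loop lag."""
    kc = tau / (kp * (lam + theta))
    return {"Kc": kc, "Ti": tau}

# example: gain 2, deadtime 1 min, lag 3 min, lambda chosen equal to the lag
print(ziegler_nichols_open_loop_pid(2.0, 1.0, 3.0))
print(imc_pi(2.0, 1.0, 3.0, lam=3.0))
```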

The student is then given the opportunity to apply each of the methods and draw conclusions about their limitations. In particular students evaluate how well each method adapts to digital control, how they take account of both PV and MV, how they apply to DCS-specific versions of the algorithm and whether they work well for both SP and load changes.

The module also describes the range of adaptive tuning methods that can be applied – such as scheme configuration changes, gain scheduling, PV linearisation and self-tuners. It also describes the commonly available computer-based or DCS-based tuning methods.

Finally a number of key design issues are addressed - such as controller initialisation, cascade control, controller interactions, valve positioner calibration, split-range valves, dual-acting valves and anti-reset windup (both in cascaded controllers and overrides).

tuning by trial and error
tuning criteria
published tuning methods
tuning for setpoint and load changes
use of proportional on PV algorithm
manipulated variable response
cascade control
split-ranging and dual acting control
anti-reset windup

 

Signal Conditioning
3 to 4 hours
linearisation

A common problem in industry is the over-use of measurement filtering. Filters are often installed merely because the measurement looks noisy. Filters change the apparent process dynamics; even if this is taken account of in the controller tuning, the control performance can never be as good as it would be with a noise-free measurement.

This module aims to address this and many similar problems caused by not fully appreciating the true purpose of signal conditioning. It comprises an introductory lecture followed by a lengthy hands-on session for the students.

It begins by describing the range of calculations that may be applied to process measurements to make them more suitable for control. These include linearisation, pressure compensation, flow compensation and filtering.

The linearisation examples included are the conditioning of signals from instruments that are inherently non-linear and from processes that behave in a non-linear manner.

The pressure compensation examples included are its application to steam drum level indicators, distillation column tray temperatures and gas flow measurements.

Special attention is paid to the compensation of fuel gas flows that must take account of the effect that composition has not only on density but also on heating value.
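A hedged sketch of the density part of this correction is shown below: an orifice-type meter calibrated at design conditions reads incorrectly when pressure, temperature or molecular weight move away from design, and the mass flow can be corrected with the ideal gas density ratio. All design values in the example are hypothetical.

```python
# Sketch of fuel gas flow compensation for an orifice (DP) meter, plus a simple
# fired duty signal. Pressures and temperatures must be absolute.
import math

def compensated_mass_flow(indicated_flow, p_abs, t_abs, mw,
                          p_design, t_design, mw_design):
    """Correct an indicated mass flow for operation away from design conditions."""
    density_ratio = (p_abs * mw * t_design) / (p_design * mw_design * t_abs)
    return indicated_flow * math.sqrt(density_ratio)

def heat_release(mass_flow, net_heating_value):
    """Fired duty signal: mass flow multiplied by the (possibly inferred) NHV."""
    return mass_flow * net_heating_value

# example: meter designed for 4.0 bara, 320 K, MW 18; gas now at 4.4 bara, 330 K, MW 22
flow = compensated_mass_flow(1000.0, 4.4, 330.0, 22.0, 4.0, 320.0, 18.0)
print(flow, heat_release(flow, net_heating_value=46.5))  # NHV e.g. in MJ/kg
```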

The module then focuses on the application of filtering techniques to remove noise from measurements. Firstly, sources of noise are explained and methods of characterising noise are presented. The importance of eliminating, if possible, noise at source is stressed and possible methods described.

A range of filters is then presented, including first-order exponential, rate of change, non-linear exponential, averaging and least squares. The properties of each are described in terms of noise reduction and base signal distortion. The students are given the opportunity to apply each of the filters and tune them to optimise their performance. Results are compared and the most effective filter selected.
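As a simple illustration of the first of these, the sketch below applies a first-order exponential filter to a short series of samples; the filter time, scan interval, discretisation and data are assumptions.

```python
# Minimal sketch of a first-order exponential filter, the most common DCS filter.
import math

def exponential_filter(raw_values, filter_time, dt):
    """y_k = a*y_{k-1} + (1 - a)*x_k, with a = exp(-dt / filter_time)."""
    a = math.exp(-dt / filter_time)
    filtered = []
    y = raw_values[0]
    for x in raw_values:
        y = a * y + (1.0 - a) * x
        filtered.append(y)
    return filtered

# a heavily filtered noisy signal; note the added apparent lag
print(exponential_filter([50.2, 49.7, 50.4, 55.0, 54.6, 55.3], filter_time=30.0, dt=2.0))
```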

The students then learn the effect that the implementation of filtering can have on process dynamics and hence on controller tuning.

pressure compensation of distillation temperatures
dealing with steam drum "swell"
gas flow compensation
heating value compensation
filtering noise
impact on controller tuning

 

Level Control
4 to 5 hours
importance of correct level control

Poorly tuned level controllers are probably the most common control problem on many processes. The aim of this module is to demonstrate that the correct tuning is simple to determine and that it can have a major impact on process stability. It comprises two lectures, each followed by student exercises on a simulated feed surge drum.

As with the module covering PID Control, this module benefits greatly if presented on site and real controllers used for some of the exercises. This would extend the module by about a day but will result in an immediate improvement to process stability.

It begins by describing why level control should be treated differently from other controllers. It explains that level is a non-self-regulating or integrating process and why it is often more important to maintain a steady downstream flow than it is to tightly control level.

It gives examples of the type of processes where level should be tightly controlled and those where “averaging” control is required. It shows the circumstances under which level should be controlled by a cascade to a flow controller rather than direct to a valve.

Tuning calculations, based only on vessel dimensions and instrument ranges, are derived from first principles for both tight and averaging control. An alternative method, based on plant testing, is also given.
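As a hedged illustration of this type of calculation (not necessarily the derivation used in the module), the sketch below tunes an averaging PI level controller from vessel working volume, instrument ranges and an allowed level deviation, assuming the level behaves as a simple integrating process and the manipulated flow is linear over its range.

```python
# Sketch of level controller tuning from vessel dimensions and instrument ranges.

def integrating_process_gain(flow_span_m3_per_min, working_volume_m3):
    """Rate of level change, in % of range per minute, per % change in flow."""
    return 100.0 * (flow_span_m3_per_min / 100.0) / working_volume_m3

def averaging_pi_tuning(max_flow_disturbance_pct, max_level_deviation_pct, ki):
    """Choose Kc so the worst flow imbalance just uses the allowed level deviation,
    then set Ti for a critically damped (non-oscillatory) return to set-point."""
    kc = max_flow_disturbance_pct / max_level_deviation_pct
    ti = 4.0 / (kc * ki)       # minutes
    return kc, ti

ki = integrating_process_gain(flow_span_m3_per_min=20.0, working_volume_m3=40.0)
print(averaging_pi_tuning(max_flow_disturbance_pct=30.0, max_level_deviation_pct=25.0, ki=ki))
```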

Special purpose algorithms such as error squared, non-linear and gap control are explained and tuning methods given.

The student is then given the opportunity, on the simulated process, to design and implement both tight and averaging level controllers – comparing the performance of the various algorithms. The student then explores the effect of measurement noise and the use of filtering, again for both tight and averaging control.

Comparison is then made between self-regulating and non-self-regulating processes and the impact this has on tuning calculations. Explanations are given on how to adapt commonly used tuning methods (such as Ziegler-Nichols and IMC) to non-self-regulating processes. The student is given the opportunity to explore the use of these.

The impact of non-linearity introduced by horizontal drums, spherical drums and vessel internals is explained.

tight versus averaging control
determining vessel working volume
tuning methods
error squared algorithm
gap control
linearity
problem of noise

 

Feedforward Control
4 to 5 hours
use and advantages

Feedforward control can have a major impact on process stability if there are frequent measurable disturbances. However, it is also extremely beneficial on processes that have a high turndown ratio – a benefit that is frequently overlooked.

The module comprises a lecture followed by hands-on work extending the feedback controller, already developed on the fired heater, to include feedforward control. It begins with an explanation of the difference between feedback and feedforward control – based on a simple mixing process. It defines what is meant by the "disturbance variable" (DV) and explains the benefits of feedforward control along with its limitations.

Ratio and bias algorithms are explained and their use in feedforward control strategies described. Examples used include dual fuel firing and three-element steam drum level control.

The impact of process dynamics is described and guidelines presented on when dynamic compensation should be included in the feedforward controller. Deadtime and lead-lag algorithms are explained and the impact of their tuning constants demonstrated. A tuning method for the dynamic compensation is developed, for a first order process, from first principles.
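The sketch below shows one way the deadtime and lead-lag blocks described above might be realised in discrete form; the parameter values, class name and first-order discretisation are assumptions for illustration only.

```python
# Illustrative sketch of feedforward dynamic compensation: deadtime followed by
# a lead-lag, (lead*s + 1)/(lag*s + 1).
from collections import deque
import math

class DeadtimeLeadLag:
    def __init__(self, deadtime, lead, lag, dt):
        self.buffer = deque([0.0] * max(int(round(deadtime / dt)), 1))
        self.lead, self.lag, self.dt = lead, lag, dt
        self.filtered = 0.0

    def update(self, dv_change):
        """dv_change: deviation of the disturbance variable from its normal value."""
        self.buffer.append(dv_change)
        delayed = self.buffer.popleft()                          # deadtime block
        a = math.exp(-self.dt / self.lag)
        self.filtered = a * self.filtered + (1 - a) * delayed    # first-order lag
        # lead-lag realised as a blend of the delayed signal and its lagged version
        return (self.lead / self.lag) * delayed + (1 - self.lead / self.lag) * self.filtered

ff = DeadtimeLeadLag(deadtime=60.0, lead=120.0, lag=90.0, dt=2.0)
print([round(ff.update(1.0), 3) for _ in range(40)][::10])  # response to a unit DV step
```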

The effect of measurement noise introduced by the DV is shown, along with how the dynamic compensation may need to be adjusted to take account of any filtering.

The effect of higher process orders is explained and methods given for modifying the calculated tuning constants for dealing with the inaccuracies introduced.

The potential for increased MV overshoot is described and a method given for modifying the dynamic compensation to limit the overshoot and accept a slower return to SP.

An explanation is given as to why the feedback controller tuning must be changed following the implementation of feedforward control, and methods given for calculating the revised tuning.

The student is then given the opportunity to design and implement a feedforward/feedback controller.

The wrap-up session describes some of the less obvious benefits and shows how feedforward control can dramatically improve the performance of processes that operate with high turndown ratios.

ratio and bias algorithms
types of decoupler
tuning feedforward controller
impact on feedback controller
compensation for changing process gain
dealing with noise
manipulated variable movement

 

Deadtime Compensation
4 to 5 hours
use of predictive techniques

It is common for parts of a process to show dynamics where the deadtime is significantly larger than the lag. Under these circumstances derivative action becomes important, but tuning by trial and error becomes increasingly time-consuming and the opportunity for improved performance can easily be overlooked. For longer deadtimes it is likely that special-purpose algorithms will show a substantial improvement over PID control.

This module comprises a relatively short lecture followed by a lengthy practical exercise on a simulated reactor.

Firstly the problem of applying conventional PID control to a process with a large deadtime is described. The student explores this by applying what was learnt on the Basic Control course. The circumstances under which deadtime compensation algorithms should be considered are defined, as are the situations in which the technique can cause problems.

Several techniques are presented in detail - including reset feedback delay, the Smith predictor, dynamic reconciliation, the Dahlin algorithm and internal model control (IMC). Each shows a different approach to predicting the future behaviour of the process, enabling much faster controller tuning.
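Of the techniques just listed, the Smith predictor is perhaps the easiest to show in outline. The sketch below wraps a PI controller around a first-order-plus-deadtime internal model; the model parameters, tuning, class name and scan interval are assumptions, and deviation variables are used throughout.

```python
# Minimal Smith predictor sketch with an embedded PI controller.
from collections import deque
import math

class SmithPredictorPI:
    def __init__(self, kp, theta, tau, kc, ti, dt):
        self.kp, self.tau, self.dt = kp, tau, dt
        self.kc, self.ti = kc, ti
        self.delay = deque([0.0] * max(int(round(theta / dt)), 1))
        self.model_fast = 0.0      # internal model output without deadtime
        self.integral = 0.0
        self.mv = 0.0

    def update(self, sp, pv):
        # advance the internal model using the last controller output
        a = math.exp(-self.dt / self.tau)
        self.model_fast = a * self.model_fast + (1 - a) * self.kp * self.mv
        self.delay.append(self.model_fast)
        model_delayed = self.delay.popleft()
        # the controller acts on the deadtime-free prediction, corrected by the
        # mismatch between the plant and the delayed model
        feedback_pv = self.model_fast + (pv - model_delayed)
        error = sp - feedback_pv
        self.integral += error * self.dt / self.ti
        self.mv = self.kc * (error + self.integral)
        return self.mv

spc = SmithPredictorPI(kp=1.5, theta=120.0, tau=60.0, kc=0.6, ti=60.0, dt=5.0)
print(spc.update(sp=2.0, pv=0.0))   # SP step of 2 units in deviation variables
```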

The student then designs and implements each of the techniques – exploring their advantages and disadvantages. In particular this highlights the problem of discontinuous measurements (such as those from on-stream analysers) that give problems with long deadtime processes. It also addresses the robustness of each technique when deadtime changes, for example as a result of feed rate changes.

The student also has the opportunity to explore the problems of tuning an inherently unstable process in the form of an exothermic reactor.

reset feedback delay
Smith Predictor
Internal Model Control (IMC)
Dahlin Algorithm
tuning
impact of modelling error
limitations

 

Non-linear Control
4 to 5 hours
limitations of linear algorithms

On processes where the process gain can vary by more than ±20%, the controller tuning will often need to be changed as conditions change. Failure to do so will result in much reduced process stability. If the change in gain is substantially greater, stable control may not be possible. This module aims to show the problem and offer a number of potential solutions.

It starts with a practical session where the student applies the techniques covered by the Basic Control course. This aims to demonstrate the problems of applying PID control to a highly non-linear process. The student works on a simulation of an effluent treatment system where pH must be controlled. However the techniques covered may be applied to similarly non-linear processes.

The student is then guided through the implementation of a "gain scheduling" approach to the problem. Once implemented, the controller is subjected to a series of disturbances – including changes in effluent temperature, process upsets that change the untreated effluent pH and changes in the pH of the neutralising chemical.

A lecture then develops a number of more rigorous approaches. It starts with first principles – defining Kw and pH. It then explains the non-linearity of the process.

It then moves on to techniques that predict the process gain. These are developed from simple material and ionic balances. The technique is first developed to give an on-line measurement of process gain. The student then implements an adaptive controller that uses this value to continuously update the controller tuning to maintain a constant loop gain.
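A hedged sketch of the gain-prediction idea for the simplest case – a strong acid neutralised by a strong base – is shown below: the titration curve is computed from the ionic balance, its local slope gives the process gain, and the controller gain is rescaled to hold the loop gain constant. The concentrations and the target loop gain are hypothetical.

```python
# Sketch of adaptive gain for pH control of a strong acid / strong base system.
import math

KW = 1.0e-14  # water ionic product at 25 degC

def ph_from_excess_acid(x):
    """x = excess strong acid concentration (mol/L); negative means excess base."""
    h = (x + math.sqrt(x * x + 4.0 * KW)) / 2.0
    return -math.log10(h)

def process_gain(x, dx=1.0e-7):
    """Local slope of the titration curve: pH change per unit change in excess acid."""
    return (ph_from_excess_acid(x + dx) - ph_from_excess_acid(x - dx)) / (2.0 * dx)

def adapted_controller_gain(x, target_loop_gain=0.5):
    """Rescale the controller gain so that Kc * |process gain| stays constant."""
    return target_loop_gain / abs(process_gain(x))

for x in (1e-3, 1e-5, 0.0, -1e-5, -1e-3):
    print(round(ph_from_excess_acid(x), 2), adapted_controller_gain(x))
```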

The technique is further developed to generate a signal conditioning function which linearises the PV. The student is given the opportunity to apply this with conventional PID control.

gain scheduling
programmed adaptive control
process variable linearisation
application to pH control

 

Constraint Control
8 to 10 hours
types of constraint

Constraint control, on continuous plant, is usually the major source of benefits. This module covers those forms of constraint control that may be implemented using standard DCS algorithms – so called "traditional" control. For more complex problems the most practical solution is the use of a proprietary multivariable control package. This technology is covered as a separate module, although a brief introduction is included here.

This comprises three lectures separated by practical work. It begins by listing typical applications of constraint control. It then defines “hard” and “soft” constraints, giving examples and explaining how they must be handled differently. It then moves on to describe the three types of constraint control problem – single input/single output (SISO), multi-input/single output (MISO) and multi-input/multi-output (MIMO). Examples are given of each type.

The module briefly addresses the use of steady-state techniques and shows how they are of limited value, explaining why they have now been surpassed by dynamic techniques. It then addresses the use of PID control, first looking at the difference between "full position" and "velocity" forms of the algorithm and the impact these have on the design of constraint control.

It introduces the signal override algorithms – low signal select (LSS) and high signal select (HSS), explaining how they work – in particular how they prevent wind-up in the unselected path.
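The selector logic itself is simple; the sketch below illustrates a low signal select choosing between a feed controller and a constraint controller, plus the multi-input (MISO) extension in which the most limiting of several constraint controllers wins. Tags and values are hypothetical, and in a DCS the unselected controller would additionally receive external reset feedback to prevent wind-up.

```python
# Simplified sketch of override logic built from signal selects.

def low_signal_select(feed_controller_output, constraint_controller_output):
    """Return the selected valve signal and which path is active."""
    if constraint_controller_output < feed_controller_output:
        return constraint_controller_output, "constraint"
    return feed_controller_output, "feed"

def most_limiting(mv_targets):
    """Each constraint controller computes the MV that would just honour its own
    limit; the lowest (most limiting) target is sent to the valve."""
    tag = min(mv_targets, key=mv_targets.get)
    return tag, mv_targets[tag]

print(low_signal_select(72.0, 64.5))   # the constraint controller takes over at 64.5%
print(most_limiting({"heater firing": 78.0, "reactor dP": 65.5, "feed maximum": 90.0}))
```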

The students are then given the opportunity to implement a capacity utilisation control strategy operating against a single constraint on the fired heater. First they must ensure that the basic controls developed earlier in the course are properly implemented.

By applying design methods learnt on the Basic Control course, the student implements a strategy based on a PID controller to maximise feed rate to the heater limit.

A brief lecture then addresses the multi-input case, handing over to the students who then add a further constraint to the design – a hydraulic constraint on the reactor. Their design must now select the more limiting constraint. Two techniques are applied – one based on a single PID controller, the other based on multiple controllers. Their relative performance is compared and conclusions drawn about which is the more appropriate under differing circumstances.

The problem is then expanded to permit a second variable to be manipulated, i.e. reactor conversion. Changing conversion affects both the heater firing and reactor hydraulic constraints, resulting in a highly interactive control problem. A brief lecture describes how the two variables may be decoupled so that the problem can be treated as two separate constraint controllers. The student is given full guidance on how to tune both the decouplers and the controllers.

A third manipulated variable is introduced – reactor pressure. While not affecting heater firing it does have an impact on the reactor hydraulic limits. It therefore introduces a degree of freedom into the problem allowing some level of optimisation. The student first manually identifies the optimum operating conditions and determines the effect on these of changing process economics.

The student then, with guidance, applies “constraint conditioning” which enables the hard constraints to be approached more closely without risk of sustained violation.

It is possible to implement a true multivariable controller. While there is not time during a standard course to design and implement this technique, it is demonstrated to show its advantages over the DCS approach. As part of in-company courses the module can be extended to include multivariable control in more detail. After covering the general principles the student designs and implements a non-proprietary controller and explores how it manages changes in operating strategy and process economics.

The module concludes with a summary of the main multivariable control packages, including their history and market penetration.

PID based techniques
single input, single output controllers
multi-input, multi-output controllers
use of signal selectors
incremental versus full position algorithms
2x2 decoupling
multivariable techniques

 

Inferential Properties
16 to 20 hours
why inferentials are needed

Inferential properties, also known as 'soft sensors' or 'virtual analysers', offer the opportunity to quickly and reliably detect a change in property and so permit effective control. While they are mainly applied for product quality control, other applications include the measurement of parameters such as catalyst activity, exchanger fouling, reactor severity etc. The majority of benefits captured by improved process control depend on effective inferentials.

The module starts by explaining how the dynamic advantage of inferential properties can be used to dramatically reduce off-specification production - even if effective on-stream analysers are already in place. It then sets out to answer the key questions. Should regression or first principle models be used? Does sufficient good quality data exist to support the development of inferentials? Is the inferential sufficiently accurate? Should laboratory updating be applied? Should a specialist supplier be used? Should they be built in the DCS or in a special-purpose package (AspenIQ, ProfitSensor, RQE)? If an inferential proves infeasible, what additional measurements should be installed? Delaying answering these questions until after the APC contract is awarded jeopardises benefit capture.

The myths perpetuated by the suppliers of both regression-based inferentials (including artificial neural networks) and first-principle types are described. A reasoned approach is presented as to which technology should be chosen for each case and how external suppliers might be involved.

The principles of OLS (ordinary least squares) regression analysis are explained. This includes the choice of penalty function minimised by regression and the use of Pearson R to assess the accuracy of the resulting correlation. The student is presented with a case study aimed at assessing different penalty functions and identifying the limitations of Pearson R. This is followed by a second case study demonstrating the advantages of using the adjusted version of Pearson R in assessing how many sets of historical data are required and how many inputs should be used. Other techniques, such as the Akaike Information Criterion and the Box-Wetz Ratio, are explained and applied.
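A compact sketch of the calculation is shown below: an OLS fit of a hypothetical inferred property against two assumed inputs, reporting R2 and the adjusted R2 that penalises adding inputs of little value. The data are synthetic.

```python
# Sketch of OLS regression for an inferential property, with R2 and adjusted R2.
import numpy as np

def ols_inferential(X, y):
    """Fit y = b0 + b1*x1 + ... by ordinary least squares."""
    n, k = X.shape
    A = np.column_stack([np.ones(n), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    residuals = y - A @ beta
    ss_res = float(residuals @ residuals)
    ss_tot = float(((y - y.mean()) ** 2).sum())
    r2 = 1.0 - ss_res / ss_tot
    r2_adj = 1.0 - (1.0 - r2) * (n - 1) / (n - k - 1)
    return beta, r2, r2_adj

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))                    # e.g. a tray temperature and a pressure
y = 5.0 + 2.0 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.2, size=50)
print(ols_inferential(X, y))
```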

Issues concerned with the quality of the input data are then addressed, supported by a range of student exercises. This includes how the level of 'scatter' impacts the level of confidence in the resulting correlations, problems associated with data not being collected under steady-state conditions and the importance of accurately time-stamping of measured property. Students are given the opportunity to develop dynamically compensated inferentials.

An effective performance index is then described. Students learn through further case studies how it can be used to assess whether a new inferential is sufficiently accurate, how the index can be incorporated into ongoing monitoring and how it helps in assessing the benefit captured.

A number of real case studies are then presented demonstrating how process engineering knowledge should be included in regressed inferentials. Examples include the derivation of linear and non-linear pressure compensated temperatures for use in the control of distillation columns. This is extended to the use of multiple tray temperatures. The use of WLS (weighted least squares) regression is described as a means of dealing with suspect measurements. Other case studies show how changes in operating mode or feed type can be incorporated. Automatic bias updating is covered, using a reactor-based case study which is then extended to show how weighted temperatures, space velocity and catalyst activity can be incorporated to make updating redundant.

regressed versus 'first principle' models
regression techniques
data requirements
handling process dynamics
incorporating process engineering knowledge
bias updating
measuring performance

 

Statistics for Control Engineers
24 to 30 hours
central value

Perhaps more than any other engineering discipline, process control engineers make extensive use of statistical methods. Embedded in proprietary control design and monitoring software, the engineer may not even be aware of them. The purpose of this module is to draw attention to the importance of statistics throughout all stages of implementation of improved controls – from estimation of the economic benefits, throughout the design phase, ongoing performance monitoring and fault diagnosis.

The module starts by explaining the central tendency of data. In particular it addresses the importance of accurately determining the mean, since this forms the basis of many statistical calculations. For example, following implementation of a control improvement, small errors in the estimate of the before and after values will result in a major error in estimating the improvement. Further, any error in the estimate of the mean will result in overestimating parameters such as standard deviation. In addition to the conventional arithmetic mean, uses of other versions (such as the harmonic mean, geometric mean and logarithmic mean) are described in detail. Other measures of central value, including median and mode, are addressed, along with the different ways in which quartiles may be determined.

Moving on to data dispersion, the difference between sample and population is explained and the impact it has on the calculation of variance and standard deviation. Other measures of dispersion are also covered, including interquartile range, deciles, centiles and mean absolute deviation. Variance is an example of a moment. Others include skewness and kurtosis. While their numerical value might be of lesser importance, their role in properly fitting a statistical distribution to process data is explained. Covariance, as a mixed moment, is explained and extended to define the correlation coefficient (Pearson R2). Engineers frequently use the terms accuracy and precision interchangeably. The difference between these measures is explained, as is the role of each in assessing the reliability of process measurements and inferential properties.

Rather than calculating key statistical parameters (such as mean and variance) from the process data, the preferred method of fitting a distribution to the data is covered. Probability density, probability mass, cumulative distribution and quantile functions are covered as means of describing the chosen distribution. The histogram, kernel density function and empirical distribution function are covered as ways of describing the actual distribution. As examples the module includes the uniform and triangular distributions, showing how they can be fitted to both continuous and discrete data - then moving on to the normal (Gaussian) distribution. While not going into the mathematical detail, the Central Limit Theorem is introduced to explain why many process datasets are normally distributed. Techniques for fitting the chosen distribution to the data are covered, showing the improvement given in estimating mean and variance.
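As a hedged illustration of fitting rather than simply calculating, the sketch below fits a normal distribution to synthetic data by regressing the sorted values against theoretical normal quantiles (the idea behind a Q-Q plot) and compares the result with the sample mean and standard deviation. It is not presented as the module's preferred technique.

```python
# Sketch of fitting a normal distribution via quantile regression (Q-Q idea),
# using only the Python standard library; the data are synthetic.
import statistics
import random

random.seed(1)
data = sorted(random.gauss(100.0, 5.0) for _ in range(200))
n = len(data)

# theoretical standard normal quantiles at plotting positions (i + 0.5) / n
q = [statistics.NormalDist().inv_cdf((i + 0.5) / n) for i in range(n)]

# a least squares fit of data = mu + sigma * q gives the fitted mean and standard deviation
fit = statistics.linear_regression(q, data)
print("sample:", statistics.mean(data), statistics.stdev(data))
print("fitted:", fit.intercept, fit.slope)
```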

A common error is the assumption that all process data are normally distributed. Reasons for this are covered and alternative distributions (such as lognormal) are described. Techniques for ensuring that the correct distribution is chosen and properly fitted are included. These include probability-probability (P-P) and quantile-quantile (Q-Q) plots.

The concepts of the null hypothesis and confidence interval are introduced. Their use in assessing the reliability of laboratory results and process measurements is described. Methods of identifying outliers are shown to be unreliable and ways of making them unnecessary are covered.

Moving on to regression analysis, the principles are described and a worked example, covering the development of an inferential property, completed. Several methods (such as the F test, Akaike Information Criterion and adjusted R2) are applied to identify whether the inclusion of additional inputs is justified.

Issues arising from the sample size are covered. These include assessing the limitation that the sample size imposes on the accuracy of the statistical parameters and identifying the minimum sample size required. Worked examples are included of techniques specifically designed for small samples, such as the Student t distribution.
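A small worked example of the small-sample idea is sketched below: a 95% confidence interval for a mean from six made-up laboratory results, using the Student t distribution (SciPy is assumed to be available for the critical value).

```python
# Sketch: confidence interval for a mean from a small sample using Student t.
import math
from scipy import stats

lab_results = [94.8, 95.6, 95.1, 94.3, 95.9, 95.0]
n = len(lab_results)
mean = sum(lab_results) / n
s = math.sqrt(sum((x - mean) ** 2 for x in lab_results) / (n - 1))   # sample std dev
t_crit = stats.t.ppf(0.975, df=n - 1)          # two-sided 95% critical value
half_width = t_crit * s / math.sqrt(n)
print(f"mean = {mean:.2f} +/- {half_width:.2f}")
```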

Many statistical studies are concerned with assessing the probability of extreme process behaviour. Because relatively little historical data exist in this region, conventional fitting of a distribution function can be very unreliable. Improved extreme value analysis techniques are applied to assessing the probability of a hazardous situation arising on a distillation column.

The concept of memory in process data is introduced. In many cases the probability of an event is not constant but changes, for example, over time. Techniques are covered which identify whether this is occurring. Examples of their application include assessing the mean time between failures (MTBF) and the accumulation of bias error in process measurements and inferentials.

Throughout this module, in addition to applying the Statistics modules in Whitehouse's Process Control Toolkit, all the relevant Excel functions are described and the student given the opportunity to apply them.

dispersion
moments
correlation
data conditioning
distribution function
confidence interval
outliers
sample size
extreme value analysis
memory

 

Optimisation
8 to 10 hours
economic justification

Closed loop real-time optimisation (CLRTO) can be an expensive technology. There are many examples of installed optimisers not justifying their investment. Care must be taken to ensure that sufficient benefits exist and that these cannot largely be captured by a less costly approach. The aim of this module is to develop within the student an awareness of the work involved in implementing and supporting CLRTO and what alternative approaches may exist.

It starts by distinguishing between true optimisation opportunities and those where constraint control would achieve most, if not all, of the potential profit improvement. It describes situations in which degrees of freedom may exist. It then identifies the key considerations for determining the benefits available. Using the simulated process the student is set the task of identifying the optimum operation and exploring how this changes as process constraints and economics change. From this study the student then estimates the return on investment.

The key parts of the optimiser structure are then described - beginning with steady state detection. It then moves on to how model updating is performed, identifying which parameter estimates can effectively be updated and those where inaccuracies cannot be resolved by reference to process conditions. It shows how inaccuracy can result in the apparent optimum being displaced away from the true one.
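The sketch below illustrates one common style of steady-state test – comparing the variance of a recent data window about its mean with the variance implied by successive differences – with the window length and acceptance ratio chosen arbitrarily. It is an illustration only, not the structure of any particular optimiser.

```python
# Sketch of a simple steady-state detection test of the kind run before an
# optimiser executes.
import numpy as np

def is_steady(window, ratio_limit=2.0):
    """Ratio near 1 suggests steady operation; large ratios suggest drift or a
    recent disturbance."""
    x = np.asarray(window, dtype=float)
    var_about_mean = x.var(ddof=1)
    var_from_differences = np.diff(x).var(ddof=1) / 2.0
    r = var_about_mean / var_from_differences
    return r < ratio_limit, r

print(is_steady([50.1, 49.8, 50.2, 50.0, 49.9, 50.1, 50.3, 49.7]))   # steady
print(is_steady([50.0, 50.6, 51.2, 51.9, 52.4, 53.1, 53.8, 54.2]))   # ramping
```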

The module then describes how the optimiser integrates with constraint control, so that consistency is maintained between the operating conditions targeted by both technologies. It then moves on to output conditioning - showing, once the optimum operating conditions are known, how they are best approached.

The student is then given the opportunity to develop the open equations that describe the behaviour of the process and combine these to generate the process model. The model is then commissioned on the simulated process, model update is then set up and model accuracy validated. The practical work is then extended to the output conditioning and steady state detection. The student then explores how the optimiser responds to changes in feedstock availability, market demand, feedstock cost, product price and operating problems.

As an alternative, the user can switch to a direct search optimiser. This illustrates how it can be a much simpler, more robust approach but also what effort is required to commission it successfully.

A brief wrap-up lecture describes the main technologies available in the market, giving some background on their origin and user base.

optimiser structure
steady state detection
process model development
impact of model errors
output conditioning
use with constraint control
available technologies

 

Project Execution
16 to 24 hours
how to determine the benefits of improved control

There are a large number of pitfalls at the early stages of an advanced control project. There is a high risk of overlooking something important which can later prove very costly to correct, or may cause a substantial loss of benefit. This module aims to address these issues. It also provides guidance on how to make the project a long-term success – covering items such as performance monitoring and organisational issues.

While the material typically takes two days to cover, as an option it can be extended to incorporate exercises carried out on the real plant. These might include basic controller tuning, inferential development and the installation of prototype performance monitoring tools. This approach would extend the course by a few days but results in much faster assimilation and a substantial improvement in project success. It also gives the opportunity for the tutor to draw management's attention to any specific issues for the project being considered.

It begins with a description of the process control technologies that are likely to be included – from basic control through to closed loop optimisation, describing the contribution that each can make to the overall benefits. The benefit study itself is covered - describing the statistical techniques in common use, the impact that process changes can have on the results and expertise required to execute the study.

The module then moves on to the general principles of multivariable control. It first draws comparisons with traditional advanced control techniques and their relative merits. Using a case study based on a simple distillation column, the steps involved in developing the controller are described. It then describes how the controller will perform under different conditions. It shows the effect of changing economics and the impact that the operator can have in imposing artificial constraints. It describes the problems commonly encountered with plant step-testing and the controller design. It then shows how the controller can be effectively monitored and managed; and how it can be used to quantify the benefits captured by advanced control.

The importance of the basic controls is described in detail – addressing not only their mechanical reliability but also the correct choice of control algorithm and its tuning. It describes the limitations of published tuning methods and proprietary tuning packages. It shows what can happen if basic controls are not addressed until later in the project.

The benefit of inferential properties, even if used with on-stream analysers, is described. A number of key issues are addressed, such as the availability of good quality calibration data, technology selection, vendor selection, platform, accuracy, the use of laboratory updating and the need for additional instrumentation.

The module then addresses the use of on-stream analysers, in particular ensuring their suitability for advanced control, how they should be installed, monitored and supported.

A range of performance monitoring techniques is described, covering all aspects of process control. These range from diagnostic tools to help the control engineer through to management reporting and how this can be used to increase management interest and support.

Key organisational changes, important to the success of advanced control, are described. This covers issues such as manpower requirements and the impact on other groups that will become involved.

The module concludes by covering in detail issues such as performance guarantees, vendor selection, management of the implementation phases and safety.

introduction to multivariable control and the common pitfalls
ensuring the basic controls are working well
work involved in developing inferential properties
use of on-stream analysers
monitoring the performance of all aspects of process control
organisational impact
performance guarantees
vendor selection
safety considerations
management of design and commissioning
post-commissioning work

 

Steam Boiler and Fired Heater Control
12 to 16 hours
process description

Boilers and fired heaters can offer large incentives for improved control. They are often large energy consumers and any disturbance to their operating conditions is usually propagated downstream to other processes. The opportunity can exist for significant energy savings and much improved process stability.

This module uses a simulation of two boilers – one base-loaded and the other a swing boiler. It aims to show the students the techniques that are available, their benefits and the key aspects of their implementation. It begins with a description of the process – both the firing side and that of steam generation. It also identifies the safety systems that are likely to exist and the implications that plant trip systems may have for plant testing and controller commissioning.

It then focuses on the steam drum – describing how pressure disturbances may cause “swell”, the control problem this causes and the possible solutions. It also shows how inverse response can occur. Referring to the methods covered on the Basic Control course, it looks at both tight level control and averaging control, plus the use of a three-element level controller. The student is then given the opportunity to experience the problems on a simulated boiler and evaluate the potential solutions.

The module then moves on to the fuel firing. It shows how dual oil and gas firing can be controlled. It describes the common problem of incorrectly compensating the fuel gas flows for variation in pressure, temperature and molecular weight. It also shows the techniques available for compensation for variation in composition and heating value. The student then implements a number of schemes and evaluates the performance of each.

This is followed by a session covering control of combustion air. It shows how the air requirement varies as fuel composition changes, how excess air is defined and the constraints that govern the amount of excess air. It then looks at how the benefits of reducing excess air may be assessed, and the potential problems of doing so are described. It covers the use of flue gas analysers and the cross-limiting control strategy. Again the student is given the chance to assess the benefits; and then implement and test the schemes described.
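A hedged sketch of the cross-limiting idea is shown below: on a load increase air is raised before fuel, and on a decrease fuel is cut before air, by routing the firing demand through high and low selectors. The signals are assumed to be in % of range with the air/fuel ratio handling simplified, so this is an illustration rather than a complete scheme.

```python
# Simplified cross-limiting (lead-lag firing) selector logic.

def cross_limited_setpoints(firing_demand, fuel_flow_pv, air_flow_pv):
    """Return the set-points for the fuel and air flow controllers."""
    fuel_sp = min(firing_demand, air_flow_pv)   # fuel may not exceed the available air
    air_sp = max(firing_demand, fuel_flow_pv)   # air may not fall below the actual fuel
    return fuel_sp, air_sp

# load increase: air SP rises immediately, fuel SP is held back by measured air
print(cross_limited_setpoints(firing_demand=70.0, fuel_flow_pv=60.0, air_flow_pv=61.0))
# load decrease: fuel SP falls immediately, air SP is held up by measured fuel
print(cross_limited_setpoints(firing_demand=50.0, fuel_flow_pv=60.0, air_flow_pv=61.0))
```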

A number of feedforward strategies are covered - including, as disturbance variables, feed rate and feed temperature.

A number of balancing techniques, applicable to multi-pass heaters, are described, along with the means by which their benefit might be assessed.

The module then moves on to the steam system itself, describing the thermodynamics and how these may be used to determine boiler efficiency. It then shows the impact of boiler design changes and of variations in operating conditions. These enable the student to develop some conclusions as to how to optimise the steam system.

fuel gas flow compensation
fuel gas heating value compensation
total duty control with dual firing of oil and gas
steam drum swell and inverse response
3-element steam drum level control
flue gas oxygen and CO control
cross-limiting control
feedforward on feed rate and feed enthalpy
heater pass-balancing
steam header pressure control
basic thermodynamics
steam system optimisation

 

Compressor Control
8 to 10 hours
compressor types

Compressors can be high energy consumers and therefore potentially provide an opportunity for reducing operating costs. Inefficient methods of load control can be replaced to deliver the same gas flow at lower compression cost. Excessive recycling, to avoid surge, can be reduced without jeopardising compressor reliability. The purpose of this module is to show what control strategies are possible and their impact on compressor performance.

It begins by describing compressor types, covering reciprocating and turbo-compressors. It then introduces polytropic head - explaining what it means, how it is derived and its purpose in characterising compressor performance. The compressor capacity limits of surge and stonewall are described.
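For reference, a hedged sketch of the ideal-gas form of the polytropic head calculation is shown below; the gas properties, efficiency and pressures are illustrative values only.

```python
# Sketch of the polytropic head calculation (ideal gas form).
R = 8.314  # kJ/kmol.K

def polytropic_head(t_suction_k, p_suction, p_discharge, mw, k, poly_eff, z_avg=1.0):
    """Head in kJ/kg. (n-1)/n is obtained from the isentropic exponent k and the
    polytropic efficiency; pressures may be in any consistent absolute units."""
    n_ratio = (k - 1.0) / (k * poly_eff)          # (n-1)/n
    pressure_ratio = p_discharge / p_suction
    return (z_avg * R * t_suction_k / mw) / n_ratio * (pressure_ratio ** n_ratio - 1.0)

# hydrogen-rich gas: MW 6, k = 1.4, 80% polytropic efficiency, 3:1 compression
print(polytropic_head(t_suction_k=310.0, p_suction=10.0, p_discharge=30.0,
                      mw=6.0, k=1.4, poly_eff=0.8))
```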

The module then moves on to compressor performance curves. It initially covers constant speed machines, moving on to those with variable speed and adding surge and stonewall curves. It shows how the process curve can be added to the compressor performance chart and how this determines the flow through the machine.

The case study is then introduced. It is based on a simulation of a variable speed turbo-machine compressing a hydrogen rich stream of varying molecular weight. The purpose of the control design is to deliver a variable demand for a molar hydrogen flow with minimum anti-surge recycle.

From plant tests the student develops the process curves and identifies any non-linearities that might impact on the performance of the chosen load control scheme. The "equal percentage" valve type is described and its effect, compared to that of a linear valve, is explored.

The module then describes the possible load control strategies – including discharge throttling, suction throttling, inlet guide-vanes, recycle, speed and cylinder loading. The effect of those applicable to turbo-machines is described by reference to the compressor and process curves. Using the knowledge gained from the Basic Control course, the student then implements each of them and explores their stability over the operating range, their rangeability and their impact on power consumption.

The module then moves on to anti-surge control. It gives details of a number of commonly applied methods, showing how they might be expected to perform as process conditions vary. The student then designs and implements many of the strategies and explores their effectiveness. Further details are given on how surge protection and surge recovery schemes are integrated and how they can interact with load controls. The problems involved in controlling parallel compressors are also addressed.

polytropic head
equal percentage and quick opening valves
discharge throttling
inlet guide-vanes
speed control
anti-surge and surge recovery control
multi-compressor balancing

 

Distillation Control (3 to 5 days)

This module aims to demonstrate how basic and advanced control techniques may be applied to the distillation process. It assumes that the delegate is generally familiar with the techniques covered by Whitehouse's introductory courses, although brief refresher material is included where needed. The course comprises four sub-modules as follows:

Process Technology  
mechanism of distillation

Understanding the underlying process technology is an essential first step in designing effective strategies for the control of distillation columns. Valuable opportunities may otherwise be overlooked or avoidable problems encountered. Without going into the level of detail more suited to column design, this module aims to provide the student with the basic process information needed for good control engineering.

It starts with a brief description of distillation fundamentals, such as column internals and common loading constraints. It then moves on to relative volatility – defining it and explaining the impact that it has on the number of trays in the column and on operating conditions. As a simplification it defines and uses key components, moving on later to multi-component systems.

It introduces feed "quality" as a measure of enthalpy, showing its importance in column mass balancing. It then defines "cut" and "separation", showing how each affects product composition. It describes how the true boiling point (TBP) curve may be used to represent cut and separation.

The student is then given the opportunity to work with a steady state simulation of a simple distillation column to explore how column design and operating conditions affect separation. Specifically this covers the impact of reboiler duty, number of trays, tray efficiency, position of feed tray and column pressure.

A number of commonly used short-cut modelling techniques are described as the basis for the later development of inferential properties.

vapour pressure
relative volatility
azeotropes
key components
feed quality "q"
cut and separation
impact of column design
modelling correlations
adjusting product composition

Basic Controls  
control problems

As with any process, good basic control is an essential first step in capturing the benefits that may be achieved by the later addition of more advanced controls. This module addresses the strategies fundamental to ensuring that the rules of energy and material balance are satisfied. It first lists the control objectives that must be met and describes the problems that may exist in doing so. It identifies the instrumentation essential to meeting the control objectives.

The module first deals with the control strategies that are intended to maintain the energy balance. It explains how column pressure is a good indicator of energy balance and describes the ways in which it might be controlled. It first describes ways in which pressure can be maintained by adjusting the condensation of vapour - including techniques which manipulate coolant rate, those that change the effective condenser area, the use of a vapour bypass and manipulation of coolant temperature. It presents the advantages and disadvantages of each. It specifically addresses internal reflux control, the use of flooded condensers and the problem of inverse response. It then moves on to similarly describe pressure control techniques that manipulate the flow of a vapour product and those that manipulate vapour production. It completes this section by describing a number of hybrid schemes that use split-ranging (or better) techniques to combine the schemes above to improve the operating range.

The module then moves on to schemes that maintain the mass balance across the column – showing how level controllers can be designed to meet this objective. It identifies the 20 possible schemes, showing how many can be discarded - leaving five feasible strategies. It then addresses the circumstances under which each of these strategies might be used. It specifically addresses the schemes known as "material balance" and "energy balance". The student then applies each of these schemes to the steady state simulation to show the impact that each has on maintaining product compositions close to target. It shows how reboiler duty controllers can help minimise the effect of disturbances. It also explains a number of hybrid schemes, most notably the Rijskamp scheme, again giving the student the opportunity to experiment with these.

The student then moves on to the full dynamic simulation to design and commission, using the techniques covered in the Basic Control course, the drum and column level controllers. Each is tested with a number of combinations of manipulated variables, such as distillate, reflux, reboil and bottoms flow. The student is then able to draw conclusions about the strategies most suited to any given column and whether the controllers should be tuned for tight or averaging control.

maintaining the energy balance
column pressure control
condenser duty control
internal reflux control
flooded condenser
hot gas bypass
inverse response problems
manipulation of vapour rate
use of split range control
maintaining mass balance
energy versus material balance schemes
Rijskamp scheme
overcoming reflux drum lag
tuning the drum level controller

Composition Controls  
temperature profile

Key to the control of distillation columns is maintaining product compositions at their targets. This module covers all aspects of composition control, from simple inferential techniques through to the use of on-stream analysers. It begins by presenting the general principles of adding composition controls to the basic controls already in place, referring again to the use of cut and separation as the key manipulated variables.

It explains why some form of inferential control is worthwhile even if reliable on-stream analysers are already installed. It begins with the use of tray temperature, describing the issues that can arise with this technique and how they can be best dealt with. It describes how to locate the best tray(s) for control – taking account of aspects such as sensitivity, linearity and the impact of process changes. Using the steady state simulation the student then analyses column temperature profiles and selects the most appropriate trays for control of distillate and bottoms composition. Moving on to the dynamic simulation the student, using the techniques covered on the Basic Control course, then performs plant tests for model identification and controller tuning – commissioning and testing the controllers to determine how well they maintain product compositions.

The module then moves on to the effect that varying column pressure has on the ability of tray temperature control to maintain constant composition. Pressure compensation techniques are developed based on a number of approaches - such as Antoine, Clausius-Clapeyron, Maxwell, regression analysis and plant testing. Again the student is given the opportunity to design and implement pressure compensated temperatures (PCT).
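A hedged sketch of one of these approaches is shown below: a linear pressure-compensated temperature whose slope is estimated from the Clausius-Clapeyron relation at assumed reference conditions. In practice the slope may equally come from regression or plant testing, and the reference values and heat of vaporisation used here are illustrative.

```python
# Sketch of a linear pressure-compensated tray temperature (PCT).
R = 8.314  # J/mol.K

def clausius_clapeyron_slope(t_ref_k, p_ref, dh_vap_j_per_mol):
    """Approximate dT/dP (K per pressure unit) at the reference point."""
    return R * t_ref_k ** 2 / (dh_vap_j_per_mol * p_ref)

def pressure_compensated_temperature(t_meas_k, p_meas, t_ref_k, p_ref, dh_vap_j_per_mol):
    slope = clausius_clapeyron_slope(t_ref_k, p_ref, dh_vap_j_per_mol)
    return t_meas_k - slope * (p_meas - p_ref)

# tray normally at 390 K and 2.0 bara, heat of vaporisation about 30 kJ/mol
print(pressure_compensated_temperature(t_meas_k=392.5, p_meas=2.15,
                                       t_ref_k=390.0, p_ref=2.0,
                                       dh_vap_j_per_mol=30000.0))
```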

The ideas are then extended into full inferential property calculations. A number of techniques are covered – including those based on empirical correlations, those using semi-rigorous models and the use of regression of historically collected data. The student implements each of the techniques and compares their performance. On-stream analysers are then incorporated into the inferential to provide automatic bias updating - the student having been shown how to determine suitable dynamic compensation. The use of laboratory updating is described, showing why it generally has an adverse effect on accuracy.

The schemes are then supplemented with feedforward controls. Using knowledge gained from the Basic Control course, plus refresher information, the student implements schemes that incorporate feed rate, feed enthalpy and feed composition as disturbance variables. Additional feedforward schemes, such as those for reboiler duty changes and reflux sub-cooling, are described.
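
As a sketch of the simplest of these, the example below shows a ratio-based feedforward on feed rate trimmed by the composition feedback controller; dynamic compensation is omitted for brevity and the ratio and flows are hypothetical.

# Illustrative sketch: ratio-based feedforward on feed rate, trimmed by the
# composition (feedback) controller. Lead-lag and deadtime compensation,
# covered on the Basic Control course, are omitted here.

def reboil_target(feed_rate, base_ratio, feedback_trim):
    """Feedforward sets reboil in proportion to feed; feedback trims the ratio."""
    return (base_ratio + feedback_trim) * feed_rate

# Example: design ratio of 1.8 (reboil per unit feed), feedback currently
# trimming the ratio by +0.05. A feed increase from 100 to 110 moves the
# reboil target immediately, before any composition error develops.
print(reboil_target(100.0, 1.8, 0.05))   # 185.0
print(reboil_target(110.0, 1.8, 0.05))   # 203.5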

The module then describes how the simultaneous use of both distillate and bottoms composition controllers can cause problems through their interaction. A number of decoupling methods are described, from simple de-tuning of the less critical controller through to a full 2x2 dynamic decoupler. The relative gain technique is described as a means both of quantifying the need for decoupling and of designing the final scheme. The student is given the opportunity to apply this method and draw conclusions from its results. Full details of the design technique for the decoupler are given for the student to follow and implement.
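
For a 2x2 problem the relative gain can be computed directly from the four steady-state process gains; the sketch below does so with hypothetical gains.

# Illustrative sketch: relative gain for a 2x2 composition control problem.
# The steady-state gains are hypothetical; in practice they would come from
# the simulation or from plant tests.

def relative_gain_2x2(k11, k12, k21, k22):
    """Relative gain (lambda) of pairing input 1 with output 1."""
    return 1.0 / (1.0 - (k12 * k21) / (k11 * k22))

lam = relative_gain_2x2(k11=0.8, k12=-0.4, k21=-0.6, k22=0.9)
print(f"lambda = {lam:.2f}")
# lambda near 1 suggests little interaction; values well above 1 (or below 0)
# indicate that decoupling, or a different pairing, should be considered.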

The use of on-stream analysers is then covered. The benefit of minimising the sample delay is shown and a number of methods described for achieving this. Guidance is given on locating the analyser and on continuous measurement validation.

More complex distillation columns, such as super-fractionators and towers with sidestreams, are covered, with guidance given on how the techniques learnt may be applied.

Proprietary multivariable control techniques are described, their advantages over conventional control identified and potential suppliers listed.

locating tray temperatures
choice of manipulated variable
pressure compensation
cut and separation models
inferential properties
feedforward on feed rate
feedforward on feed enthalpy
feedforward on feed composition
sigma-T/delta-T control
steady state decouplers
relative gain analysis
dynamic decoupling
on-stream analysers
towers with sidestreams
multivariable control packages
technology suppliers

Optimisation  
available variables

Column optimisation is frequently misunderstood and its opportunities often overlooked, yet large benefits may be captured for relatively little cost. This module presents the opportunities that exist on most columns, describing how to quantify their benefits and the technology required for their capture.

It begins by identifying the variables that have not yet been used for basic and composition control. It also lists the constraints that are likely to be reached when these are adjusted. It draws the distinction between linear constraint control, which can be designed to ensure the column operates at the most profitable limits, and closed-loop real time optimisation that uses a non-linear method to identify an optimum that may not be fully constrained. It identifies the proprietary technologies that are available for both approaches.

The module first addresses pressure as an optimisation variable, dispelling any belief that pressure should always be minimised. It shows how adjusting pressure may relax other more valuable constraints and give an overall increase in profitability. The student is given the opportunity to use simulation to explore the relative benefits of the energy saving that pressure reduction gives versus the capacity increase that might be achieved if pressure is increased.

Energy/yield optimisation is then described. The module shows why it can often be attractive to set composition targets that are substantially more demanding than those required by product specifications. It shows how to balance the additional energy cost against the yield improvement. The student is again given the opportunity to explore this on the steady state simulation.
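
The balance can be illustrated with a simple incremental calculation; in the sketch below the prices and the (linearised) yield and energy sensitivities are hypothetical assumptions, not course data.

# Illustrative sketch: trading additional reboil energy against improved yield
# of the more valuable product. All economics and sensitivities are hypothetical.

product_value = 40.0          # value advantage of distillate over bottoms, $/t
energy_cost = 25.0            # $/MWh
yield_gain_per_purity = 0.8   # extra t/h of distillate per %-point of over-purification
energy_per_purity = 1.1       # extra MWh/h of reboil per %-point of over-purification

def hourly_benefit(over_purification_pct):
    extra_yield = yield_gain_per_purity * over_purification_pct
    extra_energy = energy_per_purity * over_purification_pct
    return extra_yield * product_value - extra_energy * energy_cost

for x in [0.0, 0.5, 1.0, 1.5, 2.0]:
    print(f"over-purify by {x:.1f}%: benefit {hourly_benefit(x):6.1f} $/h")
# With these linear assumptions the benefit grows steadily; in practice the
# separation becomes progressively harder, the energy term is non-linear and
# an optimum composition target exists.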

The module concludes with the student carrying out a full optimisation of the six remaining variables, i.e. feed rate, feed composition, feed temperature, column pressure and the distillate and bottoms composition targets.

common constraints
benefits
available technologies
flooding protection
pressure minimisation
energy-yield optimisation

 

Gasoline Blending
8 to 10 hours
key specifications

The module begins with details of the common specifications applied to gasoline. These include those that can be controlled at the blender, such as density, octane (RON and MON), aromatics content, benzene content, Reid vapour pressure (RVP), % evaporated at 70°C and vapour lock index (VLI). Each is defined and reasons given for why they are important. Other specifications, determined upstream of blending, are also covered. Methods are given for determining the properties of blends of components. Different grades of gasoline are listed, showing how specifications can change both seasonally and geographically.

Key gasoline components are then defined. These include reformate, isomerate, ethyl tertiary butyl ether (ETBE), butane and ethanol. Other less common components, produced in more complex oil refineries and petrochemical plants, are included. The key properties of each component are given - particularly those which can be changed upstream of the blender to improve profitability. The student then has the opportunity to apply the methods to calculate the properties of a given blend and then explore how the component ratios may be changed to ensure product specifications are met at minimum cost.
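
As an illustration of such a calculation, the sketch below estimates two blend properties from hypothetical component data: density is assumed to blend linearly by volume, while RVP uses a commonly quoted index approximation (index = RVP^1.25). The component data and recipe are assumptions for the example only.

# Illustrative sketch: estimating blend properties from component data.
# All volume fractions, densities and RVP values are hypothetical.

components = {
    #             vol fraction, density kg/m3, RVP kPa
    "reformate":  (0.45, 820.0, 18.0),
    "isomerate":  (0.30, 650.0, 90.0),
    "butane":     (0.05, 580.0, 350.0),
    "ethanol":    (0.20, 794.0, 17.0),
}

density = sum(v * d for v, d, _ in components.values())
rvp_index = sum(v * rvp**1.25 for v, _, rvp in components.values())
rvp = rvp_index ** (1 / 1.25)

print(f"Blend density: {density:.0f} kg/m3")
print(f"Blend RVP (index method): {rvp:.0f} kPa")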

Distinction is then made between batch blending, in-line blending and rundown blending. The relative benefits of each are described, with case studies conducted by the student to demonstrate these. Blend ratio control (BRC) is then covered in detail, with the student shown how start-up and completion actions are taken, how pacing is used to deal with any hydraulic limitations and how component shortages are handled. Other features, which integrate BRC with full oil movement and storage (OM&S) automation, are listed.
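
Pacing, for example, can be illustrated in a few lines of Python (the recipe, capacities and requested rate below are hypothetical): the blend rate is cut back so that no component exceeds its hydraulic capacity while the recipe ratios are preserved.

# Illustrative sketch: pacing a blend when one component hits a hydraulic limit.

recipe = {"reformate": 0.45, "isomerate": 0.30, "butane": 0.05, "ethanol": 0.20}
capacity_m3h = {"reformate": 400.0, "isomerate": 250.0, "butane": 60.0, "ethanol": 120.0}

target_blend_rate = 800.0   # m3/h requested by scheduling

# Largest blend rate each component can support, then take the most limiting.
paced_rate = min(target_blend_rate,
                 min(capacity_m3h[c] / recipe[c] for c in recipe))

flows = {c: recipe[c] * paced_rate for c in recipe}
print(f"Paced blend rate: {paced_rate:.0f} m3/h")
print({c: round(f, 1) for c, f in flows.items()})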

Blend property control (BPC) is then added, showing first how inconsistencies between measured and predicted blend properties arise and how they are handled. Methods of dealing with process dynamics are shown. The student then completes a number of hands-on exercises, on a blender simulator, which show the benefit of BRC and how the addition of BPC substantially improves blend economics. Degrees of freedom are explored to understand how blends are constrained. Alternative cost functions are detailed, describing where each would be applicable.
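
One simple way of handling such inconsistencies is a filtered bias between measured and predicted values; the sketch below illustrates the idea with hypothetical octane figures (the class name and filter factor are assumptions, and analyser deadtime would be allowed for before the comparison).

# Illustrative sketch: reconciling measured and predicted blend properties with
# a filtered bias, so that recipe adjustment works on corrected predictions.

class PropertyBias:
    def __init__(self, filter_factor=0.2):
        self.bias = 0.0
        self.f = filter_factor

    def update(self, measured, predicted):
        self.bias = (1 - self.f) * self.bias + self.f * (measured - predicted)

    def corrected(self, predicted):
        return predicted + self.bias

octane = PropertyBias()
for measured, predicted in [(94.6, 95.0), (94.7, 95.1), (94.6, 95.0)]:
    octane.update(measured, predicted)
print(f"Current octane bias: {octane.bias:.2f}")
print(f"Corrected prediction: {octane.corrected(95.2):.2f}")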

Distinction is drawn between maintaining the current blend within specification and control of the finished product. In particular, this includes the impact of any 'heel' that might remain in the product tank from an earlier blend and might not meet the target specification. Tank quality integration (TQI) is described and then applied by the student to both the final product's properties and its composition. The student then explores the use of the 'on-spec horizon' technique to determine at what stage the tank should meet the specifications. This is followed by detailed examination of its advantages and disadvantages. The impact of blend infeasibility is included.
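
The volume balance behind TQI is illustrated below; the function name, volumes and qualities are hypothetical and the property is assumed to blend linearly by volume.

# Illustrative sketch: tank quality integration (TQI) with a heel. The blend
# header property needed for the finished tank to meet specification follows
# from a volume balance on the tank.

def required_blend_property(spec, heel_vol, heel_prop, blend_vol):
    """Property the incoming blend must average so the final tank meets spec."""
    total_vol = heel_vol + blend_vol
    return (spec * total_vol - heel_prop * heel_vol) / blend_vol

# Example: 2,000 m3 heel at 94.2 RON left from an earlier blend; 18,000 m3 to
# be blended; finished tank must certify at 95.0 RON.
target = required_blend_property(spec=95.0, heel_vol=2_000.0, heel_prop=94.2,
                                 blend_vol=18_000.0)
print(f"Required average blend RON: {target:.2f}")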

The use of on-stream analysers is covered in some detail. This starts with validity checking, how to respond to failure and how to respond to a measurement again becoming valid. Other aspects include performance historisation, how to manage the instrumentation when no blend is taking place and the choice of technology. In particular, the use of near infra-red (NIR) technology is detailed. Using on-stream measurement in certifying product quality is compared to the more traditional laboratory approach. The importance of data collection, as a means to improve blend predictability, is emphasised.
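
As a sketch of the validity checking described above, the example below applies range and rate-of-change tests to an analyser reading; the function name, limits and the fallback behaviour mentioned in the comments are assumptions.

# Illustrative sketch: basic on-stream analyser validity checks - a range test
# and a rate-of-change test (a frozen-value test, counting how long the reading
# has been unchanged, would be added in the same way). Limits are hypothetical;
# a failed check would typically hold the property controller at its last
# output or fall back to the predicted value.

def is_valid(reading, previous, low=80.0, high=105.0, max_step=0.5):
    in_range = low <= reading <= high
    plausible_step = abs(reading - previous) <= max_step
    return in_range and plausible_step

print(is_valid(94.8, previous=94.7))   # True  - normal update
print(is_valid(110.2, previous=94.7))  # False - outside the expected range
print(is_valid(96.1, previous=94.7))   # False - implausibly large step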

The module concludes with a detailed description of the man-machine interface. This includes integration with the laboratory information management system (LIMS), auto tank gauging (ATG), OM&S automation, the product recipe database and production scheduling. Specimen operator DCS screens are included. Finally, the impact that such automation has on the organisation, and how best to exploit the technology, is described.

blend components
blend ratio control
on-stream analysis
blend property control
historisation
man-machine interface
organisation

 
