November 24, 2013

OOP design and event driven programming

In a previous article, one of the 3 main pieces of design advice was “Use events to free your classes from things that can change”. In this article, I would like to talk a little bit more about it.

Following good OOP practice, you develop by encapsulating almost everything into specialized objects. All the code and all the data related to a given subject are thus grouped into a single entity called an object.

This encapsulation is quite easy. You first think about what the object is or is made of, and what operations can be done on the object or with the object. What the object is or is made of becomes properties. Operations become methods.
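
For example, here is a purely illustrative declaration of a hypothetical X-ray source object, before we move on to the real world example below. The class and its members are invented for this explanation; they are not taken from the real application.

type
  TXRaySource = class
  private
    FVoltageKV : Integer;      // What the object is made of...
    FCurrentMA : Integer;      // ...becomes properties
  public
    procedure TurnOn;          // What can be done with the object...
    procedure TurnOff;         // ...becomes methods
    property VoltageKV : Integer read FVoltageKV write FVoltageKV;
    property CurrentMA : Integer read FCurrentMA write FCurrentMA;
  end;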

Let’s see this in action with a very simple and basic example derived from one of my real world applications. You can skip the following 3 paragraphs if you are not interested in the actual application and only want to read the software design part of this article.

First, a little background: As you may know, I’m working for a company building – among other things – automated digital radiography (DR) systems. A DR system is made of an X-ray source, an X-ray detector, a diaphragm, a manipulator, a control computer, an image processing system and a database system. It is automated because it does X-ray inspection without human interaction. It takes hundreds of radiographs fully unattended.

You can see a picture here. OK, when you don’t know what it is, it can be challenging to understand what you see. In the background, in orange, you see a robotic arm. It holds a fork which supports the X-ray detector (the white rectangular box on the left), the diaphragm (yellow, on the right) and the X-ray source (just behind the diaphragm). In the foreground, you see the part being inspected (a component of the low pressure compressor of an aircraft engine). This component is secured on a rotating table you don’t see in the picture.

What my Delphi software does is drive the robot, the rotating table, the X-ray source, the diaphragm and the detector in coordinated movements to take X-ray pictures of all welds. Pictures are sent to an image processing system for examination and then stored in a database for later search and retrieval.

For such a complex system, it is very important to have a good software design. If you don’t, the application will be horrible to maintain and will probably quickly become unreliable.

Objects are defined at all levels in the application, high level objects making use of low level objects. Each object is specialized for its own purpose. If you don’t carefully think about that purpose, you’ll end up with a single huge object doing everything, or with a myriad of objects each doing almost nothing. There is no rule to fix the boundaries. The only guide I can give you is to always think about what the main purpose of the object is and concentrate on it. Everything that doesn’t fit that purpose must be moved to another object with its own purpose.

Back to our real world example: Both the robot and the rotating table make use of serial communication (RS232 or RS485) between the computer and the embedded electronic controllers driving the motors. They use the same physical communication layer but different protocols. We can immediately see the object candidates, from low level to high level: basic serial communication, robot communication protocol, rotating table communication protocol, the robot itself and the rotating table itself.

The basic serial communication object will simply drive the computer serial port to send and receive characters. It has no knowledge of what the data sent or received represents. It only knows how to send and receive data.

The communication protocol object handles the messages required to instruct a robot controller or a rotating table motor controller to do what it needs to do. This object doesn’t know how to send characters to a serial port. It doesn’t even know it is making use of a serial port. But it knows how to format a message instructing the robot or the table to reach a given position. The communication protocol object has no idea what the purpose of the movement is, but it knows how to request such a movement.

The robot and rotating table objects are high level objects. They know what a robot or table is able to do, and they know which sequences of instructions are required for everything a robot or table can do, at least from a low level point of view. The error to be avoided here is to put in those objects something related to yet a higher level. That higher level is related to the coordination between the robot and the table, or to the whole system.
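
To make this layering concrete before going further, here is a sketch of what the class declarations could look like. All the names and methods below are hypothetical, chosen for this article only; the middle class is a very stripped down preview of the TRobotProtocol object built at the end of this article.

type
  // Lowest level: drives the serial port, knows nothing about message content
  TSerialLink = class
  public
    procedure SendData(const Data : AnsiString);
  end;

  // Middle level: knows the message format, not the serial port details
  TRobotProtocol = class
  private
    FSerialLink : TSerialLink;
    function  BuildFrame(Axis : Char; Position : Integer) : AnsiString;
  public
    procedure SendMoveAxisCommand(Axis : Char; Position : Integer);
  end;

  // Highest level: knows what the robot can do, not how messages are formatted
  TRobot = class
  private
    FProtocol : TRobotProtocol;
  public
    procedure MoveTo(X, Y, Z : Integer);
  end;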

I have just scratched the surface of the OOP design for such an application. I don’t want to teach you how to build software for an automated digital radiography system. I want to teach you how to write good Delphi code (well, this applies to almost all object oriented programming languages).

An object in the middle of a hierarchy receives orders from the higher level and has to delegate work to the lower level objects. Those are simple method calls. For example, the robot object having to move the arm to a given X, Y, Z position in space (you also need the 3 angles to fully define a position in space) will call a bunch of methods of the communication protocol object, probably one message to send each parameter (coordinates and angles) to the robot controller. The communication protocol object will build the messages, adding addresses, message number, checksum or CRC and similar items required to make a valid message. It will delegate the sending of each message to the lowest level object handling serial port communication. This object doesn’t know what the messages are but knows how to send them one byte or character at a time, with the proper baud rate, parity, start and stop bits, and how to handle handshaking.
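
Continuing the hypothetical classes sketched above, the downward delegation could look roughly like this. The message layout and the helper names are of course invented; a real controller protocol is far more detailed.

// Highest level: the robot object only expresses the movement
procedure TRobot.MoveTo(X, Y, Z : Integer);
begin
  FProtocol.SendMoveAxisCommand('X', X);
  FProtocol.SendMoveAxisCommand('Y', Y);
  FProtocol.SendMoveAxisCommand('Z', Z);
end;

// Middle level: the protocol object builds a complete, valid message
// and delegates the actual sending to the serial object
procedure TRobotProtocol.SendMoveAxisCommand(Axis : Char; Position : Integer);
begin
  // BuildFrame is assumed to add address, message number and checksum
  FSerialLink.SendData(BuildFrame(Axis, Position));
end;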

So far so good: we have only used simple method calls up to now. We are now at the point where we need event driven programming!

The software is driving hardware. Moving a robot arm or rotating a table takes a huge amount of time compared to the computer processing speed. What happens when the requested movement is done or when something goes wrong? There is a data flow in the reverse direction! The hardware (the motor controller) sends a message through the serial port to say – for example – that the position has been reached.

The lowest level object receives bytes from the motor controller. After checking for errors such as parity, it transmits those bytes to the communication protocol object which assembles complete messages. Messages are checked for validity (format, length, CRC, and so on). Complete messages are then used to notify the robot object or table object that the requested movement is done. It is likely that a single movement is part of a group of coordinated movements. The robot object knows about this coordination, collects all messages until the group is finished, and only then forwards the information to the upper layer.

In event driven software, this backward information flow is handled by events. This means that the software sends a request for something and does not wait until the requested operation is done. Rather, it handles a notification when something happens, for example the end of the requested operation or an error message. This event driven operation is frequently called asynchronous operation because requesting something is decoupled from waiting for it to be done.
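
Sticking with the same hypothetical classes, the backward flow could be sketched like this. The handler below is assumed to be plugged into a data-available event of the serial object; FRxBuffer, ExtractMessage and DecodePosition are invented placeholders for the real buffering and message parsing, and TriggerXPosition fires the kind of event built step by step later in this article.

// Middle level: called whenever the serial object has received some bytes
procedure TRobotProtocol.SerialLinkDataAvailable(
    Sender : TObject; const Data : AnsiString);
var
  Msg : AnsiString;
begin
  // Accumulate bytes until at least one complete message is available
  FRxBuffer := FRxBuffer + Data;
  // ExtractMessage checks format, length and CRC, and removes the
  // complete message from the buffer
  while ExtractMessage(FRxBuffer, Msg) do
    TriggerXPosition(DecodePosition(Msg));  // Notify the upper layer by event
end;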

Traditional programming, also known as synchronous programming or blocking programming, works by sending a request and waiting for the answer. This is simple and easy. Well, easy until you have to do several things simultaneously. With synchronous programming you must then use multithreading to do several things simultaneously. This works well but it is difficult to develop, debug and maintain. A lot of issues arise from thread synchronization. This is a very difficult matter.

Asynchronous programming, or event driven programming, solves those issues easily. There is no problem at all doing several things simultaneously since the “things” are merely requests to do something. A request is almost instantaneous. There is no wait, no blocking. The program never waits for something to be done. It just does the processing when it is done.

Think about the Windows user interface. Your code never waits for the user to click on a button. You just assign code to the event which is triggered when the user clicks on the button. The code is executed when the user clicks, and you have nothing to do for that to happen.
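
For example, wiring a click handler in code (instead of letting the IDE do it through the Object Inspector) is a single assignment to the event property. The form, button and handler names are just the usual defaults.

procedure TForm1.FormCreate(Sender : TObject);
begin
  // Tell the button which code to run when it is clicked; no waiting involved
  Button1.OnClick := Button1Click;
end;

procedure TForm1.Button1Click(Sender : TObject);
begin
  // Executed only when the user actually clicks; the program never blocks for it
  ShowMessage('Button clicked');
end;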

This event driven behavior can be built into your own objects very easily. It fits very well alongside the code you write for the user interface.

Here are the required steps:
  1. Think about which data you need to pass when the event is triggered
  2. Create a data type corresponding to the data found in step 1, adding a “Sender” argument. This data type is a “procedure ... of object”, that is, a method pointer.
  3. Create a protected member variable of that type in your class. Usually this member variable name begins with “FOn”.
  4. Create a published property corresponding to the member variable.
  5. Create a protected “Trigger” virtual procedure.
  6. Implement the trigger procedure.
You want an example? The communication protocol object which receives data from the serial communication object will trigger an event when it has assembled a full message and this message is a message stating the position of the movement (usually the motor controller periodically sends such a message while moving). The data to pass is the actual position. This position is usually expressed as an integer count of position encoder ticks.

Step 1:
We need an integer.

Step 2:
type
  TPositionEvent = procedure (Sender : TObject; Position : Integer) of object;

Step 3:
protected
  FOnXPosition : TPositionEvent;

Step 4:
  published
    property OnXPosition : TPositionEvent read  FOnXPosition
                                          write FOnXPosition;

Step 5:
protected
    procedure TriggerXPosition(Position : Integer); virtual;

Step 6:
procedure TMyObject.TriggerXPosition(Position: Integer);
begin
    if Assigned(FOnXPosition) then
        FOnXPosition(Self, Position);
end;

Usually I use a single source file for each individual object. The complete code should look like this:

unit RobotCommProtocol;

interface

uses
  Classes;

type
  TPositionEvent = procedure (Sender : TObject; Position : Integer) of object;

  TRobotProtocol = class(TComponent)
  protected
    FOnXPosition : TPositionEvent;
    procedure TriggerXPosition(Position : Integer); virtual;
  published
    property OnXPosition : TPositionEvent read  FOnXPosition
                                          write FOnXPosition;
  end;


implementation

procedure TRobotProtocol.TriggerXPosition(Position: Integer);
begin
    if Assigned(FOnXPosition) then
        FOnXPosition(Self, Position);
end;

end.

Carefully study how I named the various parts. Naming conventions are very important for readable and maintainable code. Keep naming the same thing with the same name, using prefixes or suffixes to make a distinction where required.

My event is supposed to pass a position. Since we have several possible positions, I named everything related to this particular event “XPosition”. The data type is named “TPositionEvent” because the same event type will apply to X, Y, Z and all the others, so the “X” has been dropped. It is to be used for an event, so the suffix is “Event”. And it begins with the letter “T” because it is a data type.

The property itself is named “OnXPosition” for obvious reasons. Think about the “OnClick” event of a TButton. This is similar.

The member variable has the same name as the property with an “F” prefix. This convention is almost always used.

The trigger procedure begins with the prefix “Trigger” and becomes TriggerXPosition. When the object needs to trigger the OnXPosition event, it calls TriggerXPosition, passing the new position. The procedure is made virtual so that derived classes have a chance to override its behavior. For example, a derived class could enhance it by triggering another event when the value exceeds some limit, as sketched below.
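
Here is a hypothetical sketch of that idea. The derived class name, the OnXLimit event and the limit value are invented for the example; only TRobotProtocol, TPositionEvent and TriggerXPosition come from the code above.

type
  TCheckedRobotProtocol = class(TRobotProtocol)
  protected
    FOnXLimit : TPositionEvent;
    procedure TriggerXPosition(Position : Integer); override;
  published
    property OnXLimit : TPositionEvent read FOnXLimit write FOnXLimit;
  end;

procedure TCheckedRobotProtocol.TriggerXPosition(Position : Integer);
begin
  inherited TriggerXPosition(Position);   // Keep the normal event behavior
  // 100000 is an arbitrary limit chosen for the example
  if (Position > 100000) and Assigned(FOnXLimit) then
    FOnXLimit(Self, Position);            // Extra event when the limit is exceeded
end;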

The unit containing the code has been named “RobotCommProtocol” because our object handles a communication protocol for a given robot. The object itself is named “TRobotProtocol” for obvious reasons. It derives from TComponent, which makes it possible to install the object as a component available in the Delphi IDE component palette. There are other requirements for that which are outside the scope of this article.

There is much more to say about this topic. Please post a comment to the article to suggest the topics I should develop in the next article.



Follow me on Twitter
Follow me on LinkedIn
Follow me on Google+
Visit my website: http://www.overbyte.be
This article is available from http://francois-piette.blogspot.be

6 comments:

Bill said...

Great article! Not only for a clear presentation of structures all Delphi developers should follow, but for the fact, as well, that all too many these days seem to think that point to point serial communications is dead.

Sebastian Delling said...

I also like your article. I would like to see how to implement a process with events. For example: Move the robot to a position. Then wait for it to reach it. After that, take an X-Ray picture. When that is done, move back to the initial position.

My problem with that is that different things happen after the same event. E.g. you don't want to always turn on the X-Ray when the robot reaches a position. So I'd like to see how to properly plug those events together.

FPiette said...

@Sebastian Good event driven programming means you never wait for an event. So you request the robot to move to the position and get control back immediately while the robot is moving.

During the robot move, you receive a lot of events about its position, so you may update the user interface.

Once the robot has reached its target position, you receive the OnTargetReached event (or you receive the position, which you check against the target) and start taking the picture.

The X-Ray detector probably also works asynchronously. While the picture is taken (maybe several seconds, because several frames get grabbed and summed), you get another event, OnSnapshotDone (or similar).

From that event you send the command to the robot to move to the next position.

To keep track of all those events, you should use a "State" variable telling you where you are in the inspection cycle. What you have to develop is a kind of "Finite State Machine".
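
A bare-bones sketch of such a state machine could look like this. Every name below (the states, the handlers, the robot and detector objects) is invented for the illustration.

type
  TInspectionState = (isIdle, isMovingToWeld, isTakingPicture, isMovingHome);

// Handler assigned to the robot object's OnTargetReached event
procedure TInspectionCycle.RobotTargetReached(Sender : TObject);
begin
  case FState of
    isMovingToWeld :
      begin
        FState := isTakingPicture;
        FDetector.StartSnapshot;   // Asynchronous: returns immediately
      end;
    isMovingHome :
      FState := isIdle;            // Inspection cycle finished
  end;
end;

// Handler assigned to the detector object's OnSnapshotDone event
procedure TInspectionCycle.DetectorSnapshotDone(Sender : TObject);
begin
  if FState = isTakingPicture then
  begin
    FState := isMovingHome;
    FRobot.MoveToHome;             // Asynchronous as well
  end;
end;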

Didier said...

... thus the OnXPosition property (of type TPositionEvent) is assigned with a procedure that:
1. sends an action to the higher-level layer
2. sends a message to the lower-level layer
Do you have an example of this?
Similarly, I guess that the TriggerXPosition procedure is called by a serial communication object procedure.
Can you please give an example of code showing this?
Thanks

FPiette said...

@Didier OnXPosition is indeed initialized by an upper layer, with a procedure which is likely to update the user interface to show the robot movement.
TriggerXPosition is called from the same level, probably from the processing of the characters received from the lower level (the serial port component). At some point the message is assembled, verified and the position extracted. Then TriggerXPosition is called, passing the decoded position. The exact code depends on the message format and is rather complex. It depends on the motor controller itself. I use SEW motors and controllers (RS485 protocol) as well as Maxon motors and controllers (CAN interface).
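
To give a rough idea of the wiring only (the form, label and handler below are invented; the message decoding is left out for the reason just given):

procedure TInspectionForm.FormCreate(Sender : TObject);
begin
  FRobotProtocol := TRobotProtocol.Create(Self);
  // The upper layer plugs its own code into the event
  FRobotProtocol.OnXPosition := RobotProtocolXPosition;
end;

procedure TInspectionForm.RobotProtocolXPosition(Sender : TObject; Position : Integer);
begin
  // For example, show the robot movement on screen
  LabelXPos.Caption := IntToStr(Position);
end;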

Roland Bengtsson said...

Great post!
I blogged about a similar subject here http://boldfordelphi.blogspot.fi/2012/11/custom-events.html