1   Tutorial: Distributed Computer Vision with RSB

The aim of this tutorial is to give you an overview of how computer vision tasks can be performed in a distributed fashion using the middleware RSB and assorted libraries. For this purpose we will explain example programs in C++ and Python, which receive input images via RSB, do some processing on them using the well-known OpenCV library, and publish their processing results again via RSB. The example projects can be downloaded and are based on the project structure explained in the Build System Essentials Tutorial. In case the explanations about the build system used in this tutorial are too sparse, please refer to the respective section in that tutorial.

Note

While this tutorial is generally usable for everyone, it was specifically created for use in a lecture. Some explanations are specific to this lecture and are marked in special boxes.

Note

For questions and feedback regarding RSB, you can send a mail to the mailing list rsb@lists.cor-lab.uni-bielefeld.de. You can subscribe to the mailing list here. There is also an IRC channel for RSB support: irc://irc.freenode.net/rsb. Bug reports are always welcome on our bug tracker.

1.1   Preconditions

In order to complete the tutorial you need a working installation of the following software packages on a Linux computer:

  • RSB version 0.10, language implementation in the programming language you would like to work with
  • rsb-tools-cl, version 0.10
  • rsb-gstreamer, version 0.10
  • RST and rst-converters in version 0.10
  • OpenCV, at least version 2.4.8 (should be installed via system packages)
  • CMake (should be installed via system packages)

For the installation of the RSx software, please refer to the installation instructions.

Moreover, you need a webcam for generating live images.

You should be familiar with the programming language you are going to use in this tutorial and have basic knowledge of build tools and compiler invocation for that language.

The required software is already installed on the lab computers and you will stream webcam images from your own camera on your computer. In order to use the ISY installation of these dependencies (PATH, ...), please execute the following command in each terminal that you want to use in this tutorial (or add it to your login profile):

module load isyenv

This command will also define the $prefix variable, which is used constantly throughout the tutorial. It points to the installation prefix for this semester, which resides in a subdirectory of /vol/isy.

1.2   Tutorial

The following outline will give you a rough idea of the steps performed in this tutorial.

Note

In this online documentation we will only list the relevant source code lines. If it is unclear to you how they fit into the whole program, please refer to the complete example project.

Warning

This is not the project you should start the tutorial with!

1.2.1   Setting up RSB Connection and Webcam Streaming

For this tutorial we will use the simple socket-based transport of RSB. As RSB supports this functionality out of the box, there is no specific configuration required.

Note

If you want to ensure that everything is correctly set up, you can explicitly enable the socket transport in the configuration of RSB and disable the other transports. For this purpose, create or edit ~/.config/rsb.conf with the following contents:

[transport.inprocess]
enabled = 0
[transport.socket]
enabled = 1
[transport.spread]
enabled = 0

These settings should be the defaults, however.

1.2.1.1   Setting up an RSB logger

In order to see the events which are exchanged via RSB, we will first set up a logger which plots the different scopes and their activity against time. In order to do this, we simply call the rsb-loggercl tool with the correct --style argument on the socket transport root:

$prefix/bin/rsb-loggercl0.10 --style=timeline/scope socket:/

Note

When using the socket-based transport, the first RSB process that is launched binds to the socket and acts as a server process. Therefore, after launching the logger, we should keep it open during the entire tutorial to avoid socket binding issues. This is not an issue when using other transports (e.g. the spread-based one).

1.2.1.2   Using a Webcam

We will use the GStreamer multimedia framework to capture data from the webcam and send it over RSB using the GStreamer plugins.

Each of you should be supplied with a webcam in order to set up your own streaming pipeline.

For this purpose, ensure that you have installed the RSB plugins as described and launch:

gst-launch-0.10 gconfvideosrc ! video/x-raw-yuv, width=320, height=240, framerate=30/1 ! ffmpegcolorspace ! video/x-raw-rgb, blue_mask=16711680, green_mask=65280, red_mask=255 ! queue ! rsbvideosink "scope=/video"

Afterwards, images will be streamed on the RSB scope /video. Leave the streaming running during the whole work on the tutorial.

1.2.1.3   Using the pre-recorded bag file

If you do not have a webcam, you can use a pre-recorded video snippet as an alternative. It was recorded using the bag-recordcl tool (part of rsbag-tools-cl) and can be used in place of a webcam.

For your convenience we have already put this file in the isy volume so that you do not need to download it.

$prefix/share/data/short-video.tide

You can simply re-play the file with the bag-playcl command.

$prefix/bin/bag-playcl0.10 $prefix/share/data/short-video.tide socket:

Note

The replay stops after a successful run. For this tutorial you can simply loop the execution of this command in order to get a continuous video stream.

while true; do $prefix/bin/bag-playcl0.10 $prefix/share/data/short-video.tide socket: ; sleep 1; done;

1.2.2   Getting the Scratch Project

To speed up the initial setup phase, we have created a version of the example code from which the code discussed in this tutorial has been removed.

Please download the scratch project and extract it on your computer.

The project template builds a single binary named rsbcv. In order to prevent name clashes, you should rename the built executable to a unique name for your group. Therefore, modify the following line in the CMakeLists.txt in the project root folder.

SET(BINARY_NAME "rsbcv")

In order to verify that the required libraries are properly installed on your computer and to get you started with CMake, we will try to configure and compile this test project:

  1. Inside the root folder of the downloaded project, create a folder called build:

    cd path/to/cpp
    mkdir build
    cd build
    
  2. Call CMake to configure the project. This means CMake will search for all dependencies of our test project and create a Makefile.

    cmake -DCMAKE_BUILD_TYPE=debug -DCMAKE_PREFIX_PATH=${prefix} ..
    

    ${prefix} indicates the path where each upstream project was installed to.

    ${prefix} is set to /vol/isy/current/releases/trusty by the module isyenv.

    Note

    In case multiple required libraries are installed into different prefixes, you need to indicate for each of the required libraries where it can be found. For the above call this would look like:

    cmake -DCMAKE_BUILD_TYPE=debug -DRSC_DIR=${prefix}/share/rsc0.10/ -DRSB_DIR=${prefix}/share/rsb0.10/ -DRST_DIR=${prefix}/share/rst0.10/ -Drst-converters_DIR=${prefix}/share/rst-converters0.10 ..
    

    CMake should exit successfully with a log comparable to this one:

    -- The C compiler identification is GNU
    -- The CXX compiler identification is GNU
    -- Check for working C compiler: /usr/bin/gcc
    -- Check for working C compiler: /usr/bin/gcc -- works
    -- Detecting C compiler ABI info
    -- Detecting C compiler ABI info - done
    -- Check for working CXX compiler: /usr/bin/c++
    -- Check for working CXX compiler: /usr/bin/c++ -- works
    -- Detecting CXX compiler ABI info
    -- Detecting CXX compiler ABI info - done
    -- protoc does not support matlab
    -- Found PROTOBUF: /usr/lib/libprotobuf.so
    -- Performing Test CHECK_CXX_FLAG_pipe
    -- Performing Test CHECK_CXX_FLAG_pipe - Success
    -- Performing Test CHECK_CXX_FLAG_Wall
    -- Performing Test CHECK_CXX_FLAG_Wall - Success
    -- Performing Test CHECK_CXX_FLAG_Wextra
    -- Performing Test CHECK_CXX_FLAG_Wextra - Success
    -- Performing Test CHECK_CXX_FLAG_DIAGNOSTICS
    -- Performing Test CHECK_CXX_FLAG_DIAGNOSTICS - Success
    -- Found Subversion: /usr/bin/svn (found version "1.6.17")
    -- Configuring done
    -- Generating done
    -- Build files have been written to: /homes/jwienke/workspace/distributed-computer-vision-with-rsb/build/cpp
    
  3. Afterwards, you can try to compile the project using make. If this succeeds, you can finally launch the example binary for the first time by executing:

    src/rsbcv
    

    The program will exit immediately as you will have to fill it with life during the remainder of this tutorial.

    You do not need to modify the build system during the course of this tutorial. Please remember to check the Build System Essentials for details on that, especially when you start developing your own code!

Inside the project you will find a single source file example-projects/cpp/src/rsbcv/binary.cpp. We have already provided a method to parse two scopes from the command line:

string inScope = "/video";
string outScope = "/results";

void handleCommandline(int argc, char *argv[]) {

    po::options_description options("Allowed options");
    options.add_options()("help,h", "Display a help message.")("inscope,i",
            po::value < string > (&inScope),
            "Scope for receiving input images.")("outscope,o",
            po::value < string > (&outScope), "Scope for sending the results.");

    // allow to give the value as a positional argument
    po::positional_options_description p;
    p.add("value", 1);

    po::variables_map vm;
    po::store(
            po::command_line_parser(argc, argv).options(options).positional(p).run(),
            vm);

    // first, process the help option
    if (vm.count("help")) {
        cout << options << "\n";
        exit(1);
    }

    // afterwards, let program options handle argument errors
    po::notify(vm);

}

Please download the scratch project and extract it on your computer.

The Python project template builds a single script named rsbcv-harris. In order to prevent name clashes, you should rename the script that is going to be built to a unique name for your group. Therefore, modify the left side of the following line in the setup.py in the project root folder.

                 'rsbcv-harris = rsbcv:detectHarris',

In order to verify that the required libraries are properly installed on your computer and to get you started with setuptools for Python, we will try to build and install the project without any modifications:

  1. Choose a temporary prefix where you are going to install the project for testing, e.g. prefix=/tmp/testinstall. Execute the following code to set up a Python site directory in this prefix and make it available in the shell:

    export prefix=/tmp/testinstall
    mkdir -p $prefix/lib/python2.7/site-packages
    export PYTHONPATH=$prefix/lib/python2.7/site-packages:$PYTHONPATH
    

    Note

    In case you are working with a different Python version, adapt the aforementioned path to that version.

  2. Change the working directory to the root folder of the downloaded project and execute:

    python2 setup.py install --prefix=$prefix
    

    This will install the project in the selected prefix.

  3. Launch the installed program for testing purposes:

    $prefix/bin/rsbcv-harris
    

    The program will exit immediately as you will have to fill it with life during the remainder of this tutorial.

    You will need to use your custom program name instead of the generic one given here for launching.

    You do not need to modify the build system during the course of this tutorial. Please remember to check the Build System Essentials for details on that, especially when you start developing your own code!

Inside the project you will find a single source file example-projects/python/rsbcv/__init__.py. We have already provided a method to parse two scopes from the command line:

import argparse

def parseArguments():

    parser = argparse.ArgumentParser(description='An example program sending some magic numbers')
    parser.add_argument('--inscope', type=str, default='/video',
                        help='scope to receive images from')
    parser.add_argument('--outscope', type=str, default='/results',
                        help='scope to send processing results to')
    return parser.parse_args()

def detectHarris():

    args = parseArguments()

Note

It might happen that you are unable to kill the running Python program using Control + C. This can happen if a dependency used by RSB blocks this signal. Instead, you can try Control + 4. However, if none of these work, you can still kill it using killall -15 rsbcv-harris*.

This makes it possible for the user of the resulting executable to specify the scope from which images will be received and the scope on which the processed results will be published.

1.2.3   Receiving Images

To receive events using RSB you need to instantiate a listener. A listener is an object which receives all incoming events on a specified scope and passes them to registered handlers, which are callbacks provided by the user of RSB.
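
To make the listener and handler concept more concrete, here is a minimal sketch using the Python API that is introduced step by step below (the scope and the handler body are just examples): it prints the payload type of every event received on /video.

import time

import rsb

def printEvent(event):
    # handlers are plain callables that receive the complete event,
    # including its payload in event.data
    print("received event with payload of type %s" % type(event.data))

# create a listener on an example scope and register the handler
listener = rsb.createListener("/video")
listener.addHandler(printEvent)

# keep the process alive; handlers are called asynchronously from a background thread
while True:
    time.sleep(1)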

We will now create a listener to receive images from the streamed webcam in OpenCV’s image data type IplImage. All of this will happen in the main() function after the command line parsing (except including headers).

  1. Include the required headers for RSB:

    #include <rsb/Factory.h>
    #include <rsb/Listener.h>
    
  2. Get the rsb::Factory instance.

        rsb::Factory& factory = rsb::getFactory();
    

    The rsb::Factory is responsible for creating RSB participants like listeners.

  3. Create a rsb::Listener on the input scope specified via the command line option:

        rsb::ListenerPtr imageListener = factory.createListener(
                rsb::Scope(inScope));
    
  4. The listener will receive events now but it still needs to know what to do with these events. In our case we would like to store them in a queue for later processing. Therefore, we first instantiate such a queue. As the queue will contain OpenCV IplImage objects, we first need to include the OpenCV headers:

    #include <opencv2/core/core.hpp>
    #include <opencv2/highgui/highgui.hpp>
    

    We also need a queue implementation which is synchronized so that multiple threads can operate on it in parallel. For this purpose we will use the implementation from RSC and include the respective header:

    #include <rsc/threading/SynchronizedQueue.h>
    

    Afterwards, create an instance of this queue class for IplImage instances:

        boost::shared_ptr<rsc::threading::SynchronizedQueue<boost::shared_ptr<IplImage> > > imageQueue(
                        new rsc::threading::SynchronizedQueue<boost::shared_ptr<IplImage> >(1));
    

    As this queue needs to be passed around later, we have to create it on the heap using the new operator. To avoid manual memory management and possible memory leaks, we maintain the instance of the queue inside a boost::shared_ptr. The rsc::threading::SynchronizedQueue itself also uses templates to be usable with varying data types. In this case it will maintain boost::shared_ptrs of IplImage. The reason for this is that every received RSB event contains its payload inside such a shared pointer, and hence we mirror this structure. Finally, the 1 in the constructor call of the queue instance limits the queue size to 1. In case our program is too slow to process all incoming images, this ensures that we do not queue up more and more images. Instead, old images will be discarded and only the most recent one will remain in the queue.

  5. Now that the queue is available, we need to instruct the listener to store the images contained in the received events into this queue. For this purpose we install a specialized handler that comes with RSB, the rsb::QueuePushHandler, which pushes the data of each event into a queue. First, include the respective header:

    #include <rsb/QueuePushHandler.h>
    

    And afterwards install the handler in the rsb::Listener instance:

        imageListener->addHandler(
                rsb::HandlerPtr(new rsb::QueuePushHandler<IplImage>(imageQueue)));
    
  6. We have instructed RSB to fill the queue with the most recent images. Now we can start a main working loop which reads from the queue and in the first iteration displays the received images:

        while (true) {
    
            cv::Mat image = cv::Mat(imageQueue->pop().get(), true);
    
            cv::imshow("input", image);
            cv::waitKey(1);
        }
    

    The rsc::threading::SynchronizedQueue::pop method of the queue will block if our processing is faster than the streaming from the webcam and the queue is hence empty. At this point, we convert the legacy data type IplImage into the cv::Mat data type which is used in current OpenCV versions. Images of this type can be displayed using cv::imshow which, in our example, creates a new window with the name input (if it does not already exist) and displays the received image. However, we need to explicitly trigger repainting of the window by calling cv::waitKey.

You can now try to compile the program again by calling make and execute it as described before. However, you will not yet see the webcam images. Instead, warnings like this one will be printed:

1349972760057 rsb.spread.ReceiverTask [WARN]: ReceiverTask::notifyHandler catched std exception: No converter for wire-schema or data-type `.rst.vision.Image'.
Available converters: {.*: *rsb::converter::ByteArrayConverter[wireType = std::string, wireSchema = .*, dataType = bytearray] at 0x20c2460
.rsb.protocol.EventId: *EventIdConverter[wireType = std::string, wireSchema = dummy, dataType = rsb::EventId] at 0x20c23d0
bool: *rsb::converter::BoolConverter[wireType = std::string, wireSchema = bool, dataType = bool] at 0x20c1850
uint64: *rsb::converter::Uint64Converter[wireType = std::string, wireSchema = uint64, dataType = unsigned long] at 0x20c2bf0
utf-8-string: *rsb::converter::StringConverter[wireType = std::string, wireSchema = utf-8-string, dataType = std::string] at 0x20c2750
void: *rsb::converter::VoidConverter[wireType = std::string, wireSchema = void, dataType = void] at 0x20c2cb0}

RSB, so far, does not know how to decode the received data (which is in a binary format for network transmission) and transform it into IplImages. We need to install a so-called converter for this purpose. In this case the rst-converters project provides rst::converters::opencv::IplImageConverter, which does exactly what is required. To install this converter, first include the respective header file:

#include <rst/converters/opencv/IplImageConverter.h>

Moreover, we need to include a header from RSB to register this converter in RSB's repository of known converters:

#include <rsb/converter/Repository.h>

Finally, we can install the IplImageConverter. This needs to be done before creating the listener:

    rsb::converter::Converter<string>::Ptr imageConverter(
            new rst::converters::opencv::IplImageConverter());
    rsb::converter::converterRepository<string>()->registerConverter(imageConverter);

After recompiling the program you should finally see the unprocessed images from the webcam.

We will now create a listener to receive images from the streamed webcam in OpenCV’s image data type cv.iplimage. All of this will happen in the detectHarris() function after the command line parsing.

  1. Import the required package for RSB:

        import rsb
    
  2. Create a rsb.Listener on the input scope specified via the command line option:

        listener = rsb.createListener(args.inscope)
    
  3. The listener will receive events now but it still needs to know what to do with these events. In our case we would like to store them in a queue for later processing. Therefore, we first instantiate such a queue. As the queue will contain OpenCV images, which are represented as numpy arrays in recent versions of OpenCV, we first need to import the opencv module and the numpy module in the global namespace:

    import cv2
    import numpy
    

    Moreover, we need to import a queue class:

        from Queue import Queue
    

    Afterwards, create an instance of this queue for the images:

        lastImage = Queue(1)
    

    As we only want to process the most recent image, the queue is limited to size one. However, it is important to note that the Python queue implementation does not throw away old images in case your code is too slow to cope with the amount of incoming images. We will handle this case in the next step.

  4. Now that the queue is available, we need to instruct the listener to store the images contained in the received events into this queue. For this purpose we need to install a handler, which is a callable() accepting one argument, the event:

        def addLastImage(imageEvent):
            try:
                lastImage.get(False)
            except:
                pass
            lastImage.put(numpy.asarray(imageEvent.data[:,:]), False)
        listener.addHandler(addLastImage)
    

    The function first tries to remove the only item that can potentially be in the queue, which happens if our own processing code was too slow to grab it from the queue since the last insert. This operation throws an exception if no element was in the queue (which would be ideal because no image was missed) and we can safely ignore this exception. Afterwards, the new image is inserted into the queue. As the new OpenCV API from the cv2 module works with numpy arrays as the data type for images, but we will receive backwards-compatible cv.iplimage instances, we need to convert the received data to this representation, which is done with numpy.asarray() before filling the queue. Finally, the function is added as a handler to the listener created earlier.

  5. We have instructed RSB to fill the queue with the most recent images. Now we can start a main working loop which reads from the queue and in the first iteration displays the received images:

        while doRun:
    
            image = lastImage.get(True)
    
            cv2.imshow("input", image)
            cv2.waitKey(1)
    

    The Queue.Queue.get() method of the queue will block if our processing is faster than the streaming from the webcam and the queue is hence empty. cv2.imshow() creates a new window with the name input (if it does not already exist) and displays the received image. However, we need to explicitly trigger repainting of the window by calling cv2.waitKey().

Note

We use the variable doRun as a condition for the loop in order to be able to abort the program at any point in time in a clean way. For that we will write a simple signal handler which will catch Control + C and exit the main loop.

import signal
doRun = True
def signal_handler(signal, frame):
    global doRun
    print "exiting."
    doRun = False
signal.signal(signal.SIGINT, signal_handler)

If you do not want to do this, you can simply use a while True: loop.

You can now try to install the program again as described before and execute it. However, you will not yet see the webcam images. Instead, warnings like this one will be printed:

Exception in thread Thread-6:
Traceback (most recent call last):
  File "/usr/lib/python2.7/threading.py", line 551, in __bootstrap_inner
    self.run()
  File "/usr/lib/python2.7/threading.py", line 504, in run
    self.__target(*self.__args, **self.__kwargs)
  File "/home/languitar/.local/lib/python2.7/site-packages/rsb_python-0.8.0-py2.7.egg/rsb/transport/rsbspread/__init__.py", line 216, in __call__
    raise e
KeyError: '.rst.vision.Image'

RSB, so far, does not know how to decode the received data (which is in a binary format for network transmission) and transform it into cv.iplimage instances. We need to install a so-called converter for this purpose. In this case the rst-converters project provides rstconverters.opencv.IplimageConverter, which does exactly what is required. To install this converter, first import it:

    from rstconverters.opencv import IplimageConverter

Moreover, we need to import a function from RSB to register this converter in RSB's repository of known converters:

    from rsb.converter import registerGlobalConverter

Finally, we can install the IplimageConverter. This needs to be done before creating the listener:

    registerGlobalConverter(IplimageConverter())
    rsb.setDefaultParticipantConfig(rsb.ParticipantConfig.fromDefaultSources())

After reinstalling the program you should finally see the unprocessed images from the webcam.

1.2.4   Processing the Received Image

We will now apply a Harris Corner detector which is included in OpenCV and produces a set of corner points when given an input image. These sets of points will be sent via RSB (e.g. for further processing).

  1. Detect Harris corners:

    #include "opencv2/imgproc/imgproc.hpp"
    

    This header file provides the interface to the Harris corner detector.

            cv::Mat image_gray;
            cvtColor(image, image_gray, CV_BGR2GRAY);
            cv::Mat dst = cv::Mat::zeros(image_gray.size(), CV_32FC1);
            cv::cornerHarris(image_gray, dst, 2, 3, 0.04);
    

    We convert the image to grayscale using the cvtColor function and also create an empty Matrix using cv::Mat::zeros into which the detector can store its detection result mask. The cv::cornerHarris function performs the actual detection and stores the discovered corner points into dst. The additional arguments for the cv::cornerHarris can be used to influence the detection of corners in the image. For now we simply choose the arguments 2, 3, 0.04, as they will produce good results for this tutorial.

  2. Provide debug output for the detected features:

            cv::Mat dst_norm, dst_norm_scaled;
            // normalize and scale the image
            cv::normalize(dst, dst_norm, 0, 255, cv::NORM_MINMAX, CV_32FC1,
                    cv::Mat());
            cv::convertScaleAbs(dst_norm, dst_norm_scaled);
    
            // copy the input image so that drawing does not modify the original
            cv::Mat imageWithKeyPoints = image.clone();
            // draw a circle around corners
            for (int j = 0; j < dst_norm.rows; j++) {
                for (int i = 0; i < dst_norm.cols; i++) {
                    // a somewhat good threshold
                    if (dst_norm.at<float>(j, i) > 195) {
                        // Circle the result
                        circle(imageWithKeyPoints, cv::Point(i, j), 5,
                                cv::Scalar(0), 2, 8, 0);
                    }
                }
            }
    
            cv::imshow("keypoints", imageWithKeyPoints);
            cv::waitKey(1);
    

    A good way to visualize the detected points is overlaying them on the input image.

    This can be achieved by calling the circle function, which draws a circle for each point into a copy of the existing image. As before, this new image can be displayed using cv::imshow and cv::waitKey. In principle, one of the calls to cv::waitKey could be removed to avoid an unnecessary amount of repainting.

You should now be able to recompile the program.

  1. Detect Harris corners:

    In order to detect Harris corners we have to call the corresponding function on our input image:

            imageGray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
            cornersResult = cv2.cornerHarris(imageGray,2,3,0.04)
            cornersResult = cv2.dilate(cornersResult,None)
            mask = cornersResult>0.01*cornersResult.max()
    

    The cv2.cornerHarris() method performs the detection. We have to convert our color image to grayscale before using this method, as cv2.cornerHarris() only accepts single-channel images. This is done using cv2.cvtColor(). The arguments for cv2.cornerHarris() can be used to influence the detection of corners in the image. For now we simply choose the arguments 2, 3, 0.04, as they will produce good results for this tutorial. Finally, we create a mask which contains all corner points.

  2. Provide debug output for the detected features:

    A good way to visualize the detected points is overlaying them over the input image:

    def drawKeyPoints(target, mask):
        target[mask]=[255,0,0]
    

    Now that we have this function we can display the resulting image:

            imageWithKeyPoints = numpy.array(image)
            drawKeyPoints(imageWithKeyPoints, mask)
    
            cv2.imshow("result", imageWithKeyPoints)
            cv2.waitKey(1)
    

    With the first line of this code fragment we create a copy of our original image on which the keypoints are drawn in the next line. As before, this new image can be displayed using cv2.imshow() and cv2.waitKey(). In principle, one of the calls to cv2.waitKey() could be removed now to avoid an unnecessary amount of repainting.

You should now be able to reinstall the program.

After launching you should see two debug windows: One with the original image and a second window with the same image and the detected keypoints.

1.2.5   Publishing the Processed Results via RSB

Now we will make the detected keypoint locations available via RSB. In order to publish information to the network you will use an informer instance.
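
In its simplest form (a minimal sketch using the Python API, assuming the default string converter is available; the scope and payload are just examples), publishing data with an informer looks like this:

import rsb

# create an informer for plain strings on an example scope and publish one value
informer = rsb.createInformer("/results", dataType=str)
informer.publishData("hello from the tutorial")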

Data that is sent over the network needs to be serialized into a binary representation that is understandable by all interested system components. That means components need to agree on a specific binary format. To ensure this, we maintain a library of data types called RST. This library contains data type definitions for many different robotics and intelligent systems purposes, specified using Google Protocol Buffers. Protocol Buffers allows us to define data types in an abstract interface definition language that can then be translated into concrete classes for most major programming languages. These classes act as data holders and include serialization functionality.

For this tutorial we will use one of the data types from RST, namely rst.geometry.PointCloud2DInt, to encapsulate and serialize the detected Harris corners. As this is a protocol buffers based data type, we will also learn how to enable RSB to send such data types by registering a ProtocolBufferConverter inside RSB.
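
To illustrate the data-holder role of these generated classes, here is a short Python sketch (assuming the rst Python bindings used later in this tutorial are installed; the coordinate values are arbitrary) that fills a PointCloud2DInt and round-trips it through its binary wire format:

import rstsandbox  # makes sandbox data types such as PointCloud2DInt importable
from rst.geometry.PointCloud2DInt_pb2 import PointCloud2DInt

cloud = PointCloud2DInt()
point = cloud.points.add()  # generated classes act as simple data holders
point.x = 10
point.y = 20

# every generated Protocol Buffers class ships with serialization functionality
wire = cloud.SerializeToString()
restored = PointCloud2DInt()
restored.ParseFromString(wire)
assert restored.points[0].x == 10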

  1. Create an informer. For this purpose first include the respective header file:

    #include <rsb/Informer.h>
    

    And afterwards instantiate a new informer instance using the RSB factory. This code must be outside of the main processing loop to avoid creating the instance over and over again, which would waste performance.

        rsb::Informer<rst::geometry::PointCloud2DInt>::Ptr informer =
                factory.createInformer<rst::geometry::PointCloud2DInt>(rsb::Scope(outScope));
    

    The rsb::Informer class requires a template argument with the data type it will send. Like the listener, it is also maintained through a shared pointer. For this purpose, a typedef exists inside rsb::Informer, the rsb::Informer::Ptr member. The informer is created on the output scope specified through the command line option.

  2. Publish the detected corner points at the end of the main loop:

            boost::shared_ptr<rst::geometry::PointCloud2DInt> cloud(
                    new rst::geometry::PointCloud2DInt());
            for (int j = 0; j < dst_norm.rows; j++) {
                for (int i = 0; i < dst_norm.cols; i++) {
                    // a somewhat good threshold
                    if (dst_norm.at<float>(j, i) > 195) {
                        rst::math::Vec2DInt *point = cloud->add_points();
                        point->set_x(i);
                        point->set_y(j);
                    }
                }
            }
            informer->publish(cloud);
    

    This populates the PointCloud2DInt data structure with the detected points and then publishes it using the informer. However, the publishing can only work if RSB knows how to send the PointCloud2DInt data type over the network.

  3. Register the PointCloud data type

    The PointCloud2DInt data type is made available and prepared for network transmission by including the following headers:

    #include <rsb/converter/ProtocolBufferConverter.h>
    #include <rst/geometry/PointCloud2DInt.pb.h>
    

    After that, RSB has to be informed about the new data type and its associated converter by registering an instance of the ProtocolBufferConverter for our point cloud data type.

        rsb::converter::Converter<string>::Ptr pointCloudConverter(
                new rsb::converter::ProtocolBufferConverter<rst::geometry::PointCloud2DInt> ());
        rsb::converter::converterRepository<string>()->registerConverter(pointCloudConverter);
    
  1. Create an informer.

    In order to send data, we instantiate a new informer for the data type PointCloud2DInt, which we will use to send the computed keypoints. This code should be outside of the main processing loop to avoid creating the instance over and over again.

        informer = rsb.createInformer(args.outscope, dataType=PointCloud2DInt)
    

    To create a rsb.Informer instance we need to specify the scope on which to send the results and the data type which shall be sent. In this case, the informer is created on the output scope specified through the command line option, which will be passed into the factory function.

  2. Publish the detected corner points at the end of the main loop:

            cloud = PointCloud2DInt()
    
            # numpy.where returns (row, column) indices, i.e. (y, x)
            y, x = numpy.where(mask)
            for xe, ye in zip(x, y):
                p = cloud.points.add()
                p.x = int(xe)
                p.y = int(ye)
    
            informer.publishData(cloud)
    

    This populates the PointCloud2DInt data structure with the detected points and then publishes it using the informer. However, the publishing can only work if RSB knows how to send the PointCloud2DInt data type over the network.

  3. Register the PointCloud data type

    The PointCloud2DInt data type is made available and prepared for network transmission by first importing the required data type and a converter for it:

        import rstsandbox
        from rst.geometry.PointCloud2DInt_pb2 import PointCloud2DInt
        from rsb.converter import ProtocolBufferConverter
    

    After that, RSB has to be informed about the new data type and its associated converter.

    1
    2
        registerGlobalConverter(ProtocolBufferConverter(messageClass=PointCloud2DInt))
        rsb.setDefaultParticipantConfig(rsb.ParticipantConfig.fromDefaultSources())
    

After reinstalling and launching your program, the results will be published on the specified output scope. This will not produce any immediately noticeable effect on its own. Therefore, the final section explains how to inspect the published results.

Be sure to select a unique output scope in the ISY Lab using the command line arguments. Otherwise your output will be mixed with the one of other groups.

1.2.6   Verifying the Output

To verify that the output is actually generated and is correct we will inspect the data sent on the network using the RSB logger:

$prefix/bin/rsb-loggercl0.10 -I $prefix/share/rst0.10/proto/stable -I $prefix/share/rst0.10/proto/sandbox -l $prefix/share/rst0.10/proto/sandbox/rst/geometry/PointCloud2DInt.proto socket:/YOUR/OUTPUT/SCOPE

Here, YOUR/OUTPUT/SCOPE has to be replaced with the name of the scope on which you publish the results.

You should see a continuous stream of events appearing on the selected output scope.

The logger binary and the other tools are installed in the prefix at /vol/isy/current/releases/trusty/bin.

Note

More information regarding the logger can be found in the online documentation.
