David Trussel

C++ / Python / Embedded Linux


My personal blog. Project maintained by dtrussel
  • Yocto: How to add packages to the SDK?

    Yocto: How to add packages to the SDK?

    Normally a Yocto SDK includes all dependencies needed to build everything in your target image. But how can you add something extra to your SDK specifically? That can be useful if you want to build software components that are not included in your final image, or if you want to provide developers with additional development tools.

    As you might already have realized, a Yocto SDK usually consists of two sysroots (check the top-level directory of your installed SDK to verify this yourself): a host sysroot and a target sysroot. The host sysroot contains all the libraries and executables that need to run on your SDK host, e.g. the cross-compiler and code generators (i.e. all the nativesdk packages). The target sysroot, on the other hand, contains all the cross-compiled libraries needed to build your software for the target. Accordingly, two BitBake variables specify what is packed into each of the SDK’s sysroots: TOOLCHAIN_HOST_TASK and TOOLCHAIN_TARGET_TASK.

    Somewhere in your image recipe you probably inherit the populate_sdk class. You can then append the packages you want to add to the SDK to these variables. For example, let’s add googletest to the target sysroot and cmake and ninja to the host sysroot:

    inherit populate_sdk
    
    TOOLCHAIN_TARGET_TASK += "gtest"
    TOOLCHAIN_HOST_TASK += "nativesdk-cmake nativesdk-ninja"
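
    After modifying the variables, rebuild the SDK installer so that it picks up the new packages. A typical invocation (the image name is an example, substitute your own image recipe) would be:

    ```
    bitbake my-image -c populate_sdk
    ```

    The self-extracting SDK installer then ends up under tmp/deploy/sdk/ in your build directory.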
    
    

    Happy baking!


  • Yocto: Recipe flavors

    Yocto: Recipe flavors

    If you have done some Yocto development you might already have encountered them in the wild… native and nativesdk recipes… Recipes can be built not only for the target, but also for your build host or your SDK host. This post gives a short summary of what the different recipe “flavors” are used for and how to add them to your recipes.

    spices by Andra Ion

    The holy trinity

    1. foo
    2. foo-native
    3. nativesdk-foo

    The most common case is just building your recipe foo. This builds the recipe for your target architecture e.g. aarch64.

    But you might also need to build your recipe in the native flavor, i.e. foo-native. This builds the recipe such that it can be used on the build host, e.g. x86_64. Why would you need that? Let’s say you want to build a recipe that, as part of its build, needs to generate some code, and this code is generated with Python. Your recipe then needs DEPENDS += "python-native", because you want to run the code generation as part of the build process on your host and not on the target machine. Adding DEPENDS += "python" would not make sense, since that would be the cross-compiled version of Python, which cannot run on your build host.

    What about nativesdk-foo? Assume you want to build the same project mentioned above, but this time not within your Yocto project but with the SDK. So the SDK should include Python as well, again not the cross-compiled version, but a version that can run on the host where the SDK is installed. In 99% of cases that is probably the same architecture as the build host (e.g. x86_64), but in theory the two could differ. Hence we need to add nativesdk-python to our SDK.
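
    To make the native case concrete, here is a hypothetical foo.bb fragment (the recipe and script names are made up) for the code-generation example above:

    ```
    DEPENDS += "python-native"

    do_configure_prepend() {
        # the native python (built for the build host) runs the generator
        python ${S}/scripts/generate_code.py
    }
    ```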

    Add native and nativesdk support to your recipes

    Let’s start simple: Your recipe is built the same way for all flavors. Then it is enough to just add

    BBCLASSEXTEND = "native nativesdk"
    

    to your recipe.

    And if the package is built differently per flavor, you can either add a foo-native.bb and a nativesdk-foo.bb to your layer, or you can customize individual tasks of your foo.bb recipe, e.g.:

    python do_install_class-target () {
      bb.plain("Install for target")
    }
    
    python do_install_class-native () {
      bb.plain("Install for native")
    }
    
    python do_install_class-nativesdk () {
      bb.plain("Install for nativesdk")
    }
    


  • C++ Design Patterns: Low effort observers

    C++ Design Patterns: Low effort observers

    Another classic Gang of Four Pattern is the Observer Pattern. In this pattern, observers want to be notified about state changes of a subject. In this post we will look at how to easily implement this with std::function.

    As before I will stick to a sensor example. Let’s assume we have a sensor that changes its state at unpredictable intervals, and different parts of your system need to know about these changes. Of course you could poll the sensor state from each part, but this is not very elegant and might lead to several unneeded busy loops. In the classic pattern the subject keeps a list of observer objects, but since C++11 we can register callbacks very easily with std::function. For simplicity we will assume the sensor state is represented by an int.

    Low effort observers

    #include <functional>
    #include <list>
    
    class Sensor {
      std::list<std::function<void(int)>> callbacks_{};
    
    public:
      void attach(std::function<void(int)> callback) {
        callbacks_.emplace_back(callback);
      }
    
      void measure() {
        // some complex measurement logic which is waiting for hardware
        // state changes...
        const int new_sensor_state = 42;
        notify(new_sensor_state);
      }
    
    private:
      void notify(int state) {
        for (const auto& callback : callbacks_) {
          callback(state);
        }
      }
    };
    
    

    The usage would then be like this:

    #include <iostream>
    
    Sensor sensor;
    sensor.attach([](int state) {
      std::cout << "New sensor state: " << state << std::endl;
    });
    
    

    That’s it. Super simple and implemented within minutes. Most interestingly there is no observer class in this observer pattern.

    But you might have realized that there is one small catch: there is no detach method. Once we have registered an observer with attach, there is no way to deregister it. That is because std::function is not comparable (except against nullptr). In many cases this is fine, since you want to observe a state during the whole runtime. However, if you need to be able to unregister, there is an easy fix.

    Observers with handles

    When attaching a callback to our sensor we can just return a handle. When we then want to unregister from notifications about the state changes of the sensor we pass this handle to the detach method.

    Change the Sensor::attach method to return a handle to the inserted callback:

    
    std::list<std::function<void(int)>>::iterator
    attach(std::function<void(int)> callback) {
      callbacks_.emplace_back(callback);
      return --callbacks_.end();
    }
    
    

    And add a Sensor::detach method:

    
    void detach(std::list<std::function<void(int)>>::iterator handle){
      callbacks_.erase(handle);
    }
    
    

    And use it like this:

    Sensor sensor;
    auto handle = sensor.attach([](int state) {
      std::cout << "New sensor state: " << state << std::endl;
    });
    
    /// Do some stuff...
    
    sensor.detach(handle);
    
    

    Unfortunately there is no free lunch. We now placed a burden on the user to keep track of the handles. But as long as std::function is not comparable there is no easy workaround for this. If keeping track of handles is not acceptable you might at this point be better off not reinventing the wheel and use an existing library e.g. boost::signals2 or Qt signals.
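
    If the bookkeeping bothers you but an external library is overkill, one common remedy (my addition, not from the post above) is a small RAII wrapper that detaches in its destructor. A minimal sketch, with notify made public for brevity:

    ```cpp
    #include <functional>
    #include <iterator>
    #include <list>
    
    // Sensor as in the post, with attach returning a handle and a detach method
    class Sensor {
      std::list<std::function<void(int)>> callbacks_{};
    
    public:
      using Handle = std::list<std::function<void(int)>>::iterator;
    
      Handle attach(std::function<void(int)> callback) {
        callbacks_.emplace_back(std::move(callback));
        return std::prev(callbacks_.end());
      }
      void detach(Handle handle) { callbacks_.erase(handle); }
      void notify(int state) {
        for (const auto& cb : callbacks_) cb(state);
      }
    };
    
    // RAII wrapper: detaches automatically when it goes out of scope,
    // so the user no longer has to track handles manually
    class ScopedConnection {
      Sensor* sensor_;
      Sensor::Handle handle_;
    
    public:
      ScopedConnection(Sensor& sensor, std::function<void(int)> cb)
          : sensor_(&sensor), handle_(sensor.attach(std::move(cb))) {}
      ~ScopedConnection() { sensor_->detach(handle_); }
      ScopedConnection(const ScopedConnection&) = delete;
      ScopedConnection& operator=(const ScopedConnection&) = delete;
    };
    ```

    The connection lives exactly as long as the observer wants to listen; destroying it deregisters the callback.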


  • C++ Design Patterns: A Modern Command Pattern

    C++ Design Patterns: A Modern Command Pattern

    Don’t worry. This is not yet another take on the classic Gang of Four Command Pattern. Instead we look at how we can use modern C++ features to solve the same problem in a different way. Namely we want to send commands to a (possibly) remote application, whilst choosing a testable and maintainable design.

    Let’s have a look at an example to illustrate the task at hand: we will control a remote light bulb.

    #include <iostream>
    #include <string>
    #include <utility>
    
    // our mock hardware
    struct Lightbulb {
      explicit Lightbulb(std::string name) : name_(std::move(name)) {}
    
      void set_brightness(unsigned val) {
        std::cout << "Lightbulb(" << name_ << ") set brightness to " << val << '\n';
      }
    
      void set_color(unsigned r, unsigned g, unsigned b) {
        std::cout << "Lightbulb(" << name_ << ") set color to RGB("
                  << r << ',' << g << ',' << b << ")\n";
      }
    
      std::string name_ = "";
    };
    

    From the perspective of the software which will control our light bulb, we need to do the following things:

    1. Receive the command from a communication interface
    2. Deserialize the command
    3. Pass the command on to the hardware (i.e. call a function of our hardware class)

    One could of course just hardwire the commands to directly call the Lightbulb methods when deserializing them. However this would introduce a very strong coupling between the communication and the “business” logic. That would neither be easily testable nor very maintainable.

    The Commands

    Since C++17 we have the visitor pattern built into the STL with std::variant and std::visit, which in my opinion is a great way to address the above problem.

    So we will define structs/classes for the commands we want to send.

    
    namespace cmd {
    
    struct SetBrightness {
      unsigned val = 0;
    };
    
    struct SetColor {
      unsigned r = 0;
      unsigned g = 0;
      unsigned b = 0;
    };
    
    using Command = std::variant<SetBrightness, SetColor>;
    
    } // namespace cmd
    
    

    An instance of std::variant holds one of its template types. So it is a great way to store unrelated types like our command structs.

    We then only need a visitable object (we need an action for every type the variant can hold):

    
    namespace cmd {
    
    struct CommandExecutor{
      explicit CommandExecutor(Lightbulb& bulb) : bulb_(bulb) {}
    
      void operator()(const SetBrightness& cmd){
        bulb_.set_brightness(cmd.val);
      }
    
      void operator()(const SetColor& cmd){
        bulb_.set_color(cmd.r, cmd.g, cmd.b);
      }
    
    private:
      Lightbulb& bulb_;
    };
    
    } // namespace cmd
    
    

    And that is basically it.

    • Define simple POD structs that represent your commands and use std::variant to pass them around
    • Define a visitor object that performs different actions based on the actual value of the std::variant
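
    The two bullet points can be seen end to end in a condensed, self-contained sketch (a stand-in Lightbulb that records the last call instead of printing, so the behavior is observable):

    ```cpp
    #include <string>
    #include <variant>
    
    // stand-in for the mock hardware: records the last call instead of printing
    struct Lightbulb {
      std::string last_call;
      void set_brightness(unsigned val) {
        last_call = "brightness=" + std::to_string(val);
      }
      void set_color(unsigned r, unsigned g, unsigned b) {
        last_call = "color=" + std::to_string(r) + "," + std::to_string(g) + "," +
                    std::to_string(b);
      }
    };
    
    namespace cmd {
    
    // POD command structs, passed around as a std::variant
    struct SetBrightness { unsigned val = 0; };
    struct SetColor { unsigned r = 0, g = 0, b = 0; };
    using Command = std::variant<SetBrightness, SetColor>;
    
    // visitor: one overload per alternative the variant can hold
    struct CommandExecutor {
      explicit CommandExecutor(Lightbulb& bulb) : bulb_(bulb) {}
      void operator()(const SetBrightness& c) { bulb_.set_brightness(c.val); }
      void operator()(const SetColor& c) { bulb_.set_color(c.r, c.g, c.b); }
    
    private:
      Lightbulb& bulb_;
    };
    
    } // namespace cmd
    ```

    Executing a command is then a single std::visit(executor, command) call, regardless of which command the variant currently holds.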

    To give a more complete example we will also look at the communication and serialization steps mentioned above.

    Deserialize it!

    On the communication interface (which we will define below) we will receive the commands in a certain format / protocol and we need to parse these messages into our command representation above. For this example I will use JSON. JSON is in my opinion a very good starting point for machine to machine communication, because it is also human readable and hence easy to debug. Most of the time its performance is also good enough for sending small data like the commands mentioned here.

    I decided to use nlohmann/json for this example:

    #include <nlohmann/json.hpp>
    
    using json = nlohmann::json;
    
    namespace cmd {
    
    inline
    void to_json(json& j, const SetBrightness& cmd) {
      j = json{{"brightness", cmd.val}};
    }
    
    inline
    void from_json(const json& j, SetBrightness& cmd) {
      j.at("brightness").get_to(cmd.val);
    }
    
    inline
    void to_json(json& j, const SetColor& cmd) {
      j = json{{"red", cmd.r},{"green", cmd.g},{"blue", cmd.b}};
    }
    
    inline
    void from_json(const json& j, SetColor& cmd) {
      j.at("red").get_to(cmd.r);
      j.at("green").get_to(cmd.g);
      j.at("blue").get_to(cmd.b);
    }
    
    inline
    Command deserialize(const json& j) {
      Command ret;
      const auto type = j.at("command_type").get<std::string>();
      if (type == "set_brightness") {
        ret = j.at("command_arguments").get<SetBrightness>();
      } else if (type == "set_color") {
        ret = j.at("command_arguments").get<SetColor>();
      } else {
        throw std::runtime_error("Could not deserialize json command " + j.dump());
      }
      return ret;
    }
    
    } // namespace cmd
    
    

    Communication

    As with the serialization step above, there are many possibilities for communication between your applications. For the application here I chose websockets, since they are easy to use for both local and remote communication.

    Here we will be using Boost.Beast’s websocket implementation, which is maybe a bit verbose, but available on almost any decent OS.

    At some point we will also need some thread safe mechanism to pass the commands from the communication thread to our main thread. For simplicity’s sake I will just use a boost::lockfree::queue, since we are already using boost. A thread safe queue is a great way to deal with communication between threads, because from the point of view of the main thread there will only be a single source of commands. This makes it easier to test; e.g. for unit tests you can ignore the communication and just fill the command queue another way.

    #include <boost/lockfree/queue.hpp>
    
    namespace cmd {
    
    // for convenience
    using CommandQueue = boost::lockfree::queue<cmd::Command>;
    
    } // namespace cmd
    

    Our light bulb application will be a websocket server to which clients can connect. So we define a listener class, which listens for new connections and starts a new session for each (we will define later what a session does).

    
    #include "commands.hpp"
    
    #include <boost/asio/bind_executor.hpp>
    #include <boost/asio/ip/tcp.hpp>
    #include <boost/asio/strand.hpp>
    #include <boost/beast/core.hpp>
    #include <boost/beast/websocket.hpp>
    
    #include <iostream>
    #include <memory>
    
    namespace beast = boost::beast;         // from <boost/beast.hpp>
    namespace websocket = beast::websocket; // from <boost/beast/websocket.hpp>
    namespace net = boost::asio;            // from <boost/asio.hpp>
    using tcp = net::ip::tcp;               // from <boost/asio/ip/tcp.hpp>
    
    // Report a failure
    void fail(beast::error_code ec, char const *what) {
      std::cout << what << ": " << ec.message() << "\n";
    }
    
    // Accepts incoming connections and launches the Sessions
    class Listener : public std::enable_shared_from_this<Listener> {
      net::io_context &ioc_;
      tcp::acceptor acceptor_;
      cmd::CommandQueue& cmd_queue_;
    
    public:
      Listener(net::io_context &ioc,
               tcp::endpoint endpoint,
               cmd::CommandQueue& cmd_queue)
          : ioc_(ioc), acceptor_(ioc), cmd_queue_(cmd_queue) {
        beast::error_code ec;
    
        // Open the acceptor
        acceptor_.open(endpoint.protocol(), ec);
        if (ec) {
          fail(ec, "open");
          return;
        }
    
        // Allow address reuse
        acceptor_.set_option(net::socket_base::reuse_address(true), ec);
        if (ec) {
          fail(ec, "set_option");
          return;
        }
    
        // Bind to the server address
        acceptor_.bind(endpoint, ec);
        if (ec) {
          fail(ec, "bind");
          return;
        }
    
        // Start listening for connections
        acceptor_.listen(net::socket_base::max_listen_connections, ec);
        if (ec) {
          fail(ec, "listen");
          return;
        }
      }
    
      // Start accepting incoming connections
      void run() { do_accept(); }
    
    private:
      void do_accept() {
        // The new connection gets its own strand
        acceptor_.async_accept(
            net::make_strand(ioc_),
            beast::bind_front_handler(&Listener::on_accept, shared_from_this()));
      }
    
      void on_accept(beast::error_code ec, tcp::socket socket) {
        if (ec) {
          fail(ec, "accept");
        } else {
          // Create the Session (defined below) and run it
          std::make_shared<Session>(std::move(socket), cmd_queue_)->run();
        }
    
        // Accept another connection
        do_accept();
      }
    };
    
    

    The Session simply waits for new messages, deserializes them, puts them on our queue, and resumes waiting for the next message.

    class Session : public std::enable_shared_from_this<Session> {
      websocket::stream<beast::tcp_stream> ws_;
      beast::flat_buffer buffer_;
      cmd::CommandQueue& cmd_queue_;
    
    public:
      // Take ownership of the socket
      explicit Session(tcp::socket &&socket, cmd::CommandQueue& cmd_queue)
       : ws_(std::move(socket)), cmd_queue_(cmd_queue) {}
    
      // Start the asynchronous operation
      void run() {
        // Set suggested timeout settings for the websocket
        ws_.set_option(
            websocket::stream_base::timeout::suggested(beast::role_type::server));
    
        // Accept the websocket handshake
        ws_.async_accept(
            beast::bind_front_handler(&Session::on_accept, shared_from_this()));
      }
    
      void on_accept(beast::error_code ec) {
        if (ec)
          return fail(ec, "accept");
    
        // Read a message
        do_read();
      }
    
      void do_read() {
        // Read a message into our buffer
        ws_.async_read(buffer_, beast::bind_front_handler(&Session::on_read,
                                                          shared_from_this()));
      }
    
      void on_read(beast::error_code ec, std::size_t bytes_transferred) {
        boost::ignore_unused(bytes_transferred);
    
        // This indicates that the Session was closed
        if (ec == websocket::error::closed)
          return;
    
        if (ec)
          return fail(ec, "read");
    
        auto data = reinterpret_cast<char*>(buffer_.data().data());
        const auto json_cmd = json::parse(data, data + buffer_.data().size());
        buffer_.consume(buffer_.size());
        const auto command = cmd::deserialize(json_cmd);
        cmd_queue_.push(command);
    
        do_read();
      }
    
      void on_write(beast::error_code ec, std::size_t bytes_transferred) {
        boost::ignore_unused(bytes_transferred);
    
        if (ec)
          return fail(ec, "write");
    
        // Clear the buffer
        buffer_.consume(buffer_.size());
    
        do_read();
      }
    };
    

    Putting it all together

    Now we have all the building blocks for our application, so let’s write main().

    #include "commands.hpp"
    #include "websocket.hpp"
    
    #include <chrono>
    #include <thread>
    
    #include <csignal>
    
    void process_commands(cmd::CommandExecutor& executor,
                          cmd::CommandQueue& cmd_queue){
      cmd::Command command;
      while (cmd_queue.pop(command)) {
        std::visit(executor, command);
      }
    }
    
    volatile std::sig_atomic_t signaled = false;
    
    void signal_handler(int signal){
      if ((SIGTERM == signal) or (SIGINT == signal)) {
        signaled = true;
      }
    }
    
    int main(){
      std::signal(SIGTERM, signal_handler);
      std::signal(SIGINT, signal_handler);
    
      Lightbulb bulb("LED");
      cmd::CommandExecutor executor(bulb);
      cmd::CommandQueue cmd_queue(100);
    
      net::io_context io_context(1);
      
      // listen on all IPv4 interfaces on port 8888
      std::make_shared<Listener>(io_context, tcp::endpoint{tcp::v4(), 8888},
        cmd_queue)->run();
      std::thread io_task([&io_context](){ io_context.run(); });
    	
      // our main loop
      while (not signaled) {
        const auto now = std::chrono::steady_clock::now();
        process_commands(executor, cmd_queue);
        // some other tasks...
        std::this_thread::sleep_until(now + std::chrono::milliseconds(500));
      }
    
      std::cout << "=== THE END ===\n";
      io_context.stop();
      if (io_task.joinable()) {
        io_task.join();
      }
    }
    
    

    And that’s it. From the websocket client we can then send our commands as json strings e.g.:

    {
      "command_type": "set_color",
      "command_arguments": { "red": 11, "green": 22, "blue": 33 }
    }
    

    Some take away points:

    • std::variant and std::visit are great alternatives to an inheritance based command design.
    • Separating the communication from the hardware control makes it easy to maintain e.g. replacing the communication interface or protocol.
    • Having a single source of commands (here our command queue) makes it very suitable for testing.
  • C++ Design Patterns: Template Method - no templates involved

    C++ Design Patterns: Template Method - no templates involved

    Assume you have something that has an overall structure, but some parts of it need to be customized depending on the use case. The idea of the Template Method pattern is to define the overall structure in a base class and let the derived classes override the specific behavior.

    Going back to the sensor example from my last post:

    struct Sensor {
      double measure(){ return do_measure(); }
      virtual ~Sensor() = default;
    private:
      virtual double do_measure() = 0;
    };
    
    struct AccelerationSensor : Sensor {
    private:
      double do_measure() override { ... }
    };
    
    struct PositionSensor : Sensor {
    private:
      double do_measure() override { ... }
    };
    
    

    This is great because the interface measure() is separated from the implementation do_measure(). If we simply made the measure() method virtual and overrode it in the derived classes, interface and implementation would be coupled, making such classes harder to maintain. With the template method (a non-virtual interface) it is much easier to add functionality later. Say we want to filter each sensor value; then we can just modify the base class like this:

    struct Sensor {
      double measure(){
        const auto val = do_measure();
        return filter(val);
      }
      virtual ~Sensor() = default;
    private:
      virtual double do_measure() = 0;
      double filter(double value) { /* some filter implementation */ return value; }
    };
    

    The interface measure() stays exactly the same and none of the client code has to be modified. This would not have been possible, had we worked by overriding a virtual interface.

    So make your interface non-virtual and public in the base class and separate the implementation into private/protected virtual functions which the derived classes can override.
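
    As a self-contained illustration of the guideline, with concrete stand-ins for the elided measurement and filter logic (the values are made up):

    ```cpp
    struct Sensor {
      // non-virtual public interface: the "template method"
      double measure() { return filter(do_measure()); }
      virtual ~Sensor() = default;
    
    private:
      virtual double do_measure() = 0;                    // customization point
      double filter(double value) { return value * 0.5; } // trivial stand-in filter
    };
    
    struct AccelerationSensor : Sensor {
    private:
      double do_measure() override { return 9.81; }       // made-up reading
    };
    ```

    Client code only ever calls measure(); the filtering step was added without touching any derived class.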


  • Back to basics: C++ Inheritance in a nutshell

    Back to basics: C++ Inheritance in a nutshell

    C++ is an object-oriented language and inheritance can be used to define relationships between objects in the form of class hierarchies. It provides a means to structure and organize your code. Assuming that you already know the basics of inheritance, we will look at how and when to best use it. In particular, we look at how to use it for runtime and compile time polymorphism.

    Before you start using inheritance to describe your objects’ relationships, also consider the alternatives. Often composition is a better approach to structuring your code. A simple guideline is to use composition when the relationship between your objects can be described as a has-a relationship (e.g. the Robot class has a Leg), and inheritance when it is better described as an is-a relationship (e.g. a Dog is an Animal). However, that guideline does not hold up in all cases. It is better to follow the Liskov Substitution Principle, which basically states that if you choose inheritance, the Derived class should be usable anywhere the Base class is used, i.e. you could use a Derived wherever a Base is expected.

    To give a simple example: At first it might seem like a good idea to make your Penguin class inherit from the Bird class because a penguin is a bird. However this does not work well in code if the Bird class has a fly() method in its interface. So following Liskov, we should not use this inheritance, because we could not use Penguin everywhere we used Bird.
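
    A tiny sketch of why this fails (hypothetical classes, my own illustration): client code written against Bird breaks as soon as it is handed a Penguin.

    ```cpp
    #include <stdexcept>
    #include <string>
    
    struct Bird {
      virtual std::string fly() { return "flying"; }
      virtual ~Bird() = default;
    };
    
    // A penguin "is a" bird, but cannot honor the Bird interface:
    struct Penguin : Bird {
      std::string fly() override { throw std::logic_error("penguins cannot fly"); }
    };
    
    // Valid for any Bird... except it blows up for Penguin,
    // which is exactly the Liskov Substitution Principle violation
    inline std::string make_it_fly(Bird& b) { return b.fly(); }
    ```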

    Liskov Meme

    When is it then a good idea to use inheritance? Often it is used together with virtual functions for runtime polymorphism. Following the don’t repeat yourself principle we want to write code that does the same thing for similar objects, but we do not want to write the same code for every single class again.

    Runtime Polymorphism

    So let’s assume we have a PositionSensor and an AccelerationSensor class. However our program should detect during runtime how many of each are present and should store them in a container. So a possible solution is to introduce a Sensor base class and use this base class’ interface in the part of the code that deals with sensors in a generic way.

    struct Sensor {
      virtual double measure_value() = 0;
      virtual ~Sensor() = default;
    };
    
    struct AccelerationSensor : Sensor {
      double measure_value() override { ... }
    };
    
    struct PositionSensor : Sensor {
      double measure_value() override { ... }
    };
    
    std::vector<std::unique_ptr<Sensor>> detect(Hardware* hw){
      std::vector<std::unique_ptr<Sensor>> sensors;
      for (size_t i = 0; i < hw->num_sensors(); ++i) {
        auto type = hw->get_next_sensor_type();
        switch (type) {
          case SensorType::Position:
            sensors.emplace_back(std::make_unique<PositionSensor>());
            break;
          case SensorType::Acceleration:
            sensors.emplace_back(std::make_unique<AccelerationSensor>());
            break;
          default:
            throw std::runtime_error("Sensor type not supported");
        }
      }
      return sensors;
    }
    
    ...
    
    void GUI::update_sensor_values(std::vector<std::unique_ptr<Sensor>>& sensors) {
      for (size_t i = 0; i < sensors.size(); ++i) {
        display_values_.at(i) = sensors.at(i)->measure_value();
      }
    }
    
    

    And then use it like this:

    Hardware hw = load_HW_from_configuration_file();
    auto sensors = detect(&hw);
    GUI gui;
    gui.update_sensor_values(sensors);
    
    

    The example above demonstrates a very basic use of inheritance to achieve runtime polymorphism through virtual function calls.

    Compile time Polymorphism

    In some cases you want to write generic code, but you already know your polymorphic types at compile time and you don’t want to pay the runtime cost of virtual function calls. One way of achieving this in C++ is called the Curiously Recurring Template Pattern (CRTP) idiom. Its main idea is to use the derived class as a template parameter of the base class.

    template <typename T>
    struct Base {
    ...
    };
    
    
    struct Derived : Base<Derived> {
    ...
    };
    
    

    If you see this for the first time it might look a bit weird: the Derived class inherits from the Base class with itself as a template parameter. But this way we can access the methods of the derived class from within the base class. And if we give the method in the derived class the same name, it overrides the base class method (statically). Let’s change the above example to compile time polymorphism to see how this works:

    template <typename T>
    struct Sensor {
      double measure_value() {
        T& derived = static_cast<T&>(*this);
        return derived.measure_value();
      } 
    };
    
    struct AccelerationSensor : Sensor<AccelerationSensor> {
      double measure_value() { ... }
    };
    
    struct PositionSensor : Sensor<PositionSensor> {
      double measure_value() { ... }
    };
    
    ...
    
    template <typename Derived>
    void GUI::update_sensor_value(int i, Sensor<Derived>& sensor) {
      display_values_.at(i) = sensor.measure_value();
    }
    
    

    And then use it like this:

    AccelerationSensor sensor0{};
    PositionSensor sensor1{};
    GUI gui;
    gui.update_sensor_value(0, sensor0);
    gui.update_sensor_value(1, sensor1);
    
    

    Now this example is of course a bit oversimplified and you could achieve the same with a simple templated update_sensor_value function. But as soon as you have more logic built into your classes you will see that the CRTP is a good way to write generic code for types you already know at compile time.
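
    Filling in the elided bodies, a minimal self-contained version of the CRTP example (the sensor readings are made up) looks like this:

    ```cpp
    template <typename T>
    struct Sensor {
      double measure_value() {
        // static dispatch: resolved at compile time, no vtable involved
        return static_cast<T&>(*this).measure_value();
      }
    };
    
    struct AccelerationSensor : Sensor<AccelerationSensor> {
      double measure_value() { return 9.81; }  // made-up reading
    };
    
    struct PositionSensor : Sensor<PositionSensor> {
      double measure_value() { return 1.0; }   // made-up reading
    };
    
    // generic code written against the base template, as in the GUI example
    template <typename Derived>
    double read(Sensor<Derived>& sensor) {
      return sensor.measure_value();
    }
    ```

    Each call to read() is resolved at compile time to the concrete sensor’s measure_value(), with no virtual function call involved.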

    As a final note: Remember that inheritance is not the only way to achieve polymorphism and consider other choices as well e.g. using std::variant.

    References

    • Hands-On Design Patterns with C++ by Fedor G. Pikus (ISBN 978-1-78883-256-4)
    • Fluent C++
  • C++ Design Patterns: Singleton - the Classic

    C++ Design Patterns: Singleton - the Classic

    The singleton is one of the simplest object-oriented C++ patterns. Probably due to its simplicity, it is also an often misused one. It is easy to implement your own, and therefore one might tend to use it a bit too often as a design choice. When should you use a singleton then? The answer is quite obvious: when you need a unique global object. However, remember that global variables are usually frowned upon, and you shouldn’t treat a singleton any differently. Avoid global variables as much as you can; in some cases, though, they are valid design decisions.

    SingletonMeme

    Global variables are considered bad practice, because they make it very hard to reason about code locally. The state of a global object might be changed anywhere in the program and therefore you usually cannot make any assumption about its state when looking at it in a local scope. However the singleton can be used to address two categories of design problems.
    The first one is to represent physical objects or resources that are unique in the context of the program. E.g. imagine you are designing software that will run on a single car. So a Car object might be a singleton, since the software will never have to deal with a fleet of cars (of course you could also envision a software that controls several cars, where you would not make Car a singleton).
    The second category is objects that are global by design (without representing any physical object). E.g. you might decide to implement a resource manager as a singleton. There might be several instances of the resource itself, but the manager, which keeps track of the - limited - number of instances is a unique global object. Or as another example, loggers are often implemented as singletons since you want your whole program to log to the same instance.

    So, in short: do not ask whether you should use a singleton, but whether your design should enforce that a certain object is global and unique.

    So if after some consideration you still decided to go ahead with a singleton, let’s have a look at how to implement one.

    For the sake of simplicity we will consider the associated data of the singleton to be an int.

    Quick and Dirty

    In the header:

    struct Singleton {
      int& get() { return value_; }
    private:
      int value_ = 0;
    };
    
    extern Singleton global_instance;
    

    In the .cpp file:

    Singleton global_instance;
    

    But that really is just a global object. Nothing prevents the user from creating another Singleton instance. So let us change this slightly into the static singleton.

    The Static Singleton

    struct Singleton {
      int& get() { return value_; }
    private:
      inline static int value_ = 0;
    };
    

    (Note that we used the nice C++17 feature of initializing an inline static data member.) Now the user might still create several instances of the singleton, but they are really nothing else than handles to the same static data member.

    E.g. user code

    Singleton one;
    ++one.get();
    ...
    Singleton another;
    int value = another.get();
    

    Here one and another return references to the same data instance.

    Of course you can take that a step further and make the get function static as well, to make the singleton nature of your class clearer (since not all of your singleton classes will have Singleton in the name).

    struct Singleton {
      static int& get() { return value_; }
    private:
      Singleton() = delete;
      inline static int value_ = 0;
    };
    

    Which will make the usage look like this:

    ++Singleton::get();
    ...
    int value = Singleton::get();
    

    This is hopefully a bit more readable and clear.

    Now all is well and good. Everything is static, so the singleton is initialized before the program (i.e. main()) starts executing and is destroyed after it ends. But what if code in another static object uses our singleton? The order of initialization of static objects across translation units is unspecified (implementation dependent). The standard only guarantees that static objects defined in the same translation unit are initialized in the order of their definition. As soon as they are spread out over several files, we have a problem. E.g. imagine a singleton logger that makes use of a singleton memory manager. How can we make sure the memory manager is initialized when the logger uses it during static initialization?

    The Meyers’ Singleton

    Named after Scott Meyers, this implementation of a singleton defers its initialization to its first use, which solves the above-mentioned problem of unspecified static initialization order.

    struct Singleton {
      static Singleton& instance() {
        static Singleton inst;
        return inst;
      }
      int& get() { return value_; }
    private:
      Singleton() = default;
      ~Singleton() = default;
      Singleton(const Singleton&) = delete;
      Singleton& operator=(const Singleton&) = delete;
      int value_ = 0;
    };
    

    The private constructor prevents user code from constructing the object directly. Instead, initialization takes place only on the first call of the static instance() member function. Since this is the only way to access the singleton, it is guaranteed to be initialized on first use.

    The usage would look like this:

    ++Singleton::instance().get();
    ...
    int value = Singleton::instance().get();
    

    Looking at the instance() function you might have noticed that we return a reference to a local variable (which is usually a really bad idea). However, it is a reference to a static local variable: only one instance of it exists in the entire program, so returning a reference to it is perfectly fine. And unlike a file-scope static object, a static local variable is initialized the first time control passes through its declaration.

    However, there is one downside: a small performance overhead. Every time the instance() function is called, there is an implicit check whether the static variable has already been initialized. So if you need to access the singleton repeatedly, it is best to store a reference to the returned instance. E.g.:

    Singleton& inst = Singleton::instance();
    for (auto i = 0; i < N; ++i) {
      do_something(inst.get());
    }
    

    The Pimpl Singleton

    Sometimes a clear separation between interface and implementation is desired. The pointer-to-implementation (pimpl) idiom exposes only the interface in the header file; the implementation is hidden in a class defined in the .cpp file. The actual singleton class then only holds a pointer to that implementation class (we will actually store a reference instead of a pointer, but let’s call it pimpl anyway).

    So a Pimpl Singleton looks like this. In the header:

    struct SingletonImpl; // Forward declare
    struct Singleton {
      Singleton();
      int& get();
    private:
      static SingletonImpl& impl();
      SingletonImpl& impl_;
    };
    

    In the .cpp file:

    // User code will not notice changes here as long as the interface stays the same
    struct SingletonImpl {
      int value_ = 0;
    };
    
    Singleton::Singleton() : impl_(impl()) {}
    
    int& Singleton::get() { return impl().value_; }
    
    SingletonImpl& Singleton::impl() {
      static SingletonImpl instance;
      return instance;
    }
    

    Each singleton instance also saves a reference to the implementation instance, to avoid the performance overhead mentioned in the previous section (there remains the small overhead of one extra indirection).

    How about thread safety?

    Until now we have ignored the thread safety of the singletons. The singleton instances presented here are thread safe (in particular, since C++11 the initialization of a function-local static, as in the Meyers’ singleton, is guaranteed to happen exactly once, even under concurrent calls), but the associated data members are no different from any other shared variable. When used in a multi-threaded context, the programmer is responsible for accessing them in a thread-safe manner, e.g. by protecting them with a mutex.

    For a more in-depth view of the subject I highly recommend the book referenced below. I hope you now know how to implement your singleton, but remember to think twice before reaching for a global variable in disguise.

    References

    • Hands-On Design Patterns with C++ by Fedor G. Pikus (ISBN 978-1-78883-256-4)
  • Yocto: Switch to systemd

    Yocto: Switch to systemd

    Yocto’s reference distribution, poky, ships with SysVinit as its initialization manager. However, many major Linux distributions use systemd as their system and service manager. In this post we will look at how to easily switch your Yocto distro to systemd.

    systemd

    Systemd has been a quite controversial replacement for SysVinit, and I am not going to discuss their pros and cons here; that has been done enough elsewhere. However, I am used to systemd from working with Debian, Ubuntu and Arch Linux, so I wanted my distribution to use systemd as well. It turns out that this is actually quite easy. (Just be aware that systemd is bigger in size than SysVinit, although it can be trimmed down for embedded projects.)

    In your distribution config file conf/distro/<distroname>.conf add the following lines. E.g. my meta-foundation/conf/distro/foundation.conf:

    DISTRO_FEATURES_append = " systemd"
    DISTRO_FEATURES_BACKFILL_CONSIDERED = "sysvinit"
    VIRTUAL-RUNTIME_init_manager = "systemd"
    VIRTUAL-RUNTIME_initscripts = ""
    

    And that’s it. We are done. (Note: the leading space in DISTRO_FEATURES_append = " systemd" is required!)

    With DISTRO_FEATURES_append = " systemd" and VIRTUAL-RUNTIME_init_manager = "systemd" we added systemd and told bitbake to use it as the initialization manager.

    With DISTRO_FEATURES_BACKFILL_CONSIDERED = "sysvinit" and VIRTUAL-RUNTIME_initscripts = "" we completely removed all SysVinit dependencies from our image. If you do not specify these, you can still use SysVinit for your rescue/minimal image.

    DISTRO_FEATURES_BACKFILL_CONSIDERED lists features which should not be used for feature backfilling.

  • Back to basics: How to organize your C/C++ project

    Back to basics: How to organize your C/C++ project

    Back to basics. When I started C++ programming, I had a hard time figuring out how to organize my projects. Most textbooks and lectures focus on teaching either programming principles or language features. So in this post I would like to share what is, in my opinion, the best project structure.

    C++ logo

    The answer you usually get when asking how to structure a project is to look at a well-established project or library and do something similar. However, when looking at projects like Boost, Abseil or JSON for Modern C++, it can be difficult to figure out the essentials. Some of these projects are huge and take time to get familiar with as a beginner; others are header-only libraries, while I wanted to build a shared library. It took me a couple of projects until I got the hang of it.

    So let’s save you this time and look at a simple example that still should cover the essentials of every project.

    I am going to use CMake as my build tool to give a concrete example. But the organization should be independent of the tool or IDE you use.

    myproject
    ├── CMakeLists.txt
    ├── extern (any git submodules or third-party sources)
    ├── include
    │   └── mylib
    │       └── public-header.hpp
    ├── src
    │   └── mylib
    │       ├── implementation.cpp
    │       └── private-header.hpp
    └── test
        └── test-myclass.cpp
    

    When starting a project, it is already important to define which parts of the code will be part of the library’s public interface, and to place only those parts in the include directory (if you are building just an application, you basically do not need this directory). In the src folder you place your private headers and the compilation units (i.e. the .cpp files) of your project.

    I also usually keep the unit and integration tests of the library in a separate test folder.

    I consider it good practice to mirror the namespaces of my libraries as subdirectories in the src and include folders. The includes in your files then look like this:

    #include "mylib/mysubnamespace/myheader.hpp"
    
    namespace mylib::mysubnamespace {
    
    struct MyObject {
    ...
    

    A corresponding CMakeLists.txt file for a library that links privately to Boost and publicly to Threads, and uses googletest for unit testing, looks like this:

    cmake_minimum_required(VERSION 3.11)
    
    project(mylib VERSION 2020.1
                  LANGUAGES CXX
                  HOMEPAGE_URL "https://github.com/dtrussel/cpp_project_template")
    
    #####################################################################
    # DEPENDENCIES
    #####################################################################
    
    find_package(Threads REQUIRED)
    find_package(Boost REQUIRED)
    
    include(FetchContent)
    
    FetchContent_Declare(
      googletest
      GIT_REPOSITORY https://github.com/google/googletest.git
      GIT_TAG        release-1.10.0)
    
    FetchContent_GetProperties(googletest)
    if(NOT googletest_POPULATED)
      FetchContent_Populate(googletest)
      add_subdirectory(${googletest_SOURCE_DIR} ${googletest_BINARY_DIR}
        EXCLUDE_FROM_ALL)
    endif()
    
    #####################################################################
    # LIBRARY
    #####################################################################
    
    add_library(${PROJECT_NAME}
      src/mylib/implementation.cpp)
    
    add_library(${PROJECT_NAME}::${PROJECT_NAME} ALIAS ${PROJECT_NAME})
    
    set_target_properties(${PROJECT_NAME} PROPERTIES
      VERSION ${PROJECT_VERSION})
    
    target_include_directories(${PROJECT_NAME}
      PUBLIC include
      PRIVATE src)
    
    target_link_libraries(${PROJECT_NAME}
      PUBLIC Threads::Threads
      PRIVATE Boost::boost)
    
    target_compile_options(${PROJECT_NAME}
      PRIVATE -Wall -Wextra -pedantic -Werror)
    
    target_compile_features(${PROJECT_NAME}
      PRIVATE cxx_std_17)
    
    include(GNUInstallDirs)
    
    install(TARGETS ${PROJECT_NAME}
      EXPORT MyLibTargets
      LIBRARY DESTINATION ${CMAKE_INSTALL_LIBDIR}
      ARCHIVE DESTINATION ${CMAKE_INSTALL_LIBDIR}
      RUNTIME DESTINATION ${CMAKE_INSTALL_BINDIR})
    
    install(DIRECTORY include/ DESTINATION ${CMAKE_INSTALL_INCLUDEDIR})
    
    #####################################################################
    # UNIT TESTS
    #####################################################################
    
    add_executable(${PROJECT_NAME}-tests
      test/test-myclass.cpp)
    
    target_link_libraries(${PROJECT_NAME}-tests
      PRIVATE gtest_main Threads::Threads ${PROJECT_NAME}::${PROJECT_NAME})
    
    target_compile_options(${PROJECT_NAME}-tests
      PRIVATE -Wall -Wextra -pedantic -Werror)
    
    target_compile_features(${PROJECT_NAME}-tests
      PRIVATE cxx_std_17)
    
    

    In my next post I will go into more detail on the CMake file. But for now, I hope this was a short and easy introduction to structuring your C/C++ projects.

  • Yocto: Build your own linux distribution

    Yocto: Build your own linux distribution

    In this post we are going to create our own Yocto layer. The Yocto Project is an open-source project that lets you create your own embedded Linux distribution. You write recipes that are bundled into layers (usually called meta-something). Recipes themselves consist of tasks (do_compile, do_install, …) and let you specify dependencies between tasks. These recipes are then baked into an image with Yocto’s build system, called bitbake. The generated outputs of a recipe are called packages (one recipe can provide several packages).

    Yocto

    Now you are probably thinking: why would I want to build yet another Linux distro? I can only say that in my case, we did not want to rely on an external distribution provider for the embedded device we were developing. We wanted full control over the software running on the device, and we did not want to deal with external distribution support or updates.

    While the project’s documentation is excellent, the learning curve is quite steep, and in the beginning it can be hard to find a good starting point. So instead of wasting any time, let us jump right into creating our own distribution. All you need is a Debian/Ubuntu host with at least 50 GB of free disk space.

    But be warned. Yocto builds can take a long time when run for the first time.

    xkcd advanced technology

    Let’s start!

    Open a terminal and switch to the directory where you would like to start the project:

    sudo apt-get install gawk wget git-core diffstat \
      unzip texinfo gcc-multilib build-essential chrpath socat
    mkdir yocto && cd yocto
    git clone git://git.yoctoproject.org/poky -b zeus
    source poky/oe-init-build-env build
    bitbake-layers create-layer ../meta-foundation --priority 10
    bitbake-layers add-layer ../meta-foundation
    cd ..
    

    As a first step we installed the dependencies, created a project directory and cloned poky (Yocto’s reference distribution) into it. We then sourced the Yocto build environment (which also creates a build directory with the provided name if it does not exist yet) and created our own layer with the bitbake-layers command.

    The project’s structure

    So let us have a look at the whole project and how it is organized:

    yocto
    ├── build
    │   ├── bitbake-cookerdaemon.log
    │   ├── cache
    │   ├── conf
    │   └── tmp
    ├── meta-foundation
    │   ├── conf
    │   ├── COPYING.MIT
    │   ├── README
    │   └── recipes-example
    └── poky
        ├── bitbake
        ├── contrib
        ├── documentation
        ├── LICENSE
        ├── LICENSE.GPL-2.0-only
        ├── LICENSE.MIT
        ├── meta
        ├── meta-poky
        ├── meta-selftest
        ├── meta-skeleton
        ├── meta-yocto-bsp
        ├── oe-init-build-env
        ├── README.hardware -> meta-yocto-bsp/README.hardware
        ├── README.OE-Core
        ├── README.poky -> meta-poky/README.poky
        ├── README.qemu
        └── scripts
    
    

    We have the poky layer, which provides the build system bitbake, a script oe-init-build-env to initialize the build environment, and the core layers (all subdirectories starting with meta). On the same level we have our newly created meta-foundation layer and the build directory, where bitbake keeps all build output and caches as well as some local build configuration (e.g. how many threads to use for bitbake and make).

    Now let’s have a closer look at the structure of our created layer meta-foundation:

    .
    ├── conf
    │   └── layer.conf
    ├── COPYING.MIT
    ├── README
    └── recipes-example
        └── example
            └── example_0.1.bb
    

    The conf/layer.conf tells bitbake how our layer is organized and, e.g., which release series it is compatible with:

    # We have a conf and classes directory, add to BBPATH
    BBPATH .= ":${LAYERDIR}"
    
    # We have recipes-* directories, add to BBFILES
    BBFILES += "${LAYERDIR}/recipes-*/*/*.bb \
                ${LAYERDIR}/recipes-*/*/*.bbappend"
    
    BBFILE_COLLECTIONS += "meta-foundation"
    BBFILE_PATTERN_meta-foundation = "^${LAYERDIR}/"
    BBFILE_PRIORITY_meta-foundation = "10"
    
    LAYERDEPENDS_meta-foundation = "core"
    LAYERSERIES_COMPAT_meta-foundation = "warrior zeus"
    

    With the bitbake-layers command we also created an example recipe recipes-example/example/example_0.1.bb:

    SUMMARY = "bitbake-layers recipe"
    DESCRIPTION = "Recipe created by bitbake-layers"
    LICENSE = "MIT"
    
    python do_build() {
        bb.plain("***********************************************");
        bb.plain("*                                             *");
        bb.plain("*  Example recipe created by bitbake-layers   *");
        bb.plain("*                                             *");
        bb.plain("***********************************************");
    }
    
    

    This only prints a message during the build. Let us change it to do something more meaningful. Often you want to add some library or application to your OS. Since CMake is quite popular and I often use it for my projects, we will add a recipe that builds a CMake project.

    Add an own recipe

    First we rename our example recipe to match the library we want to add to our layer:

    cd meta-foundation
    mv recipes-example recipes-support
    mv recipes-support/example recipes-support/dtr
    mv recipes-support/dtr/example_0.1.bb recipes-support/dtr/dtr_git.bb
    

    Overwrite our example recipe with one that builds a CMake-based library:

    cat > recipes-support/dtr/dtr_git.bb <<'__EOF__'
    SUMMARY = "dtr - C++ utility library"
    LICENSE = "MIT"
    LIC_FILES_CHKSUM = "\
      file://${COMMON_LICENSE_DIR}/MIT;md5=0835ade698e0bcf8506ecda2f7b4f302"
    
    SRC_URI = "git://github.com/dtrussel/dtr.git"
    SRCREV = "866c777907e096f9d88d01cf104984906afc6425"
    
    S = "${WORKDIR}/git"
    
    inherit cmake
    
    FILES_${PN}-dev = "${includedir}"
    
    DEPENDS = "boost"
    __EOF__
    

    I am not going into detail on how to write recipes, but basically we told bitbake where to find the sources by setting SRC_URI and SRCREV. With inherit cmake we pulled in the default CMake build tasks. In DEPENDS we declare this recipe’s build-time dependencies on other recipes (here the boost library, which is provided by one of the layers in the poky repo).

    The Distro

    Since we want our own distribution we add a config file for our distro:

    mkdir conf/distro
    cat > conf/distro/foundation.conf <<'__EOF__'
    require conf/distro/poky.conf
    
    DISTRO = "foundation"
    DISTRO_NAME = "Foundation (Linux Distribution)"
    DISTRO_VERSION = "2020.1"
    DISTRO_CODENAME = "asimov"
    SDK_VENDOR = "-foundation"
    SDK_VERSION = "${DISTRO_VERSION}"
    SDK_NAME = "${DISTRO}-${DISTRO_VERSION}-${TUNE_PKGARCH}-${MACHINE}"
    SDKPATH = "/opt/${DISTRO}/${SDK_VERSION}"
    
    DISTRO_VERSION[vardepsexclude] = "DATE"
    SDK_VERSION[vardepsexclude] = "DATE"
    SDK_NAME[vardepsexclude] = "DATE"
    __EOF__
    

    The Image

    A distro can have several images (e.g. base, server, development, production), which contain more or fewer packages. So let’s add an image that is based on core-image-base and add our recipe to it.

    mkdir -p recipes-core/images/
    cat > recipes-core/images/foundation-image-base.bb <<'__EOF__'
    include recipes-core/images/core-image-base.bb
    
    IMAGE_INSTALL_append = " dtr-dev"
    __EOF__
    

    Side note: the package we add is dtr-dev, not dtr, because it is a header-only library; the main package of the CMake-based recipe contains just the test executable in this case.

    So now we are ready and can finally bake it:

    DISTRO=foundation bitbake foundation-image-base
    

    Great, we are building our own Linux distribution! Go grab a coffee: the initial build will take a long time, since it builds everything from source. But don’t worry, subsequent builds are incremental.

    Some tips:

    • Only put your build configuration into build/conf/local.conf (and NOT your distro, image or machine configuration).
    • Never modify another layer (if you want to modify an existing recipe, use a recipes-something/somelibrary_<VERSION>.bbappend file in your layer instead)

    References: