This is a tiny bit of code that was my solution to a recent company code dojo. A bit of background: it’s quite common for bare-metal systems to disallow dynamic allocation of memory, i.e. all memory the application might need must be known at compile time. This has the benefit that the linker will tell you whether the memory used by your application will fit into the available memory of the target. However, it also means that quite a lot of the C++ standard library will not cut it, as it is built around the assumption that dynamic allocations are allowed (and possible, for that matter). Still, we’ll usually need some kind of dynamic object storage for certain problem classes. Very often memory pools provided by an RTOS are used for this purpose. Using pools allows us to degrade service gracefully if a pool is exhausted: other parts of the application might not notice and can still happily use memory from their own pools, while the part that has run out of pool space can implement its own policy for handling the situation. So, how does the title of this blog post fit in?
Priority queues are a very useful data structure, e.g. to schedule or prioritize commands sent to an application (hence the name!). They usually work by using a backing store that will hold our objects. The implementation will insert each new object at the position in the backing store that corresponds to its priority. For more details see the documentation on std::priority_queue. Now, the standard’s implementation is great, it even allows us to use our own allocator. However, the trouble here is the interface provided:
```cpp
void push( const value_type& value );
void pop();
const value_type& top() const;
```
At first glance this interface doesn’t look too unreasonable. We’ve got our push() method to put values into the container, and a top() method that will give us the topmost element by calling front() on the underlying container. Uhm, what does front() do? Well, priority_queue can use basically any container, e.g. a std::vector. What does the reference tell us about the behavior of front()? Well:
> Returns a reference to the first element in the container. Calling front on an empty container is undefined.
Wait? You’re telling me it’s UB if I call front() on an empty vector? Even worse, there’s no way to iterate over the queue? We should be able to do a lot better here with C++17. Add to that that we actually want to be able to control the amount of memory the implementation uses at compile time, and we get another problem: push() does not provide us with a return value or similar to indicate success. It just assumes that the container implementation will handle this problem, and indeed std::vector does:
> If an exception is thrown (which can be due to Allocator::allocate() or element copy/move constructor/assignment), this function has no effect (strong exception guarantee).
Another not-so-nice property: it will attempt to allocate if the new element does not fit, and if that fails it might throw an exception. While the latter is not a no-go, it is a matter of discussion, and a lot of my customers don’t allow exceptions to be used.
So, what do we actually want?
I’m a huge fan of the way Rust does error handling and omits things like null pointers. And since C++ has had an option type (std::optional) for a while, we can easily build ourselves a nice priority queue that:
- Never allocates
- Never throws exceptions
- Is easy to iterate over
- Has a size known at compile time and is therefore bare-metal safe.
So, let’s have a look at the interface I’d like to use:
```cpp
template<class T, uint32_t num_levels, uint32_t num_items>
class PriorityQueue {
public:
    bool enqueue(T&& data, uint32_t prio_level);
    std::optional<T> dequeue();
};
```
This is so simple it’s almost embarrassing. So, at first we’re using template parameters to know the amount of memory we need (num_levels and num_items). num_levels tells us the number of priority levels we’d like. num_items tells us the maximum number of items we want to store per level. This is a bit of a compromise, as I did not want to introduce a recursive template to allow for complete control over each level’s capacity. Adding this into the mix is possible but would increase the complexity here. Okay, now how should our two methods behave?
- enqueue should move an item into the queue using the given priority level. If that succeeds it should return true, otherwise false. The failure conditions we know are: either the priority level is invalid, or there’s no more room for items at this level.
- dequeue should simply return the item in the queue with the highest priority, or a “none” value if the queue is empty.
Okay, to make our implementation a bit simpler, let’s introduce a fixed-size queue that adheres to the same requirements as stated for the priority queue:
```cpp
#include <algorithm>  // std::rotate
#include <array>
#include <cstdint>
#include <optional>
#include <utility>    // std::move

template<class T, uint32_t num_items>
class FixedLenQueue {
public:
    bool enqueue(T&& item) {
        if (filled_slots >= num_items) {
            return false;
        }
        data[filled_slots] = std::move(item);
        filled_slots++;
        return true;
    }

    std::optional<T> dequeue() {
        if (filled_slots == 0) {
            return {};
        }
        auto ret_val = std::move(data[0]);
        std::rotate(data.begin(), data.begin() + 1, data.end());
        filled_slots--;
        return ret_val;
    }

private:
    std::array<T, num_items> data = {};
    uint32_t filled_slots = 0;
};
```
Again, pretty simple. We use a std::array to make sure the object has the correct size at compile time. This class will basically:
- Move an object into its internal storage if there’s room. If that works, enqueue will yield true. Otherwise it will yield false.
- Calling dequeue will return a none-object if the queue is empty. Otherwise it will move the first object in the queue into an optional and return it. Internally it will then shift all other objects by one slot. This is to make accessing the data a bit easier, although it is not exactly efficient.
To build our priority queue we just bolt together num_levels of these queues:
```cpp
template<class T, uint32_t num_levels, uint32_t num_items>
class PriorityQueue {
public:
    bool enqueue(T&& data, uint32_t prio_level) {
        if (prio_level >= num_levels) {
            return false;
        }
        return this->data[prio_level].enqueue(std::move(data));
    }

    std::optional<T> dequeue() {
        for (auto& q : data) {
            if (auto res = q.dequeue()) {
                return res;
            }
        }
        return {};
    }

private:
    std::array<FixedLenQueue<T, num_items>, num_levels> data {};
};
```
Using it…
```cpp
#include <iostream>
#include <string>

int main() {
    PriorityQueue<std::string, 4, 10> q;
    q.enqueue("911", 3);
    q.enqueue("fourtyseven", 0);
    q.enqueue("thirtyone", 2);
    q.enqueue("twentyfour", 0);
    std::cout << *q.dequeue();
    std::cout << *q.dequeue();
    std::cout << *q.dequeue();
    std::cout << *q.dequeue();
    return 0;
}
```
Easy, right? Well, iterating over it is actually even easier, due to the behavior of std::optional:
```cpp
#include <iostream>
#include <string>

int main() {
    PriorityQueue<std::string, 4, 10> q;
    q.enqueue("911", 3);
    q.enqueue("fourtyseven", 0);
    q.enqueue("thirtyone", 2);
    q.enqueue("twentyfour", 0);
    while (auto val = q.dequeue()) {
        std::cout << *val;
    }
    return 0;
}
```
The while loop is nearly as good as a range-based for loop.
Image Credits: Christopher Burns @ Unsplash