Using Zephyr and Rust together (II)

In the last part of this multi-part post we looked at the basic setup needed to get Rust and Zephyr to play together. If you haven’t read that yet, do so now, as this post assumes knowledge of it.

We left off a bit disappointed at the code that bindgen spat out when we fed Zephyr’s kernel headers into it. Today we want to figure out a way to make this at least a bit more usable by means of some light code generation.

Code generation

To get around the limitations created by Zephyr’s use of macros and static inline functions, while at the same time avoiding having to use C in a Rust project, a sensible approach is to generate the required code. We’ll write a bit of code for the build script that spits out thread definitions for us in C and then use those in Rust.

ZephyrIF

To get started add a new file called “zephyrif.rs” in the crate’s root next to the build script. Insert the following:

use std::fs;
use std::io::{Cursor, Write};
use std::path::PathBuf;

pub struct ZephyrIf
{
    c_code_writer: Cursor<Vec<u8>>,
    rust_code_writer: Cursor<Vec<u8>>,
    impl_module: String
}

impl ZephyrIf
{
    pub fn new(impl_module: String) -> Self
    {
        let mut cursor = Cursor::new(Vec::new());
        let _ = writeln!(cursor, "#include <zephyr/kernel.h>");
        let mut rust_cursor = Cursor::new(Vec::new());
        let _ = writeln!(rust_cursor, "use crate::{};", impl_module);
        let _ = writeln!(rust_cursor, "use crate::bindings::k_mutex;");

        ZephyrIf
        {
            c_code_writer: cursor,
            rust_code_writer: rust_cursor,
            impl_module
        }
    }

    /// This sadly needs to be stringly typed, as we don't have access to the actual crate being
    /// compiled right now. Ideally we'd be able to import the actual crate to get the correct entry names.
    pub fn add_thread(&mut self, thread_name: &str, stack_size: usize, entry_name: &str, priority: u8)
    {
        // render C code part: an extern declaration for the entry point plus the thread definition
        let _ = writeln!(self.c_code_writer, "extern void {}();", entry_name);
        let _ = writeln!(self.c_code_writer, "K_THREAD_DEFINE({}, {}, {}, NULL, NULL, NULL, {}, 0, 0);", thread_name, stack_size, entry_name, priority);

        // render Rust code part: an unmangled trampoline that forwards to the actual implementation
        let _ = writeln!(self.rust_code_writer, "#[no_mangle]");
        let _ = writeln!(self.rust_code_writer, "extern \"C\" fn {}() {{ crate::{}::{}_impl(); }}", entry_name, self.impl_module, entry_name);
        let _ = writeln!(self.rust_code_writer);
    }

    pub fn add_mutex(&mut self, mutex_name: &str)
    {
        // a statically defined mutex on the C side, exposed as an extern static on the Rust side
        let _ = writeln!(self.c_code_writer, "K_MUTEX_DEFINE({});", mutex_name);
        let _ = writeln!(self.rust_code_writer, "extern \"C\" {{ pub static mut {}: k_mutex; }}", mutex_name);
        let _ = writeln!(self.rust_code_writer);
    }

    pub fn render_c_to_file(&self, dest: PathBuf)
    {
        fs::write(dest, self.c_code_writer.get_ref()).expect("Unable to write file");
    }

    pub fn render_rust_to_file(&self, dest: PathBuf)
    {
        fs::write(dest, self.rust_code_writer.get_ref()).expect("Unable to write file");
    }
}

This creates Rust and C code in tandem for each call to “add_thread” and “add_mutex”. As an example, calling “add_thread” will produce the following:

C-Code

#include <zephyr/kernel.h>

extern void some_thread_func();

K_THREAD_DEFINE(some_thread, 1024, some_thread_func, NULL, NULL, NULL, 7, 0, 0);
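
The arguments to K_THREAD_DEFINE are, in order: the thread name, the stack size, the entry function, three optional arguments that are passed to the entry function, the thread priority, thread options and a startup delay.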

Rust Code:

use crate::threads;
use crate::bindings::k_mutex;

#[no_mangle]
extern "C" fn some_thread_func() { crate::threads::some_thread_func_impl(); }

With this output, all we have to do is implement threads::some_thread_func_impl(). To use the generator, add the following to the build script:

// Line 2
mod zephyrif;

/* ... */

    bindings
        .write_to_file(PathBuf::from("./src/bindings.rs"))
        .expect("Couldn't write bindings!");

    render_kernel_objects();
}

fn render_kernel_objects() {
    let mut zeph = zephyrif::ZephyrIf::new("threads".to_string());
    zeph.add_thread("some_thread", 1024, "some_thread_func", 7);
    zeph.add_thread("another_thread", 1024, "another_thread_func", 7);
    zeph.render_c_to_file(PathBuf::from("../blinky/src/zobjects.c"));
    zeph.render_rust_to_file(PathBuf::from("./src/zephyrbridge.rs"));
}
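
Although we only define threads here, “add_mutex” hooks in the same way. A quick sketch – the mutex name “some_mutex” is made up for illustration:

zeph.add_mutex("some_mutex");

On the C side this emits K_MUTEX_DEFINE(some_mutex); and on the Rust side the matching extern "C" { pub static mut some_mutex: k_mutex; } declaration, which is why the generated bridge file always imports k_mutex from the bindings.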

When building the crate, the file “zobjects.c” should show up in the C application’s src directory. Add it to the app’s CMakeLists.txt:

target_sources(app PRIVATE src/main.c src/zobjects.c)
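
Note that “zobjects.c” is overwritten every time the Rust crate’s build script runs, so any manual edits to it will be lost.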

Further, the file “zephyrbridge.rs” should show up in the Rust crate’s src folder. To get it compiled, add it to lib.rs. Finally, we need to implement the actual thread functions: create a file called “threads.rs” in the crate’s source folder and add it to lib.rs as well. In “threads.rs” add the following:

use crate::zephyrbridge; // not needed yet, but this is where the generated kernel objects live

pub fn another_thread_func_impl()
{
    // nothing useful to do yet, so just spin
    loop
    {
    }
}

pub fn some_thread_func_impl()
{
    // nothing useful to do yet, so just spin
    loop
    {
    }
}
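
For completeness, the crate’s lib.rs now has to declare all three modules. A minimal sketch – whatever crate attributes (#![no_std], panic handler, …) your setup from the first post requires are omitted here:

mod bindings;     // generated by bindgen (part I)
mod threads;      // our thread implementations
mod zephyrbridge; // generated by render_kernel_objects()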

Compiling the whole shebang using `west build` should yield an ELF file that contains a working kernel which runs Rust functions as threads.

Debugging with Ozone

Since the DK I use has an integrated J-Link, I can use Segger’s Ozone debugger to flash and test the ELF. While a bit bare-bones, Ozone will happily process any ELF file you throw at it, and it is compatible with a huge range of processors. It also has a – somewhat minimalistic – Zephyr plugin. To get started, create a new project with the MCU you built for, load the ELF file and connect the device. If you now put breakpoints into threads.rs you’ll see them getting hit on the device. At this point the scheduler won’t really do much: both of our threads run at the same priority and never block, so nothing ever triggers a rescheduling (see https://docs.zephyrproject.org/latest/kernel/services/scheduling/index.html for more information on Zephyr’s scheduling behavior), and you’ll probably see one thread getting all the CPU time. We’ll get to that at a later point. First we’ll want the RTOS plugin to work.

Getting the Zephyr plugin to work

To get the plugin to work, create a new Ozone project, open it in the text view and add the line `Project.SetOSPlugin("ZephyrPlugin");` before the ELF is loaded. This should trigger the Zephyr view to be displayed. The view may complain that “THREAD_MONITOR” needs to be activated; this is a kernel feature that can be enabled in the kernel configuration menu. Run `west build -t menuconfig`, navigate to “General Kernel options”/”Kernel Debugging and Metrics” and activate “Thread monitoring” and “Thread name”. After recompiling and flashing the resulting binary, the Zephyr plugin in Ozone should display the threads and their stack usage.
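
If you’d rather skip menuconfig, the same two options can be set persistently in the application’s prj.conf:

CONFIG_THREAD_MONITOR=y
CONFIG_THREAD_NAME=y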

Wrap-Up

By now we’re able to use Rust in a Zephyr application and to start threads that run entirely in the “Rust” world without ever having to dive into C. We’ve not yet tackled Zephyr’s kernel API – we’ll have a look at that in the next post.
