One of the most confusing things about Bluetooth Low Energy is how data is moved around. Depending on your application, your device state may be fairly complex. That means having an individual endpoint for every piece of data is suicide by Bluetooth.

So, what’s the solution?


Protocol Buffers.

A protocol buffer is a programmatic way to encode and decode optimized, structured data. Protocol buffers can be shared and manipulated across almost any platform. Nordic actually uses a variant for their DFU service.

There were a lot of buzzwords in those first few sentences. Hopefully, by the end of this post you’ll understand exactly what I’m talking about.

So, how do you use this magical software?

Read on!

Install

The first part of the process is to make sure you’ve installed all the correct utilities. The programming language you’re targeting determines what you install and use. In this case, I’ll outline the utilities that I used for the Dondi Lib.

  1. protoc is the most important utility you’ll have to install here.

    1. For Mac, download the appropriate release here.
    2. Unzip the folder
    3. Run ./autogen.sh && ./configure && make in the folder
    4. If you get the error autoreconf: failed to run aclocal: No such file or directory, install autoconf and automake using Homebrew:

    brew install autoconf && brew install automake

    Then, re-run step 3.

    5. Then run:

    make check
    sudo make install
    which protoc
    

Consider protoc the compiler for Protocol Buffers. It can either output a raw, compiled descriptor file (.pb) or generate language libraries directly. (Go output, for example, comes from the --go_out flag, which relies on the protoc-gen-go plugin.)

That raw data can also be used to generate static libraries for other languages. That usually requires an extra utility (or utilities). I describe the two that the Dondi Lib project used below.

  1. nanopb is a Python script used to create C libraries that encode and decode your structured data.

It can be installed by navigating to the nanopb git repo and downloading the appropriate files. The most important ones include:

  1. pb_encode.c, pb_decode.c and pb_common.c
  2. /generator/nanopb_generator.py
  3. And the /generator/nanopb/ directory co-located with nanopb_generator.py

nanopb is meant for deployment on embedded platforms. The generated libraries are small, but they aren’t trivial in size either. For the Dondi Lib project, I ended up removing the encoding functionality because it saved a significant amount of flash space.

  2. pbjs generates a static JavaScript library from your .proto file. This is powerful because you can then use it in any JavaScript application. The best way to install pbjs is by running:
npm install -g protobufjs

Setting up the protocol buffer

I organized all the source files for the protocol buffer setup within the Git repo for the Dondi Lib. I then created config.proto and placed it in a folder called proto.

Then, I created a structure definition (see below) and placed it inside config.proto:

syntax = "proto2";

package sign;

// Ratio of Red, Green and Blue
message LedColorConfig {
  required uint32 red = 1 [default = 255];
  required uint32 green = 2 [default = 0];
  required uint32 blue = 3 [default = 0];
}

// Determines the LED mode. This was for future improvements
message LedConfig {
  enum Mode {
    SINGLE_COLOR = 0;
    PATTERN = 1;
  }
  required Mode mode = 1 [default = SINGLE_COLOR];
  required LedColorConfig color_config = 2;
  required uint32 pattern = 3 [default = 0];
}

// Keeps the LED state but it's also used to control
// the remaining functionality of the Dondi Lib
message Config {
  required bool motion_enabled = 1 [default = true];
  required uint32 motion_timeout = 2 [default = 30];
  required uint32 device_brightness = 3 [default = 100];
  required LedConfig led_config = 4;
  optional bytes _unused = 5;
}

It may look foreign at first but once you take a deeper look, it’s not that much different than a standard C struct or hash table.

In my case, I’ve nested a LedColorConfig inside a LedConfig inside a Config. Each one of these represents a piece of structured data. As you can imagine, you can nest as much as your heart desires. My main goal was to structure the device state logically, but also in a single, efficient package.

The above example includes typical data structures and features that you’d find in C code. For instance, it defines an enumeration and provides default values for each of the entries.
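To make that comparison concrete, here’s a rough, hand-written C analog of the same nesting. This is purely illustrative; the structs nanopb actually generates have their own names and field sizes, which I cover below.

#include <stdbool.h>
#include <stdint.h>

// Hand-written sketch of the same nesting as config.proto (illustration only)
typedef enum {
  LED_MODE_SINGLE_COLOR = 0,
  LED_MODE_PATTERN = 1,
} led_mode_t;

typedef struct {
  uint32_t red;
  uint32_t green;
  uint32_t blue;
} led_color_config_t;

typedef struct {
  led_mode_t mode;
  led_color_config_t color_config;
  uint32_t pattern;
} led_config_t;

typedef struct {
  bool motion_enabled;
  uint32_t motion_timeout;
  uint32_t device_brightness;
  led_config_t led_config;
} config_t;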

I also added a _unused variable. It’s useful for adding padding to the structure so you don’t end up transferring an odd number of bytes. An odd payload size can turn into a bit of a nightmare because the MTU is usually an even size.

Speaking of bytes, the Protocol Buffers language by default doesn’t define anything smaller than an unsigned 32-bit number. For this application, the largest value I needed was a 16-bit number. So, for the sake of efficiency, I also created a config.options file, which nanopb uses to constrain the generated types further:

sign.LedColorConfig.red    int_size:IS_8
sign.LedColorConfig.green  int_size:IS_8
sign.LedColorConfig.blue   int_size:IS_8
sign.LedConfig.pattern     int_size:IS_8
sign.Config._unused        max_size:1 fixed_length:true
sign.Config.motion_timeout int_size:IS_16
sign.Config.device_brightness int_size:IS_8

I also used this file to define the max size of the _unused variable. That way I could tweak it depending on all the other data points. Too many bytes? Subtract one from _unused. Not enough? Add one.

This was a bit of trial and error, depending on how many bytes the compiler puts in the struct generated by nanopb. My best suggestion is to print out the result of sizeof once the program compiles. Then you know exactly how many bytes your generated C struct is.
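For example, a quick log like the one below does the trick. It assumes the nanopb-generated struct is named sign_Config (nanopb’s package_Message naming, matching the sign_Config_fields used later in this post).

#include <stdio.h>
#include "config.pb.h"  // generated by nanopb_generator.py; defines sign_Config

// Print the size of the nanopb-generated struct so you know exactly
// how many bytes it occupies.
static void log_config_size(void)
{
  printf("sizeof(sign_Config) = %u bytes\n", (unsigned)sizeof(sign_Config));
}

On the nRF52 side you’d use NRF_LOG_INFO instead of printf, but the idea is the same.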

Generating the appropriate static libraries

In order to make this as easy as possible, I put all the following code into a Makefile. That way, when you make a change to the protocol buffer, every library for every language gets regenerated.

Here’s the code:

%.pb: %.proto
	protoc -I$(PROTO_DIR) --go_out=$(PROTO_DIR) $<
	protoc -I$(PROTO_DIR) -o$*.pb $<
	@$(BIN_DIR)/protogen/nanopb_generator.py -I$(PROTO_DIR) $@
	pbjs -t static-module -p$(PROTO_DIR) $*.proto > $@.js
	@mv $*.pb.c $(SOURCE_DIR)
	@mv $*.pb.h $(INCLUDE_DIR)

Let’s break it down:

%.pb: %.proto is a typical Makefile pattern rule that matches every target ending in .pb with a prerequisite ending in .proto. Inside the recipe, $< refers to the input (prerequisite) file name, $@ refers to the output (target) file name and $* is the matched stem. So if you had config.proto, $< would be equal to config.proto, $@ would be equal to config.pb and $* would be equal to config. It’s a nifty Makefile structure that is used often.

protoc -I$(PROTO_DIR) --go_out=$(PROTO_DIR) $< generates a .go library from the .proto file in your proto directory.

protoc -I$(PROTO_DIR) -o$*.pb $< generates a .pb file which is the “compiled” version of the .proto file.

The .pb file gets used in the next step:

@$(BIN_DIR)/protogen/nanopb_generator.py -I$(PROTO_DIR) $@

This is where the C code is generated from the compiled .pb file. I recommend placing nanopb_generator.py inside your repository as well. That way, you can maintain a consistent toolchain, especially if there are multiple engineers working on the project.

Side note: there are several different ‘versions’ of Protocol Buffers. Make sure that all the tools you choose to use are operating on the right one!

Finally, pbjs -t static-module -p$(PROTO_DIR) $*.proto > $@.js generates a standalone JavaScript library. For the Dondi Lib, the React Native app that I wrote used the JavaScript library directly.

The last two commands are there for housekeeping and automation only. (They move the newly generated C source and header files to where the compiler expects them to be.)

Encoding and Decoding

Encoding using Javascript

Here’s a typical flow that you can follow when using a statically generated JavaScript library. First, initialize the library.

// Import the protobuf.js runtime and the generated config module
var protobuf = require("protobufjs/minimal");
import { sign } from "./config.pb.js";

Then, compile the payload. i.e. turn human readable JSON into nicely packed binary. See below.

// Verify that the structure is valid
var err = sign.Config.verify(this.state.config);
if (err) {
  console.log("proto err: " + err);
}

// Create a message based on the JSON data
var message = sign.Config.create(this.state.config);

// Finally, encode it into packed binary data that can be sent
this.payload = sign.Config.encode(message).finish();

The data that was encoded looked like the JSON below:

{
  "motionEnabled": true,
  "motionTimeout": 15,
  "deviceBrightness": 100,
  "ledConfig": {
    "mode": 0,
    "pattern": 0,
    "colorConfig": {
      "red": 255,
      "green": 0,
      "blue": 0
    }
  }
}

Notice the similarities compared with the .proto file?

In the React Native app, I also used a persisted state so the app knew what state the light was in. If there wasn’t any persisted state, I imported the default JSON file like this:

// Default device config
var defaultConfig = require("./config.json");

This imports the JSON text and converts it into a JavaScript object. Clean. Simple.

Encoding using Golang

Here’s another example of encoding but using Golang.

// Creates a default configuration
config := &sign.Config{}

// A function to pull the JSON data from file
err = parseConfigFile(config)
if err != nil {
	log.Fatalf("Unable to parse %s: %s", *config_file, err)
}

// Marshals (i.e. encodes) that object into something useful
out, err := proto.Marshal(config)
if err != nil {
	log.Fatalf("Unable to marshal config: %s", err)
}

The output of this operation is what we’re most interested in. In this case out has the encoded binary data. As you can imagine, we can then send that data through the web, or directly to a device via Bluetooth Low Energy. The world of Protocol Buffers is your oyster.

Decoding in C

Once the data is received, it’s decoded on the embedded end. nanopb can be confusing, but luckily I have some code here that will work for you:

// Create a temporary input stream
pb_istream_t istream;

// Generate an input stream from the raw data that was received
istream = pb_istream_from_buffer((pb_byte_t *)buffer, buffer_len);

// Decode the data based on the `fields` -> sign_Config_fields
if (!pb_decode(&istream, sign_Config_fields, p_config)) {
  NRF_LOG_ERROR("Unable to decode: %s", PB_GET_ERROR(&istream));
  return NRF_ERROR_INVALID_STATE;
}

First, you create an input stream based on the raw data and the size of the data.

Then, you use the pb_decode function. The first argument points to the input stream, the second to the definition of the Protocol Buffer we’ve been working with, and the last argument is a pointer to a struct that matches the data type represented by sign_Config_fields.

Depending on how you set up your Protocol Buffer, sign_Config_fields may be a different name for you. Look for the section called /* Struct field encoding specification for nanopb */ in your config.pb.h file.
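To make that last argument concrete, here’s a minimal sketch of how the decode call might be wrapped. It assumes the generated type is sign_Config and that buffer and buffer_len hold the bytes received over BLE; the decode_config helper name is mine for illustration, not part of the Dondi Lib.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#include "pb_decode.h"
#include "config.pb.h"

// Illustrative wrapper: decode a received buffer into a sign_Config struct.
// sign_Config, sign_Config_fields and sign_Config_init_default all come from
// the nanopb-generated config.pb.h.
static bool decode_config(const uint8_t *buffer, size_t buffer_len, sign_Config *p_config)
{
  // Start from the defaults declared in the .proto file
  sign_Config config = sign_Config_init_default;

  // Wrap the raw bytes in an input stream and decode into the struct
  pb_istream_t istream = pb_istream_from_buffer((const pb_byte_t *)buffer, buffer_len);
  if (!pb_decode(&istream, sign_Config_fields, &config)) {
    return false;
  }

  *p_config = config;
  return true;
}

Once pb_decode succeeds, the fields are plain C struct members, e.g. p_config->led_config.color_config.red.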

Alternatives

If you’re not down for the complexity, there are other alternatives. I’ve used MessagePack with some success on previous products. It’s straightforward and has tons of support for a majority of programming languages. Check it out if Protocol Buffers don’t float your boat.

Conclusion

In this post, I went through how to compile static libraries using Protocol Buffers. It’s a sane way to develop platform- and language-agnostic communication structures for your projects.

In Part two, I’ll show you how to set up a very simple Bluetooth Service and Characteristic that will be used to transfer our freshly encoded data to-and-fro.

I hope you’ve found this post useful. Let me know what you think about this post in the comments below. 📣