Pretty Graphs from Twilio Narrowband

I just got my hands on the Twilio Narrowband dev kit… and it is a sweet piece of kit. It is centered around a small Arduino-compatible board that has a lot of Seeed connectors. Most importantly, it also has a built-in cellular modem that is designed to work with T-Mobile’s nationwide narrowband IoT network. This network uses the guard bands on T-Mobile’s channels. It is not high bandwidth; it is designed for moving text-message-sized bits of data. It does have good coverage though… and most importantly, it is cheap. Through Twilio, you can get a plan for ~$10 per year per device. At this price point you can start coming up with interesting ways to use this!

With the Twilio devkit, you communicate with the board using “commands”, which are just short messages. The devkit includes an SDK, called Breakout, that makes it super simple to send and receive data over the network. While the examples are a quick way to start pulling data from the sensors, they just send the data to the Twilio console. This is boring.

I wanted to drop the data into a nice graph, and it ended up being pretty easy to do. The Twilio console has a webhook that can do an HTTP POST every time a new command comes in. I built a little function to take the data in and post it to a graphing app.

I wanted to graph temperature and humidity data, so I started from the Breakout example. I modified it a little so that the output was a simple, comma-separated value pair:

I then went over to Glitch and started working on a small function to take in the sensor data from the HTTP POST and send it to Adafruit’s IO platform.
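The function itself is simple: pull the comma-separated pair out of the incoming command and forward one value to each feed. Here is a rough Python sketch of that logic (the function name is my own illustration, not the actual Glitch code; the feed names match the ones created later on Adafruit IO, including the misspelled one):

```python
def parse_command(body):
    """Split a 'temperature,humidity' command into per-feed values."""
    temp, humidity = body.strip().split(",")
    return {"temp": float(temp), "hummidity": float(humidity)}

# Each entry would then be POSTed to Adafruit IO's REST API, e.g.
# https://io.adafruit.com/api/v2/{username}/feeds/{feed_key}/data
print(parse_command("72.50,43.20"))
```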

Here is my Project on Glitch:

Take the URL for your Glitch project, add the path ‘/sms’, and set it as the Command Callback on the Twilio console for your SIM:

After that, it is time to create some Feeds and a Dashboard on the Adafruit IO platform. After you create your account, copy your Username and Key into the .env file on Glitch. This is how the function logs into IO to post data. The Glitch project readme describes how this should be formatted in the .env file. Create two feeds called hummidity and temp. Also – I just realized I spelled humidity wrong. On the plus side, I did it consistently on both Glitch and IO.
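For reference, the .env file ends up looking something like this (the variable names here are my guess; the project readme has the exact format):

```
# .env on the Glitch project
ADAFRUIT_IO_USERNAME=your_io_username
ADAFRUIT_IO_KEY=your_io_key
```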

Finally, create a Dashboard in IO to show the data from each Feed.

There are still a few things to work on:

  • It looks like there is an issue where the Dev Kit Board will only work while it is connected to the Serial Monitor in Arduino. If you just remove the Serial prints, you should be good to go though.
  • I want to try and make the Dev Kit Board sleep in between transmissions, or at least turn off the modem, to try and save power.

Getting Docker up on Linode, with Ubuntu 16.04

Unfortunately, getting Docker installed on Linode with Ubuntu 16.04 is not a straightforward process. The image that Linode provides for Ubuntu 16.04 has a custom Linux kernel installed. Docker requires AUFS, which is part of the linux-image-extra package. This package is tied to the kernel version, and since the Linode kernel is custom, there is no matching package to load. In order to make this work, you have to revert your install back to the distribution kernel.

Here are the steps to get it working:

  • Deploy a Ubuntu 16.04 image on Linode
  • Follow these steps:
  • Then install the Docker-Engine:
  • Finally, lock-down and set things up:

Compiling GnuRadio on a Raspberry Pi

The best bet for most people looking to install GnuRadio on a Raspberry Pi is to simply do:

$ sudo apt-get install gnuradio gnuradio-dev

…and you are done. Unfortunately, it is not so simple to actually compile the latest version on a Raspberry Pi. While the Pi has gotten faster with the 2 & 3, it is still pretty slow and does not have that much memory, so compiling takes a long time. If you are good with cross compilers, you can compile on your desktop and ship the result to your Pi. You are on your own to figure out how to do this… please let me know if you do!

It is not too bad to compile GnuRadio using PyBombs on the Pi. There are a few gotchas though.

The first step is to get PyBombs installed. Follow the instructions here. Basically it is:

sudo pip install PyBOMBS
pybombs recipes add gr-recipes git+  
pybombs recipes add gr-etcetera git+

Now that you have PyBombs installed, it is time to do some configuration. The first problem you have to work around is that the version of GCC installed from the Raspbian repo is compiled for the Raspberry Pi v1 chip, which has an older ARM processor. This forces some older C flags during compilation and messes up the GnuRadio build once it is time to compile Volk. Double-check this by doing ‘gcc -v’ on the Pi and looking for:

--with-arch=armv6 --with-fpu=vfp --with-float=hard

The workaround is to use the GCC build from the standard Debian repo. I didn’t figure this out on my own; there is a good blog post here. However, before you do this, make sure you have some of the dependencies for GnuRadio installed:

sudo apt-get install libssl-dev libtiff5-dev

To switch to repos with the ARMv7 code, I edited `/etc/apt/sources.list` and it now looks like:

deb jessie main contrib non-free
deb jessie rpi
#deb jessie main contrib non-free rpi

If you end up getting weird dependency errors, you can comment out the new lines and re-enable the old ones.

After making that edit, update the package list and install a newer version of GCC:

sudo apt-get update
sudo apt-get install debian-archive-keyring
sudo apt-get update
sudo apt-get dist-upgrade
sudo apt-get install gcc-4.9 gcc-4.9-base libgcc-4.9-dev libgcc1 gcc

After all that, there is still one more step before you can install GnuRadio. The PyBombs recipe for GnuRadio needs to have some C flags added. Edit the `~/.pybombs/recipes/gr-recipes/gnuradio.lwr` file and change config_opt:

  config_opt: " -DENABLE_DOXYGEN=$builddocs -DCMAKE_ASM_FLAGS='-march=armv7-a -mthumb-interwork -mfloat-abi=hard -mfpu=neon -mtune=cortex-a7' -DCMAKE_C_FLAGS='-march=armv7-a -mthumb-interwork -mfloat-abi=hard -mfpu=neon -mtune=cortex-a7'"

I also had to remove some of the old Deb packages from the Raspbian repo:

sudo apt-get remove libqtgui4 libqtcore4 libqt4-xmlpatterns libqt4-xml libqt4-network libqt4-dbus

There is one other thing you have to do… increase the swap size, otherwise the Pi will run out of memory during such a large compile. You will probably want to turn it back to the default value after you are done compiling. Follow the directions here.

OK, sweet! Now it is time to run `pybombs install gnuradio`. On a Raspberry Pi 2, it took about 24 hours. It might not be a bad idea to run it inside `screen` so things don’t stop if you get disconnected.

QPSK in OP25 & Fixing Performance

I recently switched over from using a single SDR with lots of bandwidth to using a bunch of RTL-SDRs, each with a smaller amount of bandwidth. The system I am capturing broadcasts the same transmission from multiple towers at the same time, using something called Linear Simulcast Modulation (LSM). The only problem is that the modulation is a bit different, and the multiple transmissions can interfere with each other. With the single SDR I had a Yagi antenna and could point it right at a single tower, so I would just be receiving one transmission. However, with multiple receivers it is not so simple, and RF splitter amplifiers look expensive.

I had added both OP25 and DSD into Trunk Recorder to capture the audio, but had primarily been using DSD because it performed better. However, OP25 supports QPSK modulation, which is what is used with LSM, and it is able to make sense of the multiple transmissions. That left me with no other choice than to track down the performance problems with OP25. The issue was that when an OP25 flowgraph was running but no packets were going to it, it would eat up tons of CPU cycles. I tracked this down to the P25 Frame Assembler block using the `top` command in Linux, with the -H and -p switches to look at just the threads spawned by Trunk Recorder.

I don’t fully understand it, but it looks like the problem is with forecast() in p25_frame_assembler_impl.cc. My guess is that, because of the float-to-int conversion, the current forecast would sometimes report that 0 input samples are required. The following code makes a huge difference in performance. I have a version of OP25 with this fix available here:

void
p25_frame_assembler_impl::forecast(int nof_output_items, gr_vector_int &nof_input_items_reqd)
{
   // for do_imbe=false: we output packed bytes (4:1 ratio)
   // for do_imbe=true: input rate= 4800, output rate= 1600 = 32 * 50 (3:1)
   // for do_audio_output: output rate=8000 (ratio 0.6:1)
   const size_t nof_inputs = nof_input_items_reqd.size();
   // Track the requirement as a double and round up, so it never hits zero
   double samples_reqd = 4.0 * nof_output_items;
   if (d_do_imbe)
      samples_reqd = 3.0 * nof_output_items;
   if (d_do_audio_output)
      samples_reqd = 0.6 * nof_output_items;
   int nof_samples_reqd = (int)ceil(samples_reqd);
   for (size_t i = 0; i < nof_inputs; i++) {
      nof_input_items_reqd[i] = nof_samples_reqd;
   }
}
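To see why the ceil() matters, here is a quick Python illustration of the float-to-int problem, using the 0.6 audio-output ratio from the code above. A plain int cast truncates, so for small output requests the block can report that it needs zero input samples, and the scheduler will keep calling it without ever feeding it anything:

```python
import math

ratio = 0.6  # input samples needed per output item when do_audio_output is set

for nof_output_items in (1, 2, 4):
    truncated = int(ratio * nof_output_items)         # old behavior: (int) cast
    rounded_up = math.ceil(ratio * nof_output_items)  # fixed behavior: ceil()
    print(nof_output_items, truncated, rounded_up)
```

For a single output item, truncation asks for 0 input samples while ceil() correctly asks for 1.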

Using Multiple RTL-SDR to Capture a Trunking System


Most trunked radio systems have the channels they use spread out over a couple MHz worth of spectrum. In order to record more than one broadcast at once, you pretty much need an SDR that has enough bandwidth to cover the lowest channel in the system and the highest one at the same time. For the DC Fire & EMS system, the lowest channel is at 854.86250 MHz and the highest is at 860.98750 MHz. This means that your SDR needs to have 6+ MHz of bandwidth. This isn’t too much for most modern SDRs, like the HackRF, BladeRF or one of the Ettus boards. These are great SDRs, but they cost at least $300.

The lowly RTL-SDR is a $20 SDR that is just a repurposed USB TV dongle. While it makes for an OK SDR, it only has ~2MHz of bandwidth. When I rewrote Trunk Recorder, I designed it to support using multiple SDRs, but I never really tried that out, so I decided to give it a go.

One of the reasons I was interested was to see if I could get better CPU performance. A lot of CPU time is dedicated to cutting the full bandwidth coming from the SDR down to the small sliver that is interesting. This has to be done for each of the “recorders” that is actively trying to capture a channel. When I was using a single SDR, each Recorder had to take in the full 8MHz and pull out the small 12.5KHz that was interesting. The end result was that I could only record about 3 channels at once before the CPU got overloaded. Since the control channel was being decoded at the same time, that was the equivalent of about 32MHz of bandwidth to process.

With the RTL-SDRs, each Recorder only has to look at 2MHz, which puts a much lighter load on the CPU. Roughly speaking, having 3 Recorders active, plus the control channel, means that only a total of 8MHz is being processed. As you can see, this scales much more efficiently.
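The rough arithmetic behind that claim:

```python
# 3 active voice recorders plus the control channel decoder.
active_flowgraphs = 3 + 1

# Bandwidth each flowgraph must process, in MHz.
single_sdr_bw = 8  # one wideband SDR: every recorder sees the full capture
multi_rtl_bw = 2   # one RTL-SDR per recorder: each sees only its own 2MHz

print(active_flowgraphs * single_sdr_bw)  # 32 MHz equivalent
print(active_flowgraphs * multi_rtl_bw)   # 8 MHz equivalent
```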

The new code for multiple SDRs has been merged in, give it a try!

Trunk Recorder on GitHub

A couple notes on my setup…

First, start out with an RTL-SDR dongle that has a TCXO. This helps make sure the tuning is accurate and will not drift as the dongle warms up. The dongles I got have been great and the antenna they come with seems to be pretty good.

Also, make sure you assign each of your dongles a unique serial number. You can do this with the rtl_eeprom -s command. Using this serial number, you can define different error-correction values for each dongle. I have noticed they will vary by a few hundred hertz between dongles. There is an example of how to do this in the multi-sdr config file.
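For illustration, the per-dongle entries end up looking something like this (the field names here are from memory and may not match exactly; the multi-sdr example config in the repo is the authoritative reference):

```
"sources": [
  { "device": "rtl=00000101", "error": -250, "center": 855700000, "rate": 2048000 },
  { "device": "rtl=00000102", "error": 320,  "center": 857700000, "rate": 2048000 }
]
```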

The other thing I got was a USB 3 hub. Having more than 2 dongles connected to a USB 2 hub might max out its bandwidth. I am not sure if this is necessary, but I wanted to limit the places where things could get screwed up.

Getting iOS Notifications in Linux


Apple provides access to iOS notifications over Bluetooth LE through the Apple Notification Center Service. It is really a pretty cool thing. I got support for it up and running in Arduino and documented it here. It works great and has been pretty reliable. The problem is that it is not very practical. First off, it really only works with one specific chipset from Nordic Semiconductor. Secondly, by the time you have bought an Arduino and the BLE shield, you have spent close to $40. The final kicker is that once you load up the libraries for BLE and ANCS, there isn’t much memory left on the Arduino, so you can’t do anything too complex.

This led me to start looking at what was possible on Linux. Given how common and cheap Raspberry Pis are, it seemed like an interesting platform. Sandeep Mistry is doing some great work with Bluetooth LE on Linux and has even created a sample ANCS app. It is all based on Node.js, which is a fun language to use. Unfortunately, I could never get the ANCS example running on the Raspberry Pi. It also required the iOS device to be running a virtual peripheral through an app like LightBlue Explorer.

In order to work natively with iOS, without requiring any app to be running, you have to have your Linux system initially look like a BLE peripheral and then pivot to acting like a BLE central. In ANCS, the iOS device acts as a peripheral, and the Linux device needs to act as a central. This means combining Sandeep’s libraries for the BLE peripheral and BLE central roles.

One of the other challenges I ran into is that you need to have bluetoothd run initially, and then kill it. I am not exactly sure why, but it seems to initialize the kernel portion of BlueZ. I am going to look more into this in the future, but for now do a ‘stop bluetooth’ or ‘/etc/init.d/bluetooth stop’ and watch out for respawns.

The code for this is up on github:

You should be able to do some pretty fun stuff with this. Since the Raspberry Pi has lots of memory, you can make notifications trigger some complex interactions. Lights can blink, balloons can be popped, and Nerf guns can be fired. And since it is written in Node, it is easy to write complex rules to make sure only VIPs get the balloon popping.

I am not sure which USB Bluetooth LE dongles to recommend. I think it is best to use a Cambridge Silicon Radio or Broadcom based dongle. There is a list up here:

I have just been using a couple of cheap, random ones I got off Amazon.

MagicBands + RPi + LEDs = Fun


Disney has made RFID cool with their MagicBands. You can unlock your hotel door, buy a meal and even get into the park just using your band. It was honestly a lot of fun to use… so much fun that I wanted to keep using them when I got home. Luckily, you get to keep the bands!

All that I had to do was create a complete MagicBand enabled infrastructure at my house. That seemed tough, so I decided to start small and make some LEDs blink.


Luckily, someone had already done the tough work of taking apart the band and digging up the FCC filings. The band has 2 radios, and one of them is standard RFID! This makes things quite a bit easier.

Knowing this, I hacked together a MagicBand reader using a Raspberry Pi 2, a Unicorn Hat, and a USB RFID/NFC reader. The RFID reader doesn’t come with software for Linux, but the LibNFC project has everything you need to do some fun stuff and it fully supports the SCL3711.


It actually didn’t take too much programming to put something fun together. I just hacked together the example program from LibNFC and the demo C program for the Unicorn Hat. I hardcoded in the ID numbers for my bracelets and play different LED animations when one is read by the RFID reader.
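The dispatch logic is about as simple as it sounds. Here it is sketched in Python (the UIDs and animation names are made up for illustration; the real program is C, hacked together from the LibNFC and Unicorn Hat examples):

```python
# Map each hardcoded band UID to an LED animation.
BAND_ANIMATIONS = {
    "04a23bf2c81e80": "rainbow_swirl",
    "04b7129ac91f80": "pixel_sparkle",
}

def handle_tag(uid):
    """Look up a scanned UID and return the animation to play, if any."""
    return BAND_ANIMATIONS.get(uid.lower())

print(handle_tag("04A23BF2C81E80"))  # rainbow_swirl
print(handle_tag("ffffffffffffff"))  # None - unknown card, do nothing
```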

For extra fun, I put the Pi in a shoe box, cut out a window and lined it with vellum to make a little viewer. I am not sure what it is supposed to be exactly, but it is a lot of fun.


With all of this connected together, it is easy to see how it could be extended into something useful. You could create a smart lock, or fire off some internet task when a card or band is touched. It turns out that Washington DC’s SmarTrip card uses standard RFID, and you could probably do something pretty fun since almost everyone in DC carries one around.

I put the code up here:

SmartNet-Recorder & GNU Radio 3.7

I finally got my SmartNet-Recorder software working correctly with GNU Radio 3.7. Or more correctly, I finally squashed the lingering bugs in 3.7, so my code now runs correctly.

There is a 3.7 branch in my GitHub repo, and it will shortly become the master.

Just to capture them correctly, here are the threads from the mailing list:

iOS Notifications with Arduino


One of the few things I really liked about my BlackBerry was the red blinky light that let me know if anything new happened. The iPhone is a great device, but it is easy to miss that you have a new email or text. Apple has created a centralized place to collect all of these new things you might want to know about, called Notification Center. However, there is still no red blinky light. If you miss the screen turning on when something comes in, you wouldn’t know that you had anything new.

Apple has added something that can help fix this: the Apple Notification Center Service (ANCS). This allows devices to connect over Bluetooth LE (BLE), also known as Bluetooth Smart, and receive notifications from the iOS device. It is in iOS 7.0 and most recent iPhones and iPads. ANCS is behind a lot of the new smart watches and fitness bands that do stuff when you have a new email. All this is great, but wouldn’t it be fun if you could do something really big when you get something new?

I thought it would, so I looked for a way to build an Arduino-powered device that would connect to iOS over ANCS. There is a BLE Arduino shield that plugs right in and comes with lots of great code. Adafruit also has a BLE breakout board. They are both based on the great nRF8001 chip from Nordic Semiconductor. Nordic provides an Arduino library for the chip and a free software package that makes it easy to configure the chip’s features.

I also came across some great ANCS code for AVRs & Arduinos that was 95% there. I updated it to be compatible with the latest version of the Nordic library, made it easier to pair, and rolled it into an Arduino library. I am going to work on getting my code pulled back upstream, but until then use my branch to pull down the latest updates.

ANCS Arduino Library

Things are still a little messy, but the overall functionality seems to be pretty stable. You pair the devices once, and after that they automatically join. That alone could be used for some cool functionality. It would be easy to build a box that automatically unlocks when you are nearby, or have it play a trumpet sound. The nice part about ANCS, compared to other BLE profiles, is that it works natively and doesn’t require any apps to be installed on the iOS device. With the BLE proximity stuff, you need to have an app running on the device to make the initial connection.

Of course, the reason for going through this trouble is to get notifications. ANCS passes along all of the notifications from iOS, including 3rd-party ones. You can filter them based on type or source, so you can have a different response depending on what it is. For example, you could do something really annoying for incoming calls, but more subtle for a Twitter mention.

The library is built around callbacks and will call the function you specify when a device connects, disconnects or sends a notification.

I have included an example I am working on that displays Notifications on an LCD and turns on the Backlight.

Memory is a problem, so I have some notification fields turned off in order to have a smaller cache.

Also, is Notif a good name? Any thoughts on a better one for the library?

Using a HackRF to Capture an Entire Radio System


After a bit of work, I have put together a system that lets you monitor the radio system for DC Fire, EMS & City Services. Everything gets recorded and can be played back through a website. Thanks to the magic of websockets, any new call that comes in gets added to the top of the list. Better yet, if you hit the Autoplay button in the upper left-hand corner, it will automatically play through the list of calls. You can narrow the list of calls displayed to a specific group using the filters.

Anyhow, give it a try at and let me know what you think. If you want more background on how it works, read on…

Software Defined Radios are pretty awesome. For the uninitiated, they let you receive radio signals over a wide range of frequencies. They pass you the raw information, and you use a computer to filter out a transmission and process the signal. There is tons of flexibility… and that is also a challenge. There are a lot of rough edges and ways you can mess things up. That is half the fun though, boldly coding where few have coded before.

I was one of the lucky recipients of a great SDR, the HackRF Jawbreaker. It is an amazing piece of Hardware, capable of sending or receiving 20MHz of spectrum between ~13MHz – 6GHz. It is Open Source Hardware too, so if you are wicked smart you can build your own boards. The design work for it was funded by DARPA and I got one of the boards from that pre-production run. In order to go into production, a Kickstarter project was put together. The original target amount was $80k and it quickly blew through that and ended up raising $600k. One of the most impressive stats is that the target price is $300.

You can do a lot of things with this board. It uses the popular OsmoSDR drivers, so it works with a lot of existing software and plugs right into GNURadio.

There are an endless number of things to try. What I have been focusing on is trying to monitor the radio system that the Washington, DC Fire/EMS & City Services use. Of course, monitoring a radio system is nothing new. Radio Shack sells a bunch of different scanners that can do it. However, these scanners can only follow one conversation on a system at a time. Since an SDR can receive a wide swath of spectrum at once, and all the processing happens on the computer, you can decode multiple transmissions. And since you are doing the processing on a computer, you can easily save and archive the transmissions, which you sort of need since you could be getting a couple at once.

Luckily for me, a couple of people have already set up systems that do exactly that. The code from Nick Foster, GR-SmartNet, seems to be the first out there, and the only publicly available code. The Super Trunking Scanner took it a step further and made it playable over the web. It monitors a trunked system with analog channels. The Radio Capture system took a similar approach, except it works for a system with digital voice channels.

How Trunking Works
Here is a little background on trunking radio systems, for those not familiar. In a trunking system, one of the radio channels is set aside to manage the assignment of radio channels to talkgroups. When someone wants to talk, they send a message on the control channel. The system then assigns them a channel and sends a Channel Grant message on the control channel. This lets the talker know which channel to transmit on, and lets anyone who is a member of the talkgroup know that they should listen to that channel.
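The bookkeeping this implies can be sketched in a few lines of Python (the message format and names here are illustrative, not the actual Trunk Recorder structures):

```python
# Track which recorder is following each active talkgroup.
active_recorders = {}

def handle_control_message(msg):
    """React to a decoded control channel message."""
    if msg["type"] == "channel_grant":
        talkgroup = msg["talkgroup"]
        if talkgroup not in active_recorders:
            # Spin up a recorder on the granted frequency.
            active_recorders[talkgroup] = {"freq": msg["freq"]}

handle_control_message(
    {"type": "channel_grant", "talkgroup": 101, "freq": 855862500})
print(active_recorders)
```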

In order to follow all of the transmissions, this system constantly listens to and decodes the control channel. When a channel is granted to a talkgroup, the system creates a monitoring process. This process starts to decode the part of the radio spectrum for that channel, which the SDR is already pulling in. In the DC system, the audio is digitally encoded using P25 CAI. Decoding it is a bit of a pain. I am taking a quick-and-dirty approach right now and have shoehorned in the DSD program. In the future I would like to try using the code from the OP25 project, and a hardware dongle for the decoding. Unfortunately, the dongle is $500, so that might not be happening too soon.

No message is transmitted on the control channel when a talkgroup’s conversation is over. Instead, the monitoring process keeps track of transmissions, and if there has been no activity for 5 seconds, it ends the recording and uploads it to the webserver. I convert the WAV file that gets recorded into an MP3 file. Since the audio was originally converted to digital by the radio system, putting it through another lossy conversion is probably not a good idea, but sending the full-size WAV files ate up too much space.
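The end-of-conversation logic is just an inactivity timeout, which can be sketched like this (names are illustrative, not the actual code):

```python
TIMEOUT = 5.0  # seconds of silence before a recording is closed out

class Recorder:
    def __init__(self, start_time):
        self.last_activity = start_time

    def transmission_heard(self, now):
        self.last_activity = now

    def is_idle(self, now):
        # No control channel message marks the end of a conversation,
        # so the recording is ended after TIMEOUT seconds of inactivity.
        return now - self.last_activity > TIMEOUT

r = Recorder(start_time=0.0)
r.transmission_heard(3.0)
print(r.is_idle(7.0))   # False - only 4s since the last transmission
print(r.is_idle(10.0))  # True - close the WAV and upload it
```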

My Setup
The monitoring and recording is being run off of a laptop in my apartment and uses a crappy antenna. The website is run off a VPS I have running up in the magical cloud.

The webserver is pretty simple. It is written in NodeJS. The audio is stored as WAV files and indexed using MongoDB. The server simply watches for new files being placed in a directory and then moves them and adds them to the DB. Websockets are used to update all of the browsers visiting the site when a new transmission has been added.

The Code
The recorder portion of the system is C++ code that uses GnuRadio.

The recorder uses DSD to decode the digital audio. Unfortunately, DSD isn’t supported or being actively developed, and it isn’t designed to work with GNURadio or SDRs. Luckily, someone wrapped DSD into a GNURadio block. It works, but it isn’t pretty. I had to futz with it a bit to get it to run concurrently. My version is here.

The final portion is the website for listening to the recordings. The code for that is available here.

The Punch List
Right now, I think everything is pretty much stable. I have a small memory leak somewhere, but I can keep the system up for a long time without it being a problem.

  •  Upgrade everything to GNURadio 3.7. Right now I am on the 3.6 branch and it will take a bit of work to switch.
  • Get OP25 working nicely. The code is designed to work with GR and is being actively developed.