Photographing all the planes!

So I put together a system that automatically photographs all of the airplanes that fly over my house. I tweeted about it and things exploded a little on Hacker News. So I thought it could be helpful for folks if I go through some of the details.

This story starts about four years ago, when I came across an old networked Pan-Tilt-Zoom (PTZ) camera from Axis. My office at In-Q-Tel overlooked the flight path for National Airport. I had already built a little display using my LED sign that showed the airplane that was currently landing. The sign got this information from the ADS-B broadcasts that airplanes transmit these days. Each broadcast contains the airplane's location, heading and some identifying information. Using a cheap, $25 software defined radio you can receive these broadcasts and track which planes are nearby… or, with a good antenna, which ones are within a couple hundred miles of you.

Of course, the natural thing to do was to see if I could have the camera track the planes as they landed using the information from ADS-B. Since I knew the location of the camera, and was getting the location of the airplane, some geometry would tell me where to point the camera. The network security cameras from Axis are great and have a built-in web server and API. You can pass a bearing and angle to the API, and it will point the camera wherever you want. So I hooked this all together and, to my surprise, it actually worked! Well, sort of worked. The camera was old and the image quality was not that great, but it was taking pictures of the planes passing by. Of course, all good things do not last. The setup looked like a pile of junk. I had it set up in the corner by a window, and one day it got cleaned up and scrapped.
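The geometry involved is simple if you cheat a little. This is not the actual SkyScan code, just a Python sketch using a flat-earth approximation, which is fine for nearby aircraft (the real thing has to care about ellipsoid projections):

```python
import math

def camera_angles(cam_lat, cam_lon, cam_alt_m, plane_lat, plane_lon, plane_alt_m):
    """Rough bearing/elevation from the camera to the plane.

    Flat-earth approximation: treat latitude/longitude offsets as a local
    north/east grid in meters. Good enough for aircraft a few miles out.
    """
    earth_r = 6371000.0  # mean Earth radius, meters
    d_lat = math.radians(plane_lat - cam_lat)
    d_lon = math.radians(plane_lon - cam_lon)
    north = d_lat * earth_r                                  # meters north of camera
    east = d_lon * earth_r * math.cos(math.radians(cam_lat)) # meters east of camera
    ground_dist = math.hypot(north, east)
    bearing = math.degrees(math.atan2(east, north)) % 360    # 0 = north, 90 = east
    elevation = math.degrees(math.atan2(plane_alt_m - cam_alt_m, ground_dist))
    return bearing, elevation

# A plane slightly east of the camera, high enough for a ~45 degree elevation:
b, e = camera_angles(38.85, -77.04, 0, 38.85, -77.03, 870)
```

Feed the resulting bearing and elevation straight to the camera's pan/tilt API and you are most of the way there.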

Fast forward a few years. I switched over to working in IQT Labs and we had a little down time between projects. I proposed that we pick up the project again, and we got to tinkering. The resulting project, SkyScan, is a much more refined version of the original vision. It is nicely containerized and uses MQTT to pass messages between the components. It ended up working well enough that we bought a fancy Axis camera with a 30x zoom… which we nicknamed “MegaZoom”! We tested and perfected it against aircraft at cruising altitude, as well as ones on approach… and we learned a lot about different ellipsoid projections!

However, I left IQT in the fall to work on Software Defined Radios at Microsoft. One of the awesome things about working at In-Q-Tel is that you get to open source a lot of your work, so I was able to keep the project going. I missed getting to see all of the planes that were passing by at 40k feet. It was like having a super power.

So I fired up eBay and got a used traffic camera. It is super ruggedized and designed to live on top of a pole. There is even a heater you can turn on. SkyScan simply saves the photos it captures to disk, and I wanted to share them. One of the perks of working at Microsoft is that you get a $150 Azure credit. This seemed like the perfect excuse to level up my Azure skills. To process the photos from SkyScan, I set up a series of serverless Azure Functions. One of them pulls in an ML model I built in Azure Custom Vision that spots airplanes and automatically builds thumbnails. Another pulls up information about the aircraft based on the identifier from the ADS-B broadcast. It does this by looking it up in a Cosmos DB I built with data I downloaded from the FAA.
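The lookup itself is conceptually tiny. Here is a toy Python stand-in for the Cosmos DB query: ADS-B gives you the aircraft's ICAO hex code, and the FAA registry data maps that to registration details. The field names and the sample entry below are made up for the example, not the actual FAA schema:

```python
# Illustrative stand-in for the Cosmos DB collection: ICAO hex -> registration.
FAA_REGISTRY = {
    "a1b2c3": {"n_number": "N12345", "manufacturer": "CESSNA", "model": "172S"},
}

def lookup_aircraft(icao_hex, registry=FAA_REGISTRY):
    """Return registration details for an ICAO hex code, or None if unknown."""
    return registry.get(icao_hex.strip().lower())
```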

All of this gets served up using Azure App Services and a simple React frontend. Going with App Services instead of just a VM turned out to be a great choice. After the site hit the front page of HN, it fell over from the traffic. I was able to easily upgrade from the free tier to something much more substantial with a single click.

I am honestly surprised the whole thing works at all. When fully zoomed in, the camera I am using has a horizontal field of view of around 2 degrees. So there is not much room for error or latency!
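To put some numbers on that, here is a back-of-envelope check with assumed figures: a jet at 40,000 ft passing directly overhead at roughly 500 kt ground speed.

```python
import math

altitude_m = 40000 * 0.3048    # ~12.2 km straight up
speed_ms = 500 * 0.514444      # ~257 m/s ground speed
fov_deg = 2.0                  # horizontal field of view when fully zoomed in

# Width of the patch of sky the camera covers at that range
frame_width_m = 2 * altitude_m * math.tan(math.radians(fov_deg / 2))
time_in_frame_s = frame_width_m / speed_ms
```

That works out to well under two seconds for the plane to cross the whole frame, so the pointing has to stay ahead of the aircraft, not chase it.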

Wrestling WNYC’s Audiogram Generator

The public radio station WNYC released a really nice audiogram generator. Audiograms are those short videos created for an audio clip that show a waveform. Video is easy to share on a lot of social platforms, and turning the audio into a video also makes it a little more engaging.

WNYC’s tool is open source, but it hasn’t been updated since 2016 and there is a bit of bit-rot. I first tried to install it on my MacBook… but that was a dismal failure. I was able to get it working on my Linux box, which is running Ubuntu 18.04. Here are the steps I took:

The full instructions are here.

sudo apt-get install git nodejs npm \
libcairo2-dev libjpeg8-dev libpango1.0-dev libgif-dev libpng-dev build-essential g++ \
ffmpeg

git clone 
cd audiogram

Now you have all the dependencies, and the source code. The first bug you hit is that the package.json specifies an old version of node-canvas.

Run npm install… it will fail!

Now, install the latest version of node-canvas:

npm install canvas@next

Unfortunately, you will hit another bug when you try to run things. Luckily, someone has a fix. Open audiogram/draw-frames.js and edit line 12, replacing

canvases.push(new Canvas(options.width, options.height));

with

canvases.push(Canvas.createCanvas(options.width, options.height));

(Newer versions of node-canvas expose a createCanvas() factory function instead of a Canvas constructor.)

Now you should be good to go! Start it up with: npm start

My 7yo made a $99 in-app purchase and Apple is refusing a refund!

With COVID-19, both kids have been spending a lot of time on iPads. I let my 7-year-old son use my iPad, and we added his fingerprint so he could log in more easily while I kept using a long device password. This setup was working great until I got a receipt from Apple for a $99 in-app purchase.

I didn’t think this would be a big deal: my son is 7 and tapped on something without realizing he was spending real money. To be clear, this was a $99 in-app purchase, mistakenly made by a minor, for a game he never plays. This should be exactly what the refund process is for!

As soon as I saw the receipt, I clicked on “Report a Problem” and went through Apple’s automated process. For the reason, I put “Purchased by a Minor”; everything seemed pretty straightforward. A couple days later, I got a reply… “Refund Declined”! Worse yet, there was no explanation given and no additional steps provided. This made no sense to me, so I wrote to Apple Support, explained what had happened, and asked why the refund request was denied. The support person didn’t provide any rationale, but instead resubmitted a refund request into the system. A few days later I got a response back through the automated process with the same answer, “Refund Declined”, and no explanation!

Since I still wasn’t getting an explanation, I gave Apple Support a call. After a bit of a wait I was able to talk with a courteous support rep. I went through what had happened, the steps I had taken, and explained that I was looking for a refund and an explanation. The rep let me know that he was unable to do anything else, since it had already gone through the system and an appeal. He was unable to provide any reason why the refund would have been denied. He did mention it is more difficult with in-app purchases because they go through the developers’ systems. Left without any other avenues, I mentioned I was going to dispute the charge with my credit card since I was being denied a refund. The support rep cautioned against it, because Apple would view it as fraud and lock my entire account. Seems like Mob tactics to me.

I tried contacting the developer to see if they could initiate a refund from their side. They said they were happy to offer a refund but it has to be done through Apple’s system… and Apple is not allowing me to make any more refund requests.

At this point, I don’t think there is anything else I can do. Given Apple’s previous settlement around in-app purchases by minors, I am pretty shocked:

Pretty Graphs from Twilio Narrowband

I just got my hands on the Twilio Narrowband dev kit… and it is a sweet piece of kit. It is centered around a small Arduino-compatible board with a lot of Seeed connectors. Most importantly, it also has a cellular modem built in that is designed to work with T-Mobile’s nationwide Narrowband IoT network. This network uses the guard bands on T-Mobile’s channels. It is not high bandwidth; it is designed for moving text-message-sized bits of data. It does have good coverage though… and most importantly, it is cheap. Through Twilio, you can get a plan for ~$10 per device per year. At this price point you can start coming up with interesting ways to use it!

With the Twilio dev kit you communicate with the board using “commands”, which are just short messages. There is an SDK called Breakout included with the dev kit that makes it super simple to send and receive data over the network. While the examples are a quick way to start pulling data from the sensors, they just have the data going to the Twilio console. This is boring.

I wanted to drop the data into a nice graph, and it ended up being pretty easy to do. The Twilio console has a webhook that can do an HTTP POST every time a new command comes in. I built a little function to take the data in and post it to a graphing app.

I wanted to graph temperature and humidity data, and started from the Breakout example. I modified it a little so that the output was a simple comma-separated value pair:

I then went over to Glitch and started working on a small function to take in the sensor data from the HTTP POST and send it to Adafruit’s IO platform.
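The core of that function is just parsing the command body and mapping it to feed names. The real function is Node.js on Glitch; this is a Python sketch of the same logic, and it assumes the device sends the values in “temperature,humidity” order (the “hummidity” key matches the misspelled feed name on Adafruit IO):

```python
def parse_command(body):
    """Split a "<temperature>,<humidity>" command body into feed values."""
    temp, humidity = body.strip().split(",")
    return {"temp": float(temp), "hummidity": float(humidity)}

reading = parse_command("72.5,41.0")
```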

Here is my Project on Glitch:

Take the URL for your Glitch project, with the path of ‘/sms’ and add it as the Command Callback on the Twilio Console for your SIM:

After that, it is time to create some Feeds and a Dashboard on the Adafruit IO platform. After you create your account, copy your Username and Key into the .env file on Glitch. This is how the function logs into IO to post data. In the Glitch project readme, there is a description of how this should be formatted in the .env file. Create two feeds, called hummidity and temp. Yes, I just realized I spelled humidity wrong; on the plus side, I did it consistently on both Glitch and IO.

Finally, create a Dashboard in IO to show the data from each Feed.

There are still a few things to work on:

  • It looks like there is an issue where the dev kit board will only work if it is connected to the Serial Monitor in Arduino. If you just remove the Serial prints, you should be good to go.
  • I want to try to make the dev kit board sleep between transmissions, or at least turn off the modem, to save power.

Silly WebSocket mistake

I was recently trying to add a WebSocket to an existing NodeJS Express app I had written. Here is the important code:


const express = require('express');
const http = require('http');
const WebSocket = require('ws');

const app = express();
const server = http.createServer(app);
const wss = new WebSocket.Server({ server });

app.listen(8080, function listening() {
  console.log('Listening on %d', server.address().port);
});
I kept getting 502 errors from NGINX with this code when I tried to establish a connection. See if you can spot the error…

I was pretty dumb. You can’t have app listening anymore; you have to use server. new WebSocket.Server({ server }) attaches the WebSocket upgrade handling to server, but app.listen() quietly creates and starts a separate HTTP server of its own, so the upgrade requests never reach the one wss is watching. The correct code is:

server.listen(8080, function listening() {
  console.log('Listening on %d', server.address().port);
});

Leave a comment if this happened to you too… I will feel less dumb.

Getting Docker up on Linode, with Ubuntu 16.04

Unfortunately, getting Docker installed on Linode with Ubuntu 16.04 is not a straightforward process. The image that Linode provides for Ubuntu 16.04 has a custom Linux kernel installed. Docker requires AUFS, which is part of the linux-image-extra package. That package is tied to the kernel version, and since Linode’s kernel is custom, it is not clear how to load it. To make this work, you have to revert your install back to the distribution kernel.

Here are the steps to get it working:

  • Deploy a Ubuntu 16.04 image on Linode
  • Follow these steps:
  • Then install the Docker-Engine:
  • Finally, lock-down and set things up:

Compiling GnuRadio on a Raspberry Pi

The best bet for most people looking to install GnuRadio on a Raspberry Pi, is to simply do:

$ sudo apt-get install gnuradio gnuradio-dev

…and you are done. Unfortunately, it is not so simple to actually compile the latest version on a Raspberry Pi. While the Pi has gotten faster with the 2 & 3, it is still pretty slow and does not have much memory, so compiling takes a long time. If you are good with cross compilers, you can compile on your desktop and ship the result to your Pi. You are on your own to figure out how to do this… please let me know if you do!

It is not too bad to compile GnuRadio using PyBombs on the Pi. There are a few gotchas though.

The first step is to get PyBombs installed. Follow the instructions here. Basically it is:

sudo pip install PyBOMBS
pybombs recipes add gr-recipes git+  
pybombs recipes add gr-etcetera git+

Now that you have PyBombs installed, it is time to do some configuration. The first problem you have to work around is that the version of GCC installed from the Raspbian repo is compiled for the Raspberry Pi v1 chip, which has an older ARM processor. This forces some older C flags during compilation and messes up the GnuRadio build once it is time to compile Volk. Double-check this by running `gcc -v` on the Pi and looking for:

--with-arch=armv6 --with-fpu=vfp --with-float=hard

The workaround is to use the GCC build from the standard Debian repo. I didn’t figure this out on my own; there is a good blog post here. However, before you do this, make sure you have some of the dependencies for GnuRadio installed:

sudo apt-get install libssl-dev libtiff5-dev

To switch to repos with the ARMv7 code, I edited `/etc/apt/sources.list` and it now looks like:

deb jessie main contrib non-free
deb jessie rpi
#deb jessie main contrib non-free rpi

If you end up getting weird dependency errors, you can comment out the new lines and re-enable the old ones.

After making that edit, update the package list and install a better version of GCC:

sudo apt-get update
sudo apt-get install debian-archive-keyring
sudo apt-get update
sudo apt-get dist-upgrade
sudo apt-get install gcc-4.9 gcc-4.9-base libgcc-4.9-dev libgcc1 gcc

After all that, there is still one more step before you can install GnuRadio. The PyBombs recipe for GnuRadio needs some C flags added. Edit the `~/.pybombs/recipes/gr-recipes/gnuradio.lwr` file and change config_opt:

  config_opt: " -DENABLE_DOXYGEN=$builddocs -DCMAKE_ASM_FLAGS='-march=armv7-a -mthumb-interwork -mfloat-abi=hard -mfpu=neon -mtune=cortex-a7' -DCMAKE_C_FLAGS='-march=armv7-a -mthumb-interwork -mfloat-abi=hard -mfpu=neon -mtune=cortex-a7'"

I also had to remove some of the old Deb packages from the Raspbian repo:

sudo apt-get remove libqtgui4 libqtcore4 libqt4-xmlpatterns libqt4-xml libqt4-network libqt4-dbus

There is one other thing you have to do: increase the swap size, otherwise the Pi will run out of memory during such a large compile. You will probably want to turn this back to the default value after you are done compiling. Follow the directions here.

OK, sweet! Now it is time to run `pybombs install gnuradio`. On a Raspberry Pi 2, it took about 24 hours. It might not be a bad idea to run `screen` so things don’t stop if you get disconnected.

QPSK in OP25 & Fixing Performance

I recently switched over from using a single SDR with lots of bandwidth to using a bunch of RTL-SDRs, each with a smaller amount of bandwidth. The system I am capturing broadcasts the same transmission from multiple towers at the same time, using something called Linear Simulcast Modulation (LSM). The only problem is that the modulation is a bit different, and the multiple transmissions can interfere with each other. With the single SDR I had a Yagi antenna and could point it right at a single tower, so I would receive just one transmission. However, with multiple receivers it is not so simple, and RF splitter amplifiers look expensive.

I had added both OP25 and DSD to Trunk Recorder to capture the audio, but had primarily been using DSD because it had better performance. However, OP25 supports QPSK modulation, which is what is used with LSM, and it is able to make sense of the multiple transmissions. That left me no choice but to track down the performance problems in OP25. The issue was that when an OP25 flowgraph was running but no packets were going to it, it would eat up tons of CPU cycles. I tracked this down to the P25 Frame Assembler block using the `top` command in Linux, with the -H and -p switches to look at just the threads created by Trunk Recorder.

I don’t fully understand it, but it looks like the problem is with forecast() in p25_frame_assembler_impl.cc. My guess is that with the current forecast it would sometimes report that 0 input samples are required, because of the float-to-int conversion. The following change makes a huge difference for performance. I have a version of OP25 with this fix available here:

void
p25_frame_assembler_impl::forecast(int nof_output_items, gr_vector_int &nof_input_items_reqd)
{
   // for do_imbe=false: we output packed bytes (4:1 ratio)
   // for do_imbe=true: input rate= 4800, output rate= 1600 = 32 * 50 (3:1)
   // for do_audio_output: output rate=8000 (ratio 0.6:1)
   const size_t nof_inputs = nof_input_items_reqd.size();
   double samples_reqd = 4.0 * nof_output_items;
   if (d_do_imbe)
     samples_reqd = 3.0 * nof_output_items;
   if (d_do_audio_output)
     samples_reqd = 0.6 * nof_output_items;
   // round up so the scheduler is never told that 0 input samples are needed
   int nof_samples_reqd = (int)ceil(samples_reqd);
   for (size_t i = 0; i < nof_inputs; i++) {
     nof_input_items_reqd[i] = nof_samples_reqd;
   }
}
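The truncation problem is easy to demonstrate in isolation (Python here, just for illustration):

```python
import math

# gr_vector_int stores ints, so assigning a double truncates toward zero.
# With do_audio_output set, the input/output ratio is 0.6, so for small
# output requests the old code could ask for zero input samples:
nof_output_items = 1
truncated = int(0.6 * nof_output_items)         # truncates to 0
rounded_up = math.ceil(0.6 * nof_output_items)  # rounds up to 1
```

Asking the scheduler for 0 input samples means the block gets called again immediately with nothing to do, which would explain the busy-spinning CPU.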

Using Multiple RTL-SDR to Capture a Trunking System


Most trunked radio systems have their channels spread out over a couple MHz worth of spectrum. In order to record more than one broadcast at once, you pretty much need an SDR with enough bandwidth to cover the lowest channel in the system and the highest one at the same time. For the DC Fire & EMS system, the lowest channel is at 854.86250 MHz and the highest is at 860.98750 MHz. This means that your SDR needs to have 6+ MHz of bandwidth. This isn’t too much for most modern SDRs, like the HackRF, BladeRF or one of the Ettus devices. These are great SDRs, but they cost at least $300.

The lowly RTL-SDR is a $20 SDR that is just a repurposed USB TV dongle. While it makes for an OK SDR, it only has ~2 MHz of bandwidth. When I rewrote Trunk Recorder, I designed it to support using multiple SDRs, but I had never really tried it out, so I decided to give it a shot.

One of the reasons I was interested was to see if I could get better CPU performance. A lot of CPU time is dedicated to cutting the full bandwidth coming from the SDR down to the small sliver that is interesting. This has to be done for each of the “recorders” that is actively trying to capture a channel. When I was using a single SDR, each recorder had to take in the full 8 MHz and pull out the small 12.5 kHz that was interesting. The end result was that I could only record about 3 channels at once before the CPU got overloaded. Since the control channel was being processed at the same time, that was the equivalent of about 32 MHz of bandwidth to process.

With the RTL-SDRs, each recorder only has to look at 2 MHz, which puts a much lighter load on the CPU. Roughly speaking, having 3 recorders active plus the control channel means only a total of 8 MHz is being processed. As you can see, this scales much more efficiently.
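The arithmetic above, spelled out (Python, just for illustration):

```python
# Spectrum processed in MHz: 3 active recorders plus the control channel.
single_sdr_mhz = (3 + 1) * 8  # every recorder is fed the full 8 MHz
multi_rtl_mhz = (3 + 1) * 2   # each one gets its own 2 MHz dongle
```

A 4x reduction in samples to churn through, before touching any of the actual decoding.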

The new code for multiple SDRs has been merged in, give it a try!

Trunk Recorder on GitHub

A couple notes on my setup…

First, start out with an RTL-SDR dongle that has a TCXO. This helps make sure the tuning is accurate and will not drift as the dongle warms up. These dongles from have been great, and the antenna they come with seems to be pretty good.

Also, make sure you assign each of your dongles a unique serial number. You can do this with the rtl_eeprom -s command. Using the serial number, you can define different error-correction values for each dongle. I have noticed they will vary by a few hundred hertz between dongles. There is an example of how to do this in the multi-sdr config file.

The other thing I got was a USB 3 hub. Having more than 2 USB 2 dongles connected to a USB 2 hub might max out the bandwidth. I am not sure if this is necessary, but I wanted to limit the places where things could get screwed up.