Ubuntu 22.04: Headless Remote Desktop

Getting Remote Desktop sharing to work on Ubuntu is a real pain. The problem seems to be that the screen sharing service starts up before the user has logged in, while the keyring is still locked.

Switching my user from automatic login to a timed login at least lets me access the desktop over VNC and RDP.

Edit the GNOME Display Manager config file: sudo nano /etc/gdm3/custom.conf

Then uncomment the TimedLogin block so it looks like this:

# GDM configuration storage
# See /usr/share/gdm/gdm.schemas for a list of available options.

[daemon]
# Uncomment the line below to force the login screen to use Xorg

# Enabling automatic login

# Enabling timed login
  TimedLoginEnable = true
  TimedLogin = luke
  TimedLoginDelay = 10

Building a battery cable for the Arduino Nicla

The Arduino Nicla is a nice little dev board series. They have managed to cram a ton onto them… but it has meant sacrificing the battery connector. Most of the dev boards from SparkFun and Adafruit have a JST PH connector… as do most of the common LiPo batteries out there. The Nicla boards, however, have a JST ACH connector, which has 3 pins and is tiny…


Arduino put out a note on this… but it leaves a lot of the legwork up to the reader: https://support.arduino.cc/hc/en-us/articles/4408893476498-Connect-a-battery-to-Nicla-Sense-ME-and-Nicla-Vision

Here is how I built a converter cable that lets you plug normal JST PH LiPo batteries into Nicla boards.

For each cable you need:

To make the cable:

  • Cut the pre-crimped wire in half
  • Insert the wires into the ACH Connector Housing
  • Now the tricky part is figuring out which wire goes to which pin on the JST PH connector. The diagram from the Arduino article is of some help:
Nicla Sense ME battery connector
  • Look at the connector and orient it the same way as the diagram and note which wire should be the red wire. If you have some red tape, now is the time to use it.
  • On the JST PH connector, there is a side with a slot down the middle of it. With the slot facing up, and the connector port facing you, the Red/Positive side will be on the left. You can temporarily slide in a battery to confirm.
  • Mark which pin should be Red/Positive with a marker, if you want.
  • The next step is to strip the ends of the wires and solder them to the pins of the connector. It is a huge pain. I found it helpful to position everything and tape it down. And to swear. And to buy extra parts.
  • I added a bunch of hot glue over the solder joints with the hope of providing some stress relief and also adding some strength.
  • Finally, I slid some heat shrink tubing over it all.

Photographing all the planes!

So I put together a system that automatically photographs all of the airplanes that fly over my house. I tweeted about it, and things exploded a little on Hacker News. So I thought it could be helpful for folks if I went through some of the details.

This story starts about 4 years ago, when I came across an old networked Pan-Tilt-Zoom (PTZ) camera from Axis. My office at In-Q-Tel overlooked the flight path for National Airport. I had already built a little display using my LED sign that showed the airplane that was currently landing. The sign got this information from the ADS-B broadcasts that airplanes transmit these days. Each broadcast contains the airplane's location, heading, and some identifying information. Using a cheap $25 software defined radio you can receive these broadcasts and track which planes are nearby… or, with a good antenna, which ones are within a couple hundred miles of you.

Of course, the natural thing to do was to see if I could have the camera track the planes as they landed using the information from ADS-B. Since I knew the location of the camera and was getting the location of the airplane, some geometry would tell me where to point the camera. The network security cameras from Axis are great: they have a built-in web server and API. You can pass a bearing and angle to the API, and it will point the camera wherever you want. So I hooked this all together and, to my surprise, it actually worked! Well, sort of. The camera was old and the image quality was not that great, but it was taking pictures of the planes passing by.

Of course, all good things do not last. The setup looked like a pile of junk. I had it set up in the corner by a window, and one day it got cleaned up and scrapped.
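To give a feel for the geometry, here is a rough sketch of the pointing math. This is my illustration, not code from the project: it uses a flat-earth approximation that only holds at short range (the real system is much more careful about ellipsoid projections), and the function and field names are made up.

```javascript
// Compute where to point a PTZ camera at an aircraft, given both
// positions from ADS-B. Flat-earth approximation: fine for nearby
// planes, increasingly wrong at long range.
function pointCamera(cam, plane) {
  const R = 6371000; // mean Earth radius, meters
  const toRad = (d) => (d * Math.PI) / 180;
  const toDeg = (r) => (r * 180) / Math.PI;

  // Local north/east offsets from the camera to the plane, in meters.
  const dNorth = toRad(plane.lat - cam.lat) * R;
  const dEast = toRad(plane.lon - cam.lon) * R * Math.cos(toRad(cam.lat));

  // Compass bearing (0 = north, 90 = east) and elevation angle.
  const bearing = (toDeg(Math.atan2(dEast, dNorth)) + 360) % 360;
  const groundDist = Math.hypot(dNorth, dEast);
  const elevation = toDeg(Math.atan2(plane.alt - cam.alt, groundDist));

  return { bearing, elevation };
}
```

Feed those two angles to the camera's pan/tilt API and it swings around to meet the plane.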

Fast forward a few years. I switched over to working in IQT Labs and we had a little down time between projects. I proposed that we pick up the project again and we got to tinkering. The resulting project, SkyScan, is a much more refined version of the original vision. It is nicely containerized and uses MQTT to pass messages between the components. It ended up working well enough that we bought a fancy Axis camera with a 30x zoom… which we nicknamed “MegaZoom”! We tested and perfected it against aircraft at cruising altitude, as well as ones on approach… and we learned a lot about different ellipsoid projections!

However, I left IQT in the fall to work on Software Defined Radios at Microsoft. One of the awesome things about working at In-Q-Tel is that you get to open source a lot of your work, so I was able to keep tinkering with this project. I missed getting to see all of the planes that were passing by at 40k feet. It was like having a super power.

So I fired up eBay and got a used traffic camera. It is super ruggedized and designed to live on top of a pole. There is even a heater you can turn on. SkyScan simply saves the photos it captures to disk, and I wanted to share them. One of the perks of working at Microsoft is that you get a $150 Azure credit. This seemed like the perfect excuse to level up my Azure skills. To process the photos from SkyScan, I set up a series of serverless Azure Functions. One of them pulls in an ML model I built in Azure Custom Vision that spots airplanes and automatically builds thumbnails. Another pulls up information about the aircraft based on the identifier from the ADS-B broadcast, by looking it up in a Cosmos DB instance I built with data I downloaded from the FAA.

All of this gets served up using Azure App Services and a simple React frontend. Going with App Services instead of just a VM turned out to be a great choice. After the site hit the front page of HN, it fell over from the traffic. I was able to easily upgrade it from the free tier to something much more substantial with a single click.

I am honestly surprised the whole thing works at all. The camera I am using, when fully zoomed in, has a horizontal field of view of around 2 degrees. So there is not much room for error or latency!
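To put rough numbers on that (the range and speed here are my assumed figures, not from the post): a 2-degree field of view covers a surprisingly narrow slice of sky, and a jet does not stay in it for long if the tracking stalls.

```javascript
// Back-of-the-envelope: how wide is a 2-degree field of view at range,
// and how quickly does a jet cross it if the camera stops moving?
const fovDeg = 2;
const rangeM = 5000;  // assumed slant range to the aircraft, meters
const speedMps = 250; // roughly jet cruise speed, ~490 knots

// Width of the view at that range, from half-angle trigonometry.
const fovWidthM = 2 * rangeM * Math.tan((fovDeg / 2) * (Math.PI / 180));
const secondsInFrame = fovWidthM / speedMps;

console.log(`FOV width: ${fovWidthM.toFixed(0)} m`);
console.log(`Time in frame: ${secondsInFrame.toFixed(2)} s`);
```

At those numbers the view is only about 175 meters wide, and the plane crosses it in well under a second.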

Wrestling WNYC’s Audiogram Generator

The public radio station WNYC released a really nice audiogram generator. Audiograms are those short videos created for an audio clip that show a waveform. Video is easy to share on a lot of social platforms, and turning the audio into a video also makes it a little more engaging.

WNYC’s tool is open source, but it hasn’t been updated since 2016 and there is a bit of bit-rot. I first tried to install it on my MacBook… but that was a dismal failure. I was able to get it working on my Linux box, which runs Ubuntu 18.04. Here are the steps I took:

The full instructions are here.

sudo apt-get install git nodejs npm \
  libcairo2-dev libjpeg8-dev libpango1.0-dev libgif-dev libpng-dev build-essential g++ \
  ffmpeg

git clone https://github.com/nypublicradio/audiogram.git 
cd audiogram

Now you have all the dependencies and the source code. The first bug you hit is that the package.json specifies an old version of node-canvas.

Run npm install… it will fail!

Now, install the latest version of node-canvas:

npm install canvas@next

Unfortunately, you will hit another bug when you try to run things. Luckily, someone has a fix. Open audiogram/draw-frames.js and change line 12 from:

canvases.push(new Canvas(options.width, options.height));

to:

canvases.push(Canvas.createCanvas(options.width, options.height));

Now you should be good to go! Start it up with: npm start

My 7yo made a $99 in-app purchase and Apple is refusing a refund!

With COVID-19, both kids have been spending a lot of time on iPads. I let my 7-year-old son use my iPad, and we added his fingerprint so he could log in more easily while I kept using a long device password. This setup was working great until I got a receipt from Apple for a $99 in-app purchase.

I didn’t think this would be a big deal: my son is 7 and tapped on something without realizing he was spending real money. To be clear, this was a $99 in-app purchase, mistakenly made by a minor, for a game he never plays. This should be exactly what the refund process is for!

As soon as I saw the receipt, I clicked on “Report a Problem” and went through Apple’s automated process. For the reason, I put “Purchased by a Minor”, and everything seemed pretty straightforward. A couple days later, I got a reply… “Refund Declined”! Worse yet, there was no explanation given or additional steps provided. This made no sense to me, so I wrote to itunesstoresupport@apple.com, explained what had happened, and asked why the refund request was denied. The support person didn’t provide any rationale but instead resubmitted a refund request into the system. A few days later I got a response back through the automated process with the same answer, “Refund Declined”, and no explanation!

Since I still wasn’t getting an explanation, I gave Apple Support a call. After a bit of a wait, I was able to talk with a courteous support rep. I went through what had happened, the steps I had taken, and that I was looking for a refund and an explanation. The rep let me know that he was unable to do anything else, since the request had already gone through the system and an appeal. He was unable to provide any reason why the refund would have been denied. He did mention it is more difficult with in-app purchases because they go through the developer’s systems. Left without any other avenues, I mentioned I was going to dispute the charge with my credit card since I was being denied a refund. The support rep cautioned against it because Apple would view it as fraud and lock my entire account. Seems like Mob tactics to me.

I tried contacting the developer to see if they could initiate a refund from their side. They said they were happy to offer a refund but it has to be done through Apple’s system… and Apple is not allowing me to make any more refund requests.

At this point, I don’t think there is anything else I can do. Given Apple’s previous settlement around in-app purchases by minors, I am pretty shocked: https://www.ftc.gov/news-events/press-releases/2014/03/ftc-approves-final-order-case-about-apple-inc-charging-kids-app

Pretty Graphs from Twilio Narrowband

I just got my hands on the Twilio Narrowband dev kit… and it is a sweet piece of kit. It is centered around a small Arduino-compatible board with a lot of Seeed connectors. Most importantly, it also has a built-in cellular modem that is designed to work with T-Mobile’s nationwide narrowband IoT network. This network uses the guard bands on T-Mobile’s channels. It is not high bandwidth; it is designed for moving text-message-sized bits of data. It does have good coverage though… and, most importantly, it is cheap. Through Twilio, you can get a plan for ~$10 per year per device. At this price point you can start coming up with interesting ways to use this!

With the Twilio devkit you communicate with the board using “commands”, which are just short messages. There is an SDK, called Breakout, included with the devkit that makes it super simple to send and receive data over the network. While the examples are a quick way to start pulling data from the sensors, they just have the data going to the Twilio console. This is boring.

I wanted to drop the data into a nice graph, and it ended up being pretty easy to do. The Twilio console has a webhook that can do an HTTP POST every time a new command comes in. I built a little function to take the data in and post it to a graphing app.

I wanted to graph Temperature and Humidity data and started from the Breakout example. I modified it a little so that the output was a simple, comma-separated value pair:

I then went over to Glitch and started working on a small function to take in the sensor data from the HTTP POST and send it to Adafruit’s IO platform.

Here is my Project on Glitch:

Take the URL for your Glitch project, with the path of ‘/sms’ and add it as the Command Callback on the Twilio Console for your SIM:

After that, it is time to create some Feeds and a Dashboard on the Adafruit IO platform. After you create your account, copy your Username and Key into the .env file on Glitch. This is how the function logs into IO to post data. In the Glitch project readme, there is a description of how this should be formatted in the .env file. Create two feeds called hummidity and temp. Also – I just realized I spelled humidity wrong. On the plus side, I did it consistently on both Glitch and IO.

Finally, create a Dashboard in IO to show the data from each Feed.

There are still a few things to work on:

  • It looks like there is an issue where the Dev Kit board will only work if it is connected to the Serial Monitor in Arduino. If you just remove the Serial prints, you should be good to go.
  • I want to try to make the Dev Kit board sleep in between transmissions, or at least turn off the modem, to try to save power.

Silly WebSocket mistake

I was recently trying to add a WebSocket to an existing NodeJS Express app I had written. Here is the important code:


const express = require('express');
const http = require('http');
const WebSocket = require('ws');

const app = express();
const server = http.createServer(app);
const wss = new WebSocket.Server({ server });

app.listen(8080, function listening() {
  console.log('Listening on %d', server.address().port);
});

I kept getting 502 errors from NGINX with this code when I tried to establish a connection. See if you can spot the error…

I was being pretty dumb. You can’t have app listen anymore; server has to listen instead. app.listen quietly creates its own HTTP server, so the one the WebSocket server is attached to never starts listening, and the upgrade requests go nowhere. The correct code is:

server.listen(8080, function listening() {
  console.log('Listening on %d', server.address().port);
});

Leave a comment if this happened to you too… I will feel less dumb.

Getting Docker up on Linode, with Ubuntu 16.04

Unfortunately, getting Docker installed on Linode with Ubuntu 16.04 is not a straightforward process. The image that Linode provides for Ubuntu 16.04 has a custom Linux kernel installed. Docker requires AUFS, which is part of the linux-image-extra package. This package is tied to the kernel version, and since Linode’s kernel is custom, it is not clear how to load this package. To make this work, you have to revert your install back to the distribution kernel.

Here are the steps to get it working:

  • Deploy an Ubuntu 16.04 image on Linode
  • Follow these steps:
  • Then install the Docker-Engine:
  • Finally, lock-down and set things up: