DNS Tunnels for IoT

Urban sensor networks often have to deal with a particular challenge: how to get data from the device to a server. Depending on your budget and data needs, there are various options, from low-power wide area networks (LP-WANs such as LoRaWAN) to cellular modems.

Rain gauge, river-level, temperature and other sensors typically send a tiny amount of data periodically – a single measurement every 10 minutes, for example.

Low-power networks are ideal, but require some form of network to get the data from the device to a gateway, and ultimately on to a server. There are lots of efforts to create community networks, such as The Things Network, but they’re by no means universal.

Building a sensor with a cheap (think $2) ESP device1 is relatively easy, and these devices typically come with wi-fi built in. LoRa hats and antennae add to the cost – admittedly prices seem to have come down since I last looked a few years ago – this one’s about £14.

Given that wi-fi is pretty ubiquitous in urban areas, couldn’t we leverage this to send data? Enter DNS Tunnelling. I first read about this years ago when a proof of concept was developed to tunnel regular traffic over DNS – a potential way to gain Internet access without signing over your personal details to public wi-fi networks.

DNS Tunnelling takes advantage of the fact that, although public wi-fi networks typically intercept web browsing (to block you until you agree to the terms), they can’t reliably block DNS without breaking a whole bunch of browser functionality in the process.

So, theoretically, a device could connect to a public wi-fi network, send its data via DNS, receive instructions (in the DNS reply, if needed) and disconnect again – all without having to navigate login screens, and with minimal traffic.
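
To make this concrete, here’s a minimal sketch of the device side. It’s written in Python for readability (a real ESP sensor would do the same thing in Arduino/ESP-IDF code), and the domain t.example-iot.net and the label format are made up for illustration – the only real requirement is that the zone’s authoritative name server is one you control.

# Minimal client-side sketch: encode a reading into a DNS label and look it up.
# Hypothetical zone "t.example-iot.net", whose authoritative server we control.
import socket

def report_reading(sensor_id, reading):
    # e.g. sensor "s42" reading 21.5 becomes "s42-21-5.t.example-iot.net"
    label = f"{sensor_id}-{str(reading).replace('.', '-')}"
    hostname = f"{label}.t.example-iot.net"
    try:
        # The authoritative server decodes the label; the A record it returns
        # can carry simple instructions back to the device in its four octets.
        return socket.gethostbyname(hostname)
    except socket.gaierror:
        return None  # lookup blocked, or the server is unreachable

print(report_reading("s42", 21.5))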

[Diagram: a simple example of a temperature sensor reporting its reading via a public wi-fi network – the custom DNS request passes through the public network to a DNS server under the control of the IoT company, where it is translated into a meaningful message and stored as data.]

I started to look into this during my time working on IoT devices, as it would have significant cost benefits versus rolling out low-power networks or subscription cellular services.

A network of this type requires two key items: a device that can find and connect to public wi-fi networks (relatively easy with Arduino, Raspberry Pi, etc.), and a DNS server on the Internet capable of translating requests into specific packets of data (various options are ripe for modification).
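
For the second item, the authoritative name server for the tunnelling domain only needs to pull the measurement back out of the query name and answer with something small. Here’s a rough, stdlib-only Python sketch of the idea – no compression handling, no validation, and a fixed answer address – so treat it as a starting point rather than a deployable server.

# Rough sketch: decode readings from incoming DNS queries and send a fixed A record back.
import socket
import struct

def parse_qname(packet):
    # The question starts after the 12-byte header; labels are length-prefixed
    # and terminated by a zero byte.
    labels, pos = [], 12
    while packet[pos] != 0:
        length = packet[pos]
        labels.append(packet[pos + 1:pos + 1 + length].decode("ascii"))
        pos += 1 + length
    return ".".join(labels)

def build_reply(query, ip="127.0.0.1"):
    # Echo the transaction ID and question, mark the packet as a response,
    # and append one A record pointing back at the question name (0xc00c).
    end = 12
    while query[end] != 0:
        end += 1 + query[end]
    question = query[12:end + 5]  # name + QTYPE + QCLASS
    header = query[:2] + struct.pack(">HHHHH", 0x8180, 1, 1, 0, 0)
    answer = b"\xc0\x0c" + struct.pack(">HHIH", 1, 1, 60, 4) + socket.inet_aton(ip)
    return header + question + answer

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 53))  # port 53 needs root/CAP_NET_BIND_SERVICE
while True:
    data, addr = sock.recvfrom(512)
    print("received:", parse_qname(data))  # e.g. s42-21-5.t.example-iot.net
    sock.sendto(build_reply(data), addr)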

However, there are significant implications around using public networks in this way, which I’d never explored fully before I left that particular project. I do wonder, though, whether network operators would cause a fuss over a few kilobytes of data in the context of city-wide public wi-fi installations. So, while my closed tests suggested it would work quite well, a rollout of any kind would need to be properly scrutinised and sanctioned by the operators.

Nonetheless, communications remain a major factor – particularly for low-budget sensor networks. Costs seem to be falling and coverage is improving, and – if you are building out a low-power network – it’s worth considering the added benefits of joining one of the various community networks to pool resources.

  1. Yes, I know a $2 device isn’t going to have the best antenna, although still potentially good enough for this to work in many locations. ↩︎

GitHub Codespaces

Many production and supply chains have some form of retooling cost. Software is similar: it takes time to set up a development environment, ensure consistency and remind oneself of the work context.

A few years ago, I spent most days writing code in one form or another for broadly related projects. More recently, it’s become an occasional and quite scattered foray, so retooling has been more of a concern. The overhead of setting up my workspace, just to get a potentially small change made, is more significant.

Various options, contributions and ways of working have emerged over the years: synced profiles, configuration kept inside projects, and generally easier setups. Containerisation lends itself well to the development space, as do virtual environments, virtual machines, and so on.

GitHub Codespaces is an interesting and potent take on this. It effectively bundles the editor, working space, source control and developer testing. Of course, as part of GitHub, it also brings the wider sphere of testing and deployment closer.

I’m now revisiting my ways of working to make the most of GitHub Codespaces, pushing myself towards one-click editing, simplified self-testing and deployment. Crucially, it means that the wider dev environment is fully configurable and cloud-based, so I don’t need a desktop set up with all the requisite tools – a browser is enough.

It’s also giving new life to old kit. I’m using older devices that would ordinarily have been sent for recycling. Equally, the pressure is off to invest in something new.

A quick rattle through some other benefits:

  • Codespaces can be ephemeral, and I’ve specifically reduced their lifespan. This encourages regular commits, and thinking about clean deployments more routinely.
  • The entire dev environment is configured inside the repository on GitHub (see the sketch after this list), so I know every contributor has a like-for-like place to work. That reduces errors.
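
As a sketch of what that looks like in practice, a small .devcontainer/devcontainer.json committed to the repository might contain something along these lines (the image, extension and setup command here are illustrative, not prescriptive):

{
  "name": "my-project",
  "image": "mcr.microsoft.com/devcontainers/python:3.12",
  "postCreateCommand": "pip install -r requirements.txt",
  "customizations": {
    "vscode": {
      "extensions": ["ms-python.python"]
    }
  }
}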

Security is a potential winner, in various ways, but as usual there are pros, cons and professional factors to consider that are worth exploring separately.

Ultimately, it means I can pick up a project, make a needed change, and create a commit in a significantly faster way.

The Perfect Website Doesn’t Exist

This blog has been going for about 20 years. Various iterations of WordPress, a custom CMS, back to WordPress again.

It’s not my first blog. That was running all the way back in the 90s, before blogs were really a thing. I’ve lost most of it but I remember it was based on hand-editing HTML files in Vim. Posts were still dated, sorted by latest and contained various updates. Just seemed like a natural way to track updates at the time.

I’ve probably been responsible for the creation of upwards of a hundred websites so far. From personal projects, commercial ones, customer sites, intranets and hobbies. Many of the commercial ones still exist, although my work has largely been overwritten.

I have also created websites that create other websites – content management systems and blog engines. If I count those, there are probably a few hundred more. Sadly, all of them have disappeared, along with the companies I worked for that hosted them.

Along the way I’ve tried to adhere to principles based on well-established concepts. Good stable URLs, minimal overhead, decent semantics. It’s helped me steer clear of some of the fads over time, like single-page content sites.

Many of those principles have come from managing the back-end. I’ve created websites attracting millions of users, sometimes in a very short time. This firmly focuses the mind on responsiveness: avoid dynamic pages, minimise bytes, cache smart.

More recently, static website generators have caught my attention. These aren’t new – I built a couple of iterations back in the 90s: desktop apps that produced static HTML from a template. Nowadays the approach works well with source control and CI/CD, so a site can be edited in GitHub, prepared by Actions and deployed to a server.

Static websites are pretty much the fastest kind. No real processing on the client or the server, lots of caching opportunities, robust. For a blog – particularly one like this that isn’t that fussed about user comments – they’re almost a no-brainer. Why re-render the page for every user when they each receive the same content? There are quite a number of static site generators, and I’ve been using Jekyll for a while.

All good in principle, but it’s not enough. It misses some of the interesting – and oft-forgotten – aspects of the web.

What about redirects? This is a server configuration challenge – Apache lets you use .htaccess files, but those can be inefficient; Nginx needs redirects in its main configuration; others have in-between implementations.
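
With Nginx in front of a static site, a handful of legacy redirects can live in the server block. A sketch (the paths are invented):

server {
    # An old permalink that moved when the site was restructured:
    location = /2009/05/old-post/ {
        return 301 /posts/old-post/;
    }

    # A whole section that migrated to another host:
    location /gallery/ {
        return 301 https://photos.example.org$request_uri;
    }
}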

What about content negotiation? I’m quite interested in this, because the transfer of knowledge shouldn’t assume HTML. Want this in JSON? Fine. After text? Why not. PNG image? Well, that’s a bit strange but yeah let’s give it a go.

Language negotiation? Sure, why not. Apache kind-of supports this (as well as content negotiation): you end up with several versions of the same file, such as intro.en.html and intro.fr.json, and the server handles the selection. That feels like a good outcome, even if our generation step gets a little more complex.
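
As a rough illustration of the Apache (mod_negotiation) side of this, assuming the generator emits intro.en.html, intro.fr.html, intro.en.json and friends alongside each other:

# Sketch only – placement depends on the rest of the configuration.
<Directory "/var/www/site">
    Options +MultiViews
    AddLanguage en .en
    AddLanguage fr .fr
    AddType application/json .json
</Directory>

# A request for /intro with "Accept-Language: fr" and "Accept: text/html"
# should then be answered with intro.fr.html, with no application code involved.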

The World Wide Web is well into its thirties as I write, and some aspects keep coming round. Maybe that suggests they have a relevance we shouldn’t let go of; that the original intentions and designs were good or – at least – grounded in reasonable premises.

Keen to explore, as always, so watch this space…

Recombobulation

I’ve neglected blogging for a number of years. Bit of a shame, as I think I’ve had plenty of contributions to make in that time. Blogging, writing, everything is a habit.

With all the activity around Twitter, I’m less minded to post there. I’m not inclined to go to Threads, or Mastodon, or any of the other social media platforms. Hosting my thoughts here has a renewed appeal.

I don’t mind if 5, 50 or 5000 people read this – or none at all. Writing in this way is a means to release the ideas brewing and trapped in my head.

With that in mind, here are some things I want to write about, to describe my journeys. I aim to teach and support. Maybe they’ll be useful to you as well:

  • Document my adventures, including new frameworks, practices and lessons learnt.
  • Convert this blog to some kind of static generated site.
  • Talk about what I’m learning with 3D printing.
  • Record my travels.

So, there’s my starter for ten. I’ve promised this before and run out of steam fairly quickly. Let’s see how this goes…

Schlenkerla Rauchbier

It seems this one has a bit of fame. The “real Schlenkerla smoked beer” certainly doesn’t disappoint. It’s how I’d imagine kissing an ashtray. The smokiness comes through immediately and throughout and – if I dare to say – overpowers much of the remaining beer taste.

Not sure I’ll be back to this one in a hurry. The novelty wore off quickly, and it wasn’t the most pleasant to drink. Still, good as a reminder of some of the more extreme flavours out there.

Hercule Stout – Brasserie des Légendes

Named after the Belgian detective, Hercule Poirot, this has been on my to-try list for a while.

It’s a decent stout with a nice, dark flavour. Reminds me of chocolate, with a coffee bitterness that works well. Slightly stronger at 9% ABV. I’d certainly be happy to add this to the regular rotation when it becomes available.

Dev switching for Nginx Reverse Proxy with cookies

I’m a fan of using Nginx as a reverse proxy, and I’ve now used it for multiple projects.

Essentially, Nginx sits in front of multiple other web servers (Apache or other Nginx instances) and routes traffic according to a series of simple rules.

There are numerous benefits, but one particular use case is proving very useful. This is the ability to switch between a production and dev website using cookies.

Here’s a broad example (note other proxy/server config instructions are omitted)…

server {
    location / {
        # By default, serve from the production server(s):
        proxy_pass https://production;

        # If the special cookie appears in the request, redirect to the dev server:
        if ($http_cookie ~* "usedev31246") {
            proxy_pass https://development;
        }
    }

    # Special convenience URLs to enable/disable the cookies we need:
    location /enable-dev {
        add_header Set-Cookie "usedev31246;Path=/;Max-Age=9999999";
        return 302 /;
    }

    location /disable-dev {
        add_header Set-Cookie "usedev31246;Path=/;Max-Age=0";
        return 302 /;
    }
}

By default, a proxy connection is made to the production server for all requests, unless the request carries a cookie matching the string usedev31246, in which case it’ll be proxied to the development server instead.

For convenience, any user can set the cookie by visiting /enable-dev (which will redirect back to the home page). They can clear it by visiting /disable-dev.

This means you can quickly help somebody get to the dev site by directing them to https://example.org/enable-dev

Of course, you probably don’t want just anybody finding this, so it’s best to password-protect these locations and use a suitably hard-to-guess cookie string – but I leave that as a challenge to the reader 😉

This is quite nice for things like WordPress instances which tend to put the full URL in image and resource requests. By doing this, we don’t need to rewrite those URLs and worry about permalinks changing.

If you’re also using Nginx as a cache, beware: a user on the dev site might pollute/poison the shared cache with development content. Ensure the cache is bypassed before dev requests are fulfilled (this is good practice for a dev website anyway) – one approach is sketched below.
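
One way to handle that in Nginx is to key a cache-bypass flag off the same cookie. A sketch, assuming the proxy cache itself is configured elsewhere:

# In the http context: map the dev cookie to a flag.
map $http_cookie $bypass_dev_cache {
    default          0;
    "~*usedev31246"  1;
}

# In the relevant location block, alongside the existing proxy_cache settings:
proxy_cache_bypass $bypass_dev_cache;   # never serve these users from the cache
proxy_no_cache     $bypass_dev_cache;   # and never store their responses either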

Quarr Abbey, Goddards

Bit off-piste for my usual hunts, but this is from the Isle of Wight, so a local brew as well as a former stomping ground, and brewed in a Belgian “Dubbel” style.

Apparently, they don’t usually sell these in kegs, but the local shop had a supply so a 500ml “Crowler” was needed!

Can’t say I’m a huge fan of the beer on presentation. Weak, slightly metallic taste – hints of darker Belgian beers – creamy and chocolatey. Just not quite meeting expectations. However, a big disclaimer – that might well be down to the way it arrived (also worth noting this was near the end of the stock). I’d be keen to try it again in bottled form and – being so close – it’s a great opportunity to support local breweries.

Triple Secret des Moines

Light (in colour) blonde beer with a slight fermentation aftertaste, at 8.0% ABV. The aroma is a little sour on first impression, but the taste takes hold pretty quickly. It’s not a “bad” beer, but I’m not fussed about finding it again.

Switching Virtual and Remote Desktops

If I’m using full-screen remote desktop in Windows 10, it can be handy to have my local computer on one virtual desktop, and the remote computer on another.

I usually turn off the title bar, so it’s a seamless experience.

Switching from local to remote is easy: Ctrl + Windows + Right Arrow.

Switching back is more difficult, since the remote desktop will capture your keys.

The trick is to press Ctrl + Alt + Home first. This brings up the Remote Desktop title bar, which means you can then press Ctrl + Windows + Left to switch back.