DNS Tunnels for IoT

Urban sensor networks often have to deal with a particular challenge: how to get data from the device to a server. Depending on your budget and data needs, there are various options, from low-power wide area networks (LP-WANs such as LoRaWAN) to cellular modems.

Rain gauge, river-level, temperature and other sensors typically send a tiny amount of data periodically – a single measurement every 10 minutes, for example.

Low-power networks are ideal, but they still need some infrastructure to get the data from the device to a gateway, and ultimately on to a server. There are lots of efforts to create community networks, such as The Things Network, but they’re by no means universal.

Building a sensor with a cheap (think $2) ESP device¹ is relatively easy, and these boards typically come with wi-fi built in. LoRa hats and antennae add to the cost – admittedly they seem to have come down since I last looked a few years ago – this one’s about £14.

Given that wi-fi is pretty ubiquitous in urban areas, couldn’t we leverage this to send data? Enter DNS Tunnelling. I first read about this years ago when a proof of concept was developed to tunnel regular traffic over DNS – a potential way to gain Internet access without signing over your personal details to public wi-fi networks.

DNS Tunnelling takes advantage of the fact that, although public wi-fi networks typically intercept web browsing (to block you until you agree to terms), they can’t reliably block DNS without breaking a whole bunch of browser functionality in the process.

So, theoretically, a device could connect to a public wi-fi network, send its data via DNS, receive instructions (in the DNS reply, if needed) and disconnect again, all without having to navigate login screens and minimising traffic.

Diagram: a sensor reports its temperature through a custom DNS request, which passes through a public wi-fi network to a DNS server under the control of the IoT company, where it is translated into a meaningful message and stored as data.
A simple example of a temperature sensor reporting its reading via a public wi-fi network
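
To make that flow concrete, here’s a minimal sketch of the device side. It’s desktop Python rather than firmware, and the tunnel domain (telemetry.example.com) and device ID (rg01) are made up for illustration – the point is simply that the reading travels as the left-most label of an ordinary DNS lookup.

```python
import socket

TUNNEL_DOMAIN = "telemetry.example.com"  # hypothetical zone we control
DEVICE_ID = "rg01"                       # hypothetical device identifier

def report(reading_c):
    # Encode the reading as tenths of a degree so the label stays a
    # plain integer (dots aren't allowed inside a DNS label): 21.5 -> "t215".
    label = f"t{round(reading_c * 10)}"
    hostname = f"{label}.{DEVICE_ID}.{TUNNEL_DOMAIN}"
    try:
        # The lookup itself is the message; the answer (if any)
        # could carry instructions back to the device.
        socket.gethostbyname(hostname)
    except socket.gaierror:
        # NXDOMAIN surfaces as an error locally, but the query has
        # still left the device and travelled through the resolvers.
        pass

report(21.5)  # queries t215.rg01.telemetry.example.com
```

On real hardware this would be MicroPython or Arduino code rather than desktop Python, but the shape is the same: data goes out in the query name, instructions come back in the answer.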

I started to look into this during my time working on IoT devices, as it would have significant cost benefits versus rolling out low-power networks or subscription cellular services.

A network of this type requires two key items: a device that can find and connect to public wi-fi networks (relatively easy with Arduino, Raspberry Pi, etc.), and a DNS server on the Internet capable of translating requests into specific packets of data (various options are ripe for modification).
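
On the server side, something like the sketch below would do: it pulls the encoded reading out of the query name and answers with a token A record. I’ve assumed the Python dnslib package purely for brevity – any DNS library, or a patched existing server, would work just as well.

```python
import socket

from dnslib import DNSRecord, RR, QTYPE, A  # assumes the dnslib package is installed

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 53))  # this host is authoritative for telemetry.example.com

while True:
    data, addr = sock.recvfrom(512)
    request = DNSRecord.parse(data)
    qname = str(request.q.qname)            # e.g. "t215.rg01.telemetry.example.com."
    labels = qname.rstrip(".").split(".")

    if len(labels) >= 2 and labels[0][:1] == "t" and labels[0][1:].lstrip("-").isdigit():
        reading = int(labels[0][1:]) / 10   # "t215" -> 21.5
        device = labels[1]
        print("stored", device, reading)    # in practice: write to a database

    # Any instruction for the device could be encoded in the answer;
    # here we just return a fixed test address so the lookup succeeds.
    reply = request.reply()
    reply.add_answer(RR(request.q.qname, QTYPE.A, rdata=A("192.0.2.1"), ttl=60))
    sock.sendto(reply.pack(), addr)
```

Binding to port 53 needs elevated privileges, and the tunnel domain’s NS records have to point at this host so that public resolvers forward the queries to it.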

However, there are significant implications around using public networks in this way, which I’d never explored fully before I left that particular project. I do wonder whether network operators would really cause a fuss over a few kilobytes of data in the context of city-wide public wi-fi installations, but while my closed tests suggested it would work quite well, a rollout of any kind would need to be properly scrutinised and sanctioned by the operators.

Nonetheless, communications remain a major factor, particularly for low-budget sensor networks. Costs are falling and coverage is improving, and if you are building out a low-power network, consider the added benefits of joining one of the various community networks to pool resources.

  1. Yes, I know a $2 device isn’t going to have the best antenna, although still potentially good enough for this to work in many locations. ↩︎

GitHub Codespaces

Many production and supply chains have some form of retooling cost. Software is similar: it takes time to set up a development environment, ensure consistency and remind oneself of the work context.

A few years ago, I spent most days writing code in one form or another for broadly related projects. More recently, it’s become an occasional and quite scattered foray, so retooling has been more of a concern. The overhead of setting up my workspace, just to make a potentially small change, is more significant.

Various options, contributions and ways of working have emerged over the years, from synced profiles to config inside projects and easier setups. Containerisation lends itself well to the development space, as do virtual environments, virtual machines, and so on.

GitHub Codespaces is an interesting and potent take on this. It effectively bundles the editor, working space, source control and developer testing. Of course, as part of GitHub, it also brings the wider sphere of testing and deployment closer.

I’m now revisiting my ways of working to make the most of GitHub Codespaces, pushing myself towards one-click editing, simplified self-test and deployment. Crucially, it means that the wider dev environment is fully configurable and cloud-based, so I don’t need a desktop set up with all the requisite tools – a browser is enough.

It’s also giving new life to old kit. I’m using older devices that would ordinarily have been sent for recycling. Equally, the pressure is off to invest in something new.

A quick rattle through some other benefits:

  • Codespaces can be ephemeral, and I’ve specifically reduced their lifespan. This encourages regular commits, and thinking about clean deployments more routinely.
  • The entire dev environment is configured inside the repository in GitHub, so I know every contributor has a like-for-like place to work. That reduces errors.

Security is a potential winner, in various ways, but as usual there are pros, cons and professional factors to consider that are worth exploring separately.

Ultimately, it means I can pick up a project, make a needed change and commit it significantly faster.

The Perfect Website Doesn’t Exist

This blog has been going for about 20 years. Various iterations of WordPress, a custom CMS, back to WordPress again.

It’s not my first blog. That was running all the way back in the 90s, before blogs were really a thing. I’ve lost most of it, but I remember it was based on hand-editing HTML files in Vim. Posts were still dated, sorted by latest and contained various updates. It just seemed like a natural way to track updates at the time.

I’ve probably been responsible for the creation of upwards of a hundred websites so far. From personal projects, commercial ones, customer sites, intranets and hobbies. Many of the commercial ones still exist, although my work has largely been overwritten.

I have also created websites that create other websites – content management systems and blog engines. If I count those, there are probably a few hundred more. Sadly all of those disappeared, along with the companies I worked for that hosted them.

Along the way I’ve tried to adhere to principles based on well-established concepts. Good stable URLs, minimal overhead, decent semantics. It’s helped me steer clear of some of the fads over time, like single-page content sites.

Many of those principles have come from managing the back-end. I’ve created websites attracting millions of users, sometimes in a very short time. This firmly focuses the mind on responsiveness: avoid dynamic pages, minimise bytes, cache smart.

More recently, static website generators have caught my attention. These aren’t new – I built a couple of iterations back in the 90s, desktop apps that produced static HTML from a template. Nowadays this works well with source control and CI/CD, so a site can be edited in GitHub, prepared by Actions and deployed to a server.

Static websites are pretty much the fastest kind. No real processing on the client or the server, lots of cache opportunities, robust. For a blog – particularly one like this that isn’t that fussed about user comments – they’re almost a no-brainer. Why re-render the page for every user when they each receive the same content? There are quite a number of static site generators, and I’ve been using Jekyll for a while.

All good in principle, but it’s not enough. It misses some of the interesting – and oft-forgotten – aspects of the web.

What about redirects? This is a server configuration challenge: Apache lets you use .htaccess files, but those can be inefficient; Nginx needs redirects in its main configuration; other servers sit somewhere in between.

What about content negotiation? I’m quite interested in this, because the transfer of knowledge shouldn’t assume HTML. Want this in JSON? Fine. After text? Why not. PNG image? Well, that’s a bit strange but yeah let’s give it a go.

Language negotiation? Sure, why not. Apache kind-of supports this (for content types as well), and you end up with various versions of the same file, such as intro.en.html and intro.fr.json. The server handles the selection, which feels like a good outcome even if our generation step gets a little more complex.
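
As a rough illustration of the selection step (server-agnostic, with the header parsing heavily simplified and the variant names mine), this is the kind of decision a negotiating server makes against those pre-generated files:

```python
# Pre-generated variants, following the intro.en.html / intro.fr.json naming above.
AVAILABLE = {
    ("text/html", "en"): "intro.en.html",
    ("text/html", "fr"): "intro.fr.html",
    ("application/json", "en"): "intro.en.json",
}

def preferences(header):
    """Return the listed values in order of appearance (q-values ignored for brevity)."""
    return [part.split(";")[0].strip() for part in header.split(",") if part.strip()]

def negotiate(accept, accept_language):
    for media in preferences(accept):
        for lang in preferences(accept_language):
            variant = AVAILABLE.get((media, lang.split("-")[0]))
            if variant:
                return variant
    return None  # a real server would fall back to a default or return 406

print(negotiate("application/json, text/html;q=0.8", "fr-FR, en;q=0.5"))
# -> intro.en.json (there's no French JSON variant, so English wins)
```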

As the World Wide Web is well into its thirties as I write, some aspects keep coming round. Maybe that suggests they have a relevance that we shouldn’t let go of; that original intentions and designs were good or – at least – grounded in reasonable premises.

Keen to explore, as always, so watch this space…

Recombobulation

I’ve neglected blogging for a number of years. Bit of a shame, as I think I’ve had plenty of contributions to make in that time. Blogging, writing, everything is a habit.

With all the activity around Twitter, I’m less minded to post there. I’m not inclined to go to Threads, or Mastodon, or any of the other social media platforms. Hosting my thoughts here has a renewed appeal.

I don’t mind if 5, 50 or 5,000 people read this – or none at all. Writing in this way is a means to release ideas brewing and trapped in my head.

With that in mind, here are some things I want to write about, to describe my journeys. I aim to teach and support. Maybe they’ll be useful to you as well:

  • Document my adventures, including new frameworks, practices and lessons learnt.
  • Convert this blog to some kind of static generated site.
  • Talk about what I’m learning with 3D printing.
  • Record my travels.

So, there’s my starter for ten. I’ve promised this before and run out of steam fairly quickly. Let’s see how this goes…

Google AJAXSLT

http://goog-ajaxslt.sourceforge.net/

Google have released, under the BSD License, JavaScript code for performing AJAX and XSLT operations in most modern browsers.

It appears to be compatible with (amongst other browsers) only version 6 of Internet Explorer, leaving out the 7% of IE5.x users (still a significant number). For a commercial piece of software that’s interesting – Google need as near universal support as possible, but they’ve chosen either to reject those users outright or to degrade to a plain HTML version (as with GMail) rather than include IE5 support in this package.

Still, as more and more applications designed to work with XSLT and AJAX come online, surely the incentive for users to upgrade their PCs can only increase.

Five Live blogging

http://www.bbc.co.uk/fivelive/programmes/upallnight.shtml

BBC Radio Five Live last night had a section on blogs (I think they do it every Tuesday night/Wednesday morning at 2am UK time). The focus was on business blogging – how to make money from blogs – but it also included a story on the forthcoming Iranian elections and an interview with Microsoft blogger Robert Scoble. You can listen to the show by following the link above and choosing the ‘Wed’ show (but it’s only there for a week…)

Yahoo! buys blo.gs

"the sale of blo.gs has been completed, and i’m proud to announce that yahoo! has acquired the service. as of right now, give or take a few minutes, yahoo! is running blo.gs." – Jim Winstead, today (http://trainedmonkey.com/entry/2251)

Veeeeerrryyy interesting. The Yahoo! favicon and copyright is already up there 🙂

Letters to the DHSS

http://www.mick.tilbury.btinternet.co.uk/index.htm?dhss.htm

" In accordance with your instructions I have given birth to twins in the enclosed envelope."

" I am pleased to inform you that my husband who was reported missing, is dead."

" The toilet is blocked and we can’t bath the children until it is cleared."

Extracts from actual letters (written in good faith) to the UK Social Security.

It all makes sense

http://www.dynamite.co.uk/local/

Google Maps with data showing travel news, speed cameras, local photos, local websites, and potentially much more.

This kind of demonstration is a real eye-opener – both in terms of how dynamic HTML & datasets can make truly excellent applications and how otherwise unassociated data sources can be pulled together gracefully into a common interface. My head is already buzzing with new ideas after seeing this…

Processing

http://www.processing.org/

" Processing is a programming language and environment for people who want to program images, animation, and sound."

Runs from within Java, so the results are quite portable. Some of the exhibition material for this neat little language is amazing.

Thanks Dave M