Sunday, July 19, 2015

Wireless charging for the Parrot Jumping Sumo

I've successfully managed to add a basic wireless charging system to my Parrot Jumping Sumo - this opens the door for self-propelled docking and charging - no human input needed. The module itself is from Little Bird Electronics.

The next step is a Pi with a docking dashboard for starting and stopping the charger, reporting on charge state etc. Below you can see it actually charging (indicated by the red LED in the top-right).

Wednesday, June 10, 2015

Parrot Jumping Sumo

My amazing wife bought me one of these for my Birthday and it is awesome!

That is all :)

Saturday, May 9, 2015

A Forest of Sentences

Machine learning as a service is awesome!

I really like the idea of the Google Prediction API. As in really like it. I especially like that it supports both numbers and strings for the training inputs, out of the box.

I quickly found that it is a bit fiddly to set up for just playing around with ideas though, and you need to pay for some of their cloud storage for your training data.

That led me down the rabbit hole of whether I could use RandomForest algorithms (currently regarded as pretty awesome for minimal-configuration machine learning) to perform the same sort of basic machine learning tasks as suit Google's service.

I decided to start with Google's "Hello Prediction" example, which classifies sentences as being in English, Spanish or French.

The obvious issue here is that these algorithms expect input arrays of floats for training, not my random assortment of sentences - cue rabbit hole number two.

I'd been interested in fuzzy grouping of similar sentences/strings for a long time, and had had some small successes (and epic failures) trialling an idea where I would use a set of three "landmark" sentences to "localise" a new sentence in three-dimensional space. The position in each dimension would be calculated using the Levenshtein distance (or similar) from each landmark. I had hoped this would allow for sentence grouping and, of course, cool three-dimensional scatter diagrams. As I said, this didn't really work.

That work did give me an idea for creating vector representations of strings though:
  1. Randomly select a small (100-200) subset of all the training inputs for a feature. In the case of Hello Prediction, this was 200 of the sentences in the training set. These sentences become the "landmarks" for the input array generator.
  2. For each sentence in the training data, measure the distance (I used the FuzzyWuzzy token set ratio) between the training sentence and each landmark, divided by the training sentence length. This creates a 200-element array of distances per training sentence, in my example.
  3. Train a RandomForestRegressor using the languages (mapped to integers) as the targets and the 200-element arrays as input features.
  4. Profit?
For each new sentence, perform the same translation to a 200-element array and pass that to the model's predict() method. This seems to work remarkably well, though it sometimes classifies Spanish sentences as French:


Loading training data... Done!
Preparing vectoriser... Done!
Preparing training data... Done!
Training RandomForest... Done!

Hello mate, how are you? I need to find the toilet. ('English', 99)
Bonjour compagnon , comment êtes-vous? Je dois trouver les toilettes . ('French', 87)
Hola amigo, ¿cómo estás? Necesito encontrar el inodoro. ('Spanish', 89)
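The landmark vectoriser in steps 1 and 2 can be sketched with the standard library alone. In this sketch difflib's SequenceMatcher stands in for the FuzzyWuzzy token set ratio, the landmark count is tiny for readability, and all the names are mine rather than those of the worked example:

```python
import difflib
import random

def build_vectoriser(training_sentences, n_landmarks, seed=42):
    """Pick random landmark sentences; return a sentence -> vector function."""
    rng = random.Random(seed)
    landmarks = rng.sample(training_sentences,
                           min(n_landmarks, len(training_sentences)))

    def vectorise(sentence):
        # one similarity score per landmark, scaled by the sentence length
        return [difflib.SequenceMatcher(None, sentence, lm).ratio()
                / max(len(sentence), 1)
                for lm in landmarks]

    return vectorise

training = ["hello friend", "bonjour ami", "hola amigo", "good morning"]
vectorise = build_vectoriser(training, n_landmarks=3)
features = vectorise("hello there")
print(len(features))   # 3: one scaled distance per landmark
```

The resulting fixed-length float arrays are exactly what step 3 feeds to scikit-learn's RandomForestRegressor.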


This worked-example code is available on my GitHub, and I'll attempt to apply it to some more difficult problems in future posts.

Friday, May 8, 2015

On GenghisIO

More than eight months have passed since I announced GenghisIO, my attempt to remove the installation barriers for real world robotics programming.

Since then the project has progressed significantly. Unfortunately, progress has now slowed considerably.

The long and the short of it is that with a full-time job and family I simply don't have the time to commit to developing such a large undertaking. It has also become clear that whilst I am solving some "really cool technical problems" with this platform, my sales pitch was actually more of a 10-minute show-and-tell essay.

This was making it very hard to convince people of the project's worth, which in turn was making it really hard to stay motivated. Frankly, if the "need" for this project is so hard to explain, maybe it doesn't exist as much as I'd like it to.

With all that in mind, GenghisIO is going on the back burner for now and I expect I will shortly open-source the development done so far so others may benefit.

Monday, July 28, 2014

Announcing GenghisIO

For the last twelve months I've been working on a "secret" project that's got me very, very excited! Even better, it's finally at a point where I can show you all something substantial.

The aim of the project, GenghisIO, is the development of a web-delivered platform for no-installation interactive robotics programming, using Android and iOS devices to bridge the gap between your software and your robot.

The targeted platforms are currently Sphero, LEGO NXT and IOIO, with "more to come" of course.

The video below shows the current test app UI, with a simple program driving the Sphero 2.0 around in a square pattern. The scrolling text shows real time serial communications with the Sphero.

GenghisIO is extremely alpha, but if you're interested in finding out more, I recommend following GenghisIO on Twitter where I'll be posting previews, updates, beta invites etc etc.

Happy robots!

Saturday, July 19, 2014

Simple Python Turing machine implementation

I've always been a fan of using problem-solving to learn; over the years I've found it far more effective than rote learning or being lectured. Recently I came across this interesting implementation of a Turing machine. I'd obviously heard of Turing machines but I had never sat down and thought about how to "build" one.

Feeling inspired, I cooked up this fairly simple Python example of a Turing machine*, using a rule set that counts up in binary on the "tape". It counts up to 64, printing out the tape each time the head resets to the start position.
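As a sketch of what such a machine can look like (the rule encoding and names here are my own, not necessarily those of the original example): a dict serves as the "infinite" tape, and a (state, symbol) -> (write, move, next state) table drives a binary-increment machine.

```python
def run(rules, tape, state, head):
    """Step the machine until it halts; '_' is the blank symbol."""
    while state != 'halt':
        symbol = tape.get(head, '_')
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += {'L': -1, 'R': 1}[move]
    return tape

# increment rules: run to the right end of the number, then carry leftwards
rules = {
    ('right', '0'): ('0', 'R', 'right'),
    ('right', '1'): ('1', 'R', 'right'),
    ('right', '_'): ('_', 'L', 'carry'),
    ('carry', '1'): ('0', 'L', 'carry'),    # 1 + carry -> 0, keep carrying
    ('carry', '0'): ('1', 'L', 'rewind'),   # 0 + carry -> 1, done
    ('carry', '_'): ('1', 'L', 'rewind'),   # extend the number leftwards
    ('rewind', '0'): ('0', 'L', 'rewind'),
    ('rewind', '1'): ('1', 'L', 'rewind'),
    ('rewind', '_'): ('_', 'R', 'halt'),
}

tape = {0: '0'}
for _ in range(64):                         # increment 64 times
    tape = run(rules, tape, 'right', 0)

binary = ''.join(tape[k] for k in sorted(k for k in tape if tape[k] != '_'))
print(binary)   # 1000000, i.e. 64 in binary
```

The tape happily grows leftwards into negative positions as the number gets longer, which is all the "infinity" this example needs.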

* In this case, the "infinite tape" is only infinite for values of infinity less than the roughly 3 GB of free RAM that my laptop had at the time :)

Thursday, July 10, 2014

Detecting WiFi clients on TP-LINK routers using Python and telnetlib

Inspired by this project on Hackaday where submitter Mattia used Python to nmap scan his WiFi network, triggering alerts when particular MAC addresses are found, and with my dreams of home-automation in mind, I came up with a slightly different way of achieving the same thing.

My router is a cheapo TP-LINK, but it does come with a "currently connected MAC addresses" page in the web interface, so my first thought was to use BeautifulSoup to do some parsing. Then I found references to a telnet interface.

Connecting to the telnet interface, I quickly found that the command "wlctl assoclistinfo" gave me this output:

Associated stations:4
Num     Mac Address        Time
1    F0:C1:F1:22:xx:xx    00:02:30:04
2    90:21:55:B0:xx:xx    01:02:20:26
3    00:0F:54:10:xx:xx    03:09:17:28
4    74:E1:B6:2C:xx:xx    30:04:37:48 

Firing up Python and the telnetlib telnet-automation module meant that 10 minutes later I was printing comma-separated MAC addresses to the console using this snippet of code:
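The snippet boiled down to something like the following sketch. The host, credentials and prompt strings are placeholders (they vary by firmware), and telnetlib is imported lazily so the parsing half stands on its own:

```python
import re

# matches a colon-separated MAC address; the 4-field uptime values
# in the assoclist output are too short to trigger a false positive
MAC_RE = re.compile(r'(?:[0-9A-Fa-f]{2}:){5}[0-9A-Fa-f]{2}')

def parse_macs(assoclist_output):
    """Pull MAC addresses out of 'wlctl assoclistinfo' output."""
    return MAC_RE.findall(assoclist_output)

def connected_macs(host, user, password):
    import telnetlib   # stdlib telnet client
    tn = telnetlib.Telnet(host)
    tn.read_until(b'Login:')
    tn.write(user.encode('ascii') + b'\n')
    tn.read_until(b'Password:')
    tn.write(password.encode('ascii') + b'\n')
    tn.read_until(b'#')
    tn.write(b'wlctl assoclistinfo\n')
    output = tn.read_until(b'#').decode('ascii', 'replace')
    tn.close()
    return parse_macs(output)

sample = ("Associated stations:2\n"
          "Num     Mac Address        Time\n"
          "1    F0:C1:F1:22:33:44    00:02:30:04\n"
          "2    90:21:55:B0:AA:BB    01:02:20:26\n")
print(','.join(parse_macs(sample)))
```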

Finally, I am triggering this in my Raspberry Pi via a simple crontab entry:

* * * * * logger `python /home/pi/wlan_sensor/`

This gives me per-minute logging of WiFi clients, providing the information I need for the first of my home-automation projects - turning on the lights when I get home.

Monday, April 21, 2014

Dynamic CSS updates without page refresh

I'm currently prototyping some CSS for a small webpage and this little trick occurred to me to save having to press F5 every couple of seconds (requires jQuery):
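The trick was roughly this (the interval and the cache-busting parameter name are arbitrary choices of mine):

```javascript
// Rewrite every stylesheet's href with a fresh cache-busting query
// parameter, four times a second. Requires jQuery on the page.
function bustCache(href, stamp) {
  // drop any previous cache-busting parameter before appending a new one
  return href.split('?')[0] + '?reload=' + stamp;
}

if (typeof jQuery !== 'undefined') {
  setInterval(function () {
    jQuery('link[rel=stylesheet]').each(function () {
      jQuery(this).attr('href', bustCache(jQuery(this).attr('href'), Date.now()));
    });
  }, 250);
}
```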

The script simply reloads each CSS stylesheet four times a second, giving me a near real-time CSS preview.

Sunday, December 22, 2013

Sneak-peek at what I've been working on for the last six months...

Sorry about the aspect ratio, still sorting out a proper screen-caster that'll do Android and PC concurrently.

Tuesday, July 16, 2013

Django doctesting with WebTest

I’ve been a big fan of Python’s doctest since I first worked with Zope 3. I know a lot of people knock it, but there was always a sort of "magic" in pasting use-cases into rst documents, inserting some Python, and you're done.

Recently I've been working on a number of Django applications and I really wanted to re-use this pattern.

Initially, I used the built-in django.test.client - this was a fairly close approximation of Zope's testbrowser and led to doctests like:

Where this falls down is the testing of forms - most recently I was testing the uploading of a file and the various server-side validations that would trip (name, size, contents etc). To do this in django.test.client, you must use the post() method with the following result:

The testing of file uploads is even worse.

Trying to solve this problem I came across this excellent slideshow about using WebTest. This looked like the perfect solution with its simple form access and file upload wrappers. Combining WebTest with django-webtest gave me a very similar base API to django.test.client.

Here I ran into a problem though. All the demos and documentation for WebTest showed usage in unit tests. A Google search for "doctest WebTest" wasn't helpful either. Pulling out Python's dir() function, I discovered a very interestingly named class DjangoTestApp in django_webtest. A couple of minutes later and my doctests looked like this (abbreviated):

The best bit was the actual uploading of files - the "name" and "content" is just assigned to the field on the form:

This is an incredibly elegant interface and allowed me to quickly perform a huge range of upload testing.

Why not just use unittest, you ask?

Simply put, a doctest can be handed to the client, my line manager or any co-worker, and they can line it up against a set of functional requirements, or just their domain knowledge. The same simply cannot be said for something like the following (from here):

Monday, March 11, 2013

Python iRacing API client

Some of you will already know that I am a massive fan of the racing simulator iRacing. I signed up to this "game" a couple of years ago and it's led to the purchase of new computers, steering wheels and even a dedicated "racing seat".

What I've always wanted to do was build a basic motion rig for the game, something that at least would represent lateral acceleration to give me a better idea of my grip limits when driving. The first step towards this is parsing the telemetry from the game.

iRacing has a telemetry API that's solely documented in the manner of a demo client built from a pile of C code. Whilst I am certainly capable of programming in C, my preference is definitely my pet language, Python. Some brave souls have built clients in C#, but that isn't much better.*

Aside from my own bias, Python is a really nice language to use as a new programmer, so developing this client should allow for more people to develop iRacing add-ons which can only be a good thing!

Anyway, I decided to build a Python implementation of a client for iRacing. The API uses a memory mapped file, something Python happily supports out of the box, as well as C data structures for all the high-frequency telemetry which I was able to parse using Python's struct module.
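The mechanics are easy to illustrate with the standard library alone. This toy example uses an invented three-field record rather than iRacing's actual layout, and a temporary file in place of the game's shared memory-mapped region:

```python
import mmap
import struct
import tempfile

# hypothetical telemetry record: tick count (int32) plus two float32s,
# little-endian, exactly as a C struct would lay them out
RECORD = struct.Struct('<iff')

with tempfile.TemporaryFile() as f:
    # in the real client this file is the game's memory-mapped region;
    # here we write a fake sample so the read-back is self-contained
    f.write(RECORD.pack(1024, 62.5, 7200.0))
    f.flush()
    mm = mmap.mmap(f.fileno(), RECORD.size)
    tick, speed, rpm = RECORD.unpack(mm[:RECORD.size])
    mm.close()

print(tick, speed, rpm)   # 1024 62.5 7200.0
```

The real work in a client like this is transcribing the C header's struct layouts into format strings; the reading side stays this small.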

As I was secretly hoping, Python's dynamic-programming abilities allowed me to write a client in short order, using very little code.

The code's right here on GitHub. It's, honestly, quite rough-and-ready, but it works. Please do let me know if you have success or problems with it, I'll do my best to help out. I expect it'll be updated regularly, or whenever I find a bug...

You'll have to extend and modify it to work with your application, but that's half the fun - enjoy! :)

* Whilst I'm not a huge fan of C#, the in-line debugging/inspection functionality in Visual Studio was a god-send when I applied it to the C# demo client.

Sunday, January 27, 2013

My home audio streaming setup

As a slightly different post I thought I'd share the details of my multi-room audio streaming setup.

There's nothing special about it, but it's a very simple and cheap system that actually works really well, allowing multiple sources and speakers (with simultaneous playback), across a range of hardware and operating systems. I'm not going to link to every app and provide installation guides, that's what Google's for :)

The components
PC 1
  • Role(s): Source + destination + media share
  • OS: Windows 7
  • Software: iTunes (source), Shairport4w (destination), Air Playit HD (media share)
PC 2
  • Role(s): Source + destination + media share
  • OS: Windows 7
  • Software: Airfoil (source), Shairport4w (destination), Air Playit HD (media share)
Raspberry Pi
  • Role(s): Destination
  • OS: Raspbian "wheezy"
  • Software: shairport
iPad 2
  • Role(s): Media share -> destination bridge
  • OS: iOS 6
  • Software: Air Playit HD
How it all works together
Ignoring the iPad and the Air Playit HD software for a moment, the rest of the system involves Apple AirPlay sources and destinations. To be honest, I'm not a huge fan of some of Apple's work, but in this case AirPlay was simply the "right tool for the job".

Looking at the PCs, iTunes on the first one "just works", sending audio to one or more (simultaneously) destinations without a problem. Airfoil on the second PC allows audio capture from running programs (eg VLC) and sends that out in the same manner as iTunes to one or more destinations. Both PCs then perform double-duty as destinations thanks to the excellent Shairport4w.

The Raspberry Pi acts purely as a lightweight destination, thanks to the installation of shairport. I've attached an external USB sound card which helps with the sound quality.

Separately to this, Air Playit HD allows the iPad to play music/video off either of the PCs - the software on the PCs shares chosen folders and there's a client on the iPad of course. Any played music can then be pushed (via the iPad's built-in AirPlay tool) to a single destination.

These components work really well together, and allow me to scale the system really easily in the future. I hope they give you a good idea or two - let me know your versions in the comments!

Hat tip
Lifehacker's articles on setting up a Raspberry Pi and AirPlay streaming from/to all sorts of devices.

Sunday, November 4, 2012

Use a Kindle and Python as a clock/dashboard

When the Kindle Wifi came out I snapped it up and it became the most used electronic device I've ever owned. Then the Kindle Touch came out and I got that too (by which point I was well on the way to becoming an Amazon fanboy...)

The only problem was that I couldn't actually read two Kindles at once.

Then I came across a post by Matthew Petroff - Kindle Weather Display. This sparked my curiosity and I decided to build a clock/dashboard with the Wifi - something that'd show things like number of unread emails, weather etc. This appealed to me as the Kindle has fantastic battery life, is silent, and the e-ink display is both large and invisible at night (when I really don't want to know the time). The goal was something I could glance at in the morning before going to work and ignore otherwise.

Initially I intended to go down the route of jailbreaking to display an image like Matthew did, but I didn't have any luck with that on my particular device. It then occurred to me that I could just use the built-in web-browser to display a self-updating page. The only blocker to this was stopping the screensaver from turning on, something I was able to work around. The browser chrome was not pretty, but also not a deal-breaker.

From then on it was all about building an appropriate web site, serving it (on my desktop, but it'll work on anything running Python) and pointing the Kindle's browser at the right URL. So far, I've managed to show the time (updated on the minute), the current temperature, today's forecast and today's agenda from my Google Calendar. There's nothing magical there and as the site can be displayed on any JS-aware web-browsing device I'm sharing this project on GitHub. It'll change a lot over time but hopefully there are some basics you can use in your own project.
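The serving side needs nothing beyond the standard library. A stripped-down sketch in modern Python (the real project layers the weather, calendar and email data on top, and my page/handler names are illustrative):

```python
import datetime
from http.server import BaseHTTPRequestHandler, HTTPServer

# a page that asks the browser to refresh itself every 60 seconds
PAGE = ('<html><head><meta http-equiv="refresh" content="60"></head>'
        '<body><h1>{time}</h1></body></html>')

class ClockHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = PAGE.format(
            time=datetime.datetime.now().strftime('%H:%M')).encode('utf-8')
        self.send_response(200)
        self.send_header('Content-Type', 'text/html')
        self.send_header('Content-Length', str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# point the Kindle's browser at http://<desktop-ip>:8000/ after running:
# HTTPServer(('', 8000), ClockHandler).serve_forever()
```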


Update 05/11/2012: I've now added a count of unread emails in my Gmail inbox so head over to the GitHub repo if you're interested in that sort of functionality for your own project. I've also got it working on a Raspberry Pi without modification, and it's perfectly fast enough for my use.

Sunday, August 5, 2012

Getting a Logitech C270 webcam working on a Raspberry Pi running Debian

I thought it was about time I shared something after all the hours I've committed to Pi-hacking and the above title says it all. These instructions are very simple but should hopefully save you some trial-and-error.

Importantly, hat-tip to Insipid Ramblings and Hexxeh for their info and work that helped me get this far.

Firstly, I started with a slightly old version of Debian - namely the debian6-19-04-2012 image. Your results may vary depending on what version you use. I am also assuming that you have already installed the image and can successfully boot to the desktop.

So, here goes:

1. Add UVC support to the image
Download and run rpi-update as described here. This will update your image to include the initially-missing UVC support. Reboot as suggested.

2. Update your packages
sudo apt-get update
sudo apt-get upgrade 

3. Install the guvcview webcam viewer
sudo apt-get install guvcview

4. Set up your permissions and enable the driver
sudo usermod -a -G video pi
sudo modprobe uvcvideo

Reboot the Pi.

5. Open up the cam (these are the settings that worked for me)
guvcview --size=544x288 --format=yuyv

Well, you are almost done, but there are a few things to keep in mind before you rush out to buy one of these webcams for your Pi.
  • Before you view the C270 you must disconnect your mouse*. I am not sure if this is a problem specific to my install, but if I don't, the camera will either not connect or will drop out constantly. The error I saw was along the lines of not having any "periodic" USB channels.
  • The resolution is low. Clicking on the above image will open it at full size (544x288). Trying resolutions above this didn't work.
  • The webcam "must" be connected before powering up the Pi. If not you need to run sudo rmmod uvcvideo and sudo modprobe uvcvideo before it will work.
Even with these caveats, this is better than nothing and step one towards my Pi-powered mobile robot.

Hopefully this how-to helps you out, and if you have more luck than I did using a mouse and/or higher resolutions, please let me know in the comments.

* Now, "real" Linux people would say that you shouldn't be using one anyway, but when your goal is to use a webcam, it's somewhat inferred that you'd like to see the result in a mouse-equipped GUI :-)

Sunday, June 24, 2012


A small mobile testbed I'll be trying it out on. And a cat.
The Pi has landed.

Sunday, April 15, 2012

A new Android app!

Yes, I did say I wasn't going to write any more Android apps, but there's a really good reason this time :)

At work a couple of weeks ago two of my co-workers were inventorying a large quantity of stock that had just arrived. They were hoping to scan the barcodes for each item into a simple CSV file. Their first thought was obviously "there's an app for that". Turns out there wasn't. There are hundreds of barcode-scanning and inventory apps available, but none that simply scanned to a CSV list of barcodes, then allowed that CSV data to be emailed/saved etc.

So yesterday, after 4 hours work, I can now say there is such an app. Stock Scanner isn't pretty, nor feature-packed, but it exactly fulfils the above requirement.

Stock Scanner is available in a limited-scans free version, or a very cheap paid version, on Google Play (formerly the Android Market).

Tuesday, April 10, 2012

Bucket-brigading neural networks

I've recently been playing around with some Python code to explore a hunch I've had for a couple of years: that you can train a feed-forward neural network by simply indicating whether an output in response to an input was "good" or "bad".

I'd always imagined that I would hook up a small robot with an embedded neural network, giving myself a remote control with a button like this:

The robot would rove around, and whenever it did something "bad" (e.g. ran into a wall that it should have registered on its sensors) I'd press the button and it would train itself using that "bad" input->output pairing - e.g. that "move forward" when the front sonar sensor is registering an obstruction is "bad". I could also have a "good" button if it did something like turn just before a wall, for instance, to reinforce the correct behaviours.

This appealed to me as it was also very similar to how I (attempt to) train our cat...

Yes, that is our cat. No, that was not a training session...
Anyway, I have migrated this hunch to the GitHub repository BadCat. It has taken a few twists and turns along the way, but I have been able to "train" some very elementary neural networks using a simple set of rules based on the original hunch. I ended up taking a few pointers out of genetic algorithms theory just for fun too.

The algorithm works in the following way:

  1. Read the "sensors"
  2. Apply sensor readings to a learning tool (neural network), get the output
  3. Try out the output "in the real world"
  4. If the result of trying out the output is "bad":
    1. Slightly mutate the output
    2. Goto 3 above
  5. Train the network with the resultant (original or mutated) output
The mutation amount increases the longer the output is "bad", based on the assumption that the original output will already be close to the desired one, while still allowing the output to change dramatically if the robot is stuck in a new situation. The "good" input->output pairs form part of a fixed-length queue of recent memory that is used for regular training.
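In code, the loop looks something like this stdlib-only sketch. The neural network is faked with plain functions so the control flow stands alone, and all the names are mine rather than BadCat's:

```python
import random
from collections import deque

def train_step(sensors, predict, is_good, train, memory, rng):
    output = predict(sensors)                    # steps 1-2: sense, predict
    mutation = 0.05
    while not is_good(sensors, output):          # steps 3-4: try, judge, mutate
        output = [o + rng.uniform(-mutation, mutation) for o in output]
        mutation *= 1.5                          # grow mutation while still "bad"
    memory.append((sensors, output))             # fixed-length recent memory
    train(memory)                                # step 5: retrain on the queue
    return output

# toy run: the "network" always answers 0.0, and "good" means output above 0.5
memory = deque(maxlen=50)
result = train_step(sensors=[1.0],
                    predict=lambda s: [0.0],
                    is_good=lambda s, out: out[0] > 0.5,
                    train=lambda mem: None,
                    memory=memory,
                    rng=random.Random(0))
print(result[0] > 0.5)   # True: the loop only exits on a "good" output
```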

This approach is similar to the "bucket brigade" rule-reinforcement technique that can be used to train expert systems. It is also not dissimilar to reinforcement learning principles, except that the observation-action-reward mechanism is implicit instead of being explicit - the action is the output generated based on the observation and the weighting of the neural network and the reward (or penalty) is externally sourced and applied to the network only when needed.*

I am looking forward to trying this out on a real mobile robot as soon as I can order my Pi, and I will keep you up-to-date on how it turns out.

* Oh, and just to be clear, I am not a robotics or AI PhD student and this is not part of a proper academic research paper. It is very likely that what I am doing here has been done before, so I make no claim to extraordinary originality or breakthrough genius - just consider this some musings and a pinch of free Python code :)

Thursday, March 22, 2012

Some small Python scripts

So ... that's not quite the "picture of a robot" I was intending to lead this post off with :-)

Strictly speaking, the 'R' in the image above represents a "robot" in the very simple mobile robot simulator that I just developed. RoboSim is written in Python and allows a developer to include a very rudimentary 2D simulator in their project - for instance to test a neural network or genetic algorithm. The robot can rotate on the spot in 45° increments as well as move forward and backwards. Maps are defined as simple nested lists, with internal "walls" defined for areas that cannot be traversed. The robot is fitted with two front bumper switches that are triggered depending on what the robot is pressed against. RoboSim is available on GitHub, and may receive the odd tweak here and there in the future although it has served its purpose in another project already.
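A toy version of those ideas can fit in a few lines; the map encoding, names and API here are my own, not RoboSim's actual interface:

```python
import math

WORLD = [            # nested-list map: 1 = wall, 0 = free
    [1, 1, 1, 1],
    [1, 0, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
]

class Robot:
    def __init__(self, x, y, heading=0):
        self.x, self.y = x, y
        self.heading = heading               # in 45-degree increments, 0-7

    def rotate(self, steps):
        """Rotate on the spot by a number of 45-degree steps."""
        self.heading = (self.heading + steps) % 8

    def forward(self):
        """Move one cell forward; return True if a bumper was triggered."""
        angle = math.radians(self.heading * 45)
        nx = self.x + round(math.cos(angle))
        ny = self.y + round(math.sin(angle))
        if WORLD[ny][nx]:                    # wall: bump, robot stays put
            return True
        self.x, self.y = nx, ny
        return False

r = Robot(1, 1)
bumped = r.forward()    # heading 0 is east: (2, 1) is free, no bump
print(bumped, (r.x, r.y))
```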

My other project is probably going to keep me going for a little while longer, at least until my Raspberry Pi(s) arrive... The project was born out of a hope to combine a couple of them together for a seriously powerful mobile robot. I really wanted to use one for nothing but OpenCV video processing and another for navigation planning etc. What I really didn't want to do was to be constantly swapping between each Pi to upload new code as I tried out different ideas.

Then it occurred to me: wouldn't it be nice if I could just get one or more Pis to act as "dumb" nodes, running arbitrary Python code provided to them by a "master" Pi...

A couple of days of programming later, the newly GitHub'd project, DisPy, does this. The README explains it better but essentially, instead of instantiating classes normally, I use a wrapper class to perform the instantiation. Behind the scenes the class' source code is copied over the network to a "node" machine, the class is instantiated on that node and all the local copy's methods and members are replaced by stubs that perform XML-RPC calls back to the "node".

The result is that method calls and member access happens transparently over XML-RPC, allowing for the runtime offloading of arbitrary code to one or many Pis (or anything else that can run Python).
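The core mechanism can be demonstrated in miniature. Here the server and the "node" share one process purely for illustration, and the class and stub names are mine, not DisPy's:

```python
import threading
from xmlrpc.client import ServerProxy
from xmlrpc.server import SimpleXMLRPCServer

class Planner:
    """Stands in for a class we'd like to execute on a remote node."""
    def plan(self, x, y):
        return x + y

# the "node": serve an instance over XML-RPC on an ephemeral port
server = SimpleXMLRPCServer(('127.0.0.1', 0), logRequests=False)
server.register_instance(Planner())
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

class Stub:
    """Local stand-in whose attribute lookups become remote calls."""
    def __init__(self, url):
        self._proxy = ServerProxy(url)
    def __getattr__(self, name):
        return getattr(self._proxy, name)

planner = Stub('http://127.0.0.1:%d' % port)
result = planner.plan(2, 3)   # executed on the "node"
print(result)
server.shutdown()
```

DisPy adds the missing half, shipping the class source to the node before instantiating it there, but the transparent method-call forwarding is the part shown above.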

The code is all contained in one module and has minimal dependencies. Hopefully it works on other OSes, but I haven't tested it on anything other than Ubuntu 11.10 yet. Please fork it, break it and have a play, I'd love your feedback on this one!

Sunday, January 22, 2012

A few changes and an exciting future

Tomorrow morning I will begin a new job and more importantly, a different direction in my career.

As you can tell from the history of this blog I have always had a passion for robotics and other embedded hardware systems. Graduating with a Bachelor of Computing, instead of Engineering, has obviously limited my job prospects in these more hardware-oriented fields. As a consequence, for the last five or so years I have been employed primarily as a web application developer with occasional forays into desktop application and embedded hardware development.

This all changed four weeks ago when I received an offer of employment at a local electricity generation business. I will be taking on a role assisting with developing, administering and supporting their Energy Management System. This will involve working with complex hardware-oriented SCADA systems. I am extremely excited about this new role and the learning opportunities it will offer, and I have decided it is time to adjust my non-employment priorities too.

These adjustments will have the greatest effect on my Android application development. I will still continue to bug-fix existing applications and I may even develop a few more new applications, but this will now be a low priority - a couple of hours a month. I've enjoyed working with this platform greatly but, frankly, I am not willing (with this new role) to put the time and effort in to turn this into a self-supporting business, and it doesn't make enough money to continue in a half-hearted manner.

The good news is that as a consequence of the above I intend to spend a lot more time on my embedded hardware/hobby-robotics projects. I've already been working on some as-yet undocumented projects and I would like to blog about these as they reach milestones and conclusions.

Thank you for indulging me in a personal post, I look forward to a picture of a robot leading my next one! :)