Saturday, July 19, 2014

Simple Python Turing machine implementation

I've always been a fan of using problem-solving to learn; over the years I've found it far more effective than rote learning or being lectured. Recently I came across this interesting implementation of a Turing machine. I'd obviously heard of Turing machines, but I had never sat down and thought about how to "build" one.

Feeling inspired, I cooked up this fairly simple Python example of a Turing machine*, using a rule set that counts up in binary on the "tape". The example counts up to 64, printing out the tape each time the head resets to the start position.

* In this case, the "infinite tape" is only infinite for values of infinity less than the roughly 3 GB of free RAM that my laptop had at the time :)
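For illustration, here is a minimal sketch along those lines - a tiny machine whose rule table increments a binary number on the tape and prints the tape each time the head gets back to the start. The states, symbols and step limit are illustrative choices, not the original snippet:

from collections import defaultdict

# (state, symbol read) -> (symbol to write, head movement, next state)
RULES = {
    ('right',  '0'): ('0', +1, 'right'),   # scan right to the end of the number
    ('right',  '1'): ('1', +1, 'right'),
    ('right',  '_'): ('_', -1, 'carry'),
    ('carry',  '1'): ('0', -1, 'carry'),   # turn trailing 1s into 0s...
    ('carry',  '0'): ('1', -1, 'rewind'),  # ...until a 0 (or a blank) becomes 1
    ('carry',  '_'): ('1', -1, 'rewind'),
    ('rewind', '0'): ('0', -1, 'rewind'),  # walk back to the start position
    ('rewind', '1'): ('1', -1, 'rewind'),
    ('rewind', '_'): ('_', +1, 'right'),
}

tape = defaultdict(lambda: '_')            # the "infinite" tape, stored sparsely
tape[0] = '0'
head, state = 0, 'right'

for _ in range(2000):                      # arbitrary step limit for the sketch
    symbol = tape[head]
    write, move, state = RULES[(state, symbol)]
    tape[head] = write
    head += move
    if state == 'right' and symbol == '_': # the head has just reset to the start
        print(''.join(tape[i] for i in sorted(tape) if tape[i] != '_'))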

Thursday, July 10, 2014

Detecting WiFi clients on TP-LINK routers using Python and telnetlib

Inspired by this project on Hackaday where submitter Mattia used Python to nmap scan his WiFi network, triggering alerts when particular MAC addresses are found, and with my dreams of home-automation in mind, I came up with a slightly different way of achieving the same thing.

My router is a cheapo TP-LINK, but it does come with a "currently connected MAC addresses" page in the web interface so my first thought was using BeautifulSoup to do some parsing. Then I found references to a telnet interface.

Connecting to the telnet interface, I quickly found that the command "wlctl assoclistinfo" gave me this output:

Associated stations:4
Num     Mac Address        Time
1    F0:C1:F1:22:xx:xx    00:02:30:04
2    90:21:55:B0:xx:xx    01:02:20:26
3    00:0F:54:10:xx:xx    03:09:17:28
4    74:E1:B6:2C:xx:xx    30:04:37:48 

Firing up Python and the telnetlib telnet-automation module meant that 10 minutes later I was printing comma-separated MAC addresses to the console using this snippet of code:
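The snippet isn't reproduced here, but a rough reconstruction of the idea with telnetlib looks something like this - the router address, credentials and prompt strings are placeholders for whatever your TP-LINK expects (Python 2, as used on the Pi below):

import re
import telnetlib

# Placeholders: adjust for your own router's address, credentials and prompts.
HOST, USER, PASSWORD = '192.168.1.1', 'admin', 'password'

tn = telnetlib.Telnet(HOST)
tn.read_until('username:')
tn.write(USER + '\n')
tn.read_until('password:')
tn.write(PASSWORD + '\n')
tn.read_until('#')

tn.write('wlctl assoclistinfo\n')
output = tn.read_until('#')
tn.write('exit\n')

# Pull out anything that looks like a MAC address and print them comma-separated.
macs = re.findall(r'(?:[0-9A-F]{2}:){5}[0-9A-F]{2}', output, re.IGNORECASE)
print(','.join(macs))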

Finally, I am triggering this on my Raspberry Pi via a simple crontab entry:

* * * * * logger `python /home/pi/wlan_sensor/sense.py`

This gives me per-minute logging of WiFi clients, giving me the information I need for the first of my home-automation projects - turning on the lights when I get home.

Monday, April 21, 2014

Dynamic CSS updates without page refresh

I'm currently prototyping some CSS for a small webpage and this little trick occurred to me to save having to press F5 every couple of seconds (requires jQuery):
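The snippet isn't shown here, but the idea is roughly the following (it assumes jQuery is already on the page; the 250 ms interval gives the four-times-a-second refresh described below):

// Re-point every stylesheet at itself with a cache-busting query string.
setInterval(function () {
    $('link[rel="stylesheet"]').each(function () {
        var href = $(this).attr('href').split('?')[0];
        $(this).attr('href', href + '?reload=' + new Date().getTime());
    });
}, 250);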

The script simply reloads each CSS stylesheet four times a second, giving me a near real-time CSS preview.

Sunday, December 22, 2013

Sneak-peek at what I've been working on for the last six months...

Sorry about the aspect ratio, still sorting out a proper screen-caster that'll do Android and PC concurrently.

Tuesday, July 16, 2013

Django doctesting with WebTest

I’ve been a big fan of Python’s doctest since I first worked with Zope 3. I know a lot of people knock it, but there was always a sort of "magic" in pasting use-cases into rst documents, inserting some Python, and you're done.

Recently I've been working on a number of Django applications and I really wanted to re-use this pattern.

Initially, I used the built-in django.test.client - this was a fairly close approximation of Zope's testbrowser and led to doctests like:
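Those doctests aren't reproduced here; the general shape with django.test.client is something like this (the URL and page content are hypothetical):

>>> from django.test.client import Client
>>> client = Client()
>>> response = client.get('/documents/')
>>> response.status_code
200
>>> 'Your documents' in response.content
True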


Where this falls down is the testing of forms - most recently I was testing the uploading of a file and the various server-side validations that would trip (name, size, contents etc). To do this in django.test.client, you must use the post() method with the following result:
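The original example isn't shown here, but the gist is that every field has to be assembled into a dictionary by hand rather than filled in on the rendered form - something like this (URL and field names are hypothetical):

>>> response = client.post('/documents/edit/1/', {
...     'title': '',
...     'description': 'Q2 figures',
...     'owner': '3',
... })
>>> 'This field is required' in response.content
True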

The testing of file uploads is even worse.

Trying to solve this problem I came across this excellent slideshow about using WebTest. This looked like the perfect solution with its simple form access and file upload wrappers. Combining WebTest with django-webtest gave me a base API very similar to django.test.client.

Here I ran into a problem though. All the demos and documentation for WebTest showed usage in unit tests. A Google search for "doctest WebTest" wasn't helpful either. Pulling out Python's dir() function, I discovered a very interestingly named class DjangoTestApp in django_webtest. A couple of minutes later and my doctests looked like this (abbreviated):
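The abbreviated doctest isn't reproduced here; a sketch of the shape, assuming DjangoTestApp can be instantiated directly as described (URL and field names are hypothetical):

>>> from django_webtest import DjangoTestApp
>>> app = DjangoTestApp()
>>> page = app.get('/documents/upload/')
>>> form = page.form
>>> form['title'] = 'Quarterly report'
>>> response = form.submit()
>>> response.status
'200 OK'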


The best bit was the actual uploading of files - the "name" and "content" are just assigned to the field on the form:
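Along these lines (the field name, file contents and response text are hypothetical):

>>> form['document'] = ('report.pdf', 'these bytes pretend to be a PDF')
>>> response = form.submit()
>>> 'Upload complete' in response
True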


This is an incredibly elegant interface and allowed me to quickly perform a huge range of upload testing.

Why not just use unittest, you ask?

Simply put, a doctest can be handed to the client, my line manager or any co-worker, and they can line it up against a set of functional requirements, or just their domain knowledge. The same simply cannot be said for something like the following (from here):
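The linked example isn't reproduced here, but for contrast, a typical unittest-style equivalent of the same sort of check reads much more like code than like a requirement (everything below is hypothetical):

from django.core.files.uploadedfile import SimpleUploadedFile
from django.test import TestCase

class DocumentUploadTests(TestCase):

    def test_upload_rejects_oversized_file(self):
        # Build a deliberately oversized fake file and post it to the upload view.
        document = SimpleUploadedFile('big.pdf', 'x' * (10 * 1024 * 1024))
        response = self.client.post('/documents/upload/', {'document': document})
        self.assertContains(response, 'File too large')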

Monday, March 11, 2013

Python iRacing API client



Some of you will already know that I am a massive fan of the racing simulator iRacing. I signed up to this "game" a couple of years ago and it's led to the purchase of new computers, steering wheels and even a dedicated "racing seat".

What I've always wanted to do was build a basic motion rig for the game, something that at least would represent lateral acceleration to give me a better idea of my grip limits when driving. The first step towards this is parsing the telemetry from the game.

iRacing has a telemetry API that's documented solely in the form of a demo client built from a pile of C code. Whilst I am certainly capable of programming in C, my preference is definitely my pet language, Python. Some brave souls have built clients in C#, but that isn't much better.*

Aside from my own bias, Python is a really nice language to use as a new programmer, so developing this client should allow for more people to develop iRacing add-ons which can only be a good thing!

Anyway, I decided to build a Python implementation of a client for iRacing. The API uses a memory-mapped file, something Python happily supports out of the box, as well as C data structures for all the high-frequency telemetry, which I was able to parse using Python's struct module.
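The client itself is on GitHub (see below); purely as an illustration of those two pieces, reading a single value on Windows looks roughly like this. The shared-memory name, offset and format here are placeholders - the real client works them out from the telemetry header rather than hard-coding them:

import mmap
import struct

MEMMAP_NAME = 'Local\\IRSDKMemMapFileName'   # placeholder name
SPEED_OFFSET, SPEED_FORMAT = 112, '<f'       # placeholder offset/format

# Open the shared memory read-only and unpack one little-endian float from it.
shmem = mmap.mmap(0, 1024 * 1024, MEMMAP_NAME, access=mmap.ACCESS_READ)
(speed,) = struct.unpack_from(SPEED_FORMAT, shmem, SPEED_OFFSET)
print('Speed (m/s): %s' % speed)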

As I was secretly hoping, Python's dynamic-programming abilities allowed me to write a client in short order, using very little code.

The code's right here on GitHub. It's, honestly, quite rough-and-ready, but it works. Please do let me know if you have success or problems with it, I'll do my best to help out. I expect it'll be updated regularly, or whenever I find a bug...

You'll have to extend and modify it to work with your application, but that's half the fun - enjoy! :)

* Whilst I'm not a huge fan of C#, the in-line debugging/inspection functionality in Visual Studio was a god-send when applied to the C# demo client.

Sunday, January 27, 2013

My home audio streaming setup

As a slightly different post I thought I'd share the details of my multi-room audio streaming setup.

There's nothing special about it, but it's a very simple and cheap system that actually works really well, allowing multiple sources and speakers (with simultaneous playback), across a range of hardware and operating systems. I'm not going to link to every app and provide installation guides, that's what Google's for :)

The components
PC 1
  • Role(s): Source + destination + media share
  • OS: Windows 7
  • Software: iTunes (source), Shairport4w (destination), Air Playit HD (media share)
PC 2
  • Role(s): Source + destination + media share
  • OS: Windows 7
  • Software: Airfoil (source), Shairport4w (destination), Air Playit HD (media share)
Raspberry Pi
  • Role(s): Destination
  • OS: Raspbian "wheezy"
  • Software: shairport
iPad 2
  • Role(s): Media share -> destination bridge
  • OS: iOS 6
  • Software: Air Playit HD
How it all works together
Ignoring the iPad and the Air Playit HD software for a moment, the rest of the system involves Apple AirPlay sources and destinations. To be honest, I'm not a huge fan of some of Apple's work, but in this case AirPlay was simply the "right tool for the job".

Looking at the PCs, iTunes on the first one "just works", sending audio to one or more destinations (simultaneously) without a problem. Airfoil on the second PC allows audio capture from running programs (e.g. VLC) and sends that out in the same manner as iTunes to one or more destinations. Both PCs then perform double-duty as destinations thanks to the excellent Shairport4w.

The Raspberry Pi acts purely as a lightweight destination, thanks to the installation of shairport. I've attached an external USB sound card which helps with the sound quality.

Separately to this, Air Playit HD allows the iPad to play music/video off either of the PCs - the software on the PCs shares chosen folders and there's a client on the iPad of course. Any played music can then be pushed (via the iPad's built-in AirPlay tool) to a single destination.

These components work really well together, and allow me to scale the system really easily in the future. I hope they give you a good idea or two - let me know your versions in the comments!

Hat tip
Lifehacker's articles on setting up a Raspberry Pi and AirPlay streaming from/to all sorts of devices.

Sunday, November 4, 2012

Use a Kindle and Python as a clock/dashboard


When the Kindle Wifi came out I snapped it up and it became the most used electronic device I've ever owned. Then the Kindle Touch came out and I got that too (by which point I was well on the way to becoming an Amazon fanboy...)

The only problem was that I couldn't actually read two Kindles at once.

Then I came across a post by Matthew Petroff - Kindle Weather Display. This sparked my curiosity and I decided to build a clock/dashboard with the Wifi - something that'd show things like number of unread emails, weather etc. This appealed to me as the Kindle has fantastic battery life, is silent, and the e-ink display is both large and invisible at night (when I really don't want to know the time). The goal was something I could glance at in the morning before going to work and ignore otherwise.

Initially I intended to go down the route of jailbreaking to display an image like Matthew did, but I didn't have any luck with that on my particular device. It then occurred to me that I could just use the built-in web-browser to display a self-updating page. The only blocker to this was stopping the screensaver from turning on, something I was able to work around. The browser chrome was not pretty, but also not a deal-breaker.

From then on it was all about building an appropriate web site, serving it (on my desktop, but it'll work on anything running Python) and pointing the Kindle's browser at the right URL. So far, I've managed to show the time (updated on the minute), the current temperature, today's forecast and today's agenda from my Google Calendar. There's nothing magical there, and as the site can be displayed on any JS-aware web-browsing device, I'm sharing this project on GitHub. It'll change a lot over time but hopefully there are some basics you can use in your own project.
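The actual server.py is in the repo; as a stand-in, the serving side needs to be no more complicated than Python's built-in HTTP server pointed at the dashboard files (Python 2 modules, arbitrary port):

import SimpleHTTPServer
import SocketServer

PORT = 8000  # arbitrary

# Serve the current directory; the dashboard page updates itself with JavaScript.
httpd = SocketServer.TCPServer(('', PORT), SimpleHTTPServer.SimpleHTTPRequestHandler)
print('Serving the dashboard on port %d' % PORT)
httpd.serve_forever()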

Enjoy!

Update 05/11/2012: I've now added a count of unread emails in my Gmail inbox so head over to the GitHub repo if you're interested in that sort of functionality for your own project. I've also got server.py working on a Raspberry Pi without modification, and it's perfectly fast enough for my use.

Sunday, August 5, 2012

Getting a Logitech C270 webcam working on a Raspberry Pi running Debian


I thought it was about time I shared something after all the hours I've committed to Pi-hacking and the above title says it all. These instructions are very simple but should hopefully save you some trial-and-error.

Importantly, hat-tip to Insipid Ramblings and Hexxeh for their info and work that helped me get this far.

Firstly, I started with a slightly old version of Debian - namely the debian6-19-04-2012 image. Your results may vary depending on what version you use. I am also assuming that you have already installed the image and can successfully boot to the desktop.

So, here goes:

1. Add UVC support to the image
Download and run rpi-update as described here. This will update your image to include the initially-missing UVC support. Reboot as suggested.

2. Update your packages
sudo apt-get update
sudo apt-get upgrade 

3. Install the guvcview webcam viewer
sudo apt-get install guvcview

4. Set up your permissions and enable the driver
sudo usermod -a -G video pi
sudo modprobe uvcvideo

Reboot the Pi.

5. Open up the cam (these are the settings that worked for me)
guvcview --size=544x288 --format=yuyv

Caveats
Well, you are almost done, but there are a few things to keep in mind before you rush out to buy one of these webcams for your Pi.
  • Before you view the C270 you must disconnect your mouse*. I am not sure if this is a problem specific to my install, but if I don't, the camera will either not connect or will drop out constantly. The error I saw was along the lines of not having any "periodic" USB channels.
  • The resolution is low. Clicking on the above image will open it at full size (544x288). Trying resolutions above this didn't work.
  • The webcam "must" be connected before powering up the Pi. If not, you need to run sudo rmmod uvcvideo and sudo modprobe uvcvideo before it will work.
Even with these caveats, this is better than nothing and it's step one towards my Pi-powered mobile robot.

Hopefully this how-to helps you out, and if you have more luck than I did using a mouse and/or higher resolutions please let me know in the comments.

* Now, "real" Linux people would say that you shouldn't be using one anyway, but when your goal is to use a webcam, it's somewhat inferred that you'd like to see the result in a mouse-equipped GUI :-)

Sunday, June 24, 2012

Pi

A small mobile testbed I'll be trying it out on. And a cat.
The Pi has landed.

Sunday, April 15, 2012

A new Android app!

Yes, I did say I wasn't going to write any more Android apps, but there's a really good reason this time :)

At work a couple of weeks ago two of my co-workers were inventorying a large quantity of stock that had just arrived. They were hoping to scan the barcodes for each item into a simple CSV file. Their first thought was obviously "there's an app for that". Turns out there wasn't. There are hundreds of barcode-scanning and inventory apps available, but none that simply scanned to a CSV list of barcodes, then allowed that CSV data to be emailed/saved etc.

So yesterday, after 4 hours' work, I can now say there is such an app. Stock Scanner isn't pretty, nor feature-packed, but it exactly fulfils the above requirement.


Stock Scanner is available in a limited-scans free version, or a very cheap paid version, on the Android Market (now Google Play).

Tuesday, April 10, 2012

Bucket-brigading neural networks

I've recently been playing around with some Python code to explore a hunch I've had for a couple of years: that you can train a feed-forward neural network by simply indicating whether an output in response to an input was "good" or "bad".

I'd always imagined that I would hook up a small robot with an embedded neural network, giving myself a remote control with a button like this:


The robot would rove around, and whenever it did something "bad" (e.g. ran into a wall that it should have registered on its sensors) I'd press the button and it would train itself using that "bad" input->output pairing - e.g. that "move forward" when the front sonar sensor is registering an obstruction is "bad". I could also have a "good" button if it did something like turn just before a wall, for instance, to reinforce the correct behaviours.

This appealed to me as it was also very similar to how I (attempt to) train our cat...

Yes, that is our cat. No, that was not a training session...
Anyway, I have migrated this hunch to the GitHub repository BadCat. It has taken a few twists and turns along the way, but I have been able to "train" some very elementary neural networks using a simple set of rules based on the original hunch. I ended up taking a few pointers out of genetic algorithms theory just for fun too.

The algorithm works in the following way:

  1. Read the "sensors"
  2. Apply sensor readings to a learning tool (neural network), get the output
  3. Try out the output "in the real world"
  4. If the result of trying out the output is "bad":
    1. Slightly mutate the output
    2. Goto 3 above
  5. Train the network with the resultant (original or mutated) output
The mutation amount increases the longer the output is "bad", based on the assumption that the original output will already be close to the desired one, while still allowing the output to change dramatically if the robot is stuck in a new situation. The "good" input->output pairs form part of a fixed-length queue of recent memory that is used for regular training.
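As a rough sketch of that loop (not the actual BadCat code - the network object and the sensor/actuator/feedback callables are hypothetical stand-ins):

import random

def training_step(network, read_sensors, try_in_world, was_bad, memory, memory_size=50):
    reading = read_sensors()                  # 1. read the "sensors"
    output = network.predict(reading)         # 2. ask the network for an output
    mutation = 0.05
    while was_bad(try_in_world(output)):      # 3./4. try it out, check the feedback
        # 4.1 slightly mutate the output, more aggressively the longer it stays "bad"
        output = [o + random.uniform(-mutation, mutation) for o in output]
        mutation *= 1.5
    memory.append((reading, output))          # 5. keep the good pairing in recent memory...
    if len(memory) > memory_size:
        memory.pop(0)
    network.train(memory)                     # ...and retrain the network on it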

This approach is similar to the "bucket brigade" rule-reinforcement technique that can be used to train expert systems. It is also not dissimilar to reinforcement learning principles, except that the observation-action-reward mechanism is implicit instead of explicit - the action is the output generated from the observation and the weights of the neural network, and the reward (or penalty) is externally sourced and applied to the network only when needed.*

I am looking forward to trying this out on a real mobile robot as soon as I can order my Pi, and I will keep you up to date on how it turns out.

* Oh, and just to be clear, I am not a robotics or AI PhD student and this is not part of a proper academic research paper. It is very likely that what I am doing here has been done before, so I make no claim to extraordinary originality or breakthrough genius - just consider this some musings and a pinch of free Python code :)

Thursday, March 22, 2012

Some small Python scripts


So ... that's not quite the "picture of a robot" I was intending to lead this post off with :-)

Strictly speaking, the 'R' in the image above represents a "robot" in the very simple mobile robot simulator that I just developed. RoboSim is written in Python and allows a developer to include a very rudimentary 2D simulator in their project - for instance to test a neural network or genetic algorithm. The robot can rotate on the spot in 45° increments as well as move forward and backwards. Maps are defined as simple nested lists, with internal "walls" defined for areas that cannot be traversed. The robot is fitted with two front bumper switches that are triggered depending on what the robot is pressed against. RoboSim is available on GitHub, and may receive the odd tweak here and there in the future although it has served its purpose in another project already.
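As an illustration of that interface (simplified, not the actual RoboSim code on GitHub), a map is just nested lists of walls and open cells, and the robot tracks a heading in 45° steps:

DIRECTIONS = {0: (0, -1), 45: (1, -1), 90: (1, 0), 135: (1, 1),
              180: (0, 1), 225: (-1, 1), 270: (-1, 0), 315: (-1, -1)}

WORLD = [
    [1, 1, 1, 1, 1],   # 1 = wall, 0 = open floor
    [1, 0, 0, 0, 1],
    [1, 0, 1, 0, 1],
    [1, 0, 0, 0, 1],
    [1, 1, 1, 1, 1],
]

class Robot(object):
    def __init__(self, x, y):
        self.x, self.y, self.heading = x, y, 0    # heading in degrees

    def rotate(self, degrees):
        assert degrees % 45 == 0                  # rotation only in 45-degree steps
        self.heading = (self.heading + degrees) % 360

    def forward(self):
        dx, dy = DIRECTIONS[self.heading]
        if WORLD[self.y + dy][self.x + dx] == 0:
            self.x, self.y = self.x + dx, self.y + dy
            return True
        return False   # blocked - this is where a bumper switch would trip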

My other project is probably going to keep me going for a little while longer, at least until my Raspberry Pi(s) arrive... The project was born out of a hope to combine a couple of them together for a seriously powerful mobile robot. I really wanted to use one for nothing but OpenCV video processing and another for navigation planning etc. What I really didn't want to do was to be constantly swapping between each Pi to upload new code as I tried out different ideas.

Then it occurred to me: wouldn't it be nice if I could just get one or more Pis to act as "dumb" nodes, running arbitrary Python code provided to them by a "master" Pi...

A couple of days of programming later, the newly GitHub'd project, DisPy, does this. The README explains it better but essentially, instead of instantiating classes normally, I use a wrapper class to perform the instantiation. Behind the scenes the class's source code is copied over the network to a "node" machine, the class is instantiated on that node and all the local copy's methods and members are replaced by stubs that perform XML-RPC calls back to the "node".

The result is that method calls and member access happens transparently over XML-RPC, allowing for the runtime offloading of arbitrary code to one or many Pis (or anything else that can run Python).
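A very rough sketch of the client side of that idea (not the DisPy code itself - the node URL and the remote call_method endpoint are hypothetical, and the source-shipping and server side are omitted):

import xmlrpclib  # Python 2, matching the era of the post

class RemoteProxy(object):
    """Stand-in for the wrapper: forwards method calls to a node over XML-RPC."""

    def __init__(self, node_url, remote_id):
        self._server = xmlrpclib.ServerProxy(node_url)
        self._remote_id = remote_id

    def __getattr__(self, name):
        # Any attribute access returns a stub that calls the same-named
        # method on the instance living on the remote node.
        def stub(*args):
            return self._server.call_method(self._remote_id, name, args)
        return stub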

The code is all contained in one module and has minimal dependencies; hopefully it works on other OSes, but I haven't tested it on anything other than Ubuntu 11.10 yet. Please fork it, break it and have a play, I'd love your feedback on this one!


Sunday, January 22, 2012

A few changes and an exciting future

Tomorrow morning I will begin a new job and more importantly, a different direction in my career.

As you can tell from the history of this blog I have always had a passion for robotics and other embedded hardware systems. Graduating with a Bachelor of Computing, instead of Engineering, has obviously limited my job prospects in these more hardware-oriented fields. As a consequence, for the last five or so years I have been employed primarily as a web application developer with occasional forays into desktop application and embedded hardware development.

This all changed four weeks ago when I received an offer of employment at a local electricity generation business. I will be taking on a role assisting with developing, administering and supporting their Energy Management System. This will involve working with complex hardware-oriented SCADA systems. I am extremely excited about this new role and the learning opportunities it will offer, and I have decided it is time to adjust my non-employment priorities too.

These adjustments will have the greatest effect on my Android application development. I will still continue to bug-fix existing applications and I may even develop a few more new applications, but this will now be a low priority - a couple of hours a month. I've enjoyed working with this platform greatly but, frankly, I am not willing (with this new role) to put the time and effort in to turn this into a self-supporting business, and it doesn't make enough money to continue in a half-hearted manner.

The good news is that as a consequence of the above I intend to spend a lot more time on my embedded hardware/hobby-robotics projects. I've already been working on some as-yet undocumented projects and I would like to blog about these as they reach milestones and conclusions.

Thank you for indulging me in a personal post, I look forward to a picture of a robot leading my next one! :)

Thursday, December 22, 2011

Video review of Sythe by content3300

I just came across this video by the YouTube user content3300, showing Sythe in action. It appears to be an entry for a competition, but it shows all the features quite well. Thanks content3300!

Sunday, December 4, 2011

Distributed tournaments for the Google AI Challenge

As I noted a couple of posts ago, I am taking part in the Google AI Challenge again this year (my entry). The challenge this year is Ants, a game which requires entries (agents) to control a number of ants in an environment made up of land, water, food and enemy ants.

The design of my agent is fairly simple and has a large number of parameters that are adjustable (e.g. the distance between an enemy ant and my base that is considered a "threat"). This made it a perfect candidate for trialling some Genetic Algorithm (GA) theory to tune those parameters, as well as to evaluate some algorithmic design decisions.

To start using GA one must generate an initial batch of solutions to the problem. This is currently in the form of 12 versions of my agent.

Once an initial set of solutions has been generated, the next step is the evaluation of the fitness of each solution. Each agent I design is a different "solution" to the problem of being the best agent - the best agent is the fittest.

I decided the simplest way to evaluate the fitness of each agent is for it to compete against other agents that I have made, and sample agents, in the standard game format that is used on the official servers.

As I have a number of laptops and computers, none of which is super-powerful, I decided to try and make a distributed tournament system so that I could play as many games as possible to get the best idea of fitness - my setup is as follows.

  • Each machine is running Ubuntu 11.10, with Dropbox installed. The game's Dropbox folder contains the game engine, maps and all the agents that are currently being tested.
    • This allows for new agents to be added at any point and all machines to be immediately updated.
  • Each machine continuously:
    1. Selects a random map
    2. Selects a random set of agents to play on that map
    3. Plays the game
    4. Writes the score to a file based on its host name - e.g. "log-ubuntubox.txt". These files are also in the Dropbox folder.
  • Any machine can run a shared script that aggregates the results from all log-*.txt files, computing the average points/game for each agent. This is used as the fitness.
Because I am using Python 2.7 (installed by default on Ubuntu 11.10) for the game engine, agents and extra scripting, provisioning a new machine is this simple (a rough sketch of the per-machine game loop follows the list below):
  1. Install Ubuntu
  2. Install Dropbox
  3. Run "python play.py"
So far this is working quite well, with quite dramatic and unexpected performance differences between some nearly identical agents. Once each agent has played at least 30 games I will remove some of the lowest-scoring agents and add some new versions based on combining the traits that are the most successful.

With any luck this should result in a pretty competitive entry in this year's Challenge - I will keep you posted!

Thursday, December 1, 2011

Milestones

I just had a look at my Market stats and I've just hit a couple of milestones:
  • More than 100 ratings of Sythe Free (average 4.3/5)
  • More than 10,000 active users of Sythe Free
  • More than 25,000 downloads of Sythe Free
If only the paid version was going so well... :-)

Wednesday, November 30, 2011

Sythe update released

Just a quick one - I've just released an update to Sythe to fix:
  • Never-ending playback after closing Sythe
  • Incorrect step between octaves
  • Incorrect octave start/finish
  • Mis-match between note and frequency when switching modes
Thanks for your patience with this one, guys - I've gotten totally bogged down in the 2011 Google AI Challenge (a greater time-sink than Skyrim...)

Sythe 1.3 is now available on the Android Market for free or very, very cheap.

Thursday, October 6, 2011

Sneak peek


You are looking at the main screen of an early version of my next app - a high-quality drum synthesiser. Currently it mixes 3 sine-wave sources with independent frequencies, amplitudes and ADSR envelopes.
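Not the app's actual audio code, but as a rough illustration of that mixing scheme in Python/numpy terms - three sine oscillators, each with its own frequency, amplitude and a simple piecewise-linear ADSR envelope:

import numpy as np

RATE, DURATION = 44100, 0.5
t = np.linspace(0, DURATION, int(RATE * DURATION), endpoint=False)

def adsr(n, attack=0.02, decay=0.1, sustain=0.6, release=0.2):
    # Attack/decay/release are given as fractions of the total length n.
    a, d, r = int(n * attack), int(n * decay), int(n * release)
    return np.concatenate([np.linspace(0, 1, a),
                           np.linspace(1, sustain, d),
                           np.full(n - a - d - r, sustain),
                           np.linspace(sustain, 0, r)])

def source(freq, amp):
    return amp * np.sin(2 * np.pi * freq * t) * adsr(len(t))

# Placeholder settings; the real app exposes all of these per source.
drum = source(60, 1.0) + source(120, 0.5) + source(180, 0.25)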

Oh, and yes, it'll use my minimalist red-on-black UI again :-)