Do you speak robot?

Our future – a lot of movies and written pieces say – will be filled with robots. Sometimes they are portrayed as caring, helpful and generous, sometimes they are vicious things that aim to destroy humanity for the plague we are. In whatever color we paint them, they are always sentient or near sentient and have moral values imbued in their programming. And, remarkably, they are almost always portrayed in the future.

Now let me put my Morpheus glasses on and say: what if I told you the robots are already here and have been for quite a while? We have been dealing with them, benefiting from them and screaming at them for a long time now. It’s just that they never claimed to be robots themselves! Why is that?

Back in the day our robots used to be dumb. Take our VCRs, for example. Our what? They knew how to play videos. More specifically, they knew how to play videos magnetically encoded on tapes. But nothing else. They didn’t know where to get the tapes. The early models didn’t know how to rewind the tapes. You could program one to display the time, but that required dark sorcery or a tech-savvy family member. And it had to be redone after every power loss. DVD players were better: rewinding became obsolete. However, the damn clock remained a puzzle for a lot of people, except for that smart nephew. “He just knows these things!”, his aunt would say.

Speaking of clocks, remember alarm clocks? You could set it to make you hate it once per day, but you hated it even more when you forgot to turn it off on weekends. Some of them were as puzzling as the VCR and the DVD clocks, but that pesky little nephew could push the right buttons in no time and get it right every time. Microwaves were different beasts, although not untamable by the mighty nephew. All of these robots performed different tasks and were created by different people, but the puny brat could set their clocks with the same ease, without ever flinching. Their interfaces were wildly different, and yet all of these devices spoke the same language: the language of robots.

Can you brick it remotely?

At first theirs was a crude language: you had to press the right buttons in the correct order and with perfect timing. We had power over them, for our language was far superior. Once one knew how to set up one or two of them, she could configure any other robot. However, time relentlessly passed, as it always does, and things changed. Robots got bigger and bigger brains. Now they can have thousands of buttons attached to them and can perform practically any task. The truth, however, is that they are still fundamentally and irremediably dumb.

They are so dumb that they are embarrassed by it. They try to call themselves ‘smart’. You had phones, now you have smartphones. You had watches, these days they are smart watches. You had cars, guess what? Their language became sophisticated. They understand clicks, swipes and hand gestures. They understand body gestures. They understand speech. Heck, they even talk back!

Present-day robots know where to get the tape. They play the tape for you automatically. They rewind the tape and put it back where it belongs. They suggest new tapes for you to watch. They show you what other people thought of the tape you just watched. They show the names of everyone who appeared in that tape. Your alarm clock? It wakes you up every workday. It knows to be quiet on weekends. You still scream at it in the morning, but you caress it to make it stop ringing. You carry it with you everywhere you go. This freaking clock knows when you cross timezones and changes accordingly! It shows you the local weather, and it is always local, because it knows where you are.

The scary thought is that those dumb robots know all of these things because someone taught their sorry metal asses and big brains. Those people made robots look smart. We still have power over the robots, but it is waning. You can turn off its GPS, but you have to know how to talk to it. Because their language is more complicated now, fewer and fewer people know how to speak robot, and that became accepted: it is too difficult to configure the VCR clock, so why bother? That’s why there are things that can’t be turned off anymore!

Sit with your robot. Know what it can do, what it can’t do and, more importantly, what you can make it do or not do. Of all the languages in the world, learn robot. Learn it not only because everyone else is learning, but also because otherwise your robots will not be yours anymore.

A very simple ToDo list Indicator for Ubuntu

[Screenshot of the running app]

Pretty, huh? The icon is from the Open Icon Library.

(You can synchronize your ToDo list across multiple PCs by simply placing the database in a Dropbox folder.)

The Motivation

I have to admit: I’m not an organized person. Over the past few months I have kind of forced myself to write down everything I was doing and all the thoughts I was having, for future (and present) reference. A natural pattern that emerged was that I always had a ToDo list in the margin of the most recent notebook page. As I wrote more things in the notebook, I kept copying the ToDo list over to a more recent page.

So I decided to build a simple and easily accessible ToDo list.

I wanted the app to be easily accessible and not get in the way of other things. Therefore, I decided to build it as an Ubuntu Application Indicator (those pesky icons that stand next to the system clock).

I wrote this app using Python; the code is available on GitHub.

The Specs

Ideally I wanted to have a more dynamic Indicator, allowing me to add, remove and check items using the dropdown menu directly. However, the appindicator library for Python doesn’t allow me to do that, as I can’t embed arbitrary GTK Widgets inside a GTK MenuItem.

So, the menu was designed as follows:

  • A list of items. Each item has an indication of Done/Not Done (Unicode checkboxes). Clicking the item toggles its Done/Not Done status;
  • A button that allows the list of items to be changed. This button launches GEdit to edit the database;
  • A button to close the app.

The Database

The database is composed of a single text file in the following format:

  • Each line contains one item;
  • The line starts with the string “Yes” or the string “No”, representing the Done/Not Done state;
  • A semicolon follows;
  • A description of the item follows.

For example, the file below quite obviously describes two (easily accomplishable) tasks: the first one marked as Done and the second one as Not Done.
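(The task descriptions below are only placeholders; anything after the semicolon works.)

    Yes;Set up the ToDo Indicator
    No;Become an organized person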

The Code

The code relies on three libraries: pygtk, appindicator (with faulty documentation) and pyinotify. The first one provides an API to build the menu. The second one provides bindings to attach the menu to the system as an Indicator. The last one allows watching for changes in the “database”.

When the application starts it loads the “database” file and fills the menu. If the database changes, the app reloads it and rebuilds the menu. If an item is clicked, its state is toggled and the database is updated (which in turn triggers a reload).
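A rough sketch of that structure (this is not the actual code from the repository; the file path, icon name and class layout are placeholders of mine) could look like this:

    # Sketch: pygtk builds the menu, appindicator attaches it next to the
    # clock, pyinotify reloads the menu whenever the "database" file changes.
    import gobject
    import gtk
    import appindicator
    import pyinotify

    DB_PATH = "/home/me/Dropbox/todo.txt"  # placeholder "database" location

    def load_items():
        # Each line is "Yes;description" or "No;description".
        items = []
        with open(DB_PATH) as f:
            for line in f:
                state, _, text = line.strip().partition(";")
                if text:
                    items.append((state == "Yes", text))
        return items

    def save_items(items):
        with open(DB_PATH, "w") as f:
            for done, text in items:
                f.write("%s;%s\n" % ("Yes" if done else "No", text))

    class TodoIndicator(object):
        def __init__(self):
            self.ind = appindicator.Indicator(
                "todo", "checkbox", appindicator.CATEGORY_APPLICATION_STATUS)
            self.ind.set_status(appindicator.STATUS_ACTIVE)
            self.rebuild_menu()

        def rebuild_menu(self):
            menu = gtk.Menu()
            for index, (done, text) in enumerate(load_items()):
                # Unicode checkboxes for the Done/Not Done indication
                label = (u"\u2611 " if done else u"\u2610 ") + text
                item = gtk.MenuItem(label)
                item.connect("activate", self.toggle, index)
                menu.append(item)
            quit_item = gtk.MenuItem("Quit")
            quit_item.connect("activate", gtk.main_quit)
            menu.append(quit_item)
            menu.show_all()
            self.ind.set_menu(menu)
            return False  # when scheduled via idle_add, run only once

        def toggle(self, widget, index):
            items = load_items()
            done, text = items[index]
            items[index] = (not done, text)
            save_items(items)  # the inotify watcher below triggers the rebuild

    class ReloadOnChange(pyinotify.ProcessEvent):
        def my_init(self, app=None):
            self.app = app

        def process_IN_MODIFY(self, event):
            gobject.idle_add(self.app.rebuild_menu)  # rebuild in the GTK main loop

    gobject.threads_init()
    app = TodoIndicator()
    wm = pyinotify.WatchManager()
    notifier = pyinotify.ThreadedNotifier(wm, ReloadOnChange(app=app))
    notifier.setDaemon(True)
    notifier.start()
    wm.add_watch(DB_PATH, pyinotify.IN_MODIFY)
    gtk.main()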


Xenomai on the BeagleBone Black in 14 easy steps

EDIT: Mark wrote an updated guide here.

The BeagleBone Black is an amazingly cheap and powerful development platform that is being used by many people in a lot of projects. That was intentionally vague, because I know that if you ended up here you already know what a BeagleBone Black is.

In this post I’ll explain how I got Xenomai to run on my BeagleBone.

First of all I tried these instructions, but couldn’t get past the kernel compilation step. I believe that this is due to the instructions being six months old, which is like two and a half centuries in computer time. So I continued searching and found a post on a Japanese blog. Using my fluent Japanese (read: Google Translate), I could understand what was going on, successfully reproduce the steps and get Xenomai up and running (big thanks to the author!). Here I’ll reproduce the steps. I’m assuming that you are on a computer running Ubuntu (like mine) and are familiar with the command line.

Getting the tools

Step 0: Get all the tools that will be needed (cross-compiler and dev libraries).
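On a reasonably recent Ubuntu, something along these lines should cover it (the package names are my guess at what this step needs, not a list from the original guide):

    sudo apt-get install build-essential git gcc-arm-linux-gnueabihf \
        libncurses5-dev u-boot-tools lzop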

Building the Kernel

Step 1: First of all, make a directory to hold all of our development files. I’ll call mine bbb.
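For example:

    mkdir ~/bbb
    cd ~/bbb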

Step 2: Get the Linux kernel for the BeagleBone and the Xenomai sources. This might take a while.
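Assuming the BeagleBone kernel tree on GitHub and the Xenomai 2.6 sources (the Xenomai repository has moved around over the years, so grab whatever 2.6.x source the project currently offers if this URL no longer works):

    cd ~/bbb
    git clone https://github.com/beagleboard/kernel.git
    git clone git://git.xenomai.org/xenomai-2.6.git xenomai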

Step 3: Check out the kernel’s 3.8 branch. Apply the BeagleBone patches.

Note: In this step I revert to a specific commit because newer ones are known to cause problems.
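With the beagleboard kernel repository layout this looks roughly as follows; <known-good-commit> stands for the specific commit mentioned in the note above, and patch.sh fetches the actual kernel source into an inner kernel/ directory before patching it:

    cd ~/bbb/kernel
    git checkout <known-good-commit> -b xenomai   # a commit on the 3.8 branch
    ./patch.sh                                    # applies the BeagleBone patches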

Step 4: Get a firmware that the kernel config will need (I’m not sure whether this firmware is really needed).
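The file in question is am335x-pm-firmware.bin (download it from wherever your sources point to); it goes into the kernel tree’s firmware/ directory so the build can find it:

    cp am335x-pm-firmware.bin ~/bbb/kernel/kernel/firmware/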

Step 5: Copy the BeagleBone default config as the running config.
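With the repository layout from Step 3, that amounts to something like:

    cp ~/bbb/kernel/configs/beaglebone ~/bbb/kernel/kernel/.config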

Step 6: Apply I-pipe patches to the BeagleBone kernel.
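Xenomai 2.6 ships its I-pipe patches for ARM under ksrc/arch/arm/patches/; pick the one matching the 3.8 kernel (the exact file name depends on the Xenomai release you downloaded):

    cd ~/bbb/kernel/kernel
    patch -p1 < ~/bbb/xenomai/ksrc/arch/arm/patches/ipipe-core-3.8.13-arm-*.patch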

Step 7: Run the Xenomai prepare-kernel  script for the BeagleBone kernel.
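The script lives in the Xenomai tree’s scripts/ directory; run it against the kernel source (depending on the Xenomai version it may also ask for the I-pipe patch, which we already applied in Step 6):

    cd ~/bbb/kernel/kernel
    ~/bbb/xenomai/scripts/prepare-kernel.sh --arch=arm --linux=.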

Step 8: Configure the kernel to be built.
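For example, using Ubuntu’s arm-linux-gnueabihf- cross-compiler prefix (adjust it to whatever toolchain you installed in Step 0):

    cd ~/bbb/kernel/kernel
    make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- menuconfig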

Under CPU Power Management ---> CPU Frequency scaling, disable [ ] CPU Frequency scaling. (Note: I don’t know whether it’s better to leave it enabled; read the comments!)

Under Real-time sub-system ---> Drivers ---> Testing drivers, enable everything.

Step 9: Compile the kernel.
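Roughly as follows; whether you need uImage or zImage, and which LOADADDR, depends on how your board boots, the values below being the common ones for the AM335x:

    cd ~/bbb/kernel/kernel
    make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- -j16 uImage dtbs LOADADDR=0x80008000
    make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- -j16 modules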

Note: I passed 16 to the -j option because my computer has 8 cores. Choose a value appropriate to your computer. I read somewhere that 2 times the number of cores is a good number.

Note: If there were errors in the compilation, the messages will probably be lost among all other output. To see them, simply run the command again.

Preparing an SD Card

Now let’s get an SD Card ready with the Angstrom distribution and our kernel. If you want to use this kernel with the distribution on the eMMC memory, just put it in the appropriate place.

Step 10: Download and copy the default Angstrom distribution to your SD Card. Replace /dev/sdX with the path to your SD Card (sudo fdisk -l is your friend). Note: I used a SanDisk 4GB SD Card.
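For example, replacing <angstrom-image> with the image file you downloaded from beagleboard.org:

    xz -dc <angstrom-image>.img.xz | sudo dd of=/dev/sdX bs=1M
    sync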

CAUTION: YOU WILL LOSE ALL YOUR PREVIOUS DATA ON THE DEVICE /dev/sdX!

Step 11: Mount the Angstrom partition. Copy the kernel and kernel modules (thanks for your comment, Jurg Lehni!), the Xenomai modules and the source folder to that partition. Replace /dev/sdX2 with the actual path to that partition.
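A possible sequence, assuming the rootfs partition keeps the kernel in /boot (as the Angstrom images of that era did):

    sudo mount /dev/sdX2 /mnt
    sudo cp ~/bbb/kernel/kernel/arch/arm/boot/uImage /mnt/boot/uImage
    sudo make -C ~/bbb/kernel/kernel ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- \
        INSTALL_MOD_PATH=/mnt modules_install
    sudo cp -r ~/bbb/xenomai /mnt/home/root/
    sudo umount /mnt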

Testing

Now put the SD card into the BeagleBone, boot it, ssh into it and test Xenomai.

Step 12: Configure the date and compile Xenomai.
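On the board (over ssh, as root), something like the following; Xenomai’s userspace installs under its default /usr/xenomai prefix:

    date -s "<current date and time>"
    cd ~/xenomai
    ./configure
    make
    make install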

Note: an example for the date command would be date -s "21 May 2014 13:25 GMT-3".

Step 13: Load the testing driver.
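Assuming the testing drivers were built as modules in Step 8, the timer benchmark driver used by the latency test can be loaded with:

    modprobe xeno_timerbench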

Step 14: Run some tests!
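The commands below are examples; they assume Xenomai’s userspace ended up under the default /usr/xenomai prefix and that its bin directory is not on your PATH.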

User-mode latency:
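    /usr/xenomai/bin/latency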

In-kernel latency:
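    /usr/xenomai/bin/latency -t 1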

Poke around:
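    cat /proc/xenomai/version
    cat /proc/xenomai/stat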

Change parameters:
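    /usr/xenomai/bin/latency -p 100 -T 60    # 100 us period, run for 60 seconds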

Great, huh? Now go develop something real time =)