Getting started with the Arietta G25 board

This weekend, I received my ACME Arietta G25 Atmel ARM board and tried to get it going with my Mac. I ordered the “plain” version with 128 MB RAM as well as one with 256 MB RAM, plus the WiFi boards and the DPI debug board.

After soldering in the necessary pin headers, I attached the DPI board and verified that my Mac has the right FT232 driver to access the console; no problems there.

I then fired up my Ubuntu VirtualBox machine and followed the instructions to build the micro SD card image, and then booted the board successfully from the card.

Through the console, I could get the WiFi card going; since I wanted support for more than a single network, I extended the configuration through wpa_supplicant.conf.
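
In case it helps, a minimal wpa_supplicant.conf with more than one network block looks roughly like this (SSIDs, passphrases, and priorities are placeholders); wpa_supplicant picks the highest-priority network it can see:

ctrl_interface=/var/run/wpa_supplicant
network={
    ssid="home-network"
    psk="home passphrase"
    priority=10
}
network={
    ssid="other-network"
    psk="other passphrase"
    priority=5
}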

ACME has configured a Gadget driver to supply an Ethernet interface through the host port on the Arietta; in ACME’s configuration, this is set to offer both RNDIS and CDC EEM modes. Unfortunately, Mac OS X 10.9 supports neither.

After some more reading, I decided to build my own kernel and modules. To get a Mac-compatible setup for the USB Gadget driver, run menuconfig, and navigate to

  • Device Drivers
  • USB support
  • USB Gadget Support (at the very bottom of the list)

For Mac compatibility, de-select the RNDIS support and the Ethernet Emulation Model (EEM) support under Ethernet Gadget.

I also chose to enable the Serial Gadget and the CDC Composite Device (Ethernet and ACM).
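
For reference, the resulting options in the kernel .config should end up looking roughly like this (symbol names quoted from memory, so double-check them against your kernel version):

CONFIG_USB_ETH=m
# CONFIG_USB_ETH_RNDIS is not set
# CONFIG_USB_ETH_EEM is not set
CONFIG_USB_G_SERIAL=m
CONFIG_USB_CDC_COMPOSITE=m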

After building the kernel and replacing the files on the SD card, I changed the module load line in /etc/network/interfaces to load the g_cdc module instead of g_ether, and added an entry in /etc/inittab for /dev/ttyGS0 so I also get a console through the host port.
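
The corresponding snippets look roughly like this; the usb0 address is just an example, and your getty invocation may differ:

# /etc/network/interfaces
auto usb0
iface usb0 inet static
    address 192.168.10.10
    netmask 255.255.255.0
    pre-up modprobe g_cdc

# /etc/inittab
T1:23:respawn:/sbin/getty -L ttyGS0 115200 vt100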

Debugging Java proxy settings

Java proxy settings are highly annoying. On Windows and Mac, Java does use the system proxy settings, but it doesn’t necessarily understand the same syntax as, say, IE or Safari/WebKit for the proxy exceptions. The end result is that you keep guessing why certain services deep inside your huge web application keep failing in mysterious ways. On other platforms, you have to configure the proxy through system properties. Restarting a web application to test out configuration changes can take a long time.

While solving the problem might not be easy, at least I can help with debugging it, with a very simple one-liner: Java Proxy Check (GitHub, direct download link).

For java.net.URL and most other HTTP client code, Java 7 uses a class that can decide on a per-URL basis which proxy should be used: java.net.ProxySelector. javaproxycheck.jar can be run from the command line to quickly test one or more URLs and see which proxy is selected for each of them:

$ java -Dhttps.proxyHost=proxy.example.com -Dhttps.proxyPort=3128 -jar javaproxycheck.jar http://www.example.com https://secure.example.com
http://www.example.com                   [DIRECT]
https://secure.example.com               [HTTP @ proxy.example.com:3128]

On Mac OS X and Windows, Java uses the system proxy configuration; on other systems, it can optionally use the Gnome settings, but by default it relies on system properties being set at startup, as documented in Networking Properties, Proxies.
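
The tool is also handy for testing proxy exceptions. For example, adding http.nonProxyHosts (which Java applies to HTTPS as well) should make the secure URL from the example above go direct, too; the hostnames are of course placeholders:

$ java -Dhttps.proxyHost=proxy.example.com -Dhttps.proxyPort=3128 -Dhttp.nonProxyHosts="*.example.com" -jar javaproxycheck.jar https://secure.example.com
https://secure.example.com               [DIRECT]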

nsupdate woes

Over the past three days, I’ve been trying unsuccessfully to set up a fresh name server (using Ansible and Vagrant) to play around with nsupdate and some potential middleware to automate updating DNS zones. Unfortunately, I couldn’t find any good documentation on all the specifics, just a lot of how-tos that somehow all didn’t really work for me. I kept getting this error:

client 127.0.0.1#60530: request has invalid signature: TSIG example.com: tsig verify failure (BADKEY)

I will upload my Ansible role for this to GitHub shortly, but in the meantime I want to write down the essentials to spare the next person the pain of getting it all to work.

To use nsupdate, you need to give bind a key, associate that key with a zone, and then use the key with nsupdate. Sounds easy enough, right? What all the tutorials fail to mention is that the names of the key files and of the key entry in named.conf are all significant.

First, you need to decide what to call your key. It doesn’t matter which zones you will use it for or from which hosts you will run nsupdate, but you need to pick a name and stick to it. You can’t change it later; if you don’t like the name, you will have to create a new key.

$ dnssec-keygen -a HMAC-MD5 -n HOST -b 512 mysamplekey

This will create two files. You need both of them, and you can’t change the files’ names.
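
For HMAC-MD5, the generated files look like this (157 is the algorithm number; the fingerprint will differ on your system). The base64-encoded secret for named.conf comes from the Key: line of the .private file:

$ ls Kmysamplekey.*
Kmysamplekey.+157+45678.key  Kmysamplekey.+157+45678.private
$ grep '^Key:' Kmysamplekey.+157+45678.private
Key: (the base64 encoded secret)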

Next up, you need to add the key to named.conf, and allow requests signed by that key to update a zone (or more):

key "mysamplekey" {
    algorithm hmac-md5;
    secret "base 64 encoded key";
};

zone "example.com" {
    allow-update { key "mysamplekey"; };
    type master;
    file "dynamic/example.com";
};

It is important that “mysamplekey” is exactly the same string you used when generating the key!

Armed with this configuration, you should be able to update example.com:

$ nsupdate -k Kmysamplekey.+157+45678.private
debug yes
server 127.0.0.1
zone example.com
update add foobar 10 A 192.0.2.1
send
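
If the update is accepted, the new record should be visible immediately; a quick check against the local server:

$ dig @127.0.0.1 +short foobar.example.com A
192.0.2.1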

Download from Tivo fails with HTTP error 400

So tonight I was wondering why no new shows had been downloaded from my Tivo to my file server. Investigating my home-grown script’s log output, I could see that all download attempts had been met with an HTTP error 400 since Sunday. (My script works similarly to kmttg.)

I immediately tried in Safari and had no problems downloading one of those shows, so more debugging was necessary. Enabling additional output for Python’s urllib2 (used for downloading the XML table of contents) and curl (used for downloading the media files) showed an unusual Set-Cookie response header on all of the requests, setting a new value for the “sid” cookie. Quick recap: when you load the XML TOC over HTTPS, the Tivo sets a “sid” cookie which you need to supply when downloading the actual show’s file. No problem, my script was recording the cookie using cookielib.CookieJar and passing the resulting file to curl. Or so I thought. Inspecting the cookie jar, I found it to be empty. How could that be?

At this point I turned to Firefox and the Live HTTP Headers extension. Interestingly, the problem was the same in Firefox, so it wasn’t a problem in my script. But what was going wrong? After some experimentation with variations of the Set-Cookie header (using my internal web server and Apache’s very useful mod_asis), plus remembering the details of the cookie protocol, it became clear:

Set-Cookie: sid=542329DF12A3586B; path=/; expires="Saturday, 16-Feb-2013 00:00:00 GMT";

If a cookie has an expires attribute and the current date and time is later than the date specified, the cookie becomes invalid, and the HTTP client should remove it and not transmit it anymore. In other words, a server that sends a Set-Cookie header with an expires date in the past is telling the client to delete the cookie. Bingo! (Safari had an old cookie still set, so the Tivo just used that and did not remove the cookie.)

So the Tivo is apparently trying to set a persistent cookie (confusingly named “sid”, as it’s not a session cookie), but doing so with a date in the past. And everything was working just fine until last Saturday! So either Tivo pushed a software update on Sunday that changed the expires date of that cookie, or it has had that date forever and eternity just ran out. I haven’t reverse engineered the firmware, but I would bet that someone way back when Tivo got started hard-coded a date “way in the future”, and everybody promptly forgot about it.

No word from Tivo yet, but this Tivo forum thread mentions the same broken cookie.

I managed to add a hack to my script that extracts the “sid” value from the Set-Cookie header manually and hands the proper cookie value to curl for the downloads. And I got to learn about mechanize for Python!
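
For what it’s worth, the core of that hack boils down to something like the following sketch; the URLs are hypothetical, and the authentication and certificate handling the Tivo requires are left out for brevity:

import re
import subprocess
import urllib2

# Hypothetical URLs -- substitute your Tivo's address and the actual
# download URL taken from the table of contents.
TOC_URL = "https://tivo.example/TiVoConnect?Command=QueryContainer&Container=/NowPlaying"
DOWNLOAD_URL = "http://tivo.example/download/show.TiVo"

# Fetch the XML table of contents; the response carries the broken
# Set-Cookie header with the expires date in the past.
response = urllib2.urlopen(TOC_URL)
set_cookie = response.headers.get("Set-Cookie", "")

# Pull the sid value out by hand instead of relying on cookielib,
# which (correctly) refuses to keep an already-expired cookie.
match = re.search(r"sid=([0-9A-Fa-f]+)", set_cookie)
sid = match.group(1) if match else ""

# Hand the cookie to curl explicitly for the media download.
subprocess.call(["curl", "--cookie", "sid=" + sid, "-o", "show.tivo", DOWNLOAD_URL])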

Big Ship

Today the Marco Polo of the French shipping line CMA CGM called at the Burchardkai terminal in Hamburg on her maiden voyage. The ship is so big that you can practically always see it, no matter where you happen to look.

Keep those privacy-eating websites at bay

Facebook, Google, Twitter, and all those other services that sell ads want to know as much about you as they can possibly find out. Consider, for example, Facebook’s Like button that is embedded in a huge number of web pages: even if you don’t click it, Facebook knows that you have viewed that page. How can they know? Since you have logged in to Facebook before, they have stored a cookie in your browser. And the Like button’s image is loaded from Facebook’s servers, so that cookie is sent to Facebook, together with information about the page that you’ve just visited. (My blog does not contain any tracking code, not even Google Analytics.)

So how can you avoid this? You could delete all stored cookies every single time you log out of Facebook (and Twitter and Google and…) but that’s not really convenient. Fluid is a small browser for Mac OS X (based on the same WebKit engine as Safari), but with a twist: instead of using the application to browse the Internet, you create specialized browsers for specific websites. I’ve set up a number of them: one each for my personal Facebook and Twitter accounts plus a couple more for accounts that I help maintain for my historic railway club.

Since these specialized browsers all store their cookies separately from each other, I can use my main browser without ever having a Facebook or Twitter login. For those services, I appear as some random surfer, not connected to my actual profile.

Of course, this little trick is not perfect. With the advanced analytics that all the ad networks employ, some information is still gathered about me (such as when I click a link in my Facebook browser that takes me to a different website), but I still feel a lot better about not giving away all my browsing history all the time.

Services to sign up for: app.net and ifttt.com

So I’ve plunked down 100 bucks for an early bird developer account at join.app.net, a new social network. So why would anyone pay for something like this, if Facebook and Twitter and everything else is free? Because it isn’t.

Facebook and Twitter live off selling their users to advertisers. Nothing wrong with that, but it means that, more and more, they choose to implement features and control the user experience to maximize value for the advertisers. What the users want and how they would like to be treated becomes less important. And third-party applications and their developers make things complicated, since they do not help generate ad revenue, so more and more they’re being shut out or severely restricted.

Dalton Caldwell was annoyed by that and decided to do something about it: create a new social media platform, but instead of financing it through advertising, have the users pay for it. At least in principle, this aligns the interests of the platform company with those of the users, since happy users will generate more revenue. Right now it’s an experiment (but it’s looking like it might become very successful, with over 12,000 paying users signing up in less than two months). For now, I have the bragging rights of being the first among my friends to have signed up, but that mainly means that none of my friends are on app.net yet. We’ll see if that changes within the next 12 months.

Which brings me to the second service: if-this-then-that, or IFTTT for short. I now have a plethora of social media accounts, and distributing the various thoughts that I sometimes find worth publishing can become tedious. IFTTT helps with that: it monitors your accounts on any of 30 or so services (you decide which ones) and then cross-posts new items from one service to any other of your choosing. You decide which service to monitor and what to post where by setting up simple recipes: if I publish a new blog post, take its title and URL and post them to app.net, Facebook, and Twitter. I’ve just started using it, but from what I can see from my friends using it, it looks like a very useful tool.

Starting this fall, you’ll have to root your PC, too

In case you haven’t heard yet: with Windows 8, Microsoft has “convinced” the BIOS and system vendors that PCs may only ship with Windows 8 if they support SecureBoot and have it enabled by default. Graciously, Microsoft allows the vendors to make SecureBoot switchable off. How easy that is, and whether it gets implemented at all, is left up to the vendors, though. So far, so bad.

Fedora has now decided to have their boot loader signed through the Microsoft program, so that in the future you can boot (and install) Fedora without turning SecureBoot off.

Booting from a USB disk – even when your system doesn’t support it

I like using VMware Fusion on my Mac, but it has one shortcoming: it cannot boot from USB devices. You can use disk images (floppy, CD/DVD, and hard disk) as well as a physical optical drive, but USB devices are not available as boot devices. That’s unfortunate if you want to use VMware to prepare a hard disk for a machine and want to test booting that system before installing the disk in the machine.

When I install a new FreeBSD machine, I often start out from an existing FreeBSD machine and install directly from that running system, instead of booting off an install DVD. Obviously, using a virtual machine for this bootstrapping system, together with a USB hard disk adapter, is very convenient. But without being able to boot the VM off that USB disk, testing can be cumbersome.

I was very happy to come across Plop, a boot manager with many features. The most interesting one for me is its support for booting off USB devices without BIOS support: Plop includes its own UHCI/OHCI/EHCI driver, supporting standard USB 1.1 and USB 2.0 devices and ports.

Also very important: Plop can be run off a CD or floppy image, so you don’t need to (re-)configure your main hard disk. If I feel adventurous, I might look into patching the Plop BIOS extension into VMware, making booting even easier. For the time being, I’m using the floppy image, since none of my virtual (or physical) machines have floppy drives any more.
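
For VMware Fusion, attaching the Plop floppy image comes down to a few lines in the VM’s .vmx file; the key names here are from memory, and the image file name depends on the Plop version you downloaded:

floppy0.present = "TRUE"
floppy0.fileType = "file"
floppy0.fileName = "plpbt.img"
floppy0.startConnected = "TRUE"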

Also, if you have an older machine whose BIOS does not support booting off USB devices, Plop might be very helpful!

Getting sshd to start as early as possible

In FreeBSD, sshd by default gets started quite late in the boot process, at about the same time a console shows the login prompt. There are quite a few services that can make trouble and hang before that. Annoyingly, you can’t fix a stuck system via ssh, since sshd isn’t started yet. But as it turns out, sshd can be started quite a bit earlier than FreeBSD does by default.

The rcorder keywords in /etc/rc.d/sshd normally look like this:

# PROVIDE: sshd
# REQUIRE: LOGIN cleanvar
# KEYWORD: shutdown

Change the rcorder keywords like so:

# PROVIDE: sshd
# REQUIRE: NETWORKING cleanvar
# BEFORE: mountcritremote
# KEYWORD: shutdown

Now sshd will be started right after the network has been configured.
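
To verify where sshd ends up in the boot order, you can ask rcorder directly; after the change, sshd should show up shortly after the NETWORKING placeholder script:

$ rcorder /etc/rc.d/* /usr/local/etc/rc.d/* 2>/dev/null | grep -nE 'NETWORKING|sshd'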

Note that starting sshd before certain parts of the system are ready might give you temporary or permanent errors. For example, starting sshd before the user home directories are mounted might cause problems with logins. However, if your machine has all critical filesystems on local disks, making these changes should not pose any problems, and will allow you to log in while the rc scripts are still running, giving you the opportunity to fix any misbehaving services.