Monday, December 19, 2011

Signals and Threads with Python on FreeBSD

Over the last few months, I've been plagued by a fun bug in Python around handling of signals in multi-threaded programs on FreeBSD.

If you kill a multi-threaded program, FreeBSD will deliver the signal to any running thread, while Linux will only deliver the signal to the main thread. Python guarantees that as far as Python is concerned, only the main thread will handle the signal, but it makes no guarantees about anything else. Unfortunately, this leads to a few problems.
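You can see Python's main-thread guarantee in a few lines: however the OS routes the signal, the Python-level handler runs in the main thread. A small sketch (assuming a POSIX platform with SIGUSR1):

```python
import os
import signal
import threading
import time

handled_in = []

def handler(signum, frame):
    # Record which thread actually ran the Python-level handler.
    handled_in.append(threading.current_thread().name)

signal.signal(signal.SIGUSR1, handler)

def send_signal():
    # Raise the signal from a worker thread; the OS may deliver it to
    # any thread, but Python runs the handler in the main thread.
    os.kill(os.getpid(), signal.SIGUSR1)

t = threading.Thread(target=send_signal)
t.start()
t.join()
time.sleep(0.2)  # give the interpreter a moment to run the handler

print(handled_in)  # ['MainThread']
```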

If you use the FreeBSD ports build of Python, for the most part you'll get correct thread and signal behavior. The port maintainers have applied a patch that basically blocks signals in every thread but the main thread. Unfortunately, this introduces a flaw of its own: if you ever fork from a thread (e.g. to spawn a subprocess), the signals are never unblocked in the child, so your subprocess is unkillable.
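There's no clean way to undo the inherited blocked mask from Python 2, but on Python 3.3+ you can unblock signals in the child yourself with signal.pthread_sigmask, e.g. via a subprocess preexec_fn. A hedged sketch (this API was not available in the Python versions this post discusses):

```python
import signal
import subprocess

def unblock_all_signals():
    # Runs in the child between fork() and exec(); clear the inherited
    # blocked-signal mask so the child can be killed normally.
    signal.pthread_sigmask(signal.SIG_SETMASK, set())

proc = subprocess.Popen(['sleep', '60'], preexec_fn=unblock_all_signals)
proc.terminate()   # SIGTERM now actually reaches the child
rc = proc.wait()   # negative signal number on POSIX when killed
```

Note that the subprocess docs warn that preexec_fn is not safe in the presence of threads in all cases, so treat this as a sketch, not a drop-in fix.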

If you use a stock build of Python on FreeBSD, you get a different set of problems, described below. Either way, it's difficult to write code that behaves the same on FreeBSD and Linux.

Working with signals and threads in Python on FreeBSD (stock Python)

If you're writing multi-threaded code on FreeBSD, and you want to handle signals, you need to ensure that you are prepared to handle interrupted system calls in every thread, not just the main thread. Usually this means wrapping them in a try/except block like this:

while True:
    try:
        data = sock.recv(4096)
        if not data:
            break
        # process data ...
    except socket.error, e:
        if e.errno != errno.EINTR:
            raise
        # EINTR just means a signal interrupted the call; retry

On Linux, you only need to worry about this in the main thread. On FreeBSD, you need to worry about it everywhere.

The other thing you need to avoid is blocking indefinitely in the main thread. It's a common pattern to spawn a thread to handle connections and have your main thread wait for a signal or some other indication that it's time to quit. Unfortunately, because of Python's assumptions about signals, this doesn't work on FreeBSD. Not even signal.pause() in the main thread will return when a signal is received. For example, the following code will never exit on FreeBSD.

import os
import signal
import threading
import time

def handler(signum, frame):
    print 'Signal %d handled' % (signum,)

def kill_me():
    print 'Suicide?'
    os.kill(os.getpid(), signal.SIGTERM)

if __name__ == '__main__':
    signal.signal(signal.SIGTERM, handler)
    t = threading.Thread(target=kill_me)
    t.start()
    signal.pause()
    print 'Got a signal, exit.'

The fix is to replace the blocking call (in the example above, the signal.pause()) in the main thread with a sleep-loop.

import signal
import time

def my_signal_handler(signum, frame):
    global _run
    _run = False

_run = True
signal.signal(signal.SIGTERM, my_signal_handler)

# Spawn some threads ...

while _run:
    time.sleep(1)

# Join the threads ...

If your application needs to respond to signals faster, use a shorter sleep; if you can tolerate a longer delay, pick a longer one. This is obviously inefficient, but it's the best you can do with a stock Python.

Monday, August 15, 2011

Getting XML Output from Python Unit Tests with unittest2 and pyjunitxml

I use Jenkins for continuous integration. (If you've not heard of Jenkins, but you've heard of Hudson, they're basically the same thing.) Jenkins is an amazingly powerful (yet easy to use) piece of software that you can set up to build and test your code every time you check in. This immediate feedback is really useful so you can detect and fix problems right away rather than waiting for your co-workers (or your customers!) to find them.

Jenkins can be set up to run Python unit tests and track test failures over time. The tricky part is that you need to be able to run your tests in such a way as to produce XML output. nose is an extension to the built-in Python unittest library that provides numerous plugins to handle things like XML output and tracking code coverage.

I've been using the unittest2 module for writing my unit tests. Basically, it's a backport of the new and much-improved unittest library from Python 2.7, making it available for Python 2.4, 2.5 and 2.6. Sadly, nose does not work well with unittest2. There are the beginnings of a nose2 that depends on a new plugin-capable branch of the unittest2 module, but as of now there's nothing ready for "production". So how do people get their unittest2 tests to run with XML output?

I asked about this on the Testing in Python mailing list, and I was pointed at pyjunitxml. This module basically implements a new TestResult class that can be used in place of the default unittest TextTestResult class to write XML output. It still needs a wrapper script to set up the result and run all of your tests, so it's not quite the solution I was looking for, but it was a start.
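The key idea is that unittest will happily drive any TestResult subclass you hand it. The sketch below uses a toy result class in place of pyjunitxml's real XML-writing result (so it stays self-contained); the plumbing is the same either way:

```python
import unittest

class ToyXmlResult(unittest.TestResult):
    """Toy stand-in for pyjunitxml's result class: emits one
    <testcase> element per test instead of full JUnit XML."""

    def __init__(self, lines):
        unittest.TestResult.__init__(self)
        self.lines = lines

    def stopTest(self, test):
        unittest.TestResult.stopTest(self, test)
        self.lines.append('<testcase name="%s"/>' % test.id())

class SampleTest(unittest.TestCase):
    def test_addition(self):
        self.assertEqual(1 + 1, 2)

# The "wrapper script" part: build a result, load the tests, run them.
lines = []
result = ToyXmlResult(lines)
suite = unittest.TestLoader().loadTestsFromTestCase(SampleTest)
suite.run(result)
```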

I took pyjunitxml and added my own command-line script to run named unit tests or hook in to unittest2's test discovery. It works with Python 2.4, 2.5, 2.6, 2.7, 3.1 and 3.2, and it works with or without the unittest2 package installed (although you won't get test discovery without it on older Python versions). (I used Jenkins to test all of those combinations at once!) It's currently available from my branch, but hopefully it'll get merged into the pyjunitxml trunk soon. Feel free to try it out, and please give me any feedback you might have!

Flymake and Pyflakes with TRAMP

When I get everything figured out, I'll post more of my Emacs config, but here's the bit to get flymake and Pyflakes working.

I don't remember exactly why I needed all this magic, but I think the flymake-create-temp-intemp function is needed to get Flymake to work with TRAMP; it makes flymake put the temporary file on the local machine instead of the remote one, where Pyflakes (running locally) can see it. The rest is just standard stuff to tell flymake to use Pyflakes with Python code.

;; pyflakes
;; adapted from
;; and

(defun flymake-create-temp-intemp (file-name prefix)
  "Return file name in temporary directory for checking FILE-NAME.
This is a replacement for `flymake-create-temp-inplace'. The
difference is that it gives a file name in
`temporary-file-directory' instead of the same directory as
FILE-NAME.

For the use of PREFIX see that function.

Note that not making the temporary file in another directory
\(like here) will not work if the file you are checking depends on
relative paths to other files \(for the type of checks flymake
makes)."
  (unless (stringp file-name)
    (error "Invalid file-name"))
  (or prefix
      (setq prefix "flymake"))
  (let* ((name (concat
                (file-name-nondirectory
                 (file-name-sans-extension file-name))
                "_" prefix))
         (ext (concat "." (file-name-extension file-name)))
         (temp-name (make-temp-file name nil ext)))
    (flymake-log 3 "create-temp-intemp: file=%s temp=%s"
                 file-name temp-name)
    temp-name))

(when (load "flymake" t)
  (defun flymake-pyflakes-init ()
    (let* ((temp-file (flymake-init-create-temp-buffer-copy
                       'flymake-create-temp-intemp))
           (local-file (file-relative-name
                        temp-file
                        (file-name-directory buffer-file-name))))
      (flymake-log 3 "flymake-pyflakes-init: dir=%s %s"
                   buffer-file-name (file-name-directory temp-file))
      (list "pyflakes" (list local-file)
            (file-name-directory temp-file))))

  (add-to-list 'flymake-allowed-file-name-masks
               '("\\.py\\'" flymake-pyflakes-init)))

(add-hook 'find-file-hook 'flymake-find-file-hook)

Sunday, August 14, 2011

Emacs, Tramp and Python auto-completion

At work, I use Aquamacs Emacs on my OS X laptop for editing code that runs on FreeBSD and Linux servers in our colo. Since the files are remote (as is the environment for running them), I use TRAMP to let me conveniently edit these files from my laptop. Before TRAMP, I used sshfs to achieve the same goal, but I ran into issues where files would get corrupted, and I haven't tried it since.

I have a few useful customizations for Python editing: flymake and Pyflakes support to flag the more egregious errors, and highlight-80+-mode to show me when my lines get too long. But it still leaves quite a bit to be desired.

About once a year, I get all ambitious and try to get auto-completion working for Python code in Emacs. This generally results in me searching the internet endlessly, installing a number of packages that ultimately don't work, contemplating fixing some of them until I realize I don't know Emacs Lisp, and then promising myself I'll learn it. (I never do.) The whole exercise costs at least two days of wasted effort, and I still don't have auto-completion.

It seems that pretty much everyone out there uses Pymacs and pycomplete or ropemacs to do this, but unfortunately Pymacs only runs on the local host. (I've patched it to get it to run on a remote host, but it's not a great solution as it can only run on one machine.) All these tools get confused by TRAMP filenames, so all in all it's a pretty ugly situation.

This year, I made a little bit of progress. I found auto-complete mode. By default, it doesn't give you anything fancy for Python editing, but it will automatically complete other words you've used in the same buffer. As it turns out, that's most of them, making this a great 90% solution. It'd be nice to get completion that's a little more semantically aware, but that's not so easy.

I also installed yasnippet, which is a pretty powerful tool that can save you some typing and integrates somewhat with auto-complete. Getting auto-complete and yasnippet to play nicely together was a bit tricky; I ended up having to change yasnippet's key bindings to conflict less with auto-complete. (Otherwise, the behavior I'd get depended on whether I waited long enough for auto-complete's completion window to pop up. Yikes!)

All in all, I'd say I've still got some work to do to make my Emacs setup as efficient as I'd like, but this week's efforts have definitely helped quite a bit!

Monday, May 23, 2011

Debian on Amazon EC2: Booting issues

Recently I've been playing around with PyPy and its new support for C extensions. I've been wanting to jump into it more, but outside of my desktop at work, I don't really have a suitable environment for hacking on stuff, especially for non-work related stuff. My linux box at home is a MythTV DVR; probably not the best place to do PyPy "translations". Clearly I need another dev box.

For a while I've been meaning to start playing around with Amazon EC2, so I figured this might be a good opportunity, plus I just found out recently they have a free service tier for the first year. Unfortunately, the default Amazon images are Fedora, and I'm more of a Debian guy, so I wanted to install a Debian machine image. Being the paranoid sort, I don't really want to install somebody else's machine image; plus it's more fun to do it myself!

I set up a normal 64-bit Linux AMI, mounted a second "target" EBS volume, and ran the ec2ubuntu script to build an AMI. (I had to make a couple of changes and run a few things by hand to get the script to work.) It downloads debootstrap and installs Debian to a target directory/volume, using chroot as necessary. When I was done setting up the image, I created a snapshot (using the management console), ran ec2-register to create an AMI, and launched it using the management console. The ec2-register command was:

ec2-register -a x86_64 -b /dev/sda1=<snapshot-id>:2:false -n 'Debian-testing' -d 'Debian testing 2011-05-22' --kernel <hd0 pvgrub aki id>

The biggest challenge was to get the AMI to actually boot. Many of the guides out there pre-date Amazon's use of PV-GRUB. Previously, in order to boot an EC2 image, you had to use one of Amazon's pre-made Kernels, and you had to have their kernel modules installed on your image. Now you can have Amazon boot your own custom kernel image using PV-GRUB, and in fact this is how Amazon's pre-made Linux AMIs work. Troubleshooting and debugging the boot process is particularly frustrating; all you get is a dump of the console log (visible in the AWS Management Console). In the end for your image to boot, you need a few things: grub, the correct Amazon kernel ID, and a working /boot/grub/menu.lst.

By default, Debian comes with GRUB 2. I'm not sure if this works with EC2. I installed the grub-legacy package instead, as this matched the version of PV-GRUB I saw on the console. It's possible you don't even need a version of grub installed, as long as you have a working menu.lst.

The Amazon kernel IDs are listed in the Amazon EC2 Developers Guide, but it's pretty misleading, or flat out wrong. You need to use an "hd0" PV-GRUB AKI if the root device contains your /boot/grub/menu.lst directly, and an "hd00" PV-GRUB AKI if the root device is partitioned and your menu.lst is on the first partition. So, even though I'm using an EBS-backed instance, I needed an "hd0" AKI, because my root EBS volume is not partitioned. In us-west-1, for a 64-bit OS I used AKI "aki-9ba0f1de".

Setting up the /boot/grub/menu.lst correctly was the hardest part for me. The biggest issue is that grub (or the OS?) sees the "root device" as /dev/xvda1 instead of /dev/sda1, so the "kernel" line of the menu.lst needs to say /dev/xvda1 instead of /dev/sda1. (The console output when this was incorrect was very misleading; I was getting errors like "Gave up waiting for root device" and "Alert! /dev/sda1 does not exist.") I'm not sure why it did not occur to me sooner that I could use the Amazon Linux AMI's menu.lst as an example, but this realization proved to be helpful. Anyways, my full menu.lst is:

default 0
timeout 1

title EC2
        root (hd0)
        kernel /boot/vmlinuz-2.6.38-2-amd64 root=/dev/xvda1
        initrd /boot/initrd.img-2.6.38-2-amd64

After correcting my menu.lst, everything booted just fine. I was able to ssh in, as root. The init.d scripts added by the ec2ubuntu script copied over the correct SSH key, so logging in was easy.

I'm now the proud "owner" of a dev machine running Debian "in the cloud". My next step is to configure it, but that will have to wait for later.

Thursday, May 5, 2011

Memorial Day 2011: Which Ski Resorts will be Open?

I've been compiling a list of ski resorts that are going to be open for Memorial Day (May 28–30, 2011), since I'm hoping to get one last weekend of skiing in this season. I've also been trying to keep track of which lifts, and how much terrain, will be open, since I know some resorts like to open token amounts of terrain in order to lure unsuspecting visitors. I'll try to keep this up to date as best I can, but some resorts (Whistler Blackcomb) like to change their plans frequently.

Arapahoe Basin, Summit County, CO
Open Daily (8:30–4:00):
All lifts, subject to conditions.

Crystal Mountain, WA
Open Saturday – Monday (8:00–3:00):
Green Valley, Mt Rainier Gondola.

Donner Ski Ranch, Lake Tahoe Area, CA
Open Saturday – Monday (8:30–2:00):
No info on which lifts.

Keystone, Summit County, CO
Open Friday – Monday (10:30–2:30):
No info on which lifts; it might just be a park?

Kirkwood, Lake Tahoe Area, CA
Open Saturday & Sunday (8:00–2:00):
Lifts 10 & 11.

Mammoth Mountain, Eastern Sierras, CA
Open Daily (8:30–4:00):
Chairs 1, 2, 3, 4, 5, 6, 10, 11, 23, G1 and G2

Mt Bachelor, Central OR
Open Daily until Sunday May 29 (9:00–2:00):
Pine Marten Express, Skyline Express, Summit Express.

Snowbird, Salt Lake City Area, UT
Open Friday – Monday (9:00–3:00):
Tram (summer schedule), Mineral Basin Express, Little Cloud.

Squaw Valley, Lake Tahoe Area, CA
Open Saturday – Monday (9:00–2:00):
Cable Car, KT-22, Headwall, Shirley Lake, Links, Bailey's Beach.

Timberline Lodge, Northern OR
Open Daily (9:00 - 2:30):
Best 3 lifts, based on conditions (until May 31; only two lifts after that, all summer).

Whistler Blackcomb, Whistler, BC
Open Daily until Monday May 30 (10:00–4:00):
Blackcomb mountain only: Excalibur Gondola, Solar Coaster Express, Excelerator Express, Wizard Express (maybe, Saturday - Sunday only), Catskinner (Saturday-Sunday only), Jersey Cream Express, Crystal, Glacier Express.

About this blog...

Over the years, I've realized that the internet is a small place, and I've occasionally tried to find information that simply does not exist on the internet. Usually this works as follows:
  • Come across an obscure problem or question that I want solved or answered.
  • Search for variations of the same really vague keywords on the internet for several hours.
  • Realize that what I'm looking for is not on the internet.
  • Dig deeper and figure out the answer or solution for myself.
  • Come to the questionable conclusion that I'm the only one in the entire world that knows some factoid.
  • Think existentially about the limitations of human knowledge and how small the world is.
I've finally decided to add another step: share what I've learned with the internet. With any luck, maybe search engines will index something useful here. Certainly that's one of the goals of this blog. (Probably the only one I've come up with so far.)

I expect that posts here will include lots of random topics. Most of them will be nerdy. There might be occasional posts about skiing, living in the Bay Area, and who knows what else.
