Sunday, April 14, 2013

Getting an IR receiver to work with a different remote with LIRC

I've been using MythTV for a long time, and recently I decided to upgrade my DVR to become a dedicated box, using Mythbuntu (replacing my old Debian setup).

The installation went pretty smoothly, but I had a ton of trouble getting my Hauppauge PVR-350 remote working with my Pinnacle PCTV USB TV Tuner / IR Receiver and LIRC. I know I got it working before on Debian only a month ago, so I figured it would be easy.

Of course, it wasn't. I think I spent longer trying to figure out what I'd done before than it would have taken me to fix it from scratch, and that's when I knew it was time to write a Google Fodder entry.

In recent versions of Linux, the kernel has built-in support for many IR receivers, and exposes them to LIRC via the "devinput" protocol. LIRC itself has no idea what kind of hardware is being used, nor should it. (I mistakenly tried telling LIRC about the actual remote I was using, but that was a dead end.) By default, each piece of hardware loads the keymap associated with its own remote. This mapping of hardware to keymap is done by udev and controlled by the file /etc/rc_maps.cfg. To tell udev to load a different keymap, change rc_maps.cfg to point to your custom remote in /etc/rc_keymaps/. The syntax of these files is a little finicky, so be sure to use ir-keytable to test your work. (Notably, I couldn't put comments in my keymap file.) Since /etc/rc_maps.cfg is used at boot time, the fix is actually very easy, which might explain why I forgot it. I've seen forum posts all over the place advocating running special ir-keytable commands at boot time, but that's not necessary.
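For reference, here is roughly what the two files look like. All of the names and scancodes below are illustrative, not from my actual setup; run ir-keytable with no arguments to find your device's driver and table names, and ir-keytable -t to see the scancodes your remote actually sends.

```
# /etc/rc_maps.cfg: three whitespace-separated columns.  A "*" matches
# any driver or table; the third column names a file under
# /etc/rc_keymaps/ to load in place of the default keymap.
# driver    table         file
*           rc-pinnacle   hauppauge-pvr350

# /etc/rc_keymaps/hauppauge-pvr350: one "scancode keycode" pair per line.
# (Remember: no comment lines in the keymap file itself -- these header
# comments are only labels for this example.)
0x1e3b KEY_OK
0x1e26 KEY_POWER
```

To try a keymap without rebooting, something like sudo ir-keytable -c -w /etc/rc_keymaps/hauppauge-pvr350 loads it by hand, and ir-keytable -t then prints an event for each button press so you can verify the mapping before relying on the boot-time setup.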

Saturday, September 29, 2012

NoMachine NX, FreeNX, NeatX: Which should I use?

TL;DR: Don't use any of those, use X2Go.

I've used various different VNC servers and clients before, but I've never found them to be very useful. Most of the time the connection is too slow to get anything done. Recently, I found out about NoMachine NX, which has a bunch of really cool technology to make it actually reasonable to use your desktop machine over the internet. And, it runs over SSH, so you don't need to worry about opening up additional ports or encrypting everything.

My own use case was not terribly demanding; I want to access the computer on my desk (running Debian) from my laptop on the couch (running OS X), but I figured if NX works over the internet, it'll work even better over my local wireless network, right?

Unfortunately, NoMachine NX isn't free software. (Although some parts of it are.) Over the years several groups have tried to create alternative server implementations, striving for compatibility with the NoMachine client software.

I first tried FreeNX, using Ubuntu packages on Debian. (I suppose that should have been a red flag: never use software compiled for Ubuntu on Debian, or vice versa. It's just asking for trouble.) It almost worked. I could connect, but then it would immediately crash. I spent hours trying anything I could to fix it, to no avail.

I tried using NoMachine NX Free, the free-of-charge version of NoMachine NX. It installed everything in weird locations on my machine and that made me angry. Also, I couldn't get it working.

I then tried going back to FreeNX, recompiling all the Ubuntu packages from source on a Debian box to rule out version incompatibilities (which was surprisingly difficult). It wasn't until I was halfway through this that I discovered X2Go. The X2Go team maintains a lot of the packages used by FreeNX. Many of the underlying libraries are shared by the two products.

X2Go accomplishes everything that NoMachine NX does, except that it doesn't try for compatibility with the NoMachine NX Client; there is a separate X2Go client, with Windows, Linux and OS X support. That's a good thing, as it allows the X2Go project to control both the client and the server. And, it's packaged for Debian. I installed it and it just worked. Very simple.

So, long story short: use X2Go. Stay away from NoMachine NX, FreeNX, NeatX; they aren't worth your time.

Monday, January 9, 2012

Shutting worker threads down gracefully after a signal in Python

Recently I wrote about a bug in Python around handling of signals in multi-threaded programs. The upstream Python developers suggested that in order to properly handle signals in multi-threaded programs across several operating systems, developers should use some of the newer APIs in the signal library to make sure we get the behavior we want.

A common scenario for threads and signals is a daemon with long running worker threads that need to exit gracefully when SIGTERM or some other signal is received. The basic idea is:
  • Set up signal handlers
  • Spawn worker threads
  • Wait for SIGTERM, and wake up immediately when it is received
  • Shut down workers gracefully

The challenge is to write the simplest code that will do this portably (at least on FreeBSD and Linux), and that uses absolutely no CPU in the main thread while waiting (we want to be able to sleep completely, not wake up periodically to poll for signals).

This is the best solution I've come up with (skip down to the bottom, it's where the interesting stuff is):

import errno
import fcntl
import os
import signal
import threading

NUM_THREADS = 2
_shutdown = False


class Worker(threading.Thread):

    def __init__(self, *args, **kwargs):
        threading.Thread.__init__(self, *args, **kwargs)
        self._stop_event = threading.Event()

    def run(self):
        # Do something.
        while not self._stop_event.isSet():
            print 'hi from %s' % (self.getName(),)
            self._stop_event.wait(10)

    def shutdown(self):
        self._stop_event.set()
        print 'shutdown %s' % (self.getName(),)
        self.join()


def sig_handler(signum, frame):
    print 'handled'
    global _shutdown
    _shutdown = True


if __name__ == '__main__':

    # Set up signal handling.
    pipe_r, pipe_w = os.pipe()
    flags = fcntl.fcntl(pipe_w, fcntl.F_GETFL, 0)
    flags |= os.O_NONBLOCK
    flags = fcntl.fcntl(pipe_w, fcntl.F_SETFL, flags)
    signal.set_wakeup_fd(pipe_w)

    signal.signal(signal.SIGTERM, sig_handler)

    # Start worker threads.
    workers = [Worker() for i in xrange(NUM_THREADS)]
    for worker in workers:
        worker.start()

    # Sleep until woken by a signal.
    while not _shutdown:
        while True:
            try:
                os.read(pipe_r, 1)
                break
            except OSError, e:
                if e.errno != errno.EINTR:
                    raise

    # Shut down worker threads gracefully.
    for worker in workers:
        worker.shutdown()

Basically, we have to use set_wakeup_fd() to ensure that we can reliably wake up when a signal is delivered. The obvious function to use here, signal.pause(), doesn't work: as described in my earlier post on signals and threads, on FreeBSD the signal may be delivered to a thread other than the main thread, so a main thread blocked in signal.pause() may never return.
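To see the wakeup-fd mechanism in isolation, here is a minimal self-contained sketch (not part of the original program; SIGUSR1 and the self-kill are used purely for demonstration). When the signal arrives, the interpreter writes a byte to the wakeup fd, so a read() on the other end of the pipe returns instead of blocking forever.

```python
# Minimal demonstration of signal.set_wakeup_fd (illustrative only).
import fcntl
import os
import signal


def wakeup_demo():
    pipe_r, pipe_w = os.pipe()

    # The wakeup fd must be non-blocking.
    flags = fcntl.fcntl(pipe_w, fcntl.F_GETFL, 0)
    fcntl.fcntl(pipe_w, fcntl.F_SETFL, flags | os.O_NONBLOCK)
    signal.set_wakeup_fd(pipe_w)

    # A handler must be installed, or the default action (terminate)
    # would kill the process before anything reaches the pipe.
    signal.signal(signal.SIGUSR1, lambda signum, frame: None)

    os.kill(os.getpid(), signal.SIGUSR1)  # deliver a signal to ourselves
    data = os.read(pipe_r, 1)             # wakes up: one byte per signal

    signal.set_wakeup_fd(-1)              # restore default behaviour
    return data
```

Calling wakeup_demo() returns the single byte the interpreter wrote when the signal arrived; in the full program above, that read is what lets the main thread sleep with zero CPU until a signal shows up.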

Monday, December 19, 2011

Signals and Threads with Python on FreeBSD

Over the last few months, I've been plagued by a fun bug in Python around handling of signals in multi-threaded programs on FreeBSD.

If you kill a multi-threaded program, FreeBSD will deliver the signal to any running thread, while Linux will only deliver the signal to the main thread. Python guarantees that as far as Python is concerned, only the main thread will handle the signal, but it makes no guarantees about anything else. Unfortunately, this leads to a few problems.

If you use the FreeBSD ports build of Python, for the most part you'll get correct thread and signal behavior. The port's maintainers have applied a patch that basically blocks signals for all threads but the main thread. Unfortunately, this leads to one flaw: if you ever fork from a thread (e.g. to spawn a subprocess), the signals are not unblocked, so your subprocess is unkillable.

If you use a stock version of Python on FreeBSD, you get the following, different, problems, which make it difficult to write code that is portable between FreeBSD and Linux.

Working with signals and threads in Python on FreeBSD (stock Python)

If you're writing multi-threaded code on FreeBSD, and you want to handle signals, you need to ensure that you are prepared to handle interrupted system calls in every thread, not just the main thread. Usually this means wrapping them in a try/except block like this:

while True:
    try:
        data = my_sock.read()
        if not data:
            break
        buffer.append(data)
    except socket.error, e:
        if e.errno != errno.EINTR:
            raise

On Linux, you only need to worry about this in the main thread. On FreeBSD, you need to worry about it everywhere.
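One way to avoid repeating that boilerplate in every thread is to factor the retry into a small helper. This is a hypothetical convenience function (not from the standard library), sketching the same pattern as the loop above:

```python
# Hypothetical helper that captures the retry-on-EINTR pattern in one
# place, so each blocking call doesn't need its own try/except block.
import errno


def eintr_retry(func, *args, **kwargs):
    """Call func(*args, **kwargs), retrying while it fails with EINTR."""
    while True:
        try:
            return func(*args, **kwargs)
        except (OSError, IOError) as e:
            if e.errno != errno.EINTR:
                raise
```

With this, a blocking call collapses to something like data = eintr_retry(my_sock.read) — but remember that on FreeBSD it has to be applied in every thread, not just the main one.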

The other thing you need to avoid is blocking indefinitely in the main thread. It's a common pattern to spawn a thread to handle connections and have your main thread wait for a signal or some other indication it's time to quit. Unfortunately, because of Python's assumptions about signals, this doesn't work on FreeBSD. Not even signal.pause() in the main thread will return when a signal is received. For example, the following code will never exit on FreeBSD.

import os
import signal
import threading
import time

def handler(signum, frame):
    print 'Signal %d handled' % (signum,)

def kill_me():
    time.sleep(1)
    print 'Suicide?'
    os.kill(os.getpid(), signal.SIGTERM)
    time.sleep(1)

if __name__ == '__main__':
    signal.signal(signal.SIGTERM, handler)
    t = threading.Thread(target=kill_me)
    t.start()
    signal.pause()
    print 'Got a signal, exit.'


The fix is to replace the blocking call (in the example above, the signal.pause() in the main thread) with a sleep-loop.

import signal
import time

def my_signal_handler(signum, frame):
    global _run
    _run = False

_run = True
signal.signal(signal.SIGTERM, my_signal_handler)

# Spawn some threads ...

while _run:
    time.sleep(1)

# Join the threads ...

If your application needs to react to signals faster, use a shorter sleep; if you can tolerate a longer delay, pick a longer sleep time. This is obviously inefficient, but it's the best you can do with a stock Python.

Monday, August 15, 2011

Getting XML Output from Python Unit Tests with unittest2 and pyjunitxml

I use Jenkins for continuous integration. (If you've not heard of Jenkins, but you've heard of Hudson, they're basically the same thing.) Jenkins is an amazingly powerful (yet easy to use) piece of software that you can set up to build and test your code every time you check in. This immediate feedback is really useful, letting you detect and fix problems right away rather than waiting for your co-workers (or your customers!) to find them.

Jenkins can be set up to run Python unit tests and track test failures over time. The tricky part is that you need to be able to run your tests in such a way as to produce XML output. nose is an extension to the built-in Python unittest library that provides numerous plugins to handle things like XML output and tracking code coverage.

I've been using the unittest2 module for writing my unit tests. Basically, it's a backport of the new and much-improved unittest library from Python 2.7, making it available for Python 2.4, 2.5 and 2.6. Sadly, nose does not work well with unittest2. There are the beginnings of a nose2 that depends on a new plugin-capable branch of the unittest2 module, but as of now there's nothing ready for "production". So how do people get their unittest2 tests to run with XML output?

I asked about this on the Testing in Python mailing list, and I was pointed at pyjunitxml. This module basically implements a new TestResult class that can be used in place of the default unittest TextTestResult class to write XML output. It still needs a wrapper script to set up the result and run all of your tests, so it's not quite the solution I was looking for, but it was a start.
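To make the idea concrete, here is a stripped-down sketch of the mechanism: a custom TestResult subclass that collects results and writes JUnit-style XML when the run finishes. This illustrates the approach, not pyjunitxml's actual implementation, and the XML it emits is deliberately simplified.

```python
# Illustration of swapping a custom TestResult in for TextTestResult
# (simplified; not pyjunitxml's real code or XML schema).
import io
import unittest


class XmlResult(unittest.TestResult):
    """Collects results and writes JUnit-style XML at the end of a run."""

    def __init__(self, stream):
        super(XmlResult, self).__init__()
        self.stream = stream
        self.cases = []

    def addSuccess(self, test):
        super(XmlResult, self).addSuccess(test)
        self.cases.append('  <testcase name="%s"/>' % test.id())

    def addFailure(self, test, err):
        super(XmlResult, self).addFailure(test, err)
        self.cases.append(
            '  <testcase name="%s"><failure/></testcase>' % test.id())

    def stopTestRun(self):
        super(XmlResult, self).stopTestRun()
        self.stream.write('<testsuite tests="%d">\n' % self.testsRun)
        self.stream.write('\n'.join(self.cases))
        self.stream.write('\n</testsuite>\n')


class Demo(unittest.TestCase):
    def test_ok(self):
        self.assertTrue(True)


def run_with_xml(suite):
    """The wrapper-script part: run a suite with XmlResult, return the XML."""
    out = io.StringIO()
    result = XmlResult(out)
    result.startTestRun()
    suite.run(result)
    result.stopTestRun()
    return out.getvalue()
```

Running the Demo case through run_with_xml() produces a small testsuite document with one testcase element; pyjunitxml does essentially this, with proper escaping, timing, and error details.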

I took pyjunitxml and added my own command-line script to run named unit tests or hook into unittest2's test discovery. It works with Python 2.4, 2.5, 2.6, 2.7, 3.1 and 3.2, and it works with or without the unittest2 package installed (although you won't get test discovery without it on older Python versions). (I used Jenkins to test all of those combinations at once!) It's currently available from my branch, but hopefully it'll get merged into the pyjunitxml trunk soon. Feel free to try it out, and please give me any feedback you might have!

Flymake and Pyflakes with TRAMP

When I get everything figured out, I'll post more of my Emacs config, but here's the bit to get flymake and Pyflakes working.

I don't remember exactly why I needed all this magic, but I think the flymake-create-temp-intemp function is needed to get Flymake to work with TRAMP; it makes flymake put the temporary file on the local machine instead of the remote one, where Pyflakes (running locally) can see it. The rest is just standard stuff to tell flymake to use Pyflakes with Python code.

;; pyflakes
;; adapted from http://plope.com/Members/chrism/flymake-mode
;; and http://www.emacswiki.org/emacs/FlymakeRuby

(defun flymake-create-temp-intemp (file-name prefix)
  "Return file name in temporary directory for checking FILE-NAME.
This is a replacement for `flymake-create-temp-inplace'. The
difference is that it gives a file name in
`temporary-file-directory' instead of the same directory as
FILE-NAME.

For the use of PREFIX see that function.

Note that making the temporary file in another directory
\(like here) will not work if the file you are checking depends on
relative paths to other files \(for the type of checks flymake
makes)."
  (unless (stringp file-name)
    (error "Invalid file-name"))
  (or prefix
      (setq prefix "flymake"))
  (let* ((name (concat
                (file-name-nondirectory
                 (file-name-sans-extension file-name))
                "_" prefix))
         (ext  (concat "." (file-name-extension file-name)))
         (temp-name (make-temp-file name nil ext))
         )
    (flymake-log 3 "create-temp-intemp: file=%s temp=%s"
                 file-name temp-name)
    temp-name))

(when (load "flymake" t)
  (defun flymake-pyflakes-init ()
    (let* ((temp-file (flymake-init-create-temp-buffer-copy
                       'flymake-create-temp-intemp))
           (local-file (file-relative-name
                        temp-file
                        (file-name-directory buffer-file-name))))
      (flymake-log 3 "flymake-pyflakes-init: dir=%s %s"
                   buffer-file-name (file-name-directory temp-file))
      (list "pyflakes" (list local-file)
            (file-name-directory temp-file))))

  (add-to-list 'flymake-allowed-file-name-masks
               '("\\.py\\'" flymake-pyflakes-init)))

(add-hook 'find-file-hook 'flymake-find-file-hook)

Sunday, August 14, 2011

Emacs, Tramp and Python auto-completion

At work, I use Aquamacs Emacs on my OS X laptop for editing code that runs on FreeBSD and Linux servers in our colo. Since the files are remote (as is the environment for running them), I use TRAMP to let me conveniently edit these files from my laptop. Before TRAMP, I used sshfs to achieve the same goal, but I ran into issues where files would get corrupted, and I haven't tried it since.

I have a few useful customizations for Python editing: flymake and Pyflakes support to flag the more egregious errors, and highlight-80+-mode to show me when my lines get too long. But it still leaves quite a bit to be desired.

About once a year, I get all ambitious and try to get auto-completion working for Python code in Emacs. This generally results in me searching the internet endlessly, installing a number of packages that ultimately don't work, contemplating fixing some of them until I realize I don't know Emacs Lisp, and then promising myself I'll learn it. (I never do.) The whole exercise costs me at least two days of wasted effort, and I still don't have auto-completion.

It seems that pretty much everyone out there uses Pymacs and pycomplete or ropemacs to do this, but unfortunately Pymacs only runs on the local host. (I've patched it to get it to run on a remote host, but it's not a great solution as it can only run on one machine.) All these tools get confused by TRAMP filenames, so all in all it's a pretty ugly situation.

This year, I made a little bit of progress. I found auto-complete mode. By default, it doesn't give you anything fancy for Python editing, but it will automatically complete other words you've used in the same buffer. As it turns out, that's most of them, making this a great 90% solution. It'd be nice to get completion that's a little more semantically aware, but that's not so easy.

I also installed yasnippet, which is a pretty powerful tool that can save you some typing and integrates somewhat with auto-complete. Getting auto-complete and yasnippet to play nicely together was a bit tricky; I ended up having to change yasnippet's key bindings to conflict less with auto-complete. (Otherwise, the behavior I'd get depended on whether I waited long enough for auto-complete's completion window to pop up. Yikes!)

All in all, I'd say I've still got some work to do to make my Emacs setup as efficient as I'd like, but this week's efforts have definitely helped quite a bit!
