The Fugue Counterpoint by Hans Fugal


tmux C-Left

On OS X, I want to bind ctrl-left/right to cycle through tmux windows (like I did with screen).

The tmux incantation is easy to find online:

bind-key -n C-Left prev
bind-key -n C-Right next

This doesn't work though, because the terminal is sending ^[[5D for C-Left instead of what tmux expects. In my case, with TERM=xterm-256color, tmux expects ^[[1;5D for C-Left and ^[[1;5C for C-Right. You can change this in the terminal's key settings. Ideally the terminal would magically send the right values based on the TERM setting, if there is such a thing as "the right values" in the world of terminfo and modified arrow keys.

I would prefer to tell tmux to accept ^[[5D instead of ^[[1;5D, which is what I did in my screen config, but I can't see any way to tell tmux to take a raw escape sequence instead of logical key names. I'd prefer that so I don't have to remember (or research) magical incantations the next time I start from scratch on OS X. So if you know how, let me know.
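
If you want to see exactly which bytes your terminal sends for a key, run cat -v and press the key: control characters print visibly, with ESC rendered as ^[. Here's the same trick non-interactively, feeding it the xterm-style C-Left sequence as an illustration:

```shell
# Interactively: run `cat -v`, press Ctrl-Left, then Ctrl-D to quit.
# ESC (0x1b) renders as ^[, so you can read the sequence off the screen.
# Non-interactively, rendering the xterm-style C-Left sequence:
seq=$(printf '\033[1;5D' | cat -v)
echo "$seq"    # prints ^[[1;5D
```

Compare that against what `infocmp $TERM` claims, and you'll know whether the terminal and terminfo agree.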


Code Reading on a Kindle

First, add this to your ~/.enscriptrc file:

Media: kindle 498 612 0 0 498 612

Now, here's a script (I call it kindlecode) to generate a pdf on stdout:

enscript -Mkindle -E -p- "$@" | ps2pdf - -

Usage is something like this:

$ kindlecode *.{c,h} > /Volumes/Kindle/documents/foo.pdf

[Photo: kindlecode in real life]


clock_getres() and clock_gettime()

In Linux kernel 2.6.33 at least, clock_getres() may not be telling you the whole story if the kernel was compiled without high-resolution timers (CONFIG_HIGH_RES_TIMERS).

clock_getres() returns {0, 999848}, which you might think means clock_gettime() has a resolution of about 1ms. But no, if you call clock_gettime() twice in rapid succession you find that it is far higher resolution than that—I usually get either 698 or 699 nanoseconds as the difference. So is clock_getres() wrong? No, not exactly. It is reporting the jiffy that limits clock_settime() and the timers. If you time clock_nanosleep() you find that even if you ask to sleep for 1ns you will sleep until the next jiffy boundary, i.e. as much as 1ms and about 0.5ms on average.

If you look at /proc/timer_list on this machine, it reports a tick resolution of 999848, but hrtimer functions are being used underneath the covers:

active timers:
#0: , hrtimer_wakeup, S:01, hrtimer_start_range_ns, smc_proxy/14652
# expires at 1322421490000000000-1322421490000050000 nsecs [in 1303778479239642688 to 1303778479239692688 nsecs]
clock 1:
.base: ffff88002820e680
.index: 1
.resolution: 999848 nsecs
.get_time: ktime_get

My guess is that clock_gettime() uses the high-resolution clocksource and gives high-resolution answers even though CONFIG_HIGH_RES_TIMERS was left unset, which is what would enable HRT for the POSIX timer and sleep functions.

A side effect of this silly situation is that gettimeofday() is also accurate (to µs), though usleep() and nanosleep(), like clock_nanosleep(), are limited to 1ms ticks.


OSX ignores ownership on external drives by default.

I had reason to copy my hard drive off and back (to reformat it as case-sensitive), and I missed one very important detail.

IMPORTANT STEP: disable "Ignore ownership on this volume".

Yeah, with everything owned by user 99, it won't even boot. You can boot into single-user mode and "chown -R root:wheel /" which will at least let it boot, then you can go into Disk Utility and repair permissions (which will actually do something useful and vital in this situation). Then chown -R your home directory. But it's still a mess to clean up, and you lose all that user and group info.


Defeating the AC_CHECK_HEADER cache

The AC_CHECK_HEADER macro caches its result, so if you want to call it again with a different CPPFLAGS it will just remember the result of the first execution.

If you want to defeat this cache, as I did, this is the pattern:

$as_unset AS_TR_SH([ac_cv_header_foo.h])

AS_TR_SH does the escaping (giving ac_cv_header_foo_h in this case), and $as_unset is the portable way to unset a shell variable in autotools.

Incidentally, if you don't restore CPPFLAGS to its original user-set value, I will hunt you down and shave your head.
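
In context, the whole dance looks something like this — a sketch for configure.ac, with foo.h standing in as a hypothetical header:

```m4
# First check under the user's CPPFLAGS (foo.h is a hypothetical example).
save_CPPFLAGS=$CPPFLAGS
AC_CHECK_HEADER([foo.h], [have_foo=yes], [have_foo=no])

# Forget the cached answer so the next check really runs.
$as_unset AS_TR_SH([ac_cv_header_foo.h])

# Check again somewhere else.
CPPFLAGS="$CPPFLAGS -I/opt/foo/include"
AC_CHECK_HEADER([foo.h], [have_foo=yes], [have_foo=no])

# Always restore the user's CPPFLAGS (see above, re: head shaving).
CPPFLAGS=$save_CPPFLAGS
```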


AC_TYPE_UINT8_T and friends

If you get errors like this:

$ autoreconf
error: possibly undefined macro: AC_TYPE_UINT8_T
      If this token and others are legitimate, please use m4_pattern_allow.
      See the Autoconf documentation.
error: possibly undefined macro: AC_TYPE_UINT16_T
error: possibly undefined macro: AC_TYPE_UINT32_T
error: possibly undefined macro: AC_TYPE_UINT64_T
error: possibly undefined macro: AC_TYPE_SSIZE_T
autoreconf: /usr/bin/autoconf failed with exit status: 1

It probably just means you have an old autoconf. These macros were introduced in autoconf 2.60. But it's probably no big deal if you have a sensible stdint.h.
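
If upgrading autoconf isn't an option, a crude stand-in (a sketch, assuming the platform's stdint.h really is sane) is to drop the macros and just check for the header:

```m4
# With autoconf >= 2.60, these just work:
AC_TYPE_UINT8_T
AC_TYPE_UINT16_T
AC_TYPE_UINT32_T
AC_TYPE_UINT64_T
AC_TYPE_SSIZE_T

# On older autoconf, verify <stdint.h> exists and use uint8_t and
# friends directly (ssize_t comes from <sys/types.h>).
AC_CHECK_HEADERS([stdint.h sys/types.h])
```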


Terminal Merge Conflict Resolution

A very important tool in the toolbox of any collaborating developer is a merge conflict resolution tool. OS X has the fantastic FileMerge, and there are various graphical tools for linux like kdiff3, but I have yet to hear of one for the terminal. There's vimdiff, but it is really not up to the task of merge conflict resolution (it doesn't handle 3-way diffs). There's probably something in emacs, just because there's always something for emacs. Emacs users, please enlighten me; I'm not above using emacs for merge-conflict resolution. It might even be the gateway drug.

It doesn't seem overly hard (at least, no harder than writing kdiff3 or FileMerge) to make an ncurses tool that will take a 3-way merge and let you efficiently choose A, B, or edit for each diff section. Can it really be that nobody has done it yet?


Encoder mailbox not found

My PVR-150 that I got used from eBay would occasionally spaz out and complain "Encoder mailbox not found" when trying to load the firmware. It happened when I first got it, then didn't happen for months, then happened again and went away with a reboot, then happened and wouldn't stop until I switched PCI slots.

Then it worked, for about an hour, and puked with "Encoder firmware dead!" or something like that. Something was dead, anyway. I took this as a sign that it wasn't a driver or IRQ problem, but a bad card. (Yes, I'm rather thick.) So I got a (supposedly) new card from eBay for just a few more dollars than the original used one cost. It has been working for several days without complaint, so I think the old one must have been a bad card.

But if you want to try your luck with this PVR-150, I'll be happy to mail it to you. Maybe it was a problem with my VIA motherboard.

And most importantly, for those of you out there searching for "encoder mailbox not found", it may just be time to get a new card. If not, try switching PCI slots.


Kill your Kids

Not literally, of course. This is programming talk; those of you who aren't programmers can let your eyes glaze over.

I wanted a script to start a bunch of little servers, then wait around for them to finish or when the user interrupts with Ctrl-C, clean up the servers instead of orphaning them. I wanted to propagate the SIGINT to the child processes. I wanted to kill the kids.

The simple way, if you just want to make sure the kids are killed and you don't care how:

sleep 300 &
# etc.
trap "kill $(echo $(jobs -p)) 2>/dev/null" EXIT

If you only want to trap SIGINT and want to make sure you send SIGINT (not SIGKILL) to the children, then you want to do something like:

trap "kill -INT $(echo $(jobs -p)) 2>/dev/null" INT

Update: I was asked by a shell scripting guru why I needed to do $(echo $(jobs -p)) and not just $(jobs -p). I intended to cover that but forgot. The reason is that $(jobs -p) has newlines and while that's not usually a problem it is in a trap statement, because it's evaluated at creation time not at run time. It also means that processes created after you create the trap wouldn't be killed. Then, he suggested a function instead. Pure brilliance. Where does he come up with these things? Here's the improved version:

function killkids() { kill $(jobs -p); }
trap "killkids" EXIT

You can still redirect stderr if you want to, but the reason I was redirecting stderr was that some of the kids may have already died (early evaluation, remember) and then kill would needlessly complain. This way, it kills all the kids that are still alive, no more, no less.
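
Putting it all together, a minimal sketch (the sleeps stand in for real servers):

```shell
# Stand-ins for the real servers:
sleep 300 &
sleep 300 &

# jobs -p is evaluated when the trap *fires*, not when it's set, so
# children started after this point are covered too.
killkids() {
    pids=$(jobs -p)
    [ -z "$pids" ] || kill $pids 2>/dev/null || true
}
trap killkids EXIT
# For Ctrl-C propagation, the INT variant works the same way:
# trap 'kill -INT $(jobs -p) 2>/dev/null' INT

# ... real work goes here, then `wait` for the children ...
```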


git GUIs

One of the nice things about git is that, thanks to its UNIXy design and its massive and ever-growing popularity, there are a lot of really nice bells and whistles, and I think we can expect to see even more. For example, GitHub.

While most git interaction is with simple commands in the terminal, it often pays to be able to get a bird's-eye view of the revision history, or what I will call the DAG. The original tool for this is gitk. Gitk is functional, but it's really, really unpleasant. It's written in Tcl/Tk—what did you expect? Some of us have higher standards for usability.

I tried out a few git GUIs and I have settled on two that I think are best of breed. The first is tig. Tig is an ncurses program, so it excels at remote operation over ssh and at quick, keyboard-driven dives into the repository without reaching for the mouse. Think of it as mutt for git. It's a fantastic program and the one I use most frequently.

I have customized my tig setup slightly:
$ cat /Users/fugalh/.tigrc
set show-rev-graph = yes
color cursor white blue
$ alias | grep tig
alias tiga='tig --all'

The second is GitX. It's a mac app in every good sense, and it's an excellent git GUI. As you can tell from the screenshot, it's a bit easier on the eyes for visualizing complicated DAGs (not that this screenshot is of a complicated DAG).

If you use GitX be sure to "Enable Terminal Usage…" so you can start it on the current repository on the terminal by typing gitx.