
Working on remote unix hosts

After much tweaking, I’ve found a good way of working with a set of geographically distributed unix hosts. This isn’t the most exciting topic in the world but, like your choice of keyboard, it affects you every minute you spend working at a computer. So it’s worth some attention.

Every day, I work on localhost and three other (distant) servers. I run gnome-terminal with four tabs, one per host, so I can switch between hosts with alt-1 to alt-4. Each tab has its own ‘profile’ which I use to set a slight background tint as a visual reminder of which host I’m working on, as well as to show the hostname in the tab title. Why gnome-terminal? Well, because it does unicode right and has tabs – it’s dog slow at rendering though. I tried urxvt for a while, but went back to gnome-terminal.

In each tab, I run “ssh -tt HOST screen -DR” to log in to the remote host and reconnect to my GNU screen session. This gives all of the win. Firstly, it makes it easy to start new shells on that host without the overhead of a new ssh login. When I say ‘overhead’ I mean both the time to do ssh connection negotiation (these are distant hosts) and the niggling asymmetry of ControlMaster. Using screen inside gnome-terminal effectively gives me two dimensions of tabs (and two sets of keybindings) but it works well.
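
For reference, the whole arrangement can be launched with one command along these lines (just a sketch: the profile names are invented, and the positional per-tab options are how the gnome-terminal of this vintage parses them, so check --help on yours):

gnome-terminal \
  --tab-with-profile=local \
  --tab-with-profile=host1 -e 'ssh -tt host1 screen -DR' \
  --tab-with-profile=host2 -e 'ssh -tt host2 screen -DR' \
  --tab-with-profile=host3 -e 'ssh -tt host3 screen -DR'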

The second win of screen is that if my internet connection goes down or my localhost crashes, I don’t lose my state. Any long running jobs on the remote hosts are still running just fine when I log back in and reconnect to screen. I can also connect to the screen session from different places. For example, I can leave a long job running whilst I cycle home and check on it from home.

The last piece of the puzzle is emacs. I love emacs, and the recent multitty support is just awesome. However, it’s a bit of a pain having to have my ‘actual’ emacs running on a real tty, when 99% of the time I ‘use’ it via emacsclient -t. So I recently started running emacs under detachtty, which allows you to run the main emacs process ‘headless’. (I also saw a patch to do the same thing directly in emacs.) So now I have a ‘headless’ emacs running on each host 24/7. When I want to edit something, I use ‘emacsclient -t’ to temporarily connect my current terminal to it. And when I’m done, C-x C-c disconnects from emacs but doesn’t actually kill it. So my emacs now acts like a zero-startup-time lightweight editor, but I get all the advantages of having a long-running emacs process. And I don’t have to worry about accidentally closing the window which the ‘real’ emacs is running in. Sweet.
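
In case it’s useful, the whole dance boils down to something like this (a sketch: the socket path is arbitrary, and it assumes your init file calls (server-start) so that emacsclient can find the server):

# On each host, start the headless emacs once:
detachtty /tmp/emacs.detachtty emacs -nw

# Then, whenever you want to edit something:
emacsclient -t    # attach a frame to the current terminal
                  # C-x C-c drops the frame; the headless emacs lives on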

It occurs to me that there’s a lot of duplication in this setup. Screen, detachtty and emacs have overlapping features in numerous ways: emacs and screen can both manage multiple shells, and screen and detachtty both do the ‘tty decoupling’ thing. But it’d take work to make emacs manage multiple shells as nicely as screen does, and screen-inside-gnome-terminal is easier to manage than remote-screen-inside-local-screen. I think I’ve got to a pretty sweet spot with this setup.


Squawk (simple queues using awk)

If you are easily offended, look away now …

Reliable message queues (ActiveMQ in particular) are pretty handy things. They make it a lot easier to build reliable systems which are able to survive network problems, hardware trouble and temporary weirdness. However, they always feel pretty heavyweight; suitable for “enterprise systems” but not for quick shell scripts.

Well, let’s fix that. My aim is to publish messages to, and receive messages from, an ActiveMQ broker from the unix shell with a minimum of overhead. I want to have a ‘consume’ script which reads messages from a queue and pipes them to a handler. If the handler script succeeds, the message is acknowledged and we win. If the handler script fails, the message is returned to the queue, and can be re-tried later (possibly by a different host).

STOMP is what makes this easy. It’s a ‘simple text-oriented message protocol’ which is supported directly by ActiveMQ. So we won’t need to mess around with weighty client libraries. A good start.

But we still need to write a ‘consume’ program which will speak STOMP and invoke the message handler script. There are existing STOMP bindings for perl and ruby, but I’m pitching for a pure unix solution.

In STOMP, messages are NUL-separated, which made me wonder if it’d be possible to use awk by setting its ‘record separator’ to NUL. The short answer is: yes, awk can do reliable messaging – win!
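
Here’s a quick sanity check of the idea, before any sockets get involved (this works with GNU awk; awks that aren’t 8-bit clean may not cope with the NUL):

printf 'first frame\0second frame\0' | awk 'BEGIN { RS = "\0" } { print NR ": " $0 }'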

We’ll need some network glue. Recent versions of awk have builtin network support, but I’m going to use netcat because it’s more common than bleeding-edge awks.
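
To show what the glue looks like on the wire, here’s a hand-rolled one-shot publish – a sketch, not the real ‘produce’ script. Each STOMP frame is plain text terminated by a NUL byte (depending on your netcat you may need a flag such as -q to linger before closing):

printf 'CONNECT\n\n\0SEND\ndestination:/queue/a\n\nhello from the shell\0DISCONNECT\n\n\0' \
  | nc localhost 61613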

I also want ‘consume’ to be a single file, but I don’t want to pull my hair out trying to escape everything properly. So I’ll use a bash here-document to write the awk script out to a temporary file before invoking awk. (Is there a nicer way to do this?)
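
The shape of that trick is roughly this (a sketch, not the actual script – quoting the EOF marker stops the shell expanding anything inside the awk program):

awkprog=$(mktemp)
cat > "$awkprog" <<'EOF'
BEGIN { RS = "\0" }
{ print "got a frame: " $0 }
EOF
printf 'one\0two\0' | awk -f "$awkprog"
rm -f "$awkprog"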

There’s not much more to say except here are the scripts: consume and produce.

To try it out, you’ll need to download ActiveMQ and start it up; just do ./bin/activemq and you’ll get a broker which has a STOMP listener on port 61613.

To publish to a queue, run: echo 'my message' | ./produce localhost 61613 /queue/a

To consume, first write a message handler, such as:

#!/bin/bash
# The message body arrives on stdin; the exit status tells 'consume' whether to ack.
echo "Handling a message at $(date).  Message follows:"
cat
echo '(message ends)'
exit 0

and then run: ./consume localhost 61613 /queue/a ./myhandler.

To simulate failure, change the handler to “exit 1”. The message will be returned to the queue. By default, the consumer will then immediately try again, so I added in a ‘sleep 1’ to slow things down a bit. ActiveMQ has many tweakable settings to control backoff, redelivery attempts and dead-letter queue behaviour.
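
For reference, a deliberately failing handler might look like this (a sketch along the lines just described):

#!/bin/bash
cat > /dev/null   # swallow the message body
sleep 1           # don't hammer the broker with instant redeliveries
exit 1            # non-zero exit: the message goes back on the queue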

I’m done.

If you want to learn more about awk, check out the awk book on my amazon.com bookshelf.

Y’know, come the apocalypse, the cockroaches’ programming language of choice is probably going to be awk.


RAID-1 sans disks

In preparation for a somewhat more dramatic future experiment, I’ve been trying out RAID-1 failure modes using Linux’s loopback devices to avoid having to actually buy any more real hard drives. You can simulate drive failures and failover, and easily see what the current disk contents are. It should go without saying that, if you have a real RAID system currently running, you probably don’t want to execute these commands without thinking a bit first:

# Creating and destroying disks from the safety of your own console
mkdir ~/raid; cd ~/raid

# Create two 10MB files called disk0 and disk1
for d in 0 1; do dd if=/dev/zero of=disk${d} bs=1024 count=10240; done

# Make them available as block devices using the loopback device
for d in 0 1; do sudo losetup /dev/loop$d disk$d; done

# Combine the two 'disks' into a RAID-1 mirrored block device
# Using '--build' rather than '--create' means there is no device
# specific metadata, and so the contents of the disks will be identical
sudo mdadm --build --verbose /dev/md0 --level=1 --raid-devices=2 /dev/loop[01]

# Create a filesystem on our raid device and mount it
sudo mkfs.ext3 /dev/md0 
mkdir /tmp/raidmnt
sudo mount /dev/md0 /tmp/raidmnt
sudo chown $USER /tmp/raidmnt

# The contents of both disks change in unison
md5sum disk[01]
date > /tmp/raidmnt/datefile
sync
md5sum disk[01]

# If we mark one disk as failed, disk contents diverge
sudo mdadm --fail /dev/md0 /dev/loop0
date > /tmp/raidmnt/datefile
sync
md5sum disk[01]

# Remove the failed disk and re-add it, and RAID-1 will resync
sudo mdadm --remove /dev/md0 /dev/loop0
sudo mdadm --add /dev/md0 /dev/loop0
sleep 1
md5sum disk[01]

# Add a third (unused) disk into the system to test failover
dd if=/dev/zero of=disk2 bs=1024 count=10240
sudo losetup /dev/loop2 disk2
sudo mdadm --add /dev/md0 /dev/loop2
sudo mdadm --detail /dev/md0

# When one of the original two disks fails, the new disk gets used
md5sum disk[012]
sudo mdadm --fail /dev/md0 /dev/loop0
date > /tmp/raidmnt/datefile
sync
md5sum disk[012]

# Tidy up the world
sudo umount /dev/md0
sudo mdadm -S /dev/md0
for x in /dev/loop[012]; do sudo losetup -d $x; done
rm -rf /tmp/raidmnt ~/raid

Sleight of Haskelly Hand (and The Appearance Of A Process)

Here’s some low-level hackery fun which revealed something I didn’t know about unix until yesterday. Yi (the emacs clone in Haskell) currently implements “code updating” by persisting the application state, calling exec() to replace the program code with the latest version, and then restoring the previous state. Lots of applications do this to some extent. However, yi needs to be a bit smart because (like emacs) it can have open network connections and open file handles which also need to survive the restart but aren’t trivially persistable. For example, yi could be running subshells or irc clients.

Fortunately, this is possible! When you call exec(), existing file descriptors remain open. This is very different from starting a new process from scratch. So all we need to do is persist some information about which descriptors were doing which particular job. Then, when we start up again, we can rewire all our file handles and network connections and carry on as if nothing had happened.

Here’s an example Haskell app which shows this in action. First of all we need to import various bits:

 
import System.Posix.Types
import System.Posix.Process
import System.Posix.IO
import System.IO
import Network.Socket
import System( getArgs, getProgName )
import Foreign.C.Types

Next we have a “main” function which distinguishes between “the first run” and “the second run” (i.e. after re-exec’ing) by the presence of command line arguments:

 
main :: IO ()
main = do
  args <- getArgs
  case args of
    [] -> firsttime
    [ file_fd, net_fd ] -> reuse (read file_fd) (read net_fd)

The first time we run, we open a network connection to http://example.com and we also open a disk file for writing. We then re-exec the current process to start over again, but also pass the disk file fd as the first command line argument, and the network socket fd as the second argument. Both are just integers:

 
firsttime :: IO ()
firsttime = do 
  -- Open a file, grab its fd
  Fd file_fd <- handleToFd =<< openFile "/tmp/some-file" WriteMode

  -- Open a socket, grab its fd
  socket <- socket AF_INET Stream defaultProtocol 
  addr <- inet_addr "208.77.188.166" -- example.com
  connect socket (SockAddrInet 80 addr)
  send socket "GET / HTTP/1.0\n\n"
  let net_fd = fdSocket socket

  -- rexec ourselves
  pn <- getProgName
  putStrLn $ "Now re-execing as " ++ pn ++ " " ++ show file_fd ++ " " ++ show net_fd
  executeFile ("./" ++ pn) False [ show file_fd, show net_fd ] Nothing

The second time we run, we pick up these two file descriptors and proceed to use them. In this code, we read an HTTP response from the network connection and write it to the disk file.

 
reuse :: CInt -> CInt -> IO ()
reuse file_fd net_fd = do
  putStrLn $ "Hello again, I've been re-execd!"

  putStrLn $ "Using fd " ++ show net_fd ++ " as a network connection"
  socket <- mkSocket net_fd AF_INET Stream defaultProtocol Connected
  msg <- recv socket 100

  putStrLn $ "Using fd " ++ show file_fd ++ " as an output file"
  h <- fdToHandle (Fd file_fd)
  hPutStrLn h $ "Got this from network: " ++ msg

  hClose h
  sClose socket  

  putStrLn "Now look in /tmp/some-file"

… and we end up with the file containing text retrieved from a network connection which was made in a previous life. It is a curious and useful technique. But I find it interesting because it made me realise that I usually think of a "unix process" as being the same thing as "an instance of grep" or "an instance of emacs". But a process can change its skin many times during its lifetime. It can "become" many different creatures by exec()ing many times, and it can keep the same file descriptors throughout. I've only ever seen exec() paired with a fork() call before, but that's just one way to use it.