Here’s some low-level hackery fun which revealed something I didn’t know about unix until yesterday. Yi (the emacs clone in haskell) currently implements “code updating” by persisting the application state, calling exec() to replace the program code with the latest version, and then restoring the previous state. Lots of applications do this to some extent. However, yi needs to be a bit smarter because (like emacs) it can have open network connections and open file handles which also need to survive the restart but aren’t trivially persistable. For example, yi could be running subshells or irc clients.
Fortunately, this is possible! When you call exec(), existing file descriptors remain open. This is very different from starting a new process from scratch. So all we need to do is persist some information about which descriptors were doing which particular job. Then, when we start up again, we can wire up all our file handles and network connections and carry on as if nothing has happened.
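One caveat worth noting (my addition, not something the example below needs): a descriptor only crosses an exec() boundary if its close-on-exec flag is clear. Descriptors you open() yourself have it clear by default, but some libraries set it, so it can be worth clearing explicitly. A minimal sketch using System.Posix.IO — the helper name `ensureSurvivesExec` is my own invention:

```haskell
import System.IO (openFile, IOMode(WriteMode))
import System.Posix.IO (handleToFd, setFdOption, FdOption(CloseOnExec))
import System.Posix.Types (Fd)

-- Hypothetical helper: clear the close-on-exec flag so the
-- descriptor stays open across a later exec() call.
ensureSurvivesExec :: Fd -> IO ()
ensureSurvivesExec fd = setFdOption fd CloseOnExec False

main :: IO ()
main = do
    fd <- handleToFd =<< openFile "/dev/null" WriteMode
    ensureSurvivesExec fd
    putStrLn "fd will survive exec()"
```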
Here’s an example haskell app which shows this in action. First of all we need to import various bits:
```haskell
import System.Posix.Types
import System.Posix.Process
import System.Posix.IO
import System.IO
import Network.Socket
import System ( getArgs, getProgName )
import Foreign.C.Types
```
Next we have a “main” function which distinguishes between “the first run” and “the second run” (i.e. after re-exec’ing) by the presence of command line arguments:
```haskell
main :: IO ()
main = do
    args <- getArgs
    case args of
        []                  -> firsttime
        [ file_fd, net_fd ] -> reuse (read file_fd) (read net_fd)
```
The first time we run, we open a network connection to http://example.com and we also open a disk file for writing. We then re-exec the current process to start over again, but also pass the disk file fd as the first command line argument, and the network socket fd as the second argument. Both are just integers:
```haskell
firsttime :: IO ()
firsttime = do
    -- Open a file, grab its fd
    Fd file_fd <- handleToFd =<< openFile "/tmp/some-file" WriteMode

    -- Open a socket, grab its fd
    sock <- socket AF_INET Stream defaultProtocol
    addr <- inet_addr "220.127.116.11" -- example.com
    connect sock (SockAddrInet 80 addr)
    send sock "GET / HTTP/1.0\n\n"
    let net_fd = fdSocket sock

    -- rexec ourselves
    pn <- getProgName
    putStrLn $ "Now re-execing as " ++ pn ++ " " ++ show file_fd ++ " " ++ show net_fd
    executeFile ("./" ++ pn) False [ show file_fd, show net_fd ] Nothing
```
The second time we run, we pick up these two file descriptors and proceed to use them. In this code, we read an HTTP response from the network connection and write it to the disk file.
```haskell
reuse :: CInt -> CInt -> IO ()
reuse file_fd net_fd = do
    putStrLn "Hello again, I've been re-exec'd!"

    putStrLn $ "Using fd " ++ show net_fd ++ " as a network connection"
    sock <- mkSocket net_fd AF_INET Stream defaultProtocol Connected
    msg <- recv sock 100

    putStrLn $ "Using fd " ++ show file_fd ++ " as an output file"
    h <- fdToHandle (Fd file_fd)
    hPutStrLn h $ "Got this from network: " ++ msg

    hClose h
    sClose sock
    putStrLn "Now look in /tmp/some-file"
```
… and we end up with the file containing text retrieved from a network connection which was made in a previous life. It is a curious and useful technique. But I find it interesting because it made me realise that I usually think of a "unix process" as being the same thing as "an instance of grep" or "an instance of emacs". But a process can change its skin many times during its lifetime. It can "become" many different creatures by exec()ing many times, and it can keep the same file descriptors throughout. I'd only ever seen exec() paired with a fork() call before, but that's just one way to use it.
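For contrast, that familiar fork()/exec() pairing looks like this with the same System.Posix.Process API. This is just an illustrative sketch ("/bin/ls" and its arguments are arbitrary choices of mine): the parent keeps its identity, and only the child changes skin:

```haskell
import System.Posix.Process (forkProcess, executeFile, getProcessStatus)

main :: IO ()
main = do
    -- fork a child; only the child calls exec() and "becomes" ls.
    pid <- forkProcess $ executeFile "/bin/ls" False ["-l", "/tmp"] Nothing
    -- the parent lives on with its own descriptors, and can wait
    -- for the child to finish.
    _ <- getProcessStatus True False pid
    putStrLn "parent still here; only the child changed skin"
```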