
Haskell is possibly too lazy for me

This is the first of several posts on the topic of Haskell’s laziness. After several weeks of playing, I’m coming to the conclusion that laziness-by-default is a hindrance rather than a virtue. Let’s start at the start, though, by trying to add some numbers together.

-- Non tail recursive; 5Mb of live objects at end.
mysum []     = 0
mysum (x:xs) = x + mysum xs
main = putStrLn $ show $ mysum $ take 100000 $ [1..]

As the comment says, this is a dumb version. It consumes 5Mb of memory because it’s not tail recursive.

Incidentally, after causing my machine to thrash several times during my experiments, I found it useful to use ‘ulimit’ to restrict the maximum heap size available to the process. Also, you can pass extra args to your Haskell app to get it to report real-time memory stats, like this:

ghc sum.hs && /bin/bash -c 'ulimit -Sv 100000; ./a.out +RTS  -Sstderr'

Anyhow, the memory blowup is easy to fix; just pass an ‘accumulator’ parameter when you do the recursive call:

-- Tail recursive, but 3.5Mb of live objects at end.
mysuma acc []     = acc
mysuma acc (x:xs) = mysuma (acc+x) xs
main = putStrLn $ show $ mysuma 0 $ take 100000 $ [1..]

Hmm, it’s now tail recursive but it still consumes 3.5Mb? This is where Haskell’s laziness makes things quite different from OCaml and other strict languages. When we pass the accumulated value, Haskell does not actually evaluate the addition prior to making the recursive call. It delays the computation until its value is actually required. So, on each recursive call, the accumulator is an ever-growing chain of unevaluated additions: “1+2”, then “1+2+3”, and so on.

We can fix this by explicitly telling haskell to evaluate the addition prior to making the call:

-- Tail recursive, with 'seq' to force immediate evaluation of addition. 
-- 40k of live objects at end.
mysumas acc []     = acc
mysumas acc (x:xs) = (acc+x) `seq` mysumas (acc+x) xs
main = putStrLn $ show $ mysumas 0 $ take 100000 $ [1..]

Finally we have a program which only consumes a tiny amount of heap space. But it took a surprising amount of effort. There’s lots more information about this situation on the Haskell wiki.
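
As an aside from me rather than part of the original experiment: instead of sprinkling ‘seq’ by hand, you can use the strict left fold foldl' from Data.List, which forces the accumulator at every step. A minimal sketch (I haven’t measured it against the memory figures quoted above):

import Data.List (foldl')

-- foldl' evaluates the accumulator before each recursive step, so no
-- chain of unevaluated additions can build up.
main :: IO ()
main = print $ foldl' (+) 0 $ take 100000 [1..]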


Sleight of Haskelly Hand (and The Appearance Of A Process)

Here’s some low-level hackery fun which revealed something I didn’t know about unix until yesterday. Yi (the emacs clone in haskell) currently implements “code updating” by persisting the application state, calling exec() to replace the program code with the latest version, and then restoring the previous state. Lots of applications do this to some extent. However, yi needs to be a bit smart because (like emacs) it can have open network connections and open file handles which also need to survive the restart but aren’t trivially persistable. For example, yi could be running subshells or irc clients.

Fortunately, this is possible! When you call exec(), existing file descriptors remain open. This is very different from starting a new process from scratch. So all we need to do is persist some information about which descriptors were doing which particular job. Then, when we start up again, we can wire up all our file handles and network connections again and carry on as if nothing had happened.

Here’s an example haskell app which shows this in action. First of all we need to import various bits:

import System.Posix.Types
import System.Posix.Process
import System.Posix.IO
import System.IO
import Network.Socket
import System( getArgs, getProgName )
import Foreign.C.Types

Next we have a “main” function which distinguishes between “the first run” and “the second run” (i.e. after re-exec’ing) by the presence of command line arguments:

main  :: IO ()
main = do
  args <- getArgs
  case args of
    [] -> firsttime
    [ file_fd, net_fd ] -> reuse (read file_fd) (read net_fd)

The first time we run, we open a network connection to http://example.com and we also open a disk file for writing. We then re-exec the current process to start over again, but also pass the disk file fd as the first command line argument, and the network socket fd as the second argument. Both are just integers:

firsttime :: IO ()
firsttime = do 
  -- Open a file, grab its fd
  Fd file_fd <- handleToFd =<< openFile "/tmp/some-file" WriteMode

  -- Open a socket, grab its fd
  socket <- socket AF_INET Stream defaultProtocol 
  addr <- inet_addr "208.77.188.166" -- example.com
  connect socket (SockAddrInet 80 addr)
  send socket "GET / HTTP/1.0\n\n"
  let net_fd = fdSocket socket

  -- rexec ourselves
  pn <- getProgName
  putStrLn $ "Now re-execing as " ++ pn ++ " " ++ show file_fd ++ " " ++ show net_fd
  executeFile ("./" ++ pn) False [ show file_fd, show net_fd ] Nothing

The second time we run, we pick up these two file descriptors and proceed to use them. In this code, we read an HTTP response from the network connection and write it to the disk file.

reuse :: CInt -> CInt -> IO ()
reuse file_fd net_fd = do
  putStrLn $ "Hello again, I've been re-execd!"

  putStrLn $ "Using fd " ++ show net_fd ++ " as a network connection"
  socket <- mkSocket net_fd AF_INET Stream defaultProtocol Connected
  msg <- recv socket 100

  putStrLn $ "Using fd " ++ show file_fd ++ " as an output file"
  h <- fdToHandle (Fd file_fd)
  hPutStrLn h $ "Got this from network: " ++ msg

  hClose h
  sClose socket  

  putStrLn "Now look in /tmp/some-file"

...and we end up with the file containing text retrieved from a network connection which was made in a previous life. It is a curious and useful technique. But I find it interesting because it made me realise that I usually think of a "unix process" as being the same thing as "an instance of grep" or "an instance of emacs". But a process can change its skin many times during its lifetime. It can "become" many different creatures by exec()ing many times, and it can keep the same file descriptors throughout. I've only ever seen exec() paired with a fork() call before, but that's just one way to use it.


Polymorphic function; a tale of two forM_’s

Tonight I spent over an hour learning that there are two functions called “forM_” in Haskell.

The first, which I was already familiar with, lives in Control.Monad. Its purpose is to take a list of values, apply some function to them all to produce a list of actions, and then run these actions one after the other.

The other one, which I did not know about, lives in Data.Foldable. It does exactly the same job, but it works on any “Foldable” data type, i.e. those which can be collapsed/folded in some way to a single summary value. Lists are foldable. Maps are also foldable (across their elements, not their keys).
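
Here’s a tiny illustrative sketch of my own (not code from the program I was actually debugging), just to show the two functions side by side:

import qualified Data.Map as Map
import qualified Data.Foldable as F
import Control.Monad (forM_)

main :: IO ()
main = do
  let xs = [1, 2, 3] :: [Int]
      m  = Map.fromList [(1, "one"), (2, "two")] :: Map.Map Int String

  -- Control.Monad.forM_ is happy with a list...
  forM_ xs print

  -- ...while Data.Foldable.forM_ accepts any Foldable, including a Map.
  -- Note that it visits the map's elements (values), not its keys.
  F.forM_ m putStrLn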

And this was the cause of my perplexification this evening; I had been storing my data in a list, and using forM_ (the monadic one, the only one I knew about!) to process it. At some point, I decided to change to storing my data in a map instead, and ran the compiler to see the errors which would point me to the bits of code I needed to fix up.

Except the compiler happily compiled everything without errors. *confusion*

Much headscratching followed, until eventually I added “-ddump-tc” to get ghc to dump out the results of its typechecking pass. This led me to the root cause: I had been picking up forM_ from Data.Foldable all along, not Control.Monad. Duh! Everything had been working fine up until then, since Foldable.forM_ and Monad.forM_ operate identically on lists. But the former also works on Data.Maps whereas the latter does not.
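
For reference, the flag just goes on the ghc command line, something like this (MyModule.hs standing in for whichever file is confusing you):

ghc -c -ddump-tc MyModule.hs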

I’m sure there’s a moral to this story. I’m just not sure what it is yet.


MethodFinder for Ruby

I was preparing a talk about Smalltalk for the boys and girls at Amazon when it occurred to me that it’s easy to implement Smalltalk’s MethodFinder in Ruby. So, without further ado, here’s a Ruby MethodFinder. Take the red pill, dynamic Neo!

Excerpt from the article:

I use lots of different programming languages, and they all seem to have different names for the same concepts. For example, string concatenation is “+” in ruby and java, “.” in perl, “,” in smalltalk and “^” in ocaml. After a while, this starts to drive you mad. I know that I want a method which takes “foo” and “bar” and returns “foobar” but I can’t remember which incantation I need to utter today.

Smalltalk has this neat thing called the MethodFinder. It lets you find methods by providing an example. If you’ve got the string “foo”, you can ask the MethodFinder to find all the methods that, when called with argument “bar”, return “foobar”. This is a very useful tool. No more scrabbling around in documentation to find the name of a method which you know exists. Stay in the red-pill world, and ask the code.

Now, ruby is basically smalltalk (without lots of the k3wl bits). So we can easily build a method finder in ruby too!

Read the full article here.


Callstacks of the future

I read a blog entry recently (but can’t find the reference now, sorry) which changed my view about callstacks. Now, these are pretty fundamental things and I’ve been aware of them for nearly twenty years. So this was a pretty remarkable change.

In my head, a callstack was a record of history. When I drop into the debugger, I look at the callstack to see “where I’ve been”.

That’s wrong. The callstack isn’t a record of history. It’s a record of the future. It tells you where your computation is going to go next. The “return addresses” on the stack are where computation will continue from in the future.

Now, it’s easy to see why my misconception has continued for so long. There’s usually not much difference between the two views. Most of the time in a C-like language, you get called from some point in a function and then (later) you resume again from just after that point. So is it really worth worrying about such a tiny difference?

Yes it is, because it gets your mind ready to understand concepts such as tail-call optimization and continuations much more easily. In the “future” view of callstacks, tail-call optimization becomes obvious: if we don’t need an activation record in the future, we don’t need it on the stack. Similarly, continuations make much more sense.
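
To make that concrete, here’s a little sketch of my own (the names countdown and countdown' are made up for illustration), thinking of each call conceptually as a stack frame:

-- Non-tail call: after the recursive call returns there is still a "1 +"
-- left to do, so this frame is part of the future and must stay live.
countdown :: Int -> Int
countdown 0 = 0
countdown n = 1 + countdown (n - 1)

-- Tail call: the recursive call is the very last thing that happens.
-- Nothing in the future refers back to this frame, so it can be dropped
-- (or reused) before the call is made.
countdown' :: Int -> Int -> Int
countdown' acc 0 = acc
countdown' acc n = countdown' (acc + 1) (n - 1)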

It’s a pretty small change in metaphor. I haven’t magically learned anything new because of it. But several bits of my knowledge now fit together in a much more satisfying way because of this switch. Whoever wrote that original blog article, thank you! 🙂

UPDATE: Rar, finally found the original blog article. Dave Herman was the man with the wisdom. 🙂