I like understanding why things have the names that they do. Two stories immediately spring to mind: the first is from Michael Faraday, which I wrote about a while ago whilst the second is from Richard Feynman (at 6 mins).
But back to the topic at hand! I know pretty well what the phrase “referentially transparent” means, at least in relation to computer programs. It’s used to describe languages like Haskell where it’s always safe to replace a function call like “foo()” with its return value (e.g. 4) without changing the meaning of the program. This is not something you can do in general for languages like Java or C++, because functions such as rand() can potentially return different values each time you call them. Also, if foo() contains a printf statement, then the expressions “foo()” and “4” are different beasts, even if foo() does always return 4.
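Here’s a small Python sketch of that distinction (foo, noisy_foo and roll are made-up names, just for illustration):

```python
import random

def foo():
    # Pure: always returns 4 and does nothing else, so "foo()" can be
    # replaced by "4" anywhere without changing the program's meaning.
    return 4

def noisy_foo():
    # Also returns 4, but prints along the way -- so "noisy_foo()" and
    # "4" are different beasts: one produces output, the other doesn't.
    print("side effect!")
    return 4

def roll():
    # Like rand() in C: may return a different value on each call, so
    # you can't safely replace "roll()" with any single number.
    return random.randint(1, 6)

assert foo() + foo() == 4 + 4  # substitution is safe for the pure one
```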
So that’s the concept in question. But why give it a name like “referentially transparent”? What’s “transparent” about it? And what “references” are we talking about? (and how many metajokes could one make here?)
The term “referentially transparent” seems to come from a certain Mr Quine philosophising about natural languages. Let’s do an example, but first we need to look at two things.
Firstly, take a sentence like “Andrew is bigger than a mouse” and erase the word “Andrew” to leave a hole – i.e. “??? is bigger than a mouse”. Philosophers call this a CONTEXT. By filling the hole in the CONTEXT you get back a sentence.
Secondly, think about the city of Edinburgh. It’s a thing. There are many ways to refer to it: “Edinburgh”, “Scotland’s capital”, “the city at 56N 3W”.
Now we’re ready to understand where the phrase “referentially transparent” comes from.
The first word (“referentially”) is talking about the fact that “the Edinburgh thing” has many names. There are many ways to REFERENCE that thing.
You should read “transparent” to mean “doesn’t make a difference”. If Microsoft rewrite Excel so that it runs faster but the UI is identical then you’d say the change was “transparent” to the users. It’s “transparent” in the sense of “you can’t see it”. It DOESN’T mean “you can see all the innards”, like a transparent PC case.
Mr Quine says that a CONTEXT is “referentially transparent” if we can fill the hole with different names for the same underlying thing without affecting the meaning of the sentence.
An example should make this clearer:
- “[Edinburgh] is bigger than a mouse” is true.
- “[Scotland’s capital] is bigger than a mouse” is still true.
- “[The city at 56N 3W] is bigger than a mouse” is also true.
So, the truth/meaning of the sentence WASN’T AFFECTED by our choice of which of those three REFERENCES we used. If we want to appear posher, we say that the context “(hole) is bigger than a mouse” is REFERENTIALLY TRANSPARENT.
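You can mimic this in code by treating a context as a function that looks only at the thing, never at the name used to reach it – a rough sketch, with an illustrative stand-in for “the Edinburgh thing”:

```python
# Model "the Edinburgh thing" as one object with several names.
# (The population figure is illustrative, not a real statistic.)
edinburgh = {"population": 500_000}

references = {
    "Edinburgh": edinburgh,
    "Scotland's capital": edinburgh,
    "the city at 56N 3W": edinburgh,
}

def bigger_than_a_mouse(thing):
    # The context "??? is bigger than a mouse": it inspects only the
    # thing itself, never the name we used to reach it.
    return thing["population"] > 1

# Every reference to the same thing yields the same truth value:
truths = {bigger_than_a_mouse(thing) for thing in references.values()}
assert truths == {True}
```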
The opposite case is when your choice of name does matter. Quine describes those contexts as REFERENTIALLY OPAQUE – i.e. the choice of name matters to the meaning of the sentence. An example would be the context “??? has 9 letters”:
- “[Edinburgh] has 9 letters” is true.
- “[Scotland’s capital] has 9 letters” is false.
- “[The city at 56N 3W] has 9 letters” is false.
Therefore the context “??? has 9 letters” does change meaning depending on which name/reference we use. Why? It’s because the sentence is now discussing the name itself rather than the thing.
Actually, that’s not a great example because it’s not very clear what I meant by it – ie. in what ‘sense’ I used the words. If I was twisted, I could claim I intended to say that there was a sculpture exhibit in town which contained nine depictions of alphabet letters – in which case all three sentences would mean the same thing. English is too ambiguous! But the example is sufficient for the purpose of this blog posting.
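For what it’s worth, the opaque case falls straight out if you write the context as a function on the name (a string) rather than on the thing – a rough Python sketch:

```python
def has_nine_letters(name):
    # An OPAQUE context: it inspects the NAME itself rather than the
    # thing it refers to. (Rough letter count: spaces and apostrophes
    # are stripped, digits are counted as-is.)
    return len(name.replace(" ", "").replace("'", "")) == 9

# Different names for the same city now give different answers:
assert has_nine_letters("Edinburgh")
assert not has_nine_letters("Scotland's capital")
assert not has_nine_letters("the city at 56N 3W")
```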
Now let’s get back to programming languages and see if we can make some connection between what “referentially transparent” means in Haskell and what we’ve just seen it means in natural language.
Errm, actually, maybe that’ll wait for another post. I need to get a clearer understanding of why I believe Haskell is referentially transparent in the above sense, without putting the cart before the horse. Eek, what if Simon Peyton-Jones has been lying to us all along!? I mean, I can’t think of any way to break the RT rules, but that’s very different from a ground-up proof.
5 replies on “Why do they call it: Referentially transparent”
I don’t know if this is in any sense useful or insightful or just stupid, but anyway, here’s how I think of it: the special thing about, say, the typed lambda calculus (and hence Haskell) is that its types completely coincide with its notion of computation; the type “integer” means (just, completely) “a computation that results in an integer”, and you need to use a different type (i.e. a monadic one) for a computation that does something more subtle. To me that’s where the notion of transparency (or else “substitutability”) comes from: in these languages, the type tells you everything interesting about the computation, and so if you replace a computational value with an equal one (therefore of the same type), you’re guaranteed to have not changed anything interesting.
Hence the basic reason that “languages like Java and C++” don’t have this property is that their types aren’t strong enough to capture everything about a computation; values only tell part of the story — state in these languages, for example, proceeds entirely outside of the remit of the type system — and the comparisons between values in these languages have that blindness baked in, so it’s a category error to look at “foo()” and say that’s the same as “42” in a computational sense. Those values don’t contain enough information to decide whether they’re equal and therefore substitutable.
Erm, I’m not sure I’ve said anything here, really! I mention it only because you managed to write a whole blog post about Haskell without saying anything about types, which is an astonishing piece of cunning.
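The point above about values not containing enough information can be sketched in plain Python (pure_42 and effectful_42 are made-up names): two calls can compare equal while one of them quietly changes state the comparison never sees.

```python
calls = {"count": 0}

def pure_42():
    # Nothing happens beyond producing the value.
    return 42

def effectful_42():
    # Same return value, but it also mutates state that lives entirely
    # outside what the "type" (an int result) can describe.
    calls["count"] += 1
    return 42

# The two results compare equal...
assert pure_42() == effectful_42()
# ...but the comparison is blind to the hidden state change:
assert calls["count"] == 1
```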
Interesting article. Until now I didn’t understand why they chose such a name as ‘referentially transparent’ either 🙂
Well, Quine was talking about natural language and its relation to philosophy, but he also wrote about logic and philosophy. In particular, your “contexts” are functions, in the mathematical sense. (Note that if “x is blah” and “y is blah” is the same sentence, then x is y, as a string of symbols.) Equivalently, they’re formulae/proofs, in a logic of some kind. You sort of skimmed over that part, but acknowledged it — the semantics of interpretation (the part where you check if the sentence you constructed from a context and value “is true”) remain the same under substitution of equivalents. That’s it. This is the connection.
Functional programming languages have a different model of interpretation, but it’s not so different from a proof language. The biggest difference is the mechanism used to quantify over objects by property. Strongly typed functional languages primarily use types to categorize objects. Logic languages sort objects by the predicates they satisfy. And these approaches are pretty obviously equivalent. A type represents a set of objects that satisfy some predicates/proofs/functions. It is the extension of a predicate.
Heck, both of these approaches are equivalent to constructive set theory, under the same form of trivial transformation.
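A tiny Python sketch of sorting objects by the predicate they satisfy, rather than by a declared type (is_even_int is a made-up example):

```python
def is_even_int(x):
    # A predicate picking out "objects with a property" -- the
    # predicate view of what a type like "even integer" would name.
    return isinstance(x, int) and x % 2 == 0

# Sorting a mixed bag of objects by the predicate they satisfy:
evens = [x for x in [1, 2, "3", 4.0, 6] if is_even_int(x)]
assert evens == [2, 6]
```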
Hurray for this blog entry. *like button*
Very clear and honest post. Well done! The subject of “referential transparency” is quite complex, but the woolliness with which people usually talk about it doesn’t help anyone understand it any better. Thanks for clarifying the issues.