This original sense of 'call' (deriving from the 'call number' used to organize books and other materials in physical libraries) was also responsible for the coinage of 'compiler', according to Grace Hopper: 'The reason it got called a compiler [around 1952] was that each subroutine was given a "call word", because the subroutines were in a library, and when you pull stuff out of a library you compile things. It's as simple as that.'
I think Library Science has contributed much more to modern computing than we ever realize.<p>For example, I often bring up images of card catalogs when explaining database indexing. As soon as people see the index card, and then see that there is a wooden case for looking up by Author, a separate case for looking up by Dewey Decimal number, etc., the light goes on.<p><a href="https://en.wikipedia.org/wiki/Library_catalog" rel="nofollow">https://en.wikipedia.org/wiki/Library_catalog</a>
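(A toy sketch of that analogy, mine rather than the parent's: one table of books with two separate indexes over it, just as a library keeps one catalog ordered by author and another by call number. The table and column names are made up for illustration.)<p><pre><code># Two "card catalogs" over the same shelf of books, as an illustrative sketch.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE books (title TEXT, author TEXT, call_number TEXT)")
con.executemany(
    "INSERT INTO books VALUES (?, ?, ?)",
    [("SICP", "Abelson", "005.13 ABE"), ("TAOCP", "Knuth", "005.1 KNU")],
)

# One "wooden case" ordered by author, another ordered by call number.
con.execute("CREATE INDEX books_by_author ON books (author)")
con.execute("CREATE INDEX books_by_call_number ON books (call_number)")

# Each lookup can now flip through the matching catalog instead of
# walking every shelf (i.e. scanning the whole table).
print(con.execute("SELECT title FROM books WHERE author = 'Knuth'").fetchone())</code></pre>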
I always think of the indexes in the back of books as the origin of the term in computing. The relationship to "index cards" never even occurred to me!
I’m old enough to have used (book) dictionaries and wooden case card catalogues in the local library. So when I learned about hashmaps/IDictionary a quarter century ago, that’s indeed the image that helped me grok the concept.<p>However, the metaphor isn’t that educationally helpful anymore. On more than one occasion I found myself explaining how card catalogues or even (book) dictionaries work, only to be met with the reply: “oh, so they’re basically analogue hashmaps”.
A few months ago I was asking myself, why is the "standard" width of a terminal 80 characters? I assumed it had to do with the screen size of some early PCs.<p>But nope, it's because a punch card was 80 characters wide. And the first punch cards were basically just index cards. Another hat tip to the librarians.<p>I guess this is the computing equivalent of a car being the width of two horses' asses...
Absolutely! I confess I assumed this was explicitly part of how things were taught. With the "projected" attributes in the index being what you would fit on a card. I'm surprised that so many seem to not have any mental model for how this stuff works.
Even if it did contribute more, it still contributed an absolutely minuscule amount to modern computing.
If the librarian's 'call for' meaning was indeed the one originally intended, then even in Mauchly's 1947 article you can already see slippage towards the more object-oriented or actor-oriented 'call to' meaning.
I'm Finnish, and in Finnish we translate "call" in the function context as "kutsua", which when translated back into English becomes "invite" or "summon".<p>So at least in Finnish the word "call" is taken to mean what it means in a context like "a mother called her children back inside from the yard", rather than "call" as in "Joe made a call to his friend" or "what do you call this color?".<p>Just felt like sharing.
In German, we use "aufrufen", which means "to call up" if you translate it fragment-by-fragment, and in pre-computer times would (as far as I know) only be understood as "to call somebody up by their name or number" (like a teacher asking a student to speak or get up) when used with a direct object (as it is for functions).<p>It's also separate from the verb for making a phone call, which would be "anrufen".
'Summon' implies a bit of eldritch horror in the code, which is very appropriate at times. 'Invite' could also imply it's like a demon or vampire, which also works!
In Russian it's kind of similar; the back-translation is "call by phone", "summon", "invite".
Unrelated, but if you happen to be in Helsinki, you should join the local Hacker News meetup: <a href="https://bit.ly/helsinkihn" rel="nofollow">https://bit.ly/helsinkihn</a>
[Wilkes, Wheeler, Gill](<a href="https://archive.org/details/programsforelect00wilk" rel="nofollow">https://archive.org/details/programsforelect00wilk</a>) [1951] uses the phrase “call in” to invoke a subroutine.<p>Page 31 has:<p>> … if, as a result of some error on the part of the programmer, the order Z F does not get overwritten, the machine will stop at once. This could happen if the subroutine were not called in correctly.<p>> It will be noted that a closed subroutine can be called in from any part of the program, without restriction. In particular, one subroutine can call in another subroutine.<p>See also the program on page 33.<p>The Internet Archive has the 1957 edition of the book, so I wasn’t sure if this wording had changed since the 1951 edition. I couldn’t find a paper about EDSAC from 1950ish that’s easily available to read, but [here’s a presentation with many pictures of artefacts from EDSAC’s early years](<a href="https://chiphack.org/talks/edsac-part-2.pdf" rel="nofollow">https://chiphack.org/talks/edsac-part-2.pdf</a>). It has a couple of pages from the 1950 “report on the preparation of programmes for the EDSAC and the use of the library of subroutines” which shows a subroutine listing with a comment saying “call in auxiliary sub-routine”.
Somewhat less frequently, I also hear "invoke" or "execute", which is more verbose but also more generic.<p>Incidentally, I find strange misuses of "call" ("calling a command", "calling a button") one of the more grating phrases used by ESL CS students.
Invoke comes from Latin invocō, invocāre, meaning “to call upon”. I wouldn’t view it as a misuse, but rather a shortening.
> Invoke comes from Latin invocō, invocāre, meaning “to call upon”.<p>(In the way you'd call upon a skill, not in the way you'd call upon a neighbor.)
But vocare (the voco in invoco) is how you'd call a neighbor
It's calling a person like by saying their name loudly.
Which fits nicely for calling a function - you use its skill, you don't call for a chat.
>Incidentally, I find strange misuses of "call" ("calling a command", "calling a button") one of the more grating phrases used by ESL CS students.<p>From my own experience, native speakers (who are beginners at programming) also do this. They also describe all kinds of things as "commands" that aren't.
> <i>strange misuses of "call"</i><p>My favourite (least favourite?) is using “call” with “return”. On more than one occasion I’ve heard:<p>“When we call the return keyword, the function ends.”
I remember someone in university talking about the if function (which ostensibly takes one boolean argument).
In Excel formulas everything is a function. IF, AND, OR, NOT are all functions. It is awkward and goes against what software devs are familiar with, but there are probably more people familiar with the Excel IF function than any other form. Here is an example taken from the docs:<p><pre><code>=IF(AND(A3>B2,A3<C2),TRUE,FALSE)</code></pre>
I frequently see people treating if as if it was "taking a comparison", so: if (variable == true) ...
Sounds like something Prof. John Ousterhout would say ;-) The place where this is literally accurate would be Tcl.<p>I don't know enough Smalltalk to be sure, but I seem to remember it has a similar approach of everything being an object, and I wouldn't be surprised if they'd coerced control flow into this framework somehow.<p>Also Forth comes to mind, but that would probably be a stretch.
> I don't know enough Smalltalk to be sure but I think to remember it has a similar approach of everything is an object and I wouldn't be surprised if they'd coerced control flow somehow into this framework.<p>It does. It's been discussed on HN before, even: <a href="https://news.ycombinator.com/item?id=13857174">https://news.ycombinator.com/item?id=13857174</a>
I would include the cond function from lisp, or the generalization from lambda calculus<p><pre><code> λexpr1.λexpr2.λc.((c expr1) expr2)</code></pre>
If takes two or three arguments, but never one. The condition is the one made syntactically obvious in most languages, the consequent is another required argument, and the alternative is optional.
There <i>are</i> languages in which `if` <i>is</i> a function.<p>In Tcl, `if` is called a "command".
Try implementing that in most languages and you'll run into problems.<p>In an imperative programming language with eager evaluation, i.e. where arguments are evaluated before applying the function, implementing `if` as a function will evaluate both the "then" and "else" alternatives, which will have undesirable behavior if the alternatives can have side effects.<p>In a pure but still eager functional language this can work better, if it's not possible for the alternatives to have side effects. But it's still inefficient, because you're evaluating expressions whose result will be discarded, which is just wasted computation.<p>In a lazy functional language, you can have a viable `if` function, because it will only evaluate the argument that's needed. But even in the lazy functional language Haskell, `if` is implemented as built-in syntax, for usability reasons - if the compiler understands what `if` means as opposed to treating it as an ordinary function, it can optimize better, produce better messages, etc.<p>In a language with the right kind of macros, you can define `if` as a macro. Typically in that case, its arguments might be wrapped in lambdas, by the macro, to allow them to be evaluated only as needed. But Scheme and Lisp, which have the right kind of macros, don't define `if` as a macro for similar reasons to Haskell.<p>One language in which `if` is a function is the pure lambda calculus, but no-one writes real code in that.<p>The only "major" language I can think of in which `if` is actually a function (well, a couple of methods) is Smalltalk, and in that case it works because the arguments to it are code blocks, i.e. essentially lambdas.<p>tl;dr: `if` as a function isn't practical in most languages.
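(A minimal Python sketch of the eager-vs-thunked distinction described above; my own illustration, not from the comment.)<p><pre><code># An `if` written as an ordinary function evaluates both branches under
# eager evaluation, unless the caller wraps them in thunks.

def eager_if(cond, then_val, else_val):
    # Both then_val and else_val were already evaluated before we got here.
    return then_val if cond else else_val

def lazy_if(cond, then_thunk, else_thunk):
    # Zero-argument functions ("thunks") defer evaluation until one branch
    # is actually chosen.
    return then_thunk() if cond else else_thunk()

x = 0
print(lazy_if(x != 0, lambda: 1 / x, lambda: 0))  # prints 0; 1/x never runs
# eager_if(x != 0, 1 / x, 0)  # would raise ZeroDivisionError: 1/x runs regardless</code></pre>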
I don't think Haskell needs 'if' to be a construct for compiler optimization reasons; it could be implemented easily enough with pattern matching:<p><pre><code>if' :: Bool -> a -> a -> a
if' True  x _ = x
if' False _ y = y</code></pre><p>The compiler could substitute this if it knew the first argument was a constant.<p>Maybe it was needed in early versions. Or maybe they just didn't know they wouldn't need it yet. The early versions of Haskell had pretty terrible I/O, too.
With a function version of `if`, in general the compiler needs to wrap the alternatives in closures ("thunks"), as it does with all function arguments unless optimizations make it unnecessary. That's never needed in the syntactic version. That's one significant optimization.<p>In GHC, `if` desugars to a case statement, and many optimizations flow from that. It's pretty central to the compiler's operation.<p>> Maybe it was needed in early versions. Or maybe they just didn't know they wouldn't need it yet.<p>Neither of these is true. My comment above was attempting to explain why `if` isn't implemented as a function. Haskell is a prime example of where it could have been done that way, the authors are fully aware of that, but they didn't because the arguments against doing it are strong. (Unless you're implementing a scripting-language type system where you don't care about optimization.)
A short search led to this SE post [1], which doesn't answer the "why" but says "if" is just syntactic sugar that turns into `ifThenElse`...<p>[1]: <a href="https://softwareengineering.stackexchange.com/questions/195708/why-does-haskell-have-built-in-if-then-else-instead-of-defining-it-as-a-simple" rel="nofollow">https://softwareengineering.stackexchange.com/questions/1957...</a><p>The post claims that this is done in such a basic way that if you have managed to rebind `ifThenElse`, your rebound function gets called. I didn't confirm this, but I believed it.
Some people use parentheses for the return value, to make it look like a function call:<p><pre><code> return(value);</code></pre>
Eh, "return" is just a very restricted continuation with special syntax… it's a stretch to say you "call" it, but not unjustified.
I've heard that too --- the voice in my head automatically read it in the customary thick Indian accent.
C# seems to like to use "Invoke" for things like delegates or reflected methods. Then it proceeds to use "Call Stack" in the debugger view.
On an old Nokia you follow links by pressing the call button.
I actually see the converse often with novices, referring to statements (or even entire function decls) as "commands".
"Command" is a better term for what we call "statements" in imperative programming languages. "Statement" in this context is an unfortunate historical term; except in Prolog, these "statements" don't have a truth-value, just an effect. (And in Prolog we call them "clauses" instead.)
Now that it's commonly "tap or click the button" I might be down with the next gen using "call". Anything, as long as they don't go with "broh, the button".
> ... but those of any complexity presumably ought to be in a library — that is, a set of magnetic tapes in which previously coded problems of permanent value are stored.<p>Oddly, I never thought of the term library as originating from a physical labelled and organized shelf of tapes, until now.
At <a href="https://youtu.be/DjhRRj6WYcs?t=338" rel="nofollow">https://youtu.be/DjhRRj6WYcs?t=338</a> you can see EDSAC's original linker, Margaret Hartrey, taking a library subroutine from the drawer of paper tapes. (But you should really watch the whole thing, of course!)
I've never heard of a library being called anything else - look at the common file extension .lib, for example.
I don't see .lib being all that common, but it might just be what I'm used to. `.so` or `.dll` or such sure (though to be fair, the latter does include the word library.)
It's not _that_ they are called libraries, but _why_ they are called libraries. I had assumed, like many others, that it was purely by analogy (i.e., like a desktop), and not that the term originated with a physical library of tapes.
Another use of call is in barn / folk / country dancing where a caller will call out the moves. “Swing your partner!”, “Dosey do!”, “Up and down the middle!” etc. Each of these calls describes a different algorithmic step. However, it’s unlikely this etymology has anything to do with function calling: each call modifies the <i>global state</i> of the dancers with no values returned to the caller.
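(A throwaway Python sketch of that point, mine rather than the parent's: a dance "call" as a procedure that mutates shared state and returns nothing, unlike a function call that hands a value back to its caller.)<p><pre><code># Shared "global state" of the dance floor.
dancers = {"left": ["Alice", "Bob"], "right": ["Carol", "Dan"]}

def dosey_do():
    # Every dancer moves; nothing is returned to whoever made the call.
    dancers["left"], dancers["right"] = dancers["right"], dancers["left"]

dosey_do()        # the caller gets back None; only the global state changed
print(dancers)</code></pre>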
There is also the phrase in music, "call and response" - even referencing a return value.
Not a scientific theory, but an observation. New words propagate when they "click". They are often short, and for one reason or another enable people to form mental connections and remember what they mean. They spread rapidly between people like a virus. Sometimes they need to be explained, sometimes people get it from context, but afterward people tend to remember them and use them with others, further propagating the word.<p>A fairly recent example, "salty". It's short, and kinda feels like it describes what it means (salty -> tears -> upset).<p>It sounds like "call" is similar. It's short, so easy to say for an often used technical term, and there are a couple of ways it can "feel right": calling up, calling in, summoning, invoking (as a magic spell). People hear it, it fits, and the term spreads. I doubt there were be many competing terms, because terms like "jump" would have been in use to refer to existing concepts. Also keep in mind that telephones were hot, magical technology that would have become widespread around this same time period. The idea of being able to call up someone would be at the forefront of people's brains, so contemporary programmers would likely have easily formed a mental connection/analogy between calling people and calling subroutines.
Side-note: for me, at least, "salty" isn't anything to do with tears; in my idiolect, when someone's "salty" it doesn't mean they're <i>sad</i>, it means they're <i>angry</i> or <i>offended</i> or something along those lines. The metaphor is more about how salt (in large quantities) tastes strong and sharp.<p>(Which maybe illustrates that a metaphor can succeed even when people don't agree about just what it's referring to, as you're suggesting "call" may have done.)
Off topic somewhat but where the hell did the verb 'jump' come from for video calls? I'm always being asked to jump on a call
I assume it's because you're doing something you'd rather not do to benefit somebody else, much like you'd jump on a grenade :p
I thought it's like hopping onto a bus. Then it doesn't take a lot for "hop" to change into "jump".
It seems like it has the connotation of being spontaneous and not requiring preparation.
I always thought it was just a metaphor for suddenly leaving whatever you were doing at the time. You’re doing something else, and then you <i>jump</i> on a call. You don’t mosey on over and show up on the call fifteen minutes later.
You would jump on a call before video was involved. Not even necessarily a conference call either, you could jump on the horn etc.<p>It just means to start doing something, no great mystery.
That's just a standard meaning of the word jump. You're jumping from whatever you were doing to a video call.
This was a result of Zoom's acquisition of the band House of Pain.
Maybe it's a hint that the operation is irreversible! :-)
That has nothing to do with the video aspect, but the group aspect. "Jump", "Hop" and the like are making a group call analogous to a bus ride, where people can jump on and off.
Another interesting one is in games when an effect is said to "proc", which I guess is from a procedure getting called.
Basically, yeah. It actually has its origin in MUDs, from 'spec_proc', short for special procedure.
My friends always seemed to think this one comes from "procure".
It's interesting to think of "calling" as "summoning" functions. We could also reasonably say "instantiating", "evaluating", "computing", "running", "performing" (as in COBOL), or simply "doing".<p>In Mauchly's "Preparation of Problems for EDVAC-Type Machines", quoted in part in the blog post, he writes:<p>> <i>The total number of operations for which instructions must be provided will usually be exceedingly large, so that the instruction sequence would be far in excess of the internal memory capacity. However, such an instruction sequence is never a random sequence, and can usually be synthesized from subsequences which frequently recur.</i><p>> <i>By providing the necessary subsequences, which may be utilized as often as desired, together with a master sequence directing the use of these subsequences, compact and easily set up instructions for very complex problems can be achieved.</i><p>The verbs he uses here for subroutine calls are "utilize" and "direct". Later in the paper he uses the term "subroutine" rather than "subsequence", and does say "called for" but not in reference to the subroutine invocation operation in the machine:<p>> <i>For these, magnetic tapes containing the series of orders required for the operation can be prepared once and be made available for use when called for in a particular problem. In order that such subroutines, as they can well be called, be truly general, the machine must be endowed with the ability to modify instructions, such as placing specific quantities into general subroutines. Thus is created a new set of operations which might be said to form a calculus of instructions.</i><p>Of course nowadays we do not pass arguments to subroutines by modifying their code, but index registers had not yet been invented, so every memory address referenced had to be contained in the instructions that referenced it. (This was considered one of the great benefits of keeping the program in the data memory!)<p>A little lower down he says "initiate subroutines" and "transferring control to a subroutine", and talks about linking in subroutines from a "library", as quoted in the post.<p>He never calls subroutines "functions"; I'm not sure where that usage comes from, but certainly by BASIC and LISP there were "functions" that were at least implemented by subroutines. He <i>does</i> talk about <i>mathematical</i> functions being computed by subroutines, including things like matrix multiplication:<p>> <i>If the subroutine is merely to calculate a function for a single argument, (...)</i>
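(A toy Python simulation, my own and not from Mauchly's paper, of that argument-passing style: the caller "plants" the operand address into the subroutine's own LOAD instruction before transferring control. The addresses and opcodes are invented for illustration.)<p><pre><code># Memory holds both data and the subroutine's instructions.
memory = {
    100: 7,                  # the caller's actual operand lives here
    200: ("LOAD", None),     # operand address gets planted here by the caller
    201: ("DOUBLE",),        # "subroutine" doubles the loaded value ...
    202: ("STORE", 101),     # ... and stores the result at address 101
}

def call_double(operand_addr):
    # The "call": modify the subroutine's code to point at the operand ...
    memory[200] = ("LOAD", operand_addr)
    # ... then "transfer control" by executing its instructions in order.
    acc = 0
    for pc in (200, 201, 202):
        op = memory[pc]
        if op[0] == "LOAD":
            acc = memory[op[1]]
        elif op[0] == "DOUBLE":
            acc *= 2
        elif op[0] == "STORE":
            memory[op[1]] = acc

call_double(100)
print(memory[101])  # 14</code></pre>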
<i>"He never calls subroutines "functions"; I'm not sure where that usage comes from, but certainly by BASIC and LISP there were "functions" that were at least implemented by subroutines."</i><p>I think the early BASIC's used the subroutine nomenclature for GOSUB, where there was no parameter passing or anything, just a jump that automatically remembered the place to return.<p>Functions in BASIC, as I remember it, were something quite different. I think they were merely named abbreviations for arithmetic expressions and simple one line artithmetic expressions only. They were more similar to very primitive and heavily restricted macros than to subroutines or functions.
Right, that's what Algol-58 functions were, too. I think FORTRAN also has a construct like this, but I forget.
Huh, I gosub my functions ...
GLENDOWER: I can call spirits from the vasty deep.<p>HOTSPUR: Why, so can I, or so can any man;
But will they come when you do call for them?<p>-- Henry the Fourth, Part 1
I love this sort of cs history. I’m also curious—why do we “throw” an error or “raise” an exception? Why did the for loop use “for” instead of, say, “loop”?
I'm guessing "throw" came about after someone decided to "catch" errors.<p>As for "raise", maybe exceptions should've been called objections.
It's been ages, but I think an earlier edition of Stroustrup's <i>The C++ Programming Language</i> explains that he specifically chose "throw" and "catch" because more obvious choices like "signal" were already taken by established C programs and choosing "unusual" words (like "throw" and "catch") reduced chance of collision. (C interoperability was a pretty big selling point in the early days of C++.)
The design of exception handling in C++ was inspired by ML, which used 'raise', and C++ might itself have used that word, were it not already the name of a function in the C standard library (as was 'signal'). The words 'throw' and 'catch' were introduced by Maclisp, which is how Stroustrup came to know of them. As he told Guy Steele at the second ACM History of Programming Languages (HOPL) conference in 1993, 'I also think I knew whatever names had been used in just about any language with exceptions, and "throw" and "catch" just were the ones I liked best.'
I think “raise” comes from the fact that the exception propagates “upward” through the call stack, delegating the handling of it to the next level “up.” “Throw” may have to do with the idea of not knowing what to do/how to handle an error case, so you just throw it away (or throw your hands up in frustration xD). Totally just guessing
You throw something catchable and if you fail to catch it it’ll break. Unless it’s a steel ball.<p>You raise flags or issues, which are good descriptions of an exception.
That's a great question. The first language I learned was Python, and "for i in range(10)" makes a lot of sense to me. But "for (int i = 0; i < 10; i++)" must have come first, and in that case "for" is a less obvious choice.
BASIC had the FOR-NEXT loop back in 1964.<p><pre><code>10 FOR N = 1 TO 10
20 PRINT " ";
30 NEXT N</code></pre><p>The C language was first released in 1972; it had the three-part `for` with assignment, condition, and increment parts.
This reminds me of a little bit of trivia. In very old versions of BASIC, "FORD=STOP" would be parsed as "FOR D = S TO P".<p>I found that amusing circa 1975.
In Fortran, it is a do-loop :)
FOR comes from ALGOL, in which, as far as I know, it was spelled:<p><pre><code> for p := x step d until n do</code></pre>
Algol 58 had "for i:=0(1)9". C's for loop is a more general variant.
"For" for loop statements fits with math jargon: "for every integer i in the set [1:20], ..."
“for” is short for “for each”, presumably. `for i in 1..=10` is short for “for each integer i in the range 1 to 10”.
Because, obviously, we stand out in a field of the code segment and shout the address we wish to jump to or push onto the stack. ;)
Also interesting to contrast this to <i>invocation</i> or <i>application</i> (e.g. to invoke or apply). I'm sure there are a fair few 'functional dialects' out there!
<a href="https://youtu.be/xrIjfIjssLE?t=205" rel="nofollow">https://youtu.be/xrIjfIjssLE?t=205</a><p>A good time to link this classic
I seem to remember people used to say "call it up" when asking an operator to perform a function on a computer when the result was displayed in front of the user.
Also exists: „activate“ from activation record
You “call upon” the function to perform a task, or return a value as the case may be. Just as you may call upon a servant or whatever.
The answer is in the article. You "call" functions because they are stored in libraries, like books, and like books in libraries, when you want them, you identify which one you want by specifying its "call number".
Another answer also in the article is that you call them like you call a doctor to your home, so they do something for you, or how people are “on call”.<p>The “call number” in that story comes after the “call”. Not the other way around.
I'd like to point at CALL, a CPU instruction, and its origins. I'm not familiar with this, but it could reveal more than programming languages do. The instruction has been present at least since the first Intel microprocessors and microcontrollers were designed.
I kept scrolling expecting to see this story:<p>> Dennis Ritchie encouraged modularity by telling all and sundry that function calls were really, really cheap in C. Everybody started writing small functions and modularizing. Years later we found out that function calls were still expensive on the PDP-11, and VAX code was often spending 50% of its time in the CALLS instruction. Dennis had lied to us! But it was too late; we were all hooked...<p><a href="https://www.catb.org/~esr/writings/taoup/html/modularitychapter.html" rel="nofollow">https://www.catb.org/~esr/writings/taoup/html/modularitychap...</a>
The first Intel microprocessors were the 8008 and the 4004, designed in 01971. This is 13 years after this post documents the "CALL X" statement of FORTRAN II in 01958, shortly after which (he documents) the terminology became ubiquitous. (FORTRAN I didn't have subroutines.)<p>In the Algol 58 report <a href="https://www.softwarepreservation.org/projects/ALGOL/report/Algol58_preliminary_report_CACM.pdf/" rel="nofollow">https://www.softwarepreservation.org/projects/ALGOL/report/A...</a> we have "procedures" and "functions" as types of subroutines. About invoking procedures, it says:<p>> <i>9. Procedure statements</i><p>> <i>A procedure statement serves to initiate (call for) the execution of a procedure, which is a closed and self-contained process with a fixed ordered set of input and output parameters, permanently defined by a procedure declaration. (cf. procedure declaration.)</i><p>Note that this <i>does</i> go to extra pains to include the term "call for", but <i>does not</i> use the phraseology "call a procedure". Rather, it calls <i>for</i> not the procedure itself, but <i>the execution of</i> the procedure.<p>However, it also uses the term "procedure call" to describe either the initiation or the execution of the procedure:<p>> <i>The procedure declaration defining the called procedure contains, in its heading, a string of symbols identical in form to the procedure statement, and the formal parameters occupying input and output parameter positions there give complete information concerning the admissibility of parameters used in any procedure call, (...)</i><p>Algol 58 has a different structure for defining functions rather than procedures, but those too are invoked by a "function call"—but not by "calling the function".<p>I'm not sure when the first assembly language with a "call" instruction appeared, but it might even be earlier than 01958. The Burroughs 5000 seems like it would be a promising thing to look at. But certainly many assembly languages from the time didn't; even MIX used a STJ instruction to set the return address in the return instruction in the called subroutine and then just jumped to its entry point, and the PDP-10 used PUSHJ IIRC. The 360 used BALR, branch and link register, much like RISC-V's JALR today.
The instruction comes from programming languages.