Finally we come to my hot tip for the concurrency crown. Erlang is the functional language that differentiates itself from its competition by having been around for nearly 25 years. No danger of being burned by V1.0 compiler bugs, then.
Erlang has a very different paradigm from procedural languages. For one thing, its so-called variables are WORM (Write Once Read Many), like 1990s backup media. Presumably its designers at Ericsson sought to encourage programmers to jolly well get it right first time.
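In the shell, that single-assignment rule looks like this (the variable name is my own choosing). Writing to X a second time with a different value isn't a reassignment; it is a failed pattern match:

1> X = 42.   % first write: binds X
42
2> X = 43.   % second "write": actually a pattern match, which fails
** exception error: no match of right hand side value 43

Matching X against the value it already holds, on the other hand, succeeds quietly - which is exactly what makes pattern matching and single assignment the same mechanism.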
The style of programming relies heavily on recursion and pattern matching. For example, a simple Erlang routine to compute a factorial might exploit the fact that, for example,
21! = 21 × 20!
Generalising this into code, and adding pattern matching to supply the stopping condition, we have a definition for a factorial function:
fact(0) -> 1; % Stopping condition: 0! = 1
fact(N) when N > 0 -> N * fact(N - 1).
Don't worry about the size of the intermediate results there - Erlang uses arbitrary-sized integers, so it won't fall over just because we pass the 64-bit limit! (Purists will note that this version doesn't implement proper tail recursion: the final multiplication happens after the recursive call returns, so each nesting eats a little stack.)
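For the record, the approved fix for that is an accumulator: pass the running product along, so the recursive call is the very last thing the function does and the runtime can reuse the stack frame. A sketch, with the module name and two-argument helper being my own invention:

-module(fact).
-export([fact/1]).

fact(N) when is_integer(N), N >= 0 -> fact(N, 1).

%% Acc carries the running product, so the recursive call is in
%% tail position and the whole thing runs in constant stack space.
fact(0, Acc) -> Acc;
fact(N, Acc) -> fact(N - 1, N * Acc).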
Anyway, Erlang programs are organised as lots of independent-but-lightweight processes, which communicate by sending messages to each other. There is no shared state, which means all that ghastly business with races and locks and lavatories I have been discussing just goes away, and your programs are automatically parallel without any extra work on your part.
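Here is a minimal sketch of that style (the module name and message shapes are my own invention): one process is spawned, handles a request from its mailbox, and posts the answer back to the sender. No variable is shared between the two.

-module(greeter).
-export([start/0, loop/0]).

start() ->
    Pid = spawn(fun loop/0),          % a lightweight Erlang process
    Pid ! {hello, self(), "world"},   % fire off a message...
    receive                           % ...and wait for the reply
        {ok, Reply} -> io:format("~s~n", [Reply])
    end.

loop() ->
    receive
        {hello, From, Name} ->
            From ! {ok, "Hello, " ++ Name},
            loop()                    % recurse to await the next message
    end.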
Finally, there is one particular Erlang trick which I think is the clincher - and must surely impress all who have returned to an overnight test run on a Windows box to discover that the machine has instead automatically been rebooted in order to update some security-compromised DLL or other. You can update running Erlang programs on the fly. That in my view is the feature rival systems need to top. Happy parallelising.
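The mechanism, roughly, is that a fully-qualified call such as ?MODULE:loop(State) always dispatches to the newest loaded version of the module, so a server loop written that way picks up freshly compiled code on its next turn round the loop. A bare-bones sketch (module name mine; this is not OTP's full code_change machinery):

-module(counter).
-export([start/0, loop/1]).

start() -> spawn(fun() -> loop(0) end).

loop(N) ->
    receive
        bump ->
            ?MODULE:loop(N + 1);   % fully-qualified call: jumps to the NEW
                                   % code once the module is recompiled and loaded
        {get, From} ->
            From ! {count, N},
            ?MODULE:loop(N)
    end.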
- If your double-checked locking is broken on a multi-core machine (it is), explain why this isn't observable behaviour.
- Which is trendier: to eschew mere multicore programming and dump all your hard processing onto the GPU? Or port it all to Occam and start a campaign to reinstate transputers as the Next Big Thing? Show your working. ®