(yes, C as in the language, not c as in the speed of light). I recently looked at the Great Computer Language Shootout for some benchmarks and speed comparisons. I took this benchmark, modified it to be RPythonic enough, and compared speeds. The code is here (the only change from the Python version was to use a class instead of a tuple, so this version is actually more OO). Note that the benchmark is very likely flawed, because it favours languages with better GCs :).
So, here we go:
| Language | Time of run (for N=14) |
|---|---|
| Python version running on Python 2.5.1 | 25.5s |
| Python version running on PyPy with generational GC | 45.5s |
| Python with psyco | 20s |
| RPython translated to C using PyPy's generational GC | 0.42s |
| Haskell version compiled with GHC 6.6.1 | 1.6s |
| C version compiled with gcc 4.1.2 -O3 -fomit-frame-pointer | 0.6s |
Also worth noting: psyco with the original tuple-based version is very fast (2s).
So, PyPy's Python interpreter is about 80% slower than CPython on this benchmark (not too horrible), but RPython is 40% faster than gcc here. Cool. The result is mostly due to our GC, which also shows that manual memory management can be slower than garbage collection in some situations. Please note that this result does not mean that RPython is meant for you: it requires a completely different mindset than the one used to program in Python. Don't say you weren't warned! :-)