
So, you want to try PyPy

Hello.

During the PyCon trip multiple people asked me how exactly they could run their stuff on PyPy to get the speedups. Now, in an ideal world, you would just swap CPython with PyPy, everything would run tons of times faster and everyone would live happily ever after. However, we don't live in an ideal world and PyPy does not speed up everything you could potentially run. Chances are that you can run your stuff quite a bit faster, but it requires quite a bit more R&D than just that. This blog post is an attempt to explain certain steps that might help. So here we go:

  • Download and install PyPy. 2.0 beta 1 or the upcoming 2.0 beta 2 would be a good candidate; it is not called a beta because of stability concerns.
  • Run your tests on PyPy. There is absolutely no need for fast software that does not work. There might be some failures. Usually they're harmless (e.g. you forgot to close the file); either fix them or at least inspect them. In short, make sure stuff works.
  • Inspect your stack. In particular, C extensions, while sometimes working, are a potential source of instability and slowness. Fortunately, since the introduction of cffi, the ecosystem of PyPy-compatible software has been growing. Things I know are written with PyPy in mind:
    • the new version of pyOpenSSL will support PyPy via cffi
    • psycopg2cffi is the most actively maintained postgres binding for PyPy, with pg8000 reported working
    • mysql has a ctypes based implementation (although a cffi-based one would be definitely better)
    • PyPy 2.0 beta 2 will come with sqlite-using-cffi
    • lxml-cffi
    • uWSGI, while working, is almost certainly not the best choice. Try tornado, twisted.web, cyclone.io, gunicorn or gevent (note: gevent support for PyPy is not quite finished; will write about it in a separate blog post, but you can't just use the main branch of gevent)
    • consult (and contribute to) pypy compatibility wiki for details (note that it's community maintained, might be out of date)
  • Have benchmarks. If you don't have benchmarks, then performance does not matter to you. Since PyPy's warm-up time is bad (and yes, we know, we're working on it), you should leave ample time for warm-ups. Five to ten seconds of continuous computation should be enough (see the sketch after this list).
  • Try them. If you get lucky, the next step might be to deploy and be happy. If you're unlucky, profile and try to isolate bottlenecks. They might be in a specific library or they might be in your code. The better you can isolate them, the higher your chances of understanding what's going on.
  • Don't take it for granted. PyPy's JIT is very good, but there is a variety of reasons that it might not work how you expect it to. A lot of times it starts off slow, but a little optimization can improve the speed as much as 10x. Since PyPy's runtime is less mature than CPython's, there is a higher chance of finding an obscure corner of the standard library that is atrociously slow.
  • Most importantly, if you run out of options and you have a reproducible example, please report it. A pypy-dev email, popping into #pypy on irc.freenode.net, or getting hold of me on twitter are good ways. You can also contact me directly at fijall at gmail.com. While it would be cool if the example were small, a lot of problems only show up on large and convoluted examples. As long as I can reproduce it on my machine or I can log in somewhere, I am usually happy to help.
  • I typically use a combination of jitviewer, valgrind and lsprofcalltree to try to guess what's going on. These tools are all useful, but use them with care. They usually require quite a bit of understanding before being useful. Also sometimes they're just plain useless and you need to write your own analysis.
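
For the benchmarking step, here is a minimal sketch of the kind of harness meant above (the workload function is made up; the point is simply to run the code continuously for several seconds before measuring, so the JIT has warmed up):

    import time

    def workload():
        # stand-in for your real computation (hypothetical)
        total = 0
        for i in range(1000000):
            total += i % 7
        return total

    def bench(fn, warmup_seconds=5.0, repeats=10):
        # run continuously for a few seconds first so the JIT can warm up
        start = time.time()
        while time.time() - start < warmup_seconds:
            fn()
        # then measure steady-state performance
        best = None
        for _ in range(repeats):
            t0 = time.time()
            fn()
            elapsed = time.time() - t0
            if best is None or elapsed < best:
                best = elapsed
        return best

    if __name__ == '__main__':
        print(bench(workload))

Running the same script under CPython and PyPy gives a rough, apples-to-apples comparison once the warm-up time is excluded.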

I hope this summary of steps to take is useful. We hear a lot of stories of people trying PyPy, most of them positive, but some of them negative. If you just post "PyPy didn't work for me" on your blog, that's cool too, but you're missing an opportunity. The reasons may vary from something serious like "this is a bad pattern for PyPy GC" to something completely hilarious like "oh, I left this sys._getframe() somewhere in my hot loops for debugging" or "I used the logging module which uses sys._getframe() all over the place".
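
To make that last category concrete, here is a hedged illustration (the functions are hypothetical) of the kind of leftover debugging helper that is cheap on CPython but can hurt badly on PyPy, because sys._getframe() forces frame objects into existence that the JIT would otherwise optimize away:

    import sys

    def hot_loop(values):
        total = 0
        for v in values:
            # leftover debugging aid: harmless on CPython, but on PyPy it
            # forces frames to be materialized on every iteration
            caller = sys._getframe(1).f_code.co_name
            total += v * v
        return total

    def hot_loop_fixed(values):
        # the same loop with the debugging helper removed
        total = 0
        for v in values:
            total += v * v
        return total

    if __name__ == '__main__':
        data = list(range(100000))
        # both return the same result, but at very different speeds on PyPy
        print(hot_loop(data) == hot_loop_fixed(data))

The logging module hits the same problem indirectly, since it calls sys._getframe() internally.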

Cheers,
fijal


Unknown wrote on 2013-03-28 09:45:

waiting for gevent's support

Anonymous wrote on 2013-03-28 13:39:

Just curious, why is uwsgi not the best choice?

Unknown wrote on 2013-03-28 21:28:

I'm also curious what are the issues with uWSGI.

Unknown wrote on 2013-03-28 22:12:

As the main uWSGI author I can only confirm the post. Embedding PyPy in C applications (not the inverse) is still hacky, and AFAIK uWSGI is the only project trying to do it. So although the combo works, it is only a proof of concept that still requires a lot of effort (both from PyPy and uWSGI) to be production-ready.

Jacob Stoner wrote on 2013-03-28 23:06:

looking forward to the post on gevent with pypy

Josell wrote on 2013-03-29 05:04:

Ruby or nothing. Sorry.

Anonymous wrote on 2013-04-02 13:05:

thanks for share...

Anonymous wrote on 2013-04-02 14:46:

will there maybe be an asm.js backend for pypy? :) that would be kind of nice. finally python in the browser.

to me it seems like asm.js will be more successful than google's native client since it is much simpler to implement and since it is a subset of javascript it already works everywhere, just slower.

Numpy status update and developer announcement

Hello, some good news!

First the update:

  • dtype support - NumPy on PyPy now supports non-native storage formats. Due to a lack of true support for longdoubles in rpython, we decided to back out the support of longdouble-as-double, which was misleading. (See the short example after this list.)
  • missing ndarray attributes - progress has been made toward supporting the complete set of attributes on ndarrays. We are progressing alphabetically, and have made it to d. Unsupported attributes and unsupported arguments to attribute calls will raise a NotImplementedError.
  • pickling support for numarray - hasn't started yet, but next on the list
  • There has been some work on exposing FFI routines in numpypy.
  • Brian Kearns has made progress in improving the numpypy namespace. The python numpypy submodules now more closely resemble their numpy counterparts. Also, translated _numpypy submodules are now more properly mapped to the numpy core c-based submodules, furthering the goal of being able to install numpy as a pure-python module with few modifications.
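
As a concrete illustration of the dtype item above, non-native storage formats are byte-swapped dtypes, such as big-endian floats on the usual little-endian machines. A quick sanity check could look like the sketch below (hypothetical; it assumes the nightly build you installed implements these particular operations):

    import numpy as np   # on a PyPy nightly this is backed by numpypy

    # '>f8' requests big-endian 64-bit floats, i.e. a non-native storage
    # format on little-endian machines
    a = np.array([1.0, 2.0, 3.0], dtype='>f8')

    print(a.dtype.byteorder)   # '>' : the data is stored byte-swapped
    print(a.sum())             # 6.0 : arithmetic works regardless of storage format
    print(a + a)               # element-wise operations work too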

And now the good news:

While our funding drive over 2012 did not reach our goal, we still managed to raise a fair amount of money in donations. So far we only managed to spend around $10 000 of it. We issued a call for additional developers, and are glad to welcome Romain Guillebert and Ronan Lamy to the numpypy team. Hopefully we will be able to report on speedier progress soon.

Cheers,
Matti Picus, Maciej Fijalkowski


cournape wrote on 2013-03-19 08:46:

Regarding long double, that's clearly something you should not waste your time on. I think the way it was implemented in numpy is not good, and I generally advise against it (the only real use I can see is if you need to interoperate with binary formats that use it, but even there, the complete platform specificity of it is a killer).

Power Cords wrote on 2013-03-20 06:15:

Joining of two additional developers is a good sign for Numpy and so we hope that they will now focus on speedier progress soon.

Py3k status update #10

This is the tenth status update about our work on the py3k branch, which we
can work on thanks to all of the people who donated to the py3k proposal.

There's been significant progress since the last update: the linux x86-32
buildbot now passes 289 out of approximately 354 modules (with 39 skips) of
CPython's regression test suite.

That means there are only 26 test module failures left! The list of major items
remaining for 3.2 compatibility is now short enough to list here, with their
related tests:

  • Tokenizer support for non-ascii identifiers
    • test_importlib
    • test_pep263
  • test_memoryview
  • multiprocessing module currently deadlocks
    • test_multiprocessing
  • Buggy handling of the new extended unpacking syntax by the compiler
    • test_unpack_ex
  • The new Global Interpreter Lock and new thread signal handling
    • test_threading
    • test_threadsignals
    • test_sys
  • Upgrade unicodedata to 6.0.0 (requires updates to the actual unicodedata
    generation script)
    • test_ucn
    • test_unicode
    • test_unicodedata
  • test_capi (currently crashes)
  • Update int's hash code to match CPython's (float's is already updated on the
    py3k-newhash branch; note that PyPy 2.x doesn't even totally match
    CPython's hashing)
    • test_decimal
    • test_fractions
    • test_numeric_tower
  • Miscellaneous:
    • test_complex
    • test_float
    • test_peepholer
    • test_range
    • test_sqlite (a new cffi based version seems to be coming)
    • test_ssl
    • test_struct
    • test_subprocess
    • test_sys_settrace
    • test_time

Additionally there are still a number of failures in PyPy's internal test
suite. These tests are usually run against untranslated versions of PyPy during
development. However, we've now begun running them against a fully translated
version of PyPy on the buildbot too (thanks to Amaury for setting this
up). This further ensures that our tests and implementation are sane.

We're getting closer to producing an initial alpha release. Before that happens
we'd like to see:

  • further test fixes
  • the results of test runs on other major platforms (e.g. linux x86-64 and osx
    seem to have some additional failures as of now)
  • some basic real world testing

Finally I'd like to thank Manuel Jacob for his various contributions over the
past month, including fixing the array and ctypes modules among other things,
and also Amaury Forgeot d'Arc for his ongoing excellent contributions.

cheers,
Phil

Ernst Sjöstrand wrote on 2013-03-05 20:47:

A chart with failing tests over time would be cool. Or, just work on fixing those tests! :-)

René Dudfield wrote on 2013-03-06 10:54:

Congrats!

Arne Babenhauserheide wrote on 2013-03-07 10:59:

That’s really, really, REALLY COOL!

Power Cords wrote on 2013-03-12 13:57:

Cool. How many errors have been fixed in current update? Is there any log available?

10 years of PyPy

From a software engineering perspective, 10 years is indistinguishable from infinity, so I don't care what happens 10 years from now -- as long as you don't blame me. :-)

- Guido van Rossum, Python creator.

10 years is indeed a long time. PyPy was created approximately 10 years ago, with the exact date being lost in the annals of the version control system. We've come a long way during those 10 years, from a "minimal Python" that was supposed to serve mostly as an educational tool, through a vehicle for academic research, to a high performance VM for Python and beyond.

Some facts from the PyPy timeline:

  • In 2007, at the end of the EU funding period, we promised the JIT was just around the corner. It turned out we misjudged it pretty badly -- the first usable PyPy was released in 2010.
  • At some point we decided to have a JavaScript backend so one could compile RPython programs to JavaScript and run them in a browser. Turned out it was a horrible idea.
  • Another option we tried was using RPython to write CPython C extensions. Again, it turned out RPython is a bad language and instead we made a fast JIT, so you don't have to write C extensions.
  • We made N attempts to use LLVM. Seriously, N is 4 or 5. They all ran into issues one way or another, but we haven't fully given up yet :-)
  • We were huge fans of ctypes at the beginning. Up to the point where we tried to make a restricted subset with static types, called rctypes for RPython. Turned out to be horrible. Twice.
  • We were very hopeful about creating a JIT generator from the beginning. But the first one failed miserably, generating too much assembler. The second failed too. The third first burned down and then failed. However, we managed to release a working JIT in 2010, against all odds.
  • Martijn Faassen used to ask us "how fast is PyPy" so we decided to name an option enabling all optimizations "--faassen". Then "--no-faassen" was naturally added too. Later we decided to grow up and renamed it to "-O2", and now "-Ojit".
  • The first time the Python interpreter successfully compiled to C, it segfaulted because the code generator used signed chars instead of unsigned chars...
  • To make it more likely to be accepted, the proposal for the EU project contained basically every feature under the sun a language could have. This proved to be annoying, because we had to actually implement all that stuff. Then we had to do a cleanup sprint where we deleted 30% of codebase and 70% of features.
  • At one sprint someone proposed a new software development methodology: 'Terminology-Driven Programming' means to pick a fancy name, then discuss what it could mean, then implement it. Examples: timeshifter, rainbow interpreter, meta-space bubble, hint annotations (all but one of these really existed).
  • There is a conspiracy theory that translation is so slow because time is stored away during it and retrieved later, when an actual program runs, to make it appear faster.

Overall, it was a really long road. However, 10 years later we are in good shape. A quick look on the immediate future: we are approaching PyPy 2.0 with stackless+JIT and cffi support, the support for Python 3 is taking shape, non-standard extensions like STM are slowly getting ready (more soon), and there are several non-Python interpreters around the corner (Hippy, Topaz and more).

Cheers,
fijal, arigo, hodgestar, cfbolz and the entire pypy team.


Anonymous wrote on 2013-02-28 22:43:

My best wishes to whole PyPy team! And thanks for all the hard work!

Anonymous wrote on 2013-02-28 23:01:

You guys rock!

Anonymous wrote on 2013-02-28 23:04:

Best blog posting - ever! Here's to another 10 pypy years and N llvm endeavours. -- rxe

Anonymous wrote on 2013-02-28 23:33:

You've done great work so far, please continue with it!!

Vanessa wrote on 2013-03-01 00:37:

Only those who dare to fail greatly can ever achieve greatly. --RFK
Congrats, guys!

Anonymous wrote on 2013-03-01 01:45:

Congratulations and thank you for the great work, looking forward to the next 10 years!

dmatos wrote on 2013-03-01 02:16:

Great work!

Anonymous wrote on 2013-03-01 06:20:

How will PyPy impact Python's future and its adoption as a preferred language?

Anonymous wrote on 2013-03-01 08:23:

indeed: congratulations and much respect for the perseverance and hard work you have put into this project over the years!

Gaëtan de Menten wrote on 2013-03-01 08:42:

First, congratulations for keeping at it for 10 years! PyPy is one of the most interesting projects I know of.

This blog post is also very interesting but by reading it I can't help but think: are all those "failures" documented somewhere in one place? It could be a very interesting read.

Or more specifically:
* Why was the JavaScript backend a horrible idea?
* Why is RPython a bad language (for writing CPython extensions)?
* What went wrong in the different attempts at using LLVM?
* What were those "70% of features" that were dropped after the EU project?

glyph wrote on 2013-03-01 09:16:

Congratulations! Here's to another 10 years!

And the JavaScript backend was a great idea - bring it back! It's certainly better than the other Python-to-JS translators out there, at least in terms of actually parsing some Python. I want Python in my browser!

kayhayen wrote on 2013-03-01 11:29:

I was and always will be impressed by PyPy. And the self-criticism of this post only furthers it. You are cool people; looking forward to meeting you again.

Anonymous wrote on 2013-03-01 12:12:

I remember 10 years ago, when I decided to learn to program... I didn't know what language to choose, and someone suggested python. It was someone I approached through a mailing list, and he was passionate explaining why python is so special.

I remember reading about it being cool but with a "performance problem". However, there were some nerds out there talking about a minimal python, that would eventually become a fast python, so I said "cool, perhaps in a few months there will be a fast python...".

I spent ten years silently following this story, and I'm happy to say "Happy birthday Pypy!".

I've never met any of you, but I feel I know you.
You showed me the value of perseverance, that every failure is one step closer to success.

Congratulations and a big THANK YOU!
Luis Gonzalez, from Buenos Aires.

Paul Jaros wrote on 2013-03-01 14:12:

PyPy is my favorite open-source project. Best of wishes for the future development.
May you find all the funding you need, become the leading STM implementation and become the de facto Python standard.

Stefane Fermigier wrote on 2013-03-01 14:34:

+1 on Gaëtan de Menten's comment.

Daniel wrote on 2013-03-01 22:06:

One more +1 on Gaëtan de Menten's comment. :)

Anonymous wrote on 2013-03-02 01:06:

You are incredible people and you do such cool stuff! Best of luck to you and keep up the great work!

Arne Babenhauserheide wrote on 2013-03-02 11:03:

Thank you for the great post - and thank you for sticking to it and finding ways to get time to make it work - including to add everything under the sun into that EU project to be able to go full-time!

You’re a great example how to really do stuff right - by actually doing it and keeping at it through every stumbling block on the way.

Happy birthday - and thank you for pypy!

Jan Brohl wrote on 2013-03-03 12:32:

+1 on Gaëtan de Menten's comment.

Anonymous wrote on 2013-03-04 14:11:

I'd also like to see the failures documented. Trying and failing is a great way to learn - but even better is to learn from others' failures.

Anonymous wrote on 2013-03-05 11:49:

Great work guys! Happy birthday PyPy!

Электроник wrote on 2013-03-10 01:34:

Thanks for making fast Python possible and creating a masterpiece in the process!
About Terminology-Driven Programming: let me guess, the only nonexistent thing is the timeshifter? The three other names make a lot of sense in the context of PyPy.

Armin Rigo wrote on 2013-03-23 16:42:

Электроник: no :-) Try again.

cppyy status update

The cppyy module provides C++ bindings for PyPy by using the reflection information extracted from C++ header files by means of the Reflex package. In order to support C++11, the goal is to move away from Reflex and instead use cling, an interactive C++ interpreter, as the backend. Cling is based on llvm's clang. The use of a real compiler under the hood has the advantage that it is now possible to cover every conceivable corner case. The disadvantage, however, is that every corner case actually has to be covered. Life is somewhat easier when calls come in from the python interpreter, as those calls have already been vetted for syntax errors and all lookups are well scoped. Furthermore, the real hard work of getting sane responses from and for C++ in an interactive environment is done in cling, not in the bindings. Nevertheless, it is proving a long road (but for that matter clang does not support all of C++11 yet), so here's a quick status update showing that good progress is being made.

The following example is on CPython, not PyPy, but moving a third (after Reflex and CINT) backend into place underneath cppyy is straightforward compared to developing the backend in the first place. Take this snippet of C++11 code (cpp11.C):

    constexpr int data_size() { return 5; }

    auto N = data_size();

    template<class L, class R>
    struct MyMath {
       static auto add(L l, R r) -> decltype(l+r) { return l + r; }
    };

    template class MyMath<int, int>;

As a practical matter, most usage of new C++11 features will live in implementations, not in declarations, and is thus never seen by the bindings. The above example is therefore somewhat contrived, but it will serve to show that these new declarations actually work. The new features used here are constexpr, auto, and decltype. Here is how you could use these from CPython, using the PyROOT package, which has more than a passing resemblance to cppyy, as one is based on the other:

    import ROOT as gbl
    gbl.gROOT.LoadMacro('cpp11.C')

    print 'N =', gbl.N
    print '1+1 =', gbl.MyMath(int, int).add(1,1)

which, when entered into a file (cpp11.py) and executed, prints the expected results:

    $ python cpp11.py
    N = 5
    1+1 = 2

In the example, the C++ code is compiled on-the-fly, rather than first generating a dictionary as is needed with Reflex. A deployment model that utilizes stored pre-compiled information is foreseen to work with larger projects, which may have to pull in headers from many places.

Work is going to continue first on C++03 on cling with CPython (about 85% of unit tests currently pass), with a bit of work on C++11 support on the side. Once fully in place, it can be brought into a new backend for cppyy, after which the remaining parts of C++11 can be fleshed out for both interpreters.

Cheers,
Wim Lavrijsen

Anonymous wrote on 2013-02-28 00:17:

How would memory management work for C++ objects which own PyPy objects? In CPython, or any similar reference counting system, a C++ class can hold only references via special smart pointers. These smart pointers don't need to be registered in any way with the outer class, since there's no need for a garbage collector to traverse from the outer object to the inner smart pointer instances.

For decent garbage collection to work, presumably one needs to be able to enumerate the PyPy objects pointed to by a C++ object. How would this work?

Wim Lavrijsen wrote on 2013-02-28 00:34:

Right now, there are no PyPy objects exposed as such, but only PyObjects through cpyext in support of the python C-API. In cppyy, cpyext is used for any interface that has a PyObject* as argument or return value. It is cpyext that takes care of marrying the ref-count API with the garbage collector.

Don't pin me down on the details, but from what I understand of cpyext, a wrapper object with the proper C layout is created, and given a life line by putting it in an internal container holding all such objects safe from the gc simply by existing. When the ref count hits zero, the life line gets removed. Object identity is preserved by finding objects in the internal container and reusing them.

PyCon Silicon Valley and San Francisco visit

Hello everyone.

We (Armin Rigo and Maciej Fijalkowski) are visiting San Francisco/Silicon Valley for PyCon and beyond. Alex Gaynor, another core PyPy dev, lives there permanently. My visiting dates are March 12-28, Armin's March 11-21. If you want us to give a talk at your company or simply want to catch up with us over dinner, please get in touch. Write to pypy-dev@python.org if you want this publicly known, or simply send me a mail at fijall@gmail.com if you don't.

Cheers,
fijal


Announcing Topaz, an RPython powered Ruby interpreter

Hello everyone

Last week, Alex Gaynor announced the first public release of Topaz, a Ruby interpreter written in RPython. This is the culmination of a part-time effort over the past 10 months to provide a Ruby interpreter that implements enough interesting constructs in Ruby to show that the RPython toolchain can produce a Ruby implementation fast enough to beat what is out there.

Disclaimer

Obviously the implementation is currently very incomplete in terms of available standard library. We are working on getting it usable. If you want to try it, grab a nightly build.

We have run some benchmarks from the Ruby benchmark suite and the metatracing VMs experiment. The preliminary results are promising, but at this point we are missing so many method implementations that most benchmarks won't run yet. So instead of performance, I'm going to talk about the high-level structure of the implementation.

Architecture

Topaz interprets a custom bytecode set. The basics are similar to Smalltalk VMs, with bytecodes for loading and storing locals and instance variables, sending messages, and stack management. Some syntactic features of Ruby, such as defining classes and modules, literal regular expressions, hashes, ranges, etc., also have their own bytecodes. The third kind of bytecode covers control flow constructs in Ruby, such as loops, exception handling, break, continue, etc.

In trying to get from Ruby source code to bytecode, we found that the easiest way to support all of the Ruby syntax is to write a custom lexer and use an RPython port of PLY (fittingly called RPly) to create the parser from the Ruby yacc grammar.

The Topaz interpreter uses an ObjectSpace (similar to how PyPy does it) to interact with the Ruby world. The object space contains all the logic for wrapping and interacting with Ruby objects from the VM. Its __init__ method sets up the core classes, initial globals, and creates the main thread (the only one right now, as we do not have threading yet).

Classes are mostly written in Python. We use ClassDef objects to define the Ruby hierarchy and attach RPython methods to Ruby via ClassDef decorators. These two points warrant a little explanation.

Hierarchies

All Ruby classes ultimately inherit from BasicObject. However, most objects are below Object (which is a direct subclass of BasicObject). This includes objects of type Fixnum, Float, Class, and Module, which may not need all of the facilities of full objects most of the time.

Most VMs treat such objects specially, using tagged pointers to represent Fixnums, for example. Other VMs (for example from the SOM Family) don't. In the latter case, the implementation hierarchy matches the language hierarchy, which means that objects like Fixnum share a representation with all other objects (e.g. they have class pointers and some kind of instance variable storage).

In Topaz, implementation hierarchy and language hierarchy are separate. The first is defined through the Python inheritance. The other is defined through the ClassDef for each Python class, where the appropriate Ruby superclass is chosen. The diagram below shows how the implementation class W_FixnumObject inherits directly from W_RootObject. Note that W_RootObject doesn't have any attrs, specifically no storage for instance variables and no map (for determining the class - we'll get to that). These attributes are instead defined on W_Object, which is what most other implementation classes inherit from. However, on the Ruby side, Fixnum correctly inherits (via Numeric and Integer) from Object.
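
Since the diagram itself is not reproduced here, the following self-contained sketch conveys the same idea. Note that ClassDef below is a made-up stand-in, not Topaz's real API; the point is only that the Python inheritance (implementation hierarchy) and the classdef chain (Ruby hierarchy) are independent:

    class ClassDef(object):
        """Tiny stand-in for Topaz's ClassDef, just to make the sketch runnable."""
        def __init__(self, name, superclassdef=None):
            self.name = name
            self.superclassdef = superclassdef

    basic_object = ClassDef("BasicObject")
    object_cdef  = ClassDef("Object", basic_object)
    numeric_cdef = ClassDef("Numeric", object_cdef)
    integer_cdef = ClassDef("Integer", numeric_cdef)

    class W_RootObject(object):
        # no attrs here: no map, no instance-variable storage
        classdef = basic_object

    class W_Object(W_RootObject):
        # full objects carry a map and storage for instance variables
        classdef = object_cdef
        def __init__(self):
            self.map = None
            self.storage = []

    class W_FixnumObject(W_RootObject):
        # implementation hierarchy: a direct, lightweight child of W_RootObject;
        # language hierarchy: the classdef says Fixnum < Integer < Numeric < Object
        classdef = ClassDef("Fixnum", integer_cdef)
        def __init__(self, intvalue):
            self.intvalue = intvalue

    # the Ruby-level ancestry is read off the classdef chain, not Python inheritance
    cdef, chain = W_FixnumObject.classdef, []
    while cdef is not None:
        chain.append(cdef.name)
        cdef = cdef.superclassdef
    print(" < ".join(chain))   # Fixnum < Integer < Numeric < Object < BasicObject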

This simple structural optimization gives a huge speed boost, but there are VMs out there that do not have it and suffer performance hits for it.

Decorators

Ruby methods can have symbols in their names that are not allowed as part of Python method names, for example !, ?, or =, so we cannot simply define Python methods and expose them to Ruby under the same name.

For defining the Ruby method name of a function, as well as argument number checking, Ruby type coercion and unwrapping of Ruby objects to their Python equivalents, we use decorators defined on ClassDef. When the ObjectSpace initializes, it builds all Ruby classes from their respective ClassDef objects. For each method in an implementation class that has a ClassDef decorator, a wrapper method is generated and exposed to Ruby. These wrappers define the name of the Ruby method, coerce Ruby arguments, and unwrap them for the Python method.

Here is a simple example:

@classdef.method("*", times="int")
def method_times(self, space, times):
    return self.strategy.mul(space, self.str_storage, times)

This defines the method * on the Ruby String class. When this is called, the first argument is converted into a Ruby Fixnum object using the appropriate coercion method, and then unwrapped into a plain Python int and passed as argument to method_times. The wrapper method also supplies the space argument.

Object Structure

Ruby objects have dynamically defined instance variables and may change their class at any time in the program (a concept called singleton class in Ruby - it allows each object to have unique behaviour). To still efficiently access instance variables, you want to avoid dictionary lookups and let the JIT know about objects of the same class that have the same instance variables. Topaz, like PyPy (which got it from Self), implements instances using maps, which transforms dictionary lookups into array accesses. See the blog post for the details.
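
To make the maps idea concrete without chasing the link, here is a heavily simplified, self-contained sketch (illustrative only, not Topaz's or PyPy's actual code). Objects that acquire the same instance variables in the same order share a map, and the map turns an attribute name into a fixed index into a flat storage list:

    class Map(object):
        def __init__(self):
            self.indexes = {}       # attribute name -> position in storage
            self.transitions = {}   # attribute name -> the Map you get after adding it

        def index_of(self, name):
            return self.indexes.get(name, -1)

        def with_attribute(self, name):
            # objects that add attributes in the same order share the resulting
            # maps, so for a given map the lookup index is effectively a constant
            if name not in self.transitions:
                new_map = Map()
                new_map.indexes = dict(self.indexes)
                new_map.indexes[name] = len(self.indexes)
                self.transitions[name] = new_map
            return self.transitions[name]

    EMPTY_MAP = Map()

    class Obj(object):
        def __init__(self):
            self.map = EMPTY_MAP
            self.storage = []

        def setivar(self, name, value):
            idx = self.map.index_of(name)
            if idx == -1:                      # new attribute: grow map and storage
                self.map = self.map.with_attribute(name)
                self.storage.append(value)
            else:
                self.storage[idx] = value      # known attribute: plain array write

        def getivar(self, name):
            idx = self.map.index_of(name)
            return self.storage[idx] if idx != -1 else None

    a, b = Obj(), Obj()
    a.setivar("@x", 1)
    b.setivar("@x", 2)
    print(a.map is b.map)    # True: same layout, so the two objects share one map
    print(a.getivar("@x"))   # 1, read with an array access instead of a dict lookup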

This is only a rough overview of the architecture. If you're interested, get in touch on #topaz on irc.freenode.net, follow the Topaz Twitter account or contribute on GitHub.

Tim Felgentreff

Shin Guey wrote on 2013-02-12 19:25:

Interesting. Although I code a lot in Python, I still quite like Ruby. Looking forward to a fast Ruby...

Unknown wrote on 2013-02-12 20:37:

Does this mean that JVM is now obsolete?

Anonymous wrote on 2013-02-13 14:36:

Don't worry. JVM will outlive you and your great-grandchildren.

smurfix wrote on 2013-02-17 09:05:

"Its __init__ method", not "It's".

CFFI 0.5

Hi all,

A short notice to tell you that CFFI 0.5 was released. This contains a number of small improvements over 0.4, but it has otherwise been quite stable for a couple of months --- no changes since January 10, apart from the usual last-minute fixes for Python 3 and for Windows.

Have fun!

Armin

Dirkjan Ochtman wrote on 2013-02-08 11:53:

Nice! I've added it to the Gentoo package repository; all the tests passed without any issues, this time.

mattip wrote on 2013-03-31 14:41:

Note that pypy uses a builtin cffi_backend which must match the cffi version. As of March 31, for instance, nightly builds work with cffi 0.6.

NumPyPy 2013 Developer Position

Introduction

Proposed herein is a part-time fellowship for developing NumPy in PyPy. The work will initially consist of 100 hours with the possibility of extension, until the funds run out. Development and improvement of PyPy's NumPyPy (as with most Open Source and Free Software) is done as a collaborative process between volunteer, paid, and academic contributors. Due to a successful funding drive but a lack of contributors willing to work directly for PyPy, we find ourselves in the enviable situation of being able to offer this position.

Background

PyPy's developers make all PyPy software available to the public without charge, under PyPy's Open Source copyright license, the permissive MIT License. PyPy's license assures that PyPy is equally available to everyone freely on terms that allow both non-commercial and commercial activity. This license allows for academics, for-profit software developers, volunteers and enthusiasts alike to collaborate together to make a better Python implementation for everyone.

NumPy support for PyPy is licensed similarly, and therefore NumPy in PyPy support can directly help researchers and developers who seek to do numeric computing but want an easier programming language to use than Fortran or C, which are typically used for these applications. Being licensed freely to the general public means that opportunities to use, improve and learn about how NumPy in PyPy works itself will be generally available to everyone.

The Need for a Part-Time Developer

The NumPy project in PyPy has seen slow but steady progress since we started working on it about a year ago. On one hand, it's actually impressive what we could deliver with the effort undertaken; on the other hand, we would like to see development accelerate.

PyPy has strict coding, testing, documentation, and review standards, which ensure excellent code quality, continually improving documentation and code test coverage, and minimal regressions. A part-time developer will be able to bring us closer to the goal of a full numpy-api implementation and speed improvements.

Work Plan

The current proposal is split into two parts:

  • Compatibility:

    This part covers the core NumPy Python API. We'll implement most NumPy APIs that are officially documented and we'll pass most of NumPy's tests that cover documented APIs and are not implementation details. Specifically, we don't plan to:

    • implement NumPy's C API
    • implement other scientific libraries, like SciPy, matplotlib or biopython
    • implement details that are otherwise agreed by consensus not to have a place in PyPy's implementation of NumPy, or that are agreed with the NumPy community to be implementation details
  • Speed:

    This part will cover significant speed improvements in the JIT that would make numeric computations faster. This includes, but is not necessarily limited to:

    • write a set of benchmarks covering various use cases (a minimal sketch follows this list)
    • teach the JIT backend (or multiple backends) how to deal with vector operations, like SSE
    • experiment with automatic parallelization using multiple threads, akin to numexpr
    • improve the JIT register allocator, which will make a difference especially for tight loops

    As with all speed improvements, it's relatively hard to predict exactly how far we will get; however, we expect the results to be within an order of magnitude of a handwritten C equivalent.
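
Purely as an illustration of the first speed item (this is not part of the proposal itself), one of those benchmarks could be as simple as the hypothetical element-wise kernel below; tight loops like this are exactly what SSE vectorization and a better register allocator should speed up:

    import time
    import numpy as np   # on PyPy this would be backed by numpypy

    def axpy(a, x, y):
        # classic element-wise multiply-add over large arrays
        return a * x + y

    def bench(n=1000000, repeats=20):
        x = np.arange(n, dtype=np.float64)
        y = np.ones(n, dtype=np.float64)
        for _ in range(3):          # let the JIT warm up before timing
            axpy(2.5, x, y)
        best = None
        for _ in range(repeats):
            t0 = time.time()
            axpy(2.5, x, y)
            elapsed = time.time() - t0
            if best is None or elapsed < best:
                best = elapsed
        return best

    if __name__ == '__main__':
        print(bench())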

Position Candidate

We would like people who are proficient in NumPy and PyPy (they don't have to be core developers of either) to step up. The developer will be selected by consensus of the PyPy core developers, in consultation with the Software Freedom Conservancy to rule out conflicts of interest. The main criterion will be past contributions to the PyPy project, but they don't have to be significant in size.

A candidate for the Developer position will demonstrate the following:

  • The ability to write clear, stable, suitable and tested code
  • The ability to understand and extend the JIT capabilities used in NumPyPy.
  • A positive presence in PyPy's online community on IRC and the mailing list.

Ideally the Developer will also:

  • Have familiarity with the infrastructure of the PyPy project (including bug tracker and buildbot).
  • Have worked to provide education or outreach on PyPy in other forums such as workshops, conferences, and user groups.

Conservancy and PyPy are excited to announce the Developer Position. Remuneration for the position will be at the rate of 60 USD per hour, through the Software Freedom Conservancy.

The PyPy community promises to provide the necessary guidance and help with the current codebase; however, we expect a successful candidate to be able to review code and incorporate external patches within two months of the starting date of the contract.

Candidates should submit their proposal (including their CV) to:

pypy-z@python.org

The deadline for this initial round of proposals is February 1, 2013.


Anonymous wrote on 2013-01-26 11:37:

I was wondering, why is PyPy so eager to support NumPy of all things? Surely there are things more interesting to a general python/pypy user base. Can someone clarify that for me?

Maciej Fijalkowski wrote on 2013-01-26 11:40:

There was a numpy fundraiser due to popular demand. Feel free to suggest a different fundraiser if you want something else. I would be willing to even do a survey.

Anonymous wrote on 2013-01-26 14:56:

The thing is, the most interesting use of Python is in science, IMHO at least. And the absolute majority of python scientific libraries use numpy as a base. So, it would be awesome to have a fast and robust numpy-compatible library running on pypy.

Armin Rigo wrote on 2013-01-26 17:28:

The deadline seems too tight: it's next Friday.

Anonymous wrote on 2013-01-26 18:31:

It's been said before but as a long time NumPy and SciPy user, please please please don't call this project NumPy. It's great for PyPy to have an nd-array lib and for sure NumPy has some of the best semantics and user API for that so by all means make it compatible, but giving it the same name just makes tremendous confusion for users. For scientific users without the C-API which allows most of the widely used scientific extensions it is simply not "numpy".

Wes Turner wrote on 2013-01-26 22:24:

@201301261931

As NumPyPy intends to implement NumPy APIs, as a non-contributor, I feel like NumPyPy is a good name.

So then the package names would be:

* https://pypi.python.org/pypi/numpy
* https://pypi.python.org/pypi/numpypy

@201301261237

IMHO, this is not the forum for discussing what sort of pony you would like?

Anonymous wrote on 2013-01-27 16:19:

FWIW I think that getting numpypy to work is hugely important for the acceptance of pypy. Simple things like using matplotlib are crucial to lots of people who aren't using much of the rest of scipy, for example.

Rahul Chaudhary wrote on 2013-01-28 01:31:

You can post it on https://jobs.pythonweekly.com/ and it will be included in Python Weekly newsletter too.

Anonymous wrote on 2013-01-30 01:36:

I am following each of your announcements with great interest.
JIT optimization of array manipulations would enormously benefit my daily work.

Even though I am trying hard to follow the discussion, I have difficulty understanding the issues at hand, and what numpypy is going to be when it is finished.

Probably I am not the only one, considering the sometimes controversial discussion.

My current understanding is this:
All python code in numpy will run much better under pypy.

The problem is the external libraries. Depending on the type, there will be different approaches.

I assume that you will re-write a large part of the c-part of numpy directly in python, and then make use of the JIT optimizer. That would be the approach for all of the algorithms that are currently written in c, but could be easily re-implemented in python.
Something like ufunc_object.c could probably be rewritten in python without a loss of speed.
Of course, even though this would still run under normal python, it would be far too slow.

Then you have external dlls, like BLAS. I assume you will call them differently (ctypes?), and not as extension modules. If you use ctypes, it will still run under normal python, maybe a bit slower.

Then you have parts that are currently written in c, but that you can neither re-implement in python, nor call as a dll. Will you re-write those in c, using a different c-api? Or re-write them, so that they can be called using ctypes?


Maybe you give a short general overview about the issues with the c-api and what you are doing?

Something like. "Currently the function numpy.dot is written as a c-extension. It makes extensive use of PyArray_GETITEM. This limits the optimizer. We are therefore completely rewriting the function in python"

What is the best approach for a user like me, who makes heavy use of numpy, but also scipy and my own extension modules, cython and f2py?

Should I preferably write future modules as dlls, so that they can be called with ctypes (or cffi or something else), instead of making extension modules?

Do you think it will be possible at all to use scipy, which makes much more use of non-python libraries, or do you think that scipy will have to be re-written?

Alendit wrote on 2013-02-09 12:09:

Just a question - the donation figures on the homepage seem to have been the same for the last 6 months or so. Are there really no donations, or aren't they updated anymore?

Py3k status update #9

This is the ninth status update about our work on the py3k branch, which we
can work on thanks to all of the people who donated to the py3k proposal.

Just a very short update on December's work: we're now passing about 223 of
approximately 355 modules of CPython's regression test suite, up from passing
194 last month.

Some brief highlights:

  • More encoding related issues were addressed. e.g. now most if not all the
    multibytecodec test modules pass.
  • Fixed some path handling issues (test_os, test_ntpath and
    test_posixpath now pass)
  • We now pass test_class, test_descr and almost test_builtin (among
    other things): these are notable as they are fairly extensive test suites of
    core aspects of the language.
  • Amaury Forgeot d'Arc continued making progress on CPyExt (thanks again!)

cheers,
Phil

Unknown wrote on 2013-01-14 10:58:

Nice! Thank you for your update!

Kevin S. Smith wrote on 2013-01-24 17:24:

The update was expected. Thank you for your update. Hope to see more.