
Re: Python packaging, dependencies, upstream facilities



On Sep 21, 2010, at 3:26 AM, Piotr Ożarowski wrote:

> [Simon McVittie, 2010-09-21]
>> On Tue, 21 Sep 2010 at 10:30:33 +0200, Piotr Ożarowski wrote:
>>> I see only one sane way to fix the problem - changing Python interpreter
>>> to recognize API from filenames, like foo.1.py foo.2.py foo.2.3.py
>>> (with `import foo <= 2` as valid syntax) and let upstream authors decide
>>> when to bump it, just like C guys do, but that's a topic for
>>> python-devel mailing list...
>> 
>> If upstreams are going to do this, surely they could do so just as effectively
>> without interpreter changes, by versioning the imported module?
> 
> but this way you cannot `import foo` anymore, you'll have to change all
> import lines (s/foo/foo2/) even if your code is not affected by API change

Because languages like python do runtime call resolution, they
cannot, and should not, be treated like compiled languages in this
respect.

In python's case, if I do

import foo

x = foo.bar()

And foo has renamed bar to baz, I won't find out until I actually
run the code that calls the missing function. With C++, there's a
function missing, and at the bare minimum, I'll fail at link
time, probably during compile.
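A minimal illustration of that runtime-only failure (the names are made up, and the module is faked with types.ModuleType so the sketch stands alone):

```python
# Hypothetical module 'foo' where upstream renamed bar() to baz().
import types

foo = types.ModuleType("foo")
foo.baz = lambda: 42  # the renamed function; bar() no longer exists

# 'import foo' would still succeed; the breakage only surfaces when
# the old name is actually called, not at import or "compile" time.
try:
    x = foo.bar()
except AttributeError as err:
    print("caught only at runtime:", err)
```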

Because of this, I think this is at least somewhat out of band for
the interpreter, and up to the integrators and consumers of the
libraries to define the requirements of a particular set of programs.

In the java world, they use maven because it handles this for them.
They create a maven spec file that says "I need libX, libY, and
libZ (v1.1)". maven, during the build, goes out and finds libX and
libY's latest versions, then finds the closest match to libZ v1.1.
These are placed in jars in the classpath for the project, and voilà,
it "just works".

Of course, java also has compile time checks, so they can actually
do this with a bit more likelihood of things working (if it compiles
it works, right? ;)

Robert is, I think, suggesting that packaging could do this as well.
Right now each python library can exist on the system once. If it
conflicts, game over.

But, if you can get the order of the library loading path right,
then this structure solves Robert's original desire:


/usr/share/pyshared/foo1
/usr/share/pyshared/foo2
/usr/share/pyshared/foo => foo2
/usr/share/pyshared/consumer-of-foo1/lib-packages/foo => /usr/share/pyshared/foo1


This would be quite easily doable in debhelper scripts, and it means
that users can do 'import foo' in their code, but integrators can
package things appropriately to override the "whatever version"
with specific versions.
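A sketch of how the loading-path trick plays out (the directories here are temporary stand-ins for the /usr/share/pyshared layout above, not real Debian paths):

```python
import os
import sys
import tempfile

# Two fake installs of 'foo': the default (API 2) and the pinned one (API 1).
root = tempfile.mkdtemp()
default_dir = os.path.join(root, "pyshared")      # stands in for /usr/share/pyshared
private_dir = os.path.join(root, "lib-packages")  # consumer-of-foo1's override dir
os.makedirs(default_dir)
os.makedirs(private_dir)

with open(os.path.join(default_dir, "foo.py"), "w") as f:
    f.write("API = 2\n")
with open(os.path.join(private_dir, "foo.py"), "w") as f:
    f.write("API = 1\n")

# The order of the library loading path decides which 'import foo' wins:
sys.path.insert(0, default_dir)
sys.path.insert(0, private_dir)  # the override is prepended, shadowing foo2

import foo
print(foo.API)  # -> 1: the pinned version, with no import lines changed
```

The consumer keeps writing plain 'import foo'; only the path setup done by the integrator differs.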

Off the top of my head, these are a few non-trivial issues to solve:


* What about instances where a dependent-library of consumer-of-foo1
also wants to 'import foo' but needs foo2? Now I have to make sure
the entire chain works with foo1. How can I do that?

* How do I efficiently and reliably prepend that lib-packages
directory only when using consumer-of-foo1?


While Robert described it as "TERRIBLE" when I suggested it the
other day, the way that pylons does this is, I think, at least
simple and understandable.

For working on the CLI, pylons simply spawns a shell that sets
PYTHONPATH/PYTHONHOME. Likewise, one is required to do the same when
running their particular pylons-based app as a web application.
This allows you to run easy_install or anything else, inside that
CLI, and the libraries are installed local to the application.
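A rough sketch of that environment-based approach (the /srv/myapp path is purely illustrative, not a real pylons convention): spawn a child interpreter whose PYTHONPATH points at an app-local library directory.

```python
import os
import subprocess
import sys

env = dict(os.environ)
env["PYTHONPATH"] = "/srv/myapp/lib-packages"  # hypothetical app-local dir

# Any interpreter started with this environment sees the app's private
# library directory on sys.path, so installs there stay local to the app.
result = subprocess.run(
    [sys.executable, "-c",
     "import sys; print('/srv/myapp/lib-packages' in sys.path)"],
    env=env, capture_output=True, text=True,
)
print(result.stdout.strip())  # -> True
```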

It's not necessarily ideal, but it shows the great lengths one must
go to in order to have multiple versions of libraries.

