
Re: Python 2, Python 3, Stretch & Buster



On Mon, Apr 20, 2015 at 01:33:37PM -0400, Barry Warsaw wrote:

> Personally, I think the use of dict.iteritems is way overused.  The
> performance hit is theoretical unless you've actually measured it, and even
> then I think it probably doesn't help much unless you've got huge dicts.

I tend not to take other people's personal preferences as useful data
points when choosing my coding style, so I tend to get easily irritated
by comments like yours, just as I was irritated by people on that lwn
thread saying that all the incompatibilities introduced by python 3.5
were "obscure".

I generally pick my coding style according to the official
documentation. If iteritems() is documented to iterate over a dictionary
with O(1) memory usage, and items() is documented as copying the
dictionary into a list and therefore having O(n) memory complexity, I
will use iteritems() unless I need a list as a result. If iteritems()
turns out to perform worse, I consider it a problem with the way things
are documented, and I won't be held accountable for not making
measurements.
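To illustrate, here is a minimal sketch of that pattern (the
iter_items helper name is mine, not anything standard):

```python
# Documented Python 2 semantics: d.items() builds a full list of
# (key, value) pairs -> O(n) extra memory, while d.iteritems() yields
# pairs lazily -> O(1) extra memory. This helper prefers the lazy
# variant when it exists, so the same code also runs on Python 3,
# where items() is already a lazy view.

def iter_items(d):
    """Iterate over (key, value) pairs without copying the dict."""
    try:
        return d.iteritems()   # Python 2: lazy iterator
    except AttributeError:
        return d.items()       # Python 3: lazy view object

counts = {"a": 1, "b": 2, "c": 3}
total = sum(value for _, value in iter_items(counts))
print(total)  # -> 6
```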

It is my habit to carefully follow the documentation, trying to pick
the tool that has been built precisely to address what I am doing. I
would rather hear people say "I'm sorry that things took a different
direction than expected" than hear people moan about the use of obscure
features, or the neglect of doing measurements.

> >Also, if I implement new features I am not necessarily going to test
> >them on python3 unless it somehow happens automatically.
> 
> One word: tox :)

Nice word. It is a tool that I have never used, and one that is not
available on debian stable, where the services are currently deployed.

> >Running the test suite twice is not much of an option, though, because it
> >already takes a lot of time to run, and doubling the time it takes just means
> >that code is going to get tested less. Unless of course someone reviews my
> >test code and finds a way to make it run super fast: I'd certainly welcome
> >that!
> 
> The nice thing is you can run `tox -e py27` for your normal cases, and then
> `tox` before you push updates to just be sure it works for all supported
> versions.
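(For context, the workflow quoted above corresponds to a tox.ini
roughly like the following; a minimal sketch, where the env names and
the test command are my guesses about the project layout:)

```ini
# Minimal tox.ini sketch: one env per supported interpreter.
# `tox -e py27` runs only the Python 2.7 env; bare `tox` runs them all.
[tox]
envlist = py27, py34

[testenv]
deps = nose
commands = nosetests tests/
```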

The unfortunate thing is that I would have to budget several hours of my
time to learn a new tool and integrate it in my workflow, and I have
more urgent things on top of my todo list than "redo what currently
works fine using a new tool".

I would be fine if someone sent me patches to make the code *also* work
with tox, but I would not personally use it myself until I have
allocated time that I currently don't have to master it, because I
refuse to be the primary maintainer of code that uses tools that I do
not know well[1].

Knowledge transfer and migrations may be fun for some, but for others
*they are a cost*, as shown by the mere existence of this thread.


Enrico


[1] also, given the volatility of a lot of new tools in the python
    ecosystem, I have adopted the safety practice of making sure that a
    tool has been /widely/ adopted for at least a year or two before
    even bothering to look at it. I like that practice: I saved myself
    the burden of rewriting setup.py files countless times because of
    that.
-- 
GPG key: 4096R/E7AD5568 2009-05-08 Enrico Zini <enrico@enricozini.org>
