## Learning Python

When you come from languages like C and C++, where you are used to controlling very finely what the machine does, learning Python can be quite disconcerting, especially when it comes to performance.

Without being exactly an optimization freak, I rather like to use the best algorithms, favoring an $O(n)$ algorithm over a naïve $O(n^2)$ implementation. So you can guess my surprise when, after replacing the initial $O(n^2)$ implementation with an $O(n)$ one, I did not observe the expected speed-up.

The specifics of the algorithm don’t really matter, but the important part of my supposedly $O(n)$ implementation boils down to:

```python
i = 0
s = 0
while a:
    s += a.pop(0)
    t[i] = s
    i += 1
```

where `pop` returns a value from the list and removes it: `a_list.pop()` returns and removes the last item, `a_list.pop(0)` the first, and any index can be used to return and remove the item at that position. Still, this version did show a significant speed-up over the initial version using `sum` and slices.
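A minimal illustration of `pop`’s built-in semantics, on a throwaway list:

```python
a = [10, 20, 30, 40]

last = a.pop()    # removes and returns the last item  -> 40
first = a.pop(0)  # removes and returns the first item -> 10
mid = a.pop(1)    # removes and returns the item at index 1 -> 30

print(last, first, mid, a)  # 40 10 30 [20]
```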

So the other thing I tried was to use `pop` without an argument, that is, popping from the end (which reverses the values stored in `t`, but let’s ignore that for now):

```python
i = 0
s = 0
while a:
    s += a.pop()
    t[i] = s
    i += 1
```

Big surprise: the above is incredibly faster than the preceding version. Not 50% faster. 500× faster.

OK, so, what’s the deal?

Well, after discussing the issue with the people on Freenode’s #python channel, what I suspected was confirmed: Python’s lists are not lists, but array lists.

That is, they’re not implemented in the CPython engine as linked lists, but as arrays. This means that removing an item from the front of the list results in an $O(n)$ copy, which isn’t really fast. Appending or removing an item at the far end is (amortized) $O(1)$. Removing is easy to understand: it suffices to update the length of the array to make it appear shorter. Adding can be amortized $O(1)$ if the underlying container pre-allocates large chunks of memory when the current storage is exhausted, a strategy used by the STL vector container (something you can inspect by looking at capacity() rather than size()).
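A quick, CPython-specific way to observe this over-allocation is `sys.getsizeof`, which reports the list object’s allocated size. The exact growth pattern is an implementation detail, so the sketch below only checks that the size grows in jumps rather than on every append:

```python
import sys

# Append 32 items and record the reported size after each append.
a = []
sizes = []
for _ in range(32):
    a.append(0)
    sizes.append(sys.getsizeof(a))

# Far fewer distinct sizes than appends: capacity grows in chunks.
print(len(set(sizes)), "distinct allocation sizes for 32 appends")
```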

| operation | time (s) |
|-----------|----------|
| `pop(0)`  | 18.70820 |
| `pop()`   | 0.04867  |
| `sum()`   | 0.00035  |

The above table summarizes the performance differences between the two versions, for an array of one million elements, together with `sum` (which computes the sum over an iterable object such as a list/array). The speed of `sum`, compared to the other two, explains why the $O(n^2)$ version wasn’t significantly slower than the first $O(n)$ version.
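The ordering in the table can be roughly reproduced with `timeit`. A sketch, with a smaller input so it finishes quickly; absolute timings depend on the machine, only the relative ordering matters:

```python
import timeit

n = 50_000

def prefix_pop0(a):
    # Running sum consuming from the front: each pop(0) shifts
    # every remaining element, so the whole loop is O(n^2).
    t, s, i = [0] * len(a), 0, 0
    while a:
        s += a.pop(0)
        t[i] = s
        i += 1
    return t

def prefix_pop(a):
    # Same loop consuming from the back: pop() is amortized O(1),
    # so the whole loop is O(n) (at the cost of reversed order).
    t, s, i = [0] * len(a), 0, 0
    while a:
        s += a.pop()
        t[i] = s
        i += 1
    return t

t0 = timeit.timeit(lambda: prefix_pop0(list(range(n))), number=1)
t1 = timeit.timeit(lambda: prefix_pop(list(range(n))), number=1)
print(f"pop(0): {t0:.4f}s  pop(): {t1:.4f}s")
```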

* * *

So while pythonistas usually advise you not to worry too much about performance, it turns out that knowing small details, such as the underlying implementation of a given container, may lead to very different strategies for performance.

### 14 Responses to Learning Python

1. I’m pretty sure most “pythonistas” aren’t really able to understand this post. Or any other of your posts.

• Steven Pigeon says:

I think it’s pretty dismissive of pythonistas to say such a thing.

2. […] aren’t implemented like I’d've expected—like lists that aren’t lists, but array lists, leading to for operations you would otherwise expect to be […]

3. Paul Prescod says:

Steven: List is an abstract data structure. Array lists and linked lists are two equally valid mechanisms for implementing them. Python uses array syntax: you have array syntax right in your code snippet.

How could t[i] = s be efficient if Python used linked lists?

It’s not correct to say that Python “lists aren’t lists”. It’s correct to say that “Python implements its list data type using array lists”.

• Steven Pigeon says:

I meant linked lists. And, yes, I expected t[i] to be expensive for this reason. This being said, I didn’t mean to say that it doesn’t make sense, just that it’s not what I expected.

4. Corbin Simpson says:

Perhaps you should explain what you are actually doing. Many operations over sequences can be expressed as list comprehensions, which always operate in linear time. Slicing is another way to inform Python of your actual intent. (I have a strong feeling that you came into #python without saying anything about what you’re actually working on, and expecting people to fawn over your superior experience in C++.)

Your example could have been expressed as “s = 0; for i, item in enumerate(a): s += item; t[i] = s” but that indicates that t was preallocated, which is somewhat curious.
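For reference, the construct suggested in this comment, written out as runnable code (with `t` preallocated as the comment assumes, and a small sample list for illustration):

```python
a = [1, 2, 3, 4, 5]
t = [0] * len(a)   # preallocated, as in the original snippet

s = 0
for i, item in enumerate(a):
    s += item      # running sum, without mutating a
    t[i] = s

print(t)  # prefix sums in order: [1, 3, 6, 10, 15]
```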

http://wiki.python.org/moin/TimeComplexity might be interesting to you. These are not guarantees; implementations are allowed to be much slower (or much faster!) on any operation.

The collections.deque class is a real doubly-linked list. Use it if you want a linked list.
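A sketch of the `deque` approach suggested here: `popleft` removes from the front in $O(1)$, so the same running-sum loop consumes from the front without the shifting cost of `list.pop(0)`, and without reversing the output:

```python
from collections import deque

a = deque([1, 2, 3, 4, 5])
t = []
s = 0
while a:
    s += a.popleft()  # O(1) removal at the left end
    t.append(s)

print(t)  # prefix sums in original order: [1, 3, 6, 10, 15]
```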

• Steven Pigeon says:

You don’t know me very well if you think I expect people to get convulsions in admiration of my superior C++ knowledge. :p

The point I was trying to make (not clearly enough, it seems) is that if you’re coming from C or C++, you expect certain things and think in certain ways that have to be “unlearned” when using Python.

For example, the construct you propose is entirely alien to me. Likely it’s more efficient than what I did, for interpreter-specific reasons; specifics I don’t know yet. I’m… learning Python.

Still, comments from people like you, who take the time to point out a particular mechanism for this or that, are exactly what I want here. I’m sure that I am not the only one who will benefit from better knowledge of how Python works.

5. John Y. says:

“So while the pythonistas usually advise you to not worry too much about performance, it turns out that knowing small details such as the underlying implementation of a given container may lead to very different strategies for performance.”

If you more actively follow the Python community, you will discover that most Pythonistas do recommend knowing some implementation details, for exactly cases like this. The fundamental performance characteristics of the built-in types are perhaps not made prominent enough in “learn Python” materials, but knowledge of them is considered de rigueur for any Python user advanced enough to recognize big-O notation.

• Steven Pigeon says:

Well, that’s my experience with pythonistas; that doesn’t mean it represents all pythonistas ever. Still, on many occasions I was told not to worry about those details. Then, like the pythonistas you’re telling me about, I do worry about implementation details…

…it’s just that right now, I’m discovering them as I go.

• John Y. says:

I wonder if the people you’ve dealt with think of you as simply “Python beginner” rather than “computer scientist fluent in C/C++ who happens to be learning Python”. Or perhaps they want to warn against premature *micro-optimization* (the root-of-all-evil kind).

In any case, I hope you enjoy the language. I don’t think you will have trouble picking up the performance characteristics of the built-in data types. I suspect the bigger adjustment will be getting used to the notion that all names (and most container elements) are references, not storage areas. This has important consequences for mutability, (often inadvertent) aliasing, and parameter-passing.
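A tiny sketch of the reference semantics this comment describes: assigning a list to a second name aliases it rather than copying it, so mutations are visible through both names.

```python
a = [1, 2, 3]
b = a            # b is another name for the same list, not a copy
b.append(4)
print(a)         # [1, 2, 3, 4] -- 'a' sees the mutation

c = a[:]         # a shallow copy breaks the aliasing
c.append(5)
print(a, c)      # [1, 2, 3, 4] [1, 2, 3, 4, 5]
```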

• Steven Pigeon says:

Well… Figuring out the complexity of the different data structures (here, for example, a list which is in fact closer to std::vector than std::list) is not, in my opinion, “micro-optimisation.”

And I try to make clear that I am a beginner in Python with a C/C++ background. I always ask for the “most pythonèsque” way of doing something, rather than “if I were to write C in Python.”

• John Y. says:

Of course the complexity of data structures is not micro-optimization. My point is that all programmers, and especially C programmers, have a reputation for a certain love of all kinds of optimization. There are tons of questions on sites like Stack Overflow asking things like “which is quicker, using ‘is’ or ‘==’?” Pythonistas for sure do not want you worrying about that level of optimization. Certain other micro-optimizations are actually acceptable in some cases (for example, making local references to external functions, to speed up name lookup inside tight loops), but only *after* you’ve (1) exhausted all higher-level algorithmic improvements and (2) profiled the code to identify genuine bottlenecks. (In other words, optimization is not evil, but *premature* optimization usually is.)

But the other edge of the sword regarding experienced programmers, especially C programmers, and hopefully all computer science degree-holders, is that people should expect you to already understand the difference between an array and a linked list, and thus leverage that existing knowledge when explaining Python concepts to you. I’m sorry if that hasn’t happened for you.

If the standard documentation isn’t enough for you, you may want to try forums like Stack Overflow or comp.lang.python (also known as python-list).

6. Steven Pigeon says:

(I’m replying to the previous message, but the indentation gets stupid)

I usually start with a fairly efficient algorithm in mind, with the right data structures (which I can do easily in C/C++), so sometimes it would seem that a list should perform well when you use it as a queue, for example. Of course, had I known that it was a std::vector (an array list) and not a std::list (a linked list), the base decision would have been different. The problem is, I’m an absolute beginner in Python. I think C and C++, not Python. Not yet.

However, it is clear that not all computer science degree-holders (as you call them) have even a rough idea of the complexity of operations on data structures… t[i] can be $O(1)$ or it can be $O(\lg n)$, and certainly not every programmer I know (even the “experienced” ones) can really tell you what the order of inserting into ADT X is, exactly.

“…and thus leverage that existing knowledge when explaining Python concepts to you. I’m sorry if that hasn’t happened for you.”

It’s probably more that I haven’t asked the right question, from a pythonèsque point of view: for example, “what’s the best data structure for a queue?” rather than “why is popping the first element so slow?”

7. […] coder, to know and understand what implementation your library or language offer. I have been bitten by the Python quite a few times because I somewhat naïvely assumed that a certain data structure would have the […]