C++ benchmarks: vector vs list vs deque

This post shows the results of several benchmarks I wrote to compare the performance of three C++ STL containers: vector, list and deque. The operations tested are insertion at the end of the container, insertion at random positions and removal of elements from random positions.


Last night a video on Reddit caught my attention: “Bjarne Stroustrup: Why you should avoid Linked Lists“. Basically it’s an 8-minute video in which Bjarne Stroustrup tells you not to use linked lists because they suck.

Bjarne Stroustrup hates linked lists.

After watching the video, and assuming Stroustrup was obviously right, I decided to write a few benchmarks to test how badly linked lists perform compared to other STL containers.


These are synthetic tests based on spaghetti code I wrote without considering any kind of optimization or coding/design style; they test exclusively STL containers, and I ran them only on my Linux machine.

Things could be totally different in your program running your code on a different machine.

The Benchmarks

I wrote 3 benchmarks, each run against 3 different STL containers: vector, list and deque. All the benchmarks create or destroy a varying number of objects of a simple class containing 4 ints and a few member functions.

The first one tests the performance of adding objects at the end of the container using the push_back function.

The second one tests the performance of inserting objects at random positions using the insert function.

The third one tests the performance of removing objects from random positions using the erase function.


Here are the results of running the 3 benchmarks with 1000, 5000, 10000 and 25000 objects.

The tests were executed on a 64-bit Kubuntu Linux 13.10 machine powered by an Intel i7-4770 CPU @ 3.40GHz and 8 GB of DDR3 RAM.

All times are in milliseconds; lower values are better.


Insertion at the end (push_back):

              1000   5000   10000   25000
std::vector      0      0       0       1
std::list        0      0       1       2
std::deque       0      0       0       1



Insertion at random positions (insert):

              1000   5000   10000   25000
std::vector      0      7      25     140
std::list        2     50     272    1893
std::deque       0      4      17     111



Removal from random positions (erase):

              1000   5000   10000   25000
std::vector      0      5      22     139
std::list        1     69     395    2712
std::deque       0      5      18     126

It’s easy to see the pattern here: according to these tests at least, push_back is a virtually free operation for every container, but for random insertion and removal list is by far the worst performer, while deque is the best.

Source Code

You can download the source code of these benchmarks and run them on your machine; feel free to post your results in the comments below.



  1. Micah Chambers

    You need to use push_front for std::list; that should be much faster. Also, every programmer should already know this: push_back for a vector is quite fast, but it uses more memory (up to twice the used slots, since every time it reallocates it doubles the size).

    1. Sean Hunter

      push_front and push_back are the same speed. std::list requires push_back to be constant time, so the only way to do that is to store a direct pointer to the last element.

      A vector doesn’t necessarily use more memory, because of the list’s per-node pointer overhead. The list can come out ahead for very large objects, but even then there are ways to use vectors or other sorts of contiguous storage that are better than linked lists.

      The advantage of linked lists is that you get ORDERED constant-time deletes and inserts. If your vector can be unordered, you can just swap in the last element and pop_back to do a constant-time delete.

      A lot of the time, when you want these benefits you actually want an intrusive linked list (so you can delete your object from a list without a search), so std::list is largely useless. Also, id Software largely switched to vectors when they found the overhead of searching through a vector was much less than the overhead of using linked lists in general.

      Another use for std::list is when you need iterators/pointers to not be invalidated, but there are better storage options in those cases as well.

      These benchmarks are very misleading. The linked-list numbers here are dominated by search time, whereas the real cost in practice usually comes from each element being allocated separately (which is also why push_back is slower). You’re also using vectors and deques of raw pointers, which should generally be discouraged and will warp the times. Keeping indices also skews things by being the worst possible case for lists, which is rarely encountered in practice. And a deque is not faster than a vector except in a few cases; that speed difference is largely due to testing methodology.

      So take all of this with a grain of salt.

  2. Kevin Lewis

    Yes, with a vector you will double your memory footprint right after a reallocation if the vector is not big enough. However, with a list you have pointers forward and backward, which is triple the memory.

  3. Benjohn Barnes

    How are you picking the list element to delete? If you are choosing them at random, the algorithm will iterate from the start, needing linear time, which will be horribly slow, of course.

    1. Benjohn Barnes

      Oh, and similarly for the random inserts too!

  4. Francis

    For the list, you did not benchmark insert and delete times! You are actually searching for an item, using the list as a vector, and that is where most of the time is spent. This benchmark is useless; try to understand how to use the different containers instead, which would be more useful. A list can be useful in some situations and can easily outperform a vector in some use cases. It’s a different solution for different problems.

  5. Ethan

    What about larger lists, up to a million elements?

  6. Mickey

    It is rare that a linked list can outperform a vector/array. This is because of caches and the search time (walking the list). In terms of inserting at the front or back, they have almost the same performance in a list. Ultimately, chasing a pointer to a pointer to a pointer is going to *always* be a cache miss (extremely likely anyway), and a single cache miss costs hundreds of clock cycles with current memory buses.

    When an array (a vector) is fetched, you receive the head of the array and a block of memory following it, usually across several bus fetches. This generally means that when you access a linear memory block, you receive that block and a bunch more memory that follows for free. This is how all major chip manufacturers do it. Then when you access vector[0], vector[1], vector[2], etc., you never see a cache miss and the access time is basically O(1).

    This is not even close to what happens with a linked list. If you access the first item, you get it as part of your first cache miss, just like the vector. But then things go haywire. The next element is almost guaranteed not to be in the cache. Not only that, the CPU won’t know that you want the address in the ‘next’ pointer, so it won’t even start loading it. That means that every time you advance the iterator, you are thrashing the cache and stalling the memory bus that might be servicing other threads, all for a small block of memory and a pointer (usually). This happens with every iterator++. It’s a major hit.

    Anyway, there is a great little book entitled “Memory Systems” that can help you understand how memory buses work, and then you will begin to see why the list is a horrible data structure for performance.

  7. Loup Vaillant

    One last benchmark should be performed: “Random” insertions and deletions when you already have an iterator pointing where you want.

    Say, take an iterator right in the middle of the container, *then* insert or delete a couple hundred elements right there. I predict the linked list should do better here (nearly as well as push_back(), in fact).

