Clickhouse vs Elasticsearch vs Manticore Search Query Times With a 1.7B NYC Taxi Rides Benchmark

by Sergey Nikolaev, June 1st, 2022


The New York City (NYC) taxi rides data collection is probably the most commonly used benchmark in the area of data analytics.


It started in 2015, when Todd W. Schneider prepared the collection to analyze 1.1 billion NYC Taxi and Uber trips. Mark Litwintschik then continued by testing lots of databases and search engines on the same data.


Now we at DB Benchmarks have dockerized the preparation of the data collection to make it easier to use and have made it available as part of the most transparent and open source database benchmarks suite.

Data Collection

The data collection comprises 1.7 billion taxi and for-hire vehicle (Uber, Lyft, etc.) trips originating in New York City since 2009. Most of the raw data comes from the NYC Taxi & Limousine Commission.


Each record includes a lot of different attributes of a taxi ride:


  • Pickup date and time
  • Coordinates of pickup and dropoff
  • Pickup and dropoff location names
  • Fee and tip amount
  • Wind speed, snow depth
  • And many other fields


The collection is mostly useful for testing analytical queries, but it also includes a couple of full-text fields that can be used to test the free-text capabilities of databases.


The full list of fields and their data types is:


       "properties": {
         "vendor_id": {"type": "keyword"},
         "pickup_datetime": {"type": "date", "format": "epoch_second"},
         "dropoff_datetime": {"type": "date", "format": "epoch_second"},
         "store_and_fwd_flag": {"type": "keyword"},
         "rate_code_id": {"type": "integer"},
         "pickup_longitude": {"type": "float"},
         "pickup_latitude": {"type": "float"},
         "dropoff_longitude": {"type": "float"},
         "dropoff_latitude": {"type": "float"},
         "passenger_count": {"type": "integer"},
         "trip_distance": {"type": "float"},
         "fare_amount": {"type": "float"},
         "extra": {"type": "float"},
         "mta_tax": {"type": "float"},
         "tip_amount": {"type": "float"},
         "tolls_amount": {"type": "float"},
         "ehail_fee": {"type": "float"},
         "improvement_surcharge": {"type": "float"},
         "total_amount": {"type": "float"},
         "payment_type": {"type": "keyword"},
         "trip_type": {"type": "byte"},
         "pickup": {"type": "keyword"},
         "dropoff": {"type": "keyword"},
         "cab_type": {"type": "keyword"},
         "rain": {"type": "float"},
         "snow_depth": {"type": "float"},
         "snowfall": {"type": "float"},
         "max_temp": {"type": "byte"},
         "min_temp": {"type": "byte"},
         "wind": {"type": "float"},
         "pickup_nyct2010_gid": {"type": "integer"},
         "pickup_ctlabel": {"type": "keyword"},
         "pickup_borocode": {"type": "byte"},
         "pickup_boroname": {"type": "keyword"},
         "pickup_ct2010": {"type": "keyword"},
         "pickup_boroct2010": {"type": "keyword"},
         "pickup_cdeligibil": {"type": "keyword"},
         "pickup_ntacode": {"type": "keyword"},
         "pickup_ntaname": {"type": "text", "fields": {"raw": {"type":"keyword"}}},
         "pickup_puma": {"type": "keyword"},
         "dropoff_nyct2010_gid": {"type": "integer"},
         "dropoff_ctlabel": {"type": "keyword"},
         "dropoff_borocode": {"type": "byte"},
         "dropoff_boroname": {"type": "keyword"},
         "dropoff_ct2010": {"type": "keyword"},
         "dropoff_boroct2010": {"type": "keyword"},
         "dropoff_cdeligibil": {"type": "keyword"},
         "dropoff_ntacode": {"type": "keyword"},
         "dropoff_ntaname": {"type": "text", "fields": {"raw": {"type":"keyword"}}},
         "dropoff_puma": {"type": "keyword"}
       }

Databases

So far we have made this test available for 3 databases: Clickhouse, Elasticsearch, and Manticore Search.



In this test we make as few changes to the databases' default settings as possible, so as not to give any of them an unfair advantage. Testing at maximum tuning is no less important, but it's a subject for another benchmark. Here we want to understand what latency a regular, non-experienced user can expect after just installing a database and running it with its default settings. To make the comparison fair, though, we still had to change a few settings; the cache-related changes are described below.



About caches

We've also configured the databases to not use any internal caches. Why this is important:

  1. In this benchmark, we conduct an accurate latency measurement to find out what response time users can expect if they run one of the tested queries at a random moment, not after running the same query many times in a row.
  2. Any cache is a shortcut to low latency. As Wikipedia puts it, a "cache stores data so that future requests for that data can be served faster". But caches differ, and they can be divided into 2 main groups:
    • 👌 those that just cache raw data stored on disk. For example, many databases use mmap() to map data stored on disk into memory, access it easily, and let the operating system take care of the rest (reading it from disk when there's free memory, evicting it when the memory is needed for something more important, etc.). This is fine in terms of performance testing, because we let each database leverage the OS page cache (or its own similar cache that just reads data from disk). That's exactly what we do in this benchmark.

    • ❗ those that save the results of previous calculations. That's fine in many cases, but for this benchmark letting a database enable such a cache is a bad idea, because:

      • it breaks proper measurement: instead of measuring calculation time, you start measuring how long it takes to look up a value by key in memory. That's not what we want to do in this test (though it's interesting in general, and perhaps we'll do it in the future and publish a "Benchmark of caches" article).
      • even if such a cache saves not the full result of a particular query but the results of its sub-calculations, it's still a problem, because it breaks the idea of the test: "what response time users can expect if they run one of the tested queries at a random moment".
      • some databases have such a cache (usually called a "query cache") and others don't, so if we didn't disable the internal caches we would give an unfair advantage to those that have one.

      So we do everything we can to make sure none of the databases does this kind of caching.


What exactly we do to achieve that (a shell sketch of the whole procedure follows the list):


  • Clickhouse:
    • SYSTEM DROP MARK CACHE, SYSTEM DROP UNCOMPRESSED CACHE, SYSTEM DROP COMPILED EXPRESSION CACHE before testing each new query (not each attempt of the same query).


  • Elasticsearch:
    • "index.queries.cache.enabled": false in its configuration

    • /_cache/clear?request=true&query=true&fielddata=true before testing each new query (not each attempt of the same query).


  • Manticore Search (in configuration file):
    • qcache_max_bytes = 0

    • docstore_cache_size = 0


  • Operating system:
    • we do echo 3 > /proc/sys/vm/drop_caches; sync before each NEW query (NOT each attempt). I.e. for each new query we:
      • stop the database
      • drop the OS cache
      • start the database back up
      • make the very first cold query and measure its time
      • make dozens more attempts (up to 100, or until the coefficient of variation is low enough to consider the test results high quality)
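
To make this more concrete, here is a minimal shell sketch of a single cold-start measurement cycle, using Clickhouse as an example. The service name, client invocation and timing logic are illustrative assumptions (the benchmark framework automates all of this), but the cache-dropping statements are the ones listed above.

  #!/bin/bash
  # Sketch of one cold-query measurement cycle (service name and client calls are illustrative)
  QUERY="SELECT count(*) FROM taxi WHERE tip_amount > 1.5"

  # 1. Stop the database
  systemctl stop clickhouse-server

  # 2. Drop the OS page cache
  sync
  echo 3 > /proc/sys/vm/drop_caches

  # 3. Start the database back up
  systemctl start clickhouse-server

  # 4. Clear the internal caches (for Elasticsearch we'd call
  #    POST /_cache/clear?request=true&query=true&fielddata=true instead)
  clickhouse-client --query "SYSTEM DROP MARK CACHE"
  clickhouse-client --query "SYSTEM DROP UNCOMPRESSED CACHE"
  clickhouse-client --query "SYSTEM DROP COMPILED EXPRESSION CACHE"

  # 5. Run the very first (cold) query and measure its time
  start=$(date +%s%N)
  clickhouse-client --query "$QUERY" > /dev/null
  end=$(date +%s%N)
  echo "cold run: $(( (end - start) / 1000000 )) ms"

  # 6. Then repeat the same query (without restarting) up to 100 times,
  #    until the coefficient of variation of the timings is low enough.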

Queries

The queries are mostly analytical: they do filtering, sorting, and grouping. We've also included one full-text query:

[
"SELECT count(*) FROM taxi where pickup_ntaname = '0'",
"SELECT pickup_ntaname, count(*) c FROM taxi GROUP BY pickup_ntaname ORDER BY c desc limit 20",
"SELECT cab_type, count(*) c FROM taxi GROUP BY cab_type order by c desc LIMIT 20",
"SELECT passenger_count, avg(total_amount) a FROM taxi GROUP BY passenger_count order by a desc LIMIT 20",
"SELECT count(*) FROM taxi WHERE tip_amount > 1.5",
"SELECT avg(tip_amount) FROM taxi WHERE tip_amount > 1.5 AND tip_amount < 5",
"SELECT rain, avg(trip_distance) a FROM taxi GROUP BY rain order by a desc LIMIT 20",
{
  "manticoresearch": "SELECT * FROM taxi where match('harlem east') LIMIT 20",
  "clickhouse": "SELECT * FROM taxi where match(dropoff_ntaname, '(?i)\\WHarlem\\WEast\\W') or match(pickup_ntaname, '(?i)\\WHarlem\\WEast\\W') LIMIT 20",
  "elasticsearch": "SELECT * FROM taxi where query('harlem east') LIMIT 20"
},
"SELECT avg(total_amount) FROM taxi WHERE trip_distance = 5",
"SELECT avg(total_amount), count(*) FROM taxi WHERE trip_distance > 0 AND trip_distance < 5",
"SELECT count(*) FROM taxi where pickup_ntaname != '0'",
"select passenger_count, count(*) c from taxi group by passenger_count order by c desc limit 20",
"select rain, count(*) c from taxi group by rain order by c desc limit 20",
"SELECT count(*) from taxi where pickup_ntaname='Upper West Side'",
"SELECT * from taxi limit 5",
"SELECT count(*) FROM taxi WHERE tip_amount = 5",
"SELECT avg(total_amount) FROM taxi"
]
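
For reference, here is how one might run one of these queries against each database manually from the shell. The hosts and ports below are the engines' defaults and are only an assumption for illustration; in the benchmark itself the framework issues the queries through its own clients.

  # Hypothetical manual run of one query against each engine (default ports assumed)
  QUERY="SELECT count(*) FROM taxi WHERE tip_amount > 1.5"

  # Clickhouse: native CLI client
  clickhouse-client --query "$QUERY"

  # Elasticsearch: SQL endpoint over HTTP
  curl -s -XPOST 'http://localhost:9200/_sql?format=txt' \
       -H 'Content-Type: application/json' \
       -d "{\"query\": \"$QUERY\"}"

  # Manticore Search: speaks the MySQL protocol, port 9306 by default
  mysql -h127.0.0.1 -P9306 -e "$QUERY"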

Results

You can find all the results on the results page by selecting “Test: taxi”.


Remember that the only high-quality metric is “Fast avg”, since it guarantees a low coefficient of variation and a high number of attempts conducted for each query (a note on how the coefficient of variation is computed follows the list). The other 2 (“Fastest” and “Slowest”) are provided with no guarantee, since:


  • Slowest - a single-attempt result, in most cases the very first, coldest query. Even though we purge the OS cache before each cold query, it can’t be considered stable, so it can be used for informational purposes only (even though many benchmark authors publish such results without any disclaimer).


  • Fastest - just the very fastest result; in most cases it should be similar to the “Fast avg” metric, but it can be more volatile from run to run.
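
For reference, the coefficient of variation mentioned above is simply the standard deviation of the attempt times divided by their mean. A quick, hypothetical way to compute it from a file with one measured latency per line (the exact thresholds and the “Fast avg” formula used by the framework are not reproduced here):

  # times.txt: one latency in milliseconds per line, one line per attempt
  awk '{ sum += $1; sumsq += $1 * $1; n++ }
       END {
         mean = sum / n
         sd = sqrt(sumsq / n - mean * mean)
         printf "mean=%.2f ms, stddev=%.2f ms, CV=%.1f%%\n", mean, sd, 100 * sd / mean
       }' times.txt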


Remember that the tests, including the results, are 100% transparent, as is everything in this project.


Unlike other, less transparent and less objective benchmarks, we are not drawing any conclusions; we are just leaving screenshots of the results here:

All Three Competitors at Once

Clickhouse vs Elasticsearch


Manticore Search vs Elasticsearch

Manticore Search vs Clickhouse

Disclaimer

The author of this test and of the test framework is a member of the Manticore Search core team, and the test was initially made to compare Manticore Search with Elasticsearch. However, as shown above and as can be verified in the open source code and by running the same test yourself, Manticore Search wasn’t given any unfair advantage, so the test can be considered unprejudiced. If something is missing or wrong (i.e. non-objective) in the test, feel free to make a pull request or open an issue on Github. Your input is appreciated. Thank you for spending your time reading this!


Also published here.