

Benchmarking Database Performance: Key OLTP and OLAP Tools for System Evaluation


Authors:

(1) Limeng Zhang, Centre for Research on Engineering Software Technologies (CREST), The University of Adelaide, Australia;

(2) M. Ali Babar, Centre for Research on Engineering Software Technologies (CREST), The University of Adelaide, Australia.

Table of Links

Abstract and 1 Introduction

1.1 Configuration Parameter Tuning Challenges and 1.2 Contributions

2 Tuning Objectives

3 Overview of Tuning Framework

4 Workload Characterization and 4.1 Query-level Characterization

4.2 Runtime-based Characterization

5 Feature Pruning and 5.1 Workload-level Pruning

5.2 Configuration-level Pruning

5.3 Summary

6 Knowledge from Experience

7 Configuration Recommendation and 7.1 Bayesian Optimization

7.2 Neural Network

7.3 Reinforcement Learning

7.4 Search-based Solutions

8 Experimental Setting

9 Related Work

10 Discussion and Conclusion, and References

8 EXPERIMENTAL SETTING

In the realm of database performance optimization, a diverse array of open-source benchmarks has emerged. These benchmarks bundle both datasets and workloads, effectively emulating real-world database systems. They are typically categorized as online transaction processing (OLTP) or online analytical processing (OLAP) benchmarks: OLAP prioritizes complex data analysis and reporting, whereas OLTP specializes in transactional processing and real-time updates.


Typical OLTP benchmarks are listed as follows:


• Sysbench[1]: a popular open-source, scriptable, multi-threaded benchmark tool based on LuaJIT. It provides a set of modular, configurable tests that measure various aspects of system performance, including CPU, memory, file I/O, and OLTP-like database performance (supporting MySQL, MariaDB, PostgreSQL, SQLite, and others). It offers a range of OLTP workloads designed to simulate transaction scenarios commonly encountered in online transaction processing systems, including read-only, read-write, point-select, update-index, insert, and delete operations, each assessing a different aspect of database performance such as concurrency, response time, and efficiency of data manipulation (a minimal invocation sketch follows this list).


• TPC-C[2]: the current industry standard for evaluating the performance of OLTP systems. TPC-C involves a mix of five concurrent transaction types of differing complexity, executed either online or queued for deferred execution. The database comprises nine types of tables with a wide range of record and population sizes (a sketch of the transaction mix follows this list).


• Wikipedia: This OLTP benchmark is derived from the software that runs the popular online encyclopedia. Since the website's underlying software, MediaWiki, is open source, the real schema, transactions, and queries used on the live website are available.


• SEATS: The SEATS benchmark [76] models an airline ticketing system in which customers search for flights and make online reservations. It consists of eight tables and six transaction types. Approximately 60% of the transactions are read-only (e.g., customers searching for open seats), while the remaining 40% create, update, or delete reservation records.


• YCSB: The Yahoo! Cloud Serving Benchmark (YCSB) [77] is modeled after data management applications with simple workloads and high scalability requirements. It comprises six OLTP transaction types that access random tuples according to a Zipfian distribution (a sampling sketch follows this list). The database contains a single table with 10 attributes.


• BenchBase[3] (formerly OLTPBench [78]): a multi-DBMS SQL benchmarking framework built on JDBC. The suite offers a standardized set of workloads and metrics for assessing the performance and scalability of database systems, supporting transactional benchmarks such as TPC-C, TPC-H, TATP, SmallBank, and SEATS, web-oriented benchmarks such as Twitter and Wikipedia, and feature-testing benchmarks including ResourceStresser (RS) and YCSB (an invocation sketch follows this list).
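
As a rough illustration of how the Sysbench OLTP workloads above are typically driven, the sketch below wraps the sysbench CLI in a small Python script that prepares, runs, and cleans up an oltp_read_write test against a local MySQL instance. The connection settings, table counts, and duration are placeholder assumptions; option names should be checked against the installed sysbench version.

```python
import subprocess

# Placeholder connection settings -- adjust for your environment.
MYSQL_OPTS = [
    "--db-driver=mysql",
    "--mysql-host=127.0.0.1",
    "--mysql-user=sbtest",
    "--mysql-password=sbtest",
    "--mysql-db=sbtest",
]

# Workload shape: 10 tables of 100k rows, 16 client threads, 60-second run.
WORKLOAD_OPTS = ["--tables=10", "--table-size=100000", "--threads=16", "--time=60"]


def sysbench(phase: str) -> None:
    """Invoke one phase (prepare / run / cleanup) of the oltp_read_write test."""
    cmd = ["sysbench", "oltp_read_write", *MYSQL_OPTS, *WORKLOAD_OPTS, phase]
    subprocess.run(cmd, check=True)


if __name__ == "__main__":
    sysbench("prepare")   # create and load the test tables
    sysbench("run")       # execute the mixed read/write workload; sysbench reports TPS and latency
    sysbench("cleanup")   # drop the test tables
```

The same pattern applies to the other bundled workloads (oltp_read_only, oltp_point_select, oltp_update_index, and so on) by swapping the test name.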
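
To make TPC-C's five-transaction mix concrete, the sketch below draws transaction types using the weighting commonly adopted by TPC-C implementations (roughly 45% New-Order, 43% Payment, and 4% each for Order-Status, Delivery, and Stock-Level). These weights follow the specification's minimum-mix requirements rather than anything reported in this paper, so treat them as illustrative.

```python
import random
from collections import Counter

# Commonly used TPC-C transaction mix: the spec fixes minimum percentages for
# every type except New-Order, which absorbs the remainder. Illustrative only.
TPCC_MIX = {
    "NewOrder": 45,
    "Payment": 43,
    "OrderStatus": 4,
    "Delivery": 4,
    "StockLevel": 4,
}


def next_transaction() -> str:
    """Pick the next transaction type according to the weighted mix."""
    types = list(TPCC_MIX)
    weights = [TPCC_MIX[t] for t in types]
    return random.choices(types, weights=weights, k=1)[0]


if __name__ == "__main__":
    counts = Counter(next_transaction() for _ in range(100_000))
    for name, count in counts.most_common():
        print(f"{name:12s} {count / 100_000:.1%}")
```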
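
The Zipfian access pattern that YCSB uses to pick tuples can be sketched in a few lines: keys are ranked, and the probability of touching the key at rank k is proportional to 1/k^s. This is a simplified stand-in for YCSB's own generator (which additionally scrambles key order so that hot keys are spread across the keyspace); the skew parameter below is an arbitrary choice close to YCSB's default.

```python
import bisect
import itertools
import random
from collections import Counter


def zipfian_sampler(num_keys: int, skew: float = 0.99):
    """Return a sampler drawing key indices with P(rank k) proportional to 1 / k**skew."""
    # Precompute the cumulative distribution over ranks 1..num_keys.
    weights = [1.0 / (k ** skew) for k in range(1, num_keys + 1)]
    cumulative = list(itertools.accumulate(weights))
    total = cumulative[-1]

    def sample() -> int:
        u = random.random() * total
        return bisect.bisect_left(cumulative, u)  # 0-based key index

    return sample


if __name__ == "__main__":
    draw = zipfian_sampler(num_keys=1_000)
    counts = Counter(draw() for _ in range(100_000))
    # A handful of "hot" keys should dominate the access counts.
    print(counts.most_common(5))
```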
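
BenchBase itself is driven from a bundled JAR plus a per-DBMS XML configuration file. The sketch below shows the general invocation pattern from Python, assuming the project has already been built into benchbase.jar and that the sample TPC-C config shipped in the repository has been copied and edited with the target JDBC URL and credentials; the flag names follow the repository's README and should be verified against the checked-out version.

```python
import subprocess

# Assumptions: benchbase.jar built per the repo instructions, and a sample config
# (e.g. config/mysql/sample_tpcc_config.xml) edited with the JDBC URL, user, password.
BENCHBASE_JAR = "benchbase.jar"
CONFIG = "config/mysql/sample_tpcc_config.xml"

cmd = [
    "java", "-jar", BENCHBASE_JAR,
    "-b", "tpcc",        # which benchmark to run
    "-c", CONFIG,        # workload and connection configuration
    "--create=true",     # create the schema
    "--load=true",       # load the initial data
    "--execute=true",    # run the measured workload
]

subprocess.run(cmd, check=True)
```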


Typical OLAP benchmarks are listed as follows:


• TPC-H[4]: a standard industry decision-support benchmark consisting of a suite of business-oriented ad-hoc queries and concurrent data modifications. Like TPC-DS, it simulates a decision-support workload in an OLAP environment where there is little prior knowledge of the queries. It contains 8 tables in a 3NF schema and 22 queries of varying complexity.


• JOB[5]: The Join Order Benchmark (JOB) [79] is a comprehensive benchmarking framework with a set of queries over the Internet Movie Database (IMDB), designed to assess cardinality estimation and query optimization strategies. The benchmark comprises 113 multi-join query instances derived from 33 query templates tailored to the IMDB dataset. Each query involves between 3 and 16 joins, with an average of 8 joins per query. The dataset holds 3.6 GB of data (11 GB including indexes) across 21 tables (a query-replay sketch follows this list).


• TPC-DS[6]: containing 99 queries, TPC-DS has been widely used in Spark SQL systems for the research and development of optimization techniques [80]–[82]. It models complex decision-support functions to provide highly comparable, controlled, and repeatable tasks for evaluating the performance of Spark SQL systems (a Spark SQL sketch follows this list).


• HiBench [83]: a big data benchmark suite that helps evaluate different big data frameworks in terms of speed, throughput, and system resource utilization. It contains a set of Hadoop, Spark, and streaming workloads, including Sort, WordCount, TeraSort, Repartition, and SQL workloads. Its SQL-related benchmarks contain three kinds of queries: Join, Scan, and Aggregation. Join typically executes in two phases, Map and Reduce. Scan consists of only a Map operation, initiated by the "select" command, which splits the input value on the field delimiter and outputs a record. Aggregation consists of both Map and Reduce operations: the Map operation ("select" command) first splits the input value by the field delimiter and then outputs the field defined by the Reduce operation ("group by" command) as a new key/value pair (a Map/Reduce sketch follows this list).
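
Because the JOB queries are distributed as plain .sql files in the repository linked in [5], a straightforward way to use them is to replay each file against a PostgreSQL instance that already holds the IMDB data and record per-query latency. The sketch below does that with psycopg2; the checkout location, query-file naming, database name, and credentials are assumptions for illustration.

```python
import glob
import time

import psycopg2  # pip install psycopg2-binary

# Assumed setup: IMDB loaded into a local PostgreSQL database named "imdb", and the
# join-order-benchmark repo checked out next to this script (query files like 1a.sql).
conn = psycopg2.connect(host="localhost", dbname="imdb", user="postgres", password="postgres")
conn.autocommit = True

for path in sorted(glob.glob("join-order-benchmark/[0-9]*.sql")):
    with open(path) as f, conn.cursor() as cur:
        start = time.perf_counter()
        cur.execute(f.read())
        cur.fetchall()
        print(f"{path}: {time.perf_counter() - start:.3f}s")

conn.close()
```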
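
In the Spark SQL setting, a TPC-DS run usually amounts to registering the pre-generated tables (for example, Parquet files produced by the TPC-DS data generator) as views and submitting the benchmark queries through spark.sql. The sketch below shows that pattern with an illustrative decision-support style aggregation over store_sales, date_dim, and item rather than a verbatim TPC-DS query; the data paths and file format are assumptions.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("tpcds-sketch").getOrCreate()

# Assumed layout: one Parquet directory per TPC-DS table under /data/tpcds/.
for table in ("store_sales", "date_dim", "item"):
    spark.read.parquet(f"/data/tpcds/{table}").createOrReplaceTempView(table)

# Illustrative decision-support aggregation (not an official TPC-DS query text).
result = spark.sql("""
    SELECT d.d_year, i.i_category, SUM(ss.ss_net_paid) AS total_paid
    FROM store_sales ss
    JOIN date_dim d ON ss.ss_sold_date_sk = d.d_date_sk
    JOIN item i     ON ss.ss_item_sk = i.i_item_sk
    GROUP BY d.d_year, i.i_category
    ORDER BY d.d_year, total_paid DESC
""")
result.show(20)
```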
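
The Map/Reduce split described for HiBench's Aggregation query can be mirrored in plain Python: the map step parses each delimited input record and emits a (group-key, value) pair, and the reduce step sums the values per key, which is what the "select" and "group by" stages do in the corresponding Hive query. The record layout, field positions, and delimiter here are illustrative assumptions.

```python
from collections import defaultdict

# Toy input: delimited records of the form "<user>|<page>|<revenue>".
LINES = [
    "alice|/home|0.25",
    "bob|/home|0.50",
    "alice|/cart|0.25",
]


def map_phase(line: str, delimiter: str = "|"):
    """'select' stage: split the record on the delimiter and emit (group key, value)."""
    user, _page, revenue = line.split(delimiter)
    yield user, float(revenue)


def reduce_phase(pairs):
    """'group by' stage: sum the values for each key."""
    totals = defaultdict(float)
    for key, value in pairs:
        totals[key] += value
    return dict(totals)


if __name__ == "__main__":
    mapped = (pair for line in LINES for pair in map_phase(line))
    print(reduce_phase(mapped))  # {'alice': 0.5, 'bob': 0.5}
```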


This paper is available on arxiv under CC BY 4.0 DEED.


[1] https://github.com/akopytov/sysbench


[2] https://www.tpc.org/TPC


[3] https://github.com/cmu-db/benchbase


[4] https://www.tpc.org/tpch/


[5] https://github.com/gregrahn/join-order-benchmark


[6] https://www.tpc.org/TPC