Scylla vs. Cassandra benchmark

The following benchmark compares Scylla and Cassandra on a single server. It supersedes our previous benchmark.

Please also check out our cluster benchmark.

Test Bed

The test was executed on SoftLayer servers. Configuration included:

1 DB server (Scylla / Cassandra):

  • SoftLayer bare metal server
  • CPU: 2x 12-core Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz, with hyperthreading (48 logical cores)
  • RAM: 128 GB
  • Networking: 10 Gbps
  • Disk: MegaRAID SAS 9361-8i, 4x 960 GB SSD
  • OS: Fedora 22 chroot running in CentOS 7, Linux 3.10.0-229.11.1.el7.x86_64

15 load servers (cassandra-stress):

  • SoftLayer VM server
  • XEN hypervisor
  • CPU: Intel(R) Xeon(R) CPU E5-2683 v3 @ 2.00GHz, 16 logical cores
  • RAM: 16 GB
  • Networking: 1 Gbps
  • OS: CentOS 7, Linux 3.10.0-229.7.2.el7.x86_64

All machines were located in the same data center.

Server tuning

The DB server was tuned for better performance:

  • The bond interface was dropped and the bare NIC interface was used
  • NIC rx interrupts were each bound to a separate CPU
  • NIC tx queues were each bound to a separate CPU
  • The open file limit was increased: ulimit -n 1000000
  • 128 huge pages were configured
  • cpufreq’s scaling_governor was changed to performance
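
The steps above can be sketched roughly as follows. This is not the exact script used in the test; the interface name eth0 and the one-IRQ-per-CPU assignment scheme are assumptions, and real device names and IRQ layout will differ per machine:

```shell
#!/bin/sh
# Sketch of the server-side tuning (run as root). eth0 and the
# one-IRQ-per-CPU assignment are assumptions, not the tested script.
ulimit -n 1000000                                  # raise the open file limit
echo 128 > /proc/sys/vm/nr_hugepages               # configure 128 huge pages
for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
  echo performance > "$g"                          # scaling_governor -> performance
done
cpu=0
for irq in $(awk -F: '/eth0/ {print $1}' /proc/interrupts); do
  echo "$cpu" > "/proc/irq/$irq/smp_affinity_list" # one NIC rx IRQ per CPU
  cpu=$((cpu + 1))
done
# tx queues can be spread similarly via /sys/class/net/eth0/queues/tx-*/xps_cpus
```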


Client tuning

The client VMs were tuned for better performance:

  • NIC IRQs were bound to CPU 0
  • cassandra-stress was limited to CPUs 1-15 using taskset
  • tsc was set as the clock source
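
A rough sketch of the client-side tuning, to be run as root on each loader VM; the interface name eth0 is an assumption:

```shell
#!/bin/sh
# Sketch of the client tuning above; eth0 is an assumption.
for irq in $(awk -F: '/eth0/ {print $1}' /proc/interrupts); do
  echo 0 > "/proc/irq/$irq/smp_affinity_list"   # all NIC IRQs on CPU 0
done
echo tsc > /sys/devices/system/clocksource/clocksource0/current_clocksource
# Keep the load generator off CPU 0, which handles network interrupts:
taskset -c 1-15 cassandra-stress write duration=15min -mode native cql3 \
  -rate threads=700 -node $SERVER
```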


Scylla setup

  • version: e3429142651f55f47162a4b5c613979bd2dc079d
  • command line:

    scylla --data-file-directories=/var/lib/scylla/data --commitlog-directory=/var/lib/scylla/commitlog \
        --collectd=1 --collectd-address=$COLLECTD_HOST:25827 --collectd-poll-period 3000 --collectd-host="$(hostname)" \
        --log-to-syslog=1 --log-to-stdout=1 -m80G --options-file=./scylla.yaml
  • scylla.yaml

Note that in this benchmark Scylla was using the default kernel networking stack (POSIX), not DPDK.

Scylla was compiled and run in Fedora 22 chroot, which is equipped with gcc 5.1.1-4. We saw that this gives about 10% performance improvement over the standard CentOS Scylla RPM running outside the chroot. Future Scylla RPMs will be based on gcc 5.1 so that default installations can also benefit.


Cassandra setup

The differences between cassandra.yaml used in the test and the distribution-provided one are:

  • Replaced IP addresses
  • concurrent_writes was set to 384 (8 × 48 logical cores), as recommended


Workloads

Three workloads were tested:

  • Write only: cassandra-stress write duration=15min -mode native cql3 -rate threads=700 -node $SERVER
  • Read only: cassandra-stress mixed 'ratio(read=1)' duration=15min -pop 'dist=gauss(1..10000000,5000000,500000)' -mode native cql3 -rate threads=700 -node $SERVER
  • Mixed (50/50 read/write): cassandra-stress mixed 'ratio(read=1,write=1)' duration=15min -pop 'dist=gauss(1..10000000,5000000,500000)' -mode native cql3 -rate threads=700 -node $SERVER

The cassandra-stress tool from the scylla-tools-0.9-20150924.86671d1.el7.centos.noarch RPM was used. It is based on cassandra-stress version 2.1.8. The main difference from the stock tool is that it was changed not to use the Thrift API in the setup phase, since Thrift is currently disabled in Scylla. It was not modified in any way that would favor Scylla over Cassandra performance-wise.

Each cassandra-stress process was placed on a separate loader machine. Each test was executed on an empty database (sstables and commitlogs deleted). Before each workload the page cache was dropped.

Before the read and mixed workloads the database was populated with 100,000,000 partitions. The population was performed by all client VMs writing in parallel, each responsible for a sub-range of partitions adjacent to the others:

cassandra-stress write n=$range_size -pop "seq=$range_start..$range_end"  -mode native cql3 -rate threads=700 -node $SERVER
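
The per-loader range computation can be sketched like this. The remainder handling (the last loader absorbing the leftover partitions) is an assumption; the text only states that the sub-ranges were adjacent:

```shell
#!/bin/sh
# Split 100,000,000 partitions into adjacent sub-ranges, one per loader VM.
# Remainder handling (last loader takes the leftover) is an assumption.
TOTAL=100000000
LOADERS=15
range_size=$((TOTAL / LOADERS))      # 6,666,666 partitions per loader
i=0
while [ "$i" -lt "$LOADERS" ]; do
  range_start=$((i * range_size + 1))
  range_end=$(((i + 1) * range_size))
  [ "$i" -eq $((LOADERS - 1)) ] && range_end=$TOTAL   # absorb the remainder
  echo "loader $i: -pop \"seq=$range_start..$range_end\""
  i=$((i + 1))
done
```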

The cassandra-stress processes were started simultaneously. The first 60 seconds and last 15 seconds of the results were ignored to allow for warmup and teardown. Reported throughput is the average server-perceived throughput of CQL operations.
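
The trimming-and-averaging step can be sketched as follows, assuming the samples were exported as (seconds, ops/s) CSV rows; the file layout and the toy data below are assumptions for illustration:

```shell
#!/bin/sh
# Average throughput over the samples, dropping the first 60 s (warmup)
# and last 15 s (teardown). The CSV layout and data are illustrative only.
cat > samples.csv <<'EOF'
0,100
30,100
60,200
300,200
885,200
890,100
900,50
EOF
avg=$(awk -F, -v warm=60 -v cool=15 '
  { t[NR] = $1; v[NR] = $2; last = $1 }
  END {
    for (i = 1; i <= NR; i++)
      if (t[i] >= warm && t[i] <= last - cool) { sum += v[i]; n++ }
    printf "%.0f", n ? sum / n : 0
  }' samples.csv)
echo "average tps: $avg"
```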

Scylla results

Workload   Average CQL tps   Min CQL tps   Max CQL tps
write            1,871,556     1,753,819     1,916,831
read             1,585,416     1,578,034     1,590,486
mixed            1,372,451     1,050,556     1,448,385

The throughput was sampled every 3 seconds.

Below you can find a snapshot of the metrics dashboard during the test. It covers all three workloads. There are 5 bumps on the “load” chart, which correspond, from left to right, to: write workload, populating before read workload, read workload, populating before mixed workload, mixed workload.

The database was restarted clean before each of the three phases, which can be seen on the cache graphs below.

It’s important to note that Scylla provides the same persistence guarantees as Cassandra. The high disk activity visible during the write and mixed workloads corresponds to commitlog and sstable flushing activity.


Memory statistics:


Cassandra results

Workload   Average CQL tps   Min CQL tps   Max CQL tps
write              251,785       232,529       271,387
read                95,874        77,666       111,553
mixed              108,947        91,111       114,071

The “Reads”, “Writes” and “Total operations” metrics were obtained from the StorageProxy MBean with the help of the mx4j plugin. Throughput was sampled once every 10 seconds.
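
For reference, a hypothetical way to reach that MBean through mx4j’s HTTP adaptor is sketched below; the port (mx4j’s default, 8081) and the host name are assumptions about this setup, not details given in the test:

```shell
#!/bin/sh
# Hypothetical polling of the StorageProxy MBean via mx4j's HTTP adaptor.
# The port (mx4j's default, 8081) and host name are assumptions.
SERVER=db1   # placeholder DB host name
url="http://$SERVER:8081/mbean?objectname=org.apache.cassandra.db:type=StorageProxy"
echo "$url"
# Sample once every 10 seconds, as in the benchmark:
# while true; do curl -s "$url"; sleep 10; done
```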

The results we got for Cassandra are slightly worse than in our previous benchmark. We have not yet pinpointed the cause; the investigation is in progress. We present the numbers here for reference; the comparison graph on the main page still uses the previous (better) measurements.

Created on 2015-10-09