The ScyllaDB team is pleased to announce the release of ScyllaDB 2.2, a production-ready ScyllaDB Open Source minor release.
The ScyllaDB 2.2 release includes significant improvements in performance and latency, improved security with Role-Based Access Control (RBAC), improved high availability with hinted handoff, and many more. More information on the performance gains will be shared in a follow-up blog post.
Moving forward, ScyllaDB 2.3 and beyond will contain new features and bug fixes of all types, while future ScyllaDB 2.2.x and 2.1.x point releases will only contain critical bug fixes. ScyllaDB 2.0 and older versions will no longer be supported.
- Get ScyllaDB 2.2
- Upgrade from ScyllaDB 2.1 to ScyllaDB 2.2
- Please let us know if you encounter any problems.
- Role-Based Access Control (RBAC) – compatible with Apache Cassandra 2.2. RBAC is a method of restricting access by assigning a small set of roles to many users, rather than maintaining per-user lists of permissions. RBAC is sometimes referred to as role-based security.
CREATE ROLE agent;
GRANT CREATE ON customer.data TO agent;
GRANT DESCRIBE ON customer.data TO agent;
GRANT SELECT ON ALL KEYSPACES TO agent;
GRANT MODIFY ON customer.data TO agent;
CREATE ROLE supervisor;
GRANT agent TO supervisor;
Note that you need to have Authentication enabled to use roles.
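For example, authentication can be switched on in scylla.yaml before creating roles (a minimal sketch; the file path and surrounding settings depend on your installation):

```yaml
# /etc/scylla/scylla.yaml
authenticator: PasswordAuthenticator   # default AllowAllAuthenticator performs no auth
authorizer: CassandraAuthorizer        # needed for GRANT/REVOKE to take effect
```

After restarting with these settings, log in with cqlsh using the default superuser and run the CREATE ROLE / GRANT statements above.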
- Hinted Handoff – Experimental. Hinted handoff is a ScyllaDB feature that improves cluster consistency and is compatible with Apache Cassandra’s 2.1 Hinted handoff feature. When a replica node is not available for any reason, the coordinator keeps a buffer of writes (hints) to this replica. When the node becomes available again, hints are replayed to the replica. The buffer size is limited and configurable. You can enable or disable hinted handoff in the scylla.yaml file. More on Hinted Handoff.
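As a sketch, hinted handoff is controlled from scylla.yaml; the option names below follow the Apache Cassandra settings the feature is compatible with, and the values shown are illustrative:

```yaml
# scylla.yaml – hinted handoff (experimental in 2.2)
hinted_handoff_enabled: true
max_hint_window_in_ms: 10800000   # stop accumulating hints after 3 hours of node downtime
```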
- GoogleCloudSnitch – Experimental. ScyllaDB now supports a GCE snitch, compatible with Apache Cassandra’s GoogleCloudSnitch. You can now use GoogleCloudSnitch when deploying ScyllaDB on Google Compute Engine across one or more regions. As with the EC2 snitches, regions are handled as data centers and availability zones as racks. More on GCE snitch #1619.
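Selecting the snitch is a one-line change in scylla.yaml (a sketch, assuming a GCE deployment):

```yaml
# scylla.yaml
endpoint_snitch: GoogleCloudSnitch
```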
- CQL: Support for timeuuid functions: currentTimeUUID (an alias of now()).
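A short sketch of using the function when writing to a timeuuid column; the table and column names here are hypothetical:

```cql
CREATE TABLE events (id int, ts timeuuid, PRIMARY KEY (id, ts));
INSERT INTO events (id, ts) VALUES (1, currentTimeUUID());
SELECT id, dateOf(ts) FROM events WHERE id = 1;
```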
We continue to invest in increasing ScyllaDB’s throughput and reducing latency, and in particular, improving consistent latency.
- Row-level cache eviction. Partitions are now evicted from in-memory cache with row granularity which improves the effectiveness of caching and reduces the impact of eviction on latency for workloads which have large partitions with many rows.
- Improved paged single partition queries #1865. Paged single partition queries are now stateful, meaning they save their state between pages so they don’t have to redo the work of initializing the query at the beginning of each page. This results in improved latency and vastly improved throughput. This optimization is mostly relevant for workloads that hit the disk, as initializing such queries involves extra I/O. The results are 246% better throughput with lower latency when selecting by partition key with an empty cache. More here.
- Improved row digest hash #2884. The algorithm used to calculate a row’s digest was changed from md5 to xxHash, improving throughput and latency for big cells. See the issue’s comment for a microbenchmark result. For an example of how row digest is used in a ScyllaDB Read Repair, see here.
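To illustrate the role of the row digest in read repair, here is a minimal Python sketch: the coordinator compares compact digests from replicas and only fetches full data when they disagree. This is a conceptual model only; it uses stdlib md5 for illustration (ScyllaDB 2.2 now uses xxHash internally), and the row layout is hypothetical:

```python
import hashlib

def row_digest(row: dict) -> bytes:
    """Hypothetical per-row digest. ScyllaDB computes a digest over the
    row's cells; comparing digests is far cheaper than shipping full rows."""
    h = hashlib.md5()  # illustration only; ScyllaDB 2.2 switched md5 -> xxHash
    for key in sorted(row):
        h.update(key.encode())
        h.update(repr(row[key]).encode())
    return h.digest()

# Coordinator compares digests returned by two replicas; a mismatch
# triggers a full-data read followed by a repair write.
replica_a = {"id": 1, "name": "alice", "balance": 100}
replica_b = {"id": 1, "name": "alice", "balance": 95}
needs_repair = row_digest(replica_a) != row_digest(replica_b)
```

A faster hash such as xxHash reduces the per-row CPU cost of this comparison, which matters most for rows with big cells.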
- CPU Scheduler and Compaction controller for Size-Tiered Compaction Strategy (STCS). With ScyllaDB’s thread-per-core architecture, many internal workloads are multiplexed on a single thread. These internal workloads include compaction, flushing memtables, serving user reads and writes, and streaming. The CPU scheduler isolates these workloads from each other, preventing, for example, a compaction from using all of the CPU and starving normal read and write traffic of its fair share. The CPU scheduler complements the I/O scheduler, which solves the same problem for disk I/O. Together, these two are the building blocks for the compaction controller. More on using Control Theory to keep compactions Under Control.
- Promoted index for wide partitions #2981. Queries seeking through a partition used to allocate memory for the entire promoted index of that partition. For very large partitions, those allocations would grow large and cause ‘oversized allocation’ warnings in the logs. Now, the promoted index is consumed incrementally so that the memory allocation does not grow uncontrollably.
- Size-based sampling rate in SSTable summary files – automatically tunes the min_index_interval property of a table based on partition sizes. This significantly reduces the amount of index data that needs to be read in tables with large partitions and speeds up queries. #1842
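For reference, min_index_interval is the standard CQL table property that this feature now tunes automatically; it can also be set explicitly per table (a hedged sketch, reusing the customer.data table from the RBAC example above):

```cql
ALTER TABLE customer.data WITH min_index_interval = 128;
```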
As noted above, ScyllaDB 2.2 ships with a dynamic compaction controller for the Size-Tiered Compaction Strategy. Other compaction strategies have a static controller in this release.
ScyllaDB 2.1 had a static controller for all compaction strategies (disabled by default on most configurations; enabled by default on AWS i3 instances), but the 2.2 static controller may allocate more resources than the 2.1 static controller. This can result in reduced throughput for users of Leveled Compaction Strategy and Time Window Compaction Strategy.
If you are impacted by this change, you may set the compaction_static_shares configuration variable to reduce compaction throughput. Contact the mailing list for guidance.
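A sketch of the workaround in scylla.yaml; the value shown is illustrative, not a recommendation:

```yaml
# scylla.yaml
compaction_static_shares: 100   # illustrative; a lower value gives compaction less CPU
```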
ScyllaDB 2.3 will ship with dynamic controllers for all compaction strategies.