ScyllaDB’s internal developer conference and hackathon this past year was a lot of fun and very productive. One of the projects we put our efforts into was a new shard-aware Rust driver. We’d like to share with you how far we’ve already gotten, and where we want to take the project next.
The CQL protocol, used by both ScyllaDB and Cassandra, already has several drivers for the Rust programming language on the market – cdrs, cassandra-rs, and others. Still, we had rigorous expectations for a driver, and in particular we really wanted the following:
- support for prepared statements
- token-aware routing
- shard-aware routing (ScyllaDB-specific optimization)
- paging support
Also, it would be nice to have the driver written in pure Rust, without any unsafe code. Since none of the existing drivers fulfilled our strict expectations, it was clear what we had to do: write our own async CQL driver in pure Rust! And the ScyllaDB hackathon was the perfect opportunity to do just that.
After intensive work during the hackathon, we completed the first version of scylla-rust-driver and made it available as open source here:
Here’s our hackathon team, discussing crucial design decisions for the soon-to-be most popular CQL driver for Rust!
Rust driver hackathon team members beginning with Piotr Sarna in the upper left and clockwise: Pekka Enberg, Piotr Dulikowski, and Kamil Braun.
Design and API example
Our driver exposes a set of asynchronous methods which can be used to establish a CQL session, send all kinds of CQL queries, receive and parse results into native Rust types, and much more. At the core of the driver there’s a struct which represents a CQL session. After establishing a connection to the cluster, this session can be used to execute all kinds of requests.
Here’s what you can currently do with the API:
- connect to a cluster
- refresh cluster topology information:
  - how many nodes are in the cluster
  - how many nodes are up
  - which nodes are responsible for which data partitions
- perform a raw CQL query
  - not paged or with a custom page size
- prepare a CQL statement
- execute a prepared statement
  - not paged or with a custom page size
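The topology information above is what makes token-aware routing possible. As a self-contained sketch (not the driver’s actual code; the `owner_of` name is made up for illustration), the ring can be kept as a token-sorted list of `(token, node)` pairs, and the owner of a partition is the node holding the first ring token greater than or equal to the partition’s token, wrapping around at the end of the ring:

```rust
// Sketch of token-aware owner lookup over a sorted token ring.
// `ring` must be sorted by token; each entry maps a ring token to the
// node that owns the range ending at that token.
fn owner_of(ring: &[(i64, &'static str)], token: i64) -> &'static str {
    match ring.binary_search_by_key(&token, |&(t, _)| t) {
        // Exact hit: this node owns the token.
        Ok(i) => ring[i].1,
        // Otherwise the owner is the node with the next-larger token...
        Err(i) if i < ring.len() => ring[i].1,
        // ...wrapping around to the first node past the largest token.
        Err(_) => ring[0].1,
    }
}

fn main() {
    let ring = [(-4000, "node-a"), (0, "node-b"), (4000, "node-c")];
    assert_eq!(owner_of(&ring, -5000), "node-a");
    assert_eq!(owner_of(&ring, 100), "node-c");
    assert_eq!(owner_of(&ring, 9000), "node-a"); // wraps around
    println!("routing ok");
}
```

A real driver additionally has to account for the replication factor and multi-datacenter topologies, but the core lookup is this simple.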
To see more comprehensive examples, take a look at https://github.com/scylladb/scylla-rust-driver/tree/main/examples
A snapshot of the documentation is available here:
Implementing the driver with Rust async/await and Tokio
Rust language already has built-in support for asynchronous programming through the async/await mechanism. Additionally, we decided to base the driver on the Tokio framework, which provides an asynchronous runtime for Rust along with many useful features.
Our first step was to implement the connection pools used to connect to ScyllaDB/Cassandra clusters and to ensure the driver can handle both unprepared and prepared CQL statements. In order to do that, we meticulously followed the CQL v4 protocol specification and implemented the initial request types.
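For reference, the CQL v4 specification frames every request with a fixed 9-byte header: protocol version, flags, a 16-bit stream id, an opcode, and a big-endian 32-bit body length. A minimal sketch of building such a header (simplified; a real driver also has to deal with compression and tracing flags, and with response frames):

```rust
// CQL v4 request frame header, per the protocol specification:
// [version: u8][flags: u8][stream: i16 BE][opcode: u8][body length: u32 BE]
const VERSION_REQUEST_V4: u8 = 0x04;
const OPCODE_QUERY: u8 = 0x07;

fn frame_header(opcode: u8, stream: i16, body_len: u32) -> [u8; 9] {
    let mut h = [0u8; 9];
    h[0] = VERSION_REQUEST_V4;
    h[1] = 0x00; // flags: no compression, no tracing
    h[2..4].copy_from_slice(&stream.to_be_bytes());
    h[4] = opcode;
    h[5..9].copy_from_slice(&body_len.to_be_bytes());
    h
}

fn main() {
    let h = frame_header(OPCODE_QUERY, 1, 42);
    assert_eq!(h[0], 0x04); // request version 4
    assert_eq!(h[4], 0x07); // QUERY opcode
    assert_eq!(u32::from_be_bytes([h[5], h[6], h[7], h[8]]), 42);
    println!("header: {:02x?}", h);
}
```

The stream id is what lets the driver multiplex many in-flight requests over a single connection, which is essential for high concurrency.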
Having such a solid footing, we split the work to also provide proper paging and the ability to fetch topology information from the cluster. The latter was needed to make our driver token-aware and shard-aware.
Token awareness allows the driver to route the request straight to the right coordinator node which owns the particular partition, which avoids inter-node communication and generally lowers the overhead. Shard awareness is one step further and is only supported when using the driver to connect to ScyllaDB. The idea is that the request ends up not only on the right node, but also on the right CPU core, thus avoiding inter-core communication and minimizing the overhead even further. Read more about ScyllaDB shard awareness and its positive effect on performance in a great blog which described this optimization for a Python driver.
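To give a flavor of the shard-aware part: ScyllaDB’s sharding scheme, as described in the Python driver blog mentioned above, maps a murmur3 token to a shard by biasing the signed token into unsigned space, discarding a configurable number of most-significant bits, and scaling the result into the shard count. A sketch (the `nr_shards` and `ignore_msb` parameters are advertised by each node; the function name and example values are ours):

```rust
// Sketch of ScyllaDB's token-to-shard mapping.
// `nr_shards` and `ignore_msb` come from the node's sharding parameters.
fn shard_of(token: i64, nr_shards: u64, ignore_msb: u32) -> u64 {
    // Bias the signed murmur3 token into unsigned space [0, 2^64).
    let biased = (token as u64).wrapping_add(1u64 << 63);
    // Drop the ignored most-significant bits (assumes ignore_msb < 64).
    let shifted = biased << ignore_msb;
    // Scale into [0, nr_shards) via a 128-bit multiply-shift.
    ((shifted as u128 * nr_shards as u128) >> 64) as u64
}

fn main() {
    // Every token must land on a valid shard.
    for &t in &[i64::MIN, -1, 0, 1, i64::MAX] {
        assert!(shard_of(t, 8, 12) < 8);
    }
    println!("shard of token 0: {}", shard_of(0, 8, 12));
}
```

With this mapping, the driver can open one connection per shard and send each request directly to the core that owns the partition.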
Interlude: fixing murmur3 by implementing it with bugs
Wait, what? That’s right, during the hackathon we ended up needing to reimplement the murmur3 hashing algorithm, bugs included, in order to stay fully compatible with Apache Cassandra!
When performing token awareness tests, I noticed that around 30% of all requests ended up on a wrong node. That shouldn’t happen, so we quickly started an investigation. We meticulously checked:
- That the topology information fetched from ScyllaDB is indeed correct and consistent with the output of `nodetool describering`
- That the token computations return correct results on the first 100 keys, which makes it highly unlikely that token computation is to blame
… and we shouldn’t have stopped at checking just 100 keys! It turns out that the first failure happened only after we reran the test on the first 10,000 keys. Further investigation showed that our Golang friends had hit a similar problem: https://github.com/gocql/gocql/issues/1033.
In short, Cassandra’s murmur3 implementation, hand-written in Java, operates on signed integers, while the original algorithm uses unsigned ones. That creates subtle differences when shifting the values, which in turn translates to around 30% of tokens being computed inconsistently with the Cassandra way.
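The core of the incompatibility is easy to demonstrate in isolation. Java has no unsigned bytes, so when Cassandra’s implementation mixes the tail bytes of a key into the hash state, each byte gets sign-extended, while the reference algorithm zero-extends it. A toy example (hypothetical helper names, not the driver’s code):

```rust
// Reference murmur3: tail bytes are treated as unsigned (zero-extended).
fn mix_tail_unsigned(b: u8) -> u64 {
    b as u64
}

// Cassandra's Java version: bytes are signed, so widening sign-extends,
// setting all high bits whenever the byte is >= 0x80.
fn mix_tail_java(b: u8) -> u64 {
    (b as i8) as i64 as u64
}

fn main() {
    // Identical for bytes below 0x80...
    assert_eq!(mix_tail_unsigned(0x7f), mix_tail_java(0x7f));
    // ...but diverging for bytes with the high bit set.
    assert_ne!(mix_tail_unsigned(0x80), mix_tail_java(0x80));
    println!(
        "0x80 zero-extended: {:#x}, sign-extended: {:#x}",
        mix_tail_unsigned(0x80),
        mix_tail_java(0x80)
    );
}
```

Since roughly half of all byte values have the high bit set, it is no surprise that a large fraction of keys ended up with tokens that disagreed with Cassandra’s.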
We had no choice but to stop using a comfy crate from crates.io which provided us with a neat murmur3 algorithm implementation and instead we spent the whole night rewriting the algorithm by hand, bugs included™!
Benchmarks
We ran two simple benchmarks to see how scylla-rust-driver compares to other existing drivers.
The benchmark’s goal was to send 10 million prepared statements as fast as possible, given a fixed concurrency of 1,024. The usage of token-aware and shard-aware routing was allowed, if supported by the driver. All drivers were compared against the same 3-node ScyllaDB cluster, each node having 2 shards. We compared against GoCQL (enhanced by us with shard awareness) and cdrs.
Benchmark results (reads and writes): output of the Linux `time` command for processing 10 million prepared statements with a fixed concurrency of 1,024 using different drivers.
Source code of all benchmarks:
The future of our project is very bright. As a matter of fact, it’s already scheduled for another year of hands-on work! A team of four talented students from the University of Warsaw will continue developing the driver from where we left off, as part of the ZPP program. This is the second time ScyllaDB is proudly taking part in the program as a mentor. Here are the other projects from last year:
- ScyllaDB Student Projects, Part I: Parquet
- ScyllaDB Student Projects, Part II: Implementing an Async Userspace File System
- ScyllaDB Student Projects, Part III: Kafka Client for Seastar and ScyllaDB
We also have an official roadmap, an updated version of which can always be found in our repository (https://github.com/scylladb/scylla-rust-driver):
- driver-side metrics:
  - number of sent requests
  - latency percentiles of sent requests
  - number of errors
- handling topology changes and presenting them to the user
- CQL batch statements
- custom error types
- robust handling of various errors (e.g. repreparing statements)
- CQL authentication support
- TLS support
- configurable load balancing algorithms
- configurable retry policies
- query builders
- CQL tracing
- [additional] performance benchmarks against other drivers
- handling events pushed by the server
- speculative execution
- expanding the documentation
- more correctness tests
- more integration tests – preferably using ScyllaDB’s ccm framework and Python
- preparing the work to be published on crates.io