ScyllaDB Rust Driver: One Driver to Rule Them All

18 minutes


In This NoSQL Presentation

The idea for implementing a brand new Rust driver for ScyllaDB emerged from an internal hackathon in 2020. The initial goal was to provide a native implementation of a CQL driver, fully compatible with Apache Cassandra™, but also containing a variety of ScyllaDB-specific optimizations. The development was later continued as a Warsaw University project led by ScyllaDB. Now it's an officially supported driver with excellent performance and a wide range of features. This session shares the design decisions taken in implementing the driver and its roadmap. It also presents a forward-thinking plan to unify other ScyllaDB-specific drivers by translating them to bindings to our Rust driver, using work on our C++ driver as an example.

Piotr Sarna, Software Engineer, ScyllaDB

Piotr is a software engineer very keen on open-source projects, C++ and Rust. He previously developed an open-source distributed file system (LizardFS) and had a brief adventure with the Linux kernel during an apprenticeship at Samsung Electronics. Piotr graduated from the University of Warsaw with an MSc in Computer Science.

Video Transcript

Hello. My name is Piotr, and today I’ll talk about ScyllaDB Rust Driver, our motivation for implementing it, important design decisions and our official plans. I work on ScyllaDB Core by day, but lately I have also started leading and maintaining the ScyllaDB Rust Driver effort. The idea to implement a new CQL Rust driver from scratch emerged during an internal hackathon at ScyllaDB around two years ago.

The Rust ecosystem already had a few drivers available, but none of them really fit our needs, for a multitude of reasons: unsatisfying performance, known bugs and so on. One of the options was cdrs, which now looks discontinued, and its fork cdrs.io, which is coded in pure Rust, but it had a bunch of performance issues. It wasn’t asynchronous and had a few known bugs. The other option was cassandra-cpp, which is based on the C++ driver. The project simply provides a thin Rust interface and binds it to the C++ core.

Thus we decided to use our experience with existing drivers and carefully plan a new open-source driver, written natively in Rust and with very specific goals in mind. First of all, we wanted it to be asynchronous from day one. Then it should support token awareness and shard awareness, and it should be fast, produce as little overhead as possible and be easy to extend. Based on these ideas, we started coding ScyllaDB Rust Driver.

A short interlude: asynchronous Rust is based on a rather unique approach, namely a future/promise model in which a future represents the computation itself, not just the result of a task that gets executed in the background, which is common in other languages that support asynchronous operations. In async Rust, nothing is implicitly started in the background; instead it’s the programmer’s responsibility to advance the state of asynchronous computations, and in order to do that, a runtime is customarily used. A runtime includes the capabilities of a scheduler, the event loop, the reactor and so on. Due to this design, a runtime is not part of the language; instead there are multiple runtimes to choose from, most of them open source. We picked Tokio. Tokio is pretty much the standard choice due to its very active development community, lots of features and, in general, its ease of use and of course popularity. As a side note, we would very much like to write our driver in a runtime-agnostic way, but we decided that the ecosystem is not yet ready for it. There are tons of important bits in Tokio, like good TCP utilities, timers and other things, that are not easy to code in a generic way.
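To make the laziness of Rust futures concrete, here is a minimal, self-contained sketch (not taken from the talk) showing that merely creating a future starts no work, and that the Tokio runtime is what actually drives it once the future is awaited:

```rust
use std::time::Duration;

// Calling this function only constructs a future; no work happens yet.
async fn slow_greeting() -> &'static str {
    tokio::time::sleep(Duration::from_millis(100)).await;
    "hello from an async computation"
}

// The #[tokio::main] macro starts the Tokio runtime (scheduler, event loop,
// timers) that polls our futures to completion.
#[tokio::main]
async fn main() {
    let greeting = slow_greeting(); // nothing runs in the background here
    let msg = greeting.await;       // awaiting is what advances the computation
    println!("{msg}");
}
```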

And now let’s go back to our driver. We want the interface to be straightforward yet powerful. Many of the details can be configured, but we also provide default values for everything that makes sense. For instance, when a user wants to establish a CQL session with ScyllaDB, the only required parameter is the address of one of the nodes. A session is pretty much everything that the user needs to create to start communicating with the database. With the help of a session, it’s possible to simply send requests and receive the results from the database. Depending on the workload type, users might want to specify a custom number of connections to the database nodes. Our default establishes a single connection per shard per node, but users are free to define a fixed number of connections if that fits their workloads better. By the way, per-shard connections are a ScyllaDB-specific feature, but Cassandra users can also benefit from our driver in full; it is fully compatible with Cassandra as well. For Cassandra, we simply create one connection per node by default.

Token-aware routing support has been in our driver from the start. It’s a load-balancing policy which computes token values from queries on the client’s side and tries to send the requests straight to the database nodes which own the particular data. This strategy is pretty much a must-have for workloads which care about low latency. It’s important to remember that token awareness works correctly only when prepared statements are used. That’s not usually an issue, because prepared statements are a huge performance boost anyway, so most applications are already coded with prepared statements in mind. Our driver here is of course no exception. We fully support preparing statements, executing them later and making sure that everything is automatically re-prepared when needed.

And our driver takes it one step further. Once a request is ready to be sent, the driver will try to send it not only to the node that has the data, but directly to the CPU core which owns the particular partition. That translates to even better latency, because the data goes straight to the shard responsible for handling the request. That’s possible because, by default, ScyllaDB Rust Driver maintains a separate connection for each core and keeps up-to-date topology information cached on the client’s side, which says which data belongs to which CPU in which node.
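As an illustration of how little is needed to get started, here is a minimal sketch using the scylla crate: connecting with just one node address and running a prepared statement, which is what enables token- and shard-aware routing. The keyspace, table and values are made up, and the method names follow the crate’s documented API around the time of this talk, so they may differ in newer releases:

```rust
use scylla::{Session, SessionBuilder};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // The only required parameter is the address of one node;
    // the rest of the cluster topology is discovered automatically.
    let session: Session = SessionBuilder::new()
        .known_node("127.0.0.1:9042")
        .build()
        .await?;

    // Prepared statements are what make token- and shard-aware routing possible;
    // the driver re-prepares them automatically when needed.
    let insert = session
        .prepare("INSERT INTO ks.t (pk, v) VALUES (?, ?)")
        .await?;
    session.execute(&insert, (42_i32, "hello")).await?;

    Ok(())
}
```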

The next thing that we are quite proud of is the observability tooling available to ScyllaDB Rust Driver users. First, the driver is integrated with the tracing crate, which provides a way of integrating with multiple logging backends, be it printing the logs to your screen, saving them on disk for later, or sending them to an external observability tool or service, anything you imagine. That’s very useful for debugging and looking for bottlenecks, but also for more complex analysis, gathering statistics, et cetera. Our driver also exposes a number of useful metrics on its own. For instance: how many requests were sent in each session? What was their latency? How many of them were successful? How many of them returned errors? And many more. We also integrated with the CQL tracing mechanism. In CQL, each query can be individually traced so that a detailed list of steps and their timestamps can be investigated later, for instance in search of performance regressions. Our driver fully supports query tracing, and its results can simply be printed to logs or saved for later, more complex analysis. You can see on the slide that it’s possible to deduce which step took how many microseconds, how many nodes were contacted, how long it took for the database to actually fetch the data from disk, whether the data was already in cache, whether we needed to touch the disk at all and so on. All in all, tracing output is full of very important details which are invaluable for diagnosing potential performance issues.
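To make this more concrete, here is a hedged sketch of enabling per-query CQL tracing and reading the driver’s own metrics; the method names (set_tracing, get_tracing_info, get_metrics and friends) are taken from the scylla crate’s API around the time of the talk and may have changed since:

```rust
use scylla::{query::Query, SessionBuilder};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let session = SessionBuilder::new()
        .known_node("127.0.0.1:9042")
        .build()
        .await?;

    // Ask the database to trace this particular query.
    let mut query = Query::new("SELECT pk, v FROM ks.t");
    query.set_tracing(true);
    let result = session.query(query, &[]).await?;

    if let Some(tracing_id) = result.tracing_id {
        // Fetch the detailed list of steps and their timestamps.
        let info = session.get_tracing_info(&tracing_id).await?;
        println!("coordinator: {:?}, duration: {:?} us", info.coordinator, info.duration);
    }

    // Driver-side metrics: request counts, latencies, error counts, ...
    let metrics = session.get_metrics();
    println!("queries executed: {}", metrics.get_queries_num());
    println!("average latency: {} ms", metrics.get_latency_avg_ms().unwrap_or(0));

    Ok(())
}
```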

The load balancing policy is a very important aspect of a CQL driver. A load balancing policy that fits the workload can ensure that latency and availability requirements are satisfied. Our driver’s default policy is called token-aware round robin, which means that we first prepare a list of nodes responsible for a particular partition and then pick one of the nodes from this list in a round-robin fashion. On top of that, we also try to reach the appropriate shard later. Aside from the default policy, users are able to pick something else, for instance DC-aware round robin, which is useful in multi-datacenter scenarios to make sure that we prefer contacting nodes from the same datacenter. Finally, users can implement their own custom load balancing policy if they believe it will better suit their needs. Such a custom policy can be implemented in Rust by using a trait that we prepared for that purpose and show on this slide.

The retry policy is also very important for latency as well as availability. While not needed at all when queries behave correctly, a retry policy is used when a query fails for some reason. By default, our driver will retry such a query, but only if two conditions are met. First, the error type must suggest that retrying makes sense. For instance, retrying after a timeout might make sense, because the next time we try the node may keep up with us and return the results. If the error sounds permanent, for instance invalid query syntax, such a query will not be retried because it makes no sense. The second condition is that the driver must be able to deduce whether the query can be retried without any side effects, in other words whether it’s idempotent. A user can explicitly mark a query as idempotent to let the driver know that it’s safe to retry. And the same as with load balancing policies, users can implement their own retry policies if the available ones are not good enough for some specific workloads. That can be done by implementing the two traits that we provide and that you can see on this slide.
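Below is a hedged sketch of configuring a non-default load balancing policy and marking a statement as idempotent so the retry policy may safely retry it. The datacenter name and table are made up, and the module paths and constructors follow the crate’s API around the time of this talk (later releases moved load balancing into execution profiles), so treat it as an illustration rather than the current API:

```rust
use std::sync::Arc;

use scylla::query::Query;
use scylla::transport::load_balancing::{DcAwareRoundRobinPolicy, TokenAwarePolicy};
use scylla::SessionBuilder;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Prefer replicas from the local datacenter, then route by token within it.
    let dc_robin = DcAwareRoundRobinPolicy::new("datacenter1".to_string());
    let policy = Arc::new(TokenAwarePolicy::new(Box::new(dc_robin)));

    let session = SessionBuilder::new()
        .known_node("127.0.0.1:9042")
        .load_balancing(policy)
        .build()
        .await?;

    // Marking the statement idempotent tells the retry policy it is safe
    // to retry it (e.g. after a timeout) without unwanted side effects.
    let mut update = Query::new("UPDATE ks.t SET v = ? WHERE pk = ?");
    update.set_is_idempotent(true);
    session.query(update, ("hello", 42_i32)).await?;

    Ok(())
}
```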

We also take the performance of ScyllaDB Rust Driver very seriously. As part of the effort, we continuously run benchmarks against other drivers to make sure that we remain competitive. On the next few slides I’ll present a few results from a recent run. Our benchmarks are open sourced, just like the rest of the code, so you’re welcome to rerun the experiments yourself; you can find the links on this slide. In this particular experiment, we compared ScyllaDB Rust Driver against the ScyllaDB C++ driver, which is our shard-aware implementation of the C++ driver; the DataStax C++ driver, which was implemented with Cassandra in mind, so not shard-aware; then cassandra-cpp, a Rust driver which is a wrapper over the DataStax C++ driver; and finally gocql, a driver implemented in Go with additional shard-awareness provided by us. This chart shows how many milliseconds it took for the tested drivers to perform 1 million inserts into a ScyllaDB node which has 16 CPU cores, so 16 shards, for a few different values of concurrency. Measuring different concurrency values lets us check a broader spectrum of workloads and also see whether the driver scales linearly along with increasing concurrency, which is a desired trait. Here is a similar run, but this time the workload consists of half a million inserts and half a million reads intertwined with each other, and we can observe that the results are quite similar to those of the first workload. And finally, a workload consisting of database selects only.
A few conclusions that we are very happy to draw after these tests: first, ScyllaDB Rust Driver is generally faster than any other driver we compared it against. Second, it’s very close to the performance of the drivers coded in C++, which were previously treated de facto as the reference implementations in terms of performance; they were simply the fastest. And third, our driver scales very well when concurrency is increased, which is very nice because people usually tend to use ScyllaDB for scenarios which demand highly scalable concurrency.

Now, the sole fact that our driver is coded in Rust makes it much, much easier to hunt for potential bottlenecks and other performance issues. One of the tools from the Rust ecosystem that helped us a lot is cargo-flamegraph, a wrapper over Brendan Gregg’s flamegraph toolkit integrated into Rust’s build system, cargo. With cargo-flamegraph, it’s extremely easy to run your program under a profiler and generate a graph of calls which shows which function calls took the longest to execute. This is what the output looks like. It’s full of details and not much can be seen on a single slide, but the generated flamegraph is interactive; you can zoom in on specific parts of it to see how much time the processor spent inside certain frames. By studying flamegraphs, we were already able to identify a few bottlenecks in previous releases of our driver, including some that were caused by a lack of buffering when reading from and writing to TCP sockets. It was clear from the flamegraph output that our driver spent way too much time in syscalls related to TCP networking, send message and receive message. All of these problems are already fixed, partially thanks to flamegraphs, and the newest release already has very nice performance. Since our driver is based on Tokio, as I mentioned before, for a few weeks now we have also been able to leverage another very nice tool called tokio-console. It’s a task manager that looks like the top utility in Linux that I’m used to using, but it works directly on Rust futures. With it, you can browse which tasks are currently active, which resources like semaphores or queues are taken and which are free, and so on. All in all, it’s a very useful tool for investigating how your async code looks underneath and how it works.
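As an illustration, here is a hedged sketch of wiring tokio-console into an application: the console-subscriber crate publishes task data that the separate tokio-console CLI then displays, much like top but for async tasks. The crate and function names follow what the tokio-console project documents; building with the tokio_unstable cfg flag is required, and details may vary across versions:

```rust
use std::time::Duration;

#[tokio::main]
async fn main() {
    // Registers the tracing layer that the tokio-console CLI connects to.
    // The binary must be compiled with RUSTFLAGS="--cfg tokio_unstable".
    console_subscriber::init();

    // Spawn a few tasks so there is something to observe in the console.
    for i in 0..4u64 {
        tokio::spawn(async move {
            loop {
                tokio::time::sleep(Duration::from_millis(250 * (i + 1))).await;
            }
        });
    }

    // Keep the application alive; run `tokio-console` in another terminal.
    tokio::time::sleep(Duration::from_secs(60)).await;
}
```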

We also have a big plan of replacing other drivers with bindings to our one true driver, ScyllaDB Rust Driver. A single centralized implementation would make it easier to maintain, and Rust’s superior performance can speed up the solutions available in other languages, like Python, Ruby or other dynamic languages that tend to be slower. We actually already have a working proof of concept for replacing our C++ driver with Rust bindings; you can take a look at it under the link, and more languages are already on the roadmap, so stay tuned. Finally, I’d like to take a moment to thank our existing open-source contributors from the community. We have already accepted a number of fantastic contributions, and I’d like to hereby invite everyone interested to join our effort and contribute as well. Thanks to our community, we now have a special implementation of a session that automatically prepares all statements, we can customize connection timeouts, our serialization routines got faster, we no longer have certain issues with user-defined types, et cetera. The list is actually much longer, but I can only fit so much on a single slide. In any case, we’re happy to answer any questions, and we’re happy to help with any issues if somebody wants to become a contributor. Finally, our driver has a very nice quick-start guide and thorough documentation, and it’s also officially out on the Rust package registry, crates.io, so go ahead and give it a try. Thanks, everyone. Feel free to reach out to me if you have any questions about the presentation or anything else. Thanks.