
Mutant Monitoring System (MMS) Day 7 – Multi-datacenter Consistency Levels

IMPORTANT: Since the first publication of the Mutant Monitoring System, we have made a number of updates to the concepts and code presented in this blog series. You can find the latest version in ScyllaDB University.

 

This is part 7 of a series of blog posts that provides a story arc for ScyllaDB Training.

In the previous post, we learned what a multi-datacenter configuration is and expanded the initial Mutant Monitoring ScyllaDB cluster from one datacenter to two. With multiple datacenters, Division 3 can protect its data from the failure of an entire physical site. Now we need to learn how to work with our data in such a setup, starting with the consistency level options available when using ScyllaDB with multiple datacenters.

The Consistency Level (CL) determines how many replicas in a cluster must acknowledge a read or write operation before it is considered successful. ONE is the default consistency level in cqlsh, meaning only one replica node needs to be available to honor a read or write request. Depending on the use case, you may want a stronger consistency level to ensure data integrity, such as QUORUM in a single datacenter. With QUORUM, a majority of the replica nodes must respond; for example, with a replication factor of 3, two of the three replicas must acknowledge the operation. In a multi-datacenter setup of ScyllaDB, additional consistency levels are available.
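These levels can be set per session in cqlsh. For example, the first command below displays the session's current consistency level and the second switches it to QUORUM:

CONSISTENCY;
CONSISTENCY QUORUM;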

The common consistency levels to use when working with multiple datacenters are LOCAL_QUORUM and EACH_QUORUM. A consistency level can be specified in the cqlsh utility or through the driver for the programming language used to interact with the ScyllaDB cluster. With LOCAL_QUORUM, a quorum of replicas in the local datacenter must respond to read or write requests.

In many cases, the application is located in the same location (a region, in AWS terms) as one of the ScyllaDB datacenters (DCs). In such a case, LOCAL_QUORUM provides low latency while still maintaining a stronger consistency level than ONE.

With EACH_QUORUM, a quorum of replicas in ALL of the datacenters must be available to respond to write requests. Let’s explore this further by spinning up our Mutant Monitoring System.

Starting the ScyllaDB Cluster

The ScyllaDB cluster should be up and running with the data imported from the previous blog posts. The MMS Git repository has been updated to support importing the keyspaces and data automatically. If you already have the repository cloned, you can simply run “git pull” in the scylla-code-samples directory; otherwise, clone it and change into the mms directory:

git clone https://github.com/scylladb/scylla-code-samples.git
cd scylla-code-samples/mms

Modify docker-compose.yml and add the following line under the environment section of scylla-node1:

- IMPORT=IMPORT
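After the change, the relevant part of docker-compose.yml should look roughly like this (a sketch; the surrounding entries for scylla-node1 in your copy of the repository may differ):

scylla-node1:
  environment:
    - IMPORT=IMPORT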

Now the container can be built and run:

docker-compose build
docker-compose up -d

After roughly 60 seconds, the existing MMS data will be imported automatically.

Bringing Up the Second Datacenter

The second datacenter will be referred to as DC2 throughout this post. To bring up the second datacenter, simply run the docker-compose utility and reference the docker-compose-dc2.yml file:

docker-compose -f docker-compose-dc2.yml up -d

After about 60 seconds, you should be able to see DC1 and DC2 when running the “nodetool status” command.
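For example, against the first node (using the same docker exec pattern as above):

docker exec -it mms_scylla-node1_1 nodetool status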

As in the previous post, we will need to change the replication factor of the keyspace we will be working with (tracking):

docker exec -it mms_scylla-node1_1 cqlsh
ALTER KEYSPACE "tracking" WITH REPLICATION = {'class': 'NetworkTopologyStrategy', 'DC1':3, 'DC2':3};

For data consistency across both datacenters, we will need to run the “nodetool rebuild” command on each node in the second datacenter as shown below.

docker exec -it mms_scylla-node4_1 nodetool rebuild -- DC1
docker exec -it mms_scylla-node5_1 nodetool rebuild -- DC1
docker exec -it mms_scylla-node6_1 nodetool rebuild -- DC1

Now that the cluster is up and running with the keyspaces properly configured, let’s begin exploring consistency levels with cqlsh.

Exploring Consistency Levels

Let’s begin by diving into EACH_QUORUM. This consistency level requires a quorum of replicas in ALL of the datacenters to be available to respond to write requests. To get started, we will use cqlsh by connecting to the running container with the following command:

docker exec -it mms_scylla-node1_1 cqlsh

Now that we are in the container, we can run CQL commands. To get started, let’s use the tracking keyspace:
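USE tracking;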

To change the consistency level to EACH_QUORUM, run the following command:

CONSISTENCY EACH_QUORUM;

In another window, let’s take down the second datacenter and then see what happens when trying to access the keyspace:

docker-compose -f docker-compose-dc2.yml pause

We can now return to the cqlsh window and query the keyspace:
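SELECT * FROM tracking.tracking_data;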

The query fails because EACH_QUORUM is supported only for writes, not reads. Let’s see if data can be added to the keyspace:

INSERT INTO tracking.tracking_data ("first_name","last_name","timestamp","location","speed","heat","telepathy_powers") VALUES ('Alex','Jones','2018-05-11 08:05+0000','Dallas',1.0,300.0,17);

The insert fails as well: with DC2 paused, a quorum of replicas in that datacenter cannot respond, so EACH_QUORUM cannot be met.

Let’s see what happens when LOCAL_QUORUM is set, where only a quorum of replicas in the local datacenter must respond to read or write requests. In the same cqlsh session (with DC2 still paused), switch the consistency level and retry the insert:
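CONSISTENCY LOCAL_QUORUM;
INSERT INTO tracking.tracking_data ("first_name","last_name","timestamp","location","speed","heat","telepathy_powers") VALUES ('Alex','Jones','2018-05-11 08:05+0000','Dallas',1.0,300.0,17);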

The write operation completes successfully even though the second datacenter is down. When the second datacenter is brought back up, “nodetool repair” should be run to ensure data consistency across both sites. Here is the procedure (a minimal sketch; to cover every token range, repeat the repair on each node in the cluster):
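docker-compose -f docker-compose-dc2.yml unpause
docker exec -it mms_scylla-node1_1 nodetool repair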

Conclusion

In this post, we explored how LOCAL_QUORUM and EACH_QUORUM work, with examples run against the Mutant Monitoring System’s ScyllaDB cluster. EACH_QUORUM provides the strongest consistency across datacenters, at the cost of performance and availability. For low-latency operations with eventual cross-datacenter consistency, LOCAL_QUORUM is recommended. In our next post, we will explore how to use ScyllaDB Monitoring to monitor the Mutant Monitoring System. Stay safe out there.

Next Steps

  • Learn more about ScyllaDB from our product page.
  • See what our users are saying about ScyllaDB.
  • Download ScyllaDB. Check out our download page to run ScyllaDB on AWS, install it locally in a Virtual Machine, or run it in Docker.