Amazon recently unveiled a new class of machines: the AWS i3 family. Targeted at I/O-intensive applications and featuring up to 15 TB of fast NVMe storage, these machines offer unprecedented power with a great balance between I/O and CPU. Since it is also priced lower than the previous i2 family, we expect the i3 family to become the default instance class for NoSQL workloads.
This article will cover i3 instances and provide information about the status of ScyllaDB support for the hardware. Although we don’t yet officially provide i3 AMIs, customers are already running them in production with positive results. ScyllaDB’s native architecture takes advantage of the vast resources available on i3 instances.
Need help and guidance setting up your ScyllaDB system to run on i3 instances? You can join our Slack channel and ask for help!
The i3 family brings Non-Volatile Memory Express (NVMe) drives as local ephemeral storage. The performance of these drives is among the best in the public cloud, with claimed figures of up to 3.3 million IOPS at a 4 KB block size and up to 16 GB/s of sequential disk throughput.
The i3.16xlarge, the largest instance in the i3 family, claims up to 20 Gbps of network throughput and a generous number of network queues. However, kernel support is needed to take advantage of that throughput and those queues. The standard CentOS kernel that ships with ScyllaDB AMIs does not support the i3.16xlarge instance's network devices, so the kernel has to be replaced with one of the official kernels provided by AWS.
The i3 family is equipped with powerful Intel processors, allowing up to 64 vCPUs per instance, compared to the largest instance in the legacy i2 family, which allows only 32 vCPUs. ScyllaDB's shard-per-core, lockless architecture takes advantage of every vCPU, resulting in much higher overall performance. During the recent AWS Summit in San Francisco, we demonstrated a simple key-value schema yielding more than one million operations per second on a single server (1,087,000, to be precise) with i3 instances (see Figure 1), using 100% of all vCPUs.
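Because ScyllaDB shards the data set one-shard-per-core, the headline number divides cleanly across vCPUs. A quick back-of-the-envelope calculation from the figures above:

```python
# Back-of-the-envelope arithmetic from the demo numbers above;
# real per-shard throughput varies with the workload.
total_ops_per_sec = 1_087_000  # single-server result from the AWS Summit demo
vcpus = 64                     # vCPUs on an i3.16xlarge

per_shard = total_ops_per_sec // vcpus
print(per_shard)  # ~17,000 operations per second per shard
```

Roughly 17,000 operations per second per shard, sustained across all 64 shards simultaneously.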
As described above, ScyllaDB deployments on i3 instances will benefit from the abundance of I/O, CPU, and memory. These benefits translate directly into bottom-line savings. The i3 instance family is more than 50% cheaper per instance than its predecessor, the i2 family. Users can reduce their cost of operations by more than 75% by taking advantage of the lower price and the better efficiency ScyllaDB brings to database deployments.
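To see how a >50% price cut compounds into a >75% reduction in operating cost, consider the cost per operation rather than the cost per instance. The numbers below are normalized illustrations, not a pricing quote:

```python
# Illustrative cost-per-operation math (normalized, hypothetical units).
i2_price = 1.0       # normalized hourly price of an i2 instance
i3_price = 0.5       # i3 is >50% cheaper per instance (claim from the text)
i2_throughput = 1.0  # normalized ops/sec on i2 (32 vCPUs)
i3_throughput = 2.0  # i3 offers twice the vCPUs (64 vs. 32), so ~2x throughput

cost_per_op_i2 = i2_price / i2_throughput
cost_per_op_i3 = i3_price / i3_throughput
reduction = 1 - cost_per_op_i3 / cost_per_op_i2
print(f"cost-per-operation reduction: {reduction:.0%}")  # prints "cost-per-operation reduction: 75%"
```

Half the price times twice the throughput means each operation costs a quarter of what it did, a 75% reduction before any further efficiency gains.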
ScyllaDB on i3 hardware
Users running i3 in production are fully supported, but our automatic configuration does not yet cover these instances, so some manual tuning is required. ScyllaDB's upcoming 1.8 release, scheduled for this summer, will fully support i3 instances out of the box.
A kernel change is necessary for users who want to run i3 in production before the 1.8 release and are using ScyllaDB-provided AMIs. The CentOS default kernel does not properly support i3's disks and network cards, and newer AMIs will use the official AWS kernel instead. The network interrupt affinities need to be configured manually, and the ScyllaDB I/O configuration may also have to be tweaked by hand.
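To give a feel for what the interrupt-affinity tuning involves: the kernel exposes a hex bitmask per IRQ in /proc/irq/&lt;n&gt;/smp_affinity, and pinning each network queue's IRQ to its own vCPU keeps interrupts off the CPUs ScyllaDB shards are busy on. The sketch below only computes the masks (the IRQ numbers are hypothetical); it is an illustration of the mechanism, not a replacement for ScyllaDB's own setup tooling:

```python
# Minimal sketch of IRQ-affinity planning. The IRQ numbers are made up;
# on a real box they come from /proc/interrupts for the NIC's queues.

def affinity_mask(cpu: int) -> str:
    """Hex bitmask selecting a single CPU, in the format the kernel
    expects when written to /proc/irq/<n>/smp_affinity."""
    return format(1 << cpu, "x")

def plan_irq_pinning(irqs, num_cpus):
    """Distribute queue IRQs round-robin across the available vCPUs."""
    return {irq: affinity_mask(i % num_cpus) for i, irq in enumerate(irqs)}

# Example: four queues on a 4-vCPU box (hypothetical IRQ numbers 24-27)
print(plan_irq_pinning([24, 25, 26, 27], 4))
```

Applying the plan would mean writing each mask to the corresponding /proc/irq/&lt;n&gt;/smp_affinity file as root.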
Ready to start running i3 instances even sooner? Our preview AMIs automatically apply the needed changes. Get them now in these selected regions:
- us-west-1: ami-78173118
- us-east-1: ami-3f295c29
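If you script your deployments, launching one of these preview AMIs is a one-liner with boto3. The sketch below builds the request parameters from the AMI IDs listed above; the instance type and the boto3 usage are our assumptions about your setup, and the actual launch call is left commented out since it requires AWS credentials:

```python
# Preview AMI IDs from the list above.
PREVIEW_AMIS = {
    "us-west-1": "ami-78173118",
    "us-east-1": "ami-3f295c29",
}

def launch_params(region: str, instance_type: str = "i3.16xlarge") -> dict:
    """Build run_instances parameters for a preview AMI in the given region."""
    return {
        "ImageId": PREVIEW_AMIS[region],
        "InstanceType": instance_type,
        "MinCount": 1,
        "MaxCount": 1,
    }

# To actually launch (requires AWS credentials configured for boto3):
# import boto3
# ec2 = boto3.client("ec2", region_name="us-east-1")
# ec2.run_instances(**launch_params("us-east-1"))
```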
The i3 family of instances is still in its early stages, and ScyllaDB won't fully support it until the 1.8 release this summer. ScyllaDB's currently released versions require manual tuning and configuration to run properly on these machines. Once the tuning is done, however, the performance results are significant.
We are presenting a webinar, 'How to Monitor and Size Workloads on AWS i3 Instances,' on May 18th at 10:00 AM Pacific. Register now to learn how to ensure ScyllaDB fully leverages the resources of the i3 family, how to navigate the ScyllaDB monitoring system effectively, and how to identify bottlenecks. You'll also see a live demonstration with a dashboard featuring an i3 cluster with different data models and workloads.