The Scylla team is pleased to announce the release of Scylla Enterprise 2018.1.5, a production-ready Scylla Enterprise minor release. Scylla Enterprise 2018.1.5 is a bug fix release for the 2018.1 branch, the latest stable branch of our enterprise NoSQL database offering. In addition to bug fixes, 2018.1.5 includes major improvements in single partition scans. For more details, refer to the Efficient Query Paging blog post.
- More about Scylla Enterprise here.
Scylla Enterprise customers are encouraged to upgrade to Scylla Enterprise 2018.1.5 in coordination with the Scylla support team. Note that the downgrade procedure from 2018.1.5, if required, is slightly different from previous releases. For instructions, refer to the Downgrade section in the Upgrade guide.
- Get Scylla 2018.1.5 (customers only, or 30-day evaluation)
- Upgrade from 2018.1.x to 2018.1.5
- Upgrade from 2017.1.x to 2018.1
- Upgrade from Scylla Open Source 2.1 to Scylla 2018.1
- Submit a ticket
Issues fixed by this release, with open source references, if applicable:
- CQL: DISTINCT was ignored with IN restrictions #2837
- CQL: Dropping a keyspace with a user-defined type (UDT) resulted in an error #3068
- CQL: Selecting from a partition with no clustering restrictions (single partition scan) might have resulted in a temporary loss of writes #3608
- CQL: Fixed a rare race condition when adding a new table, which could have generated an exception #3636
- CQL: INSERT using a prepared statement with the wrong fields may have generated a segmentation fault #3688
- CQL: MIN/MAX CQL aggregates were broken for timestamp/timeuuid values. For example, SELECT MIN(date) FROM ks.hashes_by_ruid, where date is of type timestamp #3789
- CQL: A TRUNCATE request could have returned a success response even if it failed on some replicas #3796
- CQL: In rare cases, SELECT with LIMIT could have returned fewer values than it should have #3605
- Performance: eviction of large partitions may have caused latency spikes #3289
- Performance: a mistake in static row digest calculations may have led to redundant read repairs #3753, #3755
- Performance: In some cases, Scylla nodes stalled because max_task_backlog was exceeded. Preventive measures were implemented to keep this from happening. Enterprise issue #555
- Stability: In some cases following a restart, the coordinator sent a write request with the same ID as a request sent prior to the restart, which triggered an assert in the coordinator #3153
- Stability: In rare cases, eviction from invalidated partitions could have caused an infinite loop. Enterprise issue #567
- Monitoring: Added a counter for speculative retries #3030
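To illustrate the MIN/MAX aggregate fix (#3789), the following is a minimal CQL sketch based on the ks.hashes_by_ruid example above; the schema shown is an assumption for illustration, as the release notes do not include the table definition:

```cql
-- Hypothetical schema matching the example in the fix list
CREATE TABLE ks.hashes_by_ruid (
    ruid uuid PRIMARY KEY,
    date timestamp
);

-- Prior to this release, MIN/MAX over timestamp (and timeuuid)
-- columns could return incorrect results; queries like these now
-- behave as expected:
SELECT MIN(date) FROM ks.hashes_by_ruid;
SELECT MAX(date) FROM ks.hashes_by_ruid;
```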