
Amazon EBS addresses the challenge of the CAP Theorem at scale


Amazon Elastic Block Store (EBS) is a high-performance, cloud-based block storage system designed to work with Amazon Elastic Compute Cloud (EC2). EC2 instances are secure, resizable units of compute capacity running in the cloud. EBS allows customers to create high-performance storage volumes and attach them to their EC2 instances. These volumes behave much like the local hard drives in a PC.

Just as Amazon Web Services (AWS) builds new services that customers use directly, such as AWS Outposts and Graviton2 instances, we are also constantly investing in the distributed systems inside the architectures of our services. In a paper my colleagues and I are presenting this month at the USENIX Symposium on Networked Systems Design and Implementation (NSDI ’20), we describe an EBS-specialized data store we call Physalia, which helps AWS data centers recover from communications interruptions with minimal customer impact.

Physalia is one of the ways we’ve continued to innovate in EBS, which has been delivering block storage for AWS customers for more than 10 years.

Each EC2 instance runs in an Availability Zone, which is one or more data centers with redundant power, networking, and connectivity. EBS volumes in an Availability Zone are distributed across a number of storage servers and are architected to handle network partitions, or impairments in communication links between EBS servers and EC2 instances (such as a damaged optical cable running between two data centers).

EBS maintains availability during partitions through replication technology. EBS stores each piece of data on multiple servers, using a fault-tolerant replication protocol. When a network partition occurs, affected servers contact a distributed service called the configuration master.

The master stores a small amount of configuration data indicating which servers hold the data for a given volume and the order in which they replicate it, which is important for identifying up-to-date data. The replication protocol uses the configuration data to decide where application data should be stored, and it updates the configuration to point to the application data’s new location. Physalia is designed to play the role of the configuration master.
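To make this concrete, here is a minimal sketch in Python of the kind of record a configuration master might keep. The names and fields are hypothetical, invented for illustration; they are not EBS's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class VolumeConfig:
    """Hypothetical configuration record for one volume (illustrative only)."""
    volume_id: str
    version: int                                       # advanced on every change
    replicas: list[str] = field(default_factory=list)  # server IDs, primary first

def fail_over(config: VolumeConfig, replacement: str) -> VolumeConfig:
    """Sketch of an update after a partition: swap out an unreachable
    primary and advance the version so stale configurations are detectable."""
    return VolumeConfig(
        volume_id=config.volume_id,
        version=config.version + 1,
        replicas=[replacement] + config.replicas[1:],
    )
```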

Data stores have two chief reliability criteria: availability and consistency. Availability means that every query to the database should result in an answer in a reasonable amount of time. Consistency means that the results of database reads and writes should reflect the order in which they were issued. If user A writes data to a location, and user B then retrieves data from the same location, B should retrieve what A wrote.
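As a toy illustration of that read-after-write guarantee (a sketch, not EBS code): with a single authoritative copy the guarantee is trivial; the hard part, which the replication machinery above exists to preserve, is keeping it once the data lives on several servers.

```python
class Register:
    """Toy single-copy register: one authoritative value, so a read
    always observes the most recent completed write."""
    def __init__(self) -> None:
        self._value = None

    def write(self, value) -> None:
        self._value = value

    def read(self):
        return self._value

reg = Register()
reg.write("A's data")            # user A writes to a location...
assert reg.read() == "A's data"  # ...user B retrieves what A wrote
```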

Circumventing CAP

A central theorem of distributed-systems theory, the CAP theorem, holds that in the face of a network partition (the P in CAP), a system can provide either consistency or availability but not both. EBS needs to provide both, and Physalia is able to offer strong statistical assurances of consistency and availability by combining knowledge of data-center and power topology with an architecture built around the cell as a logical unit.

With Physalia, every EBS volume in an Availability Zone has its own cell, which consists of seven copies of the configuration data for that volume on seven separate servers. We call each copy of the configuration data a node, and a single physical server will typically store thousands of nodes.

A Physalia cell, consisting of seven nodes (N’s), each of which can communicate with all the others. (Illustration: Stacy Reilly)
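The article doesn't spell out the protocol the seven nodes run among themselves, so here is a minimal sketch assuming a majority-quorum consensus protocol, a common design for replicated configuration stores. Under that assumption, seven nodes give each cell concrete fault tolerance:

```python
def quorum_size(n_nodes: int) -> int:
    """Smallest group of nodes that forms a majority of the cell."""
    return n_nodes // 2 + 1

def tolerated_failures(n_nodes: int) -> int:
    """Nodes the cell can lose while a majority remains reachable."""
    return n_nodes - quorum_size(n_nodes)

# For a seven-node Physalia cell, assuming majority quorums:
assert quorum_size(7) == 4         # any 4 nodes can agree on an update
assert tolerated_failures(7) == 3  # the cell survives losing up to 3 nodes
```

This arithmetic is one reason small, local cells are attractive: keeping four nearby nodes reachable during a partition is far easier than keeping a majority of a large, zone-wide fleet reachable.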

Availability is very important to AWS customers, and it’s part of how we design and build. When we think about availability, we’re focused not only on making interruptions infrequent and short but also on limiting the impact to as small a subset of customers as possible. We call this idea “blast radius reduction”, and it is a core design tenet for Physalia. Physalia reduces blast radius by placing configuration data close to the servers that need it, when they need it.

In deciding where to place a given node, Physalia faces two competing demands. On the one hand, the nodes should be close together, to minimize the risk that they’ll be cut off from each other in the case of a network partition. On the other hand, they shouldn’t be too close together — or share a power supply — because you wouldn’t want a localized incident such as a rack failure to affect the entire cell. Our knowledge of the network topology and the power topology of the data center helps Physalia manage this trade-off in real time.
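One way to picture how such a trade-off could be managed is a greedy scoring heuristic. The following sketch is hypothetical, not Physalia's actual placement algorithm; every name in it is invented for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Server:
    server_id: str
    rack: str            # failure domain: rack
    power_feed: str      # failure domain: power supply
    hops_to_client: int  # network distance to the volume's EC2 instance

def placement_cost(candidate: Server, chosen: list[Server]) -> float:
    """Hypothetical cost: prefer nearby servers, but charge heavily for
    sharing a rack or power feed with nodes already placed in the cell."""
    cost = float(candidate.hops_to_client)  # closeness keeps the cell reachable
    for node in chosen:
        if node.rack == candidate.rack:
            cost += 100.0  # a rack failure would take out both nodes
        if node.power_feed == candidate.power_feed:
            cost += 50.0   # shared power is a correlated-failure risk
    return cost

def place_cell(candidates: list[Server], cell_size: int = 7) -> list[Server]:
    """Greedy sketch: repeatedly pick the cheapest remaining candidate."""
    chosen: list[Server] = []
    pool = list(candidates)
    while pool and len(chosen) < cell_size:
        best = min(pool, key=lambda s: placement_cost(s, chosen))
        chosen.append(best)
        pool.remove(best)
    return chosen
```

A real placer would also revisit placements as topology changes; the point here is only that network distance and shared-failure-domain penalties feed a single score that can be weighed in real time.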

By keeping cells small and local, we can ensure that they are available to the instances and volumes that need them. Maintaining consistency within each cell ensures that the configuration data is accurate and that volume data is not corrupted.

The small size of the cells means, essentially, that Physalia is not a single database but a collection of millions of tiny databases. That’s the title of our paper: “Millions of Tiny Databases.” The assurance of cell-level consistency also explains Physalia’s name. The Portuguese man o’ war, Physalia physalis, is, contrary to appearance, not a single organism but a collection of organisms functioning together symbiotically.

In addition to evaluating Physalia empirically, we also verified its correctness using formal methods, a powerful set of tools that helps AWS researchers design, test, and verify systems at all scales. In the specification language TLA+, we defined the operational parameters of the Physalia system and mathematically verified that it reduces blast radius during network partitions, even in extremely unlikely edge cases.


