A Deep Dive into DynamoDB Partitions

Databases are the backbone of most modern web applications, and their performance plays a major role in user experience. Faster response times – even by a fraction of a second – can be the deciding factor for users choosing one option over another. It is therefore important to take response times into consideration whilst designing your databases, in order to provide the best possible performance. In this article, I’m going to discuss how to optimise DynamoDB performance by understanding partitions.

Introducing Partitions

DynamoDB performance starts and ends with the concept of partitions. Partitions are DynamoDB’s underlying units of storage and performance. If you don’t understand partitions, you won’t be able to design highly effective and available databases with DynamoDB, so it’s worth understanding what’s going on under the hood.

When you first create a table in DynamoDB, a single partition is created and allocated to the table. Any operations on this table – such as inserts, deletes and updates – will be handled by the node where this partition is stored. It is important to remember that you do not have full control over the number of partitions created, but you can influence it.

One partition can handle 10GB of data, 3000 read capacity units (RCU) and 1000 write capacity units (WCU), so there is a direct relationship between the amount of data stored in a table, its performance requirements and the number of partitions. A new partition will be added when more than 10GB of data is stored in a table, when RCUs exceed 3000, or when WCUs exceed 1000. The data is then spread across these partitions.

So how does DynamoDB spread data across multiple partitions? The partition that a particular row is placed in is selected based on its partition key: for each unique partition key value, the item is assigned to a specific partition.
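DynamoDB’s actual hash function and partition map aren’t exposed, but as a rough mental model you can picture something like the following Python sketch (purely illustrative; the hash function, key names and partition count here are not DynamoDB’s):

import hashlib

def assign_partition(partition_key: str, num_partitions: int) -> int:
    # Hash the partition key and map it to one of num_partitions buckets.
    # Every item with the same partition key value lands in the same bucket.
    digest = hashlib.md5(partition_key.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_partitions

for exam_id in ["EXAM-101", "EXAM-102", "EXAM-103"]:
    print(exam_id, "->", assign_partition(exam_id, 3))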

Let’s use an example to demonstrate. The table below shows a list of examinations and the students who have taken them.

[Table: examinations and the students who have taken them]

In this example, there is a many-to-one relationship between an exam and a student (for the sake of simplicity, we’ll assume that students do not resit exams). If this table was just for all the students at a particular school, the dataset would be fairly small. However, if it was all the students in a state or country, there could be millions and millions of rows. This might put us within range of the data storage and performance limits that would lead to a new partition being required.

Below is a visual representation of how the above data might be distributed if, based on the required RCU and WCU or the size of the dataset, DynamoDB were to decide to scale it out across 3 partitions:

[Diagram: exam data spread across three partitions]

As we can see above, each exam ID is assigned to exactly one partition. A single partition may host multiple partition key values, depending on the size of the dataset, but the important thing to remember is that a given partition key value only ever maps to a single partition. One exam can be taken by many students, so the student ID makes a natural sort key for querying this data, as it allows exam results to be sorted and retrieved by student ID.
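To make this access pattern concrete, here is a minimal boto3 sketch that queries one exam’s results ordered by student ID. The table name exam_results and its attribute names are hypothetical; it assumes a table with exam_id as the partition key and student_id as the sort key:

import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("exam_results")  # hypothetical table name

# Fetch every result for one exam; items come back ordered by the
# sort key (student_id) within that partition key value.
response = table.query(
    KeyConditionExpression=Key("exam_id").eq("EXAM-101"),
    ScanIndexForward=True,  # ascending sort-key order
)
for item in response["Items"]:
    print(item["student_id"], item.get("result"))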

By adding more partitions, or by moving data between partitions, indefinite scaling is possible, based on the size or the performance requirements of the dataset. However, it is also important to remember that there are serious limitations that must be considered.

Firstly, the number of partitions is managed by DynamoDB: partitions are added to accommodate a growing dataset or increasing performance requirements. However, whilst partitions are added automatically, there is no automatic decrease in the partition count when capacity or performance requirements are reduced.

This leads us to the next important point: the allocated RCU (read capacity unit) and WCU (write capacity unit) values are spread across the partitions. Consider, for example, that you need 30000 RCUs allocated to a table. The maximum RCU a single partition can support is 3000, so to accommodate the request DynamoDB will automatically create 10 partitions.

If you are increasing your RCU and WCU via the console, AWS will provide you with an estimated cost per month, as shown below:

[Screenshot: estimated monthly cost shown when increasing RCU and WCU in the console]

Using the exam-student example, the dataset for each exam is assigned to one partition, which, as you will recall, can hold at most 10GB of data, 3000 RCUs and 1000 WCUs. Yet each exam can have millions of students, so the size of this dataset may go well beyond the 10GB capacity limit (which must be kept in mind when selecting partition keys for a specific dataset).

Increasing the RCU or WCU values for a table beyond 3000 RCUs and 1000 WCUs prompts DynamoDB to create additional partitions, with no way to reduce the number of partitions even if the required RCUs and WCUs later drop. This can lead to a situation where each partition ends up with only a tiny share of the provisioned RCUs and WCUs.

[Diagram: provisioned RCU and WCU spread thinly across many partitions]
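To put some hypothetical numbers on this: suppose a table is briefly scaled up to 30000 RCU, which forces roughly 10 partitions, and is then scaled back down to 3000 RCU. The partition count stays at 10, so each partition now serves only a small slice of the table’s throughput. A quick back-of-the-envelope calculation in Python:

partitions_at_peak = 30000 // 3000   # 10 partitions created to support the peak
reduced_rcu = 3000                   # provisioning later scaled back down
per_partition_rcu = reduced_rcu / partitions_at_peak
print(per_partition_rcu)             # 300.0 RCU available per partition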

Because it is possible to run into throttling and performance issues – even though the overall assigned RCUs and WCUs are appropriate for the expected load – a formula can be used to calculate the desired number of partitions, whilst taking performance into consideration.

Based on our required read performance,

Partitions for desired read performance = 
  Desired RCU / 3000 RCU

and based on our required write performance,

Partitions for desired write performance = 
  Desired WCU / 1000 WCU

Giving us the number of partitions needed for the required performance,

Total partitions for desired performance = 
  (Desired RCU / 3000 RCU) + (Desired WCU / 1000 WCU)

But that’s only the performance aspect. We also have to look at the storage aspect. Assuming the maximum capacity supported by a single partition is 10GB,

Total partitions for desired capacity = Desired capacity in GB / 10GB

The following formula can then be used to calculate the total number of partitions needed to accommodate both the performance and the capacity requirements:

Total partitions = 
  MAX(Total partitions for desired performance, 
      Total partitions for desired capacity)

As an example, consider the following requirements:

  • RCU Capacity: 7500
  • WCU Capacity: 4000
  • Storage Capacity: 100GB

The required number of partitions for performance can be calculated as:

(7500/3000) + (4000/1000) = 2.5 + 4 = 6.5

We’ll round this up to the nearest whole number: 7.

The required number of partitions for capacity is:

100/10 = 10

So the total number of partitions required is:

MAX(7, 10) = 10
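The formulas above are easy to wrap in a small helper. Here’s a sketch in Python (assuming the per-partition limits quoted in this article: 3000 RCU, 1000 WCU and 10GB) that reproduces the worked example:

import math

def total_partitions(rcu: int, wcu: int, storage_gb: float) -> int:
    # Rules of thumb from above: 3000 RCU, 1000 WCU and 10GB per partition.
    for_performance = math.ceil(rcu / 3000 + wcu / 1000)
    for_capacity = math.ceil(storage_gb / 10)
    return max(for_performance, for_capacity)

print(total_partitions(7500, 4000, 100))  # MAX(7, 10) = 10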

A critical factor is that the total RCU and WCU is split equally across the total number of partitions. Therefore, you will only achieve the full allocated RCU and WCU for a table if you are reading and writing in parallel across all partitions. This can only be achieved with a good partition key model, meaning a key whose values are evenly distributed across the key space.

Picking a good partition key

There is no universal answer when it comes to choosing a good key – it all depends on the nature of the dataset. For a low-volume table, the key selection doesn’t matter as much (3000 RCU and 1000 WCU are achievable with a single partition even with a badly-designed key structure). However, as the dataset grows, the key selection becomes increasingly important.

The partition key must be specified at table creation time. If you’re using the console, you’ll see something similar to this:

[Screenshot: creating a table in the DynamoDB console]

Or if you’re using the CLI, you’d run something like:

aws dynamodb create-table \
  --table-name us_election_2016 \
  --attribute-definitions \
    AttributeName=candidate_id,AttributeType=S \
    AttributeName=voter_id,AttributeType=S \
  --key-schema \
    AttributeName=candidate_id,KeyType=HASH \
    AttributeName=voter_id,KeyType=RANGE \
  --provisioned-throughput ReadCapacityUnits=1,WriteCapacityUnits=1

The first criterion in choosing a good partition key is to select an attribute that has as many distinct values as possible. For example, an employee ID would be a good choice when there are many employees; a department ID, where there are only a handful of departments, would not.

The next criterion is to pick an attribute whose values are accessed uniformly. For example, in a voting record system, a candidate ID would be ideal if you expect each candidate to receive a similar number of votes. If one or two candidates are expected to receive 90% of the votes, it becomes far less optimal.

Another criterion for a good partition key is that reads and writes against the attribute should be spread evenly across time rather than arriving in bursts. If these properties are hard to achieve with an existing attribute, it’s worth looking at a synthetic or hybrid value.

Let’s look at an example that uses the 2016 US Elections to highlight everything we’ve just discussed. Specifically, we want to store a record of all of the votes for all of the candidates.

Each political party will have many candidates competing for the party’s electoral nomination. You may have anywhere from two to ten candidates. The problem is that votes between candidates will not be distributed uniformly – there will be one or two candidates that will receive the majority of the votes.

For the sake of this example, let’s assume that we expect 10000 WCU worth of votes to be received. Say that, in the first instance, we create a table and naively select the candidate ID as the partition key and date/time as the range key.

DynamoDB will create 10 partitions for this example (based on our previous formula, 10 partitions are needed to support 10000 WCU). If we also assume we have 10 candidates, DynamoDB will spread these partition keys across the 10 partitions as shown here:

[Diagram: each of the 10 candidate IDs mapped to its own partition]

This model is deeply flawed. Firstly, we are limiting the write performance available to each candidate to a value much lower than 10000 WCU (at most the 1000 WCU a single partition can support). As we discussed above, real-world voting will be heavily weighted towards one or two popular candidates, so the capacity allocated to the least popular candidates is simply wasted WCU.

Even if we assume voting is uniformly weighted between candidates, their voters may be located in different time zones and may vote at different times. Therefore, there might be spikes of votes for certain candidates at specific times compared to others. Even with carefully designed partition keys, you can run into time-based issues like this.

Let’s think about a case where there are only two candidates in the national election. To support the required performance, 100000 WCU is assigned, and DynamoDB will create 100 partitions to support it. However, if the candidate ID is chosen as the partition key, each candidate’s data will be confined to a single partition – even though there are 98 unused partitions. Consequently, the storage limit of those two partitions will be hit quickly, causing the application to fail and stop recording further votes.

This issue is resolved by introducing a key-sharding scheme. For each candidate – i.e. for each natural partition key – the key is prefixed with a value from 1 to 10, or 1 to 1000, depending on the size of your dataset. This gives us a much wider range of partition key values, which means DynamoDB will distribute the data across multiple partitions evenly. It’ll look a bit like this:

[Diagram: sharded candidate keys spread evenly across partitions]
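As a concrete sketch of this key-sharding approach, the snippet below writes votes to the us_election_2016 table from the earlier CLI example, prefixing each candidate ID with a random shard number. The shard count, the voted_at attribute and the helper itself are illustrative assumptions rather than a prescribed implementation:

import random
from datetime import datetime, timezone

import boto3

NUM_SHARDS = 10  # illustrative; size this to roughly match your partition count

dynamodb = boto3.resource("dynamodb")
votes = dynamodb.Table("us_election_2016")

def record_vote(candidate_id: str, voter_id: str) -> None:
    # A random shard prefix spreads one candidate's writes across many
    # partition key values instead of a single hot one.
    shard = random.randint(1, NUM_SHARDS)
    votes.put_item(
        Item={
            "candidate_id": f"{shard}_{candidate_id}",
            "voter_id": voter_id,
            "voted_at": datetime.now(timezone.utc).isoformat(),
        }
    )

record_vote("CANDIDATE_7", "VOTER_12345")

The trade-off is on the read side: tallying a candidate’s votes now means querying each of the shard prefixes and merging the results.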

Now, we can look at the histogram before key-sharding:

[Histogram: partition activity before key-sharding]

The corresponding partition keys will look something like this (please note that, for this example, I’ve only inserted data for 2 candidates):

[Screenshot: partition key values before sharding]

Now here’s the histogram after key-sharding:

[Histogram: partition activity after key-sharding]

We can see how, with a key-sharding scheme, the load is much more evenly distributed across partitions and throttling is minimal. The corresponding partition keys will look like this:

[Screenshot: partition key values after sharding]

Conclusion

There are many other factors that need to be considered when designing data models on DynamoDB, such as Local Secondary Indexes and Global Secondary Indexes. For further information on these indexes, check out the AWS documentation to understand how they may impact database performance.

Data modelling is a crucial part of choosing a database structure, and it’s essential for an optimally-performing application. Even though DynamoDB is a fully managed and highly scalable database solution, it is still up to you to design a solid data model. No matter how powerful DynamoDB is, a poorly designed model will cause your application to perform poorly.

From: A Deep Dive into DynamoDB Partitions – Shine Solutions Group
