Automated Multi-Region deployments in AWS: DynamoDB

Virtually all of our data storage lives in DynamoDB. We chose DynamoDB over the alternatives because there is essentially nothing to manage: it is Platform as a Service ("PaaS") up and down.

You define a table and its indexes, and you are off and running.

For those who don't know, DynamoDB is a "NoSQL" database. Think of it as a large hash table that provides single-digit-millisecond response times. It scales so well that Amazon itself uses it to drive the Amazon retail store.

In 2017, Amazon launched "Global Tables" for DynamoDB. Global Tables let you define a table that is replicated across one or more regions; DynamoDB automatically syncs changes to the other regions without additional work on your part.

Thus, you can easily get multi-region capability with virtually no operational overhead. In this article we'll dig into DynamoDB, focusing only on Global Tables.

Costs

First, you must understand that Global Tables have extra costs associated with them. For simplicity's sake we'll examine On Demand pricing. When you use DynamoDB, you pay to read and write data to the table: you use write request units ("WRUs") to write and read request units ("RRUs") to read.

Write request units cost $1.25 per million in Ohio; read request units cost $0.25 per million.

WRUs and RRUs are calculated from item size: one WRU covers a write of up to 1 KB, and one RRU covers a strongly consistent read of up to 4 KB, so larger items consume proportionally more units.

When using Global Tables, the writes that are replicated consume replicated write request units ("rWRUs") instead. rWRUs cost $1.875 per million in Oregon.

If you write 1 million 1 KB items to Ohio with replication to Oregon, you'll pay $1.25 for the writes plus $1.875 for the replicated writes. A 4 KB item would cost 4x that, and you also pay for data transfer between the regions at $0.02 per GB. In effect, replication adds roughly a 1.5x multiplier on top of your base write cost.

Finally, you are charged $0.25 per GB of data stored, and with cross-region replication you store (and pay for) that data in each region, so twice as much with one replica.
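To make that concrete, here is a rough back-of-the-envelope using the figures above for 1 million 1 KB writes from Ohio with a single replica in Oregon (treat it as an estimate; your exact bill depends on region, item size, and the version of Global Tables you use):

  • Writes in Ohio: 1,000,000 WRUs × $1.25 per million = $1.25

  • Replicated writes to Oregon: 1,000,000 rWRUs × $1.875 per million = $1.875

  • Cross-region data transfer: roughly 1 GB × $0.02 per GB = $0.02

  • Total: about $3.15, versus $1.25 for the same writes to a single-region table (plus storage, which is doubled)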

Is that acceptable for your use case? That is an internal decision.

Configuring Global Tables

We live in GovCloud, but these lessons apply anywhere. Global Tables, as noted above, provide multi-region redundancy in the event of a region-wide outage. They can also make your solutions faster: users on the West Coast can hit a West Coast region while users on the East Coast hit an East Coast region. So multi-region isn't just about resiliency; it can also mean quicker response times for your customers.

Configuring a Global Table is mostly the same as configuring a regular DynamoDB table, but there are a few configuration elements you need to ensure exist.

First, you must create either a multi-region KMS key or per-region KMS keys in every region you will deploy to. If you go the route of per-region keys, be warned that the key in a replica region must exist before you configure your replication settings. If you are using StackSet deployments, the KMS key in your primary region could be created and your table creation could start before the KMS key in the replication region has been created. If that happens, your table creation fails.
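If you go the multi-region key route, a minimal sketch looks roughly like the following. The logical IDs are assumptions of mine; the alias matches the one referenced later in this article. Note that a multi-region primary key still has to be replicated into each replica region (via AWS::KMS::ReplicaKey) and the alias created there too, so the ordering caveat still applies to those regional resources.

# Sketch only: a multi-region KMS key plus the alias the Global Table will
# reference. Logical IDs (KmsDynamoDbGlobalKey, KmsDynamoDbGlobalAlias) are
# assumed names, not from the original template.
KmsDynamoDbGlobalKey:
  Type: AWS::KMS::Key
  Properties:
    Description: "CMK for DynamoDB Global Table encryption"
    MultiRegion: true               # allows replica keys to be created in other regions
    EnableKeyRotation: true
    KeyPolicy:
      Version: "2012-10-17"
      Statement:
        - Sid: AllowAccountAdministration
          Effect: Allow
          Principal:
            AWS: !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:root"
          Action: "kms:*"
          Resource: "*"

KmsDynamoDbGlobalAlias:
  Type: AWS::KMS::Alias
  Properties:
    AliasName: "alias/app/dynamodb-global"
    TargetKeyId: !Ref KmsDynamoDbGlobalKey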

You only need to deploy your Global Table resource to a single region, not to each region. Also note that you can only add one replica at a time, so if you want a third region you need to deploy iteratively, which isn't easy with automated StackSets.

Now, let's set up a DynamoDB Global Table:

TableIdentityServices:
  Type: AWS::DynamoDB::GlobalTable
  DependsOn:
    - WaitForDependencyCreation
  Properties:
    Replicas:
      - Region: !Ref parGlobalTableRegion1
        SSESpecification:
          KMSMasterKeyId: "alias/app/dynamodb-global"
      - !If
        - ConditionHasRegion2
        - Region: !Ref parGlobalTableRegion2
          SSESpecification:
            KMSMasterKeyId: "alias/app/dynamodb-global"
        - !Ref 'AWS::NoValue'
    TableName: !Sub "${parAppDynamoDBPrefix}-identity-services"
    AttributeDefinitions:
      - AttributeName: 'itemRefId'
        AttributeType: 'S'
    KeySchema:
      - AttributeName: 'itemRefId'
        KeyType: 'HASH'
    SSESpecification:
      SSEEnabled: true
      SSEType: KMS
    BillingMode: PAY_PER_REQUEST
    StreamSpecification:
      StreamViewType: NEW_AND_OLD_IMAGES
    TimeToLiveSpecification:
      Enabled: true
      AttributeName: 'item_tty'

We see here two properties you won't see in your typical DynamoDB table:

  • Replicas: Defines the regions your Global Table will be replicated into

  • StreamSpecification: Defines what change data the table's stream captures, which Global Tables use to replicate writes to the other regions
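For completeness, the template snippet above assumes parameters and a condition along these lines (a sketch; your names and defaults may differ):

Parameters:
  parAppDynamoDBPrefix:
    Type: String
  parGlobalTableRegion1:
    Type: String
  parGlobalTableRegion2:
    Type: String
    Default: ""          # empty string means "no second replica"

Conditions:
  # Only add the second replica when a second region was supplied.
  ConditionHasRegion2: !Not [!Equals [!Ref parGlobalTableRegion2, ""]]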

That's basically it to get up and running.

A big "pro tip" you'll want to take into account is leveraging KMS Aliases instead of ARNs. A big help here is when you are leveraging the KMS key itself, you can add the following policy:

AppGlobalTablesConfigurationDymamoDBPolicy:
  Type: AWS::IAM::ManagedPolicy
  Properties:
    ManagedPolicyName: !Sub "app-id-dynamodb-${AWS::Region}"
    Path: "/"
    PolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Sid: kms
          Effect: Allow
          Action:
            - kms:Decrypt
            - kms:Encrypt
            - kms:GenerateDataKey
          Resource:
            - !Sub "arn:${AWS::Partition}:kms:${AWS::Region}:${AWS::AccountId}:key/*"
          Condition:
            ForAnyValue:StringLike:
              kms:ResourceAliases: "alias/app/dynamodb-global"

With the ForAnyValue:StringLike condition on kms:ResourceAliases, we tie the permission to the alias itself and don't have to manage per-region key identifiers.
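To put the policy to work, attach it to whatever role your application runs under. A minimal sketch, assuming a Lambda execution role named AppServiceRole (an assumed name, not from the original templates):

AppServiceRole:
  Type: AWS::IAM::Role
  Properties:
    AssumeRolePolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Effect: Allow
          Principal:
            Service: lambda.amazonaws.com
          Action: sts:AssumeRole
    ManagedPolicyArns:
      # !Ref on an AWS::IAM::ManagedPolicy returns its ARN
      - !Ref AppGlobalTablesConfigurationDymamoDBPolicy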

StackSet Deployment

The gotcha here, as noted above, is the creation of the KMS keys. If you are creating KMS keys in every region (instead of a multi-region KMS key), you'll need to wait for those keys to deploy before the table's replicas can be created.
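This is also why the table above declares DependsOn: WaitForDependencyCreation. One way to implement such a gate, shown here purely as a sketch (the custom resource type and checker Lambda are hypothetical, not the actual implementation), is a Lambda-backed custom resource that only signals success once the key and alias exist in the replica region:

# Hypothetical gate the table can DependsOn. Custom::KmsAliasAvailable is a
# Lambda-backed custom resource (rKmsAliasCheckFunction is an assumed name)
# that calls kms:DescribeKey on the alias in the replica region and only
# returns SUCCESS once the key resolves.
WaitForDependencyCreation:
  Type: Custom::KmsAliasAvailable
  Properties:
    ServiceToken: !GetAtt rKmsAliasCheckFunction.Arn
    ReplicaRegion: !Ref parGlobalTableRegion1
    KeyAlias: "alias/app/dynamodb-global"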

Additionally, you only deploy your Global Table in a single region; AWS handles creating the replicas and replicating to the replica regions automatically.

Next Up in the Series

The full series:

  • Part 1: Intro

  • Part 2: Gotchas

  • Part 3: DynamoDB Global Tables

  • Part 4: AWS Lambda (Pending)

  • Part 5: S3 Replication (Pending)

  • Part 6: AWS Fargate (Pending)

  • Part 7: AWS CodePipeline (Pending)
