I'm creating a Serverless Framework project.
The DynamoDB table is created by another CloudFormation stack.
How can I refer to an existing DynamoDB table's StreamArn in serverless.yml?
I have the configuration below:
resources:
  Resources:
    MyDbTable: # 'arn:aws:dynamodb:us-east-2:xxxx:table/MyTable'

provider:
  name: aws
  ...

functions:
  onDBUpdate:
    handler: handler.onDBUpdate
    events:
      - stream:
          type: dynamodb
          arn:
            Fn::GetAtt:
              - MyDbTable
              - StreamArn
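For context: Fn::GetAtt only resolves resources declared in the same stack, which is why the snippet above cannot reach a table owned by another stack. The Framework's stream event also accepts a literal stream ARN, so a minimal sketch of that alternative (the ARN below is a placeholder you would copy from the owning stack) looks like:

functions:
  onDBUpdate:
    handler: handler.onDBUpdate
    events:
      - stream:
          type: dynamodb
          # Placeholder stream ARN, copied from the table's owning stack
          arn: arn:aws:dynamodb:us-east-2:xxxx:table/MyTable/stream/2019-01-01T00:00:00.000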
EDIT:
- If your tables were created in another Serverless service, you can skip steps 1, 4 and 8.
- If your tables were created in a standard CloudFormation stack, edit that stack to add the outputs from step 2 (a sketch follows this list) and skip steps 1, 4 and 8.
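For the standard CloudFormation case, the additions might look like the following minimal sketch (FoosTable is a hypothetical logical resource name, mirroring the Outputs block used further down):

Outputs:
  FoosTableName:
    Value: !Ref FoosTable
  FoosTableArn:
    Value: !GetAtt FoosTable.Arn
  FoosTableStreamArn:
    Value: !GetAtt FoosTable.StreamArn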
Stuck with the same issue, I came up with the following workaround:
Create a new serverless service with only tables in it (you want to make a copy of your existing tables' set-up):
service: MyResourcesStack

resources:
  Resources:
    FoosTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: ${opt:stage}-${self:service}-foos
        AttributeDefinitions:
          - AttributeName: id
            AttributeType: S
        KeySchema:
          - AttributeName: id
            KeyType: HASH
        ProvisionedThroughput:
          ReadCapacityUnits: 1
          WriteCapacityUnits: 1
        StreamSpecification:
          StreamViewType: NEW_AND_OLD_IMAGES # This enables the table's stream
(Optional) You can use serverless-dynamodb-autoscaling to configure autoscaling from the serverless.yml:
plugins:
  - serverless-dynamodb-autoscaling

custom:
  capacities:
    - table: FoosTable # DynamoDB Resource
      read:
        minimum: 5   # Minimum read capacity
        maximum: 50  # Maximum read capacity
        usage: 0.75  # Targeted usage percentage
      write:
        minimum: 5   # Minimum write capacity
        maximum: 50  # Maximum write capacity
        usage: 0.75  # Targeted usage percentage
Set up the stack to output the table's name, Arn and StreamArn (in serverless.yml, this Outputs block goes inside the resources section, alongside Resources):
Outputs:
  FoosTableName:
    Value:
      Ref: FoosTable
  FoosTableArn:
    Value: { "Fn::GetAtt": ["FoosTable", "Arn"] }
  FoosTableStreamArn:
    Value: { "Fn::GetAtt": ["FoosTable", "StreamArn"] }
Deploy the stack.
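For instance, assuming a stage named dev (the stage name is an assumption), the deploy command might look like:

serverless deploy --stage dev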
Copy the data from your old tables to the newly created ones.
To do so, I used this script, which works well if the old and new tables are in the same region and the tables are not huge. For larger tables, you may want to use AWS Data Pipeline.
Replace the hardcoded references to your tables in your initial service with the previously output variables (the ${cf:stackName.outputKey} syntax resolves another stack's CloudFormation outputs at deploy time):
provider:
  environment:
    stage: ${opt:stage}
    region: ${self:provider.region}
    dynamoDBTablesStack: "MyResourcesStack-${opt:stage}" # Your resources stack's name and the current stage
    foosTable: "${cf:${self:provider.environment.dynamoDBTablesStack}.FoosTableName}"
    foosTableArn: "${cf:${self:provider.environment.dynamoDBTablesStack}.FoosTableArn}"
    foosTableStreamArn: "${cf:${self:provider.environment.dynamoDBTablesStack}.FoosTableStreamArn}"

functions:
  myFunction:
    handler: myFunction.handler
    events:
      - stream:
          batchSize: 100
          type: dynamodb
          arn: ${self:provider.environment.foosTableStreamArn}
Deploy those changes.