nicklee

Reputation: 95

AWS Lambda serverless framework deployment error

I want to deploy my Lambda functions to AWS Lambda using the Serverless Framework with this command:

serverless deploy --stage dev --region eu-central-1

Here's my serverless.yml file:

service: sensor-processor-v3


plugins:
  - serverless-webpack
  # - serverless-websockets-plugin

custom:
  secrets: ${file(secrets.yml):${self:provider.stage}}
  accessLogOnStage:
    dev: true
    prod: true
  nodeEnv:
    dev: development
    prod: production
  mqArn:
    dev:
    prod:

provider:
  name: aws
  runtime: nodejs18.x
  stage: ${opt:stage, 'dev'}
  region: eu-central-1
  logs:
    accessLogging: ${self:custom.accessLogOnStage.${self:provider.stage}}
    executionLogging: ${self:custom.accessLogOnStage.${self:provider.stage}}

  logRetentionInDays: 14
  memorySize: 128
  timeout: 30
  endpointType: REGIONAL
  environment:
    STAGE: ${self:provider.stage}
    NODE_ENV: ${self:custom.nodeEnv.${self:provider.stage}}
    REDIS_HOST_RW: !GetAtt RedisCluster.PrimaryEndPoint.Address
    REDIS_HOST_RO: !GetAtt RedisCluster.ReaderEndPoint.Address
    REDIS_PORT: !GetAtt RedisCluster.PrimaryEndPoint.Port
    SNIPEIT_INSTANCE_URL: ${self:custom.secrets.SNIPEIT_INSTANCE_URL}
    SNIPEIT_API_TOKEN: ${self:custom.secrets.SNIPEIT_API_TOKEN}
  apiGateway:
    apiKeySelectionExpression:
    apiKeySourceType: AUTHORIZER
    apiKeys:
      - ${self:service}-${self:provider.stage}
  iam:
    role:
      statements:
        - Effect: Allow
          Action:
            - ec2:CreateNetworkInterface
            - ec2:DescribeNetworkInterfaces
            - ec2:DeleteNetworkInterface
          Resource: "*"
        - Effect: Allow
          Action:
            - "dynamodb:PutItem"
            - "dynamodb:Query"
          Resource: { Fn::GetAtt: [ theThingsNetwork, Arn ] }
        - Effect: Allow
          Action:
            - "dynamodb:PutItem"
            - "dynamodb:Query"
          Resource: { Fn::GetAtt: [ loriotTable, Arn ] }
        - Effect: Allow
          Action:
            - firehose:DeleteDeliveryStream
            - firehose:PutRecord
            - firehose:PutRecordBatch
            - firehose:UpdateDestination
          Resource: '*'
        - Effect: Allow
          Action: lambda:InvokeFunction
          Resource: '*'
        - Effect: Allow
          Action:
            - s3:GetObject
            - s3:ListBucket
            - s3:PutObject
          Resource:
            - arn:aws:s3:::sensor-processor-v3-prod
            - arn:aws:s3:::sensor-processor-v3-prod/*
            - arn:aws:s3:::sensor-processor-v3-dev
            - arn:aws:s3:::sensor-processor-v3-dev/*
            - arn:aws:s3:::datawarehouse-redshift-dev
            - arn:aws:s3:::datawarehouse-redshift-dev/*
            - arn:aws:s3:::datawarehouse-redshift
            - arn:aws:s3:::datawarehouse-redshift/*

package:
  patterns:
    - '!README.md'
    - '!tools/rename-script.js'
    - '!secrets*'

functions:
  authorizer:
    handler: src/authorizer.handler
    memorySize: 128
    environment:
      STAGE: ${self:provider.stage}
      API_KEY_ALLOW: ${self:custom.secrets.API_KEY_ALLOW}
      USAGE_API_KEY: ${self:custom.secrets.USAGE_API_KEY}
  ibasxSend:
    handler: src/ibasxSend.ibasxSend
    memorySize: 256
    environment:
      IBASX_CLIENT_TOKEN: ${self:custom.secrets.IBASX_CLIENT_TOKEN}
      IBASX_CLIENT_URL: ${self:custom.secrets.IBASX_CLIENT_URL}
      NODE_TLS_REJECT_UNAUTHORIZED: 0
  processIbasxPayload:
    handler: src/processIbasxPayload.processor
    memorySize: 384
    timeout: 20
    environment:
      STAGE: ${self:provider.stage}
      LORIOT_DB: { Ref: loriotTable }
      OCCUPANCY_STREAM_NAME: { Ref: firehose }
      ATMOSPHERIC_STREAM_NAME: { Ref: AtmosphericFirehose }
      PEOPLE_STREAM_NAME: { Ref: PeopleFirehose }
      IBASX_CLIENT_TOKEN: ${self:custom.secrets.IBASX_CLIENT_TOKEN}
      IBASX_CLIENT_URL: ${self:custom.secrets.IBASX_CLIENT_URL}
      IBASX_DATA_SYNC_DB_NAME: ${self:custom.secrets.IBASX_DATA_SYNC_DB_NAME}
      IBASX_DATA_SYNC_DB_USER: ${self:custom.secrets.IBASX_DATA_SYNC_DB_USER}
      IBASX_DATA_SYNC_DB_PASSWORD: ${self:custom.secrets.IBASX_DATA_SYNC_DB_PASSWORD}
      IBASX_DATA_SYNC_DB_HOST: ${self:custom.secrets.IBASX_DATA_SYNC_DB_HOST}
      IBASX_DATA_SYNC_DB_PORT: ${self:custom.secrets.IBASX_DATA_SYNC_DB_PORT}
      REDSHIFT_DB_NAME: ${self:custom.secrets.REDSHIFT_DB_NAME}
      REDSHIFT_DB_USER: ${self:custom.secrets.REDSHIFT_DB_USER}
      REDSHIFT_DB_PASSWORD: ${self:custom.secrets.REDSHIFT_DB_PASSWORD}
      REDSHIFT_DB_HOST: ${self:custom.secrets.REDSHIFT_DB_HOST}
      REDSHIFT_DB_PORT: ${self:custom.secrets.REDSHIFT_DB_PORT}
      CLIMATE_DB_NAME: "${self:custom.secrets.CLIMATE_DB_NAME}"
      CLIMATE_DB_HOST: "${self:custom.secrets.CLIMATE_DB_HOST}"
      CLIMATE_DB_USER: "${self:custom.secrets.CLIMATE_DB_USER}"
      CLIMATE_DB_PASSWORD: "${self:custom.secrets.CLIMATE_DB_PASSWORD}"
      CLIMATE_DB_PORT: "${self:custom.secrets.CLIMATE_DB_PORT}"
      FEATURES: snipeId
    vpc:
      securityGroupIds:
        - sg-0d7ec27d8c3e59a5f
      subnetIds:
        - subnet-093295e049fd0b192
        - subnet-0b4dd59bec892f1b5
        - subnet-0ba4e03f8d83d5cd4
  loriotConnector:
    handler: src/loriotConnector.connector
    memorySize: 384
    timeout: 20
    environment:
      STAGE: ${self:provider.stage}
      LORIOT_DB: { Ref: loriotTable }
      OCCUPANCY_STREAM_NAME: { Ref: firehose }
      ATMOSPHERIC_STREAM_NAME: { Ref: AtmosphericFirehose }
      PEOPLE_STREAM_NAME: { Ref: PeopleFirehose }
      IBASX_CLIENT_TOKEN: ${self:custom.secrets.IBASX_CLIENT_TOKEN}
      IBASX_CLIENT_URL: ${self:custom.secrets.IBASX_CLIENT_URL}
      REDSHIFT_DB_NAME: ${self:custom.secrets.REDSHIFT_DB_NAME}
      REDSHIFT_DB_USER: ${self:custom.secrets.REDSHIFT_DB_USER}
      REDSHIFT_DB_PASSWORD: ${self:custom.secrets.REDSHIFT_DB_PASSWORD}
      REDSHIFT_DB_HOST: ${self:custom.secrets.REDSHIFT_DB_HOST}
      REDSHIFT_DB_PORT: ${self:custom.secrets.REDSHIFT_DB_PORT}
      CLIMATE_DB_NAME: "${self:custom.secrets.CLIMATE_DB_NAME}"
      CLIMATE_DB_HOST: "${self:custom.secrets.CLIMATE_DB_HOST}"
      CLIMATE_DB_USER: "${self:custom.secrets.CLIMATE_DB_USER}"
      CLIMATE_DB_PASSWORD: "${self:custom.secrets.CLIMATE_DB_PASSWORD}"
      CLIMATE_DB_PORT: "${self:custom.secrets.CLIMATE_DB_PORT}"
    vpc:
      securityGroupIds:
        - sg-0d7ec27d8c3e59a5f
      subnetIds:
        - subnet-093295e049fd0b192
        - subnet-0b4dd59bec892f1b5
        - subnet-0ba4e03f8d83d5cd4
    events:
      - http:
          path: loriot/uplink
          method: post
          # private: true
          authorizer:
            type: TOKEN
            name: authorizer
            identitySource: method.request.header.Authorization
  ibasxDiagnostics:
    handler: src/ibasxDiagnostics.diagnostics
    memorySize: 256
    timeout: 60
    vpc:
      securityGroupIds:
        - sg-0d7ec27d8c3e59a5f
      subnetIds:
        - subnet-093295e049fd0b192
        - subnet-0b4dd59bec892f1b5
        - subnet-0ba4e03f8d83d5cd4
  importDataFromS3:
    handler: src/importDataFromS3.importFn
    memorySize: 512
    timeout: 300
    environment:
      REDSHIFT_DB_NAME: ${self:custom.secrets.REDSHIFT_DB_NAME}
      REDSHIFT_DB_USER: ${self:custom.secrets.REDSHIFT_DB_USER}
      REDSHIFT_DB_PASSWORD: ${self:custom.secrets.REDSHIFT_DB_PASSWORD}
      REDSHIFT_DB_HOST: ${self:custom.secrets.REDSHIFT_DB_HOST}
      REDSHIFT_DB_PORT: ${self:custom.secrets.REDSHIFT_DB_PORT}
  qrcodeSync:
    handler: src/qrcodeSync.sync
    memorySize: 256
    timeout: 30
    vpc:
      securityGroupIds:
        - sg-0d7ec27d8c3e59a5f
      subnetIds:
        - subnet-093295e049fd0b192
        - subnet-0b4dd59bec892f1b5
        - subnet-0ba4e03f8d83d5cd4
    environment:
      REDSHIFT_CLUSTER_TYPE: ${self:custom.secrets.REDSHIFT_CLUSTER_TYPE}
      REDSHIFT_DB_NAME: ${self:custom.secrets.REDSHIFT_DB_NAME}
      REDSHIFT_DB_USER: ${self:custom.secrets.REDSHIFT_DB_USER}
      REDSHIFT_DB_PASSWORD: ${self:custom.secrets.REDSHIFT_DB_PASSWORD}
      REDSHIFT_DB_HOST: ${self:custom.secrets.REDSHIFT_DB_HOST}
      REDSHIFT_DB_PORT: ${self:custom.secrets.REDSHIFT_DB_PORT}
      ASSETS_DB_NAME: "${self:custom.secrets.ASSETS_DB_NAME}"
      ASSETS_DB_HOST: "${self:custom.secrets.ASSETS_DB_HOST}"
      ASSETS_DB_USER: "${self:custom.secrets.ASSETS_DB_USER}"
      ASSETS_DB_PASSWORD: "${self:custom.secrets.ASSETS_DB_PASSWORD}"
      ASSETS_DB_PORT: "${self:custom.secrets.ASSETS_DB_PORT}"
    events:
      - schedule: rate(5 minutes)
  # deduplicator:
  #   handler: src/deduplicator.deduplicate
  #   memorySize: 512
  #   environment:
  #     REDSHIFT_CLUSTER_TYPE: ${self:custom.secrets.REDSHIFT_CLUSTER_TYPE}
  #     REDSHIFT_DB_NAME: ${self:custom.secrets.REDSHIFT_DB_NAME}
  #     REDSHIFT_DB_USER: ${self:custom.secrets.REDSHIFT_DB_USER}
  #     REDSHIFT_DB_PASSWORD: ${self:custom.secrets.REDSHIFT_DB_PASSWORD}
  #     REDSHIFT_DB_HOST: ${self:custom.secrets.REDSHIFT_DB_HOST}
  #     REDSHIFT_DB_PORT: ${self:custom.secrets.REDSHIFT_DB_PORT}
  #   # events:
  #     # - schedule: rate(5 minutes)
  websocketMessage:
    handler: src/websocketConnector.onMessage
    memorySize: 256
    events:
      - websocket:
          route: '$default'
    environment:
      LORIOT_DB: { Ref: loriotTable }
      OCCUPANCY_STREAM_NAME: { Ref: firehose }
      ATMOSPHERIC_STREAM_NAME: { Ref: AtmosphericFirehose }
      PEOPLE_STREAM_NAME: { Ref: PeopleFirehose }
      IBASX_CLIENT_TOKEN: ${self:custom.secrets.IBASX_CLIENT_TOKEN}
      IBASX_CLIENT_URL: ${self:custom.secrets.IBASX_CLIENT_URL}
  wsAuthorizer:
    handler: src/authorizer.handler
    memorySize: 128
    environment:
      STAGE: ${self:provider.stage}
      API_KEY_ALLOW: ${self:custom.secrets.WS_API_KEY_ALLOW}
      USAGE_API_KEY: ${self:custom.secrets.USAGE_API_KEY}
  websocketConnect:
    handler: src/websocketConnect.connect
    environment:
      IBASX_CLIENT_TOKEN: ${self:custom.secrets.IBASX_CLIENT_TOKEN}
      IBASX_CLIENT_URL: ${self:custom.secrets.IBASX_CLIENT_URL}
    events:
      - websocket:
          route: $connect
          # routeKey: '\$default'
          authorizer:
            name: wsAuthorizer
            identitySource:
              - route.request.header.Authorization
      - websocket:
          route: $disconnect
  wifiConnector:
    handler: src/wifi.connector
    memorySize: 384
    vpc:
      securityGroupIds:
        - sg-0d7ec27d8c3e59a5f
      subnetIds:
        - subnet-093295e049fd0b192
        - subnet-0b4dd59bec892f1b5
        - subnet-0ba4e03f8d83d5cd4
    events:
      - http:
          path: wifi/uplink
          method: post
          authorizer:
            type: TOKEN
            name: authorizer
            identitySource: method.request.header.Authorization
    environment:
      STAGE: ${self:provider.stage}
      LORIOT_DB: { Ref: loriotTable }
      OCCUPANCY_STREAM_NAME: { Ref: firehose }
      ATMOSPHERIC_STREAM_NAME: { Ref: AtmosphericFirehose }
      PEOPLE_STREAM_NAME: { Ref: PeopleFirehose }
      IBASX_CLIENT_TOKEN: ${self:custom.secrets.IBASX_CLIENT_TOKEN}
      IBASX_CLIENT_URL: ${self:custom.secrets.IBASX_CLIENT_URL}
      REDSHIFT_DB_NAME: ${self:custom.secrets.REDSHIFT_DB_NAME}
      REDSHIFT_DB_USER: ${self:custom.secrets.REDSHIFT_DB_USER}
      REDSHIFT_DB_PASSWORD: ${self:custom.secrets.REDSHIFT_DB_PASSWORD}
      REDSHIFT_DB_HOST: ${self:custom.secrets.REDSHIFT_DB_HOST}
      REDSHIFT_DB_PORT: ${self:custom.secrets.REDSHIFT_DB_PORT}
      CLIMATE_DB_NAME: "${self:custom.secrets.CLIMATE_DB_NAME}"
      CLIMATE_DB_HOST: "${self:custom.secrets.CLIMATE_DB_HOST}"
      CLIMATE_DB_USER: "${self:custom.secrets.CLIMATE_DB_USER}"
      CLIMATE_DB_PASSWORD: "${self:custom.secrets.CLIMATE_DB_PASSWORD}"
      CLIMATE_DB_PORT: "${self:custom.secrets.CLIMATE_DB_PORT}"
  missingPowerBIData:
    handler: src/missingPowerBIData.update
    memorySize: 256
    timeout: 600
    environment:
      STAGE: ${self:provider.stage}
      CLIMATE_DB_NAME: "${self:custom.secrets.CLIMATE_DB_NAME}"
      CLIMATE_DB_HOST: "${self:custom.secrets.CLIMATE_DB_HOST}"
      CLIMATE_DB_USER: "${self:custom.secrets.CLIMATE_DB_USER}"
      CLIMATE_DB_PASSWORD: "${self:custom.secrets.CLIMATE_DB_PASSWORD}"
      CLIMATE_DB_PORT: "${self:custom.secrets.CLIMATE_DB_PORT}"
  kinesisEtl:
    timeout: 60
    handler: src/kinesisTransformer.kinesisTransformer
    environment:
      TZ: "Greenwich"
      ROUND_PERIOD: 360000 # 6 minutes
      ADMIN_DB_NAME: "${self:custom.secrets.ADMIN_DB_NAME}"
      ADMIN_DB_HOST: "${self:custom.secrets.ADMIN_DB_HOST}"
      ADMIN_DB_USER: "${self:custom.secrets.ADMIN_DB_USER}"
      ADMIN_DB_PASSWORD: "${self:custom.secrets.ADMIN_DB_PASSWORD}"
      ADMIN_DB_PORT: "${self:custom.secrets.ADMIN_DB_PORT}"
      ASSETS_DB_NAME: "${self:custom.secrets.ASSETS_DB_NAME}"
      ASSETS_DB_HOST: "${self:custom.secrets.ASSETS_DB_HOST}"
      ASSETS_DB_USER: "${self:custom.secrets.ASSETS_DB_USER}"
      ASSETS_DB_PASSWORD: "${self:custom.secrets.ASSETS_DB_PASSWORD}"
      ASSETS_DB_PORT: "${self:custom.secrets.ASSETS_DB_PORT}"
  atmosphericEtl:
    timeout: 60
    handler: src/atmosphericTransformer.atmosphericTransformer
    environment:
      TZ: "Greenwich"
      ADMIN_DB_NAME: "${self:custom.secrets.ADMIN_DB_NAME}"
      ADMIN_DB_HOST: "${self:custom.secrets.ADMIN_DB_HOST}"
      ADMIN_DB_USER: "${self:custom.secrets.ADMIN_DB_USER}"
      ADMIN_DB_PASSWORD: "${self:custom.secrets.ADMIN_DB_PASSWORD}"
      ADMIN_DB_PORT: "${self:custom.secrets.ADMIN_DB_PORT}"
  peopleEtl:
    timeout: 60
    handler: src/peopleTransformer.peopleTransformer
    environment:
      ROUND_PERIOD: 360000 # 6 minutes
      TZ: "Greenwich"
      ADMIN_DB_NAME: "${self:custom.secrets.ADMIN_DB_NAME}"
      ADMIN_DB_HOST: "${self:custom.secrets.ADMIN_DB_HOST}"
      ADMIN_DB_USER: "${self:custom.secrets.ADMIN_DB_USER}"
      ADMIN_DB_PASSWORD: "${self:custom.secrets.ADMIN_DB_PASSWORD}"
      ADMIN_DB_PORT: "${self:custom.secrets.ADMIN_DB_PORT}"
  updateSensorSlot:
    timeout: 60
    handler: src/updateSensorSlot.updateSensorSlot
    environment:
      ADMIN_DB_NAME: "${self:custom.secrets.ADMIN_DB_NAME}"
      ADMIN_DB_HOST: "${self:custom.secrets.ADMIN_DB_HOST}"
      ADMIN_DB_USER: "${self:custom.secrets.ADMIN_DB_USER}"
      ADMIN_DB_PASSWORD: "${self:custom.secrets.ADMIN_DB_PASSWORD}"
      ADMIN_DB_PORT: "${self:custom.secrets.ADMIN_DB_PORT}"
      REDSHIFT_CLUSTER_TYPE: ${self:custom.secrets.REDSHIFT_CLUSTER_TYPE}
      REDSHIFT_DB_NAME: ${self:custom.secrets.REDSHIFT_DB_NAME}
      REDSHIFT_DB_USER: ${self:custom.secrets.REDSHIFT_DB_USER}
      REDSHIFT_DB_PASSWORD: ${self:custom.secrets.REDSHIFT_DB_PASSWORD}
      REDSHIFT_DB_HOST: ${self:custom.secrets.REDSHIFT_DB_HOST}
      REDSHIFT_DB_PORT: ${self:custom.secrets.REDSHIFT_DB_PORT}

resources: ${file(resources.yml)}

Here's the resources.yml file:


---
Resources:
    firehoseRole:
      Type: AWS::IAM::Role
      Properties:
        RoleName: ${self:service}-${self:provider.stage}-FirehoseToS3Role
        AssumeRolePolicyDocument:
          Statement:
          - Effect: Allow
            Principal:
              Service:
              - firehose.amazonaws.com
            Action:
            - sts:AssumeRole
        Policies:
        - PolicyName: FirehoseToS3Policy
          PolicyDocument:
            Statement:
              - Effect: Allow
                Action:
                  - s3:AbortMultipartUpload
                  - s3:GetBucketLocation
                  - s3:GetObject
                  - s3:ListBucket
                  - s3:ListBucketMultipartUploads
                  - s3:PutObject
                Resource: '*'
        - PolicyName: FirehoseLogsPolicy
          PolicyDocument:
            Statement:
              - Effect: Allow
                Action:
                  - logs:CreateLogStream
                  - glue:GetTableVersions
                  - logs:CreateLogGroup
                  - logs:PutLogEvents
                Resource: '*'
        - PolicyName: FirehoseLambdaPolicy
          PolicyDocument:
            Statement:
              - Effect: Allow
                Action:
                  - lambda:InvokeFunction
                  - lambda:GetFunctionConfiguration
                  - kinesis:GetShardIterator
                  - kinesis:GetRecords
                  - kinesis:DescribeStream
                Resource: '*'
    serverlessKinesisFirehoseBucket:
      Type: AWS::S3::Bucket
      DeletionPolicy: Retain
      Properties:
        BucketName: "${self:service}-${self:provider.stage}"
        LifecycleConfiguration:
          Rules:
            - Status: Enabled
              ExpirationInDays: 90
    theThingsNetwork:
      Type:                    "AWS::DynamoDB::Table"
      Properties:
        TableName:             "${self:custom.secrets.TTN_DB}"
        PointInTimeRecoverySpecification:
          PointInTimeRecoveryEnabled: true
        AttributeDefinitions:
        - AttributeName:       "device"
          AttributeType:       "S"
        - AttributeName:       "timestamp"
          AttributeType:       "S"
        KeySchema:
        - AttributeName:       "device"
          KeyType:             "HASH"
        - AttributeName:       "timestamp"
          KeyType:             "RANGE"
        BillingMode: PAY_PER_REQUEST
    loriotTable:
      Type:                    "AWS::DynamoDB::Table"
      Properties:
        TableName:             "${self:custom.secrets.LORIOT_DB}"
        PointInTimeRecoverySpecification:
          PointInTimeRecoveryEnabled: true
        AttributeDefinitions:
        - AttributeName:       "device"
          AttributeType:       "S"
        - AttributeName:       "timestamp"
          AttributeType:       "S"
        KeySchema:
        - AttributeName:       "device"
          KeyType:             "HASH"
        - AttributeName:       "timestamp"
          KeyType:             "RANGE"
        BillingMode: PAY_PER_REQUEST
    processed:
      Type: "AWS::Redshift::Cluster"
      Properties:
        AutomatedSnapshotRetentionPeriod: "${self:custom.secrets.REDSHIFT_SNAPSHOT_RETENTION_PERIOD}"
        AllowVersionUpgrade:   true
        ClusterIdentifier:     "${self:custom.secrets.REDSHIFT_IDENTIFIER}"
        ClusterType:           "${self:custom.secrets.REDSHIFT_CLUSTER_TYPE}"
        DBName:                "${self:custom.secrets.REDSHIFT_DB_NAME}"
        MasterUsername:        "${self:custom.secrets.REDSHIFT_DB_USER}"
        MasterUserPassword:    "${self:custom.secrets.REDSHIFT_DB_PASSWORD}"
        Port:                  "${self:custom.secrets.REDSHIFT_DB_PORT}"
        NodeType:              "${self:custom.secrets.REDSHIFT_NODE_TYPE}"
        PubliclyAccessible:    true
        VpcSecurityGroupIds:   "${self:custom.secrets.REDSHIFT_SECURITY_GROUP_IDS}"
        ElasticIp:  "${self:custom.secrets.REDSHIFT_EIP}"
        ClusterSubnetGroupName: "${self:custom.secrets.REDSHIFT_SUBNET_GROUP}"
        # ClusterParameterGroupName: "${self:custom.secrets.REDSHIFT_PARAMETER_GROUP}"

    LogGroup:
      Type: AWS::Logs::LogGroup
      Properties:
        LogGroupName: ${self:service}-${self:provider.stage}-kinesis
        RetentionInDays: 30
    OccupancyS3LogStream:
      Type: AWS::Logs::LogStream
      Properties:
        LogGroupName: { Ref: LogGroup }
        LogStreamName: OccupancyS3LogStream
    OccupancyRedshiftLogStream:
      Type: AWS::Logs::LogStream
      Properties:
        LogGroupName: { Ref: LogGroup }
        LogStreamName: OccupancyRedshiftLogStream
    AtmosphericS3LogStream:
      Type: AWS::Logs::LogStream
      Properties:
        LogGroupName: { Ref: LogGroup }
        LogStreamName: AtmosphericS3LogStream
    AtmosphericRedshiftLogStream:
      Type: AWS::Logs::LogStream
      Properties:
        LogGroupName: { Ref: LogGroup }
        LogStreamName: AtmosphericRedshiftLogStream

    firehose:
      Type: AWS::KinesisFirehose::DeliveryStream
      Properties:
        DeliveryStreamName: ${self:service}-${self:provider.stage}
        DeliveryStreamType: DirectPut
        RedshiftDestinationConfiguration:
          ClusterJDBCURL: jdbc:redshift://${self:custom.secrets.REDSHIFT_IDENTIFIER}.copw8j1hahrq.eu-central-1.redshift.amazonaws.com:${self:custom.secrets.REDSHIFT_DB_PORT}/${self:custom.secrets.REDSHIFT_DB_NAME}
          CopyCommand:
            CopyOptions: "json 'auto' dateformat 'auto' timeformat 'auto'"
            DataTableName: "processed_data"
          Password: "${self:custom.secrets.REDSHIFT_DB_PASSWORD}"
          Username: "${self:custom.secrets.REDSHIFT_DB_USER}"
          RoleARN: { Fn::GetAtt: [ firehoseRole, Arn ] }
          CloudWatchLoggingOptions:
            Enabled: true
            LogGroupName: { Ref: LogGroup }
            LogStreamName: { Ref: OccupancyRedshiftLogStream }
          S3Configuration:
            BucketARN: { Fn::GetAtt: [ serverlessKinesisFirehoseBucket, Arn ] }
            BufferingHints:
              IntervalInSeconds: 60
              SizeInMBs: 1
            CompressionFormat: UNCOMPRESSED
            CloudWatchLoggingOptions:
              Enabled: true
              LogGroupName: { Ref: LogGroup }
              LogStreamName: { Ref: OccupancyS3LogStream }
            RoleARN: { Fn::GetAtt: [ firehoseRole, Arn ] }
          ProcessingConfiguration:
            Enabled: true
            Processors:
              - Parameters:
                - ParameterName: LambdaArn
                  ParameterValue: { Fn::GetAtt: [ KinesisEtlLambdaFunction, Arn ] }
                - ParameterName: BufferIntervalInSeconds
                  ParameterValue: "60"
                - ParameterName: BufferSizeInMBs
                  ParameterValue: "1"
                - ParameterName: NumberOfRetries
                  ParameterValue: "2"
                Type: Lambda
    AtmosphericFirehose:
      Type: AWS::KinesisFirehose::DeliveryStream
      Properties:
        DeliveryStreamName: ${self:service}-${self:provider.stage}-atmospheric
        DeliveryStreamType: DirectPut
        RedshiftDestinationConfiguration:
          ClusterJDBCURL: jdbc:redshift://${self:custom.secrets.REDSHIFT_IDENTIFIER}.copw8j1hahrq.eu-central-1.redshift.amazonaws.com:${self:custom.secrets.REDSHIFT_DB_PORT}/${self:custom.secrets.REDSHIFT_DB_NAME}
          CopyCommand:
            CopyOptions: "json 'auto' dateformat 'auto' timeformat 'auto'"
            DataTableName: "atmospheric_data"
          Password: "${self:custom.secrets.REDSHIFT_DB_PASSWORD}"
          Username: "${self:custom.secrets.REDSHIFT_DB_USER}"
          RoleARN: { Fn::GetAtt: [ firehoseRole, Arn ] }
          CloudWatchLoggingOptions:
            Enabled: true
            LogGroupName: { Ref: LogGroup }
            LogStreamName: { Ref: AtmosphericRedshiftLogStream }
          S3Configuration:
            BucketARN: { Fn::GetAtt: [ serverlessKinesisFirehoseBucket, Arn ] }
            Prefix: atmospheric/
            BufferingHints:
              IntervalInSeconds: 60
              SizeInMBs: 1
            CompressionFormat: UNCOMPRESSED
            CloudWatchLoggingOptions:
              Enabled: true
              LogGroupName: { Ref: LogGroup }
              LogStreamName: { Ref: AtmosphericS3LogStream }
            RoleARN: { Fn::GetAtt: [ firehoseRole, Arn ] }
          ProcessingConfiguration:
            Enabled: true
            Processors:
              - Parameters:
                - ParameterName: LambdaArn
                  ParameterValue: { Fn::GetAtt: [ AtmosphericEtlLambdaFunction, Arn ] }
                - ParameterName: BufferIntervalInSeconds
                  ParameterValue: "60"
                - ParameterName: BufferSizeInMBs
                  ParameterValue: "1"
                - ParameterName: NumberOfRetries
                  ParameterValue: "2"
                Type: Lambda
    PeopleFirehose:
      Type: AWS::KinesisFirehose::DeliveryStream
      Properties:
        DeliveryStreamName: ${self:service}-${self:provider.stage}-people
        DeliveryStreamType: DirectPut
        RedshiftDestinationConfiguration:
          ClusterJDBCURL: jdbc:redshift://${self:custom.secrets.REDSHIFT_IDENTIFIER}.copw8j1hahrq.eu-central-1.redshift.amazonaws.com:${self:custom.secrets.REDSHIFT_DB_PORT}/${self:custom.secrets.REDSHIFT_DB_NAME}
          CopyCommand:
            CopyOptions: "json 'auto' dateformat 'auto' timeformat 'auto'"
            DataTableName: "people_data"
          Password: "${self:custom.secrets.REDSHIFT_DB_PASSWORD}"
          Username: "${self:custom.secrets.REDSHIFT_DB_USER}"
          RoleARN: { Fn::GetAtt: [ firehoseRole, Arn ] }
          CloudWatchLoggingOptions:
            Enabled: true
            LogGroupName: { Ref: LogGroup }
            LogStreamName: { Ref: AtmosphericRedshiftLogStream }
          S3Configuration:
            BucketARN: { Fn::GetAtt: [ serverlessKinesisFirehoseBucket, Arn ] }
            Prefix: people/
            BufferingHints:
              IntervalInSeconds: 60
              SizeInMBs: 1
            CompressionFormat: UNCOMPRESSED
            CloudWatchLoggingOptions:
              Enabled: true
              LogGroupName: { Ref: LogGroup }
              LogStreamName: { Ref: AtmosphericS3LogStream }
            RoleARN: { Fn::GetAtt: [ firehoseRole, Arn ] }
          ProcessingConfiguration:
            Enabled: true
            Processors:
              - Parameters:
                - ParameterName: LambdaArn
                  ParameterValue: { Fn::GetAtt: [ PeopleEtlLambdaFunction, Arn ] }
                - ParameterName: BufferIntervalInSeconds
                  ParameterValue: "60"
                - ParameterName: BufferSizeInMBs
                  ParameterValue: "1"
                - ParameterName: NumberOfRetries
                  ParameterValue: "2"
                Type: Lambda
    RedisCluster:
      Type: 'AWS::ElastiCache::ReplicationGroup'
      Properties:
        AutoMinorVersionUpgrade: true
        ReplicationGroupId: "${self:custom.secrets.REDIS_CACHE_CLUSTER_NAME}"
        ReplicationGroupDescription: "${self:custom.secrets.REDIS_CACHE_CLUSTER_NAME}"
        CacheNodeType: cache.t4g.micro
        Engine: redis
        ReplicasPerNodeGroup: 3
        NumNodeGroups: 1
        EngineVersion: '7.0'
        MultiAZEnabled: true
        AutomaticFailoverEnabled: true
        PreferredMaintenanceWindow: 'sat:01:45-sat:04:45'
        SnapshotRetentionLimit: 4
        SnapshotWindow: '00:30-01:30'
        CacheSubnetGroupName: mm-vpc-cache
        SecurityGroupIds:
          - sg-07663c145bf3feb84
          - sg-0d7ec27d8c3e59a5f

The deployment fails with the error message Error: CREATE_FAILED: serverlessKinesisFirehoseBucket (AWS::S3::Bucket) sensor-processor-v3-dev already exists.

I investigated the issue and learned that S3 bucket names must be globally unique across all AWS accounts, not just within my own account or region.
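One common way to keep the service/stage prefix while guaranteeing a globally unique name is to append the AWS account ID. This is a sketch assuming Serverless Framework v3, where the ${aws:accountId} variable is built in (older versions need a pseudo-parameters plugin):

```yaml
serverlessKinesisFirehoseBucket:
  Type: AWS::S3::Bucket
  DeletionPolicy: Retain
  Properties:
    # The account ID suffix makes the name globally unique
    # while keeping it predictable per stage.
    BucketName: ${self:service}-${self:provider.stage}-${aws:accountId}
```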


I renamed the S3 bucket to get past that error during testing. However, the deployment still fails, this time with the error message Error: CREATE_FAILED: RedisCluster (AWS::ElastiCache::ReplicationGroup) Cache subnet group 'mm-vpc-cache' does not exist. (Service: AmazonElastiCache; Status Code: 400; Error Code: CacheSubnetGroupNotFoundFault; Request ID: 2cbfadb2-8086-4ce8-ae61-1d75dcaaa1aa; Proxy: null).

Upvotes: 0

Views: 159

Answers (1)

Sean Linguine

Reputation: 459

Question 1: I haven't changed anything on the S3 bucket. Should I still provision it? I don't want to rename my existing S3 bucket; I want to use it as-is.

If your bucket was created outside of serverless.yml, then all you need to do is reference it by name. There is no need to declare an AWS::S3::Bucket resource if you don't need CloudFormation to create one.
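Concretely, that means deleting the serverlessKinesisFirehoseBucket resource from resources.yml and pointing each delivery stream at the pre-existing bucket's ARN. A minimal sketch, assuming the bucket follows the sensor-processor-v3-{stage} naming seen in your IAM statements:

```yaml
# In each DeliveryStream's S3Configuration, instead of
#   BucketARN: { Fn::GetAtt: [ serverlessKinesisFirehoseBucket, Arn ] }
# reference the existing bucket directly (name assumed):
S3Configuration:
  BucketARN: arn:aws:s3:::sensor-processor-v3-${self:provider.stage}
```

With the resource removed, CloudFormation no longer tries to create the bucket, so the "already exists" error goes away, and your IAM statements already grant access by ARN.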

Question 2: Is my resources.yml file out of date and no longer synchronised with my AWS resources?

That is unlikely. Can you confirm whether you have an existing AWS::ElastiCache::SubnetGroup named mm-vpc-cache? In the ElastiCache console, the left-hand navigation has a "Subnet Groups" section. If mm-vpc-cache is not listed there, you need to create it before the replication group can be provisioned.
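Alternatively, you can have the stack create the subnet group itself. A sketch of declaring it in resources.yml and pointing the replication group at it, reusing the subnet IDs your VPC-attached functions already reference (verify these subnets are in the same VPC as the cache security groups):

```yaml
RedisSubnetGroup:
  Type: AWS::ElastiCache::SubnetGroup
  Properties:
    CacheSubnetGroupName: mm-vpc-cache
    Description: Subnet group for the Redis replication group
    SubnetIds:
      - subnet-093295e049fd0b192
      - subnet-0b4dd59bec892f1b5
      - subnet-0ba4e03f8d83d5cd4
RedisCluster:
  Type: AWS::ElastiCache::ReplicationGroup
  Properties:
    # Ref makes CloudFormation create the subnet group first
    CacheSubnetGroupName: { Ref: RedisSubnetGroup }
    # ...rest of the RedisCluster properties unchanged
```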

Upvotes: 0
