Algamest

Reputation: 1529

Create AWS Athena view programmatically

The question Can you create views in Amazon Athena? outlines how to create a view using the user interface.

I'd like to create an AWS Athena view programmatically, ideally using Terraform (which calls CloudFormation).

I followed the steps outlined here: https://ujjwalbhardwaj.me/post/create-virtual-views-with-aws-glue-and-query-them-using-athena. However, I run into an issue in that the view goes stale quickly.

...._view' is stale; it must be re-created.

The Terraform code looks like this:

resource "aws_glue_catalog_table" "adobe_session_view" {

  database_name = "${var.database_name}"
  name = "session_view"

  table_type = "VIRTUAL_VIEW"
  view_original_text = "/* Presto View: ${base64encode(data.template_file.query_file.rendered)} */"
  view_expanded_text = "/* Presto View */"

  parameters = {
    presto_view = "true"
    comment = "Presto View"
  }

  storage_descriptor {
    ser_de_info {
      name = "ParquetHiveSerDe"
      serialization_library = "org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe"
    }

    columns {
      name = "first_column"
      type = "string"
    }
    columns {
      name = "second_column"
      type = "int"
    }
    ...
    columns {
      name = "nth_column"
      type = "string"
    }
  }
}

An alternative I'd be happy to use is the AWS CLI, but aws athena [option] provides no option for this.


Upvotes: 30

Views: 28332

Answers (8)

mberchon

Reputation: 121

Here is another piece of code, using the AWS CDK in Python, in the context of a merge-on-read lakehouse use case:

import base64
import json

from aws_cdk import Aws
from aws_cdk.aws_glue import CfnTable
from constructs import Construct


def build_mor_view(scope: Construct, db_name: str, table_name: str,
                   col_definitions: list[list[str]]) -> CfnTable:
    """
    Builds a virtual view over a given table which merges non-compacted data on read
    according to the generation id.
    For each column except timestamp and resourceId, the view retrieves the latest
    non-null available value by genId for each unique tuple of timestamp and resourceId.
    Files must be named roughly as follows: /%lake_location%/%table_name%/%earliestId%-%genId%.parquet
    The CDK code is inspired by https://theglitchblog.com/2024/03/01/create-aws-athena-view-using-aws-cdk/

    :param scope: CDK Construct
    :param db_name: Database name
    :param table_name: Table name
    :param col_definitions: List of column definitions (name, type)
    :return: CfnTable view
    """
    columns = [{"name": col_def[0], "type": hive_type_of(col_def[1])} for col_def in col_definitions]

    # Build the view SQL: for every non-key column, take the value from the
    # file with the highest genId (extracted from "$path").
    sql = "SELECT \"timestamp\",\"resourceId\""
    for col_def in col_definitions:
        col_name = f"\"{col_def[0]}\""
        col_type = hive_type_of(col_def[1])

        if col_name not in ["\"timestamp\"", "\"resourceId\""]:
            sql += f",\nmax_by({col_name},-CAST(regexp_extract(\"$path\", '-(\\d+)', 0) AS {col_type})) {col_name}"

    sql += f" FROM \"{db_name}\".\"{table_name}\" GROUP BY \"timestamp\", \"resourceId\""

    athena_json = {
        "originalSql": sql,
        "catalog": "awsdatacatalog",
        "schema": db_name,
        "columns": columns
    }

    # Glue expects Hive type names in the storage descriptor, so map the
    # Presto names (varchar, real) back to their Glue equivalents.
    type_casted_col_list = [{"name": item["name"],
                             "type": "string" if item["type"] == "varchar"
                             else "float" if item["type"] == "real"
                             else item["type"]}
                            for item in athena_json["columns"]]

    b64_en_view_config = base64.b64encode(json.dumps(athena_json).encode("utf-8")).decode("utf-8")

    view_name = f"{table_name}_mor"
    # noinspection PyTypeChecker
    return CfnTable(scope, view_name,
                    catalog_id=Aws.ACCOUNT_ID,
                    database_name=db_name,
                    table_input=CfnTable.TableInputProperty(
                        name=view_name,
                        parameters={"presto_view": "true", "comment": "Presto View"},
                        table_type="VIRTUAL_VIEW",
                        storage_descriptor=CfnTable.StorageDescriptorProperty(
                            columns=type_casted_col_list,
                        ),
                        view_original_text=f"/* Presto View: {b64_en_view_config} */",
                        view_expanded_text="/* Presto View */"
                    ))


def hive_type_of(col_type: str) -> str:
    """Map Spark-style type names to the Hive names Athena expects."""
    if col_type in ["long", "LongType"]:
        return "bigint"
    elif col_type in ["float", "FloatType"]:
        return "float"
    elif col_type in ["double", "DoubleType"]:
        return "double"
    elif col_type in ["Instant", "TimestampType"]:
        return "timestamp"
    else:
        return col_type
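
A hypothetical invocation from within a CDK stack might look like this (database, table, and column names are illustrative; the type names follow what hive_type_of expects):

build_mor_view(self, "my_lake_db", "sensor_readings",
               [["timestamp", "TimestampType"],
                ["resourceId", "string"],
                ["temperature", "DoubleType"]])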

Upvotes: 0

Marko E

Reputation: 18203

Actually, the query for the view can be created using Athena named queries. For example:

resource "aws_athena_named_query" "my_named_query" {
  name      = "test-view"
  workgroup = "someworkgroupname"
  database  = "somedbname"
  query     = "CREATE OR REPLACE VIEW \"new_view\" AS SELECT field1, field2 FROM \"somedbname\".\"sometablename\""
}

One example of using the new terraform_data resource (Terraform v1.4.x) could be:

resource "terraform_data" "crete_athena_view" {
  triggers_replace = [
    aws_athena_named_query.my_named_query.id
  ]

  provisioner "local-exec" {
    command = "aws athena start-query-execution --query-string \"${aws_athena_named_query.my_named_query.query}\" --work-group ${aws_athena_named_query.my_named_query.workgroup} --query-execution-context Database=${aws_athena_named_query.my_named_query.database},Catalog=AwsDataCatalog"
  }
}

For older Terraform versions, null_resource can be used instead.

Upvotes: 0

JD D

Reputation: 8137

As you suggested, it is definitely possible to create an Athena view programmatically via the AWS CLI using start-query-execution. As you pointed out, this does require you to provide an S3 location for the results, even though you won't need to check the file (Athena will put an empty txt file in the location for some reason).

Here is an example:

$ aws athena start-query-execution --query-string "create view my_view as select * from my_table" --result-configuration "OutputLocation=s3://my-bucket/tmp" --query-execution-context "Database=my_database"

{
    "QueryExecutionId": "1744ed2b-e111-4a91-80ea-bcb1eb1c9c25"
}

You can avoid having the client specify a bucket by creating a workgroup and setting the location there.
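
For instance, a minimal boto3 sketch of that setup (workgroup and bucket names are hypothetical):

import boto3

athena = boto3.client("athena")

# A workgroup with a default output location lets callers omit
# --result-configuration / ResultConfiguration entirely.
athena.create_work_group(
    Name="my-workgroup",
    Configuration={"ResultConfiguration": {"OutputLocation": "s3://my-bucket/tmp"}},
)

athena.start_query_execution(
    QueryString="create view my_view as select * from my_table",
    QueryExecutionContext={"Database": "my_database"},
    WorkGroup="my-workgroup",
)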

You can check whether your view creation was successful by using the get-query-execution command.

$ aws athena get-query-execution --query-execution-id 1744ed2b-e111-4a91-80ea-bcb1eb1c9c25
{
    "QueryExecution": {
        "QueryExecutionId": "1744ed2b-e111-4a91-80ea-bcb1eb1c9c25",
        "Query": "create view my_view as select * from my_table",
        "StatementType": "DDL",
        "ResultConfiguration": {
            "OutputLocation": "s3://my-bucket/tmp/1744ed2b-e111-4a91-80ea-bcb1eb1c9c25.txt"
        },
        "Status": {
            "State": "SUCCEEDED",
            "SubmissionDateTime": 1558744806.679,
            "CompletionDateTime": 1558744807.312
        },
        "Statistics": {
            "EngineExecutionTimeInMillis": 548,
            "DataScannedInBytes": 0
        },
        "WorkGroup": "primary"
    }
}
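
If you are scripting this, a small boto3 sketch can poll get-query-execution until the DDL reaches a terminal state (names as above):

import time

import boto3

athena = boto3.client("athena")

def wait_for_query(query_execution_id: str) -> str:
    """Poll until the query succeeds, fails, or is cancelled."""
    while True:
        response = athena.get_query_execution(QueryExecutionId=query_execution_id)
        state = response["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            return state
        time.sleep(1)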

Upvotes: 17

Maciej Majewski

Reputation: 1054

Based on the previous answers, here is an example that will execute queries only if the source file has changed. Also, instead of pasting the SQL query into the command, it uses the file:// adapter to pass it to the AWS CLI.

resource "null_resource" "views" {
  for_each = {
    for filename in fileset("${var.sql_files_dir}/", "**/*.sql") :
    replace(replace(filename, "/", "_"), ".sql", "") => "${var.sql_files_dir}/${filename}"
  }

  triggers = {
    md5 = filemd5(each.value)

    # External references from destroy provisioners are not allowed -
    # they may only reference attributes of the related resource.
    database_name = var.database_name
    s3_bucket_query_output = var.s3_bucket_query_output
  }

  provisioner "local-exec" {
    command = <<EOF
      aws athena start-query-execution \
        --output json \
        --query-string file://${each.value} \
        --query-execution-context "Database=${var.database_name}" \
        --result-configuration "OutputLocation=s3://${var.s3_bucket_query_output}"
EOF
  }

  provisioner "local-exec" {
    when    = destroy
    command = <<EOF
      aws athena start-query-execution \
        --output json \
        --query-string 'DROP VIEW IF EXISTS ${each.key}' \
        --query-execution-context "Database=${self.triggers.database_name}" \
        --result-configuration "OutputLocation=s3://${self.triggers.s3_bucket_query_output}"
EOF
  }
}

To make destroy work correctly, name each file after the view it creates; for example, example.sql relates to the query:

CREATE OR REPLACE VIEW example AS ...

Upvotes: 7

Joshua Samuel

Reputation: 141

Updating the above examples for Terraform 0.12+ syntax, and adding in reading the view queries from the filesystem:

resource "null_resource" "athena_views" {
  for_each = {
    for filename in fileset("${path.module}/athenaviews/", "**"):
           replace(filename,"/","_") => file("${path.module}/athenaviews/${filename}")
  }

  provisioner "local-exec" {
    command = <<EOF
    aws athena start-query-execution \
      --output json \
      --query-string "CREATE OR REPLACE VIEW ${each.key} AS ${each.value}" \
      --query-execution-context "Database=${var.athena_database}" \
      --result-configuration "OutputLocation=s3://${aws_s3_bucket.my-bucket.bucket}"
EOF
  }

  provisioner "local-exec" {
    when    = destroy
    command = <<EOF
    aws athena start-query-execution \
      --output json \
      --query-string "DROP VIEW IF EXISTS ${each.key}" \
      --query-execution-context "Database=${var.athena_database}" \
      --result-configuration "OutputLocation=s3://${aws_s3_bucket.my-bucket.bucket}"
EOF
  }
}

Note also the when = destroy block, to ensure the views are dropped when your stack is torn down.

Place text files with a SELECT query below your module path under a directory (athenaviews/ in this example), and it will pick them up and create views. This will create views named subfolder_filename, and destroy them if the files are removed.

Upvotes: 11

TailorDurden

Reputation: 73

Addition to Theo's answer: in the base64-encoded JSON, the type "string" is not valid when defining the column attributes! Always write "varchar" at this point.

Edit: also "int" must be declared as "integer"!

I went with Theo's solution and it worked using AWS CloudFormation templates.

I just wanted to add this little hint, since it can save you hours of debugging. I am not writing this as a comment because I don't have the rights to comment yet. Feel free to copy & paste this into the comment section of Theo's answer.

Upvotes: 3

Theo

Reputation: 132922

Creating views programmatically in Athena is not documented, and unsupported, but possible. What happens behind the scenes when you create a view using StartQueryExecution is that Athena lets Presto create the view and then extracts Presto's internal representation and puts it in the Glue catalog.

The staleness problem usually comes from the columns in the Presto metadata and the Glue metadata being out of sync. An Athena view really contains three descriptions of the view: the view SQL, the columns and their types in Glue format, and the columns and their types in Presto format. If any of these get out of sync, you will get the "… is stale; it must be re-created." error.

These are the requirements on a Glue table to work as an Athena view:

  • TableType must be VIRTUAL_VIEW
  • Parameters must contain presto_view: true
  • TableInput.ViewOriginalText must contain an encoded Presto view (see below)
  • StorageDescriptor.SerdeInfo must be an empty map
  • StorageDescriptor.Columns must contain all the columns that the view defines, with their types

The tricky part is the encoded Presto view. That structure is created by this code: https://github.com/prestosql/presto/blob/27a1b0e304be841055b461e2c00490dae4e30a4e/presto-hive/src/main/java/io/prestosql/plugin/hive/HiveUtil.java#L597-L600, and this is more or less what it does:

  • Adds a prefix /* Presto View: (with a space after :)
  • Adds a base 64 encoded JSON string that contains the view SQL, the columns and their types, and some catalog metadata (see below)
  • Adds a suffix */ (with a space before *)

The JSON that describes the view looks like this:

  • A catalog property that must have the value awsdatacatalog
  • A schema property that must be the name of the database where the view is created (i.e. it must match the DatabaseName property of the surrounding Glue structure)
  • A list of columns, each with a name and type
  • An originalSql property with the actual view SQL (not including CREATE VIEW …; it should start with SELECT … or WITH …)

Here's an example:

{
  "catalog": "awsdatacatalog",
  "schema": "some_database",
  "columns": [
    {"name": "col1", "type": "varchar"},
    {"name": "col2", "type": "bigint"}
  ],
  "originalSql": "SELECT col1, col2 FROM some_other_table"
}

One caveat here is that the types of the columns are almost, but not quite, the same as the names in Glue. Where Athena/Glue has string, the value in this JSON must be varchar. Where Athena/Glue uses array<string>, the value in this JSON must be array(varchar), and struct<foo:int> becomes row(foo int).

This is pretty messy, and putting it all together requires some fiddling and testing. The easiest way to get it working is to create a few views through Athena, decode them by working through the instructions above backwards to see how they look, and then try doing it yourself.
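
To illustrate, here is a minimal boto3 sketch of the recipe above. All names are hypothetical, and the type mapping covers only the scalar cases just mentioned; nested types like array<string> would need recursive handling:

import base64
import json

import boto3

# Scalar Glue-to-Presto type renames; anything not listed passes through unchanged.
GLUE_TO_PRESTO = {"string": "varchar", "int": "integer", "float": "real"}

def create_athena_view(database, view_name, sql, glue_columns):
    """glue_columns is a list of (name, glue_type) tuples, e.g. [("col1", "string")]."""
    presto_view = {
        "originalSql": sql,  # must start with SELECT or WITH, no CREATE VIEW
        "catalog": "awsdatacatalog",
        "schema": database,  # must match DatabaseName below
        "columns": [{"name": n, "type": GLUE_TO_PRESTO.get(t, t)} for n, t in glue_columns],
    }
    encoded = base64.b64encode(json.dumps(presto_view).encode("utf-8")).decode("utf-8")

    boto3.client("glue").create_table(
        DatabaseName=database,
        TableInput={
            "Name": view_name,
            "TableType": "VIRTUAL_VIEW",
            "Parameters": {"presto_view": "true"},
            "ViewOriginalText": f"/* Presto View: {encoded} */",
            "ViewExpandedText": "/* Presto View */",
            "StorageDescriptor": {
                "Columns": [{"Name": n, "Type": t} for n, t in glue_columns],
                "SerdeInfo": {},  # must be an empty map
            },
        },
    )

create_athena_view(
    "some_database", "my_view",
    "SELECT col1, col2 FROM some_other_table",
    [("col1", "string"), ("col2", "bigint")],
)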

Upvotes: 41

Algamest

Reputation: 1529

To add to the answers by JD D and Theo: working from their solutions, we figured out how to invoke the AWS CLI via Terraform as follows:

resource "null_resource" "athena_view" {

  provisioner "local-exec" {
    command = <<EOF
aws sts assume-role \
  --output json \
  --region my_region \
  --role-arn arn:aws:iam::${var.account_number}:role/my_role \
  --role-session-name create_my_view > /tmp/credentials.json

export AWS_SESSION_TOKEN=$(jq -r '.Credentials.SessionToken' /tmp/credentials.json)
export AWS_ACCESS_KEY_ID=$(jq -r '.Credentials.AccessKeyId' /tmp/credentials.json)
export AWS_SECRET_ACCESS_KEY=$(jq -r '.Credentials.SecretAccessKey' /tmp/credentials.json)

aws athena start-query-execution \
  --output json \
  --region my_region \
  --query-string "CREATE OR REPLACE VIEW my_view AS SELECT * FROM my_table \
  --query-execution-context "Database=${var.database_name}" \
  --result-configuration "OutputLocation=s3://${aws_s3_bucket.my-bucket.bucket}"
EOF
  }
}

We use null_resource ... to run provisioners that aren't directly associated with a specific resource.

The result of aws sts assume-role is outputted as JSON into /tmp/credentials.json.

jq is used to parse the necessary fields out of the output of aws sts assume-role.

aws athena start-query-execution is then able to execute under the role specified by the environment variables defined.

Instead of --result-configuration "OutputLocation=s3://...", --work-group can be specified. Note that this is a separate flag on start-query-execution, not part of the --result-configuration string.
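
For completeness, the same flow works without shelling out. A boto3 sketch (role ARN, region, and names are hypothetical, mirroring the placeholders above):

import boto3

# Assume the role that owns the Athena/Glue resources.
credentials = boto3.client("sts").assume_role(
    RoleArn="arn:aws:iam::123456789012:role/my_role",
    RoleSessionName="create_my_view",
)["Credentials"]

athena = boto3.client(
    "athena",
    region_name="my_region",
    aws_access_key_id=credentials["AccessKeyId"],
    aws_secret_access_key=credentials["SecretAccessKey"],
    aws_session_token=credentials["SessionToken"],
)

# --work-group maps to the WorkGroup parameter; ResultConfiguration can be
# omitted when the workgroup defines a default output location.
athena.start_query_execution(
    QueryString="CREATE OR REPLACE VIEW my_view AS SELECT * FROM my_table",
    QueryExecutionContext={"Database": "my_database"},
    WorkGroup="my-workgroup",
)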

Upvotes: 2
