Setup
I have the following folder structure:
.
├── main.tf
└── scripts
└── my_script.py
main.tf:
locals {
  python = (substr(pathexpand("~"), 0, 1) == "/") ? "python3" : "python.exe"
}

resource "null_resource" "custom_objects" {
  for_each = local.custom_objects

  triggers = {
    name = each.key
  }

  provisioner "local-exec" {
    command = <<-EOT
      python3 -m my_script '{"key_1": "value_1"}'
    EOT
    interpreter = [
      local.python
    ]
    working_dir = "${path.module}/scripts"
  }
}
./scripts/my_script.py:
if __name__ == '__main__':
    print('my_script executed.')
Problem
When I run terraform plan/apply, I get the following error:
Error running command 'python3 -m my_script '{"key_1": "value_1"}'': exit status 2. Output: /home_path/.pyenv/versions/3.9.1/bin/python3: can't open file '/path_to_repo/scripts/python3 -m my_script '{"key_1": "value_1"}'': [Errno 2] No such file or directory
Questions
Upvotes: 3
Views: 11432
It seems that when the interpreter argument is provided, the entire command string is passed to the interpreter as a single argument, which the interpreter then resolves as a script path relative to working_dir. That explains this error:

'/path_to_repo/scripts/python3 -m my_script '{"key_1": "value_1"}'

As you can see, Python tries to open a file literally named python3 -m my_script '{"key_1": "value_1"}' inside the scripts directory, and no such file exists.
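If you do want to keep the interpreter argument, one alternative worth trying (an untested sketch, not something I verified against your setup): Terraform appends command as the single final argument after the interpreter list, so the -m flag and the module name can be moved into interpreter, leaving only the JSON payload in command:

```hcl
provisioner "local-exec" {
  # The first list element is the executable; the rest become leading
  # arguments. `command` is appended as the single final argument, so
  # only the JSON payload belongs there.
  interpreter = [local.python, "-m", "my_script"]
  command     = "{\"key_1\": \"value_1\"}"
  working_dir = "${path.module}/scripts"
}
```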
I managed to get the desired result with the following syntax:
resource "null_resource" "custom_objects" {
  for_each = local.custom_objects

  triggers = {
    name = each.key
  }

  provisioner "local-exec" {
    command = <<-EOT
      ${local.python} -m my_script '{"key_1": "value_1"}'
    EOT
    working_dir = "${path.module}/scripts" # works with ${path.root} as well
  }
}
You might also find terraform_data to be a good replacement for null_resource.
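For example, a minimal terraform_data equivalent might look like the sketch below (assuming Terraform 1.4 or later, where terraform_data is available):

```hcl
resource "terraform_data" "custom_objects" {
  for_each = local.custom_objects

  # Replaces the null_resource `triggers` block; any change to this
  # value forces the resource (and its provisioner) to be re-created.
  triggers_replace = each.key

  provisioner "local-exec" {
    command     = "${local.python} -m my_script '{\"key_1\": \"value_1\"}'"
    working_dir = "${path.module}/scripts"
  }
}
```

Unlike null_resource, terraform_data is built into Terraform itself, so it requires no provider dependency.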
Upvotes: 8