Reputation: 1363
I have a cloud-init bash script that pulls a tarball from S3, unpacks it, and then runs an Ansible playbook contained in it. I also have environment variables baked into /etc/environment, and the Ansible playbook uses lookup('env') to grab those values. It works fine when run through bash, either as the main user or as root, but when it fires via cloud-init, the variables don't come through.

The first line of my bash script is source /etc/environment, and I can echo the variables out just fine. It's only when the Ansible playbook does the lookup that it fails. Interestingly, I can force the variables through like so:

FOO=$FOO BAR=$BAR ansible-playbook -c local ...

and that works. Does anyone have any idea how I can avoid hardcoding the variables onto the ansible-playbook line and just have them pulled from /etc/environment as expected?
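For reference, the same lookup can be exercised from the shell with an ad-hoc debug task (FOO here is a stand-in for my real variable names), which is how I've been checking what Ansible actually sees:

ansible localhost -i "localhost," -c local -m debug -a "msg={{ lookup('env', 'FOO') }}"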
Edit: here's the cloud-init:
#!/bin/bash
source /etc/environment

doit() {
    # fetch the deploy bundle from S3 and unpack it
    aws s3 cp s3://my/scripts/dev-s3-push.tar.gz /tmp/my.tar.gz
    mkdir -p /app/deploy
    tar -C /app/deploy -zxvf /tmp/my.tar.gz
    cd /app/deploy
    # workaround: pass the variables explicitly so the playbook can see them
    FOO=$FOO BAR=$BAR ansible-playbook -i "localhost," -c local run.yml
}
doit
This script is added to the User Data section in AWS.
Upvotes: 1
Views: 1619
Reputation: 1363
Okay, so I figured it out. lookup('env') uses os.getenv underneath, and I found a few other questions about os.getenv not returning values as expected.
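You can see the same thing outside of Ansible. A minimal repro, assuming python3 is on the box and using a throwaway file instead of the real /etc/environment:

echo 'FOO=bar' > /tmp/env-test                     # same format as my old /etc/environment
source /tmp/env-test
echo "$FOO"                                        # prints bar -- set in this shell
python3 -c 'import os; print(os.getenv("FOO"))'    # prints None -- never exported to the child
export FOO
python3 -c 'import os; print(os.getenv("FOO"))'    # now prints bar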
The issue is that in my /etc/environment I had FOO=bar, where it should have been export FOO=bar. Without the export, source only sets the variables in the shell itself; they never make it into the environment handed to child processes like ansible-playbook, which is why os.getenv couldn't see them. Changing all the values over to the export form makes it work. I still have the source line in the cloud-init function, but I think this is solved now.
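For completeness, the working /etc/environment now looks like this (the values are placeholders):

export FOO=bar
export BAR=baz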
Upvotes: 1