imgeaslikok

Reputation: 626

Command 01_migrate failed on Amazon Linux 2 AMI

I have a Django project which is deployed to Elastic Beanstalk on the Amazon Linux 2 AMI. I installed PyMySQL to connect to the db and added these lines to settings.py, as shown below:

import pymysql

# Report a mysqlclient version recent enough to pass Django's version check
pymysql.version_info = (1, 4, 6, "final", 0)
# Make "import MySQLdb" resolve to PyMySQL
pymysql.install_as_MySQLdb()

I also have a .config file for migrating the db:

container_commands:
  01_migrate:
    command: "django-admin.py migrate"
    leader_only: true
option_settings:
  aws:elasticbeanstalk:application:environment:
    DJANGO_SETTINGS_MODULE: mysite.settings

Normally I was using mysqlclient on the Linux AMI with this .config file, but it doesn't work on the Linux 2 AMI, so I installed PyMySQL. Now I'm trying to deploy the updated version of my project, but I'm getting the error below:

Traceback (most recent call last):
  File "/opt/aws/bin/cfn-init", line 171, in <module>
    worklog.build(metadata, configSets)
  File "/usr/lib/python2.7/site-packages/cfnbootstrap/construction.py", line 129, in build
    Contractor(metadata).build(configSets, self)
  File "/usr/lib/python2.7/site-packages/cfnbootstrap/construction.py", line 530, in build
    self.run_config(config, worklog)
  File "/usr/lib/python2.7/site-packages/cfnbootstrap/construction.py", line 542, in run_config
    CloudFormationCarpenter(config, self._auth_config).build(worklog)
  File "/usr/lib/python2.7/site-packages/cfnbootstrap/construction.py", line 260, in build
    changes['commands'] = CommandTool().apply(self._config.commands)
  File "/usr/lib/python2.7/site-packages/cfnbootstrap/command_tool.py", line 117, in apply
    raise ToolError(u"Command %s failed" % name)
ToolError: Command 01_migrate failed

How can I fix this issue?

Upvotes: 14

Views: 6618

Answers (4)

djvg

Reputation: 14255

The answer from @nick-brady is great, and it provides the basic solution.

However, the AWS docs on migrating to Amazon Linux 2 suggest that we should do things like this using .platform hooks (this also applies to Amazon Linux 2023):

We recommend using platform hooks to run custom code on your environment instances. You can still use commands and container commands in .ebextensions configuration files, but they aren't as easy to work with. For example, writing command scripts inside a YAML file can be cumbersome and difficult to test.

and from the AWS Knowledge Center:

... it's a best practice to use platform hooks instead of providing files and commands in .ebextension configuration files.

As a bonus, output from the platform hooks is collected in a separate log file (/var/log/eb-hooks.log), which is included in bundle and tail logs by default. This makes debugging a bit easier.

The basic idea is to create a shell script in your application source bundle, e.g. .platform/hooks/postdeploy/01_django_migrate.sh. This is described in more detail in the platform hooks section in the docs for extending EB linux platforms.
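For reference, the relevant part of the source bundle would then look like this (using the file name from the example above):

.platform/
└── hooks/
    └── postdeploy/
        └── 01_django_migrate.sh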

The file must be executable, so: chmod +x .platform/hooks/postdeploy/01_django_migrate.sh

Update: On AL2 and AL2023 execute permissions are now automatically granted to all platform hook scripts.

The file content could look like this (based on @nick-brady's answer):

#!/bin/bash

source "$PYTHONPATH/activate" && {
  # log which migrations have already been applied
  python manage.py showmigrations;
  # migrate
  python manage.py migrate --noinput;
}

You can do the same with collectstatic etc.

Note that the path to the Python virtual environment is available to platform hooks as the environment variable PYTHONPATH. You can verify this by inspecting the file /opt/elasticbeanstalk/deployment/env on your instance, e.g. via ssh. Also see AWS knowledge center.
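For example, a quick check over ssh (the exact virtual environment directory name varies between platform versions, so treat the path below as illustrative):

sudo grep PYTHONPATH /opt/elasticbeanstalk/deployment/env
# PYTHONPATH=/var/app/venv/staging-LQM1lest/bin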

For those wondering, the && in the shell script is a kind of conditional execution: only do the following if the preceding succeeded. See e.g. here.

Leader only

During deployment, there should be an EB_IS_COMMAND_LEADER environment variable, which can be tested in order to implement leader_only behavior in .platform hooks (based on this post):

...

if [[ $EB_IS_COMMAND_LEADER == "true" ]];
then 
  python manage.py migrate --noinput;
  python manage.py collectstatic --noinput;
else 
  echo "this instance is NOT the leader";
fi

...
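Putting the pieces together, a complete leader-aware postdeploy hook could look like this (a sketch assembled from the snippets above; set -e is added so a failed migration fails the deployment):

#!/bin/bash
set -e  # abort the hook (and the deployment) if any command fails

source "$PYTHONPATH/activate"

if [[ $EB_IS_COMMAND_LEADER == "true" ]]; then
  python manage.py migrate --noinput
  python manage.py collectstatic --noinput
else
  echo "this instance is NOT the leader"
fi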

File permission issues

Note that .platform hooks run as the root user, whereas the app runs as webapp. This may lead to file permission errors if a file is created during the manage.py call in a platform hook, e.g. a logfile.

If that happens, a workaround is to run manage.py as the webapp user, for example with the help of su and heredoc:

#!/bin/bash

su webapp << HERE
source "$PYTHONPATH/activate" && {
python manage.py showmigrations;
python manage.py migrate --noinput;
}
HERE
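Note that the HERE delimiter is deliberately left unquoted, so $PYTHONPATH is expanded by the outer root shell before the commands are passed to su; that is what makes the variable available inside the heredoc.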

Upvotes: 12

Christopher

Reputation: 25

I ran into this issue as well. @nick-brady's answer was the solution until recently, when I started to get the error again.

The issue seemed to be that when AL2 ran python manage.py migrate, it didn't have access to my environment variables storing the database connection info.

The solution was to add another file to .ebextensions with the following code:

commands:
  setvars:
    command: /opt/elasticbeanstalk/bin/get-config environment | jq -r 'to_entries | .[] | "export \(.key)=\"\(.value)\""' > /etc/profile.d/sh.local
packages:
  yum:
    jq: []

I named this file setvars.config
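With an environment property configured in the EB console, say DB_HOST (a hypothetical name for illustration), the generated /etc/profile.d/sh.local would then contain lines such as:

export DB_HOST="mydb.xxxxxx.us-east-1.rds.amazonaws.com"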

Source: https://repost.aws/knowledge-center/elastic-beanstalk-env-variables-shell

Upvotes: 0

adrian stefan

Reputation: 61

In my case this .config worked:

container_commands:
  01_migrate:
    command: "django-admin.py migrate"
    leader_only: true
  02_collectstatic:
    command: "django-admin.py collectstatic --noinput"

I had this command, "source /var/app/venv/*/bin/activate && python3 manage.py ...", in my .config until 4 Jan, and suddenly I got a deployment error.

Upvotes: 1

Nick Brady

Reputation: 6572

Amazon Linux 2 has a fundamentally different setup than AL1, and the current documentation as of Jul 24, 2020 is out of date. The django-admin installed in the environment that Beanstalk creates does not appear to be on the PATH, so you can source the environment's activate script to make sure it is.
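You can verify this on the instance, e.g. via eb ssh (a quick sketch; the venv path may differ between platform versions):

which django-admin                    # prints nothing: not on the PATH
source /var/app/venv/*/bin/activate
which django-admin                    # now resolves inside the virtual env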

I left my answer here as well, which goes into much more detail on how I arrived at it, but the solution (which I don't love) is:

container_commands:
    01_migrate:
        command: "source /var/app/venv/*/bin/activate && python3 manage.py migrate"
        leader_only: true

Even though I don't love it, I have verified with AWS Support that this is in fact the recommended way to do this. You must source the python environment, as with AL2 they use virtual environments in an effort to stay more consistent.
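The * in /var/app/venv/*/bin/activate is a glob that matches the generated virtual environment directory (e.g. staging-LQM1lest on recent AL2 platforms, though the name isn't guaranteed), which is why the command keeps working even if that directory name changes between platform versions.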

Upvotes: 17
