Reputation: 393
I'm setting up aws-amplify in my project. When I configured it for the first time, it worked fine. I have since changed the repository, because I had to split a sub-tree out of the old repo. Now when I run amplify push I get:
Resource is not in the state stackUpdateComplete
⠸ Updating resources in the cloud. This may take a few minutes...
Error updating cloudformation stack
Following resources failed
✖ An error occurred when pushing the resources to the cloud
Resource is not in the state stackUpdateComplete
An error occurred during the push operation: Resource is not in the state stackUpdateComplete
Upvotes: 25
Views: 43235
Reputation: 1127
Just to give some background about this error: what does Resource is not in the state stackUpdateComplete actually mean?
Well basically Amplify is telling you that one of the stacks in your app did not deploy correctly, but it doesn't know why (which is remarkably unhelpful, but in fairness it's deploying a lot of potentially complex resources).
This can make diagnosing and fixing the issue really problematic, so I've compiled a kind of mental checklist that I go through to fix it. Each of the techniques will work some of the time, but I don't think there are any that will work all of the time. This list is not intended to help you diagnose what causes this issue, it's literally just designed to get you back up and running (there's a command sketch after the list).
1. Run amplify env pull --restore.
2. Run amplify push --iterative-rollback. It's supposed to roll your environment back to the last successful deployment, but tbh it rarely works.
3. Run amplify push --force. Although counter-intuitive, this is actually a rollback method. It basically does what you think --iterative-rollback will do, but works more frequently.
4. In the AWS console, open the S3 deployment bucket for your app (named something like amplify-${project_name}-${environment_name}-${some_random_numbers}-deployment). If there is a file called deployment-state.json, delete it and try amplify push again from the CLI.
5. Your amplify/team-provider-info.json file might be out of sync. Usually this is caused by the environment variable(s) in an Amplify Lambda function being set in one of the files but not in another. The resolution will depend on how out of sync these files are, but you can normally just copy the contents of the last working team-provider-info.json file across to the other repo (from where the deployment is failing) and run the deployment again. However, if you've got multiple devs/machines/repos, you might be better off diffing the files and checking where the differences are.
6. Hopefully you haven't got this far, but at this point I'd recommend you open a ticket in the amplify-cli GitHub with as much info as you can. They tend to respond in 1-2 working days.
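A minimal sketch of steps 1-3 and the diff from step 5, run from your project root (the repo paths in the diff are placeholders for your own checkouts):

$ amplify env pull --restore
$ amplify push --iterative-rollback
$ amplify push --force
$ diff old-repo/amplify/team-provider-info.json new-repo/amplify/team-provider-info.json   # step 5: spot what's out of sync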
If you're pre-production, or you're having issues with a non-production environment, you could also try cloning the backend environment in the Amplify console and seeing if you can get the stack working from there. If so, you can push the fixed deployment back to the previous env (if you want to) using amplify env checkout ${your_old_env_name} and then amplify push.
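For example (the env name here is hypothetical; run amplify env list to see yours):

$ amplify env list
$ amplify env checkout dev   # switch back to the previously broken env
$ amplify push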
If none of the above work (or you don't have time to wait for a response on a GitHub issue), head over to CloudFormation in the AWS console and search for the part of your stack that is erroring. There are a few different ways to do this:

1. Check the output from the failed push: the erroring stack is the one whose status isn't UPDATE_COMPLETE. You can copy the name of the stack and search for it in CloudFormation.
2. Search CloudFormation for your app name, open any of its stacks, and follow the Parent stack link; repeat until you find a stack with no parent. You are now in the root stack of your deployment, and there are two ways to find your erroring stack from here:
- Go to the Resources tab and find a row with something red in the status column. Select the stack from this row.
- Go to the Events tab and find a row with something red in the status column. Select the stack from this row.

Once you're in the erroring stack, click the Stack actions button and select Detect drift from the dropdown menu. Then click the Stack actions button again and select View drift results from the dropdown menu. On the Resource drift results page, you'll see a list of resources in the stack. If any of them show DRIFTED in the Drift status column, select the radio button to the left of that item and then click the View drift details button. The drift details will be displayed side by side, git-style, on the next page. You can also click the checkbox(es) in the list above to highlight the drift change(s). Keep the current page open, you'll need it later.

Fix the drift (usually by bringing your local Amplify config back in line with what's deployed, then running amplify push again and waiting for the build to complete in order for the fix to be deployed to your environment). Then click the Detect stack drift button at the top of the page and it will update. Hopefully you've solved the problem.
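If you prefer the terminal, the same drift detection can be kicked off with the AWS CLI (the stack name below is a placeholder; detection runs asynchronously, so give it a moment before querying results):

$ aws cloudformation detect-stack-drift --stack-name amplify-myapp-dev-123456-apimyapp
$ aws cloudformation describe-stack-resource-drifts --stack-name amplify-myapp-dev-123456-apimyapp --stack-resource-drift-status-filters DRIFTED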
Another fun thing that Amplify does from time to time is (seemingly spontaneously) changing the server-side encryption setting on the definition of some or all of your DynamoDB tables without you even touching it. This is by far and away the most bizarre Amplify error I've encountered (and that's saying something)!
I have a sort-of fix for this, which is to open amplify/backend/api/${your_api_name}/parameters.json and change the DynamoDBEnableServerSideEncryption setting from false to true, save it, then run amplify push. This will fail. But it's fine, because then you just reverse the change (set it back to false), save it, push again and voila! I still cannot for the life of me understand how or why this happens.
I said it's a sort-of fix, and that's because you'll still see drift for the stacks that deploy the affected tables in CloudFormation. This goes away after a while. Again, I have no idea how or why.
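For reference, that toggle dance can be scripted; this is a sketch assuming the flag appears as a top-level boolean in parameters.json, as described above (GNU sed shown; on macOS use sed -i ''):

$ cd amplify/backend/api/${your_api_name}
$ sed -i 's/"DynamoDBEnableServerSideEncryption": false/"DynamoDBEnableServerSideEncryption": true/' parameters.json
$ amplify push    # expected to fail
$ sed -i 's/"DynamoDBEnableServerSideEncryption": true/"DynamoDBEnableServerSideEncryption": false/' parameters.json
$ amplify push    # should now succeed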
Obviously this last resort comes with a huge disclaimer: don't do this in production. If you're working with any kind of DB, you will lose the data.
You can make backups of everything and then start to remove the problematic resources one at a time, with an amplify push in between each one, until the stack builds successfully. Once it's built, you can start adding your resources back in. (A backup sketch follows below.)
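For the backup step, if DynamoDB tables are what you care about, an on-demand backup from the AWS CLI looks something like this (the table name, backup name, and the category being removed are all placeholders):

$ aws dynamodb create-backup --table-name MyModel-abc123-dev --backup-name pre-teardown
$ amplify remove api    # example: remove one problematic category
$ amplify push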
Hopefully this helps someone, please feel free to suggest edits or other solutions.
Upvotes: 38
Reputation: 1
I used the below steps and it worked for me:
Upvotes: 0
Reputation: 899
I debugged my AWS Amplify CLI push error by doing the following (there's a CLI equivalent after these steps):

1. Open CloudFormation in the AWS console.
2. Find the parent stack for your app (named something like amplify-companyName-envName-123456).
3. Open the Events tab.
4. Find the event with status UPDATE_FAILED, which should give you a detailed description of why it failed, e.g. The following resource(s) failed to create: ...
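If you'd rather do steps 3 and 4 from the terminal, a query along these lines should surface the failure reason (the stack name is a placeholder):

$ aws cloudformation describe-stack-events --stack-name amplify-companyName-envName-123456 --query "StackEvents[?ResourceStatus=='UPDATE_FAILED'].[LogicalResourceId,ResourceStatusReason]" --output table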
Alternatively (to find the parent stack):

1. Open your app's backend environment in the Amplify console and go to the Overview tab.
2. Click View in CloudFormation.
3. On the Stack info tab, click the link for Parent stack.
4. On the parent stack's page, open the Events tab as above.

Upvotes: 3
Reputation: 786
The solution is (a CLI version is sketched below):
a. Go to the S3 bucket containing the project settings.
b. Locate the deployment-state.json file in the root folder and delete it.
c. Run amplify push
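From the CLI, that's roughly the following (the bucket name is a placeholder; yours follows the amplify-<project>-<env>-<id>-deployment pattern):

$ aws s3 ls | grep deployment    # find the deployment bucket
$ aws s3 rm s3://amplify-myproject-dev-123456-deployment/deployment-state.json
$ amplify push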
Upvotes: 3
Reputation: 61
In my case it was an issue that appeared when switching between Amplify environments (amplify env checkout). The error was not clear, but this is what I did to fix it without having to "clear" the api and lose the whole database:
Upvotes: 0
Reputation: 614
In my case the issue was due to multiple @connection directives referring to GSIs, which were not getting removed and added correctly when I pushed the api changes.
I was able to resolve this by running amplify pull, then commenting out the @connection directives and the GSIs linked to them, and adding each new change back manually. But there was trouble getting the GSIs linked again, because the local update considered a GSI already removed while in the cloud it seemed to be retained, and I got an error that a GSI being added already existed in the cloud. So I renamed the model, which recreated it as new tables in DynamoDB, and then reverted it back to the correct name. This is only suitable for a dev environment where recreating tables has little impact.
It ate up most of my time, but it did fix my issue.
Upvotes: 0
Reputation: 27
In my opinion, these kinds of problems are always related to 3rd-party auth.
Fixing that will fix the problem.
Upvotes: -1
Reputation: 1872
I got this after making some modifications to my GraphQL schema. I adjusted the way I was making @connection directives on a few tables. I was able to fix this by following these steps (a shell sketch follows):

1. Back up the schema changes you've made.
2. Run amplify pull to restore your local to be in sync with your backend in the cloud.
3. At this point amplify push should work without flaws, because it is synced to the cloud and there should be no updates.
4. Reapply your schema changes and run amplify push once more to see if it works.

If it doesn't work, undo the overwrite to the pulled schema and compare what is different between the pulled schema and the updated schema that you backed up. Do a line-by-line diff check to see what has changed, and try to push the changes one by one to see where it is failing. I think it is wiser not to push too many changes to the schema at once. Do it one by one so that you can troubleshoot more easily. If you do have other issues, they should be unrelated to the one highlighted in this question, because the pulling should solve this particular issue.
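A rough shell version of the numbered steps above, using a hypothetical API name:

$ cp amplify/backend/api/myapi/schema.graphql /tmp/schema.graphql.bak   # step 1: back up your edits
$ amplify pull                                                          # step 2: sync with the cloud
$ amplify push                                                          # step 3: should be a no-op
$ cp /tmp/schema.graphql.bak amplify/backend/api/myapi/schema.graphql   # step 4: reapply changes
$ amplify push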
Upvotes: 0
Reputation: 29
As mentioned by others in this thread, the issue comes from one of the resources that you updated locally.
Check which ones you modified:
$ amplify status
Then remove and add the resource again, followed by a push. The Api category is known not to work with updates right now, so you must remove it if you've changed it locally:
$ amplify api remove YourAPIName
$ amplify api add
$ amplify push
Upvotes: -2
Reputation: 124
It looks like a conflict between the backend and the local project.
The only thing that worked for me was backing up the local schema and running the amplify pull command.
Then I restored the backed-up schema file and ran amplify push.
In most cases, updates in the following file must be set manually (for Android): app/src/main/res/raw/amplifyconfiguration.json
Upvotes: -2
Reputation: 55
You can try the below.
First do
amplify env checkout {environment}
and then
amplify push
Upvotes: 1
Reputation: 130
This worked for me:
amplify remove storage
And then:
amplify add storage
Then, again:
amplify push
The cause was that after the original amplify add storage I had mistakenly chosen Y for Do you want to add a Lambda Trigger for your S3 Bucket? even though I didn't have any Lambda function. I also didn't have anything in my bucket, so removing and re-adding storage was safe. (If you'd rather not remove the category, see the sketch below.)
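If your bucket isn't empty, it may also be possible to re-run the interactive prompts and answer N to the trigger question instead of removing storage; this is an assumption about the update flow, not something the answer above tried:

$ amplify update storage   # re-run the prompts, answer N to the Lambda trigger question
$ amplify push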
Upvotes: 0
Reputation: 3943
This worked for me:
$ amplify update auth
Choose the option “Yes, use default configuration” (this uses the Cognito Identity Pool).
Then:
$ amplify push
Another reason can be this:
The issue is tied to the selection of this option, Select the authentication/authorization services that you want to use:
User Sign-Up & Sign-In only (Best used with a cloud API only)
which creates just the UserPool and not the IdentityPool which the root stack is looking for. It's a bug and we'll fix that. To unblock, for just the first question, you could select:
❯ User Sign-Up, Sign-In, connected with AWS IAM controls (Enables per-user Storage features for images or other content, Analytics, and more)
which would create a user pool as well as the identity pool, and then choose any of the other configurations that you've mentioned above.
Upvotes: 5