Reputation: 1619
I am trying to deploy the KYPO cyber range and am following its official guide. While deploying the whole range using ansible-playbook, I am stuck on the following error:
TASK [docker : install prerequisites] ******************************************************************
fatal: [192.168.211.208]: FAILED! => {"changed": false, "msg": "Failed to update apt cache: unknown reason"}
I have manually run apt-get update, which initially gave me this notice:
N: Skipping acquire of configured file 'stable/binary-i386/Packages' as repository 'https://download.docker.com/linux/ubuntu focal InRelease' doesn't support architecture 'i386'
I followed this to add [arch=amd64] to the repository entry, which cleared the notice. Now apt-get update runs without any warnings or errors, but ansible-playbook keeps generating the same error.
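For reference, a corrected entry typically looks like this (the exact file path is an assumption; /etc/apt/sources.list.d/docker.list is the usual location for a standard Docker install on Ubuntu 20.04):
# /etc/apt/sources.list.d/docker.list (usual location; path may vary)
deb [arch=amd64] https://download.docker.com/linux/ubuntu focal stable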
I changed the verbosity level and got:
fatal: [192.168.211.208]: FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"allow_unauthenticated": false,
"autoclean": false,
"autoremove": false,
"cache_valid_time": 0,
"deb": null,
"default_release": null,
"dpkg_options": "force-confdef,force-confold",
"force": false,
"force_apt_get": false,
"install_recommends": null,
"name": [
"apt-transport-https",
"ca-certificates"
],
"only_upgrade": false,
"package": [
"apt-transport-https",
"ca-certificates"
],
"policy_rc_d": null,
"purge": false,
"state": "present",
"update_cache": true,
"update_cache_retries": 5,
"update_cache_retry_max_delay": 12,
"upgrade": null
}
},
"msg": "Failed to update apt cache: unknown reason"
}
How can I fix this?
Upvotes: 9
Views: 31527
Reputation: 1
I had exactly the same error messages. What I did was check whether my apache2.service was active. It was not, so I ran "systemctl start apache2" after running "vagrant up".
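As a quick sketch of that check (this assumes Apache is relevant to your setup, e.g. it serves a locally hosted package repository):
systemctl status apache2        # check whether the service is active
sudo systemctl start apache2    # start it if it is not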
Upvotes: 0
Reputation: 21
Another solution is to set ignore_errors: yes on the task. If the underlying warnings are not important, this can be an acceptable workaround.
The full task:
- name: Update apt cache
  apt:
    update_cache: yes
  ignore_errors: yes
Upvotes: 0
Reputation: 1
In my case, the problem occurred because I had not allowed access to port 80 in the pfSense firewall rules used in my study lab: by default, pfSense does not create access rules for additional interfaces, which was my situation.
Running the update command from the server, I confirmed that the connection was being blocked.
After opening port 80, Ansible was able to communicate and install the packages using "apt".
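A quick way to verify that kind of blockage from the target server, as a sketch (the hosts below are just example repository endpoints):
# test plain-HTTP reachability to an apt mirror (example host)
nc -vz archive.ubuntu.com 80
# test HTTPS reachability to the Docker repository metadata (example URL)
curl -I https://download.docker.com/linux/ubuntu/dists/focal/InRelease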
Kind regards.
Upvotes: 0
Reputation: 4001
Check that your firewall isn't blocking ports required by apt. I hadn't opened port 80 or 443, which was causing apt to fail.
I found this by running sudo apt update manually on the remote server, which gave me a list of "Cannot initiate the connection to..." warning messages that included the URL and port of the endpoint apt was trying to reach.
See manpages for a list of ports that you might need to open.
I suspect that UDP 53 also needs to be open for DNS resolution. This is untested since I already had 53 open.
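As a sketch of opening those ports on a host firewall (this assumes ufw; adjust for whatever firewall is actually doing the blocking):
sudo ufw allow out 80/tcp     # HTTP repositories
sudo ufw allow out 443/tcp    # HTTPS repositories
sudo ufw allow out 53         # DNS (TCP and UDP)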
Upvotes: 0
Reputation: 398
I solved this by resolving the warning messages that appeared on the target host after running apt-get update. Any warning is treated as fatal by this Ansible task.
Example:
Err:7 https://packages.cloud.google.com/apt kubernetes-xenial InRelease
The following signatures couldn't be verified because the public key is not available: NO_PUBKEY B53DC80D13EDEF05
Fetched 409 kB in 1s (407 kB/s)
Reading package lists... Done
W: An error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: https://packages.cloud.google.com/apt kubernetes-xenial InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY B53DC80D13EDEF05
W: Failed to fetch https://apt.kubernetes.io/dists/kubernetes-xenial/InRelease The following signatures couldn't be verified because the public key is not available: NO_PUBKEY B53DC80D13EDEF05
W: Some index files failed to download. They have been ignored, or old ones used instead.
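In this example the warning is a missing GPG key, so one way to clear it, as a sketch (the key ID is taken from the warning above; the keyserver choice and the apt-key approach are assumptions, and apt-key is deprecated on newer releases), is:
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys B53DC80D13EDEF05    # import the missing key
sudo apt-get update    # re-run; the warning should be gone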
Upvotes: 3
Reputation: 171
For anyone getting this as a first hit in the future, if you are certain there are no connection or internet issues, there is another solution. If your apt task includes update_cache: true, ensure that there are no warnings or errors when running apt update on the remote machine.
In my case, there was a missing signature for a Kubernetes sources list that was no longer used.
The warning I got after running sudo apt update was:
W: An error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: https://packages.cloud.google.com/apt kubernetes-xenial InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY B53DC80D13EDEF05
Removing /etc/apt/sources.list.d/kubernetes.list resulted in no errors or warnings from apt update, and fixed the Ansible issue.
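A minimal way to apply that fix on the remote machine (the path is the one from the warning above):
sudo rm /etc/apt/sources.list.d/kubernetes.list    # drop the unused list
sudo apt update    # should now finish without warnings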
The task then completed without issues and I was able to install packages using the apt task, including update_cache: true.
Upvotes: 11
Reputation: 1619
In KYPO CRP, while running the Ansible playbook, the error was actually coming from one of the OpenStack instances, which I found out by increasing the verbosity of the command with -vvvv. Everything was fine with the host machine. So I looked at the instances, and the root cause was that they had no internet access. Once I managed to connect them to the external world, the error went away.
Upvotes: 4