EDIT: My apologies, I didn't include version information; it's been a while. This is Ansible 2.9.18, and as it stands we can't currently update it.
I've got a slightly complicated use case. I use Ansible (via an AWX server) to manage a number of servers, including one that hosts my own repository, which all my servers are configured to use when updating packages with dnf. The servers, including the repo server, use Podman for all their services.
The workflow I need is:
- check if updates are available and set a variable based on this which is used moving forward
- if they are available, download them
- if they are available, stop pods
- if they are available, install them, and reboot if needed
- start all the pods again
As far as I can tell this is working as intended on all but one of my servers. When I try to do this on the server that hosts the repo, I keep getting errors on the install step that suggest the repo is unavailable. It is, because I've stopped the pods. So I need to skip anything that attempts to connect to the repo, and I thought having already downloaded the packages would do this, but it appears not.
This is the relevant part of my playbook (everything before and after this section is fine).
- hosts: all
  remote_user: "{{ remote_user }}"
  gather_facts: true
  tasks:
    - name: check for updates on AlmaLinux
      become: true
      command: dnf list updates --quiet
      register: update_check_output
      when: ansible_distribution == 'AlmaLinux'
      changed_when: false

    - name: set updates_available variable
      set_fact:
        updates_available: "{{ update_check_output.stdout_lines | length > 0 }}"
      when: ansible_distribution == 'AlmaLinux'

    - name: download updates for AlmaLinux
      become: true
      dnf:
        update_cache: yes
        download_only: true
      when: ansible_distribution == 'AlmaLinux' and updates_available

    - name: stop any pods to avoid conflicts
      become: true
      become_user: "{{ pod_user }}"
      shell:
        cmd: podman pod stop "{{ item }}"
      loop: "{{ pods }}"
      when: pods is defined and ansible_distribution == 'AlmaLinux' and updates_available

    - name: stop any systemd pods to avoid conflicts
      become: true
      become_user: "{{ pod_user }}"
      shell:
        cmd: XDG_RUNTIME_DIR=/run/user/$(id -u) systemctl --user stop "{{ item }}"
      loop: "{{ systemd_stop_pods }}"
      when: systemd_stop_pods is defined and ansible_distribution == 'AlmaLinux' and updates_available

    - name: install updates to AlmaLinux servers
      become: true
      dnf:
        name: "*"
        state: latest
        disable_gpg_check: yes
      when: ansible_distribution == 'AlmaLinux' and updates_available

    - name: check if AlmaLinux servers require restarting
      shell: needs-restarting -r
      failed_when: false
      register: reboot_required
      changed_when: false
      when: ansible_distribution == 'AlmaLinux'

    - name: reboot AlmaLinux family servers if required
      become: yes
      reboot:
        reboot_timeout: 300
      when: ansible_distribution == 'AlmaLinux' and reboot_required.rc != 0
I tried adding disable_gpg_check: yes to the task that installs updates, but I still get the same issue. So it appears this works on the other servers only because it doesn't matter that they still try to reach the repo; it obviously fails on the one server where stopping the pods takes the repo down.
For the record, here is my error:
[MIRROR] bind-libs-9.16.23-11.el9_2.2.x86_64.rpm: Status code: 502 for https://myserver/yum-mirror/9/AppStream/x86_64/os/Packages/bind-libs-9.16.23-11.el9_2.2.x86_64.rpm (IP: 10.xx.xx.xx)
The downloaded packages were saved in cache until the next successful transaction.
You can remove cached packages by executing 'dnf clean packages'.
Error: Error downloading packages:
bind-libs-32:9.16.23-11.el9_2.2.x86_64: Cannot download, all mirrors were already tried without success
Seems like I should be able to do this, but I've been searching high and low for an answer and can't find one.
Use the cacheonly parameter of the dnf module.
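With cacheonly set, dnf performs the transaction entirely from the local cache instead of contacting the repository. A minimal sketch of the install task rewritten to use it (note: cacheonly was added to the dnf module in a later ansible-core release than 2.9, so this assumes an upgraded control node):

```yaml
# Install from packages already downloaded by the download_only task;
# no connection to the (now stopped) repo is attempted.
- name: install updates to AlmaLinux servers from cache
  become: true
  dnf:
    name: "*"
    state: latest
    cacheonly: true
  when: ansible_distribution == 'AlmaLinux' and updates_available
```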
If this parameter is unavailable in your Ansible version, you can use the command module instead.
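On Ansible 2.9 the same effect can be achieved by calling dnf directly with its -C/--cacheonly flag, which makes dnf run from cached metadata and packages only. A sketch (the changed_when test on dnf's "Nothing to do" output is an assumption to keep the task idempotent-looking, adjust as needed):

```yaml
# Fallback for Ansible 2.9: run dnf itself in cache-only mode so the
# upgrade uses the packages fetched earlier by the download_only task.
- name: install updates to AlmaLinux servers from cache (command fallback)
  become: true
  command: dnf -y --cacheonly upgrade
  register: dnf_upgrade_result
  changed_when: "'Nothing to do' not in dnf_upgrade_result.stdout"
  when: ansible_distribution == 'AlmaLinux' and updates_available
```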