openCivilisation

Reputation: 936

Why are my NFSv4 mounts not working with Ansible?

With Ansible, I cannot mount NFSv4.

I have NFSv4 exports configured on the server, and I can mount both NFSv4 and plain NFS from a bash shell.

I can also get plain NFS to work in Ansible, just not NFSv4.

So I'm wondering how I can mount a share like /pool1/volume1 on the server to the same-style path on the client, /pool1/volume1.

I tried switching to standard NFS, which worked; and I can mount NFSv4 from a bash shell, just not with Ansible.

This works:

  - name: mount softnas NFS volume
    become: yes
    mount:
      fstype: nfs
      path: "/pool1/volume1"
      opts: rsize=8192,wsize=8192,timeo=14,intr,_netdev
      src: "10.0.11.11:/pool1/volume1"
      state: mounted

But this doesn't:

  - name: mount softnas NFS volume
    become: yes
    mount:
      fstype: nfs4
      path: "/pool1/volume1"
      opts: rsize=8192,wsize=8192,timeo=14,intr,_netdev
      src: "10.0.11.11:/pool1/volume1"
      state: mounted

And if I use this command from a shell, it mounts the paths into /test just fine:

  sudo mount -t nfs4 10.0.11.11:/ /test

It's still not quite right, though, because I'd like /pool1/volume1 and /pool2/volume2 not to appear under /test.
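In mount module terms, that shell command corresponds to something like this (an untested sketch; /test is just my temporary mount point):

  - name: mount the NFSv4 pseudo-root for testing
    become: yes
    mount:
      fstype: nfs4
      path: "/test"
      src: "10.0.11.11:/"
      state: mounted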

My exports file on the server is this:

/ *(ro,fsid=0)
# These mounts are managed in ansible playbook softnas-ebs-disk-update-exports.yaml
# BEGIN ANSIBLE MANAGED BLOCK /pool1/volume1/
/pool1/volume1/ *(async,insecure,no_subtree_check,no_root_squash,rw,nohide)
# END ANSIBLE MANAGED BLOCK /pool1/volume1/
# BEGIN ANSIBLE MANAGED BLOCK /pool2/volume2/
/pool2/volume2/ *(async,insecure,no_subtree_check,no_root_squash,rw,nohide)
# END ANSIBLE MANAGED BLOCK /pool2/volume2/

When I try to switch to nfs4, I get this error from Ansible:

Error mounting /pool1/volume1/: mount.nfs4: mounting 10.0.11.11:/pool1/volume1/ failed, reason given by server: No such file or directory

Upvotes: 1

Views: 2465

Answers (3)

joerobb

Reputation: 11

I came across this same issue. Using Ansible 2.11.5, the above answer didn't work for me: Ansible complained about the fstype "nfs4", and quoting was needed around the equals signs in "opts". Here is what I used:

- name: mount AWS EFS Volume
  mount:
    fstype: nfs
    path: "/mnt/efs"
    opts: "nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport"
    src: "{{ efs_share_address[env] }}:/"
    state: mounted

From the role's vars file:

efs_share_address:
  qa: 10.2.2.2
  stage: 10.3.3.3
  production: 10.4.4.4
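For context, env in the src lookup is just a variable set per environment elsewhere in my setup; a rough, purely illustrative sketch of how it might be supplied (the host group and role names here are made up):

- hosts: app_servers
  vars:
    env: qa            # efs_share_address[env] then resolves to 10.2.2.2
  roles:
    - efs_mount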

Upvotes: 1

openCivilisation

Reputation: 936

I'm not sure exactly what fixed it, but I decided to opt for the recommended workflow of bind-mounting my exports below the /export folder, and using

/export *(ro,fsid=0)

...as the root share, and then these:

/export/pool1/volume1 *(async,insecure,no_subtree_check,no_root_squash,rw,nohide)
/export/pool2/volume2 *(async,insecure,no_subtree_check,no_root_squash,rw,nohide)
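The bind mounts that put the volumes under /export can be managed with the same mount module, by the way; a rough sketch of one such server-side task (untested, repeat per volume):

- name: bind /pool1/volume1 below the /export pseudo-root
  become: yes
  mount:
    fstype: none
    opts: bind
    path: "/export/pool1/volume1"
    src: "/pool1/volume1"
    state: mounted

With this layout the client can mount 10.0.11.11:/pool1/volume1 as nfs4 directly, because NFSv4 resolves paths relative to the fsid=0 export.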

Upvotes: 0

Prakash Krishna

Reputation: 1257

Check if the mount point exists:

mkdir -p /pool1/volume1  # if it doesn't exist; or create an Ansible task to create the directory (see the sketch below)
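A sketch of such a task, using the file module:

  - name: ensure the mount point directory exists
    become: yes
    file:
      path: /pool1/volume1
      state: directory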

Updated: since the export root on the server is /, the src should be / as well:

  - name: mount softnas NFS volume
    become: yes
    mount:
      fstype: nfs4
      path: "/pool1/volume1"
      opts: rsize=8192,wsize=8192,timeo=14,intr,_netdev
      src: "10.0.11.11:/"
      state: mounted

If you don't want to mount /, then export /pool1/volume1 on the server instead.

Upvotes: 0
