Reputation: 38601
When I tried to mount the same NFS (v4) folder in 2 Kubernetes (v1.30.x) pods, one of them threw this error:
│ Persisting documents to "/opt/data/yjs-storage"
│
│ /home/node/app/node_modules/levelup/lib/levelup.js:119
│ return callback(new OpenError(err))
│ ^
│ Error [OpenError]: IO error: lock /opt/data/yjs-storage/LOCK: Resource temporarily unavailable
│ at /home/node/app/node_modules/levelup/lib/levelup.js:119:23
│ at /home/node/app/node_modules/abstract-leveldown/abstract-leveldown.js:38:14
│ at /home/node/app/node_modules/deferred-leveldown/deferred-leveldown.js:31:21
│ at /home/node/app/node_modules/abstract-leveldown/abstract-leveldown.js:38:14
│ at /home/node/app/node_modules/abstract-leveldown/abstract-leveldown.js:38:14
│ Emitted 'error' event on LevelUP instance at:
│ at /home/node/app/node_modules/levelup/lib/levelup.js:60:19
│ at /home/node/app/node_modules/levelup/lib/levelup.js:119:14
│ at /home/node/app/node_modules/abstract-leveldown/abstract-leveldown.js:38:14
│ [... lines matching original stack trace ...]
│ at /home/node/app/node_modules/abstract-leveldown/abstract-leveldown.js:38:14 {
│ [cause]: undefined
│ }
│
│ Node.js v18.20.6
It looks like the 2 pods are not allowed to open the same NFS folder? This is the NFS config:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-texhub-server-pv-qingdao
spec:
  capacity:
    storage: 8Gi
  nfs:
    server: 60cw9b7f-osv72.cn-qingdao.nas.aliyuncs.com
    path: /k8s/reddwarf-pro/texhub-server-service
  accessModes:
    - ReadWriteMany
  claimRef:
    kind: PersistentVolumeClaim
    namespace: reddwarf-pro
    name: texhub-server-service-pv-claim-qingdao
    uid: 926e70a4-651f-467a-9a81-0b87a7b696ee
    apiVersion: v1
    resourceVersion: '1080096'
  persistentVolumeReclaimPolicy: Retain
  mountOptions:
    - vers=4.0
    - noresvport
  volumeMode: Filesystem
Is it possible to let multiple pods write to the same NFS directory?
Upvotes: 0
Views: 61
Reputation: 267
As stated in the comments, the error you are seeing suggests that several processes are using the NFS mount at once and that there is a locking conflict on the NFS server: LevelDB takes an exclusive lock on its LOCK file, and the second pod fails to acquire it.
Note that LevelDB is designed to be opened by a single process at a time. If you need to run multiple pods, you can use Kubernetes mechanisms such as Pod Affinity or Pod Anti-Affinity to make sure that only one pod accesses the database directory at a time.
In addition, Pod Topology Spread Constraints can help control how the pods are distributed and may help with this kind of constraint.
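As a rough sketch only (the Deployment name, the `app: texhub-server` label, and the topology key are assumptions, not taken from your setup): the safest pattern for a LevelDB directory on shared NFS is a single replica with a `Recreate` strategy, so old and new pods never hold the LOCK file at the same time; the anti-affinity rule additionally keeps any replicas on separate nodes:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: texhub-server          # assumed name for illustration
spec:
  replicas: 1                  # LevelDB allows only one opener at a time
  strategy:
    type: Recreate             # avoid old/new pod overlap during a rollout
  selector:
    matchLabels:
      app: texhub-server       # assumed label
  template:
    metadata:
      labels:
        app: texhub-server
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: texhub-server
              topologyKey: kubernetes.io/hostname
```

Keep in mind that anti-affinity only controls scheduling; with a ReadWriteMany volume it does not by itself stop two running pods from opening the same database directory, which is why the single replica is the key part here.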
Upvotes: 0