Yinka

Reputation: 2081

ENOSPC: no space left on device - Node.js

I just built an application with Express.js for an institution where they upload video tutorials. At first the videos were being uploaded to the same server, but later I switched to Amazon; only the videos are uploaded to Amazon. Now I get this error whenever I try to upload: ENOSPC: no space left on device. I have cleared the tmp folder to no avail. I should say that I have searched extensively about this issue, but none of the solutions seem to work for me.

Upvotes: 125

Views: 255724

Answers (20)

dntrplytch

Reputation: 159

My context is a Node.js application developed in Visual Studio Code (VS Code) on a Windows machine. I needed to install and run clean-modules to overcome this issue.

npm install --global clean-modules
clean-modules

Please research and validate use of clean-modules for your own project.

Closing and relaunching VS Code, restarting the Windows machine, etc. did not help. Lack of disk space was not the issue.

As has been reported here, the number of files in the directory structure may have been the issue. There likely are Windows settings that I have not researched.

Upvotes: 0

Gargi Kantesaria

Reputation: 77

This error can occur when your server runs out of disk space. The steps below will solve the problem if you're using EC2.

  1. Connect to your instance using the terminal and run the command below.
df -h

This will give you detailed information about storage usage.

  2. To increase the space, go to the AWS console and increase the EBS volume attached to your instance from X GB to whatever size you need.

  3. Reboot your server once the volume is resized successfully.
  4. Check again with the command from step 1. Your issue should be resolved. (A CLI sketch of the same resize follows below.)
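
If you prefer the command line, here is a minimal sketch of the same resize. The volume ID and size below are placeholders, and on many instances you also need to grow the partition and filesystem after the volume resize; device names vary, so adapt them to your setup.

# resize the EBS volume (placeholder volume ID and target size)
aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --size 20

# then, on the instance, grow the partition and the ext4 filesystem
# (assumes the root volume is /dev/xvda, partition 1)
sudo growpart /dev/xvda 1
sudo resize2fs /dev/xvda1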

Upvotes: 2

RdC1965

Reputation: 580

As pointed out here, there is a different case from that of the question, where the reason for running out of disk space is a system upgrade (e.g. sudo apt update && sudo apt upgrade -y) that leaves the system partition full. Look for 100% usage on a partition like /dev/root in the df output.
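
To check just the root partition, for example:

df -h /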

I tried several solutions included above, but nothing helped until I ran the command:

sudo apt-get clean

As stated here:

The apt-get clean command helps to clean out the cache once you have installed packages using the apt-get install command. It removes files that are no longer required but are still residing on your system, taking up disk space.

Upvotes: 0

Daniel Danielecki

Reputation: 10590

  1. Open Docker Desktop
  2. Go to Troubleshoot
  3. Click Reset to factory defaults

Upvotes: 1

Awara Amini

Reputation: 582

This worked for me:

sudo docker system prune -af

Upvotes: 5

Timotheo Mhoja

Reputation: 61

I first check free disk space with this command (-h gives human-readable output):

df -h

then I reclaimed more space with the command below; it reported "Total reclaimed space: 2.77GB" (free space had been down to 0.94GB):

sudo docker system prune -af

This worked for me.

Upvotes: 0

Joshua Dyck

Reputation: 2173

tl;dr:

Restart Docker Desktop

The only thing that fixed this for me was quitting and restarting Docker Desktop.

I tried docker system prune, removed as many volumes as I safely could, removed all containers and many images, and nothing worked until I quit and restarted Docker Desktop.

Before restarting Docker Desktop, the system prune removed 2GB; after restarting, it removed 12GB.

So, if you ran system prune and it didn't help, try restarting Docker and running the prune again.

That's what I did and it worked. I can't say I understand why it worked.
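
If you're on macOS, the whole cycle looks something like this (the osascript/open commands are macOS-specific assumptions; use your platform's way of quitting and relaunching Docker Desktop):

osascript -e 'quit app "Docker"'   # quit Docker Desktop
open -a Docker                     # relaunch it
# wait for the daemon to come back up, then:
docker system prune -af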

Upvotes: 1

Subhrangshu Adhikary

Reputation: 404

Adding to the discussion: the commands above help even when the program is not run from Docker.

Repeating them here:

sudo sysctl fs.inotify.max_user_watches=524288
docker system prune

Upvotes: 3

spikeyang

Reputation: 690

In my case (a Linux ext4 filesystem), the large_dir feature needed to be enabled.

# check whether it's enabled
sudo tune2fs -l /dev/sdc | grep large_dir

# enable it
sudo tune2fs -O large_dir /dev/sdc

On Ubuntu, an ext4 filesystem limits the number of entries in a single directory to about 64M by default, unless large_dir is enabled.

Upvotes: 0

Xelphin

Reputation: 349

I had the same problem. Clearing the trash, if you haven't already, worked for me:

(I found the command on a forum, so read about it before you decide to use it. I'm a beginner and just copied it; I don't know the full scope of what it does.)

$ rm -rf ~/.local/share/Trash/*

The command is from this forum post:

https://askubuntu.com/questions/468721/how-can-i-empty-the-trash-using-terminal

Upvotes: 5

Zia Ullah

Reputation: 323

I struggled with this for some time; the following command worked:

docker system prune

But then I checked the volume and it was full. On inspecting it, I found that node_modules had become the real trouble.

So I deleted node_modules, ran npm install again, and it worked like a charm.

Note: this worked for me on a Node.js and React project.
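
For reference, the reinstall amounts to the following, run from the project root:

rm -rf node_modules
npm install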

Upvotes: 0

Renjith

Reputation: 2451

Just need to clean up the Docker system in order to tackle it. Worked for me.

$ docker system prune

Link to official docs

Upvotes: 203

Carlos Orduño

Reputation: 21

The previous answers fixed my problem only for a short period of time. I had to find the big files that weren't being used and were filling my disk. On the host computer I ran df and got the output below; my problem was /dev/nvme0n1p3:

Filesystem     1K-blocks      Used Available Use% Mounted on
udev            32790508         0  32790508   0% /dev
tmpfs            6563764    239412   6324352   4% /run
/dev/nvme0n1p3 978611404 928877724         0 100% /
tmpfs           32818816    196812  32622004   1% /dev/shm
tmpfs               5120         4      5116   1% /run/lock
tmpfs           32818816         0  32818816   0% /sys/fs/cgroup
/dev/nvme0n1p1    610304     28728    581576   5% /boot/efi
tmpfs            6563764        44   6563720   1% /run/user/1000

I installed ncdu and ran it against the root directory. You may need to manually delete a small file to make space for ncdu; if that's not possible, you can use du to find the large files manually:

sudo apt-get install ncdu
sudo ncdu /

That helped me identify the big files; in my case they were in the /tmp folder. I then used this command to delete the ones that hadn't been accessed in the last 10 days:

sudo find /tmp -type f -atime +10 -delete

Upvotes: 1

narasimhanaidu budim

Reputation: 191

I have come across a similar situation where the disk had free space but the system was not able to create new files. I use forever to run my Node app, and forever needs to open a file to keep track of the node processes it runs.

If you have free storage space but keep getting error messages such as "No space left on device", you're likely out of room in your inode table.

Use df -i, which reports IUse% like this:

Filesystem       Inodes  IUsed    IFree IUse% Mounted on
udev             992637    537   992100    1% /dev
tmpfs            998601   1023   997578    1% /run

If IUse% reaches 100%, your inode table is exhausted.

Identify unnecessary or junk files on the system and delete them.
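
One way to see where the inodes are going is to count files per top-level directory on the root filesystem (this can take a while on a large disk):

sudo find / -xdev -type f | cut -d/ -f2 | sort | uniq -c | sort -rn | head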

Upvotes: 19

vipinlalrv

Reputation: 3075

You can set a new limit temporarily with:

sudo sysctl fs.inotify.max_user_watches=524288

sudo sysctl -p

If you'd like to make the limit permanent, use:

echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf

sudo sysctl -p

Upvotes: 6

Hailin Tan

Reputation: 1109

In my case, I got the error 'npm WARN tar ENOSPC: no space left on device' while running Node.js in Docker. I used the command below to reclaim space:

sudo docker system prune -af

Upvotes: 64

alnorth29

Reputation: 3602

I got this error when my script was trying to create a new file. It may look like you've got lots of space on the disk, but if you've got millions of tiny files, you could have used up all the available inodes. Run df -hi to see how many inodes are free.
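
For example:

df -hi

The IFree and IUse% columns show how many inodes remain on each filesystem.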

Upvotes: 5

Yinka

Reputation: 2081

Well, in my own case, what actually happened was that while the files were being uploaded to Amazon Web Services, I wasn't deleting them from the temp folder. When you upload files to a server, they are initially stored in a temp folder before being copied to whichever folder you want (this is how it works in Node.js and PHP, at least). So try clearing your temp folder, and make sure your upload method deletes the file from the temp folder immediately after every upload.
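
For illustration, here is a minimal sketch of that cleanup, assuming multer for the upload middleware and the aws-sdk v2 S3 client; the bucket name and route are hypothetical, so adapt them to your app:

const fs = require('fs');
const express = require('express');
const multer = require('multer');
const AWS = require('aws-sdk');

const app = express();
const upload = multer({ dest: '/tmp/uploads' }); // temp folder multer writes to
const s3 = new AWS.S3();

app.post('/videos', upload.single('video'), (req, res) => {
  s3.upload({
    Bucket: 'my-video-bucket', // hypothetical bucket name
    Key: req.file.originalname,
    Body: fs.createReadStream(req.file.path),
  }, (err, data) => {
    // Remove the temp file whether the upload succeeded or failed,
    // so /tmp doesn't slowly fill up and trigger ENOSPC.
    fs.unlink(req.file.path, () => {});
    if (err) return res.status(500).send(err.message);
    res.json({ location: data.Location });
  });
});

app.listen(3000);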

Upvotes: 6

Yinka

Reputation: 2081

The issue was actually a result of the temp folder not being cleared after uploads, so all the videos uploaded so far were still sitting in the temp folder and the disk had filled up. The temp folder has now been cleared and everything works fine.

Upvotes: 0

omt66

Reputation: 5019

I had the same problem; take a look at the accepted answer on Stack Overflow here:

Node.JS Error: ENOSPC

Here is the command that I used (my OS: Linux Mint 18.3 Sylvia, an Ubuntu/Debian-based Linux system):

echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf && sudo sysctl -p

Upvotes: 46
