I moved my EFI partition, which caused the system to drop into emergency mode on reboot.
Running mount -a confirmed that fstab still had the UUID of the old partition; it wasn't mounting, and that is what triggered emergency mode.
You can't do ANY of the standard remount,rw tricks that normally work... they always fail with an error saying the mount didn't work.
Sure, I could start from the USB stick and edit /etc/fstab, but isn't there an easier way?
This question has been around for 10 years and most people answer with a remount as rw, but that always fails.
The clever way is simply to mount the / partition on /mnt, like this:
mount /dev/sda1 /mnt
This mounts it read-write, and you just edit /mnt/etc/fstab to put in the new UUID for your partition, which you can get from either blkid or ls -lha /dev/disk/by-uuid.
However, the read-only filesystem will NOT show your changes, so you may think you've failed. You'll look at /etc/fstab and it will appear unchanged (at least under btrfs it looks unchanged).
However, when you reboot, you are back in business.
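Putting it together, a minimal sketch of the whole fix, assuming (as above) that the root filesystem is on /dev/sda1:
mount /dev/sda1 /mnt        # mounts read-write, bypassing the read-only root
blkid                       # or: ls -lha /dev/disk/by-uuid, to get the new UUID
vi /mnt/etc/fstab           # replace the old UUID with the new one (any editor works)
reboot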
I resized my EFI partition down to 260 MB, but had to remove and recreate it to do that, which changes the UUID. Updating /etc/fstab with the new UUID is all you need to do to avoid trouble. It's best to do that when you move the partition, rather than after the fact.
I have RHEL 7.9 installed and here is what I did to edit the fstab after copying from one machine to another using scp.
mount -o remount,rw /dev/sda2 #sda2 is where my root directory is located.
I was then able to open it in vim and save the UUID changes I had to make. Worked like a charm.
I recently ran a report on my EC2 server and was told that it had run out of space. I deleted the partially generated CSV from my report (it was going to be a pretty sizable one), ran df -h, and was surprised to get this output:
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1      7.8G  7.0G  718M  91% /
devtmpfs         15G  100K   15G   1% /dev
tmpfs            15G     0   15G   0% /dev/shm
I was surprised not only by how little was available and how much space was used (I am on the /dev/xvda1 filesystem), but also by seeing two other filesystems.
To investigate what was taking so much space, I ran du -h in ~ and got a list of directories with their sizes. Their reported sizes in aggregate don't come anywhere close to 7 GB, which is why I'm asking: what is taking up all that space?
The biggest directory by far was ~ itself at 165 MB; all others were 30 MB and below. My mental math added it up to WAY less than 7 GB. (If I understand du -h correctly, all directories within ~ ought to be included in that 165 MB, so I am very confused how 7 GB could be full.)
Does anyone know what's going on here, or how I can clean up the space? Also, just out of curiosity, is there a way to use the devtmpfs/tmpfs filesystems from the same box? I am running on AWS Linux, with versions of Python and Ruby installed.
According to this answer, it might be because of log files getting too large. Try running the command the OP mentioned in their answer to find all large files: sudo find / -type f -size +10M -exec ls -lh {} \;
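Since du -h in ~ only covers the home directory, it can also help to scan from / as root to see which top-level directory is the culprit (a generic approach, not taken from the answers here):
sudo du -xh --max-depth=1 / | sort -h   # -x stays on the root filesystem; sorted by size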
For me, the best option was to delete the overlay2 Docker folder and completely reset Docker to a clean state. It cleared up more than 3 GB in my case.
Important note: this will stop and remove your containers, so you will need to rebuild them.
To do that, run the prune while the Docker daemon is still running, then stop the Docker engine and delete the entire Docker data directory (not just the overlay2 folder):
docker system prune
sudo systemctl stop docker
sudo rm -rf /var/lib/docker
Restart docker:
sudo systemctl start docker
The engine will restart without any images, containers, volumes, user-created networks, or swarm state.
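Before going through the full reset, it may be worth confirming that Docker's data is actually what is filling the disk; a quick check (assuming the default data root /var/lib/docker):
docker system df              # usage broken down by images, containers, volumes, build cache
sudo du -sh /var/lib/docker   # total size of Docker's data directory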
Additionally, you can remove snap with:
sudo apt autoremove --purge snapd
I just built an application with Express.js for an institution where they upload video tutorials. At first the videos were being uploaded to the same server, but later I switched to Amazon; I mean only the videos are being uploaded to Amazon. Now I get the error ENOSPC: no space left on device whenever I try to upload. I have cleared the tmp folder to no avail. I should say that I have searched extensively about this issue, but none of the solutions seem to work for me.
You just need to clean up the Docker system to tackle it. Worked for me:
$ docker system prune
Link to official docs
In my case, I got the error 'npm WARN tar ENOSPC: no space left on device' while running Node.js in Docker. I just used the command below to reclaim space:
sudo docker system prune -af
I had the same problem; take a look at the selected answer on Stack Overflow here:
Node.JS Error: ENOSPC
Here is the command that I used (my OS: Linux Mint 18.3 Sylvia, which is an Ubuntu/Debian-based Linux system).
echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf && sudo sysctl -p
I have come across a similar situation where the disk has free space but the system is not able to create new files. I am using forever to run my Node app; forever needs to open a file to keep track of the Node processes it's running.
If you’ve got free storage space on your system but keep getting error messages such as “No space left on device”, you’re likely running out of space in your inode table.
Use df -i, which shows IUse% like this:
Filesystem     Inodes IUsed  IFree IUse% Mounted on
udev           992637   537 992100    1% /dev
tmpfs          998601  1023 997578    1% /run
If your IUse% reaches 100%, it means your inode table is exhausted.
Identify dummy or unnecessary files on the system and delete them.
I got this error when my script was trying to create a new file. It may look like you've got lots of space on the disk, but if you've got millions of tiny files on the disk then you could have used up all the available inodes. Run df -hi to see how many inodes are free.
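If df -hi shows the inodes are nearly gone, a rough way to see which top-level directory holds most of the files (and therefore inodes), assuming GNU coreutils 8.22+ for du --inodes:
sudo du -x --inodes --max-depth=1 / | sort -n | tail   # -x stays on the root filesystem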
I had the same problem. You can clear the trash if you haven't already; it worked for me.
(I found the command on a forum, so read about it before you decide to use it; I'm a beginner and just copied it, and I don't know the full scope of what it does exactly.)
$ rm -rf ~/.local/share/Trash/*
The command is from this forum:
https://askubuntu.com/questions/468721/how-can-i-empty-the-trash-using-terminal
Well, in my own case, what actually happened was that while the files were being uploaded to Amazon Web Services, I wasn't deleting them from the temp folder. Every developer knows that when uploading files to a server, they are initially stored in a temp folder before being copied to whichever folder you want (at least for Node.js and PHP). So try clearing your temp folder and see, and make sure your upload method clears the temp folder immediately after every upload.
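As a rough way to check and clear the staging area (assuming uploads land in /tmp; adjust the path to wherever your upload middleware stages files):
du -sh /tmp                                # how much the temp directory is holding
sudo find /tmp -type f -mmin +60 -delete   # remove upload leftovers older than an hour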
You can set a new limit temporarily with:
sudo sysctl fs.inotify.max_user_watches=524288
sudo sysctl -p
If you'd like to make your limit permanent, use:
echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
Adding to the discussion, the above command works even when the program is not run from Docker.
Repeating that command:
sudo sysctl fs.inotify.max_user_watches=524288
docker system prune
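To confirm the new inotify limit took effect:
sysctl fs.inotify.max_user_watches          # or: cat /proc/sys/fs/inotify/max_user_watches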
The previous answers fixed my problem for a short period of time.
I had to find the big files that weren't being used and were filling my disk.
On the host computer I ran df:
I got this; my problem was /dev/nvme0n1p3:
Filesystem      1K-blocks      Used Available Use% Mounted on
udev             32790508         0  32790508   0% /dev
tmpfs             6563764    239412   6324352   4% /run
/dev/nvme0n1p3  978611404 928877724         0 100% /
tmpfs            32818816    196812  32622004   1% /dev/shm
tmpfs                5120         4      5116   1% /run/lock
tmpfs            32818816         0  32818816   0% /sys/fs/cgroup
/dev/nvme0n1p1     610304     28728    581576   5% /boot/efi
tmpfs             6563764        44   6563720   1% /run/user/1000
I installed ncdu and ran it against the root directory. You may need to manually delete a small file first to make space for ncdu; if that's not possible, you can use find to locate the large files manually:
sudo apt-get install ncdu
sudo ncdu /
That helped me identify the files; in my case they were in the /tmp folder. I then used this command to delete the ones that hadn't been accessed in the last 10 days:
sudo find /tmp -type f -atime +10 -delete
tl;dr:
Restart Docker Desktop
The only thing that fixed this for me was quitting and restarting Docker Desktop.
I tried docker system prune, removed as many volumes as I safely could, removed all containers and many images, and nothing worked until I quit and restarted Docker Desktop.
Before restarting Docker Desktop, the system prune removed 2 GB; after restarting, it removed 12 GB.
So, if you tried to run system prune and it didn't work, try restarting Docker and running the system prune again.
That's what I did and it worked. I can't say I understand why it worked.
This worked for me:
sudo docker system prune -af
Open Docker Desktop
Go to Troubleshoot
Click Reset to factory defaults
The issue was actually a result of the temp folder not being cleared after upload, so all the videos that had been uploaded were still in the temp folder and the disk space had been exhausted. The temp folder has been cleared and everything works fine now.
I struggled hard with this for some time; the following command worked:
docker system prune
But then I checked the volume and it was full. I inspected it and found that node_modules had become the real trouble.
So I deleted node_modules, ran npm install again, and it worked like a charm.
Note: this worked for me on a Node.js and React project.
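For reference, the cleanup described above amounts to (run from the project directory):
docker system prune
rm -rf node_modules
npm install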
In my case (a Linux ext4 filesystem), the large_dir feature needed to be enabled.
# check if it's enabled
sudo tune2fs -l /dev/sdc | grep large_dir
# enable it on the same filesystem
sudo tune2fs -O large_dir /dev/sdc
On Ubuntu, an ext4 filesystem has a limit of about 64M entries in a single directory by default, unless large_dir is enabled.
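To see whether a directory is actually near that limit, you can count its entries (the path here is just a placeholder):
ls -f /path/to/huge/dir | wc -l     # -f skips sorting, which matters for huge directories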
I checked free space first using this command (human-readable output):
free -h
Then I reclaimed more free space — Total reclaimed space: 2.77GB (up from 0.94GB) — using this command:
sudo docker system prune -af
This worked for me.
The environment is VirtualBox with Ubuntu 12.04. It has two disks; /dev/sda1 and /dev/sdb1 are both ext4 filesystems.
Since /dev/sdb1 was added after the system was installed, I want to mount it manually. I tried this command:
sudo mount -o user,defaults /dev/sdb1 ~/project
No errors were reported. Then I got the mount info with mount:
/dev/sdb1 on /home/igsrd/project rw,noexec,nosuid,nodev
But when I ran ls -l on /home/igsrd, I found that the mount point still belongs to root, so I can't touch anything in it. Why does it still belong to root?
I have another machine running Ubuntu 12.04, too. Mounting another partition there with the same options works fine, with correct permissions (ownership). What could be different between them?
*nix permissions on a filesystem that supports them natively, e.g. ext4, will be maintained regardless of how it is mounted when using a proper filesystem driver, e.g. the native ext4 driver built into Linux.
Why don't you just (while still root) do this?
chown -R <your-user-name> ~<your-user-name>/project
i.e., the partition of interest is already mounted read-only. The partition needs to be mounted read-write only for executing particular lines of the script; after that, the partition should go back to its previous read-only state.
The question is about the QNX operating system, and the correct way to remount the partition as read/write is with the command below:
mount -uw /
To remount a partition read-write:
mount /mnt/mountpoint -oremount,rw
and to remount it read-only:
mount /mnt/mountpoint -oremount,ro
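To limit write access to just the part of the script that needs it, those two commands can bracket that section, e.g.:
mount /mnt/mountpoint -o remount,rw
# ... the lines that need write access ...
mount /mnt/mountpoint -o remount,ro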
You may be interested in the remount option.
For example, this command is widely used on rooted Android devices:
mount -o remount,rw /system
mount -o remount,ro /system
mount(8) - Linux man page
Filesystem Independent Mount Options
remount
Attempt to remount an already-mounted filesystem. This is commonly used to change the mount flags for a filesystem, especially to make a readonly filesystem writeable. It does not change device or mount point.
The remount functionality follows the standard way the mount command works with options from fstab: the mount command only skips reading fstab (or mtab) when both a device and a directory are fully specified.
mount -o remount,rw /dev/foo /dir
After this call all old mount options are replaced and arbitrary stuff from fstab is ignored, except the loop= option which is internally generated and maintained by the mount command.
mount -o remount,rw /dir
After this call, mount reads fstab (or mtab) and merges those options with the options from the command line (-o).