Starting emulator for AVD 'AVD_for_Nexus_5_by_Google' NAND: could not write file /tmp/android-ehdg/emulator-xxjXQs, File exists - emulation

When I try to run an Android emulator I get the error "NAND: Could not write file...file exists". When searching on this error, one answer said I needed to free up some space on my hard drive. I have since freed up a few gigabytes, but I still get the same error.
I get this error whether I am running the AVD manager from Eclipse or from the command line. Does anyone know why I am getting the "NAND: Could not write file...file exists" error, or why I always get the message about the emulator running?

It's really frustrating when an error message can't completely describe the problem!
I finally figured out that both kinds of error messages ("File exists" and "There is no space") are caused by too little space being mounted on the /tmp folder.
On Linux, run mount and you'll see the parameters of the tmpfs mounted on /tmp.
Changing the $ANDROID_TMP value is not a proper solution and doesn't actually work.
Instead, edit fstab:
sudo gedit /etc/fstab
and increase the /tmp size to a larger value that makes sense for you (e.g. 1500M),
then reboot the system.
PS: This is not IDE-dependent (Eclipse or Android Studio); it is something to do with the AVD emulator itself.
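The fstab change might look like this (a sketch; the 1500M size is just an example value, adjust it to your machine):

```
# /etc/fstab: tmpfs entry for /tmp with an increased size (example value)
tmpfs   /tmp   tmpfs   defaults,noatime,size=1500M   0   0
```

You can verify the new size after reboot with mount | grep /tmp.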

Related

Arch Linux not booting (Maybe related to not having left space for root)

I committed an error during my Arch Linux install that I didn't know how to remedy, so I just stuck with it for months: I gave only ~35 GB of storage space to the root partition.
For the last few months my PC has been warning about low space left on the root partition, but it always showed about ~400 MB free, so I ignored the problem hoping it would go away.
Today, when I tried to boot Arch Linux, everything went normally until the file system started. It then left me with an unblinking cursor on a black screen.
The output after that is exactly this:
_
I cannot open a terminal with Ctrl + Alt + F1..F6, and I can't do anything.
I assume this is happening because there is no space left on the root partition (maybe the file system crashes); is that right? If so, how could I increase the root partition's space without having access to Arch Linux? Would reinstalling Arch Linux solve the problem?
This is my fault; I should have been more careful.
This is what I would do in your situation:
Make a bootable USB with Balena Etcher
Boot from the USB
Run lsblk and df -h to get more information about your storage devices
If your old root partition is (almost) full, you can mount it from your Live USB environment and make some space.
You can also consider moving your home directory to a separate partition.
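The mount-and-clean step from the live environment might look like this (a sketch; /dev/sda2 is an assumed device name, check yours with lsblk):

```shell
# From the live USB environment, inspect partitions first (read-only):
lsblk -f || true
df -h
# Then, assuming /dev/sda2 is the full root partition (adjust to your layout!):
#   sudo mount /dev/sda2 /mnt
#   sudo du -xh --max-depth=2 /mnt | sort -h | tail -20   # find the big consumers
#   sudo rm -rf /mnt/var/cache/pacman/pkg/*               # Arch package cache is safe to clear
#   sudo umount /mnt
```

Clearing the pacman package cache alone often frees several GB on a long-running Arch install.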

SYSTEM ERROR: I/O error 0 in writeto, ret 2048, file 56(/mfgtmp/tmp/srtE5yybD), addr 77010944. (290) - PROGRESS 4GL

I suddenly get the error below when my Progress program has been running for more than 80 minutes. I think this is an OS error, and error 0 says it is out of disk space. I checked the disk space and it shows 14 GB available, but I am not sure why I am getting this error.
Is it because a write ran out of disk space (exceeding 14 GB) and stopped, so that the available 14 GB stayed the same?
SYSTEM ERROR: I/O error 0 in writeto, ret 2048, file 56(/mfgtmp/tmp/srtE5yybD), addr 77010944. (290)
By default temp files are created "unlinked". Because of this the space they were using is automatically reclaimed by the OS if the session crashes so you will often have a situation where your temp file ran out of space, the session crashed, and then when you investigate there is plenty of free space.
You can change the default behavior by using the -t (lower case) startup parameter. This will result in the files not being removed if a session crashes - so the space will not be returned to the OS. You will have to manually delete "stale" files if you enable -t.
On UNIX -t will also make the files visible in the -T (upper case) directory so that you can see their growth in real time.
On Windows the files are always visible and the current length is not consistently reported by system tools.
If your temp files are being written to a different filesystem than your working directory (the -T startup parameter is where temp files go) then you should have a "protrace.pid" file corresponding to the crashed session's process id and the timestamp of the crash. This will then lead you to the 4gl code that was creating the very large srt file.
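Combining the two parameters on UNIX might look something like this (a sketch; the client executable name, procedure name, and paths are assumptions for your install):

```
# start the session with a dedicated temp directory (-T) and visible,
# non-unlinked temp files (-t) so srt growth can be watched
_progres -T /mfgtmp/tmp -t -p myprogram.p

# in another terminal, watch the srt file grow in real time
ls -lh /mfgtmp/tmp/srt*
```

Remember to delete stale srt files manually afterwards, since -t prevents automatic cleanup on a crash.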
14GB is far beyond "reasonable" so you really should look at that code and see if there is a better way to do whatever it is doing.
There are a number of k-base articles on that issue, for instance: https://knowledgebase.progress.com/articles/Knowledge/000027351
When you check disk space, please make sure you're checking the correct file system (/mfgtmp in this case).
The error message references an srt file, so you might want to try to reduce srt file usage; see this article for some initial help: https://knowledgebase.progress.com/articles/Knowledge/P95930
Or: https://knowledgebase.progress.com/articles/Knowledge/P84475

How to initialize USB in Bash

So I'm trying to create a bootable USB to help out with tech support for some friends. I decided to go with Puppy Linux, since some of my family members don't have very powerful machines.
I bought a flash drive, only to be met by this error once I inserted it.
All dandy; I can probably fix this. I open up Disk Utility, navigate to the drive, and try to run First Aid, hoping to get it up and running in no time. And, boom, another error:
Fixing damaged partition map.
The operation couldn’t be completed. (com.apple.DiskManagement error -69874.)
Operation failed…
OK, a usable error I can Google. Boy was I wrong: I found a whole load of nothing. So I did some digging in the terminal docs and came up with this command: diskutil eraseVolume ExFAT MyName disk3,
which returns another error:
Started erase on disk3
Unmounting disk
Error: -69888: Couldn't unmount disk
which led me here. Is my drive half DOA, or is there some simple fix?
Looks like you are using macOS; in that case, I would suggest using etcher.io, since the process is pretty easy and straightforward.
In case you would like to do it from the terminal and already have the image, you could do something like this:
First, find your USB drive; try this command:
$ diskutil list
Let's assume it is /dev/disk3; then try to unmount it:
$ diskutil unmountDisk /dev/disk3
Now use the dd command. Keep in mind it will destroy any existing data on the target disk:
$ sudo dd bs=1m if=latest-USB.img of=/dev/rdisk3
Notice the r (raw) in /dev/rdisk3; writing to the raw device is usually much faster.
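Once dd finishes, you can eject the drive before pulling it out (a sketch, assuming the same disk3 identifier as above):

```
$ diskutil eject /dev/disk3
```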

What happens when you delete shared memory files in /dev/shm using the 'rm' command

I used POSIX shared memory to communicate between two processes. Then, while the two processes were sharing data, I used the 'rm' command to remove all the shared files mounted in /dev/shm. I expected some errors to happen, but everything still worked normally.
So I have a question:
What happens when I use the rm command to delete all the shared memory files in the /dev/shm directory?
I have Googled but cannot find anywhere that discusses this situation.
Can anyone please explain it to me?
Thanks so much.
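What you observed can be reproduced in a few lines of Python (a minimal sketch using the standard multiprocessing.shared_memory module, which wraps POSIX shared memory on Linux):

```python
from multiprocessing import shared_memory

# Create a POSIX shared memory segment; on Linux it appears as a file in /dev/shm
shm = shared_memory.SharedMemory(create=True, size=16)
shm.buf[0] = 42

# unlink() removes the /dev/shm entry, just like `rm /dev/shm/<name>` would,
# but existing mappings remain valid until every process closes them
shm.unlink()
value = shm.buf[0]   # still readable: the mapping outlives the name
shm.close()          # the kernel reclaims the memory once the last mapping is gone
```

That is why rm produced no errors: it only removes the name, while your processes keep using their existing mappings; the memory is actually reclaimed only after both of them detach.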

CentOS free space on disk not updating

I am new to Linux and working with a CentOS system.
Running the command df -H shows it is 82% full, i.e. only 15 GB is free.
I wanted some extra space, so using WinSCP I shift-deleted a 15 GB record
and executed df -H once again, but it still shows 15 GB free. Where did the freed space of the deleted file go?
Please help me find a solution to this.
In most Unix filesystems, if a file is open, the OS will delete the file right away but will not release the space until the file is closed. Why? Because the file is still visible to the user that opened it.
On the other side, Windows used to complain that it couldn't delete a file because it was in use; it seems that in later incarnations Explorer will pretend to delete the file.
Some applications are famous for bad behavior related to this. For example, I have had to deal with some versions of MySQL that do not properly close some files; over time I have found several GB of space wasted in /tmp.
You can use the lsof command to list open files (man lsof). If the problem is related to open files and you can afford a reboot, that is most likely the easiest way to fix the problem.
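The deleted-but-still-open behavior is easy to demonstrate (a minimal Python sketch):

```python
import os
import tempfile

# Create a file, keep the handle open, then delete it
f = tempfile.NamedTemporaryFile(delete=False)
f.write(b"x" * 1024)
f.flush()

os.remove(f.name)                  # the directory entry is gone...
assert not os.path.exists(f.name)

f.seek(0)
data = f.read()                    # ...but the open handle still reads the data
f.close()                          # the blocks are freed only on the last close
```

Until f.close() runs, df will not show the space as free, which is exactly the situation you are seeing.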
