ecryptfs size different from home directory size [closed] - linux

I'm puzzled. My hard disk is full and most of the space is used by .ecryptfs/$MYUSERNAME (810.4 GB). Strangely, my home directory /home/MYUSERNAME (22.2 GB) consumes significantly less disk space. Any idea what is wrong, or where to look for the "missing" free space?

eCryptfs slightly pads files, and the overhead of encryption will slightly increase your overall disk usage, but certainly not to the degree you describe, 37x!
The only disk usage that really matters is that of your /home/.ecryptfs/$USER directory, which is where the encrypted files are actually stored on disk. What you're seeing in terms of the usage of $HOME is essentially phantom: the cleartext decryption of those files only exists in memory, not on disk.
To see your true usage, use:
df -h /home
du -sh /home
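As a rough cross-check, you can also compare the encrypted store with the mounted view directly; a minimal sketch, assuming the standard eCryptfs encrypted-home layout where the ciphertext lives under /home/.ecryptfs/$USER/.Private:
# Size of the encrypted on-disk store (this is the real usage)
sudo du -sh /home/.ecryptfs/$USER/.Private
# Apparent size of the decrypted view, only meaningful while the home is mounted
du -sh /home/$USER
# Overall filesystem usage for comparison
df -h /home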

Related

Increase /tmp directory space in linux 7 through terminal [closed]

I have a problem installing Oracle DB because /tmp does not have the required free space. How can I increase the space of the /tmp folder from the terminal?
Hopefully you have some free space on the disk; it is possible to give a particular partition more of it, in this case /tmp.
Open the terminal and run
df -h
This will show the disk space you currently have on the system.
To increase the space for the partition, type
sudo umount /tmp
sudo mount -t tmpfs -o size=1048576,mode=1777 overflow /tmp
The size option sets the total size of the tmpfs in bytes, so size=1048576 gives a 1 MB /tmp, and adding an extra zero (size=10485760) gives 10 MB. Set the size to however much space you need.
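If /tmp is already tmpfs-backed, it can also be grown in place without unmounting; a minimal sketch (the 2G figure is an assumption, pick whatever the Oracle installer requires):
sudo mount -o remount,size=2G /tmp   # tmpfs accepts k/m/g size suffixes
df -h /tmp                           # verify the new size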

Cannot allocate memory [closed]

I'm using a virtual machine to do the work.
I gave the volume a capacity of 32 MB. According to "cat /proc/meminfo", I have approximately 1.4 GB of memory available, which should be more than enough to mount it.
However, whenever it is mounted, it gets automatically unmounted because memory cannot be allocated (as shown in the error screenshot). I tried to adjust the heap size, but the result is still the same.
I solved the problem by assigning more memory to the virtual machine, even though the existing amount already seemed more than sufficient to hold the volume.
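As a rough sanity check before mounting, the kernel's own view of free memory can be inspected (MemAvailable only exists on reasonably recent kernels):
grep -E 'MemTotal|MemAvailable' /proc/meminfo   # kernel's estimate of usable memory
free -m                                         # same information in MB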

dd command to adjust to file system [closed]

I have a problem using the dd command. Assume that I am writing a 20 MB file to a 100 MB partition. After the write, I am not able to access the remaining 80 MB.
dd if=temp_file of=/dev/sdb1
Is there a way I can tell dd to adjust to the file system that I am writing into?
All I am interested in is knowing whether there is a way to use the 80 MB of space without disturbing the initial 20 MB.
By using the dd command the way you do, you overwrite the file-system data, including the important metadata about the file-system. If temp_file contains a file-system sized for a 20 MB partition, then a 20 MB file-system is what you will get, regardless of how big the target partition is.
If you want a 100 MB partition, you need to create a 100 MB disk image to write to the disk.
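Alternatively, assuming temp_file holds an ext2/3/4 filesystem image, a common sketch is to write the small image and then grow the filesystem to fill the whole partition:
sudo dd if=temp_file of=/dev/sdb1 bs=4M status=progress   # write the 20 MB filesystem image
sudo e2fsck -f /dev/sdb1                                  # required check before resizing
sudo resize2fs /dev/sdb1                                  # grow the filesystem to fill the partition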

Data destroy using shred agains ext4 filesystem [closed]

I'm running shred against a block device with a couple of ext4 filesystems on it.
The block devices are virtual drives: RAID-1 and RAID-5. The controller is a PERC H710P.
command
shred -v /dev/sda; shred -v /dev/sdc ...
From the shred man/info page I understand that shred might not be effective on journaling filesystems, but only when shredding individual files.
Can anyone please explain whether shredding a block device is a safe way to destroy all data on it?
This is a complex issue.
The only way that is 100% effective is physical destruction. The problem is that the drive firmware can mark sectors as bad and remap them to a pool of spares. These sectors are effectively no longer accessible to you but the old data may be recoverable from those sectors by other means (such as an alternate firmware or physically removing the platters).
That being said, running shred on the block device does not suffer from the issues caused by journaling.
The problem with journaling is that, for partial overwrites to be recoverable, the filesystem cannot actually overwrite the original data in place; the overwrite of the file takes place in a second physical location, leaving the first intact. Writing directly to the block device is not subject to journaling.
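For reference, a minimal sketch of wiping a whole device with explicit shred options (/dev/sdX is a placeholder, not a device from the question):
sudo shred -v -n 3 -z /dev/sdX   # three random passes, then a final pass of zeros, with progress output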

How to shrink an ext4 partition without formatting it? [closed]

Recently I installed Ubuntu 13.04 and allocated 20 GB for it. The installed system takes up less than 10 GB. Now, can I shrink the partition to 10 GB without formatting it?
That is to say, I don't want to have a large amount of empty space in the partition.
You could use the resize2fs command.
However, I would suggest backing up the most important files (e.g. to a USB key) before doing that (e.g. /etc/ and some of /home/).
See also this question...
By the way, 20 GB for the system partition is not that much.
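As a rough sketch of the usual shrink workflow, assuming the filesystem lives on a partition referred to here as /dev/sdXN (a placeholder) and that everything is run from a live USB so it is unmounted:
sudo umount /dev/sdXN           # the filesystem must not be mounted while shrinking
sudo e2fsck -f /dev/sdXN        # a full check is required before resize2fs will shrink
sudo resize2fs /dev/sdXN 10G    # shrink the ext4 filesystem to 10 GiB
# The partition itself must then be shrunk separately (e.g. with parted or GParted),
# keeping it at least as large as the resized filesystem.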
