Destroying data with shred against an ext4 filesystem [closed] - linux

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 7 years ago.
I'm running shred against block devices with a couple of ext4 filesystems on them.
The block devices are virtual drives - RAID-1 and RAID-5. The controller is a PERC H710P.
The command:
shred -v /dev/sda; shred -v /dev/sdc ...
I understand from the shred man/info page that shred may not be effective on journaling filesystems, but only when shredding individual files.
Can anyone explain whether running shred against a block device is a safe way to destroy all data on it?

This is a complex issue.
The only method that is 100% effective is physical destruction. The problem is that the drive firmware can mark sectors as bad and remap them to a pool of spares. Those sectors are no longer accessible to you, but the old data may still be recoverable from them by other means (such as alternate firmware or physically removing the platters).
That said, running shred on the block device does not suffer from the journaling issue.
The problem with journaling is that, for partial overwrites to be recoverable, the filesystem cannot overwrite the original data in place; the new version of a file may land in a second physical location, leaving the first intact. Writing directly to the block device bypasses the filesystem and is not subject to journaling.
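As a sketch of that approach (here /dev/sdX is a placeholder; replace it only after double-checking the device name with lsblk, since the command is irreversibly destructive):

```shell
# One random pass followed by a final zero pass over the whole device.
# /dev/sdX is a PLACEHOLDER -- verify the device name first!
shred -v -n 1 -z /dev/sdX

# Spot-check: after the -z pass, the start of the device should read as zeros.
cmp -n 1048576 /dev/sdX /dev/zero && echo "first 1 MiB is zeroed"
```

The final -z pass also makes the wipe less conspicuous, since a zeroed device looks factory-blank rather than full of random data.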

Related

LVM on Linux: what does "LEs" mean in my task? Maybe it is a typo [closed]

Closed 2 years ago.
At the moment I'm preparing for the RHCSA exam and completing some practice tasks. I have a question about this task: "Create a logical volume called linuxadm of size equal to 10 LEs in the vgtest volume group (create vgtest with a PE size of 32MB) with mount point /mnt/linuxadm and xfs file system structures. Create a file called linuxadmfile in the mount point. Set the file system to automatically mount at each system reboot."
In this task I don't understand what "LEs" means; I tried to google it and found nothing.
LE means logical extent.
From the lvcreate man page:
lvcreate creates a new LV in a VG. For standard LVs, this requires allocating logical extents from the VG's free physical extents. If there is not enough free space, the VG can be extended with other PVs (vgextend(8)), or existing LVs can be reduced or removed (lvremove(8), lvreduce(8)).
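Putting the task together, a hypothetical walk-through might look like this (/dev/sdb is an assumed spare disk for the lab, not part of the original task; adjust to your setup):

```shell
pvcreate /dev/sdb                    # assumed spare disk for the lab
vgcreate -s 32M vgtest /dev/sdb      # PE size of 32 MB, per the task
lvcreate -l 10 -n linuxadm vgtest    # 10 LEs = 10 x 32 MB = 320 MB
mkfs.xfs /dev/vgtest/linuxadm
mkdir -p /mnt/linuxadm
mount /dev/vgtest/linuxadm /mnt/linuxadm
touch /mnt/linuxadm/linuxadmfile
# Persist the mount across reboots:
echo '/dev/vgtest/linuxadm /mnt/linuxadm xfs defaults 0 0' >> /etc/fstab
```

The key point for the exam is lvcreate's lowercase -l, which sizes the LV in extents rather than bytes, so "10 LEs" translates directly to -l 10.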

See available space on all storage devices, mounted or unmounted, through a Linux command? [closed]

Closed 6 years ago.
I've seen that df -H --total gives me the total space available, but only for mounted devices, and lsblk gives me the sizes of all storage devices, but not how much space is available within them.
Is there a way to see the sum total of available storage space across all devices, e.g. hard disks, thumb drives, etc., as one number?
Mounting a medium makes the operating system analyze its file system.
Before a medium is mounted, it exists only as a block device, and about the only fact the OS knows about it is its capacity.
Other than that, it is just a stream of bytes not interpreted in any way. That stream of bytes very probably contains information about used and unused blocks, but, depending on the file system type, in very different places, so the OS cannot know it without mounting and analyzing the medium.
You could write a specific application to extract that information, but I would consider that temporarily mounting the file system. Standard Unix/Linux doesn't ship with such an application.
From the df man page, I'd say "no", but the wording suggests it may be possible on some systems/distributions with some versions of df.
The other problem is how things are accessed. For example, the system I'm using right now has three 160 GB disks in it, but df shows one of them at / and the other two as a software RAID-1 setup on /home.
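If an approximation is enough, one workaround is to total the mounted filesystems with df and separately list the block devices that have no mountpoint, whose internal free space df cannot see (the pseudo-filesystem exclusions here are a judgment call, not a complete list):

```shell
# Total free space across mounted filesystems (skip pseudo-filesystems).
df --total -x tmpfs -x devtmpfs | tail -1

# Block devices with no mountpoint; their free space is unknown until mounted.
lsblk -rno NAME,SIZE,MOUNTPOINT | awk 'NF < 3 {print $1, $2}'
```

Note that the lsblk half also lists whole disks whose partitions are mounted, so the output needs a human eye rather than blind summation.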

Is /dev/zero Unsafe to Use on Cygwin? [closed]

Closed 6 years ago.
So I ran the following command in Cygwin: dd if=/dev/zero of=E:
Now my C: drive has lost all its free space. Upon unplugging my E: drive from its USB port, the PC automatically shuts down. Is /dev/zero being created within the C: drive, and if so, does that make it potentially unsafe to use with Cygwin?
Writing directly to disks is really DANGEROUS on any system.
On Cygwin, the disks as physical entities are not the Windows drive letter E: or the logical /cygdrive/e entity.
If you want to completely overwrite the physical structure:
dd if=/dev/zero of=/dev/sdX
Pay REALLY close attention to your final destination, or you will BLOW AWAY your system.
/dev/sda - full first disk
/dev/sda1 - first disk, first partition
/dev/sdb - full second disk
/dev/sdb1 - second disk, first partition
and so on...
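To map out which /dev/sdX names exist before attempting any raw write, Cygwin exposes a Linux-style partition listing:

```shell
# Lists devices (sda, sda1, ...) with their sizes in 1 KiB blocks,
# so you can match a size against the drive you actually intend to wipe.
cat /proc/partitions
```

Matching the block count against the known capacity of the USB drive is a cheap sanity check before pointing dd at anything.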

Why is a compressed kernel image used in Linux? [closed]

Closed 9 years ago.
I have googled this question but couldn't find anything useful about why a compressed kernel image like bzImage or vmlinuz is used as the initial kernel image.
A possible explanation I could think of: a memory constraint?
But the compressed kernel image initially resides on the hard disk or some other storage medium, and at boot time, after the second-stage bootloader, the kernel is first decompressed into main memory and then executed.
So if the kernel is decompressed into main memory at a later stage anyway, why compress it in the first place? I mean, if main memory can hold the decompressed kernel image, what is the need for kernel compression?
Generally the processor can decompress faster than the I/O system can read. By having less for the I/O system to read, you reduce the time needed for boot.
This assumption doesn't hold for all hardware combinations, of course. But it frequently does.
An additional benefit for embedded systems is that the kernel image takes up less space on the non-volatile storage, which may allow using a smaller (and cheaper) flash chip. Many of these systems have ~ 32MB of system RAM and only ~ 4MB of flash.
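As a rough sanity check of the throughput argument, here is a back-of-the-envelope calculation with made-up but plausible numbers (24 MB uncompressed image, 8 MB compressed, 50 MB/s storage read speed, 200 MB/s decompression speed; all four figures are assumptions, not measurements):

```shell
awk 'BEGIN {
    # Assumed figures: 24 MB plain image, 8 MB compressed,
    # 50 MB/s read speed, 200 MB/s decompression speed.
    plain = 24 / 50;              # just read the uncompressed image
    comp  = 8 / 50 + 24 / 200;    # read compressed image, then decompress
    printf "uncompressed: %.2fs  compressed: %.2fs\n", plain, comp
}'
```

With these numbers the compressed image wins (0.28 s versus 0.48 s), and the gap widens the slower the storage is relative to the CPU, which is exactly the assumption stated above.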

ecryptfs size different from home directory size [closed]

Closed 7 years ago.
I'm puzzled. My hard disk is full, and most of the space is used by .ecryptfs/$MYUSERNAME (810.4 GB). Strangely, my home directory /home/MYUSERNAME (22.2 GB) consumes significantly less disk space. Any idea what is wrong, or where to look for the "missing" free space?
eCryptfs slightly pads files, and the overhead of encryption will slightly increase your overall disk usage, but certainly not to the degree you describe (37x!).
The only disk usage that really matters is that of your /home/.ecryptfs/$USER directory, which is where the encrypted files are actually stored on disk. What you're seeing in terms of the usage of $HOME is effectively phantom: the cleartext decryption of those files only appears in memory, not on disk.
To see your true usage, use:
df -h /home
du -sh /home
