Why is a compressed kernel image used in Linux? [closed]

I have searched for this online but couldn't find anything useful explaining why a compressed kernel image such as bzImage or vmlinuz is used as the initial kernel image.
The only explanation I could think of:
Memory constraints?
But the compressed kernel image initially resides on a hard disk or some other storage medium, and at boot time, after the second-stage bootloader runs, the kernel is first decompressed into main memory and then executed.
So if the kernel ends up decompressed in main memory anyway, why compress it in the first place? If main memory can hold the decompressed kernel image, what is the need for compression?

Generally the processor can decompress faster than the I/O system can read. By having less for the I/O system to read, you reduce the time needed for boot.
This assumption doesn't hold for all hardware combinations, of course. But it frequently does.
An additional benefit for embedded systems is that the kernel image takes up less space on the non-volatile storage, which may allow using a smaller (and cheaper) flash chip. Many of these systems have ~ 32MB of system RAM and only ~ 4MB of flash.
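You can see the size difference for yourself with the extract-vmlinux script that ships in the kernel source tree (a sketch; the source-tree path and the kernel version are examples and will vary by distribution):

# Decompress the running kernel's image and compare the two sizes.
# extract-vmlinux lives at scripts/extract-vmlinux in the kernel source.
/usr/src/linux/scripts/extract-vmlinux /boot/vmlinuz-$(uname -r) > /tmp/vmlinux
ls -lh /boot/vmlinuz-$(uname -r) /tmp/vmlinux

The compressed image is often well under half the uncompressed size, which is exactly the I/O saving described above.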

Related

See available space on all storage devices, mounted or unmounted, through a Linux command? [closed]

I've seen that df -H --total gives me the total space available, but only for mounted devices, and lsblk gives me the sizes of all storage devices, but not how much space is available within them.
Is there a way I could see the sum total, available storage space of all devices, e.g. hard disks, thumb drives, etc., in one number?
The operation of mounting a medium makes the operating system analyze the file system.
Before a medium is mounted, it exists as a block device, and about the only fact the OS knows about it is its capacity.
Other than that, it is just a stream of bytes, not interpreted in any way. That stream of bytes very probably contains the information about used and unused blocks, but, depending on the file system type, in very different places, so the OS cannot know it without mounting and analyzing the medium.
You could write a specific application to extract that information, but I would consider that a form of temporarily mounting the file system. Standard Unix/Linux doesn't come with such an application.
From the df man page, I'd say "No", but the wording indicates that it may be possible on some systems/distributions with some version(s) of df.
The other problem is how things can be accessed. For example, the system I'm using right now has three 160 GB disks in it, but df will show one of them at / and the other two as a software-based RAID-1 setup on /home.
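That said, here is a rough sketch of a script that temporarily mounts each unmounted filesystem read-only just long enough to read its free space. The mount point /mnt/probe is a made-up example, lsblk's PATH column and df's --output need reasonably recent util-linux and GNU coreutils, and error handling is omitted:

#!/bin/sh
# Sum available space (KiB) across every block device that carries a filesystem.
mkdir -p /mnt/probe
total=0
for dev in $(lsblk -rno PATH,FSTYPE | awk '$2 != "" {print $1}'); do
    if target=$(findmnt -rnf -o TARGET "$dev"); then
        avail=$(df -k --output=avail "$target" | tail -1)     # already mounted
    elif mount -o ro "$dev" /mnt/probe 2>/dev/null; then
        avail=$(df -k --output=avail /mnt/probe | tail -1)    # probe read-only
        umount /mnt/probe
    else
        continue    # unmountable (e.g. RAID member): skip it
    fi
    total=$((total + avail))
done
echo "Total available: ${total} KiB"

Note this is still "mounting", just briefly and read-only, which matches the point above: the OS has to interpret a filesystem before it can report free space.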

Why is SRAM commonly used in cache memory? [closed]

I am studying RAM, and I don't understand why static RAM is commonly used for cache memory.
A memory cache, sometimes called a cache store or RAM cache, is a portion of memory made of high-speed static RAM (SRAM) instead of the slower and cheaper dynamic RAM (DRAM) used for main memory. Memory caching is effective because most programs access the same data or instructions over and over. By keeping as much of this information as possible in SRAM, the computer avoids accessing the slower DRAM.
The same reasoning is in this wiki.
Because it can be faster than dynamic RAM, and it is more expensive; otherwise it would be used everywhere.
The main reason is that dynamic RAM is slower. Since a cache exists to provide fast access, static RAM is usually the better choice for cache memory.
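As a small aside, on Linux you can watch the cache working with perf (a sketch; the generic event names below are common, but their availability depends on the CPU's performance monitoring unit, and the workload is an arbitrary example):

# Count cache references and misses for an arbitrary workload.
perf stat -e cache-references,cache-misses wc -l /usr/share/dict/words

A workload with good locality shows a low miss ratio, meaning most accesses were satisfied from the fast SRAM levels rather than DRAM.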

Destroying data using shred against an ext4 filesystem [closed]

I'm running shred against a block device with a couple of ext4 filesystems on it.
The block devices are virtual drives, RAID-1 and RAID-5. The controller is a PERC H710P.
The command:
shred -v /dev/sda; shred -v /dev/sdc ...
I understand from the shred man/info page that shred may not be effective on journaling filesystems, but only when shredding individual files.
Can anyone explain whether shredding a block device is a safe way to destroy all the data on it?
This is a complex issue.
The only way that is 100% effective is physical destruction. The problem is that the drive firmware can mark sectors as bad and remap them to a pool of spares. These sectors are effectively no longer accessible to you but the old data may be recoverable from those sectors by other means (such as an alternate firmware or physically removing the platters).
That being said, running shred on the block device does not suffer from the journaling issues.
The problem with journaling is that, for partial overwrites to be recoverable, the file system cannot actually overwrite the original data in place; the overwrite of a file takes place at a second physical location, leaving the first intact. Writing directly to the block device bypasses the file system and is therefore not subject to journaling.
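If the worry is remapped sectors specifically, one option for ATA drives is the firmware-level Secure Erase command, which erases all sectors including remapped ones. A sketch (destructive; /dev/sdX is a placeholder, and note that behind a hardware RAID controller such as the PERC H710P these ATA commands typically do not reach the physical disks):

# DESTRUCTIVE: erases the entire drive. The drive must not be in "frozen" state.
hdparm -I /dev/sdX | grep -i frozen
hdparm --user-master u --security-set-pass p /dev/sdX
hdparm --user-master u --security-erase p /dev/sdX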

Who initializes the flash? [closed]

I am learning about the Linux boot process on ARM processors and find that U-Boot boots from flash, and then the U-Boot code initializes the RAM to set up the execution environment (stack setup and so on) and relocates itself.
Now my question: who initializes the flash so that the U-Boot code can execute?
Also, is there any difference between booting from NOR flash and from NAND flash?
Is booting from NOR flash faster than booting from NAND flash?
Naturally, someone has to program that flash the first time, and each board design determines how that actually happens: sometimes the part is programmed before being soldered down, sometimes there is a backdoor connector you can program through, and sometimes not. Sometimes the processor or other hardware on the board has some other kind of bootloader that you can use to program the normal boot flash. (As for executing from it: NOR flash is typically memory-mapped and readable without any setup, which is why the CPU can run U-Boot from it directly, while NAND needs a controller, so SoCs that boot from NAND use a small on-chip ROM to load the first stage into RAM.)
NOR vs. NAND usually doesn't make much of a difference; my biggest concern with newer flashes is read-disturb. Flash reading is faster than writing, and most of the engineering effort goes (or at least needs to go) into write speed, density, and cost, so I would assume that is where the effort is, rather than read speed. If you have a read-speed problem, just copy the bootloader to RAM as soon as you can, run from there, and stay off the flash after that, as sketched below.
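For illustration, "copy the bootloader to RAM and run from there" looks roughly like this at a U-Boot prompt (a sketch; the load address, offset, and size are made-up, board-specific example values):

# Read 512 KiB of the image out of NAND into DRAM, then jump to it.
nand read 0x82000000 0x0 0x80000
go 0x82000000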

How does a bootloader (e.g. GRUB, LILO, ...) find the kernel image? [closed]

For example, if you use GRUB to boot the system, there may be lines such as
root (hd0,6)
kernel /boot/vmlinuz-2.6.11-1.1369_FC4 ro root=LABEL=/
initrd /boot/initrd-2.6.11-1.1369_FC4.img
in menu.lst.
My question is:
Before the kernel image is loaded into memory, I think there is no file system information (such as the file system type or superblock) available that could be used to locate the kernel image on the disk.
So how does the bootloader know the CHS address of the image on the disk?
My guess is that the bootloader could find the superblock from "root (hd0,6)"; if so, it would have to blindly probe all possible file systems and find a matching one. Isn't that too complicated?
I'll give you an answer using LILO as an example:
The reason that you have to run /sbin/lilo after installing a new kernel is that the LILO bootloader doesn't understand file systems; it only knows about the lower-level block structure of the disk. The /sbin/lilo program does understand file systems, and translates the kernel's path (e.g. "/boot/vmlinuz-2.6.3") into a logical block address (e.g. 3-4-123) so that the LILO bootloader can find the kernel image to load. Effectively, this is a big hack.
Source:
http://courses.cs.washington.edu/courses/cse451/02wi/section/bootloaders.txt
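In practice the LILO workflow looks like this (a sketch; the kernel path and version are examples):

# After installing a new kernel, rebuild LILO's block map so the boot-time
# loader can find the image without understanding the file system.
cp arch/x86/boot/bzImage /boot/vmlinuz-2.6.3
/sbin/lilo -v    # re-reads /etc/lilo.conf and writes fresh sector lists

Forget to rerun /sbin/lilo and the boot sector still points at the old sectors, which is exactly the fragility described above.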
The setup process for GRUB includes generation of:
- a full list of physical addresses of the stage 2 file
- an encoded drive number (as used with BIOS calls)
- an encoded partition number (the latter two form the value represented as (hd0,6) in your example)
Stage 1 and the bootstrap part of stage 2 can together use this to load the full stage 2 into memory. From that moment on, stage 2 can detect the file system type, activate the corresponding read-only FS driver, read the runtime config (grub.cfg or menu.lst), and then read the kernel, initrd, etc. through that FS driver.
NB: this differs fundamentally from LILO in that the latter hardcodes the kernel and initrd sector lists when the loader is installed; once booted, the loader knows only the sector lists, not the FS structure.
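For comparison, reinstalling GRUB's boot code is a single command (a sketch; the device name is an example, and the description uses GRUB 2 naming, where the legacy stage 1.5/stage 2 scheme corresponds to core.img):

# Writes boot.img to the MBR of /dev/sda and embeds core.img together with
# the block list needed to load it; everything after that is read through
# GRUB's file system drivers, so no re-run is needed after a kernel install.
grub-install /dev/sda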
