Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 9 years ago.
I have a Windows XP dd image. I'm checking the image with the fdisk command:
root# fdisk -l ./hdddump.img
Disk ./hdddump.img: 2031 MB, 2031400960 bytes, 3967580 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xef8b000a
Device Boot Start End Blocks Id System
./hdddump.img1 63 3354623 1677280+ 7 HPFS/NTFS/exFAT
./hdddump.img2 3354624 3967487 306432 b W95 FAT32
Afterwards I'm trying to mount the image:
root# mount -t auto -o loop,ro,noexec,offset=32256 hdddump.img ./hddmount/
mount: /absolute_path/hdddump.img: failed to setup loop device: numerical result out of range
I have no idea why this happens and I can't find any hint on the internet. Converting the image with qemu-img and attaching it as a hard disk to a VMware machine works, so the image isn't broken.
Try adding sizelimit=858767360 to your mount options. Maybe there's a problem if losetup (called by mount) tries to auto-calculate the size.
(858767360 = 1677280 * 512)
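The arithmetic behind those mount options can be sketched as follows, using the partition 1 values from the fdisk listing above (the sizelimit follows the same blocks-times-512 calculation as in the answer):

```shell
START_SECTOR=63        # "Start" column from fdisk
SECTOR_SIZE=512        # logical sector size from fdisk
BLOCKS=1677280         # "Blocks" column from fdisk

OFFSET=$((START_SECTOR * SECTOR_SIZE))   # byte offset of partition 1
SIZELIMIT=$((BLOCKS * SECTOR_SIZE))      # size cap for the loop device
echo "offset=$OFFSET sizelimit=$SIZELIMIT"

# Then, as root, with hdddump.img in the current directory:
# mount -t auto -o loop,ro,noexec,offset=$OFFSET,sizelimit=$SIZELIMIT \
#     hdddump.img ./hddmount/
```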
But this question would be better asked on Super User or maybe Server Fault.
Closed 4 years ago.
On a Red Hat server, can I grow /var online, and if so, how? I have free space in the volume group (VG).
Run vgdisplay to check the free space:
# vgdisplay
--- Volume group ---
VG Name XYZ
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 2
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 1
Max PV 0
Cur PV 1
Act PV 1
VG Size 919.97 GiB
PE Size 32.00 MiB
Total PE 29439
Alloc PE / Size 29439 / 919.97 GiB
Free PE / Size 250 / 2 GiB
Suppose I want to allocate all 250 free PEs to /var; run the command below to extend the logical volume. (In my case there was no free space, so the free-PE figures in the output above are dummy numbers.)
# lvextend -l +250 /dev/<vg_name>/<Var LV name>
After extending, resize the filesystem:
# resize2fs /dev/<vg_name>/<Var LV name>
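Since the vgdisplay output above contains placeholder numbers, a quick sanity check of the extent arithmetic (PE size and free-PE count taken from the output shown):

```shell
PE_SIZE_MIB=32   # "PE Size" from vgdisplay
FREE_PE=250      # "Free PE" from vgdisplay
ADDED_MIB=$((FREE_PE * PE_SIZE_MIB))
echo "lvextend -l +${FREE_PE} adds ${ADDED_MIB} MiB to the LV"
```

Note that recent LVM versions also accept `lvextend -r`, which runs the filesystem resize in the same step, so the separate resize2fs call is then unnecessary.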
Closed 7 years ago.
I have an Ubuntu machine with 8 GB of RAM and a 250 GB hard disk. I am using this machine as my Jenkins server for CI, and I have been facing an inode-exhaustion problem for the past few days.
I run:
df -i
Output:
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/sda5 18989056 15327782 3661274 81% /
none 989841 11 989830 1% /sys/fs/cgroup
udev 978914 465 978449 1% /dev
tmpfs 989841 480 989361 1% /run
none 989841 3 989838 1% /run/lock
none 989841 8 989833 1% /run/shm
none 989841 39 989802 1% /run/user
How can I resolve this?
The program mkfs.ext4 accepts an option, -N, which sets the number of inodes when making a new file system.
In this case you'd need to backup the entire / file system, boot from a live CD/USB and recreate the filesystem on /dev/sda5. WARNING: This will kill every single file on that drive. You'll probably need to reinstall the operating system onto that partition first because merely restoring a backup of a boot drive will likely not get all the fiddly bits necessary to boot.
If you are running out of inodes, it is likely you are doing something sub-optimal, like using the filesystem as a poor man's database. It is worth looking into why you are exhausting inodes, but that's another question.
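Before rebuilding the filesystem, it is worth finding which directory tree is eating the inodes. A minimal sketch using GNU du's --inodes option (coreutils 8.22 or later); the demo tree here is fabricated so the logic can run anywhere — on the real server you would point du at / or the Jenkins workspace instead:

```shell
demo=$(mktemp -d)                 # stand-in for the real filesystem root
mkdir -p "$demo/cache" "$demo/logs"
for i in $(seq 1 50); do : > "$demo/cache/f$i"; done   # inode hog
: > "$demo/logs/one"

# Rank first-level directories by inode count (largest consumer last).
result=$(du --inodes -d 1 "$demo" | sort -n)
echo "$result"
rm -rf "$demo"
```

The directory with the runaway count (often a build workspace or cache full of tiny files) is where to clean up first.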
Closed 7 years ago.
I am trying to restrict I/O usage on my server using cgroups.
Here is my partition table info:
major minor #blocks name
8 0 10485760 sda
8 1 9437184 sda1
8 2 1047552 sda2
Here is my Filesystem structure:
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 8.9G 8.4G 37M 100% /
none 1004M 0 1004M 0% /dev/shm
When I try to execute the following command:
echo "8:1 10485760" > /cgroup/blkio/test2/blkio.throttle.write_bps_device
I get the error:
-bash: echo: write error: No such device
Here is my cgroups configuration:
mount {
blkio = /cgroup/blkio;
}
group test2 {
blkio {
blkio.throttle.write_iops_device="";
blkio.throttle.read_iops_device="";
blkio.throttle.write_bps_device="";
blkio.throttle.read_bps_device="";
blkio.weight="";
blkio.weight_device="";
}
}
Why can't I restrict I/O usage on /dev/sda1?
You need to use the physical device when setting up blkio: throttling applies to whole devices, not partitions. Use the major:minor numbers of the whole disk (8:0) instead of the partition (8:1).
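A sketch of the corrected rule string, assuming a 10 MiB/s write cap and the 8:0 whole-disk numbers from the partition table above:

```shell
DEV="8:0"                         # whole disk sda (major:minor), not sda1
LIMIT=$((10 * 1024 * 1024))       # 10 MiB/s expressed in bytes per second
RULE="$DEV $LIMIT"
echo "$RULE"

# On the server, as root, with the test2 cgroup already set up:
# echo "$RULE" > /cgroup/blkio/test2/blkio.throttle.write_bps_device
```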
Closed 9 years ago.
I'm using mhddfs to combine multiple drives that are mounted over the network using NFS.
e.g.
There are three machines
Server Name Dir Space
Server 1 /home 10 GB Space
Server 2 /home 10 GB Space
Server 3 /home 10 GB Space
Using NFS I mounted the following:
Server 1 /home to Server 3 /home/mount1
Server 2 /home to Server 3 /home/mount2
Then, using mhddfs, I merged mount1 and mount2, e.g.:
mhddfs /home/server/mount1,/home/server/mount2 /home/server/mount
Now I have 30 GB of space altogether, but when I try to write a file larger than 10 GB to the merged mount directory, it fails.
It seems mhddfs can't split a large file (e.g. a 20 GB file) across the drives so that it can be stored.
Please give me an idea of how I can achieve this.
This is an inherent limitation of mhddfs. It works by simply combining the contents of the underlying devices into a single directory, and stores new files into whichever drive has sufficient free space. Since there is no drive in your system that can actually store a 20 GB file, the resulting merged filesystem cannot store one either.
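The limitation comes down to simple free-space arithmetic (branch sizes taken from the question): the pool advertises the sum of the branches, but the largest single file is bounded by the roomiest individual branch.

```shell
# Three 10 GB branches, as described in the question.
BRANCH_FREE_GB="10 10 10"
TOTAL=0
MAX=0
for free in $BRANCH_FREE_GB; do
  TOTAL=$((TOTAL + free))                       # pool size is the sum
  if [ "$free" -gt "$MAX" ]; then MAX=$free; fi # file size cap is the max
done
echo "pool=${TOTAL}GB largest_single_file<=${MAX}GB"
```

Storing a 20 GB file across 10 GB branches would require block-level striping (RAID, LVM, or a distributed filesystem) rather than a file-level union like mhddfs.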
Closed 8 years ago.
I'm having problems detecting which of my block devices is the hard drive. My system has a CD-ROM drive, USB drives, and a single hard drive of unknown vendor/type.
How can I identify the hard drive with a linux command, script, or C application?
sudo lshw -class disk
will show you the available disks in the system
As shuttle87 pointed out, there are several other posts that answer this question. The solution that I prefer is:
root# lsblk -io NAME,TYPE,SIZE,MOUNTPOINT,FSTYPE,MODEL
NAME TYPE SIZE MOUNTPOINT FSTYPE MODEL
sdb disk 2.7T WDC WD30EZRX-00D
`-sdb1 part 2.7T linux_raid_member
`-md0 raid1 2.7T /home xfs
sda disk 1.8T ST2000DL003-9VT1
|-sda1 part 196.1M /boot ext3
|-sda2 part 980.5M [SWAP] swap
|-sda3 part 8.8G / ext3
|-sda4 part 1K
`-sda5 part 1.8T /samba xfs
sdc disk 2.7T WDC WD30EZRX-00D
`-sdc1 part 2.7T linux_raid_member
`-md0 raid1 2.7T /home xfs
sr0 rom 1024M CDRWDVD DH-48C2S
References:
https://unix.stackexchange.com/q/4561
https://askubuntu.com/q/182446
https://serverfault.com/a/5081/109417
If you have a list of the plausible block devices, then the file
/sys/block/[blockdevname]/removable
will contain "1" if the device is removable, "0" if not removable.
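That check can be scripted. A sketch that walks the sysfs entries and keeps only non-removable devices; it builds a tiny fake /sys/block tree so the logic can run anywhere — on a real system you would iterate over /sys/block itself:

```shell
SYSBLOCK=$(mktemp -d)                # stand-in for /sys/block
mkdir -p "$SYSBLOCK/sda" "$SYSBLOCK/sr0"
echo 0 > "$SYSBLOCK/sda/removable"   # fixed disk
echo 1 > "$SYSBLOCK/sr0/removable"   # cd-rom drive

FIXED=""
for flag in "$SYSBLOCK"/*/removable; do
  if [ "$(cat "$flag")" = "0" ]; then
    FIXED="$FIXED $(basename "$(dirname "$flag")")"
  fi
done
echo "non-removable:$FIXED"
rm -rf "$SYSBLOCK"
```

Note that this is a heuristic: some USB hard-drive enclosures also report removable as 0, so combining it with the model information from lsblk or lshw gives a more reliable answer.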