I'm having problems detecting which of my block devices is the hard drive. My system has a CD-ROM drive, USB drives, and a single hard drive of unknown vendor/type.
How can I identify the hard drive with a Linux command, script, or C application?
sudo lshw -class disk
will show you the available disks in the system.
As shuttle87 pointed out, there are several other posts that answer this question. The solution that I prefer is:
root# lsblk -io NAME,TYPE,SIZE,MOUNTPOINT,FSTYPE,MODEL
NAME     TYPE    SIZE MOUNTPOINT FSTYPE            MODEL
sdb      disk    2.7T                              WDC WD30EZRX-00D
`-sdb1   part    2.7T            linux_raid_member
  `-md0  raid1   2.7T /home      xfs
sda      disk    1.8T                              ST2000DL003-9VT1
|-sda1   part  196.1M /boot      ext3
|-sda2   part  980.5M [SWAP]     swap
|-sda3   part    8.8G /          ext3
|-sda4   part      1K
`-sda5   part    1.8T /samba     xfs
sdc      disk    2.7T                              WDC WD30EZRX-00D
`-sdc1   part    2.7T            linux_raid_member
  `-md0  raid1   2.7T /home      xfs
sr0      rom    1024M                              CDRWDVD DH-48C2S
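If you only want the whole disks without their partitions, a variant like the following should also work (the ROTA column, 1 for rotational media, is one hint for telling spinning hard drives from SSDs and other devices):
lsblk -d -o NAME,TYPE,SIZE,ROTA,MODEL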
References:
https://unix.stackexchange.com/q/4561
https://askubuntu.com/q/182446
https://serverfault.com/a/5081/109417
If you have a list of the plausible block devices, then the file
/sys/block/[blockdevname]/removable
will contain "1" if the device is removable, "0" if not removable.
On Linux, I know procfs is a pseudo filesystem and only exists in memory. Is there a simple way to access procfs (/proc) from disk by dumping that memory content to disk, either through configuration or by actively running a command?
For the most part there is no "memory content". Not only does the /proc pseudo file system not exist on disk; the majority of it also isn't explicitly instantiated as files in memory. Much of the content is instantiated on demand from information stored elsewhere in kernel data structures. From the manual:
The proc file system acts as an interface to internal data structures in the kernel. It can be used to obtain information about the system and to change certain kernel parameters at runtime (sysctl).
This is even reflected in the fact that the /proc file system reports no actual usage (in contrast to /dev/shm, which is a real file system):
$ df -h /proc /dev/shm
Filesystem      Size  Used Avail Use% Mounted on
proc               0     0     0    - /proc
tmpfs            20G  336K   20G   1% /dev/shm
So for example, the contents of /proc/$$/status and /proc/$$/stack are constantly changing as a process runs, and the contents of those pseudofiles are only generated on-demand when you open the file.
If you want the contents of these files dumped to disk, you can use a cp operation to capture some of it (with sufficient user permissions), but dumping it ALL to disk is probably a bad idea and might not even terminate.
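If you only need a point-in-time snapshot of specific entries, something like this sketch works (the paths and destination directory are just illustrative):
# Capture a point-in-time copy of a few /proc entries; the files are
# generated on demand, so this only records the moment of the read.
mkdir -p /tmp/proc-snapshot
cp /proc/$$/status /proc/$$/stat /tmp/proc-snapshot/
cat /proc/meminfo > /tmp/proc-snapshot/meminfo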
To begin with, let me build up the exact context. The link (since I am low on reputation) is a screenshot of the partitions on my laptop's hard disk: Hard disk filesystem partitions /dev/sda
As should be evident from the screenshot itself, /dev/sda2 is a pre-existing partition which has now been formatted to a clean btrfs format, and /dev/sda3 has ParrotOS on it.
Now I wish to give the whole of the hard disk space from /dev/sda2 and /dev/sda3 to ParrotOS without losing an iota of the existing data in /dev/sda3. As per the software used here (GParted), partitions can be extended only if they have empty unallocated space after them, so there is no apparent option to directly unallocate /dev/sda2 and move /dev/sda3 in front of it. Or is there?
Can someone at least help me move everything off /dev/sda3 so that I can unallocate it and merge the two into a single large partition?
If sda2 and sda3 are the same size (the low-level size, that is, not the filesystem size; you can see that with, say, fdisk), then you can copy the binary content of sda3 into sda2 with something as simple as:
sudo dd if=/dev/sda3 of=/dev/sda2
After that is done, sda2 will be an exact image of sda3. Just make sure that neither the sda3 partition (nor sda2, of course) is mounted, so that no operations are going on against it. When the copy is finished, you should be able to mount sda2 and see what you had on sda3. Once that is confirmed, you can remove sda3 so that sda2 can be extended. This is not risk free, by the way, and some adjustments will need to be made, for example adjusting /etc/fstab (among other things).
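A rough sketch of the surrounding checks, assuming the device names from the question; verify everything on your own system before running the destructive steps:
# Compare the raw partition sizes in bytes first; sda2 must be at least
# as large as sda3 for the image to fit.
sudo blockdev --getsize64 /dev/sda2 /dev/sda3

# Same copy as above, with a larger block size and progress reporting.
sudo dd if=/dev/sda3 of=/dev/sda2 bs=4M status=progress conv=fsync

# The copy duplicates the filesystem UUID, so check it and adjust
# /etc/fstab to reference the partition you intend to keep.
sudo blkid /dev/sda2 /dev/sda3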
I need to launch an EC2 instance of type r3.4xlarge from a snapshot of a free-tier instance I initially had.
Now, first, I launched it directly, without editing the storage options. I was left with an 8 GiB root volume, 61 GiB in udev [/dev], and another 61 GiB in none, i.e. /run/shm.
My question is: where is my 122 GB of memory and how can I access it?
The things that I have tried:
Added a volume to the root and increased the root size from 8 GiB to 15 GiB using these instructions. In my second attempt I also ended up with a root volume increased from 8 GiB to 117 GiB, when I tried to use 118 GiB of my 122 GiB and changed the volume size of the root device while launching the instance from the AMI. The problem is that only the root volume changes from 8 GiB to 117 GiB; udev and /run/shm still show 61 GiB each.
Filesystem      Size  Used Avail Use% Mounted on
udev             61G   12K   61G   1% /dev
tmpfs            13G  328K   13G   1% /run
/dev/xvda1      117G  6.3G  105G   6% /
none            4.0K     0  4.0K   0% /sys/fs/cgroup
none            5.0M     0  5.0M   0% /run/lock
none             61G     0   61G   0% /run/shm
none            100M     0  100M   0% /run/user
Tried to partition the existing system using these instructions. When I run sudo pvs there is no output and the terminal just shows the prompt. When I run sudo lvs, I get "No Logical Volumes found". I then tried to do this before doing anything with lvm2, but it is still the same after a reboot.
Kindly help me with this issue. I'm stuck at a difficult place.
You are confusing RAM and volume storage. An r3.4xlarge has 122 GB of RAM. This is available for your applications to use; try:
free -m
to see more about your RAM.
An r3.4xlarge also comes with one 320 GB SSD of ephemeral (instance store) storage.
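A rough way to confirm this from inside the instance; the ephemeral device name (/dev/xvdb below) is an assumption and varies by AMI and instance type, and the mkfs step destroys whatever is on that device:
# RAM: the ~122 GB shows up here, not in df.
free -m

# Block devices: the instance-store SSD appears as an extra, unmounted disk.
lsblk

# If it is present and unformatted (assumed to be /dev/xvdb here), give it
# a filesystem and a mount point; mkfs destroys anything already on it.
sudo mkfs.ext4 /dev/xvdb
sudo mkdir -p /mnt/ephemeral
sudo mount /dev/xvdb /mnt/ephemeral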
I have an Ubuntu machine with 8 GB of RAM and a 250 GB hard disk. I am using this machine as my Jenkins server for CI. I have been facing an "inodes full" problem for the past few days.
I run the command:
df -i
Output:
Filesystem      Inodes    IUsed   IFree IUse% Mounted on
/dev/sda5     18989056 15327782 3661274   81% /
none            989841       11  989830    1% /sys/fs/cgroup
udev            978914      465  978449    1% /dev
tmpfs           989841      480  989361    1% /run
none            989841        3  989838    1% /run/lock
none            989841        8  989833    1% /run/shm
none            989841       39  989802    1% /run/user
Suggest how to resolve this.
The program mkfs.ext4 accepts an option, -N, which sets the number of inodes when making a new file system.
In this case you'd need to back up the entire / file system, boot from a live CD/USB, and recreate the filesystem on /dev/sda5. WARNING: This will kill every single file on that drive. You'll probably need to reinstall the operating system onto that partition first, because merely restoring a backup of a boot drive will likely not get all the fiddly bits necessary to boot.
If you are running out of inodes, it is likely you are doing something sub-optimal like using the filesystem as a poor man's database. It is worth looking into why you are exhausting inodes, but that's another question.
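A rough sketch of both steps; the inode count and the du invocation are illustrative, and the mkfs line destroys everything on /dev/sda5:
# See which part of the tree is eating inodes (GNU coreutils 8.22+);
# often more useful than recreating the filesystem.
sudo du --inodes -x / 2>/dev/null | sort -n | tail -20

# If you do recreate the filesystem from a live CD/USB, -N sets the inode
# count explicitly; this destroys everything on /dev/sda5.
sudo mkfs.ext4 -N 30000000 /dev/sda5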
I am trying to restrict I/O usage on my server using cgroups.
Here is my partition table info:
major minor  #blocks  name
    8     0 10485760  sda
    8     1  9437184  sda1
    8     2  1047552  sda2
Here is my Filesystem structure:
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       8.9G  8.4G   37M 100% /
none           1004M     0 1004M   0% /dev/shm
When I try to execute the following command:
echo "8:1 10485760" >
/cgroup/blkio/test2/blkio.throttle.write_bps_device
I get this output:
-bash: echo: write error: No such device
Here is my cgroups configuration:
mount {
    blkio = /cgroup/blkio;
}

group test2 {
    blkio {
        blkio.throttle.write_iops_device="";
        blkio.throttle.read_iops_device="";
        blkio.throttle.write_bps_device="";
        blkio.throttle.read_bps_device="";
        blkio.weight="";
        blkio.weight_device="";
    }
}
Why can I not restrict I/O usage on /dev/sda1?
You need to use the physical device when setting up blkio. Use the major:minor for the whole disk (8:0).
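For example, something like this should work with the cgroup layout from the question (10485760 bytes/s is roughly 10 MB/s):
# 8:0 refers to the whole disk /dev/sda rather than the sda1 partition.
echo "8:0 10485760" > /cgroup/blkio/test2/blkio.throttle.write_bps_device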