Swap data of /dev/sda2 and /dev/sda3 [closed] - linux

To set up the exact context first: the link (I am low on reputation, so I cannot embed images) is a screenshot of the partitions on my laptop's hard disk: Hard disk filesystem partitions /dev/sda.
As should be evident from the screenshot, /dev/sda2 is a pre-existing partition which has now been formatted to a clean btrfs filesystem, and /dev/sda3 has ParrotOS on it.
Now I want to give all of the hard disk space in /dev/sda2 and /dev/sda3 to ParrotOS without losing any of the existing data on /dev/sda3. As far as I can tell from the software used here (GParted), partitions can be extended only if they have empty unallocated space after them, so there is no apparent option to directly unallocate /dev/sda2 and move /dev/sda3 in front of it. Or is there?
Can someone help me at least move everything off /dev/sda3, so that I can unallocate it and merge the two into one large partition?

If sda2 and sda3 are the same size (the low-level partition size, that is, not the filesystem size; you can check that with, say, fdisk), then you can copy the binary content of sda3 into sda2 with something as simple as:
sudo dd if=/dev/sda3 of=/dev/sda2
After that is done, sda2 will be an exact image of sda3. Just make sure that sda3 (or sda2, of course) is not mounted, so that there are no operations going on against it while the copy runs. Once the copy is complete, you should be able to mount sda2 and see everything you had on sda3. When that is verified, you can remove sda3 so that sda2 can be extended. This is not risk free, by the way, and some adjustments will need to be made afterwards, for example to /etc/fstab (among other things).
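For reference, a minimal sketch of the whole sequence under the assumptions above (both partitions unmounted, sda2 at least as large as sda3; the dd flags are just common GNU dd conveniences, not required):

# 1. Compare the raw partition sizes in bytes; sda2 must be >= sda3
sudo blockdev --getsize64 /dev/sda2 /dev/sda3

# 2. Make sure neither partition is mounted
lsblk /dev/sda2 /dev/sda3

# 3. Block-level copy of sda3 onto sda2 (this overwrites everything on sda2)
sudo dd if=/dev/sda3 of=/dev/sda2 bs=4M status=progress conv=fsync

# 4. The copy duplicates the filesystem UUID, so check what /etc/fstab and the
#    bootloader refer to before deleting sda3
sudo blkid /dev/sda2 /dev/sda3

Only after verifying the copy should sda3 be deleted; GParted can then grow sda2 (and the filesystem inside it) into the freed space.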

Related

access /proc from disk [closed]

On Linux, I know procfs is a pseudo filesystem and exists only in memory. Is there a simple way to access procfs (/proc) from disk by dumping that memory content to disk, either through configuration or by actively running a command?
For the most part there is no "memory content". The /proc pseudo file system not only does not exist on disk; the majority of it also isn't explicitly instantiated as files in memory. Much of the content is instantiated on demand from information stored elsewhere in kernel data structures. From the manual:
The proc file system acts as an interface to internal data structures in the kernel. It can be used to obtain information about the system and to change certain kernel parameters at runtime (sysctl).
This is even reflected in the fact that the /proc file system reports no actual usage (in contrast to /dev/shm, which is a real file system):
$ df -h /proc /dev/shm
Filesystem  Size  Used  Avail  Use%  Mounted on
proc           0     0      0     -  /proc
tmpfs        20G  336K    20G    1%  /dev/shm
So for example, the contents of /proc/$$/status and /proc/$$/stack are constantly changing as a process runs, and the contents of those pseudofiles are only generated on-demand when you open the file.
If you want the contents of these files dumped to disk, you can use a cp operation to capture some of it (with sufficient user permissions), but dumping it ALL to disk is probably a bad idea and might not even terminate.
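As a concrete illustration of the cp approach (the target directory and the chosen entries are arbitrary examples, not a complete dump):

# Snapshot a handful of /proc entries for the current shell to disk.
# Copying all of /proc is a bad idea: entries such as /proc/kcore are huge
# and some pseudofiles block on read.
mkdir -p /tmp/proc-snapshot
cp /proc/$$/status /proc/$$/cmdline /proc/meminfo /proc/cpuinfo /tmp/proc-snapshot/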

I changed Amazon EC2 instance size but my /dev/nvme0n1p1 partition does not increase in size [closed]

I was originally running an 8 GB EC2 instance. When doing so, my /dev/nvme0n1p1 looked like this:
/dev/nvme0n1p1 7.7G 4.4G 3.4G 57% /
So I upgraded to an r6i.large instance, which is 16 GB. I re-checked my /dev/nvme0n1p1 partition and, lo and behold, it hasn't changed in size:
/dev/nvme0n1p1 7.7G 4.4G 3.4G 57% /
When I run free -h, however, I get:
        total  used  free  shared  buff/cache  available
Mem:      15G  130M   14G    740K        794M        14G
Swap:      0B    0B    0B
So maybe the instance has increased in size? I'm confused, as I would expect /dev/nvme0n1p1 to now be around 16 GB, not 7.7 GB.
Selenium is still crashing on me, and it appears my partition has not increased in size even though I've moved to an instance with more memory.
This matches your case:
suppose that you have resized the boot volume of an instance, such as a T2 instance, from 8 GB to 16 GB and an additional volume from 8 GB to 30 GB.
So you need to extend the volume by following these steps:
Extend the file system of EBS volumes.
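If the root volume itself has been enlarged, the usual two steps are to grow the partition and then the filesystem. A hedged sketch, using the device name from the question (use resize2fs for ext4 or xfs_growfs for XFS, depending on how the volume is formatted):

# Check that the block device reflects the new EBS volume size
lsblk

# Grow partition 1 of /dev/nvme0n1 to fill the volume (growpart is in cloud-utils)
sudo growpart /dev/nvme0n1 1

# Grow the filesystem: ext4 ...
sudo resize2fs /dev/nvme0n1p1
# ... or XFS, addressed by its mount point
sudo xfs_growfs -d /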
EDIT:
Look, you now have a volume of size 16 TB (terabytes, not gigabytes). As mentioned already, the volume is formatted as MBR.
You need to convert it to a GPT-formatted drive. Follow this guide.
WARNING! Make a backup, as you might lose your data. Here's how to create an EBS snapshot.
Alternatively, you can downsize the EBS volume, but that requires a little bit of EBS magic (a sketch of the format/copy steps follows the list):
- Snapshot the volume
- Create a new smaller EBS volume
- Attach the new volume
- Format the new volume
- Mount the new volume
- Copy data from old volume to the new volume
- Prepare the new volume
- Unmount and detach the old volume
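A hedged sketch of the middle steps above (the new device name /dev/xvdf, the source directory, and the mount point are illustrative; they depend on how the volume is attached and what you are migrating):

# Format and mount the freshly attached, smaller volume
sudo mkfs.ext4 /dev/xvdf
sudo mkdir -p /mnt/newvol
sudo mount /dev/xvdf /mnt/newvol

# Copy the data across, preserving permissions, ownership, and xattrs
sudo rsync -aAXH /data/ /mnt/newvol/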

Inode Number Full Ubuntu [closed]

I have an Ubuntu machine with 8 GB of RAM and a 250 GB hard disk. I am using this machine as my Jenkins server for CI, and I have been running into an "inodes full" problem for the past few days.
I run the command:
df -i
Output:
Filesystem  Inodes    IUsed     IFree    IUse%  Mounted on
/dev/sda5   18989056  15327782  3661274  81%    /
none        989841    11        989830   1%     /sys/fs/cgroup
udev        978914    465       978449   1%     /dev
tmpfs       989841    480       989361   1%     /run
none        989841    3         989838   1%     /run/lock
none        989841    8         989833   1%     /run/shm
none        989841    39        989802   1%     /run/user
Any suggestions on how to resolve this?
The program mkfs.ext4 accepts an option, -N, which sets the number of inodes when making a new file system.
In this case you'd need to back up the entire / file system, boot from a live CD/USB, and recreate the filesystem on /dev/sda5. WARNING: This will kill every single file on that drive. You'll probably need to reinstall the operating system onto that partition first, because merely restoring a backup of a boot drive will likely not get all the fiddly bits necessary to boot.
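A sketch of the recreate step from the live environment, assuming /dev/sda5 has already been backed up (the inode count here is only an illustrative value, not a recommendation):

# Recreate the filesystem with a larger fixed number of inodes
sudo mkfs.ext4 -N 40000000 /dev/sda5

# Confirm the new inode count afterwards
sudo tune2fs -l /dev/sda5 | grep -i 'inode count'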
If you are running out of inodes, it is likely you are doing something sub-optimal, like using the filesystem as a poor man's database. It is worth looking into why you are exhausting inodes, but that's another question.
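One common way to see where the inodes are actually going (a diagnostic sketch, not part of the original answer) is to count directory entries per directory on the root filesystem:

# Directories with the most entries on / (-xdev stays on this filesystem)
sudo find / -xdev -printf '%h\n' | sort | uniq -c | sort -rn | head -20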

btrfs raid1 with multiple devices [closed]

I have 6 devices: 4TB, 3TB, 2TB, 2TB, 1.5TB, 1TB (/dev/sda to /dev/sdf).
First question:
With RAID-1 I'd have:
2 TB mirrored on the other 2 TB disk,
1 TB mirrored across 0.5 TB of the 4 TB disk + 0.5 TB of the 3 TB disk,
1.5 TB mirrored across 1.25 TB of the 4 TB disk + 0.25 TB of the 3 TB disk,
and the remaining 2.25 TB of the 3 TB disk mirrored on the remaining 2.25 TB of the 4 TB disk.
My total usable size in that case would be (4 + 3 + 2 + 2 + 1.5 + 1) TB / 2 = 13.5 TB / 2 = 6.75 TB.
Will $ mkfs.btrfs --data raid1 --metadata raid1 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf provide me with approximately 6.75 TB? If yes, how many disks (and which ones) can I afford to lose?
Second question:
With that RAID-1 layout I could afford, for example, to lose three disks:
one 2TB disk,
the 1TB disk and
the 1.5TB disk,
without losing data.
How can I have the same freedom to lose those same disks with btrfs?
Thanks!
Btrfs distributes the data (and its RAID 1 copies) block-wise, and thus deals very well with hard disks of different sizes. You will get roughly the sum of all hard disks divided by two, and you do not need to think about how to put them together in similarly sized pairs.
If more than one disk fails, you're always in danger of losing data: RAID 1 cannot deal with losing two disks at the same time. In your example above, if the wrong two disks die, you lose data.
Btrfs can increase the chances of losing data if more than one disk fails: since it distributes the blocks somewhat randomly, the chances are higher that some blocks are stored only on the two failed devices. On the other hand, if you do lose data, you will probably lose less, for the same reason. On average it sums up to the same chance of losing n bits, but if you're interested in the chance of losing only a single bit, you're worse off with btrfs.
Then again, you should also consider its advantage of using checksums, which help against corrupted data on disk.
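To see what btrfs will actually give you for this particular set of disks, you can create the array exactly as in the question and ask btrfs itself (device names taken from the question; /mnt is just an example mount point):

# RAID 1 for both data and metadata across all six devices
sudo mkfs.btrfs --data raid1 --metadata raid1 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf

# Mount any one member and inspect allocation and estimated free space
sudo mount /dev/sda /mnt
sudo btrfs filesystem usage /mnt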

List all harddrives in a linux system [closed]

I'm having trouble detecting which one of my block devices is the hard drive. My system has a CD-ROM drive, USB drives, and a single hard drive of unknown vendor/type.
How can I identify the hard drive with a Linux command, script, or C application?
sudo lshw -class disk
will show you the available disks in the system.
As shuttle87 pointed out, there are several other posts that answer this question. The solution that I prefer is:
root# lsblk -io NAME,TYPE,SIZE,MOUNTPOINT,FSTYPE,MODEL
NAME       TYPE   SIZE    MOUNTPOINT  FSTYPE             MODEL
sdb        disk   2.7T                                   WDC WD30EZRX-00D
`-sdb1     part   2.7T                linux_raid_member
  `-md0    raid1  2.7T    /home       xfs
sda        disk   1.8T                                   ST2000DL003-9VT1
|-sda1     part   196.1M  /boot       ext3
|-sda2     part   980.5M  [SWAP]      swap
|-sda3     part   8.8G    /           ext3
|-sda4     part   1K
`-sda5     part   1.8T    /samba      xfs
sdc        disk   2.7T                                   WDC WD30EZRX-00D
`-sdc1     part   2.7T                linux_raid_member
  `-md0    raid1  2.7T    /home       xfs
sr0        rom    1024M                                  CDRWDVD DH-48C2S
References:
https://unix.stackexchange.com/q/4561
https://askubuntu.com/q/182446
https://serverfault.com/a/5081/109417
If you have a list of the plausible block devices, then the file
/sys/block/[blockdevname]/removable
will contain "1" if the device is removable, "0" if not removable.
