I'm trying to access a logical volume that was previously used by Docker. Here is the output of various commands:
lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme0n1 259:2 0 80G 0 disk
├─nvme0n1p1 259:3 0 80G 0 part /
└─nvme0n1p128 259:4 0 1M 0 part
nvme1n1 259:0 0 80G 0 disk
└─nvme1n1p1 259:1 0 80G 0 part
├─docker-docker--pool_tdata 253:1 0 79G 0 lvm
│ └─docker-docker--pool 253:2 0 79G 0 lvm
└─docker-docker--pool_tmeta 253:0 0 84M 0 lvm
└─docker-docker--pool 253:2 0 79G 0 lvm
fdisk
Disk /dev/nvme1n1: 85.9 GB, 85899345920 bytes, 167772160 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x00029c01
Device Boot Start End Blocks Id System
/dev/nvme1n1p1 2048 167772159 83885056 8e Linux LVM
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.
Disk /dev/nvme0n1: 85.9 GB, 85899345920 bytes, 167772160 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: gpt
Disk identifier: 358A5F86-3BCA-4FB2-8C00-722B915A71AB
# Start End Size Type Name
1 4096 167772126 80G Linux filesyste Linux
128 2048 4095 1M BIOS boot BIOS Boot Partition
lvdisplay
--- Logical volume ---
LV Name docker-pool
VG Name docker
LV UUID piD2Wx-aDjf-CkpN-b4s4-YXWE-6ERm-GWTcOz
LV Write Access read/write
LV Creation host, time ip-172-31-39-159, 2020-02-16 09:18:57 +0000
LV Pool metadata docker-pool_tmeta
LV Pool data docker-pool_tdata
LV Status available
# open 0
LV Size 79.03 GiB
Allocated pool data 80.07%
Allocated metadata 31.58%
Current LE 20232
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:2
But when I try to mount the volume docker-docker--pool_tdata I get the following error:
mount /dev/mapper/docker-docker--pool_tdata /mnt/test
mount: /dev/mapper/docker-docker--pool_tdata is already mounted or /mnt/test busy
I've also tried rebooting the machine, uninstalling Docker, and checking with lsof whether any files are open on that volume.
Do you have any clue about how can I mount that volume?
Thanks
Uninstalling Docker does not really help, as purge and autoremove only delete the installed packages and not the images, containers, volumes and config files.
To delete those you have to delete a number of directories under /etc, /var/lib, /bin and /var/run.
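For example, a hedged sketch of the manual cleanup (the exact paths depend on how Docker was installed; double-check each directory before removing it):
sudo rm -rf /var/lib/docker    # images, containers, volumes
sudo rm -rf /etc/docker        # configuration files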
Clean up the env
try running docker system prune -a to remove unused containers, images etc
remove the volume with docker volume rm {volumeID}
create the volume again docker volume create docker-docker--pool_tdata
Kill the process
Run lsof +D /mnt/test or cat ../docker/../tasks
This should display the PIDs of the tasks that are still alive.
Kill each task with kill -9 {PID}, as in the sketch below.
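A minimal sketch of that last step, assuming /mnt/test is the mount point you tried and <PID> is a placeholder for whatever lsof reports:
lsof +D /mnt/test    # list processes holding files under the mount point
kill -9 <PID>        # replace <PID> with the process id reported above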
I have 2 servers, both Ubuntu 18.04. The first is my main server, which I want to back up. The second is a test server, also Ubuntu 18.04 - KS-4 Server - Atom N2800 - 4GB DDR3 1066 MHz - SoftRAID 2x 2TB SATA - where I want to test my backup.
I make the backup with the 'dd' command and then download it with wget onto server 2 (490 GB, ~24 hours of downloading).
Now I want to test my backup, so I tried:
dd if=sdadisk.img of=/dev/sdb
I get:
193536+0 records in
193536+0 records out
99090432 bytes (99 MB, 94 MiB) copied, 5.06239 s, 19.6 MB/s
But nothing changes.
fdisk -l
Disk /dev/sda: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x8efed6c9
Device Boot Start End Sectors Size Id Type
/dev/sda1 * 4096 1050623 1046528 511M fd Linux raid autodetect
/dev/sda2 1050624 3905974271 3904923648 1.8T fd Linux raid autodetect
/dev/sda3 3905974272 3907020799 1046528 511M 82 Linux swap / Solaris
GPT PMBR size mismatch (879097967 != 3907029167) will be corrected by w(rite).
Disk /dev/sdb: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: B3B13223-6382-4EB6-84C3-8E66C917D396
Device Start End Sectors Size Type
/dev/sdb1 2048 1048575 1046528 511M EFI System
/dev/sdb2 1048576 2095103 1046528 511M Linux RAID
/dev/sdb3 2095104 878039039 875943936 417.7G Linux RAID
/dev/sdb4 878039040 879085567 1046528 511M Linux swap
Disk /dev/md1: 511 MiB, 535756800 bytes, 1046400 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/md2: 1.8 TiB, 1999320842240 bytes, 3904923520 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
lsblk -l
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 1.8T 0 disk
├─sda1 8:1 0 511M 0 part
│ └─md1 9:1 0 511M 0 raid1 /boot
├─sda2 8:2 0 1.8T 0 part
│ └─md2 9:2 0 1.8T 0 raid1 /
└─sda3 8:3 0 511M 0 part [SWAP]
sdb 8:16 0 1.8T 0 disk
├─sdb1 8:17 0 511M 0 part
├─sdb2 8:18 0 511M 0 part
├─sdb3 8:19 0 417.7G 0 part
└─sdb4 8:20 0 511M 0 part
I think the problem is with the configuration of the disks on server 2, specifically with the 'Linux raid' setup between them. I have been searching for how to change it and testing commands like 'mdadm ...', but it is not working as I expected. So I have questions:
How do I change the 'Linux raid' setup from 2 HDDs to 1 HDD holding the current system, leaving the other HDD clear, so that I can test my backup properly?
Is it generally possible to restore a 490 GB backup onto 1.8 TB?
Did I select the best option for a full Linux backup?
To touch on question #3, "Did I select the best option for a full Linux backup?":
You are using the correct tool, but not applying the dd command to the correct items.
It appears you created a .img file of your sda drive, which is normally done to create a bootable disk image, not a full backup of the drive.
If you want to create a full backup, for example of /dev/sda, you would run the following:
dd if=/dev/sda of=/dev/sdb
But in your case, your /dev/sda drive is twice the size of your /dev/sdb drive, so you might want to consider backing up important files with tar and compressing the backup with either gzip or bzip2.
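For instance, a minimal file-level backup sketch along those lines (the target path /mnt/backup and the choice of /home are only illustrative):
tar -czf /mnt/backup/home-backup.tar.gz --one-file-system /home
Using -z gives gzip compression; swapping it for -j uses bzip2, which is slower but usually compresses a little better.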
A few days ago we encountered an unexpected error where one of the mounted drives on our Red Hat Linux machine became read-only. The issue was caused by a network outage in the datacenter.
Now I need to see if I can reproduce the same behavior, where the drive is re-mounted read-only while the application is running.
I tried to remount it read-only, but that didn't work because there are open files (logs being written).
Is there a way to temporarily force the read-only condition if I have root access to the machine (but no access to the hypervisor)?
That volume is mounted via /etc/fstab. Here is the record:
UUID=abfe2bbb-a8b6-4ae0-b8da-727cc788838f / ext4 defaults 1 1
UUID=8c828be6-bf54-4fe6-b68a-eec863d80133 /opt/sunapp ext4 rw 0 2
Here is the output of a few commands showing details about our mounted drives. I can add more details as needed.
Output of fdisk -l
Disk /dev/vda: 268.4 GB, 268435456000 bytes, 524288000 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x0008ba5f
Device Boot Start End Blocks Id System
/dev/vda1 * 2048 524287966 262142959+ 83 Linux
Disk /dev/vdb: 42.9 GB, 42949672960 bytes, 83886080 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Output of lsblk command:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda 253:0 0 80G 0 disk
└─vda1 253:1 0 80G 0 part /
vdb 253:16 0 250G 0 disk /opt/sunup
Output of blkid command:
/dev/vda1: UUID="abfe2bbb-a8b6-4ae0-b8da-727cc788838f" TYPE="ext4"
/dev/sr0: UUID="2017-11-13-13-33-07-00" LABEL="config-2" TYPE="iso9660"
/dev/vdb: UUID="8c828be6-bf54-4fe6-b68a-eec863d80133" TYPE="ext4"
Output of parted -l command:
Warning: Unable to open /dev/sr0 read-write (Read-only file system). /dev/sr0
has been opened read-only.
Error: /dev/sr0: unrecognised disk label
Model: QEMU QEMU DVD-ROM (scsi)
Disk /dev/sr0: 461kB
Sector size (logical/physical): 2048B/2048B
Partition Table: unknown
Disk Flags:
Model: Virtio Block Device (virtblk)
Disk /dev/vda: 268GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number Start End Size Type File system Flags
1 1049kB 268GB 268GB primary ext4 boot
Model: Virtio Block Device (virtblk)
Disk /dev/vdb: 42.9GB
Sector size (logical/physical): 512B/512B
Partition Table: loop
Disk Flags:
Number Start End Size File system Flags
1 0.00B 42.9GB 42.9GB ext4
Yes, you can do it. But the method proposed here may cause data loss, so use it only for testing.
Supposing you have /dev/vdb mounted as /opt/sunapp, do this:
First, unmount it. You may need to shut down any applications that are using it.
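That is, something along these lines (assuming nothing else still has files open under the mount point):
umount /opt/sunapp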
Set up a loop device on top of /dev/vdb:
losetup /dev/loop0 /dev/vdb
Then, mount /dev/loop0 instead of /dev/vdb:
mount /dev/loop0 /opt/sunapp -o rw,errors=remount-ro
Now, you can run your application. When it is time to make /opt/sunapp read-only, use this command:
blockdev --setro /dev/vdb
After that, attempts to write to /dev/loop0 will result in I/O errors. As soon as the file system driver detects this, it will remount the file system read-only.
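To confirm the failure mode, a hedged test write (the file name is only an example) should now fail with an I/O error and trigger the remount:
dd if=/dev/zero of=/opt/sunapp/testfile bs=4096 count=1 oflag=direct
dmesg | tail    # should show the write error and the remount to read-only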
To restore everything back, you will need to unmount /opt/sunapp, detach the loop device, and make /dev/vdb writable again:
umount /opt/sunapp
losetup -d /dev/loop0
blockdev --setrw /dev/vdb
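After that, the volume can be mounted normally again; since it is already listed in /etc/fstab, a plain mount by mount point should be enough:
mount /opt/sunapp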
When I had issues like corrupted disks, I used ntfsfix.
Please see if these commands solve the problem.
sudo ntfsfix /dev/vda
sudo ntfsfix /dev/vdb
I have an oVirt installation with CentOS Linux release 7.3.1611.
I want to add a new drive (sdb) to the oVirt volume group to work with VMs.
Here is the result of fdisk on the drive:
[root@host1 ~]# fdisk /dev/sdb
Welcome to fdisk (util-linux 2.23.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Command (m for help): p
Disk /dev/sdb: 300.1 GB, 300069052416 bytes, 586072368 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x7a33815f
Device Boot Start End Blocks Id System
/dev/sdb1 2048 586072367 293035160 8e Linux LVM
The partitions show up in /proc/partitions:
[root@host1 ~]# cat /proc/partitions
major minor #blocks name
8 0 293036184 sda
8 1 1024 sda1
8 2 1048576 sda2
8 3 53481472 sda3
8 4 1 sda4
8 5 23072768 sda5
8 6 215429120 sda6
8 16 293036184 sdb
8 17 293035160 sdb1
When I execute pvcreate /dev/sdb1 to create the PV, the result is:
[root@host1 ~]# pvcreate /dev/sdb1
Device /dev/sdb1 not found (or ignored by filtering).
I have reviewed /etc/lvm/lvm.conf for filters, but I do not have any filter that would make LVM discard the drive. I have also rebooted the computer after trying to create the PV with pvcreate. I researched the error on Google but found nothing.
Thanks. Any help would be appreciated. Manuel
Try editing lvm.conf: uncomment global_filter and set it like this:
global_filter = [ "a|/dev/sdb|" ]
After that, look at multipath. In my case, lsblk showed the disk was claimed by a multipath map:
[root@ovirtnode2 ~]# lsblk /dev/sdb
NAME                                MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sdb                                   8:16   0  200G  0 disk
└─3678da6e715b018f01f1abdb887594aae 253:2    0  200G  0 mpath
Edit the multipath configuration:
vi /etc/multipath.conf
Append the following to multipath.conf:
blacklist {
    wwid 3678da6e715b018f01f1abdb887594aae
}
Then restart multipathd:
service multipathd restart
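A hedged way to verify the change before retrying (assuming the standard multipath-tools and LVM2 commands are available):
multipath -ll        # the 3678... map should no longer be listed for sdb
pvcreate /dev/sdb1   # or /dev/sdb, depending on your layout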
It worked for me; I had the same problem when trying this on oVirt:
[root@ovirtnode2 ~]# pvcreate /dev/sdb
  Physical volume "/dev/sdb" successfully created.
[root@ovirtnode2 ~]#
After some problems with Docker on my dedicated Debian server (the provider supplies an OS image without some features Docker needs, so yesterday I recompiled the Linux kernel and enabled the required features, following instructions from a blog), I was happy to finally have Docker working and tried to create an image... and I got an error.
$ docker run -d -t -i phusion/baseimage /sbin/my_init -- bash -l
Unable to find image 'phusion/baseimage:latest' locally
Pulling repository phusion/baseimage
5a14c1498ff4: Download complete
511136ea3c5a: Download complete
53f858aaaf03: Download complete
837339b91538: Download complete
615c102e2290: Download complete
b39b81afc8ca: Download complete
8254ff58b098: Download complete
ec5f59360a64: Download complete
2ce4ac388730: Download complete
2eccda511755: Download complete
Status: Downloaded newer image for phusion/baseimage:latest
0bd93f0053140645a930a3411972d8ea9a35385ac9fafd94012c9841562beea8
FATA[0039] Error response from daemon: Cannot start container 0bd93f0053140645a930a3411972d8ea9a35385ac9fafd94012c9841562beea8: [8] System error: write /sys/fs/cgroup/docker/0bd93f0053140645a930a3411972d8ea9a35385ac9fafd94012c9841562beea8/cgroup.procs: no space left on device
More informations :
$ docker info
Containers: 3
Images: 12
Storage Driver: devicemapper
Pool Name: docker-8:1-275423-pool
Pool Blocksize: 65.54 kB
Backing Filesystem: extfs
Data file: /dev/loop0
Metadata file: /dev/loop1
Data Space Used: 814.4 MB
Data Space Total: 107.4 GB
Data Space Available: 12.22 GB
Metadata Space Used: 1.413 MB
Metadata Space Total: 2.147 GB
Metadata Space Available: 2.146 GB
Udev Sync Supported: false
Data loop file: /var/lib/docker/devicemapper/devicemapper/data
Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
Library Version: 1.02.82-git (2013-10-04)
Execution Driver: native-0.2
Kernel Version: 3.19.0-xxxx-std-ipv6-64
Operating System: Debian GNU/Linux 8 (jessie)
CPUs: 4
Total Memory: 7.691 GiB
Name: ns3289160.ip-5-135-180.eu
ID: JK54:ZD2Q:F75Q:MBD6:7MPA:NGL6:75EP:MLAN:UYVU:QIPI:BTDP:YA2Z
System :
$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 10M 0 10M 0% /dev
tmpfs 788M 456K 788M 1% /run
/dev/sda1 20G 7.8G 11G 43% /
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 1.7G 4.0K 1.7G 1% /dev/shm
/dev/sda2 898G 11G 842G 2% /home
Edit: command du -sk /var
# du -sk /var
3927624 /var
Edit: command fdisk -l
# fdisk -l
Disk /dev/loop0: 100 GiB, 107374182400 bytes, 209715200 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/loop1: 2 GiB, 2147483648 bytes, 4194304 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sda: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x00060a5c
Device Boot Start End Sectors Size Id Type
/dev/sda1 * 4096 40962047 40957952 19.5G 83 Linux
/dev/sda2 40962048 1952471039 1911508992 911.5G 83 Linux
/dev/sda3 1952471040 1953517567 1046528 511M 82 Linux swap / Solaris
Disk /dev/mapper/docker-8:1-275423-pool: 100 GiB, 107374182400 bytes, 209715200 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes
You should not remove cgroup support when setting up Docker; otherwise you may get warnings like WARNING: Your kernel does not support memory swappiness capabilities, memory swappiness discarded. when you run a Docker container.
A simple command should do the trick (use tee so that the redirection runs with root privileges):
echo 1 | sudo tee /sys/fs/cgroup/docker/cgroup.clone_children
If that still does not work, run the commands below and restart the Docker service:
echo 0 | sudo tee /sys/fs/cgroup/docker/cpuset.mems
echo 0 | sudo tee /sys/fs/cgroup/docker/cpuset.cpus
I installed Docker via docker-lxc from the Debian repos, following a tutorial. I then tried another solution (with success): I updated my /etc/apt/sources.list from jessie to sid, removed docker-lxc with a purge, and installed docker.io.
The error changed. It became mkdir -p /sys/... can't create dir : access denied
So I found a comment on a blog and tried its solution, which was to comment out this line that the tutorial had previously added:
## file /etc/fstab
# cgroup /sys/fs/cgroup cgroup defaults 0 0
and reboot the server.
yum install -y libcgroup libcgroup-devel libcgroup-tools
cgclear
service cgconfig restart
mount -t cgroup none /cgroup
vi /etc/fstab
cgroup /sys/fs/cgroup cgroup defaults 0 0
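If it helps, a hedged way to check that the cgroup hierarchies are actually mounted afterwards (lssubsys ships with the libcgroup-tools package installed above):
lssubsys -am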
I just recovered data from an EC2 snapshot and created a volume with the data. I also attached the volume to my working instance, but I have problems mounting the new volume. In ec2-describe-volumes, I do see the newly created volume:
i-14305121 /dev/sdi
How do I mount this /dev/sdi onto a directory so that I can access the files on it? I tried mount /dev/sdi, but I got an error: mount: special device /dev/sdi does not exist.
On running lsblk from the terminal, I get this:
dev1@ip-10-244-164-7:/$ lsblk /dev/sdi
lsblk: /dev/sdi: not a block device
dev1@ip-10-244-164-7:/$ lsblk /dev/xvdi
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvdi 202:128 0 8G 0 disk
On running mount /dev/xvdi, I also get an error. Other details from fdisk and mtab are given here:
dev1@ip-10-244-164-7:/$ sudo mount /dev/xvdi /backup
mount: can't find /backup in /etc/fstab or /etc/mtab
dev1@ip-10-244-164-7:/$ cat /etc/fstab
LABEL=cloudimg-rootfs / ext4 defaults 0 0
dev1@ip-10-244-164-7:/$ cat /etc/mtab
/dev/xvda1 / ext4 rw 0 0
proc /proc proc rw,noexec,nosuid,nodev 0 0
sysfs /sys sysfs rw,noexec,nosuid,nodev 0 0
none /sys/fs/cgroup tmpfs rw 0 0
none /sys/fs/fuse/connections fusectl rw 0 0
none /sys/kernel/debug debugfs rw 0 0
none /sys/kernel/security securityfs rw 0 0
udev /dev devtmpfs rw,mode=0755 0 0
devpts /dev/pts devpts rw,noexec,nosuid,gid=5,mode=0620 0 0
tmpfs /run tmpfs rw,noexec,nosuid,size=10%,mode=0755 0 0
none /run/lock tmpfs rw,noexec,nosuid,nodev,size=5242880 0 0
none /run/shm tmpfs rw,nosuid,nodev 0 0
none /run/user tmpfs rw,noexec,nosuid,nodev,size=104857600,mode=0755 0 0
dev1@ip-10-244-164-7:/$ sudo fdisk -l
Disk /dev/xvda1: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders, total 16777216 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/xvda1 doesn't contain a valid partition table
Disk /dev/xvdi: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders, total 16777216 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/xvdi doesn't contain a valid partition table
dev1@ip-10-244-164-7:/$
I'm not sure if this is a good answer, but try running:
lsblk /dev/sdi
It may list the partitions that exist on that drive, like this:
sdi
|--sdi1
|--sdi2
|--sdi3
If you have something like sdi1, you can try to mount it:
mount /dev/sdi1 /your/folder/here
Hope it helps.
I think you should try
mount /dev/xvdi /your/folder
In EC2, devices are named differently from what the AWS console promises. See EC2: EBS device id confusion (/dev/sdf vs. /dev/xvdf).
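A minimal sketch of the whole sequence, assuming the filesystem sits directly on the device (no partition table, as the fdisk output above suggests) and that the /backup mount point does not exist yet:
sudo mkdir -p /backup
sudo mount /dev/xvdi /backup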