I am using Google Compute Engine in Europe and the maintenance window just hit us. The "automatic migration" didn't work, so all of our servers are offline. During the recovery from backup, we found a few files missing.
I have a persistent boot disk, created from the debian-7-wheezy-v20130617 image, with data on it which I am trying to access.
I came up with two possible solutions to access the data:
1. Create a new VM with the old boot disk. Sounds easy, but Google changed something and the VM won't boot.
2. Create a new VM with a new image and attach the old boot disk. Sounds easy, but the old disk is not recognized using good old safe_format_and_mount.
Any ideas on how to access the data from the disk? The migration doc didn't really help; it seems to assume you always have the old VM with the old disk still running.
Since your disks were created before the migration to the current v1 API, you must upgrade the disk to use an embedded kernel before you can re-attach it to a new instance.
I finally figured out how to access the data on the old disks in a new VM:
1. Create a new VM with a current OS image.
2. In addition, attach the old boot disks as read-only.
3. In the VM, check the attached disks with ls -la /dev/sd*. "sda" is the boot disk; the others are the attached ones.
brw-rw---T 1 root disk 8, 0 Jan 22 11:18 /dev/sda
brw-rw---T 1 root disk 8, 1 Jan 22 11:18 /dev/sda1
brw-rw---T 1 root disk 8, 16 Jan 22 11:18 /dev/sdb
brw-rw---T 1 root disk 8, 17 Jan 22 11:18 /dev/sdb1
brw-rw---T 1 root disk 8, 32 Jan 22 11:49 /dev/sdc
brw-rw---T 1 root disk 8, 33 Jan 22 11:49 /dev/sdc1
4. Create a mount point with mkdir /mnt/disk_b and mount the disk partition with mount /dev/sdb1 /mnt/disk_b.
mount: block device /dev/sdb1 is write-protected, mounting read-only
5. Check your data: ls -la /mnt/disk_b
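For reference, with the current gcloud CLI the first two steps would look roughly like this (the instance name, disk name, and zone below are placeholders; adjust them to your project):
# 1. create a fresh VM from a current image
gcloud compute instances create recovery-vm --zone europe-west1-b
# 2. attach the old boot disk read-only as an additional disk
gcloud compute instances attach-disk recovery-vm --disk old-boot-disk --mode ro --zone europe-west1-b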
I want to use GlusterFS as distributed file storage on FreeBSD 11.1.
The documentation is poor, so I followed some how-tos on the net.
I could create the GlusterFS volume, but I have trouble mounting it on another client machine. Here is what I did so far:
I have three hosts, all in the same subnet.
10.0.0.21 Webserver
10.0.0.31 gluster1
10.0.0.32 gluster2
I added the above entries in the /etc/hosts files on all of the three hosts.
I modified /etc/rc.conf on gluster1 and gluster2 with:
glusterd_enable="YES"
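To bring the daemons up without rebooting, starting the rc script directly on both nodes should work (assuming the port installs it under the same name as the rc.conf variable):
service glusterd start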
On gluster1 I did:
gluster peer probe gluster2
(succeeded)
gluster1 and gluster2 each have the following hard drive: /dev/da1
The drives are partitioned (BSD label) and mounted on gluster1 and gluster2 as /datastore.
"cat /etc/fstab" gives on both gluster1 and gluster2:
# Device Mountpoint FStype Options Dump Pass#
/dev/da0a / ufs rw 1 1
/dev/da1a /datastore ufs rw 2 2
I created the gluster volume1:
gluster volume create volume1 replica 2 transport tcp gluster1:/datastore gluster2:/datastore force
(I'm aware of the split-brain risk; this is a simple test scenario.)
I started the volume1 with:
gluster volume start volume1
A check of the volume1 with:
gluster volume info
gives me back:
Type: Replicate
Volume ID: a760c545-1cc9-47a4-bc9e-51f6180e4d7a
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: gluster1:/datastore
Brick2: gluster2:/datastore
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
So far everything worked, and seems to be fine.
Now my trouble starts when trying to mount and use this on the client/consumer machine (Webserver).
I read in several places that the GlusterFS volume1 should be mountable with:
mount -t glusterfs gluster1:/volume1 /mnt
This simply gives me back the following error:
mount: gluster1:/volume1: Operation not supported by device
As I normally do before I ask "silly" questions, I googled a lot for this.
I also played around with installing glusterfs on the client (pkg install glusterfs), enabling it in the client's /etc/rc.conf, and adding stuff for FUSE, but I could not get it to work.
I feel quite annoyed, because I know it must be a very small thing I'm missing here!
Can anyone shed some light into my issue?
Update: checking with gluster volume status volume1 shows:
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick gluster1:/datastore N/A N/A N N/A
Brick gluster2:/datastore N/A N/A N N/A
Self-heal Daemon on localhost N/A N/A N 55181
Self-heal Daemon on gluster2 N/A N/A N 30318
Task Status of Volume volume1
------------------------------------------------------------------------------
There are no active volume tasks
So, I enabled NFS with this:
gluster volume set volume1 nfs.disable off
There was a warning about no longer using GlusterFS NFS and using NFS-Ganesha instead; I ignored the warning for this test.
Now I restarted the volume:
gluster volume stop volume1
gluster volume start volume1
To check I did:
gluster volume info
which showed me now:
Volume Name: volume1
Type: Replicate
Volume ID: a760c545-1cc9-47a4-bc9e-51f6180e4d7a
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: gluster1:/datastore
Brick2: gluster2:/datastore
Options Reconfigured:
nfs.disable: off
transport.address-family: inet
So nfs.disable was set to off. NFS should be on now, right?
But
gluster volume status volume1
still shows no NFS running:
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick gluster1:/datastore N/A N/A N N/A
Brick gluster2:/datastore N/A N/A N N/A
NFS Server on localhost N/A N/A N N/A
Self-heal Daemon on localhost N/A N/A N 99115
NFS Server on gluster2 N/A N/A N N/A
Self-heal Daemon on gluster2 N/A N/A N 37075
Task Status of Volume volume1
------------------------------------------------------------------------------
There are no active volume tasks
What is also disturbing here (besides NFS showing Online: N) is that both bricks seem to be offline as well (Online indicated as N).
So I'm really stuck and could use some help.
Finally it is working:
/usr/local/sbin/mount_glusterfs gluster1:/volume1 /mnt
did the trick...
The client also needs to have the net/glusterfs package installed, and the following statement in /boot/loader.conf:
fuse_load="YES"
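For completeness, the whole client-side sequence was roughly the following (package and kernel module names as on FreeBSD 11.1):
pkg install glusterfs
# make FUSE load at boot
echo 'fuse_load="YES"' >> /boot/loader.conf
# load it immediately, or reboot so loader.conf takes effect
kldload fuse
# mount the volume with the glusterfs mount helper
/usr/local/sbin/mount_glusterfs gluster1:/volume1 /mnt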
Cheers
I think the issue may be with the UFS file system. Does it support extended attributes extensively?
GlusterFS requires a file system with extended attribute support (XFS is one).
From the link (https://access.redhat.com/articles/1273933):
As the Red Hat Storage makes extensive use of extended attributes, an XFS inode size of 512 bytes works better with Red Hat Storage than the default XFS inode size of 256 bytes. So, inode size for XFS must be set to 512 bytes, while formatting the Red Hat Storage bricks. To set the inode size, you need to use -i size option with the mkfs.xfs command.
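For illustration, on a system where XFS is available, formatting a brick with the 512-byte inode size mentioned above would look roughly like this (the device name is just a placeholder):
mkfs.xfs -i size=512 /dev/sdb1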
I have properly connected an iSCSI target to my FreeBSD host using iscsictl. This new device shows up as da7. The disk shows up with:
geom disk list
as
Geom name: da7
Providers:
1. Name: da7
Mediasize: 4294967296000 (3.9T)
Sectorsize: 512
Stripesize: 8192
Stripeoffset: 0
Mode: r0w0e0
descr: SYNOLOGY iSCSI Storage
lunname: SYNOLOGYiSCSI Storage:44281bed-ce3d-4a9f-b95e-c89b6c74c345
lunid: 600140544281beddce3dd4a9fdb95edc
ident: 44281bed-ce3d-4a9f-b95e-c89b6c74c345
rotationrate: unknown
fwsectors: 63
fwheads: 255
I wanted to create a new ZFS zpool on this single disk with the command:
zpool create backuppool /dev/da7
The zpool command then utilises a lot of CPU but never finishes (I let it run for 2 hours).
If I create a UFS file system on the properly partitioned disk, the process is extremely fast. Also, if I create a pool on a different raw disk, zpool finishes within seconds.
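For reference, partitioning and creating the UFS file system amounts to roughly the following (assuming GPT and a single partition spanning the disk):
gpart create -s gpt da7
gpart add -t freebsd-ufs da7
newfs /dev/da7p1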
After some research I could not find any information on whether creating a zpool on an iSCSI target is supported or not. Has anyone gotten this working?
Tested on: FreeBSD 11.1-RELEASE-p4 #0: Tue Nov 14 06:12:40 UTC 2017
We have two servers running Ubuntu 14.04 using Docker. Every other month, when starting or building a container, we get this message:
container_linux.go:247: starting container process caused "process_linux.go:258: applying cgroup configuration for process caused
\"mkdir /sys/fs/cgroup/memory/docker/cf657a58a1382e62976b4d339946f07e8a40f22f18b52822f884834f78830806: no space left on device\""
The disks still have lots of space, but cat /proc/cgroups gives this (num_cgroups keeps increasing):
#subsys_name hierarchy num_cgroups enabled
cpuset 1 65805 1
cpu 2 65807 1
cpuacct 3 65803 1
blkio 4 65803 1
memory 5 65535 1
devices 6 65805 1
freezer 7 65803 1
net_cls 8 65803 1
perf_event 9 65803 1
net_prio 10 65803 1
hugetlb 11 65803 1
Restarting the server has always helped so far, but we don't want to restart a server every few months.
So I started some research and found a directory under the /sys/fs/cgroup/*/user path.
/sys/fs/cgroup/systemd/user/998.user itself holds 65662 subdirectories, all named something like 36309.session (the number increases).
Is there a way to see what process is creating those cgroups?
I thought it was process 998, but that doesn't even exist.
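The 998 in 998.user presumably refers to a user ID rather than a PID; a quick way to check which account that is and how many session cgroups have piled up would be something like:
getent passwd 998
loginctl list-sessions   # if systemd-logind/loginctl is available
ls /sys/fs/cgroup/systemd/user/998.user | wc -l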
I ran into this same problem with AWS Batch. I have no solution, but I found this discussion: https://github.com/moby/moby/issues/29638. It seems that the problem is some kind of leak in the kernel and/or Docker.
I encountered the same issue. You probably have a lot of dangling images/containers, which is causing Docker's cgroup to run out of space. Check it with:
docker images -a
docker ps -a
You need to clean them up. One solution is to remove all images, containers, etc. that are not being used at the moment:
docker system prune -a
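If removing everything unused at once is too aggressive, a more targeted cleanup (available on Docker 1.13+) could be:
# remove stopped containers only
docker container prune
# remove dangling images only
docker image prune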
Need: create keyspace on alternate device
Problem: service aborts on startup with dir-create failure messages below.
INFO [main] 2017-01-06 00:45:03,300 ViewManager.java:137 - Not submitting build tasks for views in keyspace system_schema as storage service is not initialized
ERROR [main] 2017-01-06 00:45:03,393 Directories.java:239 - Failed to create /var/lib/cassandra/data/opus/aa-15be7240d3db11e6ad0eed0a1d791016 directory
ERROR [main] 2017-01-06 00:45:03,397 DefaultFSErrorHandler.java:92 - Exiting forcefully due to file system exception on startup, disk failure policy "stop"
Context: Cassandra 3.9, single node, Ubuntu 16.04; directory perms are below.
01:52 opus/ cd /var/lib/cassandra/data
01:52 opus/ ls -l
total 24
drwxr-xr-x 3 cassandra cassandra 4096 Jan 6 00:41 opus
drwxr-xr-x 24 cassandra cassandra 4096 Jan 5 23:49 system
drwxr-xr-x 6 cassandra cassandra 4096 Jan 5 23:50 system_auth
drwxr-xr-x 5 cassandra cassandra 4096 Jan 5 23:50 system_distributed
drwxr-xr-x 12 cassandra cassandra 4096 Jan 5 23:50 system_schema
drwxr-xr-x 4 cassandra cassandra 4096 Jan 5 23:50 system_traces
01:52 opus/ cd opus
01:52 opus/ ls -l
total 4
drwxr-xr-x 3 cassandra cassandra 4096 Jan 6 00:41 aa-15be7240d3db11e6ad0eed0a1d791016
When the link is installed:
01:57 data/ ls -l
total 20
lrwxrwxrwx 1 root root 35 Jan 6 01:57 opus -> /media/opus/quantdrive/opus
Steps:
Vanilla install of Cassandra 3.9
Create keyspace in cqlsh: create keyspace opus with replication = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 };
Create table: use opus; create table aa(aa int, primary key(aa));
Stop Cassandra
Move keyspace dir: mv /var/lib/cassandra/data/opus /media/opus/quantdrive
Create symbolic link: ln -s /media/opus/quantdrive/opus /var/lib/cassandra/data/opus
Start Cassandra [FAILS AS ABOVE] complaining it cannot create the directory, even though the directory is already present
There was no change in perms on the opus keyspace directory; I just moved it. When I move it back, Cassandra starts fine.
I would be grateful for any help with this, and I apologize in advance if the solution to my problem is described elsewhere or if I'm missing the obvious.
Move the mount point for the target drive from a user-owned directory to a root-owned one. In my case I moved the mount point from /media/opus/quantdrive, which is owned by user opus, to /mnt/quantdrive, which is owned by root, and everything worked fine.
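Roughly, the fix amounted to the following (the device name below is a placeholder for whatever the drive actually is):
umount /media/opus/quantdrive
mkdir /mnt/quantdrive
mount /dev/sdX1 /mnt/quantdrive
# re-point the symlink in the data directory at the new location
rm /var/lib/cassandra/data/opus
ln -s /mnt/quantdrive/opus /var/lib/cassandra/data/opus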
I have an application that I eventually want to run on a cloud computing service (e.g., AWS or Google Cloud) packaged inside a docker image. The reason the application will need to run in the cloud is that it's designed to process large data files, but before I actually deploy, I'd like to test it first on a local laptop, using a single large data file that I've stored (for test and development purposes) on an external USB drive.
My development machine is an OSX laptop, and I'm using a recent version of docker:
stachyra> uname -a
Darwin Andrews-MacBook-Pro-76.local 14.5.0 Darwin Kernel Version 14.5.0: Tue Sep 1 21:23:09 PDT 2015; root:xnu-2782.50.1~1/RELEASE_X86_64 x86_64
stachyra> docker --version
Docker version 1.10.2, build c3959b1
OSX has mounted my external USB drive, device /dev/disk2s2, as /Volumes/MGR DATA:
stachyra> df
Filesystem 512-blocks Used Available Capacity iused ifree %iused Mounted on
/dev/disk1 974770480 435721376 538537104 45% 54529170 67317138 45% /
devfs 375 375 0 100% 650 0 100% /dev
map -hosts 0 0 0 100% 0 0 100% /net
map auto_home 0 0 0 100% 0 0 100% /home
/dev/disk2s2 3906291632 3869523640 36767992 100% 483690453 4595999 99% /Volumes/MGR DATA
/dev/disk3s1 196608 193160 3448 99% 24143 431 98% /Volumes/VirtualBox
stachyra> diskutil list
/dev/disk0
#: TYPE NAME SIZE IDENTIFIER
0: GUID_partition_scheme *500.3 GB disk0
1: EFI EFI 209.7 MB disk0s1
2: Apple_CoreStorage 499.4 GB disk0s2
3: Apple_Boot Recovery HD 650.0 MB disk0s3
/dev/disk1
#: TYPE NAME SIZE IDENTIFIER
0: Apple_HFS Macintosh HD *499.1 GB disk1
Logical Volume on disk0s2
DB70B91A-3B57-4C82-A758-C4BDEA4160FD
Unlocked Encrypted
/dev/disk2
#: TYPE NAME SIZE IDENTIFIER
0: GUID_partition_scheme *2.0 TB disk2
1: EFI EFI 209.7 MB disk2s1
2: Apple_HFS MGR DATA 2.0 TB disk2s2
/dev/disk3
#: TYPE NAME SIZE IDENTIFIER
0: GUID_partition_scheme *100.7 MB disk3
1: Apple_HFS VirtualBox 100.7 MB disk3s1
It should also be noted that the drive has several directories and data which are visible inside it, at least when viewed directly through OSX:
stachyra> ls -l /Volumes/MGR\ DATA
total 0
drwxr-xr-x 6 stachyra staff 204 Apr 14 2015 1000genomes
drwxr-xr-x 5 stachyra staff 170 Oct 12 17:41 GIAB
drwxr-xr-x 4 stachyra staff 136 Apr 28 2015 genome_browser_tracks
drwxr-xr-x 24 stachyra staff 816 Oct 6 14:00 mitty
I have tried to follow the advice from this question, which describes how to mount a USB drive in docker when docker is running within a linux host. But my local laptop is OSX, not linux, so it doesn't seem to work.
Explicitly, when attempting to follow the advice of the accepted answer, I obtain the following result:
stachyra> docker run -i -t --privileged -v /dev/disk2s2:/dev/foo ubuntu bash
root@8da7b492a707:/# uname -a
Linux 8da7b492a707 4.1.18-boot2docker #1 SMP Sat Feb 20 08:24:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
root@8da7b492a707:/# ls -l /dev/foo
total 0
root@8da7b492a707:/#
Based upon the response, one can see that docker does indeed launch a linux container correctly, and it also creates a volume /dev/foo inside of the container as requested, but the actual contents of the USB drive are not accessible via that location--the ls -l command claims there are no files or directories there.
I also tried the second method described in an alternate response to the same question, and that fails even worse:
stachyra> docker run -i -t --device=/dev/disk2s2 ubuntu bash
docker: Error response from daemon: error gathering device information while adding custom device "/dev/disk2s2": not a device node.
stachyra>
I have found another discussion thread on stackoverflow which suggests that raw USB access is handled quite differently in OSX than in linux, which I suspect is probably the reason why both of the above attempts at USB access are failing.
But, what should I actually do about it? That is to say, what is the correct sequence of actions or commands to allow docker to access a USB device mounted on an OSX host, rather than linux?
I was finally able to access my USB drive from /var/media inside my container by using the machine-diskutil.sh script mentioned in warmoverflow's comment like so
machine-diskutil.sh mount my-machine-name /Volumes/my-usb-drive
and then starting the container like so
docker run -v /Volumes/my-usb-drive:/var/media -it my/image:latest bash
Because I had previously tried to add /Volumes/my-usb-drive as a shared folder manually in VirtualBox, I first got this error:
Error: The shared folder /Volumes/Seagate already exists on the
docker machine, please unmount it first.
So I removed it manually and re-ran the machine-diskutil.sh mount command without any problems. Great stuff!
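For reference, removing a stale shared folder manually can also be done with VBoxManage while the docker machine is stopped (the machine and folder names below are from my setup and will differ on yours):
docker-machine stop my-machine-name
VBoxManage sharedfolder remove my-machine-name --name Seagate
docker-machine start my-machine-name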
As per @pgayvallet's comment on GitHub:
As the daemon runs inside a VM in Docker Desktop, it is not possible to actually share a mac host device with the container inside the VM, and this will most definitely never be possible.