Creating a ZFS zpool on an initiated iSCSI disk on FreeBSD

I have successfully connected an iSCSI target to my FreeBSD host using iscsictl. The new device shows up as da7, and geom disk list reports it as:
Geom name: da7
Providers:
1. Name: da7
Mediasize: 4294967296000 (3.9T)
Sectorsize: 512
Stripesize: 8192
Stripeoffset: 0
Mode: r0w0e0
descr: SYNOLOGY iSCSI Storage
lunname: SYNOLOGYiSCSI Storage:44281bed-ce3d-4a9f-b95e-c89b6c74c345
lunid: 600140544281beddce3dd4a9fdb95edc
ident: 44281bed-ce3d-4a9f-b95e-c89b6c74c345
rotationrate: unknown
fwsectors: 63
fwheads: 255
I wanted to create a new ZFS zpool on this single disk with the command:
zpool create backuppool /dev/da7
The zpool command then consumes a lot of CPU but never finishes (I let it run for two hours).
If I create a UFS filesystem on the properly partitioned disk instead, the process is extremely fast. Likewise, if I create a pool on a different raw disk, zpool finishes within seconds.
After some research I could not find any information on whether creating a zpool on an iSCSI target is supported. Has anyone gotten this working?
Tested on: FreeBSD 11.1-RELEASE-p4 #0: Tue Nov 14 06:12:40 UTC 2017
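A useful first check (my addition, not from the original post) is whether zpool create is actually blocked on the iSCSI device. From a second terminal while zpool create spins:
gstat -f da7    # per-device GEOM I/O load and latency, filtered to da7
iscsictl -L     # verify the iSCSI session is still in state Connected
If gstat shows da7 saturated with small writes at high latency, that would point at the target, not zpool, as the bottleneck.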

Related

Filesystem stats not available after CIFS reconnect

I am using a Windows Server 2019 machine with SMB server shares which get mounted on a SLES host via CIFS.
When Windows does its periodic system cleanup (closing idle SMB server sessions), the Linux server reconnects with the following kernel message:
CIFS: VFS: \\filer.example.com has not responded in 180 seconds. Reconnecting...
The reconnect seems to be successful, as reading and writing files on the mount is still possible.
But querying disk stats is no longer possible.
Bad file descriptor on df:
user@suse:~$ df -h
df: /mnt/test: Bad file descriptor
Filesystem Size Used Avail Use% Mounted on
...
Wrong data from stat:
user@suse:~$ stat /mnt/test
File: /mnt/test
Size: 0 Blocks: 0 IO Block: 1048576 directory
Device: 38h/56d Inode: 281474976710700 Links: 2
Access: (0755/drwxr-xr-x) Uid: ( 1100/ application-user) Gid: ( 80/ application-group)
Access: 2023-02-15 11:04:55.977807600 +0100
Modify: 2023-02-15 11:04:55.977807600 +0100
Change: 2023-02-16 09:50:07.638662000 +0100
Birth: 2023-02-10 14:07:14.408836200 +0100
I noticed the same problem when mounting subdirectories of a single share multiple times. This SLES host does mount multiple shares, but each share only once.
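A pragmatic recovery worth trying (my suggestion, not from the post): lazily unmount the stale mount and remount it, which replaces the superblock state the reconnect left behind:
umount -l /mnt/test    # lazy unmount: detach now, clean up once no process holds it
mount /mnt/test        # assumes the share has an /etc/fstab entry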

Attempting to boot new uImage

I am attempting to boot a uImage via U-Boot and I am getting some seemingly conflicting log info:
update Kernel1 tftp uImage-2.6.35-digi-armv7a.LONEPEAK-Ver-4_33
Using FEC0 device
TFTP from server 10.12.1.77; our IP address is 10.12.1.205
Filename 'uImage-2.6.35-digi-armv7a.LONEPEAK-Ver-4_33'.
Load address: 0x94000000
Loading: #################################################################
#################################################################
###########################################
done
Bytes transferred = 2533360 (26a7f0 hex)
Calculated checksum = 0x49669c61
Updating partition 'Kernel1'
Erasing 128 KiB @ 0x08540000: 0%
Erasing 128 KiB @ 0x085e0000: 20%
Erasing 128 KiB @ 0x08680000: 41%
Erasing 128 KiB @ 0x08720000: 62%
Erasing 128 KiB @ 0x087c0000: 83%
Erasing: complete
Writing: 0%
Writing: 51%
Writing: complete
Verifying: 0%
Verifying: 51%
Verifying: complete
Writing Parameters to NVRAM
Update successful
Above it shows a successful update, but then when I issue a reboot command I get:
scanning bus for devices... 1 USB Device(s) found
scanning bus for storage devices... 0 Storage Device(s) found
** Invalid boot device **
Booting partition 'Kernel0'
## Booting kernel from Legacy Image at 90007fc0 ...
Image Name: Linux-2.6.35.14-tjerbmx51_0005+
Created: 2018-10-16 21:35:37 UTC
Image Type: ARM Linux Kernel Image (uncompressed)
Data Size: 2533296 Bytes = 2.4 MB
Load Address: 90008000
Entry Point: 90008000
Loading Kernel Image ... OK
OK
Starting kernel ...
So my question is:
Is there a way to version my kernel when I build it, such that I can set the 'Image Name' and know it's my kernel being loaded and not some other legacy image?
Perhaps the CONFIG_LOCALVERSION option in the Linux kernel .config file will help you.
From kernel.org:
Keep a backup kernel handy in case something goes wrong. This is
especially true for the development releases, since each new release
contains new code which has not been debugged. Make sure you keep a
backup of the modules corresponding to that kernel, as well. If you
are installing a new kernel with the same version number as your
working kernel, make a backup of your modules directory before you do
a make modules_install.
Alternatively, before compiling, use the kernel config option
“LOCALVERSION” to append a unique suffix to the regular kernel
version. LOCALVERSION can be set in the “General Setup” menu.
So during the kernel configuration you can add a clear suffix to your kernel, e.g. CONFIG_LOCALVERSION="-test_some_stuff".
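A minimal sketch of that workflow (the suffix value is illustrative, not from the post). Since make uImage names the image Linux-<kernelrelease>, the suffix becomes visible in the U-Boot header:
make menuconfig    # General Setup -> Local version - append to kernel release
grep CONFIG_LOCALVERSION .config
CONFIG_LOCALVERSION="-lonepeak-v4_34"
make uImage
The boot log's Image Name line would then read something like Linux-2.6.35.14-lonepeak-v4_34 instead of the stock string, making your own build unmistakable.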
Some useful links: 1 and 2.

Docker Devmapper space issue - increase size

I have the same issue as described in "space issue on docker devmapper and CentOS7".
That question only explains how to clean up, not how to increase the space, and I don't have any images to clean. I tried several things with dm.min_free_space, but nothing worked; I want to increase the available space.
OS Version/build: Red Hat Enterprise Linux Server release 7.3 (Maipo)
App version:
Client:
Version: 1.12.6
API version: 1.24
Package version: docker-common-1.12.6-11.el7.centos.x86_64
Go version: go1.7.4
Git commit: 96d83a5/1.12.6
Built: Tue Mar 7 09:23:34 2017
OS/Arch: linux/amd64
Server:
Version: 1.12.6
API version: 1.24
Package version: docker-common-1.12.6-11.el7.centos.x86_64
Go version: go1.7.4
Git commit: 96d83a5/1.12.6
Built: Tue Mar 7 09:23:34 2017
OS/Arch: linux/amd64
Steps to reproduce
I currently have no containers running, and I have some docker images pertaining to Kubernetes which will be used by the Kubernetes service.
sudo docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
[kubeuser4@kubenode4 Employee]$ sudo docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
docker.io/busybox latest 00f017a8c2a6 5 days ago 1.11 MB
registry.access.redhat.com/rhel7/pod-infrastructure latest 34d3450d733b 6 weeks ago 205 MB
docker.io/java 8 d23bdf5b1b1b 8 weeks ago 643.1 MB
gcr.io/google_containers/heapster_grafana v2.6.0-2 b43443930626 12 months ago 230 MB
When I try to create a docker image of my application that needs to be used, I get the below error.
devmapper: Thin Pool has 8783 free data blocks which is less than minimum required 163840 free data blocks. Create more free space in thin pool or use dm.min_free_space option to change behavior
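For scale (my arithmetic from the outputs below): at the pool's 64 KiB block size, the required minimum of 163840 free blocks works out to 163840 × 65536 bytes ≈ 10.74 GB, exactly the "Thin Pool Minimum Free Space" reported by docker info, while 8783 free blocks is only about 0.58 GB.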
I tried cleaning up as mentioned in the other forums, but it did not help much and I keep getting the same error. When I tried running sudo docker --storage-opt dm.min_free_space=0%, it seemed to start as a daemon, but it still failed with another error, "docker-runc not installed on system", and in any case I don't want to run it as a daemon.
Below are some command outputs
sudo dmsetup status
localvg00-lv_home: 0 20971520 linear
localvg00-lv_home: 20971520 20971520 linear
docker-251:5-134039-pool: 0 209715200 thin-pool 924 848/524288 1629226/1638400 - rw discard_passdown queue_if_no_space
localvg00-lv_tmp: 0 4194304 linear
localvg00-lv_swap: 0 8388608 linear
localvg00-lv_root: 0 2097152 linear
localvg00-lv_root: 2097152 20971520 linear
localvg00-lv_usr: 0 16777216 linear
localvg00-lv_var: 0 8388608 linear
localvg00-lv_var: 8388608 62914560 linear
sudo docker info
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 4
Server Version: 1.12.6
Storage Driver: devicemapper
Pool Name: docker-251:5-134039-pool
Pool Blocksize: 65.54 kB
Base Device Size: 10.74 GB
Backing Filesystem: xfs
Data file: /dev/loop0
Metadata file: /dev/loop1
Data Space Used: 106.8 GB
Data Space Total: 107.4 GB
Data Space Available: 601.2 MB
Metadata Space Used: 3.473 MB
Metadata Space Total: 2.147 GB
Metadata Space Available: 2.144 GB
Thin Pool Minimum Free Space: 10.74 GB
Udev Sync Supported: true
Deferred Removal Enabled: false
Deferred Deletion Enabled: false
Deferred Deleted Device Count: 0
Data loop file: /var/lib/docker/devicemapper/devicemapper/data
WARNING: Usage of loopback devices is strongly discouraged for production use. Use `--storage-opt dm.thinpooldev` to specify a custom block storage device.
Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
Library Version: 1.02.135-RHEL7 (2016-11-16)
Logging Driver: journald
Cgroup Driver: systemd
Plugins:
Volume: local
Network: overlay null bridge host
Swarm: inactive
Runtimes: runc docker-runc
Default Runtime: docker-runc
Security Options: seccomp
Kernel Version: 4.1.12-61.1.28.el7uek.x86_64
Operating System: Oracle Linux Server 7.3
OSType: linux
Architecture: x86_64
Number of Docker Hooks: 2
CPUs: 2
Total Memory: 7.545 GiB
Name: kubenode4
I have also tried increasing the physical volume size and the logical volume size (lv_var) on my Linux machine, but it still doesn't work.
sudo lvs
[sudo] password for kubeuser4:
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
lv_home localvg00 -wi-ao---- 20.00g
lv_root localvg00 -wi-ao---- 11.00g
lv_swap localvg00 -wi-ao---- 4.00g
lv_tmp localvg00 -wi-ao---- 2.00g
lv_usr localvg00 -wi-ao---- 8.00g
lv_var localvg00 -wi-ao---- 34.00g
sudo ls -lsh /var/lib/docker/devicemapper/devicemapper/data
2.3G -rw------- 1 root root 100G Mar 14 22:16 /var/lib/docker/devicemapper/devicemapper/data
Could someone let me know how this can be done?
Thanks,
It is better to move away from devicemapper, for a few reasons:
devicemapper in loopback has an unrecoverable storage issue: https://github.com/docker/docker/issues/3182 ("devicemapper not recommended for production use").
I found it easy enough to switch to the overlay storage driver; YMMV, of course, but hopefully not by much. rm -rf /var/lib/docker is somewhat optional when switching, but it is easy and I highly recommend it as long as you can load your images back in afterwards. http://www.projectatomic.io/blog/2015/06/notes-on-fedora-centos-and-docker-storage-drivers/
systemctl stop docker
rm -rf /var/lib/docker
# If these files do not already exist, create them; otherwise edit them by hand.
# (You can also just add "-s overlay" to the docker unit's start command instead.)
ls /etc/sysconfig/docker /etc/sysconfig/docker-storage
[[ $? != 0 ]] && {
    echo "OPTIONS='--selinux-enabled=false'" > /etc/sysconfig/docker
    echo "DOCKER_STORAGE_OPTIONS=-s overlay" > /etc/sysconfig/docker-storage
}
systemctl start docker
systemctl status docker
docker images
more reading:
https://docs.docker.com/engine/userguide/storagedriver/selectadriver/
https://integratedcode.us/2016/08/30/storage-drivers-in-docker-a-deep-dive/
I was able to get it working and have described the solution in
https://forums.docker.com/t/devmapper-space-issue/29786/3
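For completeness: if you have to stay on loopback devicemapper and only want a larger pool, the loop file sizes can be set via storage options, e.g. in /etc/sysconfig/docker-storage (a sketch; sizes are illustrative). Note they take effect when the storage is first initialized, so they pair with the rm -rf /var/lib/docker reset shown above:
DOCKER_STORAGE_OPTIONS="--storage-opt dm.loopdatasize=200GB --storage-opt dm.loopmetadatasize=4GB"
systemctl restart docker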

How do I access a USB drive on an OSX host from inside a docker container?

I have an application that I eventually want to run on a cloud computing service (e.g., AWS or Google Cloud), packaged inside a docker image. The reason the application needs to run in the cloud is that it's designed to process large data files, but before I actually deploy, I'd like to test it first on a local laptop, using a single large data file that I've stored (for test and development purposes) on an external USB drive.
My development machine is an OSX laptop, and I'm using a recent version of docker:
stachyra> uname -a
Darwin Andrews-MacBook-Pro-76.local 14.5.0 Darwin Kernel Version 14.5.0: Tue Sep 1 21:23:09 PDT 2015; root:xnu-2782.50.1~1/RELEASE_X86_64 x86_64
stachyra> docker --version
Docker version 1.10.2, build c3959b1
OSX has mounted my external USB drive, device /dev/disk2s2, as /Volumes/MGR DATA:
stachyra> df
Filesystem 512-blocks Used Available Capacity iused ifree %iused Mounted on
/dev/disk1 974770480 435721376 538537104 45% 54529170 67317138 45% /
devfs 375 375 0 100% 650 0 100% /dev
map -hosts 0 0 0 100% 0 0 100% /net
map auto_home 0 0 0 100% 0 0 100% /home
/dev/disk2s2 3906291632 3869523640 36767992 100% 483690453 4595999 99% /Volumes/MGR DATA
/dev/disk3s1 196608 193160 3448 99% 24143 431 98% /Volumes/VirtualBox
stachyra> diskutil list
/dev/disk0
#: TYPE NAME SIZE IDENTIFIER
0: GUID_partition_scheme *500.3 GB disk0
1: EFI EFI 209.7 MB disk0s1
2: Apple_CoreStorage 499.4 GB disk0s2
3: Apple_Boot Recovery HD 650.0 MB disk0s3
/dev/disk1
#: TYPE NAME SIZE IDENTIFIER
0: Apple_HFS Macintosh HD *499.1 GB disk1
Logical Volume on disk0s2
DB70B91A-3B57-4C82-A758-C4BDEA4160FD
Unlocked Encrypted
/dev/disk2
#: TYPE NAME SIZE IDENTIFIER
0: GUID_partition_scheme *2.0 TB disk2
1: EFI EFI 209.7 MB disk2s1
2: Apple_HFS MGR DATA 2.0 TB disk2s2
/dev/disk3
#: TYPE NAME SIZE IDENTIFIER
0: GUID_partition_scheme *100.7 MB disk3
1: Apple_HFS VirtualBox 100.7 MB disk3s1
It should also be noted that the drive contains several directories and data files, which are visible when viewed directly through OSX:
stachyra> ls -l /Volumes/MGR\ DATA
total 0
drwxr-xr-x 6 stachyra staff 204 Apr 14 2015 1000genomes
drwxr-xr-x 5 stachyra staff 170 Oct 12 17:41 GIAB
drwxr-xr-x 4 stachyra staff 136 Apr 28 2015 genome_browser_tracks
drwxr-xr-x 24 stachyra staff 816 Oct 6 14:00 mitty
I have tried to follow the advice from this question, which describes how to mount a USB drive in docker when docker is running within a Linux host. But my local laptop runs OSX, not Linux, so that approach doesn't seem to work.
Explicitly, when attempting to follow the advice of the accepted answer, I obtain the following result:
stachyra> docker run -i -t --privileged -v /dev/disk2s2:/dev/foo ubuntu bash
root@8da7b492a707:/# uname -a
Linux 8da7b492a707 4.1.18-boot2docker #1 SMP Sat Feb 20 08:24:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
root@8da7b492a707:/# ls -l /dev/foo
total 0
root@8da7b492a707:/#
Based upon the response, one can see that docker does indeed launch a Linux container correctly, and it also creates the mount point /dev/foo inside the container as requested, but the actual contents of the USB drive are not accessible at that location: the ls -l command claims there are no files or directories there.
I also tried the second method described in an alternate response to the same question, and that fails even worse:
stachyra> docker run -i -t --device=/dev/disk2s2 ubuntu bash
docker: Error response from daemon: error gathering device information while adding custom device "/dev/disk2s2": not a device node.
stachyra>
I have found another discussion thread on stackoverflow which suggests that raw USB access is handled quite differently in OSX than in linux, which I suspect is probably the reason why both of the above attempts at USB access are failing.
But, what should I actually do about it? That is to say, what is the correct sequence of actions or commands to allow docker to access a USB device mounted on an OSX host, rather than linux?
I was finally able to access my USB drive from /var/media inside my container by using the machine-diskutil.sh script mentioned in warmoverflow's comment, like so:
machine-diskutil.sh mount my-machine-name /Volumes/my-usb-drive
and then starting the container like so:
docker run -v /Volumes/my-usb-drive:/var/media -it my/image:latest bash
Because I had previously tried to add /Volumes/my-usb-drive as a shared folder manually in VirtualBox, I first got this error:
Error: The shared folder /Volumes/Seagate already exists on the
docker machine, please unmount it first.
So I removed it manually and re-ran the machine-diskutil.sh mount command without any problems. Great stuff!
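For reference, roughly what that script automates under the hood (my assumption from how docker-machine's VirtualBox driver works; machine and folder names are illustrative):
docker-machine stop my-machine-name
VBoxManage sharedfolder add my-machine-name --name my-usb-drive --hostpath "/Volumes/my-usb-drive" --automount
docker-machine start my-machine-name
docker-machine ssh my-machine-name "sudo mkdir -p /Volumes/my-usb-drive && sudo mount -t vboxsf my-usb-drive /Volumes/my-usb-drive"
The key point is that the share must exist inside the VM, because the container can only bind-mount paths that the Docker host (here, the VM) can see.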
As per @pgayvallet's comment on GitHub:
As the daemon runs inside a VM in Docker Desktop, it is not possible to actually share a mac host device with the container inside the VM, and this will most definitely never be possible.

How to migrate old Google Compute Engine disks?

I am using Google Compute Engine in Europe and the maintenance window just hit us. The "automatic migration" didn't work, so all of our servers are offline. During the recovery from backup, we found a few files missing.
I have a persistent boot disk created from the debian-7-wheezy-v20130617 image with data, which I am trying to access.
I came up with 2 possible solutions to access the data:
Create a new VM with the old boot disk. Sounds easy, but Google changed something and the VM won't boot.
Create a new VM from a new image and attach the old boot disk. Sounds easy, but the old disk is not recognized using the good old safe_format_and_mount.
Any ideas on how to access the data on the disk? The migration doc didn't really help; it seems to assume you still have the old VM with the old disk running.
As your disks were created before the migration to the current v1 API, you must upgrade the disk to use an embedded kernel before you can re-attach it to a new instance.
Finally figured out how to access the data on the old disks from a new VM:
1. Create a new VM with a current OS image.
2. In addition, attach the old boot disk as read-only.
3. In the VM, check the attached disks with ls -la /dev/sd*; "sda" is the boot disk, the others are the attached ones.
brw-rw---T 1 root disk 8, 0 Jan 22 11:18 /dev/sda
brw-rw---T 1 root disk 8, 1 Jan 22 11:18 /dev/sda1
brw-rw---T 1 root disk 8, 16 Jan 22 11:18 /dev/sdb
brw-rw---T 1 root disk 8, 17 Jan 22 11:18 /dev/sdb1
brw-rw---T 1 root disk 8, 32 Jan 22 11:49 /dev/sdc
brw-rw---T 1 root disk 8, 33 Jan 22 11:49 /dev/sdc1
4. Create a mount point with mkdir /mnt/disk_b and mount the disk partition with mount /dev/sdb1 /mnt/disk_b.
mount: block device /dev/sdb1 is write-protected, mounting read-only
5. Check your data: ls -la /mnt/disk_b
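With current tooling, steps 1 and 2 would look roughly like this (a sketch with illustrative instance and disk names; the gcloud CLI postdates the original post):
gcloud compute instances create recovery-vm --zone europe-west1-b
gcloud compute instances attach-disk recovery-vm --disk old-boot-disk --mode ro --zone europe-west1-b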
