I have some Python code (using PyKCS11) that reads data from an eID smart card. The card is connected to the PC via a USB smart card reader. I'm trying to move this entire setup to Docker on a Windows 10 host, using the python:slim-buster image. I was able to install all the necessary drivers and other software in the image. The problem, however, is with the USB device itself: the Docker container is unable to fetch the information, and I assume that's because Docker cannot detect the USB smart card reader. The same Python code works fine when executed outside the Docker container.
I've tried running the container with docker run -it --privileged MY_IMAGE_ID, but I hit the same issue.
Is it even possible to achieve what I'm trying to do? If so, what am I doing wrong?
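For context, the failure roughly boils down to this stripped-down PyKCS11 sketch (the PKCS#11 module path below is only an example; I load whichever module the eID middleware provides):

from PyKCS11 import PyKCS11Lib

pkcs11 = PyKCS11Lib()
# Example path; the actual PKCS#11 module depends on the eID middleware in use
pkcs11.load("/usr/lib/libbeidpkcs11.so")

# On the host this returns at least one slot while the reader is plugged in;
# inside the container the reader apparently never shows up
slots = pkcs11.getSlotList()
print("Slots found:", slots)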
Related
I have been assigned a project to run an OpenCV script in Docker that uses a camera for real-time object detection. I have seen multiple ways of working with Docker, but I am unable to use the camera inside a Docker container.
I am accessing the camera with cv2.VideoCapture(0) and displaying frames with cv2.imshow("frame", frame).
I haven't created a Dockerfile; I have only run an Ubuntu instance using docker run -it ubuntu.
Please suggest a method to use the camera inside a Docker container.
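For reference, a minimal version of the check I'd like to get working inside the container (assuming the webcam would show up as /dev/video0 / index 0):

import os
import cv2

# The camera has to be visible inside the container as a V4L2 device node
print("/dev/video0 present:", os.path.exists("/dev/video0"))

cap = cv2.VideoCapture(0)
print("Camera opened:", cap.isOpened())  # False when no device is available
cap.release()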
I have a desktop with Windows 10 and an Elgato capture card. I am using OpenCV to capture the frames of the video for processing. So far, everything works perfectly fine:
import numpy as np
import cv2

cap = cv2.VideoCapture(0)

while True:
    # Capture frame-by-frame
    ret, frame = cap.read()

    # Our operations on the frame come here
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Display the resulting frame
    cv2.imshow('frame', gray)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()
The next step I want to take is to utilize the Remote Development extension. This extension works perfectly fine in VS Code for my other Python projects, but this is the first project I am writing that uses a hardware device, and my Linux container cannot access the hardware the host can. I searched for a solution, but all I found was the --device parameter for the docker command, with examples mapping one *nix device path to another *nix device path.
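For reference, the examples I found look roughly like this, assuming a Linux host where the capture card shows up as /dev/video0 (my-image is just a placeholder):

docker run -it --device /dev/video0:/dev/video0 my-image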
I did come across a post from the Docker Desktop team, over two years old, saying that you cannot access hardware on a Windows host from a Linux container. I'm not sure if that is still the case, or whether the Remote - Containers extension has a way to access devices; there is some magic in that the extension installs a vscode-server in the container, so maybe that allows hardware access?
According to Microsoft's official documentation, you can't share a device from a Windows host to a Linux container. You can, however, share a device with a Windows container, but I guess that isn't what you want.
What you can do is use docker-machine to share your devices: docker-machine runs the containers inside a full virtual machine such as VirtualBox instead of Docker Desktop's own engine, and VirtualBox can pass USB devices through to the guest.
Once you have docker-machine installed, you can simply execute the command
docker-machine create --driver virtualbox <A name for the machine>
Then open VirtualBox, which should now be among your programs, and attach the USB device to the new machine manually.
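After that, point your shell at the new machine so subsequent docker commands run inside the VirtualBox VM (assuming you named the machine dev; my-image is a placeholder):

docker-machine env dev | Invoke-Expression
docker run -it --privileged my-image

(In a bash shell, use eval $(docker-machine env dev) instead of piping to Invoke-Expression.)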
So I've recently gotten hold of an ESP8266 chip with a micro-USB port.
I've been trying to program it with the Arduino IDE, but I need to flash it first.
So far I have tried this tutorial here, but when I got to the stage of connecting with PuTTY it would not connect and gave me an error message. I tried running PuTTY as root, which was successful, but then I could not type anything in the console.
I have also tried using the serial monitor in the Arduino IDE, which also only worked as root.
I'm currently using Linux Mint 18.1 on this computer.
Any help is greatly appreciated.
So I was able to flash the chip with PyFlasher, which you can get here.
To be able to communicate with the device without root, I added myself to the group that can access serial devices (run the command as root or with sudo, then log out and back in for it to take effect):
usermod -a -G dialout MY_USER_NAME
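A quick way to confirm the group change worked after logging back in is to open the port with pyserial (the port name and baud rate are assumptions for a typical ESP8266 board on Linux):

import serial  # pip install pyserial

# /dev/ttyUSB0 at 115200 baud is typical for an ESP8266 dev board; adjust as needed
with serial.Serial("/dev/ttyUSB0", 115200, timeout=1) as ser:
    print("Opened", ser.name)  # raises an exception if the port is missing or not permitted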
Thanks
I wanted a separate virtualized OS environment (preferably Windows, but Linux is also welcome) that runs on very little RAM, to run a bot application.
I have tried Hyper-V (with differencing disks), VMware (with linked/instant clones), VirtualBox, and QEMU, but they all need a full OS installation, which takes up a lot of space.
Basically I just need many similar environments (close to 100) without a big HDD footprint, and I run all the apps from a local network folder.
(Similar to multiple VMs running under one VHD, but I don't want to use that much HDD space.)
I have tried booting one customised Lubuntu live CD and a WinPE live CD (Gandalf's WinPE 7) on multiple Hyper-V VMs. They boot just fine, but Gandalf's WinPE is not a full Windows and needs a lot of RAM, while on the Linux side I can't run my Windows script + app well under Wine, even though Linux memory management is much better and I could use a much smaller distro like Damn Small Linux if need be.
I checked Microsoft's App-V, but it only virtualises the app; it doesn't set up a new standalone environment. I need a new environment with its own mouse pointer, using very little RAM, preferably just enough to run the bot and the app.
Thank you.
I have tried FreeBSD jails, LXC, and LXD, but I was unable to make them work the way I wanted (one PC with multiple users on a minimal footprint).
However, I'm excited to say I have more or less found the solution and would like to share it.
For Windows host machine + Linux guest
Enable Hyper-V in Windows (if supported) or download VirtualBox
Install Docker for Windows
Install RealVNC (or any other VNC client)
Download (pull) or create any Linux Docker image with a desktop environment + VNC (optional: wine, winetricks, and playonlinux for running Windows apps, plus cifs-utils for mounting an SMB network share)
In PowerShell, deploy multiple containers from the same image and assign each one a different VNC port, for example:
For VNC + Samba network sharing + vncpassword
docker run -it --user 0 -d -p 5900:5900 -e VNC_PW=passwd --privileged --cap-add SYS_ADMIN --cap-add DAC_READ_SEARCH --security-opt seccomp=unconfined ubuntu
For VNC only, without a VNC password (depends on the container image)
docker run -d -p 5900:5900 abrahamb/lubuntu-vnc
docker run -d -p 5901:5900 abrahamb/lubuntu-vnc
docker run -d -p 5902:5900 abrahamb/lubuntu-vnc
etc
Open RealVNC and set up connections to these addresses, for example:
localhost:5900
localhost:5901
localhost:5902
etc
Each port leads to a separate containerised desktop.
That way, you have one base image for deploying multiple containers (like one computer with multiple users running at the same time) while only requiring minimal RAM and disk space.
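If you'd rather script the deployment than type each docker run by hand, the Docker SDK for Python (pip install docker) can start the containers in a loop; this is just a sketch using the same image and port scheme as above:

import docker

client = docker.from_env()

# Start 5 desktops, mapping host ports 5900-5904 to each container's VNC port 5900
for i in range(5):
    client.containers.run(
        "abrahamb/lubuntu-vnc",
        detach=True,
        ports={"5900/tcp": 5900 + i},
        name=f"desktop-{i}",  # optional, just makes docker ps/stop easier
    )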
Another way is to boot a base live ISO in multiple Hyper-V VMs. However, that is RAM-intensive and only scales to a handful of separate environments.
Further info + findings:
Docker is actually quite similar to LXC, LXD, and FreeBSD jails, since they are all container technologies. I believe that, if I tried hard enough, I could build a similar setup in LXD; FreeBSD jails might be a good alternative too.
However, I didn't try further, since I couldn't find enough information about setting up jails. I couldn't find any YouTube video that explains the setup, only some articles/blogs, and it was too frustrating because I don't have enough time for more research.
LXD/LXC can be configured to virtualise a desktop, but that's not quite what I'm looking for, since it would mean dual-booting or running an Ubuntu VM.
Docker has only recently added Windows containers, and the base images are GUI-less. On the Linux side, however, there are quite a few images available that come configured with a bare-minimum desktop environment.
Also, with Docker I don't need a VM running Ubuntu/FreeBSD to set up LXD/LXC/jails, or to dual-boot Linux/FreeBSD. Another plus: Docker is cross-platform (it can be used on Windows/Linux/macOS).
tl;dr: Docker is awesome.
Suppose we have a Linux OS that supports the latest version of Docker installed on two identical machines, and we build a container image based on this OS. We can assume this image will run on either machine. We now put the image onto a USB drive and plug the drive into the other identical machine.
Now, the hard part: is it possible to use that image on the USB drive to run the container from the USB drive itself while it is plugged into the machine?
I'm trying to save and/or minimize the storage used on the host OS by utilizing the space on the USB drive as much as possible.
If this is possible, how would I go about setting up a demo case?
I read this question as "how do I persist Docker data on a USB device?".
On your machines, mount the USB device at /var/lib/docker (Docker's data directory) and then restart the Docker service.
With this solution, however, all containers have to be stopped before you unplug the USB device; otherwise data will be lost.
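An alternative to mounting over the default directory is to point the Docker daemon's data-root at the USB mount point in /etc/docker/daemon.json (the mount path below is an example) and then restart the service:

{
    "data-root": "/mnt/usb/docker"
}

sudo systemctl restart docker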