Can we access the camera in Docker using OpenCV? - python-3.x

I have been assigned a project to use Docker and run an OpenCV script in which I have to use the camera for real-time object detection. I have seen multiple methods of working with Docker, but I am unable to use the camera inside a Docker container. Can anyone suggest a proper method for using the camera inside a Docker container?
I am using cv2.VideoCapture(0) to access the camera and cv2.imshow("frame", frame) to display the frames.
I haven't created a Dockerfile; I have only run an Ubuntu instance using docker run -it ubuntu.
Please suggest a method to use the camera inside a Docker container.
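For reference, a minimal sketch of the device-passthrough approach that the related answers below touch on, assuming a Linux host where the webcam shows up as /dev/video0 (the device path and package setup are assumptions, and cv2.imshow additionally needs an X display forwarded into the container):

docker run -it --device=/dev/video0:/dev/video0 ubuntu
# inside the container, after installing Python 3 and OpenCV:
python3 -c "import cv2; cap = cv2.VideoCapture(0); print(cap.isOpened())"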

Related

USB smart card reader from a Docker container on a Windows host

I have some Python code (with PyKCS11) that reads data from an eID smart card. The eID card is connected to the PC via a USB smart card reader. I'm trying to move this entire setup to Docker on a Windows 10 host. The image used is python:slim-buster. I was able to install all the necessary drivers + other software on the image. The problem, however, is with the USB device itself: the Docker container is unable to fetch the information. I assume it's because Docker is unable to detect the USB smart card reader. The same Python code, when executed outside the Docker container, works fine.
I've tried running Docker with docker run -it --privileged MY_IMAGE_ID, but I ran into the same issue.
Is it even possible to achieve what I'm trying to do? If so, what am I doing wrong?

How to create a Docker container in Windows to run in an Ubuntu VM

I have three small Spring Boot microservices and a plan. I have to say that I develop in Eclipse under Windows 10 Home.
My plan is to build a Docker container for each one and run them in an Ubuntu VM on my Windows PC, so that I can use the containers on a real Linux server in the future.
Does this work? What do I need? Is there a Docker for Windows that builds containers for Linux? How do I deploy the containers to the VM? Do I have to push them to Docker Hub first? Can I access the containers from a Windows browser via some kind of port forwarding?
Thank you for your help; every hint is welcome.
You can use Docker Desktop for Windows with WSL 2 running an Ubuntu distro. It's the best setup for developing Linux containers, because of the excellent interoperability:
Both OSes run side by side, sharing the same Docker environment (images, containers, Compose sets, etc.). You can manage and configure it with Docker tools on either OS, switching back and forth easily.
Both OSes share the same file system, so you can edit config files with your favorite Windows editor and they are equally accessible from Linux.
Both OSes share the same network, so you can access services and APIs from one to the other via port forwarding (using a browser in Windows and curl in Linux to reach the same resources), as sketched at the end of this answer.
The close interoperability means there is no need to deploy across systems, since you have only one shared environment.
Since you develop in one place locally, there is no need to distribute images to remote repositories.
As a bonus, Docker Desktop ships with a fully working single-node Kubernetes cluster, providing the same shared environment.
Go for it then, unless your machine has limitations that rule out WSL.
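As a sketch of the shared-network point mentioned above, assuming one of the Spring Boot services is packaged as an image and listens on port 8080 (the image name is a placeholder):

docker run -d -p 8080:8080 my-springboot-service
# reachable from a Windows browser at http://localhost:8080
# and from the WSL Ubuntu shell with:
curl http://localhost:8080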

How can I access my device from a Windows host in a Docker Linux container?

I have a desktop with Windows 10 and an Elgato capture card. I am using OpenCV to capture the frames of the video for processing. So far, everything works perfectly fine:
import numpy as np
import cv2
cap = cv2.VideoCapture(0)
while True:
    # Capture frame-by-frame
    ret, frame = cap.read()
    # Our operations on the frame come here
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Display the resulting frame
    cv2.imshow('frame', gray)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything is done, release the capture
cap.release()
cv2.destroyAllWindows()
The next step I want to take is to utilize the Remote Development extensions. This extension, with VS Code, works perfectly fine for my other Python projects. This is the first project I am writing that utilizes a hardware device, and my Linux container cannot access the hardware that the host has access to. I attempted to search for a solution, but all I have found is the --device parameter for my docker command, with the examples mapping one *nix device path to another *nix device path.
I did come across a post from the Docker Desktop team that is over two (2) years old saying that you cannot access hardware on a Windows host from a Linux container. I'm not sure if that is still the case, and I'm not sure if the Remote - Containers extension has a way to access the devices... there is some magic going on in that the extension installs a vscode-server in the container, so I'm not sure if that would allow hardware access?
According to Microsoft's official documentation, you can't share a device from a Windows host with a Linux container. You can, however, share a device with a Windows container, but I guess that isn't what you want.
You can, however, use docker-machine to share your devices. This works because docker-machine runs the container's processes in a virtual machine such as VirtualBox, rather than in Docker Desktop's own engine.
Once you have docker-machine installed, you can simply execute the command
docker-machine create --driver virtualbox <A name for the machine>
Then open VirtualBox, which should now appear among your programs, and add the device manually.
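A rough sketch of that workflow (the machine name and image are placeholders, and the in-container device path depends on how the VM exposes the capture card):

docker-machine create --driver virtualbox capture-vm
# attach the capture card to "capture-vm" in the VirtualBox USB settings, then
# point the docker CLI at the new machine (apply the printed environment variables)
docker-machine env capture-vm
# and pass the device into the container:
docker run -it --device=/dev/video0 my-opencv-image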

Is there any OS virtualization without having to install a full OS (multiple similar VMs needed)?

I want to have separate virtualized OS environments (preferably Windows, but Linux is also welcome) running with very little RAM to run a bot application.
I have tried Hyper-V (with differencing disks), VMware (with linked/instant clones), VirtualBox, and QEMU, but so far they all need a full OS installation, which can take up a lot of space.
Basically, I just need multiple similar environments (close to 100) without using much HDD space, and I run all the apps from a local network folder.
(Similar to multiple VMs running under one VHD, but I don't want to take up so much HDD space.)
I have tried booting one customised Lubuntu live CD and a WinPE live CD (Gandalf's WinPE 7) on multiple Hyper-V VMs. They boot just fine, but Gandalf's WinPE is not a full Windows and requires a lot of RAM, while on the Linux side I can't run my Windows script + app well under Wine, even though Linux memory management is much better and I could still use a much smaller distro like Damn Small Linux if need be.
I checked Microsoft's App-V, but it just virtualizes the app rather than setting up a new standalone environment. I need new environments, each with its own mouse pointer, but using very little RAM, preferably just enough to run the bot and the app.
Thank you.
I have tried FreeBSD jails, LXC, and LXD, but I was unable to make them work the way I want (one PC with multiple users, but with a minimal footprint).
However, I am excited that I have found something close to a solution and would like to share it.
For Windows host machine + Linux guest
Enable Hyper-V in Windows (if supported) or download VirtualBox
Install Docker for Windows
Install RealVNC (or any other VNC client)
Download (pull) or create any Linux Docker image with a desktop environment + VNC (optional: wine, winetricks, playonlinux for running Windows apps, plus cifs-utils for mounting SMB network shares)
In PowerShell, deploy multiple containers using the same image and assign each a different VNC port, for example:
For VNC + Samba network sharing + a VNC password:
docker run -it --user 0 -d -p 5900:5900 -e VNC_PW=passwd --privileged --cap-add SYS_ADMIN --cap-add DAC_READ_SEARCH --security-opt seccomp=unconfined ubuntu
For only VNC without vncpassword (depends on container)
docker run -d -p 5900:5900 abrahamb/lubuntu-vnc
docker run -d -p 5901:5900 abrahamb/lubuntu-vnc
docker run -d -p 5902:5900 abrahamb/lubuntu-vnc
etc
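Since close to 100 environments are wanted, the per-port commands above can also be generated in a loop; a rough PowerShell sketch, assuming the same image and starting port (the count is arbitrary):

for ($i = 0; $i -lt 100; $i++) {
    docker run -d -p "$(5900 + $i):5900" abrahamb/lubuntu-vnc
}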
Open RealVNC and set up connections to these addresses, for example:
localhost:5900
localhost:5901
localhost:5902
etc
Each port leads to a separate containerised desktop.
That way, you will have one base image for deploying multiple containers (like having one computer with multiple users running at the same time), requiring only minimal RAM and disk space.
Another way is to boot a base live ISO in multiple Hyper-V VMs. However, those are RAM intensive and can only provide a few separate environments.
Further info+findings:
Docker is actually quite similar to LXC, LXD, and FreeBSD jails, since they are all container technologies. I believe that if I try hard enough I can build a similar setup in LXD. FreeBSD jails might be a good alternative too.
However, I didn't try further, since I couldn't find enough information about setting up jails. I couldn't find any YouTube video that explains the setup, only some articles/blogs, which was still too frustrating since I don't have enough time to research further.
LXD/LXC can be configured to virtualize a desktop, but that is not quite what I am looking for, since it would mean I have to dual-boot or run an Ubuntu VM.
Docker only recently implemented Windows containers, and the base image is GUI-less. On the Linux side, however, there are quite a few available images that come configured with a bare-minimum desktop environment.
Also, using Docker, I don't need a VM running Ubuntu/FreeBSD to set up LXD/LXC/jails, nor do I need to dual-boot Linux/FreeBSD. Another plus: Docker is cross-platform (it can be used on Windows/Linux/macOS).
tl;dr: Docker is awesome.

How does Docker communicate with a Windows client

I understand that Docker runs on the Linux kernel.
Let's say I deploy an application (sorry, I can't disclose the application for confidentiality reasons) on a CentOS Docker image. The application is known to be compatible with both Windows and Linux.
So now, I want to run some program/script on that deployed image, but the client that I am using is Windows. Here are the two questions that I have:
Is it even possible to use a Windows machine to execute the programs/scripts in the remote Docker image?
If the answer to question 1 is yes, then how are the system calls in Windows mapped to the equivalent system calls in the Linux environment of Docker?
Is it even possible to use a Windows machine to execute the programs/scripts in the remote Docker image?
No: you would need to run the Docker image in a Linux VM running on your Windows machine.
The system calls would be made to the VM's Linux kernel.
A Docker image for Windows (Server 2016) would be built specifically for Windows.
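For illustration, with Docker Desktop (which manages such a Linux VM for you) the Windows docker CLI can still drive the Linux container, so a script inside it can be run from the Windows side; the container and script names here are placeholders:

docker exec -it my-centos-container ./run-script.sh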
