Run a non-headless Chrome in a Docker container - node.js

Running Chrome on Docker machines is normally only possible when Chrome is headless. Unfortunately, headless Chrome can't ignore certificate errors, which prevents my tests from running.
I'm trying to run an already-working Node.js e2e test environment in a Docker container. Most of the tests pass, but when a site requires a certificate it can't be accessed. In non-headless Chrome I can simply ignore the certificate error.
The base Docker image installed on the container is node:8, and Chrome is configured with:
{
  browserName: 'chrome',
  chromeOptions: {
    binary: puppeteer.executablePath(),
    args: [
      '--lang=en-US', '--headless', '--no-sandbox', '--ignore-certificate-errors'
    ]
  }
}
The expected result is to either run Chrome with a GUI in a Docker container or somehow ignore the server certificate errors in headless Chrome.

Use Xvfb. This will allow you to use Chrome with a GUI.
The idea is simple: you use a virtual display. Configuring multiple desktops/displays on a standalone VM takes some effort; with Docker it is simple.
Some examples:
http://www.mattzeunert.com/2018/07/21/running-headful-chrome-on-ubuntu-server.html
https://medium.com/dot-debug/running-chrome-in-a-docker-container-a55e7f4da4a8
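For example, something along these lines should work with Puppeteer (not tested here; it assumes Xvfb is already running in the container on display :99, e.g. started with Xvfb :99 -screen 0 1280x720x24 &):

const puppeteer = require('puppeteer');

(async () => {
  // Point Chrome at the virtual display created by Xvfb.
  process.env.DISPLAY = ':99';

  const browser = await puppeteer.launch({
    headless: false,                          // real (headful) Chrome, rendered on the virtual display
    ignoreHTTPSErrors: true,                  // accept invalid / self-signed certificates
    args: ['--no-sandbox', '--ignore-certificate-errors'],
  });

  const page = await browser.newPage();
  await page.goto('https://self-signed.badssl.com/'); // example site with a self-signed certificate
  console.log(await page.title());
  await browser.close();
})();

With a GUI available, the --headless flag from the original capabilities can simply be dropped.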

Another way (described here by Nils De Moor) is to let the docker container connect to your local machine's X server.
Say your ip address is 192.168.0.2.
You can set up a tunnel to your X display on e.g. port 6010 (which corresponds to display 192.168.0.2:10) with socat. For security, the range argument asks socat to only accept connections from your machine's IP address.
socat TCP-LISTEN:6010,reuseaddr,fork,range=192.168.0.2/32 UNIX-CLIENT:"$DISPLAY" &
Now you can set the DISPLAY variable inside the docker container with -e when you start it.
docker run -e DISPLAY=192.168.0.2:10 gns3/xeyes
In the case of Chrome there are some more complications, described in the linked post, because Chrome requires extra privileges (e.g. add --privileged).

Related

Docker Container Attached to Network, Network Inspect Shows No Containers

I am simply trying to connect a ROS2 node from my Ubuntu 22.04 VM on my laptop to another ROS2 node on another machine running Ubuntu 18.04. Ideally, I would only have Docker on the second machine (the first machine runs a trivial node that will never change), but I have been trying with a separate container on each.
Here is what I am doing and what I am seeing when I inspect:
(ssh into machine 2 from VM 1.)
A: start up network from machine 2.
sudo docker network create -d overlay --attachable my-attachable-ovrlay
B: start up container 1.
sudo docker run -it --rm test1
C: successfully attach container 1 to the network.
sudo docker network connect dwgyau64pvpenxoj2edu4liqu bold_murdock
D: Confirm the container lists network.
sudo docker inspect -f '{{range $key, $value := .NetworkSettings.Networks}}{{$key}} {{end}}' bold_murdock
prints:
bridge my-attachable-ovrlay
E: Check the network to see container.
sudo docker network inspect my-attachable-ovrlay
prints (among other things):
"Containers": null,
I am new to Docker AND networking, so I could be missing something huge, but I have tried all of the standard suggestions I found online, including disabling my firewall, opening a ton of ports using ufw allow on both machines, making sure the nodes are active, etc.
I tried joining the network from machine 2 and that works, and the container is displayed when using network inspect. But when I do that, machine 1 simply refuses to connect to the network.
F: In this situation it gives an error.
sudo docker network connect dwgyau64pvpenxoj2edu4liqu objective_mendel
prints:
Error response from daemon: attaching to network failed, make sure your network options are correct and check manager logs: context deadline exceeded
Also, before trying any Docker networking, I tried plainly pinging from VM 1 to machine 2 and that works, both ways. I tried using netcat to open an old-timey chat window on port 1234 (a random port, as per this resource) and that works one way only. I can communicate both ways, but only when machine 1 sends the initial netcat request and machine 2 listens. When machine 2 sends the request and machine 1 listens, nothing happens.
I have been struggling to get this to work for 3 weeks now. I know it’s something stupid, I just know it. Any advice would be incredibly appreciated. Please explain like I know nothing about networking, because I just about do.
EDIT: I converted images (still hyperlinked) into code blocks.
If both PCs are on the same LAN, you could skip the whole network configuration entirely and use ROS2 auto-discovery.
E.g.
PC1:
docker run -it --rm --net=host -v /dev/shm:/dev/shm osrf/ros:foxy-desktop
export ROS_DOMAIN_ID=1
ros2 run demo_nodes_py talker
PC2:
docker run -it --rm --net=host -v /dev/shm:/dev/shm osrf/ros:foxy-desktop
export ROS_DOMAIN_ID=1
ros2 run demo_nodes_py listener
If the PCs are not on the same network, I usually use ZeroTier to create a virtual LAN between PC1, PC2, and PC(N), then repeat the above example.
The issue was that the router's date was set to 1969. When we updated the time by connecting to the internet for 15 seconds and then disconnected, it started working.

Docker images and containers change when Docker Desktop is running on Linux

When docker desktop is running on linux, I see a different set of containers and images compared to when it is not running. That is, when I run docker images in the terminal, the output depends on whether docker desktop is running or not. After I 'quit docker desktop', the original behavior is restored.
I note the following changes:
docker desktop is off      | docker desktop is running
images 'a, b, c'           | images 'd, e, f'
containers 'aa, bb, cc'    | containers 'dd, ee, ff'
non-colored CLI output     | pretty colored CLI output
My suspicion is that docker desktop kills a running docker service and starts a fresh one whose images and containers are located elsewhere on my filesystem. Then after quitting, the original service is restored. I'd like this behavior to change, such that the images and containers I'm working on are always the same, regardless of whether docker desktop is running or not.
I'm looking for some feedback on how to start debugging this.
Docker only runs natively on Linux. Docker Desktop is the "hack" that allows running Docker on other platforms (macOS, Windows, etc.). Docker Desktop actually starts a Linux VM and runs Docker inside that VM. It then takes care of mapping ports and volumes so that it appears to the end user that Docker is "running directly on the host".
The beauty of running Docker on Linux is that it runs natively and you don't need extra hacks and tricks. So why you would use Docker Desktop on Ubuntu... beats me :) However, the explanation of why you see different results is that you are talking to two different Docker daemons: one on the host and one inside the VM. You can check which daemon your CLI is currently pointed at with docker context ls and switch between them with docker context use <name>.

Puppeteer: Chrome Remote Launch

Is there a way to launch Chrome in non-headless mode from a Docker container?
I have a Node application inside a Docker container and a headless Chrome container that I can connect to. All works fine so far. To demonstrate what Puppeteer is doing, I want to launch Chrome in non-headless mode on the host system. Is this possible?
You can start Chromium manually on your host machine and then connect to its WebSocket port using puppeteer.connect() - https://pptr.dev/#?product=Puppeteer&version=v1.8.0&show=api-puppeteerconnectoptions . Don't forget to expose the WebSocket port to the container.
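Something along these lines should work from inside the container (not tested here; it assumes Chrome was started on the host with --remote-debugging-port=9222 and that the host is reachable from the container, e.g. as host.docker.internal on Docker Desktop. Since Chrome binds the debugging port to localhost by default, you may also need to forward it, for example with socat):

const http = require('http');
const puppeteer = require('puppeteer');

const HOST = 'host.docker.internal'; // hypothetical; use the host machine's reachable address
const PORT = 9222;

// Ask Chrome's DevTools HTTP endpoint for the browser-level WebSocket URL.
function getBrowserWsEndpoint() {
  return new Promise((resolve, reject) => {
    http.get({ host: HOST, port: PORT, path: '/json/version' }, (res) => {
      let body = '';
      res.on('data', (chunk) => (body += chunk));
      res.on('end', () => {
        // The reported URL usually points at 127.0.0.1, so swap in the reachable host.
        const wsUrl = JSON.parse(body).webSocketDebuggerUrl;
        resolve(wsUrl.replace('127.0.0.1', HOST).replace('localhost', HOST));
      });
    }).on('error', reject);
  });
}

(async () => {
  const browser = await puppeteer.connect({ browserWSEndpoint: await getBrowserWsEndpoint() });
  const page = await browser.newPage();
  await page.goto('https://example.com');
  await browser.disconnect(); // leave the host's Chrome running
})();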
We also experimented with running Puppeteer in non-headless mode inside the Docker container using Xvfb (X virtual framebuffer) and noVNC (https://github.com/novnc/noVNC) to display what's on the screen on an HTML page served from the container. But that's not ideal for debugging.
If you just want to see which pages are opened, along with their screenshots, you could use live-view https://github.com/apifytech/apify-js#puppeteer-live-view, which we built exactly for this use case.

Docker container won't display in browser

I'm using Windows 10 Home and Docker Toolbox. I downloaded a webapp image then ran it with:
docker run -p 8888:80 aspnetapp
and tried to access it through the browser with
localhost:8888
but it wouldn't display; the browser said "can't reach this page".
I also tried the suggestion to get the IP with docker-machine ip default and use that instead of localhost:
192.168.99.100:8888
Still doesn't work, although others say these things worked for them. Anyone have any other ideas?

How to enable the Docker Remote API on Windows

I am trying to use the Docker Remote API on a Windows 10 host machine. I am using Chrome's Postman extension to see if I can get results from the docker remote api's endpoints. Here are the endpoints that I've tried:
GET http://192.168.99.100:4243/images/json
GET http://192.168.99.100:2376/images/json
Both returned Connection to server 192.168.99.100 failed (The server is not responding)
After a few searches I found out that the Docker Remote API is not enabled by default on Windows. Most of the guides are for Ubuntu but I have found this particular one for Windows.
These are the steps that I performed on my machine
docker-machine ssh
cd /var/lib/boot2docker
sudo vi profile
Change DOCKER_HOST='-H tcp://0.0.0.0:2376' to DOCKER_HOST='-H tcp://0.0.0.0:2375'
change DOCKER_TLS=auto to DOCKER_TLS=no
export DOCKER_HOST='-H tcp://0.0.0.0:2375'
export DOCKER_TLS_VERIFY=0
env | grep DOCKER
docker-machine restart
docker-machine env
docker-machine regenerate-certs
After performing the steps above, I did try again the endpoints on Postman but I still get the same result.
Can you perhaps give a little help if I have missed a step? Or am I on track?
Also, a couple of side questions:
Is the Docker Remote API port 2375 for Windows and 4243 for Linux?
Is it DOCKER_HOST for Windows and DOCKER_OPTS for Linux?
Switch your Docker to Windows containers.
Go to C:\ProgramData\Docker\config.
In the daemon.json file, add "hosts": ["tcp://0.0.0.0:2376", "npipe://"].
Restart Docker.
Run: docker -H tcp://0.0.0.0:2376 ps
The Remote API is now enabled by default on Windows (see ticket here).
It is reachable at http://localhost:2375 indeed (tested it).
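To verify from Node instead of Postman, something like this should do (assuming the daemon is listening on tcp://localhost:2375 without TLS):

const http = require('http');

// List images via the Docker Remote API (the same GET /images/json tried in Postman).
http.get('http://localhost:2375/images/json', (res) => {
  let body = '';
  res.on('data', (chunk) => (body += chunk));
  res.on('end', () => console.log(JSON.parse(body)));
}).on('error', (err) => console.error('Cannot reach the Docker API:', err.message));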
I faced the same issue and found a quick solution for this. Just open the Docker settings and enable the "Expose daemon on TCP..." checkbox. Docker will restart automatically and the problem should be solved. Please find the image attached for reference.
Using Docker Desktop, go to Settings and check "Expose daemon on tcp://localhost:2375 without TLS".

Resources