Tor Browser in Docker: how to use it with X or Wayland

I need all Tor Browser data to be deleted.
Is it possible to create a Docker image with a normal browser behind a Tor proxy and use it through the ssh -X option?
Running it with --rm=true would automatically delete the container data and always start with the same configuration.
Is it possible to use this container in the cloud, for example on AWS, Azure, etc.?
Is it possible to mount the download directory on my host machine?

If you're on Linux or Mac you can do this.
See item 9 in Jess' blog post: Docker Containers on the Desktop:
# -v mounts the X11 socket, -e passes the display, --device shares the sound card
docker run -it \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  -e DISPLAY=unix$DISPLAY \
  --device /dev/snd \
  --name tor-browser \
  jess/tor-browser
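To address the --rm and download-directory parts of the question, here is a minimal sketch under the assumption that the image keeps downloads in /home/user/Downloads (the exact in-container path depends on the image):
# --rm discards the container, and with it the browser state, on exit;
# downloads still persist on the host through the bind mount
docker run -it --rm \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  -e DISPLAY=unix$DISPLAY \
  --device /dev/snd \
  -v "$HOME/tor-downloads":/home/user/Downloads \
  --name tor-browser \
  jess/tor-browser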

Related

Why do I need to install certificates for an external URL when installing gitlab?

I am confused.
For now, I just want to self-host GitLab on my local home network without exposing it to the internet. Is this possible? If so, can I do this without installing ca-certificates?
Why is GitLab forcing (?) me to expose my GitLab server to the internet?
Nothing else I've installed locally on my NAS/server requires CA certificates for me to connect to its web service: I can just go to xyz.456.abc.123:port in Chrome.
For example, in this article the public URL is referenced: https://www.cloudsavvyit.com/2234/how-to-set-up-a-personal-gitlab-server/
You don't need to install certificates to use GitLab and you do not have to have GitLab exposed to the internet to have TLS security.
You can also opt to not use TLS/SSL at all if you really want. In fact, GitLab does not use HTTPS by default.
Using Docker is probably the easiest way to demonstrate that it's possible:
mkdir -p /opt/gitlab
export GITLAB_HOME=/opt/gitlab
docker run --detach \
--hostname localhost \
--publish 443:443 --publish 80:80 --publish 22:22 \
--name gitlab \
--volume $GITLAB_HOME/config:/etc/gitlab \
--volume $GITLAB_HOME/logs:/var/log/gitlab \
--volume $GITLAB_HOME/data:/var/opt/gitlab \
-e GITLAB_OMNIBUS_CONFIG='external_url "http://localhost"' \
gitlab/gitlab-ee:latest
# give it 15 or 20 minutes to start up
curl http://localhost
You can replace http://localhost in the external_url configuration with the computer hostname you want to use for your local server or even an IP address.
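For instance, to serve GitLab over plain HTTP on a LAN address (192.168.1.50 is only an illustrative IP), the run command stays the same apart from the hostname and external_url:
docker run --detach \
--hostname 192.168.1.50 \
--publish 443:443 --publish 80:80 --publish 22:22 \
--name gitlab \
--volume $GITLAB_HOME/config:/etc/gitlab \
--volume $GITLAB_HOME/logs:/var/log/gitlab \
--volume $GITLAB_HOME/data:/var/opt/gitlab \
-e GITLAB_OMNIBUS_CONFIG='external_url "http://192.168.1.50"' \
gitlab/gitlab-ee:latest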

How to handle Sematext becoming detached from monitoring a restarted container

I am using Sematext to monitor a small composition of Docker containers, plus the Logsene feature to gather the web traffic logs from one container running a Node Express web application.
It all works fine until I update and restart the web server container to pull in a new code build. At this point, Sematext Logsene seems to get detached from the container, and I lose the HTTP log trail in the monitoring. I still see the Docker events, so it seems only the logs part is broken.
I am running Sematext "manually" (i.e. it's not in my Docker Compose) like this:
sudo docker run -d --name sematext-agent --restart=always -e SPM_TOKEN=$SPM_TOKEN \
-e LOGSENE_TOKEN=$LOGSENE_TOKEN -v /:/rootfs:ro -v /var/run/docker.sock:/var/run/docker.sock \
sematext/sematext-agent-docker
And I update my application simply like this:
docker-compose pull web && docker-compose up -d
where web is the web application service name (alongside database, memcached, etc.),
which recreates the web container and restarts it.
At this point Sematext stops forwarding HTTP logs.
To fix it I can restart Sematext agent like this:
docker restart sematext-agent
And the HTTP logs start arriving in their dashboard again.
So, I know I could just append the agent restart command to my release script, but I am wondering if there's a way to prevent it from becoming detached in the first place? I guess it's something to do with it monitoring the run files.
I have searched their documentation and FAQs, but not found anything specific about this effect.
I seem to have fixed it, but not in the way I'd expected.
While looking through the documentation I found that the sematext-agent-docker package with the Logsene integration built in has been deprecated and replaced by two separate packages.
"This image is deprecated.
Please use sematext/agent for monitoring and sematext/logagent for log collection."
https://hub.docker.com/r/sematext/sematext-agent-docker/
You now have to use both Logagent https://sematext.com/docs/logagent/installation-docker/ and a new Sematext Agent https://sematext.com/docs/agents/sematext-agent/containers/installation/
With these both installed, I did a quick test by pulling a new container image, and it seems that the logs still arrive in their web dashboard.
So perhaps the problem was specific to the previous package, and this new agent can "follow" the container rebuilds better somehow.
So my new commands are (just copied from the documentation, but I'm using env vars for the keys):
docker run -d --name st-logagent --restart=always \
-v /var/run/docker.sock:/var/run/docker.sock \
-e LOGS_TOKEN=$SEMATEXT_LOGS_TOKEN \
-e REGION=US \
sematext/logagent:latest
docker run -d --restart always --privileged -P --name st-agent \
-v /:/hostfs:ro \
-v /sys/:/hostfs/sys:ro \
-v /var/run/:/var/run/ \
-v /sys/kernel/debug:/sys/kernel/debug \
-v /etc/passwd:/etc/passwd:ro \
-v /etc/group:/etc/group:ro \
-e INFRA_TOKEN=$SEMATEXT_INFRA_TOKEN \
-e CONTAINER_TOKEN=$SEMATEXT_CONTAINER_TOKEN \
-e REGION=US \
sematext/agent:latest
Where:
CONTAINER_TOKEN == the old SPM_TOKEN
LOGS_TOKEN == the old LOGSENE_TOKEN
INFRA_TOKEN == new to me
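For reference, the env vars used in the commands above would be exported beforehand (the values are placeholders for your own tokens):
export SEMATEXT_LOGS_TOKEN="<logs app token>"
export SEMATEXT_INFRA_TOKEN="<infra app token>"
export SEMATEXT_CONTAINER_TOKEN="<container app token>"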
I will see if this works in the long run (not just the quick test).

Can two containers launch an X server on the same display on the same host?

I have created a Dockerfile that installs Xvfb and Firefox with all the needed dependencies, and I'm able to create a container with Firefox launched on DISPLAY=:1 of an X server.
When I try to launch another container, the second container is not able to launch an X server on DISPLAY=:1.
sudo docker logs docker_serv2
xvfb-run: error: Xvfb failed to start
No protocol specified
So I checked my processes with ps aux and I was surprised to see my X server listed on my host.
xxx 11343 1.9 0.6 240260 47620 ? Sl 08:41 0:12 Xvfb :1 -screen 0 1280x720x24 -shmem -listen tcp -nolisten tcp -auth /home/xxx/.Xauthority
xxx 11350 18.7 4.2 2238084 326600 ? Sl 08:41 2:07 /usr/lib/firefox/firefox
I use this command to create the X server and launch Firefox in both containers:
xvfb-run -n 1 -f ~/.Xauthority --server-args='-screen 0 1280x720x24 -shmem -listen tcp' firefox
I understand that Docker processes can be seen on the host since it is not a VM, but I do not understand why the second container is not able to launch an X server on DISPLAY=:1 too, as the two containers are not linked.
Aren't they isolated from the host system? I thought they would use their own minimal environment.
Here is my run.sh command:
docker run -d --rm \
--net=host \
-v /dev/uinput:/dev/uinput \
-v /dev/input:/dev/input \
-v /run/udev:/run/udev \
--name docker firefox
At first I thought --net=host could be the source of my problem, but it only affects the network configuration and I have the same issue without the option.
The other -v options are there because I'm also playing with some /dev/input devices; they are not relevant to this issue.
So, is it possible to launch two different containers, each running a separate X server on DISPLAY=:1?
Actually the issue came from the --net=host option.
I removed it from both run commands and now I can launch two containers, each with an X server on display :1.
So --net=host is less isolated than I thought: the container shares the host's network namespace, so both Xvfb instances were competing for the same display :1 sockets instead of each binding them in their own namespace.
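A minimal sketch of the adjusted run commands (container names are illustrative; the device and udev mounts from the original command are kept, --net=host is dropped):
docker run -d --rm \
-v /dev/uinput:/dev/uinput \
-v /dev/input:/dev/input \
-v /run/udev:/run/udev \
--name docker_serv1 firefox
docker run -d --rm \
-v /dev/uinput:/dev/uinput \
-v /dev/input:/dev/input \
-v /run/udev:/run/udev \
--name docker_serv2 firefox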

Is it possible to run Docker containers with devices as volumes?

Currently we can run Docker containers like docker run --device /dev/fuse $IMAGE.
But Kubernetes does not support host devices yet; see https://github.com/kubernetes/kubernetes/issues/5607.
Is it possible to mount devices like volumes? We have tried -v /dev/fuse:/dev/fuse, but the container didn't have permission to open that char device. Can we add more capabilities to do that?
We have tried docker run --cap-add=ALL -v /dev/fuse:/dev/fuse and it didn't work. I think --device or --privileged is needed for this scenario.
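For comparison, the two invocations that do give the container usable access to the device look roughly like this (my-fuse-image is a placeholder):
# preferred: --device exposes the character device with the proper device-cgroup permissions
docker run --device /dev/fuse my-fuse-image
# heavier-handed: --privileged grants access to all host devices, so a plain bind mount works too
docker run --privileged -v /dev/fuse:/dev/fuse my-fuse-image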

How to forward Eclipse in a Docker container through a Linux proxy host?

I have an Eclipse instance running on Ubuntu Linux in a Docker container. This container runs on a CentOS host with no physical display, and I would like to forward X11 from the Docker container to my laptop (running Windows) through the CentOS host.
The Docker container runs with:
docker run --name docker-eclipse -p 5000:5000/tcp -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix
While I can forward X11 from the host to my laptop with no problems, I'm not able to start Eclipse inside the container, because it dies with "Cannot open display:".
What I'd like is
laptop --> remote host --> docker container running eclipse
What is the best way to do that?
This might work (server is assumed to be the remote host running Docker, laptop is assumed to be the local host from which you want the GUI):
Connect to the server.
Mount the laptop's X11 socket on the server through sshfs: user@server:~$ sshfs laptop:/tmp/.X11-unix /tmp/.X11-unix
Start the container with something like: user@laptop:~$ ssh -X server docker run --name docker-eclipse -p 5000:5000/tcp -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix
I'm not sure this would work, and it does not feel like the cleanest way of doing it, but what you want to do is quite unusual (though it would be something really great!).
Comment with your feedback!
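Another approach that is often suggested for this laptop → host → container chain (untested here; the image name is a placeholder and it assumes the in-container user is root): let SSH handle the X11 forwarding to the CentOS host, then give the container the host's network namespace and X authority so it can reach the forwarded display.
# on the laptop: X11 forwarding gives the host a DISPLAY such as localhost:10.0
ssh -X user@centos-host
# on the host: reuse that DISPLAY in the container; --net=host lets it reach the
# forwarded TCP display, and the Xauthority cookie authorizes the connection
docker run --rm --net=host \
-e DISPLAY=$DISPLAY \
-v "$HOME/.Xauthority":/root/.Xauthority:ro \
docker-eclipse-image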
