Finding Firefox Default Profile Path in Docker Linux Container

I have created a Dockerfile for a Puppeteer project that installs Firefox in a Linux container. Firefox installs successfully and I mount a volume into the container, but I cannot find Firefox's default profile path inside it.
First, I build an image with "docker build -t newimage2 .", then I run a container with a volume mounted: "docker run -it -v vol1:/shared-volume --name newcont3 newimage2". The volume vol1 is mounted into the new container newcont3 and Firefox is installed in it, but I still cannot locate the default profile path. This is my Dockerfile:
FROM node:16.17.1
WORKDIR /usr/src/app
RUN apt-get update \
&& apt-get install -y wget gnupg fonts-ipafont-gothic fonts-freefont-ttf firefox-esr --no-install-recommends \
&& rm -rf /var/lib/apt/lists/*
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 5551
# Exec form
CMD ["npm", "start"]
# ENTRYPOINT ["npm", "start"]  (either ENTRYPOINT or CMD can be used here)
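For what it's worth, Firefox creates its profile directory (and the profiles.ini file that records which profile is the default) under ~/.mozilla/firefox only after it has run at least once, so a freshly built container has no profile path to find yet. A quick way to check inside the running container (a sketch, assuming the container user is root so the home directory is /root):
docker exec -it newcont3 bash
firefox-esr --headless about:blank &   # first launch creates the profile
sleep 5 && kill %1
cat ~/.mozilla/firefox/profiles.ini    # Path= entries list the profiles; Default marks the default
ls ~/.mozilla/firefox                  # e.g. a *.default-esr directory is the profile path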

Related

Trying to create dockerfile for selenium-node-firefox

I'm trying to create a Dockerfile for my Node application that uses Selenium WebDriver. I tried the code below; it creates a directory for the app and installs geckodriver and Firefox.
FROM node:12
WORKDIR /usr/app
COPY package*.json ./
RUN npm install
# Install geckodriver
RUN wget https://github.com/mozilla/geckodriver/releases/download/v0.24.0/geckodriver-v0.24.0-linux64.tar.gz
RUN tar -xvzf geckodriver-v0.24.0-linux64.tar.gz
RUN chmod +x geckodriver
RUN mv geckodriver /usr/local/bin/
# Install firefox
RUN wget "https://download.mozilla.org/?product=firefox-latest&os=linux&lang=pt-BR" -O firefox.tar.bz2
RUN tar -jxvf firefox.tar.bz2 -C /usr/local/bin/
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
The error I receive is SessionNotCreatedError: Expected browser binary location, but unable to find binary in default location, no 'moz:firefoxOptions.binary' capability provided, and no binary flag set on the command line. So geckodriver can't find the Firefox binary. I guess Firefox isn't being put on the system PATH; when I call firefox --version it is not found.
The index.js only does:
const { Builder } = require('selenium-webdriver');
(async () => {
  const driver = await new Builder().forBrowser('firefox').build();
  await driver.get('https://google.com');
  await driver.quit();
})();
I solved the problem with this Dockerfile:
FROM ubuntu
RUN apt-get update
RUN apt install nodejs -y
RUN apt install npm -y
RUN node -v
WORKDIR /usr/app
COPY package*.json ./
RUN npm install
RUN apt install wget -y
RUN wget https://github.com/mozilla/geckodriver/releases/download/v0.29.0/geckodriver-v0.29.0-linux64.tar.gz
RUN tar -xvzf geckodriver-v0.29.0-linux64.tar.gz
RUN chmod +x geckodriver
RUN mv geckodriver /usr/local/bin/
RUN apt install firefox -y
# ENV persists into the running container; a RUN export is lost when the layer's shell exits
ENV MOZ_HEADLESS=1
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
I've changed the way Firefox is installed and set MOZ_HEADLESS=1 (as an ENV instruction, since an export in a RUN step does not persist).
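For reference, the original Dockerfile likely failed because the Mozilla tarball extracts to a firefox/ directory, so tar ... -C /usr/local/bin/ produces /usr/local/bin/firefox/firefox, a path that is not itself on PATH. Extracting elsewhere and symlinking the binary would probably have fixed it too (a sketch, untested):
RUN tar -jxvf firefox.tar.bz2 -C /opt/ \
 && ln -s /opt/firefox/firefox /usr/local/bin/firefox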

Getting too many dangling images on docker-compose up

I want to integrate Python into a .NET Core image, as I need to execute Python scripts.
When I run docker-compose up with this Dockerfile, lots of dangling images are created.
[screenshot: docker images output listing many dangling <none>:<none> images]
Also, is there a proper way to integrate a Python interpreter? For example, I will receive a URL in the .NET Core container and then want to pass that URL to the Python container. How can I achieve this? I am new to Docker.
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-buster-slim AS base
RUN apt-get update && apt-get install -y --no-install-recommends wget
RUN apt-get update && apt-get install -y python3.7 \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
RUN mkdir -p tmp
WORKDIR /tmp
RUN wget https://github.com/projectdiscovery/subfinder/releases/download/v2.4.4/subfinder_2.4.4_linux_amd64.tar.gz
RUN tar -xvf subfinder_2.4.4_linux_amd64.tar.gz
RUN mv subfinder /usr/local/bin/
#Cleanup
#wget cleanup
RUN rm -f subfinder_2.4.4_linux_amd64.tar.gz
FROM base AS final
RUN mkdir app
WORKDIR /app
EXPOSE 80
EXPOSE 443
COPY Publish/ /app/
ENTRYPOINT ["dotnet", "OnePenTest.Api.dll"]
Images are immutable, so any change you make results in a new image being created. Your compose file specifies a build, so the build is rerun when you start the containers.
And if any file referenced by a COPY changes, the cached layers are invalidated and a new image is built without removing the old one, which is what leaves the previous images dangling.
You can use the commands below to build:
sudo docker-compose build --force-rm [--build-arg key=val...] [SERVICE...]
docker build --rm -t <tag> .
The --rm option removes intermediate containers after a successful build; --force-rm always removes them, even when the build fails.
Alternatively, configure your docker-compose.yaml and then run these commands:
$ docker compose down --remove-orphans # if needed to stop all running containers
$ docker compose build
$ docker compose up --no-build
Note the --no-build flag passed to docker compose up; it tells compose not to build images, so up will not trigger another build.
ref: https://docs.docker.com/engine/reference/commandline/compose_up/
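Either way, dangling images that have already piled up can be removed in one go with the built-in prune command (by default it deletes only dangling images, i.e. untagged layers no container uses, but review docker images first):
docker image prune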

How to run privileged Docker container for using systemctl

I am new to Docker and I am trying to use systemctl to restart a service. It constantly fails with Failed to get D-Bus connection: Operation not permitted. I understand that to bypass this I need to run a privileged Docker container; however, this still does not produce my desired results.
Please see below for the steps I took and the files involved:
Docker command:
docker run --privileged testapp /sbin/init
Dockerfile
FROM openjdk:14.0.1
# Copies required files to the Linux container
COPY ./out/production/TestingApp/ /App
COPY test.sh /App
COPY expressvpn-2.5.1.1-1.x86_64.rpm /App
WORKDIR /App
RUN yum -y update
RUN yum -y install sudo && yum -y install expect && yum -y install systemd
RUN yum -y install expressvpn-2.5.1.1-1.x86_64.rpm
ENTRYPOINT ["java", "Main"]
test.sh
In my Main.java, this shell script is executed and its output is printed to the console.
sudo systemctl start expressvpn.service
expressvpn status
Try:
docker run -u 0 testapp /sbin/init
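Note also that the Dockerfile sets ENTRYPOINT ["java", "Main"], so the trailing /sbin/init in these commands is passed as an argument to java rather than run as PID 1. If the goal is a booted systemd inside the container, overriding the entrypoint may be closer to what you need (a sketch, untested):
docker run --privileged --entrypoint /sbin/init testapp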

Ubuntu Docker container immediately stops, issue with Dockerfile?

I'm pretty new to Docker, and completely baffled as to why my container exits upon start.
I've built an Ubuntu image which starts Apache and fail2ban on boot. I'm unsure whether it's an issue with the Dockerfile or with the command I am running to start the container.
I've tried:
docker run -d -p 127.0.0.1:80:80 image
docker run -d -ti -p 127.0.0.1:80:80 image
docker run -d -ti -p 127.0.0.1:80:80 image /bin/bash
The Dockerfile is as follows:
FROM ubuntu:latest
RUN \
apt-get update && \
apt-get -y upgrade && \
apt-get install -y build-essential && \
apt-get install -y iptables && \
apt-get install -y software-properties-common && \
apt-get install -y apache2 fail2ban && \
rm -rf /etc/fail2ban/jail.conf
ADD index.html /var/www/html/
ADD jail.conf /etc/fail2ban/
ENV HOME /root
WORKDIR /root
EXPOSE 80 443
ENTRYPOINT service apache2 start && service fail2ban start
CMD ["bash"]
I can jump into the container itself with:
docker exec -it image /bin/bash
But the moment I try to run it whilst staying within the host, it fails. Help?
Considering your question, where you mention "upon boot", it would be useful to read https://docs.docker.com/config/containers/multi-service_container/.
In a nutshell, Docker containers do not "boot" like a normal system; they start one process and run until it exits.
So, if you want to start two processes, you can use a wrapper script as explained at the link above and sketched below.
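A minimal wrapper along those lines (a sketch; the Apache invocation assumes Debian/Ubuntu's apache2ctl):
#!/bin/bash
# start.sh: start fail2ban in the background, then keep Apache in the
# foreground so the container stays alive as long as Apache runs
service fail2ban start
exec apache2ctl -D FOREGROUND
and in the Dockerfile:
COPY start.sh /start.sh
RUN chmod +x /start.sh
ENTRYPOINT ["/start.sh"]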
Remove the following line from your Dockerfile:
CMD ["bash"]
Also, when you want to get a shell into a container built from this image, you have to override the ENTRYPOINT definition of your Dockerfile (note this is docker run; docker exec does not accept --entrypoint):
docker run -it --entrypoint "/bin/bash" image
See Dockerfile "ENTRYPOINT" documentation for more details

Write docker logs to external log file

We have a number of Node.js-based microservices, all running as Docker containers.
Below is the content of the Dockerfile:
FROM keymetrics/pm2-docker-alpine:latest
ARG ENVIRONMENT
ARG PORT
ENV PORT $PORT
ENV ENVIRONMENT $ENVIRONMENT
RUN apt-get update -qq
RUN apt-get install --yes curl
RUN curl --silent --location https://deb.nodesource.com/setup_6.x | bash -
RUN apt-get install --yes nodejs
RUN apt-get install --yes build-essential vim
RUN mkdir /database_service
ADD . /database_service
WORKDIR /database_service
RUN npm install -g path
RUN npm cache clean
EXPOSE $PORT
CMD [ "npm", "start", $PORT, $ENVIRONMENT ]
Below is the command used to run the container
sudo docker run -p ${EXTERNAL_PORT_NUMBER}:${INTERNAL_PORT_NUMBER} \
  --network ${NETWORK} --name ${SERVICE_NAME} --restart always \
  -m 2048M --memory-swap -1 -itd ${ORGANISATION}/${SERVICE_NAME}:${VERSION}
I am looking for a way to write the logs generated by this Node-based service to an external file on the Linux VM. A sample command would be much appreciated.
You can do something like:
sudo docker run -p ${EXTERNAL_PORT_NUMBER}:${INTERNAL_PORT_NUMBER} \
  --network ${NETWORK} --name ${SERVICE_NAME} --restart always \
  -m 2048M --memory-swap -1 -it ${ORGANISATION}/${SERVICE_NAME}:${VERSION} > /path/to/your/log.txt 2>&1
Note the container must run in the foreground for this to work: with -d, docker run prints only the container ID, so nothing from the service would land in the file.
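Alternatively, if you want to keep running the container detached, docker logs can stream everything the process writes to stdout/stderr into a host file:
sudo docker logs -f ${SERVICE_NAME} > /path/to/your/log.txt 2>&1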
