I'm trying to make a Docker container that uses OpenVPN to connect to my Private Internet Access (PIA) VPN and download some data from a web server, but when I try to connect to PIA I get an error:
2022-12-07 12:08:03 [oslo403] Peer Connection Initiated with [AF_INET]**.***.***.***:1198
2022-12-07 12:08:03 sitnl_send: rtnl: generic error (-101): Network unreachable
2022-12-07 12:08:03 ERROR: Cannot open TUN/TAP dev /dev/net/tun: No such file or directory (errno=2)
2022-12-07 12:08:03 Exiting due to fatal error
I've tried to create a /dev/net/tun device manually:
RUN mkdir -p /dev/net && mknod /dev/net/tun c 10 200 && chmod 600 /dev/net/tun
But then I get this error:
2022-12-07 12:12:35 sitnl_send: rtnl: generic error (-101): Network unreachable
2022-12-07 12:12:35 ERROR: Cannot ioctl TUNSETIFF tun: Operation not permitted (errno=1)
2022-12-07 12:12:35 Exiting due to fatal error
Everything is running as root, so that is not the issue.
Here is my complete Dockerfile:
FROM alpine
RUN apk update && apk add bash openvpn wget unzip
# This section downloads PIA's configuration and adds login information to it.
RUN mkdir /vpn
RUN echo "********" > /vpn/login.txt
RUN echo "********" >> /vpn/login.txt
RUN wget https://www.privateinternetaccess.com/openvpn/openvpn.zip
RUN unzip openvpn.zip -d /vpn
RUN sed -i "s/auth-user-pass/auth-user-pass \/vpn\/login.txt/" /vpn/*
# Here is my attempted fix for the problem
RUN mkdir -p /dev/net && mknod /dev/net/tun c 10 200 && chmod 600 /dev/net/tun
ENTRYPOINT [ "openvpn", "/vpn/norway.ovpn" ]
I would love some help with this. Really, all I want is a minimal example that uses OpenVPN with Docker to run, for example:
curl api.ipify.org
You need to add this argument to the docker command:
--cap-add=NET_ADMIN
Network changes done by OpenVPN require extra permissions provided by the NET_ADMIN capability.
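For a minimal end-to-end sketch with the Dockerfile above (the image tag vpn-test is an assumption; the --device flag, which exposes the host's TUN device to the container, is often needed in addition to the capability):
docker build -t vpn-test .
docker run -d --name vpn-test --cap-add=NET_ADMIN --device /dev/net/tun vpn-test
# once the tunnel is up, check the exit IP as seen through the VPN
docker exec vpn-test wget -qO- api.ipify.org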
I am trying to run the NVIDIA PyTorch container nvcr.io/nvidia/pytorch:22.01-py3 on a Linux system, and I need to mount a directory of the host system (that I have R/W access to) in the container. I know that I need to use bind mounts, and here's what I'm trying:
I'm in a directory /home/<user>/test, which has the directory dir-to-mount. (The <user> account is mine).
docker run -it -v $(pwd)/dir-to-mount:/workspace/target nvcr.io/nvidia/pytorch:22.01-py3
Here's the error output:
docker: Error response from daemon: error while creating mount source path '/home/<user>/test/dir-to-mount': mkdir /home/<user>/test: permission denied.
ERRO[0000] error waiting for container: context canceled
As far as I know, Docker only needs to create the directory to be mounted if it doesn't already exist. From the Docker docs:
The file or directory does not need to exist on the Docker host already. It is created on demand if it does not yet exist.
I suspected that maybe the Docker process does not have access; I tried chmod 777 on dir-to-mount as well as on test, but that made no difference.
So what's going wrong?
[Edit 1]
I am able to mount my user's entire home directory with the same command, but cannot mount other directories inside the home directory.
[Edit 2]
Here are the permissions:
home directory: drwx------
test: drwxrwxrwx
dir-to-mount: drwxrwxrwx
Run the command with sudo as:
sudo docker run -it -v $(pwd)/dir-to-mount:/workspace/target nvcr.io/nvidia/pytorch:22.01-py3
It appears that I can mount my entire home directory (/home/<username>), and this just works:
docker run -it -v $HOME:$HOME nvcr.io/nvidia/pytorch:22.01-py3
I don't know why the /home/<username> path is special; I've tried looking through the docs, but I could not find anything relevant.
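Building on that observation, one workaround worth trying (a sketch, not tested on your setup) is to mount the whole home directory and address the subdirectory from inside the container; the paths are the ones from the question:
docker run -it -v $HOME:$HOME nvcr.io/nvidia/pytorch:22.01-py3
# inside the container, the data is then reachable under the same absolute path
ls /home/<user>/test/dir-to-mount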
Given a simple Dockerfile that installs something from the net, I'm trying to work out an elegant way to allow the build process to trust HTTPS endpoints both when the build is behind a corporate proxy and when it is not. Ideally without making changes to the Dockerfile.
Dockerfile:
FROM alpine
RUN apk update -v; apk add -v curl
Error:
$ docker build .
Sending build context to Docker daemon 83.97kB
Step 1/2 : FROM alpine
---> e50c909a8df2
Step 2/2 : RUN apk update -v; apk add -v curl
---> Running in 983ed3885376
fetch https://dl-cdn.alpinelinux.org/alpine/v3.13/main/x86_64/APKINDEX.tar.gz
140566353398600:error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed:ssl/statem/statem_clnt.c:1913:
ERROR: https://dl-cdn.alpinelinux.org/alpine/v3.13/main: Permission denied
WARNING: Ignoring https://dl-cdn.alpinelinux.org/alpine/v3.13/main: No such file or directory
fetch https://dl-cdn.alpinelinux.org/alpine/v3.13/community/x86_64/APKINDEX.tar.gz
140566353398600:error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed:ssl/statem/statem_clnt.c:1913:
ERROR: 2 errors; 14 distinct packages available
https://dl-cdn.alpinelinux.org/alpine/v3.13/community: Permission denied
WARNING: Ignoring https://dl-cdn.alpinelinux.org/alpine/v3.13/community: No such file or directory
fetch https://dl-cdn.alpinelinux.org/alpine/v3.13/main/x86_64/APKINDEX.tar.gz
139846303062856:error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed:ssl/statem/statem_clnt.c:1913:
fetch https://dl-cdn.alpinelinux.org/alpine/v3.13/community/x86_64/APKINDEX.tar.gz
ERROR: https://dl-cdn.alpinelinux.org/alpine/v3.13/main: Permission denied
WARNING: Ignoring https://dl-cdn.alpinelinux.org/alpine/v3.13/main: No such file or directory
139846303062856:error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed:ssl/statem/statem_clnt.c:1913:
ERROR: https://dl-cdn.alpinelinux.org/alpine/v3.13/community: Permission denied
WARNING: Ignoring https://dl-cdn.alpinelinux.org/alpine/v3.13/community: No such file or directory
ERROR: unable to select packages:
curl (no such package):
required by: world[curl]
The command '/bin/sh -c apk update -v; apk add -v curl' returned a non-zero code: 1
The issue here is that my developer machine is on the corporate network behind a traffic-intercepting proxy that man-in-the-middles the connection, meaning that from apk's point of view inside the Docker build, it sees a cert signed by our proxy that it doesn't trust.
Trust from the host machine is not an issue; when I wget the file requested in the build, it works:
$ wget https://dl-cdn.alpinelinux.org/alpine/v3.13/main/x86_64/APKINDEX.tar.gz
--2021-02-15 12:41:59-- https://dl-cdn.alpinelinux.org/alpine/v3.13/main/x86_64/APKINDEX.tar.gz
Connecting to 10.0.2.2:9000... connected.
Proxy request sent, awaiting response... 200 OK
Length: 631235 (616K) [application/octet-stream]
Saving to: ‘APKINDEX.tar.gz’
When I run it on the build server it passes fine, because there is no forward proxy there.
Is there a way to pass the Ubuntu trust bundle, which includes the proxy's CA (e.g. /etc/ssl/certs/ca-certificates), to the build process without modifying the Dockerfile?
Thanks!
Create a file named repositories in your local docker build context directory with the following content:
http://dl-cdn.alpinelinux.org/alpine/v3.13/main
http://dl-cdn.alpinelinux.org/alpine/v3.13/community
In your Dockerfile, before the RUN apk update line, add the following:
COPY repositories /etc/apk/repositories
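For clarity, the resulting Dockerfile would look roughly like this. Pointing apk at plain-HTTP mirrors sidesteps the TLS interception entirely; you lose transport encryption for package downloads, but apk still verifies package signatures:
FROM alpine
# replace the default HTTPS mirrors with the plain-HTTP ones
COPY repositories /etc/apk/repositories
RUN apk update -v; apk add -v curl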
FROM abdennour/alpine:3.14-ssl
# the .der file must be copied into the image so the RUN step can read it
COPY COMPANY.der .
RUN openssl x509 -inform der -in COMPANY.der -out /usr/local/share/ca-certificates/company-cert.crt && \
    cat /usr/local/share/ca-certificates/company-cert.crt >> /etc/ssl/certs/ca-certificates.crt && \
    update-ca-certificates
EXPLAINED!
Request the CA certificate from the team who purchased the SSL certificates; ask them to provide the certificate file (*.der).
Got it? Convert it to a .crt file:
RUN openssl x509 -inform der -in COMPANY.der -out /usr/local/share/ca-certificates/company-cert.crt && \
cat /usr/local/share/ca-certificates/company-cert.crt >> /etc/ssl/certs/ca-certificates.crt && \
update-ca-certificates
But this requires the openssl and ca-certificates packages to be present in the image.
And because you can't install anything, you can rely on an Alpine image that already includes at least these two packages, like my base image:
FROM abdennour/alpine:3.14-ssl
I'm trying to follow these steps to get a Docker container running NextCloud on my Raspberry Pi. The steps seem very straightforward, except I can't get this working. The biggest difference is that I want to use an external drive as the data location. Here's what's happening:
I run sudo docker run -d -p 4442:4443 -p 442:443 -p 79:80 -v /mnt/nextclouddata:/data --name nextcloud ownyourbits/nextcloudpi-armhf
but when I go to https://pi_ip_address:442/activate (or any of the other ports), I get "problem loading page". I've also tried https://raspberrypi.local:442/activate, as well as appending both the IP and the name to the end of the command (where the DOMAIN is listed in the instructions).
I've seen some posts saying this is a problem with how Docker accesses mounted drives, but I can't get it working. When I type sudo docker logs -f nextcloud I get the following errors:
/run-parts.sh: line 47: /etc/services-enabled.d/010lamp: Permission denied
/run-parts.sh: line 47: /etc/services-enabled.d/020nextcloud: Permission denied
Init done
Does anyone have any steps to help get this working? I can't seem to find a consistent/working answer.
Thanks!
I have created a docker image where I install the mailutils package using:
RUN apt-get update && apt-get install -y mailutils
As a sample command, I am running:
mail -s 'Hello World' {email-address} <<< 'Message body'
When I execute the same command on my local machine, it sends the mail. However, in the Docker container it shows no errors, but no mail arrives at the specified address.
I tried the --net=host argument while spawning my Docker container.
Following is my docker command:
docker run --net=host -p 0.0.0.0:8000:8000 {imageName}:{tagName} {arguments}
Is there anything that I am missing? Could someone explain the networking concepts behind this problem?
Install ssmtp and configure it to send all mail to your relay host. (The mail command from mailutils just hands the message to a local MTA, and a minimal container usually has no MTA running, which is why the send fails silently.)
https://wiki.debian.org/sSMTP
Thanks for the response, @pilasguru. ssmtp works for sending mail from within a Docker container.
Just to make the response more verbose, here are the things one would need to do.
Install ssmtp in the container. You can do this with the following command:
RUN apt-get update && apt-get -y install ssmtp
You can configure ssmtp at /etc/ssmtp/ssmtp.conf.
An example configuration:
#
# Config file for sSMTP sendmail
#
# The person who gets all mail for userids < 1000
# Make this empty to disable rewriting.
root={root-name}
# The place where the mail goes. The actual machine name is required no
# MX records are consulted. Commonly mailhosts are named mail.domain.com
mailhub={smtp-server}
# Where will the mail seem to come from?
rewriteDomain={domain-name}
# The full hostname
hostname=c67fcdc6361d
# Are users allowed to set their own From: address?
# YES - Allow the user to specify their own From: address
# NO - Use the system generated From: address
FromLineOverride=YES
You can copy that directly from the root directory where you're building your Docker image. For example, say you keep your configuration in a file named my.conf; you can copy it into your Docker container with:
COPY ./my.conf /etc/ssmtp/ssmtp.conf
Send a mail using a simple command such as:
ssmtp recipient_name@gmail.com < filename.txt
You can even send an attachment, and specify To and From, using the following command:
echo -e "to: {to-addr}\nFrom: {from-addr}\nsubject: {subject}\n" | (cat - && uuencode /path/to/file/inside/container {attachment-name-in-mail}) | ssmtp recipient_name@gmail.com
uuencode can be installed with apt-get install sharutils
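Putting the pieces together, a minimal Dockerfile sketch for an image that can send mail this way (the my.conf name follows the example above; the ubuntu base image and the values inside the config are assumptions you will need to adjust):
FROM ubuntu
# ssmtp relays mail to the mailhub configured in ssmtp.conf; sharutils provides uuencode
RUN apt-get update && apt-get -y install ssmtp sharutils
COPY ./my.conf /etc/ssmtp/ssmtp.conf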
I followed this Git repo and everything worked fine.
I am able to issue php artisan varnish:flush from SSH,
but when I try to flush the cache from a script I get the error:
sudo: no tty present and no askpass program specified
This is how I added it in routes.php:
Route::get('/flush', function() {
    Artisan::call('varnish:flush');
});
I also tried:
Route::get('/flush', function() {
    (new Spatie\Varnish\Varnish())->flush();
});
This is the complete error:
ProcessFailedException in Varnish.php line 64:
The command "sudo varnishadm -S /etc/varnish/secret -T 127.0.0.1:6082 'ban req.http.host ~ (^www.host.com$)'" failed.
Exit Code: 1(General error)
Working directory: /home/admin/web/host.com/public_html
Output:
================
Error Output:
================
sudo: no tty present and no askpass program specified
I am using Vesta CP on a VPS.
Can you help me find a solution to this error?
When using sudo, the command opens the device /dev/tty for read-write and prints that error if this fails.
Rebooting your machine is sufficient to get the device back if it was removed; the system recreates all devices in /dev during boot.
Also, make sure the permissions are correct:
chmod 666 /dev/tty
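For completeness: this error usually means sudo wants to ask for a password but has no terminal to ask on. If that is the case here, granting the web server user passwordless sudo for just the varnishadm command may help; a sketch, assuming the web server runs as www-data and varnishadm lives at /usr/bin/varnishadm:
# /etc/sudoers.d/varnish -- edit with visudo so the syntax is validated
www-data ALL=(ALL) NOPASSWD: /usr/bin/varnishadm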