Raspberry Pi NextCloudPi Docker - Problem Loading Page - linux

I'm trying to follow these steps to get a Docker container running NextCloud on my Raspberry Pi. The steps seem very straightforward, except I can't seem to get this working. The biggest difference is that I want to use an external drive as the data location. Here's what's happening:
I run sudo docker run -d -p 4442:4443 -p 442:443 -p 79:80 -v /mnt/nextclouddata:/data --name nextcloud ownyourbits/nextcloudpi-armhf
but when I go to https://pi_ip_address:442/activate (or any of the other ports), I get "problem loading page". I've also tried using https://raspberrypi.local:442/activate, as well as appending both the IP and the name to the end of the command (where the DOMAIN is listed in the instructions).
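For reference, the variant with the domain appended (where the instructions show DOMAIN) looked roughly like this, with my Pi's actual address in place of the placeholder:
sudo docker run -d -p 4442:4443 -p 442:443 -p 79:80 -v /mnt/nextclouddata:/data --name nextcloud ownyourbits/nextcloudpi-armhf 192.168.1.50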
I've seen some posts talking about how this is a problem with how docker accesses mounted drives, but I can't seem to get it working. When I type sudo docker logs -f nextcloud I get the following errors:
/run-parts.sh: line 47: /etc/services-enabled.d/010lamp: Permission denied
/run-parts.sh: line 47: /etc/services-enabled.d/020nextcloud: Permission denied
Init done
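In case it helps, these are the kinds of checks I've been running against the data mount and the scripts the log complains about (nothing obviously wrong so far):
ls -ld /mnt/nextclouddata   # is the data mount present and readable on the host?
sudo docker exec -it nextcloud ls -l /etc/services-enabled.d   # the scripts from the log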
Does anyone have any steps to help get this working? I can't seem to find a consistent/working answer.
Thanks!

Related

Curl 77 Error setting certificate file: http_ca.crt

I'm getting the error: Curl: 77 Error setting certificate file: http_ca.crt
when running the line: curl --cacert http_ca.crt -u elastic https://localhost:9200
Can anyone explain why I'm getting this error and more importantly how to resolve it?
I'm attempting to follow the steps in the below link:
https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html
Edit:
The website says to use:
docker cp es01:/usr/share/elasticsearch/config/certs/http_ca.crt .
From my understanding, /usr/share/elasticsearch/config/certs/http_ca.crt is the source file inside the Docker container; however, when going into the container using docker exec -it es01 bash and running the ls command, there is no 'usr' folder. Did I miss a step? Is the tutorial wrong?
Edit 2
So I found out that docker cp es01:/usr/share/elasticsearch/config/certs/http_ca.crt . does copy the file to my home directory (still no clue how, though, since I couldn't find the 'usr' folder in the Docker container).
The original error of curl 77 is still there though.
I figured out what was wrong...
I needed to give myself (root) permission to access the http_ca.crt file.
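A minimal sketch of the fix, assuming the cert was copied into the current directory and is owned by root:
sudo chmod +r http_ca.crt   # or: sudo chown "$USER" http_ca.crt
curl --cacert http_ca.crt -u elastic https://localhost:9200
As for the "missing" usr folder: a bare ls only lists the container's working directory, which for the Elasticsearch image appears to be /usr/share/elasticsearch, so docker exec -it es01 ls / is what shows the usr folder.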

Enable command line audit logging in docker container - kubernetes

Hope you can help.
I am trying to enable audit logging in a Docker container, so that if anybody runs kubectl exec into a container and executes any commands, those commands get logged, and we can view them in kubectl logs and capture them with fluentd, for example.
One option, adding the following line to /etc/profile of a container, works for root but not for a non-root user, as /proc/1/fd/1 is owned and writable only by root, and changing its ownership or permissions is unfortunately not an option.
trap 'echo "$USER":"$BASH_COMMAND" >> /proc/1/fd/1' DEBUG
So far I have tried the following:
Running the container as non-root would work, but unfortunately that is not an option here.
Just changing the permissions/ownership of /proc/1/fd/1 doesn't work; the permissions/ownership don't actually change.
Adding mesg y to /etc/profile to allow write access to root's terminal didn't work either: when doing su - non-root, permission gets denied (mesg: cannot open /dev/pts/2: Permission denied), and the permission cannot be changed.
Adding a special file and trying to redirect the logs from there didn't work either; permission still gets denied. For example:
mkfifo -m 666 /tmp/logpipe #create the special file
trap 'echo "$USER":"$BASH_COMMAND" <> /tmp/logpipe > /proc/1/fd/1' DEBUG # in /etc/profile
Changing to trap 'echo "$USER":"$BASH_COMMAND"' DEBUG won't work either, as the logs need to go to /proc/1/fd/1 in the case of Docker.
How would you enable command line audit logging in docker container or workaround the /proc/1/fd/1 permission issue for non-root in a container run as root user?
Any ideas highly appreciated.
Ah, came across my own question :D
So, Falco can be used as a HIDS (host-based intrusion detection system), which will alert on any unusual activity based on rules defined in the Falco configuration. It can be deployed as a DaemonSet (privileged) or directly on the nodes.
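For reference, a minimal sketch of deploying Falco as a DaemonSet with Helm (chart location assumed from the falcosecurity docs):
helm repo add falcosecurity https://falcosecurity.github.io/charts   # add the official chart repo
helm repo update
helm install falco falcosecurity/falco --namespace falco --create-namespace
With the default rules, a shell spawned via kubectl exec then shows up as an alert in the Falco pod logs.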

setcap cap_net_admin in linux containers prevents user access to every file

I have a tcpdump application in a CentOS container. I was trying to run tcpdump as non-root. Following this forum post: https://askubuntu.com/questions/530920/tcpdump-permissions-problem (and some other documentation that reinforced this), I tried to use setcap cap_net_admin+eip /path/to/tcpdump in the container.
After running this, I tried to run tcpdump as a different user (with permissions to tcpdump) and got "Operation Not Permitted". I then tried to run it as root, which had previously been working, and also got "Operation Not Permitted". After running getcap, I verified that the capabilities were what they should be. I thought it might be my specific use case, so I tried running the setcap command against several other executables. Every single one returned "Operation Not Permitted" until I ran setcap -r /filepath.
Any ideas on how I can address this issue, or even work around it without using root to run tcpdump?
The NET_ADMIN capability is not included in containers by default because it could allow a container process to modify and escape the network isolation settings applied to the container. Therefore, explicitly setting this capability on a binary with setcap is going to fail, since root and every other user in the container is blocked from that capability. To run a container with it, you would need to add this capability with the command used to start your container, e.g.:
docker run --cap-add NET_ADMIN ...
However, I believe all you need is NET_RAW (setcap cap_net_raw) which is included in the default capabilities. From man capabilities:
CAP_NET_RAW
* Use RAW and PACKET sockets;
* bind to any address for transparent proxying.
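A sketch of that approach, assuming tcpdump lives at /usr/sbin/tcpdump in the CentOS image:
setcap cap_net_raw+eip /usr/sbin/tcpdump   # as root, inside the container
getcap /usr/sbin/tcpdump                   # verify the capability was applied
su - someuser -c 'tcpdump -i eth0 -c 1'    # capture one packet as a non-root user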

How to change user of docker service?

I'm having a problem because I've installed & started Docker as "bad_user". The problem is that this container generates static files (it's the jekyll/jekyll image), and those files are owned by "bad_user", so I cannot edit them (I know I could add myself to the bad_user group or chown -R the directory, but it would be painful to do that every time, and it just bugs me :).
I have tried reinstalling Docker & removing the /etc/docker directory, without any effect. Every time I reinstall, the Docker service runs as "bad_user" and overwrites the directory owner.
My question is:
Would it be possible to make Docker run under a "docker" user? I have already created that user with that group (yes, I have reinstalled docker-ce under that user already).
I'm working on a Debian-based distro.
I guess in my case it's a Docker daemon issue: somehow, when it's synchronizing shared volume files, it gives permissions to "bad_user" instead of the user who is running the container.
PS: This is the command I run, if that matters:
docker run --rm -p 8000:8000 \
--volume="/home/docker/blog:/srv/jekyll" \
-it tocttou/jekyll:3.5 \
jekyll serve --watch --port 8000
Okay, I figured it out. It turns out that when you run a Linux container that creates files on the shared volume (the -v argument makes a shared volume), the files will be owned by the user with user id 1000 and group id 1000. In my case the user with id 1000 was "bad_user". If you want to work around that, you can use --user and specify the user id you're running under.
The key is to remember that Linux permissions are just numbers: for the host filesystem, number 1000 is (in my case) "bad_user" and 10001 is "docker_user". If you check permissions from inside the container, you might see that user id 1000 means a very different user than on your host system.
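A sketch of the workaround, assuming you want the files owned by the host user running the command:
id -u; id -g   # check your host uid/gid, typically 1000 for the first user
docker run --rm -p 8000:8000 \
  --user "$(id -u):$(id -g)" \
  --volume="/home/docker/blog:/srv/jekyll" \
  -it tocttou/jekyll:3.5 \
  jekyll serve --watch --port 8000
The generated files on the host are then owned by your own uid instead of whoever happens to be uid 1000.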
I hope the next people who encounter this issue will find this useful.
You can find more information here: https://dille.name/blog/2018/07/16/handling-file-permissions-when-writing-to-volumes-from-docker-containers/

HDP 2.5 Hortonworks ambari-admin-password-reset missing

I have downloaded the sandbox from Hortonworks (CentOS), then tried to follow the tutorial. It seems like the ambari-admin-password-reset command is missing. I also tried to log in with PuTTY; the console asked me to change the password, so I did.
Now it seems like the command is there, but I have different passwords for the same user: one for the console and one for PuTTY.
I have tried to figure out why, for the same user 'root', I have two different passwords (one for the VirtualBox console and one for PuTTY) that I can log in with. I see different commands on each box. More than that, when I share a folder I can only see it on the VirtualBox console but not in the PuTTY session, which is really frustrating.
How can I make sure that what I see from PuTTY is the same as what I see from the VirtualBox console?
I think it is somehow related to TTY, but I am not sure.
EDIT:
running commands from the virtual box machine output:
grep "^passwd" /etc/nsswitch.conf
OUT: passwd: files sss
grep root /etc/passwd
OUT: root:x:0:0:root:/root:/bin/bash
operator:x:11:0:operator:/root:/sbin/nologin
getent passwd root
OUT: root:x:0:0:root:/root:/bin/bash
EDIT:
I think this is all about Docker containers. It seems like the machine's port 2222 is the SSH port for the HDP 2.5 container and not for the hosting machine.
Now I get another problem: when running
docker exec sandbox ls
it gets stuck. Any help?
Thanks!
So now I've had the time to analyze the sandbox VM and write it up for other users.
As you stated correctly in your edit of the question, it's the Docker container setup of the sandbox that causes the confusion, with two separate root users:
Via ssh root@127.0.0.1 -p 2222 you get into the Docker container called "sandbox". This is CentOS release 6.8 (Final), containing all the HDP services, especially the Ambari service. The configuration enforces a password change at first login for the root user. Inside this container you can also execute ambari-admin-password-reset and set a password for the Ambari admin there.
Via console access you reach the Docker host, running CentOS 7.2; here you can log in with the default root password for the VM as found in the HDP docs.
Coming to your sub-question with the hanging docker exec, it seems to be a bug in that specific docker version. If you google that, you will find issues discussing this or similar problems with docker.
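A quick way to check which version the host is actually running before deciding anything (sketch):
docker version --format '{{.Server.Version}}'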
So I thought it would be a good idea to just update the host via yum update. However, this turned out to be a difficult path.
yum tried to update the kernel, but complained that there was not enough space on the boot partition.
So I moved the boot partition to the root partition:
edit /etc/fstab and comment out the boot entry
umount /boot
mv /boot /boot.org
cp -a /boot.org /boot
grub2-mkconfig -o /boot/grub2/grub.cfg
grub2-install /dev/sda
reboot
After that I found out that the Docker configuration was broken and Docker did not start anymore. In the logs it complained:
Error starting daemon: error initializing graphdriver: "/var/lib/docker" contains other graphdrivers: devicemapper; Please cleanup or explicitly choose storage driver (-s <DRIVER>)
So I edited /etc/systemd/system/multi-user.target.wants/docker.service and changed the ExecStart setting to:
ExecStart=/usr/bin/dockerd --storage-driver=overlay
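Note that systemd only picks up edits to unit files after a reload:
systemctl daemon-reload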
After a service docker start and a docker start sandbox, the container worked again. I could log in to the container, and after an ambari-server restart everything worked again.
And now, with the new Docker version 1.12.2, docker exec sandbox ls works again.
So to sum up: the docker exec command has a bug in that specific version of the sandbox, but you should think twice before upgrading your sandbox.
I ran into the same issue.
The HDP 2.5 sandbox runs all of its components in a docker container, but commands like docker exec -it sandbox /bin/bash or docker attach sandbox got stuck.
When I ran a simple ps aux, I found several /usr/bin/docker-proxy commands which looked like:
/usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 60000 -container-ip 172.17.0.2 -container-port 60000
They probably forward the HTTP ports of the various UIs of HDP components.
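If you don't want to dig through ps output, the container IP can also be read directly from Docker (assuming the container is named sandbox):
docker inspect -f '{{.NetworkSettings.IPAddress}}' sandbox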
I could ssh into the container ip (here 172.17.0.2) using root/hadoop to authenticate. From there, I could use all "missing" commands like ambari-admin-password-reset.
$ ssh root@172.17.0.2
... # change password
$ ambari-admin-password-reset
NB: I am new to docker, so there's probably a better way to deal with this.
I'd like to post the instructions for 3.0.1 here.
I followed the instructions for installing Hortonworks version 3.0.1 here: https://youtu.be/5TJMudSNn9c
After running the Docker container, go to your browser and enter "localhost:4200"; that will take you to the in-browser terminal of the container that hosts Ambari. Enter "root" for the login and "hadoop" for the password, change the root password, and then run "ambari-admin-password-reset" to reset the Ambari password.
In order to be able to use sandbox-hdp.hortonworks.com, you need to add the line "127.0.0.1 sandbox-hdp.hortonworks.com" at the end of the /private/etc/hosts file on your Mac.
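A one-liner for that (run on the Mac host; sudo is needed to append to the hosts file):
echo '127.0.0.1 sandbox-hdp.hortonworks.com' | sudo tee -a /private/etc/hosts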
If you end up with an incorrect password, you can reset it from recovery mode:
Click the power button in the corner, choose Restart from the power-off drop-down, and when it boots up press the Esc key to get into the recovery menu.
Select the advanced options entry and hit Enter.
Select recovery mode and hit Enter.
Select root and hit Enter to get a root shell.
Remount the filesystem read-write and change the password:
mount -o remount,rw /
ls /home          # to see the usernames
passwd username   # with your own username in place of "username"
Enter the new password twice by pressing Enter after each.
Hopefully you changed the password (:
