Curl 77 Error setting certificate file: http_ca.crt - linux

I'm getting the error: Curl: 77 Error setting certificate file: http_ca.crt
when running the line: curl --cacert http_ca.crt -u elastic https://localhost:9200
Can anyone explain why I'm getting this error and, more importantly, how to resolve it?
I'm attempting to follow the steps in the link below:
https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html
Edit:
The website says to use:
docker cp es01:/usr/share/elasticsearch/config/certs/http_ca.crt .
From my understanding, /usr/share/elasticsearch/config/certs/http_ca.crt is the source file inside the Docker container; however, when I go into the container using docker exec -it es01 bash and run ls, there is no 'usr' folder. Did I miss a step? Is the tutorial wrong?
Edit 2
So I found out that docker cp es01:/usr/share/elasticsearch/config/certs/http_ca.crt . does copy the file to my home directory (still no clue how, though, since I couldn't find the 'usr' folder in the container).
The original curl 77 error is still there, though.

I figured out what was wrong...
I needed to give myself (root) permission to access the http_ca.crt file.
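For anyone hitting the same error, a minimal sketch of the fix, assuming the copied http_ca.crt ended up owned by root in the current directory (the thread doesn't show the actual ownership):
sudo chown $USER http_ca.crt
chmod 644 http_ca.crt
curl --cacert http_ca.crt -u elastic https://localhost:9200
Alternatively, simply prefixing the curl command with sudo should also get past the read-permission error.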

Related

Docker Bind Mount: error while creating mount source path, permission denied

I am trying to run the NVIDIA PyTorch container nvcr.io/nvidia/pytorch:22.01-py3 on a Linux system, and I need to mount a directory of the host system (that I have R/W access to) in the container. I know that I need to use bind mounts, and here's what I'm trying:
I'm in a directory /home/<user>/test, which has the directory dir-to-mount. (The <user> account is mine).
docker run -it -v $(pwd)/dir-to-mount:/workspace/target nvcr.io/nvidia/pytorch:22.01-py3
Here's the error output:
docker: Error response from daemon: error while creating mount source path '/home/<user>/test/dir-to-mount': mkdir /home/<user>/test: permission denied.
ERRO[0000] error waiting for container: context canceled
As far as I know, Docker only needs to create the directory to be mounted if it doesn't exist already. From the Docker docs:
The file or directory does not need to exist on the Docker host already. It is created on demand if it does not yet exist.
I suspected that maybe the docker process does not have access; I tried chmod 777 with dir-to-mount as well as with test, but that made no difference.
So what's going wrong?
[Edit 1]
I am able to mount my user's entire home directory with the same command, but cannot mount other directories inside the home directory.
[Edit 2]
Here are the permissions:
home directory: drwx------
test: drwxrwxrwx
dir-to-mount: drwxrwxrwx
Run the command with sudo as:
sudo docker run -it -v $(pwd)/dir-to-mount:/workspace/target nvcr.io/nvidia/pytorch:22.01-py3
It appears that I can mount my home directory at its own path (i.e. /home/<username>), and this just works.
docker run -it -v $HOME:$HOME nvcr.io/nvidia/pytorch:22.01-py3
I don't know why the /home/<username> path is special; I've looked through the docs but couldn't find anything relevant.
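One hedged way to narrow this down (not something confirmed in this thread) is to check whether every directory along the mount path is traversable, and which daemon you are actually talking to:
namei -l /home/<user>/test/dir-to-mount
docker info
If docker info shows a rootless setup or a snap-style Docker Root Dir, the daemon runs with different privileges than the classic root daemon, which could explain why $HOME behaves differently from a subdirectory.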

RaspberryPI NextCloudPi Docker - Problem Loading Page

I'm trying to follow these steps to get a Docker container running NextCloud on my Raspberry Pi. The steps seem very straightforward, except I can't seem to get this working. The biggest difference is that I want to use an external drive as the data location. Here's what's happening:
I run sudo docker run -d -p 4442:4443 -p 442:443 -p 79:80 -v /mnt/nextclouddata:/data --name nextcloud ownyourbits/nextcloudpi-armhf
but when I go to https://pi_ip_address:442/activate (or any of the other ports), I get "problem loading page". I've also tried using https://raspberrypi.local:442/activate as well as appending both the IP and the name to the end of the command (where the DOMAIN is listed in the instructions).
I've seen some posts talking about how this is a problem with how docker accesses mounted drives, but I can't seem to get it working. When I type sudo docker logs -f nextcloud I get the following errors:
/run-parts.sh: line 47: /etc/services-enabled.d/010lamp: Permission denied
/run-parts.sh: line 47: /etc/services-enabled.d/020nextcloud: Permission denied
Init done
Does anyone have any steps to help get this working? I can't seem to find a consistent/working answer.
Thanks!

Azure File Share - Mount

I created an Azure File Share on my Storage Account (v2). Under the Connect tab, I copied the commands to mount the File Share over SMB 3.0.
I didn't achieve my goal. Error received: mount error(115): Operation now in progress
The Azure troubleshooting link was no help: https://learn.microsoft.com/en-us/azure/storage/files/storage-troubleshoot-linux-file-connection-problems#mount-error115-operation-now-in-progress-when-you-mount-azure-files-by-using-smb-30
I have a freshly updated (yesterday) Debian 10. I also tried with a Docker image, ubuntu:18.04, but the result didn't change, so I suspect the problem isn't just a mistake on my end.
The error is returned by the last command:
$> mount -t cifs //MY_ACCOUNT.file.core.windows.net/MY_FILE_SHARE /mnt/customfolder -o vers=3.0,credentials=/etc/smbcredentials/MY_CREDENTIALS,dir_mode=0777,file_mode=0777,serverino
What I've tried:
I tried changing the SMB version from 3.0 to 3.11 ---> nothing
I tried using username and password instead of a credentials file ---> nothing
Using smbclient -I IP -p 445 -e -m SMB3 -U MY_USERNAME \\\\MY_ACCOUNT.file.core.windows.net\\MY_FILE_SHARE ---> nothing
Thanks for the help.

Can't create Docker volume using absolute path on Linux

I'm getting the following error message when trying to run a Docker container with which I want to share some data via a directory:
##[error]/usr/bin/docker: Error response from daemon: create -v /opt/vsts/work/1/s/coverage: "-v /opt/vsts/work/1/s/coverage" includes invalid characters for a local volume name, only "[a-zA-Z0-9][a-zA-Z0-9_.-]" are allowed. If you intended to pass a host directory, use absolute path.
What I don't understand is that, to my knowledge, /opt/vsts/work/1/s/coverage is an absolute path, as indicated by the leading forward slash.
Can someone explain what I'm doing wrong?
A build script was passing in "-v /opt/vsts/work/1/s/coverage" as the actual name, i.e.
docker run -v -v /opt/vsts/work/1/s/coverage:[...]
instead of
docker run -v /opt/vsts/work/1/s/coverage:[...].
Thanks @larsks for pointing me in the right direction.
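To illustrate how this kind of thing can happen, here's a hypothetical reconstruction (the MOUNT variable and my-image name are assumptions, not taken from the actual build script):
MOUNT="-v /opt/vsts/work/1/s/coverage"
docker run -v "$MOUNT:/coverage" my-image
The source part of the volume spec now starts with "-v ", so Docker treats it as a named volume and rejects it for containing invalid characters. Keeping the flag out of the value avoids the problem:
MOUNT="/opt/vsts/work/1/s/coverage"
docker run -v "$MOUNT:/coverage" my-image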

Docker 1.6 and Registry 2.0

Has anyone tried successfully the search command with Docker 1.6 and the new registry 2.0?
I've set mine up behind Nginx with SSL, and so far it is working fine. I can push and pull images without problems. But when I try to search for them, all of the following commands give a 404 response:
curl -k -s -X GET https://username:password@my-docker-registry.com/v1/search
404 page not found
curl -k -s -X GET https://username:password@my-docker-registry.com/v2/search
404 page not found
root@ip-10-232-0-191:~# docker search username:password@my-docker-registry.com/hello-world
FATA[0000] Invalid repository name (admin:admin), only [a-z0-9-_.] are allowed
root@ip-10-232-0-191:~# docker search my-docker-registry.com/hello-world
FATA[0000] Error response from daemon: Unexpected status code 404
I wanted to ask if anyone has any ideas why and what is the correct way to use the Docker client to search the registry for images.
Looking at the API v2.0 documentation, do they simply not support a search function? Seems a bit strange to omit such functionality.
At least something works :)
root@ip-10-232-0-191:~# curl -k -s -X GET https://username:password@my-docker-registry.com/v2/hello-world/tags/list
{"name":"hello-world","tags":["latest"]}
To date, the search API is missing from registry v2.0.1, and this issue is under discussion here. I believe the search API is intended to land in v2.1.
EDIT: the /v2/_catalog endpoint is available in distribution/registry:master
Before the new registry API:
If you are using REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY, you may list the contents of that directory:
user@host:~# tree $REGISTRY_FS_ROOTDIR/docker/registry/v2/repositories -L 2
***/docker/registry/v2/repositories
└── repository1
└── image1
This may be useful for building a quick web UI on top of, or if you have SSH access to the host storing the repositories:
ssh -T user@host -p <port> tree $REGISTRY_FS_ROOTDIR/docker/registry/ -L 2
Do look at the compose example, which deploys both v1 & v2 registries behind an nginx reverse proxy.
The latest version of Docker Registry, available from https://github.com/docker/distribution, supports the Catalog API (/v2/_catalog). This adds the capability to list, and thus search, repositories.
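As a quick sketch, reusing the placeholder registry and credentials from the question (the JSON below is the documented response shape, not output captured from this registry):
curl -k -s -X GET https://username:password@my-docker-registry.com/v2/_catalog
{"repositories":["hello-world"]}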
If interested, you can try the Docker image registry CLI I built to make it easy to use the search features in the new Docker Registry v2 distribution: https://github.com/vivekjuneja/docker_registry_cli
If you're on Windows, here's a PowerShell script to query v2/_catalog with basic HTTP auth:
https://gist.github.com/so0k/b59382ea7fd959cf7040
FYI, to use this you have to docker pull distribution/registry:master instead of docker pull registry:2. The registry:2 image version is currently 2.0.1, which does not come with the catalog endpoint.
