Change the URL of a docker-machine - azure

I created a machine via docker-machine create -d azure --azure-static-public-ip. But then I intentionally changed the public IP address of that VM. Since that move, I cannot run docker-machine ssh or any other docker-machine command; it seems to still be sending requests to the previous public IP. How can I point docker-machine at the new one? I tried docker-machine regenerate-certs and even editing config.json, but nothing happened.
The only fix I have found so far is to revert the VM to its previous public IP.

You should be fine with a change of the IP in config.json. For example, if I had to change the IP on my default docker-machine, I would go here:
/Users/arne/.docker/machine/machines/default/config.json
Adjust the IP and run
docker-machine regenerate-certs default
This should work.
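For example, a minimal sketch of that edit (the machine name and IPs are placeholders; on most drivers the address lives in the IPAddress fields of config.json, so check with grep first):
CONFIG=~/.docker/machine/machines/default/config.json
# see where the old IP appears before changing anything
grep -n 'IPAddress' "$CONFIG"
# swap the old public IP for the new one (keeps a .bak backup)
sed -i.bak 's/<old-ip>/<new-ip>/g' "$CONFIG"
docker-machine regenerate-certs default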

Do you mean that when you run docker-machine ssh you get this error:
Error checking TLS connection: Error checking and/or regenerating the certs: There was an error validating certificates for host "13.91.60.237:2376": x509: certificate is valid for 40.112.218.127, not 13.91.60.237
You can attempt to regenerate them using 'docker-machine regenerate-certs [name]'. Be advised that this will trigger a Docker daemon restart which might stop running containers.
In my test lab, my first IP address was 40.112.218.127; after I changed it to 13.91.60.237, I got this error.
Then I used this command to regenerate the certs: docker-machine regenerate-certs jasonvmm, like this:
[root@jasoncli jasonvmm]# docker-machine regenerate-certs jasonvmm
Regenerate TLS machine certs? Warning: this is irreversible. (y/n): y
Regenerating TLS certificates
Waiting for SSH to be available...
Detecting the provisioner...
Installing Docker...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
[root@jasoncli jasonvmm]# docker-machine ssh jasonvmm
Welcome to Ubuntu 16.04.1 LTS (GNU/Linux 4.4.0-47-generic x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
Get cloud support with Ubuntu Advantage Cloud Guest:
http://www.ubuntu.com/business/services/cloud
208 packages can be updated.
109 updates are security updates.
Last login: Fri Dec 8 06:22:09 2017 from 167.220.255.48
Also, we can use this command to check the new settings: docker-machine env jasonvmm
[root@jasoncli jasonvmm]# docker-machine env jasonvmm
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://13.91.60.237:2376"
export DOCKER_CERT_PATH="/root/.docker/machine/machines/jasonvmm"
export DOCKER_MACHINE_NAME="jasonvmm"
# Run this command to configure your shell:
# eval $(docker-machine env jasonvmm)
Please use docker-machine regenerate-certs VMname to regenerate the certs after changing the IP.
Hope this helps.

Related

Scp connection timed out ubuntu VM

So I'm trying to copy a file from my directory to an Azure Ubuntu VM. SSH works just fine, but the scp command takes a lot of time and then I get this message:
connect to host 10.x.x.x port 22: Connection timed out
lost connection
This is the command I used:
scp -vvv -i .ssh/id_rsa BaltimoreCyberTrustRoot.crt.pem azureuser@10.x.x.x:/var/www/html
• AFAIK, the scp command that you are using to connect to your Ubuntu Azure VM might not be correct. The correct command to copy files from your local machine to your Ubuntu Linux VM is as follows:
scp -r ./tmp/ azureuser@10.xxx.xxx.xxx:/home/file/user/local
In the above command, the SCP connection gets established once the private key is accepted; the files in the local './tmp' directory are then recursively copied to the '/home/file/user/local' directory on the specified Azure Ubuntu VM. Thus, the whole directory tree is copied from the local system to the Azure Ubuntu VM.
• Also, if you want to use the private key in the scp command through SSH, you will have to use the command below to copy files from the local system to the Azure Ubuntu VM:
sudo scp -i ~/.ssh/id_rsa /path/cert.pem azureuser@10.xxx.xxx.xxx:/home/file/user/local
When you run scp with 'sudo' to access a root-owned file, it looks for the identity file 'id_rsa' in '/root/.ssh/' instead of '/home/user/.ssh/'. That is why you have to specify the identity file (private key) in the scp command to connect to the Azure Ubuntu VM and transfer files from the local system to the VM.
Other than this, kindly ensure that port 22 is open in an inbound NSG rule on the Azure Ubuntu VM and that the VM's default page is accessible on ports 80/443 over the public IP address and the assigned Azure FQDN.
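For example, a hedged sketch of adding such an inbound rule with the Azure CLI (the resource group and NSG names here are hypothetical; adjust them to your setup):
az network nsg rule create --resource-group myRG --nsg-name myVmNSG \
  --name Allow-SSH --priority 1000 --direction Inbound --access Allow \
  --protocol Tcp --destination-port-ranges 22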
For more information, kindly refer to the links below:
Can't scp to Azure's VM
https://learn.microsoft.com/en-us/azure/virtual-machines/linux/copy-files-to-linux-vm-using-scp#scp-a-directory-from-a-linux-vm

HDP 2.5 Hortonworks ambari-admin-password-reset missing

I downloaded the sandbox from Hortonworks (CentOS), then tried to follow the tutorial. The ambari-admin-password-reset command seemed to be missing. I also tried to log in with PuTTY; the console asked me to change the password, so I did.
Now the command seems to be there, but I have different passwords for the same user: one for the console and one for PuTTY.
I have tried to find out why the same user 'root' has two different passwords (one for the VirtualBox console and one for PuTTY) that I can log in with. I see different commands on each box. More than that, when I share a folder I can only see it in the VirtualBox console, not in the PuTTY session, which is really frustrating.
How can I make what I see from PuTTY the same as what I see in the VirtualBox console?
I think it is somehow related to TTY, but I am not sure.
EDIT:
Running these commands on the VirtualBox machine gives this output:
grep "^passwd" /etc/nsswitch.conf
OUT: passwd: files sss
grep root /etc/passwd
OUT: root:x:0:0:root:/root:/bin/bash
operator:x:11:0:operator:/root:/sbin/nologin
getent passwd root
OUT: root:x:0:0:root:/root:/bin/bash
EDIT:
I think this is all about docker containers. It seems the machine's port 2222 is the SSH port of the HDP 2.5 container, not of the hosting machine.
Now I get another problem: when running
docker exec sandbox ls
it gets stuck. Any help?
Thanks to all helpers.
So now I have had the time to analyze the sandbox VM and to write it up for other users.
As you stated correctly in your edit of the question, it's the docker container setup of the sandbox which confuses things, with two separate root users:
via ssh root@127.0.0.1 -p 2222 you get into the docker container called "sandbox". This is CentOS release 6.8 (Final), containing all the HDP services, especially the ambari service. The configuration enforces a password change at the first login of the root user. Inside this container you can also execute ambari-admin-password-reset and set a password for the ambari admin there.
via console access you reach the docker host, running CentOS 7.2; here you can log in with the default root password for the VM as found in the HDP docs.
Coming to your sub-question about the hanging docker exec: it seems to be a bug in that specific docker version. If you google it, you will find issues discussing this or similar problems with docker.
So I thought it would be a good idea to just update the host via yum update. However, this turned out to be a difficult path.
yum tried to update the kernel, but complained that there was not enough space on the boot partition.
So I moved the boot partition to the root partition:
edit /etc/fstab and comment out the boot entry
umount /boot
mv /boot
cp -a /boot.org /boot
grub2-mkconfig -o /boot/grub2/grub.cfg
grub2-install /dev/sda
reboot
After that I found out that the docker configuration was broken and docker did not start anymore. In the logs it complained about
"Error starting daemon: error initializing graphdriver: \"/var/lib/docker\" contains other graphdrivers: devicemapper; Please cleanup or explicitly choose storage driver (-s <DRIVER>)"
So I edited /etc/systemd/system/multi-user.target.wants/docker.service and changed the ExecStart setting to:
ExecStart=/usr/bin/dockerd --storage-driver=overlay
After a service docker start and a docker start sandbox, the container worked again; I could log in to the container, and after an ambari-server restart everything worked again.
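As an aside, on hosts where docker reads /etc/docker/daemon.json, the same storage-driver choice can be made there instead of patching the unit file; a sketch (run as root, and treat the driver name as an assumption for this particular sandbox):
cat > /etc/docker/daemon.json <<'EOF'
{ "storage-driver": "overlay" }
EOF
systemctl restart docker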
And now - with the new docker version 1.12.2, docker exec sandbox ls works again.
So to sum up: the docker exec command has a bug in that specific version of the sandbox, but you should think twice before upgrading your sandbox.
I ran into the same issue.
The HDP 2.5 sandbox runs all of its components in a docker container, but commands like docker exec -it sandbox /bin/bash or docker attach sandbox got stuck.
When I ran a simple ps aux, I found several /usr/bin/docker-proxy commands which looked like:
/usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 60000 -container-ip 172.17.0.2 -container-port 60000
They probably forward the HTTP ports of the various UIs of HDP components.
I could ssh into the container IP (here 172.17.0.2), using root/hadoop to authenticate. From there, I could use all the "missing" commands like ambari-admin-password-reset.
$ ssh root@172.17.0.2
... # change password
$ ambari-admin-password-reset
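As a possibly quicker way to find that container IP than reading the docker-proxy process list, docker inspect can print it directly (a sketch, assuming the container is named sandbox as above):
docker inspect --format '{{ .NetworkSettings.IPAddress }}' sandbox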
NB: I am new to docker, so there's probably a better way to deal with this.
I'd like to post the instructions for 3.0.1 here.
I followed the instructions for installing Hortonworks version 3.0.1 here: https://youtu.be/5TJMudSNn9c
After running the docker container, go to your browser and enter "localhost:4200"; that will take you to the in-browser terminal of the container that hosts ambari. Enter "root" as the login and "hadoop" as the password, change the root password, and then enter "ambari-admin-password-reset" to reset the ambari password.
In order to be able to use sandbox-hdp.hortonworks.com, you need to add the line "127.0.0.1 sandbox-hdp.hortonworks.com" at the end of the /private/etc/hosts file on your Mac.
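For example, a one-liner sketch of appending that entry (requires sudo):
echo '127.0.0.1 sandbox-hdp.hortonworks.com' | sudo tee -a /private/etc/hosts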
If the password is rejected as incorrect, you can reset it from recovery mode:
1. Click the power button in the top-right corner, choose Restart from the power-off drop-down, and press the Esc key while the machine boots to get into the recovery menu.
2. Select the advanced options entry and hit Enter.
3. Select recovery mode and hit Enter.
4. Select root to drop into a root shell.
5. Remount the root filesystem read-write and check the user names:
mount -o remount,rw /
ls /home
6. Change the password, substituting your own username:
passwd username
7. Enter the new password twice when prompted.
Hopefully you have changed the password (:

Connecting docker-machine to Azure using the generic driver

I have a Docker-based deployment on Azure. I know that docker-machine has an Azure driver, which can create VMs and generate the certs, etc. But I'd rather use the Azure tools (CLI and portal).
So I created a VM, and installed my public SSH key on it. And now I'd like to connect to it using docker-machine. I add the server, so that I can see it when I do docker-machine ls:
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
default - virtualbox Stopped Unknown
serv - generic Running tcp://XX.XX.XX.XX:2376 Unknown Unable to query docker version: Unable to read TLS config: open /Users/user/.docker/machine/machines/serv/server.pem: no such file or directory
When I try to set the environment variables, I see this:
$ docker-machine env serv
Error checking TLS connection: Error checking and/or regenerating the certs:
There was an error validating certificates for host "XX.XX.XX.XX:2376":
open /Users/user/.docker/machine/machines/serv/server.pem: no such file or directory
You can attempt to regenerate them using 'docker-machine regenerate-certs [name]'.
Be advised that this will trigger a Docker daemon restart which will stop running containers.
When I try to regenerate-certs, I get:
$ docker-machine regenerate-certs serv
Regenerate TLS machine certs? Warning: this is irreversible. (y/n): y
Regenerating TLS certificates
Waiting for SSH to be available...
Detecting the provisioner...
Something went wrong running an SSH command!
command : sudo hostname serv && echo "serv" | sudo tee /etc/hostname
err : exit status 1
output : sudo: no tty present and no askpass program specified
I can SSH to the server fine.
What's the issue here? How can I make it work?
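One hedged guess, based on the "no tty present and no askpass program specified" output: the generic driver runs its provisioning commands through non-interactive sudo, so the SSH user usually needs passwordless sudo on the VM. A sketch of granting it (the user name azureuser is an assumption):
# on the Azure VM; allows non-interactive sudo for docker-machine provisioning
echo 'azureuser ALL=(ALL) NOPASSWD:ALL' | sudo tee /etc/sudoers.d/docker-machine
sudo chmod 440 /etc/sudoers.d/docker-machine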

Dockerfile: Docker build can't download packages: centos->yum, debian/ubuntu->apt-get behind intranet

PROBLEM: Any build, with a Dockerfile depending on centos, ubuntu or debian fails to build.
ENVIRONMENT: I have Mac OS X, running VMware with a guest Ubuntu 14.04, which runs Docker:
mdesales@ubuntu ~ $ sudo docker version
Client version: 1.1.2
Client API version: 1.13
Go version (client): go1.2.1
Git commit (client): d84a070
Server version: 1.1.2
Server API version: 1.13
Go version (server): go1.2.1
Git commit (server): d84a070
BEHAVIOR: Using "docker build" fails to download packages. Here are examples of such Dockerfiles: https://github.com/Krijger/docker-cookbooks/blob/master/jdk8-oracle/Dockerfile, https://github.com/ottenhoff/centos-java/blob/master/Dockerfile
I know that we can run a container with --dns, but this is during the build time.
CENTOS
FROM centos
RUN yum install a b c
UBUNTU
FROM ubuntu
RUN apt-get install a b c
Users have reported that it might be a problem with the DNS configuration; in some of the affected configurations, Google's DNS servers are commented out.
Step 2 : RUN yum install -y curl; yum upgrade -y; yum update -y; yum clean all
---> Running in 5f11b65c87b8
Loaded plugins: fastestmirror
Couldn't resolve host 'mirrorlist.centos.org'
Still the problem persisted... So, most users on #docker on Freenode mentioned that it might be a problem with the DNS configuration... So here's my Ubuntu:
$ sudo cat /etc/resolv.conf
nameserver 127.0.1.1
search localdomain
I tried changing that, same problem...
PROBLEM
Talking to some developers in #docker on Freenode, the problem was clear to everyone: DNS and the environment. The build works just fine on a regular Internet connection at home.
SOLUTION:
This problem occurs in an environment that has a private DNS server, or where the network blocks Google's DNS servers. Even if the docker container can ping 8.8.8.8, the build still needs access to the same private DNS server behind your firewall or data center.
Start the Docker daemon with the --dns switch to point to your private DNS server, just like your host OS is configured. That was found by trial and error.
Details
My Mac OS X host OS had a different DNS configured in its /etc/resolv.conf:
mdesales@Marcello-Work ~ (mac) $ cat /etc/resolv.conf
search corp.my-private-company.net
nameserver 172.18.20.13
nameserver 172.20.100.29
My host might be dropping the packets to Google's IP address 8.8.8.8 while building... I just took those 2 IP addresses and placed them in the Ubuntu docker daemon configuration:
mdesales@ubuntu ~ $ cat /etc/default/docker
...
...
# Use DOCKER_OPTS to modify the daemon startup options.
DOCKER_OPTS="--dns 172.18.20.13 --dns 172.20.100.29 --dns 8.8.8.8"
...
The build now works as expected!
$ sudo ./build.sh
Sending build context to Docker daemon 7.168 kB
Sending build context to Docker daemon
Step 0 : FROM centos
---> b157b77b1a65
Step 1 : MAINTAINER Marcello_deSales@intuit.com
---> Running in 49bc6e233e4c
---> 2a380810ffda
Removing intermediate container 49bc6e233e4c
Step 2 : RUN yum install -y curl; yum upgrade -y; yum update -y; yum clean all
---> Running in 5f11b65c87b8
Loaded plugins: fastestmirror
Determining fastest mirrors
* base: mirror.supremebytes.com
* extras: centos.mirror.ndchost.com
* updates: mirrors.tummy.com
Resolving Dependencies
--> Running transaction check
---> Package systemd.x86_64 0:208-11.el7 will be updated
---> Package systemd.x86_64 0:208-11.el7_0.2 will be an update
---> Package systemd-libs.x86_64 0:208-11.el7 will be updated
---> Package systemd-libs.x86_64 0:208-11.el7_0.2 will be an update
--> Finished Dependency Resolution
Thanks to @BrianF and others who helped in the IRC channel!
Permanent VM Solution - UPDATE JULY 2, 2015
We now have GitHub Enterprise and the CoreOS Enterprise Docker Registry in the mix... So it was important for me to add the corporate DNS servers from the HOST machine in order to get the VM to work as well.
Replacing the guest OS's /etc/resolv.conf with the host's /etc/resolv.conf also resolved the problem (Docker 1.7.0)! I had just created a new VM using Ubuntu 15.04 on VMware Fusion and ran into this problem again...
/etc/resolv.conf BEFORE
~/dev/github/public/stackedit on ⭠ master ⌚ 20:31:02
$ cat /etc/resolv.conf
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
nameserver 127.0.1.1
search localdomain
/etc/resolv.conf AFTER
~/dev/github/public/stackedit on ⭠ master ⌚ 20:56:09
$ cat /etc/resolv.conf
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
search corp.mycompany.net
nameserver 10.180.194.35
nameserver 10.180.194.36
nameserver 192.168.1.1
I had pretty much the same problem. The provided solution didn't help in my case, but it worked as soon as I updated my Dockerfile, adding environment variables for the proxy:
ENV HTTP_PROXY http://<proxy_host>:<port>
ENV HTTPS_PROXY http://<proxy_host>:<port>
ENV http_proxy http://<proxy_host>:<port>
ENV https_proxy http://<proxy_host>:<port>
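On Docker versions with build-argument support, the same proxy settings can also be passed at build time instead of being baked into the image with ENV; a sketch, keeping the placeholder proxy address:
docker build \
  --build-arg http_proxy=http://<proxy_host>:<port> \
  --build-arg https_proxy=http://<proxy_host>:<port> \
  -t myimage .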
It's likely due to your local caching name server listening on 127.0.1.1 which is not accessible from within the container.
Try putting the following into your Dockerfile:
CMD "sh" "-c" "echo nameserver 8.8.8.8 > /etc/resolv.conf"
Also, just adding the nameservers from the host (in my case Mac OS X) to the docker-machine VM solves the problem.
For me the problem was that my ISP blocked Google's DNS (8.8.8.8), which docker uses as a fallback default.
The trick here is to find out your DNS IP and tell docker to use it.
In my case (running Ubuntu 17.04), trying to get this information from /etc/resolv.conf did not work, but I used this command:
nmcli dev show | grep IP4.DNS
Then I took this IP and added it in /etc/default/docker:
DOCKER_OPTS="--dns 192.168.50.1"
Now restart your docker daemon, and try building again.
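A sketch of that restart, depending on your init system:
sudo service docker restart      # upstart/sysvinit
sudo systemctl restart docker    # systemd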
In my case, the issue was that our company's DNS is flawed in a few ways, which requires tampering with /etc/hosts and, for docker, with /etc/docker/daemon.json. That is the file which was hiding the error:
{
"dns": ["10.5...", "10.5...", "10.5..."]
}
I have backed this up and replaced with
{
"dns": ["8.8.8.8", "8.8.4.4"]
}
And it started working. I am looking for a solution that would work in all cases - on our VPN which needs the custom DNS servers as well as home on a normal network.
Note that in modern Linux, /etc/resolv.conf is generated and DNS is managed by systemd-resolved. I am not sure how Docker handles this, but perhaps it could be enough to point it to systemd-resolved's stub resolver at 127.0.0.53.
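If systemd-resolved is in play, a hedged sketch of reading the real upstream servers to hand to docker (resolvectl on current systems; older ones ship systemd-resolve --status instead):
resolvectl status | grep 'DNS Servers'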
Create a local repo mirror - this can also be done as a docker-mirror-packages-repo.
Then run docker build --add-host "archive.ubuntu.com:repo-docker-ip" to have the build process download from your local mirror. That is not only faster but also ensures better reproducibility of your builds.
I am using that for the test suite of docker-systemctl-replacement, which tests compatibility with a number of distros, each with dozens of docker rebuilds.
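A hedged sketch of wiring that together (the mirror container name repo-mirror is hypothetical):
REPO_IP=$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' repo-mirror)
docker build --add-host "archive.ubuntu.com:${REPO_IP}" -t myimage .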

Puppet agent can't find server

I'm new to puppet, but picking it up quickly. Today, I'm running into an issue when trying to run the following:
$ puppet agent --no-daemonize --verbose --onetime
err: Could not request certificate: getaddrinfo: Name or service not known
Exiting; failed to retrieve certificate and waitforcert is disabled
It would appear the agent doesn't know what server to connect to. I could just specify --server on the command line, but that will be of no use to me when this runs as a daemon in production, so instead, I specify the server name in /etc/puppet/puppet.conf like so:
[main]
server = puppet.<my domain>
I do have a DNS entry for puppet.<my domain> and if I dig puppet.<my domain>, I see that the name resolves correctly.
All the puppet documentation I have read states that the agent tries to connect to a puppet master named puppet by default, and that your options are host-file trickery or doing the right thing: create a CNAME in DNS and edit puppet.conf accordingly, which I have done.
So what am I missing? Any help is greatly appreciated!
D'oh! Need to sudo to do this! Then everything works.
I had to use the --server flag:
sudo puppet agent --server=puppet.example.org
I actually had the same error, but I was using the two Learning Puppet VMs and trying to run the 'puppet agent --test' command.
I solved the problem by opening the /etc/hosts file on both the master and the agent VM and editing the line
***.***.***.*** learn.localdomain learn puppet.localdomain puppet
The IP address (the asterisks) was originally some random number. I had to change this number on both VMs so that it was the IP address of the master node.
So I guess for experienced users my advice is to check the /etc/hosts file to make sure that the IP addresses in it for the master and the agent not only match but are the same as the IP address of the master.
For other noobs like me, my advice is to read the documentation more carefully. This was a step in the 'setting up an agent VM' process that I totally missed xD
In my case I was getting the same error, but it was due to the cert which should have been signed for the node on the puppetmaster server.
To check pending certs, run the following:
puppet cert list
"node.domain.com" (SHA256) 8D:E5:8A:2*******"
Sign the cert for the node:
puppet cert sign node.domain.com
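After signing, a quick way to verify is to re-run the agent on the node, e.g.:
puppet agent --test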
Had the same issue today on puppet 2.6 on CentOS 6.4
All I did to resolve the issue was to check the usual stuff, such as hosts and resolv.conf, to ensure they were as expected (compared with a working server), and then:
Removed the /var/lib/puppet directory: rm -rf /var/lib/puppet
Cleared the certificate on the puppet master: puppetca --clean servername
Restarted the network service: service network restart
Re-ran puppet
Even though the resolv.conf was identical to the working server, puppet updated resolv.conf and immediately re-signed the certificate and replaced all the puppet lib files.
Everything was fine after that.
