Connecting docker-machine to Azure using the generic driver

I have a Docker-based deployment on Azure. I know that docker-machine has an Azure driver, which can create VMs and generate the certs, etc. But I'd rather use the Azure tools (CLI and portal).
So I created a VM and installed my public SSH key on it. Now I'd like to connect to it using docker-machine. I added the server, so that I can see it when I run docker-machine ls:
NAME      ACTIVE   DRIVER       STATE     URL                      SWARM   DOCKER    ERRORS
default   -        virtualbox   Stopped                                    Unknown
serv      -        generic      Running   tcp://XX.XX.XX.XX:2376           Unknown   Unable to query docker version: Unable to read TLS config: open /Users/user/.docker/machine/machines/serv/server.pem: no such file or directory
When I try to set the environment variables, I see this:
$ docker-machine env serv
Error checking TLS connection: Error checking and/or regenerating the certs:
There was an error validating certificates for host "XX.XX.XX.XX:2376":
open /Users/user/.docker/machine/machines/serv/server.pem: no such file or directory
You can attempt to regenerate them using 'docker-machine regenerate-certs [name]'.
Be advised that this will trigger a Docker daemon restart which will stop running containers.
When I try to run regenerate-certs, I get:
$ docker-machine regenerate-certs serv
Regenerate TLS machine certs? Warning: this is irreversible. (y/n): y
Regenerating TLS certificates
Waiting for SSH to be available...
Detecting the provisioner...
Something went wrong running an SSH command!
command : sudo hostname serv && echo "serv" | sudo tee /etc/hostname
err : exit status 1
output : sudo: no tty present and no askpass program specified
I can SSH to the server fine.
What's the issue here? How can I make it work?
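The "sudo: no tty present and no askpass program specified" error usually means the SSH user cannot run sudo without a password, which docker-machine's generic driver requires for provisioning. A minimal sketch of one common fix, run on the VM, assuming the SSH user is named azureuser (a placeholder):
# Allow the SSH user to run sudo non-interactively (needed by the generic driver)
echo 'azureuser ALL=(ALL) NOPASSWD:ALL' | sudo tee /etc/sudoers.d/azureuser
sudo chmod 0440 /etc/sudoers.d/azureuser
After that, docker-machine regenerate-certs serv should be able to run its provisioning commands over SSH.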

Related

SCP connection timed out to an Azure Ubuntu VM

I'm trying to copy a file from my directory to an Azure Ubuntu VM. SSH works just fine, but the scp command takes a long time and then I get this message:
connect to host 10.x.x.x port 22: Connection timed out
lost connection
This is the command I used:
scp -vvv -i .ssh/id_rsa BaltimoreCyberTrustRoot.crt.pem azureuser@10.x.x.x:/var/www/html
• AFAIK, the scp command you are using might not be correct. The correct command to copy files from your local machine to your Ubuntu Linux VM is as follows:
scp -r ./tmp/ azureuser@10.xxx.xxx.xxx:/home/file/user/local
In the above command, the SCP connection is established after the private key is accepted, and the files in the local '/tmp' directory are recursively copied into the '/home/file/user/local' directory on the Azure Ubuntu VM. Thus, the whole directory tree is copied from the local system to the VM.
• Also, if you want to pass the private key explicitly to scp over SSH, use the command below to copy files from the local system to the Azure Ubuntu VM:
sudo scp -i ~/.ssh/id_rsa /path/cert.pem azureuser@10.xxx.xxx.xxx:/home/file/user/local
When you run scp under sudo to access a root-owned file, it looks for the identity file id_rsa in '/root/.ssh/' instead of '/home/user/.ssh/'. That's why you have to specify the identity file (private key) explicitly in the scp command.
Other than this, ensure that port 22 is open in an inbound NSG rule on the Azure Ubuntu VM, and that the VM's default page is accessible on ports 80/443 over the public IP address and the assigned Azure FQDN.
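If port 22 turns out to be blocked, a minimal sketch of opening it with the Azure CLI, assuming a resource group named myResourceGroup and a VM named myVM (placeholder names):
az vm open-port --resource-group myResourceGroup --name myVM --port 22 --priority 100
This creates an inbound NSG rule that allows SSH traffic; adjust --priority if it collides with an existing rule.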
For more information, refer to the links below:
Can't scp to Azure's VM
https://learn.microsoft.com/en-us/azure/virtual-machines/linux/copy-files-to-linux-vm-using-scp#scp-a-directory-from-a-linux-vm

Connecting to an Azure Linux VM (Ubuntu) from PowerShell throws a 'Host key verification failed' error

I tried to connect to an Azure Linux VM with Ubuntu installed from https://shell.azure.com/bash:
ssh username@ipaddress
The above command throws the error Permission denied (publickey).
I created an SSH public key and added it to the VM while creating the Azure Linux VM, following the article below.
https://learn.microsoft.com/en-us/azure/virtual-machines/linux/quick-create-portal
But I am still facing the Permission denied issue.
Also, I tried to run a Bolt command on the Azure Linux VM remotely from PowerShell on another Windows machine.
I got the error below:
Host key verification failed for '10.20.30.40': fingerprint
SHA256:mssgkeghbfnb9883yygebwndjhk is unknown for '10.20.30.40'
How do I fix these issues?
Permission denied (publickey) means that your public key is not in the authorized_keys file. Copy the public key manually to that user's ~/.ssh/authorized_keys file, or use ssh-copy-id which comes with OpenSSH.
Also, make sure you're SSH'ing as the right user with ssh user@host.
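A minimal sketch of both fixes from a machine that can already authenticate, assuming the VM user is azureuser and the address is 10.20.30.40 (placeholders taken from the question):
# Install your public key in the remote user's authorized_keys
ssh-copy-id -i ~/.ssh/id_rsa.pub azureuser@10.20.30.40
# Pre-populate known_hosts to avoid the 'Host key verification failed' error
ssh-keyscan -H 10.20.30.40 >> ~/.ssh/known_hosts
Note that ssh-copy-id needs some already-working login (e.g. a password); if none is enabled, reset the key through the Azure portal or az vm user update instead.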

Change the URL of a docker-machine

I created a machine via docker-machine create -d azure --azure-static-public-ip. But then I intentionally changed the public IP address of that VM. Since then, I cannot run docker-machine ssh or any other docker-machine command; it seems to still send requests to the previous public IP. How can I point docker-machine at the new IP? I tried docker-machine regenerate-certs and even changing the config.json, but nothing happened.
The only fix I have found is reverting the VM to its previous public IP.
Changing the IP in "config.json" should be fine. For example, if I had to change the IP of my default docker-machine, I would go here:
/Users/arne/.docker/machine/machines/default/config.json
Adjust the IP and run
docker-machine regenerate-certs myVM
This should work.
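A minimal sketch of the same edit from the command line, assuming the machine is named default, the new address is 13.91.60.237 (a placeholder), and the driver stores the address in an IPAddress field (true for the generic driver; field names can differ per driver):
jq '.Driver.IPAddress = "13.91.60.237"' ~/.docker/machine/machines/default/config.json > /tmp/config.json \
  && mv /tmp/config.json ~/.docker/machine/machines/default/config.json
docker-machine regenerate-certs default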
Do you mean that when you run docker-machine ssh you get this error:
Error checking TLS connection: Error checking and/or regenerating the certs:
There was an error validating certificates for host "13.91.60.237:2376":
x509: certificate is valid for 40.112.218.127, not 13.91.60.237
You can attempt to regenerate them using 'docker-machine regenerate-certs [name]'.
Be advised that this will trigger a Docker daemon restart which might stop running containers.
In my test lab, my first IP address was 40.112.218.127; after I changed it to 13.91.60.237, I got this error.
Then I used this command to regenerate the certs: docker-machine regenerate-certs jasonvmm, like this:
[root@jasoncli jasonvmm]# docker-machine regenerate-certs jasonvmm
Regenerate TLS machine certs? Warning: this is irreversible. (y/n): y
Regenerating TLS certificates
Waiting for SSH to be available...
Detecting the provisioner...
Installing Docker...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
[root@jasoncli jasonvmm]# docker-machine ssh jasonvmm
Welcome to Ubuntu 16.04.1 LTS (GNU/Linux 4.4.0-47-generic x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
Get cloud support with Ubuntu Advantage Cloud Guest:
http://www.ubuntu.com/business/services/cloud
208 packages can be updated.
109 updates are security updates.
Last login: Fri Dec 8 06:22:09 2017 from 167.220.255.48
Also, we can use this command to check the new settings: docker-machine env jasonvmm
[root@jasoncli jasonvmm]# docker-machine env jasonvmm
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://13.91.60.237:2376"
export DOCKER_CERT_PATH="/root/.docker/machine/machines/jasonvmm"
export DOCKER_MACHINE_NAME="jasonvmm"
# Run this command to configure your shell:
# eval $(docker-machine env jasonvmm)
Please use docker-machine regenerate-certs VMname to regenerate them.
Hope this helps.
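To confirm the machine is reachable after regenerating, a quick check (a sketch, using the machine name from above):
eval $(docker-machine env jasonvmm)
docker info
If docker info prints the daemon details without a TLS error, the new IP is wired up correctly.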

How do I use SSH in a Jenkins pipeline?

I have some Jenkins jobs defined using a Jenkins Pipeline Model Definition, which build NPM projects. I use Docker containers to build these projects (using a common image with just Node.js + npm + yarn).
The results of the builds are contained in the dist/ folder that I zipped using a zip pipeline command.
I want to copy this ZIP file to another server using SSH/SCP (with private key authentication). My private key is added to the Jenkins environment (credentials manager), but when I use Docker containers, an SSH connection cannot be established.
I tried to add agent { label 'master' } to use the master Jenkins node for the file transfer, but it seems to create a clean workspace with a fresh Git fetch and without my built files.
After I tried the SSH Agent Plugin, I have this output:
Identity added: /srv/jenkins3/workspace/myjob-TFD@tmp/private_key_370451445598243031.key (rsa w/o comment)
[ssh-agent] Started.
[myjob-TFD] Running shell script
+ scp -r dist test@myremotehost:/var/www/xxx
$ docker exec bfda17664965b14281eef8670b34f83e0ff60218b04cfa56ba3c0ab23d94d035 env SSH_AGENT_PID=1424 SSH_AUTH_SOCK=/tmp/ssh-k658r0O76Yqb/agent.1419 ssh-agent -k
unset SSH_AUTH_SOCK;
unset SSH_AGENT_PID;
echo Agent pid 1424 killed;
[ssh-agent] Stopped.
Host key verification failed.
lost connection
How do I add a remote host as authorized?
I had a similar issue. I did not use the label 'master', and I found that the file transfer works across slaves when I do it like this:
Step 1 - Create SSH keys on the remote host server and add the public key to its authorized_keys
Step 2 - Create a credential in Jenkins from those SSH keys, using the private key from the remote host
Use the SSH agent plugin:
stage('Deploy') {
    steps {
        sshagent(credentials: ['use-the-id-from-credential-generated-by-jenkins']) {
            sh 'ssh -o StrictHostKeyChecking=no user@hostname.com uptime'
            sh 'ssh -v user@hostname.com'
            sh 'scp ./source/filename user@hostname.com:/remotehost/target'
        }
    }
}
Use the SSH agent plugin:
SSH Agent Plugin
When using this plugin you can use the global credentials.
To add a remote host to known_hosts and hopefully fix your error, try to manually SSH from the Jenkins host to the target host as the Jenkins user.
Get on the host where Jenkins is installed. Type
sudo su jenkins
Now use ssh or scp like
ssh username@server
You should be prompted like this:
The authenticity of host 'server (ip)' can't be established.
ECDSA key fingerprint is SHA256:some-weird-string.
Are you sure you want to continue connecting (yes/no)?
Type yes. The server will be permanently added as a known host. Don't even bother passing a password, just Ctrl + C and try running a Jenkins job.
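If an interactive first login is not practical, a non-interactive sketch that achieves the same thing (hostname.com is a placeholder):
sudo su jenkins -c 'ssh-keyscan -H hostname.com >> ~/.ssh/known_hosts'
This appends the host's key to the Jenkins user's known_hosts, so later ssh and scp calls won't prompt for confirmation.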
Like @haschibaschi recommends, I also use the ssh-agent plugin. I need to use my personal UID credentials on the remote machine, because it doesn't have a Jenkins UID account. The code looks like this (using, for example, my personal UID "myuid" and remote server hostname "non_jenkins_svr"):
sshagent(['e4fbd939-914a-41ed-92d9-8eededfb9243']) {
    // 'myuid@' is required so scp copies from UID jenkins to UID myuid
    sh "scp $WORKSPACE/example.txt myuid@non_jenkins_svr:${dest_dir}"
}
The ID e4fbd939-914a-41ed-92d9-8eededfb9243 was generated by the Jenkins credentials manager after I created a global domain credentials entry.
After creating the credentials entry, the ID is found under the "ID" column on the credentials page. When creating the entry, I selected type 'SSH Username with private key' ('Kind' field), and copied the RSA private key I had created for this purpose under the myuid account on host non_jenkins_svr without a passphrase.

How do I remove default ssh host from ssh configuration?

I used to connect to Amazon Web Services using the ssh command and an application.pem key. Now when I try to connect to other platforms such as GitHub, my SSH client looks for the same application.pem key and tries to connect to AWS. How do I connect to GitHub or change the default host and key configuration? I am using an Ubuntu 13.10 system, and the following is my ssh output.
pranav@pranav-SVF15318SNW:~/.ssh$ ssh
Warning: Identity file application.pem not accessible: No such file or directory.
You need the identity file to log in to the box. Use the command:
ssh -i (identity_file) username@hostname
This worked for me. Write just the filename (without any slashes), unlike the Amazon EC2 tutorial, which asks you to enter:
ssh -i /path/key_pair.pem ec2-user@public_dns_name
Also check the permissions on the key file.
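Since a stray default identity usually comes from a global IdentityFile line in ~/.ssh/config, a minimal sketch of scoping keys per host instead (host names and paths are placeholders):
Host aws-box
    HostName ec2-xx-xx-xx-xx.compute-1.amazonaws.com
    User ec2-user
    IdentityFile ~/.ssh/application.pem
Host github.com
    User git
    IdentityFile ~/.ssh/id_rsa
With per-Host blocks, ssh aws-box uses the .pem key, while connections to GitHub fall back to your regular key.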
