Proxmox cloud-init: SSH will not restart after the IP is changed on a cloned template

I am facing a problem with the Proxmox cloud-init configuration.
I installed CentOS 8 Stream and installed cloud-init in it.
I then converted the VM to a template. Later, when I clone that template into a VM and assign the IP in cloud-init, SSH is not accessible and the service does not start. When I try to restart SSH, it fails with the error shown in the image below.
This is the error log I am getting.

Related

This project was scheduled for deletion, but failed with the following message: Failed to open TCP connection to <host>:5000

The project has this message:
This project was scheduled for deletion, but failed with the following
message: Failed to open TCP connection to host.ru:5000 (Connection
refused - connect(2) for "host.ru" port 5000)
Can you tell me what this might be related to? Why does GitLab use a different port for deletion?
(the default port is 30443)
How do I delete this message?
That's a lot of questions, but I really don't understand what this message is; clearly it is an error :)
Gitlab is located in docker.
P.S. I am now checking whether the port is open.
UPDATE: If you don't need the container registry, disable it. This solved the problem.
Issue:
GitLab runs in one Docker container and the registry runs in a separate Docker container. The GitLab container cannot resolve the DNS name of the registry and reports:
"this project was scheduled for deletion, but failed with the following message: failed to open tcp connection to registry:5000 (getaddrinfo: name or service not known)"
Solution:
1. Get the IP address of the registry container:
docker inspect (registry container name)
E.g. docker inspect registry
2. Log in to the GitLab container:
E.g. docker exec -it gitlab bash
3. Edit the hosts file:
vi /etc/hosts
4. Add the IP-address-to-DNS-name mapping for the registry container to the hosts file:
172.xx.x.1 registry
This resolves the issue; no restarts are required.
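The manual steps above can be sketched as a small script. The container names `registry` and `gitlab` are assumptions taken from the example; adjust them to your setup:

```shell
#!/bin/sh
# Print the internal IP of a container (first attached network).
registry_ip() {
  docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' "$1"
}

# Append an "<ip> registry" line to the GitLab container's /etc/hosts.
add_registry_host() {
  ip=$(registry_ip registry)
  docker exec gitlab sh -c "echo '$ip registry' >> /etc/hosts"
}
```

Note that the mapping is lost when the GitLab container is recreated; attaching both containers to the same user-defined Docker network gives you the same name resolution permanently.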

DNS problem when creating VM from deprovisioned Linux VM image in DevTest Lab

I tried many times. It looks like a bug in Azure DevTest Labs.
Here are the steps to reproduce the problem:
Create a VM from Ubuntu Linux 18.04 LTS
Create a custom image from this VM with "Run deprovision on virtual machine" enabled.
Create a new VM from this new image.
SSH to this VM.
Running host www.google.com will fail.
Are these steps wrong?
There is no wrong step in what you have done. The reason host www.google.com fails is the second step: deprovisioning (the command waagent -deprovision+user) removes machine-specific configuration from the image, including the DNS settings.
When you create a VM from the deprovisioned custom image, there is no resolv.conf file in it, so host www.google.com fails. The solution is to create a resolv.conf file in the /etc directory.
The content of resolv.conf differs for VMs in different locations. For example, it will look like this if your VM is in the Japan East region:
nameserver 127.0.0.53
search bbuuanmggeiengfg01a443drie.lx.internal.cloudapp.net
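A minimal sketch of the fix, written as a function so the paths can be overridden; the defaults assume Ubuntu 18.04, where /etc/resolv.conf is normally a symlink to the systemd-resolved stub file:

```shell
#!/bin/sh
# Recreate /etc/resolv.conf as a symlink to the systemd-resolved stub.
# $1: stub file, $2: resolv.conf path (defaults are the 18.04 locations).
restore_resolv_conf() {
  ln -sf "${1:-/run/systemd/resolve/stub-resolv.conf}" "${2:-/etc/resolv.conf}"
}
```

Alternatively, write the nameserver and search lines shown above into /etc/resolv.conf by hand.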

Connecting to Azure Container Services from Windows 8.1 Docker

I've been following this tutorial to set up an Azure container service. I can successfully connect to the master load balancer via putty. However, I'm having trouble connecting to the Azure container via docker.
~ docker -H 192.168.33.400:2375 ps -a
error during connect: Get https://192.168.33.400:2375/v1.30/containers/json?all=1: dial tcp 192.168.33.400:2375: connectex: No connection could be made because the target machine actively refused it.
I've also tried
~ docker -H 127.0.0.1:2375 ps -a
This causes the docker terminal to hang forever.
192.168.33.400 is my docker machine ip.
My guess is I haven't set up the tunneling correctly, and this has something to do with how Docker runs on Windows 8.1 (via a VM).
I've created an environment variable called DOCKER_HOST with a value of 2375. I've also tried changing the value to 192.168.33.400:2375.
I've tried the following tunnels in putty,
1. L2375 192.168.33.400:2375
2. L2375 127.0.0.1:2375
3. L22375 192.168.33.400:2375
4. L22375 127.0.0.1:2375 (as shown in the video)
Does anyone have any ideas/suggestions?
Here are some screenshots of the commands I ran:
We can follow these steps to set up the tunnel:
1.Add Azure container service FQDN to Putty:
2.Add private key(PPK) to Putty:
3.Add tunnel information to Putty:
Then we can use cmd to test it:
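The same tunnel can be opened with the OpenSSH client instead of Putty. The key path and the user@FQDN value below are placeholders for your own ACS master details:

```shell
#!/bin/sh
# Forward local port 2375 to the Docker endpoint on the ACS master.
# $1: private key file, $2: user@master-fqdn
open_docker_tunnel() {
  ssh -i "$1" -fNL 2375:localhost:2375 "$2"
}

# After the tunnel is up, point the local client at the local end:
# docker -H tcp://127.0.0.1:2375 ps -a
```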

docker build fails on a cloud VM

I have an Ubuntu 16.04 (Xenial) running inside an Azure VM. I have followed the instructions to install Docker and all seems fine and dandy.
One of the things that I need to do when I trigger docker run is to pass --net=host, which allows me to run apt-get update and other internet-dependent commands within the container.
The problem comes in when I try to trigger docker build based on an existing Ubuntu image. It fails:
The problem here is that there is no way to pass --net=host to the build command. I see that there are issues open on the Docker GitHub (#20987, #10324) but no clear resolution.
There is an existing answer on Stack Overflow that covers the scenario I want, but that doesn't work within a cloud VM.
Any thoughts on what might be happening?
UPDATE 1:
Here is the docker version output:
Client:
Version: 1.12.0
API version: 1.24
Go version: go1.6.3
Git commit: 8eab29e
Built: Thu Jul 28 22:11:10 2016
OS/Arch: linux/amd64
Server:
Version: 1.12.0
API version: 1.24
Go version: go1.6.3
Git commit: 8eab29e
Built: Thu Jul 28 22:11:10 2016
OS/Arch: linux/amd64
UPDATE 2:
Here is the output from docker network ls:
NETWORK ID     NAME     DRIVER   SCOPE
aa69fa066700   bridge   bridge   local
1bd082a62ab3   host     host     local
629eacc3b77e   none     null     local
Another approach would be to let docker-machine provision the VM for you and see if that works. There is a driver for Azure, so you can set your subscription ID on a local Docker client (Windows or Linux) and follow the instructions to provision a new VM with Docker installed. It will also set up your local environment variables so you can talk to the Docker VM instance remotely. After setup, running docker ps or docker run locally runs the commands as if you were running them on the VM. Example:
#Name at end should be all lower case or it will fail.
docker-machine create --driver azure --azure-subscription-id <omitted> --azure-image canonical:ubuntuserver:16.04.0-LTS:16.04.201608150 --azure-size Standard_A0 azureubuntu
#Partial output, see docker-machine resource group in Azure portal
Running pre-create checks...
(azureubuntu) Completed machine pre-create checks.
Creating machine...
(azureubuntu) Querying existing resource group. name="docker-machine"
(azureubuntu) Resource group "docker-machine" already exists.
(azureubuntu) Configuring availability set. name="docker-machine"
(azureubuntu) Configuring network security group. location="westus" name="azureubuntu-firewall"
(azureubuntu) Querying if virtual network already exists. name="docker-machine-vnet" location="westus"
(azureubuntu) Configuring subnet. vnet="docker-machine-vnet" cidr="192.168.0.0/16" name="docker-machine"
(azureubuntu) Creating public IP address. name="azureubuntu-ip" static=false
(azureubuntu) Creating network interface. name="azureubuntu-nic"
(azureubuntu) Creating virtual machine. osImage="canonical:ubuntuserver:16.04.0-LTS:16.04.201608150" name="azureubuntu" location="westus" size="Standard_A0" username="docker-user"
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with ubuntu(systemd)...
Installing Docker...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Checking connection to Docker...
Docker is up and running!
To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: docker-machine env azureubuntu
#Set environment using PowerShell (or login to the new VM) and see containers on remote host
docker-machine env azureubuntu | Invoke-Expression
docker info
docker network inspect bridge
#Build a local docker project using the remote VM
docker build MyProject
docker images
#To clean up the Azure resources for a machine (you can create multiple, also check docker-machine resource group in Azure portal)
docker-machine rm azureubuntu
Best I can tell, that is working fine. I was able to build a debian:wheezy Dockerfile that uses apt-get on the Azure VM without any issues. This should also allow containers to run using the default bridged network instead of the host network.
According to "I can't get Docker containers to access the internet?", running sudo systemctl restart docker might help, as might enabling net.ipv4.ip_forward = 1 or disabling the firewall.
Also, you may need to update the DNS servers in /etc/resolv.conf on the VM.
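The suggestions above can be checked directly on the VM. The commands assume a systemd-based Ubuntu; the build flag at the end requires a Docker release newer than the 1.12 shown in the question (17.06 or later):

```shell
#!/bin/sh
# 1 means IP forwarding is enabled; containers need it for outbound traffic.
cat /proc/sys/net/ipv4/ip_forward

# If it printed 0, enable forwarding and restart the daemon:
# sudo sysctl -w net.ipv4.ip_forward=1
# sudo systemctl restart docker

# Newer Docker releases also accept a network flag at build time,
# which addresses the original --net=host problem directly:
# docker build --network=host -t myimage .
```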

Couldn't resolve host 'bucket.s3.amazonaws.com' on virtual machine

I am serving static PDFs from S3. I have a production environment on AWS and a development environment on my local Vagrant virtual machine. Everything was working fine until today.
When I try to access S3 files from my Vagrant development environment, I get
Couldn't resolve host 'bucket.s3.amazonaws.com'
I can still access the files as normal in my AWS production environment. The code to access is the exact same.
Other notes that may or may not be relevant:
The VM was reset this morning. It has not worked since.
I've tried to flush the DNS -> ipconfig /flushdns
I've cleared the browser cache
Thanks for any help.
What do you mean by "the VM was reset"? I've run into problems when I forget to put the VM into suspend and then change networks on the host machine; I can't resolve anything after that. To fix it, I just did vagrant suspend && vagrant up to have it refresh its network.
I faced a similar issue when trying to fetch S3 files using Vagrant on OpenStack, and solved it by configuring DNS on my OpenStack subnet.
(Meaning the DNS issue wasn't related to the Vagrant VM configuration; it was related to the OpenStack configuration. You can verify this by running cat /etc/resolv.conf from your Vagrant VM.)
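A quick way to narrow down where resolution breaks, run from inside the Vagrant VM (the bucket hostname below is a placeholder for your own bucket):

```shell
#!/bin/sh
# Report whether a hostname resolves via the system resolver (NSS) --
# the same path the application follows when it says "Couldn't resolve host".
check_dns() {
  if getent hosts "$1" > /dev/null; then echo ok; else echo fail; fi
}

# check_dns bucket.s3.amazonaws.com   # "fail" here points at resolv.conf
# cat /etc/resolv.conf                # inspect the configured nameservers
```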
