No Internet on Custom Image VM for Azure

I launched an Ubuntu 18.04 VM with Azure. I installed a bunch of stuff that I need. Then, I used the dashboard to create a custom image from this machine. After that, I checked that the image was okay by launching some machines with that image. Everything seemed to be working fine.
Today, I launched a new instance from my custom image. Then I tried to install a few things with apt-get install and got the following error (e.g. for unzip):
sudo: unable to resolve host ABCDEFG: Resource temporarily unavailable
Reading package lists... Done
Building dependency tree
Reading state information... Done
Package unzip is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source
E: Package 'unzip' has no installation candidate
The same thing happens for any package I try to install. After testing some basic things with my repositories, I checked the internet connection with ping, e.g. ping www.google.com, which also fails. I launched a vanilla Ubuntu 18.04 instance and I am not having these problems on that machine.
I have also tried sudo reboot, but no luck. I did notice that when the system boots it shows the following error, which also indicates that something is wrong with the network:
Failed to connect to https://changelogs.ubuntu.com/meta-release-lts. Check your Internet connection or proxy settings
Any help is greatly appreciated.
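For anyone triaging the same symptom, these quick checks separate a DNS failure from a dead network link (standard commands; the symlink expectation matches stock Azure Ubuntu images):
# Raw IP: tests routing without involving DNS
ping -c1 8.8.8.8
# Hostname: tests name resolution
ping -c1 www.google.com
# On systemd-resolved systems this should be a symlink into /run
ls -la /etc/resolv.conf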

So, after some digging around, I found this answer to something similar: https://askubuntu.com/questions/1045278/ubuntu-server-18-04-temporary-failure-in-name-resolution.
I used the following command and the internet started working again:
sudo ln -s ../run/systemd/resolve/stub-resolv.conf /etc/resolv.conf
This is a little different from the answer on Ask Ubuntu because this is an Azure image. First, I noticed that my image was missing resolv.conf in /etc. Using ls -la /etc/resolv.conf on a different Azure image, I saw that it was a symbolic link to ../run/systemd/resolve/stub-resolv.conf, so I created a link that matched this format on my machine and that fixed things.
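If the link goes missing again on a later capture, the same repair can be scripted; a sketch, assuming systemd-resolved is in use as on stock Azure Ubuntu images:
# Recreate the symlink systemd-resolved expects (-f in case a stale
# plain file is sitting at /etc/resolv.conf)
sudo ln -sf ../run/systemd/resolve/stub-resolv.conf /etc/resolv.conf
# Make sure the resolver itself is running, then retest
sudo systemctl restart systemd-resolved
ping -c1 www.google.com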
** EDIT **
It's worth noting that when you deprovision the VM to create the custom image, it does say:
WARNING! The waagent service will be stopped.
WARNING! Cached DHCP leases will be deleted.
WARNING! root password will be disabled. You will not be able to login as root.
WARNING! /etc/resolv.conf will be deleted.
WARNING! xxxx account and entire home directory will be deleted.
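For context, these warnings come from the Azure Linux agent's deprovision step that is run before capturing the image; the standard invocation (per Azure's docs) is:
# Generalize the VM before capture; +user also removes the last
# provisioned user account and its home directory
sudo waagent -deprovision+user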

Related

I can't load any web page by browsers in linux

I can't load any webpage in my browsers (Mozilla Firefox and Chrome).
I also can't run sudo apt update, because /var/lib/apt/lists/lock is locked by process 917.
But my computer does have an internet connection, because messengers like Telegram are working.
And I can run ping -c5 8.8.8.8 and the packets come back fine.
Can someone help me please?
Regarding the issue with apt, you likely have another instance of apt running somewhere. You really don't want to be doing multiple operations on your system's packages at the same time, so apt automatically sets a lock file (/var/lib/apt/lists/lock) to avoid that.
tl;dr: find the other running instance of apt and kill it if appropriate. If you can't find any (which can happen sometimes), delete the /var/lib/apt/lists/lock file yourself (please see https://askubuntu.com/a/335801).
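A minimal sketch of that tl;dr (917 is the PID from the question; substitute whatever ps reports):
# Find running apt/dpkg processes that may hold the lock
ps aux | grep -E '[a]pt|[d]pkg'
# If one is stale and safe to stop, kill it by PID
sudo kill 917
# Only if no apt/dpkg process remains and the lock is clearly stale:
sudo rm /var/lib/apt/lists/lock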

GitLab fails on install and/or reconfigure

For a college assignment I had to configure GitLab on my virtual machine, which is hosted on Google Compute Engine and currently running Ubuntu 20.04.
I tried to install GitLab twice, but the install fails (first it got stuck for at least 5 minutes on unpacking gitlab-ce (13.10.2-ce.0), then it failed).
Reconfiguring gets me the same message but without any context; I don't know where the error is, what is causing it, or how to fix it.
I did research this error, but the only thing I found out is that it's probably related to the config file. The only line in my config file that's not commented out is the external URL, and it has a value, so I have no idea what to do.
I guess something on the machine is messed up. Try the same on a known-good new machine / fresh install.
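Before rebuilding, it may be worth pulling the real error out of the reconfigure logs; a sketch, assuming the Omnibus default paths:
# Re-run reconfigure and keep the full output
sudo gitlab-ctl reconfigure 2>&1 | tee reconfigure.log
# The Chef run logs usually name the failing resource
sudo less /var/log/gitlab/reconfigure/*.log
# Sanity-check the one active setting, external_url
sudo grep -vE '^\s*(#|$)' /etc/gitlab/gitlab.rb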
Ubunut 20.04
Sounds like a fun OS.

How to get Vagrant Homestead to boot using Hyper-V

I was unable to get Homestead to boot using the directions provided here https://laravel.com/docs/5.7/homestead using Hyper-V. The original issue was that the machine would not boot; it would just hang indefinitely. Once I fixed this issue, I encountered two more before I was able to complete the vagrant up command.
I am not 100% sure this is the right place to post this, but I have spent about two weeks off and on trying to solve this issue, and hopefully I can save someone else a little time if they have similar issues. I was able to use Homestead with VirtualBox, but it was extremely inconvenient to not have Hyper-V running on my PC, so I uninstalled VirtualBox and tried to set up Homestead using Hyper-V.
For me the VM would not boot at all. When I looked at it in Hyper-V Manager it was just hung at startup. This turned out to be because the box is set up as Generation 1 with the drive connected as IDE. For me the solution was to create a new Generation 2 VM and connect the provided drive using SCSI. I then disabled Secure Boot and I was able to boot.
Then it failed during the provisioning script while trying to mount the default Vagrant share. I could not figure out how to modify this call, so I ended up disabling it; as far as I can tell, it is not needed for Homestead.
My third issue was not being able to mount any of the user-defined shares in the Homestead.yaml file. Some googling showed that I needed to make this call with no additional parameters, which the script did not seem to provide an option to do. I modified the script and voilà, the vagrant up command completed successfully.
Below are the details of the steps I took. If there is a simpler way to get Vagrant Homestead running using Hyper-V, I would appreciate the advice.
Issue 1: Will not boot
Description: The issue seems to be that it is trying to boot as a Generation 1 VM using the IDE controller. This does not seem to work on my installation of Windows 10 Pro.
Resolution (steps 1-3 are sketched in PowerShell after this list):
1. Created a new VM using Generation 2 and attached the existing "ubuntu-18.04-amd64.vhdx" to it using SCSI.
2. Turned off Secure Boot.
3. Booted this VM and then shut it down.
4. Replaced the virtual machine files in [VagrantInstallFolder]\boxes\laravel-VAGRANTSLASH-homestead\6.4.0\hyperv with the new ones created above.
5. Deleted the newly created VM from Hyper-V.
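For reference, steps 1-3 can also be done from an elevated PowerShell prompt; a sketch, where the VM name and .vhdx path are placeholders:
# Create a Generation 2 VM around the existing Homestead disk;
# Generation 2 attaches the drive via SCSI automatically
New-VM -Name "homestead-gen2" -Generation 2 -MemoryStartupBytes 2GB `
    -VHDPath "C:\path\to\ubuntu-18.04-amd64.vhdx"
# Generation 2 VMs have Secure Boot on by default; the Homestead
# image will not boot until it is disabled
Set-VMFirmware -VMName "homestead-gen2" -EnableSecureBoot Off
# Boot once, shut down, then continue with steps 4 and 5 above
Start-VM -Name "homestead-gen2"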
Issue 2: Will not mount default Vagrant share
Error Message:
==> homestead-7: Machine booted and ready!
No valid IDs were given to the NFS synced folder implementation to
prune. This is an internal bug with Vagrant and an issue should be
filed.
Description: The vagrant up command fails when it attempts to mount the default Vagrant share. I found no way to override the parameters for this call, so it was always trying to mount using NFS, which is not supported on Windows. If it is possible to override this call's settings, that would be the preferable way, but the only way I could figure out to get the provisioning script to continue executing was to disable this share.
Resolution:
1. Modify the scripts\homestead.rb file and add the code below to the Hyper-V config settings section "Configure A Few Hyper-V Settings". This will disable the default file share, but you can still add your own from the Homestead.yaml file after completing issue 3.
# Disable the default Vagrant file share
config.vm.synced_folder ".", "/vagrant", disabled: true
Issue 3: User-defined shares in the Homestead.yaml file still error.
Error Message:
Failed to mount folders in Linux guest. This is usually because
the "vboxsf" file system is not available. Please verify that
the guest additions are properly installed in the guest and
can work properly. The command attempted was:
mount -t cifs -o vers=3,credentials=/etc/smb_creds_vgt-96269f65d23acb279735d26264428995-66f0bd5cbca4d218f5f0b8a5f1712727,uid=1000,gid=1000,nolock,udp,noatime //192.168.1.107/vgt-96269f65d23acb279735d26264428995-66f0bd5cbca4d218f5f0b8a5f1712727 /home/vagrant/code
The error output from the last command was:
mount error(22): Invalid argument
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
Description: The vagrant up command fails when it attempts to mount the user-defined shares in the Homestead.yaml file. The mount seems to pass unneeded parameters to the mount command. We need to override the mount call in the scripts\homestead.rb file so that it passes no extra parameters.
Resolution:
1. In the "Register All Of The Configured Shared Folders" section replace the line below.
Replace
config.vm.synced_folder folder['map'], folder['to'], type: folder['type'] ||= nil, **options
With
config.vm.synced_folder folder['map'], folder['to'], type: "smb"
2. Then run "vagrant up --provider hyperv"
What Vagrant Plugins are installed (vagrant plugin list)?
I was getting the following error:
No valid IDs were given to the NFS synced folder implementation to prune. This is an internal bug with Vagrant and an issue should be filed.
Previously, I'd been using NFS and had the following plugin installed: https://github.com/winnfsd/vagrant-winnfsd.
Once I removed the plugin via vagrant plugin uninstall vagrant-winnfsd, provisioning worked.
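For anyone hitting the same thing, the check-and-remove sequence is just:
# List installed Vagrant plugins; look for vagrant-winnfsd
vagrant plugin list
# Remove the NFS plugin, which conflicts with the Hyper-V provider
vagrant plugin uninstall vagrant-winnfsd
# Re-run provisioning with the Hyper-V provider
vagrant up --provider hyperv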
I had the same issue on Windows 11 and I found something that might help you:
Open Hyper-V Manager on Windows.
You'll find the VM created by the vagrant up command.
Run it from the Manager and log in to the Ubuntu VM.
Try the vagrant up command again inside your project folder.
It should work now!
I hope this helps you.

CentOS 7 netinstall with kickstart fails to get installation source

I'm trying to install CentOS 7 in a VM using a kickstart file. I am using a netinstall version of the ISO.
When I try to put the URL in the kickstart file, it will take a long time to check the installation source, and then fail.
I have checked the ISO, installing successfully without kickstart and using this address for the source:
url --url="http://sunsite.rediris.es/mirror/CentOS/7/os/x86_64/"
However, when using the kickstart file, the install runs and then fails with the error message below:
Error setting up base repository
Even if I manually type it in after it errors out.
Does anyone have any ideas? I have reduced my kickstart file to just that one line and it still shows the same behaviour. I don't have this problem with kickstart using the minimal or full install ISOs.
I'm just learning Linux, so I didn't realize you could switch to another virtual console and monitor the install / run commands simultaneously.
After doing so, I realised it wouldn't resolve names.
My DNS server was dead / not responding. I used kickstart to manually assign another DNS server to the interface, which allowed the install to resolve the URL. This would explain why the install worked with the netinstall ISO on its own, as it was using default settings.
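For reference, DNS can be pinned on the network line of the kickstart file; a sketch with placeholder values (the device name and nameserver address are examples, not taken from the question):
# Assign an explicit DNS server to the install interface
network --bootproto=dhcp --device=eth0 --nameserver=8.8.8.8 --activate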
Hope this helps someone.

Virtualmin Installation

I installed Virtualmin on a RHEL system and a couple of very strange problems have cropped up.
Firstly, the Apache test page now says "powered by CentOS" instead of RHEL. All the files and filesystems are intact, so I am at a loss as to why it would report another version of Linux altogether.
Secondly, my sudo access has been overwritten / removed after installation. It just comes up with a message that XXXX (username) does not have sudo access, etc.
And lastly, trying to access the Virtualmin page on port 10000 just returns an "unable to connect" error. (Since I am locked out of using sudo, I am at a loss as to how to proceed.)
Thank you in advance for your help.
The Apache package we ship is a rebuild of the SRPM from CentOS. The default page is simply an HTML file... it is not "reporting" anything, really, except that you haven't set up any websites yet. On CentOS/RHEL, Apache has to be rebuilt in order to support virtual servers in /home when using suexec. So this is expected behavior and no reason for alarm. We used to ship a custom error page instead (with a Virtualmin logo instead of CentOS), but the patch broke a while back and I never got around to fixing it... we might go back to that next time we roll an Apache update.
Virtualmin did not touch your sudoers file. That problem is unrelated to the Virtualmin installation. (I wrote the install.sh and the virtualmin-base package; I'm 100% certain your sudoers issue is unrelated to Virtualmin). I don't have any guesses about what went wrong there, or how to fix it if you don't have any way to access the machine as root (rebooting into single user mode would be the right thing if you have hardware access or can get access via a KVM from your hosting provider/colo).
We would need to see the last few dozen lines of the install log to know what went wrong with the Virtualmin installation, and why Webmin failed to start.
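Once root access is recovered, a couple of hedged starting points (the log path is where Virtualmin's install.sh normally writes; adjust if yours differs):
# Tail the install log for the actual failure
tail -n 50 /root/virtualmin-install.log
# Check whether Webmin is running and listening on port 10000
/etc/init.d/webmin status
netstat -tlnp | grep 10000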
