I'm trying to install CoreOS on VMware Fusion. I followed the steps in the official guide, but when I append "coreos.autologin" to the kernel parameters at boot, it shows the error: cannot find the command 'load_coreos'.
So, what can I do?
I would guess you're probably hitting a bug.
You might try one of the primary methods documented for configuring CoreOS on VMware using Cloud Config parameters. This is the primary documented way of accessing a CoreOS system remotely, and it uses password-less login with ssh keys:
using Cloud Config on a config-drive
using the VMware Guestinfo interface to set Cloud Config parameters
Whichever you choose, you should at least configure the ssh_authorized_keys Cloud Config parameter. This is the ssh key that will be used to log in to the CoreOS system as the core user.
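For example, a minimal cloud-config for this (the key below is a placeholder; substitute your own public key) looks roughly like:

#cloud-config
ssh_authorized_keys:
  - "ssh-rsa AAAAB3Nza...your-public-key... you@your-laptop"

With the Guestinfo method this content is passed through the guestinfo.coreos.config.data property (base64-encoded, with guestinfo.coreos.config.data.encoding set to base64); with a config-drive it goes in openstack/latest/user_data on a volume labelled config-2. Those property names are from memory, so double-check them against the CoreOS VMware documentation.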
Try the following line instead of the load_coreos command:
linux$suf /coreos/vmlinuz-a mount.usr=PARTUUID=$usr_uuid coreos.autologin
So I am converting a custom-built .iso to .raw. I deployed a VM on OpenStack using this .raw, but I am unable to ssh into the machine.
Using the GUI console I was able to log in to this OpenStack VM with a username and password. Once logged in, I restarted the cloud-init service and that resolved the ssh issue; I can now ssh into the machine just fine.
Now the question is: how do I make sure that enabling and restarting the cloud-init service happens as part of the first boot when deploying VMs on OpenStack?
I know I can pass a script when using the UI to deploy VMs on the OpenStack website, but the requirement is that this be part of the image itself. That is, I should be able to deploy a VM from the .raw alone, with the enabling and starting of the cloud-init service baked into the .raw image.
I am new to Linux and IT in general. Any suggestions are much appreciated.
Welcome to SO.
The key point is probably that cloud-init (an open-source package originating from Ubuntu that is available on most Linux distributions) was installed or configured incorrectly. You should make sure that cloud-init works in the .iso image before converting it to the .raw format.
how do I make sure that enabling and restarting...
If you want to verify it, you can create an instance with the --user-data parameter and check whether the user data is applied on first boot.
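As a rough sketch (image and instance names are placeholders, and this assumes a systemd-based guest plus the libguestfs virt-customize tool), you could enable the cloud-init units inside the image before uploading it, then boot a test instance with user data:

# enable the cloud-init services inside the .raw image
virt-customize -a custom.raw --run-command 'systemctl enable cloud-init-local cloud-init cloud-config cloud-final'

# boot a test instance with a small cloud-config to confirm cloud-init runs on first boot
openstack server create --image custom-image --flavor m1.small --user-data test-user-data.yaml test-vm

If that test instance comes up with ssh working, VMs deployed from the image should no longer need the manual restart.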
Suggest a solution, if one exists.
There are 20 empty bare-metal servers. I have to go to the IPMI console and manually attach the image file to start the OS installation.
Question: are there any solutions to automate this process?
Since you tagged this question with "OpenStack", you must have heard of Ironic.
If the thought of installing OpenStack just to automatically install servers frightens you, look up Cobbler. It was used by the now-defunct products Helion OpenStack and SUSE OpenStack Cloud to set up clouds.
Ubuntu uses MAAS for this purpose.
This is not a complete list.
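All of these tools ultimately network-boot the servers for you; even without one of them, the IPMI side on its own can be scripted. A rough illustration (hostnames and credentials are placeholders, and option support varies between BMCs) that forces a PXE boot and power-cycles each machine:

for host in server{01..20}-ipmi; do
  # set the next boot device to PXE, then reboot the server
  ipmitool -I lanplus -H "$host" -U admin -P secret chassis bootdev pxe
  ipmitool -I lanplus -H "$host" -U admin -P secret power cycle
done

The servers then pick up their installer from whatever PXE/TFTP service a tool like Ironic, Cobbler, or MAAS provides.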
I made a VM in Azure in order to create an image.
After I created the Linux VM (Red Hat), I stopped the VM and made an image from it.
But creating a VM from that image failed.
Both cases had the same problem:
- 1st case: I didn't install anything.
- 2nd case: I installed some software and created an ssh key (RSA).
If I execute the command 'sudo waagent -deprovision+user', there is no error.
BUT my ssh key disappears, so the VMs created from the image cannot connect to each other, which means that I cannot build a cluster using Ambari.
Is there any way to solve this problem?
This is the error I got when creating a VM from the image failed:
--------error----
Provisioning failed. OS Provisioning for VM 'master0' did not finish in the allotted time. However, the VM guest agent was detected running. This suggests the guest OS has not been properly prepared to be used as a VM image (with CreateOption=FromImage). To resolve this issue, either use the VHD as is with CreateOption=Attach or prepare it properly for use as an image:
* Instructions for Windows: https://azure.microsoft.com/documentation/articles/virtual-machines-windows-upload-image/
* Instructions for Linux: https://azure.microsoft.com/documentation/articles/virtual-machines-linux-capture-image/
OSProvisioningTimedOut
Before you create an image, you should execute sudo waagent -deprovision+user. If you don't, you will get this error.
For your scenario, you could set Provisioning.RegenerateSshHostKeyPair=n in /etc/waagent.conf. According to the official documentation:
deprovision: Attempt to clean the system and make it suitable for
re-provisioning. This operation deletes the following:
All SSH host keys (if Provisioning.RegenerateSshHostKeyPair is 'y' in
the configuration file)
If that does not work for you, I suggest adding a public key to your VMs by using the Azure Portal.
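Putting it together, the capture flow could look roughly like this (resource group, VM, and image names are placeholders, and the Azure CLI commands here are just one way to do the capture):

# on the source VM: keep the SSH host keys, then deprovision
# (first edit /etc/waagent.conf so that Provisioning.RegenerateSshHostKeyPair=n)
sudo waagent -deprovision+user

# from your workstation: deallocate, generalize, and capture the image
az vm deallocate --resource-group myGroup --name myVM
az vm generalize --resource-group myGroup --name myVM
az image create --resource-group myGroup --name myImage --source myVM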
After installing the MapR sandbox on my laptop, how do I practice the sample exercises on it? Where can I find the instructions?
Thank You.
Venkat
Once you have started the VM you can connect to it using ssh and do most of the work from that session.
If you have not changed anything in the configuration, the sandbox is accessible using a local ssh connection (NAT) on port 2222, so connect to it as follows:
VirtualBox:
ssh mapr@localhost -p 2222
You should have all the instructions about running the sandbox on either platform:
VirtualBox
VMware
I do not know which exercises you want to do; you can find all the tutorials here:
https://www.mapr.com/products/mapr-sandbox-hadoop/tutorials/
Once you have set up the MapR sandbox on your laptop, you should check whether the node is working properly using:
maprcli node list
Look for the health status and check which services are running.
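If the default output is too wide to read, you can narrow it down to a few columns; the exact column and node names can differ between MapR versions, so treat the following as an example rather than a guaranteed recipe:

# show just the hostname, configured services, and health per node
maprcli node list -columns hostname,svc,health

# list the services and their state on the sandbox node (hostname is a placeholder)
maprcli service list -node maprdemo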
After that you can try running your own MapReduce programs.
The O'Reilly book "Hadoop: The Definitive Guide" is a good way to start learning about Hadoop, MapR, and the other ecosystem components provided with this distributed system.
There are other tutorials available on the net that you can choose from.
We have a Jenkins server that uses the SSH plugin to configure SSH remote hosts within the global Jenkins configuration. These ssh connections use a public/private key for authentication to the remote host.
We then use these configured SSH remote hosts in the build step "Execute shell script on remote host using ssh" (I believe this is also part of the SSH plugin) in a number of jobs.
The problem I'm having is that any job using the "Execute shell script on remote host using ssh" build step must run on a Windows slave, since I haven't found a way to put in some sort of relative path to the keyfile.
On windows the file would be located at: C:\Users\<username>\.ssh\
On linux the file would be located at: /home/<username>/.ssh/
I've tried many iterations of using system environment variables, setting environment variables on the node configuration page and using these as part of the keyfile path without any luck.
Am I missing something? Is there a better way to handle it? There must be a way to manage keyfile locations and differences between ssh remote hosts across slaves.
Unfortunately, I believe there isn't a way to specify a relative path — the keyfile path configured must be the same on every build slave. Far from ideal, I know.
I'm not sure how Windows would handle it, but perhaps something like /ssh/whatever.key would work if you were to place the file at c:\ssh\whatever.key and /ssh/whatever.key on Windows and Linux machines, respectively.
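For example (the key filename below is a placeholder; use whatever your jobs expect), the single Jenkins-side path /ssh/whatever.key would then resolve on a Linux slave after something like:

# copy the private key to the slave-independent location (may need root to create /ssh)
mkdir -p /ssh && cp /home/<username>/.ssh/id_rsa /ssh/whatever.key

and on a Windows slave after copying the same key to C:\ssh\whatever.key.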
In any case, the plugin has since been modified to use the Jenkins Credentials plugin, which allows you to manage username/password or private key-based credentials from the Jenkins UI, without having to place files on disk.
However, although this has been integrated into the SSH plugin, there has not yet been a new release containing this functionality, but it looks like it should be coming "soon".
So if the workaround doesn't work, you can try to:
Wait for a new release
Post on the jenkinsci-users list to ask about a new release
Download and install a recent build of the plugin
(though I would be careful to back up the existing job config before trying this; or try it on a separate Jenkins instance)