There wasn't any cloud-config file in my CoreOS install, so I made one myself, as below:
#cloud-config
hostname: coreos
ssh_authorized_keys:
-ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAQEAgU0+1JMi9jzAiHSTu9GL4eNX0KzP5E5lN/0dczRcLF+uX4NSO9DCUUIlkGDml70aXrIHhawfR/TSz1YEkJeZDwWyRKgNeqTGXax1HncLF9kHaWxn7At34qmfWdu54zvtfhZVOV2FKWMC0A8hizkFY+LPV8rkM1Hjoik2f8FZ491ucy8Lygrtd0ZWDPBp/EyqG90JwHF6lEZanhq/2vVPTJdJtLelpdr0Ouvw132r3ex7tm76nj+T10DOsGntNfNr/VD8Z1UD2sRxG9JgWgVHVjYzfy5ISCQwvbYG6DZG+e33SxZb5Ch9B5h8vCaRgsA1DX1K+rdp5fxCF5h1VkxaMQ== rsa-key-20151214
But it did not work when I tried to log in with PuTTY using the SSH key. PuTTY reported
" server refused our key "
and, after logging in on the console, I saw
" Failed Units: 1
system-cloudinit#usr-share-oem-cloud\x2dconfig.yml.service "
Well, I am confused about this cloud-config.
What should I do to make a correct one that works?
If anyone knows about CoreOS, please help me.
The answer to your question depends on what type of CoreOS system you are running.
Also, from your question it isn't clear how you tried to set your system's cloud config.
If this is a bare-metal install (you used the coreos-install tool to install to a physical system), you should have a cloud config file at /var/lib/coreos-install/user_data; that user_data file is your cloud config here. It should have been created from the cloud-config.yml that was provided when running coreos-install.
For most of the other types of systems (CDROM/USB, PXE, vmWare, etc.) the cloud config file is usually part of the environment and read during every boot.
You can find the locations of the cloud config file for other CoreOS system types here.
If you didn't provide a cloud config during install or in the environment, you can use the following command to load a custom cloud config file:
sudo coreos-cloudinit --from-file=/home/core/cloud-config.yaml
Of course, you need command-line access to do that. In case you don't have console access yet, you can use the coreos.autologin kernel parameter at boot to skip the login prompt on the console.
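For example, assuming you have console access, you could write a corrected file and load it with the command above; note the space after the dash in the key list, and keep the whole key on one line. The path matches the command above, and the key below is truncated, so paste your full public key:
cat > /home/core/cloud-config.yaml <<'EOF'
#cloud-config
hostname: coreos
ssh_authorized_keys:
  # paste your full public key on a single line below
  - ssh-rsa AAAAB3NzaC1yc2E...your-full-public-key... rsa-key-20151214
EOF
sudo coreos-cloudinit --from-file=/home/core/cloud-config.yaml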
You can validate your cloud-config at coreos.com/validate. I'm not sure that's what is failing here, but check it out if you keep running into issues.
The validator suggests this works, but maybe it's the 3 parts of the key?
#cloud-config
hostname: coreos
ssh_authorized_keys: ["ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAQEAgU0+1JMi9jzAiHSTu9GL4eNX0KzP5E5lN/0dczRcLF+uX4NSO9DCUUIlkGDml70aXrIHhawfR/TSz1YEkJeZDwWyRKgNeqTGXax1HncLF9kHaWxn7At34qmfWdu54zvtfhZVOV2FKWMC0A8hizkFY+LPV8rkM1Hjoik2f8FZ491ucy8Lygrtd0ZWDPBp/EyqG90JwHF6lEZanhq/2vVPTJdJtLelpdr0Ouvw132r3ex7tm76nj+T10DOsGntNfNr/VD8Z1UD2sRxG9JgWgVHVjYzfy5ISCQwvbYG6DZG+e33SxZb5Ch9B5h8vCaRgsA1DX1K+rdp5fxCF5h1VkxaMQ== rsa-key-20151214"]
Related
I have referred to many solutions, yet no luck. I have a Linux automation that runs a few gcloud commands with some conditions. I made this script with Node.js, but it is so incredibly slow that I finish the task manually before the script completes its run.
The same goes for the gcloud commands when I connect to a cluster, and for the kubectl commands when I query something.
Please help!!
It could be a DNS config error on the WSL side. I had the same issue today; here's how I fixed it!
1. Checking the (deadly slow) response time
[tbg@~] time kubectl get deployments
No resources found in default namespace.
real 0m10.530s
user 0m0.087s
sys 0m0.043s
2. Checking the WSL/DNS configuration
[tbg@~] cat /etc/wsl.conf
[network]
generateResolvConf=false
[tbg@~] cat /etc/resolv.conf
nameserver XX.XXX.XXX.X
nameserver YYY.YY.YY.YY
nameserver 1.1.1.1
If you see something like that, remove these lines (the generateResolvConf override and the hard-coded nameservers) to get back to automatic resolv.conf generation, then restart WSL (wsl --shutdown).
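If you want to do it from the command line, here is a minimal sketch, assuming you only added those lines and nothing else you want to keep:
# inside the WSL distro: drop the override so WSL regenerates resolv.conf
sudo sed -i '/generateResolvConf/d' /etc/wsl.conf
sudo rm /etc/resolv.conf
# then, from Windows (PowerShell or cmd):
wsl --shutdown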
3. Checking the (fixed !) response time
[tbg@~] time kubectl get deployments
No resources found in default namespace.
real 0m1.212s
user 0m0.151s
sys 0m0.050s
I found out that my resolv.conf configuration was causing that latency by trying to reinstall kubectl with apt and finding apt really slow too.
Right now, access to the /mnt folders in WSL2 is very slow, and by default the entire Windows PATH is added to the Linux $PATH at launch, so any Linux binary that scans $PATH becomes unbearably slow.
To disable this feature, edit the /etc/wsl.conf to add the following section:
[interop]
appendWindowsPath = false
This avoids adding the Windows PATH to the Linux $PATH; the best approach for now is to add the folders you need to $PATH manually.
Terminate the WSL distro (wsl.exe --terminate <distro_name>) to make it take effect immediately, or run wsl.exe --shutdown, then start the terminal again.
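For example, a couple of lines you might add to ~/.bashrc for the Windows tools you actually need; the paths are only illustrative, adjust them to your setup:
# re-add only the Windows folders you really use, instead of the whole Windows PATH
export PATH="$PATH:/mnt/c/Windows/System32"
export PATH="$PATH:/mnt/c/Program Files/Docker/Docker/resources/bin"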
Refer to the stack link for more information.
I am learning to use Jenkins to deploy a .NET 5.0 application on an AWS EC2 server. This is the first time I am using a Linux server and Jenkins for .NET (I'm a lifelong Windows guy), and I am facing an error while trying to publish my artifacts over SSH to the web server.
My setup:
The Jenkins server is an AWS EC2 Linux AMI server.
The web server is also an AWS EC2 Linux AMI server.
My Jenkins is correctly installed and working. I am able to build and run unit test cases without any issues.
For Deploy, I am using 'Publish Over SSH' plugin, and I have followed all steps to configure this plugin as mentioned here https://plugins.jenkins.io/publish-over-ssh/.
However, when I try 'Test Configuration', I get the below error:
Failed to connect or change directory
jenkins.plugins.publish_over.BapPublisherException: Failed to connect and initialize SSH connection. Message: [Failed to connect session for config [WebServer]. Message [Auth fail]]
I did a ping test from Jenkins server to Web Server, and it is a success.
I'm using the .pem key in the 'Key' section of 'Publish over SSH'. This key is the same key I use to SSH into the web server.
The below link suggests many different solutions, but none works in my case.
Jenkins Publish over ssh authentification failed with private key
I was looking at the below link which describes the same problem,
Jenkins publish over SSH failed to change to remote directory
However, in my case I have kept 'Remote Directory' empty. I don't know if I have to specify any directory here. Anyway, I tried creating a new directory under the home directory of user ec2-user, '/home/ec2-user/publish', and then used this path as the Remote Directory, but it still didn't work.
Screenshot of my settings in Jenkins:
I would appreciate it if anyone could point me in the right direction or highlight any mistake in my configuration.
In my case, the following steps solved the problem.
The solution is based on Ubuntu 22.04.
Add these two lines in /etc/ssh/sshd_config:
PubkeyAuthentication yes
PubkeyAcceptedKeyTypes +ssh-rsa
Restart the sshd service:
sudo service sshd restart
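Before re-running 'Test Configuration', you can confirm from the Jenkins server that the key is now accepted; a quick sketch, assuming the same .pem you pasted into the plugin (the key path and server address are placeholders):
# should print "ok" without prompting for a password
ssh -i /path/to/webserver.pem -o IdentitiesOnly=yes ec2-user@<web-server-ip> 'echo ok'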
You might consider the following:
a. From the screenshot you've provided, it seems that you have checked the Use password authentication, or use different key option, which will require you to add your key and password (inputs from these fields will be used to connect to your server via SSH). If you use the same SSH key and passphrase/password on all of your servers, you can uncheck/untick that box and just use the config you have specified above.
b. You might also check if port 22 of your web server allows inbound traffic from the security group where your Jenkins server/EC2 instance is running. See reference here.
c. Also, make sure that the remote directory you have specified exists; otherwise the connection may fail.
Here's the sample config
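For point (b), one way to allow SSH from the Jenkins instance to the web server is a security-group rule like the following sketch; the group IDs are placeholders, and you can do the same thing in the AWS console instead:
# allow inbound TCP 22 on the web server's group from the Jenkins instance's group
aws ec2 authorize-security-group-ingress \
  --group-id sg-webserver-id \
  --protocol tcp --port 22 \
  --source-group sg-jenkins-id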
Trying to set up SonarQube on EC2 using what should be basic install settings.
Set up a standard AWS Linux AMI EC2 instance on an m4.large
SSH into the EC2 instance
Install Java
Set it to use Java 8
wget https://sonarsource.bintray.com/Distribution/sonarqube/sonarqube-6.4.zip
Unzip it into the /etc dir
Run sudo ./sonar.sh start
Instance starts
But when I try to go to the app, it never comes up, whether I use the IPv4 public IP 187.187.87.87:9000 (example, not the real IP) or ec2-134-73-134-114.compute-1.amazonaws.com:9000 (also not the real one, just an example).
Perhaps it is my ignorance or me not configuring something correctly as it pertains to the initial EC2 setup.
If anyone has any ideas, please let me know.
The issue was that SonarQube's default port is 9000, and by default this port is not open in the security group, unless you apply the default security group in which all ports are open (which is not recommended).
As @Issac suggested in the comments, I opened port 9000 to allow incoming requests to SonarQube in the instance's AWS security group settings, which solved the issue.
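Once the rule is in place, a quick way to confirm SonarQube is actually up and reachable (the public IP below is a placeholder):
# on the instance: is SonarQube listening on 9000?
sudo ss -lntp | grep 9000
# from your own machine: does the port answer?
curl -I http://<instance-public-ip>:9000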
You need to have a database and grant permissions to it in SonarQube's sonar.properties file, and you need to open the firewall.
OS: Ubuntu 16.04
Hypervisor: VirtualBox
Network configuration: NAT network with port forwarding to access the VMs through the host IP. I can also ping a VM from another VM.
I am trying to connect my Jenkins app hosted on a VM to my BitBucket server, also on a VM. I followed a tutorial on the internet, but when I enter the address of my git repository, I get this:
Failed to connect to repository : Command "usr/bin/git ls-remote -h http://admin@192.168.6.102:8005/scm/tes/repository-test.git HEAD" returned status code 128:
stdout:
stderr: fatal: unable to access 'http://admin@192.168.6.102:8005/scm/tes/repository-test.git/': The requested URL returned error: 403
So, to be sure, I tried to execute the command in the terminal... and in the terminal it seems to work. I can also push, clone, pull, etc.
In this image you can see that it is true:
Do you have an explanation?
EDIT:
I tried some other things, like using sudo or not, to see if the problem came from permissions, and it seems that it is not the case.
But I see that there is no result when we use the "HEAD" argument.
Do you think that because "HEAD" gives no result, git in Jenkins interprets it as no answer and returns the 403 error?
EDIT 2:
I found that on the web: http://jenkins-ci.361315.n4.nabble.com/Jenkins-GIT-ls-remote-error-td4646903.html
The guy has the same problem but in a different way; I will try to allocate more RAM to see if it does the trick.
There could be many possible problems, but you are getting 403 - Access Forbidden, which indicates some problem with permissions. I would suggest checking these common mistakes first:
a) trying https instead of http - my SCM only uses https,
b) checking whether admin is correct - the SCM by default uses scmadmin.
Here I run the exact same command twice.
The first time I used the proxy configuration which I need to access the internet, and the second time I set the mandatory server to "none".
So there is a problem with the damn proxy.
I thought that the proxy was not used in a NAT connection with VirtualBox...
I found the solution.
I had to reinstall Jenkins to have a user named "jenkins" with its own home directory.
I don't know if it is related or not, but I configured my BitBucket server to use only HTTPS with a self-signed certificate (I work on a LAN).
My problem was linked to my proxy settings.
I disabled all my proxy settings in Linux, so I was able to run, from the terminal, the command that didn't work in Jenkins.
When I logged in with sudo su jenkins, the commands also worked.
I found out that in the home directory of the jenkins user there was a "proxy.xml" file. I opened it and saw my old proxy settings.
I deleted all the content with vim, saved, restarted, and the error was gone.
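A sketch of the same cleanup from the command line, assuming a default package install with the Jenkins home at /var/lib/jenkins (adjust the path if yours differs):
sudo cat /var/lib/jenkins/proxy.xml    # confirm it still holds the old proxy settings
sudo rm /var/lib/jenkins/proxy.xml     # or empty it with an editor, as described above
sudo systemctl restart jenkins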
There can be a git version mismatch.
I would suggest you update git; maybe it will resolve your issue.
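A quick way to check and update it, assuming the Ubuntu 16.04 host from the question:
git --version
sudo apt-get update && sudo apt-get install --only-upgrade git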
What would be the best way to run commands on remote servers? I am thinking of using SSH, but is there a better way?
I use Red Hat Linux, and I want to run the command on one of the servers, specify which other servers I want my command to run on, and have it do the exact same thing on the servers specified. Puppet alone couldn't help, but I might be able to combine some other tool with Puppet to do the job for me.
It seems you are able to log on to the other servers without entering a password. I assume this is based on SSH keys, as described here: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Deployment_Guide/s2-ssh-configuration-keypairs.html
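In case it isn't set up yet, the usual one-time setup looks like this (the user and host names are placeholders):
ssh-keygen -t rsa                    # accept the defaults, optionally set a passphrase
ssh-copy-id username@remote-server   # repeat once for each target server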
You say another script is producing a list of servers. You can now use the following simple script to loop over the list:
for server in `./server-list-script`; do
echo $server:
ssh username@$server mkdir /etc/dir/test123
done >logfile 2>&1
The file "logfile" will collect the output. I'm pretty sure Puppet is able to do this as well.
Your solution will almost definitely end up involving ssh in some capacity.
You may want something to help manage the execution of commands on multiple servers; ansible is a great choice for something like this.
For example, if I want to install libvirt on a bunch of servers and make sure libvirtd is running, I might pass a configuration like this to ansible-playbook:
- hosts: all
  tasks:
    - yum:
        name: libvirt
        state: installed
    - service:
        name: libvirtd
        state: started
        enabled: true
This would ssh to all of the servers in my "inventory" (a file -- or command -- that provides ansible with a list of servers), install the libvirt package, start libvirtd, and then arrange for the service to start automatically at boot.
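For reference, the inventory can be as simple as a text file with one host per line, and you run the playbook against it like this (the host names and the playbook file name are just examples):
# inventory
server1.example.com
server2.example.com
# run the playbook above against those hosts
ansible-playbook -i inventory libvirt.yml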
Alternatively, if I want to run puppet apply on a bunch of servers, I could just use the ansible command to run an ad-hoc command without requiring a configuration file:
ansible all -m command -a 'puppet apply'