Kubectl on Rancher Server not working properly - gitlab

It is probably a stupid question, but I could not find anything useful on that topic.
I am following this tutorial to set up an automatic CI/CD pipeline:
https://rancher.com/blog/2018/2018-08-07-cicd-pipeline-k8s-autodevops-rancher-and-gitlab/
I get stuck on the Token part. I get this error:
unable to recognize "http://x.co/rm082018": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused
It seems kubectl is not properly configured. If I call kubectl version I get the following output:
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.2", GitCommit:"66049e3b21efe110454d67df4fa62b08ea79a19b", GitTreeState:"clean", BuildDate:"2019-05-16T16:23:09Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
It seems I would have to copy the admin.conf file into the home directory. However, this file does not exist, since kubeadm is not installed on the Rancher server. Later I tried installing kubeadm myself, calling kubeadm init and copying the resulting admin.conf file.
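For reference, the usual post-kubeadm-init copy of admin.conf looks roughly like this (paths are kubeadm's documented defaults):
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config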
The error is still there.
So my questions are:
How can I fix this?
Do I have to fix this, or can I get the token some other way?
Is the kubectl error normal behaviour, since Rancher should handle all of this on its own?
Thanks in advance for any answers.

The kubectl command output indicates that no kubeconfig was found on your host. You have to do one of the following:
place a kubeconfig named config under ~/.kube/
export an environment variable named KUBECONFIG with the kubeconfig location as its value
use the kubectl command with the --kubeconfig ... argument
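For example, assuming the kubeconfig you downloaded from Rancher is saved at /path/to/kubeconfig (a placeholder path), any of these will work:
mkdir -p ~/.kube && cp /path/to/kubeconfig ~/.kube/config
export KUBECONFIG=/path/to/kubeconfig
kubectl --kubeconfig /path/to/kubeconfig get nodes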
Happy hacking
Regards

Related

Fetch secrets and certificates from AzureKeyVault inside Docker container

I have a .NET Framework console application. Inside this application, I'm fetching secrets and certificates from Key Vault using a tenant ID, client ID and client secret.
The application fetches secrets and certificates properly.
Now I have containerized the application using Docker. After running the image I'm unable to fetch secrets and certificates. I'm getting the error below:
" Retry failed after 4 tries. Retry settings can be adjusted in ClientOptions.Retry. (No such host is known.) (No such host is known.) (No such
host is known.) (No such host is known.)"
To resolve the error, please try the following workarounds:
Check whether your container was set up behind an nginx reverse proxy.
If yes, then try removing the upstream section from the nginx reverse proxy and setting proxy_pass to use the docker-compose service's hostname.
After any change make sure to restart WSL and Docker.
Check whether DNS is resolving the host names successfully; if not, try adding dns entries to your docker-compose.yml file (a fuller sketch follows this list).
dns:
  - 8.8.8.8
If the above doesn't work, try removing the auto-generated values that WSL writes to /etc/resolv.conf and adding a DNS server yourself. To stop WSL from regenerating that file, set the following in /etc/wsl.conf:
[network]
generateResolvConf = false
Then add the DNS server to /etc/resolv.conf:
nameserver 8.8.8.8
Try restarting WSL by running the command below as an Administrator:
Restart-NetAdapter -Name "vEthernet (WSL)"
Try installing a Docker Desktop update as a workaround.
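A minimal docker-compose.yml sketch of the dns workaround mentioned above (the app service name and myapp image are placeholders):
version: "3.8"
services:
  app:
    image: myapp:latest
    dns:
      - 8.8.8.8
      - 1.1.1.1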
For more detail, please refer to the links below:
Getting "Name or service not known (login.microsoftonline.com:443)" regularly, but occasionally it succeeds? · Discussion #3102 · dotnet/dotnet-docker · GitHub
ssl - How to fetch Certificate from Azure Key vault to be used in docker image - Stack Overflow

I am getting an error while trying to connect with node using geth tool

I am getting an error while trying to attach to the quorum node, using the following command:
geth attach --datadir new-node-1/geth.ipc
Then I get this error:
Unable to attach to remote geth: dial unix new-node-1/geth.ipc: connect: no such file or directory
I tried to locate the geth.ipc file, but nothing was there. I guess the file is not being created.
Any suggestions?
You should be running the command like this (without --datadir):
geth attach new-node-1/geth.ipc
If that still doesn't work, make sure you have the correct path for the IPC file. Is new-node-1 definitely the same data directory that was specified to the quorum node when it was started (i.e. with --datadir new-node-1)?
If the path is correct but the geth.ipc file doesn't exist, then the node hasn't managed to start up. Check the log file to see if there were any errors.
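A minimal sketch of the expected flow, assuming the node is started with the same data directory (the actual quorum start-up flags are omitted):
geth --datadir new-node-1 ...        # start the node; this is what creates new-node-1/geth.ipc
ls new-node-1/geth.ipc               # confirm the IPC socket exists once the node is up
geth attach new-node-1/geth.ipc      # attach without --datadir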
Try the command below:
geth attach http://127.0.0.1:8545
It worked for me.

kubectl version returns error

I am trying to install a Kubernetes cluster on CentOS 7.3 servers. After some progress I got stuck installing the CNI plugin. To install the plugin I need to pass a parameter extracted from the "kubectl version" command output. However, the command returns an error when fetching the required information, the server version:
[root@bigdev1 ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.0", GitCommit:"fff5156092b56e6bd60fff75aad4dc9de6b6ef37", GitTreeState:"clean", BuildDate:"2017-03-28T16:36:33Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
Error from server (NotFound): the server could not find the requested resource
Actually I started with the default documentation (https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/) and kubeadm 1.7.3 (and Docker 17) but got stuck on a check:
[root@bigdev1 ~]# kubeadm init --pod-network-cidr=10.244.0.0/16
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.7.4
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks
[preflight] WARNING: docker version is greater than the most recently validated version. Docker version: 17.03.1-ce. Max validated version: 1.12
[preflight] Starting the kubelet service
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated CA certificate and key.
[certificates] Generated API server certificate and key.
[certificates] API Server serving cert is signed for DNS names [bigdev1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.109.20]
[certificates] Generated API server kubelet client certificate and key.
[certificates] Generated service account token signing key and public key.
[certificates] Generated front-proxy CA certificate and key.
[certificates] Generated front-proxy client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[apiclient] Created API client, waiting for the control plane to become ready
(waits here forever)
Then I downgraded Docker to 1.12.6 and Kubernetes to 1.6.0 after modifying the kubeadm config, and I also stopped passing the CIDR parameter to kubeadm init.
I would be glad for any suggestions to clear this issue, or for the output of the command below:
kubectl version | base64 | tr -d '\n'
Thanks in advance.
Not sure which document you're following. I would recommend using kubeadm to configure the cluster.
https://kubernetes.io/docs/setup/independent/install-kubeadm/
This should give you the result of the command:
kubectl version 2>&1 | base64 | tr -d '\n'
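If the CNI plugin being installed is Weave Net (an assumption; the question doesn't name the plugin), that base64 string is what its documented install command embeds in the URL:
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"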

Hadoop 2.2.0 multi-node cluster setup on ec2 - 4 ubuntu 12.04 t2.micro identical instances

I have followed this tutorial to set up a Hadoop 2.2.0 multi-node cluster on Amazon EC2. I have had a number of issues with ssh and scp which I was either able to resolve or work around with the help of articles on Stack Overflow, but unfortunately I could not resolve the latest problem.
I am attaching the core configuration files core-site.xml, hdfs-site.xml etc. I am also attaching a log file with the output dumped when I run the start-dfs.sh command. It is the final step for starting the cluster, and it gives a mix of errors that I don't have a clue what to do with.
So I have 4 nodes created from exactly the same AMI: Ubuntu 12.04 64-bit t2.micro instances with 8 GB of storage.
Namenode
SecondaryNode (SNN)
Slave1
Slave2
The configuration is almost the same as suggested in the tutorial mentioned above.
I have been able to connect with WinSCP and ssh from one instance to the other. I have copied all the configuration files, masters, slaves and .pem files for security purposes, and the instances seem to be accessible from one another.
If someone could please look at the log, config files and .bashrc file and let me know what I am doing wrong.
The same security group HadoopEC2SecurityGroup is used for all the instances. All TCP traffic is allowed and the ssh port is open. A screenshot is in the attached zipped folder. I am able to ssh from the Namenode to the secondary namenode (SNN). The same goes for the slaves, which means that ssh is working, but when I start HDFS everything goes down. The error log is not throwing any useful exceptions either. All the files and screenshots can be found in the zipped folder here.
An excerpt from the error output on the console looks like:
Starting namenodes on [OpenJDK 64-Bit Server VM warning: You have loaded library /usr/local/hadoop/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c ', or link it with '-z noexecstack'.
ec2-54-72-106-167.eu-west-1.compute.amazonaws.com]
You: ssh: Could not resolve hostname you: Name or service not known
have: ssh: Could not resolve hostname have: Name or service not known
loaded: ssh: Could not resolve hostname loaded: Name or service not known
VM: ssh: Could not resolve hostname vm: Name or service not known
library: ssh: Could not resolve hostname library: Name or service not known
Server: ssh: Could not resolve hostname server: Name or service not known
warning:: ssh: Could not resolve hostname warning:: Name or service not known
which: ssh: Could not resolve hostname which: Name or service not known
guard.: ssh: Could not resolve hostname guard.: Name or service not known
have: ssh: Could not resolve hostname have: Name or service not known
might: ssh: Could not resolve hostname might: Name or service not known
.....
Add the following entries to .bashrc where HADOOP_HOME is your hadoop folder:
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
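After adding those entries, reload the shell environment and restart HDFS (a minimal sketch assuming the default Hadoop 2.2.0 layout under $HADOOP_HOME):
source ~/.bashrc
$HADOOP_HOME/sbin/stop-dfs.sh
$HADOOP_HOME/sbin/start-dfs.sh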
Hadoop 2.2.0 : "name or service not known" Warning
hadoop 2.2.0 64-bit installing but cannot start

Puppet agent can't find server

I'm new to puppet, but picking it up quickly. Today, I'm running into an issue when trying to run the following:
$ puppet agent --no-daemonize --verbose --onetime
err: Could not request certificate: getaddrinfo: Name or service not known
Exiting; failed to retrieve certificate and waitforcert is disabled
It would appear the agent doesn't know what server to connect to. I could just specify --server on the command line, but that will be of no use to me when this runs as a daemon in production, so instead, I specify the server name in /etc/puppet/puppet.conf like so:
[main]
server = puppet.<my domain>
I do have a DNS entry for puppet.<my domain> and if I dig puppet.<my domain>, I see that the name resolves correctly.
All the Puppet documentation I have read states that the agent tries to connect to a puppet master at puppet by default, and that your options are hosts-file trickery or doing the right thing: creating a CNAME in DNS and editing puppet.conf accordingly, which I have done.
So what am I missing? Any help is greatly appreciated!
D'oh! Need to sudo to do this! Then everything works.
I had to use the --server flag:
sudo puppet agent --server=puppet.example.org
I actually had the same error, but I was using the two Learning Puppet VMs and trying to run the 'puppet agent --test' command.
I solved the problem by opening the /etc/hosts file on both the master and the agent VM and editing the line
***.***.***.*** learn.localdomain learn puppet.localdomain puppet
The IP address (the asterisks) was originally some random number. I had to change it on both VMs so that it was the IP address of the master node.
So I guess for experienced users my advice is to check the /etc/hosts file to make sure that the IP addresses in there for the master and agent not only match but are the same as the IP address of the master.
For other noobs like me, my advice is to read the documentation more carefully. This was a step in the 'setting up an agent VM' process that I totally missed.
In my case I was getting the same error, but it was because the node's certificate had not yet been signed on the puppetmaster server.
To check pending certs, run the following:
puppet cert list
"node.domain.com" (SHA256) 8D:E5:8A:2*******"
Sign the cert for the node:
puppet cert sign node.domain.com
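Once the cert is signed, re-running the agent on that node should succeed (a sketch; puppet.example.org stands in for your master's hostname):
puppet agent --test --server=puppet.example.org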
Had the same issue today on puppet 2.6 on CentOS 6.4
All I did to resolve the issue was to check the usual stuff, such as hosts and resolv.conf, to ensure they were as expected (compared with a working server), and then:
Removed the /var/lib/puppet directory: rm -rf /var/lib/puppet
Cleared the certificate on the puppet master: puppetca --clean servername
Restarted the network: service network restart
Re-ran puppet
Even though the resolv.conf was identical to the working server, puppet updated resolv.conf and immediately re-signed the certificate and replaced all the puppet lib files.
Everything was fine after that.
