I deployed CoreDNS and etcd on a single machine for testing (with etcd as the backend endpoint). The CoreDNS configuration is in coredns.conf, shown below. The etcd data is as follows:
./etcdctl ls /skydns/com/mytest
/skydns/com/mytest/a
/skydns/com/mytest/b
./etcdctl get /skydns/com/mytest/a
{"host":"1.1.1.1"}
./etcdctl get /skydns/com/mytest/b
{"host":"1.1.1.1"}
Then I restarted CoreDNS and ran dig @127.0.0.1 -p 1053 a.mytest.com for a test; the output is as follows:
(screenshot: dig output)
The log output is as follows:
(screenshot: CoreDNS log output)
coredns.conf is as follows:
(screenshot: coredns.conf)
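Since coredns.conf is only attached as a screenshot, here is a minimal sketch of what a Corefile for the etcd plugin, serving mytest.com on port 1053, typically looks like; the path and endpoint values are assumptions and have to match the actual setup:
mytest.com:1053 {
    etcd {
        # must match the prefix used for the keys above (/skydns/com/mytest/...)
        path /skydns
        # etcd endpoint; adjust to where etcd is actually listening
        endpoint http://127.0.0.1:2379
    }
    log
    errors
}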
So why doesn't it return 1.1.1.1?
I have a .NET Framework console application. Inside this application, I'm fetching secrets and certificates from Key Vault using a tenant ID, client ID, and client secret.
The application fetches secrets and certificates properly.
Now I have containerized the application using Docker. After running the image I'm unable to fetch secrets and certificates. I'm getting the error below:
" Retry failed after 4 tries. Retry settings can be adjusted in ClientOptions.Retry. (No such host is known.) (No such host is known.) (No such
host is known.) (No such host is known.)"
To resolve the error, please try the following workarounds:
Check whether your container was set up behind an nginx reverse proxy.
If yes, then try removing the upstream section from the nginx reverse proxy and set proxy_pass to use docker-compose service's hostname.
After any change make sure to restart WSL and Docker.
Check whether DNS is resolving the host names successfully; if not, try adding the below to your docker-compose.yml file (see the fuller sketch after this list).
dns:
- 8.8.8.8
If the above doesn't work, try removing the values auto-generated by WSL in /etc/resolv.conf and add a DNS entry as below. To keep WSL from regenerating the file, add the following to /etc/wsl.conf:
[network]
generateResolvConf = false
Then in /etc/resolv.conf add:
nameserver 8.8.8.8
Try restarting WSL by running the below command as an administrator:
Restart-NetAdapter -Name "vEthernet (WSL)"
Try installing a Docker Desktop update as a workaround.
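For reference, a minimal sketch of where the dns setting sits in a docker-compose.yml; the service name and image are placeholders, not taken from the original setup:
version: "3.8"
services:
  myapp:                  # placeholder service name
    image: myapp:latest   # placeholder image
    dns:
      - 8.8.8.8           # public resolver so hosts like login.microsoftonline.com can be resolved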
For more detail, please refer to the links below:
Getting "Name or service not known (login.microsoftonline.com:443)" regularly, but occasionally it succeeds? · Discussion #3102 · dotnet/dotnet-docker · GitHub
ssl - How to fetch Certificate from Azure Key vault to be used in docker image - Stack Overflow
I am using Red Hat OpenShift 4.4.17 deployed in Azure.
I logged in to OpenShift as an administrator.
I have a Docker image locally; now I need to push it to the OpenShift internal Docker registry.
I am using the below command:
docker login -u <user_name> -p `oc whoami -t` image-registry.openshift-image-registry.svc:5000
I am getting this error:
Error response from daemon: Get https://image-registry.openshift-image-registry.svc:5000/v2/: dial tcp: lookup image-registry.openshift-image-registry.svc: no such host
What can I try to resolve this?
Please see this route output:
$ oc get route -n openshift-image-registry
NAME            HOST/PORT                                  PATH   SERVICES         PORT    TERMINATION   WILDCARD
default-route   default-route-openshift-image-registry.          image-registry   <all>   reencrypt     None
image-registry.openshift-image-registry.svc:5000 cannot be resolved from outside the OpenShift cluster, because it is the internal registry service name.
So you should access the internal registry through the Route hostname of the registry in order to do the docker login. Refer to "Exposing a secure registry manually" if the internal registry has not been exposed.
// Expose the internal registry externally using a Route.
$ oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{"spec":{"defaultRoute":true}}' --type=merge
// Verify the internal registry Route hostname.
$ oc get route -n openshift-image-registry
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
default-route default-route-openshift-image-registry.apps.clustername.basedomain image-registry <all> reencrypt None
// Try to login using the internal registry Route hostname.
$ docker login -u <user_name> -p $(oc whoami -t) default-route-openshift-image-registry.apps.clustername.basedomain
Here is my test evidence using podman.
First of all, you should place the trusted CA of your Router wildcard certificate on the client host where the docker or podman client is executed.
# podman login -u admin -p $(oc whoami -t) default-route-openshift-image-registry.apps.<clustername>.<basedomain>
Login Succeeded!
Additionally, if you face an "x509: certificate signed by unknown authority" error message, you should either place the Router trusted CA on your host, or use "--tls-verify=false" in the podman case (or the equivalent option in the docker case) instead.
# podman login -u admin -p $(oc whoami -t) default-route-openshift-image-registry.apps.<clustername>.<basedomain>
Error: error authenticating creds for "default-route-openshift-image-registry.apps.<clustername>.<basedomain>": pinging docker registry returned: Get https://default-route-openshift-image-registry.apps.<clustername>.<basedomain>/v2/: x509: certificate signed by unknown authority
# podman login --tls-verify=false -u admin -p $(oc whoami -t) default-route-openshift-image-registry.apps.<clustername>.<basedomain>
Login Succeeded!
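Once the login succeeds, the remaining step from the question is the actual push. A hedged sketch, where the project (namespace) and image name are placeholders:
// Tag the local image with the registry Route hostname, a target project and an image name (all placeholders).
$ docker tag myimage:latest default-route-openshift-image-registry.apps.<clustername>.<basedomain>/<project>/myimage:latest
// Push it to the internal registry; the logged-in user needs push rights in <project>.
$ docker push default-route-openshift-image-registry.apps.<clustername>.<basedomain>/<project>/myimage:latest
// To avoid --tls-verify=false / insecure-registry settings, the Router CA can instead be placed at
// /etc/docker/certs.d/<registry-route-hostname>/ca.crt (or /etc/containers/certs.d/... for podman).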
I'm trying to register a GitLab runner on my company's remote CentOS 7 Linux server. GitLab itself is working fine, but when I try to register a runner like this:
gitlab-runner register
Please enter the gitlab-ci coordinator URL (e.g. https://gitlab.com/): https://testgitlab.mycompany.com/
Please enter the gitlab-ci token for this runner: $MyToken
Please enter the gitlab-ci description for this runner: [testgitlab.mycompany.com]: testing
Please enter the gitlab-ci tags for this runner (comma separated): test
But as a result I'm getting this error:
ERROR: Registering runner... failed
runner=UZqkfhj status=couldn't execute POST against https://testgitlab.mycompany.com/api/v4/runners: Post
https://testgitlab.mycompany.com/api/v4/runners: dial tcp:
lookup testgitlab.mycompany.com on 192.168.1.123:53: no such host
PANIC: Perhaps you are having network problems
My /etc/resolv.conf looks like this:
search mycompany.com
nameserver 192.168.1.123
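As a quick check, it helps to query the nameserver from /etc/resolv.conf directly; a hedged sketch (the public-resolver comparison only makes sense if the GitLab host is meant to be externally resolvable):
# Ask the configured nameserver directly for the GitLab host.
dig @192.168.1.123 testgitlab.mycompany.com +short
# Compare against a public resolver (may return nothing for purely internal names).
dig @8.8.8.8 testgitlab.mycompany.com +short
# nslookup works too if dig is not installed.
nslookup testgitlab.mycompany.com 192.168.1.123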
I am working on an embedded Linux device with 3 different Linux partitions. The end user selects which partition to boot from. The root file system is read-only. There is a 4th partition that is mounted read-write. I need all instances of Linux to register the same name with the DNS server (i.e. use the same host name). The host name is not known when the file system is created and is assigned later, because each device needs a unique host name.
What I have done is create a file on the read-write partition that will get overwritten at a later date. Then I changed /etc/hostname to be a symlink to that file. I know that the file cannot be read until the read-write partition has been mounted, and I believe that is my issue. If I make /etc/hostname a normal file, then whatever is specified in that file works fine. Changing /etc/hosts does not seem to do anything.
The desired result is to control the name registered with the DNS server when the WiFi connects. The only way I have found to control this is through the host name. The /etc/hosts file does not seem to affect it. If the host name file is unreadable or not set then Linux defaults it to "localhost" and does not register anything useful with the DNS.
The WiFi is enabled through rfkill by an application that is run after boot. Running something in a script before the WiFi is enabled is a possible solution. I have not been able to successfully change what is registered with the DNS by changing the hostname on the command line before the WiFi is enabled. I can get the hostname changed but what is registered with DNS does not change.
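For clarity, a sketch of the symlink approach described above; the /data mount point and file name are assumptions:
# The read-write partition (assumed to be mounted at /data) holds the real hostname file,
# which gets written with the device's unique name at provisioning time.
echo "SerialNumberGoesHere" > /data/hostname
# /etc/hostname on the read-only rootfs is created as a symlink to it when the image is built
# (in the build this targets the staging rootfs rather than the running system).
ln -sf /data/hostname /etc/hostname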
Info:
The Linux Kernel is 3.14.60 and is a yocto based build.
systemd manages services on boot.
Current /etc/hosts
127.0.0.1 ABC123.localdomain ABC123
Current /etc/hostname
SerialNumberGoesHere
Here are all the ways to see the host name after boot:
>hostname
localhost
>hostnamectl status
Static hostname: SerialNumberGoesHere
Transient hostname: localhost
Icon name: computer
Machine ID: 4cdac8e5dce348d2bfc133dd393f6172
Boot ID: 9b8c9da934e748fc86606c4a24f57f9e
Operating System: Linux
Kernel: Linux 3.14.60+g02d9429
Architecture: arm
>uname -n
localhost
>cat /proc/sys/kernel/hostname
localhost
>sysctl kernel.hostname
kernel.hostname = localhost
As you can see "hostnamectl status" picks up the correct Static hostname, but the kernel did not.
On the Device
>nslookup 192.168.12.238
Server: 192.168.12.6
Address 1: 192.168.12.6 dc1.hq.com
Name: 192.168.12.238
Address 1: 192.168.12.238 linux.local
On another Computer (Ping 192.168.12.238 works)
>nslookup 192.168.12.238
Server: 127.0.1.1
Address: 127.0.1.1#53
** server can't find 238.12.168.192.in-addr.arpa: NXDOMAIN
If I change the hostname symlink to a real file this is the desired result:
>hostname
SerialNumberGoesHere
>hostnamectl status
Static hostname: SerialNumberGoesHere
Icon name: computer
Machine ID: 4cdac8e5dce348d2bfc133dd393f6172
Boot ID: ed760b42b7ae414881bc2d9352a8bb82
Operating System: ARANZ Sugar-Free Silhouette Star M2 (fido)
Kernel: Linux 3.14.60+g02d9429
Architecture: arm
>uname -n
SerialNumberGoesHere
>cat /proc/sys/kernel/hostname
SerialNumberGoesHere
>sysctl kernel.hostname
kernel.hostname = SerialNumberGoesHere
On the Device
> nslookup 192.168.12.238
Server: 192.168.12.7
Address 1: 192.168.12.7 dc1.hq.com
Name: 192.168.12.238
Address 1: 192.168.12.238 serialnumbergoeshere.hq.com
On another Computer
> nslookup 192.168.12.238
Server: 127.0.1.1
Address: 127.0.1.1#53
238.12.168.192.in-addr.arpa name = serialnumbergoeshere.hq.com.
Update Jan 10, 2017
I found the answer: you must restart the network service after running the hostname command.
systemctl restart systemd-networkd.service
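Putting the update together, a minimal sketch of the sequence (assuming the symlinked /etc/hostname is readable by this point):
# Re-read the hostname from the now-readable /etc/hostname and apply it.
hostname -F /etc/hostname
# Restart networkd so the new name is used when registering with DNS.
systemctl restart systemd-networkd.service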
I was looking for a similar functionality.
My first thought was also to symlink /etc/hostname to a file on another partition, and make sure the partition is mounted before the following boot message appears: systemd[1]: Set hostname to <myhostname>.
However, after digging a bit deeper, it turns out it isn't that simple. From what I found elsewhere:
I would have expected that the initramfs packs your /etc/hostname with the actual hostname; perhaps you need to regenerate the initramfs?
systemd is started early in the initramfs, where it sets the hostname from the initramfs, but after switching root to the actual system it re-executes and sets the hostname again, so you should end up with the proper hostname anyway.
So I ended up with the solution Daniel Schepler provided.
Here the systemd service:
[Unit]
Description=Update hostname to what we want
Before=systemd-networkd.target
After=mountdata.service
RequiresMountsFor=/usr
DefaultDependencies=no
[Service]
Type=simple
ExecStart=/usr/bin/mycustomhostnamescript.sh
[Install]
WantedBy=multi-user.target
The script:
...
# FILE_SERIAL is set earlier in the script (elided above) and points to the file
# on the read-write partition that holds the device's unique name.
hostn=$(cat "$FILE_SERIAL")
echo "setting hostname=$hostn"
# Persist the name and apply it immediately.
echo "$hostn" > /etc/hostname
hostname "$hostn"
...
It's important that you don't use hostnamectl within the script, as it will fail because its dependencies are not loaded yet!
The solution works without needing to restart the networking service, since it hasn't been started yet at that point. It also works with only a single reboot.
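To round this out, a hedged sketch of installing and enabling the unit; the unit file name set-hostname.service is an assumption, and on a read-only rootfs these files would normally be installed when the image is built (e.g. via a Yocto recipe) rather than at runtime:
# Install the unit and the script, then enable the unit so it runs on every boot.
cp set-hostname.service /etc/systemd/system/
chmod +x /usr/bin/mycustomhostnamescript.sh
systemctl daemon-reload
systemctl enable set-hostname.service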
For mender I was able to use
[Unit]
Description=Update hostname to what we want
Before=systemd-networkd.target
After=mountdata.service
RequiresMountsFor=/data
DefaultDependencies=no
[Service]
Type=simple
ExecStart=/usr/bin/hostname -F /etc/hostname
[Install]
WantedBy=multi-user.target
I changed RequiresMountsFor to point at my data partition, which contains the text file for the hostname.
I have followed this tutorial to set up a Hadoop 2.2.0 multi-node cluster on Amazon EC2. I have had a number of issues with ssh and scp which I was either able to resolve or work around with the help of articles on Stack Overflow, but unfortunately I could not resolve the latest problem.
I am attaching the core configuration files core-site.xml, hdfs-site.xml etc. I am also attaching a log file which is a dump of the output when I run the start-dfs.sh command. It is the final step for starting the cluster, and it gives a mix of errors that I don't have a clue what to do with.
So I have 4 nodes, all created from exactly the same AMI: Ubuntu 12.04 64-bit t2.micro instances with 8 GB.
Namenode
SecondaryNode (SNN)
Slave1
Slave2
The configuration is almost the same as suggested in the tutorial mentioned above.
I have been able to connect with WinSCP and ssh from one instance to the other. I have copied all the configuration files, masters, slaves, and .pem files, and the instances seem to be accessible from one another.
If someone could please look at the log, config files, and .bashrc file and let me know what I am doing wrong.
The same security group, HadoopEC2SecurityGroup, is used for all the instances. All TCP traffic is allowed and the ssh port is open (screenshot in the attached zipped folder). I am able to ssh from the Namenode to the secondary namenode (SNN), and the same goes for the slaves, which means ssh is working; but when I start HDFS everything goes down. The error log is not throwing any useful exceptions either. All the files and screenshots can be found in the zipped folder here.
An excerpt from the error output on the console looks like this:
Starting namenodes on [OpenJDK 64-Bit Server VM warning: You have loaded library /usr/local/hadoop/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c ', or link it with '-z noexecstack'.
ec2-54-72-106-167.eu-west-1.compute.amazonaws.com]
You: ssh: Could not resolve hostname you: Name or service not known
have: ssh: Could not resolve hostname have: Name or service not known
loaded: ssh: Could not resolve hostname loaded: Name or service not known
VM: ssh: Could not resolve hostname vm: Name or service not known
library: ssh: Could not resolve hostname library: Name or service not known
Server: ssh: Could not resolve hostname server: Name or service not known
warning:: ssh: Could not resolve hostname warning:: Name or service not known
which: ssh: Could not resolve hostname which: Name or service not known
guard.: ssh: Could not resolve hostname guard.: Name or service not known
have: ssh: Could not resolve hostname have: Name or service not known
might: ssh: Could not resolve hostname might: Name or service not known
.....
Add the following entries to .bashrc where HADOOP_HOME is your hadoop folder:
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
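After adding them, reload the environment and retry the start script; a minimal sketch, assuming the default ~/.bashrc location and that HADOOP_HOME is already exported:
# Reload the updated environment in the current shell.
source ~/.bashrc
# Retry starting HDFS; once the native-library warning is suppressed, its text is no longer
# mis-parsed as a list of hostnames, so the "Could not resolve hostname" errors should go away.
$HADOOP_HOME/sbin/start-dfs.sh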
Hadoop 2.2.0 : "name or service not known" Warning
hadoop 2.2.0 64-bit installing but cannot start