I'm building a continuous integration pipeline with Jenkins to deploy microservices on a Kubernetes cluster. I'm using a minikube installation.
I have an error in the Kubernetes agent configuration. I created a secret-text credential to allow the agent to connect to my K8s cluster, but when I test it I get this error:
Error testing connection https://<my_k8s_cluster_ip>:8443: java.net.SocketTimeoutException: connect timed out
I don't understand what the problem is.
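To narrow down the connect timed out, a minimal reachability check from the Jenkins host looks like this (a diagnostic sketch; <my_k8s_cluster_ip> is the address configured in the plugin):
# Run on the Jenkins host; -k skips TLS verification, -m 5 caps the wait at 5 seconds
curl -k -m 5 https://<my_k8s_cluster_ip>:8443/version
A timeout here would confirm the problem is network reachability rather than the credential itself.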
K8s agent config
minikube config:
root@minikube:~# kubectl config view --minify=true
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /root/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 16 Feb 2021 23:57:36 CET
        provider: minikube.sigs.k8s.io
        version: v1.17.1
      name: cluster_info
    server: https://<minikube_ip>:8443
  name: minikube
contexts:
- context:
    cluster: minikube
    extensions:
    - extension:
        last-update: Tue, 16 Feb 2021 23:57:36 CET
        provider: minikube.sigs.k8s.io
        version: v1.17.1
      name: context_info
    namespace: bargo
    user: minikube
  name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: minikube
  user:
    client-certificate: /root/.minikube/profiles/minikube/client.crt
    client-key: /root/.minikube/profiles/minikube/client.key
Secrets:
root@minikube:~# kubectl get secrets
NAME                  TYPE                                  DATA   AGE
db-user-pass          Opaque                                2      2d18h
default-token-4f7lh   kubernetes.io/service-account-token   3      3d
jenkins-token-4kdgr   kubernetes.io/service-account-token   3      2d18h
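For reference, the value behind a secret-text credential for the Kubernetes plugin is typically the decoded service-account token; a sketch of extracting it, assuming the jenkins-token-4kdgr secret above is the one bound to the Jenkins service account:
# Decode the service-account token stored in the secret
kubectl get secret jenkins-token-4kdgr -o jsonpath='{.data.token}' | base64 --decode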
Listening ports:
root@minikube:~# sudo netstat -tulpn | grep LISTEN
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN 17207/systemd-resol
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 633/sshd: /usr/sbin
tcp 0 0 127.0.0.1:49153 0.0.0.0:* LISTEN 26838/docker-proxy
tcp 0 0 127.0.0.1:49154 0.0.0.0:* LISTEN 26851/docker-proxy
tcp 0 0 127.0.0.1:49155 0.0.0.0:* LISTEN 26865/docker-proxy
tcp 0 0 127.0.0.1:49156 0.0.0.0:* LISTEN 26879/docker-proxy
tcp6 0 0 :::22 :::* LISTEN 633/sshd: /usr/sbin
iptables result:
root@minikube:~# iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT tcp -- anywhere anywhere tcp dpt:8443
Chain FORWARD (policy DROP)
target prot opt source destination
DOCKER-USER all -- anywhere anywhere
DOCKER-ISOLATION-STAGE-1 all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
DOCKER all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
DOCKER all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain DOCKER (2 references)
target prot opt source destination
ACCEPT tcp -- anywhere <my_server_ip> tcp dpt:8443
ACCEPT tcp -- anywhere <my_server_ip> tcp dpt:5000
ACCEPT tcp -- anywhere <my_server_ip> tcp dpt:2376
ACCEPT tcp -- anywhere <my_server_ip> tcp dpt:ssh
Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target prot opt source destination
DOCKER-ISOLATION-STAGE-2 all -- anywhere anywhere
DOCKER-ISOLATION-STAGE-2 all -- anywhere anywhere
RETURN all -- anywhere anywhere
Chain DOCKER-ISOLATION-STAGE-2 (2 references)
target prot opt source destination
DROP all -- anywhere anywhere
DROP all -- anywhere anywhere
RETURN all -- anywhere anywhere
Chain DOCKER-USER (1 references)
target prot opt source destination
RETURN all -- anywhere anywhere
Ubuntu firewall status:
root@minikube:~# sudo ufw status verbose
Status: inactive
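For completeness, a few commands that show where the API server actually listens on a standard minikube install (a diagnostic sketch, assuming the minikube CLI is available):
minikube ip                                        # the node IP the kubeconfig server URL points at
kubectl cluster-info                               # prints the control-plane URL
curl -k -m 5 https://$(minikube ip):8443/version   # often reachable only from this host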
It seems the Kubernetes API isn't exposed on port 8443, so I used this command:
kubectl proxy --port=8080 &
When I try it:
root@minikube:~# curl http://localhost:8080/api/
{
  "kind": "APIVersions",
  "versions": [
    "v1"
  ],
  "serverAddressByClientCIDRs": [
    {
      "clientCIDR": "0.0.0.0/0",
      "serverAddress": "<minikube_ip>:8443"
    }
  ]
}
root@minikube:~# curl http://127.0.0.1:8080/api/
{
  "kind": "APIVersions",
  "versions": [
    "v1"
  ],
  "serverAddressByClientCIDRs": [
    {
      "clientCIDR": "0.0.0.0/0",
      "serverAddress": "<minikube_ip>:8443"
    }
  ]
}
root@minikube:~# curl http://<minikube_ip>:8080/api/
curl: (7) Failed to connect to <minikube_ip> port 8080: Connection refused
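One likely explanation for that last failure: kubectl proxy binds to 127.0.0.1 by default, so it is only reachable locally. A sketch of exposing it on all interfaces, insecure and for testing only:
# WARNING: this exposes the unauthenticated proxy to the whole network
kubectl proxy --address=0.0.0.0 --accept-hosts='.*' --port=8080 &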
Related
I am setting up a Nexus OSS on an Azure VM.
I have set it up on a Ubuntu 16.04 LTS.
When I connect to the webapp via an SSH tunnel, I can access the Nexus repository manager. When I try to open it directly, I cannot get it to work.
As per the Azure docs and several Stack Overflow responses, I have updated the NSG and added a rule allowing port 8081, but with no success. I also checked UFW (the Ubuntu firewall), and it is not even activated.
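For reference, an NSG rule like the one described can be created from the Azure CLI; a sketch with hypothetical resource names (my-rg and nexus-nsg are placeholders):
# Hypothetical names; adjust to your resource group and NSG
az network nsg rule create \
  --resource-group my-rg --nsg-name nexus-nsg \
  --name allow-nexus-8081 --priority 1010 \
  --direction Inbound --access Allow --protocol Tcp \
  --destination-port-ranges 8081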
EDIT:
netstat -plant | grep 8081
tcp 0 0 127.0.0.1:33519 0.0.0.0:* LISTEN 18081/java
tcp 0 0 0.0.0.0:8081 0.0.0.0:* LISTEN 18081/java
tcp 0 0 127.0.0.1:8081 127.0.0.1:60242 TIME_WAIT -
tcp 0 0 127.0.0.1:8081 127.0.0.1:60366 TIME_WAIT -
tcp 0 0 127.0.0.1:8081 127.0.0.1:60244 TIME_WAIT -
EDIT 2:
admin#nexus-vm:~$ sudo iptables -nL INPUT
Chain INPUT (policy ACCEPT)
target prot opt source destination
Does anyone have any idea what could be wrong?
Thanks in advance!
Regards
The problem was my company's firewall. I tested it over 4G and it works.
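A quick way to reproduce that diagnosis is to probe the port from two vantage points and compare, e.g. once from the office network and once over a 4G hotspot (<vm_public_ip> is a placeholder):
curl -sS -m 5 http://<vm_public_ip>:8081/ || echo "blocked or filtered"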
I've installed MongoDB 3.6 on CentOS 7 and am able to connect to it locally:
# cat /etc/centos-release
CentOS Linux release 7.4.1708 (Core)
# mongo
MongoDB shell version v3.6.2
connecting to: mongodb://127.0.0.1:27017
MongoDB server version: 3.6.2
Welcome to the MongoDB shell.
...
>
My server's IP address is 192.168.83.45, but I can't log in to MongoDB from the same server via the IP address instead of 127.0.0.1:
# ip addr | grep 'inet '
inet 127.0.0.1/8 scope host lo
inet 192.168.83.45/24 brd 192.168.83.255 scope global enp0s3
inet 10.0.3.15/24 brd 10.0.3.255 scope global dynamic enp0s8
# mongo --host 192.168.83.45
MongoDB shell version v3.6.2
connecting to: mongodb://192.168.83.45:27017/
2018-01-31T23:29:35.817-0500 W NETWORK [thread1] Failed to connect to 192.168.83.45:27017, in(checking socket for error after poll), reason: Connection refused
2018-01-31T23:29:35.818-0500 E QUERY [thread1] Error: couldn't connect to server 192.168.83.45:27017, connection attempt failed :
connect#src/mongo/shell/mongo.js:251:13
#(connect):1:6
exception: connect failed
I have checked the following:
- iptables rules: appended (meanwhile my Apache HTTP server is not blocked)
- SELinux status: disabled
- MongoDB IP bind: commented out
The checks are shown below:
iptables (rule added):
# iptables -nL
Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED
ACCEPT icmp -- 0.0.0.0/0 0.0.0.0/0
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:22
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:21
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:80
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:3000
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:27017
REJECT all -- 0.0.0.0/0 0.0.0.0/0 reject-with icmp-host-prohibited
Chain FORWARD (policy ACCEPT)
target prot opt source destination
REJECT all -- 0.0.0.0/0 0.0.0.0/0 reject-with icmp-host-prohibited
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
My Apache HTTP server works well on port 80 and is not blocked:
# curl http://192.168.83.45
<html>
<head>
<title>Hello World!</title>
</head>
<body>
Hello World!
</body>
</html>
SELinux (disabled):
# sestatus
SELinux status: disabled
mongod.conf (the IP bind is commented out; I clearly understand the risk of simply commenting out this line, but this is a virtual machine on a host-only network, so it's fine):
# cat /etc/mongod.conf
# mongod.conf

# for documentation of all options, see:
#   http://docs.mongodb.org/manual/reference/configuration-options/

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log

# Where and how to store data.
storage:
  dbPath: /var/lib/mongo
  journal:
    enabled: true
#  engine:
#  mmapv1:
#  wiredTiger:

# how the process runs
processManagement:
  fork: true  # fork and run in background
  pidFilePath: /var/run/mongodb/mongod.pid  # location of pidfile
  timeZoneInfo: /usr/share/zoneinfo

# network interfaces
net:
  port: 27017
#  bindIp: 127.0.0.1  # Listen to local interface only, comment to listen on all interfaces.

#security:
#operationProfiling:
#replication:
#sharding:

## Enterprise-Only Options
#auditLog:
#snmp:
I've not only restarted the services but also rebooted the whole machine, and it still doesn't work. I can't access MongoDB via the IP address, either from the same computer or from a remote one.
I tested one more thing, and now I'm sure it has nothing to do with my firewall. I stopped MongoDB, changed the default listening port of the Apache HTTP server from 80 to 27017, and restarted it. Now I can get the HTML document on port 27017 via IP address 192.168.83.45, so I think my firewall rule is OK. There must be something wrong with MongoDB:
# curl 'http://192.168.83.45:27017'
<html>
<head>
<title>Hello World!</title>
</head>
<body>
Hello World!
</body>
</html>
Although @Sridharan r.g's solution doesn't work, my resolution was inspired by his answer.
I was so close to the solution:
Change the "bindIp" value from "127.0.0.1" in /etc/mongod.conf, AND KEEP TWO SPACES BEFORE "bindIp", like this:
...
# network interfaces
net:
  port: 27017
  bindIp: 0.0.0.0
...
Please note:
- There must be exactly two spaces before "bindIp": neither too many nor too few.
- The default file format of MongoDB 3.6 doesn't use "bind_ip =" but rather "bindIp:".
- There MUST BE AT LEAST ONE SPACE between the colon after "bindIp" and the IP address (here it is 0.0.0.0).
- If you want to add more than one IP address, use a comma to separate the values, and KEEP AT LEAST ONE SPACE between the comma and the next IP address.
The file format is a little bit tricky; check the file format specification.
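After editing the file, a restart-and-verify sketch (assuming the CentOS 7 service name mongod):
sudo systemctl restart mongod
# Expect 0.0.0.0:27017 here rather than 127.0.0.1:27017
sudo netstat -plant | grep 27017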
Make sure that the mongod daemon is running and listening on 0.0.0.0, not on 127.0.0.1 only.
Check whether the MongoDB port is listening with the help of the netstat command.
If you are still facing the problem, edit the config:
$ vim /etc/mongod.conf
# Listen on the local, LAN and public interfaces.
bind_ip = 127.0.0.1,192.168.161.100,45.56.65.100
Locally I can connect to my Express app on port 9000. If I start it on the remote server, I can't reach the app, even though the console logs show that it starts successfully.
I see the following netstat output after running pm2 start bin/www for my-express-app:
tcp6 0 0 :::3000 :::* LISTEN 52407/www
tcp6 0 0 :::8000 :::* LISTEN 43298/server.js
tcp6 0 0 :::9000 :::* LISTEN 52407/www
And the following if I start it with pm2 start app.js:
tcp6 0 0 :::8000 :::* LISTEN 43298/server.js
tcp6 0 0 :::9000 :::* LISTEN 53096/app.js
My setup configuration is as follows:
...................
app.set('port', 9000)
...................
app.listen(app.get('port'));
Have I missed something?
Express version is 4.x
Update
I also tried binding the app to listen on any IP: app.listen(app.get('port'), '0.0.0.0').
I have added two input/output rules (the UDP rule existed before):
Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT tcp -- anywhere anywhere tcp dpt:9000
ACCEPT udp -- anywhere anywhere udp dpt:bootpc
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
ACCEPT all -- anywhere anywhere
ufw status tells me it is inactive.
Still no success. Environment: Ubuntu 14.04.
Update
I was able to run the app on port 8000, where the other JS app runs normally. I can't find any settings related to this port; 9000 still doesn't work. Below is an nmap scan of port 9000:
nmap -p 9000 127.0.0.1
Starting Nmap 6.40 ( http://nmap.org ) at 2017-10-04 08:52 UTC
Nmap scan report for localhost (127.0.0.1)
Host is up (0.000070s latency).
PORT STATE SERVICE
9000/tcp open cslistener
Nmap done: 1 IP address (1 host up) scanned in 0.05 seconds
nmap -p 9000 myip
Starting Nmap 6.40 ( http://nmap.org ) at 2017-10-04 08:52 UTC
Note: Host seems down. If it is really up, but blocking our ping probes, try -Pn
Nmap done: 1 IP address (0 hosts up) scanned in 3.05 seconds
RESOLVED
I needed to set up an endpoint for port 9000 in the Azure portal. It works now. Thanks.
You should check your remote server's firewall and open port 9000 for traffic.
What operating system are you using, and who is hosting this server for you? For example, I know that if you rent an Ubuntu server on DigitalOcean, most ports (including 9000) will be blocked by default by the firewall, ufw. If you're running on a new-ish version of Ubuntu, you can check your current firewall rules with ufw status. You may have to modify your firewall rules with ufw allow 9000.
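For example, on such an Ubuntu server the check-and-open sequence would look roughly like this:
sudo ufw status           # is the firewall active, and is 9000 listed?
sudo ufw allow 9000/tcp   # open port 9000 for TCP traffic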
I've got the following cluster of 3 Ubuntu machines on Azure cloud:
172.16.0.7 (master)
172.16.0.4 (kube-01)
172.16.0.5 (kube-02)
On 172.16.0.4 (kube-01) I've got a pod called publisher with port 8080 exposed. To make it available to the world I defined the following service:
"id": "publisher-service",
"kind": "Service",
"apiVersion": "v1beta1",
"port": 8181,
"containerPort": 8080,
"publicIPs": ["172.16.0.4", "172.16.0.5"],
"selector": {
"group": "abc",
"component": "publisher"
},
"labels": {
"group": "abc"
}
172.16.0.4 and 172.16.0.5 are the internal IP addresses (in Azure terms) of kube-01 and kube-02, respectively.
On 172.16.0.4 (kube-01) I've got an Azure endpoint defined with public port set to 8181 and private port set to 8181
On 172.16.0.5 (kube-02) I've got an Azure endpoint defined with public port set to 8182 and private port set to 8181
With such a setup I can successfully access publisher-service using my VM public virtual IP (VIP) address and port 8181.
However, I would also expect to be able to reach the publisher-service using the same VIP address and port 8182 (as it is mapped to port 8181 on kube-02). Instead, curl reports Recv failure: Connection reset by peer.
Am I doing anything wrong here? Maybe my understanding of Kubernetes External Services is incorrect (and hence my expectation is wrong)?
I also noticed in /var/log/upstart/kube-proxy the following entries logged:
E0404 17:36:33.371889 1661 proxier.go:82] Dial failed: dial tcp 10.0.86.26:8080: i/o timeout
E0404 17:36:33.371951 1661 proxier.go:110] Failed to connect to balancer: failed to connect to an endpoint.
Here is a part of iptables -L -t nat output captured on 172.16.0.5 (kube-02):
Chain KUBE-PORTALS-CONTAINER (1 references)
target prot opt source destination
REDIRECT tcp -- anywhere 11.1.1.2 /* kubernetes */ tcp dpt:https redir ports 45717
REDIRECT tcp -- anywhere 11.1.1.1 /* kubernetes-ro */ tcp dpt:http redir ports 34122
REDIRECT tcp -- anywhere 11.1.1.221 /* publisher-service */ tcp dpt:8181 redir ports 48046
REDIRECT tcp -- anywhere 172.16.0.4 /* publisher-service */ tcp dpt:8181 redir ports 48046
REDIRECT tcp -- anywhere 172.16.0.5 /* publisher-service */ tcp dpt:8181 redir ports 48046
Chain KUBE-PORTALS-HOST (1 references)
target prot opt source destination
DNAT tcp -- anywhere 11.1.1.2 /* kubernetes */ tcp dpt:https to:172.16.0.5:45717
DNAT tcp -- anywhere 11.1.1.1 /* kubernetes-ro */ tcp dpt:http to:172.16.0.5:34122
DNAT tcp -- anywhere 11.1.1.221 /* publisher-service */ tcp dpt:8181 to:172.16.0.5:48046
DNAT tcp -- anywhere 172.16.0.4 /* publisher-service */ tcp dpt:8181 to:172.16.0.5:48046
DNAT tcp -- anywhere 172.16.0.5 /* publisher-service */ tcp dpt:8181 to:172.16.0.5:48046
I am using Kubernetes v0.12.0. I followed this guide to set up my cluster (i.e. I'm using flannel).
UPDATE #1: added publisher pod status info.
apiVersion: v1beta1
creationTimestamp: 2015-04-04T13:24:47Z
currentState:
  Condition:
  - kind: Ready
    status: Full
  host: 172.16.0.4
  hostIP: 172.16.0.4
  info:
    publisher:
      containerID: docker://6eabf71d507ad0086b37940931aa739534ef681906994a6aae6d97b8b213
      image: xxxxx.cloudapp.net/publisher:0.0.2
      imageID: docker://5a76329ae2d0dce05fae6f7b1216e346cef2e5aa49899cd829a5dc1f6e70
      ready: true
      restartCount: 5
      state:
        running:
          startedAt: 2015-04-04T13:26:24Z
  manifest:
    containers: null
    id: ""
    restartPolicy: {}
    version: ""
    volumes: null
  podIP: 10.0.86.26
  status: Running
desiredState:
  manifest:
    containers:
    - capabilities: {}
      command:
      - sh
      - -c
      - java -jar publisher.jar -b $KAFKA_SERVICE_HOST:$KAFKA_SERVICE_PORT
      image: xxxxx.cloudapp.net/publisher:0.0.2
      imagePullPolicy: PullIfNotPresent
      name: publisher
      ports:
      - containerPort: 8080
        hostPort: 8080
        protocol: TCP
      resources: {}
      terminationMessagePath: /dev/termination-log
    dnsPolicy: ClusterFirst
    id: ""
    restartPolicy:
      always: {}
    version: v1beta2
    volumes: null
generateName: rc-publisher-
id: rc-publisher-ls6k1
kind: Pod
labels:
  group: abc
namespace: default
resourceVersion: 22853
selfLink: /api/v1beta1/pods/rc-publisher-ls6k1?namespace=default
uid: f746555d-dacd-11e4-8ae7-000d3a101fda
The external networking actually appears to be working fine -- the message you see in the logs is because the kube-proxy did receive the request you sent to it.
The reason it failed, though, is that the kube-proxy couldn't talk to your pod. Either flannel is failing to route to your pod's IP properly, or the pod isn't healthy. Since sending requests to 172.16.0.4 works, it's likely that something is wrong with your flannel setup. You can confirm this by trying to curl 10.0.86.26:8080 from node-2.
In case something is wrong with the health of the pod, you can check its detailed state by running kubectl.sh get pod $POD_NAME --output=yaml.
Sorry for the difficulties!
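Concretely, the two checks suggested above would look like this, using the pod IP and name from the question:
# From kube-02 (172.16.0.5): is the pod IP routable over flannel?
curl -m 5 http://10.0.86.26:8080/
# Detailed pod state:
kubectl.sh get pod rc-publisher-ls6k1 --output=yaml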
Once I reinstalled my cluster using k8s v0.14.2, everything started to work as expected. I followed Brendan Burns' Docker Guide.
I spent several days trying to configure my environment to run MongoDB on Linux, without results. The platform is running on AWS EC2.
MongoDB is configured with auth = true commented out, and with port = 27017.
My problem is that when I try to connect remotely (or even from the same machine via its IP), I get:
-bash-4.1# mongo myIP:27017/mybd
MongoDB shell version: 2.4.9
connecting to: myIP:27017/mybd
Wed Apr 2 20:57:28.250 Error: couldn't connect to server myIP:27017 at src/mongo/shell/mongo.js:147
exception: connect failed
But if I try with localhost:
-bash-4.1# mongo localhost:27017/mybd
MongoDB shell version: 2.4.9
connecting to: localhost:27017/mybd
>
Now more info:
-bash-4.1# netstat -a
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 *:27017 *:* LISTEN
tcp 0 0 *:28017 *:* LISTEN
tcp 0 0 *:ssh *:* LISTEN
tcp 0 0 localhost:smtp *:* LISTEN
tcp 0 48 ip-10-187-41-156.ec2.in:ssh 186-79-194-159.baf.mo:55311 ESTABLISHED
tcp 0 0 *:ssh *:* LISTEN
-bash-4.1# iptables -L -n
Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:27017
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp spt:27017 state ESTABLISHED
And finally, I've made sure that the security group is right. I've opened 27017 and 28017 to anything from the outside with 0.0.0.0/0.
Edit your /etc/mongod.conf:
bind_ip = 0.0.0.0
That's it; now you can connect to your remote MongoDB instance.
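A restart-and-verify sketch to go with that change (assuming the service is named mongod, as on a standard install):
sudo service mongod restart
netstat -an | grep 27017    # expect *:27017 or 0.0.0.0:27017 in the LISTEN line
mongo myIP:27017/mybd       # then retry the remote connection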