Kubernetes: external service is not available from all minions on Azure cloud

I've got the following cluster of 3 Ubuntu machines on Azure cloud:
172.16.0.7 (master)
172.16.0.4 (kube-01)
172.16.0.5 (kube-02)
On 172.16.0.4 (kube-01) I've got a pod called publisher with port 8080 exposed. To make it available to the world I defined the following service:
"id": "publisher-service",
"kind": "Service",
"apiVersion": "v1beta1",
"port": 8181,
"containerPort": 8080,
"publicIPs": ["172.16.0.4", "172.16.0.5"],
"selector": {
"group": "abc",
"component": "publisher"
},
"labels": {
"group": "abc"
}
172.16.0.4 and 172.16.0.5 are the internal IP addresses (in Azure terms) of kube-01 and kube-02 respectively.
On 172.16.0.4 (kube-01) I've got an Azure endpoint defined with public port set to 8181 and private port set to 8181
On 172.16.0.5 (kube-02) I've got an Azure endpoint defined with public port set to 8182 and private port set to 8181
With such a setup I can successfully access publisher-service using my VM public virtual IP (VIP) address and port 8181.
However, I would expect to also be able to reach publisher-service using the same VIP address and port 8182 (as it is mapped to port 8181 on kube-02). Instead, curl reports Recv failure: Connection reset by peer.
Am I doing anything wrong here? Maybe my understanding of Kubernetes External Services is incorrect (and hence my expectation is wrong)?
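For reference, a minimal external check of the two endpoint mappings might look like the sketch below (the cloud-service hostname is only a placeholder for my VIP):
curl -v http://<my-cloud-service>.cloudapp.net:8181/   # forwarded to kube-01:8181 - works
curl -v http://<my-cloud-service>.cloudapp.net:8182/   # forwarded to kube-02:8181 - connection reset by peer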
I also noticed the following entries logged in /var/log/upstart/kube-proxy:
E0404 17:36:33.371889 1661 proxier.go:82] Dial failed: dial tcp 10.0.86.26:8080: i/o timeout
E0404 17:36:33.371951 1661 proxier.go:110] Failed to connect to balancer: failed to connect to an endpoint.
Here is a part of iptables -L -t nat output captured on 172.16.0.5 (kube-02):
Chain KUBE-PORTALS-CONTAINER (1 references)
target prot opt source destination
REDIRECT tcp -- anywhere 11.1.1.2 /* kubernetes */ tcp dpt:https redir ports 45717
REDIRECT tcp -- anywhere 11.1.1.1 /* kubernetes-ro */ tcp dpt:http redir ports 34122
REDIRECT tcp -- anywhere 11.1.1.221 /* publisher-service */ tcp dpt:8181 redir ports 48046
REDIRECT tcp -- anywhere 172.16.0.4 /* publisher-service */ tcp dpt:8181 redir ports 48046
REDIRECT tcp -- anywhere 172.16.0.5 /* publisher-service */ tcp dpt:8181 redir ports 48046
Chain KUBE-PORTALS-HOST (1 references)
target prot opt source destination
DNAT tcp -- anywhere 11.1.1.2 /* kubernetes */ tcp dpt:https to:172.16.0.5:45717
DNAT tcp -- anywhere 11.1.1.1 /* kubernetes-ro */ tcp dpt:http to:172.16.0.5:34122
DNAT tcp -- anywhere 11.1.1.221 /* publisher-service */ tcp dpt:8181 to:172.16.0.5:48046
DNAT tcp -- anywhere 172.16.0.4 /* publisher-service */ tcp dpt:8181 to:172.16.0.5:48046
DNAT tcp -- anywhere 172.16.0.5 /* publisher-service */ tcp dpt:8181 to:172.16.0.5:48046
I am using Kubernetes v0.12.0. I followed this guide to set up my cluster (i.e. I'm using flannel).
UPDATE #1: added publisher pod status info.
apiVersion: v1beta1
creationTimestamp: 2015-04-04T13:24:47Z
currentState:
  Condition:
  - kind: Ready
    status: Full
  host: 172.16.0.4
  hostIP: 172.16.0.4
  info:
    publisher:
      containerID: docker://6eabf71d507ad0086b37940931aa739534ef681906994a6aae6d97b8b213
      image: xxxxx.cloudapp.net/publisher:0.0.2
      imageID: docker://5a76329ae2d0dce05fae6f7b1216e346cef2e5aa49899cd829a5dc1f6e70
      ready: true
      restartCount: 5
      state:
        running:
          startedAt: 2015-04-04T13:26:24Z
  manifest:
    containers: null
    id: ""
    restartPolicy: {}
    version: ""
    volumes: null
  podIP: 10.0.86.26
  status: Running
desiredState:
  manifest:
    containers:
    - capabilities: {}
      command:
      - sh
      - -c
      - java -jar publisher.jar -b $KAFKA_SERVICE_HOST:$KAFKA_SERVICE_PORT
      image: xxxxx.cloudapp.net/publisher:0.0.2
      imagePullPolicy: PullIfNotPresent
      name: publisher
      ports:
      - containerPort: 8080
        hostPort: 8080
        protocol: TCP
      resources: {}
      terminationMessagePath: /dev/termination-log
    dnsPolicy: ClusterFirst
    id: ""
    restartPolicy:
      always: {}
    version: v1beta2
    volumes: null
generateName: rc-publisher-
id: rc-publisher-ls6k1
kind: Pod
labels:
  group: abc
namespace: default
resourceVersion: 22853
selfLink: /api/v1beta1/pods/rc-publisher-ls6k1?namespace=default
uid: f746555d-dacd-11e4-8ae7-000d3a101fda

The external networking actually appears to be working fine -- the message you see in the logs is because the kube-proxy did receive the request you sent to it.
The reason it failed, though, is that the kube-proxy couldn't talk to your pod. Either flannel is failing to route to your pod's IP properly, or the pod isn't healthy. Since sending requests to 172.16.0.4 works, it's likely that something is wrong with your flannel setup. You can confirm this by trying to curl 10.0.86.26:8080 from node-2.
In case it may be something wrong with the health of the pod, you can check its detailed state by running kubectl.sh get pod $POD_NAME --output=yaml.
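A few commands that may help narrow it down (a sketch, run from kube-02, using the pod IP and pod name from the status in the question):
curl -v http://10.0.86.26:8080/                        # does flannel route from this node to the pod?
ip route                                               # the flannel subnet should be routed via the flannel interface
kubectl.sh get pod rc-publisher-ls6k1 --output=yaml    # detailed pod state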
Sorry for the difficulties!

Once I reinstalled my cluster using k8s v0.14.2, everything started to work as expected. I followed Brendan Burns' Docker Guide.

Related

Can't fetch user info from Keycloak using Python on docker compose

I need to run three containers from Docker Compose: a FastAPI server, a Keycloak server and a Postgres database.
This works well if I run the uvicorn command from my local bash instead of from the docker-compose service. I also noted that if I run the code from outside docker-compose, I get the authorization option OpenIdConnect (OAuth2, authorization_code), and from docker-compose: OpenIdConnect (OAuth2, authorization_code with PKCE).
My docker-compose.yaml:
version: '3.9'
services:
  web:
    build: ./foo
    command: uvicorn main:app --reload --workers 1 --host 0.0.0.0 --port 8000
    volumes:
      - ./foo:/usr/src
    ports:
      - 8000:8000
    depends_on:
      - db
      - kc
    environment:
      BAR_ENV: local
      LOGGER_NAME: local
      BAR_DB_LOCAL_USERPASS: bar:bar
      BAR_DB_LOCAL_DB_NAME: bar
      BAR_DB_LOCAL_HOST: localhost:5438
      BAR_HOSTNAME: bar.local
      BAR_AUTH_URL: http://auth.bar.local:8087
      BAR_FRONT_URL: bar.local:3000
  kc:
    image: quay.io/keycloak/keycloak-x:latest
    command: start-dev --db=postgres --db-url-host=$$DB_HOST --db-url-database=$$DB_DATABASE --db-username=$$DB_USER --db-password=$$DB_PASS --http-port=8087
    environment:
      KEYCLOAK_ADMIN: admin
      KEYCLOAK_ADMIN_PASSWORD: admin
      DB_HOST: db
      DB_DATABASE: &KC_DB_DB keycloak
      DB_USER: &KC_DB_USER keycloak
      DB_PASS: &KC_DB_PASS keycloak
    domainname: auth.bar.local
    ports:
      - 8087:8087
    depends_on:
      - db
    volumes:
      - ./resources/keycloak-themes:/opt/keycloak/themes/theme
  db:
    image: postgres:14
    environment:
      POSTGRES_DB: postgres
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      KC_DB_DB: *KC_DB_DB
      KC_DB_USER: *KC_DB_USER
      KC_DB_PASS: *KC_DB_PASS
      BAR_DB_DB: bar
      BAR_DB_USER: bar
      BAR_DB_PASS: bar
    ports:
      - 5438:5432
    volumes:
      - ./data/pg-data:/var/lib/postgresql/data
      - ./resources/init-kc-db.sh:/docker-entrypoint-initdb.d/init-kc-db.sh
      - ./resources/init-bar-db.sh:/docker-entrypoint-initdb.d/init-bar-db.sh
I'm able to access http://<realm>.bar.local:8000/docs from the browser and to authenticate with OpenIdConnect (OAuth2, authorization_code with PKCE). It redirects me to the Keycloak login page and then back to Swagger. But if I try one of my endpoints in Swagger, for example /whoami, I get a 500 Internal Server Error.
Logs from web_1 service:
web_1 | keycloak.exceptions.KeycloakConnectionError: Can't connect to server (HTTPConnectionPool(host='auth.bar.local', port=8087): Max retries exceeded with url: /realms/<realm>/protocol/openid-connect/userinfo (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fd38041a6b0>: Failed to establish a new connection: [Errno 111] Connection refused')))
web_1 | {"asctime": "2022-04-26 11:31:54,929", "threadName": "MainThread", "filename": "httptools_impl.py", "lineno": 437, "message": "172.18.0.1:63454 - \"GET /api/v1_0/whoami HTTP/1.1\" 500", "severity": "INFO"}
The error above occurs in my keycloak_auth.py, when it tries to fetch user info from self.kc_clients[org]:
class OpenIdConnectMultipleViaKeycloak(SecurityBase):
    def __init__(
            self, *, internal_well_known_url: str, server_url: str,
            client_template: str, realm_template: str):
        self.model = OpenIdConnectModel(
            openIdConnectUrl=internal_well_known_url)
        self.scheme_name = 'OpenIdConnect'
        self.auto_error = True
        self.server_url = server_url
        self.client_template = client_template
        self.realm_template = realm_template
        self.kc_clients = {}

    async def __call__(self, request: Request) -> Optional[str]:
        org = get_org_from_host(request.base_url.hostname)
        if org not in self.kc_clients:
            self.kc_clients[org] = KeycloakOpenID(
                server_url=self.server_url,
                client_id=self.client_template.format(org=org),
                realm_name=self.realm_template.format(org=org))
        authorization: str = request.headers.get("Authorization")
        if not authorization:
            raise HTTPException(
                status_code=HTTP_403_FORBIDDEN, detail="Not authenticated")
        try:
            userinfo = self.kc_clients[org].userinfo(
                authorization.replace('Bearer ', ''))
            userinfo['keycloak_realm'] = org
        except KeycloakGetError as e:
            raise HTTPException(
                status_code=HTTP_403_FORBIDDEN, detail=str(e))
        return userinfo
Inspecting kc_1 service from inside container:
[root@e3e5d33ce08b /]# nmap -O localhost
Starting Nmap 7.70 ( https://nmap.org ) at 2022-04-26 17:01 UTC
Nmap scan report for localhost (127.0.0.1)
Host is up (0.000080s latency).
Other addresses for localhost (not scanned): ::1
Not shown: 999 closed ports
PORT STATE SERVICE
8087/tcp open simplifymedia
Device type: general purpose
Running: Linux 2.6.X
OS CPE: cpe:/o:linux:linux_kernel:2.6.32
OS details: Linux 2.6.32
Network Distance: 0 hops
OS detection performed. Please report any incorrect results at https://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 4.21 seconds
and
[root@e3e5d33ce08b /]# netstat -nlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:8087 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.11:41567 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:37927 0.0.0.0:* LISTEN -
udp 0 0 127.0.0.11:57222 0.0.0.0:* -
Active UNIX domain sockets (only servers)
Proto RefCnt Flags Type State I-Node PID/Program name Path
Active Bluetooth connections (only servers)
Proto Destination Source State PSM DCID SCID IMTU OMTU Security
Proto Destination Source State Channel
[root@e3e5d33ce08b /]#
Inspecting domain auth.bar.local from web_1 container:
root@0cf70e1cef7f:/usr/src/barz# nmap -p 8087 auth.bar.local
Starting Nmap 7.80 ( https://nmap.org ) at 2022-04-26 17:02 UTC
Nmap scan report for auth.bar.local (127.0.0.1)
Host is up (0.000068s latency).
rDNS record for 127.0.0.1: localhost
PORT STATE SERVICE
8087/tcp closed simplifymedia
Nmap done: 1 IP address (1 host up) scanned in 15.06 seconds
It seems that the domainname is reachable from other containers and from outside, but requests made to port 8087 from outside don't work. I've tried ps aux | grep start-dev and it is running under PID 1. I can even wget it from inside the kc_1 container and receive a response. I also tried the code proposed in https://stackoverflow.com/a/50355857/6328506, but the behavior did not change.
What am I supposed to do to successfully get http://auth.bar.local:8087/realms/<realm>/protocol/openid-connect/userinfo using docker compose?
Replacing localhost with host.docker.internal and adopting the solution proposed in https://stackoverflow.com/a/60026589/6328506 for the kc service solved the problem. It is worth mentioning that ping/nmap against <service_name>, localhost and <network_alias> have different effects.
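A quick way to see that difference from inside the web container (a sketch, assuming the compose file above is running and the image ships getent and curl):
docker compose exec web getent hosts kc               # the service name resolves to the kc container
docker compose exec web getent hosts auth.bar.local   # the domainname may resolve elsewhere unless kc carries a matching network alias
docker compose exec web curl -v http://kc:8087/       # reaching Keycloak by service name should work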

Jenkins kubernetes cloud service getting SocketTimeoutException

I'm building a continuous integration pipeline through Jenkins to deploy microservices on a Kubernetes cluster. I'm using a minikube implementation.
I have an error in the Kubernetes agent configuration. I created a secret text credential to allow the agent to connect to my K8s cluster, but when I test it I get this error:
Error testing connection https://<my_k8s_cluster_ip>:8443: java.net.SocketTimeoutException: connect timed out
I don't understand what the problem is.
K8s agent config
minikube config:
root@minikube:~# kubectl config view --minify=true
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /root/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 16 Feb 2021 23:57:36 CET
        provider: minikube.sigs.k8s.io
        version: v1.17.1
      name: cluster_info
    server: https://<minikube_ip>:8443
  name: minikube
contexts:
- context:
    cluster: minikube
    extensions:
    - extension:
        last-update: Tue, 16 Feb 2021 23:57:36 CET
        provider: minikube.sigs.k8s.io
        version: v1.17.1
      name: context_info
    namespace: bargo
    user: minikube
  name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: minikube
  user:
    client-certificate: /root/.minikube/profiles/minikube/client.crt
    client-key: /root/.minikube/profiles/minikube/client.key
Secret
root@minikube:~# kubectl get secrets
NAME TYPE DATA AGE
db-user-pass Opaque 2 2d18h
default-token-4f7lh kubernetes.io/service-account-token 3 3d
jenkins-token-4kdgr kubernetes.io/service-account-token 3 2d18h
Listening port
root@minikube:~# sudo netstat -tulpn | grep LISTEN
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN 17207/systemd-resol
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 633/sshd: /usr/sbin
tcp 0 0 127.0.0.1:49153 0.0.0.0:* LISTEN 26838/docker-proxy
tcp 0 0 127.0.0.1:49154 0.0.0.0:* LISTEN 26851/docker-proxy
tcp 0 0 127.0.0.1:49155 0.0.0.0:* LISTEN 26865/docker-proxy
tcp 0 0 127.0.0.1:49156 0.0.0.0:* LISTEN 26879/docker-proxy
tcp6 0 0 :::22 :::* LISTEN 633/sshd: /usr/sbin
iptables result:
root@minikube:~# iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT tcp -- anywhere anywhere tcp dpt:8443
Chain FORWARD (policy DROP)
target prot opt source destination
DOCKER-USER all -- anywhere anywhere
DOCKER-ISOLATION-STAGE-1 all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
DOCKER all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
DOCKER all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain DOCKER (2 references)
target prot opt source destination
ACCEPT tcp -- anywhere <my_server_ip> tcp dpt:8443
ACCEPT tcp -- anywhere <my_server_ip> tcp dpt:5000
ACCEPT tcp -- anywhere <my_server_ip> tcp dpt:2376
ACCEPT tcp -- anywhere <my_server_ip> tcp dpt:ssh
Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target prot opt source destination
DOCKER-ISOLATION-STAGE-2 all -- anywhere anywhere
DOCKER-ISOLATION-STAGE-2 all -- anywhere anywhere
RETURN all -- anywhere anywhere
Chain DOCKER-ISOLATION-STAGE-2 (2 references)
target prot opt source destination
DROP all -- anywhere anywhere
DROP all -- anywhere anywhere
RETURN all -- anywhere anywhere
Chain DOCKER-USER (1 references)
target prot opt source destination
RETURN all -- anywhere anywhere
Ubuntu firewall status:
root@minikube:~# sudo ufw status verbose
Status: inactive
It seems the Kubernetes API wasn't exposed on port 8443, so I used this command:
kubectl proxy --port=8080 &
When I try it:
root@minikube:~# curl http://localhost:8080/api/
{
  "kind": "APIVersions",
  "versions": [
    "v1"
  ],
  "serverAddressByClientCIDRs": [
    {
      "clientCIDR": "0.0.0.0/0",
      "serverAddress": "<minikube_ip>:8443"
    }
  ]
}
root@minikube:~# curl http://127.0.0.1:8080/api/
{
  "kind": "APIVersions",
  "versions": [
    "v1"
  ],
  "serverAddressByClientCIDRs": [
    {
      "clientCIDR": "0.0.0.0/0",
      "serverAddress": "<minikube_ip>:8443"
    }
  ]
}
root@minikube:~# curl http://<minikube_ip>:8080/api/
curl: (7) Failed to connect to <minikube_ip> port 8080: Connection refused
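Note that kubectl proxy binds to 127.0.0.1 by default, which would explain the connection refused above when using <minikube_ip>. A hedged sketch of exposing the proxy on other interfaces (check kubectl proxy --help on your version before relying on these flags):
root@minikube:~# kubectl proxy --address=0.0.0.0 --port=8080 --accept-hosts='.*' &
root@minikube:~# curl http://<minikube_ip>:8080/api/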

MongoDB cannot connect from remote computer

I've installed MongoDB 3.6 on CentOS 7 and am able to connect to it locally:
# cat /etc/centos-release
CentOS Linux release 7.4.1708 (Core)
# mongo
MongoDB shell version v3.6.2
connecting to: mongodb://127.0.0.1:27017
MongoDB server version: 3.6.2
Welcome to the MongoDB shell.
...
>
My server IP address is 192.168.83.45, but I can't log in to MongoDB from the same server via the IP address instead of 127.0.0.1:
# ip addr | grep 'inet '
inet 127.0.0.1/8 scope host lo
inet 192.168.83.45/24 brd 192.168.83.255 scope global enp0s3
inet 10.0.3.15/24 brd 10.0.3.255 scope global dynamic enp0s8
# mongo --host 192.168.83.45
MongoDB shell version v3.6.2
connecting to: mongodb://192.168.83.45:27017/
2018-01-31T23:29:35.817-0500 W NETWORK [thread1] Failed to connect to 192.168.83.45:27017, in(checking socket for error after poll), reason: Connection refused
2018-01-31T23:29:35.818-0500 E QUERY [thread1] Error: couldn't connect to server 192.168.83.45:27017, connection attempt failed :
connect#src/mongo/shell/mongo.js:251:13
#(connect):1:6
exception: connect failed
I have checked the following:
- iptables rules: appended (meanwhile my Apache HTTP server is not blocked)
- SELinux status: disabled
- MongoDB IP bind: commented out
The check is shown below:
iptables (rule added):
# iptables -nL
Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED
ACCEPT icmp -- 0.0.0.0/0 0.0.0.0/0
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:22
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:21
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:80
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:3000
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:27017
REJECT all -- 0.0.0.0/0 0.0.0.0/0 reject-with icmp-host-prohibited
Chain FORWARD (policy ACCEPT)
target prot opt source destination
REJECT all -- 0.0.0.0/0 0.0.0.0/0 reject-with icmp-host-prohibited
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
My Apache HTTP server works well on port 80 and is not blocked:
# curl http://192.168.83.45
<html>
<head>
<title>Hello World!</title>
</head>
<body>
Hello World!
</body>
</html>
SELinux (disabled):
# sestatus
SELinux status: disabled
mongod.conf (the IP bind was commented out; I clearly understand the risk of simply commenting out this line, but this is a virtual machine on a host-only network, so it's fine):
# cat /etc/mongod.conf
# mongod.conf

# for documentation of all options, see:
#   http://docs.mongodb.org/manual/reference/configuration-options/

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log

# Where and how to store data.
storage:
  dbPath: /var/lib/mongo
  journal:
    enabled: true
#  engine:
#  mmapv1:
#  wiredTiger:

# how the process runs
processManagement:
  fork: true  # fork and run in background
  pidFilePath: /var/run/mongodb/mongod.pid  # location of pidfile
  timeZoneInfo: /usr/share/zoneinfo

# network interfaces
net:
  port: 27017
#  bindIp: 127.0.0.1  # Listen to local interface only, comment to listen on all interfaces.

#security:
#operationProfiling:
#replication:
#sharding:

## Enterprise-Only Options
#auditLog:
#snmp:
I've not only restarted the services but also restarted the whole computer, and it still doesn't work. I can access MongoDB neither from the same computer via the IP address, nor from a remote computer.
I tested one more thing and now I'm sure it has nothing to do with my firewall. I stopped MongoDB, changed the default listening port of the Apache HTTP server from 80 to 27017 and restarted. Now I can get the HTML document via port 27017 with IP address 192.168.83.45. So I think my firewall rule is OK; there must be something wrong with MongoDB:
# curl 'http://192.168.83.45:27017'
<html>
<head>
<title>Hello World!</title>
</head>
<body>
Hello World!
</body>
</html>
Although @Sridharan r.g's solution doesn't work, my resolution was inspired by his answer.
I was so close to the solution:
Change the "bindIp" value from "127.0.0.1" in /etc/mongod.conf AND KEEP TWO SPACES BEFORE THE "bindIp", like this:
...
# network interfaces
net:
  port: 27017
  bindIp: 0.0.0.0
...
Please note:
- There must be exactly two spaces before "bindIp": neither too many nor too few.
- In the default file format of MongoDB 3.6, it doesn't use "bind_ip = " but rather "bindIp:".
- There MUST BE AT LEAST ONE SPACE between the colon after "bindIp" and the IP address (here it is 0.0.0.0).
- If you want to add more than one IP address, use a comma to separate the values, and KEEP AT LEAST ONE SPACE between the comma and the next IP address.
The file format is a little bit tricky; check here for the file format specification.
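After editing the file, a quick way to verify that the new bind address took effect (a sketch; on CentOS 7 the service is named mongod):
# systemctl restart mongod
# netstat -plnt | grep 27017    # should now show 0.0.0.0:27017 instead of 127.0.0.1:27017
# mongo --host 192.168.83.45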
Make sure that the mongod daemon is running and listening on 0.0.0.0, not only on 127.0.0.1.
Check whether the MongoDB port is listening with the help of the netstat command.
If you are still facing the problem, change the bind address:
$ vim /etc/mongod.conf
# /etc/mongod.conf
# Listen to local, LAN and public interfaces.
bind_ip = 127.0.0.1,192.168.161.100,45.56.65.100

Can't connect to Express app

Locally I can connect to my Express app on port 9000. If I start it on a remote server I can't reach the app, but I see in the console logs that it starts successfully.
I see the following netstat output after running pm2 start bin/www in my-express-app:
tcp6 0 0 :::3000 :::* LISTEN 52407/www
tcp6 0 0 :::8000 :::* LISTEN 43298/server.js
tcp6 0 0 :::9000 :::* LISTEN 52407/www
And the following if I start it with pm2 start app.js:
tcp6 0 0 :::8000 :::* LISTEN 43298/server.js
tcp6 0 0 :::9000 :::* LISTEN 53096/app.js
My setup configuration is as follows:
...................
app.set('port', 9000)
...................
app.listen(app.get('port'));
Have I missed something?
Express version is 4.x
Update
I also tried to bind the app to listen on any IP: app.listen(app.get('port'), '0.0.0.0')
I have added 2 input/output rules (the udp rule existed before):
Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT tcp -- anywhere anywhere tcp dpt:9000
ACCEPT udp -- anywhere anywhere udp dpt:bootpc
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
ACCEPT all -- anywhere anywhere
ufw status tells me it is inactive.
No success. Environment: Ubuntu 14.04.
Update
I was able to run the app on port 8000, where the other JS app runs normally. I can't find any settings related to this port. Port 9000 still doesn't work. Below is an nmap scan for port 9000:
nmap -p 9000 127.0.0.1
Starting Nmap 6.40 ( http://nmap.org ) at 2017-10-04 08:52 UTC
Nmap scan report for localhost (127.0.0.1)
Host is up (0.000070s latency).
PORT STATE SERVICE
9000/tcp open cslistener
Nmap done: 1 IP address (1 host up) scanned in 0.05 seconds
nmap -p 9000 myip
Starting Nmap 6.40 ( http://nmap.org ) at 2017-10-04 08:52 UTC
Note: Host seems down. If it is really up, but blocking our ping probes, try -Pn
Nmap done: 1 IP address (0 hosts up) scanned in 3.05 seconds
RESOLVED
I needed to set up an endpoint for port 9000 in the Azure portal. It works now. Thanks.
You should check your remote server's firewall and open port 9000 for traffic.
What operating system are you using, and who is hosting this server for you? For example, I know that if you rent an Ubuntu server on DigitalOcean, most ports (including 9000) will be blocked by default by the firewall, ufw. If you're running on a new-ish version of Ubuntu, you can check your current firewall rules with ufw status. You may have to modify your firewall rules with ufw allow 9000.
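For example, on a new-ish Ubuntu with ufw active, the check and the fix might look like this (a sketch; adjust to your actual firewall):
$ sudo ufw status verbose     # is 9000 currently allowed?
$ sudo ufw allow 9000/tcp     # open the port if it is not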

Unable to get PostgreSQL 9.4 to listen on port 5432

I'm using a Linux VM (Ubuntu 15.10) to spin up a Postgres Database, and as far as I can tell, everything should be configured right.
My firewall is disabled:
user@UBUNTUMACHINE:~$ sudo ufw status numbered
Status: inactive
But it's only listening on port 22
user@UBUNTUMACHINE:~$ netstat -an | grep "LISTEN "
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN
tcp6 0 0 :::22 :::* LISTEN
If I enable the firewall and allow 5432, it shows up in the rules:
user@UBUNTUMACHINE:~$ sudo ufw status verbose
Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), disabled (routed)
New profiles: skip
To Action From
-- ------ ----
22 ALLOW IN Anywhere
22/tcp ALLOW IN Anywhere
5432/tcp ALLOW IN Anywhere
5432 ALLOW IN Anywhere
22 (v6) ALLOW IN Anywhere (v6)
22/tcp (v6) ALLOW IN Anywhere (v6)
5432/tcp (v6) ALLOW IN Anywhere (v6)
5432 (v6) ALLOW IN Anywhere (v6)
But I get the same results as above for netstat.
As far as I can tell from researching the issue, I have the correct values in my postgresql.conf file:
#------------------------------------------------------------------------------
# CONNECTIONS AND AUTHENTICATION
#------------------------------------------------------------------------------
# - Connection Settings -
listen_addresses = '*' # what IP address(es) to listen on;
# comma-separated list of addresses;
# defaults to 'localhost'; use '*' for all
# (change requires restart)
port = 5432 # (change requires restart)
and I've tried both IP ranges and specific IPs as trusted in the pg_hba.conf file.
# Database administrative login by Unix domain socket
local all postgres ident sameuser
# TYPE DATABASE USER ADDRESS METHOD
# "local" is for Unix domain socket connections only
local all all md5
# IPv4 local connections:
host all all 127.0.0.1/32 md5
# IPv6 local connections:
host all all ::1/128 md5
# Allow replication connections from localhost, by a user with the
# replication privilege.
#local replication postgres peer
#host replication postgres 127.0.0.1/32 md5
#host replication postgres ::1/128 md5
host all all 10.0.0.0/255 trust
host all all 10.11.0.0/255 trust
host all all 0.0.0.0/0 trust
Lastly, Postgres is running, per
user@UBUNTUMACHINE:~$ sudo service postgresql status
● postgresql.service - PostgreSQL RDBMS
Loaded: loaded (/lib/systemd/system/postgresql.service; enabled; vendor preset: enabled)
Active: active (exited) since Wed 2017-03-08 11:09:57 CST; 57min ago
Process: 787 ExecStart=/bin/true (code=exited, status=0/SUCCESS)
Main PID: 787 (code=exited, status=0/SUCCESS)
CGroup: /system.slice/postgresql.service
Mar 08 11:09:57 UBUNTUMACHINE systemd[1]: Starting PostgreSQL RDBMS...
Mar 08 11:09:57 UBUNTUMACHINE systemd[1]: Started PostgreSQL RDBMS.
Mar 08 11:32:21 UBUNTUMACHINE systemd[1]: Started PostgreSQL RDBMS.
Mar 08 11:32:26 UBUNTUMACHINE systemd[1]: Started PostgreSQL RDBMS.
The log is telling me invalid CIDR mask in address "10.0.0.0/255" (255 is larger than 32).
Postgres refuses to start, because it refuses the netmask /255, which is larger than the possible number of bits in the (32-bit) IP address. You could consider this a bit picky of the .hba parser, but it could also be considered a configuration error.
In any case: replace the /255 with something sensible, like /24 (or /16, since you have two of these entries). And: replace the trust with something safer once it appears to work.
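A sketch of applying that fix, assuming the standard Ubuntu/Debian package layout for PostgreSQL 9.4:
$ sudo nano /etc/postgresql/9.4/main/pg_hba.conf   # change 10.0.0.0/255 to 10.0.0.0/24 and 10.11.0.0/255 to 10.11.0.0/16
$ sudo service postgresql restart
$ netstat -an | grep 5432                          # should now show 0.0.0.0:5432 LISTEN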
