I can't get my Node.js app working on OpenShift. Everywhere it is written that you should use the environment variables
OPENSHIFT_NODEJS_PORT
OPENSHIFT_NODEJS_IP
but they are not present in my pod. If I just listen on some other IP and port (e.g. port 3000 on 127.0.0.1), the app deploys successfully but does not receive any requests (and also cannot be reached from the exposed address). The output of the printenv command in the terminal of my pod running the Node.js app is in the attached pictures (sorry, I did not figure out how to copy text from the web terminal).
[Screenshot: output of printenv, part 1]
[Screenshot: output of printenv, part 2]
All of the NODEJS_* variables point to the IP 172.30.72.54 and port 8080. However, if I use these, I get a "listen EADDRNOTAVAIL" error.
By the way, the OpenShift CLI also shows the same IP and port:
$ oc get services
NAME      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)     AGE
mongodb   ClusterIP   172.30.20.188   <none>        27017/TCP   2d
nodejs    ClusterIP   172.30.72.54    <none>        8080/TCP    2d
So at this point I have no clue, and I can't find any information about which IP and port to use for my Node.js app. Thanks for any help!
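For completeness, the binding code I'm experimenting with looks roughly like this (a minimal sketch assuming Express; the 0.0.0.0 and 8080 fallbacks are just my guess to match the 8080/TCP service above):
// Sketch of the listen call, assuming an Express app.
// Falls back to 0.0.0.0:8080 (the service's target port above) when the
// OPENSHIFT_NODEJS_* variables are not set in the pod.
const express = require('express');
const app = express();

const port = process.env.OPENSHIFT_NODEJS_PORT || 8080;
const ip = process.env.OPENSHIFT_NODEJS_IP || '0.0.0.0';

app.get('/', (req, res) => res.send('ok'));

app.listen(port, ip, () => console.log(`Listening on ${ip}:${port}`));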
Related
I have a Postgres database that was deployed as a Docker container on a Linux server. How can I add it as a data source in Grafana? I tried finding the Docker host IP address using sudo ip addr show docker0 and got the result 172.17.255.255. I added this address in the Host field and set the port number to 5432. But Grafana is still unable to fetch data from the DB. How can I do this?
If your database container keeps its port closed, you should use a Docker network; but if the port is exposed, you should try using your host's IP address instead of the Docker gateway address.
With docker ps, check whether your Postgres container exposes the port, e.g.:
CONTAINER ID   IMAGE            COMMAND                  CREATED          STATUS          PORTS                                       NAMES
e25459aa0ed6   postgres:10.17   "docker-entrypoint.s…"   12 seconds ago   Up 11 seconds   0.0.0.0:5432->5432/tcp, :::5432->5432/tcp   jovial_merkle
If not, stop the service, then restart it with the port exposed: docker run -d -p 5432:5432 ....
Check that there is a firewall rule allowing access to the port from outside the server.
Now you can reach your DB using the server's IP.
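To rule out Grafana itself, you can also do a quick reachability check from another machine with Node's pg client. This is only a sketch: SERVER_IP and the credentials are placeholders you must replace with your own.
// Quick reachability check for the published Postgres port.
// SERVER_IP, user, password and database are placeholders.
const { Client } = require('pg');

async function check() {
  const client = new Client({
    host: 'SERVER_IP',          // the server's IP, not the docker0 gateway
    port: 5432,
    user: 'postgres',
    password: 'secret',
    database: 'postgres',
    connectionTimeoutMillis: 5000,
  });
  await client.connect();
  const res = await client.query('SELECT 1 AS ok');
  console.log(res.rows);        // [ { ok: 1 } ] means the port is reachable
  await client.end();
}

check().catch((err) => console.error('Connection failed:', err.message));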
I have a Node.js gRPC server and a Node.js gRPC client. My gRPC server is an insecure server, and I have already tested it with my client locally.
But when I deployed it to Kubernetes using the NGINX ingress controller and a network load balancer, I got this error:
{
  "error": "14 UNAVAILABLE: Trying to connect an http1.x server"
}
The odd thing is that when I use grpcurl, I get a successful result.
I am using this command:
$ grpcurl -insecure -proto my.proto grpc.example.com my.grpc.package/service
Is the problem with my server, or is my client incorrect?
Usually that message is the result of using the wrong port.
In my case the service was:
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)             AGE
foo-name   ClusterIP   10.0.8.132   <none>        5556/TCP,5557/TCP   123d
I was forwarding my port to 5556 and got the same error message.
I changed the forwarding to port 5557 and it worked fine.
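For reference, this is roughly what the Node client side looks like when pointed at the forwarded port. It is only a sketch using @grpc/grpc-js and @grpc/proto-loader; the package, service and method names are placeholders, since I don't know your proto.
// Sketch of a @grpc/grpc-js client against a locally forwarded port,
// e.g. after `kubectl port-forward svc/foo-name 5557:5557`.
// 'my.proto' comes from the question; mypackage.MyService/myMethod are placeholders.
const grpc = require('@grpc/grpc-js');
const protoLoader = require('@grpc/proto-loader');

const packageDefinition = protoLoader.loadSync('my.proto', {
  keepCase: true,
  longs: String,
  defaults: true,
});
const proto = grpc.loadPackageDefinition(packageDefinition);

// createInsecure() matches an insecure (non-TLS) backend; a TLS-terminating
// ingress would need grpc.credentials.createSsl() instead.
const client = new proto.mypackage.MyService(
  'localhost:5557',
  grpc.credentials.createInsecure()
);

client.myMethod({}, (err, response) => {
  if (err) return console.error(err.code, err.details);
  console.log(response);
});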
I have a Node application listening on port 80, and I have opened port 80 in the security groups.
But when I access my web app in the browser via the public IP (http://xx.xxx.xx.xxx/), it doesn't show up.
What could be the issue?
I've used this doc as a guide: https://aws.amazon.com/premiumsupport/knowledge-center/connect-http-https-ec2/
If your security group already allows the traffic, it means something is wrong inside the instance.
The first step in debugging such an issue is to verify the application's status inside the instance.
SSH into the instance and verify that it responds on localhost: curl localhost
Check whether the process is running; if you are using a Node.js process manager, use pm2 list or forever list, otherwise ps -aux | grep node
Verify that the server is running on port 80.
Check whether the port is occupied: netstat -antu | grep LISTEN
In short, if the application responds on localhost via curl localhost but is still unreachable from outside, then, as mentioned in the comments, the instance is in a private subnet.
You can check this article to learn about public and private subnets.
So the answer was: my security groups were fine. My app wasn't running on port 80 because I didn't set the environment variable correctly. sudo PORT=80 node server.js was the command I needed.
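For anyone hitting the same thing: the server also has to read that variable instead of hard-coding a port, otherwise PORT=80 has no effect. My server.js boils down to roughly this (a sketch using the plain http module; the 3000 default is arbitrary):
// Minimal server.js that honours the PORT environment variable,
// so `sudo PORT=80 node server.js` actually binds to port 80.
const http = require('http');

const port = process.env.PORT || 3000;

http
  .createServer((req, res) => {
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    res.end('hello from EC2\n');
  })
  .listen(port, '0.0.0.0', () => console.log(`Server listening on port ${port}`));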
I'm running a Node.js app on Google Cloud using the Cloud Shell. I've deployed it using gcloud app deploy and everything reports success. If I use gcloud app logs tail -s default I can see the logs; they say my app is listening on port 3000, which is the first debug message I see from my app.
When I invoke the endpoint without the port on the end, i.e.
https://myapp.appspot.com/myendpoint
I get an error,
"GET /myendpoint" 502
If I try with port 3000, i.e.
https://myapp.appspot.com:3000/myendpoint
The request just times out and I get no log messages from the shell.
I have port 3000 open in the firewall, and my app.yaml is:
runtime: nodejs
env: flex
service: default
manual_scaling:
  instances: 1
resources:
  cpu: 1
  memory_gb: 0.5
  disk_size_gb: 10
Update 1:
I've also tried adding a forwarded port to my app.yaml:
network:
  forwarded_ports:
    - 3000/tcp
I also allowed port 3000 in the VPC firewall, but this seems to make no difference.
Update 2:
I can SSH into the instance and access the endpoint using wget http://127.0.0.1:3000/myendpoint, but there is still no external access.
Update 3:
I've also tried port 443, listening on IP 0.0.0.0. But it seems to bind to the IPv6 wildcard address and the port changes to 8443 (somehow). This is just insane...
I resolved the issue by binding my service to port 8080 and removing the "service" field from my app.yaml. External calls are all routed to port 8080 by default.
External calls have no port specified.
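In other words, the app should listen on the port App Engine provides, which is 8080 by default (exposed through the PORT environment variable on the flexible environment, as far as I know), rather than a hard-coded 3000. A rough sketch, assuming Express:
// Sketch for App Engine flexible: listen on the platform-provided port
// (PORT, defaulting to 8080) instead of hard-coding 3000.
const express = require('express');
const app = express();

app.get('/myendpoint', (req, res) => res.json({ status: 'ok' }));

const port = process.env.PORT || 8080;
app.listen(port, () => console.log(`App listening on port ${port}`));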
I'm using Kubernetes v1.0.6 on AWS, deployed using kube-up.sh.
The cluster is using kube-dns.
$ kubectl get svc kube-dns --namespace=kube-system
NAME       LABELS                                                                            SELECTOR           IP(S)       PORT(S)
kube-dns   k8s-app=kube-dns,kubernetes.io/cluster-service=true,kubernetes.io/name=KubeDNS    k8s-app=kube-dns   10.0.0.10   53/UDP
Which works fine.
$ kubectl exec busybox -- nslookup kubernetes.default
Server: 10.0.0.10
Address 1: 10.0.0.10 ip-10-0-0-10.eu-west-1.compute.internal
Name: kubernetes.default
Address 1: 10.0.0.1 ip-10-0-0-1.eu-west-1.compute.internal
This is the resolv.conf of a pod.
$ kubectl exec busybox -- cat /etc/resolv.conf
nameserver 10.0.0.10
nameserver 172.20.0.2
search default.svc.cluster.local svc.cluster.local cluster.local eu-west-1.compute.internal
Is it possible to have the containers use an additional nameserver?
I have a secondary DNS-based service discovery (on, let's say, 192.168.0.1) that I would like my Kubernetes containers to be able to use for DNS resolution.
P.S. A Kubernetes 1.1 solution would also be acceptable :)
Thank you very much in advance,
George
The DNS addon README has some details on this. Basically, the pod will inherit the resolv.conf setting of the node it is running on, so you could add your extra DNS server to the nodes' /etc/resolv.conf. The kubelet also takes a --resolv-conf argument that may provide a more explicit way for you to inject the extra DNS server. I don't see that flag documented anywhere yet, however.
In Kubernetes (probably) 1.2 we'll be moving to a model where nameservers are assumed to be fungible. There are too many resolvers that break when different nameservers serve different subsets of DNS, and there is no real specification here that we can point to.
In other words, we'll start dropping the host's nameserver records from the container's merged resolv.conf and making our own DNS server the only nameserver line. Our DNS will be able to forward requests to upstream nameservers.
I eventually managed to solve this pretty easily by configuring SkyDNS to add an additional nameserver: you can just add the environment variable SKYDNS_NAMESERVERS, as defined in the SkyDNS docs, to your SkyDNS replication controller. It has minimal impact and does not depend on node changes, etc.
env:
  - name: SKYDNS_NAMESERVERS
    value: 10.0.0.254:53,10.0.64.254:53
For those using Kubernetes kube-dns, neither the -nameservers flag nor the SKYDNS_NAMESERVERS environment variable is available any longer.
Usage of /kube-dns:
--alsologtostderr log to standard error as well as files
--config-map string config-map name. If empty, then the config-map will not used. Cannot be used in conjunction with federations flag. config-map contains dynamically adjustable configuration.
--config-map-namespace string namespace for the config-map (default "kube-system")
--dns-bind-address string address on which to serve DNS requests. (default "0.0.0.0")
--dns-port int port on which to serve DNS requests. (default 53)
--domain string domain under which to create names (default "cluster.local.")
--healthz-port int port on which to serve a kube-dns HTTP readiness probe. (default 8081)
--kube-master-url string URL to reach kubernetes master. Env variables in this flag will be expanded.
--kubecfg-file string Location of kubecfg file for access to kubernetes master service; --kube-master-url overrides the URL part of this; if neither this nor --kube-master-url are provided, defaults to service account tokens
--log-backtrace-at traceLocation when logging hits line file:N, emit a stack trace (default :0)
--log-dir string If non-empty, write log files in this directory
--log-flush-frequency duration Maximum number of seconds between log flushes (default 5s)
--logtostderr log to standard error instead of files (default true)
--stderrthreshold severity logs at or above this threshold go to stderr (default 2)
-v, --v Level log level for V logs
--version version[=true] Print version information and quit
--vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
Now, either you put your nameservers in the host's resolv.conf, so DNS is inherited from the node, or you use a custom resolv.conf and pass it to the kubelet with the --resolv-conf flag, as explained here.
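If your kube-dns version supports the --config-map flag shown in the usage above, the usual replacement is a ConfigMap named kube-dns in kube-system carrying stubDomains / upstreamNameservers. This is only a sketch: the domain below is a placeholder for whatever zone your secondary DNS at 192.168.0.1 serves, and 172.20.0.2 stands in for the default upstream resolver; check that your kube-dns version actually supports these keys.
# Sketch of a kube-dns ConfigMap consumed via --config-map.
# "internal.example.com" and both IPs are placeholders/examples.
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  stubDomains: |
    {"internal.example.com": ["192.168.0.1"]}
  upstreamNameservers: |
    ["172.20.0.2"]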
You need to know the IP of your CoreDNS service to set it as a secondary DNS.
Run this command to get the CoreDNS IP:
kubectl -n kube-system get svc
NAME             TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
kube-dns         ClusterIP   172.20.0.10      <none>        53/UDP,53/TCP   43d
metrics-server   ClusterIP   172.20.232.147   <none>        443/TCP         43d
This is how I set up DNS in my deployment YAML.
I posted the Google DNS IP (for clarity) and my CoreDNS IP, but you should use your VPC DNS and your CoreDNS server.
containers:
  - name: nginx
    image: nginx
    ports:
      - containerPort: 8080
dnsPolicy: None
dnsConfig:
  nameservers:
    - 8.8.8.8
    - 172.20.0.10
  searches:
    - 1b.svc.cluster.local
    - svc.cluster.local
    - cluster.local
    - ec2.internal
  options:
    - name: ndots
      value: "5"