Compute Engine: "This site can’t be reached" - node.js

SITUATION:
I am following this tutorial.
When I get to the part where I create an instance and I execute the necessary commands, I get to the following:
To see the application running, go to http://[YOUR_INSTANCE_IP]:8080,
where [YOUR_INSTANCE_IP] is the external IP address of your instance.
PROBLEM:
The page doesn't load. I get the following error message:
This site can’t be reached
QUESTION:
What could have gone wrong?
All previous steps worked perfectly and I was able to access my website locally.
I waited for the Compute Engine instance to be ready by checking:
gcloud compute instances get-serial-port-output my-app-instance --zone us-central1-f
and although I reproduced all the steps twice, I am still met with the error message.
Something must be missing.
EDIT:
My firewall rules:

I guess you didn't apply the firewall tag to the instance?
First, check your compute instance's tags:
gcloud compute instances describe my-app-instance
In your case, you should see http-server under tags.items, like this:
tags:
fingerprint: xxxxxxx
items:
- http-server
- https-server
If they are not there, add the tags to the existing VM instance with this gcloud command:
gcloud compute instances add-tags [YOUR_INSTANCE_NAME] --tags http-server,https-server
To add the tags at instance creation time, include the --tags flag in your create command:
gcloud compute instances create [YOUR_INSTANCE_NAME] --tags http-server,https-server

If your code and firewall rules are correct, it's quite possible that you are connecting to the wrong IP. You should be using the external IP, not the internal one that ifconfig shows; you can check your external IP at whatsmyip.com.
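To double-check the instance's external IP from the command line, something like this should work (a sketch assuming the instance name and zone from the tutorial):
gcloud compute instances describe my-app-instance \
--zone us-central1-f \
--format='get(networkInterfaces[0].accessConfigs[0].natIP)'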

I suggest looking into this step:
gcloud compute instances create my-app-instance \
--image-family=debian-9 \
--image-project=debian-cloud \
--machine-type=g1-small \
--scopes userinfo-email,cloud-platform \
--metadata app-location=$BOOKSHELF_DEPLOY_LOCATION \
--metadata-from-file startup-script=gce/startup-script.sh \
--zone us-central1-f \
--tags http-server
Please ensure the instance is created with the http-server tag.
Otherwise, the firewall rule below will not take effect on your instance:
gcloud compute firewall-rules create default-allow-http-8080 \
--allow tcp:8080 \
--source-ranges 0.0.0.0/0 \
--target-tags http-server \
--description "Allow port 8080 access to http-server"

First, check if the firewall settings are correct, as others have mentioned.
Second, I was having the same problem and solved it by selecting the "Standard" option instead of "Premium" in the Network Service Tier section.
Third, check whether another application is running on the same port using the command:
netstat -tulpn
Which should return something like:
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN -
tcp6 0 0 :::80 :::* LISTEN -
In my case it was not working because I had two applications running on the same port.
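If you only care about one port, you can narrow the output down, for example (assuming your app is supposed to listen on 8080):
# Show only listeners on port 8080; the PID/Program name column tells you what owns it
sudo netstat -tulpn | grep :8080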

Related

gcloud alternate ssh port connection

I'm just getting started with gcloud VMs and trying to secure them a bit. If I change the SSH port, what switch/flag do I add when using a gcloud command like this?
gcloud beta compute ssh --zone "us-east4-c" "base" --project "testproject"
Thanks!
After checking this GCP doc, you can see that you can set a custom port by adding the --ssh-flag flag.
For example:
gcloud compute ssh example-instance --zone=us-central1-a --project=project-id --ssh-flag="-p 8000"
It is also applicable for gcloud beta:
gcloud beta compute ssh example-instance --zone=us-central1-a --project=project-id --ssh-flag="-p 8000"
The sample commands will SSH to your Compute Engine instance on port 8000.
Note: Before connecting, make sure you have an ingress Firewall Rule that accepts TCP on the port you've chosen.
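For example, a rule for port 8000 could look roughly like this (the rule name is just a placeholder, and you may want to restrict --source-ranges to your own IP):
gcloud compute firewall-rules create allow-ssh-8000 \
--allow tcp:8000 \
--source-ranges 0.0.0.0/0 \
--description "Allow SSH on port 8000"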
UPDATE: If the above does not work and you are getting "connection refused", you need to configure your VM to listen on the port you chose. Here are the steps:
Open the sshd configuration file: sudo vi /etc/ssh/sshd_config
Add your chosen port, for example: Port 8000
Save the file.
Reload the sshd service: sudo systemctl reload sshd.service
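To confirm that sshd is now listening on the new port, a quick check on the VM could be:
# List listening TCP sockets owned by sshd
sudo ss -tlnp | grep sshd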

Caddy 2 not running in Docker: "cannot assign requested address"

I'm trying to run the official Caddy 2 docker image. According to that page, to do that you should run:
docker run -p 80:80 \
-v $PWD/index.html:/usr/share/caddy/index.html \
-v caddy_data:/data \
caddy
When I run this, I get the following error:
{"level":"info","ts":1590185286.853735,"msg":"using provided configuration","config_file":"/etc/caddy/Caddyfile","config_adapter":"caddyfile"}
run: loading initial config: loading new config: starting caddy administration endpoint: listen tcp 45.90.28.0:2019: bind: cannot assign requested address
I'm not sure why it's trying to bind to that IP address by default. I tried changing it in the Caddyfile, but it still doesn't bind correctly, and anyway that doesn't really solve the underlying issue here.
What could be causing this problem? Should I be using Caddy 1 instead?
I experienced this issue recently on Linux, and the root cause for me was that my ISP-supplied router's DNS server was resolving localhost to an incorrect IP address.
You might want to try changing your DNS servers to Cloudflare's 1.1.1.1 or Google's 8.8.8.8 servers.
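A quick way to see what your configured DNS server returns for localhost (it should be 127.0.0.1 / ::1; anything else points to the issue described above, assuming nslookup is installed):
nslookup localhost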

Is there any way to connect to an unpublished socket in a Kubernetes pod from outside?

I have an unsecured Postfix instance in a container that listens on port 25. This port is not exposed using a Service. The idea is that only a PHP container that runs inside the same pod should be able to connect to Postfix, and there is no need for additional Postfix configuration.
Is there any way for other processes that run in the same network or Kubernetes cluster to connect to this hidden port?
From what I know, only other containers in the same Pod can connect to an unexposed port, via localhost.
I'm interested from a security point of view.
P.S. I know that one should make sure there are multiple levels of security in place, but I'm interested only theoretically if there is some way to connect to this port from outside the pod.
From what I know, only other containers in the same Pod can connect to an unexposed port, via localhost.
Not exactly.
How this is implemented is a detail of the particular container runtime in use.
...I'm interested only theoretically if there is some way to connect to this port from outside the pod.
So here we go :)
For example, on GKE you can easily access a Pod from another Pod if you know the target Pod's IP.
I have used the following setup on GKE:
apiVersion: v1
kind: Pod
metadata:
  annotations:
    run: fake-web
  name: fake-default-knp
spec:
  containers:
  - image: mendhak/http-https-echo
    imagePullPolicy: IfNotPresent
    name: fake-web
The Docker file for that image can be found here.
It specifies EXPOSE 80 443, so the container listens on these two ports.
$kubectl exec fake-default-knp -- netstat -tulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 :::443 :::* LISTEN 1/node
tcp 0 0 :::80 :::* LISTEN 1/node
I have no services:
$kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 40d
and only 2 Pods.
$kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP
busybox-sleep-less 1/1 Running 3476 40d 10.52.1.6
fake-default-knp 1/1 Running 0 13s 10.52.0.50
And I can connect to it:
$kubectl exec busybox-sleep-less -- telnet 10.52.0.50 80
Connected to 10.52.0.50
$kubectl exec busybox-sleep-less -- telnet 10.52.0.50 443
Connected to 10.52.0.50
As you can see, the container is accessible on POD_IP:container_port from another Pod (located on another node).
P.S. It is worth checking "inter-process communication (IPC)" if you really want to keep using unsecured Postfix and prefer to avoid unauthorized access from outside the Pod. It is described here.
Hope that helps!
Edit 30-Jan-2020
I decided to play with it a little bit. Technically, you can achieve what you want with the help of iptables. You need to specifically ACCEPT all traffic from localhost on port 25 and DROP it from everywhere else.
Something like this (the example below uses port 80 to match the test Pod above):
cat iptab.txt
# Generated by xtables-save v1.8.2 on Thu Jan 30 16:37:27 2020
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -s 127.0.0.1/32 -p 6 -m tcp --dport 80 -j ACCEPT
-A INPUT -p 6 -m tcp --dport 80 -j DROP
COMMIT
I've tested it and can't telnet to port 80 from anywhere except that very Pod. Please note that I had to run my container in privileged mode in order to edit iptables rules directly from the Pod. But that goes beyond the initial question. :)
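For completeness, a rough sketch of how such a rule file could be loaded inside the (privileged) container, where iptab.txt is the file shown above:
# Load the filter rules (requires NET_ADMIN / privileged mode)
iptables-restore < iptab.txt
# Verify the rules are in place
iptables -L INPUT -n --line-numbers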
Yes, you can use kubectl port-forward to set up a tunnel directly to it for testing purposes.
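For example, assuming the Postfix pod were named postfix-0 (a placeholder), something like this would forward local port 2525 to the pod's port 25 for testing:
kubectl port-forward pod/postfix-0 2525:25
# then, from another terminal on the same machine:
telnet localhost 2525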

Can't see application running on external IP of instance

Google Compute Engine newbie here.
I'm following along with the bookshelf tutorial: https://cloud.google.com/nodejs/tutorials/bookshelf-on-compute-engine
But I ran into a problem. When I try to view my application at http://[YOUR_INSTANCE_IP]:8080 with my external IP,
nothing shows up. I've tried running the tutorial again and again, but the same problem remains.
EDIT:
My firewall rules: http://i.imgur.com/gHyvtie.png
My VM instance:
http://i.imgur.com/mDkkFRW.png
VM instance showing the correct networking tags:
http://i.imgur.com/NRICIGl.png
Going to http://35.189.73.115:8080/ in my web browser still fails to show anything. Says "This page isn't working"
TL;DR - You're most likely missing firewall rules to allow incoming traffic to port 8080 on your instances.
Default Firewall rules
The Google Compute Engine firewall by default blocks all ingress traffic (i.e. incoming network traffic) to your virtual machines. If your VM is created on the default network (which is usually the case), a few ports such as 22 (SSH) and 3389 (RDP) are allowed.
The default firewall rules are described here.
Opening ports for ingress
The ingress firewall rules are described in detail here.
The recommended approach is to create a firewall rule which allows incoming traffic on port 8080 to VMs carrying a specific tag you choose. You can then attach this tag only to the VMs where you want to allow ingress on 8080.
The steps to do this using gcloud:
# Create a new firewall rule that allows INGRESS tcp:8080 with VMs containing tag 'allow-tcp-8080'
gcloud compute firewall-rules create rule-allow-tcp-8080 --source-ranges 0.0.0.0/0 --target-tags allow-tcp-8080 --allow tcp:8080
# Add the 'allow-tcp-8080' tag to a VM named VM_NAME
gcloud compute instances add-tags VM_NAME --tags allow-tcp-8080
# If you want to list all the GCE firewall rules
gcloud compute firewall-rules list
Here is another stack overflow answer which walks you through how to allow ingress traffic on specific ports to your VM using Cloud Console Web UI (in addition to gcloud).
PS: These are also part of the steps in the tutorial you linked.
# Add the 'http-server' tag while creating the VM
gcloud compute instances create my-app-instance \
--image=debian-8 \
--machine-type=g1-small \
--scopes userinfo-email,cloud-platform \
--metadata-from-file startup-script=gce/startup-script.sh \
--zone us-central1-f \
--tags http-server
# Add firewall rules to allow ingress tcp:8080 to VMs with tag 'http-server'
gcloud compute firewall-rules create default-allow-http-8080 \
--allow tcp:8080 \
--source-ranges 0.0.0.0/0 \
--target-tags http-server \
--description "Allow port 8080 access to http-server"

Service IP is not accessible across nodes in kubernetes

I have created a Kubernetes v1.2 cluster running in the Azure cloud with one master (Master) and two nodes (Node1 and Node2). I have deployed an Nginx and a Tomcat application. Both containers are deployed in individual pods via ReplicationControllers, and each has a Service.
The Nginx pod is deployed on Node1 and the Tomcat pod on Node2. Now Nginx on Node1 is trying to access Tomcat via Tomcat's service IP (clusterIP), which is on Node2, but it is unreachable.
Nginx serviceIP: 10.16.0.2 Node1
Tomcat serviceIP: 10.16.0.4 Node2
I tried curl 10.16.0.4:8080 from Node2 and it works, but the same from Node1 fails with curl: (52) Empty reply from server.
So communication to the service IP across nodes fails. Is this a problem with kube v1.2?
Note: ClusterIP for the Service will be specified at the time of creating the service.
Since you are able to reach the cluster IP from Node2, it looks like the service selector is properly defined.
Kube-proxy is the component that watches the services and creates iptables rules for endpoints. I would check whether kube-proxy is running properly on Node1, and then check whether the iptables rules are set up properly for the cluster IP you are trying to reach.
You can see these with iptables -L -t nat | grep namespace/servicename
Here is an example:
bash-4.3# iptables -L -t nat | grep kube-system/heapster
KUBE-MARK-MASQ all -- 172.168.16.182 anywhere /* kube-system/heapster: */
DNAT tcp -- anywhere anywhere /* kube-system/heapster: */ tcp to:172.168.16.182:8082
KUBE-SVC-BJM46V3U5RZHCFRZ tcp -- anywhere 192.168.172.66 /* kube-system/heapster: cluster IP */ tcp dpt:http
KUBE-SEP-KNJP5BBKUOCH7NDB all -- anywhere anywhere /* kube-system/heapster: */
In this example I looked up heapster, which runs in the kube-system namespace. It shows that the cluster IP 192.168.172.66 DNATs to the endpoint 172.168.16.182, which is the pod's IP. (You should cross-check this with the endpoints listed in kubectl describe service.)
If it is not there, restarting kube-proxy might help.
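A rough way to check kube-proxy on Node1 (the exact commands depend on how the cluster was set up; on a v1.2 cluster kube-proxy typically runs as a process or systemd unit on the node):
# On Node1: is kube-proxy running?
ps aux | grep [k]ube-proxy
# If it runs under systemd, its logs are usually available via:
sudo journalctl -u kube-proxy --since "1 hour ago"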
