TCP whitelist for custom TCP port does not work in haproxy ingress - haproxy-ingress

Hi, I was able to configure haproxy-ingress for a custom TCP port (RabbitMQ), using helm with custom values:
# ha-values.yaml
controller:
  ingressClass: haproxy
  config:
    whitelist-source-range: 251.161.180.161
    # use-proxy-protocol: "true"
  # TCP service key:value pairs
  # <port>: <namespace>/<servicename>:<portnumber>[:[<in-proxy>][:<out-proxy>]]
  # https://haproxy-ingress.github.io/docs/configuration/command-line/#tcp-services-configmap
  tcp:
    15672: "default/rabbitmq-bugs:15672"
    5672: "default/rabbitmq-bugs:5672"
I installed the chart with:
helm install haproxy-ingress haproxy-ingress/haproxy-ingress \
--create-namespace --namespace=ingress-controller \
--values yaml/ha-values.yaml
I published on DigitalOcean, so a LoadBalancer was created, and port 15672 was correctly forwarded to the internal RabbitMQ Kubernetes service.
I was not able to make the whitelist option work.
The service was always reachable.
I also tried enabling the proxy protocol on both the load balancer and haproxy, but the whitelist still didn't take effect.
It seems the whitelist option doesn't work for TCP filtering.
Has anyone succeeded in whitelisting a custom TCP port?
Thanks.
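One thing worth double-checking (an assumption, not a confirmed fix): source-range options are usually expressed in CIDR notation, so a single host would be written with a /32 suffix, e.g.:

```yaml
# ha-values.yaml (sketch - the /32 CIDR notation is an assumption)
controller:
  ingressClass: haproxy
  config:
    whitelist-source-range: "251.161.180.161/32"
```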

Related

How to Connect an Externally Hosted Website with AWS CloudFront CDN

I am hosting my site on Vultr and I want to connect it to CloudFront CDN. How do I do this? I have tried, but it shows an error about an origin connectivity issue.
You see, this is a very specific situation, and Vultr does not have the same integration with CloudFront as it does with Cloudflare. For this I had to do the following:
First:
Allow the CDN IPs through the server's firewall: there are around 135 IP ranges and Vultr's firewall panel can only hold 50 entries, so this responsibility moves to the server itself.
Create a script that adds only the Cloudflare IPs to UFW.
I started from this repo: https://github.com/Paul-Reed/cloudflare-ufw
So I have this in CRON:
0 0 * * 1 /usr/local/bin/cloudflare-ufw > /dev/null 2>&1
And for my case the script looked like this:
#!/bin/sh
# Fetch the current Cloudflare IP ranges (v4 and v6)
curl -s https://www.cloudflare.com/ips-v4 -o /tmp/cf_ips
curl -s https://www.cloudflare.com/ips-v6 >> /tmp/cf_ips
# Allow all TCP traffic from these IPs (no port restrictions)
for cfip in `cat /tmp/cf_ips`; do ufw allow proto tcp from $cfip comment 'Cloudflare IP'; done
ufw reload > /dev/null
OTHER EXAMPLES OF RULES
Restrict to port 80
for cfip in `cat /tmp/cf_ips`; do ufw allow proto tcp from $cfip to any port 80 comment 'Cloudflare IP'; done
Restrict to ports 22 and 443
for cfip in `cat /tmp/cf_ips`; do ufw allow proto tcp from $cfip to any port 22,443 comment 'Cloudflare IP'; done
Restrict to ports 80 and 443
for cfip in `cat /tmp/cf_ips`; do ufw allow proto tcp from $cfip to any port 80,443 comment 'Cloudflare IP'; done
ufw reload > /dev/null
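To sanity-check the loop before touching the firewall, the ufw call can be replaced with echo (a dry run; the sample file and IP ranges below are placeholders, not the live Cloudflare list):

```shell
#!/bin/sh
# Dry run: print the ufw commands instead of executing them.
# /tmp/cf_ips_sample holds sample ranges only.
cat > /tmp/cf_ips_sample <<'EOF'
103.21.244.0/22
2400:cb00::/32
EOF

# Prints one ufw command per range in the sample file.
for cfip in `cat /tmp/cf_ips_sample`; do
  echo "ufw allow proto tcp from $cfip to any port 80,443 comment 'Cloudflare IP'"
done
```

Once the printed commands look right, drop the echo and run the loop for real.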
Second:
I configured CloudFront (my case was specific to WordPress traffic), following these steps:
I created an AWS Certificate Manager public certificate
As per documents on AWS: https://docs.aws.amazon.com/pt_br/acm/latest/userguide/gs-acm-request-public.html#request-public-console
I created the distribution on CloudFront: https://docs.aws.amazon.com/pt_br/AmazonCloudFront/latest/DeveloperGuide/distribution-web-creating.html
The distribution will be responsible for the security and performance of the application.
I created a certificate for the origin server: https://www.gocache.com.br/seguranca/como-gerar-certificado-ssl-via-terminal-certbot-com-wildcard/
It is necessary to install a valid SSL certificate on your server to make a secure connection with CloudFront. I recommend Let's Encrypt as a free solution for generating certificates.
I registered the record in the DNS table: https://docs.aws.amazon.com/pt_br/Route53/latest/DeveloperGuide/routing-to-cloudfront-distribution.html
For the distribution to be accessible by the website address, it is necessary to register the address in the DNS table.
The record is a CNAME and its value is the distribution's domain name. You can find this information in the Details section on the CloudFront distribution's General tab.
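As an illustration (the hostname and distribution domain below are placeholders), the DNS record looks like:

```
www.example.com.    300    IN    CNAME    d111111abcdef8.cloudfront.net.
```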

Unable to connect to EC2 + Elastic IP

I am setting up a Rest API on AWS EC2 and configuring the instance.
I have a problem: despite being able to connect via SSH, I cannot make an API call on port 5000.
The VM has nothing configured, only Node and PM2.
Trying to connect through the public DNS, I can't establish a connection either.
I have these security groups enabled.
5000 TCP 0.0.0.0/0
22 TCP 0.0.0.0/0
5000 TCP ::/0
443 TCP 0.0.0.0/0
443 TCP ::/0
80 TCP 0.0.0.0/0
80 TCP ::/0
Can someone help me with this? I don't understand what is happening.
What is the exact error - is it timing out?
If it is, the problem is with the security group. If not, SSH into your instance and check locally with curl localhost:5000; you may find that PM2 is not running the app properly.
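A quick way to tell the two cases apart from inside the instance (a sketch; assumes bash, and that the app should be listening on 5000):

```shell
#!/usr/bin/env bash
# Returns success if something accepts TCP connections on host:port.
# Uses bash's built-in /dev/tcp pseudo-device, so no extra tools needed.
port_open() {
  (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null
}

if port_open 127.0.0.1 5000; then
  echo "app is listening locally - suspect the security group or a network ACL"
else
  echo "nothing on 5000 - check 'pm2 list' and that the app binds 0.0.0.0, not just localhost"
fi
```

An app that binds only 127.0.0.1 will pass this local check but still be unreachable from outside, so checking the bind address is worthwhile in both cases.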

Istio in Azure AKS - Connection issues over 15001 port while connecting to Azure Redis Cache

We are facing issues on port 15001 with Istio deployed in Azure AKS.
Currently we have deployed Istio in AKS and are trying to connect to an Azure Cache for Redis instance in cluster mode. Our Azure Redis instance has more than two shards with SSL enabled, and one of the master nodes is assigned port 15001. We were able to connect to Azure Redis from AKS pods over ports 6380, 15000, 15002, 15003, 15004 and 15005. However, when we try to connect over 15001 we see issues. When we try to connect to Redis over port 15001 from a namespace without Istio sidecar injection in the same AKS cluster, the connection works fine.
Below are the logs from rediscli pod deployed in our AKS cluster.
Success case:
redis-cli -h our-redis-host.redis.cache.windows.net -p 6380 -a our-account-key --cacert "BaltimoreCyberTrustRoot.pem" --tls ping
OUTPUT:
Warning: Using a password with ‘-a’ or ‘-u’ option on the command line interface may not be safe.
PONG
We are able to connect to Redis over all ports - 6380, 15000, 15002, 15003, 15004 and 15005. However, when we try to connect using 15001, we get the error below.
Failure case:
redis-cli -h our-redis-host.redis.cache.windows.net -p 15001 -a our-account-key --cacert "BaltimoreCyberTrustRoot.pem" --tls ping
OUTPUT:
Warning: Using a password with ‘-a’ or ‘-u’ option on the command line interface may not be safe.
Could not connect to Redis at our-redis-host.redis.cache.windows.net :15001: SSL_connect failed: Success
I could not see any entry in the istio-proxy logs when trying port 15001. However, for the other ports we can see entries like the one below:
[2021-05-05T00:59:18.677Z] "- - -" 0 - - - "-" 600 3982 10 - "-" "-" "-" "-" "172.XX.XX.XX:6380" PassthroughCluster 172.XX.XX.XX:45478 172.22.XX.XX:6380 172.XX.XX.XX:45476 - -
Is this because port 15001 blocks outbound requests, or manipulates certs for requests on that port? If so, is there any configuration to change the proxy port to something other than 15001?
Note: I posted this on the Istio forum as well; posting here for better reach.
Istio versions:
> istioctl version
client version: 1.8.2
control plane version: 1.8.3
data plane version: 1.8.3
Port 15001 is used by Envoy in Istio. Applications should not use ports reserved by Istio, to avoid conflicts.
You can read more here
We used Istio's excludeOutboundPorts annotation to bypass the Envoy proxy's interception of traffic on the outbound ports that were problematic due to Istio's port requirements.
Using the annotations provided by Istio, we can exclude interception by IP ranges or by ports. Below is an example with ports:
template:
  metadata:
    labels:
      app: 'APP-NAME'
    annotations:
      traffic.sidecar.istio.io/excludeOutboundPorts: "15001"
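The same mechanism works with IP ranges instead of ports (a sketch; the CIDR below is a placeholder, not the actual Redis endpoint's range):

```yaml
annotations:
  traffic.sidecar.istio.io/excludeOutboundIPRanges: "10.0.0.0/8"
```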
References:
Istio Annotations
Istio traffic capture limitations
Istio Port Requirement

Angular5 EC2 VPC deployment

I'm trying to deploy my application on an Amazon EC2 instance, but I can't reach the application over the public IP address, such as 34.54.23.22:4200.
I've changed the security group and allowed TCP connections on port 4200.
Node.js is working fine; I've installed it.
ng serve is working.
My inbound rules :
80 tcp 0.0.0.0/0, ::/0
4100 tcp 0.0.0.0/0, ::/0
443 tcp 0.0.0.0/0, ::/0
22 tcp 0.0.0.0/0
Thanks in advance!
I think the culprit is in how you run ng serve. If you just ran that command as-is, it won't work, because it doesn't allow connections from outside. For production deployments I highly recommend using a real server like nginx or Apache to serve your bundles (run ng build --prod).
To address your current situation, you should be able to reach your page if you run ng serve --host 0.0.0.0 --port 80, which allows connections on port 80 from almost anywhere. Just be sure your security group has that port open as well.
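As a sketch of the production route (the server_name and the build output path are placeholder assumptions), an nginx server block serving the ng build --prod output could look like:

```
server {
    listen 80;
    server_name example.com;            # placeholder
    root /var/www/my-app/dist/my-app;   # assumed ng build output path

    location / {
        try_files $uri $uri/ /index.html;  # SPA fallback to index.html
    }
}
```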

Can't see application running on external IP of instance

Google Compute Engine newbie here.
I'm following along with the bookshelf tutorial: https://cloud.google.com/nodejs/tutorials/bookshelf-on-compute-engine
But I run into a problem: when I try to view my application at http://[YOUR_INSTANCE_IP]:8080 with my external IP, nothing shows up. I've tried running through the tutorial again and again, but the same problem remains.
EDIT:
My firewall rules: http://i.imgur.com/gHyvtie.png
My VM instance:
http://i.imgur.com/mDkkFRW.png
VM instance showing the correct networking tags:
http://i.imgur.com/NRICIGl.png
Going to http://35.189.73.115:8080/ in my web browser still fails to show anything; it says "This page isn't working".
TL;DR - You're most likely missing firewall rules to allow incoming traffic to port 8080 on your instances.
Default Firewall rules
Google Compute Engine's firewall blocks all ingress traffic (i.e. incoming network traffic) to your virtual machines by default. If your VM is created on the default network (which is usually the case), a few ports like 22 (SSH) and 3389 (RDP) are allowed.
The default firewall rules are described here.
Opening ports for ingress
The ingress firewall rules are described in detail here.
The recommended approach is to create a firewall rule that allows incoming traffic on port 8080 to VMs carrying a specific tag you choose. You can then associate this tag only with the VMs where you want to allow ingress on 8080.
The steps to do this using gcloud:
# Create a new firewall rule that allows INGRESS tcp:8080 with VMs containing tag 'allow-tcp-8080'
gcloud compute firewall-rules create rule-allow-tcp-8080 --source-ranges 0.0.0.0/0 --target-tags allow-tcp-8080 --allow tcp:8080
# Add the 'allow-tcp-8080' tag to a VM named VM_NAME
gcloud compute instances add-tags VM_NAME --tags allow-tcp-8080
# If you want to list all the GCE firewall rules
gcloud compute firewall-rules list
Here is another stack overflow answer which walks you through how to allow ingress traffic on specific ports to your VM using Cloud Console Web UI (in addition to gcloud).
PS: These are also part of the steps in the tutorial you linked.
# Add the 'http-server' tag while creating the VM
gcloud compute instances create my-app-instance \
--image=debian-8 \
--machine-type=g1-small \
--scopes userinfo-email,cloud-platform \
--metadata-from-file startup-script=gce/startup-script.sh \
--zone us-central1-f \
--tags http-server
# Add firewall rules to allow ingress tcp:8080 to VMs with tag 'http-server'
gcloud compute firewall-rules create default-allow-http-8080 \
--allow tcp:8080 \
--source-ranges 0.0.0.0/0 \
--target-tags http-server \
--description "Allow port 8080 access to http-server"