This article, https://docs.newrelic.com/docs/apm/new-relic-apm/getting-started/networks,
suggests that I allow outgoing firewall connections to the following IPs and ports:
Networks
50.31.164.0/24
162.247.240.0/22
Ports
TCP 80
TCP 443
I'm using ufw; how can I do that?
I've tried this:
sudo ufw allow proto tcp from 50.31.164.0/24 port 80
sudo ufw allow proto tcp from 50.31.164.0/24 port 443
sudo ufw allow proto tcp from 162.247.240.0/22 port 80
sudo ufw allow proto tcp from 162.247.240.0/22 port 443
When I check my rules, they look like this:
Am I doing this right?
You already allow connections from anywhere to ports 80 and 443, so you don't need the extra allow statements for those specific IP ranges.
The documentation's request to allow outgoing connections applies when you are filtering outgoing traffic from your LAN (say, a kiosk in a school hallway or a lab machine that can only reach the school's own website) and you want hosts on your local network to still be able to reach New Relic.
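If you did want explicit outbound rules anyway, a sketch using ufw's out direction (the comma-separated port list requires specifying the protocol):

sudo ufw allow out proto tcp to 50.31.164.0/24 port 80,443
sudo ufw allow out proto tcp to 162.247.240.0/22 port 80,443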
Currently I'm trying to make one container communicate with another one that is exposed and running on the same machine.
Let's say the external IP address is 123.123.123.123 and I exposed a basic NGINX Docker container on port 8080 via the ports property inside my docker-compose.yaml, and then I execute curl http://123.123.123.123:8080. From an external machine it successfully gets a response back, and the same goes for executing the command from the host machine. However, when I execute this curl from another container on the same machine, it exits with a timeout.
I'm unsure of the cause. I have tried temporarily exposing all ports via https://serverfault.com/a/129087, and this did allow communication from one container to the exposed container (of course, I restored the previous configuration afterwards).
It is important for me to be able to use the external routing, especially since in production jwilder/nginx-proxy is used with HTTPS certificates.
The machine is running Ubuntu 20.04, I haven't altered any firewall settings provided by iptables.
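For reference, a minimal sketch of the kind of compose service described above (the service name and image tag are placeholders, not my actual configuration):

services:
  nginx:
    image: nginx:latest
    ports:
      - "8080:80"   # publish container port 80 on host port 8080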
ufw status output:
Status: active
To Action From
-- ------ ----
22/tcp LIMIT Anywhere
2375/tcp ALLOW Anywhere
2376/tcp ALLOW Anywhere
22/tcp (v6) LIMIT Anywhere (v6)
2375/tcp (v6) ALLOW Anywhere (v6)
2376/tcp (v6) ALLOW Anywhere (v6)
Probably the most relevant part of iptables -L:
Chain DOCKER (6 references)
target prot opt source destination
ACCEPT tcp -- anywhere 172.18.0.2 tcp dpt:9000
ACCEPT tcp -- anywhere 172.20.0.2 tcp dpt:http
ACCEPT tcp -- anywhere 172.18.0.5 tcp dpt:https
ACCEPT tcp -- anywhere 172.18.0.5 tcp dpt:http
ACCEPT tcp -- anywhere 172.19.0.6 tcp dpt:mysql
Curious how this issue could be fixed. Of course, adding both containers to the same internal network fixes this, but since port 8080 is already exposed to the world, I would like that to cover internal traffic as well. I'm using Docker Compose, and these two containers are not part of the same docker-compose.yaml.
The solution was staring me in the face all this time. Basically, with this image provided by DigitalOcean, both iptables and UFW are enabled at the same time, but UFW does not automatically open a port whenever you publish one in your Docker configuration.
Apparently that mostly isn't a problem for external traffic, since in this scenario UFW does not handle the externally published ports; that is all up to the iptables rules Docker manages. Adding ufw allow http and ufw allow https fixes it!
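For completeness, the commands that fixed it (ufw resolves http and https to ports 80 and 443):

sudo ufw allow http
sudo ufw allow https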
I am setting up a REST API on AWS EC2 and configuring the instance.
The problem is that although I can connect via SSH, I cannot make an API call on port 5000.
The VM has nothing configured, only Node and PM2.
Trying to reach it through the public DNS, I can't establish a connection either.
I have these security group rules enabled:
Port  Protocol  Source
5000  TCP       0.0.0.0/0
22    TCP       0.0.0.0/0
5000  TCP       ::/0
443   TCP       0.0.0.0/0
443   TCP       ::/0
80    TCP       0.0.0.0/0
80    TCP       ::/0
Can someone help me with this? I don't understand what is happening.
What is the exact error? Is it timing out?
If it is, the problem is with the security group. If not, try SSHing into your instance and testing locally with curl; you might find that the PM2 server is not running the app properly.
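A quick way to check from inside the instance (assuming the API is supposed to answer on port 5000):

# does anything answer locally?
curl http://localhost:5000
# is the process managed by PM2 actually online?
pm2 status
# is anything listening on port 5000?
sudo ss -tlnp | grep ':5000'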
It's my first time configuring a server running on Amazon EC2.
I figured out how to run my Node app on port 80, but now I'm trying to run it on port 443 with a Let's Encrypt SSL certificate. To get it working on port 80, I added
sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 3000
and
sudo iptables -t nat -A OUTPUT -p tcp --dport 80 -j REDIRECT --to-ports 3000
and everything worked fine. But now, after installing Let's Encrypt, I try to do the same thing with port 443 instead of 80 and it's not working.
Let's Encrypt configured all the files for me automatically, so the redirect from HTTP to HTTPS is working fine, and when my iptables NAT table is empty I see the default Ubuntu website on https://. When I run the lines mentioned above with port 443, the app is still not working (the browser can't even load anything). It only works on http:/...:3000.
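For reference, the port 443 variants of those rules (the same redirect approach as above):

sudo iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-ports 3000
sudo iptables -t nat -A OUTPUT -p tcp --dport 443 -j REDIRECT --to-ports 3000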
I've added port 443 to the security group on EC2.
What can I do? Thanks.
You need to check your security group inbound/outbound rules to see which hosts port 443 is open to. A valid but dangerous configuration, just for testing, is to allow everything on inbound and outbound, to see whether the problem is in your security group.
Beyond that, you need to be sure something is actually listening on the bound port. Are you using Amazon Linux?
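One way to check both from the instance itself (assuming the redirect setup described above):

# is anything listening on 443, or on 3000 behind the redirect?
sudo ss -tlnp | grep -E ':443|:3000'
# which NAT redirect rules are currently active?
sudo iptables -t nat -L -n --line-numbers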
As we know, there are simple steps to give access to any VM port from outside.
Here are the steps I have already covered:
- opened the VM instance, ran the server on port 80, and checked that localhost loads in the local browser,
- added port 80 to the inbound rules of the network security group,
- turned off all three firewall profiles in the VM's Windows settings.
Still, the public IP is not accessible from outside. Ping ... results in "Request timeout", and port 80 is not reachable from the browser using the public IP.
Edit: Surprisingly, I have found a Deny tag in the report! Does it matter?
Normally, after adding port 80 to the NSG inbound rules and turning off the VM's Windows firewall, we can access the website from outside.
In your scenario, we should check whether the web site works on IPv4, IPv6, or both.
We can use this command to check it:
C:\Users\jason>netstat -ant
Active Connections
Proto Local Address Foreign Address State Offload State
TCP 0.0.0.0:80 0.0.0.0:0 LISTENING InHost
TCP 0.0.0.0:135 0.0.0.0:0 LISTENING InHost
TCP 0.0.0.0:445 0.0.0.0:0 LISTENING InHost
TCP 0.0.0.0:3389 0.0.0.0:0 LISTENING InHost
TCP [::]:80 [::]:0 LISTENING InHost
TCP [::]:135 [::]:0 LISTENING InHost
TCP [::]:445 [::]:0 LISTENING InHost
TCP [::]:3389 [::]:0 LISTENING InHost
Here the web service listens on port 80 on both IPv4 and IPv6, so we can use the IPv4 public IP address and port 80 to access the web site.
We should make sure the web service works on IPv4.
==========================
Update:
Please check your VNet's subnet: is the subnet associated with an NSG? If yes, we should modify that NSG's inbound rules and add port 80 there as well.
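If the subnet does have its own NSG, a sketch of adding the rule with the Azure CLI (the resource group and NSG names here are placeholders):

# allow inbound HTTP on the subnet's NSG (names are hypothetical)
az network nsg rule create \
  --resource-group myResourceGroup \
  --nsg-name mySubnetNsg \
  --name allow-http \
  --priority 100 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --destination-port-ranges 80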
It also seems to be an issue with the Node server used by the Angular CLI; for more information, please refer to the link below:
https://github.com/angular/angular-cli/issues/1793
So I have a FreeBSD router running PF and Squid. It has three network interfaces: two connected to upstream providers (em0 and em1 respectively), and one for the LAN (re0) that we serve. There is some load balancing configured with PF: basically, it routes all traffic to ports 1-1023 through one interface (em0) and everything else through the other (em1).
Now, I have a Squid proxy also running on the box that transparently redirects any HTTP request from the LAN to port 3128 on 127.0.0.1. Since Squid re-issues this request to the outside on port 80, it should follow the load-balancing rule through em0, no? The problem is, when we tested it (by browsing from a computer on the LAN to http://whatismyip.com), it reported the external IP of the em1 interface! When we turn Squid off, the external IP of em0 is reported, as expected.
How do I make Squid behave with the load balancing rule that we have set up?
Here are the related settings in /etc/pf.conf:
ext_if1="em1" # DSL
ext_if2="em0" # T1
int_if="re0"
ext_gw1="x.x.x.1"
ext_gw2="y.y.y.1"
int_addr="10.0.0.1"
int_net="10.0.0.0/16"
dsl_ports = "1024:65535"
t1_ports = "1:1023"
...
squid=3128
rdr on $int_if inet proto tcp from $int_net \
to any port 80 -> 127.0.0.1 port $squid
pass in quick on $int_if route-to lo0 inet proto tcp \
from $int_net to 127.0.0.1 port $squid keep state
...
# load balancing
pass in on $int_if route-to ($ext_if1 $ext_gw1) \
proto tcp from $int_net to any port $dsl_ports keep state
pass in on $int_if route-to ($ext_if1 $ext_gw1) \
proto udp from $int_net to any port $dsl_ports
pass in on $int_if route-to ($ext_if2 $ext_gw2) \
proto tcp from $int_net to any port $t1_ports keep state
pass in on $int_if route-to ($ext_if2 $ext_gw2) \
proto udp from $int_net to any port $t1_ports
pass out on $ext_if1 route-to ($ext_if2 $ext_gw2) from $ext_if2 to any
pass out on $ext_if2 route-to ($ext_if1 $ext_gw1) from $ext_if1 to any
I have tried appending the following rule, but it did nothing:
pass in on $int_if route-to ($ext_if2 $ext_gw2) \
proto tcp from 127.0.0.1 to any port $t1_ports keep state
Thanks!
Your appended rule never matches because Squid's outgoing requests originate on the firewall itself: they never pass in on $int_if, so they just follow the default route out em1. If you want all outgoing Squid requests to go out from a specific IP address on a specific interface, you should be able to use the "tcp_outgoing_address" option in squid.conf to specify an IP address on em0.
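A sketch of the squid.conf change, following the placeholder style of the pf.conf above (y.y.y.2 stands in for an address assigned to em0, the T1 interface behind ext_gw2):

# squid.conf: source all outgoing requests from the em0 (T1) address
tcp_outgoing_address y.y.y.2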