I have two projects of my own (P1 and P2) in Google Compute Engine and two service accounts (SA1 in P1 and SA2 in P2).
What I'm trying to achieve is that instances in P1 running as SA1 can access resources in P2 running as SA2, say on port 2345. I have verified that the application is running and listening on port 2345.
Here is what has been done so far, following this documentation: https://cloud.google.com/vpc/docs/using-firewalls
I created a new rule in P2 where the source is SA1 and the target is SA2, but it doesn't work and port 2345 is still unreachable.
Then I thought I needed to add a second source filter of 0.0.0.0/0, reasoning that any resource running under SA1 should be allowed through no matter which IP the traffic comes from. But with that filter the port appears to be open to the whole world, which is exactly what I'm trying to prevent.
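For reference, the rule I created looks roughly like the following sketch, written with the google-cloud-compute Python client (the project, network, and service account names here are placeholders for my real values):

from google.cloud import compute_v1

# Ingress rule in P2: allow TCP 2345 from instances running as SA1
# to instances running as SA2. All identifiers below are placeholders.
firewall = compute_v1.Firewall(
    name="allow-sa1-to-sa2-2345",
    network="global/networks/default",
    direction="INGRESS",
    source_service_accounts=["sa1@p1.iam.gserviceaccount.com"],
    target_service_accounts=["sa2@p2.iam.gserviceaccount.com"],
    allowed=[compute_v1.Allowed(I_p_protocol="tcp", ports=["2345"])],
)
compute_v1.FirewallsClient().insert(project="p2", firewall_resource=firewall)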
I've spent a lot of time looking for a solution on the internet, without success. It seems nobody has had a similar issue before me, because I couldn't find anything related to my problem.
Does anybody know what I'm doing wrong, or what I'm missing?
Any suggestions are appreciated.
Thanks in advance.
We've developed a piece of server software, and for ease of use for end users we run the localtunnel-server app on one of our Linux servers, to get around the need for port forwarding and messing around with firewalls.
The problem is that it seems to tunnel all traffic on port 80, and we are afraid of this being abused. We would like to restrict the traffic somehow, and I wanted to know whether that is even possible.
For example, let's say our app uses the "/myapp" virtual directory on the localhost website. So if a request is supposed to go to http://localhost/myapp/index.html, the traffic gets tunneled to http://mytunnel.myserver.com/myapp/index.html
The problem is that if there are other sites running on localhost, http://localhost/someotherapp also gets through. We'd like to block URLs that don't match a given format or don't contain a keyword such as "/myapp".
Is that even possible? If so, any guidance on how to achieve this would be greatly appreciated.
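To illustrate the kind of restriction we have in mind, here is a minimal sketch of a path-filtering proxy that could sit between the tunnel and the app (Python standard library only; the upstream address, port, and prefix are assumptions):

from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

UPSTREAM = "http://127.0.0.1:8080"  # assumed address of the real app
ALLOWED_PREFIX = "/myapp"           # only this virtual directory passes

class FilterProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        # Reject anything outside the allowed virtual directory.
        if not self.path.startswith(ALLOWED_PREFIX):
            self.send_error(403, "Forbidden path")
            return
        # Relay the request to the real app and copy the response back.
        with urlopen(UPSTREAM + self.path) as upstream:
            body = upstream.read()
            self.send_response(upstream.status)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

HTTPServer(("127.0.0.1", 8000), FilterProxy).serve_forever()

Anything that passes the prefix check is relayed to the app unchanged; everything else gets a 403 before it ever reaches the tunnel.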
I am writing because I have a design problem with my DNS infrastructure. My infrastructure is composed of one DNS machine (recursive or forwarding) and another, authoritative machine that serves, say, different views depending on the source of the client (you can think of it as Bind, even if that is not actually the case). The authoritative machine should not be queried directly; queries must go through the other one. To summarize, here is the infrastructure:
> Client Location 1   Client Location 2   Client Location 3
>                \            |           /
>                 DNS Recursive or Forwarding
>                              |
>                 DNS Authoritative with 3 "views"
I have thought of different solutions to this problem:
Run a separate DNS listener on a different port of the recursive (or forwarding) server for each view, each listener querying the authoritative DNS in a way that identifies the origin. But I find this solution rather ugly, and the number of listeners will grow quickly as the number of views increases.
Use the EDNS extension to forward the client's network to the authoritative server (but that seems pretty complicated).
I wanted to know whether you have other solutions and, if not, which of these would be the best.
Thank you in advance!
The first solution does not seem really workable, as there is essentially no way to change the default DNS port in the resolver settings of most client operating systems. You would instead need separate recursive nameservers on separate IP addresses, with each client configured to use its specific nameserver.
The second solution can work: it is ECS, the "EDNS Client Subnet" feature, described in RFC 7871 and supported by various nameservers. See for example Bind: https://www.isc.org/wp-content/uploads/2017/04/ecs.pages.pdf
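To make the mechanism concrete, here is a minimal sketch with dnspython that attaches the client's network to a query via an ECS option (all names and addresses are placeholders):

import dns.edns
import dns.message
import dns.query

# Attach an EDNS Client Subnet option describing the client's network
# to a query sent to the authoritative server. Addresses are placeholders.
client_subnet = dns.edns.ECSOption("192.0.2.0", 24)
query = dns.message.make_query(
    "www.example.com", "A", use_edns=0, options=[client_subnet]
)
response = dns.query.udp(query, "198.51.100.53")  # authoritative server
print(response.answer)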
Now, are you really sure you need this setup, or that it is the only way to achieve your goals? It is difficult to propose other ideas, because you describe your intended solution from the get-go, but not your actual problem or your constraints.
For example, in some cases the problem can be solved by simply configuring each client with a different domain search list: client1 would have client1.example.com as its suffix (e.g. a "search client1.example.com" line in /etc/resolv.conf), client2 would have client2.example.com, and so on. Then, with only one standard recursive nameserver and one authoritative nameserver for example.com, and without any extension or complicated setup, client1 attempting to resolve www may get a different reply than client2 attempting to resolve www, because the final fully qualified domain names are indeed different (www.client1.example.com vs www.client2.example.com) on account of the different search lists. This of course depends a lot on what kind of applications are running on each client.
Using simpler nameservers such as dnsmasq may also help but, again, your problem space is not defined well enough to be sure what to suggest.
I have a .csv file with a bunch of IP addresses. I am looking for a way to run a script that imports the file, converts the addresses to URLs, and perhaps exports them to another .csv or something similar. Is this possible? How would I do it? Admittedly, I am a novice when it comes to Python. I have done enough research to know that a call to the sockets library is involved, but that's where the trail ends; I don't know where to go from there. Any and all help is greatly appreciated. Thank you in advance.
You cannot convert them to specific URLs, but I guess what you want is the domain names. This is called a reverse DNS lookup.
You can run the command nslookup <IP-ADDRESS> to get the domain name (among other info).
Example:
nslookup 75.126.153.206
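Since the question asks for a script, the same lookup can be done in Python with the standard library's socket module. A minimal sketch of the CSV workflow (file names and column layout are assumptions):

import csv
import socket

# Read IPs from the first column of input.csv, reverse-resolve each
# one, and write ip,hostname pairs to output.csv.
with open("input.csv", newline="") as src, open("output.csv", "w", newline="") as dst:
    writer = csv.writer(dst)
    for row in csv.reader(src):
        if not row:
            continue
        ip = row[0].strip()
        try:
            hostname = socket.gethostbyaddr(ip)[0]  # PTR record, if any
        except (socket.herror, socket.gaierror):
            hostname = "no PTR record"
        writer.writerow([ip, hostname])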
However, this does not guarantee that only that domain is associated with that IP. Because of load balancers, you will often get the load balancer's domain name as the result. To work around that you can try a reverse IP lookup, which gives you all the domains associated with an IP address (online example: https://hackertarget.com/reverse-ip-lookup).
Besides that, detecting the protocol (http/s, ftp) is another story. You have to probe the targets on specific ports like 80 (http), 443 (https), 21 (ftp), etc. to verify which services they run; a simple reachability check is sketched below. Even then there is no guarantee, since services do not always run on their default ports (web servers, for example, often run on 8080 or some other port). Covering that means scanning a wider range of ports and detecting their services, which is far more time-consuming, and you may get your IP banned, since in some places such scanning may even be considered illegal.
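As a starting point, a plain-socket reachability check might look like this (the host and port list are placeholders; this is nowhere near Nmap's service detection):

import socket

def open_ports(host, ports=(21, 80, 443, 8080)):
    # Try a TCP connect to each port; connect_ex returns 0 on success.
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(2)
            if s.connect_ex((host, port)) == 0:
                found.append(port)
    return found

print(open_ports("192.0.2.10"))  # placeholder host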
Anyway, for proper port scanning I suggest you take a look at Nmap (https://nmap.org/)
import requests

def ip_to_url(ip):
    # Fetch the IP over HTTP and return the final URL after redirects;
    # many sites redirect from their raw IP to their domain name.
    try:
        return requests.get(f'http://{ip}', timeout=1).url
    except requests.RequestException:
        return 'not found'

print(ip_to_url('142.250.189.238'))
Output
http://www.google.com/
I hope someone can help enlighten me on this issue. I am currently working on a Lambda function that uses the CloudWatch scheduler to check various devices, and it uses ElastiCache to maintain a simple database of the readings.
My problem is that after I shut down my testing at night and fire the Lambda function up again in the morning, the function has lost access to the internet, which shows up as the function timing out. Regularly, after a few hours of messing around with my routes and VPC settings, it starts working again, just to break the following day. Sometimes it works with a NAT gateway, other times with just a NAT instance. The changes I typically make to the VPC setup are minor. The pattern I use for the setup is one public subnet, one private subnet, and one NAT gateway.
Update: after not being able to access the internet from my VPC all day yesterday, today it is functioning fine. What did I do differently? Nothing. When it stops functioning again, probably later today, I will call up AWS to see if we can get to the bottom of this.
I've just fixed the same issue with my Lambdas: the problem was that I had set the Lambda to run in all of my subnets (I have two private and one public). This knowledge base article specifies that you should run them in private subnets only, which makes sense:
https://aws.amazon.com/premiumsupport/knowledge-center/internet-access-lambda-function/
Go to your Lambda's page in the AWS console, deselect the public subnet, and save; the problem should be solved.
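The same change can also be made programmatically; a minimal sketch with boto3 (the function name, subnet IDs, and security group ID are placeholders):

import boto3

# Point the function at the private subnets only; all IDs below are
# placeholders for your own resources.
boto3.client("lambda").update_function_configuration(
    FunctionName="my-device-checker",
    VpcConfig={
        "SubnetIds": ["subnet-0aaa1111bbb22222a", "subnet-0ccc3333ddd44444b"],
        "SecurityGroupIds": ["sg-0123456789abcdef0"],
    },
)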
It sounds like it is due to the ephemeral port range that AWS Lambda uses. I recommend you check all network ACLs (NACLs) to ensure that they allow communication on the ephemeral port range used by Lambda:
AWS Lambda functions use ports 1024-65535
This means that when your Lambda runs, it may use any port in this range as the source port for traffic to the internet. Even though the destination is port 80 or 443, the sending port will be in this ephemeral range, so when the internet server responds, it sends the response back to the originating ephemeral port. Ensure your NACLs allow communication on this ephemeral range (inbound, outbound, or both, depending on your use case), or you may be blocked depending on which ephemeral port is used. This article has a useful explanation: https://www.whizlabs.com/blog/ephemeral-ports/
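As an illustration, an inbound NACL rule covering that ephemeral range could be added with boto3 along these lines (the ACL ID and rule number are placeholders):

import boto3

# Allow inbound return traffic on the ephemeral port range used by
# Lambda. Adjust Egress/direction to match your use case.
boto3.client("ec2").create_network_acl_entry(
    NetworkAclId="acl-0123456789abcdef0",  # placeholder
    RuleNumber=200,
    Protocol="6",                          # TCP
    RuleAction="allow",
    Egress=False,                          # inbound rule
    CidrBlock="0.0.0.0/0",
    PortRange={"From": 1024, "To": 65535},
)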
A Lambda function with VPC access will require a NAT gateway to access the internet. You state that it sometimes works with only an Internet Gateway, but that isn't possible according to the AWS documentation. If you are removing the NAT gateway, or the VPC's route to the NAT gateway, then that would remove internet access from any Lambda functions that have VPC access enabled.
We have an EC2 instance on AWS to which we deployed our backend services. We started by using Google Spreadsheets (scripted with Google Apps Script) to present our backend, through a webservice deployed on our server. The webservice is served over https (with a self-signed certificate) on a specific port, so traffic is encrypted in flight. We had set up a security group (basically a firewall entry group) that includes the following CIDR ranges for that specific ingress port of our webservice:
64.18.0.0/20
64.233.160.0/19
66.102.0.0/20
66.249.80.0/20
72.14.192.0/18
74.125.0.0/16
173.194.0.0/16
207.126.144.0/20
209.85.128.0/17
216.58.192.0/19
216.239.32.0/19
as described in https://developers.google.com/apps-script/guides/jdbc#setup_for_google_cloud_sql
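In boto3 terms, the ingress rule we set up looks roughly like this (the group ID and webservice port are placeholders):

import boto3

# The Google Apps Script source ranges from the documentation above.
GOOGLE_RANGES = [
    "64.18.0.0/20", "64.233.160.0/19", "66.102.0.0/20", "66.249.80.0/20",
    "72.14.192.0/18", "74.125.0.0/16", "173.194.0.0/16", "207.126.144.0/20",
    "209.85.128.0/17", "216.58.192.0/19", "216.239.32.0/19",
]

boto3.client("ec2").authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 8443,            # placeholder for the webservice port
        "ToPort": 8443,
        "IpRanges": [{"CidrIp": cidr} for cidr in GOOGLE_RANGES],
    }],
)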
This setup was working fine until 5 days ago. Then something weird happened. When we run the script behind the spreadsheet from the Script Editor, the code works fine and requests to our webservice return successfully. But when the exact same code is invoked through a menu item, it does not do anything. After a long, frustrating investigation we found out that the request was not even reaching our server (there were numerous other quirky symptoms, such as only the last log command being visible in the Execution Transcript even though there should have been many others). Then we tried replacing the security group with a rule that accepts traffic from any IP to that specific port, and everything worked fine again.
Here is a link to a seemingly relevant issue on the google-apps-script-issues page:
https://code.google.com/p/google-apps-script-issues/issues/detail?id=4679#c8
We ran tcpdump tcp port <port> -i eth0 -vv and observed that when we run the code from the Script Editor, the request is made from 66.102.7.156 (and from similar IPs in 66.102.0.0/20), whereas when the code is invoked from the menu item in the spreadsheet, the request is made from 72.14.199.55 (and from similar IPs in 72.14.192.0/18). The latter seems to be the problematic IP range.
My question is: why does one block of IPs not work when the request sources are correctly included in the firewall rules, yet start working once the IP restriction on the port is lifted (source 0.0.0.0/0)? Is this a bug in AWS security groups, or are we doing something wrong? Also, if our approach is inadequate in any way, alternative solutions or suggestions would be much appreciated.
As per the issue you linked to, there was a bug in Apps Script that led to this behavior. That bug has now been fixed.