I have a Grails app and I am looking to lock it down by IP so that only a set of IP ranges can access the application. I have used the Spring Security setting within Config to achieve this:
grails.plugins.springsecurity.ipRestrictions
Then, when running the app in the cloud (Jelastic), even though I am on one of the IPs I listed, it doesn't let me access the areas I want. I then put some code in the app (shown below) to pull back the address of the client, and it shows the address of what is probably the cloud proxy server instead of the client using the app:
request.getRemoteAddr()
I think it won't let me access the areas I want because it's reading my IP as the IP of the cloud proxy. I have also tried running the commands below to see if any of them return my actual IP, but they were all null:
request.getHeader("X-Forwarded-For")
request.getHeader("Proxy-Client-IP")
request.getHeader("WL-Proxy-Client-IP")
request.getHeader("HTTP_CLIENT_IP")
request.getHeader("HTTP_X_FORWARDED_FOR")
I just need to know if there is some way of restricting this application in the cloud by client IP instead of by the IP of the cloud proxy? Thanks in advance.
All requests to Jelastic instances come through the infrastructure's global Resolver.
So you are right: request.getRemoteAddr() returns the IP of the Resolver, and that IP is not recognized by your allowed list.
A workaround for this is purchasing an external IP for your app server in Jelastic. In that case, all requests will come directly to your instance.
I also recommend you join the dedicated Jelastic Community in order to share your experience and get help from others.
Have you configured an nginx instance in front of your Tomcat? I am not sure if there are any Jelastic specifics, but you have to configure nginx so that it passes the client IP to the proxied service; see http://wiki.nginx.org/HttpRealIpModule
You could, for example, set a custom header if you don't want to overwrite the defaults:
proxy_set_header X-Real-IP $remote_addr;
Have you tried dumping the headers you actually get in your app?
On Cloudfoundry.com I can see that I get 'x-forwarded-for'
class HeaderController {
    def headerTest = {
        def headerNames = request.headerNames.collect { it }
        headerNames.each {
            render "$it : ${request.getHeader(it)}\n"
        }
        render "Remote addr : ${request.getRemoteAddr()}\n"
        render "Forward addr : ${request.getHeader('x-forwarded-for')}\n"
    }
}
In the Jelastic cloud, 'x-forwarded-for' displays your IP as well.
As a follow-up, I suggest you check the related topic on the Jelastic Community.
So presumably you'd have to set up your IP restriction config so that it checks the value of 'x-forwarded-for'.
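To illustrate the logic (not Grails-specific — this is a minimal Node.js sketch of the same idea): take the left-most entry of X-Forwarded-For as the original client, fall back to the socket address when the header is absent, and compare against an allowlist. The header value and prefixes here are made-up examples.

```javascript
// Illustration only: recovering the real client IP behind a proxy.
// X-Forwarded-For may hold a comma-separated chain ("client, proxy1, proxy2");
// the original client is the left-most entry.
function clientIp(xForwardedFor, remoteAddr) {
  if (!xForwardedFor) return remoteAddr;
  return xForwardedFor.split(',')[0].trim();
}

// Simple prefix-based allowlist check (a real setup would use CIDR matching).
function isAllowed(ip, allowedPrefixes) {
  return allowedPrefixes.some(prefix => ip.startsWith(prefix));
}

// Example: a request arriving via a cloud proxy at 10.0.0.1
const ip = clientIp('203.0.113.7, 10.0.0.1', '10.0.0.1');
console.log(ip);                            // "203.0.113.7"
console.log(isAllowed(ip, ['203.0.113.'])); // true
```

Note that X-Forwarded-For is client-suppliable, so it should only be trusted when the request demonstrably came through your own proxy.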
I created a Node.js app (firebase-admin) that reads/writes data from our local SQL Server and reads/writes data to the Firebase Realtime Database. This part is working now. My other desktop apps will connect to this Node.js app to request data from the Firebase RTDB. This part is also working.
I would now like to host the Node.js app on our LAN. How do I set up the host machine so that the desktop apps (also on the same LAN) can connect via http://local ip of host machine/name of app? For example, http://192.168.1.254/firebasemiddleware/. Also, if I have another app, say named anothermiddleware, the local address should be http://192.168.1.254/anothermiddleware/.
This machine is behind a firewall and will not be visible from outside the LAN.
How can I do this?
Thank you.
You could run Nginx as a reverse proxy to route requests, based on path, to your multiple local services.
You would set up the reverse proxy in Nginx like this:
location /firebasemiddleware {
    proxy_pass http://localhost:3000;
}

location /anothermiddleware {
    proxy_pass http://localhost:3001;
}
The docs on this are here
If you're new to this, there is a really cool configuration generator by DigitalOcean that might make life easier for you, located here
Your network is currently a local area network, so external connections cannot reach it.
Option 1: find out about your current carrier's NAT, open the port of the Node.js app you are running by mapping it through NAT, and access it through your public IP; you can get your public IP from a site such as https://whatismyipaddress.com/en-vn/index.
Option 2 (simpler, if you don't have a static IPv4 address or can't configure your carrier's NAT): use a tunnel; I recommend looking at https://www.cloudflare.com/products/tunnel/ .
I am trying to make a GET request to a foreign server. But the foreign server requires our IP address for security purposes.
Now the problem is that I am running my app inside a Kubernetes pod, on a cluster with three nodes.
When I send the request, it takes the IP address of one of the kubernetes nodes.
I could add static IP addresses to all my nodes. But from what I have learned, best practice is to only release the Gateway(ingress) IP address to the outside world. Everything else should be hidden.
So I tried to proxy my axios request like this:
var res = await axios.get('https://someapi.com', {
  proxy: {
    host: 'ingressIP', // static IP
    port: 80
  }
});
But the request still returns an error saying that the IP is not allowed. It returned the IP address of the Kubernetes node my pod was on.
I am not sure that you will be able to pass your outbound traffic through the ingress.
We also had the same problem: we needed to send requests to a third-party server from a specific IP address.
But we solved this a bit differently: we just created a new small server with a static IP, installed the Squid proxy server there, and configured our applications to use the Squid server as an HTTP forward proxy.
Squid has a lot of features, and IMO is quite bloated for such a simple use-case; I'd suggest something more lightweight, like tinyproxy (docker image here). So what you can do is create a Deployment using that image, pin it to a specific node (the one with the IP that the 3rd party API allows) using nodeSelector, create a Service pointing to it, and use that as a proxy in your requests. There's one drawback to this approach, though - you just added a(nother) single point of failure to your infrastructure.
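On the application side, routing requests through such a forward proxy is a small change to the axios call. A sketch under assumptions: "tinyproxy-svc" is a hypothetical in-cluster Service name, and 8888 is tinyproxy's default port.

```javascript
// Sketch: wrap an axios request config so it egresses via an HTTP
// forward proxy. "tinyproxy-svc" and port 8888 are assumptions
// (a Service pointing at a tinyproxy Deployment pinned to one node).
function withForwardProxy(requestConfig) {
  return {
    ...requestConfig,
    proxy: { protocol: 'http', host: 'tinyproxy-svc', port: 8888 },
  };
}

const config = withForwardProxy({ method: 'get', url: 'https://someapi.com' });
// axios(config) would now reach the API from the proxy pod's node IP.
console.log(config.proxy.host); // "tinyproxy-svc"
```

One caveat: with some axios versions, HTTPS targets through a forward proxy are more reliably handled by passing an agent such as https-proxy-agent instead of the built-in proxy option.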
I want to host a web app with Node.js on a Linux virtual machine using the HTTP module.
As the app will be visualising sensitive data I want to ensure it can only be accessed from PCs on the same LAN.
My understanding is that, using the HTTP module, a web server is created that is initially only accessible by other PCs on the same LAN. I've seen that, by either tunnelling or port forwarding, a Node.js server can be exposed if desired.
Question
Are there any other important considerations/ways the server could be accessed externally?
Is there a particular way I can setup a node.js server to be confident that it's only accessible to local traffic?
It really depends on what you are protecting against.
For example, somebody on your LAN could port forward your service using something like ngrok. There are a few things you can check for:
In this case the header x-forwarded-for is set. So, to protect against this, you can check for this header on the incoming request and, if it is set, reject the request.
The host header is also set and will indicate how the client referred to your service. If it is as you expect (maybe a direct local LAN address such as 192.168.0.xxx:3000) then all is OK; if not (I ran ngrok on a local service and got something of the form xxxxxxxx.ngrok.io), then reject the request.
Of course, a malicious somebody could create their own server to redirect requests. The only defence there is to put in usernames and passwords or similar. At least you then know who is (allegedly) accessing your service and can do something about it.
However, if you are not trying to protect against a malicious internal actor, then you should be good as you are. I can't think of any way (unless there is a security hole in your LAN) for your service to be made public without somebody actively setting that up.
My last suggestion would be to use something like express rather than the http module by itself. It really does make life a lot simpler. I use it a lot for just this kind of simple internal server.
Thought I'd add a quick example. I've tested this with ngrok, and it blocks access via the public address but works fine via localhost. Change the host test to whatever local address (or addresses) you want to serve this service from.
const express = require('express');
const app = express();

app.use((req, res, next) => {
  if (req.headers.host !== 'localhost:3000' || req.headers['x-forwarded-for']) {
    res.status(403).send('Invalid access!');
  } else next();
});

app.get('/', (req, res) => res.send('Hello World!'));

app.listen(3000, () => {
  console.log('Service started. Try it at http://localhost:3000/');
});
I would prefer using nginx as a proxy here and rely on nginx' configuration to accept traffic from local LAN to the node.js web server. If this is not possible, a local firewall would be the best tool for the job.
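If nginx fronts the Node.js server, that restriction can be expressed directly in its config. A minimal sketch, assuming the LAN subnet is 192.168.0.0/24 and the app listens on local port 3000 (adjust both to your setup):

```nginx
server {
    listen 80;

    location / {
        allow 192.168.0.0/24;   # LAN clients only (adjust to your subnet)
        deny  all;              # everyone else gets 403
        proxy_pass http://127.0.0.1:3000;
    }
}
```

With this in place, the Node.js app itself can bind to 127.0.0.1 so it is never reachable directly.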
I have a domain with multiple active users and several applications hosted on it.
Domain: www.domain.com and running on server IP: XXX.XXX.XXX.1
I want to run www.domain.com/business on server IP: XXX.XXX.XXX.2
and similarly to run www.domain.com/hosting on server IP: XXX.XXX.XXX.3
It is very similar to Google scenario:
www.google.com runs on XXX.XXX.173.1 - XXX.XXX.185.1
www.google.com/+dinesh on XXX.XXX.186.1 -XXX.XXX.187.1
I have seen a lot of articles on managing DNS and virtual entries but have been unable to get a correct answer.
Another way to do this is to make the host portions slightly different, i.e.:
business.domain.com/business
hosting.domain.com/hosting
You would then use these links where you are currently putting www.domain.com/business and www.domain.com/hosting. It's then a simple matter to have those different hostnames point at different addresses.
In general, it's not possible to have URLs with the same host point to different IP addresses on the basis of the stuff after the hostname. I cannot seem to verify your Google example (from where I'm looking, they both go to the same set of addresses). If you've more information on how you determined those addresses, please post that and maybe something else can be suggested.
You can manage it through a load balancer rather than running it on different servers.
Please use a reverse proxy in front of the application servers.
Consider using nginx or Apache Httpd.
These can be configured to route (technically proxy) to the desired app servers by inspecting the context path in URL.
If you choose to use nginx, see this post on how to configure nginx for such a use case.
Nginx configuration page for additional details: config
Is there any way to run Ghost on a subdomain using Node.js? I am able to run it normally on Node.js like:
App.Modules.Ghost = require('ghost'); /**< Ghost module. */
App.Apps.Ghost = App.Modules.Ghost({ config: '/Assets/Ghost/Config.js'.LocalFilePath }); /**< Create Ghost app. */
Then I am able to go to http://example.com/ghost/ and view my blog. Although this works for now, I want to be able to view my blog at http://blog.example.com/ using Node.js.
Sadly, the way networking works prevents this in the context you desire. In order to achieve that sort of functionality, you need a proxy server in front of the entire application. I would suggest NginX for this, due to its speed and widespread use.
Why is this not possible?
In this sense, networking is the system where you bind to an IP and a port. When you bind, nothing else can bind to that same IP/port pair. Since a domain (and a subdomain) simply points to an IP address, there is no way to separate these connections at the networking level. This is why the Host HTTP header was added.
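To make that concrete, here is a toy sketch of what Host-header dispatch does: one IP/port, many sites, distinguished purely by the hostname the client asked for (the hostnames and app names below are hypothetical examples).

```javascript
// Toy illustration of Host-header routing: one listener, many sites.
// Hostnames and app names are made-up examples.
function routeByHost(hostHeader) {
  const routes = {
    'blog.example.com': 'ghost-app',
    'example.com': 'main-site',
  };
  // Strip an optional :port before looking up the hostname.
  const host = (hostHeader || '').split(':')[0];
  return routes[host] || 'default-site';
}

console.log(routeByHost('blog.example.com')); // "ghost-app"
console.log(routeByHost('example.com:8080')); // "main-site"
```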
How does NginX do it?
NginX parses the Host header and can send the connection on to your Ghost server as you wish it to be forwarded. This also allows you to forward the main domain (http://example.com) to whatever website you like, therefore using different applications and such on the same IP and port.
This answer contains the best directions on how to achieve this functionality.
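For reference, a minimal nginx sketch of that setup; 2368 is Ghost's usual default port, but adjust it (and the server_name) to match your own config:

```nginx
server {
    listen 80;
    server_name blog.example.com;

    location / {
        # Preserve the hostname the client asked for
        proxy_set_header Host $host;
        proxy_pass http://127.0.0.1:2368;  # Ghost's default port
    }
}
```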