Allow access to localhost from a specific URL only on Linux

I have a REST API listening on the localhost:8000 and I want it to accept requests from localhost:5000 only. Is there a way to achieve this on Linux without modifying the API code?

You can use iptables, but I think it will be easier to use socat, like this:
socat TCP4-LISTEN:5000,reuseaddr,fork TCP4:localhost:8000
This listens on port 5000 and forwards each incoming connection to localhost:8000. For more information, see:
https://unix.stackexchange.com/questions/10428/simple-way-to-create-a-tunnel-from-one-local-port-to-another
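For the iptables route, a sketch of what the rules might look like, assuming the goal is to reject loopback traffic to port 8000 unless it originates from source port 5000 (adjust to your actual rule set; requires root):

```shell
# Accept loopback connections to port 8000 only when the client's
# source port is 5000; reject anything else aimed at port 8000.
iptables -A INPUT -i lo -p tcp --dport 8000 --sport 5000 -j ACCEPT
iptables -A INPUT -i lo -p tcp --dport 8000 -j REJECT
```

Note that matching on source port is fragile (clients normally use ephemeral source ports), so make sure this is really what you want before relying on it.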

Your REST API probably has its own mechanism for preventing cross-origin requests, and that is why you are struggling to connect those two locations. This problem can't be solved at the Linux level.
First of all, let's explain a few things.
A request's origin is defined by the following features:
scheme, which is simply the protocol your API uses (HTTP or HTTPS)
hostname, which is the domain or IP address (in your case, localhost)
port, which is self-explanatory.
So, you want to perform a cross-origin request. In the case of a simple HTTP request (GET, HEAD, or POST), you have to set the Access-Control-Allow-Origin header on the side of your REST API (localhost:8000). For that, check how to set up that header in your specific technology.
Cross-origin requests in your case will be possible if you set this header to the following value:
Access-Control-Allow-Origin: *
You want your localhost to be accessible from a specific URL only. In the case of localhost, it is only reachable by locally running applications anyway. If you deploy your application somewhere on the web and want only specific URLs to be able to connect to the REST API, you have to use the following setting of the Access-Control-Allow-Origin header:
Access-Control-Allow-Origin: https://foo.example
In your case on localhost, that would be:
Access-Control-Allow-Origin: http://localhost:5000
(I assumed that you use the http protocol.)
In my opinion, it doesn't make much sense to restrict localhost connections this way; '*' is good. The only reason I can think of is protection against SSRF attacks. Is that the case? (It is only justified if your server is exposed to the web.)
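For concreteness, a minimal sketch of setting that header, assuming the API is a Node/Express app (the wiring shown in comments is hypothetical; use the equivalent in your framework):

```javascript
// Middleware sketch: allow cross-origin requests from http://localhost:5000
// only, by echoing that single origin in the CORS response header.
function allowOrigin5000(req, res, next) {
  res.setHeader('Access-Control-Allow-Origin', 'http://localhost:5000');
  next();
}

// Hypothetical wiring into an Express app listening on port 8000:
// const app = require('express')();
// app.use(allowOrigin5000);
// app.listen(8000);
```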
Further resources:
Simple cross-origin request documentation
Enabling CORS for REST API

Related

Get absolute-form request target of HTTP request using WAI

The Request type provides accessors for the request method and the request version but not for the bit in between.
So if I have the following request:
GET http://www.example.org/index.html HTTP/1.1
I want the http://www.example.org/index.html in between
RFC 7230 Section 5.3.2 allows this when making a request to a proxy. Section 5.4 says that the Host header should be overridden by the proxy with the host in the URI if the request is in absolute-form. This seems good enough for me; I don't know whether WAI would handle it correctly if a misbehaving client sent a Host header different from the absolute-form URI.
Alternatively, if this is not possible: is there a lower-level HTTP library than WAI available in Haskell?
The rawPathInfo accessor method will provide this. See https://hackage.haskell.org/package/wai-3.2.2.1/docs/Network-Wai.html#v:rawPathInfo for details.
If you want the query string too, it's available via the rawQueryString accessor.
As for the host: HTTP requests don't normally look like your example (HTTP/1.1 requests only look like that when the client is connecting to a proxy, rather than to the destination web server). Instead, they usually look like this:
GET /index.html HTTP/1.1
Host: www.example.org
If you want http://www.example.org too, then you'll have to rebuild it yourself from the host and protocol information.

Keep on getting Unauthorized from Web API

I have a project; it's a web application that requires Windows Authentication.
I've set up an Active Directory at home using my NAS virtualization. Then I created a VMware server for IIS, which is a member of that domain, on my desktop, which I also use for development. I created the Web API and installed it on that VMware server. When I call a routine directly, it works and returns results, but when I use the Web API routine from my JavaScript web application I keep on getting a 401 error. When I put the code on the IIS server, the web application works.
I've seen a lot of solutions, like changing the sequence of the providers in IIS Authentication. I added Everyone read/write permission on the folders. I've also added an entry in the web.config. But none of them work.
***** Update as per request in the comments *****
Below is what I get when I run the Web API directly
Calling the Web API from JavaScript
Here's the error I'm getting
Just FYI, I tried running the Web API from Visual Studio on the same machine, but I also get a 401 error
Is there anything I could add to AD to make my development machine trusted?
***** A new issue after the code change *****
***** Another update *****
This is definitely weird, so I installed Fiddler 4 to see what's going on. But still no luck.
Then I made changes to the IIS HTTP response headers.
The weird thing is that when I run Fiddler the error is gone, but when I close it, it comes back.
There are two things going on here:
A 401 response is a normal first step in Windows Authentication. The client is then expected to resend the request with credentials. AJAX requests don't do this automatically unless you tell them to.
To tell the browser to send credentials in a cross-domain request (more on that later), you need to set the withCredentials option when you make the request in JavaScript.
With jQuery, that looks like this:
$.ajax({
    url: url,
    xhrFields: {
        withCredentials: true
    }
}).then(callback);
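For what it's worth, a sketch of the equivalent with the Fetch API, where the credentials option plays the role of withCredentials (the URL would be your API endpoint):

```javascript
// 'credentials: "include"' is fetch's counterpart to XHR's withCredentials:
// it tells the browser to attach cookies/auth on a cross-origin request.
function callApiWithCredentials(url) {
  return fetch(url, { credentials: 'include' }).then((res) => res.json());
}
```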
These problems pop up when the URL in the address bar of the browser is different from the URL of the API you are trying to connect to in the JavaScript. Browsers are very picky about when this is allowed. These are called "cross-domain requests", or "Cross-Origin Resource Sharing" (CORS).
It looks at the protocol, domain name and port. So if the website is http://localhost:8000, and it's making an AJAX request to http://localhost:8001, that is still considered a cross-domain request.
When a cross-domain AJAX request is made (other than a "simple" GET, HEAD, or POST), the browser first sends a preflight OPTIONS request to the URL, with an Origin header containing the origin of the website making the request (e.g. http://localhost:8000). The API is expected to return a response with an Access-Control-Allow-Origin header that says whether the website making the request is allowed to call it.
If you are not planning on sending credentials, then the Access-Control-Allow-Origin header can be *, meaning that the API allows anyone to call it.
However, if you need to send credentials, like you do, you cannot use *. The Access-Control-Allow-Origin header must specifically contain the domain (and port) of your webpage, and the Access-Control-Allow-Credentials header must be set to true. For example:
Access-Control-Allow-Origin: http://localhost:8000
Access-Control-Allow-Credentials: true
It's a bit of a pain in the butt, yes. But it's necessary for security.
You can read more about CORS here: Cross-Origin Resource Sharing (CORS)

How to handle http requests which are getting redirected as https using my nodejs-express app?

I am injecting some script tags into a website, with a src such as http://localhost:3000/css/my-page-css.css. While it's working on almost all sites, there's this particular website that somehow sends all my http requests as https. How do I handle such a case?
I have also configured an HTTPS server in my Node.js app, listening on port 8443, while HTTP listens on 3000. But when I inject my script tags, their src URLs point to port 3000. So even though I have HTTPS configured in my Node.js app, it won't work, since it is listening on a different port.
You are using HTTP Strict Transport Security (HSTS)
Using the securityheaders.com website on your URL, or the Chrome developer tools, we see the following HTTP header is sent back by your site:
Strict-Transport-Security: max-age=7889238
This HTTP Header will be configured in your webserver and is a way of your webserver telling the browser "For the next 7889238 seconds only use HTTPS on this domain. If someone tries to use HTTP (either by typing or by clicking on a link) then automatically switch HTTP to HTTPS before you send it on to the server."
This is a security feature, as currently the default (if a scheme is not explicitly given) is HTTP. HSTS allows website owners to switch the default and, even stronger than that, prevents it from being switched back.
HSTS is set at a domain level and it is not possible to have it on for one port (e.g. 443) but not for another (e.g. 3000) - it's either on for that domain or off.
If you really want to use HTTP, then you need to remove this header and make your browser forget the remembered value. While Chrome lets you delete it by visiting chrome://net-internals/#hsts and using the delete option, the easiest way is to change the max-age from 7889238 to 0, load the website again, and then remove the header completely.
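If the Node app itself is what sets the header, expiring it might look like this sketch (assuming Express-style middleware; wire it in with app.use before your routes):

```javascript
// Overwriting the header with max-age=0 tells the browser to forget any
// previously remembered HSTS policy for this host on its next visit.
function expireHsts(req, res, next) {
  res.setHeader('Strict-Transport-Security', 'max-age=0');
  next();
}
```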
This can be especially annoying for sites like localhost where you proxy requests and inadvertently set it for that dummy host name. You should see if your node proxy server allows you to strip off that HTTP header. Some might say it would be better if browser makers ignored HSTS for localhost; however, I think it would be better if developers just stopped fighting HTTPS and used it even for development environments, with a self-signed certificate added to your local trust store. This way you can avoid problems like mixed content, and also use features that are HTTPS-only (including Brotli, HTTP/2, geolocation, etc.) while developing (though some browsers, like Chrome, still allow these on http://localhost).
Alternatively set up a local DNS alias for each of your dev sites and use that with or without HTTPS as appropriate for the site in question.

In Node.js, finding the original client URL when app is behind a reverse proxy

I'm working on a Node.js/Express application that, when deployed, sits behind a reverse proxy.
For example: http://myserver:3000/ is where the application actually sits, but users access it at https://proxy.mycompany.com/myapp.
I can get the original user agent request's host from a header passed through easily enough (if the reverse proxy is configured properly), but how do I get that extra bit of path and protocol information from the original URL accessed by the browser?
When my application has to generate redirects, it needs to know that the end user's browser expects the request to go to not only to proxy.mycompany.com over https, but also under a sub-path of myapp.
So far all I can get access to is proxy.mycompany.com, which isn't enough to create a valid redirect.
For dev purposes I'm using a reverse proxy setup in nginx, but my company is using Microsoft's ISA as well as HAProxy.
Generally this is done with x-forwarded-* headers, which are inserted by the reverse proxy itself. For example:
x-forwarded-host: foo.com
x-forwarded-proto: https
Take a look here:
http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/x-forwarded-headers.html
You can probably configure nginx to insert whatever x-* header you want, but the convention (standard?) seems to be the above.
If you're reverse proxying into a sub-path such as /myapp, that definitely complicates matters. Presumably that sub-path should be a configuration option available to both nginx and your app.
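Putting that together, a sketch of rebuilding the externally visible URL in the Node app, assuming the proxy sets those headers (the '/myapp' base path is a hypothetical configuration value you would supply yourself, matching the sub-path the proxy mounts the app under):

```javascript
// Rebuild the URL the user's browser actually requested, from the
// x-forwarded-* headers inserted by the reverse proxy. Falls back to the
// plain Host header and http when the proxy headers are absent.
function externalUrl(req, basePath) {
  const proto = req.headers['x-forwarded-proto'] || 'http';
  const host = req.headers['x-forwarded-host'] || req.headers['host'];
  return proto + '://' + host + basePath + req.url;
}
```

A redirect in an Express handler could then be built as externalUrl(req, '/myapp') plus the target path.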

Emulate cross-site requests in localhost

Is there a way to make cross-site requests on localhost? To emulate a different domain? How do you make an application that, for example, does JSON-P or CORS, and test it on your local machine without an actual domain?
I am using NodeJS and WebStorm.
Thank you.
Assuming you can access your site from both 127.0.0.1 and localhost, load the page via localhost and have that page access http://127.0.0.1. I can't guarantee this triggers cross-origin behavior in all cases, but it was working for me with iframes.
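This works because the browser compares origins by scheme, host, and port, and localhost and 127.0.0.1 count as different hosts even though they resolve to the same machine. A sketch of that comparison, runnable in Node (which has the WHATWG URL class built in):

```javascript
// Two URLs share an origin only when scheme, hostname, and port all match;
// the path is irrelevant to the origin check.
function sameOrigin(a, b) {
  const ua = new URL(a);
  const ub = new URL(b);
  return ua.protocol === ub.protocol &&
         ua.hostname === ub.hostname &&
         ua.port === ub.port;
}
```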
