Implement custom protocol handler - google-chrome-extension

I want to create a Chrome extension which will allow opening URLs like something://abc.def, but without simply redirecting them to some other http URL (as Navigator.registerProtocolHandler() does) and without opening some application.
I want to send the whole raw HTTP request
GET / HTTP/1.1
...
to a websocket and then render the whole raw HTTP response that comes back (headers, body, etc.).
I couldn't find documentation for this, unfortunately.
TL;DR I want to redirect some protocol requests to a websocket and render the page from its response.
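For context, the server side of what I have in mind could look roughly like this (a minimal sketch using the ws package; the port, the parsing and the response are placeholders, and the extension part that intercepts something:// URLs is exactly the piece I can't find documentation for):
const WebSocket = require('ws');

const wss = new WebSocket.Server({ port: 9000 });

wss.on('connection', (socket) => {
  socket.on('message', (rawRequest) => {
    // Take just the request line, e.g. "GET / HTTP/1.1"
    const requestLine = rawRequest.toString().split('\r\n')[0];
    const body = `<h1>You asked for: ${requestLine}</h1>`;

    // Build a raw HTTP response by hand: status line, headers, blank line, body
    const rawResponse =
      'HTTP/1.1 200 OK\r\n' +
      'Content-Type: text/html\r\n' +
      `Content-Length: ${Buffer.byteLength(body)}\r\n` +
      '\r\n' +
      body;

    socket.send(rawResponse);
  });
});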

Related

Express JS redirect with headers

Using Express JS I'm trying to add some headers to the redirect I'm returning.
However, everything I tried only works for the response headers, not for the request headers of the redirected request. I.e., when inspecting it with the developer tools I can see the response headers, but when the next call is made I cannot see the request headers.
req.headers['x-custom-header'] = 'value'; // only mutates the incoming request object
res.setHeader('x-custom-header', 'value'); // sets a header on this redirect response
res.redirect('example.com'); // the browser follows the redirect with its own default request headers
Could anybody explain how the response and request headers work in Express JS?
A redirect just does a redirect. It tells the browser to go to that new location with standard, non-custom headers. You cannot set custom headers on the next request after the redirect. The browser simply doesn't do that.
The usual way to pass some type of parameters in a redirect is to put them in a query string for the redirect URL or, in some cases, to put them in a cookie. In both cases of query string parameters and data in a cookie, those will be available to your server when the browser sends you the request for the redirected URL.
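For example, a minimal sketch of the query string approach (the route names and the token value here are made up):
const express = require('express');
const app = express();

app.get('/start', (req, res) => {
  // Whatever you want the next request to carry goes into the URL...
  res.redirect('/landing?token=' + encodeURIComponent('value'));
});

app.get('/landing', (req, res) => {
  // ...and comes back to your server on the redirected request
  res.send('Got token: ' + req.query.token);
});

app.listen(3000);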
It also may be worth revisiting why you're redirecting in the first place; perhaps there's a different flow of data/URLs that doesn't need a redirect at all. We'd have to know a lot more about what this operation is actually trying to accomplish to make suggestions there.
If your request is being processed by an Ajax call, then you can program the code receiving the results of the Ajax call to do anything you want it to do (including add custom headers), but if it's the browser processing the redirect and changing the page URL to load a new page, it won't pay any attention to custom headers on the redirect response.
Can anybody explain how the response and request headers work on ExpressJS?
Express is doing exactly what you told it to do. It's attaching the custom headers to the response that goes back to the browser. It's the browser that does not attach those same headers to the next request to the redirected URL. So, this isn't an Express thing, it's a browser thing.

Can I register a custom URL Scheme/Protocol with a Node HTTP Server?

I would like to be able to handle a custom URL scheme with the Node HTTP API. I would like to write links inside web pages like this: app://foo/bar. I would like to have the Node HTTP Request Handler receive this kind of URL.
When I try this kind of custom protocol in my URL, it looks like Chrome is not sending out the request because it is malformed. So nothing gets to my HTTP server in Node.
Is it possible to bind your HTTP server to a custom URL Scheme or Protocol like app://foo/bar?
Only certain protocols such as http:// and https:// will be sent to your nodejs http server. That's the issue. Your node.js server is an http server. The chrome browser will only send it URLs with the http protocol that it knows belong to an http server.
A custom protocol has to be first handled in the browser with a browser add-on that can then decide what to do with it.
Perhaps what you want to do is a custom HTTP URL such as:
http://yourserver.com/foo/bar
Then, your node.js http server will get the /foo/bar part of the request and you can write custom handlers for that.
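Something along these lines (a minimal sketch; the port and the response text are arbitrary):
const http = require('http');

http.createServer((req, res) => {
  if (req.url === '/foo/bar') {
    // your custom handler for this path
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    res.end('custom handler for /foo/bar');
  } else {
    res.writeHead(404);
    res.end('not found');
  }
}).listen(8080);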
To recap, the first part of the URL, the part before the ://, is the protocol. That tells the browser which protocol this URL is supposed to be used with. Without a browser add-on, a browser only comes with support for some built-in protocols such as http, https, ws, wss, mailto and some others.
An http server will only be able to respond to the http protocol so it will only work with URLs that expect to use the http protocol and that the browser knows use the http protocol. Thus your own protocol that the browser does not know about is not something the browser knows what to do with. It would take a browser add-on to tell the browser what to do for a custom URL.
When I try this kind of url, it almost looks like Chrome is batting it down before it can get to my HTTP server in Node.
Yes, it's not a recognizable protocol built into the browser so the browser doesn't know what to do with it or how to speak that protocol.
Is it possible to bind your HTTP server to a custom URL Scheme like this?
Only with a browser add-on that registers and implements support for the custom URL protocol.
I have made an npm module for this purpose.
Link: https://www.npmjs.com/package/protocol-registry
So to do this in node.js you just need to run the code below.
First, install it:
npm i protocol-registry
Then use the code below to register your entry file:
const path = require('path');
const ProtocolRegistry = require('protocol-registry');

console.log('Registering...');

// Registers the protocol
ProtocolRegistry.register({
  protocol: 'testproto', // sets the protocol for your command, testproto://**
  command: `node ${path.join(__dirname, './index.js')} $_URL_`, // $_URL_ will be replaced by the URL used to initiate it
  override: true, // use this with caution as it will destroy any previous registration on this protocol
  terminal: true, // use this to run your command inside a terminal
  script: false
}).then(async () => {
  console.log('Successfully registered');
});
Then, if someone opens testproto://test,
a new terminal will be launched executing:
node yourapp/index.js testproto://test
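Your entry file can then read that URL from the command line, roughly like this (a sketch of what index.js might do; the parsing is up to you):
// index.js: the registered command passes the invoking URL as an argument
const invokedUrl = process.argv[2]; // e.g. "testproto://test"
console.log('Launched with:', invokedUrl);

// The WHATWG URL parser (global in recent Node versions) handles custom schemes too
const { protocol, hostname } = new URL(invokedUrl);
console.log(protocol, hostname); // "testproto:" "test"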
Based on the comment thread on the other answer, I think I understand what you're trying to do.
I hear that you want to serve some files off localhost without polluting the namespace of an existing webserver at all.
I have several weird alternative solutions:
Just pick a namespace that's unlikely to be used by a user. You could start it with an underscore or a dollar sign, or just use a very random number.
You can serve your files, but only if a URI parameter exists with a very random string. PHP does this to serve the PHP logo for example.
You can't really change the scheme without creating browser add-ons, but you do have control over the TCP port. You can start a second webserver on a second port.
You can use a second domain. Just register a domain and point an A record to 127.0.0.1. Now your webserver running on localhost can check the Host: header and serve your files if it matches your special hostname (see the sketch below).
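That last option could look something like this (a rough sketch; myspecial.example.com stands in for whatever domain you point at 127.0.0.1, and the port is arbitrary):
const http = require('http');

http.createServer((req, res) => {
  // Strip a possible :port suffix from the Host header
  const host = (req.headers.host || '').split(':')[0];

  if (host === 'myspecial.example.com') {
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    res.end('files served only for the special hostname');
  } else {
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    res.end('normal localhost site');
  }
}).listen(8080);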

Is it possible to distinguish a requestURL as one typed in the address bar to log in a node proxy?

I just could not get the http-proxy module to work properly as a forward proxy. It works great as a reverse proxy. Therefore, I have implemented a node-based forward proxy using node's http and net modules. It works fine, both with http and https. I will deal with websockets later. Among other things, I want to log the URLs visited or requested through a browser. In the request object, I do get the URL, but as expected, when a page loads, a zillion other requests are triggered, including AJAX, third-party ads, etc. I do not want to log these.
I know that I can distinguish an AJAX request from the x-requested-with header. I can distinguish requests coming from a browser by examining the user-agent header (though these can be spoofed thru cURL). I want to minimize the log entries.
How do commercial proxies log such info? Or do they just log every request? One way would be to not log any requests within a certain time after the main request presuming that they are all associated with the main request. That would not be technically accurate.
I have researched in this area but did not find any solution. I am not looking for any specific code, just some direction...
No one can know that with precision, but you can look for clues such as the HTTP Referer or x-requested-with headers, or add your own custom headers to each AJAX request (Squid proxy, for example, sends an X-Forwarded-For header by default, which reveals that it is a proxy). But anybody can figure out which headers you are sending, or copy all the headers a common browser sends by default, and you will believe it is a person using a browser when it could just be a cURL request sent by a bot.
So, really, you can't know for certain whether a request is, for example, an AJAX request, because those headers aren't mandatory; by default your browser or framework adds x-requested-with or other useful information that only helps you guess who is performing the request.
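As a rough illustration, a heuristic filter in the logging code might look like this (a sketch only; every one of these signals can be spoofed or missing, as noted above):
// Heuristic: "is this probably a top-level page the user navigated to?"
// None of these checks is reliable on its own; they only reduce noise in the log.
function looksLikeMainRequest(req) {
  const headers = req.headers;

  // XHR/fetch calls often (not always) identify themselves
  if (headers['x-requested-with'] === 'XMLHttpRequest') return false;

  // Sub-resources usually carry a Referer pointing at the page that loaded them
  if (headers['referer']) return false;

  // Browser navigations ask for HTML; images, scripts and most APIs don't
  const accept = headers['accept'] || '';
  if (!accept.includes('text/html')) return false;

  return true;
}

// inside the proxy's request handler:
// if (looksLikeMainRequest(req)) console.log('navigated to', req.url);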

IIS doDynamicCompression and Browser Fallback

I want to enable GZip compression on my controller actions using IIS's doDynamicCompression configuration option.
The question is what will happen if one of my users uses a browser that doesn't support GZip: would IIS detect it and send the response uncompressed?
When your users send a request to the server, the browser sends an Accept-Encoding header.
This header tells the server which compression methods the browser accepts,
for example "deflate,gzip", "gzip", or nothing.
IIS parses this header to decide whether or not to compress the response.
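To illustrate the negotiation (this is what IIS does for you internally; the sketch below is plain Node, not IIS, and the port and body are arbitrary):
const http = require('http');
const zlib = require('zlib');

http.createServer((req, res) => {
  const body = 'Hello, compressed world!';
  const acceptEncoding = req.headers['accept-encoding'] || '';

  if (acceptEncoding.includes('gzip')) {
    // Client declared gzip support, so compress the response
    res.writeHead(200, { 'Content-Encoding': 'gzip', 'Content-Type': 'text/plain' });
    res.end(zlib.gzipSync(body));
  } else {
    // No gzip in Accept-Encoding (or no header at all): send it uncompressed
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    res.end(body);
  }
}).listen(8080);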

Is HTTP header Referer sent when going to a http page from a https page?

After a few tests, I'm starting to reach the conclusion that a browser does not send a Referer HTTP header when one clicks through to an http page from an https one.
What is the security reason for that? Is it defined somewhere in the standard?
The HTTP RFC states, in section 15.1.3 Encoding Sensitive Information in URI's :
Clients SHOULD NOT include a Referer
header field in a (non-secure) HTTP
request if the referring page was
transferred with a secure protocol.
So, this is expected / standard behaviour.
Actually, it's not that straightforward anymore (2014 onwards), according to this W3C document on referrer policy.
The default behaviour is that browsers will not send referrer information when going from HTTPS to HTTP. However, browsers will send referrer when going from HTTPS to HTTPS.
Also, in HTML5, there is a new meta tag named referrer, that looks like this:
<meta name="referrer" content="origin">
New browsers have already implemented this, so whether or not browsers send the referrer will depend on this meta tag in the near future. If this meta tag is not included in the page's HTML, then browsers will use the default behaviour.
The following are the possible values of the content attribute of the referrer meta tag:
no-referrer: Referrer will not be sent, regardless of HTTP or HTTPS
origin: Only the origin (main) domain will be sent as referrer
origin-when-cross-origin: Same-origin requests will send the full referrer URL, while cross-origin requests will send only the origin as the referrer
no-referrer-when-downgrade: This is the default behaviour when no referrer meta tag is provided on the page.
unsafe-url: This will always send referrer, regardless of HTTP or HTTPS
There are also some legacy values for the referrer meta tag. These are no longer recommended, but are still used on many sites at the moment:
never: same as no-referrer
default: same as no-referrer-when-downgrade
always: same as unsafe-url
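If you want to check what a particular browser actually sent, you can inspect it from the destination page (a quick sketch; run it in the page's console or a script):
// On the page that was navigated to:
console.log(document.referrer); // empty string when the browser suppressed the referrer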
I hope this information will be helpful to someone who just found this post after 2014.
Yes, defined in the standard:
Clients SHOULD NOT include a Referer
header field in a (non-secure) HTTP
request if the referring page was
transferred with a secure protocol
Reason: sometimes session IDs are encoded in the URL. HTTP pages can have cross-site scripting that steals the session from the HTTPS communication. To prevent this, the referrer is not transmitted on the HTTPS-to-HTTP transition, so the URL-encoded session ID can't be stolen.
