How to disable the Cloudflare proxy? - node.js

I changed my domain's nameservers to Cloudflare yesterday, but I'm confused about Cloudflare's caching system.
My server is a Node.js server, and even when a client requests the same URL, it sometimes responds with different content.
From what I found on the internet, Cloudflare caches content for faster browsing: it saves the server's responses on its cache servers, and when a client requests the same resource again, Cloudflare returns it without contacting the origin server.
So if the server's resource changes, Cloudflare will return the old resource, is that right?
I set an A record for a domain and it became "Proxied" mode. I cannot change it to the grey-cloud connection.
Do I have to pay to do this, or is there another way to change it?
Thank you.

Cloudflare does not cache text/html or application/json responses by default. Static JavaScript files and images may be cached. Here is an article about what is cached by default.
There should be no reason you cannot switch the "orange cloud" proxied mode to "grey cloud" DNS-only mode by clicking the icon.

Related

How to force route users to HTTP

We just updated our website and migrated our DNS to the new server. The issue we are having now is that sometimes when a user types in our website, 'example.com', it routes them to https://example.com, which isn't currently enabled.
Is there a way to route users to http://example.com instead of https://example.com while we wait for SSL to be enabled on the new site?
No.
DNS is for resolving the hostname (example.com) to an IP address. You can't tell the browser to use HTTPS or HTTP via DNS.
I'm assuming that in the past you supported HTTPS. Once you've done that, browsers often remember (especially if you ever sent an HSTS header). The best thing to do is get your certificate in place ASAP. You can use Let's Encrypt and Certbot and be done in a couple of minutes in most cases.

How to identify https clients through proxy connection

We have developed a corporate Node.js application served over the HTTP/2 protocol, and we need to identify clients by their IP address because the server needs to send events to clients based on their IP (basically some data about phone calls).
I can successfully get the client IP through req.connection.remoteAddress, but some of the clients can only reach the server through our proxy server.
I know about the x-forwarded-for header, but that doesn't work for us because proxies can't modify HTTP headers in SSL connections.
So I thought I could get the IP on the client side and send it back to the server, for example, during the login process.
But, if I'm not wrong, browsers don't provide that information to JavaScript, so we need a way to obtain it first.
Researching it, the only option I found is obtaining it from a server that can tell me the IP from which I'm reaching it.
Of course, over HTTPS I can't, because of the proxy. But I can easily enable an HTTP service just to serve the client IP.
But then I found out that browsers block HTTP requests from HTTPS-served pages because of the "mixed active content" restriction.
Reading about it, I found that "mixed passive content" is allowed, and I succeeded in downloading garbage data as an image file through an <img> tag. But when I try to do the same thing using an <object> element, I get a "mixed active content" block again, even though the MDN documentation says it's considered passive.
Is there any way to read that data through the (broken) <img> tag, or am I missing something needed to make the <object> element really passive?
Any other idea for achieving our goal is also welcome.
Finally I found a solution:
As I said, I was able to perform an HTTP request through an <img> tag.
What I was unable to do is read the downloaded data, regardless of whether it was an actual image or not.
...but the fact is that the request is made, and the URL it targets is something I can decide beforehand.
So all I need to do is generate a random key for each served login screen and:
Remember it in association with the session data.
Insert a (possibly hidden) <img> tag pointing to an HTTP URL containing that key.
As soon as the HTTP server receives the request to download that image, it can read the real IP from the x-forwarded-for header (trusting your proxy, of course) and resolve which active session it belongs to.
Of course, you must also take care to clear keys, used or not, after a short time, both to avoid a memory leak and to prevent them from being reused with malicious intent.
FINAL NOTE: The only drawback of this approach is the risk that, some day, browsers will start blocking mixed passive content by default too.
For this reason I in fact opted for a double strategy. That is, in addition to the technique explained above, I also implemented an HTTP redirector that does almost the same thing: it redirects all requests to the root route ("/") to our HTTPS app, but it does so via a POST request containing a key which has previously been associated with the client's real IP.
This way, if some day the first approach stops working, users would still be able to access the site over HTTP first, which is in fact what we are going to do. But the first approach, while it continues to work, avoids problems if users bookmark the page from within the app (which results in a bookmark to its HTTPS URL).

How to ignore or delete browsers' DNS cache from my server?

I migrated my website to a new server. Now my clients have the previous IP address cached in their browsers. They have to clear cookies and cache, and it's a nuisance.
I'm wondering if there is some way to ignore or delete this cache from my server, or from somewhere I can control.
Thanks!
Server Technologies: Linux, Nginx, Nodejs, React.
You can remove cookies on the Node side by returning an HTTP header with an empty value for Set-Cookie.
Here is an explanation.

Is a Http2 Cross-origin push request possible?

Say I have a server that serves an HTML file at the url https://example.com/ and this refers to a css file at the url https://test.com/mystyles.css. Is it possible to push the mystyles.css file alongside the html content as part of an HTTP2 connection, so that a browser will use this css content?
I have tried to create such a request using a self-signed certificate on my localhost (and I have pre-created a security exception for both hosts in my browser): I send the HTML file when a request arrives at http://localhost/, and push the CSS with a differing hostname/port in the :authority or Host header. However, on a full-page refresh, the CSS file is fetched in a separate request from the server rather than using the pushed CSS.
See this gist for a file that I have been using to test this. If I visit http://localhost:8080/ then the text is red, but if I visit http://test:8080/ it is green, implying that the pushed content is used if the origin is the same.
Is there a combination of headers that needs to be used for this to work? Possibly invoking CORS?
Yes, it is theoretically possible, according to this blog post from a Chrome developer advocate from 2017:
As the owners of developers.google.com/web, we could get our server to push a response containing whatever we wanted for android.com, and set it to cache for a year.
...
You can't push assets for any origin, but you can push assets for origins which your connection is "authoritative" for.
If you look at the certificate for developers.google.com, you can see it's authoritative for all sorts of Google origins, including android.com.
[Screenshot: viewing certificate information in Chrome]
Now, I lied a little, because when we fetch android.com it'll perform a DNS lookup and see that it terminates at a different IP to developers.google.com, so it'll set up a new connection and miss our item in the push cache.
We could work around this using an ORIGIN frame. This lets the connection say "Hey, if you need anything from android.com, just ask me. No need to do any of that DNS stuff", as long as it's authoritative. This is useful for general connection coalescing, but it's pretty new and only supported in Firefox Nightly.
If you're using a CDN or some kind of shared host, take a look at the certificate, see which origins could start pushing content for your site. It's kinda terrifying. Thankfully, no host (that I'm aware of) offers full control over HTTP/2 push, and is unlikely to thanks to this little note in the spec: ...
In practice, it sounds like it's possible if your certificate has authority over the other domains and they're hosted at the same IP address, but it also depends on browser support. I was personally trying to do this with Cloudflare and found that they don't support cross-origin push (similar to the blog post author's observations about CDNs in 2017).

How to prevent SSL urls from leaking info?

I was using Google SSL search (https://www.google.com) with the expectation that my search would be private. However, my search for 'toasters' produced this query:
https://encrypted.google.com/search?hl=en&source=hp&q=toasters&aq=f
As you can see, my employer can still log this and see what the search was. How can I make sure that when someone searches on my site using SSL (using custom Google search), their search terms aren't visible?
The URL is sent over SSL. Of course a user can see the URL in their own browser, but it isn't visible as it transits the network. Your employer can't log it unless they are the other end of the SSL connection. If your employer creates a CA certificate and installs it in your browser, they could use a proxy to spoof Google host names, but otherwise, the traffic is secure.
HTTPS protects the entire HTTP exchange, including the URL, so the only thing someone intercepting network traffic can determine is that there was communication between the browser and your site (or Google in this case). Even without the contents, that information can be useful.
Unless you have full administrative control over the systems making the queries, you should assume that anything happening on them can be intercepted or logged. Browsers typically store history and cache pages in files on the local disk, which can be read by administrators. You also can't verify that the browser itself hasn't been recompiled with code to log sites that were visited, even in "private" mode.
Presumably your employer provides you with a PC, the software on it, the LAN connection to its own corporate network, the internet proxy and corporate firewall, maybe DNS servers, etc etc.
So you are exposed to traffic sniffing and tracing at many different levels. Even if you browse to a URL over SSL/TLS, you have to assume that the contents of your HTTP session can be recorded. Do you always check that the certificate in your browser is from Google and not your employer's proxy? Do you know what software sits between your browser and your network card, etc.?
However, if you had complete control over the client, then you could be sure that no one external to your HTTPS conversation with Google would be able to see the URL you are requesting.
Google still knows what you're up to, but that's a private matter between your search engine and your conscience ;)
To add to what erickson said, read this. SSL will protect the data between the connected parties. If you need to hide that link from the boss, disable the browser's caching of visited sites, i.e. disable or delete the history data.
