I migrated my website to a new server. Now my clients have the previous IP address cached in their browsers. They have to clear cookies and cache, which is a nuisance.
I'm wondering if there is some way to ignore or delete this cache from my server, or from somewhere under my control.
Thanks!
Server technologies: Linux, Nginx, Node.js, React.
You can remove the cookies on the Node side by returning a Set-Cookie header with an empty value (and an expiry in the past).
Here is an explanation.
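A minimal sketch of that idea, assuming an Express app; the cookie name sessionId and the /logout route are placeholders:

```javascript
const express = require('express');
const app = express();

app.get('/logout', (req, res) => {
  // Overwrite the cookie with an empty value and an expiry in the past,
  // which tells the browser to discard it.
  res.setHeader('Set-Cookie', 'sessionId=; Path=/; Expires=Thu, 01 Jan 1970 00:00:00 GMT');
  res.send('Cookie cleared');
});

app.listen(3000);
```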
Related
I changed my domain's nameservers to Cloudflare yesterday, but I'm confused about Cloudflare's caching system.
My server is a Node.js-based server, and even when clients request the same URL, it sometimes responds with different content.
From what I've read, Cloudflare uses a cache for fast browsing: it saves the server's content on its cache servers, and when a client requests the same resource again, Cloudflare returns it without contacting the origin server.
So if the server's resource changes, Cloudflare will return the old resource. Is that right?
I set an A record for a domain and it became "Proxied" mode; I cannot change it to the grey (DNS-only) connection.
Do I have to pay to do that, or is there a way to change this?
Thank you.
Cloudflare should not cache text/html or application/json responses. Static JavaScript files and images may be cached. Here is an article about what is cached by default.
There should be no reason you cannot change the "orange cloud" proxied mode to "grey cloud" DNS-only by clicking the icon.
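If you want to be explicit that a dynamic response must never be cached by an intermediary, one option is to send a Cache-Control: no-store header from Node. A minimal sketch, assuming a plain Node HTTP server (the route and payload are placeholders):

```javascript
const http = require('http');

http.createServer((req, res) => {
  // Dynamic content: tell Cloudflare (and browsers) not to cache it.
  res.writeHead(200, {
    'Content-Type': 'application/json',
    'Cache-Control': 'no-store'
  });
  res.end(JSON.stringify({ generatedAt: Date.now() }));
}).listen(8080);
```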
We have developed a corporate Node.js application served over the HTTP/2 protocol, and we need to identify clients by their IP address because the server needs to send events to clients based on their IP (basically some data about phone calls).
I can successfully get the client IP through req.connection.remoteAddress, but some of the clients can only reach the server through our proxy server.
I know about the X-Forwarded-For header, but that doesn't work for us because the proxy can't modify HTTP headers in SSL connections.
So I thought I could get the IP on the client side and send it back to the server, for example during the login process.
But, if I'm not mistaken, browsers don't expose that information to JavaScript, so we need a way to obtain it first.
Researching this, the only option I found is to obtain it from a server that can tell me the IP address I'm reaching it from.
Of course, over HTTPS I can't, because of the proxy; but I can easily enable an HTTP service just to serve the client IP.
But then I found that browsers block HTTP requests from HTTPS-served pages because of the "mixed active content" restriction.
Reading about it, I found that "mixed passive content" is allowed, and I succeeded in downloading garbage data as an image file through an <img> tag. But when I try to do the same thing using an <object> element, I get a "mixed active content" block again, even though the MDN documentation says it's considered passive.
Is there any way to read that data through the (broken) <img> tag, or am I missing something to make the <object> element really passive?
Any other idea to achieve our goal is also welcome.
Finally I found a solution:
As I said, I was able to perform an HTTP request by inserting an <img> tag.
What I was unable to do is read the downloaded data, regardless of whether it was an actual image or not.
...but the fact is that the request is made, and which URL it goes to is something I can decide beforehand.
So all I need to do is generate a random key for each served login screen and:
Remember it in association with the session data.
Insert an <img> tag (possibly hidden) pointing to an HTTP URL that contains that key.
As soon as the HTTP server receives the request for that image, it can read the real IP from the X-Forwarded-For header (trusting your proxy, of course) and resolve which active session it belongs to.
Of course, you must also take care to clear the keys, used or not, after a short time, to avoid leaking memory or letting them be reused with malicious intent.
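A minimal sketch of the idea in Node/Express, assuming express-session is in use; the host ip.example.internal, the /beacon route and the timeout are placeholders, and in a real deployment the beacon route would live on a separate plain-HTTP listener:

```javascript
const express = require('express');
const session = require('express-session');
const crypto = require('crypto');

const app = express();
app.use(session({ secret: 'change-me', resave: false, saveUninitialized: true }));

// Random key -> session ID, remembered when the login screen is served.
const pendingKeys = new Map();

app.get('/login', (req, res) => {
  const key = crypto.randomBytes(16).toString('hex');
  pendingKeys.set(key, req.session.id);
  setTimeout(() => pendingKeys.delete(key), 60 * 1000); // expire unused keys
  // The <img> points at a plain-HTTP host so the proxy can add X-Forwarded-For.
  res.send(`<img src="http://ip.example.internal/beacon/${key}" width="1" height="1">`);
});

// In production this route would be served by the plain-HTTP listener.
app.get('/beacon/:key', (req, res) => {
  const sessionId = pendingKeys.get(req.params.key);
  if (sessionId) {
    const realIp = req.headers['x-forwarded-for'] || req.socket.remoteAddress;
    console.log(`session ${sessionId} comes from ${realIp}`); // store the association here
    pendingKeys.delete(req.params.key);
  }
  res.status(204).end(); // no image data needed; the broken <img> is never read anyway
});

app.listen(80);
```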
FINAL NOTE: The only drawback of this approach is the risk that, some day, browsers may start blocking mixed passive content by default too.
For this reason I in fact opted for a double strategy. That is: in addition to the technique explained above, I also implemented an HTTP redirector that does almost the same thing: it redirects all requests for the root route ("/") to our HTTPS app, but it does so via a POST request containing a key that has previously been associated with the client's real IP.
This way, if the first approach ever stops working, users will still be able to reach the app through HTTP first, which is in fact what we are going to do anyway. But the first approach, while it keeps working, avoids problems if users bookmark the page from within the app (which results in a bookmark to its HTTPS URL).
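A rough sketch of that redirector, again in Node; the key store is in-memory here for brevity, whereas a real deployment would share it with the HTTPS app (the host name and form field are placeholders):

```javascript
const http = require('http');
const crypto = require('crypto');

// Key -> real client IP, to be looked up later by the HTTPS application.
const ipByKey = new Map();

http.createServer((req, res) => {
  const key = crypto.randomBytes(16).toString('hex');
  const realIp = req.headers['x-forwarded-for'] || req.socket.remoteAddress;
  ipByKey.set(key, realIp);
  setTimeout(() => ipByKey.delete(key), 60 * 1000); // expire unused keys

  // Auto-submitting form that POSTs the key to the HTTPS application.
  res.writeHead(200, { 'Content-Type': 'text/html' });
  res.end(`<form method="POST" action="https://app.example.internal/">
             <input type="hidden" name="ipKey" value="${key}">
           </form>
           <script>document.forms[0].submit()</script>`);
}).listen(80);
```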
I have no experience with Varnish, so please bear with me.
We have inserted Google Tag Manager into a client's site. The Tag Manager injects the Google Analytics tracking code (and nothing else) into the page. The client's technical service provider has now complained that the Tag Manager prevents the Varnish cache from working.
My guess is that this has nothing to do with the Tag Manager as such but is rather caused by the cookies from Google Analytics; apparently, in the default configuration, pages with cookies are not cached. However, since I'm not very familiar with Varnish, I cannot speak with any authority on the matter.
So my question is: is there any reason why Google Tag Manager itself (not any tags inside the Tag Manager) would invalidate a Varnish cache on each request? A web search turned up nothing specific regarding Varnish and GTM.
Thank you for your time,
Eike
Google Tag Manager will not interfere with the Varnish cache in any way. The reason is that the requests for Google Tag Manager are sent to google-analytics.com, not to your website.
The cookies are then set by google-analytics.com and are only sent between the client's browser and google-analytics.com.
This means that Google Tag Manager does not actually have any effect on your website, apart from the initial JavaScript being loaded from there.
In fact, Varnish does not see cookies that are created through JavaScript; it only reacts to the Cookie and Set-Cookie headers of the HTTP request and response.
The problem you may be having is that, if the dataLayer is placed in the HTML code, the values of its variables will not change, because the page is served from cache.
To solve this, make a separate HTTP call (e.g. via Ajax) that is not cached and returns the variables for the dataLayer.
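For example, a small client-side sketch (the /datalayer.json endpoint is a placeholder and would be served with Cache-Control: no-store):

```javascript
// Fetch per-visitor values from an uncached endpoint and hand them to GTM.
window.dataLayer = window.dataLayer || [];

fetch('/datalayer.json', { cache: 'no-store' })
  .then((response) => response.json())
  .then((values) => {
    // e.g. { userType: 'member', pageCategory: 'blog' }
    window.dataLayer.push(values);
  });
```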
Session Hijacking
So I have a slight problem. I'm trying to identify a visitor, which is very hard, if not impossible, using $_SERVER variables alone, as mentioned in this question: Preventing session hijacking.
Possible Solution
To make it a bit harder than just copying the cookie from client A to client B (which is sadly child's play), I want to collect some information and validate it against something I have stored. In my database I want to store things like the User-Agent, IP address, OS, etc., encrypted using MCRYPT. To match a user, a lot of variables have to match, which makes it somewhat harder than just copying the cookie contents to log in.
The problem
Here's where my problem starts... The User-Agent and OS are nearly, if not completely, identical, because these are fat clients booting from the same image. Another problem is the IP address: the server in the datacenter has a connection to the office, so for our applications (even though they are not externally accessible) the IP address is the same for every client. I found out that I could try to use the X-Forwarded-For header to distinguish IP addresses and thus make each user a bit more unique.
What's next?
What I would like to know is the following: how can I make sure the X-Forwarded-For header is ALWAYS set, without relying on anything the clients have access to? Does something have to be added by the routing? Our connection is HTTPS, so I doubt I can just "inject" something. And if I can inject something like this, can the users do the same on the client side?
The clients are in our internal office network and the applications (running in PHP) are not accessible from the outside.
The X-Forwarded-For and User-Agent HTTP headers can easily be spoofed by any user (just as easily as copying a cookie from one machine to another).
Chrome extensions such as Header Hacker can be used on the client, and since your site is using HTTPS these headers cannot be added en route (as the headers need to be added at the OSI application layer, not the transport layer).
If you're worried about users copying cookies between one another, is there any mechanism that would stop them sharing their username and password credentials? If you did manage to implement something that verified that their sessions remained on the same client machine, couldn't they simply work round it by logging in as each other?
Aside from my questions, as a solution you could introduce a local proxy into your internal network, used purely for connecting to your site at the data centre. The site should reject any connection that does not come from the IP of the proxy server (configure the web server or firewall to accept only the proxy's client IP for web connections). With this approach you will need to install an SSL certificate on the proxy, which each client machine can trust. This enables the proxy server to decrypt the traffic, add the appropriate IP address header (overwriting any set by the client) and then forward the request on to your server. The server code can then safely check the X-Forwarded-For header to make sure it remains constant per user session.
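The application in the question is PHP, but the server-side check itself is generic; a minimal sketch of the idea in Node (the proxy IP, port and response are placeholders):

```javascript
const http = require('http');

// Only the internal proxy is allowed to reach the site, so its header can be trusted.
const TRUSTED_PROXY_IP = '10.0.0.5'; // placeholder

http.createServer((req, res) => {
  // remoteAddress may be IPv6-mapped, e.g. '::ffff:10.0.0.5'
  const peer = req.socket.remoteAddress.replace(/^::ffff:/, '');
  if (peer !== TRUSTED_PROXY_IP) {
    res.writeHead(403);
    return res.end('Forbidden');
  }
  // Safe to read: the trusted proxy overwrites any client-supplied value.
  const clientIp = req.headers['x-forwarded-for'];
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end(`Your IP is ${clientIp}`);
}).listen(8080);
```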
If this sounds like a good solution, please comment if you have any questions and I'll update my answer.
Two other thoughts:
You could use something to fingerprint the browser, like Panopticlick. However, as this retrieves various values from the client and creates a fingerprint, it can all be spoofed if the headers are set the same as another user's. Also, as each machine boots from the same image, the fingerprint might well be identical anyway.
Rolling session cookies: you could randomly regenerate the session using session_regenerate_id(). This updates the session ID of the client making the request, and any other client using the same ID is then logged out because it keeps sending the old session ID. In fact, you could do this on every request, which ensures that only the current client is using the current session.
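The question's stack is PHP, but the same rolling-session idea can be sketched with Node and express-session (the userId field and secret are placeholders):

```javascript
const express = require('express');
const session = require('express-session');

const app = express();
app.use(session({ secret: 'change-me', resave: false, saveUninitialized: false }));

// Rotate the session ID on every request; any other client that copied the
// old cookie keeps sending a stale ID and is effectively logged out.
app.use((req, res, next) => {
  const userId = req.session.userId; // carry the logged-in user across the rotation
  req.session.regenerate((err) => {
    if (err) return next(err);
    req.session.userId = userId;
    next();
  });
});

app.get('/', (req, res) => res.send(`Current session ID: ${req.session.id}`));
app.listen(3000);
```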
I'm running Varnish on a dedicated server. When I load a page, it is delivered via Apache, and on the second and subsequent hits it is delivered via the Varnish cache (i.e. I can see two timestamps in the X-Varnish header).
But when I open the same page from another computer, it is again delivered from the backend (Apache) the first time, and on further reloads it comes from Varnish.
If a page is already in the Varnish cache, isn't it supposed to be delivered by Varnish even on a new computer's first visit? I've tried simple hello-world PHP files without any database calls, with the same effect. Might something be wrong with my VCL file, or does Varnish just work this way?
Check whether you are sending session data (cookies), which makes the requests look unique to Varnish. The docs show you how to strip cookies.
Jon is right. I had a similar problem. You also need to clear your cookies and cache before testing. Check whether the response to the first visit tries to set a cookie; if so, you can do "unset beresp.http.Set-Cookie" under vcl_fetch.