I'm not sure whether this is specific to Selenium, chromedriver, Docker, or IIS. I am using Docker for Windows (beta, with Hyper-V) on my PC. On the same PC, I am using IIS to serve a website.
When using the selenium/standalone-chrome-debug:2.53.0 image to run a test on that website, the Chrome instance in the container does not receive any HttpOnly cookies (I have used VNC to check). It does get normal cookies, though. This means the CSRF token is gone, and trying to POST a form fails.
It works fine if I visit the website on my PC. Both my PC and the container have the website's domain in their HOSTS file.
Normal traffic (HTTP GET) works fine from the container; the only difference (thus far) is that it does not get HttpOnly cookies.
Edit: When I opened up navigation to external sites, I did receive HttpOnly cookies. So this is probably not related to Selenium or chromedriver.
It might be related to the VPN on my host PC, which I need for the local IIS website.
This turned out not to be a problem with Docker/Selenium/chromedriver/VPN.
We had a request filter that added the Secure flag to cookies sent with the response, but only for requests that come from remote machines. In this case, the Docker container is seen as a remote machine.
When testing on my local IIS I am doing this over HTTP, which means that Secure (HTTPS-only) cookies are not sent.
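For illustration, a rough Express analogue of such a filter (the original was an IIS request filter, so the cookie name and port below are hypothetical):

const express = require('express');
const app = express();

// Hypothetical analogue: mark cookies Secure only for remote clients.
// A browser that received a Secure cookie will refuse to send it back
// over plain HTTP, which matches the symptom above.
app.use((req, res, next) => {
  const isLocal = ['127.0.0.1', '::1', '::ffff:127.0.0.1'].includes(req.ip);
  res.cookie('csrfToken', 'example-value', { httpOnly: true, secure: !isLocal });
  next();
});

app.listen(80);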
I have a bit of a problem with a web page I'm making. Here's the situation:
I have a working NodeJs server that's online, hosted on a VPS.
I'm making a webpage that makes requests to this server. The requests work when I'm testing them from localhost or my local network.
When I put my website on my hosting service (different from the server), the requests fail.
Google Chrome returns this error:
Failed to load resource: net::ERR_SSL_PROTOCOL_ERROR
The domain I have registered for my webpage has TLS 1.3 I think; it is HTTPS for sure. So I thought it was a mismatch, like my website couldn't make requests to a plain HTTP server that doesn't have any SSL.
But when I looked into setting up my server to use SSL or TLS, I got really confused. People recommended I use Cloudflare as it provides certificates for free, but Cloudflare only works with domains, not stuff that runs on a VPS with only an IP address. I also tried following the Certbot instructions to make a certificate myself, but my VPS doesn't support snapd, even though it's Ubuntu 20.04.
Any attempt on my part to follow the rabbit hole of SSL certificates hasn't yielded anything, which is why I'm posting here. I don't even know if somehow getting an SSL certificate will solve the problem.
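For what it's worth, from what I've read, serving the Node server over HTTPS once a certificate exists would look roughly like this (the file paths are placeholders):

const https = require('https');
const fs = require('fs');

// Placeholder paths; point these at wherever the certificate and key live.
const options = {
  key: fs.readFileSync('/etc/ssl/private/server.key'),
  cert: fs.readFileSync('/etc/ssl/certs/server.crt'),
};

https.createServer(options, (req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('ok\n');
}).listen(443);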
Any help is much appreciated
I purchased a VPS server, installed IIS, set up a domain, and published a static index.html page. It worked when I went to mydomain.com, but 1 to 1.5 hours later it stopped working and I can only see the message "The site can't be reached."
The VPS is accessible via Remote Desktop, and if I run IE locally I can access mydomain.com, but it does not work from outside the VPS.
If I reboot the VPS server, then after a while the page can be accessed again, but again it only lasts for around 1-1.5 hours.
What could be the reason for this?
If it is caused by an idle timeout, then your index.html page will not be displayed. This error means that your browser cannot establish a connection with the website you are trying to reach, either because your Internet connection has been interrupted or because your Internet service provider has blocked access to the website.
You can try the following methods to solve this error:
Restart your Internet Router.
Try to visit other websites to make sure that your Internet connection is working.
If you have another computer or device at your place, try to visit the website where you receive the "ERR_CONNECTION_CLOSED" error, to make sure that the site you're trying to visit is not blocked by your ISP.
If you have setup a VPN connection, then disconnect from it.
Temporarily disable the Firewall application.
I have a problem with an Express.js service running in production that I'm not able to replicate on my localhost. I have already tried replaying all the URLs requested in production against my local machine, but on my machine everything works fine. So I suspect the problem comes from the data in the HTTP headers (cookies, user agents, languages...).
So, is there a way (some Express module, or a sniffer that runs on Ubuntu) that allows me to easily create a dump of the whole headers on the server, so I can later repeat those exact requests against my localhost?
You can capture network packets with https://www.wireshark.org/, analyze them, and maybe find the difference between your local environment and the production one.
You can try a proxy tool like Charles (https://www.charlesproxy.com/) or Fiddler (http://www.telerik.com/fiddler) to log your browser requests.
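If you'd rather capture the headers in the app itself, here is a minimal sketch of an Express middleware that appends each request to a file (the log path is just an example):

const express = require('express');
const fs = require('fs');
const app = express();

// Append one JSON line per request so the exact headers can be replayed
// against localhost later. '/var/log/header-dump.jsonl' is an example path.
app.use((req, res, next) => {
  const entry = {
    time: new Date().toISOString(),
    method: req.method,
    url: req.originalUrl,
    headers: req.headers, // cookies, user agent, languages, ...
  };
  fs.appendFile('/var/log/header-dump.jsonl', JSON.stringify(entry) + '\n', () => next());
});

Each logged line can then be turned back into a request against localhost, for example with curl and a few -H flags.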
I wanted to test my ReactJS + NodeJS website on another machine on my LAN, so I changed the server host IP from localhost to 0.0.0.0 as described in this answer. I noticed that although I could access the server from a remote machine, all I could see was the title and favicon (the rest was a blank page). I tried another approach of using the ngrok module as described here (which happens to be the answer to the same question as the previous link). I still got the same blank page.
The GET requests to the server are shown below (as shown by ngrok).
/landing is a page I was trying to access. Can someone explain what's happening?
PS: The server is running on a Mac and I'm trying to access the page on an Ubuntu machine. Also, I'm using this react-redux boilerplate. Webpack is also being used along with hot reloading.
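For reference, binding the dev server to all interfaces typically looks something like the sketch below; this assumes webpack-dev-server (which the boilerplate's hot reloading suggests), and the port is an example:

// webpack.config.js (sketch)
module.exports = {
  // ...entry, output, and loaders as in the boilerplate...
  devServer: {
    host: '0.0.0.0', // listen on all interfaces instead of just localhost
    port: 3000,
  },
};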
Did you try changing the port settings in your firewall?
Go to the firewall settings and allow the respective port for inbound connections.
I've got a ServiceStack application that almost works when self-hosted rather than hosted in IIS.
If I start the service and connect from a remote machine to the PC's IP address (http://10.0.0.5:81), then it's fine and everything works as expected.
However, if I start the service and the first connection happens to come in on localhost (say, because I'm testing that the service is working after it's been installed), then all remote machines get redirected to http://localhost:81. The same is true if I use 127.0.0.1:81, with remote PCs getting redirected to the loopback address.
At that point all I can do is restart the service and connect from a remote machine first to get it working again.
Is there some way to disable what appears to be this caching?
ServiceStack tries to infer the BaseUrl for your services, which it can only do at runtime, and it then caches the result for subsequent requests. You can tell it to use an explicit base URL instead with:
SetConfig(new HostConfig {
    WebHostUrl = "http://10.0.0.5:81",
});