Sahi's proxy no longer working properly - browser

I have a suite of tests for our website using Sahi. These tests are automated and feed into our Jenkins build system.
The tests run on a dedicated PC that is used for nothing else. It has Sahi plus all the browsers installed. The Jenkins server makes a remote call to the testing PC to run the tests. Due to the time it takes to run all the tests, this functional test suite is run overnight.
For several months this system was working beautifully. But one day a few weeks ago I came into the office and found that all the tests had failed, and they haven't worked since. As far as I know, nothing significant has changed (we obviously keep the browser versions up to date, but I don't think the failure coincided with any updates; Sahi itself hasn't had an update since last year).
I've done some work to find out what's happening:
Sahi uses a proxy as part of its browser-control magic, and I believe that this proxy is the source of the problem. But I can't work out how or why.
When the browser under Sahi's control loads the page to be tested, none of the HTTP requests seem to succeed. The raw page content is shown (I think because it's cached), but none of the styles, graphics, or scripts load (except those already cached by the browser). The Sahi script then tries to click a button to proceed through the test, but the browser fails to load anything. Sahi waits for a while, but eventually the script times out and the test fails.
I can replicate this on the affected PC when running Sahi manually. It happens on any site, and in all browsers. However it doesn't happen on my own desktop PC, which has the same versions of all the relevant software installed. And of course, it worked fine in the past on the test box.
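One way to take the browsers out of the equation is to push a request through the proxy by hand. A rough sketch of such a check (Python; it assumes Sahi's proxy is on its default localhost:9999, and example.com is just a stand-in target):

    # Send one request through the Sahi proxy, bypassing any browser,
    # to see whether the proxy itself still forwards traffic.
    import urllib.request

    proxy = urllib.request.ProxyHandler({
        "http": "http://127.0.0.1:9999",   # Sahi's default proxy port (assumed)
        "https": "http://127.0.0.1:9999",
    })
    opener = urllib.request.build_opener(proxy)

    try:
        with opener.open("http://example.com/", timeout=10) as resp:
            print("status:", resp.status)
    except Exception as exc:
        print("proxied request failed:", exc)

If this times out as well, the problem sits in the proxy (or whatever is between it and the network), not in the browsers.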
I have tried uninstalling Sahi and the browsers, and re-installing from scratch. This has not made any difference. (I appreciate that uninstalling often doesn't actually delete everything, so perhaps there's more I could do here?)
I'm really hoping someone can help me here, because I'm unsure what else to try.
Many thanks in advance.

Since it happens in all browsers, it looks like a firewall setting may be blocking access to port 9999. Turn off the firewall and check. If you see any exceptions on the Sahi console, you could post those too.
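A quick way to tell "nothing is listening on 9999" apart from "a firewall is silently dropping packets" is a bare socket connect. A minimal sketch (Python, again assuming Sahi's default proxy port):

    # Distinguish "nothing listening on 9999" from "listening but blocked".
    import socket

    try:
        with socket.create_connection(("127.0.0.1", 9999), timeout=5):
            print("port 9999 accepts connections")
    except ConnectionRefusedError:
        print("nothing is listening on 9999 - is the Sahi proxy running?")
    except socket.timeout:
        print("timed out - a firewall may be silently dropping packets")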

I wasn't able to get to the bottom of this. I suspect it was something blocking the Sahi proxy from working, but I couldn't find the culprit.
I'd wasted too much time on it, and the machine it was running on wasn't being used for anything else anyway, so my final solution to this was to re-install the OS from scratch.
This has solved the problem. It hasn't helped me understand why it happened, but as long as it's working, eh?
Thank you to everyone who stopped by.

Related

Timeout: WebException when trying to access/debug LocalHost of Azure Functions [Unity] [PlayFab]

I'm trying to debug Azure Function scripts locally, in conjunction with Unity, but getting Timeout errors every time.
I have a few things here, and I'm not sure which one is actually causing the problem... It might be a setting in Windows, as opposed to one of the applications.
I'm building in Unity 2019.4, using PlayFab and its ability to use Azure Functions. When I execute scripts on the Azure servers, everything functions correctly. But when I try to run with Local Debugging, I get WebException: The request timed out System.Net.HttpWebRequest.GetRequestStream (see the full error below).
Here's what I'm doing to set up:
Set PlayFab to Local Debugging (via VS Code Extension)(and confirming the json file is made in the temp folder)
Install Azure Functions Core Tools
Start Azure Functions debugging from VS Code (the terminal output shows that localhost is running it correctly)
The timeout error references the correct address, http://localhost:7071/api/CloudScript/ExecuteFunction, as confirmed in the VS Code terminal when the AzFunc debugging is started.
When I clone the project to my MacBook Pro, everything runs smoothly in local debugging.
So, because of this, I've tried checking that the ports aren't blocked via PowerShell (netsh firewall show state), and I've told Windows Defender not to block anything from Unity or Code. When I run netstat -ab in PowerShell/CMD, I do get:
Can not obtain ownership information
TCP 0.0.0.0:7071 DESKTOP-COMPUTER:0 LISTENING
[func.exe]
TCP 0.0.0.0:7680 DESKTOP-COMPUTER:0 LISTENING
I don't know if this is a problem, or normal...
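One more check that can narrow it down: call the local endpoint without Unity in the loop. A rough sketch (Python; the URL is the one from the error above, and the empty JSON body is just a placeholder, not a valid PlayFab payload):

    # POST to the local Azure Functions host directly. Any HTTP response,
    # even an error complaining about the body, proves the TCP path works
    # and points back at Unity/Mono rather than at Windows networking.
    import json
    import urllib.error
    import urllib.request

    url = "http://localhost:7071/api/CloudScript/ExecuteFunction"
    req = urllib.request.Request(
        url,
        data=json.dumps({}).encode(),  # placeholder body
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            print("status:", resp.status)
    except urllib.error.HTTPError as err:
        print("host responded with HTTP", err.code, "- the network path is fine")
    except Exception as exc:
        print("no response at all:", exc)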
I don't even know what else to check for. This problem is beyond me. If anyone knows the solution, or can point me in right direction, I'd be very grateful!
Below are the two errors from the Unity log whenever I execute an Azure Function script through PlayFab while local debugging:
WebException: The request timed out
System.Net.HttpWebRequest.GetRequestStream () (at <14e3453b740b4bd690e8d4e5a013a715>:0)
PlayFab.Internal.PlayFabWebRequest.Post (PlayFab.Internal.CallRequestContainer reqContainer) (at Assets/PlayFabSDK/Shared/Internal/PlayFabHttp/PlayFabWebRequest.cs:319)
Rethrow as WebException: Timeout: WebException making http request to: http://localhost:7071/api/CloudScript/ExecuteFunction
UnityEngine.Debug:LogException(Exception)
PlayFab.Internal.PlayFabWebRequest:Post(CallRequestContainer) (at Assets/PlayFabSDK/Shared/Internal/PlayFabHttp/PlayFabWebRequest.cs:332)
PlayFab.Internal.PlayFabWebRequest:WorkerThreadMainLoop() (at Assets/PlayFabSDK/Shared/Internal/PlayFabHttp/PlayFabWebRequest.cs:252)
System.Threading.ThreadHelper:ThreadStart()
Timeout: WebException making http request to: http://localhost:7071/api/CloudScript/ExecuteFunction
UnityEngine.Debug:Log(Object)
DemoScript:onPlayFabError(PlayFabError) (at Assets/PlayFabPartySDK/Examples/DemoScript.cs:264)
PlayFab.Internal.<>c__DisplayClass30_0:<QueueRequestError>b__0() (at Assets/PlayFabSDK/Shared/Internal/PlayFabHttp/PlayFabWebRequest.cs:395)
PlayFab.Internal.PlayFabWebRequest:Update() (at Assets/PlayFabSDK/Shared/Internal/PlayFabHttp/PlayFabWebRequest.cs:480)
PlayFab.Internal.PlayFabHttp:Update() (at Assets/PlayFabSDK/Shared/Internal/PlayFabHttp/PlayFabHTTP.cs:364)
Okay, TLDR: The answer to the problem is that not everything was updated. So, update everything if you're experiencing the same problem.
More specifically in my case, the "Visual Studio Code Editor" asset in Unity's Package Manager.
I just wanted to throw this out there in case anyone has a problem like this in the future. It may not be the same specific thing that needs upgrading, but look through everything involved and make sure it's updated, not just the big, obvious things (like Unity or your IDE). Thankfully for me in this case, the missing update was starting to cause other problems, and after much headbanging trying to solve those, I came across it.
Good luck, future fellow idiots!

NPM Nodejs crashes with BSOD

So I'm getting a blue screen of death whenever I have "npm start" running for a ReactJS app. It's an intermittent crash, i.e. it doesn't happen every time I run it, nor are there any exact steps to duplicate the crash, but I'll try to explain below under what circumstances it happens.
Create a reactjs app using create-react-app npm module.
Start the app using npm start. Chrome window opens, webpack is listening to changes I make to the source files.
Change any source file, and save it. NPM compiles it, Chrome page refreshes, and I can see my changes.
The above steps work fine "normally", but once in a while, right after I save a file, the system crashes with a BSOD saying DRIVER_IRQL_NOT_LESS_OR_EQUAL (NETIO.SYS). There is no definite "step" or action other than saving the file or refreshing Chrome that causes this to happen, and it also doesn't happen every single time.
Here are the steps I took to find out/eliminate the root cause of this issue:
Disabled my AV (Symantec Endpoint Protection).
Used a different browser (Mozilla, hell, even IE).
Changed the system (used a different laptop, although the same type: a Microsoft Surface on Windows 10).
Updated all drivers, etc. (verified by my organization's admins).
Closed all other programs that might potentially be interfering (Atom IDE, Eclipse, etc.).
The necessary conditions for the crash to happen are:
npm start must be running (webpack server on localhost:3000)
A browser window must be open and connected to localhost:3000 (if no browser is connected, it doesn't crash even if you change and save the file 200 times - I checked). Also, it doesn't matter which browser (checked with Mozilla/Edge/Chrome).
I believe the crash happens when NPM recompiles the files and serves them to the browser (asking it to refresh over some websocket), but I'm not an expert on NodeJS/NPM, so I'm not sure.
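If that theory is right, the crash ought to be reproducible without webpack at all, just by generating a similar burst of localhost traffic. A rough sketch of such a test (Python; it assumes the dev server is on localhost:3000, and the counts are arbitrary):

    # Open many short-lived connections to the dev server in parallel.
    # If the machine blue-screens in NETIO.SYS under this load too, the
    # fault is in a network filter driver, not in npm or webpack itself.
    import socket
    import threading

    def poke(count: int) -> None:
        for _ in range(count):
            try:
                with socket.create_connection(("127.0.0.1", 3000), timeout=2) as s:
                    s.sendall(b"GET / HTTP/1.0\r\nHost: localhost\r\n\r\n")
                    s.recv(1024)
            except OSError:
                pass  # refused or timed-out connections are fine for this test

    threads = [threading.Thread(target=poke, args=(200,)) for _ in range(8)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print("done - no crash suggests the trigger is more specific than raw traffic")

If NETIO.SYS crashes under this too, that would point at whatever filter driver is inspecting loopback traffic rather than at the toolchain.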
I've been stuck on this issue for more than 2 weeks now. Any help would be really appreciated. Kindly let me know if more information is needed.
The issue was with Symantec DLP (Data Loss Prevention), which was also installed on all our systems. The issue resolved itself after the admins added application exceptions for Node.js, NPM, and my ReactJS project workspace paths.
Just posting this so that anyone with a similar issue can try this, or remove Symantec DLP altogether.

Page load issue on local IIS server

I'm currently developing a web application running locally on IIS 10 with ColdFusion 9.
I have a problem right now that I think is caused by SSL. Since it's a back office, it has to be HTTPS, so I used our company certificate, installed it locally on my computer, and linked it to the website I'm developing. The problem is that whenever I use the HTTPS connection, every page is loaded twice (it isn't visible, but, for instance, when I submit a form, the data is inserted twice into the database).
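One way to tell whether the duplication happens in the browser or on the server side is to replay a single POST from outside any browser and count the inserted rows. A rough sketch (Python; the URL and form field are placeholders for the real form, and certificate verification is disabled on the assumption of a locally installed dev certificate):

    # Send exactly one HTTPS POST, then check whether one or two rows appear.
    import ssl
    import urllib.parse
    import urllib.request

    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # local dev certificate, assumed untrusted

    data = urllib.parse.urlencode({"field": "value"}).encode()  # placeholder form data
    req = urllib.request.Request("https://localhost/form.cfm", data=data)  # placeholder URL
    with urllib.request.urlopen(req, context=ctx, timeout=30) as resp:
        print("status:", resp.status)

If a single scripted POST still produces two rows, the duplication is happening inside IIS/ColdFusion rather than in the browser.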
I managed, with luck, to work around this by changing the SSL "client certificates" setting from Ignore to Accept, but when I do that, from time to time (about 1 in 3) the page I want to load takes forever (around 30s) and, as far as I can see, uses 100% of the CPU.
I don't think it comes from my code, because when I browse over HTTP I have none of the problems listed above.
Does anyone have an idea why this is happening and how to solve it?
Thanks in advance ! If you need any further information, ask and I'll try to give it to you !
I've now installed ColdFusion 11, and with that the issue no longer happens. So I'm pretty sure it was a compatibility issue.

IIS Not Loading Some Pages

Website in question:
http://redbirdled.com/ (very apparent)
So RedBirdLED mainly loads fine, but it often hangs if I leave it alone for a while and then try to load a new page. The only way to get the page to load is to request it again, and it loads immediately after that. This is not a client issue, as many people are reporting the problem. I've already tried a few things with IIS, such as the idle timeout, but it didn't help.
Overall, what I'm seeing is that if I try to load something, it waits and simply times out. But if I press the button again immediately, it is very likely to load instantly without issue. What is going on, and is there any way I can trace the issue?
I'm going to assume it has little to do with idling, unless it's possible to idle within 10 seconds... This issue is random at times. I don't understand what is happening.
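To get hard numbers instead of impressions, a simple probe that requests the page on a schedule and logs each latency would show whether the hangs correlate with quiet periods. A minimal sketch (Python; the URL is the site from the question, and the 30-second interval is arbitrary):

    # Request the page periodically and log how long each attempt takes.
    # A slow first hit after a quiet gap, followed by fast hits, points at
    # process/app-pool startup; randomly scattered slow hits point elsewhere.
    import time
    import urllib.request

    URL = "http://redbirdled.com/"

    while True:
        start = time.time()
        try:
            with urllib.request.urlopen(URL, timeout=20) as resp:
                print(f"{time.strftime('%H:%M:%S')}  HTTP {resp.status}  {time.time() - start:.1f}s")
        except Exception as exc:
            print(f"{time.strftime('%H:%M:%S')}  FAILED after {time.time() - start:.1f}s: {exc}")
        time.sleep(30)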
Visit http://sole-revival.com/ and let me know if there are any issues. For me personally, the website runs PERFECTLY and without issue. That's the weirdest part. Some pools hang and some don't? I had never modified these pools from their default settings until tonight, as I tried to resolve the issue.
I'm using IIS 8.0 on Win Server 2012.

XPages build process and replication

I'm wondering if someone can enlighten me a little on the XPages build process and how it works with other replica copies of a database. Much of the advice I've seen posted about working with Domino Designer indicates (logically) that you'll get a much faster response working on local copies and then replicating those to the server.
I'll usually save my changes locally, build manually, and replicate to the server, and most of the time that seems to work fine. However, on some occasions I've found that when I view my work in the browser on the server copy, it hasn't updated... in fact, in a couple of scary incidents, it displayed a version from several weeks ago (where is it even getting that from??). This isn't a browser caching issue, and I've opened the design elements (XPages, custom controls) on the server copy and verified that the changes ARE there. I end up having to perform a Clean on the server copy of the application (not just a build), and then it displays as expected.
This seems like a foolish question, but you shouldn't have to perform a build on each replica copy, correct? Any thoughts as to what might be the issue here? There is another developer involved who works directly on the server, since he's in the same location, but we are rarely working at the same time, and never on the same elements. We are not using source control at this time.
We have seen similar behavior ourselves.
In our case, we do development on a server, clean and build the project, and then copy that database as a template to a deployment server. From there, we update the design in the production database.
We have noticed that the build process sometimes fails, especially when working over slower links. So we always repeat the clean/build/refresh process a couple of times, and we try to do it while in the office, with a fast connection between the workstations and the server.
We haven't experienced build problems lately, so repeating the build process obviously helps.
We have also seen that replicating a design between local and server copies sometimes causes build-related problems, which could explain the problems you are seeing. We have stopped using replication because of that and now always work on the server copy directly.
I don't think your not using source control software has anything to do with it.
I usually make all my changes in a local template, then perform "Project \ Clean", then update the design in the server database. This works in 99% of cases. If not, I perform "Project \ Clean" once again. I hate this, but it looks like it's the only way to get consistent code in production.
