I'm building a Chrome extension for the Steam community. The extension has some functions to get the user's games list and scrape each game's product page to gather some information about it.
Every request the extension makes to the API is almost instant, but when I try to load an app page (for example http://store.steampowered.com/app/440/ ), the response is delayed by up to 40s.
In the end, the page loads and the ajax request succeeds, but the waiting times are just too high.
I have noticed that this behaviour is not consistent: on other computers with the exact same extension code and internet connection the response is instant, and on the same computer with other browsers the delay is not there either.
Why could this happen? I'm not starting multiple ajax calls, and the extension waits for each call to finish before making the next one.
Please see the attached image.
Thanks
Blazor Server apps use the SignalR circuit.
I can somewhat understand that there is JS handling change events from the DOM, so instead of sending a new HTTP GET request the framework manipulates the DOM and displays the new Blazor page.
But how is it even possible that the circuit is still active and working after pressing the back button? This is a BROWSER FEATURE, not some HTML element that can be changed, right? Wouldn't it be a security issue if the browser back button behaviour could be manipulated in different ways?
Not firing a new HTTP GET request on page back seems pretty hacky. Wouldn't that allow malicious websites to do the same? Can websites access the last page visited that way?
How does the browser "know" that the last page should also use the same WebSocket circuit?
Is it then possible to tell the browser that it should establish a WebSocket on a past page that didn't even have one before (which would seem like a security risk)?
How does the back button differ from hitting "enter" in the address bar (which will always cut the circuit and establish a new one)?
Is the back button exactly the same as calling JS history.back()?
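For what it's worth, here is a minimal sketch of the general single-page-app pattern I assume is at work (an illustration of the History API, not Blazor's actual implementation): client-side code intercepts link clicks and the back button, so the document, its JS, and any open WebSocket stay alive across "navigations".

// Minimal SPA routing sketch (assumed pattern, not Blazor's internals).
// Intercept link clicks: update the address bar with pushState instead of
// letting the browser perform a real HTTP GET navigation.
document.addEventListener('click', function (e) {
    var link = e.target.closest('a[data-spa-link]'); // hypothetical marker attribute
    if (!link) return;
    e.preventDefault();
    history.pushState({}, '', link.href);
    renderRoute(location.pathname);
});

// The back button (and history.back()) now fires 'popstate' on the same,
// still-loaded document, so the page's JS and any open WebSocket keep running.
window.addEventListener('popstate', function () {
    renderRoute(location.pathname);
});

function renderRoute(path) {
    // Hypothetical placeholder: swap the visible content in place.
    document.getElementById('app').textContent = 'Rendered ' + path;
}

This only works for history entries the page created itself on the same origin; typing a URL into the address bar and hitting enter forces a full navigation, which tears down the document (and with it anything like a circuit it held open), which would explain why that case always ends up with a new connection.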
Which is better: making one request that fetches all articles for every page, or making a request for each page as needed? Right now I am using the second approach (a request per page as needed).
P.S. After the request, all data will be written to Redux.
It's usually better to paginate your results, otherwise you load a significant amount of data for nothing, which can be slow if the user has limited bandwidth. Very large quantities of data loaded in a web browser can also slow down the browser itself in some cases.
If your calls to fetch the results of one page take too long when browsing through multiple pages, you could load two pages at once and have your UI display the second page immediately when the user clicks 'next', while contacting the backend to get the third page (see the sketch below). That way you keep a responsive UI while only loading what's necessary.
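A rough sketch of that lookahead approach (the /api/articles endpoint and the Redux action shape here are made up for illustration):

// Fetch one page of articles; the endpoint and parameters are hypothetical.
async function fetchPage(page, pageSize) {
    const res = await fetch('/api/articles?page=' + page + '&size=' + pageSize);
    if (!res.ok) throw new Error('Request failed: ' + res.status);
    return res.json();
}

// Load the requested page, write it to Redux, then quietly prefetch the next
// page so clicking 'next' can be served from the store without waiting.
async function loadPage(store, page, pageSize) {
    const articles = await fetchPage(page, pageSize);
    store.dispatch({ type: 'articles/pageLoaded', payload: { page: page, articles: articles } });

    fetchPage(page + 1, pageSize)
        .then(function (next) {
            store.dispatch({ type: 'articles/pageLoaded', payload: { page: page + 1, articles: next } });
        })
        .catch(function () { /* a failed prefetch is not fatal; it can be retried on demand */ });
}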
I'm a web developer, and I just want to know how things work behind the scenes when a request is fired.
Let's assume I have a static website. I request the About Us page in one tab and the Contact Us page in another tab, and both requests are fired at the same time.
When the requests are fired at the same time, how does the browser display the content in the respective tabs correctly?
Thanks in advance.
I think you are looking for the process ID.
In the browser, each tab has a different process ID (you can see this in Task Manager).
This separates the sending and receiving of data for each tab...
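To add to that: even within a single tab, concurrent requests cannot get mixed up, because each call returns its own promise tied to its own response. A small illustration:

// Two requests fired at the same time from the same tab: each promise
// resolves with the response that belongs to its own request.
const aboutUs = fetch('/about-us');
const contactUs = fetch('/contact-us');

Promise.all([aboutUs, contactUs]).then(function (responses) {
    console.log(responses[0].url); // ends with /about-us
    console.log(responses[1].url); // ends with /contact-us
});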
We are facing an issue with an Ajax TruClient script. When recording and replaying the script, one transaction took more than 60 seconds to load the page. The same behaviour was observed after executing the scenario in the Controller as well. But if we perform the same transaction manually, it takes only 8 seconds. There is a huge gap between the expected response times. Can anyone suggest a fix?
This happens because of external resource download attempts by the script, which are not visible to you when you browse the page manually.
For example, if the page requests data from Google Analytics or Facebook and cannot access these sites (due to company restrictions, a firewall, etc.), the response time will jump to 60 seconds (a timeout), but when you browse manually you will not experience the timeout, since the browser behaves differently.
To resolve this issue, you should first find out which sites the script is attempting to download data from. You can do this using a browser's developer tools (such as the F12 tools in Google Chrome) and looking at the "Network" tab. Once you open this tab and browse to the web page, you should see the external HTTP requests. Make a list of these sites.
Once you know which external sites the page goes to, you can use the Utils.removeAutoFilter JS command in your TruClient script:
From the TruClient Toolbox, choose "Misc" > "Evaluate JavaScript Code" and add it as the first line of your script.
Then set the JS code in this action to:
Utils.removeAutoFilter(url, isIncluded);
For example, to prevent the script from downloading data from Facebook:
Utils.removeAutoFilter('http://facebook.com', true);
Utils.removeAutoFilter('https://facebook.com', true);
Utils.removeAutoFilter('http://www.facebook.com', true);
Utils.removeAutoFilter('https://www.facebook.com', true);
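As a quick alternative to scanning the Network tab by hand, a short snippet in the browser console (using the standard Resource Timing API, nothing TruClient-specific) can list the resources that are stalling:

// Paste into the DevTools console (F12) on the slow page: prints every
// resource that took longer than 5 seconds, so the offending external
// hosts can be added to the removeAutoFilter calls above.
performance.getEntriesByType('resource')
    .filter(function (entry) { return entry.duration > 5000; })
    .forEach(function (entry) { console.log(Math.round(entry.duration) + ' ms  ' + entry.name); });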
I have an event which needs to contact some third-party providers before performing a redirect (think 'final payment page on an e-commerce site') and hence has some lag associated with its processing. It is very important that these third-party providers are not contacted more than once, and sometimes impatient users may try to refresh the page (hence re-submitting the data). The general code structure is:
If Session("orderStatus") <> 'processing' Then
Session("orderStatus") = 'processing'
DoThirdPartyStuffThatTakesSomeTime()
Response.Redirect("confirmationPage.asp", True)
End If
The problem is that if the user refreshes the page, the Response.Redirect does not happen (even though the rest of the code will run before the redirect from the original submission). It seems that the new submission creates a new thread for the browser which takes precedence: it skips this bit of code (obviously, to prevent the third-party providers being contacted a second time), and since there is no redirect, it just comes back to the same page. The whole second submission may have completed before the first submission has finished its job.
Any help on how I can still ignore all of the subsequent submissions of the page, but still make the redirect work?
Thanks
Move your redirect out of the If structure:
If Session("orderStatus") <> 'processing' Then
Session("orderStatus") = 'processing'
DoThirdPartyStuffThatTakesSomeTime()
End If
Response.Redirect("confirmationPage.asp", True)
After a bit more research and investigation (including a lot of tracing, and tracking what was firing and when using session variables), I found that ASP.NET automatically serialises requests for a given session, so each one executes in turn. However (and this is what confused me), the 'Response' delivered to the browser is the result of the last action performed (assuming that action was initiated before the browser received a response to the previous one). So, in my case above, all the third-party code initiated by the first request finishes executing before the second request even starts. When all the requests have finished processing, ASP.NET delivers the HTML from the last request back to IIS (which happened to be just a refresh of the page).
So, Oded's first suggestion above about moving the redirect out was correct, and I've marked it as the answer.