I am not sure whether this is usual behavior or not, but [WKWebView goBack] navigates to the previous page from the back-forward list. In my case, however, on the very first back action it does not call the didFinishNavigation delegate method.
So, if this is usual behavior, how do I detect that page loading has completed after the back action?
Are you seeing this on a specific iOS version? I'm seeing this too: an error is thrown and the page just stops loading, hence didFinish... not being called.
I've only started seeing this since 13.4 (or 13.4.1).
--- EDIT ---
I do have a workaround.
This appears to happen more often on slower networks and when the user interacts before a page has fully loaded. Normally, if a user taps back before the page they visited finished loading, a "Cancelled" error is thrown.
What seems to be happening now is that if the page they are returning to never finished loading at the point they left it, the error is thrown again, but this time referencing the page they are returning to. More importantly, the web view is no longer loading at this point.
What we have done is monitor this and check whether the page is loading; if it isn't, we reload it. This works, and while not ideal, it's buying us some time to investigate a proper fix.
In addition, I put a retry limit on this, and we show an error with a reload button after the third attempt, though in practice we've never seen it go beyond the first reload attempt.
The fact that the error is thrown almost instantaneously means the reload adds no perceivable delay for the user.
I even see this in the simulator. Navigate to google.com, click Privacy, then goBack: the page loads normally, but navigation reports a failure with this error:
Error Domain=NSURLErrorDomain Code=-999 "(null)" UserInfo={NSErrorFailingURLStringKey=https://www.google.com/, NSErrorFailingURLKey=https://www.google.com/, _WKRecoveryAttempterErrorKey=<WKReloadFrameErrorRecoveryAttempter: 0x600003904e40>}
What you need to do is capture this with the didFailNavigation: delegate method and handle it gracefully. Code -999 in NSURLErrorDomain is NSURLErrorCancelled, so if you see a consistent error like the one above you can even treat it exactly the same as success.
Uncaught (in promise) Error: Found malformed component comment at Blazor:
{"sequence":0,"type":"server","prerenderId":"aa848eac6d3143c2b938c451316a207c"
Alright, so I keep running into this error. It's very inconsistent and not always reproducible. I'm on .NET 6, using Blazorise, and every once in a while on a page load I get this error. The result is that the page loads, but without the sidebar, leaving me no way to navigate. So far my notes on the error are as follows:
It happens in all browsers.
It happens more often when my screen size is smaller, or the window is not full screen.
It only seems to happen on screens that have an EditForm (Blazorise component).
It only seems to happen on screens that have multiple columns.
If I refresh the page between 2 and 370 times, eventually, the sidebar will load as usual.
Research into this issue has shown that it sometimes happens when there's a double body tag or something similar, but I have not seen anything of the sort on any of my pages so far when examining the content in the browser. Does anyone have any tips or tricks for this?
Any input appreciated, thank you.
I have an integration test that runs through a "confirm details" process, and then goes back to the main page to verify that the details have indeed been processed.
However, when I visit the main page near the end of the test, it doesn't go there...
within '#confirm_details' do
  page.find_button("Continue").trigger('click')
end
find("#email_confirmation_modal a.email_confirmation_yes").trigger('click')
expect(page).to have_css('span.balance') # Probably a false positive, but I've also
# checked for elements that should only be there after the click
visit '/myaccount'
expect(page).to have_current_path '/myaccount'
... result:
Failure/Error: expect(page).to have_current_path('/myaccount')
expected "/sign_up/confirm_details" to equal "/myaccount"
... but sometimes it's fine. Anyone have an idea why?
(I'll add other details here as I think of them.)
The my_account action in the controller doesn't always get fired (this should surely be the app's entry point, right?)
When it does work, the page loads in well under a second, so it's not running into Capybara's 2-second wait limit. In fact, I've tried looping visit '/myaccount' until page.current_path == '/myaccount', and it hung.
Page load is not guaranteed to have finished when visit returns (drivers do their best on that, but it's not always possible). For that reason you should always use a Capybara-provided matcher when one exists for what you're trying to test, since they have waiting/retrying behavior built in. In your case you should never use the RSpec-provided eq matcher with current_path; instead, use the Capybara-provided have_current_path matcher.
visit '/myaccount'
expect(page).to have_current_path('/myaccount')
If that doesn't fix your issue, then it's likely to be something in the code that fills in the "confirm details", so you'll probably want to add the test's full code to the question.
I have a strange thing occurring; as usual, I can't post code, unfortunately, so I'm describing the problem in case anyone can suggest a possible cause.
I have an xpage with a custom control included on it; the custom control handles document locking and changing to edit/read-only modes via links. The document locking is done by setting an applicationScope variable based on the UNID. To make it more friendly for other users on the system, I run a function periodically on the page to check whether the document is locked or not and update a link/label/tooltips appropriately (e.g. if locked by another user, then the "Edit" button is disabled; when the lock is released, it's re-enabled). This is done by calling an "xagent" through a standard, simple dojo-based ajax call.
For some reason, the behavior of the system gets erratic after 45 seconds to a minute. I'm checking the lock status every ten seconds or so, so it's not happening with the first call. I'm displaying a list of records associated with the document; each record is a row in a repeat. When I first go into edit mode, the controls are all displayed as they should be, i.e. editable. If the user changes a particular value with a combobox, it updates the whole row with a partial refresh. When things get erratic, I noticed that the row starts refreshing in read-only mode, which suggests to me that the document is changing edit mode. The only time I knowingly change edit mode is if a "Cancel" or "Save" button is pressed. (The locking mechanism itself doesn't have anything to do with the edit mode.)
It certainly seems like the ajax call I'm making is at the root of this. But I've stripped the xagent and the client-side code down to practically nothing, and it's still happening. I can't see what would be causing this behavior. Can anyone hazard a guess? Thanks....
Maybe check if the server log file has warnings like:
WARNING CLFAD####W: State data not available for /page because no control tree was found in the cache.
If you're seeing those warnings, it could be that the server can no longer find the current XPage page instance in the cache. In that case the page will revert to the initial state, like when the page was first opened. That might be why the document goes to read-only mode.
The session cache of server-side page instances only holds 4 pages when xsp.persistence.mode=basic, or it holds 16 instances when xsp.persistence.mode=file or fileex.
If you load 4 xagent page instances, then that will fill the cache, and it will no longer be able to find the page instance for the current XPage you are viewing. So the XPage will stop performing server-side actions, and partial refresh will always show the initial state of that area of the page.
To avoid that problem, in the xagent page you can set viewState="nostate" on the xp:view tag, so that page instances are not saved for the xagent page, as described here:
https://tobysamples.wordpress.com/2014/12/11/no-state-no-problem/
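For example, the xagent's XPage source would then look something like this minimal sketch (the response-writing part is the usual xagent pattern, shown here only as a placeholder comment):

<xp:view xmlns:xp="http://www.ibm.com/xsp/core" viewState="nostate">
  <!-- write the response in beforeRenderResponse and call
       facesContext.responseComplete(), as in a normal xagent -->
</xp:view>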
Or else you can create and reuse one page instance for the xagent, so only one is created. That is, on the first call to the xagent, have it return the $$viewid value for the xagent page instance (#{javascript:view.getUniqueViewId()}), and then pass that $$viewid in subsequent requests to restore the existing xagent page instance instead of creating new instances that fill the cache. Subsequent xagent requests will then look like this:
/myApp.nsf/xagent1.xsp?$$viewid=!aaaaaaaa!
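A rough client-side sketch of that reuse pattern, assuming the xagent is written to echo its own view.getUniqueViewId() in a JSON response (the URL, the viewid/locked field names, and the 10-second interval are illustrative, matching the polling described in the question):

var xagentViewId = null; // captured from the first response, reused afterwards

function checkLock() {
  var url = '/myApp.nsf/xagent1.xsp' +
      (xagentViewId ? '?$$viewid=' + encodeURIComponent(xagentViewId) : '');
  dojo.xhrGet({
    url: url,
    handleAs: 'json',
    load: function(data) {
      if (!xagentViewId) { xagentViewId = data.viewid; } // reuse this page instance from now on
      // ...update the Edit link/label/tooltip from data.locked here...
    }
  });
}
setInterval(checkLock, 10000);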
It's hard to troubleshoot without code, but here are a few thoughts:
How are you checking document locking - via a client-side JavaScript AJAX call or an XPages partial refresh? If the latter, what is the refresh area? If the former, what refresh area are you passing, and what HTML is returned?
Does it always occur when you're in edit mode on a row and the check happens, or independently of that? The key thing to check is what the lock check is doing: is it checking the server and returning a message outside the repeat, or returning HTML that overwrites what's currently in the browser with defaults, e.g. the document mode as read mode?
What network activity is happening between the browser and the server, and when? Is something else overwriting the HTML for the row, and so resetting the row to read mode?
It's unlikely to be random; the key is going to be identifying the reproducible steps that pin down a common scenario (or scenarios) and cause.
EDIT
Following on from your additional info: is there a rendered property on the Edit link? If that evaluates to false in the earlier JSF lifecycle phases, the eventHandler is not available to be triggered during the Invoke Application phase. Because the eventHandler also includes the refreshId, there is no refreshId and refreshMode, so it defaults to a full refresh with no SSJS running. See this blog post for clarification: http://www.intec.co.uk/view-isrenderingphase-and-buttons/.
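In other words, a link built like this sketch is vulnerable (the control names and the isDocLocked() helper are illustrative, not from your code): if the rendered formula flips to false between the page being served and the click being processed, the eventHandler cannot be matched, and the event degrades to a full refresh with no SSJS run:

<xp:link id="editLink" text="Edit" rendered="#{javascript:!isDocLocked()}">
  <xp:eventHandler event="onclick" submit="true"
      refreshMode="partial" refreshId="rowPanel"
      action="#{javascript:document1.setEditable(true)}" />
</xp:link>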
How do I know when a page fails to load?
When there's a server error and I get the gray window that says "Google Chrome failed to load this page..." or "This page is not available"?
I want to add an option to my extension to auto-refresh after 10 seconds. I work with a web app whose server sometimes fails to respond for some reason, and a refresh just brings it back to life.
So I'm looking for auto-refresh in this case.
Thanks in advance!
You may want to take a look at the chrome.webRequest.onErrorOccurred listener. Per the documentation, any error triggered before the completion of a request will fire onErrorOccurred, and the details object contains information about the error. In your case it sounds like any triggered error should cause a refresh, so perhaps something like this (completely untested, more theoretical :), and note that the webRequest API requires the "webRequest" permission plus host permissions in your manifest):
chrome.webRequest.onErrorOccurred.addListener(function(details) {
  chrome.tabs.reload(details.tabId); // reload the tab where the error occurred
}, { urls: ['<all_urls>'] }); // webRequest events require a URL filter
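To get the auto-refresh-after-10-seconds behavior you describe, you could extend that idea with a delay and a retry cap. A rough, equally untested sketch (the cap of 3 is arbitrary):

// Reload a tab 10 seconds after its main page fails, at most 3 times per tab.
var retries = {}; // tabId -> attempts so far

chrome.webRequest.onErrorOccurred.addListener(function(details) {
  if (details.type !== 'main_frame' || details.tabId < 0) { return; } // top-level loads only
  var n = retries[details.tabId] || 0;
  if (n >= 3) { return; } // give up and leave the error page visible
  retries[details.tabId] = n + 1;
  setTimeout(function() { chrome.tabs.reload(details.tabId); }, 10000);
}, { urls: ['<all_urls>'] });

// Reset the counter once a page load completes.
chrome.webRequest.onCompleted.addListener(function(details) {
  if (details.type === 'main_frame') { delete retries[details.tabId]; }
}, { urls: ['<all_urls>'] });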
I hope someone can help me solve a very serious problem we face at the moment with a business-critical application losing data while users work in it.
This happens randomly - I have never reproduced it, but the users are in the system a lot more than I am.
A document is created with a load of fields on it, including two rich text fields. We're using Domino 8.5.3 - there are no Extension Library controls in use. The document has workflow built in, and all validation is done by an SSJS function called from the data source's querySave event. There is an insane amount of logging to sessionScope.log, and this is (now) also captured for each user in a Notes document so I can review what they are doing.
Sometimes a user gets to a workflow step where they have to fill in a rich text field and make a choice in a dropdown field, then submit the document with a workflow button. When the workflow button is pressed (it does a Full Update), some client-side JS runs first:
// Process any autogenerated submit listeners
if( XSP._processListeners ){ // Not sure if this is valid in all versions of XPages
  XSP._processListeners( XSP.querySubmitListeners, document.forms[0].id );
}
(I added this to try to prevent the RTF fields losing their values, after reading a blog post about it, but so far it's not working.)
Then the server-side event runs and calls view.save() to trigger the QuerySave code (for validation) and the PostSave code that runs the workflow agent on the server.
95% of the time, this works fine.
5% of the time, however, the page refreshes: the changes made to both the RTF field (CKEditor) and the dropdown field are thrown away, and the fields are reloaded as they were previously, with no content. It's like the save hasn't happened, and the Full Update button has decided to work like a page refresh instead of a submit.
Under normal circumstances the log shows that when a workflow button is pressed, the QuerySave code starts and returns true. Then the ID of the workflow button pressed is logged (so I can see which ones are being used when I am reviewing problems), then the PostSave code starts and finally returns true.
When there is a problem, the QuerySave event runs, returns true if the validation has passed (or false if it has failed), and then everything stops. The ID of the workflow button is also logged. But if QuerySave returns true, the code should continue by calling the PostSave function - it doesn't even log that the PostSave function is starting.
And to make matters worse, after the failure to call the PostSave code, the next thing logged is the beforePageLoad event running; this apparently reloads the page, which hasn't got the recent edits on it, so the user loses all the information they have typed!
This has to be the most annoying problem I've ever encountered with XPages, as I can find no reason why a successful QuerySave (or even a failure because mandatory fields weren't filled in) would cause the page to refresh like this and lose the content. Please can someone point me in the right direction?
It sounds as if, in the 5% of cases, the document has been open for more than 30 minutes and the XSP session is timing out - the submit causes the component tree to be re-created, and the now-empty page is returned to the user. Try increasing the timeout for the application to see if the issue goes away.
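If that turns out to be the cause, both timeouts can be raised in the application's xsp.properties. A sketch, assuming the standard XPages property names and values in minutes (120 is just an example; size it to how long users realistically keep documents open):

xsp.session.timeout=120
xsp.application.timeout=120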
I would design the flow slightly differently. In JSF/XPages, validation belongs in validators, not in a QuerySave event. Also, I'd rather use a submit for the buttons, so you don't need to trigger view.save() in code. This does not interfere with JSF's sequence of things - but that's style, not necessarily the source of your problem. An idea about that:
Like Jeremy, my first stop would be to suspect a timeout; the next stop is a fatal issue in your QuerySave event that derails the runtime (for whatever reason). You can try something like this:
var qsResult = false;
// your code goes here - no return statements please;
// and if you are happy:
qsResult = true;
return qsResult;
The pessimistic approach will eventually tell you if something is wrong. Also: if there is an abort and your QuerySave just returns, then you can run into this trap:
function noReturn() { return; } // nothing comes back!
noReturn() == true;  // --> false
noReturn() == false; // --> false
noReturn() != false; // --> true!!!
What you need to check: what is your page persistence setting - serialize pages to disk, keep pages in memory, or keep only the latest page in memory? It could be that you are running afoul of the way SSJS libraries work.
An SSJS library is loaded whenever it is needed, and the variables inside it are initialized. A library is unloaded when memory conditions require it, and all related variables are discarded. So if you rely on a variable inside an SSJS library keeping its value between calls to a function, you might or might not get the value back, which could describe your error condition. Stuff you want to keep should go into a scope (viewScope seems right here).
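A small sketch of the difference (names are illustrative):

// inside an SSJS script library
var counter = 0; // re-initialized whenever the library is (re)loaded

function nextUnsafe() {
  return ++counter; // may silently restart from 1 if the library was unloaded
}

function nextSafe() {
  // viewScope lives with the page instance, so it survives a library unload
  viewScope.counter = (viewScope.counter || 0) + 1;
  return viewScope.counter;
}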
To make it a little trickier: when you use closures and first-class functions, those functions have access to the variables from the parent function - unless the library has been unloaded. Also, functions (you could park them in a scope too) don't serialize (an open flaw), so you need to be careful when putting them into a scope.
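For example (illustrative):

var prefix = 'wf_'; // library-level variable
function makeKeyFn() {
  return function(unid) { return prefix + unid; }; // closes over 'prefix'
}
// If the library has been unloaded in the meantime, 'prefix' may be gone when
// the returned function runs; and a function stored in viewScope will break
// serialization when pages are persisted to disk.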
If your stuff is really complex you might be better off with a backing bean.
Did that help?
To create a managed bean (or more than one), check Per's article. Your validator would sit in an application-scoped bean:
<faces-config>
  <managed-bean>
    <managed-bean-name>workflowvalidator</managed-bean-name>
    <managed-bean-class>com.company.WfValidator</managed-bean-class>
    <managed-bean-scope>application</managed-bean-scope>
  </managed-bean>
</faces-config>
Inside, you would use a Map for the error messages:
public Map<String, String> getErrorMessages() {
    if (this.errorStrings == null) { // errorStrings implements the Map interface
        this.loadErrorDefinitions(); // private method, loads from Domino
    }
    return this.errorStrings;
}
Then you can use EL in the error message string of your validators:
workflowvalidator.errorMessages["some-id"]
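For instance, attached to a required field (a sketch; the control name and message key are illustrative):

<xp:inputText id="wfComment" value="#{document1.wfComment}">
  <xp:this.validators>
    <xp:validateRequired message="#{workflowvalidator.errorMessages['comment-required']}" />
  </xp:this.validators>
</xp:inputText>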
This allows XPages to pick the right message directly in EL, which is faster than SSJS. You could then go on to implement your own custom Java validator that talks to that bean (this would allow you to bypass SSJS here). Other than in the example, I wouldn't put the Notes code in the validator itself; let it talk to your WfValidator class. To do that you need to get a handle on it in Java:
private WfValidator getValidatorBean() {
    FacesContext fc = FacesContext.getCurrentInstance();
    return (WfValidator) fc.getApplication()
        .getVariableResolver()
        .resolveVariable(fc, "workflowvalidator");
}
Using the resolver you get access to the loaded bean. Hope that helps!
My experience is that this problem is due to keeping pages in memory. Sometimes, for some reason, the page gets wiped out of memory. I'm seeing this when there are a lot of partial refreshes combined with rather complex backend Java processing; the processing somehow seems to take the memory that was used by the XPage.
The problem might have been fixed in later releases, but I'm seeing it at least in 8.5.2.
In your case I would figure out some other workaround for the CKEditor bug and use the "Keep pages on disk" option. Or, if you can upgrade to 9.0.1, it might fix both problems.