Back from my earlier post here:
Glimpse making everything 50x slower
It was fixed, but now it is broken again.
It seems that Glimpse is once again trying to hash every bit of data in the table relationships, even though our code doesn't do this; Glimpse forces it.
Because of this, Glimpse isn't responding and we get the error:
No data has been found. This could be caused because:
- the data is still loading by the client, or
- no data has been received from the server (check to see if the data & metadata payloads are present), or
- no plugin has been loaded, or
- an error has been thrown in the client (please check your JavaScript console and let us know if anything is up).
This is what it looks like in Chrome:
http://puu.sh/cF3xX/0f879fac9c.png
This is what ends up happening when debugging locally:
http://puu.sh/cF2c6/e10ca1b6e0.png
If you are using Entity Framework, make sure you are not passing any Entity Framework data model (entity) directly to your Views. This can cause the issue you described, because lazy loading will attempt to map all the connections (relationships).
I'm using TipTap to build a document editor. On the server side (node), I've got a postgres db storing tiptap's output as json. My issue is that I'm expecting the content of the editor to get big, and so posting the entire content to the server on each save isn't going to work.
What I'd like to do is take the transactions from tiptap and send them to the server, load the persisted version, apply the transactions, and then persist the result. Does anyone know how I would go about doing this?
It seems that TipTap depends on there being a DOM, so I'm not sure it's possible to load it in Node. And even if it were, I'm unclear on how I could apply those transactions, although ProseMirror does have an apply method and the prosemirror-transform package, which seem promising.
It seems like people would have run into this issue before. Any thoughts?
Thanks!
I'm getting this weird error: "The object you are trying to update was changed by another user. Please try your change again." I would like to know the reason for this, as there is no context: no logs about it, no exception stack trace, and no information about this error in the documentation. I believe it is something to do with Bundles, but I want to know the exact reason.
GW throws several related exceptions, for example ConcurrentDataChangeException or DBVersionConflictException, depending on the entity type. It occurs when a bean is modified concurrently by two or more transactions (bundles).
This error usually happens when two sets of transaction changes try to commit against the same data while the prior bundle is still not committed.
Let us understand with an example: one user performs a policy change to add a contact or carry out some other business operation. At the same time, another user opens the same transaction in his GW PC UI and tries to perform some business operation; at this point the GW system throws this error in the UI, because the previous bundle is still not committed.
The error trace leads you to some OOTB Java classes, and I think you can get it from the PC server logs (PCLogs) via the PC UI.
Hope this clarifies things.
Actually, this happens because your object was updated by someone else in the DB between your read of the object and your attempt to write it back.
GW does this by leveraging a version check on your database object.
The exception message actually tells you who performed the conflicting update and when. There are no stack traces that will point you to the cause of the other update.
There can be several root causes, from a distributed cache going out of sync in a clustered environment to some other party genuinely doing work on the same entities as you. So the fix really is case by case.
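Not Guidewire code, but the optimistic-locking pattern behind these exceptions can be sketched generically: every row carries a version number, and a write only succeeds if the version the writer originally read is still the version stored (cf. UPDATE ... WHERE id = ? AND version = ?). All names below are illustrative, not Guidewire APIs:

```typescript
// Illustrative optimistic-locking sketch; a conflict here corresponds to the
// "changed by another user" / DBVersionConflictException situation in GW.
interface Row { id: number; value: string; version: number }

class Table {
  private rows = new Map<number, Row>();

  insert(row: Row): void { this.rows.set(row.id, { ...row }); }

  // Readers get a snapshot that remembers the version they saw.
  read(id: number): Row { return { ...this.rows.get(id)! }; }

  // Returns false (a version conflict) when the stored version no longer
  // matches the version the caller read; otherwise commits and bumps it.
  update(snapshot: Row, newValue: string): boolean {
    const current = this.rows.get(snapshot.id)!;
    if (current.version !== snapshot.version) return false; // conflict
    this.rows.set(snapshot.id, {
      ...current,
      value: newValue,
      version: current.version + 1,
    });
    return true;
  }
}
```

With two concurrent readers, the first to commit wins and the second gets the conflict, which is exactly the UI scenario described above.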
I am working on a website on my localhost, and suddenly I'm getting these errors.
I get this error in Firefox:
<script> source URI is not allowed in this document
And nothing in Chrome, but if I try using the file's code, I get:
Application Error: There was a problem getting data for the application you requested. The application may not be valid, or there may be a temporary glitch. Please try again later.
It is basically for: https://connect.facebook.net/en_US/sdk.js.
The browser doesn't even send a GET request for the file.
Everything used to work perfectly before. I'm not sure why I'm getting this.
I had an extension installed on both of my browsers, and it was preventing the script from loading.
If you have any VPN or tracking-blocker extensions, they need to be disabled.
In my case it was the Disconnect extension (Firefox/Chrome).
Apologies Paul, this is a duplicate of the post I put on OpenNTF; however, the site has not allowed me to log in for the last 2 days to follow up, and the wider audience on Stack might find me someone with an identical issue.
To keep it short.
I have 1 openLog database in a folder structure, logs/xpageslog.nsf
During development, I could log to this database, for example, using Paul Withers' XPages OpenLog Logger, to log uncaught exceptions with the following settings:
private String logDbName = "logs\\xpageslog.nsf"; // in OpenLogItem.java from OpenLogClass library
logDbName = "logs/xpages.nsf" // in OpenLogFunctions.ls
xsp.openlog.filepath=log/xpageslog.nsf // in xsp.properties
However, if I then change all of the above to simply point to xpageslog.nsf in the root of the server (this is a 2nd OpenLog database), errors still get logged to the first database.
I've tried building, cleaning, and re-compiling, all to no avail. It seems that somewhere, somehow, the references to the original database are not being overwritten.
Any ideas?
It is good practice to use restart task http instead of tell http restart; the two commands have different effects.
As confirmed in the comments, this solved the problem.
Some use tell http quit followed by load http; the effect is the same as with restart task http. On the other hand, a plain tell http restart does not fully reinitialize the HTTP task; it is a kind of soft reset, and I recommend not using it.
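For reference, the recommended command at the Domino server console is simply:

```
restart task http
```

with "tell http quit" followed by "load http" as an equivalent alternative; both fully reload the HTTP task, unlike "tell http restart".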
I'm using Play 2.1.2 and I'm having some trouble with an Enumerator and I'm seeking ideas on how to debug this.
I'm trying to stream some S3 data through my server. I can get an InputStream from the Amazon SDK for my S3 file (getObject(bucket, key).getObjectContent()). I then turn that InputStream into an Enumerator[Array[Byte]] using Enumerator.fromStream.
All of this type-checks, and on my local development machine it all works perfectly. When I formulate my Result in Play, I just return Ok.stream(enum).
The problem comes when I deploy this to a production server. The very first time I request the file, it works just fine and I get the whole file. But subsequent times it frequently gets part way through (different amounts each time) and then gets "stuck". I wrapped the Enumerator as follows to be able to log whether the enumeration completed:
val wrapped = enum.onDoneEnumerating { println("Contents fully enumerated"); }
Ok.stream(wrapped);
As expected, on my development machine (and the first time through on the production machines), I get the message "Contents fully enumerated". But after that, the production machine will start the download of the file, but it doesn't finish (in both the HTTP sense and in the Enumerator sense).
I'm not sure how to debug this. Obviously, fromStream does some magic and I don't know how to figure out what is happening between chunks. I thought this might be a thread pool issue so I wrapped the whole response in a future { blocking { ... } } block, but it didn't appear to make any difference.
I'm trying to avoid the hassle of creating a local temporary file from S3 and then building my Enumerator from that. Using the fromStream to create an Enumerator seemed like the elegant way to do this...if it worked.
Suggestions?
OK, so I think I figured this out. It turns out that on the Play side, things appear to be working. I tried all kinds of variations (different ways of constructing the enumerator, creating temporary files, etc). It didn't really matter.
What did matter was the proxy I was using. I'm using node-http-proxy and if I make a request on the server behind the proxy, I get the correct response (directly from Play). If I make the request on the server outside the proxy, I get an incorrect (empty) response. So it looks like the proxy is "dropping" the response.
It appears the issue is that the response is chunked by the stream call and this is causing the problem with the proxy. If I reformulate my response to be:
SimpleResult( header = ResponseHeader(200), body = enum)
Then Play uses the Enumerator to construct a complete response (not streamed) and things work again. Of course, it is wasteful to have to form the complete response in this case, but it does work. Hopefully I can find a better solution in the long term, but this works for now.