Direct access to a server with XPiNC is very, very slow - xpages

I have developed an XPages application that works very well in a browser (Firefox); on every page the browser loads at most 150 KB of content (HTML, images, JS, CSS, etc.).
When I deployed the application to my remote users, who access the server directly in XPiNC mode, performance was very, very poor.
I sniffed the traffic with a tool and saw that about 10 MB of data is transferred for every GET (it seems to transfer the XML source and other code that is compiled on the fly).
The application is unusable inside the Notes client, and my customer is disappointed (using a local replica and replicating is not possible for this application).
I am on 8.5.3 FP2 (client and server) with the PRELOAD option set, and that made no difference.
Does anyone have a suggestion? Is this a bug?

It is true that remote applications (NSFs residing on a non-local server) are slower than local client replicas or remote apps run in a web browser. This is because many more network transactions are generated when running in this mode. There are, however, various things that can be done to remedy the problem.
First we need to identify the cause. You are seeing a 10MB transfer for each GET request, which is very large and will obviously hurt performance. Could one or more of the XPages in your application be using the computeWithForm feature? If an XPages document data source "computes" a Notes form (typically to execute pre-existing application logic), then the form must be copied across the network to be computed in the local client. All children of the form are hauled over as well - subforms, shared fields, etc. - and this can result in large network transactions like the ones you are seeing.
Often computeWithForm is used as a development convenience, and as long as the form is small the performance impact can be negligible. However, if the aggregate form is large, it may be worth replacing the computeWithForm usage with separate XPages SSJS application logic (see the sketch below).
Before going further we would need to verify that this is in fact the issue - there could be other causes. Typically it manifests only on pages that open or edit documents, so you could try turning computeWithForm off in a test environment and see whether it makes a difference.
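As an illustration of that replacement, here is a minimal SSJS sketch, assuming the form only computed a couple of items; the data source name (document1) and the item names are assumptions:

    // querySaveDocument event of the XPage, standing in for computeWithForm="onsave".
    // The form's computed items are re-implemented directly in SSJS, so the form
    // (and its subforms and shared fields) never has to travel to the client.
    var doc:NotesDocument = document1.getDocument(true);
    doc.replaceItemValue("LastEditor", session.getEffectiveUserName());
    doc.replaceItemValue("Status", "Submitted");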

XPiNC is a little special. When you open a server-based NSF, all the program code needs to be downloaded to the client so it can be executed in the server container of the Notes client. The reasonable way to use an XPiNC application with data on the server is to split the application: have one NSF that contains all the program logic (all XPages and other code) and another with the forms, views, and documents.
Replicate the application NSF locally and access only the data on the server. This should give you much better performance. You could also have a configuration setting that computes the path of the data NSF, so disconnected users could work against a local replica of the data (see the sketch below).
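A minimal sketch of that configuration switch, written as an SSJS computed value for a data source's databaseName property; the sessionScope flag and both file paths are assumptions:

    // Computed databaseName in the application NSF: point the data source at
    // either a local replica or the server copy of the data NSF.
    if (sessionScope.useLocalData === true) {
        return "data.nsf";                  // local replica for disconnected users
    }
    return "AppServer/Acme!!apps/data.nsf"; // copy on the server (assumed path)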
Let us know how it goes.
P.S.: There are some more tuning ideas...

Related

Is storing data on the NodeJs server reliable?

I am learning how to use socket.io and Node.js. In this answer they explain how to store the users who are online in an array in Node.js, without storing them in the database. How reliable is this?
Is data stored on the server reliable? Does the data always stay the way it is intended?
Is it advisable to even store data on the server? I am thinking of a scenario where there are millions of users.
Is it that there is always one instance of the server running, even when the app is served from different locations? If not, will storing data on the server bring up inconsistencies between the different server instances?
Congrats on your learning so far! I hope you're having fun with it.
Is data stored on the server reliable? Does the data always stay the way it is intended?
No, storing data on the server is generally not reliable enough, unless you manage your server in its entirety. With managed services, storing data on the server should never be done because it could easily be wiped by the party managing your server.
Is it advisable to even store data on the server? I am thinking of a scenario where there are millions of users.
It is not advisable at all; you need a DB of some sort.
Is it that there is always one instance of the server running, even when the app is served from different locations? If not, will storing data on the server bring up inconsistencies between the different server instances?
The way this typically works is that the server is always running and has some basic information about its configuration stored locally; when scaling, hosted services can increase processing capacity automatically and handle load balancing in the background. Whenever the server retrieves data for you, it requests it from the database, and the result is loaded into RAM (memory). In the user example, you would store the user data in a table or document (relational versus document-oriented databases) and then load it into memory to manipulate it.
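To make the split between volatile in-memory state and durable storage concrete, here is a minimal Node.js/socket.io sketch; the event names and the commented-out saveLoginEvent call are assumptions standing in for your persistence layer:

    // Presence (who is online right now) is volatile and fine to keep in memory;
    // anything that must survive a restart belongs in the database.
    const { Server } = require('socket.io');
    const io = new Server(3000);

    const online = new Map(); // socket.id -> username; lost if the process restarts

    io.on('connection', (socket) => {
      socket.on('login', (username) => {
        online.set(socket.id, username);
        // saveLoginEvent(username); // hypothetical DB write for the durable record
      });
      socket.on('disconnect', () => {
        online.delete(socket.id);
      });
    });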
Additionally, to learn more about your 'data inconsistency' concern, look up concurrency as it pertains to databases, and data race conditions.
Hope that helps!

Meteor multiple tab sharing state

Users frequently open multiple tabs to my Meteor app. Is there a way to get those tabs to share the same connection (state on the server) so that there aren't multiple redundant connections? I'm thinking about writing a package to do this myself; I'm wondering if anyone has given any thought to this. It should help with performance.
It is possible to share client-side data through localStorage (consider it a browser database). It is also possible to share server-side data, commonly through the database (MongoDB in the case of Meteor). The network connection (as opposed to the collection) is shared across tabs automatically by the browser.
If you mean sharing a collection (instead of a connection), you don't need to do anything special to share it between tabs (clients). Clients observing the same collection will see the same data.
However, the convenience offered by Meteor has its cost. One of those costs is that each client keeps its own partial copy of the collection, so it can use (or waste) a lot of memory.
These are implementation details, and just as JavaScript uses more memory and CPU than native code in exchange for convenience, there is not much you can do about it, at least not easily.
Update: As Harry noted, for real DDP connection 'sharing' it is possible to detect and disconnect new tabs and use localStorage to sync data from the first tab, so that there is only one active connection. However, IMHO it would be quite a heroic feat (a rough sketch follows).
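A rough browser-side sketch of that idea; the localStorage key names and the shape of the mirrored data are assumptions, and this is the heavily simplified version of the "heroic feat":

    // The first tab to claim the key becomes the "master" and keeps its DDP
    // connection; later tabs disconnect and read data mirrored by the master.
    if (!localStorage.getItem('masterTab')) {
      localStorage.setItem('masterTab', String(Date.now())); // claim master
    } else {
      Meteor.disconnect(); // secondary tab: drop the redundant connection
    }

    // "storage" events fire in every *other* tab when a key changes, so the
    // master can broadcast data simply by writing it to localStorage.
    window.addEventListener('storage', (e) => {
      if (e.key === 'sharedDocs') {
        const docs = JSON.parse(e.newValue); // data mirrored by the master tab
        // ...render docs locally instead of fetching them over DDP
      }
    });

    window.addEventListener('beforeunload', () => {
      localStorage.removeItem('masterTab'); // let another tab take over
    });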
You should be able to use HTML5 local storage for this. This library does just that:
https://github.com/diy/intercom.js

Is it possible to make CouchApp send requests autonomously?

I want to write a very simple app which monitors the state of some sites. I also want to build it in CouchApp style, without using any environment except CouchDB.
So the question is: how can I make a CouchApp send requests to those sites on a schedule, by itself?
BTW, if I fail with the CouchApp approach, is there some way to do it without involving daemon stuff (or cron) in PHP or even Java? I want to keep it as simple as possible, but not simpler.
rsp is correct. Since CouchDB uses web protocols and Javascript, it has become a victim of its own success.
My rule of thumb is this: CouchDB is a database. It stores your data. I do not expect MySQL to automatically monitor external web sites. Why would I expect CouchDB to do that?
However I agree; CouchDB always benefits from some persistent processing to maintain the data.
Since CouchDB is completely web-based, you could start with a simple dedicated "worker" web browser: fetch a password-protected HTML page from CouchDB, and let that page's Javascript query the servers and update CouchDB (a rough sketch follows). This could work in the short term as a quick solution; however, browsers impose security restrictions on your queries, and a browser is not a long-term computing platform.
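Something like this minimal sketch, assuming a database named monitor on the same CouchDB host; the site list and polling interval are assumptions, and the monitored sites would need to allow cross-origin requests:

    // Javascript on the "worker" page: poll each site, then record the result
    // as a new document through CouchDB's plain HTTP API (POST /db creates a doc).
    const sites = ['https://example.com']; // sites to monitor (assumed)

    setInterval(async () => {
      for (const site of sites) {
        let status;
        try {
          status = (await fetch(site)).status;
        } catch (err) {
          status = 'unreachable';
        }
        await fetch('/monitor', {
          method: 'POST',
          headers: { 'Content-Type': 'application/json' },
          body: JSON.stringify({ site, status, at: new Date().toISOString() })
        });
      }
    }, 60 * 1000); // once a minute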
The traditional way is to run your own client software to do these things. You can either run it on a dedicated computer, or use PHP, NodeJS, or any of the other hosting services out there.
You can't do it in CouchDB alone (CouchApps can only have pure functions without side effects, so they can be guaranteed to be cacheable), but you can do it using simple scripts that talk to CouchDB. See this talk by Mikeal Rogers for details on how to do it.

Web site logs (IIS7, hosted)

I have an ASP.NET MVC app hosted at webhost4life
What's a good way to save logs?
I have access to the FTP site I upload to; should I effectively just do
File.AppendAllText("log.txt", "Ooops, we have an error" + e.Message);
Or is there a better way? Send an e-mail? Save the log into a database?
I always try to log to a database and fall back on a file if the database is inaccessible (perhaps the database being down is the very cause of the exception). This lets you run queries and reports against the log directly and find out what the problem is immediately. You can also run a "health check" against the application by storing critical exceptions and marking them, etc. (a sketch of the fallback pattern follows).
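The thread is about ASP.NET, but the fallback pattern itself is language-neutral; here is a minimal sketch in JavaScript, where insertLogRow is a hypothetical stand-in for the real database write:

    const fs = require('fs');

    // Hypothetical database write; replace with a real INSERT into a log table.
    async function insertLogRow(message) {
      throw new Error('pretend the database is down');
    }

    async function log(message) {
      try {
        await insertLogRow(message); // primary path: the database
      } catch (dbError) {
        // The DB may be unreachable (possibly the very failure being logged),
        // so fall back to appending to a plain file.
        fs.appendFileSync('log.txt', `${new Date().toISOString()} ${message}\n`);
      }
    }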
Avoid writing to the file system; this can generate collisions/race conditions between threads that are attempting to write to the same file. Databases are wonderful solutions for this problem, and provide some nice benefits such as being able to generate reports easily from normalized data.
Also, what sort of information are you logging? The IIS logs are very detailed. Saving information that is already available in those logs duplicates work (the server writes its logs, and then you write your own), which of course incurs a performance hit.

Building a website backend in c#, compiled to a binary

I am creating a novel website that integrates web feeds from around the internet. I want to build a backend that does CPU-intensive analysis of the web data on a regular basis and continuously adds the results to a database.
This database will be accessible to the website through a normal ASP.NET backend that will serve the pages up to the client.
Is it advisable, and best practice, to build the complex CPU operations as C# binaries that run continuously on the server?
Sounds like you want a .NET executable that either runs on a schedule (cron-job style) or schedules itself. In any case, it's wise to keep it completely separate from your website process. Data generation and data serving are separate concerns, so they should be kept separate. This also means you can move the worker off the web-serving machine if load becomes an issue. If you're updating a live database, remember to take transactions into account.
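For the shape of such a worker, here is a Node-style JavaScript sketch (kept in JavaScript for consistency with the rest of this page; the same structure applies to a C# console app run by the Task Scheduler or as a Windows service). analyzeFeeds and writeResults are hypothetical stand-ins:

    // A self-scheduling worker process, entirely separate from the web server.
    async function analyzeFeeds() {
      return []; // hypothetical: fetch the feeds and run the CPU-intensive analysis
    }

    async function writeResults(rows) {
      // hypothetical: batch-INSERT the rows into the database in one transaction
    }

    async function runOnce() {
      const results = await analyzeFeeds();
      await writeResults(results);
    }

    setInterval(() => runOnce().catch(console.error), 15 * 60 * 1000); // every 15 minutes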
