How to gather user metrics for an Electron desktop app? - node.js

I would like to gather some metrics about usage for an Electron-based cross-platform desktop app. This would consist of basic information on the user's environment (OS, screen size, etc.) as well as the ability to track usage, for example how many times the app is opened, or how often specific actions are performed within the app.
These metrics should be sent to an analytics server, so they can be viewed in aggregate. Ideally I could host the server-side component myself, but would certainly consider a solution hosted by a third party.
There are various analytics solutions for the web (Google Analytics, Piwik), and for mobile apps, as well as solutions for Node.js server-side apps. Is it feasible to adapt one of these solutions for desktop Electron-based apps? How? Or are there any good analytics solutions specifically designed for desktop apps that work with Electron / JavaScript?
Unlike a typical webpage, the user might be using the app offline, so offline actions should be recorded, queued, and sent later when the user comes online. A desktop app also typically loads pages from the file system, not over HTTP, so the solution needs to cope with that.
Unlike a Node.js server-side application, there could be a large number of clients rather than just a single (or a few) server instances. Analytics for a desktop app would be user-centric, whereas a server-side Node.js app might not be.
Ease of setup is also a big factor - an ideal solution would just have a few lines of configuration to gather basic metrics, then could be extended as necessary with custom actions/events.

The easiest thing will be to use Google Analytics or a similar offering.
For most of them you'll have two major issues to solve beyond hosting on a website:
Electron does not store cookies or state between runs. You have to store this manually.
Most analytics libraries ignore file: URLs, so that they only get hits from the internet.
Use an existing library and most of these issues will already be solved for you.
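As an illustration, here is a minimal sketch of the manual approach from Electron's main process, using Google Analytics' legacy Measurement Protocol over plain HTTPS. It persists a client ID itself (since cookies aren't kept between runs) and keeps a simple on-disk queue for offline events. The tracking ID, file names, and queue format are all assumptions for the example, not a prescribed setup:

```js
// analytics.js -- runs in Electron's main process
const https = require('https');
const fs = require('fs');
const path = require('path');
const crypto = require('crypto');
const { app } = require('electron');

const TRACKING_ID = 'UA-XXXXX-Y'; // placeholder: your GA property ID
const idFile = path.join(app.getPath('userData'), 'client-id');
const queueFile = path.join(app.getPath('userData'), 'event-queue.json');

// No cookies between runs, so persist a client ID ourselves.
function getClientId() {
  try { return fs.readFileSync(idFile, 'utf8'); }
  catch (e) {
    const id = crypto.randomUUID();
    fs.writeFileSync(idFile, id);
    return id;
  }
}

function loadQueue() {
  try { return JSON.parse(fs.readFileSync(queueFile, 'utf8')); }
  catch (e) { return []; }
}
function saveQueue(q) { fs.writeFileSync(queueFile, JSON.stringify(q)); }

// Record an event; it is queued first so offline events survive restarts.
function trackEvent(category, action) {
  const hit = new URLSearchParams({
    v: '1', tid: TRACKING_ID, cid: getClientId(),
    t: 'event', ec: category, ea: action,
  }).toString();
  const queue = loadQueue();
  queue.push(hit);
  saveQueue(queue);
  flush();
}

// Send queued hits one at a time; on network failure the queue is kept.
function flush() {
  const queue = loadQueue();
  if (queue.length === 0) return;
  const body = queue.shift();
  const req = https.request({
    hostname: 'www.google-analytics.com', path: '/collect', method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
  }, () => { saveQueue(queue); flush(); }); // delivered: drop it, send the next
  req.on('error', () => { /* still offline; try again on the next flush() */ });
  req.write(body);
  req.end();
}

module.exports = { trackEvent };
```

Renderer processes would call trackEvent via IPC, and calling flush() again on startup or on a connectivity event covers the offline-queue requirement from the question.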

Related

Google App Engine with Python 3: Mix Standard and Flexible for Websockets

I've started to port a web app backend to Google App Engine for scaling. But I'm completely new to GAE and just reading into the concepts. Steep learning curve.
I'm 95% certain that at some point hundreds of thousands, and eventually many millions, of users will start using the web app through a GUI app that I'm writing. And they will be global users, so at some point in the future I'm expecting a relatively stable flow of connection requests.
The GAE Standard Environment comes to mind for scaling.
But I also want the GUI app to react when user-related data changes in the backend. That suggests websockets, which aren't supported in the Standard Environment, but are in the Flexible Environment.
Here's my idea: The main backend happens in a Standard app, but the GUI listens to update notifications from a Flexible app through web sockets. The Standard app calls the Flexible app after noteworthy data changes have occurred, and the Flexible app notifies the GUI.
But is that even possible? Because sibling Flexible instances aren't aware of each other (or are they?), how can I trigger the persistent connections held by the Flexible instance with an incoming call from the Standard app to send out a notification?
(The same question goes for the case where I have only one Flexible app and no Standard app, because the situation is kind of the same.)
I'm assuming that the Flexible app can access the same datastore that the Standard app can. Didn't look this one up.
Please also comment on whether the Standard app is even a good idea at all in this case, or whether I should just go with Flexible. These are really new concepts to me.
Also: Are there limits to number of persistent connections held by a Flexible app? Or will it simply start another instance if a limit is reached?
Which of the two environments end up cheaper in the long run?
Many thanks.
You can only have one App Engine application per project; however, that application can contain multiple services, each running in either the flexible or the standard environment.
Whether standard is a good idea depends on your architecture. I'm pretty sure you've looked at the comparison chart; from experience, if your app can work within all the restrictions (limited code runtimes, no background processes, no SSH debugging, among others), I would definitely go for standard, since it performs very well under spikes of traffic and deploys new services in just seconds. Keep in mind that automatic scaling is needed for the best performance results.
There are multiple ways to connect flex or standard services. One is simply to send an HTTP request from one service to another, but there are other options using GCP services such as Pub/Sub.
In the standard environment, you can also pass requests between services and from services to external endpoints using the URL Fetch API.
Additionally, services in the standard environment that reside within the same GCP project can also use one of the App Engine APIs for the following tasks:
Share a single memcache instance.
Collaborate by assigning work between services through Task Queues.
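On the fan-out question specifically: sibling flex instances are not aware of each other, but they don't need to be if each instance subscribes to a Pub/Sub topic and notifies the websocket clients it happens to hold. A rough sketch, assuming the @google-cloud/pubsub client, socket.io on the flex service, and illustrative topic/subscription names:

```js
// Standard service: publish after a noteworthy data change.
const { PubSub } = require('@google-cloud/pubsub');
const pubsub = new PubSub();

async function notifyDataChange(userId, change) {
  await pubsub.topic('data-updates').publishMessage({
    data: Buffer.from(JSON.stringify({ userId, change })),
  });
}

// Flexible service: relay messages to this instance's websocket clients.
// Note: each instance would create its own uniquely named subscription, so
// every instance sees every message (a shared subscription load-balances).
const { Server } = require('socket.io');
const io = new Server(8080);

io.on('connection', (socket) => {
  socket.on('register', (userId) => socket.join(`user:${userId}`));
});

pubsub.subscription('data-updates-instance-sub').on('message', (message) => {
  const { userId, change } = JSON.parse(message.data.toString());
  io.to(`user:${userId}`).emit('data-changed', change);
  message.ack();
});
```

So yes, it is possible, and the same pattern answers the single-flex-app case: the instance holding a given connection is reached through the topic, not through an instance-to-instance call.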
Regarding Datastore, you can access the same datastore from different services; there is a quickstart for flex and a quickstart for standard.
Which of the two environments end up cheaper in the long run?
Standard pricing is based on instance hours
Flexible pricing is based on usage of vCPU, memory, and persistent disks
If your services run high-intensity processes over short periods of time, standard will probably be cheaper; however, if you run low-intensity processes over long periods of time, flex will be cheaper. But again, it depends on each use case.

Azure Mobile SDK vs Custom Code - Scalability

We have written two mobile apps and a web back end. Mobile apps are written in Xamarin, back end in C# in Azure.
There is shared data between all three apps; some of it is simple keyword tables, but some data tables will change, e.g. a mobile user moves around and makes updates to a table, and those updates need to go back to the web app and then possibly out to the apps.
Currently we use SQLite on the mobile apps and follow an offline-first approach, i.e. when the user changes a table we write to SQLite on the mobile device and then sync to the server. If the user has no connectivity, a background process will eventually sync the data up to the server when possible.
All this is custom code now, and I am a little hesitant to continue on this path. We are in testing with 4 users or so, but expectation is to grow to thousands or tens of thousands of users in 6 to 18 months.
I think that our approach might not scale. I would prefer to switch to an offline-first framework instead of continuing to roll our own.
Given our environment I think using Azure Mobile SDK would be the obvious path to follow.
In general would you choose an offline first framework if your app will grow? In particular, any experience with using Azure Mobile SDK?
Note that your question will likely be closed because you're asking for an opinion/recommendation, but anyway...
From the Azure Mobile Apps Github repo:
Please note that the product team is not currently investing in any new feature work for Azure Mobile Apps.
Also to my knowledge, Microsoft has not announced any new SDK or upgrade path.
With that in mind, one option is to keep your custom code and reinforce it with code that you'd extract from the SDK, or vice versa.
Assuming that your mobile app calls a web service, which then performs any necessary writes, you could load test a copy of your production environment to see if things fall over and at what point. I'm not a huge fan of premature optimization.
Assuming things do fall over, you could introduce a shock absorber between your web service endpoint and the database using a Service Bus Queue.
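To make the shock absorber concrete: the endpoint drops each write onto a queue and returns immediately, and a worker drains the queue into the database at its own pace. A minimal Node.js sketch with the @azure/service-bus package; the connection string, queue name, and applyToDatabase helper are assumptions:

```js
const { ServiceBusClient } = require('@azure/service-bus');

const client = new ServiceBusClient(process.env.SERVICE_BUS_CONNECTION_STRING);
const sender = client.createSender('pending-writes'); // hypothetical queue name

// Web service side: enqueue the write and acknowledge the caller quickly.
async function enqueueWrite(change) {
  await sender.sendMessages({ body: change });
}

// Worker side: drain the queue into the database at a sustainable rate.
const receiver = client.createReceiver('pending-writes');
receiver.subscribe({
  processMessage: async (message) => {
    await applyToDatabase(message.body); // your existing write logic goes here
  },
  processError: async (err) => {
    console.error('Service Bus error:', err);
  },
});
```

The database then sees a smooth stream of writes even when the endpoint takes a spike of traffic.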

Node API Architecture on AWS

I'm making an app that will have:
iOS and Android apps
A web-based "dashboard" to display data gathered from the mobile apps
The app requires that end-users create an account with us (we mostly likely will NOT use Facebook/Twitter logins).
Everything is/will be hosted on AWS using EC2/RDS/S3 (All encapsulated in Elastic Beanstalk)
| Web Browser |  <---->  | sails.js app |  <---->  | actionhero.js API |
                                                             ^
                                                             |
| Mobile app(s) |  <-----------------------------------------/
So far, I've built most of the backing API in actionhero.js, hosted on AWS.
It made sense to me to separate the API and the web app, because the web app is only for a small subset of users -- I'd expect 50x the traffic from our mobile apps over the web app. We could scale the API to serve the mobile users without unnecessarily scaling the sails.js app.
My questions are:
(biggest unknown) How should I handle authentication? The sails.js app needs to be able to make requests to the API, and so do the mobile applications.
I was looking at the oauth2orize node module for creating our own Auth server, but it is designed for Connect/Express, so I don't think I could leverage it in the actionhero.js-based API.
If the solution is to create an OAuth server, am I supposed to host that on its own EC2 instance?
(AWS-specific question) I don't fully understand the use case for creating what AWS describes as a "worker tier" environment. Would there be a reason that the API would fall into that category?
If I want to run a data querying and aggregation task, I would create a separate node process for that, correct? If so, would that background worker have to exist on its own EC2 instance?
Sails.js and Actionhero.js both provide heavy support for socket.io. Should communication between the Sails app and my API happen over a persistent WebSocket connection? Will that scale if I need to create new instances in the future?
This seems like a fairly typical pattern; I'd like to hear if there are any big red flags in this design, before I paint myself into a corner. :-) THANKS!
Bonus question (specific to AWS Elastic Beanstalk)
Will I create separate "Applications" for the sails.js server and the API server? It looks like that's the only way to set it up, anyhow, but I want to make sure.
We have used Node and Beanstalk for a couple of applications now. For authentication, you can create an account for the user when they first access the app, and store the account id on the device. If you want them to be able to log in from multiple devices, you'll need to provide some way for them to identify themselves, which is either id/password or using Facebook. It's not that tough to set up. Use a session to allow them to log in and stay logged in; we generally just store the user id in the session.
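A sketch of that session approach, using express-session purely for illustration (Sails and ActionHero ship their own session/middleware layers; the routes and user lookup here are assumptions):

```js
const express = require('express');
const session = require('express-session');

const app = express();
app.use(express.json());
app.use(session({
  secret: process.env.SESSION_SECRET, // keep out of source control
  resave: false,
  saveUninitialized: false,
  // In production, back this with a shared store (e.g. Redis) so sessions
  // survive across multiple instances behind the load balancer.
}));

// Log in with id/password and store just the user id in the session.
app.post('/login', async (req, res) => {
  const user = await findUserByCredentials(req.body.email, req.body.password); // hypothetical lookup
  if (!user) return res.status(401).json({ error: 'bad credentials' });
  req.session.userId = user.id;
  res.json({ ok: true });
});

// Later requests check the session to see who is logged in.
app.get('/me', (req, res) => {
  if (!req.session.userId) return res.status(401).end();
  res.json({ userId: req.session.userId });
});

app.listen(3000);
```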
A worker tier is for something you want to decouple from your app, something that you want to do that you don't need to know whether it succeeded/failed. A notification server is a prime example. You send the info for the notification into an SQS queue, that then gets sent to the worker tier, that does the work. We are just trying to figure this out now.
A big aggregation process, yes, I'd take it elsewhere, so it's not eating up your production server(s). You might want to create some data aggregation ongoing, as transactions are saved, so it accumulates. Big rollups after the fact can be time consuming and fragile.
Sounds like yes, they would be separate applications.
A good tip: we use Grunt to create the zip files for the app. It's a Node build tool. We check the latest code out of SVN, clean it up by doing things like removing .svn directories, apply our configuration to the config files by simple string replacement, then zip up the resulting output. This then gets loaded into Beanstalk. This takes all the guesswork and time out of actually doing a new deployment; we can get a new build up in minutes that way.
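A stripped-down version of that packaging step might look like the following, using the grunt-contrib-clean and grunt-contrib-compress plugins (paths and task names are illustrative):

```js
// Gruntfile.js -- build a Beanstalk-ready zip
module.exports = function (grunt) {
  grunt.initConfig({
    clean: {
      build: ['build/'], // wipe the previous output
    },
    compress: {
      app: {
        options: { archive: 'build/app.zip', mode: 'zip' },
        // Exclude things like .svn directories from the archive.
        files: [{ src: ['**/*', '!build/**', '!.svn/**'], dest: '/' }],
      },
    },
  });

  grunt.loadNpmTasks('grunt-contrib-clean');
  grunt.loadNpmTasks('grunt-contrib-compress');

  // `grunt package` produces build/app.zip, ready to upload to Beanstalk.
  grunt.registerTask('package', ['clean', 'compress']);
};
```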
Beanstalk can be very frustrating. When it fails, it's not very good at telling you why.

Web app to synchronize data with server

Is there an easy way to manage offline data with a web app, and synchronize with a server when there is a connection? I have been looking at Meteor, CouchDB and the like, but I'm still not sure what would be the least painful way.
I could of course implement it myself with sockets or something similar, but if something is already made for the purpose, I don't see a reason to do it again.
I'm planning to work with Node as the server.
Thanks
You're talking about two things: 1) how to store/persist data if/when offline (the storage mechanism), and 2) how to synchronize with a server when online (the communication mechanism). The answer to 1 is some kind of local storage, and there are several ways of doing that (localStorage, WebSQL, filesystem APIs etc.) depending on your platform. The answer to 2 really depends on how urgent your synchronization needs are, but in general you can use HTTP itself with periodic (long-) polling, websockets and similar.
On top of both storage and communication mechanisms there are numerous libraries that make the job simpler, like Meteor (communication) and CouchDB (storage), but also many many more. There are even libraries that take care of the actual synchronization mechanism (with possible conflict resolution as well), but this very much depends on your actual application.
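To make both halves concrete, here is a bare-bones browser-side sketch: changes go into a localStorage queue first, and a sync pass posts them whenever the browser is online. The /sync endpoint and payload shape are assumptions for illustration:

```js
const QUEUE_KEY = 'pending-changes';

// Storage mechanism: persist every change locally first.
function recordChange(change) {
  const queue = JSON.parse(localStorage.getItem(QUEUE_KEY) || '[]');
  queue.push({ change, at: Date.now() });
  localStorage.setItem(QUEUE_KEY, JSON.stringify(queue));
  if (navigator.onLine) sync();
}

// Communication mechanism: push queued changes to the server when online.
async function sync() {
  const queue = JSON.parse(localStorage.getItem(QUEUE_KEY) || '[]');
  if (queue.length === 0) return;
  try {
    const res = await fetch('/sync', { // hypothetical endpoint on the Node server
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(queue),
    });
    if (res.ok) localStorage.removeItem(QUEUE_KEY); // server has them now
  } catch (e) {
    // Offline or server unreachable; keep the queue for the next attempt.
  }
}

// Retry automatically when connectivity comes back.
window.addEventListener('online', sync);
```

Conflict resolution is the part no snippet solves generically, which is why the synchronization libraries mentioned above exist.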
Updated: This framework looks promising, but I haven't tested it myself:
http://blog.nateps.com/announcing-racer-experimental-realtime-model
You might want to look at cloud services as well. These are best if you are developing a new application as they push you more to a serverless model, and of course you have to be happy using a service.
Simperium is an interesting cloud service - the only one I can find today that does syncing (unlike Firebase and Spire.io, which are similar in other respects). For iOS it includes offline storage, while for JavaScript clients you'd need to cover the local storage yourself using HTML5 features. Backbone.js seems to have some support for this, and Simperium can integrate with Backbone, using a similar API style.
For non-cloud services, Derbyjs is an open source project that includes Racer, the data synchronization library mentioned in the earlier answer - both are under rapid development and not yet complete, but they look interesting if your timescales allow, and they don't require a cloud service. There is a comparison of Derbyjs to Meteor that is useful - although it's written by the Derbyjs developers, it's not too biased.
I also looked at CouchDB, which has some interesting built-in replication features, but I didn't like its use of indexes that are updated lazily when a query needs them (or by a batch process), and I wasn't happy with exposing the server DB directly to clients to enable replication/sync. Generally I think it's best to decouple the client side local storage from the server side DB, and of course for a web app it would be hard to use CouchDB on the client.

What are the best development tools to use in this project?

I am currently devising 3 database desktop applications for different users in a manufacturing company (one each for the accounting, sales, and production departments). All applications have different functions, but they should be able to access the data of the other departments to reflect business transactions. What is the best programming language and database to use for this kind of project? The three computers are not physically connected, so I was thinking of having them access a remote database. The language I am most familiar with is Java, but I am very open to learning others if it would be more beneficial to the company. I was also thinking of using Adobe AIR, as I am adept with web programming and it could still run as a desktop app, but I can't seem to find sufficient resources on distributed systems using Adobe AIR. Any ideas would be very much appreciated. Thanks!
Lots of languages will do this just fine, including Java. You're familiar with that, so my advice is to stick with it, with one caveat: depending on your requirements, I would seriously suggest examining the possibility of making it a web app instead. Desktop database apps are somewhat... old-fashioned. More to the point, they'll create a bunch of headaches for you, such as installation and distribution, and Swing is annoying and tedious to work with.
As for what database, barring requirements you haven't specified, anything will do so pick something free like MySQL.
So for a desktop Java app I would:
Put the database on a remote server;
Put an application server or Web container on that same server;
Create a Webapp on the app server for handling RPC;
Pick a method of RPC, be it Web services or whatever, and use Spring to implement it;
Create a desktop Java app in Swing and distribute it to clients from the app server via Webstart (JNLP).
If it's a Web app:
Put the database and appserver or Web container on one server;
Pick a Java Web framework and create a bunch of Web pages that do what you want.
In all cases, have it be the same app but just act differently on the user type. This is much better than maintaining three different apps.
Also if you do a Web app, you might want to consider using PHP as it's a fast and proven way of knocking up Web pages and probably sufficient for the kind of internal application that you're doing.