Method(s) used to pass data to and from WebKit based browser - linux

I'm building a WebKitGTK+ (WebKit) application that will be a very simple web browser, running in a Linux environment as a separate executable, and it will need to exchange data with another application. The system is described in the following image:
Example scenario:
Event such as smart card insertion detected in Backend processing Application
smart card data is read and passed to the WebKitGTK+ GUI Application
User interacts with "web page" displayed in WebKitGTK+ Application
Data is passed from WebKitGTK+ Application back to the Backend processing Application
What are the common methods of passing data between the WebKitGTK+ Application and the Backend processing Application?
Does WebKitGTK+ provide some hooks to do this sort of thing? Any help at all would be appreciated.

I know this is an old question, but I will try to answer it to the best of my abilities so that it can be useful to someone else.
WebKit basically renders an HTML page. You can connect to the various signals that WebKit provides and act on them. More details at http://webkitgtk.org/reference/webkitgtk/stable/webkitgtk-webkitwebview.html
For your back-end application, if something interesting happens, you can update the page either by navigating to a new URL, by loading new page contents, or just by setting the text of the desired element. WebKit provides DOM API functions, so all of these are possible.
It becomes interesting when getting data out of WebKit and sending it to your back-end system. Again, this depends on your specific design, but generally you can hook up signals, such as the navigation signals fired when the user clicks a button, and read the contents from there. Another alternative is to install an alert handler: the page simply calls the JavaScript alert() function, and you process the alert data on the back-end side.
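The alert-handler approach can be sketched from the page side like this (the "APP_MSG" prefix and the field names are my own invention, not a WebKit convention — the GTK side would intercept WebKit's script-dialog/alert signal and parse this envelope instead of showing a dialog):

```javascript
// Prefixing with a marker lets the native handler ignore ordinary alerts.
function encodeForBackend(action, payload) {
  return 'APP_MSG:' + JSON.stringify({ action: action, payload: payload });
}

function sendToBackend(action, payload) {
  // In the embedded page, alert() raises WebKit's script-dialog signal,
  // which the C side intercepts and forwards to the back end.
  alert(encodeForBackend(action, payload));
}
```

On the C side you would parse anything starting with `APP_MSG:` as JSON and suppress the dialog; everything else is shown to the user as a normal alert.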
Shameless plug for an example: https://github.com/nhrdl/notesMD. It uses a simple database backend and is probably a good use case here, as it sends data back and forth between the database and the web page, and even manipulates links so that the desired actions take place.

Related

Electron 22.0.3 On Linux Session Cookies Dilemma

I have created my first Electron app: a dashboard that collects a list of URLs from a MongoDB and scrolls through each URL using a predefined time delay between URLs. I am using this to display information screens from different BMS systems (Building Management / Building Automation systems), which all reside on the same local network.
The BMS systems require user logins in order to view the screens. I have written some code in my preload script which injects the necessary login credentials into the proper DOM elements and activates the submit method. (I know this is not 100% secure, but to mitigate that, I am running the dashboard in kiosk mode with DevTools disabled. Furthermore, the systems I am connecting to do not contain any super-sensitive data, only temperature readings, etc.)
I create the main browser window once and call the loadURL method in a different function, which loads the next URL in the list. The problem I am having is that after rotating between all of the displays in system A, when I load the URLs from system B, the app has to log in to that system (totally expected behavior). However, when my logic finishes displaying the URLs from system B and loops back to system A, my app has to log in to system A again, even though I had already logged in to system A before displaying the system B screens and I never destroyed the original browser window.
Is there a way to maintain persistent session info to prevent this recurring login process when switching from system to system? Ideally I would like to maintain the persistent session info until I quit the app.
I have read over the documentation for the session and cookies methods, but being new to JavaScript and Electron, I couldn't quite wrap my head around how to implement the classes. Any help with this would be greatly appreciated.
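For reference, one thing worth experimenting with is Electron's persistent session partitions: any partition name prefixed with "persist:" stores cookies and other session data on disk, shared by every window that uses the same name, so logins can survive loadURL() calls and even app restarts. A minimal sketch, where the partition name "persist:bms" and the window options are illustrative:

```javascript
// Sketch only: assumes Electron's BrowserWindow API.
// All windows created with the same "persist:" partition share one
// persistent cookie store, so a login to system A is still there
// after rotating through system B.
function windowOptions(partition) {
  return {
    kiosk: true,
    webPreferences: { partition: partition }
  };
}

// In the Electron main process:
//   const { BrowserWindow } = require('electron');
//   const win = new BrowserWindow(windowOptions('persist:bms'));
//   win.loadURL(nextUrl); // cookies survive between loadURL calls
```

Note that if the BMS sites themselves expire their session cookies or log you out server-side, no client-side setting can prevent the re-login.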

Which method is faster in Express: server-side rendering vs client-side rendering?

What I would like to know is: how do you build your web application? I'm really confused about which method I should use for my project.
I have already decided which technologies to use:
1) Node.js and express as its Framework
2) MongoDB
3) React + Flux
But the problem right now is: should I use method (A) or method (B)?
Method (A) - Serverside rendering for HTML
app.get('/users/', function(request, respond) {
    var user = "Jack";
    respond.render("user", { user: user });
});
Method (B) - Clientside rendering for HTML
app.get('/users/', function(request, respond) {
    var user = "Jack";
    respond.json({ user: user });
});
Method A renders the HTML, along with the data, on the server.
Method B just responds with the data that the client-side React.js needs, so that it can render the HTML itself.
My concern is: which method should I use? Which method do most startups use?
Thank you.
It's not an either/or proposition.
React is a client side framework. You have to render on the client side. The question is whether to render on the server side in addition to rendering on the client side.
The answer? If you can, YES!
You will get SEO benefits and an initial performance boost by rendering on the server side. But you will still have to do the same client side rendering.
I suggest googling "isomorphic react" and doing some reading. Here is one article on the subject:
http://www.smashingmagazine.com/2015/04/react-to-the-future-with-isomorphic-apps/
Well, it really depends on your vision of the modern web and what you are willing to do.
Would you prefer to make your users wait, displaying a loader while the data loads asynchronously, or would you prefer to keep your users engaged for as long as you can?
Here are several articles that will help you clear your mind and see the advantages of server-side rendering, given that client-side rendering has multiple issues.
You can see this post from the Twitter blog, saying that they cut their initial page load to one-fifth of what it was before by moving the rendering to the server:
https://blog.twitter.com/2012/improving-performance-on-twittercom
Another article, this time from Airbnb, describing the issues you can run into with client-side rendering itself:
http://nerds.airbnb.com/isomorphic-javascript-future-web-apps/
There is also another interesting article about client-side vs. server-side rendering, which debates when we should and should not use server-side or client-side rendering, and why:
https://ponyfoo.com/articles/stop-breaking-the-web
And to finish, here are two more links, more focused on React, describing how server-side rendering can help in your case:
https://www.terlici.com/2015/03/18/fast-react-loading-server-rendering.html
http://reactjsnews.com/isomorphic-javascript-with-react-node/
Now, about what you SHOULD do: in my opinion it's a matter of what exactly you need, but basically you can do both at the same time (client-side AND server-side) to get the best user experience.
This concept is called "isomorphic javascript" and it is getting more and more popular these days.
The simplest architecture is to just do dynamic html rendering on the server, with no Ajax, and with a new HTML page requested for pretty much any client click. This is the 'traditional' approach, and has pros and cons.
The next simplest is to serve completely static html+js+css (your React app) to the client, and make XMLHttpRequest calls to webservices to fetch the required data (i.e. your method B).
The most complex but ideal approach (from a performance and SEO perspective) is to build an 'isomorphic' app that supports both approaches. The idea is that the server makes all the necessary WS calls that the client would make and renders the initial page that the user has visited (which could be a deep-linked part of the application), a bit like option A but using React to do the rendering, and then passes control to the client for future DOM updates. This then allows fast incremental updates to the page via web-service calls as the user interacts (just like B). Navigation between different 'pages' at this point involves using the History API to make it look like you're changing page, when actually you are just manipulating the current page using web services. But if you then did a browser refresh, your server would send back the full HTML of the current page before passing control to client-side React again. There are lots of React+Flux+Node examples of this approach available online, using the different flavours of Flux that support server-side rendering.
Whether that approach is worthwhile depends on your situation. It probably makes sense to start with approach B (you can share your HTTP API between mobile apps and websites), but use a Flux architecture that supports server-side rendering and keep that option in mind. That way, if you need to improve the performance of initial page loads, you have the means to do it.
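To make the A-vs-B distinction concrete without pulling in React, here is a framework-free sketch of the shared-render idea (all function names are illustrative, not a real API):

```javascript
// One render function shared by server and client.
function renderUser(user) {
  return '<div id="user">Hello, ' + user + '</div>';
}

// Method A / first load: the server sends complete HTML,
// so the browser (and crawlers) see content immediately.
function serverResponse(user) {
  return '<html><body>' + renderUser(user) + '</body></html>';
}

// Method B / later updates: the server sends only JSON, and the
// client re-renders the fragment itself.
function clientUpdate(json) {
  return renderUser(JSON.parse(json).user);
}
```

The isomorphic approach uses the first path for the initial page load and the second path for every interaction after that, which is exactly what React's server-side rendering enables with a single component tree.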

What technology can I use to run a method in the browser (client side) every time a user uploads a picture?

I have a custom function/method that needs to run in the browser (client side) every time the user uploads a picture to a web server. This method modifies the image being uploaded and then sends it to the server.
Currently the method is written in Java, so I thought of using an applet in the browser, which could run this method and then send the modified picture to a servlet residing on the server. However, the applet has certain disk read/write restrictions. I am aware of policies that can be used to grant these permissions to the applet, but they need the user's consent every time.
I also want to avoid the applet's .class file being downloaded every time this page is viewed. So:
Is there a cleaner approach to all this?
Are there any other technologies that can help me run this method in the browser? (It's OK if I have to rewrite the function in a different language.)
Is writing a custom browser extension a good idea?
I think that using JavaScript would be much better for this task.
Here is one JavaScript image-processing library, just as an example.
And here is an example of how to invoke a servlet from JS.
Writing a browser extension would really be the wrong way to go here.
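For completeness, here is a rough client-side sketch of that approach. The element id "pic", the "/upload" endpoint, and the greyscale transform are all placeholders for whatever your Java method actually does; the point is that the image is modified on a canvas and re-encoded before anything reaches the server:

```javascript
// Pure, testable part: takes a flat [r,g,b,a,...] pixel array and
// returns a greyscaled copy (placeholder for your real transform).
function buildGreyscale(pixels) {
  var out = pixels.slice();
  for (var i = 0; i < out.length; i += 4) {
    var g = Math.round((out[i] + out[i + 1] + out[i + 2]) / 3);
    out[i] = out[i + 1] = out[i + 2] = g;
  }
  return out;
}

// Browser-only part: draw the chosen file on a canvas, transform the
// pixels, re-encode, and POST the result to the servlet.
if (typeof document !== 'undefined') {
  document.getElementById('pic').addEventListener('change', function (e) {
    var img = new Image();
    img.onload = function () {
      var canvas = document.createElement('canvas');
      canvas.width = img.width;
      canvas.height = img.height;
      var ctx = canvas.getContext('2d');
      ctx.drawImage(img, 0, 0);
      var data = ctx.getImageData(0, 0, canvas.width, canvas.height);
      data.data.set(buildGreyscale(Array.from(data.data)));
      ctx.putImageData(data, 0, 0);
      canvas.toBlob(function (blob) {
        var form = new FormData();
        form.append('picture', blob, 'processed.png');
        fetch('/upload', { method: 'POST', body: form });
      });
    };
    img.src = URL.createObjectURL(e.target.files[0]);
  });
}
```

No plugin, no user consent prompt, and nothing is downloaded beyond the page's own scripts.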

Can I capture JSON data already being sent with a userscript/Chrome extension?

I'm trying to write a userscript/Chrome extension to capture JSON data being sent while using a web service, so that I can reformat it and display a selected portion on the page. Currently the JSON is sent as the application loads (as I've observed from watching traffic with Fiddler 2). Is my only option to request the JSON again, or is capturing it possible? As I'm not providing a code example, even some guidance on what method/topic to research, or on whether I'm barking up the wrong tree, would be a welcome answer.
No easy way.
If it is for a specific site, you might look into intercepting and overwriting the part of the code which sends the request. For example, if it is sent on a button click, you can replace the existing click handler with your own implementation.
You can also try to make a proxy for XMLHttpRequest. I'm not sure if this is even possible and have never seen a working example, but you can look at some attempts here.
For all these tasks you would probably need to run your JavaScript code outside of the sandboxed content script to be able to access the parent page's variables, so you would need to inject a <script> tag with your code right into the page from the content script:
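A rough sketch of that injection pattern is below; the console.log is a placeholder for whatever reformatting you want to do with the captured response:

```javascript
// Runs in the page context (not the sandbox), so it sees the page's
// own XMLHttpRequest. Wraps open() to observe every response.
var code = function () {
  var origOpen = XMLHttpRequest.prototype.open;
  XMLHttpRequest.prototype.open = function () {
    this.addEventListener('load', function () {
      // Inspect JSON responses here, e.g. re-dispatch them as a DOM
      // event that the content script can listen for.
      console.log('captured', this.responseText);
    });
    return origOpen.apply(this, arguments);
  };
};

// Content-script side: inject the wrapper into the page itself.
if (typeof document !== 'undefined') {
  var s = document.createElement('script');
  s.textContent = '(' + code.toString() + ')();';
  (document.head || document.documentElement).appendChild(s);
  s.remove();
}
```

Note this only catches requests made after the wrapper is installed, so the script must run at document-start to see the JSON sent during page load.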

How do I run server-side code from couchdb?

Couchdb is great at storing and serving data, but I'm having some trouble getting to grips with how to do back-end processing with it. GWT, for example, has out of the box support for synchronous and asynchronous call backs, which allow you to run arbitrary Java code on the server. Is there any way to do something like this with couchdb?
For example, I'd like to generate and serve a PDF file when the user clicks a button in a web app. Ideally the workflow would look something like this:
User enters some data
User clicks a generate button
A call is made to the server, and the PDF is generated server side. The server code can be written in any language, but preferably Java.
When PDF generation is finished, the user is prompted to download and save the document.
Is there a way to do this with out of the box couchdb, or is some additional, third-party software required to communicate between the web client and backend data processing code?
EDIT: It looks like I did a pretty poor job of explaining my question. What I'm interested in is essentially serving servlets from CouchDB, similarly to the way you can serve Java servlets alongside web pages from a WAR file. I used GWT as an example because it has support for developing the servlets and client-side code together and compiling everything into a single WAR file. I'd be very interested in something like this because it would make deploying fully functional websites a breeze through CouchDB replication.
By the looks of it, however, the answer to my question is no, you can't serve servlets from CouchDB. The database is set up for CRUD-style interactions, and any servlet-style components need either to be served separately, or to work by polling the DB for changes and acting accordingly.
Here's what I would propose as the general workflow:
When user clicks Generate: serialize the data they've entered and any other relevant metadata (e.g. priority, username) and POST it to couchdb as a new document. Keep track of the _id of the document.
Code up a background process that monitors couchdb for documents that need processing.
When it sees such a document, have it generate the PDF and attach it to that same couch doc.
Now back to the client side. You could use Ajax polling to repeatedly GET the couch doc and test whether it has an attachment or not. If it does, then you can show the user the download link.
Of course the devil is in the details...
Two ways your background process(es) can identify pending documents:
Use the _changes API to monitor for new documents with _rev beginning with "1-"
Make requests on a couchdb view that only returns docs that do not have an "_attachments" property. When there are no documents to process it will return nothing.
Optionally: if you have multiple PDF-making processes working on the queue in parallel, you will want to update the couch doc with a property like {"being-processed":true} and filter these out of the view as well.
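A small sketch of the worker's document-selection logic and the HTTP requests involved (the database name, field names, and helper are all illustrative, not part of CouchDB's API):

```javascript
// A doc is pending when it has never been updated (_rev starts with
// "1-"), has no attachment yet, and has not been claimed by another
// worker via the "being-processed" flag.
function isPending(doc) {
  return /^1-/.test(doc._rev) &&
         !doc._attachments &&
         !doc['being-processed'];
}

// The background worker would long-poll the _changes feed and, for
// each pending doc, claim it, generate the PDF, and attach it:
//
//   GET /pdf-jobs/_changes?feed=longpoll&since=<seq>
//   PUT /pdf-jobs/<id>                          ("being-processed": true)
//   PUT /pdf-jobs/<id>/report.pdf?rev=<rev>     (Content-Type: application/pdf)
//
// The client's Ajax polling then sees "_attachments" appear on the doc
// and shows the download link.
```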
Some other thoughts:
I do not recommend using the couchdb externals API for this use case because it (basically) means couchdb and your PDF-generating code must be on the same machine. But it's something to be aware of.
I don't know a thing about GWT, but it doesn't seem necessary to accomplish your goals. Certainly CouchDB can serve any static files (js or other) you want either as attachments to docs in a db or from the filesystem. You could even eval() JSON properties you put into couch docs. So you can use GWT to make ajax calls or whatever but GWT can be completely decoupled from couchdb. Might be simpler that way.
GWT has two parts to it. One is the client code, which the GWT compiler translates from Java to JavaScript, and the other is a servlet, if you do any RPC. Typically you run your client code in a browser, and when you make any RPC calls you contact a Java servlet engine (such as Tomcat or Jetty), which in turn calls your persistence layer.
GWT does have the ability to do JSON requests over HTTP, and coincidentally this is what CouchDB uses. So in theory it should be possible (I do not know if anybody has tried it). There would be a couple of issues:
CouchDB would need to serve up the .js files that contain the compiled GWT client code.
The main issue I see in your case is that CouchDB would need to generate your PDF files, while CouchDB is just a storage engine and does not typically do any processing. I guess you could extend it if you are any good with the Erlang programming language.
