Electron is a user interface toolkit that allows a web application to operate as a general-purpose desktop GUI.
However, there are some applications that should be considered sensitive - for instance, a GUI for banking should have strong assurances that it's not doing anything mischievous.
I'm wondering if the Electron executable (or Node.js itself) would allow operation in a mode where networking is outright disabled - that way, as a consumer, I can at least be confident a user interface isn't sending my password off-site.
Something like ./node_modules/.bin/electron --no-networking index.js would be very convenient, albeit a long shot.
I haven't tried whether this works or not, but the following lines simulate offline mode in Chromium:
// To emulate a network outage (in the main process, where `win` is a BrowserWindow):
win.webContents.session.enableNetworkEmulation({ offline: true });
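A fuller sketch of how that might be wired up in a main process, combined with Electron's webRequest API to cancel any request outright (illustrative, not a vetted sandbox; index.html is a stand-in entry page):

const { app, BrowserWindow } = require('electron');

app.whenReady().then(() => {
  const win = new BrowserWindow();
  const session = win.webContents.session;
  // Emulate a total network outage for this window's session.
  session.enableNetworkEmulation({ offline: true });
  // Belt and braces: cancel any request the session still tries to make.
  session.webRequest.onBeforeRequest((details, callback) => {
    callback({ cancel: true });
  });
  win.loadFile('index.html'); // stand-in entry page
});

Note that this is enforced by the app's own main process, so it gives nothing like the externally verifiable guarantee a real --no-networking flag would give a consumer.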
I'm curious how an application is being launched from a web control panel. I am using Splashtop Business, a remote desktop management system. The system allows one to select a workstation to connect to, click "Connect", and the native app is launched and initiates the connection.
I want to know how this app is being launched and how the information is transmitted from the browser to the application.
I checked the official documentation and couldn't find anything about a custom URI scheme being used by the application.
I watched the network traffic, and found the only thing of plausible importance (in my eyes) was a cookie being set. (I can clean and post some cookies if that would be helpful.)
I watched the browser's local storage; nothing changed between launches.
Other things of import:
The site said pop-ups needed to be enabled for the application to launch
There is a small delay while the site says it is "Locating the Splashtop Business app"
This works in multiple browsers (Firefox, Chrome)
Any plausible explanations, and especially ways to verify them, would be appreciated. I don't want to accept that "it's a black-box solution" and just try to find another way to do the same thing. I'd rather know what is going on with my computer, as this is fairly significant with respect to security.
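For reference, one mechanism consistent with these symptoms is a custom URI scheme (protocol handler). A hypothetical sketch of what that looks like from the page's JavaScript (the st-business:// scheme and its parameters here are made up):

// Hypothetical illustration: if the app registered a custom URI scheme,
// the web control panel could launch the native app simply by navigating
// to that scheme. The pop-up requirement would fit a window.open-based launch.
window.open('st-business://connect?session=SESSION_ID');
// or direct navigation / a hidden iframe:
window.location.href = 'st-business://connect?session=SESSION_ID';

On Windows, registered schemes can be verified by searching the registry under HKEY_CLASSES_ROOT for a key with a "URL Protocol" value whose open command points at the Splashtop executable.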
I have an NPAPI plugin that handles sign-in data on a website.
I want to replace it with Native Messaging. I have read the documentation, but I have a question: is this technology safe?
Can attackers intercept the data in transit from JavaScript to the native host app and back?
Edit: merging in a better-worded question:
How secure is stdio data transfer?
Is a man-in-the-middle attack possible on such a data transfer?
It is, in principle, possible to inspect stdio calls made by an executable.
For instance, on Linux systems, you can use strace for that purpose. I don't know of a similar Windows tool, but it's conceivable that one exists.
That would be akin to attaching a debugger to the browser/native host itself, and can only be done by someone who has access to the local machine with the same user credentials or administrative access. In particular, the user running Chrome can do it - just like he/she can use Dev Tools to inspect and intercept the data at the JavaScript side.
So, yes, in principle the data can be intercepted, but only by someone with full rights to execute/debug code on the system it's running on, and the OS takes care not to let normal users inspect other users' processes in this way.
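For concreteness, what such an inspector would see on the wire is just length-prefixed JSON. A minimal sketch of a native messaging host in Node.js (illustrative only; a production host must buffer partial reads):

// Minimal native messaging host sketch. Chrome frames each message as a
// 32-bit message length in native byte order (little-endian on typical
// x86 machines) followed by UTF-8 JSON. This sketch assumes each message
// arrives in a single chunk.
process.stdin.on('readable', () => {
  const header = process.stdin.read(4);
  if (header === null) return;
  const length = header.readUInt32LE(0);
  const body = process.stdin.read(length);
  if (body === null) return; // partial read; real code must buffer
  const message = JSON.parse(body.toString('utf8'));
  // Reply with the same framing.
  const payload = Buffer.from(JSON.stringify({ echo: message }), 'utf8');
  const replyHeader = Buffer.alloc(4);
  replyHeader.writeUInt32LE(payload.length, 0);
  process.stdout.write(Buffer.concat([replyHeader, payload]));
});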
You realize, of course, that Native Messaging will ONLY work within the bounds of the machine: with native messaging, the browser communicates with your host application over stdin/stdout.
So what exactly is the problem here? If hackers are capable of listening to your stdin/stdout, they are already on your machine - you've already lost.
Not really; sometimes hackers can find an XSS flaw in a vulnerable site, and then it may be possible to use Native Messaging to execute commands on the victim's system.
I am building a client-side program that connects to a server. This client-side program needs to have its source code available to the users as part of the licensing (this is not optional). However, I need to ensure that when a user connects to the server with that client-side program, it's running the original code and hasn't been altered and re-compiled.
Is there any way to check during connection to the server that they're using an unaltered version of the program?
No, there's really no way to do that.
You're basically encountering the "Trusted Client" problem. The client code runs on the user's PC, and the user has full control over that PC. He can change the bytes of the program on disk, or even in memory. If you were to try to perform a hash or checksum against the code, he could simply change the code that did that verification and make it return "unmodified".
You could try to make things a little harder on a malicious user but there's no practical way to achieve what you're hoping.
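To make the circularity concrete, here is a sketch of the kind of naive self-check that fails (Node.js; EXPECTED_HASH is a placeholder):

const crypto = require('crypto');
const fs = require('fs');

// Naive self-verification (illustrative): hash our own file and compare.
// The flaw: an attacker who can modify the client can just as easily
// modify or bypass this very function, so the server learns nothing
// trustworthy from its result.
const EXPECTED_HASH = '...'; // placeholder for the "known good" digest
function clientIsUnmodified() {
  const bytes = fs.readFileSync(__filename);
  const digest = crypto.createHash('sha256').update(bytes).digest('hex');
  return digest === EXPECTED_HASH; // trivially patched to `return true;`
}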
What you have described is an issue that the video game industry has been fighting for the last decade and a half: in short, how to prevent the user from modifying the client (in their case, generally to prevent cheating, though also for copyright reasons). If that effort has taught us anything, it's that preventing modifications to the client is a constant arms race that you will never decisively win. In light of that, don't even try.
Follow the standard client-server assumption that the client is in the hands of the enemy and cannot be trusted. Build your server side defensively based on that assumption and you'll be alright.
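As a sketch of what "build your server side defensively" means in practice (Node.js with Express; the endpoint and rules are hypothetical):

const express = require('express'); // assuming Express; endpoint is made up
const app = express();
app.use(express.json());

// The server re-checks everything, regardless of what the client claims
// to have verified about itself.
app.post('/transfer', (req, res) => {
  const amount = req.body.amount;
  if (typeof amount !== 'number' || !(amount > 0)) {
    return res.status(400).json({ error: 'invalid amount' });
  }
  // ...authenticate the caller and verify their actual balance here,
  // using only server-held state...
  res.json({ ok: true });
});

app.listen(3000);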
It's very, very difficult and probably not worth it. But if you are interested in pursuing it, you'd have to develop something that is code-signed and monitored by the Windows kernel.
A couple topics that will orient you to the scope of the problem:
Protected media path
Driver signing
Both protected media components and device drivers are digitally signed by the manufacturer and continuously monitored by Windows. If anything goes out of whack, it gets shut down (that's the technical term). It seems very daunting, and I don't know if the technology is available for desktop software that isn't a device driver and isn't related to DRM.
Good luck!
I've got an app that was started on the Microsoft stack as a smart client (notionally WCF/WS-enabled), with a small client app that gets deployed and the rest of the app running in our private cloud. Its only real dependencies are internet connectivity, .NET 4, and a Windows operating system.
I am under pressure to convert to a browser-based architecture for all future development. Based on other web apps I've worked on, I'm concerned that the degree to which client IT organizations can control the browser will cause more problems down the line than I really want to deal with.
Do you have experience making this kind of decision? What technical factors did you consider when deciding to go smart-client vs. browser? What resources were helpful in making this decision?
My app is a healthcare app targeted at healthcare providers (e.g., hospitals), so everywhere I go I have to worry about the healthcare CIO looking over my shoulder.
Interesting. I started out as a C# WinForms and WPF desktop programmer and was later assigned to web development. I haven't touched smart clients yet, but I think they should be much the same as native apps. Based on my experience, the technical things to consider are:
Multi-browser support
Without some library, plugin, or framework for your components, it will be insanely hard to keep your app working across browsers, especially for reporting and graphics processing. Most of the pain is in CSS styling, less in JavaScript.
Client programming (JavaScript)
You will lose the ability to create controls and animations using C# controls; instead you must use JavaScript (jQuery or another library). JavaScript is not fully OOP and is an interpreted language (no compile errors), which makes development harder (there may be some framework like CoffeeScript that helps, which I haven't explored yet). In addition, client logic is harder to build because server request/response activity sits in the middle of the process, which I describe below.
Request / Response Client-Server Architecture
This means that most processing on the client will need to make requests to the server (requests for data to display, requests to modify data, etc.). It also means that you lose control events, even if you use ASP.NET WebForms (it still needs some tweaks for events to work). However, since you already use WCF, this kind of architecture shouldn't be that hard for you.
Security
Don't keep important information such as passwords on the client (hidden fields, JavaScript variables, etc.). The concept should be the same as with a multi-tenant client, but in the browser the user has free access to debug your web page; see the sketch after this list.
Concurrency and multithreading
In the browser it is easy to open multiple tabs, so concurrent operations are very likely to occur. Your client-side code must be able to handle this concurrency. For the server side, you can still use your WCF to handle concurrency.
My 2 cents.
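To illustrate the security point above: anything the page can read, the user can read too, straight from the dev-tools console (illustrative snippet; the field names and stored keys are made up):

// Anti-pattern sketch. Suppose the page stashes a secret client-side:
//   <input type="hidden" id="pwd" value="hunter2">
console.log(document.getElementById('pwd').value); // "hunter2", trivially read
console.log(sessionStorage.getItem('authToken'));  // likewise exposed
// Cookies without the HttpOnly flag are equally visible:
console.log(document.cookie);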
Obviously the web application has its own challenges. I hope this link can help you in some respects: http://msdn.microsoft.com/en-us/library/ee658099.aspx
Along with those, you need to focus on non-functional requirements like extensibility and scalability too.
Supposing I was developing a fairly graphically intensive application (C++ or C#, graphics API undecided) for which most of the usage will be by remote users over RDP (either terminal server sessions or remote access to a single-user machine). It's obvious that non-essential "eye-candy" effects and animations should be avoided. My questions are:
What should I be careful to do or avoid doing to make the most efficient use of the RDP protocol? (e.g., I have an idea that RDP can remote some graphics drawing primitives straight to the client... but is that only for GDI? Does using double-buffering break such remoting and force a bitmap mode? Does the client-side bitmap cache "just work", or does it only cache certain things like fonts and icons?)
Is there any sort of RDP protocol analyser available which will give some insight into what an RDP stream is actually transporting (in particular, bitmaps vs. drawing primitives)? (I can imagine adding some instrumentation to the rdesktop source to do this, but maybe something exists already.)
In my experience, I'd be careful when it comes to animations - especially fade-up/fade-down effects on controls, which can seriously kill performance over RDP.
Double-buffering might also cause some problems, however I personally haven't had to do too much in the way of workarounds for this - the article by Raymond Chen explains the possible pitfalls quite well.
Essentially, it's a good idea to check in code whether the application is running in a remote session (RDP, Citrix, etc.). Take a look at GetSystemMetrics(SM_REMOTESESSION); you can then decide at runtime whether to enable or disable certain features.
My view is that the optimization work already done in RDP covers 90% of the problem you're describing, so I would not worry about optimizing for RDP specifically. You've already removed the eye-candy, and since you know the application will be used via RDP, I suppose you'll avoid operations that involve continuously redrawing forms; I believe that should be enough.
Our application was never designed with RDP in mind. We had the same worries you have when a customer told us that all of its clients would use it via RDP (Citrix, in that specific case) from remote locations, but even though we didn't change a single line of code, the customer never called with slowness problems due to RDP.
Remember... Premature optimization is evil.