PhoneGap Cordova HIPAA compliance - security

Given that a Cordova app can be plugged in and inspected, are apps inherently less secure than natively compiled code? Or do just the same rules apply regarding what's kept in it, as with a normal UIWebView?

After some further research:
In current versions of Cordova, compiling an app with a distribution license prevents web inspection.
Your IPA file can still be browsed, however, so your source code should not contain any sensitive information. Don't save personal info to the app's sandbox (documents://, localStorage, the web directory), since any encryption method would be easily discovered and reproduced. Save all sensitive information to a password-protected API.
You could also use a custom Cordova plugin to get/set sensitive information. Best case would be to use a custom plugin to get/set information from a secure API server (hiding your API parameters).
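For illustration, here is a minimal sketch of what the JavaScript side of such a call could look like. It assumes a hypothetical custom plugin registered under the service name SecureStore; the native side would talk to the Keychain and/or your secure API, so nothing sensitive ever sits in plain JS:

// Hypothetical plugin: "SecureStore" and its "set"/"get" actions are assumptions,
// not a published plugin. cordova.exec(success, error, service, action, args)
// hands the work off to native code.
function saveToken(token) {
  return new Promise(function (resolve, reject) {
    cordova.exec(resolve, reject, 'SecureStore', 'set', ['authToken', token]);
  });
}

function loadToken() {
  return new Promise(function (resolve, reject) {
    cordova.exec(resolve, reject, 'SecureStore', 'get', ['authToken']);
  });
}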
Also, treat any HTML or JS that holds sensitive values as toxic. Delete/remove it as quickly as possible (including the jQuery cache). Make a special effort to remove any and all sensitive info from the DOM when the app moves to the background.
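A minimal sketch of that cleanup using Cordova's pause event (the ".sensitive" selector is just a placeholder for whatever elements hold sensitive values in your app):

// Wipe sensitive values from the DOM as soon as the app is backgrounded.
document.addEventListener('deviceready', function () {
  document.addEventListener('pause', function () {
    var nodes = document.querySelectorAll('.sensitive');
    for (var i = 0; i < nodes.length; i++) {
      nodes[i].value = '';        // form fields
      nodes[i].textContent = '';  // rendered text
    }
  }, false);
}, false);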
TLDR; Built for production, the active state of your app can be considered as secure as anything in RAM, but you must save any and all sensitive information off the device or use a plugin to do the encryption/decryption.

Related

Node.JS webapp: Authentication, Create Account, Forgot Password and Change Password

I would like to develop a new web app in Node.js (using Express). I am relatively new to the Node.js world, so I assume there are frameworks that I am not familiar with.
Is there any framework (like Spring for Java) that manages authentication (and saves the developer the trouble)? Or does each developer have to write this code over and over again?
Login/Logout is not all. There are other flows:
registration (create account),
forgot-password (and then set new password),
locking/unlocking an account,
change password
and I think that covers all the flows.
I know that each application has its own UI, forms, maybe with its logo, but the flow itself is similar for most applications.
In addition, I know that it is not that hard to implement, but it could be great to have some kind of tool / framework / infrastructure which implements the flows.
Is there such a tool/framework that helps application developers by implementing these flows?
I've searched for this but could not find anything.
Thanks!
Long ago I developed authentication-flows for Java over Spring, and recently I wrote authentication-flows-js.
It is a module that covers most of these flows - authentication, registration, forgot-password, change password, etc. - and it is secure enough that applications can use it without fear of it being easily hacked.
It is for Node.js applications (written in TypeScript) that use Express. It is open source (on GitHub). A release version is on npm, so you can use it as a dependency in your package.json.
In its README (and of course on the npm page) there are detailed explanations of everything, and if something is missing - please let me know. An article will be published soon (I will add a link as a comment).
You can find an example of a hosting application here.
NOTE: I have heard comments like "It's not so difficult to implement". True.
But you have to make sure you take care of all the cases. For example:
what happens if a user tries to create an account that already exists?
what happens if a user tries to create an account that already exists but is inactive?
what about the password policy (too long / too short / how many capital letters, etc.)?
what about sending the email with the activation link to the user? how do you create this link? should you encrypt it?
what about the controller that will receive the click on the link and activate the account?
and more...
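To make those cases concrete, here is a rough sketch of a hand-rolled registration endpoint in plain Express. This is NOT the authentication-flows-js API, just an illustration of the checks involved; findUser, saveInactiveUser and sendActivationMail are hypothetical helpers you would implement yourself, and the password policy is only an example:

const express = require('express');
const crypto = require('crypto');

const app = express();
app.use(express.json());

// Example policy: 8-64 chars, at least one capital letter and one digit.
const PASSWORD_POLICY = /^(?=.*[A-Z])(?=.*\d).{8,64}$/;

app.post('/register', async (req, res) => {
  const { email, password } = req.body;

  if (!PASSWORD_POLICY.test(password || '')) {
    return res.status(400).json({ error: 'password does not meet the policy' });
  }

  const existing = await findUser(email);                // your persistence layer
  if (existing && existing.active) {
    return res.status(409).json({ error: 'account already exists' });
  }
  // Account exists but is inactive: re-send the activation link instead of failing.

  const token = crypto.randomBytes(32).toString('hex');  // activation-link token
  await saveInactiveUser(email, password, token);        // hash the password before storing!
  await sendActivationMail(email, 'https://example.com/activate/' + token);

  res.status(201).json({ status: 'activation mail sent' });
});

// A GET /activate/:token controller would then look the token up, mark the
// account active, and invalidate the token so the link cannot be reused.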

Does Chrome Market accept extensions with minified and/or obfuscated source code?

I'm currently developing a Chrome extension and planning to publish it on the Chrome market. I'm aware of the benefits of the open-source community; however, I do not want to share the source code, and I am a bit worried about copyrights. Currently, the plan is to minify and obfuscate the source code before publishing. So the question is:
Does Chrome Market accept extensions with minified and/or obfuscated source code?
Thanks in advance! :)
Any existing answers above have been rendered obsolete by the terms change on January 1st, 2019. This change was announced on October 1st, 2018.
In summary:
Google allows minified code.
Google disallows obfuscated code.
The specific policy, available at https://developer.chrome.com/webstore/program_policies, is as follows:
Developers must not obfuscate code or conceal functionality of their extension. This also applies to any external code or resource fetched by the extension package. Minification is allowed, including the following forms:
Removal of whitespace, newlines, code comments, and block delimiters
Shortening of variable and function names
Collapsing files together
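To make the distinction concrete, here is a toy illustration (my own, not taken from the policy) of three forms of the same function:

// Original
function getTotal(price, quantity) {
  return price * quantity;
}

// Minified (allowed): whitespace removed, names shortened - still readable
function getTotal(a,b){return a*b}

// Obfuscated (disallowed): the same logic concealed behind encoded strings
// and indirection, typical of obfuscator output
var _0x1a = ['\x72\x65\x74\x75\x72\x6e'];
function _0xf(a, b) { return new Function('a', 'b', _0x1a[0] + ' a*b')(a, b); }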
2019 Update:
Google allows minified code, but not obfuscated code. See Brian's answer.
Original answer:
Yes, you can use obfuscation tools (like jscrambler) before publishing your extension. I don't know if that may delay the publishing time, but I know for sure that there are some published Chrome extensions with obfuscated/minified source code.
I, for instance, minify the code of my extension (LBTimer) with Google's Closure before publishing it.
It looks like they don't approve minified and obfuscated code. You can check this thread on the Chromium Google Group, from April '16.
https://groups.google.com/a/chromium.org/forum/#!topic/chromium-extensions/1Jsoo9BPWuM
No, you can't. This is the email I received from the Google Chrome team:
All of the files and code are included in the item's package.
All code inside the package is human readable (no obfuscated or minified code).
Avoid requesting or executing remotely hosted code (including by referencing remote javascript files or executing code obtained by XHR requests).
You can get a more specific answer if you contact the Google Chrome team.
Update with own experience:
I wasn't able to submit a build obfuscated with javascript-obfuscator (more specifically, the gulp version in my case). They were complaining that "your code is suspicious", so I guess something triggered an alert in their system.
However, uglify worked for that - I still had to figure out a way to rename all the prototype functions, as uglify doesn't seem to do that (or at least I wasn't able to find a way to do it).
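For what it's worth, newer versions of uglify-js do expose a property-mangling option that can rename (prototype) method names; a hedged sketch, to be verified against the version you actually install, since mangling properties can break code that looks them up by string:

const UglifyJS = require('uglify-js');
const fs = require('fs');

const result = UglifyJS.minify(fs.readFileSync('content.js', 'utf8'), {
  mangle: {
    // Only mangle property names starting with "_" to stay on the safe side.
    properties: { regex: /^_/ }
  }
});
if (result.error) throw result.error;
fs.writeFileSync('content.min.js', result.code);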
Original answer:
To sum up, it seems like chrome extensions are allowed to be minified and obfuscated.
For more details, keep reading.
First of all, there are two different terms - Chrome extension and Chrome app - and different rules apply based on that. Chrome apps have stricter requirements, and it seems like mcastilloy2k's answer applies to Chrome apps (at least it looks like it, based on the available policies for both).
And regarding Google's answer below:
Avoid requesting or executing remotely hosted code (including by referencing remote javascript files or executing code obtained by XHR requests).
If that applies to Chrome extensions and not just Chrome apps, it seems strange, because the extension FAQ from Google explicitly states that extensions are allowed to make external requests and call remote APIs, i.e. 'remotely hosted code':
Capabilities
Can extensions make cross-domain Ajax requests?
Yes. Extensions can make cross-domain requests. See this page for more information.
Can extensions use 3rd party web services?
Yes. Extensions are capable of making cross-domain Ajax requests, so they can call remote APIs directly. APIs that provide data in JSON format are particularly easy to use.
Can extensions use OAuth?
Yes, there are extensions that use OAuth to access remote data APIs. Most developers find it convenient to use a JavaScript OAuth library in order to simplify the process of signing OAuth requests.
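As a small, hedged example of what that looks like in practice (MV2-era manifest; api.example.com is a placeholder), assuming the manifest declares the host permission "permissions": ["https://api.example.com/*"]:

// background.js - with the host permission above, the extension can call a
// remote API directly:
fetch('https://api.example.com/v1/data')
  .then(function (response) { return response.json(); })
  .then(function (data) { console.log('remote API data:', data); })
  .catch(function (err) { console.error(err); });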
Another discussion in this Google Groups thread shows that rejection might not be connected with obfuscation at all:
Eventually, these are the things I needed to do to get my extension passed (but I keep my fingers crossed in case some other validation test still has to be performed):
I created a privacy policy and added a link to it on the Google Chrome developer dashboard.
I explained in more detail what my extension is doing. It seems that Google needs this to have a better understanding of the extension.
In the description I explicitly stated how the extension handles personal or sensitive user data.
Eventually that was enough to get the extension past the checks, even with minified & obfuscated code (but remember, I keep my fingers crossed).
Moreover, one can always go and check existing extensions out there, like Grammarly for example, which has obfuscated code (to some extent at least) and which uses an external API.

Yesod - the best way to create users on the web site?

I'm trying to develop a site where users will be registered directly on it, as opposed to being authenticated by Google mail etc. Besides the usual username/password I need to collect more data from the user - name, address, etc. What would be the quickest way to add the desired functionality? Short of writing my own Auth plugin I see two options:
Create my own registration form (which I kinda need to do anyway) and use HashDB for storing the passwords and for later authentication. However, yesod.auth.hashdb seems to be gone from the latest version (why?) and is only available separately here: https://github.com/ollieh/yesod-auth-bcrypt/ . Is something wrong with it? Security flaws?
Use http://hackage.haskell.org/package/yesod-auth-account - it looks much closer to what I need, because it already provides a registration page, but it doesn't seem to be supported by the latest Yesod 1.2.5, and it is not clear how to integrate my additional fields into the existing registration process.

Are NPAPI plugin security issues same as a downloadable app that connects to the internet

I want to build a cross-platform helper app that lets my users scan the desktop filesystem and find/upload the original, hi-res version of a JPG image they have previously uploaded. The scan may try to match by filename, EXIF data, or by comparing visual attributes using computer vision algorithms.
I read the following and get a little frightened:
Security considerations
Including an NPAPI plugin in your extension is dangerous because plugins have unrestricted access to the local machine. If your plugin contains a vulnerability, an attacker might be able to exploit that vulnerability to install malicious software on the user's machine. Instead, avoid including an NPAPI plugin whenever possible.
My other option is to build a downloadable/installable native desktop app that runs in the background. But this approach would also have unrestricted access to the local machine plus my servers via the internet.
Both approaches require the user to download/install native code - but the NPAPI plugin has the promise of easier access and a common framework. So are the security issues the same or is one approach generally preferred over another?
Essentially, both plugins and a regular app have the same kind of access - so installing either one requires quite a bit of trust. There is a difference in attack surface however: while an application is normally something that can only be started by the user, a plugin is accessible to every website (restricting access to selected websites is possible but this protection itself can fail). Also, if you want to package an NPAPI plugin in a Chrome extension you have to consider that Chrome Web Store requires manual review before accepting such extensions (and distributing extensions from your own site is pretty hopeless with the changes made in Chrome 21). But it can potentially provide a better user experience. All in all: not an easy choice to make.

Security considerations when creating a mobile app using PhoneGap

I'm a beginner at creating mobile apps with PhoneGap, and I have some doubts about the security aspects of creating a mobile app with PhoneGap.
I want to create an app that accesses a web service, e.g. a REST service created using Jersey. Now, am I correct in thinking that a hacker can easily see the security keys/authentication mechanism used to authenticate the client (the mobile app) with the server (where the REST API is used)?
In general, can a hacker easily access all data being sent by the mobile app (which was created using phonegap)?
Can a hacker disassemble a PhoneGap app to obtain the original code? Won't he just get the native code (e.g. Objective-C in the case of iOS)? Or can he decompile even that into the original PhoneGap code (i.e. HTML+JS)? How do I prevent my code from being decompiled? Is this scenario the same as for most other languages, i.e. hackers with powerful PCs can hack into just about any program/software? Is there some way to prevent this from happening?
Alright, first take a deep breath. You are probably not going to like some of my answers, but you'll be living with the same issues that we all are.
The best thing to do in this case is to use something like the KeyChain plugin to retrieve your security keys from the native side.
You can take PhoneGap out of the question, because this applies to any situation where you send unencrypted data between a client and a server. Anyone can easily listen in using a number of tools, including Wireshark or Ethereal. If you need to communicate with a server, it should be done over an encrypted (HTTPS/SSL) connection.
First I think you are under the mistaken impression that PhoneGap compiles your HTML/JS code into Obj-C. It does not. If the user uncompresses your app they will be able to read your HTML/JS. Also, they'll be able to decompile your Obj-C code as well. This does not take a powerful PC or even an experienced hacker. Pretty much anyone can do it.
My advice to you is not to worry about it. Put your time into creating a truly great app. The people who will pay for it will pay for it. The folks who decompile it would never buy the app no matter what. The more time you spend trying to combat the hackers, the less time you have to make your app greater. Also, most anti-hacking measures just make life harder for your actual users, so in fact they are counterproductive.
TLDR -
Consider that you are coding a website: all code (HTML and JS) will be visible to the user, just like with Ctrl+Shift+I in a browser.
Some points for ensuring maximum security:
1. If you're using a backend, then re-check everything coming from the app on the server side.
2. All attacks possible against websites (XSS, redirecting to malicious websites, cloning websites, etc.) are possible here too.
3. All the data sent to the app ultimately ends up in JS variables / resource files. Since all of those variables are accessible to a hacker, so is all the data sent to the app, EVEN IF YOU ARE USING THE MOST SECURE DATA TRANSMISSION MECHANISMS.
4. As Simon correctly said in his answer, PhoneGap/Cordova does not convert HTML/JS to native code; the HTML/JS is shipped as-is. Cordova also explicitly mentions this in their official statement:
Do not assume that your source code is secure
Since a Cordova application is built from HTML and JavaScript assets that get packaged in a native container, you should not consider your code to be secure. It is possible to reverse engineer a Cordova application.
5. Essentially, all the techniques websites use to keep their code from being cloned or easily understood apply here as well (mainly converting the JS into a hard-to-read format, i.e. obfuscating it - see the build-step sketch after this list).
6. Comparing native apps with Cordova/PhoneGap apps, I would say Cordova apps are easier targets only because of the lack of awareness among Cordova developers, who often do not take enough measures to secure their code, and the lack of a readily available (one-click) mechanism to obfuscate it, unlike ProGuard on Android.
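As a hedged sketch of such a build step, here is one way to run the javascript-obfuscator npm package over a Cordova project's JS before cordova build; the option names should be verified against the version you install:

// build-obfuscate.js - run on a copy of www/ as part of your build.
const JavaScriptObfuscator = require('javascript-obfuscator');
const fs = require('fs');

const source = fs.readFileSync('www/js/index.js', 'utf8');
const result = JavaScriptObfuscator.obfuscate(source, {
  compact: true,
  stringArray: true,               // move string literals into a rotated array
  stringArrayEncoding: ['base64']  // encode the array entries
});
fs.writeFileSync('www/js/index.js', result.getObfuscatedCode());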
Sample Cordova App Hacking (NOTE: PhoneGap also works in a similar way)
I'll show an example of how easy it is for a hacker to tamper with a Cordova app (where the developer has made no effort to obfuscate the code).
Basically, we start by unzipping the APK file (APKs can be unzipped like ZIP files).
The contents inside will look similar to this.
The source code for Cordova apps is in the /assets/www/ folder.
As you can see, all the contents, including any databases you have packed with the app, are visible (the last 2 rows show a db file).
Along with that, any other resources are also directly visible (text files / JS / HTML / audio / video / etc.).
All the views/controllers are visible and editable too.
Exploring the contents further, we find a PaymentController.js file.
Opening it, we can directly see the JS code and its comments.
Here we note that after the user performs a payment, successCallback is called if it succeeds; otherwise cancelCallback is called.
A hacker can simply swap the two functions so that successCallback is called even when the payment is not successful.
In the absence of other checks, the hacker has successfully bypassed the payment.
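A hedged sketch of the kind of fix this calls for (the verify endpoint and names are hypothetical): the shipped JS can always be edited, so the real protection is that the server only delivers the goods after it has confirmed the transaction with the payment provider itself:

function onPaymentFinished(transactionId) {
  fetch('https://api.example.com/payments/verify', {   // server-side verification
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ transactionId: transactionId })
  })
    .then(function (res) { return res.json(); })
    .then(function (result) {
      // These callbacks now only drive the UI; the server has already
      // recorded the real outcome, so swapping them gains the hacker nothing.
      if (result.verified) { successCallback(); } else { cancelCallback(); }
    })
    .catch(function () { cancelCallback(); });
}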
