GWT - Ensure SQL password is never handed to the client - security

I have a GWT project where the server needs to talk to an SQL database. It needs a password to do that. That password needs to be stored somewhere. I can think of three locations to store that password:
1. Right there in the call to DriverManager.getConnection.
2. A final String field somewhere.
3. A .properties file.
With cases 1 and 2, I can imagine a scenario where the source code is translated to JavaScript and sent to the client.
That would never happen intentionally, since it only makes sense for the server to talk to the database and not the client, but it could happen accidentally. In case 1 GWT would probably complain that it can't deal with JDBC, but in case 2 the field might be in some Constants class that compiles just fine.
I don't have enough experience with GWT to know how .properties files are handled. For example, files in the src\foo\server directory might not be included in the JavaScript that gets handed to the client, but someone might come along later and accidentally move the file somewhere else where it is included.
So how can I ensure that the password is never accidentally sent to the client?
Note that I don't care that the password is stored in plain-text, either in code or in a config file.
Edit:
Clarification of my current situation:
My TestModule.gwt.xml only contains <source path='client'/>. It does not contain <source path='shared'/> or <source path='server'/>!
I have shared configs and server-only configs (the server-only config would then contain the database password).
In the TestScreen (which is a Composite that shows a button on the page) I can use the ServerConfig class and the SharedConfig class from client code without any problems.
This is a problem since I (or someone else) might accidentally cause the class with the password to be translated to JS and sent to the client.

The database password should be stored in a properties file rather than somewhere in the code. Unlike the code, this properties file should not be committed to a version control system (like git or similar). It should also live outside the web folders.
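A minimal sketch of that approach, assuming a db.properties file kept outside the webapp and outside version control (the file location, system property name, and property keys here are just examples, not anything the question specifies):

```java
// Server-side only: load db.properties from a directory outside the web root,
// e.g. pointed to by a JVM system property such as -Dapp.config.dir=/etc/myapp
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Paths;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.Properties;

public class DatabaseConnections {

    public static Connection open() throws IOException, SQLException {
        String configDir = System.getProperty("app.config.dir", "/etc/myapp");
        Properties props = new Properties();
        try (InputStream in = new FileInputStream(
                Paths.get(configDir, "db.properties").toFile())) {
            props.load(in);
        }
        // The password never appears in source code, only in the external file.
        return DriverManager.getConnection(
                props.getProperty("db.url"),
                props.getProperty("db.user"),
                props.getProperty("db.password"));
    }
}
```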
Moreover, it would be a huge security risk to use a public static final String to store a password. Public members are visible to all other classes, static means no instance is necessary to use it, and final means it won't change. In your code you are storing a String constant that is available to all instances of the class and to any other object using the class. That is not a good starting point for thinking about security risks, and it is not directly related to GWT. It would be like storing a lot of money in a bank with no walls or doors and then asking how one could make it safe.
As long as the data stays on the server side, you're fine. By default, only the client and shared paths are specified for translatable code. If your server classes do not implement IsSerializable and are not explicitly specified as translatable code in your gwt.xml file, they won't be sent to the client.

You have more than one option here:
1. Use a separate classpath for the client and the server so that classes in the server are never referenced in the client. This can be done by following the recommended project structure where each of client/shared/server is a separate project; you can create such a structure using https://github.com/tbroyer/gwt-maven-archetypes. With this setup the build will most likely fail when anyone tries to depend on the server from the client, although there is still the possibility that someone will find a way to make it work.
2. Use the @GwtIncompatible annotation on the class that holds the password. The class will then never be transpiled to JS at all, and if it is referenced from the client side it will produce a compilation error during the GWT compilation phase (see the sketch after this list).
3. Never put the password in a source file at all; depend on an environment variable or some sort of password/key store that only exists on the server where you deploy the app. You can still set it locally for development.
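A rough sketch combining options 2 and 3 (the class and variable names are made up; note that since GWT 2.6 the compiler matches the annotation by its simple name, so a home-grown GwtIncompatible annotation like the one below should work if you don't want a Guava dependency):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Marker annotation; GWT only cares that the simple name is "GwtIncompatible".
@Retention(RetentionPolicy.CLASS)
@Target({ElementType.TYPE, ElementType.METHOD, ElementType.FIELD})
@interface GwtIncompatible {
    String value() default "";
}

// Server-only configuration: never compiled to JS, and any reference from
// client code fails during the GWT compilation phase.
@GwtIncompatible("server only - reads secrets from the deployment environment")
class ServerConfig {
    static String dbPassword() {
        String password = System.getenv("DB_PASSWORD");
        if (password == null) {
            throw new IllegalStateException("DB_PASSWORD is not set");
        }
        return password;
    }
}
```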

If the server types and members are still accessible, you have misconfigured the .gwt.xml file. As @Adam said, make sure that the server, client, and shared packages all live together in the same package as your .gwt.xml, and that no other .gwt.xml exists that might include them.
This is not a security feature, the way you are treating it, but a "how do I get the code I actually need to do my work" issue - Java bytecode doesn't have enough detail in it (generics are erased, and old versions of GWT actually used javadoc tags for extra detail) to generate the sources. Generally speaking, if you don't have sources, you can't pass that Java to GWT and expect it to be used in producing JS.
There are at least two edge case exceptions to this. There are probably more, but these spots of weirdness usually only matter when trying to understand why GWT can't generate JS from some Java, whereas you are trying to leverage these limitations as security features.
Generators and linkers run in the JVM, so naturally they can function with just plain JVM bytecode while the compiler is running. It would be a weird case where you would care about this, but consider something like a generator which was trying to extract some kind of reflection information and provide it in a static format for the browser.
GWT uses JDT to read the classes to be compiled, and it loads bytecode where possible to resolve some things - one of those things happens to be constants. A "static final" String or primitive can be read from bytecode in this way, without needing to go to the original .java sources.
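For instance, a constant like the following (a made-up example) is exactly the kind of value that can leak this way - javac inlines compile-time constants into the bytecode of any class that references them, and GWT can resolve them from bytecode alone:

```java
// Lives in a server package, yet the literal value is a compile-time constant:
// any client class referencing Constants.DB_PASSWORD gets the string baked in.
public class Constants {
    public static final String DB_PASSWORD = "correct-horse-battery-staple";
}
```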
If you have content in your bytecode that must not be considered in any way when generating JS, it belongs in a separate classpath - generally speaking, you should always separate your client code from your server code into separate projects with separate classpaths. There may exist at least one more project, to signify "shared" code which both client and server need to have access to.
And finally, it is generally considered a bad idea to put secrets of any kind in the project itself, whether in the code or in properties files; instead, make them part of the deployment or runtime environment.

Related

Can a running nodejs application cryptographically prove it is the same as published source code version?

Can a running nodejs program cryptographically prove that it is the same as a published source code version in a way that could not be tampered with?
Said another way, is there a way to ensure that the commands/code executed by a nodejs program are all and only the commands and code specified in a publicly disclosed repository?
The motivation for this question is the following: in an age of highly sophisticated hackers as well as pressure from government agencies for "backdoors" that allow them to snoop on private transactions and exchanges, can we ensure that an application has neither been hacked nor had a backdoor added?
As an example, consider an open source-based nodejs application like lesspass (lesspass/lesspass on github) which is used to manage passwords and available for use here (https://lesspass.com/#/).
Or an alternative program for a similar purpose, Encryptr (SpiderOak/Encryptr on github), with its downloadable version (https://spideroak.com/solutions/encryptr).
Is there a way to ensure that the versions available on their sites to download/use/install are running exactly the same code as is presented in the open source code?
Even if we have 100% faith in the integrity of the teams behind applications like these, how can we be sure they have not been coerced by anyone to alter the running/downloadable version of their program, to create a backdoor for example?
Thank you for your help with this important issue.
Sadly, no.
Simple as that.
The long version:
You are dealing with the outputs of a program, and you want to ensure that the output is generated by a specific version of one specific program.
Let's check a few things:
Can an attacker predict the outputs of said program?
If we are talking about open source programs, yes: an attacker can predict what you are expecting to see, and can even reproduce all the underlying crypto checks against the original source code, or against all internal states of said program.
Imagine running the program inside a virtual machine with full debugging support - firing events at certain points in the code, directly reading memory to extract cryptographic keys, and so on. The attacker does not even have to modify the program to be able to keep copies of everything you do in plaintext.
So... even if you could cryptographically make sure that the code itself was not tampered with, it would be worth nothing: the environment itself could be designed to do something harmful, and as Maarten Bodewes wrote: in the end you need to trust something.
One could argue that a TPM could solve this, but I'm afraid of the world that leads to: in the end you still have to trust something, like a manufacturer or, worse, a public office signing keys for TPMs... and as we know, those would never... you hear? ... never have intentions other than what's good for you. So basically you wouldn't win anything with a centralized TPM-based infrastructure.
You can do this cryptographically by having a runtime that checks signatures before running any code. Of course, you'd have to trust that runtime environment as well. Unless you have such an environment you're out of luck - that is, unless you do a full code review.
Furthermore, you can sign the build by placing a signature within the build system. The build system and developer access can in turn be audited. This is usually how secure development environments are built. But in the end you need to trust something.
If you're just afraid that a particular download is corrupted you can test against an official hash published at one or more trusted locations.

Mitigating security risks of javascript objects using require.js

I'm a little paranoid about storing sensitive information in global variables on the browser; who wouldn't be. Enter AMD! My question is, can we confidently use require.js to completely isolate variables, to help mitigate unwanted manipulation of variables from the console? Has anyone found a backdoor, or maybe a better way to put it is, has anyone witnessed any security issues with the require.js library?
Thanks!
No, you can't. Even if you don't have any global variables, the user can still go through your source code and add breakpoints; when the code reaches a breakpoint they can manipulate all the variables that are accessible in the current scope.
Take a look at this gamedev question, which has some advice on how to make it harder (but not impossible) for users to cheat your code.
Yeah, the attacker can always view the source.
But if you size and shape the payload, minifying and modularizing parts/regions of the client and serving them in accordance with use-case narratives on demand, you effectively add a layer of security that exists due to the assumption of human play.
A bot cannot simply traverse directories on a server, but instead must (via JavaScript) navigate the application intelligently, only getting code at a uniquely specified point in the app. It must know when certain payloads are essential to the use case (say, offering up credit card info N screens into a process).
Moreover, client code can be obfuscated with respect to IP address or along continuous, periodic release cycles.

deploying node.js in compiled form

We are considering node.js for our next server side application. But we don't want our client to be able to look into our application's code. Can we deploy application written in node.js in compiled form? If yes, then how?
Maybe you could obfuscate all your code... I know this is not like compiling, but at least it will stop 99% of clients from looking at the code :D
Here is another topic: How can I obfuscate (protect) JavaScript?
Good luck!
But we don't want our client to be able to look into our application's code.
If you are shipping code to your client, they will be able to "look into your application's code". Technically, the process of "running your code" is "looking into your application's code".
Having a fully compiled version of your code can definitely feel "more safe", but they still have a copy of the code in some usable form. They can still reverse engineer pieces or do other things. This stuff really comes down to the license.
Here's a related answer. His quote is:
Write a license and get a lawyer to go after violators
Otherwise, you should host the stuff yourself and allow for public access.
Any form of obfuscation, minification, compilation is just going to be a speed bump on the way to "stealing your code". It's probably much better to simply have legal recourse.
I don't believe this is possible. I mean, technically I guess you could write everything as native C++ extensions, but that would defeat the purpose of using node.
As mentioned before, there is no true compilation in Node.js, because the node executable basically compiles JavaScript code on the fly.
A lot of developers use Google's Closure Compiler, which really just "minifies" -- removes comments, whitespace, etc. -- and "optimizes" -- converts JavaScript code into more efficient JavaScript. However, the resulting code is generally still parsable JavaScript code (albeit rather hard to read!). Check out this related stream for more info: Getting closure-compiler and Node.js to play nice
A couple of options that might be helpful:
1. Develop a custom module for "proprietary" business logic and host it on your secure servers.
2. Wrap "proprietary" business logic in a Java class or executable that is called as an external process from Node.js (see the sketch after this list).
3. Code "proprietary" business logic as compiled web services available on a separate application server that is called by Node.js.
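A very rough sketch of option 2, with made-up class and method names: a small Java program that Node could spawn as a child process (for example via child_process.execFile), keeping the sensitive logic out of any JavaScript that gets shipped:

```java
// Reads requests from stdin, applies the "proprietary" logic, writes results to stdout.
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class BusinessLogicCli {

    public static void main(String[] args) throws IOException {
        BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
        String line;
        while ((line = in.readLine()) != null) {
            System.out.println(process(line));
        }
    }

    // Placeholder for the logic you don't want to ship to clients.
    private static String process(String input) {
        return new StringBuilder(input).reverse().toString();
    }
}
```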
It's up to you to define what part of your application should be considered "proprietary", but as a general rule I would not classify the HTML and related JavaScript sent to the web browser as "proprietary". My advice is to be judicious here.
Lastly, I found the following stream with an interesting approach that might be helpful, but it is rather advanced and likely to be rather buggy: Secure distribution of NodeJS applications
Hope that helps...

Modular programming and node

UPDATE 1: I made a lot of progress on this one. I pretty much gave up (at least for now, but maybe long term) on the idea of allowing user-uploaded modules. However, I am developing a structure so that several modules can be defined and loaded. A module will be initialised, set its own routes, and have a "public" directory for Javascript to be served. The more I see it, the more I realise that I can (should) also move the calls that are now system-wide into a module called "system".
UPDATE 2: I have made HUGE progress on this. I am about to commit tons of code on GitHub which will allow people to do really, really good modular programming (with modules exposing both client and server side code) using Node and Express. Please stay tuned.
UPDATE 3: I rewrote this thing as a system to register modules and enable them to communicate via an event/hooks system. It's coming along extremely nicely. I have tons of code already good to go -- I am just porting it to the new system. Feel free to have a look at the project on GitHub: https://github.com/mercmobily/hotplate
UPDATE 4: This is good. It turns out that my idea about a module being client AND server is really working.
UPDATE 5: The module is getting closer to something usable. I implemented a new loader which will take into account what an init() function will invokeAll() -- and will make sure that modules providing that hook will be loaded first. This opens up hotplate to a whole new level.
UPDATE 6: Hotplate is now close to 12000 lines of code. By the time it's finished, sometime in February, I imagine it will be close to 20000 lines of code. It does a lot of stuff, and it all started here on StackOverflow! I need it to develop my own SaaS, so I really need to get it finished by February (so that I can sprint to July and finish the first version of BookingDojo). Thanks everybody!
I am writing something that will probably turn into a pretty big piece of software. The short story is that it's nodejs + Express + Mongodb/Mongoose + Dojo (client side).
NOTE: Questions in this text are marked as [Q1], [Q2], etc.
Coming from a Drupal background (and knowing how coooomplex it has evolved, something I would like to avoid), I am a bit of a module freak. At the moment, I've done the application's boilerplate (hotplate: https://github.com/mercmobily/hotplate ). It does all of the boring stuff (users, workspaces, password reminder, etc.) and it's missing quite a few pieces.
I would like to come up with a design that will allow modules in a similar fashion as Drupal (but possibly better). That is:
Modules can define new routes, and handle them
Modules are installed system-wide, and then each workspace can enable a set list of them
The initial architecture could be something along those lines:
A "modules" directory, where there is one directory per module
Each module has a directory for "public" files for the Javascript side of things
Each module would have public/startup.js which would be included in the app's javascript
Each module would have server/node.js which would be included on the fly by the server if/when needed
There would be one route defined, something like /app/:workspaceid/modules/MODULE_NAME/.* with a middleware that checks if that workspace has MODULE_NAME enabled -- and if it does, calls a module's function with the passed parameter
[Q1]: Does this sound vaguely sane?
Issues:
I want to make this dynamic. I would like modules to be required when needed on the spot. This should be easy enough to do, by requiring things on the fly.
server/node.js would have a function that gets called, but that function feels/looks an awful lot like a router itself
[Q2] Do you have any specific hints about this one?
These don't seem to be too much of a concern. However, the real question comes when you talk about security.
Privacy. This is a nasty one. At the moment, all the calls make the right queries to MongoDB, filtering by workspaceId. I would like to enforce this in some way, so that modules have no direct access to the database and each module cannot access data that belongs to other workspaces.
User-defined modules. I would love to give users the ability to upload their own modules (and maybe make them available to other users). But, this effectively means allowing people to upload code that will be executed by node itself! How would you go about this?
[Q3] How would you go about these privacy/security issues? Is there any way for example to run the user-uploaded code in a sort of node sandbox? What about access to file system etc.?
Thanks!
In the end, I answered this myself -- the hard way.
The answer: hotplate, https://github.com/mercmobily/hotplate
It does most of what I describe above. More importantly, with hotPlate (using hotPage and hotClientPages, available by default), you can write a module which
Defines some routes
Defines a "public" directory with the UI
Defines specific CSS and JS files that must be loaded when loading that module
Is able to add route-specific JSes if needed
Status:
I am accepting this answer as I am finished developing Hotplate's "core", which was the point of this answer. I still need to "do" things (for example, once I've written docs, I will make sure "hotplate" is the only directory in the module, without having an example server there). However, the foundation is there. In terms of "core", it's only really missing the "auth" side of the story (which will require a lot of thinking, since I want to make it so that it's db agnostic AND interfacing with passport). The Dojo widgets are a great bonus, although this framework can be used with anything (and in fact backbone-specific code would be sweeeeet).
What hotplate DOESN'T do:
What hotplate DOESN'T do is give users the ability to upload modules which will then be loaded into the application. This is extremely tricky. The client side wouldn't be so bad (the user could define Javascript to upload, and there could be a module to do that, no worries). The server side, however, is tricky at best. There are just too many things that can go wrong (the client might upload a blocking piece of code, or they could start reading the file system, or they would have access to the full database, and so on).
Solutions to these issues are possible, but none of them is easy (you can cage the user's node environment and have it run on a different port, for example), and some problems will remain. But there is always hope.

MEF: Component authentication

I am building a Windows (Service) application that, in short, consists of a "bootstrapper" and an "engine" (an object loaded by the bootstrapper, which transfers control to it, and then performs the actual tasks of the application). The bootstrapper is a very basic startup routine that has few features that are likely to change. But the engine itself could be subject to upgrades after installation, and I am implementing a mechanism so that it can upgrade itself - by contacting a "master server" and checking its version number against a "most current" version. If there is a newer version of the engine available, it will download it into a designated folder and call a method in the bootstrapper to "restart".
So, whenever the bootstrapper starts up, it uses MEF to "scan" the appropriate directories for implementations of IEngine, compares their bootstrapper compatibility numbers and picks the newest compatible engine version. Then it transfers control to the engine (which then, in turn, performs the update check etc). If there are no eligible IEngines - or MEF fails during composition - it falls back on a default, built-in implementation of IEngine.
This application will be running on a remote server (or several), and the whole rationale behind this is to keep manual application maintenance to a minimum (as in not having to uninstall/download new version/reinstall etc).
So, the problem: Since the bootstrapper effectively transfers program execution to a method on the IEngine object, a malicious IEngine implementation (or impersonator) that somehow found its way to the application's scanned folders could basically wreak total havoc on the server if it got loaded and was found to be the most eligible engine version.
I am looking for a mechanism to verify that the IEngine implementation is "authentic" - as in, issued by a proper authority. I've been playing around with some home-brewed "solutions" (having IEngine expose a Validate function that is passed a "challenge" and has to return a proper "response" - in various ways, like having the bootstrapper produce a random string that is encrypted and passed to the engine candidate, which then has to decrypt and modify the string, then hash it, encrypt the hash and return it to the bootstrapper, which performs a similar string modification on its own random string, hashes that, and compares that hash to the decrypted response (hash) from the candidate, etc.), but I'm sure there are features in .NET to perform this kind of verification? I just looked at strong naming, but it seems it's not the best fit for a system that will be dynamically loading yet-unthought-of DLLs.
Input will be greatly appreciated.
Assemblies can be digitally signed with a private key. The result is called a strong named assembly.
When a strong named assembly is loaded, .NET automatically checks whether its signature matches the embedded public key. So when a strong named assembly has been loaded, you have the guarantee that the author possesses the private key that corresponds to that public key.
You can get the public key by calling Assembly.GetName().GetPublicKey() and then compare it to the expected one, i.e. yours.
You can scan over the plugin assemblies, create an AssemblyCatalog for each one with the right public key (rejecting the others), finally aggregating them into an AggregateCatalog and building a CompositionContainer with it.
This is basically what Glenn Block also explained in this thread. (Best to ignore the blog post linked there by Bnaya; his interpretation of StrongNameIdentityPermission is not correct.)
Edit, with responses to the wall of comments:
To get that public key, I make the console application output the public key byte array to somewhere. I embed the byte array in my host application, and subsequently use that to compare against the public keys of plugin candidates. Would that be the way to do it?
Yes, but there is a simpler way to extract the public key. Look at the -Tp option of sn.exe.
Does this mechanism automatically prevent a malicious plugin assembly from exposing a correct, but "faked", public key? As in, is there some mechanism to disqualify any assembly that is signed, but has a mismatch between its exposed public key and its internal private key, from being loaded/run at all?
As far as I know, the check happens automatically. A strong named assembly cannot be loaded (even dynamically) if its signature is wrong. Otherwise the strong name would be useless. To test this, you can open your strong named assembly in a hex editor, change something (like a character in a const string embedded in the assembly) and verify that the assembly can no longer be loaded.
I guess what I was referring to was something akin to the type of hack/crack described here:
http://www.dotnetmonster.com/Uwe/Forum.aspx/dotnet-security/407/Signed-assemblies-easily-cracked
and here: Link
[...snip more comments...]
However, this can - apparently - be bypassed by simple tampering (as shown in the first link, and explained more here): grimes.demon.co.uk/workshops/fusionWSCrackOne.htm
The "attacks" you refer to fall into three categories:
removing the strong name altogether. This does not break the authentication, the assembly will no longer have a public key and so you will reject it.
disabling the strong name check, which requires full access to the machine. If this was done by an attacker, then it would mean that the attacker already owns your machine. Any security mechanism would be meaningless in such a context. What we are actually defending against is an attacker between the machine and the source of the assemblies.
a real exploit made possible by a bug in .NET 1.1 that has since been fixed
Conclusion: strong names are suitable for authentication (at least since .NET 2.0).
I've written a blog post with source code for a catalog which only loads assemblies with keys that you specify: How to control who can write extensions for your MEF application
