JHipster Registry /encrypt and /decrypt endpoints missing

I am using the latest JHipster Registry, updated just a couple of days ago. I am trying to set up the symmetric key encryption that is part of Spring Cloud Config itself https://cloud.spring.io/spring-cloud-config/spring-cloud-config.html (see Key Management). I have gotten it to work in plain Spring Boot by setting the key in bootstrap.properties.
Under JHipster, the developers' advice is that all endpoints are under /management/**, so I have tried /management/encrypt and plain /encrypt; both return a 404.
I have set the encrypt.key in many places to try and get this to work:
the ENCRYPT_KEY environment variable
in Git under application.yml
in bootstrap.yml within the registry app
However, it still does not activate the endpoints, or something else is wrong. If anyone has gotten this to work, please indicate whether it works for you and what settings you used.
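For reference, what I mean by setting encrypt.key in bootstrap.yml is simply this (the value is a placeholder):
encrypt:
  key: my-symmetric-key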

JHipster Registry sets a prefix so that the config server endpoints are served under /config; this property is set in its bootstrap.yml and bootstrap-prod.yml files.
Once you add the encrypt.key property (or ENCRYPT_KEY environment variable) and install the "Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files" according to the Spring Cloud Config docs, you can access the encrypt/decrypt endpoints at:
http://admin:password@registry:8761/config/encrypt
http://admin:password@registry:8761/config/decrypt
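For example, with the key in place, encrypting and decrypting a value is just a POST of the raw text (the host and credentials below are the same placeholders as in the URLs above; if the value contains characters that need URL encoding, use curl's --data-urlencode instead of -d):
curl -u admin:password http://registry:8761/config/encrypt -d mysecretvalue
curl -u admin:password http://registry:8761/config/decrypt -d <cipher text returned by /encrypt>
The cipher text returned by /encrypt can then be stored in your Git-backed configuration as a '{cipher}...' value.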

Related

How do I configure NodeRestriction plug-in on kubelet?

Let's start with some context:
I'm studying for the CKS and reading CIS_Kubernetes_Benchmark_v1.6.0.pdf, and there's a confusing section:
1.2.17 Ensure that the admission control plugin NodeRestriction is set (Automated)
...
Verify that the --enable-admission-plugins argument is set to a value that includes
NodeRestriction.
Remediation:
Follow the Kubernetes documentation and configure NodeRestriction plug-in on kubelets.
Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml on the master node and set the --enable-admission-plugins parameter to a value that includes NodeRestriction.
The part about checking whether /etc/kubernetes/manifests/kube-apiserver.yaml has an entry for - --enable-admission-plugins=NodeRestriction,... makes sense. The annoying part is
"Follow the Kubernetes documentation and configure NodeRestriction plug-in on kubelets."
That is extremely hard to google, and the official Kubernetes docs aren't clear about how to do it.
So now that the context is there the question is:
After setting - --enable-admission-plugins=NodeRestriction on the kube-apiserver, how do you verify that the NodeRestriction plug-in on the kubelet has been correctly configured?
To properly enable the NodeRestriction admission controller plugin, you actually need to update the Kubernetes configuration in three different places:
kube-apiserver: - --enable-admission-plugins=NodeRestriction,...
kube-apiserver: - --authorization-mode=Node,RBAC (You must have Node specified)
kubelet (on every node): /var/lib/kubelet/config.yaml should have authorization.mode: Webhook (other Kubernetes distributions may substitute /var/lib/kubelet/config.yaml with another method of configuring the kubelet, but I'm sure there'd be a matching setting)
When the kubelet's authorization.mode is set to Webhook, instead of its default of AlwaysAllow, it offloads authorization decisions to the Kubernetes API server. The Node authorization mode is a special-purpose authorization mode that specifically authorizes API requests made by kubelets.
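A quick way to check all three settings on a kubeadm-based cluster (the paths below are the kubeadm defaults mentioned above; other distributions may keep these files elsewhere, and you may need sudo):
grep -E 'enable-admission-plugins|authorization-mode' /etc/kubernetes/manifests/kube-apiserver.yaml
grep -A 1 '^authorization:' /var/lib/kubelet/config.yaml   # should show "mode: Webhook"
Run the first command on a control-plane node and the second on every node.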
(The giantswarm article below is a great read and does a good job explaining why you should implement this setting. I'll try to summarize it by saying it's generic hardening that helps prevent privilege escalation by a compromised workload or bad actor.)
Sources:
1.) Kubernetes Security Essentials (LFS260)
2.) Securing the Configuration of Kubernetes Cluster Components
3.) Using Node Authorization

How to secure ConnectionString and/or AppSettings in asp.net core (on-prem)

First off, I know we don't have ConnectionStrings and AppSettings per se in .NET Core, but in my case I want to encrypt a connection string and maybe some other application configuration stored in my appsettings.json (or another settings file).
I know this has been discussed a lot all over the internet, but no one seems to have a legit answer.
So, suggestions that have been thrown out there are:
Using EnvironmentConfigurationBuilder; however, that doesn't really solve the issue, since we have just moved our plain-text configuration from appsettings.json to environment variables.
Creating a custom ConfigurationProvider that encrypts and decrypts appsettings.json (or selected parts of it); however, that doesn't solve the issue either, since we need to store the key(s) for decryption somewhere accessible to our application, and even if we did store the key as a "hard-coded" string in our application, a "hacker" could simply decompile it.
Someone also mentioned that even if you do encrypt appsettings.json, a "hacker" could always just do a memory dump and find the decrypted version of the data. I'm no expert in that field, so I'm not sure how likely or how complicated such a thing would be.
Azure Key Vault has also been mentioned a few times; however, in my case, and in a lot of cases when working with authorities, this is not an option since cloud services are not allowed.
I might be overthinking this, since if an attacker has actually managed to get into our server, then we probably have bigger issues. But what is the way to deal with this? Do you simply not care and leave it all as plain text? Or should you take some sort of action and encrypt or obscure the secrets?
You don't need to encrypt connection strings from your config file because the best way is still to NOT store this information in your config files but as environment variables on your server.
In your appsettings.json file, just store your local development connection string. For other environments, on the server the app is deployed to, set an environment variable, using __ (double underscore) as the separator for each child node in your config file.
You can read how this works on this page
If you have a config file as follows:
{
  "ConnectionStrings": {
    "default": "Server=192.168.5.1; Database=DbContextFactorySample3; user id=database-user; password=Pa$$word;"
  }
}
On a Windows server you would set the value like this
set "ConnectionStrings__default=Server=the-production-database-server; Database=DbContextFactorySample2; Trusted_Connection=True;"
I don't know what your deployment flow and tools look like, but it's worth digging into how you can make use of this feature.
For example if you're deploying on Kubernetes you could use Helm to set your secret values.
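As a rough sketch of that idea (the Secret name and key below are made up; Helm or your CI pipeline would create the Secret), the Deployment would map the secret into the container as the same environment variable:
env:
  - name: ConnectionStrings__default
    valueFrom:
      secretKeyRef:
        name: myapp-secrets
        key: connection-string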
At my company on TFS we create a Release pipeline and make use of the variables section to set the secret values. These values will then be used when the code is deployed on Kubernetes.
Variables in Release pipelines in TFS can be hidden like passwords, so no developer can see the production values; only administrators can.

Set up clustered Traefik edge router on Azure Kubernetes with Let's Encrypt

I'm trying to set up Traefik with Let's Encrypt on Kubernetes in Azure. So far so good, and everything is almost working... this is the first time, so I'm hoping I'm just missing something to get everything working.
I have used a Deployment controller with 1 replica (later there will be more than one, as I'm going for a clustered setup).
The issue is with the Lets Encrypt certificate.
I'm getting this error:
Failed to read new account, ACME data conversion is not available : permissions 755 for acme/acme.json are too open, please use 600
This seems like a fair requirement, but how do I set this since I'm using the node's storage? I know this is not the best option, but I'm having a hard time finding a good guide to follow, so I need some guidance here.
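(One workaround I've seen suggested for file-based storage is to fix the permissions before Traefik starts, e.g. with an initContainer that chmods the mounted file; a rough sketch, with made-up volume and path names:
initContainers:
  - name: fix-acme-permissions
    image: busybox
    command: ["sh", "-c", "touch /acme/acme.json && chmod 600 /acme/acme.json"]
    volumeMounts:
      - name: acme-storage
        mountPath: /acme
Not sure if that is the right approach for a clustered setup, though.)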
Guides say to use a KV store such as etcd.
I have read:
https://docs.traefik.io/configuration/acme/
https://docs.traefik.io/user-guide/kubernetes/
It also says here: https://docs.traefik.io/configuration/acme/#as-a-key-value-store-entry
ACME certificates can be stored in a KV Store entry. This kind of storage is mandatory in cluster mode.
So I guess this is a requirement :-)
This all makes sense, so that every pod doesn't request the same certificate but can share it and be notified when a new certificate is requested...
This page shows the KV stores that are supported: https://docs.traefik.io/user-guide/kv-config/ - Kubernetes uses etcd, but I can't find any information on whether I can use that to store the certificate...?
So what are my options here? Do I need to install my own KV store to support Let's Encrypt certificates? Can I use an Azure storage disk?

NiFi: Configuring SSLContextService for GetHTTP or InvokeHTTP

I am trying to use WMATA's (the DC system) Metro API and use NiFi to pull in some live train position data. I have tried both GetHTTP and InvokeHTTP, but no luck. My confusion comes from two areas:
1) How to configure the processor itself?
2) How to configure the SSLContextService?
The Metro website gives a primary and secondary key, but I'm not sure how that maps to what the SSLContextService config asks for (Keystore Filename, etc.).
My GetHTTP config:
And my SSL config:
I get errors when I run the GetHTTP processor:
I hope my issue makes sense. Thanks
For the specific error message you have shown, the URL you specified has contentType={contentType}, which is invalid. If you wanted to reference a flow file attribute or variable, it would need to be ${contentType}. Otherwise, if you really want to literally pass {contentType}, then I think you would need to URL-encode the brackets first.
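For example, the two interpretations would look like this in the URL's query string ({ encodes to %7B and } encodes to %7D):
...?contentType=${contentType}      (NiFi expression language, resolved from a flow file attribute or variable)
...?contentType=%7BcontentType%7D   (pass the braces literally, URL-encoded)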
For your SSL Context Service, I believe in this case you want to set the truststore to the CA certs instead of the keystore. It is similar to how your browser has truststores and verifies server identities when you go to an HTTPS page. You would only specify the keystore if you needed the GetHTTP/InvokeHTTP processor to also provide an identity, so the other server could verify the identity of the processor.
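One common way to do that is to point a StandardSSLContextService at the CA truststore bundled with your JVM (the path below is an example and depends on your Java installation; changeit is only the default cacerts password):
Truststore Filename: /path/to/your/jdk/jre/lib/security/cacerts
Truststore Password: changeit
Truststore Type: JKS
Leave the Keystore properties empty, since the processor only needs to verify the server, not present its own certificate.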

Best practice for managing web service credentials for Node.JS?

We're planning a secure Node.JS server, which uses several third-party web services. Each requires credentials that will need to be configured by the operations team.
Clearly they could simply put them in plain text in a configuration file.
Microsoft .NET seems to offer a better option with DPAPI (Data Protection API) - see Credential storage best practices. Is there a way to make this available through IISNode? Or is there any other option to secure such credentials within a Node.js configuration?
There's an extensive discussion of several options here, including the two suggested by xShirase:
http://pmuellr.blogspot.co.uk/2014/09/keeping-secrets-secret.html
User-defined services solve the problem, but only for Cloud Foundry.
This blog http://encosia.com/using-nconf-and-azure-to-avoid-leaking-secrets-on-github/ points out that you can often set environment variables separately on servers, and suggests using nconf to read them together with separate config files.
I still wonder if there are any special options for IIS?
There are two ways to do it securely:
The first one is to use command-line parameters when you launch your app.
These parameters are then found in process.argv.
So node myapp.js username password would give you:
process.argv[0]=node
process.argv[1]=/.../myapp.js (absolute path)
process.argv[2]=username
process.argv[3]=password
The second is to set the credentials as environment variables. This is generally considered the best practice, as only you have access to these variables.
You would set the variables using the export command, then access them in process.env.
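A minimal sketch of the second approach (the variable name here is made up):
export API_PASSWORD='s3cret'   # set in the shell or in the service definition before starting the app
node myapp.js                  # inside the app, read it with process.env.API_PASSWORD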
I recently had to do the exact same thing for my external API credentials. This is what I did:
install the node-config module
create a folder and file called config/config.js and require the config module there
on my local box it reads the configuration from a local.json file
I have dummy values in local.json for the API key and shared secret
on my QA environment I export two variables, NODE_ENV="QA" and NODE_CONFIG_DIR="path to my configuration folder on the QA server"
the node-config module then reads its configuration from "path to your config folder/QA.json"
so I have the real API key and credentials in QA.json
here you can additionally encrypt these values and put them back in QA.json
in your app, get these config values, decrypt them, and use them in your REST calls
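A rough sketch of that layout (the keys and paths are made up; node-config picks the file based on NODE_ENV and NODE_CONFIG_DIR):
config/local.json (dummy values for local development):
{
  "api": {
    "key": "dummy-key",
    "sharedSecret": "dummy-secret"
  }
}
On the QA server, before starting the app:
export NODE_ENV=QA
export NODE_CONFIG_DIR=/opt/myapp/config   # QA.json in this folder holds the real (optionally encrypted) values
In the app the values are read with require('config').get('api.key'), regardless of which file they came from.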
This way your config can live in the same container as your Node code.
Refer to this for encryption and decryption:
http://lollyrock.com/articles/nodejs-encryption/
Hope this helps.
