Setup clustered Traefik Edge Router on Azure kubernetes with Lets Encrypt - azure

I'm trying to set up Traefik with Let's Encrypt on Kubernetes in Azure. So far so good, and everything is almost working ... this is my first time, so I'm hoping I'm just missing something to get everything working.
I have used a Deployment with 1 replica (later there will be more than one, since I'm going for a clustered setup).
The issue is with the Lets Encrypt certificate.
I'm getting this error:
Failed to read new account, ACME data conversion is not available : permissions 755 for acme/acme.json are too open, please use 600
This seems like a fair requirement, but how do I set this, since I'm using the node's storage? I know this is not the best option, but I'm having a hard time finding a good guide to follow ... so I need some guidance here.
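As an aside: with a single replica, the permission error itself can be worked around by fixing the file mode before Traefik starts. A hedged sketch of a Deployment fragment (all names are illustrative):

```yaml
# Deployment snippet (sketch only): an initContainer that ensures
# acme.json has mode 600 before the Traefik container starts.
initContainers:
  - name: fix-acme-perms
    image: busybox
    command: ["sh", "-c", "touch /acme/acme.json && chmod 600 /acme/acme.json"]
    volumeMounts:
      - name: acme-storage
        mountPath: /acme
```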
Guides say to use a KV store such as etcd.
I have read:
https://docs.traefik.io/configuration/acme/
https://docs.traefik.io/user-guide/kubernetes/
It also says here: https://docs.traefik.io/configuration/acme/#as-a-key-value-store-entry
ACME certificates can be stored in a KV Store entry. This kind of storage is mandatory in cluster mode.
So I guess this is a requirement :-)
This all makes sense, so that every pod doesn't request the same certificate but can share it and be notified when a new certificate is requested ...
This page shows the KV stores that are supported: https://docs.traefik.io/user-guide/kv-config/ - Kubernetes uses etcd, but I can't find any information on whether I can use that to store the certificate ... ?
So what are my options here? Do I need to install my own KV store to support Let's Encrypt certificates? Can I use an Azure storage disk?
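For reference, a hedged sketch of what a KV-backed ACME setup might look like in a Traefik 1.x traefik.toml (the endpoint and key names are illustrative, and an etcd instance reachable from the cluster is assumed):

```toml
# traefik.toml — sketch only; adjust to your cluster.
[etcd]
  endpoint = "etcd.kube-system:2379"
  useAPIV3 = true

[acme]
  email = "you@example.com"
  entryPoint = "https"
  onHostRule = true
  # With a KV backend, storage is a key in the store, not a file path,
  # which sidesteps the acme.json permission problem entirely.
  storage = "traefik/acme/account"
```

Note that Traefik 1.x expects the KV store to be seeded once via `traefik storeconfig` before the clustered replicas start. Also, the etcd that backs Kubernetes itself is generally not exposed to workloads (especially on a managed service like AKS), so running a dedicated etcd or Consul for Traefik is the usual approach.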

Related

How to secure ConnectionString and/or AppSettings in asp.net core (on-prem)

First off, I know we don't have ConnectionStrings and AppSettings per se in .NET Core, but in my case I want to encrypt a connection string and maybe some other application configuration stored in my appsettings.json (or another settings file).
I know this has been discussed a lot all over the internet, but no one seems to have a definitive answer.
Suggestions that have been thrown out there are:
Using EnvironmentConfigurationBuilder; however, that doesn't really solve the issue, since we have just moved our plain-text configuration from appsettings.json to the environment variables.
Creating a custom ConfigurationProvider that encrypts and decrypts the appsettings.json (or selected parts of it); however, that doesn't solve the issue either, since we need to store our key(s) for the decryption somewhere accessible by our application, and even if we did store the key as a "hard-coded" string in our application, a "hacker" could simply decompile it.
Someone also mentioned that even if you do encrypt the appsettings.json, a "hacker" could always just do a memory dump and find the decrypted version of the data. I'm no expert in that field, so I'm not sure how likely or how complicated such a thing would be.
Azure Key Vault has also been mentioned a few times; however, in my case, and in a lot of cases when working with authorities, this is not an option, since cloud services are not allowed.
I might be overthinking this, since if an attacker has actually managed to get into our server, then we might have bigger issues ... but what would be the way to deal with this? Do you simply not care and leave it all as plain text? Or should you take some sort of action and encrypt or obscure the secrets?
You don't need to encrypt connection strings in your config file, because the best practice is still NOT to store this information in your config files, but as environment variables on your server.
In your appsettings.json file, just store your local development connection string. For other environments, set an environment variable on the server the app is deployed to, using __ (double underscore) as the separator for each child node in your config file.
You can read how this works on this page
If you have a config file as follow
{
  "ConnectionStrings": {
    "default": "Server=192.168.5.1; Database=DbContextFactorySample3; user id=database-user; password=Pa$$word;"
  }
}
On a Windows server you would set the value like this
set "ConnectionStrings__default=Server=the-production-database-server; Database=DbContextFactorySample2; Trusted_Connection=True;"
I don't know what your deployment flow and tools look like, but it's worth digging into them to find out how you can make use of this feature.
For example, if you're deploying on Kubernetes, you could use Helm to set your secret values.
At my company, on TFS, we create a release pipeline and use the variables section to set the secret values. These values are then used when the code is deployed to Kubernetes.
Variables in release pipelines in TFS can be hidden, like passwords, so no developer can see the production values. Only administrators can.
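For the Kubernetes case mentioned above, a hedged sketch of wiring a Secret into that same ConnectionStrings__default variable (all names here are illustrative):

```yaml
# Sketch only: a Secret holding the production connection string.
apiVersion: v1
kind: Secret
metadata:
  name: app-secrets
type: Opaque
stringData:
  connection-string: "Server=prod-db; Database=AppDb; Trusted_Connection=True;"
---
# In the Deployment's container spec, map it to the env var
# that the .NET Core configuration system will pick up:
# env:
#   - name: ConnectionStrings__default
#     valueFrom:
#       secretKeyRef:
#         name: app-secrets
#         key: connection-string
```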

Secure Elasticsearch installation retrospectively

I have an Elasticsearch installation (v7.3.2). Is it possible to secure this retrospectively? This link states that a password can only be set "during the initial configuration of the Elasticsearch". Basically, I require consumers of the RESTful API to provide a password (?) going forward.
The elastic bootstrap password is used to initialize the internal/reserved users used by the components or features of the Elastic Stack (Kibana, Logstash, Beats, monitoring, ...).
If you want to secure the API, you need to create users/roles for your scenario on top.
Please use TLS in your cluster when handling passwords, and don't expose the cluster directly, for security reasons.
Here is all the information regarding securing a cluster, including some tutorials: https://www.elastic.co/guide/en/elasticsearch/reference/7.3/secure-cluster.html
EDIT: Added links as requested. Feel free to raise a new question here at SO if you're facing serious problems!
Here you can find a complete guide to installing and securing Elasticsearch.
Basically, the bootstrap password is used initially to set up the built-in Elasticsearch users (like "elastic", "kibana"). Once this is done, you won't be able to access Elasticsearch anonymously, but only as one of the built-in users, e.g. "elastic".
Then you can use the "elastic" user to create additional users (with their own passwords) and roles (e.g. to access specific indices only in read-only mode).
As @ibexit wrote, it's highly recommended to secure your cluster and not expose it directly (use a proxy server secured with SSL).
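Security can indeed be turned on after the initial install. A hedged sketch of the settings involved (paths are illustrative, and a restart of each node is assumed):

```yaml
# elasticsearch.yml — sketch for enabling security on an existing 7.x cluster.
xpack.security.enabled: true
# Transport TLS is required once security is enabled on a multi-node cluster:
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.keystore.path: certs/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: certs/elastic-certificates.p12
```

After restarting with these settings, the built-in user passwords are seeded once with `bin/elasticsearch-setup-passwords interactive`; from then on, every REST call must authenticate.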

NiFi: Configuring SSLContextService for GetHTTP or InvokeHTTP

I am trying to use WMATA's (the DC system) Metro API and use NiFi to pull in some live train position data. I have tried both GetHTTP and InvokeHTTP, but with no luck. My confusion comes from two areas:
1) How to configure the processor itself?
2) Configuring the SSLContextService?
The Metro website gives a primary and a secondary key, but I'm not sure how to use that information when the SSLContextService config asks for a keystore filename, etc.
My GetHTTP config:
And my SSL config:
I get errors when I run the GetHTTP processor:
I hope my issue makes sense. Thanks
For the specific error message you have shown, the URL you specified has contentType={contentType}, which is invalid. If you wanted to reference a flow file attribute or variable, it would need to be ${contentType}. Otherwise, if you really want to pass {contentType} literally, I think you would need to URL-encode the brackets first.
For your SSLContextService, I believe in this case you want to set the truststore to the CA certs instead of the keystore. It is similar to how your browser has truststores and verifies server identities when you go to an HTTPS page. You would only specify a keystore if you needed the GetHTTP/InvokeHTTP processor to also provide an identity, so the other server could verify the identity of the processor.

Working with multiple AWS keys in Hadoop environment

What's the workaround for having multiple AWS keys in a Hadoop environment? My Hadoop jobs will require access to two different S3 buckets (two different keys). I tried the "credential" provider, but it looks pretty limited: it stores all keys in lower case, so as a result I cannot use "s3a" for one job and "s3n" for the other. For example, for s3a it looks for:
fs.s3a.access.key
fs.s3a.secret.key
And for s3n:
fs.s3n.awsAccessKeyId
fs.s3n.awsSecretAccessKey
But if I create a provider with "fs.s3n.awsAccessKeyId", it is stored as "fs.s3n.awsaccesskeyid"; as a result, at runtime it fails to load the expected key.
As a workaround, I tried to generate two different credential providers and pass as:
-Dhadoop.security.credential.provider.path=key1,key2
But it didn't work together, as both of the providers have the fs.s3a.access.key & fs.s3a.secret.key pair.
I don't want to pass access and secret key using -D option as it's visible. Is there any better way to handle this scenario?
If you upgrade to Hadoop 2.8, you can use per-bucket configuration to address this problem. Everything in fs.s3a.bucket.$BUCKETNAME is patched into the config for the FS instance for that bucket, overriding any other configs:
fs.s3a.bucket.engineering.access.key=AAID..
fs.s3a.bucket.logs.access.key=AB14...
We use this a lot for talking to buckets in different regions, with different encryption, and other things. It works well, so far. Though I would say that.
Special exception: if you encrypt credential secrets in JCECKS files. The docs cover this.
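The per-bucket settings above can be sketched as a core-site.xml fragment (bucket names and key values are illustrative):

```xml
<!-- core-site.xml sketch: per-bucket credentials (Hadoop 2.8+).
     Each job picks up the credentials for whichever bucket it opens. -->
<property>
  <name>fs.s3a.bucket.engineering.access.key</name>
  <value>AKIA...ENG</value>
</property>
<property>
  <name>fs.s3a.bucket.engineering.secret.key</name>
  <value>engineering-secret-here</value>
</property>
<property>
  <name>fs.s3a.bucket.logs.access.key</name>
  <value>AKIA...LOGS</value>
</property>
<property>
  <name>fs.s3a.bucket.logs.secret.key</name>
  <value>logs-secret-here</value>
</property>
```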

How to verify an application is the application it says it is?

Here's the situation: we have a common library which can retrieve database connection details from a central configuration store that we have setup. Each application uses this library when working with a database.
Basically, it will call a stored procedure and say "I am {xyz} application, I need to connect to ..." and it will return the connection details for that application's primary database (server, instance, database, user, and password).
How would one go about locking that down so that only application {xyz} can retrieve the passwords for {xyz}'s databases? (There is a list of database details for each application ... I just need to secure the passwords.)
The usual way is to have a different config store per app and give each app a different user/password to connect to the config store.
That doesn't prevent anyone from changing the app and replacing the user/password for app X with the values from app Y, but it's a bit more secure, especially when you compile this data in instead of supplying it via a config file.
If you want to be really secure, you must first create a secure connection to the store (so you need a DB driver that supports this). This connection must be created using a secure key that is unique per application and that can be verified (so no one can just copy keys around). You will also need to secure the executable with hashes (the app will calculate its own hash somehow and send that to the server, which will have a list of valid hashes for each app).
All in all, it's not something trivial that you can just turn on with an obscure option. You will need to learn a lot about security and secure data exchange first. You'll need a way to safely install your app in an insecure place, verify its integrity, protect the code against debuggers that can be attached at runtime and against it running in a virtual machine, etc.
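The hash-verification step described above can be sketched roughly as follows (all names are illustrative, and a temporary file stands in for the application binary):

```python
# Sketch: the client computes a digest of its own executable and the server
# compares it against a list of known-good hashes per application.
import hashlib
import os
import tempfile

def file_digest(path: str) -> str:
    """Return the SHA-256 hex digest of the file at `path`."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Stand-in for the application binary; in reality the client would hash
# its own executable on disk.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"pretend this is the application binary")
    app_path = tmp.name

# Server side: known-good digests, registered at deploy time.
KNOWN_GOOD = {"reporting-app": file_digest(app_path)}

def verify(app_name: str, digest: str) -> bool:
    """Check a client-supplied digest against the registered one."""
    return KNOWN_GOOD.get(app_name) == digest

# Client side: compute the digest and send it with the credentials request.
client_digest = file_digest(app_path)
os.unlink(app_path)
```

As the answer notes, this only raises the bar: an attacker who controls the machine can lie about the hash, which is why the surrounding transport and key handling matter just as much.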
Off the top of my head, try PKI.
Are you trying to protect yourself from malicious programs, and is this a central database that these applications are connecting to? If so, you should probably consider a middle layer between your database and applications.
I'm not sure whether this applies to your case, depending on what your answers to the above would be, but from the comments it sounds like you have a case similar to what this question is about:
Securing your Data Layer in a C# Application
The simplest/most straightforward way would be to store the passwords in encrypted form (storing passwords in plain text is just plain bad anyhow, as recently demonstrated over at PerlMonks) and make each application responsible for doing its own password encryption/decryption. It would then not matter whether an app retrieved another app's passwords, as it would still be unable to decrypt them.
One possibility is to keep the passwords in the database in encrypted form and convey the encryption key to the allowed application(s) over a secure connection. Then, only the application with the encryption key can actually get the passwords, and not the others.
