I'd like to transmit data from multiple sources to one target collectd server, but I'm not sure how to ensure that, at the same time:
data is encrypted, i.e. no need to trust the network
data is signed, i.e. the target can trust the sources
no source shares the credentials with other sources
However, in the network plugin I have so far found only the values sign or encrypt for SecurityLevel, not both. And it seems that I cannot have different credentials (Username + Password) for different sources. Maybe I just misunderstood collectd?
There are 2 questions.
1) Encryption vs. signing
SecurityLevel "sign" provides sender verification.
SecurityLevel "encrypt" provides both encryption and sender verification.
In private/public key schemes, signing and encryption are separate: you can sign without encrypting, or encrypt without signing. However, collectd uses a shared-secret scheme: both encryption and signing guarantee that the sender knows the shared secret, so the sender is always verified.
2) Multiple credentials
You can have different credentials for different sources. See the following sample configuration.
<Plugin "network">
<Listen "192.168.0.1">
SecurityLevel "Encrypt"
AuthFile "/etc/collectd/auth_file"
</Listen>
</Plugin>
where /etc/collectd/auth_file contains credentials in the following format:
user1: password1
user2: password2
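Each source then authenticates with its own entry from that file in its Server block. A minimal sketch, assuming a source sending as user1 (host and credentials are illustrative):
<Plugin "network">
  <Server "192.168.0.1">
    SecurityLevel "Encrypt"
    Username "user1"
    Password "password1"
  </Server>
</Plugin>
This way no source ever sees another source's password, while the server can verify and decrypt traffic from all of them.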
References: https://collectd.org/wiki/index.php?title=Networking_introduction
I am using the following link for 2-way SSL in JBoss. It works fine for me.
http://www.mastertheboss.com/jboss-server/jboss-security/complete-tutorial-for-configuring-ssl-https-on-wildfly
I am using the following command to generate a key pair, using the key password (keypass) "secret".
keytool -genkeypair -alias client -keyalg RSA -keysize 2048 -validity 365 -keystore client.keystore -dname "CN=client" -keypass secret -storepass secret
Likewise, I follow the steps in the link above and I am able to enable HTTPS.
While doing so, one of the entries created in standalone-full.xml is as follows:
<tls>
  <key-stores>
    <key-store name="demoKeyStore">
      <credential-reference clear-text="secret"/>
      <implementation type="JKS"/>
      <file path="server.keystore" relative-to="jboss.server.config.dir"/>
    </key-store>
  </key-stores>
  <key-managers>
    <key-manager name="demoKeyManager" key-store="demoKeyStore">
      <credential-reference clear-text="secret"/>
    </key-manager>
  </key-managers>
  <server-ssl-contexts>
    <server-ssl-context name="demoSSLContext" protocols="TLSv1.2" key-manager="demoKeyManager"/>
  </server-ssl-contexts>
</tls>
Here the clear-text value is secret, which was used during key generation. Since it is visible to anyone with access to the standalone-full.xml file, I want to protect it.
Question: How do I encrypt the clear-text attribute with the value "secret" in the XML file?
A few possible ways I could think of are storing it in a vault (I have not tried it yet) or encrypting the password using some other technique:
https://docs.rapidminer.com/9.0/server/administration/security/securing-passwords-in-jboss.html
JBoss AS 7.1 - datasource how to encrypt password
What is the best way to solve the above problem? Please advise.
After more investigation and research, I have narrowed it down to using a credential store. Please refer to the following link: Credential Store vs. Password Vault
Password Vault is primarily used in legacy configurations, whereas the credential store, introduced with the Elytron subsystem, allows for secure storage and usage of credentials.
Execute the following commands in the CLI.
Create a Credential Store
/subsystem=elytron/credential-store=my_store:add(location="cred_stores/my_store.jceks", relative-to=jboss.server.data.dir, credential-reference={clear-text=supersecretstorepassword},create=true)
Add a Credential to the Credential Store
/subsystem=elytron/credential-store=my_store:add-alias(alias=database-pw, secret-value="speci#l_db_pa$$_01")
List the Credentials in the Credential Store
/subsystem=elytron/credential-store=STORE_NAME:read-aliases()
Once the above steps are executed in the CLI, you need to update the <credential-reference/> tag.
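A minimal sketch of the updated reference, using the store and alias created above, so the clear-text attribute disappears from the configuration:
<credential-reference store="my_store" alias="database-pw"/>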
You can also find a working example here: http://www.mastertheboss.com/jboss-server/jboss-security/using-credential-stores-to-store-your-passwords-in-wildfly-11
The above is an example for a datasource, but it works similarly for encrypting the clear-text value for certificates.
I am using Kuzzle (2.6) as a backend to my app. I'd like to encrypt the data stored in Kuzzle by the users of the app, and keep the encryption keys separate from the database. The key-holding entity (keyStore for short) should give keys only to users that are truly registered in the database, without itself becoming able to access the user data.
So I'm trying to pass from the app, when the user is logged in, a <kuid> together with a corresponding <jwt> (obtained e.g. via kuzzle.auth.login('local', {username: <username>, password: <password>})) to the keyStore via HTTPS. The keyStore sends this information to the Kuzzle database, where a Kuzzle plugin can verify that the user exists. If Kuzzle confirms the identity of the user to the keyStore, the keyStore hands out a key so the user can encrypt/decrypt their data.
In short:
Is there any way I can let a plugin validate that a given <jwt> and a given <kuid> belong to the same user? Neither <username> nor <password> would be available to the plugin.
Kuzzle core developer here.
Right now we don't have a public API to get the user linked to an authentication token.
Still, you can use the auth:checkToken API action to verify the token's validity, and the jsonwebtoken package used by Kuzzle to retrieve the user's kuid from the token.
// Check that the token is valid (correctly signed and not expired)
const { valid } = await app.sdk.auth.checkToken(token);

if (valid) {
  // Kuzzle stores the user's kuid in the token's "_id" claim
  const kuid = require('jsonwebtoken').decode(token)._id;
}
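Putting it together for the question as asked (do a given <jwt> and <kuid> belong to the same user?), the decoded id can simply be compared with the claimed one. A minimal sketch; tokenMatchesUser is a hypothetical helper, not part of the Kuzzle API:
const jwt = require('jsonwebtoken');

// Hypothetical helper: true only if the token is valid and was
// issued for the claimed kuid.
async function tokenMatchesUser(app, token, claimedKuid) {
  const { valid } = await app.sdk.auth.checkToken(token);
  if (!valid) {
    return false;
  }
  return jwt.decode(token)._id === claimedKuid;
}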
Anyway, that's an interesting feature and we will discuss it in our next product workshop.
I will update this answer accordingly.
Ref: https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/#providers
According to the docs
Resources written as-is without encryption. When set as the first provider, the resource will be decrypted as new values are written.
The sentence "When set as the first provider, the resource will be decrypted as new values are written." sounds confusing. If resources are written as-is with no encryption into etcd, what does "decrypted as new values are written" mean?
And following that
By default, the identity provider is used to protect secrets in etcd, which provides no encryption.
What kind of security does the identity provider give if no encryption happens? And if encryption does happen, what kind of encryption is it?
As stated in the etcd documentation about security:
Does etcd encrypt data stored on disk drives?
No. etcd doesn't encrypt key/value data stored on disk drives. If a user needs to encrypt data stored on etcd, there are some options:
Let client applications encrypt and decrypt the data
Use a feature of underlying storage systems for encrypting stored data like dm-crypt
First part of the question:
By default, the identity provider is used to protect secrets in etcd, which provides no encryption.
It means that, by default, the Kubernetes API server uses the identity provider while storing secrets in etcd, and it doesn't provide any encryption.
Using an EncryptionConfiguration with only one provider, identity, gives you the same result as not using an EncryptionConfiguration at all (assuming you didn't already have any encrypted secrets).
All secret data will be stored in plain text in etcd.
Example:
providers:
- identity: {}
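For context, the providers list sits inside a full EncryptionConfiguration object, roughly like this (structure as in the Kubernetes docs linked above):
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
- resources:
  - secrets
  providers:
  - identity: {}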
Second part of your question:
Resources written as-is without encryption.
This is described and explained in the first part of the question
When set as the first provider, the resource will be decrypted as new values are written.
Take a look at this example:
providers:
- aescbc:
    keys:
    - name: key1
      secret: <BASE 64 ENCODED SECRET>
- identity: {}
What this configuration means for you:
The new provider introduced into your EncryptionConfiguration does not affect existing data.
All existing secrets in etcd (before this configuration has been applied) are still in plain text.
Starting with this configuration, all new secrets will be saved using aescbc encryption, and all new secrets in etcd will have the prefix k8s:enc:aescbc:v1:key1.
In this scenario you will have a mixture of encrypted and unencrypted data in etcd.
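One way to check which secrets are actually encrypted is to read them directly from etcd; encrypted entries start with the k8s:enc:aescbc:v1: prefix, while plain ones don't. The secret name below is illustrative, and the etcdctl connection flags (endpoints, certificates) are omitted:
$ ETCDCTL_API=3 etcdctl get /registry/secrets/default/my-secret | hexdump -C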
So why are we using those two providers?
The aescbc provider is used to write new secrets as encrypted data during write operations and to decrypt existing secrets during read operations.
The identity provider is still necessary to read all unencrypted secrets.
Now we switch the order of the providers in the EncryptionConfiguration:
providers:
- identity: {}
- aescbc:
    keys:
    - name: key1
      secret: <BASE 64 ENCODED SECRET>
In this scenario you will again have a mixture of encrypted and unencrypted data in etcd.
Starting with this configuration, all new secrets will be saved in plain text.
For all existing secrets in etcd with the prefix k8s:enc:aescbc:v1:key1, the aescbc provider configuration will be used to decrypt them.
When set as the first provider, the resource will be decrypted as new values are written
In order to move from a mixture of encrypted and unencrypted data to a scenario with only unencrypted data, you should perform a read/write operation on all secrets:
$ kubectl get secrets --all-namespaces -o json | kubectl replace -f -
"Why's it there if it offers no encryption, but the docs seem to talk about decryption and how it protects?"
It's necessary to have a provider of type identity if you have a mixture of encrypted and unencrypted data, or if you want to decrypt all existing secrets (stored in etcd) that were encrypted by another provider.
The following command reads all secrets and then updates them to apply server-side encryption. More details can be found in this paragraph:
$ kubectl get secrets --all-namespaces -o json | kubectl replace -f -
Depending on your EncryptionConfiguration, all secrets will be saved unencrypted if the first provider is identity, or encrypted if the first provider is of a different type.
In addition
EncryptionConfiguration is disabled by default. To use it, you have to add the --encryption-provider-config flag to your kube-apiserver configuration. identity does not encrypt any data; as per the Providers documentation, it has three N/A entries.
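For reference, enabling it means adding the flag to the kube-apiserver command line, e.g. in its static Pod manifest (the config path here is illustrative):
- --encryption-provider-config=/etc/kubernetes/enc/enc.yaml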
I am working on a project where we are going to be using different services in a microservice architecture, and we would like to also use some Firebase services. I am working on an auth server that is going to mint custom JWTs for use both in Firebase and in the other API projects.
We would like to use the Firebase Auth SDK to easily integrate with FB, Google, Twitter etc., but we need to enrich the user's token with more data. Therefore, my thought process is that I'd create a Node.js auth server that uses the Firebase Admin SDK to do this. The flow would be as follows:
User logs in with their favourite provider on the client
If login is successful, the user receives a JWT from Firebase. This is sent to the auth server for validation
If the auth server can validate the token using the admin SDK, create a new custom token enriched with more data, and return this new custom token to the client
Have the client re-authenticate with the new custom token, and use it for communication with both Firebase and our other API projects (which will mainly be in .NET Core)
Steps 1-3 work fine. The problem arises when trying to verify the custom token on the other services.
TL;DR: There are two questions in here:
When validating custom tokens issued using the Firebase Node.js Admin SDK, what should I use as the public key? A key extracted from Google's exposed JWKs, or a key extracted from the private key that is used to sign?
In case of the JWK approach, how should I construct the custom token with a kid header?
First, I am in doubt about the proper way to verify it. (Please excuse me, I'm not that experienced in creating OAuth flows.) The algorithm used is RS256, so I should be able to verify the token using a public key. As I see it, there are two ways to get this key:
Extract the public key from the private key and verify using this. I can do this and verify successfully on a test endpoint on my auth server; however, I feel this is the incorrect way to do it.
The other, and I think more correct, way is to use the values from the token to find the JWKs via Google's "/.well-known/openid-configuration" endpoint for my project, i.e.
https://securetoken.google.com/[PROJECT ID]/.well-known/openid-configuration
to retrieve the exponent and modulus for the correct kid (key ID) and create the public key from those.
The token generated from the Admin SDK by doing
admin.auth().createCustomToken(uid, additionalClaims).then(function (customToken) {
  // send customToken to the client
});
with some custom claims looks something like this:
headers:
{
  "alg": "RS256",
  "typ": "JWT"
}
payload:
{
  "claims": {
    "premiumAccount": true,
    "someRandomInnerObject": {
      "something": "somethingRandom"
    }
  },
  "uid": "<uid for the user>",
  "iat": 1488454663,
  "exp": 1488458263,
  "aud": "https://identitytoolkit.googleapis.com/google.identity.identitytoolkit.v1.IdentityToolkit",
  "iss": "firebase-adminsdk-le7ge@<PROJECT ID>.iam.gserviceaccount.com",
  "sub": "firebase-adminsdk-le7ge@<PROJECT ID>.iam.gserviceaccount.com"
}
I can't seem to get method 2 to work, though. One problem is that the generated token does not have a kid header, and so does not conform to the OpenID spec (AFAIK), which leads to one of two options:
Go with the first approach above. This leads to problems, though: if I for some reason need to revoke or reset the private key on the auth server, I need to deploy the changes on all the other services too, making the solution less dynamic and more error-prone.
Generate a similar token manually using one of the libs mentioned at jwt.io, and add the kid from the original Firebase ID token to its headers.
Problems with number 2:
What should I put as iss, aud and sub, then? The same values as the admin SDK does? If so, isn't that 'cheating', as they are no longer the issuer?
I've tried it (generating a similar copy of the token, but adding the kid of the original token), and I can't seem to verify the generated token using the PEM key created for that kid.
The way I do the latter is as follows (following a blog guide on the subject):
Go to https://www.googleapis.com/service_accounts/v1/jwk/securetoken@system.gserviceaccount.com and retrieve the modulus (n) and exponent (e) for the relevant kid
Generate the public key using a lib (rsa-pem-from-mod-exp)
Use the key to verify using the 'official' jwt lib
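A sketch of those three steps in Node.js, assuming the two libraries named above plus node-fetch are installed; verifyWithGoogleJwk is a hypothetical helper, not an official API:
const fetch = require('node-fetch');
const getPem = require('rsa-pem-from-mod-exp');
const jwt = require('jsonwebtoken');

async function verifyWithGoogleJwk(token) {
  // Read the kid header from the (not yet verified) token
  const { header } = jwt.decode(token, { complete: true });
  // Fetch Google's published JWKs and pick the matching key ID
  const res = await fetch('https://www.googleapis.com/service_accounts/v1/jwk/securetoken@system.gserviceaccount.com');
  const { keys } = await res.json();
  const key = keys.find((k) => k.kid === header.kid);
  // Build a PEM public key from the modulus and exponent,
  // then verify; jwt.verify throws if the signature is invalid
  const pem = getPem(key.n, key.e);
  return jwt.verify(token, pem, { algorithms: ['RS256'] });
}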
Those steps result in a public key like this:
-----BEGIN RSA PUBLIC KEY-----
MIIBCgKCAQEAxXpo7ChLMnv1QTovmm9DkAnYgINO1WFBWGAVRt93ajftPpVNcxMT
MAQI4Jf06OxFCQib94GyHxKDNOYiweVrHVYH9j/STF+xbQwiPF/8L7+haC2WXMl2
tkTgmslVewWuYwpfm4CoQFV29OVGWCqwEcbCaycWVddm1ykdryXzNTqfzCyrSZdZ
k0yoE0Q1GDcuUl/6tjH1gAfzN6c8wPvI2YDhc5gIHm04BcLVVMBXnC0hxgjbJbN4
zg2QafiUpICZzonOUbK6+rrIFGfHpcv8mWG1Awsu5qs33aFu1Qx/4LdMAuEsvX9f
EmFZCUS8+trilqJbcsd/AQ9eOZLAB0BdKwIDAQAB
-----END RSA PUBLIC KEY-----
Two things seem to be wrong. One is that this key is different from the one I can extract from the private key. The other is that the one I extract from the private key has these header lines instead:
-----BEGIN PUBLIC KEY-----
-----END PUBLIC KEY-----
with no 'RSA'. Does this matter? In any case, it doesn't verify.
Finally, did I misunderstand the OpenID flow completely? Are the JWKs generated from a private key that I need as well to verify my JWTs? Should I expose my own JWKs on my auth server for the other services to contact and use instead of Google's? I'm a bit confused as to what the Firebase Admin SDK does and doesn't do, I think :-)
I know this is a lot of questions, but I think they're all related.
Some resources I've relied on in my research (besides the official Admin SDK docs, of course):
jwt.io
Is it still possible to do server side verification of tokens in Firebase 3?
https://ncona.com/2015/02/consuming-a-google-id-token-from-a-server/
https://stackoverflow.com/a/42410233/1409779
https://andrewlock.net/a-look-behind-the-jwt-bearer-authentication-middleware-in-asp-net-core/
After re-authenticating the Firebase client SDK with the custom token, the client actually generates a new ID token with the claims from the custom token. This ID token is what you should use to verify requests made to your different microservices (documented here). So yes, your original ID token is discarded, but a new one is created in its place. And that ID token will be automatically refreshed every hour. So, you should be able to just call user.getToken() to get a valid ID token whenever you need it. That method handles all the caching on your behalf.
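In other words, each microservice should verify the (refreshed) ID token rather than the custom token itself. A minimal sketch of that verification with the Admin SDK (initialization omitted; idToken is whatever the client sent):
const admin = require('firebase-admin');

admin.auth().verifyIdToken(idToken)
  .then(function (decodedToken) {
    // decodedToken.uid identifies the user; the custom claims
    // minted earlier are carried over into the ID token
    const uid = decodedToken.uid;
  });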
As an Identity Provider, we send a SAML assertion to a Service Provider, and they validate our signature on the assertion using our certificate. A SAML assertion contains an optional field called X509Certificate, which is the certificate of the assertion issuer (our certificate). My question is: from a security perspective, is it better for the Service Provider to use this field in each assertion for validating the signature, or to use an external certificate file?
It's a very bad idea for them to trust only the public key value included in a signed message. Who's to say that someone couldn't just forge your EntityID and send them random SAMLResponses with some user data? The signature of the message would be valid, and they are using the key included in the message to validate the signature, right?
The benefits of offline key exchange are well known: your SP has securely stored your public key and will always use it to validate the signature of your messages unless you instruct them otherwise (key rollover/update). If you include your certificate in the message, the SP will first compare the two certificates to ensure they match, and then (assuming they match) use the copy it has stored previously. Otherwise the message is rejected.
However, if you want an SP to trust the public key you include in your messages, the SP must be able to ensure that it is YOUR certificate being used. There are some smart folks at Ping Identity who have thought about this SP DSig validation use case; you can find a description of how they do it in their "Anchored Certificate" model for DSig validation.
https://documentation.pingidentity.com/pingfederate/pf80/index.shtml#concept_digitalSigningPolicyCoordination.html
I don't really get the 11/19 answer. If you send your BinarySecurityToken (BST) in the assertion for signature validation, and the receiver has an entry in their trust store with this public certificate, you should be good. For this to work:
1) The receiver must require that the assertion is signed
2) The receiver must check the signature verifying certificate in the assertion against a trust store.
3) DO NOT just trust the DN/issuer of the signer instead of using a trust store; that can be faked in a signing certificate.
If these things are followed, you have verified that the message was signed by the holder of the private key and that the assertion has not changed in flight. You trust that holder, therefore you can proceed.
If the receiver doesn't require that an assertion is signed, anyone can send anything to the receiver.
If the receiver doesn't check a trust store for the verifying certificate, anyone can send a signed anything to the receiver.
The advantage of sending the BST in the message is that when your IdP certificate expires and you have to get a new one, the client only has to add your new certificate to their trust store instead of changing the configuration of their application.