I've been looking at how to create X509 certificates and I'm a bit confused. I understand the theory, and creating a single certificate is OK, but I don't have the operational know-how to create the system as a whole.
Here are the requirements:
1) There will be one master server. The SSL certificate for this will not be signed by any authority: it is the root. This certificate (or at least the means to verify it) will be distributed with the application.
2) There may be any number of secondary servers. Each will generate its own certificate and submit it to the master server.
3) The master server will sign secondary certificates with the root.
The use-case is that a client connects to a secondary server and must be able to verify that its certificate has been signed by the root.
N.B. The master server is identified by a DNS hostname. The secondary servers may be named or may be identified by IP address alone.
Four questions:
Can someone please show me the openssl commands to accomplish each of those three steps?
Which, if any, of the files generated by those steps should not be distributed?
After step 3, does the master have to return a modified certificate to the secondary?
Do the secondary certificates have to be distributed by the trusted master, or is it sufficient for the client to validate any certificate advertised by the secondary?
OpenSSL comes with a script useful for creating a basic CA: CA.pl. (You can of course configure it in more detail by altering the OpenSSL configuration file.)
What the secondary servers should generate are Certificate Signing Requests (CSRs), which the CA can process to issue certificates (after a validation process of your choice).
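For illustration, here is a minimal sketch of the three steps with plain openssl; the file names, subject names, key sizes, and lifetimes are all placeholders, and for secondaries identified only by IP address you would normally put the address in a subjectAltName extension rather than the CN:

# 1) On the master: create the root key and self-signed root certificate
openssl genrsa -out root.key 4096
openssl req -new -x509 -key root.key -out root.crt -days 3650 -subj "/CN=master.example.com"

# 2) On each secondary: create its key and a certificate signing request (CSR)
openssl genrsa -out secondary.key 2048
openssl req -new -key secondary.key -out secondary.csr -subj "/CN=secondary.example.com"

# 3) On the master: sign the CSR with the root key
openssl x509 -req -in secondary.csr -CA root.crt -CAkey root.key -CAcreateserial -out secondary.crt -days 825

The secondary.crt produced in step 3 ends up on the master, so it does need to be sent back to the secondary; only the .key files have to stay private.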
Regarding file distribution: all parties should keep their private keys private.
Which certificate do I have to add to my (Docker) Gitlab-Server so I can send mails?
Or do I have to make a self-signed certificate?
gitlab_rails['smtp_ca_file'] = '/path/to/your/cacert.pem'
I think you're misunderstanding the purpose of this setting. gitlab_rails['smtp_ca_file'] is used to help ensure that GitLab properly trusts your SMTP server (which is not provided by GitLab/Omnibus!). This is necessary, for example, when you have an SMTP server that uses a (potentially self-signed) certificate that is not otherwise trusted under a well-known/trusted CA.
In this context, it would not make sense to generate a certificate -- the certificate should be provided to you by another entity, such as your company mail server or a company Certificate Authority (CA). See this question for details on how you can find the certificate you need using the openssl command (if you can't retrieve it directly from your server configuration or otherwise get it from some other source).
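For example, something along these lines will print the certificate chain your SMTP server presents (the hostname and ports are placeholders; the second form is for STARTTLS ports), and the CA certificate can be copied from the output into the cacert.pem file referenced above:

openssl s_client -connect smtp.example.com:465 -showcerts
openssl s_client -connect smtp.example.com:587 -starttls smtp -showcerts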
If your mail server uses a certificate issued by a well-known/trusted CA, this setting is not needed.
I'm trying to secure my k8s cluster, and I'm looking at the client certificate authentication and authorization support in k8s. My requirement is that I want to be able to uniquely identify myself (e.g. as a client) to the k8s apiserver, but nothing I've read so far about client authentication seems to solve this.
My understanding is that the server will just ensure that the client certificate provided is in fact signed by the certificate authority. What if a hacker gets another certificate signed by the same certificate authority (which isn't hard to do in my org) and uses that to talk to my server? It appears that popular orchestrations like Swarm and k8s support this option and touted it as most secure so there must be a reason for doing this. Can someone shed some light?
The server does not only verify that the certificate was signed by the CA. The client certificate also contains the Common Name (CN), which can be used with a simple ABAC authorization policy to limit access to specific users or groups.
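For example, with the apiserver running with --authorization-mode=ABAC and --authorization-policy-file, a policy line like the following (one JSON object per line; the user name "alice" here is a hypothetical CN taken from the client certificate) grants that identity read-only access to pods in the dev namespace, and requests not matched by any policy line are denied:

{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "alice", "namespace": "dev", "resource": "pods", "readonly": true}}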
Also, it shouldn't be easy to get a signed certificate. IMO access to the root CA should be very limited, and it should be traceable who is allowed to sign certs and when each signing happened. Ideally the root CA should live on an offline host.
Besides that, it sounds like the CA is also used for other purposes. If so, you could consider creating a separate root certificate for client authentication. You can use a different CA for the server certificate by setting different CA files for --client-ca-file and --tls-ca-file on the apiserver. That way you can restrict who is able to create client certificates and still verify the server identity with the CA of your organization (which might already be distributed on all org computers).
Other Authentication Methods
As mentioned, Kubernetes also has some other authentication methods. The static token file and the static password file have the disadvantage that the secrets have to be stored in plain text on disk, and the apiserver has to be restarted on every change.
Service account tokens are intended to be used by applications that run in the cluster.
OpenID might be a secure alternative to client certificates, but AFAIK it is considerably harder to set up, especially if there is no OpenID server yet.
I don't know much about the other authentication methods, but they look like they are intended for integrating with existing single-sign-on services.
I've just recently learned a bit about how encrypted messages are sent across the Internet, and it seems to rely a lot on "trusted third parties". My problem is that I don't trust anyone. Is there some way to form an encrypted connection between two computers without prior secrets or the need to trust anyone?
Yes, by creating a "Certificate Authority" (CA) and installing its certificates on the machines.
The third parties you're talking about issue certificates, and sign those certificates using a CA certificate that is included in popular operating systems and/or web browsers. You can create your own CA certificate and install it onto the machines in question alongside those third party certificates. Then you can issue your own SSL certificates which will be recognised by those machines without any third party involvement.
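For example, on a Debian/Ubuntu machine the CA certificate (not its private key) can be installed as trusted like this; the file name is a placeholder:

sudo cp myca.crt /usr/local/share/ca-certificates/myca.crt
sudo update-ca-certificates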
Note that the CA certificates aren't "prior secrets" - there's nothing secret about the certificate itself. It has a private key, which you use to sign your SSL certificates, but that key doesn't need to be on the machines in question (and shouldn't be).
There are plenty of sites that will walk you through the process - a quick Google turned up this one for example: Creating Your Own SSL Certificate Authority.
I'm attempting to enable SSL communication from a web service client (Axis2) using the certificate on the user's CAC card. Works like a charm....UNTIL the web server is CAC enabled. At that point the SSL connection is rejected with the error message that the other certificates in the chain were not included.
I have ensured that the provider is available, either by adding it to the security.properties file or creating it programmatically.
My current approach is to simply set the system properties:
System.setProperty("javax.net.ssl.keyStore", "NONE");
System.setProperty("javax.net.ssl.keyStoreType", "PKCS11");
I understand from this question/answer that this approach only sends the "end entity" certificate. Apparently I need to implement my own X509KeyManager. This is new ground for me, can anyone suggest a good reference or provide samples of how to do so?
Appreciate the assistance.
The best key manager implementation depends on the issuer of the certificates you expect to be using.
If the certificate on the user's CAC will always be issued by a specific CA, simply store that issuer's certificate and any intermediate certificates further up the chain in a PKCS #7 file. In the getCertificateChain() method, this collection can be appended blindly to the user's certificate and returned.
If things aren't quite that simple, but a complete list of possible issuers can be enumerated, obtain all of their certificates, and their issuer's certificates, and so on, up to the root certificates.
Add all of the root certificates to a key store as trusted entries. Bundle the intermediate certificates in a PKCS #7 format file.
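For example, openssl can produce such a bundle from individual certificate files (the file names here are placeholders):

openssl crl2pkcs7 -nocrl -certfile intermediate1.crt -certfile intermediate2.crt -out intermediates.p7b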
Implement X509KeyManager (or extend X509ExtendedKeyManager if you're working with SSLEngine). Specifically, in the getCertificateChain() method, you'll use a CertPathBuilder to create a valid chain from the user's certificate to a trusted root. The target is the certificate that you load from the user's CAC with the alias parameter. The trusted roots are the certificates in the trust store that you created; the intermediate certificates can be loaded from the PKCS #7 file and added to the builder parameters. Once the chain is built, get the certificate path and convert it to an array. This is the result of the getCertificateChain() method.
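To make that concrete, here is a rough, non-production sketch of such a key manager. The trust store roots.jks, the bundle intermediates.p7b, and the password are assumptions for illustration; the delegate is the PKCS#11-backed key manager obtained from the platform KeyManagerFactory.

import java.io.FileInputStream;
import java.io.InputStream;
import java.net.Socket;
import java.security.KeyStore;
import java.security.Principal;
import java.security.PrivateKey;
import java.security.cert.*;
import java.util.ArrayList;
import java.util.List;
import javax.net.ssl.X509KeyManager;

// Wraps the PKCS#11-backed key manager and rebuilds the full chain with a
// PKIX CertPathBuilder so the intermediate certificates are sent during the handshake.
public class ChainBuildingKeyManager implements X509KeyManager {

    private final X509KeyManager delegate;   // key manager backed by the CAC (PKCS#11)
    private final KeyStore trustedRoots;     // root certificates as trusted entries
    private final CertStore intermediates;   // intermediates from the PKCS #7 bundle

    public ChainBuildingKeyManager(X509KeyManager delegate) throws Exception {
        this.delegate = delegate;

        trustedRoots = KeyStore.getInstance("JKS");
        try (InputStream in = new FileInputStream("roots.jks")) {         // placeholder path
            trustedRoots.load(in, "changeit".toCharArray());              // placeholder password
        }

        CertificateFactory cf = CertificateFactory.getInstance("X.509");
        List<Certificate> certs;
        try (InputStream in = new FileInputStream("intermediates.p7b")) { // placeholder path
            certs = new ArrayList<>(cf.generateCertificates(in));
        }
        intermediates = CertStore.getInstance("Collection",
                new CollectionCertStoreParameters(certs));
    }

    @Override
    public X509Certificate[] getCertificateChain(String alias) {
        try {
            // The end-entity certificate from the CAC is the build target.
            X509Certificate endEntity = delegate.getCertificateChain(alias)[0];
            X509CertSelector target = new X509CertSelector();
            target.setCertificate(endEntity);

            PKIXBuilderParameters params = new PKIXBuilderParameters(trustedRoots, target);
            params.addCertStore(intermediates);
            params.setRevocationEnabled(false); // turn on if you need CRL/OCSP checking

            PKIXCertPathBuilderResult result = (PKIXCertPathBuilderResult)
                    CertPathBuilder.getInstance("PKIX").build(params);

            // The built path excludes the trust anchor; appending it is optional.
            List<X509Certificate> chain = new ArrayList<>();
            for (Certificate c : result.getCertPath().getCertificates()) {
                chain.add((X509Certificate) c);
            }
            chain.add(result.getTrustAnchor().getTrustedCert());
            return chain.toArray(new X509Certificate[0]);
        } catch (Exception e) {
            return null; // per the X509KeyManager contract when no chain can be built
        }
    }

    // Everything else is simply delegated to the PKCS#11 key manager.
    @Override public String chooseClientAlias(String[] keyType, Principal[] issuers, Socket socket) {
        return delegate.chooseClientAlias(keyType, issuers, socket);
    }
    @Override public String chooseServerAlias(String keyType, Principal[] issuers, Socket socket) {
        return delegate.chooseServerAlias(keyType, issuers, socket);
    }
    @Override public String[] getClientAliases(String keyType, Principal[] issuers) {
        return delegate.getClientAliases(keyType, issuers);
    }
    @Override public String[] getServerAliases(String keyType, Principal[] issuers) {
        return delegate.getServerAliases(keyType, issuers);
    }
    @Override public PrivateKey getPrivateKey(String alias) {
        return delegate.getPrivateKey(alias);
    }
}

The delegate can be obtained by initializing a KeyManagerFactory from the PKCS11 keystore and picking the X509KeyManager out of getKeyManagers(); passing the wrapper to SSLContext.init() in place of the original key managers should make the handshake include the intermediate certificates.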
If you can't predict who will be issuing the user's certificate, you might be able to obtain the intermediate certificates at runtime from an LDAP directory or other repository. That's a whole new level of difficulty.
I'm currently working on a project where I've created a CA cert and a couple of child certs to that CA cert. The certificates are going to be used to protect inter-server communication in a SAMLV2 setup so I'm going to have a cert for the identity provider and a cert for the service provider. The user/browser isn't going to validate the certs so it's only the servers that need to trust my custom CA. My cert tree looks something like this:
CustomRootCACert
  CustomIdentityProviderCert
  CustomServiceProviderCert
Now, I've heard a lot of people saying it's bad to use a home-made certificate in production. But when I ask why, people usually just mutter something about security but never go into the details. Are there any technical reasons not to use my own certs in production? I can't think of any... Of course I realize that if I lose control of my root cert, anyone could start creating all sorts of certificates. But in this case they would also have to install the certificates on my servers and configure the SAML application to use them. Only then could they start to generate fake SAML requests and responses to my applications.
If this is the only problem, this solution (using home-made certs in production) would still be better than the login setup we have today.
Ask yourself what a certificate proves.
If you get a certificate issued by a reputable CA, then it proves that the certificate holder has verified their identity to that CA, to their standards of proof.
If you get a certificate issued by an ad-hoc CA, then it proves that someone knows how to make certificates.
If you control both ends of the conversation, I think it's fine to have your own private CA for the purpose. You would trust your own CA. You can probably make this very secure indeed (by keeping the CA private key in a safe place offline, and making signing a sneakernet exercise).
The difficulty would be if you needed to persuade anyone else to trust your CA. Why should they? You would need to convince them that it was safe to do so, and they would have the admin overhead of adding your CA certificate to their clients.
Since you are only using the certificates to protect network traffic and not to authenticate users/computers, it sounds like you have a legitimate use for MakeCert.exe.
I feel there is one thing worth mentioning: after you spend some time working with the MakeCert.exe interface, you might consider using a Stand-Alone Root Certificate Server instead.
Consider these points:
(Almost) All versions of Windows Server include Certificate Server Services for free
Windows Stand-Alone CA Server is extremely simple to install and configure
Windows Stand-Alone CA Server can be installed on a Virtual Machine and turned on/off whenever you need to issue an additional certificate
A VM-based Windows Stand-Alone CA Server can be run using very little memory (e.g. 256 MB)
Windows Stand-Alone CA Server includes a nice and clean web based enrollment interface to simplify requesting certificates.
CRL checking can be used or not used, depending on your needs.
In the past I first started with selfssl.exe and eventually moved to MakeCert.exe to generate a root certificate and then issued my client certificates. But after struggling with the syntax and always having to remember where I put that Root Certificate I switched over to using a Stand-Alone Root CA in a virtual machine.
IF the certificates are only passed around internally, between your own servers (and not used by the client, one way or the other) - then it is perfectly acceptable to use your own internal CA.
HOWEVER, one suggestion - don't have your Root CA issue your provider certs. Instead, use your Root CA to create an Intermediate CA - then use that to issue provider certificates. This will help you longer term, when you have to start managing certificate expiration, extending the system/infrastructure, revocation lists, etc.
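A rough sketch of that step with openssl, assuming the root key and certificate are root.key and root.crt (file names, subject, and lifetime are placeholders; the extension file is what marks the new certificate as a CA):

openssl genrsa -out intermediate.key 4096
openssl req -new -key intermediate.key -out intermediate.csr -subj "/CN=My Intermediate CA"
printf "basicConstraints=critical,CA:true\nkeyUsage=critical,keyCertSign,cRLSign\n" > ca_ext.cnf
openssl x509 -req -in intermediate.csr -CA root.crt -CAkey root.key -CAcreateserial -out intermediate.crt -days 1825 -extfile ca_ext.cnf

Provider certificates are then signed with intermediate.key instead of the root, and the servers are given the root and intermediate certificates so they can build the full chain.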
There is no real issue with using a self-signed certificate for private use, that is, when you control all of the systems that need to trust the homebrew root certificate.
You manually install your root cert onto each of the systems that need to trust it.
You can do this in production as well for browser use - for example, within an organisation where the root CA certificate can be rolled out via a software distribution method, there is no reason to go to the expense of paying a Certificate Authority that Microsoft happens to trust.
[edit]
In terms of security, the issue is one of containing the private key for your root certificate; as long as you can ensure that stays private, you can validate any certificate off that root.