I've been facing two problems with Vault.
When I start the server with the -dev option, the web UI at http://localhost:8200/ui/ is blank. The browser console logs:
Refused to execute script from 'http://localhost:8200/ui/assets/vendor-955807f07aa62cf6b124690f61829edf.js' because its MIME type ('text/plain') is not executable, and strict MIME type checking is enabled.
Is it possible to store all KV secrets in one file so that I can load them back in every time I start the server?
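For illustration, the kind of workflow being asked about might look like this hypothetical wrapper script; the secrets file, its format, and the sleep are my own assumptions, not anything Vault provides out of the box:
#!/bin/bash
# Hypothetical wrapper: start the dev server, then replay secrets from a local file.
vault server -dev &
sleep 2
export VAULT_ADDR='http://127.0.0.1:8200'
# secrets.txt holds one secret per line, e.g.:  secret/myapp username=admin password=hunter2
while read -r path kv; do
  vault kv put "$path" $kv
done < secrets.txt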
Terraform 1.0.x
This is my first time using the artifactory backend to store my state files. In my case it's a Nexus repository, and I followed this article to set up the repository.
I have the following configuration:
terraform {
  backend "artifactory" {
    # URL of Nexus-OSS repository
    url = "http://x.x.x:8081/repository/"
    # Repository name (must be terraform)
    repo = "terraform"
    # Unique path for this particular plan
    subpath = "exa-v30-01"
    # Nexus-OSS creds (must have r/w privs)
    username = "user"
    password = "password"
  }
}
Since the backend configuration does not accept variables for the username and password key/value pairs, how can I hide the credentials so they're not in plain sight when I store my files in our Git repo?
Check out the "Partial Configuration" section of the Backend Configuration documentation. You have three options:
Specify the credentials in a backend config file (that isn't kept in version control) and specify the -backend-config=PATH option when you run terraform init.
Specify the credentials in the command line using the -backend-config="KEY=VALUE" option when you run terraform init (in this case, you would run terraform init -backend-config="username=user" -backend-config="password=password").
Specify them interactively. If you just don't include them in the backend config block, and don't provide a file or CLI option for them, then Terraform should ask you to type them in on the command line.
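For example, the first two options could look like this; the file name backend.hcl and the credential values are placeholders:
# backend.hcl -- kept out of version control
username = "user"
password = "password"

# Option 1: point Terraform at the file
terraform init -backend-config=backend.hcl

# Option 2: pass the values directly on the command line
terraform init -backend-config="username=user" -backend-config="password=password"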
For settings related to authentication or to identifying the current user running Terraform, it's typically best to leave them unconfigured in the Terraform configuration and use the relevant system's normal out-of-band mechanisms for passing credentials.
For example, the s3 backend supports all of the same credential sources that the AWS CLI does, so typically you would just configure the AWS CLI with suitable credentials and let Terraform's backend pick up the same settings.
For systems that don't have a standard way to configure credentials out of band, the backends usually support environment variables as a Terraform-specific replacement. In the case of the artifactory backend it seems that it supports ARTIFACTORY_USERNAME and ARTIFACTORY_PASSWORD environment variables as the out-of-band credentials source, and so I would suggest setting those environment variables and then omitting username and password altogether in your backend configuration.
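A minimal sketch of that approach, assuming your Terraform version's artifactory backend does read these variables:
# Set the credentials out of band (shell profile, CI secret store, etc.), not in the backend block
export ARTIFACTORY_USERNAME="user"
export ARTIFACTORY_PASSWORD="password"
terraform init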
Note that this out-of-band credentials strategy is subtly different from using partial backend configuration. Anything you set as part of the backend configuration -- whether in a backend block in configuration or on the command line -- will be saved by Terraform into a local cache of your backend settings and into every plan file Terraform saves.
Partial backend configuration is therefore better suited to situations where the location of the state is configured systematically by some automation wrapper, and thus it's easier to set it on the command line than to generate a configuration file. In that case, it's beneficial to write out the location to the cached backend configuration so that you can be sure all future Terraform commands in that directory will use the same settings. It's not good for credentials and other sensitive information, because those can sometimes vary over time during your session and should ideally only be known temporarily in memory rather than saved as part of artifacts like the plan file.
Out-of-band mechanisms like environment variables and credentials files are handled directly by the backend itself and are not recorded directly by Terraform, and so they are a good fit for anything which is describing who is currently running Terraform, as opposed to where state snapshots will be saved.
I have tried passing a custom config file to start the Vault server in the vault.service unit file, and it works as expected.
But if I try to start the Vault server in dev mode by changing the unit to ExecStart=/usr/local/bin/vault/vault server -dev, the service exits.
On the other hand, if I manually run /usr/local/bin/vault/vault server -dev, the Vault server starts in dev mode.
Is it possible to run the Vault server in dev mode as a service?
After some research I have come to the understanding that yes, we can run dev mode as a service.
Dev mode creates a token under the home directory of the user the service runs as, so that user's home directory must exist, or the home path must be set for that particular user.
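A minimal sketch of the relevant unit settings, assuming the service runs as a user named vault whose home directory is /home/vault (both are assumptions):
[Service]
User=vault
# Dev mode writes ~/.vault-token, so HOME must point at an existing, writable directory
Environment=HOME=/home/vault
ExecStart=/usr/local/bin/vault/vault server -dev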
I am trying to back up an SAP HANA database running in an Azure VM using the Recovery Services vault. While running the "msawb-plugin-config-com-sap-hana.sh" script I am getting the error:
Failed to determine SYSTEM_KEY_NAME: Please specify with the '--system-key' option.
Need a valid system key to create the backup key.
Please help me to resolve this error.
According to the prerequisites (https://learn.microsoft.com/en-us/azure/backup/tutorial-backup-sap-hana-db#prerequisites), you have to create a key in the default hdbuserstore.
You can create it by logging in as ndbadm:
su - ndbadm
and add the key:
/hana/shared/NDB/hdbclient/hdbuserstore set BACKUP YOUR_HOSTNAME:30013 SYSTEM YOUR_PASSWORD
Then, as root, run the script.
After running the script, you can check again as the ndbadm user if the key AZUREWLBACKUPHANAUSER is there:
/hana/shared/NDB/hdbclient/hdbuserstore list
and delete your previously created key:
/hana/shared/NDB/hdbclient/hdbuserstore delete BACKUP
The script uses the command "runuser" (in my case as ndbadm). When hdbuserstore is executed under the ndbadm profile, no keys are returned. You can copy the files SSFS_HDB.DAT and SSFS_HDB.KEY into the path returned by hdbuserstore LIST from a profile with valid files.
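For illustration only, the copy step could look like the following; both directories are placeholders and should be replaced with the paths that hdbuserstore LIST actually prints on your system:
# Copy the secure user store files from a profile that has valid keys (paths are placeholders)
cp /path/of/working/profile/SSFS_HDB.DAT /path/printed/by/hdbuserstore/LIST/
cp /path/of/working/profile/SSFS_HDB.KEY /path/printed/by/hdbuserstore/LIST/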
Refer to SAP Note 2853601 - Why is Nameserver Port Used in HDBUSERSTORE for SAP Application Installation.
In an MDC setup, the nameserver port (e.g. 30013) is used in hdbuserstore instead of the indexserver port (e.g. 30015) for a tenant DB.
I am wondering if there is any straightforward way of injecting files/secrets into the VMs of a scale set, either as you perform the (ARM) deployment or when you change the image.
This would be application-level passwords, certificates, and so on, that we would not want stored on the images.
I am using the Linux custom script extension for the entrypoint script, and realize that it's possible to inject some secrets as parameters to that script. I assume this would not work with certificates, however (too big/long), and it would not be very future-proof, as we would need to redeploy the template (and rewrite the entrypoint script) whenever we want to add or remove a secret.
Windows-based VMSS can get certificates from Key Vault directly during deployment, but Linux ones cannot do that. Also, there is a customData property which allows you to pass in whatever you want (I think it's limited to 64 KB of base64-encoded data), but that is not really flexible either.
One way of solving this is to write an init script that uses Managed Service Identity to get secrets from the Key Vault (a sketch follows the list below); this way you get several advantages:
You don't store secrets in the templates/VM configuration.
You can update the secret and all the VMSS instances will get the new version on the next deployment.
You don't have to edit the init script unless secret names change or new secrets are introduced.
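A minimal sketch of such an init script, assuming the VMSS has a managed identity with read access to a Key Vault and that jq is installed; the vault name (myvault) and secret name (app-password) are placeholders:
#!/bin/bash
# Get an access token for Key Vault from the instance metadata (MSI) endpoint
token=$(curl -s -H "Metadata: true" \
  "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fvault.azure.net" \
  | jq -r '.access_token')

# Fetch the secret value from Key Vault with that token
secret=$(curl -s -H "Authorization: Bearer $token" \
  "https://myvault.vault.azure.net/secrets/app-password?api-version=7.0" \
  | jq -r '.value')

# Use $secret to configure the application (write a config file, set an environment variable, etc.)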
I have a .NET Framework 4.7 application that allows users to upload X.509 certificates in PFX or PKCS#12 format (think: "SSL certificates" with the private key included); it then loads the certificate into a System.Security.Cryptography.X509Certificates.X509Certificate2 instance. As my application code also needs to re-export the certificate, I specify the X509KeyStorageFlags.Exportable option.
When running under IIS on my production web-server, the Windows user-profile for the identity that w3wp.exe runs under is not loaded, so I do not specify the UserKeySet flag.
String filePassword = ...
Byte[] userProvidedCertificateFile = ...
using( X509Certificate2 cert = new X509Certificate2( rawData: userProvidedCertificateFile, password: filePassword, keyStorageFlags: X509KeyStorageFlags.Exportable | X509KeyStorageFlags.MachineKeySet | X509KeyStorageFlags.PersistKeySet ) )
{
...
}
In early 2017 I deployed this code to an Azure App Service (aka Azure Website) instance and it worked okay - after initially failing because I did have the UserKeySet flag set (as Azure App Services do not load a user-profile certificate store).
However, since mid-2017 (possibly around May or June) my application has stopped working - I assume the Azure App Service was moved to an updated system (though Kudu reports my application is running on Windows Server 2012 (NT 6.2.9200.0)).
It currently fails with two error messages that vary depending on input:
CryptographicException "The system cannot find the file specified."
CryptographicException "Access denied."
I wrote an extensive test-case that tries different combinations of X509Certificate2 constructor arguments, as well as with and without the WEBSITE_LOAD_CERTIFICATES Azure application setting.
Here are my findings when working with an uploaded PFX/PKCS#12 certificate file that contains a private key and does not have password-protection:
Running under IIS Express on my development box:
Loading the certificate file always succeeds, regardless of X509KeyStorageFlags value.
Exporting the certificate file requires at least X509KeyStorageFlags.Exportable.
Running under IIS on a production server (not an Azure App Service) where the w3wp.exe user-profile is not loaded:
Loading the certificate file requires that X509KeyStorageFlags.UserKeySet is not set, but otherwise always succeeds.
Exporting the certificate file requires at least X509KeyStorageFlags.Exportable, but otherwise always succeeds; without Exportable it fails with "Key not valid for use in specified state."
Running under Azure App Service, without WEBSITE_LOAD_CERTIFICATES defined:
Loading the certificate with MachineKeySet set and UserKeySet not set fails with a CryptographicException: "Access denied."
Loading the certificate with any other keyStorageFlags value, including values like UserKeySet | MachineKeySet | Exportable or just DefaultKeySet fails with a CryptographicException: "The system cannot find the file specified."
As I was not able to load the certificate at all I could not test exporting certificates.
Running under Azure App Service, with WEBSITE_LOAD_CERTIFICATES defined as the thumbprint of the certificate that was uploaded:
Loading the certificate with MachineKeySet set and UserKeySet not set fails with a CryptographicException: "Access denied."
So values like UserKeySet, UserKeySet | MachineKeySet, and Exportable will work.
Exporting certificates requires X509KeyStorageFlags.Exportable - same as all other environments.
So WEBSITE_LOAD_CERTIFICATES seems to work - but only if the certificate being loaded into an X509Certificate2 instance has the same thumbprint as the one specified in WEBSITE_LOAD_CERTIFICATES.
Is there any way around this?
I thought more about how WEBSITE_LOAD_CERTIFICATES seems to make a difference - but I had a funny feeling about it really only working with the certificate thumbprint that's specified.
So I changed the WEBSITE_LOAD_CERTIFICATES value to a dummy thumbprint - an arbitrary 40-character Base16 string, and re-ran my test - and it worked, even though the thumbprint had no relation to the certificate I was working with.
It seems that simply having WEBSITE_LOAD_CERTIFICATES defined will enable the Azure website's ability to use X509Certificate and X509Certificate2 - even if the loaded certificate is never installed into, or even retrieved from, any systemwide or user-profile certificate store (as seen in the Certificates snap-in for MMC.exe).
This behaviour does not seem to be documented anywhere, so I'm mentioning it here.
I've contacted Azure support about this.
Regarding the behavioural change I noticed at mid-year: it's very likely that I originally had WEBSITE_LOAD_CERTIFICATES set for a testing certificate we were using. When I made a new deployment later in the year, around June, I must have reset the Application settings, which removed WEBSITE_LOAD_CERTIFICATES and so broke X509Certificate2 loading.
TL;DR:
Open your Azure App Service (Azure Website) blade in portal.azure.com
Go to the Application settings page
Scroll to App settings
Add a new entry key: WEBSITE_LOAD_CERTIFICATES, and provide a dummy (fake, made-up, randomly-generated) value for it.
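Equivalently, the app setting can be added from the Azure CLI; the app name, resource group, and thumbprint value below are placeholders:
az webapp config appsettings set \
  --name my-app --resource-group my-rg \
  --settings WEBSITE_LOAD_CERTIFICATES=0123456789abcdef0123456789abcdef01234567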
The X509Certificate2( Byte[], String, X509KeyStorageFlags ) constructor will now work, but note:
keyStorageFlags: X509KeyStorageFlags.MachineKeySet will fail with "Access denied"
All other keyStorageFlags values, including MachineKeySet | UserKeySet will succeed (i.e. MachineKeySet by itself will fail, but MachineKeySet used in conjunction with other bits set will work).