When I build my React app locally, all the environment variables get read from my local .env file, which is inside the project's root folder. We use gulp for the build process.
Now I want the same variables available in my Azure pipeline, which also builds the app via gulp and deploys it to my Azure Static Web App.
I already tried pushing my .env file to the repo, and also tried setting these variables in the pipeline's YAML file via
env:
HOSTNAME: 'google.com'
And I also tried putting the values in my pipeline's variables and accessing them in YAML like
env:
HOSTNAME: $(HOSTNAME)
Lastly I tried uploading my .env to DevOps Pipelines' Secure files, then adding tasks to my pipeline to access this file and copy it to my repo's root folder.
All approaches ended up with just a random string like "8a8878aa1317" inside these variables once the app is deployed and running. The random string changes each time I run the pipeline. Does anyone know how to get the right values into the variables?
HOSTNAME is a tricky name for a variable, because it might already be used by the underlying OS or the DevOps agent (the random string value suggests exactly that).
Try changing it to something like MY_HOSTNAME (in the .env file, in the pipeline and in your app).
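For reference, a minimal sketch of what that could look like in the pipeline YAML (the script step and gulp command are assumptions, adjust them to your actual build step; the variable has to be set on the build step, because a React build bakes the values in at build time):

variables:
  MY_HOSTNAME: 'google.com'   # or define it in the pipeline UI / a variable group

steps:
  - script: npx gulp build
    displayName: 'Build app'
    env:
      MY_HOSTNAME: $(MY_HOSTNAME)   # exposed as an environment variable to the gulp build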
I would like to manage GitLab variables from different projects via local files, so I would like to export a project's CI variables locally to a YAML or JSON file, change the values and import them back with the updated values.
I tried glab-cli and the GitLab API, but they're too basic: you must process variables one by one manually. I would like to find a better solution, capable of processing all variables at once.
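For context, this is roughly what the one-by-one API approach looks like (a sketch; PROJECT_ID and GITLAB_TOKEN are placeholders): the export can at least be done in a single call, but updates still have to be sent per key.

# dump all CI/CD variables of one project to a JSON file
curl --header "PRIVATE-TOKEN: $GITLAB_TOKEN" \
  "https://gitlab.com/api/v4/projects/$PROJECT_ID/variables?per_page=100" > variables.json

# after editing variables.json, each changed variable has to be written back individually
curl --request PUT --header "PRIVATE-TOKEN: $GITLAB_TOKEN" \
  "https://gitlab.com/api/v4/projects/$PROJECT_ID/variables/MY_VAR" \
  --form "value=new value"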
I have deployed a project on gcloud App Engine (standard) using the CLI command gcloud app deploy. On the first deployment, I put a .env file in the project which I now want to delete. However, when I delete the .env file and deploy again, it says 0 files were pushed to the storage, and the environment variables from the .env file still show up in the logs!
Is there any way to somehow refresh the deployment so I can get rid of the .env and the environment variables set by mistake?
When you deploy your app via gcloud app deploy, it copies your source code to a bucket. The default bucket is staging.<project_name>.appspot.com. (Objects uploaded to the staging bucket have a default lifespan of 15 days, i.e. they get deleted 15 days after they were last updated. You can change this value.)
When you run the deploy command, gcloud only deploys files that have been 'updated'. I think it compares the files in your local environment with the files in this staging bucket (which seems to be why it's telling you 0 files were pushed).
You could try clearing the staging bucket (deleting all the files in it) and then deploying your app again. It should then see the files as new and deploy them, i.e. the files without the .env file.
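A minimal sketch of that cleanup with gsutil (replace PROJECT_ID with your project ID; this deletes every object in the staging bucket, which should be safe since it only holds the copies used during deployment):

# empty the App Engine staging bucket, then redeploy so every file is uploaded fresh
gsutil -m rm "gs://staging.PROJECT_ID.appspot.com/**"
gcloud app deploy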
I've managed to replicate your issue and got it working on my end. I deployed an app with a .env file, then deployed a second time without the .env file, but before doing so, I made sure to remove the file from the folder using this command:
rm -rf .env
Run this command to check that the file has really been removed, since it is a hidden file:
ls -a
You can also follow the steps below to make sure the file isn't part of the deployment:
Go to App Engine services in the Google Cloud console.
Scroll to the right, then on the Diagnose tab you will see a "Tools" dropdown menu. Click it and choose "Source" from the three options.
After you click "Source" you will be directed to another page where you can check the files that are being deployed.
As another workaround, if you want to specify which files are excluded from the deployment, you can use a .gcloudignore file.
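A minimal example of such a file (the entries besides .env are illustrative defaults):

# .gcloudignore: anything listed here is skipped by gcloud app deploy
.gcloudignore
.env
.git
.gitignore
node_modules/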
I'm developing a little server with Node, hapi.js, nodemon, etc.
It's a basic REST API which will grow with ongoing development.
I need different variables for dev and production. I currently have only one .env file, and I've read it is not recommended to have two separate files for this.
How should I modify my app.js to handle the two situations?
run nodemon locally on my PC while in dev, with local variables
when deploying to Heroku, use production variables
Thanks a lot in advance,
As you've probably already done, write your code to use environment variables (whether you run locally or in production, the code is the same):
const ACCESS_KEY = process.env.ACCESS_KEY;
Your .env file then contains ONLY your local settings, for debugging on your local computer. You can add .env to your .gitignore file to make sure it doesn't get pushed to your git repository.
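If you load that file with the dotenv package, a common pattern is to only do so outside production (a sketch, assuming dotenv is installed as a dependency; on Heroku the values instead come from the Config Vars described below):

// app.js
if (process.env.NODE_ENV !== 'production') {
  // read .env into process.env for local development only
  require('dotenv').config();
}

const ACCESS_KEY = process.env.ACCESS_KEY;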
Production settings by contrast shouldn't be in any file at all. They should only be configured directly in the settings of your cloud provider.
if you're using Azure, they should be in an Azure Key Vault
if you're using AWS, they should be in the AWS Parameter Store
if you're using Heroku, then they should be configured in Heroku's settings.
Heroku settings
You can do this from the "Settings" tab in your Heroku app dashboard; there is a "Config Vars" section.
When Heroku launches your application, it will expose the configured config vars as environment variables, and you will be able to access them with process.env just as you would the environment variables defined in your .env file during development.
CLI
The dashboard makes it easy to get an overview and to manage the keys. Perhaps even more conveniently, you can also do this with the Heroku CLI tool straight from the command line.
To get a list of your current environment variables, run:
heroku config
To add a new key from the CLI:
heroku config:set ACCESS_KEY=adfsqfddqsdf
All of this is also described in Heroku's official documentation.
Generally, you would generate your env file at build time. For example, using AWS SSM or some kind of secure vault, you store secrets like DB passwords, and the env file is a template that gets compiled with the right env vars for the target deployment (see the sketch below).
You can also keep dummy variables in the env template that you commit to git, and add a .gitignore entry for the generated env file to make sure you don't commit any secrets to it. Then you compile the file locally for local, during the staging build for staging, during the prod build for prod, etc.
As the app gets larger, this allows you to provision credentials per person / per environment: you add the associated secrets and permissions to the vault, give the people/environments access to those secrets, and then you can control access in a pretty fine-grained fashion.
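A minimal sketch of that build-time compilation, assuming AWS SSM and a committed .env.template with a __DB_PASSWORD__ placeholder (the parameter path and placeholder name are assumptions):

# pull the secret for the target environment from SSM
STAGE=prod
DB_PASSWORD=$(aws ssm get-parameter \
  --name "/myapp/$STAGE/DB_PASSWORD" \
  --with-decryption \
  --query 'Parameter.Value' --output text)

# render the committed template (containing dummy values) into the real, git-ignored .env
sed "s|__DB_PASSWORD__|$DB_PASSWORD|" .env.template > .env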
I suggest using an npm package for handling different environment variables and keys (or implement it yourself), alongside the .env file:
1- use the .env file to store credentials and secrets
2- reference these .env variables via a package that provides a separate file for each environment
Suggested package: https://www.npmjs.com/package/config
I used this approach in one of my projects and it made my life easier.
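A rough sketch of how that package is typically laid out (the key names are illustrative): it reads config/default.json plus an override file named after the current NODE_ENV, e.g. config/production.json.

// somewhere in your app
const config = require('config');

// resolved from config/<NODE_ENV>.json, falling back to config/default.json
const apiUrl = config.get('api.url');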
A widely adopted best practice is to inject the application settings (secrets and environment config) at runtime.
It is safer (secrets are NOT stored in the source code, bundles or packages/images) and more portable (as you deploy to more environments you only need to define suitable values; no code changes, recompilation or repackaging).
Single .env file
Define a single .env file: your application needs the same properties (with different values, obviously) everywhere.
On your local development environment, configure the .env file for development: you neither commit nor package this file.
Production Deployment
Define the runtime configuration: on Heroku use Config Vars to create an environment variable for each property defined in the .env file, for example
# .env
API_TOKEN = dev1
Create a Config Var API_TOKEN with the production value: it is injected at application startup and never stored or exposed.
This approach is language-agnostic (in Java you might have a .properties file instead, but the principle is the same) and works with different hosting providers: you deploy the same app/package while configuring the environment settings accordingly.
Azure DevOps XDT Transform tasks allow you to build release profiles that transform the base config file with settings specific to each environment, such as a connection string that points to a different DB server per environment. The app.dev.config file has transformations for the dev environment, app.qa.config for QA, etc., and they are applied to the base app.config file during deployment.
I need to take this one step further and deploy custom config files for each individual server in a load-balanced environment. For example, the DEV environment has two servers, dev1.mysite.com and dev2.mysite.com, that are load balanced by dev.mysite.com. Each of the two servers needs specific settings in the config file deployed to it.
I don't (yet) see a way to do this in Azure DevOps. Part of the solution might be to set up variables with the setting that needs to be applied to each environment/server, but I haven't figured out how to apply the correct variable to each config.
You can use the Magic Chunks task to apply the variable to each config.
Search for the Magic Chunks extension in the marketplace and install it to your organization, then add its config transform task before the deployment task to update the config file with the server-specific setting.
You can reference your pipeline variables in the task's transformation settings.
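For illustration, a Magic Chunks transformation is essentially a set of path/value pairs; a sketch for an XML app.config might look like this (the appSettings key name is an assumption, and $(ServerName) would be a pipeline variable scoped to the server-specific stage or deployment group):

{
  "configuration/appSettings/add[@key='ServerName']/@value": "$(ServerName)"
}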
There are other extension tasks, like RegEx Find & Replace, that you can use to replace the variables in config files.
My goal is to be able to develop/add features locally, then create a local Docker build and create a container using the Bitbucket Pipeline repository variables. I don't want to hard-code any secrets on the host machine or inside the code. I'm trying to access some API keys hosted in the Bitbucket Pipeline repository variables.
Does anyone know how to do this? I am thinking of some script inside the Dockerfile that would create environment variables inside the container.
You can pass these variables to your container as environment variables when you run it with the -e flag (see this question); you can use the Bitbucket variables at this point. When you do this the variables are available in your Docker container, but of course you will then still have to be able to use them in your Python script, I suppose?
You can easily do that like this:
variable = os.environ['ENV_VARIABLE_NAME']
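As a rough sketch of the pipeline side (the step, image and key names are placeholders; MY_API_KEY would be defined as a Bitbucket repository variable, which the pipeline already exports as an environment variable in the step):

# bitbucket-pipelines.yml
pipelines:
  default:
    - step:
        name: Build and run
        services:
          - docker
        script:
          - docker build -t my-app .
          # forward the repository variable into the container without hard-coding it
          - docker run -e MY_API_KEY="$MY_API_KEY" my-app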
If you do not want to pass the variables in plain text to commands like this, you could also set up a MySQL container linked to your Python container that provides your application with the variables. That way everything is secured, dynamic and not visible to anyone except users with access to your database, and it can still be modified easily. It takes a bit more time to set up, but is less of a hassle than a .env file.
I hope this helps you.