I have a GitLab project that is mirroring (pull) a private GitHub repo. Because of its origins, the repo relies on a "config/private.js" file with all the API keys and server config that it needs. Or rather, that file isn't in the repo; it's in .gitignore.
How do I populate my GitLab environment with this file? It would be ideal if I could reserve a special file that is not in the repo, does not update with commits, and is used to populate the dist environment with a build command like:
- cat secrets.file > src/config/private.js
But I'm having no luck finding that in the documentation. I do see project and group secrets, but 1) adding them one by one would be tedious, and 2) I would need to rewrite the code, or else write an equally tedious script to echo each one into the file.
This was a tad complicated.
GitLab does not install the repo; it installs the build results. So you can inject API key files in GitLab's CI/CD, but you would have to change them and rebuild for each environment. (You couldn't test the results and then redeploy known-working results to prod.) In my case, I was building once, and committed to only applying the relevant keys to stage and prod.
What I do is keep the secrets as variables on the destination. During CI/CD I inject a key file that refers to the environment; for example, it might set a key to the placeholder __MY_API_KEY__. I then use a postinstall script in deployment to apply the environment's values to the built scripts that are installed (essentially a string substitution over a set of env variables and the /build files).
This way, I can use a hard-coded, gitignored private file locally, and still inject private keys specific to each environment separately. A sketch of the substitution step follows below.
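A minimal sketch of that postinstall substitution, assuming __NAME__-style placeholders in the built files; the variable list, the ./build path, and GNU sed are illustrative assumptions:

# replace __MY_API_KEY__-style placeholders with the destination's env values
for key in MY_API_KEY MY_API_URL; do
  value="${!key}"                       # read the env variable named in $key
  find ./build -type f -name '*.js' \
    -exec sed -i "s|__${key}__|${value}|g" {} +   # GNU sed; use sed -i '' on macOS
done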
Related
I would like to manage GitLab variables from different projects via local files, so I would like to export a project's CI variables locally to a YAML or JSON file, change the values, and import them back with the updated values.
I tried glab-cli and the GitLab API, but they are too basic: you must process variables one by one manually. I would like to find a better solution, capable of processing all variables at once.
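A hedged sketch of scripting those per-variable endpoints with curl and jq (the project ID 1234 and the token variable are illustrative):

# dump all CI variables of a project to a JSON file
curl --header "PRIVATE-TOKEN: $GITLAB_TOKEN" \
  "https://gitlab.example.com/api/v4/projects/1234/variables?per_page=100" > variables.json

# after editing variables.json, push each entry back (still one API call per variable)
jq -c '.[]' variables.json | while read -r var; do
  key=$(printf '%s' "$var" | jq -r '.key')
  value=$(printf '%s' "$var" | jq -r '.value')
  curl --request PUT --header "PRIVATE-TOKEN: $GITLAB_TOKEN" \
    "https://gitlab.example.com/api/v4/projects/1234/variables/$key" \
    --form "value=$value"
done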
I'm developing a little server with Node, hapi.js, nodemon, etc.
It's a basic REST API which will grow with ongoing development.
I need different variables for dev and production. I currently have only one .env file. I've read it is not recommended to have two separate files for this.
How should I modify my app.js to handle two situations?
run nodemon locally on my PC while in dev, with local variables
when deploying to Heroku, use production variables
Thanks a lot in advance,
As you've probably already done, write your code to use environment variables (whether you run locally or in production, it's the same code):
const ACCESS_KEY = process.env.ACCESS_KEY;
Your .env file then contains ONLY your local settings, for debugging on your local computer. Add .env to your .gitignore file to make sure it doesn't get pushed to your git repository.
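For local development you then just need something to load the .env file into process.env; a minimal sketch, assuming the dotenv package is installed (on Heroku you skip this and rely on Config Vars):

node -r dotenv/config app.js   # preloads dotenv so .env values land in process.env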
Production settings, by contrast, shouldn't be in any file at all. They should only be configured directly in the settings of your cloud provider:
if you're using Azure, they should be in an Azure Key Vault
if you're using AWS, they should be in the AWS Parameter Store
if you're using Heroku, then they should be configured in Heroku's settings.
Heroku settings
It's possible to do this from the "Settings" tab in your Heroku app dashboard; there is a section "Config Vars".
When Heroku launches your application, it defines the configured config vars as environment variables, and you can access them with process.env just as you would the environment variables defined in your .env file during development.
CLI
The dashboard makes it easy to get an overview and to manage the keys. Perhaps even more conveniently, you can also do this with the Heroku CLI tool straight from the command line.
To get a list of your current environment variables, run:
heroku config
To add a new key from the CLI:
heroku config:set ACCESS_KEY=adfsqfddqsdf
All of this is also described in Heroku's official documentation.
Generally, you would generate your env file at build time. For example, using AWS SSM or some kind of secure vault, you store secrets like DB passwords. The env file is a template that gets compiled with the right env vars for the target deployment.
Also, you can have dummy variables in the env template that you commit to git, and a .gitignore entry for the compiled env file to ensure you don't commit any secrets. Then you compile the file locally for local use, during your staging build for staging, during your prod build for prod, etc.
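A minimal sketch of that compile step, assuming AWS SSM, a committed .env.template, and an illustrative parameter name:

# pull a secret from SSM and render the committed template into the real env file
export DB_PASSWORD=$(aws ssm get-parameter --name "/myapp/prod/db-password" \
  --with-decryption --query 'Parameter.Value' --output text)
envsubst < .env.template > .env   # substitutes ${DB_PASSWORD} etc. into the template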
As the app gets larger, this allows you to provision credentials per person / per environment. You add the associated secrets and permissions to the vault, grant the people/environments access to those secrets, and then you can control access in a pretty fine-grained fashion.
I suggest using an npm package (or implementing it yourself) for handling different environments' variables and keys, alongside a .env file:
1- use the .env file to store credentials and secrets
2- reference these .env variables via a package that provides a separate file for each environment
suggested package: https://www.npmjs.com/package/config
I used this approach in one of my projects and it made my life easier.
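For reference, the typical layout with the config package linked above is one file per environment, selected by NODE_ENV; a brief sketch with illustrative file names:

config/default.json       # shared defaults
config/development.json   # overrides for local dev
config/production.json    # overrides for prod

NODE_ENV=production node app.js   # loads default.json merged with production.json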
A widely adopted best practice is to inject the application settings (secrets and environment config) at runtime.
It is safer (secrets are NOT stored in the source code, bundles or packages/images) and portable (as you deploy to more environments you only need to define suitable values - no code changes, recompilation or repackaging).
Single .env file
Define a single .env file: your application needs the same properties (with different values, obviously) everywhere.
On your local development environment, configure the .env file for development: you neither commit nor package this file.
Production Deployment
Define the runtime configuration: on Heroku, use Config Vars to create an environment variable for each property defined in the .env file. For example:
# .env
API_TOKEN = dev1
Create a Config Var API_TOKEN with the production value: it is injected at application startup and never stored/exposed.
This approach is language-agnostic (in Java you might have a .properties file instead, but the principle is the same) and works with different hosting providers: you deploy the same app/package while configuring the environment settings accordingly.
I’m trying to set up GitLab CI/CD for an old client-side project that makes use of Grunt (https://github.com/yeoman/generator-angular).
Up to now the deployment worked like this:
run ‘$ grunt build’ locally, which built the project and created files in a ‘dist’ folder in the root of the project
commit changes
changes pulled onto production server
After creating the .gitlab-ci.yml and making a commit, the GitLab CI/CD job passes but the files in the ‘dist’ folder in the repository are not updated. If I define an artifact, I get the changed files in the download. However, I would prefer the files in the ‘dist’ folder in the repository to be updated so we can carry on with the same workflow, which suits us. Is this achievable?
I don't think committing into your repo inside a pipeline is a good idea. Version control wouldn't be as clear, and some people trigger pipelines automatically when the repo is pushed; that would set off a loop of pipelines.
Instead, you might reorganize your environment to use Docker; there are numerous reasons for using Docker in professional and development environments. To name just a few: it would enable you to save the freshly built project into a registry and reuse it whenever needed, at exactly the version you require and with the desired /dist inside, so you can easily run it in multiple places, scale it, manage it, etc.
If you changed to Docker you wouldn't actually have to do a thing to keep dist persistent; just push the image to the registry after the build is done, for example:
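A minimal sketch, where the registry path and tag are illustrative:

docker build -t registry.gitlab.com/group/project:latest .   # image contains the built /dist
docker push registry.gitlab.com/group/project:latest         # persist it in the registry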
But to actually answer your question:
There is a feature request that has been open for a very long time for exactly the problem you asked about: here. Currently there is no safe and professional way to do it, as GitLab members state. You can, however, push changes back, as one of the GitLab members (Kamil Trzciński) suggested:
git push http://gitlab.com/group/project.git HEAD:my-branch
Just put it in the script section of your .gitlab-ci.yml file.
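A hedged sketch of such a script section, assuming a token with write access exposed as an illustrative CI_PUSH_TOKEN variable:

grunt build
git config user.email "ci@example.com"     # identity for the CI commit (illustrative)
git config user.name "CI"
git add -f dist
git commit -m "Update dist [skip ci]"      # [skip ci] keeps the push from re-triggering the pipeline
git push "https://oauth2:${CI_PUSH_TOKEN}@gitlab.com/group/project.git" HEAD:my-branch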
There are more hacky methods presented there, but be sure to acknowledge the risks that come with them (pipelines become more error-prone and, if configured in the wrong way, might for example publish confidential information or trigger an infinite loop of pipelines, to name a few).
I hope you found this useful.
I have an application written in NodeJS.
Every time I push my code with git on my master branch, the code is automatically deployed on my server. Everything works fine.
But I have a problem with my config file, which is different between my local server and my remote server.
To resolve this problem, I have excluded my config file from git and copied it to _config.
In the _config file I put all my server config, and when the code is deployed to the server a $ cp _config config is run automatically in order to pick up the right config values.
This solution works, but I am wondering if it's the best way to deploy my config file, and whether it's secure to have it on Bitbucket.
For example, if I want to share my repository with someone, I can't, because they would see my config.
Is there another solution to do that?
If you have any advice, thanks in advance.
Your solution works fine, even though it might seem a bit crude to you.
Configuration generally consists of 2 types of data:
Sensitive data (e.g. passwords, tokens, credentials)
Everything else (e.g. constants, paths, locations, URLs)
Config variables of either type can differ between your dev machine and your production machine (dev API URL vs. prod API URL; dev API password vs. prod API password).
The second type of configuration data can be committed to your git repository, and it's probably easiest if you do. You could make one file for dev and one for production, and, for example, load the correct one based on an environment variable, as sketched below.
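A minimal sketch of that selection step at deploy time, with illustrative file names, using NODE_ENV as the switch:

cp "config.${NODE_ENV:-development}.json" config.json   # falls back to the dev file locally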
The first type - passwords - should not be committed to git. Not even in a private repository; it's just too dangerous. Instead, you could use a mechanism like the one you describe, or put the passwords in environment variables which you set on the production environment.
Hrm, probably not a complete solution, but check out process environment variables.
I typically keep one file privately, add it to .gitignore, and then push a sample environment-variable file.
On my projects I solve this with three config files.
Default config: this file contains the default configuration. Such configuration should be safe and appropriate for most installations. It may be a real config file or some part of the source code with default values. This file is committed to the Git repository.
Example configuration file: this file looks like the third file, but it is not used by the application. I usually give it an .example suffix. It is committed to the Git repo, but does not contain sensitive data.
Local configuration file: a copy of the example with local changes, like the database password or a flag enabling debug mode. This file is not committed to the Git repo, since it contains sensitive data, and it should be listed in your .gitignore file.
When loading such configuration, I merge the defaults file with the local file, so the content of the local file overrides the defaults, but if something is missing from the local file, the defaults are used instead. This way I can have a minimal local file while most of the configuration is driven by developers' commits. A sketch of the merge follows.
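With JSON config files, the merge can be a one-liner in jq (file names illustrative); * performs a recursive merge, with the right-hand file winning:

jq -s '.[0] * .[1]' config.defaults.json config.local.json > config.effective.json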
You may want even more config files when a framework is used and your application is installed for many customers. Then you may have a framework configuration file, an application configuration file, a per-customer configuration file and a local configuration file, all merged together in that order, with only the last one, the local configuration, left uncommitted. Of course this involves more repositories, git submodules or something like that.
I have an existing directory structure on a machine and want to configure Gitlab CI to clone/fetch repos to specific paths.
I've managed to change the builds_dir property in the config.toml file to start in the correct place, but GitLab adds extra nested folders by default.
So I set:
builds_dir = "/Users/myUser/Development/projName"
and when GitLab CI clones the repo, it adds
"/555555bb/0/orgName"
so I end up with:
"/Users/myUser/Development/projName/555555bb/0/orgName/projName"
Is there a way in the GitLab config file to remove the extra sub-directories, or is my only option to move the files around after the clone/fetch is complete?
Because of how GitLab runners work, you need to copy the files to the appropriate locations.
Since runners can handle multiple projects, they need to differentiate them in some way, and this is where the /555555bb/0/orgName part comes in.
You can define in your project-specific runner config where those files need to be copied to (a simple copy command will suffice).
I solved it by creating a symbolic link between the repository used by the GitLab runner and the path I wish to use for the deployment of the application, for example:
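A minimal sketch, using the paths from the question above (the deployment path is illustrative):

ln -s "/Users/myUser/Development/projName/555555bb/0/orgName/projName" \
      "/Users/myUser/Development/deploy/projName"   # deploy path now tracks the runner's checkout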