Looking for a GitLab feature equivalent to CircleCI contexts

In CircleCI, contexts allow me to set different values for the same variables.
For example, I set up two environments, dev and prod, and in each of them I set several variables:
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
AWS_DEFAULT_REGION
Since my environments are in different AWS accounts, I can provide different values to each.
Second, I can set permissions so that developers can only access the dev context and the support team can only access the prod context.
But I don't get the same feature in GitLab CI.
One of their documents mentions Groups, but after checking, it doesn't work as expected at all.
https://docs.gitlab.com/ee/ci/migration/circleci.html#contexts-and-variables
Contexts and variables
CircleCI provides Contexts to securely pass environment variables across project pipelines. In GitLab, a Group can be created to assemble related projects together. At the group level, CI/CD variables can be stored outside the individual projects, and securely passed into pipelines across multiple projects.
Is there any way I can do that in GitLab CI?
Sample usage of contexts in CircleCI for your reference:
version: 2.1
workflows:
  staging-workflow:
    jobs:
      - staging-build:
          context:
            - staging
  prod-workflow:
    jobs:
      - prod-build:
          context:
            - prod

I think you could achieve something similar by utilising GitLab's pipeline schedules.
Simply create two schedules that contain the variables you want to pass.
Update your CI script to reference these variables.
Then each schedule can be owned by the respective parties.
You can also limit variables by group, based on environment. So if you have a dev environment, variables can be limited to it, and the same for a prod environment. Hope this helps.
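As a rough sketch of this approach (job and variable names are assumptions): each schedule would define its own values for the AWS variables, and a job restricted to scheduled pipelines would pick up whichever set the triggering schedule provides.

```yaml
# .gitlab-ci.yml – hypothetical job that reads whatever the triggering
# schedule defined; a "dev" schedule and a "prod" schedule would each
# set AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY / AWS_DEFAULT_REGION.
deploy:
  stage: deploy
  rules:
    - if: '$CI_PIPELINE_SOURCE == "schedule"'
  script:
    - aws sts get-caller-identity   # confirms which account's credentials arrived
```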

Finally I got this problem fixed with GitLab CI environments.
Set up the environments Staging and Production.
When it asks for a URL, ignore the issue and put in any URL you like; I just put http://example.com .
I'm still not sure what the proper URL should be.
Update your variables in CI/CD settings; the default scope is All, so change it to Staging or Production:
Settings -> CI/CD -> Variables -> Expand
I set the variable AWS_ACCESS_KEY_ID twice, but assigned each copy to a different environment.
(Note: AWS_ACCESS_KEY_ID doesn't need to be masked, but AWS_SECRET_ACCESS_KEY should be.)
Update your pipeline to use the environment.
The GitLab CI environment document confused me again: you don't have to provide the URL, just adding the environment name is enough.
staging_deploy:
  stage: deploy
  environment:
    name: staging
  when: manual
  script:
    - echo "staging deployment"
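A production counterpart would look the same, with the environment name switched so the Production-scoped variables are injected instead (a sketch; the environment name is assumed to match the one created above):

```yaml
production_deploy:
  stage: deploy
  environment:
    name: production   # pulls the variables scoped to this environment
  when: manual
  script:
    - echo "production deployment"
```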

Related

How to embed environment variables in a docker image in azure devops pipeline?

My requirement is to set a few environment variables on the Docker image that is built by the Azure DevOps pipeline.
This pipeline is used for dev/stage/production environments; each stage is tied to its own set of group variables, and I want these variables to be available in the Node.js app.
I can read the variables within the pipeline using $VARNAME but can't read them when the code runs using process.env.VARNAME.
I understand that this is not the best approach, as the image would have environment variables which may potentially contain secrets, hence I am open to different ideas as well.
What I have tried so far:
Added an ARG in the Dockerfile:
ARG VARNAME=somevalue
On the Docker build task I added:
- task: Docker@2
  displayName: Build docker image
  inputs:
    command: build
    repository: $(imageName)
    tags: $(Build.BuildId)
    buildContext: '$(Pipeline.Workspace)/PublishedWebApp'
    arguments: --build-arg SOMEVAR=anewvalue
I try to access this as process.env.SOMEVAR in the Node.js code.
I can actually see the --build-arg on the docker build executed in the pipeline, but in the code it never appears.
What I am after is a pretty standard requirement: we have multiple environments, each with different keys (different variable groups tied to different stages); how do I pass different keys to the deployment?
There are a few components from what I can see that need to align:
You are passing the key via --build-arg. This is correct.
You have an ARG in the dockerfile to receive what is passed in via --build-arg. This is correct.
You are missing setting the environment variable equal to the passed-in arg.
FROM python:3.7-slim
ARG SOME_PASSED_IN_ARG
ENV SOMEVARNAME $SOME_PASSED_IN_ARG
...
You would access SOMEVARNAME like process.env.SOMEVARNAME from the node application.
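A minimal Node.js sketch of that last step (SOMEVARNAME is the variable baked in via ENV above; the fallback handling is an addition for illustration):

```javascript
// Read a build-time-injected environment variable with a fallback.
// process.env only contains variables present in the container at runtime,
// i.e. those baked in with ENV or passed with `docker run -e`.
function readConfig(name, fallback) {
  return process.env[name] !== undefined ? process.env[name] : fallback;
}

process.env.SOMEVARNAME = 'anewvalue';              // simulates ENV from the Dockerfile
console.log(readConfig('SOMEVARNAME', 'default'));  // prints anewvalue
console.log(readConfig('MISSING_VAR', 'default'));  // prints default
```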
I give credit to @levi-lu-msft and their post here: https://stackoverflow.com/a/59784488/1490277
As for best practice, defining logical default environment variables in the container at build time is perfectly acceptable (e.g. ENV=PRODUCTION). At run time, you are able to overwrite any environment variable using passed-in arguments:
docker run ... -e SOME_PASSED_IN_ARG=someOtherValue
A few "better" practices:
Containers are meant to be repeatable. Don't embed environment variables at build time that will expire or change over time. In other words, if you decide to deploy a version from three years ago and a build-time environment variable has an expired value (e.g. a coupon code), your environment could be fragile and unpredictable.
Never embed secrets in the container at build time, both because of the first point above (they may change over time) and for security. Secret values (e.g. one-time keys, tokens, etc.) are available for anyone (bad actor or careless actor) to extract simply by listing the environment variables of the image when running it.

How to use the same ADO pipeline for deploying to dev, staging and production

I'm using ADO to trigger tests and deployments to Google App Engine when I push to Github. Note that I use the same .yaml file to run 2 stages; the first being Test and the second, Deploy. The reason I am not using release pipelines is because the docs suggest that release pipelines are the "classic" way of doing things, and besides, having the deploy code in my yaml means I can version control it (as opposed to using ADO's UI)
I have already set up the 2 stages and they work perfectly fine. However, the deploy right now only works for my dev environment. I wish to use the same stage to deploy to my other environments (staging and production).
Here's what my .yaml looks like:
trigger:
  # list of triggers

stages:
- stage: Test
  pool: 'ubuntu-latest'
  jobs:
  - job: 1
  - job: 2
  - job: 3
- stage: Deploy
  jobs:
  - job: 'Deploy to dev'
I could potentially deploy to staging and production by copying the Deploy to dev job and creating similar jobs for staging and production, but I really want to avoid that because the deployment jobs are pretty huge and I'd hate to maintain 3 separate jobs that all do the same thing, albeit slightly differently. For example, one of the things the dev job does is copy app.yaml files from <repo>/deploy/dev/module/app.yaml. For a deploy to staging, I'd need the same job but using the staging directory: <repo>/deploy/staging/module/app.yaml. This is a direct violation of the DRY (don't repeat yourself) principle.
So I guess my questions are:
Am I doing the right thing by choosing to not use release-pipelines and instead, having the deploy code in my yaml?
Does my azure-pipelines.yaml format look okay?
Should I use variables, set them to either dev, staging, and production in a dependsOn job, then use these variables to replace the directories where I copy the files from? If I do end up using this though, setting the variables will be a challenge.
Deploying from the YAML is the right approach; you could also leverage the new "Environments" concept to add special gates and approvals before deploying. See the documentation to learn more.
As @mm8 said, using templates here would be a clean way to manage the deployment steps and keep them consistent between environments. You will find more information about it here.
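A minimal sketch of the template approach, assuming a hypothetical deploy.yml steps template and the directory layout from the question:

```yaml
# deploy.yml – reusable steps template (file name is an assumption)
parameters:
  environment: 'dev'

steps:
- script: cp deploy/${{ parameters.environment }}/module/app.yaml .
  displayName: Copy app.yaml for ${{ parameters.environment }}

# In azure-pipelines.yml, the Deploy stage would then reuse this template
# once per environment instead of duplicating the job:
# - stage: Deploy
#   jobs:
#   - job: DeployStaging
#     steps:
#     - template: deploy.yml
#       parameters:
#         environment: 'staging'
```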

Gitlab shared runners: env variables privacy with docker runners

[Disclaimer]: this post focuses exclusively on CI/build aspects and has nothing to do with CD/deployment.
Say I have a project hosted on GitLab involving some source code compilation. For the present purpose, let's say I want Maven to create two jars (STAGE and PROD) of some Java project every time a developer merges to master. Those jars:
will be stored in the project registry
need to contain the environment variables defined for this project.
Can I legitimately assume that those environment variables will remain safe (i.e. private) if the project is compiled with shared GitLab runners, based on the assumption that Docker runners are ephemeral? Is there a (better) way to enforce privacy despite using shared runners?
One possible alternative would be to use protected environment variables.
Issue 1452 shows them in the "CI/CD" panel of Settings.
By prefixing them with STAGE_ or PROD_, your Maven job could use the right variable depending on the current jar.
Because of their protected nature, they would not be exposed.
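A hedged sketch of what the two Maven jobs could look like under that prefix convention (job names, the apiKey property, and the variable names are assumptions; the variables would be protected variables defined under Settings > CI/CD > Variables):

```yaml
# .gitlab-ci.yml – each job picks the variable matching its target jar
build_stage:
  script:
    - mvn -B package -Dapp.apiKey=$STAGE_API_KEY
  only:
    - master
build_prod:
  script:
    - mvn -B package -Dapp.apiKey=$PROD_API_KEY
  only:
    - master
```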

How to Access gitlab variables from another repo

I have two repositories, A and B, in GitLab.
I want to access a few variables from "repo A" in "repo B" to implement CI/CD. How do I do that?
You cannot share CI variables across projects unless they are under the same group and inherit its variables; please check https://gitlab.com/gitlab-org/gitlab-ce/issues/24143
As users propose on this issue, you can configure env variables per runner and make all jobs from different projects run on those runners via CI tags.
You can share environment variables if your repositories are in the same group.
You can find it under:
https://<your-gitlab>/groups/my-group/-/settings/ci_cd
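Group-level variables are inherited by every project in the group, so a job in either repository can reference them directly (a sketch; SHARED_TOKEN is a hypothetical variable defined at the group level):

```yaml
# .gitlab-ci.yml in repo B – no per-project setup needed,
# SHARED_TOKEN arrives from the group's CI/CD settings.
build:
  script:
    - echo "token is available as $SHARED_TOKEN"
```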

Set Config property based on deployment platform (dev, staging, qa, production) in Node.js

In Node.js, I want to set a config property value based on the platform (dev, staging, qa, production) it will get deployed to.
For example, for the dev and staging environments, I want to set the value '234'.
And for prod, I want to set the value '456'.
For deployment, I am using VSTS.
Should I make use of deployment variables?
For setting a config property based on environment, please take a look at this question: Node.js setting up environment specific configs to be used with everyauth
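A minimal Node.js sketch of environment-keyed config, using the values from the question (the property name and the qa value are assumptions; the deployment platform would select the key, e.g. via an environment variable set in VSTS):

```javascript
// Map each deployment platform to its config; dev/staging/qa share '234',
// production gets '456', per the question.
const configByEnv = {
  dev:        { someProperty: '234' },
  staging:    { someProperty: '234' },
  qa:         { someProperty: '234' },  // assumption: qa behaves like staging
  production: { someProperty: '456' },
};

// Fall back to dev when the platform name is unknown or unset.
function getConfig(env) {
  return configByEnv[env] || configByEnv.dev;
}

console.log(getConfig('staging').someProperty);     // prints 234
console.log(getConfig('production').someProperty);  // prints 456
```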
In VSTS Release, you could use an environment-scoped value per environment:
Share values across all of the tasks within one specific environment by using environment variables. Use an environment-level variable for values that vary from environment to environment (and are the same for all the tasks in an environment). You define and manage these variables in the Variables tab of an environment in a release pipeline.
Source: Custom variables
If you want to change values in files, I suggest the Replace Tokens task, a Visual Studio Team Services Build and Release extension that replaces tokens in files with variable values.
