I have a shell script that installs a particular piece of software on an Azure VM; say it is install_software.sh.
There are environment-specific parameters defined in a .param file. For example:
INSTALLATION_PATH=XYZ
INSTALLER_LOCATION=ABC
I plan to do this:
Create 3 param files specific to the DEV, QA, and PROD environments
Download all three files from GitHub onto the VM
Accept environment name as an argument while executing the script, example:
sh install_software.sh DEV
Check whether $1 of the executed command is DEV, and export the variables from the DEV .param file, as sketched below.
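For reference, a rough sketch of what I have in mind (the param file names dev.param, qa.param and prod.param are my own placeholders):

#!/bin/sh
# install_software.sh - sketch of steps 3 and 4
ENV_NAME="$1"
case "$ENV_NAME" in
    DEV|QA|PROD) ;;
    *) echo "Usage: sh install_software.sh DEV|QA|PROD" >&2; exit 1 ;;
esac

# Lower-case the argument to find the matching param file, e.g. DEV -> dev.param
PARAM_FILE="$(echo "$ENV_NAME" | tr '[:upper:]' '[:lower:]').param"

set -a          # auto-export every variable the sourced file assigns
. "./$PARAM_FILE"
set +a

echo "Installing from $INSTALLER_LOCATION to $INSTALLATION_PATH"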
Now, do you think this is a good approach, or is there a smarter one? I would appreciate being pointed to any sample code snippets too. Thank you very much.
I’m unable to get the environment variables (for example $env:RELEASE_RELEASENAME) in a task that runs a PowerShell script on a target machine; however, the env variables work for the inline PowerShell task.
Does getting env variables from PowerShell on target machines need special treatment, or am I missing something here?
I have sometimes met this problem with the Ubuntu hosted agent. My solution is to add the environment variable manually for the task; then I can get the variable in both an inline script and a script file.
My goal is to be able to develop/add features locally, then create a local Docker build and create a container using the Bitbucket Pipelines repo variables. I don't want to hard-code any secrets on the host machine or inside the code. I'm trying to access some API keys hosted in the Bitbucket pipeline repo variables.
Does anyone know how to do this? I am thinking of some script inside the Dockerfile that will create environment variables inside the container.
You can pass these variables to your container as environment variables when you run the container with the -e flag (see: this question); you could use the Bitbucket variables at this point. When you do this, the variables are available in your Docker container, but of course you will then still have to use them in your Python script, I suppose?
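For example (a rough sketch; the image name my-image is a placeholder, and API_KEY is assumed to be defined as a repo variable in Bitbucket):

# In a pipeline step the repo variable is already in the environment,
# so it can be forwarded into the container:
docker run -e API_KEY="$API_KEY" my-image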
You can easily do that like this:
import os

variable = os.environ['ENV_VARIABLE_NAME']
If you do not want to pass the variables in plain text to the commands like this, you could also set up a MySQL container linked to your Python container that provides your application with the variables. This way everything is secured, dynamic, and not visible to anyone except users with access to your database, and it can still be modified easily. It takes a bit more time to set up, but is less of a hassle than a .env file.
I hope this helps you.
I am trying to create a JS utility to version-stamp a VSTS build with details about the branch and commit id.
I have been using git-rev-sync, which works fine locally. However, when the code is checked out by a VSTS build definition, the checkout is in a detached-HEAD state, and I am no longer able to determine from the git repo itself which branch the current code belongs to.
git-rev-sync reports something along the lines of:
Detached: 705a3e89206882aceed8c9ea9b2f412cf26b5e3f
Instead of "develop" or "master"
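Plain git shows the same behaviour in such a checkout, e.g.:

# In a detached-HEAD checkout this prints "HEAD" instead of the branch name
git rev-parse --abbrev-ref HEAD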
I may look at the VSTS Node SDK, which might be able to pick up VSTS environment variables like you can with PowerShell scripts.
Has anyone done this or solved this problem in a neater way?
The build variables are added to the current process's environment variables, so you can access the built-in Build.SourceBranchName variable through the environment:
PowerShell:
$env:BUILD_SOURCEBRANCHNAME
NodeJS:
process.env.BUILD_SOURCEBRANCHNAME
Shell script:
$BUILD_SOURCEBRANCHNAME
Batch script:
%BUILD_SOURCEBRANCHNAME%
You can also pass it through a task argument ($(Build.SourceBranchName)); for example, use the Replace Tokens task to write the variable's value into a file (replacing %{BUILD_SOURCEBRANCHNAME}%), then read the value from that file.
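For example, a shell script step could stamp a version file with these variables (a rough sketch; version.txt is a placeholder, and BUILD_SOURCEVERSION is the environment form of the Build.SourceVersion commit id):

# Write branch and commit details to a file for later packaging steps
echo "branch=$BUILD_SOURCEBRANCHNAME commit=$BUILD_SOURCEVERSION" > version.txt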
I can't figure out how to access variables in a build script provided via the gitlab-ci.yml file.
I have tried to declare variables in two ways:
Private variables in the web interface of GitLab CI
Variable overrides/appending in config.toml
I try to access them in the commands of my gitlab-ci.yml file like this:
msbuild ci.msbuild [...] /p:Configuration=Release;NuGetOutputDir="$PACKAGE_SOURCE"
where $PACKAGE_SOURCE is the desired variable (PACKAGE_SOURCE), but it does not work (it does not seem to be replaced). Executing the same command manually, with the variable name replaced by its content, works just as expected.
Is there some other syntax required that I am not aware of?
I have tried:
$PACKAGE_SOURCE
$(PACKAGE_SOURCE)
${PACKAGE_SOURCE}
PS: Verifying the runner raises no issues, if this matters.
I presume you are using Windows for your runner? I was having the same issue myself and couldn't even get the following to work:
script:
- echo $MySecret
However, the GitLab documentation has an entry about the syntax of environment variables in job scripts:
To access environment variables, use the syntax for your Runner’s shell
This makes sense, as most of the examples given are for Bash runners. My Windows runner uses %variable%.
I changed my script to the following, which worked for me. (Confirmed by watching the build output.)
script:
- echo %MySecret%
If you're using PowerShell for your runner, the syntax would be $env:MySecret.
In addition to what was said in the answer marked as correct above, you should also check whether your CI variables in the GitLab settings are set as "protected". If so, you may not be able to use them in a branch that is not protected.
"You can protect a project, group or instance CI/CD variable so it is only passed to pipelines running on protected branches or protected tags." -> check it https://docs.gitlab.com/ee/ci/variables/index.html#protect-a-cicd-variable
Be aware that in certain cases some of the variables GitLab CI/CD offers are not always available.
In my case I wanted to use ${CI_COMMIT_BRANCH}, but if you read the doc:
https://docs.gitlab.com/ee/ci/variables/predefined_variables.html
The commit branch name. Available in branch pipelines, including pipelines for the default branch. Not available in merge request pipelines or tag pipelines.
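So it can be worth guarding against the variable being unset, for example:

# Fail fast when CI_COMMIT_BRANCH is unavailable
# (e.g. in merge request or tag pipelines)
if [ -z "${CI_COMMIT_BRANCH:-}" ]; then
    echo "CI_COMMIT_BRANCH is not set in this pipeline type" >&2
    exit 1
fi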
From what I understand, a CloudFormation template can retrieve a file from a remote location and run it (e.g. a bash shell script), for example downloading a bash script that installs Graphite/OpenTSDB RRD tools.
My question is: is there any best practice for choosing between doing the installation step by step with CloudFormation template commands versus having the template retrieve a bash script that runs the installation?
Thanks
There is no "best" way to do it, there are only lots of different options with different trade-offs.
Putting scripts in your CF template quickly becomes tiresome because you have to quote your data.
Linking to shell scripts can get complex because you have to specify everything in detail, and the steps can get brittle.
After a while, you'll want to use Puppet or Chef. These let you declare what you want ("Apache 2.1 should be installed, the config file should look like this...") instead of specifying how it should be done. This can keep complex things organized. (But it has a learning curve. Look into OpsWorks.)
After that, you'll want to bundle your image into an AMI (this speeds things up if your build takes a while, and a from-scratch install relies on other servers on the internet being up!)
I'd suggest you use user data, given as a parameter to your template. Whether it is saved locally or remotely, it is best to separate your infrastructure details (i.e. the template) from the boot logic (the shell script). The user data can be a shell script, and it will be invoked when your instances boot.
Here's an example of providing user-data as a parameter:
"Parameters":{
"KeyName":{
"Description":"N/A",
"Type":"String"
},
"initScript":{
"Description":"The shell script to be executed on boot",
"Type":"String"
},
},
"Resources":{
"workersGroup1":{
"GlobalWorker":{
"Type":"AWS::EC2::Instance",
"Properties":{
"InstanceType":"t1.micro",
"ImageId":"ami-XXXX",
"UserData":{"Fn::Base64":{"Fn::Join":["", [{"Ref":"initScript"}]]}},
...
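A rough usage sketch, assuming the template above is saved as template.json and the boot script as boot.sh (the stack and key names are placeholders):

# Inlining a multi-line script as a parameter value can be fragile;
# a file:// parameters JSON handles quoting more robustly.
aws cloudformation create-stack \
    --stack-name my-stack \
    --template-body file://template.json \
    --parameters ParameterKey=KeyName,ParameterValue=my-key \
                 ParameterKey=initScript,ParameterValue="$(cat boot.sh)"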