I can't seem to run ansible commands inside shell scripts.
Whenever I run ansible or ansible-playbook commands, it fails with the below error:
./check_best_host_for_vm_creation.sh: line 9: syntax error near unexpected token `ansible-playbook'
I am sure that the ansible-playbook command is correct and there is nothing wrong with it, as I am able to execute it successfully from outside the script.
The full script is:
#!/bin/bash
hostname_selected=''
for host in 10.28.187.153 10.28.143.10 do
ansible-playbook /etc/ansible/gather_vcenter_facts.yml --extra-vars "esxi_hostname=$host"
host_memory=`cat /etc/ansible/files/tmp_host_memory`
if [ "$host_memory" -eq 4000]; then
ansible-playbook /etc/ansible/create_vms_on_host.yml --extra-vars "esxi_hostname='$host'"
$hostname_selected=$host
break
fi
done
if ["$hostname_selected = '']; then
echo "No host available with free memory"
else
echo "Script done and the VM is created on host $hostname_selected "
fi
File names are correct, as well as paths.
There were several indentation, spacing, and syntax errors: the for loop was missing a `;` (or a newline) before `do`, the `[ ... ]` test lacked a space before `]`, the assignment used `$hostname_selected=$host` instead of `hostname_selected=$host`, and the final `[` test was missing its spaces. I corrected the script to the following. Please try it and let me know if it works now.
#!/bin/bash
hostname_selected=''
for host in '10.28.187.153' '10.28.143.10'
do
ansible-playbook /etc/ansible/gather_vcenter_facts.yml --extra-vars "esxi_hostname=$host"
host_memory=$( cat /etc/ansible/files/tmp_host_memory )
if [ "$host_memory" -eq 4000 ]
then
ansible-playbook /etc/ansible/create_vms_on_host.yml --extra-vars "esxi_hostname='$host'"
hostname_selected=$host
break
fi
done
if [ "$hostname_selected" = '' ]
then
echo "No host available with free memory"
else
echo "Script done and the VM is created on host $hostname_selected"
fi
Regards!
I am using Travis for CI. For some reason, the builds pass even when some tests fail. See the full log here
https://travis-ci.org/msm1089/hobnob/jobs/534173396
The way I am running the tests is via a bash script, e2e.test.sh, that is run by yarn.
Searching for this specific issue has not turned up anything that helps. I believe it has something to do with exit codes: I need to somehow get the build to exit with a non-zero status, but as you can see at the bottom of the log, yarn exits with 0.
e2e.test.sh
#!/usr/bin/env bash
RETRY_INTERVAL=${RETRY_INTERVAL:-0.2}
# Run our API server as a background process
if [[ "$OSTYPE" == "msys" ]]; then
if ! netstat -aon | grep "0.0.0.0:$SERVER_PORT" | grep "LISTENING"; then
pm2 start --no-autorestart --name test:serve "C:\Program Files\nodejs\node_modules\npm\bin\npm-cli.js" -- run test:serve
until netstat -aon | grep "0.0.0.0:$SERVER_PORT" | grep "LISTENING"; do
sleep $RETRY_INTERVAL
done
fi
else
if ! ss -lnt | grep -q :$SERVER_PORT; then
yarn run test:serve &
fi
until ss -lnt | grep -q :$SERVER_PORT; do
sleep $RETRY_INTERVAL
done
fi
npx cucumber-js spec/cucumber/features --require-module #babel/register --require spec/cucumber/steps
if [[ "$OSTYPE" == "msys" ]]; then
pm2 delete test:serve
fi
.travis.yml
language: node_js
node_js:
  - 'node'
  - 'lts/*'
  - '10'
  - '10.15.3'
services:
  - elasticsearch
before_install:
  - curl -O https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.6.1.deb
  - sudo dpkg -i --force-confnew elasticsearch-6.6.1.deb
  - sudo service elasticsearch restart
before_script:
  - sleep 10
env:
  global:
    - NODE_ENV=test
    - SERVER_PROTOCOL=http
    - SERVER_HOSTNAME=localhost
    - SERVER_PORT=8888
    - ELASTICSEARCH_PROTOCOL=http
    - ELASTICSEARCH_HOSTNAME=localhost
    - ELASTICSEARCH_PORT=9200
    - ELASTICSEARCH_INDEX=test
package.json
...
"scripts": {
  "test": "yarn run test:unit && yarn run test:integration && yarn run test:e2e"
}
...
So, how can I ensure that the cucumber exit code is the one that is returned, so that the build fails as it should when the tests don't pass?
There are a few possible ways to solve this. Here are two of my favorites.
Option 1:
Add set -e at the top of your bash script, so that it exits on the first error, preserving that exit code and consequently failing the Travis build if it is non-zero.
Option 2:
Capture whatever exit code you want, and exit with it wherever it makes sense.
some_command_here   # run whatever command you need
exitcode=$?
[[ $exitcode -eq 0 ]] || exit $exitcode
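Applied to the e2e.test.sh from the question, this means capturing cucumber's status before the trailing pm2 cleanup. Note that on Linux the script currently ends with an if whose condition is false, and a skipped if returns status 0, which masks a cucumber failure. A sketch of the tail of the script:

```shell
# Run the tests and remember their exit status
npx cucumber-js spec/cucumber/features \
  --require-module @babel/register --require spec/cucumber/steps
exitcode=$?

# Clean up the background server regardless of the test result
if [[ "$OSTYPE" == "msys" ]]; then
  pm2 delete test:serve
fi

# Exit with the tests' status so the CI build fails when tests fail
exit $exitcode
```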
As a side note, it seems like your bash script has too many responsibilities. I would consider separating them if possible, then giving Travis a list of commands to run, possibly with one or two before_script commands.
Something along these lines:
# .travis.yml
before_script:
- ./start_server.sh
script:
- npx cucumber-js spec/cucumber/features ...
I want to use the $CI_ENVIRONMENT_SLUG to point our Selenium tests to the right dynamic environment, but the variable is empty.
During the deployment stage it has a proper value, and I don't get why the variable is not available in every stage. The echo command prints an empty line.
Tests:
  image: maven:3.5.0-jdk-8
  stage: Tests and static code checks
  variables:
    QA_PUBLISH_URL: http://$CI_ENVIRONMENT_SLUG-publish.test.com
  script:
    - echo $QA_PUBLISH_URL
    - echo $CI_ENVIRONMENT_SLUG # empty
    - mvn clean -Dmaven.repo.local=../../.m2/repository -B -s ../../settings.xml -P testrunner install -DExecutionID="FF_LARGE_WINDOWS10" -DRunMode="desktopLocal" -DSeleniumServerURL="https://$QA_ZALENIUM_USER:$QA_ZALENIUM_PASS@zalenium.test.com/wd/hub" -Dcucumber.options="--tags @sanity" -DJenkinsEnv="test.com" -DSeleniumSauce="No" -DBaseUrl=$QA_PUBLISH_URL
CI_ENVIRONMENT_SLUG is only available in the review job that has the environment set.
And currently (GitLab 11.2) there is no way to pass variables from one job to another, although you could run:
echo -e -n "$CI_ENVIRONMENT_SLUG" > ci_environment_slug.txt
in the review JOB and add the file to the artifacts:
artifacts:
  paths:
    - ci_environment_slug.txt
and in your Tests job, use
before_script:
- export CI_ENVIRONMENT_SLUG=$(cat ci_environment_slug.txt)
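Putting both pieces together, the relevant parts of the .gitlab-ci.yml might look like this (a sketch; the review job name, stage names, and environment name are assumptions):

```yaml
review:
  stage: deploy
  environment:
    name: review/$CI_COMMIT_REF_NAME
  script:
    - echo -n "$CI_ENVIRONMENT_SLUG" > ci_environment_slug.txt
  artifacts:
    paths:
      - ci_environment_slug.txt

Tests:
  stage: Tests and static code checks
  before_script:
    - export CI_ENVIRONMENT_SLUG=$(cat ci_environment_slug.txt)
  script:
    - echo http://$CI_ENVIRONMENT_SLUG-publish.test.com
```

Artifacts from earlier stages are downloaded automatically in later jobs, so the Tests job can read the file without extra configuration.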
I am trying to set environment variables with EC2's user data, but nothing I do seems to work.
Here are the user data scripts I tried:
#!/bin/bash
echo "export HOST_URL=checkEmai-LoadBala-ICHJ82KG5C7P-23235232.us-east-1.elb.amazonaws.com" >> /env.sh
source /env.sh
And another:
#!/bin/bash
echo "#!/bin/bash" >> /env.sh
echo "export HOST_URL=checkEmai-LoadBala-ICHJ82KG5C7P-67323523.us-east-1.elb.amazonaws.com" >> /env.sh
chmod +x /env.sh
/env.sh
They both do absolutely nothing, and if I log in and issue the command source /env.sh or /env.sh, it works. So this must be something forbidden that I am trying to do.
Here is the output from /var/log/cloud-init-output.log using -e -x
+ echo 'export HOST_URL=checkEmai-LoadBala-ICHJ82KG5C7P-2141709021.us-east-1.elb.amazonaws.com'
+ source /env.sh
++ export HOST_URL=checkEmai-LoadBala-ICHJ82KG5C7P-2141709022.us-east-1.elb.amazonaws.com
++ HOST_URL=checkEmai-LoadBala-ICHJ82KG5C7P-2141709022.us-east-1.elb.amazonaws.com
Still, echo $HOST_URL is empty
As requested, the full UserData script
#!/bin/bash
set -e -x
echo "export HOST_URL=checkEmai-LoadBala-ICHJ82KG5C7P-2141709021.us-east-1.elb.amazonaws.com" >> /env.sh
source /env.sh
/startup.sh staging 2649
One of the more configurable approaches to defining environment variables for EC2 instances is to use Systems Manager Parameter Store. This approach makes it easier to manage different parameters for a large number of EC2 instances, both encrypted using AWS KMS and in plain text. It also allows you to change parameter values with minimal changes at the EC2 instance level. The steps are as follows.
Define string parameters (encrypted with KMS or unencrypted) in EC2 Systems Manager Parameter Store.
In the IAM role the EC2 instance assumes, grant the required permissions to access the Parameter Store.
In the User Data section, read the parameters with the get-parameter or get-parameters AWS CLI commands, export them to environment variables, and control the command output as required.
e.g. using the get-parameter command to retrieve the db_connection parameter (unencrypted):
export DB_CONNECTION=$(aws ssm get-parameter --region us-east-2 --name 'db_connection' --query 'Parameter.Value' --output text)
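For an encrypted SecureString parameter, the same call needs --with-decryption so SSM returns the plaintext value (a sketch; the db_password parameter name is hypothetical):

```shell
# Read an encrypted parameter; --output text strips the JSON quoting
export DB_PASSWORD=$(aws ssm get-parameter \
    --region us-east-2 \
    --name 'db_password' \
    --with-decryption \
    --query 'Parameter.Value' \
    --output text)
```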
Note: For more details on setting up AWS KMS keys, defining encrypted strings, managing IAM policies, etc., refer to the following articles.
Securing Application Secrets with EC2 Parameter Store
Simple Secrets Management via AWS’ EC2 Parameter Store
I find this to be a pretty easy way to set environment variables for all users using User Data. It allows me to configure applications so the same AMI can work with multiple scenarios:
#!/bin/bash
echo 'export DB_CONNECTION="some DB connection"' >> /etc/profile
echo 'export DB_USERNAME="my user"' >> /etc/profile
echo 'export DB_PASSWORD="my password"' >> /etc/profile
Now, all users will have DB_CONNECTION, DB_USERNAME and DB_PASSWORD set as environment variables.
The user data script on EC2 executes after boot in its own process. The environment variables get set in that process and disappear when the process exits. You will not see the environment variables in other processes, i.e., a login shell or any other program.
You will have to devise a way to get these environment variables into whatever program needs to see them.
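The effect is easy to reproduce locally: a variable exported by a child process (here the throwaway name MY_DEMO_VAR, purely illustrative) is invisible to the parent:

```shell
#!/usr/bin/env bash
# The child shell exports the variable, then exits,
# taking its environment with it.
bash -c 'export MY_DEMO_VAR=example.com'

# The parent never saw it: prints "unset"
# (assuming MY_DEMO_VAR was not already set).
echo "${MY_DEMO_VAR:-unset}"
```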
Where do you need these variables to be available? In /startup.sh staging 2649?
EDIT
Try this:
#!/bin/bash
set -e -x
export HOST_URL="checkEmai-LoadBala-ICHJ82KG5C7P-2141709021.us-east-1.elb.amazonaws.com"
/startup.sh staging 2649
Then edit /startup.sh, and put the following line on the top:
echo $HOST_URL > /tmp/var
Boot the instance, and then paste /tmp/var here.
You can add another shell script, /etc/profile.d/yourscript.sh, containing the set of environment variables you want to add.
This script will be sourced at every login, and your variables will be available to all users.
#!/bin/sh
echo 'export AWS_DEFAULT_REGION=ap-southeast-2' > ~/myconfiguration.sh
chmod +x ~/myconfiguration.sh
sudo cp ~/myconfiguration.sh /etc/profile.d/myconfiguration.sh
The above code creates a shell script that sets an environment variable for the AWS default region and copies it to /etc/profile.d.
You can use this script:
#!/bin/bash
echo HOST_URL=\"checkEmai-LoadBala-ICHJ82KG5C7P-23235232.us-east-1.elb.amazonaws.com\" >> /etc/environment
I created an EC2 instance with Amazon Linux AMI 2018.03.0 and added this user data to it and it works fine.
Refer to this answer for more details.
After the user data script finishes its work, the process exits.
So, whatever environment variable you export will not be there in the next process. One way is to put the exports in the ~/.bashrc file so that they are also available in the next session:
echo "export HOST_URL=checkEmai-LoadBala-ICHJ82KG5C7P-23235232.us-east-1.elb.amazonaws.com" >> ~/.bashrc
Adding this to the init script of the node will add environment variables on launch. They won't show up on the node configuration page, but they will be usable in any job.
#!/bin/bash
echo 'JAVA_HOME="/usr/lib/jvm/java-8-oracle/"' | sudo tee -a /etc/profile
This answer is similar to what hamx0r proposed; however, Jenkins doesn't have permission to echo to /etc/profile with or without sudo.
This may not be the exact answer to the OP's question, but it is similar. I'm sharing it because I wasted enough time searching for the answer before finally figuring it out.
Example assuming - AWS EC2 running ubuntu.
If there is a scenario where you need to define the environment variables and also use them in the same bash session (the same user-data process), then you can add the variables to /etc/profile, /etc/environment, or the /home/ubuntu/.zshrc file. I have not tried the /home/ubuntu/.profile file, BTW.
Assuming we add them to the .zshrc file:
sudo su ubuntu -c "$(cat << EOF
echo 'export PATH="/tmp:\$PATH"' >> /home/ubuntu/.zshrc
echo 'export ABC="XYZ"' >> /home/ubuntu/.zshrc
echo 'export PY_VERSION=3.8.1' >> /home/ubuntu/.zshrc
source /home/ubuntu/.zshrc
echo printenv > /tmp/envvars # To test
EOF
)"
Once the user data has finished running, you can see that the environment variables added in the script are echoed to the envvars file. Reloading the shell with source /home/ubuntu/.zshrc makes the newly added variables available in the session.
(additional info) How to install zsh and oh-my-zsh?
sudo apt-get install -y zsh
sudo su ubuntu -c "$(cat << EOF
ssh-keyscan -H github.com >> /home/ubuntu/.ssh/known_hosts
git clone https://github.com/robbyrussell/oh-my-zsh.git /home/ubuntu/.oh-my-zsh
cp /home/ubuntu/.oh-my-zsh/templates/zshrc.zsh-template /home/ubuntu/.zshrc
echo DISABLE_AUTO_UPDATE="true" >> /home/ubuntu/.zshrc
cp /home/ubuntu/.oh-my-zsh/themes/robbyrussell.zsh-theme /home/ubuntu/.oh-my-zsh/custom
EOF
)"
sudo chsh -s /bin/zsh ubuntu
Wondering why I didn't add the environment variables to .bashrc? In the scenario mentioned above (using the environment variables in the same user-data session), adding them to .bashrc won't work: .bashrc is only sourced for interactive shells, and the user-data process is not interactive. So, just like above,
source /home/ubuntu/.bashrc
won't load the new variables. You can see the check written right at the beginning of the .bashrc file:
# If not running interactively, don't do anything
case $- in
*i*) ;;
*) return;;
esac
From this Medium.com article, you can put a script in UserData that writes to a file in /etc/profile.d that will get run automatically when a new shell is run.
Here is an example cloudformation.yaml
Resources:
  WebApi:
    Type: AWS::EC2::Instance
    Properties:
      # ...
      UserData:
        Fn::Base64: !Sub
          - |
            #!/bin/bash
            cat > /etc/profile.d/load_env.sh << 'EOF'
            export ACCOUNT_ID=${AWS::AccountId}
            export REGION=${AWS::Region}
            export SOME_OTHER_RESOURCE_DATA=${SomeOtherResourceData}
            EOF
            chmod a+x /etc/profile.d/load_env.sh
          - SomeOtherResourceData:
              Fn::ImportValue: SomeExportName
And a YAML that exports something
# ...
Outputs:
  SomeExportName:
    Value: !Sub "${WebDb.Endpoint.Address}"
    Export:
      Name: SomeExportName
Here is what is working for me:
UserData:
  Fn::Base64: !Sub |
    #!/bin/bash
    echo "TEST=THISISTEST" >> /etc/environment
The easiest way is definitely to use AWS Elastic Beanstalk: it creates everything you need with very little effort, and it has the easiest way in the entire AWS ecosystem to set your environment variables.
Check it out; there are also some exhaustive tutorials for different languages:
https://docs.aws.amazon.com/elastic-beanstalk/index.html
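For example, with Elastic Beanstalk the variables can be declared in an .ebextensions config file and become visible to the application without any user-data scripting (a sketch; the file name, variable name, and value are placeholders):

```yaml
# .ebextensions/env.config
option_settings:
  aws:elasticbeanstalk:application:environment:
    HOST_URL: my-load-balancer.us-east-1.elb.amazonaws.com
```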
I want awesome wm to run with a different configuration when the network environment changes. Therefore I wrote a script for the NetworkManager dispatcher, so that when the network environment changes the script is executed:
#!/bin/bash
# Restart awesome through awesome-client
USER=dreamingo
awesome_restart(){
    /bin/su $USER -c "echo 'local awful = require (\"awful\"); return awful.util.restart()' | awesome-client"
}

# Check whether the network cable is inserted or not
wire_insert_state=$(cat /sys/class/net/eth0/carrier)

# Check which config awesome is currently using (wired or wireless)
current_config=$(cat ~/.config/awesome/flag)

# if [[ $wire_insert_state = 1 ]] && [[ $current_config == "wireless" ]]; then
if [[ $wire_insert_state = 1 ]]; then
    cp /home/dreamingo/.config/awesome/rc.lua.wire /home/dreamingo/.config/awesome/rc.lua
    echo wired > ~/.config/awesome/flag
    awesome_restart
# elif [[ $wire_insert_state = 0 ]] && [[ $current_config == "wired" ]]; then
elif [[ $wire_insert_state = 0 ]]; then
    cp /home/dreamingo/.config/awesome/rc.lua.wireless /home/dreamingo/.config/awesome/rc.lua
    echo wireless > ~/.config/awesome/flag
    awesome_restart
fi
However, while this script does run when the environment changes, awesome wm won't restart.
I thought it was because the script was executed by root, so I used the following command:
/bin/su $USER -c "echo 'local awful = require (\"awful\"); return awful.util.restart()' | awesome-client"
This command works (restarts awesome wm) when I su to root. However, when I use:
sudo ./02check_wireless #02check_wireless was the name of the script
to run the script, it fails to restart the wm. But when I just run it as the current user (dreamingo), it works.
Moreover, in both cases (success or failure), the script outputs:
Error org.freedesktop.DBus.Error.ServiceUnknown: The name org.naquadah.awesome.awful was not provided by any .service files
I thought the failed run also tried to restart awesome, but something blocked or stopped it.
This variant works fine for me with awesome 3.5.5 when I run it as root:
#!/bin/bash
# Restart awesome through awesome-client
USER=my_username_here
awesome_restart(){
    /bin/su $USER -c "echo 'awesome.restart()' | awesome-client"
}
awesome_restart