I'd like to be able to access my local bin files on CircleCI. Simply exporting a new PATH variable doesn't seem to work: the following works locally, but on CircleCI the variable is unchanged.
circle.yml
machine:
  pre:
    - echo $PATH
    - export PATH=$PATH:./bin
    - echo $PATH
The answer seems to be interpolating in the machine: environment: section; each command under pre: runs in its own shell, so an export there does not persist to later steps.
machine:
  environment:
    PATH: $PATH:./bin
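A later pre command can confirm the interpolated value (a minimal sketch for a classic CircleCI 1.0 circle.yml):
machine:
  environment:
    PATH: $PATH:./bin
  pre:
    - echo $PATH   # should now end in :./bin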
I have a deploy.sh file in my Azure repository and I need to execute this deploy.sh file from an Azure pipeline.
Here are the steps I defined in the pipeline. I tried two ways to do it, through CMD and Bash; neither picks up the right location of the deploy.sh file.
Here is the error I am getting:
This is the error from CMD, but CMD shows green even though the path is not correct.
This is the error from Bash.
Question
How do I correct the path and successfully execute deploy.sh?
You should never hard-code the path of the pipeline run. Instead, use the predefined variables, which automatically pick up the build ID and path. For your case you will need:
$(Pipeline.Workspace)/s/Orchestration/dev/deploy.sh
If you have multiple repositories checked out, you should also include the name of the repository, like:
$(Pipeline.Workspace)/s/dotnetpipeline/Orchestration/dev/deploy.sh
$(Pipeline.Workspace)/s can also be replaced by $(Build.SourcesDirectory).
Predefined variables follow the $(var) notation in the YAML file.
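For instance, a Bash task using the predefined variable might look like this (a minimal sketch; the Orchestration/dev path is taken from the question and assumes a single-repository checkout):
steps:
- task: Bash@3
  inputs:
    targetType: 'inline'
    script: |
      # run the script from the default checkout location
      chmod +x "$(Pipeline.Workspace)/s/Orchestration/dev/deploy.sh"
      "$(Pipeline.Workspace)/s/Orchestration/dev/deploy.sh"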
This solved my issue:
- task: CmdLine@2
  inputs:
    script: |
      echo Write your commands here
      cd $(Build.Repository.Name)/Orchestration/dev/
      chmod +x deploy.sh
      ./deploy.sh
      echo deploy.sh execution completed
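This works, presumably, because script tasks start in $(Build.SourcesDirectory) and, with multiple repositories checked out, each repository lands in a subfolder named after it, so cd $(Build.Repository.Name) ends up in the right checkout.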
Info + objective:
I'm using MAAS to deploy workstations with Ubuntu.
MAAS just deploys the machine with stock Ubuntu, and I then run a bash script I wrote to set up everything needed.
So far, I've run that bash script manually on the newly deployed machines. Now, I'm trying to have MAAS run that script automatically.
What I did + error:
On the MAAS machine, I created the following curtin config file, /var/snap/maas/current/preseeds/curtin_userdata_ubuntu, containing:
write_files:
  bash_script:
    path: /root/script.sh
    content: |
      #!/bin/bash
      echo blabla
      ... very long bash script
    permissions: '0755'
late_commands:
  run_script: ["/bin/bash /root/script.sh"]
However, in the log, I see the following:
known-caiman cloud-init[1372]: Command: ['/bin/bash /root/script.sh']
known-caiman cloud-init[1372]: Exit code: -
known-caiman cloud-init[1372]: Reason: [Errno 2] No such file or directory: '/bin/bash /root/script.sh': '/bin/bash /root/script.sh'
Question
I'm not sure putting such a large bash script in the curtin file is a good idea. Is there a way to store the bash script on the MAAS machine, have curtin upload it to the server, and then execute it? If not, is it possible to fix the error I'm having?
Thanks in advance!
The failure happens because the whole string /bin/bash /root/script.sh is treated as a single executable name; each list item has to be a separate argument. This worked, executing the command as:
["curtin", "in-target", "--", "/bin/bash", "/root/script.sh"]
Though this method still means I have to write to a file and then execute it. I'm still hoping there's a way to upload a file and then execute it.
I do not add my script to the curtin file. I run the command below and deploy the servers:
maas admin machine deploy $system_id user_data=$(base64 -w0 /root/script.sh)
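The same command can be wrapped in a loop to deploy several machines at once (a sketch; assumes the CLI is logged in under the admin profile and jq is installed):
# deploy every machine currently in the "Ready" state with the same first-boot script
for system_id in $(maas admin machines read | jq -r '.[] | select(.status_name == "Ready") | .system_id'); do
    maas admin machine deploy "$system_id" user_data="$(base64 -w0 /root/script.sh)"
done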
I would try:
runcmd:
  - [/bin/scp, user@host:/somewhere/script.sh, /root/]
late_commands:
  run_script: ['/bin/bash', '/root/script.sh']
This obviously implies that you inject the proper credentials into the machine being deployed.
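A sketch of what injecting those credentials could look like, reusing the write_files mechanism from the question (the path and key material are placeholders):
write_files:
  scp_key:
    path: /root/.ssh/id_rsa
    content: |
      -----BEGIN OPENSSH PRIVATE KEY-----
      ...key material...
      -----END OPENSSH PRIVATE KEY-----
    permissions: '0600'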
I'm trying to run some commands on my NodeJS app that need to be run via SSH (Sequelize seeding, for instance); however, when I do so, I noticed that the expected env vars were missing.
If I run eb printenv on my local machine I see the expected environment variables that were set in my EB Dashboard
If I SSH in, and run printenv, all of those variables I expect are missing.
So what happens is, when I run my seeds, I get an error:
node_modules/.bin/sequelize db:seed:all
ERROR: connect ECONNREFUSED 127.0.0.1:3306
I noticed that the port was wrong, it should be 5432. I checked to see if my environment variables were set with printenv and they are not there. This leads me to suspect that the proper env variables are not loaded in my ssh session, and NodeJS is falling back to the default values I provided in my config.
I found some solutions online, like running the following to load the env variables:
/opt/python/current/env
But the python directory doesn't exist. All that's inside /opt/ are the elasticbeanstalk and aws directories.
I had some success, and could at least see that the env variables exist somewhere on the server, by running the following:
sudo /opt/elasticbeanstalk/bin/get-config environment --output YAML
But simply running this command does not fix the problem. All it does is output the expected env variables to the screen. That's something! At least I know they are definitely there! But the env variables are still not there when I run printenv.
So how do I fix this problem? Sequelize and NodeJS are clearly not seeing the env variables either, and it's trying to use the fallback default values that are set in my config file.
I know my answer is late, but I had the same problem, and after some attempts with a bash script I found a way to store it in your env vars.
You can simply run the following command:
export env=`/opt/elasticbeanstalk/bin/get-config environment -k <your-variable-name>`
Now you will be able to easily access this value (the command above stores it in a shell variable named env):
echo $env
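If you need all the variables at once in an SSH session, rather than a single key, the JSON that get-config prints can be turned into exports (a sketch; assumes jq is installed on the instance):
# export every Elastic Beanstalk environment property into the current shell
eval "$(sudo /opt/elasticbeanstalk/bin/get-config environment | jq -r 'to_entries[] | "export \(.key)=\(.value|@sh)"')"
printenv   # the EB variables should now show up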
Afterward, you can utilize the env var to do whatever you like. In my case, I use it to decide which version of my code to build, in a file called build-script.sh whose content is as follows:
# get env variable to know which environment this code is running in
export env=`/opt/elasticbeanstalk/bin/get-config environment -k environment`
# build the code based on the current environment
if [ "$env" = "production" ]
then
  echo "building for production"
  npm --prefix /var/app/current run build-prod
else
  echo "building for non production"
  # assumed non-production npm target; adjust to whatever your package.json defines
  npm --prefix /var/app/current run build-dev
fi
hope this helps anyone facing the same issue 🤟🏻
Hello, I have a Node.js app. In order to set up the app, I have a folder called "setup" with the following files:
commands.js
index.sh
index.js
I also have the following npm script:
"setup": "sh ./setup/index.sh"
Here are the contents of index.sh
#!/bin/sh
echo "Optikos app database setup on progress";
node "$PWD/setup/index.js";
mongo --p 27019 "$PWD/setup/commands.js";
However when I run the script I get the following error:
./setup/index.sh: 4: ./setup/index.sh: mongo: not found
However, mongo is already installed and in my $PATH.
Any ideas why this is happening?
Here is my $PATH:
/home/mkcodergr/.npm-global/lib/node_modules/npm/node_modules/npm-lifecycle/node-gyp-bin:/home/mkcodergr/Documents/GitHub/optikos-app/node_modules/.bin:~/mongo/bin:~/Downloads/ngrok:~/.npm-global/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
The problem was in my ~/.profile file: a tilde inside double quotes is not expanded by the shell, so the literal string ~/mongo/bin ended up in $PATH, and PATH lookup does not expand it. I changed the following lines at the end of the file:
PATH="~/.npm-global/bin:$PATH"
PATH="~/Downloads/ngrok:$PATH"
PATH="~/mongo/bin:$PATH"
to:
PATH="$HOME/.npm-global/bin:$PATH"
PATH="$HOME/Downloads/ngrok:$PATH"
PATH="$HOME/mongo/bin:$PATH"
Then I reloaded my profile and it worked.
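The difference is easy to see in a shell; the second form is what PATH needs:
$ echo "~/mongo/bin"        # a tilde inside double quotes is NOT expanded
~/mongo/bin
$ echo "$HOME/mongo/bin"    # $HOME is expanded even inside double quotes
/home/mkcodergr/mongo/bin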
Setting environment variables for a Hubot is pretty easy on the production server. However, when I want to test the bot locally, I need the env vars inside a file. I already have the .env file for env vars that Heroku uses when running locally.
But I can't seem to find a way to load env vars inside the Hubot scripts from a file.
Merry Christmas :-)
Okay, it's possible with hubot-env.
https://www.npmjs.com/package/hubot-env
The following command will load the file from a relative path:
hubot env load --filename=[filename]
It previously didn't work for me because I had HUBOT_ENV_BASE_PATH set on my Mac, so the command searched in the wrong folder for the file.
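For reference, a minimal file it could load might look like this (the variable names are hypothetical; as far as I can tell hubot-env expects plain KEY=value lines, like a Heroku-style .env file):
# loaded with: hubot env load --filename=.env
EXAMPLE_API_TOKEN=abc123
EXAMPLE_ROOM=general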