Bash: run multiple remote scripts in a shared session - Linux

I am trying to build a userdata load sequence that is assembled from several external files:
$ aws s3 cp s3://my-bucket/init.sh - | bash
$ echo "Some other custom commands"
$ aws s3 cp s3://my-bucket/more-stuff.sh - | bash
Now, init.sh defines some core functions that I need to use, but they are not available in the other script sections, since each one runs in a different bash session.
Is there a way to execute all of these scripts and commands in one single bash session?

You should download the scripts and then run them with source <filename>. Then all defined variables and functions are available to the other scripts.
$ aws s3 cp s3://my-bucket/init.sh ~/s3_init.sh
$ chmod 750 ~/s3_init.sh
$ source ~/s3_init.sh
...
For the cp options of aws s3, see https://docs.aws.amazon.com/cli/latest/reference/s3/cp.html
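Putting it together, a minimal sketch of a combined userdata script (assuming the bucket and file names from the question, and /tmp as a scratch location) could look like this:
#!/bin/bash
# Everything below runs in this one shell, so functions from init.sh stay available.
aws s3 cp s3://my-bucket/init.sh /tmp/s3_init.sh
chmod 750 /tmp/s3_init.sh
source /tmp/s3_init.sh
echo "Some other custom commands"
aws s3 cp s3://my-bucket/more-stuff.sh /tmp/more_stuff.sh
source /tmp/more_stuff.sh   # can call any function defined in init.sh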

Related

Gsutil multiple commands in a shell script work in Git Bash for Windows but won't run properly in the Linux CLI

I have thousands of gsutil commands in a shell script with the syntax
gsutil cp gs://name of bucket/from-path to-path
When I execute this script locally on Windows using Git Bash, it works fine. But when I run it on Linux, it reports that the files are being copied, yet when I look in the destination folder it is empty. Please help me.
The files should be saved in the destination.
The actual commands are:
gsutil cp gs://mycompany/archive/data/raw/integration/THY/2022/12/19/TKT_20221217.zip /home/ds102e/workspace/mft/
gsutil cp gs://mycompany/archive/data/raw/integration/SLK/2022/12/19/SQ9V_20221608.zip /home/ds102e/workspace/mft/
Is it possible you're on a Linux environment where gsutil is defined as an alias that runs gsutil in an isolated filesystem (e.g. within a Docker container)? You can verify this by running type gsutil from your Linux shell. As an example, see this post.
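For reference, a quick way to check (the exact output depends on your setup) is:
$ type gsutil       # reports whether gsutil is an alias, a function, or a plain executable
$ command -v gsutil # shows the resolved path if it is an ordinary executable on $PATH
If type reports an alias that wraps docker run, the copied files end up inside the container's filesystem rather than in /home/ds102e/workspace/mft/ on the host.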

Dockerfile set ENV based on npm package version [duplicate]

Is it possible to set a Docker ENV variable to the result of a command?
Like:
ENV MY_VAR whoami
I want MY_VAR to get the value "root" or whatever whoami returns.
As an addition to DarkSideF's answer: you should be aware that each line/command in a Dockerfile is run in a separate container.
You can do something like this:
RUN export bleah=$(hostname -f);echo $bleah;
This is run in a single container.
At this time, a command result can be used with RUN export, but cannot be assigned to an ENV variable.
Known issue: https://github.com/docker/docker/issues/29110
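A minimal sketch of the difference (using a hypothetical variable name FOO; any base image will do):
RUN export FOO=bar
RUN echo "FOO is: $FOO"                  # prints an empty value: the export happened in a previous, separate container
RUN export FOO=bar; echo "FOO is: $FOO"  # prints "FOO is: bar": both commands share one container/layer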
I had the same issue and found a way to set an environment variable as the result of a command by using RUN in the Dockerfile.
For example, I needed to set SECRET_KEY_BASE for a Rails app just once, without it changing on every run, as it would if I ran:
docker run -e SECRET_KEY_BASE="$(openssl rand -hex 64)"
Instead, I write a line like this to the Dockerfile:
RUN bash -l -c 'echo export SECRET_KEY_BASE="$(openssl rand -hex 64)" >> /etc/bash.bashrc'
and my env variable is available from root, even after a bash login.
Or maybe:
RUN /bin/bash -l -c 'echo export SECRET_KEY_BASE="$(openssl rand -hex 64)" > /etc/profile.d/docker_init.sh'
then the variable is available in CMD and ENTRYPOINT commands.
Docker caches it as a layer and changes it only if you change some lines before it.
You can also try different ways to set an environment variable.
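Note that the /etc/bash.bashrc and /etc/profile.d approaches only take effect for shells that actually read those files. A hypothetical CMD that runs the app through a login shell (which sources /etc/profile.d/*.sh) might look like:
CMD ["/bin/bash", "-l", "-c", "exec bundle exec rails server"]
where bundle exec rails server is just a placeholder for whatever the container should run.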
This answer is a response to #DarkSideF.
The method he is proposing is the following, in the Dockerfile:
RUN bash -l -c 'echo export SECRET_KEY_BASE="$(openssl rand -hex 64)" >> /etc/bash.bashrc'
(adding an export to /etc/bash.bashrc)
It is good, but the environment variable will only be available to the /bin/bash process; if you try to run your Docker application, for example a Node.js application, /etc/bash.bashrc will be completely ignored and your application won't have a single clue what SECRET_KEY_BASE is when trying to access process.env.SECRET_KEY_BASE.
That is the reason the ENV keyword is what everyone tries to use with a dynamic command: every time you run your container or use an exec command, Docker checks ENV and pipes every value into the process currently being run (similar to -e).
One solution is to use a wrapper (credit to #duglin in this github issue).
Have a wrapper file (e.g. envwrapper) in your project root containing:
#!/bin/bash
export SECRET_KEY_BASE="$(openssl rand -hex 64)"
export ANOTHER_ENV="hello world"
$*
and then in your Dockerfile:
...
COPY . .
RUN mv envwrapper /bin/.
RUN chmod 755 /bin/envwrapper
CMD envwrapper myapp
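A slight variant of the same wrapper, shown here only as a sketch, uses exec "$@" so argument quoting is preserved and the app replaces the wrapper process (signals then reach the app directly):
#!/bin/bash
export SECRET_KEY_BASE="$(openssl rand -hex 64)"
export ANOTHER_ENV="hello world"
exec "$@"    # run whatever command was passed to the wrapper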
If you run commands using sh, as seems to be the default in Docker, you can do something like this:
RUN echo "export VAR=`command`" >> /envfile
RUN . /envfile; echo $VAR
This way, you build an env file by redirecting output to the env file of your choice. It's more explicit than having to define profiles and so on.
Then as the file will be available to other layers, it will be possible to source it and use the variables being exported. The way you create the env file isn't important.
Then when you're done you could remove the file to make it unavailable to the running container.
The . is how the env file is loaded.
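As a rough end-to-end sketch of that pattern (the command whose output is captured here, date, is just a stand-in):
RUN echo "export BUILD_DATE=$(date +%Y%m%d)" >> /envfile
RUN . /envfile; echo "building on $BUILD_DATE"
RUN rm /envfile    # optional: drop the file once later layers no longer need it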
As an addition to #DarkSideF's answer, if you want to reuse the result of a previous command in your Dockerfile during the build process, you can use the following workaround:
run a command, store the result in a file
use command substitution to get the previous result from that file into another command
For example:
RUN echo "bla" > ./result
RUN echo $(cat ./result)
For something cleaner, you can also use the following gist, which provides a small CLI called envstore.py:
RUN envstore.py set MY_VAR bla
RUN echo $(envstore.py get MY_VAR)
Or you can use python-dotenv library which has a similar CLI.
Not sure if this is what you were looking for, but in order to inject ENV vars or ARGs into your .Dockerfile build, this pattern works.
In your my_build.sh:
echo getting version of osbase image to build from
OSBASE=$(grep "osbase_version" .version | sed 's/^.*: //')
echo building docker
docker build \
-f PATH_TO_MY.Dockerfile \
--build-arg ARTIFACT_TAG=$OSBASE \
-t my_artifact_home_url/bucketname:$TAG .
For getting an ARG in your .Dockerfile, the snippet might look like this:
ARG ARTIFACT_TAG
FROM my_artifact_home_url/bucketname:${ARTIFACT_TAG}
Alternatively, for getting an ENV in your .Dockerfile, the snippet might look like this:
FROM someimage:latest
ARG ARTIFACT_TAG
ENV ARTIFACT_TAG=${ARTIFACT_TAG}
The idea is that you run the shell script, and it builds the .Dockerfile with the args passed in as options on the build.
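To illustrate how the pieces fit (the version number here is made up), a .version file such as:
osbase_version: 1.4.2
would make the grep/sed line set OSBASE=1.4.2, and the image would then be built FROM my_artifact_home_url/bucketname:1.4.2.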

How to auto quote content after character on each line in Linux? (Elastic Beanstalk environment variables related)

Consider the following file:
APP_ENV=production
APP_NAME=Some API <- This line HERE
RDS_DB_PASSWORD=Some_Strong_Password
// etc...
This file is auto-generated by AWS Elastic Beanstalk for my environments, i.e. I have no control over the formatting of its contents.
When the application runs, it passes the environment variables internally and this works fine. However, when I try to run Laravel commands like the following, it does not escape the contents of each variable:
export $(sudo cat /opt/elasticbeanstalk/deployment/env) && sudo -E -u webapp php artisan some-command
As a result, the value Some API gets passed as Some instead because it has not been wrapped with quotes.
Is there a way to insert quotes around the values that come after the first = in this file on the fly and then pass them to my web app? Alternatively, am I running my commands incorrectly? Given this is Laravel specific, there are no docs on how to run Laravel commands on Elastic Beanstalk running Amazon Linux 2.
I managed to solve this by exporting the variables to my profile upon application deployment like so:
commands:
  setvars:
    command: /opt/elasticbeanstalk/bin/get-config environment | jq -r 'to_entries | .[] | "export \(.key)=\"\(.value)\""' > /etc/profile.d/sh.local
See reference here.
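With the example file from the question, that jq filter would emit properly quoted export lines along the lines of:
export APP_ENV="production"
export APP_NAME="Some API"
export RDS_DB_PASSWORD="Some_Strong_Password"
so values containing spaces survive intact when the profile script is sourced.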

Chaining bash scripts on gcloud shell with gsutil

I built a series of bash scripts to run BigQuery jobs for a data pipeline. These scripts are saved in a Google Cloud Storage bucket. I pipe them to sh using this:
gsutil cat gs://[bucket]/[filename] | sh
Basically there is no problem if I run this from the command line, but once I try running this command from within a bash script, I keep getting "file not found" errors.
It doesn't seem like a PATH issue (I may be mistaken), as echoing $PATH from within the script shows the directory where gsutil is located.
Is this a permissions issue?
I'm running this from within google cloud console shell in my browser. Any help is appreciated.
To start, try printing out the output (both stdout and stderr) of the gsutil cat command, rather than piping it to sh. If you're receiving errors from that command, this will help shed some light on why sh is complaining. (In the future, please try to copy/paste the exact error messages you're receiving when posting a question.)
Comparing the output of gsutil version -l from both invocations will also be helpful. If this is an auth-based problem, you'll probably see different values for the config path(s) lines. If this is the case, it's likely that either:
You're running the script as a different user than the one you normally run gsutil as. gcloud looks under $HOME/.config/gcloud/... for credentials to pass along to gsutil... e.g. if you've run gcloud auth login as Bob, but you're running the script as root, gcloud will try looking for root's credentials instead of Bob's.
In your script, you're invoking gsutil directly from .../google-cloud-sdk/platform/gsutil/, rather than its wrapper .../google-cloud-sdk/bin/gsutil which is responsible for passing gcloud credentials along.
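A rough sketch of those checks, reusing the bucket/filename placeholders from the question:
# Capture stdout and stderr separately instead of piping straight to sh
gsutil cat gs://[bucket]/[filename] > /tmp/pipeline.sh 2> /tmp/gsutil.err
cat /tmp/gsutil.err
# Compare these between the interactive shell and the script
whoami
gsutil version -l | grep -i "config path"
type gsutil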

Add an alias to a machine via a script

We are using the Cloud9 IDE for our dev machines. Within our git repo we have a setup script that configures certain env variables, sets up MySQL, etc. As part of this I want to add an alias:
alias pu='vendor/bin/phpunit tests/'
When I run this on the command line it does what I expect and I can use the command pu,
but when I run it as part of a script it does not add the alias and I cannot use the command pu.
Is there something I need to do first?
You should not execute the script, because it will be run in another shell. You need to use the source or . command, which will execute the commands your script contains in the current shell (see man bash for details).
For example, you could add a snippet like this:
if [[ -f 'your_file_name.bash' ]]; then
  source your_file_name.bash
fi
to your ~/.bashrc. The above code first checks for the existence of your file, then sources it to the current shell.
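Applied to the question, a sketch of how the setup script would be used (assuming the repo's setup script is called setup.sh; that name is not given in the question):
# Do not run ./setup.sh as a child process; load it into the current shell instead
source ./setup.sh
pu    # the alias defined by the script is now available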
