Boy, I'm tired of this topic. I have GitLab CI, local environments, keychain, KeePass, GCP, AWS and a whole bunch of other places where some of my env variables are stored. Furthermore, Expo apps, for example, can't pull .env files, so I have to write bash scripts to create js files. This hurts my brain.
I want to have a cozy place where I store all my variables and secrets safely, per project and per environment. I want to share it with my team, CI servers etc. I want to just specify a single key: the environment title. And all the variables should be pulled from somewhere. Is there such a tool anywhere on GitHub or the internet?
Not sure if this question is suitable for Stack Overflow; please direct me to the right Stack Exchange site if it isn't.
Encrypt config files and deployment scripts file by file and push them to the repo. Whenever you need to switch environments, just decrypt the files that have the environment keyword in their name:
encrypt.sh:
# sh encrypt.sh ./config.production.js passwordhere
echo "encrypting $1"
openssl enc -aes-128-cbc -a -salt -pass pass:"$2" -in "$1" -out "$1.enc"
rm "$1"
mv "$1.enc" "$1"
echo "done"
decrypt.sh:
# bash decrypt.sh production passwordhere
echo "decrypting $1 environment"
find . -not -path "*/node_modules/*" -name "*.$1*" | while read -r file
do
echo "decrypting $file to ${file//.$1}"
openssl enc -aes-128-cbc -a -d -salt -pass pass:"$2" -in "$file" -out "${file//.$1}"
done
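Usage then looks like this (the password and file names are the placeholders from the scripts' own comments):
# lock the production config before committing it
sh encrypt.sh ./config.production.js passwordhere
# on a teammate's machine or a CI server, restore the production files
bash decrypt.sh production passwordhere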
I have a script which executes a gpg encryption command from a sh script through a cronjob.
This is part of my script:
do
gpg --batch --no-tty --yes --recipient "$Key" --output "$Outputdir/${v}.pgp" --encrypt "${v}"
status=$?   # capture gpg's exit status before echo resets $?
echo "$status"
if [ "$status" -eq 0 ];
then
mv "$Inputdir/${v}" "$Readydir/"
echo "file moved"
else
echo "error in encryption"
fi
done
The echoed exit status is 2.
I also tried the command below:
gpg --batch --home-dir dir --recipient $Key --output $Outputdir/${v}.pgp --encrypt ${v}
where dir=/usr/bin/gpg
My complete script:
#set -x
PT=/gonm1_apps/xfb/ref/phoenix_drop
Inputdir=$(grep Inputdir "${PT}/param.cfg" | cut -d "=" -f2)
Outputdir=$(grep Outputdir "${PT}/param.cfg" | cut -d "=" -f2)
Key=$(grep Key "${PT}/param.cfg" | cut -d "=" -f2)
Readydir=$(grep Readydir "${PT}/param.cfg" | cut -d "=" -f2)
echo "$USER"
if [ "$(ls -la "$Inputdir" | grep -E 'S*.DAT')" ]; then
echo "Take action $Inputdir is not Empty"
cd "$Inputdir"
for v in SID_090_*
do
gpg --recipient "$Key" --output "$Outputdir/${v}.pgp" --encrypt "${v}"
status=$?   # capture gpg's exit status before echo resets $?
echo "$status"
if [ "$status" -eq 0 ];
then
mv "$Inputdir/${v}" "$Readydir/"
echo "file moved"
else
echo "error in encryption"
fi
done
cd "${PT}"
else
echo "$Inputdir is Empty"
fi
GnuPG manages individual keyrings and "GnuPG home directories" per user. A common problem when calling GnuPG from web services or cron jobs is that they execute as another user.
This means that the other user's GnuPG looks up keys in the wrong keyring (home directory); and even when that is fixed, that user should not have access permissions to the GnuPG home directory at all (not an issue when running a cron job or web server as root, but that shouldn't be done, for pretty much this reason in the first place).
There are different ways to mitigate the issue:
Run the web server or cron job under another user. This might be a viable solution for cron jobs, but very likely not for web services. sudo or su can help run GnuPG as another user.
Import the required (private/public) keys to the other user's GnuPG home directory, for example by switching to the www-data or root user (or whatever it's called on your machine).
Change GnuPG's behavior to use another user's home directory. You can do so with --homedir /home/[username]/.gnupg, or shorter --homedir ~username/.gnupg if your shell resolves the short-hand. Better not to do this, though: GnuPG is very strict about verifying access privileges and refuses to work if they are too relaxed. GnuPG does not accept permissions that allow any user other than the owner to access a GnuPG home directory, for good reasons.
Change GnuPG's behavior to use a completely unrelated folder as home directory, for example somewhere your application is storing data anyway. This is usually the best solution. Make sure to set the owner and access permissions appropriately. An example would be the option --homedir /var/lib/foo-product/gnupg, as sketched below.
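A minimal sketch of that last option, reusing the variables from the question (the directory path is a placeholder):
#!/bin/sh
# Dedicated GnuPG home directory for this application; create it once, owned
# by the user the cron job runs as, and locked down so GnuPG accepts it:
#   mkdir -p /var/lib/foo-product/gnupg && chmod 700 /var/lib/foo-product/gnupg
GNUPGHOME_DIR=/var/lib/foo-product/gnupg
gpg --batch --no-tty --yes \
    --homedir "$GNUPGHOME_DIR" \
    --recipient "$Key" \
    --output "$Outputdir/${v}.pgp" \
    --encrypt "${v}"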
If echo $USER prints root when the script is executed from the cronjob and your username when it is executed manually, then you need to log in as that user and use a command such as crontab -e to add a cronjob for that user to run your script.
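For example (the schedule and script path are placeholders):
# while logged in as the user whose keyring holds the key:
crontab -e
# then add a line like this so the job runs as that user:
# 0 2 * * * /path/to/your_script.sh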
I have the following bash code for a batch run of multiple files to be processed by 3 different programs:
for i in *.txt
do
program1 -in "${i}" -out "Prog1_out_${i}"
program2 -in "Prog1_out_${i}" -out "Prog2_out_${i}"
program3 -in "Prog2_out_${i}" -out "Prog3_out_${i}"
done
I ran into a problem with program2 not finding its input, which is the output from program1, and of course program3 then did not find its required input either.
Can anyone help with suggestions for solving the problem?
Thanks
If the programs produce their output whenever they are successful, you could make them dependent on the previous command's success, like this:
program1 -in "${i}" -out "Prog1_out_${i}" &&
program2 -in "Prog1_out_${i}" -out "Prog2_out_${i}" &&
program3 -in "Prog2_out_${i}" -out "Prog3_out_${i}"
So if one of the programs fails, the rest of the chain will not be invoked.
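Dropped back into the loop from the question, that is:
for i in *.txt
do
program1 -in "${i}" -out "Prog1_out_${i}" &&
program2 -in "Prog1_out_${i}" -out "Prog2_out_${i}" &&
program3 -in "Prog2_out_${i}" -out "Prog3_out_${i}"
done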
However, if the creation of the output has nothing to do with the success of the program, but you just want to check if the files exist, you can add the appropriate check before you call programx, i.e.
if [ -f "${i}" ]
then
progx ...
fi
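For instance, guarding program2 with the names from the question would look like this:
if [ -f "Prog1_out_${i}" ]
then
program2 -in "Prog1_out_${i}" -out "Prog2_out_${i}"
fi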
As you are doing the same thing every time, this could be generalized for all programs (untested):
for i in *.txt
do
mv "$i" "Prog0_out_$i"
for n in 1 2 3
do
INFILE="Prog$((n-1))_out_$i"
if [ ! -r "$INFILE" ]
then
break
fi
"program$n" -in "$INFILE" -out "Prog${n}_out_$i"
done
done
I'm trying to make a symlink whose name is returned by a previous command, but I get the error "is not a directory".
For each file in a folder I want to make a symlink named after the certificate hash, with a .0 extension (for example 213123.0). This is the code snippet:
for x in *; do openssl x509 -noout -hash -in $x|xargs ln -s $x {} ; done;
Returned:
"ln: target ‘b28afb7c’ is not a directory"
How can I do this correctly?
xargs is not find: you do not need {} to tell xargs where to stick the argument; it always just sticks it at the end. Drop the {} argument and your command will work.
Using the xargs -t argument, which shows you the command it is about to run, would have revealed this for you.
It should also be pointed out that openssl (at least in some versions) ships a c_rehash perl script that does this for you and handles corner cases that naive attempts will not (such as duplicated certificate files and duplicate hash results). Additionally, your snippet doesn't actually append the .0 you said you wanted.
You cannot use xargs to do what you want here, as you cannot control the placement of the argument to xargs in a way that lets you build the hash.0 filename you desire. That said, xargs is entirely unnecessary here, as you only have a single bit of output to deal with.
Either use hash=$(openssl ... "$x"); ln -s "$x" "${hash}.0", or drop the variable entirely and use ln -s "$x" "$(openssl ... "$x").0".
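Spelled out as a loop (a sketch; the *.pem glob is an assumption, adjust it to your file names):
for x in *.pem; do
hash=$(openssl x509 -noout -hash -in "$x")   # e.g. prints b28afb7c
ln -s "$x" "${hash}.0"
done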
If you have GNU Parallel installed:
parallel 'ln -s {} $(openssl x509 -noout -hash -in {}).0' ::: *
If it is not packaged for your system, this should install it in 10 seconds:
(wget -O - pi.dk/3 || curl pi.dk/3/ || fetch -o - http://pi.dk/3) | bash
To learn more: Watch the intro video for a quick introduction:
https://www.youtube.com/playlist?list=PL284C9FF2488BC6D1
Walk through the tutorial (man parallel_tutorial). Your command line will love you for it.
I need to create a script which receives from the CLI the name of a file with the extension .tar.gz and a folder (e.g. ./archivator.sh box.tar.gz MyFolder). This script will archive the files from the folder (only the files WITHIN the folder, without any compression) and move them into the archive received as a parameter. The archive will then be encrypted (using aescrypt) with the password 'apple'.
OS: Debian 6
Note: The final encrypted archive will have the same name as the first given parameter.
What I have tried so far is this:
tar -cvf $1 $2/* | aescrypt -e -p apple - > $1.aes | mv $1.aes $1
And this is what I receive when I am trying to check my script:
tar: This does not look like a tar archive
tar: Exiting with a failure status due to previous errors
Try doing this :
tar cf - "$2"/* | aescrypt -e -p apple - > "$1"
The first - tells tar to write the archive to STDOUT; the trailing - tells aescrypt to read from STDIN.
Works well on Linux (archlinux) with GNU tar 1.26
If it doesn't work, run the script in debug mode:
bash -x script.sh
then come back and post the output.
After a little research, here is the solution:
pushd "$2"
tar cvf "$1" .
openssl aes-256-cbc -in "$1" -out "$1.enc" -pass pass:apple
mv "$1.enc" "$1"
popd
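To get the files back later, the matching decryption step would be something like this (a sketch, with example file names and the same hard-coded password):
# decrypt the archive, then unpack it into the current directory
openssl aes-256-cbc -d -in box.tar.gz -out box.tar -pass pass:apple
tar xvf box.tar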
Your error seems to signal that the interpreter is receiving a file which is not a tar archive, yet it expects one. Have you checked to make sure the file you're providing is a tar archive?
I'm trying to make a bash script in linux where some encrypted data is embedded and then retrieved and decrypted with openssl, like this:
cat | openssl des3 -d -a -salt -pass pass:asdf > output.txt <<EOF
U2FsdGVkX1/zN55FdyL5j1nbDVt5vK4V3WLQrnHPoycCJPwWO0ei3PCrrMqPaxUH.....blablablah data
EOF
The only problem with this, which would otherwise work, is that I have to hit Enter when the script reaches this point. I have tried changing the way the \n characters are placed, but no luck.
I can't afford to press Enter manually for every file that is going to be embedded like this one!
Thanks for your help!
A couple of things wrong here:
You shouldn't use both cat | ... and a here document (<<EOF); use one or the other. The here document is attached to openssl, so cat is left reading from your terminal, which is why the script waits for you to press Enter.
Your example isn't testable because the example text is not the DES3 encryption of any input.
This example works as expected:
cat ~/.profile | openssl des3 -e -a -salt -pass pass:asdf -out /tmp/output.txt
That is, it writes an encrypted version of ~/.profile, base64 encoded, to file /tmp/output.txt.
Here's a working decryption example with a here document:
openssl des3 -d -a -salt -pass pass:asdf <<EOF
U2FsdGVkX1/03DBd+MpEKId2hUY82cLWpYltYy2zSsg=
EOF
Try this in the safety and comfort of your own home...
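Applied to the original script, dropping the cat and keeping the output redirection, that becomes:
openssl des3 -d -a -salt -pass pass:asdf > output.txt <<EOF
U2FsdGVkX1/03DBd+MpEKId2hUY82cLWpYltYy2zSsg=
EOF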