Using VSTS Release to Unzip Files on a Linux VM - linux

I am trying to use VSTS to deploy a zip file to a Linux VM in Azure. I am using an SSH task to run the command:
sudo unzip -ju /home/$USER/release/deployfile-1.6.zip "*.war" -d "/opt/tomee/webapps/"
That command works, but I don't want to edit the command each time the filename changes. I tried using a variable name:
sudo unzip -ju /home/$USER/release/$filename "*.war" -d "/opt/tomee/webapps/"
And I tried using a wildcard:
cd "/home/$USER/release/"
sudo unzip -ju '*.zip' "*.war" -d "/opt/tomee/webapps/"
Neither of those worked, and having little familiarity with Linux, I haven't been able to figure out a syntax that works.
Could someone please advise? Thank you.

Based on @JNevill's comments, I made another attempt at using a variable for the filename. I also changed the u option to an o to automatically overwrite files. The final command syntax is:
sudo unzip -jo "/home/$USER/release/$(filename)" "*.war" -d "/opt/tomee/webapps/"
When the command is executed on the remote VM it becomes:
sudo unzip -jo "/home/$USER/release/deployfile-1.6.zip" "*.war" -d "/opt/tomee/webapps/"
The war files were successfully deployed to the VM.
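For reference, a plain-shell alternative that avoids the pipeline variable entirely is to loop over the archives on the VM. This is a minimal sketch, not from the original post, assuming everything under /home/$USER/release/ should be deployed:
# Hypothetical alternative: unzip every archive in the release directory,
# so no filename has to be hardcoded in the SSH task.
for archive in /home/$USER/release/*.zip; do
    sudo unzip -jo "$archive" "*.war" -d "/opt/tomee/webapps/"
done
Here the shell expands the glob before unzip runs, so each archive is processed explicitly instead of relying on unzip's own wildcard handling.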

Related

Change directory in a startup script debian

I am running a Debian VM instance on GCP Compute Engine and I have added an automation script to be executed on startup.
A few tools are downloaded on startup. The only issue is that everything gets downloaded into the / directory.
I need to download everything into the $HOME directory.
Different ways I have tried:
#!/bin/bash
set -x
cd $HOME
mkdir $HOME/test
cd $HOME/test
apt install wget -y
wget https://download.java.net/openjdk/jdk11/ri/openjdk-11+28_linux-x64_bin.tar.gz
#!/bin/bash
set -x
source $HOME
mkdir $HOME/something
#!/bin/bash
set -x
cd $HOME
mkdir $HOME/something
exec bash
It is still downloaded into the / directory. What else can be done here?
You are trying to do two things: install the wget package and download another package.
Why don't you try installing wget manually?
apt-get install wget
You then have to store the full path in your script and download the needed package into it. Try this:
#!/bin/bash
homePath=$HOME
mkdir -p "$homePath/test"
wget https://download.java.net/openjdk/jdk11/ri/openjdk-11+28_linux-x64_bin.tar.gz -P "$homePath/test/"
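Note that GCE startup scripts run as root, so $HOME may not point where you expect. A hedged variant of the script above that targets an explicit directory (the username and path are placeholders, not from the original question) would be:
#!/bin/bash
set -x
# Startup scripts run as root, so use an explicit target directory
# instead of relying on $HOME. Adjust the path to your own user.
targetDir=/home/your-user/test
mkdir -p "$targetDir"
apt-get install -y wget
wget https://download.java.net/openjdk/jdk11/ri/openjdk-11+28_linux-x64_bin.tar.gz -P "$targetDir"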

Install rabbitmqadmin on linux

I'm trying to install and run rabbitmqadmin on a Linux machine. Following the instructions in the official documentation does not help.
After downloading the linked file (which looks like a Python script), the instructions say to copy it into /usr/local/bin.
Trying to run it by simply invoking rabbitmqadmin results in rabbitmqadmin: command not found. There seems to be no information anywhere about how to get this to work; the site assumes the listed steps work for everyone. It also seems odd that simply copying a Python script into the bin folder should make it a recognised command without having to invoke the Python interpreter every time.
Any help is appreciated.
I spent several hours figuring out how to use rabbitmqadmin in a Linux environment. The steps below finally solved my issue.
On my Ubuntu server, python3 was already installed; I checked it with the command below:
python3 -V
Step 1: Download the Python script to your Linux server.
wget https://raw.githubusercontent.com/rabbitmq/rabbitmq-management/v3.7.8/bin/rabbitmqadmin
Step 2: Change the permissions.
chmod 777 rabbitmqadmin
Step 3: Change the first line of the script (the shebang) as below:
#!/usr/bin/env python3
That's all. Now you can run the commands below.
To list queues:
./rabbitmqadmin -f tsv -q list queues
To delete a queue:
./rabbitmqadmin delete queue name=name_of_queue
To add a binding between an exchange and a queue:
./rabbitmqadmin declare binding source="exchangename" destination_type="queue" destination="queuename" routing_key="routingkey"
RabbitMQ decided to omit one vital piece of information: make the script executable with chmod +x, otherwise it will not run.
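For example, assuming the script was copied to /usr/local/bin as the documentation describes and a suitable Python interpreter is present:
chmod +x /usr/local/bin/rabbitmqadmin
rabbitmqadmin --help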
I want to post my commands for installing rabbitmqadmin; it is a combination of the other answers, with a few improvements toward best practice:
sudo rabbitmq-plugins enable rabbitmq_management
wget 'https://raw.githubusercontent.com/rabbitmq/rabbitmq-management/v3.7.15/bin/rabbitmqadmin'
chmod +x rabbitmqadmin
sed -i 's|#!/usr/bin/env python|#!/usr/bin/env python3|' rabbitmqadmin
mv rabbitmqadmin .local/bin/
rabbitmqadmin -q list queues
This assumes you have already created the ~/.local/bin/ directory and added it to PATH (on Ubuntu, bash adds this directory to PATH automatically if it exists); a sketch follows below.
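A minimal sketch of that setup, assuming a bash login shell (the export line only affects the current shell; new logins pick the directory up automatically on Ubuntu):
mkdir -p ~/.local/bin
export PATH="$HOME/.local/bin:$PATH"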
After installing RabbitMQ on Ubuntu/Debian, you can activate the RabbitMQ admin portal using the following command:
rabbitmq-plugins enable rabbitmq_management
Then you can access the portal at http://localhost:15672. Use "guest" as both the username and password.
Below are the steps to install rabbitmqadmin:
cd /usr/local/bin/
wget http://127.0.0.1:15672/cli/rabbitmqadmin
chmod 777 rabbitmqadmin
For more details, check the official documentation: Obtaining rabbitmqadmin.
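As a quick check after these steps (assuming the management plugin is enabled, the default guest credentials are in place, and a Python interpreter matching the script's shebang is installed):
rabbitmqadmin list queues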

How to copy files from dockerfile to host?

I want to get some files out of the image when the Dockerfile builds successfully, but the build doesn't copy files from the container to the host.
In other words, once the Dockerfile has been built, the files should already be on the host.
This has been possible since Docker 19.03.0, released in July 2019, introduced "custom build outputs". See the official docs about custom build outputs.
To enable custom build outputs from the build image to the host during the build process, you need to activate BuildKit, the newer, recommended, backward-compatible backend for the build phase. See the official docs for enabling BuildKit.
This can be done in 2 ways:
Set the environment variable DOCKER_BUILDKIT=1, or
Set it in the Docker engine by default by adding "features": { "buildkit": true } to the root of the daemon config JSON (a sketch of this follows below).
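A minimal sketch of the second option, assuming the daemon config lives at /etc/docker/daemon.json and contains no other settings (merge by hand if it does):
# Write the BuildKit feature flag into the daemon config and restart the daemon.
echo '{ "features": { "buildkit": true } }' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker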
From the official docs about custom build outputs:
custom exporters allow you to export the build artifacts as files on the local filesystem instead of a Docker image, which can be useful for generating local binaries, code generation etc.
...
The local exporter writes the resulting build files to a directory on the client side. The tar exporter is similar but writes the files as a single tarball (.tar).
If no type is specified, the value defaults to the output directory of the local exporter.
...
The --output option exports all files from the target stage. A common pattern for exporting only specific files is to do multi-stage builds and to copy the desired files to a new scratch stage with COPY --from.
e.g. an example Dockerfile:
FROM alpine:latest AS stage1
WORKDIR /app
RUN echo "hello world" > output.txt
FROM scratch AS export-stage
COPY --from=stage1 /app/output.txt .
Running
DOCKER_BUILDKIT=1 docker build --file Dockerfile --output out .
The tail of the output is:
=> [export-stage 1/1] COPY --from=stage1 /app/output.txt .   0.0s
=> exporting to client                                        0.1s
=> => copying files 45B                                       0.1s
This produces a local file out/output.txt that was created by the RUN command.
$ cat out/output.txt
hello world
All files are output from the target stage
The --output option will export all files from the target stage. So using a non-scratch stage with COPY --from will cause extraneous files to be copied to the output. The recommendation is to use a scratch stage with COPY --from.
Copying files "from the Dockerfile" to the host is not supported. The Dockerfile is just a recipe specifying how to build an image.
When you build, you have the opportunity to copy files from the host into the image you are building (with the COPY or ADD directive).
You can also copy files from a container (an image that has been docker run'd) to the host with docker cp (actually, docker cp can copy from the host to the container as well).
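For example (the container name is hypothetical):
# Copy from a container to the host, and the reverse direction:
docker cp mycontainer:/path/in/container /path/on/host
docker cp /path/on/host mycontainer:/path/in/container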
If you want to get back to your host some files that might have been generated during the build (like for example calling a script that generates ssl), you can run a container, mounting a folder from your host and executing cp commands.
See for example this getcrt script.
docker run -u root --entrypoint=/bin/sh --rm -i -v ${HOME}/b2d/apache:/apache apache << COMMANDS
pwd
cp crt /apache
cp key /apache
echo Changing owner from \$(id -u):\$(id -g) to $(id -u):$(id -u)
chown -R $(id -u):$(id -u) /apache/crt
chown -R $(id -u):$(id -u) /apache/key
COMMANDS
Everything between the COMMANDS markers is executed in the container, including the cp commands, which copy into the host's ${HOME}/b2d/apache folder, mounted inside the container as /apache with -v ${HOME}/b2d/apache:/apache.
That means each time you copy anything into /apache in the container, you are actually copying it into ${HOME}/b2d/apache on the host!
Although it's not directly supported via the Dockerfile functionality, you can copy files from a built docker image.
containerId=$(docker create example:latest)
docker cp "$containerId":/source/path /destination/path
docker rm "$containerId"
Perhaps you could mount a folder from the host as a volume and put the files there? Mounted volumes can usually be written to from within the container, unless you specify the read-only option while mounting.
docker run -v ${PWD}:/<somewritablepath> -w <somewritablepath> <image> <action>
On completion of a Docker build using a Dockerfile, the following worked for me,
where /var/lib/docker/ is the Docker Root Dir on my setup:
sudo find /var/lib/docker/ -name DESIRED_FILE -type f | xargs sudo ls -hrt | tail -1 | grep tgz | xargs -i sudo cp -v '{}' PATH_ON_HOST
You can also use the network. Starting with the script from this answer: https://stackoverflow.com/a/58255859/836450
I run that script on my host, in the folder where I want my output to go.
Inside my Dockerfile I have:
RUN apt-get install -y curl
RUN curl -F 'file=@/path/to/file' http://172.17.0.1:44444/
Note: for me at least, 172.17.0.1 is always the IP of the host while building the Dockerfile.

Docker can't write to directory mounted using -v unless it has 777 permissions

I am using the docker-solr image with Docker, and I need to mount a directory inside it, which I achieve using the -v flag.
The problem is that the container needs to write to the directory that I have mounted into it, but it doesn't appear to have the permissions to do so unless I do chmod 777 on the entire directory. I don't think setting permissions that allow all users to read and write to it is the solution; it's just a temporary workaround.
Can anyone guide me in finding a more canonical solution?
Edit: I've been running docker without sudo because I added myself to the docker group. I just found that the problem is solved if I run docker with sudo, but I am curious if there are any other solutions.
More recently, after looking through some official Docker repositories, I've realized the more idiomatic way to solve these permission problems is to use something called gosu in tandem with an entrypoint script. For example, take an existing Docker project such as solr, the same one I was having trouble with earlier.
The Dockerfile on GitHub builds the entire project very effectively, but does nothing to account for the permission problems.
So to overcome this, first I added the gosu setup to the Dockerfile (if you implement this, note that version 1.4 is hardcoded; check the gosu releases page for the latest version).
# grab gosu for easy step-down from root
RUN mkdir -p /home/solr \
&& gpg --keyserver pool.sks-keyservers.net --recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4 \
&& curl -o /usr/local/bin/gosu -SL "https://github.com/tianon/gosu/releases/download/1.4/gosu-$(dpkg --print-architecture)" \
&& curl -o /usr/local/bin/gosu.asc -SL "https://github.com/tianon/gosu/releases/download/1.4/gosu-$(dpkg --print-architecture).asc" \
&& gpg --verify /usr/local/bin/gosu.asc \
&& rm /usr/local/bin/gosu.asc \
&& chmod +x /usr/local/bin/gosu
Now we can use gosu, which is basically the exact same as su or sudo, but works much more nicely with docker. From the description for gosu:
This is a simple tool grown out of the simple fact that su and sudo have very strange and often annoying TTY and signal-forwarding behavior.
The other changes I made to the Dockerfile were adding these lines:
COPY solr_entrypoint.sh /sbin/entrypoint.sh
RUN chmod 755 /sbin/entrypoint.sh
ENTRYPOINT ["/sbin/entrypoint.sh"]
just to add my entrypoint file to the docker container.
and removing the line:
USER $SOLR_USER
so that by default you are the root user (which is why we have gosu, to step down from root).
Now as for my own entrypoint file, I don't think it's written perfectly, but it did the job.
#!/bin/bash
set -e
export PS1="\w:\u docker-solr-> "
# step down from root when just running the default start command
case "$1" in
start)
chown -R solr /opt/solr/server/solr
exec gosu solr /opt/solr/bin/solr -f
;;
*)
exec "$@"
;;
esac
A docker run command takes the form:
docker run <flags> <image-name> <passed in arguments>
Basically the entrypoint says: if I want to run solr as usual, we pass the argument start at the end of the command, like this:
docker run <flags> <image-name> start
and otherwise it runs whatever command you pass, as root.
The start option first gives the solr user ownership of the directories and then runs the default command. This solves the ownership problem because unlike the dockerfile setup, which is a one time thing, the entry point runs every single time.
So now if I mount directories using the -v flag, before the entrypoint actually runs solr, it will chown the files inside the Docker container for you.
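As an illustration, a hypothetical invocation (the image name and host path are made up for the example):
# The entrypoint chowns /opt/solr/server/solr, then steps down to the solr user via gosu.
docker run -v "$PWD/solr-data:/opt/solr/server/solr" my-solr-image start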
As for what this does to your files outside the container, I've had mixed results because Docker acts a little weird on OS X. For me it didn't change the files outside the container, but on another OS where Docker plays more nicely with the filesystem it might, and I guess that's what you have to deal with if you want to mount files inside the container instead of just copying them in.

How to copy entire folder from Amazon EC2 Linux instance to local Linux machine?

I connected to an Amazon Linux instance over SSH using a private key. I am trying to copy an entire folder from that instance to my local Linux machine.
Can anyone tell me the correct scp command to do this?
Or do I need something more than scp?
Both machines are Ubuntu 10.04 LTS
Another way to do it is:
scp -i "insert key file here" -r "insert ec2 instance here" "your local directory"
One mistake I made was scp -ir. The key has to be after the -i, and the -r after that.
So:
scp -i amazon.pem -r ec2-user@ec2-##-##-##:/source/dir /destination/dir
Call scp from client machine with recursive option:
scp -r user#remote:src_directory dst_directory
scp -i {key path} -r ec2-user@54.159.147.19:{remote path} {local path}
For an EC2 Ubuntu instance:
Go to your .pem file directory, then:
scp -i "yourkey.pem" -r ec2user@DNS_name:/home/ubuntu/foldername ~/Desktop/localfolder
You could even use rsync.
rsync -aPSHiv remote:directory .
This is how I copied a file from Amazon EC2 to a local Windows PC:
pscp -i "your-key-pair.pem" username@ec2-ip-compute.amazonaws.com:/home/username/file.txt C:\Documents\
For Linux to copy a directory:
scp -i "your-key-pair.pem" -r username#ec2-ip-compute.amazonaws.com:/home/username/dirtocopy /var/www/
To connect to amazon it requires key pair authentication.
Note:
Username most probably is ubuntu.
I use sshfs to mount the remote directory on the local machine and then work with the files as if they were local; a small sketch follows, and the commands may vary on your system.
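A minimal sketch under assumed names (key path, host, and mount point are placeholders):
# Install sshfs, mount the remote home directory locally, unmount when done.
sudo apt-get install -y sshfs
mkdir -p ~/ec2-mount
sshfs -o IdentityFile=~/keys/amazon.pem ubuntu@ec2-host:/home/ubuntu ~/ec2-mount
# ... work with the files under ~/ec2-mount as if they were local ...
fusermount -u ~/ec2-mount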
This is also important and related to the answer above.
Copying all files in a local directory to EC2 (a Unix answer).
Copy the entire local folder to a folder in EC2:
scp -i "key-pair.pem" -r /home/Projects/myfiles ubuntu#ec2.amazonaws.com:/home/dir
Copy only the entire contents of local folder to folder in EC2:
scp -i "key-pair.pem" -r /home/Projects/myfiles/* ubuntu#ec2.amazonaws.com:/home/dir
I do not like to use scp for a large number of files as it does a 'transaction' for each file. The following is much better:
cd local_dir; ssh user@server 'cd remote_dir_parent; tar -cf - remote_dir' | tar -xf -
You can add a z flag to tar to compress on server and uncompress on client.
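For example, with compression enabled on both ends (same placeholders as the command above):
cd local_dir; ssh user@server 'cd remote_dir_parent; tar -czf - remote_dir' | tar -xzf -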
One way I found in a YouTube video is to connect a local folder with a shared folder on the EC2 instance; the sharing is instantaneous.
