How can I upload a 'secret file' via the Azure DevOps REST API?

There is a secure file store built into Azure DevOps, available here: https://dev.azure.com/{organization}/{project}/_library?itemType=SecureFiles
I want to upload a file into that secure storage from a pipeline (because the file is generated by the pipeline). I understand that currently there is no dedicated task type to do this, so I have to do it via the Azure DevOps REST API.
How can I do that?

This bash script seems to work for me:
set -e
name="my-file"
# $(System.AccessToken) is expanded by Azure Pipelines before the script runs.
token="$(System.AccessToken)"
base_url="https://dev.azure.com/{organization}/{project}/_apis"
local_file="/path/to/local/file"

echo "Check if secure file $name exists"
existing_id="$(curl --fail -s -L -u ":${token}" "${base_url}/distributedtask/securefiles?api-version=5.0-preview.1" | jq -r ".value[] | select(.name == \"$name\").id")"
if [ -n "$existing_id" ]; then
    echo "Delete existing secure file: $existing_id"
    curl --fail -v -L -X DELETE -u ":${token}" "${base_url}/distributedtask/securefiles/$existing_id?api-version=5.0-preview.1"
fi

echo "Uploading secure file as: $name"
curl --fail -v -L -X POST -u ":${token}" -H "Content-Type: application/octet-stream" --data-binary "@${local_file}" "${base_url}/distributedtask/securefiles?api-version=5.0-preview.1&name=${name}"
NOTE: I've used the API call examples that I've found here: https://documenter.getpostman.com/view/10072318/SzfAyS4s#3f75659d-4461-4efe-9ba3-77d5112f0bbe
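If you want the pipeline step to confirm the upload, a minimal follow-up sketch (reusing the $name, $token and $base_url variables and the same list endpoint as above) could query the secure files again and fail the step when the file is missing:
# Hypothetical verification step: list secure files again and look for the one just uploaded.
uploaded_id="$(curl --fail -s -L -u ":${token}" "${base_url}/distributedtask/securefiles?api-version=5.0-preview.1" | jq -r ".value[] | select(.name == \"$name\").id")"
if [ -n "$uploaded_id" ]; then
    echo "Secure file $name uploaded with id $uploaded_id"
else
    echo "Secure file $name not found after upload" >&2
    exit 1
fi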

Related

GitLab: How can I programmatically download the artifacts issued at the end of a CI pipeline?

In GitLab, how can I programmatically download the artifacts issued at the end of a CI pipeline?
It is easy to download them via the UI, but how can I get them through the API? In other words, is it possible to access them via a token or something similar?
It is possible through the API as in https://docs.gitlab.com/ee/api/jobs.html#get-job-artifacts
GET /projects/:id/jobs/:job_id/artifacts
Example requests:
Using the PRIVATE-TOKEN header:
curl --location --header "PRIVATE-TOKEN: 9koXpg98eAheJpvBs5tK" "https://gitlab.example.com/api/v4/projects/1/jobs/8/artifacts"
Using the JOB-TOKEN header (only inside .gitlab-ci.yml):
curl --location --header "JOB-TOKEN: $CI_JOB_TOKEN" "https://gitlab.example.com/api/v4/projects/1/jobs/8/artifacts"
Using the job_token parameter (only inside .gitlab-ci.yml):
curl --location --form "job_token=$CI_JOB_TOKEN" "https://gitlab.example.com/api/v4/projects/1/jobs/8/artifacts"
This works for me:
#!/bin/bash

GITLAB_URL="https://gitlab.example.com"
GITLAB_ARTIFACT_TOKEN="<token>"
group="<group>"
project="<project>"
branch="<branch>"
job="<job>"

outZipFile="$project.zip"
outHeadersFile="$outZipFile.httpheaders"

etagArgs=()
# The following is unfortunately not yet supported by GitLab; the returned Etag never changes:
#if [[ -f "$outHeadersFile" ]] && [[ -f "$outZipFile" ]]; then
#    etag=$(grep etag < "$outHeadersFile" | cut -f2 -d' ')
#    if [[ -n "$etag" ]]; then
#        etagArgs=("--header" "If-None-Match: $etag")
#        echo "using etag: $etag"
#    fi
#fi

response=$(curl "$GITLAB_URL/api/v4/projects/${group}%2F${project}/jobs/artifacts/$branch/download?job=$job" \
    --silent \
    -w "%{http_code}\n" \
    -D "$outHeadersFile" \
    -o "$outZipFile.tmp" \
    --header "PRIVATE-TOKEN: $GITLAB_ARTIFACT_TOKEN" \
    "${etagArgs[@]}")

if [[ "$response" == 4* ]] || [[ "$response" == 5* ]]; then
    echo "ERROR - Http status: $response"
    rm "$outZipFile.tmp"
    exit 1
elif [[ "$response" == 304 ]]; then
    echo "$project is up-to-date"
else
    echo "update $outZipFile"
    mv "$outZipFile.tmp" "$outZipFile"
fi
This worked for me.
I created a new personal access token with the api scope.
Then I used the token to download the artifacts via a curl command, as shown below.
curl --location --header "PRIVATE-TOKEN:MY_PRIVATE_TOKEN" "https://it-gitlab.cloud.net/api/v4/projects/projectId/jobs/jobId/artifacts" --output watcher

How to create bashrc script for anonymous upload service file.io?

I am trying to create a script for my .bashrc that works for the anonymous file upload service file.io...
Here is what I have been working with but this one is for transfer.sh's service:
# anonymous file uploading to transfer.sh via command line ($ upload file.any)
upload() { if [ $# -eq 0 ]; then echo -e "No arguments specified. Usage:\necho transfer /tmp/test.md\ncat /tmp/test.md | transfer test.md"; return 1; fi
tmpfile=$( mktemp -t transferXXX ); if tty -s; then basefile=$(basename "$1" | sed -e 's/[^a-zA-Z0-9._-]/-/g'); curl --progress-bar --upload-file "$1" "https://transfer.sh/$basefile" >> $tmpfile; else curl --progress-bar --upload-file "-" "https://transfer.sh/$1" >> $tmpfile ; fi; cat $tmpfile; rm -f $tmpfile; }
which works wonderfully! But the code below, which came from file.io's website, is not helping me one bit. I feel I have tried everything, but I am not the best at coding so far...
curl -F "file=@test.txt" https://file.io
{"success":true,"key":"2ojE41","link":"https://file.io/2ojE41","expiry":"14 days"}
I have tried many different things (and many different variations as well), the first one being:
upload2 () {
curl -F "file=$1" https://file.io
{"success":true,"key":"2ojE41","link":"https://file.io/2ojE41","expiry":"14 days"}
}
Can anyone help me to learn?
The following works for me.
upload2() {
    curl -F "file=@$1" https://file.io
}
Then you can use upload2 Testfile
and get the response:
{"success":true,"key":"d8Lobo","link":"https://file.io/d8Lobo","expiry":"14 days"}

How can I CURL a remote file to a remote server

I am able to upload a local file to a remote server using the command
curl -T abc.pom -uusername:password URL
But I am not able to upload a remote file to that URL. The command I am using is this
curl -T remote-file-url -uusername:password URL
Is it not possible to do this? Is downloading it and then uploading it again the only option here?
My approach:
TF=/tmp/temp && curl <REMOTE_FILE_URL> -o $TF && curl -T $TF <UPLOAD_URL> && rm -f $TF
It might be possible to pipe the content of the file from the first cURL to the second, but then the second one has to prepare the form-encoded body by itself. The -T option is a shorthand for this; it builds and populates the upload body directly:
curl <REMOTE_FILE_URL> | curl -i -X POST -H "Content-Type: multipart/form-data" -d @- <UPLOAD_URL>
You may curl directly from the remote host by sending a command through ssh to the destination host:
ssh user@destination_host 'curl -o destination_file_path remote_file_url'
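If the upload endpoint accepts a plain PUT (which is what -T normally issues), a streaming sketch without a temporary file is also possible; -T - tells curl to read the upload body from stdin. Note that curl then streams with chunked transfer encoding, which some servers reject, so the temp-file approach above is the safer default:
# Hypothetical streaming variant: download to stdout, upload from stdin, no temp file on disk.
curl --fail --silent <REMOTE_FILE_URL> | curl --fail -T - -u username:password <UPLOAD_URL>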

run gpg encryption command through cronjob

I have a sh script which executes a gpg encryption command through a cronjob.
This is part of my script:
do
    gpg --batch --no-tty --yes --recipient $Key --output $Outputdir/${v}.pgp --encrypt ${v}
    echo "$?"
    if [ "$?" -eq 0 ];
    then
        mv $Inputdir/${v} $Readydir/
        echo "file moved"
    else
        echo "error in encryption"
    fi
done
The echo "$?" gives the value 2.
I also tried the command below:
gpg --batch --homedir dir --recipient $Key --output $Outputdir/${v}.pgp --encrypt ${v}
where dir=/usr/bin/gpg
My complete script
#set -x
PT=/gonm1_apps/xfb/ref/phoenix_drop
Inputdir=`grep Inputdir ${PT}/param.cfg | cut -d "=" -f2`
Outputdir=`grep Outputdir ${PT}/param.cfg | cut -d "=" -f2`
Key=`grep Key ${PT}/param.cfg | cut -d "=" -f2`
Readydir=`grep Readydir ${PT}/param.cfg | cut -d "=" -f2`
echo $USER
if [ "$(ls -la $Inputdir | grep -E 'S*.DAT')" ]; then
    echo "Take action $Inputdir is not Empty"
    cd $Inputdir
    for v in `ls SID_090_*`
    do
        gpg --recipient $Key --output $Outputdir/${v}.pgp --encrypt ${v}
        echo "$?"
        if [ "$?" -eq 0 ];
        then
            mv $Inputdir/${v} $Readydir/
            echo "file moved"
        else
            echo "error in encryption"
        fi
    done
    cd ${PT}
else
    echo "$Inputdir is Empty"
fi
GnuPG manages individual keyrings and "GnuPG home directories" per user. A common problem when calling GnuPG from web services or cron jobs is that it ends up executing as another user.
This means that the other user's GnuPG looks up keys in the wrong keyring (home directory); and even if that is fixed, that user usually should not have access permissions to the GnuPG home directory at all (not an issue when running a cron job or web server as root, but that shouldn't be done, for pretty much this reason in the first place).
There are different ways to mitigate the issue:
Run the web server or cron job as another user. This might be a viable solution for cron jobs, but very likely not for web services. sudo or su might help with running GnuPG as another user.
Import the required (private/public) keys into the other user's GnuPG home directory, for example by switching to the www-data or root user (or whatever it's called on your machine) and importing there.
Change GnuPG's behavior to use another user's home directory. You can do so with --homedir /home/[username]/.gnupg, or shorter --homedir ~username/.gnupg if your shell resolves the shorthand. Better not to do this, as GnuPG is very strict about verifying access privileges and refuses to work if those are too relaxed. GnuPG doesn't like permissions that allow users other than the owner to access a GnuPG home directory at all, for good reasons.
Change GnuPG's behavior to use a completely unrelated folder as home directory, for example somewhere your application is storing data anyway. This is usually the best solution. Make sure to set the owner and access permissions appropriately. An example would be the option --homedir /var/lib/foo-product/gnupg; a sketch of this follows below.
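Here is a minimal, hypothetical version of such a cron-invoked call; the home directory, recipient and file paths are placeholders, and --trust-model always is only there to avoid the interactive trust prompt in batch mode (only use it if you trust the key):
#!/bin/sh
# Hypothetical cron-friendly invocation with an explicit, dedicated GnuPG home directory.
GNUPG_HOME=/var/lib/foo-product/gnupg    # owned by the cron user, permissions 700
KEY="recipient@example.com"

gpg --homedir "$GNUPG_HOME" --batch --no-tty --yes --trust-model always \
    --recipient "$KEY" --output /path/to/out.pgp --encrypt /path/to/in
echo "gpg exit code: $?"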
If echo $USER prints root when the script is executed from the cronjob and prints your username when it is executed manually, then you need to log in as that user and use a command such as crontab -e to add a cronjob for that user to run your script.
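For example, a user-level crontab entry (the schedule and paths are placeholders) could look like this after running crontab -e as that user:
# Hypothetical crontab line: run the encryption script daily at 02:00 as the logged-in user.
0 2 * * * /home/username/bin/encrypt.sh >> /home/username/encrypt.log 2>&1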

How to check status of URLs from text file using bash shell script

I have to check the status of 200 http URLs and find out which of these are broken links. The links are present in a simple text file (say URL.txt present in my ~ folder). I am using Ubuntu 14.04 and I am a Linux newbie. But I understand the bash shell is very powerful and could help me achieve what I want.
My exact requirement would be to read the text file which has the list of URLs and automatically check if the links are working and write the response to a new file with the URLs and their corresponding status (working/broken).
I created a file "checkurls.sh" and placed it in my home directory where the urls.txt file is also located. I gave execute privileges to the file using
$ chmod +x checkurls.sh
The contents of checkurls.sh is given below:
#!/bin/bash
while read url
do
    urlstatus=$(curl -o /dev/null --silent --head --write-out '%{http_code}' "$url")
    echo "$url $urlstatus" >> urlstatus.txt
done < $1
Finally, I executed it from command line using the following -
$ ./checkurls.sh urls.txt
Voila! It works.
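If you prefer the output to literally say working or broken, as described in the question, a small variation of the same loop could map the status code (treating any 2xx or 3xx response as working is an assumption you may want to tighten):
#!/bin/bash
# Hypothetical variant: translate the HTTP status code into working/broken wording.
while read -r url
do
    code=$(curl -o /dev/null --silent --head --write-out '%{http_code}' "$url")
    case "$code" in
        2*|3*) echo "$url working" ;;
        *)     echo "$url broken ($code)" ;;
    esac
done < "$1" > urlstatus.txt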
#!/bin/bash
while read -ru 4 LINE; do
    read -r REP < <(exec curl -IsS "$LINE" 2>&1)
    echo "$LINE: $REP"
done 4< "$1"
Usage:
bash script.sh urls-list.txt
Sample:
http://not-exist.com/abc.html
https://kernel.org/nothing.html
http://kernel.org/index.html
https://kernel.org/index.html
Output:
http://not-exist.com/abc.html: curl: (6) Couldn't resolve host 'not-exist.com'
https://kernel.org/nothing.html: HTTP/1.1 404 Not Found
http://kernel.org/index.html: HTTP/1.1 301 Moved Permanently
https://kernel.org/index.html: HTTP/1.1 200 OK
For everything, read the Bash Manual. See man curl, help, man bash as well.
What about adding some parallelism to the accepted solution? Let's modify the script chkurl.sh to be a little easier to read and to handle just one request at a time:
#!/bin/bash
URL=${1?Pass URL as parameter!}
curl -o /dev/null --silent --head --write-out "$URL %{http_code} %{redirect_url}\n" "$URL"
And now you check your list using:
cat URL.txt | xargs -P 4 -L1 ./chkurl.sh
This could finish the job up to 4 times faster.
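If you still want a single result file, you can simply redirect the combined xargs output; note that with -P 4 the lines appear in completion order, not in the order of URL.txt:
cat URL.txt | xargs -P 4 -L1 ./chkurl.sh > urlstatus.txt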
Herewith my full script that checks URLs listed in a file passed as an argument e.g. 'checkurls.sh listofurls.txt'.
What it does:
check each URL using curl and return the HTTP status code
send an email notification when a URL returns a code other than 200
create a temporary lock file for failed URLs (file naming could be improved)
send an email notification when a URL becomes available again
remove the lock file once the URL becomes available, to avoid further notifications
log events to a file and handle increasing log file size (AKA log rotation; uncomment the echo if logging of code 200 is required)
Code:
#!/bin/sh
EMAIL="your@email.com"
DATENOW=`date +%Y%m%d-%H%M%S`
LOG_FILE="checkurls.log"
c=0
while read url
do
    ((c++))
    LOCK_FILE="checkurls$c.lock"
    urlstatus=$(/usr/bin/curl -H 'Cache-Control: no-cache' -o /dev/null --silent --head --write-out '%{http_code}' "$url")
    if [ "$urlstatus" = "200" ]
    then
        #echo "$DATENOW OK $urlstatus connection->$url" >> $LOG_FILE
        [ -e $LOCK_FILE ] && /bin/rm -f -- $LOCK_FILE > /dev/null && /bin/mail -s "NOTIFICATION URL OK: $url" $EMAIL <<< 'The URL is back online'
    else
        echo "$DATENOW FAIL $urlstatus connection->$url" >> $LOG_FILE
        if [ -e $LOCK_FILE ]
        then
            #no action - awaiting URL to be fixed
            :
        else
            /bin/mail -s "NOTIFICATION URL DOWN: $url" $EMAIL <<< 'Failed to reach or URL problem'
            /bin/touch $LOCK_FILE
        fi
    fi
done < $1

# REMOVE LOG FILE IF LARGER THAN 100MB
# allow up to 2000 lines average
maxsize=120000
size=$(/usr/bin/du -k "$LOG_FILE" | /bin/cut -f 1)
if [ $size -ge $maxsize ]; then
    /bin/rm -f -- $LOG_FILE > /dev/null
    echo "$DATENOW LOG file [$LOG_FILE] has been recreated" > $LOG_FILE
else
    #do nothing
    :
fi
Please note that changing the order of the listed URLs in the text file will affect any existing lock files (remove all .lock files to avoid confusion). This could be improved by using the URL as the file name, but certain characters such as : # / ? & would have to be handled for the operating system.
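One way around the character problem would be to derive the lock file name from a hash of the URL instead of a counter; a sketch (md5sum is assumed to be available, as on most Linux systems):
# Hypothetical helper: hash the URL so characters like : # / ? & never reach the filesystem.
lock_file_for_url() {
    echo "checkurls_$(printf '%s' "$1" | md5sum | cut -d' ' -f1).lock"
}

# Inside the loop, instead of LOCK_FILE="checkurls$c.lock":
LOCK_FILE=$(lock_file_for_url "$url")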
I recently released deadlink, a command-line tool for finding broken links in files. Install with
pip install deadlink
and use as
deadlink check /path/to/file/or/directory
or
deadlink replace-redirects /path/to/file/or/directory
The latter will replace permanent redirects (301) in the specified files.
If your input file contains one URL per line, you can use a script to read each line and then try to ping the URL; if the ping succeeds, the URL is considered valid.
#!/bin/bash
INPUT="Urls.txt"
OUTPUT="result.txt"
while read line
do
    if ping -c 1 $line &> /dev/null
    then
        echo "$line valid" >> $OUTPUT
    else
        echo "$line not valid" >> $OUTPUT
    fi
done < $INPUT
exit
ping options:
-c count
    Stop after sending count ECHO_REQUEST packets. With the deadline option, ping waits for count ECHO_REPLY packets, until the timeout expires.
You can also use this option to limit the waiting time:
-W timeout
    Time to wait for a response, in seconds. The option affects only the timeout in the absence of any responses; otherwise ping waits for two RTTs.
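Combining both options, the check inside the loop could be tightened like this (one echo request, waiting at most two seconds; the values are arbitrary, and $line/$OUTPUT come from the script above):
# Hypothetical tightened check: one echo request, at most a 2-second wait per URL.
if ping -c 1 -W 2 $line &> /dev/null
then
    echo "$line valid" >> $OUTPUT
else
    echo "$line not valid" >> $OUTPUT
fi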
curl -s -I --http2 "http://$1" | tee -a fullscan_curl.txt | grep HTTP >> fullscan_httpstatus.txt
This works for me: it logs the full response headers to fullscan_curl.txt and appends just the HTTP status line to fullscan_httpstatus.txt.
