I have written a shell script to create a workspace in Eclipse Che. The script generates the workspace URL, but when I open it, the workspace does not exist. Am I missing something?
#!/bin/sh
#=========================================================================
repo_url=$1
branch_name=$2
ip_addr=$3
#echo $repo_url
#B="$(cut -d'/' -f4 <<<$repourl)"
temp_var="$(echo $repo_url | cut -d'/' -f5)"
repo_name="$(echo $temp_var | cut -d '.' -f1)"
#rm /opt/stackstorm/packs/jenkins/template/latest_template
sudo cp /opt/stackstorm/packs/dev_prediction/actions/latest_template.sh /opt/stackstorm/packs/dev_prediction/template/
#sed -i "s/repourl/$repo_url/g;s/branchname/$branch_name/g;s/reponame/$repo_name/g;s/ipaddr/$ip_addr/g" latest_template
sudo sed -i "s#repourl#$repo_url#g;s#branchname#$branch_name#g;s#reponame#$repo_name#g;s#ipaddr#$ip_addr#g" /opt/stackstorm/packs/dev_prediction/template/latest_template.sh
sudo chmod u+x /opt/stackstorm/packs/dev_prediction/template/latest_template.sh
#cat /opt/stackstorm/packs/jenkins/template/latest_template
sudo sh /opt/stackstorm/packs/dev_prediction/template/latest_template.sh > /dev/null
sudo rm /opt/stackstorm/packs/dev_prediction/template/latest_template.sh
echo "http://$ip_addr/dashboard/#/ide/admin/$repo_name-$branch_name"
Expected and Actual URL:
"http://URL.ap-south-1.elb.amazonaws.com/dashboard/#/ide/admin/petclinic-new-AIAS-37"
I don't actually follow... From what I can see here, you are just generating the URL, but you are not creating a workspace. Was that done before?
To open a workspace by accessing URL of the workspace, the workspace has to exist first ;-)
If you are looking for how to create a workspace from a script, you can use the REST API endpoints for that. In the default Eclipse Che deployment, we also deploy Swagger at /swagger (in your case "http://URL.ap-south-1.elb.amazonaws.com/swagger"), so you can take a look at all the API endpoints yourself.
The most interesting for you might be:
POST to <cheUrl>/api/devfile to create a workspace from a devfile (the new Che 7 workspace definition file)
POST to <cheUrl>/api/workspace to create a workspace from a Che 6 workspace definition JSON.
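For example, here is a minimal, untested sketch of the devfile route with curl; devfile.yaml is a placeholder file you would supply, and the exact content type and any authentication header depend on your deployment, so check /swagger first:
CHE_URL="http://URL.ap-south-1.elb.amazonaws.com"
curl -X POST "$CHE_URL/api/devfile" \
     -H "Content-Type: text/yaml" \
     --data-binary @devfile.yaml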
Hope this helps.
Radim
I'm trying to use wget to elegantly & politely download all the PDFs from a website. The PDFs live in various subdirectories under the starting URL. It appears that the -A pdf option is conflicting with the -r option. But I'm not a wget expert!
wget -nd -np -r site/path
faithfully traverses the entire site downloading everything downstream of path (not polite!). This command:
wget -nd -np -r -A pdf site/path
finishes immediately having downloaded nothing. Running that same command in debug mode:
wget -nd -np -r -A pdf -d site/path
reveals that the sub-directories are ignored with the debug message:
Deciding whether to enqueue "https://site/path/subdir1". https://site/path/subdir1 (subdir1) does not match acc/rej rules. Decided NOT to load it.
I think this means that the subdirectories did not satisfy the "pdf" filter and were excluded. Is there a way to get wget to recurse into subdirectories (of arbitrary depth) and only download PDFs (into a single local dir)? Or does wget need to download everything, and then I need to manually filter for PDFs afterward?
UPDATE: thanks to everyone for their ideas. The solution was to use a two step approach including a modified version of this: http://mindspill.net/computing/linux-notes/generate-list-of-urls-using-wget/
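A rough sketch along those lines (not the exact commands used, and untested here; the recursion flags and the grep pattern will likely need tuning for the actual site):
# Step 1: spider the site and collect the PDF URLs from wget's log output.
wget --spider -r -np https://site/path 2>&1 | grep '^--' | awk '{print $3}' | grep '\.pdf$' | sort -u > pdflinks.txt
# Step 2: download only the PDFs into a single local directory.
wget -nd -P pdfs/ -i pdflinks.txt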
Try this:
The -l switch tells wget to go one level down from the primary URL specified. You can obviously change that to however many levels down in the links you want to follow.
wget -r -l1 -A.pdf http://www.example.com/page-with-pdfs.htm
Refer to man wget for more details.
If the above doesn't work, try this:
First verify that the TOS of the web site permit crawling it. Then one solution is:
mech-dump --links 'http://example.com' |
grep pdf$ |
sed -E 's/\s+/%20/g' |
xargs -I% wget http://example.com/%
The mech-dump command comes with Perl's WWW::Mechanize module (the libwww-mechanize-perl package on Debian and Debian-like distros).
To install mech-dump:
sudo apt-get update -y
sudo apt-get install -y libwww-mechanize-perl
GitHub repo: https://github.com/libwww-perl/WWW-Mechanize
I haven't tested this, but you can still give it a try; what I think is that you still need to find a way to get all the URLs of a website and pipe them to any of the solutions I have given.
You will need to have wget and lynx installed:
sudo apt-get install wget lynx
Prepare a script; name it however you want. For this example it's called pdflinkextractor:
#!/bin/bash
WEBSITE="$1"
echo "Getting link list..."
# Dump the page's links with lynx, keep only URLs ending in .pdf, and save them.
lynx -cache=0 -dump -listonly "$WEBSITE" | grep ".*\.pdf$" | awk '{print $2}' | tee pdflinks.txt
echo "Downloading..."
# Download every collected PDF URL into a single directory.
wget -P pdflinkextractor_files/ -i pdflinks.txt
To run the file:
chmod 700 pdflinkextractor
$ ./pdflinkextractor http://www.pdfscripting.com/public/Free-Sample-PDF-Files-with-scripts.cfm
I am trying to use VSTS to deploy a zip file to a Linux VM in Azure. I am using an SSH task to run the command:
sudo unzip -ju /home/$USER/release/deployfile-1.6.zip "*.war" -d "/opt/tomee/webapps/"
That command works. I don't want to edit the command each time the filename changes, though. I tried using a variable name:
sudo unzip -ju /home/$USER/release/$filename "*.war" -d "/opt/tomee/webapps/"
And I tried using a wildcard:
cd "/home/$USER/release/"
sudo unzip -ju '*.zip' "*.war" -d "/opt/tomee/webapps/"
Neither of those worked, and having little familiarity with Linux, I haven't been able to figure out a syntax that works.
Could someone please advise? Thank you.
Based on @JNevill's comments, I made another attempt at using a variable for the filename. I also changed the -u parameter to -o to automatically overwrite files. The final command syntax is:
sudo unzip -jo "/home/$USER/release/$(filename)" "*.war" -d "/opt/tomee/webapps/"
When the command is executed on the remote VM it becomes:
sudo unzip -jo "/home/$USER/release/deployfile-1.6.zip" "*.war" -d "/opt/tomee/webapps/"
The war files were successfully deployed to the VM.
I have a script which SSHs into a list of hosts and deletes files. The problem I am facing is that when the deletion happens, the console asks for a password.
To get around that I now use the code below, which hardcodes the password:
time cat ../hosts.txt | xargs -P 16 -I foo ssh foo ' echo "MYPASSWORD" | sudo -kS rm -rf /tmp/randomfile*'
Is there any way to avoid hardcoding the password?
You could set up an ssh-agent.
I would suggest using an SSH key; you could check this article describing how to create it and add it to your ssh-agent:
https://help.github.com/articles/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent/
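A minimal sketch of that setup with standard OpenSSH tools (adjust the key type, path, and hosts to your environment):
ssh-keygen -t ed25519 -C "you@example.com"   # generate a key pair
eval "$(ssh-agent -s)"                       # start the agent in this shell
ssh-add ~/.ssh/id_ed25519                    # add the private key to the agent
ssh-copy-id user@host                        # install the public key on each host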
I updated the script with the absolute paths. Also here is my current cronjob entry.
I went and fixed the SSH key issue so I know it works now, but I might still need to tell rsync what key to use.
The script runs fine when called manually by the user. It looks like not even the rm commands are being executed by the cron job.
UPDATE
I updated my script but basically it's the same as the one below. Below I have a new cron time and added an error output.
I get nothing. It looks like the script doesn't even run.
crontab -e
35 0 * * * /bin/bash /x/y/z/s/script.sh > /tmp/tc.log 2>&1
#!/bin/bash
# Clean up
/bin/rm -rf /z/y/z/a/b/current/*
cd /z/y/z/a/to/
/bin/rm -rf ?s??/D????
cd /z/y/z/s/
# Find the latest file
FILE=`/usr/bin/ssh user@server /bin/ls -ht /x/y/z/t/a/ | /usr/bin/head -n 1`
# Copy over the latest archive and place it in the proper directory
/usr/bin/rsync -avz -e /usr/bin/ssh user@server:"/x/y/z/t/a/$FILE" /x/y/z/t/a/
# Unzip the zip file and place it in the proper directory
/usr/bin/unzip -o /x/y/z/t/a/$FILE -d /x/y/z/t/a/current/
# Run Dev's script
cd /x/y/z/t/
./old.py a/current/ t/ 5
Thanks for the help.
I figured it out: I'm used to working in CST and the server was in GMT.
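For anyone hitting the same thing, a quick way to compare the two clocks is to run these on both machines (timedatectl assumes systemd is available):
date            # local time and time zone abbreviation
date -u         # UTC, for an apples-to-apples comparison
timedatectl     # shows the configured time zone on systemd systems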
Thanks everybody for the help.
I am trying to make a simple script to create automatic FTP backups, reading the domain, user, and password from a CSV. I am using a wget command like this, and if I launch it from the console, it works out of the box:
wget -r -P ./directory ftp://domain.com --ftp-user=myuser --ftp-password=mypassword
The problem occurs when I parameterize that command in a shell script to use it for many websites:
#!/bin/sh
#Read CSV.
while IFS=, read dominio usuario contrasenya
do
echo "Realizando Backup de: $dominio $usuario $contrasenya"
#Crear el backup del sitio web.
wget -r -P ./ ftp://"$dominio" --ftp-user="$usuario" --ftp-password="$contrasenya"
done < sitios-coma.csv
It returns 'Incorrect login'.
Does anyone know what I am doing wrong?