I want to get the short hash/SHA of a GitHub commit. Is there a way to get the short hash using the GitHub API?
I was not able to find anything about it on the official documentation page.
This trick did it for me:
curl -s -L https://api.github.com/repos/:ORG/:REPO/git/refs/heads/master | grep sha | cut -d '"' -f 4 | cut -c 1-7
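If jq is available, a slightly tidier variant is to ask the commits endpoint for the ref directly and slice the sha field. This is just a sketch; :ORG and :REPO are the same placeholders as above, and it assumes jq is installed:
# Fetch the latest commit on master and print the first 7 characters of its SHA
curl -s -L https://api.github.com/repos/:ORG/:REPO/commits/master | jq -r '.sha[0:7]'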
I have 10K+ XML files, and about half of them contain the following line that I'd like to replace:
<protocol_name_from_source><![CDATA[This section will be completed when reviewed by an Expert Review Panel.]]></protocol_name_from_source>
with this:
<protocol_name_from_source><![CDATA[Not applicable.]]></protocol_name_from_source>
I've been able to successfully grep for the affected files:
grep -rl '<process\_review><\!\[CDATA\[<p>The Expert Review Panel has not reviewed this measure yet\.<\/p>\]\]><\/process\_review>' ./
but I can't seem to be able to replace the text with sed:
grep -rl '<process\_review><\!\[CDATA\[<p>The Expert Review Panel has not reviewed this measure yet\.<\/p>\]\]><\/process\_review>' ./ | xargs sed -i 's/<process\_review><\!\[CDATA\[<p>The Expert Review Panel has not reviewed this measure yet\.<\/p>\]\]><\/process\_review>/<process\_review><\!\[CDATA\[<p>Not applicable\.<\/p>\]\]><\/process\_review>/g'
Appreciate any help in advance.
edit: These XMLs are in a git repo. Is there any risk of corrupting the repo?
Hmm, according to my man page, sed's -i option expects to be followed by an extension (possibly zero-length). And the command should be introduced with -e if it is not the only command parameter.
So here I would use:
grep -rl '<process\_review><\!\[CDATA\[<p>The Expert Review Panel has not reviewed this measure yet\.<\/p>\]\]><\/process\_review>' ./ | xargs sed -i '' -e 's/<process\_review><\!\[CDATA\[<p>The Expert Review Panel has not reviewed this measure yet\.<\/p>\]\]><\/process\_review>/<process\_review><\!\[CDATA\[<p>Not applicable\.<\/p>\]\]><\/process\_review>/g'
Beware, I have not reviewed the (long...) sed s/.../.../ command...
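As for the edit about the git repo: sed -i only rewrites files in the working tree, so the repository itself is not at risk; the edits show up as ordinary modifications you can review or throw away. A minimal sketch of a cautious workflow (the file path and patterns below are placeholders, not your actual expressions):
# Dry run on a single affected file first: no -i, output goes to stdout only
sed -e 's/OLD_PATTERN/NEW_PATTERN/g' path/to/one/file.xml | less
# After the real run, review what changed and revert if something went wrong
git diff --stat
git checkout -- .    # discards all working-tree changes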
I'm trying to scrape the Binance price.
I've been playing around with:
price1=$(echo -s https://api.binance.com/api/v3/ticker/price?symbol=ETHBTC | grep -o 'price":"[^"]*' | cut -d\" -f3)
echo $price1
I got the price but also an error like:
line 15: https://api.binance.com/api/v3/ticker/price?symbol=ETHBTC:
No such file or directory
Can someone explain how to use it correctly?
Finally, I'd like to have the price in dollars.
echo -s doesn't do anything special on my Linux. It just prints -s.
Use curl to download the data and jq to process it.
It is as simple as:
curl -s 'https://api.binance.com/api/v3/ticker/price?symbol=ETHBTC' | jq -r .price
The arguments of jq:
.price is the price property of the current object (.).
-r tells it to return raw data; the value of .price is a string in the JSON downloaded from the URL.
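For the dollar price, the simplest approach is probably to query a USD-quoted pair directly instead of converting; a sketch assuming Binance lists the ETHUSDT symbol:
# Same pattern as above, but against the USDT-quoted pair
curl -s 'https://api.binance.com/api/v3/ticker/price?symbol=ETHUSDT' | jq -r .price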
I have a script that downloads Slack with the wget command. Since the script runs every time a computer is configured, I always need to download the latest version of Slack.
I work on Debian 9.
Right now I'm doing this:
wget https://downloads.slack-edge.com/linux_releases/slack-desktop-3.3.7-amd64.deb
and I tried this:
curl -s https://slack.com/intl/es/release-notes/linux | grep "<h2>Slack" | head -1 | sed 's/[<h2>/]//g' | sed 's/[a-z A-Z]//g' | sed "s/ //g"
This returns: 3.3.7
I then add it to: wget https://downloads.slack-edge.com/linux_releases/slack-desktop-$curl-amd64.deb
but it's not working.
Do you know why this doesn't work?
Your script produces a long string with a lot of leading whitespace.
bash$ curl -s https://slack.com/intl/es/release-notes/linux |
> grep "<h2>Slack" | head -1 |
> sed 's/[<h2>/]//g' | sed 's/[a-z A-Z]//g' | sed "s/ //g"
3.3.7
You want the string without spaces, and the fugly long pipeline can be simplified significantly.
bash$ curl -s https://slack.com/intl/es/release-notes/linux |
> sed -n "/^.*<h2>Slack /{;s///;s/[^0-9.].*//p;q;}"
3.3.7
Notice also that the character class [<h2>/] doesn't mean at all what you think. It matches a single character which is < or h or 2 or > or / regardless of context. So for example, if the current version number were to contain the digit 2, you would zap that too.
Scraping like this is very brittle, though. I notice that if I change the /es/ in the URL to /en/ I get no output at all. Perhaps you can find a better way to obtain the newest version (using apt should allow you to install the newest version without any scripting on your side).
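Putting it together, a sketch of the full download step, assuming the download URL keeps following the slack-desktop-<version>-amd64.deb pattern:
# Extract the newest version number from the release notes page
version=$(curl -s https://slack.com/intl/es/release-notes/linux |
  sed -n "/^.*<h2>Slack /{;s///;s/[^0-9.].*//p;q;}")
# Plug it into the download URL
wget "https://downloads.slack-edge.com/linux_releases/slack-desktop-${version}-amd64.deb"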
echo wget "https://downloads.slack-edge.com/linux_releases/slack-desktop-$(curl -s "https://slack.com/intl/es/release-notes/linux" | xmllint --html --xpath '//h2' - 2>/dev/null | head -n1 | sed 's/<h2>//;s#</h2>##;s/Slack //')-amd64.deb"
will output:
wget https://downloads.slack-edge.com/linux_releases/slack-desktop-3.3.7-amd64.deb
I used xmllint to parse the html and extract the first part between <h2> tags. Then some removing with sed and I receive the newest version.
Edit:
Noticing that you could just grep <h2> from the site to get the version, you can get it with just:
curl -s "https://slack.com/intl/es/release-notes/linux" | grep -m1 "<h2>" | cut -d' ' -f2 | cut -d'<' -f1
I have a page exported from a wiki and I would like to find all the links on that page using bash. All the links on that page are in the form [wiki:<page_name>]. I have a script that does:
...
# First search for the links to the pages
search=`grep '\[wiki:' pages/*`
# Check if our search turned up anything
if [ -n "$search" ]; then
# Now, we want to cut out the page name and find unique listings
uniquePages=`echo "$search" | cut -d'[' -f 2 | cut -d']' -f 1 | cut -d':' -f2 | cut -d' ' -f 1 | sort -u`
....
However, when presented with a grep result with multiple [wiki: entries in it, it only pulls the last one and not any of the others. For example, if $search is:
Before starting the configuration, all the required libraries must be installed to be detected by Cmake. If you have missed this step, see the [wiki:CT/Checklist/Libraries "Libr By pressing [t] you can switch to advanced mode screen with more details. The 5 pages are available [wiki:CT/Checklist/Cmake/advanced_mode here]. To obtain information about ea - '''Installation of Cantera''': If Cantera has not been correctly installed or if you do not have sourced the setup file '''~/setup_cantera''' you should receive the following message. Refer to the [wiki:CT/FormulationCantera "Cantera installation"] page to fix this problem. You can set the Cantera options to OFF if you plan to use built-in transport, thermodynamics and chemistry.
then it only returns CT/FormulationCantera and doesn't give me any of the other links. I know this is due to using cut, so I need a replacement for the $uniquePages line.
Does anybody have any suggestions in bash? It can use sed or perl if needed, but I'm hoping for a one-liner to extract a list of page names if at all possible.
egrep -o '\[wiki:[^]]*]' pages/* | sed 's/\[wiki://;s/]//' | sort -u
Update: to remove everything after a space, without using cut:
egrep -o '\[wiki:[^]]*]' pages/* | sed 's/\[wiki://;s/]//;s/ .*//' | sort -u
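If your grep supports PCRE (-P, as in GNU grep), a lookbehind avoids the sed step entirely; this sketch stops at the first space or closing bracket, matching the behaviour above:
# -h suppresses filenames, -o prints only the match, -P enables PCRE lookbehind
grep -ohP '(?<=\[wiki:)[^]\s]+' pages/* | sort -u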
We can get the information about a client using
p4 client -o *clientname*
but it gives a lot of information. Is there any way to get only the view of the client from the command line?
You can use p4's -z tag option to get annotated output useful for scripting. From there, you can extract the lines that start with ... View using grep and cut:
p4 -z tag client -o | grep -E '^[.]{3} View' | cut -d ' ' -f 3-
(And if you're using Windows, you can obtain grep and cut implementations from UnxUtils.)
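Alternatively, you can work from the plain client spec and print the indented mapping lines that follow the View: field with awk; a sketch assuming the usual spec layout (field name on its own line, mappings tab-indented below it), with clientname as a placeholder for your client's name:
# Print only the indented lines that follow "View:" in the client spec
p4 client -o clientname | awk '/^View:/ {f=1; next} /^[^[:space:]]/ {f=0} f'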