What is the point of using -o - in curl? - linux

I see a lot of bash install scripts that go like this:
curl -o- http://example.org/script.sh | bash
E.g.: https://github.com/nvm-sh/nvm#install--update-script
As far as I know, writing to stdout is the default behaviour of curl.
Is there a specific reason to use this in any case?

As you note, -o - is the default behavior out of the box, but users might have a .curlrc file that specifies a different local default.
Also, -o - helps document what's going on to readers who are unfamiliar with curl.
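A quick sketch of the behaviour described above, reusing the example.org URL from the question (and assuming no .curlrc is in play):
# with default settings these are equivalent: the body goes to stdout and is piped to bash
curl http://example.org/script.sh | bash
curl -o- http://example.org/script.sh | bash
# whereas -o with a filename writes the body to that file instead of stdout
curl -o script.sh http://example.org/script.sh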

Related

Sending multiline command output via wget as post-data

I'm trying to run a command that will send a POST request with data that is a result of another command. An example will say it all:
wget -O- --post-data=$(ls -lah) http://192.168.30.53/api/v2/network/clients
This is obviously wrong, but I have no idea how to "escape" the value of the ls -lah command before passing it as a parameter.
Current output if needed:
wget: invalid option -- '>'
wget: --wait: Invalid time period '-r--r--'
You do not escape it - you quote it. Check your scripts with shellcheck.
... --post-data="$(ls -lah)" ...
I have no idea how to "escape" the value of ls -lah command before
passing it as a parameter.
The wget man page describes --post-data=string and --post-file=file together; the relevant part for this case is that
--post-data sends string as data, whereas --post-file sends the
contents of file. Other than that, they work in exactly the same way.
and that
argument to "--post-file" must be a regular file
Due to the above limitation, piping from stdin would not work, but that is not a problem unless you are barred from creating regular files - just create a temporary file for it, something like this:
ls -lah > lsdata
wget -O- --post-file=lsdata http://192.168.30.53/api/v2/network/clients
rm lsdata
should work (I do not have the ability to test it).
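A slightly more defensive variant of the same idea, as an untested sketch: mktemp gives the temporary file a unique name, and the trap removes it even if wget fails (same endpoint as above):
tmpfile=$(mktemp) || exit 1
trap 'rm -f "$tmpfile"' EXIT     # clean up the temporary file on exit
ls -lah > "$tmpfile"
wget -O- --post-file="$tmpfile" http://192.168.30.53/api/v2/network/clients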

Bash -x or set -x

Is there any practical difference in debugging using bash -x scriptname.sh or using set -x inside the script and then calling it?
In my personal experience I always use bash -x because that way I don't have to change the script so it's less invasive.
Is there a scenario where bash -x would not fit and just set -x would work?
I came up with these questions because I always see people suggesting set -x when trying to help others debug, and I was wondering why not bash -x scriptname.sh. Is there a technical limitation?
PS: I'm not asking about what the commands do, but about how they do it and whether one is better than the other.
About what they do:
bash -x command
What does `set -x` do?
I would also suggest set -x and not bash -x to a user debugging a script.
The user fully expects to modify the script to fix the bug, so adding a line is a non-issue.
You don't have to know how the user runs their script (it could be via some build script or IDE, and not just ./foo in a terminal)
You don't have to confirm the shebang, since #!/bin/bash -eui would require you to instead run bash -euix file and #!/bin/sh would require sh -x file.
But for myself? Yeah, I'd just run bash -x.
The main reason to avoid putting options in the shebang is that, strictly speaking, a POSIX kernel doesn't have to support more than one argument. So #!/usr/bin/env bash -x may not work (#!/bin/bash -x may, but then you may be using the ancient bash on macOS, for example). (That said, many UNIX-like systems do, in fact, support shebangs with multiple arguments.)
If you're dealing with security or secrets at the start of your script, but you want to be able to debug later on, you may want to set +x for the security-related section. But you can always do set -x afterwards, so that is not a particularly robust reason to avoid bash -x.
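As a rough sketch of that pattern (the secret file path here is hypothetical):
#!/bin/bash
set -x                                        # trace the ordinary setup
echo "normal setup step"
set +x                                        # stop tracing before the sensitive part
read -r api_token < /run/secrets/api_token   # hypothetical secret; not echoed to the trace
set -x                                        # resume tracing for the rest of the script
echo "continuing setup"
The trace prints the set +x line itself, but nothing between it and the following set -x.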
They are different interfaces to the same functionality; check out the docs:
All of the single-character options used with the set builtin (see The Set Builtin) can be used as options when the shell is invoked.
The difference is: do I set it before the first line of the script gets evaluated (bash -x ./script.sh), or do I set it within the script (set -x anywhere in the script)? Both can enable and disable the simple trace functionality, and the last setting "wins", in that it affects the lines that follow. There is no functional difference in the effect the two have otherwise.
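A tiny demonstration of that "last setting wins" behaviour (debug.sh is just a placeholder name):
# debug.sh
echo "traced only when the script is started with bash -x"
set +x    # tracing off from here on, even under bash -x
echo "never traced"
set -x    # tracing on from here on, even when started plainly
echo "always traced"
Running bash -x debug.sh traces the first and third echo; running bash debug.sh traces only the third.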

Preserve colors while piping setup script from CURL to BASH

I am currently running a vbox provisioner with curl -sSL http://goo.gl/DGs3Fv | bash and noticed that I am losing my tty coloring while doing so. Is there any way to fix this without capturing into a variable and echoing out "after the fact"?
I've tried the answer in Can colorized output be captured via shell redirect? but this does not seem to work in my situation. Thanks.
It appears that your distro's init scripts are checking stdin to determine whether to output colors to stdout, and if so that's a bug on their side.
The answer you link to is right; you should be able to work around it with script:
curl -sSL http://goo.gl/DGs3Fv | script -c 'bash' -q /dev/null
Running script .. curl | bash wouldn't work though, since it's bash and not curl you're trying to fool.
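For context, the kind of check the answer is describing usually looks something like this (a hypothetical sketch, not your distro's actual code):
if [ -t 0 ]; then                 # stdin is a terminal: use colors
    green='\033[0;32m'; reset='\033[0m'
else                              # stdin is a pipe, as in curl ... | bash
    green=''; reset=''
fi
printf "${green}OK${reset}\n"
script allocates a pseudo-terminal for the command it runs, so a test like [ -t 0 ] passes and the colors come back.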

linux terminal web browser

Is there a program I can run in the terminal that will spit out a webpage's contents? Basically, I want to redirect a webpage's output (robots.txt) to a txt file.
Wget has this option, amongst others; this will output the page to standard output:
wget -O - http://www.example.com/robots.txt
and this will write it into a file that you specify:
wget -O /whatever/output/file/you/put/here.txt http://www.example.com/robots.txt
If you want to append multiple commands to one file, you can use the shell's redirection capabilities. This will append the content of the page to the end of the specified file, preserving its previous content:
wget -O - http://www.example.com/robots.txt >> /home/piskvor/yourfile.txt
Telnet has been a well-known (though now largely forgotten, I guess) tool for looking at a web page. The general idea is to telnet to the HTTP port, type an HTTP/1.1 GET request, and then observe the results on the screen. A good explanation is at http://support.microsoft.com/kb/279466
A Google search yields a whole bunch more.
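A minimal example of such a session (host and path are just placeholders):
telnet www.example.com 80
GET /robots.txt HTTP/1.1
Host: www.example.com
Connection: close

After the blank line that ends the request headers, the server's response, including the robots.txt body, is printed to the screen.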

Setting up Dreamhost Cron job to simply execute URL

Just when I thought I was understanding cron jobs, I realize I'm still not understanding. I'm trying to set up a cron job through Dreamhost to ping a URL once an hour. This URL when visited performs a small(ish) query and updates the database.
A few examples I've tried which don't seem to have worked:
wget -O /dev/null http://www.domain.com/index.php?ACT=25&profile_id=1
and
wget -q http://www.domain.com/index.php?ACT=25&profile_id=1
The correct domain was inserted into the URL of course.
So, what am I missing? How could I execute a URL via a Cronjob?
One thing: are you quoting your URL?
Try with:
wget -O /dev/null "http://www.domain.com/index.php?ACT=25&profile_id=1"
Having an unquoted ampersand in the URL usually leads to strange behaviour (the process going into the background and the rest of the URL being ignored, etc.).
I just had the exact same problem, and I found that two solutions work. One is, as Victor Pimentel suggested, enclosing the URL in double quotes; the second option is to escape the & character in the cron job like this: \&. In your case the statement would look like this:
wget -q http://www.domain.com/index.php?ACT=25\&profile_id=1
or
wget -q "http://www.domain.com/index.php?ACT=25&profile_id=1"
Putting the following in the Dreamhost control panel under Goodies > Cron seems to work for me:
wget -qO /dev/null http://domain/cron.php
links -dump http://Txx-S.com/php/test1.php
This worked much better than wget. It echoes the output of the PHP script into the email without all the junk that wget provides. Took a while to get here, but it IS in the Dreamhost documentation. You don't need all the home/user stuff and the headache of placing all the PHP files under different users...
IMHO.
Pete
