Execute the output of previous command line - linux

I need to execute the output of a previous command, but I don't know how to go about it.
I have a first command that prints an instruction for logging in to the server, and I want to execute that instruction right afterwards.
my-first-command returns: docker login ...
For example:
> my-first-command | execute the result of my-first-command

This should do it, I believe.
my-first-command | bash
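For illustration, a minimal sketch (the login details are hypothetical):
# suppose my-first-command prints: docker login -u someuser registry.example.com
my-first-command | bash          # runs the printed command in a child bash
eval "$(my-first-command)"       # alternative: runs it in the current shell instead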

I use $(!!) for this. As Charles points out, this may not be what everyone wants to do, but it works for me and suits my purpose better than the other answer.
$ find ./ -type f -name "some.sh"
$ $(!!)
!! is a history expansion that recalls the last command, and wrapping it in $( ) re-runs that command and substitutes its output onto the command line, where it is then executed.
This is also useful for taking other actions on the output, since $( ) can be used anywhere a word is expected.
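For example, to act on the output in another way (the editor here is just an example):
$ find ./ -type f -name "some.sh"
$ vim $(!!)    # re-runs the find and opens whatever path it prints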

The handiest way is to use backticks, `your_command`, to run your sub-command inline and use its output immediately in your main command.
Example:
`find ~/Library/Android/sdk/build-tools/* -d 0 | tail -1`/zipalign -f 4 ./app-release-unsigned.apk ./app-release.apk
In this example I first find the correct directory from which to run zipalign. There can be several directories, as in my case (find returns two), so I take the last one with tail. Then I run zipalign directly, using the previous result as the path to the correct zipalign binary.
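The same command can be written with $( ) instead of backticks, which nests and quotes more cleanly; a sketch reusing the example above:
"$(find ~/Library/Android/sdk/build-tools/* -d 0 | tail -1)"/zipalign -f 4 ./app-release-unsigned.apk ./app-release.apk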

Bash Command Substitution as Parameter

While chasing down a challenging problem with my friends, I had the idea of trying a blind command substitution.
This one with single quotes,
pid='1024 --help `touch /tmp/helw`' piduser=$(ps -ouser -p$pid h)
does not create the file /tmp/helw
But this one with double quotes,
pid="1024 --help `touch /tmp/helw`" piduser=$(ps -ouser -p$pid h)
creates the file.
My problem is: how can I make the command substitution run as part of the piduser variable's assignment rather than inside the pid variable?
First, don't. Separate actions can and should be separate actions.
touch /tmp/helw && ...
...but if for some odd reason this is actually necessary (I can't imagine why), anything done inside $(...) is a subshell and can be several commands.
pid='1024 --help ' piduser=$( touch /tmp/helw && ps -ouser -p$pid h )
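For what it's worth, the difference you observed is purely a quoting effect: backticks inside single quotes are literal text, while inside double quotes the command substitution is performed at assignment time. A quick illustration:
a='`date`'    # a holds the six literal characters `date`
b="`date`"    # the date command runs here; b holds its output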

Internal Variable PIPESTATUS

I am new to Linux and bash scripting, and I have a query about the internal variable PIPESTATUS, which is an array that stores the exit status of each individual command in a pipeline.
On command line:
$ find /home | /bin/pax -dwx ustar | /bin/gzip -c > myfile.tar.gz
$ echo ${PIPESTATUS[*]}
0 0 0
This works fine on the command line, but when I put the code in a bash script it shows only one exit status. My default shell on the command line is bash as well.
Can somebody help me understand why this behaviour changes, and what I should do to get this working in a script?
#!/bin/bash
cmdfile=/var/tmp/cmd$$
backfile=/var/tmp/backup$$
find_fun() {
find /home
}
cmd1="find_fun | /bin/pax -dwx ustar"
cmd2="/bin/gzip -c"
eval "$cmd1 | $cmd2 > $backfile.tar.gz " 2>/dev/null
echo -e " find ${PIPESTATUS[0]} \npax ${PIPESTATUS[1]} \ncompress ${PIPESTATUS[2]} > $cmdfile
The problem you are having with your script is that you aren't running the same code as you ran on the command line. You are running different code. Namely the script has the addition of eval. If you were to wrap your command line test in eval you would see that it fails in a similar manner.
The reason the eval version fails (only gives you one value in PIPESTATUS) is because you aren't executing a pipeline anymore. You are executing eval on a string that contains a pipeline. This is similar to executing /bin/bash -c 'some | pipe | line'. The thing actually being run by the current shell is a single command so it has a single exit code.
You have two choices here:
Get rid of eval (which you should do anyway, as eval is generally something to avoid) and stop using a string for a command (see Bash FAQ 050 for more on why doing this is a bad idea); a sketch of this option follows this list.
Move the echo "${PIPESTATUS[@]}" into the eval and then capture (and split/parse) the resulting output. (This is clearly a worse solution in just about every way.)
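A minimal sketch of the first option, running the real pipeline directly so PIPESTATUS gets one entry per stage (same paths as the original script):
#!/bin/bash
cmdfile=/var/tmp/cmd$$
backfile=/var/tmp/backup$$
find_fun() {
    find /home
}
# run the pipeline itself; no eval, no command strings
# (2>/dev/null here only covers gzip; add it per stage if needed)
find_fun | /bin/pax -dwx ustar | /bin/gzip -c > "$backfile.tar.gz" 2>/dev/null
# read PIPESTATUS immediately, before any other command resets it
echo -e "find ${PIPESTATUS[0]}\npax ${PIPESTATUS[1]}\ncompress ${PIPESTATUS[2]}" > "$cmdfile"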
Instead of ${PIPESTATUS[0]} use ${PIPESTATUS[@]}
As with any array in bash, ${PIPESTATUS[0]} contains only the exit status of the first command. If you want all of them you have to use ${PIPESTATUS[@]}, which expands to the whole contents of the array.
I'm not sure why it worked for you when you tried it in the command line. I tested it and I didn't get the same result as you.
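For illustration (the exit codes here are just an example):
false | true | true
echo "${PIPESTATUS[@]}"   # prints: 1 0 0  -- one status per stage
# note: PIPESTATUS is overwritten by the very next command, so save it first if needed,
# e.g. status=("${PIPESTATUS[@]}")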

Read filename with * shell bash

I'm new to Linux and I want to write a bash script that can read the name of a file in a directory that starts with LED plus some numbers (e.g. LED5.5.002).
In that directory there is only one file that starts with LED. The problem is that this file is updated every time, so the next time it will be, for example, LED6.5.012, and so on.
I searched and tried a little bit and came to this solution:
export fspec=/home/led/LED*
LedV=`basename $fspec`
echo $LedV
If I enter those commands one by one in my terminal it works fine (LedV = LED5.5.002), but if I run them in a bash script the result is LedV = LED*.
I searched for another solution:
a=/home/led/LED*
LedV=$(basename $a)
echo $LedV
but here again the same: entered one by one it is OK, but in a script: LedV = LED*.
It's probably something small, but because of my limited knowledge of Linux I cannot find it. Can someone tell me what is wrong?
Thanks! Jan
Shell expansions don't happen on scalar assignments, so in
varname=foo*
the expansion of "$varname" will literally be "foo*". It's more confusing when you consider that echo $varname (or in your case basename $varname; either way without the double quotes) will cause the expansion itself to be treated as a glob, so you may well think the variable contains all those filenames.
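A quick demonstration of the difference:
varname=foo*      # no globbing happens in an assignment; varname holds the literal text foo*
echo "$varname"   # prints: foo*
echo $varname     # unquoted, so the * is expanded here and matching filenames are printed instead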
Array expansions are another story. You might just want
fspec=( /path/LED* )
echo "${fspec[0]##*/}" # A parameter expansion to strip off the dirname
That will work fine for bash. Since POSIX sh doesn't have arrays like this, I like to give an alternative approach:
for fspec in /path/LED*; do
break
done
echo "${fspec##*/}"
pwd
/usr/local/src
ls -1 /usr/local/src/mysql*
/usr/local/src/mysql-cluster-gpl-7.3.4-linux-glibc2.5-x86_64.tar.gz
/usr/local/src/mysql-dump_test_all_dbs.sql
If you only have one file, you will only get one result:
MyFile=`ls -1 /home/led/LED*`
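Parsing ls output gets fragile if a name ever contains spaces; a sketch that lets the shell expand the glob directly instead:
set -- /home/led/LED*    # the positional parameters now hold the matching name(s)
MyFile=$1                # with only one LED* file, this is the one you want
echo "$MyFile"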

How can I ALIAS --> "less" the latest file in a directory?

Just wondering how I could less the latest log file in a directory on Linux?
I'm after a one-liner, possibly as an alias!
Something like this?
ls -1dtr /your/dir/{*,.*} | tail -1 | xargs less
Note that for the ls part I am using an answer from "Unix ls command: show full path when using options".
As it requires a parameter, we create a function instead of an alias. Store the following in ~/.bashrc:
my_less_func ()
{
ls -1dtr "$1"/{*,.*} | tail -1 | xargs less
}
Source it (running . ~/.bashrc is enough) and call it with:
my_less_func your/path
In zsh: less dir/*(.om[1])
dir/* is a regular glob.
The . qualifier restricts to regular files.
om means order by modification time, newest first.
[1] means just expand the first filename.
It's probably better without the [1] - just pass all the filenames to less in the om order. If the first one satisfies you, you can hit q and be done with it. If not, the next one is just a :n away, or you can search them all with /*something. If there are too many, om[1,10] will get you 10 newest files.
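If you want that as a reusable helper rather than typing the glob each time (the function name is just an example), a sketch for ~/.zshrc:
lesslatest() { less "$1"/*(.om[1]) }
# usage: lesslatest /var/log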

Why doesn't bash history expansion work in functions?

When I'm programming, I'll find myself cycling through the same three shell commands, e.g.:
vim myGraphic.cpp
g++ -lglut -lGLU myGraphic.cpp -o prettyPicture
./prettyPicture
In order to avoid hitting the uparrow key thrice every time, I put the following in my bashrc:
function cyc {
CYCLE=3
!-$CYCLE
}
When I use the 'cyc' function, however, I get the error
"bash: !-3: command not found".
This technique of history expansion works interactively with the shell, but it does not seem to work with function definitions. What is the explanation for this difference? How might I make a function equivalent to 'cyc' that works?
This question has been asked here: "use '!' to execute commands with same parameter in a script", but in brief you need
set -o history
set -o histexpand
in your script to enable history expansion.
History expansion seems to be expanded immediately, whereas other commands inside the body of a function are deferred until the function is called. Try defining the function at a shell prompt. I get bash: !-$CYCLE: event not found immediately, before the function definition is complete.
I tried escaping the exclamation point, but this causes it to be treated literally once the function is called, instead of being processed as a history expansion.
One alternative is a combination of eval and fc:
function cyc {
CYCLE=3
eval $( fc -nl -$CYCLE -$CYCLE )
}
I'll forgo the usual warning about eval because you'll simply be re-executing a command you previously ran, so caution will apply however you accomplish this. The given fc command will print a range of commands from history (-n suppresses the line number), and using the same value for the beginning and end of the range limits the output to a single command.
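To make that concrete, assuming the third-most-recent command was make test (a hypothetical example):
$ fc -nl -3 -3             # prints just that one command, without its history number
$ eval $( fc -nl -3 -3 )   # re-runs it in the current shell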
One more way: extract the last four lines of your history (four because the history command itself is included), take the first of those (the same command that !-3 would give you), and use perl to strip the history number and leading spaces before executing the instruction.
function cyc {
CYCLE=4
history | tail -"$CYCLE" | head -1 | perl -ne 's/\A\s*\d+\s*// && system( $_ )'
}
