How to understand this shell script? - linux

cat urls.txt | xargs -P 10 -n 1 wget -nH -nc -x]
This shell command is very confusing to a new user. Is there any reference document I can refer to?

There is nothing much confusing about it.
If you want to know what the commands do then use the manual.
man cat
man xargs
The pipe sends the output of one command to the next, in this case cat urls.txt to xargs.
cat urls.txt will write the contents of the file urls.txt to stdout, which is then used as the input for xargs.
xargs -P 10 -n 1 will execute a command with the input (the contents of urls.txt) as arguments, running up to 10 processes in parallel (-P 10) and passing one argument per invocation (-n 1). The command in this case is wget -nH -nc -x]. I don't know what ] is supposed to do there, but that's probably a typo.
All in all you can understand, without caring much about the options, that this will download the list of files in urls.txt into your current directory. Of course it's always good to check the option flags; in this case, -nc for example causes wget to skip the download when the file already exists in the directory, -nH stops wget from creating a directory named after the host, and -x makes it recreate the rest of the remote directory structure locally.
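For example, with a hypothetical urls.txt (these URLs are placeholders, not from the question), the command without the stray ] behaves like this:

# urls.txt (example contents):
#   https://example.com/a/file1.tar.gz
#   https://example.com/b/file2.tar.gz
cat urls.txt | xargs -P 10 -n 1 wget -nH -nc -x
# xargs starts up to 10 wget processes in parallel (-P 10),
# passing each one exactly one URL (-n 1); each wget skips files
# that already exist (-nc), omits the hostname directory (-nH),
# and recreates the remote path locally (-x), e.g. a/file1.tar.gz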
All three man pages can also be found online:
cat
xargs
wget

You can follow this book: https://www.iiitd.edu.in/~amarjeet/Files/SM2012/Linux%20Dummies%209th.pdf
And the best way to learn a Linux command is to use the man command.
Example:
Type man xargs in the terminal and you will get all the details.
There is a man page for every Linux command.
The best way is to follow this link: https://explainshell.com — paste the command line there and it explains each part.

Related

Use mget to download files in Bash

I am creating a bash script for CentOS 7, and I want mget to download the files that are named in a file. How can I do this?
I tried the following, where "prueba" is the file that contains the filenames:
mget prueba
mget prueba/*
Thank you for your help.
Are you talking about this mget? If so, it's not directly possible to use this utility to download a list of URLs you specify in a file.
You can however use xargs to simulate the same effect:
xargs -n 1 -a prueba mget
This would effectively call mget for each line in the file you specify (e.g. prueba).
This should solve your problem:
xargs -n 1 -P 8 -a prueba wget
-a Use file as input
-n1 Use one argument at a time
-P8 Use up to 8 processes (no need to use mget, since xargs handles the parallel downloads)
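As a sketch of what this does (the URLs are hypothetical; prueba is the file from the question):

# prueba (example contents, one URL per line):
#   ftp://ftp.abc.com/Rpts/a.csv
#   ftp://ftp.abc.com/Rpts/b.csv
xargs -n 1 -P 8 -a prueba wget
# runs, with up to 8 downloads in flight at once:
#   wget ftp://ftp.abc.com/Rpts/a.csv
#   wget ftp://ftp.abc.com/Rpts/b.csv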

Redirect argument from a file to a Linux command

I searched the Internet, but maybe I used the wrong keyword, but I couldn't find the syntax to my very simple problem below:
How do I redirect the contents of a file as command-line arguments to the Linux command "touch"? I want to create a file as with "touch abc.txt", but the name should come from the file "file.txt", which contains "abc.txt", rather than being typed manually.
[root@machine ~]# touch < file.txt
touch: missing file operand
Try `touch --help' for more information.
[root@machine ~]# cat file.txt
abc.txt
Try
$ touch $(< file.txt)
to expand the content of file.txt and pass it as an argument to touch.
Alternatively, if you have multiple filenames stored in a file, you could use xargs, e.g.,
xargs touch <file.txt
(It would work for just one, but is more flexible than a simple "echo").
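A quick sketch with more than one name in the file (the file names are made up):

printf 'abc.txt\ndef.txt\n' > file.txt
touch $(< file.txt)     # the shell word-splits the contents into two arguments
xargs touch < file.txt  # same effect, and copes with very long name lists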

How to write stdout to file with colors?

A lot of times (not always) the stdout is displayed in colors. Normally I keep every output log in a different file too. Naturally in the file, the colors are not displayed anymore.
I'd like to know if there's a way (in Linux) to write the output to a file with colors. I'm trying to use tee to write the output of vagrant to a file, this way I can still see the output (when it applies). I want to use it specifically for vagrant (it may change in the future, of course...)
Since many programs will only output color sequences if their stdout is a terminal, a general solution to this problem requires tricking them into believing that the pipe they write to is a terminal. This is possible with the script command from bsdutils:
script -q -c "vagrant up" filename.txt
This will write the output from vagrant up to filename.txt (and the terminal). If echoing is not desirable,
script -q -c "vagrant up" filename.txt > /dev/null
will write it only to the file.
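To check that the colour sequences really were captured, less -R renders saved ANSI colours (a quick verification, using the filename.txt from above):

less -R filename.txt   # -R tells less to pass ANSI colour sequences through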
You can save the ANSI sequences that colourise your output to a file:
echo a | grep --color=always . > colour.txt
cat colour.txt
Some programs, though, tend not to use them if their output doesn't go to a terminal (that's why I had to use --color=always with grep).
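You can see the raw sequences that were saved by making non-printing characters visible (this inspects the colour.txt created above; the exact codes depend on your grep colour settings):

cat -v colour.txt
# prints something like: ^[[01;31m^[[Ka^[[m^[[K
# ^[ is the escape character; [01;31m selects bold red, [m resets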
In Ubuntu, you can install the package bsdutils to output to a text file with ANSI color codes:
script -q -c "ls --color=always" /tmp/t
Install kbtin to generate a clean HTML file:
ls --color=always | ansi2html > /tmp/t.html
Install aha and wkhtmltopdf to generate a nice PDF:
ls --color=always | aha | wkhtmltopdf - /tmp/t.pdf
Use any of the above with tee to display the output also on the console or to save a copy in another file. Example:
ls --color=always | tee /dev/stderr | aha | wkhtmltopdf - /tmp/test.pdf
You can also colour your output with echo using different colours and save the coloured output to a file. Example:
echo -e '\E[37;44m'"Hello World" > my_file
Also, you would need to be acquainted with the terminal colour codes.
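A few of the common codes, as a sketch (these are standard ANSI SGR parameters):

echo -e '\e[31mred\e[0m \e[32mgreen\e[0m \e[33myellow\e[0m \e[1;34mbold blue\e[0m' > my_file
# 30-37 select foreground colours, 40-47 background colours,
# 1 means bold, and 0 resets all attributes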
Using tee:
<command line> | tee -a 'my_colour_file'
Open your file with cat:
cat 'my_colour_file'
Using a named pipe can also work to redirect all output from the pipe, with colours, to another file.
For example:
Create a named pipe:
mkfifo pipe.fifo
Then redirect each command's output to the pipe:
<command line> > pipe.fifo
In another terminal, redirect all messages from the pipe to your file:
cat pipe.fifo > 'my_log_file_with_colours'
Open your file with cat and see the expected results.
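Put together, the whole flow looks like this (a sketch; remember that, as noted above, many programs only emit colour when stdout is a terminal, so a flag like --color=always may be needed for output sent into the pipe):

mkfifo pipe.fifo                              # create the named pipe
cat pipe.fifo > 'my_log_file_with_colours' &  # reader (or run in another terminal)
ls --color=always > pipe.fifo                 # writer: coloured output into the pipe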
I found that the tool ansi2html.sh is the simplest way to export colourful terminal data to an HTML file.
The command to use it is:
ls --color=always | ansi2html.sh --palette=solarized > ~/Desktop/ls.html
All that is needed is to send the output through a pipe and write the stdout to a simple HTML file.
Solution
$ script -q /dev/null -c "your command" > log.txt
$ cat log.txt
Explanation
According to the man page of script, the -q (--quiet) option only makes sure to be quiet (it does not write the start and done messages to standard output). This means that the start and done messages are still written to the typescript file.
In order to use script and discard the typescript file at the same time, we can simply give it the null device /dev/null as the file! Then redirect the output to our desired destination, and the coloured content will be written to that destination.
I was trying out some of the solutions listed here, and I also realized you could do it with the echo command and the -e flag.
$ echo -e "\e[1;33m This is yellow text \e[0m" > sample.txt
Next, we can view the contents of our sample.txt file.
$ cat sample.txt
(the text is displayed in yellow)
Additionally, we can also use tee and pipe it with our echo command:
echo -e "\e[1;33m This is yellow text \e[0m" | tee -a sample.txt
On macOS, script is from the BSD codebase and you can use it like so:
script -q /dev/null mvn dependency:tree > mvn-tree.colours.txt
It will run mvn dependency:tree and store the coloured output into mvn-tree.colours.txt
The tee utility passes the colour sequences through untouched, so you can pipe to it to watch the command's progress while saving a copy:
script -q /dev/null mvn dependency:tree | tee mvn-tree.colours.txt
To get the script manual you can type man script:
SCRIPT(1) BSD General Commands Manual SCRIPT(1)
NAME
script -- make typescript of terminal session
SYNOPSIS
script [-adkpqr] [-F pipe] [-t time] [file [command ...]]
In the RedHat/Rocky/CentOS family, the ansi2html utility does not seem to be available (except for Fedora 32 and up). An equivalent utility is ansifilter from the EPEL repository. Unfortunately, it seems to have been removed from EPEL 8.
script is preinstalled from the util-linux package.
To set up:
yum install ansifilter -y
To use it:
ls --color=always | ansifilter -H > output.html
To generate a pretty PDF (not tested), have ansifilter generate LaTeX output and then post-process it. Note that pdflatex reads a .tex file and writes the PDF itself, rather than reading stdin and writing stdout:
ls --color=always | ansifilter -L > output.tex && pdflatex output.tex
Obviously, combine this with the script utility, or whatever else may be appropriate in your situation.
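For example, a sketch combining the two (the command being captured, ls --color=always, is only an example):

script -q -c "ls --color=always" /dev/null | ansifilter -H > output.html
# script makes ls believe it is writing to a terminal, so it emits colour;
# ansifilter -H then converts the ANSI sequences to HTML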

shell script to download latest file from FTP

I am writing a shell script for the first time, and I want to download the latest created file from FTP.
I want to download the latest file in a specific folder. Below is my code for that, but it downloads all the files in the folder, not just the latest one.
ftp -in ftp.abc.com << SCRIPTEND
user xyz xyz
binary
cd Rpts/
mget ls -t -r | tail -n 1
quit
SCRIPTEND
Can you help me with this, please?
Try using the wget or lftp utility instead; lftp compares file time/date, and as far as I remember its purpose is exactly this kind of FTP scripting. Switch to ssh/rsync if possible; you can read a bit about lftp versus rsync here:
https://serverfault.com/questions/24622/how-to-use-rsync-over-ftp
Probably the easiest way is to link the latest version on the server side to "current" and always fetch the file it points to. If you're not the admin of the server, you need to list all files with date/time, grab that information, parse it, and decide which one is newest; in the meantime the state on the server can change, and you find yourself with a solution more complicated than it's worth.
The point is that ls sorts its output in some way, and time may not be the default. There are switches to sort it, e.g. by modification time, but even when the server responds OK to ls -t, you can't be sure it really supports sorting; it may just ignore the switches and always return the same list. That's why admins usually provide a "current" link (ln -s). If there's no "current", then to be sure you have the right file you need to parse the listing anyway (ls -al).
http://www.catb.org/esr/writings/unix-koans/shell-tools.html
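For completeness, the server-side "current" link idea is just this (a sketch with made-up file names, run wherever the reports are generated):

ln -sf report-2024-01-31.csv current   # repoint "current" at the newest file
# clients then always fetch the same name, e.g. in an ftp session:
#   get current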
Looking at the code, the line
mget ls -t -r | tail -n 1
doesn't do what you think. It actually grabs all of the output of ls -t and then tail processes the output of mget. You could replace this line with
mget $(ls -t -r | tail -n 1)
but I am not sure if ftp will support such a call... Note also that, because the here-document delimiter (SCRIPTEND) is unquoted, the local shell expands $(ls -t -r | tail -n 1) against your local directory before ftp ever runs, so it would not list the remote folder anyway.
Try using an FTP client other than ftp. For example, curlftpfs (available at curlftpfs.sourceforge.net) is a good candidate, as it allows you to mount an FTP server onto a directory as if it were a local folder and then run ordinary commands on the files there (including find, grep, etc.). Take a look at this article.
This way, since the output comes form a local command, you'd be more certain that ls -t returns a properly sorted list.
Btw, it's a bit less convoluted to use ls -t | head -1 than ls -t -r | tail -1. They produce the same result, but why reverse and grab from the tail when you can just grab the head? :)
If you use curlftpfs then your script would be something like this (assuming server ftp.abc.com and user xyz with password xyz).
mkdir /tmp/ftpsession
curlftpfs ftp://xyz:xyz@ftp.abc.com /tmp/ftpsession
cd /tmp/ftpsession/Rpts
cp -Rpf $(ls -t | head -1) /your/destination/folder/or/file
cd -
umount /tmp/ftpsession
My solution is this (it lists the directory, takes the last line of the listing, extracts the file name from the last field, and fetches that file; it assumes the newest entry is listed last):
curl 'ftp://server.de/dir/'$(curl 'ftp://server.de/dir/' 2>/dev/null | tail -1 | awk '{print $(NF)}')

How to execute a command with one parameter at a time in the *nix shell?

Some commands, like svn log, will only take one input from the command line, so I can't say grep 'pattern' | svn log. It will only return the information for the first file, so I need to execute svn log against each one independently.
I can do this with find using its -exec option: find -name '*.jsp' -exec svn log {} \;. However, grep and find provide different functionality, and the -exec option isn't available for grep or a lot of other tools.
So is there a generalized way to take output from a unix command line tool and have it execute an arbitrary command against each individual output independent of each other like find does?
The answer is xargs -n 1.
echo moo cow boo | xargs -n 1 echo
outputs
moo
cow
boo
try xargs:
grep 'pattern' | xargs svn log
(add -n 1 if the command must be invoked once per argument, as in the answer above)
A little one-off shell script (using xargs is much better for a one-off; that's why it exists):
#!/bin/sh
# Run svn log once for each file named on the command line
for file in "$@"
do
    svn log "$file"
done
You could name it 'multilog' or something like that. Call it like this:
./multilog.sh foo.c abc.php bar.h Makefile
It allows for a little more sanity when being called by automated build scripts, i.e. testing the existence of each file before talking to SVN, redirecting each output to a separate file, inserting it into a sqlite database, etc.
That may or may not be what you are looking for.
