Every time I use wget http://www.domain.com a log file is saved automatically on my server. Is there any way to run this command without logging?
Thanks,
Joel
You could try -o and -q
-o logfile
--output-file=logfile
Log all messages to logfile. The messages are
normally reported to standard error.
-q
--quiet
Turn off Wget's output.
So you'd have:
wget ... -q -o /dev/null ...
This will print the site contents to standard output; is this what you mean when you say that you don't want logging to a file?
wget -O - http://www.domain.com/
I personally found that @Paul's answer was still adding to a log file, regardless of -q suppressing the command-line output.
Added -O /dev/null on top of the -o output-file argument:
wget [url] -q -o /dev/null -O /dev/null
Related
How to download multiple files using wget? Let's say I have a urls.txt containing several URLs and I want to save them automatically with a custom filename for each file. How do I do this?
I tried downloading them one by one with the format "wget -c url1 -O filename1" successfully, and now I want to try a batch download.
You might take a look at the xargs command. You would need to prepare a file with the arguments for each wget call; let's say it is named download.txt and contains
-O file1.html https://www.example.com
-O file2.html https://www.duckduckgo.com
and then use it as follows
cat download.txt | xargs -L 1 wget -c
which (since -L 1 runs one wget invocation per line) is equivalent to doing
wget -c -O file1.html https://www.example.com
wget -c -O file2.html https://www.duckduckgo.com
Add the input file and a loop:
for i in $(cat urls.txt); do wget "$i" -O "filename-$i"; done
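If you want a truly custom filename for each URL (rather than one derived from the URL itself), here is a minimal sketch, assuming a hypothetical urls.txt that holds the target filename and the URL separated by a space on each line:
# urls.txt (hypothetical format), one "filename URL" pair per line, e.g.
#   file1.html https://www.example.com
#   file2.html https://www.duckduckgo.com
while read -r name url; do
  wget -c -O "$name" "$url"
done < urls.txt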
I'm facing this problem:
When I execute a cron task from my web server with this command:
wget https://mon-url.com/file/cron.php >/dev/null 2>&1
It creates new files cron.php1, cron.php2, ... in my /root directory and it takes a lot of space. How can I prevent this?
Thanks
You can use -O /dev/null, which will write the output file to /dev/null:
wget -O /dev/null https://mon-url.com/file/cron.php >/dev/null 2>&1
Or -O-, which will write it to stdout:
wget -O- https://mon-url.com/file/cron.php >/dev/null 2>&1
This is the answer:
wget --delete-after
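For the cron task above, that might look like the following (a sketch; -q is added so the log output is suppressed as well):
wget -q --delete-after https://mon-url.com/file/cron.php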
I have a small file containing just a number located at /mnt/1wire/342342342/temperature
I need to create a cron to wget http://myserver.com/myurl?temp= (and here goes that line)
I believe I should use something like cat /myfile | wget http://myserver.com?parameter= ?
I don't want to create any temporary files and I want to send the data received from the GET to null.
Did you try
wget "http://myserver.com/myurl?temp=$(cat /mnt/1wire/342342342/temperature)" -o /dev/null -O /dev/null
# -o output log
# -O output document
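If you put this straight into a crontab, a minimal sketch could look like this (the every-5-minutes schedule is only an assumption):
# hypothetical crontab entry: fetch every 5 minutes, discard both the log and the response body
*/5 * * * * wget "http://myserver.com/myurl?temp=$(cat /mnt/1wire/342342342/temperature)" -o /dev/null -O /dev/null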
I can't wget when the path to save to does not already exist. I mean, wget doesn't work for non-existing save paths. For example:
wget -O /path/to/image/new_image.jpg http://www.example.com/old_image.jpg
If /path/to/image/ does not already exist, it always returns:
No such file or directory
How can I make it automatically create the path and save?
Try curl
curl http://www.site.org/image.jpg --create-dirs -o /path/to/save/images.jpg
mkdir -p /path/i/want && wget -O /path/i/want/image.jpg http://www.com/image.jpg
To download a file with wget, into a new directory, use --directory-prefix without -O:
wget --directory-prefix=/new/directory/ http://www.example.com/old_image.jpg
Using -O new_file in conjunction with --directory-prefix will not create the new directory structure, and will save the new file in the current directory.
It may even fail with a "No such file or directory" error if you specify -O /new/directory/new_file.
I was able to create the folder if it doesn't exist with this command:
wget -N http://www.example.com/old_image.jpg -P /path/to/image
wget only gets the file; it does NOT create the directory structure for you (mkdir -p /path/to/image/), you have to do this yourself:
mkdir -p /path/to/image/ && wget -O /path/to/image/new_image.jpg http://www.example.com/old_image.jpg
You can tell wget to create the directory (so you don't have to use mkdir) with the parameter --force-directories.
Altogether this would be
wget --force-directories -O /path/to/image/new_image.jpg http://www.example.com/old_image.jpg
After searching a lot, I finally found a way to use wget to download to a non-existing path.
wget -q --show-progress -c -nc -r -nH -i "$1"
=====
Clarification
-q
--quiet --show-progress
Kill annoying output but keep the progress-bar
-c
--continue
Resume the download if the connection is lost
-nc
--no-clobber
Do not overwrite the file if it already exists (skip it instead)
-r
--recursive
Download in recursive mode (what the topic creator asked for!)
-nH
--no-host-directories
Tell wget not to use the domain as a directory (e.g. for https://example.com/what/you/need,
without this option it would download to "example.com/what/you/need")
-i
--input-file
File with the URLs to be downloaded (in case you want to download a lot of URLs;
otherwise just remove this option)
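For example, if that one-liner is saved as a script (download.sh is just a hypothetical name) and urls.txt holds one URL per line, you would call it like this:
# assuming chmod +x download.sh was done beforehand
./download.sh urls.txt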
Happy wget-ing!
I'm running the following command (on Ubuntu)
time wget 'http://localhost:8080/upLoading.jsp' --timeout=0
and get a result in the command line
real 0m0.042s
user 0m0.000s
sys 0m0.000s
I've tried the following:
time -a o.txt wget 'http://localhost:8080/upLoading.jsp' --timeout=0
and get the following error
-a: command not found
I want to get the result to be redirected to some file. How can I do that?
-a is only understood by the time binary (/usr/bin/time). When just using time, you're using the bash built-in version, which does not process the -a option and hence tries to run it as a command.
/usr/bin/time -o foo.txt -a wget 'http://localhost:8080/upLoading.jsp' --timeout=0
Checking man time, I guess what you need is
time -o o.txt -a ...
(Note you need both -a and -o).
[EDIT:] If you are in bash, you must also take care to write
/usr/bin/time
(check manpage for explanation)
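Putting the two together, a sketch using the wget command from the question:
/usr/bin/time -o o.txt -a wget 'http://localhost:8080/upLoading.jsp' --timeout=0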
You can direct the stdout output of any command to a file using the > character.
To append the output to a file, use >>.
Note that unless done explicitly, output to stderr will still go to the console. To direct both stderr and stdout to the same output stream use
command > outfile.txt 2>&1 (with bash)
or
command >& outfile.txt (with t/csh)
If you are working with bash, All about redirection will give you more details and control about redirection.
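Note that with bash's built-in time keyword the timing report is written by the shell itself, so a plain redirection on the timed command will not capture it; grouping the command does. A sketch using the command from the question:
{ time wget 'http://localhost:8080/upLoading.jsp' --timeout=0 ; } > output.txt 2>&1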
\time 2> time.out.text command
\time -o time.out.text command
This answer is based on earlier comments. It is tested and it works. The advantage of \ over /usr/bin/ is that you don't have to know the install directory of time.
These answers also only capture the time, not other output.
To be exact, GNU time writes its output to stderr, and if you want to redirect it to a file, you can use the --output=PATH parameter of time.
See this http://unixhelp.ed.ac.uk/CGI/man-cgi?time
And if you want to redirect stdout to some file, you can use > filename to create the file and fill it, or >> filename to append to an existing file, after the initial command.
If you want to redirect stderr yourself, you can use $ command 2> your_stderr_output
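For example (a sketch using the command from the question; time.txt is just an illustrative filename):
/usr/bin/time --output=time.txt wget 'http://localhost:8080/upLoading.jsp' --timeout=0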
Try to use /usr/bin/time, since many shells have their own implementation of time which may or may not support the same flags as /usr/bin/time,
so change your command to
/usr/bin/time -a -o foo.txt wget ....
How about your LANG?
$ time -ao o.txt echo 1
bash: -ao: コマンドが見つかりません (command not found)
real 0m0.001s
user 0m0.000s
sys 0m0.000s
$ export|grep LANG
declare -x LANG="ja_JP.utf8"
$ LANG=C time -ao o.txt echo 1
1
$ cat o.txt
0.00user 0.00system 0:00.00elapsed 0%CPU (0avgtext+0avgdata 1984maxresident)k
0inputs+0outputs (0major+158minor)pagefaults 0swaps
Try:
command 2> log.txt
and the real-time output from "command" can be seen in another console window with:
tail -f log.txt
This worked for me:
( time command ) |& tee output.txt
https://unix.stackexchange.com/questions/115980/how-can-i-redirect-time-output-and-command-output-to-the-same-pipe
You can do that with > if you want to redirect the output.
For example:
{ time wget 'http://localhost:8080/upLoading.jsp' --timeout=0 ; } > output.txt 2>&1
2>&1 says to redirect STDERR to the same file.
This command will erase any existing output.txt file and create a new one with your output. If you use >>, it will append the output at the end of any existing output.txt file; if it doesn't exist, it will create it.