I am trying to use cygstart to open a local HTML file in Chrome. I know I can use cygstart with a URL, but neither a plain file name nor a file URL seems to work:
cygstart index.html
cygstart file://index.html
cygstart simply cannot do this. It's similar to how cURL supports the file protocol but wget does not:
Why does curl allow use of the file URL scheme, but not wget
As a workaround, you can put this in ~/.profile or similar:
export BROWSER=firefox
then you can use it like this:
"$BROWSER" file:index.html
The page that I'm trying to download from (made up contact info):
http://www.filltext.com/?rows=1000&nro={index}&etunimi={firstName}&sukunimi={lastName}&email={email}&puhelinnumero={phone}&pituus={numberRange|150,200}&syntymaaika={date|10-01-1950,30-12-1999}&postinumero={zip}&kaupunki={city}&maa={country}&pretty=true
The command that I have been using (I have tried a lot of different options etc.):
wget -r -O -F [filename] URL
It works in the sense that it downloads the web page content to the file, but instead of being the raw data that is inside the cells, it's just a bunch of curly brackets.
How do I download the actual raw data instead of the JSON file? Any help would be very much appreciated!
Thank you.
Do you want something like wget google.com -q -O - ?
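One likely culprit is the unquoted URL: the shell treats each & as a command separator, so only part of the query string is actually requested. Here is a sketch that quotes the whole URL and saves the JSON response (data.json is just an example name, and the jq line assumes the response is an array of objects keyed by the field names in the URL):

# quote the URL so the shell does not interpret & and {}
wget -q -O data.json 'http://www.filltext.com/?rows=1000&nro={index}&etunimi={firstName}&sukunimi={lastName}&email={email}&puhelinnumero={phone}&pituus={numberRange|150,200}&syntymaaika={date|10-01-1950,30-12-1999}&postinumero={zip}&kaupunki={city}&maa={country}&pretty=true'

# pull individual fields out of the JSON, e.g. the email column
jq -r '.[].email' data.json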
I'm fairly new to shell and I'm trying to use wget to download a .zip file from one directory to another. The only file in the directory I am copying the file from is the .zip file. However, when I use wget IP-address/directory, it downloads an index.html file instead of the .zip. Is there something I am missing to get it to download the .zip without having to state it explicitly?
wget is a utility for downloading files from the web.
You have mentioned you want to copy from one directory to another. Do you mean both are on the same server/node?
In that case you can simply use the cp command.
And if you want it from another server/node (a file transfer), you can use scp or ftp.
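If the file really is on a remote web server (which is what the index.html result suggests), here is a sketch of the usual options; the address and file names are placeholders:

# name the .zip explicitly
wget http://192.0.2.10/directory/archive.zip

# or let wget follow the directory listing and keep only .zip files
wget -r -np -nd -A zip http://192.0.2.10/directory/

# if you have shell access to the other machine, scp also works
scp user@192.0.2.10:/path/to/archive.zip .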
I want to download a number of files which are as follows:
http://example.com/directory/file1.txt
http://example.com/directory/file2.txt
http://example.com/directory/file3.txt
http://example.com/directory/file4.txt
.
.
http://example.com/directory/file199.txt
http://example.com/directory/file200.txt
Can anyone help me with it using shell scripting? Here is what I'm using, but it only downloads the first file.
for i in {1..200}
do
exec wget http://example.com/directory/file$i.txt;
done
wget http://example.com/directory/file{1..200}.txt
should do it. That expands to wget http://example.com/directory/file1.txt http://example.com/directory/file2.txt ....
Alternatively, your current code works fine if you remove the call to exec, which is unnecessary here and doesn't do what you seem to think: exec replaces the current shell process with wget, so the loop never gets past the first file.
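For completeness, a sketch of the corrected loop:

for i in {1..200}
do
    wget "http://example.com/directory/file$i.txt"
done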
To download a list of files you can use wget -i <file>, where <file> is a file containing the list of URLs to download.
For more details you can review the help page: man wget
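For the numbered files above, a sketch of that approach (urls.txt is just an example name):

# build the list, then hand it to wget
printf 'http://example.com/directory/file%d.txt\n' {1..200} > urls.txt
wget -i urls.txt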
I am migrating a site in PHP and someone has hardcoded all the links into a function call, display_image('http://whatever.com/images/xyz.jpg').
I can easily use TextMate to convert all of these to http://whatever.com/images/xyz.jpg.
But I also need to bring the images down with them, for example with wget -i images.txt.
So I need to write a bash script that compiles images.txt with all the links, to save me doing this manually because there are a lot of them!
Any help you can give on this is greatly appreciated.
I found a one-liner that should work (replace index.php with your source file):
wget $(grep -oP 'http:(\.|-|\/|\w)*\.(gif|jpg|png|bmp)' index.php)
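To turn that into the script you asked for, here is a sketch that collects the links from every PHP file in the tree into images.txt and then downloads them; it assumes GNU grep (for -P) and that all the links match the pattern used in the one-liner above:

#!/bin/bash
# collect image URLs from all PHP files, de-duplicated
grep -rhoP 'http:(\.|-|\/|\w)*\.(gif|jpg|png|bmp)' --include='*.php' . | sort -u > images.txt

# download everything in the list into ./images
wget -P images -i images.txt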
If you fetch the file with wget via a web server, won't you just get the output of the PHP script? That will contain img tags, which you can extract using xml_grep or some such tool.
Can I use a file:// URL in the shell to get to a local directory, like /usr/local?
I tried file:///usr/local but failed
[root@www2 robot]# cd file:///usr/local
-bash: cd: file:///usr/local: No such file or directory
If you have a requirement to access generic URLs from your shell, try using curl in place of cat:
curl file:///path/to/file.txt
curl http://www.domain.com/file.txt
But as other posters have pointed out, the shell itself doesn't understand URLs.
If you have to deal with file:///usr/local on the command line, you can just strip the file:// part and then run your command as normal, e.g.:
cd "$(echo 'file:///usr/local/' | sed 's|^file://||')"
To my knowledge, bash and cd do not work across multiple protocols. If you want to access things over other protocols, you have to mount them into the local filesystem.
You're not trying to cd to some location provided by a user on a web form, are you?
file:/// doesn't work in bash or derivative shells.
It will work in most browsers, and possibly in wget and curl, but bash is not a web browser.
The file: protocol is something web browsers understand, not the shell. Try it in Firefox; it should work. To do it in a shell, just use
cd /usr/local/
Don't use file:. It's for the same reason you can't do
cat http://www.google.com
which would be awesome, but you have to use something like wget or curl for that.