We currently have a challenge where the ideal solution would be to symlink a file to a web URL...
image.jpg -> http://www.host.com/images/image.jpg
Is this possible?
Maybe a named pipe that you feed with a wget for the file?
Edit: not wget after all. You can work with links -dump. So:
mkfifo reddit
links -dump reddit.com > reddit
cat reddit
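For the wget variant from the original suggestion, a rough sketch might look like this (URL taken from the question; note that a fifo is one-shot, so the download has to be restarted for each read):
mkfifo image.jpg
wget -q -O - http://www.host.com/images/image.jpg > image.jpg &   # the writer blocks until something reads the fifo
cat image.jpg   # reading the fifo pulls the download through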
There are several nice and interesting solutions here. I especially like @ArjunShankar's FUSE solution. In the spirit of keeping it simple, though, perhaps a file in /etc/cron.daily with
#!/bin/sh
cd /your/dir && wget -N http://www.host.com/images/image.jpg
would be a lot simpler and Good Enough(TM)?
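For completeness, a minimal sketch of setting that up (the script name fetch-image is just an example; cron.daily scripts need to be executable):
cat > /etc/cron.daily/fetch-image <<'EOF'
#!/bin/sh
cd /your/dir && wget -N http://www.host.com/images/image.jpg
EOF
chmod +x /etc/cron.daily/fetch-image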
On macOS I successfully used this great tool by maxogden, which also uses FUSE:
https://github.com/maxogden/mount-url
brew install osxfuse
npm install -g mount-url
Then
mount-url "https://url-to-10-gb-video-file-on-some-external-cloud-storage/video.mp4?xxx=yyy"
This would create a symlink for the file named video.mp4 in the current directory.
Access isn't particularly fast, but it works.
Related
When I do "ls -lrt", there is a file listed for which I want to create a link, for example:
myfile.config -> /users/yue/home/logs/myfile.config
When I make changes to myfile.config, they should also affect the file in
/users/yue/home/logs/myfile.config.
What command in Linux allows for that?
Also, what is this called?
The command is called ln (link).
The basic form is ln pathtosource pathtotarget.
Here is a nice page that talks about it: http://www.computerhope.com/unix/uln.htm
Figured it out.
It's actually the "ln -s" command.
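For the example in the question, that would be something like:
ln -s /users/yue/home/logs/myfile.config myfile.config   # create a symlink pointing at the real file
ls -l myfile.config   # myfile.config -> /users/yue/home/logs/myfile.config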
I'm trying to understand how to use wget to download specific directories from a bunch of different ftp sites with economic data from the US government.
As a simple example, I know that I can download an entire directory using a command like:
wget --timestamping --recursive --no-parent ftp://ftp.bls.gov/pub/special.requests/cew/2013/county/
But I envision running more complex downloads, where I might want to limit a download to a handful of directories. So I've been looking at the --include-directories (-I) option, but I don't really understand how it works. Specifically, why doesn't this work:
wget --timestamping --recursive -I /pub/special.requests/cew/2013/county/ ftp://ftp.bls.gov/pub/special.requests/cew/
The following does work, in the sense that it downloads files, but it downloads way more than I need (everything in the 2013 directory, vs just the county subdirectory):
wget --timestamping --recursive -I /pub/special.requests/cew/2013/ ftp://ftp.bls.gov/pub/special.requests/cew/
I can't tell if I'm not understanding something about wget or if my issue is with something more fundamental to FTP server structures.
Thanks for the help!
Based on this doc it seems that the filtering functions of wget are very limited.
When using the --recursive option, wget will download all linked documents after applying the various filters, such as --no-parent and -I, -X, -A, -R options.
In your example:
wget -r -I /pub/special.requests/cew/2013/county/ ftp://ftp.bls.gov/pub/special.requests/cew/
This won't download anything, because the -I option specifies to include only links matching /pub/special.requests/cew/2013/county/, but on the page /pub/special.requests/cew/ there are no such links, so the download stops there. This will work though:
wget -r -I /pub/special.requests/cew/2013/county/ ftp://ftp.bls.gov/pub/special.requests/cew/2013/
... because in this case the /pub/special.requests/cew/2013/ page does have a link to county/
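With the --timestamping and --no-parent flags from the question added back in, the working form would look something like:
wget --timestamping --recursive --no-parent -I /pub/special.requests/cew/2013/county/ ftp://ftp.bls.gov/pub/special.requests/cew/2013/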
Btw, you can find more details in this doc than on the man page:
http://www.gnu.org/software/wget/manual/html_node/
Can't you simply do the following (and add --timestamping/--no-parent etc. as needed)?
wget -r ftp://ftp.bls.gov/pub/special.requests/cew/2013/county
The -I option seems to work one directory level at a time, so if we step up one level from county/ we could do:
wget -r -I /pub/special.requests/cew/2013/county/ ftp://ftp.bls.gov/pub/special.requests/cew/2013/
But apparently we can't step further up and do
wget -r -I /pub/special.requests/cew/2013/county/ ftp://ftp.bls.gov/pub/special.requests/cew/
Can wget be used to get all the files on a server? Suppose my site foo.com uses the Django framework and this is the directory structure:
/web/project1
/web/project2
/web/project3
/web/project4
/web/templates
Without knowing the names of the directories (project1, project2, ...), is it possible to download all the files?
You could use
wget -r -np http://www.foo.com/pool/main/z/
-r (fetch files/folders recursively)
-np (do not descend to the parent directory when retrieving recursively)
or
wget -nH --cut-dirs=2 -r -np http://www.foo.com/pool/main/z/
--cut-dirs=number (makes Wget not "see" number remote directory components)
-nH (invoking Wget with -r http://fly.srk.fer.hr/ will create a structure of directories beginning with fly.srk.fer.hr/. This option disables such behavior.)
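If my reading of the manual is right, the resulting local layouts differ roughly like this (paths are the ones from the example above):
# wget -r -np http://www.foo.com/pool/main/z/
#   -> files are saved under www.foo.com/pool/main/z/...
# wget -nH --cut-dirs=2 -r -np http://www.foo.com/pool/main/z/
#   -> -nH drops the host directory and --cut-dirs=2 drops pool/main,
#      so files land directly under z/...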
First of all, wget can only be used to retrieve files served by the web server. It's not clear from your question whether you mean actual files or web pages. I would guess from the way you phrased your question that your intent is to download the server files, not the web pages served by Django. If this is correct, then no, wget won't work. You need to use something like rsync or scp.
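A minimal sketch of the rsync route, assuming you have SSH access to the box (the user name and destination directory are placeholders; /web is the path from the question):
rsync -avz user@foo.com:/web/ ./web-backup/   # copy the whole /web tree to a local directory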
If you do mean using wget to retrieve all of the generated pages from Django, then this will only work if links point to those directories. So, you need a page that has code like:
<ul>
  <li><a href="/project1/">Project1</a></li>
  <li><a href="/project2/">Project2</a></li>
  <li><a href="/project3/">Project3</a></li>
  <li><a href="/project4/">Project4</a></li>
  <li><a href="/templates/">Templates</a></li>
</ul>
wget is not a psychic; it can only pull in pages it knows about.
Try recursive retrieval: the -r option.
OK, so I need to run wget, but I'm prohibited from creating 'dot' files in the location where I need to run it. So my question is: can I get wget to use a name other than .listing, one that I can specify?
Further clarification: this is to sync/mirror an FTP folder with a local one, so using the -O option is not really useful, as I need all the files to keep their original names.
You can use the -O option to set the output filename, as in:
wget -O file http://stackoverflow.com
You can also use wget --help to get a complete list of options.
For folks who come along afterwards and are surprised by an answer to the wrong question, here is a copy of one of the comments below:
@FelixD, yes, I unfortunately misunderstood the question. Looking at the code for wget version 1.19 (Feb 2017), specifically ftp.c, it appears that the .listing file name is hardcoded in the macro LIST_FILENAME, with no override possible. There are probably better options for mirroring FTP sites; maybe take a look at lftp and its mirror command, which also supports parallel downloads: lftp.yar.ru
@Paul: You can use the -O option specified by spong.
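For reference, a minimal sketch of the lftp mirror approach mentioned in that comment (host, credentials and paths are placeholders):
lftp -e 'mirror --parallel=4 /remote/dir ./local_dir; quit' ftp://user:password@example.com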
No. You can't do this.
wget/src/ftp.c
/* File where the "ls -al" listing will be saved. */
#ifdef MSDOS
#define LIST_FILENAME "_listing"
#else
#define LIST_FILENAME ".listing"
#endif
I have the same problem:
wget seems to save the .listing file in the current directory where wget was called from, regardless of -O path/output_file.
As an ugly/desperate workaround, we can try running wget from random directories:
cd /temp/random_1; wget ftp://example.com/ -O /full/save_path/to_file_1.txt
cd /temp/random_2; wget ftp://example.com/ -O /full/save_path/to_file_2.txt
Note: the manual says that using the --no-remove-listing option will cause wget to create .listing.1, .listing.2, etc., so that might be an option to avoid conflicts.
Note: the .listing file is not created at all if the FTP login fails.
Can I cd to a file:// URL for a local path, like /usr/local?
I tried file:///usr/local but it failed:
[root@www2 robot]# cd file:///usr/local
-bash: cd: file:///usr/local: No such file or directory
If you have a requirement to be able to access generic URLs from your shell, try using curl as a replacement for your cat:
curl file:///path/to/file.txt
curl http://www.domain.com/file.txt
But as other posters have pointed out, the shell itself doesn't understand URLs.
If you have to deal with file:///usr/local on the command line, you could just strip off the "file://" part and then run your command as normal, e.g.:
cd "$(echo "file:///usr/local/" | sed 's|^file://||')"   # strip the file:// prefix, then cd to the result
bash and cd do not work across multiple protocols, to my knowledge. If you want to access things over other protocols, you have to mount them into the local filesystem.
You're not trying to cd to some location provided by a user on some web form are you?
file:/// doesn't work in bash or derivative shells.
It will work in most browsers, and possibly in wget and curl, but bash is not a web browser.
The file: protocol is mainly understood by web browsers. Try it in Firefox; it should work. To do it in a shell, just use
cd /usr/local/
Don't use file:. It's for the same reason that you can't
cat http://www.google.com
Which would be awesome (but you have to use something like wget or curl).