How to download a file that has a space in its name (IIS)?

In my virtual directory I have many mp3 files, and many of them have spaces or Chinese characters in their names. How do I allow visitors to download them?
For example:
There's no problem when downloading www.myWebsite.com/virtualDirectory/songNameSimple.mp3
But if the song name has a space in it, the space is replaced by %20 and the request returns a 404 error.
I'm curious about the solution on both IIS and LAMP, although it may well be the same.
Thanks

It turns out the server handles this automatically; it had failed for me at the beginning due to something else.
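For reference, the %20 in the URL is standard percent-encoding, which IIS and Apache both decode before touching the filesystem. A minimal Python sketch of what the browser does to the path (the song name here is a made-up example):

```python
from urllib.parse import quote, unquote

path = "/virtualDirectory/song name 中文.mp3"

# Spaces become %20 and the Chinese characters become UTF-8 %XX triplets;
# '/' is left alone by default so the path structure survives.
encoded = quote(path)
print(encoded)  # → /virtualDirectory/song%20name%20%E4%B8%AD%E6%96%87.mp3

# The server reverses this before looking the file up on disk.
assert unquote(encoded) == path
```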

Open shared drive document in Word from browser

I have a shared drive with a lot of Word documents, and I made a simple web page that lists them. I want to open them directly in Word. I found a URI scheme (ms-word:ofe|u|<document path>), but it is only able to open documents from local drives, not from the shared drive.
I tried ms-word:ofv|u|//<share drive>/test.docx, but that just opens a blank Word window. Without the double slash, Word tries to open the file from the C drive.
Does anyone have an idea how I can solve this problem?
You are missing the http or https scheme keyword. The final URI should be something like
ms-word:ofe|u|https://sharepoint.com/shared/YOUR_WORD_DOCUMENT.docx
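As a sketch, the full URI can be assembled like this, percent-encoding any spaces in the document path (the server name and document path below are placeholders, not anything from the question):

```python
from urllib.parse import quote

def ms_word_uri(doc_url, mode="ofe"):
    """Build an ms-word: URI. Mode 'ofe' opens for editing, 'ofv' for viewing.
    ':' and '/' are kept literal so the inner http(s) URL stays intact."""
    return f"ms-word:{mode}|u|{quote(doc_url, safe=':/')}"

print(ms_word_uri("https://sharepoint.com/shared/My Report.docx"))
# → ms-word:ofe|u|https://sharepoint.com/shared/My%20Report.docx
```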

Linux directory or folder name that begins with a symbol

I am working on a website and noticed, while making up folder names, that any folder name beginning with a # is not recognized by the web browser. For example, example.com/#example/index.html will not work, whereas example.com/%23example/index.html works. In the same example, symbols such as !$%*& work fine without encoding. I'm curious why, and how to make it work if I wanted to. I read this article: Which characters make a URL invalid?. Thanks
Certain characters have a special meaning to the browser. In particular, # introduces the fragment identifier, which points to an anchor on the same page and is never sent to the server, so everything after it is not treated as part of the path.
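A quick illustration of the difference with Python's URL parser: a raw # starts the fragment, which the server never sees, while %23 is a literal character inside the path:

```python
from urllib.parse import urlsplit, unquote

raw = urlsplit("http://example.com/#example/index.html")
print(raw.path, "|", raw.fragment)
# → / | example/index.html   (the server only ever receives "/")

encoded = urlsplit("http://example.com/%23example/index.html")
print(unquote(encoded.path))
# → /#example/index.html     (the whole path reaches the server)
```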

Mounting SharePoint shares using davfs2 yields an empty folder

I'm trying to mount my employer's SharePoint document repository from Linux.
I followed the article published here: http://howto.unixdev.net/Linux-SharePoint.html
Everything seems perfect, I can authenticate and mount the shared folder, but the mount point is empty (well not exactly, I see a "lost+found" directory).
If I try the same path in explorer under Windows I see the files are there.
I have no errors in log files or at the CLI.
What can I try?
P.S. Since I see no replies, I am adding the content of /etc/fstab here, hoping that it can be useful to debug my problem:
http://AMENDED/bk/des/data\040administration/ /media/SharePoint davfs rw,noauto,user 0 0
The problem is with non-ASCII characters in filenames; see http://savannah.nongnu.org/support/?108385.
There is no solution; just use only ASCII characters in filenames.
I found the solution to my problem: while looking at the way Internet Explorer formatted the URL I noticed a few letters were capitalized. So I tried capitalizing the same letters in /etc/fstab and voila, the repository was correctly mounted.
Here is my current /etc/fstab:
http://AMENDED/bk/des/Data\040Administration/ /media/SharePoint davfs rw,noauto,user 0 0
Note that entering the all-lowercase URL in Windows Explorer works correctly.
The issue turns out to be with davfs2 code processing a response to PROPFIND in a case-sensitive manner. It's very easy to change it to case-insensitive processing before an official fix is made.
For more info see https://savannah.nongnu.org/support/index.php?108566
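To illustrate the root cause with a hypothetical sketch (these paths are illustrative, not the actual amended ones): the mount succeeded, but the paths returned in the PROPFIND response differed in case from the requested path, so davfs2's case-sensitive comparison matched nothing and the directory appeared empty:

```python
# Hypothetical illustration of the davfs2 bug: same resource, different case.
requested = "/bk/des/data%20administration/"
returned  = "/bk/des/Data%20Administration/"

# Case-sensitive comparison (the buggy behaviour): no match, folder looks empty.
assert requested != returned

# Case-insensitive comparison (the workaround described above): match.
assert requested.lower() == returned.lower()
```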

How can I find files that aren't needed on my site so I can delete them?

I'm developing a website, and after testing different ways to do things, I know that I have many files on my site that are not being used, including HTML/PHP files, images, stylesheets, and external scripts. Is there some program I can use or something so I can find all of the files that I don't need so I can delete them?
I need to find all files that are safe to delete, don't have anything to do with the site anymore, and that deleting them won't have any effect on how my site works.
I've tried finding orphaned files in Dreamweaver, but it lists a lot of files that I do actually need.
Here's one idea: crawl the site and create a list of every file the crawler can reach, then compare it against the files actually on the server and check anything that isn't on the list. Wikipedia has a list of crawlers, including some open-source ones.
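A minimal Python sketch of that approach (the crawling itself is simplified to pre-fetched pages; extracting href/src targets and taking the set difference against the server's file list is the core idea — the page content and file names below are made up):

```python
from html.parser import HTMLParser
from urllib.parse import urljoin, urlsplit

class LinkCollector(HTMLParser):
    """Collect the site-relative paths referenced by href/src attributes."""
    def __init__(self, base):
        super().__init__()
        self.base = base
        self.found = set()

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name in ("href", "src") and value:
                url = urljoin(self.base, value)
                self.found.add(urlsplit(url).path.lstrip("/"))

def find_orphans(pages, local_files):
    """pages: {relative_path: html}; local_files: every file on the server."""
    referenced = set(pages)  # every crawled page is itself in use
    for path, html in pages.items():
        collector = LinkCollector("/" + path)
        collector.feed(html)
        referenced |= collector.found
    return set(local_files) - referenced

pages = {"index.html": '<a href="about.html">About</a> <img src="img/logo.png">'}
local = ["index.html", "about.html", "img/logo.png", "old.css"]
print(find_orphans(pages, local))  # → {'old.css'}
```

Anything this reports is only a *candidate* for deletion: files reached by JavaScript, server-side includes, or external links won't show up as referenced.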
Xenu's Link Sleuth is the easiest way I've found:
http://home.snafu.de/tilman/xenulink.html
After you do the scan you have the option to put in your FTP info. If you do so, it will also generate a list of files that are not accessible (orphans).
How would you define "unnecessary"? That's something you need to be sure of before beginning. One way to garbage-collect your site is to delete files that are not referenced by any other file.
@Brendan's idea of using a crawler to find all the files that are actually used is very nice.
Then you can start deleting files from your website, and afterwards use a program such as Xenu or LinkTiger (or whichever one you prefer) to find any broken links.
You can connect with some FTP application and delete files manually. This is the safest way, because scripts and programs don't know what is needed and what isn't...
This did not exist at the time this question was asked, but there is a Python script called weborphans designed for this purpose.
Here's a blog entry by the author with some more info: Finding orphaned files on websites

How do I configure the filename length that Ubuntu Linux can handle?

I'm using Liferay Portal on Tomcat and Ubuntu Linux.
Liferay generates a file with a very long name. I've seen these files on Windows, where everything works, but on Ubuntu the file is not created and my server reports an error. I've also tried creating a file with a very long name by hand, and that isn't allowed either.
Is there a way to make Ubuntu allow this?
Fixed it: the source of my problem was the encrypted home directory of my Ubuntu install. It seems the name of every file created there is itself encrypted, making my already long filename even longer.
When I did a fresh installation of Ubuntu without encrypting my home directory, it worked fine. Thanks a lot, all.
There's a huge slew of reasons it may not be working, probably the least of which is a long filename (unless the name exceeds 255 bytes, which I believe is the hard limit on most Linux filesystems).
Also, file size isn't going to be a problem unless you've got some truly enormous files (some older Linux filesystems cap out at 2 GB; I don't know exactly what happens if you go over, but you'd probably still see a 2 GB file that just doesn't contain everything).
My knee-jerk reaction would be to say you're having a permissions problem: the user the server runs as (say, 'www' or 'www-data', or whatever) doesn't have permission to write to the folder it's trying to write to.
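The per-filename limit can be queried directly rather than guessed at. A short Python check (255 is typical for ext4; an eCryptfs-encrypted home allows far less, reportedly around 143 bytes, because the encrypted name must itself fit within 255):

```python
import os

# NAME_MAX is the maximum length, in bytes, of a single path component
# on the filesystem holding the given directory.
name_max = os.pathconf("/tmp", "PC_NAME_MAX")
print(name_max)
```

Running the same check inside the encrypted home versus outside it would show the discrepancy the accepted answer describes.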
The filename you have given as an example is fine:
kevin@latte:~/miscdev/j$ touch 'everything.jsp_Q_browserId=firefox&themeId=controlpanel&colorSchemeId=01&minifierType=js&minifierBundleId=javascript.everything.files&t=1249034302000'
kevin@latte:~/miscdev/j$ ls -l
total 0
-rw-r--r-- 1 kevin kevin 0 2009-07-30 17:07 everything.jsp_Q_browserId=firefox&themeId=controlpanel&colorSchemeId=01&minifierType=js&minifierBundleId=javascript.everything.files&t=1249034302000
I imagine the problem is that you are passing that filename to a shell un-escaped, and it is interpreting the & character. Put the filename in single-quotes, as I have in my example.
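If the filename is produced by a program rather than typed by hand, the same pitfall can be avoided by not involving a shell at all. A sketch in Python (the filename is a shortened version of the one above):

```python
import os
import subprocess
import tempfile

name = "everything.jsp_Q_browserId=firefox&themeId=controlpanel&t=1249034302000"

with tempfile.TemporaryDirectory() as d:
    # The filename is passed as a separate argv element, so no shell ever
    # parses it and '&' is never interpreted as a control operator.
    subprocess.run(["touch", name], cwd=d, check=True)
    created = os.path.exists(os.path.join(d, name))

print(created)  # → True
```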
I had the same problem on my Ubuntu 9.10 machine and I think it really was caused by the home-directory encryption. Those "too long" filenames work fine outside my home.
