Mounting SharePoint shares using davfs2 yields an empty folder - Linux

I'm trying to mount my employer's SharePoint document repository from Linux.
I followed the article published here: http://howto.unixdev.net/Linux-SharePoint.html
Everything seems to work: I can authenticate and mount the shared folder, but the mount point is empty (well, not exactly: I see a "lost+found" directory).
If I try the same path in Explorer under Windows, I can see that the files are there.
I have no errors in log files or at the CLI.
What can I try?
P.S. Since I see no replies, I am adding the content of /etc/fstab here, hoping that it can be useful to debug my problem:
http://AMENDED/bk/des/data\040administration/ /media/SharePoint davfs rw,noauto,user 0 0

The problem is with non-ASCII characters in filenames.
Please see http://savannah.nongnu.org/support/?108385
There is no solution; just use only ASCII characters in filenames.
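If you want to check in advance which names will trip this up, here is a minimal sketch (my own illustration, not part of davfs2) that flags any filename containing a byte with the high bit set:

#include <stdio.h>

/* Return 1 if the name contains any non-ASCII byte (high bit set). */
static int has_non_ascii(const char *name)
{
    for (const unsigned char *p = (const unsigned char *)name; *p; p++)
        if (*p > 0x7f)
            return 1;
    return 0;
}

int main(int argc, char *argv[])
{
    for (int i = 1; i < argc; i++)
        printf("%s: %s\n", argv[i], has_non_ascii(argv[i]) ? "non-ASCII" : "ASCII only");
    return 0;
}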

I found the solution to my problem: while looking at the way Internet Explorer formatted the URL, I noticed a few letters were capitalized. So I tried capitalizing the same letters in /etc/fstab and, voilà, the repository was correctly mounted.
Here is my current /etc/fstab:
http://AMENDED/bk/des/Data\040Administration/ /media/SharePoint davfs rw,noauto,user 0 0
Note that entering the all-lowercase URL in Windows Explorer works correctly.

The issue turns out to be that the davfs2 code processes the response to PROPFIND in a case-sensitive manner. It is very easy to change it to case-insensitive processing until an official fix is made.
For more info see https://savannah.nongnu.org/support/index.php?108566
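For reference, the gist of the workaround is simply to compare the paths returned in the PROPFIND response case-insensitively. A minimal sketch of such a comparison, assuming a POSIX strcasecmp is available (the helper name here is hypothetical, not the actual davfs2 function):

#include <stdio.h>
#include <strings.h>   /* strcasecmp (POSIX) */

/* Hypothetical helper: compare a PROPFIND href against the requested path
 * ignoring case, so "Data Administration" and "data administration" refer
 * to the same collection. */
static int href_matches(const char *href, const char *requested)
{
    return strcasecmp(href, requested) == 0;
}

int main(void)
{
    printf("%d\n", href_matches("/bk/des/Data Administration/",
                                "/bk/des/data administration/"));   /* prints 1 */
    return 0;
}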

Related

Why can nodejs make a hard link to a directory? [duplicate]

How do you create a hardlink (as opposed to a symlink or a Mac OS alias) in OS X that points to a directory? I already know the command "ln target destination" but that only works when the target is a file. I know that Mac OS, unlike other Unix environments, does allow hardlinking to folders (this is used for Time Machine, for example) but I don't know how to do it myself.
I agree that hard-linking folders/directories can cause problems if not careful, but they have a very definite advantage - Time Machine is a perfect example. Without them it simply would not be practical as the duplication of redundant versions of files would very quickly consume even the largest of disks.
Snow Leopard can create hard links to directories as long as you follow Amit Singh's six rules:
The file system must be journaled HFS+.
The parent directories of the source and destination must be different.
The source’s parent must not be the root directory.
The destination must not be in the root directory.
The destination must not be a descendent of the source.
The destination must not have any ancestor that’s a directory hard link.
So it's not correct at all that Snow Leopard has lost the ability to create hard links to folders.
I just verified that link/unlink do work on Snow Leopard, as long as you follow the six rules. I tried it on my Snow Leopard 10.6.6 system, on both the boot volume and a separate USB external volume, and it worked fine in both cases.
Here is the "hunlink.c" program:
#include <stdio.h>
#include <unistd.h>

int
main(int argc, char *argv[])
{
    if (argc != 2)
        return 1;
    /* unlink() removes the directory hard link itself, not the contents it points to */
    int ret = unlink(argv[1]);
    if (ret != 0)
        perror("unlink");
    return ret;
}
gcc -o hunlink hunlink.c
So, be careful if you try it - remember to follow the rules and use hlink to create these hard links and use hunlink to remove the hard link afterwards. And don't forget to document
what you've done for later on or for someone else who might need to know this.
One other "gotcha" that I just learned about these "hard links" to folders. When you create them there is really a lot that happens "behind the curtain" of Mac OS X. One really important issue is that the folder you create the link to is really moved to a super-magical super-hidden folder called /.HFS+ Private Directory Data%000d/dir_xxx where xxx is the inode number of the "source_folder" - remember the format of the command is
hlink source_folder target_folder
So because of this, you have to be careful of not having any files open in the "source_folder" because if you do, they just got moved to the super-magical folder and you will likely have a problem if you try and save any changes to those files that were open in the "source_folder". This happened to me a couple of times until it dawned on me what was happening and the solution is pretty simple. I noticed that you couldn't do a "ls -la" command any longer without getting funny errors for all the folders/directories that were in the original "source_folder" but you could do a "ls" command and all looked well.
If you run "Verify disk" in the "Disk Utility" program, you will notice that it probably complains and gives a "Volume bitmap needs minor repair for orphaned blocks" which is what just happened with the creation of the super-magical folder and the movement of the "source_folder" to it.
If you do find yourself in this situation with "orphaned blocks", first save the changed files to some other temporary location not in the volume containing the "source_folder" tree, then use "Disk Utility" to unmount and remount the volume that contains the "source_folder" or just restart the computer. Then copy the files you saved to the temporary locations back to their original locations and you should be back in business. This is what worked for me, so can't guarantee this will work for you too. So it might be a good idea to try this out on a volume you have a good backup of just in case.
It seems so very weird that all this overhead occurs just for the simple task of creating a hard link to a folder. Does anyone have any idea why Mac OS X goes to all this effort for this hard link creation to folders? Does it have something to do with the fact that this is a "journaled" file system?
I discovered the info about the super-magical, super-hidden location by reading Amit Singh's explanation of his "hfsdebug" utility. If you want more details see his web site at Amit Singh's hfsdebug utility. It's a very interesting piece of software and will tell you lots of details about HFS+ file systems. It's free and I encourage you to download it and try it out. It's no longer supported but it still works on both Snow Leopard and Leopard - basically any HFS+ supported system. You can't really do any harm with it as it's a "read-only" tool - so it's great to use to look at some details of the filesystem.
One more issue about these "hard links to folders" - once you create one and the super-magical super-secret-hidden folder gets created, it's there for good. Even if you unlink the folder that caused it to be created in the first place, this magic folder stays around. Not sure why, but it definitely does. You can use "hfsdebug" to find this out if you wish to try it out. You can also use "hfsdebug" to find out how many of these "hard links to folders" exist on a drive. For these details refer to Amit's article on the "hfsdebug" utility.
He also has another newer utility that's supported but costs. It's called fileXray and costs $79 for one person on any number of computers in the same household for a personal non-business type license. It has an extensive 173-page User Guide that you can download to see what it can do before you purchase. Unfortunately there is no trial version, so read the manual and check out the web site for more details to see if it can help you out of a jam. Learn all the details about it at their web site - see fileXray web site for more info.
There are a couple of issues you should be aware of when using these hard links to folders. If the volume that they are created on is mounted to a remote client, there can be significant problems, depending on how they are mounted. If you use AFP to mount the volume to a remote client, there are big problems as any folder that currently has a hard link to it or has ever had one but later removed, will be unable to be used as all the lower level folders (but not files) will be inaccessible from either the Finder or a Terminal window. If you try to do a simple "ls -lR" command, it will fail and give you "ls: xxx: No such file or directory" error messages for all lower level folders. If you use a Finder window to traverse the directory tree of the remote volume, the folders that are in the folder that had or has a hard link to it will simply disappear without any error when you first click on the folder name.
These problems don't appear to occur (except for the error message) if you use NFS to mount the remote client (and assuming you had a NFS server on the system that has the volume as a local HFS+ filesystem). Details on how to use NFS to mount volumes are not provided here. I used a nice program from Dr. Marcel Bresink called "NFS Manager" to help with the NFS mounts on the server and client. You can get it from his web site - just search for "Bresink NFS Manager" in your favorite search engine, but he has a free trial version so you can try before you buy. It's not that big a deal if you want to learn how to do the NFS mounts, but the "NFS Manager" makes it pretty easy to set things up and to tweak all the different settings to help optimize it. He has several other neat Mac OS X utilities too that are very reasonably priced - one called "Hardware Monitor" that lets you monitor and graph all kinds of things like power usage, temperature of CPU, speed of fans and many many other variables for both the local and remote Mac systems over extended periods of time (from minutes to days). Definitely worth checking out if you are into handy utilities.
One thing I did notice is that NFS file transfers were about 20% slower than doing them via AFP, but your "mileage may vary", so no guarantees one way or the other, but I would rather have something that works even if I have to pay a 20% performance hit as compared to having nothing work at all.
Apple is aware of the problems with hard links and remote AFP filesystems, and they refer to it as an "implementation limitation" of the AFP client - I prefer to call it what it really appears to me to be - A BUG!!! I can only hope the next release of Mac OS X fixes the problem, as I really like having the ability to use hard links to folders when it makes sense.
These notes are my own personal opinion and I don't make any warranty about their correctness so use them at your own risk. Have a good backup before you play around with these "hard links to folders" just in case something unforeseen happens. But I hope you have fun if you do decide to look a bit more into this interesting aspect of Mac OS X.
You can't do it directly in BASH then. However... I found an article here that discusses how to do it indirectly: http://www.mactech.com/articles/mactech/Vol.23/23.11/ExploringLeopardwithDTrace/index.html by compiling a simple little C program:
#include <unistd.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    if (argc != 3)
        return 1;
    /* link() creates a hard link: argv[1] is the existing directory, argv[2] the new link */
    int ret = link(argv[1], argv[2]);
    if (ret != 0)
        perror("link");
    return ret;
}
...and build in Terminal.app with:
$ gcc -o hlink hlink.c -Wall
Piffle. On 10.5, it tells you in the man page for ln:
-d, -F, --directory
allow the superuser to attempt to hard link directories (note:
will probably fail due to system restrictions, even for the
superuser)
So yes:
sudo ln -d existing_dir new_hard_link
Give it your password, and you're not done yet. You didn't document it, did you? You must document hard-linked directories, even if it's a single-user machine.
Deleting is a different story: if you go about it the usual way to delete directories, you'll delete the contents. So you must "unlink" the directory:
unlink new_hard_link
There. Hope you don't wreck your filesystem!
Cross-posting this great tool which neatly solves the problem, originally posted by Sam:
To install Hardlink, ensure you've installed homebrew, then run:
brew install hardlink-osx
Once installed, create a hard link with:
hln [source] [destination]
I also noticed that the unlink command does not work on Snow Leopard, so I added an option to unlink:
hln -u destination
Code is available on Github for those who are interested: https://github.com/selkhateeb/hardlink
Yes it's supported by the kernel and the filesystem, but since it's not intended for general usage it's not exposed to the shell.
You could probably work out which APIs Time Machine uses and wrap them in a commandline tool, but it'd be better to take the hint and steer well-clear.
The OSX version of ln cannot do it, but, as mentioned in the other answer by rich, it is possible with the GNU version of ln which is available in homebrew as gln as part of the coreutils formula. man gln lists the -d option with the OSX-specific warning provided in rich's answer. In other words, it does not work in all cases. What exactly determines whether it works or not does not seem to be documented anywhere.
As a prerequisite, install coreutils:
brew install coreutils
Now you can do:
sudo gln -d /original_folder /mirror_folder
IMPORTANT: To remove the hard link you must use gunlink:
sudo gunlink /mirror_folder
❗️❗️❗️ Using rm or Finder will also delete the original folder.
FYI: The coreutils homebrew formula provides the GNU-compatible versions of generic unix tools. Use brew list coreutils to see the full list.
As of 2018 this is no longer possible. APFS (introduced in macOS High Sierra 10.13) does not support directory hard links. See https://github.com/selkhateeb/hardlink/issues/31
My case was that I found out that, from a Windows virtual machine, I could not follow symlinks (I wanted to test some HTML pages in Internet Explorer), and my directory structure had symlinks for the CSS and images folders.
My workaround was a different approach than the other answers implied: I used rsync to create a copy of the folder. rsync can resolve the symlinks and copy the linked files instead.
This solved my problem without using hard links to directories, and it's actually an easy solution if you're just working on a small set of files.
rsync -av --copy-dirlinks --delete ../htmlguide ~/src/
From the article linked to, you'll get that error if you try to create the hard link in the same directory as the original. You have to create it somewhere else.
In Linux you can use a bind mount to simulate hard-linking directories. I am not sure about OS X.
sudo mount --bind /some/existing_real_contents /else/dummy_but_existing_directory
sudo umount /else/dummy_but_existing_directory
This can also be done with built-in Perl (from Terminal) without compiling anything. My specific use case is for Google Drive (which doesn't support symbolic links), so the examples below reflect the use case.
To link your "Documents" folder to Google Drive so it's synced:
perl -e 'link "/Users/me/Documents", "/Users/me/Google Drive/Documents"'
To remove the link to your "Documents" folder from Google Drive:
sudo perl -U -e 'unlink "/Users/me/Google Drive/Documents"'
You need "root" to unlink (see "unlink" perldoc).
Another solution is to use bindfs https://code.google.com/p/bindfs/ which is installable via MacPorts:
sudo port install bindfs
sudo bindfs ~/source_dir ~/target_dir
The short answer is you can't. :) (except possibly as root, when it would be more accurate to say you shouldn't.)
Unixes only allow a set number of links to directories - ".." from within all its children and "." from within itself. Anything else is potentially a recipe for a very confused directory tree. This is/was apparently a design decision by Ken Thompson.
(Having said that, apparently Apple's Time Machine does do this :) )
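To see the bookkeeping this refers to, here is a small sketch (my own illustration) that prints a directory's link count; on a typical filesystem it is 2 (the entry in its parent plus ".") plus one ".." per immediate subdirectory:

#include <stdio.h>
#include <sys/stat.h>

int main(int argc, char *argv[])
{
    struct stat st;
    const char *dir = argc > 1 ? argv[1] : ".";

    if (stat(dir, &st) != 0) {
        perror("stat");
        return 1;
    }
    /* st_nlink counts the parent's entry, ".", and each subdirectory's ".." */
    printf("%s: %lu links\n", dir, (unsigned long)st.st_nlink);
    return 0;
}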
In case there are no subfolders, you can try
ln folder_path/*.* target_folder
It worked for me on OS X 10.9.

How to download a file that has a space in its name?

In my virtual directory I have many MP3 files whose names contain spaces or Chinese characters. How do I allow visitors to download them?
For example:
There's no problem when downloading www.myWebsite.com/virtualDirectory/songNameSimple.mp3
But if the song name has a space in it, the space is replaced by %20 and the request returns a 404 error.
I'm curious about the solution in both IIS and LAMP, although maybe it is the same.
Thanks
The server handles this automatically; my problem at the beginning turned out to be due to something else.
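If you ever do need to build such links yourself, the usual approach is to percent-encode the file name before putting it in the URL. A rough sketch, treating the name as raw bytes (which also covers UTF-8 Chinese characters); this is my own illustration, not IIS- or LAMP-specific code:

#include <stdio.h>
#include <ctype.h>

/* Percent-encode a filename for use as a URL path segment. */
static void url_encode(const char *in, char *out, size_t outlen)
{
    static const char hex[] = "0123456789ABCDEF";
    size_t o = 0;

    for (const unsigned char *p = (const unsigned char *)in; *p && o + 4 < outlen; p++) {
        if (isalnum(*p) || *p == '.' || *p == '-' || *p == '_' || *p == '~') {
            out[o++] = (char)*p;           /* unreserved characters pass through */
        } else {
            out[o++] = '%';                /* everything else becomes %XX */
            out[o++] = hex[*p >> 4];
            out[o++] = hex[*p & 0x0f];
        }
    }
    out[o] = '\0';
}

int main(void)
{
    char buf[512];
    url_encode("my song name.mp3", buf, sizeof buf);
    printf("%s\n", buf);   /* prints my%20song%20name.mp3 */
    return 0;
}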

Perforce not syncing all files

Here I am facing a slightly different situation. When I got the latest revision from the depot, out of 48,805 files I got only 48,771. The remaining 34 files show an error like the one below:
"The file size is too long"
What is the best solution for this issue?
This question is well answered here. The file name length limitation comes from Windows. If you can, shorten your Workspace root path.
As explained in the link given by #emartel, it is a Windows limitation and Perforce can't help you much; however, you can choose your workspace smartly. By default Perforce sets the workspace path the same as the server path (which will usually be longer). This will help you in changing it.
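To see why shortening the workspace root helps: the classic Win32 limit is 260 characters for the full path (MAX_PATH), so the workspace root and the depot-relative path have to fit in that budget together. A rough sketch of the arithmetic (my own illustration, with hypothetical paths; long-path-aware APIs can exceed 260):

#include <stdio.h>
#include <string.h>

#define WIN_MAX_PATH 260   /* classic Win32 limit, including the terminating NUL */

/* Does workspace_root + '\' + depot-relative path fit within the limit? */
static int path_too_long(const char *workspace_root, const char *rel_path)
{
    return strlen(workspace_root) + 1 + strlen(rel_path) + 1 > WIN_MAX_PATH;
}

int main(void)
{
    const char *root = "C:\\Users\\me\\Perforce\\some\\deep\\workspace\\root";   /* hypothetical */
    const char *rel  = "very/long/depot/relative/path/to/a/file.cpp";            /* hypothetical */
    printf("too long: %s\n", path_too_long(root, rel) ? "yes" : "no");
    return 0;
}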

TortoiseSVN update/cleanup error between Linux repository and Windows XP

For no reason that I can see, I can no longer run a TortoiseSVN Update on a development directory on my portable Windows XP Professional SP3 machine, getting the error:
Previous operation has not finished; run 'cleanup' if it was interrupted
Please execute the 'Cleanup' command.
If I try running cleanup, I get another error,
cannot process the following paths: cannot move $ROOT_DIR/.svn/tmp/tmp-... to $ROOT_DIR/path/where/thing/should/go: no such file or directory
I have verified that both files exist, and actually from a CMD.EXE prompt I am able to issue a MOVE with those two filenames and have it work correctly. It's of no use, though, because the next time SVN repeats the operation itself after creating a different tmp file name, and while the manual move succeeded, SVN's move fails.
UPDATE: the path lengths are in both cases well below PATH_MAX, target file system is NTFS, and permissions are OK. Maybe I'll now try with FileMon to see whatever TortoiseSVN is really up to.
I tried downgrading TortoiseSVN but to no avail. Other repositories work OK between the same machines.
TortoiseSVN 1.7.9, Build 23248 - 32 Bit , 2012/08/30 18:25:37
Subversion 1.7.6,
apr 1.4.6
apr-utils 1.3.12
neon 0.29.6
OpenSSL 1.0.1c 10 May 2012
zlib 1.2.7
Both server (OpenSuSE Linux 12.2) and client now run the latest version of SVN.
On Windows, I also cannot seem to get any more informative logs or information (I'm not very skilled with TortoiseSVN, I have always used the Linux command line version).
I might delete the local copy and run a checkout, but it's about 2 GB of data, and I'm on a slow connection, so it is really more of a "fly physically to server location and hook a copper Ethernet to the local network there" alternative. I'm reserving that as a sort of last ditch, nuclear option; I'd really rather understand what the problem is, for I fear it might happen again.
UPDATE
I've tried to delete remotely the subdirectory involved, committing the deletion on the server; deleting the subdirectory locally, and emptying the .svn/tmp subdirectory where I found sixteen tmp files, all copies of the one PNG causing problems.
I am still not able to perform any SVN subcommand without getting the "Run cleanup!" error; on cleanup, I get a failed attempt to copy a tmp file to the never-sufficiently-damned .PNG file, which no longer exists anywhere, in a directory that no longer exists anywhere.
I tried recreating the directory locally (but not the file!), no changes.
With FileMon, I traced the source PNG to 8e4c2389cf9d85c8b8ee54d49ea053c752a38187.svn-base in .svn/pristine subdirectory, tried removing it and got SVN complaining. I tried copying it to its intended destination (so that the file-as-it-should-be and the file-as-it-is are identical), no joy.
UPDATE
Well, this is weird. I decided to track everything that TortoiseSVN is doing using FileMon. I could see it checking the wc.db and search the item, checking for it in .svn/pristine (and finding it), copying it (unnecessarily if you ask me...) in .svn/tmp, and finally checking $DESTINATION_FILE (with correct case) using Windows Open() API. And getting PATH NOT FOUND. Yet the file is there, I can see it (and the name is less than 8.3 characters). And why PATH not found and not FILE not found?
Okay, it all boiled down to a directory that had been created remotely with a name ending with space. The file in itself was OK; the directory where it stood was not.
When updating, apparently, the directory got created but the name was shortened by Windows to exclude the final space.
To add to the difficulty of diagnosing, while TortoiseSVN did tell me what the problem was, it did so in a dialog box where the Arial font made the space in \path\to\your \file hard to recognize (it was recognizable once I knew where to look and compared that slash with the others: this one stood a little farther from the letter at its left).
Lesson learned: check the file name in the dialog really carefully, character by character (note to self: find a way of displaying it in Courier New if at all possible).
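If you ever need to hunt for this kind of name again, a small sketch that flags path components ending in a space (my own illustration) can save some squinting at dialog fonts:

#include <stdio.h>

/* Print any path component that ends with a space, e.g. the "your " in "\path\to\your \file". */
static void report_trailing_spaces(const char *path)
{
    const char *start = path;

    for (const char *p = path; ; p++) {
        if (*p == '\\' || *p == '/' || *p == '\0') {
            if (p > start && p[-1] == ' ')
                printf("component ends with a space: \"%.*s\"\n", (int)(p - start), start);
            if (*p == '\0')
                break;
            start = p + 1;
        }
    }
}

int main(int argc, char *argv[])
{
    for (int i = 1; i < argc; i++)
        report_trailing_spaces(argv[i]);
    return 0;
}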
You may have two files in the repository that differ only in case. That's a problem on Windows. See this FAQ for details.

How to configure the filename length that can be handled by Linux Ubuntu?

I'm using the Liferay portal server on Tomcat and Linux Ubuntu.
Liferay is generating a file with a very long name. I've seen those files on Windows and everything works, but when I tried running it on Ubuntu the file doesn't get created and my server gives me an error. I've also tried to create a file with a very long filename myself and it really doesn't allow me.
Is there a way to make Linux Ubuntu allow this?
Fixed this...
The source of my problem was the encrypted home directory of my Ubuntu OS. It seems that the filename of the created file is also encrypted, making my long filename even longer.
When I made a new installation of Ubuntu I didn't encrypt my home directory any more, and it works fine now. Thanks a lot, all...
There's a huge slew of reasons it may not be working, probably the least of which is a long file name (unless we're talking about a filename over 255 characters, which I believe is the hard limit).
Also, file length isn't going to be a big problem unless you've got some truly enormous files (some Linux filesystems cap at 2 GB, but I don't know what the behaviour is if you went over; you'd probably still see a 2 GB file that just doesn't contain everything).
My knee-jerk reaction would be to say you're having a permissions problem, where the user the server is running as (say, 'www' or 'www-data', or whatever) doesn't have permission to write in the folder it's trying to.
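For what it's worth, you can ask the filesystem itself what it allows. A quick sketch using POSIX pathconf (values are per-filesystem; an encrypted home directory can effectively allow much less than the reported 255, since the encrypted names are longer than the originals):

#include <stdio.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    const char *dir = argc > 1 ? argv[1] : ".";

    long name_max = pathconf(dir, _PC_NAME_MAX);  /* longest single filename component */
    long path_max = pathconf(dir, _PC_PATH_MAX);  /* longest relative pathname */
    printf("%s: NAME_MAX=%ld PATH_MAX=%ld\n", dir, name_max, path_max);
    return 0;
}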
The filename you have given as an example is fine:
kevin@latte:~/miscdev/j$ touch 'everything.jsp_Q_browserId=firefox&themeId=controlpanel&colorSchemeId=01&minifierType=js&minifierBundleId=javascript.everything.files&t=1249034302000'
kevin@latte:~/miscdev/j$ ls -l
total 0
-rw-r--r-- 1 kevin kevin 0 2009-07-30 17:07 everything.jsp_Q_browserId=firefox&themeId=controlpanel&colorSchemeId=01&minifierType=js&minifierBundleId=javascript.everything.files&t=1249034302000
I imagine the problem is that you are passing that filename to a shell un-escaped, and it is interpreting the & character. Put the filename in single-quotes, as I have in my example.
I had the same problem on my Ubuntu 9.10 machine and I think it really was caused by the home-directory encryption. Those "too long" filenames work fine outside my home.
