Obtaining a list of folders from a WebDAV server (IIS)

I am trying to get a list of folders, as an array, from a remote WebDAV server. I am using the PROPFIND method to query the property 'isfolder', which, although not standard, is described in 'Additional WebDAV Collection Properties'. I then plan to parse the XML response to build up the array. However, the response returns 'Resource not found' for this property. I am running the query against an IIS server.
My question, which my results seem to answer already (though I'd like confirmation), is:
-Doesn't IIS expose the 'isfolder' property by default?
-If not, how can I enable it?
-And is there a better way to get a list of folders from a WebDAV directory?
Many thanks in advance.

You don't need any custom properties. Just check the DAV:resourcetype property.
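For what it's worth, here is a minimal sketch of such a PROPFIND in C# (the server URL and credentials are placeholders): it asks only for DAV:resourcetype with Depth: 1 and treats every response containing a <D:collection/> element as a folder.

// Minimal PROPFIND sketch: lists the immediate children of a WebDAV collection
// and keeps only those whose DAV:resourcetype contains <D:collection/>.
// The URL and credentials are placeholders.
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Net;
using System.Text;
using System.Xml.Linq;

class WebDavFolderList
{
    static void Main()
    {
        var request = (HttpWebRequest)WebRequest.Create("http://server/webdav/");
        request.Method = "PROPFIND";
        request.Headers["Depth"] = "1";               // only the immediate children
        request.ContentType = "text/xml";
        request.Credentials = CredentialCache.DefaultCredentials;

        // Ask only for the standard resourcetype property.
        const string body =
            "<?xml version=\"1.0\"?>" +
            "<D:propfind xmlns:D=\"DAV:\"><D:prop><D:resourcetype/></D:prop></D:propfind>";
        byte[] payload = Encoding.UTF8.GetBytes(body);
        using (Stream s = request.GetRequestStream())
            s.Write(payload, 0, payload.Length);

        XNamespace d = "DAV:";
        var folders = new List<string>();
        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            XDocument doc = XDocument.Parse(reader.ReadToEnd());
            // Note: the first <D:response> is usually the requested collection itself.
            foreach (XElement resp in doc.Descendants(d + "response"))
            {
                bool isCollection = resp.Descendants(d + "collection").Any();
                if (isCollection)
                    folders.Add((string)resp.Element(d + "href"));
            }
        }

        folders.ForEach(Console.WriteLine);
    }
}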

The HTTP method is not allowed for the specified URL

We would like to use the PUT, PATCH and DELETE methods when building our internal API in Domino, but I can't get those methods configured; the server refuses to pass them through.
When I used the PUT method, Domino replied:
Error code: 405 | Request method is not allowed by the server
After that I enabled methods via notes.ini
HTTPEnableMethods=GET,POST,PUT,DELETE,HEAD,PATCH
That seemed to help a bit, but now the error says something about the URL, and I don't really understand what it means.
Error code: 405 | The HTTP method is not allowed for the specified URL
I have made tests on two different setups: with and without internet sites documents enabled in server documents.
Does anybody know what I need to do to solve the problem above?
UPDATE
I just noticed the help text for the "Methods" field on the Internet Site document.
GET, HEAD, and POST are the most commonly used methods. OPTIONS and TRACE are useful for debugging. PUT and DELETE should only be enabled if the Web site includes special CGI programs or Java applications that implement them.
Based on that it seems it is not possible to make PUT and DELETE work out of the box.

Is it possible to get a link to the raw content of a file in Azure DevOps?

It's possible to generate a link to the raw content of a file in GitHub; is it possible to do the same with VSTS/Azure DevOps?
Even after reading the existing answers, I still struggled with this a bit, so I wanted to leave a more thorough response.
As others have said, the pattern is (query split onto separate lines for ease of reading):
https://dev.azure.com/{{organization}}/{{project}}/_apis/sourceProviders/{{providerName}}/filecontents
?repository={{repository}}
&path={{path}}
&commitOrBranch={{commitOrBranch}}
&api-version=5.0-preview.1
But how do you find the values for these variables? If you go into your Azure DevOps, choose Repos > Files from the left navigation, and select a particular file, your current url should look something like this:
https://dev.azure.com/{{organization}}/{{project}}/_git/{{repository}}?path=%2Fpackage.json
You should use those values for organization, project, and repository. For path, you'll see a URL-encoded version of the Unix file path: %2F is the encoding for /, so that path is actually just /package.json (a tool like Postman will do that encoding for you).
Commit or branch is pretty self-explanatory; you either know what you want for this value or you should use master. I have hard-coded the api version in the above URL because that's what the documentation currently points to.
For the last variable, you need providerName. In short, you should probably use TfsGit. I got this value from looking through the list of source providers and looking for one with a value of true for supportedCapabilities.queryFileContents.
However, if you just request this URL you'll get a "203 Non-Authoritative Information" response back because you still need to authenticate yourself. Referring again to the same documentation, it says to use Basic auth with any value for the username and a personal access token for the password. You can create a personal access token at https://dev.azure.com/{{organization}}/_usersSettings/tokens; ensure that it has the Token Administration - Read & Manage permission.
If you're unfamiliar with this sort of thing, again Postman is super helpful with getting these requests working before you get into the code.
So if you have a repository with a src directory at the root, and you're trying to get the file contents of src/package.json, your URL should look something like:
https://dev.azure.com/{{organization}}/{{project}}/_apis/sourceProviders/TfsGit/filecontents?repository={{repository}}&commitOrBranch=master&api-version={{api-version}}&path=src%2Fpackage.json
And don't forget the basic auth!
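For reference, here is a minimal sketch of that request in C#; myOrg, myProject, myRepo and the PAT are placeholders, and, per the documentation, the Basic username can be anything (empty here):

// Sketch of calling the sourceProviders filecontents endpoint with Basic auth.
// myOrg, myProject, myRepo and the PAT are placeholders.
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class GetRawFile
{
    static async Task Main()
    {
        const string pat = "<personal-access-token>";
        string url = "https://dev.azure.com/myOrg/myProject/_apis/sourceProviders/TfsGit/filecontents"
                   + "?repository=myRepo"
                   + "&path=src%2Fpackage.json"
                   + "&commitOrBranch=master"
                   + "&api-version=5.0-preview.1";

        using (var client = new HttpClient())
        {
            // Basic auth: empty username, PAT as the password.
            string token = Convert.ToBase64String(Encoding.ASCII.GetBytes(":" + pat));
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Basic", token);

            string raw = await client.GetStringAsync(url);  // the raw file contents
            Console.WriteLine(raw);
        }
    }
}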
Sure, here's the REST call needed:
GET https://feeds.dev.azure.com/{organization}/_apis/packaging/Feeds/{feedId}/packages/{packageId}?includeAllVersions={includeAllVersions}&includeUrls={includeUrls}&isListed={isListed}&isRelease={isRelease}&includeDeleted={includeDeleted}&includeDescription={includeDescription}&api-version=5.0-preview.1
https://learn.microsoft.com/en-us/rest/api/azure/devops/artifacts/artifact%20%20details/get%20package?view=azure-devops-rest-5.0#package
I was able to get the raw contents of a file using this URL.
GET https://dev.azure.com/{organization}/{project}/_apis/sourceProviders/{providerName}/filecontents?serviceEndpointId={serviceEndpointId}&repository={repository}&commitOrBranch={commitOrBranch}&path={path}&api-version=5.0-preview.1
I got this from here.
https://learn.microsoft.com/en-us/rest/api/azure/devops/build/source%20providers/get%20file%20contents?view=azure-devops-rest-5.0
You can obtain the raw URL using Chrome.
Turn on Developer tools and view the Network tab.
Navigate to the required file in the DevOps portal (Content panel). Once the content view is visible, check the Network tab again and find the request whose URL starts with "Items?Path"; its JSON response contains the required "url" element.
Drag the filename from the attachments window and drop it into any other MS application to get the raw URL or linked filename.
Most answers address this well, but in the context of a public repo with anonymous access the API is different. Here is the one that works in such a scenario:
https://dev.azure.com/{{your_user_name}}/{{project_name}}/_apis/git/repositories/{{repo_name_encoded}}/items?scopePath={{path_to_your_file}}&api-version=6.0
This is the exact equivalent of the "raw" url provided by Github.
Another way that may be helpful if you want to quickly get the raw URL for a specific file that you are browsing:
-install the browser extension named "Undisposition"
-from the dot menu (top right) choose "Download": the file will open in a new browser tab from which you can copy the URL
(edit: unfortunately this will only work for file types that the browser knows how to open, otherwise it will still offer to download it...)
I am fairly new to this and had an issue accessing a raw file in an Azure DevOps Repo. It's straightforward in Github.
I wanted to download a file in CMD and BASH using Curl.
First I browsed to the file contents in the browser and made a note of the bold sections:
https://dev.azure.com/**myOrg**/_git/**myProjectName**?path=%2F**MyFileName.ps1**
I then constructed the URL similar to what @Zach posted above.
https://dev.azure.com/**myOrg**/**myProjectName**/_apis/sourceProviders/TfsGit/filecontents?repository=**myProjectName**&commitOrBranch=**master**&api-version=5.0-preview.1&path=%2F**MyFileName.ps1**
Now when I paste the above URL in the browser it displays the content in RAW form similar to GitHub.
The difference was that I had to set up a PAT (Personal Access Token) in my Azure DevOps account and then authenticate the URL; DOS/Bash example below:
curl -u "<username>:<password>" "https://dev.azure.com/myOrg/myProjectName/_apis/sourceProviders/TfsGit/filecontents?repository=myProjectName&commitOrBranch=master&api-version=5.0-preview.1&path=%2FMyFileName.ps1" -# -L -o MyFileName.ps1

Is there a delay when using the Box.com search API?

I'm using the Search API as defined here:
https://developers.box.com/docs/#search
It works well, though I noticed that when I make a folder on the site, then immediately call the API searching for that folder name, it doesn't appear in the results for a minute or so. Is there something I'm doing wrong, or some way to force it to do a live search? Thanks.
You're not doing anything wrong. It just takes a little bit of time for the search indexes to be updated with the new file/folder metadata. There's nothing you can do client-side to speed this up.
If you need immediate access to that new folder, consider saving the folder ID that's returned in the response of the Create a New Folder request.
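A rough sketch of that approach in C# (the access token and folder name are placeholders; the assumption, based on the Box v2 API, is that the create-folder response includes the new folder's id):

// Sketch: create a folder via the Box v2 API and keep the returned id so it
// can be used immediately, without waiting for the search index to update.
// The access token and folder name are placeholders.
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;

class CreateBoxFolder
{
    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", "<access-token>");

            // Create the folder under the root folder (id "0").
            var body = new StringContent(
                "{\"name\":\"My New Folder\",\"parent\":{\"id\":\"0\"}}",
                Encoding.UTF8, "application/json");
            HttpResponseMessage response =
                await client.PostAsync("https://api.box.com/2.0/folders", body);
            response.EnsureSuccessStatusCode();

            // The create response carries the new folder's id, which can be used
            // right away (e.g. GET https://api.box.com/2.0/folders/{id})
            // instead of waiting for it to show up in search results.
            string json = await response.Content.ReadAsStringAsync();
            using (JsonDocument doc = JsonDocument.Parse(json))
            {
                Console.WriteLine(doc.RootElement.GetProperty("id").GetString());
            }
        }
    }
}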

SharePoint: how to get Internet media type of a document?

How do I get the Internet media type of a document using SharePoint web services? I access the service using SOAP. I've tried the Content Types documentation, but that is something different from Content-Type/Internet media type/MIME.
SharePoint doesn't know about MIME-types. At least it doesn't know this on a per-file basis.
Check out SPFile, which represents a file in SharePoint. The closest you might get is SPFile.ProgID: "Gets a string that identifies the application in which the file was created.", but that doesn't correspond to the MIME type, nor is it always filled.
The MIME types are actually assigned by the IIS web server as you can see in this explanation:
Add new file type in SharePoint
So you will not be able to get the MIME type by querying the SOAP web service. Your only option is to download the file in question and check the HTTP headers, where you should find the Content-Type assigned by IIS.
You won't be able to query by content type or similar; you will have to use file extensions for that (e.g. xlsx, docx). IIS does nothing different: it assigns the MIME type by file extension. So instead of trying to get the MIME type from the server, it might be easier to just get the file extension and deduce the MIME type from there.
Lastly: the SharePoint Content Type is a SharePoint-internal construct. It represents different types of content in SharePoint, not always relating to downloadable documents. It has nothing to do with MIME types.
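To make the check-the-header approach concrete, a small sketch (the document URL is a placeholder; a HEAD request is used here so the file body is not actually downloaded, assuming the server answers HEAD):

// Sketch: read the Content-Type that IIS reports for a SharePoint document.
// The document URL is a placeholder.
using System;
using System.Net;

class GetDocumentMimeType
{
    static void Main()
    {
        var request = (HttpWebRequest)WebRequest.Create(
            "http://sharepoint/sites/site/Shared%20Documents/report.pdf");
        request.Method = "HEAD";
        request.Credentials = CredentialCache.DefaultCredentials;

        using (var response = (HttpWebResponse)request.GetResponse())
        {
            // e.g. "application/pdf", as mapped by IIS from the file extension.
            Console.WriteLine(response.ContentType);
        }
    }
}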
In case of server-side code you may use:
SPUtility.GetMimeTypeFromExtension(".pdf")

Issues deleting a site collection

I'm currently doing some tests where I try to delete a site collection programmatically. In doing so, I noticed some strange behavior in SharePoint.
I used the following code to test the site collection deletion.
private static void DeleteSiteCollection(string urlSiteToDelete)
{
    SPSecurity.RunWithElevatedPrivileges(delegate()
    {
        SPWebApplication webApp = SPWebApplication.Lookup(new Uri("http://wssdev06"));
        webApp.Sites.Delete(urlSiteToDelete);
        webApp.Update();
    });
}
When I call the method with the URL of an existing site collection, the site collection is deleted as expected. But when I call the method with null, an empty string, or a URL that is not connected to any site collection, SharePoint deletes the site collection that resides under the root (e.g. http://wssdev06/).
I'm not sure whether I'm misusing the SPSiteCollection.Delete() method or misunderstanding the concept of site collections and managed paths, but I think this is really strange and alarming behavior.
I could reproduce this behavior on different web applications, but have not had the chance to test it on another SharePoint environment yet.
So am I doing something wrong or is this a bug?
UPDATE:
So I did some more investigation and realized that this must have something to do with the indexer of the SPSiteCollection class, which returns the root site collection if there is no site collection located at the given URL. Looks like a bug.
Whenever you ask SharePoint to find a site collection by URL, it will do its best to return an SPSite, even if that means ignoring part of the URL.
Sometimes this is a very good thing, for instance when you have the full URL of a list and want to find the corresponding SPSite and SPWeb.
But it can be very dangerous, for example when you're deleting site collections and make a spelling mistake.
If you want to make sure you get the right site collection, look up the SPSite first and check that the SPSite you get has the URL you expect.
BR
Per
Your code looks right. One thought would be to add a check of the sites collection to make sure the site you want to delete is in the sites collection. I realize this does not answer your question.
This sounds like exactly the issue described in Microsoft's KB 968474 - stsadm can inadvertently delete a root site collection if an erroneous URL path is used. Similar to your symptoms, when using stsadm -o restore, "If the URL path is incorrect then the deletion and restore is attempted against the only valid path which is the root site collection of the URL."
It sounds to me like there is some bug in the underlying site delete API, as you suspected. Possibly, the algorithm looks for a "closest match" rather than "exact match".
Enumerating the Site Collections and validating an exact match might be the best way to avoid this. However, I wouldn't say you're doing anything wrong as this is very close to the Microsoft sample code and the documentation for the function gives no warning about passing invalid URLs.
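As a sketch of that safeguard (reusing the hard-coded web application URL from the question), the delete can be gated on an exact URL match:

// Sketch: only delete when the URL exactly matches an existing site collection,
// so a null, empty, or mistyped URL can never fall through to the root site collection.
private static void DeleteSiteCollectionSafely(string urlSiteToDelete)
{
    if (string.IsNullOrEmpty(urlSiteToDelete))
        throw new ArgumentException("Site collection URL must not be empty.", "urlSiteToDelete");

    SPSecurity.RunWithElevatedPrivileges(delegate()
    {
        SPWebApplication webApp = SPWebApplication.Lookup(new Uri("http://wssdev06"));

        // Enumerate the existing site collections and require an exact match.
        bool exists = false;
        foreach (SPSite site in webApp.Sites)
        {
            using (site)
            {
                if (string.Equals(site.Url, urlSiteToDelete.TrimEnd('/'),
                                  StringComparison.OrdinalIgnoreCase))
                {
                    exists = true;
                    break;
                }
            }
        }

        if (!exists)
            throw new ArgumentException(
                "No site collection found with the exact URL: " + urlSiteToDelete);

        webApp.Sites.Delete(urlSiteToDelete);
    });
}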
