I am using D365 Power Automate to generate SharePoint history (document change history) URLs. I can generate them as shown below, but the problem is that the link downloads the file; I want it to open the Word document instead.
I have tried adding web=1 to the query string, but no luck.
https://SITEURL/_vti_history/VERSIONID/LISTNAME/ROOTFOLDER/DOCNAME.docx
The URL is correct and downloads the right document, but I want it to open in the browser instead of downloading.
Please use ?web=1 at the end of the link, like this:
https://SITEURL/_vti_history/VERSIONID/LISTNAME/ROOTFOLDER/DOCNAME.docx?web=1
It's possible to generate a link to the raw content of a file in GitHub; is it possible to do the same with VSTS/Azure DevOps?
Even after reading the existing answers, I still struggled with this a bit, so I wanted to leave a more thorough response.
As others have said, the pattern is (query split onto separate lines for ease of reading):
https://dev.azure.com/{{organization}}/{{project}}/_apis/sourceProviders/{{providerName}}/filecontents
?repository={{repository}}
&path={{path}}
&commitOrBranch={{commitOrBranch}}
&api-version=5.0-preview.1
But how do you find the values for these variables? If you go into your Azure DevOps, choose Repos > Files from the left navigation, and select a particular file, your current url should look something like this:
https://dev.azure.com/{{organization}}/{{project}}/_git/{{repository}}?path=%2Fpackage.json
You should use those values for organization, project, and repository. For path, you'll see a URL-encoded version of the Unix file path. %2F is the encoding for /, so that path is actually just /package.json (a tool like Postman will do that encoding for you).
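If you'd rather not rely on Postman for that step, here is a minimal Python sketch of the same encoding (the path value is just an example):

# URL-encode the file path before putting it in the query string
from urllib.parse import quote

path = "/package.json"
print(quote(path, safe=""))  # prints %2Fpackage.json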
Commit or branch is pretty self-explanatory; you either know what you want for this value or you should use master. I have "hard-coded" the API version in the above URL because that's what the documentation currently points to.
For the last variable, you need providerName. In short, you should probably use TfsGit. I got this value from looking through the list of source providers and looking for one with a value of true for supportedCapabilities.queryFileContents.
However, if you just request this URL you'll get a "203 Non-Authoritative Information" response back because you still need to authenticate yourself. Referring again to the same documentation, it says to use Basic auth with any value for the username and a personal access token for the password. You can create a personal access token at https://dev.azure.com/{{organization}}/_usersSettings/tokens; ensure that it has the Token Administration - Read & Manage permission.
If you're unfamiliar with this sort of thing, again Postman is super helpful with getting these requests working before you get into the code.
So if you have a repository with a src directory at the root, and you're trying to get the file contents of src/package.json, your URL should look something like:
https://dev.azure.com/{{organization}}/{{project}}/_apis/sourceProviders/TfsGit/filecontents?repository={{repository}}&commitOrBranch=master&api-version={{api-version}}&path=src%2Fpackage.json
And don't forget the basic auth!
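To tie the pieces together, here is a minimal Python sketch of that request using the requests library; the organization, project, repository, path, and PAT values are placeholder assumptions you would substitute with your own:

# Fetch raw file contents from Azure DevOps via the sourceProviders API.
# All values below are placeholders; substitute your own.
import requests

organization = "myOrg"
project = "myProject"
repository = "myRepo"
pat = "my-personal-access-token"

url = (
    f"https://dev.azure.com/{organization}/{project}"
    "/_apis/sourceProviders/TfsGit/filecontents"
)
params = {
    "repository": repository,
    "path": "src/package.json",      # requests URL-encodes this for you
    "commitOrBranch": "master",
    "api-version": "5.0-preview.1",
}

# Basic auth: any username, personal access token as the password
response = requests.get(url, params=params, auth=("anything", pat))
response.raise_for_status()
print(response.text)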
Sure, here's the REST call needed:
GET https://feeds.dev.azure.com/{organization}/_apis/packaging/Feeds/{feedId}/packages/{packageId}?includeAllVersions={includeAllVersions}&includeUrls={includeUrls}&isListed={isListed}&isRelease={isRelease}&includeDeleted={includeDeleted}&includeDescription={includeDescription}&api-version=5.0-preview.1
https://learn.microsoft.com/en-us/rest/api/azure/devops/artifacts/artifact%20%20details/get%20package?view=azure-devops-rest-5.0#package
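For illustration only, a small Python sketch of that call; the organization, feed ID, and package ID are hypothetical placeholders, only a couple of the optional query parameters are shown, and it assumes the same Basic auth with a PAT described in the other answers:

# Query package details from Azure Artifacts (placeholder values).
import requests

organization = "myOrg"
feed_id = "myFeed"
package_id = "00000000-0000-0000-0000-000000000000"
pat = "my-personal-access-token"

url = (
    f"https://feeds.dev.azure.com/{organization}"
    f"/_apis/packaging/Feeds/{feed_id}/packages/{package_id}"
)
response = requests.get(
    url,
    params={"includeAllVersions": "true", "api-version": "5.0-preview.1"},
    auth=("anything", pat),  # Basic auth with a PAT, as above
)
response.raise_for_status()
print(response.json())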
I was able to get the raw contents of a file using this URL.
GET https://dev.azure.com/{organization}/{project}/_apis/sourceProviders/{providerName}/filecontents?serviceEndpointId={serviceEndpointId}&repository={repository}&commitOrBranch={commitOrBranch}&path={path}&api-version=5.0-preview.1
I got this from here.
https://learn.microsoft.com/en-us/rest/api/azure/devops/build/source%20providers/get%20file%20contents?view=azure-devops-rest-5.0
You can obtain the raw URL using Chrome.
Turn on Developer tools and view the Network tab.
Navigate to the required file in the DevOps portal (Content panel). Once the content view is visible, check the Network tab again and find the URL that starts with "Items?Path"; this is a JSON response which contains the required "url" element.
Drag the filename from the attachments window and drop it into any other MS application to get the raw URL or linked filename.
Most answers address this well, but in the context of a public repo with anonymous access the API is different. Here is the one that works in such a scenario:
https://dev.azure.com/{{your_user_name}}/{{project_name}}/_apis/git/repositories/{{repo_name_encoded}}/items?scopePath={{path_to_your_file}}&api-version=6.0
This is the exact equivalent of the "raw" URL provided by GitHub.
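As a minimal Python sketch, assuming the repo really does allow anonymous access; the organization, project, repository, and file path below are placeholders:

# Fetch raw file contents from a public Azure DevOps repo (no auth needed).
import requests

url = (
    "https://dev.azure.com/your_user_name/project_name"
    "/_apis/git/repositories/repo_name/items"
)
response = requests.get(
    url,
    params={"scopePath": "/README.md", "api-version": "6.0"},
)
response.raise_for_status()
print(response.text)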
Another way that may be helpful if you want to quickly get the raw URL for a specific file that you are browsing:
install the browser extension named "Undisposition"
from the dot menu (top right) choose "Download": the file will open in a new browser tab from which you can copy the URL
(edit: unfortunately this will only work for file types that the browser knows how to open, otherwise it will still offer to download it...)
I am fairly new to this and had an issue accessing a raw file in an Azure DevOps repo. It's straightforward in GitHub.
I wanted to download a file in CMD and BASH using Curl.
First I browsed to the file contents in the browser and made a note of the bold sections:
https://dev.azure.com/**myOrg**/_git/**myProjectName**?path=%2F**MyFileName.ps1**
I then constructed the URL similar to what #Zach posted above.
https://dev.azure.com/**myOrg**/**myProjectName**/_apis/sourceProviders/TfsGit/filecontents?repository=**myProjectName**&commitOrBranch=**master**&api-version=5.0-preview.1&path=%2F**MyFileName.ps1**
Now when I paste the above URL in the browser it displays the content in RAW form similar to GitHub.
The difference was that I had to set up a PAT (Personal Access Token) in my Azure DevOps account and then authenticate the URL in CMD/Bash; example below:
curl -u "<username>:<password>" "https://dev.azure.com/myOrg/myProjectName/_apis/sourceProviders/TfsGit/filecontents?repository=myProjectName&commitOrBranch=master&api-version=5.0-preview.1&path=%2FMyFileName.ps1" -# -L -o MyFileName.ps1
Referring to this post:
Picture download from url via lotus script
What do I have to change if I want to download a file (.csv or .xlsx) from a URL?
You don't have to change anything. The code behind the link works for all sorts of files. I just named the method "getImage...()" because the original question was about downloading images.
I have used Google Search Appliance in my application to search files.
I am able to search all files. However, I am not able to fetch .xls files.
The search URL for this is:
GoogleSearch.html?
advanced=true&filter=0&requiredfields=&as_q=content&lr=&as_epq=&country=&as_eq=&unit=&committee=&sort=&function=&num=10&contenttype=&as_occt=any&as_filetype=xls&site=&Submit.x=91&Submit.y=15
You can take a look at the "Index diagnostic" page on the GSA Admin Console to confirm that those .xls files are actually stored in the index.
I've built an online system that allows users to download PDF files using ColdFusion. Users have to log in before they can download the files (PDF & Microsoft Office documents). (This application is only for our company staff.)
However, only today I found out that anyone with internet access can view the files. With only certain keywords such as 'Medical Form myCompanyName' in a Google search, they can view the PDF files using the browser.
How can I prevent this?
UPDATE
This is what my problem is: I've created a folder for all of the PDF files. Each file is retrieved using an ID from the database. If, say, a user wanted to view the Medical Form, the link would be: http://myApplication.myCompanyName/forms.cfm?Department=Account&filesID=001.
If the user copies this URL and logs out of the system, he/she will not be able to view this file (the login page will be displayed).
However, even without the URL, other internet users can still view the PDF files just by searching for them on the net, and the search engine will give a link that points directly to the folder itself, without having to log in.
Example:
The Medical Form's PDF file is stored in a folder named Document. When an internet user searches for Medical Form, the search engine links to: http://myApplication.myCompanyName/Document/Medical%20Form.pdf
We have lots of PDF files in this folder and most of them are confidential, for internal viewing only. In PHP, we can prevent this by using .htaccess. I'd like to know if there's anything like this for ColdFusion.
You can send files through code with a single line like this:
<cfif isAuthorized>
<cfcontent file="/path/to/files/outside/of/web/root/Form.pdf" type="application/pdf" reset="true" />
</cfif>
ColdFusion FTW, right.
Please note that handling large files (say, 100MB+) may cause some problems, because files are pushed into RAM before sending. It looks like this is no longer correct, as Mike's answer explains.
Another option is to use a content type like x-application if you want to force a download.
UPD
You want to put this code into a file (let's say file.cfm) and use it for PDF links. Something like this:
<a href="file.cfm?filename=Xyz.pdf">Download file Xyz.pdf</a>
file.cfm:
<!--- with trailing slash --->
<cfset basePath = "/path/to/files/outside/of/web/root/" />
<!--- serve the file only if the user is authorized, a filename was passed,
      it exists, and it resolves to a file directly under basePath
      (the last check blocks path traversal such as "../") --->
<cfif isAuthorized AND StructKeyExists(url, "filename")
    AND FileExists(basePath & url.filename)
    AND isFile(basePath & url.filename)
    AND GetDirectoryFromPath(basePath & url.filename) EQ basePath>
    <cfcontent file="#basePath##url.filename#" type="application/pdf" reset="true" />
<cfelse>
<cfoutput>File not found, or you are not authorized to see it</cfoutput>
</cfif>
UPD2
Added GetDirectoryFromPath(basePath & url.filename) EQ basePath as an easy and quick protection against the security issue mentioned.
Personally I usually use the ID/database approach, though this answer was initially intended as simple guidance, not really a comprehensive solution.
You need to store your PDFs outside of your web root.
So let's say the base of your web app is
/website/www
All HTTP (web) requests are served from there.
/website/pdf
could be the path where all PDFs are stored. This path isn't accessible via URL as it's not served by your web server.
Then in www
you have something like
downloadpdf.cfm?file=NameOfPDF.pdf
Which does your checks to ensure it's an appropriate user and, if so, serves the document:
<cfcontent type="application/pdf" file="/website/pdf/#url.file#" />
Using cfcontent, pre-CF8, is a really bad idea, as it loads the entire file into memory before transmission. CF8 and later will actually stream from disk, which resolves the memory issue. However, if you have large files, users on slow connections, and/or heavy downloads, you still have to worry about thread starvation. Each download with cfcontent ties up a thread for the duration of the download.
Depending on your web server you might be able to route around this by using an x-sendfile extension. This allows you to send an http header with the path to a file outside of your web root, and have your web server handle sending the file, freeing up cf to do further work.
Here's an article by Ben Nadel about using mod_xsendfile on Apache: http://www.bennadel.com/blog/2170-Streaming-Secure-Files-Efficiently-With-ColdFusion-And-MOD-XSendFile.htm and here's an equivalent IIS7 XSendFile plugin: https://github.com/stakach/IIS-X-Sendfile-plugin
You might check out the snippet of code for the CFWheels sendFile() helper: http://cfwheels.org/docs/1-1/function/sendfile
https://gist.github.com/1528113
I have a use case that seems pretty simple, but after Googling around I can't find a solution. I have some Word documents on an FTP server and I'd like to be able to create a link that would download them into Word and then allow the saved changes to be sent back to the FTP server.
The problem is that Word either opens the file from the FTP server as read-only, so I can't save the changes back to the server, or the file downloads to a temporary location which isn't automatically saved back to the server. I'm creating my link like this:
<a href="ftp://ftp.example.com/www/uploads/Image/test.doc">Test</a>
Frustratingly, if I go into Word's File | Open and paste the link "ftp://ftp.example.com/www/uploads/Image/test.doc", I can save back to the server. What gives? Is there a solution? From Googling around it seems that SharePoint offers this ability, but that's not practical for us. We're using IE7 and Office 2003.
I believe Microsoft Word can read/write via WebDAV; see this question:
Editable Word Document from JSP
Can you set up some kind of proxy that can connect via FTP?
Read this link: http://www.webdavsystem.com/server/documentation/ms_office_read_only (it's actually about WebDAV, but I'd guess this is the same issue for FTP). There is a section on opening web-linked documents in non-read-only mode, which needs some changes on the client side...
HTH
Tim
Solution for IE:
Put a file on ajaxbrowser.com (this is a WebDAV server for testing) and replace the file's full path in the code below:
// SharePoint.OpenDocuments is an ActiveX control, so this works in IE only
var openDocumentsObject = new ActiveXObject("SharePoint.OpenDocuments");
openDocumentsObject.EditDocument('http://ajaxbrowser.com/mydoc.docx');
Another example:
<a href='http://ajaxbrowser.com/mydoc.docx' id='urltarget' target='_blank'>Edit through URI</a>