I have some files on S3 and would like to view them in the browser. The problem is that the files are not public, and I don't want to make them public. Google Docs Viewer works, but the condition is that the files must be public.
Can I use Office Web Apps to show them in the browser? Since the files are private, I do not want to store any data on Microsoft's servers. It looks like even Google Docs Viewer stores the info while parsing.
What is the cleanest way?
Thanks.
I have looked around for something similar before, and there are some apps you can install locally (CyberDuck, S3 Browser, etc.). In-browser options were limited until recently (full disclosure: I worked on this project).
S3 LENS - https://www.s3lens.com/
I'll probably get downvoted here, but Microsoft also has an online viewer, which works the same way: the file needs to be publicly accessible.
Here is the link: https://view.officeapps.live.com/op/view.aspx
What I could add is that those files need to be publicly accessible only for a short period, i.e. until the page gets opened. So you could work around this by uploading the file to be viewed to public temporary storage under a randomly generated path and giving that URL to the online viewer.
Of course this is not entirely safe, since the file will at some point end up in the temp storage and then on Google's or Microsoft's servers, but the random path names offer some degree of safety.
I recently created a small Glitch app which demonstrates what I just explained: https://honeysuckle-eye.glitch.me/
It uploads local files to temporary storage and then opens the viewer from that temp storage; the temp storage only lasts for one download, so it is pretty safe.
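Since the original files live on S3, one way to get the "publicly accessible only for a short period" effect without a separate temp store is a pre-signed URL. Here is a minimal sketch assuming boto3, with placeholder bucket and key names:

    import urllib.parse

    import boto3  # assumed dependency: pip install boto3

    # Sketch: make a private S3 object readable for a few minutes via a
    # pre-signed URL, then hand that URL to the Office online viewer.
    # Bucket and key below are placeholders.
    s3 = boto3.client("s3")
    signed_url = s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": "my-private-bucket", "Key": "docs/report.docx"},
        ExpiresIn=300,  # the link stops working after 5 minutes
    )

    viewer_url = (
        "https://view.officeapps.live.com/op/view.aspx?src="
        + urllib.parse.quote(signed_url, safe="")
    )
    print(viewer_url)  # open this in a browser before the link expires

Note that this only narrows the exposure window; the viewer itself still fetches the document, as described above.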
I am planning a big migration from Dropbox to Microsoft OneDrive, but I noticed that OneDrive does not offer the same download logic as Dropbox. To be precise, we really liked the Dropbox feature where files are pre-downloaded to your computer, so when you really need them you don't have to wait for a download.
Unfortunately, from what I see, OneDrive does not offer the same option.
The reason we are trying to do this is to minimize wait time when you really need the files.
Has anyone encountered this problem?
Kind regards
By default, OneDrive keeps data in the cloud to save your local storage and for collaboration.
If you want the files locally, there is an option available in OneDrive after sync:
Right-click on the required folder > Always keep on this device.
Now these files will be available locally even when OneDrive is offline.
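For a big migration, clicking through folders one by one is tedious. The GUI toggle corresponds to the Windows "pinned" file attribute, so it can be scripted; a rough sketch, assuming Windows 10 1709+ with Files On-Demand enabled and a placeholder path:

    import subprocess

    # Sketch: pin an entire OneDrive folder so its contents are always
    # kept on disk. "+p" sets the pinned attribute, "-u" clears the
    # unpinned one; /s recurses into files, /d includes directories.
    # The folder path is a placeholder.
    folder = r"C:\Users\me\OneDrive\ImportantDocs"
    subprocess.run(
        ["attrib", "+p", "-u", folder + r"\*", "/s", "/d"],
        check=True,
    )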
In this question, there is a proposed solution to preview Word files with the Google and Microsoft viewers. Both solutions work pretty well, but there is a restriction on this project that the documents can't leave Europe.
I am wondering about the following:
Do Google/Microsoft store the files on their servers after downloading them? I see that the file download does not happen in the browser; in both cases the document is served as images.
If the request comes from Europe, is the document processed on European servers? And if not, is it possible to influence this somehow?
Here are the examples for the convenience:
https://docs.google.com/gview?url=https://omextemplates.content.office.net/support/templates/en-us/tf10002117.docx
https://view.officeapps.live.com/op/embed.aspx?src=https://omextemplates.content.office.net/support/templates/en-us/tf10002117.docx
PS: in case it is relevant, the document itself is stored in a Google Cloud Storage European region.
I am new to SharePoint Online and haven't found anything via Google: I have tons of files (read: terabytes) stored on my filesystem and in cloud storage, and I want to access their metadata to allow searching for them. Is this possible without uploading them into SharePoint Online? It should also be possible to "sync" the hierarchy of the crawled folder so I can click through the folder structure in SharePoint. I do not want to store the content of these files, though (for storage space reasons).
It is like having a synced folder in SharePoint where the files are searchable, but they are just shortcuts of some kind, without content.
I thought of creating some sort of timed job which crawls the file system and creates empty files in SharePoint that contain the metadata and a link to the file (see the sketch after this question), but this seems very crude to me. Is there a better solution, or maybe even something SharePoint Online itself provides?
// edited: I need to crawl not only files on my filesystem but also cloud storage files of different cloud storage services.
// whoops got that wrong, it is SharePoint Online, not 2013.
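To make the timed-job idea above concrete, here is a rough sketch that walks a local tree and creates metadata-only stub items in a SharePoint Online list via Microsoft Graph. The site ID, list ID, token, and the "FileLink" column are all assumptions, not an existing setup:

    import os

    import requests  # assumed dependency: pip install requests

    GRAPH = "https://graph.microsoft.com/v1.0"
    SITE_ID = "your-site-id"     # placeholder
    LIST_ID = "your-list-id"     # placeholder
    TOKEN = "your-access-token"  # placeholder; needs Sites.ReadWrite.All

    def index_tree(root: str) -> None:
        """Walk a folder tree and create one stub list item per file,
        storing only the file name and a link back to the original."""
        headers = {"Authorization": f"Bearer {TOKEN}"}
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                path = os.path.join(dirpath, name)
                body = {
                    "fields": {
                        "Title": name,
                        # "FileLink" is a hypothetical text column.
                        "FileLink": "file:///" + path.replace("\\", "/"),
                    }
                }
                resp = requests.post(
                    f"{GRAPH}/sites/{SITE_ID}/lists/{LIST_ID}/items",
                    headers=headers,
                    json=body,
                )
                resp.raise_for_status()

    index_tree(r"D:\data")  # placeholder root folder

The same loop could be pointed at a cloud storage listing API instead of os.walk for the non-filesystem sources.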
You can add the file share as one of the content sources. SharePoint will be able to crawl and index the files in the file share.
Steps on how to do it can be found here...
How to: Configure Enterprise Search to index a file share
I have a WordPress installation on an Azure Website (not a WebRole). I have FTP access to the site, but a full download can literally take an hour, which is insane, because if I could just zip the thousands of files on the site (due to all the plugins, etc.) the zip might take 5 seconds to run. Downloading that would be far more reliable, since it's just 1 file, not 10,000 that could get corrupted in transfer. Traditional hosts let you zip folders from their control panel, for instance, but FTP doesn't allow this.
So can I do this on an Azure Website in any way, shape, or form? I've looked a bit into SFTP, which seems to have some such capabilities, but it doesn't seem to be implemented in Azure Websites. What can I do? This whole workflow is despicable; I can't live with it, and it discourages backups. It pushes one toward a traditional shared host, but I would rather avoid that if possible.
Use the Kudu Console. To access it, simply go to {yoursite}.scm.azurewebsites.net.
You will then be prompted for your login credentials for your Microsoft Azure Account. Once logged in, click on 'Debug Console' at the top of the web page.
Within the UI, next to each file and folder there is a down-arrow icon that lets you download the item.
For files, it directly downloads the file by navigating to it.
For directories, it downloads a zip file containing the full content of the folder.
Detailed instructions can be found here: https://github.com/projectkudu/kudu/wiki/Kudu-console
Another solution is the Kudu API. You can accomplish a lot with it, such as downloading a folder from the App Service as a ZIP, and you can automate this using a script.
If you are logged in to Kudu in the browser, just use:
https://{{your-site}}.scm.azurewebsites.net/api/zip/{{folder-path}}
If using a script or the command line, pass your credentials as follows:
https://user:pass@{{your-site}}.scm.azurewebsites.net/api/zip/{{folder-path}}
where user and pass can be obtained by going to your App Service in the Azure Portal and clicking Get publish profile in the Overview tab. See the Deployment Credentials documentation for more details.
Note: The folder path starts from D:\home.
For more information, consult the Kudu REST API documentation.
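For example, a small script that pulls down wwwroot as a single ZIP using those deployment credentials might look like this (site name and credentials are placeholders):

    import requests  # assumed dependency: pip install requests

    SITE = "yoursite"                  # placeholder
    USER = "$yoursite"                 # deployment user from the publish profile
    PASS = "publish-profile-password"  # placeholder

    # The path after /api/zip/ is relative to D:\home, as noted above.
    url = f"https://{SITE}.scm.azurewebsites.net/api/zip/site/wwwroot/"
    resp = requests.get(url, auth=(USER, PASS), stream=True)
    resp.raise_for_status()

    # Stream the archive to disk so large sites don't sit in memory.
    with open("wwwroot.zip", "wb") as f:
        for chunk in resp.iter_content(chunk_size=8192):
            f.write(chunk)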
My customer is currently using MHT files for storing offline representations of browsed web pages. The files are saved and later viewed in Internet Explorer.
When viewing the files, we would like to be sure there is absolutely no network activity to the original site or any other site: the content should be browsed 100% offline, and should not have any special "local" privileges either (i.e. access to the file:// protocol, etc.). We would like to keep JS running if possible, and we can suffer the consequences of features disabled by working offline.
We are willing to change the viewer or even the file format (and convert all old mht files as well) if a better solution is suggested.
Thanks for any help on this,
Udi
It is not possible to guarantee that there will be no network activity unless you switch Internet Explorer to offline mode. The advantage of saving a web page to an MHT file is that all the info needed to display the page (including images) is stored in one file instead of several files and folders, which makes archiving easy; but if the web content has links to other pages, clicking on those links will initiate network activity.
One option is to post-process the MHT file and replace the URL links with just the link text. For example, replacing
<A=20
title=3D"Conduction band"=20
href=3D"http://en.wikipedia.org/wiki/Conduction_band">conduction =
bands</A>
with "Conduction band".