How persistent is data I put on my Azure WebApp via FTP?

I've been searching around and can't find any clear answers to this. I need a small amount of data - talking kilobytes, probably not ever reaching megabyte range - available as a file on my Azure instance, outside the web app itself, for a web job to work with. I won't get into why this is necessary, but it is (alternatives have been explored), and the question is now where to put those files. The obvious answer seems to be to connect via FTP, create a directory, drop the files there, and work with them in place.
I did a quick test and I'm able to create a "downloads" directory within the "data" directory, drop some files in it, and work with them there. It works great for this very small, simple need that I have.
How long will that data stay there? Is that directory purged at any point automatically by the servers? Is that directory part of any backups that are maintained? How "safe" is something I manually put outside the wwwroot folder?

It will never be purged. The only folder that can get purged is the %TEMP% folder. All other folders that you have write access to will be persisted forever.
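For what it's worth, this is roughly what a web job working against that persisted area could look like - a minimal Java sketch, assuming the standard %HOME% mapping to the persisted d:\home share and the "downloads" directory under "data" from the question:

import java.io.IOException;
import java.nio.file.*;

public class DownloadsJob {
    public static void main(String[] args) throws IOException {
        // %HOME% points at the persisted d:\home share on Azure App Service;
        // everything under it (unlike %TEMP%) survives restarts and redeploys.
        Path downloads = Paths.get(System.getenv("HOME"), "data", "downloads");
        Files.createDirectories(downloads); // no-op if it already exists

        // Write a small state file; it will still be there on the next run.
        Files.writeString(downloads.resolve("state.txt"),
                "last run: " + java.time.Instant.now());

        // Read back whatever earlier runs left behind.
        try (DirectoryStream<Path> files = Files.newDirectoryStream(downloads)) {
            for (Path f : files) {
                System.out.println(f.getFileName() + " - " + Files.size(f) + " bytes");
            }
        }
    }
}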

Related

I wonder if it's possible to store images uploaded to the server outside the jar, and how to reference their location

I created a Spring Boot web project and have already uploaded it to my server (CentOS 7).
Currently, images uploaded to the server are stored inside the static package in the jar file.
This makes the jar file very large and hard to edit.
Can someone give me an idea of how to store the images somewhere else on the server, and how to reference the pictures outside the jar from my HTML?
First of all, you have to decide which directory you are going to store your files in, and create it:
mkdir /path/to/your/dir
Then assign ownership of the newly created directory to your application user:
chown <your user>:<users group> /path/to/your/dir
Then don't forget to give read/write permissions on the directory to the user under which you run your app:
chmod 700 /path/to/your/dir - note that the execute bit on a directory controls whether it can be entered at all, so 700 (rather than 600) is what allows your app to read and write files inside it, while still keeping other users out.
Then just replace the path you already have with the new one (pointing to the newly created directory), and serve the images from there, as shown in the sketch below.
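To reference the pictures from your HTML once they live outside the jar, you can map a URL prefix onto that directory with a Spring MVC resource handler. A minimal sketch, assuming Spring Boot and the /path/to/your/dir location from above (the /images/** prefix is just an example name):

import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.config.annotation.ResourceHandlerRegistry;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurer;

@Configuration
public class StaticImageConfig implements WebMvcConfigurer {
    @Override
    public void addResourceHandlers(ResourceHandlerRegistry registry) {
        // Serve files stored outside the jar: a request for /images/cat.png
        // is resolved against /path/to/your/dir/cat.png on disk.
        registry.addResourceHandler("/images/**")
                .addResourceLocations("file:/path/to/your/dir/");
    }
}

In your HTML you would then write <img src="/images/cat.png">, while your upload code writes straight to /path/to/your/dir.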
Please be aware that there is a lot of security stuff to consider when you're going to store user-uploaded files on your server.
By the way, please consider reading about different storage options like AWS S3, Google Cloud Storage and Ceph.
Please take into account that if you're going to store your files on your own server, then you have to take care of them yourself (for example: keep an eye on disk space, make sure you have mirroring across disks, and so on). With AWS S3, for example, you don't need to worry about any of that, and it's very cheap.

Setting up a trigger to watch new folders in Azure Logic Apps

I am trying to create a logic app that will transfer files, as they are created, from my FTP server to my Azure file share. The folder my trigger is watching is organized by date (see below). Each day that a file is added, a new folder is created, so I need the trigger to check new subfolders, but I don't want to go into the app every day to change which folder the trigger looks at. Is this possible?
Here's what my folder (called DATA) structure looks like; each day that a file is added, a new folder is created.
-DATA-
2016-10-01
2016-10-02
2016-10-03
...
The FTP connector uses configurable polling, where you set how often it should look for new files. The trigger currently does not support dynamic folders. However, what you could try is the following:
Trigger your logic app by recurrence (same principle as the FTP trigger in fact)
Action: Create a variable to store the current date/time (in the format used in your folder naming)
Action: Do a list files in folder (here you should be able to dynamically set the folder name using the variable you created)
For-each file in folder
Action: Get File Content
Whatever you need to do with the file (calling a nested logic app is smart in case you need to run multiple processing actions on each file or handle resubmits of the flow per file)
In order to avoid picking up every file on each run, you will need a way to exclude files which have been processed in an earlier run. So either rename each file after it's processed to an extension you can exclude in the next run, or move it to a subfolder "Processed\datetime" in the root.
This solution will require more actions and thus will be more expensive. I haven't tried it out, but I think this should work - or at least it's the approach I would try to set up; the sketch below spells out the same logic in code.
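To make the moving parts concrete, here's the same per-day logic sketched as plain Java against Apache Commons Net, rather than as a Logic App definition; the host, credentials, DATA root, and Processed subfolder are assumptions taken from the steps above:

import java.io.IOException;
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

import org.apache.commons.net.ftp.FTPClient;
import org.apache.commons.net.ftp.FTPFile;

public class DailyFolderPoller {
    public static void main(String[] args) throws IOException {
        // Step "create a variable to store the date time": build today's folder name.
        String folder = "/DATA/" + LocalDate.now().format(DateTimeFormatter.ISO_LOCAL_DATE); // e.g. /DATA/2016-10-03

        FTPClient ftp = new FTPClient();
        ftp.connect("ftp.example.com");   // assumed host
        ftp.login("user", "password");    // assumed credentials
        try {
            ftp.makeDirectory(folder + "/Processed"); // no-op if it already exists

            // Step "list files in folder", using the dynamically built folder name.
            for (FTPFile file : ftp.listFiles(folder)) {
                if (!file.isFile()) continue;
                String source = folder + "/" + file.getName();

                // Step "get file content" and per-file processing would go here, e.g.:
                // processFile(ftp.retrieveFileStream(source)); // processFile is hypothetical

                // Move the file aside so the next run excludes it.
                ftp.rename(source, folder + "/Processed/" + file.getName());
            }
        } finally {
            ftp.logout();
            ftp.disconnect();
        }
    }
}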
Unfortunately, what you're asking is not possible with the current FTP connector. And there isn't any really great solution right now... :(
As an aside, I've seen this pattern several times and, as you are seeing, it just causes more problems than it solves - and realistically, it solves zero. :)
If you own the FTP Server, the best thing to do is put the files in one folder.
If you do not own the FTP server, politely mention to the owner that this pattern is causing problems and doesn't help you in any way, so please, put the files in one folder ;)

Publish website to Azure, remove additional files at destination, but ignore specific folders

I currently manually delete obsolete folders from a published Azure website. I know there is an option in Visual Studio to Remove additional files at destination. My problem is that I have a (quite large) Images folder that users upload to, which will be deleted when I publish with this option checked. My question is: is there a way to use this option with exclusions? Meaning, to delete all files that are not in the local project except the "\Images" folder?
You can most likely customize the Web Deploy usage from VS to do what you want, but I don't think I would recommend it, since things like that tend to get fragile.
I would suggest changing your architecture to store the images in a blob container, then possibly mapping your blobs to a custom domain (https://azure.microsoft.com/en-us/documentation/articles/storage-custom-domain-name/).
Having your images in blob storage will also prevent any accidental deletion of the Images folder by someone else that doesn't know it shouldn't be touched (or you simply forgetting about it one day).
Using blob storage will also allow you to configure CDN usage if you ever find that you need it.
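For the upload path itself, here's a minimal sketch using the Azure Storage Blob SDK for Java; the connection string, the "images" container name, and the file names are placeholders, not values from the question:

import com.azure.storage.blob.BlobContainerClient;
import com.azure.storage.blob.BlobServiceClientBuilder;

public class ImageUploader {
    public static void main(String[] args) {
        // Placeholder connection string - in practice read it from configuration.
        BlobContainerClient images = new BlobServiceClientBuilder()
                .connectionString("DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...")
                .buildClient()
                .getBlobContainerClient("images");

        // Upload a user file; the blob name becomes the URL path segment, e.g.
        // https://<account>.blob.core.windows.net/images/avatars/42.png
        images.getBlobClient("avatars/42.png")
              .uploadFromFile("/tmp/upload-42.png", true); // true = overwrite if present
    }
}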
Another option would be to create a virtual directory on your WebApp configuration and put the Images there - that way your VS deploy/publish wouldn't be modifying that subdirectory. This link may help with that: https://blogs.msdn.microsoft.com/tomholl/2014/09/21/deploying-multiple-virtual-directories-to-a-single-azure-website/

Folder permissions in Azure web sites

Just getting my head around the new Azure web sites feature and hitting my first obstacle. I'm deploying a PHP site which writes cache data to the file system, but the app is throwing an error because the folder it wants to write to does not have write permission. Is it possible to set permissions on folders or is this a no-no?
I can probably work round this but would like to know if it's possible.
Folder permissions cannot be set/customized. This means whatever location your app writes to should be under your site root.
Your site can only write to locations under C:\DWASFiles\Sites\[siteName]\VirtualDirectory0 and to the %TEMP% folder.
Two caveats here:
Stuff can't be written directly under VirtualDirectory0; you have to create a subfolder under there and place your files in that subfolder
The %TEMP% folder really is temporary! If your site instance goes down for any reason and is brought back up somewhere else then everything in your %TEMP% folder will be gone. Use it only for files that really are temporary.
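To make the difference concrete, here's a little sketch (Java for consistency with the other examples on this page; the same idea applies to a PHP app) that writes a durable cache file under the site root and a throwaway file under %TEMP%; the site name in the path is made up:

import java.io.IOException;
import java.nio.file.*;

public class WritableLocations {
    public static void main(String[] args) throws IOException {
        // A subfolder under the site root persists with the site...
        Path cache = Paths.get("C:\\DWASFiles\\Sites\\mysite\\VirtualDirectory0\\cache");
        Files.createDirectories(cache); // must be a subfolder, not VirtualDirectory0 itself
        Files.writeString(cache.resolve("page.html"), "<cached page>");

        // ...while %TEMP% can vanish whenever the instance is moved.
        Path scratch = Files.createTempFile(Paths.get(System.getenv("TEMP")), "work-", ".tmp");
        Files.writeString(scratch, "safe to lose");
    }
}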
Is the folder that the app is trying to write to under the site's folder?
It's my understanding that folder permissions cannot be set/changed. But I haven't seen anything from Microsoft that definitively says "yes" or "no" to that.
It should be possible using Web Deploy.
However, I don't think there is a way to do it without manually setting up the Web Deploy package - as described in the post http://blogs.msdn.com/b/azureappgallery/archive/2013/04/03/set-file-folder-permissions-for-your-content-on-azure-website-using-web-deployable-package.aspx.

Creating a testing environment to test fixes without duplicating the entire site

I have a rather large site with hundreds of files and a footprint of many hundreds of MBs. I have in place an assignment system that we utilize to work on enhancements and bug fixes. What I'd like to do is set up a system in which each assignment gets pushed to the web server (testing server) in its own "sandbox".
Typically I'd just create a copy of the site under a virtual directory, replace the files affected by the assignment, and proceed with testing. The problem here is that we would be making many copies of massive amounts of files.
What I have in my head would be a system where a "master" copy of the site contains all the current files (presumably from source control). From there, create a virtual directory for each assignment with symbolic links to all files and folders except those actually changed for the assignment.
I essentially want to create an integrated build process that will create the virtual directory, pull the symlinks from master, and then replace the links for the files that changed with the actual changed versions from the assignment.
Is this a possibility with Windows Server 2003 and IIS?
Probably possible, but it sounds like a nightmare. Must all of these files be copied to each testing site? If they are merely content files (htm, gif, jpeg, etc.), leverage virtual directories pointing to a common location. It can even be a network location.
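If you did want to script the sandbox idea from the question, the per-assignment setup is only a few lines. A rough Java sketch, assuming a master checkout, a staging folder holding the assignment's changed files in the same relative layout, and NTFS hard links (which, unlike file symlinks, Server 2003 does support for files on the same volume):

import java.io.IOException;
import java.nio.file.*;
import java.util.Set;

public class SandboxBuilder {
    // Build a per-assignment sandbox: hard-link every unchanged file from the
    // master copy (cheap, same volume) and copy in the changed files.
    static void buildSandbox(Path master, Path sandbox, Path changed,
                             Set<String> changedFiles) throws IOException {
        try (var paths = Files.walk(master)) {
            for (Path src : (Iterable<Path>) paths::iterator) {
                Path rel = master.relativize(src);
                Path dest = sandbox.resolve(rel);
                if (Files.isDirectory(src)) {
                    Files.createDirectories(dest);
                } else if (changedFiles.contains(rel.toString())) {
                    Files.copy(changed.resolve(rel), dest); // the assignment's version
                } else {
                    Files.createLink(dest, src); // NTFS hard link back to master
                }
            }
        }
    }
}

You would then point each assignment's IIS virtual directory at its sandbox folder.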
