I've found many questions and answers that are similar to mine, but they all seem to be very specific use cases - or, at least, different/old enough to not really apply to me (I think).
What I want to do is something I thought would be simple by now. The most inefficient thing about working with these web apps is that copying files between them can be slow and time-consuming: you have to FTP (or similar) the files down, then send them back up.
There must be a way to do this same thing, but natively within Azure so the files don't necessarily have to go far and certainly not with the same bandwidth restrictions.
Are there any solid code samples or open-source/commercial tools out there that help make this possible? So far, I haven't come across any code samples, products, or anything else that makes it possible (aside from PowerShell blog posts from 5+ years ago). (I'm not opposed to a PowerShell-based solution, either.)
In my case, these are all the same web apps with minor configuration-based customization differences between them. So I don't think Web Deploy is an option, because this isn't about deploying code. Sometimes it's simply creating a clone for a new launch, and other times it's creating a copy for staging/development.
Any help in the right direction is appreciated.
As you've noticed, copying files over to App Service is not the way to go. For managing files across different App Service instances, try using a Storage Account. You can attach an Azure Storage file share to the App Service and mount it. There's a comprehensive answer below on how to get files into the storage account.
Alternatively, if you have control over the app code, you can use blob storage instead of files and read the content directly from the app.
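For illustration, reading and writing that shared content from the app is only a few lines. A minimal sketch, assuming the azure-storage-blob (v12) Python SDK and a connection string exposed to the app as a setting; the container and blob names are made up:

```python
import os
from azure.storage.blob import BlobServiceClient

# Hypothetical app setting holding the storage account connection string.
service = BlobServiceClient.from_connection_string(os.environ["STORAGE_CONNECTION_STRING"])
container = service.get_container_client("shared-content")

# Read a file straight from blob storage instead of the app's local file system.
data = container.get_blob_client("config/site-settings.json").download_blob().readall()

# Write a copy back; when the app runs in the same region as the storage account,
# the bytes never leave Azure's network.
container.get_blob_client("config/site-settings-copy.json").upload_blob(data, overwrite=True)
```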
First of all, I'm not really sure if this question belongs here on Stack Overflow or if I should ask it somewhere else. If that's the case, please point me in the right direction :)
So, for context, this is an app that I was asked to develop for my job. At first I thought about building a web app and hosting it on the company's servers and domain (intranet), but that isn't possible due to external issues I can't control.
Is there another way to achieve this? The app must have a database and should be accessible for a bunch of users at the same time.
Of course we want to spend the least amount of money possible to make this happen. Also, using a workstation of our own to host everything is not possible either.
Edit: I haven't finished developing it yet, but for now I'm building it with Python Flask.
The number of users is really small, just up to five people.
OK - I guess a lot of what you'll get in response to this is that your description is too vague. Things such as scale, number of users, programming languages used to create the web app, etc. are important when talking about hosting.
However, for me, there are three very good options out there for free hosting, up to a certain amount of traffic.
1.) Heroku - Heroku.com
A world known web hosting platform. You can publish code through GitHub, and it has some extensive coverage for different types of web apps. Definitely worth a look.
2.) Netlify - netlify.com
Similar to Heroku, but used by some major companies. Allows you to host for free to a point, and is relatively simple to get started with.
3.) Vercel - vercel.com
A bit more technical in my opinion - but again, very similar to the above two and has a free tier.
All three are great options, and I'd recommend looking into them in more detail to see what option is best for you. Can't go wrong with any of them.
I had a similar problem: A Python-Flask-SQLite app for me and my office pals to use together.
The solution was creating one .exe file with PyInstaller and hosting it and the database files on a network drive (one that everyone who will use the app has access to). As everybody (~10 people) sees the same db, things work fine!
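In case it helps picture it, here's a minimal sketch of that setup: a small Flask app whose SQLite database lives at a UNC path on the shared drive. The path and table below are made up, and it's worth noting that SQLite's own documentation cautions against network file systems because file locking can be unreliable there, so this really only suits a handful of users with light write traffic.

```python
import sqlite3
from flask import Flask, jsonify

# Hypothetical shared location that every user's machine can reach.
DB_PATH = r"\\fileserver\team-share\app\app.db"

app = Flask(__name__)

def get_db():
    # A timeout gives other users' short write locks a few seconds to clear.
    return sqlite3.connect(DB_PATH, timeout=10)

@app.route("/items")
def list_items():
    with get_db() as db:  # commits/rolls back the transaction on exit
        rows = db.execute("SELECT id, name FROM items").fetchall()
    return jsonify([{"id": r[0], "name": r[1]} for r in rows])

if __name__ == "__main__":
    app.run()
```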
The Issue
I am currently building a PWA that is hosted on Azure and utilises Azure CDN Premium.
Within this PWA, we have the following files:
/service-worker.js
/js/translations/en-us.json
/js/translations/en-hk.json
etc...
When a release is deployed to the storage blob, we trigger a CDN 'purge' that is meant to tell the edge nodes to re-retrieve the assets from the origin storage account.
However, for some reason, the CDN is still returning old versions of these files, despite the storage account having the latest versions (I have left it over 10 hours, so it is not a propagation issue).
Why is this happening? The whole point of a 'purge' is to empty the cache...
I appreciate that there may also be downstream caches beyond the nodes but I never have these problems with AWS and therefore I can only come to the conclusion that it is because Azure is either doing something badly, or I am misunderstanding how it is meant to work.
Possible Solutions
I have come up with some possible solutions; however, because I am fairly new to Azure, I want to get others' opinions on what the best solution is...
Use Query Strings and Set the relevant Cache mode
I am aware that I could just use query strings on these files (apart from service-worker.js); however, I do not feel confident this is the best solution (a sketch of setting cache headers on the blobs follows after this section).
Custom Rules Engine
Alternatively, I can define custom rules to instruct the CDN to skip the cache for certain files. That kind of defeats the purpose of a CDN, though, which goes back to the question: why is Azure not purging these assets properly?
If this is the best solution, could someone please advise me on what rules I should define?
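(For context on the cache-mode idea: one lever that works regardless of purging is to set Cache-Control on the blobs themselves at deploy time, since the CDN and browsers honour it. A minimal sketch with the azure-storage-blob (v12) Python SDK; the container name, connection-string setting, and max-age values are placeholders, not a recommendation.)

```python
import os
from azure.storage.blob import BlobServiceClient, ContentSettings

service = BlobServiceClient.from_connection_string(os.environ["STORAGE_CONNECTION_STRING"])
container = service.get_container_client("$web")  # hypothetical static-site container

# Note: set_http_headers replaces all content settings on the blob,
# so re-state content_type alongside cache_control.

# The service worker should effectively never be cached.
container.get_blob_client("service-worker.js").set_http_headers(
    content_settings=ContentSettings(
        content_type="application/javascript",
        cache_control="no-cache, max-age=0",
    )
)

# Translations can be cached briefly and then revalidated.
container.get_blob_client("js/translations/en-us.json").set_http_headers(
    content_settings=ContentSettings(
        content_type="application/json",
        cache_control="public, max-age=300",
    )
)
```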
Our company has an on-prem file server that I'd like to move to the cloud. I followed these directions and was successfully able to map a drive on my local work computer to connect to an Azure File Share. Our company has about 20 locations, ~5 TB of data (mostly "office" type of files) in total, and about 500 users accessing them.
There are two issues I would like to improve but I'm not sure how:
There's somewhat of a lag when opening files. Other than increasing our office's internet speed, is there anything to be done to make it faster? Would some kind of site-to-site VPN help? Would adding some type of server or VM in the "middle" (maybe one per location?) that would perhaps somehow cache the files reduce the lag?
Also, we have and use an Office 365 subscription. What's the easiest way to use our existing AD structure to transfer over the NTFS permissions that are currently in place?
I Googled around and found a bunch of companies advertising their services, notable among them was Talon Storage. But it seems like something that could be done without hiring a company. What I'm hoping for is a DIY direction to optimally solve these issues. Perhaps there's a standard or commonly recommended solution for such issues. Any guidance would be greatly appreciated.
L-A-T-E-N-C-Y. The number one enemy for any cloud-based file server attempt. It ranges from annoying to downright unusable, depending on how far you are from the Azure datacenter of choice.
Imagine a poor soul trying to "stream" a large 20-meg Excel file with 20 references to external files. What used to take maybe 8 seconds on-prem will now take 40 in the cloud (on a good day). It's game over for productivity. Your marketing department that sometimes used to cut video in iMovie over the network? Those days are over.
I understand this is not the answer you were after, but it's the crude reality.
Do not panic, there are solutions. Here's a good one - https://azure.microsoft.com/en-us/services/storsimple/
I'm sure you wanted to get rid of boxes not buy more, but it is what it is.
I am very new to Microsoft Azure. I would like to transfer 5 GB of files (datasets) from my Microsoft OneDrive account to Azure storage (blob storage, I guess), and then share those files with about 10 other Azure accounts (I have some idea as to how to share these files). I am not really sure how to go about it, and I would prefer not to download the 5 GB of files from OneDrive and then upload them to Azure. Help would be greatly appreciated, thanks a lot.
David's comment is correct, but I still want to provide a couple of links to get you started. Like he mentioned, if you can break this into several questions that are more specific, you can probably get a much better Stack Overflow response. I think the first part of the question could be phrased as 'How can I quickly transfer 5 GB of files to Azure Storage?'. This is still opinion-based to some degree but has a couple of more finite answers:
AzCopy/DmLib are, respectively, a command-line tool and an Azure library that specialize in bulk transfer. There are a couple of options, including async copy and sync copy. These tools are geared more toward upload/download from the file system, but they will get you started.
There's a variety of language-specific storage libraries where you can write custom code to connect up with OneDrive. Here is a getting-started guide for .NET.
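To give a flavour of the library route in Python (rather than .NET), here is a minimal sketch of a bulk upload with azure-storage-blob (v12). It assumes the files are already available locally (e.g. via the OneDrive sync client); the connection-string setting, container name, and folder are placeholders.

```python
import os
from pathlib import Path
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string(os.environ["STORAGE_CONNECTION_STRING"])
container = service.get_container_client("datasets")  # hypothetical container

local_root = Path("./onedrive-export")  # hypothetical local copy of the OneDrive folder
for path in local_root.rglob("*"):
    if path.is_file():
        blob_name = path.relative_to(local_root).as_posix()
        with open(path, "rb") as data:
            # Streams the file up; overwrite=True replaces any existing blob of the same name.
            container.upload_blob(name=blob_name, data=data, overwrite=True)
        print(f"uploaded {blob_name}")
```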
I think this is a very genuine question, as downloading huge files and uploading them back is a very expensive and time-consuming task. You can refer to a template here which would allow you to do a server-side copy.
Hopefully this will benefit you or, if not you, someone else.
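For illustration, a server-side copy boils down to handing the storage service a source URL it can fetch directly (for OneDrive, that means a pre-authenticated or publicly readable download link) so the bytes never pass through your machine. A minimal sketch with the azure-storage-blob (v12) Python SDK; the URL and names are placeholders.

```python
import os
import time
from azure.storage.blob import BlobServiceClient

# Hypothetical pre-authenticated link the storage service can reach on its own.
source_url = "https://example.com/shared/dataset1.zip"

service = BlobServiceClient.from_connection_string(os.environ["STORAGE_CONNECTION_STRING"])
dest = service.get_blob_client(container="datasets", blob="dataset1.zip")

# Kick off the copy; it runs entirely inside the storage service.
dest.start_copy_from_url(source_url)

# Poll the destination blob until the copy finishes.
props = dest.get_blob_properties()
while props.copy.status == "pending":
    time.sleep(5)
    props = dest.get_blob_properties()
print("copy status:", props.copy.status)
```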
For the better part of 10+ years we have relied on various network mapped drives to allow file sharing: one drive letter for sharing files between teams, a separate file share for the entire organization, a third for personal use, etc. I would like to move away from this and am trying to decide if an ECM/SharePoint-type solution, or a home-grown app, is worth the cost and the way to go? Or if we should simply keep relying on login scripts/mapped drives for file sharing due to their relative simplicity? Does anyone have any experience within their own organization or thoughts on this?
Thanks.
SharePoint is very good at document sharing.
Documents generally follow a process for approval, have permissions, live in clusters... and these things lend themselves well to SharePoint's document libraries.
However, there are some things that don't lend themselves well to living inside SharePoint... do you have a virtual hard drive (.vhd) file that you want to share with a workmate? Not such a good idea to try and put a 20 GB file into SharePoint.
SharePoint can handle large files, and so can SQL Server behind it... but do you want your SQL Server bandwidth being saturated by such large files? Do you want your backup of SQL Server to hold copies of such large files multiple times?
I believe that there are a few Microsoft partners who offer the ability to disassociate file blobs from the SharePoint database, so that SharePoint can hold the metadata and a file system holds the actual files, and SharePoint simply becomes the gateway to manage access, permissions, and offer a centralised interface to files throughout an organisation. This would offer you the best of both worlds.
Right now though, I consider SharePoint ideal for documents, and I keep large files (that are not document centric) on Windows file shares.
Definitely, use a tool.
The main benefit here is version control: being able to jump easily to a previous version, diffing, and seeing who modified what (see most VCSs' blame/annotate tools - they print out a text file showing when/who modified each line in the file).
Second, you can probably benefit from issue tracking/task tracking.
Other benefits include web access from the internet, having a wiki (which can be great in some situations), etc.
I use Subversion + Redmine at work, and I find it highly useful - test a few solutions and you will surely find further advantages for you.
One thing that can be overlooked in the change to a document management tool is the planning required around how much is going to be stored and information architecture issues like where different content is going to end up.
SharePoint in particular is easy to set up without a good plan going forward, and it is particularly vulnerable to difficulties later on when things get too busy.
I would not recommend a home-grown app for something like this. The problem has been solved by off-the-shelf tools, and growing one from scratch is going to cost a huge amount and not get you anywhere near the features for the money.
Did I mention how important planning your security groups and document areas (IA) was?
If you need just document storage, then SharePoint can do very well. WSS is even free, and it provides very good document storage capabilities.
But you have to plan carefully, as updating existing applications is painful. If you decide to go with SharePoint, then I can give you a few pieces of advice off the top of my head:
Pay attention to security configuration (user groups, privileges, ...)
Plan your document libraries well, as it is not easy to just move documents between them
Also consider limiting the number of versions that one document can have, because SharePoint stores full copies of each version, not just the changes
Don't use InfoPath :) we have had a very bad experience with it (just don't tell this to the managers)
If you don't really need to change the graphical look of SharePoint, then don't bother with it, as it brings many problems (I'm talking about custom master pages and custom site templates)
Try to use as much OOB stuff as possible, because developing your own web parts not only costs more, but can be quite complicated.
Make sure to turn on search indexing. This is quite tricky, because it is turned off by default, and then you will be as surprised as I was that search is not working :)
If you just deploy it and load 10,000 documents into it, then you will surely have problems with it later. If you give a little thought to structure, then you will end up with really good document storage.
Migrating is very probably worth the cost in the long term. You will gain reliability, versioning, traceability, and extensibility.
Be sure to first identify the groups/rights, and to identify which links need to be fixed (maybe you have applications that use links to the shares).
An open source alternative to SharePoint is Alfresco, it is very good for CIFS (Windows shares) too.