Azure: Start large app quickly on demand?

I don't really understand how cloud services work.
In particular, I would like to know whether it's possible to:
Upload a large application (>1 GB) once, pay only a small amount for storage, and then spawn instances of it on demand, with a startup time of at most a few minutes.
In other words: only pay while the application is actually running, and avoid having to upload everything again on every stop/start.
Thanks!
Greets,
soacc32

It is certainly possible to do so; in fact, this is how cloud service deployment works. When you deploy your application from your local computer, the application package files are first uploaded to blob storage and then deployed from there. You could upload the package to blob storage separately and, when you want to create the deployment, just specify the blob URL. So if you're creating a deployment through the portal, for the package file and configuration file location you would choose "FROM STORAGE" instead of "FROM LOCAL".
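If you would rather script that one-time upload than use the portal, a rough sketch with the Python storage SDK looks like the following; the storage account, container, and package names are placeholders, and the resulting blob URL is what you point the "FROM STORAGE" option at.

    # Rough sketch: upload the packaged application to blob storage once,
    # then reuse its URL for every deployment. All names are placeholders.
    from azure.storage.blob import BlobServiceClient

    service = BlobServiceClient.from_connection_string("<storage-connection-string>")
    container = service.get_container_client("deployments")  # assumed to exist already

    with open("MyApp.cspkg", "rb") as package:
        container.upload_blob(name="MyApp.cspkg", data=package, overwrite=True)

    # This URL is what the "FROM STORAGE" deployment option points at.
    print(container.get_blob_client("MyApp.cspkg").url)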
Hope this helps.

Related

Azure Back Ground Services For File Processing

We currently have a Windows service to process inbound and outbound files.
For inbound files, we read the data, perform some calculations, and store the results in a database.
For outbound files, we generate data from the database.
We want to migrate to Azure now. I have the following questions:
1) What is the best way to store files in Azure (Blob Storage or an Azure file share)? We only have .pdf, .txt, and .xlsx formats, no videos.
2) Which option is better for processing the files: WebJobs, a virtual machine with a Windows service installed, Azure Batch, Azure Kubernetes Service, or Service Fabric?
Can someone please help me with this?
Thanks
How are you receiving the files: API, FTP, or some other way? There are a ton of details needed to really answer this, but here are my thoughts.
Blob storage would be more cost effective. You only need to use a file share if you want to be able to map a network drive from a VM.
If processing one file completes in less than 10 minutes, I would look at Azure Functions for that. If you're processing thousands of files per day, Azure Functions would be expensive, so I would look at running the work on an App Service, on VMs, or moving to Service Fabric.
If you have a web site that's used to upload the files and you're already using Azure App Service, then you could use WebJobs.
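If you do go the Azure Functions route, the usual shape is a blob-triggered function. A minimal sketch in Python follows; the "inbound" container name is an assumption, and the trigger binding itself lives in function.json.

    import logging
    import azure.functions as func

    def main(inblob: func.InputStream):
        """Runs whenever a new file lands in the assumed 'inbound' container;
        the blob trigger binding is configured in function.json."""
        logging.info("Processing %s (%s bytes)", inblob.name, inblob.length)
        data = inblob.read()
        # Parse the .pdf/.txt/.xlsx content here, run the calculations,
        # and write the results to the database.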

Any ideas about how to check Azure's blob storage for viruses?

Our application stores files uploaded from our customers to blob storage. These files are exchanged between different parties (our customers and their suppliers). Is there a way to check the uploaded files for viruses? The Antimalware service seems to just check virtual machines, but I cannot get any information about using it to check files as a service.
A great solution would be if we could store such a file in Azure Storage as an "on hold" file until it is checked. Then we would need a service that checks the file and returns the result. If the file is virus-free, we could then move it to its final destination.
Azure Storage is just... storage. There are no utilities built in, such as antivirus. You'd need to do your antivirus check on your own. Since antivirus tools typically only work with local OS storage, you'd need to place your "on hold" content (as you referred to it) on a local disk somewhere that you have antivirus installed and then copy to blob storage once your antivirus check is done.
How you accomplish managing this, and which software you use, is up to you. But VMs, App Services, and Cloud Services (web/worker roles) all have local disks available.
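As a rough sketch of that "on hold" flow, assuming ClamAV's clamscan is installed on the machine doing the check and using container names of my own choosing ("onhold" and "clean"):

    import os
    import subprocess
    import tempfile
    from azure.storage.blob import BlobServiceClient

    service = BlobServiceClient.from_connection_string("<storage-connection-string>")

    def scan_and_release(blob_name: str) -> bool:
        """Download an 'on hold' blob to local disk, scan it, and promote it to
        the 'clean' container only if the scanner finds nothing."""
        onhold = service.get_blob_client("onhold", blob_name)
        local_path = os.path.join(tempfile.mkdtemp(), os.path.basename(blob_name))
        with open(local_path, "wb") as f:
            f.write(onhold.download_blob().readall())

        # clamscan exits with 0 when the file is clean, 1 when a virus is found
        if subprocess.run(["clamscan", "--no-summary", local_path]).returncode != 0:
            return False  # leave the blob quarantined (or delete it)

        clean = service.get_blob_client("clean", blob_name)
        with open(local_path, "rb") as f:
            clean.upload_blob(f, overwrite=True)
        onhold.delete_blob()
        return True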
As the other answer states, Azure Storage is just storage. There are a couple of ways you could do this, though.
The first would be to run your own antivirus, either as a gateway or by programmatically downloading the file from blob storage, checking it, and then taking the appropriate action. It's possible to run something like ClamAV to do this yourself.
Alternatively, you could use a third-party service like AttachmentScanner (which is exactly what you mention in your comment), which will accept a URL or a direct file upload. With Azure you can generate a temporary URL pointing to the file with an expiration of a few minutes, pass that URL to AttachmentScanner, and then take the appropriate action depending on the result.
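That temporary URL is a shared access signature (SAS). A sketch of generating one with the Python storage SDK, where the account name, key, container, and blob names are placeholders:

    from datetime import datetime, timedelta
    from azure.storage.blob import BlobSasPermissions, generate_blob_sas

    account_name = "mystorageaccount"          # placeholder
    account_key = "<account-key>"              # placeholder

    sas = generate_blob_sas(
        account_name=account_name,
        container_name="uploads",
        blob_name="invoice.pdf",
        account_key=account_key,
        permission=BlobSasPermissions(read=True),
        expiry=datetime.utcnow() + timedelta(minutes=5),  # link expires after 5 minutes
    )

    url = f"https://{account_name}.blob.core.windows.net/uploads/invoice.pdf?{sas}"
    # Hand 'url' to the scanning service, then act on its verdict.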
I read an article about virus scanning for blob storage. Might be useful for you.
The author uses an Azure Function with a blob trigger to catch changes and sends the blob to a virus scanner, which runs in a Docker container.
Full implementation details are available in the link below:
https://peterrombouts.nl/2019/04/15/scanning-blob-storage-for-viruses-with-azure-functions-and-docker/
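Roughly, the function in that setup has this shape (sketched here in Python with a made-up scanner endpoint; the linked article has the full implementation):

    import logging
    import requests
    import azure.functions as func

    SCANNER_URL = "http://scanner:8080/scan"   # hypothetical endpoint of the scanner container

    def main(inblob: func.InputStream):
        """Blob-triggered function: forward each newly uploaded blob to the scanner."""
        response = requests.post(SCANNER_URL, files={"file": (inblob.name, inblob.read())})
        logging.info("Scan result for %s: %s", inblob.name, response.text)
        # Quarantine or delete the blob here if the scanner flags it as infected.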
You can use Azure Defender for Storage to detect the following:
Suspicious access patterns - such as successful access from a Tor exit node or from an IP considered suspicious by Microsoft Threat Intelligence
Suspicious activities - such as anomalous data extraction or unusual change of access permissions
Upload of malicious content - such as potential malware files (based on hash reputation analysis) or hosting of phishing content
To enable it, go to the storage account's Advanced security settings in the portal.
I set up an "azinbox" folder on an Azure file storage container. I set up a console application (job) on a VM to check that folder for a file every 30 seconds. If the job finds one, it moves the file from azinbox to a vminbox folder on the VM. As soon as the file shows up on the VM, if it has a virus, it gets quarantined and the file is deleted from the vminbox. The job on the VM then checks 30 seconds later to see if the file is still in the vminbox. If it is, it must be OK, and the job moves the validated file to an azoutbox folder on the Azure file storage container. From the web site's perspective: 1) upload the file to azinbox, 2) wait a minute and check the azoutbox. If the file is found there, the website moves it from the azoutbox to its final destination.
I admit it is a crappy solution because it takes a LONG time to complete a file upload. A minute or two can seem like a long time to a user uploading a simple PDF, especially if they have more than one to upload.
Also, this requires you to set up an entire VM server JUST to validate a file for a virus.
If anyone has a better option, please let me know.

Pushing Bluemix app wipes files in the public folder

I have a Bluemix app (with a Node.js backend) to which I upload some files (into the folder public/uploads).
Whenever I change my server code and cf push the app, the files in the uploads folder are wiped. How can I publish my app without touching files and folders that I would like left alone?
Thanks in advance.
This is happening because of the way Cloud Foundry works; Bluemix runs on Cloud Foundry. The cause is that the file system is ephemeral and should not be used to store uploaded files.
When an app restarts, crashes, scales, or you upload a new version, the file system is wiped.
Additionally, if you scale your app to, for example, 5 instances, each instance would have different uploads.
I would highly encourage you to check out the 12 Factor App. One of its tenets is not storing files on disk.
I would encourage you to use a shared store such as OpenStack Swift. It is available in Bluemix and is called Object Storage.
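For illustration only, writing an upload into Object Storage instead of the local file system could look roughly like this (shown in Python with python-swiftclient for brevity; the auth URL, credentials, and container name are placeholders you would take from the service credentials, and a Node.js client follows the same pattern):

    from swiftclient.client import Connection

    # Placeholders: take these values from the Object Storage service credentials.
    conn = Connection(
        authurl="<auth-url>/v3",
        user="<user-id>",
        key="<password>",
        auth_version="3",
        os_options={"project_id": "<project-id>", "region_name": "<region>"},
    )

    # Instead of writing the upload under public/uploads, put it in a container.
    with open("upload.pdf", "rb") as f:
        conn.put_object("uploads", "upload.pdf", contents=f, content_type="application/pdf")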
Restaging will wipe your files on Cloud Foundry, as there is no permanent local storage available. Try using Bluemix Live Sync to edit your code on the fly, with no restaging required. You can either download the Bluemix Live CLI or use IBM Bluemix DevOps Services to take advantage of Live Edit. The documentation goes over all the options.
For more permanent storage solutions, check out the Bluemix catalog for services like Cloudant and Redis.
The file system for Cloud Foundry applications is ephemeral. Every time you push, restart, or scale, you get a fresh file system. Your application should not store files on disk (except cache/temp); it should store uploaded files in some kind of database or blob store. Look into the Cloudant service.
Considerations for Designing and Running an Application in the Cloud

Backup Azure Virtual Machine local folders to blob storage?

I've just setup an extra small VM instance in Windows Azure to run a help console for our company. The help files can be updated and published through a simple .NET interface. Obviously the flat html files are getting deployed to the local drive on the VM and exposed publicly through IIS. I'm just wondering how stable this is? If the VM suffers a hardware failure, presumably there's no automatic failover and any edits we've made to the help system will be lost?
Can anyone recommend a way I can shuttle the source files out of the VM into blob storage? I could write an application to do this, I'm just wondering if there is an out-of-the-box solution out there.
Additional information:
The VM instance is running Server 2008 R2 SP1 (As a Virtual Machine not a web-role)
A backup needs to be created once every 24 hours
Aged backups (3+ days old) need to be automatically cleared from the blob container
The help system we use is called HelpConsole 2012
New pages are added at a rate of maybe 2-3 per week
The answer depends on whether you are running this in a Windows Azure Virtual Machine or on a Windows Azure Web role.
If you are running this on a Windows Azure Virtual Machine, then the VHD is stored in BLOB storage and, if the site is running off the C: drive and not on a data disk, then the system has some host caching turned on for both reads and writes. In this scenario it is possible (depending on the methods you use to write your files out) that the data is not pushed back to the VHD in BLOB storage before a failure occurs. You can either ensure that your write methods do a write-through operation, or turn off the write caching. Better yet, attach a data disk for your web site files. By default data disks have both read and write caching off (you could turn on read caching). Since the VHDs are persisted, you don't have to worry about edits getting lost. You can script out taking a snapshot of the files and moving them to BLOB storage separately, or even push them somewhere else. Another thing to think about with this option is that you have to care for the VM instances and keep them patched and up to date.
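To make that scripted-snapshot idea concrete for the stated requirements (a daily backup, clearing anything 3+ days old), here is a rough Python sketch with placeholder paths and container names; you would run it from a daily scheduled task:

    import os
    from datetime import datetime, timedelta, timezone
    from azure.storage.blob import BlobServiceClient

    SOURCE_DIR = r"C:\inetpub\helpconsole"     # placeholder: local folder with the help files
    service = BlobServiceClient.from_connection_string("<storage-connection-string>")
    backups = service.get_container_client("help-backups")

    # 1) Upload today's snapshot under a date prefix, e.g. "2012-09-14/...".
    prefix = datetime.now(timezone.utc).strftime("%Y-%m-%d")
    for root, _, files in os.walk(SOURCE_DIR):
        for name in files:
            path = os.path.join(root, name)
            blob_name = f"{prefix}/{os.path.relpath(path, SOURCE_DIR)}".replace("\\", "/")
            with open(path, "rb") as data:
                backups.upload_blob(name=blob_name, data=data, overwrite=True)

    # 2) Clear aged backups (3+ days old).
    cutoff = datetime.now(timezone.utc) - timedelta(days=3)
    for blob in backups.list_blobs():
        if blob.last_modified < cutoff:
            backups.delete_blob(blob.name)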
If you are running a Web Role, then yes, if a failure occurs and the VM goes through self-healing, it will indeed redeploy with the older files. In this case I'd recommend changing the web role code so that when it writes updates to the local file, it also puts a copy of the file into BLOB storage. In addition, in the web role's OnStart you could reach out to BLOB storage and pull down all the new content locally. BE VERY CAREFUL with this approach though, because it only really works well for ONE instance, not multiple. If you plan on running multiple instances of the server (and you will have to if you want the SLA for uptime), then your code will need to be a little more robust: do the writes out to BLOB storage and then alert all instances of the role that there is a new file to pull down locally.
Another option for web roles is to write a handler for the content so that incoming requests are mapped directly to a file in BLOB storage. Updates are then made directly to the file in BLOB storage. This offloads the serving of the flat files from your compute nodes to BLOB storage, and you could even implement some caching and stream the content back through the handler rather than having clients hit BLOB storage directly if you wanted to.
Now, another option is to use Windows Azure Web Sites for this. The underlying storage of the web site files in Windows Azure Web Sites is a shared location, so updating the files there is immediately reflected on all instances. Also, the content for the site is stored in BLOB storage and can be updated via FTP, source control, or directly from code. Lots of options here. You may end up moving to reserved instances to help stay away from some of the quotas that Web Sites have. Web Sites may not be an option for you currently depending on other requirements (for example, how much control you need over the environment, since you don't get a lot of control with Web Sites).

Is it possible to mount blob storage to my local machine for deployment?

I have a build script that it would be very useful to configure to dump some files into Azure blob storage so they can be picked up by my Azure web role.
My preferred plan was to find some way of mounting the blob storage on my build server as a mapped drive and simply using Robocopy to copy the files over. This would involve the least amount of friction, as I am already deploying some files like this to other web servers using WebDrive.
I found a piece of software that will allow me to do that: http://www.gladinet.com/
However on further investigation I found that it needs port 80 to run without some hairy looking hacking about on the server.
So is there another piece of software I could use or perhaps another way I haven't considered, such as deploying the files to a local folder that is automagically synced with blob storage?
Update in response to @David Makogon:
I am using http://waacceleratorumbraco.codeplex.com/ which performs two-way synchronisation between the blob storage and the web roles. I have tested this with http://cloudberrylab.com/ and I can deploy files manually to the blob and they are deployed correctly to the web roles. I have also done the reverse and updated files in the web roles, which were then synced back to the blob, and I have subsequently edited/downloaded them from blob storage.
What I'm really looking for is a way to automate the cloudberry side of things. So I don't have a manual step to copy a few files over. I will investigate the Powershell solutions in the meantime.
I know this is an old post - but in case someone else comes here... the answer is now "yes". I've been working on a CodePlex project to do exactly that. (All source code is available).
http://azuredrive.codeplex.com/
If you're comfortable using PowerShell in your build process, then you could use the Cerebrata cmdlets to upload the files. If that doesn't work for you, you could write a custom activity (but this sounds quite a bit more involved).
Mounting a cloud drive from a non-Windows Azure compute instance (e.g. your local build machine) is not supported.
Having said that: Even if you could mount a Cloud Drive from your build machine, your compute instances would need access to it too, and there can only be one writer. If your compute instances only needed read-only access, they'd need to create a snapshot after you upload new files.
This really doesn't sound like a good idea though. As knightpfhor suggested, the Cerebrata cmdlets provide this capability (look at Import-File). This allows you to push individual files into their own blobs. You can optimize further by pushing a single ZIP file into a blob. You can then use a technique similar to the one described by Nate Totten in his multi-tenant web role sample, to detect new zip files and expand them to your local storage. Nate's blog post is here.
Oh, and if you don't want to use the Cerebrata cmdlets, you can upload blobs directly with the Windows Azure Storage REST API (though the cmdlets are very simple to use and integrate seamlessly with PowerShell).
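For the pull-and-expand side of the single-ZIP approach, a rough sketch (Python, with assumed container, blob, and path names) that polls for a changed zip, downloads it, and expands it into local storage:

    import io
    import os
    import zipfile
    from azure.storage.blob import BlobServiceClient

    LOCAL_ROOT = r"C:\sitecontent"                          # placeholder target folder
    STAMP_FILE = os.path.join(LOCAL_ROOT, ".last-modified")

    service = BlobServiceClient.from_connection_string("<storage-connection-string>")
    content = service.get_blob_client("deployments", "site-content.zip")  # assumed names

    stamp = content.get_blob_properties().last_modified.isoformat()
    previous = open(STAMP_FILE).read() if os.path.exists(STAMP_FILE) else ""

    # Only re-expand when the zip in blob storage has actually changed.
    if stamp != previous:
        data = content.download_blob().readall()
        zipfile.ZipFile(io.BytesIO(data)).extractall(LOCAL_ROOT)
        with open(STAMP_FILE, "w") as f:
            f.write(stamp)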
