According to MSDN, an Azure service can contain any number of worker roles. To my knowledge, a worker role can be recycled at any time by the Windows Azure Fabric. If that is true, then:
Worker role should be stateless, OR
Worker role should persist its state to Windows Azure storage services.
But I want to make a service that contains client data, and I do not want to use the Azure storage service. How can I accomplish this?
The Velocity component of AppFabric (as it was code-named) is a distributed cache and can be used in these situations.
Azure's web and worker roles are stateless, meaning all local data is volatile; if you want to maintain state, you need some external resource to hold that state, plus logic in your app to handle it. For simplicity you can use Azure Drive, but internally that is still blob storage.
You can write to local storage on the worker role by using the standard file IO APIs - but this will be erased upon instance shutdown.
You could also use SQL Azure, or post your data off to another storage service by HTTP (e.g. Amazon S3, or your own server).
However, this is likely to have performance implications. Depending on how much data you'll be storing, how frequently, and how big it is, you might be better off with Azure Storage!
Why don't you want to use Azure Storage?
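For what it's worth, here is a minimal sketch of the "post your data off over HTTP" option mentioned above, in Python. The endpoint URL and payload shape are made up for illustration; you'd swap in whatever your own server expects:

```python
# Hypothetical sketch: ship state off the role to your own HTTP endpoint
# so it survives a recycle. URL and payload shape are placeholders.
import json
import urllib.request

STATE_ENDPOINT = "https://example.com/api/state"  # your own server (placeholder)

def save_state(client_id: str, state: dict) -> None:
    """POST the state as JSON to an external server."""
    body = json.dumps({"clientId": client_id, "state": state}).encode("utf-8")
    req = urllib.request.Request(
        STATE_ENDPOINT,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        if resp.status >= 300:
            raise RuntimeError(f"State save failed: HTTP {resp.status}")
```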
If the data could be stored in Azure, you have a good number of choices: the Azure distributed cache, SQL Azure, blob, table, queue, or Azure Drive. It sounds like you need persistence but can't use any of these Azure storage mechanisms. If data security is the problem, could you encrypt or hash the data? Understanding why would be useful.
One alternative might be not to persist at all, by chaining/nesting synchronous web service calls together, thus achieving reliable messaging.
Another might be to use Azure Connect to domain-join your Azure compute resources to your local data centre (if you have one), and use your on-premises storage.
I want to see when my resources are idling (e.g. certain resources might only be used during business hours and not used by any background process outside those hours). I'd like to do that preferably through an API call.
It all depends on the type of resource and what you want to do. You could use the Azure Monitor API, or the Azure Data Explorer API with Kusto, to query specific metrics for your different services. Depending on the type of data, this may require you to have additional diagnostics enabled.
Here are some examples based on types of services.
Azure App Service - You could query for CPU, Memory, HTTP Requests, etc. This would give you an idea of activity. These same metrics tie into the auto-scaling.
Azure VMs - CPU, Memory, Disk IO, etc. You could determine your baseline, and then you would know whether it is idle or not.
Azure Storage - Transactions, Ingress, Egress, Requests, etc. You could use that to determine if there is activity in your storage account.
As you can see, it all depends on what you want to define as idling. If the goal is to reduce costs, then that will be difficult with many of these services. You could scale your App Services up and down with scripts, or scale in/out based on metrics. The same can be done with your Azure VMs, or by stopping and starting them. Storage cannot be adjusted in the same way, but you are only charged for stored data and egress, so that cost is dictated by activity.
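If you want to go the Azure Monitor route programmatically, here is a rough sketch using the azure-monitor-query Python SDK. The resource ID, the 7-day window, and the 5% "idle" threshold are placeholders, and the method names are from memory, so double-check against the current SDK docs:

```python
# Rough sketch: pull hourly average CPU for a VM over the last week
# and flag the hours that look idle. Resource ID and threshold are assumptions.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient, MetricAggregationType

resource_id = (
    "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/"
    "Microsoft.Compute/virtualMachines/<vm-name>"  # placeholder
)

client = MetricsQueryClient(DefaultAzureCredential())
response = client.query_resource(
    resource_id,
    metric_names=["Percentage CPU"],
    timespan=timedelta(days=7),
    granularity=timedelta(hours=1),
    aggregations=[MetricAggregationType.AVERAGE],
)

IDLE_THRESHOLD = 5.0  # percent CPU; pick whatever "idle" means for you
for metric in response.metrics:
    for series in metric.timeseries:
        for point in series.data:
            if point.average is not None and point.average < IDLE_THRESHOLD:
                print(f"{point.timestamp}: looks idle ({point.average:.1f}% CPU)")
```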
Hope this helps.
No, this is not possible. How do you define "idling"? How would Azure know whether your service does anything or not? Besides, most PaaS resources cannot be stopped, so what's the use of that?
You can use Azure Advisor to get cost optimization advice, or Azure Monitor directly to gather performance data and then analyze it, but it's not going to be trivial.
How does an Azure Function differ from an ASP.NET Core Worker Service?
Do both of these cover the same use cases?
How do I decide which one to use?
Update:
Azure Functions are designed for Azure. So when you use Azure, or other Azure services that are integrated with Azure Functions, you should consider Azure Functions, which can simplify your code and logic through their built-in features. But there will also be charges for Azure Functions. And yes, if you don't use Azure, just use ASP.NET Core worker services.
Azure Functions are quite different from an ASP.NET Core Worker Service.
The benefit of Azure Functions is that they support a lot of triggers, such as the Azure Blob trigger and the Azure Event Hubs trigger, and they are integrated with other Azure services.
With these built-in features, it's easy to create a function for a specific task. For example, if you upload an image to Azure Blob Storage and then want to resize it, you can easily create a blob-triggered Azure Function with very little code.
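As a rough illustration of how little code that takes, here is a sketch using the Python programming model. The container names, thumbnail size, and use of Pillow are my own assumptions rather than anything prescribed:

```python
# Sketch of a blob-triggered function that writes a resized copy of each image.
# Container names ("images", "thumbnails") and Pillow are assumptions.
import io

import azure.functions as func
from PIL import Image

app = func.FunctionApp()

@app.blob_trigger(arg_name="blob", path="images/{name}", connection="AzureWebJobsStorage")
@app.blob_output(arg_name="thumb", path="thumbnails/{name}", connection="AzureWebJobsStorage")
def resize_image(blob: func.InputStream, thumb: func.Out[bytes]) -> None:
    img = Image.open(io.BytesIO(blob.read()))
    img.thumbnail((256, 256))                   # shrink in place, keep aspect ratio
    buf = io.BytesIO()
    img.save(buf, format=img.format or "PNG")   # keep the original format when known
    thumb.set(buf.getvalue())
```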
Worker services are a good fit for any kind of background processing. And if you want to work with Azure services such as Azure Blob Storage or Azure Event Hubs, you can achieve that too, but you will need to do a lot more of the work yourself.
In the end, it depends on your use case which one you should choose; pick the simpler / more efficient one.
I think the answer Fred was looking for is: why would someone use Azure Functions over an ASP.NET Core hosted service? And the answer isn't just "because you have an Azure account, so everything is easy and integrated". It depends, e.g. on traffic, cost, expertise, etc.
Actually, there are guidelines from Microsoft on when to use Azure Functions versus, say, an IHostedService running in a Linux container hosted in AKS. I'll post the link, but it basically comes down to whether your workload is mission-critical or not. If you want the best performance, lowest cost, and reliability, a hosted .NET Core service running in a Linux container would be an excellent alternative.
Here are the mission-critical guidelines from Microsoft; hope this helps!
Design recommendations
Azure Functions:
Consider Azure Functions for simple business process scenarios which don't have the same stringent business requirements as business-critical system flows.
Low-critical scenarios can also be hosted as separate containers within AKS to drive consistency, provided affinity and anti-affinity requirements are fully considered when collocating containers on nodes.
Excerpt taken from here:
https://learn.microsoft.com/en-us/azure/architecture/framework/mission-critical/mission-critical-application-platform
We are making use of Azure Functions (v2) extensively to fulfill a number of business requirements.
We have recently introduced a durable function to handle a more complex business process which includes both fanning out, as well as a chain of functions.
Our problem is related to how much the storage account is being used. I made a fresh deployment on an account we use for dev testing on Friday, and left the function idling over the weekend to monitor what happens. I also set a budget to alert me if the costs start shooting up.
Less than 48 hours later, I received an alert that I was at 80% of my budget, and saw that the storage account was single-handedly responsible for the entire bill. The most baffling part is that it's mostly egress and ingress on File storage, which I'm not using at all in the application! So it must be something internal to the Azure Functions implementation. I've dug around and found this. In that case the issue seems to have been solved by switching to an App Service plan, but that is not an option in our case; we must stick to the Consumption plan. I also double-checked and made sure that I don't have the AzureWebJobsDashboard setting.
Any ideas what we can try next?
Below are some interesting charts from the storage account. Note how File egress and ingress make up most of the activity on the entire account.
A ticket for this issue has also been opened on GitHub.
The link you provided actually points to AzureWebJobsDashboard as the culprit. AzureWebJobsDashboard is an optional storage account connection string for storing logs and displaying them in the Monitor tab in the portal. The storage account must be a general-purpose one that supports blobs, queues, and tables.
For performance and experience, it is recommended to use APPINSIGHTS_INSTRUMENTATIONKEY and App Insights for monitoring instead of AzureWebJobsDashboard.
When creating a function app in App Service, you must create or link to a general-purpose Azure Storage account that supports Blob, Queue, and Table storage. Internally, Functions uses Storage for operations such as managing triggers and logging function executions. Some storage accounts do not support queues and tables, such as blob-only storage accounts, Azure Premium Storage, and general-purpose storage accounts with ZRS replication. These accounts are filtered out from the Storage Account blade when creating a function app.
When using the Consumption hosting plan, your function code and binding configuration files are stored in Azure Files in the main storage account. When you delete the main storage account, this content is deleted and cannot be recovered.
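One quick sanity check, since app settings are surfaced to the Functions runtime as environment variables: a small throwaway snippet (nothing Azure-specific, just the setting names from above) can confirm AzureWebJobsDashboard is really gone and App Insights is configured:

```python
# Run from a Kudu/SSH console or a throwaway function inside the Function App.
# App settings appear as environment variables to the runtime.
import os

dashboard = os.environ.get("AzureWebJobsDashboard")
appinsights = os.environ.get("APPINSIGHTS_INSTRUMENTATIONKEY")

print("AzureWebJobsDashboard set:", bool(dashboard))              # ideally False
print("APPINSIGHTS_INSTRUMENTATIONKEY set:", bool(appinsights))   # ideally True
```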
If you use the legacy "General Purpose V1" storage accounts, you may see your costs drop by up to 95%. I had a similar use case where my storage account costs exploded after the accounts were upgraded to "V2". In my case, we just went back to V1 instead of changing our application.
Although V1 is now legacy, I don't see Azure dropping it any time soon. You can still create it using the Azure Portal. It could be a medium-term solution.
Some alternatives to save costs:
Try the "premium" performance tier (V2 only). It is cheaper for such workloads.
Try LRS or ZRS as the redundancy setting. Depends on the criticality of this orchestration data.
PS: Our use case was some Event Hubs processors which used the storage accounts for coordination and checkpointing.
PS2: Regardless of the storage account configuration, there must be a way to reduce the traffic towards the storage account. That is just another thing to try in order to reduce costs.
Can you guys explain: a Service Fabric application package can contain multiple services to be shipped, but then how do you reuse some of these services in another application?
Is there a way a Reliable Dictionary or Reliable Queue can be shared among services deployed on the same cluster?
I tried reading up on Google but got no clear understanding. Your help will be really appreciated.
... how do you reuse some of these services in another application?
What do you mean by reuse? Sharing the code? You could have a service in Application A talk to a service in Application B instead of duplicating the same service in Application A.
Is there a way a Reliable Dictionary or Reliable Queue can be shared among services deployed on the same cluster?
No, there is not. A Reliable Dictionary or Reliable Queue provides data locality to a service, removing the need for additional network calls. As soon as you need the same data in multiple services, you should consider other storage solutions like Cosmos DB, Blob storage, or another database.
If you are looking for some kind of distributed cache you can take a look at Azure Redis.
It is, however, entirely possible to expose the data of a Reliable Dictionary or Reliable Queue through a service. That service then acts as a data provider / repository. You can expose methods like Add() or Delete() on such a service that result in an update of the Reliable Dictionary or Reliable Queue.
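Just to illustrate the shape of such a data-provider service, here is a stand-in sketch: a plain dictionary behind an HTTP endpoint. This is not real Service Fabric code, which would be a .NET stateful service using IReliableStateManager / IReliableDictionary; it only shows the Add/Get/Delete surface other services would call instead of sharing the collection directly:

```python
# Illustration only: a plain dict stands in for the Reliable Dictionary.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

store: dict[str, str] = {}  # stand-in for the Reliable Dictionary

class StoreHandler(BaseHTTPRequestHandler):
    def do_PUT(self):                      # Add / update a value
        key = self.path.lstrip("/")
        length = int(self.headers.get("Content-Length", 0))
        store[key] = self.rfile.read(length).decode("utf-8")
        self.send_response(204)
        self.end_headers()

    def do_GET(self):                      # Read a value
        key = self.path.lstrip("/")
        if key in store:
            body = json.dumps({"key": key, "value": store[key]}).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def do_DELETE(self):                   # Delete a value
        store.pop(self.path.lstrip("/"), None)
        self.send_response(204)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), StoreHandler).serve_forever()
```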
We plan to migrate our existing website to Windows Azure, and I have been told that we need to store files in blob storage.
My question is:
If we want to use blob storage, that means I need to re-write the file storage function (we use the file system for now) to call the blob service API to store files. That seems very strange to me, just because we want to use Windows Azure. What if, in the future, we want to use Amazon EC2 or another cloud platform? They might have their own way to store files, and then maybe I would need to re-write the file storage function again. In my opinion, the implementation of a project should not depend on the cloud platform (or cloud server)! Can anybody correct me? Thanks!
I won't address the commentary about whether an app should have a dependency on a particular cloud environment (or specific ways to deal with that particular issue), as that's subjective and it's a nice debate to have somewhere else. What I will address is the actual storage in Azure, as your info is a bit out-of-date.
One reason to use blob storage directly (and possibly the reason you were told to use blob storage) is that it provides access from multiple instances of your app. Also, blob storage provides 500TB of storage per storage account, and it's triple-replicated within the deployed region (and optionally geo-replicated). With attached storage (either with local disk or blob-backed Azure Disk), the access is specific to a particular instance of your app. Shifting from file system access to blob storage access does require app modification.
If you choose not to modify your app's file I/O operations, then you can also consider the new Azure File Service, which provides SMB access to storage (backed by blob storage). Using File Service, your app would (hopefully) not need to be modified, although you might need to change your root path.
More information on Azure File Service may be found here.
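To make the required modification concrete, here is roughly what the rewritten file I/O could look like against blob storage, sketched with the current azure-storage-blob Python package (which post-dates the original question); the connection string and container name are placeholders:

```python
# Hedged sketch: file-system reads/writes replaced by blob uploads/downloads.
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<storage-connection-string>")

def save_file(path: str, data: bytes) -> None:
    blob = service.get_blob_client(container="uploads", blob=path)
    blob.upload_blob(data, overwrite=True)      # replaces open(path, "wb").write(data)

def read_file(path: str) -> bytes:
    blob = service.get_blob_client(container="uploads", blob=path)
    return blob.download_blob().readall()       # replaces open(path, "rb").read()
```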
Why does it seem strange? You need to store your files somewhere, and the cloud is as good a place as any IF it suits your needs. The obvious advantages are redundancy and geo-replication, sharing files across multiple projects and servers; the list goes on. It's difficult to advise on whether it would be a good idea or not without hearing some specifics.
You could use Windows Azure storage with Amazon in the future if you wanted to (you'd just need to set up the access for it), obviously with a slightly longer delay. Then again, that slight performance drop may be significant, and you may end up re-writing it anyway.
Most importantly, swapping over from one cloud provider to another is not trivial, depending on just how much you use it or how much data you've got in it, so I would strongly suggest looking closely at the advantages / disadvantages of each platform before putting your lot in with either one, and then fully learning that platform.
Personally, I went for Azure cloud services + storage etc. even though it was slightly more expensive at the time, because I'm a Microsoft person (not that I didn't do my research). It was annoying in the early days when key features were missing, but it has really matured now and I like the pace at which it's improving.
It's cheap to test, why not try both and see which one suits you? A small price to pay when you have big decisions to make.
Disclaimer: I don't know the current state of Amazon web services.
Nice question. We are in the middle of migrating an old PHP/MySQL/local-share ERP application to WebRole/SQL Azure/Azure Storage. We faced the same problem and decision. Let me share some thoughts about the issue:
Being able to just switch the storage provider is a nice option, but is it reasonable? You can always build the abstraction, but have you planned how to do the actual change of storage provider - a migration/sync while in production? What exactly would drive the transition to another storage provider? How many users and how much data do you have? Do you plan to shard or rebalance the storage in the future? How reliable must the system be during this storage provider switch? Do you want to move the data completely when you switch, or just shard it so that you start using the other provider for new data? Does the cost of developing these (reliable) storage layers, plus the cost of developing reliable transitions (or bi-directional syncs), outweigh the price difference between any two storage providers?
Just switching the storage mechanism from Azure Blob to Amazon will incur a heavy latency penalty if your other services are on Azure - when you create storage and services on Azure, you set affinity groups by region so that you minimize network latency.
These are only a few of the questions to answer before doing all the heavy lifting. We abstracted the file repository (blob) because we planned to move from a local NFS share to Blob storage transparently and gradually, and it answers our needs.
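For what it's worth, the abstraction we ended up with looks conceptually like the sketch below (the names are illustrative, and a blob-backed implementation would just be another class implementing the same interface using the storage SDK):

```python
# One way to keep cloud-specific code behind an interface; names are illustrative.
from abc import ABC, abstractmethod
from pathlib import Path

class FileRepository(ABC):
    @abstractmethod
    def save(self, name: str, data: bytes) -> None: ...

    @abstractmethod
    def load(self, name: str) -> bytes: ...

class LocalFileRepository(FileRepository):
    """Current behaviour: plain file-system storage."""
    def __init__(self, root: str) -> None:
        self.root = Path(root)

    def save(self, name: str, data: bytes) -> None:
        path = self.root / name
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_bytes(data)

    def load(self, name: str) -> bytes:
        return (self.root / name).read_bytes()

# A BlobFileRepository (or S3FileRepository) would implement the same two methods
# with the provider's SDK, so the rest of the app never changes.
```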