In Windows Azure it's possible to create a public Blob Container. Such a container can be accessed by anonymous clients via the REST API.
Is it also possible to create a publicly accessible Queue?
The documentation for the Create Container operation explains how to specify the level of public access (with the x-ms-blob-public-access HTTP header) for a Blob Container. However, the documentation for the Create Queue operation doesn't list a similar option, leading me to believe that this isn't possible - but I'd really like to be corrected :)
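For contrast, here is roughly how I would make a blob container public with the managed client (a sketch assuming the Microsoft.WindowsAzure.Storage library; connectionString is a placeholder):

    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Blob;

    // Sketch: the managed-client equivalent of the x-ms-blob-public-access header.
    var account = CloudStorageAccount.Parse(connectionString); // placeholder
    var container = account.CreateCloudBlobClient().GetContainerReference("mycontainer");
    container.CreateIfNotExists();
    container.SetPermissions(new BlobContainerPermissions
    {
        PublicAccess = BlobContainerPublicAccessType.Blob // or .Container
    });

I can't find anything comparable on the queue client.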
At this time, Azure Queues cannot be made public.
As you have noted, this "privacy" is enforced by requiring all Storage API calls relating to queues to be authenticated with a request signed using your key. There is no "public" concept similar to public containers in the blob store.
This follows best practice in that, even in the cloud, you would not want to expose the internals of your infrastructure to the outside world. If you wanted to achieve this functionality, you could expose a very thin, simple "layer" app on top of the queues. A simple WCF REST app in a web role could expose the queuing operations to your consumers while handling the signing of API requests internally, so the queues would not need to be public.
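A rough sketch of such a layer, assuming the WCF REST programming model (System.ServiceModel.Web) and the Microsoft.WindowsAzure.Storage queue client; PublicQueueService and the setting name are made up:

    using System.ServiceModel;
    using System.ServiceModel.Web;
    using Microsoft.WindowsAzure;
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Queue;

    [ServiceContract]
    public class PublicQueueService
    {
        // The account key never leaves the role; the SDK signs each request.
        private readonly CloudQueueClient queueClient =
            CloudStorageAccount.Parse(
                CloudConfigurationManager.GetSetting("StorageConnectionString"))
            .CreateCloudQueueClient();

        [OperationContract]
        [WebInvoke(Method = "POST",
                   UriTemplate = "queues/{queueName}/messages",
                   BodyStyle = WebMessageBodyStyle.WrappedRequest,
                   RequestFormat = WebMessageFormat.Json)]
        public void Enqueue(string queueName, string body)
        {
            CloudQueue queue = queueClient.GetQueueReference(queueName);
            queue.CreateIfNotExists();
            queue.AddMessage(new CloudQueueMessage(body));
        }
    }

On top of this you could add whatever authentication, authorization, or throttling you need, since the layer is fully under your control.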
You are right, Azure Storage queues are not publicly accessible the way blob URIs are. However, you may still be able to achieve a publicly consumable messaging infrastructure with the AppFabric Service Bus.
I think the best option would be to set up a worker role and provide public access to the queue in that manner, perhaps with the AppFabric Service Bus for extra connectivity/interactivity with external sources.
Otherwise it's not really clear what the scope might be. The queue itself appears to be locked away at this time. :(
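To illustrate the Service Bus option, sending and receiving via a brokered queue could look roughly like this (a sketch using the Microsoft.ServiceBus.Messaging client; the connection string and queue name are placeholders):

    using Microsoft.ServiceBus.Messaging;

    // Sender - could run outside Azure, holding only a connection string
    // scoped to sending.
    var client = QueueClient.CreateFromConnectionString(connectionString, "public-orders");
    client.Send(new BrokeredMessage("hello from outside"));

    // Receiver - e.g. inside a worker role.
    BrokeredMessage msg = client.Receive();
    if (msg != null)
    {
        string body = msg.GetBody<string>();
        msg.Complete(); // removes the message from the queue
    }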
I created a pair of services in Service Fabric: one reads from the source database and, if it finds any new items, adds them to a reliable queue; the other tries to dequeue from the reliable queue and creates the records in the other database where I need them.
If both of these processes are in the same service, everything works; but if I separate this functionality into two different services, the second service's queue is always empty, which tells me the queues are not the same.
Hence my question: is a reliable queue only available to instances of the same service type? Is there any way to make a reliable queue available to two or more service types? If I want to share the same queue across service types, do I have to use Service Bus instead?
I hope my question makes sense, I have been trying to find this in the documentation, but I do not see anything helpful there, maybe I am looking in the wrong place.
A reliable collection is indeed only available to one particular stateful service type. The whole idea behind it is that the data (reliable collection) lives where the code (service) lives.
If you want to access the queue from another service, you could expose methods on the service interface that manipulate the queue, and have other services call this service. See this repo for some inspiration. Or use another messaging service like Azure Service Bus or Azure Storage Queues.
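A rough sketch of what such a service could look like with service remoting (the interface and names are made up; assumes Microsoft.ServiceFabric.Services.Remoting and a stateful service):

    using System.Fabric;
    using System.Threading.Tasks;
    using Microsoft.ServiceFabric.Data.Collections;
    using Microsoft.ServiceFabric.Services.Remoting;
    using Microsoft.ServiceFabric.Services.Runtime;

    // Contract that other services can call over remoting.
    public interface IQueueService : IService
    {
        Task EnqueueAsync(string item);
        Task<string> TryDequeueAsync();
    }

    // The stateful service owns the reliable queue; callers never touch it directly.
    public class QueueService : StatefulService, IQueueService
    {
        public QueueService(StatefulServiceContext context) : base(context) { }

        public async Task EnqueueAsync(string item)
        {
            var queue = await StateManager.GetOrAddAsync<IReliableQueue<string>>("items");
            using (var tx = StateManager.CreateTransaction())
            {
                await queue.EnqueueAsync(tx, item);
                await tx.CommitAsync();
            }
        }

        public async Task<string> TryDequeueAsync()
        {
            var queue = await StateManager.GetOrAddAsync<IReliableQueue<string>>("items");
            using (var tx = StateManager.CreateTransaction())
            {
                var result = await queue.TryDequeueAsync(tx);
                await tx.CommitAsync();
                return result.HasValue ? result.Value : null;
            }
        }

        // CreateServiceReplicaListeners would return a ServiceRemotingListener here.
    }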
Can you explain: a Service Fabric application can be packaged with multiple services to be shipped, but then how do you reuse some of these services in another application?
Is there a way a Reliable Dictionary or Reliable Queue may be shared among services deployed on the same cluster?
I tried searching on Google but found no clear explanation. Your help will be really appreciated.
... how do you reuse some of these services in another application?
What do you mean by reuse? Sharing the code? You could have a service in Application A talk to a service in Application B, instead of having the same service duplicated in Application A.
Is there a way a Reliable Dictionary or Reliable Queue may be shared among services deployed on the same cluster?
No, there is not. A Reliable Dictionary or Reliable Queue provides data locality to a service, removing the need for additional network calls. As soon as you need the same data in multiple services, you should consider using other storage solutions like Cosmos DB, Blob storage, or another database.
If you are looking for some kind of distributed cache you can take a look at Azure Redis.
It is, however, entirely possible to expose the data of a Reliable Dictionary or Reliable Queue using a service. That service then acts as a data provider / repository. You can expose methods like Add() or Delete() in such a service that result in an update of the Reliable Dictionary or Reliable Queue.
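Calling such a data-provider service from another service could then look roughly like this (assuming a remoting contract like the IQueueService sketched above; the application URI and partition key are placeholders):

    using System;
    using Microsoft.ServiceFabric.Services.Client;
    using Microsoft.ServiceFabric.Services.Remoting.Client;

    // Inside an async method of the calling service:
    var proxy = ServiceProxy.Create<IQueueService>(
        new Uri("fabric:/MyApp/QueueService"),
        new ServicePartitionKey(0)); // assumes a single-partition service

    await proxy.EnqueueAsync("work item");
    string next = await proxy.TryDequeueAsync();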
After enabling Application Insights on a WebJob which listens for events on an Event Hub using the EventProcessor class, we see that it continuously tries to access a set of non-existent queues in the configured storage account. We have not configured any queues on this account.
There's no reference to a queue anywhere in my code, and it is my understanding that the EventProcessorHost uses blob storage, not queues, to maintain state. So: why is it trying to access queues?
The queue access that you're seeing comes from the JobHost itself, not from any specific trigger type like EventHubs. The WebJobs SDK uses some storage resources itself behind the scenes for its own operation, e.g. control queues to track its own work, blobs for storage of log information shown in the Dashboard, etc.
In the specific case you mention above, those control queues that are being accessed are part of our Dashboard Invoke/Replay/Abort support. We have an open issue here in our repo tracking potential improvements we can make in this area. Please feel free to chime in on that issue.
I have a local storage folder, called TempStore, set up on my Web Role instances.
Is it possible to expose files as a URI from my local storage?
E.g:
http://myapplication.cloudapp.net/TempStore/helloworld.jpg
I understand that I could use blobs for this, but I would prefer to use local storage in this case.
There is. However, I really do not understand the reason for doing this. The only reason I can see is some misunderstanding, or not fully understanding, the capabilities of the Windows Azure platform services (Storage, Cloud Services / Web Roles).
You have to know that local storage is not synced between role instances. Also, if a hardware failure happens, the role-healing process will instantiate an entirely new VM with a fresh image from your cloud service package. This will leave you with an absolutely empty local storage resource. The Windows Azure load balancer (the thing that sits in front of your web and worker roles) uses a round-robin algorithm. This means that even if a user uploads a file to your web role with one request, the next request (with which you will probably want to show a preview) might go to another instance that has no idea of the user's upload.
If, after knowing all these facts, you still want to "shoot yourself in the foot", here is the solution (a minimal sketch follows the list):
- implement a VirtualPathProvider,
- register it for the desired public URL path,
- use the RoleEnvironment.GetLocalResource method in your VPP to obtain the full path to the local storage resource,
- don't blame anyone else when you realize this was a mistake ;)
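A minimal sketch of those steps, assuming the local resource is named TempStore and is exposed under /TempStore (class and path names are illustrative):

    using System;
    using System.IO;
    using System.Web;
    using System.Web.Hosting;
    using Microsoft.WindowsAzure.ServiceRuntime;

    public class LocalStoragePathProvider : VirtualPathProvider
    {
        private const string Prefix = "~/TempStore/";

        private static bool IsTempStorePath(string virtualPath)
        {
            return VirtualPathUtility.ToAppRelative(virtualPath)
                .StartsWith(Prefix, StringComparison.OrdinalIgnoreCase);
        }

        private static string MapToLocal(string virtualPath)
        {
            string root = RoleEnvironment.GetLocalResource("TempStore").RootPath;
            string relative = VirtualPathUtility.ToAppRelative(virtualPath)
                .Substring(Prefix.Length);
            return Path.Combine(root, relative);
        }

        public override bool FileExists(string virtualPath)
        {
            return IsTempStorePath(virtualPath)
                ? File.Exists(MapToLocal(virtualPath))
                : Previous.FileExists(virtualPath);
        }

        public override VirtualFile GetFile(string virtualPath)
        {
            return IsTempStorePath(virtualPath)
                ? new LocalStorageFile(virtualPath, MapToLocal(virtualPath))
                : Previous.GetFile(virtualPath);
        }

        private class LocalStorageFile : VirtualFile
        {
            private readonly string physicalPath;

            public LocalStorageFile(string virtualPath, string physicalPath)
                : base(virtualPath)
            {
                this.physicalPath = physicalPath;
            }

            public override Stream Open()
            {
                return File.OpenRead(physicalPath);
            }
        }
    }

    // Register in Global.asax Application_Start:
    // HostingEnvironment.RegisterVirtualPathProvider(new LocalStoragePathProvider());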
According to MSDN, an Azure service can contain any number of worker roles. To my knowledge, a worker role can be recycled at any time by the Windows Azure fabric. If that is true, then:
- the worker role should be stateless, OR
- the worker role should persist its state to the Windows Azure storage services.
But I want to build a service which holds client data, and I do not want to use the Azure storage services. How can I accomplish this?
The caching component of AppFabric (codenamed "Velocity") is a distributed cache and can be used in these situations.
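Usage is roughly as follows (a sketch assuming the Microsoft.ApplicationServer.Caching client; cache endpoints and security come from configuration, and ClientData / clientData are placeholders):

    using Microsoft.ApplicationServer.Caching;

    // Endpoints and authentication are read from app.config/web.config.
    DataCacheFactory factory = new DataCacheFactory();
    DataCache cache = factory.GetDefaultCache();

    cache.Put("session:42", clientData);            // survives role recycles
    ClientData data = (ClientData)cache.Get("session:42");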
Azure's web and worker roles are stateless, meaning all their local data is volatile. If you want to maintain state, you need some external resource to hold it, plus logic in your app to handle that. For simplicity you can use Azure Drive, but again, internally it is blob storage.
You can write to local storage on the worker role by using the standard file IO APIs - but this will be erased upon instance shutdown.
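For example (a sketch; "TempStore" is whatever local resource name is declared in your service definition, and serializedState is a placeholder):

    using System.IO;
    using Microsoft.WindowsAzure.ServiceRuntime;

    // Resolve the local resource declared in ServiceDefinition.csdef and write to it.
    LocalResource store = RoleEnvironment.GetLocalResource("TempStore");
    string path = Path.Combine(store.RootPath, "state.json");
    File.WriteAllText(path, serializedState); // lost if the instance is reimaged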
You could also use SQL Azure, or post your data off to another storage service by HTTP (e.g. Amazon S3, or your own server).
However, this is likely to have performance implications. Depending on how much data you'll be storing, how frequently, and how big it is, you might be better off with Azure Storage!
Why don't you want to use Azure Storage?
If the data could be stored in Azure you have a good number of choices: the Azure distributed cache, SQL Azure, blob, table, queue, or Azure Drive. It sounds like you need persistence but can't use any of these Azure storage mechanisms. If data security is the problem, could you encrypt or hash the data? Understanding why would be useful.
One alternative might be not to persist at all, by chaining/nesting synchronous web service calls together, thus achieving reliable messaging.
Another might be to use Azure Connect to domain-join Azure compute resources to your local data centre (if you have one), and use your on-premises storage.