JClouds for Azure Blob

I can't find an example of how to create a new container/bucket with a specific location (Singapore) using JClouds. All the examples I found on Google use null as the default location:
azureBlobStore.createContainerInLocation(null, containerName);
Could any of you, JClouds veterans, help me out here?

I haven't used JClouds, but I just went and looked at the docs for Azure storage. The first thing they show is creation of a blob context:
BlobStoreContext context = new BlobStoreContextFactory().createContext("azureblob", accesskeyid, secretkey);
According to the Javadocs, the params are provider, identity, and credential. That being the case, you probably need to pass the storage account name and key from the Windows Azure portal as the 2nd and 3rd parameters. Once you do this, your location is set for you: it is the data center where you created the storage account (in Windows Azure, a storage account is associated with a specific data center upon creation, and all containers and objects are then created in that data center as well). I don't think the Location parameter is meaningful when setting up an Azure blob container. That parameter is nullable because it only applies to a subset of cloud providers, depending on each provider's API (see the Javadocs for more details).

I was looking for the same answer the other day and just wanted to echo what David said. Here is the code for AzureBlobStore.java in jclouds 1.5:
@Override
public boolean createContainerInLocation(Location location, String container) {
    return sync.createContainer(container);
}
As you can see, the location is ignored because your Azure account is already tied to a specific location.

Related

Azure - Blob Storage - Mixing Custom Domain with SAS

I have an Azure Storage account that hosts a static web site as explained here. This means the static web site "lives" in a storage container named $web. This web site is accessible via a custom domain. This is currently working as desired. However, there is one file that I want to restrict access to.
There is one file in the $web storage container that I only want individuals to access if a) they have a key and b) it's during a specific time window. My thinking was that I could accomplish this with a Shared Access Signature (SAS). However, while testing this approach, it doesn't seem to work. It seems that everything in the $web storage container is publicly visible whether a SAS has been generated or not. Is this correct?
Is there a way to require that a file in the $web storage container have an SAS? Or, do I need to "host" the file in a separate storage container (thus removing it from my custom domain)?
Thank you.
When you visit files stored in the $web container via the primary static website endpoint (for example, https://contosoblobaccount.z22.web.core.windows.net/index.html), the files are always accessible, whether the container is public or private, so it doesn't matter whether a SAS token is specified or not.
The SAS token only takes effect if the $web container is set to private access and people visit the files via the primary blob service endpoint (for example, https://contosoblobaccount.blob.core.windows.net/$web/index.html).
Please refer to this official doc for more details.
So for your purpose, you should put the file in another container with private access.
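For illustration, here is a minimal C# sketch of that approach using the classic WindowsAzure.Storage .NET SDK; the container and blob names are made up, and this is only one way to produce the read-only, time-limited SAS the answer describes:
using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

class SasSketch
{
    static void Main()
    {
        // Placeholder connection string; substitute your own account credentials.
        var account = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=https;AccountName=contosoblobaccount;AccountKey=<key>");
        var container = account.CreateCloudBlobClient()
                               .GetContainerReference("protected"); // private-access container
        var blob = container.GetBlockBlobReference("restricted-file.pdf");

        // Read-only access, valid only for a specific time window.
        var policy = new SharedAccessBlobPolicy
        {
            Permissions = SharedAccessBlobPermissions.Read,
            SharedAccessStartTime = DateTimeOffset.UtcNow,
            SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddHours(2)
        };

        // The token must be appended to the blob service URL, not the static website URL.
        Console.WriteLine(blob.Uri + blob.GetSharedAccessSignature(policy));
    }
}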

Why do I see a FunctionIndexingException when creating a QueueTrigger WebJob Function?

I created a function like this:
public static Task HandleStorageQueueMessageAsync(
    [QueueTrigger("%QueueName%", Connection = "%ConnectionStringName%")] string body,
    TextWriter logger)
{
    if (logger == null)
    {
        throw new ArgumentNullException(nameof(logger));
    }

    logger.WriteLine(body);
    return Task.CompletedTask;
}
The queue name and the connection string name come from my configuration, which has an INameResolver to get the values. The connection string itself I load from my secret store into the app config at app start. If the connection string is a normal storage connection string granting all permissions for the whole account, the method works as expected.
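(For context, a minimal INameResolver that supplies those %...% tokens looks roughly like the sketch below; the values are made up for illustration.)
using Microsoft.Azure.WebJobs;

// Minimal resolver sketch: expands the %QueueName% / %ConnectionStringName%
// tokens used in the QueueTrigger attribute (hard-coded here for brevity).
public class ConfigNameResolver : INameResolver
{
    public string Resolve(string name)
    {
        switch (name)
        {
            case "QueueName": return "partner-queue";            // made-up queue name
            case "ConnectionStringName": return "PartnerQueue";  // made-up setting name
            default: return null;
        }
    }
}

// Registered on the host configuration, e.g.:
// var config = new JobHostConfiguration { NameResolver = new ConfigNameResolver() };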
However, in my scenario I am getting a SAS from a partner team that only offers read access to a single queue. I created a storage connection string from that, which looks similar to:
QueueEndpoint=https://accountname.queue.core.windows.net;SharedAccessSignature=st=2017-09-24T07%3A29%3A00Z&se=2019-09-25T07%3A29%3A00Z&sp=r&sv=2018-03-28&sig=token
(I successfully tested connecting with this connection string in Microsoft Azure Storage Explorer.)
The queue name used in the QueueTrigger attribute is also gathered from the SAS.
However, now I am getting the following exception:
$exception {"Error indexing method 'Functions.HandleStorageQueueMessageAsync'"} Microsoft.Azure.WebJobs.Host.Indexers.FunctionIndexingException
InnerException {"No blob endpoint configured."} System.Exception {System.InvalidOperationException}
If you look at the connection string, you can see the exception is right: I did not configure the blob endpoint. However, I don't have access to it, nor do I want to use it. I'm using the storage account only for this QueueTrigger.
I am using Microsoft.Azure.WebJobs v2.2.0. Other dependencies prevent me from upgrading to v3.x.
What is the recommended way for consuming messages from a storage queue when only a SAS URI with read access to a single queue is available? If I am already on the right path, what do I need to do in order to get rid of the exception?
As you have seen, the v2 WebJobs SDK requires access to the blob endpoint as well. I'm afraid that's by design; supporting connection strings without full access, such as a SAS, is an improvement that is tracked but not yet implemented.
Here are the permissions required by the v2 SDK: it needs to get the Blob Service properties (Blob, Service, Read) and to get queue metadata and process messages (Queue, Container & Object, Read & Process).
A queue trigger gets messages and deletes them after processing, so the SAS requires the Process permission. That means the SAS string you got is not authorized correctly even if the SDK didn't require blob access.
You could ask the partner team to generate a SAS connection string in the Azure portal with the minimum permissions above. If they can't provide blob access, the v3 SDK seems an option to try.
But there are some problems: 1. other dependencies prevent updating, as you mentioned; 2. the v3 SDK is based on .NET Core, which means code changes can't be avoided; 3. the v3 SDK documentation and samples are still under construction right now.
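For reference, if the partner team generates the SAS in code rather than in the portal, a sketch of a queue SAS with the minimum trigger permissions described above might look like this (classic WindowsAzure.Storage SDK; account and queue names are made up):
using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

class QueueSasSketch
{
    static void Main()
    {
        // Placeholder connection string; substitute real credentials.
        var account = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=https;AccountName=accountname;AccountKey=<key>");
        CloudQueue queue = account.CreateCloudQueueClient()
                                  .GetQueueReference("partner-queue"); // made-up name

        // Read alone is not enough for a QueueTrigger: the SDK must retrieve
        // messages and delete them after processing, i.e. the Process permission.
        var policy = new SharedAccessQueuePolicy
        {
            Permissions = SharedAccessQueuePermissions.Read
                        | SharedAccessQueuePermissions.ProcessMessages,
            SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddYears(1)
        };

        Console.WriteLine(queue.GetSharedAccessSignature(policy));
    }
}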
I was having a load of issues getting a SAS token to work for a QueueTrigger.
Not having blob included was my problem. Thanks Jerry!

Do I need an Azure Storage Account to run a WebJob?

So I'm fairly new to working with Azure and there are some things I can't quite wrap my head around, one of them being the Azure Storage Account.
My WebJob keeps stopping with the following error: "Unhandled Exception: System.InvalidOperationException: The account credentials for '[account_name]' are incorrect." Understanding the error, however, is not the problem, at least that's what I think. The problem lies in understanding why I need an Azure Storage Account to overcome it.
Please read on as I try to take you through the steps taken thus far. Hopefully the real question will become clearer to you.
In my efforts to deploy a WebJob on Azure we have created the following resources so far:
App Service Plan
App Service
SQL server
SQL database
I'm using the following code snippet to prevent my web job from exiting:
JobHostConfiguration config = new JobHostConfiguration();
config.DashboardConnectionString = null;
new JobHost(config).RunAndBlock();
To my understanding from other sources, the dashboard connection string is optional, but the AzureWebJobsStorage connection string is required.
I tried setting the required connection string in portal using the configuration found here.
DefaultEndpointsProtocol=[http|https];AccountName=myAccountName;AccountKey=myAccountKey
Looking further I found this answer that clearly states where I would get the values needed, namely an/my missing Azure Storage Account.
So now for the actual question: why do I need an Azure Storage Account when I seemingly have all the resources I need in place for the WebJob to run? What does it do? Is it a billing thing? I thought we had that defined in the App Service Plan. I've tried reading up on Azure Storage Accounts over here, but I need a bit more help understanding how it relates to everything.
From the docs:
An Azure storage account provides resources for storing queue and blob data in the cloud.
It's also used by the WebJobs SDK to store logging data for the dashboard.
Refer to the getting started guide and documentation for further information.
The answer to your question is "No", it is not mandatory to use Azure Storage when you are trying to set up and run an Azure WebJob.
If you are using JobHost or JobHostConfiguration, however, there is indeed a dependency on a storage account.
A sample code snippet that runs without one is given below.
class Program
{
    static void Main()
    {
        // Call the function directly instead of going through a JobHost,
        // so no storage account is required.
        Functions.ExecuteTask();
    }
}

public class Functions
{
    [NoAutomaticTrigger]
    public static void ExecuteTask()
    {
        // Execute your task here
    }
}
The answer is no, you don't. You can have a WebJob run without being tied to an Azure Storage Account. As Murray mentioned, your WebJob dashboard does use a storage account to log data, but that's completely independent.

How can I programmatically (C#) read the autoscale settings for a WebApp?

I'm trying to build a small program to change the autoscale settings for our Azure WebApps, using the Microsoft.WindowsAzure.Management.Monitoring and Microsoft.WindowsAzure.Management.WebSites NuGet packages.
I have been roughly following the guide here.
However, we are interested in scaling WebApps / App Services rather than Cloud Services, so I am trying to use the same code to read the autoscale settings but providing a resource ID for our WebApp. I have already got the credentials required for making a connection (using a browser window popup for Active Directory authentication, but I understand we can use X.509 management certificates for non-interactive programs).
This is the request I'm trying to make; credentials are already established, and an exception is thrown earlier if they're not valid.
AutoscaleClient autoscaleClient = new AutoscaleClient(credentials);
var resourceId = AutoscaleResourceIdBuilder.BuildWebSiteResourceId(webspaceName: WebSpaceNames.NorthEuropeWebSpace, serverFarmName: "Default2");
AutoscaleSettingGetResponse get = autoscaleClient.Settings.Get(resourceId); // exception here
The WebApp (let's call it "MyWebApp") is part of an App Service Plan called "Default2" (Standard: 1 small), in a Resource Group called "WebDevResources", in the North Europe region. I expect that my problem is that I am using the wrong names to build the resourceId in the code - the naming conventions in the library don't map well onto what I can see in the Azure Portal.
I'm assuming that BuildWebSiteResourceId is the correct method to call, see MSDN documentation here.
However the two parameters it takes are webspaceName and serverFarmName, neither of which match anything in the Azure portal (or Google). I found another example which seemed to be using the WebApp's geo region for webSpaceName, so I've used the predefined value for North Europe where our app is hosted.
While trying to find the correct value for serverFarmName in the Azure Portal, I found the Resource ID for the App Service Plan, which looks like this:
/subscriptions/{subscription-guid}/resourceGroups/WebDevResources/providers/Microsoft.Web/serverfarms/Default2
That resource ID isn't valid for the call I'm trying to make, but it does support the idea that a 'serverfarm' is the same as an App Service Plan.
When I run the code, regardless of whether the resourceId parameters seem to be correct or garbage, I get this error response:
<string xmlns="http://schemas.microsoft.com/2003/10/Serialization/">
{"Code":"SettingNotFound","Message":"Could not find the autoscale settings."}
</string>
So, how can I construct the correct resource ID for my WebApp or App Service Plan? Or alternatively, is there a different tree I should be barking up to programmatically manage WebApp scaling?
Update:
The solution below got the info I wanted. I also found the Azure resource explorer at resources.azure.com extremely useful to browse existing resources and find the correct names. For example, the name for my autoscale settings is actually "Default2-WebDevResources", i.e. "{AppServicePlan}-{ResourceGroup}" which I wouldn't have expected.
There is a preview service, https://resources.azure.com/, where you can inspect all your resources easily. If you search for autoscale in the UI you will easily find the settings for your resource. It will also show you how to call the relevant REST API endpoint to read or update that resource.
It's a great tool for revealing a lot of details of your deployed resources, and it will actually give you an ARM template stub for the resource you are looking at.
And to answer your question: you can programmatically call the REST API from a client with updated settings for autoscale. The REST API is one way of doing this, the SDK another, and PowerShell a third.
The guide you're following is based on the Azure Service Management model, aka Classic mode, which is deprecated and exists mainly for backward-compatibility support.
You should instead use the latest Microsoft.Azure.Insights NuGet package for getting the autoscale settings.
Sample code using the NuGet package above is shown below:
using Microsoft.Azure.Management.Insights;
using Microsoft.Rest;
//... Get necessary values for the required parameters
var client = new InsightsManagementClient(new TokenCredentials(token));
client.AutoscaleSettings.Get(resourceGroupName, autoScaleSettingName);
Besides, autoscale settings are a resource under the "Microsoft.Insights" provider and not under the "Microsoft.Web" provider, which explains why you were not able to find them with your serverfarm resource ID.
See the REST API Reference below for getting the autoscale settings.
GET
https://management.azure.com/subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/microsoft.insights/autoscaleSettings/{autoscale-setting-name}?api-version={api-version}
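A hedged sketch of calling that endpoint from C# with plain HttpClient follows; every identifier is a placeholder, the bearer token is assumed to be acquired separately via Azure Active Directory, and api-version 2015-04-01 is the version I believe applies to autoscale settings:
using System;
using System.Net.Http;
using System.Net.Http.Headers;

class AutoscaleRestSketch
{
    static void Main()
    {
        // All identifiers below are placeholders for your own subscription/resource names.
        var url = "https://management.azure.com/subscriptions/{subscription-id}" +
                  "/resourceGroups/WebDevResources/providers/microsoft.insights" +
                  "/autoscaleSettings/Default2-WebDevResources?api-version=2015-04-01";

        var client = new HttpClient();
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", "{aad-access-token}");

        // Returns a JSON body describing the autoscale profiles, rules and capacities.
        string json = client.GetStringAsync(url).GetAwaiter().GetResult();
        Console.WriteLine(json);
    }
}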

Azure Shared Access Signature for whole storage account

I am using an API (Node.js) to generate a read-only shared access signature for an iOS app using Azure Mobile Services. The API generates the SAS using the following code...
var azure = require('azure-storage');
var blobService = azure.createBlobService(accountName, accountKey);
var sas = blobService.generateSharedAccessSignature("containerName", null, sharedAccessPolicy);
This works great when I want a SAS for access to one container, but I really need access to all containers in the storage account. I could obviously do this with a separate API call for each container, but that would require hundreds of extra calls.
I have looked everywhere for a solution but can't get anything to work. I would very much appreciate knowing whether there is a way to generate a SAS for all containers in a storage account.
You can construct an account-level SAS, where you get to specify:
services to include (blob, table, queue, file)
resource access (e.g. container create & delete)
permissions (e.g. read, write, list)
protocol (e.g. https only, vs http+https)
Just like a service-specific SAS, you get to specify expiry date (and optionally start date).
Given your use case, you can tailor your account SAS to be just for blobs; there's no need to include unneeded services (in your case, tables/queues/files).
More specifics are documented here.
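The question is about Node.js, but since the account-SAS fields map one-to-one across SDKs, here is a hedged C# sketch of such a blob-only, read/list account SAS using the classic WindowsAzure.Storage SDK (account name and key are placeholders):
using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Auth;

class AccountSasSketch
{
    static void Main()
    {
        // Placeholder credentials; substitute your own account name and key.
        var account = new CloudStorageAccount(
            new StorageCredentials("accountname", "<account-key>"), useHttps: true);

        var policy = new SharedAccessAccountPolicy
        {
            Services = SharedAccessAccountServices.Blob,                  // blobs only
            ResourceTypes = SharedAccessAccountResourceTypes.Container
                          | SharedAccessAccountResourceTypes.Object,      // containers + blobs
            Permissions = SharedAccessAccountPermissions.Read
                        | SharedAccessAccountPermissions.List,            // read-only
            SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddHours(4)
        };

        // One token that works for every container in the account.
        Console.WriteLine(account.GetSharedAccessSignature(policy));
    }
}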
