Create and Update Named Caches in Azure Managed Cache using the Management API

I am attempting to create an Azure Managed Cache using PowerShell and the Azure Management API. This two-pronged approach is required because the official Azure PowerShell cmdlets have only very limited support for creating and updating Azure Managed Cache; there is, however, an established pattern for calling the Azure Management API from PowerShell.
My attempts at finding the correct API to call have been somewhat hampered by the limited documentation for the Azure Managed Cache API. However, after working my way through the cmdlets using both the source code and the -Debug option in PowerShell, I have been able to find what appear to be the correct API endpoints, and I have developed some code to access them.
However, I have become stuck after the PUT request is accepted by the Azure API, as subsequent calls to the Management API /operations endpoint show that the result of the operation was an Internal Server Error.
I have been using Joseph Albahari's LINQPad to explore the API, as it allows me to rapidly iterate on a solution using the minimum possible code. To execute the following code snippets you will need both LINQPad and the following extension in your My Extensions script:
public static X509Certificate2 GetCertificate(this StoreLocation storeLocation, string thumbprint) {
    // Look the certificate up by thumbprint in the Personal store of the given location.
    var certificateStore = new X509Store(StoreName.My, storeLocation);
    certificateStore.Open(OpenFlags.ReadOnly);
    var certificates = certificateStore.Certificates.Find(X509FindType.FindByThumbprint, thumbprint, false);
    return certificates[0];
}
The complete source code, including the My Extensions script, is available below:
My Extensions - you can replace your existing "My Extensions" script by right-clicking My Extensions in the bottom left-hand pane, choosing "Open Script Location in Windows Explorer" and then replacing the highlighted file with this one. Alternatively you may wish to merge my extensions into your own.
Azure Managed Cache Script - you should simply be able to download and double-click this; once open, and with the above extensions and certificates in place, you will be able to execute the script.
The following settings are used throughout the script; these variables will need to be updated by anyone who is following along using their own Azure subscription ID and management certificate:
var cacheName = "amc551aee";
var subscriptionId = "{{YOUR_SUBSCRIPTION_ID}}";
var certThumbprint = "{{YOUR_MANAGEMENT_CERTIFICATE_THUMBPRINT}}";
var endpoint = "management.core.windows.net";
var putPayloadXml = @"{{PATH_TO_PUT_PAYLOAD}}\cloudService.xml";
First I have done some setup on the HttpClient:
var handler = new WebRequestHandler();
handler.ClientCertificateOptions = ClientCertificateOption.Manual;
handler.ClientCertificates.Add(StoreLocation.CurrentUser.GetCertificate(certThumbprint));
var client = new HttpClient(handler);
client.DefaultRequestHeaders.Add("x-ms-version", "2012-08-01");
This configures HttpClient to use both a client certificate and the x-ms-version header. The first call to the API fetches the existing CloudService that contains the Azure Managed Cache; please note this is using an otherwise empty Azure subscription.
var getResult = client.GetAsync("https://" + endpoint + "/" + subscriptionId + "/CloudServices");
getResult.Result.Dump("GET " + getResult.Result.RequestMessage.RequestUri);
This request is successful, returning StatusCode: 200, ReasonPhrase: 'OK'. I then parse some key information out of the response: the CloudService name, the cache name and the cache ETag:
var cacheDataReader = new XmlTextReader(getResult.Result.Content.ReadAsStreamAsync().Result);
var cacheData = XDocument.Load(cacheDataReader);
var ns = cacheData.Root.GetDefaultNamespace();
var nsManager = new XmlNamespaceManager(cacheDataReader.NameTable);
nsManager.AddNamespace("wa", "http://schemas.microsoft.com/windowsazure");
var cloudServices = cacheData.Root.Elements(ns + "CloudService");
var serviceName = String.Empty;
var ETag = String.Empty;
foreach (var cloudService in cloudServices) {
    if (cloudService.XPathSelectElements("//wa:CloudService/wa:Resources/wa:Resource/wa:Name", nsManager).Select(x => x.Value).Contains(cacheName)) {
        serviceName = cloudService.XPathSelectElement("//wa:CloudService/wa:Name", nsManager).Value;
        ETag = cloudService.XPathSelectElement("//wa:CloudService/wa:Resources/wa:Resource/wa:ETag", nsManager).Value;
    }
}
I have pre-created an XML file that contains the payload of the following PUT request:
<Resource xmlns="http://schemas.microsoft.com/windowsazure">
<IntrinsicSettings>
<CacheServiceInput xmlns="">
<SkuType>Standard</SkuType>
<Location>North Europe</Location>
<SkuCount>1</SkuCount>
<ServiceVersion>1.3.0</ServiceVersion>
<ObjectSizeInBytes>1024</ObjectSizeInBytes>
<NamedCaches>
<NamedCache>
<CacheName>default</CacheName>
<NotificationsEnabled>false</NotificationsEnabled>
<HighAvailabilityEnabled>false</HighAvailabilityEnabled>
<EvictionPolicy>LeastRecentlyUsed</EvictionPolicy>
</NamedCache>
<NamedCache>
<CacheName>richard</CacheName>
<NotificationsEnabled>true</NotificationsEnabled>
<HighAvailabilityEnabled>true</HighAvailabilityEnabled>
<EvictionPolicy>LeastRecentlyUsed</EvictionPolicy>
</NamedCache>
</NamedCaches>
</CacheServiceInput>
</IntrinsicSettings>
</Resource>
I construct an HttpRequestMessage with the above payload and a URL comprised of the CloudService and cache names:
var resourceUrl = "https://" + endpoint + "/" + subscriptionId + "/cloudservices/" + serviceName + "/resources/cacheservice/Caching/" + cacheName;
var data = File.ReadAllText(putPayloadXml);
XDocument.Parse(data).Dump("Payload");
var message = new HttpRequestMessage(HttpMethod.Put, resourceUrl);
message.Headers.TryAddWithoutValidation("If-Match", ETag);
message.Content = new StringContent(data, Encoding.UTF8, "application/xml");
var putResult = client.SendAsync(message);
putResult.Result.Dump("PUT " + putResult.Result.RequestMessage.RequestUri);
putResult.Result.Content.ReadAsStringAsync().Result.Dump("Content " + putResult.Result.RequestMessage.RequestUri);
This request is nominally accepted by the Azure Service Management API, returning a StatusCode: 202, ReasonPhrase: 'Accepted' response; this essentially means that the payload has been accepted and will be processed offline. The operation ID can be parsed out of the HTTP headers to retrieve further information:
var requestId = putResult.Result.Headers.GetValues("x-ms-request-id").FirstOrDefault();
This requestId can be used to request an update upon the status of the operation:
var operation = client.GetAsync("https://" + endpoint + "/" + subscriptionId + "/operations/" + requestId);
operation.Result.Dump(requestId);
XDocument.Load(operation.Result.Content.ReadAsStreamAsync().Result).Dump("Operation " + requestId);
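In a script you would typically poll this endpoint until the status leaves InProgress; a minimal sketch, assuming the client, endpoint, subscriptionId and requestId variables from above (the InProgress/Succeeded/Failed values come from the Service Management operation payload, and the five-second delay is arbitrary):
string operationStatus;
do {
    System.Threading.Thread.Sleep(TimeSpan.FromSeconds(5)); // arbitrary back-off between polls
    var poll = client.GetAsync("https://" + endpoint + "/" + subscriptionId + "/operations/" + requestId).Result;
    var operationXml = XDocument.Load(poll.Content.ReadAsStreamAsync().Result);
    operationStatus = operationXml.Root.Element(operationXml.Root.GetDefaultNamespace() + "Status").Value;
} while (operationStatus == "InProgress");
operationStatus.Dump("Final operation status");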
The request to the /operations endpoint results in the following payload:
<Operation xmlns="http://schemas.microsoft.com/windowsazure" xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
  <ID>5364614d-4d82-0f14-be41-175b3b85b480</ID>
  <Status>Failed</Status>
  <HttpStatusCode>500</HttpStatusCode>
  <Error>
    <Code>InternalError</Code>
    <Message>The server encountered an internal error. Please retry the request.</Message>
  </Error>
</Operation>
And this is where I am stuck. The chances are that I am subtly malforming the request in such a way that the underlying operation throws a 500 Internal Server Error, but without a more detailed error message or API documentation I don't think there is anywhere I can go with this.

We worked with Richard offline and the following XML payload got him unblocked.
Note - when adding or removing a named cache on an existing cache service, the object size is fixed.
Note 2 - the Azure Managed Cache API is sensitive to whitespace between the <IntrinsicSettings> element and the <CacheServiceInput> element.
Also please note, we are working on adding named cache capability to our PowerShell cmdlets, so folks don't have to use the APIs to do this.
<Resource xmlns="http://schemas.microsoft.com/windowsazure" xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
<IntrinsicSettings><CacheServiceInput xmlns="" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
<SkuType>Standard</SkuType>
<Location>North Europe</Location>
<SkuCount>1</SkuCount>
<ServiceVersion>1.3.0</ServiceVersion>
<ObjectSizeInBytes>1024</ObjectSizeInBytes>
<NamedCaches>
<NamedCache>
<CacheName>default</CacheName>
<NotificationsEnabled>false</NotificationsEnabled>
<HighAvailabilityEnabled>false</HighAvailabilityEnabled>
<EvictionPolicy>LeastRecentlyUsed</EvictionPolicy>
<ExpirationSettings>
<TimeToLiveInMinutes>10</TimeToLiveInMinutes>
<Type>Absolute</Type>
</ExpirationSettings>
</NamedCache>
<NamedCache>
<CacheName>richard</CacheName>
<NotificationsEnabled>false</NotificationsEnabled>
<HighAvailabilityEnabled>false</HighAvailabilityEnabled>
<EvictionPolicy>LeastRecentlyUsed</EvictionPolicy>
<ExpirationSettings>
<TimeToLiveInMinutes>10</TimeToLiveInMinutes>
<Type>Absolute</Type>
</ExpirationSettings>
</NamedCache>
</NamedCaches>
</CacheServiceInput>
</IntrinsicSettings>
</Resource>
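Given the whitespace note above, if you are loading the payload from a formatted file it may be safest to re-serialize it without indentation before sending, so that no whitespace ends up between <IntrinsicSettings> and <CacheServiceInput>. A hedged sketch, reusing the putPayloadXml and message variables from the question:
// Strip inter-element whitespace before sending; SaveOptions.DisableFormatting removes indentation.
var compactPayload = XDocument.Parse(File.ReadAllText(putPayloadXml)).ToString(SaveOptions.DisableFormatting);
message.Content = new StringContent(compactPayload, Encoding.UTF8, "application/xml");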

Related

How to create consumer groups for Event Hub programmatically using Node.js in Azure?

I tried to replicate what the .NET SDK offers, but it did not work:
const { NamespaceManager } = require("@azure/service-bus");
let namespaceManager = NamespaceManager.CreateFromConnectionString(eventHubConnectionString);
let ehd = namespaceManager.GetEventHub(eventHubPath);
namespaceManager.CreateConsumerGroupIfNotExists(ehd.Path, consumerGroupName);
Here is the process that worked:
https://learn.microsoft.com/en-us/rest/api/eventhub/create-consumer-group
Steps:
Create SAS Token
Supply the right headers and make an HTTPS call to the REST API
You can create it only once; if you call it a second time, it will throw a 409 error. If you want update-or-insert behaviour, you need to check whether the consumer group already exists.
SAS Token:
https://learn.microsoft.com/en-us/rest/api/eventhub/generate-sas-token
uri -- URL of your Event Hub
saName -- name of your shared access policy
saKey -- primary/secondary key of your Event Hub Manage policy (ensure it has the Manage claim)
// Requires Node's built-in 'crypto' module and the 'utf8' npm package
var crypto = require('crypto');
var utf8 = require('utf8');

function createSharedAccessToken(uri, saName, saKey) {
    if (!uri || !saName || !saKey) {
        throw "Missing required parameter";
    }
    var encoded = encodeURIComponent(uri);
    var now = new Date();
    var week = 60 * 60 * 24 * 7;
    var ttl = Math.round(now.getTime() / 1000) + week;
    var signature = encoded + '\n' + ttl;
    var signatureUTF8 = utf8.encode(signature);
    var hash = crypto.createHmac('sha256', saKey).update(signatureUTF8).digest('base64');
    return 'SharedAccessSignature sr=' + encoded + '&sig=' +
        encodeURIComponent(hash) + '&se=' + ttl + '&skn=' + saName;
}
Request Parameters:
URL:
https://your-namespace.servicebus.windows.net/your-event-hub/consumergroups/testCG?timeout=60&api-version=2014-01
Headers:
Content-Type: application/atom+xml;type=entry;charset=utf-8
Host: your-namespace.servicebus.windows.net
Authorization: {replace with the content from your SAS Token}
Payload:
<entry xmlns="http://www.w3.org/2005/Atom">
<content type="application/xml">
<ConsumerGroupDescription xmlns="http://schemas.microsoft.com/netservices/2010/10/servicebus/connect" xmlns:i="http://www.w3.org/2001/XMLSchema-instance">Any name you want</ConsumerGroupDescription>
</content>
</entry>
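Since this is a plain REST call, here is a hedged sketch of the same request using .NET's HttpClient, for reference (the namespace, hub and consumer group names are placeholders, and sasToken is the string returned by a function like createSharedAccessToken above); the equivalent request can be made with any HTTP client in Node.js:
// Hedged sketch: PUT the Atom entry to create the consumer group.
var client = new HttpClient();
client.DefaultRequestHeaders.TryAddWithoutValidation("Authorization", sasToken);
var url = "https://your-namespace.servicebus.windows.net/your-event-hub/consumergroups/testCG?timeout=60&api-version=2014-01";
var body = "<entry xmlns=\"http://www.w3.org/2005/Atom\"><content type=\"application/xml\">" +
           "<ConsumerGroupDescription xmlns=\"http://schemas.microsoft.com/netservices/2010/10/servicebus/connect\" />" +
           "</content></entry>";
var response = client.PutAsync(url, new StringContent(body, Encoding.UTF8, "application/atom+xml")).Result;
// Expect 201 on success, 409 if the consumer group already exists.
Console.WriteLine((int)response.StatusCode);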
Possible Return Statuses:
201 -- Successful Creation
404 -- Not found, you are using a name that does not exist
409 -- The messaging entity 'XXX' already exists.
If you notice any other issues, please leave a comment.
The Event Hubs Node.js SDK doesn't support management operations.
Try a management client like https://www.nuget.org/packages/Microsoft.Azure.Management.EventHub/

Google Cloud Storage get signedUrl from CDN npm

I am using code like the following to create a signed URL for my content:
var storage = require('@google-cloud/storage')();
var myBucket = storage.bucket('my-bucket');
var file = myBucket.file('my-file');
//-
// Generate a URL that allows temporary access to download your file.
//-
var request = require('request');
var config = {
    action: 'read',
    expires: '03-17-2025'
};
file.getSignedUrl(config, function(err, url) {
    if (err) {
        console.error(err);
        return;
    }
    // The file is now available to read from the URL.
});
This creates a URL that starts with https://storage.googleapis.com/my-bucket/.
If I place that URL in the browser, it is readable.
However, I guess that URL is direct access to the bucket file and is not passing through my configured CDN.
I see that in the docs (https://cloud.google.com/nodejs/docs/reference/storage/1.6.x/File#getSignedUrl) you can pass a cname option, which transforms the URL to replace https://storage.googleapis.com/my-bucket/ with my bucket's CDN domain.
HOWEVER, when I copy the resulting URL, the service account or resulting URL doesn't seem to have access to the resource.
I have added the Firebase admin service account to the bucket but still I get no access.
Also, from the docs, the CDN signed URL seems a lot different from the one signed through that API. Is it possible to create a CDN signed URL from the API, or should I manually create it as explained in: https://cloud.google.com/cdn/docs/using-signed-urls?hl=en_US&_ga=2.131493069.-352689337.1519430995#configuring_google_cloud_storage_permissions?
For anyone interested in the node code for that signing:
var url = 'URL of the endpoint served by Cloud CDN';
var key_name = 'Name of the signing key added to the Google Cloud Storage bucket or service';
var key = 'Signing key as urlsafe base64 encoded string';
var expiration = Math.round(new Date().getTime()/1000) + 600; //ten minutes after, in seconds
var crypto = require("crypto");
var URLSafeBase64 = require('urlsafe-base64');
// Decode the URL safe base64 encode key
var decoded_key = URLSafeBase64.decode(key);
// Build the URL to sign
var urlToSign = url
+ (url.indexOf('?') > -1 ? "&" : "?")
+ "Expires=" + expiration
+ "&KeyName=" + key_name;
//Sign the url using the key and url safe base64 encode the signature
var hmac = crypto.createHmac('sha1', decoded_key);
var signature = hmac.update(urlToSign).digest();
var encoded_signature = URLSafeBase64.encode(signature);
//Concatenate the URL and encoded signature
urlToSign += "&Signature=" + encoded_signature;
The Cloud CDN content delivery network works with HTTP(S) load balancing to deliver content to your users. Are you using HTTPS Load Balancer to deliver content to your users?
You can see this attached document[1] on using Google Cloud CDN and HTTP(S) load balancing and inserting content into the cache.
[1] https://cloud.google.com/cdn/docs/overview
[2] https://cloud.google.com/cdn/docs/concepts
What error code are you getting? Can you use the curl command and send the output with the error code for further analysis?
Could you confirm that the configuration you have done meets the requirements for cacheability, as not all HTTP responses are cacheable? Google Cloud CDN caches only those responses that satisfy specific conditions [3], please confirm. Upon confirmation, I will do further investigation and advise you accordingly.
[3] Cacheability: https://cloud.google.com/cdn/docs/caching#cacheability
Could you provide me the output of the two commands below, which will help me to verify whether there is a permission issue on these objects? These commands will dump all the current permission settings on the object.
gsutil acl get gs://[full_path_to_file_to_be_cached]
gsutil ls -L gs://[full_path_to_file_to_be_cached]
For more details on permission, refer to this GCP documentation [4]
[4] Setting bucket permissions: https://cloud.google.com/storage/docs/cloud-console#_bucketpermission
No, it is not possible to create a CDN signed URL from the API, from what Google documents here.
The answer provided by @htafoya seems legit.
However, I spent a couple of hours struggling with why the signed URL was not working, as the CDN endpoint complained about access denied. Eventually I found that the code using the crypto module doesn't produce the same HMAC-SHA1 hash value as what gcloud compute sign-url computes; I still don't know why.
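For reference, the gcloud invocation being compared against looks roughly like this (the URL, key name and key file are placeholders):
gcloud compute sign-url "https://cdn.example.com/path/to/object" --key-name my-signing-key --key-file ./cdn-signing-key.file --expires-in 600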
At the same time, I found this lib (jsSHA) to be pretty cool; it generates the HMAC-SHA1 hash value exactly the same as gcloud and it has a neat API, so I thought I should comment here so that others with the same struggle can benefit. This is the final code I used to sign a Cloud CDN URL:
import jsSHA from 'jssha';

// Assumed to be defined elsewhere: daySeconds (TTL in seconds), keyName, signKey (the
// base64 signing key) and safeSign (which makes the base64 signature URL-safe).
const url = `https://{domain}/{path}`;
const expire = Math.round(new Date().getTime() / 1000) + daySeconds;
const extendedUrl = `${url}${url.indexOf('?') > -1 ? "&" : "?"}Expires=${expire}&KeyName=${keyName}`;
// use jssha to compute the HMAC-SHA1 of the URL with the base64-encoded signing key
const shaObj = new jsSHA("SHA-1", "TEXT", { hmacKey: { value: signKey, format: "B64" } });
shaObj.update(extendedUrl);
const signature = safeSign(shaObj.getHMAC('B64'));
return `${extendedUrl}&Signature=${signature}`;
working great!

Sending IM with Skype for Business Online from Console App

I am trying to set up a C# console app that can send notifications/reminders to users via Skype for Business online from a generic AD account. I was excited to see the other day that according to this page, UCWA is now supported in Skype for Business online: https://msdn.microsoft.com/en-us/library/office/mt650889.aspx.
I've been trying to follow this tutorial to get this set up: https://msdn.microsoft.com/en-us/library/office/mt590891(v=office.16).aspx. So far I haven't really had much luck... I have my application set up in Azure AD but I get stuck at the "Requesting an access token using implicit grant flow" step of that article (not 100% certain I'm taking the correct actions before that either)... so far I have this:
string clientId = "xxxxxxxx";
string resourceUri = "https://webdir.online.lync.com";
string authorityUri = "https://login.windows.net/common/oauth2/authorize";
AuthenticationContext authContext = new AuthenticationContext(authorityUri);
UserCredential cred = new UserCredential("username", "password");
string token = authContext.AcquireToken(resourceUri, clientId, cred).AccessToken;
var poolReq = CreateRequest("https://webdir.online.lync.com/autodiscover/autodiscoverservice.svc/root", "GET",token);
var poolResp = GetResponse(poolReq);
dynamic tmp = JsonConvert.DeserializeObject(poolResp);
string resourcePool = tmp._links.user.href;
Console.WriteLine(resourcePool);
var accessTokenReq = CreateRequest("https://login.windows.net/common/oauth2/authorize"
+ "?response_type=id_token"
+ "&client_id=" + clientId
+ "&redirect_uri=https://login.live.com/oauth20_desktop.srf"
+ "&state=" + Guid.NewGuid().ToString()
+ "&resource=" + new Uri(resourcePool).Host.ToString()
, "GET",token);
var accessTokenResp = GetResponse(accessTokenReq);
my GetResponse and CreateRequest methods:
public static string GetResponse(HttpWebRequest request)
{
    string response = string.Empty;
    using (HttpWebResponse httpResponse = request.GetResponse() as System.Net.HttpWebResponse)
    {
        // Get a StreamReader that holds the response stream
        using (StreamReader reader = new System.IO.StreamReader(httpResponse.GetResponseStream()))
        {
            response = reader.ReadToEnd();
        }
    }
    return response;
}

public static HttpWebRequest CreateRequest(string uri, string method, string accessToken)
{
    HttpWebRequest request = System.Net.WebRequest.Create(uri) as System.Net.HttpWebRequest;
    request.KeepAlive = true;
    request.Method = method;
    request.ContentLength = 0;
    request.ContentType = "application/json";
    request.Headers.Add("Authorization", String.Format("Bearer {0}", accessToken));
    return request;
}
accessTokenResp is an Office Online logon page, not the access token I need to move forward... so I'm stuck. I've tried quite a few variations of the above code.
I've been scouring the net for more examples but can't really find any, especially since UCWA support for Office 365 is so new. Does anyone have an example of how to do what I am trying to do, or can point me to one? Everything I've found so far hasn't really even been close to what I'm trying to do. I can't use the Skype for Business client SDK unfortunately either, as it doesn't meet all of my requirements.
I came to a working solution using ADAL (v3), with the help of the steps outlined at
Authentication using Azure AD
Here are the steps, which involve requesting multiple authentication tokens from AAD using ADAL:
Register your application, as Native Application, in Azure AD.
Perform autodiscovery to find the user's UCWA root resource URI.
This can be done by performing a GET request on
GET https://webdir.online.lync.com/Autodiscover/AutodiscoverService.svc/root?originalDomain=yourdomain.onmicrosoft.com
Request an access token for the UCWA root resource returned in the autodiscovery response, using ADAL
For instance, your root resource will be at
https://webdir0e.online.lync.com/Autodiscover/AutodiscoverService.svc/root/oauth/user?originalDomain=yourdomain.onmicrosoft.com
you'll have to obtain a token from AAD for resource https://webdir0e.online.lync.com/
Perform a GET on the root resource with the bearer token obtained from ADAL
GET https://webdir0e.online.lync.com/Autodiscover/AutodiscoverService.svc/root/oauth/user?originalDomain=yourdomain.onmicrosoft.com
This will return, within the user resource, the URI of the applications resource, where you create your UCWA application. In my case this is:
https://webpoolam30e08.infra.lync.com/ucwa/oauth/v1/applications
This resides in another domain, and thus a different audience/resource, not covered by the auth token previously obtained.
Acquire a new token from AAD for the host resource where the home pool and applications resource are (https://webpoolam30e08.infra.lync.com in my case)
Create a new UCWA application by doing a POST on the applications URI, using the token obtained from ADAL
Voilà, your UCWA application is created. What I notice at the moment is that only a few resources are available, excluding me/presence. So users' presence can be retrieved, but your own presence status can't be changed.
However, I've been able to retrieve my personal note, and the following resources are available to me:
people
communication
meetings
Show me some code:
Function to perform the flow obtaining and switching auth tokens
public static async Task<UcwaApp> Create365UcwaApp(UcwaAppSettings appSettings, Func<string, Task<OAuthToken>> acquireTokenFunc)
{
    var result = new UcwaApp();
    result.Settings = appSettings;
    var rootResource = await result.Discover365RootResourceAsync(appSettings.DomainName);
    var userUri = new Uri(rootResource.Resource.GetLinkUri("user"), UriKind.Absolute);
    // Acquire a token for the domain where the user resource is
    var token = await acquireTokenFunc(userUri.GetComponents(UriComponents.SchemeAndServer, UriFormat.SafeUnescaped));
    // Set Authorization header with the new token
    result.AuthToken = token;
    var usersResult = await result.GetUserResource(userUri.ToString());
    //
    result.ApplicationsUrl = usersResult.Resource.GetLinkUri("applications");
    var appsHostUri = new Uri(result.ApplicationsUrl, UriKind.Absolute).GetComponents(UriComponents.SchemeAndServer, UriFormat.SafeUnescaped);
    // Acquire a token for the domain where the applications resource is
    token = await acquireTokenFunc(appsHostUri);
    // Set Authorization header with the new token
    result.AuthToken = token;
    //
    var appResult = await result.CreateApplicationAsync(result.ApplicationsUrl, appSettings.ApplicationId, appSettings.UserAgent, appSettings.Culture);
    return result;
}
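CreateApplicationAsync is not shown above; a hedged sketch of roughly what that POST looks like (the UserAgent/EndpointId/Culture property names follow the UCWA applications resource, the return type is simplified, and httpClient is assumed to already carry the bearer token from AuthToken):
private async Task<HttpResponseMessage> CreateApplicationAsync(string applicationsUrl, string endpointId, string userAgent, string culture)
{
    // POST to the applications URL to create the UCWA application for this endpoint.
    var payload = new { UserAgent = userAgent, EndpointId = endpointId, Culture = culture };
    var content = new StringContent(JsonConvert.SerializeObject(payload), Encoding.UTF8, "application/json");
    return await httpClient.PostAsync(applicationsUrl, content);
}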
Usage code to retrieve OAuth tokens using ADAL:
var ucSettings = new UcwaAppSettings
{
    UserAgent = "Test Console",
    Culture = "en-us",
    DomainName = "yourdomain.onmicrosoft.com",
    ApplicationId = "your app client id"
};
var acquireTokenFunc = new Func<string, Task<OAuthToken>>(async (resourceUri) =>
{
    var authContext = new AuthenticationContext("https://login.windows.net/" + ucSettings.DomainName);
    var ar = await authContext.AcquireTokenAsync(resourceUri,
        ucSettings.ApplicationId,
        new UserCredential("myusername", "mypassword"));
    return new OAuthToken(ar.AccessTokenType, ar.AccessToken, ar.ExpiresOn.Ticks);
});
var app = await UcwaApp.Create365UcwaApp(ucSettings, acquireTokenFunc);
It should of course be possible to avoid hard-coding the username and password with ADAL, but this was easier for a PoC, especially in the case of a console application, as you asked.
I've just blogged about this using a start-to-finish example, hopefully it will help you. I only go as far as signing in, but you can use it with another post I've done on sending IMs using Skype Web SDK here (see day 13 and 14) and combine the two, it should work fine.
-tom
Similar to Massimo's solution, I've created a Skype for Business Online C#-based console app that demonstrates how to sign in and use UCWA to create/list/delete meetings and change user presence. I haven't gotten around to extending it to send IMs, but you're certainly welcome to clone my repository and extend it to your needs. Just drop your Azure AD tenant name and native app ID into the code.
I think they just turned this on today - I was doing something unrelated with the Skype Web SDK samples and had to create a new Azure AD app, and noticed that there are two new preview features for receiving conversation updates and changing user information.
Now everything in the Github samples works for Skype For Business Online.

How to publish a website from a web application hosted in Azure?

I need a solution for publishing a website from a web application hosted in Azure.
I tried the following code. It creates the site, but I was not able to upload the published website content.
private HttpResponseMessage CreateWebsite(CreateSiteViewModel site)
{
    var cert = X509Certificate.CreateFromCertFile(Server.MapPath(site.CertPath));
    string uri = string.Format("https://management.core.windows.net/{0}/services/WebSpaces/{1}/sites/", site.Subscription, site.WebSpaceName);
    // A URL which is looking for the right public key with
    // the incoming https request
    var req = (HttpWebRequest)WebRequest.Create(uri);
    String dataToPost = string.Format(
        @"<Site xmlns=""http://schemas.microsoft.com/windowsazure"" xmlns:i=""http://www.w3.org/2001/XMLSchema-instance"">
            <HostNames xmlns:a=""http://schemas.microsoft.com/2003/10/Serialization/Arrays"">
                <a:string>{0}.azurewebsites.net</a:string>
            </HostNames>
            <Name>{0}</Name>
            <WebSpaceToCreate>
                <GeoRegion>{1}</GeoRegion>
                <Name>{2}</Name>
                <Plan>VirtualDedicatedPlan</Plan>
            </WebSpaceToCreate>
        </Site>", site.SiteName, site.WebSpaceGeo, site.WebSpaceName);
    req.Method = "POST"; // POST method
    // You can also use ContentType = "text/xml" with the request
    req.UserAgent = "Fiddler";
    req.Headers.Add("x-ms-version", "2013-08-01");
    // Attach the certificate to the request.
    // When you browse manually you get a dialogue box asking
    // whether you want to browse over a secure connection;
    // this suppresses that message (programmatically saying OK to it).
    req.ClientCertificates.Add(cert);
    string postData = dataToPost;
    var encoding = new ASCIIEncoding();
    byte[] byte1 = encoding.GetBytes(postData);
    // Set the content length of the string being posted.
    req.ContentLength = byte1.Length;
    Stream newStream = req.GetRequestStream();
    newStream.Write(byte1, 0, byte1.Length);
    // Close the Stream object.
    newStream.Close();
    var rsp = (HttpWebResponse)req.GetResponse();
    var reader = new StreamReader(rsp.GetResponseStream());
    String retData = reader.ReadToEnd();
    req.GetRequestStream().Close();
    rsp.GetResponseStream().Close();
    return new HttpResponseMessage
    {
        StatusCode = rsp.StatusCode,
        Content = new StringContent(retData)
    };
}
I am not entirely sure what you are trying to achieve here, but if I understand correctly you want to publish a website programmatically.
You cannot do this (publish a website programmatically) with the Azure Management APIs. The Azure Management APIs are for managing Azure services and resources; the web site content itself is not in any way an Azure service, nor an Azure resource.
If you want to programmatically publish a website to an Azure Web Site, I would suggest taking a deep read of How to deploy an Azure Web site.
Of the options mentioned there, the ones that are pretty easy to automate are (see the sketch after this list):
Web Deploy
Repositories using GIT
MSBuild
any other that you are familiar with ...
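For instance, the MSBuild/Web Deploy route can be scripted roughly like this (a sketch; the project name and publish profile are placeholders, with the profile being the .pubxml you download from the Azure portal):
msbuild MyWebApp.csproj /p:DeployOnBuild=true /p:PublishProfile=MyAzureSite /p:Configuration=Release /p:Password=<publish-password>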

How to Create a Queue in Windows Azure?

I am using the code below to create a queue, using SharedSecretTokenProvider. However, I am not able to supply the correct values of managerName and managerKey from the Windows Azure portal.
This results in an HTTP 401 Unauthorized exception. How do I resolve this error?
const string queueName = "thequeue";
var tokenProvider = TokenProvider.CreateSharedSecretTokenProvider(
    ConfigurationManager.AppSettings["managerName"],
    ConfigurationManager.AppSettings["managerKey"]);
Uri uri = ServiceBusEnvironment.CreateServiceUri("http", "MyNamespace", string.Empty);
NamespaceManager namespaceManager = new NamespaceManager(uri, tokenProvider);
QueueDescription qd = namespaceManager.CreateQueue(new QueueDescription(queueName)
{
    DefaultMessageTimeToLive = TimeSpan.FromMinutes(15),
    DuplicateDetectionHistoryTimeWindow = TimeSpan.FromMinutes(10),
    LockDuration = TimeSpan.FromMinutes(2),
    EnableBatchedOperations = true,
    EnableDeadLetteringOnMessageExpiration = true,
    RequiresDuplicateDetection = true
});
I had tried this a couple of times with your code before I realized the problem. You are using the SharedSecretTokenProvider, which will go to ACS thinking it has an issuer and a key. Since you are trying to use a SAS, you'll want to use CreateSharedAccessSignatureTokenProvider instead.
Swap that out and provide the key and keyName and you should be good.
Also, Viperguynaz is correct: you should use "sb" instead of http as well. It was failing before it reached that point because the token provider was correctly declining you access, since it didn't understand the key and key name you were passing for what it thought was the issuer and key that ACS uses.
Start with ServiceBusEnvironment.CreateServiceUri Method. Note that Service Bus endpoint URIs must always use the “sb://” protocol; for example sb://contoso.servicebus.windows.net/helloservicebus.
Uri address = ServiceBusEnvironment.CreateServiceUri("sb", "contoso", "helloservicebus");
Get your URI inputs set correctly and you should be in business.
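Putting both answers together, a sketch of the corrected setup, assuming the key name and key are the shared access policy values taken from the portal:
const string queueName = "thequeue";
// SAS token provider instead of the ACS shared-secret provider
var tokenProvider = TokenProvider.CreateSharedAccessSignatureTokenProvider(
    ConfigurationManager.AppSettings["sasKeyName"],   // e.g. "RootManageSharedAccessKey"
    ConfigurationManager.AppSettings["sasKey"]);
// "sb" scheme rather than "http"
Uri uri = ServiceBusEnvironment.CreateServiceUri("sb", "MyNamespace", string.Empty);
var namespaceManager = new NamespaceManager(uri, tokenProvider);
if (!namespaceManager.QueueExists(queueName))
{
    namespaceManager.CreateQueue(queueName);
}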
