How to work with Gmail attachments in an Azure Logic App

I have created a Logic App where I receive emails from a Gmail account, and I want to post each email's attachment to my REST API. But I don't understand what type the attachment arrives as: with the Outlook.com trigger I get a base64 string, but from Gmail I get something else.
Is there an example of how to work with Gmail attachments?

Thanks for the input, SahadevSinh. I have changed my workflow like this (screenshot not included):
And in my endpoint I do this:
public async System.Threading.Tasks.Task<MissionOutputDto> CreateMissionFromMail(HttpRequestMessage req)
{
    string body = await req.Content.ReadAsStringAsync();
    dynamic fileData = JObject.Parse(body);

    string email = fileData.email;
    JArray files = fileData.files;

    string fileString = null;
    string fileName = null;
    string mimeType = null;

    // Note: as written, this keeps only the last attachment;
    // move the processing into the loop to handle every file.
    foreach (dynamic file in files)
    {
        fileString = file.ContentBytes;
        fileName = file.Name;
        mimeType = file.ContentType;
    }
    // ...
}

I have an example to show you how you can get a Gmail attachment:
1) Receive email trigger
2) Get email details
3) Pass the attachment details in the HTTP request

(The designer screenshots for these steps are not included here.) The attachments array passed in the request looks like this:
[
    {
        "Name": "test (2).txt",
        "ContentBytes": "dGVzdA==",
        "ContentType": "text/plain; charset=\"US-ASCII\"; name=\"test (2).txt\"",
        "ContentId": "",
        "Size": 4
    },
    {
        "Name": "test (2) - Copy.txt",
        "ContentBytes": "dGVzdA==",
        "ContentType": "text/plain; charset=\"US-ASCII\"; name=\"test (2) - Copy.txt\"",
        "ContentId": "",
        "Size": 4
    }
]
"contentbyte" : is base64Strig
Web API changes:
You have to create one more class to receive this attachment data:
public class GmailAttechment
{
    public string FileName { get; set; }
    public string ContentBytes { get; set; }
    public string ContentType { get; set; }
    public string ContentId { get; set; }
    public int Size { get; set; }
}
This class is used to read the attachment details from your request. Add it to your Web API request parameter:
public class GetEmailDetails
{
    public string file { get; set; }
    public string fileName { get; set; }
    public string from { get; set; }
    public string mimeType { get; set; }
    public List<GmailAttechment> GmailAttechmentList { get; set; }
}
Example of the action:
public void GetGmailDetails(GetEmailDetails gmailDetails)
{
    foreach (var item in gmailDetails.GmailAttechmentList)
    {
        // Here you can get each file's content.
        string base64String = item.ContentBytes;
    }
}
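If you need the raw file bytes on the API side (for example, to save the attachment), a minimal sketch, assuming the payload shape above; the temp-folder destination is just an illustration:

// Inside the loop above: decode the base64 content back into bytes.
byte[] fileBytes = Convert.FromBase64String(item.ContentBytes);
System.IO.File.WriteAllBytes(
    System.IO.Path.Combine(System.IO.Path.GetTempPath(), item.FileName),
    fileBytes);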

Related

User Flow - API connector cannot parse response

I have an Azure B2C user flow. It is associated with an API Connector pointing to an Azure Function. The function returns a ResponseContent with extension claims:
public class ResponseContent
{
    public const string ApiVersion = "1.0.0";

    public ResponseContent()
    {
        this.version = ResponseContent.ApiVersion;
        this.action = "Continue";
    }

    public ResponseContent(string action, string userMessage)
    {
        this.version = ResponseContent.ApiVersion;
        this.action = action;
        this.userMessage = userMessage;
    }

    public ResponseContent(string userTypes, string accountIdentifiers, string pricebookAuthorized, string portalAuthorized)
    {
        this.version = ResponseContent.ApiVersion;
        this.action = "Continue";
        this.extension_UserTypes = userTypes;
        this.extension_AccountIdentifiers = accountIdentifiers;
        this.extension_PricebookAuthorized = pricebookAuthorized;
        this.extension_PortalAuthorized = portalAuthorized;
    }

    public string version { get; }
    public string action { get; set; }

    [JsonProperty(NullValueHandling = NullValueHandling.Ignore)]
    public string userMessage { get; set; }
    [JsonProperty(NullValueHandling = NullValueHandling.Ignore)]
    public string extension_UserTypes { get; set; }
    [JsonProperty(NullValueHandling = NullValueHandling.Ignore)]
    public string extension_AccountIdentifiers { get; set; }
    [JsonProperty(NullValueHandling = NullValueHandling.Ignore)]
    public string extension_PricebookAuthorized { get; set; }
    [JsonProperty(NullValueHandling = NullValueHandling.Ignore)]
    public string extension_PortalAuthorized { get; set; }
}
Here are the claims of the user flow (screenshot not included):
When I run this Azure function using Postman, the following is returned:
{
    "version": "1.0.0",
    "action": "Continue",
    "extension_UserTypes": "",
    "extension_AccountIdentifiers": "",
    "extension_PricebookAuthorized": "",
    "extension_PortalAuthorized": ""
}
But when I try to run the user flow on Azure, I get
Microsoft.Identity.Client.MsalServiceException:
AADB2C90261: The claims exchange 'PreSendClaimsRestful' specified in
step '2' returned HTTP error response that could not be parsed.
What might be wrong, and how can this be diagnosed?
Please check if the points below help:
Each key/value pair in the JSON is treated as a string, a string collection, or a Boolean.
AAD B2C may not deserialize the claims in the JSON you send. You may need to deserialize the string at the API, or return a nested JSON object without escape characters:
string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
dynamic data = JsonConvert.DeserializeObject(requestBody);
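As a minimal sketch (an assumption, not from the original post: the in-process Azure Functions model with the usual Microsoft.AspNetCore.Mvc and Newtonsoft.Json usings), returning the ResponseContent object itself rather than a pre-serialized string lets the framework emit plain JSON without escape characters:

[FunctionName("Validate")]
public static async Task<IActionResult> Run(
    [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req)
{
    string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
    dynamic data = JsonConvert.DeserializeObject(requestBody);

    // Return the object, not a JSON string: an escaped, double-serialized
    // payload is one common cause of AADB2C90261.
    return new OkObjectResult(new ResponseContent("Continue", null));
}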
Reference: dotnet-external-identities-api-connector-azure-function-validate (GitHub)
To troubleshoot the unexpected response, try sending Azure AD B2C logs to Application Insights.
References:
Azure B2C - REST API call Error
Add extra claims to an Azure B2C user flow using API connectors and ASP.NET Core (damienbod.com)
how-to-parse-json-in-net-core

How to commit a list of folders and files to Azure DevOps Repository using Rest API?

I am trying to automate the creation of repositories and their initialization using the Azure DevOps REST APIs. I am able to successfully create a repo using the APIs.
How can we commit bulk data, say a list of folders and files that constitute the basic code structure, using the REST API? In the request body of Pushes - Create, contentType can either be base64encoded or rawtext. I have used rawtext to test committing a single file and it worked successfully. Now I have to commit both files and folders together.
Actually, the REST API is commonly used to commit files to a project.
If you want to commit all the files in folders, you should define the path of every file in changes. Shayki Abramczyk's comment is really helpful. Note: Git folders cannot be empty.
For example, these two paths will commit the folder "content":
"item": {
    "path": "/tasks/content/newtasks.md"
}
"item": {
    "path": "/tasks/content/inactivetasks.md"
}
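Put together, a sketch of a complete Pushes - Create request body committing both files in one push (the branch name and file contents here are assumptions):

{
    "refUpdates": [
        {
            "name": "refs/heads/master",
            "oldObjectId": "0000000000000000000000000000000000000000"
        }
    ],
    "commits": [
        {
            "comment": "Initial commit of the basic code structure",
            "changes": [
                {
                    "changeType": "add",
                    "item": { "path": "/tasks/content/newtasks.md" },
                    "newContent": { "content": "# New tasks", "contentType": "rawtext" }
                },
                {
                    "changeType": "add",
                    "item": { "path": "/tasks/content/inactivetasks.md" },
                    "newContent": { "content": "# Inactive tasks", "contentType": "rawtext" }
                }
            ]
        }
    ]
}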
Please refer to this similar issue, where Rakesh has created a C# function to push files automatically.
public class Refs
{
    public string name { get; set; }
    public string objectId { get; set; }
    public string oldObjectId { get; set; }
    public Creator creator { get; set; }   // Creator is defined in the referenced sample
    public string url { get; set; }
}

public class Change
{
    public string changeType { get; set; }
    public Item item { get; set; }         // Item is defined in the referenced sample
    public Newcontent newContent { get; set; }
}

public class CommitToAdd
{
    public string comment { get; set; }
    public ChangeToAdd[] changes { get; set; }
}

public class ChangeToAdd
{
    public string changeType { get; set; }
    public ItemBase item { get; set; }
    public Newcontent newContent { get; set; }
}

public class ItemBase
{
    public string path { get; set; }
}

public class Newcontent
{
    public string content { get; set; }
    public string contentType { get; set; }
}
// ### Implementation
// On your Program.cs file:
public class Program
{
    // _orgnizationName, personalaccessToken and the Constants/ApplicationMessages
    // helpers are defined elsewhere in the referenced sample.
    public async Task AddFileToRepository(string projectName, string repositoryId, Dictionary<string, Task<string>> blobContainer)
    {
        var refs = new List<Refs>() { new Refs { oldObjectId = "0000000000000000000000000000000000000000", name = Constants.DevOps.MASTER_REPO_REF_NAME } };
        var changes = new List<ChangeToAdd>();
        foreach (var blob in blobContainer)
        {
            if (!blob.Key.StartsWith(".git"))
            {
                ChangeToAdd changeJson = new ChangeToAdd()
                {
                    changeType = "add",
                    item = new ItemBase() { path = blob.Key },
                    newContent = new Newcontent()
                    {
                        contentType = "rawtext",
                        content = blob.Value.Result
                    }
                };
                changes.Add(changeJson);
            }
        }
        CommitToAdd commit = new CommitToAdd();
        commit.comment = "commit from code";
        commit.changes = changes.ToArray();
        var content = new List<CommitToAdd>() { commit };
        var request = new
        {
            refUpdates = refs,
            commits = content
        };
        var uri = $"https://dev.azure.com/{_orgnizationName}/{projectName}/_apis/git/repositories/{repositoryId}/pushes{Constants.DevOps.API_VERSION}";
        using (var client = new HttpClient())
        {
            // Basic auth with an empty user name and a PAT as the password.
            var authorizationToken = Convert.ToBase64String(System.Text.Encoding.ASCII.GetBytes(string.Format("{0}:{1}", "", personalaccessToken)));
            client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", authorizationToken);
            var requestJson = JsonConvert.SerializeObject(request);
            var httpContent = new StringContent(requestJson, Encoding.ASCII, "application/json");
            var response = await client.PostAsync(uri, httpContent);
            if (!response.IsSuccessStatusCode)
            {
                throw new Exception(ApplicationMessages.FailedToAddFilesToRepository);
            }
        }
    }
}
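A hypothetical call site (the project name, repository id, and contents here are assumptions for illustration): the dictionary maps repository paths to tasks that resolve to file contents.

var files = new Dictionary<string, Task<string>>
{
    ["/tasks/content/newtasks.md"] = Task.FromResult("# New tasks"),
    ["/tasks/content/inactivetasks.md"] = Task.FromResult("# Inactive tasks")
};
await new Program().AddFileToRepository("MyProject", "my-repo-id", files);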

Azure Translate API call is returning a 401000 error

I used this HTTP-Get request to get a Bearer token for the translation:
https://api.cognitive.microsoft.com/sts/v1.0/issueToken?Subscription-Key=1fo8xxx
Using the returned Bearer I wanted to translate a short text using this API endpoint:
https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&to=de
In the header I put this:
Content-Type: application/json; charset=UTF-8.
And in the body I put this:
[
{"Text":"I would really like to drive your car around the block a few times."}
]
I am using Postman, so in the authorization tab I selected Bearer and inserted this in the field next to it:
Bearer <result from the first API call>
If I send the request I get this result:
{"error":{"code":401000,"message":"The request is not authorized because credentials are missing or invalid."}}
In case someone ever stumbles upon this: after hours of trial and error I found out that you need to pass the Ocp-Apim-Subscription-Region parameter in the header.
Here is an example in python that I was able to run successfully.
import json
import requests

def translate(text, source_language, dest_language):
    if not <Secret Key>:
        return 'Error: the translation service is not configured.'
    headers = {'Ocp-Apim-Subscription-Key': <Secret Key>,
               'Ocp-Apim-Subscription-Region': <region>,
               'Content-type': 'application/json'}
    url = 'https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&from={}&to={}' \
        .format(source_language, dest_language)
    body = [{'text': text}]
    request = requests.post(url, headers=headers, json=body)
    if request.status_code != 200:
        return 'Error: the translation service failed.'
    return json.loads(request.content.decode('utf-8-sig'))
The list of regions and other examples can be found here:
https://learn.microsoft.com/en-us/azure/cognitive-services/translator/reference/v3-0-reference
Don't be fooled by the curl example in the docs, which does not use the region.
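For illustration, the same call with the region header added might look like this (a sketch; the key and region placeholders are yours to fill in):

curl -X POST 'https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&to=de' \
  --header 'Ocp-Apim-Subscription-Key: <Secret Key>' \
  --header 'Ocp-Apim-Subscription-Region: <region>' \
  --header 'Content-Type: application/json' \
  --data '[{"Text":"I would really like to drive your car around the block a few times."}]'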
Your request needs the "Ocp-Apim-Subscription-Key" header. Take a look at the official example:
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
// Install Newtonsoft.Json with NuGet
using Newtonsoft.Json;

/// <summary>
/// The C# classes that represent the JSON returned by the Translator Text API.
/// </summary>
public class TranslationResult
{
    public DetectedLanguage DetectedLanguage { get; set; }
    public TextResult SourceText { get; set; }
    public Translation[] Translations { get; set; }
}

public class DetectedLanguage
{
    public string Language { get; set; }
    public float Score { get; set; }
}

public class TextResult
{
    public string Text { get; set; }
    public string Script { get; set; }
}

public class Translation
{
    public string Text { get; set; }
    public TextResult Transliteration { get; set; }
    public string To { get; set; }
    public Alignment Alignment { get; set; }
    public SentenceLength SentLen { get; set; }
}

public class Alignment
{
    public string Proj { get; set; }
}

public class SentenceLength
{
    public int[] SrcSentLen { get; set; }
    public int[] TransSentLen { get; set; }
}
public class Program
{
    private const string key_var = "TRANSLATOR_TEXT_SUBSCRIPTION_KEY";
    private static readonly string subscriptionKey = Environment.GetEnvironmentVariable(key_var);
    private const string endpoint_var = "TRANSLATOR_TEXT_ENDPOINT";
    private static readonly string endpoint = Environment.GetEnvironmentVariable(endpoint_var);

    static Program()
    {
        if (null == subscriptionKey)
        {
            throw new Exception("Please set/export the environment variable: " + key_var);
        }
        if (null == endpoint)
        {
            throw new Exception("Please set/export the environment variable: " + endpoint_var);
        }
    }

    static async Task Main(string[] args)
    {
        // Route and body for a simple "translate to German" request.
        string route = "/translate?api-version=3.0&to=de";
        object[] body = new object[] { new { Text = "Hello, world!" } };
        string requestBody = JsonConvert.SerializeObject(body);

        using (var client = new HttpClient())
        using (var request = new HttpRequestMessage())
        {
            // Build the request.
            request.Method = HttpMethod.Post;
            request.RequestUri = new Uri(endpoint + route);
            request.Content = new StringContent(requestBody, Encoding.UTF8, "application/json");
            request.Headers.Add("Ocp-Apim-Subscription-Key", subscriptionKey);

            // Send the request and read the response as a string.
            HttpResponseMessage response = await client.SendAsync(request).ConfigureAwait(false);
            string result = await response.Content.ReadAsStringAsync();

            // Deserialize the response using the classes created earlier.
            TranslationResult[] deserializedOutput = JsonConvert.DeserializeObject<TranslationResult[]>(result);
            foreach (TranslationResult o in deserializedOutput)
            {
                // Print the detected input language and confidence score.
                Console.WriteLine("Detected input language: {0}\nConfidence score: {1}\n", o.DetectedLanguage.Language, o.DetectedLanguage.Score);
                // Print each translation.
                foreach (Translation t in o.Translations)
                {
                    Console.WriteLine("Translated to {0}: {1}", t.To, t.Text);
                }
            }
        }
        Console.Read();
    }
}
https://learn.microsoft.com/en-us/azure/cognitive-services/translator/quickstart-translate?pivots=programming-language-csharp

ServiceStack Wrapper for Bloomberg OpenFIGI

I need to make the following call to an open API (https://www.openfigi.com/api)
Curl Example:
curl -v -X POST 'https://api.openfigi.com/v1/mapping' \
--header 'Content-Type: text/json' \
--data '[{"idType":"ID_WERTPAPIER","idValue":"851399","exchCode":"US"}]'
Request Format
The request is passed in via HTTP request body. The only supported HTTP verb is POST. Here is a sample request to the API:
[
{"idType":"ID_ISIN","idValue":"US4592001014"},
{"idType":"ID_WERTPAPIER","idValue":"851399","exchCode":"US"},
{"idType":"ID_BB_UNIQUE","idValue":"EQ0010080100001000","currency": "USD"},
{"idType":"ID_SEDOL","idValue":"2005973","micCode":"EDGX", "currency":"USD"}
]
Using a ServiceStack Request DTO, how do I make a call to the above third-party service endpoint?
This is just an exercise in creating DTOs which match the shape of the JSON you want to output and the JSON you want to receive. To emit the exact JSON property names you can either use [DataMember] on the Request DTO, set JsConfig.EmitCamelCaseNames = true to tell ServiceStack to serialize properties in camelCase, or use JsConfig.With() to create a custom scope.
I've created a Live example of this in Gistlyn which you can use to experiment against Bloomberg's API.
I've used the [DataMember] attribute here as it works independently of your JSON serialization config. You don't need to do this for the Response DTO because ServiceStack's serializers are case-insensitive.
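If you prefer the scoped-config route instead of [DataMember], a sketch (assuming a ServiceStack.Text version where JsConfig.With accepts the emitCamelCaseNames flag):

// Emit camelCase property names only inside this scope,
// leaving the global JsConfig untouched.
using (JsConfig.With(emitCamelCaseNames: true))
{
    string json = mappings.ToJson(); // "mappings" is a hypothetical Mapping[] array
}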
So to send the Request that matches the shape of that JSON you can use:
[DataContract]
public class Mapping
{
    [DataMember(Name="idType")]
    public string IdType { get; set; }
    [DataMember(Name="idValue")]
    public string IdValue { get; set; }
    [DataMember(Name="exchCode")]
    public string ExchCode { get; set; }
    [DataMember(Name="currency")]
    public string Currency { get; set; }
    [DataMember(Name="micCode")]
    public string MicCode { get; set; }
}
You can use ServiceStack's HTTP Utils to easily send requests to 3rd Party APIs, e.g:
var url = "https://api.openfigi.com/v1/mapping";
var json = url.PostJsonToUrl(new[]{
    new Mapping { IdType = "ID_ISIN", IdValue = "US4592001014" },
    new Mapping { IdType = "ID_WERTPAPIER", IdValue = "851399", ExchCode = "US" },
    new Mapping { IdType = "ID_BB_UNIQUE", IdValue = "EQ0010080100001000", Currency = "USD" },
    new Mapping { IdType = "ID_SEDOL", IdValue = "2005973", MicCode = "EDGX", Currency = "USD" },
});
Then to receive the response you need to create DTOs that match the shape of the JSON response, which looks like:
public class BloombergResult
{
    public string Figi { get; set; }
    public string SecurityType { get; set; }
    public string MarketSector { get; set; }
    public string Ticker { get; set; }
    public string Name { get; set; }
    public string UniqueId { get; set; }
    public string ExchCode { get; set; }
    public string ShareClassFIGI { get; set; }
    public string CompositeFIGI { get; set; }
    public string SecurityType2 { get; set; }
    public string SecurityDescription { get; set; }
    public string UniqueIdFutOpt { get; set; }
}

public class BloombergResponse
{
    public List<BloombergResult> Data { get; set; }
    public string Error { get; set; }
}
You can then deserialize the JSON into a collection of BloombergResponse, e.g.:
var response = json.FromJson<BloombergResponse[]>();
Gistlyn will show you a nice human-readable preview of each variable when you click on it in the watch window. Or, if you're running this in a C# unit test, you can quickly see the populated DTOs with:
response.PrintDump();

Azure Storage Multiple Image Upload

How can I upload multiple images to Azure Storage from Windows Phone 8? I found a solution to upload a single image, but I need to upload 3 images.
The solution that I found is below:
http://azure.microsoft.com/documentation/articles/mobile-services-windows-phone-upload-data-blob-storage/
I have a User model with the following properties:
public class User
{
    public long Id { get; set; }
    [JsonProperty(PropertyName = "username")]
    public string UserName { get; set; }
    [JsonProperty(PropertyName = "name")]
    public string Name { get; set; }
    [JsonProperty(PropertyName = "password")]
    public string Password { get; set; }
    [JsonProperty(PropertyName = "phone")]
    public string Phone { get; set; }
    [JsonProperty(PropertyName = "email")]
    public string Email { get; set; }
    [JsonProperty(PropertyName = "gender")]
    public char Gender { get; set; }
    [JsonProperty(PropertyName = "birthdate")]
    public DateTime BirthDate { get; set; }
    [JsonProperty(PropertyName = "location")]
    public string Location { get; set; }
    [JsonProperty(PropertyName = "photo")]
    public System.Windows.Controls.Image Photo { get; set; }

    // For Blob Storage photo upload
    [JsonProperty(PropertyName = "containerName")]
    public string ContainerName { get; set; }
    [JsonProperty(PropertyName = "resourceName")]
    public string ResourceName { get; set; }
    [JsonProperty(PropertyName = "sasQueryString")]
    public string SasQueryString { get; set; }
    [JsonProperty(PropertyName = "imageUri")]
    public string ImageUri { get; set; }
}
When registering with our application, the user can upload a profile picture. The code for uploading the profile picture is as follows:
public RegisterVersion2()
{
    InitializeComponent();
    SetBinding();
    BuildLocalizedApplicationBar();

    cameraCaptureTask = new PhotoChooserTask();
    cameraCaptureTask.ShowCamera = true;
    cameraCaptureTask.Completed += cameraCaptureTask_Completed;
}

void cameraCaptureTask_Completed(object sender, PhotoResult e)
{
    imageStream = e.ChosenPhoto;
    bmp = new System.Windows.Media.Imaging.BitmapImage();
    bmp.SetSource(e.ChosenPhoto);
    profilPicture.Source = bmp;
}

private void photoChoose_Click(object sender, RoutedEventArgs e)
{
    cameraCaptureTask.Show();
}

async void Register(object sender, EventArgs e)
{
    // Some code for getting input from the user (username, email, password, etc.) is omitted here.
    Model.User newUser = new Model.User
    {
        Email = email, UserName = username, Password = password,
        BirthDate = birthday, Name = name + " " + surname, Gender = gender,
        Phone = phone, ContainerName = username, ResourceName = Guid.NewGuid().ToString() + ".jpg"
    };

    string errorString = string.Empty;
    DateTime today = DateTime.Today;

    // Send the item to be inserted. When blob properties are set this
    // generates an SAS in the response.
    if (imageStream != null)
    {
        await User.InsertAsync(newUser);
    }

    // If we have a returned SAS, then upload the blob.
    if (!string.IsNullOrEmpty(newUser.SasQueryString))
    {
        // Get the URI generated that contains the SAS
        // and extract the storage credentials.
        StorageCredentials cred = new StorageCredentials(newUser.SasQueryString);
        var imageUri = new Uri(newUser.ImageUri);

        // Instantiate a blob container based on the info in the returned item.
        CloudBlobContainer container = new CloudBlobContainer(
            new Uri(string.Format("https://{0}/{1}", imageUri.Host, newUser.ContainerName)), cred);

        // Upload the new image as a blob from the stream.
        CloudBlockBlob blobFromSASCredential = container.GetBlockBlobReference(newUser.ResourceName);
        await blobFromSASCredential.UploadFromStreamAsync(imageStream);

        // When you request an SAS at the container level instead of the blob level,
        // you are able to upload multiple streams using the same container credentials.
        imageStream = null;
    }
}
My question is about uploading multiple images. This time I have an Item object, which again has similar properties: Id, ItemName, ItemCost, ItemDescription, etc. However, this time I need to upload 3 different images. The properties below, and the upload code above, handle only one photo. How can I change them?
// For Blob Storage photo upload
[JsonProperty(PropertyName = "containerName")]
public string ContainerName { get; set; }
[JsonProperty(PropertyName = "resourceName")]
public string ResourceName { get; set; }
[JsonProperty(PropertyName = "sasQueryString")]
public string SasQueryString { get; set; }
[JsonProperty(PropertyName = "imageUri")]
public string ImageUri { get; set; }
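The thread doesn't contain an answer, but one possible direction, sketched under the assumption that your service can issue one SAS per photo (or a single container-level SAS, as the comment in the upload code notes), is to move these four properties into their own class and keep a list of them on the Item:

// Hypothetical helper: one entry per image to upload.
public class PhotoUpload
{
    [JsonProperty(PropertyName = "containerName")]
    public string ContainerName { get; set; }
    [JsonProperty(PropertyName = "resourceName")]
    public string ResourceName { get; set; }
    [JsonProperty(PropertyName = "sasQueryString")]
    public string SasQueryString { get; set; }
    [JsonProperty(PropertyName = "imageUri")]
    public string ImageUri { get; set; }
}

// On the Item model:
[JsonProperty(PropertyName = "photos")]
public List<PhotoUpload> Photos { get; set; }

The upload loop would then iterate over the three image streams, resolving a CloudBlockBlob per entry exactly as the single-image code does.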
