I've been trying to convert the functions from this article, https://blog.jeremylikness.com/build-a-serverless-link-shortener-with-analytics-faster-than-finishing-your-latte-8c094bb1df2c, to a Web API equivalent. This is my Web API call:
[HttpPost]
public async Task<IActionResult> PostAsync([FromBody] ShortRequest shortRequest)
{
    _logger.LogInformation($"ShrinkUrl api called with req: {shortRequest}");
    if (!Request.IsHttps && !Request.Host.Host.Contains("localhost"))
        return StatusCode(StatusCodes.Status400BadRequest);
    if (string.IsNullOrEmpty(shortRequest.Input))
        return StatusCode(StatusCodes.Status404NotFound);
    try
    {
        var result = new List<ShortResponse>();
        var analytics = new Analytics();
        // determine whether or not to process analytics tags
        bool tagMediums = analytics.Validate(shortRequest);
        var campaign = string.IsNullOrWhiteSpace(shortRequest.Campaign) ? DefaultCampaign : shortRequest.Campaign;
        var url = shortRequest.Input.Trim();
        var utm = analytics.TagUtm(shortRequest);
        var wt = analytics.TagWt(shortRequest);
        _logger.LogInformation($"URL: {url} Tag UTM? {utm} Tag WebTrends? {wt}");
        // get host for building short URL
        var host = Request.Scheme + "://" + Request.Host;
        await _tableOut.CreateIfNotExistsAsync();
        if (_keyTable == null)
        {
            _logger.LogInformation("Keytable is null, creating initial partition key of 1");
            _keyTable = new NextId
            {
                PartitionKey = "1",
                RowKey = "KEY",
                Id = 1024
            };
            var keyAdd = TableOperation.Insert(_keyTable);
            await _tableOut.ExecuteAsync(keyAdd);
        }
        // strategy for getting a new code
        string getCode() => Utility.Encode(_keyTable.Id++);
        // strategy for logging
        void logFn(string msg) => _logger.LogInformation(msg);
        // strategy to save the key
        async Task saveKeyAsync()
        {
            var operation = TableOperation.Replace(_keyTable);
            await _tableOut.ExecuteAsync(operation);
        }
        // strategy to insert the new short url entry
        async Task saveEntryAsync(TableEntity entry)
        {
            var operation = TableOperation.Insert(entry);
            await _tableOut.ExecuteAsync(operation);
        }
        // strategy to create a new URL and track the dependencies
        async Task saveWithTelemetryAsync(TableEntity entry)
        {
            await TrackDependencyAsync(
                "AzureTableStorageInsert",
                "Insert",
                async () => await saveEntryAsync(entry),
                () => true);
            await TrackDependencyAsync(
                "AzureTableStorageUpdate",
                "Update",
                async () => await saveKeyAsync(),
                () => true);
        }
        if (tagMediums)
        {
            // this will result in multiple entries depending on the number of
            // mediums passed in
            result.AddRange(await analytics.BuildAsync(
                shortRequest,
                Source,
                host,
                getCode,
                saveWithTelemetryAsync,
                logFn,
                HttpUtility.ParseQueryString));
        }
        else
        {
            // no tagging, just pass-through the URL
            result.Add(await Utility.SaveUrlAsync(
                url,
                null,
                host,
                getCode,
                logFn,
                saveWithTelemetryAsync));
        }
        _logger.LogInformation("Done.");
        //return req.CreateResponse(HttpStatusCode.OK, result);
    }
    catch (Exception ex)
    {
        _logger.LogError(ex, "An unexpected error was encountered.");
        //return req.CreateErrorResponse(HttpStatusCode.BadRequest, ex);
    }
    return null;
}
And this is the original Azure Function's signature:
[FunctionName("ShortenUrl")]
public static async Task<HttpResponseMessage> Run(
    [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequestMessage req,
    [Table(Utility.TABLE, "1", Utility.KEY, Take = 1)] NextId keyTable,
    [Table(Utility.TABLE)] CloudTable tableOut,
    TraceWriter log)
The Azure Function's [Table] binding takes care of ensuring that keyTable contains the next Id in the counter, but I cannot figure out how to do the same in the Web API call.
Any ideas?
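Not from the question itself, but for illustration: the [Table(..., Take = 1)] binding is essentially a point query against the table. A minimal sketch of replicating it at the top of PostAsync, assuming _tableOut is the same injected CloudTable used elsewhere in the action (the retrieve call is my addition, not code from the article):

```csharp
// Sketch: load the counter row the way the Azure Function's
// [Table(Utility.TABLE, "1", Utility.KEY, Take = 1)] binding did.
await _tableOut.CreateIfNotExistsAsync();
var retrieve = TableOperation.Retrieve<NextId>("1", "KEY");
var retrieved = await _tableOut.ExecuteAsync(retrieve);
_keyTable = retrieved.Result as NextId; // null when the row doesn't exist yet
```

With this in place, the existing `if (_keyTable == null)` branch seeds the counter exactly as before, and saveKeyAsync() persists the increment that the function runtime previously handled for you.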
I am trying to develop an MS Teams bot that sends content to students module-wise (unit-wise). I have created 3 classes:
methods.js = Contains all the methods for sending texts, attachments etc.
teamsBot.js = Captures a specific keyword from the users and based on that executes a function.
test.js = Connects the bot with Airtable and sends the content accordingly
I am facing a "Cannot perform 'get' on a proxy that has been revoked" error. I figured it might be because of the context. I am passing context as a parameter, which I feel might not be the correct way. How can I achieve the result and retain the context between files?
teamsBot.js
const test = require("./test");

class TeamsBot extends TeamsActivityHandler {
  constructor() {
    super();
    // record the likeCount
    this.likeCountObj = { likeCount: 0 };
    this.onMessage(async (context, next) => {
      console.log("Running with Message Activity.");
      let txt = context.activity.text;
      // const removedMentionText = TurnContext.removeRecipientMention(context.activity);
      // if (removedMentionText) {
      //   // Remove the line break
      //   txt = removedMentionText.toLowerCase().replace(/\n|\r/g, "").trim();
      // }
      // Trigger command by IM text
      switch (txt) {
        case "Begin": {
          await test.sendModuleContent(context);
          break;
        }
      }
      // By calling next() you ensure that the next BotHandler is run.
      await next();
    });
    // Listen to MembersAdded event, view https://learn.microsoft.com/en-us/microsoftteams/platform/resources/bot-v3/bots-notifications for more events
    this.onMembersAdded(async (context, next) => {
      const membersAdded = context.activity.membersAdded;
      for (let cnt = 0; cnt < membersAdded.length; cnt++) {
        if (membersAdded[cnt].id) {
          const card = cardTools.AdaptiveCards.declareWithoutData(rawWelcomeCard).render();
          await context.sendActivity({ attachments: [CardFactory.adaptiveCard(card)] });
          break;
        }
      }
      await next();
    });
  }
}
test.js
const ms = require('./methods');

async function sendModuleContent(context) {
  const data = module_text; // fetched from Airtable
  await ms.sendText(context, data);
}

module.exports = { sendModuleContent };
methods.js
const { TeamsActivityHandler, ActivityHandler, MessageFactory } = require('botbuilder');

async function sendText(context, text) {
  console.log("Sending text");
  await context.sendActivity(text);
}

module.exports = { sendText };
Refer to this: TypeError: Cannot perform 'get' on a proxy that has been revoked
Make the following changes to test.js:
const { TurnContext } = require("botbuilder");

var conversationReferences = {};
var adapter;

async function sendModuleContent(context) {
  const data = module_text; // fetched from Airtable
  const currentUser = context.activity.from.id;
  conversationReferences[currentUser] = TurnContext.getConversationReference(context.activity);
  adapter = context.adapter;
  await adapter.continueConversation(conversationReferences[currentUser], async (turnContext) => {
    await turnContext.sendActivity(data);
  });
}

module.exports = { sendModuleContent };
The following code is a serial port event handler inside a WinForms form (so it is obviously running on its own thread). The slow line of code, when run elsewhere (in Node.js), takes about 10 seconds. In this code the same line takes 45 seconds, sometimes 60 seconds.
private async void SerialPort_DataReceived(object sender, SerialDataReceivedEventArgs e)
{
    SerialPort sp = (SerialPort)sender;
    string indata = sp.ReadLine().Replace("?", "").Replace(" ", "");
    if (indata != null)
    {
        var weightMatch = Regex.Match(indata, @"(\d+\.\d+)kg$");
        var weight = weightMatch.Groups[1].Value;
        var message = barcode + " Weight:" + weight;
        this.Invoke(new Action(() => { scanData.Add(new ScanDataItem(scanData.Count + 1, message, "")); }));
        this.Invoke(new Action(() => { mainList.SelectedIndex = mainList.Items.Count - 1; }));
        string jsonData = "";
        // ***************************************************************
        // The following line is about 30 seconds slower than it should be - inner function calls a web service
        // ***************************************************************
        var jsonString = await TrayWeighScan.doOrder(barcode, Decimal.Parse(weight));
        try
        {
            var jsonDoc = JsonDocument.Parse(jsonString);
            jsonData = JsonSerializer.Serialize(jsonDoc, new JsonSerializerOptions { WriteIndented = true });
        }
        catch (Exception)
        {
            jsonData = jsonString;
        }
        this.Invoke(new Action(() => { scanData.Last().responseData = jsonData; }));
        this.Invoke(new Action(() => { responseTextBox.Text = jsonData; }));
        this.Invoke(new Action(() => { statusTextBox.Text = "Ready to Scan"; }));
        this.Invoke(new Action(() => { busy = false; }));
    }
}
public class TrayWeighScan
{
    //private static readonly HttpClient httpClient = new();
    public static async Task<string> doOrder(string orderNumber, decimal weight)
    {
        HttpClient httpClient = new();
        var url = "http://r2hserver/logistics/weighscan?orderNumber=" + orderNumber + "&weight=" + weight.ToString("F2");
        HttpResponseMessage response = await httpClient.GetAsync(url);
        string responseText = await response.Content.ReadAsStringAsync();
        return responseText;
    }
}
If I run the "slow" line directly from a form button, 11 seconds.
private async void btnUrlTest_Click(object sender, EventArgs e)
{
    string orderNumber = "ORD100302338";
    decimal weight = 1.606m;
    // following line - 11 seconds.
    var a = await TrayWeighScan.doOrder(orderNumber, weight);
    var b = "";
}
Hans Passant has pointed out that most likely the cause isn't in the code structure at all, and most likely he's right. To test the hypothesis, descend into TrayWeighScan.doOrder and change the async HttpClient call into a sync call. If the problem goes away, my answer is good. If the problem remains, look elsewhere, say the network, the client machine's TCP stack, AV, or some such.
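One hedged way to run that experiment (this blocking wrapper is my sketch, not code from the question):

```csharp
// Diagnostic only: a synchronous variant of doOrder that blocks on the
// HTTP call instead of awaiting it. If this is fast, the async plumbing
// is the suspect; if it is still slow, look at the network or machine.
public static string DoOrderSync(string orderNumber, decimal weight)
{
    using var httpClient = new HttpClient();
    var url = "http://r2hserver/logistics/weighscan?orderNumber=" + orderNumber
            + "&weight=" + weight.ToString("F2");
    return httpClient.GetStringAsync(url).GetAwaiter().GetResult();
}
```

Blocking with GetAwaiter().GetResult() is fine for a throwaway diagnostic like this, but it should not stay in production code.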
I've not seen this exact thing before, but I've learned that launching async operations in a WinForms app doesn't behave as expected when launched in the naive way. The following should work much better:
Task.Run(async () =>
{
    var jsonString = await TrayWeighScan.doOrder(barcode, Decimal.Parse(weight));
    string jsonData;
    try
    {
        var jsonDoc = JsonDocument.Parse(jsonString);
        jsonData = JsonSerializer.Serialize(jsonDoc, new JsonSerializerOptions { WriteIndented = true });
    }
    catch (Exception)
    {
        jsonData = jsonString;
    }
    this.Invoke(new Action(() => { scanData.Last().responseData = jsonData; }));
    this.Invoke(new Action(() => { responseTextBox.Text = jsonData; }));
    this.Invoke(new Action(() => { statusTextBox.Text = "Ready to Scan"; }));
    this.Invoke(new Action(() => { busy = false; }));
});
Wrapping WinForms-originating async work in Task.Run seems to completely fix the problem.
A lot of material suggests .ConfigureAwait() as the solution, but it's overly complex. The simplest and best solution is to only spawn async/await from thread-pool threads, which means starting an async/await chain from Task.Run.
Hi, I am quite new to docxtemplater, but I absolutely love how it works. Right now I seem to be able to generate a new docx document as follows:
const functions = require('firebase-functions');
const admin = require('firebase-admin');
const {Storage} = require('@google-cloud/storage');
var PizZip = require('pizzip');
var Docxtemplater = require('docxtemplater');
admin.initializeApp();
const BUCKET = 'gs://myapp.appspot.com';
exports.test2 = functions.https.onCall((data, context) => {
// The error object contains additional information when logged with JSON.stringify (it contains a properties object containing all suberrors).
function replaceErrors(key, value) {
if (value instanceof Error) {
return Object.getOwnPropertyNames(value).reduce(function(error, key) {
error[key] = value[key];
return error;
}, {});
}
return value;
}
function errorHandler(error) {
console.log(JSON.stringify({error: error}, replaceErrors));
if (error.properties && error.properties.errors instanceof Array) {
const errorMessages = error.properties.errors.map(function (error) {
return error.properties.explanation;
}).join("\n");
console.log('errorMessages', errorMessages);
// errorMessages is a humanly readable message looking like this :
// 'The tag beginning with "foobar" is unopened'
}
throw error;
}
let file_name = 'example.docx'; // this is the file saved in my firebase storage
const storage = new Storage();
const File = storage.bucket(BUCKET).file(file_name);
const readable = File.createReadStream();
var buffers = [];
readable.on('data', (buffer) => {
buffers.push(buffer);
});
readable.on('end', () => {
var buffer = Buffer.concat(buffers);
var zip = new PizZip(buffer);
var doc;
try {
doc = new Docxtemplater(zip);
doc.setData({
first_name: 'Fred',
last_name: 'Flinstone',
phone: '0652455478',
description: 'Web app'
});
try {
doc.render();
doc.pipe(remoteFile2.createReadStream());
}
catch (error) {
errorHandler(error);
}
} catch(error) {
errorHandler(error);
}
});
});
My issue is that I keep getting an error that doc.pipe is not a function. I am quite new to Node.js, but is there a way to have the newly generated doc after doc.render() saved directly to Firebase Storage?
Taking a look at the type of doc, we find that it is a Docxtemplater object and that doc.pipe is not a function of that class. To get the file out of Docxtemplater, we need to use doc.getZip() to return the file (this will be either a JSZip v2 or PizZip instance based on what we passed to the constructor). Now that we have the zip object, we need to generate the binary data of the zip, which is done using generate({ type: 'nodebuffer' }) (to get a Node.js Buffer containing the data). Unfortunately, because the docxtemplater library doesn't support JSZip v3+, we can't make use of the generateNodeStream() method to get a stream to use with pipe().
With this buffer, we can either reupload it to Cloud Storage or send it back to the client that is calling the function.
The first option is relatively simple to implement:
import { v4 as uuidv4 } from 'uuid';
/* ... */
const contentBuffer = doc.getZip()
.generate({type: 'nodebuffer'});
const targetName = "compiled.docx";
const targetStorageRef = admin.storage().bucket()
.file(targetName);
await targetStorageRef.save(contentBuffer);
// send back the bucket-name pair to the caller
return { bucket: targetStorageRef.bucket.name, name: targetName };
However, sending the file itself back to the client isn't as easy, because it involves switching to an HTTP Event Function (functions.https.onRequest): a Callable Cloud Function can only return JSON-compatible data. Here we have a middleware function that takes a callable's handler function but supports returning binary data to the client.
import * as functions from "firebase-functions";
import * as admin from "firebase-admin";
import corsInit from "cors";

admin.initializeApp();

const cors = corsInit({ origin: true }); // TODO: Tighten

function callableRequest(handler) {
  if (!handler) {
    throw new TypeError("handler is required");
  }
  return (req, res) => {
    cors(req, res, (corsErr) => {
      if (corsErr) {
        console.error("Request rejected by CORS", corsErr);
        res.status(412).json({ error: "cors", message: "origin rejected" });
        return;
      }
      // for validateFirebaseIdToken, see https://github.com/firebase/functions-samples/blob/main/authorized-https-endpoint/functions/index.js
      validateFirebaseIdToken(req, res, async () => { // validateFirebaseIdToken won't pass errors to `next()`
        try {
          const data = req.body;
          const context = {
            auth: req.user ? { token: req.user, uid: req.user.uid } : null,
            instanceIdToken: req.get("Firebase-Instance-ID-Token"), // this is used with FCM
            rawRequest: req
          };
          let result: any = await handler(data, context);
          if (result && typeof result === "object" && "buffer" in result) {
            res.writeHead(200, [
              ["Content-Type", result.contentType],
              ["Content-Disposition", "attachment; filename=" + result.filename]
            ]);
            res.end(result.buffer);
          } else {
            result = functions.https.encode(result);
            res.status(200).send({ result });
          }
        } catch (err) {
          if (!(err instanceof functions.https.HttpsError)) {
            // This doesn't count as an 'explicit' error.
            console.error("Unhandled error", err);
            err = new functions.https.HttpsError("internal", "INTERNAL");
          }
          const { status } = err.httpErrorCode;
          const body = { error: err.toJSON() };
          res.status(status).send(body);
        }
      });
    });
  };
}
functions.https.onRequest(callableRequest(async (data, context) => {
/* ... */
const contentBuffer = doc.getZip()
.generate({type: "nodebuffer"});
const targetName = "compiled.docx";
return {
buffer: contentBuffer,
contentType: "application/vnd.openxmlformats-officedocument.wordprocessingml.document",
filename: targetName
}
}));
In your current code, there are a number of odd segments where you have nested try-catch blocks and variables in different scopes. To help combat this, we can make use of File#download(), which returns a Promise that resolves with the file contents in a Node.js Buffer, and File#save(), which returns a Promise that resolves when the given Buffer is uploaded.
Rolling this together for reuploading to Cloud Storage gives:
// This code is based off the examples provided for docxtemplater
// Copyright (c) Edgar HIPP [Dual License: MIT/GPLv3]
import * as functions from "firebase-functions";
import * as admin from "firebase-admin";
import PizZip from "pizzip";
import Docxtemplater from "docxtemplater";
admin.initializeApp();
// The error object contains additional information when logged with JSON.stringify (it contains a properties object containing all suberrors).
function replaceErrors(key, value) {
if (value instanceof Error) {
return Object.getOwnPropertyNames(value).reduce(
function (error, key) {
error[key] = value[key];
return error;
},
{}
);
}
return value;
}
function errorHandler(error) {
console.log(JSON.stringify({ error: error }, replaceErrors));
if (error.properties && error.properties.errors instanceof Array) {
const errorMessages = error.properties.errors
.map(function (error) {
return error.properties.explanation;
})
.join("\n");
console.log("errorMessages", errorMessages);
// errorMessages is a humanly readable message looking like this :
// 'The tag beginning with "foobar" is unopened'
}
throw error;
}
exports.test2 = functions.https.onCall(async (data, context) => {
const file_name = "example.docx"; // this is the file saved in my firebase storage
const templateRef = await admin.storage().bucket()
.file(file_name);
const template_content = (await templateRef.download())[0];
const zip = new PizZip(template_content);
let doc;
try {
doc = new Docxtemplater(zip);
} catch (error) {
// Catch compilation errors (errors caused by the compilation of the template : misplaced tags)
errorHandler(error);
}
doc.setData({
first_name: "Fred",
last_name: "Flinstone",
phone: "0652455478",
description: "Web app",
});
try {
doc.render();
} catch (error) {
errorHandler(error);
}
const contentBuffer = doc.getZip().generate({ type: "nodebuffer" });
// do something with contentBuffer
// e.g. reupload to Cloud Storage
const targetName = "compiled.docx";
const targetStorageRef = admin.storage().bucket().file(targetName);
await targetStorageRef.save(contentBuffer);
return { bucket: targetStorageRef.bucket.name, name: targetName };
});
In addition to returning a bucket-name pair to the caller, you may also consider returning an access URL. This could be a signed URL that can last for up to 7 days; a download token URL (like getDownloadURL(), process described here) that lasts until the token is revoked; a Google Storage URI (gs://BUCKET_NAME/FILE_NAME), which is not an access URL but can be passed to a client SDK that can access it if the client passes the storage security rules; or the file's public URL (after the file has been marked public).
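As a sketch of the signed-URL option, using the targetStorageRef File object from the code above (the option values are the standard Cloud Storage Node.js client parameters; adjust to taste):

```javascript
// Generate a read-only V4 signed URL valid for 7 days (the V4 maximum).
const [signedUrl] = await targetStorageRef.getSignedUrl({
  version: "v4",
  action: "read",
  expires: Date.now() + 7 * 24 * 60 * 60 * 1000,
});
// Return it alongside the bucket-name pair so the client can download directly.
return { bucket: targetStorageRef.bucket.name, name: targetName, url: signedUrl };
```

Note that getSignedUrl requires the function's service account to have signing permissions (iam.serviceAccounts.signBlob) when no private key is available locally.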
Based on the above code, you should be able to merge in returning the file directly yourself.
I am using the code below to set up MassTransit to use Azure Service Bus:
private static ServiceProvider SetupServiceCollection()
{
    var connectionString = ConfigurationManager.AppSettings["AzureServiceBusConnectionString"];
    var services = new ServiceCollection()
        .AddMassTransit(x =>
        {
            x.UsingAzureServiceBus((context, cfg) =>
            {
                cfg.Host(connectionString);
                cfg.ConfigureEndpoints(context);
                cfg.Message<MyMessage>(m =>
                {
                    m.SetEntityName("my-topic");
                });
            });
        });
    return services.BuildServiceProvider();
}
I use the following code to send a message
var message = new MyMessage()
{
    MessageIdentifier = Guid.NewGuid().ToString(),
};
await _busControl.Publish(message);
I want my message to be sent to my-topic only
However, MassTransit is creating additional topics whose names seem to be generated from the message type names. How do I stop this entirely?
I am setting up the receiver as below
public static void SetupMassTransit(this ServiceCollection services, string connectionString)
{
    services.AddMassTransit(x =>
    {
        x.UsingAzureServiceBus((context, cfg) =>
        {
            cfg.Host(connectionString);
            cfg.ConfigureEndpoints(context);
            cfg.SubscriptionEndpoint<MyMessage>("low", e =>
            {
                e.Consumer<MyMessageConsumer>(context);
                e.PrefetchCount = 100;
                e.MaxConcurrentCalls = 100;
                e.LockDuration = TimeSpan.FromMinutes(5);
                e.MaxAutoRenewDuration = TimeSpan.FromMinutes(30);
                e.UseMessageRetry(r => r.Intervals(100, 200, 500, 800, 1000));
                e.UseInMemoryOutbox();
                e.ConfigureConsumeTopology = false;
            });
        });
    });
}
I can see that the message is being sent correctly, as it's shown inside the subscription in Service Bus Explorer. However, the receiver is not picking it up. There are no errors or anything else to go on, which is really frustrating.
Paul
You're calling ConfigureEndpoints, which will by default create receive endpoints for the consumers, sagas, etc. that have been added. However, your code sample doesn't show any .AddConsumer methods. If you don't have any consumers, don't call ConfigureEndpoints.
For your receiver, you should use:
public static void SetupMassTransit(this ServiceCollection services, string connectionString)
{
    services.AddMassTransit(x =>
    {
        x.AddConsumer<MyMessageConsumer>();
        x.UsingAzureServiceBus((context, cfg) =>
        {
            cfg.Host(connectionString);
            cfg.SubscriptionEndpoint("your-topic-name", "your-subscription-name", e =>
            {
                e.PrefetchCount = 100;
                e.MaxConcurrentCalls = 100;
                e.LockDuration = TimeSpan.FromMinutes(5);
                e.MaxAutoRenewDuration = TimeSpan.FromMinutes(30);
                e.UseMessageRetry(r => r.Intervals(100, 200, 500, 800, 1000));
                e.UseInMemoryOutbox();
                e.ConfigureConsumer<MyMessageConsumer>(context);
            });
        });
    });
}
For your producer, you can simply use:
private static ServiceProvider SetupServiceCollection()
{
    var connectionString = ConfigurationManager.AppSettings["AzureServiceBusConnectionString"];
    var services = new ServiceCollection()
        .AddMassTransit(x =>
        {
            x.UsingAzureServiceBus((context, cfg) =>
            {
                cfg.Host(connectionString);
            });
        });
    return services.BuildServiceProvider();
}
Then, to publish, using your IServiceProvider that was created above:
var bus = serviceProvider.GetRequiredService<IBus>();
var endpoint = await bus.GetSendEndpoint(new Uri("topic:your-topic-name"));
await endpoint.Send(new MyMessage());
That should be the absolute minimum that you need.
I have an API in NestJS which is not sending data on the first hit; on hitting it again it sends the desired data. I am guessing the API returns before the internal processing is done.
How do I stop this? Is sleep a good option for this, or is there another way to do it?
@Post("load")
@UseGuards(AuthGuard("jwt"))
async load(@Req() body: any)
{
    const organizationId = body.user.organizationId;
    const userId = body.user.userId;
    if ("brandIds" in body.body)
    {
        await this.userService.onBoardUser(userId);
    }
    var settings = await this.settingsService.fetchLayout(organizationId, "home");
    settings.forEach(async (element) =>
    {
        var parsedElement = JSON.parse(JSON.stringify(element));
        var innerContent = await this.fetchContent(parsedElement.method, organizationId, userId);
        var template = parsedElement.content[0];
        let formattedItem = {};
        innerContent.forEach((item) =>
        {
            try
            {
                formattedItem = template;
                Object.keys(template).forEach((key) =>
                {
                    if (template[key] != "" && key != "type")
                    {
                        formattedItem[key] = eval(template[key]);
                    }
                });
                parsedElement.content.push(formattedItem);
                formattedItem = null;
            }
            catch (err)
            {
            }
        });
        this.response.data.push(parsedElement);
        innerContent = null;
        template = null;
        formattedItem = null;
        parsedElement = null;
    });
    return (this.response);
}
Looks like your main problem here is that you're using async/await inside forEach, which doesn't work the way you expect.
Use it like this:
for (const setting of settings) {
    // ...your async code here
}
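To see why the forEach version misbehaves, here is a minimal, runnable sketch (plain Node.js, no NestJS; the fetchContent stub below stands in for the real async work and is my invention):

```javascript
// Stub for the real async work (DB/service call).
async function fetchContent(id) {
  return `content-${id}`;
}

// forEach does NOT wait for async callbacks: the function returns
// before any of the awaited pushes have happened.
async function buildWithForEach(ids) {
  const data = [];
  ids.forEach(async (id) => {
    data.push(await fetchContent(id)); // runs after we've already returned
  });
  return data.length; // always 0 here
}

// for...of inside an async function awaits each iteration in turn.
async function buildWithForOf(ids) {
  const data = [];
  for (const id of ids) {
    data.push(await fetchContent(id));
  }
  return data.length; // ids.length
}

(async () => {
  console.log(await buildWithForEach([1, 2, 3])); // 0
  console.log(await buildWithForOf([1, 2, 3])); // 3
})();
```

This is exactly the race in the handler above: settings.forEach(async ...) returns immediately, so this.response is sent before any parsedElement has been pushed; by the second request the pushes from the first call have landed, which is why the second hit "works".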