Public database between Chrome extensions

I want to create a public database so other extensions can access it: create tables, add entities, remove entities, whatever they want.
As far as I can tell, the only way to do this is message passing between multiple extensions, but that solution is problematic for me because I would need the "management" permission just to learn the other extensions' IDs.
Is there an option for sending messages to all extensions without knowing their IDs? Or is there another way of implementing a public DB without pub-sub synchronization?
btw - I can use localStorage or WebSQL.

You could create a "hub" extension that registers the other extensions and acts as a messaging hub.
All of the extensions that want to communicate with the public DB would then do so via the hub. On initialization from its background page, each extension registers with the hub its ID and the events it wants to subscribe to.
Register action from each extension:
// "hub" stands for the hub extension's real ID
chrome.extension.sendRequest("hub", {
  action: "register",
  key: "somePrivKey",
  id: "extId",
  subscribeTo: ["createFoo", "deleteFoo"]
});
Then, each action performed would be communicated to the hub:
chrome.tabs.sendRequest("hub", {
action: "createFoo",
key: "somePrivKey",
context: 1
});
The hub extension would then listen to events. For "register" actions the hub would register the extension as an endpoint for the "subscribeTo" actions. For other actions ("createFoo" or "deleteFoo") the hub would iterate over the list of registered extensions for the event and perform a sendRequest that sends the "action" name and an optional "context".
A shared "key" could be known between the hub and all the extensions that want to communicate to prevent the hub from listening to events not from a known source.
Hub extension background.js:
var actionToExtMap = {};

chrome.extension.onRequestExternal.addListener(function(request, sender, sendResponse) {
  if (request.key === "somePrivKey") {
    if (request.action === "register") {
      for (var i = 0; i < request.subscribeTo.length; i++) {
        var action = request.subscribeTo[i];
        var extensionsForAction = actionToExtMap[action] || [];
        extensionsForAction.push(request.id);
        actionToExtMap[action] = extensionsForAction;
      }
    } else if (request.action) {
      var extensionsToSendAction = actionToExtMap[request.action] || [];
      for (var i = 0; i < extensionsToSendAction.length; i++) {
        chrome.extension.sendRequest(extensionsToSendAction[i], {
          action: request.action,
          context: request.context // pass an optional context object
        });
      }
    }
  }
});
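For completeness, each subscriber extension also needs a listener for the messages the hub fans out. A minimal sketch of the subscriber side, assuming the same (now-deprecated) request APIs as above and a purely hypothetical handler that mirrors changes into the extension's own localStorage:
Subscriber extension background.js:
chrome.extension.onRequestExternal.addListener(function(request, sender, sendResponse) {
  // Only accept messages forwarded by the hub ("hub" is the hub extension's ID)
  if (sender.id !== "hub") return;
  if (request.action === "createFoo") {
    // e.g. mirror the new entity into this extension's local copy
    localStorage.setItem("foo:" + request.context, JSON.stringify(request.context));
  } else if (request.action === "deleteFoo") {
    localStorage.removeItem("foo:" + request.context);
  }
});
Note that in current Chrome versions the same pattern would use chrome.runtime.sendMessage and chrome.runtime.onMessageExternal, which replaced the sendRequest family.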

Related

Application Insights telemetry correlation using log4net

I'm looking to get proper correlation in our distributed event logging. For our web applications this seems to be automatic. Example of correlated logs from one of our App Services APIs:
However, for our other (non-ASP, non-WebApp) services, where we use log4net and the App Insights appender, our logs are not correlated. I tried following the instructions here: https://learn.microsoft.com/en-us/azure/azure-monitor/app/correlation
Even after adding unique operation_Id attributes to each operation, we're not seeing log correlation (I also tried "Operation Id"). Example of a non-correlated log entry:
Any help on how to achieve this using log4net would be appreciated.
Cheers!
Across services, correlation IDs are mostly propagated through headers. When Application Insights is enabled for a web application, it reads the IDs/context from the incoming headers and then updates outgoing headers with the appropriate IDs/context. Within a service, an operation is tracked with an Activity object, and every telemetry item emitted is associated with that Activity, thus sharing the necessary correlation IDs.
In the case of Service Bus / Event Hubs communication, propagation is also supported in recent versions (IDs/context propagate as metadata).
If a service is not web-based and the automatic AI correlation propagation is not working, you may need to manually read the incoming ID information from whatever metadata exists, restore/initiate an Activity, and start an AI operation with that Activity. Any telemetry item generated in the scope of that Activity then gets the proper IDs and becomes part of the overarching trace. Consequently, if telemetry is generated from a log4net trace executed in the scope of an AI operation context, that telemetry should get the right IDs.
Code sample to access correlation from headers:
public class ApplicationInsightsMiddleware : OwinMiddleware
{
    // You may create a new TelemetryConfiguration instance, reuse one you already have,
    // or fetch the instance created by the Application Insights SDK.
    private readonly TelemetryConfiguration telemetryConfiguration = TelemetryConfiguration.CreateDefault();
    private readonly TelemetryClient telemetryClient = new TelemetryClient(telemetryConfiguration);

    public ApplicationInsightsMiddleware(OwinMiddleware next) : base(next) {}

    public override async Task Invoke(IOwinContext context)
    {
        // Create and start the RequestTelemetry.
        var requestTelemetry = new RequestTelemetry
        {
            Name = $"{context.Request.Method} {context.Request.Uri.GetLeftPart(UriPartial.Path)}"
        };

        // If there is a Request-Id received from the upstream service, set the telemetry context accordingly.
        if (context.Request.Headers.ContainsKey("Request-Id"))
        {
            var requestId = context.Request.Headers.Get("Request-Id");
            // Get the operation ID from the Request-Id (if you follow the HTTP Protocol for Correlation).
            requestTelemetry.Context.Operation.Id = GetOperationId(requestId);
            requestTelemetry.Context.Operation.ParentId = requestId;
        }

        // StartOperation is a helper method that allows correlation of
        // current operations with nested operations/telemetry
        // and initializes start time and duration on telemetry items.
        var operation = telemetryClient.StartOperation(requestTelemetry);

        // Process the request.
        try
        {
            await Next.Invoke(context);
        }
        catch (Exception e)
        {
            requestTelemetry.Success = false;
            telemetryClient.TrackException(e);
            throw;
        }
        finally
        {
            // Update status code and success as appropriate.
            if (context.Response != null)
            {
                requestTelemetry.ResponseCode = context.Response.StatusCode.ToString();
                requestTelemetry.Success = context.Response.StatusCode >= 200 && context.Response.StatusCode <= 299;
            }
            else
            {
                requestTelemetry.Success = false;
            }

            // Now it's time to stop the operation (and track telemetry).
            telemetryClient.StopOperation(operation);
        }
    }

    public static string GetOperationId(string id)
    {
        // Returns the root ID from the '|' to the first '.' if any.
        int rootEnd = id.IndexOf('.');
        if (rootEnd < 0)
            rootEnd = id.Length;

        int rootStart = id[0] == '|' ? 1 : 0;
        return id.Substring(rootStart, rootEnd - rootStart);
    }
}
Code sample for manual correlated operation tracking in isolation:
// telemetryClient is assumed to be a TelemetryClient field on the containing class.
async Task BackgroundTask(string taskName)
{
    var operation = telemetryClient.StartOperation<DependencyTelemetry>(taskName);
    operation.Telemetry.Type = "Background";
    try
    {
        int progress = 0;
        while (progress < 100)
        {
            // Process the task.
            telemetryClient.TrackTrace($"done {progress++}%");
        }
        // Update status code and success as appropriate.
    }
    catch (Exception e)
    {
        telemetryClient.TrackException(e);
        // Update status code and success as appropriate.
        throw;
    }
    finally
    {
        telemetryClient.StopOperation(operation);
    }
}
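Applied to the log4net case in the question, the same pattern looks roughly like this. This is only a sketch: the Worker class and the incoming-ID plumbing are hypothetical, it reuses the GetOperationId helper from the middleware sample above, and it assumes the Application Insights log4net appender is configured (with the usual log4net and Microsoft.ApplicationInsights usings) so log4net entries become TraceTelemetry and pick up the ambient operation context:
public class Worker
{
    private static readonly ILog Log = LogManager.GetLogger(typeof(Worker));
    private readonly TelemetryClient telemetryClient = new TelemetryClient(TelemetryConfiguration.CreateDefault());

    public void ProcessMessage(string incomingRequestId)
    {
        var request = new RequestTelemetry { Name = "ProcessMessage" };

        // Restore correlation from whatever metadata the caller passed along (if any).
        if (!string.IsNullOrEmpty(incomingRequestId))
        {
            request.Context.Operation.Id = ApplicationInsightsMiddleware.GetOperationId(incomingRequestId);
            request.Context.Operation.ParentId = incomingRequestId;
        }

        // Everything tracked while this operation is active, including the appender's
        // telemetry for the Log.Info call below, inherits these correlation IDs.
        using (telemetryClient.StartOperation(request))
        {
            Log.Info("processing message");
        }
    }
}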
Please note that the most recent versions of the Application Insights SDK are switching to the W3C correlation standard, so the header names and expected format will differ, as per the W3C specification.
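For reference, with W3C Trace Context the incoming header is traceparent rather than Request-Id, and the operation/parent IDs come from its fixed fields. A minimal parsing sketch (the helper name is an assumption; the format is version-traceid-parentid-flags):
// traceparent: "00-{trace-id}-{parent-span-id}-{trace-flags}"
// e.g. 00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01
public static (string OperationId, string ParentId) ParseTraceparent(string traceparent)
{
    var parts = traceparent.Split('-');
    if (parts.Length < 4)
        return (null, null);

    // The trace id becomes Context.Operation.Id; the parent span id becomes Context.Operation.ParentId.
    return (parts[1], parts[2]);
}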

Application Insights correlation through Event Grid

I have an application composed of two ASP.NET Core apps, app A and app B.
App A makes HTTP calls to App B, and Application Insights automatically correlates this and shows them as a single request. Great!
However, I'm now moving to a more event-based system design, where app A publishes an event to an Azure Event Grid, and app B is set up with a webhook to listen to that event.
Having made that change, the telemetry correlation is broken and it no longer shows up as a single operation.
I have read this documentation: https://learn.microsoft.com/en-us/azure/azure-monitor/app/correlation which explains the theory around correlation headers - but how can I apply this to the Event Grid and get it to forward the correlation headers on to the subscribing endpoints?
The header pass-through idea for a custom topic in AEG was recently (Oct. 10th) dropped from the roadmap (marked as unplanned).
However, the headers can be passed via the AEG model to the subscribers in the data object of the event message. This mediation can be done, for example, using Policies in Azure API Management.
UPDATE:
The following documents can help for manual instrumentation of the webhook endpoint handler (subscriber side) using a custom tracking operations:
Track custom operations with Application Insights .Net SDK
Application Insights API for custom events and metrics
Add two correlation properties to all your events:
public string OperationId { get; set; }
public string OperationParentId { get; set; }
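For example, a published event payload could look like this (the class and the FooId property are illustrative assumptions; only the two correlation properties are prescribed by the approach above):
public class FooCreatedEventData
{
    public string FooId { get; set; }

    // Correlation properties filled by the publisher and read by the consumer.
    public string OperationId { get; set; }
    public string OperationParentId { get; set; }
}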
Publisher side: create a Dependency operation and fill in these properties.
private Microsoft.ApplicationInsights.TelemetryClient _telemetryClient;

// TEventData is assumed to expose the OperationId / OperationParentId properties above.
async Task Publish<TEventData>(TEventData data)
{
    var @event = new EventGridEvent
    {
        Id = Guid.NewGuid().ToString(),
        EventTime = DateTime.UtcNow,
        EventType = typeof(TEventData).FullName,
        Data = data
    };

    string operationName = "Publish " + @event.EventType;

    // StartOperation is a helper method that initializes the telemetry item
    // and allows correlation of this operation with its parent and children.
    var operation =
        _telemetryClient.StartOperation<DependencyTelemetry>(operationName);
    operation.Telemetry.Type = "EventGrid";
    operation.Telemetry.Data = operationName;

    // Ideally, the correlation properties should go in the request headers, but
    // with the current implementation of EventGrid we have no other way
    // than to store them in the event Data.
    data.OperationId = operation.Telemetry.Context.Operation.Id;
    data.OperationParentId = operation.Telemetry.Id;

    try
    {
        AzureOperationResponse result = await _client
            .PublishEventsWithHttpMessagesAsync(_topic, new[] { @event });
        result.Response.EnsureSuccessStatusCode();
        operation.Telemetry.Success = true;
    }
    catch (Exception ex)
    {
        operation.Telemetry.Success = false;
        _telemetryClient.TrackException(ex);
        throw;
    }
    finally
    {
        _telemetryClient.StopOperation(operation);
    }
}
Consumer side: create a Request operation and restore correlation.
[FunctionName(nameof(YourEventDataConsumer))]
void YourEventDataConsumer([EventGridTrigger] EventGridEvent @event)
{
    var data = (YourEventData)@event.Data;
    var operation = _telemetryClient.StartOperation<RequestTelemetry>(
        "Handle " + @event.EventType,
        data.OperationId,
        data.OperationParentId);
    try
    {
        // Do some event processing.
        operation.Telemetry.Success = true;
        operation.Telemetry.ResponseCode = "200";
    }
    catch (Exception)
    {
        operation.Telemetry.Success = false;
        operation.Telemetry.ResponseCode = "500";
        throw;
    }
    finally
    {
        _telemetryClient.StopOperation(operation);
    }
}
This works, but it is not ideal because you need to repeat this code in every consumer. Also, some early log messages (e.g. those emitted by constructors of injected services) are still not correlated correctly.
A better approach would be to create a custom EventGridTriggerAttribute (recreating the whole Microsoft.Azure.WebJobs.Extensions.EventGrid extension) and move this code into IAsyncConverter.ConvertAsync().

Azure BotFramework Directline polling interval causes events to continuously trigger

I am currently developing a prototype to communicate data between a chatbot and website elements. I am using the Azure Bot Services BotFrameworkAdapter and DirectLine in order to communicate between the two applications.
I am having a problem which I have narrowed down to the 'pollingInterval' property of the DirectLine object. The documentation says:
Clients that poll using HTTP GET should choose a polling interval that matches their intended use.
Service-to-service applications often use a polling interval of 5s or 10s.
Client-facing applications often use a polling interval of 1s, and issue a single additional request shortly after every message that the client sends (to rapidly retrieve a bot's response). This delay can be as short as 300ms but should be tuned based on the bot's speed and transit time. Polling should not be more frequent than once per second for any extended period of time.
To my understanding this is how the DirectLine object receives events from the bot; however, I only need the event to trigger once, not on every polling interval. It seems like there is no way to say "I am finished with this event" and move on. As soon as it has been triggered once, it is continuously triggered, which is causing issues with the functionality of the application.
I have this BotConnection object that is used to create a DirectLine instance and subscribe event handlers to the received events:
import { DirectLine } from 'botframework-directlinejs';

export default class BotConnection {
    constructor() {
        // Connect to the DirectLine endpoint hosted by the bot
        this.connection = new DirectLine({
            domain: 'http://localhost:3001/directline',
            pollingInterval: 1000,
            webSocket: false
        });
    }

    getConnection() {
        return this.connection;
    }

    subscribeEventHandler(name, handle) {
        console.log('subscribeEventHandler');
        this.connection.activity$
            .filter(activity => activity.type === "event" && activity.name === name)
            .subscribe(activity => handle(activity));
    }
}
I am implementing the botConnection class in my App file like so:
props.botConnection.subscribeEventHandler('changePage', () => console.log('test'));
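For what it's worth, one way to get "handle it only once" behaviour on the client is to remember which activity IDs have already been processed. A purely illustrative sketch of an alternative method on the BotConnection class above (it assumes each DirectLine activity carries a stable id):
subscribeEventHandlerOnce(name, handle) {
    const handled = new Set();
    this.connection.activity$
        .filter(activity => activity.type === "event" && activity.name === name)
        .subscribe(activity => {
            // Skip activities that were already handled on a previous poll
            if (handled.has(activity.id)) return;
            handled.add(activity.id);
            handle(activity);
        });
}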
My Bot file takes a message and sends an event to the page that should be handled once on the client application:
const { ActivityHandler } = require('botbuilder');

class MyBot extends ActivityHandler {
    constructor() {
        super();
        this.onMessage(async (context, next) => {
            // Testing DirectLine back-channel functionality
            if (context.activity.text === "Change page") {
                await context.sendActivity({ type: 'event', name: 'changePage', value: '/test' });
            }
            await next();
        });
    }
}
Any help with this issue would be fantastic. I am unsure if there is something supported by Azure or if there is some custom magic that I need to do.
Thanks

Messages not coming through to Azure SignalR Service

I'm implementing the Azure SignalR Service in my ASP.NET Core 2.2 app with a React front-end. When I send a message, I'm NOT getting any errors, but my messages are not reaching the Azure SignalR Service.
To be specific, this is a private chat application, so when a message reaches the hub, I only need to send it to the participants in that particular chat and NOT to all connections.
When I send a message, it hits my hub, but I see no indication that the message is making it to the Azure service.
For security, I use Auth0 JWT token authentication. In my hub, I correctly see the authorized user claims, so I don't think there are any issues with security. As I mentioned, the fact that I'm able to hit the hub tells me that the front-end and security are working fine.
In the Azure portal, however, I see no indication of any messages, but if I'm reading the data correctly, I do see 2 client connections, which is correct in my tests (i.e. two open browsers I'm using for testing). Here's a screenshot:
Here's my Startup.cs code:
public void ConfigureServices(IServiceCollection services)
{
    // Omitted for brevity
    services.AddAuthentication(options => {
        options.DefaultAuthenticateScheme = JwtBearerDefaults.AuthenticationScheme;
        options.DefaultChallengeScheme = JwtBearerDefaults.AuthenticationScheme;
    })
    .AddJwtBearer(jwtOptions => {
        jwtOptions.Authority = authority;
        jwtOptions.Audience = audience;
        jwtOptions.Events = new JwtBearerEvents
        {
            OnMessageReceived = context =>
            {
                var accessToken = context.Request.Query["access_token"];

                // Check to see if the message is coming into chat
                var path = context.HttpContext.Request.Path;
                if (!string.IsNullOrEmpty(accessToken) &&
                    (path.StartsWithSegments("/im")))
                {
                    context.Token = accessToken;
                }
                return System.Threading.Tasks.Task.CompletedTask;
            }
        };
    });

    // Add SignalR
    services.AddSignalR(hubOptions => {
        hubOptions.KeepAliveInterval = TimeSpan.FromSeconds(10);
    }).AddAzureSignalR(Configuration["AzureSignalR:ConnectionString"]);
}
And here's the Configure() method:
public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    // Omitted for brevity
    app.UseSignalRQueryStringAuth();

    app.UseAzureSignalR(routes =>
    {
        routes.MapHub<Hubs.IngridMessaging>("/im");
    });
}
Here's the method I use to map a user's connectionId to the userName:
public override async Task OnConnectedAsync()
{
    // Get connectionId
    var connectionId = Context.ConnectionId;

    // Get current userId
    var userId = Utils.GetUserId(Context.User);

    // Add connection
    var connections = await _myServices.AddHubConnection(userId, connectionId);

    await Groups.AddToGroupAsync(connectionId, "Online Users");
    await base.OnConnectedAsync();
}
Here's one of my hub methods. Please note that I'm aware a user may have multiple connections simultaneously. I just simplified the code here to make it easier to digest. My actual code accounts for users having multiple connections:
[Authorize]
public async Task CreateConversation(Conversation conversation)
{
    // Get sender
    var user = Context.User;
    var connectionId = Context.ConnectionId;

    // Send message to all participants of this chat
    foreach (var person in conversation.Participants)
    {
        var userConnectionId = Utils.GetUserConnectionId(user.Id);
        await Clients.User(userConnectionId.ToString()).SendAsync("new_conversation", conversation.Message);
    }
}
Any idea what I'm doing wrong that prevents messages from reaching the Azure SignalR service?
It might be caused by a misspelled method, an incorrect method signature, an incorrect hub name, a duplicate method name on the client, or a missing JSON parser on the client, as these can fail silently on the server.
Taken from Calling methods between the client and server silently fails:
Misspelled method, incorrect method signature, or incorrect hub name
If the name or signature of a called method does not exactly match an appropriate method on the client, the call will fail. Verify that the method name called by the server matches the name of the method on the client. Also, SignalR creates the hub proxy using camel-cased methods, as is appropriate in JavaScript, so a method called SendMessage on the server would be called sendMessage in the client proxy. If you use the HubName attribute in your server-side code, verify that the name used matches the name used to create the hub on the client. If you do not use the HubName attribute, verify that the name of the hub in a JavaScript client is camel-cased, such as chatHub instead of ChatHub.
Duplicate method name on client
Verify that you do not have a duplicate method on the client that differs only by case. If your client application has a method called sendMessage, verify that there isn't also a method called SendMessage as well.
Missing JSON parser on the client
SignalR requires a JSON parser to be present to serialize calls between the server and the client. If your client doesn't have a built-in JSON parser (such as Internet Explorer 7), you'll need to include one in your application.
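For the ASP.NET Core / React setup in the question, the name-matching point means the string passed to SendAsync on the server has to match the one registered with connection.on on the client exactly. A minimal client-side sketch using the @aspnet/signalr package (renamed @microsoft/signalr in later versions); the hub URL matches the mapping in the question and getAccessToken is a hypothetical token accessor:
import * as signalR from "@aspnet/signalr";

const connection = new signalR.HubConnectionBuilder()
    .withUrl("/im", { accessTokenFactory: () => getAccessToken() })
    .build();

// Must match the server-side call:
// Clients.User(...).SendAsync("new_conversation", conversation.Message)
connection.on("new_conversation", message => {
    console.log("new conversation", message);
});

connection.start().catch(err => console.error(err));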
Update
In response to your comments, I would suggest you try one of the Azure SignalR samples, such as Get Started with SignalR: a Chat Room Example, to see if you get the same behavior.
Hope it helps!

Azure + SignalR - Secure hubs to different connection types

I have two hubs in a web role:
1) an external-facing hub meant to be consumed over HTTPS on an external endpoint by website users, and
2) a hub intended to be connected to over HTTP on an internal endpoint by worker roles.
I would like the ability to secure access to the hubs somehow.
Is there any way I can check which connection type the connecting user/worker role is using and accept/deny based on this?
Another method I thought of was using certificate authentication on the internal hubs, but I'd rather not have to, for speed etc.
GlobalHost.DependencyResolver.UseServiceBus(connectionString, "web");
// Web external connection
app.MapSignalR("/signalr", new HubConfiguration()
{ EnableJavaScriptProxies = true, EnableDetailedErrors = false });
// Worker internal connection
app.MapSignalR("/signalr-internal", new HubConfiguration()
{ EnableJavaScriptProxies = false, EnableDetailedErrors = true});
EDIT: I've included my own answer
A simple solution: you can use the client's roles to distinguish between the two connection types.
object GetAuthInfo()
{
    var user = Context.User;
    return new
    {
        IsAuthenticated = user.Identity.IsAuthenticated,
        IsAdmin = user.IsInRole("Admin"),
        UserName = user.Identity.Name
    };
}
Other options are fully described here.
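If you go the roles route, the simplest wiring is the built-in SignalR AuthorizeAttribute (Microsoft.AspNet.SignalR) on each hub. A short sketch; the role names here are assumptions:
[Authorize(Roles = "Worker")]
public class WorkerInHub : Hub
{
}

[Authorize(Roles = "WebUser")]
public class MasterHub : Hub
{
}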
I ended up probing the request environment variables and checking the server's local port and the request scheme in a custom AuthorizeAttribute. The only downside at the moment is that the JavaScript proxies will still generate the restricted hub info, but I'm working on that :).
I'll leave the question open for a bit to see if anyone can extend on this.
public class SignalrAuthorizeAttribute : Microsoft.AspNet.SignalR.AuthorizeAttribute
{
    public override bool AuthorizeHubConnection(Microsoft.AspNet.SignalR.Hubs.HubDescriptor hubDescriptor, Microsoft.AspNet.SignalR.IRequest request)
    {
        bool isHttps = request.Environment["owin.RequestScheme"].ToString().Equals("https", StringComparison.OrdinalIgnoreCase);
        bool internalPort = request.Environment["server.LocalPort"].ToString().Equals("2000");

        switch (hubDescriptor.Name)
        {
            // External hubs: HTTPS on the external endpoint only
            case "masterHub":
            case "childHub":
                if (isHttps && !internalPort) return base.AuthorizeHubConnection(hubDescriptor, request);
                break;
            // Internal hubs: HTTP on the internal endpoint only
            case "workerInHub":
            case "workerOutHub":
                if (!isHttps && internalPort) return base.AuthorizeHubConnection(hubDescriptor, request);
                break;
            default:
                break;
        }
        return false;
    }
}
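Wiring it up is then just a matter of decorating the hubs. A short sketch, assuming HubName attributes (Microsoft.AspNet.SignalR.Hubs) so the hub names match the switch above:
[SignalrAuthorize]
[HubName("masterHub")]
public class MasterHub : Hub
{
}

[SignalrAuthorize]
[HubName("workerInHub")]
public class WorkerInHub : Hub
{
}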
