The NPM package @microsoft/applicationinsights-web from Microsoft, when used in the frontend, adds headers for tracking calls across the different parts of an application (e.g. frontend, backend, services, etc.).
I'm using axios in the frontend, which does not work with the package out of the box. Neither disableFetchTracking: false nor disableAjaxTracking: false works. I don't want to replace axios with fetch, because axios is more convenient to use and replacing it would also mean a lot of rewriting.
What can I do?
The @microsoft/applicationinsights-web package does inject correlation headers into axios calls (by instrumenting XMLHttpRequest). There are a few possible causes of the problem in your application:
Another library broke the instrumentation
Something else might be hijacking the XMLHttpRequest object and breaking the AppInsights instrumentation. One such library is pace.js, which overwrites the window.XMLHttpRequest constructor but does not add open, send and abort to its prototype. AppInsights expects these functions to be present on XMLHttpRequest's prototype: https://github.com/microsoft/ApplicationInsights-JS/blob/91f08a1171916a1bbf14c03a019ebd26a3a69b86/extensions/applicationinsights-dependencies-js/src/ajax.ts#L330
Here is a working axios + @microsoft/applicationinsights-web example:
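A minimal sketch of such a setup (the instrumentation key and the /api/users endpoint are placeholders):

import { ApplicationInsights } from '@microsoft/applicationinsights-web';
import axios from 'axios';

// Initialize AppInsights first so it can instrument window.XMLHttpRequest.
const appInsights = new ApplicationInsights({
  config: { instrumentationKey: 'YOUR_INSTRUMENTATION_KEY' },
});
appInsights.loadAppInsights();

// axios uses XMLHttpRequest in the browser, so the Request-Id correlation
// header is injected into this request automatically.
axios.get('/api/users').then(response => console.log(response.status));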
Here is the same example but with pace.js loaded - the Request-Id header is no longer added.
Either removing pace.js or placing its script tag / import after the AppInsights initialization code should fix the problem.
Cross-origin requests
Another explanation might be that the frontend app is making cross-origin requests, which AppInsights does not process by default; the enableCorsCorrelation: true config setting is needed.
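The same initialization as above, with cross-origin correlation enabled (the key is again a placeholder):

import { ApplicationInsights } from '@microsoft/applicationinsights-web';

const appInsights = new ApplicationInsights({
  config: {
    instrumentationKey: 'YOUR_INSTRUMENTATION_KEY',
    // also inject correlation headers into cross-origin calls
    enableCorsCorrelation: true,
  },
});
appInsights.loadAppInsights();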
Related
I have an application built with LoopBack 4, a Node.js framework for building REST APIs.
Within that app I want to cache the public certificates of various JWT token issuers so that I don't have to request them on every API call. My problem is that the sequence and each middleware get a new instance with every request. While I could use something like Redis, I wonder if I could achieve something similar without an extra service. I don't see any way to register an object at the application level that outlives a single request. Any ideas?
I assume the same would apply, e.g., to Express middlewares that do in-memory caching.
I solved this by creating a caching service and binding it to the app in application.ts:
import {createBindingFromClass} from '@loopback/core';
import {CacheService} from './caching/cacheservice.service';
import {CacheServiceBindings} from './caching/keys';
[..]
this.add(createBindingFromClass(CacheService, {key: CacheServiceBindings.CACHE}));
and injected this in the sequence.ts constructor:
@inject(CacheServiceBindings.CACHE) protected cache: CacheService,
That way I can use an object that persists for the whole application lifecycle inside the sequence.
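The service itself is not shown above; here is a minimal sketch of what it could look like (the TTL handling and the binding key name are illustrative). The important part is the SINGLETON scope, which is what makes the single instance outlive individual requests:

// caching/cacheservice.service.ts (sketch)
import {BindingScope, injectable} from '@loopback/core';

@injectable({scope: BindingScope.SINGLETON}) // one instance for the whole app lifecycle
export class CacheService {
  private store = new Map<string, {value: unknown; expiresAt: number}>();

  get(key: string): unknown | undefined {
    const entry = this.store.get(key);
    if (!entry || Date.now() > entry.expiresAt) {
      this.store.delete(key);
      return undefined;
    }
    return entry.value;
  }

  set(key: string, value: unknown, ttlMs = 60 * 60 * 1000): void {
    this.store.set(key, {value, expiresAt: Date.now() + ttlMs});
  }
}

// caching/keys.ts (sketch)
import {BindingKey} from '@loopback/core';
import {CacheService} from './cacheservice.service';

export namespace CacheServiceBindings {
  export const CACHE = BindingKey.create<CacheService>('caching.CacheService');
}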
I have an HTTP-triggered Node.js Azure Function, and I'm looking to remove the "X-Powered-By" header from my response, but have found no way to do so.
I've tried adding both this and this Azure site extension, but neither has worked for me.
Setting the response header manually, i.e. res.headers = { ['x-powered-by']: null }, is ineffective.
Based on the comments on this GitHub issue: https://github.com/Azure/Azure-Functions/issues/290 it would seem that using either extension should have removed the headers you wanted.
Modifying the response headers likely won't work, as they are probably added further down the pipeline by the function host and are not overridable; see:
Access Azure Function runtime settings
Azure Functions recently removed the x-aspnet-version header; further removal of other headers is tracked as part of azure-webjobs-script-sdk here.
You could leave a comment on that GitHub issue to discuss this further with the team working on it.
There is an extension called Remove Custom Headers that works for Web Apps but not for functions that have their own resource group. So, what you can do is:
1. Create a regular Web App
2. Create a function and make sure you use the same Hosting Plan as the Web App (do not use Consumption).
3. Once the function is created, install the extension named: "Remove Custom Headers"
4. Restart the function and the headers (Server and X-Powered-By) should disappear.
I have a Clarion 9 app that I want to be able to communicate with HTTP servers. I come from a PHP background and have no idea where to start.
What I wish to be able to do:
Parse JSON data and convert QUEUE data to JSON [Done]
Have a global variable like 'baseURL' that points to e.g. http://localhost.com [Done]
Call functions such as apiConnection.get('/users') that would return the contents of the page. [I'm stuck here]
apiConnection.post('/users', myQueueData) would POST myQueueData contents.
I tried using winhttp.dll by reading it with LibMaker, but it couldn't read it. Instead, I'm now using wininet.dll, for which LibMaker successfully created a .lib file.
I'm currently using the PROTOTYPE procedures from this code on GitHub https://gist.github.com/ddur/34033ed1392cdce1253c
What I did was include them like this:
SimpleApi.clw

  PROGRAM

  INCLUDE('winInet.equ')

ApiLog        QUEUE,PRE(log)
LogTitle        STRING(10)
LogMessage      STRING(50)
              END

  MAP
    INCLUDE('winInetMap.clw')
  END

  INCLUDE('equates.clw'),ONCE
  INCLUDE('DreamyConnection.inc'),ONCE

ApiConnection DreamyConnection

  CODE
  ! Call the method on the ApiConnection instance, not on the DreamyConnection class
  IF ApiConnection.initiateConnection('http://localhost')
  ELSE
    log:LogTitle   = 'Info'
    log:LogMessage = 'Failed'
    ADD(ApiLog)
  END
But the buffer that winInet uses always returns 0.
I have created a GitHub repository https://github.com/spacemudd/clarion-api with all the code to look at.
I'm really lost here because I can't find proper documentation for Clarion.
I do not want a paid solution.
It kind of depends on which version of Clarion you have.
Starting around v9, they added ClaRunExt, which provides this kind of functionality via .NET interop.
From the help:
Use HTTP or HTTPS to download web pages, or any other type of file. You can also post form data to web servers. Very easy way to send HTTP web requests (and receive responses) to Web Servers, REST Web Services, or standard Web Services, with the most commonly used HTTP verbs; POST, GET, PUT, and DELETE.
Otherwise, search the LibSrc\ directory for "http" and you will get an idea of what is already there; abapi.inc, for example, appears to provide a wrapper around wininet.lib.
I want to log every HTTP request made by a particular Node app and all of its modules. Wrapping requests in a function could work for all non-module code; the disadvantages are obviously that it wouldn't cover module code and that it would be cumbersome to do.
This is for apps already in production; the only other option I've thought of is tcpdump.
Setting NODE_DEBUG=http will make Node log detailed HTTP request information to the console.
Examples:
NODE_DEBUG=http,http2 node index.js
NODE_DEBUG=http,http2 npm start
For more information see:
NODE_DEBUG documentation
This blog post: Debugging tools and practices in node.js.
As of 2022-11-23, this is the list of available NODE_DEBUG attributes (based on the blog post above and on checking the Node.js source code):
child_process
cluster
esm
fs
http
http2
inspect
module
net
policy
repl
source_map
stream
stream_socket
timer
tls
tracing
worker
You can pass multiple modules as a comma-separated list: NODE_DEBUG=http,http2,tls
How to find the module IDs in the Node.js source code:
Unfortunately, there isn't a listing of the available debug module IDs in the Node.js documentation.
To find the IDs in the source, search/grep the Node.js .js files for .debuglog( usage:
# Grep inside the nodejs project
$ grep -r -E --color "\.debuglog\('" nodejs/lib
This will return results like:
let debug = require('internal/util/debuglog').debuglog('esm', (fn) => {
let debug = require('internal/util/debuglog').debuglog('http2', (fn) => {
The string passed to .debuglog(...) (e.g. 'esm' and 'http2') is the module id that can be passed to NODE_DEBUG.
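The same mechanism is available to application code through Node's util.debuglog, so you can gate your own logging on NODE_DEBUG too. A minimal sketch (the 'myapp' section name is arbitrary):

// Run with: NODE_DEBUG=myapp node debug-example.js
import { debuglog } from 'util';

// Returns a logger that writes to stderr only when NODE_DEBUG
// includes the 'myapp' section (e.g. NODE_DEBUG=http,myapp).
const debug = debuglog('myapp');

debug('starting request to %s', 'http://localhost:8080');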
The easiest / least intrusive way is with a web proxy, either an off-the-shelf one or one you write yourself in Node. The machines the apps live on would have to be configured* to send all outbound traffic through the proxy, and the proxy can then log the traffic. Implementation details will vary based on which proxy/approach you pick.
*Arguably, there are ways to do this such that the machines don't even know they're being proxied, but I've found in practice that's really hard to get right, especially with HTTPS traffic.
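For illustration, a minimal sketch of the write-it-yourself variant (plain HTTP only; HTTPS needs CONNECT tunnelling or TLS interception, which is considerably more involved, and whether an app honors a proxy setting at all depends on its HTTP client):

// logging-proxy.ts - a sketch of a forward proxy that logs outbound requests
import * as http from 'http';

const proxy = http.createServer((clientReq, clientRes) => {
  // For proxied requests, req.url is an absolute URL (http://host/path).
  const target = new URL(clientReq.url ?? '');
  console.log(`${new Date().toISOString()} ${clientReq.method} ${clientReq.url}`);

  const upstream = http.request(
    {
      hostname: target.hostname,
      port: target.port || 80,
      path: target.pathname + target.search,
      method: clientReq.method,
      headers: clientReq.headers,
    },
    upstreamRes => {
      clientRes.writeHead(upstreamRes.statusCode ?? 502, upstreamRes.headers);
      upstreamRes.pipe(clientRes);
    },
  );
  upstream.on('error', err => {
    clientRes.writeHead(502);
    clientRes.end(String(err));
  });
  clientReq.pipe(upstream);
});

proxy.listen(8080, () => console.log('logging proxy listening on :8080'));

A client that honors proxy environment variables could then be started with HTTP_PROXY=http://localhost:8080 (Node's core http module does not read HTTP_PROXY by itself; many client libraries do).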
In 2022, you can also use the grackle_tracking library, which helps you track traffic, errors and analytics into your database or just to the console: https://www.getgrackle.com/analytics_and_tracking
Check out the HTTP request logger middleware called Morgan here.
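Note that Morgan logs requests coming into your own Express/Connect server, not outbound requests the app makes. A minimal usage sketch:

// morgan-example.ts - access logging for incoming requests
import express from 'express';
import morgan from 'morgan';

const app = express();
app.use(morgan('combined')); // Apache combined log format for every request
app.get('/', (_req, res) => { res.send('ok'); });
app.listen(3000);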
I have a working, ordinary Hapi application that I'm planning to migrate to Swagger. I installed swagger-node using the official instructions and chose Hapi when executing swagger project create. However, I'm now confused, because there seem to be several libraries for integrating swagger-node and Hapi:
hapi-swagger: the most popular one
hapi-swaggered: somewhat popular
swagger-hapi: unpopular and not that active but used by the official Swagger Node.js library (i.e. swagger-node) as default for Hapi projects
I thought swagger-hapi was the "official" approach, until I tried to find information on how to do various configurations on Hapi routes (e.g. authorization, scoping, etc.). It also seems that the approaches are fundamentally different: swagger-hapi takes the Swagger definition as input and generates the routes automatically, whereas hapi-swagger and hapi-swaggered take a similar approach to each other, generating the Swagger API documentation from plain old Hapi route definitions.
Considering the number of contributors and the number of downloads, hapi-swagger seems to be the way to go, but I'm unsure how to proceed. Is there an "official" Swagger way to set up Hapi, and if there is, how do I set up authentication (preferably using hapi-auth-jwt2 or a similar JWT solution) and authorization?
EDIT: I also found swaggerize-hapi, which seems to be maintained by PayPal's open-source kraken.js team, which indicates that it might have some kind of corporate backing (always a good thing). swaggerize-hapi seems to be very similar to hapi-swagger, although the latter seems to provide more out-of-the-box functionality (mainly Swagger Editor).
Edit: Point 3 from your question, and understanding what swagger-hapi actually does, is very important. It does not directly serve the swagger-ui HTML. It is not intended to, but it is what enables the whole swagger idea (which the projects in points 1 and 2 actually somewhat reverse). Please see below.
It turns out that when you are using swagger-node and swagger-hapi, you do not need any of the other packages you mentioned, except for using swagger-ui directly (which is used by all the others anyway; they wrap it in their dependencies).
I want to share my understanding of this hapi/swagger puzzle so far; I hope the 8 hours I spent on it can help others as well.
Libraries like hapi-swaggered, hapi-swaggered-ui and hapi-swagger all follow the same approach, which might be described as:
You document your API while you are defining your routes
They sit somewhat apart from the main idea of swagger-node and the boilerplate hello_world project created with swagger-cli, which you mentioned you use.
swagger-node and swagger-hapi (NOTE that it is different from hapi-swagger), on the other hand, say:
You define all your API documentation and routes **in a single centralized place - swagger.yaml**
and then you just focus on writing controller logic. The boilerplate project provided with swagger-cli already exposes this centralized swagger.yaml as JSON through the /swagger endpoint.
Now, because the swagger-ui project, which all the above packages use for showing the UI, is just a bunch of static HTML, you have two options for using it:
1) self-host this static HTML from within your app, or
2) host it on a separate web app, or even load the index.html directly from the file system.
In both cases you just need to feed swagger-ui your swagger JSON, which, as said above, is already exposed by the /swagger endpoint.
The only caveat, if you choose option 2), is that you need to enable CORS for that endpoint, which happens to be very easy: just change your default.yaml to also make use of the cors bagpipe. Please see this thread for how to do this.
As @Kitanotori said above, I also don't see the point of documenting the code programmatically. The idea of describing everything in one place and making both the code and the documentation engine understand it is great.
We have used Inert, Vision, hapi-swagger.
server.ts
import * as Inert from '@hapi/inert';
import * as Vision from '@hapi/vision';
import Swagger from './plugins/swagger';
...
...
// hapi server setup
...
const plugins: any[] = [Inert, Vision, Swagger];
await server.register(plugins);
...
// other setup
./plugins/swagger
import * as HapiSwagger from 'hapi-swagger';
import * as Package from '../../package.json';
const swaggerOptions: HapiSwagger.RegisterOptions = {
  info: {
    title: 'Some title',
    version: Package.version
  }
};

export default {
  plugin: HapiSwagger,
  options: swaggerOptions
};
We are using Inert, Vision and hapi-swagger to build and host the swagger documentation.
We load those plugins in exactly this order, do not configure Inert or Vision, and set only basic properties like title in the hapi-swagger config.