Get username from keycloak session in NodeJS

Is there something similar to:
request.getUserPrincipal().getName() // Java
in Node to get the username when we are using connect-keycloak with Express middleware?

I also ran into this issue.
I dove into the middleware code and tried to find something similar. It turns out that the request object is modified: the middleware appends kauth.grant to it.
console.log(req.kauth.grant) prints out:
{
access_token: {
token: 'kasdgfksj333',
clientId: 'mobile',
header: {
alg: 'RS256'
},
content: {
jti: '33389eb6-3611-4de2-b913-add9283c3de0',
exp: 1464883174,
nbf: 0,
iat: 1464882874,
iss: 'http://docker:9090/auth/realms/test',
aud: 'test-client',
sub: '333604a0-b527-4afb-a04e-5e4ebf06ce9c',
typ: 'Bearer',
azp: 'test-client',
session_state: '1cd35952-8e42-44f1-ad15-aaf9964bfefa',
client_session: '943f1213-f556-4021-bbc6-2355146ab955',
'allowed-origins': [],
resource_access: [Object],
name: 'Test User',
preferred_username: 'user',
given_name: 'Test',
family_name: 'User',
email: 'foo@bar.com'
},
signature: <Buffer 45 1b 3d d7 4f f9 d1 63 44 ad a9 ca b8 c4 67 88 ba e9 5d 64 8d a0 a9 75 a1 79 cf 18 52 d5 f7 f0 08 71 1d 79 bd 59 e9 5a f8 25 72 dd e5 06 71 4f b7 f1 47 ... >,
signed: 'eyJhbGcfOiJSUzf1NiJ9.eyJqdGkiOsJmYmY4OWViwi0zNjExLTrkZTItYjkxMy1hZGQ5MjgzYzNkZTAiLCJleHAiOjE0NjQ4ODMxNzQsIm5iZiI6MCwiaWF0IjoxNDY0ODgyODc0LCJpc3MiOiJodHRwOi8vZG9ja2VyaG9zdDo5MDgwL2F1dGgvcmVhbG1zL3JoY2FycyIsImF1ZCI6InJoY2Fycy12ZWhpY2xlLW93bmVyLWlvcyIsInN1YiI6IjkxMjYwNGEwLWI1MjctNGFmYi1hMDRlLTVlNGViZjA2Y2U5YyIsInR5cCI6IkJlYXJlciIsImF6cCI6InJoY2Fycy12ZWhpY2xlLW93bmVyLWlvcyIsInNlc3Npb25fc3RhdGUiOiIxY2QzNTk1Mi04ZTQyLTQ0ZjEtYWQxNS1hYWY5OTY0YmZlZmEiLCJjbGllbnRfc2Vzc2lvbiI6Ijk0M2YxMjEzLWY1NTYtNDAyMS1iYmM2LTIzNTUxNDZhYjk1NSIsImFsbG93ZWQtb3JpZ2lucyI6W10sInJlc291cmNlX2FjY2VzcyI6eyJhY2NvdW50Ijp7InJvbGVzIjpbIm1hbmFnZS1hY2NvdW50Iiwidmlldy1wcm9maWxlIl19fSwibmFtZSI6IlRlc3QgVXNlciIsInByZWZlcnJlZF91c2VybmFtZSI6IjEyMzEyMyIsImdpdmVuX25hbWUiOiJUZXN0IiwiZmFtaWx5X25hbWUiOiJVc2VyIiwiZW1haWwiOiJmb29iYXJ1c2VyQGFyY29uc2lzLmNvbSJ9'
},
refresh_token: undefined,
id_token: undefined,
token_type: undefined,
expires_in: undefined,
__raw: '{"access_token":"eyJhbGciOiJSUzI3NiJ2.eyJqdGki4iJmYmY4OWriNi0zNjExLTRkZTItYjkxMy1hZGQ5MjgzYzNkZTAiLCJleHAiOjE0NjQ4ODMxNzQsIm5iZiI6MCwiaWF0IjoxNDY0ODgyODc0LCJpc3MiOiJodHRwOi8vZG9ja2VyaG9zdDo5MDgwL2F1dGgvcmVhbG1zL3JoY2FycyIsImF1ZCI6InJoY2Fycy12ZWhpY2xlLW93bmVyLWlvcyIsInN1YiI6IjkxMjYwNGEwLWI1MjctNGFmYi1hMDRlLTVlNGViZjA2Y2U5YyIsInR5cCI6IkJlYXJlciIsImF6cCI6InJoY2Fycy12ZWhpY2xlLW93bmVyLWlvcyIsInNlc3Npb25fc3RhdGUiOiIxY2QzNTk1Mi04ZTQyLTQ0ZjEtYWQxNS1hYWY5OTY0YmZlZmEiLCJjbGllbnRfc2Vzc2lvbiI6Ijk0M2YxMjEzLWY1NTYtNDAyMS1iYmM2LTIzNTUxNDZhYjk1NSIsImFsbG93ZWQtb3JpZ2lucyI6W10sInJlc291cmNlX2FjY2VzcyI6eyJhY2NvdW50Ijp7InJvbGVzIjpbIm1hbmFnZS1hY2NvdW50Iiwidmlldy1wcm9maWxlIl19fSwibmFtZSI6IlRlc3QgVXNlciIsInByZWZlcnJlZF91c2VybmFtZSI6IjEyMzEyMyIsImdpdmVuX25hbWUiOiJUZXN0IiwiZmFtaWx5X25hbWUiOiJVc2VyIiwiZW1haWwiOiJmb29iYXJ1c2VyQGFyY29uc2lzLmNvbSJ9.RRs910_50WNEranKuMRniLrpXWSNoKl1oXnPGFLV9_AIcR15vVnpWvglct3lBnFPt_FH6QPJTmp7i-8mRTIDoIL8jtmEtJ8VfE2ZYX5WN3RlxPFQc5kCOZUQiV55eZALOCSTpm2HIw1eLhBVs4Is8RMJoWy8xj3k4pkOqqll8NY__TJdTG7Iihj0lReblyaW34OpSxkAYoqYaayox0H_7UbnpSAIL0BqBL41lDPH4mXouUX3i0fFbLOt_MnAtPrdFYTez7OVmKhZx7gavdQEkHEGK8thgagnCrycejUqTO0YUeOsasQ2NK9KLPBIEA0eX_p2l2yDYhlJR15stQ3AHA"}',
store: [Function],
unstore: [Function]
}
For sure, this is not developer-friendly, but you can access the username via
req.kauth.grant.access_token.content.preferred_username. In this example that yields user.
I will report this as an issue to the main contributor.
(GitHub repo of the Keycloak middleware: https://github.com/keycloak/keycloak-nodejs-connect)
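For reference, here is a minimal sketch of the full wiring (the '/whoami' route, session secret, and port are my own illustrative choices, not from the question; the setup otherwise follows the keycloak-nodejs-connect README):
const express = require('express');
const session = require('express-session');
const Keycloak = require('keycloak-connect');

const app = express();
const memoryStore = new session.MemoryStore();
app.use(session({
  secret: 'replace-me', // placeholder secret
  resave: false,
  saveUninitialized: true,
  store: memoryStore
}));

// Realm and client settings are read from keycloak.json by default.
const keycloak = new Keycloak({ store: memoryStore });
app.use(keycloak.middleware());

// Rough equivalent of Java's request.getUserPrincipal().getName():
app.get('/whoami', keycloak.protect(), (req, res) => {
  const content = req.kauth.grant.access_token.content;
  res.json({ username: content.preferred_username, name: content.name });
});

app.listen(3000);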
UPDATE
The main contributors of the Keycloak project just answered me. If you find any additional issues, report them here:
https://issues.jboss.org/projects/KEYCLOAK
For the node.js adapter:
https://issues.jboss.org/browse/KEYCLOAK-2833?jql=project%20%3D%20KEYCLOAK%20AND%20component%20%3D%20%22Adapter%20-%20Node.js%22
UPDATE 2: March 15 2021
Reporting issues for the Keycloak middleware requires a Red Hat user account now. Since this thread still seems to be active and I am no longer working on this topic (so much time has passed), I can only suggest setting up an account and reporting bugs there.
https://issues.jboss.org/projects/KEYCLOAK
Hope I could help.
Cheers
Orlando
🍻

Related

Kafkajs - get statistics (lag)

In our NestJS application we use the kafkajs client for Kafka.
We need a way to monitor consumer statistics.
One of the metrics is lag.
I'm trying to figure out whether kafkajs provides anything for this, and have found nothing promising. (The most interesting fields in the payload are: timestamp, offset, batchContext.firstOffset, batchContext.firstTimestamp, batchContext.maxTimestamp.)
Questions
Are there any ideas on how to log the lag value and other statistics provided by kafkajs?
Should I think about implementing my own statistics monitor to collect the required information in a Node application that uses the kafkajs client? (See the admin-client sketch below.)
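Regarding the second question, a do-it-yourself monitor seems feasible with the kafkajs admin client. A rough sketch follows; the broker address, group id, and topic are placeholders, and the exact fetchOffsets() signature has changed between kafkajs versions (this follows the 1.x style), so check against the version you run.
const { Kafka } = require('kafkajs');

// Compares each partition's latest offset (high watermark) against the
// consumer group's committed offset and logs the difference as lag.
async function reportLag(groupId, topic) {
  const kafka = new Kafka({ brokers: ['localhost:9092'] }); // placeholder broker
  const admin = kafka.admin();
  await admin.connect();
  try {
    const latest = await admin.fetchTopicOffsets(topic);            // latest offsets
    const committed = await admin.fetchOffsets({ groupId, topic }); // group offsets
    for (const p of latest) {
      const c = committed.find((o) => o.partition === p.partition);
      const committedOffset = c ? Number(c.offset) : -1; // -1 = nothing committed yet
      const lag = Number(p.offset) - Math.max(committedOffset, 0);
      console.log(`partition ${p.partition}: lag=${lag}`);
    }
  } finally {
    await admin.disconnect();
  }
}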
New Details 1
Following documentation I can get batch.highWatermark, where
batch.highWatermark is the last committed offset within the topic partition. It can be useful for calculating lag.
Trying
await consumer.run({
  eachBatchAutoResolve: true,
  eachBatch: async (data) => {
    console.log('Received data.batch.messages: ', data.batch.messages)
    console.log('Received data.batch.highWatermark: ', data.batch.highWatermark)
  },
})
I get output like the following:
Received data.batch.messages: [
  {
    magicByte: 2,
    attributes: 0,
    timestamp: '1628877419958',
    offset: '144',
    key: null,
    value: <Buffer 68 65 6c 6c 6f 21>,
    headers: {},
    isControlRecord: false,
    batchContext: {
      firstOffset: '144',
      firstTimestamp: '1628877419958',
      partitionLeaderEpoch: 0,
      inTransaction: false,
      isControlBatch: false,
      lastOffsetDelta: 2,
      producerId: '-1',
      producerEpoch: 0,
      firstSequence: 0,
      maxTimestamp: '1628877419958',
      timestampType: 0,
      magicByte: 2
    }
  },
  {
    magicByte: 2,
    attributes: 0,
    timestamp: '1628877419958',
    offset: '145',
    key: null,
    value: <Buffer 6f 74 68 65 72 20 6d 65 73 73 61 67 65>,
    headers: {},
    isControlRecord: false,
    batchContext: {
      firstOffset: '144',
      firstTimestamp: '1628877419958',
      partitionLeaderEpoch: 0,
      inTransaction: false,
      isControlBatch: false,
      lastOffsetDelta: 2,
      producerId: '-1',
      producerEpoch: 0,
      firstSequence: 0,
      maxTimestamp: '1628877419958',
      timestampType: 0,
      magicByte: 2
    }
  },
  {
    magicByte: 2,
    attributes: 0,
    timestamp: '1628877419958',
    offset: '146',
    key: null,
    value: <Buffer 6d 6f 72 65 20 6d 65 73 73 61 67 65 73>,
    headers: {},
    isControlRecord: false,
    batchContext: {
      firstOffset: '144',
      firstTimestamp: '1628877419958',
      partitionLeaderEpoch: 0,
      inTransaction: false,
      isControlBatch: false,
      lastOffsetDelta: 2,
      producerId: '-1',
      producerEpoch: 0,
      firstSequence: 0,
      maxTimestamp: '1628877419958',
      timestampType: 0,
      magicByte: 2
    }
  }
]
Received data.batch.highWatermark: 147
Any ideas on how to use batch.highWatermark in the lag calculation then?
Looks like the only way to get the offset lag metric is by using instrumentation events:
consumer.on(consumer.events.END_BATCH_PROCESS, (payload) =>
  console.log(payload.offsetLagLow),
);
offsetLagLow measures the offset delta between the first message in the batch and the last offset in the partition (highWatermark). You can also use offsetLag, but it is based on the last offset of the batch.
As @Sergii mentioned, there are some props available directly when you are using eachBatch (the kafkajs docs list all available methods on the batch prop). But you won't get those props if you are using eachMessage. So instrumentation events are the most universal approach.
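For completeness, if you are consuming with eachBatch anyway, those props can be read directly. A hedged sketch (method names as documented on the kafkajs batch object):
await consumer.run({
  eachBatch: async ({ batch }) => {
    // offsetLag():    lag relative to the batch's last offset
    // offsetLagLow(): lag relative to the batch's first offset
    console.log(
      `topic ${batch.topic} partition ${batch.partition}: ` +
      `lag=${batch.offsetLag()} lagLow=${batch.offsetLagLow()}`
    );
  },
});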

Azure functions messing up gzipped POST data

Currently I'm implementing a webhook whose documentation states that requests sent to the configured endpoint will be gzipped, and I'm experiencing a weird bug with that.
I created a middleware to handle the gunzipping of the request data:
const buffer: Buffer[] = [];
request
  .on("data", (chunk) => {
    buffer.push(Buffer.from(chunk));
  })
  .on("end", () => {
    const concatBuff: Buffer = Buffer.concat(buffer);
    zlib.gunzip(concatBuff, (err, buff) => {
      if (err) {
        console.log("gunzip err", err);
        return next(err);
      }
      request.body = buff.toString();
      next();
    });
  });
I added this middleware before all the other body-parser middlewares to avoid any incompatibility with them; the wiring looks roughly like the sketch below.
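(A rough sketch of that ordering; gunzipBody is my name for the snippet above wrapped as a middleware function, not something from the original post.)
const express = require('express');
const zlib = require('zlib');
const app = express();

// Wraps the snippet above as an Express middleware.
function gunzipBody(request, response, next) {
  const buffer = [];
  request
    .on('data', (chunk) => buffer.push(Buffer.from(chunk)))
    .on('end', () => {
      zlib.gunzip(Buffer.concat(buffer), (err, buff) => {
        if (err) return next(err);
        request.body = buff.toString();
        next();
      });
    });
}

// Registered first, so it sees the raw compressed stream
// before any body parser touches the request.
app.use(gunzipBody);
// ...body parsers and routes are registered after this point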
So I'm testing it with this curl command:
cat webhook.txt | gzip | curl -v -i --data-binary @- -H "Content-Encoding: gzip" http://localhost:3334
On this server, which uses azure-function-express, I'm getting this error:
[1/9/2020 22:36:21] gunzip err Error: incorrect header check
[1/9/2020 22:36:21] at Zlib.zlibOnError [as onerror] (zlib.js:170:17) {
[1/9/2020 22:36:21] errno: -3,
[1/9/2020 22:36:21] code: 'Z_DATA_ERROR'
[1/9/2020 22:36:21] }
[1/9/2020 22:36:21]
It seems the error occurs because the buffer does not start with the gzip "magic number" (1f 8b):
<Buffer 1f ef bf bd 08 00 ef bf bd ef bf bd 4e 5f 00 03 ef bf bd 5d 6d 73 db b8 11 ef bf bd ef bf bd 5f ef bf bd e1 97 bb 6b 7d 16 ef bf bd 77 ef bf bd 73 ef ... 4589 more bytes>
Notice the recurring byte sequence ef bf bd: that is the UTF-8 encoding of U+FFFD, the replacement character, which suggests the binary body got decoded as a UTF-8 string somewhere along the way (the gzip magic number 1f 8b has become 1f ef bf bd).
But here is the weird thing: I created a new Express application to test this with exactly the same curl command, and it works perfectly there. So it seems there is some problem with createAzureFunctionHandler, or I'm missing something.
Have you experienced any of these problems using Azure Functions?
Any idea what Azure is messing up in the gzipped data?
I just got an answer from the Azure team. They recommended setting up a proxy inside proxies.json as a workaround, so if anyone is having the same issue, you can set up a new proxy to override the Content-Type.
In my case I was always expecting gzipped JSON; if you don't know the content type beforehand, this might not work for you.
{
  "$schema": "http://json.schemastore.org/proxies",
  "proxies": {
    "RequireContentType": {
      "matchCondition": {
        "route": "/api/HttpTrigger"
      },
      "backendUri": "https://proxy-example.azurewebsites.net/api/HttpTrigger",
      "requestOverrides": {
        "backend.request.headers.content-type": "application/octet-stream",
        "backend.request.headers.request-content-type": "'{request.headers.content-type}'"
      }
    }
  }
}

Error in Watson Classifier API reference nodejs example: source.on is not a function

I'm trying to use Watson Classifier from node. I've started by implementing the example in the API reference, found at https://www.ibm.com/watson/developercloud/natural-language-classifier/api/v1/node.html?node#create-classifier
My code (sensitive information replaced with stars):
58  create: function(args, cb) {
59    var params = {
60      metadata: {
61        language: 'en',
62        name: '*********************'
63      },
64      training_data: fs.createReadStream(config.data.prepared.training)
65    };
66
67    params.training_data.on("readable", function () {
68      nlc.createClassifier(params, function(err, response) {
69        if (err)
70          return cb(err);
71        console.log(JSON.stringify(response, null, 2));
72        cb();
73      });
74    });
75  },
The file I am trying to make a stream from exists, and the stream works (I've managed to read from it on "readable"). I placed the on("readable") part there because it made sense to do all of this once the stream becomes available, and because I wanted to check that I can read from it. It does not change the outcome, however.
nlc is the natural_language_classifier instance.
I'm getting this:
octav@****************:~/watsonnlu$ node nlc.js create
/home/octav/watsonnlu/node_modules/delayed-stream/lib/delayed_stream.js:33
source.on('error', function() {});
^
TypeError: source.on is not a function
at Function.DelayedStream.create (/home/octav/watsonnlu/node_modules/delayed-stream/lib/delayed_stream.js:33:10)
at FormData.CombinedStream.append (/home/octav/watsonnlu/node_modules/combined-stream/lib/combined_stream.js:44:37)
at FormData.append (/home/octav/watsonnlu/node_modules/form-data/lib/form_data.js:74:3)
at appendFormValue (/home/octav/watsonnlu/node_modules/request/request.js:321:21)
at Request.init (/home/octav/watsonnlu/node_modules/request/request.js:334:11)
at new Request (/home/octav/watsonnlu/node_modules/request/request.js:128:8)
at request (/home/octav/watsonnlu/node_modules/request/index.js:53:10)
at Object.createRequest (/home/octav/watsonnlu/node_modules/watson-developer-cloud/lib/requestwrapper.js:208:12)
at NaturalLanguageClassifierV1.createClassifier (/home/octav/watsonnlu/node_modules/watson-developer-cloud/natural-language-classifier/v1-generated.js:143:33)
at ReadStream.<anonymous> (/home/octav/watsonnlu/nlc.js:68:8)
I tried debugging it myself for a while, but I'm not sure what this source is actually supposed to be. It's just an object composed of the metadata I put in and the "emit" function if I print it before the offending line in delayed-stream.js.
{ language: 'en',
name: '*******************',
emit: [Function] }
This is my package.json file:
{
  "name": "watsonnlu",
  "version": "0.0.1",
  "dependencies": {
    "csv-parse": "2.0.0",
    "watson-developer-cloud": "3.2.1"
  }
}
Any ideas how to make the example work?
Cheers!
Octav
I got the answer in the meantime, thanks to the good people at IBM. It seems you have to send the metadata as stringified JSON:
59    var params = {
60      metadata: JSON.stringify({
61        language: 'en',
62        name: '*********************'
63      }),
64      training_data: fs.createReadStream(config.data.prepared.training)
65    };
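Put together as a self-contained sketch (the credentials, classifier name, and CSV path are placeholders; the require path matches the watson-developer-cloud 3.x layout visible in the stack trace above):
const fs = require('fs');
const NaturalLanguageClassifierV1 =
  require('watson-developer-cloud/natural-language-classifier/v1');

const nlc = new NaturalLanguageClassifierV1({
  username: '****', // placeholder service credentials
  password: '****'
});

nlc.createClassifier({
  // metadata must be a string, hence JSON.stringify
  metadata: JSON.stringify({ language: 'en', name: 'my-classifier' }),
  training_data: fs.createReadStream('./training.csv')
}, function (err, response) {
  if (err) return console.error(err);
  console.log(JSON.stringify(response, null, 2));
});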

Showing image from MongoDB that is stored as Buffer

So I am storing an image like this:
router.post('/', upload.single('pic'), (req, res) => {
  var newImg = fs.readFileSync(req.file.path);
  var encImg = newImg.toString('base64');
  var s = new Buffer(encImg, 'base64');
  var newCar = {
    picture: s,
    contentType: req.file.mimetype,
    link: req.body.link
  };
  // ...newCar is saved to MongoDB here (save call omitted in the question)
});
Now the data looks like this:
{
  _id: 5a502869eb1eb10cc4449335,
  picture: Binary {
    _bsontype: 'Binary',
    sub_type: 0,
    position: 1230326,
    buffer: <Buffer 89 50 4e 47 0d 0a 1a 0a 00 00 00 0d 49 48 44 52 00 00 05 00 00 00 03 1e 08 06 00 ... >
  },
  contentType: 'image/png',
  link: 'fds',
  __v: 0
}
I want to show this picture on the frontend, like this:
<img src="data:image/png;base64, iVBORw0KGgoAAAANSUhEUgAAAAUA
AAAFCAYAAACNbyblAAAAHElEQVQI12P4//8/w38GIAXDIBKE0DHxgljNBAAO
9TXL0Y4OHwAAAABJRU5ErkJggg==" alt="Red dot" />
In my case, this code will be:
<img src="data:<%= c.contentType %>;base64, <%= c.picture %>" />
And all I am getting is some weird symbols.
I think I tried almost everything and still can't figure out what this is. Even when I convert the Buffer with toString('ascii'), I still get unrecognizable symbols (boxes).
What am I supposed to do?
P.S. Is this a good way to store images (less than 16 MB)? It seems kind of slow because of all the long string conversions and file reads, compared to just storing the image as a file.
HTML
<img [src]="'data:image/jpg;base64,'+Logo.data" height="50" width="60" alt="Red dot" />
Data from database:
"Logo" : {
"data" : BinData(0,"/9j/4AAQSkZJRgABAQEAYABgAAD/"),
"name" : "dp.jpg",
"encoding" : "7bit",
"mimetype" : "image/jpeg",
"truncated" : false,
"size" : 895082
}
Hope it helps.
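Since the question uses EJS rather than Angular, here is a hedged sketch of the equivalent server-side approach. The Car model, route, and template names are my assumptions; the key step is converting the stored Buffer to base64 before rendering.
// Assumed names: Car model, '/cars' route, 'cars' EJS template.
app.get('/cars', function (req, res) {
  Car.find(function (err, cars) {
    if (err) return res.sendStatus(500);
    var view = cars.map(function (c) {
      return {
        link: c.link,
        // Convert the stored Buffer to a data URI the browser can render.
        src: 'data:' + c.contentType + ';base64,' + c.picture.toString('base64')
      };
    });
    // In the template: <img src="<%= car.src %>" />
    res.render('cars', { cars: view });
  });
});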

Node.js: Download file from s3 and unzip it to a string

I am writing an AWS Lambda function which needs to download a file from AWS S3, unzip it, and return the content as a string.
I am trying this:
function getObject(key){
  var params = {
    Bucket: "my-bucket",
    Key: key
  }
  return new Promise(function (resolve, reject){
    s3.getObject(params, function (err, data){
      if(err){
        reject(err);
      }
      resolve(zlib.unzipSync(data.Body))
    })
  })
}
But I'm getting this error:
Error: incorrect header check
at Zlib._handle.onerror (zlib.js:363:17)
at Unzip.Zlib._processChunk (zlib.js:524:30)
at zlibBufferSync (zlib.js:239:17)
The data looks like this
{ AcceptRanges: 'bytes',
LastModified: 'Wed, 16 Mar 2016 04:47:10 GMT',
ContentLength: '318',
ETag: '"c3xxxxxxxxxxxxxxxxxxxxxxxxx"',
ContentType: 'binary/octet-stream',
Metadata: {},
Body: <Buffer 50 4b 03 04 14 00 00 08 08 00 f0 ad 6f 48 95 05 00 50 84 00 00 00 b4 00 00 00 2c 00 00 00 30 30 33 32 35 2d 39 31 38 30 34 2d 37 34 33 30 39 2d 41 41 ... >
}
The Body buffer contains ZIP-archive data, not plain zlib/gzip: the first bytes 50 4b 03 04 are the "PK" signature of a ZIP local file header, which zlib.unzipSync() cannot decode.
You will need a zip module to parse the data and extract the files within. One such library is yauzl, which has a fromBuffer() method that you can pass your buffer to in order to get the file entries.
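A minimal sketch of that approach (assuming the archive holds a single text entry; error handling kept short):
const yauzl = require('yauzl');

// Resolves with the contents of the first entry in the zip buffer.
function unzipToString(buffer) {
  return new Promise(function (resolve, reject) {
    yauzl.fromBuffer(buffer, { lazyEntries: true }, function (err, zipfile) {
      if (err) return reject(err);
      zipfile.readEntry();
      zipfile.on('entry', function (entry) {
        zipfile.openReadStream(entry, function (err, stream) {
          if (err) return reject(err);
          var chunks = [];
          stream.on('data', function (c) { chunks.push(c); });
          stream.on('end', function () {
            resolve(Buffer.concat(chunks).toString('utf8'));
          });
        });
      });
    });
  });
}

// Usage inside the existing getObject(): replace zlib.unzipSync(data.Body)
// with unzipToString(data.Body).then(resolve, reject);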
