I'm developing a solution that integrates with LUIS using API calls.
I can't find an API to get the information shown on the Dashboard, for example incorrect predictions and unclear predictions. So far I have found only this API:
{ENDPOINT}/luis/webapi/v2.0/apps/{APP_ID}/versions/{VERSION}/statsmetadata
but this is not enough, since I would like to get more detailed information like what is displayed in the dashboard:
The statsmetadata endpoint is the right operation for displaying the figures for the pie chart.
Sample
Here is an example from one of my LUIS apps.
Dashboard display
The dashboard gives the following details:
Data origin
The statsmetadata endpoint gives all the details. Here is the call that is made: https://westeurope.api.cognitive.microsoft.com/luis/webApi/v2.0/apps/*MyAppId*/versions/*MyAppVersion*/statsmetadata
JSON Result:
{
"appVersionUtterancesCount": 8505,
"modelsMetadata": [{
"modelId": "IdOfModel1,
"modelName": "modelName1",
"utterancesCount": 85,
"misclassifiedUtterancesCount": 7,
"ambiguousUtterancesCount": 19,
"misclassifiedAmbiguousUtterancesCount": 7
}, {
"modelId": "IdOfModel2",
"modelName": "modelName2",
"utterancesCount": 402,
"misclassifiedUtterancesCount": 11,
"ambiguousUtterancesCount": 32,
"misclassifiedAmbiguousUtterancesCount": 11
}, {
"modelId": "IdOfModel3",
"modelName": "modelName3",
"utterancesCount": 293,
"misclassifiedUtterancesCount": 9,
"ambiguousUtterancesCount": 42,
"misclassifiedAmbiguousUtterancesCount": 9
}, {
"modelId": "IdOfModel4",
"modelName": "modelName4",
"utterancesCount": 58,
"misclassifiedUtterancesCount": 3,
"ambiguousUtterancesCount": 5,
"misclassifiedAmbiguousUtterancesCount": 3
}, {
"modelId": "IdOfModel5",
"modelName": "modelName5",
"utterancesCount": 943,
"misclassifiedUtterancesCount": 7,
"ambiguousUtterancesCount": 103,
"misclassifiedAmbiguousUtterancesCount": 7
}, {
"modelId": "IdOfModel6",
"modelName": "modelName6",
"utterancesCount": 266,
"misclassifiedUtterancesCount": 9,
"ambiguousUtterancesCount": 70,
"misclassifiedAmbiguousUtterancesCount": 9
}, {
"modelId": "IdOfModel7",
"modelName": "modelName7",
"utterancesCount": 2441,
"misclassifiedUtterancesCount": 68,
"ambiguousUtterancesCount": 180,
"misclassifiedAmbiguousUtterancesCount": 67
}, {
"modelId": "IdOfModel8",
"modelName": "modelName8",
"utterancesCount": 704,
"misclassifiedUtterancesCount": 40,
"ambiguousUtterancesCount": 154,
"misclassifiedAmbiguousUtterancesCount": 40
}, {
"modelId": "IdOfModel9",
"modelName": "modelName9",
"utterancesCount": 288,
"misclassifiedUtterancesCount": 9,
"ambiguousUtterancesCount": 65,
"misclassifiedAmbiguousUtterancesCount": 9
}, {
"modelId": "IdOfModel10",
"modelName": "modelName10",
"utterancesCount": 18,
"misclassifiedUtterancesCount": 1,
"ambiguousUtterancesCount": 1,
"misclassifiedAmbiguousUtterancesCount": 1
}, {
"modelId": "IdOfModel11",
"modelName": "modelName11",
"utterancesCount": 444,
"misclassifiedUtterancesCount": 12,
"ambiguousUtterancesCount": 50,
"misclassifiedAmbiguousUtterancesCount": 12
}, {
"modelId": "IdOfModel12",
"modelName": "modelName12",
"utterancesCount": 10,
"misclassifiedUtterancesCount": 8,
"ambiguousUtterancesCount": 8,
"misclassifiedAmbiguousUtterancesCount": 8
}, {
"modelId": "IdOfModel13",
"modelName": "modelName13",
"utterancesCount": 58,
"misclassifiedUtterancesCount": 9,
"ambiguousUtterancesCount": 21,
"misclassifiedAmbiguousUtterancesCount": 9
}, {
"modelId": "IdOfModel14",
"modelName": "None",
"utterancesCount": 2495,
"misclassifiedUtterancesCount": 194,
"ambiguousUtterancesCount": 428,
"misclassifiedAmbiguousUtterancesCount": 178
}],
"intentsCount": 14,
"entitiesCount": 7,
"phraseListsCount": 7,
"patternsCount": 0,
"trainingTime": "00:00:53.2732860",
"lastTrainDate": "2019-10-22T13:25:03Z",
"misclassifiedUtterancesCount": 387,
"ambiguousUtterancesCount": 1178,
"misclassifiedAmbiguousUtterancesCount": 370
}
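For reference, here is a minimal sketch of calling this endpoint from TypeScript. The Ocp-Apim-Subscription-Key header is an assumption based on how the documented LUIS authoring APIs authenticate; the portal itself uses its own session authentication for this webapi endpoint, so the auth part may need adjusting:
// Sketch only: fetch the statsmetadata JSON for an app version.
// Assumption: the endpoint accepts a LUIS authoring key via the standard
// Ocp-Apim-Subscription-Key header (the portal uses its own session auth).
const endpoint = "https://westeurope.api.cognitive.microsoft.com";
const appId = "MyAppId";                // placeholder
const appVersion = "MyAppVersion";      // placeholder
const authoringKey = "MyAuthoringKey";  // placeholder

async function getStatsMetadata() {
  const res = await fetch(
    `${endpoint}/luis/webapi/v2.0/apps/${appId}/versions/${appVersion}/statsmetadata`,
    { headers: { "Ocp-Apim-Subscription-Key": authoringKey } }
  );
  if (!res.ok) throw new Error(`statsmetadata request failed: ${res.status}`);
  return res.json();
}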
Calculation on API data - Pie Chart
Incorrect predictions
This is the number of misclassified utterances divided by the total number of utterances
Formula: misclassifiedUtterancesCount / appVersionUtterancesCount
Check: 387 / 8505 = 0.04550 = 4.6%
Unclear predictions
This is the number of ambiguous utterances (excluding the misclassified ambiguous ones, as they are already counted in the incorrect predictions) divided by the total number of utterances
Formula: (ambiguousUtterancesCount - misclassifiedAmbiguousUtterancesCount) / appVersionUtterancesCount
Check: (1178 - 370) / 8505 = 808 / 8505 = 0.09500 = 9.5%
Correct predictions
It is... what remains! So:
Formula: (appVersionUtterancesCount - misclassifiedUtterancesCount - (ambiguousUtterancesCount - misclassifiedAmbiguousUtterancesCount)) / appVersionUtterancesCount
Check: (8505 - 387 - (1178 - 370)) / 8505 = (8505 - 387 - 808) / 8505 = 7310 / 8505 = 0.8594 = 85.9%
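To make the three formulas concrete, here is a minimal TypeScript sketch that reproduces the pie-chart percentages from the statsmetadata JSON above:
// Compute the dashboard pie chart from a statsmetadata response.
interface StatsMetadata {
  appVersionUtterancesCount: number;
  misclassifiedUtterancesCount: number;
  ambiguousUtterancesCount: number;
  misclassifiedAmbiguousUtterancesCount: number;
}

function pieChart(stats: StatsMetadata) {
  const total = stats.appVersionUtterancesCount;
  const incorrect = stats.misclassifiedUtterancesCount / total;
  const unclear =
    (stats.ambiguousUtterancesCount - stats.misclassifiedAmbiguousUtterancesCount) / total;
  const correct = 1 - incorrect - unclear;
  return {
    incorrect: (incorrect * 100).toFixed(1) + "%", // 4.6% with the sample above
    unclear: (unclear * 100).toFixed(1) + "%",     // 9.5% with the sample above
    correct: (correct * 100).toFixed(1) + "%",     // 85.9% with the sample above
  };
}

// pieChart({ appVersionUtterancesCount: 8505, misclassifiedUtterancesCount: 387,
//   ambiguousUtterancesCount: 1178, misclassifiedAmbiguousUtterancesCount: 370 })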
Calculation on API data - Right panel details
Data Imbalance
These are the models with the highest utterancesCount, but I don't know whether a threshold ratio is used to keep only the top N models.
Incorrect predictions
By model, check the ratio misclassifiedUtterancesCount / utterancesCount and keep the ones with the highest ratio.
Unclear predictions
By model, check the ratio (ambiguousUtterancesCount - misclassifiedAmbiguousUtterancesCount) / utterancesCount and keep the ones with the highest ratio.
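Here is a minimal TypeScript sketch of those per-model rankings built from the modelsMetadata array; the top-N cutoff is a guess, since the portal's threshold is unknown:
interface ModelMetadata {
  modelId: string;
  modelName: string;
  utterancesCount: number;
  misclassifiedUtterancesCount: number;
  ambiguousUtterancesCount: number;
  misclassifiedAmbiguousUtterancesCount: number;
}

// Rank models for the right-panel lists. topN is an assumption; the portal's
// actual cutoff (if any) is not documented.
function rightPanel(models: ModelMetadata[], topN = 3) {
  const incorrectRatio = (m: ModelMetadata) =>
    m.misclassifiedUtterancesCount / m.utterancesCount;
  const unclearRatio = (m: ModelMetadata) =>
    (m.ambiguousUtterancesCount - m.misclassifiedAmbiguousUtterancesCount) / m.utterancesCount;
  const top = (score: (m: ModelMetadata) => number) =>
    [...models].sort((a, b) => score(b) - score(a)).slice(0, topN).map((m) => m.modelName);
  return {
    dataImbalance: top((m) => m.utterancesCount),
    incorrectPredictions: top(incorrectRatio),
    unclearPredictions: top(unclearRatio),
  };
}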
Details
You can check directly in your browser's developer tools that the LUIS portal calls this endpoint before displaying the dashboard.
Here is the config file of the paraphrase-mpnet-base-v2 transformer model, and I would like to understand, with examples, the meaning of the hidden_size and num_hidden_layers parameters.
{
"_name_or_path": "old_models/paraphrase-mpnet-base-v2/0_Transformer",
"architectures": [
"MPNetModel"
],
"attention_probs_dropout_prob": 0.1,
"bos_token_id": 0,
"eos_token_id": 2,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"layer_norm_eps": 1e-05,
"max_position_embeddings": 514,
"model_type": "mpnet",
"num_attention_heads": 12,
"num_hidden_layers": 12,
"pad_token_id": 1,
"relative_attention_num_buckets": 32,
"transformers_version": "4.7.0",
"vocab_size": 30527
}
I tried to use two Rust libraries to compute HMAC-SHA1: hmacsha and hmac-sha1.
These functions use different sets of parameters (hmacsha requires strings, hmac-sha1 requires [u8] slices) and return different results. I used an online tool to compare: hmacsha returns the same result as the online tool, while hmac-sha1 differs from both the online tool and hmacsha.
This is how I use hmacsha:
const ENCRYPT_KEY: &str = "C2B3723CC6AED9B5343C53EE2F4367CE";
let key = [200, 7, 216, 173, 213, 146, 229, 69, 96, 121, 68, 66, 55, 215, 179, 150, 34, 206, 128, 136, 252, 37, 173, 113, 170, 80, 148, 36, 5, 129, 212, 183, 47, 24, 47, 123, 157, 47, 26, 2];
let hash_key = HmacSha::from(
&encode_hex(&key), // C807D8ADD592E5456079444237D7B39622CE8088FC25AD71AA5094240581D4B72F182F7B9D2F1A02
ENCRYPT_KEY,
Sha1::default()
).compute_digest();
println!("{:?}", encode_hex(&hash_key)); // 545E5C8D754F21DD586BA7378F8AA7AA815F4500
This is my encode_hex function I use for both examples:
use std::fmt::Write; // needed for the write! macro on String

pub fn encode_hex(bytes: &[u8]) -> String {
let mut s = String::with_capacity(bytes.len() * 2);
for &b in bytes {
write!(&mut s, "{:02x}", b).unwrap();
}
s.to_uppercase()
}
This is how I use hmac-sha1:
const ENCRYPT_KEY: [u8; 16] = [
0xC2, 0xB3, 0x72, 0x3C, 0xC6, 0xAE, 0xD9, 0xB5, 0x34, 0x3C, 0x53, 0xEE, 0x2F, 0x43, 0x67, 0xCE
];
let key = [200, 7, 216, 173, 213, 146, 229, 69, 96, 121, 68, 66, 55, 215, 179, 150, 34, 206, 128, 136, 252, 37, 173, 113, 170, 80, 148, 36, 5, 129, 212, 183, 47, 24, 47, 123, 157, 47, 26, 2];
let hash_key = &hmac_sha1(&key, &ENCRYPT_KEY);
println!("{:?}", encode_hex(&hash_key)); // 80FEC15187802FFB4EA8124220A19DD575E45DC3
Could somebody explain what I am doing wrong?
HmacSha::from from the hmacsha crate takes a string as the key, but it does not expect a hex-encoded string; it uses the bytes of the string as-is. This seems like a mistake in the API, since it will only accept keys that are valid UTF-8. You should instead use HmacSha::new from the same crate, which accepts a byte slice as the key:
let hash_key = HmacSha::new(
&key, // no encode_hex() here
ENCRYPT_KEY,
Sha1::default()
).compute_digest();
How can I read buffer data in TypeScript?
I want to use my public key to get a list of all the tokens I own.
I'm trying to do this, but an empty array of objects is being returned.
import {Connection, Keypair} from "@solana/web3.js";
const Solana = new Connection("https://api.testnet.solana.com/","confirmed")
const getInfo = async () => {
const recentInfo = await Solana.getEpochInfo()
const DEMO_FROM_SECRET_KEY = new Uint8Array([
223, 119, 171, 5, 237, 138, 42, 140, 176, 163, 74,
107, 25, 143, 90, 97, 250, 158, 203, 102, 238, 19,
77, 228, 211, 238, 147, 149, 40, 50, 211, 155, 51,
207, 14, 53, 86, 230, 164, 27, 14, 202, 78, 181,
185, 250, 16, 52, 134, 242, 96, 16, 12, 67, 2,
178, 106, 241, 156, 212, 11, 150, 114, 72])
const keypair = Keypair.fromSecretKey(DEMO_FROM_SECRET_KEY)
console.log("=============get account info==============")
async function fetchaccountdata() {
const accountinfo = await Solana.getParsedAccountInfo(keypair.publicKey,"max")
const accountinfodata = JSON.stringify(accountinfo)
const accountinfodata2 = JSON.parse(accountinfodata)
console.log(accountinfo)
console.log("=============get account info==============")
console.log(accountinfodata)
console.log(accountinfodata2)
}
fetchaccountdata()
}
getInfo()
Terminal output is:
accountinfodata = {"context":{"slot":94636033},"value":{"data":{"type":"Buffer","data":[]},"executable":false,"lamports":42784111360,"owner":{"_bn":"00"},"rentEpoch":231}}
accountinfodata2 = {
context: { slot: 94636033 },
value: {
data: { type: 'Buffer', data: [] },
executable: false,
lamports: 42784111360,
owner: { _bn: '00' },
rentEpoch: 231
}
}
The data field of the accountinfo object is an empty array.
How can I read the information?
This call is fetching the account data for your wallet. Your wallet only holds SOL, which is why the data is empty!
If you want to get all of the token accounts that you own, you'll be better off using getParsedTokenAccountsByOwner: https://github.com/solana-labs/solana/blob/2a42f8a06edd4d33c6cda4d66add0a2582d37011/web3.js/src/connection.ts#L2310
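For reference, here is a minimal sketch of that call, assuming the well-known SPL Token program id as the filter (import TOKEN_PROGRAM_ID from @solana/spl-token instead if you already depend on it):
import { Connection, PublicKey } from "@solana/web3.js";

// Assumed constant: the SPL Token program id.
const TOKEN_PROGRAM_ID = new PublicKey("TokenkegQfeZyiNwAJbNbGKPFXCWuBvf9Ss623VQ5DA");

async function listTokenAccounts(connection: Connection, owner: PublicKey) {
  const { value } = await connection.getParsedTokenAccountsByOwner(owner, {
    programId: TOKEN_PROGRAM_ID,
  });
  for (const { pubkey, account } of value) {
    // With jsonParsed encoding, account.data.parsed.info holds the mint and amount.
    const info = account.data.parsed.info;
    console.log(pubkey.toBase58(), info.mint, info.tokenAmount.uiAmountString);
  }
}

// e.g. listTokenAccounts(Solana, keypair.publicKey) with the variables from the question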
The buffer contains account info data in no particular format. Borsh has recently been established as a standard, though.
Example borsh serialization from SPL:
https://github.com/solana-labs/solana-program-library/blob/master/token/program/src/state.rs#L128
Example borsh deserialization for metaplex metadata: https://github.com/metaplex-foundation/metaplex-program-library/blob/master/token-metadata/program/src/instruction.rs#L77-L78
If you aren't trying to decode Metaplex metadata, you should look elsewhere for the schema, but follow the same principles as that code. If you know the account isn't using borsh, then it's up to you to decode it.
Visit the solana tech discord for more information/help.
https://discord.com/channels/428295358100013066/517163444747894795/917412508418068510
function createData(collegeName, registrarName, phoneNumber, emailAddress, action) {
return {collegeName , registrarName, phoneNumber, emailAddress, action };
}
var rows = [
createData("data", 305, 3.7, 67, 4.3),
createData('Donut', 452, 25.0, 51, 4.9),
createData('Eclair', 262, 16.0, 24, 6.0),
createData('Frozen yoghurt', 159, 6.0, 24, 4.0),
createData('Gingerbread', 356, 16.0, 49, 3.9),
createData('Honeycomb', 408, 3.2, 87, 6.5),
createData('Ice cream sandwich', 237, 9.0, 37, 4.3),
createData('Jelly Bean', 375, 0.0, 94, 0.0),
createData('KitKat', 518, 26.0, 65, 7.0),
createData('Lollipop', 392, 0.2, 98, 0.0),
createData('Marshmallow', 318, 0, 81, 2.0),
createData('Nougat', 360, 19.0, 9, 37.0),
createData('Oreo', 437, 18.0, 63, 4.0),
];
This is static data (you can check using the full code for the table), but I want to feed the rows with dynamic data from the database response.
You should use map to generate the input for your table's rows property:
let list = (data && data.profiles != null) ? data.profiles.map((value, index, array) => {
  return {
    id: value.id,
    key: value.id,
    name: value.name,
  };
}) : [];
The above code is an example; change the variables to your own.
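For the table in the question, a minimal sketch could look like the following; the field names in the response object are hypothetical, since the actual database response shape isn't shown, so rename them to match yours:
// Hypothetical response shape -- rename the fields to match your real API response.
const response = {
  profiles: [
    { college_name: "MIT", registrar_name: "Jane Doe", phone: "555-0100", email: "jane@mit.edu", action: "edit" },
  ],
};

// Reuse createData from the question to build the rows array the table expects.
const rows = (response.profiles || []).map((p) =>
  createData(p.college_name, p.registrar_name, p.phone, p.email, p.action)
);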
With the following JavaScript request:
navigator.credentials.create({
publicKey: {
// random, cryptographically secure, at least 16 bytes
challenge: new Uint8Array(16),
// relying party
rp: {
id: 'localhost',
name: 'My website'
},
user: {
id: new Uint8Array(16),
name: 'Tang',
displayName: 'Tang'
},
pubKeyCredParams: [
{
type: "public-key", alg: -7
}
],
attestation: "direct"
}
})
a FIDO2-compatible YubiKey 5 NFC consistently returns a "fido-u2f" attestation statement:
%{
"attStmt" => %{
"sig" => <<48, 69, 2, 33, 0, 132, 31, 225, 91, 58, 61, 190, 47, 66, 168, 8,
177, 18, 136, 106, 100, 219, 54, 52, 255, 103, 106, 156, 230, 141, 240,
82, 130, 167, 204, 128, 100, 2, 32, 61, 159, 126, 9, 244, 55, 100, 123,
169, ...>>,
"x5c" => [
<<48, 130, 2, 188, 48, 130, 1, 164, 160, 3, 2, 1, 2, 2, 4, 3, 173, 240,
18, 48, 13, 6, 9, 42, 134, 72, 134, 247, 13, 1, 1, 11, 5, 0, 48, 46, 49,
44, 48, 42, 6, 3, 85, 4, 3, 19, ...>>
]
},
"authData" => <<73, 150, 13, 229, 136, 14, 140, 104, 116, 52, 23, 15, 100,
118, 96, 91, 143, 228, 174, 185, 162, 134, 50, 199, 153, 92, 243, 186, 131,
29, 151, 99, 65, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ...>>,
"fmt" => "fido-u2f"
}
How to receive a FIDO2 "packed" attestation statement instead?
Per the current spec/standard I don't think you (acting as a Relying Party) can "select" which attestation statement format you receive from the Authenticator (i.e. "the device"). That is a decision made by the Authenticator.
I think MacBook Pro TouchID platform authenticator via Chrome Desktop is sending "packed" attestation statements, if that helps.
There is no way to select the attestation statement format with such simple keys. To test my implementation for both attestation formats, I simply bought two different keys, one from Yubico and one from Nitrokey. The Yubico key sends fido-u2f, while the Nitrokey sends packed attestations.
And in case someone wants to know, this is how I implemented it:
let verifyAuthenticatorAttestationResponse = (webAuthnResponse) => {
let attestationBuffer =
base64url.toBuffer(webAuthnResponse.response.attestationObject);
let ctapMakeCredResp = cbor.decodeAllSync(attestationBuffer)[0];
let authrDataStruct = parseMakeCredAuthData(ctapMakeCredResp.authData);
let response = {'verified': false };
if(ctapMakeCredResp.fmt === 'fido-u2f' || ctapMakeCredResp.fmt === 'packed') {
if(!(authrDataStruct.flags & U2F_USER_PRESENTED))
throw new Error('User was NOT presented during authentication!');
let clientDataHash =
hash(base64url.toBuffer(webAuthnResponse.response.clientDataJSON))
let publicKey = COSEECDHAtoPKCS(authrDataStruct.COSEPublicKey)
let PEMCertificate = ASN1toPEM(ctapMakeCredResp.attStmt.x5c[0]);
let signature = ctapMakeCredResp.attStmt.sig;
let signatureBase;
if(ctapMakeCredResp.fmt === 'fido-u2f') {
signatureBase = Buffer.concat([Buffer.from([0x00]), authrDataStruct.rpIdHash, clientDataHash, authrDataStruct.credID, publicKey]);
} else {
signatureBase = Buffer.concat([ctapMakeCredResp.authData, clientDataHash]);
}
response.verified = verifySignature(signature, signatureBase, PEMCertificate)
if(response.verified) {
response.authrInfo = {
fmt: `${ctapMakeCredResp.fmt}`,
publicKey: base64url.encode(publicKey),
counter: authrDataStruct.counter,
credID: base64url.encode(authrDataStruct.credID)
}
}
}
return response
}
By default the browser might select U2F. Try forcing user verification (UV) by setting it to "required". This will force the browser to use FIDO2, since U2F does not support UV.
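Here is a minimal sketch of that change applied to the create() options from the question; whether the authenticator then returns a "packed" statement is still up to the authenticator, as noted in the other answer:
navigator.credentials.create({
  publicKey: {
    challenge: new Uint8Array(16), // use a real random challenge in production
    rp: { id: 'localhost', name: 'My website' },
    user: { id: new Uint8Array(16), name: 'Tang', displayName: 'Tang' },
    pubKeyCredParams: [{ type: 'public-key', alg: -7 }],
    // Requiring user verification rules out the legacy U2F (CTAP1) path.
    authenticatorSelection: { userVerification: 'required' },
    attestation: 'direct'
  }
});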