white space issue in isAlpha() function of express-validator - node.js

I am using express-validator in my project.
The JSON from the client is:
{"name": "john doe"}
My express validation code is:
[check('name', 'invalid name').isAlpha()]
Why is this code returning "invalid name" when the value is a string?
I also tried isString(), but it does not work either; it behaves the same way as isAlpha().
The error JSON response to the client is:
{
  "errors": [
    {
      "value": "john doe",
      "msg": "invalid name",
      "param": "name",
      "location": "body"
    }
  ]
}
Does the isAlpha() function consider only one word to be a string?
How can I fix this?

There is an option of .isAlpha() you can use to ignore white space:
check('name', 'invalid name').isAlpha('en-US', {ignore: ' '})
The first parameter, 'en-US', is the AlphaLocale. For example, I use 'es-ES' to validate Spanish special characters. You can use one of these to validate other languages: 'ar' | 'ar-AE' | 'ar-BH' | 'ar-DZ' | 'ar-EG' | 'ar-IQ' | 'ar-JO' | 'ar-KW' | 'ar-LB' | 'ar-LY' | 'ar-MA' | 'ar-QA' | 'ar-QM' | 'ar-SA' | 'ar-SD' | 'ar-SY' | 'ar-TN' | 'ar-YE' | 'az-AZ' | 'bg-BG' | 'cs-CZ' | 'da-DK' | 'de-DE' | 'el-GR' | 'en-AU' | 'en-GB' | 'en-HK' | 'en-IN' | 'en-NZ' | 'en-US' | 'en-ZA' | 'en-ZM' | 'es-ES' | 'fa-AF' | 'fa-IR' | 'fr-FR' | 'he' | 'hu-HU' | 'id-ID' | 'it-IT' | 'ku-IQ' | 'nb-NO' | 'nl-NL' | 'nn-NO' | 'pl-PL' | 'pt-BR' | 'pt-PT' | 'ru-RU' | 'sk-SK' | 'sl-SI' | 'sr-RS' | 'sr-RS#latin' | 'sv-SE' | 'th-TH' | 'tr-TR' | 'uk-UA' | 'vi-VN'.
The second parameter is the IsAlphaOptions object. It contains only an optional property, 'ignore', which can be a string, a string[], or a RegExp.
So you can also ignore white space with the RegExp \s. Note that it must be passed as an actual RegExp, not the string '\s' (which in JavaScript is just 's'):
.isAlpha('en-US', {ignore: /\s/g})
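For context, here is a minimal sketch of how this validator might sit in an Express route (the route path and handler are illustrative, not from the question):
const express = require('express');
const { check, validationResult } = require('express-validator');

const app = express();
app.use(express.json());

// Hypothetical /users route: accepts names made of letters and spaces
app.post('/users',
  [check('name', 'invalid name').isAlpha('en-US', { ignore: ' ' })],
  (req, res) => {
    const errors = validationResult(req);
    if (!errors.isEmpty()) {
      // Same error shape the question shows: { errors: [...] }
      return res.status(400).json({ errors: errors.array() });
    }
    res.json({ name: req.body.name });
  });
With this in place, {"name": "john doe"} passes validation instead of producing the "invalid name" error.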

I got the answer. I used a custom validation method, and it resolved my issue.
[check('name').custom((value, { req }) => {
  // isNaN(value) is true for any value that cannot be coerced to a number,
  // so this accepts any non-numeric string (including punctuation)
  if (isNaN(value)) {
    return true;
  }
  throw new Error('invalid name');
})]

To check, using express-validator, that a string contains only letters and spaces, you can use a regular expression:
check('name').custom((value) => {
  // match() returns null when the pattern fails, and a falsy return fails the validation
  return value.match(/^[A-Za-z ]+$/);
})
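If you want a specific error message instead of express-validator's default 'Invalid value', a variant of the same check (a sketch) can throw:
check('name').custom((value) => {
  if (!/^[A-Za-z ]+$/.test(value)) {
    // the thrown message is surfaced in the errors array
    throw new Error('invalid name');
  }
  return true;
})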

"john doe" consisting white space " ". Due to this white-space isAlpha() throwing error. isAlpha allows only a-zA-Z.

Hopefully I'm not late to the party.
With class-validator 0.13.2, we can use:
@Matches(/^[a-zA-Z0-9 -]*$/)
Just tweak the regex to satisfy your needs. In my case, I wanted @IsAlphanumeric() but with spaces and a hyphen/dash.
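For completeness, a minimal sketch of the decorator in place on a DTO class (the class and property names here are illustrative, not from the question):
import { Matches } from 'class-validator';

class CreateUserDto {
  // letters, digits, spaces, and hyphens only; tweak the pattern as needed
  @Matches(/^[a-zA-Z0-9 -]*$/)
  name: string;
}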

Simply replace isAlpha() or isAlphanumeric() with isAlphaWithSpace() or isAlphanumericWithSpace().


Parse `key1=value1 key2=value2` in Kusto

I'm running Cilium inside an Azure Kubernetes cluster and want to parse the Cilium log messages in Azure Log Analytics. The log messages have a format like:
key1=value1 key2=value2 key3="if the value contains spaces, it's wrapped in quotation marks"
For example:
level=info msg="Identity of endpoint changed" containerID=a4566a3e5f datapathPolicyRevision=0
I couldn't find a matching parse_xxx method in the docs (e.g. https://learn.microsoft.com/en-us/azure/data-explorer/kusto/query/parsecsvfunction). Is there a way to write a custom function to parse these kinds of log messages?
Not a fun format to parse... But this should work:
let LogLine = "level=info msg=\"Identity of endpoint changed\" containerID=a4566a3e5f datapathPolicyRevision=0";
print LogLine
| extend KeyValuePairs = array_concat(
extract_all("([a-zA-Z_]+)=([a-zA-Z0-9_]+)", LogLine),
extract_all("([a-zA-Z_]+)=\"([a-zA-Z0-9_ ]+)\"", LogLine))
| mv-apply KeyValuePairs on
(
    extend p = pack(tostring(KeyValuePairs[0]), tostring(KeyValuePairs[1]))
    | summarize dict=make_bag(p)
)
The output will be:
| print_0 | dict |
|--------------------|-----------------------------------------|
| level=info msg=... | { |
| | "level": "info", |
| | "containerID": "a4566a3e5f", |
| | "datapathPolicyRevision": "0", |
| | "msg": "Identity of endpoint changed" |
| | } |
|--------------------|-----------------------------------------|
With the help of Slavik N, I came up with a query that works for me:
let containerIds = KubePodInventory
| where Namespace startswith "cilium"
| distinct ContainerID
| summarize make_set(ContainerID);
ContainerLog
| where ContainerID in (containerIds)
| extend KeyValuePairs = array_concat(
extract_all("([a-zA-Z0-9_-]+)=([^ \"]+)", LogEntry),
extract_all("([a-zA-Z0-9_]+)=\"([^\"]+)\"", LogEntry))
| mv-apply KeyValuePairs on
(
    extend p = pack(tostring(KeyValuePairs[0]), tostring(KeyValuePairs[1]))
    | summarize JSONKeyValuePairs=parse_json(make_bag(p))
)
| project TimeGenerated, Level=JSONKeyValuePairs.level, Message=JSONKeyValuePairs.msg, PodName=JSONKeyValuePairs.k8sPodName, Reason=JSONKeyValuePairs.reason, Controller=JSONKeyValuePairs.controller, ContainerID=JSONKeyValuePairs.containerID, Labels=JSONKeyValuePairs.labels, Raw=LogEntry
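As a side note, newer Kusto versions also ship a built-in parse-kv operator for exactly this kind of key=value extraction. Whether it is available depends on your environment, so treat this as a sketch rather than a drop-in replacement:
ContainerLog
| where ContainerID in (containerIds)
// declare the keys to extract; values wrapped in " are handled by the quote option
| parse-kv LogEntry as (level: string, msg: string, containerID: string)
    with (pair_delimiter=' ', kv_delimiter='=', quote='"')
| project TimeGenerated, level, msg, containerID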

Azure Log Analytics parse json

I have a query which results in a few columns. For one of the columns I am parsing JSON to retrieve the object value, but there are multiple entries in it; I want to retrieve each entry from the JSON in a loop and display it.
Below is the query:
let forEach_table = AzureDiagnostics
| where Parameters_LOAD_GROUP_s contains 'LOAD(AUTO)';
let ParentPlId = '';
let ParentPlName = '';
let commonKey = '';
forEach_table
| where Category == 'PipelineRuns'
| extend pplId = parse_json(Predecessors_s)[0].PipelineRunId, pplName = parse_json(Predecessors_s)[0].PipelineName
| extend dbMapName = tostring(parse_json(Parameters_getMetadataList_s)[0].dbMapName)
| summarize count(runId_g) by Resource, Status = status_s, Name=pipelineName_s, Loadgroup = Parameters_LOAD_GROUP_s, dbMapName, Parameters_LOAD_GROUP_s, Parameters_getMetadataList_s, pipelineName_s, Category, CorrelationId, start_t, end_t, TimeGenerated
| project ParentPL_ID = ParentPlId, ParentPL_Name = ParentPlName, LoadGroup_Name = Loadgroup, Map_Name = dbMapName, Status,Metadata = Parameters_getMetadataList_s, Category, CorrelationId, start_t, end_t
| project-away ParentPL_ID, ParentPL_Name, Category, CorrelationId
Here, in the above code,
extend dbMapName = tostring(parse_json(Parameters_getMetadataList_s)[0].dbMapName)
I am retrieving the 0th element by default, but I would like to retrieve all elements in sequence. Can somebody suggest how to achieve this?
bag_keys() is just what you need.
For example, take a look at this query:
datatable(myjson: dynamic) [
dynamic({"a": 123, "b": 234, "c": 345}),
dynamic({"dd": 123, "ee": 234, "ff": 345})
]
| project keys = bag_keys(myjson)
Its output is:
|---------|
| keys |
|---------|
| [ |
| "a", |
| "b", |
| "c" |
| ] |
|---------|
| [ |
| "dd", |
| "ee", |
| "ff" |
| ] |
|---------|
If you want to have every key in a separate row, use mv-expand, like this:
datatable(myjson: dynamic) [
dynamic({"a": 123, "b": 234, "c": 345}),
dynamic({"dd": 123, "ee": 234, "ff": 345})
]
| project keys = bag_keys(myjson)
| mv-expand keys
The output of this query will be:
|------|
| keys |
|------|
| a |
| b |
| c |
| dd |
| ee |
| ff |
|------|
The extend and mv-expand operators help resolve this kind of scenario.
Solution:
extend rows = parse_json(Parameters_getMetadataList_s)
| mv-expand rows
| project Parameters_LOAD_GROUP_s, rows
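Putting it together with the original query, a sketch of how the expansion might look (column names taken from the question):
AzureDiagnostics
| where Category == 'PipelineRuns'
| extend rows = parse_json(Parameters_getMetadataList_s)
// mv-expand emits one output row per element of the array
| mv-expand rows
| extend dbMapName = tostring(rows.dbMapName)
| project pipelineName_s, Parameters_LOAD_GROUP_s, dbMapName
Each element of Parameters_getMetadataList_s now becomes its own row, so every dbMapName is retrieved in sequence rather than only the 0th.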

Array placeholder in Gherkin syntax

Hi, I am trying to express a set of requirements in Gherkin syntax, but it requires a good deal of repetition. I saw here that I can use placeholders, which would be perfect for my task; however, some of the data in my Given and in my Then are collections. How would I go about representing collections in the Examples?
Given a collection of spaces <spaces>
And a <request> to allocate space
When I allocate the request
Then I should have <allocated_spaces>
Examples:
| spaces | request | allocated_spaces |
| ? | ? | ? |
A bit hacky, but you can delimit a string:
Given a collection of spaces <spaces>
And a <request> to allocate space
When I allocate the request
Then I should have <allocated_spaces>
Examples:
| spaces | request | allocated_spaces |
| a,b,c | ? | ? |
Given(/^a collection of spaces (.*?)$/) do |arg1|
  collection = arg1.split(",") #=> ["a","b","c"]
end
You can use Data Tables. I have never tried to put a param in a data table before, but in theory it should work.
Given a collection of spaces:
| space1 |
| space2 |
| <space_param> |
And a <request> to allocate space
When I allocate the request
Then I should have <allocated_spaces>
Examples:
| space_param | request | allocated_spaces |
| ? | ? | ? |
The given data table would be an instance of Cucumber::Ast::Table; check out the rubydoc for its API.
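For instance, a sketch of a step definition that receives the table (assuming the step text ends with a colon, as above):
Given(/^a collection of spaces:$/) do |table|
  # table is a Cucumber::Ast::Table; raw returns the rows as arrays of strings
  spaces = table.raw.flatten #=> ["space1", "space2", "a"]
end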
Here's an example, again using split, but without the regex:
Scenario Outline: To ensure proper allocation
Given a collection of spaces <spaces>
And a <request> to allocate space
When I allocate the request
Then I should have <allocated_spaces>
Examples:
| spaces | request | allocated_spaces |
| "s1, s2, s3" | 2 | 2 |
| "s1, s2, s3" | 3 | 3 |
| "s1, s2" | 3 | 2 |
I use cucumber-js, so this is what the code may look like:
Given('a collection of spaces {stringInDoubleQuotes}', function (spaces, callback) {
  // Write code here that turns the phrase above into concrete actions
  this.availableSpaces = spaces.split(", ");
  callback();
});
Given('a {int} to allocate space', function (numToAllocate, callback) {
  this.numToAllocate = numToAllocate;
  // Write code here that turns the phrase above into concrete actions
  callback();
});
When('I allocate the request', function (callback) {
  console.log("availableSpaces:", this.availableSpaces);
  console.log("numToAllocate:", this.numToAllocate);
  // Write code here that turns the phrase above into concrete actions
  callback();
});
Then('I should have {int}', function (int, callback) {
  // Write code here that turns the phrase above into concrete actions
  callback(null, 'pending');
});

Issues with quoted string regex in antlr4

I want to parse strings like "TOM*", "*TOM", "*TOM*", "TOM", and "*", and all of these without quotes. I created two rules, name_with_quotes and name_without_quotes, but strings with quotes give an expected token: <EOF> error.
I have the following tokens in my lexer.g4 file:
ID : [a-zA-Z0-9-_]+ ;
WILDCARD_STARTS_WITH_STRING: ID'*';
WILDCARD_ENDS_WITH_STRING: '*'ID;
WILDCARD_CONTAINS_STRING : '*'ID'*' ;
STRING : ('"' | '\'') ( ( (~('"' | '\\' | '\r' | '\n') | '\\'('"' ) )*) | ) ('"' | '\'');
QUOTED_ID : ('"' | '\'') (((STAR)? ID (STAR)?) | ID | STAR) ('"' | '\'');
I have the following rules in my parser file:
name_without_quotes : ID | WILDCARD_STARTS_WITH_STRING | WILDCARD_ENDS_WITH_STRING | WILDCARD_CONTAINS_STRING | STAR ;
name_with_quotes : QUOTED_ID;
name : name_with_quotes | name_without_quotes;
I also tried using the following rules:
WITHOUT_QUOTES : '"' (ID | ID'*' | '*'ID | '*'ID'*' ) '"';
WITH_QUOTES : ID | WILDCARD_STARTS_WITH_STRING | WILDCARD_ENDS_WITH_STRING | WILDCARD_CONTAINS_STRING | STAR ;
But no luck. Any clue what I could be doing wrong?
Many thanks.
Found the solution here:
http://www.antlr3.org/pipermail/antlr-interest/2012-March/044273.html
I just changed the order of the lexer rules and it worked.
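For context: when two ANTLR lexer rules match input of the same length, the rule that appears first in the grammar wins. So a sketch of the fix, assuming the reordering from the linked thread, is to list the more specific QUOTED_ID before the general STRING rule (STRING is simplified here):
// QUOTED_ID must appear before STRING, or STRING wins the tie for quoted input
QUOTED_ID : ('"' | '\'') ((STAR? ID STAR?) | STAR) ('"' | '\'');
STRING    : ('"' | '\'') ~('"' | '\'' | '\r' | '\n')* ('"' | '\'');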

Struggling to parse array notation

I have a very simple grammar to parse statements.
Here are examples of the types of statements that need to be parsed:
a.b.c
a.b.c == "88"
The issue I am having is that array notation is not matching. For example, these do not work:
a.b[0].c
a[3][4]
I hope someone can point out what I am doing wrong here. (I am testing in ANTLRWorks.)
Here is the grammar (generationUnit is my entry point):
grammar RatBinding;
generationUnit: testStatement | statement;
arrayAccesor : identifier arrayNotation+;
arrayNotation: '[' Number ']';
testStatement:
(statement | string | Number | Bool )
(greaterThanAndEqual
| lessThanOrEqual
| greaterThan
| lessThan | notEquals | equals)
(statement | string | Number | Bool )
;
part: identifier | arrayAccesor;
statement: part ('.' part )*;
string: ('"' identifier '"') | ('\'' identifier '\'');
greaterThanAndEqual: '>=';
lessThanOrEqual: '<=';
greaterThan: '>';
lessThan: '<';
notEquals : '!=';
equals: '==';
identifier: Letter (Letter|Digit)*;
Bool : 'true' | 'false';
ArrayLeft: '\u005B';
ArrayRight: '\u005D';
Letter
: '\u0024' |
'\u0041'..'\u005a' |
'\u005f' |
'\u0061'..'\u007a' |
'\u00c0'..'\u00d6' |
'\u00d8'..'\u00f6' |
'\u00f8'..'\u00ff' |
'\u0100'..'\u1fff' |
'\u3040'..'\u318f' |
'\u3300'..'\u337f' |
'\u3400'..'\u3d2d' |
'\u4e00'..'\u9fff' |
'\uf900'..'\ufaff'
;
Digit
: '\u0030'..'\u0039' |
'\u0660'..'\u0669' |
'\u06f0'..'\u06f9' |
'\u0966'..'\u096f' |
'\u09e6'..'\u09ef' |
'\u0a66'..'\u0a6f' |
'\u0ae6'..'\u0aef' |
'\u0b66'..'\u0b6f' |
'\u0be7'..'\u0bef' |
'\u0c66'..'\u0c6f' |
'\u0ce6'..'\u0cef' |
'\u0d66'..'\u0d6f' |
'\u0e50'..'\u0e59' |
'\u0ed0'..'\u0ed9' |
'\u1040'..'\u1049'
;
WS : [ \r\t\u000C\n]+ -> channel(HIDDEN)
;
You referenced the non-existent rule Number in the arrayNotation parser rule.
A Digit rule does exist in the lexer, but it will only match a single-digit number. For example, 1 is a Digit, but 10 is two separate Digit tokens so a[10] won't match the arrayAccesor rule. You probably want to resolve this in two parts:
Create a Number token consisting of one or more digits.
Number
: Digit+
;
Mark Digit as a fragment rule to indicate that it doesn't form tokens on its own, but is merely intended to be referenced from other lexer rules.
fragment // prevents a Digit token from being created on its own
Digit
: ...
You will not need to change arrayNotation because it already references the Number rule you created here.
Bah, what a waste of space. I used Number instead of Digit in my array declaration.
