I'm trying to set a schema for my ESLint rule.
The intended configuration is as follows:
'my-rule': ['error', [
{'any-key': ['array', 'of', 'strings']},
{'some-other-key': ['array', 'of', 'strings']},
// ...etc
]]
I found out that it is possible to allow any non-enum property names, but I'm not sure how to specify their value types:
schema: [
{
type: 'array',
items: {
type: 'object',
propertyNames: {
type: 'string',
// ??? - how do I specify array of strings here?
},
}
}
],
Is this possible at all?
I don't think you can use propertyNames to specify the value types.
Wouldn't patternProperties work better for your purpose?
schema: [
{
type: 'array',
items: {
type: 'object',
patternProperties: {
".": {
type: "array",
items: {type: "string"}
}
},
}
}
]
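With that schema, option values like these would be accepted or rejected (a quick sketch; the key names are made up):
// Valid: any key is fine as long as its value is an array of strings
[{ 'any-key': ['a', 'b'] }, { 'other-key': ['c'] }]
// Invalid: the value is an array of numbers, not strings
[{ 'any-key': [1, 2] }]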
I'm using AJV to validate an HTTP request payload against a schema. However, I see an error reported that I was not expecting. This is a code example to demonstrate the issue:
const schema = {
type: 'array',
minItems: 1,
items: {
anyOf: [
{
type: 'object',
properties: {
op: {
enum: ['replace']
},
path: {
type: 'string',
pattern: '/data/to/foo/bar',
},
value: {
type: 'string',
},
},
},{
type: 'object',
properties: {
op: {
enum: ['replace']
},
path: {
type: 'string',
pattern: '/data/to/baz',
},
value: {
type: 'object',
required: ['foo', 'bar'],
properties: {
foo: {
type: 'string',
},
bar: {
type: 'string',
},
}
}
}
}
],
},
}
const Ajv = require('ajv')
const validator = new Ajv()
const compiledValidator = validator.compile(schema)
const data = [
{ // this object should pass
op: 'replace',
path: '/data/to/foo/bar',
value: 'foo',
},
{ // this object should fail in the `value` mismatch (missing required attribute)
op: 'replace',
path: '/data/to/baz',
value: {
foo: 'bar',
},
},
]
compiledValidator(data)
console.log(compiledValidator.errors)
The schema defines a number of object shapes that the items of an incoming data list should match. The first data item matches the first anyOf subschema; the second data item, however, is missing a required attribute (bar) in its value object.
When I run the above code I get the following output:
[
{
instancePath: '/1/path',
schemaPath: '#/items/anyOf/0/properties/path/pattern',
keyword: 'pattern',
params: { pattern: '/data/to/foo/bar' },
message: 'must match pattern "/data/to/foo/bar"'
},
{
instancePath: '/1/value',
schemaPath: '#/items/anyOf/1/properties/value/required',
keyword: 'required',
params: { missingProperty: 'bar' },
message: "must have required property 'bar'"
},
{
instancePath: '/1',
schemaPath: '#/items/anyOf',
keyword: 'anyOf',
params: {},
message: 'must match a schema in anyOf'
}
]
I understand the 2nd and the 3rd (last) errors. However, the first error seems to indicate that the path doesn't match the path requirement of the first subschema. It is true that the 2nd data item doesn't match the 1st subschema, but I don't see how that is relevant. I would expect the error to focus on the value, not the path, since the path does match the second subschema.
Is there a way to get the error reporting more focused around the errors that matter?
There is no way for the evaluator to know whether you intended the first "anyOf" subschema to match or the second, so the most useful thing to do is to show you all the errors.
It can be confusing because you don't need to resolve all the errors, just some of them, which is why some implementations also offer a hierarchical error format to make it easier to see relationships like this. Maybe if you request that ajv implement more of these error formats, it will happen :)
You can see that all of the errors pertain to the second item in the data by looking at the instancePath for each error, which all start with /1. This is the location within the data that generated the error.
So let's look at the second item and compare it against the schema.
{ // this object should fail in the `value` mismatch (missing required attribute)
op: 'replace',
path: '/data/to/baz',
value: {
foo: 'bar',
},
}
The schema says that an item should either (from the anyOf) have
path: '/data/to/foo/bar' and value: { type: 'string' }, or
path: '/data/to/baz' and value: { required: [ 'foo', 'bar' ] }
The reported errors are:
The first case fails because path is wrong.
The second case fails because /value/bar is not present.
The last error is just the anyOf reporting that none of the options passed.
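If you want to surface the errors that matter, one rough heuristic (a post-processing sketch, not an AJV option) is to keep only the errors with the deepest instancePath; the generic anyOf summary then drops out:
// Sketch: keep only the deepest errors, which usually sit closest
// to the real problem; the top-level anyOf summary is filtered out.
function focusedErrors(errors) {
  if (!errors || errors.length === 0) return []
  const depth = e => e.instancePath.split('/').length
  const maxDepth = Math.max(...errors.map(depth))
  return errors.filter(e => depth(e) === maxDepth)
}

compiledValidator(data)
console.log(focusedErrors(compiledValidator.errors))
// keeps the two field-level errors, drops the anyOf summary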
Link: https://www.npmjs.com/package/fastest-validator
I'm using fastest-validator in my Node.js application. I've been having great success with it. Unfortunately, I'm running into an issue that I can't seem to figure out.
If you take a look under String in the docs (located here: https://www.npmjs.com/package/fastest-validator#user-content-string), you'll see a numeric property.
I'm attempting to use numeric, since I have a string holding a number that I would like to validate. I wasn't able to find any examples, so I was left with the assumption that I must set the property to true. This doesn't appear to work. To test that theory, I also set alpha: true on another field holding a numeric string; I fully expected my 'label' field to pass and my 'value' field to fail, but both passed.
How are you supposed to use these properties?
See below for my code:
const Validator = require('fastest-validator')
const buildSchema = {}

buildSchema.catalogPages = {
type: "array", items: {
type: "object", props: {
label: { type: "string", empty: false, numeric: true },
value: { type: "string", empty: false, alpha: true }
}
}
}
const v = new Validator()
const check = v.compile(buildSchema)
check(valuesToCheck)
Here is my data:
const valuesToCheck = [
{
label: "9",
value: "9"
},
{
label: "12",
value: "12"
}
]
EDIT
I just figured out my issue. I have a handler function that checks my schemas and valuesToCheck. I was returning check(valuesToCheck) on the assumption that it simply returns true or false. In fact, if the check fails, it returns an array of what failed (which is awesome). I'm going to accept tam.teixeira's answer, as it helped me realize I need to update my handler to check whether the result is the boolean true or an array (i.e. the check failed).
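Concretely, the handler fix amounts to something like this (a sketch; the rest of my handler is omitted):
// fastest-validator's compiled check returns `true` on success and
// an array of error objects on failure, so test for the boolean
// explicitly rather than treating the result as truthy/falsy.
const result = check(valuesToCheck)
if (result === true) {
  // validation passed
} else {
  console.log(result) // array describing each failed field
}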
I don't think this is related to the schema itself. Take the following example:
const Validator = require('fastest-validator');
const schema = {
myItems: {
type: 'array',
items: {
type: 'object',
props: {
label: { type: 'string', empty: false, numeric: true },
value: { type: 'string', empty: false, alpha: true },
},
},
},
};
const valuesToCheck = {
myItems: [
{
label: '9',
value: '9',
},
{
label: '12',
value: '12',
},
],
};
const v = new Validator();
const check = v.compile(schema);
console.log(JSON.stringify(check(valuesToCheck), null, 4));
Indeed it fails validation with errors:
$> node fastestValidatorTest.js
[
{
"type": "stringAlpha",
"message": "The 'myItems[0].value' field must be an alphabetic string.",
"field": "myItems[0].value",
"actual": "9"
},
{
"type": "stringAlpha",
"message": "The 'myItems[1].value' field must be an alphabetic string.",
"field": "myItems[1].value",
"actual": "12"
}
]
So I got your expected behaviour: 'label' passes and 'value' fails.
My conclusion is that something may not be right in the buildSchema.catalogPages part; it looks a bit suspicious to me. I got the expected behaviour with your example, but I changed the schema object to be simpler.
PS: Thanks for the question; I didn't know the library, so I learned something new, which is cool.
How do I search through multiple fields with Elasticsearch? I've tried many queries, but none of them worked out. I want the search to be case-insensitive, and one field is more important than the other. My query looks like this:
const eQuery = {
query: {
query_string: {
query: `*SOME_CONTENT_HERE*`,
fields: ['title^3', 'description'],
default_operator: 'OR',
},
},
}
esClient.search(
{
index: 'movies',
body: eQuery,
},
function(error, response) {
},
)
Mapping looks like this:
{
mappings: {
my_index_type: {
dynamic_templates: [{ string: { mapping: { type: 'keyword' }, match_mapping_type: 'string' } }],
properties: {
created_at: { type: 'long' },
description: { type: 'keyword' },
title: { type: 'keyword' },
url: { type: 'keyword' },
},
},
_default_: {
dynamic_templates: [{ string: { mapping: { type: 'keyword' }, match_mapping_type: 'string' } }],
},
},
}
The problem is the type: keyword in your mapping for the description and title fields. Keyword-type fields are not analyzed, i.e. they store the indexed data exactly as it was sent to Elasticsearch. They come into use when you want to match things like unique IDs. Read: https://www.elastic.co/guide/en/elasticsearch/reference/current/keyword.html
You should read about analyzers for Elasticsearch. You can create your own custom analyzers very easily; they can transform the data you send them in different ways, such as lowercasing everything before indexing or searching.
Luckily, there are pre-configured analyzers for basic operations such as lowercasing. If you change the type of your description and title fields to type: text, your query will work.
Read: https://www.elastic.co/guide/en/elasticsearch/reference/current/text.html
Also, I see you have dynamic templates configured for your index, so if you do not specify the mappings for your index explicitly, all your string fields (like description and title) will be treated as type: keyword.
If you build your index like this:
PUT index_name
{
  "mappings": {
    "index_type": {
      "properties": {
        "description": { "type": "text" },
        "title": { "type": "text" },
        ...
      }
    }
  }
}
your problem should be solved. This is because type: text fields are analyzed by the standard analyzer by default which lowercases the input, among other things. Read: https://www.elastic.co/guide/en/elasticsearch/reference/current/analysis-standard-analyzer.html
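Once title and description are mapped as text, a multi_match query is another common way to express the boosting (a sketch; it uses the same field weights as your query_string version):
const eQuery = {
  query: {
    multi_match: {
      // analyzed full-text match, lowercased by the standard
      // analyzer; title counts three times as much as description
      query: 'SOME_CONTENT_HERE',
      fields: ['title^3', 'description'],
    },
  },
}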
I am new to LoopBack and trying to implement a remote method, one of whose arguments should conceptually be a dictionary of string -> string. I'm thinking of an array of objects where each object has a single member that must be a string. Is there any way to specify this when defining a remote method?
I have tried several things that either result in runtime errors or do not behave as expected.
{ arg: "settings", type: [{ {arg: "setting", type: "string"} }]
}],
and
{arg: "settings", type: [ { arg: "setting", type: "string" } ]
}],
I basically want to express that my method expects a list of pairs of strings.
Any suggestions?
This cannot be done through the LoopBack REST connector.
All you can do is { arg: 'settings', type: 'array', http: { source: 'query' } }.
Then your array can be delivered through ?settings=[{"setting1":"value1"},{"setting2":"value2"}]
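For context, a complete remote method definition around that argument might look like this (a sketch; the model and method names are made up):
// Hypothetical model and method names; only the `settings` argument
// definition comes from the line above.
MyModel.remoteMethod('applySettings', {
  accepts: [
    { arg: 'settings', type: 'array', http: { source: 'query' } },
  ],
  returns: { arg: 'result', type: 'object' },
  http: { verb: 'get', path: '/apply-settings' },
})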
If I understood correctly, you are looking for:
{
arg: "settings",
type: "object",
http: {
source: 'body'
},
default: [
{
string1: 'value1'
},
{
string2: 'value2'
}
]
}
Try it, it should work.
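With source: 'body', the client then sends the settings as the JSON request body, e.g. (the endpoint path is hypothetical):
// Hypothetical endpoint; with http: { source: 'body' } the whole
// request body is bound to the `settings` argument.
fetch('/api/MyModels/apply-settings', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify([{ string1: 'value1' }, { string2: 'value2' }]),
})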
I made an index "user-name" with a custom analyzer called 'autocomplete':
client.indices.create({
index: 'user-name',
type: 'text',
settings: {
analysis: {
filter: {
autocomplete_filter: {
type: 'edge-ngram',
min_gram: 1,
max_gram: 20
}
},
analyzer: {
autocomplete: {
type: 'custom',
tokenizer: 'standard',
filter: [
'lowercase',
'autocomplete_filter'
]
}
}
}
}
})
Then I try to reference this custom made analyzer by trying to use it in a mapping:
client.indices.putMapping({
index: 'user-name',
type: 'text',
body: {
properties: {
name: {
type: 'string',
analyzer: 'autocomplete',
search_analyzer: 'standard'
}
}
}
})
but then I get this error: "reason": "analyzer [autocomplete] not found for field [name]". Why isn't my autocomplete analyzer being detected? Thanks.
You're almost there. You simply need to put the index settings inside the body parameter:
client.indices.create({
index: 'user-name',
type: 'text',
body: {
settings: {
analysis: {
filter: {
autocomplete_filter: {
type: 'edge_ngram',
min_gram: 1,
max_gram: 20
}
},
analyzer: {
autocomplete: {
type: 'custom',
tokenizer: 'standard',
filter: [
'lowercase',
'autocomplete_filter'
]
}
}
}
}
}
})
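With the settings applied at creation time, the putMapping call from the question should then find the analyzer. One caveat (not part of the original question): recent Elasticsearch versions expect type: 'text' instead of the old type: 'string' for analyzed fields, so the mapping would look like:
client.indices.putMapping({
  index: 'user-name',
  type: 'text',
  body: {
    properties: {
      name: {
        type: 'text', // 'string' was split into 'text'/'keyword' in ES 5
        analyzer: 'autocomplete',
        search_analyzer: 'standard'
      }
    }
  }
})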