How to check UUID using regular expressions? - node.js

To validate input to an API, I'm sending companyId as a UUID, e.g. 71158c1a-56fd-4dd4-8e7f-fb95711a41de.
To implement this validation I used jsonschema with the following patterns (I tested all 3 of them):
/^[0-9a-fA-F]{8}\b-[0-9a-fA-F]{4}\b-[0-9a-fA-F]{4}\b-[0-9a-fA-F]{4}\b-[0-9a-fA-F]{12}$
/^[0-9a-fA-F]{8}\b-[0-9a-fA-F]{4}\b-[0-9a-fA-F]{4}\b-[0-9a-fA-F]{4}\b-[0-9a-fA-F]{12}$/gi
[\w]{8}-[\w]{4}-[\w]{4}-[\w]{4}-[\w]{12}
jsonschema:
companyId: {
type: "string",
default: "",
title: "The companyId Schema",
pattern: "/^[0-9a-fA-F]{8}\b-[0-9a-fA-F]{4}\b-[0-9a-fA-F]{4}\b-[0-9a-fA-F]{4}\b-[0-9a-fA-F]{12}$",
examples: ["71158c1a-56fd-4dd4-8e7f-fb95711a41de"],
},
For some reason the validation returned errors:
path: [ 'companyId' ],
property: 'instance.companyId',
message: 'does not match pattern "/^[0-9a-fA-F]{8}\\b-[0-9a-fA-F]{4}\\b-[0-9a-fA-F]{4}\\b-[0-9a-fA-F]{4}\\b-[0-9a-fA-F]{12}$"',
schema: {
type: 'string',
default: '',
title: 'The companyId Schema',
pattern: '/^[0-9a-fA-F]{8}\b-[0-9a-fA-F]{4}\b-[0-9a-fA-F]{4}\b-[0-9a-fA-F]{4}\b-[0-9a-fA-F]{12}$',
examples: [Array]
},
instance: 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee',
name: 'pattern',
argument: '/^[0-9a-fA-F]{8}\b-[0-9a-fA-F]{4}\b-[0-9a-fA-F]{4}\b-[0-9a-fA-F]{4}\b-[0-9a-fA-F]{12}$',
stack: 'instance.companyId does not match pattern "/^[0-9a-fA-F]{8}\\b-[0-9a-fA-F]{4}\\b-[0-9a-fA-F]{4}\\b-[0-9a-fA-F]{4}\\b-[0-9a-fA-F]{12}$"'
},

The latest version of JSON Schema supports the uuid format (you may need to explicitly turn on format validation in the implementation, as by default it is supposed to be annotation-only):
{
"$schema": "https://json-schema.org/draft/2020-12/schema",
"type": "string",
"format": "uuid",
}
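For example, with the Ajv validator (an assumption here, since the question uses the jsonschema package) format checking is enabled by registering the ajv-formats plugin:

const Ajv2020 = require("ajv/dist/2020");
const addFormats = require("ajv-formats");

const ajv = new Ajv2020();
addFormats(ajv); // registers "uuid" and the other draft formats so they are actually validated

const validate = ajv.compile({ type: "string", format: "uuid" });
console.log(validate("71158c1a-56fd-4dd4-8e7f-fb95711a41de")); // true
console.log(validate("not-a-uuid")); // false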
Also, you have a stray / in your pattern before the ^ anchor, so your pattern will never match.
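With the stray slash removed, the pattern approach also works. One further catch: inside a JavaScript double-quoted string, \b is a backspace character rather than a regex word boundary, so it would need to be written \\b; since the hyphens already delimit each group, the sketch below (using the jsonschema package from the question) simply drops the \b assertions:

const { Validator } = require("jsonschema");

const schema = {
  type: "object",
  properties: {
    companyId: {
      type: "string",
      title: "The companyId Schema",
      // no leading "/", and no "\b" (which JS would read as a backspace character)
      pattern: "^[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}$",
    },
  },
};

const result = new Validator().validate(
  { companyId: "71158c1a-56fd-4dd4-8e7f-fb95711a41de" },
  schema
);
console.log(result.valid); // true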

Related

Use of TypeSet vs TypeList in Terraform when building a custom provider

I'm developing a terraform provider by following this guide.
However, I stumbled over the choice between TypeList and TypeSet:
TypeSet implements set behavior and is used to represent an unordered collection of items, meaning that their ordering specified does not need to be consistent, and the ordering itself has no impact on the behavior of the resource.
TypeList is used to represent an ordered collection of items, where the order the items are presented can impact the behavior of the resource being modeled. An example of ordered items would be network routing rules, where rules are examined in the order they are given until a match is found. The items are all of the same type defined by the Elem property.
My resource requires exactly one of two blocks to be present, i.e.:
resource "foo" "example" {
name = "123"
# Only one of basketball / football is expected to be present
basketball {
nba_id = "23"
}
football {
nfl_id = "1"
}
}
and my schema looks the following:
Schema: map[string]*schema.Schema{
"name": {
Type: schema.TypeString,
},
"basketball": basketballSchema(),
"football": footballSchema(),
},
func basketballSchema() *schema.Schema {
return &schema.Schema{
Type: schema.TypeList,
Optional: true,
MaxItems: 1,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"nba_id": {
Type: schema.TypeString,
Required: true,
},
},
},
ExactlyOneOf: []string{"basketball", "football"},
}
}
func footballSchema() *schema.Schema {
return &schema.Schema{
Type: schema.TypeList,
Optional: true,
MaxItems: 1,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"nfl_id": {
Type: schema.TypeString,
Required: true,
},
},
},
ExactlyOneOf: []string{"basketball", "football"},
}
}
Is it accurate that both TypeSet and TypeList will work in this scenario, where we restrict the number of elements to either 0 or 1?

Elasticsearch 6.2 - Completion Suggester for long texts

I want to be able to search and suggest through long texts.
Below is my input string:
Clinical Support Specialist Medical Staff
If I search for clin or supp or spe or med or st, it should return the above string as a result.
Searches could also look like clinical sup or specialist medi.
Below is the mapping I created for the field:
description: {
type: 'completion',
analyzer: 'simple',
preserve_separators: true,
preserve_position_increments: true,
contexts: [
{
name: 'company',
type: 'category',
path: 'company',
},
],
}
And below is the search body:
descSuggestor: {
prefix: searchTerm,
completion: {
field: 'description'
}
}
Your question does not specify the Elasticsearch version or the environment in which you are writing your search query. However, you can do this with a regular expression in Kibana. For example, in the Dev Tools of Kibana, you could write something like:
GET utilization_aggregation_2018/_search
{
"query": {
"regexp" : {"name": "supp.*"}
}
}
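If you are issuing the search from Node instead of Kibana, the same regexp query can be sent as the request body. A minimal sketch, assuming the legacy elasticsearch client package and the host/index/field names from the example above:

const elasticsearch = require('elasticsearch');

// Host and index name are assumptions carried over from the Kibana example
const client = new elasticsearch.Client({ host: 'localhost:9200' });

client.search({
  index: 'utilization_aggregation_2018',
  body: {
    query: {
      regexp: { name: 'supp.*' }
    }
  }
}).then(resp => console.log(resp.hits.hits));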
Hope this helps!

Stratio Lucene for Cassandra

I am a newbie to Lucene. Just started. I have a few basic questions:
How do I view all the indexes that are created using Stratio Lucene?
How do I delete indexes created using Stratio Lucene?
What is the difference between type: "string" and type: "text" in the following?
fields: {
fld_1: {type: "string"},
fld_2: {type: "text"}
}
The reason I ask about the difference is that I ran into an error when trying to create my very first Lucene index. My column in Cassandra is 'fld_1 text', but when I tried to create an index on fld_1 as above, it threw an exception:
ConfigurationException: 'schema' is invalid : Unparseable JSON schema: Unexpected character ('}' (code 125)): was expecting either valid name character (for unquoted name) or double-quote (for quoted) to start field name
at [Source: {
fields: {
The Lucene index script:
CREATE CUSTOM INDEX lucene_index ON testTable ()
USING 'com.stratio.cassandra.lucene.Index'
WITH OPTIONS = {
'refresh_seconds': '1',
'schema': '{
fields: {
fld_1: {type: "string"},
fld_2: {type: "string"},
id: {type: "integer"},
test_timestamp: {type: "date", pattern: "yyyy/MM/dd HH:mm:ss"}
}
}'
};
Thanks!
First: You can't view only the Stratio Lucene indexes; the query below will show you all the indexes:
SELECT * FROM system."IndexInfo";
Second: You can delete an index with the DROP INDEX index_name command, e.g.:
DROP INDEX test;
Third: In a Stratio Lucene index, string is a not-analyzed text value, while text is a language-aware text value analyzed according to the specified analyzer.
This means that if you map a field as string, its value is indexed and queried exactly as stored. If you use text, the value is first analyzed by your specified analyzer (the default is default_analyzer, org.apache.lucene.analysis.standard.StandardAnalyzer) and then indexed and queried.
Edited:
You first have to create a dummy text column in Cassandra and then specify it when creating the index.
Example :
ALTER TABLE testtable ADD lucene text;
CREATE CUSTOM INDEX lucene_index ON testTable (lucene) USING 'com.stratio.cassandra.lucene.Index'
WITH OPTIONS = {
'refresh_seconds': '1',
'schema': '{
fields: {
fld_1: {type: "string"},
fld_2: {type: "string"},
id: {type: "integer"},
test_timestamp: {type: "date", pattern: "yyyy/MM/dd HH:mm:ss"}
}
}'
};
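To run a search against that index from Node, a sketch using the cassandra-driver package; the connection settings are placeholders, and the dummy-column query syntax shown here applies to older Stratio versions (newer 3.x releases use expr(index_name, '...') instead):

const cassandra = require('cassandra-driver');

const client = new cassandra.Client({
  contactPoints: ['127.0.0.1'],   // placeholder contact point
  localDataCenter: 'datacenter1', // placeholder data center
  keyspace: 'my_keyspace',        // placeholder keyspace
});

// A "match" on a string field compares against the exact stored value;
// on a text field it matches the analyzed terms.
const query =
  "SELECT * FROM testTable WHERE lucene = " +
  "'{query: {type: \"match\", field: \"fld_1\", value: \"some_value\"}}'";

client.execute(query)
  .then(result => console.log(result.rows))
  .finally(() => client.shutdown());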
For more: https://github.com/Stratio/cassandra-lucene-index/blob/branch-3.0.13/doc/documentation.rst#text-mapper

Allow swagger query param to be array of strings or integers

In building a REST API using Swagger 2 (OpenAPI), I want to allow a query param station_id to support the following:
?station_id=23 (returns station 23)
?station_id=23,45 (returns stations 23 and 45)
?station_id=[3:14] (returns stations 3 through 14)
?station_id=100% (% acts as a wildcard, so this returns things like 1001, 10049, etc.)
I use the following Swagger definition (an array of strings) in an attempt to accomplish this:
parameters:
- name: station_id
in: query
description: filter stations by station_id
required: false
type: array
items:
type: string
With this definition, all of the previous examples work except ?station_id=23, where Swagger validation fails with the following message:
{
"message": "Validation errors",
"errors": [
{
"code": "INVALID_REQUEST_PARAMETER",
"errors": [
{
"code": "INVALID_TYPE",
"params": [
"array",
"integer"
],
"message": "Expected type array but found type integer",
"path": [],
"description": "filter stations by station_id"
}
],
"in": "query",
"message": "Invalid parameter (station_id): Value failed JSON Schema validation",
"name": "station_id",
"path": [
"paths",
"/stations",
"get",
"parameters",
"0"
]
}
]
}
Note that if I quote the station_id, like ?station_id='23', validation passes and I get a correct response. But I'd really prefer not to have to use quotes. Something like a union type would help solve this, but as far as I can tell union types aren't supported.
I also have another endpoint, /stations/{id}, that can handle the case of a single id, but there are still many other (non-primary-key) numerical fields that I want to filter on in the way specified above, for instance station_latitude.
Any ideas for a workaround? Maybe I can use pattern (regex) somehow? If there is no workaround in the Swagger definition, is there a way to tweak or bypass the validator? This is a Node.js project using swagger-node, and I've upgraded swagger-express-mw to 0.7.0.
I think what you'd need is the anyOf or oneOf keyword, as provided by JSON Schema, so that you could define the type of your station_id parameter to be either a number or a string. anyOf and oneOf are supported in OpenAPI 3.0 but not in 2.0. An OpenAPI 3.0 definition would look like this:
openapi: 3.0.0
...
paths:
/something:
get:
parameters:
- in: query
name: station_id
required: true
explode: false
schema:
oneOf:
- type: integer # Optional? Array is supposed to cover the use case with a single number
example: 23
- type: array
items:
type: integer
minItems: 1
example: [23, 45]
- type: string
oneOf:
- pattern: '^\[\d+:\d+]$'
- pattern: '^\d+%$'
# or using a single pattern
# pattern: '^(\[\d+:\d+])|(\d+%)$'
example: '[3:14]'
As an alternative, perhaps you could add sortBy, skip, and limit parameters to keep the type uniform. For example, ?sortBy=station_id&skip=10&limit=10 would retrieve only stations 10 through 20.
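If you have to stay on Swagger 2.0, another workaround is to declare station_id as a plain type: string (optionally constrained with a pattern covering all four forms) and interpret the value in the route handler yourself. A sketch of such a parser; the function name and return shape are invented for illustration:

// Parses the four supported forms of station_id:
// "23" -> single id, "23,45" -> id list, "[3:14]" -> range, "100%" -> wildcard
function parseStationId(raw) {
  const range = raw.match(/^\[(\d+):(\d+)\]$/);
  if (range) {
    return { type: 'range', from: Number(range[1]), to: Number(range[2]) };
  }
  if (/^\d+%$/.test(raw)) {
    return { type: 'wildcard', prefix: raw.slice(0, -1) };
  }
  if (/^\d+(,\d+)*$/.test(raw)) {
    return { type: 'ids', ids: raw.split(',').map(Number) };
  }
  throw new Error('Invalid station_id: ' + raw);
}

console.log(parseStationId('23'));     // { type: 'ids', ids: [23] }
console.log(parseStationId('[3:14]')); // { type: 'range', from: 3, to: 14 }
console.log(parseStationId('100%'));   // { type: 'wildcard', prefix: '100' }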

Elasticsearch Error: SearchPhaseExecutionException: SearchParseException

I am getting the following error when I try to use a template search on an AWS Elasticsearch cluster with the query:
"match": { "title": "copyright" }
Parse Failure [Failed to parse source [{\"match\"{\"title\":\"copyright\"}}]]];
nested: Parse Failure [No parser for element [match]]];
The query is failing during the search phase, whilst trying to parse the query.
Why is the parse failing?
My query works fine against a localhost Elasticsearch instance.
Here is my mapping for the index type:
properties: {
title: { type: 'string' },
toc: {
type: 'nested',
properties: {
title: { type: 'string' },
},
},
},
The AWS Elasticsearch service supports an older version of Elasticsearch (1.5.2) than the one I was using (2.1).
This older version uses a different syntax for template searches, where the template attribute is used instead of the inline attribute to supply your template, as shown here.
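For illustration, the two request-body shapes might look like this (a sketch adapted from the question's match query; the param name is an arbitrary placeholder):

// Elasticsearch 2.x: the template source is supplied under "inline"
const bodyEs2 = {
  inline: { query: { match: { title: '{{search_term}}' } } },
  params: { search_term: 'copyright' }
};

// Elasticsearch 1.x (e.g. AWS ES 1.5.2): the template goes under "template"
const bodyEs1 = {
  template: { query: { match: { title: '{{search_term}}' } } },
  params: { search_term: 'copyright' }
};
// Both are sent to the _search/template endpoint.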
