Loopback sends wrong datatype - node.js

In my MongoDB I have several arrays, but when I load them from the database they are strangely objects instead of arrays.
This strange behaviour started a couple of days ago; before that, everything worked fine and I got arrays out of the database.
Does LoopBack have some strange flags, set automatically, that transform my arrays into objects or something like that?
I currently have the newest versions of all my packages and have already tried older versions, but nothing changes this behaviour.
At first there was also a problem with saving the arrays: sometimes they were saved as objects, but since I removed all null objects from the database, only arrays have been saved.
The problem occurs with the sections array.
My model JSON is:
{
  "name": "Track",
  "plural": "Tracks",
  "base": "PersistedModel",
  "idInjection": true,
  "options": {
    "validateUpsert": true
  },
  "properties": {
    "alreadySynced": {
      "type": "boolean"
    },
    "approved": {
      "type": "boolean"
    },
    "deletedByClient": {
      "type": "boolean",
      "default": false
    },
    "sections": {
      "type": "object",
      "required": true
    },
    "type": {
      "type": "string"
    },
    "email": {
      "type": "string",
      "default": ""
    },
    "name": {
      "type": "string",
      "default": "Neuer Track"
    },
    "reason": {
      "type": "string",
      "default": ""
    },
    "date": {
      "type": "date"
    },
    "duration": {
      "type": "number",
      "default": 0
    },
    "correctnessscore": {
      "type": "number",
      "default": 0
    },
    "evaluation": {
      "type": "object"
    }
  },
  "validations": [],
  "relations": {},
  "acls": [],
  "methods": {}
}
I have also already tried changing the type from object to array, but without success.

Well, I'm not seeing any array type in your model, so I'm not sure what exactly your problem is.

Does LoopBack have some strange flags, set automatically, that transform my arrays into objects or something like that?

No, LoopBack has no such flags and it doesn't transform any data type unless you tell it to!
So if you define a property as object and pass in an array without validating the data type, the type of your data will change and it will be saved as an object instead of an array.
Let's define an array in your Track model:
"property": {
"type": "array"
}
Do you need array of objects?
"property": {
"type": ["object"]
}
Strings ?
"property": {
"type": ["string"]
}
Numbers ?
"property": {
"type": ["number"]
}
Read more about LoopBack types in the official documentation.
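Applied to the Track model above, the "sections" property would then look like this (a sketch following the answer's advice, assuming each section is an object):
"sections": {
  "type": ["object"],
  "required": true
}
With the property declared as an array type, the MongoDB connector should round-trip the value as an array rather than coercing it to an object.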

Related

JSON schema validation Draft 7: two types of data for one field

I need help creating a JSON schema for a value that could be an object or an array of objects.
lib: jsonschema==3.2.0
py: 3.8
I have two responses from the server.
First:
{
  "result": [
    {
      "brand": "Test"
    }
  ]
}
Second:
{
  "result": {
    "brand": "Test"
  }
}
As you can see, the difference between the two is that in the first case "result" is an array of objects, while in the second it is just an object.
My schema:
{
  "$schema": "http://json-schema.org/draft-07/schema",
  "$id": "http://example.com/example.json",
  "type": "object",
  "required": [
    "result"
  ],
  "properties": {
    "result": {
      "$id": "#/properties/result",
      "type": ["array", "object"],
      "additionalItems": true,
      "items": {
        "$id": "#/properties/result/items",
        "anyOf": [
          {
            "$id": "#/properties/result/items/anyOf/0",
            "type": "object",
            "required": [
              "brand"
            ],
            "properties": {
              "brand": {
                "$id": "#/properties/result/items/anyOf/0/properties/brand",
                "type": "string"
              }
            },
            "additionalProperties": true
          }
        ]
      }
    }
  },
  "additionalProperties": true
}
In the first case, when the server returns an array, the schema checks the "brand" type; in the second case, when it returns an object, it does not.
How can I set up two types for the one field "result" so that the "brand" type is checked in both cases?
Your schema can be fixed as follows:
{
  "$schema": "http://json-schema.org/draft-07/schema",
  "$id": "http://example.com/example.json",
  "type": "object",
  "required": [
    "result"
  ],
  "properties": {
    "result": {
      "$id": "#/properties/result",
      "anyOf": [
        {
          "$id": "#/properties/result/items/brand",
          "type": "object",
          "properties": {
            "brand": {
              "$id": "#/properties/result/items/anyOf/0/properties/brand",
              "type": "string"
            }
          },
          "required": [
            "brand"
          ],
          "additionalProperties": true
        },
        {
          "$id": "#/properties/result/items/array",
          "type": "array",
          "items": {
            "$ref": "#/properties/result/items/brand"
          }
        }
      ]
    }
  },
  "additionalProperties": true
}
However, it is customary to extract reusable portions of a schema into a separate "definitions" section, like so:
{
  "$schema": "http://json-schema.org/draft-07/schema",
  "$id": "http://example.com/example.json",
  "definitions": {
    "brand": {
      "type": "object",
      "properties": {
        "brand": {
          "$id": "#/properties/result/items/anyOf/0/properties/brand",
          "type": "string"
        }
      },
      "required": [
        "brand"
      ],
      "additionalProperties": true
    }
  },
  "type": "object",
  "required": [
    "result"
  ],
  "properties": {
    "result": {
      "$id": "#/properties/result",
      "anyOf": [
        {
          "$ref": "#/definitions/brand"
        },
        {
          "$id": "#/properties/result/items/array",
          "type": "array",
          "items": {
            "$ref": "#/definitions/brand"
          }
        }
      ]
    }
  },
  "additionalProperties": true
}
Notes:
To express that the property "result" may be of two different types, use the "anyOf" keyword for the property's schema. The value of "anyOf" should be an array whose items are the schemas for each possible type (here the "brand" object, or an array of "brand" objects).
See: Multiple Types.
To avoid duplicating the definition of the "brand" object, you can use "$ref" when defining a schema for the array's items to refer back to the previously given schema for "brand". As noted above, it is customary to place reused subschemas into a "definitions" section, but this is not required; "$ref" can refer to any schema item via JSON Pointer syntax.
See: Reuse.
When the items of a list have a single schema, "additionalItems" should not be used.
See: List validation.
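As a quick sanity check, here is a minimal sketch that validates both response shapes against the "definitions"-based schema. The question uses the Python jsonschema library; this sketch uses the Node.js Ajv validator instead (an assumption, not part of the question) and drops the "$id" annotations for brevity:
// npm install ajv@6  (Ajv v6 supports draft-07 by default)
const Ajv = require("ajv");

const schema = {
  definitions: {
    brand: {
      type: "object",
      properties: { brand: { type: "string" } },
      required: ["brand"]
    }
  },
  type: "object",
  required: ["result"],
  properties: {
    result: {
      anyOf: [
        { $ref: "#/definitions/brand" },                       // a single object
        { type: "array", items: { $ref: "#/definitions/brand" } } // or an array of them
      ]
    }
  }
};

const validate = new Ajv().compile(schema);
console.log(validate({ result: { brand: "Test" } }));   // true  (plain object)
console.log(validate({ result: [{ brand: "Test" }] })); // true  (array of objects)
console.log(validate({ result: [{ brand: 123 }] }));    // false ("brand" must be a string)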

Azure DataFactory DelimitedText dataset with parametrized schema

I'm trying to create a generic CSV dataset with a parametrized filename and schema, so I can use it in foreach loops over file lists, but I'm having trouble publishing it, and I don't know whether I'm doing something wrong or the framework docs are incorrect.
According to the documentation, the schema property is described as:
Columns that define the physical type schema of the dataset. Type: array (or Expression with resultType array), itemType: DatasetSchemaDataElement.
I have a dataset with a parameter named Schema of type Array, and "schema" set to an expression that returns this parameter:
{
  "name": "GenericCSVFile",
  "properties": {
    "linkedServiceName": {
      "referenceName": "LinkedServiceReferenceName",
      "type": "LinkedServiceReference"
    },
    "parameters": {
      "Schema": {
        "type": "array"
      },
      "TableName": {
        "type": "string"
      },
      "TableSchema": {
        "type": "string"
      }
    },
    "folder": {
      "name": "Folder"
    },
    "type": "DelimitedText",
    "typeProperties": {
      "location": {
        "type": "AzureDataLakeStoreLocation",
        "fileName": {
          "value": "@concat(dataset().TableSchema,'.',dataset().TableName,'.csv')",
          "type": "Expression"
        },
        "folderPath": "Path"
      },
      "columnDelimiter": ",",
      "escapeChar": "\\",
      "firstRowAsHeader": true,
      "quoteChar": "\""
    },
    "schema": {
      "value": "@dataset().Schema",
      "type": "Expression"
    }
  },
  "type": "Microsoft.DataFactory/factories/datasets"
}
However, when I publish, I get the following error:
Error code: BadRequest
Inner error code: InvalidPropertyValue
Message: Invalid value for property 'schema'
Am I doing something wrong? Are the docs wrong?
Yes, this is the expected behavior. If you need a dynamic value for column mapping, ignore the schema in the DelimitedText dataset: it is mainly a visual display of the physical schema information, it does not take effect during copy-activity column mapping, and setting it via an expression is not allowed. Instead, configure the copy activity's mapping as an expression and pass it a proper value when the trigger runs.
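A hedged sketch of what that looks like in a copy activity's code view, assuming a pipeline parameter named Mapping (the parameter name and the source/sink types are illustrative):
"typeProperties": {
  "source": { "type": "DelimitedTextSource" },
  "sink": { "type": "AzureSqlSink" },
  "translator": {
    "value": "@pipeline().parameters.Mapping",
    "type": "Expression"
  }
}
Here Mapping would be supplied at trigger time as an object of the form { "type": "TabularTranslator", "mappings": [ { "source": { "name": "srcCol" }, "sink": { "name": "SinkCol" } } ] }.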

Do Azure Logic Apps support oneOf, anyOf in JSON schema validation

I was trying to add JSON schema validation in a Logic App using the Parse JSON action.
I want to validate that either of two objects exists in the message (equivalent to an XSD choice).
For instance, messages may have either lastname or familyname:
{
  "name": "Alan",
  "familyname": "Turing"
}
Or
{
  "name": "Alan",
  "lastname": "Turing"
}
I modified the generated schema as follows:
{
  "type": "object",
  "properties": {
    "name": {
      "type": "string"
    },
    "oneOf": [
      {
        "lastname": {
          "type": "string"
        }
      },
      {
        "familyname": {
          "type": "string"
        }
      }
    ]
  }
}
Logic App execution then throws a validation error.
Just to test whether any other schema combination keyword works, I tried anyOf in place of oneOf, and it fails in execution as well.
Does Logic Apps support these extended validations? Am I missing some specific syntax here?
If you are validating that either familyname or lastname must be present, then you are missing the "required" attribute. Note also that in the working schema below, oneOf sits at the root of the schema alongside properties, not inside it.
{
  "type": "object",
  "properties": {
    "name": {
      "type": "string"
    }
  },
  "oneOf": [
    {
      "properties": {
        "familyname": {
          "type": "string"
        }
      },
      "required": [ "familyname" ]
    },
    {
      "properties": {
        "lastname": {
          "type": "string"
        }
      },
      "required": [ "lastname" ]
    }
  ]
}
This will validate the JSON. If you want to get the value out in a later step, you can use the coalesce function:
@coalesce(actionBody('Parse_JSON')?['familyname'], actionBody('Parse_JSON')?['lastname'])
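For example, a hedged sketch of a Compose action (in code view) that picks whichever name property is present; the action name 'Parse_JSON' is assumed to match the parse step above:
"Compose": {
  "type": "Compose",
  "inputs": "@coalesce(actionBody('Parse_JSON')?['familyname'], actionBody('Parse_JSON')?['lastname'])",
  "runAfter": {
    "Parse_JSON": [ "Succeeded" ]
  }
}
The ?[] operator returns null instead of failing when the property is absent, so coalesce falls through to the other name.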

Loopback.io | Case insensitive & unique properties

I've been reading the LoopBack framework documentation, but I couldn't find the answer I need.
I would like to know if there is any option to make a property case-insensitive in the database.
I know I can handle this in the front end of the application; however, front-end validations are extremely dangerous, as they can be bypassed.
I currently have a model with the following content in myModel.json:
{
  "name": "mymodel",
  "properties": {
    "id": {
      "type": "number",
      "required": true
    },
    "code": {
      "type": "string",
      "required": true,
      "index": {
        "unique": true
      }
    },
    "name": {
      "type": "string",
      "required": true
    }
  }
}
The code property has to be unique; however, I tried to insert the words "COD001" and "cod001" and they were both accepted.
You could use the 'check uniqueness' validation method with its ignoreCase option, so that the comparison is case-insensitive:
MyModel.validatesUniquenessOf('code', {ignoreCase: true});
Reference: https://apidocs.loopback.io/loopback-datasource-juggler/#validatable-validatesuniquenessof
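A minimal sketch of where this goes, in the model's script file (the message text is illustrative):
// common/models/mymodel.js
module.exports = function(Mymodel) {
  // Treat values that differ only by case as duplicates,
  // so "COD001" and "cod001" cannot both be inserted.
  Mymodel.validatesUniquenessOf('code', {
    ignoreCase: true,
    message: 'code is not unique'
  });
};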

How do I get Loopback to prefer short ids when doing id injection?

I have a LoopBack app which gives ids like "56dbfa7089223aca7946ca14" when creating models. I would prefer ids like "0" or "73". Is there a way to adjust id injection so that ids start at 0 and increment as base-10 integers?
The data store is MongoDB v2.6.10, running LoopBack v2.22 on Node v5.7.1 on Ubuntu 15.10.
Here's the relevant model JSON:
{
  "name": "Term",
  "base": "PersistedModel",
  "idInjection": true,
  "options": {
    "validateUpsert": true
  },
  "properties": {
    "name": {
      "type": "string",
      "required": true
    },
    "beginDate": {
      "type": "date",
      "required": true
    },
    "endDate": {
      "type": "date",
      "required": true
    }
  },
  "validations": [],
  "relations": {
    "lessons": {
      "type": "hasMany",
      "model": "Lesson",
      "foreignKey": ""
    },
    "classes": {
      "type": "hasMany",
      "model": "Class",
      "foreignKey": ""
    },
    "weeklySchedules": {
      "type": "hasMany",
      "model": "WeeklySchedule",
      "foreignKey": ""
    }
  },
  "acls": [],
  "methods": {}
}
There are several ways to do it; you can read about them in the official documentation.
The MongoDB developers state that it is completely OK to use your own id if you really need it, so go for it.
However, if you are doing it only because you "don't want ObjectId, because it does not look good", I suggest you get used to it.
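If you do go the custom-id route, here is a hedged sketch of one approach: disable id injection, declare a numeric id, and assign the next integer in a 'before save' operation hook. The hook logic is an illustration, not from the answer:
// common/models/term.json (fragment): declare your own numeric id
//   "idInjection": false,
//   "properties": {
//     "id": { "type": "number", "id": true, "generated": false },
//     ...
//   }

// common/models/term.js
module.exports = function(Term) {
  Term.observe('before save', function(ctx, next) {
    // Only assign ids to brand-new instances that don't have one yet.
    if (!ctx.isNewInstance || !ctx.instance || ctx.instance.id != null) {
      return next();
    }
    // Find the current highest id and add one. Note: this is racy under
    // concurrent inserts; a dedicated counters collection updated with
    // findAndModify is the safer pattern in MongoDB.
    Term.findOne({ order: 'id DESC' }, function(err, last) {
      if (err) return next(err);
      ctx.instance.id = last ? last.id + 1 : 0;
      next();
    });
  });
};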
