Regular expressions are a good way to parse regular strings, but as the strings become more complex, the regex grows disproportionately more complex.
What is the general approach to string parsing?
For example, I currently have defined a syntax ({who}[{where}].{what} = {value}) in which certain components may or may not be present.
It's simple enough to be easily understood but complex enough to require a large regex.
Generally, some components can be omitted; however, certain components depend on other components. Example:
'key = value' => {
who: null,
where: null,
what: 'key',
value: 'value'
}
'me.key = value' => {
who: 'me',
where: null,
what: 'key',
value: 'value'
}
'me[here].key = value' => {
who: 'me',
where: 'here',
what: 'key',
value: 'value'
}
'me[here] = value' => 'Error! Need to specify *what*.'
'[here].key = value' => 'Error! Need to specify *who* if *where* is specified.'
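A rough sketch of one possible approach (illustrative only; the function name, the lenient regex, and the error handling are assumptions, not part of the question): tokenize the left-hand side with a named-group regular expression, then enforce the dependency rules explicitly so each violation can report its own error.
const LHS = /^(?:(?<who>\w+))?(?:\[(?<where>\w+)\])?(?:\.(?<what>\w+))?$/;

function parseAssignment(input) {
  // Split off the value first; this sketch assumes the value contains no '='.
  const [lhs, ...rest] = input.split('=');
  if (rest.length !== 1) throw new Error('Expected exactly one "=".');
  const value = rest[0].trim();

  const match = LHS.exec(lhs.trim());
  if (!match) throw new Error('Unrecognised left-hand side.');
  let { who = null, where = null, what = null } = match.groups;

  // A bare 'key' lands in the `who` slot, so reinterpret it as `what`.
  if (who && !where && !what) { what = who; who = null; }

  // Dependency rules from the examples above.
  if (!what) throw new Error('Error! Need to specify *what*.');
  if (where && !who) throw new Error('Error! Need to specify *who* if *where* is specified.');

  return { who, where, what, value };
}

console.log(parseAssignment('key = value'));          // { who: null, where: null, what: 'key', value: 'value' }
console.log(parseAssignment('me[here].key = value')); // { who: 'me', where: 'here', what: 'key', value: 'value' }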
I'm trying to get an integer value from the user when this question is asked:
let q4 = {
  type: 'list',
  name: 'manager',
  message: 'Who do they report to?',
  choices: ['Jen', 'Rachel', 'Tania']
};

let answerProcessing = (answer) => {
  console.log(answer.manager);
};
but I can't figure it out from the documentation and I don't see any similar questions to this one. Perhaps it's really obvious but I can't get a non-string response.
You can give your choices a name and value.
The name will be output in the terminal.
The value can be whatever simple value you like, such as a number or a shorthand for the name.
let q4 = {
  type: 'list',
  name: 'manager',
  message: 'Who do they report to?',
  choices: [
    {
      name: 'Jen',
      value: 1,
    },
    {
      name: 'Rachel',
      value: 2,
    },
    {
      name: 'Tania',
      value: 3,
    }
  ]
};

let answerProcessing = (answer) => {
  console.log(answer.manager); // outputs 1, 2, or 3
};
Figured it out:
console.log(q4.choices.indexOf(answer.manager));
This will give the index value.
Leaving it up because I couldn't see other questions like it.
I have a simple Schema
const MediaElementSchema = {
  primaryKey: 'id',
  name: 'MediaElement',
  properties: {
    id: 'int',
    type: 'string',
    path: 'string'
  }
};
When I try to get all:
let elements = realm.objects('MediaElement')
Realm returns the results in an object like below:
{"0": Record1, "1" : Record2, etc}
Is there a way for Realm to return an array of the elements, like:
[Element1, Element2, etc]
I checked the documentation but didn't find anything relevant about the return type.
https://realm.io/docs/javascript/latest
You could just use plain old JavaScript to convert the object into an array.
let elements = {'0': 'Record1', '1' : 'Record2'};
elements = Object.keys(elements).map(key => elements[key]);
console.log(elements); // ["Record1", "Record2"]
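Alternatively (assuming the Realm JS version in use exposes Results as an iterable, array-like collection, which current versions do), Array.from or the spread operator should give the same plain array directly:
// Results from realm.objects() are array-like, so these should work as well:
let elementsArray = Array.from(realm.objects('MediaElement'));
// or
let elementsSpread = [...realm.objects('MediaElement')];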
I have the below array of objects
const reports = [{id:3, name:'three', description:'three d', other: 'other 3'}, {id:2, name:'two', description:'two d', other: 'other 2'}];
and I want to keep only two properties of each object; below is my desired output:
[{id:3, name:'three'}, {id:2, name:'two'}];
so I tried this:
const reportList = reports.map((report) => {id,name} );
console.log(reportList);
which throws an error:
ReferenceError: id is not defined
I can achieve this using the following approach:
this.reportList = reports.map((report) => ({
  id: report.id,
  name: report.name,
  description: report.description
}));
but that requires writing extra code. I want to access the properties directly by key; is there a way to achieve that?
You must wrap the returned object literal in parentheses; otherwise the curly braces will be interpreted as the function's body. Combined with destructuring the parameter, the following works:
const reports = [{
  id: 3,
  name: 'three',
  description: 'three d',
  other: 'other 3'
}, {
  id: 2,
  name: 'two',
  description: 'two d',
  other: 'other 2'
}];

const reportList = reports.map(({id, name}) => ({
  id,
  name
}));

console.log(reportList);
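This logs the desired result: [{ id: 3, name: 'three' }, { id: 2, name: 'two' }].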
Reference: Returning object literals in the MDN arrow functions documentation.
I have an object array in a reducer that looks like this:
[
  {id: 1, name: 'Mark', email: 'mark@email.com'},
  {id: 2, name: 'Paul', email: 'paul@gmail.com'},
  {id: 3, name: 'Sally', email: 'sally@email.com'}
]
Below is my reducer. So far, I can add a new object to the currentPeople reducer via the following:
const INITIAL_STATE = { currentPeople: [] };

export default function(state = INITIAL_STATE, action) {
  switch (action.type) {
    case ADD_PERSON:
      return {...state, currentPeople: [...state.currentPeople, action.payload]};
  }
  return state;
}
But here is where I'm stuck. Can I UPDATE a person via the reducer using lodash?
If I sent an action payload that looked like this:
{id: 1, name: 'Eric', email: 'eric@email.com'}
Would I be able to replace the object with the id of 1 with the new fields?
Yes, you can absolutely update an object in an array like you want to. And you don't need to change your data structure if you don't want to. You could add a case like this to your reducer:
case UPDATE_PERSON:
  return {
    ...state,
    currentPeople: state.currentPeople.map(person => {
      if (person.id === action.payload.id) {
        return action.payload;
      }
      return person;
    }),
  };
This can be shortened as well, using an implicit return and a ternary:
case UPDATE_PERSON:
  return {
    ...state,
    currentPeople: state.currentPeople.map(person =>
      (person.id === action.payload.id) ? action.payload : person),
  };
Mihir's idea about mapping your data to an object with normalizr is certainly a possibility and technically it'd be faster to update the user with the reference instead of doing the loop (after initial mapping was done). But if you want to keep your data structure, this approach will work.
Also, mapping like this is just one of many ways to update the object, and it requires browser support for Array.prototype.map(). You could use lodash's indexOf() to find the index of the user you want (this is nice because it breaks the loop when it succeeds instead of continuing the way .map would). Once you have the index, you can overwrite the object directly using its index. Make sure you don't mutate the Redux state, though; you'll need to be working on a clone if you want to assign like this: clonedArray[foundIndex] = action.payload;.
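A rough sketch of that clone-and-assign approach (illustrative only; it uses lodash's findIndex, which is equivalent to the indexOf/find combination described above, and the helper name is made up):
const _ = require('lodash');

// Replace the person whose id matches, without mutating the original array.
function updatePerson(currentPeople, updatedPerson) {
  const foundIndex = _.findIndex(currentPeople, { id: updatedPerson.id });
  if (foundIndex === -1) {
    return currentPeople; // nothing to update
  }
  const clonedArray = [...currentPeople]; // work on a clone, not the Redux state itself
  clonedArray[foundIndex] = updatedPerson;
  return clonedArray;
}

// Inside the reducer:
// case UPDATE_PERSON:
//   return { ...state, currentPeople: updatePerson(state.currentPeople, action.payload) };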
This is a good candidate for data normalization. You can effectively replace your data with the new one, if you normalize the data before storing it in your state tree.
This example is straight from Normalizr.
[{
  id: 1,
  title: 'Some Article',
  author: {
    id: 1,
    name: 'Dan'
  }
}, {
  id: 2,
  title: 'Other Article',
  author: {
    id: 1,
    name: 'Dan'
  }
}]
It can be normalized this way:
{
  result: [1, 2],
  entities: {
    articles: {
      1: {
        id: 1,
        title: 'Some Article',
        author: 1
      },
      2: {
        id: 2,
        title: 'Other Article',
        author: 1
      }
    },
    users: {
      1: {
        id: 1,
        name: 'Dan'
      }
    }
  }
}
What's the advantage of normalization?
You get to extract the exact part of your state tree that you want.
For instance, you have an array of objects containing information about articles. If you want to select a particular object from that array, you'll have to iterate through the entire array; in the worst case, the desired object is not present in the array at all. To overcome this, we normalize the data.
To normalize the data, store the unique identifier of each object in a separate array. Let's call that array results.
result: [1, 2, 3 ..]
Then transform the array of objects into an object keyed by id (see the second snippet). Call that object entities.
Ultimately, to access the object with id 1, simply do this: entities.articles["1"].
If you want to replace the old data with new data, you can do this:
entities.articles["1"] = newObj;
Use the native splice method of the array:
/* Find the item's index using lodash */
var index = _.indexOf(currentPeople, _.find(currentPeople, {id: 1}));
/* Replace the item at that index using splice */
currentPeople.splice(index, 1, {id: 1, name: 'Mark', email: 'mark@email.com'});
I am new to Avro, so please excuse me if this is a simple question.
I have a use case where I am using an Avro schema for record calls.
Let's say I have this Avro schema:
{
  "name": "abc",
  "namespace": "xyz",
  "type": "record",
  "fields": [
    {"name": "CustId", "type": "string"},
    {"name": "SessionId", "type": "string"}
  ]
}
Now, if the input is like this:
{
  "CustId": "abc1234",
  "SessionId": "000-0000-00000"
}
I want to apply regex validation to these fields and accept the input only if it matches the particular format shown above. Is there any way to include a regular expression in the Avro schema to specify this?
Are there any other data serialization formats that support something like this?
You should be able to use a custom logical type for this. You would then include the regular expressions directly in the schema.
For example, here's how you would implement one in JavaScript:
var avro = require('avsc'),
    util = require('util');

/**
 * Sample logical type that validates strings using a regular expression.
 */
function ValidatedString(attrs, opts) {
  avro.types.LogicalType.call(this, attrs, opts);
  this._pattern = new RegExp(attrs.pattern);
}
util.inherits(ValidatedString, avro.types.LogicalType);

ValidatedString.prototype._fromValue = function (val) {
  if (!this._pattern.test(val)) {
    throw new Error('invalid string: ' + val);
  }
  return val;
};
ValidatedString.prototype._toValue = ValidatedString.prototype._fromValue;
And how you would use it:
var type = avro.parse({
  name: 'Example',
  type: 'record',
  fields: [
    {
      name: 'custId',
      type: 'string' // Normal (free-form) string.
    },
    {
      name: 'sessionId',
      type: {
        type: 'string',
        logicalType: 'validated-string',
        pattern: '^\\d{3}-\\d{4}-\\d{5}$' // Validation pattern.
      }
    }
  ]
}, {logicalTypes: {'validated-string': ValidatedString}});
type.isValid({custId: 'abc', sessionId: '123-1234-12345'}); // true
type.isValid({custId: 'abc', sessionId: 'foobar'}); // false
You can read more about implementing and using logical types here.
Edit: For the Java implementation, I believe you will want to look at the following classes:
LogicalType, the base you'll need to extend.
Conversion, to perform the conversion (or validation in your case) of the data.
LogicalTypes and Conversions, a few examples of existing implementations.
TestGenericLogicalTypes, relevant tests which could provide a helpful starting point.