couchdb futon document editor - can I customize the indentation rules?

Suppose that I want to customize the indentation rules of the Futon document editor; where and how can I do that?
I'll elaborate.
The Futon editor lays out a document like this (which to my taste is completely annoying):
{
   "_id": "1326017821636",
   "_rev": "2-51ab614953437181a24f1c073fbc6201",
   "doc_type": 0,
   "step": 2,
   "data": {
      "map1": {
         "attr1": 73031,
         "attr2": "strval"
      },
      "map2": {
         "att1": 52001,
         "att2": "strval"
      },
      "mapmap": {
         "map": {
            "id11": {
               "id": "id11",
               "attr": "attr",
               "attr2": 2222
            },
            "id1211": {
               "id": "id1211",
               "attr": "attr",
               "attr2": 2222
            }
         }
      }
   }
}
And what would I want to change, you may ask? It seems pretty standard.
Well, I'm not a standard person. From my observations, many standards evolved arbitrarily and suffer from a lack of thought. Besides, if I were a standard-follower I wouldn't be asking about customization ;)
In short:
- 3-space indentation. Why 3? Not 2 and not 4, just 3? LOL
- block formation - opening a block breaks the line in the wrong place
- commas are on the wrong side
So I want it to be like this:
(and I even have the JS code that does it, I just need help with where to put it)
{ "_id" : "1326017821636"
, "_rev" : "2-51ab614953437181a24f1c073fbc6201"
, "doc_type" : 0
, "step" : 2
, "data" :
{ "map1" :
{ "attr1" : 73031
, "attr2" : "strval"
}
, "map2" :
{ "att1" : 52001
, "att2" : "strval"
}
, "mapmap" :
{ "map" :
{ "id11" :
{ "id" : "id11"
, "attr" : "attr"
, "attr2" : 2222
}
}
, { "id1122" :
{ "id" : "id11"
, "attr" : "attr"
, "attr2" : 2222
}
}
}
}
}
Why do I do it this way?
- It looks more tabular: all the syntax scaffolding of the same object/array sits in the same column (who put the comma on the wrong side of the statement anyway?).
- No redundant wasted empty lines.
- Only the start of a block is an edge case (as opposed to the other way, where you have a case for beginning a block, a case for ending a block, and a case for every line).
It would have been fine if I could do my own indentation and Futon would not ruin it every time it validates the document. But since it does, I need to get into this mechanism and replace its indenter with one of my own.
Any directions?
P.S:
If you know the answer here - you might know the answer to this question:
couchdb futon document editor - can I customize the document validation part?

Again, after a quick browse, this is where you might want to look:
https://github.com/apache/couchdb/blob/master/share/www/script/futon.browse.js#L899
You will have a corresponding /share/www/script folder on your local CouchDB instance if you want to play around with editing it live.
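For the formatting itself, here is a rough sketch of a comma-first indenter in plain JavaScript. The function name and the hook point are assumptions: you would need to find the spot in futon.browse.js (around the line linked above) where the document source is pretty-printed into the editor and call this instead.

// Comma-first JSON formatter (sketch only -- not Futon's actual API).
// Hypothetical hook: replace the call that pretty-prints the document
// before it is written into the editor with formatCommaFirst(doc).
function formatCommaFirst(value, indent) {
  indent = indent || "";
  if (value === null || typeof value !== "object") {
    return JSON.stringify(value);                     // scalars as-is
  }
  var isArray = value instanceof Array;
  var keys = isArray ? value.map(function (v, i) { return i; })
                     : Object.keys(value);
  if (keys.length === 0) { return isArray ? "[]" : "{}"; }
  var lines = keys.map(function (key, i) {
    var prefix = (i === 0) ? (isArray ? "[ " : "{ ") : indent + ", ";
    var label = isArray ? "" : JSON.stringify(key) + " : ";
    var child = value[key];
    var nested = child !== null && typeof child === "object" &&
                 Object.keys(child).length > 0;
    if (nested) {
      // non-empty object/array: key on its own line, block opens below it
      return prefix + label + "\n" + indent + "  " +
             formatCommaFirst(child, indent + "  ");
    }
    return prefix + label + formatCommaFirst(child, indent);
  });
  return lines.join("\n") + "\n" + indent + (isArray ? "]" : "}");
}

Calling formatCommaFirst(doc) on the document above produces the comma-first layout shown earlier. Since Futon re-formats on validation (as you noted), the same function would have to be wired into that path as well.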

Related

Null when filtering Many-to-Many relationship with JHipster

I have an issue with filtering in JHipster.
Here is my (relevant) jhipster-jdl.jh file:
entity Exercise {
    name String required
}
entity Difficulty {
    name String required
}
entity Language {
    name String required
}
relationship ManyToMany {
    Exercise{language(name)} to Language
}
relationship ManyToOne {
    Exercise{difficulty} to Difficulty
}
filter Exercise
I generated the Spring Boot service with JHipster and did not change anything.
Let's say I have an exercise called "test" with difficulty "easy" and languages "spanish" and "dutch".
When I query the GET exercises endpoint with the filter name.equals=test:
http://localhost:8080/myservice/api/exercises?nameId.equals=test
I get this answer:
[
    {
        "id": 1000,
        "difficulty": {
            "id": 5,
            "name": "easy"
        },
        "languages": null,
        "name": "test"
    }
]
As you can see, the issue is that I don't have direct access to the languages linked to my exercise.
Note that the difficulty field has no issue because it is a many-to-one relationship.
The database is not the source of these issues, because if I query the GET exercises/{id} endpoint with the exercise's id:
http://localhost:8080/myservice/api/exercises/1000
I get the right result:
{
    "id": 1000,
    "difficulty": {
        "id": 5,
        "name": "easy"
    },
    "languages": [
        {
            "id": 200,
            "name": "spanish"
        },
        {
            "id": 205,
            "name": "dutch"
        }
    ],
    "name": "test"
}
Now let's try to query the GET exercises endpoint with the filter languageId.greaterOrEqualThan=200 (for the sake of the example):
http://localhost:8080/myservice/api/exercises?languageId.greaterOrEqualThan=200
Then the response will be:
[
    {
        "id": 1000,
        "difficulty": {
            "id": 5,
            "name": "easy"
        },
        "languages": null,
        "name": "test"
    },
    {
        "id": 1000,
        "difficulty": {
            "id": 5,
            "name": "easy"
        },
        "languages": null,
        "name": "test"
    }
]
Notice that the exercise comes out twice (or n times if it has n languages meeting the constraint, I checked), which is problematic.
I feel like something in the JHipster generator is broken, but that seems unlikely because I did not find anybody else talking about this quite crippling issue.
Did I do something wrong when generating my JHipster project? Or is it a genuine issue?
Please feel free to ask for any other piece of code, I'm not sure what could be relevant. Thanks.
Note: I noticed the exercise endpoint filters for the languages field use the singular (e.g. language.equals); I don't know if this is normal for a many-to-many relationship.

mongoose, nodejs - add reference of current schema object to the previous schema object

I am using Mongoose and Node.js with an MVC architecture.
I have two collections, crops and pesticides, and I want a many-to-many relationship between these two collections.
For example, if I have 2 crops like below:
{
    "_id" : ObjectId("5af1d1d54558fae1d0010bb4"),
    "nameOfCrop" : "Tomato",
    "imageOfCrop" : "tomatoimage",
    "soilType" : " almost all soil types except heavy clay",
    "waterNeeded" : "water once every two or three days",
    "tagCrop" : "Vegetables",
    "pesticideForCrop" : [ ]
}
{
    "_id" : ObjectId("5af1d1d54558fae1d0010bb5"),
    "nameOfCrop" : "Brinjal",
    "imageOfCrop" : "brinjalimage",
    "soilType" : "all types of soil varying from light sandy to heavy clay",
    "waterNeeded" : "Regularly irrigated",
    "tagCrop" : "Vegetables",
    "pesticideForCrop" : [ ]
}
and two pesticides like below:
{
    "_id" : ObjectId("5af7d3e735d4222b78a93838"),
    "cropForPesticide" : [ ],
    "nameOfPesticide" : "pesticide8",
    "imageOfPesticide" : "p8image",
    "__v" : 0
}
{
    "_id" : ObjectId("5af7d49122b63e0824ed2d3d"),
    "cropForPesticide" : [ ],
    "nameOfPesticide" : "pesticide9",
    "imageOfPesticide" : "p9image",
    "__v" : 0
}
What I want is for tomato's pesticideForCrop key to hold the ObjectIds of pesticide8 and pesticide9 (meaning tomato can be treated with pesticide8 and pesticide9), and simultaneously I want a reference (_id) to tomato in pesticide8's cropForPesticide key and in pesticide9's cropForPesticide key.
I have a very vague approach in mind: first I save a crop with the pesticideForCrop key empty at that point; then I save a pesticide, and while saving it I ask the user to select the crops which can be treated with that pesticide. I don't know how to code this. It would be nice if another feasible approach could be suggested, or if someone could point me in the right direction on how to code this.
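Here is a minimal sketch of how the two-way reference could be wired up with Mongoose. The model names (Crop, Pesticide) and the savePesticideWithCrops helper are made up for illustration; the field names come from your documents.

// Sketch: two schemas that reference each other by ObjectId.
const mongoose = require('mongoose');
const { Schema } = mongoose;

const cropSchema = new Schema({
  nameOfCrop: String,
  imageOfCrop: String,
  soilType: String,
  waterNeeded: String,
  tagCrop: String,
  pesticideForCrop: [{ type: Schema.Types.ObjectId, ref: 'Pesticide' }]
});

const pesticideSchema = new Schema({
  nameOfPesticide: String,
  imageOfPesticide: String,
  cropForPesticide: [{ type: Schema.Types.ObjectId, ref: 'Crop' }]
});

const Crop = mongoose.model('Crop', cropSchema);
const Pesticide = mongoose.model('Pesticide', pesticideSchema);

// When the user saves a pesticide and selects the crops it treats,
// store the crop ids on the pesticide and push the new pesticide id
// into each selected crop, so both sides stay linked.
async function savePesticideWithCrops(pesticideData, cropIds) {
  const pesticide = await Pesticide.create({
    ...pesticideData,
    cropForPesticide: cropIds
  });
  await Crop.updateMany(
    { _id: { $in: cropIds } },
    { $addToSet: { pesticideForCrop: pesticide._id } }
  );
  return pesticide;
}

When reading the data back, Crop.findById(id).populate('pesticideForCrop') (and the equivalent populate on the pesticide side) returns the full linked documents instead of just the ids.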

elasticsearch predictive search solution

I am trying to get a predictive drop-down search. How can I make the search always start from left to right?
For example, with the documents "I_kimchy park" and "park":
if I search only "par" I want to get only "park" back, but here I am getting both documents. How do I treat the empty space as a character?
POST /test1
{
    "settings": {
        "analysis": {
            "analyzer": {
                "autocomplete": {
                    "type": "custom",
                    "tokenizer": "standard",
                    "filter": [ "standard", "lowercase", "stop", "kstem", "edgeNgram", "whitespace" ]
                }
            },
            "filter": {
                "ngram": {
                    "type": "edgeNgram",
                    "min_gram": 2,
                    "max_gram": 15,
                    "token_chars": [ "letter", "digit" ]
                }
            }
        }
    }
}
PUT /test1/tweet/_mapping
{
    "tweet": {
        "properties": {
            "user": { "type": "string", "index_analyzer": "autocomplete", "search_analyzer": "autocomplete" }
        }
    }
}
POST /test1/tweet/1
{"user" : "I_kimchy park"}
POST /test1/tweet/3
{ "user" : "park"}
GET /test1/tweet/_search
{
    "query": {
        "match_phrase_prefix": {
            "user": "park"
        }
    }
}
That happens because your standard tokenizer splits your user field on white space. You can use the Keyword Tokenizer in order to treat the whole string as a single value (a single token).
Please keep in mind that this change may affect other functionality that uses this field. You may have to add a dedicated "not tokenized" user field for this purpose.
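As a rough sketch of what that could look like (keeping the edge-ngram bounds from the question; the filter name autocomplete_edge is just an example), the index settings might become:
POST /test1
{
    "settings": {
        "analysis": {
            "filter": {
                "autocomplete_edge": {
                    "type": "edgeNgram",
                    "min_gram": 2,
                    "max_gram": 15
                }
            },
            "analyzer": {
                "autocomplete": {
                    "type": "custom",
                    "tokenizer": "keyword",
                    "filter": [ "lowercase", "autocomplete_edge" ]
                }
            }
        }
    }
}
With the keyword tokenizer the whole value, including the space, stays a single token, so the edge ngrams of "I_kimchy park" all start from "i" and none of them starts with "par". You may also want a separate search analyzer (keyword + lowercase, without the ngram filter) so the query string itself is not ngrammed.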

ElasticSearch -- boosting relevance based on field value

I need to find a way in Elasticsearch to boost the relevance of a document based on a particular value of a field. Specifically, there is a special field in all my documents where the higher the field value is, the more relevant the doc that contains it should be, regardless of the search.
Consider the following document structure:
{
    "_all": { "enabled": "true" },
    "properties": {
        "_id": { "type": "string", "store": "yes", "index": "not_analyzed" },
        "first_name": { "type": "string", "store": "yes", "index": "yes" },
        "last_name": { "type": "string", "store": "yes", "index": "yes" },
        "boosting_field": { "type": "integer", "store": "yes", "index": "yes" }
    }
}
I'd like documents with a higher boosting_field value to be inherently more relevant than those with a lower boosting_field value. This is just a starting point -- the matching between the query and the other fields will also be taken into account in determining the final relevance score of each doc in the search. But, all else being equal, the higher the boosting field, the more relevant the document.
Anyone have an idea on how to do this?
Thanks a lot!
You can either boost at index time or query time. I usually prefer query-time boosting even though it makes queries a little bit slower, because otherwise I'd need to reindex every time I want to change my boosting factors, which usually need fine-tuning and need to be pretty flexible.
There are different ways to apply query time boosting using the elasticsearch query DSL:
Boosting Query
Custom Filters Score Query
Custom Boost Factor Query
Custom Score Query
The first three queries are useful if you want to give a specific boost to the documents which match specific queries or filters, for example if you want to boost only the documents published during the last month. You could use this approach with your boosting_field, but you'd need to manually define some boosting_field intervals and give each a different boost, which isn't that great.
The best solution would be to use a Custom Score Query, which allows you to make a query and customize its score using a script. It's quite powerful: with the script you can directly modify the score itself. First of all I'd scale the boosting_field values to a value from 0 to 1, for example, so that your final score doesn't become a big number. In order to do that you need to predict, more or less, the minimum and maximum values that the field can contain. Let's say minimum 0 and maximum 100000, for instance. If you scale the boosting_field value to a number between 0 and 1, then you can add the result to the actual score like this:
{
    "query": {
        "custom_score": {
            "query": {
                "match_all": {}
            },
            "script": "_score + (1 * doc.boosting_field.doubleValue / 100000)"
        }
    }
}
You can also consider using the boosting_field as a boost factor (_score * rather than _score +), but then you'd need to scale it to an interval with a minimum value of 1 (just add +1).
You can also tune the result to change its importance by adding a weight to the value that you use to influence the score. You'll need this even more if you want to combine multiple boosting factors together and give them different weights.
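For instance, a multiplicative variant with an (arbitrary) weight of 0.5 on the scaled value could look like this:
{
    "query": {
        "custom_score": {
            "query": {
                "match_all": {}
            },
            "script": "_score * (1 + 0.5 * doc.boosting_field.doubleValue / 100000)"
        }
    }
}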
With a recent version of Elasticsearch (version 1.3+) you'll want to use "function score queries":
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/query-dsl-function-score-query.html
A scored query_string search looks like this:
{
    "query": {
        "function_score": {
            "query": { "query_string": { "query": "my search terms" } },
            "functions": [
                { "field_value_factor": { "field": "my_boost" } }
            ]
        }
    }
}
"my_boost" is a numeric field in your search index that contains the boost factor for individual documents. May look like this:
{ "my_boost": { "type": "float", "index": "not_analyzed" } }
If you want to avoid doing the boosting each time inside the query, you might consider adding it to your mapping directly by setting a "boost" factor.
So your mapping then may look like this:
{
    "_all": { "enabled": "true" },
    "properties": {
        "_id": { "type": "string", "store": "yes", "index": "not_analyzed" },
        "first_name": { "type": "string", "store": "yes", "index": "yes" },
        "last_name": { "type": "string", "store": "yes", "index": "yes" },
        "boosting_field": { "type": "integer", "store": "yes", "index": "yes", "boost": 10.0 }
    }
}
If you are using Nest, you should use this syntax:
.Query(q => q
    .Bool(b => b
        .Should(s => s
            .FunctionScore(fs => fs
                .Functions(fn => fn
                    .FieldValueFactor(fvf => fvf
                        .Field(f => f.Significance)
                        .Weight(2)
                        .Missing(1)
                    ))))
        .Must(m => m
            .Match(ma => ma
                .Field(f => f.MySearchData)
                .Query(query)
            ))))

updating nested document with mongoDb + nodeJs

I have a structure like this:
{
    "_id" : ObjectId("501abaa341021dc3a1d0c70c"),
    "name" : "prova",
    "idDj" : "1",
    "list" : [
        {
            "id" : 1,
            "votes" : 2
        },
        {
            "id" : 2,
            "votes" : 4
        }
    ]
}
And I'm trying to increase votes with this query:
session_collection.update(
    { '_id': session_collection.db.bson_serializer.ObjectID.createFromHexString(idSession),
      'list.id': idSong },
    { $inc: { 'list.$.votes': 1 } },
    { safe: true },
    callback);
But it doesn't work: there are no errors, it just doesn't update anything.
I think it's because of the ['] (single quotation marks) around 'list.id' and 'list.$.votes', because the same query in the terminal works perfectly.
Thanks!
I suspect your matching is not working as you expect. The callback will return
function(err, numberofItemsUpdated, wholeUpdateObject)
numberofItemsUpdated should equal 1 if your matching worked. You'll need to check whether idSession and idSong are what you think they are.
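For example (note that the parseInt coercion is just a guess worth checking: idSong may arrive as a string from the request while the documents store "id" as a number):

// Log what the update actually did, and coerce idSong to a number
// in case it came in as a string from the request.
var songId = parseInt(idSong, 10);

session_collection.update(
    { '_id': session_collection.db.bson_serializer.ObjectID.createFromHexString(idSession),
      'list.id': songId },
    { $inc: { 'list.$.votes': 1 } },
    { safe: true },
    function (err, numberofItemsUpdated) {
        // should print 1 when the filter matched a document
        console.log('error:', err, 'items updated:', numberofItemsUpdated);
    });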
