I am a complete beginner with Elasticsearch, and I got this error:
StatusCodeError: [illegal_argument_exception] Fielddata is disabled on text fields by default. Set fielddata=true on [time-stamp] in order to load field data in memory by reversing the inverted index. Note that this can however use significant memory. Alternatively use a keyword field instead.
I saw a few blog posts saying I have to do something like
"field": "user.keyword"
but I tried to put this one-line snippet in my code and it didn't help. Please help me out; I don't even know what field and user correspond to in my case/code.
my code:
esClient.search({
  index: ESindex,
  body: {
    sort: [{ "time-stamp": { "order": "desc" } }],
    size: req.query.count,
    query: {
      match_phrase: { "participant-id": req.query["participant-id"] }
    }
  }
})
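Based on those blogs, my current guess (only a guess, and it assumes my index used the default dynamic mapping, which creates a keyword sub-field for every text field) is that the fix in my case is to sort on time-stamp.keyword instead of time-stamp:

esClient.search({
  index: ESindex,
  body: {
    // sorting on the keyword sub-field avoids loading fielddata for the text field
    sort: [{ "time-stamp.keyword": { "order": "desc" } }],
    size: req.query.count,
    query: {
      match_phrase: { "participant-id": req.query["participant-id"] }
    }
  }
})

Is that the right way to apply the "field": "user.keyword" advice here?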
The problem I am facing is as follows:
Search value: 'cooking'
JSON object:
data: {
  skills: {
    items: [ { name: 'cooking' }, ... ]
  }
}
Expected result: Should find all the "skill items" that contain 'cooking' inside their name, using TypeORM and Nest.js.
The current code does not support search on the backend, and I need to implement it. I want to use TypeORM features rather than filtering the results in JavaScript.
Current code: (returns data based on the userId)
const allItems = this.dataRepository.find({ where: [{ user: { id: userId } }] })
I investigated the PostgreSQL documentation on its JSON functions, and even though I understand how to write a raw SQL query for this, I am struggling to convert it to the TypeORM equivalent.
Note: I researched many StackOverflow issues before creating this question, but do inform me if I missed the right one. I will be glad to investigate.
Can you help me figure out the way to query this with TypeORM?
UPDATE
Let's consider the simple raw query:
SELECT *
FROM table1 t
WHERE t.data->'skills' @> '{"items":[{ "name": "cooking"}]}';
This query returns a row whenever any item within the items array matches the exact name - in this case, "cooking".
That's totally fine, and it can be executed as a raw request, but it is certainly not easy to maintain in the future, nor does it support pattern matching and wildcards (I couldn't find a way to do that; if you know how, please share!). Still, this solution is good enough when you only have to work with exact matches. I'll keep this question updated with new findings.
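For reference, the raw query can at least be wrapped in TypeORM's query builder so the SQL is not completely free-floating (a sketch based on my repository above; the :match parameter name is arbitrary):

const items = await this.dataRepository
  .createQueryBuilder("t")
  // exact-match containment, equivalent to the raw query above
  .where("t.data -> 'skills' @> CAST(:match AS jsonb)", {
    match: JSON.stringify({ items: [{ name: "cooking" }] }),
  })
  .getMany();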
Use Like in the where clause:
servicePoint = await this.servicePointAddressRepository.find({
  where: [
    { ...isActive, name: Like("%" + key + "%"), serviceExecutive: { id: userId } },
    { ...isActive, servicePointId: Like("%" + key + "%") },
    { ...isActive, branchCode: Like("%" + key + "%") },
  ],
  skip: (page - 1) * limit,
  take: limit,
  order: { updatedAt: "DESC" },
  relations: ["serviceExecutive", "address"],
});
This may help you! I'm matching against the key variable here.
I've been trying to get full-text search to work for a while now without any success. The current documentation has this example:
[Op.match]: Sequelize.fn('to_tsquery', 'fat & rat') // match text search for strings 'fat' and 'rat' (PG only)
So I've built the following query:
Title.findAll({
  where: {
    keywords: {
      [Op.match]: Sequelize.fn('to_tsquery', 'test')
    }
  }
})
And keywords is defined as a TSVECTOR field.
keywords: {
  type: DataTypes.TSVECTOR,
},
It seems like it's generating the query properly, but I'm not getting the expected results. This is the query being generated by Sequelize:
Executing (default): SELECT "id" FROM "Tests" AS "Test" WHERE "Test"."keywords" @@ to_tsquery('test');
And I know that there are multiple records in the database that have 'test' in their vector, such as the following one:
{
  "id": 3,
  "keywords": "'keyword' 'this' 'test' 'is' 'a'",
}
so I'm unsure as to what's going on. What would be the proper way to search for matches based on a TSVECTOR field?
Funnily enough, these days I am working on the same thing and running into the same problem.
I think part of the solution is here (How to implement PostgresQL tsvector for full-text search using Sequelize?), but I haven't been able to get it to work yet.
If you find examples, I'm interested. Otherwise as soon as I find the solution that works 100% I will update this answer.
What I also notice is that when I add data (seeds) from Sequelize, it doesn't add the lexeme position numbers after the words in the field in question. Do you see the same behavior?
Last thing: did you create the index?
CREATE INDEX tsv_idx ON data USING gin(column);
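One more thing I tried that at least changed what gets stored (take it as an assumption, not a verified fix): let PostgreSQL build the vector on insert instead of passing a raw string, e.g. in a seed:

await Title.create({
  // to_tsvector normalizes words to lexemes and records their positions
  keywords: Sequelize.fn('to_tsvector', 'english', 'this is a test keyword'),
});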
I am seeing some differences in behaviour between ApolloProvider and MockedProvider and it's throwing an error in testing.
Assuming I have the following query:
query {
  Author {
    id: authorID
    name
  }
}
In ApolloProvider this query creates entries in the Apollo Cache using the field alias as the key, each Author in the cache has an id. Therefore, Apollo can automatically merge entities.
When using MockedProvider, this is not the case. When I mock the following response:
const mockResponse = {
  data: {
    Author: {
      id: 'test!!',
      name: 'test'
    },
  },
}
I get the following error:
console.warn
Cache data may be lost when replacing the Author field of a Query object.
To address this problem (which is not a bug in Apollo Client), define a custom merge function for the Query.Author field, so InMemoryCache can safely merge these objects:
existing: {"authorID":"test!!"...
So the exact same query uses id (the field alias) as the key in ApolloProvider, while MockedProvider just adds authorID as another field entry; it ignores the field alias and has no key.
Obviously now nothing is able to merge. My first guess is that it's because MockedProvider does not have access to the schema, so it doesn't know that authorID is of type ID? Or am I way off?
One thing that's really weird to me is that my mockResponse doesn't even provide an authorID. My mockResponse is { id: "test!!" } but the cache shows an entry for {"authorID":"test!!"}, so it's somehow 'unaliased' itself.
I'm really struggling to understand what is happening here. Any insight at all would be enormously useful.
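For what it's worth, the only workaround I've found so far (purely an assumption on my part, since I still don't understand the root cause) is including __typename in the mocked response, so InMemoryCache can identify the object the way it does under ApolloProvider:

const mockResponse = {
  data: {
    Author: {
      __typename: 'Author', // lets the cache normalize and merge the entity
      id: 'test!!',
      name: 'test',
    },
  },
}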
I have a Cloudant DB which contains documents for user access logs. Example:
{
  "name": "John Doe",
  "url": "somepage.html",
  "dateaccessed": "2016-08-23T21:20:25.502Z"
}
I created a search index with a function:
function (doc) {
  if (doc.dateaccessed) {
    var d = new Date(doc.dateaccessed);
    index("dateaccessed", d.getTime(), {store: true});
  }
}
Now this setup is working as expected with just a normal query. Example:
{
  q: 'dateaccessed:[1420041600000 TO 1471987625266]',
  include_docs: true,
  sort: '-dateaccessed<number>',
}
However, I wish to limit the results - let's say 5 at a time (which can be done with the "limit: 5" argument) - and I want to add pagination, i.e. be able to move to the next or previous 5 results.
I checked the cloudant documentation and there's an argument there called "bookmark" (https://cloudant.com/for-developers/search/) but I'm not sure how to use it.
May I request any insights on this?
The Cloudant documentation shows examples of how to use bookmarks, but the gist is this: the server returns a bookmark in each search response; to request the next page, you pass that bookmark back via the bookmark parameter, either in your JSON object for POST or in the query params for GET.
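For example, here is a rough sketch with the Cloudant/nano Node.js client (the db handle and the design document/index names are placeholders for whatever you used when creating the search index):

// first page: 5 results plus a bookmark
db.search('app', 'dateaccessed', {
  q: 'dateaccessed:[1420041600000 TO 1471987625266]',
  include_docs: true,
  sort: '-dateaccessed<number>',
  limit: 5
}, function (err, firstPage) {
  // next page: same query, plus the bookmark from the previous response
  db.search('app', 'dateaccessed', {
    q: 'dateaccessed:[1420041600000 TO 1471987625266]',
    include_docs: true,
    sort: '-dateaccessed<number>',
    limit: 5,
    bookmark: firstPage.bookmark
  }, function (err, secondPage) {
    // secondPage.rows now holds results 6-10
  });
});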
I am new to Elasticsearch and am facing a problem writing a search query that returns all matched records in my collection. The following is my query to search records:
{
  "size": "total no of records", // here I need to get the total number of records in the collection
  "query": {
    "match": {
      "first_name": "vineeth"
    }
  }
}
By running this query I am only getting a maximum of 10 records, but I am sure there are more than 10 matching records in my collection. I searched a lot and finally found the size parameter for the query, but in my case I don't know the total count of records. I think giving an arbitrarily large number to size is not good practice, so how do I manage this situation? Please help me solve this issue. Thanks!
It's not very common to display all results; instead, you use from and size to specify a range of results to fetch. So your query (for fetching the first 10 results) should look something like this:
{
  "from": 0,
  "size": 10,
  "query": {
    "match": {
      "first_name": "vineeth"
    }
  }
}
This works better than setting size to a ridiculously large value. To check how many documents matched your query, read hits.total (the total number of hits) from the response.
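For example, with the Node.js client used elsewhere on this page (the index name is a placeholder), you could read the total and page accordingly:

esClient.search({
  index: 'myindex',
  body: {
    from: 0,
    size: 10,
    query: { match: { first_name: 'vineeth' } }
  }
}).then(function (resp) {
  // resp.hits.total is the total number of matches,
  // even though resp.hits.hits contains at most 10 of them
  console.log(resp.hits.total, resp.hits.hits.length);
});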
To fetch all the records you can also use the scroll concept. It's like a cursor in databases: with scroll, you get the docs batch by batch, which reduces CPU and memory usage.
For more info, refer to:
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/search-request-scroll.html
To get all records, per the docs, you should use scroll.
Here is the doc:
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/search-request-scroll.html
But the idea is to specify your search and indicate that you want to scroll it:
curl -XGET 'localhost:9200/twitter/tweet/_search?scroll=1m' -d '
{
  "query": {
    "match": {
      "title": "elasticsearch"
    }
  }
}'
In the scroll param you specify how long you want the search results to remain available. You can then retrieve them with the returned scroll_id and the scroll API.
In new versions of Elasticsearch (e.g. 7.x), it is better to use pagination (such as search_after) than scroll, which is no longer recommended for this:
https://www.elastic.co/guide/en/elasticsearch/reference/current/paginate-search-results.html
deprecated in 7.0.0:
GET /_search/scroll/<scroll_id>
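A minimal search_after sketch with the Node.js client (index and field names are placeholders; the sort needs a unique tiebreaker, here _id for simplicity, though a dedicated document field is cheaper):

esClient.search({
  index: 'twitter',
  body: {
    size: 10,
    sort: [{ "date": "desc" }, { "_id": "asc" }],
    query: { match: { title: 'elasticsearch' } }
  }
}).then(function (page1) {
  var last = page1.hits.hits[page1.hits.hits.length - 1];
  // pass the last hit's sort values back to fetch the next page
  return esClient.search({
    index: 'twitter',
    body: {
      size: 10,
      sort: [{ "date": "desc" }, { "_id": "asc" }],
      search_after: last.sort,
      query: { match: { title: 'elasticsearch' } }
    }
  });
});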