I have a collection in this format:
[
    {
        'name': 'test',
        'features': ['features/id', 'features/id2', ...]
    }
]
I want to create a dynamic edge collection which connects documents that have the same features.
For example, if I have this collection:
[
    {
        'name': 'test',
        'features': ['features/id', 'features/id2']
    },
    {
        'name': 'test2',
        'features': ['features/id2']
    },
    {
        'name': 'test3',
        'features': ['features/id']
    }
]
The edge collection will automatically create these connections: test <-> test2; test <-> test3
You cannot create collections with AQL. What you can do instead is either:
- use a single edge collection and store attributes such as the name on the edges, so you can filter by them later in queries, or
- run a query to determine the distinct edge collection names, create an edge collection for each name via arangosh, the HTTP API or a driver, and then run a query for each name to create the edges in the respective edge collection.
Also see https://www.arangodb.com/docs/stable/graphs.html#multiple-edge-collections-vs-filters-on-edge-document-attributes
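As a sketch of the first option, a single AQL statement can connect every pair of documents that share at least one feature. The collection names (items, itemEdges) and the arangojs driver setup are assumptions here:

const { Database, aql } = require('arangojs');
const db = new Database({ url: 'http://localhost:8529' });

// For every pair of documents sharing at least one feature, insert one edge
// into the (pre-created) edge collection. a._key < b._key avoids duplicate
// and self edges.
async function connectByFeatures() {
    await db.query(aql`
        FOR a IN items
          FOR b IN items
            FILTER a._key < b._key
            FILTER LENGTH(INTERSECTION(a.features, b.features)) > 0
            INSERT { _from: a._id, _to: b._id } INTO itemEdges
    `);
}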
I don't think that this is what you are asking for however (see comment).
Is it possible in MongoDB to update a field in one collection when a document's timestamp expires in another collection in the same cluster? If it can be done, could you provide an example?
Example of what I am trying to accomplish:
collection1:
[{
    name: "ABC",
    Age: 26,
    value: true
}]
collection2:
[{
    name: "ABC",
    result: "pass"
}]
So, if the data with the name ABC expires in collection2, I want to change the value of ABC to false in collection1.
I know this is not exactly what you want, but with the following query you can get the same result without updating col1: use an aggregation to join the two collections, and if there is no data in col2 for a given name, return value as false, otherwise true.
db.col1.aggregate([
    // Join col2 into each col1 document as the array "t" (empty if no match).
    { $lookup: {
        from: "col2",
        localField: "name",
        foreignField: "name",
        as: "t"
    }},
    // value is false when no col2 document matched, true otherwise.
    { $project: {
        name: 1,
        Age: 1,
        value: { $cond: { if: { $eq: [ [], "$t" ] }, then: false, else: true } }
    }}
])
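If you do need to persist the computed value back into col1, one option is to extend the same pipeline with $merge. This is a sketch assuming MongoDB 4.4+ (which allows $merge to write back into the collection being aggregated):

db.col1.aggregate([
    { $lookup: { from: "col2", localField: "name", foreignField: "name", as: "t" } },
    // value is true when at least one col2 document matched, false otherwise.
    { $set: { value: { $gt: [ { $size: "$t" }, 0 ] } } },
    // Drop the join array, then write the documents back into col1.
    { $project: { t: 0 } },
    { $merge: { into: "col1", on: "_id", whenMatched: "replace", whenNotMatched: "discard" } }
])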
I'm rather new to Elasticsearch, so I'm coming here in hope of finding advice.
I have two indices in Elasticsearch, from two different CSV files.
The index_1 has this mapping:
{
    'settings': {
        'number_of_shards': 3
    },
    'mappings': {
        'properties': {
            'place': {'type': 'keyword'},
            'address': {'type': 'keyword'}
        }
    }
}
The file is about 400 000 documents long.
The index_2, built from a much smaller file (about 50 documents), has this mapping:
{
    'settings': {
        'number_of_shards': 1
    },
    'mappings': {
        'properties': {
            'place': {'type': 'text'},
            'address': {'type': 'keyword'}
        }
    }
}
The field "place" in index_2 is all of the unique values from the field "place" in index_1.
In both indices the "address" fields are postcodes of datatype keyword with a structure: 0000AZ.
Based on the "place" keyword in each index_1 document, I want to assign it the matching "address" term from index_2.
I have tried using the pandas library, but the index_1 file is too large. I have also tried creating modules based on pandas and elasticsearch, quite unsuccessfully, although I believe this is a promising direction. A good solution would stay within the elasticsearch library as much as possible, as these indices will later be used for further analysis.
If I understand correctly, it sounds like you want to use updateByQuery.
The request body should look a little like this:
{
'query': {'term': {'place': "placeToMatch"}},
'script': 'ctx._source.address = "updatedZipCode"'
}
This will update the address field of all documents with the matched place.
EDIT:
So what we want to do is use updateByQuery while iterating over all the documents in index2.
First step: get all the documents from index2. We'll just do this using the basic search feature:
{
    "index": 'index2',
    "size": 100, // get all documents; once size is over 10,000 you'll have to paginate.
    "body": {"query": {"match_all": {}}}
}
Now we iterate over all the results and use updateByQuery for each of the results:
// pseudo-code
doc = response[i]
// update by query request.
{
    index: 'index1',
    body: {
        // Match index_1 documents by place and copy over the index_2 address.
        'query': {'term': {'place': doc._source.place}},
        'script': `ctx._source.address = "${doc._source.address}"`
    }
}
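Put together with the official Node.js client, the whole loop might look like this. A sketch assuming v8 of @elastic/elasticsearch; the index names come from the question, the client config is an assumption:

const { Client } = require('@elastic/elasticsearch');
const client = new Client({ node: 'http://localhost:9200' });

async function copyAddresses() {
    // Step 1: fetch all (~50) documents from index_2 in one search.
    const result = await client.search({
        index: 'index_2',
        size: 100,
        query: { match_all: {} }
    });

    // Step 2: for each index_2 document, update every index_1 document
    // with the same place. Passing the value via params avoids script
    // recompilation and quoting problems.
    for (const hit of result.hits.hits) {
        await client.updateByQuery({
            index: 'index_1',
            query: { term: { place: hit._source.place } },
            script: {
                source: 'ctx._source.address = params.address',
                params: { address: hit._source.address }
            }
        });
    }
}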
I am able to search cases by company name:
var mySearch = search.create({
    type: search.Type.SUPPORT_CASE,
    columns: [{
        name: 'title'
    }, {
        name: 'company'
    }],
    filters: [{
        name: 'company',
        operator: 'is',
        values: 'Test'
    }]
});
return mySearch.run().getRange({
    start: 0,
    end: 1000
});
But I am not able to search cases by company id.
The companyId is 115.
The following do not work:
i)
filters: [{
name: 'company',
operator: 'is',
values: 115
}]
ii)
filters: [{
name: 'companyid',
operator: 'is',
values: 115
}]
According to the Case schema, company is a Text filter, meaning you would have to provide it with the precise name of the company, not the internal ID.
Instead, you may want to use the customer.internalid joined filter to provide the internal ID. Also, internal ID fields are nearly always Select fields, meaning they do not accept the is operator, but instead require the anyof or noneof operator.
You can find the valid operators for each field type on the Help page titled Search Operators.
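Applied to your original search, the joined filter might look like this (a sketch; it assumes the case's company is a customer record):

filters: [{
    name: 'internalid',
    join: 'customer',
    operator: 'anyof',
    values: [115]
}]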
First, you can try this:
var supportcaseSearchObj = search.create({
    type: "supportcase",
    filters: [
        ["company.internalid", "anyof", "100"]
    ],
    columns: [
        search.createColumn({
            name: "casenumber",
            sort: search.Sort.ASC
        }),
        "title",
        "company",
        "contact",
        "stage",
        "status",
        "profile",
        "startdate",
        "createddate",
        "category",
        "assigned",
        "priority"
    ]
});
Second: how did I get this? The answer is a hint that will make your life easier:
Install the "NetSuite Saved Search Code Export" Chrome plugin.
In the NetSuite UI, create your saved search (it is always easier than doing it in code).
After saving the search, open it again for editing.
At the top right corner (near the list and search menus on the NetSuite page), you will see a link "Export as script": click on it and you will get your code ;)
If you cannot install the Chrome plugin:
In the NetSuite UI, create your saved search (it is always easier than doing it in code).
In your code, load your saved search.
Add a log.debug to show the [loadedSearchVar].filters.
You can then copy what you see in the log and use it as your search filters.
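For example (the saved search id below is a placeholder):

var loadedSearch = search.load({
    id: 'customsearch_my_case_search'
});
log.debug('filters', JSON.stringify(loadedSearch.filters));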
Good luck!
I have data in a table in this format:
emp_id,emp_name,title,supervisor_id,supervisor_name
11,Anant,Business Unit Executive,8,abc
15,Raina,Analysis Manager Senior,11,Anant
16,Kumar,Conversion Manager,11,Anant
18,amit,Analyst Specialist,11,Anant
25,anil,senior engineer,18,amit
35,Pang Pang,senior engineer,25,anil
38,Xiang Xiang,UE engineer,25,anil
I will enter a supervisor_id and it should return all employees under that supervisor, then continue recursively until the lowest level is reached. I want to do this in Node and SQL Server with a recursive function.
I want this data in a hierarchical form like this:
var ds = {
    'emp_id': 11,
    'name': 'Anant',
    'title': 'Business Unit Executive',
    'children': [
        { 'name': 'Raina', 'emp_id': 15, 'title': 'Analysis Manager Senior' },
        { 'name': 'Kumar', 'emp_id': 16, 'title': 'Conversion Manager' },
        { 'name': 'amit', 'emp_id': 18, 'title': 'Analyst Specialist',
            'children': [
                { 'name': 'anil', 'emp_id': 25, 'title': 'senior engineer',
                    'children': [
                        { 'name': 'Pang Pang', 'emp_id': 35, 'title': 'senior engineer' },
                        { 'name': 'Xiang Xiang', 'emp_id': 38, 'title': 'UE engineer' }
                    ]
                }
            ]
        }
    ]
};
I'm not familiar with which library you are using to query the server, so I will pseudo-code those portions.
async function getEmployeesBySupervisorId(supervisor_id) {
    // getEmployeesQuery stands in for your <get-employees-query>; map its
    // results to {emp_id, name, title} as needed depending on your query
    // library, and default to [] when no employees are found.
    const employees = await getEmployeesQuery(supervisor_id) || [];
    await Promise.all(employees.map(async (employee) => {
        // Recursively attach each employee's subordinates as children.
        employee.children = await getEmployeesBySupervisorId(employee.emp_id);
    }));
    return employees;
}
That will get you an array of employees, with children nested until no more employees are found.
While this works, it fires many queries; it may be better to leverage SQL and your ORM to make this more efficient, as sketched below.
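For example, a recursive common table expression (CTE) can fetch the whole subtree in a single round trip, after which the tree is assembled in memory. A sketch using the mssql package; the employees table name and the connection string are assumptions:

const sql = require('mssql');

async function getTree(supervisorId) {
    const pool = await sql.connect(process.env.DB_CONNECTION_STRING);
    // The CTE starts with the direct reports of the given supervisor and
    // repeatedly joins to pick up each next level down.
    const result = await pool.request()
        .input('rootId', sql.Int, supervisorId)
        .query(`
            WITH subtree AS (
                SELECT emp_id, emp_name, title, supervisor_id
                FROM employees
                WHERE supervisor_id = @rootId
                UNION ALL
                SELECT e.emp_id, e.emp_name, e.title, e.supervisor_id
                FROM employees e
                JOIN subtree s ON e.supervisor_id = s.emp_id
            )
            SELECT * FROM subtree
        `);

    // Index every row by emp_id, then attach each node to its supervisor.
    const byId = new Map();
    for (const row of result.recordset) {
        byId.set(row.emp_id, { emp_id: row.emp_id, name: row.emp_name, title: row.title, children: [] });
    }
    const roots = [];
    for (const row of result.recordset) {
        const node = byId.get(row.emp_id);
        const parent = byId.get(row.supervisor_id);
        if (parent) parent.children.push(node);
        else roots.push(node);
    }
    return roots;
}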
Let me start off by stating that I'm aware of the populate method that Mongoose offers, but since my work has decided to move to the native MongoDB drivers in the future, I can no longer rely on populate to avoid work for myself later on.
If I have two collections of documents:
People:
{_id: 1, name: "Austin"}
{_id: 2, name: "Doug"}
{_id: 3, name: "Nick"}
{_id: 4, name: "Austin"}
Hobbies:
{Person: 1, Hobby: "Cars"}
{Person: 1, Hobby: "Boats"}
{Person: 3, Hobby: "Chess"}
{Person: 4, Hobby: "Cars"}
How should I go about joining each document in People with Hobbies? Ideally I would prefer to call the database only twice: once to get the people and a second time to get the hobbies, and then return the joined objects to the client app.
It depends on what your primary concern is. Generally, I would suggest embedding the hobbies into the People documents, like:
{
"_id":1,
"name":"Austin",
"hobbies": [
"Cars","Boats"
]
},
{
"_id":2,
"name":"Doug",
"hobbies": []
},
{
"_id":3,
"name":"Nick",
"hobbies": [
"Chess"
]
},
{
"_id":4,
"name":"Austin",
"hobbies": [
"Cars"
]
}
which would give you the possibility of using a multikey index on hobbies and would allow queries like this:
db.daCollection.find({"hobbies":"Cars"})
which would return both Austins as complete documents. Yes, I know there would be a lot of redundant entries. If you wanted to prevent that, you could model it like this:
{
"_id": 1,
"name":"Cars"
},...
{
"_id":1,
"name":"Austin",
"hobbies": [
1, ...
]
}
which would need an additional index on the name field of the hobby collection to be efficient. So when you want to find every person who is into cars, you would first need to look up the hobby's _id and then query for it like:
db.person.find({"hobbies":1})
I think it is easier, more intuitive, and for most use cases faster if you use the embedding.
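That said, if you keep the two collections and want exactly the two calls described in the question, the application-side join is straightforward. A sketch with the official Node.js driver (collection and field names taken from the question):

async function peopleWithHobbies(db) {
    // Call 1: all people.
    const people = await db.collection('people').find().toArray();
    // Call 2: all hobbies belonging to those people.
    const ids = people.map(p => p._id);
    const hobbies = await db.collection('hobbies').find({ Person: { $in: ids } }).toArray();
    // Join in memory: group hobbies by Person, then attach to each person.
    const byPerson = new Map();
    for (const h of hobbies) {
        if (!byPerson.has(h.Person)) byPerson.set(h.Person, []);
        byPerson.get(h.Person).push(h.Hobby);
    }
    return people.map(p => ({ ...p, hobbies: byPerson.get(p._id) || [] }));
}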