We have recently noticed these errors being thrown for one of our edge collections.
None of the recommendations to unload and reload the index helped.
The index that this error log references is the default edge index (on _from and _to) that ArangoDB creates automatically and that I have no control over. I think this might explain why the unload/load suggestion didn't work.
In the meantime, this is not just an annoyance: because of the issue, certain queries don't produce the results we expect.
Unfortunately, I could not find any suggestions on how to address the issue, and I'm wondering whether the ArangoDB devs can suggest a solution.
The error logs I was referring to:
2020-04-10T03:44:10Z [1] ERROR [3acb3] {graphs} Could not extract indexed edge document, return 'null' instead. This is most likely a caching issue. Try: 'db.contributor_action.unload(); db.contributor_action.load()' in arangosh to fix this.
(the same error line repeated several more times)
AQL example that exposes the issue:
LET c1 = (
  FOR ca IN contributor_action
    COLLECT WITH COUNT INTO count
    RETURN count
)[0]
LET c2 = (
  FOR ca IN contributor_action
    FOR c IN contributor
      FILTER ca._from == c._id
      COLLECT WITH COUNT INTO count
      RETURN count
)[0]
RETURN {c1, c2}
The result shows the discrepancy between AQL execution with the index and without it:
{
"c1": 1123421,
"c2": 1121787
}
For those looking for answers: this GitHub issue might be related to my problem: https://github.com/arangodb/arangodb/issues/11303
I'm waiting for the 3.6.4 release to confirm.
Related
I am trying to check for the existence of a UUID as a primary key in my Django environment. When it exists, my code works fine, but when it's not present I get a "" is not a valid UUID error.
Here's my code:
uuid_exists = Book.objects.filter(id=self.object.author_pk,is_active="True").first()
I've tried other variations of this with .exists() or .all(), but I keep getting the ['“” is not a valid UUID.'] error.
I did come up with a workaround:
if self.object.author_pk != '':
    author_exists = Book.objects.filter(id=self.object.author_pk, is_active="True").first()
    context['author_exists'] = author_exists
Is this the best way to do this? I was hoping to be able to use a straight filter, without the extra guarding logic, but I've worked on it all afternoon and can't seem to come up with anything better. Thanks in advance for any feedback or comments.
I've had the same issue and this is what I have:
Wrapping it in a try/except (in my case it's a view, so it's supposed to return a Response object):
try:
    object = Object.objects.get(id=object_id)
except Exception as e:
    return Response(data={...}, status=status.HTTP_40...
It reaches the except branch (4th line), but somehow sends the '~your_id~' is not a valid UUID. text instead of the proper data, which might be enough in some cases.
This seems like an oversight, so it might get fixed soon. Unfortunately, I don't have enough time to investigate deeper.
So the solution I came up with is not ideal either, but hopefully it's a bit cleaner and faster than what you're using right now.
# Generate a list of possible object IDs
# (make use of filters in order to reduce the DB load)
possible_ids = [str(id) for id in Object.objects.filter(~ filters here ~).values_list('id', flat=True)]
# Return an error if the ID is not valid
if ~your_id~ not in possible_ids:
    return Response(data={"error": "Database destruction sequence initialized!"}, status=status.HTTP_401_UNAUTHORIZED)
# Keep working with the object
object = Object.objects.get(id=object_id)
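A different angle, sketched here as plain Python rather than anything Django-specific: validate the string with the standard library's uuid module before it ever reaches the ORM, so the empty-string case never triggers the "is not a valid UUID" error. The `is_valid_uuid` helper below is my own name, not a Django API:

```python
import uuid

def is_valid_uuid(value):
    """Return True if `value` parses as a UUID, else False."""
    try:
        uuid.UUID(str(value))
    except ValueError:
        return False
    return True
```

The query can then be guarded with `if is_valid_uuid(self.object.author_pk): ...` instead of comparing against the empty string, which also covers malformed non-empty IDs.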
I am using Google Cloud Datastore to store my application data. I need a query that returns all results matching Name, Brand, or Sku.
Querying on one of the fields returns records, but using all the fields together returns an error.
Query:
const term = "My Red";
const q = gstore.createQuery(req.params.orgId, "Variant")
.filter('brand', '=', term)
.filter('sku', '=', term)
.limit(10);
Error:
{"msec":435.96913800016046,"error":"no matching index found. recommended index is:
- kind: Variant
  properties:
  - name: brand
  - name: sku
","data":{"code":412,"metadata":{"_internal_repr":{}},"isBoom":true,"isServer":true,"data":null,"output":{"statusCode":500,"payload":{"statusCode":500,"error":"Internal Server Error","message":"An internal server error occurred"},"headers":{}}}}
Debug: internal, error
Also, I want to perform an OR operation to get matching results, as the query above returns data with an AND operation.
Please help me find the correct path to achieve the desired result.
Thanks in advance, and let me know if something is not clear.
The error indicates that the composite index required by the respective query is not in Serving state.
That means it's either not created/deployed or it was recently deployed and is still being built.
Composite indexes must be specifically created and deployed in your app.
If you didn't create it, you need to do so. The error message indicates the content the index configuration requires. If you're using the development server, it might create the index configuration automatically, but you still need to deploy it.
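Based on the recommendation embedded in the error message above, the corresponding entry in the app's index configuration would look something like this (a sketch; the `index.yaml` filename and layout assume the standard App Engine / Datastore index configuration format):

```yaml
indexes:
- kind: Variant
  properties:
  - name: brand
  - name: sku
```

After deploying it, the equality filters on brand and sku together should be served by this composite index once it finishes building.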
See Indexes docs for more details.
If you recently deployed the composite index, note that it can take a significant amount of time until the matching index is built, depending on how many entities of that kind already exist in the Datastore. You can check the status of the index build in the developer console, on the Indexes page.
I just read the ArangoDB GitHub issue "Feature request: On unique constrain violation exceptions include ids or keys that were involved" #342 (https://github.com/arangodb/arangodb/issues/342), because it is exactly what I was searching for.
Apparently this issue is already closed because it has been implemented in ArangoDB 3.2; however, I cannot find a way to retrieve the id or key from a unique constraint violation error.
For example, I run the following AQL queries on a collection with a hash index on the "projectName" attribute.
// first insert - everything OK
INSERT {"projectName":"test","startDate":"now"} IN projects
RETURN NEW

// second insert - will give a unique constraint violation error
INSERT {"projectName":"test","startDate":"tomorrow"} IN projects
RETURN NEW
The error that I get is: AQL: unique constraint violated (while executing). Errors: {u'code': 409, u'errorNum': 1210, u'errorMessage': u'AQL: unique constraint violated (while executing)', u'error': True}
There is no _key or _id or anything that tells me which document caused the unique constraint violation. Based on the closed issue it should be possible, but I don't see how; maybe I just understood it wrong.
Note: A similar question was posted in "Determining which unique constraint caused INSERT failure in ArangoDB", but I think it was answered before the release of ArangoDB 3.2.
I am using ArangoDB 3.2.3
Thanks
It looks like the index details are still omitted when a unique constraint violation is triggered from within AQL. In this case ArangoDB only returns a generic "unique constraint violated" error and does not indicate which index caused it.
This is unintentional, and there is now a pull request to fix this:
https://github.com/arangodb/arangodb/pull/3330
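Until that fix is released, a client-side workaround is to detect the violation and then re-query the collection on the indexed attribute to find the conflicting document. This is a sketch that assumes the error dict shape shown in the question; the helper name is mine, not an ArangoDB API:

```python
# ArangoDB error number for "unique constraint violated"
UNIQUE_CONSTRAINT_VIOLATED = 1210

def is_unique_violation(error):
    """Return True if an ArangoDB error dict reports errorNum 1210."""
    return bool(error.get("error")) and error.get("errorNum") == UNIQUE_CONSTRAINT_VIOLATED
```

When this returns True for an insert into projects, a follow-up query filtering on projectName would retrieve the existing document's _key and _id.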
I want to paginate the results of an LDAP query so that I get 50 users per page. The documentation at http://ldap3.readthedocs.io/searches.html?highlight=generator suggests that using a generator is the simplest way to do this, but it doesn't provide any detail on how to use it to achieve pagination. When I iterate through the generator object, it prints out every user entry, even though I specified paged_size=5 in my search query. Can anyone explain to me what's going on here? Thanks!
Try setting the paged_criticality parameter to True. It could be that the server is not capable of performing a paged search; if that's the case and paged_criticality is True, the search will fail rather than return all users.
Here is a similar setup that I used:
# Set up your LDAP connection
conn = Connection(*args)

# Create a generator (paged_size controls the page size
# requested from the server)
entry_generator = conn.extend.standard.paged_search(
    search_base=self.dc, search_filter=query[0],
    search_scope=SUBTREE, attributes=self.user_attributes,
    paged_size=1, generator=True)

# Then collect your results:
total_entries = 0
results = []
for entry in entry_generator:
    total_entries += 1
    results.append(entry)
    if total_entries % 50 == 0:
        # do something with the current batch of results
        pass
Otherwise, try setting paged_size to 50 and getting the results that way.
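If the server-side paging works, the 50-per-page batching itself is plain Python and independent of ldap3. A minimal sketch (the `chunked` helper is my own, not part of the library):

```python
from itertools import islice

def chunked(iterable, size):
    """Yield successive lists of at most `size` items from `iterable`."""
    it = iter(iterable)
    while True:
        page = list(islice(it, size))
        if not page:
            return
        yield page
```

For example, `for page in chunked(entry_generator, 50):` processes the entries 50 at a time, regardless of the paged_size used on the wire.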
I have two types of entities: Subjects and Correspondents. They're related to each other via a to-many relationship. I want a single fetch request that I can pass to an NSFetchedResultsController and that will return:
All of the subjects which have more than one correspondent.
All of the correspondents which have subjects that only they are a part of.
After trying a variety of things, I decided that it's not possible to make a single fetch that returns both Subjects and Correspondents, so I turned to StackOverflow and found someone else suggesting that you have a single entity which does nothing more than have relationships with the two entities you'd like to return.
So I created a third type of entity, which I called Folders, which each have an optional to-one relationship with a Subject and a Correspondent. It also has two attributes, hasCorrespondent and hasSubject, which are booleans keeping track of whether Subject or Correspondent are set.
So I wrote this predicate which returns the Folder entities:
(hasCorrespondent == 1 AND ANY correspondent.subjects.correspondents.@count == 1)
OR
(hasSubject == 1 AND subject.correspondents.@count >= 1)
The issue with this is I'm getting an error:
Terminating app due to uncaught exception 'NSInvalidArgumentException',
reason: 'Unsupported function expression count:(correspondent.subjects.correspondents)
So, any suggestions as to what I'm doing incorrectly? How can I accomplish what I'd like? Are there any additional details I should share?
UPDATE
With Martin's suggestion, I changed the offending portion to this:
SUBQUERY(correspondent.subjects, $s, $s.correspondents.@count == 1).@count > 0
But that generated a new error:
Keypath containing KVC aggregate where there shouldn't be one;
failed to handle $s.correspondents.@count
After googling around, I found suggestions to add a check that the collection being enumerated over has at least one object, but modifying the offending line as follows didn't change the error message (so as far as I can tell, it did nothing):
correspondent.subjects.@count > 0 AND
SUBQUERY(correspondent.subjects, $s, $s.correspondents.@count == 1).@count > 0
I had a similar problem. I created an additional attribute, countSubentity, and I update it whenever a subentity is added or removed. The predicate looks like this:
[NSPredicate predicateWithFormat:@"SUBQUERY(subcategories, $s, $s.countSubentity > 0).@count > 0"];
This workaround has been less than ideal, but it seems to get the job done:
1 - I added an attribute to subject called count.
2 - I set (part of) my expression to
ANY correspondent.subjects.count == 1
Note that no SUBQUERY() was necessary for this workaround.
3 - Every time I modify a subject's correspondents set, I run
subject.count = @(subject.correspondents.count);
I'm still hoping for a better solution and will be happy to mark any (working) better solution as correct.