I am running the following query using Cypher on Neo4j 2.0.1:
MATCH (n) WHERE n.value = -4810333952080461631 OR n.value = -163182636343344959
RETURN n.value
Results:
-4810333952080462000
-163182636343344960
It seems that the values are getting rounded. I tried it through the web UI and the Node.js neo4j client.
When I browse to the node through the web UI, I can see that it holds the correct value.
This was opened in an issue: https://github.com/neo4j/neo4j/issues/2009. The JSON returned is correct. If it's not correct coming from the client you're using (you don't mention which), there is possibly a bug in the client, where the value is converted to a double and loses precision.
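For illustration, here is a minimal sketch of that failure mode in JavaScript, assuming the client parses the JSON values with plain Number: integers beyond Number.MAX_SAFE_INTEGER (2^53 - 1) cannot be represented exactly as IEEE-754 doubles.

// The values from the question, parsed as JavaScript numbers (doubles):
console.log(Number("-4810333952080461631")); // -4810333952080462000 (precision lost)
console.log(Number("-163182636343344959"));  // -163182636343344960
console.log(Number.MAX_SAFE_INTEGER);        // 9007199254740991

The printed values match the rounded results in the question exactly, which supports the client-side conversion theory.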
I've built a dynamic query generator that creates my desired queries based on many factors; however, in rare cases it behaved oddly. After a day of reading logs I found a situation that can be simplified to this:
db.users.find({att: 'a', att: 'b'})
What I expect is that MongoDB uses AND by default, so the above query's result should be an empty array. However, it's not!
But when I use AND explicitly, the result is an empty array:
db.users.find({$and: [{att: 'a'}, {att: 'b'}]})
In JavaScript, an object's keys must be unique; otherwise the value is replaced by the latest one.
(The MongoDB shell is based on JavaScript, so it follows JavaScript's rules.)
const t = {att: 'a', att: 'b'};
console.log(t); // { att: 'b' } (the first value is overwritten)
So in your case your query acts like this:
db.users.find({att: 'b'})
You have to handle this situation in your code if you want the result to be empty in that case; one way is sketched below.
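For example, assuming your generator can collect the conditions into an array before building the filter object, you can combine them with an explicit $and so duplicate fields survive:

// Collect each condition separately instead of merging keys into one object:
const clauses = [{att: 'a'}, {att: 'b'}];
const filter = clauses.length > 1 ? {$and: clauses} : (clauses[0] || {});
db.users.find(filter); // both conditions apply, so the result is []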
We are currently struggling with a batch query,
whose filter expressions seem to be ignored on the S4 side because of incorrect URL encoding.
/sap/opu/odata/sap/ZP2M_A_CONTRACT_SEARCH_HDR_CDS/ZP2M_A_CONTRACT_SEARCH_HDR?$filter=PurchaseContractID eq %274600002020%27&$select=*&$format=json
When executing the query using FluentHelperRead.execute(HttpClient),
the returned list of entities contains the expected result: exactly one entity.
When executing the query as a batch query, the following request is logged in the console:
GET ZP2M_A_CONTRACT_SEARCH_HDR?%24filter%3DPurchaseContractID+eq+%25274600002020%2527%26%24select%3D*%26%24format%3Djson HTTP/1.1
The collected list from all batch result parts contains all entities, i.e. the filter is not applied.
It seems that the query URL is encoded the wrong way
and that S4 ignores the filter expressions when they are encoded like this,
e.g. $filter is encoded to %24filter, which S4 ignores.
This seems to be a bug in the BatchRequestImpl.getRequest(ODataQueryImpl) method,
where URL encoding is applied a second time to already encoded URL parts:
if (systemQuery.indexOf("$format=json&$count=true") != -1)
{
    systemQuery = systemQuery.substring(0, systemQuery.indexOf("$format=json&$count=true") - 1);
    keysUrl.append("/$count");
}
systemQuery = URLEncoder.encode(systemQuery, "UTF-8"); // this is the line that encodes the query a second time
keysUrl.append("?");
The code line systemQuery = URLEncoder.encode(systemQuery, "UTF-8"); located in
BatchRequestImpl (1.38.0), line 295
BatchRequestImpl (1.42.2), line 307
encodes the systemQuery string a second time (including the already encoded parts of the filter expression).
When I undo this line's effect in the debugger and replace the spaces with %20 or '+', the batch query looks like this:
GET ZP2M_A_CONTRACT_SEARCH_HDR?$filter=PurchaseContractID%20eq%20%274600002020%27&$select=*&$format=json HTTP/1.1
GET ZP2M_A_CONTRACT_SEARCH_HDR?$filter=PurchaseContractID+eq+%274600002020%27&$select=*&$format=json HTTP/1.1
and it returns the expected result (exactly 1 entity).
This wrong encoding appears when using these library versions:
sdk-bom: 3.16.1
connectivity: 1.38.0
This issue appears in the newest SDK versions as well:
sdk-bom: 3.21.0
connectivity: 1.39.0
The issue also appears with the newest version of the connectivity JAR:
sdk-bom: 3.21.0
connectivity: 1.40.2
Debugging together with an ABAP/S4 colleague revealed
that S4 only applies filter expressions if the keyword $filter is found in the request;
%24filter%3D is ignored (which is why we get all entities when running the batch query).
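To see the effect in isolation, here is an illustrative sketch in Node.js, with encodeURIComponent standing in for Java's URLEncoder.encode (URLEncoder additionally writes spaces as '+'):

// The query after the first, correct encoding pass:
const once = "$filter=PurchaseContractID eq %274600002020%27&$select=*&$format=json";
// A second encoding pass destroys the OData keywords:
console.log(encodeURIComponent(once));
// %24filter%3DPurchaseContractID%20eq%20%25274600002020%2527%26%24select%3D*%26%24format%3Djson
// S4 no longer sees "$filter" and silently returns the unfiltered collection.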
My suggestion to solve it would be:
// decode the query first (to decode the filter expression)
systemQuery = URLDecoder.decode(systemQuery, "UTF-8");
// then encode the query once
systemQuery = org.apache.commons.httpclient.util.URIUtil.encodeQuery(systemQuery, "UTF-8");
My code for calling the batch request:
FluentHelperRead<?, MyEntity, ?> queryApi = myService.getAll... // with some filter expression added
BatchRequestBuilder batchRequestBuilder = BatchRequestBuilder.withService(MyService.DEFAULT_SERVICE_PATH);
ODataQuery query = queryApi.toQuery();
batchRequestBuilder.addQueryRequest(query);
HttpClient httpClient = HttpClientAccessor
.getHttpClient(DefaultErpHttpDestinationAccessor.get());
BatchRequest request = batchRequestBuilder.build();
BatchResult result = request.execute(httpClient);
// ... evaluate response
I think this is a general issue in the Cloud SDK.
Would it be possible to get this fixed in the next Cloud SDK release?
Can you share your code for the batch request? Do you use BatchRequestImpl directly?
The thing is, the SAP Cloud SDK relies on some dependencies, one of which provides BatchRequestImpl; if it's called directly, the bug is on the dependency side. I have already asked them to investigate this double-encoding issue. Unfortunately, we can't directly influence how fast it is resolved, and sometimes it takes longer than we'd like.
The good news is that we're working on replacing this dependency with our own implementation to solve exactly this kind of problem. Batch support is work in progress and should be available in beta around the end of next month for OData V4, and hopefully around the same time for OData V2 (this is not a hard commitment and depends on other priorities).
From here we have to wait for whichever happens first:
The bug is fixed on the dependency side
Our internal OData client implementation, including batch support, is ready
I hope this helps and explains the current solution path. If you share a bit about your deadlines and the potential impact, we'll be happy to take that into account.
This has been fixed within the dependency and as of version 3.25.0 the SAP Cloud SDK includes the fix.
I am using the Python bolt driver to create nodes in a neo4j database. These nodes get altered by apoc.trigger functions, and I want the returned BoltStatementResult to contain the altered versions of these nodes.
This is what I have tested so far:
My triggers are working as expected; the stored nodes are altered correctly.
I tried both the 'before' and 'after' phases.
I set the trigger functions to return the altered version.
I wrote a second query to fetch the new, updated node from the database, but this option is quite unsafe, as there is no unique identifier to match on.
My trigger function:
CALL apoc.trigger.add(
'onCreateNodeAddMetadata',
'UNWIND {createdNodes} AS n
SET n.uid = apoc.create.uuid(), n.timestamp = timestamp() RETURN n',
{phase: 'before'}
)
I expect the return value of my session.write_transaction to contain the added properties.
As a safe workaround (but see the caveat below), the Cypher query in your write_transaction can return the native ID of the created node (e.g., RETURN ID(n)).
Then, as long as you know the node was not deleted, you can perform a query for it with that ID (in this example, myID contains the ID value and is passed as a parameter):
MATCH (n) WHERE ID(n) = $myId
...
If the node could be deleted before you search for it by native ID, then this technique is not safe, since neo4j can re-assign the native ID of a deleted node to another newly-created node.
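As a minimal sketch of the two-step workaround, here it is with the JavaScript driver for illustration (the Python bolt driver is analogous; the Item label and the props argument are made up for the example):

const neo4j = require('neo4j-driver');

async function createAndRefetch(driver, props) {
  const session = driver.session();
  try {
    // Step 1: create the node and return its native ID.
    const created = await session.writeTransaction(tx =>
      tx.run('CREATE (n:Item) SET n += $props RETURN ID(n) AS id', { props }));
    const id = created.records[0].get('id');
    // Step 2: re-read the node; it now carries the trigger-added properties.
    const fetched = await session.readTransaction(tx =>
      tx.run('MATCH (n) WHERE ID(n) = $id RETURN n', { id }));
    return fetched.records[0].get('n').properties;
  } finally {
    await session.close();
  }
}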
The database is in the Azure cloud and is not currently used in production. There are 80,000 rows, and uprn is a VARCHAR(100).
I'm already using Joi to validate each UPRN as well.
I'm using knex with a SQL Server database and the following whereIn query:
knex(LOCATIONS.table).whereIn(LOCATIONS.uprn, req.body.uprns)
but this takes 8-12 s to complete and sometimes times out. If I run the output of .toQuery() on the same thing, SSMS returns the result within 1-2 s.
If I build a raw query, the output of .toQuery() or .toString() works in SSMS and returns results, but if I run the raw query directly through knex, it returns 0 results.
I'm looking to either fix what's making whereIn so slow or get the raw query working.
EDIT 1:
After much debugging and experimenting, it seems that the bug is due to how knex deals with arrays, so I wrote a for-of loop that adds a ? placeholder for each array element and then passed the array as the parameters.
This led me to realize that the performance issue is due to the way SQL Server parameterises queries.
I ended up building a raw query string with all of the parameters and validating the input with a Joi string/regex config:
Joi.string()
.min(1)
.max(35)
.regex(/^[a-z\d\-_\s]+$/i)
allowing only alphanumeric characters, dashes, underscores and whitespace, which should prevent SQL injection.
I'm going to look deeper into the security implications of this and might create a separate login that can only SELECT data from that table and nothing more, to run these queries with.
In the end I needed to handle it raw and validate separately; a sketch of the placeholder approach follows.
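For illustration, the placeholder approach might look roughly like this (the table and column names are assumptions, not the actual schema):

// Build one '?' placeholder per UPRN and bind the whole array to a raw query.
const uprns = req.body.uprns; // each entry already validated with the Joi schema above
const placeholders = uprns.map(() => '?').join(', ');
const rows = await knex.raw(
  `SELECT * FROM locations WHERE uprn IN (${placeholders})`,
  uprns
);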
I am using Titan 0.4.2 with Cassandra 2.0.7 as the storage back end. I have used Rexster server 2.4.0 to insert vertices into Titan. However, while trying to update a vertex property using the Rexster client, I am getting a NullPointerException.
RexsterClient client = RexsterClientFactory.open("localhost", "titangraph");
client.execute("g.getVertex(8).setProperty('name','William')");
The above code throws a NullPointerException. However, the script g.getVertex(8).setProperty('name','William') runs perfectly fine in the Gremlin console.
How can I update a Titan vertex property using Rexster RexPro?
I will assume that you are saying the NullPointerException (NPE) is coming from the script executed on the server-side. In other words, the problem is a result of running:
g.getVertex(8).setProperty('name','William')
and not something in the client instantiation or related to other client side code beyond the script itself.
With that assumption in mind, I could not recreate your error. The execute method does return a list with a single null within it, but I don't think you're referring to that as your problem, given the assumption. So there are really only two things I can think of that could be wrong:
The vertex with ID 8 does not exist, so g.v(8) returns null
g is null
To verify, just execute g.v(8). If it returns null, then item one above is the issue. If you still get an NPE, then item two is the problem. If item two is the problem, then the name of the graph you are referencing, titangraph, is either not right or there is a bug in Rexster's handling of that binding. To figure that out, execute this instead:
g = rexster.getGraph('titangraph')
g.v(8)
If you still have an NPE, then I'd have to say you need to check the configuration in your rexster.xml more carefully. If it works, then you should likely report a bug in Rexster.