Background Information
I've been following this tutorial:
http://adrianmejia.com/blog/2014/10/01/creating-a-restful-api-tutorial-with-nodejs-and-mongodb/#mongoose-read-and-query
I have a MongoDB database called test, and it has the following collections:
> show collections
chassis
ports
customers
locations
system.indexes
>
Symptoms
When I try to query for any document inside the chassis collection, it keeps returning null even though many records exist.
dev@devbox:~/nimble_express$ curl localhost:3000/chassis/55a7cc4193819c033d4d75c9
nulldev@devbox:~/nimble_express$
Problem
After trying many different things, I discovered the following issue in the MongoDB logs (I turned on verbose logging).
In the following log entry, notice the reference to "test.chasses" (which is a typo; it should be "chassis"):
2015-07-29T14:42:25.554-0500 I QUERY [conn141] query test.chasses query: { _id: ObjectId('55a7cc4193819c033d4d75c9') } planSummary: EOF ntoskip:0 nscanned:0 nscannedObjects:0 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:0 reslen:20 locks:{ Global: { acquireCount: { r: 2 } }, MMAPV1Journal: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { R: 1 } } } 0ms
I've grepped to make sure I don't have this typo anywhere in my code using the following command:
dev@devbox:~/nimble_express/nimbleApp$ grep -ris 'chasses' .
dev@devbox:~/nimble_express/nimbleApp$
I'm not sure where it's getting this collection name from.
Other queries against other collections work just fine. For example, I have a collection called "ports", and I pretty much copied and pasted all the logic I have for chassis, and it works just fine.
Here's the proof from the logs:
2015-07-29T14:58:15.127-0500 I QUERY [conn160] query test.ports planSummary: COLLSCAN cursorid:68808242412 ntoreturn:1000 ntoskip:0 nscanned:0 nscannedObjects:1000 keyUpdates:0 writeConflicts:0 numYields:7 nreturned:1000 reslen:188922 locks:{ Global: { acquireCount: { r: 16 } }, MMAPV1Journal: { acquireCount: { r: 8 } }, Database: { acquireCount: { r: 8 } }, Collection: { acquireCount: { R: 8 } } } 0ms
Any suggestions? I'm sure I have a typo somewhere... but I can't find it. All my code is within the nimble_express directory tree.
I copied the 'chassis' collection in my mongodb to "tempCollection" like this:
> db.createCollection('tempCollection')
{ "ok" : 1 }
> db.chassis.copyTo('tempCollection');
WARNING: db.eval is deprecated
57
> exit
bye
And then I created my schema and a route for this collection.
When I attempted a curl request to localhost:3000/tempCollection, I noticed in the logs that the collection name was wrong again.
2015-07-29T15:13:19.661-0500 I QUERY [conn168] query test.tempcollections planSummary: EOF ntoreturn:1000 ntoskip:0 nscanned:0 nscannedObjects:0 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:0 reslen:20 locks:{ Global: { acquireCount: { r: 2 } }, MMAPV1Journal: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { R: 1 } } } 0ms
And that's when it dawned on me. Something somewhere was pluralizing the collection names! So I googled and found this post:
Is there a way to prevent MongoDB adding plural form to collection names?
So the solution for me was to explicitly define the collection name like so:
module.exports = mongoose.model('chassis', ChassisSchema, 'chassis');
inside the model/Chassis.js file
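What bit me is mongoose deriving the collection name by pluralizing the model name. As a toy illustration only (this is NOT mongoose's actual pluralizer, just a sketch of the English "-is" to "-es" rule that produces "chasses" from "chassis"):

```javascript
// Hypothetical mini-pluralizer mimicking the effect described above.
// Passing an explicit collection name as the third argument to
// mongoose.model() bypasses this derivation entirely.
function naivePluralize(name) {
  const lower = name.toLowerCase();
  if (lower.endsWith('is')) return lower.slice(0, -2) + 'es'; // chassis -> chasses
  if (lower.endsWith('s')) return lower;                      // already plural
  return lower + 's';                                         // port -> ports
}

console.log(naivePluralize('chassis')); // chasses
console.log(naivePluralize('port'));    // ports
```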
Instead of marking this as a duplicate, I think I should leave this question as is for those who think they have a problem with collection names. Noobs like me assume they are doing something wrong rather than suspecting the system is performing some automagic for them! But I'm happy to do whatever the community suggests.
We can close this off as a duplicate. Or leave as is.
Related
I have a MongoDB log file. I want to display all lines matched by grep whose duration is greater than or less than a given value, e.g. "protocol:op_msg 523ms".
sed -n '/2022-09-15T12:26/,/2022-09-15T14:03/p' mongod.log| grep "op_msg 523ms"
Output of the log file:
7391:2022-11-22T09:23:23.047-0500 I COMMAND [conn26] command test.test appName: "MongoDB Shell" command: find { find: "test", filter: { creationDate: new Date(1663252936409) }, lsid: { id: UUID("7c1bb40c-5e99-4281-9351-893e3d23261d") }, $clusterTime: { clusterTime: Timestamp(1669126970, 1), signature: { hash: BinData(0, B141BFD0978167F8C023DFB4AB32BBB117B3CD80), keyId: 7136078726260850692 } }, $db: "test" } planSummary: COLLSCAN keysExamined:0 docsExamined:337738 cursorExhausted:1 numYields:2640 nreturned:1 queryHash:6F9DC23E planCacheKey:6F9DC23E reslen:304 locks:{ ReplicationStateTransition: { acquireCount: { w: 2641 } }, Global: { acquireCount: { r: 2641 } }, Database: { acquireCount: { r: 2641 } }, Collection: { acquireCount: { r: 2641 } }, Mutex: { acquireCount: { r: 1 } } } storage:{ data: { bytesRead: 28615999, timeReadingMicros: 288402 } } protocol:op_msg 523ms
I have tried the command below, but it only matches that exact value. I need to find all queries in the log file that took longer than 100ms.
sed -n '/2022-09-15T12:26/,/2022-09-15T14:03/p' mongod.log| grep "op_msg 523ms"
Option 1) You can use a grep bash one-liner like this to filter mongo queries in a specific execution-time range:
grep -P '\d+ms' /log/mongodb/mongos.log | while read LINE; do
  querytime="$(echo "$LINE" | grep -oP '\d+ms' | grep -oP '\d+')"
  if [ "$querytime" -gt 6000 ] && [ "$querytime" -lt 7000 ]; then
    echo "$LINE"
  fi
done
Explained:
Filter all log lines ending with a number followed by "ms".
Loop over those lines, extracting the execution time into the querytime variable.
Check whether $querytime is between 6000ms and 7000ms.
If $querytime is in the specified range, print the current $LINE.
Option 2) You can use the --slow MS, --fast MS options from mlogfilter from the mtools package where you can do something like:
mlogfilter mongod.log --slow 7000 --fast 6000
If all of the lines of interest have protocol:op_msg as the penultimate column, this becomes pretty trivial:
awk '$(NF-1) ~ /protocol:op_msg/ && $NF+0 > 100 { print }' mongod.log
Note that this completely ignores the units when doing the comparison and will also print lines in which the final column is 250h or 250ns, but that's easy enough to filter for. With a more accurate problem description, a more precise solution is certainly available.
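The same filter can also be sketched in plain Node.js, which makes the unit handling explicit (the sample log lines below are made up for illustration):

```javascript
// Keep only lines whose trailing "<N>ms" duration exceeds a threshold.
const lines = [
  '2022-11-22T09:23:23.047-0500 I COMMAND [conn26] command test.test ... protocol:op_msg 523ms',
  '2022-11-22T09:24:01.120-0500 I COMMAND [conn27] command test.test ... protocol:op_msg 42ms',
];

function slowerThan(lines, thresholdMs) {
  return lines.filter(line => {
    // Anchor on the "protocol:op_msg <N>ms" tail so other numbers
    // in the line (timestamps, counters) are not mistaken for durations.
    const m = line.match(/protocol:op_msg (\d+)ms$/);
    return m !== null && Number(m[1]) > thresholdMs;
  });
}

console.log(slowerThan(lines, 100).length); // 1 (only the 523ms line)
```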
I'm having this error and have no idea how it came about: it worked the last time I checked, and I haven't made a single change.
Build file 'D:\getVersionSoap\build.gradle' line: 134
Execution failed for task ':genJaxb'.
unable to parse the schema. Error messages should have been provided
Here is the piece of code:
"Line 134" is the line "xjc(destdir: sourcesDir)"
task genJaxb {
ext.sourcesDir = "${buildDir}/generated_sources/jaxb"
ext.classesDir = "${buildDir}/classes/jaxb"
ext.schemaDir = "${projectDir}/src/main/resources"
outputs.dir sourcesDir
doLast() {
project.ant {
taskdef name: "xjc", classname: "com.sun.tools.xjc.XJCTask",
classpath: configurations.jaxb.asPath
mkdir(dir: sourcesDir)
mkdir(dir: classesDir)
xjc(destdir: sourcesDir) {
schema(dir: schemaDir, includes: "**/*.xsd")
arg(value: "-wsdl")
produces(dir: sourcesDir, includes: "**/*.java")
}
javac(destdir: classesDir, source: 1.8, target: 1.8, debug: true,
debugLevel: "lines,vars,source",
classpath: configurations.jaxb.asPath,
includeantruntime: "false") {
src(path: sourcesDir)
include(name: "**/*.java")
include(name: "*.java")
}
copy(todir: classesDir) {
fileset(dir: sourcesDir, erroronmissingdir: false) {
exclude(name: "**/*.java")
}
}
}
}
}
Any idea? Thank you!
I am working through the tutorial where it says how to create a contract.
Here is their code:
function createFirstPing() {
const request = {
commands: {
applicationId: 'PingPongApp',
workflowId: `Ping-${sender}`,
commandId: uuidv4(),
ledgerEffectiveTime: { seconds: 0, nanoseconds: 0 },
maximumRecordTime: { seconds: 5, nanoseconds: 0 },
party: sender,
list: [
{
create: {
templateId: PING,
arguments: {
fields: {
sender: { party: sender },
receiver: { party: receiver },
count: { int64: 0 }
}
}
}
}
]
}
};
client.commandClient.submitAndWait(request, (error, _) => {
if (error) throw error;
console.log(`Created Ping contract from ${sender} to ${receiver}.`);
});
}
I want to create a similar request in my project that sends a field called 'datetime_added'. In my DAML code it is of type Time. I cannot figure out the proper syntax for this request. For example:
arguments: {
fields: {
sender: { party: sender },
receiver: { party: receiver },
count: { int64: 0 },
datetime_added: { time: '2019 Feb 19 00 00 00' }
}
}
The format in which I am expressing the time is not what is causing the problem (although I acknowledge that it's probably also wrong). The error I'm seeing is the following:
Error: ! Validation error
▸ commands
▸ list
▸ 0
▸ create
▸ arguments
▸ fields
▸ datetime_added
✗ Unexpected key time found
at CommandClient.exports.SimpleReporter [as reporter] (/home/vantage/damlprojects/loaner_car/node_modules/@da/daml-ledger/lib/data/reporting/simple_reporter.js:36:12)
at Immediate.<anonymous> (/home/vantage/damlprojects/loaner_car/node_modules/@da/daml-ledger/lib/data/client/command_client.js:52:62)
at runCallback (timers.js:705:18)
at tryOnImmediate (timers.js:676:5)
at processImmediate (timers.js:658:5)
I don't understand, is time not a valid DAML data type?
Edit
I tried switching time to timestamp as follows
datetime_added: {timestamp: { seconds: 0, nanoseconds: 0 }}
causing the following error:
/home/......../damlprojects/car/node_modules/google-protobuf/google-protobuf.js:98
goog.string.splitLimit=function(a,b,c){a=a.split(b);for(var d=[];0<c&&a.length;)d.push(a.shift()),c--;a.length&&d.push(a.join(b));return d};goog.string.editDistance=function(a,b){var c=[],d=[];if(a==b)return 0;if(!a.length||!b.length)return Math.max(a.length,b.length);for(var e=0;e<b.length+1;e++)c[e]=e;for(e=0;e<a.length;e++){d[0]=e+1;for(var f=0;f<b.length;f++)d[f+1]=Math.min(d[f]+1,c[f+1]+1,c[f]+Number(a[e]!=b[f]));for(f=0;f<c.length;f++)c[f]=d[f]}return d[b.length]};goog.asserts={};goog.asserts.ENABLE_ASSERTS=goog.DEBUG;goog.asserts.AssertionError=function(a,b){b.unshift(a);goog.debug.Error.call(this,goog.string.subs.apply(null,b));b.shift();this.messagePattern=a};goog.inherits(goog.asserts.AssertionError,goog.debug.Error);goog.asserts.AssertionError.prototype.name="AssertionError";goog.asserts.DEFAULT_ERROR_HANDLER=function(a){throw a;};goog.asserts.errorHandler_=goog.asserts.DEFAULT_ERROR_HANDLER;
AssertionError: Assertion failed
at new goog.asserts.AssertionError (/home/vantage/damlprojects/loaner_car/node_modules/google-protobuf/google-protobuf.js:98:603)
at Object.goog.asserts.doAssertFailure_ (/home/vantage/damlprojects/loaner_car/node_modules/google-protobuf/google-protobuf.js:99:126)
at Object.goog.asserts.assert (/home/vantage/damlprojects/loaner_car/node_modules/google-protobuf/google-protobuf.js:99:385)
at jspb.BinaryWriter.writeSfixed64 (/home/vantage/damlprojects/loaner_car/node_modules/google-protobuf/google-protobuf.js:338:80)
at proto.com.digitalasset.ledger.api.v1.Value.serializeBinaryToWriter (/home/vantage/damlprojects/loaner_car/node_modules/@da/daml-ledger/lib/grpc/generated/com/digitalasset/ledger/api/v1/value_pb.js:289:12)
at jspb.BinaryWriter.writeMessage (/home/vantage/damlprojects/loaner_car/node_modules/google-protobuf/google-protobuf.js:341:342)
at proto.com.digitalasset.ledger.api.v1.RecordField.serializeBinaryToWriter (/home/vantage/damlprojects/loaner_car/node_modules/@da/daml-ledger/lib/grpc/generated/com/digitalasset/ledger/api/v1/value_pb.js:1024:12)
at jspb.BinaryWriter.writeRepeatedMessage (/home/vantage/damlprojects/loaner_car/node_modules/google-protobuf/google-protobuf.js:350:385)
at proto.com.digitalasset.ledger.api.v1.Record.serializeBinaryToWriter (/home/vantage/damlprojects/loaner_car/node_modules/@da/daml-ledger/lib/grpc/generated/com/digitalasset/ledger/api/v1/value_pb.js:822:12)
at jspb.BinaryWriter.writeMessage (/home/vantage/damlprojects/loaner_car/node_modules/google-protobuf/google-protobuf.js:341:342)
In short, I need to know what type to use in my Node.js client for a DAML value of type time and how to express it.
I would recommend using the reference documentation for the bindings (although, as of version 0.4.0, while browsing it to answer your question I noticed two mistakes). In the upper navigation bar of the page you can start from Classes > data.CommandClient and work your way down its only argument (SubmitAndWaitRequest) until, following the links to the different fields, you reach the documentation for the timestamp field. As the error suggests (despite the mistake in the documentation), it should be a Timestamp, where seconds are expressed in epoch time (seconds since 1970).
Hence, to make the call you wanted this would be the shape of the object you ought to send:
arguments: {
fields: {
sender: { party: sender },
receiver: { party: receiver },
count: { int64: 0 },
datetime_added: { timestamp: { seconds: 0, nanoseconds: 0 } }
}
}
For your case in particular, I would probably make a small helper that uses the Date.parse function.
function parseTimestamp(string) {
return { seconds: Date.parse(string) / 1000, nanoseconds: 0 };
}
That you can then use to pass in the time you mentioned in the example you made:
arguments: {
fields: {
sender: { party: sender },
receiver: { party: receiver },
count: { int64: 0 },
datetime_added: { timestamp: parseTimestamp('2019-02-19') }
}
}
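A quick sanity check of the helper in plain Node.js (no DAML bindings needed): Date.parse returns milliseconds since the Unix epoch (UTC), so dividing by 1000 yields the seconds value the Timestamp message expects.

```javascript
// Same helper as above, repeated here so the snippet is self-contained.
function parseTimestamp(string) {
  return { seconds: Date.parse(string) / 1000, nanoseconds: 0 };
}

// ISO date-only strings are parsed as UTC midnight.
console.log(parseTimestamp('1970-01-01T00:00:00Z').seconds); // 0
console.log(parseTimestamp('2019-02-19').seconds);           // 1550534400
```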
As a closing note, I'd like to add that the Node.js bindings ship with typing files that provide auto-completion and contextual help on compatible editors (like Visual Studio Code). Using those will probably help you. Since the bindings are written in TypeScript, the typings are guaranteed to be always up to date with the API. Note that for the time being, the auto-completion works for the Ledger API itself but won't give you help for arbitrary records that target your DAML model (the fields object in this case).
I'm trying to create an index with TTL using the MongoDB driver for Node.js and a Mongo server hosted at mLab.
Node version 9.3.0.
Driver version 3.0.0-rc0
mongod version: 3.4.10 (MMAPv1)
Code in node.js:
var processCollection;
async function init (options) {
processCollection = await options.db.collection('processes');
await processCollection.dropIndexes();
await processCollection.createIndex(
{ 'modified': 1 },
{ expireAfterSeconds: 3600 }
);
}
Results in DB:
db['system.indexes'].find()
{
"v": 2,
"key": {
"modified": 1
},
"name": "modified_1",
"ns": "e-consular.processes"
}
The option expireAfterSeconds is missing in the resulting index. What am I doing wrong?
Collection.createIndex is broken in versions 3.0.0-rc0 and 3.0.0 of the Node.js mongodb driver: it ignores the options object argument.
This was fixed in version 3.0.1 of the driver. (You can see the fix here).
Update your driver to the latest version (e.g. npm i mongodb@3.0.4) and it should work as expected.
I am trying to use the Rally WSAPI to fetch the user stories and features that were changed between two dates. The closest I got was the Revision object, but I am not sure how to get the features and user stories from it. Thanks in advance.
<pre>
==Request==
restApi.query({
type: 'Revision',
query: queryUtils.where('CreationDate', '>=', '2015-03-01'),
fetch: ['FormattedID', 'Name','Release','State','RevisionHistory','Revisions','PortfolioItem/Feature','ObjectID','VersionId'],
scope: {
workspace: 12345,
project: 54321
}
})
</pre>
<pre>
==Response==
{
_rallyAPIMajor:'2',
_rallyAPIMinor:'0',
Errors:[
],
Warnings:[
],
TotalResultCount:2,
StartIndex:1,
PageSize:2,
Results:[
{
_rallyAPIMajor:'2',
_rallyAPIMinor:'0',
_ref:'https://rally1.rallydev.com/slm/webservice/v2.0/revision/31480953333',
_refObjectUUID:'98e0ff40-34cb-494f-afc3-3cfeefdd1ce1',
_objectVersion:'1',
ObjectID:31480953333,
VersionId:'1',
RevisionHistory:{
_rallyAPIMajor:'2',
_rallyAPIMinor:'0',
_ref:'https://rally1.rallydev.com/slm/webservice/v2.0/revisionhistory/31276441234',
_refObjectUUID:'dc6978c3-9fa1-4c24-b900-01d5aedd6007',
_objectVersion:'1',
ObjectID:31276441234,
VersionId:'1',
Revisions:{
_rallyAPIMajor:'2',
_rallyAPIMinor:'0',
_ref:'https://rally1.rallydev.com/slm/webservice/v2.0/RevisionHistory/31276441234/Revisions',
_type:'Revision',
Count:7
},
_type:'RevisionHistory'
},
_type:'Revision'
},
{
_rallyAPIMajor:'2',
_rallyAPIMinor:'0',
_ref:'https://rally1.rallydev.com/slm/webservice/v2.0/revision/31484636333',
_refObjectUUID:'4b41e138-9026-46e7-ac24-94b7dd0765f8',
_objectVersion:'1',
ObjectID:31484636333,
VersionId:'1',
RevisionHistory:{
_rallyAPIMajor:'2',
_rallyAPIMinor:'0',
_ref:'https://rally1.rallydev.com/slm/webservice/v2.0/revisionhistory/31283675555',
_refObjectUUID:'1142d961-8928-4a9e-8ca1-bc6dd6df17b7',
_objectVersion:'1',
ObjectID:31283675555,
VersionId:'1',
Revisions:{
_rallyAPIMajor:'2',
_rallyAPIMinor:'0',
_ref:'https://rally1.rallydev.com/slm/webservice/v2.0/RevisionHistory/31283675555/Revisions',
_type:'Revision',
Count:4
},
_type:'RevisionHistory'
},
_type:'Revision'
}
]
}
</pre>
With Rally WSAPI, it's probably easiest to use a LastUpdateDate-bounded query:
((LastUpdateDate >= 2015-01-01T00:00Z) AND (LastUpdateDate < 2015-03-01T00:00Z))
rather than looking at RevisionHistories for individual artifacts.
However you would need to do a separate query for User Stories and Features, respectively.
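A minimal sketch of building that bounded clause; dateBoundedQuery is a hypothetical helper (not part of the Rally toolkit), and you would pass its result as the query option of restApi.query(), once with type 'HierarchicalRequirement' (user stories) and once with type 'PortfolioItem/Feature':

```javascript
// Build the date-bounded WSAPI query string shown above.
// startIso/endIso are expected in WSAPI's ISO-ish format, e.g. 2015-01-01T00:00Z.
function dateBoundedQuery(startIso, endIso) {
  return `((LastUpdateDate >= ${startIso}) AND (LastUpdateDate < ${endIso}))`;
}

console.log(dateBoundedQuery('2015-01-01T00:00Z', '2015-03-01T00:00Z'));
// ((LastUpdateDate >= 2015-01-01T00:00Z) AND (LastUpdateDate < 2015-03-01T00:00Z))
```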
An alternative way to accomplish this type of date-bounded change query, especially if you are interested in a specific type of state transition, would be to use Rally's Analytics 2.0 / Lookback API framework:
https://rally1.rallydev.com/analytics/doc/#/manual
https://www.rallydev.com/community/developer-product/analytics-20-announcing-lookback-api
I'm not sure if the Rally node toolkit has support for Lookback, but it wouldn't be too hard to extend it to do so.
There is a Java toolkit for accessing Lookback API:
https://github.com/RallyTools/Rally-Lookback-Toolkit