Is tracing intended to be used as I am using it? How can I trace an error chain? - rust

I'm using async-graphql and axum.
This is a reproduction of the issue: https://github.com/frederikhors/iss-async-graphql-error-handling.
To start:
cargo run
If you open the GraphiQL client at http://localhost:8000, you can use the below query to simulate what I'm trying to understand:
mutation {
  mutateWithError
}
The backend response is:
{
  "data": null,
  "errors": [
    {
      "message": "I cannot mutate now, sorry!",
      "locations": [
        /*...*/
      ],
      "path": ["mutateWithError"]
    }
  ]
}
I like this, but what I don't understand is the tracing part:
2022-09-29T17:01:14.249236Z INFO async_graphql::graphql:84: close, time.busy: 626µs, time.idle: 14.3µs
  in async_graphql::graphql::parse
  in async_graphql::graphql::request
2022-09-29T17:01:14.252493Z INFO async_graphql::graphql:108: close, time.busy: 374µs, time.idle: 8.60µs
  in async_graphql::graphql::validation
  in async_graphql::graphql::request
2022-09-29T17:01:14.254592Z INFO async_graphql::graphql:146: error, error: I cannot mutate now, sorry!
  in async_graphql::graphql::field with path: mutateWithError, parent_type: Mutation, return_type: String!
  in async_graphql::graphql::execute
  in async_graphql::graphql::request
2022-09-29T17:01:14.257389Z INFO async_graphql::graphql:136: close, time.busy: 2.85ms, time.idle: 30.8µs
  in async_graphql::graphql::field with path: mutateWithError, parent_type: Mutation, return_type: String!
  in async_graphql::graphql::execute
  in async_graphql::graphql::request
2022-09-29T17:01:14.260729Z INFO async_graphql::graphql:122: close, time.busy: 6.31ms, time.idle: 7.80µs
  in async_graphql::graphql::execute
  in async_graphql::graphql::request
2022-09-29T17:01:14.264606Z INFO async_graphql::graphql:56: close, time.busy: 16.1ms, time.idle: 22.6µs
  in async_graphql::graphql::request
Do you see the line INFO async_graphql::graphql:146: error, error: I cannot mutate now, sorry!?
Why is it an INFO event? I would expect it to be an ERROR event.
And where is the inner error, "this is a DB error!"?
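While writing this up, here is a minimal sketch of what I suspect explains the missing inner error, assuming only anyhow's documented formatting behavior: the plain {} Display impl prints just the outermost context, while the alternate {:#} prints the whole chain, so any layer that stringifies the error with {} drops the DB error (this snippet is just an illustration, outside the GraphQL stack):

use anyhow::anyhow;

fn main() {
    // Build the same two-link chain as in my code below (illustration only).
    let err = anyhow!("this is a DB error!").context("I cannot mutate now, sorry!");
    println!("{err}");   // prints: I cannot mutate now, sorry!
    println!("{err:#}"); // prints: I cannot mutate now, sorry!: this is a DB error!
}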
Code
The code is very simple:
use anyhow::bail;
use async_graphql::Object;

pub struct Mutation;

#[Object]
impl Mutation {
    async fn mutate_with_error(&self) -> async_graphql::Result<String> {
        let new_string = mutate_with_error().await?;
        Ok(new_string)
    }
}

async fn mutate_with_error() -> anyhow::Result<String> {
    match can_i_mutate_on_db().await {
        Ok(s) => Ok(s),
        Err(err) => Err(err.context("I cannot mutate now, sorry!")),
    }
}

async fn can_i_mutate_on_db() -> anyhow::Result<String> {
    bail!("this is a DB error!")
}
use async_graphql::{EmptySubscription, Schema};
use async_graphql_axum::{GraphQLRequest, GraphQLResponse};
use axum::Extension;

async fn graphql_handler(
    schema: Extension<Schema<Query, Mutation, EmptySubscription>>,
    req: GraphQLRequest,
) -> GraphQLResponse {
    schema.execute(req.into_inner()).await.into()
}
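For reference, here is one way I could emit the full chain at ERROR level myself, before the error crosses into async-graphql, instead of relying on the library's own event. This is a sketch only: err.chain() and the {:#} format are real anyhow APIs and tracing::error! is the real macro, but the rest just mirrors my resolver above.

use async_graphql::Object;

pub struct Mutation;

#[Object]
impl Mutation {
    async fn mutate_with_error(&self) -> async_graphql::Result<String> {
        mutate_with_error().await.map_err(|err| {
            // Walk every link of the chain:
            // "I cannot mutate now, sorry!", then "this is a DB error!".
            for cause in err.chain() {
                tracing::error!(%cause, "mutate_with_error failed");
            }
            // Hand async-graphql the whole chain as the message.
            async_graphql::Error::new(format!("{err:#}"))
        })
    }
}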

Related

Trouble generating an Intent with delegateDirective from a TouchEvent handler (Alexa)

I need to confirm deleting a task from a button event. For this reason, I want Alexa to ask for confirmation, and therefore I need to generate a DeleteTaskIntent from my code.
I have tried this:
return handlerInput.responseBuilder.addDelegateDirective({
    name: 'DeleteTaskIntent',
    confirmationStatus: 'NONE',
    slots: {
        idTask: {
            name: 'idTask',
            value: idTask,
            confirmationStatus: 'NONE'
        }
    }
}).getResponse();
This code runs in my TouchEventHandler, but after checking the request in the requestEnvelope, I see this:
request: {
    type: 'System.ExceptionEncountered',
    requestId: 'amzn1.echo-api.request.9c2cf5f4-2f2c-419c-898c-05bd5f096810',
    timestamp: '2022-02-23T11:30:08Z',
    locale: 'es-ES',
    error: {
        type: 'INVALID_RESPONSE',
        message: 'Directive "Dialog.Delegate" cannot be used in response to an event'
    },
    cause: {
        requestId: 'amzn1.echo-api.request.0494d80d-c6ac-41d6-b3a2-dffd97f427b5'
    }
}
And the error
{
    "name": "AskSdk.GenericRequestDispatcher Error"
}
also appears, which suggests that no handler can handle this case.
Any idea about what I'm doing wrong when trying to generate the Intent?

Digital Asset Node.js bindings: syntax for expressing 'time' type variable

I am working through the tutorial, which shows how to create a contract.
Here is their code:
function createFirstPing() {
    const request = {
        commands: {
            applicationId: 'PingPongApp',
            workflowId: `Ping-${sender}`,
            commandId: uuidv4(),
            ledgerEffectiveTime: { seconds: 0, nanoseconds: 0 },
            maximumRecordTime: { seconds: 5, nanoseconds: 0 },
            party: sender,
            list: [
                {
                    create: {
                        templateId: PING,
                        arguments: {
                            fields: {
                                sender: { party: sender },
                                receiver: { party: receiver },
                                count: { int64: 0 }
                            }
                        }
                    }
                }
            ]
        }
    };
    client.commandClient.submitAndWait(request, (error, _) => {
        if (error) throw error;
        console.log(`Created Ping contract from ${sender} to ${receiver}.`);
    });
}
I want to create a similar request in my project that sends a field called 'datetime_added'. In my DAML code it is of type time. I cannot figure out the proper syntax for this request. For example:
arguments: {
    fields: {
        sender: { party: sender },
        receiver: { party: receiver },
        count: { int64: 0 },
        datetime_added: { time: '2019 Feb 19 00 00 00' }
    }
}
The format in which I am expressing the time is not what is causing the problem (although I acknowledge that it's probably wrong too). The error I'm seeing is the following:
Error: ! Validation error
▸ commands
▸ list
▸ 0
▸ create
▸ arguments
▸ fields
▸ datetime_added
✗ Unexpected key time found
at CommandClient.exports.SimpleReporter [as reporter] (/home/vantage/damlprojects/loaner_car/node_modules/@da/daml-ledger/lib/data/reporting/simple_reporter.js:36:12)
at Immediate.<anonymous> (/home/vantage/damlprojects/loaner_car/node_modules/@da/daml-ledger/lib/data/client/command_client.js:52:62)
at runCallback (timers.js:705:18)
at tryOnImmediate (timers.js:676:5)
at processImmediate (timers.js:658:5)
I don't understand: is time not a valid DAML data type?
Edit
I tried switching time to timestamp as follows:
datetime_added: {timestamp: { seconds: 0, nanoseconds: 0 }}
causing the following error:
/home/......../damlprojects/car/node_modules/google-protobuf/google-protobuf.js:98
goog.string.splitLimit=function(a,b,c){a=a.split(b);for(var d=[];0<c&&a.length;)d.push(a.shift()),c--;a.length&&d.push(a.join(b));return d};goog.string.editDistance=function(a,b){var c=[],d=[];if(a==b)return 0;if(!a.length||!b.length)return Math.max(a.length,b.length);for(var e=0;e<b.length+1;e++)c[e]=e;for(e=0;e<a.length;e++){d[0]=e+1;for(var f=0;f<b.length;f++)d[f+1]=Math.min(d[f]+1,c[f+1]+1,c[f]+Number(a[e]!=b[f]));for(f=0;f<c.length;f++)c[f]=d[f]}return d[b.length]};goog.asserts={};goog.asserts.ENABLE_ASSERTS=goog.DEBUG;goog.asserts.AssertionError=function(a,b){b.unshift(a);goog.debug.Error.call(this,goog.string.subs.apply(null,b));b.shift();this.messagePattern=a};goog.inherits(goog.asserts.AssertionError,goog.debug.Error);goog.asserts.AssertionError.prototype.name="AssertionError";goog.asserts.DEFAULT_ERROR_HANDLER=function(a){throw a;};goog.asserts.errorHandler_=goog.asserts.DEFAULT_ERROR_HANDLER;
AssertionError: Assertion failed
at new goog.asserts.AssertionError (/home/vantage/damlprojects/loaner_car/node_modules/google-protobuf/google-protobuf.js:98:603)
at Object.goog.asserts.doAssertFailure_ (/home/vantage/damlprojects/loaner_car/node_modules/google-protobuf/google-protobuf.js:99:126)
at Object.goog.asserts.assert (/home/vantage/damlprojects/loaner_car/node_modules/google-protobuf/google-protobuf.js:99:385)
at jspb.BinaryWriter.writeSfixed64 (/home/vantage/damlprojects/loaner_car/node_modules/google-protobuf/google-protobuf.js:338:80)
at proto.com.digitalasset.ledger.api.v1.Value.serializeBinaryToWriter (/home/vantage/damlprojects/loaner_car/node_modules/@da/daml-ledger/lib/grpc/generated/com/digitalasset/ledger/api/v1/value_pb.js:289:12)
at jspb.BinaryWriter.writeMessage (/home/vantage/damlprojects/loaner_car/node_modules/google-protobuf/google-protobuf.js:341:342)
at proto.com.digitalasset.ledger.api.v1.RecordField.serializeBinaryToWriter (/home/vantage/damlprojects/loaner_car/node_modules/@da/daml-ledger/lib/grpc/generated/com/digitalasset/ledger/api/v1/value_pb.js:1024:12)
at jspb.BinaryWriter.writeRepeatedMessage (/home/vantage/damlprojects/loaner_car/node_modules/google-protobuf/google-protobuf.js:350:385)
at proto.com.digitalasset.ledger.api.v1.Record.serializeBinaryToWriter (/home/vantage/damlprojects/loaner_car/node_modules/@da/daml-ledger/lib/grpc/generated/com/digitalasset/ledger/api/v1/value_pb.js:822:12)
at jspb.BinaryWriter.writeMessage (/home/vantage/damlprojects/loaner_car/node_modules/google-protobuf/google-protobuf.js:341:342)
In short, I need to know what type to use in my Node.js client for a DAML value of type time and how to express it.
I would recommend using the reference documentation for the bindings (although, as of version 0.4.0, I noticed two mistakes while browsing it to answer your question). In the upper navigation bar of the page, you can start from Classes > data.CommandClient and work your way down its only argument (SubmitAndWaitRequest) until, following the links through the different fields, you reach the documentation for the timestamp field. As the error suggests (despite the mistake in the documentation), this should be a Timestamp, whose seconds are expressed in epoch time (seconds since 1970).
Hence, to make the call you wanted, this would be the shape of the object to send:
arguments: {
    fields: {
        sender: { party: sender },
        receiver: { party: receiver },
        count: { int64: 0 },
        datetime_added: { timestamp: { seconds: 0, nanoseconds: 0 } }
    }
}
For your case in particular, I would probably make a small helper that uses the Date.parse function.
function parseTimestamp(string) {
    // Date.parse returns milliseconds since the epoch; the API wants seconds.
    return { seconds: Date.parse(string) / 1000, nanoseconds: 0 };
}
You can then use it to pass in the time from your example:
arguments: {
    fields: {
        sender: { party: sender },
        receiver: { party: receiver },
        count: { int64: 0 },
        datetime_added: { timestamp: parseTimestamp('2019-02-19') }
    }
}
As a closing note, I'd like to add that the Node.js bindings ship with typing files that provide auto-completion and contextual help in compatible editors (like Visual Studio Code). Using those will probably help you. Since the bindings are written in TypeScript, the typings are guaranteed to always be up to date with the API. Note that, for the time being, auto-completion works for the Ledger API itself but won't help with arbitrary records that target your DAML model (the fields object in this case).

logstash hangs with error sized_queue_timeout

We have a logstash pipeline in which numerous logstash-forwarders forward logs to a single logstash instance. We have often observed that logstash hangs with the error below:
[2016-07-22 03:01:12.619] WARN -- Concurrent::Condition: [DEPRECATED] Will be replaced with Synchronization::Object in v1.0.
called on: /opt/logstash-1.5.3/vendor/bundle/jruby/1.9/gems/logstash-input-lumberjack-1.0.2/lib/logstash/sized_queue_timeout.rb:16:in `initialize'
Exception in thread ">output" java.lang.UnsupportedOperationException
at java.lang.Thread.stop(Thread.java:869)
at org.jruby.RubyThread.exceptionRaised(RubyThread.java:1221)
at org.jruby.internal.runtime.RubyRunnable.run(RubyRunnable.java:112)
at java.lang.Thread.run(Thread.java:745)
Our logstash config looks like this:
input {
  lumberjack {
    port => 6782
    codec => json {}
    ssl_certificate => "/opt/logstash-1.5.3/cert/logstash-forwarder.crt"
    ssl_key => "/opt/logstash-1.5.3/cert/logstash-forwarder.key"
    type => "lumberjack"
  }
}
filter {
  if [env] != "prod" and [env] != "common" {
    drop {}
  }
  if [message] =~ /^\s*$/ {
    drop { }
  }
}
output {
  if "_jsonparsefailure" in [tags] {
    file {
      path => "/var/log/shop/parse_error/%{env}/%{app}/%{app}_%{host}_%{+YYYY-MM-dd}.log"
    }
  } else {
    kafka {
      broker_list => ["kafka:9092"]
      topic_id => "logstash_logs2"
    }
  }
}
Restarting logstash makes it work again. Can someone let me know why this problem occurs and how we can get around it without restarting logstash every time?

What is the pattern to match complete input in Logstash?

I am using the ELK stack with filebeat.
filebeat.conf
filebeat:
  prospectors:
    -
      paths:
        - /home/ubuntu/logs_*
      input_type: log
output:
  logstash:
    hosts: [${LOGSTASH_PORT_5044_TCP_ADDR}]
    index: filebeat
  console:
    pretty: true
This passes logs from a file named logs_test.
A sample log
{"name":"test","statusCode":0,"deployment":"production","hostname":"ip-random-address","level":30,"jobName":"testJob","date":"2016-07-18T03:15:02.075Z","jobType":"script","msg":"","time":"2016-07-18T03:15:02.076Z","v":0}
I want to make a HTTP call to an external URL when the field statusCode is 1
The entire log object is being passed to logstash.
My logstash config
input {
  beats {
    port => 5044
    codec => "json"
  }
}
output {
  if ([statusCode] and [statusCode] == 1) {
    http {
      format => "message"
      http_method => "post"
      url => "http://www.example.com"
      message => '{"text": "%{some_pattern_matcher}"}'
    }
  }
}
[Question] What should "some_pattern_matcher" be to send all fields in the HTTP request?
PS: %{message} does not work.
Try capturing the whole message with grok and referencing the captured field in the output:
input {
  beats {
    port => 5044
    codec => "json"
  }
}
filter {
  grok {
    match => { "message" => "%{GREEDYDATA:data}" }
  }
}
output {
  if ([statusCode] and [statusCode] == 1) {
    http {
      format => "message"
      http_method => "post"
      url => "http://www.example.com"
      message => '{"text": "%{data}"}'
    }
  }
}
I haven't tried it out, so give this a try and let me know if it works. If not, please post the error(s) you get.

express / mongoose application querying wrong collection

Background Information
I've been following this tutorial:
http://adrianmejia.com/blog/2014/10/01/creating-a-restful-api-tutorial-with-nodejs-and-mongodb/#mongoose-read-and-query
I have a mongodb called test and it has the following collections:
> show collections
chassis
ports
customers
locations
system.indexes
>
Symptoms
When I try to query for any document inside the chassis collection, it keeps returning null even though many records exist.
dev@devbox:~/nimble_express$ curl localhost:3000/chassis/55a7cc4193819c033d4d75c9
nulldev@devbox:~/nimble_express$
Problem
After trying many different things, I discovered the following issue in the mongodb logs (I turned on verbose logging).
In the following log entry, notice the reference to "test.chasses" (a misspelling; it should be "chassis"):
2015-07-29T14:42:25.554-0500 I QUERY [conn141] query test.chasses query: { _id: ObjectId('55a7cc4193819c033d4d75c9') } planSummary: EOF ntoskip:0 nscanned:0 nscannedObjects:0 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:0 reslen:20 locks:{ Global: { acquireCount: { r: 2 } }, MMAPV1Journal: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { R: 1 } } } 0ms
I've grepped to make sure I don't have this typo anywhere in my code using the following command:
dev@devbox:~/nimble_express/nimbleApp$ grep -ris 'chasses' .
dev@devbox:~/nimble_express/nimbleApp$
I'm not sure where it's getting this collection name from.
Other queries against other collections work just fine. For example, I have a collection called "ports", and I pretty much copied and pasted all the logic I have for chassis, and it works just fine.
Here's the proof from the logs:
2015-07-29T14:58:15.127-0500 I QUERY [conn160] query test.ports planSummary: COLLSCAN cursorid:68808242412 ntoreturn:1000 ntoskip:0 nscanned:0 nscannedObjects:1000 keyUpdates:0 writeConflicts:0 numYields:7 nreturned:1000 reslen:188922 locks:{ Global: { acquireCount: { r: 16 } }, MMAPV1Journal: { acquireCount: { r: 8 } }, Database: { acquireCount: { r: 8 } }, Collection: { acquireCount: { R: 8 } } } 0ms
Any suggestions? I'm sure I have a typo somewhere... but I can't find it. All my code is within the nimble_express directory tree.
I copied the 'chassis' collection in my mongodb to "tempCollection" like this:
> db.createCollection('tempCollection')
{ "ok" : 1 }
> db.chassis.copyTo('tempCollection');
WARNING: db.eval is deprecated
57
> exit
bye
And then I created my schema and a route for this collection.
When I attempted to do a curl request for localhost:3000/tempCollection, I noticed that in the logs, the name of the collection was wrong again.
2015-07-29T15:13:19.661-0500 I QUERY [conn168] query test.tempcollections planSummary: EOF ntoreturn:1000 ntoskip:0 nscanned:0 nscannedObjects:0 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:0 reslen:20 locks:{ Global: { acquireCount: { r: 2 } }, MMAPV1Journal: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { R: 1 } } } 0ms
And that's when it dawned on me. Something somewhere was pluralizing the collection names! So I googled and found this post:
Is there a way to prevent MongoDB adding plural form to collection names?
So the solution for me was to explicitly define the collection name like so:
module.exports = mongoose.model('chassis', ChassisSchema, 'chassis');
inside the model/Chassis.js file
Instead of marking this as a duplicate, I think I should leave this question as is for those who think they have a problem with collection names. Noobs like me assume they are doing something wrong rather than the system performing some automagic for them! But I'm happy to do whatever the community suggests!
We can close this off as a duplicate, or leave it as is.
