Go run loop in parallel with timeout - multithreading

I need to run the requests in parallel, not one after another, but with a timeout. How can I do it in Go?
This is the specific code I need to run in parallel. The trick is that I also need the timeout, i.e. wait for all the requests according to the timeout and collect the responses after all have finished.
for _, test := range testers {
    checker := NewTap(test.name, test.url, test.timeout)
    res, err := checker.Check()
    if err != nil {
        fmt.Println(err)
    }
    fmt.Println(res.name)
    fmt.Println(res.res.StatusCode)
}
This is the full code (working):
https://play.golang.org/p/cXnJJ6PW_CF
package main

import (
    "fmt"
    "net/http"
    "time"
)

type HT interface {
    Name() string
    Check() (*testerResponse, error)
}

type testerResponse struct {
    name string
    res  http.Response
}

type Tap struct {
    url     string
    name    string
    timeout time.Duration
    client  *http.Client
}

func NewTap(name, url string, timeout time.Duration) *Tap {
    return &Tap{
        url:    url,
        name:   name,
        client: &http.Client{Timeout: timeout},
    }
}

func (p *Tap) Check() (*testerResponse, error) {
    response := &testerResponse{}
    req, err := http.NewRequest("GET", p.url, nil)
    if err != nil {
        return nil, err
    }
    res, err := p.client.Do(req)
    if err != nil {
        // return before dereferencing res, which is nil on error
        return nil, err
    }
    response.name = p.name
    response.res = *res
    return response, nil
}

func (p *Tap) Name() string {
    return p.name
}

func main() {
    var checkers []HT
    testers := []Tap{
        {
            name:    "first call",
            url:     "http://stackoverflow.com",
            timeout: time.Second * 20,
        },
        {
            name:    "second call",
            url:     "http://www.example.com",
            timeout: time.Second * 10,
        },
    }
    for _, test := range testers {
        checker := NewTap(test.name, test.url, test.timeout)
        res, err := checker.Check()
        if err != nil {
            fmt.Println(err)
            continue
        }
        fmt.Println(res.name)
        fmt.Println(res.res.StatusCode)
        checkers = append(checkers, checker)
    }
}

A popular concurrency pattern in Go is the worker pool.
A basic worker pool uses two channels: one to put jobs on, and another to read results from. In this case, our jobs channel will be of type Tap and our results channel will be of type testerResponse.
Workers
Each worker takes a job from the jobs channel and puts the result on the results channel.
// worker defines our worker func. As long as there is a job in the
// "queue" we continue to pick up the "next" job
func worker(jobs <-chan Tap, results chan<- testerResponse) {
    for n := range jobs {
        results <- n.Check()
    }
}
Jobs
To add jobs, we iterate over our testers and put each one on the jobs channel.
// makeJobs fills up our jobs channel
func makeJobs(jobs chan<- Tap, taps []Tap) {
    for _, t := range taps {
        jobs <- t
    }
}
Results
In order to read results, we need to iterate over them.
// getResults reads one result per job from the results channel
func getResults(tr <-chan testerResponse, taps []Tap) {
    for range taps {
        r := <-tr
        status := fmt.Sprintf("'%s' to '%s' was fetched with status '%d'", r.name, r.url, r.res.StatusCode)
        if r.err != nil {
            status = r.err.Error()
        }
        fmt.Println(status)
    }
}
Finally, our main function.
func main() {
    // Make buffered channels
    buffer := len(testers)
    jobsPipe := make(chan Tap, buffer)               // jobs are of type `Tap`
    resultsPipe := make(chan testerResponse, buffer) // results are of type `testerResponse`

    // Create the worker pool. The default maximum is 5 workers:
    // maxWorkers := 5
    // for i := 0; i < maxWorkers; i++ {
    //     go worker(jobsPipe, resultsPipe)
    // }
    // the loop above is the same as doing:
    go worker(jobsPipe, resultsPipe)
    go worker(jobsPipe, resultsPipe)
    go worker(jobsPipe, resultsPipe)
    go worker(jobsPipe, resultsPipe)
    go worker(jobsPipe, resultsPipe)
    // ^^ this creates 5 workers

    makeJobs(jobsPipe, testers)
    getResults(resultsPipe, testers)
}
Putting it all together
I changed the timeout to one millisecond for the 'second call' to show how the timeout works.
package main

import (
    "fmt"
    "net/http"
    "time"
)

type HT interface {
    Name() string
    Check() (*testerResponse, error)
}

type testerResponse struct {
    err  error
    name string
    res  http.Response
    url  string
}

type Tap struct {
    url     string
    name    string
    timeout time.Duration
    client  *http.Client
}

func NewTap(name, url string, timeout time.Duration) *Tap {
    return &Tap{
        url:    url,
        name:   name,
        client: &http.Client{Timeout: timeout},
    }
}

func (p *Tap) Check() testerResponse {
    fmt.Printf("Fetching %s %s \n", p.name, p.url)
    // there's really no need for NewTap
    nt := NewTap(p.name, p.url, p.timeout)
    res, err := nt.client.Get(p.url)
    if err != nil {
        return testerResponse{err: err}
    }
    // the body must be closed to free the connection
    res.Body.Close()
    return testerResponse{name: p.name, res: *res, url: p.url}
}

func (p *Tap) Name() string {
    return p.name
}

// makeJobs fills up our jobs channel
func makeJobs(jobs chan<- Tap, taps []Tap) {
    for _, t := range taps {
        jobs <- t
    }
}

// getResults reads one result per job from the results channel
func getResults(tr <-chan testerResponse, taps []Tap) {
    for range taps {
        r := <-tr
        status := fmt.Sprintf("'%s' to '%s' was fetched with status '%d'", r.name, r.url, r.res.StatusCode)
        if r.err != nil {
            status = r.err.Error()
        }
        fmt.Println(status)
    }
}

// worker defines our worker func. As long as there is a job in the
// "queue" we continue to pick up the "next" job
func worker(jobs <-chan Tap, results chan<- testerResponse) {
    for n := range jobs {
        results <- n.Check()
    }
}
var testers = []Tap{
    {name: "1", url: "http://google.com", timeout: time.Second * 20},
    {name: "2", url: "http://www.yahoo.com", timeout: time.Second * 10},
    {name: "3", url: "http://stackoverflow.com", timeout: time.Second * 20},
    {name: "4", url: "http://www.example.com", timeout: time.Second * 10},
    {name: "5", url: "http://stackoverflow.com", timeout: time.Second * 20},
    {name: "6", url: "http://www.example.com", timeout: time.Second * 10},
    {name: "7", url: "http://stackoverflow.com", timeout: time.Second * 20},
    {name: "8", url: "http://www.example.com", timeout: time.Second * 10},
    {name: "9", url: "http://stackoverflow.com", timeout: time.Second * 20},
    {name: "10", url: "http://www.example.com", timeout: time.Second * 10},
    {name: "11", url: "http://stackoverflow.com", timeout: time.Second * 20},
    {name: "12", url: "http://www.example.com", timeout: time.Second * 10},
    {name: "13", url: "http://stackoverflow.com", timeout: time.Second * 20},
    {name: "14", url: "http://www.example.com", timeout: time.Second * 10},
}
func main() {
    // Make buffered channels
    buffer := len(testers)
    jobsPipe := make(chan Tap, buffer)               // jobs are of type `Tap`
    resultsPipe := make(chan testerResponse, buffer) // results are of type `testerResponse`

    // Create the worker pool. The default maximum is 5 workers:
    // maxWorkers := 5
    // for i := 0; i < maxWorkers; i++ {
    //     go worker(jobsPipe, resultsPipe)
    // }
    // the loop above is the same as doing:
    go worker(jobsPipe, resultsPipe)
    go worker(jobsPipe, resultsPipe)
    go worker(jobsPipe, resultsPipe)
    go worker(jobsPipe, resultsPipe)
    go worker(jobsPipe, resultsPipe)
    // ^^ this creates 5 workers

    makeJobs(jobsPipe, testers)
    getResults(resultsPipe, testers)
}
Which outputs:
// Fetching http://stackoverflow.com
// Fetching http://www.example.com
// Get "http://www.example.com": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
// 'first call' to 'http://stackoverflow.com' was fetched with status '200'

Parallelism can be achieved in different ways in Go.
This is a naive approach using a WaitGroup, a Mutex, and an unbounded number of goroutines, which is not recommended.
I think using channels is the preferred way to do parallelism.
package main

import (
    "fmt"
    "net/http"
    "sync"
    "time"
)

type HT interface {
    Name() string
    Check() (*testerResponse, error)
}

type testerResponse struct {
    name string
    res  http.Response
}

type Tap struct {
    url     string
    name    string
    timeout time.Duration
    client  *http.Client
}

func NewTap(name, url string, timeout time.Duration) *Tap {
    return &Tap{
        url:  url,
        name: name,
        client: &http.Client{
            Timeout: timeout,
        },
    }
}

func (p *Tap) Check() (*testerResponse, error) {
    response := &testerResponse{}
    req, err := http.NewRequest("GET", p.url, nil)
    if err != nil {
        return nil, err
    }
    res, err := p.client.Do(req)
    if err != nil {
        return response, err
    }
    response.name = p.name
    response.res = *res
    return response, nil
}

func (p *Tap) Name() string {
    return p.name
}

func main() {
    var checkers []HT
    wg := sync.WaitGroup{}
    locker := sync.Mutex{}
    testers := []Tap{
        {
            name:    "first call",
            url:     "http://google.com",
            timeout: time.Second * 20,
        },
        {
            name:    "second call",
            url:     "http://www.example.com",
            timeout: time.Millisecond * 100,
        },
    }
    for _, test := range testers {
        wg.Add(1)
        go func(tst Tap) {
            defer wg.Done()
            checker := NewTap(tst.name, tst.url, tst.timeout)
            res, err := checker.Check()
            if err != nil {
                fmt.Println(err)
                return // skip the response fields when the check failed
            }
            fmt.Println(res.name)
            fmt.Println(res.res.StatusCode)
            locker.Lock()
            defer locker.Unlock()
            checkers = append(checkers, checker)
        }(test)
    }
    wg.Wait()
}

Related

Error: ORA-01008: not all variables bound

I've been getting this error for a few hours and can't identify the cause: Error: ORA-01008: not all variables bound.
Controller
async bipagem(req: Request, res: Response) {
  try {
    let credentials = super.openToken(req)
    let { p_fil_filial, p_set_cdgo, p_mini_fab, p_codigo_barra } = req.query
    let info = await this.rep.bipagem(
      p_fil_filial as string,
      p_set_cdgo as string,
      p_mini_fab as string,
      p_codigo_barra as string,
      credentials as string
    )
    res.json(info)
  } catch (error) {
    catchErr(res, error)
  }
}
}
Repository
public async bipagem(
  p_fil_filial: string,
  p_set_cdgo: string,
  p_mini_fab: string,
  p_codigo_barra: string,
  userPool: string
) {
  let conn
  try {
    conn = await connection(userPool)
    const ret = await conn.execute(
      `DECLARE
         c_result SYS_REFCURSOR;
       BEGIN
         -- Call the function
         :result := brio.pck_fab0024.bipagem(p_fil_filial => :p_fil_filial,
                                             p_set_cdgo => :p_set_cdgo,
                                             p_mini_fab => :p_mini_fab,
                                             p_codigo_barra => :p_codigo_barra,
                                             p_msg => :p_msg);
         DBMS_SQL.RETURN_RESULT(c_result);
       END;`,
      {
        p_fil_filial,
        p_set_cdgo,
        p_mini_fab,
        p_codigo_barra,
        p_msg: { type: oracledb.STRING, dir: oracledb.BIND_OUT }
      }
    )
    return { ...(ret.outBinds as object), conteudo: ret.implicitResults[0] }
  } catch (e) {
    console.log('Erro na fab0024: ', e.message)
    return {
      p_fil_filial,
      p_set_cdgo,
      p_codigo_barra,
      p_msg: '',
      conteudo: []
    }
  } finally {
    if (conn && typeof conn !== 'string') conn.close()
  }
}
}
I tried to include the p_msg parameter and got this error in return: TS2339: Property 'bipagem' does not exist on type 'unknown'.
Your PL/SQL block has six bind parameters, but you are passing only five values, so it is no surprise that you get an error saying one of the variables isn't bound.
I think you have missed the fact that :result in the line below is also a bind parameter:
:result := brio.pck_fab0024.bipagem(p_fil_filial => :p_fil_filial,
I suspect you meant to assign the result to the local variable c_result (to which you currently never assign a value) instead of an extra bind parameter:
c_result := brio.pck_fab0024.bipagem(p_fil_filial => :p_fil_filial,
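This kind of mismatch is easy to catch programmatically: every `:name` placeholder in the block must have a matching key in the binds object. A quick sanity-check sketch (not part of node-oracledb; the helper name is made up) that you could run before calling `conn.execute`:

```javascript
// Sketch: list the bind placeholders in a PL/SQL block that have no
// matching key in the binds object. Helper name is illustrative only.
function missingBinds(sql, binds) {
  // Match :identifier placeholders; a Set de-duplicates repeated names.
  const placeholders = new Set(
    (sql.match(/:([a-zA-Z_][a-zA-Z0-9_]*)/g) || []).map((m) => m.slice(1))
  );
  return [...placeholders].filter((name) => !(name in binds));
}

const block = `BEGIN
  :result := brio.pck_fab0024.bipagem(p_fil_filial => :p_fil_filial,
                                      p_msg => :p_msg);
END;`;

// Logs the forgotten bind that triggers ORA-01008: [ 'result' ]
console.log(missingBinds(block, { p_fil_filial: "1", p_msg: "" }));
```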

Elasticsearch node js point in time search_phase_execution_exception

const body = {
  query: {
    geo_shape: {
      geometry: {
        relation: 'within',
        shape: {
          type: 'polygon',
          coordinates: [$polygon],
        },
      },
    },
  },
  pit: {
    id: "t_yxAwEPZXNyaS1wYzYtMjAxN3IxFjZxU2RBTzNyUXhTUV9XbzhHSk9IZ3cAFjhlclRmRGFLUU5TVHZKNXZReUc3SWcAAAAAAAALmpMWQkNwYmVSeGVRaHU2aDFZZExFRjZXZwEWNnFTZEFPM3JReFNRX1dvOEdKT0hndwAA",
    keep_alive: "1m",
  },
};
The query fails with search_phase_execution_exception at onBody.
Without pit the query works fine, but pit is needed to retrieve more than 10,000 hits.
Well, using PIT in the NodeJS Elasticsearch client is not clear, or at least not well documented. You can create a PIT using the client like:
const pitRes = await elastic.openPointInTime({
  index: index,
  keep_alive: "1m"
});
pit_id = pitRes.body.id;
But there is no way to use that pit_id in the search method, and it's not documented properly :S
BUT, you can use the scroll API as follows:
const scrollSearch = await elastic.helpers.scrollSearch({
  index: index,
  body: {
    size: 10000,
    query: {
      query_string: {
        fields: ["vm_ref", "org", "vm"],
        query: organization + moreQuery
      }
    },
    // note: sort belongs at the top level of the body, not inside `query`
    sort: [
      { utc_date: "desc" }
    ]
  }
});
And then read the results as follows:
let res = [];
try {
  for await (const result of scrollSearch) {
    res.push(...result.body.hits.hits);
  }
} catch (e) {
  console.log(e);
}
I know that's not the exact answer to your question, but I hope it helps ;)
The usage of point-in-time for pagination of search results is now documented in Elasticsearch; see the "Paginate search results" page for a more or less detailed explanation.
I prepared an example that may give an idea about how to implement the workflow, described in the documentation:
async function searchWithPointInTime(cluster, index, chunkSize, keepAlive) {
  if (!chunkSize) {
    chunkSize = 5000;
  }
  if (!keepAlive) {
    keepAlive = "1m";
  }

  const client = new Client({ node: cluster });
  let pointInTimeId = null;
  let searchAfter = null;

  try {
    // Open point in time
    pointInTimeId = (await client.openPointInTime({ index, keep_alive: keepAlive })).body.id;

    // Query the next chunk of data
    while (true) {
      const response = await client.search({
        // Pay attention: no index here (because it will come from the point-in-time)
        body: {
          size: chunkSize,
          track_total_hits: false, // This will make the query faster
          query: {
            // (1) TODO: put any filter you need here (instead of match_all)
            match_all: {},
          },
          pit: {
            id: pointInTimeId,
            keep_alive: keepAlive,
          },
          // Sorting should be by _shard_doc or at least include _shard_doc
          sort: [{ _shard_doc: "desc" }],
          // The next parameter is very important - it tells Elastic to bring us the next portion
          ...(searchAfter !== null && { search_after: [searchAfter] }),
        },
      });

      const { hits } = response.body.hits;
      if (!hits || !hits.length) {
        break; // No more data
      }

      for (const hit of hits) {
        // (2) TODO: Do whatever you need with the results
      }

      // Check if we are done reading the data
      if (hits.length < chunkSize) {
        break; // We finished reading all the data
      }

      // Get the next value for the 'search after' position
      // by extracting the _shard_doc from the sort key of the last hit
      searchAfter = hits[hits.length - 1].sort[0];
    }
  } catch (ex) {
    console.error(ex);
  } finally {
    // Close the point in time
    if (pointInTimeId) {
      await client.closePointInTime({ body: { id: pointInTimeId } });
    }
  }
}

node-oracledb giving Error: NJS-044: named JSON object is not expected in this context while executing stored procedure

I am facing an issue calling an Oracle DB stored procedure from a Node.js application using node-oracledb ("oracledb": "^3.1.2" and "@types/oracledb": "^3.1.0"). The stored procedure takes three input parameters: a string, a string, and an array of an Oracle DB type. However, when passing the last parameter (the DB type), the Node.js application throws the exception "NJS-044: named JSON object is not expected in this context".
// DB payload
let obj = {
tableOwner: 'Mr X',
tableName: 'Demo',
retentionData: this.CreateArrayFromJSONObject(array_of_data)
}
// DB procedure
let procedure: string = `BEGIN PKG_ARCHIVAL_TOOL.P_RETENTION_POLICY_CREATE(:tableOwner, :tableName, :retentionData); END;`;
/// DB execution function call
DBService.getInstance().ExecuteDBProcedureRequest(procedure, userPolicyJSON);
// DB executing
public ExecuteDBProcedureRequest = (procedure: string, inputBody: any) : Promise<any> => {
return new Promise((resolve, reject) => {
DBConn.execute(procedure, inputBody, { autoCommit: true}, (err: oracledb.DBError, result: oracledb.Result) => {
if(err) {
reject(err);
}
if(result) {
resolve(Utils.CreateJSONObject(result));
}
})
});
}
// SQL procedure call
PKG_ARCHIVAL_TOOL.P_RETENTION_POLICY_CREATE(
  P_TABLE_OWNER => P_TABLE_OWNER,
  P_TABLE_NAME => P_TABLE_NAME,
  P_RETEN_DATA => V_DATA,
  P_ID => V_ID,
  P_OUT => V_OUT
);
P_RETEN_DATA is a table of a record:
Record - TYPE R_RETENTION_POLICY_DEF IS RECORD(
  COLUMN_NAME VARCHAR2(40) NOT NULL DEFAULT ' ',
  COLUMN_POS NUMBER NOT NULL DEFAULT 1,
  COLUMN_TYPE VARCHAR2(10) NOT NULL DEFAULT 'NUMBER',
  OPERATOR VARCHAR2(10) NOT NULL DEFAULT '=',
  GATE VARCHAR2(10) DEFAULT NULL,
  BRAC_ST NUMBER DEFAULT 0,
  BRAC_ED NUMBER DEFAULT 0
);
Table: TYPE T_RETENTION_POLICY_DEF IS TABLE OF R_RETENTION_POLICY_DEF;
array_of_data = [["FNAME", 1, "VARCHAR2", ">", "OR", 0, 0], ["LNAME", 1, "VARCHAR2", "=", "AND", 0, 0]]
Binding to a record will only work in node-oracledb 4, which is under development.
Your code may also have other issues (the number of parameters in the PL/SQL call, trying to pass some kind of array to a record, etc.).
The general solution with node-oracledb 3.1 is to use a wrapper PL/SQL block that you can bind permissible types into. The wrapper block then massages the values into a record and calls your target procedure, P_RETENTION_POLICY_CREATE.
Given this SQL:
set echo on

create or replace package rectest as
  type rectype is record (name varchar2(40), pos number);
  procedure myproc (p_in in rectype, p_out out rectype);
end rectest;
/
show errors

create or replace package body rectest as
  procedure myproc (p_in in rectype, p_out out rectype) as
  begin
    p_out := p_in;
  end;
end rectest;
/
show errors
You would call it like:
// Node-oracledb 3.1
'use strict';

const oracledb = require('oracledb');
const config = require('./dbconfig.js');

let sql, binds, options, result;

async function run() {
  let connection;
  try {
    connection = await oracledb.getConnection(config);
    sql = `declare
             i_r rectest.rectype; -- input record
             o_r rectest.rectype; -- output record
           begin
             i_r.name := :i_nm;
             i_r.pos := :i_ps;
             rectest.myproc(i_r, o_r);
             :o_nm := o_r.name;
             :o_ps := o_r.pos;
           end;`;
    binds = [
      { i_nm: 'def', i_ps: 456 },
      { i_nm: 'ghi', i_ps: 789 },
    ];
    const options = {
      bindDefs: {
        i_nm: { type: oracledb.STRING, maxSize: 40 },
        i_ps: { type: oracledb.NUMBER },
        o_nm: { type: oracledb.STRING, maxSize: 40, dir: oracledb.BIND_OUT },
        o_ps: { type: oracledb.NUMBER, dir: oracledb.BIND_OUT }
      }
    };
    result = await connection.executeMany(sql, binds, options);
    console.log(result);
  } catch (err) {
    console.error(err);
  } finally {
    if (connection) {
      try {
        await connection.close();
      } catch (err) {
        console.error(err);
      }
    }
  }
}

run();
The output is
{
  outBinds: [ { o_nm: 'def', o_ps: 456 }, { o_nm: 'ghi', o_ps: 789 } ]
}

vue mutation push object reference?

addSentence: (state) => {
  const obj = state;
  // next line is correct:
  obj.sentences.push({ ...obj.current });
  // change it to the next line, get the error
  // obj.sentences.push(obj.current);
  obj.current = new Sentence();
},
import Constants from './Constants';

export default class Sentence {
  constructor(config) {
    this.text = '';
    this.fontFamily = 'KaiTi';
    this.fontSize = 16;
    this.fontStyle = '';
    this.appearStyle = {
      name: 'type',
      speed: 40,
      startDelay: 0,
    };
    this.disappearStyle = {
      name: 'backspace',
      speed: 80,
      startDelay: 0,
      smartBackspace: true,
    };
  }
  play(context) {
  }
  drawText() {
  }
}
state.current is an object of type Sentence.
And state.sentences = [Sentence]
This is a mutation handler.
Error:
[vuex] Do not mutate vuex store state outside mutation handlers.
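The difference between the two `push` lines comes down to references. Pushing `obj.current` directly stores the same object in the array, so any later mutation of that object outside a handler (likely some component or animation still holding a reference to it) also mutates the array element inside the store, which Vuex's strict mode flags. The spread copy breaks that link. A plain-JS sketch of the aliasing, with no Vuex involved:

```javascript
// Sketch of the aliasing problem, independent of Vuex.
const state = { sentences: [], current: { text: "hello" } };

// Push the reference: the array element IS state.current.
state.sentences.push(state.current);
state.current.text = "changed outside a mutation";
console.log(state.sentences[0].text); // "changed outside a mutation"

// Push a shallow copy instead: the element is detached from current.
state.sentences.length = 0;
state.current = { text: "hello" };
state.sentences.push({ ...state.current });
state.current.text = "changed outside a mutation";
console.log(state.sentences[0].text); // "hello"
```

Note the spread is only a shallow copy; nested objects like `appearStyle` would still be shared.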

How to return an array of errors with graphQL

How can I return multiple error messages like this?
"errors": [
  {
    "message": "first error",
    "locations": [{ "line": 2, "column": 3 }],
    "path": ["somePath"]
  },
  {
    "message": "second error",
    "locations": [{ "line": 8, "column": 9 }],
    "path": ["somePath"]
  }
]
On my server, if I do throw('an error'), it returns:
"errors": [
  {
    "message": "an error",
    "locations": [{}],
    "path": ["somePath"]
  }
]
I would like to return an array of all the errors in the query.
How can I add multiple errors to the errors array?
Throw an error object with an errors: [] array in it, containing all the errors you want to throw together. Use the formatError function to format the error output. In the example below I am using Apollo's UserInputError; you could use GraphQLError as well, it doesn't matter.
const error = new UserInputError()
error.errors = errorslist.map((i) => {
  const _error = new UserInputError()
  _error.path = i.path
  _error.message = i.type
  return _error
})
throw error

new ApolloServer({
  typeDefs,
  resolvers,
  formatError: ({ message, path }) => ({
    message,
    path,
  }),
})
// sample output response
{
  "data": {
    "createUser": null
  },
  "errors": [
    { "message": "format", "path": "username" },
    { "message": "min", "path": "phone" }
  ]
}
Using ApolloServer I've found multiple errors will be returned when querying an array of items and an optional field's resolver errors.
// Schema
gql`
  type Foo {
    id: ID!
    bar: String # Optional
  }
  type Query {
    foos: [Foo!]!
  }
`;

// Resolvers
const resolvers = {
  Query: {
    foos: () => [{ id: 1 }, { id: 2 }]
  },
  Foo: {
    bar: (foo) => {
      throw new Error(`Failed to get Foo.bar: ${foo.id}`);
    }
  }
}

// Query
gql`
  query Foos {
    foos {
      id
      bar
    }
  }
`;
// Response
{
  "data": {
    "foos": [{ "id": 1, "bar": null }, { "id": 2, "bar": null }]
  },
  "errors": [
    { "message": "Failed to get Foo.bar: 1" },
    { "message": "Failed to get Foo.bar: 2" }
  ]
}
If Foo.bar is not optional, it will return just the first error.
If you want to return many errors, at once, I would recommend MultiError from VError which allows you to represent many errors in one error instance.
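If you'd rather not pull in a library, modern JavaScript also has a built-in AggregateError (ES2021, Node 15+) that carries an errors array for exactly this case. A minimal sketch (the validate function and its rules are made up for illustration):

```javascript
// Sketch: collect several validation errors and throw them as one.
function validate(user) {
  const errors = [];
  if (!user.name) errors.push(new Error("name is required"));
  if (!user.email) errors.push(new Error("email is required"));
  if (errors.length > 0) {
    throw new AggregateError(errors, "validation failed");
  }
}

try {
  validate({});
} catch (e) {
  // e.errors holds every individual error, e.message the summary.
  console.log(e.message);                      // "validation failed"
  console.log(e.errors.map((x) => x.message)); // ["name is required", "email is required"]
}
```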
You would need to catch the errors without a throw statement, because you don't want to interrupt your process. Instead, create an array called errors and .push() the errors into it. Near the end of your process, check whether there are any errors inside the array; if there are, you can display them or handle them as you wish.
// example
var errors = [];
doSomething(function (err, res) {
  if (err) {
    errors.push(err);
  }
  console.log("we did a thing");
  doSomethingElse(function (err, res2) {
    if (err) {
      errors.push(err);
    }
    console.log("we did another thing");
    // check and throw errors
    if (errors.length > 0) {
      throw errors;
    }
  });
});
You can use the GraphQL error function; I have an example in TypeScript:
function throwError(message: string, path: any) {
  throw new GraphQLError(
    message,
    [],
    { body: '', name: '' },
    undefined,
    [path]
  )
}
And then I just call the function as many times as needed.
The JavaScript constructor looks like:
constructor(
  message: string,
  nodes?: $ReadOnlyArray<ASTNode> | ASTNode | void,
  source?: ?Source,
  positions?: ?$ReadOnlyArray<number>,
  path?: ?$ReadOnlyArray<string | number>,
  originalError?: ?Error,
  extensions?: ?{ [key: string]: mixed },
): void;
Check the graphql-js gitHub:
https://github.com/graphql/graphql-js/blob/master/src/error/GraphQLError.js#L22
It looks like the question is not about showing many exceptions, but about showing the full stack trace of an error. When one error is thrown, the execution will not receive or throw other errors. In some languages you can natively set a parent exception on the current exception, but that is not the case in JavaScript, as far as I can tell from the docs: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Error and https://nodejs.org/api/errors.html#errors_error_propagation_and_interception. You will need to create your own error class, which is not that hard.
If the problem is showing the trace
The stack trace in JavaScript is a string! That is fine if you just want to put it into some log, but bad if you want a more meaningful, readable structure, like JSON.
If what you really want is to show the stack trace, you will probably need to convert the stack trace of the Error object into an array, using something like https://github.com/stacktracejs/error-stack-parser, and then put that array inside your error object.
After that, you can just save the object into your database. You will still be looking at just one error, but you will have all the "location", "line", and "path" information of its trace, which sounds like what you are looking for.
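Without a library, a rough version of that conversion is just splitting the stack string on newlines (a sketch; real parsers like error-stack-parser also extract the file, line, and column from each frame):

```javascript
// Sketch: turn the stack string into an array of trimmed frame lines.
// In V8-style stacks, the first line is the "Name: message" header and
// the rest are "at function (file:line:col)" frames.
function stackToArray(error) {
  return (error.stack || "")
    .split("\n")
    .map((line) => line.trim())
    .filter((line) => line.length > 0);
}

const frames = stackToArray(new Error("boom"));
console.log(frames[0]); // "Error: boom"
```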
If the problem is showing the parent error's message and trace
If you want to keep the parent Error of some trace, you will probably need to create your own error class.
/**
 * Class MyError extends Error but adds the parentError attribute
 */
function MyError(message, parentError) {
  this.message = message;
  this.stack = Error().stack;
  this.parentError = parentError;
}
MyError.prototype = Object.create(Error.prototype);
MyError.prototype.name = "MyError";

function a() {
  b();
}

function b() {
  try {
    c();
  } catch (e) {
    throw new MyError("error on b", e);
  }
}

function c() {
  d();
}

function d() {
  throw new MyError("error on d");
}

function showError(e) {
  var message = e.message + " " + e.stack;
  if (e.parentError) {
    return message + "\n" + showError(e.parentError);
  }
  return message;
}

try {
  a();
} catch (e) {
  console.log(showError(e));
}
If the problem is showing many error messages and traces
If you want to pack many errors into one big package, for validation feedback for example, you may extend the Error class to create a package of errors. I created one simple example of each of these classes.
/**
 * Class MyErrorPackage extends Error
 * but works like an error package
 */
function MyErrorPackage(message, parentError) {
  this.packageErrors = [];
  this.message = "This package has errors. \n";
  this.isValid = true;
  this.stack = Error().stack;
  this.parentError = parentError;

  this.addError = function addError(error) {
    this.packageErrors.push(error);
    this.isValid = false;
    this.message += "PackageError(" + this.packageErrors.length + "): " + error.stack + "\n";
  };

  this.validate = function validate() {
    if (!this.isValid) {
      throw this;
    }
  };
}
MyErrorPackage.prototype = Object.create(Error.prototype);
MyErrorPackage.prototype.name = "MyErrorPackage";

function showError(e) {
  var message = e.message + " " + e.stack;
  if (e.parentError) {
    return message + "\n" + showError(e.parentError);
  }
  return message;
}

function showPackageError(e) {
  var message = e.message + " " + e.stack;
  if (e.parentError) {
    return message + "\n" + showError(e.parentError);
  }
  return message;
}

try {
  var p = new MyErrorPackage();
  try {
    throw new Error("error 1");
  } catch (e1) {
    p.addError(e1);
  }
  try {
    throw new Error("error 2");
  } catch (e2) {
    p.addError(e2);
  }
  try {
    throw new Error("error 3");
  } catch (e3) {
    p.addError(e3);
  }
  p.validate();
} catch (e4) {
  console.log(showError(e4));
}
