I am using hapi-job-queue for certain cron jobs but having trouble with schedule (node.js)

From the documentation of hapi-job-queue I found that it supports Later-style time definitions in the schedule parameter. So I tried:
server.register([
  {
    register: require('hapi-job-queue'),
    options: {
      connectionUrl: Config.database.url,
      endpoint: '',
      auth: false,
      jobs: [
        {
          name: 'test-job',
          enabled: true,
          schedule: 'at 04:59 pm',
          method: someMethods
        }
      ]
    }
  }
]);
But the code is not working. If I try schedule: 'every 5 seconds' everything works fine, and I even tried schedule: 'at 5:00 pm', which is a valid Later-style time definition. Am I missing something?
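For what it's worth, parsing the same text expression directly with later looks valid to me (a quick sanity check, assuming npm install later):
var later = require('later');

// parse the same text expression used in the job definition
var sched = later.parse.text('at 04:59 pm');
console.log(sched.error);                    // -1 means the expression parsed cleanly
console.log(later.schedule(sched).next(1));  // next occurrence (later uses UTC by default)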

I tried your code and it seems to work correctly. By the way, you can verify that the timing you specified was parsed correctly by checking the 'Jobs' collection on the MongoDB instance configured in 'Config.database.url'.
Look for the document whose _id field is 'test-job' and check its 'nextRun' property; you should see the correct timing: '2016-08-13T16:59:00.000+0000' (in my case).
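For instance, from the mongo shell (a sketch; the database is whatever Config.database.url points at, and the 'Jobs' collection and _id are as described above):
// mongo shell: inspect the job document that hapi-job-queue created
db.Jobs.findOne({ _id: 'test-job' }).nextRun
// e.g. ISODate("2016-08-13T16:59:00Z") for schedule: 'at 04:59 pm'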


Jest is not generating the report table in my terminal

The issue I'm facing is the lack of report tables in my terminal when I run npm test.
I know for a fact that the reports are being generated, since I can see the files in the coverage directory.
However, it's a bit annoying, and despite my debugging I can't seem to find out what the issue is.
Here is my jest.config.js:
/*
 * For a detailed explanation regarding each configuration property and type check, visit:
 * https://jestjs.io/docs/en/configuration.html
 */
module.exports = {
  // Automatically clear mock calls and instances between every test
  clearMocks: true,
  // Indicates whether the coverage information should be collected while executing the test
  collectCoverage: true,
  // The directory where Jest should output its coverage files
  coverageDirectory: "coverage",
  // Indicates which provider should be used to instrument code for coverage
  coverageProvider: "v8",
  reporters: [
    "default",
    [
      "jest-junit",
      {
        outputDirectory: "./coverage",
        outputName: "unit_tests_coverage.xml",
      },
    ],
  ],
  // A list of reporter names that Jest uses when writing coverage reports
  coverageReporters: ["cobertura", "lcov"],
  // The maximum amount of workers used to run your tests. Can be specified as % or a number.
  // E.g. maxWorkers: "10%" will use 10% of your CPU amount + 1 as the maximum worker number.
  // maxWorkers: 2 will use a maximum of 2 workers.
  maxWorkers: "50%",
  // A list of paths to directories that Jest should use to search for files in
  roots: ["test"],
  testEnvironment: "node",
  // Options that will be passed to the testEnvironment
  // testEnvironmentOptions: {},
  testRegex: ["/test/.*\\.(test|spec)?\\.(ts|tsx)$"],
  transform: {
    "^.+\\.ts?$": ["babel-jest"],
  },
};
At the end of every test execution, I get a summary like this:
Test Suites: 9 passed, 9 total
Tests: 155 passed, 155 total
Snapshots: 0 total
Time: 10.248 s
But no table showing line coverage, branch coverage... etc.
Is my jest.config.js incorrect or am I missing something?
Thanks in advance for your help!
Thanks to @jonrsharpe, I managed to find out what the issue was.
Since I was overriding coverageReporters, the default text coverage reporter was dropped. So in order to see the table again, I had to specify it manually (check the docs):
...
coverageReporters: ["cobertura", "lcov", "text"],
...
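If you only want the totals without the per-file table, istanbul's text-summary reporter is an alternative (a sketch; swap it in for, or add it alongside, "text"):
// jest.config.js (excerpt)
coverageReporters: ["cobertura", "lcov", "text", "text-summary"],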

queryText being sent to Dialogflow is not the original user query

The user input / queryText being sent to Dialogflow is not the expected, original user query.
[Screenshot: simulator query manipulation]
I enabled "Log interactions to Google Cloud" in my Dialogflow project's settings. What I'm seeing is multiple "assistant_action" resources before the actual request that goes to Dialogflow. In the example above, this is what I see:
[Screenshot: GCP logs]
With the first debug resource showing post data with:
"inputs":[{"rawInputs":[{"inputType":"UNSPECIFIED_INPUT_TYPE","query":"how long has it been on the market"}]
And
resource: {
  type: "assistant_action"
  labels: {
    project_id: "<MY-PROJECT-ID>"
    version_id: ""
    action_id: ""
  }
},
timestamp: "2021-03-05T18:41:44.142202856Z"
severity: "DEBUG"
labels: {
  channel: "production"
  querystream: "GOOGLE_USER"
  source: "AOG_REQUEST_RESPONSE"
}
The subsequent requests are the same but with modified input queries ("how long has it been on the market" -> "how long has something been on the market" -> "how long has us FDA been on the market"). The last one is the actual user query sent, with the channel being "preview" and the action_id "actions.intent.TEXT".
resource: {
  type: "assistant_action"
  labels: {
    project_id: "<MY-PROJECT-ID>"
    version_id: ""
    action_id: "actions.intent.TEXT"
  }
},
timestamp: "2021-03-05T18:41:45.942019959Z"
severity: "DEBUG"
labels: {
  channel: "preview"
  querystream: "GOOGLE_USER"
  source: "AOG_REQUEST_RESPONSE"
}
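For reference, these entries can also be pulled from Cloud Logging on the command line with a resource-type filter (a sketch; the project ID is a placeholder):
# list recent assistant_action debug entries for the project
gcloud logging read 'resource.type="assistant_action" AND severity=DEBUG' \
  --project=<MY-PROJECT-ID> --limit=10 --format=json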
I should note that I am testing current drafts of an AoG project and have no releases, let alone a production release. I have a denied beta because of branding issues, which I address with separate AoG/DF projects for production. I do not have any intents enabled for slot filling or any required entity parameters. This is just one example, but I have been noticing many occurrences of this issue.
What is happening here? Why is the original user input being manipulated? What are all these interactions we are seeing before the expected request/response cycle?
After contacting someone at Google Cloud, I was informed that this had been raised by others and that the AoG developers were looking into it.
As of the Mar 24, 2021 release, I can no longer replicate this Entity Resolution issue.

Suppressing nightwatchjs warnings in terminal output

I'm using nightwatchjs to run my test suite, and I would like to remove the warning messages being output to my terminal display.
At the moment I'm getting loads of these (admittedly genuine) warning messages while my scripts are running, and they are making the results harder and harder to read.
Yes, they are valid messages, but it's often not possible for me to uniquely pick out each individual element, and I'm not interested in them in my output.
So I'd like to know how I can stop them from being reported in my terminal.
Below is what I've tried so far in my nightwatch.conf.js config file;
desiredCapabilities: {
  browserName: 'chrome',
  javascriptEnabled: true,
  acceptSslCerts: true,
  acceptInsecureCerts: true,
  chromeOptions: {
    args: [
      '--ignore-certificate-errors',
      '--allow-running-insecure-content',
      '--disable-web-security',
      '--disable-infobars',
      '--disable-popup-blocking',
      '--disable-notifications',
      '--log-level=3'
    ],
    prefs: {
      'profile.managed_default_content_settings.popups': 1,
      'profile.managed_default_content_settings.notifications': 1
    },
  },
},
but it's still displaying the warnings.
Any help on this would be really appreciated.
Many thanks.
You can try setting the detailed_output property to false in the configuration file. This should stop these details from printing in the console.
You can find a sample config file here.
You can find the relevant details under the Output Settings section of the official docs here.
Update 1: It looks like a combination of properties controls this, and the combo below works for me.
live_output: false,
silent: true,
output: true,
detailed_output: false,
disable_error_log: false,
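In nightwatch.conf.js these are top-level settings, so the relevant part of the file ends up looking something like this (a minimal sketch; the rest of the config is elided):
// nightwatch.conf.js (excerpt)
module.exports = {
  live_output: false,
  silent: true,            // suppress verbose WebDriver/Selenium session logging
  output: true,            // keep printing test output
  detailed_output: false,  // drop the per-assertion detail lines
  disable_error_log: false,
  // ...src_folders, test_settings, etc.
};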

Protractor config file: cucumberOpts tags not taken individually or ignored

I'm using the following configuration file.
/*EC:201611*/
var featsLocation = 'features/';
var stepsLocation = 'steps/';

exports.config = {
  params: {
    authURL: 'http://localhost:3333',
    login: {
      email: '',
      passw: ''
    }
  },
  resultJsonOutputFile: '',
  getPageTimeout: 60000,
  allScriptsTimeout: 500000,
  framework: 'custom',
  frameworkPath: require.resolve('protractor-cucumber-framework'),
  capabilities: {
    'browserName': 'phantomjs',
    'phantomjs.binary.path': '/srv/build/applications/phantomjs/bin/phantomjs'
  },
  specs: [
    featsLocation + 'ediRejects.feature',
    featsLocation + 'shipmentValidation.feature'
  ],
  baseUrl: '',
  cucumberOpts: {
    tags: ['#Smoke', '#Intgr'],
    require: [
      stepsLocation + 'ediRejects/ediRejects.spec.js',
      stepsLocation + 'shipmentValidation/shipmentValidation.spec.js',
      stepsLocation + 'appointmentsOverdue/appointmentsOverdue.spec.js',
      stepsLocation + 'deliveryOverdue/deliveryOverdue.spec.js',
      stepsLocation + 'cpLogin/cpLogin.spec.js',
      stepsLocation + 'globalSearch/globalSearch.spec.js',
      './support/hooks.js'
    ],
    monochrome: true,
    strict: true,
    plugin: "json"
  },
};
/*EC:201611*/
As you can see, I'm adding these tags: ['#Smoke','#Intgr'].
In the feature files, I placed the tags on top of the scenarios, like this...
Feature: EDI-Rejects

  #Smoke
  Scenario: Access Final Mile Application
    Given I get authentication to use EDI Rejects widget at Final Mile Application

  #Smoke
  Scenario: Validate title and headers
    When I am testing EDI Rejects widget on the Werner Final Mile URL
    Then Check the EDI Rejects widget header name is "EDI Rejects"
    And Check the EDI Rejects Column Header names are "Customer Name", "Contact Name", "Contact Number", "Reject Reason", "Rejected %", "Shipper Ref #"

  #Intgr
  Scenario Outline: Validate Global Search Feature
    When I am testing Global Search Feature of the Werner Final Mile URL
    Then Check the "<columnName>" search is Correct

    Examples:
      | columnName               |
      | Customer Name            |
      | Contact Name             |
      | Contact No               |
      | Reject Reason            |
      | Shipper Reference Number |
But when I execute I get this...
0 scenarios
0 steps
0m00.000s
Am I missing something?
Node version: v7.2.0.
Protractor version: 4.0.10.
npm version: 3.10.9.
In addition I have noted that when I put the two tags in the same scenario block of the feature file, like this...
  #Smoke
  #Intgr
  Scenario: Access Final Mile Application
    Given I get authentication to use EDI Rejects widget at Final Mile Application

  #Smoke
  #Intgr
  Scenario: Validate title and headers
    When I am testing EDI Rejects widget on the Werner Final Mile URL
    Then Check the EDI Rejects widget header name is "EDI Rejects"
    And Check the EDI Rejects Column Header names are "Customer Name", "Contact Name", "Contact Number", "Reject Reason", "Rejected %", "Shipper Ref #"

  #Intgr
  Scenario Outline: Validate Global Search Feature
    When I am testing Global Search Feature of the Werner Final Mile URL
    Then Check the "Customer Name" search is Correct

  #Smoke
  Scenario Outline: Validate Communication Methods disabled functionality
    When I am testing Appointments Overdue widget on the Werner Final Mile URL
    Then Check the Appointments Overdue widget "Phone" Communication button is disabled if none of the agents segments are selected
The first two scenarios are picked up by Protractor, but the third and fourth are ignored. This doesn't work for me because not all my scenarios are smoke tests and not all my scenarios are integration tests.
UPDATE
I did what @Ram Pasala suggested. But now, with CucumberJS 2.0, I'm facing a new problem:
I'm using a package.json to run my scripts with the npm test command.
This was the "old" syntax that used to work:
protractor ./FM_IntTest_UI_conf.js --cucumberOpts.tags ~#ignore --cucumberOpts.tags #smoke,#rt,#sprint
Based on what the new CucumberJS docs say:
In old CucumberJS, this:
--cucumberOpts.tags ~#ignore --cucumberOpts.tags #smoke,#rt,#sprint
in CucumberJS 2.0 becomes:
--cucumberOpts.tags 'not #ignore and (#smoke or #rt)'
So I tried:
protractor ./FM_IntTest_UI_conf.js --cucumberOpts.tags 'not #ignore and (#smoke or #rt)'
And to be more consistent with cucumberjs2.0 doc, I also tried:
protractor ./FM_IntTest_UI_conf.js --tags 'not #ignore and (#smoke or #rt)'
Neither worked. For both I got the following error:
' was unexpected at this time.
C:\workspace> "C:\workspace\node_modules\.bin\node.exe" "C:\workspace\node_modules\.bin\..\protractor\bin\protractor" ./FM_IntTest_UI_conf.js --tags 'not #ignore and (#smoke or #rt)'
npm ERR! Test failed. See above for more details.
What is now the correct syntax?
UPDATE (20170613)
After trial and error I found out the following:
The syntax must use double quotes, like this:
protractor ./FM_IntTest_UI_conf.js --tags "not #ignore and (#smoke or #rt)"
The CucumberJS documentation is incorrect.
To put this in a package.json, escape characters are needed:
"protractor ./FM_IntTest_UI_conf.js --tags \"not #ignore and (#smoke or #rt)\""
Couple of things here -
The tags CLI option accepts a string, not an array (see the source code). In CucumberJS versions below 2.0 (which is in fact what the protractor-cucumber-framework module uses), you would have to declare them like this to run both #Smoke and #Intgr scenarios -
cucumberOpts: {
  tags: '#Smoke,#Intgr'
}
This should solve your problem for now.
But from Cucumber 2.0 onwards this will change. It is currently in the RC (release candidate) phase, and protractor-cucumber-framework will support it soon. A lot of breaking changes have been introduced, one of which affects tag expressions. According to the official docs, new cucumber-tag-expression style tags have been introduced which are much more readable:
cucumberOpts: {
  tags: '#Smoke or #Intgr'
}
// complex tags would become something like this -
cucumberOpts: {
  tags: '(#Smoke or #Intgr) and (not #Regression)'
}

How to use wait-for-sync properly

For experiments with single node configuration I run ArangoDB with the command:
arangod --server.endpoint=tcp://0.0.0.0:8529 --server.disable-authentication=true --database.wait-for-sync=true
Then I do a few commands:
db._createDatabase("foo")
db._useDatabase("foo")
db._create("a")
db.a.properties()
Get the result:
{
  "doCompact" : true,
  "journalSize" : 33554432,
  "isSystem" : false,
  "isVolatile" : false,
  "waitForSync" : false,
  "keyOptions" : {
    "type" : "traditional",
    "allowUserKeys" : true
  },
  "indexBuckets" : 8
}
So where is my default "waitForSync": true? Where did I make a mistake?
I can confirm your problem using ArangoDB 2.8.7 and arangosh. This is a bug. If the same is done on the server console (with --console), then it works.
From arangosh the request goes via the HTTP API, where a default of false for waitForSync is added and the command-line option is ignored, which is the bug. I will make sure that this is fixed in the next release of ArangoDB.
In the meantime, please add "waitForSync": true to all db._create calls in arangosh and to all POST /_api/collection API calls via HTTP.
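For example, the interim workaround in arangosh looks like this (a sketch of the fix described above):
// arangosh: pass waitForSync explicitly until the fix ships
db._create("a", { waitForSync: true });
db.a.properties().waitForSync;   // -> true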
