I have a UserEvent script on Purchase Order records. When a Purchase Order is created from the NetSuite UI it works fine, but when a Purchase Order is created through a SOAP web services request, the script does not write any entry to the SuiteScript Execution Log.
I have checked all of the following settings for the script:
1- Setup > Integration > SOAP Web Services Preferences > RUN SERVER SUITESCRIPT AND TRIGGER WORKFLOWS (checked true)
2- APPLIES TO -> Purchase Order
3- Log Level -> Debug
4- Status -> Testing/Released (checked in both cases)
5- Deployed -> checked
6- Inactive -> false
7- Roles -> All Roles / Specific Roles (checked with both)
Meanwhile, a UserEvent script on Item records works fine in both the UI and SOAP request cases.
TIA
/**
 * @NApiVersion 2.x
 * @NScriptType UserEventScript
 * @NModuleScope Public
 */
define(["N/log"], function (log) {
    var exports = {};

    /**
     * Function triggered before the record is submitted.
     *
     * @param {Object} scriptContext
     * @param {Record} scriptContext.newRecord - New record
     * @param {Record} scriptContext.oldRecord - Old record
     * @param {string} scriptContext.type - Trigger type
     * @since 2015.2
     */
    function beforeSubmit(scriptContext) {
        log.debug({ title: "Before Submit", details: "Before Submit Event" });
    }

    function afterSubmit(scriptContext) {
        log.debug({ title: "After Submit", details: "After Submit Event" });
    }

    exports.beforeSubmit = beforeSubmit;
    exports.afterSubmit = afterSubmit;
    return exports;
});
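One way to narrow this down is to log the execution context from the script itself. This is a diagnostic sketch (not a fix), using the standard `N/runtime` module; if no log entry at all appears for SOAP-created POs, the deployment settings are filtering the event out before the code runs:

```javascript
/**
 * @NApiVersion 2.x
 * @NScriptType UserEventScript
 */
define(["N/log", "N/runtime"], function (log, runtime) {
    function afterSubmit(scriptContext) {
        // Logs which context fired the script: USERINTERFACE, WEBSERVICES, etc.
        log.audit({
            title: "Execution context",
            details: runtime.executionContext + " / " + scriptContext.type
        });
    }
    return { afterSubmit: afterSubmit };
});
```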
I have private/public key files (hs_ed25519_secret_key and hs_ed25519_public_key) generated by Tor. My goal is to convert them to KeyObjects in Node.js, but I cannot extract the keys from them.
When I open the file (e.g. hs_ed25519_secret_key), I see this:
== ed25519v1-secret: type0 ==8����+�Z�Y���DsЄ�_�K���k�h��z�z|�<ʾ'�Q��:������`�D'��
I guessed they used base64 encoding, so I tried to read the file with:
import fs from "fs";
const secretKey = fs.readFileSync("./hs_ed25519_secret_key", "base64");
and got the following output:
PT0gZWQyNTUxOXYxLXNlY3JldDogdHlwZTAgPT0AAAAYC++/vXtY77+977+977+9UWlCTGXvv73vv73vv70efu+/ve+/ve+/vVzvv71+JHkyeuq4kFpqHe+/vTDvv73vv71e77+977+9aO+/ve+/vXPnp4ZeBmXvv71T77+977+9Jlfvv73vv73ClO+/ve+/vTYK
How can I extract the private key from it? What is the format?
The ultimate goal is to implement the following methods:
class TorV3 {
    /**
     * The same result as TorV3.getPublicFromSecret(TorV3.readSecretKey(path))
     * @returns {KeyObject}
     */
    static readPublicKey(path) {}

    /**
     * @returns {KeyObject}
     */
    static readSecretKey(path) {}

    /**
     * @returns {KeyObject}
     */
    static getPublicFromSecret(keyObject) {}
}
I installed newrelic version 1.5.3 from npm and set the license key in the config file.
I include newrelic.js with the configuration in app.js:
newrelic = require('newrelic');
But it never returns from requiring newrelic: it hangs and the site doesn't load any further. None of the code after that line ever executes.
I tried to read newrelic_agent.log, but it is empty.
I changed the logging level to trace, but the log is still empty.
My config file is the default, except for the application name and license key:
/**
* This file includes all of the configuration variables used by the Node.js
* module. If there's a configurable element of the module and it's not
* described in here, there's been a terrible mistake.
*/
exports.config = {
/**
* Array of application names.
*
* @env NEW_RELIC_APP_NAME
*/
app_name : ['hello'],
/**
* The user's license key. Must be set by per-app configuration file.
*
* @env NEW_RELIC_LICENSE_KEY
*/
license_key : 'my license key',
/**
* Hostname for the New Relic collector proxy.
*
* You shouldn't need to change this.
*
* @env NEW_RELIC_HOST
*/
host : 'collector.newrelic.com',
/**
* The port on which the collector proxy will be listening.
*
* You shouldn't need to change this.
*
* @env NEW_RELIC_PORT
*/
port : 443,
/**
* Whether or not to use SSL to connect to New Relic's servers.
*
* @env NEW_RELIC_USE_SSL
*/
ssl : true,
/**
* Proxy host to use to connect to the internet.
*
* FIXME: proxy support does not currently work
*
* @env NEW_RELIC_PROXY_HOST
*/
proxy_host : '',
/**
* Proxy port to use to connect to the internet.
*
* FIXME: proxy support does not currently work
*
* @env NEW_RELIC_PROXY_PORT
*/
proxy_port : '',
/**
* You may want more control over how the module is configured and want to
* disallow the use of New Relic's server-side configuration. To do so, set
* this parameter to true. Some configuration information is required to make
* the module work properly with the rest of New Relic, but settings such as
* apdex_t and capture_params will not be overridable by New Relic with this
* setting in effect.
*
* @env NEW_RELIC_IGNORE_SERVER_CONFIGURATION
*/
ignore_server_configuration : false,
/**
* Whether the module is enabled.
*
* @env NEW_RELIC_ENABLED
*/
agent_enabled : true,
/**
* The default Apdex tolerating / threshold value for applications, in
* seconds. The default for Node is an apdex_t of 100 milliseconds, which is
* lower than New Relic standard, but Node.js applications tend to be more
* latency-sensitive than most.
*
* @env NEW_RELIC_APDEX
*/
apdex_t : 0.100,
/**
* Whether to capture parameters in the request URL in slow transaction
* traces and error traces. Because this can pass sensitive data, it's
* disabled by default. If there are specific parameters you want ignored,
* use ignored_params.
*
* @env NEW_RELIC_CAPTURE_PARAMS
*/
capture_params : false,
/**
* Array of parameters you don't want captured off request URLs in slow
* transaction traces and error traces.
*
* @env NEW_RELIC_IGNORED_PARAMS
*/
ignored_params : [],
logging : {
/**
* Verbosity of the module's logging. This module uses bunyan
* (https://github.com/trentm/node-bunyan) for its logging, and as such the
* valid logging levels are 'fatal', 'error', 'warn', 'info', 'debug' and
* 'trace'. Logging at levels 'info' and higher is very terse. For support
* requests, attaching logs captured at 'trace' level are extremely helpful
* in chasing down bugs.
*
* @env NEW_RELIC_LOG_LEVEL
*/
level : 'trace',
/**
* Where to put the log file -- by default just uses process.cwd +
* 'newrelic_agent.log'. A special case is a filepath of 'stdout',
* in which case all logging will go to stdout, or 'stderr', in which
* case all logging will go to stderr.
*
* @env NEW_RELIC_LOG
*/
filepath : require('path').join(process.cwd(), 'newrelic_agent.log')
},
/**
* Whether to collect & submit error traces to New Relic.
*
* @env NEW_RELIC_ERROR_COLLECTOR_ENABLED
*/
error_collector : {
/**
* Disabling the error tracer just means that errors aren't collected
* and sent to New Relic -- it DOES NOT remove any instrumentation.
*/
enabled : true,
/**
* List of HTTP error status codes the error tracer should disregard.
* Ignoring a status code means that the transaction is not renamed to
* match the code, and the request is not treated as an error by the error
* collector.
*
* Defaults to 404 NOT FOUND.
*
* @env NEW_RELIC_ERROR_COLLECTOR_IGNORE_ERROR_CODES
*/
ignore_status_codes : [404]
},
transaction_tracer : {
/**
* Whether to collect & submit slow transaction traces to New Relic. The
* instrumentation is loaded regardless of this setting, as it's necessary
* to gather metrics. Disable the agent to prevent the instrumentation from
* loading.
*
* @env NEW_RELIC_TRACER_ENABLED
*/
enabled : true,
/**
* The duration below which the slow transaction tracer should collect a
* transaction trace. If set to 'apdex_f', the threshold will be set to
* 4 * apdex_t, which with a default apdex_t value of 500 milliseconds will
* be 2 seconds.
*
* If a time is provided, it is set in seconds.
*
* @env NEW_RELIC_TRACER_THRESHOLD
*/
transaction_threshold : 'apdex_f',
/**
* Increase this parameter to increase the diversity of the slow
* transaction traces recorded by your application over time. Confused?
* Read on.
*
* Transactions are named based on the request (see the README for the
* details of how requests are mapped to transactions), and top_n refers to
* the "top n slowest transactions" grouped by these names. The module will
* only replace a recorded trace with a new trace if the new trace is
* slower than the previous slowest trace of that name. The default value
* for this setting is 20, as the transaction trace view page also defaults
* to showing the 20 slowest transactions.
*
* If you want to record the absolute slowest transaction over the last
* minute, set top_n to 0 or 1. This used to be the default, and has a
* problem in that it will allow one very slow route to dominate your slow
* transaction traces.
*
* The module will always record at least 5 different slow transactions in
* the reporting periods after it starts up, and will reset its internal
* slow trace aggregator if no slow transactions have been recorded for the
* last 5 harvest cycles, restarting the aggregation process.
*
* @env NEW_RELIC_TRACER_TOP_N
*/
top_n : 20
},
/**
* Whether to enable internal supportability metrics and diagnostics. You're
* welcome to turn these on, but they will probably be most useful to the
* New Relic node engineering team.
*/
debug : {
/**
* Whether to collect and submit internal supportability metrics alongside
* application performance metrics.
*
* @env NEW_RELIC_DEBUG_METRICS
*/
internal_metrics : false,
/**
* Traces the execution of the transaction tracer. Requires logging.level
* to be set to 'trace' to provide any useful output.
*
* WARNING: The tracer tracing data is likely only to be intelligible to a
* small number of people inside New Relic, so you should probably only
* enable tracer tracing if asked to by New Relic, because it will affect
* performance significantly.
*
* @env NEW_RELIC_DEBUG_TRACER
*/
tracer_tracing : false
},
/**
* Rules for naming or ignoring transactions.
*/
rules : {
/**
* A list of rules of the format {pattern : 'pattern', name : 'name'} for
* matching incoming request URLs and naming the associated New Relic
* transactions. Both pattern and name are required. Additional attributes
* are ignored. Patterns may have capture groups (following JavaScript
* conventions), and names will use $1-style replacement strings. See
* the documentation for addNamingRule for important caveats.
*
* @env NEW_RELIC_NAMING_RULES
*/
name : [],
/**
* A list of patterns for matching incoming request URLs to be ignored by
* the agent. Patterns may be strings or regular expressions.
*
* @env NEW_RELIC_IGNORING_RULES
*/
ignore : []
},
/**
* By default, any transactions that are not affected by other bits of
* naming logic (the API, rules, or metric normalization rules) will
* have their names set to 'NormalizedUri/*'. Setting this value to
* false will set them instead to Uri/path/to/resource. Don't change
* this setting unless you understand the implications of New Relic's
* metric grouping issues and are confident your application isn't going
* to run afoul of them. Your application could end up getting blackholed!
* Nobody wants that.
*
* @env NEW_RELIC_ENFORCE_BACKSTOP
*/
enforce_backstop : true,
/**
* Browser Monitoring
*
* Browser monitoring lets you correlate transactions between the server and browser
* giving you accurate data on how long a page request takes, from request,
* through the server response, up until the actual page render completes.
*/
browser_monitoring : {
/**
* Enable browser monitoring header generation.
*
* This does not auto-instrument, rather it enables the agent to generate headers.
* The newrelic module can generate the appropriate <script> header, but you must
* inject the header yourself, or use a module that does so.
*
* Usage:
*
* var newrelic = require('newrelic');
*
* router.get('/', function (req, res) {
* var header = newrelic.getBrowserTimingHeader();
* res.write(header)
* // write the rest of the page
* });
*
* This generates the <script>...</script> header necessary for Browser Monitoring
* This script must be manually injected into your templates, as high as possible
* in the header, but _after_ any X-UA-COMPATIBLE HTTP-EQUIV meta tags.
* Otherwise you may hurt IE!
*
* This method must be called _during_ a transaction, and must be called every
* time you want to generate the headers.
*
* Do *not* reuse the headers between users, or even between requests.
*
* @env NEW_RELIC_BROWSER_MONITOR_ENABLE
*/
enable : true,
/**
* Request un-minified sources from the server.
*
* @env NEW_RELIC_BROWSER_MONITOR_DEBUG
*/
debug : false
}
};
Changed the version to 1.5.1 and everything started working: npm i newrelic@1.5.1
I'm using the solr-client module for Node.js to query my Solr collections.
Now I'm trying to add to and update collections in my backend code using solr-client.
I've successfully used http://lbdremy.github.io/solr-node-client/code/add.js.html to add data to a collection, but I don't know how to update records.
I've tried using this method (all methods can be found here: http://lbdremy.github.io/solr-node-client/code/solr.js.html):
/**
 * Send an update command to the Solr server with the given `data` stringified in the body.
 *
 * @param {Object} data - data sent to the Solr server
 * @param {Function} callback(err,obj) - a function executed when the Solr server responds or an error occurs
 * @param {Error} callback().err
 * @param {Object} callback().obj - JSON response sent by the Solr server deserialized
 *
 * @return {Client}
 * @api private
 */
Client.prototype.update = function (data, callback) {
    var self = this;
    this.options.json = JSON.stringify(data);
    this.options.fullPath = [this.options.path, this.options.core, 'update/json?commit=' + this.autoCommit + '&wt=json']
        .filter(function (element) {
            if (element) {
                return true;
            }
            return false;
        })
        .join('/');
    updateRequest(this.options, callback);
    return self;
}
But how does this method know which records to update? Does it search for pk's in the data parameter, and when one matches the pk in the collection, does that record get updated? And does it need an extra commit?
But how does this method know which records to update? SEE BELOW
Does it search for pk's in the data parameter and, when one matches the pk in the collection, update that record? - YES
And does it need an extra commit? - YES
Technically, you can use INSERT as well as UPDATE; they are the same in Solr.
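In other words, adding a document whose uniqueKey (typically `id`) already exists replaces the stored document wholesale. If you only want to change individual fields, Solr also supports atomic updates, where the new field values are wrapped in modifier objects such as `{set: ...}`. A minimal sketch of building such a payload (the helper name `atomicUpdate` is mine; it assumes `id` is your uniqueKey field, and atomic updates require the updateLog to be enabled and fields to be stored):

```javascript
// Build the body for a Solr atomic update: only the listed fields change,
// the rest of the stored document is preserved by Solr.
// Assumes "id" is the collection's uniqueKey field.
function atomicUpdate(id, changes) {
  const doc = { id: id };
  for (const field of Object.keys(changes)) {
    doc[field] = { set: changes[field] }; // "set" replaces the field value
  }
  return doc;
}
```

Passing the result to client.add() (which posts to /update/json, like the update method above) tells Solr to modify just those fields on the matching document; for example, `atomicUpdate("42", { title_t: "new title" })` produces `{"id":"42","title_t":{"set":"new title"}}`. Remember to commit afterwards unless autoCommit is on.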
I'm creating my first Yeoman generator. I want to download an external zip containing a CMS and unzip it in the root. According to this thread this should be possible. Has this not been implemented yet? What do I need to copy over to my generator if not?
I have run generator-generator and got my basic generator up. This is my code so far:
Generator.prototype.getVersion = function getVersion() {
    var cb = this.async()
      , self = this
    this.log.writeln('Downloading Umbraco version 6.1.6')
    this.download('http://our.umbraco.org/ReleaseDownload?id=92348', '.');
}
This generates an error telling me that it "cannot find module 'download'". What is the correct syntax?
I did a little investigation for you.
There are two methods to download something with Yeoman...
/**
 * Download a string or an array of files to a given destination.
 *
 * @param {String|Array} url
 * @param {String} destination
 * @param {Function} cb
 */
this.fetch(url, destination, cb)
/**
 * Fetch a string or an array of archives and extract it/them to a given
 * destination.
 *
 * @param {String|Array} archive
 * @param {String} destination
 * @param {Function} cb
 */
this.extract(archive, destination, cb)
The callback will pass an error if something went wrong.
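Applied to the snippet from the question, this.extract (rather than a separate 'download' module) should do both the download and the unzip in one call. A sketch, keeping the original URL and destination:

```javascript
Generator.prototype.getVersion = function getVersion() {
  var cb = this.async();
  this.log.writeln('Downloading Umbraco version 6.1.6');
  // extract() fetches the archive and unpacks it into the destination;
  // cb is invoked when done (with an error if something went wrong).
  this.extract('http://our.umbraco.org/ReleaseDownload?id=92348', '.', cb);
};
```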
There's also a method to download packages from Github.
/**
 * Remotely fetch a package from github (or an archive), store this into a _cache
 * folder, and provide a "remote" object as a facade API to ourself (part of
 * generator API, copy, template, directory). It's possible to remove local cache,
 * and force a new remote fetch of the package.
 *
 * ### Examples:
 *
 *     this.remote('user', 'repo', function(err, remote) {
 *       remote.copy('.', 'vendors/user-repo');
 *     });
 *
 *     this.remote('user', 'repo', 'branch', function(err, remote) {
 *       remote.copy('.', 'vendors/user-repo');
 *     });
 *
 *     this.remote('http://foo.com/bar.zip', function(err, remote) {
 *       remote.copy('.', 'vendors/user-repo');
 *     });
 *
 * When fetching from Github:
 * @param {String} username
 * @param {String} repo
 * @param {String} branch
 * @param {Function} cb
 * @param {Boolean} refresh
 *
 * @also
 * When fetching an archive:
 * @param {String} url
 * @param {Function} cb
 * @param {Boolean} refresh
 */