We run our app in two "modes":
Our REST API (express js)
Our background processor (amqplib)
Our REST API, which starts with nodemon, runs completely fine with debug; our background processor, however, does not.
We declare DEBUG=app:* in our .env file, and we do see it when we console.log it, but for some reason, when we run the following code in our background processor, nothing is reported.
We do see that one of amqplib's dependencies, bitsyntax, also uses debug. We wondered whether it does anything to turn debug off, but could not find anything in its code that does.
Is there anything I can do to resolve this problem?
import amqp from 'amqplib/callback_api'
import dotenv from 'dotenv'
import debug from 'debug'

dotenv.config()

const testLog = debug('app:worker')
const rabbitmq = `amqp://${process.env.RABBITMQ_USER}:${process.env.RABBITMQ_PASS}@${process.env.RABBITMQ_HOST}`

console.log(process.env.DEBUG) // app:*

const worker = () => {
  try {
    amqp.connect(rabbitmq, (err, conn) => {
      // ...
      testLog('this does not show up')
      console.log('this does show up')
      // ...
    })
  } catch (err) {
    // ...
  }
}

worker()
We run our background processor using the following command:
NODE_ENV=development nodemon ./src/workers/index.ts
As per a GitHub issue response, debug needs to be enabled explicitly in workers with the following call:
debug.enable(process.env.DEBUG)
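For example, the call can go in the worker entry point right after dotenv has populated the environment (a minimal sketch, not the exact file from the question):

import dotenv from 'dotenv'
import debug from 'debug'

dotenv.config()
// re-enable the namespaces from .env at runtime; debug may have been loaded
// before dotenv populated process.env.DEBUG
debug.enable(process.env.DEBUG)

const testLog = debug('app:worker')
testLog('worker logging is now enabled')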
I created a test WASM program using Go. In the program's main, it adds an API to the "global" and waits on a channel to avoid exiting. It is similar to the typical hello-world Go WASM example that you can find anywhere on the internet.
My test WASM program works well in browsers; however, I would like to run it and call the API using Node.js. If that is possible, I will create some automation tests based on it.
I tried many ways but I just couldn't get it to work with Node.js. The problem is that, in Node.js, the API cannot be found in the "global". How can I run a Go WASM program (with an exported API) in Node.js?
(Let me know if you need more details)
Thanks!
More details:
--- On Go's side (pseudo code) ---
package main

import (
    "fmt"
    "syscall/js"
)

func main() {
    fmt.Println("My Web Assembly")
    js.Global().Set("myEcho", myEcho())
    <-make(chan bool) // block forever so the exported API stays available
}

func myEcho() js.Func {
    return js.FuncOf(func(this js.Value, apiArgs []js.Value) any {
        for _, arg := range apiArgs {
            fmt.Println(arg.String())
        }
        return nil
    })
}
// build: GOOS=js GOARCH=wasm go build -o myecho.wasm path/to/the/package
--- On browser's side ---
<html>
<head>
<meta charset="utf-8"/>
</head>
<body>
<p><pre style="font-family:courier;" id="my-canvas"/></p>
<script src="wasm_exec.js"></script>
<script>
const go = new Go();
WebAssembly.instantiateStreaming(fetch("myecho.wasm"), go.importObject).then((result) => {
  go.run(result.instance);
}).then(_ => {
  // it also works without "window."
  document.getElementById("my-canvas").innerHTML = window.myEcho("hello", "ahoj", "ciao");
});
</script>
</body>
</html>
--- On Node.js' side ---
globalThis.require = require;
globalThis.fs = require("fs");
globalThis.TextEncoder = require("util").TextEncoder;
globalThis.TextDecoder = require("util").TextDecoder;

globalThis.performance = {
  now() {
    const [sec, nsec] = process.hrtime();
    return sec * 1000 + nsec / 1000000;
  },
};

const crypto = require("crypto");
globalThis.crypto = {
  getRandomValues(b) {
    crypto.randomFillSync(b);
  },
};

require("./wasm_exec");

const go = new Go();
go.argv = process.argv.slice(2);
go.env = Object.assign({ TMPDIR: require("os").tmpdir() }, process.env);
go.exit = process.exit;

WebAssembly.instantiate(fs.readFileSync(process.argv[2]), go.importObject).then((result) => {
  go.run(result.instance);
}).then(_ => {
  console.log(go.exports.myEcho("hello", "ahoj", "ciao"));
}).catch((err) => {
  console.error(err);
  process.exit(1);
});
This pseudo code represents 99% of my real code (I only removed business-related details). The problem is that I not only need to run the wasm program (myecho.wasm) with Node.js, but I also need to call the "api" (myEcho), pass it parameters, and receive the returned values, because I want to create automation tests for those "api"s. With Node.js, I can launch the test js scripts and validate the outputs all from the command line; the browser isn't a handy tool for this case.
Running the program with node wasm_exec.js myecho.wasm isn't enough for my case.
It would be nice to know more details about your environment and what you are actually trying to do. You can post the code itself, compilation commands, and versions for all the tools involved.
Trying to answer the question without these details:
Go WASM is very browser-oriented, because the compiled code needs the glue JS in wasm_exec.js to run. Node.js shouldn't have a problem with that, and the following command should work:
node wasm_exec.js main.wasm
where wasm_exec.js is the glue code shipped with your go distribution, usually found at $(go env GOROOT)/misc/wasm/wasm_exec.js, and main.wasm is your compiled code. If this fails, you can post the output as well.
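If the glue file isn't already sitting next to your compiled module, you can copy it out of your Go installation first (assuming a POSIX shell):
cp "$(go env GOROOT)/misc/wasm/wasm_exec.js" .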
There is another way to compile Go code to WASM that bypasses wasm_exec.js: using the TinyGo compiler to output WASI-enabled code. You can try following their instructions to compile your code.
For example:
tinygo build -target=wasi -o main.wasm main.go
You can build for example a javascript file wasi.js:
"use strict";
const fs = require("fs");
const { WASI } = require("wasi");
const wasi = new WASI();
const importObject = { wasi_snapshot_preview1: wasi.wasiImport };
(async () => {
const wasm = await WebAssembly.compile(
fs.readFileSync("./main.wasm")
);
const instance = await WebAssembly.instantiate(wasm, importObject);
wasi.start(instance);
})();
Recent versions of Node have experimental WASI support:
node --experimental-wasi-unstable-preview1 wasi.js
These are usually the things you would try with Go and WASM, but without further details, it is hard to tell what exactly is not working.
After some struggling, I noticed that the reason is simpler than I expected.
I couldn't get the exported API function in Node.js simply because the API had not been exported yet when I tried to call it!
When the wasm program is loaded and started, it runs in parallel with the caller program (the JS running in Node).
WebAssembly.instantiate(...).then(...go.run(result.instance)...).then(/*HERE!*/)
The code at "HERE" is executed too early and the main() of the wasm program hasn't finished exporting the APIs yet.
When I changed the Node script to the following, it worked:
WebAssembly.instantiate(fs.readFileSync(process.argv[2]), go.importObject).then((result) => {
  go.run(result.instance);
}).then(_ => {
  let retry = setInterval(function () {
    if (typeof(go.exports.myEcho) != "function") {
      return;
    }
    console.log(go.exports.myEcho("hello", "ahoj", "ciao"));
    clearInterval(retry);
  }, 500);
}).catch((err) => {
  console.error(err);
  process.exit(1);
});
(only includes the changed part)
I know it doesn't seem to be a perfect solution, but at least it proved my guess about the root cause to be true.
But... why didn't it happen in the browser? sigh...
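For what it's worth, the same wait can also be wrapped in a promise so the call chain stays flat. This is only a sketch of the workaround above, using the same go.exports convention; the poll interval is an arbitrary choice:

function waitForExport(go, name, intervalMs = 100) {
  return new Promise((resolve) => {
    const timer = setInterval(() => {
      // keep polling until the Go side has finished exporting the function
      if (go.exports && typeof go.exports[name] === "function") {
        clearInterval(timer);
        resolve(go.exports[name]);
      }
    }, intervalMs);
  });
}

WebAssembly.instantiate(fs.readFileSync(process.argv[2]), go.importObject).then((result) => {
  go.run(result.instance);
  return waitForExport(go, "myEcho");
}).then((myEcho) => {
  console.log(myEcho("hello", "ahoj", "ciao"));
}).catch((err) => {
  console.error(err);
  process.exit(1);
});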
I've created a helper app with Xcode. It's a command line app that keeps running using RunLoop (because it will do Bluetooth things in the background).
I want to spawn this app using Node.js and read its output. I've successfully done this with other applications using the spawn method. However, with this macOS app nothing is visible until the app finishes.
My node.js code:
const { spawn } = require('node:child_process')
const child = spawn(PROCESS)
child.stdout.on('data', data => {
  console.log(data.toString());
})
My Swift code (helper app):
import Foundation

var shouldKeepRunning = true

print("app started")

let controller = Controller()

DispatchQueue.main.asyncAfter(deadline: DispatchTime.now() + .seconds(10)) {
    shouldKeepRunning = false
}

while shouldKeepRunning == true && RunLoop.current.run(mode: RunLoop.Mode.default, before: Date.distantFuture) {
}
In Node.js, app started is only printed after 10 seconds, when the app finishes. When running the app from the Terminal, I see app started immediately.
Does anyone know why this happens and how it can be solved?
Thanks!
This question is actually the same: swift "print" doesn't appear in STDOut but 3rd party c library logs do when running in docker on ECS, except that there it is Docker, rather than Node.js, that isn't showing the output.
I fixed it by adding the following to the top of my code:
setbuf(stdout, nil)
This makes print() write to stdout immediately instead of waiting for a buffer to fill up first.
I would like to load the settings for my Node application before the app loads, so that the settings are available as the code is loaded. I can get a LoadSettings.js file to run using --require, but the promise that loads the settings doesn't resolve before the app is loaded. Is there a way to force Node to wait for the promise to resolve before this require completes and the rest of the app is loaded?
This is possible using deasync. Here's an example:
import settings from './settings';
import deasync from 'deasync';

let done = false;

function loadSettings() {
  console.log('Loading settings');
  return settings.load().then(() => {
    console.log('Loaded settings');
    done = true;
  });
}

loadSettings();

// synchronously pump the event loop until the settings promise has resolved
deasync.loopWhile(() => !done);
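Assuming the snippet above is saved as the LoadSettings.js preload file mentioned in the question (and transpiled if you keep the import syntax), it can then be wired up like this, where app.js stands in for your real entry point:

node --require ./LoadSettings.js ./app.js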
I'm writing a serverless React app using AWS Amplify. I do my E2E tests using Cypress.
Before each test, I log the user in via the AWS Amplify CLI. Afterwards, I clear all data on the development server and create some new data using fixtures. This way I always have controlled state for each test (see code below).
My question is: is this good practice? Or is it bad to make that many requests against the server before each test? If it is bad, how would you do it if you do not have direct access to the server (again, serverless) to run commands like cy.exec('npm run db:reset && npm run db:seed')? Cypress does warn me in the console about the use of promises:
Cypress detected that you returned a promise in a test, but also invoked one or more cy commands inside of that promise.
Here is the code I use:
import API, { graphqlOperation } from '@aws-amplify/api';
import Auth from '@aws-amplify/auth';
import Amplify from 'aws-amplify';
import * as R from 'ramda';
import config from '../../src/aws-exports';
import { contacts } from '../../src/contacts/fixtures';
import { replaceEmptyStringsWithNull } from '../../src/contacts/helpers';
import {
  createContact as addMutation,
  deleteContact as deleteMutation
} from '../../src/graphql/mutations';
import { listContacts } from '../../src/graphql/queries';

Amplify.configure(config);

const deleteContact = input =>
  API.graphql(graphqlOperation(deleteMutation, { input }));

const createContact = input =>
  API.graphql(graphqlOperation(addMutation, { input }));

describe('Contactlist', () => {
  beforeEach(async () => {
    await Auth.signIn(Cypress.env('email'), Cypress.env('password'));
    const allContacts = await API.graphql(graphqlOperation(listContacts));
    await Promise.all(
      R.map(
        R.pipe(
          R.prop('id'),
          id => ({ id }),
          deleteContact
        )
      )(allContacts.data.listContacts.items)
    );
    await Promise.all(
      R.map(
        R.pipe(
          R.dissoc('id'),
          replaceEmptyStringsWithNull,
          createContact
        )
      )(contacts)
    );
  });

  // ... my tests
});
This is exactly the way I would perform the test. I love starting with a fully controlled state, even if that means making multiple API calls in a before().
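If the promise warning is a concern, one option (only a sketch, with the setup from the question moved into a hypothetical resetData() helper) is to keep the beforeEach callback itself synchronous and hand the promise to cy.wrap(), which waits for it to resolve:

const resetData = async () => {
  await Auth.signIn(Cypress.env('email'), Cypress.env('password'));
  // ...delete all existing contacts and recreate the fixtures exactly as above
};

beforeEach(() => {
  cy.wrap(resetData(), { log: false });
});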
So basically, this is what I want to do: have a Grunt script that compiles my CoffeeScript files to JS, then runs the Node server, and then, either after the server closes or while it's still running, deletes the JS files that resulted from the compilation, keeping only the .coffee ones.
I'm having a couple of issues getting it to work. Most importantly, the way I'm currently doing it is this:
grunt.loadNpmTasks("grunt-contrib-coffee");

grunt.registerTask("node", "Starting node server", function () {
  var done = this.async();
  console.log("test");
  var sp = grunt.util.spawn({
    cmd: "node",
    args: ["index"]
  }, function (err, res, code) {
    console.log(err, res, code);
    done();
  });
});

grunt.registerTask("default", ["coffee", "node"]);
The problem here is that the Node server isn't run in the same process as grunt. This matters because I can't just CTRL-C once to terminate JUST the node server.
Ideally, I'd like to have it run in the same process and have the grunt script pause while it's waiting for me to CTRL-C the server. Then, after it's finished, I want grunt to remove the said files.
How can I achieve this?
Edit: Note that the snippet doesn't have the actual removal implemented since I can't get this to work.
If you keep the variable sp in a more global scope, you can define a task node:kill that simply checks whether sp === null (or similar), and if not, does sp.kill(). Then you can simply run the node:kill task after your testing task. You could additionally invoke a separate task that just deletes the generated JS files.
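A rough sketch of that suggestion (untested; the task and variable names mirror the snippet from the question):

var sp = null;

grunt.registerTask("node", "Starting node server", function () {
  var done = this.async();
  sp = grunt.util.spawn({
    cmd: "node",
    args: ["index"]
  }, function (err, res, code) {
    sp = null;
    done();
  });
});

grunt.registerTask("node:kill", "Stopping node server", function () {
  // only kill the server if it is still running
  if (sp !== null) {
    sp.kill();
    sp = null;
  }
});

// then run "node:kill" (and a cleanup task for the generated JS files)
// after your testing task, as described above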
For something similar I used grunt-shell-spawn in conjunction with a shutdown listener.
In your grunt initConfig:
shell: {
  runSuperCoolJavaServer: {
    command: 'java -jar mysupercoolserver.jar',
    options: {
      async: true // spawn it instead!
    }
  }
},
Then outside of initConfig, you can set up a listener for when the user ctrl+c's out of your grunt task:
grunt.registerTask("superCoolServerShutdownListener",function(step){
var name = this.name;
if (step === 'exit') process.exit();
else {
process.on("SIGINT",function(){
grunt.log.writeln("").writeln("Shutting down super cool server...");
grunt.task.run(["shell:runSuperCoolJavaServer:kill"]); //the key!
grunt.task.current.async()();
});
}
});
Finally, register the tasks
grunt.registerTask('serverWithKill', [
  'shell:runSuperCoolJavaServer',
  'superCoolServerShutdownListener'
]);