how to send email (spawn mail) from gjs gtk app - gnome

I am trying to write a gjs app that needs to send emails.
The way I have found to do this is using spawn_async_with_pipes() to call mail.
The app seems to spawn mail, and I don't get an error, but I don't get any useful output, nor do I receive the test emails...
I have been at this for a while now and have found little to no useful, up-to-date documentation. I am working with GTK3 and GJS (and GLib). I have also tried spawning a shell script that in turn calls mail. That resulted in "could not resolve host" errors and a dead letter queue, so I know my command is being spawned. I am not concerned about the "could not resolve host" error itself, but by the fact that I can't even get that far when spawning mail directly.
I am spawning mail like this:
const [res, pid, in_fd, out_fd, err_fd] =
    await GLib.spawn_async_with_pipes(null,
        ['mail',
         '-V',
         `-s "${msgObj.subBlock}"`,
         `-r ${to}`,
         `-S smtp=${HOST}`,
         '-S smtp-use-starttls',
         '-S smtp-auth=login',
         `-S smtp-auth-user=${USER}`,
         `-S smtp-auth-password=${PASS}`,
         FROM
        ], null, GLib.SpawnFlags.SEARCH_PATH, null);

const in_reader = new Gio.DataOutputStream({
    base_stream: new Gio.UnixOutputStream({fd: in_fd})
});
var feedRes = in_reader.put_string(msgObj.msgBlock, null);

const out_reader = new Gio.DataInputStream({
    base_stream: new Gio.UnixInputStream({fd: out_fd})
});
const err_reader = new Gio.DataInputStream({
    base_stream: new Gio.UnixInputStream({fd: err_fd})
});

var out = out_reader.read_until("", null);
var err = err_reader.read_until("", null);

print(` > out : "${out}"`);
print(` > res : "${res}"`);
print(` > feedRes : "${feedRes}"`);
print(` > err : "${err}"`);
err is 0, and res is just true
I don't know what the output should be, but I'm not getting a recognizable error, and no email is being delivered...
How can I get my app to send emails? Is spawning mail not the way to go?
Thanks in advance for any pointers you can give me.

There are a couple of things here that I think are confusing you, and that I can clear up.
await GLib.spawn_async_with_pipes(
GLib has its own concept of async functions, which, where applicable, need to be wrapped in a Promise to work effectively with the await keyword. In this case, GLib.spawn_async_with_pipes() is not asynchronous in the way you're thinking, but that's okay, because we're going to use the higher-level class Gio.Subprocess.
async function mail(msgObj, to, host, user, pass, cancellable = null) {
    try {
        let proc = new Gio.Subprocess({
            argv: ['mail',
                   '-V',
                   // Option switches and their values are separate args, and
                   // since no shell is involved, no extra quoting is needed
                   '-s', msgObj.subBlock,
                   '-r', to,
                   '-S', `smtp=${host}`,
                   '-S', 'smtp-use-starttls',
                   '-S', 'smtp-auth=login',
                   '-S', `smtp-auth-user=${user}`,
                   '-S', `smtp-auth-password=${pass}`,
                   // FROM is assumed to be a constant defined elsewhere,
                   // as in the question
                   FROM
            ],
            flags: Gio.SubprocessFlags.STDIN_PIPE |
                   Gio.SubprocessFlags.STDOUT_PIPE |
                   Gio.SubprocessFlags.STDERR_MERGE
        });

        // Classes that implement GInitable must be initialized before use,
        // but you could use Gio.Subprocess.new(argv, flags) which will call
        // this for you
        proc.init(cancellable);

        // We're going to wrap a GLib async function in a Promise so we can
        // use it like a native JavaScript async function.
        //
        // You could alternatively return this Promise instead of awaiting it
        // here, but that's up to you.
        let stdout = await new Promise((resolve, reject) => {
            // communicate_utf8() returns a string, communicate() returns a
            // GLib.Bytes, and there are "headless" variants available as well
            proc.communicate_utf8_async(
                // This is your stdin, which can just be a JS string
                msgObj.msgBlock,
                // We've been passing this around from the function args; you
                // can create a Gio.Cancellable and call `cancellable.cancel()`
                // at any time to stop the command (or any other operation
                // you've passed it to), which will throw an
                // "Operation Cancelled" error.
                cancellable,
                // This is the GAsyncReady callback, which works like any
                // other callback, but we need to ensure we catch errors so
                // we can propagate them with `reject()` to make the Promise
                // work properly
                (proc, res) => {
                    try {
                        let [ok, stdout, stderr] = proc.communicate_utf8_finish(res);
                        // Because we used the STDERR_MERGE flag, stderr will
                        // be included in stdout. Obviously you could also
                        // call `resolve([stdout, stderr])` if you wanted to
                        // keep both and separate them.
                        //
                        // This won't affect whether the proc actually returns
                        // non-zero, causing the Promise to reject()
                        resolve(stdout);
                    } catch (e) {
                        reject(e);
                    }
                }
            );
        });

        return stdout;
    } catch (e) {
        // This could be any number of errors, but it will probably be a
        // GError, in which case it will carry a `code` property with a
        // GIOErrorEnum you could use to respond programmatically, if desired.
        logError(e);
    }
}
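For completeness, here is a minimal sketch of how you might call this; the msgObj shape and the HOST/USER/PASS constants are carried over from your question, and the recipient address is hypothetical:
const msgObj = {
    subBlock: 'Test subject',
    msgBlock: 'Hello from GJS!\n'
};

mail(msgObj, 'recipient@example.com', HOST, USER, PASS).then(output => {
    // with -V, mail's verbose output (merged with stderr) lands here
    print(`mail said: ${output}`);
});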
Gio.Subprocess is a better choice overall, but especially for language bindings that can't pass "out" arguments into functions. Using GLib.spawn_async_with_pipes you would usually pass in NULL to prevent opening any pipes you didn't want, and always ensure you close any pipes you don't. Since we can't do that in GJS, you can end up with dangling file descriptors you can't close.
Gio.Subprocess does a lot of leg work for you and ensures file descriptors are closing, prevents zombie processes, sets up child watches for you and other things you really don't want to worry about. It also has convenience functions for getting IO streams so you don't have to wrap the fd's yourself, among other useful things.
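For example, instead of wrapping raw file descriptors as in your code, you can ask the subprocess for its streams directly. A quick sketch, assuming a proc created with Gio.SubprocessFlags.STDOUT_PIPE as above:
// proc.get_stdout_pipe() hands you the GInputStream, no fd wrapping needed
const stdoutStream = new Gio.DataInputStream({
    base_stream: proc.get_stdout_pipe()
});
const [line, length] = stdoutStream.read_line_utf8(null);
print(`first line of output: ${line}`);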
I wrote a longer primer on async programming in GJS that you might find helpful here. You should be able to breeze through it pretty quickly, and it tries to clear up some confusion about the relationship between GLib async and JavaScript async, and the GLib main loop vs the JS event loop.

Related

Why await within async function doesn't work for fs modules?

I am trying to read a Sample.json file through my JS code. First my program checks for Sample.json within every folder in the specified path, then it reads the Sample.json if available and fetches the data. But the await used doesn't work as expected and simply passes the empty object to the calling function before the async functions complete their execution. I have attached the image for the issue.
async function getAvailableJson(filesPath) {
    let detectedJson = {};
    let folders = await fs.promises.readdir(filesPath);
    folders.forEach(async function(folder) {
        await fs.promises.access(path.join(filesPath, folder, "Sample.json")).then(async function() {
            jsonData = await fs.promises.readFile(path.join(filesPath, folder, "Sample.json"))
            const directory = JSON.parse(jsonData)
            const hashvalue = Hash.MD5(jsonData)
            detectedJson[directory["dirName"]] = {
                name: directory["dirName"],
                version: directory["dirVersion"],
                hash: hashvalue
            }
            console.log(detectedJson);
        }).catch(function(err) {
            if(err.code === "ENOENT")
            {}
        });
    });
    return detectedJson;
}
I don't want to use any sync functions since they create unnecessary locks. I have also tried the fs.readdir, fs.access and fs.readFile functions. Could someone point out what I am doing wrong here, since I am new to Node.js? Thanks in advance.
Change your .forEach() to use for/of instead and generally simplify by not mixing await and .then().
async function getAvailableJson(filesPath) {
    let detectedJson = {};
    let folders = await fs.promises.readdir(filesPath);
    for (let folder of folders) {
        let file = path.join(filesPath, folder, "Sample.json");
        try {
            let jsonData = await fs.promises.readFile(file);
            const directory = JSON.parse(jsonData);
            const hashvalue = Hash.MD5(jsonData);
            detectedJson[directory["dirName"]] = {
                name: directory["dirName"],
                version: directory["dirVersion"],
                hash: hashvalue
            };
        } catch (err) {
            // silently skip any directories that don't have Sample.json in them;
            // otherwise, rethrow the error to stop further processing
            if (err.code !== "ENOENT") {
                console.log(`Error on file ${file}`, err);
                throw err;
            }
        }
        console.log(detectedJson);
    }
    return detectedJson;
}
Summary of Changes:
Replace .forEach() with for/of.
Remove .then() and use only await.
Remove .catch() and use only try/catch.
Remove the call to fs.promises.access() since the error can just be handled on fs.promises.readFile().
Add logging if the error is not ENOENT so you can see what the error is and what file it's on. You pretty much never want to silently eat an error with no logging. Though you may want to skip some particular errors, others must be logged. Rethrow errors that are not ENOENT so the caller will see them.
Declare and initialize all variables in use here as local variables.
.forEach() is not promise-aware so using await inside it does not pause the outer function at all. Instead, use a for/of loop which doesn't create the extra function scope and will allow await to pause the parent function.
Also, I consider .forEach() to be pretty much obsolete these days. It's not promise-aware. for/of is a more efficient and more generic way to iterate. And, there's no longer a need to create a new function scope using the .forEach() callback because we have block-scoped variables with let and const. I don't use it any more.
Also, I see no reason why you're preflighting things with fs.promises.access(). That just creates a race condition and you may as well just handle whatever error you get from fs.promises.readFile() as that will do the same thing without the race condition.
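If you later decide you want the reads to run in parallel rather than one at a time, the promise-aware counterpart of .forEach() is .map() plus Promise.all(). A minimal sketch of that variant, assuming the same fs.promises API and Hash helper as above (Object.fromEntries requires Node 12+):
async function getAvailableJsonParallel(filesPath) {
    const folders = await fs.promises.readdir(filesPath);
    // .map() starts all the reads immediately; Promise.all() waits for every one
    const entries = await Promise.all(folders.map(async (folder) => {
        const file = path.join(filesPath, folder, "Sample.json");
        try {
            const jsonData = await fs.promises.readFile(file);
            const directory = JSON.parse(jsonData);
            return [directory["dirName"], {
                name: directory["dirName"],
                version: directory["dirVersion"],
                hash: Hash.MD5(jsonData)
            }];
        } catch (err) {
            if (err.code !== "ENOENT") throw err;
            return null; // no Sample.json in this folder
        }
    }));
    return Object.fromEntries(entries.filter(Boolean));
}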
See also a related answer on a similar issue.

How to stop class/functions from continuing to execute code in Node.js

I have already asked a few questions about this, but maybe this question will result in better answers (I'm bad at questions).
I have one class, called FOO, in which I call an async Start function that starts the process the FOO class was made to do. This FOO class does a lot of different calculations, as well as posting/getting the calculations using the Node.js "request" module.
(I'm using Electron's UI to create and Start the FOO class: pressing buttons executes functions, etc.)
class FOO {
    async Start() {
        console.log("Start")
        await this.GetCalculations();
        await this.PostResults()
    }

    async PostResults() {
        //REQUESTS STUFF
        const response = {statusCode: 200} //Request including this.Cal
        console.log(response)
        //Send with IPC
        //ipc.send("status", response.statusCode)
    }

    async GetCalculations() {
        for (var i = 0; i < 10; i++) {
            await this.GetCalculation()
        }
        console.log(this.Cal)
    }

    async GetCalculation() {
        //REQUEST STUFF
        const response = {body: "This is a calculation"} //Since request module can't be used in here.
        if (!this.Cal) this.Cal = [];
        this.Cal.push(response)
    }
}

var F1 = new FOO();
F1.Start();
Now imagine this code but with A LOT more steps and more requests etc., where it might take seconds or minutes to finish all the tasks in the FOO class.
(Electron has a stop button that the user can hit when they want the calculations to stop.)
How would I go about stopping the entire class from continuing?
In some cases, the user might stop and start right after, so I have been trying to figure out a way to STOP the code from running entirely, but where the user would still be able to create a new class and start that, without the other class running in the background.
I have been thinking about the "tiny-worker" module, but creating the worker takes 1-2 seconds, and that defeats the purpose of a fast calculation program.
Hopefully, this question is better than the other ones.
Update:
Applying the logic behind the different answers I came up with this:
await Promise.race([this.cancelDeferred, new Promise(async (res, req) => {
    var options = {
        uri: "http://httpstat.us/200?sleep=5000"
    }
    const response = await request(options);
    console.log(response.statusCode)
})])
But even when the
this.cancelDeferred.reject(new Error("User Stop"));
is called, the response's statusCode from the request still gets printed out when the request finishes.
The answers I got show some good logic that I didn't know about, but the problem is that they all only stop the request; the code handling the request response will still execute, and in some cases trigger a new request. This means that I have to spam the Stop function until it fully stops.
Framing the problem: you have a whole bunch of function calls that make serialized asynchronous operations, and you want the user to be able to hit a Cancel/Stop button and cause the chain of asynchronous operations to abort (e.g. stop doing any more and bail on getting whatever eventual result it was trying to get).
There are several schemes I can think of.
1. Each operation checks some state property. You make these operations all part of some object that has an aborted state property. The code for every single asynchronous operation must check that state property after it completes. The Cancel/Stop button can be hooked up to set this state variable. When the current asynchronous operation finishes, it will abort the rest of the operation. If you are using promises for sequencing your operations (which it appears you are), then you can reject the current promise, causing the whole chain to abort.
2. Create some async wrapper function that incorporates the cancel state for you automatically. If all your actual asynchronous operations are of some small group of operations (such as all using the request module), then you can create a wrapper function around whichever request operations you use that when any operation completes, it checks the state variable for you or merges it into the returned promise and if it has been stopped, it rejects the returned promise which causes the whole promise chain to abort. This has the advantage that you only have to do the if checks in one place and the rest of your code just switches to using your wrapped version of the request function instead of the regular one.
3. Put all the async steps/logic into another process that you can kill. This seems (to me) like using a sledgehammer for a small problem, but you could launch a child_process (which can also be a node.js program) to do your multi-step async operations, and when the user presses stop/cancel, you just kill the child process. Your code that is monitoring the child_process and waiting for a result will either get a final result or an indication that it was stopped. You probably want an actual process here rather than worker threads so you get a full and complete abort, and so all memory and other resources used by that process get properly reclaimed; a minimal sketch follows.
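Here's what that could look like with Node's child_process module; the worker.js filename and the message shape are hypothetical:
const { fork } = require('child_process');

let worker = null;

function startCalculations() {
    // worker.js (hypothetical) would contain all the multi-step async logic
    // and send its final result back with process.send()
    worker = fork('./worker.js');
    worker.on('message', (result) => {
        console.log('Final result:', result);
    });
}

function stopCalculations() {
    if (worker) {
        worker.kill(); // SIGTERM; every pending operation in the child dies with it
        worker = null;
    }
}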
Please note that none of these solutions use any sort of infinite loop or polling loop.
For example, suppose your actual asynchronous operation was using the request() module.
You could define a high scoped promise that gets rejected if the user clicks the cancel/stop button:
function Deferred() {
    let p = this.promise = new Promise((resolve, reject) => {
        this.resolve = resolve;
        this.reject = reject;
    });
    this.then = this.promise.then.bind(p);
    this.catch = this.promise.catch.bind(p);
    this.finally = this.promise.finally.bind(p);
}

// higher scoped variable that persists
let cancelDeferred = new Deferred();

// function that gets called when stop button is hit
function stop() {
    // reject the current deferred which will cause
    // existing operations to cancel
    cancelDeferred.reject(new Error("User Stop"));

    // put a new deferred in place for future operations
    cancelDeferred = new Deferred();
}

const rp = require('request-promise');

// wrapper around request-promise
function rpWrap(options) {
    return Promise.race([cancelDeferred, rp(options)]);
}
Then, you just call rpWrap() everywhere instead of calling rp(), and it will automatically reject if the stop button is hit. You then need to code your asynchronous logic so that if any promise rejects, the whole chain aborts (which is generally the default and automatic behavior for promises anyway).
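As an illustration, GetCalculation() from your class might then look something like this (reusing the httpstat.us URL from your update); once stop() is hit, the await throws, so the response-handling code after it never runs:
async GetCalculation() {
    // rpWrap() rejects with "User Stop" as soon as stop() is called,
    // so nothing below this line executes after a cancel
    const response = await rpWrap({uri: "http://httpstat.us/200?sleep=5000"});
    if (!this.Cal) this.Cal = [];
    this.Cal.push(response);
}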
Asynchronous functions do not run code in a separate thread, they just encapsulate an asynchronous control flow in syntactic sugar and return an object that represents its completion state (i.e. pending / resolved / rejected).
The reason for making this distinction is that once you start the control flow by calling the async function, it must continue until completion, or until the first uncaught error.
If you want to be able to cancel it, you must declare a status flag and check it at some or all of the sequence points, i.e. before each await expression, and return early (or throw) if the flag is set. There are three ways to do this.
You can provide a cancel() function to the caller which will be able set the status.
You can accept an isCancelled() function from the caller which will return the status, or conditionally throw based on the status.
You can accept a function that returns a Promise which will throw when cancellation is requested, then at each of your sequence points, change await yourAsyncFunction(); to await Promise.race([cancellationPromise, yourAsyncFunction()]);
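A minimal sketch of the first approach (the step functions are hypothetical placeholders for your real async operations):
let cancelled = false;

function cancel() {
    cancelled = true;
}

async function start() {
    // check the flag before each await and bail out early if it's set
    for (const step of [stepOne, stepTwo, stepThree]) {
        if (cancelled) throw new Error('cancelled');
        await step();
    }
}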
Below is an example of the last approach.
async function delay(ms, cancellationPromise) {
    return Promise.race([
        cancellationPromise,
        new Promise(resolve => {
            setTimeout(resolve, ms);
        })
    ]);
}

function cancellation() {
    const token = {};
    token.promise = new Promise((_, reject) => {
        token.cancel = () => reject(new Error('cancelled'));
    });
    return token;
}

const myCancellation = cancellation();

delay(500, myCancellation.promise).then(() => {
    console.log('finished');
}).catch(error => {
    console.log(error.message);
});

setTimeout(myCancellation.cancel, Math.random() * 1000);

asynchronous version of JSON.stringify and JSON.parse

var assert = require('assert');
var parseJSON = require('json-parse-async');

var contact = new Object();
contact.firstname = "Jesper";
contact.surname = "Aaberg";
contact.phone = ["555-0100", "555-0120"];

var contact2 = new Object();
contact2.firstname = "JESPER";
contact2.surname = "AABERG";
contact2.phone = ["555-0100", "555-0120"];

contact.toJSON = function(key) {
    var replacement = new Object();
    for (var val in this) {
        if (typeof(this[val]) === 'string')
            replacement[val] = this[val].toUpperCase();
        else
            replacement[val] = this[val]
    }
    return replacement;
};

var jsonText = JSON.stringify(contact);
contact = JSON.parse(jsonText);

console.log(contact);
console.log(contact2);
assert.deepEqual(contact, contact2, 'these two objects are the same');
What are the asynchronous equivalents of JSON.parse, JSON.stringify and assert.deepEqual? I am trying to create a race condition and non-deterministic behavior within the code above, but I have not been able to find non-blocking, asynchronous equivalents of the functions mentioned.
node.js does not have an actual asynchronous JSON parser built-in. If you want something that will actually do the parsing outside the main node.js Javascript thread, then you would have to find a third party module that parses the JSON outside of the Javascript thread (e.g. in a native code thread or in some other process). There are some modules in NPM that advertise themselves as asynchronous such as async-json-parser or async-json-parse or json-parse-async. You would have to verify that whichever implementation you were interested in was truly an asynchronous implementation (your Javascript continues to run while the parsing happens in the background).
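For illustration, here is a minimal sketch of genuinely off-main-thread parsing using the worker_threads module that ships with modern Node (it did not exist when this question was asked, so treat the availability as an assumption about your Node version):
const { Worker } = require('worker_threads');

function parseInWorker(str) {
    return new Promise((resolve, reject) => {
        // JSON.parse() runs on a worker thread; the result is cloned back to us
        const worker = new Worker(
            `const { parentPort, workerData } = require('worker_threads');
            parentPort.postMessage(JSON.parse(workerData));`,
            { eval: true, workerData: str }
        );
        worker.once('message', resolve);
        worker.once('error', reject);
    });
}

parseInWorker('{"a": 1}').then(obj => console.log(obj.a)); // prints 1
Keep in mind the parsed result is structured-cloned back across the thread boundary, so for very large objects the copy itself can cost a significant fraction of the parse time.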
But, reading the detail in your question about the problem you're trying to solve, it doesn't sound like you actually need a parser that truly happens in the background. To give you your ability to test what you're trying to test, it seems to me like you just need an indeterminate finish that allows other code to run before the parsing finishes. That can be done by wrapping the synchronous JSON.parse() in a setTimeout() with a promise that has a random delay. That will give some random amount of time for other code to run (to try to test for your race conditions). That could be done like this:
JSON.parseAsyncRandom = function(str) {
    return new Promise(function(resolve, reject) {
        // use a random 0-10 second delay
        setTimeout(function() {
            try {
                resolve(JSON.parse(str));
            } catch (e) {
                reject(e);
            }
        }, Math.floor(Math.random() * 10000));
    });
};

JSON.parseAsyncRandom(str).then(function(obj) {
    // process obj here
}, function(err) {
    // handle err here
});
Note: This is not true asynchronous execution. It's an asynchronous result (in that it arrives some random time later and other code will run before the result arrives), but true asynchronous execution happens in the background in parallel with other JS running and this isn't quite that. But, given your comment that you just want variable and asynchronous results for testing purposes, this should do that.
I've recently faced this problem myself, so I decided to create a library to handle JSON parsing in a really asynchronous way.
The idea behind it is to divide the parsing process into chunks, and then run each separately in the event loop so that other events (user interactions, etc) can still be evaluated within a few milliseconds, keeping the UI interactive.
If you are interested, the library it's called RAJI and you can find it here: https://github.com/federico-terzi/raji
After installing RAJI, you can then convert your synchronous JSON.parse calls into async raji.parse calls, such as:
const object = await parse(payload);
These calls won't block the UI.
You can use 'bluebird' to convert a function call into a promise, as in this example.
The code below is written in JavaScript ES6.
const Promise = require('bluebird')

function stringifyPromise(jsonText) {
    return Promise.try(() => JSON.stringify(jsonText))
}

function parsePromise(str) {
    return Promise.try(() => JSON.parse(str))
}

stringifyPromise(contact)
    .then(jsonText => parsePromise(jsonText))
    .then(contact => {
        assert.deepEqual(contact, contact2, 'these two objects are the same')
    })

Complex sequencing of promises - nested

After a lot of googling I have not been able to confirm the correct approach to this problem. The following code runs as expected, but I have a grave feeling that I am not approaching this in the correct way and am setting myself up for problems.
The following code is initiated by the main app.js file and is passed a location to start loading XML files from and processing into a MongoDB.
exports.processProfiles = function(path) {
    var deferrer = q.defer();

    q(dataService.deleteProfiles()) // simple mongodb call to empty the Profiles collection
        .then(function(deleteResult) {
            return loadFilenames(path); // method to load all filenames in the given path using fs
        })
        .then(function(filenames) {
            // now we have all the file names lets load and save
            filenames.forEach(function(filename) {
                // Here is where i think the problem is!
                // kick off another promise chain for the dynamically sized array of files to process
                q(loadFileContent(path, filename)) // first we load the data in the file
                    .then(function(inboundFile) {
                        // then parse XML structure to my new shiny JSON structure
                        // and ask Mongo to store it for me
                        return dataService.createProfile(processProfileXML(filename, inboundFile));
                    })
                    .done(function(result) {
                        console.log(result);
                    })
            });
        })
        .catch(function(err) {
            deferrer.reject('Unable to Process Profile records : ' + err);
        })
        .done(function() {
            deferrer.resolve('Profile Processing Completed');
        });

    return deferrer.promise;
}
Whilst this code works, I have two main concerns that I could not resolve on my own after a few hours of Google and reading:
1) Is this blocking? The console output is difficult to interpret if this is running asynchronously as I want it to. I think it is, but advice on whether I am doing something fundamentally wrong would be great.
2) Is having a nested promise a bad idea? Should I be linking it to the outer promise? I have tried, but could not get anything to compile or run.
I haven't used Q in a really long time, but I think what you need to do is let it know you're about to hand back an array of promises that must all be satisfied before moving on.
Additionally, as you're waiting for multiple promises in one section of code, rather than nesting further, throw the 'set' of promises back up once they're all satisfied.
q(dataService.deleteProfiles()) // simple mongodb call to empty the Profiles collection
    .then(function (deleteResult) {
        return loadFilenames(path); // method to load all filenames in the given path using fs
    })
    .then(function (filenames) {
        return q.all(filenames.map(function (filename) {
            // do stuff with each file here
            return q(loadFileContent(path, filename));
        }));
    })
    .then(function (resultsOfLoadFileContentsPromises) {
        console.log('I did stuff with all the things');
    })
    .catch(function (err) {});
What you have is not 'blocking'. What you're really doing with promises is moving things into new 'block'-ing sections. The more blocks you have, the more async-ish your code will appear. If nothing else is running apart from this promise, it will still appear procedural.
Inner promises must still resolve before the parent promises that follow them resolve.
Inner promises like yours aren't inherently bad. Personally I break them out into separate files to make them easier to reason about, but I wouldn't define them as 'bad' unless there's no need for the inner promise to exist. Where possible, though (and in your example here), I've adjusted the code to throw the next set of promises back up, so a new section can deal with the data once it's been fetched.
(I'm not great with Q though, this code will probably require a little further tweaking).

Block for stdin in Node.js

Short explanation:
I'm attempting to write a simple game in Node.js that needs to wait for user input every turn. How do I avoid callback hell (e.g. messy code) internal to a turn loop where each turn loop iteration needs to block and wait for input from stdin?
Long explanation:
All the explanations I have read on StackOverflow when someone asks about blocking for stdin input seem to be "that's not what Node.js is about!"
I understand that Node.js is designed to be non-blocking and I also understand why. However I feel that it has me stuck between a rock and a hard place on how to solve this. I feel like I have three options:
Find a way to block for stdin and retain my while loop
Ditch the while loop and instead recursively call a method (like nextTurn) whenever the previous turn ends.
Ditch the while loop and instead use setTimeout(0, ...) or something similar to call a method (like nextTurn) whenever a turn ends.
With option (1) I am going against Node.js principles of non-blocking IO.
With option (2) I will eventually reach a stack overflow as each call adds another turn to the call stack.
With option (3) my code ends up being a mess to follow.
Internal to Node.js there are default functions whose names are suffixed with Sync (e.g. see the fs library or the sleep function), and I'm wondering why there is no Sync method for getting user input. And if I were to write something similar to fs.readSync, how would I go about doing it while still following best practices?
Just found this:
https://www.npmjs.com/package/readline-sync
Example code (after doing an npm install readline-sync)
var readlineSync = require('readline-sync');

while (true) {
    var yn = readlineSync.question("Do you like having tools that let you code how you want, rather than how their authors wanted?");
    if (yn === 'y') {
        console.log("Hooray!");
    } else {
        console.log("Back to callback world, I guess...");
        process.exit();
    }
}
Only problem so far is the wailing of the "That's not how node is meant to be used!" chorus, but I have earplugs :)
I agree with the comment about moving towards an event based system and would ditch the loops. I've thrown together a quick example of text based processing which can be used for simple text games.
var fs = require('fs'),
    es = require('event-stream');

process.stdin
    .pipe(es.split())
    .on('data', parseCommand);

var actionHandlers = {};

function parseCommand(command) {
    var words = command.split(' '),
        action = '';

    // the first word is the action; any remaining words are its arguments
    if (words.length > 0) {
        action = words.shift();
    }

    if (actionHandlers[action]) {
        actionHandlers[action](words);
    } else {
        invalidAction(action);
    }
}

function invalidAction(action) {
    console.log('Unknown Action:', action);
}

actionHandlers['move'] = function(words) {
    console.log('You move', words);
}

actionHandlers['attack'] = function(words) {
    console.log('You attack', words);
}
You can now break up your actions into discrete functions which you can register with a central actionHandlers variable. This makes adding new commands almost trivial. If you can add some details on why the above approach wouldn't work well for you, let me know and I'll revise the answer.
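For instance, registering a new command is just another assignment; a hypothetical 'look' action:
actionHandlers['look'] = function(words) {
    console.log('You look at', words);
}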
ArtHare's solution, at least for my use case, blocks background execution, including tasks started by a promise. While this code isn't elegant, it did block execution of the current function until the read from stdin completed.
While this code must run from inside an async function, keep in mind that running an async function from a top-level context (directly from a script, not contained within any other function) will block that function until it completes.
Below is a full .js script demonstrating usage, tested with node v8.12.0:
const readline = require('readline');

const sleep = (waitTimeInMs) => new Promise(resolve => setTimeout(resolve, waitTimeInMs));

async function blockReadLine() {
    var rl = readline.createInterface({
        input: process.stdin,
        output: process.stdout,
        terminal: false
    });

    let result = undefined;
    rl.on('line', function(line) {
        result = line;
    });

    // poll until the 'line' event has delivered a result
    while (!result) await sleep(100);
    rl.close();
    return result;
}

async function run() {
    // fire-and-forget background loop, to prove other code keeps running
    new Promise(async () => {
        while (true) {
            console.log("Won't be silenced! Won't be censored!");
            await sleep(1000);
        }
    });

    let result = await blockReadLine();
    console.log("The result was:" + result);
    process.exit(0);
}

run();
