How to change vue-cli message after successful compile ("App running at...")? - vue-cli

I use vue-cli in my dockerized project, where the port mapping looks like this: "4180:8080". The actual message after compiling my SPA looks like:
App running at:
- Local: http://localhost:8080/app/
It seems you are running Vue CLI inside a container.
Access the dev server via http://localhost:<your container's external mapped port>/app/
The app works fine, and I can access it via http://localhost:4180/app/ as intended, but I'm not able to find a proper way to change the message above to show this link instead of "It seems you are running Vue CLI inside a container...". I could use webpack hooks to insert the link before the message, but I'd really like to find a way to change the message generated by the CLI itself. Is that possible somehow?

I came to this question as I was looking to do the same thing with bash, running inside a Docker container (possibly what you're already doing).
You could achieve this by invoking Vue CLI commands through a spawned child Node process from within your Docker container (assuming your container is running Node). You can then modify the output of stdout and stderr accordingly.
You can call such a JavaScript file in one of two ways:
- use a shell script (bash, for example) to call node and run the script
- set the ENTRYPOINT of your Dockerfile to use the script (assuming you're running node by default)
// index.js
const { spawn } = require('child_process')

// Swap the CLI's placeholder text for the externally mapped port.
const replacePort = string => {
  return string.replace(`<your container's external mapped port>`, 8000)
}

const vueCLI = (appLocation, args) => {
  return new Promise((resolve, reject) => {
    const vue = spawn('vue', args, { cwd: appLocation })

    vue.stdout.on('data', (data) => {
      console.log(replacePort(data.toString('utf8')))
    })
    vue.stderr.on('data', (error) => {
      console.log(replacePort(error.toString('utf8')))
    })
    vue.on('close', (exitCode) => {
      if (exitCode === 0) {
        resolve()
      } else {
        reject(new Error('Vue CLI exited with a non-zero exit code'))
      }
    })
  })
}

// CLI_Options is whatever argument array you need, e.g. ['serve']
vueCLI('path/to/app', CLI_Options).catch(error => console.error(error))
This approach does have drawbacks, including:
- slower performance, since all output is routed through an extra Node process
- potential memory leaks, subject to implementation
- risk of zombie processes, should the parent process die
For these reasons, among others, this route turned out to be unsuitable in my specific case.

Instead of changing the message, it's better to change the port Vue is listening on:
npm run serve -- --port 4180
This automatically updates the message to show the new port, and after you update your Docker port forwarding for the new port, it will work again.
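If you'd rather not pass the flag on every run, the same setting can live in vue.config.js. A minimal sketch, assuming Vue CLI 3+ (devServer options are passed through to webpack-dev-server):

// vue.config.js
module.exports = {
  devServer: {
    // Listen on the externally mapped port so the printed URL is correct.
    port: 4180
  }
}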

Related

Error when executing npm command in firebase cloud functions

I have a project containing a Firebase Cloud Functions folder as well as a React app.
Here's the code for the index.js file in the functions folder:
const functions = require("firebase-functions");
const { exec } = require("child_process");
const util = require("util");
const execPromise = util.promisify(exec);

exports.installModules = functions.https.onCall((data, context) => {
  return new Promise((resolve, reject) => {
    console.log("Starting install modules process");
    exec(
      "npm i",
      {
        cwd: "../react-app",
      },
      (err, stdout, stderr) => {
        if (err) {
          console.error("Error running install command:", err);
          return reject(err);
        }
        console.log("Modules installed successfully with output:", stdout);
        resolve("Modules installed successfully");
      }
    );
  });
});

exports.build = functions.https.onCall((data, context) => {
  return new Promise((resolve, reject) => {
    console.log("Starting build process...");
    exec(
      "npm run build",
      {
        cwd: "../react-app",
      },
      (err, stdout, stderr) => {
        if (err) {
          console.error("Error running build command:", err);
          return reject(err);
        }
        console.log("Build completed successfully with output:", stdout);
        resolve("Build completed successfully");
      }
    );
  });
});
The first function (installModules) installs modules inside the React app. The second function (build) creates a build folder for the React app. Both of these functions work fine when testing them with the Firebase Functions shell (with the command firebase functions:shell, and then nameOfFunction({})).
However, when deploying these to the cloud, I get the following error when calling them from a frontend.
severity: "ERROR"
textPayload: "Unhandled error Error: spawn /bin/bash ENOENT
    at Process.ChildProcess._handle.onexit (node:internal/child_process:285:19)
    at onErrorNT (node:internal/child_process:485:16)
    at processTicksAndRejections (node:internal/process/task_queues:83:21) {
  errno: -2,
  code: 'ENOENT',
  syscall: 'spawn /bin/bash',
  path: '/bin/bash',
  spawnargs: [ '-c', 'npm i' ],
  cmd: 'npm i'
}
I've tried running these in an Express server deployed to both Google Cloud and Heroku, but I ran into too many issues, which is why I decided to give Firebase Cloud Functions a try. According to this post, it is possible to run npm commands inside a Google Cloud Function.
Thanks, any help is appreciated!
What you're trying to do isn't possible. You can't dynamically install modules or do anything at all to modify the filesystem that was created for your function (it is a read-only docker image). From the documentation:
The function execution environment includes an in-memory file system that contains the source files and directories deployed with your function (see Structuring source code). The directory containing your source files is read-only, but the rest of the file system is writeable (except for files used by the operating system). Use of the file system counts towards a function's memory usage.
The question you linked does not actually say that it's possible to run npm commands in Cloud Functions. It just says that it's possible to spawn commands (typically that you provide in your own deployment).
If you want to run arbitrary commands to build a filesystem that you can use to execute more programs, Cloud Functions is not the right product for your use case. If you are just trying to build some software and deploy it somewhere, maybe Cloud Build is what you need as part of your deployment pipeline.
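To make the quoted documentation concrete: only the deployed source directory is read-only, so a function can still write scratch files to the in-memory /tmp. A minimal sketch (the function name is hypothetical):

const functions = require("firebase-functions");
const fs = require("fs");
const os = require("os");
const path = require("path");

// Hypothetical demo: /tmp is writable but ephemeral, and it counts
// toward the function's memory usage.
exports.tmpDemo = functions.https.onCall((data, context) => {
  const scratch = path.join(os.tmpdir(), "scratch.txt"); // /tmp on Cloud Functions
  fs.writeFileSync(scratch, "per-instance scratch data");
  return fs.readFileSync(scratch, "utf8");
});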
The issue I see is that you are trying to execute /bin/bash shell scripting through exec within a Firebase function. Although it may be possible, I think you've outgrown Firebase in this regard and may want to look into Cloud Run, which has full access to a shell without restriction and can still communicate with your GCFs (Firebase Functions).
Creating a Cloud Run service is really easy with the pre-made container images you can use in the Google Cloud Console GUI, so you can be up and running quickly.
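As a sketch of what that handoff could look like, a callable function can forward the heavy work to a Cloud Run service over plain HTTPS. The URL and endpoint below are hypothetical, and the global fetch assumes a Node 18+ runtime:

const functions = require("firebase-functions");

exports.triggerBuild = functions.https.onCall(async (data, context) => {
  // Hypothetical Cloud Run service that runs `npm i` / `npm run build`
  // in a container with unrestricted shell access.
  const res = await fetch("https://builder-abc123-uc.a.run.app/build", {
    method: "POST",
  });
  return await res.text();
});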

Blockchain listener app exited with code 0 in Docker

I have a simple WebSocket application running. I use ethers.js to listen to blockchain events, which uses WebSockets in the background. I connect to the blockchain via an Infura provider. When I dockerize the app and run the image, it stays alive for about 3-5 minutes, but then exits with code 0 without any errors or messages. When I run the application without Docker, simply using the command npx ts-node src/index.ts from the terminal, there is no problem and it keeps running forever.
It also logs ⛓️ [chain]: Started listening events in the Docker logs, so it starts up successfully as well.
No event is captured by the listener and nothing else happens, so it is not the case that something executed and somehow caused the exit. Also, when I manage to make a transaction quickly before it exits, the event is captured successfully and the container stays alive for some more time.
What could be the reason behind this, and what should I do to keep it running?
Here is my Dockerfile:
FROM node:alpine
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
CMD ["npx", "ts-node", "src/index.ts"]
Here is my index.ts:
import { listenContractEvents } from './events';
listenContractEvents()
.then(() => console.log('⛓️ [chain]: Started listening events'))
.catch(console.log);
Here is my events.ts:
import { ethers } from 'ethers';
// contractAddress, contractAbi and the User type are defined elsewhere in the project.

const provider = new ethers.providers.WebSocketProvider(
  `wss://goerli.infura.io/ws/v3/${process.env.INFURA_API_KEY}`
);

export async function listenContractEvents() {
  const contract = new ethers.Contract(contractAddress, contractAbi, provider);

  let userList: User[] = await contract.getUserList();

  contract.on('Register', async () => {
    console.log('⛓️ [chain]: New user registered!');
    userList = await contract.getUserList();
  });
}
It seems the problem was Infura closing the WebSocket after a certain amount of idle time; since no other work was pending besides the WebSocket, the Docker container considered the job done and shut itself down.
I used the following piece of code to restart the WebSocket every time it closes:
provider._websocket.onclose = () => {
  listenContractEvents();
};
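A slightly more defensive variant of the same idea rebuilds the provider on every close instead of reusing the dead one. This is only a sketch: _websocket is an ethers v5 internal that may change between versions, and it assumes listenContractEvents is refactored to accept the provider it should attach to:

function startListening() {
  const provider = new ethers.providers.WebSocketProvider(
    `wss://goerli.infura.io/ws/v3/${process.env.INFURA_API_KEY}`
  );
  provider._websocket.onclose = () => {
    // Listeners bound to a dead socket never fire again, so drop them
    // and reconnect after a short delay to avoid a tight restart loop.
    provider.removeAllListeners();
    setTimeout(startListening, 3000);
  };
  listenContractEvents(provider).catch(console.log);
}

startListening();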

Unable to read a continuous data stream from a node child process in Node.js

First things first. The goal I want to achieve:
I have two processes:
- the first one is webpack, which just watches for file changes and pushes the bundled files into the dist/ directory
- the second one (the Shopify CLI) watches for any file changes in the dist/ directory and pushes them to a remote destination
My goal is to have only one command (like npm run start) which simultaneously runs both processes without printing anything to the terminal so I can print custom messages. And that's where the problem starts:
How can I continuously read child process terminal output?
Printing custom messages for webpack events at the right time is pretty easy, since webpack has a Node API for that. But with the Shopify CLI, all I can do is capture its output and process it.
Normally, the Shopify CLI prints something like "Finished Upload" as soon as the changed file has been pushed. This works perfectly fine the first time, but after that, nothing is printed to the terminal anymore.
Here is a minimal representation of what my current setup looks like:
const { spawn } = require('child_process');

const childProcess = spawn('shopify', ['theme', 'serve'], {
  stdio: 'pipe',
});

childProcess.stdout.on('data', (data) => {
  // data is a Buffer, so convert it before logging
  console.log(data.toString());
});

childProcess.stderr.on('data', (data) => {
  // Just to make sure there are no errors
  console.log(data.toString());
});

Node's spawn/exec not working when called from a scheduled Windows task

I'm facing a very odd issue with a Node script that invokes a process. It looks like this:
// wslPath declared here (it's a shell file)
const proc = cp.spawn('ubuntu.exe', ['run', wslPath]);

let stdout = '';
proc.stdout.on('data', data => stdout += data.toString());
let stderr = '';
proc.stderr.on('data', data => stderr += data.toString());

return await new Promise((resolve, reject) => {
  proc.on('exit', async code => {
    await fs.remove(winPath);
    if (code) {
      return reject({ code, stdout, stderr });
    }
    resolve({ stdout, stderr });
  });
});
As you can see, the script invokes WSL, and WSL is enabled on the computer. When I run the script manually, it works fine. When I log in to that computer via RDP from another machine and run it with the same credentials, it works fine as well. But when the script is invoked from a scheduled task, which also runs with the same credentials, the spawn call returns:
(node:4684) UnhandledPromiseRejectionWarning: Error: spawn UNKNOWN
at ChildProcess.spawn (internal/child_process.js:394:11)
at Object.spawn (child_process.js:540:9)
I verified the user is the same by logging require('os').userInfo() and require('child_process').spawnSync('whoami', { encoding: 'utf8' }), and both return the same in all three cases.
I assume it is because ubuntu.exe is not being found, but I don't know why that would be as the user is the same in all three cases.
What could be the reason for this and how can I debug this further?
The Windows Task Scheduler allows you to specify a user to run as (for privilege reasons), but it does not give you the environment (PATH and other environment variables) that is configured for that user.
So, when running programs from the Windows Task Scheduler, it's important not to make any assumptions about what's in the environment (particularly the PATH). If my program depends on certain things in the environment, I sometimes change my task to run a .BAT file that first sets up the environment as needed and then launches my program from there.
Among other things, the simplest way to avoid relying on the PATH is to specify the full path to the executable you are running rather than assuming it will be found in the PATH somewhere. But you also need to make sure your executable can find any other resources it needs without environment variables, or configure those variables for it before running.
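As a concrete sketch of that advice applied to the snippet above (the executable location is an assumption; check where ubuntu.exe actually lives for the account the task runs as, since Store launcher stubs usually sit under %LOCALAPPDATA%\Microsoft\WindowsApps):

const cp = require('child_process');
const path = require('path');

// Resolve the executable ourselves instead of trusting the task's PATH.
// The location below is an assumption; adjust it for the task's user.
const ubuntuExe = path.join(
  process.env.LOCALAPPDATA || 'C:\\Users\\<user>\\AppData\\Local',
  'Microsoft', 'WindowsApps', 'ubuntu.exe'
);

const proc = cp.spawn(ubuntuExe, ['run', wslPath], {
  // Pass an explicit environment rather than assuming the scheduled
  // task inherited the interactive user's variables.
  env: { ...process.env }
});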

Can't spawn `gcloud app deploy` from a Node.js script on Windows

I'm building an Electron application (Node.js) which needs to spawn gcloud app deploy from the application with realtime feedback (stdin/stdout/stderr).
I quickly switched from child_process to execa because I had issues on Mac OS X with the child_process buffer, which is limited to 200 KB (and gcloud app deploy sends some big chunks of output > 200 KB, which crash the command).
Now, with execa, everything seems to work normally on OS X but not on Windows.
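For reference, the ceiling mentioned above is child_process's maxBuffer option, which applies to exec's buffered output and can simply be raised. A hedged sketch (note that exec only delivers output once the process exits, so it would not give the realtime feedback wanted here):

const cp = require('child_process');

// Sketch: raise the buffered-output limit; the default has historically
// been around 200 KB (1 MB in newer Node versions).
cp.exec('gcloud app deploy --quiet', { maxBuffer: 10 * 1024 * 1024 }, (err, stdout, stderr) => {
  if (err) return console.error(err);
  console.log(stdout);
});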
The code looks something like this:
//which: https://github.com/npm/node-which
const which = require('which')
const execa = require('execa')

let bin = `gcloud${/^win/.test(process.platform) ? '.cmd' : ''}`

which(bin, (err, fullpath) => {
  let proc = execa(fullpath, ['app', 'deploy'], {
    cwd: appPath
  })

  proc.stdout.on('data', data => {
    parseDeploy(data.toString())
  })
  proc.stderr.on('data', data => {
    parseDeploy(data.toString())
  })

  proc.then(() => {
    // ...
  }).catch(e => {
    // ...
  })
})
This code works perfectly on Mac OS X, but I don't get the same result on Windows.
I have tried lots of things:
- execa()
- execa.shell()
- the shell: true option
- maxBuffer set to 1 GB (just in case)
- detached: true, which works BUT I can't read stdout/stderr in realtime in the application, since it opens a new cmd.exe with no interaction with the Node.js application
- lots of child_process variants
I made a gist showing the responses I get for some basic child process tests I ran on Windows:
https://gist.github.com/thyb/9b53b65c25cd964bbe962d8a9754e31f
I also opened an issue on execa repository: https://github.com/sindresorhus/execa/issues/97
Has anyone already run into this issue? I've searched around and found nothing promising except this reddit thread, which doesn't solve the issue.
Behind the scenes, gcloud.cmd is running a Python script. After reading tons of Node.js issues involving child processes, Python and Windows, I came across this thread: https://github.com/nodejs/node-v0.x-archive/issues/8298
There are some known issues with running Python scripts from a Node.js child process.
They talk in this comment about an unbuffered option for Python. After updating the shell script in gcloud.cmd by adding the -u option, I noticed everything was working as expected.
This page explains how to set this option as an environment variable (so you don't have to modify the Windows shell script directly): https://docs.python.org/2/using/cmdline.html#envvar-PYTHONUNBUFFERED
So adding PYTHONUNBUFFERED to the environment variables fixes this issue:
execa(fullpath, ['app', 'deploy'], {
  cwd: appPath,
  env: Object.assign({}, process.env, {
    // Any non-empty string enables unbuffered output for Python.
    PYTHONUNBUFFERED: 'true'
  })
})
