I have an application whose main method starts a web server hosting some RESTful services (using Dropwizard). I'm trying to write tests that hit the HTTP endpoints (rather than the Java methods), so the tests have a prerequisite that the server is running.
Here is my task that executes the application and starts the web server:
task run (dependsOn: 'classes', type: JavaExec) {
main = 'com.some.package.to.SomeService'
classpath = sourceSets.main.runtimeClasspath
args 'server', 'some.yml'
}
The server takes a few seconds to start up, too. Roughly, what I want to do is something like this:
test.doFirst {
println "Starting application..."
Thread.startDaemon {
// What goes here???
}
sleep 20000
println "Application should be started."
}
In other words, before running the tests, start the application in a separate thread and wait some time, giving it a chance to finish starting up.
That said, I can't figure out what goes in Thread.startDaemon (tasks.run.execute() doesn't work), nor whether this is even the best approach. What would be the best way of going about this?
Thanks!
What I would probably do is something like this:
task startServer (type: Exec) {
workingDir 'tomcat/bin'
// 'cmd /c start' should fork the process on Windows (START alone is a cmd built-in)
commandLine 'cmd', '/c', 'start', 'start.bat'
standardOutput = new ByteArrayOutputStream()
ext.output = {
return standardOutput.toString()
}
// loop through output stream for finished flag
// or just put a timeout here
}
task testIt (type: Test) {
description "To test it."
include 'org/foo/Test*.*'
}
Then, when invoking Gradle, run "gradle.bat startServer testIt". That is the basic idea.
I am very new to NodeJS and am trying to develop an application that acts as a scheduler: it fetches data from one ELK stack and sends the processed data to another ELK stack. I am able to achieve the expected behaviour, but after completing all the processing, the scheduler job does not exit; it waits for the next scheduler job to come up.
Note: This scheduler runs every 3 minutes.
job.js
const self = module.exports = {
async schedule() {
if (process.env.SCHEDULER == "MinuteFrequency") {
var timenow = moment().seconds(0).milliseconds(0).valueOf();
var endtime = timenow - 60000;
var starttime = endtime - 60000 * 3;
//sendData is an async method
reports.sendData(starttime, endtime, "SCHEDULER");
}
}
}
I tried various solutions, such as Promise.allSettled(...), Promise.resolve(true), etc., but was not able to fix this.
As per my requirement, I want the scheduler to complete its processing and exit, so that I can save some resources, as I am planning to deploy the application using Kubernetes cron jobs.
When all your work is done, you can call process.exit() to cause your application to exit.
In this particular code, you may need to know when reports.sendData() is actually done before exiting. We would have to see the code for that function to know how to tell when it is done. Just because it's an async function doesn't mean it's written properly to return a promise that resolves when it's done. If you want further help, show us the code for sendData() and any code that it calls too.
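For illustration, here is a minimal sketch of that idea, assuming sendData returns a promise that resolves when the work is done. The reports object below is a hypothetical stub standing in for the real module; the timing arithmetic is copied from the question.

```javascript
// Hypothetical stub for the real reports module: the key assumption is that
// sendData returns a promise that resolves only when the work has finished.
var reports = {
  sendData: function (start, end, label) {
    return new Promise(function (resolve) {
      setTimeout(function () { resolve(label); }, 10);
    });
  }
};

// Same arithmetic as the question: a 3-minute window ending 1 minute ago.
function timeWindow(now) {
  var endtime = now - 60000;
  var starttime = endtime - 60000 * 3;
  return { starttime: starttime, endtime: endtime };
}

async function schedule(now) {
  var w = timeWindow(now || Date.now());
  // Awaiting (instead of fire-and-forget) is what lets us know we are done.
  return reports.sendData(w.starttime, w.endtime, "SCHEDULER");
}

schedule()
  .then(function (label) {
    console.log("finished:", label);
    // process.exit(0); // or set process.exitCode and let Node exit naturally
  })
  .catch(function (err) {
    console.error(err);
    process.exitCode = 1;
  });
```

If sendData is fire-and-forget internally, awaiting it will not help; it has to propagate its own completion as a resolved promise.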
I am trying to Deploy to a list of servers in parallel to save some time. The names of servers are listed in a collection: serverNames
The original code was:
serverNames.each({
def server = new Server([steps: steps, hostname: it, domain: "test"])
server.stopTomcat()
server.ssh("rm -rf ${WEB_APPS_DIR}/pc*")
PLMScriptUtils.secureCopy(steps, warFileLocation, it, WEB_APPS_DIR)
})
Basically I want to stop Tomcat, remove the old files, and then copy a war file to a location using the following lines:
server.stopTomcat()
server.ssh("rm -rf ${WEB_APPS_DIR}/pc*")
PLMScriptUtils.secureCopy(steps, warFileLocation, it, WEB_APPS_DIR)
The original code was working properly: it took one server at a time from the collection serverNames and performed the 3 lines to do the deploy.
But now I have a requirement to run the deployment to the servers listed in serverNames in parallel.
Below is my new modified code:
def threads = []
def th
serverNames.each({
def server = new Server([steps: steps, hostname: it, domain: "test"])
th = new Thread({
steps.echo "doing deployment"
server.stopTomcat()
server.ssh("rm -rf ${WEB_APPS_DIR}/pc*")
PLMScriptUtils.secureCopy(steps, warFileLocation, it, WEB_APPS_DIR)
})
threads << th
})
threads.each {
steps.echo "joining thread"
it.join()
}
threads.each {
steps.echo "starting thread"
it.start()
}
The echo statements were added to visualize the flow.
With this the output is coming as:
joining thread
joining thread
joining thread
joining thread
starting thread
starting thread
starting thread
starting thread
The number of servers in the collection was 4, hence the thread is being added and started 4 times. But it is not executing the 3 lines I want to run in parallel, which means "doing deployment" is not being printed at all, and later the build fails with an exception.
Note that I am running this Groovy code as a pipeline through Jenkins. This whole piece of code is actually a function called deploy of the class deployment, and my Jenkins pipeline creates an object of the class deployment and then calls the deploy function.
Can anyone help me with this? I am stuck like hell with this one. :-(
Have a look at the parallel step. In scripted pipelines (which you seem to be using), you can pass it a map of thread name to action (as a Groovy closure) which is then run in parallel.
deployActions = [
Server1: {
// stop tomcat etc.
},
Server2: {
...
}
]
parallel deployActions
It is much simpler and is the recommended way of doing it.
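Applied to the question's own code, the map can be built from serverNames with collectEntries (a sketch reusing the Server, steps, and PLMScriptUtils names from the question):

```groovy
def deployActions = serverNames.collectEntries { name ->
    // each entry: thread label -> closure; parallel runs all entries at once
    [(name): {
        def server = new Server([steps: steps, hostname: name, domain: "test"])
        server.stopTomcat()
        server.ssh("rm -rf ${WEB_APPS_DIR}/pc*")
        PLMScriptUtils.secureCopy(steps, warFileLocation, name, WEB_APPS_DIR)
    }]
}
parallel deployActions
```

Note the closure captures name rather than the each-style it, which avoids the classic shared-variable pitfall when closures run concurrently.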
NodeJS server with a Mongo DB - one feature will generate a report JSON file from the DB, which can take a while (60 seconds up - has to process hundreds of thousands of entries).
We want to run this as a background task. We need to be able to start a report build process, monitor it, and abort it if the user decides to change the params and re build it.
What is the simplest approach with node? Don't really want to get into the realms of separate worker servers processing jobs, message queues etc - we need to keep this on the same box and fairly simple implementation.
1) Start the build as an async method, and return to the user, with socket.io reporting progress?
2) Spin off a child process for the build script?
3) Use something like https://www.npmjs.com/package/webworker-threads?
With the few approaches I've looked at, I get stuck on the same two areas:
1) How to monitor progress?
2) How to abort an existing build process if the user re-submits data?
Any pointers would be greatly appreciated...
The best would be to separate this task from your main application. That said, it'd be easy to run it in the background.
To run it in the background and monitor it without a message queue etc., the easiest option is a child_process.
1) Launch a spawned job from an endpoint (or url) called by the user.
2) Set up a socket to return live monitoring of the child process.
3) Add another endpoint to stop the job, with a unique id returned by 1) (or not, depending on your concurrency needs).
Some coding ideas:
var spawn = require('child_process').spawn
var job = null // keep the job in memory so we can kill it later

app.get('/save', function(req, res) {
  if (job && job.pid)
    return res.status(500).send('Job is already running').end()

  job = spawn('node', ['/path/to/save/job.js'], {
    detached: false, // if not detached and your main process dies, the child will be killed too
    stdio: [process.stdin, process.stdout, process.stderr] // these can be file streams for logs or whatever
  })

  job.on('close', function(code) {
    job = null
    // send socket information about the job ending
  })

  return res.status(201).end() // created
})

app.get('/stop', function(req, res) {
  if (!job || !job.pid)
    return res.status(404).end()

  job.kill('SIGTERM')
  // or process.kill(job.pid, 'SIGTERM')
  job = null
  return res.status(200).end()
})

app.get('/isAlive', function(req, res) {
  try {
    job.kill(0) // signal 0 only tests whether the process exists, it doesn't kill it
    return res.status(200).end()
  } catch (e) {
    return res.status(500).send(e).end()
  }
})
To monitor the child process you could use pidusage; we use it in PM2 for example. Add a route to monitor a job and call it every second. Don't forget to release memory when the job ends.
You might want to check out this library which will help you manage multi processing across microservices.
I am not very familiar with nodejs, but I need some guidance with my task. Any help would be appreciated.
I have a nodejs file which I run from the command line, as "node filename arguments", and it performs some operation based on whatever arguments I have passed.
Now, I have an html page with different options to select different operations. Based on the selection, I want to pass my parameters to a file; that can be any local nodejs file which calls my other nodejs file internally. Is that possible? I am not sure what my approach should be!
I always have to run a different command from the terminal to execute each task, so my goal is to reduce that overhead: select options from the UI and perform the operations through the nodejs file.
I was bored so I decided to try to answer this even though I'm not totally sure it's what you're asking. If you mean you just need to run a node script from a node web app and you normally run that script from the terminal, just require your script and run it programmatically.
Let's pretend this script you run looks like this:
// myscript.js
var task = process.argv[2];
if (!task) {
console.log('Please provide a task.');
return;
}
switch (task.toLowerCase()) {
case 'task1':
console.log('Performed Task 1');
break;
case 'task2':
console.log('Performed Task 2');
break;
default:
console.log('Unrecognized task.');
break;
}
With that you'd normally do something like:
$ node myscript task1
Instead you could modify the script to look like this:
// Define our task logic as functions attached to exports.
// This allows our script to be required by other node apps.
exports.task1 = function () {
console.log('Performed Task 1');
};
exports.task2 = function () {
console.log('Performed Task 2');
};
// If process.argv has more than 2 items then we know
// this is running from the terminal and the third item
// is the task we want to run :)
if (process.argv.length > 2) {
var task = process.argv[2];
if (!task) {
console.error('Please provide a task.');
return;
}
// Check the 3rd command line argument. If it matches a
// task name, invoke the related task function.
if (exports.hasOwnProperty(task)) {
exports[task]();
} else {
console.error('Unrecognized task.');
}
}
Now you can run it from the terminal the same way:
$ node myscript task1
Or you can require it from an application, including a web application:
// app.js
var taskScript = require('./myscript.js');
taskScript.task1();
taskScript.task2();
Just remember that if a user invokes your task script from your web app via a button or something, the script will be running on the web server and not the user's local machine. That should be obvious but I thought I'd remind you anyway :)
EDIT
Since writing the above, I discovered module.parent. The parent property is only populated if your script was loaded from another script via require. This is a better way to test if your script is being run directly from the terminal or not. The way I did it might have problems if you pass an argument in when you start your app.js file, such as --debug. It would try to run a task called "--debug" and then print out "Unrecognized task." to the console when you start your app.
I suggest changing this:
if (process.argv.length > 2) {
To this:
if (!module.parent) {
Reference: Can I know, in node.js, if my script is being run directly or being loaded by another script?
I have a MultiJob Project (made with the Jenkins Multijob plugin), with a series of MultiJob Phases. Let's say one of these jobs is called SubJob01. The jobs that are built are each configured with the "Restrict where this project can be run" option to be tied to one node. SubJob01 is tied to Slave01.
I would like it if these jobs would fail fast when the node is offline, instead of saying "(pending—slave01 is offline)". Specifically, I want there to be a record of the build attempt in SubJob01, with the build being marked as failed. This way, I can configure my MultiJob project to handle the situation as I'd like, instead of using the Jenkins build timeout plugin to abort the whole thing.
Does anyone know of a way to fail-fast a build if all nodes are offline? I could intersperse the MultiJob project with system Groovy scripts to check whether the desired nodes are offline, but that seems like it'd be reinventing, in the wrong place, what should already be a feature.
I ended up creating this solution which has worked well. The first build step of SubJob01 is an Execute system Groovy script, and this is the script:
import java.util.regex.Matcher
import java.util.regex.Pattern
int exitcode = 0
println("Looking for Offline Slaves:");
for (slave in hudson.model.Hudson.instance.slaves) {
if (slave.getComputer().isOffline()) {
println(' * Slave ' + slave.name + " is offline!");
if (slave.name == "Slave01") {
println(' !!!! This is Slave01 !!!!');
exitcode++;
} // if slave.name
} // if slave offline
} // for slave in slaves
println("\n\n");
println "Slave01 is offline: " + hudson.model.Hudson.instance.getNode("Slave01").getComputer().isOffline().toString();
println("\n\n");
if (exitcode > 0){
println("The Slave01 slave is offline - we can not possibly continue....");
println("Please contact IT to resolve the slave down issue before retrying the build.");
return 1;
} // if
println("\n\n");
The Jenkins Pipeline option 'beforeAgent true' can be used to evaluate the when condition before entering the agent.
stage('Windows') {
when {
beforeAgent true
expression { return ("${TARGET_NODES}".contains("windows")) }
}
agent { label 'win10' }
steps {
cleanWs()
...
}
Ref:
https://www.jenkins.io/doc/book/pipeline/syntax/
https://www.jenkins.io/blog/2018/04/09/whats-in-declarative/