We have multiple scripts that run simultaneously on a server to perform various tasks. These scripts are scheduled to run at a set frequency via cron.
Using cron for this purpose has several drawbacks. With cron we cannot check whether the previous run has completed before starting the next one, and we cannot capture the errors or output produced while a script runs. Overlapping runs also increase the CPU load.
So I need a tool where I can register these scripts so that each run starts only after the previous one has completed, and which tracks the output of those scripts as well. I first tried ActiveMQ for this, but I don't think that tool is suitable for the purpose.
Can someone please suggest a tool for this requirement?
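For reference, under plain cron the overlap problem can at least be mitigated with flock(1), which skips a run while the previous one still holds the lock. A minimal sketch, where the lock path is illustrative and `echo working` stands in for the real script:

```shell
# flock -n fails immediately if another run still holds the lock,
# so overlapping cron invocations are skipped instead of stacking up.
# 'echo working' stands in for the real script; the lock path is illustrative.
flock -n /tmp/myjob.lock -c 'echo working' || echo "previous run still active"
```

This does not give you output tracking or error reporting by itself, which is why a dedicated scheduler is still worth looking for.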
I am using task to run various build steps in parallel. It is something like 1000% faster than doing it all synchronously. However, task seems to run everything in separate subshells. I want to be able to prompt the user for input using a Node program, but when Node logs and attempts to prompt from within task, its output is re-stamped with task's logger and the fancy Node prompt never shows up.
I was wondering what the best way of solving this would be. Ideally, there would be a one-liner that attaches whatever command task runs to the main terminal window so that the Node script can do its thing. If that's not possible, I'm open to other ideas. I was thinking of creating a bash alias for task as a front end, and then posting the messages I want logged while Node listens on the socket that task writes to. I just want to make sure there is no easy way of handling this before I build a custom solution.
Thanks.
When I need to compile an application from source (I'm talking about a Linux environment), the procedure is basically the following:
download and extract sources
./configure [optional params]
make
make install
Usually I pass -j4 to make in order to use all the CPU cores and speed up the compilation process (a lot!).
I'm wondering if there is something similar for configure, which often takes a long time to execute. Of course I've already tried passing the same option, but it fails, and I find nothing related in configure --help.
No, configure scripts do not conventionally support distributed or parallelized execution.
Results are cached in config.cache, though, so you might be able to rework your builds to avoid repeated checks without too much effort.
If you want to avoid running the same tests multiple times across configuration jobs for different libraries, have them share the same cache file. See https://www.gnu.org/software/autoconf/manual/autoconf-2.65/html_node/Cache-Files.html
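A sketch of the shared-cache approach, using autoconf's --cache-file option (the directory names and cache path are hypothetical):

```shell
# Reuse one autoconf cache file across two library builds, so checks
# that both configure scripts perform run only once.
cd libfoo && ./configure --cache-file=/tmp/shared.cache && cd ..
cd libbar && ./configure --cache-file=/tmp/shared.cache && cd ..
```

The second configure run reads results recorded by the first instead of re-executing the same compiler checks.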
I am trying to automate the creation of JMeter scripts based on existing Cucumber tests to avoid maintaining two separate sets of tests (one for acceptance and one for load testing).
The Cucumber recording works great locally when I add the HTTP Recorder to the Workbench and start the recording, however I cannot figure out how I can automatically start it from the command line. Is this possible at all?
Why not run Cucumber from JMeter?
Because I'd like to avoid running multiple instances of Cucumber at the same time, and I'd like to be able to distribute the load generation (using jmeter-server)
This is not possible yet.
You should discuss this on the user mailing list and give more details on your request.
If it looks useful, you can then create an enhancement request in the JMeter Bugzilla and the feature may be developed.
I want to create an application that runs in the background on Linux (a daemon) and plays a music file, or any given sound, at set times (5 times) every day. I want this daemon to start when the computer boots into terminal mode (non-GUI). I want to know if this is possible and, if so, what considerations, tools, and programming language would be most efficient for doing it. This will be a dedicated computer that only executes this task, so any recommendations on maximizing efficiency and disabling features that are not required will be appreciated. Also, could you please explain how processes and tasks work in the terminal (non-GUI)? I always thought the terminal was something like CMD in Windows and could only run one task at a time.
EDIT: I need the sound to run at variable times, I'll be fetching these times from a website. Any suggestions regarding how to achieve this?
Thanks for the help and sorry for any shortcoming in the questions or my research.
Look at using cron to run your tasks. cron is a very flexible scheduling utility built in to most Linux distributions.
Basically, with cron you specify a task to run (your main program, or maybe just a sound-playing program), all of its arguments, and when it runs. cron takes care of running it, and will even send you "mail" if the job produces any output (such as errors).
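For five fixed times a day, the crontab entries might look like this (the player and file path are assumptions; aplay ships with alsa-utils):

```shell
# crontab -e: minute hour day-of-month month day-of-week command
0 6  * * * /usr/bin/aplay /home/user/sounds/alarm.wav
0 9  * * * /usr/bin/aplay /home/user/sounds/alarm.wav
0 12 * * * /usr/bin/aplay /home/user/sounds/alarm.wav
0 15 * * * /usr/bin/aplay /home/user/sounds/alarm.wav
0 18 * * * /usr/bin/aplay /home/user/sounds/alarm.wav
```

For the variable times mentioned in the edit, a small script run once a day could fetch the times from the website and rewrite these entries (or use at(1) instead).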
You can run a command as a background subprocess of your terminal, i.e. you can run more than one task at a time, by putting a & after the command:
> cmd&
> [you can type other commands here but the "cmd" program is still running]
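A runnable sketch of the same idea, using $! (the PID of the most recent background job) and wait:

```shell
sleep 1 &               # start a background job; the shell returns immediately
bg_pid=$!               # $! holds the PID of the most recent background job
echo "shell is free while sleep runs (pid $bg_pid)"
wait "$bg_pid"          # block until the background job finishes
echo "background job done"
```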
However, for services you generally don't have to worry about starting them as subprocesses, because the system already knows how to do this. Here's a good question from Super User that has an example of a working service. Simply place your service as a shell script in /etc/init.d and it will be started automatically as a service.
Is it possible to use Jenkins server to run custom tasks one by one?
By a task I mean executing an external Groovy program designed as an independent performance and integration test for a specific deployment.
If it is possible, then how do I:
Define tasks in Jenkins and group them, so they can be run by starting the group.
See the output of each task (an output log).
Stop execution of the whole group if there is a specific outcome like "-1".
And all of this should start automatically after the software has been built and deployed.
I feel there has to be a way to do this with Jenkins using its out-of-the-box functionality, I'm just not sure how. Or am I wrong, and is a custom plugin the solution?
Thanks a lot!
P.S. I am not asking for a detailed answer; just a general direction would be OK. Also, Jenkins is not a requirement; it can be another similar CI server.
It sounds like this could work as a simple Jenkins job with an "Execute shell" build step.
The Console Output for the job will contain the output from the processes you run externally, and the exit status of the script can mark the job as failed (any non-zero exit code does this by default).
On Unix systems, a #! at the beginning of the first line denotes the script environment to use.
To chain this together with the other Jenkins steps, you can use Build Triggers with "Build after other projects are built", using your deployment job as the starting point.
It is possible, but be careful. Normally Jenkins is used to run build jobs and to deploy software to a QA or staging server; it does not touch production. Once you start doing this in Jenkins, you increase the risk that someone will accidentally run a production job that should not have been run.

So if you do decide to use Jenkins for this, set up an entirely separate instance of Jenkins that does nothing other than run these jobs. Then go to Manage Jenkins -> Configure Global Security and set up login users. At the least, use "logged in users can do anything", but it would be better to set up matrix-based security.

Then run any jobs you need using an Execute Shell step. You can schedule jobs with a Build Trigger, and you can connect jobs sequentially by setting up Build Other Projects in the post-build section. If you want more complex job chaining, look into the Join Plugin.
Just keep this Jenkins entirely separate from the Jenkins which you use for CI.