I am looking for a solution that monitors a service on a server and runs a custom script when a problem is found.
To be more specific:
We have a service that relies on several Elastic IPs (EIPs) on EC2. When a problem occurs on the primary server, all of those EIPs need to be moved to a slave server.
I have written the script for the EIP failover, but my company wants to use an open source tool for the monitoring part.
I have looked into the Pacemaker/Heartbeat solution, but it seems too complex for what I want to achieve.
Please help me find a good solution for this problem, thanks in advance!
If your problem is as simple as watching a process and triggering scripts, Monit will be your best friend:
http://mmonit.com/monit/
The good thing about Monit is that it scales well if you have a lot of servers, as it runs and executes everything locally on the machine being monitored.
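To give a feel for the configuration, a minimal monitrc sketch could look like this (the service name, port, and script path are placeholders for your setup):

    # fragment of /etc/monitrc -- names and paths below are placeholders
    check process myservice with pidfile /var/run/myservice.pid
        start program = "/etc/init.d/myservice start"
        stop program = "/etc/init.d/myservice stop"
        # hand off to your custom EIP failover script when the service stops responding
        if failed port 8080 protocol http for 3 cycles then exec "/usr/local/bin/eip-failover.sh"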
Have you considered using Scout? It allows you to write custom scripts that get executed after triggers. For example, you can set up a trigger on a third server so that when it can't reach one of your EIPs, it's time to do the EIP switchover.
We are currently monitoring all of our servers using Scout and are pretty happy with it.
I have a repetitive task to run in Node.js, and I've seen that there are existing packages for this, like this one:
https://www.npmjs.com/package/node-cron
The platform where I'm currently hosted also offers built-in cron jobs:
https://www.netlify.com/docs/webhooks/
So my question is: when is it better to use the platform, and when is it better to use a package?
Thanks.
From the URL you posted, I didn't see any method of setting up a cron job using webhooks. Unless you were thinking of setting up a webhook that listens for a POST sent by a Linux cron job or the like?
Regardless, on the actual question of using a platform or a package: each has pros and cons, but based purely on your question I would go with the platform.
If you choose to use a package, you will have to write the code that calls it (which you then need to test, maintain, and run). You need to ensure that the Node process is always up and running, that it is re-spawned if it dies or exits, and that it gets started again if the operating system reboots. All of these problems can be solved easily (with PM2, for instance), but the fact is you need to think of them and solve them yourself, or the cron job might not run when you want it to.
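For contrast, the package route itself is only a few lines with node-cron. A minimal sketch (the schedule and the task body are placeholders), keeping in mind the caveat above that this process must be kept alive by something like PM2 or the job never fires:

    const cron = require('node-cron');

    // run every day at 03:00; replace the body with your repetitive task
    cron.schedule('0 3 * * *', () => {
      console.log('nightly task ran at', new Date().toISOString());
    });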
When using the platform you know that it is well tested, that it will work as documented, and that it will be resilient to failure modes that you might not be aware of.
I'd like to check the status of an app registered with pm2 remotely, so that other web-based monitoring services can notify us when something breaks.
Are there any options available for checking the status of a process in pm2 remotely? One possibility is to have a web script eval() the pm2 status command, look for certain keywords, and make that script accessible on the web for the notification tool. This doesn't seem ideal, though, as we'd be using an eval command and maybe a regex on its output just to see what is going on.
Any advice?
I wrote a simple web interface for PM2.
You can simply open a websocket connection to /logs and get stats updates for your application(s), such as status, uptime, CPU usage, memory usage, and restarts, in real time. Feel free to use and contribute. Cheers!
https://github.com/doorbash/pm2-web
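A client for that endpoint can be very small. A rough sketch using the ws package (assuming the server pushes JSON messages; check the project's README for the exact format, and adjust host and port to your install):

    const WebSocket = require('ws'); // npm install ws

    const ws = new WebSocket('ws://localhost:8088/logs'); // host/port are placeholders
    ws.on('message', (data) => {
      // stats such as status, uptime, cpu and memory usage arrive here
      console.log(JSON.parse(data));
    });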
The best option is to use Keymetrics. It's free to monitor up to 4 processes (great for development and side projects) and easy to link to an instance/server, but it quickly turns out to be very expensive when you scale up.
You could always try switching to other alternatives like upstart or pm2-gui.
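If all you need is a simple HTTP endpoint that an external monitor can poll, a thin wrapper around pm2 jlist (which prints the process list as JSON) avoids the eval-and-regex approach entirely. A minimal sketch, assuming pm2 is on the PATH of the user running it:

    const http = require('http');
    const { execFile } = require('child_process');

    http.createServer((req, res) => {
      execFile('pm2', ['jlist'], (err, stdout) => {
        if (err) { res.writeHead(500); return res.end('pm2 unavailable'); }
        // keep just name and status so the monitoring service has little to parse
        const summary = JSON.parse(stdout)
          .map(p => ({ name: p.name, status: p.pm2_env.status }));
        res.writeHead(200, { 'Content-Type': 'application/json' });
        res.end(JSON.stringify(summary));
      });
    }).listen(8080); // port is a placeholder; put this behind auth if it is exposed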
I need to set up some type of scheduling service within Windows Azure, but which option is the best and most resilient?
Currently I have a Windows Service running Quartz, which works okay, but on a Windows Server. I need this to run in the cloud.
The tasks read from and write to a database, and some will send emails.
I've looked over all the possible solutions on Stack Overflow, but they appear to be old and not updated for the latest Azure platform.
Any suggestions or pointers?
The best-suited solution might be a worker role; MS has a tutorial for exactly what you're looking for: http://www.windowsazure.com/en-us/develop/net/tutorials/multi-tier-web-site/4-worker-role-a/
This would definitely be a less expensive solution than instantiating a virtual machine, but it might require some work.
I ended up using Azure Mobile Services and the scheduler that comes with it, which works a treat.
I run a Worker Role using Quartz .NET to schedule stuff. Works great!
https://github.com/quartznet/quartznet
Obviously, that would be difficult to do in the cloud, since you won't be able to install services or anything else that could run in the background. A less-than-perfect solution would be to have a workstation under your control handle the scheduling and send updates to the web server, which would then write them to the DB server. Otherwise, you should self-host the website and application, etc.
I have a Windows application that does some calculations and is called from command line. On my Windows machine, I have a PHP script running under Apache that executes the application and shows the output.
Is there any hosting solution that I can use to do the same? I can't figure out whether EC2 or Azure is the right solution. Basically, I need a web server plus the ability to execute my application.
Suggestions? Thanks.
You can host your application on AppHarbor, the .NET Platform-as-a-Service. You can either port your web frontend to .NET or try to get your PHP stuff working with Phalanger. AppHarbor is working on Background Tasks, which might be a good match for your workload.
I would just run the PHP script you already have under IIS in a Windows Azure web role.
If it is a Windows application and you have the source code, I would go with an Azure worker role. The advantage of using a PaaS (such as Azure) instead of an IaaS (such as Amazon) is that you won't have to bother with keeping the server up to date.
The real investment in time will be in rewriting your application to work as a worker role. The time needed for this depends on how your application works right now. If it uses a lot of disk access, it might be difficult, and perhaps an Amazon server would be better. But if it only crunches numbers in memory, an Azure worker role is a very good candidate.
The real advantage of using an Amazon server is that you probably won't need to do any work at all, except maintaining the server.
As described in the question both Azure and EC2 will do the job very well. This is the kind of task both systems are designed for.
So the question becomes really: which is best? That depends on two things: what the application needs to do and your own experience and preference.
As it's a Windows application there should probably be a leaning towards Azure. While EC2 supports Windows, the tooling and support resources for Azure are probably deeper at this point.
If cost is a factor then a (somewhat outdated) resource is here: http://blog.mccrory.me/2010/10/30/public-cloud-hourly-cost-comparison/ -- the conclusion is that, by and large, Azure and Amazon are roughly similar for compute charges.
Steve Marx has a blog post that describes how to run another web server (i.e. not IIS) on Azure.
This potentially has everything you need - you can deploy Apache and your executable and run it in exactly the same way.
Alternatively, you can deploy your executable alongside a bit of code in a worker role that runs that application periodically, all depending on your exact requirements.
Sorry about the really ambiguous question; I have no idea how to word it, but hopefully I can give you more detail here.
I am developing a project where a user can log into a website and book a server to run a game for a specific amount of time. When the time is up the server stops running and the players on the server are kicked off. The website part is not a problem, I am doing this in PHP and everything works. It has a calendar system to book a server and can generate config files based on what the user wants.
My question is: what should I use to run the specific game server on the Linux box with those config files at the correct time? I have got this working with bash scripts and cron, but it seems very inelegant. It literally uses FTP to connect to the website so it can download all the necessary config files and put them in a folder for that game and time. I was wondering if there was a better way of doing this, perhaps writing a program in C, but I am not sure how to go about it.
(I am not asking for someone to hold my hand and tell me "write this code here", just some ideas of a better way of approaching this problem)
Thanks so much guys!
Edit: The web server is a totally different machine. I would theoretically like to have more than one game server, where each of them "connects" (at the moment via FTP) to the web server, gets a file saying what it has to do at a specific time, downloads any associated files, then disconnects.
I think at is better suited than cron for running one-time jobs.
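For example, once the config files have been downloaded, a booking could be queued as two one-shot jobs (the paths and times are illustrative):

    echo "/opt/games/start-server.sh /opt/games/configs/booking42" | at 18:00 tomorrow
    echo "/opt/games/stop-server.sh booking42" | at 20:00 tomorrow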
For a better approach to the file downloading etc., you should give more details on your setup (e.g. are the website and the game server on the same machine, or at least the same network?).
You need a distributed task scheduler. With that, you can:
- Schedule command "X" to be run at a certain time.
- Specify the machine (or ask it to pick one from a pool of available machines).
The web server would send a request to this scheduler, via the command line or a web service, when a user selects a game server and a time.
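Purely as an illustration of that flow (the endpoint and field names below are invented; a real scheduler defines its own API), the booking call might look like:

    curl -X POST http://scheduler.internal/jobs \
         -d 'command=/opt/games/start-server.sh booking42' \
         -d 'time=2013-06-01T18:00' \
         -d 'machine=game01'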
You can have a look at: http://www.acelet.com/super/SuperWatchdog/index.html
EDIT:
One more option: http://jobscheduler.sourceforge.net/