Recommended framework for a lightweight heartbeat app? - node.js

I have a number of microservices I want to monitor for uptime. I would like to make a call to each microservice to evaluate its state. If the call succeeds, I know the application is "UP".
For an overly simplified use case, say I have the three calls below. I want to make a call to each of them every 10 minutes. If all three respond with a 200, I want to modify an HTML file to contain the word "UP"; otherwise the file should contain the word "DOWN".
GET /api/movies/$movieId
POST /api/movies
DELETE /api/movies/$movieId
Is Express/Node.js a good framework for this lightweight app? If so, can someone point me to a GitHub stub that can get me started? Thanks!

Both Express and Restify would be fine for this sort of thing if you were simply building APIs. The deciding factor is your note about returning HTML:
I want to modify an HTML file with the word "UP", otherwise the file should have the word "DOWN".
That makes Express the more appropriate choice, since it lets you use templating libraries such as Handlebars, Mustache, or Pug to produce the HTML.
You can use a scheduled job to check the status of your three applications and store the latest result somewhere (a database, a flat file, etc.). A request to an endpoint such as /status on this new service would then look up the latest status check and return some templated HTML (using something like Handlebars).
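As a rough sketch of that approach (not a drop-in implementation): the URLs below are placeholders for your real movie API calls, setInterval stands in for a proper scheduler, a plain HTML string stands in for a Handlebars template, and the global fetch assumes Node 18+.
// check-status.js
const express = require('express');
// Hypothetical endpoints; replace with your real movie API calls.
const checks = [
  { method: 'GET',    url: 'https://example.com/api/movies/1' },
  { method: 'POST',   url: 'https://example.com/api/movies' },
  { method: 'DELETE', url: 'https://example.com/api/movies/1' },
];
let lastStatus = 'UNKNOWN';
// Run all three calls; every check must return 200 for the status to be "UP".
async function runChecks() {
  try {
    const results = await Promise.all(
      checks.map(c => fetch(c.url, { method: c.method }))
    );
    lastStatus = results.every(r => r.status === 200) ? 'UP' : 'DOWN';
  } catch (err) {
    lastStatus = 'DOWN';
  }
}
runChecks();
setInterval(runChecks, 10 * 60 * 1000); // every 10 minutes
const app = express();
app.get('/status', (req, res) => {
  // A template engine like Handlebars could render this instead.
  res.send(`<html><body><h1>${lastStatus}</h1></body></html>`);
});
app.listen(3000);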
Alternatively, if you're comfortable with a bit of Bash, you could just use standard Linux/Unix tooling, provided you don't care about uptime history or further complexity.
You could set up Apache or Nginx to serve a file at the /status endpoint, then use a cron job to ping all your health-check URLs. If they all return without errors, update the file being served to say "UP"; if any errors come back, change the text to "DOWN".
This Unix approach can also be done on Windows if that's your jam. It's about as lightweight as you can get, and very easy to deploy and fix, but if you want to expand this application significantly in the future (storing uptime history, for example) you may want to fall back to Express.

Framework? You kids are spoilt. Back when I was a lad all this round here used to be fields...
Create two HTML template files, one for up and one for down; make them as fancy as you want.
Then you just need a few lines of bash run every 10 minutes as a cron job. As a basic example, create statuspage.sh:
#!/bin/bash
# Hit the API with each of the three HTTP methods.
# If any request does not come back with a 200, publish the "down" page and stop.
for http in GET POST DELETE
do
    res=$(curl -s -o /dev/null -w "%{http_code}" -X "$http" "https://$1")
    if [ "$res" -ne 200 ]
    then
        cp /path/to/template/down.html /var/www/html/statuspage.html
        exit 1   # curl failures report "000", so don't reuse the HTTP code as the exit status
    fi
done
# All three calls returned 200.
cp /path/to/template/up.html /var/www/html/statuspage.html
Make it executable with chmod +x statuspage.sh and run it like this: ./statuspage.sh "www.example.com/api"
That's three curl requests, stopping as soon as one fails, then copying the up or down template to your status page location as applicable.

Related

Forward log to http server

I have a pipeline that I run with Nextflow, which is a workflow framework.
It has an option for seeing real-time logs on an HTTP server.
The command to do this looks like so:
nextflow run script.nf --with-weblog http://localhost:8891
But I don't see anything when I open my web browser. I have port forwarding set up when logging into the Ubuntu instance, and the Python HTTP server seems to work fine.
I need help understanding how to set this up so I can view the logs generated by my script at the URL provided.
Thanks in advance!
In Nextflow you need to be careful with the leading dashes of command-line parameters. Everything starting with two dashes, like --input, is forwarded to your workflow/processes (e.g. as params.input), while parameters with a single leading dash, like -entry, are interpreted as options by Nextflow itself.
It might just be a typo in your question, but to make this work you have to use -with-weblog <url> (note the single dash).
See the corresponding docs for further information on this:
Nextflow is able to send detailed workflow execution metadata and runtime statistics to a HTTP endpoint. To enable this feature use the -with-weblog as shown below:
nextflow run <pipeline name> -with-weblog [url]
However, this might not be the only problem you are encountering with your setting. You will have to store or process the webhooks that nextflow sends on the server.
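For example, a very rough receiver sketch (shown in Node here for consistency with the rest of the thread; I'm assuming the weblog messages arrive as ordinary HTTP POSTs with JSON bodies, and the port matches the URL you pass to -with-weblog):
// weblog-receiver.js -- log whatever the workflow posts to this endpoint
const http = require('http');
const server = http.createServer((req, res) => {
  if (req.method !== 'POST') {
    res.writeHead(405);
    res.end();
    return;
  }
  let body = '';
  req.on('data', chunk => { body += chunk; });
  req.on('end', () => {
    // Print the raw payload; in practice you would parse and store it.
    console.log(new Date().toISOString(), body);
    res.writeHead(200);
    res.end();
  });
});
server.listen(8891, () => console.log('Listening on http://localhost:8891'));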
P.S.: Since this is already an old question, how did you proceed? Did you solve the issue yourself in the meantime, or did you give up on it?

What's the easiest way to request a list of web pages from a web server one by one?

Given a list of URLs, how does one implement the following automated task (assuming Windows and Ubuntu are the available OSes)? Are there existing tools that can make implementing this easier or that do it out of the box?
log in with already-known credentials
for each specified url
request page from server
wait for page to be returned (no specific max time limit)
if request times out, try again (try x times)
if server replies, or x attempts failed, request next url
end for each
// Note: this is intentionally *not* asynchronous to be nice to the web-server.
Background: I'm implementing a worker tool that will request pages from a web server so the data those pages need to crunch through gets cached for later. The worker doesn't care about the resulting pages' contents, although it might care about HTTP status codes. I've considered a phantom/casper/node setup, but I'm not very familiar with that technology and don't want to reinvent the wheel (even though it would be fun).
You can request pages easily with Node's built-in http module.
Some people prefer the request module available on npm.
If you need a full browser environment, you can use PhantomJS; there are bridge libraries for driving Phantom from Node.
However, you could also look at simple CLI tools for making requests, such as wget or curl.
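If you do go the Node route, a sketch of the sequential loop from your pseudocode, with retries, might look like the following. The URL list and retry count are placeholders, the global fetch assumes Node 18+, and the "log in with known credentials" step is left out (you would typically add cookies or an Authorization header to each request):
// fetch-sequential.js -- request each URL in turn, retrying up to maxAttempts times
const urls = [
  'https://example.com/page1',
  'https://example.com/page2',
];
const maxAttempts = 3;
async function fetchWithRetry(url) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      const res = await fetch(url);      // waits for the server; no explicit time limit
      console.log(url, res.status);      // only the status code matters here
      return;
    } catch (err) {
      console.warn(`${url} attempt ${attempt} failed: ${err.message}`);
    }
  }
  console.error(`${url} gave up after ${maxAttempts} attempts`);
}
(async () => {
  // Intentionally sequential to be nice to the web server.
  for (const url of urls) {
    await fetchWithRetry(url);
  }
})();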

Create a bot that just visits my website

I have a WordPress website that automatically gets some information from an RSS feed, posts it, and then, with the help of a built-in WordPress function, sets a custom field for that post with a name and a value. The problem is that this custom field only gets set when someone visits the published post. So I have to visit every single new post for the custom field to be applied, or wait for a visitor to do so.
I would like to create a bot, web crawler, or spider that just visits all my new web pages once an hour or so, so the custom field gets applied automatically when a post is published.
Is there any way to create this with PHP or another web-based language? I'm on a Mac, so I don't think Visual Basic is a solution, but I could try installing it.
You could for instance write a shell script that invokes wget (or if you don't have it, you can call curl -0 instead) and have it scheduled to run every hour, e.g. using cron.
It can be as simple as the following script:
#!/bin/sh
curl -0 mysite.com
Assuming it's called visitor.sh and is set to be executable, you can then schedule it by editing your crontab with crontab -e. You will essentially need to add this line to your crontab:
0 * * * * /path/to/.../visitor.sh
(It means: run the script located at /path/to/.../visitor.sh every round hour.)
Note that the script would run from your computer, so it will only run when the computer is running.
cron is a good suggestion; you can also use curl or lynx to fetch the pages, since they're pretty lightweight.

Get 2 userscripts to interact with each other?

I have two scripts. I put them in the same namespace (the @namespace field).
I'd like them to interact with one another.
Specifically, I want script A to set RunByDefault to 123, have script B check whether RunByDefault == 123, and then have script A (using a timeout or anything else) call a function in script B.
How do I do this? I'd hate to merge the scripts.
The scripts cannot directly interact with each other, and // @namespace is just there to resolve script-name conflicts. (That is, you can have two different scripts named "Link Remover" only if they have different namespaces.)
Separate scripts can swap information using:
Cookies -- works same-domain only
localStorage -- works same-domain only (see the sketch after this list)
Sending and receiving values via AJAX to a server that you control -- works cross-domain.
That's it.
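For two different scripts running on the same domain, the localStorage option could look roughly like this; RunByDefault comes from your question, while doSomething is a hypothetical stand-in for the function in script B:
// Script A (running on the same domain as script B)
localStorage.setItem('RunByDefault', '123');   // localStorage stores strings

// Script B -- poll for the flag and react when it appears
const timer = setInterval(() => {
  if (localStorage.getItem('RunByDefault') === '123') {
    clearInterval(timer);
    doSomething();                             // hypothetical function in script B
  }
}, 500);
function doSomething() {
  console.log('Script B triggered by script A');
}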
Different running instances, of the same script, can swap information using GM_setValue() and GM_getValue(). This technique has the advantage of being cross-domain, easy, and invisible to the target web page(s).
See this working example of cross-tab communication in Tampermonkey.
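In outline, that same-script technique is just a write in one tab and a poll in another; this is only a sketch, with made-up metadata values, and the @grant lines are required for the GM_* functions:
// ==UserScript==
// @name     cross-tab-demo
// @match    *://*/*
// @grant    GM_setValue
// @grant    GM_getValue
// ==/UserScript==
// One instance of the script writes a value...
GM_setValue('RunByDefault', 123);
// ...and any other instance of the *same* script can poll for it.
setInterval(() => {
  if (GM_getValue('RunByDefault', 0) === 123) {
    console.log('Flag seen by this instance');
  }
}, 1000);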
On Chrome, and only Chrome, you might be able to use the non-standard FileSystem API to store data on a local file. But this would probably require the user to click for every transaction -- if it worked at all.
Another option is to write an extension (add-on) to act as a helper and do the file IO. You would interact with it via postMessage, usually.
In practice, I've never encountered a situation where it wasn't easier and cleaner to just merge any scripts that really need to share data.
Also, scripts cannot share code, but they can inject JS into the target page and both access that.
Finally, AFAICT, scripts always run sequentially, not in parallel, but you can control the execution order from the Manage User Scripts panel.

standard way of setting up a webserver deploy using webhooks

I am working on code for a webserver.
I am trying to use webhooks to do the following tasks, after each push to the repository:
update the code on the webserver.
restart the server to make my changes take effect.
I know how to make the revision control run the webhook.
Regardless of the specifics of which revision control system I am using, I would like to know the standard way to create a listener for the webhook's POST call on Linux.
I am not completely clueless - I know how to make an HTTP server in Python and I can make it run the appropriate Bash commands, but that seems cumbersome. Is there a more straightforward way?
Set up a script to receive the POST request (a PHP script would be enough).
Save the request into a database and mark it as "not yet finished".
Run a cron job that checks the database for "not yet finished" tasks and does whatever you want with the information you saved.
This is definitely not the best solution, but it works.
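If you'd rather skip the database-and-cron indirection described above, a direct listener is only a few lines in Node; this is a sketch rather than a hardened setup: deploy.sh is a hypothetical script that does your pull and restart, the port is arbitrary, and there's no signature verification on the incoming webhook.
// deploy-listener.js -- run a deploy script whenever the webhook POSTs
const http = require('http');
const { execFile } = require('child_process');
http.createServer((req, res) => {
  if (req.method !== 'POST') {
    res.writeHead(405);
    res.end();
    return;
  }
  // Drain the body (the payload isn't used here), then kick off the deploy.
  req.on('data', () => {});
  req.on('end', () => {
    execFile('/path/to/deploy.sh', (err, stdout, stderr) => {
      if (err) console.error('deploy failed:', stderr || err.message);
      else console.log('deploy finished:', stdout);
    });
    res.writeHead(202);
    res.end('deploy started\n');
  });
}).listen(9000);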
You could use IronWorker, http://www.iron.io, to ssh in and perform your tasks on every commit. To kick off the IronWorker task you can use its webhook support. Here's a blog post that shows you how to use IronWorker's webhook functionality, and it already has half of what you want (it starts a task based on a GitHub commit): http://blog.iron.io/2012/04/one-webhook-to-rule-them-all-one-url.html
