I have a WordPress website that automatically gets some information from an RSS feed, posts it, and then, with the help of a built-in WordPress function, sets a custom field for that post with a name and a value. The problem is that this custom field only gets set when someone visits the published post. So I have to visit every single new post for the custom field to be applied, or wait for a visitor to do so.
I would like to create a bot, web crawler, or spider that simply visits all my new pages once an hour (or at whatever interval) so the custom field gets applied automatically when a post is published.
Is there any way of doing this with PHP or another web-based language? I'm on a Mac, so I don't think Visual Basic is an option, but I could try installing it.
You could, for instance, write a shell script that invokes wget (or, if you don't have it, curl) and have it scheduled to run every hour, e.g. using cron.
It can be as simple as the following script:
#!/bin/sh
# request the front page once; -s silences progress output and the response is discarded
curl -s -o /dev/null mysite.com
Assuming it's called visitor.sh and is set to be executable, you can then edit your crontab by typing crontab -e to schedule it. Here is a link that explains how to do that second part. You will essentially need to add this line to your crontab:
0 * * * * /path/to/.../visitor.sh
(It means: run the script located at /path/to/.../visitor.sh every round hour.)
Note that the script would run from your computer, so it will only run when the computer is running.
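If you'd rather hit every new post instead of just the front page, the same idea can be extended to read the post URLs out of your RSS feed first. A rough sketch, assuming the feed lives at mysite.com/feed and the post URLs sit in <link> tags (both are assumptions about your setup, so adjust them):
#!/bin/sh
# visit-posts.sh -- request every post URL listed in the RSS feed
# FEED_URL and the <link> pattern are assumptions; adjust them to your site
FEED_URL="https://mysite.com/feed"
curl -s "$FEED_URL" \
  | grep -o '<link>[^<]*</link>' \
  | sed -e 's/<link>//' -e 's/<\/link>//' \
  | while read -r url
    do
      # a plain GET is enough for WordPress to run the post's code
      curl -s -o /dev/null "$url"
    done
Scheduled with the same kind of crontab line as above, this visits whatever the feed currently lists every hour.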
crontab is a good approach; you can also use curl or lynx to fetch the pages. They are pretty lightweight.
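For example, lynx can fetch a page non-interactively (assuming lynx is installed; the URL is just a placeholder):
# render the page without opening an interactive browser; discard the output
lynx -dump https://mysite.com/some-post/ > /dev/null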
I am running a simple suitelet with a form, to which I am adding a clientscript.
form.clientScriptModulePath = './clientScript.js';
It works fine, as long as the Suitelet is run from the 'normal' URL.
But if the External URL is used, the client script seems to be completely ignored; no error, just ignored.
Are Client Scripts not available for External URLs in NetSuite? Or is there some workaround for it?
I didn't find any documentation on External URL restrictions.
When you select Available Without Login and then save the Script Deployment record, an External URL
field is displayed on the Script Deployment page (see following figure). Use this URL for Suitelets you want
to make available to users who do not have an active NetSuite session.
Note: The Website feature must be enabled for Client Scripts to work in externally available Suitelets.
Please go to Setup > Company > Enable Features > Web Presence > Website.
The Suitelet should be in released status to avoid any other errors.
The following table shows how you can specify the localization context based on the script type.
Script Type: SuiteScript 2.0 Client Script Type
Defining Localization Context Filtering: Complete the following steps to add localization context filtering to client scripts:
1. Use the localizationContextEnter and localizationContextExit entry points in your script.
Please let me know how this goes!! Happy coding :)
It's been a while, but I think your clientScriptModulePath needs to be absolute for it to work externally. I think I ran into this a couple of years ago and that turned out to be the solution.
Here's what I want to do:
I have a hosted website on a Linux server.
This site is pointed to a GitHub repository.
I want to be able to push changes to the repository, then be able to log into my website and click a button to have the site pull the new code in order to update itself.
Here's how I do it manually:
I created a file on the Linux server called update_site
I log into my Linux server via ssh and type ./update_site, which goes to the site's directory and executes a fetch and pull
the first time it asked me to enter my login and password which I did
but since I had set git config --global credential.helper store, it never asks me again
Here's how I want to automate it:
I created a PHP file which executes the file update_site
However, since the PHP website is apparently executing code as another user, it doesn't have my credentials for GitHub
Here's my question:
How can I automate the process so that when the website executes the update_site file, my GitHub login and password are sent as well? And needless to say, how can I do this as securely as possible, i.e. without saving my GitHub password somewhere as plain text?
One possible way to do this automation is to use cron. Edit your cron record (with the crontab -e command) and add a line like this:
*/5 * * * * /path/to/update_site
In the line above, */5 means every 5 minutes.
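As for the credentials, one way to avoid storing your GitHub password anywhere is to clone the repository over SSH with a read-only deploy key, so update_site never needs a password at all. A minimal sketch of what update_site could look like, assuming the site is cloned at /var/www/mysite from an SSH remote and the deploy key belongs to the user the script runs as (paths, remote, and branch are all assumptions about your setup):
#!/bin/sh
# update_site -- pull the latest code for the site
# assumes /var/www/mysite is a clone whose origin is the SSH URL
# (git@github.com:youruser/yourrepo.git) and that a read-only deploy key
# for this user is registered on GitHub, so no password is ever stored
cd /var/www/mysite || exit 1
git fetch origin
git pull origin main    # substitute your branch name (e.g. master)
If you keep the PHP button instead of (or alongside) cron, the same applies: whatever user the web server runs PHP as (often www-data) needs access to the deploy key, since that is the user actually executing update_site.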
I want to execute a PHP file located on my Apache server (localhost or remote) from Processing, but I want to do it without opening the browser. How can I do this? The PHP script just adds data to a MySQL database using values obtained from a GET request. Is this possible? I tried using link("/receiver.php?a=1&b=2"), but that opens a web page containing the PHP output.
Ideally such a script should be generic, so it can be used as a utility from both the web and shell scripts. If you cannot change or modify the script, then I would suggest using curl from a terminal or bash script to make the HTTP request to the given link.
Refer to this solution for how to make the request with curl.
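For example, from a terminal or a shell script (using the receiver.php parameters from the question; the URL is quoted so the shell does not treat & as a background operator):
# call the PHP endpoint directly; -s keeps curl quiet
curl -s "http://localhost/receiver.php?a=1&b=2"
From within Processing itself, loadStrings("http://localhost/receiver.php?a=1&b=2") should also hit the script without opening a browser, if an external call isn't an option.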
I have a number of microservices I want to monitor for uptime. I would like to make a call to each microservice to evaluate its state. If the call succeeds, I know the application is "UP".
For an overly simplified use case, say I have the following three calls below. I want to make a call to each of them every 10 minutes. If all three respond with a 200, I want to modify an HTML file with the word "UP", otherwise the file should have the word "DOWN".
GET /api/movies/$movieId
POST /api/movies
DELETE /api/movies/$movieId
Is Express/Node.js a good framework for this lightweight app? If so, can someone point me to a GitHub stub that can get me started? Thanks!
Both Express and Restify would be fine for this sort of example if they're simply APIs. The clincher is your note about returning HTML.
I want to modify an HTML file with the word "UP", otherwise the file should have the word "DOWN".
This would be more appropriate for Express as it allows you to use libraries like handlebars, mustache, pug, etc to do this HTML transformation.
You can use a scheduled job to check the status of your three applications, store that latest status check somewhere (a database, flat file, etc). Then a request to an endpoint such as /status on this new service would look up the latest status check, and return some templated HTML (using something like handlebars).
Alternatively, if you're comfortable with a bit of Bash, you could probably just use Linux/Unix tooling to do this if you don't care about uptime history or further complexities.
You could set up Apache or nginx to serve a file at the /status endpoint. Then use a cron job to ping all your health-check URLs. If they all return without errors, update the file being served to say "UP", and if any errors are returned, change the text to "DOWN".
This Unix approach can also be done on Windows if that's your jam. It would be about as lightweight as you can get, and very easy to deploy and correct, but if you want to expand this application significantly in the future (storing uptime history, for example) you may wish to fall back to Express.
Framework? You kids are spoilt. Back when I was a lad all this round here used to be fields...
Create two HTML template files, one for up and one for down, and make them as fancy as you want.
Then you just need a few lines of bash run every 10 minutes as a cron job. As a basic example, create statuspage.sh:
#!/bin/bash
# statuspage.sh -- check the three endpoints and publish an up/down page
# usage: ./statuspage.sh "www.example.com/api"
for http in GET POST DELETE
do
    # request the URL with the given method and capture only the HTTP status code
    res=$(curl -s -o /dev/null -w "%{http_code}" -X "$http" "https://$1")
    if [ "$res" -ne 200 ]
    then
        # one failure is enough: publish the "down" page and stop
        cp /path/to/template/down.html /var/www/html/statuspage.html
        exit $res
    fi
done
# all three checks passed: publish the "up" page
cp /path/to/template/up.html /var/www/html/statuspage.html
Make it executable with chmod +x statuspage.sh and run it like this: ./statuspage.sh "www.example.com/api"
Three curl requests, stopping as soon as one fails, then copying the up or down template to the location of your status page as applicable.
I want to get all the links (web posts) available on a website, and if any new post is added to the website I should be able to get its link. I will have a list of 10 websites, and the link extraction process needs to run periodically.
Can someone help me with how to get only the post links, and the link of any new post that is added?
I would suggest writing a PHP script (since you mentioned PHP) which is called by a cron job periodically. Inside the script you can:
Option 1: Define a curl command which automatically fetches all the content of one URL. (This may be better if you have to deliver some information to the website with the POST method.)
Option 2: Use the file_get_contents function to get all the contents.
Then you can parse the result with a regular expression to extract the parts you are interested in (for example, search for something like <div class="post">...</div>). After that you can add the information to your database, or just check whether the information is already there.
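If you want to prototype that pipeline from the shell before writing the PHP version, a rough sketch could look like this (the URL, the href pattern, and the seen_links.txt file are placeholders; the PHP equivalent would use file_get_contents and preg_match_all as described above):
#!/bin/bash
# check_posts.sh -- fetch a page, extract its links, and report any not seen before
# SITE_URL and the href pattern are placeholders; adjust them per site
SITE_URL="https://example.com/blog/"
SEEN_FILE="seen_links.txt"
touch "$SEEN_FILE"
# fetch the page and pull out every href target
curl -s "$SITE_URL" \
  | grep -o 'href="[^"]*"' \
  | sed -e 's/^href="//' -e 's/"$//' \
  | sort -u > current_links.txt
# links present now but missing from the seen file are new posts
comm -23 current_links.txt <(sort -u "$SEEN_FILE")
# remember everything seen so far for the next run
sort -u current_links.txt "$SEEN_FILE" -o "$SEEN_FILE"
Run it from cron (e.g. 0 * * * * /path/to/check_posts.sh) once per site, or loop over your 10 URLs inside the script.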