Close uzbl-browser on a certain URL - browser

I'm using uzbl-browser for a kiosk computer. I'd like to send "close" (or kill) to my uzbl-browser instance when a user opens a certain URL. What is the best way?

My aim is not that. I have a survey that I show before logout: if the user closes it, the logout proceeds; otherwise I wait until the last page of the survey (identified by a certain URL), then close uzbl and log out.
My solution is the following.
Add this to the uzbl config file:
@on_event LOAD_FINISH spawn @scripts_dir/survey_end_check.sh
and this in my survey_end_check.sh:
#!/bin/sh
# Exit uzbl once the survey's final page has been loaded.
if [ "$UZBL_URI" = "http://yoururl" ]; then
    sleep 5
    echo "exit" | socat - unix-connect:"$UZBL_SOCKET"
fi
A variant, in order to look for a certain string in the final page instead of matching the URL.
After grep, $? is 0 if grep found the string:
#!/bin/sh
# Ask uzbl to evaluate some JavaScript in the page and check its reply.
end=$(echo "@<document.getElementsByClassName('success')[0].innerText>@" | socat - unix-connect:"$UZBL_SOCKET" | grep -q 'Success!'; echo $?)
if [ "$end" -eq 0 ]; then
    sleep 5
    echo "exit" | socat - unix-connect:"$UZBL_SOCKET"
fi
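For completeness, a small kiosk wrapper can tie the pieces together: run the survey in uzbl and log out once the browser exits, whether the user closed it or survey_end_check.sh sent "exit" over the socket. This is only a sketch; the survey URL and the loginctl call are assumptions that depend on your session setup:
#!/bin/sh
# Sketch: show the survey and end the session when uzbl exits.
# The URL and the logout command are assumptions.
uzbl-browser "http://yoursurveyurl"
loginctl terminate-session "$XDG_SESSION_ID"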

If I were a user on that computer and any window, browser or not, closed itself without warning, I'd consider it an application crash and try again.
Forcing that behavior on your users may not be the most informative choice.
What you want to look into is a transparent proxy that can filter content. This is how most companies restrict which pages their employees can visit.
Squid is one example of a proxy commonly used for this, usually set up together with SquidGuard; a guide on those will get you started.
Alternatively, you could use a DNS-based solution that redirects all filtered hostnames to a given page. DansGuardian is another possibility in this space.
A search on Stack Overflow will also give you answers, as several users have already asked similar questions.
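For reference, the Squid route usually comes down to an ACL plus a deny rule. The sketch below is only illustrative (example.com and intranet.local are placeholders), and SquidGuard would replace the hand-written ACL with its own blocklists:
# squid.conf sketch: deny one URL and serve a local info page instead.
acl blocked_page url_regex ^http://example\.com/blocked
deny_info http://intranet.local/blocked.html blocked_page
http_access deny blocked_page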

Related

Determine Nemo context menu actions ordering

I have the following problem/question and am seeking help/answers here. :)
I am using Debian 9 with the Cinnamon UI and it works fine so far.
I recently started familiarizing myself with Nemo actions in order to extend the context menu with my own entries.
While this works, I could not figure out how to determine the order in which the menu entries are shown.
I tried the common method of using two-digit prefixes for the .nemo_action file names (as for udev rules etc.) and changing the action names, among other things.
However, I could not figure out what algorithm is behind the ordering.
Can anyone shed some light on this?
I can even live with an answer like: “you need to modify the code here...”
The only thing I found on the internet so far:
https://forums.linuxmint.com/viewtopic.php?t=178757
Thanks in advance.
O.K., found it...nemo_action_manager.c, set_up_actions():
file_list = nemo_directory_get_file_list (directory);
// AlexG: List seems to be retrieved unsorted, so let's sort it.
// Then the order of menu items is == alphabetical order of nemo action file names
file_list = g_list_sort(file_list, _cbSortFileList);
[...]
I found a small bash script in Linux Mint's nemo GitHub repository that allows sorting Nemo actions by name, on demand. The default order is by modification date.
Below is the script to sort the actions; to set the order, I named them alphabetically.
#!/bin/bash
if ! zenity --question --no-wrap --icon-name="folder" --title="Sort Nemo Actions?" --text="Sorting actions will close down all existing nemo instances.\n\nWould you like to proceed?"; then
exit 1
fi
# Move the actions aside, then touch them back in sorted (alphabetical) order
# so that modification-time order matches name order.
mkdir -p /tmp/actions/
mv "$HOME"/.local/share/nemo/actions/*.nemo_action /tmp/actions/
ACTIONS=$(find /tmp/actions -iname '*.nemo_action' | sort -n)
for i in $ACTIONS; do
    touch "$i"
done
mv /tmp/actions/*.nemo_action "$HOME"/.local/share/nemo/actions/
nemo -q
nemo-desktop -q
nemo-desktop & disown
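Since the menu order ends up being the alphabetical order of the action file names (and, via the touch trick above, their modification-time order), a simple convention is to give the files two-digit prefixes before running the script. The file names below are made up for illustration:
# Hypothetical example: numeric prefixes make the intended order explicit.
cd "$HOME/.local/share/nemo/actions"
mv open_terminal.nemo_action 10-open_terminal.nemo_action
mv checksum.nemo_action      20-checksum.nemo_action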

How to use cURL to verify a web page is fully loaded? [closed]

I have a case where after deploying to a server, the UI of my web page takes around 20 mins to load. The API is available almost immediately.
I need a way to use curl to load the web page and verify from the response whether the page has loaded or not.
Combining curl with grep you can request your page and see if it loads by looking for a specific string you'd expect to see when it renders correctly.
Something like:
curl -o - https://www.example.com/ | grep "Something from successful response"
if [ $? -eq 0 ] ; then
    echo "Success"
else
    echo "Fail"
fi
The -o - option to curl writes the response to stdout, which is then piped to grep, which looks for a specific string from a successful response. Depending on your needs there may be other ways, but this sounds like it matches what you're asking.
Also note if your UI takes 20 minutes to load the first time, you might need to adjust some curl options (e.g. --max-time) to allow for longer timeouts.
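Since the UI here can take around 20 minutes to come up, one option is to wrap that check in a polling loop. The URL, marker string and timings below are placeholders, so treat this as a sketch:
#!/bin/bash
# Sketch: retry every 30 s for up to 25 minutes until the page contains the
# expected marker string; URL and marker are assumptions.
url="https://www.example.com/"
marker="Something from successful response"
deadline=$((SECONDS + 25 * 60))
until curl -sf --max-time 30 -o - "$url" | grep -q "$marker"; do
    if [ "$SECONDS" -ge "$deadline" ]; then
        echo "Fail: page did not load in time"
        exit 1
    fi
    sleep 30
done
echo "Success"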

Linux Read - Timeout after x seconds *idle*

I have a (bash) script on a server whose administration I inherited, and I recently discovered a flaw in it that nobody had brought to my attention.
After discovering the issue, others told me it had been irritating them, but they never said anything (great...).
So, the script follows this concept:
#!/bin/bash
function refreshscreen(){
    # This function refreshes a "statistics screen"
    ...
    echo "Enter command to override update"
    read -t 10 variable
}
This script refreshes a statistics screen and allows the user to stall the update in lieu of commands built into a case statement. However, the read times out (read -t 10) after 10 seconds, regardless of whether the user is typing.
Long story short: is there a way to prevent read from timing out while the user is actively typing a command? The best-case scenario would be a "time out after SEC idle/inactive seconds" as opposed to just timing out after x seconds.
I have thought about running a background script at the end of the cycle before the read command pauses the screen to check for inactivity, but have not found a way to make that command work.
You can use read in a loop, reading one character at a time, and adding it to a final read string. This would then give the user some timeout amount of time per character rather than per command. Here's a sample function you might be able to incorporate into your script that shows what I'm talking about:
read_with_idle_timeout() {
    local input=""
    # Read one character at a time; each read gets its own 10-second timeout.
    read -t 10 -N 1 variable
    while [ -n "$variable" ]; do
        input+=$variable
        read -t 10 -N 1 variable
    done
    echo "Read: $input"
}
This will give the user 10 seconds to type each character. If they stop typing, you'll get as much of the command as they had started typing before the timeout occurred, and then your case statement can handle it. Perhaps you can store the final string in a global variable, or just put this code directly into your other function.
If you need more than one word, since read breaks on $IFS, you could call this function multiple times until you get all the input you're expecting.
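As a rough usage sketch (the command names here are placeholders, not anything from the original script), the captured string can be fed straight into a case statement:
# Hypothetical dispatch sketch: strip the function's "Read: " prefix and act on it.
cmd=$(read_with_idle_timeout)
cmd=${cmd#Read: }
case "$cmd" in
    stop) echo "Updates paused" ;;
    quit) exit 0 ;;
    *)    ;;   # empty or unknown input: just refresh again
esac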
I have searched for a simple solution that will do the following:
timeout after 10 seconds, if there is no user input at all
the user has infinite time to finish his answer if the first character was typed within the first 10 sec.
This can be implemented in two lines as follows:
read -N 1 -t 10 -p "What is your name? > " a
[ "$a" != "" ] && read b && echo "Your name is $a$b" || echo "(timeout)"
In case the user waits 10 sec before he enters the first character, the output will be:
What is your name? > (timeout)
If the user types the first character within 10 sec, he has unlimited time to finish the task. The output will look as follows:
What is your name? > Oliver
Your name is Oliver
Caveat: once the first character has been typed it can no longer be edited, while all other characters can be edited (backspace and retype). Any ideas for a simple solution?

Create an aggregate Nagios check based on values from other checks

I have multiple checks for ten different web servers. One of these checks monitors the number of established connections (using netstat and findstr to filter for ESTABLISHED). It works as expected on servers WEB1 through WEB10. I can graph (using pnp4nagios) the established TCP connection count because the output is an integer. If it's above a certain threshold the check goes into warning status; above another it becomes critical.
The individual checks are working the way I want.
However, what I'm looking to do is add up all of these connections into one graph. This would be some sort of aggregate graph or SUM of all of the others.
Is there a way to take the values/outputs from other checks and add them into one?
Server TCP Connections
WEB1 223
WEB2 124
WEB3 412
WEB4 555
WEB5 412
WEB6 60
WEB7 0
WEB8 144
WEB9 234
WEB10 111
TOTAL 2275
I want to graph only the total.
Nagios itself does not use performance data in any way; it just takes it and passes it to whatever you specify in your config. So there's no good way to do this in Nagios. (You could pipe the performance output of Nagios to some tee command which passes it to pnp4nagios and to a different script that sums everything up, but that's horrible to maintain.)
If I had your problem, I'd do the following:
At the end of your current plugin, do something like
echo $nconnections > /some/dir/connections.$NAGIOS_HOSTNAME
where nconnections is the number of connections the plugin found. This example is shell, replace if you use some different language for the plugin. The important thing is: it should be easy to write the number to a special file in the plugin.
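As an illustration of the per-host side, a sketch for a Linux host could look like the following (the question mentions findstr, so Windows hosts would need a different counting command; the cache path, netstat flags and simplified output are assumptions, and the existing threshold handling is omitted):
#!/bin/bash
# Sketch: count ESTABLISHED TCP connections and cache the value for the
# aggregate check. /some/dir and the netstat invocation are assumptions.
nconnections=$(netstat -tn 2>/dev/null | grep -c ESTABLISHED)
echo "$nconnections" > /some/dir/connections."$NAGIOS_HOSTNAME"
echo "TCP connections OK: $nconnections established|conn=$nconnections"
exit 0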
Then, create a new plugin which has code similar to:
#!/bin/bash
WARN=1000
CRIT=2000
sumconn=$(cat /some/dir/connections.* | awk '{sum += $1} END {print sum}')
if [ "$sumconn" -ge "$CRIT" ]; then
    echo "Connection sum CRITICAL: $sumconn connections|conn=$sumconn;$WARN;$CRIT"
    exit 2
elif [ "$sumconn" -ge "$WARN" ]; then
    echo "Connection sum WARNING: $sumconn connections|conn=$sumconn;$WARN;$CRIT"
    exit 1
else
    echo "Connection sum OK: $sumconn connections|conn=$sumconn;$WARN;$CRIT"
    exit 0
fi
That way, whenever you probe an individual server, you'll save the data for the new plugin; the plugin will just pick up the data that's there, which makes it extremely short. Of course, the output of the summary will lag behind a bit, but you can minimize that effect by setting the normal_check_interval of the individual services low enough.
If you want to get fancy, add code to remove files older than a certain threshold from the cache directory. Or, you could even remove the individual services from your nagios configuration, and call the individual-server-plugin from the summation plugin for each server, if you're really uninterested in the connection count per server.
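The stale-file cleanup can be as small as a find call at the top of the summary plugin; the path and the 15-minute age below are assumptions:
# Sketch: drop cached counts not refreshed recently so a host that stopped
# reporting does not keep inflating the sum.
find /some/dir -name 'connections.*' -mmin +15 -delete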
EDIT:
To solve the nrpe problem, create a check_nrpe_and_save plugin like this:
#!/bin/bash
output=$($NAGIOS_USER1/check_nrpe "$@")
rc=$?
nconnections=$(echo "$output" | head -1 | sed 's/.*you have \([0-9]*\) connections.*/\1/')
echo "$nconnections" > /some/dir/connections."$NAGIOS_HOSTNAME"
echo "$output"
exit $rc
Create a new define command entry for this script, and use the new command in your service definitions. You'll have to adjust the sed pattern to match what your plugin outputs. If you don't have the number of connections in your regular output, an expression like .*connections=\([0-9]*\);.* should work. check_nrpe_and_save should behave just like check_nrpe: in particular, it should output the same string and return the same exit code, while also writing to the special file.
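For reference, the wiring in the Nagios object configuration could look roughly like this; the NRPE command name and paths are assumptions to adapt to your setup:
# commands.cfg sketch: use the wrapper exactly where check_nrpe was used before.
define command {
    command_name    check_nrpe_and_save
    command_line    $USER1$/check_nrpe_and_save -H $HOSTADDRESS$ -c check_tcp_connections
}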

How to search the logs on many servers and sort the information?

The idea is very simple:
I would like to pass a word as an argument to a script; the script would then search the logs on all my servers and, whenever it finds something relevant, write that information to a file. That file would be rsynced to one server, which would sort the combined information from all servers and show me where and when the word appeared.
I think this is possible because my servers are synchronized with NTP, so their clocks agree and timestamps from different servers can be compared.
But I wonder whether this is a good idea, and how do I perform this search and sort the logs?
The problems for me are:
1) How do I access my servers to run this search on each one of them?
2) How do I perform the search?
3) How do I sort all the information in the final log (containing the combined information from all servers)?
You could add your ssh key to each server and then, from your main server, add this to your .bashrc:
export web_servers=(server1 server2 server3 server4)
function grepallservers() {
    for s in "${web_servers[@]}"; do echo "$s"; ssh "$s" grep "$@"; done
}
function all-serv-grep() {
    grepallservers "$1" /var/log/error.log
}
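To cover the sorting part of the question, a variant could tag each match with its host name and sort on the syslog timestamp. This is only a sketch and assumes syslog-style "Mon DD HH:MM:SS" lines in /var/log/error.log:
function all-serv-grep-sorted() {
    # Tag each line with the host it came from, then sort by month, day and time.
    for s in "${web_servers[@]}"; do
        ssh "$s" grep -h "$1" /var/log/error.log | sed "s/^/$s /"
    done | sort -k2M -k3n -k4
}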
