How to use cURL to verify a web page is fully loaded? [closed] - linux

I have a case where, after deploying to a server, the UI of my web page takes around 20 minutes to load, while the API is available almost immediately.
I need a way to use curl to request the page and verify from the response whether it has loaded or not.

Combining curl with grep, you can request your page and check whether it has loaded by looking for a specific string you'd expect to see when it renders correctly.
Something like:
curl -s -o - https://www.example.com/ | grep -q "Something from successful response"
if [ $? -eq 0 ]; then
    echo "Success"
else
    echo "Fail"
fi
The -o - option tells curl to write the response body to stdout (its default behavior), -s suppresses the progress meter, and the output is piped to grep -q, which quietly looks for a string that only appears in a successful response; the pipeline's exit status is then grep's. Depending on your needs there may be other ways, but this sounds like it matches what you're asking.
Also note if your UI takes 20 minutes to load the first time, you might need to adjust some curl options (e.g. --max-time) to allow for longer timeouts.
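Since the UI can take that long to come up, one option is to poll in a loop until the marker string appears or a deadline passes. A minimal sketch, reusing the placeholder URL and marker string from above:
#!/bin/sh
# Poll every 30 seconds, for up to 50 attempts (~25 minutes).
for i in $(seq 1 50); do
    if curl -s --max-time 30 https://www.example.com/ | grep -q "Something from successful response"; then
        echo "Success"
        exit 0
    fi
    sleep 30
done
echo "Fail"
exit 1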

Related

I have a pcap with two MPLS headers. I observe the match criteria for every field in both MPLS headers are similar. How do I differentiate? [closed]

I have a pcap with two MPLS headers. I observe that the match criteria for every field in both MPLS headers are similar. How do I differentiate between the two MPLS headers? Is it possible to achieve this via Wireshark or tshark? If it is possible via tshark, please share the Linux command.
For example, I am trying to filter using:
mpls.exp==7 && mpls.bottom == 0
but with the above match filter criteria, even those packets where mpls.exp==7 is in header 1 and mpls.bottom==0 is in header 2 are matched. Attaching pcap snippets for reference.
[Screenshot: above match criteria matching exp from header 1 and bottom of stack from header 1]
[Screenshot: above match criteria matching exp from header 2 and bottom of stack from header 1]
TIA.
Tried to filter this using tshark on Linux. Still not able to get the desired result:
Expected result: only the first 8 packets should be matched.
Observed result: 16 packets are matched.
Tshark cmd :
tshark -r capture2_11-17-2022_11-15-15.pcap -T fields -E header=y -e mpls.exp -e mpls.bottom mpls.bottom==0 and mpls.exp==7
[Screenshot: tshark output table]
2nd EDIT: I thought of an alternative solution, which I'll now describe here. (Note that I would have provided this alternative solution, which involves programming in the form of a Lua script, as a separate answer, but it seems folks were a little trigger-happy in closing this question, so I have no choice but to supply it here. If the question is reopened, which I've voted to do, I can make this a separate answer.)
What you can do is create an MPLS Lua postdissector that adds new mpls_post.exp and mpls_post.bottom fields to an MPLS postdissector tree. You can then use those new fields in your filter to accomplish your goal. As an example, consider the following Lua postdissector:
local mpls_post = Proto("MPLSPost", "MPLS Postdissector")
local pf = {
    expbits = ProtoField.uint8("mpls_post.exp", "MPLS Experimental Bits", base.DEC),
    bottom = ProtoField.uint8("mpls_post.bottom", "MPLS Bottom of Label Stack", base.DEC)
}
mpls_post.fields = pf

-- Extractors for the built-in MPLS fields; calling one returns every
-- occurrence of that field in the current packet.
local mpls_exp = Field.new("mpls.exp")
local mpls_bottom = Field.new("mpls.bottom")

function mpls_post.dissector(tvbuf, pinfo, tree)
    local mpls_exp_ex = {mpls_exp()}
    local mpls_bottom_ex = {mpls_bottom()}
    -- The tables are empty (never nil) when the packet has no MPLS
    -- headers, so test the first element rather than the table itself.
    if mpls_exp_ex[1] == nil or mpls_bottom_ex[1] == nil then
        return
    end
    local mpls_post_tree = tree:add(mpls_post)
    -- Only the first (outermost) label's fields are added, so filters on
    -- mpls_post.* match against header 1 alone.
    mpls_post_tree:add(pf.expbits, mpls_exp_ex[1].range, mpls_exp_ex[1].value)
    mpls_post_tree:add(pf.bottom, mpls_bottom_ex[1].range, mpls_bottom_ex[1].value)
end

register_postdissector(mpls_post)
If you save this to a file, e.g. mpls_post.lua, and place that file in your Wireshark Personal Lua Plugins directory, which you can find from "Help -> About Wireshark -> Folders" or from tshark -G folders, then [re]start Wireshark, you will be able to apply a filter such as the following:
mpls_post.exp==7 && mpls_post.bottom == 0
You can also use tshark to do the same, e.g.:
tshark -r capture2_11-17-2022_11-15-15.pcap -Y "mpls_post.exp==7 && mpls_post.bottom==0" -T fields -E header=y -e mpls_post.exp -e mpls_post.bottom
(NOTE: The tshark command, as written, will simply print out what you already know, namely 7 and 0, so presumably you want to print more than just that, but this is the idea.)
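If you want more context per match, you can add further fields to the -T fields output; for example (frame.number and mpls.label are standard Wireshark field names, added here purely as an illustration):
tshark -r capture2_11-17-2022_11-15-15.pcap -Y "mpls_post.exp==7 && mpls_post.bottom==0" -T fields -E header=y -e frame.number -e mpls.label -e mpls_post.exp -e mpls_post.bottom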
I think this is probably the best that can be done for now until the Wireshark MPLS dissector is modified so that layer operators work as expected for this protocol, but there are no guarantees that any changes to the MPLS dissector will ever be made in this regard.
EDIT: I'm sorry to say that the answer I provided doesn't actually work for MPLS. It fails because the MPLS dissector is only called once: it loops through all labels as long as bottom-of-stack isn't set, but it doesn't call itself recursively, which is what would be needed for the second label to be treated as another layer. The layer syntax does work for other protocols, such as IP (in the case of tunneled traffic or ICMP error packets), so it's a good thing to keep in mind, but unfortunately it won't be of much use for MPLS, at least not in the Wireshark MPLS dissector's current state. I suppose I'll leave the answer up [for now] in case the dissector is ever changed to allow the layer syntax to work as one might intuitively expect. Unfortunately, I can't think of an alternative solution to this problem at this time.
With Wireshark >= version 4.0, you can use the newly introduced syntax for matching fields from specific layers. So, rather than specifying mpls.exp==7 && mpls.bottom == 0 as the filter, which matches fields from any layer, use the following syntax instead, which will only match against fields from the first layer:
mpls.exp#1 == 7 && mpls.bottom#1 == 0
Refer to the Wireshark 4.0.0 Release Notes for more details about this new syntax as well as for other display filter changes, and/or to the wireshark-filter man page.
NOTE: You can also achieve this with tshark, although you can't [yet] selectively choose which field is displayed. For example:
tshark -r capture2_11-17-2022_11-15-15.pcap -Y "mpls.exp#1 == 7 && mpls.bottom#1 == 0" -T fields -E header=y -e mpls.exp -e mpls.bottom
To be clear, you can't [yet] specify -e mpls.exp#1 and -e mpls.bottom#1.

Displaying output of multiple logs in the same terminal instance

I am trying to trace the flow of a request through my application. I have multiple components that are responsible for handling that request.
For instance, I have 4 components and I want to display all their logs in the same terminal instance to check the flow of the request. How can I achieve that?
Currently I am displaying all logs in different terminal instances like this.
tail -f path-to-log/component1.log
tail -f path-to-log/component2.log
I hope my question is understandable.
Just use:
tail -f component1.log component2.log
Try multitail. It allows you to tail multiple files at the same time.
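For example, with the placeholder paths from the question, either of the following shows all four component logs in one terminal:
tail -f path-to-log/component1.log path-to-log/component2.log path-to-log/component3.log path-to-log/component4.log
multitail path-to-log/component1.log path-to-log/component2.log path-to-log/component3.log path-to-log/component4.log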

Close uzbl-browser on certain url

I'm using uzbl-browser for a kiosk computer. I'd like to send "close" (or kill) to my uzbl-browser instance when a user opens a certain URL. What is the best way?
To clarify my aim: I show a survey before logout. If the user closes it, log out right away. Otherwise, wait until the last page of the survey (identified by a certain URL), then close uzbl and log out.
My solution is this. Add the following to the uzbl config file:
@on_event LOAD_FINISH spawn @scripts_dir/survey_end_check.sh
and in my survey_end_check.sh
#!/bin/sh
# UZBL_URI and UZBL_SOCKET are exported by uzbl to spawned scripts.
if [ "$UZBL_URI" = "http://yoururl" ]; then
    sleep 5
    echo "exit" | socat - unix-connect:"$UZBL_SOCKET"
fi
A variant, in order to find a certain string in the final page. After grep, $? is 0 if grep succeeded:
#!/bin/sh
# Ask uzbl to evaluate the JavaScript expansion and return the text of the
# first element with class "success"; grep -q exits 0 if the marker is found.
end=$(echo "@<document.getElementsByClassName('success')[0].innerText>@" | socat - unix-connect:"$UZBL_SOCKET" | grep -q 'Success!'; echo $?)
if [ "$end" -eq 0 ]; then
    sleep 5
    echo "exit" | socat - unix-connect:"$UZBL_SOCKET"
fi
If I were a user on that computer and any window, browser or not, closed itself without warning, I'd consider it an application crash and try again.
Forcing that behavior on your users may not be the most informative choice.
What you want to look into is a transparent proxy that can filter content. This is how most companies restrict their employees from visiting certain pages.
Squid is one example of a proxy solution commonly used for this, usually set up together with SquidGuard. This guide will get you started.
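Once such a proxy is in place, a quick sanity check is to request a page through it with curl; the proxy host and port here are placeholders:
curl -x http://proxy.local:3128 http://yoururl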
Alternatively you could also use a DNS solution that redirects all filtered hostnames to a given page. DansGuardian is a possibility here.
A search on stackoverflow will also give you answers as several users already asked similar questions.

Postfix transport - invoke script after receive mail [closed]

Debian Sid, latest postfix from Sid.
I need to invoke a bash script after a user receives mail. So, what I did:
1. Create the file /etc/postfix/transport, for example:
   mail@domain.com myscript
2. Run the command to create the lookup database: postmap transport
3. Add to main.cf: transport_maps = hash:/etc/postfix/transport
4. Add to master.cf:
   myscript unix - n n - - pipe
     user=michal flags=FR argv=/home/michal/test.sh
5. Reload postfix.
What's the problem? Configured this way, test.sh is executed after mail is received, but the incoming mail is not delivered to the mailbox; it is deleted immediately after being received.
So, how do I avoid this? I need the script to be executed, but the incoming mail should also be delivered to my mailbox.
Use Procmail.
:0c
| $HOME/test.sh
The script receives the full message on standard input, but if you don't feel like parsing the message yourself, there are standard techniques for extracting header values into Procmail variables. You can pipe to formail:
SUBJECT=`formail -zcxSubject:`
or you can grab into MATCH, which avoids spawning an external process, but is a bit trickier for more-complex tasks:
:0
* ^Subject:[ ]*\/.+
{ SUBJECT=$MATCH }
(the whitespace inside [ ] should be a space and a tab); either way, you can now pass in $SUBJECT as a parameter on the test.sh command line. Obviously, other header values can be extracted into variables in a similar way.
PS. You cannot inline the formail call like this because it will consume the standard input from the pipe.
:0c
| $HOME/test.sh "`formail -zcxSubject:`" # erroneous!
Instead, you need to split it up, like this:
:0
* ^Subject:[ ]*\/.+
{ SUBJECT=$MATCH }
:0c
| $HOME/test.sh "$SUBJECT"
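For reference, a minimal sketch of what test.sh could look like; the actions in it are purely hypothetical and only illustrate the interface the recipe provides (subject as $1, the full message on stdin):
#!/bin/sh
# $1 is the Subject header passed in from the Procmail recipe.
subject=$1
# The full message (headers and body) arrives on standard input;
# archive it and log the subject (placeholder actions).
cat > "/tmp/last-message.$$"
logger "mail received, subject: $subject"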

Compare two websites and see if they are "equal?"

We are migrating web servers, and it would be nice to have an automated way to check some of the basic site structure to see if the rendered pages are the same on the new server as the old server. I was just wondering if anyone knew of anything to assist in this task?
Get the formatted output of both sites (here we use w3m, but lynx can also work):
w3m -dump http://google.com 2>/dev/null > /tmp/1.html
w3m -dump http://google.de 2>/dev/null > /tmp/2.html
Then use wdiff; it can give you a percentage of how similar the two texts are:
wdiff -nis /tmp/1.html /tmp/2.html
It can also be easier to see the differences using colordiff:
wdiff -nis /tmp/1.html /tmp/2.html | colordiff
Excerpt of output:
Web Images Vidéos Maps [-Actualités-] Livres {+Traduction+} Gmail plus »
[-iGoogle |-]
Paramètres | Connexion
Google [hp1] [hp2]
[hp3] [-Français-] {+Deutschland+}
[ ] Recherche
avancéeOutils
[Recherche Google][J'ai de la chance] linguistiques
/tmp/1.html: 43 words 39 90% common 3 6% deleted 1 2% changed
/tmp/2.html: 49 words 39 79% common 9 18% inserted 1 2% changed
(he actually put google.com into French... funny)
The common % values are how similar both texts are. Plus you can easily see the differences by word (instead of by line which can be a clutter).
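To check many pages at once, the same idea extends to a loop; a sketch assuming placeholder hostnames old.example.com and new.example.com and a hand-picked list of paths:
#!/bin/sh
# Print only the wdiff similarity statistics for each page
# (-1/-2/-3 suppress the word lists, -s keeps the statistics).
for path in / /about /contact; do
    w3m -dump "http://old.example.com$path" 2>/dev/null > /tmp/old.txt
    w3m -dump "http://new.example.com$path" 2>/dev/null > /tmp/new.txt
    echo "== $path =="
    wdiff -s -1 -2 -3 /tmp/old.txt /tmp/new.txt
done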
The catch is how to check the 'rendered' pages. If the pages don't have any dynamic content, the easiest way is to generate hashes for the files using the md5sum or sha1sum commands and check them against the new server.
If the pages have dynamic content, you will have to download the site using a tool like wget:
wget --mirror http://thewebsite/thepages
and then use diff as suggested by Warner, or do the hash thing again (a sketch follows below). I think diff may be the best way to go, since even a single changed character will mess up the hash.
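A minimal sketch of the hash-based comparison, assuming both servers were already mirrored into old/ and new/ (placeholder directory names):
# Hash every file in each mirror, sort so the lists line up, then diff.
(cd old && find . -type f -exec md5sum {} + | sort -k2) > /tmp/old.md5
(cd new && find . -type f -exec md5sum {} + | sort -k2) > /tmp/new.md5
diff /tmp/old.md5 /tmp/new.md5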
I've created the following PHP code that does what Weboide suggests here. Thanks Weboide!
The paste is here:
http://pastebin.com/0V7sVNEq
Using the open source tool recheck-web (https://github.com/retest/recheck-web), there are two possibilities:
Create a Selenium test that checks all of your URLs on the old server, creating Golden Masters, then run that test against the new server to find out how they differ.
Use the free and open source Chrome extension (https://github.com/retest/recheck-web-chrome-extension), which internally uses recheck-web to do the same: https://chrome.google.com/webstore/detail/recheck-web-demo/ifbcdobnjihilgldbjeomakdaejhplii
For both solutions you currently need to list all relevant URLs manually; in most situations this shouldn't be a big problem. recheck-web compares the rendered website and shows you exactly where the versions differ (e.g. a different font, different meta tags, even different link URLs), and it gives you powerful filters to let you focus on what is relevant to you.
Disclaimer: I have helped create recheck-web.
Copy the files to the same server in /tmp/directory1 and /tmp/directory2 and run the following command:
diff -r /tmp/directory1 /tmp/directory2
For all intents and purposes, you can put them in your preferred location with your preferred naming convention.
Edit 1
You could potentially use lynx -dump or wget and run a diff on the results.
Short of rendering each page, taking screen captures, and comparing those screenshots, I don't think it's possible to compare the rendered pages.
However, it is certainly possible to compare the downloaded website after downloading recursively with wget.
wget [option]... [URL]...

-m, --mirror
    Turn on options suitable for mirroring. This option turns on recursion
    and time-stamping, sets infinite recursion depth and keeps FTP directory
    listings. It is currently equivalent to -r -N -l inf --no-remove-listing.
The next step would then be to do the recursive diff that Warner recommended.
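Putting the two steps together, a sketch with placeholder hostnames (wget --mirror writes each site into a directory named after its host):
wget --mirror -P /tmp/old http://old.example.com/
wget --mirror -P /tmp/new http://new.example.com/
diff -r /tmp/old/old.example.com /tmp/new/new.example.com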
