How to see all request URLs the server is making (final URLs) - Linux

How can I list, from the command line, the URLs of requests that are made from the server (a *nix machine) to another machine?
For instance, I am on the command line of server ALPHA_RE.
I do a ping to google.co.uk and another ping to bbc.co.uk
I would like to see, from the prompt:
google.co.uk
bbc.co.uk
So: not the IP address of the machine I am pinging, and NOT a URL from a server that passes my request along to google.co.uk or bbc.co.uk, but the actual final URLs.
Note that only packages available in the normal Ubuntu repositories can be used, and it has to work from the command line.
Edit
The ultimate goal is to see what API URLs a PHP script (run by a cronjob) requests, and what API URLs the server requests 'live'.
These mainly make GET and POST requests to several URLs, and I am interested in knowing the parameters:
Does it make requests to:
foobar.com/api/whatisthere?and=what&is=there&too=yeah
or to:
foobar.com/api/whatisthathere?is=it&foo=bar&green=yeah
And do the cron jobs or the server make any other GET or POST requests?
And that, regardless of what response (if any) these APIs give.
Also, the API list is unknown, so you cannot grep for one particular URL.
Edit:
(The OLD version of this question specified: Note that I cannot install anything on that server (no extra packages; I can only use the "normal" commands like tcpdump, sed, grep, ...). But as getting this information with tcpdump is pretty hard, I have since made installing packages possible.)

You can use tcpdump and grep to get information about network traffic from the host. The following command line should get you all lines containing Host:
tcpdump -i any -A -vv -s 0 | grep -e "Host:"
If I run the above in one shell and start a Links session to stackoverflow.com, I see:
Host: www.stackoverflow.com
Host: stackoverflow.com
If you want to know more about the actual HTTP request you can also add patterns to the grep for GET, PUT or POST requests (e.g. -e "GET"), which can get you some info about the relative URL (this should be combined with the previously determined host to get the full URL).
EDIT:
Based on your edited question, I have made some modifications.
First, a tcpdump approach:
[root@localhost ~]# tcpdump -i any -A -vv -s 0 | egrep -e "GET" -e "POST" -e "Host:"
tcpdump: listening on any, link-type LINUX_SLL (Linux cooked), capture size 65535 bytes
E..v.[#.#.......h.$....P....Ga .P.9.=...GET / HTTP/1.1
Host: stackoverflow.com
E....x#.#..7....h.$....P....Ga.mP...>;..GET /search?q=tcpdump HTTP/1.1
Host: stackoverflow.com
And an ngrep one:
[root@localhost ~]# ngrep -d any -vv -w byline | egrep -e "Host:" -e "GET" -e "POST"
GET //meta.stackoverflow.com HTTP/1.1..Host: stackoverflow.com..User-Agent:
GET //search?q=tcpdump HTTP/1.1..Host: stackoverflow.com..User-Agent: Links
My test case was running links stackoverflow.com, typing tcpdump into the search field and hitting enter.
This gets you all URL info on one line. A nicer alternative might be to simply run a reverse proxy (e.g. nginx) on your own server, modify the hosts file (as shown in Adam's answer) so that all queries hit the proxy, have the reverse proxy forward them to the actual host, and use the proxy's logging features to get the URLs from there; those logs would probably be a bit easier to read.
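For instance, a minimal sketch of such an nginx reverse proxy, assuming the API is plain HTTP and with REAL_API_IP standing in for the upstream's actual address (it must be an IP here, because /etc/hosts now points the name at the proxy itself):
# goes inside the http { } context of your nginx configuration
server {
    listen 80;
    server_name api.foobar.com;

    # every request line, including the query string, ends up in this log
    access_log /var/log/nginx/api_calls.log;

    location / {
        proxy_pass http://REAL_API_IP;
        proxy_set_header Host api.foobar.com;
    }
}
Then tail -f /var/log/nginx/api_calls.log shows each requested URL as it happens.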
EDIT 2:
If you use a command line such as:
ngrep -d any -vv -w byline | egrep -e "Host:" -e "GET" -e "POST" --line-buffered | perl -lne 'print $3.$2 if /(GET|POST) (.+?) HTTP\/1\.1\.\.Host: (.+?)\.\./'
you should see the actual URLs.
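For the test session above, that joins host and relative URL and prints lines like (sample output, reconstructed from the capture shown earlier):
stackoverflow.com/search?q=tcpdump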

A simple solution is to modify your /etc/hosts file to intercept the API calls and redirect them to your own web server:
127.0.0.1 api.foobar.com
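Once the name resolves to 127.0.0.1, even a plain netcat listener is enough to see each incoming request line (a sketch; assumes the calls are plain HTTP on port 80, which needs root to bind, and the OpenBSD netcat that Ubuntu ships):
# prints the raw request, including the GET/POST line with its parameters
sudo nc -l 80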

Related

Why is the netcat response not displaying in browser in Mac OS?

I've tried using netcat in various ways.
The common approach just exits:
echo "Response from server" | nc -l 127.0.0.1 8080
I get the following error in the browser: net::ERR_CONNECTION_REFUSED
nc -l localhost 8080 < temp.resp
I just want to print some text from netcat to the browser, and maybe later pass some headers along with it too. But right now I am not able to figure out what is going wrong.
I've looked in the man page and tried the -w and -I options, but none of them work.
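One thing to check is that a browser will only render a well-formed HTTP response; here is a minimal sketch that sends proper headers first (assuming the BSD netcat that ships with macOS, which listens with nc -l <port> and exits after one connection):
# Content-Length must match the body exactly (20 bytes here)
printf 'HTTP/1.1 200 OK\r\nContent-Length: 20\r\nConnection: close\r\n\r\nResponse from server' | nc -l 8080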

How to reduce the output of the Curl command?

I am looking for a way to restrict the output of the curl command.
For example, when using curl to check if a port is open on a server, I just want to restrict the output to the first few lines to confirm that the port is open:
curl -v host:1521
I want to display just the first 3 lines of output:
* About to connect to
* Trying host ... connected
* Connected to host
Why not pipe it to head?
curl -v host:1521 | head -n3
where -n3 means 3 lines from the top.
EDIT:
As discussed in the comments, the -v option prints headers etc. on stderr instead of stdout, so head doesn't affect them. You have to redirect stderr to stdout and then operate on that:
curl -v www.example.com 2>&1 | grep Connected
This will print * Connected to www.example.com (IP_ADDRESS_HERE) port 443 (#0) if connected successfully, and nothing otherwise.
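Applying the same redirection to the head approach from the question (host:1521 is the placeholder from the question):
# merge stderr (where -v writes) into stdout before head sees it
curl -v host:1521 2>&1 | head -n 3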

How to configure https_check URL in Nagios

I have installed Nagios (Nagios® Core™ version 4.2.2) on a Linux server. I have written a JIRA URL check using check_http for an HTTPS URL.
It should get a 200 response, but it gives HTTP code 302.
[demuc1dv48:/pkg/vdcrz/Nagios/libexec][orarz]# ./check_http -I xx.xx.xx -u https://xxx.xxx.xxx.com/secure/Dashboard.jspa -S CONNECT
SSL Version: TLSv1
HTTP OK: HTTP/1.1 302 Found - 296 bytes in 0.134 second response time |time=0.134254s;;;0.000000 size=296B;;;
So I configured the same in the Nagios configuration file:
define command{
        command_name    check_https_jira_prod
        command_line    $USER1$/check_http -I xxx.xxx.xxx.com -u https://xxx.xxx.xxx.com/secure/Dashboard.jspa -S CONNECT -e 'HTTP/1.1 302'
}
Now my JIRA server is down, but this is not reflected in the Nagios check. The Nagios response still shows HTTP code 302 only.
How to fix this issue?
You did not specify, but I assume you defined your command in the Nagios central server's commands.cfg configuration file; you also need to define a service in services.cfg, as services use commands to run scripts.
If you are running your check_http check from a different server, you also need to define it in the nrpe.cfg configuration file on that remote machine and then restart NRPE.
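A minimal sketch of such a service definition (host_name, service_description and the generic-service template are assumptions to adapt to your setup):
define service{
        use                     generic-service
        host_name               jira-prod
        service_description     JIRA HTTPS check
        check_command           check_https_jira_prod
}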
As a side note, from the output you've shared, I believe you're not using the flags that the check_http Nagios plugin supports correctly.
From your post:
check_http -I xxx.xxx.xxx.com -u https://xxx.xxx.xxx.com/secure/Dashboard.jspa -S CONNECT -e 'HTTP/1.1 302'
From ./check_http -h:
-I, --IP-address=ADDRESS
IP address or name (use numeric address if possible to bypass DNS lookup).
You are using a host name instead (xxx.xxx.xxx.com).
-S, --ssl=VERSION
Connect via SSL. Port defaults to 443. VERSION is optional, and prevents auto-negotiation (1 = TLSv1, 2 = SSLv2, 3 = SSLv3).
You specified CONNECT, which is not one of those values.
You can't get code 200 unless you set the follow parameter in the check_http script.
I suggest you use something like this:
./check_http -I jira-ex.telefonica.de -u https://xxx.xxx.xxx.com/secure/Dashboard.jspa -S -f follow
The -f follow is mandatory for your use case.

Streaming log file data over HTTP using Unix commands: a combination of tail and curl

I need to follow a log file on a Linux machine and stream the updates of the log file over an HTTP port to a remote machine. I have written a command combining "tail" and "curl".
To test it initially, I used "tail -n"; it works well and posts data successfully to the remote machine. Below is the command:
$ tail -n 200 /path/to/logfile/file1.log | curl --data-binary @- http://remotemachineIP:9000
Now, when I try to run the same command with "tail -f", it's not posting any data over HTTP even though the log file is updated multiple times. Below is the command:
$ tail -f --follow=name /path/to/logfile/file1.log | curl --data-binary @- http://remotemachineIP:9000
As per my understanding, "tail -f" never tells my "curl" command that the input feed over stdin (@-) is complete. Any help on how to rectify this issue?
Thanks in advance.
curl will make a single HTTP POST request with the piped data. What you want to do instead is to continuously send the data.
Assuming that by "HTTP port" you actually meant TCP, there is a way using netcat:
Remote
nc -l 9000
Local
tail -f /path/to/log/file | nc remote_ip 9000
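If it really has to be HTTP rather than raw TCP, curl can stream from stdin using chunked transfer encoding when told to upload from - (a sketch, reusing the URL from the question; the receiving end must accept a chunked POST):
# -T - uploads from stdin; since the size is unknown, curl switches to
# chunked transfer encoding and sends data as it arrives
tail -f /path/to/logfile/file1.log | curl -T - -X POST http://remotemachineIP:9000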

Linux command for public IP address

I want a command to get a Linux machine's (Amazon) external/public IP address.
I tried hostname -I and other commands from blogs and Stack Overflow, like:
ifconfig | sed -En 's/127.0.0.1//;s/.*inet (addr:)?(([0-9]*\.){3}[0-9]*).*/\2/p'
and many more. But they all give me the internal IP address.
Then I found some sites which provide an API for this.
Example: curl http://ipecho.net/plain; echo
But I don't want to rely on a third-party website's service. So, is there any command-line tool available to get the external IP address?
The simplest of all would be to do:
curl ifconfig.me
For cleaner output of the address bound to an interface (note that on EC2 this is the internal address, not the public one):
ifconfig eth0 | awk '/inet / { print $2 }' | sed 's/addr://'
You could use this script:
#!/bin/bash
#
echo 'Your external IP is: '
curl -4 icanhazip.com
But that is relying on a third party, albeit a reliable one.
I don't know if you can get your external IP without asking someone (some site, i.e. some third party) for it, but what do I know.
You can also just run:
curl -4 icanhazip.com
This does the same thing as the script above; the -4 is to get the output as IPv4.
You can use this command to get the public IP and private IP (the second line is the private IP; the third line is the public IP):
ip addr | awk '/inet / {sub(/\/.*/, "", $2); print $2}'
I would suggest using the command external-ip (sudo apt-get install miniupnpc), as it (I'm almost sure) uses the UPnP protocol to ask the router instead of asking an external website, so it should be faster; but of course the router has to have UPnP enabled.
You can simply do this:
curl https://ipinfo.io/ip
It might not work on Amazon because you might be using NAT or something for the server to access the rest of the world (and for you to SSH into it, too). If you are unable to SSH into the IP listed in ifconfig, then you are either on a different network or don't have SSH enabled.
This is the best I can do (only relies on my ISP):
ISP=`traceroute -M 2 -m 2 -n -q 1 8.8.8.8 | grep -m 1 -Eo '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}'`
extIP=`ping -R -c 1 -t 1 -s 1 -n $ISP | grep RR | grep -m 1 -Eo '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}'`
echo $extIP
Or, the functionally same thing on one line:
ISP=`traceroute -M 2 -m 2 -n -q 1 8.8.8.8 | grep -m 1 -Eo '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}'`; ping -R -c 1 -t 1 -s 1 -n $ISP | grep RR | grep -m 1 -Eo '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}'
To save it to a temporary and hidden file, add > .extIP to the end of the last line, then cat .extIP to see it.
If your ISP's address never changes (honestly, I'm not sure if it would or not), then you could fetch it once and replace $ISP in line two with it.
This has been tested on a Mac with wonderful success.
The only adjustment on Linux that I've found so far is that the traceroute "-M" flag might need to be "-f" instead.
And it relies heavily on ping's "-R" flag, which tells it to send back the "Record Route" information; that isn't always supported by the host. But it's worth a try!
The only other way to do this without relying on any external servers is to get it by curl'ing your modem's status page. I've done this successfully with our Frontier DSL modem, but it's dirty, slow, unreliable, and requires hard-coding your modem's password.
Here's the "process" for that:
curl http://[user]:[password]@[modem's LAN address]/[status.html] | grep 'WanIPAddress =' | grep -m 1 -Eo '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}'
That fetches the raw HTML, searches for any lines containing "WanIPAddress =" (change that so it's appropriate for your modem's output), and then narrows those results down to an IPv4-style address.
Hope that helps!
As others suggested, we would have to rely on a third-party service, which I don't feel safe using. So I found the Amazon metadata API in this answer:
$ curl http://169.254.169.254/latest/meta-data/public-ipv4
54.232.200.77
For more details, see https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html#instancedata-data-retrieval
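As a side note, newer EC2 instances may enforce IMDSv2, where the metadata service requires a session token first (the TTL value here is an arbitrary choice):
# request a session token, then present it when reading the metadata
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/public-ipv4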
The super-easy way is to use the glances tool. You can install it on Ubuntu using:
$ sudo apt install glances
then using it with:
$ glances
At the top of the terminal it highlights your public IP address, along with lots of other information about your system (like what htop shows) and network status.
For formatted output, use:
dig TXT +short o-o.myaddr.l.google.com @ns1.google.com
It'll give you formatted output like this:
"30.60.10.11"
Also, FYI: dig is faster than curl and wget.
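A similar DNS-based lookup that returns the bare address without quotes (assuming the OpenDNS resolvers are reachable from your network):
dig +short myip.opendns.com @resolver1.opendns.com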
The following works as long as you have curl (despite the site's name, the local ifconfig tool is not needed):
curl ifconfig.me
