Getting different local-port value in tcpdump when using --local-port option in curl - linux

I'm using the curl command to send traffic to the server. In the curl command, I'm using the --local-port option. Below is the command:
curl -v --interface 10.1.1.3 -b --local-port 10000 http://30.1.1.101/myfile.txt
I'm running tcpdump to confirm whether the local port is actually used. Below are the tcpdump stats (screenshot omitted):
After checking the capture, I observed that the local-port value is different in tcpdump. It is supposed to be 10.1.1.3:10000, not 10.1.1.3:random_val.
So my questions are:
Is it possible to force curl to use the same local-port that I have mentioned in the command?
What's the reason for getting a different local-port value?
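For reference, a narrower capture that should confirm whether source port 10000 is in use (a sketch; 10.1.1.3 and 30.1.1.101 are the addresses from the command above):
sudo tcpdump -ni any 'tcp and host 30.1.1.101 and port 10000'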
Any help would be appreciated.

Related

Why can't you curl into nano?

So I was using the curl command the other day to get the information on a webpage, and I piped it into nano to try to save the information, but all it did was make the console completely unresponsive. The command I used was of the form:
curl -vk [web address] | nano
This caused the console to completely seize up. I sorted the issue by using a different command, but I can't seem to find an answer anywhere on why this happens...
Can anyone enlighten me?
nano reads stdin with the dash - notation.
In your case, that'd give you:
curl -vk [web address] | nano -
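If the goal is just to save the output and then edit it, writing to a file first may be simpler (a sketch; page.html is an arbitrary filename):
curl -vk [web address] -o page.html && nano page.html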

How to see all request URLs the server is making (final URLs)

How can I list, from the command line, the URL requests that are made from the server (a *nix machine) to another machine?
For instance, I am on the command line of server ALPHA_RE.
I do a ping to google.co.uk and another ping to bbc.co.uk
I would like to see, from the prompt:
google.co.uk
bbc.co.uk
so, not the IP address of the machine I am pinging, and NOT a URL from servers that pass my request on to google.co.uk or bbc.co.uk, but the actual final URLs.
Note that only packages available in the normal Ubuntu repositories can be used - and it has to work from the command line.
Edit
The ultimate goal is to see which API URLs a PHP script (run by a cronjob) requests, and which API URLs the server requests 'live'.
These mainly make GET and POST requests to several URLs, and I am interested in knowing the params:
Does it make a request to:
foobar.com/api/whatisthere?and=what&is=there&too=yeah
or to:
foobar.com/api/whatisthathere?is=it&foo=bar&green=yeah
And do the cron jobs or the server make any other GET or POST requests?
And that, regardless of what response (if any) these APIs give.
Also, the API list is unknown - so you cannot grep for one particular URL.
Edit:
(The OLD ticket specified: Note that I cannot install anything on that server (no extra packages; I can only use the "normal" commands - like tcpdump, sed, grep, ...) // but as getting this information with tcpdump is pretty hard, I have since made installation of packages possible.)
You can use tcpdump and grep to get info about network traffic from the host; the following command line should get you all lines containing Host:
tcpdump -i any -A -vv -s 0 | grep -e "Host:"
If I run the above in one shell and start a Links session to stackoverflow I see:
Host: www.stackoverflow.com
Host: stackoverflow.com
If you want to know more about the actual HTTP request, you can also add patterns to the grep for GET, PUT or POST requests (e.g. -e "GET"), which gets you some info about the relative URL (this should be combined with the previously determined host to get the full URL).
EDIT:
based on your edited question I have tried to make some modification:
first a tcpdump approach:
[root@localhost ~]# tcpdump -i any -A -vv -s 0 | egrep -e "GET" -e "POST" -e "Host:"
tcpdump: listening on any, link-type LINUX_SLL (Linux cooked), capture size 65535 bytes
E..v.[#.#.......h.$....P....Ga .P.9.=...GET / HTTP/1.1
Host: stackoverflow.com
E....x#.#..7....h.$....P....Ga.mP...>;..GET /search?q=tcpdump HTTP/1.1
Host: stackoverflow.com
And an ngrep one:
[root@localhost ~]# ngrep -d any -vv -W byline | egrep -e "Host:" -e "GET" -e "POST"
^[[B GET //meta.stackoverflow.com HTTP/1.1..Host: stackoverflow.com..User-Agent:
GET //search?q=tcpdump HTTP/1.1..Host: stackoverflow.com..User-Agent: Links
My test case was running links stackoverflow.com, putting tcpdump in the search field and hitting enter.
This gets you all the URL info on one line. A nicer alternative might be to simply run a reverse proxy (e.g. nginx) on your own server, modify the hosts file (as shown in Adam's answer), and have the reverse proxy forward all queries to the actual host, using the proxy's logging features to get the URLs from there; the logs would probably be a bit easier to read.
EDIT 2:
If you use a command line such as:
ngrep -d any -vv -W byline | egrep -e "Host:" -e "GET" -e "POST" --line-buffered | perl -lne 'print $3.$2 if /(GET|POST) (.+?) HTTP\/1\.1\.\.Host: (.+?)\.\./'
you should see the actual URLs
A simple solution is to modify your '/etc/hosts' file to intercept the API calls and redirect them to your own web server:
127.0.0.1 api.foobar.com
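A minimal shell sketch of that interception idea (api.foobar.com is the hypothetical API host from above; the nc flags assume a traditional netcat and may need adjusting for your version):
# point the API hostname at this machine
echo '127.0.0.1 api.foobar.com' | sudo tee -a /etc/hosts
# print the request line of each incoming HTTP request (port 80 needs root)
while true; do sudo nc -l -p 80 | head -n 1; done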

Streaming log file data over HTTP using Unix commands. Combination of tail and curl

I need to follow a log file on a Linux machine and stream the updates of the log file over an HTTP port to a remote machine. I have written a command with a combination of "tail" and "curl".
To test it initially, I used "tail -n"; it works well and posts data successfully to the remote machine. Below is the command:
$ tail -n 200 /path/to/logfile/file1.log | curl --data-binary @- http://remotemachineIP:9000
Now, when I try to run the same command with "tail -f", it's not posting any data over HTTP even though the log file is updated multiple times. Below is the command:
$ tail -f --follow=name /path/to/logfile/file1.log | curl --data-binary @- http://remotemachineIP:9000
As per my understanding, "tail -f" never tells my "curl" command that the input feed over stdin (@-) is complete. Any help on how to rectify this issue?
Thanks in advance
curl will make a single HTTP POST request with the piped data. What you want to do instead is to continuously send the data.
Assuming that by "HTTP port" you actually meant TCP, there is a way using netcat:
Remote
nc -l 9000
Local
tailf /path/to/log/file | nc remote_ip 9000
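If it has to stay HTTP rather than raw TCP, one option is to loop and POST each new line as its own request. A rough sketch, reusing the placeholder endpoint from the question:
tail -n 0 -F /path/to/logfile/file1.log | while read -r line; do
  printf '%s\n' "$line" | curl -s --data-binary @- http://remotemachineIP:9000
done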

Linux command for public IP address

I want a command to get a Linux machine's (Amazon) external/public IP address.
I tried hostname -I and other commands from blogs and Stack Overflow, like
ifconfig | sed -En 's/127.0.0.1//;s/.*inet (addr:)?(([0-9]*\.){3}[0-9]*).*/\2/p'
and many more. But they are all giving me the internal IP address.
Then I found some sites which provide an API for this.
Example : curl http://ipecho.net/plain; echo
But I don't want to rely on a third-party website service. So, is there any command-line tool available to get the external IP address?
The simplest of all would be:
curl ifconfig.me
A cleaner output:
ifconfig eth0 | awk '/inet / { print $2 }' | sed 's/addr://'
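On newer systems where ifconfig isn't installed, an equivalent using ip would be (a sketch; eth0 is the interface name assumed above):
ip -4 addr show eth0 | awk '/inet / { print $2 }' | cut -d/ -f1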
You could use this script:
#!/bin/bash
#
echo 'Your external IP is: '
curl -4 icanhazip.com
But that is relying on a third party, albeit a reliable one.
I don't know if you can get your external IP without asking someone/some site (i.e. some third party) for it, but what do I know.
You can also just run:
curl -4 icanhazip.com
This does the same thing as the script above; the -4 flag is to get the output in IPv4.
You can use this command to get the public IP and private IP (the second line is the private IP; the third line is the public IP):
ip addr | awk '/inet / {sub(/\/.*/, "", $2); print $2}'
I would suggest using the command external-ip (sudo apt-get install miniupnpc), as it (I'm almost sure) uses the UPnP protocol to ask the router instead of asking an external website, so it should be faster, but of course the router has to have UPnP enabled.
You can simply do this:
curl https://ipinfo.io/ip
It might not work on Amazon because you might be using NAT or something for the server to access the rest of the world (and for you to SSH into it also). If you are unable to SSH into the IP that is listed in ifconfig, then you are either on a different network or don't have SSH enabled.
This is the best I can do (only relies on my ISP):
ISP=`traceroute -M 2 -m 2 -n -q 1 8.8.8.8 | grep -m 1 -Eo '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}'`
extIP=`ping -R -c 1 -t 1 -s 1 -n $ISP | grep RR | grep -m 1 -Eo '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}'`
echo $extIP
Or, functionally the same thing on one line:
ISP=`traceroute -M 2 -m 2 -n -q 1 8.8.8.8 | grep -m 1 -Eo '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}'`; ping -R -c 1 -t 1 -s 1 -n $ISP | grep RR | grep -m 1 -Eo '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}'
To save it to a temporary & hidden file, add > .extIP to the end of the last line, then cat .extIP to see it.
If your ISP's address never changes (honestly I'm not sure whether it would or not), then you could fetch it once and then replace $ISP in line two with it.
This has been tested on a Mac with wonderful success.
The only adjustment on Linux that I've found so far is that the traceroute "-M" flag might need to be "-f" instead,
and it relies heavily on ping's "-R" flag, which tells it to send back the "Record Route" information, which isn't always supported by the host. But it's worth a try!
The only other way to do this without relying on any external servers is to get it by curl'ing your modem's status page... I've done this successfully with our Frontier DSL modem, but it's dirty, slow, unreliable, and requires hard-coding your modem's password.
Here's the "process" for that:
curl http://[user]:[password]@[modem's LAN address]/[status.html] | grep 'WanIPAddress =' | grep -m 1 -Eo '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}'
That fetches the raw HTML, searches for any lines containing "WanIpAddress =" (change that so it's appropriate for your modem's results), and then narrows those results down to an IPv4-style address.
Hope that helps!
As others suggested, we have to rely on a third-party service, which I don't feel safe using. So, I found the Amazon API in this answer:
$ curl http://169.254.169.254/latest/meta-data/public-ipv4
54.232.200.77
For more details, https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html#instancedata-data-retrieval
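On newer instances the metadata service may require IMDSv2, i.e. fetching a session token first (a sketch based on the AWS documentation linked above):
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/public-ipv4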
The super-easy way is using the glances tool. You can install it on Ubuntu using:
$ sudo apt install glances
then use it with:
$ glances
and at the top of the terminal, it highlights your public IP address, along with a lot of other information about your system (like what htop does) and network status.
For a formatted output, use:
dig TXT +short o-o.myaddr.l.google.com @ns1.google.com
It'll give you formatted output like this:
"30.60.10.11"
Also, FYI: dig is faster than curl and wget.
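A similar, commonly used DNS-based alternative (assuming the OpenDNS resolvers are reachable from your network):
dig +short myip.opendns.com @resolver1.opendns.com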
The following works as long as you have curl (ifconfig.me is an external service, not the local ifconfig tool).
curl ifconfig.me

How do I get cURL to not show the progress bar?

I'm trying to use cURL in a script and get it to not show the progress bar.
I've tried the -s, -silent, -S, and -quiet options, but none of them work.
Here's a typical command I've tried:
curl -s http://google.com > temp.html
I only get the progress bar when pushing it to a file, so curl -s http://google.com doesn't have a progress bar, but curl -s http://google.com > temp.html does.
curl -s http://google.com > temp.html
works for curl version 7.19.5 on Ubuntu 9.10 (no progress bar). But if for some reason that does not work on your platform, you could always redirect stderr to /dev/null:
curl http://google.com 2>/dev/null > temp.html
In curl version 7.22.0 on Ubuntu and 7.24.0 on OSX the solution to not show progress but to show errors is to use both -s (--silent) and -S (--show-error) like so:
curl -sS http://google.com > temp.html
This works for redirected output (> /some/file), piped output (| less), and outputting directly to the terminal, for me.
Update: Since curl 7.67.0 there is a new option, --no-progress-meter, which does precisely this and nothing else; see clonejo's answer below for more details.
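In a script, -sS is often combined with -f so that HTTP errors also produce a non-zero exit status (a sketch; the URL and filename are placeholders):
curl -sS -f -o temp.html http://google.com || echo "download failed" >&2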
I found that with curl 7.18.2 the download progress bar is not hidden with:
curl -s http://google.com > temp.html
but it is with:
curl -ss http://google.com > temp.html
Since curl 7.67.0 (2019-11-06) there is --no-progress-meter, which does exactly this, and nothing else. From the man page:
--no-progress-meter
Option to switch off the progress meter output without muting or
otherwise affecting warning and informational messages like -s,
--silent does.
Note that this is the negated option name documented. You can
thus use --progress-meter to enable the progress meter again.
See also -v, --verbose and -s, --silent. Added in 7.67.0.
It's available in Ubuntu ≥20.04 and Debian ≥11 (Bullseye).
For a bit of history on curl's verbosity options, you can read Daniel Stenberg's blog post.
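A minimal usage example (the URL is a placeholder):
curl --no-progress-meter -o temp.html https://example.com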
Not sure why it's doing that. Try -s with the -o option to set the output file instead of >.
This could help:
curl 'http://example.com' > /dev/null
On macOS 10.13.6 (High Sierra), the -sS option works. It is especially useful inside Perl, in a command like curl -sS --get {someURL}, which frankly is a whole lot simpler than any of the LWP or HTTP wrappers, for just getting a website or web page's contents.
