I've tried using netcat in various ways.
The command just exits:
echo "Response from server" | nc -l 127.0.0.1 8080
I get the following error in the browser: net::ERR_CONNECTION_REFUSED
nc -l localhost 8080 < temp.resp
I just want to print some text from netcat to the browser, and maybe later pass some headers along with it too. But right now I am not able to figure out what is going wrong.
I've looked at the man page and tried the -w and -I options, but none of them work.
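For reference, the shape that usually works is to pipe a complete HTTP response (status line, headers, blank line, body) into a listening netcat; a minimal sketch, assuming an OpenBSD-style nc (traditional netcat expects nc -l -p 8080 instead, and either variant serves exactly one connection before exiting):
printf 'HTTP/1.1 200 OK\r\nContent-Type: text/plain\r\nContent-Length: 13\r\nConnection: close\r\n\r\nHello browser' | nc -l 127.0.0.1 8080
Without at least the status line and a blank line before the body, many browsers refuse or mangle the response.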
I am looking for a way to restrict the output of the curl command.
For example, when using curl to check if a port is open on a server, I just want to restrict the output to the first few lines to confirm that the port is open:
curl -v host:1521
I want to display just the first 3 lines of output:
* About to connect to
* Trying host .. connected
* Connected to host
Why not pipe it to head?
curl -v host:1521 | head -n3
where -n3 means the first 3 lines from the top.
EDIT:
As discussed in the comments, the -v option prints headers etc. on stderr instead of stdout, so head doesn't affect them. You have to redirect stderr to stdout and then operate on that:
curl -v www.example.com 2>&1 | grep Connected
This will return * Connected to www.example.com (IP_ADDRESS_HERE) port 443 (#0) if connected successfully, and nothing otherwise.
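Applied back to the original goal of only seeing the first three lines, the same redirection can be combined with head (a sketch):
curl -v host:1521 2>&1 | head -n 3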
I have installed Nagios (Nagios® Core™ Version 4.2.2) on a Linux server. I have written a JIRA URL check using check_http for an HTTPS URL.
It should get a 200 response, but it gives HTTP code 302.
[demuc1dv48:/pkg/vdcrz/Nagios/libexec][orarz]# ./check_http -I xx.xx.xx -u https://xxx.xxx.xxx.com/secure/Dashboard.jspa -S CONNECT
SSL Version: TLSv1
HTTP OK: HTTP/1.1 302 Found - 296 bytes in 0.134 second response time |time=0.134254s;;;0.000000 size=296B;;;
So I configured the same command in the Nagios configuration file:
define command{
        command_name    check_https_jira_prod
        command_line    $USER1$/check_http -I xxx.xxx.xxx.com -u https://xxx.xxx.xxx.com/secure/Dashboard.jspa -S CONNECT -e 'HTTP/1.1 302'
}
Now my JIRA server is down, but this is not reflected in the Nagios check. The Nagios response still shows HTTP code 302.
How to fix this issue?
You did not specify, but I assume you defined your command in the Nagios central server's commands.cfg configuration file; you also need to define a service in services.cfg, since services use commands to run the check scripts.
If you are running your check_http check from a different server, you also need to define it in the nrpe.cfg configuration file on that remote machine and then restart NRPE.
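For example, a minimal service definition wiring that command to a host might look like this (a sketch; the host name and the service template are placeholders for whatever your setup uses):
define service{
        use                     generic-service
        host_name               jira-prod
        service_description     JIRA HTTPS Dashboard
        check_command           check_https_jira_prod
}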
As a side note, from the output you've shared, I believe you're not using the check_http Nagios plugin's flags correctly.
From your post:
check_http -I xxx.xxx.xxx.com -u https://xxx.xxx.xxx.com/secure/Dashboard.jspa -S CONNECT -e 'HTTP/1.1 302'
From ./check_http -h:
-I, --IP-address=ADDRESS
IP address or name (use numeric address if possible to bypass DNS lookup).
You are using a host name instead (xxx.xxx.xxx.com).
-S, --ssl=VERSION
Connect via SSL. Port defaults to 443. VERSION is optional, and prevents auto-negotiation (1 = TLSv1, 2 = SSLv2, 3 = SSLv3).
You specified CONNECT, which is not one of those values.
You can't get code 200 unless you set the follow parameter in the check_http script.
I suggest you use something like this:
./check_http -I jira-ex.telefonica.de -u https://xxx.xxx.xxx.com/secure/Dashboard.jspa -S -f follow
The -f follow is mandatory for your use case.
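Carried back into the Nagios configuration, the corrected command definition would then look roughly like this (a sketch; the host names are the placeholders from the question):
define command{
        command_name    check_https_jira_prod
        command_line    $USER1$/check_http -I xxx.xxx.xxx.com -u https://xxx.xxx.xxx.com/secure/Dashboard.jspa -S -f follow
}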
I need to follow a log file on a Linux machine and stream the updates of the log file over an HTTP port to a remote machine. I have written a command with a combination of "tail" and "curl".
To test it initially, I used "tail -n"; it works well and posts data successfully to the remote machine. Below is the command:
$ tail -n 200 /path/to/logfile/file1.log | curl --data-binary @- http://remotemachineIP:9000
Now, when I try to run the same command with "tail -f", it's not posting any data over HTTP even though the log file has been updated multiple times. Below is the command:
$ tail -f --follow=name /path/to/logfile/file1.log | curl --data-binary @- http://remotemachineIP:9000
As per my understanding, "tail -f" never tells my "curl" command that the input feed on stdin (@-) is complete. Any help on how to rectify this issue?
Thanks in advance
curl reads everything from stdin until EOF and then makes a single HTTP POST request with the piped data; with tail -f, EOF never arrives, so nothing is ever sent. What you want to do instead is send the data continuously.
Assuming that by "HTTP port" you actually meant TCP, there is a way using netcat:
Remote
nc -l 9000
Local
tail -f /path/to/log/file | nc remote_ip 9000
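If it really does have to be HTTP rather than raw TCP, a rough workaround is to issue one POST per line read from the log; a minimal sketch, assuming the listener on port 9000 accepts many small requests:
tail -F /path/to/logfile/file1.log | while IFS= read -r line; do
    curl -s --data-binary "$line" http://remotemachineIP:9000
done
This is chatty (one request per line), but it never waits for EOF, so updates flow as they are appended.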
I want a command to get a Linux machine's (Amazon) external/public IP address.
I tried hostname -I and other commands from blogs and Stack Overflow, like
ifconfig | sed -En 's/127.0.0.1//;s/.*inet (addr:)?(([0-9]*\.){3}[0-9]*).*/\2/p'
and many more. But they all give me the internal IP address.
Then I found some sites which provide an API for this.
Example: curl http://ipecho.net/plain; echo
But I don't want to rely on a third-party website service. So, is there any command-line tool available to get the external IP address?
The simplest of all would be:
curl ifconfig.me
A cleaner output:
ifconfig eth0 | awk '/inet / { print $2 }' | sed 's/addr://'
You could use this script
#!/bin/bash
#
echo 'Your external IP is: '
curl -4 icanhazip.com
But that is relying on a third party, albeit a reliable one.
I don't know if you can get your external IP without asking someone/some site, i.e. some third party, for it, but what do I know.
You can also just run:
curl -4 icanhazip.com
This does the same thing as a one-off command; the -4 flag is to get the output in IPv4.
You can use this command to get the public IP and private IP (the second line is the private IP; the third line is the public IP):
ip addr | awk '/inet / {sub(/\/.*/, "", $2); print $2}'
I would suggest you use the external-ip command (sudo apt-get install miniupnpc), as it (I'm almost sure) uses the UPnP protocol to ask the router instead of asking an external website, so it should be faster; but of course the router has to have UPnP enabled.
You can simply do this:
curl https://ipinfo.io/ip
It might not work on Amazon because you might be using NAT or something for the server to access the rest of the world (and for you to SSH into it as well). If you are unable to SSH into the IP that is listed in ifconfig, then you are either in a different network or don't have SSH enabled.
This is the best I can do (only relies on my ISP):
ISP=`traceroute -M 2 -m 2 -n -q 1 8.8.8.8 | grep -m 1 -Eo '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}'`
extIP=`ping -R -c 1 -t 1 -s 1 -n $ISP | grep RR | grep -m 1 -Eo '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}'`
echo $extIP
Or, the functional equivalent on one line:
ISP=`traceroute -M 2 -m 2 -n -q 1 8.8.8.8 | grep -m 1 -Eo '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}'` && ping -R -c 1 -t 1 -s 1 -n $ISP | grep RR | grep -m 1 -Eo '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}'
To save it to a temporary hidden file, add > .extIP to the end of the last line, then cat .extIP to see it.
If your ISP's address never changes (honestly, I'm not sure whether it would or not), then you could fetch it once and replace $ISP in line two with it.
This has been tested on a Mac with wonderful success.
The only adjustment on Linux that I've found so far is that the traceroute "-M" flag might need to be "-f" instead.
It also relies heavily on ping's "-R" flag, which tells it to send back the "Record Route" information; that isn't always supported by the host. But it's worth a try!
The only other way to do this without relying on any external servers is to get it by curl'ing your modem's status page... I've done this successfully with our Frontier DSL modem, but it's dirty, slow, unreliable, and requires hard-coding your modem's password.
Here's the "process" for that:
curl http://[user]:[password]@[modem's LAN address]/[status.html] | grep 'WanIPAddress =' | grep -m 1 -Eo '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}'
That fetches the raw HTML, searches for any lines containing "WanIPAddress =" (change that so it's appropriate for your modem's output), and then narrows those results down to an IPv4-style address.
Hope that helps!
As others suggested, we have to rely on a third-party service, which I don't feel safe using. So I found the Amazon metadata API in this answer:
$ curl http://169.254.169.254/latest/meta-data/public-ipv4
54.232.200.77
For more details, see https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html#instancedata-data-retrieval
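If the instance enforces IMDSv2 (token-based metadata access), the same endpoint needs a session token first; a sketch:
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/public-ipv4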
The super-easy way is to use the glances tool. You can install it on Ubuntu using:
$ sudo apt install glances
then run it with:
$ glances
At the top of the terminal, it highlights your public IP address, along with a lot of other information about your system (like what htop does) and network status.
For formatted output, use:
dig TXT +short o-o.myaddr.l.google.com @ns1.google.com
It'll give you formatted output like this:
"30.60.10.11"
Also, FYI: dig is faster than curl and wget.
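A similarly popular DNS-based variant goes through OpenDNS instead (assuming their resolvers are reachable from your network):
dig +short myip.opendns.com @resolver1.opendns.com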
The following works as long as you have curl (it queries the ifconfig.me service):
curl ifconfig.me