I've tried using netcat in various ways.
The command below just exits immediately:
echo "Response from server" | nc -l 127.0.0.1 8080
I get the following error in the browser: net::ERR_CONNECTION_REFUSED
I also tried:
nc -l localhost 8080 < temp.resp
I simply want to print some text from netcat to the browser, and maybe later pass some headers along with it too, but right now I can't figure out what is going wrong.
I've looked in the man page and tried the -w and -I options, but neither of them works.
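Note that a browser expects a complete HTTP response (status line, headers, a blank line, then the body), not raw text on its own. A minimal sketch, assuming the traditional (Debian) netcat where -p names the listening port and -q 0 quits at EOF on stdin:
# serve a single request with a complete, if minimal, HTTP response
# Content-Length: 21 matches the 21 bytes of "Response from server\n"
printf 'HTTP/1.1 200 OK\r\nContent-Type: text/plain\r\nContent-Length: 21\r\nConnection: close\r\n\r\nResponse from server\n' | nc -l -p 8080 -q 0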
Related
I am looking for a way to restrict the output of the curl command.
For example, when using curl to check whether a port is open on a server, I just want to limit the output to the first few lines to confirm that the port is open:
curl -v host:1521
I want to display just the first 3 lines of output:
* About to connect to
* Trying host... connected
* Connected to host
Why not pipe it to head?
curl -v host:1521 | head -n3
where -n3 means the first 3 lines from the top.
EDIT:
As discussed in the comments, the -v option prints the headers and connection details on stderr rather than stdout, so head has no effect on them. You have to redirect stderr to stdout and then operate on the combined stream:
curl -v www.example.com 2>&1 | grep Connected
This will return * Connected to www.example.com (IP_ADDRESS_HERE) port 443 (#0) if connected successfully and nothing otherwise.
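Applying the same redirection to the original example (host:1521 stands in for the host and port from the question):
# -v writes to stderr, so merge it into stdout before piping to head
curl -v host:1521 2>&1 | head -n 3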
I am trying to understand why a netcat listener isn't working in my Kali Linux VM. From what I understand, I open a terminal and open the port.
nc -l 155
Then, I open another terminal within my VM and use the following command to connect to that port number.
nc 127.0.0.1 155 (loopback IP address and same port number)
It was unsuccessful, and since I am just a newbie in this field, I was hoping to get some assistance with this issue. However, I found a new way to execute this command, but I don't understand the logic behind why the new way works and not the original method I learned in class. Thank you for your help in advance!
First of all, to elevate yourself from newbie status, you have to understand what errors mean. "It was unsuccessful" is an insufficient description of your results for any real debugging. Whatever actually happened was probably a valuable clue to the issue; you should have included that information. Furthermore, you really have to get the commands in your question exactly right. Don't say you did one thing, then post a screenshot of something else happening. I'm not sure what the -e is supposed to be doing; I don't find any record of it in my OS X implementation or in the online man pages.
Different builds or implementations of netcat may differ, but judging from the netcat on my OS X box, -p is not the right way to specify the destination port.
$ nc localhost -p 1055
nc: missing hostname and port
usage: nc [-46AacCDdEFhklMnOortUuvz] [-K tc] [-b boundif] [-i interval] [-p source_port] [--apple-delegate-pid pid] [--apple-delegate-uuid uuid]
[-s source_ip_address] [-w timeout] [-X proxy_version]
[-x proxy_address[:port]] [hostname] [port[s]]
-p specifies the source port, which you don't usually need to set. Furthermore, you can't have both the source and the destination of a socket on the same box on the same port.
Finally, ports under 1024 can only be bound by root. Like most Linux professionals, I don't run anything as root unless I really have to, so I changed to port 1055 for this demonstration. Run one nc in each terminal window; messages typed into one are printed on the other side. Observe:
$ nc -l 1055
hi world
hi yourself, world!
$ nc localhost 1055
hi world
hi yourself, world!
The same pattern works for a simple one-shot file transfer (note there is no -z here: -z is zero-I/O scan mode and would send nothing):
server: nc -l ${port} > ${file}
local: nc ${ip} ${port} < ${file}
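A fleshed-out sketch with assumed values (192.168.1.10, port 1055 and backup.tar are placeholders of mine, not from the snippet), plus a checksum to verify the copy:
# receiving side: listen and write whatever arrives to disk
nc -l 1055 > backup.tar
# sending side: -w 3 times the connection out a few seconds after the data stops
# (exact close-on-EOF behavior varies between netcat variants, see the next thread)
nc -w 3 192.168.1.10 1055 < backup.tar
# run on both machines afterwards; the checksums should match
md5sum backup.tar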
When I run the command cat index.html | nc -lnvp 2222 and then open the server's local address in the browser, the browser sends this request:
GET / HTTP/1.1
Host: 192.168.146.131:2222
User-Agent: "my user agent here"
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: de,en-US;q=0.7,en;q=0.3
Accept-Encoding: gzip, deflate
DNT: 1
Connection: keep-alive
and this as the index.html: "hi whats up" (that's it)
I get the HTTP request in the terminal where netcat is running, and the browser on my other machine keeps waiting. Only when I Ctrl-C netcat in the terminal does the browser get the response.
My uname -a prints out: Linux kali 4.0.0-kali1-amd64 #1 SMP Debian 4.0.4-1+kali2 (2015-06-03) x86_64 GNU/Linux
When I try to use cat index.html | nc -l 2222 it doesn't work at all. My Kali machine doesn't even get an HTTP request. When I try the same on an Ubuntu machine, it works the way I want it to: it simply sends the index.html to the browser without waiting for me to Ctrl-C netcat.
Does anyone have an idea why netcat behaves so strangely?
There are two different netcats, and you're using one of each:
Traditional netcat will avoid closing the connection by default, because the client might send more data.
OpenBSD netcat closes the connection by default when there's no more data to send.
Old versions of HTTP (which you're falling back on) expect the connection to be closed, so it works by default with the OpenBSD netcat.
You can have traditional netcat close the connection on EOF as well, using -q 0:
stuff | nc.traditional -l -p 2222 -q 0
The man page for nc says "netcat stays running until the network side closes", so your browser will wait forever for the data to finish. Use the -q option to quit; from the same man page: "after EOF on stdin, wait the specified number of seconds and then quit".
cat index.html | nc -lnvp 2222 -q 0
nc -l 2222 does not make sense with traditional netcat; there you need to use -p to specify the listening port.
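On Debian-based systems (Kali included) both variants are often installed under explicit names, so here is a sketch that makes the difference visible (nc.traditional and nc.openbsd are the Debian binary names; adjust if your system names them differently):
# traditional netcat: -p names the listening port, -q 0 quits at EOF on stdin
cat index.html | nc.traditional -l -p 2222 -q 0
# OpenBSD netcat: the port follows -l directly, and it closes at EOF by default
cat index.html | nc.openbsd -l 2222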
How do I list, from the command line, the URL requests that are made from a server (a *nix machine) to another machine?
For instance, I am on the command line of server ALPHA_RE.
I do a ping to google.co.uk and another ping to bbc.co.uk.
I would like to see, from the prompt:
google.co.uk
bbc.co.uk
so, not the IP address of the machine I am pinging, and not a URL from servers that pass my request along to google.co.uk or bbc.co.uk, but the actual final URLs.
Note that only packages available in the normal Ubuntu repositories can be used, and it has to work from the command line.
Edit
The ultimate goal is to see what API URLs a PHP script (run by a cron job) requests, and what API URLs the server requests 'live'.
These mainly make GET and POST requests to several URLs, and I am interested in knowing the params:
Does it make a request to:
foobar.com/api/whatisthere?and=what&is=there&too=yeah
or to :
foobar.com/api/whatisthathere?is=it&foo=bar&green=yeah
And do the cron jobs or the server make any other GET or POST requests?
And that, regardless of what response (if any) these APIs give.
Also, the API list is unknown, so you cannot grep for one particular URL.
Edit:
(The old version of the question specified: "Note that I cannot install anything on that server (no extra packages; I can only use 'normal' commands like tcpdump, sed, grep, ...)"; but as getting this information with tcpdump alone is pretty hard, I have since allowed installing packages.)
You can use tcpdump and grep to get information about the network traffic leaving the host; the following command line should get you all lines containing Host:
tcpdump -i any -A -vv -s 0 | grep -e "Host:"
If I run the above in one shell and start a Links session to stackoverflow I see:
Host: www.stackoverflow.com
Host: stackoverflow.com
If you want to know more about the actual HTTP request, you can also add patterns to the grep for GET, PUT or POST requests (e.g. -e "GET"), which gets you the relative URL (combine it with the previously determined host to get the full URL).
EDIT:
Based on your edited question, I have made some modifications.
First, a tcpdump approach:
[root@localhost ~]# tcpdump -i any -A -vv -s 0 | egrep -e "GET" -e "POST" -e "Host:"
tcpdump: listening on any, link-type LINUX_SLL (Linux cooked), capture size 65535 bytes
E..v.[#.#.......h.$....P....Ga .P.9.=...GET / HTTP/1.1
Host: stackoverflow.com
E....x#.#..7....h.$....P....Ga.mP...>;..GET /search?q=tcpdump HTTP/1.1
Host: stackoverflow.com
And an ngrep one:
[root@localhost ~]# ngrep -d any -vv -w byline | egrep -e "Host:" -e "GET" -e "POST"
GET //meta.stackoverflow.com HTTP/1.1..Host: stackoverflow.com..User-Agent:
GET //search?q=tcpdump HTTP/1.1..Host: stackoverflow.com..User-Agent: Links
My test case was running links stackoverflow.com, putting tcpdump in the search field and hitting enter.
This gets you all the URL info on one line. A nicer alternative might be to run a reverse proxy (e.g. nginx) on your own server, modify the hosts file (as shown in Adam's answer), and have the reverse proxy forward all queries to the actual host; you can then use the reverse proxy's logging to collect the URLs, and its logs would probably be a bit easier to read.
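A minimal sketch of such a reverse proxy config, assuming nginx and using made-up names (api.foobar.com, REAL_API_IP and the file path are placeholders):
# /etc/nginx/conf.d/api-log.conf (hypothetical file name)
server {
    listen 80;
    server_name api.foobar.com;         # the host you pointed at 127.0.0.1 in /etc/hosts
    access_log /var/log/nginx/api.log;  # each request line (method, URL, params) lands here
    location / {
        proxy_pass http://REAL_API_IP;  # replace with the API's real address
        proxy_set_header Host $host;    # preserve the original Host header
    }
}
Running tail -f /var/log/nginx/api.log then shows every GET or POST with its query string as it happens.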
EDIT 2:
If you use a command line such as:
ngrep -d any -vv -w byline | egrep -e "Host:" -e "GET" -e "POST" --line-buffered | perl -lne 'print $3.$2 if /(GET|POST) (.+?) HTTP\/1\.1\.\.Host: (.+?)\.\./'
you should see the actual URLs.
A simple solution is to modify your /etc/hosts file to intercept the API calls and redirect them to your own web server. Note that the hosts file format puts the IP address first:
127.0.0.1 api.foobar.com
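You can confirm the override took effect before wiring anything else up (getent resolves through the same path most programs use):
# should now print: 127.0.0.1  api.foobar.com
getent hosts api.foobar.com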
I have a Linux-powered board on which I want to capture the telnet output to a file. I tried doing it as shown below:
telnet localhost xxxx >> /mnt/sd-xxx/log/file.txt &
as well as
telnet localhost xxxx | tee /mnt/sd-xxx/log/file.txt &
and
telnet localhost xxxx -f /mnt/sd-xxx/log/file.txt &
But it fails to go into the background in all of the above cases. I also tried putting it in a script, but that doesn't work either and the program crashes. How can I capture the telnet output and redirect it to a file while running it as a background process?
Thanks in Advance.
Have you tried curl?
curl telnet://localhost:xxxx >> /mnt/sd-xxx/log/file.txt &
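One assumption about why the original commands would not stay in the background: a backgrounded job that reads from the terminal is stopped with SIGTTIN, and both telnet and curl read stdin by default. Redirecting stdin should avoid that:
# </dev/null keeps the backgrounded job from being stopped when it tries to read the terminal
curl telnet://localhost:xxxx </dev/null >> /mnt/sd-xxx/log/file.txt 2>&1 &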