I want to sort and count how many times clients downloaded files (3 types) from my server.
I installed tshark and ran the following command, which should capture GET requests:
`./tshark 'tcp port 80 and (((ip[2:2] - ((ip[0]&0xf)<<2)) - ((tcp[12]&0xf0)>>2)) != 0)' -R'http.request.method == "GET"'`
The sniffer starts working and every second I get a new row; here is the result:
0.000000 144.137.136.253 -> 192.168.4.7 HTTP GET /pids/QE13_593706_0.bin HTTP/1.1
8.330354 1.1.1.1 -> 2.2.2.2 HTTP GET /pids/QE13_302506_0.bin HTTP/1.1
17.231572 1.1.1.2 -> 2.2.2.2 HTTP GET /pids/QE13_382506_0.bin HTTP/1.0
18.906712 1.1.1.3 -> 2.2.2.2 HTTP GET /pids/QE13_182406_0.bin HTTP/1.1
19.485199 1.1.1.4 -> 2.2.2.2 HTTP GET /pids/QE13_302006_0.bin HTTP/1.1
21.618113 1.1.1.5 -> 2.2.2.2 HTTP GET /pids/QE13_312106_0.bin HTTP/1.1
30.951197 1.1.1.6 -> 2.2.2.2 HTTP GET /nginx_status HTTP/1.1
31.056364 1.1.1.7 -> 2.2.2.2 HTTP GET /nginx_status HTTP/1.1
37.578005 1.1.1.8 -> 2.2.2.2 HTTP GET /pids/QE13_332006_0.bin HTTP/1.1
40.132006 1.1.1.9 -> 2.2.2.2 HTTP GET /pids/PE_332006.bin HTTP/1.1
40.407742 1.1.2.1 -> 2.2.2.2 HTTP GET /pids/QE13_452906_0.bin HTTP/1.1
What do I need to do to store the result (the requested file and a count per type, like /pids/*****.bin) into another file?
I'm not strong in Linux, but I'm sure it can be done with 1-3 lines of script.
Maybe with awk, but I don't know how to read the output of the sniffer.
Thank you.
Can't you just grep the log file of your web server?
Anyway, to extract the captured HTTP traffic lines related to your server files, just try:
./tshark 'tcp port 80 and \
(((ip[2:2] - ((ip[0]&0xf)<<2)) - ((tcp[12]&0xf0)>>2)) != 0)' \
-R'http.request.method == "GET"' | \
egrep "HTTP GET /pids/.*\.bin"
I'm trying to make a GET request to an old Linux machine using cURL inside WSL2/Debian. The connection between my Windows PC and the remote Linux machine is via VPN. The VPN is working, as I can ping the IP as well as VNC to it (from Windows).
The curl command I'm using on WSL2/Debian is:
curl -k --header 'Host: 10.48.1.3' --max-time 4 --location --request GET 'https://10.48.1.3/path/to/API/json/get?id=Parameter1&id=Parameter2'
Using the verbose option, I get:
Note: Unnecessary use of -X or --request, GET is already inferred.
* Expire in 0 ms for 6 (transfer 0x555661293fb0)
* Expire in 4000 ms for 8 (transfer 0x555661293fb0)
* Trying 10.48.1.3...
* TCP_NODELAY set
* Expire in 200 ms for 4 (transfer 0x555661293fb0)
* Connection timed out after 4001 milliseconds
* Closing connection 0
curl: (28) Connection timed out after 4001 milliseconds
After the max-time of 4 seconds the command is cancelled.
When I execute the same command on the same computer but using Windows PowerShell, it works:
curl.exe -k --max-time 4 --location --request GET 'https://10.48.1.3/path/to/API/json/get?id=Parameter1&id=Parameter2' -v
Note: Unnecessary use of -X or --request, GET is already inferred.
* Trying 10.48.1.3:443...
* Connected to 10.48.1.3 (10.48.1.3) port 443 (#0)
* schannel: disabled automatic use of client certificate
* schannel: using IP address, SNI is not supported by OS.
* schannel: ALPN, offering http/1.1
* ALPN, server did not agree to a protocol
> GET /path/to/API/json/get?id=Parameter1&id=Parameter2 HTTP/1.1
> User-Agent: curl/7.79.1
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Date: Tue, 17 May 2022 15:39:00 GMT
< Server: IosHttp/0.0.1 (MANUFACTURER)
< Content-Type: application/json
< Content-Length: 239
< Strict-Transport-Security: max-age=15768000
<
PAYLOAD OF API
* Connection #0 to host 10.48.1.3 left intact
Using Postman inside Windows works also.
Inside WSL2/Debian I'm able to ping the machine, but ssh is not working either; the cursor just stays there blinking without getting any answer back from the remote machine:
$ ping 10.48.1.3 -c 2
PING 10.48.1.3 (10.48.1.3) 56(84) bytes of data.
64 bytes from 10.48.1.3: icmp_seq=1 ttl=61 time=48.9 ms
64 bytes from 10.48.1.3: icmp_seq=2 ttl=61 time=28.4 ms
--- 10.48.1.3 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 2ms
rtt min/avg/max/mdev = 28.353/38.636/48.919/10.283 ms
$ ssh -c aes256-cbc root@10.48.1.3
^C # Cancelled as nothing happened for several minutes
On Windows PowerShell, both ping and ssh work:
> ssh -c aes256-cbc root@10.48.1.3
The authenticity of host '10.48.1.3 (10.48.1.3)' can't be established.
ECDSA key fingerprint is SHA256:FINGERPRINTDATAOFMACHINE.
Are you sure you want to continue connecting (yes/no/[fingerprint])?
I have about 100 similar machines in the field that I need to cURL, and this error occurs on about 10 of them; the other 90 work fine (also from WSL2/Debian).
I guess the error may come from the SSL version on my WSL2/Debian... Does anyone have an idea how to solve this problem?
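One way to narrow this down (a sketch, assuming nc and openssl are available inside WSL2) is to test the plain TCP connection and the TLS handshake separately, since the verbose output above times out before any TLS negotiation starts:
nc -vz -w 4 10.48.1.3 443                         # does a bare TCP connection to port 443 succeed?
openssl s_client -connect 10.48.1.3:443 -brief    # if it does, does the TLS handshake complete?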
I want to allow only HTTP(S) requests to my server that come from Cloudflare. I think the best way to do that is to have a script that runs once every day; its job will be to collect all IP addresses from https://www.cloudflare.com/ips-v4 and https://www.cloudflare.com/ips-v6 and add them to a whitelist (if not added already). The only problem is that I don't know how to write this script, so if anyone can give me some guidelines or a link to a tutorial, I would appreciate it. (You can also write it yourself if you have free time, I don't mind.)
My Server Configuration:
OpenLiteSpeed,
Cyberpanel,
AlmaLinux
EDIT
In the meantime I managed (I think) to somehow make it work. I created a bash script, cloudflare.sh, with the following content:
#!/bin/sh
for i in `curl https://www.cloudflare.com/ips-v4`; do iptables -I INPUT -p tcp -m multiport --dports http,https -s $i -j ACCEPT; done
for i in `curl https://www.cloudflare.com/ips-v6`; do ip6tables -I INPUT -p tcp -m multiport --dports http,https -s $i -j ACCEPT; done
iptables -A INPUT -p tcp -m multiport --dports http,https -j DROP
ip6tables -A INPUT -p tcp -m multiport --dports http,https -j DROP
Then I set up a cron job to run it every day at 10 AM.
0 10 * * * /home/cloudflare.sh >/dev/null 2>&1
I have done this with the help of Google, so can you just tell me whether this script will create duplicate IP address entries each time it's executed?
Actually, for your goal, here is a simpler solution.
A request that comes through Cloudflare will always contain certain CF headers,
e.g.
Connection: Keep-Alive
Accept-Encoding: gzip
CF-IPCountry: BR
X-Forwarded-For: xxxxx
CF-RAY: 56fc3a8f9ccbf203-EWR
Content-Length: 345
X-Forwarded-Proto: https
CF-Visitor: {"scheme":"https"}
origin: https://www.google.com
user-agent: Mozilla/5.0 (Windows NT 6.0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2623.112 Safari/537.36
content-type: application/x-www-form-urlencoded
accept: */*
referer: https://www.google.com/
accept-language: pt-BR,pt;q=0.8,en-US;q=0.6,en;q=0.4
cookie: xxxx
CF-Connecting-IP: xxxx
Then you can use one of these headers to create a rewrite rule that returns 403 for any request that doesn't have it.
e.g.
RewriteCond %{HTTP:CF-IPCountry} !^[A-Z]{2} [NC]
RewriteRule .* - [F,L]
The first line checks whether the header CF-IPCountry exists and matches a two-capital-letter country code, like US, UK, ES, FR, etc.
If it doesn't match, the Forbid flag returns a 403 response.
The drawback of this approach is that the header might be faked.
Otherwise you may need to go the "hard" way and manipulate the OpenLiteSpeed conf file or the firewall conf file to add/remove IPs, as sketched below.
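For that firewall route, a rough sketch (same iptables/ip6tables tools as the script in the question; the chain name CLOUDFLARE is just an illustrative choice) that avoids piling up duplicate rules on every run:
#!/bin/sh
# create the dedicated chains once (ignore the error if they already exist), then empty them
iptables  -N CLOUDFLARE 2>/dev/null; iptables  -F CLOUDFLARE
ip6tables -N CLOUDFLARE 2>/dev/null; ip6tables -F CLOUDFLARE
# refill the chains with the current Cloudflare ranges
for i in $(curl -s https://www.cloudflare.com/ips-v4); do
    iptables -A CLOUDFLARE -s "$i" -j ACCEPT
done
for i in $(curl -s https://www.cloudflare.com/ips-v6); do
    ip6tables -A CLOUDFLARE -s "$i" -j ACCEPT
done
# the INPUT rules below are added once, outside the daily cron job
# (and the ip6tables equivalents likewise):
#   iptables -I INPUT -p tcp -m multiport --dports http,https -j CLOUDFLARE
#   iptables -A INPUT -p tcp -m multiport --dports http,https -j DROP
Because the CLOUDFLARE chain is flushed before it is refilled, running this from cron every day does not create duplicate entries, unlike inserting into INPUT directly.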
I have a list of banners which are in this format:
Hostname: []
IP: xxx.xxx.xxx.xxx
Port: xx
HTTP/1.0 301 Moved Permanently
Location: /login.html
Content-Type: text/html
Device-Access-Level: 255
Content-Length: 3066
Cache-Control: max-age=7200, must-revalidate
I have used the following grep statement in order to grab the IP:
grep -E -o "([0-9]{1,3}[\.]){3}[0-9]{1,3}"
What do I have to add to the statement in order to also grab the port (while still getting the IP)?
Thank you for the answers!
Why not use awk?
awk '/IP:/ {ip=$2} /Port:/ {print ip,$2}' file
When it finds a line with IP:, it stores the IP in the variable ip.
When it finds the port, it prints the IP and the port number.
Example
cat file
Hostname: []
IP: 163.248.1.20
Port: 843
HTTP/1.0 301 Moved Permanently
Location: /login.html
Content-Type: text/html
Device-Access-Level: 255
Content-Length: 3066
Cache-Control: max-age=7200, must-revalidate
awk '/IP:/ {ip=$2} /Port:/ {print ip,$2}' file
163.248.1.20 843
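If you prefer to stay with grep, one possible extension of the original pattern (just a sketch; note that plain grep leaves the IP and port on separate output lines, which is why the awk version above is handier):
grep -E -o "([0-9]{1,3}\.){3}[0-9]{1,3}|Port: [0-9]+" file
163.248.1.20
Port: 843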
I have a server running on a Linux/Debian machine. I can GET/PUT correctly from within the same machine.
$ curl -v -X PUT -d "blabla" 127.0.1.1:5678
* About to connect() to 127.0.1.1 port 5678 (#0)
* Trying 127.0.1.1...
* connected
* Connected to 127.0.1.1 (127.0.1.1) port 5678 (#0)
> PUT / HTTP/1.1
> User-Agent: curl/7.26.0
> Host: 127.0.1.1:5678
> Accept: */*
> Content-Length: 6
> Content-Type: application/x-www-form-urlencoded
>
* upload completely sent off: 6 out of 6 bytes
* additional stuff not fine transfer.c:1037: 0 0
* HTTP 1.1 or later with persistent connection, pipelining supported
< HTTP/1.1 405 Unsupported request method: "PUT"
< Connection: close
< Server: My Server v1.0.0
<
* Closing connection #0
However if I try from another machine (same local network), here is what it says:
$ curl -v -X PUT -d "blabla" 192.168.0.21:5678
* About to connect() to 192.168.0.21 port 5678 (#0)
* Trying 192.168.0.21...
* Connection refused
* couldn't connect to host
* Closing connection #0
curl: (7) couldn't connect to host
From the server side, no firewall is running:
$ sudo iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Here is what netstat reveals:
$ netstat -alnp | grep 5678
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
tcp 0 0 127.0.1.1:5678 0.0.0.0:* LISTEN -
Is there a way to debug what could be going on?
The webapp is listening on 127.0.1.1, which is a loopback address. In order to be accessible from outside, it would have to be listening on 192.168.0.21:5678 or *:5678, which means all interfaces.
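A quick way to confirm that on the server itself (a sketch; ss comes with iproute2, and port 5678 is taken from the output above):
ss -tlnp | grep 5678
# 127.0.1.1:5678         -> reachable only from the machine itself
# 0.0.0.0:5678 or *:5678 -> reachable from the rest of the LAN as well
The actual change depends on how "My Server" picks its bind address; whatever listen/bind setting it has should point at 0.0.0.0 (or the machine's LAN address) instead of a loopback address.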
I had to comment out the following line:
$ cat /etc/hosts
#127.0.1.1 [...]
This is similar to my Debian squeeze setup and appears to be working as expected. I have no clue why this extra line messed things up so badly. Apparently this is due to an issue in a GNOME package, but this server does not even have X installed.
When I try to ping or retrieve an invalid domain, I get redirected to the default domain on my local server.
For example, trying to ping www.invaliddomainnameexample.com from my server s1.mylocaldomain.com:
~: ping www.invaliddomainnameexample.com
PING www.invaliddomainnameexample.com.mylocaldomain.com (67.877.87.128) 56(84) bytes of data.
64 bytes from mylocaldomain.com (67.877.87.128): icmp_seq=1 ttl=64 time=0.040 ms
64 bytes from mylocaldomain.com (67.877.87.128): icmp_seq=2 ttl=64 time=0.039 ms
Or using curl:
~: curl -I www.invaliddomainnameexample.com
HTTP/1.1 301 Moved Permanently
Date: Mon, 26 Nov 2012 16:09:57 GMT
Content-Type: text/html
Content-Length: 223
Connection: keep-alive
Keep-Alive: timeout=10
Location: http://mylocaldomain.com/
My resolv.conf:
~: cat /etc/resolv.conf
nameserver 8.8.8.8
nameserver 8.8.4.4
Could it be that your /etc/resolv.conf also contains a
search mylocaldomain.com
statement and there's a "*" DNS A RR for your domain?
Because then the searchlist is applied, the * record matches, and voilà!
Try ping www.invaliddomainnameexample.com. with a dot appended, to mark the domain name as an FQDN, which prevents the searchlist from being applied.
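A quick way to verify both parts of that guess (a sketch; it assumes dig from dnsutils is installed, and note that dig only applies the searchlist when +search is given):
dig +short +search www.invaliddomainnameexample.com   # with the searchlist: the wildcard record answers
dig +short www.invaliddomainnameexample.com.          # as FQDN: should return nothing (NXDOMAIN)
dig +short '*.mylocaldomain.com'                      # does a wildcard A record exist for the domain?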
It looks like the only way to fix this is to disallow unknown hosts from being processed by the HTTP server, though I only did that for local IPs.
I use Nginx, so the config would be:
# list of server and local IPs
geo $local {
    default       0;
    34.56.23.0/21 1;
    127.0.0.1/32  1;
}

# deny access to any other host
server {
    listen 80 default_server;
    server_name _;

    if ( $local = 1 ) {
        return 405;
    }

    rewrite ^(.*) http://mylocaldomain.com$1 permanent;
}