I'm trying to use this project to integrate WebDAV into my .NET MVC2 application.
I've traced the traffic from Office to my WebDAV server and compared it to this example of how Office determines whether the document should be opened read-only or for editing.
After Office successfully authenticates with the server I see these requests as the document is opening.
2014-07-22 18:41:36 127.0.0.1 OPTIONS / - 80 username#mydomain.com 127.0.0.1 Microsoft+Office+Protocol+Discovery 200 0 0 23
2014-07-22 18:41:36 127.0.0.1 OPTIONS /wordstorage - 80 username#mydomain.com 127.0.0.1 Microsoft-WebDAV-MiniRedir/6.1.7601 200 0 0 5
2014-07-22 18:41:36 127.0.0.1 PROPFIND /wordstorage - 80 username#mydomain.com 127.0.0.1 Microsoft-WebDAV-MiniRedir/6.1.7601 200 0 0 29
2014-07-22 18:41:36 127.0.0.1 PROPFIND /wordstorage - 80 username#mydomain.com 127.0.0.1 Microsoft-WebDAV-MiniRedir/6.1.7601 200 0 0 10
2014-07-22 18:41:36 127.0.0.1 OPTIONS / - 80 - 127.0.0.1 Microsoft-WebDAV-MiniRedir/6.1.7601 403 0 0 7
2014-07-22 18:41:36 127.0.0.1 PROPFIND /wordstorage - 80 - 127.0.0.1 Microsoft-WebDAV-MiniRedir/6.1.7601 302 0 0 9
2014-07-22 18:41:36 127.0.0.1 PROPFIND /Account/LogOn ReturnUrl=%2fwordstorage 80 - 127.0.0.1 Microsoft-WebDAV-MiniRedir/6.1.7601 200 0 0 29
2014-07-22 18:42:25 127.0.0.1 PROPFIND /wordstorage - 80 username#mydomain.com 127.0.0.1 Microsoft-WebDAV-MiniRedir/6.1.7601 200 0 0 33
2014-07-22 18:42:25 127.0.0.1 PROPFIND /wordstorage - 80 username#mydomain.com 127.0.0.1 Microsoft-WebDAV-MiniRedir/6.1.7601 200 0 0 6
2014-07-22 18:42:59 127.0.0.1 GET /wordstorage/Test-2.docx - 80 username#mydomain.com 127.0.0.1 Mozilla/4.0+(compatible;+MSIE+7.0;+Windows+NT+6.1;+WOW64;+Trident/7.0;+SLCC2;+.NET+CLR+2.0.50727;+.NET+CLR+3.5.30729;+.NET+CLR+3.0.30729;+Media+Center+PC+6.0;+.NET4.0C;+.NET4.0E;+InfoPath.2;+IPH+1.1.21.4019;+MSOffice+12) 200 0 0 37
2014-07-22 18:42:59 127.0.0.1 HEAD /wordstorage/Test-2.docx - 80 username#mydomain.com 127.0.0.1 Microsoft+Office+Existence+Discovery 200 0 0 186
The first two OPTIONS and PROPFIND requests return 200 OK, but the third OPTIONS request is denied with a 403 Forbidden.
If authentication is successful, why would MiniRedir not send authentication with the OPTIONS request?
Here's my environment:
Win 7
Office 2007
IIS 7.5
Have you checked whether the IIS WebDAV module is disabled?
It seems that it can cause problems if it is not disabled.
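If the module does turn out to be active, a common way to disable it for a single MVC application is through web.config; the snippet below is only a sketch of that approach (the module and handler names are the standard IIS ones, not something taken from the original question):

<system.webServer>
  <!-- Remove IIS's built-in WebDAV module and handler so they do not
       intercept the WebDAV verbs meant for the application's own handler. -->
  <modules>
    <remove name="WebDAVModule" />
  </modules>
  <handlers>
    <remove name="WebDAV" />
  </handlers>
</system.webServer>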
I'm trying to make a GET request to an old Linux machine using cURL inside WSL2/Debian. The connection between my Windows PC and the remote Linux machine goes over a VPN. The VPN is working, since I can ping the IP as well as VNC to it (from Windows).
The curl command I'm using on WSL2/Debian is:
curl -k --header 'Host: 10.48.1.3' --max-time 4 --location --request GET 'https://10.48.1.3/path/to/API/json/get?id=Parameter1&id=Parameter2'
Using the verbose option, I get:
Note: Unnecessary use of -X or --request, GET is already inferred.
* Expire in 0 ms for 6 (transfer 0x555661293fb0)
* Expire in 4000 ms for 8 (transfer 0x555661293fb0)
* Trying 10.48.1.3...
* TCP_NODELAY set
* Expire in 200 ms for 4 (transfer 0x555661293fb0)
* Connection timed out after 4001 milliseconds
* Closing connection 0
curl: (28) Connection timed out after 4001 milliseconds
After the max-time of 4 seconds, the command is cancelled.
When I execute the same command on the same computer, but using Windows PowerShell, it works:
curl.exe -k --max-time 4 --location --request GET 'https://10.48.1.3/path/to/API/json/get?id=Parameter1&id=Parameter2' -v
Note: Unnecessary use of -X or --request, GET is already inferred.
* Trying 10.48.1.3:443...
* Connected to 10.48.1.3 (10.48.1.3) port 443 (#0)
* schannel: disabled automatic use of client certificate
* schannel: using IP address, SNI is not supported by OS.
* schannel: ALPN, offering http/1.1
* ALPN, server did not agree to a protocol
> GET /path/to/API/json/get?id=Parameter1&id=Parameter2 HTTP/1.1
> Host: 10.48.1.3
> User-Agent: curl/7.79.1
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Date: Tue, 17 May 2022 15:39:00 GMT
< Server: IosHttp/0.0.1 (MANUFACTURER)
< Content-Type: application/json
< Content-Length: 239
< Strict-Transport-Security: max-age=15768000
<
PAYLOAD OF API
* Connection #0 to host 10.48.1.3 left intact
Using Postman inside Windows works also.
Inside WSL2/Debian I'm able to ping the machine, but ssh does not work either; the cursor just sits there blinking without any answer coming back from the remote machine:
$ ping 10.48.1.3 -c 2
PING 10.48.1.3 (10.48.1.3) 56(84) bytes of data.
64 bytes from 10.48.1.3: icmp_seq=1 ttl=61 time=48.9 ms
64 bytes from 10.48.1.3: icmp_seq=2 ttl=61 time=28.4 ms
--- 10.48.1.3 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 2ms
rtt min/avg/max/mdev = 28.353/38.636/48.919/10.283 ms
$ ssh -c aes256-cbc root@10.48.1.3
^C # Cancelled as nothing happened for several minutes
On Windows PowerShell, both ping and ssh work:
> ssh -c aes256-cbc root@10.48.1.3
The authenticity of host '10.48.1.3 (10.48.1.3)' can't be established.
ECDSA key fingerprint is SHA256:FINGERPRINTDATAOFMACHINE.
Are you sure you want to continue connecting (yes/no/[fingerprint])?
I have about 100 similar machines in the field that I need to reach with cURL; this error shows up on about 10 of them, while the other 90 work fine (also from WSL2/Debian).
I guess the error may come from the SSL version on my WSL2/Debian setup... Does anyone have an idea how to solve this problem?
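Since ICMP succeeds from WSL2 but both HTTPS and SSH (plain TCP) hang, one way to narrow this down is to test a bare TCP handshake from inside WSL2. This is just an illustrative check, assuming netcat is installed in the Debian guest (it is not part of the original question):

# Try a raw TCP connection to port 443 from WSL2; a timeout here would mean
# TCP traffic itself (not curl or TLS) is being dropped on the VPN path.
nc -vz -w 4 10.48.1.3 443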
How do I find what process/application is running an HTTP server on a machine? All the usual tools (netstat, lsof, fuser, ss) aren't helping in this instance:
vinayb#carbon ~ $ sudo fuser 80/tcp
vinayb#carbon ~ $ sudo ss -pt state listening 'sport = :80'
Recv-Q Send-Q Local Address:Port Peer Address:Port Process
vinayb#carbon ~ $ curl http://localhost:80
404 page not found
vinayb#carbon ~ $ curl -vv http://localhost:80
* Trying 127.0.0.1:80...
* Connected to localhost (127.0.0.1) port 80 (#0)
> GET / HTTP/1.1
> Host: localhost
> User-Agent: curl/7.73.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 404 Not Found
< Content-Type: text/plain; charset=utf-8
< Vary: Accept-Encoding
< X-Content-Type-Options: nosniff
< Date: Sat, 27 Feb 2021 12:45:05 GMT
< Content-Length: 19
<
Using netstat usually helps in this case, i.e. netstat -tupan.
It is best executed as root; that will give you a nice list, such as:
tom:~/ $ sudo netstat -tupan
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 1450/master
tcp 0 0 0.0.0.0:445 0.0.0.0:* LISTEN 1764/smbd
tcp 0 0 0.0.0.0:139 0.0.0.0:* LISTEN 1764/smbd
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1106/sshd
...
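As a usage note (not part of the original answer), the same listing can be narrowed down to whatever is bound to port 80, for example:

# Keep only listening sockets whose local address ends in :80
# (column layout as in the netstat -tupan output above).
sudo netstat -tupan | awk '$4 ~ /:80$/ && $6 == "LISTEN"'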
This is the code I am using to save files from a camera and name them from 0001 onward. The camera is running Busybox, and it has an ash shell inside.
The code is based on a previous answer by Charles Duffy here.
#!/bin/sh
# Snapshot script
cd /mnt/0/foto
sleep 1
set -- *.jpg # put the sorted list of picture filenames on argv (the number of files in the list can be shown with: echo $#)
while [ $# -gt 1 ]; do # as long as there's more than one...
shift # ...arguments are shifted away until only the last one remains
done
if [ "$1" = "*.jpg" ]; then # If cycle to determine if argv is empty because there is no jpg file present in the dir. #argv is set so that following cmds can start the sequence from 0 on.
set -- snapfull0000.jpg
else
echo "Piu' di un file jpg trovato."
fi
num=${1#*snapfull} # $1 is the only remaining argument; remove the alphabetical part of the filename.
num=${num%.*} # remove the .jpg suffix from the name.
num=$(printf "%04d" "$(($num + 1))") # increment the number and pad it back to four digits with leading zeroes
# echoes for debug
echo "variabile num="$num # shows the number recognized in the latest filename
echo "\$#="$# # displays num of argv variables
echo "\$1="$1 # displays the first arg variable
wget http://127.0.0.1/snapfull.php -O "snapfull${num}.jpg" # request the snapshot from the camera, saving it under the sequential jpeg filename
This is what I get on the command line while running the script. I ran the script manually nine times, but after saving file snapfull0008.jpg, as you can see in the last lines, the next file is named snapfull0000.jpg again.
# ./snap4.sh
variable num=0001
$#=1
$1=snapfull0000.jpg
Connecting to 127.0.0.1 (127.0.0.1:80)
127.0.0.1 127.0.0.1 - [05/Dec/2014:20:22:22 +0000] "GET /snapfull.php HTTP/1.1" 302 0 "-" "Wget"
snapfull0001.jpg 100% |*******************************| 246k --:--:-- ETA
# ./snap4.sh
More than a jpg file found.
variable num=0002
$#=1
$1=snapfull0001.jpg
Connecting to 127.0.0.1 (127.0.0.1:80)
127.0.0.1 127.0.0.1 - [05/Dec/2014:20:22:32 +0000] "GET /snapfull.php HTTP/1.1" 302 0 "-" "Wget"
snapfull0002.jpg 100% |*******************************| 249k --:--:-- ETA
# ./snap4.sh
More than a jpg file found.
variable num=0003
$#=1
$1=snapfull0002.jpg
Connecting to 127.0.0.1 (127.0.0.1:80)
127.0.0.1 127.0.0.1 - [05/Dec/2014:20:22:38 +0000] "GET /snapfull.php HTTP/1.1" 302 0 "-" "Wget"
snapfull0003.jpg 100% |*******************************| 248k --:--:-- ETA
# ./snap4.sh
More than a jpg file found.
variable num=0004
$#=1
$1=snapfull0003.jpg
Connecting to 127.0.0.1 (127.0.0.1:80)
127.0.0.1 127.0.0.1 - [05/Dec/2014:20:22:43 +0000] "GET /snapfull.php HTTP/1.1" 302 0 "-" "Wget"
snapfull0004.jpg 100% |*******************************| 330k --:--:-- ETA
# ./snap4.sh
More than a jpg file found.
variable num=0005
$#=1
$1=snapfull0004.jpg
Connecting to 127.0.0.1 (127.0.0.1:80)
127.0.0.1 127.0.0.1 - [05/Dec/2014:20:22:51 +0000] "GET /snapfull.php HTTP/1.1" 302 0 "-" "Wget"
snapfull0005.jpg 100% |*******************************| 308k --:--:-- ETA
# ./snap4.sh
More than a jpg file found.
variable num=0006
$#=1
$1=snapfull0005.jpg
Connecting to 127.0.0.1 (127.0.0.1:80)
127.0.0.1 127.0.0.1 - [05/Dec/2014:20:22:55 +0000] "GET /snapfull.php HTTP/1.1" 302 0 "-" "Wget"
snapfull0006.jpg 100% |*******************************| 315k --:--:-- ETA
# ./snap4.sh
More than a jpg file found.
variable num=0007
$#=1
$1=snapfull0006.jpg
Connecting to 127.0.0.1 (127.0.0.1:80)
127.0.0.1 127.0.0.1 - [05/Dec/2014:20:22:59 +0000] "GET /snapfull.php HTTP/1.1" 302 0 "-" "Wget"
snapfull0007.jpg 100% |*******************************| 316k --:--:-- ETA
# ./snap4.sh
More than a jpg file found.
variable num=0008
$#=1
$1=snapfull0007.jpg
Connecting to 127.0.0.1 (127.0.0.1:80)
127.0.0.1 127.0.0.1 - [05/Dec/2014:20:23:04 +0000] "GET /snapfull.php HTTP/1.1" 302 0 "-" "Wget"
snapfull0008.jpg 100% |*******************************| 317k --:--:-- ETA
# ./snap4.sh
More than a jpg file found.
variable num=0000
$#=1
$1=snapfull0008.jpg
Connecting to 127.0.0.1 (127.0.0.1:80)
127.0.0.1 127.0.0.1 - [05/Dec/2014:20:23:10 +0000] "GET /snapfull.php HTTP/1.1" 302 0 "-" "Wget"
snapfull0000.jpg 100% |*******************************| 318k --:--:-- ETA
What could be the cause of the sequence stopping after file number 8?
The problem is that leading 0s cause a number to be read as octal.
In bash, using $((10#$num)) will force decimal. Thus:
num=$(printf "%04d" "$((10#$num + 1))")
To make this work with busybox ash, you'll need to strip the leading 0s. One way to do this that works even in busybox ash:
while [ "${num:0:1}" = 0 ]; do
num=${num:1}
done
num=$(printf '%04d' "$((num + 1))")
See the below transcript showing use (tested with ash from busybox v1.22.1):
$ num=0008
$ while [ "${num:0:1}" = 0 ]; do
> num=${num:1}
> done
$ num=$(printf '%04d' "$((num + 1))")
$ echo "$num"
0009
If your shell doesn't support even the baseline set of parameter expansions required by POSIX, you could instead end up using:
num=$(echo "$num" | sed -e 's/^0*//')
num=$(printf '%04d' "$(($num + 1))")
...though this would imply that your busybox was built with a shell other than ash, a decision I would strongly suggest reconsidering.
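For completeness, here is a sketch of how the leading-zero fix might be folded back into the original snapshot script (busybox ash; paths, filenames and URL are taken from the question, and this is an illustration rather than a tested replacement):

#!/bin/sh
# Sketch only: the question's snapshot script with decimal-safe numbering.
cd /mnt/0/foto || exit 1
sleep 1
set -- *.jpg                    # sorted list of jpg filenames on argv
while [ $# -gt 1 ]; do
    shift                       # keep only the last (highest-numbered) file
done
if [ "$1" = "*.jpg" ]; then
    set -- snapfull0000.jpg     # no jpg present: start the sequence from 0
fi
num=${1#*snapfull}              # strip the alphabetical prefix
num=${num%.*}                   # strip the .jpg suffix
while [ "${num:0:1}" = 0 ]; do  # drop leading zeros so e.g. 0008 is not read as octal
    num=${num:1}
done
[ -n "$num" ] || num=0          # guard against an all-zero name
num=$(printf '%04d' "$((num + 1))")
wget http://127.0.0.1/snapfull.php -O "snapfull${num}.jpg"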
I want to sort and count how many times clients downloaded files (3 types) from my server.
I installed tshark and ran the following command, which should capture GET requests:
`./tshark 'tcp port 80 and (((ip[2:2] - ((ip[0]&0xf)<<2)) - ((tcp[12]&0xf0)>>2)) != 0)' -R'http.request.method == "GET"'`
The sniffer starts working, and every second I get a new row; here is the result:
0.000000 144.137.136.253 -> 192.168.4.7 HTTP GET /pids/QE13_593706_0.bin HTTP/1.1
8.330354 1.1.1.1 -> 2.2.2.2 HTTP GET /pids/QE13_302506_0.bin HTTP/1.1
17.231572 1.1.1.2 -> 2.2.2.2 HTTP GET /pids/QE13_382506_0.bin HTTP/1.0
18.906712 1.1.1.3 -> 2.2.2.2 HTTP GET /pids/QE13_182406_0.bin HTTP/1.1
19.485199 1.1.1.4 -> 2.2.2.2 HTTP GET /pids/QE13_302006_0.bin HTTP/1.1
21.618113 1.1.1.5 -> 2.2.2.2 HTTP GET /pids/QE13_312106_0.bin HTTP/1.1
30.951197 1.1.1.6 -> 2.2.2.2 HTTP GET /nginx_status HTTP/1.1
31.056364 1.1.1.7 -> 2.2.2.2 HTTP GET /nginx_status HTTP/1.1
37.578005 1.1.1.8 -> 2.2.2.2 HTTP GET /pids/QE13_332006_0.bin HTTP/1.1
40.132006 1.1.1.9 -> 2.2.2.2 HTTP GET /pids/PE_332006.bin HTTP/1.1
40.407742 1.1.2.1 -> 2.2.2.2 HTTP GET /pids/QE13_452906_0.bin HTTP/1.1
What do I need to do to store the result type and count, like /pids/*****.bin, into another file?
I'm not strong in Linux, but I'm sure it can be done with 1-3 lines of script.
Maybe with awk, but I don't know the technique for reading the sniffer's output.
Thank you,
Can't you just grep the log file of your webserver?
Anyway, to extract the lines of captured HTTP traffic that relate to your server files, just try:
./tshark 'tcp port 80 and \
(((ip[2:2] - ((ip[0]&0xf)<<2)) - ((tcp[12]&0xf0)>>2)) != 0)' \
-R'http.request.method == "GET"' | \
egrep "HTTP GET /pids/.*.bin"
I'm trying to access HTTPS sites through Squid proxy 3.1.14 on an Ubuntu server, but I don't know why I can't. Here is my squid -v output:
Squid Cache: Version 3.1.14
configure options: '--build=i686-linux-gnu' '--prefix=/usr' '--includedir=${prefix}/include' '--mandir=${prefix}/share/man' '--infodir=${prefix}/share/info' '--sysconfdir=/etc' '--localstatedir=/var' '--libexecdir=${prefix}/lib/squid3' '--srcdir=.' '--disable-maintainer-mode' '--disable-dependency-tracking' '--disable-silent-rules' '--datadir=/usr/share/squid3' '--sysconfdir=/etc/squid3' '--mandir=/usr/share/man' '--with-cppunit-basedir=/usr' '--enable-inline' '--enable-ssl' '--enable-async-io=8' '--enable-storeio=ufs,aufs,diskd' '--enable-removal-policies=lru,heap' '--enable-delay-pools' '--enable-cache-digests' '--enable-underscores' '--enable-icap-client' '--enable-follow-x-forwarded-for' '--enable-auth=basic,digest,ntlm,negotiate' '--enable-basic-auth-helpers=LDAP,MSNT,NCSA,PAM,SASL,SMB,YP,DB,POP3,getpwnam,squid_radius_auth,multi-domain-NTLM' '--enable-ntlm-auth-helpers=smb_lm,' '--enable-digest-auth-helpers=ldap,password' '--enable-negotiate-auth-helpers=squid_kerb_auth' '--enable-external-acl-helpers=ip_user,ldap_group,session,unix_group,wbinfo_group' '--enable-arp-acl' '--enable-esi' '--enable-zph-qos' '--disable-translation' '--with-logdir=/var/log/squid3' '--with-pidfile=/var/run/squid3.pid' '--with-filedescriptors=65536' '--with-large-files' '--with-default-user=proxy' '--enable-linux-netfilter' 'build_alias=i686-linux-gnu' 'CFLAGS=-g -O2 -g -O2 -Wall' 'LDFLAGS=-Wl,-Bsymbolic-functions' 'CPPFLAGS=' 'CXXFLAGS=-g -O2 -g -O2 -Wall' --with-squid=/etc/squid3/squid3-3.1.14
And Here is my squid.conf:
http_port 3124
cache_mem 256 MB
maximum_object_size_in_memory 10 MB
maximum_object_size 100 MB
minimum_object_size 0 KB
cache_swap_low 90
cache_swap_high 95
cache_dir diskd /cache/squid1 5000 16 256
cache_dir diskd /cache/squid2 5000 16 256
cache_dir diskd /cache/squid3 5000 16 256
cache_dir diskd /cache/squid4 5000 16 256
cache_dir diskd /cache/squid5 5000 16 256
cache_dir diskd /cache/squid6 5000 16 256
cache_dir diskd /cache/squid7 5000 16 256
access_log /var/log/squid3/access.log squid
cache_peer x.x.x.x parent 3124 0 no-query login=PASS default no-digest
memory_replacement_policy lru
cache_replacement_policy lru
cache_store_log /var/log/squid3/store.log
emulate_httpd_log on
cache_log /var/log/squid3/cache.log
debug_options ALL,2
coredump_dir /var/spool/squid3
minimum_expiry_time 120 seconds
cache_mgr nutel.rn@dprf.gov.br
cache_effective_user squid
cache_effective_group squid
cachemgr_passwd 1234567890 all
refresh_pattern -i ([^.]+.|)jre-6u31-linux-i586\.bin 1440 50% 9999 override-expire ignore-no-cache ignore-no-store ignore-private
refresh_pattern -i exe$ 1440 50% 9999 override-expire ignore-no-cache ignore-no-store ignore-private
refresh_pattern -i com$ 1440 50% 9999 override-expire ignore-no-cache ignore-no-store ignore-private
refresh_pattern -i br$ 1440 50% 9999 override-expire ignore-no-cache ignore-no-store ignore-private
refresh_pattern -i [0-9]+$ 1440 50% 9999 override-expire ignore-no-cache ignore-no-store ignore-private
refresh_pattern -i AutoDL?BundleId=59620$ 1440 50% 9999 override-expire ignore-no-cache ignore-no-store ignore-private
refresh_pattern -i htm$ 1440 50% 9999 override-expire ignore-no-cache ignore-no-store ignore-private
refresh_pattern -i php$ 1440 50% 9999 override-expire ignore-no-cache ignore-no-store ignore-private
refresh_pattern -i html$ 1440 50% 9999 override-expire ignore-no-cache ignore-no-store ignore-private
refresh_pattern -i asp$ 1440 50% 9999 override-expire ignore-no-cache ignore-no-store ignore-private
refresh_pattern -i zip$ 0 50% 999999 ignore-reload override-lastmod override-expire reload-into-ims
refresh_pattern -i \.(mp3|mp4|m4a|ogg|mov|avi|wmv)$ 10080 90% 999999 ignore-no-cache override-expire ignore-private
refresh_pattern -i flv$ 0 50% 999999 ignore-reload override-lastmod override-expire reload-into-ims
refresh_pattern -i swf$ 0 50% 999999 ignore-reload override-lastmod override-expire reload-into-ims
refresh_pattern -i cab$ 0 50% 999999 ignore-reload override-lastmod override-expire reload-into-ims
refresh_pattern -i rar$ 0 50% 999999 ignore-reload override-lastmod override-expire reload-into-ims
refresh_pattern ^http:// 30 40% 20160
refresh_pattern ^ftp:// 30 50% 20160
refresh_pattern ^gopher:// 30 40% 20160
refresh_pattern . 1440 100% 1440 ignore-reload override-lastmod override-expire reload-into-ims
acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl SSL_ports port 443 563
acl cacic_ports port 20 21 22 3306 # cacic
acl Safe_ports port 80 23 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 563 # https, snews
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl purge method PURGE
acl CONNECT method CONNECT
http_access allow manager localhost
http_access deny manager
http_access allow purge localhost
http_access deny purge
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
#Cache videos youtube
acl youtube dstdomain .youtube.com
cache allow youtube
# Define the IP range of your internal network here
acl redelocal src x.x.x.x/24
cache allow redelocal
http_access allow redelocal
http_access allow localhost
http_access deny all
I've tried to access Gmail, Facebook, and so on; any site that uses HTTPS doesn't open, but any other site that doesn't use HTTPS opens perfectly.
What am I doing wrong?
Thanks for the help!!!
Everybody who has played with Squid on Ubuntu has probably encountered this problem.
The Ubuntu Squid packages have been compiled without the SSL option. Therefore, it is not possible to proxy HTTPS connections with Squid on Ubuntu Server.
Refer This
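As a quick client-side check (a sketch only; proxy-host stands in for the proxy's address and 3124 is the http_port from the question's squid.conf), one can watch how an HTTPS request fares when tunnelled through the proxy:

# Ask curl to tunnel an HTTPS request through the proxy and show the CONNECT exchange.
curl -v -x http://proxy-host:3124 https://www.google.com/ -o /dev/null
# A working setup typically logs a "CONNECT www.google.com:443" entry in /var/log/squid3/access.log.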