I have a very strange problem that has been bothering me for a long time, and I can't find any clue as to what's causing it.
I have an FTP root with a lot of directories that all work without any problem.
In one specific directory, I have a website containing several directories. In each of them I have no problem at the root level, but in the subdirectories I can only download in passive mode and upload in port mode, so I have to switch between modes every time in order to work.
+ Site1 > no problem
- directories > no problem
- subfolders > no problem
+ Site2 > no problem
- directories > no problem
- subfolders > DL in PASV, UL in PORT
Since all the sites are on the same IIS instance and all directories have the same permissions, I'm really stuck on what could explain this problem.
I would really appreciate it if someone who has encountered the same problem has found a solution and could share it...
Thank you :)
Edit: here is a trace of an attempt to download in PORT mode:
[15:56:28] [R] Connecting to x.x.x.x -> IP=x.x.x.x PORT=21
[15:56:28] [R] Connected to x.x.x.x
[15:56:28] [R] 220 Microsoft FTP Service
[15:56:28] [R] USER x
[15:56:28] [R] 331 Password required
[15:56:28] [R] PASS (hidden)
[15:56:28] [R] 230 User logged in.
[15:56:28] [R] SYST
[15:56:28] [R] 215 Windows_NT
[15:56:28] [R] FEAT
[15:56:28] [R] 211-Extended features supported:
[15:56:28] [R] LANG EN*
[15:56:28] [R] UTF8
[15:56:28] [R] AUTH TLS;TLS-C;SSL;TLS-P;
[15:56:28] [R] PBSZ
[15:56:28] [R] PROT C;P;
[15:56:28] [R] CCC
[15:56:28] [R] HOST
[15:56:28] [R] SIZE
[15:56:28] [R] MDTM
[15:56:28] [R] REST STREAM
[15:56:28] [R] 211 END
[15:56:28] [R] OPTS UTF8 ON
[15:56:28] [R] 200 OPTS UTF8 command successful - UTF8 encoding now ON.
[15:56:28] [R] PWD
[15:56:28] [R] 257 "/" is current directory.
[15:56:28] [R] CWD /site2/dir/subdir/
[15:56:28] [R] 250 CWD command successful.
[15:56:28] [R] PWD
[15:56:28] [R] 257 "/site2/dir/subdir" is current directory.
[15:56:28] [R] Listening on PORT: 59644, Waiting for connection.
[15:56:28] [R] PORT 192,168,212,170,232,252
[15:56:28] [R] 200 PORT command successful.
[15:56:28] [R] LIST -al
[15:56:28] [R] 125 Data connection already open; Transfer starting.
[15:56:28] [R] 226 Transfer complete.
[15:56:28] [R] List Complete: 5 KB in 0,05 seconds (5,3 KB/s)
[15:56:36] [R] TYPE A
[15:56:36] [R] 200 Type set to A.
[15:56:36] [R] SIZE file.doc
[15:56:36] [R] 213 155
[15:56:36] [R] MDTM file.doc
[15:56:36] [R] 213 20190221173054
[15:56:38] [R] PORT 192,168,212,170,232,253
[15:56:38] [R] 200 PORT command successful.
[15:56:38] [R] RETR file.doc
[15:56:38] [R] 125 Data connection already open; Transfer starting.
[15:56:38] [R] 550 The specified network name is no longer available.
[15:56:38] [R] Transfer Failed: file.doc
[15:56:38] Transfer queue completed
[15:56:38] Transferred 0 Files (0 bytes) in 2 seconds (0,0 KB/s)
[15:56:38] 1 File Failed
According to your error message, this may be related to antivirus interference.
If you have antivirus software enabled on the server, I suggest you try disabling it and test again.
Well, it's 2022 and httpd.conf no longer exists; it seems to have been split up into sites-available and conf-available. I can't figure it out, and I can't find any instructions on how to get a simple hello-world Perl script to run (it runs fine from the command line: "perl hw.pl").
The index.html page works fine in Firefox, and by changing 000-default.conf I was able to at least get the script "localhost/cgi-bin/hw.pl" to go from a 404 error to a 403 error by adding the section as marked:
jleslie@jl-vr0sr4:/etc/apache2/sites-available$ pwd
/etc/apache2/sites-available
jleslie@jl-vr0sr4:/etc/apache2/sites-available$ cat 000-default.conf
<VirtualHost *:80>
# The ServerName directive sets the request scheme, hostname and port that
# the server uses to identify itself. This is used when creating
# redirection URLs. In the context of virtual hosts, the ServerName
# specifies what hostname must appear in the request's Host: header to
# match this virtual host. For the default virtual host (this file) this
# value is not decisive as it is used as a last resort host regardless.
# However, you must set it for any further virtual host explicitly.
#ServerName www.example.com
ServerAdmin webmaster@localhost
DocumentRoot /var/www/html
# Available loglevels: trace8, ..., trace1, debug, info, notice, warn,
# error, crit, alert, emerg.
# It is also possible to configure the loglevel for particular
# modules, e.g.
#LogLevel info ssl:warn
ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
# For most configuration files from conf-available/, which are
# enabled or disabled at a global level, it is possible to
# include a line for only one particular virtual host. For example the
# following line enables the CGI configuration for this host only
# after it has been globally disabled with "a2disconf".
# JL:: 221116 uncomment out the include to allow cgi-bin
# Include conf-available/serve-cgi-bin.conf
#JL:: 221116 did nothing. Lets add the below:
ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
<Directory "/usr/lib/cgi-bin">
AllowOverride None
Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
Order allow,deny
Allow from all
AddHandler cgi-script .pl
</Directory>
#JL:: 221116 ok, that changed the 404 not found error
# to a 403 forbidden error what gives?
# Forbidden
#
# You don't have permission to access this resource.
# Apache/2.4.52 (Ubuntu) Server at 127.0.0.1 Port 80
</VirtualHost>
# vim: syntax=apache ts=4 sw=4 sts=4 sr noet
So how do I now get it to actually run?
Did I make a mistake in my conf file?
I also want to be able to run .exe, .cgi and .sh files from /cgi-bin/; how do I specify those as well?
Here is the test hello-world Perl script I tried to run:
jleslie@jl-vr0sr4:/usr/lib/cgi-bin$ ll
/usr/lib/cgi-bin
total 44
drwxr-xr-x 2 root root 4096 Nov 16 09:17 ./
drwxrwxrwx 115 root root 4096 Nov 14 13:07 ../
-rwxrwxrwx 1 jleslie jleslie 30144 Nov 16 08:51 fh_fe.exe*
-rwxr-xr-x 1 root root 76 Nov 16 09:17 hw.pl*
jleslie@jl-vr0sr4:/usr/lib/cgi-bin$ cat hw.pl
#!/usr/bin/perl
print "Content-type: text/html\n\n";
print "Hello, World.";
jleslie@jl-vr0sr4:/usr/lib/cgi-bin$
OK, I finally figured it out. No thanks to the Apache folks, who keep changing the rules and fail to properly document how to do the most basic things:
start an apache server
set up a cgi-bin directory.
They'll gladly spend pages talking about virtual hosts, and double nested hyper-crayon whatevers, but not the most basic setup: a webserver that can run cgi-bin programs. Unbelievable. /end gripe.
Anyway, I edited:
/etc/apache2/sites-available/000-default.conf
with this code, to both fix and document what is necessary:
# JL:: 221116 uncomment out the include to allow cgi-bin

# Include conf-available/serve-cgi-bin.conf

#JL:: 221116 did nothing. Lets add the below:

#ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
#<Directory "/usr/lib/cgi-bin">
ScriptAlias /cgi-bin/ /var/www/cgi-bin/
<Directory "/var/www/cgi-bin">
AllowOverride None
Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
Order allow,deny
Allow from all
AddHandler cgi-script .pl .exe .cgi .sh
</Directory>

#JL:: 221116 ok, that changed the 404 not found error
# to a 403 forbidden error what gives?
# Forbidden
#
# You don't have permission to access this resource.
# Apache/2.4.52 (Ubuntu) Server at 127.0.0.1 Port 80

# here is the fix. run this at the command line:

### RUNME ****> cd /etc/apache2/mods-enabled
### RUNME ****> sudo ln -s ../mods-available/cgi.load

</VirtualHost>
Here is the complete history of the session that fixed the issue (including my mistakes; don't bother with them):
1807 cd /etc/apache2/sites-available/
1808 vi 000-default.conf
1809 sudo systemctl stop apache2
1810 sudo systemctl start apache2
1811 cd ..
1812 cd conf-available/
1813 ll
1814 vi serve-cgi-bin.conf
1815 cd ../sites-available/
1816 ll
1817 vi 000-default.conf
1818 pwd
1819 cd /etc/apache2/mods-enabled
1820 sudo ln -s ../mods-available/cgi.load
1821 ll
1822 sudo systemctl stop apache2
1823 sudo systemctl start apache2
Please note, for the documentation, the double-secret "turn on cgi-bin" step of making the soft link. It took me over an hour of searching on the internet to find that one. - J
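For reference, the manual symlink above is essentially what the a2enmod helper does, so an equivalent alternative (a sketch, assuming the stock Debian/Ubuntu Apache packaging) would be:
# same effect as creating the symlink in mods-enabled by hand
sudo a2enmod cgi          # on threaded MPMs (worker/event) this selects cgid instead
sudo systemctl restart apache2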
I have this as the last line in my .htaccess:
# Default response
RewriteRule ^.*$ https://miranda-zhang.github.io/cloud-computing-schema/v1.0/406.html [R=406,L]
But when I tested it, it seems to return a default Apache error page.
$ curl -H "Accept: anything else" http://150.203.213.249/test
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 336 100 336 0 0 336 0 0:00:01 --:--:-- 0:00:01 328k<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>406 Not Acceptable</title>
</head><body>
<h1>Not Acceptable</h1>
<p>An appropriate representation of the requested resource /test could not be found on this server.</p>
<hr>
<address>Apache/2.4.18 (Ubuntu) Server at 150.203.213.249 Port 80</address>
</body></html>
This is intended as an entry to w3id.org
However, I can change the code to 308 to make it redirect; is this bad practice, or does it not matter?
You can just use an ErrorDocument directive to render a custom page for 406 (or for 408):
ErrorDocument 406 /error.html
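As a quick check (just a sketch; it assumes /error.html actually exists under the DocumentRoot and that the ErrorDocument line sits in the same .htaccess as the RewriteRule):
# the status should still be 406 Not Acceptable, but the body should now be
# the contents of /error.html instead of Apache's default error page
curl -i -H "Accept: application/x-no-such-type" http://150.203.213.249/test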
This is the code I am using to save files from a camera and name them from 0001 onward. The camera is running BusyBox, and it has an ash shell inside.
The code is based on a previous answer by Charles Duffy here.
#!/bin/sh
# Snapshot script
cd /mnt/0/foto
sleep 1
set -- *.jpg # put the sorted list of picture filenames on argv (the number of files in the list can be checked with echo $#)
while [ $# -gt 1 ]; do # as long as there's more than one...
shift # ...keep shifting until only the last one remains
done
if [ "$1" = "*.jpg" ]; then # if argv is still the unexpanded pattern, no jpg file is present in the dir; argv is set so that the following commands start the sequence from 0
set -- snapfull0000.jpg
else
echo "Piu' di un file jpg trovato."
fi
num=${1#*snapfull} # $1 is the last remaining filename; the alphabetic part of the name is removed.
num=${num%.*} # removes the suffix after the name.
num=$(printf "%04d" "$(($num + 1))") # the variable is updated to the next number and padded (zeroes are added)
# echoes for debug
echo "variabile num="$num # shows the number recognized in the latest filename
echo "\$#="$# # displays num of argv variables
echo "\$1="$1 # displays the first arg variable
wget http://127.0.0.1/snapfull.php -O "snapfull${num}.jpg" # the snapshot is requested to the camera, with the sequential naming of the jpeg file.
This is what I get on the command line while the script runs. I manually ran the script nine times, but after snapfull0008.jpg was saved, as you can see in the last lines, the files are only ever named snapfull0000.jpg.
# ./snap4.sh
variable num=0001
$#=1
$1=snapfull0000.jpg
Connecting to 127.0.0.1 (127.0.0.1:80)
127.0.0.1 127.0.0.1 - [05/Dec/2014:20:22:22 +0000] "GET /snapfull.php HTTP/1.1" 302 0 "-" "Wget"
snapfull0001.jpg 100% |*******************************| 246k --:--:-- ETA
# ./snap4.sh
More than a jpg file found.
variable num=0002
$#=1
$1=snapfull0001.jpg
Connecting to 127.0.0.1 (127.0.0.1:80)
127.0.0.1 127.0.0.1 - [05/Dec/2014:20:22:32 +0000] "GET /snapfull.php HTTP/1.1" 302 0 "-" "Wget"
snapfull0002.jpg 100% |*******************************| 249k --:--:-- ETA
# ./snap4.sh
More than a jpg file found.
variable num=0003
$#=1
$1=snapfull0002.jpg
Connecting to 127.0.0.1 (127.0.0.1:80)
127.0.0.1 127.0.0.1 - [05/Dec/2014:20:22:38 +0000] "GET /snapfull.php HTTP/1.1" 302 0 "-" "Wget"
snapfull0003.jpg 100% |*******************************| 248k --:--:-- ETA
# ./snap4.sh
More than a jpg file found.
variable num=0004
$#=1
$1=snapfull0003.jpg
Connecting to 127.0.0.1 (127.0.0.1:80)
127.0.0.1 127.0.0.1 - [05/Dec/2014:20:22:43 +0000] "GET /snapfull.php HTTP/1.1" 302 0 "-" "Wget"
snapfull0004.jpg 100% |*******************************| 330k --:--:-- ETA
# ./snap4.sh
More than a jpg file found.
variable num=0005
$#=1
$1=snapfull0004.jpg
Connecting to 127.0.0.1 (127.0.0.1:80)
127.0.0.1 127.0.0.1 - [05/Dec/2014:20:22:51 +0000] "GET /snapfull.php HTTP/1.1" 302 0 "-" "Wget"
snapfull0005.jpg 100% |*******************************| 308k --:--:-- ETA
# ./snap4.sh
More than a jpg file found.
variable num=0006
$#=1
$1=snapfull0005.jpg
Connecting to 127.0.0.1 (127.0.0.1:80)
127.0.0.1 127.0.0.1 - [05/Dec/2014:20:22:55 +0000] "GET /snapfull.php HTTP/1.1" 302 0 "-" "Wget"
snapfull0006.jpg 100% |*******************************| 315k --:--:-- ETA
# ./snap4.sh
More than a jpg file found.
variable num=0007
$#=1
$1=snapfull0006.jpg
Connecting to 127.0.0.1 (127.0.0.1:80)
127.0.0.1 127.0.0.1 - [05/Dec/2014:20:22:59 +0000] "GET /snapfull.php HTTP/1.1" 302 0 "-" "Wget"
snapfull0007.jpg 100% |*******************************| 316k --:--:-- ETA
# ./snap4.sh
More than a jpg file found.
variable num=0008
$#=1
$1=snapfull0007.jpg
Connecting to 127.0.0.1 (127.0.0.1:80)
127.0.0.1 127.0.0.1 - [05/Dec/2014:20:23:04 +0000] "GET /snapfull.php HTTP/1.1" 302 0 "-" "Wget"
snapfull0008.jpg 100% |*******************************| 317k --:--:-- ETA
# ./snap4.sh
More than a jpg file found.
variable num=0000
$#=1
$1=snapfull0008.jpg
Connecting to 127.0.0.1 (127.0.0.1:80)
127.0.0.1 127.0.0.1 - [05/Dec/2014:20:23:10 +0000] "GET /snapfull.php HTTP/1.1" 302 0 "-" "Wget"
snapfull0000.jpg 100% |*******************************| 318k --:--:-- ETA
What could be the cause of the sequence stopping after file number 8?
The problem is that leading 0s cause the number to be read as octal, and 8 is not a valid octal digit.
In bash, using $((10#$num)) will force decimal. Thus:
num=$(printf "%04d" "$((10#$num + 1))")
To work with busybox ash, you'll need to strip the leading 0s instead. One way to do this that works even in busybox ash:
while [ "${num:0:1}" = 0 ]; do
num=${num:1}
done
num=$(printf '%04d' "$((num + 1))")
See the below transcript showing use (tested with ash from busybox v1.22.1):
$ num=0008
$ while [ "${num:0:1}" = 0 ]; do
> num=${num:1}
> done
$ num=$(printf '%04d' "$((num + 1))")
$ echo "$num"
0009
If your shell doesn't support even the baseline set of parameter expansions required by POSIX, you could instead end up using:
num=$(echo "$num" | sed -e 's/^0*//')
num=$(printf '%04d' "$(($num + 1))")
...though this would imply that your busybox was built with a shell other than ash, a decision I would strongly suggest reconsidering.
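Putting it together, the increment step in the original script could look something like this (just a sketch using the busybox-compatible loop above; the rest of the script stays the same):
num=${1#*snapfull}               # strip the alphabetic prefix
num=${num%.*}                    # strip the .jpg suffix
while [ "${num:0:1}" = 0 ]; do   # drop leading zeroes so the value is not read as octal
num=${num:1}
done
num=$(printf '%04d' "$((num + 1))")   # increment and zero-pad again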
I've been having a problem with my server, and the host is refusing to look into the issue.
It's a dedicated CentOS machine with DirectAdmin, nothing out of the ordinary, with a PHP/MySQL site running on it.
So I ran a netstat command on the box and got this (x's in place to mask live data):
netstat -plan|grep :80|awk {'print $5'}|cut -d: -f 1|sort|uniq -c|sort -nk 1
1 xx.xx.xx.xx
1 xx.xx.xx.xx
1 xx.xx.xx.xx
109
163 xx.xx.xx.xx
344 xx.xx.xx.xx
The 163 connections, for some reason, are coming from Facebook Ireland. The 344 are from my own server itself; I'm not sure why, and I can't get to the root of the problem either. At times it can balloon up to 500-600 connections.
Any ideas? I'm not sure whether I should block the Facebook one, as I can't see why it would need to crawl the site with that many connections.
Thanks a lot!
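One way to dig further (a sketch, not from the original post) is to extend the same pipeline so it also shows which local process owns each connection, which may help narrow down those loopback connections:
# group port-80 connections by remote address and owning PID/program name
netstat -plan | grep :80 | awk '{split($5, a, ":"); print a[1], $7}' | sort | uniq -c | sort -nk1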
I want to sort and calculate how many times clients downloaded files (3 types) from my server.
I installed tshark and ran the following command, which should capture GET requests:
`./tshark 'tcp port 80 and (((ip[2:2] - ((ip[0]&0xf)<<2)) - ((tcp[12]&0xf0)>>2)) != 0)' -R'http.request.method == "GET"'`
The sniffer starts working and every second I get a new row; here is the result:
0.000000 144.137.136.253 -> 192.168.4.7 HTTP GET /pids/QE13_593706_0.bin HTTP/1.1
8.330354 1.1.1.1 -> 2.2.2.2 HTTP GET /pids/QE13_302506_0.bin HTTP/1.1
17.231572 1.1.1.2 -> 2.2.2.2 HTTP GET /pids/QE13_382506_0.bin HTTP/1.0
18.906712 1.1.1.3 -> 2.2.2.2 HTTP GET /pids/QE13_182406_0.bin HTTP/1.1
19.485199 1.1.1.4 -> 2.2.2.2 HTTP GET /pids/QE13_302006_0.bin HTTP/1.1
21.618113 1.1.1.5 -> 2.2.2.2 HTTP GET /pids/QE13_312106_0.bin HTTP/1.1
30.951197 1.1.1.6 -> 2.2.2.2 HTTP GET /nginx_status HTTP/1.1
31.056364 1.1.1.7 -> 2.2.2.2 HTTP GET /nginx_status HTTP/1.1
37.578005 1.1.1.8 -> 2.2.2.2 HTTP GET /pids/QE13_332006_0.bin HTTP/1.1
40.132006 1.1.1.9 -> 2.2.2.2 HTTP GET /pids/PE_332006.bin HTTP/1.1
40.407742 1.1.2.1 -> 2.2.2.2 HTTP GET /pids/QE13_452906_0.bin HTTP/1.1
What do I need to do to store the result types and counts, like /pids/*****.bin, into another file?
I'm not strong in Linux, but I'm sure it can be done with 1-3 lines of script.
Maybe with awk, but I don't know the technique for reading the sniffer's output.
Thank you,
Can't you just grep the log file of your webserver?
Anyway, to extract the lines of captured HTTP traffic that relate to your server's files, just try:
./tshark 'tcp port 80 and \
(((ip[2:2] - ((ip[0]&0xf)<<2)) - ((tcp[12]&0xf0)>>2)) != 0)' \
-R'http.request.method == "GET"' | \
egrep "HTTP GET /pids/.*.bin"