Nginx: how to view what files are in downloading state? - linux

I'm still waiting for an answer.
I need to see whether someone is downloading a specific file at this moment. The original problem: I would like to know when someone interrupts downloading a file.
The server configuration:
server {
    listen 80;
    server_name mysite.com;

    location /ugp/ {
        alias /www/ugp/;
    }

    break;
}
A user can download files from http://mysite.com/ugp/, for example http://mysite.com/ugp/1.mp3.
UPDATE.
It's not so obvious how to do this by analyzing access.log. Some clients leave a 206 status in the log when the user stops the download (Google Chrome), but some do not (HTC Streaming Player, a mobile application):
85.145.8.243 - - [18/Jan/2013:16:08:41 +0300] "GET /ugp/6.mp3 HTTP/1.1" 200 10292776 "-" "HTC Streaming Player htc_wwe / 1.0 / htc_ace / 2.3.5"
85.145.8.243 - - [18/Jan/2013:16:08:41 +0300] "GET /ugp/2.mp3 HTTP/1.1" 200 697216 "-" "HTC Streaming Player htc_wwe / 1.0 / htc_ace / 2.3.5"
85.145.8.243 - - [18/Jan/2013:16:09:44 +0300] "GET /ugp/7.mp3 HTTP/1.1" 200 4587605 "-" "HTC Streaming Player htc_wwe / 1.0 / htc_ace / 2.3.5"
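If the goal is to spot interrupted downloads regardless of how the client behaves, nginx can log it directly: the $request_completion variable is "OK" when a request finished normally and empty otherwise, and $body_bytes_sent can be compared with the file size. A sketch (the log_format name and log path here are made up):

```nginx
# Log completion state and actual bytes sent for each download
log_format downloads '$remote_addr [$time_local] "$request" $status '
                     'sent=$body_bytes_sent completed=$request_completion';

server {
    listen 80;
    server_name mysite.com;

    location /ugp/ {
        alias /www/ugp/;
        access_log /var/log/nginx/downloads.log downloads;
    }
}
```

An aborted transfer shows up as a line with an empty completed= field and fewer bytes sent than the file's size.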

To offer more flexibility, you can add a PHP layer and publish URLs like
http://mysite.com/download.php?file=2.mp3
download.php then reads the file from its real location (e.g. /var/www/files/2.mp3) and sends the download headers itself, so the script knows exactly when a transfer starts and whether it completed:
<?php
// Here you know precisely that a download has been requested
$path = '/home/var/www/files';
$file = $path . '/' . basename($_GET['file']); // basename() guards against path traversal
$size = filesize($file);
$read = 0;
$state = 'downloading ' . $file;

// Send the headers for a binary file download
header('Content-Type: application/octet-stream');
header('Content-Disposition: attachment; filename="' . basename($file) . '"');
header('Content-Length: ' . $size);

$f = fopen($file, 'rb');
if ($f) {
    // Output the file in small chunks; fread() returns '' at EOF,
    // so test feof() instead of comparing against false
    while (!feof($f)) {
        $chunk = fread($f, 8192);
        if ($chunk === false) {
            break;
        }
        // record somewhere that $state is still 'downloading'
        echo $chunk;
        $read += strlen($chunk);
    }
    fclose($f);
}
$state = ($read >= $size) ? 'done' : 'fail';
?>
This is an example to illustrate the algorithm - not tested at all! But it should be pretty close to working download code (usually readfile() is used, but it sends the whole file in one shot).

Have a look at the nginx extended status module.
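For the record, the standard stub_status module (built with --with-http_stub_status_module) only reports connection totals, not which files are being served; a minimal, untested sketch:

```nginx
location /nginx_status {
    stub_status;       # "stub_status on;" on nginx older than 1.7.5
    allow 127.0.0.1;   # keep the counters private
    deny all;
}
```

To see per-request state you still need the access log (or the commercial extended status API).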

Related

Using SFTP to transfer images from HTML form to remote linux server using PERL/CGI.pm

This is a school project, and the instructor has no knowledge of how to write the code.
I am using CGI and I am attempting to transfer a file without using Net::FTP or Net::SFTP since the server I am attempting to transfer it to will not allow connections from these services. I have written the HTML form and I am able to grab the name of the file uploaded through CGI.
Is it possible to invoke the sftp command from within a Perl script residing on a Linux server to transfer a file uploaded through an HTML form?
If anyone knows a way to do it please post the code so I can modify it and insert into my script.
use CGI qw(:standard);
use File::Basename;

my $safechars = "a-zA-Z0-9._-";      # whitelist of characters allowed in filenames
my $productimage = param('image');   # the uploaded file's name from the form

my ( $name, $path, $extension ) = fileparse( $productimage, qr/\.[^.]*$/ );
$productimage = $name . $extension;
$productimage =~ tr/ /_/;
$productimage =~ s/[^$safechars]//g;
if ( $productimage =~ /^([$safechars]+)$/ ) {
    $productimage = $1;              # untaint the cleaned filename
} else {
    die "Filename contains invalid characters";
}

my $fh = upload('image');
my $uploaddir = "../../.hidden/images";
open( UPLOADFILE, ">", "$uploaddir/$productimage" )
    or die "$!";
binmode UPLOADFILE;
while (<$fh>) {
    print UPLOADFILE $_;
}
close UPLOADFILE;
This is the code I used to upload the file into the server.
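Since the question is whether the sftp command itself can be driven from Perl: it can be run non-interactively with a batch file, which the script can generate and then execute. A sketch, assuming key-based authentication is already set up (host, user, and paths are invented):

```shell
# upload.batch is a plain-text list of sftp commands, e.g.:
#   put ../../.hidden/images/product.jpg /remote/images/product.jpg
#   quit
# Run it non-interactively; -b aborts on the first failing command:
sftp -b upload.batch user@remote.example.com
```

From Perl this would be something like: system('sftp', '-b', 'upload.batch', 'user@remote.example.com') == 0 or die "sftp failed: $?";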

display /var/log/messages in html/php output?

I am trying to display the output of /var/log/messages or similar (../secure for instance) in a webpage that I access through a webserver on the same host.
Should I use bash to tail -f >> the messages file to a new output file and display that text file in the html page, or is there a better way to do this?
Thanks!
If you're looking for a way to display actual file contents online without the need to reload the page, then you should setup a WebSocket server.
You can build a WebSocket server using a framework such as phpDaemon, ReactPHP, Ratchet, icicle, or implement your own server with the help of PHP extensions wrapping asynchronous libraries: event, ev, or similar.
I've chosen a random framework from the list above: Ratchet. Ratchet is based on ReactPHP. ReactPHP chooses a backend for the event loop from the following list:
- libevent extension,
- libev extension,
- event extension,
- or an internal class based on the built-in stream_select() function.
As a maintainer of the event extension, I've chosen event.
I've written a "quick" example just to give you an idea of how it might be implemented. You'll most likely have to work out your own version, maybe using different tools, but the code should give you a starting point.
src/MyApp/Server.php
<?php
namespace MyApp;
use Ratchet\MessageComponentInterface;
use Ratchet\ConnectionInterface;
class Server implements MessageComponentInterface {
    protected $clients;

    public function __construct() {
        $this->clients = new \SplObjectStorage;
    }

    public function onOpen(ConnectionInterface $conn) {
        $this->clients->attach($conn);
        echo "New connection! ({$conn->resourceId})\n";
    }

    public function onMessage(ConnectionInterface $from, $msg) {
        $numRecv = count($this->clients) - 1;
        printf("Connection %d sending '%s' to %d other connection%s\n",
            $from->resourceId, $msg, $numRecv, $numRecv == 1 ? '' : 's');
        foreach ($this->clients as $client) {
            if ($from !== $client) {
                $client->send($msg);
            }
        }
    }

    public function onClose(ConnectionInterface $conn) {
        $this->clients->detach($conn);
        echo "Connection {$conn->resourceId} has disconnected\n";
    }

    public function onError(ConnectionInterface $conn, \Exception $e) {
        echo "An error has occurred: {$e->getMessage()}\n";
        $conn->close();
    }

    public function broadcast($msg) {
        foreach ($this->clients as $client) {
            $client->send($msg);
        }
    }
}
server.php
<?php
use Ratchet\Server\IoServer;
use Ratchet\Http\HttpServer;
use Ratchet\WebSocket\WsServer;
use MyApp\Server;
require __DIR__ . '/vendor/autoload.php';
$server = IoServer::factory(
    new HttpServer(
        new WsServer(
            $my_app_server = new Server()
        )
    ),
    9989
);
$loop = $server->loop;
$filename = '/var/log/messages';
$loop->addPeriodicTimer(5, function () use ($filename, $my_app_server) {
    static $stat_info;
    if ($stat_info == null) {
        clearstatcache(true, $filename);
        $stat_info = stat($filename);
    }
    clearstatcache(true, $filename);
    $st = stat($filename);
    $size_diff = $st['size'] - $stat_info['size'];
    echo "Diff = $size_diff bytes\n";
    if ($size_diff > 0) {
        $offset = $stat_info['size'];
        $bytes = $size_diff;
    } elseif ($size_diff < 0) {
        // The file was likely truncated by logrotate or a similar utility
        $offset = 0;
        $bytes = $st['size'];
    } else {
        $bytes = 0;
    }
    $stat_info = $st;
    if ($bytes) {
        if (! $fp = fopen($filename, 'r')) {
            fprintf(STDERR, "Failed to open file $filename\n");
            return;
        }
        if ($offset > 0) {
            fseek($fp, $offset);
        }
        if ($msg = fread($fp, $bytes)) {
            $my_app_server->broadcast($msg);
        }
        fclose($fp);
    }
});
$server->run();
test.html
<html>
<head>
<meta http-equiv="content-type" content="text/html; charset=utf-8">
<title>Test</title>
</head>
<body>
<script>
var conn = new WebSocket('ws://localhost:9989');
conn.onopen = function(e) {
console.log("Connection established!");
};
conn.onmessage = function(e) {
console.log("Msg from server", e.data);
};
</script>
</body>
</html>
I'll skip the steps required to setup a basic test environment using Composer. Assuming you have successfully configured the test environment for the files above, you'll be able to run the server with the following command:
php server.php
Check if the user has permission to read /var/log/messages. On my system only root can read the file, so you might need to run the above-mentioned command with sudo (root permissions).
Now you can open test.html in a browser and look at the console output. Then trigger some event which is normally logged to the messages file; for instance, you can invoke sudo with a wrong password. The server should detect the changes within the 5-second interval, then send them to the WebSocket clients.
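The bookkeeping in the timer callback (file grew: read just the appended tail; file shrank: assume logrotate truncated it and reread) is independent of PHP; the same idea as a small, illustrative Python sketch:

```python
import os

def read_new_bytes(path, last_size):
    """Return (new_data, new_size): data appended since last_size,
    or the whole file if it shrank (e.g. rotated/truncated)."""
    size = os.path.getsize(path)
    if size > last_size:
        offset, nbytes = last_size, size - last_size
    elif size < last_size:
        offset, nbytes = 0, size   # file was truncated, reread it all
    else:
        return b"", size
    with open(path, "rb") as fp:
        fp.seek(offset)
        return fp.read(nbytes), size

# usage sketch: poll in a loop and broadcast whatever comes back
```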
If you're using tail -f, that means that you'll be continuously getting data from the file while it grows as the command runs.
You can use cat or tail -n instead. You can also expose the files directly by creating a symbolic or hard link to them (ln source-file link-file, ln -s source-file link-file) - but make sure your web server has enough rights to read them.
In the HTML, put the output between <pre> and </pre>.
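As an aside, the tail -n behaviour is simple to reproduce in the page-generating script itself; a rough sketch (Python here, purely illustrative) that returns only the last N lines for wrapping in a <pre> block:

```python
def tail_lines(path, n):
    """Return the last n lines of a text file (naive: reads the whole file)."""
    with open(path, "r", errors="replace") as fp:
        return fp.readlines()[-n:]

# for very large logs, seek near the end instead of reading the whole file
```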
Method 1
In one of your base directories, create a symbolic link
ln -s /var/log/messages messages
If the directory belonged to say, test.web, access the log with
http://test.web/messages
Method 2
If you're looking for a php script then, first create the link as mentioned in method 1. Then, create a new file, say, readlog.php in the base directory of test.web with the below content :
<?php
readfile($_SERVER['DOCUMENT_ROOT'] . "/messages");
?>
Access the readlog.php like :
http://test.web/readlog.php
Requirement:
Read access should be enabled for all users for /var/log/messages.
Note:
Setting read option for /var/log/messages for the whole world is NOT a good idea.
<!DOCTYPE html>
<html>
<head>
<title>toyLogs</title>
</head>
<body>
<div><p><?php include('/var/www/html/accesslog.txt'); ?></p></div>
</body>
</html>

File upload fails when file is over 60K in size

I have been working to convert an in-house application away from FTP, as the security team has told us to get off FTP. So I've been using HTTP uploads instead, and for the most part it works very well. Our environment is a mishmash of Linux, HP-UX, Solaris, and AIX. On our Linux servers, curl is universally available, so I have been using curl's POST capabilities for uploads, and it's worked flawlessly. Unfortunately, the Unix machines rarely have curl, or even wget, so I wrote a GET script in Perl which works fine, and the POST script I wrote in Perl (lifted and adapted from elsewhere on the web) works brilliantly for Unix, up until the data being uploaded is greater than about 60K (which curl handles fine on Linux, btw). Beyond that, the Apache error log starts spitting out:
CGI.pm: Server closed socket during multipart read (client aborted?).
No such error ever occurs when I use curl for the upload. Here's my POST script, using Socket, since LWP is not available on every server, and not at all on any of the Unix servers.
#!/usr/bin/perl -w
use strict;
use Socket;
my $v = 'dcsm';
my $upfile = $ARGV[0] or die 'Upload File not found or not specified.' . "\n";
my $hostname = $ARGV[1] or die 'Hostname not specified.' . "\n";
$| = 1;
my $host = "url.mycompany dot com";
my $url = "/csmtar.cgi";
my $start = times;
my ( $iaddr, $paddr, $proto );
$iaddr = inet_aton($host);
$paddr = sockaddr_in( 80, $iaddr );
$proto = getprotobyname('tcp');
unless ( socket( SOCK, PF_INET, SOCK_STREAM, $proto ) ) {
die "ERROR : init socket: $!";
}
unless ( connect( SOCK, $paddr ) ) {
die "no connect: $!\n";
}
my $length = 0;
open( UH, "< $upfile" ) or warn "$!\n";
$length += -s $upfile;
my $boundary = 'nn7h23ffh47v98';
my @head = (
"POST $url HTTP/1.1",
"Host: $host",
"User-Agent: z-uploader",
"Content-Length: $length",
"Content-Type: multipart/form-data; boundary=$boundary",
"",
"--$boundary",
"Content-Disposition: form-data; name=\"hostname\"",
"",
"$hostname",
"--$boundary",
"Content-Disposition: form-data; name=\"ren\"",
"",
"true",
"--$boundary",
"Content-Disposition: file; name=\"filename\"; filename=\"$upfile\"",
"--$boundary--",
"",
"",
);
my $header = join( "\r\n", @head );
$length += length($header);
$head[3] = "Content-Length: $length";
$header = join( "\r\n", @head );
$length = -s $upfile;
$length += length($header);
select SOCK;
$| = 1;
print SOCK $header;
while ( sysread( UH, my $buf, 8196 ) ) {
if ( length($buf) < 8196 ) {
$buf = $buf . "\r\n--$boundary";
syswrite SOCK, $buf, length($buf);
} else {
syswrite SOCK, $buf, 8196;
}
print STDOUT '.',;
}
close UH;
shutdown SOCK, 1;
my @data = (<SOCK>);
print STDOUT "result->@data\n";
close SOCK;
Anybody see something that jumps out at them?
UPDATE:
I made the following updates, and the errors appear to be unchanged.
To address the content-length issue, and attempt to eliminate the potential for the loop equalling the exact number of characters before appending the final boundary, I made the following code update.
my $boundary = 'nn7h23ffh47v98';
my $content = <<EOF;
--$boundary
Content-Disposition: form-data; name="hostname"
$hostname
--$boundary
Content-Disposition: file; name="filename"; filename="$upfile"
--$boundary--
EOF
$length += length($content);
my $header = <<EOF;
POST $url HTTP/1.1
Host: $host
User-Agent: z-uploader
Content-Length: $length
Content-Type: multipart/form-data; boundary=$boundary
EOF
$header .= $content;
select SOCK;
$| = 1;
print SOCK $header;
my $incr = ($length + 100) / 20;
$incr = sprintf("%.0f", $incr);
while (sysread(UH, my $buf, $incr )) {
syswrite SOCK, $buf, $incr;
}
syswrite SOCK, "\n--$boundary", $incr;
You are asking if there's "something that jumps out", from looking at the code.
Two things jump out at me:
1) The Content-Length parameter in a POST HTTP message specifies the exact byte count of the entity portion of the HTTP message. See section 4.4 of RFC 2616.
You are setting the Content-Length: header to the exact size of the file you're uploading. Unfortunately, in addition to the file itself, you are also sending the MIME headers.
The "entity" portion of the HTTP message, as defined by RFC 2616, essentially consists of everything after the blank line following the last header of the HTTP message. Everything below that point must be counted in the Content-Length: header. The Content-Length header is NOT the size of the file you're uploading, but the size of the HTTP message's entire entity portion, which follows the header.
2) Ignoring the broken Content-Length: header, if the size of the file happens to be an exact multiple of 8196 bytes, the MIME document you are constructing will most likely be corrupted. Your last sysread() call will get the last 8196 bytes of the file, which you will happily copy through; the next call to sysread() will return 0, and you will terminate the loop without emitting the trailing boundary delimiter. The MIME document will be corrupt.
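Both problems disappear if you build the multipart entity first and only then derive Content-Length from it: the header always matches the entity, and the closing boundary can never be lost. A sketch in Python (field names and the Content-Type of the file part are illustrative, not taken from the script above):

```python
def build_multipart(boundary, fields, file_field, filename, file_bytes):
    """Return (headers, body) for a multipart/form-data POST.
    Content-Length is the length of the *entire* body, not just the file."""
    parts = []
    for name, value in fields.items():
        parts.append(
            f'--{boundary}\r\n'
            f'Content-Disposition: form-data; name="{name}"\r\n\r\n'
            f'{value}\r\n'.encode()
        )
    parts.append(
        (f'--{boundary}\r\n'
         f'Content-Disposition: form-data; name="{file_field}"; '
         f'filename="{filename}"\r\n'
         f'Content-Type: application/octet-stream\r\n\r\n').encode()
        + file_bytes
        + f'\r\n--{boundary}--\r\n'.encode()   # closing boundary, always present
    )
    body = b"".join(parts)
    headers = {
        "Content-Type": f"multipart/form-data; boundary={boundary}",
        "Content-Length": str(len(body)),      # exact byte count of the entity
    }
    return headers, body
```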

intel XDK directory browsing

I'm trying to find a way to reach and parse all the JSON files from a directory which is inside an Intel XDK project. In my case all the files are stored in the '/cards' folder of the project.
I have tried to reach them with the fs.readdirSync('/cards') method, but that didn't point to the location I expected, and I haven't had any luck with the 'intel.xdk.webRoot' path either.
With the simulator I've managed to get the files with a trick:
fs.readdirSync(intel.xdk.webRoot.replace('localhost:53862/http-services/emulator-webserver/ripple/userapp/', '')+'cards');
(intel.xdk.webRoot contains the absolute path to my '/cards' folder)
and it works like a charm, but it doesn't work on any real device I build for.
I have tried it on iOS 7 and Android 4.2 phones.
Please help me use the fs.readdirSync() method properly, or give me an alternative solution.
Thanks and regards,
satire
You can't use Node.js APIs in mobile apps. It's a bug that the XDK emulator lets you do it; the bug is fixed in the next release of the XDK.
You could write a node script that scans the directory and then writes out a js file with the contents of the directory. You would run the node script on your laptop before packaging the app in the Build tab. The output would look like this:
files.js:
dir = {files: ['file1.txt', 'file2.txt']}
Then use a script tag to load it:
<script src="files.js"></script>
Your js can then read the dir variable. This assumes the contents do not change while the app is running.
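The directory-scanning helper described above takes only a few lines; a sketch (shown here in Python rather than node, with the output matching the files.js example):

```python
import json
import os

def write_manifest(cards_dir, out_path):
    """Scan a directory and emit a JS file assigning the listing to `dir`."""
    names = sorted(
        f for f in os.listdir(cards_dir)
        if os.path.isfile(os.path.join(cards_dir, f))
    )
    with open(out_path, "w") as out:
        out.write("dir = {files: %s}\n" % json.dumps(names))
    return names
```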
The location of your application's root directory will vary depending on the target platform and can also vary with the emulator and the debug containers (e.g., App Preview versus App Analyzer). Here's what I've done to locate files within the project:
// getWebPath() returns the location of index.html
// getWebRoot() returns URI pointing to index.html
function getWebPath() {
"use strict" ;
var path = window.location.pathname ;
path = path.substring( 0, path.lastIndexOf('/') ) ;
return 'file://' + path ;
}
function getWebRoot() {
"use strict" ;
var path = window.location.href ;
path = path.substring( 0, path.lastIndexOf('/') ) ;
return path ;
}
I have not been able to test this exhaustively, but it appears to be working so far. Here's an example where I'm using a Cordova media object and want it to play a file stored locally in the project. Note that I have to "special case" the iOS container:
var x = window.device && window.device.platform ;
console.log("platform = ", x) ;
if (x && x.match(/(ios)|(iphone)|(ipod)|(ipad)/ig)) {
var media = new Media("audio/bark.wav", mediaSuccess, mediaError, mediaStatus) ;
}
else {
var media = new Media(getWebRoot() + "/audio/bark.wav", mediaSuccess, mediaError, mediaStatus) ;
}
console.log("media.src = ", media.src) ;
media.play() ;
Not quite sure if this is what you are looking for...
You will need to use the Cordova build in the Intel XDK, here is information on building with Cordova:
https://software.intel.com/en-us/html5/articles/using-the-cordova-for-android-ios-etc-build-option
And a DirectoryReader example:
function success(entries) {
var i;
for (i=0; i<entries.length; i++) {
console.log(entries[i].name);
}
}
function fail(error) {
alert("Failed to list directory contents: " + error.code);
}
// Get a directory reader
var directoryReader = dirEntry.createReader();
// Get a list of all the entries in the directory
directoryReader.readEntries(success,fail);

mod_rewrite to download - makes suspicious looking file

I decided to try and use mod_rewrite to hide the location of a file that a user can download.
So they click on a link that's directed to "/download/some_file/" and they instead get "/downloads/some_file.zip"
Implemented like this:
RewriteRule ^download/([^/\.]+)/?$ downloads/$1.zip [L]
This works, except that when the download prompt appears I get a file named "download" with no extension, which looks suspicious, and the user might not be aware they are supposed to unzip it. Is there a way of doing this so it looks like an actual file? Or is there a better way I should be doing this?
To provide some context/reason for hiding the location of the file. This is for a band where the music can be downloaded for free provided the user signs up for the mailing list.
Also note: I need to do this within .htaccess.
You can set the filename by sending the Content-disposition header:
https://serverfault.com/questions/101948/how-to-send-content-disposition-headers-in-apache-for-files
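If mod_headers is enabled and your host allows it in .htaccess, the rewrite can pass the real filename along in an environment variable; an untested sketch (depending on the rewrite context, the variable may arrive prefixed as REDIRECT_FNAME):

```apache
RewriteEngine On
# Capture the name and expose it to mod_headers via an env variable
RewriteRule ^download/([^/.]+)/?$ downloads/$1.zip [L,E=FNAME:$1.zip]
Header set Content-Disposition "attachment; filename=%{FNAME}e" env=FNAME
```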
Ok so I believe that I'm restricted as to what headers I can set using .htaccess
So I have instead solved this using php.
I initially copied a download php script found here:
How to rewrite and set headers at the same time in Apache
However my file size was too big and so this was not working properly.
After a bit of googling I came across this: http://teddy.fr/blog/how-serve-big-files-through-php
So my complete solution is as follows...
First send requests to download script:
RewriteRule ^download/([^/\.]+)/?$ downloads/download.php?download=$1 [L]
Then get full filename, set headers, and serve it chunk by chunk:
<?php
if ($_GET['download']) {
    // basename() guards against path traversal in the user-supplied name
    $file = $_SERVER['DOCUMENT_ROOT'] . '/media/downloads/' . basename($_GET['download']) . '.zip';
}
define('CHUNK_SIZE', 1024*1024); // Size (in bytes) of tiles chunk
// Read a file and display its content chunk by chunk
function readfile_chunked($filename, $retbytes = TRUE) {
$buffer = '';
$cnt = 0;
$handle = fopen($filename, 'rb');
if ($handle === false) {
return false;
}
while (!feof($handle)) {
$buffer = fread($handle, CHUNK_SIZE);
echo $buffer;
ob_flush();
flush();
if ($retbytes) {
$cnt += strlen($buffer);
}
}
$status = fclose($handle);
if ($retbytes && $status) {
return $cnt; // return num. bytes delivered like readfile() does.
}
return $status;
}
$save_as_name = basename($file);
header('Cache-Control: must-revalidate, post-check=0, pre-check=0');
header('Pragma: no-cache');
header("Content-Type: application/zip");
header("Content-Disposition: attachment; filename=\"$save_as_name\"");
readfile_chunked($file);
?>
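The chunk-by-chunk approach generalizes beyond PHP; as a minimal illustration (Python here, not part of the original solution), reading fixed-size blocks keeps memory usage flat no matter how large the file is:

```python
def read_chunked(path, chunk_size=1024 * 1024):
    """Yield a file's contents in fixed-size chunks (the last may be short)."""
    with open(path, "rb") as fp:
        while True:
            chunk = fp.read(chunk_size)
            if not chunk:
                break
            yield chunk
```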
