I am running Play on multiple machines in our datacenter. We load-balance the hell out of everything. On each Play node/VM I'm using Apache and an init.d/play script to start and stop the Play service.
The thing is, our Play websites are hosted on shared network storage. This makes deployment really nice: you deploy to one place and the website is updated on all 100 machines. Each machine has a mapped folder "/z/www/PlayApp1" where the Play app lives.
The issue is that when the service starts or stops, the server.pid file is written to that network location where the app's files live.
So as I bring up 100 nodes, the 100th node overwrites the PID file with its PID, and that PID file then holds the correct process ID for only 1 out of 100 nodes.
How do I get Play to store the PID file locally rather than with the app files on the network share? I need each server's PID file to reflect that machine's actual process.
We are using CentOS (Linux)
Thanks in advance
Josh
According to https://github.com/playframework/play/pull/43, it looks like there is a --pid_file command-line option; it might only work with paths under the application root, so you might have to make directories for each distinct host (which could possibly be symlinks).
I have 0 experience with Play so hopefully this is helpful information.
I don't even think it should run a second copy, based on the current source code. The main function is:
public static void main(String[] args) throws Exception {
    File root = new File(System.getProperty("application.path"));
    if (System.getProperty("precompiled", "false").equals("true")) {
        Play.usePrecompiled = true;
    }
    if (System.getProperty("writepid", "false").equals("true")) {
        writePID(root);
    }
    // ... blah blah blah ...
}
and writePID is:
private static void writePID(File root) {
    String pid = ManagementFactory.getRuntimeMXBean().getName().split("@")[0];
    File pidfile = new File(root, PID_FILE);
    if (pidfile.exists()) {
        throw new RuntimeException("The " + PID_FILE + " already exists. Is the server already running?");
    }
    IO.write(pid.getBytes(), pidfile);
}
meaning it should throw an exception when you try to run multiple copies using the same application.path.
So either you're not using the version I'm looking at or you're discussing something else.
It seems to me it would be a simple matter to change that one line above:
File root = new File(System.getProperty("application.path"));
to use a different property for the PID file storage, one that's not on the shared drive.
Although you'd need to be careful: root is also passed to Play.init, so you should investigate the impact of changing it.
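For instance, here is a minimal, self-contained sketch of that change. Note that the pidfile.path property name is hypothetical, something you would introduce yourself; it is not an existing Play option:

import java.io.File;

public class PidLocationSketch {
    public static void main(String[] args) {
        // "pidfile.path" is a made-up property name: fall back to
        // application.path when it is not set, so the default behaviour
        // is unchanged.
        File root = new File(System.getProperty("application.path"));
        File pidRoot = new File(System.getProperty("pidfile.path",
                System.getProperty("application.path")));
        // In Play's main you would pass pidRoot to writePID(...) while
        // still passing root (the shared application path) to Play.init(...).
        System.out.println("application root: " + root);
        System.out.println("PID file root:    " + pidRoot);
    }
}

Each host would then be started with something like -Dpidfile.path=/var/run/play pointing at a machine-local directory.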
This is, after all, one of the great advantages of open source software, inasmuch as you can fix the "bugs" yourself.
For what it's worth, I'm not a big fan of the method you've chosen for deployment. Yes, it simplifies deployment, but upgrading your servers becomes an all-or-nothing affair, which will cause you grief if you accidentally install some dodgy software.
I much prefer staged deployments so I can shut down non-performing nodes as needed.
Change your init script to write the PID to /tmp or somewhere else machine-local. If that is hard, a symlink might work.
I'm writing a shell script that will run on Linux but can operate on files sitting on a mounted partition that may or may not have an ext* file system; it could, for example, be NTFS, FAT32, or any other inode- or non-inode-based file system, and the partition could get further re-mounted on other boxes running non-Linux OSes like Windows or Mac.
Further, my script needs to be able to delete a file on this shared, arbitrarily formatted partition even while the file is being read from or written to by a remote process running on a Linux, Windows, or Mac box.
Questions:
1. Is the ability to delete a file that is in use a feature of:
a. only the file system?
b. only the OS?
c. a combination of both?
2. (Extension of Q1) Does it matter whether the process doing I/O on the file and the one deleting the file are both local, or one is remote?
Interesting question: how can a remote system access a file (open, read/write data, delete) on Windows directly? Really, this is impossible. We need some agent (the LanMan server) running on the local system, which performs local operations on the file in response to remote commands (sent, say, by the network redirector). So from the file system's point of view, all operations are always local.
Is the ability to delete a file that is in use a feature of...
This is of course implemented by the file system driver, but that driver is written for a concrete OS and follows its rules. The on-disk file system data has a common format (which is why a drive formatted, and files written, under one OS can be read from another OS), but how the file system driver processes requests to open, read, write, and delete files is OS-specific and differs between OSes. So the data format on disk is common and depends only on the file system, but how that data is read, written, or deleted is OS-specific.
On Windows, the rules for deleting files are as follows:
Normally a file marked for deletion is not actually deleted until all
open handles for the file have been closed and the link count for the
file is zero. When marking a file for deletion using
FILE_DISPOSITION_POSIX_SEMANTICS, the link gets removed from the
visible namespace as soon as the POSIX delete handle has been closed,
but the file’s data streams remain accessible by other existing
handles until the last handle has been closed.
So in general a file will not be deleted until the last handle to it is closed. The file becomes inaccessible as soon as we mark it for deletion: it can no longer be opened (we get the error "A non close operation has been requested of a file object with a delete pending." if we try this after the file is marked for deletion), but if the file was already open, we can still work with it through that handle. A file also cannot be deleted while a section (memory mapping) exists on it; that attempt fails with "An attempt has been made to remove a file or directory that cannot be deleted."
Beginning with the Windows 10 Redstone 1 (RS1) build, the FILE_DISPOSITION_POSIX_SEMANTICS flag exists, which removes the file name from the visible namespace as soon as the delete handle is closed, while the file's data streams remain accessible through other existing handles until the last handle is closed.
Windows test demo (FILE_DISPOSITION_POSIX_SEMANTICS is supported by NTFS only beginning with _WIN32_WINNT_WIN10_RS1, and the FileDispositionInfoEx information class is likewise only supported from _WIN32_WINNT_WIN10_RS1; on earlier builds we simply get a "not implemented" error):
void print_error(PCSTR name)
{
    PWSTR sz;
    NTSTATUS status = RtlGetLastNtStatus();
    if (FormatMessageW(FORMAT_MESSAGE_ALLOCATE_BUFFER | FORMAT_MESSAGE_FROM_HMODULE,
        GetModuleHandleW(L"ntdll"), status, 0, (PWSTR)&sz, 0, 0))
    {
        DbgPrint("%s=%x\n%S\n", name, status, sz);
        LocalFree(sz);
    }
}

HANDLE OpenFile(PCWSTR lpFileName, DWORD dwDesiredAccess)
{
    HANDLE hFile = CreateFileW(lpFileName, dwDesiredAccess, FILE_SHARE_VALID_FLAGS,
        0, OPEN_EXISTING, 0, 0);
    if (hFile == INVALID_HANDLE_VALUE)
    {
        print_error("OpenFile");
        return 0;
    }
    return hFile;
}

void ReadTest(HANDLE hFile)
{
    if (hFile)
    {
        ULONG dwBytes;
        if (ReadFile(hFile, &dwBytes, sizeof(dwBytes), &dwBytes, 0))
        {
            DbgPrint("ReadFile=OK\n");
        }
        else
        {
            print_error("ReadFile");
        }
    }
}

void DeleteTest(PCWSTR lpFileName)
{
    HANDLE hFile1, hFile2, hFile3;
    if (hFile1 = OpenFile(lpFileName, DELETE))
    {
        hFile2 = OpenFile(lpFileName, FILE_GENERIC_READ);
        FILE_DISPOSITION_INFO_EX fdi = { FILE_DISPOSITION_DELETE | FILE_DISPOSITION_POSIX_SEMANTICS };
        if (!SetFileInformationByHandle(hFile1, FileDispositionInfoEx, &fdi, sizeof(fdi)))
        {
            print_error("SetFileInformationByHandle");
        }
        // The file is already inaccessible here (open must fail) but still visible.
        if (hFile3 = OpenFile(lpFileName, FILE_GENERIC_READ))
        {
            CloseHandle(hFile3);
        }
        ReadTest(hFile2);
        // win10 rs1: the file is removed from the visible namespace here.
        CloseHandle(hFile1);
        // Is the file still visible?
        if (hFile3 = OpenFile(lpFileName, FILE_GENERIC_READ))
        {
            CloseHandle(hFile3);
        }
        // Is it possible to create a new file with this name?
        hFile3 = CreateFileW(lpFileName, DELETE,
            FILE_SHARE_VALID_FLAGS, 0, CREATE_NEW, FILE_FLAG_DELETE_ON_CLOSE, 0);
        if (hFile3 == INVALID_HANDLE_VALUE)
        {
            print_error("CreateFile");
        }
        else
        {
            CloseHandle(hFile3);
            DbgPrint("CreateFile OK\n");
        }
        ReadTest(hFile2);
        if (hFile2)
        {
            CloseHandle(hFile2);
        }
    }
}
and the output:
OpenFile=c0000056
A non close operation has been requested of a file object with a delete pending.
ReadFile=OK
OpenFile=c0000034
Object Name not found.
CreateFile OK
ReadFile=OK
It depends on how you define "file system" and "OS". Generally, I understand a file system as the way data storage is organized on a device; the OS is then responsible for the I/O of data and files. In particular, if your script wants to delete a file, it calls some utility like rm and provides the name of the file. This utility is a program that makes an appropriate system call. That system call is part of the operating system and is executed in privileged mode. It works out what to do and how to do it (e.g. which drivers should be used to mark HDD blocks as free on a particular drive, or whether some remote procedure should be called or a Samba server involved, etc.).
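For instance, here is a minimal sketch (my illustration, not from the question) of what such a utility boils down to on Linux; which driver handles the request, and what the deletion actually means, is decided behind the mount point:

#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc < 2)
    {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }
    // rm ultimately comes down to the unlink(2) system call; the kernel
    // dispatches it to whichever file system driver backs the mount point.
    if (unlink(argv[1]) != 0)
    {
        perror("unlink");
        return 1;
    }
    return 0;
}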
So, to give an answer to your question 1, I would tend toward answer b.
I am working on a very old Node.js application which creates a new child process using forever-monitor. The logs of this child process are handled by forever-monitor only. This is what the configuration looks like:
var child = new (forever.Monitor)(__dirname + '/../lib/childprocess.js', {
    max: 3,
    silent: true,
    options: [program.port],
    'errFile': __dirname + '/../childprocess_error.log',
    'outFile': __dirname + '/../childprocess_output.log'
});
Everything is working fine in this setup. The new requirement is to rotate these logs every 12 hours: every 12 hours a new file should be created containing all the content of childprocess_output.log and be stored in some other directory, with the timestamp appended to the name (e.g. childprocess_output_1239484034.log).
And the original file childprocess_output.log should be reset, that is, all its content deleted so that it starts logging afresh.
I am trying to understand which npm library I should use for this purpose. I googled a bit and found a few npm libraries that match my requirement, but their download numbers were really small, so I doubt their reliability.
Which library do Node.js developers use for log rotation?
Also, my last resort would be to use the Linux tool logrotate if I can't find an appropriate library in Node. I am avoiding logrotate because I want my application to handle this scenario and not depend on the instance configuration.
You can use fs (the file system module), with methods like statSync and renameSync wrapped in try/catch blocks.
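A minimal sketch of that idea, reusing the file names from the question; note it copies and then truncates instead of renaming, since the requirement is that the original file be reset while forever-monitor keeps writing to its already-open descriptor (the archive directory is a made-up example and is assumed to exist):

var fs = require('fs');
var path = require('path');

var LOG_FILE = path.join(__dirname, 'childprocess_output.log');
var ARCHIVE_DIR = path.join(__dirname, 'archive'); // hypothetical destination

function rotate() {
    try {
        fs.statSync(LOG_FILE); // throws if the log does not exist yet
        var target = path.join(ARCHIVE_DIR,
            'childprocess_output_' + Date.now() + '.log');
        // Copy the current contents into the archive...
        fs.writeFileSync(target, fs.readFileSync(LOG_FILE));
        // ...then truncate the original so it starts logging afresh.
        fs.truncateSync(LOG_FILE, 0);
    } catch (err) {
        console.error('rotation skipped:', err.message);
    }
}

setInterval(rotate, 12 * 60 * 60 * 1000); // every 12 hours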
I am working on more of a security dashboard: it watches for changes to files in an entire home directory containing hundreds of sites (all Joomla, so a lot of files).
In order to keep on top of potential security issues we want to watch for file changes efficiently, without creating unnecessary CPU/memory overhead. We want to watch at a fast interval, but I know it's a balancing act when you want to keep a side process from using more CPU than it should.
I have tried to use "watch" with the following code, running in the home directory:
var watch = require('watch');
var fs = require('fs');

watch.createMonitor(__dirname, {
    interval: 500,
    filter: function (file, stat) {
        return file.indexOf('index.php') !== -1;
    }
}, function (monitor) {
    monitor.filter(function (file) {
        console.log(file);
    });
    monitor.on('created', function (file, stat) {
        console.log(file + ' new');
    });
    monitor.on('changed', function (file, stat) {
        console.log(file + ' changed');
    });
    monitor.on('removed', function (file, stat) {
        console.log(file + ' deleted');
    });
});
However this spikes the CPU to over 100% of a single core (sometimes two) out of 8. Memory also climbs to about 20% of 8 GB pretty quickly. And this is all just to create the watch events on all the files, before it can actually detect any file changes.
I know the issue is that it walks every file individually, and only skips a file if the filter rejects it. Typically all I need to watch is the index.php in every directory, down to a fairly consistent depth (with some exceptions).
Is there a module already built to do this? Or is this something new? All modules I find assume it's a smaller directory (like watching LESS or something), so they are not built for this sort of application at all.
Any ideas? I know this code will need to be scrapped, as there is no way I can see to stop the CPU overhead.
Do not use the package 'watch'; just use fs.watch(...).
package 'watch':
- consistent API across OSes
- very slow, because it is implemented mostly in Node; look at the source to see how it works
- source code: https://github.com/mikeal/watch/blob/master/main.js
fs.watch(...):
- inconsistent API, and not all OSes are fully supported
- very fast, because it reuses OS features
- documentation: http://nodejs.org/docs/latest/api/fs.html#fs_fs_watch_filename_options_listener
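A minimal sketch of the fs.watch approach, recursing manually so it also works on platforms without recursive watching; the index.php filter is taken from the question, and error handling is omitted for brevity:

var fs = require('fs');
var path = require('path');

// One OS-level watcher per directory, instead of polling every file
// the way the 'watch' package does.
function watchTree(dir) {
    fs.watch(dir, function (event, filename) {
        if (filename && filename.indexOf('index.php') !== -1) {
            console.log(event + ': ' + path.join(dir, filename));
        }
    });
    fs.readdirSync(dir).forEach(function (name) {
        var full = path.join(dir, name);
        if (fs.statSync(full).isDirectory()) {
            watchTree(full); // recurse into subdirectories
        }
    });
}

watchTree(__dirname);

Keep in mind that each watcher consumes an OS resource (an inotify watch on Linux), so a very large tree may require raising the corresponding kernel limits.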
As you can see here:
https://github.com/visionmedia/express/blob/master/examples/multipart/index.js
Express supports file uploads by default and stores each uploaded file in the temp folder for later use.
My question is: is it safe?
As I see it, an attacker could fill up the entire temp folder with garbage files, with nothing in place to control it.
Should I check each POST request and delete any unused files?
Let me suggest two solutions to your problem.
1. Use a virtual drive for your upload location. If your server is running on Linux, it is very easy to mount a virtual file system which lives in memory only. Files will be written there faster than to a real hard drive, and if you run into the problem you describe, it is only a matter of cleaning out the virtual drive or restarting the server. Look at this article for an explanation of RAM disks.
2. Make sure that you only accept a maximum of x uploads from the same IP address during a 24-hour period. Combine this with solution 1 for maximum effect. One way of implementing this is to have a global object with upload counts for each IP address, and then clear it out every 24 hours:
var uploads = {};
setInterval(function () {
    uploads = {};
}, 24 * 60 * 60 * 1000); // Run every 24 hours

var onUpload = function (req, file) {
    if ((uploads[req.ip] || 0) > maxUploadsAllowedPrUser)
        fs.unlink(file, function () {}); // Delete the file
    else
        uploads[req.ip] = (uploads[req.ip] || 0) + 1; // Keep the file, and increase the count
};
Question: Why do I get this error while scanning a user's 'My Documents' folder, but not when I scan the 'My Music/My Pictures/My Videos' directories?
Secondary, less important question: is there a way to avoid this without specifically filtering these folders out, or using a try/catch block?
I prefer answers that teach me how to fish instead of just giving me a fish; at this point I am just not sure where to look to answer this question. I've read through documents about elevating permissions and iterating through the file system, and spent a good week looking into why I can set DirectoryInfo on 'User\My Music' but not 'User\Documents\My Music' (link), and would enjoy a little boost in a different direction in regards to learning more.
I catch the UnauthorizedAccessException that is initially thrown when attempting Directory.GetFiles(path, "*", SearchOption.AllDirectories), where path is the user's 'My Documents'. To handle the exception I know that I need to walk the directory manually, which works, returning the files from the sub-directories.
The code for the initial GetFiles function:
public static string[] GetFiles(string path)
{
    string[] files;
    try
    {
        files = Directory.GetFiles(path, "*", SearchOption.AllDirectories);
    }
    catch (UnauthorizedAccessException)
    {
        files = WalkDirectory(path);
    }
    return files;
}

public static string[] WalkDirectory(string path)
{
    List<string> files = new List<string>();
    DirectoryInfo dir = new DirectoryInfo(path);

    foreach (DirectoryInfo subDir in dir.GetDirectories())
    {
        try
        {
            files.AddRange(WalkDirectory(subDir.FullName));
        }
        catch (UnauthorizedAccessException)
        {
            // Complete failure to walk the listed directory
            throw;
        }
    }

    foreach (FileInfo file in dir.GetFiles())
    {
        files.Add(file.FullName);
    }

    return files.ToArray();
}
This works out perfectly until the code attempts to walk the hidden folders My Music, My Pictures, or My Videos. No matter how I re-code the walk over the hidden files, I keep receiving the UnauthorizedAccessException.
I understand completely that I am going to code around this. Mainly what I am curious to know is: why does the exception happen under a user's folder at all?
An assumption I am making is that the folder is a symlink to another directory, because I can make the path ?:\users directory\user\My (Music, Pictures, or Videos) and the code then walks those directories without any issues. This only happens when trying to scan the directory files after setting them from within the user's My Documents.
OS: Windows 7
User privileges: Administrator
Application elevated to run as administrator
I was speaking about this with a friend who is not technical but knows enough tech to hold a conversation, and he helped me narrow this question down further. This is actually a duplicate question, answered at Check if a file is real or a symbolic link.
The folder is a symbolic link that was placed there for backwards-compatibility purposes, according to this TechRepublic article: Answers to some common questions about symbolic links, under the section "Windows Vista and Windows 7 have built-in symbolic links", paragraph 2.
To specifically avoid attempting to scan this directory without a try/catch block on an UnauthorizedAccessException, the folder's attributes need to be checked to determine whether the folder or file in question is a symbolic link, which again was answered in the Stack Overflow question listed above.
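For illustration, a minimal sketch of that attribute check; the skip-and-continue usage shown in the trailing comment is my own example, not from the linked answer (junctions and symbolic links both carry the ReparsePoint attribute):

using System.IO;

public static class SymlinkCheck
{
    public static bool IsReparsePoint(string path)
    {
        // The "My Music" folder under My Documents is a junction on
        // Windows 7, and junctions/symlinks both show up as reparse points.
        FileAttributes attributes = File.GetAttributes(path);
        return (attributes & FileAttributes.ReparsePoint) != 0;
    }
}

// Inside WalkDirectory, skip reparse points before recursing:
//     if (SymlinkCheck.IsReparsePoint(subDir.FullName)) continue;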