Does anyone know of a Linux-based script/program that will run continuously on the server, monitor a folder (preferably with sub-folders) for image files, and optimise them, à la smush.it, pngout, jpegtran, etc.? Preferably all of those tools.
I know there are lots of Linux apps that will call upon these tools, but I want one that will monitor a folder containing our website images and optimise new images (ignoring the images it has already optimised). On the first run it should process everything, but after that it should know what it has already handled.
Does such a tool exist?
If you can use a hook into inotify, use that. For example, let incron monitor specific directories for files being added or modified; when that happens, incron passes control to a program of your choice. convert, from ImageMagick, can be useful for reducing image file sizes.
incron can pass the names of altered/added files to your command or script, allowing it to work specifically on the changed files, by using these wildcards as parameters to your command (see the example incrontab entry after the list):
$@ watched filesystem path
$# event-related file name
$% event flags (textually)
$& event flags (numerically)
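A minimal sketch of how this could be wired together, assuming incron plus jpegtran and optipng are installed; the watched path, the script name, and the marker-file convention are all invented for illustration. Note that incron watches one directory per entry, so each sub-folder needs its own line.

/var/www/images IN_CLOSE_WRITE,IN_MOVED_TO /usr/local/bin/optimise-image.sh $@/$#

The script it calls could look roughly like this:

#!/bin/sh
# optimise-image.sh (hypothetical): optimise one image in place, skipping files
# that were already processed on an earlier run.
f="$1"
done_list="/var/cache/optimised-images.list"

# Skip files already recorded as optimised (this also helps stop the rewrite
# below from re-triggering the incron job endlessly).
grep -qxF "$f" "$done_list" 2>/dev/null && exit 0

case "$f" in
  *.jpg|*.jpeg) jpegtran -optimize -copy none -outfile "$f.tmp" "$f" && mv "$f.tmp" "$f" ;;
  *.png)        optipng -quiet "$f" ;;
esac

echo "$f" >> "$done_list"

For the very first run you could feed every existing image through the same script, e.g. with find /var/www/images -type f -exec /usr/local/bin/optimise-image.sh {} \; which covers the "do everything once, then only new files" requirement.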
Is there any alternative to the popular Windows desktop search "Search Everything" (by Voidtools) for Linux? "Everything" is the only reason I have to stay with Windows and am not able to switch to Linux as my primary OS. I have been looking for an alternative for quite some time. I guess only someone who has already used "Everything" on Windows can understand what I am looking for. Any help is appreciated.
Take a look at Recoll.
Recoll finds keywords inside documents as well as file names. It can search most document formats. It can reach any storage place: files, archive members, email attachments, transparently handling decompression. One click will open the document inside a native editor or display an even quicker text preview. The software is free, open source, and licensed under the GPL.
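If you would like to try it from a terminal first, the recoll package also ships command-line tools; a minimal sketch (the query string is just an example, and indexing follows your Recoll configuration):

recollindex                # build or update the index
recollq "annual report"    # run a full-text query from the command line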
I don't really know what your use case is.
To have an index of all file names and search for them, use
updatedb & locate
To manually find things in files, use (only as an example):
grep <search-string> -R *
find . -type f -exec grep <search-string> {} \;
Indexing source code:
ctags & etags
More information about text indexing:
Command-line fulltext indexing?
Hope this helps
I am using the Linux operating system, and most of my application is in Tcl. I am thinking of adding a module to it for creating an xls file with multiple sheets and colored boxes as required. Is there a way I can create an xls file? CSV will not help me for this task. Any help/suggestion/keyword will be appreciated. Thanks in advance.
The Excel formats are formidably tricky (only CSV is remotely easy to handle, and that's because it doesn't do much). I'd use Apache POI for this, even bearing in mind that it is Java code and so likely to be a bit awkward to integrate with your Tcl code.
If you were able to run on Windows instead, the TCOM package would let you talk to a running Excel instance to do the work more directly. That package is platform-specific though…
I got this done with simple trial and error in Perl and half a day spent learning Spreadsheet::WriteExcel:
http://homepage.tinet.ie/~jmcnamara/perl/WriteExcel.html
I create a Perl file from the Tcl data and, at the end, execute the Perl script, which generates the final xls output. Thanks everyone for your effort.
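A rough sketch of that approach, assuming Perl with Spreadsheet::WriteExcel is installed; the file names, sheet name, format, and cell data are invented for illustration:

# generate_xls.tcl - write a small Perl script from Tcl data, then execute it
set rows {{Name Score} {alpha 10} {beta 20}}

set f [open gen_xls.pl w]
puts $f "use Spreadsheet::WriteExcel;"
puts $f "my \$wb  = Spreadsheet::WriteExcel->new('report.xls');"
puts $f "my \$ws  = \$wb->add_worksheet('Results');"
puts $f "my \$hdr = \$wb->add_format(bold => 1, bg_color => 'yellow');"
set r 0
foreach row $rows {
    set c 0
    foreach cell $row {
        if {$r == 0} {
            puts $f "\$ws->write($r, $c, '$cell', \$hdr);"
        } else {
            puts $f "\$ws->write($r, $c, '$cell');"
        }
        incr c
    }
    incr r
}
puts $f "\$wb->close();"
close $f

exec perl gen_xls.pl

Multiple sheets would just be additional add_worksheet calls written into the generated script.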
I would like to start writing lots of tiny "utility" NodeJS-based apps -- things like stream filters, generators, and the like, that might be 30-40 LOC each. Each one would consume nearly zero CPU, RAM, or bandwidth (once the overhead of NodeJS and OS processes is factored out). The point is, I want a simple way to run thousands of them.
What do I need? Are there any PaaS's that can run thousands of NodeJS apps for a reasonable price ($10/mo)? Is there some kind of middleware that can give me thousands of sandboxed "partitions" on top of one Node process? Or is there some binary that's made for this that I could put on a VPS?
You can use the vm module for sandboxing JavaScript code. It is still a work in progress, so be sure to read the caveats.
Functions that you can use (a short sketch follows the list):
runInThisContext: runs code within the current global context (it has access to global variables, but not to the local scope).
runInNewContext: takes a separate object to use as the global variables for a new context.
runInContext: takes a previously created context object and runs the code in it.
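A minimal sketch of the last two functions (the sandbox contents are invented, and the exact behaviour varies a little between Node versions):

var vm = require('vm');

// runInNewContext: the object you pass in acts as the script's global variables.
console.log(vm.runInNewContext('x * 2', { x: 21 }));   // prints 42

// runInContext: create a context once, then run several snippets inside it.
var context = vm.createContext({ counter: 0 });
vm.runInContext('counter += 1', context);
vm.runInContext('counter += 1', context);
console.log(vm.runInContext('counter', context));      // prints 2

Note that vm gives you a separate global scope, not a security boundary or resource isolation, so it only covers part of the "thousands of sandboxed partitions" requirement.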
How can I search for a code snippet in all folders/files of a site?
I can't find the line of code I'm looking for. It's a large site, and looking through it file by file is not reasonable. How can I go about finding this snippet?
Other details:
It's a Drupal site
I use a Mac
Code editing software I currently have available: Coda, TextWrangler, Dreamweaver
Any help would be incredibly appreciated.
Just use grep - take a closer look at this tool here: http://www.cyberciti.biz/faq/unix-linux-finding-files-by-content/
FYI: it is a command-line tool.
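For example, from Terminal on your Mac, something along these lines (the path and snippet are placeholders):

grep -rn "the snippet you remember" /path/to/your/drupal/site

Here -r searches recursively through all sub-folders and -n prints the line number of each match.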
Also consider that Drupal may be storing something in the database, even PHP code (which happens when you have the PHP filter enabled for some content and put PHP inline). So you may not find the specific code snippet in the application's code, and in that case you will probably need to look into the database.
While grep via a terminal will work wonderfully for OSX and *nix users, those on Windows may find that grep isn't available.
For Windows users who want a solution to this, you can try the following:
1) Using Cygwin (http://www.cygwin.com), which can be installed with grep included
or
2) Grabbing a copy of grepWin (http://tools.tortoisesvn.net/grepWin.html), a grep tool with a frontend UI.
I am collecting logs from several custom-made applications. Each application has its own log format. What I'm looking for is a central tool which would allow me to search through all of my logs. This means the tool would have to be able to define a different regex (or the like) for each log file (marking where a record begins and ends, and what the fields are). I've been trying Splunk, but I'm not happy with it: performance is slow, the free version limits the amount of indexed data per day, and it's not as flexible as I want it to be.
Could you recommend a software (preferably free or cheap) for the task?
You can try Lucene. It is free. It is written in Java, and it allows full-text search over large amounts of data. It is not a complete application but rather a library, so you have to write code that uses it to index and to search your logs. You may have to define different document types, or at least different indexing functions, for your logs, but then search works beautifully.
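As a rough sketch of what that code might look like against a recent Lucene release (class names and signatures differ between versions, and the index path, field name, and sample log line are made up):

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;
import java.nio.file.Paths;

public class LogIndexer {
    public static void main(String[] args) throws Exception {
        Directory dir = FSDirectory.open(Paths.get("log-index"));

        // Index one log record per Lucene document; each application's parser
        // would split its own format into whatever fields make sense.
        try (IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()))) {
            Document doc = new Document();
            doc.add(new TextField("message", "2016-01-01 ERROR payment gateway timeout", Field.Store.YES));
            writer.addDocument(doc);
        }

        // Search the indexed records.
        try (DirectoryReader reader = DirectoryReader.open(dir)) {
            IndexSearcher searcher = new IndexSearcher(reader);
            Query query = new QueryParser("message", new StandardAnalyzer()).parse("timeout");
            TopDocs hits = searcher.search(query, 10);
            System.out.println("matching records: " + hits.totalHits);
        }
    }
}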
If you can use Windows, try out Microsoft's best tool ever, Logparser. I wish there was such a simple tool for Unix. But there isn't. And although I've kept wanting to get around to making a Unix version of Logparser, I just haven't had the time.
Note: This would be a great project for someone with time on their hands or for a grad-student somewhere!
http://www.splunk.com/
Never used it, but have heard of it.