I need a command-line tool for Windows CE to generate MD5 checksums recursively from a directory and, preferably, generate an XML file as a result.
Does anybody know of such a tool? I have searched all over the internet, including this page, but could not find one.
Thanks!
Use dump.exe from the RAPI tools. Get it here: http://itsme.home.xs4all.nl/projects/xda/tools.html. Quote from their page:
using the -md5, -sha1, -sha256, -crc or -sum options, you can use dump.exe to calculate the checksum, crc or hash of a specific region of a file
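That covers hashing one file at a time; the recursion and the XML output would have to be scripted around it. Here is a minimal sketch of such a wrapper, assuming a Unix-style shell on the desktop side (e.g. Cygwin/MSYS) and assuming dump.exe accepts a plain file argument after the hash option - the invocation and its output format are guesses, so check dump.exe's own usage text first:

    # hypothetical wrapper: hash every file under a directory, emit simple XML
    echo '<checksums>' > checksums.xml
    find /path/to/dir -type f | while read -r f; do
        sum=$(dump.exe -md5 "$f")                       # invocation is an assumption
        echo "  <file name=\"$f\" md5=\"$sum\"/>" >> checksums.xml
    done
    echo '</checksums>' >> checksums.xml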
I need to search my drive for about 200 passwords that could possibly be on it. Doing this one at a time is time-consuming, so I would like to find a tool that accepts a file containing a list of terms to search for.
Try using grep for Windows:
http://gnuwin32.sourceforge.net/packages/grep.htm
The command-line argument --file=<file> (short form -f <file>) tells grep to read the patterns to match from <file>, one per line.
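For example (passwords.txt and C:\data are placeholder names; -r recurses into subdirectories and -l prints just the names of matching files):

    grep -r -l --file=passwords.txt C:\data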
I have a scenario where .egp's are created in a Windows environment. As part of a migration, these need to be moved to a UNIX/Linux server and upgraded from EG 4.1 to 4.2, and we have to make the programs comply with Linux/UNIX standards (like font casing) and point the directory paths at the Linux/UNIX environment.
As we have around 300 .egp's to be migrated: say we first use the migration wizard in SAS EG 4.2 to automatically convert the .egp's to 4.2 standards; the biggest question is then how to incorporate the changes to the SAS programs. Is there any automated way to extract the program from its node in the .egp, edit it, and insert it back at the same node?
Thanks in advance.
If the code exists purely in EG, there's no way to do this that I'm aware of via SAS - EG is not itself programmable.
If the code objects are stored as physical files outside of EG they could conceivably be imported into EG (by looping over the folders involved) and some text substitution done.
Alternatively, it's a job for a full-on scripting language. EG files are zip files, and once uncompressed they contain .sas text files in subfolders within the archive. It should be possible to iterate over them all and make the required changes, along the lines of the sketch below.
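A minimal shell sketch of that approach, assuming the .egp's sit in one folder and a simple path substitution is all that's needed (the sed pattern is a placeholder, and whether EG 4.2 accepts a re-zipped .egp should be verified on one file before running this over all 300):

    for egp in *.egp; do
        tmp=$(mktemp -d)
        unzip -q "$egp" -d "$tmp"
        # edit every embedded .sas program in place (placeholder substitution)
        find "$tmp" -type f -name '*.sas' \
            -exec sed -i 's|C:\\Projects|/data/projects|g' {} +
        # rebuild the archive under the same name
        rm "$egp"
        (cd "$tmp" && zip -q -r "$OLDPWD/$egp" .)
        rm -rf "$tmp"
    done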
In neither case will it be much fun. (Though doing it manually doesn't sound great either.)
Talk to SAS - they may have a tool they've put together for someone else they can let you have.
I'm writing a project where I need to identify certain file formats.
For some formats I have found signatures that I can use to identify them easily (mp3, ogg), but with other formats I have a big problem (like MPEG ADTS) - I just cannot find what kind of signature can be used for them.
I found out that the file utility for the Linux environment can do it.
I tried to search its source code, but I've found nothing.
I found that the file utility holds its database in the magic.mgc file. But it's held in binary form.
Does someone perhaps know how to find that database in plain-text format?
That utility isn't a Linux-specific utility; it's the version of the UN*X file command originally written by Ian Darwin. The binary .mgc file is generated from a bunch of source files.
Your Linux distribution probably has a source code package for it; where you get that package, and how you install it, depends on which distribution you're using.
The source files from which the .mgc file was generated might also be available on your distribution without installing the source package for file; if so, you could use the file command to generate it, using the -C flag. I don't see them anywhere obvious on my Ubuntu 12.04 virtual machine, so that might require some other package to be installed (file itself is installed). (On OS X, they're in the directory /usr/share/file/magic.)
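On a Debian/Ubuntu system, for example, the round trip might look like this (a sketch: it assumes deb-src lines are enabled in sources.list, and the layout of the source tree can vary between releases):

    # fetch file's source package and inspect the plain-text magic rules
    apt-get source file
    cd file-*/magic
    ls Magdir                 # one plain-text fragment per family of formats
    grep -ril 'ADTS' Magdir   # locate the MPEG ADTS patterns, for instance
    # pre-parse the rules back into a binary .mgc database
    file -C -m Magdir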
Alternatively, you could download the standard version of that file (which might have been modified by your distribution, so you might not want that version) and modify and build it.
Note that, on some versions of UN*X systems, the bulk of the work done by the file command is done in library routines in the "libmagic" library; see whether your distribution has that or can install it (try, for example, man libmagic) and whether it can do the job for you.
Background (not necessary to read)
I'm tinkering with MS Office files for work (trying to figure out the quickest, easiest way to automate generation of arbitrary-length Excel and PowerPoint files). Since actual Excel files are just zipped archives with .xlsx appended to the filename, I've been unzipping them, editing the XML, rezipping them, and seeing whether OpenOffice can still load them.
However, I've realized (after not too much such testing, thankfully) that, by default, the zip command in bash (or, at least, on my Mac) is zipping the files in a format that only requires unzip v1.0 to extract, but normal Excel files are zipped in such a way that they require v2.0 to extract. I confirmed this is the problem by zipping and unzipping an Excel file that I knew loaded normally, and then trying to load it. OpenOffice was displeased.
So, I know I need to make the file zip exactly the way Excel does, but I'm not sure how to make that happen. I have zip version 3 on my computer, so hopefully if the zip/unzip release cycles are synchronized it should be possible, but I didn't see anything on the man page that immediately seemed to be the solution.
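One way to see what an archive actually requires is Info-ZIP's own listing tools (book.xlsx below is just a placeholder name):

    # per-entry details, including "minimum software version required to extract"
    zipinfo -v book.xlsx | grep -i 'minimum software version'
    # compression method per entry: "Stored" vs "Defl:N"
    unzip -v book.xlsx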
edit:
And zip -9 (which zip -h helpfully says instructs zip to 'zip better') still only requires v1.0 to extract.
Question:
How can I specify in bash that I want zip to zip a file in such a way that it would require unzip v2.0 to unzip?
Often, the reason for an incompatibility between compressed files produced by different versions is the compression algorithm used. If the files were compressed with an algorithm that didn't exist in zip 1.0, that would cause the incompatibility you're seeing.
Look at the man page for your zip utility and see if there's an option to specify the type of compression to use. If there is, look at the existing files created by Excel, find out what compression algorithm they're compressed with, and use that.
On my Linux system, zip reports "This is Zip 2.31 (March 8th 2005), by Info-ZIP.", and it does not have an option for specifying the compression algorithm. On my Windows system, 7-zip does have the option, and it looks like they do have a Mac version available, so you could try that if your zip utility doesn't support that option.
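If your Mac's zip turns out to be Info-ZIP 3.x (which the question suggests), it may support choosing the method directly; a sketch, with unpacked_workbook and rebuilt.xlsx as placeholder names:

    # rebuild the archive from the unpacked workbook, forcing the deflate method
    # -Z/--compression-method selects the algorithm; -X omits extra file attributes
    cd unpacked_workbook
    zip -q -r -X -Z deflate ../rebuilt.xlsx .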
I'm developing a website and want to create a directory named after the user's username, which is going to be their email address (so I don't have to generate new IDs, etc.).
I've made some tests and it seems to work fine. Also, I didn't find any documentation against using the "#" in a directory name, but could I run into some problem in the future with this approach?
I mean, might some browser not be able to load images from this directory, or some other problem?
If you plan to run Perl scripts (and possibly other languages) against those files, you will need to remember to escape the # sign. It's not a huge problem, but I personally would not do it.
More importantly if the path is visible to the browser you would be disclosing the user's email address to the whole world.
I would suggest using something like an MD5 hash of the user's email instead. It is (relatively) unique, and you can recalculate it very easily if you need to. Gravatar uses this approach for instance. See: http://en.gravatar.com/site/implement/hash/
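Computing that hash is a one-liner in a shell, following Gravatar's recipe of trimming and lowercasing first (a sketch; on macOS the command is md5 rather than md5sum):

    # Gravatar-style identifier: trim whitespace, lowercase, then MD5
    echo -n "User@Example.com " | tr -d '[:space:]' | tr '[:upper:]' '[:lower:]' | md5sum | awk '{print $1}'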
No, there should be no problems. Browsers just read the file content; they don't care much about the name (it's the headers that matter).
Historically some remote filesystems have used the # to "escape" from normal path processing to do "interesting" stuff.
Some version control systems use # to denote a certain version of a path (e.g. Subversion, ClearCase).
Some other tools use # in "user#remote_host" constructs - AFAIK rsync is one of them - which might bite you; you should check whether such a tool is used somewhere on your site for backup, syncing, or the like.
So - I would not use that character within filenames.