I want to copy a few files to a disk image (*.img extension). On Linux this command is used:
mcopy -i "$target_img" $file ::/
What is the equivalent command on Mac (OS X)? How can I copy files to an image?
The hdiutil command can also be used to create a disk image based upon a folder.
hdiutil create {imagename}.dmg -volname "{Name of volume}" -srcfolder /{path to folder}
Using the above command, we could perform the following:
hdiutil create ~/Desktop/newimage.dmg -volname "New Disk Image" -srcfolder ~/Desktop/myfolder
The name of the disk image and the volume name are two different things: the first simply refers to the filename of the disk image created. When the image is mounted or restored to a disk, the volume name is what's referred to. So if you mount this new disk image, you'll see that OS X mounts it as "New Disk Image".
Disk images generated using the above methods are read-only and formatted as HFS+ by default, though their size will be just enough to contain all the data needed.
Standard disk images cannot increase in size, so you'll need to specify a particular size if you intend to add data to the image later. The -size parameter can be used to specify the size of the disk image to create.
If you will need to modify or add more data to the disk image, use the parameter -format UDRW, which stands for read/write.
Combining all of the above, the command would be:
hdiutil create ~/Desktop/newimage.dmg -volname "New Disk Image" -size 1g -format UDRW -srcfolder ~/Desktop/myfolder
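Back to the original question of copying files into an existing .img: if it is a FAT image (the usual target of mcopy), one option is to install mtools on macOS itself, e.g. via Homebrew, and keep using the exact same mcopy invocation; alternatively, hdiutil can attach the image so you can copy with plain cp. A rough sketch (the volume name under /Volumes is just a placeholder):
brew install mtools                   # provides mcopy on macOS (assumes Homebrew is installed)
mcopy -i "$target_img" "$file" ::/    # same command as on Linux
# or: attach the image, copy, then detach
hdiutil attach "$target_img"          # prints the device and mount point
cp "$file" "/Volumes/VOLUME_NAME/"    # VOLUME_NAME: whatever the image mounts as
hdiutil detach "/Volumes/VOLUME_NAME"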
I created an image file using dd of my disk /dev/sda, which fdisk says is 500107862016 bytes in size. The resulting image file is 500108886016 bytes, which is exactly 1024000 bytes larger.
Why is the image file 1MB larger than my source disk? Is there something related to the fact that I specified bs=1M in my dd command?
When I restore the image file onto another identical disk, I get "dd: error writing ‘/dev/sda’: No space left on device" error. Is this a problem? Will my new disk be corrupted?
conv=noerror makes dd(1) continue after a read error, which is not what you want here. conv=sync pads incomplete blocks (mainly the last one) with zeros up to a full block, so this zero padding of the last block is what makes your file larger than the actual disk.
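The numbers in the question match this explanation exactly: with bs=1M the disk does not divide evenly into 1 MiB blocks, and conv=sync pads the leftover.
500107862016 B = 476940 * 1048576 B + 24576 B
conv=sync pads that final 24576 B fragment up to a full 1048576 B block,
adding 1048576 - 24576 = 1024000 B, which is precisely the observed size difference.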
You don't need any of the conv options you used. No conversion is going to be made, and dd(1) will write the incomplete last block if the disk size is not a multiple of the block size (which is the case here).
Just retry your command with:
dd if=/dev/sda of=yourfile.img
and then
dd if=yourfile.img of=/dev/sdb
If you want to use a larger buffer size (not needed, since you are reading a block device and the kernel doesn't impose a block size for reading block devices), just use a multiple of the sector size that is also a divisor of the whole disk size. (The traditional advice of "one full track" is absurd today, as modern disks' tracks are purely logical and have no relationship with the actual disk geometry.)
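For example, for this particular disk, 8 KiB happens to be the largest power-of-two block size that divides 500107862016 bytes evenly, so a buffer like this would produce no partial last block (purely optional):
dd if=/dev/sda of=yourfile.img bs=8192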
I have a pen drive to which I restore an image made with the dd command on another system.
Every time I restore the image, I have to run fsck afterwards.
Is it possible that the source system was already corrupted when I generated the image, or is it that I corrupt the pen drive when I restore the image file?
Many thanks, and sorry for my English!
To know whether your data has been corrupted, you should compute a checksum (md5sum or sha1sum) when you create the image.
# dd if=/dev/sdb of=my_image.img # --> here you create your image
# md5sum /dev/sdb # --> here you compute your checksum
e2985322ca0ed3632027a6eb8fe70ae8 /dev/sdb
# md5sum my_image.img # --> check the integrity of the image
e2985322ca0ed3632027a6eb8fe70ae8 my_image.img
Thus, when you flash to another device, on an other computer or any system, you are able to check the integrity of data.
# dd if=my_image.img of=/dev/hdc # --> here you flash your image to a device
# md5sum /dev/hdc # --> check the integrity of the flashed data
e2985322ca0ed3632027a6eb8fe70ae8 /dev/hdc
Of course the obtained hash here is just an example but it is always the same for the same data. If any byte was altered the checksum would be totally different.
Obviously, if you copy to a device with dd (or even from one), make sure it is not mounted. Something like mount | grep /dev/hdc should return nothing if you want to flash the /dev/hdc device.
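One extra caveat: if the target device is larger than the image, hashing the whole device will not reproduce the image's checksum. Assuming GNU coreutils are available, you can hash just the first image-sized portion of the device instead:
# hash only as many bytes of the device as the image contains
head -c "$(stat -c%s my_image.img)" /dev/hdc | md5sum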
This is just guessing, since you do not provide much information about how the image was created, but yes, it is possible that the file system was in an unclean state when the image was taken. It is also possible that the file system was simply still mounted at that point in time; always unmount a file system (or mount it read-only) before you take an image of it.
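If you still have the image file, one way to tell the two cases apart (a sketch, assuming a Linux host and an image that holds a single filesystem rather than a whole partition table) is to fsck the image itself through a read-only loop device; if the image is already dirty, the source system was the problem, not the restore:
losetup -r -f --show my_image.img    # read-only loop device, prints e.g. /dev/loop0
fsck -n /dev/loop0                   # -n: report problems only, change nothing
losetup -d /dev/loop0                # detach when done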
Is there any way in C# to check whether an ISO file is valid, i.e. that it is in a valid ISO format? Is any other check possible?
The scenario: a text file (or a file in any other format) is renamed to .iso and passed on for further processing. I want to check whether this ISO file is a valid ISO file or not. Is there any way to do this programmatically, e.g. by checking some property of the file, the file header, or anything else?
Thanks in advance for any reply.
To quote the wiki gods:
There is no standard definition for ISO image files. ISO disc images
are uncompressed and do not use a particular container format; they
are a sector-by-sector copy of the data on an optical disc, stored
inside a binary file. ISO images are expected to contain the binary
image of an optical media file system (usually ISO 9660 and its
extensions or UDF), including the data in its files in binary format,
copied exactly as they were stored on the disc. The data inside the
ISO image will be structured according to the file system that was
used on the optical disc from which it was created.
reference
So you basically want to detect whether a file is an ISO file or not, rather than check the file to see whether it's valid (e.g. incomplete, corrupted, ...)?
There's no easy way to do that and there certainly is not a C# function (that I know of) that can do this.
The best way to approach this is to guess the number of bytes per block stored in the ISO.
Guess, or simply try all possible situations one by one, unless you have an associated CUE file that actually stores this information. PS. If the ISO is accompanied by a same-name .CUE file then you can be 99.99% sure that it's an ISO file anyway.
Sizes would be 2048 (user data) or 2352 (raw or audio) bytes per block. Other sizes are possible as well; I just mention the two most common ones. In the case of 2352 bytes per block, the user data starts at an offset within the block, usually 16 or 24 bytes depending on the Mode.
Next I would try to detect the CD/DVD file-systems. Assume that the image starts at sector 0 (although you could for safety implement a scan that assumes -150 to 16 for instance).
You'll need to look into the specifics of ISO 9660 and UDF for that. Sectors 16, 256 etc. will be interesting sectors to check.
Bottom line, it's not an easy task to do and you will need to familiarize yourself with optical disc layouts and optical disc file-systems (ISO9660, UDF but possibly also HFS and even FAT on BD).
If you're digging into this, I strongly suggest getting IsoBuster (www.isobuster.com) to help you see what the size per block is, what file systems there are, to inspect the different key blocks, etc.
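To illustrate just the simplest case (2048 bytes per block, image starting at sector 0): ISO 9660 stores a volume descriptor at sector 16 whose identifier bytes read CD001, and the UDF volume recognition sequence starts in the same area with identifiers such as BEA01/NSR02/NSR03. A rough shell sketch of that check, easily ported to C#, might look like this (file.iso is a placeholder):
# identifier bytes of the descriptor at sector 16 live at offset 16*2048+1 = 32769
id=$(dd if=file.iso bs=1 skip=32769 count=5 2>/dev/null)
case "$id" in
  CD001)             echo "looks like ISO 9660" ;;
  BEA01|NSR02|NSR03) echo "looks like UDF" ;;
  *)                 echo "no ISO 9660/UDF signature at sector 16 (could still be a 2352-bytes-per-block raw image)" ;;
esac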
In addition to the answers above (and especially @peter's answer): I recently made a very simple Python tool for the detection of truncated/incomplete ISO images. It is definitely not validation (which, as @Jake1164 correctly points out, is impossible), but it may nevertheless be useful for some scenarios. It also supports ISO images that contain Apple (HFS) partitions. For more details see the following blog post:
Detecting broken ISO images: introducing Isolyzer
And the software's Github repo is here:
Isolyzer
You may run the md5sum command to check the integrity of an image.
For example, here's a list of ISO: http://mirrors.usc.edu/pub/linux/distributions/centos/5.4/isos/x86_64/
You may run:
md5sum CentOS-5.4-x86_64-LiveCD.iso
The output is supposed to be the same as 1805b320aba665db3e8b1fe5bd5a14cc, which you can find here:
http://mirrors.usc.edu/pub/linux/distributions/centos/5.4/isos/x86_64/md5sum.txt
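If you download that md5sum.txt into the same directory as the ISO, md5sum can also do the comparison for you (the --ignore-missing flag requires a reasonably recent GNU coreutils):
md5sum -c --ignore-missing md5sum.txt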
I would like to resize a large number (about 5200) of image files (PPM format, each 5 MB in size) and save them to PNG format using convert.
Short version:
convert uses up the whole 24 GB of memory, although I use the syntax that tells convert to process image files consecutively.
Long version:
With more than 25 GB of image data, I figure I should not process all files simultaneously. I searched the ImageMagick documentation for how to process image files consecutively and found:
It is faster and less resource intensive to resize each image as it is read:
$ convert '*.jpg[120x120]' thumbnail%03d.png
Also, the tutorial states:
For example instead of...
montage '*.tiff' -geometry 100x100+5+5 -frame 4 index.jpg
which reads all the tiff files in first, then resizes them. You can
instead do...
montage '*.tiff[100x100]' -geometry 100x100+5+5 -frame 4 index.jpg
This will read each image in, and resize them, before proceeding to
the next image. Resulting in far less memory usage, and possibly
prevent disk swapping (thrashing), when memory limits are reached.
Hence, this is what I am doing:
$ convert '*.ppm[1280x1280]' pngs/%05d.png
According to the docs, it should treat each image file one by one: read, resize, write. I am doing this on a machine with 12 real cores and 24 GB of RAM. However, during the first two minutes, the memory usage of the convert process grows to about 96 %. It stays there a while. CPU usage is at maximum. A bit longer and the process dies, just saying:
Killed
At this point, no output files have been produced. I am on Ubuntu 10.04 and convert --version says:
Version: ImageMagick 6.5.7-8 2012-08-17 Q16 http://www.imagemagick.org
Copyright: Copyright (C) 1999-2009 ImageMagick Studio LLC
Features: OpenMP
It looks like convert tries to read all data before starting the conversion. So either there is a bug in convert, an issue with the documentation or I did not read the documentation properly.
What is wrong? How can I achieve low memory usage while resizing this large number of image files?
BTW: a quick solution would be to just loop over the files using the shell and invoke convert for each file independently. But I'd like to understand how to achieve the same with pure ImageMagick.
Thanks!
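For reference, the per-file shell loop mentioned above as a quick workaround might look roughly like this (a sketch that keeps the %05d output numbering):
i=0
for f in *.ppm; do
    convert "$f" -resize 1280x1280 "pngs/$(printf %05d "$i").png"
    ((i++))
done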
Without having direct access to your system it's really hard to help you debug this.
But you can do three things to help yourself narrow down this problem:
Add -monitor as the first commandline argument to see more details about what's going on.
(Optionally) add -debug all -log "domain: %d +++ event: %e +++ function: %f +++ line: %l +++ module: %m +++ processID: %p +++ realCPUtime: %r +++ wallclocktime: %t +++ userCPUtime: %u \n\r"
Temporarily, don't use '*.ppm[1280x1280]' as an argument, but use 'a*.ppm[1280x1280]' instead. The purpose is to limit your wildcard expansion (or some other suitable way to achieve the same) to only a few matches, instead of all possible matches.
If you do '2.' you'll need to do '3.' as well, otherwise you'll be overwhelmed by the mass of output. (Also, your system does not seem to be able to process the full wildcard anyway without the process getting killed...)
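Putting points 1. and 3. together, a first debugging run could look something like this (the a* pattern is just one way to narrow the match):
convert -monitor 'a*.ppm[1280x1280]' pngs/%05d.png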
If you do not find a solution, then...
...register a username at the official ImageMagick bug report forum.
...report your problem there to see if they can help you (these guys are rather friendly and responsive if you ask politely).
I got the same issue; it seems it's because ImageMagick creates temporary files in the /tmp directory, which is often mounted as tmpfs (i.e. in RAM).
Just move your tmp somewhere else.
For example:
create a "tmp" directory on a big external drive
mkdir -m777 /media/huge_device/tmp
make sure the permissions are set to 777
chmod 777 /media/huge_device/tmp
as root, mount it in replacement to your /tmp
mount -o bind /media/huge_device/tmp /tmp
Note: it should also be possible to do the same trick with the MAGICK_TMPDIR (or TMPDIR) environment variable.
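For example, reusing the directory created above:
export MAGICK_TMPDIR=/media/huge_device/tmp
convert '*.ppm[1280x1280]' pngs/%05d.png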
I would go with GNU Parallel if you have 12 cores; something like this works very well. As it processes only 12 images at a time, while still preserving your output file numbering, it uses only minimal RAM.
scene=0
for f in *.ppm; do
echo "$f" $scene
((scene++))
done | parallel -j 12 --colsep ' ' --eta convert {1}[1280x1280] -scene {2} pngs/%05d.png
Notes
-scene lets you set the scene counter, which comes out in your %05d part.
--eta predicts when your job will be done (estimated time of arrival).
-j 12 runs 12 jobs in parallel at a time.
In order to create two VMs in VirtualBox from the same Fedora 17 VDI, I first installed Fedora, then copied Fedora17.vdi using the dd command. Then I changed the UUID of the new image using the command
$ VBoxManage internalcommands sethduuid /home/pradeep/Fedora_New.vdi "NewUUID"
But this does not work. I am interested to know what the relation is between the UUID value of a disk and its disk image, and how it is calculated.
How did you use dd to create the copy? My guess is that's most likely where things went south.
The best way to duplicate a virtual disk image is to use the clonehd command:
VBoxManage clonehd original.vdi copy.vdi
This also takes care of the UUID: the copy is created with its own new UUID.
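Note that on newer VirtualBox releases clonehd is deprecated in favour of clonemedium; if clonehd is not available, the equivalent should be:
VBoxManage clonemedium disk original.vdi copy.vdi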