Clone one MicroSD card to another - Linux

So, I have Raspbian Lite booted via PINN, the bootloader, on my Raspberry Pi 2B v1.1. It is all written to an 8.0GB microSD card. I just bought an upgrade: a 64.0GB microSD card. I have a lot of things on my original 8GB card, so I don't want to have to manually re-install every little thing I have.
My question is this: is there a way to clone my whole card, with every partition, to a new SD card using the terminal in Raspbian Lite?
I have tried rpi-clone: it seems to only copy over two partitions.
I have the 64GB plugged in via a USB adapter, no problem there.
Here are my partitions on my 8.0GB card:
Thanks, Bobbay

It's best to duplicate your SD card on a computer where the operating system is not running from that SD card - mainly because the contents of the card could change while you are duplicating it in a live system.
So, I would boot a PC off a live distro, like Knoppix. Once booted, start a Terminal and check the names of the disk drives like this:
ls /dev/sd?
You'll probably just have /dev/sda, but check! Now attach your 8GB SD card, wait a couple of seconds and check what name that got allocated. It's likely to be /dev/sdb
ls /dev/sd?
If it's /dev/sdb save that as SRC (source), like this:
SRC=/dev/sdb
Now attach your 64GB SD card, wait a couple of seconds and check what name that got allocated. It's likely to be /dev/sdc
ls /dev/sd?
If it's /dev/sdc save that as DST (destination), like this:
DST=/dev/sdc
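It doesn't hurt to double-check that both variables point at the devices and sizes you expect before anything is written, for example (the column selection here is just one convenient option):
lsblk -d -o NAME,SIZE,MODEL $SRC $DST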
If, and only if, everything works as above, you can now clone SRC to DST with:
sudo dd if=$SRC of=$DST bs=65536
The command above will take a fair time to run. Once it is complete, you will have a clone of your original disk as /dev/sdc. However, this will have the same size partitions as your 8GB drive, so you will want to expand the partitions out to fill the available space. I don't know which one(s) you want to expand, or by how much, but you will want to use the resize2fs command on the new disk. You can get help on that with:
man resize2fs
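As a rough sketch of that last step, assuming it is the second partition (/dev/sdc2) holding an ext4 root filesystem that you want to grow, and that parted is available on the live system (both are assumptions, adjust to your own layout):
sudo parted /dev/sdc resizepart 2 100%   # grow partition 2 to the end of the card
sudo e2fsck -f /dev/sdc2                 # check the filesystem before resizing it
sudo resize2fs /dev/sdc2                 # grow the filesystem to fill the enlarged partition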

Related

Device too small to write image of same device created with dd back to device using Win10 image writer Etcher

On a Raspberry Pi running Buster I create a backup image of a 32 GB Raspberry Pi SD card with the command
dd if=/dev/mmcblk0 of=//NAS/backup.img bs=1M conv=sync,noerror iflag=fullblock
The resulting file is 29.7 GB (31,915,507,712 bytes).
When I try to use balenaEtcher to write this file back to the SAME SD card, Etcher tells me the SD card is too small. I am told I need an additional 512 MB.
How can I resolve this?
Any advice is welcome.
I don't think Stack Overflow is the right place for this question; the Unix & Linux Stack Exchange would be better suited for it.
But from my experience, dd does not create images in the same way that Raspberry Pi images are created (if that makes sense). So, in short, you can't use balenaEtcher to restore the image; a friend of mine tried the same thing and failed.
So I would advise you to just live-boot a Linux system and run the dd command from there to restore the image onto the SD card. Maybe even the Windows Subsystem for Linux would work, but I didn't try it, so a live USB stick would be best, I think.
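A minimal restore sketch along those lines, assuming the backup is reachable from the live system at /NAS/backup.img and the card shows up as /dev/mmcblk0 (both paths are placeholders; check with lsblk before writing anything):
lsblk                                                                # confirm which device is the SD card
sudo dd if=/NAS/backup.img of=/dev/mmcblk0 bs=1M status=progress conv=fsync
sync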

Modify Debian image for raspberry pi

I need to modify a Raspbian image for use with Raspberry Pis in a commercial setting, so I won't have to modify the defaults of every single Pi afterwards. I want to set the default keyboard to U.S., disable auto-login, and boot to the command line rather than the GUI. Is it possible to modify an image with these settings before flashing each card? If so, how?
The easiest approach would be to get one Raspi behaving the exact way you want (called a golden master), then shut it down, pull the card, and do something similar to the following in your PC's SD card reader (from which I assume you baked the first card):
sudo dd if=/dev/<sddevice> bs=1k | gzip -c > myProduct-1.0-master.bin.gz
Then just bake that image onto card #2, #3...#n using:
zcat myProduct-1.0-master.bin.gz | sudo dd of=/dev/<sddevice> bs=1k
NB about card sizes: Always make sure your golden master card is SIGNIFICANTLY SMALLER than your target cards (ideally 2x, like 8-vs-16 GB). The reasons for this are twofold:
If both cards are "8GB," the target might be slightly smaller than the source (in which case you'll end up with filesystem truncation and possibly weirdness in subtle and unpredictable ways).
SD card controllers have EXTREMELY PRIMITIVE wear leveling and dd'ing over a bunch of zeroes defeats it utterly (which means cards can die if you're doing e.g. a bunch of logging). Keeping a bunch of unused space means that you have fallow cells that can be used by wear leveling (note that modern SSDs have much more sophisticated wear leveling and don't suffer from this problem for the most part).
I created a product not too long ago that did just this--the master was an 8GB full-size card and the targets were all 16GB micros. We'd put the master in the mass duplicator, then the targets and hit the big duplicate button. Because the cards were different storage sizes, we had ~50% underprovisioning (giving us tons of wear-level room) and because the cards were different physical sizes, we never mixed them up :-)
(Yes, I'm ridiculously conservative about wear-leveling--nothing worse IMO than having an embedded card die in the field and having to crawl through God-knows-what to replace a $8 part that didn't have to fail in the first place...)
It's worth creating a VERSION file on your master, as well, so as you rev your product you know which version is installed (you can edit /etc/issue to display that at the login prompt, or just edit some other arbitrary text file).
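As a small illustration (the file location and version string below are made-up examples, not part of any standard layout), on the golden master you might do something like:
echo "myProduct 1.0" | sudo tee /boot/VERSION            # version marker, readable even from another machine
echo "myProduct image 1.0" | sudo tee -a /etc/issue      # shown at the login prompt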
It's possible to create from-scratch images for the RasPi that have a more-tightly-controlled OS distro, but if you're only adjusting a couple of files, the easiest way is as I describe.
Oh, and make sure to save these versioned images someplace safe, like git LFS (e.g. https://git-lfs.github.com/).
1. Make all the changes you want on a Raspberry Pi.
2. Figure out what device the SD card shows up as on your computer. On Linux it will be something like /dev/sdb; on a Mac it will be something like /dev/rdisk2.
3. Take the card from your Pi, stick it in the computer, and make a disk image: dd if=/dev/<sd_path> of=~/raspi.img bs=1m (use bs=1M with GNU dd on Linux).
4. Flash your other cards: dd if=~/raspi.img of=/dev/<sd_path> bs=1m (see the sketch below).
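A minimal end-to-end sketch of steps 3 and 4 on a Linux box, assuming the card shows up as /dev/sdb (verify with lsblk first; the device name is only an example):
sudo dd if=/dev/sdb of=~/raspi.img bs=1M status=progress             # image the configured card
# swap in a blank card (assumed to appear as /dev/sdb again), then:
sudo dd if=~/raspi.img of=/dev/sdb bs=1M status=progress conv=fsync  # flash the copy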

Insufficient space error on Intel Galileo running Yocto

I want to install a new Node library with npm on my Intel Galileo Gen 2 board running Yocto (iot-devkit-1.5-i586-galileo). This has worked perfectly a couple of times before; however, I have now come to a point where npm tells me that I do not have sufficient space on my system, which I can't really believe, as I am using an 8GB SD card and Yocto only takes up 1.3GB.
When I run npm install geoip-lite I get the following error:
When I run df -h I get the following:
Yocto won't create a larger rootfs unless you tell it to (you can imagine someone with a 2GB SD card would be annoyed if the image was 4GB for no apparent reason).
You should probably use IMAGE_ROOTFS_EXTRA_SPACE = "1048576" in your image recipe to set the amount of free space you want in Kbytes, but please read the IMAGE_ROOTFS_SIZE documentation as well for the bigger picture.
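For instance, a minimal sketch of where that line would live (the recipe name is just a placeholder; the variable could equally go in conf/local.conf):
# in your image recipe, e.g. my-galileo-image.bb (hypothetical name)
# reserve roughly 1 GB of extra free space in the generated rootfs (value is in Kbytes)
IMAGE_ROOTFS_EXTRA_SPACE = "1048576"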
Well, your rootfs is full (100% used). npm install writes to the rootfs, so the problem is clear. So either remove unnecessary bits from the rootfs or increase the rootfs size.
I don't really prefer IMAGE_ROOTFS_EXTRA_SPACE =, as this will increase the download size of the output files (*.sdcard, *.rootfs) by a big chunk, given that I compile the image on Amazon EC2.
What I usually do is compress the rootfs into a tarball and download that locally.
On my SD card, I set up 2 partitions using fdisk: one for the kernel and the other for the rootfs. I use the dd command for U-Boot, put the kernel .dtb and .bin into the first partition, and just extract the rootfs tarball into the second partition (sketched below).
Done this way, I make sure that I use every bit of space on the SD card, and it is easier for me to change the rootfs if I need to.
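A rough sketch of that workflow; every device name, offset, and file name below (/dev/sdX, u-boot.bin, zImage, board.dtb, rootfs.tar.gz) is a placeholder, and the raw bootloader offset in particular depends on the SoC:
sudo dd if=u-boot.bin of=/dev/sdX bs=1k seek=1 conv=fsync    # raw U-Boot write at a board-specific offset
sudo mkdir -p /mnt/boot /mnt/rootfs
sudo mount /dev/sdX1 /mnt/boot
sudo cp zImage board.dtb /mnt/boot/                          # kernel .bin and .dtb into the first partition
sudo mount /dev/sdX2 /mnt/rootfs
sudo tar -xpf rootfs.tar.gz -C /mnt/rootfs                   # rootfs tarball extracted into the second partition
sudo umount /mnt/boot /mnt/rootfs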

mtd_stresstest does not show any output and runs forever even with count=1

I am using a Rockchip 3188-based system-on-board (SOB) which has 8 GB of NAND flash.
I want to test the reliability of the NAND flash.
At the least, I want to identify boards with bad NAND flash shipping from the factory.
I am using Ubuntu 14.04.
The NAND flash is partitioned into two parts:
1. mtd0: contains bootloader, kernel and initrd
2. mtd1: contains RFS
I tried running mtd_stresstest with "modprobe mtd_stresstest dev=1" and it never says a word. If I run it for too long, my system gets corrupted. The corruption is expected, as it is playing with the same device that / is mounted on.
But the command does not return even if I use count=1.
Please let me know what is going wrong.
I tried the following too:
1. Flashed a USB stick with an Ubuntu RFS meant for ARM and plugged it into the SOB.
2. Bind-mounted /proc to /media//proc
3. Bind-mounted /sys to /media/
4. cd /media/
5. chroot .
6. init 1
7. modprobe mtd_stresstest dev=1 count=1 ----> never says a word
Could you please also suggest whether there is any other way to test the NAND flash device's reliability.

fsync not working on ext3 or ext4 system

I tried to use fsync to write some file to an SD card ASAP. However, fsync does not actually block until the file is physically written to the SD card: it seems to take about 5-6 seconds before the data is actually on the card. Mounting the file system (I tried ext3 and ext4) with the commit=1 or sync option does seem to work; the data is safe after a reboot within 1 second. My question is: is there any way to achieve this flushing without resorting to a partition-wide solution? I'm using Linux kernel 2.6.37. Thank you
If you want to be sure the content is written on the SD card, you should call blockdev with --flushbufs before exiting the program.
If you want to benchmark the writing process, you can call this after every write.
/sbin/blockdev --flushbufs $dev
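For example, a minimal sketch (the device /dev/mmcblk0p1 and the mount point /mnt/sd are assumptions for illustration):
dev=/dev/mmcblk0p1                  # the partition backing the SD card mount (placeholder)
cp data.bin /mnt/sd/                # write the file
sync                                # push dirty pages down to the block layer
/sbin/blockdev --flushbufs $dev     # flush the kernel's buffer cache for that device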

Resources