Insufficient space error on Intel Galileo running Yocto - node.js

I want to install a new node library with npm on my Intel Galileo Gen 2 board running Yocto (iot-devkit-1.5-i586-galileo). This has worked perfectly a couple of times before; however, I have now come to a point where npm tells me that I do not have sufficient space on my system, which I can't really believe, as I am using an 8GB SD card and Yocto only takes up 1.3GB.
When I run npm install geoip-lite I get the following error:
When I run df -h I get the following:

Yocto won't create a larger rootfs unless you tell it to (you can imagine someone with a 2GB SD card would be annoyed if the image was 4GB for no apparent reason).
You should probably use IMAGE_ROOTFS_EXTRA_SPACE = "1048576" in your image recipe to set the amount of free space you want in Kbytes, but please read the IMAGE_ROOTFS_SIZE documentation as well for the bigger picture.
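As a minimal sketch, that could look like this in the image recipe or in local.conf (IMAGE_OVERHEAD_FACTOR is shown only for context; both are standard Yocto variables):
# Pad the computed rootfs size by 30% (1.3 is the Yocto default factor)
IMAGE_OVERHEAD_FACTOR = "1.3"
# Additionally reserve 1 GiB of free space; the value is in kilobytes
IMAGE_ROOTFS_EXTRA_SPACE = "1048576"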

Well, your rootfs is full (100% used), and npm install writes to the rootfs, so the problem is clear: either remove unnecessary bits from the rootfs or increase the rootfs size.

I would rather not use IMAGE_ROOTFS_EXTRA_SPACE =, as it increases the size of the files I have to download (*.sdcard, *.rootfs) by a big chunk, given that I compile the image on Amazon EC2.
What I usually do instead is compress the rootfs into a tarball and download that to my local machine.
On my SD card I set up two partitions using fdisk: one for the kernel and the other for the rootfs. I use the dd command for U-Boot, put the kernel .dtb and .bin files into the first partition, and just extract the rootfs tarball into the second partition, as sketched below.
Doing it this way, I make sure that every bit of space on the SD card is used, and it is easier for me to swap out the rootfs if I need to.
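A rough sketch of that workflow, where /dev/sdX, the seek offset, and all file names are placeholders that depend on the board:
# Create two partitions on the card: one for the kernel, one for the rootfs
fdisk /dev/sdX
# Write U-Boot to the raw device (the offset is board-specific)
dd if=u-boot.bin of=/dev/sdX bs=1k seek=1
# Copy the kernel .bin and .dtb into the first partition
mount /dev/sdX1 /mnt; cp kernel.bin board.dtb /mnt; umount /mnt
# Extract the rootfs tarball into the second partition
mount /dev/sdX2 /mnt; tar -xpf rootfs.tar.gz -C /mnt; umount /mnt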

Related

Device too small to write back an image of the same device, created with dd, using the Win10 image writer Etcher

On a Raspberry Pi running Buster, I created a backup image of a 32 GB SD card with the command
dd if=/dev/mmcblk0 of=//NAS/backup.img bs=1M conv=sync,noerror iflag=fullblock
The resulting file is 29.7 GB (31,915,507,712 bytes).
When I try to use balenaEtcher to write this file back to the SAME SD card, Etcher tells me the SD card is too small; I am told I need an additional 512 MB.
How can I resolve this?
Any advice is welcome.
I don't think Stack Overflow is the right place for this question; the Unix & Linux Stack Exchange would be better suited for it.
But from my experience, I think dd does not create images in the same way that Raspberry Pi images are built (if that makes sense). So, in short, you can't use balenaEtcher to restore the image; a friend of mine tried the same and failed.
So I would advise you to live-boot a Linux system and run the dd command from there to restore the image onto the SD card. The Windows Subsystem for Linux might also work, but I didn't try it, so a live USB stick would be the best option, I think.
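As a sketch, assuming the card shows up as /dev/mmcblk0 on the live system (verify with lsblk first; writing to the wrong device destroys its contents):
lsblk
sudo dd if=/path/to/backup.img of=/dev/mmcblk0 bs=1M status=progress conv=fsync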

Clone one MicroSD card to another

So, I have Raspbian Lite booted from PINN, the bootloader, on my Raspberry Pi 2B v1.1. It is all written on an 8.0 GB microSD card. I just bought an upgrade: a 64.0 GB microSD. I have a lot of things on my original 8 GB card, so I don't want to have to manually re-install every little thing I have.
My question is this: Is there a way to clone my whole card, with every partition, using the terminal in Raspbian Lite to a new SD Card?
I have tried rpi-clone: it seems to only copy over two partitions.
I have the 64GB plugged in via a USB adapter, no problem there.
Here are my partitions on my 8.0GB card:
Thanks, Bobbay
It's best to duplicate your SD card on a computer where the operating system is not running from that SD card - mainly because the contents of the card could change while you are duplicating it in a live system.
So, I would boot a PC off a live distro, like Knoppix. Once booted, start a Terminal and check the names of the disk drives like this:
ls /dev/sd?
You'll probably just have /dev/sda, but check! Now attach your 8GB SD card, wait a couple of seconds and check what name that got allocated. It's likely to be /dev/sdb
ls /dev/sd?
If it's /dev/sdb save that as SRC (source), like this:
SRC=/dev/sdb
Now attach your 64GB SD card, wait a couple of seconds and check what name that got allocated. It's likely to be /dev/sdc
ls /dev/sd?
If it's /dev/sdc save that as DST (destination), like this:
DST=/dev/sdc
If, and only if, everything works as above, you can now clone SRC to DST with:
sudo dd if=$SRC of=$DST bs=65536
The command above will take a fair time to run. Once it is complete, you will have a clone of your original disk as /dev/sdc. However, this will have the same size partitions as your 8GB drive, so you will want to expand the partitions out to fill the available space. I don't know which one(s) you want to expand, or by how much, but you will want to use the resize2fs command on the new disk. You can get help on that with:
man resize2fs
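As an illustrative sketch, assuming the root filesystem is the second partition on the new card and is ext4, growing it would look like this:
# Grow partition 2 to fill the 64GB card, then grow the filesystem inside it
sudo parted /dev/sdc resizepart 2 100%
sudo e2fsck -f /dev/sdc2
sudo resize2fs /dev/sdc2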

How do I boot from an .hddimg file?

After running BitBake on several different recipe files, BitBake generates a file of type '.hddimg'. I haven't been able to find a clear explanation on what this file is used for, the closest that I have found is some speculation on the mailing list here. The author Paul states that:
the image isn't an image of a regular bootable system drive, but is a "live
image" of a smaller system that can either boot the real system from a
virtualized file system in RAM whose image is read from a single file in the
first level, or it can install the real system to a different drive.
The 'bootimg.bbclass' is what generates the .hddimg, and in the opening comments it is written that:
A .hddimg file [is] an msdos filesystem containing syslinux, a kernel, an initrd and a rootfs image. These can be written to harddisks directly and also booted on USB flash disks (write them there with dd).
This appears to corroborate what Paul wrote, but it still leaves a lot of ambiguity about how to go about booting from this file (at least to a greenhorn like me).
Well, the doc says "write them there with dd". So:
dd if=/path/to/your/hddimg of=/path/to/raw/usb/device
So, if you have the file as my.hddimg and the USB flash disk appears as /dev/sdg:
dd if=/home/karobar/my.hddimg of=/dev/sdg
As its name implies, it's an image, so it needs to be written as such. The actual filesystem is inside the rootfs file, which is itself an image!
Once you have that on the USB stick, the stick itself should be bootable. Depending on what you're trying to do, this may not be the easiest kind of output from BitBake to work with.
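Two practical precautions worth adding (with /dev/sdg still as the placeholder device):
lsblk    # double-check which device really is the USB stick before writing
dd if=my.hddimg of=/dev/sdg bs=4M status=progress
sync     # flush all blocks to the stick before unplugging it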

Embedded Linux Boot Optimization

I am doing a project on a PandaBoard running embedded Linux (Ubuntu 12.10 Server prebuilt image), and I want to optimize the boot time. I need techniques or tools to measure the boot time, and techniques to optimize it. Can anyone help?
Remove any applications that are not required from the /etc/init.d/rc file. Also put an echo after every process initialization and check which process takes the most time to start;
if you find an application that takes too long, debug that application, and so on.
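For instance, a service start in a hypothetical init script could be bracketed with timestamped echoes like this (sketch only; networking is just an example):
echo "starting networking: $(date +%s.%N)"
/etc/init.d/networking start
echo "networking started:  $(date +%s.%N)"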
There is a program that can be helpful for finding the approximate boot-up time. Check this link:
Time Stamp.
First of all, the best thing to do is compile your own kernel: get the source from the internet, run make xconfig, and deselect everything you don't need.
Then create your own root filesystem using Buildroot, again with make xconfig to select or deselect exactly what you need.
Hope this helps.
I had the same problem and did it that way; the boot time is clearly not the same anymore ;)
EDIT: Everything you need is here
To analyze the boot process, you can use Bootchart2, which is available on GitHub: https://github.com/mmeeks/bootchart
Or Bootchart, from the Ubuntu packages:
sudo apt-get update
sudo apt-get install bootchart pybootchartgui
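After the next reboot, Bootchart collects a log under /var/log; rendering it typically looks like this (the exact log path can vary between versions, so check /var/log first):
pybootchartgui /var/log/bootchart.tgz    # writes bootchart.png in the current directory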
There are broadly three areas where you can reduce boot time:
Bootloader:
Modify the linker script to initialize only the required hardware. Also, if you are booting from an SD card, merge the kernel and bootloader images to save time.
Kernel:
Remove unwanted modules from the kernel config. Also compare compressed and uncompressed images: if your CPU is fast enough to handle it, go with a compressed image, and check the decompression time required for the different compression types.
Filesystem:
The filesystem size can be significantly reduced by removing unwanted binaries and libraries. Check for dependencies and keep only the ones that are required; a quick check is sketched below.
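An illustrative way to check what a binary actually needs before deleting libraries (myapp is a placeholder name):
ldd /usr/bin/myapp    # lists the shared libraries the binary depends on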
For more techniques and information on tools that help in measuring boot time, please refer to the following link:
Training Material
The basic rule is: the fastest code is code that never gets loaded and run, so remove everything you don't need:
in U-Boot: don't load and run the full U-Boot at all; use FALCON mode and have the SPL load the Linux kernel and DTB directly.
in Linux: remove all drivers and other stuff you don't really need; load all drivers that are not essential for your core application as modules, and load them after your application has started. If you take this seriously, you may even want to start only one CPU core initially (and start the remaining ones after your application is running); see the sketch after this list.
in user space: minimize the size of the root file system. Throw out anything you don't need; configure tools (like busybox) to contain only the really needed functionality; use efficient code (for example, link against musl libc instead of glibc), etc.
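A sketch of the deferred-loading idea in a boot script; the application name, the module name, and the core count are placeholders:
#!/bin/sh
# Start the core application as early as possible
/usr/bin/app &
# Load non-essential drivers only after the application is up
modprobe extra_driver
# Bring up a second CPU core that was left offline during early boot
echo 1 > /sys/devices/system/cpu/cpu1/online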
What can be achieved by combining all these measures can be seen in this video, and yes, the complete code for this optimization is available here.
Optimizing the embedded Linux boot process requires modifications at three levels of the embedded Linux design.
Note: you will need the source code for both the bootloader and the kernel.
Boot: the first step in optimizing and reducing a board's boot time is optimizing the bootloader. First, find out what your bootloader is. If it is an open-source bootloader like U-Boot, you have the opportunity to modify and optimize it. U-Boot offers a procedure to skip unnecessary system checks and just load the kernel image into RAM and start it; the documentation and instructions for this are available on the U-Boot website. Doing this will save about 4-5 seconds of boot time.
Kernel: to get a quicker kernel, you should optimize it in several areas. For editing the configuration you can use one of the Linux config menus; I always use the low-graphics menu, which needs a few dependencies and is started with this command:
$ make menuconfig
Our goals for the Linux kernel are a smaller kernel image and fewer modules to load at boot. First, change the compression algorithm from gzip to LZO: gzip takes considerable time to decompress the kernel, while LZO gives a much quicker kernel decompression. Second, disable any drivers or modules for hardware that is not on your board or that you no longer use. By doing this you lose access to those devices under Linux, but you gain two things: less RAM usage and a quicker boot time.
Please keep in mind that some drivers are essential, and disabling them can cost you major features (for example, disabling the I2C driver may leave you without a working HDMI interface) or, in the worst case, cause a boot problem (such as a boot loop). Third, disable unused filesystems to reduce the kernel size and boot time. Fourth, remove unneeded compression algorithms to get a smaller kernel image.
Finally, if you are using the U-Boot bootloader, create a uImage instead of a zImage, as illustrated below. The steps above are the general, main actions; to get boot times down to around one second after power-on, you will have to change more options.
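By way of illustration (CONFIG_KERNEL_LZO is the mainline config symbol for LZO compression; the LOADADDR value is board-specific and shown here only as a placeholder):
# In make menuconfig: General setup -> Kernel compression mode -> LZO
#   (sets CONFIG_KERNEL_LZO=y)
# Then build a U-Boot-ready image instead of a zImage:
make LOADADDR=0x80008000 uImage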
After these two base-layer modifications, we should optimize the boot process in user space (the root filesystem). The changes to make depend on which init system you are using. In short, for a root filesystem that carries the packages needed to boot Linux, use systemd instead of Unix System V init, because systemd initializes services in parallel and is faster. After that, tune udev and trim the modules it loads. If you have a graphical user interface, an easy trick for a big boot-time reduction is to init the GUI first and load the remaining modules after the GUI is up.
If you do all of these tasks, you will end up with a quick boot and a fast system to work with.
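With systemd in place, the standard analysis commands show where the remaining boot time goes:
systemd-analyze          # overall kernel vs. userspace startup times
systemd-analyze blame    # per-service start times, slowest first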

Safe way to replace linux libs on embedded flash

I have a Linux BusyBox-based system on a chip. I want to provide an update to users in the field, and this requires updating some files in /lib, /usr/bin, and /etc. I don't think it's safe to simply untar the files directly. Is there a safe way to do this, including for /lib files that may be in use?
Some things I strongly prefer in embedded systems:
a) Have the root file system be a ramdisk uncompressed from an image in flash. This is great because you can experimentally monkey around with it to your heart's content, and if you mess up, all you need is a reboot to get back to the flashed configuration. When you have tested a set of changes you like, you generate a new compressed root filesystem image and flash that.
b) Use a bootloader such as U-Boot to do your updates, flashing a new complete image, rather than trying to change the Linux system while it is running. Since the flashed copy isn't live, you can actually flash it while the system is running, and if you flash a bad version, U-Boot is still there to flash a good one.
c) Processors which have mask-ROM UART (or even USB) bootloaders, making the system un-brickable: nothing more than a laptop and a serial cable or USB/serial converter is ever needed to do maintenance (i.e., get a working U-Boot image onto the flash, which you then use to get a working Linux kernel + compressed root fs image onto it).
Ideally your flash device is big enough to partition into two complete filesystems; each update then writes the other side (copying over config files if necessary) and updates the boot configuration to boot from the updated side; see the sketch after these two options.
Less ideal is to update in place but have some means of detecting boot failure (a watchdog that is not touched until after boot, for example) and a smaller fallback partition which is capable of accepting another update and fixing the primary partition.
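A sketch of the dual-partition update, assuming U-Boot's environment tools (fw_printenv/fw_setenv) are present and a hypothetical rootpart variable selects between mmcblk0p2 and mmcblk0p3:
#!/bin/sh
# Find out which rootfs we booted from and pick the other one
CURRENT=$(fw_printenv -n rootpart)              # e.g. "2" or "3"
if [ "$CURRENT" = "2" ]; then OTHER=3; else OTHER=2; fi
# Write the new filesystem image to the inactive partition
dd if=new-rootfs.img of=/dev/mmcblk0p$OTHER bs=1M conv=fsync
# Point the boot configuration at the freshly updated side
fw_setenv rootpart $OTHER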
As for in-place updates of a live filesystem, just use a real installer (one that moves the target files out of the way before replacing them, which avoids the problem you describe).
You received two excellent answers above, and I strongly encourage you to follow that advice.
There is, however, a simpler way. As a matter of fact, you can just untar your libraries, provided that the process doing the extraction is statically linked.
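A sketch of that approach, assuming a statically linked busybox binary is already on the device (the name busybox.static is a placeholder):
# The static binary has no dependency on the libraries being replaced,
# so it keeps working even while /lib is overwritten underneath it
/bin/busybox.static tar -xzf /tmp/update.tar.gz -C /
sync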
