Modify Debian image for Raspberry Pi - Linux

I need to modify a Raspbian image for use with Raspberry Pis in a commercial setting, so I won't have to modify the defaults of every single Pi afterwards. I want to set the default keyboard to U.S., disable auto-login, and boot to the command line rather than the GUI. Is it possible to modify an image with these settings before flashing each card? If so, how?

The easiest approach would be to get one Raspi behaving the exact way you want (called a golden master), then shut it down, pull the card, and do something similar to the following in your PC's SD card reader (from which I assume you baked the first card):
sudo dd if=/dev/<sddevice> bs=1k | gzip -c > myProduct-1.0-master.bin.gz
Then just bake that image onto card #2, #3...#n using:
zcat myProduct-1.0-master.bin.gz | sudo dd of=/dev/<sddevice> bs=1k
NB about card sizes: Always make sure your golden master card is SIGNIFICANTLY SMALLER than your target cards (ideally half their size, e.g. an 8 GB master for 16 GB targets). The reasons for this are twofold:
If both cards are "8GB," the target might be slightly smaller than the source (in which case you'll end up with filesystem truncation and possibly weirdness in subtle and unpredictable ways; see the size check below).
SD card controllers have EXTREMELY PRIMITIVE wear leveling and dd'ing over a bunch of zeroes defeats it utterly (which means cards can die if you're doing e.g. a bunch of logging). Keeping a bunch of unused space means that you have fallow cells that can be used by wear leveling (note that modern SSDs have much more sophisticated wear leveling and don't suffer from this problem for the most part).
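To guard against the first point, you can compare the raw device sizes before flashing. A quick check using blockdev from util-linux (with /dev/sdX standing in for whatever node your reader shows up as):
sudo blockdev --getsize64 /dev/sdX # prints the size in bytes; the target must report at least as many as the source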
I created a product not too long ago that did just this--the master was an 8GB full-size card and the targets were all 16GB micros. We'd put the master in the mass duplicator, then the targets and hit the big duplicate button. Because the cards were different storage sizes, we had ~50% underprovisioning (giving us tons of wear-level room) and because the cards were different physical sizes, we never mixed them up :-)
(Yes, I'm ridiculously conservative about wear-leveling--nothing worse IMO than having an embedded card die in the field and having to crawl through God-knows-what to replace a $8 part that didn't have to fail in the first place...)
It's worth creating a VERSION file on your master as well, so that as you rev your product you know which version is installed (you can edit /etc/issue to display it at the login prompt, or just edit some other arbitrary text file).
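A minimal sketch of that (the file name, location, and message here are just an illustration, not a convention):
echo "myProduct 1.0 $(date +%F)" | sudo tee /boot/VERSION # version stamp on the golden master
sudo sh -c 'echo "myProduct 1.0" >> /etc/issue' # shown at the login prompt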
It's possible to create from-scratch images for the RasPi that have a more-tightly-controlled OS distro, but if you're only adjusting a couple of files, the easiest way is as I describe.
Oh, and make sure to save these versioned images someplace safe, like git LFS (e.g. https://git-lfs.github.com/).
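Tracking the compressed masters with git LFS only takes a couple of commands; this sketch assumes the file name from the example above:
git lfs install # one-time setup per machine
git lfs track "*.bin.gz" # store the image files via LFS instead of in-repo
git add .gitattributes myProduct-1.0-master.bin.gz
git commit -m "golden master image v1.0"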

Make all the changes you want on a Raspberry Pi.
Figure out what device node SD cards show up as on your computer. On Linux it will be something like /dev/sdb; on macOS it will be something like /dev/rdisk2.
Take your Pi's card, stick it in the computer, and make a disk image: dd if=/dev/<sd_path> of=~/raspi.img bs=1m (that's bs=1M with GNU dd on Linux)
Flash your other cards: dd if=~/raspi.img of=/dev/<sd_path> bs=1m
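If you want to be careful, you can verify each card against the image after flashing. A sketch assuming GNU coreutils/diffutils on Linux:
sudo cmp -n "$(stat -c%s ~/raspi.img)" ~/raspi.img /dev/<sd_path> # silent, exit status 0, if the card matches the image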

Related

Fix the boot order/eMMC on a Beagle Bone Black

Problem
The problem I'm having was caused by the following action: when I had the BBB connected to my PC (using a USB cable), I accidentally formatted the ~92 MB partition that contained the getting-started files.
Because of this, each time I apply power to the BBB, the USR LEDs do not light up. It only works when I have the Angstrom image on an external microSD card.
What I've tried
I thought that this was caused by the eMMC being corrupt and for some reason not bootable. So I tried to boot from the external microSD card (which has the newest image running) and to use the dd command, with if set to the current microSD card and of set to the target on-board storage (the eMMC built into the board).
When I restarted the BBB, it looks like dd was successful (when I executed it, it reported that everything completed). Now there is one partition with the GettingStarted files and another with the Linux kernel.
Question
Despite this, it's not possible to boot from the internal storage. Does anyone know how this can be solved? Is it something to do with the boot order?
To force a boot from SD you need to remove power from the board completely, hold down S2, and then re-apply power. Keep holding the button until the four LEDs start turning on. You have to do this at power-on; once you've done it, the board will continue to boot from SD on a reboot or reset, and only removing power will change the behaviour.
You could also move R68 to R93 if you want to make the board boot from SD by default.
Also note the boot sequence in the table on page 6 of the schematics, by default if MLO can't be found on the eMMC, it'll look for it on the SD card. So deleting MLO normally causes the board to boot from SD if the appropriate files are present.
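As a sketch of that last point (device names are typical rather than guaranteed; the eMMC usually enumerates as mmcblk1 when you've booted from the SD card):
mount /dev/mmcblk1p1 /mnt # the eMMC's FAT boot partition
mv /mnt/MLO /mnt/MLO.disabled # with MLO gone, the ROM falls through to the SD card
umount /mnt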
According to the BeagleBone Black Cookbook, the board boots from the SD card if one is available. This is also how it works with the Debian 8.3 image for the BBB (note that I am using the version of the image that does NOT flash...).

Graphics development on ARM

I am planning to make a small OS and run a Tetris clone on it using an ARM Cortex-M3. Unfortunately, I am not able to buy any development boards as of now, so I will have to use simulators.
I have actually looked into QEMU which has LM3S6965EVB support, which contains an ARM Cortex-M3 processor. But apparently newer revisions of the board are not compatible with the model in QEMU as none of the examples I have downloaded from TI seem to work. Even the OLED display is different.
Another problem is doing graphics development, as the OLED display on the LM3S6965EVB has a really low resolution. I was able to get it up to 640x480 by editing the QEMU source, but since I could not get any examples to work, I don't know if that works either. Using the debug parameters for the SSD0323, all I can see is that it accepts some of the data sent to initialize the device, then hangs...
I have considered choosing another board in QEMU but that would mean redoing many things from scratch when I get my hands on a real device, as the other ones are too powerful for something as simple as this.
What should I do? Are there any other simulators out there that can help me accomplish what I am trying to do? I want to develop a small OS and some small games.
Thanks in advance. I have been searching for a solution for days and I am really stuck.
How much you have to redo depends, in part, on your software/system engineering: you can abstract where needed and then only have to re-write the abstraction layer, not the entire package. In fact, you can do much of your software design/testing on your host system without cross compiling at all, and only later cross compile for a simulator or real hardware.
For example, I assume you would construct the next video frame somewhere in RAM, then, depending on the hardware, either flip some bits in a register to page-flip or copy that frame buffer to the video/LCD in whatever form it wants. Using thumbulator you could build your screen somewhere in RAM (in the simulated memory space), then add to the simulator: when the simulation writes to such-and-such register, take those bytes from RAM and display them on the host computer running the simulation; basically, you simulate some hardware. Use SDL or basic X calls or whatever you prefer. I normally take snapshots to .bmp files (very easy to write) and then look at them later.
Later, on hardware, your abstracted update_screen() function would have hardware-specific code to display that screen.
thumbulator only runs Thumb instructions, not ARM and not Thumb2; Thumb is the common denominator between the classic ARM processors (ARMv4T and newer, excepting the Cortex-M) and those that support the Thumb2 extensions (Cortex-M). Other than startup code, the compiling and programming experience is the same across the ARM family, and the code (other than startup code and, of course, hardware-specific accesses) will run across the ARM family as well as on the simulator. If you go to a Cortex-M, adding an architecture specification to the command line will change the build from Thumb-only to Thumb+Thumb2 instructions, giving you some performance boost. If you surf around my other projects on github you will find this idea repeated over and over again; I have many simple Cortex-M examples where I use gcc and llvm and build the same .C code with Thumb instructions and with Thumb+Thumb2.
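As a sketch of what that architecture switch looks like with the GNU ARM cross toolchain (the file names are placeholders):
arm-none-eabi-gcc -mthumb -c game.c -o game.o # Thumb-only: ARMv4T and newer, and thumbulator
arm-none-eabi-gcc -mthumb -mcpu=cortex-m3 -c game.c -o game.o # Thumb+Thumb2 for a Cortex-M3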
Another completely different answer: get a GBA (Nintendo Game Boy Advance). You can get a GBA SP (which has a backlit display, making the whole experience better) for about $30 or so on eBay, and you can buy flash cartridges that take SD cards for about the same amount. It has an ARM7TDMI and runs Thumb code much faster than ARM code, giving you that Thumb experience in preparation for other/newer cores like the Cortex-M. For another $30 you can get a game link cable, chop it up, attach an RS-232 level shifter (I can talk you through all of this), and make a GBA serial cable. My preferred setup is a flash cartridge that I have pre-programmed with a serial bootloader; I download the program over serial into RAM and then run from RAM. This avoids having to yank the flash cartridge and/or SD card every time you re-compile the program (that works too, and is cheaper, but it gets tiring fast).
If you have a Nintendo DS, for $12 to $15 you can get an SD-based flash cartridge that you can likewise use for development. I recommend learning the GBA first, which you can do on the NDS if you buy a GBA-slot memory cartridge (you need a DS Lite, not a DSi nor a 3DS) supported by the software on the cartridge. (The EZ Flash 3-in-1, GBA-sized, is a good one, for example; as well as providing memory, you can flash that one from the NDS and carry it over to the GBA, which is how I put my serial bootloader on it.) These loaders let you put your .gba file on the NDS cartridge's SD card, then load it into the GBA cartridge; the loader switches the NDS into GBA mode and runs it as a GBA.
There are lots of other solutions. sparkfun.com likely has a number of ARM-based boards that can drive LCDs and/or come with LCDs. You can go to earthlcd and get one of the serial-based LCD panels that make for rapid development; later, of course, a cheaper solution is desirable. Along the same lines, you could instead simulate an earthlcd-like thing using your host computer: have the embedded microcontroller send screen updates over serial to the host, and the host displays the graphics. Later, replace that screen update with something else.
Along those lines, for about $20 you can get an STM32F4 Discovery board: it has a Cortex-M4, runs at up to 168 MHz, and has a number of serial ports, at least two of which have pins not being used by something else; you could easily have one port for debug messages and the other for this virtual serial screen. In the stm32f4 directory in my stm32vld repo on github I have a number of getting-started examples for using that board (as well as the stm32vld, which is a few bucks cheaper but not as powerful as the stm32f4). Likewise, your host application can take keystrokes and turn them into user-control/game-control commands sent back to the game software on the microcontroller.
There is of course the beagleboard or hawkboard or Raspberry Pi when it comes out, or the open-rd (I don't like the plug computer but do like the open-rd), which have video processing and video output direct to a monitor and/or TV using composite or whatever. About $150 to $200 and it just works; run with it. You definitely don't need to run Linux on these platforms; you can make your own OS or whatever you like and run that, very simply.
There are more solutions than you probably have time and/or money to pursue; you need to find one that fits within your comfort or happiness zone for how you like to do development, and try that path.

Is there a way to independently task and use heterogeneous multi-GPUs in a Windows 7 system?

Can I have two mixed chipset/generation AMD GPUs in my desktop, a 6950 and a 4870, and dedicate one GPU (the 4870) to OpenCL/GPGPU purposes only, eliminating the device from video output or display-driving consideration by the OS, allowing the 4870 to essentially remain in a deep sleep, or appear ejected/disabled, until its stream processors are called upon?
Compared to the 4870, the 6950 is a heavyweight in OpenCL calculations; enough so that it can crunch numbers and still allow an active user session, and even web browsing. HOWEVER, as soon as I navigate to a webpage with embedded Flash video, forget what I have running and open Media Player or Media Center, or start basically any GPU-accelerated video task that requires the 6950 to initialize UVD, the display system hangs.
I'm looking for a way to plug my 4870 into an open PCIe slot and have it sit in a dormant state with near-zero heat production and power consumption (essentially only maintaining interface signalling, like an Ethernet card in a powered-off desktop holding the line and waiting for a WOL command), and then attain a D0 state (I don't even care if the latency of this wake event is on the scale of seconds) to run OpenCL calculations ON ITS OWN. I do not wish to achieve a non-CF heterogeneous GPU teaming setup! In the UVD-hang situation above, my desired outcome would be to manually stop the OpenCL calculations on the 6950 and start them on the 4870, freeing the 6950 for multimedia usage/gaming (granted, with a hit to the calculation rate). Even better if the two GPUs could independently run similar calculations while no one is using the desktop. I don't even mind if I have to initiate the power-state transitions of the 4870 into and out of an 'OFF' state myself (say, by a shortcut on the desktop), as long as it doesn't require a system restart or ending the user session and logging off, and as long as the manual ON/OFF 'switch' is something any proficient Windows end-user could operate, like clicking a shortcut that runs a script, or even going into Device Manager and toggling enable/disable. I just don't want the 4870 wastefully idling for one sole use that may occur sporadically.
I couldn't think of a way to accomplish this beyond writing a new INI for the 4870 to override the typical power-management behavior written for a device used as a normal graphics card (say, to drop in and out of the powered state without relinquishing the IRQ or other allocated resources, 'holding the door open' on interface availability and addressing). But that is an endeavor well above my abilities, and I suspect it would also require licensing involvement to achieve.
Windows 7 (and maybe Windows 10) doesn't define a "selected device". It's the software's own responsibility to pick the right device. For example, Google Chrome's video-decode component will pick whatever GPU comes as the first target defined in it. If it is written to pick the first-indexed device, then switching the cards' roles takes a PCIe re-plug of both.
The OS is written to fit the majority (99%) of users, not multi-GPU users (1%?). It simply picks one of the GPUs, or the software takes explicit control over devices, benchmarks all GPUs, and picks the fastest. So you should look to your software's abilities instead of the OS's.
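For what it's worth, you can see the enumeration order that OpenCL software will encounter on your machine with the clinfo utility, if you have it installed:
clinfo -l # compact list of platforms and devices, in the order applications enumerate them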
The same thing goes for games! When I play Dota 2 on the Vulkan API, it uses the HD 7870 for compute (textures, particles, etc.) but uses the R7 240 for graphics! I would prefer the opposite, because the R7 240 can't draw fast. This happens because the game is written for the majority of people, who don't have more than one GPU.
Money controls development, I'm sorry to say, and market penetration is needed for money. 99% market penetration means writing code for the general public, not for scientists or the rich. The public simply has one GPU, and a cheap one at that.
I wish I could:
select one GPU for: unzipping files, watching videos, compressing internet uploads, and caching for the file system (up to 2 GB)
select another GPU for: gaming, OpenCL applications, mining, ...
select all GPUs for: games, benchmarks, being seen as a single device by my applications, ...
but this is not guaranteed to come true, because money still talks.
If I were a driver developer, I would add a "rename" option (and become poor in return) to present N virtual devices to the OS, so the OS and other software could harness just 1/N'th of the whole system's power, or N/N, by using those renames or the main devices. A rename could be as small as a single compute unit of a GPU. The OS could tell the driver "give me 25% of all cores that share the same memory", and the driver would pick a device and hand over 25% of the system's total cores. Users could even create renames for their own work.
I even sent a message to Microsoft about a "file system cache on my 2nd graphics card", but they never replied!

Safe way to replace linux libs on embedded flash

I have a Linux BusyBox-based system on a chip. I want to provide an update to users in the field, and this requires updating some files in /lib, /usr/bin, and /etc. I don't think it's safe to simply untar the files directly. Is there a safe way to do this, including for /lib files that may be in use?
Some things I strongly prefer in embedded systems:
a) Have the root file system be a ramdisk uncompressed from an image in flash. This is great because you can experimentally monkey around with it to your heart's content, and if you mess up, all you need is a reboot to get back to the flashed configuration. When you have tested a set of changes you like, you generate a new compressed root-filesystem image and flash that.
b) Use a bootloader such as u-boot to do your updates - flashing a new complete image - rather than trying to change the Linux system while it is running. Though, since the flashed copy isn't live, you can actually flash it while running. If you flash a bad version, u-boot is still there to flash a good one.
c) Processors which have mask-ROM UART (or even USB) bootloaders, making the system un-brickable; nothing more than a laptop and a serial cable or USB/serial converter is ever needed to do maintenance (i.e., get a working u-boot image onto the flash, which you then use to get a working Linux kernel + compressed root fs image onto it).
Ideally your flash device is big enough to partition into two complete filesystems and each update updates the other side (plus copying over config files if necessary) and updates the boot configuration to boot from the updated side.
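A minimal sketch of that A/B switch, assuming u-boot's environment is reachable via fw_setenv (from u-boot-tools), a hypothetical layout where p2 is slot A and p3 is slot B (currently booted from p2), and a boot script that reads a variable named rootpart:
dd if=rootfs-new.img of=/dev/mmcblk0p3 bs=1M conv=fsync # write the new image to the inactive slot
fw_setenv rootpart 3 # point the boot script at the freshly written slot
reboot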
Less ideal is to update in-place but have some means of detecting boot failure (watchdog that's not touched until after boot, for example) and have a smaller, fallback partition which is capable of accepting another update and fixing the primary partition.
As far as the in-place update of a live filesystem, just use a real installer (which will move the target files out of the way before replacing them to avoid the problem you describe).
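The move-aside trick works because rename is atomic, and processes that already mapped the old library keep running against the old inode until they restart. A sketch with a hypothetical library name:
cp libfoo.so.1.2.3 /lib/libfoo.so.1.2.3.new # stage the new file on the same filesystem
mv -f /lib/libfoo.so.1.2.3.new /lib/libfoo.so.1.2.3 # atomic rename over the live copy
ldconfig # refresh the dynamic linker cache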
You received two excellent answers above and I strongly encourage you to do what you were advised to.
There is, however, a simpler way. As a matter of fact, you can just untar your libraries, provided that the process doing the extraction is statically linked.
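For instance, assuming you ship a statically linked busybox (built with the tar and gzip applets) next to the update tarball:
./busybox-static tar -xzf update.tar.gz -C / # the static binary keeps working even as /lib is replaced under it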

Building a custom Linux Live CD

Can anyone point me to a good tutorial on creating a bootable Linux CD from scratch?
I need help with a fairly specialized problem: my firm sells an expansion card that requires custom firmware. Currently we use an extremely old live CD image of RH7.2 that we update with current firmware. Manufacturing puts the cards in a machine, boots off the CD, the CD writes the firmware, they power off and pull the cards. Because of this cycle, it's essential that the CD boot and shut down as quickly as possible.
The problem is that with the next generation of cards, I have to update the CD to a 2.6 kernel. It's easy enough to acquire a pre-existing live CD, but those are all designed for showing off Linux on the desktop, which means they take forever to boot.
Can anyone fix me up with a current How-To?
Update:
So, just as a final update for anyone reading this later - the tool I ended up using was "livecd-creator".
My reason for choosing this tool was that it is available for RedHat-based distributions like CentOS, Fedora, and RHEL, which are all distributions that my company supports already. In addition, while the project is very poorly documented, it is extremely customizable. I was able to create a minimal LiveCD and edit the boot sequence so that it booted directly into the firmware updater instead of a bash shell.
The whole job would have only taken an hour or two if there had been a README explaining the configuration file!
There are a couple of interesting projects you could look into.
But first: does it have to be a CD-ROM? That's probably the slowest possible storage you could use (well, apart from tape, maybe). What about a fast USB stick, an IEEE 1394 hard disk, or maybe even an eSATA hard disk?
Okay, there are several Live CDs that are designed to be very small, in order to fit on e.g. a business-card-sized CD. Some were also designed to be booted from a USB stick, back when that meant 64-128 MiByte: Damn Small Linux is one of the best-known ones; however, it uses a 2.4 kernel. There is a sister project called Damn Small Linux - Not, which has a 2.6 kernel (although it seems it hasn't been updated in years).
Another project worth noting is grml, a Live-CD for system administration tasks. It does not boot into a graphic environment, and is therefore quite fast; however, it still contains about 2 GiByte of software compressed onto a CD-ROM. But it also has a smaller flavor, aptly named grml-small, which only contains about 200 MiByte of software compressed into 60 MiByte.
Then there is Morphix, which is a Live-CD builder toolkit based on Knoppix. ("Morphable Knoppix"!) Morphix is basically a tool to build your own special purpose Live-CD.
The last thing I want to mention is MachBoot. MachBoot is a super-fast Live-CD. It uses various techniques to massively speed up the boot process. I believe they even trace the order in which blocks are accessed during booting and then remaster the ISO so that those blocks are laid out contiguously on the medium. Their current record is less than 6 seconds to boot into a full graphical desktop environment. However, this also seems to be stale.
One key piece of advice I can give is that most LiveCDs use a compressed filesystem called squashfs to cram as much data on the CD as possible. Since you don't need compression, you could run the mksquashfs step (present in most tutorials) with -noDataCompression and -noFragmentCompression to save on decompression time. You may even be able to drop the squashfs approach entirely, but this would require some restructuring. This may actually be slower depending on your CD-ROM read speed vs. CPU speed, but it's worth looking into.
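Using the flags mentioned above, the relevant step would look something like this (the directory and output names are placeholders):
mksquashfs rootfs/ filesystem.squashfs -noDataCompression -noFragmentCompression # metadata stays compressed, data does not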
This Ubuntu tutorial was effective enough for me to build a LiveCD based on 8.04. It may be useful for getting the feel of how a LiveCD is composed, but I would probably not recommend using an Ubuntu LiveCD.
If at all possible, find a minimal LiveCD and build up with only minimal stripping out, rather than stripping down a huge LiveCD like Ubuntu. There are some situations in which the smaller distros are using smaller/faster alternatives rather than just leaving something out. If you want to get seriously hardcore, you could look at Linux From Scratch, and include only what you want, but that's probably more time than you want to spend.
Creating Your Own Custom Ubuntu 7.10 Or Linux Mint 4.0 Live-CD With Remastersys
Depends on your distro. Here's a good article you can check out from LWN.net
There is a book I used which covers a lot of distros, though it does not cover creating a flash-bootable image. The book is Live Linux(R) CDs: Building and Customizing Bootables. You can use it with supplemental information from your distro of choice.
Debian Live provides the best tools for building a Linux Live CD. Webconverger uses Debian Live for example.
It's very easy to use.
sudo apt-get install live-helper # from Debian unstable, which should work fine from Ubuntu
lh_config # edit config/* to your liking
sudo lh_build
