How to minimize the Python 3.6 runtime for an embedded system? - linux

We make an ARM9 embedded system, Linux, etc. Recently a customer asked us to pre-install Python 3 for them. We worked out the cross-compile (QEMU is really magical), but the resultant build is 177 MB, so that's a no-go. What's the best plan here - can we leave off the Doc directory, and maybe the "build" dir? I see the Lib directory is 54 MB - is it a case of picking and choosing which libs the customer needs? We know very little about Python. To be clear, I see lots of advice about packaging Python apps - that's not what we are doing - we are packaging the Python runtime. Thanks! (directory size summary below)
1.6M ./Python-3.6.0/Mac
760K ./Python-3.6.0/Include
17M ./Python-3.6.0/build
5.4M ./Python-3.6.0/Python
8.0K ./Python-3.6.0/.github
54M ./Python-3.6.0/Lib
2.7M ./Python-3.6.0/PC
2.7M ./Python-3.6.0/Tools
6.9M ./Python-3.6.0/Objects
11M ./Python-3.6.0/Doc
824K ./Python-3.6.0/Parser
20M ./Python-3.6.0/Modules
15M ./Python-3.6.0/Programs
620K ./Python-3.6.0/PCbuild
12K ./Python-3.6.0/Grammar
5.8M ./Python-3.6.0/Misc
161M ./Python-3.6.0
177M .

A possible choice to meet your requirements may be MicroPython, an implementation of Python 3 that includes a small subset of the Python standard library. According to their website it is targeted at microcontrollers and constrained environments:
https://micropython.org/
According to their description, MicroPython aims to be as compatible with normal Python as possible, to allow you to transfer code with ease from the desktop to a microcontroller or embedded system, yet it is compact enough to fit and run within just 256k of code space and 16k of RAM.
The project wiki, which covers an implementation of MicroPython on an ARM Cortex-M4 based embedded system, may also prove helpful:
https://github.com/micropython/micropython/wiki
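If you want a quick feel for how small it is, here is a rough sketch of building the Unix port on a host machine (directory and binary paths vary between MicroPython releases, so treat them as illustrative):
git clone --depth 1 https://github.com/micropython/micropython.git
cd micropython
make -C mpy-cross                    # build the cross-compiler used for frozen modules
cd ports/unix
make submodules && make
ls -lh build-standard/micropython    # may be ./micropython in older releases; a tiny fraction of the CPython footprint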

We removed the following:
python3-tests
libpython3.8-staticdev
python3-*src
python3-*dbg
python3-*dev
All the .pyc files (they can be recompiled after install)
We chose not to go ahead with the following:
Removing "unneeded" modules: may break the dependencies of 3rd party libraries
UPX binary packer: the zipped installation media is the same size, so this only slows down startup
python source file minification: produced broken code, as we found out when regenerating the .pyc with python3 -m compileall .
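For reference, a rough sketch of the .pyc cleanup and the recompile step mentioned above (the library path is illustrative and depends on where Python lands on the target):
find /usr/lib/python3.6 -name '*.pyc' -delete
find /usr/lib/python3.6 -type d -name '__pycache__' -prune -exec rm -rf {} +
# on the target, after installation:
python3 -m compileall /usr/lib/python3.6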

Related

Enormous appimage created by appimage-builder

I'm packaging an application I have written into an AppImage so that it can be delivered to Linux users.
One of the key features of the GUI toolkit I'm using is that it is small and lightweight, allowing me to compile a build which is statically linked to the GUI library of around 6Mb.
However, after building the AppImage - following the instructions, which say to exercise all the functionality (which in my case basically means only using file browser dialogues to load files) - it generates an absolutely enormous AppImage of around 200Mb!
I know that AppImages are supposed to be a "little bit" big, but this is completely mad as a proposed solution for portability when the natively compiled binary including a statically linked GUI toolkit is only 6Mb.
However, I'm not convinced at all that I need all of that 200Mb. A very similar piece of software to mine, but that additionally uses Qt (which is pretty bloated in comparison) is only about 30Mb. I actually suspect that appimage-builder is doing something very wrong - I think it is listing the files in the directory I explore when using the file browser dialogue as dependencies (they are big files). I have no real other explanation. But if so how do I stop it doing that?
Why is mine so big? What can I do about it?
For the record I am using this method for building my AppImage
Building my binary separately
Running appimage-builder --generate and completing the form
Running appimage-builder --recipe AppImageBuilder.yml --skip-tests
Edit: Removing the obviously unneeded files that were being packaged has reduced the size of the appimage to just 140Mb, but this is still almost 5 times bigger than equivalent appimages I've seen. Are there some tricks/options I'm not aware of?
I got started with AppImage in the last few days and faced the same problem.
In short: check your app's dependencies by every means available and configure the recipe to include only the concrete dependencies, avoiding any theme/icon/etc. packages that are not really used :)
My case is a small app written in Dart (with Flutter). The built project itself weighs about 22MB (du -sh . in the output directory). My host OS is Linux Mint (Cinnamon).
The first time I ran appimage-builder --generate, it generated a "recipe" with 17 packages to be installed and a bunch of libraries to be copied from /lib/x86_64-linux-gnu/. When I generated an AppImage from this recipe, the result was about 105MB, which is extremely large in my opinion for a small app.
My first experiment was to clean up the included files section, on the assumption that all the necessary libraries would be installed from apt. I referred to a few configs found online in which only a few libraries were marked for inclusion and there was an exclude section containing some DE-related files (themes, fonts, icons and so on).
Using that I got a result of about 50MB (which is still quite large).
My next experiments followed this issue - https://github.com/AppImageCrafters/appimage-builder/issues/130#issuecomment-843288012
In short - after an AppImage is generated, a .bundle.yml file appears inside the AppDir folder, listing the deployed libraries. The advice is to try excluding entries from that list. It may be good advice, but it takes far too long to check, for each package or library, whether removing it breaks the resulting AppImage, even with the official appimage-builder tests (docker containers). I hit more broken results than any sane size reduction.
My next experiment was to reduce the dependencies installed from the package manager and use files from the host system instead. I deleted the AppDir and appimage-builder-cache folders and regenerated the recipe. Then I commented out all the packages to be installed from the package manager and left only the included files. The result was a failure because one package was still needed, but after adding it back I got an AppImage of 36MB. That sounds much better than the starting 105MB or even the previous result of 50MB.
Here I got a small "boost" - the Flutter project is built into AOT binaries, with no separate runtime. So I checked the output of ldd for my app, and then mapped the list of required libraries against the list of library files detected by appimage-builder. Some of them matched, some were not found in the ldd output, and some were in the ldd output but had not been detected by appimage-builder. I added everything that was undetected and removed everything unused. My final result is 26MB and it passes all the appimage-builder tests (running in docker images of fedora, cent, debian, ubuntu and arch).
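A rough sketch of that ldd cross-check (the binary path is illustrative):
ldd build/linux/x64/release/bundle/my_app | awk '$3 != "" {print $3}' | sort -u
# compare this list with the libraries recorded in AppDir/.bundle.yml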
I understand that this is awkward for continuous builds, because it requires re-checking the used libraries and adapting the config whenever something changes, but for infrequent builds I think it strikes a reasonable balance.

How to reduce the size of the RedHawk Python library

There are many RedHawk Python 2.7 libraries, and the footprint, including Linux, keeps getting bigger. Can RedHawk remove unnecessary Python 2.7 library modules to reduce the footprint? Please let me know if there are modules that can safely be deleted. Also, I am currently creating an image by compressing the whole system with gzip; is there a more efficient compression method?
What is it you are trying to accomplish? If file size is that important it sounds like you are working on an embedded platform with limited disk space. Check out the features and work here on openembedded-hawk. It's been a while since I've updated it but you can have the entire REDHAWK framework built, with or without python, a REDHAWK Application, and the entire linux based OS all under 100 MiB. (Slightly more if you include python)
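On the compression question, a hedged comparison (the image name is illustrative): xz usually compresses a root filesystem image noticeably better than gzip, at the cost of longer compression time.
gzip -9 -k rootfs.img     # current approach
xz -9 -k rootfs.img       # typically a better ratio, but slower to create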

Can Xilinx ISE iMPACT write an SVF to a PicoBlaze like Adept can?

I'm midway through a VHDL class and have been able to play relatively nice with the ISE and Digilent toolchain in Linux... until trying to reflash a PicoBlaze program. For details, I am currently running and targeting,
Fedora 21 64-bit (3.19.3-200.fc21.x86_64)
Nexys2 development board from Digilent (with a Spartan3)
Xilinx ISE 14.7
Adept 2.16.1 Runtime
Adept 2.2.1 Utilities
I've been able to run ISE and program the Nexys2 bit files with iMPACT just fine so far in Linux, but this current project is to write an assembly program for the PicoBlaze soft core processor, compile and update the memory of the running vector without having to resynthesize any VHDL.
Using the steps from Kris Chaplin's post, I can compile a PSM to HEX and then convert that HEX file to an SVF in dosbox. From here I can use Digilent's Adept tool in Windows to program a top_level.bit file which has the PicoBlaze already synthesized; I could also do this with ISE's iMPACT in Linux. After the design is running, I can use Adept to program the SVF file into the running memory of the design and everything is peachy. However, trying to load the SVF into iMPACT in Linux throws an exception,
EXCEPTION:iMPACT:SVFYacc.c:208:1.10 - Data mismatch.
The only issue I've found online with that error shows that there should be an '#' symbol that needs to be removed, but I haven't seen any '#'s anywhere in the SVF.
I also tried to convert the SVF to XSVF. iMPACT doesn't throw an error loading the XSVF, but programming/executing the XSVF freezes the design instead of running the new program.
Adept doesn't have a comparable GUI in Linux that I've seen, just a cmd line tool 'djtgcfg'. Just like iMPACT, I've been able to program the toplevel.bit file fine with
$ djtgcfg prog -d Nexys2 -i 0 -f ../../toplevel.bit
but attempting to program the svf file with the same call doesn't seem to affect anything. It says it should take a few minutes and immediately reports "Programming succeeded" but I don't see any change on the device.
I'd really like to keep my environment all in Linux if I can, I don't have quite enough room on my laptop to juggle between two VMs.
Is it possible to use iMPACT to write an SVF file to the Nexys2? Or can/should I be using the Adept utility differently?
Has anyone gotten this to work? Thanks a ton!
There are many better ways to reconfigure the PicoBlaze InstructionROM without resynthesizing:
use Xilinx's data2mem tool
This tool is shipped with ISE and can patch BlockRAM contents in bit-files (a usage sketch follows at the end of this answer)
=> requires FPGA reprogramming
use PicoBlaze's embedded JTAGLoader6
Enable the embedded JTAGLoader6 design in the template file. Use JTAG_Loader_RH_64 binary or JTAG_Loader_Win7_64.exe to upload a hex-file via JTAG into the PicoBlaze ROM.
=> reconfigure ROM at runtime, no FPGA reprogramming needed
The manual from Ken Chapman offers several pages on how to use JTAG_Loader. Additionally, have a look into the PicoBlaze discussions at forums.xilinx.com. There are some discussions regarding bugs and issues around JTAG_Loader and how to solve them.
Also have a look at opbasm from Kevin Thibedeau as an alternative and improved PicoBlaze assembler. It also ships with a ROM patch tool.
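For the data2mem route above, a hedged sketch of patching the instruction ROM inside an existing bit-file (all file names are illustrative, and the .bmm file describing the ROM placement has to match your design):
data2mem -bm prog_rom.bmm -bd my_program.mem -bt top_level.bit -o b top_level_new.bit
# then reprogram the FPGA with top_level_new.bit (e.g. via iMPACT)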
I know it's a little bit late for the original poster, but I suspect I am taking the same class and I believe I have found a solution to upload picoblaze code on linux.
Download the KCPSM3 zip file from the Xilinx IP Download page, extract the contents and move the executables from the JTAG_loader folder to your working directory.
In dosbox, run hex2svfsetup.exe and, for the Nexys2 board, select menu options 4 - 0 - 1 - 8
Use the assembler to create the .hex file
In dosbox, run hex2svf.exe to create the .svf file
Then run svf2xsvf.exe -d -i <input.svf> -o <output.xsvf>
Contrary to the JTAG_Loader_quick_guide.pdf in the initial zip file, use iMPACT to open the .xsvf file and program with it.

Is there something similar to NanoBSD in Linux

NanoBSD is a script that makes a light, small, in-memory copy of FreeBSD. It is useful in embedded systems. Is there something similar to NanoBSD for Linux? Especially a feature like "Everything is read-only at run-time", as mentioned here.
A lot of toolchain / system build systems build Linux root filesystems which are designed to run completely out of a ramdisk (rootfs / tmpfs). This means that everything is read/write at runtime, but it does not persist across reboots (a persistent FS can of course be mounted as a non-root FS).
The most well known of these is Busybox (with or without uclibc), which ships with various scripts to build very small-footprint Linux-based embedded systems (root FS is typically a few Mb only; just add a kernel). Busybox/Linux is not the same as GNU/Linux, but it is fairly similar - most things are simpler or have fewer options; some features are entirely absent or can be disabled at compile-time.
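As a rough sketch of the BusyBox route (the install prefix is illustrative; applet selection is normally trimmed in menuconfig):
make defconfig                           # or 'make menuconfig' to deselect unneeded applets
make -j"$(nproc)"
make CONFIG_PREFIX=/tmp/rootfs install   # populates bin/, sbin/, usr/ with symlinks to the busybox binary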
Linux is NOT an Operating system like FreeBSD, rather it is a kernel. You can choose to layer either GNU C library and tools (which I think all major general-purpose distributions do) or something else - which is mostly used for smaller systems, including uclibc, Android etc.
There are literally hundreds of toolchains, build environments and embedded distros of Linux, some only a couple of megabytes in size. Many also support some or many of the different processors Linux runs on (i386 and friends, ARM, Power, ...).
To get you started, a couple of projects I find interesting: OpenWrt and OpenEmbedded, and lpclinux, a Linux for NXP LPC3xxx ARM processors - but there are really hundreds of them.
Some other resources
A very good source that (also) touches on a number of issues specific to embedded systems is Linux From Scratch. And this PDF gives some insight into the different filesystems available for an embedded Linux system.
I would take a look at TinyCore Linux. It isn't really read-only, but it is nearly the same concept, and I think there is also a way to make the OS/binary part read-only while keeping the config part writeable.

Building a custom Linux Live CD

Can anyone point me to a good tutorial on creating a bootable Linux CD from scratch?
I need help with a fairly specialized problem: my firm sells an expansion card that requires custom firmware. Currently we use an extremely old live CD image of RH7.2 that we update with current firmware. Manufacturing puts the cards in a machine, boots off the CD, the CD writes the firmware, they power off and pull the cards. Because of this cycle, it's essential that the CD boot and shut down as quickly as possible.
The problem is that with the next generation of cards, I have to update the CD to a 2.6 kernel. It's easy enough to acquire a pre-existing live CD - but those all are designed for showing off Linux on the desktop - which means they take forever to boot.
Can anyone fix me up with a current How-To?
Update:
So, just as a final update for anyone reading this later - the tool I ended up using was "livecd-creator".
My reason for choosing this tool was that it is available for RedHat-based distributions like CentOs, Fedora and RHEL - which are all distributions that my company supports already. In addition, while the project is very poorly documented it is extremely customizable. I was able to create a minimal LiveCD and edit the boot sequence so that it booted directly into the firmware updater instead of a bash shell.
The whole job would have only taken an hour or two if there had been a README explaining the configuration file!
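For anyone following the same route, a hedged sketch of a livecd-creator invocation (the kickstart file name and label are illustrative; the real work happens in the kickstart, where you strip packages and override the boot sequence):
livecd-creator --config=firmware-updater.ks --fslabel=Firmware-Updater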
There are a couple of interesting projects you could look into.
But first: does it have to be a CD-ROM? That's probably the slowest possible storage (well, apart from tape, maybe) you could use. What about a fast USB stick or an IEEE 1394 hard-disk or maybe even an eSATA hard-disk?
Okay, there are several Live-CDs that are designed to be very small, in order to e.g. fit on a business card sized CD. Some were also designed to be booted from a USB stick, back when that meant 64-128 MiByte: Damn Small Linux is one of the best known ones, however it uses a 2.4 kernel. There is a sister project called Damn Small Linux - Not, which has a 2.6 kernel (although it seems it hasn't been updated in years).
Another project worth noting is grml, a Live-CD for system administration tasks. It does not boot into a graphic environment, and is therefore quite fast; however, it still contains about 2 GiByte of software compressed onto a CD-ROM. But it also has a smaller flavor, aptly named grml-small, which only contains about 200 MiByte of software compressed into 60 MiByte.
Then there is Morphix, which is a Live-CD builder toolkit based on Knoppix. ("Morphable Knoppix"!) Morphix is basically a tool to build your own special purpose Live-CD.
The last thing I want to mention is MachBoot. MachBoot is a super-fast Live-CD. It uses various techniques to massively speed up the boot process. I believe they even trace the order in which blocks are accessed during booting and then remaster the ISO so that those blocks are laid out contiguously on the medium. Their current record is less than 6 seconds to boot into a full graphical desktop environment. However, this also seems to be stale.
One key piece of advice I can give is that most LiveCDs use a compressed filesystem called squashfs to cram as much data on the CD as possible. Since you don't need compression, you could run the mksquashfs step (present in most tutorials) with -noDataCompression and -noFragmentCompression to save on decompression time. You may even be able to drop the squashfs approach entirely, but this would require some restructuring. This may actually be slower depending on your CD-ROM read speed vs. CPU speed, but it's worth looking into.
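A minimal sketch of that mksquashfs step (the source directory and output names are illustrative):
mksquashfs ./livecd-root filesystem.squashfs -noDataCompression -noFragmentCompression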
This Ubuntu tutorial was effective enough for me to build a LiveCD based on 8.04. It may be useful for getting the feel of how a LiveCD is composed, but I would probably not recommend using an Ubuntu LiveCD.
If at all possible, find a minimal LiveCD and build up with only minimal stripping out, rather than stripping down a huge LiveCD like Ubuntu. There are some situations in which the smaller distros are using smaller/faster alternatives rather than just leaving something out. If you want to get seriously hardcore, you could look at Linux From Scratch, and include only what you want, but that's probably more time than you want to spend.
Creating Your Own Custom Ubuntu 7.10 Or Linux Mint 4.0 Live-CD With Remastersys
Depends on your distro. Here's a good article you can check out from LWN.net
There is a book I used which covers a lot of distros, though it does not cover creating a flash-bootable image. The book is Live Linux(R) CDs: Building and Customizing Bootables. You can use it with supplemental information from your distro of choice.
Debian Live provides the best tools for building a Linux Live CD. Webconverger uses Debian Live for example.
It's very easy to use.
sudo apt-get install live-helper # from Debian unstable, which should work fine from Ubuntu
lh_config # edit config/* to your liking
sudo lh_build
