Yocto do_package_qa hangs for bin_package nodejs recipe

Using Yocto morty, I'm trying to add a prebuilt version of nodejs in my distribution. When I bitbake core-image-sato, do_package_qa hangs for hours. I'd be grateful for your help in getting me past this issue.
I've added this to the bottom of local.conf:
CORE_IMAGE_EXTRA_INSTALL += "mynode"
This is my recipe for mynode:
SUMMARY = "puts the node.js binary distribution into my image"
SECTION = "base"
LICENSE = "MIT & BSD & Artistic-2.0"
LIC_FILES_CHKSUM = "file://usr/node-v7.10.0-linux-x64/LICENSE;md5=d29463feca32ea5977af7b6c7d62c14a"
SRC_URI = "https://nodejs.org/dist/v7.10.0/node-v7.10.0-linux-x64.tar.xz;subdir=usr"
SRC_URI[md5sum] = "b9122f212e0716d199d7e954ff81e1ec"
SRC_URI[sha256sum] = "6166b9f3fb1a9e861335d864688fee5366f040db808080856a1a2b71b6019786"
S = "${WORKDIR}"
inherit bin_package
This is the content of log.do_install for my nodejs package. Maybe the message from tar describes my problem somehow?
DEBUG: Executing shell function do_install
tar: ./pseudo/pseudo.socket: socket ignored
DEBUG: Shell function do_install finished
There doesn't appear to be anything useful in log.do_package_qa for my nodejs package, but maybe somebody will see something that I don't see:
DEBUG: Executing python function sstate_task_prefunc
DEBUG: Python function sstate_task_prefunc finished
DEBUG: Executing python function do_package_qa
NOTE: DO PACKAGE QA
DEBUG: Executing python function read_subpackage_metadata
DEBUG: Python function read_subpackage_metadata finished
NOTE: Checking Package: mynode-dev
NOTE: Checking Package: mynode
I see a few bitbake-worker processes running, one with argument decafbad, two with argument decafbadbeef. I also see a pseudo process running.

If you're going to use
subdir=usr
at the end of SRC_URI, then you also need to change the source directory (S) accordingly:
S = "${WORKDIR}/usr"
In addition, I think you want to do it this way for all pre-built binary packages (inherit bin_package); I tried without either change and it hung forever. The tar message about pseudo.socket is likely the clue: with S = "${WORKDIR}", bin_package's do_install archives the entire work directory, including pseudo's runtime files, rather than just the unpacked tarball. Also, you may want to use a subdir name that nothing else uses, say external_binary, so that every binary recipe can use the same subdir.
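Putting that together, the recipe might look like this (a sketch based on the recipe above; note that LIC_FILES_CHKSUM is resolved relative to S, so its path changes too):

SUMMARY = "puts the node.js binary distribution into my image"
SECTION = "base"
LICENSE = "MIT & BSD & Artistic-2.0"
LIC_FILES_CHKSUM = "file://node-v7.10.0-linux-x64/LICENSE;md5=d29463feca32ea5977af7b6c7d62c14a"
SRC_URI = "https://nodejs.org/dist/v7.10.0/node-v7.10.0-linux-x64.tar.xz;subdir=external_binary"
SRC_URI[md5sum] = "b9122f212e0716d199d7e954ff81e1ec"
SRC_URI[sha256sum] = "6166b9f3fb1a9e861335d864688fee5366f040db808080856a1a2b71b6019786"
# S points at the unpack subdir so do_install ships only the payload
S = "${WORKDIR}/external_binary"
inherit bin_package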

Related

How to put an extra file in the kernel image by yocto

I'm having trouble putting my initramfs.cpio into my kernel image with Yocto.
I have two .bb files: one builds an initramfs, and the other builds a fitImage.
I can successfully build the fitImage bundled with my initramfs image.
But it always fails to build a fitImage that also carries initramfs.cpio.gz in its /usr directory.
(That is, I want to see a file named initramfs.cpio in /usr when I boot my fitImage to a console.)
====================================================================
Here is my error message:
ERROR: linux-mine-1_4.9.27+gitAUTOINC+d87116e608-r0 do_package: QA Issue: linux-mine: Files/directories were installed but not shipped in any package:
/usr
/usr/initramfs-mine-qemu.cpio
Please set FILES such that these items are packaged. Alternatively if they are unneeded, avoid installing them or delete them within do_install.
linux-mine: 2 installed and not shipped files. [installed-vs-shipped]
ERROR: linux-mine-1_4.9.27+gitAUTOINC+d87116e608-r0 do_package: Fatal QA errors found, failing task.
ERROR: linux-mine-1_4.9.27+gitAUTOINC+d87116e608-r0 do_package: Function failed: do_package
ERROR: Logfile of failure stored in: /home/paul/projects/Test/yocto/build/tmp/work/mine-poky-linux-gnueabi/linux-mine/1_4.9.27+gitAUTOINC+d87116e608-r0/temp/log.do_package.26149
ERROR: Task (/home/paul/projects/Test/yocto/yocto-2.2/poky/../meta-mine/recipes-kernel/linux/linux-mine_4.9.bb:do_package) failed with exit code '1'
====================================================================
Here is my kernel image bb file
FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}-${PV}:"
LINUX_VERSION ?= "4.9.27"
SRCREV = "d87116e608e94ad684b5e94d46c892e33b9e2d78"
SRC_URI = "git://local/kernel;protocol=ssh;branch=master"
#FILES_${PN} += "/usr /usr/initramfs-mine-${MACHINE_ARCH}.cpio"
#FILES_${PN}-${PV} += "/usr /usr/initramfs-mine-${MACHINE_ARCH}.cpio"
#IMAGE_INSTALL = "initramfs-mine"
do_install_append () {
    echo "WangPaul : S=[${S}]"
    echo "WangPaul : B=[${B}]"
    echo "WangPaul : D=[${D}]"
    install -d ${D}/usr/
    install -m 0444 ${B}/usr/initramfs-mine-${MACHINE_ARCH}.cpio ${D}/usr/
}
====================================================================
Here is my initramfs bb file
LICENSE = "GPLv2"
PACKAGE_INSTALL = "initramfs-live-boot ${VIRTUAL-RUNTIME_base-utils} udev ${ROOTFS_BOOTSTRAP_INSTALL}"
IMAGE_FSTYPES = "${INITRAMFS_FSTYPES}"
inherit core-image
====================================================================
I have found similar questions:
Ship extra files in kernel module recipe and
An example of using FILES_${PN}
The approaches in those discussions do not work...
Any information would be appreciated!!
Thanks!!
The error is a QA issue: the files were installed but not added to any package, so they never make it into the rootfs. Add the line below to your kernel image .bb file; it should resolve the issue.
FILES_${PN} += "${exec_prefix}/*"
Note: the commented-out FILES lines in your kernel .bb file are probably in the wrong format.
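Since ${exec_prefix} defaults to /usr, that line covers the installed cpio. A minimal sketch of how it fits into the kernel recipe, reusing the do_install_append from the question:

FILES_${PN} += "${exec_prefix}/*"

do_install_append () {
    install -d ${D}${exec_prefix}
    install -m 0444 ${B}/usr/initramfs-mine-${MACHINE_ARCH}.cpio ${D}${exec_prefix}/
}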

How do I include this directory in the $PATH env var?

I'm building a package for GitHub's Atom editor and I'm running into a challenge trying to get a child process to execute with Node.js. I'm pretty sure the problem is that the environment Atom runs in doesn't include the path to the mrt script. So when I run this from within my package:
exec = require("child_process").exec
child = undefined
child = exec("/usr/local/bin/mrt add iron-router", { cwd: path }, (error, stdout, stderr) ->
  console.log "stdout: " + stdout
  console.log "stderr: " + stderr
  console.log "exec error: " + error if error isnt null
  return
)
in the console, I get:
Atom has a web inspector built right into it, and you can actually see the paths Atom has included. When I go to Atom's console and type process.env.PATH, it shows: /usr/bin:/bin:/usr/sbin:/sbin. So I somehow need to make Atom aware of the mrt script's path. Does anyone know how I might go about doing that?
I also reached out on Atom's discussion forum yesterday, but have yet to come up with a solution.
Edit:
I should also note that the normal command for executing the mrt package installer is mrt add package-name, but as advised on Atom's discussion forum, I've been using the full path.
Edit 2:
I've created symlinks to node in my /usr/bin directory, and it's working now. Now I'm trying to get node to create the symlinks for me using fs.symlink, but that doesn't seem to be working.
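For illustration, a minimal sketch of such a call (the exact paths are assumptions; note that creating links in /usr/bin normally requires elevated privileges, which may be why it doesn't seem to work from inside Atom):

fs = require "fs"
# fs.symlink(target, path, callback) passes any error to the callback
fs.symlink "/usr/local/bin/node", "/usr/bin/node", (err) ->
  console.log "symlink error: " + err if err?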
To sum it up, the problem is that Atom uses the PATH from where it is launched. Consequently, the path to node and the path to mrt were not included in Atom's PATH. The solution came to me when someone on the Atom discussion forum pointed out Atom's BufferedNodeProcess class.
At the time of this answer there is a slight bug with that class, so I was not able to use it (the GitHub team works fast; I wouldn't be surprised if it was fixed within the next couple of days). I was, however, able to use some of its code to get Atom's environment. I also ended up using node's spawn method instead of exec, since that's what BufferedNodeProcess uses, and it lets you read each individual line of stdout.
path = require "path"
{spawn} = require "child_process"

options =
  cwd: atom.project.getPath()
options.env = Object.create(process.env) unless options.env?
options.env["ATOM_SHELL_INTERNAL_RUN_AS_NODE"] = 1

# Use Atom's bundled node binary (Atom Helper on OS X) instead of relying on PATH
node = (if process.platform is "darwin" then path.resolve(process.resourcesPath, "..", "Frameworks", "Atom Helper.app", "Contents", "MacOS", "Atom Helper") else process.execPath)

mrt = spawn(node, [
  "/usr/local/lib/node_modules/meteorite/bin/mrt.js"
  "add"
  "iron-router"
], options)

mrt.stdout.on "data", (data) ->
  console.log "stdout: " + data
  return

mrt.stderr.on "data", (data) ->
  console.log "stderr: " + data
  return

mrt.on "close", (code) ->
  console.log "child process exited with code " + code
  return

What directory should I use for "error: 'extra_PROGRAMS' is used but 'extradir' is undefined"?

I have an autoconf/automake system that has a stand-alone target called stand. I don't want stand to be built during a normal build, so I have this in my Makefile.am:
bin_PROGRAMS = grace
extra_PROGRAMS = stand
...
stand_SOURCES = stand.cpp barry.cpp ...
This has worked for a while, but automake just got updated on my system and I'm now getting this error:
src/Makefile.am:4: error: 'extra_PROGRAMS' is used but 'extradir' is undefined
src/Makefile.am:66: warning: variable 'stand_SOURCES' is defined but no program or
src/Makefile.am:66: library has 'stand' as canonical name (possible typo)
So I added this:
extradir = .
But that has caused problems of its own: defining extradir makes automake treat stand as an installable program.
I don't want the stand program installed. It's just a test program for me. But it's not part of a formal test suite, it's just for my own purposes. What should I do?
We found the bug! It turns out that extra needs to be capitalized: EXTRA_PROGRAMS is the automake keyword for programs that are built only on demand and never installed, whereas a lowercase extra_ prefix makes automake look for an extradir install location. Like this:
bin_PROGRAMS = grace
EXTRA_PROGRAMS = stand
...
stand_SOURCES = stand.cpp barry.cpp ...
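With EXTRA_PROGRAMS, stand is not built during a normal make and is never installed; when you need it, build it explicitly with make stand.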
You could try conditionally building it:
noinst_PROGRAMS=
if BUILD_STAND
noinst_PROGRAMS += stand
endif
stand_SOURCES = stand.cpp barry.cpp ...
This will not install it, since it's in noinst_PROGRAMS, and others will normally not build it, since BUILD_STAND will normally not be defined for them.
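For completeness, BUILD_STAND must also be declared as an automake conditional in configure.ac. A minimal sketch, where the --enable-stand option name is an assumption:

# configure.ac: add an --enable-stand switch and expose it to automake
AC_ARG_ENABLE([stand],
  [AS_HELP_STRING([--enable-stand], [build the stand test program])])
AM_CONDITIONAL([BUILD_STAND], [test "x$enable_stand" = "xyes"])

Then ./configure --enable-stand && make builds stand, while a plain ./configure skips it.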

SCons manual build step

Is it possible to get SCons to remind me to perform a manual step using its dependency tracking?
My build uses the .swc output from a .fla, and the .swc can't be generated from the command line.
I tried something like:
env.Command(target, sources + SHARED_SOURCES,
    Action(lambda target, source, env: 1, "Out of date: $TARGET"))
But with that method, I have to use Decider('make') or I get:
$ scons --debug=explain
scons: rebuilding `view_bin\RoleplaySkin.swc' because `view_src\RoleplaySkin.fla' changed
Out of date: view_bin\RoleplaySkin.swc
scons: *** [view_bin\RoleplaySkin.swc] Error 1
And, more importantly, SCons never realizes its cache is out of date, so any change in the Environment or sources since it wrote the signature in .sconsign.dblite means it will always try to rebuild (and therefore always fail).
What about using the Precious method to protect the *.swc output that is generated from the *.fla?
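In SCons that would look something like the line below; env.Precious keeps the named file from being deleted before a rebuild is attempted (swc_file is assumed to be the node for the .swc target):

env.Precious(swc_file)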
How about creating your own RemindMe builder which reminds you and fails to build the target?
It would look something like this:
import os

def remind_me(target, source, env):
    # We do not build the target; we remove any stale copy and remind instead
    if os.path.exists(target[0].abspath):
        os.remove(target[0].abspath)
    print("This is a friendly reminder: %s is out of date, run the manual build step" % source[0])
    return 1  # non-zero return marks the target as failed

reminder = Builder(action = remind_me,
                   suffix = '.swc',
                   src_suffix = '.fla')
env = Environment(BUILDERS = {'RemindMe' : reminder})

# Run the builder like this:
swc_file = env.RemindMe('some_fla_file')
final_target = env.BuildWithSWC(some_other_target, swc_file)
This is, however, only a theory; I have never tried actually deleting the target instead of creating it. It might be worth a try at least.

Update variables within makefile label

I have C++ static libraries and an executable that uses them; each one is in a separate folder. Each such project can be built in a Debug or Release configuration, with a file hierarchy like the following:
Static_Lib1\Debug\staticlib1.a
Static_Lib1\Release\staticlib1.a
//same for all other static libraries
Executable\Debug\executable
Executable\Release\executable
All Debug and Release folders contain makefiles.
I'm trying to write an external makefile to call each one of the internal projects, using the selected configuration - debug or release.
So, I tried something like:
CFG =                 # empty declaration
PROJECTS = Static_Lib1 Static_Lib2 ... Executable

all: release          # default config is release

release:
	CFG = Release
	make build-all

debug:
	CFG = Debug
	make build-all

build-all:
	make clean
	$(foreach projectName, $(PROJECTS), cd $(projectName)/$(CFG) && make all;)
But I get this output when trying, for example, to run make debug:
CFG = Debug
make: CFG: Command not found
make: *** [debug] Error 127
How can I fix this?
My OS is SLED 11x64.
Thank you in advance!
Change it to:
...
release:
	make CFG=Release build-all

debug:
	make CFG=Debug build-all
...
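The underlying problem is that each line of a make recipe is executed by the shell, so CFG = Debug is handed to the shell as a command named CFG, hence "CFG: Command not found". Passing CFG=Debug on the sub-make command line sets the variable for that invocation instead. A slightly fuller sketch, using the conventional $(MAKE) for recursive invocations and reusing the build-all rule from the question:

release:
	$(MAKE) CFG=Release build-all

debug:
	$(MAKE) CFG=Debug build-all

build-all:
	$(MAKE) clean
	$(foreach projectName, $(PROJECTS), cd $(projectName)/$(CFG) && $(MAKE) all;)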