Passing params from the build system to buildbot - linux

I'd like to share how I implemented a solution to a problem I had, to get some feedback and maybe learn about a new feature of buildbot.
Scenario:
Create a package of a given piece of software, and upload the package to a shared folder on the buildmaster.
The package name contains some data that are known to the build system (i.e. the Makefiles), specifically the software version. Let's assume the package name is:
myapp-1.2.3-r2435.tar.gz
Question:
How do I send the buildslave steps the data required to build up the very same package name, so that the buildslave can upload the package?
Specifically, I need to know the version number (but I guess this could be any param).
Implemented (and working) solution:
The makefile, once the compilation process is completed, writes a file containing the required param.
The slave uses the SetProperty() step to read the content of that file into a custom-named property.
Once I have the value of interest in the property (let's say APP_VERSION), I use it to build the package name with the same pattern used by the build system.
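For illustration, here is a minimal sketch of how that looks in a master config. The step, file, and path names (version.txt, /shared/) are mine, not from the real setup, and depending on your Buildbot version the property step is called SetProperty or SetPropertyFromCommand:

from buildbot.process.factory import BuildFactory
from buildbot.steps.shell import Compile, SetProperty
from buildbot.steps.transfer import FileUpload
from buildbot.process.properties import Interpolate

f = BuildFactory()
# The Makefile writes the version into version.txt as a side effect.
f.addStep(Compile(command=["make", "package"]))
# Read the file back and store its content in a property (not Windows-friendly).
f.addStep(SetProperty(command=["cat", "version.txt"], property="APP_VERSION"))
# Rebuild the package name from the property and upload it to the master.
f.addStep(FileUpload(
    slavesrc=Interpolate("myapp-%(prop:APP_VERSION)s.tar.gz"),
    masterdest=Interpolate("/shared/myapp-%(prop:APP_VERSION)s.tar.gz")))

Note that SetProperty also accepts an extract_fn callback, which may help with the OS-independence concern below: the property can be parsed out of the output of whatever command exists on both platforms.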
The described solution works, but I do not really like it because:
1) it's complicated, hence, I guess, fragile
2) it is not OS independent (I use "echo $VAR > file" to write the file, and "cat file" to read it and set the buildslave Property)
Is there in your opinion a better way to solve this issue?
Do you have any suggestion to make the solution OS independent? (It will not work for sure on Windows, while my package should be built on Windows too.)

Related

Is it possible to start a program with a missing shared library

I'm running Linux and have a situation like this:
A binary 'bin1' loads 'shared1.so' via dlopen; 'shared1.so' is linked against 'shared2.so' and 'shared3.so'.
If 'shared2.so' or 'shared3.so' is missing, the program 'bin1' won't run.
There are runs where I know I won't touch any code from 'shared2.so', and I want 'bin1' to be able to run even when this library is missing. Can this be done?
You could ship the program with a dummy shared2.so library. You might need to add dummy functions which shared1 expects to find there. This can be done manually or via an automatic tool like Implib.so.
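For example, if shared1.so only needs a couple of functions from shared2.so, the stand-in can be tiny. The symbol names below are made up; running nm -D on the real shared2.so will show what actually needs stubbing:

/* dummy_shared2.c: minimal stand-in for the real shared2.so.
   Build with: gcc -shared -fPIC -o shared2.so dummy_shared2.c */

/* No-op stubs for the symbols shared1.so expects to resolve. */
void shared2_init(void) { }

int shared2_compute(int x)
{
    (void)x;      /* unused */
    return 0;     /* harmless default */
}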

Why Is Doppl Trying To Pull in ReactiveStreams?

I am attempting to convert parts of an Android app to iOS using Doppl, and I am getting a strange result: Doppl keeps trying to pull in android.arch.lifecycle:reactivestreams, even though I don't want it to.
Specifically, in app/build/j2objcSrcGenMain/android/arch/lifecycle/, there is a reactivestreams/ subdirectory with R.h and R.m files in it. This seems to make Xcode cranky and may explain why I had some oddities with pod install.
My app/build.gradle has compile "android.arch.lifecycle:reactivestreams:$archVer", because my activity is using LiveDataReactiveStreams.fromPublisher(). However:
The activity is not in the translatePattern (and since its code is not showing up in app/build/j2objcSrcGenMain/, I have to assume that the translatePattern is fine)
I do not have a doppl statement related to reactivestreams, because there does not appear to be a Doppl conversion of this library (nor should it be needed here)
AFAIK, nowhere else in this app am I referring to LiveDataReactiveStreams, which AFAIK is the one-and-only public class from the reactivestreams library
So, the questions:
What determines whether Doppl creates R.h and R.m files for some dependency? It's not the existence of a doppl statement, as I have doppl statements for a lot of other dependencies (RxJava, RxAndroid, Retrofit) and those do not get R.h and R.m files. It's not whether the dependency is referenced from generated code, as my repository definitely uses RxJava and Retrofit, yet there are no R files for those.
How can I figure out why Doppl generates R.h and R.m for reactivestreams?
Once I get this cleared up... do I re-run pod install, or is there some other pod command to refresh an existing pod with a new implementation?
Look into 'app/build/generated/source/r/debug' and confirm there's an R.java being created for the Architecture Components. It'll be under 'android/arch/lifecycle/reactivestreams'.
I think there are 2 problems here.
Problem 1
Somehow Doppl/J2objc is of the opinion that this file should be transpiled. It could be either that 'translatePattern' matches it, or that something in the shared code is referencing it. If you can't figure out which, please post a comment and I'll try to help (or post in the Slack group).
Problem 2
Regardless of why that 'R.java' is being sucked into the translate step, because of how stock J2objc is configured, the code is being generated with package folders instead of One Big Name. That generated file should be called 'AndroidArchLifecycleReactivestreamsR.h' (and 'AndroidArchLifecycleReactivestreamsR.m'). Xcode really doesn't like package folders. That's why there's a slightly custom J2objc being used with Doppl, so we can have files with big names instead of folders.
In cases where you intentionally use package names that match with what J2objc considers to be "system" classes, you need to provide a header mapping file to force long names. The 'androidbase' doppl library needs to add a lot of files that are in the 'android' package, which J2objc considers "system". We override those names in the mapping file.
build.gradle
https://github.com/doppllib/core-doppl/blob/master/androidbase/build.gradle#L19
mapping file
https://github.com/doppllib/core-doppl/blob/master/androidbase/src/main/java/androidbase.mappings
I screwed up.
In my dopplConfig, I have:
translatePattern {
    include '**/api/**'
    include '**/arch/**'
    include '**/RepositoryTest.java'
}
In this case, **/arch/** not only matches my arch package, but also the arch package from the Architecture Components.
Ordinarily, this would not matter, because the Architecture Components source code is not in my project. But, R.java gets generated, due to resources, and the translatePattern includes generated source code in addition to lovingly hand-crafted source code. So, that's where my extraneous Objective-C was coming from.
Many thanks to Kevin Galligan for his assistance with this, out on the #newbiehelp Doppl Slack channel!

How do I use a CPAN module in a perl script that I want to give to others?

I'm writing a Perl script that takes data and writes it to an Excel file. I'm using Excel::Writer::XLSX to do this.
I'm hoping to write the script and then give it to the rest of my team so we can all use it to compile the data when we need to.
I have a few questions about this:
Do my colleagues need to have the module installed for the script to work?
If not, how do I wrap up the module with the script to give it to them?
Is there a better way of doing this than using the module I've chosen?
There are a few ways of doing this. One option is to put together a Makefile.PL that specifies the dependencies. This allows you to bundle your script as a distribution. E.g.
use ExtUtils::MakeMaker;

WriteMakefile(
    ABSTRACT  => 'myscript creates Excel files',
    AUTHOR    => 'A.U. Thor',
    EXE_FILES => [ 'myscript' ],
    NAME      => 'myscript',
    VERSION   => '1.2.3',
    PREREQ_PM => {
        'Excel::Writer::XLSX' => '0.88',
    },
);
Then, people can do perl Makefile.PL which will inform them of the dependencies. If you do make dist, and distribute the resulting archive file, they can also use cpanm to install your script along with its dependencies.
Another option is to put together a cpanfile. Then, recipients can install all the dependencies using a tool such as cpanm.
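A cpanfile for this script could be a one-liner (the version is illustrative):

requires 'Excel::Writer::XLSX', '0.88';

With that file in place, running cpanm --installdeps . in the script's directory pulls in everything it needs.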
Now, if you are distributing the script to people who do not use Perl normally, and you want them to be able to just click and run etc., you might want to look into pp (the PAR Packer).
A long time ago, I wrote a program I called scriptdist to turn a single-file program into a CPAN-like distribution, complete with a build file. That way you could pass it around as an archive and people could treat it like any other CPAN distribution. It basically automates what Sinan posted. I wrote about it for Dr. Dobb's.
There's a trick that you can use if you want to pass around the archive. The cpan tool can install from the current directory. That will get the dependencies (which, by the nature of being dependencies, are required):
$ cpan .
That way, you can install your program and its dependencies without putting anything in a CPAN-like repository.
It's far from clear what you need to know
Do my colleagues need to have the module installed for the script to work?
I think it's obvious that your colleagues need access to your code to be able to make use of it
It's not clear what you have written, but if you have created a module then any program with access to your module files can simply use it to access its capabilities
If not, how do I wrap up the module with the script to give it to them?
Your "if not" isn't clear. What you have written means "If they don't need to have the module installed to for the script to work", and I doubt if that is your intention
"how do I wrap up the module with the script" Are you asking how to create a module, or do you already have one? Typically, modules are accessed by programmers who write a script with the use statement
If you have a module and you want other people to be able to load it with use then it must appear in one of the directories listed in their #INC array. If you are working on separate systems then it is best to create a package that you can alter as necessary and have others update
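For instance, a common pattern is to ship the module in a lib/ directory beside the script and extend @INC at run time (the module name here is hypothetical):

use FindBin qw($Bin);
use lib "$Bin/lib";    # look for modules in lib/ next to the script
use My::Helper;        # loaded from lib/My/Helper.pm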
Is there a better way of doing this than using the module I've chosen?
Are you referring to Excel::Writer::XLSX or your own module?
If Excel::Writer::XLSX is doing what you need then you probably shouldn't change. But if you are having problems with it in some way then you need to ask a new question and describe those issues.

Node.js/npm - dynamic service discovery in packages

I was wondering whether Node.js/npm include any kind of extension mechanism comparable to Python setuptools' "entry points".
So, in short:
is there any way I can do dynamic discovery of services provided by other packages using npm?
if not, what would be the best way to implement something similar? Specifying the extension name in the main module's configuration file seems to be the logical solution, but I wonder whether something "automatic" can be done.
I'm not aware of any builtin mechanism to do this.
One viable way of doing it yourself:
I made a small tool (Jumpstart) to quickly create project scaffolding from templates with placeholders, and I used a kind of plugin mechanism for that. It basically comes down to the Jumpstart script searching for modules named jumpstart-* "adjacent" to where the module itself is installed. So it works for both local and global installations: if installed locally, it searches the other local modules (on the same level), and if installed globally, it searches the other global modules.
Note that here, "search" comes down to a simple fs.exists check to see if there's a Jumpstart template module with a particular name installed. However, nothing would stand in the way of actually getting a full list of all installed packages matching the jumpstart-* pattern and loading them all at once. I could also search up the entire directory tree for node_modules directories and do the same. There's no point in doing this for this particular program, however.
See https://npmjs.org/package/jumpstart for docs.
The only limitation of this technique is that all modules must be named in a consistent fashion: start with some string, end with some string, something like that. Any rogue packages polluting the namespace could be detected by doing further checks on a package's contents: what files does it contain? What kind of object does its main module export? Etc.
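A minimal sketch of that discovery pattern in Node, assuming the tool itself lives in node_modules so its sibling packages sit one level up (error handling omitted):

// discover.js: load every sibling package named jumpstart-*
var fs = require('fs');
var path = require('path');

// This module is installed in node_modules, so siblings are one level up.
var modulesDir = path.join(__dirname, '..');

var plugins = fs.readdirSync(modulesDir)
    .filter(function (name) { return name.indexOf('jumpstart-') === 0; })
    .map(function (name) { return require(path.join(modulesDir, name)); });

console.log('found ' + plugins.length + ' plugin(s)');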
Brunch also uses a plugin mechanism. This one actually deals with file extensions, so is more relevant: https://github.com/brunch/brunch/wiki/Plugins . See for example source of the CoffeeScript plugin https://github.com/brunch/coffee-script-brunch/blob/master/src/index.coffee .

On GNU/Linux systems, where should I load application data from?

In this instance I'm using C with autoconf, but the question applies elsewhere.
I have a Glade XML file that is needed at runtime, and I have to tell the application where it is. I'm using autoconf to define a variable in my code that points to the "specified prefix directory"/app-name/glade. But that only begins to work once the application is installed. What if I want to run the program before that point? Is there a standard way to determine what paths should be checked for application data?
Thanks
Thanks for the responses. To clarify, I don't need to know where the app data is installed (e.g. by searching in /usr, /usr/local, etc.); the configure script does that. The problem was more determining whether the app has been installed yet. I guess I'll just check the install location first, and fall back to "./src/foo.glade" if it isn't there.
I don't think there's any standard way to locate such data.
Personally, I'd keep a list of paths and check whether the file can be found in any of them. The list should contain DATADIR+APPNAME as defined by autoconf, plus CURRENTDIRECTORY+POSSIBLE_PREFIX, where the prefix might be some folder from your build root.
In any case, don't forget to use those defines from autoconf for your data files; they make your software easier to package (e.g. as deb/rpm).
There is no prescription for how this should be done in general, but Debian packagers usually install the application data somewhere in /usr/share, /usr/lib, et cetera. They may also patch the software to make it read from the appropriate locations. You can see the Debian policy for more information.
I can however say a few words how I do it. First, I don't expect to find the file in a single directory; I first create a list of directories that I iterate through in my wrapper around fopen(). This is the order in which I believe the file reading should be done:
current directory (obviously)
~/.program-name
$(datadir)/program-name
$(datadir) is a variable you can use in Makefile.am. Example:
AM_CPPFLAGS = $(ASSERT_FLAGS) $(DEBUG_FLAGS) $(SDLGFX_FLAGS) $(OPENGL_FLAGS) -DDESTDIRS=\"$(prefix):$(datadir)/:$(datadir)/program-name/\"
This of course depends on your output from configure and what your configure.ac looks like.
So, just make a wrapper that will iterate through the locations and get the data from those dirs. Something like a PATH variable, except you implement the iteration.
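As a sketch, such a wrapper might look like the following. The directory list is hard-coded for brevity; in practice the entries would come from the autoconf defines discussed above, and a home-directory entry would need getenv("HOME"):

#include <stdio.h>

#ifndef DATADIR
#define DATADIR "/usr/local/share"   /* normally passed in via -DDATADIR=... */
#endif

static const char *search_dirs[] = {
    ".",                      /* current directory, e.g. running uninstalled */
    DATADIR "/program-name",  /* installed location */
};

/* Try each known location in order and return the first file that opens. */
FILE *app_fopen(const char *name, const char *mode)
{
    char path[4096];
    size_t i;

    for (i = 0; i < sizeof search_dirs / sizeof *search_dirs; i++) {
        snprintf(path, sizeof path, "%s/%s", search_dirs[i], name);
        FILE *fp = fopen(path, mode);
        if (fp != NULL)
            return fp;
    }
    return NULL;  /* not found in any known location */
}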
After writing this post, I noticed I need to clean up our implementation in this project, but it can serve as a nice start. Take a look at our Makefile.am for using $(datadir) and our util.cpp and util.h for a simple wrapper (yatc_fopen()). We also have yatc_find_file() in case some third-party library is doing the fopen()ing, such as SDL_image or libxml2.
If the program is installed globally:
/usr/share/app-name/glade.xml
If you want the program to work without being installed (i.e. just extract a tarball), put it in the program's directory.
I don't think there is a standard way of placing files. I build it into the program, and I don't limit it to one location.
It depends on how much customising of the config file is going to be required.
I start by constructing a list of default directories and work through them until I find an instance of glade.xml and stop looking, or fail to find it and exit with an error. Good candidates for the default list are /etc, /usr/share/app-name, and /usr/local/etc.
If the file is designed to be customizable, then before I look through the default directories I work through a list of user files and paths. If I don't find one of the user versions, I look in the list of default directories. Good candidates for the user config files are ~/.glade.xml, ~/.app-name/glade.xml, or ~/.app-name/.glade.xml.
