I am using SCons with the Renesas compiler.
I am able to compile and link my project, but while linking I get the following message:
"Software license problem:Internal error in licensing or accessing feature UNKNOWN"
even though I have a trial license for the Renesas compiler.
I am able to generate an executable (a .abs file for Renesas) for a small application even with the above message. When I tried to create an executable for a bigger application, I got the following messages while linking:
"Software license problem:Internal error in licensing or accessing feature UNKNOWN"
Maximum link size limited to 64KB code+data.
I tried creating executables for the above applications using SCons on a machine which has a valid Renesas license. Even on this machine I saw the same messages and was not able to generate a .abs file. (On this machine I am able to create executables without using SCons.)
Can anyone help me overcome this issue? I have no clue whether the message I am getting comes from SCons or from the Renesas tool chain.
Thanks
It's possible that your tool chain sets up some environment variables telling the compiler where to find the licence files. scons wipes out your environment, pretty much, and you may not be propagating the information it needs.
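If that is the case, one fix is to copy the relevant variables into the SCons build environment in your SConstruct. A minimal sketch (the licence-related variable names below are placeholders, not taken from your setup; check which ones your Renesas installation actually uses):

import os

env = Environment()

# SCons builds with a scrubbed environment by default, so copy across
# anything the Renesas tools need in order to locate their licence.
# PATH plus one licence-server variable are shown; the names are assumptions.
for var in ('PATH', 'LM_LICENSE_FILE'):
    if var in os.environ:
        env['ENV'][var] = os.environ[var]

# (Alternatively, env = Environment(ENV=os.environ) imports everything,
# at the cost of a less reproducible build.)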
Related
I am getting into a position where I have to use other people's code for projects, for example openTLD. I want to change some of the code to give it more functionality and use it in a different way. What I have found is that many people have packaged their files in such a way that you are supposed to use
cmake
and then
make
and sometimes after that
make install
I don't want to install the software on my system. What I am looking to do is get these people's code to a point where I can add to it in Eclipse, or even just using Nano, and then compile it.
At what point is the code in a workable/usable state? Can I use it after running cmake, or do I need to also call make? Is my thinking correct that it would be better to edit the code after calling cmake as opposed to before? My finished code does not need to be cross-platform; it will only run on Linux. Is it easier to learn cmake and edit the code before running cmake, as opposed to not learning cmake and using the code afterwards, if that is possible?
Your question is a little open-ended.
Looking at the opentld project, there is a binary and a library available for use. If you are interested in using the binary in your code, you need to download the executables (Linux executables are not posted). If you are planning to use the library, you have two options: either you use the pre-built library, or you build it during your build process. You would include the header files in your custom application and link with the library.
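For what it's worth, the usual out-of-source CMake cycle never has to touch the rest of your system; make install is optional. Roughly like this (the paths, include directory and library name are illustrative, not taken from openTLD):

cd ~/src/opentld                # the checked-out source tree
mkdir -p build && cd build
cmake ..                        # generate Makefiles; the source stays untouched
make                            # compile; binaries/libraries land under build/

# edit the source in Eclipse or nano, then simply re-run make to rebuild
make

# to use the built library from your own program, point the compiler at it
g++ -I ~/src/opentld/include my_prog.cpp -L ~/src/opentld/build/lib -ltld -o my_prog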
If you add more details, probably others can pitch in with new answers or refine the older ones.
I am facing a strange problem; maybe somebody can point me in the right direction.
I have an application that uses a shared library I built back in the day; the shared library is stored under the /usr/lib/ folder. My application binary used to work fine with this setup. Yesterday I tried to install Oracle XE on my Linux distro and ran some scripts that set some environment variables. The installation failed and I had to uninstall Oracle XE.
When I came back to work today, I tried to run my binary just like I used to, but I saw errors about an undefined symbol. The symbol name was related to the shared library that I had used seamlessly for months. I have the same setup on other machines, and I confirmed that the application still works there, so I copied the application binary and shared library from the other computers to the one I am working on; still no luck. It seemed to me that the shared library was not being loaded at all, so I tried deleting the shared library and running the application one more time; I received the same error, at around the same point.
I think the Oracle scripts might have mangled some of the environment variables, so the shared library cannot be loaded. I am not sure what to check next, though; any suggestion would be appreciated.
Running ldd application-name helped me identify where the shared library was being read from; it turned out there was another version of the shared library under /usr/local/lib, which was causing the issue.
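For anyone else who hits this, the check looks roughly like the following (the library name is illustrative):

ldd ./my-application | grep libmylib
#   libmylib.so.1 => /usr/local/lib/libmylib.so.1 (0x00007f...)   <- not /usr/lib!

# on many distributions /usr/local/lib is searched before /usr/lib,
# so a stale copy there shadows the one you expect
ls -l /usr/lib/libmylib.so* /usr/local/lib/libmylib.so*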
I have an executable that I want to distribute and run on other Linux systems. Is there a way to be reasonably sure it will work, without access to the final runtime environment?
For example, I am concerned my executable could be using a dynamic library that is only present on my development machine.
Supply any relevant shared libraries with the executable, and set $LD_LIBRARY_PATH in a shell script that invokes the executable, to tell the dynamic linker where to find those shared libraries. See the ld.so(8) man page for more details, and be sure to follow the appropriate licenses.
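For example, a small launcher shipped next to the binary and a lib/ directory containing the bundled libraries (the names are illustrative):

#!/bin/sh
# run-myapp.sh -- resolve the script's own directory, point the dynamic
# loader at the bundled libraries, then start the real binary.
HERE="$(cd "$(dirname "$0")" && pwd)"
LD_LIBRARY_PATH="$HERE/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
export LD_LIBRARY_PATH
exec "$HERE/myapp.bin" "$@"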
Take a look at the Debian (.deb) and Red Hat (.rpm) packaging tools. These become the installer for your package. They aren't all that difficult, and you can tell a package to reference other packages that provide the required shared objects.
The packaging tools can usually pull in packages that provide missing libraries and so on. They will also help you place your binaries in such a way that you don't need to set LD_LIBRARY_PATH or put a shell-script front end on your executable.
They aren't that difficult to learn either. Spend a day playing with each and you'll get a passable installer package.
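To give a feel for the Debian side, a bare-bones package can be rolled by hand like this (names, version and dependencies are made up; real projects normally go through dpkg-buildpackage or rpmbuild):

mkdir -p myapp_1.0-1/DEBIAN myapp_1.0-1/usr/bin
cp myapp myapp_1.0-1/usr/bin/

cat > myapp_1.0-1/DEBIAN/control <<'EOF'
Package: myapp
Version: 1.0-1
Architecture: amd64
Maintainer: You <you@example.com>
Depends: libc6, libstdc++6
Description: Example package that declares its shared-library dependencies
EOF

dpkg-deb --build myapp_1.0-1      # produces myapp_1.0-1.deb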
Is there a way to be reasonably sure if this will work, without access to the final runtime environment?
One's environment could be a different architecture than yours, even while it stays Linux. Therefore, the only sure way to get your program to the widest audience is to ship source code.
Wouldn't just statically linking everything into a super-massive black hole^W^W binary do the trick?
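That is, assuming static (.a) versions of every dependency are actually available; fully static glibc binaries in particular come with caveats. A sketch:

g++ -static main.o util.o -o myapp
file myapp        # should report "statically linked"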
Background:
At work I'm used to working on Solaris 10. We have sysadmins who know what they're doing and can help out if required.
I've compiled things like apache, perl and mod_perl from source without any problems.
I've been given a Red Hat server to play with and am hitting problems. The sysadmins are out sick at the moment.
I keep hitting problems regarding LD_LIBRARY_PATH when building software. At the moment for test purposes I am compiling to my home directory, as I don't have root, or permissions to install anywhere else.
I plan on having an area under /opt for us to install into, like we do on Solaris, but I'll need our sysadmin around to create that for us.
My .bashrc had nothing for LD_LIBRARY_PATH, so I've been appending things to it to get stuff built (e.g. ffmpeg from source). I've been reading about this, and apparently this isn't the way to go; it's unreliable or something. I don't have access to ldconfig (permission denied).
Now the questions:
What is the best way to build applications under Linux so that they won't break? Creating entries under /etc/ld.so.conf.d/?
Can anyone give a brief overview of what LD_LIBRARY_PATH actually does?
From the ld.so(8) man page:
LD_LIBRARY_PATH
A colon-separated list of directories in which to search for ELF
libraries at execution-time. Similar to the PATH environment
variable.
But honestly, find an admin. Become one if need be. Oh, and build packages.
LD_LIBRARY_PATH makes it possible for individual users or individual processes to add locations to the search path on a fine-grained basis. /etc/ld.so.conf should be used for system-wide library path settings, i.e. when deploying your application. (Better yet, you could package it as an rpm/deb and deploy it through your distribution's usual package channels.)
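Once you have root (or an admin to do it for you), the system-wide route looks like this; the path is just an example:

echo '/opt/ourapp/lib' | sudo tee /etc/ld.so.conf.d/ourapp.conf
sudo ldconfig                  # rebuild the loader cache
ldconfig -p | grep ourapp      # confirm the libraries are now known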
Typically a user might use LD_LIBRARY_PATH to force execution of their program to pick up a different version of a library. Normally this is useful for favouring debugging or instrumented versions of libraries, but you can also use it to inject your own code into third-party code. (It is also possible to use this for malicious purposes, if you can alter someone's bash profile to trick them into executing your code without realising it.)
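For instance, to run one program against a locally built debug copy of a library without touching the system one (paths and names are illustrative):

LD_LIBRARY_PATH=~/lib-debug:$LD_LIBRARY_PATH ./myprogram

# confirm which copy the loader would actually pick
LD_LIBRARY_PATH=~/lib-debug ldd ./myprogram | grep libfoo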
Some applications also set LD_LIBRARY_PATH if they install "private" libraries in non-default locations, i.e. so they won't be used for normal dynamic linking but still exist. For scenarios like this though I'd be inclined to prefer dlopen() and friends.
Setting LD_LIBRARY_PATH is considered harmful because (amongst other reasons):
Your program is dynamically linked based on your LD_LIBRARY_PATH. This means that it could link against a particular version of a library which happened to be in your LD_LIBRARY_PATH, e.g. /home/user/lib/libtheora.so. This can cause lots of confusion if someone else tries to run it without your LD_LIBRARY_PATH and ends up linking against the default version, e.g. /usr/lib/libtheora.so.
It is used in preference to any default system link path. This means that if you end up having a dodgy libc on your LD_LIBRARY_PATH, it could end up doing bad things, like compromising your account.
As ignacio said, use packages wherever you can. This avoids library nightmares.
I am developing a cross-platform Qt application.
It is freeware though not open-source. Therefore I want to distribute it as a compiled binary.
On Windows there is no problem: I pack my compiled exe along with MinGW's and Qt's DLLs, and everything works great.
But on Linux there is a problem, because the user may have shared libraries on his/her system that are very different from mine.
The Qt deployment guide suggests two methods: static linking and using shared libraries.
The first produces a huge executable and also requires static versions of many libraries which Qt depends on, i.e. I'll have to rebuild all of them from scratch. The second method is based on reconfiguring the dynamic linker right before application startup and seems a bit tricky to me.
Can anyone share his/her experience in distributing Qt applications under Linux? Which method should I use? What problems might I run into? Are there any other ways to get this job done?
Shared libraries are the way to go, but you can avoid using LD_LIBRARY_PATH (which involves running the application via a launcher shell script, etc.) by building your binary with the -rpath linker flag, pointing to where you store your libraries.
For example, I store my libraries either next to my binary or in a directory called "mylib" next to my binary. To use this in my qmake project, I add this line to the .pro file:
QMAKE_LFLAGS += -Wl,-rpath,\\$\$ORIGIN/lib/:\\$\$ORIGIN/../mylib/
And I can run my binaries with my local libraries overriding any system library, and with no need for a launcher script.
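If you want to double-check that the rpath really ended up in the binary, something like this does it (the output line shown is illustrative):

readelf -d ./myapp | grep -i 'r.*path'
#   (RUNPATH)  Library runpath: [$ORIGIN/lib/:$ORIGIN/../mylib/]

ldd ./myapp        # shows which libraries get resolved, and from where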
You can also distribute the Qt shared libraries on Linux and get your software to load those instead of the system default ones. Shared libraries can be overridden using the LD_LIBRARY_PATH environment variable. This is probably the simplest solution for you. You can always change this in a wrapper script for your executable.
Alternatively, just specify the minimum library version that your users need to have installed on the system.
When we distribute Qt apps on Linux (or really any apps that use shared libraries) we ship a directory tree which contains the actual executable and associated wrapper script at the top with sub-directories containing the shared libraries and any other necessary resources that you don't want to link in.
The advantage of doing this is that you can have the wrapper script set up everything needed to run the application without having to worry about the user setting environment variables, installing to a specific location, etc. If done correctly, this also means you don't have to worry about where the application is called from, because it can always find its resources.
We actually take this tree structure even further by placing all the executables and shared libraries in platform/architecture sub-directories, so that the wrapper script can determine the local architecture, call the appropriate executable for that platform, and set the environment variables to find the appropriate shared libraries. We found this setup to be particularly helpful when distributing for multiple different Linux versions that share a common file system.
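A stripped-down sketch of such a wrapper, assuming one sub-directory per platform (the directory and binary names are illustrative, not our actual layout):

#!/bin/sh
# myapp.sh -- dispatch to the binary matching the local architecture, e.g.
#   myapp.sh  linux-x86_64/{myapp,lib/...}  linux-i686/{myapp,lib/...}
HERE="$(cd "$(dirname "$0")" && pwd)"
ARCH="linux-$(uname -m)"

LD_LIBRARY_PATH="$HERE/$ARCH/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
export LD_LIBRARY_PATH
exec "$HERE/$ARCH/myapp" "$@"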
All this being said, we do still prefer to build statically when possible, and Qt apps are no exception. You can definitely build Qt statically, and you shouldn't have to build a lot of additional dependencies, as krbyrd noted in his response.
sybreon's answer is exactly what I have done. You can either always add your libraries to LD_LIBRARY_PATH or you can do something a bit more fancy:
Set up your shipped Qt libraries one per directory. Write a shell script that runs ldd on the executable and greps for 'not found'; for each of those libraries, add the appropriate directory to a list (let's call it $LDD). After you have them all, run the binary with LD_LIBRARY_PATH set to its previous value plus $LDD.
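A rough, untested sketch of that script, assuming the shipped libraries live one-per-directory under ./qtlibs (GNU find is assumed):

#!/bin/sh
APP=./myapp
LDD=""

# collect a directory for every library ldd reports as missing
for lib in $(ldd "$APP" | awk '/not found/ {print $1}'); do
    found=$(find ./qtlibs -name "$lib" -print -quit)
    [ -n "$found" ] && LDD="$LDD:$(dirname "$found")"
done

LDD="${LDD#:}"                  # drop the leading colon
if [ -n "$LDD" ] && [ -n "$LD_LIBRARY_PATH" ]; then
    LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$LDD"
elif [ -n "$LDD" ]; then
    LD_LIBRARY_PATH="$LDD"
fi
export LD_LIBRARY_PATH
exec "$APP" "$@"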
Finally, a comment about "I'll have to rebuild all of them from scratch": no, you won't have to. If you have the dev packages for those libraries, you should have .a files, and you can statically link against those.
Not an answer as such (sybreon covered that), but please note that you are not allowed to distribute your binary if it is statically linked against Qt, unless you have bought a commercial license; otherwise your entire binary falls under the GPL (or you're in violation of Qt's license).
If you have a commercial license, never mind.
If you don't have a commercial license, you have two options:
Link dynamically against Qt v4.5.0 or newer (the LGPL versions - you may not use the previous versions except in open source apps), or
Open your source code.
Probably the easiest way to create a Qt application package on Linux is linuxdeployqt. It collects all required files and lets you build an AppImage which runs on most Linux distributions.
Make sure you build the application on the oldest still-supported Ubuntu LTS release so your AppImage can be listed on AppImageHub.
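The invocation is roughly as below; the exact flags change between releases, so treat this as a pointer to the project's README rather than a recipe:

# point the tool at the built binary (or its .desktop file) and ask for an AppImage
./linuxdeployqt-continuous-x86_64.AppImage ./appdir/usr/bin/myapp -appimage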
You can look into the QtCreator folder and use it as an example. It has qt.conf and qtcreator.sh files in QtCreator/bin.
lib/qtcreator is the folder with all the needed Qt *.so libraries. The relative path is set inside qtcreator.sh, which should be renamed to your-app-name.sh.
imports, plugins and qml are inside the bin directory. The path to them is set in the qt.conf file. This is needed for deploying QML applications.
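For reference, the qt.conf that sits next to the binary looks roughly like this; the keys are standard qt.conf entries, but the values depend on where you actually put things:

[Paths]
Prefix = .
Plugins = plugins
Imports = imports
Qml2Imports = qml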
This article has information on the topic. I will try it myself:
http://labs.trolltech.com/blogs/2009/06/02/deploying-a-browser-on-gnulinux/
In a few words:
Configure Qt with -platform linux-lsb-g++
Linking should be done with -lsb-use-default-linker
Package everything and deploy (will need a few tweaks here, but I haven't tried it yet, sorry)