I'm trying to create a .deb file from a binary and support files. It works; however, I get the following warning after I create it:
The installation of a package which violates the quality standards isn't allowed.
This could cause serious problems on your computer.
Please contact the person or organisation who provided this package
file and include the details beneath.
Lintian check results for /home/javaherd/program-5/debian/program-5_1.4.2_i386.deb:
E: program-v5: control-interpreter-without-depends control/rules #!/usr/bin/make
E: program-v5: wrong-file-owner-uid-or-gid usr/local/include/titles.txt 1006/1007
E: program-v5: wrong-file-owner-uid-or-gid usr/local/include/counties.txt 1006/1007
What can I do to rectify this situation?
Software is relatively easy to package for Debian and Ubuntu, but somewhat harder to package correctly. The result is that there are a lot of incorrectly packaged, unofficial *.deb packages floating around out there.
Here is how to investigate the problem. First, extract the contents of the archive into a new directory named program with dpkg-deb -x program-5_1.4.2_i386.deb program. Second, look inside that directory to see what's there; if you find what you need, that alone may solve your problem. Finally, run the lintian command against the *.deb package itself with lintian program-5_1.4.2_i386.deb, which is likely to give you useful information about how the package was mispackaged in the first place (of course, you may have to install lintian first).
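For example (package name and path taken from your warning output):
dpkg-deb -x program-5_1.4.2_i386.deb program    # unpack the package payload into ./program
lintian program-5_1.4.2_i386.deb                # run the full lintian checks against the .deb itself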
It is possible that lintian, invoked this way, will only repeat the information it has already given you, but lintian can be invoked in several different ways; now that you know how to run it manually, you can read its manpage and experiment.
These steps will give you much more information and should lead you to a complete solution of your problem. Good luck.
I'm currently building LFS and am looking for a package management solution: specifically, a program that keeps track of which files get installed when you compile something from source, and that also has a method for removing those files in case make uninstall isn't present.
I have looked into programs like install-log and checkinstall, but couldn't get either of them to compile.
Any help is appreciated, thanks!
Personally, I have always used user-based management (aka "Package Users"), as described in section 8.2, "Package Management". It doesn't get much love, even from the LFS community, despite it (still) being in the book and being the only method "unique to LFS".
It's not great for the first-timer, as it will really force you to dig deep at times to solve issues and make important decisions. I suggest you complete your first LFS system build, then consider Package Users your next time around.
But once you get used to it, it works great.
Another, simpler, method is the Timestamp Based technique (described in the same link).
For example, when it comes time to copy a package's files to your system, you can do something like this:
touch timestamp_start
make install
# do other stuff as instructed
touch timestamp_stop
find / -newer timestamp_start -not -newer timestamp_stop > list_of_files_affected
And I do use this approach when installing the Nvidia proprietary drivers, because successfully installing them with a non-root account is a real pain.
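If you later want to remove what a package installed, the same list can drive the cleanup; here is a minimal sketch (review list_of_files_affected first, since find / will also record unrelated files that changed during the install):
while IFS= read -r f; do
    [ -f "$f" ] && rm -v "$f"    # remove only regular files recorded in the list
done < list_of_files_affected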
I am a 65 year old "newbie" and generally use default options when downloading. Python.org wants to download to an obscure directory such as
"C:\Users\Facdev\AppData\Local\Programs\Python\Python36-32".
Is there anything wrong with downloading instead to "C:"?
This should not pose a problem, as long as the directory is specified correctly in your PATH.
It is OK to modify the location where you will download and install Python. However, I would advise against doing so if you are unfamiliar with how system environment variables and PATH locations work in Windows.
Why does it matter?
Once you have the Python executable (in your case Python-3.6.0.exe) on your system, your computer needs to know where it is in order to execute it! If you place the executable in a location like the main directory of the C: drive, your computer does not care. Your computer also does not care if the executable is deep down in the AppData\ directory.
By changing the default behavior, you run the risk that, when you are troubleshooting unexpected behavior, the instructions you find will not be written for your situation. That is OK as long as you understand what you will need to change in order to apply the troubleshooting techniques listed in documentation, blog posts, and forums.
Because of those factors and this being a new process for you, I recommend sticking to the default. You can change the location later, once you understand what doing so means. Learning to program can be frustrating and trying to grasp managing the software environment only adds to the frustration. Tackle that issue later.
Good luck on your new adventure! I hope you learn to enjoy writing your own programs in python!
The title may seem complicated.
I made a library to be loaded within a Tcl script. Now I need to transfer it to Ubuntu 12.04.
Tclsh gives the following error:
couldn't load file "/apollo/applications/Linux-PORT/i586/lib/libapmntwraptcl.so":
libgeos-3.4.2.so:
cannot open shared object file: No such file or directory
while executing "load $::env(ACCLIB)/libapmntwraptcl[info sharedlibextension]"
The libgeos library doesn't come in version 3.4.2 under Ubuntu 12.04. So I need to know which (sub-)dependency of my library needs the famous libgeos-3.4.2.so, so that I can rebuild it or find an alternative.
Many thanks in advance.
Edit:
Thank you for your USEFUL answers. I already did ldd -v or -r. I have 200+ dependencies when I do ldd -r. The worst part is that in the result list I see libgeos-3.3.8.so => /usr/lib/libgeos-3.3.8.so (0xb3ea9000) (the version I have), but when I execute, tclsh says libgeos-3.4.2.so is missing.
That's why I need something able to tell me the complete dependency tree of my library.
Could anyone give me a hint (not some useless showoff)?
Thank you so much.
You've accidentally (probably through no fault of your own) wandered into “DLL Hell”; the problem is that something that libapmntwraptcl.so depends on, possibly indirectly, does not have its own dependencies satisfied. This sort of thing can be very difficult to solve, precisely because the tools that know what went wrong (in particular, the system dynamic linker) produce so little informative output by default.
What's even worse is that you apparently have multiple versions around. That's where DLL Hell reaches its worst incarnation. You need to be a detective to solve this; it's too hard to sensibly do this remotely, as many of the things you poke your fingers at are determined by what the previous steps said.
You need to identify exactly what versions you're loading, with ldd libapmntwraptcl.so (in your shell, not in Tcl). You also need to double check what your environment variables are immediately before the offending load command, as several of them can affect the loading process. The easiest way to do that is to put parray env just before the offending load, which will produce a dump of everything in the context where things could be failing; reading the manual page for ld.so will tell you a lot more about each of the possible candidates for trouble (there's many!).
You might also need to go through the list of libraries identified by ldd above and check whether each of those also has all of its dependencies satisfied, and in the way that you expect; you should also bear in mind that a failure by ldd to locate a library does not necessarily mean that the code will actually fail. (That would be too easy.)
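For example, to find which dependency is the one that still asks for libgeos-3.4.2.so, something along these lines may help (an untested sketch; the library path is the one from your error message):
LIB=/apollo/applications/Linux-PORT/i586/lib/libapmntwraptcl.so
# list the direct dependencies and how they resolve
ldd "$LIB"
# check the library itself and every resolved dependency for a NEEDED entry on libgeos-3.4.2
for dep in "$LIB" $(ldd "$LIB" | awk '/=> \// {print $3}'); do
    objdump -p "$dep" | grep -q 'NEEDED.*libgeos-3\.4\.2' && echo "$dep needs libgeos-3.4.2.so"
done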
You can also try setting the LD_DEBUG environment variable to all before doing the load. That will produce quite a lot of information on standard out; maybe it will give you enough to figure out what is going wrong?
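For example (the script name here is just a placeholder for however you start tclsh):
LD_DEBUG=libs tclsh yourscript.tcl 2> ld_debug.log
# ld_debug.log will show every directory the dynamic linker searched and where the lookup failed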
Finally, on Linux you need to bear in mind that there can be an RPATH set for a particular library (which can affect where it is found) and there's a system library cache which can also affect things.
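You can check for an RPATH/RUNPATH entry and query the library cache like this, for example:
readelf -d /apollo/applications/Linux-PORT/i586/lib/libapmntwraptcl.so | grep -E 'RPATH|RUNPATH'
ldconfig -p | grep geos    # what the system library cache currently knows about libgeos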
I'm really sorry the error message isn't better. All I can really say is that it's exactly as much as Tcl is told about what went wrong, and it's hardly anything.
What is the most portable and robust way to get the list of paths configured by /etc/ld.so.conf and the files it includes? Parsing the file manually does not seem like a good idea: the format is likely to change in future revisions.
To allow better understanding of the question, I will give you specific details below. Note that, despite these details, this is a general programming question, applicable to other situations.
There is a program called LuaRocks. It is a package manager for the Lua programming language (somewhat like Ruby gems or Python eggs). LuaRocks packages are called "rocks".
As a convenience feature, LuaRocks allows a rock author to specify a list of external dependencies for a rock, formulated as a list of C header files and / or dynamic library files. (.so on Linux.) If the specified file does not exist, the rock can't be installed.
Currently, on Linux, LuaRocks by default checks for .so file existence by searching for the file in two hardcoded paths, /usr/lib and /usr/local/lib.
I believe that this is incorrect behaviour, and it is broken by recent changes in Ubuntu and other Debian-based distributions.
Update: the paths are not hardcoded per se, but are user-configurable in the config file. Still, IMO, not the best solution.
Instead (as I understand it), LuaRocks should look up file in the paths, specified by /etc/ld.so.conf and files included from it.
(Now please re-read the question above ;-) )
You shouldn't need to parse /etc/ld.so.conf or any of the config files - if you run 'ldconfig', it will scan the configured directories and generate a cache file.
Then, when you subsequently attempt a dlopen(), it will automatically find the files by iterating through the cached library directories. The same goes for compiling with -lSomeLib: you shouldn't need to specify -L/my/other/path if you've got it configured in ld.so.conf(.d).
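For example, you can rebuild the cache and then query it directly (libfoo is just a placeholder name):
sudo ldconfig              # rescan the directories from /etc/ld.so.conf(.d) and rebuild /etc/ld.so.cache
ldconfig -p | grep libfoo  # check whether a given library is present in the cache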
autoconf accomplishes this by attempting to compile a test program that links to the shared library, but that's just a functional wrapper around the dlopen() call.
So, while other methods may not necessarily be 'wrong', at the root of it attempting to link to the library or doing a dlopen() are the 'most right' ways of doing it.
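A rough sketch of that kind of link check, in the spirit of what autoconf does (using the -lSomeLib example from above):
cat > conftest.c <<'EOF'
int main(void) { return 0; }
EOF
cc conftest.c -lSomeLib -o conftest && echo "libSomeLib is linkable"
rm -f conftest conftest.c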
Consider this, if you attempt to link to a library in a directory that ISN'T cached in /etc/ld.so.cache, when you try to run the program it will fail because it won't be able to dlopen() the library!
Hence, any 'good' shared library will be in /etc/ld.so.cache and be linkable/dlopen()able; this means that gcc can use it to link, and that a user-generated library or executable will be able to open it when it executes.
You can circumvent this by expressly setting the environment variable LD_LIBRARY_PATH or LD_PRELOAD, but each of these has its own caveats and should be avoided if possible for 'standard' use.
A good write-up that covers some of these issues, and a good read for anyone programmatically consuming other shared libraries, is Ulrich Drepper's How to write shared libraries.
According to the FHS, the following are valid locations for dynamic libraries:
/lib*/
/opt/*/lib*/
/usr/lib*/
/usr/local/lib*/
(And most likely ~/lib*/ as well.)
All entries in my /etc/ld.so.conf.d/* conform to this. Some entries reference subdirectories below the FHS dirs, which probably means that you can use the libraries in there without path information.
Now I don't know enough about LuaRocks. If you're limited to Lua-path-style globs (only ?), you cannot match these and have to parse the configs. Otherwise, you could just try to find them anywhere in these directories.
This would break on non-FHS-conforming systems (where the only option is to parse the config), and if a directory is not included in the config, the installer might see libraries that the linker cannot find.
These two caveats seem acceptable to me, so I'd simply ignore the config and look in these directories.
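A sketch of what that lookup could look like from a shell (libfoo is a placeholder; LuaRocks itself would do the equivalent from Lua):
# search the FHS library directories, including subdirectories, for the requested .so
find /lib* /usr/lib* /usr/local/lib* /opt/*/lib* -name 'libfoo.so*' 2>/dev/null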
(Another possibility would be trying to link against the library; this should automagically use the right path. However, this is platform-specific and maybe dangerous.)
Mostly for my amusement, I created a makefile in my $HOME/bin directory called rebuild.mk, and made it executable, and the first lines of the file read:
#!/bin/make -f
#
# Comments on what the makefile is for
...
all: ${SCRIPTS} ${LINKS} ...
...
I can now type:
rebuild.mk
and this causes make to execute.
What are the reasons for not exploiting this on a permanent basis, other than this:
The makefile is tied to a single directory, so it really isn't appropriate in my main bin directory.
Has anyone ever seen the trick exploited before?
Collecting some comments, and providing a bit more background information.
Norman Ramsey reports that this technique is used in Debian; that is interesting to know. Thank you.
I agree that typing 'make' is more idiomatic.
However, the scenario (previously unstated) is that my $HOME/bin directory already has a cross-platform main makefile in it that is the primary maintenance tool for the 500+ commands in the directory.
However, on one particular machine (only), I wanted to add a makefile for building a special set of tools. So, those tools get a special makefile, which I called rebuild.mk for this question (it has another name on my machine).
I do get to save typing 'make -f rebuild.mk' by using 'rebuild.mk' instead.
Fixing the position of the make utility is problematic across platforms.
The #!/usr/bin/env make -f technique is likely to work, though I believe the official rules of engagement are that the line must be less than 32 characters and may only have one argument to the command.
#dF comments that the technique might prevent you passing arguments to make. That is not a problem on my Solaris machine, at any rate. The three different versions of 'make' I tested (Sun, GNU, mine) all got the extra command line arguments that I type, including options ('-u' on my home-brew version) and targets 'someprogram' and macros CC='cc' WFLAGS=-v (to use a different compiler and cancel the GCC warning flags which the Sun compiler does not understand).
I would not advocate this as a general technique.
As stated, it was mostly for my amusement. I may keep it for this particular job; it is most unlikely that I'd use it in distributed work. And if I did, I'd supply and apply a 'fixin' script to fix the pathname of the interpreter; indeed, I did that already on my machine. That script is a relic from the first edition of the Camel book ('Programming Perl' by Larry Wall).
One problem with this for generally distributable Makefiles is that the location of make is not always consistent across platforms. Also, some systems might require an alternate name like gmake.
Of course one can always run the appropriate command manually, but this sort of defeats the whole purpose of making the Makefile executable.
I've seen this trick used before in the debian/rules file that is part of every Debian package.
To address the problem of make not always being in the same place (on my system for example it's in /usr/bin), you could use
#!/usr/bin/env make -f
if you're on a UNIX-like system.
Another problem is that by using the Makefile this way you cannot override variables by doing, for example, make CFLAGS=....
"make" is shorter than "./Makefile", so I don't think you're buying anything.
The reason I would not do this is that typing "make" is more idiomatic for building Makefile-based projects. Imagine if, for every project you built, you had to search for the differently named makefile someone had created instead of just typing "make && make install".
You could use a shell alias for this too.
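For example, something like this in your shell startup file gives the same convenience without the shebang trick (path and name are illustrative):
alias rebuild='make -f "$HOME/bin/rebuild.mk"'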
We can look at this another way: is it a good idea to design a language whose interpreter looks for a fixed filename if you don't give it one? What if python looked for Pythonfile in the absence of a script name? ;)
You don't need such a mechanism in order to have a convention based around a known name. Example: Autoconf's ./configure script.