How to remove warning: link.res contains output sections; did you forget -T? - linux

I'm using the fpc compiler and I want to remove this warning. I've read fpc's options but I can't find how to do it. Is this possible?
It appears when I run the command:
fpc foo.pas
Output:
Target OS: Linux for i386
Compiling foo.pas
Linking p2
/usr/bin/ld: warning: link.res contains output sections; did you forget -T?
79 lines compiled, 0.1 sec

It's a bug in certain ld versions. Just ignore it for now, or check whether your distro has an update for ld (the binutils package).
http://www.freepascal.org/faq.var#unix-ld219

It's not a bug, because ld behaves according to its specification. The man page of ld 2.28 reads:
If the linker cannot recognize the format of an object file, it will assume that it is a linker script. A script specified in this way augments the main linker script used for the link (either the default linker script or the one specified by using -T). This feature permits the linker to link against a file which appears to be an object or an archive, but actually merely defines some symbol values, or uses "INPUT" or "GROUP" to load other objects. Specifying a script in this way merely augments the main linker script, with the extra commands placed after the main script; use the -T option to replace the default linker script entirely, but note the effect of the "INSERT" command.
TL;DR ☺. In a nutshell: most users are not aware of the linker script they are using, because a "main script" (the default script) is provided by the toolchain. The main script relies heavily on the specifics of the compiler-generated sections, and you have to learn the ropes to change it. Most users do not.
The common approach to providing your own script is the -T option. That way the main linker script is ignored and your script takes control of the link, but you have to write everything from scratch.
If you just want to add a minor feature, you can write your extra linker script commands into a file and append the file name to the command line of ld (or gcc / g++) without the -T option. That way the main linker script still does the chief work, but your file augments it. If you use this approach, you get the message in the title of this thread, to inform you that you might have provided a broken object file unintentionally.
The source of this confusion is that there is no way to specify the rôle of the additional file. This could easily be resolved by adding another option to ld just like the -dT option for “default scriptfile”: Introduce a -sT option for “supplemental scriptfile”.

This is fixed in version 2.35.1 (or later) of binutils.
If you have a problematic version of binutils, I've created a quick program to binary-patch /usr/bin/ld to silence this extremely annoying warning message.
This program can be saved as main.go and executed with sudo go run main.go to patch ld. Remember to take a backup of ld first (a small sketch for this follows after main.go), and modify the path in the main function if your binary is located elsewhere.
main.go:
package main

import (
	"bytes"
	"fmt"
	"io/ioutil"
	"os"
)

// patchAway takes a filename and a string.
// If the string is found in the file, the first byte is
// set to 0, to make the string zero length in C.
func patchAway(filename, cstring string) error {
	data, err := ioutil.ReadFile(filename)
	if err != nil {
		return err
	}
	// Find the position of the warning
	pos := bytes.Index(data, []byte(cstring))
	// If it does not exist, the file has most likely already been patched
	if pos == -1 {
		return fmt.Errorf("%s has already been patched", filename)
	}
	// Silence the message with a 0 byte
	data[pos] = 0
	// Retrieve the permissions of the original file
	fi, err := os.Stat(filename)
	if err != nil {
		return err
	}
	perm := fi.Mode().Perm()
	// Write the patched data back to the file, with the same permissions as the original
	return ioutil.WriteFile(filename, data, perm)
}

func main() {
	filename := "/usr/bin/ld"
	warningMessage := "%P: warning: %s contains output sections"
	fmt.Printf("Patching %s... ", filename)
	if err := patchAway(filename, warningMessage); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("ok")
}
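Before running the patcher, the backup mentioned above can itself be taken in Go. This is a minimal sketch, assuming /usr/bin/ld as the source and /usr/bin/ld.bak as the backup path; the backupLd helper is illustrative and not part of the original answer:

package main

import (
	"fmt"
	"io/ioutil"
	"os"
)

// backupLd copies the linker binary to a backup path,
// preserving its permissions, so the patch can be reverted later.
func backupLd(src, dst string) error {
	data, err := ioutil.ReadFile(src)
	if err != nil {
		return err
	}
	fi, err := os.Stat(src)
	if err != nil {
		return err
	}
	return ioutil.WriteFile(dst, data, fi.Mode().Perm())
}

func main() {
	// Assumed paths; adjust if your ld lives elsewhere.
	if err := backupLd("/usr/bin/ld", "/usr/bin/ld.bak"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("backup written to /usr/bin/ld.bak")
}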

Related

How do I implement "file -s <file>" on Linux in pure Go?

Intent:
Does Go have the functionality (package or otherwise) to perform a special file stat on Linux, akin to the command file -s <path>?
Example:
[root@localhost ~]# file /proc/uptime
/proc/uptime: empty
[root@localhost ~]# file -s /proc/uptime
/proc/uptime: ASCII text
Use Case:
I have a fileglob of files in /proc/* that I need to very quickly detect if they are truly empty instead of appearing to be empty.
Using The os Package:
Code:
result, _ := os.Stat("/proc/uptime")
fmt.Println("Name:", result.Name(), " Size:", result.Size(), " Mode:", int(result.Mode()))
fmt.Printf("%q", result)
Result:
Name: uptime Size: 0 Mode: 292
&{"uptime" '\x00' 'Ĥ' {%!q(int64=63606896088) %!q(int32=413685520) %!q(*time.Location=&{ [] [] 0 0 <nil>})} {'\x03' %!q(uint64=4026532071) '\x01' '脤' '\x00' '\x00' '\x00' '\x00' '\x00' 'Ѐ' '\x00' {%!q(int64=1471299288) %!q(int64=413685520)} {%!q(int64=1471299288) %!q(int64=413685520)} {%!q(int64=1471299288) %!q(int64=413685520)} ['\x00' '\x00' '\x00']}}
Obvious Workaround:
There is the obvious workaround shown below, but it's a little over the top to have to call out to a bash shell just to get file stats.
output, _ := exec.Command("bash", "-c", "file -s /proc/uptime").Output()
// parse output etc...
EDIT/MY PRACTICAL USE CASE:
Quickly determining which files are zero size without needing to read each one of them first.
file -s /cgroup/memory/lsf/<cluster>/*/tasks | <clean up commands> | uniq -c
6 /cgroup/memory/lsf/<cluster>/<jobid>/tasks: ASCII text
805 /cgroup/memory/lsf/<cluster>/<jobid>/tasks: empty
So in this case, I know that only those 6 jobs are running and the rest (805) have terminated. Reading the file works like this:
# cat /cgroup/memory/lsf/<cluster>/<jobid>/tasks
#
or
# cat /cgroup/memory/lsf/<cluster>/<jobid>/tasks
12352
53455
...
I'm afraid you might be confusing matters here: file is special precisely in that it "knows" a set of heuristics to carry out its task.
To my knowledge, Go does not have anything like this in its standard library, and I've not come across a 3rd-party package implementing file-like functionality (though I invite you to search for relevant keywords on http://godoc.org).
On the other hand, Go provides full access to the syscall interface of the underlying OS, so when it comes to querying the OS the way file does it, there's nothing you could not do in plain Go.
So I suggest you just fetch the source code of file, learn what it does in the mode turned on by the -s command-line option, and implement that in your Go code.
We'll try to help you with specific problems doing that, should you have any.
Update
Looks like I've managed to grasp what the OP is struggling with. A simple check:
$ stat -c %s /proc/$$/status && wc -c < $_
0
849
That is, the stat call on a file under /proc shows it has no contents, but actually reading from that file returns its contents.
OK, so the solution is simple: instead of calling os.Stat() while traversing the filesystem subtree, merely attempt to read a single byte from the file, like in:
var buf [1]byte
f, err := os.Open(fname)
if err != nil {
	// do something, or maybe ignore.
	// A non-existent file is OK to ignore
	// (the POSIX error code will be ENOENT),
	// because after `path/filepath.Walk()` fetched an entry for
	// this file from its directory, the file might well have gone.
}
_, err = f.Read(buf[:])
if err != nil {
	if err == io.EOF {
		// OK, we failed to read 1 byte, so the file is empty.
	}
	// Otherwise, deal with the error
}
f.Close()
You might try to be more clever and first obtain the stat information (using a call to os.Stat()) to see whether the file is a regular file, so as not to attempt reading from sockets etc.
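Putting the two ideas together, here is a minimal runnable sketch (the /proc/*/status glob and the hasContent helper are illustrative choices, not part of the original answer) that skips non-regular files via os.Stat() and then detects real emptiness with a one-byte read:

package main

import (
	"fmt"
	"io"
	"os"
	"path/filepath"
)

// hasContent reports whether at least one byte can be read from the file.
func hasContent(fname string) (bool, error) {
	f, err := os.Open(fname)
	if err != nil {
		return false, err
	}
	defer f.Close()
	var buf [1]byte
	_, err = f.Read(buf[:])
	if err == io.EOF {
		// Nothing to read: the file is truly empty.
		return false, nil
	}
	if err != nil {
		return false, err
	}
	return true, nil
}

func main() {
	matches, err := filepath.Glob("/proc/*/status") // illustrative glob
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, name := range matches {
		fi, err := os.Stat(name)
		if err != nil || !fi.Mode().IsRegular() {
			// Skip vanished files, directories, sockets etc.
			continue
		}
		ok, err := hasContent(name)
		if err != nil {
			continue
		}
		fmt.Printf("%s: empty=%v\n", name, !ok)
	}
}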
I have a fileglob of files in /proc/* that I need to very quickly
detect if they are truly empty instead of appearing to be empty.
They are truly empty in some sense (e.g. they occupy no space on the file system). If you want to check whether any data can be read from them, try reading from them; that's what file -s does (a short sketch follows the excerpt below):
-s, --special-files
    Normally, file only attempts to read and determine the type of argument files which stat(2) reports are ordinary files. This prevents problems, because reading special files may have peculiar consequences. Specifying the -s option causes file to also read argument files which are block or character special files. This is useful for determining the filesystem types of the data in raw disk partitions, which are block special files. This option also causes file to disregard the file size as reported by stat(2), since on some systems it reports a zero size for raw disk partitions.
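As a quick illustration, the following sketch (illustrative only; /proc/uptime is taken from the question) prints the size stat reports next to the number of bytes a read actually returns:

package main

import (
	"fmt"
	"io/ioutil"
	"os"
)

func main() {
	const name = "/proc/uptime" // path from the question
	fi, err := os.Stat(name)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	data, err := ioutil.ReadFile(name)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// On /proc, Stat typically reports 0 bytes while the read returns real content.
	fmt.Printf("stat size: %d, bytes actually read: %d\n", fi.Size(), len(data))
}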

Setting RPATH order in QMake

I have a Linux Qt program. I'd like it to preferentially use the (dynamic) Qt libraries in the executable's directory if they exist, otherwise use the system's Qt libs. RPATH to the rescue.
I add this line to my qmake .pro file:
QMAKE_LFLAGS += '-Wl,-rpath,\'\$$ORIGIN\''
and looking at the resulting executable with readelf I see:
0x000000000000000f (RPATH) Library rpath: [$ORIGIN:/usr/local/Trolltech/Qt-5.2.0/lib]
0x000000000000001d (RUNPATH) Library runpath: [$ORIGIN:/usr/local/Trolltech/Qt-5.2.0/lib]
Seems right, but ldd shows it's using the system version:
libQt5Core.so.5 => /usr/local/Trolltech/Qt-5.2.0/lib/libQt5Core.so.5 (0x00007f2d2fe09000)
If I manually edit qmake's resulting Makefile to swap the order of the two rpaths, so $ORIGIN comes after /usr/local/..., I get the right behavior:
0x000000000000000f (RPATH) Library rpath: [/usr/local/Trolltech/Qt-5.2.0/lib:$ORIGIN]
0x000000000000001d (RUNPATH) Library runpath: [/usr/local/Trolltech/Qt-5.2.0/lib:$ORIGIN]
libQt5Core.so.5 => ./libQt5Core.so.5 (0x00007fb92aba9000)
My problem is with how qmake constructs the final LFLAGS variable. I can't figure out how to make it put my addition ($ORIGIN) after the system library. Any ideas?
You can add the following to your .pro file to force the dynamic linker to look in the same directory as your Qt application at runtime on Linux:
unix:{
# suppress the default RPATH if you wish
QMAKE_LFLAGS_RPATH=
# add your own with quoting gyrations to make sure $ORIGIN gets to the command line unexpanded
QMAKE_LFLAGS += "-Wl,-rpath,\'\$$ORIGIN\'"
}
If you want it to look in a subdirectory of the executable path, you can use:
QMAKE_LFLAGS += "-Wl,-rpath,\'\$$ORIGIN/libs\'"
Note that you should have the .so files with the exact same name in your application directory. For example, you should copy libQt5Core.so.5.2.0 to your application directory under the name libQt5Core.so.5. Now ldd shows the directory of the application.
You can also have libQt5Core.so.5.2.0 and a link to it with the name libQt5Core.so.5 in the application directory.
As far as I can tell from my research, QMake only lets you add RPATH entries at the beginning of the list.
But if you are on Linux and can install chrpath, you can hack your way around that.
Add this block at the end of your .pro file
# Add spacing since chrpath cannot expand RPATH length
QMAKE_RPATHDIR = \
/XYXYXYXYXYXYXYXYXYXYXYXYXYXYXYXYXYXYXYXYXYXYXYXYXYXY1\
/XYXYXYXYXYXYXYXYXYXYXYXYXYXYXYXYXYXYXYXYXYXYXYXYXYXY2\
/XYXYXYXYXYXYXYXYXYXYXYXYXYXYXYXYXYXYXYXYXYXYXYXYXYXY3\
/XYXYXYXYXYXYXYXYXYXYXYXYXYXYXYXYXYXYXYXYXYXYXYXYXYXY4
QMAKE_POST_LINK += 'chrpath -r \'/my/qt/installation:\$$ORIGIN\' $$OUT_PWD/mybinaryname;'
I'm taking a bit of a guess at what's happening, but it's based on knowing some of the odd behaviours of ld.
First, check for the presence of an LD_LIBRARY_PATH variable, which takes effect before a RUNPATH entry is processed. Because both RPATH and RUNPATH are present, the LD_LIBRARY_PATH rule comes into effect, so if it's set, unset it.
Secondly, I'd never expect to see:
libQt5Core.so.5 => ./libQt5Core.so.5 (0x00007fb92aba9000)
in the output of ldd; I would always expect to see $ORIGIN expanded to the directory of the binary (maybe you shortened it?), i.e. something like:
libQt5Core.so.5 => /path/to/bin/./libQt5Core.so.5 (0x00007fb92aba9000)
Which means the LD_LIBRARY_PATH expansion is probably .:/usr/local/Trolltech/Qt-5.2.0/lib, which to me suggests you've got environment overrides happening.
qmake always appends QT_INSTALL_LIBS to QMAKE_RPATHDIR internally, as defined in the $(QT_DIR)/mkspecs/features/qt.prf file:
170: relative_qt_rpath:!isEmpty(QMAKE_REL_RPATH_BASE):contains(INSTALLS, target):\
173: QMAKE_RPATHDIR += $$relative_path($$[QT_INSTALL_LIBS], $$qtRelativeRPathBase())
175: QMAKE_RPATHDIR += $$[QT_INSTALL_LIBS/dev]
179:!isEmpty(QMAKE_LFLAGS_RPATHLINK):!contains(QT_CONFIG, static) {
189: QMAKE_RPATHLINKDIR *= $$unique(rpaths)
So to keep your application from using the Qt libraries in the system path, comment out the lines above that append to QMAKE_RPATHDIR, and add QMAKE_RPATHDIR = $ORIGIN to your .pro file.

Runtime determination of files used in a makefile

I am trying to write a makefile on Linux that is fairly dynamic and will get all the files of a certain type from the /src directory, essentially the output of ls *.type. I seem to be having difficulty doing this. Below is what I currently have, but it does not seem to work. Hopefully someone can help me out. Thanks!
JIL_B_TMPL : sh = ls *.type
JIL_LIST = $(JIL_B_TMPL)
I will also add this is not for compiling a C program.
To capture the output of a shell command in a makefile, you can do:
JIL_B_TMPL := $(shell ls *.type)
JIL_LIST := $(JIL_B_TMPL)
This is of course the same as writing:
JIL_LIST := $(shell ls *.type)
This works with GNU make, but since you mention Linux, I suppose you're using that.
Pat got the core of something that works, but in your case, you'll probably want something more like
JIL_LIST := $(wildcard *.type)
This gets rid of a call to an external program, which will be important if you decide in the future that you want to support Windows. Also, if you're using makepp, the wildcard function will also catch any .type files that can be built, regardless of whether or not they already have been.

How to check if a command is available or existent?

I am developing a console application in C on Linux.
Now an optional part of it (it's not a requirement) depends on a command/binary being available.
If I check with system() I get sh: command not found as unwanted output, and it detects the command as existent. So how would I check whether the command is there?
Not a duplicate of Check if a program exists from a Bash script, since I'm working with C, not Bash.
To answer your question about how to discover whether the command exists from your code: you can try checking the return value.
// The redirect to /dev/null ensures that your program does not
// produce the output of these commands.
int ret = system("ls --version > /dev/null 2>&1");
if (ret == 0) {
    // The executable was found.
}
You could also use popen to read the output, combining it with the whereis and type commands suggested in other answers:
char result[255];
FILE *fp = popen("whereis command", "r");
fgets(result, sizeof(result), fp);
// Parse result to see the path of the binary if it has been found.
pclose(fp);
Or using type:
FILE *fp = popen("type command", "r");
The result of the type command is a bit harder to parse, since its output varies depending on what you are looking for (binary, alias, function, not found).
You can use stat(2) on Linux (or any POSIX OS) to check for a file's existence.
Use which: you can either check the value returned by system() (0 if found) or the output of the command (no output means not found):
$ which which
/usr/bin/which
$ echo $?
0
$ which does_t_exist
$ echo $?
1
If you run a shell, the output from "type commandname" will tell you whether commandname is available, and if so, how it is provided (alias, function, path to binary). You can read the documentation for type here: http://ss64.com/bash/type.html
I would just go through the current PATH and see whether you can find it there. That's what I did recently with an optional part of a program that needed agrep installed. Alternatively, if you don't trust the PATH but have your own list of paths to check instead, use that.
I doubt it's something where you need to ask the shell whether it's a builtin.

In scons, how can I inject a target to be built?

I want to inject a "Cleanup" target which depends on a number of other targets finishing before it goes off and gzips some log files. It's important that I not gzip early, as this can cause some of the tools to fail.
How can I inject a cleanup target for Scons to execute?
e.g. I have targets foo and bar. I want to inject a new custom target called 'cleanup' that depends on foo and bar and runs after they're both done, without the user having to specify
% scons foo cleanup
I want them to type:
% scons foo
but have scons execute as though the user had typed
% scons foo cleanup
I've tried creating the cleanup target and appending to sys.argv, but it seems that scons has already processed sys.argv by the time it gets to my code so it doesn't process the 'cleanup' target that I manually append to sys.argv.
You shouldn't use _Add_Targets or other undocumented features; you can just add your cleanup target to BUILD_TARGETS:
from SCons.Script import BUILD_TARGETS
BUILD_TARGETS.append('cleanup')
If you use this documented list of targets instead of undocumented functions, scons won't be confused when doing its bookkeeping. This comment block can be found in SCons/Script/__init__.py:
# BUILD_TARGETS can be modified in the SConscript files. If so, we
# want to treat the modified BUILD_TARGETS list as if they specified
# targets on the command line. To do that, though, we need to know if
# BUILD_TARGETS was modified through "official" APIs or by hand. We do
# this by updating two lists in parallel, the documented BUILD_TARGETS
# list, above, and this internal _build_plus_default targets list which
# should only have "official" API changes. Then Script/Main.py can
# compare these two afterwards to figure out if the user added their
# own targets to BUILD_TARGETS.
So I guess the intended approach is to change BUILD_TARGETS instead of calling internal helper functions.
One way is to have the gzip tool depend on the output of the log files. For example, if we have this C file, 'hello.c':
#include <stdio.h>

int main()
{
    printf("hello world\n");
    return 0;
}
And this SConstruct file:
#!/usr/bin/python
env = Environment()
hello = env.Program('hello', 'hello.c')
env.Default(hello)
env.Append(BUILDERS={'CreateLog':
    Builder(action='$SOURCE.abspath > $TARGET', suffix='.log')})
log = env.CreateLog('hello', hello)
zipped_log = env.Zip('logs.zip', log)
env.Alias('cleanup', zipped_log)
Then running "scons cleanup" will run the needed steps in the correct order:
gcc -o hello.o -c hello.c
gcc -o hello hello.o
./hello > hello.log
zip(["logs.zip"], ["hello.log"])
This is not quite what you specified, but the only difference between this example and your requirement is that "cleanup" is the step that actually creates the zip file, so that is the step that you have to run. Its dependencies (running the program that generates the log, creating that program) are automatically calculated. You can now add the alias "foo" as follows to get the desired output:
env.Alias('foo', zipped_log)
In version 1.1.0.d20081104 of SCons, you can use the private internal SCons method:
SCons.Script._Add_Targets( [ 'MY_INJECTED_TARGET' ] )
If the user types:
% scons foo bar
The above code snippet will cause SCons to behave as though the user had typed:
% scons foo bar MY_INJECTED_TARGET
