'ld86: no start symbol' when I try to combine 2 object files

For a small project in my OS course, we're supposed to make a very small kernel. We're provided with a couple of assembly files and so on; essentially, for this part we're supposed to use the following lines:
bcc -ansi -c -o kernel.o kernel.c
as86 kernel.asm -o kernel_asm.o
ld86 -o kernel -d kernel.o kernel_asm.o
dd if=kernel of=floppya.img bs=512 conv=notrunc seek=3
ld86, where it's supposed to link kernel.o and kernel_asm.o, is where it goes wrong. It issues the error in the title (ld86: no start symbol), and if I then try to use the dd utility, it tells me that it failed to open kernel (because the file was never created by ld86).
I've tried looking up the error for over an hour now, and I have found nothing. Any help (even speculation) would be appreciated.
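Pure speculation, but one thing worth checking: ld86 reports "no start symbol" when no entry point is marked in any of the object files it links. A minimal sketch of what marking one could look like in kernel.asm, assuming as86's entry directive; the label name _start is invented here, and your course files may use a different convention:
entry _start    ! declare the start symbol so ld86 can find it
_start:
    ! kernel entry code continues here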

Related

Standard error file when there is no error

I'm new to Linux & shell and I'm struggling with checking if the compilation is successful.
g++ code.cpp -o code.o 2>error.txt
if [ ! -e error.txt ]
then
    # do something
else
    echo "Failed to compile"
fi
I guess an error file is created even if the compilation is successful. What is the content of the error file when there is no error? I need to change the if condition to check if the compilation is successful.
It's just the order of things. What happens when the shell parses the string g++ code.cpp -o code.o 2>error.txt is:
1. The shell creates error.txt, truncating the file if that name already exists.
2. g++ is called with its error output redirected to the new file.
3. If g++ does not write any data, then the file remains as it was (empty) at the end of step 1.
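A quick way to see this ordering for yourself; a small sketch, assuming code.cpp compiles cleanly:
g++ code.cpp -o code.o 2>error.txt
ls -l error.txt     # the file exists even on success, just with size 0
test -s error.txt && echo "compiler wrote errors" || echo "error.txt is empty"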
You probably aren't so much interested in the error file as you are the return value. You probably ought to just do:
if g++ code.cpp -o code; then : do something; fi
or even just:
g++ code.cpp -o code && : do something
but if really want to do something else with the errors, you can do:
if g++ code.cpp -o code.o 2> error.txt; then
rm error.txt
: do something
else
echo >&2 Failed to compile code.cpp.\ See "$(pwd)"/error.txt for details.
fi
Make sure you escape at least one of the spaces after the . so that you get 2 spaces after the period (or just quote the whole argument to echo). Although it has become fashionable lately to claim that you only need one space, all of those arguments rely on the use of variable-width fonts, and any command-line tool worth using will most often be used in an environment where fixed-width fonts are still dominant. This last point is totally unrelated to your question, but is worth remembering.

Make ignores the rule when run for the first time

So, I can't figure out why these lines are not run the first time I call 'make' but are run the next time:
sb_path = sb
sb_src := $(sb_path)/src
sb_build := $(sb_path)/build

ifndef DO_NOT_GENERATE_COMMIT_INFO
commit_sb: | $(sb_bin)
	@$(sb_build)/generate-commit-info $(sb_path)

$(sb_src)/last_git_commit_info.h: | commit_sb ;
endif
I'm just curious, because there is no generate-commit-info file and make crashes when I call it the second time, yet it compiles my program fine on the first try.
I use a script on my local machine to copy the sources over ssh to another machine and to run the compile.sh script there:
...
scp -r $sbfolder/build $sbfolder/Makefile "$buildserver:$root/$curdate"
check_retcode
scp -r $sbfolder/sb/Makefile "$buildserver:$root/$curdate/sb/"
...
ssh $buildserver "$root/compile.sh $curdate $debug"
compile.sh:
# fix Makefile: we don't have git installed here
#DO_NOT_GENERATE_COMMIT_INFO=true
#now we can compile sb
curdir="/home/tmp/kamyshev/sb_new/$1"
cd $curdir
check_retcode
t_path=$curdir
debug=$2
config=RELEASE
if [[ debug -eq 1 ]]; then
config=DEBUG
fi
echo "building sb... CONFIG=$config"
make -j2 CONFIG=$config
check_retcode
As you can see, DO_NOT_GENERATE_COMMIT_INFO=true is commented out, so I just don't see a reason why the code is not run when I call make or the script for the first time (either from the remote script or myself from the command line).
Do you have any clues?
UPDATE on Etan Reisner comment:
The commit_sb target is checked; it does not exist, so its rule is run, and that updates last_git_commit_info.h. Thus it forces the .h file to be updated. It also gives me a .PHONY-style commit_sb target, so I can run it directly by calling make commit_sb.
The generate-commit-info script also creates a file in the $(sb_bin) folder.
My other guess is that you are talking about a better way to organize this code.
I can update last_git_commit_info.h directly with a rule such as this:
commit_sb $(sb_src)/last_git_commit_info.h: FORCE | $(sb_bin)
	@$(sb_build)/generate-commit-info $(sb_path)
FORCE:
Thanks to the commenters on my question, I did some additional research: I tried to make a minimal complete example, and this led me to the answer.
My code generates dependency files (see the -MMD flag in SB_CXXFLAGS):
# just an example - in the real Makefile these are calculated on the fly
sb_deps := file1.d file2.d [...]
# rules with dependencies of .o files against .h files
-include $(sb_deps)
SB_CXXFLAGS = $(CXXFLAGS) [...] -MMD
# compile and generate dependency info
$(sb_obj)/%.o: $(sb_src)/%.cpp
	$(CXX) $(SB_CXXFLAGS) $< -o $@
And when I run make for the first time there are no *.d files, so no *.cpp depends on the last_git_commit_info.h file and the rule is not applied.
On subsequent runs the dependency appears in one of the *.d files, the rule is executed, and I get the error.
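To make this concrete, here is a hypothetical look at one of the generated dependency files after the first successful build (the file and header names are invented for illustration):
$ cat file1.d
file1.o: file1.cpp sb/src/last_git_commit_info.h util.h
Once -include pulls this fragment in, last_git_commit_info.h is a prerequisite of file1.o, so make consults its rule and runs the commit_sb recipe, which fails because generate-commit-info does not exist on the build machine.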
UPDATE: This does not concern the question directly, but here is a better way to write these rules:
ifndef DO_NOT_GENERATE_COMMIT_INFO
commit_sb $(sb_src)/last_git_commit_info.h: FORCE | $(sb_bin)
	@$(sb_build)/generate-commit-info $(sb_path)
FORCE:
endif
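With the rules guarded like this, the commit-info machinery can also be switched off per invocation instead of editing compile.sh; a usage sketch:
# defining the variable on the command line makes the ifndef block a no-op
make -j2 CONFIG=RELEASE DO_NOT_GENERATE_COMMIT_INFO=true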

No rule to make target `–f'

I am trying to build a C project like this:
make –f makefile1
This is my make file:
TestAssn1: test_assign1_1.o dberror.o storage_mgr.o
	cc -o TestAssn1 test_assign1_1.o dberror.o storage_mgr.o

test_assign1_1.o: test_assign1_1.c test_helper.h dberror.h storage_mgr.h
	cc -c test_assign1_1.c

dberror.o: dberror.c dberror.h
	cc -c dberror.c

storage_mgr.o: storage_mgr.c storage_mgr.h dberror.h
	cc -c storage_mgr.c
But I only get this message:
make: *** No rule to make target `–f'. Stop.
How should I correct this?
You need to use a normal dash (-), not an en dash, in the command.
My guess is you copied this command from a blog or other web source. Many blog/web frameworks have a bug where they replace typewriter punctuation with its typographically "correct" counterparts, even inside code-formatted text.
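One way to confirm the diagnosis is to dump the raw bytes of the command; a quick sketch:
# an en dash is the UTF-8 sequence e2 80 93; a normal dash is 2d
printf 'make –f makefile1' | od -An -tx1
#  6d 61 6b 65 20 e2 80 93 66 20 6d 61 6b 65 66 69
#  6c 65 31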
This is very odd, as your make usage is correct per http://linux.die.net/man/1/make
Please try some of the other formats for this option:
-f file, --file=file, --makefile=FILE
Use file as a makefile.
Otherwise, perhaps your make is not the one listed in that man page (which is GNU make).

How can I get perf to find symbols in my program

When using perf report, I don't see any symbols for my program; instead I get output like this:
$ perf record /path/to/racket ints.rkt 10000
$ perf report --stdio
# Overhead   Command   Shared Object      Symbol
# ........  ........  .................  ......
#
    70.06%  ints.rkt  [unknown]          [.] 0x5f99b8
    26.28%  ints.rkt  [kernel.kallsyms]  [k] 0xffffffff8103d0ca
     3.66%  ints.rkt  perf-32046.map     [.] 0x7f1d9be46650
Which is fairly uninformative.
The relevant program is built with debugging symbols, and the sysprof tool shows the appropriate symbols, as does Zoom, which I think is using perf under the hood.
Note that this is on x86-64, so the binary is compiled with -fomit-frame-pointer, but that's the case when running under the other tools as well.
This post is already over a year old, but since it came out at the top of my Google search results when I had the same problem, I thought I'd answer it here. After some more searching around, I found the answer given in this related StackOverflow question very helpful. On my Ubuntu Raring system, I then ended up doing the following:
Compile my C++ sources with -g (fairly obvious, you need debug symbols)
Run perf as:
perf record -g dwarf -F 97 /path/to/my/program
This way perf is able to handle the DWARF 2 debug format, which is the standard format gcc uses on Linux. The -F 97 parameter reduces the sampling rate to 97 Hz. The default sampling rate was apparently too large for my system and resulted in messages like this:
Warning:
Processed 172390 events and lost 126 chunks!
Check IO/CPU overload!
and the perf report call afterwards would fail with a segmentation fault. With the reduced sampling rate everything worked out fine.
Once the perf.data file has been generated without any errors in the previous step, you can run perf report etc. I personally like the FlameGraph tools to generate SVG visualizations.
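A typical pipeline with those tools looks roughly like this (it assumes the stackcollapse-perf.pl and flamegraph.pl scripts from the FlameGraph repository are on your PATH):
# fold the recorded stacks and render them as an SVG flame graph
perf script | stackcollapse-perf.pl | flamegraph.pl > profile.svg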
Other people reported that running
echo 0 > /proc/sys/kernel/kptr_restrict
as root can help as well, if kernel symbols are required.
In my case the solution was to delete the ELF files in the ~/.debug/ folder, which contained cached symbols from previous builds and were messing things up.
You can always use the nm command.
Here is some sample output:
Ethans-MacBook-Pro:~ phyrrus9$ nm a.out
0000000100000000 T __mh_execute_header
0000000100000f30 T _main
U _printf
0000000100000f00 T _sigint
U _signal
U dyld_stub_binder
I had this problem too: I couldn't see any userspace symbols, but I saw some kernel symbols, so I thought this was a symbol-loading issue. After trying all the possible solutions I could find, I still couldn't get it to work.
Then I faintly remembered that
ulimit -u unlimited
is needed. I tried it, and it magically worked.
I found from this wiki that this command is needed when you use too many file descriptors.
https://perf.wiki.kernel.org/index.php/Tutorial#Troubleshooting_and_Tips
My final command was:
perf record -F 999 -g ./my_program
I didn't need --call-graph.
Make sure that you compile the program with the -g option in gcc (cc) so that debugging information is produced in the operating system's native format.
Try to do the following and check if there are debug symbols present in the symbol table.
$ objdump -t your-elf
$ readelf -a your-elf
$ nm -a your-elf
What about your dev host machine? Is it also running an x86_64 OS?
If not, make sure your perf is cross-compiled, because perf depends on objdump and other tools in the toolchain.
I got the same problem with perf after overriding the name of my program via prctl(PR_SET_NAME)
As far as I can see, your case is pretty similar:
70.06% ints.rkt [unknown]
The command you executed (racket) is different from the one perf has seen.
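For illustration, a minimal C sketch of how a process can end up reporting a different name than its binary (the name string here is arbitrary):
#include <stdio.h>
#include <sys/prctl.h>

int main(void) {
    /* rename the calling thread; perf now reports "ints.rkt" as the
       command, regardless of what the binary is actually called */
    prctl(PR_SET_NAME, "ints.rkt", 0, 0, 0);
    getchar();  /* stay alive long enough to be profiled */
    return 0;
}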
You can check the value of kptr_restrict with cat /proc/kallsyms. If the addresses of the symbols in the result are all 0x000000, you can fix it with echo 0 > /proc/sys/kernel/kptr_restrict. After this, perf report should give you the result you want.

Compressing the core files during core generation

Is there a way to compress core files during core dump generation?
If storage space is limited in the system, is there a way to conserve it when core dumps are needed, by compressing them immediately as they are generated?
Ideally the method would work on older versions of Linux, such as 2.6.x.
The Linux kernel /proc/sys/kernel/core_pattern file will do what you want: http://www.mjmwired.net/kernel/Documentation/sysctl/kernel.txt#191
Set the filename to something like |/bin/gzip -1 > /var/crash/core-%t-%p-%u.gz and your core files should be saved compressed for you.
For an embedded Linux system, the following change works well to generate compressed core files, in two steps:
step 1: create a script
touch /bin/gen_compress_core.sh
chmod +x /bin/gen_compress_core.sh
cat > /bin/gen_compress_core.sh
#!/bin/sh
exec /bin/gzip -f - > "/var/core/core-$1.$2.gz"
(press Ctrl+D to end the input)
step 2: update the core pattern file
cat > /proc/sys/kernel/core_pattern
|/bin/gen_compress_core.sh %e %p
(press Ctrl+D again)
As suggested by the other answer, the Linux kernel /proc/sys/kernel/core_pattern file is a good place to start: http://www.mjmwired.net/kernel/Documentation/sysctl/kernel.txt#141
As the documentation says, you can specify the special character "|", which tells the kernel to pipe the dump to a script. As suggested, you could use |/bin/gzip -1 > /var/crash/core-%t-%p-%u.gz as the name; however, it didn't seem to work for me. I expect the reason is that the kernel doesn't treat the > character as a redirection; rather, it probably passes it as a parameter to gzip.
To avoid this problem, as others suggested, you can create your script in some location; I am using /home/<username>/crashes/core.sh. Create it using the following command, replacing <username> with your user (you can obviously also change the entire path):
echo -e '#!/bin/bash\nexec /bin/gzip -f - >"/home/<username>/crashes/core-$1-$2-$3-$4-$5.gz"' > ~/crashes/core.sh
This script takes 5 input parameters, concatenates them, and appends them to the core path. The full paths must be specified inside ~/crashes/core.sh, and the location of the script itself is also up to you. Now let's tell the kernel to use our executable, with parameters, when generating the file:
sudo sysctl -w kernel.core_pattern="|/home/<username>/crashes/core.sh %e %p %h %t"
Again, <username> should be replaced (or the entire path changed to match the location and name of your core.sh script). The next step is to crash some program; let's create an example crashing .cpp file:
int main() {
    int *a = nullptr;
    int b = *a;  // dereferencing a null pointer; crashes with SIGSEGV
}
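A sketch of the build-and-run step (the file name crash.cpp is assumed):
g++ -g crash.cpp -o crash   # -g keeps the eventual core useful for debugging
ulimit -c unlimited         # make sure the core size limit isn't 0
./crash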
After compiling and running, there are two possibilities. Either we will see:
Segmentation fault (core dumped)
Or
Segmentation fault
In case we see the latter, there are a few possible reasons:
ulimit is not set; ulimit -c shows the current size limit for cores
apport or your distro's core dump collector is not running; this should be investigated further
there is an error in the script we wrote; to rule out the other causes, I suggest first checking a basic dump path. The following should create /tmp/core.dump:
sudo sysctl -w kernel.core_pattern="/tmp/core.dump"
I know there is already an answer to this question; however, it wasn't obvious to me why it wasn't working "out of the box", so I wanted to summarize my findings here. I hope it helps someone.
