How to compile and run assembly code in Linux

I am quite new to assembly and to Linux as a whole. I found a Space Invaders program written in assembly on GitHub, but I have no clue how to compile and run it.
I first thought I could use gcc -o name name.s, but that didn't work.
Like I said, I am completely new to using Linux, and I would greatly appreciate it if someone could explain how to compile this and get it running.
Here's the link: https://github.com/timotei/Space-Invaders-Clone/blob/master/spacein.asm

Under Linux the standard assembler is as. If you want to use assembler sources inside a C or C++ project, you can give the compiler (e.g. gcc) a hint with -x assembler, or -x assembler-with-cpp if you want to run the C preprocessor over the source first. Normally gcc also accepts assembler sources when the file name ends with a standard extension such as *.s or *.S.
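As a rough sketch (assuming a source file written in GNU as syntax for Linux; the file names are placeholders, and the linked space-invaders program is DOS-style assembly, so these exact commands won't apply to it directly), the usual invocations look like this:
as -o program.o program.s                       # assemble to an object file
ld -o program program.o                         # link a freestanding program that defines _start
gcc -x assembler -o program program.s           # or let gcc assemble and link in one step
gcc -x assembler-with-cpp -o program program.S  # same, but run the C preprocessor first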
There is already another question here which covers DOS assembler:
Taking an Assembly Course, Stuck in DOS!
And maybe you find some other discussions helpful:
https://www.winehq.org/pipermail/wine-users/2007-November/028174.html
But I have no idea why someone would want to start with assembler on x86 at all unless they are an OS programmer or kernel hacker. :-)

Related

How to use assembly code you get online?

I have some C code I would like to optimize. It turns out the Intel C Compiler (ICC) does a much better job at this than GCC, but I don't have a copy of that compiler and it is very expensive. However, I can compile the code with ICC online at godbolt.org and get the generated assembly.
If I copy and paste this assembly into a text file, how can I then convert it into a functioning executable?
You will need to begin by making sure that the runtime environment for which godbolt.org compiles is similar enough to your own runtime environment (good luck with that): for example, you may be using Windows while godbolt.org uses Linux, or the other way around, so when you bring the assembly to your system you might be able to convert it to object code, but it will still not link and it will not run.
Then you will need to find an assembler for your platform that is compatible with the assembly syntax produced by godbolt.org's Intel C Compiler, so that you can produce object files from the assembly files. (Good luck with that.)
Then you will need to find any and all runtime libraries (redistributables) required by code produced by the Intel C Compiler. (Good luck with that.)
Finally you will need to obtain a linker to link your resulting object files with the runtime libraries to produce an executable. (Good luck with that.)
Sometimes we need honest answers to our questions just so that we can realize how impossible our ideas are.
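For what it's worth, in the rare case where everything does line up (say both sides are Linux/x86-64 and the compiler output is in a syntax GNU as accepts), the mechanical steps are short; the file names below are only placeholders:
as -o hot_function.o hot_function.s   # assemble the pasted output into an object file
gcc -o prog main.c hot_function.o     # link it together with the rest of your C code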

Compile Linux gcc on Windows - nvcc on Windows

Here is an interesting question that, if answered positively, would make cross-compiling a whole lot easier.
Since gcc is written in C++, would it be possible to recompile the Linux gcc compiler on Windows with the MinGW g++ or Visual Studio C++ compiler, so that the resulting Windows executable would be able to compile C code into Linux programs?
If so, what would be needed to do that?
So to simplify, here is what I want to do.
mingw32-g++ gcc.cpp -o gcc.exe
The command will probably not work, because if it were that easy it would probably have been done before. What I am asking is whether this concept would even be possible.
Edit: thanks, and expanding the question to nvcc
fvu was able to answer the question for the gcc compiler (please use the answer button next time), so if you had the same question you can thank him (or her).
As an extension to the question, would it be possible to edit or recompile nvcc, or the tools it uses, so that nvcc.exe can create a Linux program from CUDA C code? I read that the Windows variant of nvcc can only use the Visual Studio cl.exe, not MinGW or Cygwin.
Is it possible to create linux programs with cl.exe? And if so, could that be used to generate linux programs with nvcc.exe?
Read the chapter on cross-compiling in the gcc manual; gcc's architecture makes it quite easy to set up a toolchain where the target is different from the development machine.
I never went the exact route you describe, but I have built toolchains under Windows that target ARM9 embedded Linux machines, and it works like a charm (using Cygwin, by the way). Look here for a gentle introduction. Also very useful info here.
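To give a rough idea of what that looks like in practice, a Windows-hosted, Linux-targeting gcc is configured with separate host and target triples. This is only an illustrative sketch: the triples and prefix are made up, and a real build also needs a matching binutils, kernel headers and a C library for the target:
../gcc-src/configure --host=x86_64-w64-mingw32 --target=x86_64-linux-gnu --prefix=/opt/cross
make && make install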
I am not going to comment on what can be done with respect to nvcc, CUDA is somewhere on my (long) list of stuff to tinker with...
Now, can cl generate Linux binaries? The answer is "sort of": as long as the target processor belongs to a processor family that cl supports, the object files it generates should probably not contain anything that would prevent their execution on Linux, as they just contain machine code. That's the theory. However:
- as Linux uses a different executable format, you would need a Windows-hosted linker that understands Windows-style object files (afaik, COFF) and links them into a Linux-style (ELF) executable. I have never heard of such a beast, although in theory it could exist;
- the startup code (a tiny program that wraps around your main function) will also be different and needs to be written;
- and some more, e.g. library-related issues.
So, the practical answer is no, although it might be a nice summer project for a bored student :)

Executing machine code attached at the end of an executable

I have an object file which has a main() function inside and just needs to be linked with the crt... objects to become an executable. Unfortunately I can only compile; I cannot link it into an executable.
So I decided to create a C program (on a PC with a working GCC and linker) that attaches the object(s) at the end of itself and executes the attached code at run time (simulating a linked object).
I saw the DL API but I don't know how to use it for the problem I described.
Can somebody help me understand how I can execute code attached at the end of an executable?
Avoid doing that; it would be a mess. And it probably won't work reliably, at least if the program is dynamically linked to libc6.so (e.g. because of ASLR).
Just use shared objects, i.e. dynamically linked libraries (see the dynamic linker wikipage). You need to learn about dlopen(3) etc.
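For reference, here is a minimal sketch of the dlopen(3) approach; "./libplugin.so" and "plugin_main" are hypothetical names, and you would build it with something like gcc main.c -ldl:
/* Minimal sketch: load a shared object at run time and call a function from it.
   "./libplugin.so" and "plugin_main" are hypothetical names. */
#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    void *handle = dlopen("./libplugin.so", RTLD_NOW);
    if (!handle) {
        fprintf(stderr, "dlopen: %s\n", dlerror());
        return 1;
    }
    /* look up a symbol exported by the shared object */
    int (*entry)(void) = (int (*)(void)) dlsym(handle, "plugin_main");
    if (!entry) {
        fprintf(stderr, "dlsym: %s\n", dlerror());
        dlclose(handle);
        return 1;
    }
    int rc = entry();   /* call into the loaded code */
    dlclose(handle);
    return rc;
}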
If you really insist, take many weeks to learn a lot more: read Levine's book Linkers and Loaders, read Advanced Linux Programming, read many man pages (including execve(2), mmap(2), elf(5), ld.so(8), ...), study the kernel code for execve and mmap, the GNU libc and musl libc source code (for details about implementations of the dynamic linker), the x86-64 ABI or the ABI for your target processor (is it an ARM?), and learn more about GNU binutils, etc.
In short, your life is too short doing such messy things, unless you are already an expert, e.g. able to implement your own dynamic linker.
Addendum
Apparently your real issue is using tinycc on ARM (under Android, I am guessing). I would then ask on their mailing list (and perhaps contribute a patch), or simply use binutils and write your own GNU ld linker script to make it work. Then the question becomes entirely different and completely unrelated to your original one. There might be previous attempts to solve that, according to Google searches.

Strange situation to get a Segmentation Fault

A friend of mine gave me a C project to work on in Linux with sockets (tic-tac-toe).
The project already comes with the executable files, and the program works nicely.
When I delete the executable files and compile the program myself, I get no errors, but there is a certain situation in the program (when I challenge another player to a game) where I get a segmentation fault; with the original executable files I get no error in that situation.
I didn't change anything in the program, just removed the previous executable files and compiled the program myself. I have no idea why this is happening.
Is there, theoretically, any explanation?
It is a common situation when you use different versions of compilers, libraries, utilities, etc. No wonder that big projects (such as the Linux kernel) explicitly define which versions of the tools you should use to get the expected results. First, try to recompile with the same compiler version your friend used; if that does not help, dig deeper: libraries, utilities, ...

Is it possible to compile the Linux kernel with something other than gcc

I wonder if someone has managed to compile the Linux kernel with a compiler other than gcc, or if someone has ever tried. The question may seem silly or academic, but it arose when I thought about answers to: Are C++ int operations atomic on the mips architecture
It seems that the atomicity of some operations depends not only on the CPU architecture, but also on the compiler used. So I wonder whether, in the Linux world, any compiler other than gcc even works.
Linux explicitly depends on some gcc extensions, so any other compiler must be compatible with the needed extensions.
This is not a "no", since it's of course not impossible for a separate compiler vendor/developer to track gcc's extensions, just a data point that might help you search.
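To make "gcc extensions" concrete, here is a small illustrative snippet (not actual kernel code) using a few GNU C extensions the kernel leans on heavily: statement expressions, typeof, and extended inline assembly. Another compiler would have to accept all of these:
/* Illustrative only, not kernel code. Compiles with gcc on x86-64. */
#include <stdio.h>

/* statement expression + typeof: a side-effect-safe min() macro */
#define min(a, b) ({            \
        typeof(a) _a = (a);     \
        typeof(b) _b = (b);     \
        _a < _b ? _a : _b; })

/* GCC extended inline assembly: read the x86 time-stamp counter */
static inline unsigned long long rdtsc(void)
{
        unsigned int lo, hi;
        asm volatile("rdtsc" : "=a"(lo), "=d"(hi));
        return ((unsigned long long)hi << 32) | lo;
}

int main(void)
{
        printf("min(3, 7) = %d\n", min(3, 7));
        printf("tsc = %llu\n", rdtsc());
        return 0;
}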
At some point tcc would process and run the Linux kernel source. So that would be a yes, I guess.
(Hat tip to ephemient in the comments.)
The LLVM developers are trying to compile it with clang. The meta-bug on compiling the Linux kernel with clang has more details (the dependency tree for that meta-bug shows how little seems to be left).
There have been some efforts (and patches) to compile an early version of the 2.6 kernel with icc.
Yup. I've done this. See [cfe-dev] Clang builds a working Linux Kernel (Boots to RL5 with SMP, networking and X, self hosts).
IBM's compiler was able to do it a few Linux versions ago, but I'm not sure about now, nor am I sure how well IBM optimized the kernel as instructed. All I know is, they got it to build.
As Linux is self-hosting (with its own libc) and has been developed from the start with gcc (and gcc cross compilers), it's sort of silly to use anything else.
I think the biggest obstacle is mainly playing nicely with the preprocessor macros and the instructed optimizations (not even getting into a departure from gas), as GNU has basically written the book on the above and then extended it. Beyond that, Linux tunes its optimizations to work with gcc; for instance, don't get caught using 'volatile' in the kernel without a damn good reason. Using inline and actually having the compiler agree is another challenge.
Linus is the first one to call GCC an &*#$ hole, which makes for a better compiler.
This is why we have the great GNU/Linux debate.
Many, many, many years ago it was actually possible to compile the kernel with g++, and as far as I remember part of the motivation was that C++ had stronger type checking, not necessarily to have g++ produce the object files. But as Neil Butterworth has pointed out, Linus is not particularly fond of C++, and there is zero chance that this will ever be possible again.
EKOPath 4 Compiler: not right now, but probably with some minor patches:
https://github.com/path64/repositories
http://www.pathscale.com/ekopath-compiler-suite
I am currently working on compiling the Linux kernel with Open64 for the MIPS architecture, and some other people are working on making Open64 build it for the x86 architecture. At the moment the kernel partly runs, but there are still runtime failures.
As for the atomicity problem, at least I have not run into it, and I do not think it is really a problem. The reasons are:
The Linux kernel is already a collection of source code that builds successfully with GCC, so if a compiler cannot build it, or the built kernel fails to run, it is the compiler's problem.
If a compiler wants to build the Linux kernel successfully, it has to abide by the GNU C extensions, and these extensions give a clear description of what an atomic operation is, so such a compiler only needs to generate code according to them.
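As a side note, the compiler-level atomic support alluded to above looks like this with GCC's __atomic builtins; this is only an illustration of a GCC extension, not of what the kernel itself does (the kernel defines its own atomic primitives, largely via inline assembly):
#include <stdio.h>

int main(void)
{
        int counter = 0;

        /* atomically add 1, with sequentially consistent ordering */
        __atomic_fetch_add(&counter, 1, __ATOMIC_SEQ_CST);

        /* atomically read the current value */
        int snapshot = __atomic_load_n(&counter, __ATOMIC_SEQ_CST);

        printf("counter = %d\n", snapshot);
        return 0;
}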
My non-technical guess: the Linux kernel can't currently (2009) be compiled with any compiler other than the GNU compiler, gcc.
I say this on the basis that I've heard Richard Stallman say, with some conviction, that Linux should be called GNU/Linux because the kernel is "only one part of the operating system", and I'm guessing he would not be able to say this if the kernel were not dependent on GNU (e.g. a tonne of embedded devices run a Linux OS without any GNU software).
As I said, just a guess, let me know if I'm wrong...
