I have 64-bit Ubuntu, but I need to write ASM code for Intel 8086 Windows... Is there any software, IDE, or emulator you can suggest?
I know that there are different instructions for each kind of processor...
First of all, you can always assemble your program into a 32-bit Windows PE executable, using the proper assembler. I can't speak for other assemblers, but FASM, for example, can do it easily.
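For illustration, a minimal sketch of that workflow, assuming FASM is installed on the Linux box and hello.asm is a hypothetical source file that starts with a format PE directive:

$ fasm hello.asm hello.exe    # assembles a Windows PE binary, right on Linux
$ wine hello.exe              # one quick way to smoke-test it (WINE is covered below)

The file names are just examples; the point is that FASM itself runs fine on Linux and the output format is chosen by the source, not by the host OS.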
The big problem is the testing, i.e. how to run and debug the compiled program.
The only IDE I know of that is able to compile, run, and debug Windows and Linux applications at the same time, from Linux, is Fresh IDE (it is based on the FASM compiler). Fresh IDE is a Windows application, so you will need WINE installed in order to use it (and it uses WINE in order to be able to run Windows applications from Linux).
As far as I know, WINE can run on 64-bit Linux, but I have never used it that way, so I can't give any guidance on installing it.
After installing WINE, install and configure Fresh IDE as described in the setup guide.
NOTE: Fresh IDE is my product and I am possibly a little bit biased. But it has the features described in this answer (and many more useful features as well). :)
Assembly syntax is assembler-specific. You will need to investigate this for whichever assembler you're planning to use. I am unaware of any specific IDE/emulator for this purpose on Linux; however, you can use any editor of your choice (vi, emacs, eclipse, nano, textedit, etc.) to write the text file which contains your code. You could then transfer this to a Windows machine with the appropriate assembler. Unless you lack regular access to the compilation machine, however, it's likely more in your interest to work in the environment for which you're going to compile.
You can write code on whatever machine you want. Testing it is another story entirely. In your case, you'll at least want to get a cross-assembler to build Windows binaries on your Linux machine, and then probably a VM running Windows of some kind to test your program. If you're going to the effort of setting up the VM, you can probably just use a native assembler there too.
I have a static library mylib.a generated under Linux. Now how can I link it into a project under a Windows environment? mylib.a provides functions for others to call. The reason I built this library on Linux is that everything has already been set up there.
I have a static library mylib.a generated under Linux. Now how can I link it into a project under a Windows environment?
You simply cannot do that (unless on Windows you use some Linux compatibility layer like WSL), because Windows and Linux are very different and incompatible: different system calls (for Linux, listed in syscalls(2); on Windows, the set of system calls is not well documented), different ABIs (for Linux, see this), different calling conventions, different executable formats (for Linux, ELF; for Windows, PE), different library formats, and different dynamic loading concepts (so the notion of a plugin is different and incompatible on Windows and on Linux).
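You can see the format mismatch directly from the Linux side; an illustrative sketch (the member name foo.o and the exact output are hypothetical):

$ file mylib.a
mylib.a: current ar archive
$ ar x mylib.a && file foo.o
foo.o: ELF 64-bit LSB relocatable, x86-64, version 1 (SYSV), not stripped

The object files inside the archive are ELF relocatables; a Windows linker, which expects COFF objects, cannot consume them.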
BTW, a static library alone is not enough (even on Linux only), so it is useless by itself. You need additional header files and documentation to use it in a project.
You could read something like Operating Systems: Three Easy Pieces to understand better what an OS is and provides. An entire book is needed. Then you could dive into the specific OS API for your system (e.g. for Linux, read ALP or something newer related to POSIX, plus the man pages; for Windows, study the WinAPI in detail).
My recommendation is to always deal with source code (above what your OS provides). So if you can get the source code of mylib.a, you might port it to Windows (and that could take years of work, if the library is Linux- or POSIX-specific).
Be aware that several frameworks exist to provide a nearly common API (at the source code level) on Linux, Windows, and MacOSX. For example, Qt, POCO, GTK, SDL, and many others. If you code in C or C++ against one of these frameworks (and nothing else!), porting your source code from Windows to Linux or vice versa should be really easy. However, some differences still remain: file paths, font names, command language, and many other resources are still different on Linux and on Windows.
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 6 years ago.
Improve this question
There are many distributions of Linux. All of them have one thing in common, however: the kernel. And Linux programs run across all of them. If I make a minimalistic distribution from the kernel, will current programs made for Linux run? What defines the differences between distributions? I am a beginner on this stuff; don't be harsh if it is a stupid question. Thank you.
Yes, with caveats.
You need to make sure you have full C support, and by that I mean something like glibc installed or installable, or you cannot build programs for your minimal install. If you can install and compile C programs on Linux, then you can in effect build practically everything else from scratch.
If you want to be able to download binaries and run them, that is different: the binaries will likely require shared libraries that they would have had on the systems they were built for. Unless you have those libraries, you cannot run the existing binaries you find online.
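You can inspect those dependencies with ldd; a sketch (the exact libraries and load addresses vary by system, so treat the output as illustrative):

$ ldd /bin/ls
        linux-vdso.so.1 (0x00007ffc...)
        libselinux.so.1 => /lib/x86_64-linux-gnu/libselinux.so.1 (0x...)
        libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x...)

If any line says "not found", the binary will not run until that library is provided.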
What defines the differences of distributions?
There are a lot of defining factors in each distribution. If we disregard things like...
Licensing, i.e. Red Hat vs. Debian
Stance on things like GPL/BSD/non-free
Release schedules, i.e. Debian vs. Ubuntu
Target audience, i.e. Ubuntu vs. Debian
I think the biggest defining factor is package management, i.e. yum/rpm vs. apt/dpkg, and how the base configuration is managed on the machine. This is certainly the thing I seem to use the most and miss the most when I change distributions. The kernel itself is very rarely on my mind, which is in large part a measure of its success.
Most people start with something like ISOLINUX and get a bootable CD, but even then you normally choose a base distribution. If you want to create a base distribution, that's a ton of work. Have a look at this great infographic of the Linux family tree:
https://en.wikipedia.org/wiki/List_of_Linux_distributions#/media/File:Linux_Distribution_Timeline.svg
If you look at Debian/Ubuntu, the amount of infrastructure these distributions have set up is quite staggering. They have millions, perhaps even billions, of lines of code in them, all designed to run on their supported versions. You might be able to take a binary from one of them and run it on Red Hat, but it's likely to fail unless the planets are in alignment. Some people think this is actually a bad thing:
https://en.wikipedia.org/wiki/Ingo_Moln%C3%A1r#Quotes
The basic failure of the free Linux desktop is that it's, perversely, not free enough... Desktop Linux distributions are trying to "own" 20 thousand application packages consisting of over a billion lines of code and have created parallel, mostly closed ecosystems around them... The Linux package management method system works reasonably well in the enterprise (which is a hierarchical, centrally planned organization in most cases), but desktop Linux on the other hand stopped scaling 10 years ago, at the 1000 packages limit...
If I make a minimalistic distribution from the kernel, will current programs made for Linux run?
Very few programs actually use the kernel directly. They also need a libc, which is responsible for actually implementing most of the C routines used by either the programs themselves or the VMs running their code.
It is possible to statically link the libc to the program, but this both bloats the size of the program and makes it impossible to fix security issues in the linked libraries without rebuilding the whole program.
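To make that trade-off concrete, a sketch assuming hello.c is any small C program and gcc plus a static libc are installed:

$ gcc hello.c -o hello-dynamic          # links against the shared system libc
$ gcc -static hello.c -o hello-static   # copies the needed libc code into the binary
$ ldd hello-static
        not a dynamic executable

The static binary is typically far larger, and a libc security fix means rebuilding it, while the dynamic one picks up a fixed libc.so.6 automatically.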
Well, certain programs demand a specific version of the kernel. Usually these programs act as "drivers" for the rest of the system (e.g. nvidia proprietary drivers: some of them act in kernel space while others run in user space, but require that very specific kernel module and thus that very specific kernel build).
A less strict case is when a program demands a specific capability from the kernel. For example, almost all modern Linux virtualization systems rely on the cgroups feature. Thus, to use them you need to have a reasonably fresh kernel.
Nevertheless, a lot of the kernel API is stable, so you can rely on it. But usually programs don't call kernel routines directly. A typical way to use a kernel function is to call a corresponding library routine which wraps and leverages the kernel API. The main, most basic library of that kind is libc.
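You can watch that wrapping in action with strace; an illustrative sketch (the exact trace depends on your system):

$ strace -e trace=write /bin/echo hi
write(1, "hi\n", 3)                     = 3
+++ exited with 0 +++

echo calls ordinary libc routines, and libc ultimately enters the kernel through the write system call shown in the trace.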
Technically, programs compiled for one version of libc (as well as other shared libraries) can be used with slightly different versions of the corresponding libraries. For example, a lot of people use Skype compiled for SuSE in completely different Linux distributions. That Skype is a pretty complex application with a lot of libraries linked in and so on, but nevertheless it works without any significant problem. The same goes for a lot of other proprietary programs which couldn't be compiled for a given distribution or even for a given installation. But sometimes shit just happens :) Those binary incompatibilities are quite rare, but they happen from time to time.
After searching about it, I found some info (yet it is confusing for me):
Cygwin is a Unix-like environment and command-line interface for Microsoft Windows.
I found the above line on Wikipedia, but what does that mean?
I'm not getting a clear idea about MinGW, Cygwin, and MSYS. Please help.
Because it keeps confusing people:
Cygwin: think of it as an OS. It provides a POSIX C runtime built on top of Windows so you can compile most Unix software to run on top of it. It comes with GCC, and to some extent, you can call the Win32 API from within Cygwin, although I'm not sure that is meant to happen or work at all.
MSYS(2): a fork of Cygwin which has path translation magic to make native Windows programs work nicely with it. Its main goal is to provide a shell so you can run autotools configure scripts. You are not meant to build MSYS applications at all. Note that MSYS2 strives for much more and contains a full-blown package management system so you can easily install MinGW-w64 libraries and tools.
MinGW(-w64): A native Windows port of the GCC compiler, including Win32 API headers and libs. It contains a tiny POSIX compatibility layer (through e.g. winpthreads, the unistd.h headers and some other bits), but you cannot compile POSIX software with this. This is best compared to MSVC, which also produces native code using the Win32 API.
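To make the MinGW(-w64) side concrete, a sketch assuming the mingw-w64 cross-compiler package is installed (hello.c is any small C program; the file output is illustrative):

$ x86_64-w64-mingw32-gcc hello.c -o hello.exe
$ file hello.exe
hello.exe: PE32+ executable (console) x86-64, for MS Windows

The result is a native Windows binary that needs no POSIX emulation layer at run time.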
Note that there are MinGW-w64 cross-compilers that run on Cygwin. With MSYS2, I frankly don't see a good reason to do that. Might as well run a VM with Linux if you're going to use Cygwin for that.
More or less from its web page:
Cygwin is:
a POSIX compatibility layer on top of the Windows API, mainly encapsulated in cygwin1.dll;
a distribution system and repository of open-source software compiled with this DLL.
In a nutshell, if you have Linux source code, you can try to recompile it for Cygwin and be able to run it on Windows...
This makes many of the typical Unix commands available (shells, gcc/g++, find, ...).
Alternatives are:
MSYS: a set of typical Unix commands implemented on Windows.
MinGW: a gcc/g++ target able to produce Win32 programs (note that Cygwin gcc/g++ programs will have a dependency on cygwin1.dll that MinGW programs will not have).
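That cygwin1.dll dependency is easy to check from a Cygwin shell, which ships its own ldd; a sketch with hypothetical binary names and illustrative output:

$ ldd cygwin-hello.exe | grep cygwin
        cygwin1.dll => /usr/bin/cygwin1.dll (0x...)
$ ldd mingw-hello.exe | grep cygwin
$

The MinGW binary pulls in only standard Windows DLLs such as KERNEL32.DLL and msvcrt.dll, so the second grep finds nothing.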
I am interested in cross-compiling a Linux kernel for an ARM target on an x86 host. Are there some good practices you recommend? Which is the best cross-compile suite in your opinion?
Have you set up a custom cross-compile environment? If so, what advice do you have? Is it a good idea?
There are two approaches I've used for ARM/Linux tools. The easiest is to download a pre-built tool chain directly.
Pro: It just works and you can get on with the interesting part of your project
Con: You are stuck with whichever version of gcc/binutils/libc they picked
If the latter matters to you, check out crosstool-ng. This project is a configuration tool similar to the Linux kernel configuration application. Set which versions of gcc, binutils, libc (GNU or uClibc), threading, and Linux kernel to build, and crosstool-ng does the rest (i.e. downloads the tarballs, configures the tools, and builds them).
Pro: You get exactly what you selected during the configuration
Con: You get exactly what you selected during the configuration
meaning you take on full responsibility for the choice of compiler/binutils/libc and their associated features/shortcomings/bugs. Also, as mentioned in the comments, there is some "pain" involved in selecting the versions of binutils, the C library, etc., as not all combinations necessarily work together or even build.
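For reference, a minimal sketch of the crosstool-ng workflow (the sample name is just an example; available samples vary by version):

$ ct-ng list-samples                  # list preconfigured target tuples
$ ct-ng arm-unknown-linux-gnueabi     # start from a sample configuration
$ ct-ng menuconfig                    # pick gcc/binutils/libc versions
$ ct-ng build                         # download, configure, and build the toolchain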
One hybrid approach might be to start with the pre-built tools and replace them later with a custom solution via crosstool-ng if necessary.
Update: The answer originally used the CodeSourcery tools as an example of a pre-built tool chain. The CodeSourcery tools for ARM were free to download from Mentor Graphics, but they are now called Sourcery CodeBench and must be purchased from Mentor Graphics. Other options now include Linaro as well as distribution-specific tools from Android, Ubuntu, and others.
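Whichever toolchain you pick, the kernel itself is then cross-compiled by pointing make at the toolchain prefix; a sketch (the arm-unknown-linux-gnu- prefix is an example and depends on the tuple your toolchain was built for):

$ make ARCH=arm CROSS_COMPILE=arm-unknown-linux-gnu- defconfig
$ make ARCH=arm CROSS_COMPILE=arm-unknown-linux-gnu- zImage modules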
I use the Emdebian toolchain for compiling stuff for my ARM machines that isn't happy being compiled natively with the small resources available (/me glares at the kernel). The main package is gcc-4.X-arm-linux-gnueabi (X = 1, 2, 3), which provides appropriately suffixed gcc/cpp/ld/etc. commands. I add this to my sources.list:
deb http://www.emdebian.org/debian/ unstable main
Of course, if you're not using Debian, this probably isn't so useful, but by gum it works well for me.
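Once installed, the suffixed compilers are used like the native ones; a sketch (the exact suffix depends on which gcc-4.X package you installed, and the file output is illustrative):

$ arm-linux-gnueabi-gcc-4.3 -o hello hello.c
$ file hello
hello: ELF 32-bit LSB executable, ARM, version 1 (SYSV), dynamically linked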
I've used Scratchbox while experimenting with building apps for Maemo (Nokia N810), which uses an ARM processor. Supposedly, Scratchbox is not restricted to Maemo development.
I've used crosstool on several targets. It's great as long as you want to build your toolchain from scratch.
Of course there are several pre-built toolchains for ARM as well; just Google it. There are too many to mention here.
1) In my opinion, building your own toolchain works best. You end up having tight control over everything, plus if you're new to embedded Linux, it's a GREAT learning experience.
2) Don't go with a commercial toolchain. Even if you don't want to take the time to build your own, there are free alternatives out there.
If your company will spend the money, have them buy you a JTAG debugger.
It will save you tons of time, and it allows you to easily learn and step through the kernel startup, etc.
I highly recommend the Lauterbach JTAG products... They work with a ton of targets, and the software is cross-platform. Their support is great as well.
If you can't get a JTAG debugger and you're working in the kernel, use a VM to do that: User-Mode Linux, VMware, etc. Your code will be debugged on x86; porting it to your ARM target will be a different story, but it's a cheaper way to iron out some bugs.
If you're porting a bootloader, use U-Boot. Of course, if you're using a reference platform, then you're probably better off using what they provide with the BSP.
I hope that helps.
Buildroot is a tool I've had reasonably good luck with for building a custom uClibc-based toolchain from scratch. It's very customizable, and not excessively particular about what distribution you happen to be running on.
Also, many of its existing users (i.e. embedded router distros) are also targeting ARM.
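The Buildroot workflow is essentially the kernel one; a minimal sketch, run from a Buildroot source tree:

$ make menuconfig    # select the target architecture, toolchain options, and packages
$ make               # builds the cross toolchain, the packages, and a root filesystem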
If you're using Gentoo, getting a cross-compiling toolchain is as easy as
$ emerge crossdev
$ crossdev -t $ARCH-$VENDOR-$OS-$LIBC
where ARCH is arm or armeb, VENDOR is unknown or softfloat, OS is linux, and LIBC is gnu or uclibc.
If all you want is a compiler (and linker) for the kernel, the LIBC part is irrelevant, and you can use -s1/--stage1 to inform crossdev that you only need binutils and gcc.
This is what Eurotech uses for their Debian ARM distribution. You'll note that they don't recommend using a cross compiler if you can avoid it. Compiling on the target itself tends to be a more reliable way of getting outputs that you know will run.
I've been using Linux at university for quite a while, and it seems much more customisable and better for coding.
So I want to switch from Windows 7 to Linux at home.
What branch of Linux should I use? I'm an Emacs user, if that gives any insight.
Which desktop environment should I use? At uni we use KDE, but it's too graphical; often I just click on stuff instead of using the terminal. I want one that encourages me to use the terminal more.
And the biggest question: how do I install it all? Should I put everything on an external hard drive and wipe my computer completely?
I primarily program in Java and Python.
I would recommend that you first try using Linux off a Live CD/DVD: Linux Mint, Ubuntu, etc.
Just download and burn the .iso onto blank media and boot your computer off of it. Play around, check various desktop environments, and see if all your hardware works with the specific Linux distribution. This step is very useful for deciding which distribution you actually want to install onto your computer, especially the hardware check: while things have been improving, the biggest obstacle you may face in configuring your computer to run Linux is often hardware incompatibility. Just make sure everything that you need to work actually works.
If you have no issues wiping out Windows, Linux installation is pretty straightforward these days. It generally takes even less time than re-installing Windows. I would browse the web for installation notes for your specific computer model to see if anyone has already done it successfully, so that you can just follow along. That saves a lot of time.
I use Debian (Wheezy now) and KDE. It's very easy to install and switch desktop environments after installing Linux, though, so that shouldn't be a concern.
I suggest creating a virtual machine using VMware or VirtualBox. As far as the distribution goes, Linux Mint and Ubuntu are pretty user-friendly for first-time installations. And for the desktop environment, I suggest XFCE.
A few Google searches will do you good. I think a virtual environment will be much easier to manage than partitioning a hard drive.
As for the installation step: if you use Windows 7, you may want to make a full backup of your HDD first, so that if things go wrong you will be safe and able to recover.
I was in a somewhat similar situation recently, figuring out which Linux distro to use. Previously I had luck with Scientific Linux, but this time it didn't like my laptop hardware for some reason: after wake-up, the wireless network card was getting stuck and wasn't picking up any signal. I didn't want to recompile the kernel, etc., so I installed Ubuntu, but GNOME 3 was a showstopper. I had to roll back to GNOME 2, but later I tried the XFCE desktop and liked it a lot; I use it right now on my workstation and laptop.
Java, Python, and Emacs probably work well with any Linux distribution out of the box, so it is up to you which one to choose after all. Good luck!
Sorry, I forgot to mention: all contemporary Linux distributions can install a dual-boot setup, so you can keep your Windows 7 installation along with Linux (if you have enough free space). Moreover, the Windows partition will be accessible from Linux, which is handy sometimes.