Linux equivalent header for synch.h

I have C code that contains #include <synch.h>. The code compiles successfully on Solaris, but on Linux I find that the header is missing.
As suggested in a few links, can "sync.h" be used instead?
Or is there any other equivalent header for synch.h in Linux?

The synch.h header on Solaris is for Solaris threads. Among other things, it provides declarations for semaphores and mutexes. You can either use this library (http://sctl.sourceforge.net/sctl_v1.1_rn.html) to get Solaris-compatible threading on Linux or, a much better idea, rework your code to use POSIX threads.
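For the common case of mutexes, that rework is mostly a renaming exercise. Here is a minimal sketch, assuming the original code uses a Solaris mutex from <synch.h>; the function name update_shared_state is just a placeholder, the Solaris calls are shown in the comments, and the POSIX versions need -lpthread at link time.

    #include <pthread.h>

    /* Solaris:  mutex_t m;  mutex_init(&m, USYNC_THREAD, NULL); */
    static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

    void update_shared_state(void)
    {
        pthread_mutex_lock(&m);      /* Solaris: mutex_lock(&m)   */
        /* ... touch the shared data ... */
        pthread_mutex_unlock(&m);    /* Solaris: mutex_unlock(&m) */
    }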

We can't be sure, but one likely possibility is that this is the synch.h that's part of NACHOS, which is often used in educational environments. Head over to the NACHOS project page and read up on it, and decide whether you think that's probably the right thing; if so, you can download and install it for free.

Related

Does Debian use the FreeBSD kernel and Red Hat the GNU kernel?

Is it true that Debian uses the FreeBSD kernel and Red Hat uses the GNU kernel? Does it make any difference whether you use the FreeBSD kernel or the GNU kernel?
Well, the comment grew too long, so I might as well post an answer. It is quite unclear what you are asking, since you don't present any specific problem; at first glance this question does not belong on SO. But since you posted it here, I assume you are asking about the difference from a coder's point of view.
First, to clear things up: the FreeBSD kernel is an alternative kernel, like Hurd, so a Debian system can use the FreeBSD kernel, but it is not the default. Its status is actually "official port".
Unless you are a distro developer, or work even lower in the system, probably the biggest difference lies in the philosophy the user believes in and their peace of mind, or alternatively the hood they grew up in.
Generally, as a programmer you work against a standard library implementation (libc, libc++, libstdc++, or whatever). Those need to follow the standard, and the differences between them are really small. On Debian this is natively provided by the GNU toolset, so you can assume you are working in GNU wonderland. The kernel lies even lower, and you would need to work on really low-level, kernel-related code to notice a difference in the kernel itself. Usually you work a few layers above it, and the kernel stays pleasantly abstracted away.
So there is no need to worry unless you actually run into trouble.

Will writing C in both Windows and Linux cause compiling problems?

I work from 2 different machines. One is Windows and the other is Linux. If I alternately work on the same project but switch between both OSes, will I eventually run into compiling errors? I ask because maybe there are standards supported by one but not by the other.
That question is a pretty broad one, and the answer depends, strictly speaking, on your toolchain. If you use the same toolchain on both systems (e.g. GCC/MinGW or Clang), you minimize the chance of this class of errors. If you use Visual Studio on Windows and GCC or Clang on the Linux side, you'll run into more issues, if only because some of the headers differ. Once your program leaves the realm of strict ANSI C (C89), you'll be on your own.
However, if you aren't careful you may run into a lot of other, more mundane errors, such as the compiler on Linux choking on Windows line endings if you didn't tell your editor on the Windows side to use Unix line endings.
Ah, and also keep in mind that if you want to actually cross-compile, GCC may be the best choice and therefore the first part I mentioned in my answer becomes a moot point. GCC is a proven choice on both ends. And given your question it's unlikely that you are trying to write something like a kernel mode driver - which would be fundamentally different.
That will only be a problem if your application uses some platform-specific API.
It is entirely possible to write code that compiles on both platforms without issues, but not without some difficulty. Compilers let you use non-standard extensions, and it's often hard to build fancier user interfaces (even purely textual ones), because as soon as you want to do more than "read a line of text as it is entered in a shell", you are in non-standard territory.
If you do find yourself needing to do more than what the standard C library can do, make sure you isolate those parts of the code into a separate file (or a couple of files, one for Linux/Unix style systems and one for Windows systems).
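To make that concrete, here is a sketch using hypothetical file and function names: the rest of the program includes only the small portability header, and the build picks one implementation file per platform.

    /* console.h -- hypothetical portability header; the rest of
     * the program includes only this. */
    #ifndef CONSOLE_H
    #define CONSOLE_H
    void console_clear(void);   /* one implementation per platform */
    #endif

    /* console_linux.c -- compiled only into the Linux/Unix build */
    #include <stdio.h>
    #include "console.h"
    void console_clear(void) { printf("\033[2J\033[H"); } /* ANSI clear screen */

    /* console_win32.c -- compiled only into the Windows build */
    #include <stdlib.h>
    #include "console.h"
    void console_clear(void) { system("cls"); }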
Using the same compiler (gcc) helps avoid problems of the form "compiler B doesn't compile code that works fine in compiler A".
But it's far from an absolute necessity. Just make sure you compile the code on both platforms, and with all of your supported compilers, often enough that you haven't dug a very deep hole before discovering that "it's not working on the other system". It certainly helps to have (at least) a virtual machine running the other OS, so you can easily try both variants.
Ideally, you want to set up an automated system, such that when you change the code [and feel that the changes are "complete"], it automatically gets built on both platforms and all compilers you want to use. And if possible, also automatically tested!
I would also seriously consider using version control. That way, when something breaks on one side or the other, you can go back and look at what the code looked like before it stopped working, and (hopefully) find the reason it broke much quicker than "Hmm, I think it's the change I made to foo.c, let's take that out... no, not that one; OK, how about the change here...". With version control you can instead say "OK, version 1234 doesn't work, let's try version 1220 - that works. Now try 1228 - still works, so the change is between 1229 and 1234. Try 1232 - ah, it's broken there", with no editing of files, and you can still jump to any other version with very little difficulty. I have used Mercurial quite a bit, git a little, some Subversion, and worked on a project in Perforce for a few years. All of these are good; personally, I think I prefer Mercurial.
As a side effect, most version control systems also handle line endings in a saner way than fixing them manually.
If you combine your version control system with an automated build-and-test system such as Jenkins, you can automate almost everything. Jenkins is free, runs on both Windows and Linux, and can automatically build and test your code as and when you submit it to the version control system.
It will not be a problem as long as you recompile the source code on the respective OS. If you want to run a compiled file generated on Windows (.exe or .obj) on Linux, or vice versa, that will definitely be a problem and won't be possible. But you can move your source code (files with the .c/.cpp extension) between the OSes. Different header files can also cause problems sometimes, so take care of that as well. The safest practice is to use a single OS for your entire project and avoid multiple OSes unless it is absolutely necessary.

Linux/Mac: What is good method to determine platform at compile time?

I would like to generalize a build system to compile on several (somewhat similar) platforms. What is a good method for determining the type of host that a shell script or Makefile is running on? I would like to distinguish between Mac and Linux, but also between specific Linux distributions (e.g. RHEL, Ubuntu). Cygwin is not important for me, but if you include it in your response I am sure others will find it valuable.
The rationale may include using the host type to fetch and install the correct versions of binary packages when it is more convenient to do so than compile from source. In addition, some commercial software is binary-packaged for specific distros, so part of the motivation is to grab the right binary.
Thanks,
SetJmp
Autotools to the rescue. It has tons of macros that help you do this kind of stuff.
http://www.lrde.epita.fr/~adl/autotools.html
uname -a to distinguish major *nix variants
Not so sure what the best way to distinguish Red Hat from Ubuntu would be. You could look for the package-management tools and query installed packages, which would eventually help you narrow down Debian derivatives, etc. There's probably something more obvious and up-front, though.
Linux variants generally store distro information in /etc/issue.
Most kernels put information in /proc/version.
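Both of those sources can also be read from C at run time. A minimal sketch, assuming a POSIX system: uname(2) identifies the kernel, and on Linux the first line of /etc/issue usually names the distribution.

    #include <stdio.h>
    #include <sys/utsname.h>

    int main(void)
    {
        struct utsname u;
        if (uname(&u) == 0)                 /* sysname: "Linux", "Darwin", ... */
            printf("kernel: %s %s\n", u.sysname, u.release);

        FILE *f = fopen("/etc/issue", "r"); /* distro banner, Linux only */
        if (f) {
            char line[256];
            if (fgets(line, sizeof line, f))
                printf("distro: %s", line);
            fclose(f);
        }
        return 0;
    }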
It's not completely straightforward. You can use uname to find out the general parameters, but differentiating between distributions is a harder task. Maybe you should consider using something like autoconf to generalize your build system?
Just in case you're using Qt, there's a really nice set of defines, Q_OS_*, that tell you which operating system you're compiling for:
Q_OS_AIX
Q_OS_BSD4
Q_OS_BSDI
Q_OS_CYGWIN
Q_OS_DARWIN
Q_OS_DGUX
Q_OS_DYNIX
Q_OS_FREEBSD
Q_OS_HPUX
Q_OS_HURD
Q_OS_IRIX
Q_OS_LINUX
Q_OS_LYNX
Q_OS_MAC
Q_OS_MSDOS
Q_OS_NETBSD
Q_OS_OS2
Q_OS_OPENBSD
Q_OS_OS2EMX
Q_OS_OSF
...
They are defined in QtGlobal. There are even defines that help you figure out the compiler used (Q_CC_*) or the target windowing system (Q_WS_*).
But if you're not using Qt and want to go for a generic method, you most likely have to fall back to the Autotools package or CMake.
Determining the Linux distribution is tricky, but not hard. You first have to figure out which distributions you care about, and then make all kinds of distribution-specific file and configuration checks, as in this example, for the ones you've chosen, since you can't really support all of the myriad Linux distros available out there. :-)
As for the Mac side, I'll let the Mac experts answer, but it shouldn't be that hard, since at least the diversity issue is out of the question.
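If a full Autotools or CMake setup is more than you need, the coarse compile-time split between Mac, Linux, and Cygwin can also be made with the compiler's own predefined macros. A minimal sketch, assuming GCC or Clang (which define these macros); the specific distribution still has to be probed at run time as described above.

    #include <stdio.h>

    int main(void)
    {
    #if defined(__APPLE__) && defined(__MACH__)
        puts("Mac OS X");
    #elif defined(__CYGWIN__)
        puts("Cygwin");
    #elif defined(__linux__)
        puts("Linux");  /* which distro? check /etc/issue etc. at run time */
    #else
        puts("unknown platform");
    #endif
        return 0;
    }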

What is the "linux-2.6.3x.x/include/asm-generic/' for?

My os-book says that if you want to add a system call to the Linux kernel, edit the linux-2.x/include/asm-i386/unistd.h.
But the linux kernel's source structure seems to change a lot. In the linux-2.6.34.1 version kernel source tree, I only find a linux-2.6.34.1/include/asm-generic/unistd.h and linux-2.6.34.1/arch/x86/include/asm/unistd.h.
It seems that editing the latter one makes more sense.
My question is: what is /include/asm-generic for? How can asm-related code be generic?
asm-generic contains generic versions of functions that are usually coded in assembly, written in plain C without any inline assembly. It exists to make porting the kernel to new platforms easier and to keep platform-independent common code in one place.
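The mechanism is plain header indirection. An illustrative sketch (not actual kernel source): an architecture that has no hand-tuned assembly for some facility simply forwards its asm header to the generic C fallback.

    /* arch/<arch>/include/asm/checksum.h -- illustrative sketch only */
    #ifndef _ASM_ARCH_CHECKSUM_H
    #define _ASM_ARCH_CHECKSUM_H

    /* No optimized assembly version for this architecture:
     * fall back to the portable C implementation. */
    #include <asm-generic/checksum.h>

    #endif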

Licensing and using the Linux kernel [closed]

I would like to write my own OS, and would like to temporarily jump over the complicated task of writing the kernel, coming back to it later, by using the Linux kernel in the meantime. However, I would like to provide the OS as closed source for now. What license is the Linux kernel under, and is it possible to use it in a release with a closed-source OS?
Edit: I am not interested in closing the source of the Linux kernel, I would still provide that as open sourced. I am wondering if I could use a closed source OS with an open source kernel.
Further edit: By OS, I mean the system that runs on top of the kernel and is used to launch other programs. I certainly did not mean to include the kernel in the closed source statement.
You can of course write whatever closed-source OS over the Linux kernel that you like provided you are compatible with the licensing of components you link against.
Of course, that's likely to include the GNU C library (or some other C library). You may also need some command-line utilities, which will probably be GPL, to do things such as filesystem maintenance and network setup. But provided you leave those as their own standalone programs, it should not be a problem.
Anything that you link into the kernel itself (e.g. custom modules, patches) should be released as open source GPL to comply with the kernel's licence.
The Linux kernel is released under the GPLv2, and you can use it as part of a closed-source OS, but you have to keep the kernel and all modifications to it released under the GPLv2.
Edit: Btw, you may want to use something like OpenSolaris instead. It's much easier to work with, in my opinion (obviously very subjective), and you can keep modifications closed-source if you so choose, so long as you follow the terms of the CDDL.
I think you're going to have to be more specific about what you mean by 'OS'. It's by no means a clear concept. Some would say that the kernel is all of the OS. Others would say that the shell and core utilities such as 'ls' are part of the OS. Others would go as far as to say that standard applications such as Notepad are part of the OS.
IANAL, but I don't believe there's anything to stop you from bundling the Linux kernel with a load of closed-source programs of your own. Take care not to use any GPL library code however (LGPL is OK).
I do question your motives.
It's GPL version 2 and you may certainly not close its source.
You must keep the source of the kernel, and of any works derived from its code, open. However, if you use the kernel and write your own application stack on top of it (the role pretty much ALL the GNU stuff plays), then you don't have to open that up.
The GPL covers "derived" works, so if you're writing new code instead of expanding on existing code, that's fine. In fact, you could even use the GNU toolchain and the Linux kernel, and then have your own system on top of that (or just a DE) that is closed source.
It's when you modify/derive from something that you have to keep it open!
Linux has the GPL (v2) as its licence, which means you have to open source any derivative works.
You may want to use BSD; its license is a lot less restrictive in what you can do with derived works.
If the filesystem you use is to be linked into the kernel itself, and if you plan to distribute it to others, the GPL pretty unambiguously requires that the filesystem be GPL'ed as well.
That being said: one way to legally interface Linux with a GPL-incompatible filesystem is via FUSE (filesystem in userspace). This has been used, for example, to run the GPL-incompatible ZFS filesystem on top of Linux. Running a filesystem in userspace does, however, carry a performance penalty that may be significant.
It is GPL. Short answer -- no.
You can always keep any extensions (modules) and/or applications you write closed source, but the kernel itself will need to remain open source.
There's a not-so-obvious aspect of the GPLv2 that you can exploit while testing the system: you only need to release source code to those who have access to the system. The GPLv2 states that you need to give full access to the source code to anyone with access to the binary/compiled distribution of the program. So, if you are only going to use the software inside of the company that is paying to develop it, you don't need to distribute the source code to the rest of the world, but just them.
Generally I would say that you're allowed to do such a thing, as long as you provide the source for the kernel, but there's one point where I'm unsure:
On a normal Linux system between the (GPL) kernel and a non-GPL compatible application, there is always the GNU libc, which is LGPL and thus allows derived works that are non-free. Now, if you have a non-free libc, that might be considered a derived work, since you are directly calling the kernel, and also using kernel headers.
As many others have said before, you might be better off using a *BSD.
If you're serious in developing a new operating system and want a working kernel to start with I suggest that you look into the FreeBSD kernel. It has a much more relaxed license than Linux, I think you might find it worthwhile.
Just my 2 cents...
I agree with MarkR but nobody has stated the obvious to you. If you are serious, you need to consult a lawyer with expertise in this area.
