Is Rust's support for fork() cross-platform?

I know Rust can handle Windows and *nix filesystems. I saw there is support for forking processes - is this also cross-platform? Would I be able to write a *nix daemon and a Windows service with the same codebase?

There is no such thing as fork() on Windows (it uses CreateProcess instead).
More generally, Unix daemons and Windows services are very different (the latter has to comply with specific Windows interfaces), so you would need a significant abstraction layer if you want to share a code base. As far as I can tell, there is no library providing such an abstraction layer yet.
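As a rough sketch of the practical upshot (this assumes the libc crate for the Unix-only path, and uses rustc --version merely as a stand-in for any command on your PATH), std::process::Command is the portable way to start a child process on both platforms, while a raw fork() only compiles on Unix-like targets:

```rust
use std::process::Command;

// Portable process creation: std::process::Command works on both
// Windows (CreateProcess under the hood) and Unix (fork + exec).
fn spawn_child() -> std::io::Result<()> {
    // "rustc" is just an example command assumed to be on PATH.
    let status = Command::new("rustc").arg("--version").status()?;
    println!("child exited with: {status}");
    Ok(())
}

// A raw fork() is Unix-only; this function does not exist at all in a
// Windows build thanks to the cfg attribute.
#[cfg(unix)]
fn unix_only_fork() {
    // SAFETY: fork() duplicates the whole process; keep the child simple.
    let pid = unsafe { libc::fork() };
    if pid == 0 {
        // Child: exit immediately.
        unsafe { libc::_exit(0) };
    } else if pid > 0 {
        // Parent: reap the child so it does not become a zombie.
        unsafe { libc::waitpid(pid, std::ptr::null_mut(), 0) };
    }
}

fn main() {
    spawn_child().expect("failed to spawn child process");
    #[cfg(unix)]
    unix_only_fork();
}
```

In other words, a shared codebase tends to be structured around spawn/exec-style process creation plus cfg-gated platform modules, rather than around fork() itself.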

Related

What to consider when writing distribution-independent Linux applications?

I wish to write a graphical tool with which one could configure and query information about a Linux system.
In order to achieve some independence from the underlying Linux distribution, I am planning to require that the target system uses systemd, and that the target system has the PackageKit console program installed.
With this, I will have excluded Slackware Linux, since it does not use systemd.
What other considerations should I have in mind when designing such a tool? With an abstraction layer over the package manager, and with the use of systemd, are there any other things that I would have to consider?

Linux's system calls for GUI? [closed]

I'm studying operating systems. I read that Windows has lots of system calls for managing windows and GUI components. I have also read that you can change the GUI manager of your Linux operating system. So does Linux have system calls for GUI management? How does the GUI work in Linux?
I'll take x86 as an example as I am more aware of x86 stuff than ARM stuff. Also, I may get some information wrong as I've been doing some research on this question while answering. Feel free to correct me if I am wrong.
System booting
Some time ago, Linux used to boot with a legacy bootloader (GRUB legacy). The GRUB bootloader would be loaded by the BIOS at 0x7c00 in RAM and would then read the kernel from the hard disk. It followed the multiboot specification, which describes the state the computer needs to be in before jumping to the kernel's entry point. The kernel would then launch a first process (init) of which every other process is a descendant.
Today, most Linux distributions boot with UEFI (with legacy booting still available as an option). A UEFI app is placed on the EFI System Partition (ESP) of a GPT-partitioned disk. This EFI app is launched and then follows the Linux Boot Protocol to start Linux. The init process has also been replaced by systemd, so Linux launches systemd as the first user-space process. Actually, as stated in the manpage for systemd:
systemd is usually not invoked directly by the user, but is installed as the /sbin/init symlink and started during early boot.
The process that is started is thus /sbin/init, but it is a symlink to systemd. The systemd process then reads several configuration files from the hard disk called units. These units are often targets, which are simply units that pull in several other units to read. At first systemd reads default.target, which specifies several other units. Some of these units start processes, among which is the display manager (fancy terminology for the login prompt). Ubuntu currently starts the GNOME Display Manager (GDM) as the first displaying program (the gdm.service unit). This program starts the X server before presenting the user login screen (https://en.wikipedia.org/wiki/X_display_manager):
When the display manager runs on the user's computer, it starts the X server before presenting the user the login screen, optionally repeating when the user logs out.
Once the user is logged in, GDM starts several other binaries responsible for letting you interact with the system (the actual desktop, a binary to gather input for that desktop, etc.). All of these components depend on the X server to work properly.
The DRM
The X server is a user program which makes extensive use of the Direct Rendering Manager (DRM) of the Linux kernel. The DRM is a system call interface which is used to interact with graphics cards. When the DRM detects a graphics card, it exposes a file like /dev/dri/card0, which is a character device (http://manpages.ubuntu.com/manpages/bionic/man7/drm.7.html):
In earlier days, the kernel framework was solely used to provide raw hardware access to privileged user-space processes which implement all the hardware abstraction layers. But more and more tasks were moved into the kernel. All these interfaces are based on ioctl(2) commands on the DRM character device. The libdrm library provides wrappers for these system-calls and many helpers to simplify the API.
When a GPU is detected, the DRM system loads a driver for the detected hardware type. Each connected GPU is then presented to user-space via a character-device that is usually available as /dev/dri/card0 and can be accessed with open(2) and close(2). However, it still depends on the graphics driver which interfaces are available on these devices. If an interface is not available, the syscalls will fail with EINVAL.
The ioctl call allows any number of operations on the /dev/dri/card0 file, since it is a general-purpose call whose request argument (simply an unsigned long) selects the operation. It also takes a variable number of arguments (see https://man7.org/linux/man-pages/man2/ioctl.2.html).
The ioctl call thus allows hardware vendors (like NVIDIA, AMD, etc.) to provide drivers for their cards, with ioctl serving as the general interface between user mode and kernel mode.
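As an illustrative sketch of that interface (using the libc crate; SOME_DRM_REQUEST below is a placeholder, since real request codes such as DRM_IOCTL_VERSION come from the kernel's drm.h headers), opening the card and issuing an ioctl looks roughly like this:

```rust
use std::ffi::CString;

// Placeholder request code for illustration only; real codes are
// generated from the kernel's drm.h ioctl macros.
const SOME_DRM_REQUEST: libc::c_ulong = 0;

fn main() {
    let path = CString::new("/dev/dri/card0").unwrap();

    // The DRM exposes each GPU as an ordinary character device, so the
    // usual open()/close() calls apply.
    let fd = unsafe { libc::open(path.as_ptr(), libc::O_RDWR) };
    if fd < 0 {
        eprintln!("could not open /dev/dri/card0 (no GPU or no permission?)");
        return;
    }

    // ioctl() multiplexes many driver-specific operations over a single
    // syscall: the request code selects the operation, and any further
    // arguments depend on that code. Unsupported requests fail with EINVAL.
    let ret = unsafe { libc::ioctl(fd, SOME_DRM_REQUEST) };
    if ret < 0 {
        eprintln!("ioctl failed: {}", std::io::Error::last_os_error());
    }

    unsafe { libc::close(fd) };
}
```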
OpenGL
Several 3D rendering APIs exist (OpenGL, Direct3D). OpenGL is mostly a set of C headers and a convention: the convention says what a certain call should do, and it is up to the hardware vendor to implement that convention for their own card. Mesa3D is an attempt to create an open source implementation of OpenGL for certain graphics cards. It has worked quite well for integrated Intel HD Graphics (since the documentation is open) and AMD (since they cooperated and offered some insight into the workings of their cards), but not for NVIDIA (the Nouveau driver is mostly incomplete or slow).
When you program some OpenGL, you include the OpenGL headers and link against libraries provided by hardware vendors, which contain the definitions of the functions declared in the headers. These definitions make use of the DRM and cooperate with the X server to show content on the screen.
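As a small illustration of that split (a sketch assuming the libloading crate; libGL.so.1 is the usual library name installed by Mesa or the vendor driver on Linux), an OpenGL function is ultimately just a symbol resolved from that vendor-provided shared library:

```rust
use std::ffi::CStr;
use std::os::raw::{c_char, c_uint};

// GL_VENDOR as defined in the OpenGL headers.
const GL_VENDOR: c_uint = 0x1F00;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // The OpenGL "implementation" is just a shared library supplied by
    // Mesa or the GPU vendor; here we resolve one of its symbols by hand.
    let lib = unsafe { libloading::Library::new("libGL.so.1")? };
    let gl_get_string: libloading::Symbol<unsafe extern "C" fn(c_uint) -> *const c_char> =
        unsafe { lib.get(b"glGetString")? };

    // Without a current GL context this typically returns NULL, which is
    // fine for the purposes of this sketch.
    let vendor = unsafe { gl_get_string(GL_VENDOR) };
    if vendor.is_null() {
        println!("glGetString returned NULL (no current GL context)");
    } else {
        println!("vendor: {:?}", unsafe { CStr::from_ptr(vendor) });
    }
    Ok(())
}
```

A real program would of course create a GL context first (via GLX/EGL, cooperating with the X server or a Wayland compositor) before calling into the library.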
System calls (provided by the kernel) are often buried (e.g. in some cases deliberately undocumented and proprietary) and should not be used directly. Almost everything you see is actually a normal function in a dynamically linked/shared library. This allows the kernel's system calls to be radically changed without breaking everything (because everything depends only on the shared libraries), and it reduces the functionality needed in the kernel itself.
For example, most of the "system calls for managing windows and GUI components" you think Windows has could (internally, inside the relevant DLL) just end up using a single "send_message()" system call (to tell a different process, the GUI, that you want to create a window or change its position or ...).
For Linux it's roughly similar. The kernel's system calls (which actually are documented, for no sane reason - it goes against the spirit of the SYS-V specs and means badly written "Linux executables" aren't compatible with other Unix clones like FreeBSD or Solaris or OSX) exist for things like low-level memory management, raw file IO and sockets; but (like Windows) the kernel's system calls are buried under layers of shared libraries, and those shared libraries (e.g. Xlib, GLib, KWindowSystem, Qt, ...) just use "something" (file IO, pipes, sockets, ...) provided by the kernel to talk to another process (the display server, GUI, ...).
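To make the "it's just file and socket IO" point concrete, here is a small sketch that reaches the display server with nothing more than a Unix domain socket (it assumes a running Wayland compositor; the socket name wayland-0 and the fallback runtime directory are assumptions for illustration):

```rust
use std::os::unix::net::UnixStream;

// Talking to the display server is ordinary socket IO, not a special
// "GUI syscall".
fn main() -> std::io::Result<()> {
    // Wayland compositors usually listen on $XDG_RUNTIME_DIR/wayland-0.
    let runtime_dir =
        std::env::var("XDG_RUNTIME_DIR").unwrap_or_else(|_| "/run/user/1000".to_string());
    let socket_path = format!("{runtime_dir}/wayland-0");

    // connect() is the only "system-level" operation involved here.
    let _stream = UnixStream::connect(&socket_path)?;
    println!("connected to the compositor at {socket_path}");

    // A real client would now speak the Wayland wire protocol over this
    // stream, usually through libwayland-client or a binding crate
    // rather than by hand.
    Ok(())
}
```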
Linux and Windows fall under separate categories; Linux is just a kernel, i.e. the piece under the hood that gives us the basic functionality we expect in order to run programs: threads, memory and process management, and so on. Windows is a full operating system, including the user-facing components and numerous system libraries. A more apt comparison would be between a specific Linux distro and Windows.
On that note, distros, as independent operating systems, obviously can have different implementations of any OS component. Some distros, like Arch, don't come with a GUI by default at all. That said, essentially the entire Linux ecosystem uses Xorg and/or Wayland; I would recommend looking into the implementation details of those two.
A Linux GUI has quite a few differences compared to a Windows GUI. For example, the GUI is not considered part of the operating system, but rather an external addition to it; that means no syscalls (it is not embedded in the OS at all). After all, as the previous answer says, Linux is a kernel, which means it provides only the basics (execution of programs, memory/thread management, process management, but not much more). Whatever comes next (a GUI, for example) is an added feature installed through packages.
This allows, for instance, installing a GUI on top of a minimal installation of any Linux distro (CentOS, say), and that GUI can be the one you want (GNOME, KDE...).

Porting a Linux-based application to the uC/OS-II platform

I am planning to run the Hiawatha web server on a non-Linux platform, the uC/OS-II RTOS.
I need help porting the Linux-dependent APIs to the RTOS platform.
Kindly let me know if there are already-built libraries that I can use to port the Linux code to the RTOS.
Thanks in advance
Any code that uses more than just the standard C library will require some porting effort - the extent to which non-standard and OS-specific libraries and calls are used will determine the effort required, or even the feasibility of such a port.
Most Linux code of any complexity will require a POSIX API, and networking code will probably use BSD sockets. Multi-threaded code would likely use pthreads. uC/OS-II has neither of these; it deals only with scheduling, timing, synchronisation and inter-process communication. It is a scheduling kernel, not a full OS in the same sense as Linux - it does not even have a file system, which most Linux code requires. Of course, adding additional libraries and extensions may provide some or all of what you need.
Moreover, uC/OS-II's simple one-thread-per-priority-level scheduler would make typical Linux multi-threaded code hard to schedule in the manner intended. Most RTOSes (including uC/OS-III) support round-robin/time-sliced scheduling of tasks at the same priority level, but uC/OS-II does not, possibly making it unsuitable for this task.
Something more sophisticated than uC/OS-II may be in order, or perhaps code more suited to uC/OS-II. eCos, for example, is a far more complete RTOS for embedded systems; it is open-source and includes a POSIX API, file-system support and a socket API. It would be far easier to port Linux code to that. Equally, there are many lightweight embedded web server examples that are probably better suited to uC/OS-II and other simple RTOSes, or even no OS at all. LwIP, for example, is a TCP/IP stack for small embedded systems for which uC/OS-II ports exist and for which there are web server examples.
The point is that Linux and uC/OS-II are not comparable; one requires less than 10 KB of code, while the other has a minimal footprint of about 4 MB! To get Linux code to run on such a system will require you to add a lot of additional code to provide the missing services, and it may not be feasible on your target platform.
[Edit: 08 July 2012]
Have you considered using Micrium's own TCP/IP stack and μC/HTTPs web-server add-on? It is likely to be better integrated with uC/OS-II and provide better performance than non-RTOS-specific third-party code.

Porting a Windows application to Linux

I have an application which I need to port to Linux. I was thinking of using Free Pascal; the problem is that the application uses Windows APIs to perform tasks such as serial port communication. Is there an MSDN equivalent for Linux users, or a book covering how Linux works internally and what APIs there are?
I am very confused.
Well, it's sad to say, but if your code is very Windows-dependent (not VCL-dependent!) then it will probably be faster to write the application from the beginning rather than port it.
But if it's only a serial port matter, then give the multiplatform SynaSer library a try; find it here: http://synapse.ararat.cz.
Hope this helps :)
Robert Love has a book, Linux System Programming, that covers this area, and Love's books are generally good, so it is worth looking at.
It's not entirely clear from your question, but if your concern is that there are specific calls to hardware-controlling functions in your Windows application that make it difficult to port, I would suggest that is a misplaced fear. Both Windows and Linux operate on the principle that the application-level programmer should be kept away from the hardware, and that all of that should be handled by the operating system kernel and only be accessible to applications via system calls. As the kernels of different operating systems face similar demands from users/applications, they tend to have system calls that do the same sorts of things. It is necessary to match the system calls to one another, but I can see no reason why that should be impossible.
What may be more troublesome is that your Windows application may rely heavily on the Windows executive's windowing code/API. Again, finding analogues for your code is not impossible but is likely to be a bit more complex, e.g. in Linux this stuff is generally not handled in the kernel at all (unlike Windows).
But then again, your code may be written against a portable toolkit/library like Qt which would make things a lot easier.
Good luck in any case.
If the program contains GUI code you must use Linux libraries like GTK/Xlib in order to create windows, forms, buttons, etc.
Windows-specific functions (like EnterCriticalSection, WaitForSingleObject or _beginthreadex) must be replaced with equivalent Linux API functions (a nice tutorial can be found at
www.ibm.com/developerworks/systems/library/es-MigratingWin32toLinux.html), or you can use libraries such as w2lpl or Wine.
I've found a useful library for this kind of problem at http://www.adontec.com/windows-to-linux-port-library.htm
I've had great experiences just using WINE. (https://www.winehq.org/)
You don't really port your app at all. You just make sure it doesn't violate some of the basic constraints of WINE and run it as is. WINE (though it says it is not) is an emulator of the Windows APIs and will just do the translation for you. It's pretty complete in its coverage of the APIs.

node.js on Windows. When and why?

Note: There is a similar question called 'installing nodejs on windows machine'.
And there are various answers explaining how, by installing Cygwin, you can get it working there.
Now, I don't want to install Cygwin.
I just wish I could run nodejs on a Windows box.
I want a "nodejs.exe" to kick off.
Can somebody explain to me
1) Why has nodejs not been ported to Windows - what are the technical reasons for not providing an exe?
2) Are there any plans to have nodejs on Windows?
I really would like to use it, but I can't accept that I have to accept Cygwin.
That's just not right.
Update:
For optimum Node-on-Windows development I recommend you use the Windows Azure PowerShell for Node.js. It's a PowerShell optimised for using node, npm and the Azure APIs. (The Azure APIs are optional; I would still use this PowerShell if I didn't use Azure.)
When: v0.6
Currently you can get a binary file that (kind of) works under Windows. Go ask on the node.js IRC channel. They'll hook you up.
Basically, if you read up on the node.js road plans, you will find that proper Windows support is planned for v0.6; we are currently on v0.4.7 and the v0.5.x beta is in full swing.
I won't give an ETA but it's soon.
IRC can be found at the Community links
PDF showing v0.6 road plan
July 2011 update:
#nodejs v0.5.1 is the first to ship with an official Windows executable. We're hoping to get some good feedback.
Microsoft has officially gotten involved with Joyent in making node.js run natively on Windows.
Running Node under Windows presents several technical problems, mostly related to how Windows' internal design differs from that of Linux and the "change in mindset" required to port Unix applications to Windows.
Background
Linux was designed to be a replacement for Unix, a well-known multitasking operating system, so from day one it has been a multi-user/multi-process, server-oriented operating system. The idea of multiple processes sharing system resources is key to its internal design.
Windows was initially designed as a single-user/single-process desktop operating system and so did not support shared access to system resources. In 1993 Microsoft released a newly redesigned version of Windows, called Windows NT, to better support the shared-resource, multitasking model required by servers; but due to its existing installed base of users, Microsoft required NT to also support all the features of its single-user/single-tasking forebear.
Windows 7 is a direct descendant of NT, and Microsoft's need to support legacy users continues to this day (and, in the opinion of many, has severely muddled Windows' internal design).
Further, Microsoft hired a systems architect named Dave Cutler to design NT. Dave is best known for designing a competitor to Unix called VMS, the internal design of which differs significantly from that of Linux, which has caused a lot of problems for developers interested in porting their Unix programs to Windows.
The clearest example of this "impedance mismatch" between the internal designs of Windows and Linux is how they handle event-driven, non-blocking input/output (I/O), on which Node relies to perform its (apparent) multitasking magic in a single thread of operation.
Linux supports system-level interfaces called select() and epoll (epoll_create()/epoll_ctl()/epoll_wait()) which can be used to asynchronously inform a process of changes within the operating system that affect it. Node relies heavily on these interfaces, but Windows supports neither, relying instead on its own mechanisms (overlapped I/O and I/O completion ports) to handle event-driven I/O.
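For a feel of the readiness-based model Node builds on, here is a minimal Linux-only sketch using the libc crate's epoll bindings; it simply waits up to five seconds for standard input to become readable:

```rust
fn main() {
    unsafe {
        // Create an epoll instance; Node (via its event-loop library)
        // keeps something like this at the heart of its event loop.
        let epfd = libc::epoll_create1(0);
        assert!(epfd >= 0, "epoll_create1 failed");

        // Register interest in stdin (fd 0) becoming readable.
        let mut ev = libc::epoll_event {
            events: libc::EPOLLIN as u32,
            u64: 0, // opaque user data; here just the fd number
        };
        libc::epoll_ctl(epfd, libc::EPOLL_CTL_ADD, 0, &mut ev);

        // Block until something is ready (or 5000 ms elapse), then react.
        let mut ready = [libc::epoll_event { events: 0, u64: 0 }; 8];
        let n = libc::epoll_wait(epfd, ready.as_mut_ptr(), ready.len() as i32, 5000);
        println!("{n} descriptor(s) ready");

        libc::close(epfd);
    }
}
```

Windows has no direct analogue of this interface, which is why Node's event loop needs a separate backend built on I/O completion ports there.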

Resources