node.js on Windows. When and why? - node.js

Note: There is a similar question called 'installing nodejs on windows machine',
with various answers explaining how you can get it working there by installing Cygwin.
However, I don't want to install Cygwin.
I just wish I could run Node.js on a Windows box.
I want a "nodejs.exe" to kick off.
Can somebody explain to me:
1) Why has Node.js not been ported to Windows - what are the technical reasons for not providing an exe?
2) Are there any plans to bring Node.js to Windows?
I really would like to use it, but I can't accept being forced to install Cygwin.
That's just not right.

Update:
For optimal Node-on-Windows development I recommend you use the Windows Azure PowerShell for Node.js. It's a PowerShell environment optimised for using node, npm and the Azure APIs. (The Azure APIs are optional; I would still use this PowerShell even if I didn't use Azure.)
When: v0.6
Currently you can get a binary file that (kind of) works under Windows. Go ask on the node.js IRC channel. They'll hook you up.
Basically, if you read up on the Node.js roadmap you will find that proper Windows support is planned for v0.6. We are currently on v0.4.7 and the v0.5.x beta is in full swing.
I won't give an ETA, but it's soon.
The IRC channel can be found via the Community links.
PDF showing the v0.6 roadmap
July 2011 update:
#nodejs v0.5.1 is the first release to ship with an official Windows executable. We're hoping to get some good feedback.
Microsoft has officially gotten involved with Joyent in making Node.js run natively on Windows.

Running Node under Windows presents several technical problems, mostly related to how Windows' internal design differs from that of Linux and the "change in mindset" required to port Unix applications to Windows.
Background
Linux was designed to be a replacement for Unix, a well-known multitasking operating system, so from day one it has been a multi-user/multi-process, server-oriented operating system. The idea of multiple processes sharing system resources is key to its internal design.
Windows was initially designed as a single-user/single-process desktop operating system and so did not support shared access to system resources. In 1993 Microsoft released a newly redesigned version of Windows--called Windows NT--to better support the shared-resource, multitasking model required by servers, but due to its existing installed base of users, Microsoft required NT to also support all the features of its single-user/single-tasking forebear.
Windows 7 is a direct descendant of NT and Microsoft's need to support legacy users continues to this day (and in the opinion of many, has severely muddled Windows' internal design.)
Further, Microsoft hired a systems architect named Dave Cutler to design NT. Dave is best known for designing a competitor to Unix called VMS, the internal design of which differs significantly from that of Linux, which has caused a lot of problems for developers interested in porting their Unix programs to Windows.
The clearest example of this "impedance mismatch" between the internal design of Windows and Linux is how they handle event-driven, non-blocking input/output (io) on which Node relies to perform its (apparent) multitasking magic in a single thread of operation.
Linux supports system-level interfaces such as select() and epoll(), which a process can use to be notified asynchronously of I/O events that affect it. Node relies heavily on these interfaces, but Windows has no direct equivalent, relying instead (mostly) on mechanisms such as "Change Notifications" and I/O completion ports to handle event-driven I/O.
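To make that concrete, here is a minimal sketch (not Node's actual source) of the kind of epoll-based event loop Node's I/O layer sits on top of; handle_io is a hypothetical application callback and error handling is omitted:

    /* Minimal Linux epoll event loop: one thread waits for readiness
       events on many descriptors and reacts to whichever one fires. */
    #include <sys/epoll.h>
    #include <unistd.h>

    void handle_io(int fd);                 /* hypothetical application callback */

    void event_loop(int listen_fd)
    {
        int epfd = epoll_create1(0);        /* create the event queue */

        struct epoll_event ev = { .events = EPOLLIN, .data.fd = listen_fd };
        epoll_ctl(epfd, EPOLL_CTL_ADD, listen_fd, &ev);

        for (;;) {
            struct epoll_event ready[64];
            int n = epoll_wait(epfd, ready, 64, -1);   /* block until something happens */
            for (int i = 0; i < n; i++)
                handle_io(ready[i].data.fd);           /* non-blocking work per event */
        }
    }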

Related

Can an application built for one Linux OS based environment run in another Linux OS environment?

I don't have any knowledge of the Linux/Unix environment, so to improve my understanding I am putting this question to developers and Unix/Linux technical people.
By applications I mean the IDEs used by developers, especially:
Visual Studio
IntelliJ IDEA Community Edition
PyCharm Community Edition
Eclipse
And other peripheral apps used by developers, gamers and network engineers.
To some experienced Linux users my question might seem baseless, but please consider me a beginner with Linux. Thank you in advance.
The term "application" is a very vague, fuzzy one these days. It does not describe some artifact with a certain internal structure and way how to invoke it but merely the general fact that it is something that can be "used".
Different types of applications are in wide spread use on today's systems, that is why I asked for a clarification of your usage of the term "application" in the comments. The examples you then gave are diverse though they appear comparable at first sight.
A correct and general answer to your question would be:
One application can be used in different Linux based environments if that environment provides the necessary preconditions to do so.
So the core of your question shifts towards whether different flavors of Linux based systems offer similar execution environments. Actually it makes sense to extend that question to operating systems in general; from an application's point of view the difference between today's alternatives is relatively small.
A more detailed answer has to distinguish between the different types of applications, or better, between their different preconditions. Those can be derived from the architectural platform the application is built on. The following is a bit simplified, but should express what the situation actually is:
Take for example the IntelliJ IDEA and the Eclipse IDE. Both are Java-based IDEs. Java can be seen as an abstraction layer that offers a very similar execution environment on different systems. Therefore both IDEs can typically be used on all systems offering such a "Java runtime environment", though differences in behavior will exist where necessary. Those differences are either programmed into the IDEs or originate from the fact that certain components (for example file selection dialogs) are not actually part of the application but of the chosen platform; naturally they may look and behave differently on different platforms.
There is however another aspect that is important here, especially when regarding Linux based environments: the diversity of what is today referred to as "Linux". Unlike pure operating systems like MS-Windows or Apple's MacOSX, which both follow a centralized and restrictively controlled approach, we find differences between the various Linux flavors that extend far beyond things like component versions and their availability. Freedom of choice allows for flexibility, but it also makes for a slightly more complex reality. Here that means different Linux flavors do indeed offer different environments:
different hardware architectures: unlike MS-Windows and MacOSX, the system can be used not only on Intel x86 based hardware, but on a variety of maybe 120 completely different hardware architectures.
the graphical user interface (GUI or desktop environment, i.e. windows, panels, buttons, ...) is not an integral part of the operating system in the Linux (Unix) world, but a separate add-on. That means you can choose.
the number of base components available in installations of different Linux flavors differs vastly. For example there are "full fledged, fat desktop flavors" like openSUSE, RedHat or Ubuntu, but there are also minimalistic variants like Raspbian, Damn Small Linux, Puppy or Scientific Linux, distributions specialized in certain tasks like firewalling, or even variants tailored for embedded devices like washing machines or moon rockets. Obviously they offer different environments for applications. They only share the same operating system core, the "kernel", which is all the name "Linux" actually refers to.
...
However, given all that diversity with its positive and negative aspects, the Linux community has always been extremely clever and active and has crafted solutions to handle this specific situation. That is why all modern desktop-targeting distributions come with a powerful software management system these days. It tracks dependencies between software packages and makes sure those dependencies are met or resolved when you attempt to install some package, for example an additional IDE as in your example. So the system would take care of installing a working Java environment if you attempted to install one of the two Java-based IDEs mentioned above. That mechanism only works, however, if the package to be installed has been correctly prepared for the distribution. This is where the usage of Linux based systems differs dramatically from other operating systems: the introduction of repositories, how to search for, select and install available software packages for a system, and so on -- all a bit too wide a field to be covered here.
Basically: if the producer of a package does their homework (or someone else does it for them) and correctly "packages" the product, then the dependencies are correctly resolved. If however the producer only dumps a raw bunch of files, maybe as a ZIP archive, and insists on a "wild" installation as typically done for example on MS-Windows based systems -- writing files into the local file system by handing administrative rights to some bundled "installer" script that can do whatever it wants (including breaking, ruining or corrupting the system it is executed on) -- then the system's software management is bypassed and the outcome is often "broken".
However, no sane Linux user or administrator would follow such a path and install such software. That would show a complete lack of understanding of how one's own system actually works, and would mean abandoning all the advantages and comfort it offers.
To make a complex story simple:
An "application" usually can be used in different Linux based environments if that application is packaged in a suitable way and the requirements like runtime environment posed by the application are offered by that system.
I hope that shed some light on a non trivial situation ;-)
Have fun!

How are applications managed in Linux?

I am looking for design suggestions/documents etc. which contains specifics of system applications management in Linux (Ubuntu, Debian etc.)
Can you please point to a source of information or suggest a design?
I'm not sure I understand what you mean by system applications management (certainly it can mean several different things).
In practice, Linux distributions have some package management system to deal with that issue. init or systemd (etc....) is in charge of starting/stopping/managing daemons and servers. Its configuration is related to packaging.
Read also Ubuntu Packaging Guide and How To Package For Debian and Debian new Maintainer guide etc...
If you are coding some service application, read Advanced Linux Programming and the man pages for daemon(3) & syslog(3).
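As a rough illustration (a minimal skeleton, not a production-ready service), this is what a classic Unix daemon using daemon(3) and syslog(3) looks like; the name "mydaemon" is just a placeholder:

    #include <stdlib.h>
    #include <unistd.h>
    #include <syslog.h>

    int main(void)
    {
        /* detach from the terminal and keep running in the background */
        if (daemon(0, 0) < 0)
            return EXIT_FAILURE;

        openlog("mydaemon", LOG_PID, LOG_DAEMON);
        syslog(LOG_INFO, "service started");

        for (;;) {
            /* ... the actual work of the service goes here ... */
            sleep(60);
            syslog(LOG_DEBUG, "heartbeat");
        }
    }

A real service would additionally handle signals for clean shutdown, and would normally ship with an init script or systemd unit so the packaging and init system mentioned above can manage it.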
Also, study the source code of relevant system applications (similar to the one you are dreaming of), since Linux is generally (mostly) free software.

convert exe application into sis

I am working on an application for Windows, but I want to port it to Symbian Belle OS so it will work on multiple operating systems. Is there any way, or any software, to edit and convert an exe app into a sis or sisx file?
Short answer - no, there is no easy/automatic way of doing that (in the way Wine does for Linux).
Long answer - these are two different file formats which require different loading procedures (OS-specific ones). Also, Symbian will normally run on hardware architectures different from the ones Windows runs on (if you meant a desktop Windows application in your question). Not to mention that the underlying APIs are completely different.
Having some experience on both platforms, I would compare the porting effort for such an application to two different projects being developed separately. If it is going to be maintained long-term, I would suggest writing an "abstraction layer" and coding the application logic to be common to both platforms, isolating only the platform-dependent details.

Porting windows application to linux

I have an application which I need to port to Linux. I was thinking of using Free Pascal; the problem is that the application uses Windows APIs to perform tasks such as serial port communication, etc. Is there an MSDN for Linux users, or a book covering how Linux works internally and what APIs it provides?
I am very confused.
Well, it's sad to say, but if your code is very Windows-dependent (not VCL-dependent!) then it will probably be faster to rewrite the application from scratch rather than port it.
But if it's only a serial port matter, then give the multiplatform SynaSer library a try; you can find it here: http://synapse.ararat.cz.
Hope this helps :)
Robert Love has a book on Linux Systems Programming - that will cover this area and Love's books are generally good, so it is worth looking at.
It's not entirely clear from your question, but if your concern is that there are specific calls to hardware-controlling functions in your Windows application that make it difficult to port, I would suggest that is a misplaced fear. Both Windows and Linux operate on the principle that the application-level programmer should be kept away from the hardware: all of that is handled by the operating system kernel and is only accessible to applications via system calls. As the kernels of different operating systems face similar demands from users and applications, they tend to have system calls that do the same sorts of things. It is necessary to match the system calls to one another, but I can see no reason why that should be impossible.
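To illustrate with the serial-port case from the question, here is a minimal, hedged sketch of Linux serial access through ordinary system calls and termios (the device path and line settings are assumptions, and error handling is trimmed):

    #include <fcntl.h>
    #include <termios.h>
    #include <unistd.h>

    /* open a serial device, e.g. "/dev/ttyS0", as raw 8N1 at 9600 baud */
    int open_serial(const char *dev)
    {
        int fd = open(dev, O_RDWR | O_NOCTTY);
        if (fd < 0)
            return -1;

        struct termios tio;
        tcgetattr(fd, &tio);
        cfmakeraw(&tio);                 /* raw mode: no echo, no line editing */
        cfsetispeed(&tio, B9600);
        cfsetospeed(&tio, B9600);
        tcsetattr(fd, TCSANOW, &tio);

        return fd;                       /* then just read()/write() on the fd */
    }

This plays roughly the role a CreateFile/SetCommState pair plays on the Windows side.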
What may be more troublesome is that your Windows application may rely heavily on the Windows executive's windowing code/API. Again, finding analogues for your code is not impossible but is likely to be a bit more complex, e.g. in Linux this is generally not handled in the kernel at all (unlike Windows).
But then again, your code may be written against a portable toolkit/library like Qt which would make things a lot easier.
Good luck in any case.
If the program contains GUI code you must use Linux libraries like GTK+ or Xlib in order to create windows, forms, buttons, etc.
Windows-specific functions (like EnterCriticalSection, WaitForSingleObject or _beginthreadex) must be replaced with equivalent Linux API functions (a nice tutorial can be found here:
"www.ibm.com/developerworks/systems/library/es-MigratingWin32toLinux.html"), or you can use libraries such as w2lpl or Wine.
A useful library for this kind of problem can be found at http://www.adontec.com/windows-to-linux-port-library.htm
I've had great experiences just using WINE. (https://www.winehq.org/)
You don't really port your app at all. You just make sure it doesn't violate some of the basic constraints of WINE and run it as is. WINE (though it says it is not) is effectively an emulator of the Windows APIs and will do the translation for you. It's pretty complete in its coverage of the APIs.

Windows CE vs Embedded Linux [closed]

Now I'm sure we're all well aware of the relative merits of Linux vs Windows on the desktop. However, I've heard much less about the world of embedded development. I'm mainly interested in solutions for industry, and am therefore not interested in the iPhone or Android, but rather in these two OSes.
What are the relative trade-offs between the two platforms in the embedded world? If you were considering building a box for a specific project with custom hardware, a partially customised OS and a custom app then which would you choose and why?
I would assume that Windows CE wins on tools and Linux wins on both cost and possibly performance. However, this is just utter speculation. Does anyone have any facts or experience with the two?
I worked for several years at a company that provided both CE and Linux for all of their hardware, so I'm fairly familiar with both sides of this equation.
Tools: the Windows CE tools are certainly better than those provided for Linux, though the Linux tools are improving.
Performance: Windows CE is real-time; Linux is not. The Linux kernel is not designed for determinism at all. There are extensions you can add to get sort-of real-time behavior, but CE beats it.
Cost: This is an area of great misunderstanding. My general experience is that CE is lower cost out of the box ($1k for Platform Builder and as low as $3 per device for a shipping runtime). "What?" you ask? "Linux is free." Well, not so much, especially in the embedded arena. Yes, there are free distributions like Debian. But there are plenty of pieces that you might need that aren't in that free category: UI frameworks like Qt, Java runtimes and media codecs, just as a start. Also, most Linux distributions with a commercially-backed support system (e.g. MontaVista) are far from free.
Source Availability: Linux proponents may like to say that CE is a bad choice due to lack of source code. All I can say is that in over a decade of working with CE, half of which was spent doing custom kernel and driver work for custom boards, I've only once needed source that didn't ship with CE (they ship the vast majority of it). I like having source too, but Microsoft provides support, so in the rare case you think you need that source, you can get them to fix the problem (the one time we needed source, Microsoft provided a fix, and for free - which is their model under CE).
Does this mean that CE wins every time? No, I wouldn't suggest that at all. If you are a Linux shop and you have lots of Linux experience and code assets, you'd be foolish to run out and go CE. However, if you're coming into it from scratch, CE usually has a lower TCO. Developers with Win32/C# experience are more prevalent and consequently less expensive. You also get a lot more "in the box" with CE than with most other distributions, meaning faster time to market if you don't already have these things done in-house.
I'll speak for the Linux side, at least for the category of software I'm familiar with (which is RF data collection equipment). Or industrial apps vs. consumer apps.
Windows CE (and its associated tools) is, in my fairly recent experience, strongly biased towards creating a "Windows experience" on a small screen. The user input model emphasizes mouse-like actions. Logons, application selection, etc. all try to be as similar to standard Windows as possible.
If a user is driving a lift truck, or filling a picking cart, or moving material from one place to another, there's a problem.
And it's a moving target - particularly on the .NET side. The Compact .NET runtime is seriously handicapped, important libraries (like networking, data handling, and UI) are incomplete, and new versions too often deprecate the previous one. CE seems to be the stepchild in the Windows family (possibly because there's not a lot of active competition selling to the hardware integrators).
A nice stable rows-and-columns Linux console is a pretty handy context for many (in my experience most) high-use apps on a dinky screen.
Not much good for games on your cell-phone or Zune, though.
NOTE:
I think ctacke probably speaks accurately for the hardware integrator's side. I'm more aligned with the players further down the pipe - software integrators and users.
Choice is often made largely on perception and culture, rather than concrete data. And, making a choice based on concrete data is difficult when you consider the complexity of a modern OS, all the issues associated with porting it to custom hardware, and unknown future requirements. Even from an application perspective, things change over the life of a project. Requirements come and go. You find yourself doing things you never thought you would, especially if they are possible. The ubiquitous USB and network ports open a lot of possibilities -- for example adding Cell modem support or printer support. Flash based storage makes in-field software updates the standard mode of operation. And in the end, each solution has its strengths and weaknesses -- there is no magic bullet that is the best in all cases.
When considering Embedded Linux development, I often use the iceberg analogy; what you see going into a project is the part above the water. These are the pieces your application interacts with, drivers you need to customize, the part you understand. The other 90% is under water, and herein lies a great deal of variability. Quality issues with drivers or not being able to find a driver for something you may want to support in the future can easily swamp known parts of the project. There are very few people who have a lot of experience with both WinCE and Linux solutions, hence the tendency to go with what is comfortable (or what managers are comfortable with), or what we have experience with. Below are thoughts on a number of aspects to consider:
SYSTEM SOFTWARE DEVELOPMENT
Questions in this realm include CPU support, driver quality, in-field software updates, filesystem support, driver availability, etc. One of the changes that has happened in the past two years is that CPU vendors are now porting Linux to their new chips as the first OS. Before, the OS porting was typically done by Linux software companies such as MontaVista, or by community efforts. As a result, the Linux kernel now supports most mainstream embedded CPUs with few additional patches. This is radically different from the situation 5 years ago. Because many people are using the same source code, issues get fixed, and the fixes are often contributed back to the mainstream source. With WinCE, the BSP/driver support tends to be more of a reference implementation; OEMs/users then take it, fix any issues, and that is where the fixes tend to stay.
From a system perspective, it is very important to consider flexibility for future needs. Just because it is not a requirement now does not mean it will not be a requirement in the future. Obtaining driver support for a peripheral may be nearly impossible, or be too large an effort to make it practical.
Most people give very little thought to the build system, or never look much beyond the thought that "if there is a nice GUI wrapped around the tool, it must be easy". OpenEmbedded is a very popular way to build embedded Linux products; it has recently been endorsed as the technology base of MontaVista's Linux 6 product, and is generally considered "hard to use" by new users. While the WinCE build tools look simpler on the surface (the 10% above water), you still have the problem of what happens when you need to customize something or implement complex features such as software updates. To build a production system with production-grade features, you still need someone on your team who understands the OS and can work at the detail level of both the operating system and the build system. With either WinCE or Embedded Linux, this generally means companies either need to have experienced developers in house, or hire experts to do portions of the system software development. System software development is not the same as application development, and is generally not something you want to take on with no experience unless you have a lot of time. It is quite common for companies to hire expert help for the first couple of projects, and then do follow-on projects in-house. Another feature to consider is parallel build support. With quad-core workstations becoming the standard, does it matter that a full build can be done in 1.2 hours versus 8? How flexible is the build system at pulling and building source code from various sources such as diverse revision control systems?
Embedded processors are becoming increasingly complex. It is no longer good enough to just have the CPU running. If you consider the OMAP3 CPU family from TI, then you have to ask the following questions: are there libraries available for the 3D acceleration engine, and can I even get them without committing to millions of units per year? Is there support for the DSP bridge? What is the cost of all this? On a recent project I was involved in, a basic WinCE BSP for the Atmel AT91SAM9260 cost $7000. In terms of developer time this is not much, but you also have to consider the ongoing costs of maintenance, upgrading to new versions of the operating system, etc.
APPLICATION DEVELOPMENT
Both Embedded Linux and WinCE support a range of application libraries and programming languages. C and C++ are well supported. Most business-type applications are moving to C# in the WinCE world. Linux has Mono, which provides extensive support for .NET technologies and runs very well on embedded Linux systems. There are numerous Java development environments available for Embedded Linux. One area where you do run into differences is graphics libraries. Generally the Microsoft graphical APIs are not well supported on Linux, so if you have a large application team of die-hard Windows GUI programmers, then perhaps WinCE makes sense. However, there are many GUI toolkits that run on both Windows PCs and Embedded Linux devices. Some examples include GTK+, Qt, wxWidgets, etc. The Gimp is an example of a GTK+ application that runs on Windows, and there are many others. There are C# bindings to GTK+ and Qt. Another feature that seems to be coming on strong in the WinCE space is the Windows Communication Foundation (WCF). But again, there are projects to bring WCF to Mono, depending on what portions you need. Embedded Linux support for scripting languages like Python is very good, and Python runs very well on 200MHz ARM processors.
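As a small illustration of such a cross-platform toolkit, here is a minimal GTK+ 3 "hello world" in C; assuming a GTK development environment is present, the same source builds on desktop Linux, embedded Linux and Windows:

    #include <gtk/gtk.h>

    int main(int argc, char *argv[])
    {
        gtk_init(&argc, &argv);

        GtkWidget *win = gtk_window_new(GTK_WINDOW_TOPLEVEL);
        gtk_window_set_title(GTK_WINDOW(win), "Embedded UI demo");
        gtk_container_add(GTK_CONTAINER(win), gtk_label_new("Hello from GTK+"));
        g_signal_connect(win, "destroy", G_CALLBACK(gtk_main_quit), NULL);

        gtk_widget_show_all(win);
        gtk_main();                      /* hand control to the toolkit's event loop */
        return 0;
    }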
There is often the perception that WinCE is realtime, and Linux is not. Linux realtime support is decent in the stock kernels with the PREEMPT option, and real-time support is excellent with the addition of a relatively small real-time patch. You can easily attain sub millisecond timing with Linux. This is something that has changed in the past couple years with the merging of real-time functionality into the stock kernel.
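As a sketch of what that looks like in practice (the priority and period are arbitrary choices, and raising the priority requires root or CAP_SYS_NICE), here is a periodic task using SCHED_FIFO and an absolute clock_nanosleep on a PREEMPT(-RT) kernel:

    #include <time.h>
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        struct sched_param sp = { .sched_priority = 80 };
        if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0)
            perror("sched_setscheduler");        /* falls back to normal scheduling */

        struct timespec next;
        clock_gettime(CLOCK_MONOTONIC, &next);

        for (int i = 0; i < 1000; i++) {
            next.tv_nsec += 500 * 1000;          /* 500 microsecond period */
            if (next.tv_nsec >= 1000000000L) {
                next.tv_nsec -= 1000000000L;
                next.tv_sec++;
            }
            /* sleep until the absolute deadline, immune to drift */
            clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
            /* ... time-critical work goes here ... */
        }
        return 0;
    }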
DEVELOPMENT FLOW
In a productive development environment, most advanced embedded applications are developed and debugged on a PC, not the target hardware. Even in setups where remote debugging on a target system works well, debugging an application on a workstation works better. So the fact that one solution has nice on-target debugging where the other does not is not really relevant. For data-centric systems, it is common to have simulation modes where the application can be tested without a connection to real I/O. With both Linux and WinCE, application programming for an embedded device is similar to programming for a PC. Embedded Linux takes this a step further: because embedded Linux technology is the same as desktop and server Linux technology, almost everything developed for the desktop/server (including system software) is available for embedded use for free. This means very complete driver support (see the USB cell modem and printer examples above), robust file system support, memory management, etc. The breadth of options for Linux is astounding, but some may consider this a negative point and would prefer a more integrated solution like Windows CE where everything comes from one place. There is a loss of flexibility, but in some cases the tradeoff might be worth it. The sheer number of packages that can be built for Embedded Linux systems using OpenEmbedded is one example of this breadth.
GUI TRENDS
It is important to consider trends for embedded devices with small displays being driven by Cell Phones (iPhone, Palm Pre, etc). Standard GUI widgets that are common in desktop systems (dialog boxes, check boxes, pull down lists, etc) do not cut it for modern embedded systems. So, it will be important to consider support for 3D effects, and widget libraries designed to be used by touch screen devices. The Clutter library is an example of this type of support.
REMOTE SUPPORT
Going back to the issue of debugging tools, most people stop at the scenario where the device is sitting next to a workstation in the lab. But what about when you need to troubleshoot a device that is being beta-tested half-way around the world? That is where a command-line debugger like gdb is an advantage, not a disadvantage. And how do you connect to the device if you don't have support for cell modems in New Zealand, or an efficient connection mechanism like ssh for shell access and transferring files?
SUMMARY
Selecting any advanced technology is not a simple task, and is fairly difficult to do even with experience. So it is important to be asking the right questions, and looking at the decision from many angles. Hopefully this article can help in that.
I have worked on projects that involved customizing the software of an OEM board, and I wouldn't say that Linux is cheaper. When buying a board you also need to buy the SDK. You still need to pay even for the Linux version. Some manufacturers offer both Windows CE and Linux solutions for their boards and there isn't a price difference. For Windows CE you also need Platform Builder and have to pay for the licenses, but it is easier to go without support.
Another important issue is whether you are building a user interface or a headless device. For devices that require an LCD screen and human interaction it is much easier to go with Windows CE. If on the other hand you are building a headless device, Linux may be a sounder option - especially if network protocols are involved. I believe that Linux implementations are more reliable and easier to tweak.
With Linux you are never on your own and you are never dependent on one single entity to grant permissions. There are many support options and you have the freedom to choose your support option for any part of the system from many competing sources.
With Windows CE you must adhere to the license and restrictions set forth in the complex license agreements that must be agreed to. Get a lawyer. With Windows CE you have only one proprietary source for OS support, and you will proceed only as they see fit to support and provide what you need. You may not agree with their position, but you will have no recourse but to bend to what they prescribe. The costs of incremental components, modules, development kits, licensing, and support tend to pile up with proprietary platforms. In the longer term, what happens when the vendor no longer desires to support the platform and you do not have the rights to support and distribute it yourself? What happens when the vendor moves to newer technology and wants you to move along with them even though you may not be ready to make the move? $$$
Our experience with Windows solutions in general is that they tend to become more expensive over time. What was originally considered lowest TCO gravitates quickly towards a solution that is encumbered and costly to maintain and support. Licenses have to be re-negotiated over time, and new technologies, often unneeded, are forced into the picture at the whim of the provider for the sake of THEIR business needs. On top of that, the license agreements are CONTINUALLY changing--get a lawyer.
With Linux you have the freedom to provide in-house support and expertise without being barred from distributing the solution as you need. You also have the freedom to continue to use and support technology that the original providers no longer want to support. Having the source code and the RIGHTS to do with it what you want (GPL, LGPL) is a powerful attractor when it comes to business continuity and containing costs while providing access to the very latest technologies, or the technologies that fit your needs.
I have developed network drivers that work both on RT Linux (to be more specific, a Linux preemptive kernel with the RT patch) and Windows CE. My experience was that Windows CE was more stable in terms of real-time response. Frame timings also showed that Windows CE had less jitter.
On RT Linux we had all sorts of problems. For example, when the user moved the mouse, our frames were being delayed. Guess what: certain variants of X Windows disable interrupts. You may also feel that you are safer on a console screen only; if you have VGA frame buffers enabled, you are doomed again. We had only one jitter problem with Windows CE: it happened when the USB controller was set to an incorrect mode in the BIOS and Windows CE was spending a lot of time polling.
To be honest, Windows CE had more support. On Linux you are on your own; you have to read every possible mailing list to understand what problems you may have.
a partially customised OS
Is much easier to achieve if the OS is open source (and you have the expertise).
Android is a good option for some embedded systems (it's Linux-based).
You have many experts who are able to develop on this system.
You have access to many libraries in Java or C.
But it uses a lot of memory and energy.
What we often forget with paid/licensed software is that you have to deal with licenses. It takes time and energy! Then you have to track whether you are paying for it correctly. It involves many different people with different skills, and it adds cost to every decision.
This cost is often not included in the studies that claim open-source/free software is more expensive than paid software.
With "free software" it's much easier to deal with licenses and you spend less time on these issues. Personally, I prefer to avoid unnecessary communication with the legal/finance team every time I change some piece of the software.
