I am running the exact same heavily loaded Node.js (0.10.18) application on a few hosts (Debian 6). Recently I noticed that, in contrast to the 64-bit version, the 32-bit version has a leak: the number of open file descriptors (sockets) grows permanently and I have to restart the application every 24 hours. Find below the monitoring data for the 32-bit and 64-bit node.js behaviour of the same application.
for 32-bit Node
for 64-bit Node
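For reference, open-descriptor counts like these can be gathered on Debian with commands along these lines; <pid> is a placeholder for the node process id, not a value from the original setup:

ls /proc/<pid>/fd | wc -l     # count open file descriptors of the node process
lsof -p <pid> | wc -l         # roughly the same count, via lsof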
Is this a node.js bug? Has anybody else hit this?
Simon
I have a device that has an ARM processor and runs Win CE OS.
Now I have got a requirement to implement a node js server inside the device.
The same requirement was implemented on another ARM device that was running Linux.
Since node is compiled for Linux, they were able to run a node js server inside the device.
But there is no Win CE compatible version of node available.
Is that not done yet, or am I missing something?
I read about Microsoft's ChakraCore, but I did not understand much.
Does anybody know how to run node on devices running Win CE?
Any kind of leads/help is appreciated. Thanks
Windows CE provides an implementation of the Win32 API that is somewhat compatible with the full Win32 version implemented on Windows desktop operating systems.
It also provides C/C++ libraries but, as you know, the devil is in the details: those implementations can be considered a subset of the desktop ones, and a single missing function or feature can force you to re-implement a huge amount of code to work around the limitation.
Windows CE is also meant to run on resource-limited devices with a limited amount of RAM and processing power and, honestly, node.js does not seem to exactly target this kind of platform.
First, I would like to understand if the requirement makes sense and why there is a need to mix a small real-time OS like CE with a huge, interpreted, resource-hungry monster like node.js.
Windows CE has not been updated in over 3 years, so it is unreasonable to expect node.js to work as-is on top of CE. Windows on ARM, however (used by Windows Phone, Windows IoT, and the ill-fated Surface RT), can run this: https://github.com/nodejs/node-chakracore. Windows on ARM only accepts Thumb-2 instructions, so you won't be able to use regular node.js.
This is because V8's just-in-time compiler does not produce Thumb-2 instructions. More reading material here: https://blogs.msdn.microsoft.com/ntdebugging/2014/05/15/understanding-arm-assembly-part-2/
I'm a long-time user of Cygwin. I'm running Win7/x64, but my Cygwin installation is 32-bit. At the time I installed it, the 64-bit version was considered experimental. Now the Cygwin website lists the 32-bit and 64-bit versions without any special mentions or recommendations on which one to use.
As a programmer, my experience is: unless specifically designed for capabilities of 64-bit CPUs, very few applications can get any gains by being recompiled for a 64-bit CPU. They might use more memory, though, since each memory pointer now uses 8 bytes rather than 4.
So, my question is: is there any benefit to choosing 64-bit Cygwin over the 32-bit one? Should I upgrade my 32-bit installation sooner or later?
Performance/memory benchmark results for common usage patterns (e.g. running common UNIX commands, particular shell scripts etc.) would be very welcome.
If you've upgraded your 32-bit Cygwin to 64-bit, please tell your story - was the upgrade worth it?
EDIT: I've been using 64-bit Cygwin for a while now, and haven't really noticed any differences - good or bad - from the 32-bit version.
After running 32-bit Cygwin at work for a number of years, I recently upgraded to 64-bit. It was quite painless as you can run the two installations side-by-side. For me, there was no noticeable difference in performance though I did not run any benchmarks. The biggest gain for me in upgrading was starting with a new fresh install without any of the cruft that built up over the last few years.
The one disadvantage that I had when running 32-bit Cygwin was that the WoW64 subsystem filesystem redirector silently redirects 32-bit applications. This has the result that running ls /cygdrive/c/Windows/System32/ in 32-bit Cygwin doesn’t actually display the contents of %WINDIR%\System32, but instead %WINDIR%\SysWOW64. Attempts to access files in the %WINDIR%\System32 directory, e.g., ls /cygdrive/c/Windows/System32/nbtstat.exe fail with the error message:
ls: cannot access /cygdrive/c/Windows/System32/nbtstat.exe: No such file or directory
This means that Windows commands installed in %WINDIR%\System32 such as nbtstat and dnscmd aren’t available without modifying the PATH. The work-around for accessing 64-bit Windows files was that the %SystemRoot%\sysnative pseudo-directory had to be used, e.g., to run the nbtstat command, use /cygdrive/c/Windows/Sysnative/nbtstat.
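Putting the work-around into commands, as run from a 32-bit Cygwin bash (a recap of the behaviour described above, not output copied from the original post):

ls /cygdrive/c/Windows/System32/nbtstat.exe    # fails: the path is redirected to SysWOW64
ls /cygdrive/c/Windows/Sysnative/nbtstat.exe   # works: sysnative bypasses the redirector
/cygdrive/c/Windows/Sysnative/nbtstat          # runs the 64-bit tool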
32-bit applications on 64-bit Windows
WoW64 (Windows 32-bit on Windows 64-bit) is a subsystem capable of running 32-bit applications. It is included on all 64-bit versions of Windows and creates a 32-bit environment to run unmodified 32-bit applications on a 64-bit system using DLLs to provide the necessary interfaces.
Windows uses the %SystemRoot%\system32 directory for its 64-bit library and executable files. This is done for backward compatibility reasons, as many legacy applications are hardcoded to use that path. When executing 32-bit applications, WoW64 transparently redirects their 32-bit DLL and file accesses to %SystemRoot%\SysWoW64, which contains 32-bit libraries and executables.
32-bit applications are generally not aware that they are running on a 64-bit operating system. 32-bit applications can access %SystemRoot%\System32 through the pseudo directory %SystemRoot%\sysnative.
For 64-bit applications (such as Windows Command Prompt, cmd.exe) there’s no filesystem redirection:
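For example, from 64-bit Cygwin (or any other 64-bit process) the path that failed above resolves to the real directory:

ls /cygdrive/c/Windows/System32/nbtstat.exe    # no redirection, the real 64-bit file is listed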
See also: File System Redirection
I only ran one I/O-heavy benchmark, but even for that the performance gain is noticeable and well worth it.
On a Cygwin bash shell on Windows 7, I ran a find with many patterns piped to grep and then redirected to a file on the same hard drive, in both 32-bit and 64-bit (the command is sketched after the timings):
32 bit
real 1m58.57s 68% more time
user 0m11.95s 41% more time
sys 0m40.83s 23% more time
64 bit
real 1m10.36s
user 0m8.50s
sys 0m33.05s
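For reference, the command was along these lines; the search root, patterns and output file below are placeholders rather than the originals:

# time a file search: find lists files, grep filters by several patterns, output goes to a file on the same drive
time find . -type f | grep -E -e 'pattern1' -e 'pattern2' -e 'pattern3' > /tmp/results.txt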
I'm looking for a way to deploy a Haskell web application on a low-spec toy server. The server specs:
OS: debian stable (squeeze) i386
CPU: 1 GHz Pentium IV
RAM: 512 MB
Storage: 512 MB compact flash (mounted on /var), 4 GB USB compact flash (mounted on /)
The server runs fine, it doesn't see much traffic (it's mainly used by myself, friends and family members), and I can afford to run it from my living room because it's completely silent and draws very little power (around 10 W idle, 40 W peak).
Quite obviously, I would like to avoid installing the entire Haskell Platform and compiling on the server - I'd run out of disk space fairly quickly, and compilation is bound to take ages due to the slow storage. I can't just deploy binaries from my development machine though, because that one runs Debian testing amd64, so the binaries won't be compatible. My ideas so far:
install a VM with debian/i386 to build on
figure out a way to build i386 binaries on amd64
compile to C on the development machine, copy C sources to server, finish build there (installing gcc or clang on the server is probably acceptable)
other ideas?
Which one sounds the most promising? Are options 2 and 3 even possible?
Also, I'm a bit concerned about libraries; the application depends on a few system libraries such as libcairo. Installing them on the server is not a problem, but I wonder whether, especially for option 2, this would work (library versions etc.).
I haven't tried it with Haskell, but with similar requirements in the past I've found it simplest to just set up a VM with the same version of Debian as the target system. That means you don't need to worry about library versioning etc.
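If a full VM feels heavy, a debootstrap chroot on the amd64 development box is a lighter way to get the same effect. This is only a sketch; the mirror URL, paths and package names are examples, not taken from the question:

# on the amd64 dev machine, as root: create a 32-bit squeeze environment
apt-get install debootstrap
debootstrap --arch=i386 squeeze /srv/squeeze-i386 http://ftp.debian.org/debian
chroot /srv/squeeze-i386 /bin/bash
# inside the chroot: install the Haskell toolchain and the -dev packages the app needs,
# e.g. libcairo2-dev, then build and copy the resulting i386 binary to the server
apt-get install libcairo2-dev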
I'm running the x64 version of some simulation app on a very nice IBM x-server (4 eight-core CPUs). The OS is Linux - Red Hat 5.6 with an x64 kernel.
So this app crashes exactly when it needs more than 2 GB of memory (as evident from its own log files).
My question really is how to debug this issue - what relevant environment settings should I look at? Is 'ulimit' (or sysctl.conf) relevant to this issue? What additional info can I post in order for you to help me?
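For what it's worth, the per-process limits can be inspected with standard bash ulimit flags, run as the same user that starts the app (shown here only as a sketch of what to check):

ulimit -a     # show all per-process limits
ulimit -v     # max virtual memory (KB)
ulimit -d     # max data segment size (KB)
ulimit -m     # max resident set size (KB)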
This would be an application problem. Although the application is compiled as a 64-bit application, it still uses signed 32-bit integers for some things instead of proper pointers or the appropriate *_t types.
If you compile the application yourself, look for any "unsigned" or "truncated" warnings in the compilation output, and fix them.
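One quick way to spot them is to filter the build output, for example:

make clean
make 2>&1 | grep -iE 'truncat|overflow|conversion|sign'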
The shmmax value defines the maximum size of a single shared memory segment that an application can allocate; you should check the current value with this command:
cat /proc/sys/kernel/shmmax
If you need to increase it, you can use:
echo 4096000000 > /proc/sys/kernel/shmmax
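Note that writing to /proc only lasts until the next reboot; since the question also mentions sysctl.conf, the persistent equivalent would be:

echo "kernel.shmmax = 4096000000" >> /etc/sysctl.conf
sysctl -p    # reload /etc/sysctl.conf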
Bye
I remember hearing that for performance a development machine should be 32-bit, while servers should be 64-bit. I think it was Richard Campbell on .NET Rocks! who mentioned this.
Why would 32-bit be faster than the 64-bit for a development box and vice versa for servers?
One major reason is the fact that a 32-bit OS can't address more than 4 GB of RAM (and in practice even less is usable). Having 4-8 GB can be crucial in a lot of development environments where virtual machines are involved, or for heavy lifting in general. This is why I always stick with 64-bit where possible, and all modern CPUs support it.
It depends in part on your tools - for example, Visual Studio is still a 32 bit app (but usable from x64 - just no huge gain).
However, if you are using your main OS to host VMs, then you can probably benefit from a ton of memory for your various virtuals - and then you can choose 32-bit and 64-bit VMs to suit your needs (it is harder to have a 64-bit guest VM in a 32-bit host).
Personally, I'm still on 32-bit for development. For most of what I do, it is fine.
I run 64-bit 2008 Server and see no performance issues whatsoever. In fact, it's much better than 32-bit XP; it generally performs faster. Funnily enough, file operations are quicker on my laptop's 5400 rpm drive running 64-bit 2008 Server than on an office PC with a 7200 rpm drive running 32-bit XP.
I can think of only one reason why you would want to run a 32-bit OS (XP being the latest): it's where you get IE6 to debug your sites.
The other thing is that a 32-bit OS is incapable of addressing more than roughly 3.4 GB of RAM. If your PC has 4 GB or more, you only lose with a 32-bit OS. Considering that even consumer laptops are sold these days with 4, 6 and 8 GB of RAM, one can safely say goodbye to the 32-bit OS.
If you are talking about non-Windows OS then my experience may not apply.
Having a lot of memory changes the way you work, sometimes dramatically. I run 8 virtual screens with 4 different development environments (1 trunk, 2 branches and a fourth environment for external projects). Just with 12GB mem and a 30" screen.
I don't think that 32-bit machines are faster than 64-bit machines for developers. It is true that a 32-bit development environment on a 64-bit OS runs in an emulated 32-bit environment, and that creates a slight overhead. On the other hand, the internal data paths of a 64-bit OS are 64 bits wide, enabling it to move twice as much data in a single operation, which makes it slightly faster than a 32-bit OS. The downside of a 64-bit OS is that pointers are twice as big.
What really matters is that 64-bit OSes are very stable, have access to much more physical memory, and can run both 64-bit and 32-bit applications and virtual machines without sacrificing performance. The 32-bit OS belongs to the past.
I have 64-bit Ubuntu installed on my laptop. I use it for development and I have no performance issues at all. I have the feeling that computer resources are better used this way.
The only reason I can think of to choose a 32-bit OS is knowing that what you develop will work on both 32-bit and 64-bit machines. But VS lets you choose your target machines...
His point was that if you develop on 32-bit you will have less than 4 GB of RAM to work with, while on a 64-bit server you may have much more than 4 GB of RAM, basically tricking you into being more frugal with your memory requirements. It had more to do with memory usage than with raw number crunching on the CPU.
Although I can't quantify it in numbers, I have noticed the same thing as 'new in town'. I used to run XP x86, and later Vista x86, on my notebook. After I upgraded to Vista x64 it was a lot snappier. I don't know if it is a driver issue, the fact that I run SQL Server x64, that it can use twice the amount of CPU registers, optimizations in 'internal' stuff in Windows, or what, but I can notice the difference...
I'd think the obvious suggestion would be to use whatever OS your code is going to be deployed on. If your development environment is as close as possible to the deployment environment, there's less chance of bugs showing up only in the deployment environment.