OpenMPI communication over local network [2 machines] - openmpi

Is there any way to make an MPI program work on two different machines (both Windows 7) over a local network? Let's assume that I have one PC with local IP 192.168.1.1 and the other one with 192.168.1.2. I've heard of DeinoMPI, but isn't there any faster way to do it?

Prior versions of Open MPI natively supported Microsoft Windows, but that support was dropped due to lack of interest. The current release series of Open MPI (1.8.x) has Cygwin builds available; check out http://cygwin.com/cgi-bin2/package-grep.cgi?grep=openmpi (that URL takes a while to load; it seems to be a slow/overloaded server).

You can also use Microsoft's MPI offering: http://msdn.microsoft.com/en-us/library/bb524831%28v=vs.85%29.aspx. It's freely available but only supports MPI-1 functions as far as I know.
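Whichever MPI you choose, a quick way to verify that the two machines can actually talk is a minimal hello-world that prints where each rank runs. Here is a sketch in C against the standard MPI API (the launch command afterwards is an assumption; exact flags vary by implementation):

```c
/* hello_mpi.c: minimal two-machine connectivity check.
   Builds against any MPI implementation (MS-MPI, Cygwin Open MPI, ...). */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, len;
    char host[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */
    MPI_Get_processor_name(host, &len);     /* machine this rank landed on */

    printf("rank %d of %d running on %s\n", rank, size, host);

    MPI_Finalize();
    return 0;
}
```

With MS-MPI you would then launch it across both PCs with something like mpiexec -hosts 2 192.168.1.1 1 192.168.1.2 1 hello_mpi.exe (one process per machine); check the MS-MPI documentation for the exact flags and for the smpd service that needs to run on both machines.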

Related

Are there any benefits to keeping your files (scripts) on the WSL filesystem?

When reading the WSL documentation, it is stated that:
"Unlike our practice with trying to exclusively install programs and software on Ubuntu, our files and folders need to live exclusively on the Windows FS [...] Windows and Windows Apps can only read and write Windows files, and VSCode will be making our changes."
I understand the reasoning behind this and indeed, if one uses VSCode for example, it all makes sense. But my question is:
Is there any real reason why you couldn't keep your files (i.e. scripts) on the WSL filesystem itself? More specifically, if you don't ever intend to use the Windows filesystem (i.e. you won't ever need a GUI or the like), is there any sense in placing the files in the Windows FS?
Obviously you need to make sure you back up your data (GitHub or elsewhere), but aside from that, is there any downside? I guess what I'm saying is: can I use WSL like a VM? Can I keep BOTH software AND scripts all in WSL, separate from the Windows filesystem?
PS: The reason for avoiding a VM in this context is that I have a low-spec laptop which has struggled a lot in the past with VMs (slow, not enough RAM), and so far, WSL seems to be running much more smoothly.
Thanks
The simple answer is yes, you can use WSL as if it were a VM. WSL is for the most part fully-fledged Linux, and you can use Linux as your primary operating system, ignoring the fact that you need to start it from within Windows. I haven't tried WSL 2, but it's said to be implemented as a fast VM, which is exactly what you ask for. (Further, the lack of GUIs can be mitigated using built-in support for sending X data over SSH to the Windows half of your computer and displaying it with an X server. If I remember correctly, these two articles got me most of my way there.)
However, if you want to get pedantic, you can't store any files separate from your Windows filesystem on WSL 1. If you run e.g. Ubuntu, your Linux filesystem is instead always contained within %USERPROFILE%\AppData\Local\Packages\CanonicalGroupLimited.Ubuntu18.04onWindows_79rhkp1fndgsc\LocalState, so it'll technically not be separate. I can't test WSL 2, but according to this article, WSL 2 also stores its data in that folder, just as a single VHDX image. Presumably every WSL distro stores its data on the Windows filesystem.
Warning: Do not access the files themselves in your Linux filesystem within AppData using Windows tools, or you run a high risk of corrupting those files.
Yes, you can; only place files in the Windows filesystem if you want to share them with Windows programs. Moreover, since Windows 10 1903 you don't even need to place files in the Windows filesystem to share them with Windows programs: they can access the WSL filesystem directly.
In WSL 2 they encourage you to keep everything in the WSL filesystem to take advantage of its filesystem performance improvements.
So, yes you can and you should.
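As a hedged illustration of that 1903-era sharing: a plain Win32 C program can open a file that lives inside the WSL filesystem through the \\wsl$ network path. The distro name and file path below are placeholders for illustration:

```c
/* read_wsl_file.c: open a file stored inside the WSL filesystem from a
   native Windows program via the \\wsl$ path (Windows 10 1903+). */
#include <stdio.h>

int main(void)
{
    /* Placeholder distro name and path; adjust for your setup. */
    const char *path = "\\\\wsl$\\Ubuntu\\home\\user\\notes.txt";

    FILE *f = fopen(path, "r");
    if (!f) { perror("fopen"); return 1; }

    char line[256];
    while (fgets(line, sizeof line, f))  /* stream the file to stdout */
        fputs(line, stdout);

    fclose(f);
    return 0;
}
```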

New to Linux servers: executing binaries with possibly different versions of MPI

I wrote and compiled some binaries and uploaded them through SSH to a Linux server running on a remote supercomputer. If the binaries were compiled with, let's say, Intel's MPI libraries, but the remote Linux server doesn't have the corresponding MPI libraries installed, will the binaries execute properly if I also upload the needed Intel MPI dynamic-link library files to the server?
I would recommend you compile the binaries on the target supercomputer machine (after you SSH in). This way you're guaranteed that at least all the libraries are present at compile time, and at most you would need to modify LD_LIBRARY_PATH at run time to pick up the correct libraries.
Sometimes.
A few points
Many programs (MATLAB, for example) will allow users to specify their own MPI library. Closed-source programs should offer the ability to specify their own MPI.
The vendor MPI is highly optimized for the cluster; it is often a selling point of the cluster.
Many clusters run in two modes: an optimized mode and a Cluster Compatibility Mode. In the former, only the vendor MPI runs; in the latter, Open MPI and other MPIs are supported. Note that in the former, many common system() calls can fail.
Yes, it will work; have a look at Intel's Redistributing Libraries When Deploying Applications document. Also make sure to use the compiler's cross-compilation options to match the CPU architecture of the supercomputer. (A quick way to sanity-check the library situation is sketched below.)
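To sanity-check an uploaded binary's dependencies, here is a hedged sketch in C: it simply asks the dynamic loader whether the MPI runtime can be resolved. The soname libmpi.so is an assumption for Intel MPI; check the real list with ldd on your binary.

```c
/* check_mpi_lib.c: probe whether the dynamic loader can resolve the MPI
   runtime an uploaded binary was linked against. Build with: cc ... -ldl */
#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    /* "libmpi.so" is a placeholder soname (assumed for Intel MPI);
       substitute whatever `ldd your_binary` reports as missing. */
    void *handle = dlopen("libmpi.so", RTLD_LAZY);
    if (!handle) {
        fprintf(stderr, "not resolvable: %s\n", dlerror());
        fprintf(stderr, "try prepending your uploaded lib directory "
                        "to LD_LIBRARY_PATH\n");
        return 1;
    }
    puts("MPI runtime found on the loader path");
    dlclose(handle);
    return 0;
}
```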

Cygwin vs Linux Virtual Machine for Development?

< skippable part >
I work in IT (mostly desktop support and network administration) in a Windows environment, and I occasionally program.
A couple weeks ago, I decided I couldn't be as effective as I want to be without a Bash environment for my command prompt needs. This is especially true when I am using Ruby and git. I used Msysgit for a while, but I just didn't like how it wasn't extensible like Linux. So, I installed Cygwin and played around with that for a couple weeks.
As great as Cygwin is, it seems like it is meant to be a souped-up command prompt, and its compatibility with Linux is just a pleasant side effect. This especially became evident when I tried to upgrade Ruby to 1.9.3 (it worked, but it wasn't straightforward), install rvm (never worked), and install RMagick (may or may not work, but looks like a headache).
So, now I'm considering running Linux in a virtual machine. But I'm worried that might be another can of worms and I'll have wasted hours before I find that out. I like that Cygwin runs in Windows and I get to use my IDE, user folder, and more with it. But I don't like that support for it is not as thorough as for a major distro.
< /skippable part >
Does anyone here have insight on using Cygwin vs running a Linux virtual machine?
Any advice on setting up a Linux development environment in a virtual machine within Windows?
I have faced the same issues before, and the best solution in my experience is just two workstations :).
Apart from that, having Linux running in a virtual environment is way better.
First of all, you will have full Linux capabilities (except 3D acceleration, but you probably don't need that).
You will have the ability to create snapshots and revert to them when things go wrong!
You can start multiple environments using templates, which is very convenient.
The only downside I can think of is the performance impact on the host machine.
If it's a normal workstation/PC, an IDE + one virtual machine + a browser with 100+ tabs just makes it slow.
1: Cygwin is good for quick hacks and for being able to access host-OS resources (you can run IE, for example, from a bash script). For something tightly integrated and some "real" work, go with a VM. It will emulate everything and separate development from the real machine, and this may be a good thing in some cases... as a plus, it simulates a real server :)
2: In VirtualBox at least, you have shared folders: you can share a local folder and see it in the VM as a local folder (mounted locally or as a Windows share; it actually depends). Then you can use that "entry point" to symlink stuff into the VM and do what you need, with the real files located on the real (host) machine.
SSH into a Linux box. This is what everyone does. Why isn't this the answer?
There is something I have heard of called Cooperative Linux. It runs Linux alongside the Windows kernel so you can use them at the same time. I've never used it, but here:
http://www.colinux.org/
What I think now gets you the pros of both options is using Docker: it gives you Cygwin's simplicity and a VM's functionality, with better performance.
Linux in a virtual machine will give you the experience you want more than Cygwin or any "mock shell", as I like to call them.
Running VMs, though, requires a lot of RAM, depending on whether you want a desktop version of Linux or just a command-line version.
At work I have a PC with 8 GB of RAM; I run 64-bit Ubuntu as the main OS, two Ubuntu servers (these are dev environments for two different projects), a Windows 7 VM, and a Windows XP VM.
I can run the two Ubuntu servers and one other VM at the same time; the key here is more RAM if you want to be able to run VMs.
If you're going to be working with Ruby then get an Ubuntu virtual machine up and running :) I've not tried Ruby etc. on Windows, but I have heard that it is a pain to set up and configure. I use a Mac for all my Rails development, so I cannot comment on the Windows side of that.
As for virtual machine creation, I prefer VMware Workstation, but there are free alternatives such as VirtualBox and VMware Server.
I'm using a Linux VM within a Windows 7 environment, as this VM is as representative as possible of the final production environment. The whole setup is bound to the Eclipse IDE under Windows 7. So this is really great for full local testing before committing or tagging the tested version for the production servers.
As you mentioned as well, this takes some time to get properly set up and fully configured. So if you only need it for small tricks or tasks, you may keep using Cygwin. For example, I faced significant issues configuring Perl and compiling MySQL within Cygwin. So it's OK for basic usage, but not for taking full advantage of a full Linux environment.
Your choice strongly depends on the purpose of the final server setup. A VM will handle whatever your need is. Its setup cost is higher, so that time investment needs to be used often to pay off.

node.js on Windows. When and why?

Note: There is a similar question called 'installing nodejs on windows machine'.
And various answers explaining how by installing cygwin you can get it working there.
now, I don't want to install cygwin.
I just wish I could run nodejs on a windows box.
I want a "nodejs.exe" to kick off.
Can somebody explain to me
1) Why has nodejs not been ported to Windows? What are the technical reasons for not providing an exe?
2) Are there any plans to have nodejs on Windows?
I really would like to use it but I can't accept that I have to accept cygwin.
That's just not right.
Update:
For optimal node-on-Windows development I recommend you use the Windows Azure PowerShell for node.js. It's a PowerShell environment optimized for using node, npm, and the Azure APIs. (The Azure APIs are optional; I would still use this PowerShell even if I didn't use Azure.)
When: v0.6
Currently you can get a binary file that (kind of) works under Windows. Go ask on the node.js IRC channel. They'll hook you up.
Basically, if you read up on the node.js roadmap you will find that proper Windows support is planned for v0.6; we are currently on v0.4.7 and the v0.5.x beta is in full swing.
I won't give an ETA but it's soon.
IRC can be found at the Community links
PDF showing the v0.6 roadmap
July 2011 update:
#nodejs v0.5.1 is the first to ship with an official Windows executable. We're hoping to get some good feedback.
Microsoft has officially gotten involved with Joyent in making node.js run natively on Windows.
Running Node under Windows presents several technical problems, mostly related to how Windows' internal design differs from that of Linux and the "change in mindset" required to port Unix applications to Windows.
Background
Linux was designed to be a replacement for Unix, a well-known multitasking operating system, so from day one it has been a multi-user/multi-process, server-oriented operating system. The idea of multiple processes sharing system resources is key to its internal design.
Windows was initially designed as a single-user/single-process desktop operating system and so did not support shared access to system resources. In 1993 Microsoft released a newly redesigned version of Windows, called Windows NT, to better support the shared-resource, multitasking model required by servers; but due to its existing installed base of users, Microsoft required NT to also support all the features of its single-user/single-tasking forebear.
Windows 7 is a direct descendant of NT and Microsoft's need to support legacy users continues to this day (and in the opinion of many, has severely muddled Windows' internal design.)
Further, Microsoft hired a systems architect named Dave Cutler to design NT. Dave is best known for designing a competitor to Unix called VMS, the internal design of which differs significantly from that of Linux, which has caused a lot of problems for developers interested in porting their Unix programs to Windows.
The clearest example of this "impedance mismatch" between the internal design of Windows and Linux is how they handle event-driven, non-blocking input/output (io) on which Node relies to perform its (apparent) multitasking magic in a single thread of operation.
Linux supports two system-level functions called select() and epoll() which can be used to asynchronously inform a process of changes within the operating system that affect it. Node relies heavily on these functions but Windows doesn't support either, relying instead on "Change Notifications" (mostly) to handle event-driven io.
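To make the contrast concrete, here is a minimal sketch of the select()-based model described above: a single thread tells the kernel which descriptors it cares about and reacts only when one becomes ready. (POSIX C; the port number is arbitrary, and the code will not build as-is on plain Windows.)

```c
/* select_demo.c: single-threaded, event-driven io, the model Node builds on. */
#include <stdio.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <netinet/in.h>

int main(void)
{
    int listener = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port        = htons(8080);       /* arbitrary demo port */
    bind(listener, (struct sockaddr *)&addr, sizeof addr);
    listen(listener, 16);

    for (;;) {
        fd_set readable;
        FD_ZERO(&readable);
        FD_SET(listener, &readable);

        /* Sleep in the kernel until a watched descriptor is ready;
           one thread can watch many sockets this way. */
        if (select(listener + 1, &readable, NULL, NULL, NULL) < 0)
            break;

        if (FD_ISSET(listener, &readable)) {
            int client = accept(listener, NULL, NULL);
            const char msg[] = "served from a single-threaded event loop\n";
            write(client, msg, sizeof msg - 1);
            close(client);
        }
    }
    return 0;
}
```

epoll() works the same way but scales better to thousands of descriptors, which is why event loops like Node's use it on Linux.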

How to get Network Statistics of a Remote PC

Hi!
I have to make an application in VC++ which can get the network statistics of a remote PC.
Can anyone help me solve my problem?
Since you said VC++, I assume that you have to do this in a Windows environment. Have a look at the SNMP protocol and how it can be used to get information from remote machines.
Look at this SO Question as well.
Depending on the network statistics that you want to capture, you may find that (as Aamir suggested) SNMP is a good choice.
There are standard MIBs defined that will provide a number of network statistics. Three that are worth investigating are:
IF-MIB,
RMON, and
Etherlike MIB
NET-SNMP is a good library for accessing SNMP information and is available for Windows (as you've mentioned VC++). Others are available.
This does assume that you have an SNMP agent running and accessible on the remote machines that you wish to monitor.
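For a feel of what querying those MIBs looks like in code, here is a minimal sketch against NET-SNMP's synchronous C API, fetching one IF-MIB counter (inbound octets on interface 1). The agent address, community string, and interface index are placeholders for illustration.

```c
/* snmp_ifstats.c: read IF-MIB::ifInOctets.1 from a remote SNMP agent
   using the NET-SNMP synchronous API. */
#include <net-snmp/net-snmp-config.h>
#include <net-snmp/net-snmp-includes.h>
#include <string.h>

int main(void)
{
    netsnmp_session session, *ss;
    netsnmp_pdu *pdu, *response = NULL;
    oid name[MAX_OID_LEN];
    size_t name_len = MAX_OID_LEN;

    init_snmp("snmp_ifstats");      /* also loads the standard MIBs */
    SOCK_STARTUP;                   /* no-op on Unix, required on Windows */

    snmp_sess_init(&session);
    session.peername      = "192.168.1.2";       /* remote PC (placeholder) */
    session.version       = SNMP_VERSION_2c;
    session.community     = (u_char *)"public";  /* placeholder community */
    session.community_len = strlen("public");

    ss = snmp_open(&session);
    if (!ss) { snmp_perror("snmp_open"); return 1; }

    /* Build a GET request for ifInOctets on interface index 1. */
    pdu = snmp_pdu_create(SNMP_MSG_GET);
    read_objid("IF-MIB::ifInOctets.1", name, &name_len);
    snmp_add_null_var(pdu, name, name_len);

    if (snmp_synch_response(ss, pdu, &response) == STAT_SUCCESS &&
        response->errstat == SNMP_ERR_NOERROR) {
        /* Print the returned varbind (the octet counter). */
        print_variable(response->variables->name,
                       response->variables->name_length,
                       response->variables);
    }

    if (response) snmp_free_pdu(response);
    snmp_close(ss);
    SOCK_CLEANUP;
    return 0;
}
```

As noted above, this only works if an SNMP agent is running and reachable on the remote machine.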
