I am currently trying to use the unsat core feature in Alloy, but my solver options are limited to PLingeling and Sat4j. I also get a warning that JNI-based solvers are not available on my platform (see the Alloy solver options capture). I am using Windows 10 with a 64-bit Java JDK.
How do I enable Alloy solvers with Unsat Core?
The comments on the post Alloy - Can't find unsat core suggest that I need to add the native library to LD_LIBRARY_PATH. However, LD_LIBRARY_PATH is a Linux concept, which leaves it unclear how to enable the JNI-based solvers on Windows. Is there an equivalent to LD_LIBRARY_PATH in Windows 10? If not, how do I enable JNI so that I can use the solvers that support unsat core?
A workaround is to run Alloy inside the Windows Subsystem for Linux. Alloy then detects Linux as its platform and offers, e.g., MiniSat, which supports unsat core. (On Debian/Ubuntu, you can install MiniSat with apt.)
Related
I want to use some functions from the ole32 and gdi32 libraries, but these two libraries do not exist under Linux. Are there alternatives I can use on Linux?
I am using cgo and currently link the libraries like this:
#cgo LDFLAGS: -lws2_32 -lgdi32 -lole32
No, there are no drop-in alternatives for these libraries on Linux, as they are part of the proprietary Win32 API and the facilities they provide are specific to Windows.
To make your application build on both Windows and Linux, you will need to abstract the parts of the application that use Windows-specific libraries and implement them again for Linux using suitable replacement libraries (likely with different interfaces). Typically this is done so that the programming interface exposed to the rest of your application (functions, methods, types, etc.) stays the same while the underlying implementation is platform-specific, using, e.g., gdi32 on Windows and your favorite GUI framework on Linux. To achieve this, Go provides the build constraints mechanism, which tells the compiler to include or ignore certain files of the codebase on each platform.
Go's own os package is a good example of this approach.
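A minimal sketch of this pattern (screenSize and the file names mentioned in the comments are hypothetical illustrations, not a real API): in a real split, a file like screen_windows.go would begin with the constraint line `//go:build windows` and call into gdi32 via cgo or syscall, while screen_linux.go would begin with `//go:build linux` and use a Linux GUI library. The rest of the program only ever sees the shared function signature:

```go
package main

import "fmt"

// screenSize has one implementation per platform, selected by build
// constraints; a stub stands in here so the sketch is self-contained.
func screenSize() (width, height int) {
	return 1920, 1080
}

func main() {
	w, h := screenSize()
	fmt.Printf("primary display: %dx%d\n", w, h)
}
```

The key design point is that callers depend only on screenSize's signature; the compiler picks the file whose build constraint matches the target platform, so no runtime platform check is needed.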
If your application is heavily dependent on the Windows ecosystem and porting does not make sense, building a native Windows binary and running it under the Wine compatibility layer on Linux might be a cost-efficient option.
If you read Wikipedia articles about Wine, Cygwin and CrossOver, you will see that this software is classified as "compatibility layer".
I'm trying to understand what a compatibility layer is from the point of view of virtualization layers. I mean, does it use:
library-level virtualization; or...
application-level virtualization; or...
some different-level virtualization (which one?)
and does it use virtualization at all?
It does NOT use any virtualization.
The cygwin1.dll provides a C library and the POSIX compatibility layer between the program and the underlying Microsoft system.
Cygwin programs are specially crafted Windows programs, compiled with ad hoc tools and linked against cygwin1.dll.
I've been trying for the past day to get TensorFlow built with OpenCL on the Windows Subsystem for Linux.
I followed this guide, but when I run clinfo it says
Number of platforms 0
Then typing /usr/local/computecpp/bin/computecpp_info gives me
OpenCL error -1001: Unable to retrieve number of platforms. Device Info:
Cannot find any devices on the system. Please refer to your OpenCL vendor documentation.
Note that OPENCL_VENDOR_PATH is not defined. Some vendors may require this environment variable to be set.
Am I doing something wrong? Is it even possible to install OpenCL on the Windows Subsystem for Linux?
Note:
I'm using an AMD R9 390X from MSI on 64-bit Windows Home Edition.
With the launch of WSL2, CUDA programs are now supported in WSL (more information here), however there is still no support for OpenCL as of this writing: https://github.com/microsoft/WSL/issues/6951.
According to a Microsoft representative in this forum post, Windows Subsystem for Linux does not support OpenCL or CUDA GPU programs, and support is not currently planned. To experiment with TensorFlow/OpenCL it would probably be easiest to install Linux in a dual-boot configuration.
You could use the Intel OpenCL SDK for the CPU, https://software.intel.com/en-us/articles/opencl-drivers.
I have set up a few H16R instances on Microsoft Azure that support RDMA, and the Intel pingpong test works fine:
mpirun -hosts <host1>,<host2> -ppn 1 -n 2 -env I_MPI_FABRICS=dapl -env I_MPI_DAPL_PROVIDER=ofa-v2-ib0 -env I_MPI_DYNAMIC_CONNECTION=0 IMB-MPI1 pingpong
However, an issue arises when I want to compile MPI applications (LAMMPS, for instance). It doesn't appear that Microsoft includes Intel compilers on their HPC CentOS 7.1 images, despite the fact that these H16R instances communicate using Intel MPI.
So I installed Open MPI and compiled LAMMPS with mpic++; however, Open MPI's mpirun complains and won't run anything.
Do I actually need to purchase the Intel compilers for this? Is there no way to use Open MPI on these VMs? That would be rather expensive for a personal project.
You don't need the Intel compilers in order to use Intel MPI; it works fine with GCC too. Intel MPI provides both Intel-specific compiler wrappers (mpiicc, mpiicpc, mpiifort) and generic ones (mpicc, mpicxx, mpif90, etc.). The latter work with any compatible compiler.
In order to use mpicxx with GCC for LAMMPS, you must tell the wrapper to use g++, either via a command-line option:
$ mpicxx -cxx=g++ ...
or by setting the I_MPI_CXX environment variable:
$ export I_MPI_CXX=g++
$ mpicxx ...
The same applies to the C and Fortran wrappers. Run them with no arguments whatsoever and you'll get a list of options that can be used to provide the actual compiler name.
As for using an alternative MPI implementation: the virtual InfiniBand adapters provided by Azure seem to lack support for shared receive queues, and Open MPI won't run with its default configuration. You could try running with the following mpiexec option:
--mca btl_openib_receive_queues P,128,256,192,128:P,2048,1024,1008,64:P,12288,1024,1008,64:P,65536,1024,1008,64
This reconfigures all shared receive queues as private ones. I have no idea whether it actually works; I don't have access to an Azure HPC instance, and this is all based on the error message from this question. (Unfortunately, the OP has not responded to my inquiry as to whether the above argument makes Open MPI work.)
I would like to know if MPI.NET plus the Mono framework can be used to run distributed computations on supercomputer nodes that are all Linux-based.
I know that the Mono runtime is available on the clusters, and Mono-compiled programs using the standard libraries run fine. But what about MPI.NET?
And one more question: I am a bit confused about the difference between MPI.NET and MPICH2, etc. Is MPI.NET a wrapper around standard MPICH2 that works on Linux if MPICH2 is available, or is it an alternative to MPICH2 that requires installing MPI.NET clients?
I would highly appreciate your input if you have any experience with this.
Thank you.
I finally found the answer to some of these questions on the MPI.NET website. I quote:
Does MPI.NET work with other MPI implementations?
It depends on the platform. Even though MPI is a standard, on Windows MPI.NET encodes some information about specific data types used in MS-MPI that tie MPI.NET directly to Microsoft's MPI. It is certainly possible to make MPI.NET work with other MPI implementations, but we do not currently plan to do so. On Unix, however, MPI.NET adapts itself to the native MPI detected at configure time, and can work with (at least) Open MPI, LAM/MPI, and MPICH2
Does MPI.NET work with Mono?
Yes! As of version 0.6.0, MPI.NET works under Mono using a variety of different native MPIs, including Open MPI, LAM/MPI, and MPICH2. Note, however, that problems with run-time code generation in Mono cause MPI.NET to be slightly more conservative in its optimizations. Due to this more-conservative approach and the fact that the Mono JIT has not received as much tuning as the Microsoft JIT, we expect that performance on Mono will not be as good.
For more info, please visit the MPI.NET FAQ.
Additionally, here's a link on how to compile MPI.NET under Ubuntu: Compiling MPI.NET under Ubuntu Oneiric
Before I start: when you have multiple questions, you should post them separately; among other things, if you like my answer and one other, you can't accept both. I don't know the answer to the first part of your question, but I can help you with the second. Anyway, on to the second half of your question.
MPI.NET is an MPI implementation that seems to run on top of Boost MPI. This means that it probably doesn't implement the low-level communication calls itself, but instead provides .NET bindings that let you call Boost MPI from any .NET language. If you don't have both installed, it won't work.
MPICH (the project isn't called MPICH2 anymore) is a separate implementation of the MPI standard that provides everything necessary, at all levels of the stack, to run on most systems. It provides all of the standard's required language bindings (C, Fortran, and C++, the last of which is deprecated). The two projects are separate and, as far as I know, you can't use MPI.NET on top of MPICH.