Environment detection in WASM: Web, Node.js or standalone runtime?

There are many ways to run a WebAssembly module besides the Web: Node.js can load a wasm module with --experimental-wasm-modules, and standalone runtimes like wasmtime and Lucet can run it too.
So how do I detect the current environment from within WASM, and is it possible to apply a restriction to a wasm module so that it can only work on a specific website?

WebAssembly has no built-in I/O capabilities - it cannot access the DOM, read the filesystem, render to the screen, etc. In order to perform any of these tasks it needs to interoperate with the host environment (most typically JavaScript).
As a result, WebAssembly cannot detect its runtime environment on its own. It could ask the host what environment it is executing within, although this could of course be faked!
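If you do go the route of asking the host, the module has to import a query function and trust whatever answer comes back. Below is a minimal sketch in C (built with clang's wasm32 target); the "env" import module, the host_environment name, and the return codes are all assumptions about glue code you would have to write for each host yourself:

    /* Hypothetical host-provided query: every host (JS glue on the Web,
     * a Node.js loader shim, a wasmtime embedding, ...) would have to
     * supply this import -- and a hostile host can simply lie. */
    __attribute__((import_module("env"), import_name("host_environment")))
    int host_environment(void);

    enum { ENV_UNKNOWN = 0, ENV_WEB = 1, ENV_NODE = 2, ENV_STANDALONE = 3 };

    /* Exported entry point that bails out unless the host claims to be
     * the environment we expect. */
    __attribute__((export_name("run")))
    int run(void)
    {
        if (host_environment() != ENV_WEB)
            return -1;   /* refuse to do anything useful elsewhere */
        /* ... real work ... */
        return 0;
    }

Anything the module checks this way can be replicated by another embedder, which is the "could be faked" caveat above.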

Related

Calling an executable like a script

I'm working on an HTTP/1.1 server in C as a learning experience and want to make it performant while still being dynamic. Performing a GET or POST on static files or scripts was easy enough, but I'd like to add the ability to call compiled binaries for greater speed.
Currently, I link these compiled binaries directly into the server binary, but I'd like to be able to update and hot swap them. I considered dynamically linking them as shared libraries, but I don't want to relink them to handle every request. I also considered creating a new process to run them; however, that incurs significant overhead on every request and makes getting the response back to the client difficult (I'm using OpenSSL sockets).
How could I efficiently relink these compiled binaries when they update, without shutting down the server?
I'm testing on Debian Sid and running on an AWS ECS instance with CentOS 7. Both have Linux kernel versions 4.19+.
I'd like to be able to update and hot swap them. I considered dynamically linking them as shared libraries
You appear to believe that you can update a shared library (on disk) that a running server binary is currently using, and expect that running server process to start using the updated library.
That is not how shared libraries work. If you try that, your server process will either crash, or continue using the old library (depending on exactly how you update the library on disk).
This can be made to work in limited circumstances if you use dlopen to load the library, if you can quiesce your server, and if you have it dlclose to unload the previously loaded version and then dlopen the updated version. But the exact details of making this work are quite tricky.
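As a rough illustration of that dlopen route (a sketch, not a drop-in solution): assume a hypothetical plugin interface with a single handle_request symbol, and that the server can quiesce so no thread is executing inside the old library when it is swapped out.

    #include <dlfcn.h>
    #include <stdio.h>

    /* Hypothetical handler signature -- the real interface is up to you. */
    typedef int (*handler_fn)(const char *request, char *response, int len);

    static void      *plugin  = NULL;
    static handler_fn handler = NULL;

    /* (Re)load the handler library.  Only call this while nothing is
     * executing code from the old library; dlclose unmaps its code. */
    int reload_handler(const char *path)
    {
        if (plugin)
            dlclose(plugin);                    /* drop the old version */

        plugin = dlopen(path, RTLD_NOW | RTLD_LOCAL);
        if (!plugin) {
            fprintf(stderr, "dlopen: %s\n", dlerror());
            return -1;
        }

        handler = (handler_fn)dlsym(plugin, "handle_request");
        if (!handler) {
            fprintf(stderr, "dlsym: %s\n", dlerror());
            dlclose(plugin);
            plugin = NULL;
            return -1;
        }
        return 0;
    }

Even then, the updated .so should be installed by writing a new file and rename(2)-ing it over the old path rather than overwriting it in place; overwriting a mapped library in place is exactly the "crash or keep using the old library" failure mode described above.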

How to Get Fully Functional PowerShell running on Linux?

Getting PowerShell running on Linux is straightforward.
Unfortunately this is based on .NET Core, which excludes a lot of important functionality and modules, e.g. the DNSServer module.
Is there a workaround to obtain a fully functional PowerShell installation on Linux, including modules that don't appear in .NET Core (specifically DNSServer)?
Modules like DNSServer are owned and maintained by the DNS team within Microsoft and aren't part of the PowerShell project itself. This also means they aren't open source.
On top of that, for DNSServer specifically, that module uses WMI under the hood (I'd go so far as to say it's a thin wrapper around the WMI calls), and since WMI is also not open source and not available on Linux, I'd say there's little chance of this module making it there any time soon.
As a general case, your best bet is probably to use PSRemoting from Linux to a Windows machine that has the modules you want, then either use Implicit Remoting (Import-PSSession) or just straight up make remote calls with Invoke-Command.

Running Paraview Benchmarks remotely

ERROR: In /home/kitware/dashboards/buildbot/paraview-debian6dash-linux-shared-release_opengl2_qt4_superbuild/source-paraview/VTK/Rendering/OpenGL2/vtkXOpenGLRenderWindow.cxx, line 286
vtkXOpenGLRenderWindow (0x529c2b0): bad X server connection. DISPLAY=Aborted
I understand that the main reason it is not running is that it needs to use a window, which it cannot create remotely.
I am able to run it locally.
This is a benchmark provided by paraview.org.
The issue arises because the process needs access to an X server to create a window and then an OpenGL context for all the rendering. The default Linux binaries shipped by paraview.org rely on X to provide the context. If your server is not going to provide X access to your processes (which is not unheard of), then you should build ParaView with either OSMesa support or EGL support and use that build instead. In those builds, ParaView uses non-X-dependent methods to create the OpenGL context.

Sandboxing code on a Linux machine

I'm in the process of writing an application in C++ on Linux. The goal is to have it load dynamically linked libraries at run-time and to provide all the services that the libraries require. The main aim is to have it act as a black box where code loaded at run-time can not break out and damage the rest of the system.
I've never done anything like this before and am a little lost as to the best method to take. If I load all the dynamically linked libraries under a special process and then use something like SELinux to limit the ability of the central daemon to do anything outside of its requirements, would that seem like a reasonable solution?
The reason I ask is that I want to allow people to load code into this container application that then handles all the server-side stuff for them, so things such as security, permissions, networking, logging, etc. are all provided with a simple, clean and cross-platform API regardless of the version of UNIX that the container is running on.
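To make the "special process" idea concrete, here is a very rough sketch; the plugin_main entry point and the confinement steps are assumptions, and the actual SELinux policy lives entirely outside the code. Each untrusted library is dlopen'ed only inside a forked child, so confinement and any crash are isolated from the main daemon.

    #include <dlfcn.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* Hypothetical entry point that loaded code must export. */
    typedef int (*plugin_main_fn)(void);

    /* Run one untrusted plugin in its own child process.  The dlopen
     * happens only in the child; SELinux/seccomp restrictions would be
     * applied there, before the plugin code gets control. */
    int run_plugin_sandboxed(const char *path)
    {
        pid_t pid = fork();
        if (pid < 0)
            return -1;

        if (pid == 0) {                      /* child: the "black box" */
            /* ...drop privileges, install a seccomp filter, or rely on
             * the SELinux domain configured for this helper... */
            void *h = dlopen(path, RTLD_NOW | RTLD_LOCAL);
            if (!h)
                _exit(127);
            plugin_main_fn entry = (plugin_main_fn)dlsym(h, "plugin_main");
            _exit(entry ? entry() : 126);
        }

        int status = 0;                      /* parent: just supervise */
        waitpid(pid, &status, 0);
        return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
    }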

Using Vagrant, why is puppet provisioning better than a custom packaged box?

I'm creating a virtual machine to mimic our production web server so that I can share it with new developers to get them up to speed as quickly as possible. I've been through the Vagrant docs; however, I do not understand the advantage of using a generic base box and provisioning everything with Puppet versus packaging a custom box with everything already installed and configured. All I can think of is:
Advantages of using Puppet vs custom packaged box
- Easy to keep everyone up to date - ability to put manifests under version control and share the repo so that other developers can simply pull new updates and re-run puppet, i.e. 'vagrant provision'.
- Environment is documented in the manifests.
- Ability to use puppet modules defined in the production environment to ensure identical environments.
Disadvantages of using Puppet vs custom packaged box
- Takes longer to write the manifests than to simply install and configure a custom packaged box.
- Building the virtual machine the first time would take longer using puppet than simply downloading a custom packaged box.
I feel like I must be missing some important details, can you think of any more?
Advantages:
As dependencies may change over time, building a new box from scratch will involve either manually removing packages, or throwing the box away and repeating the installation process by hand all over again. You could obviously automate the installation with a bash script or some other type of script, but you'd be making calls to the native OS package manager, meaning it will only run on the operating system of your choice. In other words, you're boxed in ;)
As far as I know, Puppet (like Chef) contains a generic and operating system agnostic way to install packages, meaning manifests can be run on different operating systems without modification.
Additionally, those same scripts can be used to provision the production machine, meaning that the development machine and production will be practically identical.
Disadvantages:
Having to learn another DSL, when you may not be planning on ever switching your OS or production environment. You'll have to decide if the advantages are worth the time you'll spend setting it up. Personally, I think that having an abstract and repeatable package management/configuration strategy will save me lots of time in the future, but YMMV.
One great advantage not explicitly mentioned above is the fact that you'd be documenting your setup (properly), and your documentation will be the actual setup - not a (one-time) description of how things were/may have been intended to be.
