How to Get Fully Functional PowerShell running on Linux?

Getting PowerShell running on Linux is straightforward.
Unfortunately, it is based on .NET Core, which excludes a lot of important functionality and modules, e.g. the DNSServer module.
Is there a workaround to obtain a fully functional PowerShell installation on Linux, including modules that aren't available on .NET Core (specifically DNSServer)?

Modules like DNSServer are owned and maintained by the DNS team within Microsoft and aren't part of the PowerShell project itself. This also means they aren't open source.
On top of that, the DNSServer module specifically uses WMI under the hood (I'd go so far as to say it's a thin wrapper around the WMI calls), and since WMI is also not open source and not available on Linux, I'd say there's little chance of this module making it there any time soon.
As a general case, your best bet is probably to use PSRemoting from Linux to a Windows machine that has the modules you want, then either use Implicit Remoting (Import-PSSession) or just straight up make remote calls with Invoke-Command.
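For example, a rough sketch of that approach from PowerShell on the Linux side (the host name and credentials are placeholders, and depending on your setup the session may go over WSMan or SSH):

$cred = Get-Credential
$session = New-PSSession -ComputerName dns01.example.com -Credential $cred

# Implicit remoting: proxy the DnsServer cmdlets into the local session
Import-PSSession -Session $session -Module DnsServer
Get-DnsServerZone   # now runs transparently against the remote host

# Or just make the remote call directly
Invoke-Command -Session $session -ScriptBlock { Get-DnsServerZone }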

Related

How does RunKit make their virtual servers?

There are many websites providing cloud coding, such as Cloud9 and repl.it. They must use server virtualisation technologies. For example, Cloud9's workspaces are powered by Docker Ubuntu containers; every workspace is a fully self-contained VM (see details).
I would like to know if there are other technologies for making sandboxed environments. For example, RunKit seems to have a lightweight solution:
It runs a completely standard copy of Node.js on a virtual server created just for you. Every one of npm's 300,000+ packages are pre-installed, so try it out.
Does anyone know how RunKit achieves this?
You can see more in "Tonic is now RunKit - A Part of Stripe!" (see discussion):
we attacked the problem of time traveling debugging not at the application level, but directly on the OS by using the bleeding edge virtualization tools of CRIU on top of Docker.
The details are in "Time Traveling in Node.js Notebooks"
we were able to take a different approach thanks to an ambitious open source project called CRIU (which stands for checkpoint and restore in user space).
The name says it all. CRIU aims to give you the same checkpointing capability for a process tree that virtual machines give you for an entire computer.
This is no small task: CRIU incorporates a lot of lessons learned from earlier attempts at similar functionality, and years of discussion and work with the Linux kernel team. The most common use case of CRIU is to allow migrating containers from one computer to another.
The next step was to get CRIU working well with Docker
Part of that setup is being open-sourced, as mentioned in this Hacker News thread.
It uses Linux containers, currently powered by Docker.
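To get a feel for the mechanism, here's a rough sketch using Docker's experimental checkpoint support, which relies on CRIU being installed (the container and checkpoint names are made up, and the Docker daemon must have experimental features enabled):

# assuming a running container named node-sandbox,
# freeze its entire process tree to disk
docker checkpoint create node-sandbox before-eval

# later, resume the container exactly where it left off
docker start --checkpoint before-eval node-sandbox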

WebLogic: Mixed Windows and Linux Domain

The project I am currently working on has a mix of legacy software and new development. The new dev work is being done on Linux and we have created a large domain on the Linux side. However, all of the legacy software must remain on Windows...
I haven't found any documentation indicating a mixed domain is possible although I can't see why the node managers or servers would have a problem communicating.
Can I add a Windows managed server to my Linux domain? Has anyone ever tried this? I can leave the domains separate if need be (although management won't be happy) but I was tasked with consolidating everything into a single domain.
If you don't have an exact answer, any links to documentation would be appreciated.
I do not have practical experience with running such a mixed-OS domain, but I do not see why it should not conceptually work.
WebLogic runs on Java, so that should work on both platforms.
The only problem you may experience is that if the domain was created for a particular OS, its startup scripts will be either .sh scripts for Linux or .cmd scripts for Windows. In this case, you will probably need to get the startup scripts for the other OS and slightly modify them to match your target domain.
WebLogic is supported on both platforms, and startup scripts are provided for both Windows and Linux.
The protocol the servers communicate over is not, as far as I know, platform-specific, so there's no reason for this not to work.
There doesn't seem to be any documentation on this, however, so you need to just go for it.
We've got this up and running... it wasn't all that bad. Here's what we did:
Create a domain on Linux (NFS)
Add WebLogic .cmd start/stop scripts into the <domain home>/bin folder
On the Windows side:
Create a symlink under C: to the NFS domain location
mklink /D folder_name \\OUR-NFS01\path\to\domain
Update nodemanager.properties and nodemanager.domains to use the symlink path
Update nodemanager.properties to use our startManagedWebLogic.cmd for the start script
Update all of the .cmd files to reference the symlink path to the domain (e.g. DOMAIN_HOME)
Make sure in nodemanager.properties and .cmd files we reference the correct Windows JAVA_HOME location
Make sure any paths in the admin console (e.g. log file location) for the Windows managed server also reference the symlink path
That was it. Once we had the Windows nodemanager up and running we were able to start a managed server on the Windows host.
Side note: We had issues running the Node Manager as a Windows service when using mapped network drives. The service would not always see the mapped drive. That is why we chose to use a symlink instead (and it seems cleaner to me anyway).
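For reference, the Windows-side configuration amounts to something like the sketch below (the symlinked path C:/wl_domain, the domain name and the JDK location are illustrative, not taken from the post above):

# nodemanager.domains: map the domain name to the symlinked path
mydomain=C:/wl_domain

# nodemanager.properties on the Windows host
JavaHome=C:/Java/jdk1.8.0
StartScriptEnabled=true
StartScriptName=startManagedWebLogic.cmd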
The most recent WebLogic documentation is quite clear on this. A domain can mix hardware, operating system and JVM as long as all of them are supported:
Hardware, Operating System, and JVM Platform Compatibility
Oracle does recommend using homogeneous clusters, as managed servers are expected to be equivalent to each other; if this is not the case, it may negatively impact load balancing and performance (see the above link).

Using Vagrant, why is Puppet provisioning better than a custom packaged box?

I'm creating a virtual machine to mimic our production web server so that I can share it with new developers to get them up to speed as quickly as possible. I've been through the Vagrant docs; however, I do not understand the advantage of using a generic base box and provisioning everything with Puppet versus packaging a custom box with everything already installed and configured. All I can think of is:
Advantages of using Puppet vs custom packaged box
Easy to keep everyone up to date - ability to put manifests under version control and share the repo so that other developers can simply pull new updates and re-run puppet i.e. 'vagrant provision'.
Environment is documented in the manifests.
Ability to use puppet modules defined in the production environment to ensure identical environments.
Disadvantages of using Puppet vs custom packaged box
Takes longer to write the manifests than to simply install and configure a custom packaged box.
Building the virtual machine the first time would take longer using puppet than simply downloading a custom packaged box.
I feel like I must be missing some important details; can you think of any more?
Advantages:
As dependencies may change over time, building a new box from scratch will involve either manually removing packages or throwing the box away and repeating the installation process by hand all over again. You could obviously automate the installation with a Bash script or some other type of script, but you'd be making calls to the native OS package manager, meaning it will only run on the operating system of your choice. In other words, you're boxed in ;)
As far as I know, Puppet (like Chef) contains a generic and operating system agnostic way to install packages, meaning manifests can be run on different operating systems without modification.
Additionally, those same scripts can be used to provision the production machine, meaning that the development machine and production will be practically identical.
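As a rough illustration (the package and service names here are hypothetical, not from the question), the manifest that 'vagrant provision' applies could be as small as:

# manifests/default.pp
# package/service resources are OS-agnostic; Puppet picks the right
# provider (apt, yum, ...) for whatever guest it runs on
package { 'ntp':
  ensure => installed,
}

service { 'ntp':
  ensure  => running,
  enable  => true,
  require => Package['ntp'],
}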
Disadvantages:
Having to learn another DSL, when you may not be planning on ever switching your OS or production environment. You'll have to decide if the advantages are worth the time you'll spend setting it up. Personally, I think that having an abstract and repeatable package management/configuration strategy will save me lots of time in the future, but YMMV.
One great advantage not explicitly mentioned above is the fact that you'd be documenting your setup (properly), and your documentation will be the actual setup, not a (one-time) description of how things were or may have been intended to be.

Use "apt" or compile from scratch for a web service?

For the first time, I am writing a web service that will call upon external programs to process requests in batch. The front-end will accept file uploads and then place them in a queue. The workers on the backend will take that file, run it through ffmpeg and the rest of my pipeline, and send an email when the process is complete.
I have my backend process working on my computer (Ubuntu 10.04). The question is: should I try to re-create that pipeline using binaries that I've compiled from scratch? Or is it okay to use apt when configuring in The Real World?
Not all hosting services use Ubuntu, and not all give me root access. (I haven't chosen a host yet.) However, they will let me upload binaries to execute, and many give me shell access with gcc.
Usually this would be a no-brainer and I'd compile it all from scratch. But doing so - not to mention trying to figure out how to create a platform-independent .tar.gz binary - will be quite a task which ultimately doesn't really help me ship my product.
Do you have any thoughts on the best way to set up my stack so that I'm not tied to a specific hosting provider? Should I try creating my own .deb, which contains Ubuntu's version of ffmpeg (and other tools) with the configurations I need?
Short of a setup where I manage my own servers/VMs (which may very well be what I have to do), how might I accomplish this?
The question is: should I try to re-create that pipeline using binaries that I've compiled from scratch? Or is it okay to use apt when configuring in The Real World?
It's the reverse: it is not okay to deploy unpackaged software in The Real World, IMHO.
and not all give me root access
How would you be deploying a .deb without root access? Chroot jails?
But doing so - not to mention trying to figure out how to create a platform-independent .tar.gz binary - will be quite a task which ultimately doesn't really help me ship my product.
+1 You answer your own question. Don't meddle unless you have to.
Do you have any thoughts on the best way to set up my stack so that I'm not tied to a specific hosting provider?
Only depend on well-packaged standard libs (such as ffmpeg); otherwise, include them in your own deployment. This problem hasn't been too hard to solve for tens of thousands of Linux applications over decades now, so it will probably be feasible for you too.
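As a sketch of that idea (the paths and file names are made up), the worker can prefer the distro's well-packaged tool and fall back to a binary shipped with your own deployment:

# use the packaged tool if the host provides it...
if command -v ffmpeg >/dev/null 2>&1; then
    FFMPEG=ffmpeg
else
    # ...otherwise fall back to a binary bundled with the application
    FFMPEG="$HOME/app/vendor/ffmpeg"
fi
"$FFMPEG" -i upload.mov -vcodec libx264 output.mp4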
Out of the box:
Look at RightScale and other cloud providers/agents that have specialized images/tool chains, especially for video encoding.
A 'regular' VPS provider (with Xen or Virtuozzo) will not normally be happy with these kinds of workloads, but EC2, Rackspace and their lot will be absolutely fine with that.
In general, I wouldn't expect a cloud infrastructure provider that doesn't grant root access to allow computationally intensive workloads. $0.02

Running external code in a restricted environment (linux)

For reasons beyond the scope of this post, I want to run external (user submitted) code similar to the computer language benchmark game. Obviously this needs to be done in a restricted environment. Here are my restriction requirements:
Can only read/write to current working directory (will be large tempdir)
No external access (internet, etc)
Anything else I probably don't care about (e.g., processor/memory usage, etc).
I myself have several restrictions. A solution which uses standard *nix functionality (specifically RHEL 5.x) would be preferred, as then I could use our cluster for the backend. It is also difficult to get software installed there, so something in the base distribution would be optimal.
Now, the questions:
Can this even be done with externally compiled binaries? It seems like it could be possible, but also like it could just be hopeless.
What about if we force the code itself to be submitted, and compile it ourselves? Does that make the problem easier or harder?
Should I just give up on home directory protection, and use a VM/rollback? What about blocking external communication (isn't the VM usually talked to over a bridged LAN connection?)
Something I missed?
Possibly useful ideas:
rssh. Doesn't help with compiled code though
Using a VM with rollback after code finishes (can network be configured so there is a local bridge but no WAN bridge?). Doesn't work on cluster.
I would examine and evaluate both a VM and a special SELinux context.
I don't think you'll be able to do what you need with simple file system protection, because you won't be able to prevent access to syscalls that allow access to the network, etc. You can probably use AppArmor to do what you need, though. It hooks into the kernel and confines the foreign binary.
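A rough sketch of such an AppArmor profile (the binary path and job directory are hypothetical, and note that stock RHEL ships SELinux rather than AppArmor, so this particular route needs a distribution that provides it):

#include <tunables/global>

/usr/local/bin/user-submitted-binary {
  #include <abstractions/base>

  # no network access of any kind
  deny network,

  # read/write only inside the per-job working directory
  /srv/jobs/** rw,
}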

Resources