Sandbox/JRE limitations of CloudBees?

I'm going to start developing a Java web app that I believe I will be deploying to CloudBees, but am concerned about what JRE/sandbox restrictions may apply.
For instance, with Google App Engine, you're not allowed to execute any methods from java.io.File or java.net. You're not allowed to start threads without using their custom ThreadFactory. You're not allowed to use JNDI or JMX, or make calls to remote RDBMSes hosted on third-party machines. You're not allowed to use reflection. With GAE, there's a lot you're not allowed to do.
Do these same restrictions hold true for CloudBees? I'm guessing no, as I just read their entire developer docs and didn't run across anything of the sort.
However, what happens if my app tries to write to the local file system when deployed to their servers? They must have certain restrictions as to what can run on their machines, if for no other reason than security!
So I ask: what are these restrictions, or where can I find them listed in their docs? Thanks in advance!

Last I checked (a) there is no sandbox; (b) you can write to the local filesystem, but any files you write there may be discarded if the application is reprovisioned for any reason, i.e. use it for temporary files only. (An optional permanent file store service has been considered as a feature useful for certain applications.)

Using the temp directory for Azure Functions

I have a set of Azure Functions running on the same host, which scales up to many instances at times. I'd like to store a very small amount of ephemeral data (a few KB) and opportunistically share that data between function executions. I know that the temp directory is only available to the functions running on that same instance. I also know that I could use the home directory, Durable Functions, or other Azure storage (such as Blob storage) to share data between all functions persistently.
I have two main questions:
What are the security implications of using the temp directory? Who can access its contents outside of the running function?
Is this still a reasonable solution? I can't find much in the way of Microsoft documentation outside of what looks like some outdated Kudu documentation here.
Thanks!
Answer to Question 1
Yes, it is secure. The Functions host process runs inside a sandbox. All data stored under D:\local is self-contained and isolated to the processes within the sandbox. See https://github.com/projectkudu/kudu/wiki/Azure-Web-App-sandbox
Answer to Question 2
The data in D:\local\Temp exists only as long as the Functions host process is alive. The host process can be recycled at any time due to unexpected events such as unhandled exceptions, timeouts, or hitting the resource usage limits of your plan. As long as your workflow accounts for the fact that data stored in D:\local\Temp is ephemeral, then the answer is yes.
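For illustration, a minimal TypeScript sketch of this kind of ephemeral, instance-local caching might look like the following (the file name and helper functions are assumptions for the example, not from the question; on Linux plans the temp path differs):

```typescript
// Minimal sketch (not the poster's code): caching a small piece of data in the
// instance-local temp directory. On a Windows Consumption/App Service plan,
// os.tmpdir() resolves to a folder under D:\local\Temp, which is private to the
// sandbox and lost whenever the host is recycled or the app scales to a new instance.
import * as fs from "fs";
import * as os from "os";
import * as path from "path";

const cacheFile = path.join(os.tmpdir(), "my-func-cache.json"); // hypothetical file name

export function readCache(): Record<string, unknown> | null {
  try {
    return JSON.parse(fs.readFileSync(cacheFile, "utf8"));
  } catch {
    return null; // missing or unreadable: treat the cache as cold
  }
}

export function writeCache(data: Record<string, unknown>): void {
  // Write to a temporary name first, then rename over the cache file, so
  // concurrent executions on the same instance never see a half-written file.
  const tmp = cacheFile + ".tmp";
  fs.writeFileSync(tmp, JSON.stringify(data));
  fs.renameSync(tmp, cacheFile);
}
```

Anything that must survive a host recycle or be visible to other instances should go to Blob storage or another shared store instead.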
I believe this will answer your question; please refer to this for more details.
Also, when folders/files are created via code inside the Temp folder, you cannot view them when you visit the Kudu site, but you can still use those files/folders.
How can you view those files/folders from Kudu?
You will need to add WEBSITE_DISABLE_SCM_SEPARATION = true in Configuration (application settings).
Note: the main site and the SCM (Kudu) site do not share temp files, so if you write some files there from your site, you will not see them from the Kudu console (and vice versa).
You can make them use the same temp space if you disable separation (via WEBSITE_DISABLE_SCM_SEPARATION).
But note that this is a legacy flag, and its use is not recommended/supported.
(ref: shared document link)
Security implications depend on the level of isolation you are seeking.
In a shared App Service plan or the Consumption plan, you need to trust the sandbox isolation. This is not an isolated micro-VM like AWS Lambda.
If you have your own App Service plan, then you need to trust the VM/hypervisor isolation of that plan.
If you are really paranoid or running a healthcare application, then you likely need to run your function in an App Service Environment (ASE) plan.
A reasonable solution is one where the cost does not exceed the worth of the data you are protecting. :)

node.js on shared server space

I am playing with Node.js and Angular for the first time. I use shared server hosting space and am trying to get some Node.js tests running.
cPanel seems to provide an interface to deploy Node applications. Example:
application url: myurl.com
application root: node-hello-world
application startup file: app.js
This seems to create a directory and some artifacts in
/home/myurl/nodevenv/node-hello-world/6/bin
I have (limited?) shell access through cPanel's shell emulation; however, I get an error on the source command.
source activate
gives the error: jailshell: fork: Cannot allocate memory
Does this mean Node.js is installed and ready to run? Do I have to upload the project as well? Where to? I'm trying to find more info on the process of deploying to this type of server, if possible.
Sorry for the noob question.
Googling your error message, I came across this thread -- which admittedly is very old, but it's from cPanel and has the following comment from an administrator at the time:
Jailshell is a constrained environment by design. It is not meant to be a replacement for a full-featured, unrestricted shell environment, such as is provided by Bash. If your users need such full-featured environments then perhaps they need full shell access, or another method whereby they can accomplish their goal.
That answer was given in 2006 (yes, 13 years ago) but I have to imagine the spirit of that response is still true.
To be perfectly honest, I'd be afraid to use any shared hosting provider that would give you more than a very limited shell to use -- it opens the door to many security vulnerabilities, and if multiple customers are in the same runtime (i.e. shared hosting) it could be catastrophic. Maybe your host does allow this, or maybe what you're describing isn't actually the same thing I'm referring to... you didn't offer a lot of details on this point.
Back to your question: Does this mean node.js is installed and ready to run? Do I have to upload project as well? Where to?
If I had to guess, Node probably isn't installed (it isn't in most shared hosting providers) -- but I can't say for sure based on the information you provided. My recommendation would be to call their customer support. Or pay for a dedicated hosting account where you get root access. Or just use something like Heroku.

Stateful behaviour with guest exe on service fabric

I have a guest executable that expects a folder name to be passed during startup, which it then uses as a "working" directory for writing data, logs, etc.
If I wanted to host this exe in Service Fabric and make it reliable and stateful, do I just pass the exe a UNC path to a common location that it would write to no matter which VM the instance was running on?
Or are there better ways of managing this?
That should work. Do make sure to replicate/back up the contents of the common drive. When using multiple service instances, you'll likely need to deal with file locking.
Also see if it's worth the effort (or even possible) to change the application to use the SDK and create a genuine Reliable Stateful Service from it. That will provide you with transactions, concurrency control, and data replication by default.

Creating a customized sandbox in node.js (Can only read in a certain directory, and cannot write anywhere)

I am trying to make an application that runs submitted scripts, and would like to try to sandbox those scripts. The scripts need to be able to read a certain directory (and all of its subdirectories), but shouldn't be able to write at all, and, other than being able to read, should not be able to do anything that could not be done in a browser (i.e. download files using HTTP). How would I go about doing this?
I don't think Node has this capability built in, but you should be able to run an "unsandboxed" Node on a *nix operating system as a severely restricted user (might be possible in other OSes too, I'm not sure). You might also want to look at Node's VM module.
Eventually, I decided on using the vm node module. I basically just made a namespace that the script running in the sandbox could use, which would filter out malicious requests / requests that ought to be out of the bounds of the sandbox. The namespace included the fs methods that were necessary, but refused to execute any of the ones that would modify any directory other than the one I wished the script to be able to modify.
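For concreteness, a rough TypeScript sketch of that approach might look like this (the directory, helper names, and timeout are illustrative assumptions, not the poster's actual code; note that the vm module by itself is not a hardened security boundary, so a hostile script may still be able to escape it):

```typescript
// Rough sketch of a filtered "namespace" exposed to submitted scripts.
import * as fs from "fs";
import * as path from "path";
import * as vm from "vm";

const ALLOWED_ROOT = path.resolve("/data/readonly"); // hypothetical read-only root

// Only resolve paths that stay inside the allowed directory tree.
function safePath(p: string): string {
  const resolved = path.resolve(ALLOWED_ROOT, p);
  if (resolved !== ALLOWED_ROOT && !resolved.startsWith(ALLOWED_ROOT + path.sep)) {
    throw new Error("Access outside the sandboxed directory is not allowed");
  }
  return resolved;
}

// The namespace available to submitted scripts: read-only fs methods only.
const sandboxFs = {
  readFileSync: (p: string) => fs.readFileSync(safePath(p), "utf8"),
  readdirSync: (p: string) => fs.readdirSync(safePath(p)),
  // No writeFileSync, unlinkSync, mkdirSync, etc. -- writes simply don't exist here.
};

export function runSubmittedScript(code: string): void {
  const context = vm.createContext({ fs: sandboxFs, console });
  // Time-box the script so it cannot loop forever.
  vm.runInContext(code, context, { timeout: 1000 });
}

// Example: the submitted script can list and read files, but has no way to write.
runSubmittedScript(`console.log(fs.readdirSync("."));`);
```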

Windows Azure and a third-party Windows Service

I am developing a website that I intend to run within Windows Azure using a single Web Role. The site will make use of the Sphinx Search engine, which will need to run as a Windows Service. So, my question is this: is it possible to install the Sphinx Search Windows Service inside of a Web Role?
From my initial research into Azure I am thinking "yes", because the Web Role is a VM running IIS. Therefore I should be able to remote in, install the service, and it should work. :)
Does this sound right?
Installing software via RDP is not a viable solution with Web/Worker role instances, as those changes won't persist. You need to install it either from a startup script or from OnStart(). Since you want to install it as a service, that implies a startup script, since it would need elevated permissions. Note: the installer must support unattended mode, where all parameters are specified via the command line with no human interaction.
What about scalability? If you have more than one instance of your web role running, can Sphinx run across two instances? From what I read, it supports ODBC-compliant databases, and you might be able to use it against Windows Azure SQL Database. If that's the case, can two Sphinx engines run on two different machines accessing the same data store? If so, this sounds like a viable solution.
If installation cannot be automated, or you need something additional like MySQL, you may want to consider placing the Sphinx search engine inside a Virtual Machine (new in June 2012). Now you can spin up a Windows Server 2008 instance, RDP into it, and configure it exactly how you want it.
Strictly speaking, yes, you could do that. However, this assumes that you would be running a single VM instance and that the instance would never need restarting.
You should consider looking at Azure Worker Roles for any functionality that would normally exist as a Windows service.
After reading your answers and thinking about it a bit more, I think dropping the idea of installing a service would be the best course of action. I've been looking at the API for Lucene.NET (this may be the same for Sphinx), and it's possible to encapsulate the writing/managing of indexes, etc., within code, so there is no need for a service.
For Azure, there is a library for managing index files using both local and Azure storage, which could be of use. Scenarios I've read about show that it's then possible to have a Web Role that processes HTTP requests and performs the searches, and a Worker Role that accepts DB changes via a queue and writes them to the indexes.
