I'm currently writing a Node library to execute untrusted code within Docker containers. It basically maintains a pool of running containers and provides an interface to run code in one of them. Once an execution is complete, the corresponding container is destroyed and replaced by a new one.
The four main classes of the library are:
Sandbox. Exposes a constructor with various options including the pool size, and two public methods: executeCode(code, callback) and cleanup(callback)
Job. A class with two attributes, code and callback (to be called when the execution is complete)
PoolManager, used by the Sandbox class to manage the pool of containers. Provides the public methods initialize(size, callback) and executeJob(job, callback). It has internal methods related to the management of the containers (_startContainer, _stopContainer, _registerContainer, etc.). It uses an instance of the dockerode library, passed in the constructor, to do all the Docker-related work.
Container. A class with the attributes tmpDir, dockerodeInstance, and IP, and a public method executeCode(code, callback) which basically sends an HTTP POST request to ContainerIP:3000/compile along with the code to compile (a minimalist API runs inside each Docker container).
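To give an idea, executeCode is essentially a thin HTTP client around that endpoint; a simplified sketch (the real class also holds tmpDir and dockerodeInstance, and the payload shape here is illustrative):

const http = require('http');

class Container {
  constructor(ip) { this.IP = ip; }

  // Simplified: POST the code to the in-container API on port 3000 and pass
  // the response body to the callback. Error handling is stripped down.
  executeCode(code, callback) {
    const payload = JSON.stringify({ code });
    const req = http.request({
      host: this.IP,
      port: 3000,
      path: '/compile',
      method: 'POST',
      headers: { 'Content-Type': 'application/json', 'Content-Length': Buffer.byteLength(payload) }
    }, (res) => {
      let body = '';
      res.on('data', (chunk) => { body += chunk; });
      res.on('end', () => callback(null, body));
    });
    req.on('error', callback);
    req.write(payload);
    req.end();
  }
}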
In the end, the final users of the library will only be using the Sandbox class.
Now, my question is: how should I test this?
First, it seems pretty clear to me that I should begin by writing functional tests against my Sandbox class:
it should create X containers, where X is the required pool size
it should correctly execute code (including the security aspects: handling timeouts, fork bombs, etc. which are in the library's requirements)
it should correctly cleanup the resources it uses
But then I'm not sure what else it would make sense to test, how to do it, and if the architecture I'm using is suitable to be correctly tested.
Any idea or suggestion related to this is highly appreciated! :) And feel free to ask for a clarification if anything looks unclear.
Christophe
Try and separate your functional and unit testing as much as you can.
If you make a minor change to Sandbox's constructor, I think testing will become easier: have Sandbox take a PoolManager directly. Then you can mock the PoolManager and test Sandbox in isolation, which appears to be just creating Jobs, calling the PoolManager for containers, and cleaning up. Ok, now Sandbox is unit tested.
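Roughly something like this, as a Mocha-style sketch; the option and method names (poolManager, executeJob) follow the description above and are only assumptions about the real code:

// Sketch: unit-testing Sandbox with a fake PoolManager injected through the constructor.
const assert = require('assert');
const Sandbox = require('../lib/sandbox'); // path is illustrative

describe('Sandbox', () => {
  it('wraps the code in a Job and delegates it to the pool manager', (done) => {
    const fakePool = {
      initialize: (size, cb) => cb(null),
      executeJob: (job, cb) => {
        assert.strictEqual(job.code, 'print(42)');
        cb(null, 'stdout of the run');
      }
    };
    const sandbox = new Sandbox({ poolSize: 1, poolManager: fakePool });
    sandbox.executeCode('print(42)', (err, result) => {
      assert.ifError(err);
      assert.strictEqual(result, 'stdout of the run');
      done();
    });
  });
});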
PoolManager may be harder to unit test, as the dockerode client might be hard to mock (the API is fairly big). Whether you mock it or not, you'll want to test the following (a rough sketch of one such test follows the list):
Growing/shrinking the pool size correctly
Testing sending more requests than available containers in the pool
How stuck containers are handled, both when starting and when stopping
Handling of network failures (easier when you mock things)
Retries
Any other failure cases you can think of
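As a concrete example of the first point, here is a rough sketch with a stubbed dockerode; only the calls PoolManager actually makes need to be stubbed, and createContainer/start below are just guesses at what those are:

// Sketch: a PoolManager pool-size test with a stubbed dockerode instance.
const assert = require('assert');
const PoolManager = require('../lib/pool-manager'); // path is illustrative

describe('PoolManager', () => {
  it('starts as many containers as the requested pool size', (done) => {
    let created = 0;
    const fakeDockerode = {
      createContainer: (opts, cb) => {
        created++;
        // Minimal fake container exposing just what the pool needs.
        cb(null, {
          start: (cb2) => cb2(null),
          remove: (removeOpts, cb2) => cb2(null)
        });
      }
    };
    const pool = new PoolManager(fakeDockerode);
    pool.initialize(3, (err) => {
      assert.ifError(err);
      assert.strictEqual(created, 3);
      done();
    });
  });
});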
The Container can be tested by firing up the API from within the tests (in a container or locally). If it's that minimal, recreating it should be straightforward. Once you have that, it sounds like you're really just testing an HTTP client.
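For example, something along these lines, where the test spins up a throwaway HTTP server that stands in for the in-container API (the Container constructor arguments are assumptions):

// Sketch: exercising Container's HTTP client against a local stub server
// listening on the same port and endpoint described in the question.
const http = require('http');
const assert = require('assert');
const Container = require('../lib/container'); // path is illustrative

describe('Container', () => {
  it('POSTs code to /compile and returns the response body', (done) => {
    const server = http.createServer((req, res) => {
      assert.strictEqual(req.method, 'POST');
      assert.strictEqual(req.url, '/compile');
      res.end('compiled output');
    });
    server.listen(3000, '127.0.0.1', () => {
      const container = new Container({ ip: '127.0.0.1' });
      container.executeCode('print(42)', (err, result) => {
        server.close();
        assert.ifError(err);
        assert.strictEqual(result, 'compiled output');
        done();
      });
    });
  });
});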
The source code for the actual API within the container can be tested however you like with standard unit tests. Because you're dealing with untrusted code, there are a lot of possibilities:
Doesn't compile
Never completes execution
Never starts
All sorts of bombs
Uses all of the host's disk space
Is a bot and talks over the network
The code could do basically anything. You'll have to pick the things you care about. Try and restrict everything else.
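One natural place to enforce those restrictions is when the pool creates its containers; dockerode lets you pass resource limits straight through to the Docker API. A sketch (the image name and limit values are illustrative, not recommendations):

// Sketch: creating pool containers with restrictive limits via dockerode.
const Docker = require('dockerode');
const docker = new Docker();

docker.createContainer({
  Image: 'code-runner',            // hypothetical per-execution image
  HostConfig: {
    Memory: 128 * 1024 * 1024,     // hard RAM cap (bytes)
    PidsLimit: 64,                 // blunt mitigation against fork bombs
    CpuPeriod: 100000,
    CpuQuota: 50000                // roughly half a CPU per container
  }
}, (err, container) => {
  if (err) throw err;
  container.start((startErr) => {
    // hand the started container over to the pool
  });
});

Disk and network restrictions (for example a size-limited volume for tmpDir, or an internal-only Docker network) deserve the same treatment, depending on what you decide you care about.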
Functional tests are going to be important too; there are a lot of pieces to deal with here, and mocking Docker isn't going to be easy.
Code isolation is a difficult problem; I wish Docker was around last time I had to deal with it. Just remember that your customers will always do things you didn't expect! Good luck!
I am trying to lift an OptaPlanner project into the cloud as an Azure Function. My goal in this would be to enhance the scaling so that our company can process more solutions in parallel.
Background: We currently have a project running in a Docker container using the optaplanner-spring-boot-starter MVN package. This has been successful when limited to solving one solution at a time. However, we need to dramatically scale the system so that a higher number of solutions can be solved in a limited time frame. Therefore, I'm looking for a cloud-based solution for the extra CPU resources needed.
I created an Azure Function using the optaplanner-core MVN package and our custom domain objects for our existing solution as a proof of concept. The Azure Function uses an HTTP trigger; this seems to work to get a solution, but the performance is seriously degraded. I expect to need to upgrade the consumption plan so that we can specify CPU and memory requirements. However, it appears that Azure is not scaling out additional instances as expected, leading to OptaPlanner blocking itself.
Here is the driver of the code:
#FunctionName("solve")
public HttpResponseMessage run(
#HttpTrigger(name = "req", methods = {HttpMethod.POST },authLevel = AuthorizationLevel.FUNCTION)
HttpRequestMessage<Schedule> request,
final ExecutionContext context) {
SolverConfig config = SolverConfig.createFromXmlResource("solverConfig.xml");
//SolverManagerConfig managerConfig = new SolverManagerConfig().withParallelSolverCount("2");
//SolverManagerConfig managerConfig = new SolverManagerConfig().withParallelSolverCount("10");
//SolverManagerConfig managerConfig = new SolverManagerConfig().withParallelSolverCount("400");
SolverManagerConfig managerConfig = new SolverManagerConfig().withParallelSolverCount("AUTO");
SolverManager<Schedule, UUID> solverManager = SolverManager.create(config ,managerConfig);
SolverJob<Schedule, UUID> solverJob = solverManager.solve(UUID.randomUUID(), problem);
// This is a blocking call until the solving ends
Schedule solution = solverJob.getFinalBestSolution();
return request.createResponseBuilder(HttpStatus.OK)
.header("Content-Type", "application/json")
.body(solution)
.build();
}
Question 1: Does anyone know how to set up Azure so that each HTTP call causes a scale-out of a new instance? I would like this to happen so that each solver isn't competing for resources. I have tried to configure this by setting FUNCTIONS_WORKER_PROCESS_COUNT=1 and maxConcurrentRequests=1. I have also tried changing OptaPlanner's parallelSolverCount and moveThreadCount to different values without any noticeable difference.
Question 2: Should I be using Quarkus with Azure instead of the core MVN package? I've read that Geoffrey De Smet answered, "As for AWS Lambda (serverless): Quarkus is your friend".
I'm out of my element here as I haven't coded with Java for over 20 years AND I'm new to both Azure Functions and OptaPlanner. Any advice would be greatly appreciated.
Thanks!
Consider using OptaPlanner's Quarkus integration to compile natively. That is better for serverless deployments because it dramatically reduces the startup time. The README of the OptaPlanner quickstarts that use Quarkus explains how.
By switching from OptaPlanner in plain Java to OptaPlanner in Quarkus (which isn't a big difference), a few magical things will happen:
The parsing of solverConfig.xml with an XML parser won't happen at runtime during bootstrap, but at build time. If it's in src/main/resources/solverConfig.xml, Quarkus will automatically pick it up to configure the SolverManager to inject.
No reflection at runtime
You will want to start one run per dataset, so parallelSolverCount shouldn't be higher than 1 and no run should handle two datasets (not even sequentially). If a run gets 8000 cpuMillis, you can use moveThreadCount=4 for it to get better results faster. If it only gets 1000 cpuMillis (= 1 core), don't use move threads. Verify that a run gets enough memory.
As for your Question 1, unfortunately I don't have a solution for Azure Functions, but let me point you to a blog post about running (and scaling) OptaPlanner workloads on OpenShift, which could address some of your concerns at the architecture level.
Scaling is only static for now (the number of replicas is specified manually), but it can be paired with KEDA to scale based on the number of pending datasets.
Important to note, the optaplanner-operator is only experimental at this point.
I have two Azure Functions that I think of as a "Producer-Consumer" pair. One is an "HttpTrigger"-based Function (the producer) which can be fired randomly. It writes the input data into a static "ConcurrentDictionary". The second one is a "Timer Trigger" Azure Function (the consumer). It periodically reads the data from the same "ConcurrentDictionary" used by the producer function app and then does some processing.
Both functions are within the same .NET project (but in different classes). The in-memory data sharing through the static "ConcurrentDictionary" works perfectly fine when I run the application locally. While running locally, I assume that they are running under the same process. However, when I deploy these Functions to the Azure Portal (they are in the same Function App resource), I found that data sharing through the static "ConcurrentDictionary" is not working.
I am just curious to know whether, in the Azure Portal, both Functions have their own process (probably that's why they are not able to share an in-process static collection). If that is the case, what are my options to make these two Functions work as a proper "Producer-Consumer"? Will keeping both Functions in the same class help?
The scenario is probably just the opposite of what is described in this post: https://stackoverflow.com/questions/62203987/do-azure-function-from-same-app-service-run-in-same-instance. Unlike the question in that post, I would like both Functions to use the same static member of a static class instance.
I am sorry that I cannot experiment too much because the deployment is done through an Azure DevOps pipeline, and too many check-ins to the repository are slightly inconvenient. As I mentioned, it works well locally, so I don't know how to recreate what's happening in the Azure Portal in a local environment in order to try different options. Is there anything configurable that I am missing?
Don't do that. Use an Azure queue, Event Grid, Service Bus, or something else that is reliable, but don't try using a shared object. It will fail as soon as a scale-out happens or as soon as one of the processes dies. Think of Functions as independent pieces and do not try to go against the framework.
Yes, it might work when you run the functions locally, but then you are running on a single machine and the runtime might use the same process; once deployed, that ain't true anymore.
If you really, really don't want to decouple your logic into fully separated producer and consumer functions, then write a single function that uses an in-process queue or collection and have that function deal with the processing.
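For example, with a storage queue the producer/consumer split survives separate processes and scale-outs. Sketched here with the JavaScript @azure/storage-queue SDK; the same idea applies with Azure.Storage.Queues or queue-trigger bindings in .NET, and the queue name and connection setting are illustrative:

// Sketch: replacing the shared static dictionary with an Azure Storage queue.
const { QueueClient } = require('@azure/storage-queue');

const queue = new QueueClient(process.env.AZURE_STORAGE_CONNECTION_STRING, 'work-items');

// Producer (the HTTP-triggered function): enqueue instead of writing to a static.
async function produce(item) {
  await queue.createIfNotExists();
  await queue.sendMessage(JSON.stringify(item));
}

// Consumer (the timer-triggered function): drain the queue instead of reading a static.
async function consume() {
  const { receivedMessageItems } = await queue.receiveMessages({ numberOfMessages: 16 });
  for (const msg of receivedMessageItems) {
    const item = JSON.parse(msg.messageText);
    // ... process item ...
    await queue.deleteMessage(msg.messageId, msg.popReceipt);
  }
}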
We have done some experiments with the continuation-local-storage, and it seems to solve a specific problem nicely.
What makes me nervous is that it relies on a polyfill of process.addAsyncListener that seems to have been deprecated ages ago.
This makes me worry a bit about using it for production code, so is it safe to use?
Our particular code base still has to run on node 6 on some servers for some time yet (it runs on node 8 on most servers).
The problem
This module helps us create an execution context that is kept during async operations.
What we specifically add to this context are bunyan child loggers, initialized with context data, such as request ID/message ID/correlation ID.
This helps us avoid passing tramp data through the layers, while still being able to track all log messages associated with a specific request.
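Concretely, our usage looks roughly like this (simplified, with illustrative names):

// Sketch of the pattern: middleware opens a CLS context per request and stores a
// bunyan child logger in it; deeper layers fetch the logger from the namespace
// instead of receiving it as a parameter.
const cls = require('continuation-local-storage');
const bunyan = require('bunyan');

const ns = cls.createNamespace('request-context');
const baseLogger = bunyan.createLogger({ name: 'app' });

// Express-style middleware
function requestContext(req, res, next) {
  ns.bindEmitter(req);
  ns.bindEmitter(res);
  ns.run(() => {
    ns.set('logger', baseLogger.child({ requestId: req.headers['x-request-id'] }));
    next();
  });
}

// Anywhere deeper in the call stack, even after async hops:
function getLogger() {
  return cls.getNamespace('request-context').get('logger') || baseLogger;
}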
I am building a small utility that packages Locust, a performance testing tool (https://locust.io/), and deploys it on Azure Functions. Just a fun side project to get some hands-on experience with the serverless craze.
Here's the git repo: https://github.com/amanvirmundra/locust-serverless.
Now I am thinking that it would be great to run the Locust test in distributed mode on a serverless architecture (Azure Functions consumption plan). Locust supports distributed mode, but it needs the slaves to communicate with the master using its IP. That's the problem!
I can provision multiple functions, but I am not quite sure how I can make them talk to each other on the fly (without manual intervention).
Thinking out loud:
Somehow get the IP of the master function and pass it on to the slave functions. I'm not sure if that's possible in Azure Functions, but some people have figured out a way to get the IP of an Azure Function using .NET libraries. Mine is a Python version, but I am sure that if it can be done using .NET, there would be a Python way as well.
Create some sort of a VPN and map a function to a private IP. Not sure if this sort of mapping is possible in azure.
Someone has done this using AWS Lambdas (https://github.com/FutureSharks/invokust). Ask that person or try to understand the code.
Need advice in figuring out what's possible at the same time keeping things serverless. Open to ideas and/or code contributions :)
Update
This is the current setup:
The performance test session is triggered by an HTTP request, which takes in the number of requests to make, the base URL, and the number of concurrent users to simulate.
The Locustfile defines the test setup and orchestration.
Run.py triggers the tests.
What I want to do now is to have a master/slave setup (cluster) for a massive-scale perf test.
I would imagine that the master function is triggered by an http request, with a similar payload.
The master will in turn trigger slaves.
When the slaves join the cluster, the performance session would start.
What you describe doesn't sound like a good use case for Azure Functions.
Functions are supposed to be:
Triggered by an event
Short running (max 10 minutes)
Stateless and ephemeral
That said, Functions are indeed good for load testing, but the setup should be different:
You define a trigger for your Function (e.g. HTTP, or Event Hub)
Each function execution makes a given amount of requests, in parallel or sequentially, and then quits
There is an orchestrator somewhere (e.g. just a console app), who sends "commands" (HTTP call or Event) to trigger the Function
So, Functions are "multiplying" the load as per schedule defined by the orchestrator. You rely on Consumption Plan scalability to make sure that enough executions are provisioned at any given time.
The biggest difference is that function executions don't talk to each other, so they don't need IPs.
I think the AWS Lambda-based example you mention is just calling Lambdas too; it does not set up master and slave Lambdas talking to each other.
I guess my point is that you might not need that Locust framework at all, and instead leverage the built-in capabilities of autoscaled FaaS.
The SPOJ is a website that lists programming puzzles, then allows users to write code to solve those puzzles and upload their source code to the server. The server then compiles that source code (or interprets it if it's an interpreted language), runs a battery of unit tests against the code, and verifies that it correctly solves the problem.
What's the best way to implement something like this - how do you sandbox the user input so that it can not compromise the server? Should you use SELinux, chroot, or virtualization? All three plus something else I haven't thought of?
How does the application reliably communicate results outside of the jail while also ensuring that the results are not compromised? How would you prevent, for instance, an application from writing huge chunks of nonsense data to disk, or other malicious activities?
I'm genuinely curious, as this just seems like a very risky sort of application to run.
A chroot jail executed from a limited user account sounds like the best starting point (i.e. NOT root or the same user that runs your webserver)
To prevent huge chunks of nonsense data being written to disk, you could use disk quotas or a separate volume that you don't mind filling up (assuming you're not testing in parallel under the same user - or you'll end up dealing with annoying race conditions)
If you wanted to do something more scalable and secure, you could use dynamic virtualized hosts with your own server/client solution for communication - you have a pool of 'agents' that receive instructions to copy and compile from X repository or share, then execute a battery of tests and log the output back via the same server/client protocol. Your host process can watch for excessive disk usage and report warnings if required, the agents may or may not execute the code under a chroot jail, and if you're super paranoid you would destroy the agent after each run and spin up a new VM when the next sample is ready for testing. If you're doing this at large scale in the cloud (e.g. 100+ agents running on EC2) you only ever have enough spun up to accommodate demand and therefore reduce your costs. Again, if you're going for scale you can use something like Amazon SQS to buffer requests, or if you're doing an experimental sample project then you could do something much simpler (just think of distributed parallel processing systems, e.g. SETI@home).