Running integration tests on an ephemeral server instance using Heroku CI - node.js

I'm trying to take advantage of this new feature of Heroku to test a parse-server/nodejs application that we have on Heroku, using mocha.
I was expecting Heroku to launch an ephemeral instance of my app along with the tests so that they could be run against it, but it doesn't seem like that's happening. Only the tests get launched.
Now, I found at least one snippet about configuring the Dyno formation to use dynos other than performance-m for the test, so I'm trying to declare my other dynos there as well:
"environments": {
"test": {
"scripts": {
"test-setup": "echo done",
"test": "npm run test"
},
"addons": [
{
"plan": "rediscloud:30",
"as": "REDISCLOUD_URL"
}
],
"formation": {
"test": {
"quantity": 1,
"size": "standard-1x"
},
"worker": {
"quantity": 1,
"size": "standard-1x"
},
"web": {
"quantity": 1,
"size": "standard-1x"
}
}
}
}
in my app.json, but it seems to be getting totally ignored.
I know my mocha script could import the relevant part of the web server and test against it, and that's what I've seen in the non-Heroku examples, but our app consists of a worker too, and I'd like to profile the interaction of both and test the job lengths against our performance expectations, rather than testing individual components, hence "integration tests". Is this a legitimate use of Heroku CI, or am I doing something wrong or holding the wrong expectations? I'm more concerned about this than about getting it to work, because I'm quite certain I could get it to work in a number of ways (mocha spawning the server processes, the npm concurrently package, etc.), but if I can avoid hacks, all the better.
Locally, I was able to get both imported in the test script, but performance is degraded since the web server, the worker, and the tests now all run in a single process, with Node.js's memory cap limitations and a single event loop instead of three. While writing this I'm thinking I could probably use throng and spawn different functions depending on the process ID. I'll try this if I don't get any better solutions.
Edit: I already managed to make it run by spawning the server and worker as separate processes in a before step in mocha, calculating the proper RAM amounts to allot to each using the env vars. I'm still interested in knowing whether there's a better solution.
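For reference, a minimal sketch of that before-step approach (file paths, ports, env var names, and memory splits are assumptions, not values taken from the question):

// test/setup.js
const { spawn } = require('child_process');

let web;
let worker;

before(async function () {
  this.timeout(30000);

  // spawn the web server and the worker as separate processes,
  // splitting the dyno's memory between them via --max-old-space-size
  web = spawn('node', ['--max-old-space-size=1024', 'index.js'], {
    env: { ...process.env, PORT: '5000' },
    stdio: 'inherit'
  });
  worker = spawn('node', ['--max-old-space-size=1024', 'worker.js'], {
    env: process.env,
    stdio: 'inherit'
  });

  // naive readiness wait; polling a health endpoint would be more robust
  await new Promise((resolve) => setTimeout(resolve, 5000));
});

after(() => {
  if (web) web.kill();
  if (worker) worker.kill();
});

With mocha, a file like this can be pulled in via --file test/setup.js so the hooks run before the suites.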

Related

Why doesn't Azure Functions use other instances in the app service plan to process data?

I have an Azure Functions durable task that fans out into 12 smaller tasks. I am using a dedicated plan, my maxConcurrentActivityFunctions is currently set to 4, and I have a total of 3 instances (P3v2 - 4 cores) in the app service plan.
My understanding is that I should be able to process 12 concurrent tasks, and each instance should use all of its CPU to process the job, because the job is CPU-bound.
But in reality, scaling doesn't improve the performance: all of the tasks go to a single instance. The other 2 instances just stay idle, despite the fact that the main instance is being totally tortured and its CPU usage always sits at 100%.
I am sure they go to the same instance because I can read that information from Log Analytics. Every log has the same host instance id; if I filter out that host instance id, no logs get returned at all.
I also tested making 3 separate calls, with 4 tasks in each. That doesn't seem to use 3 instances either. The metrics for the app service plan make it look like only 1 instance can be online at a time, despite having 3 instances available. The dashed line seems to mean "offline", because when I filter by instance, it just shows 0.
Here is the host.json file
{
  "version": "2.0",
  "functionTimeout": "01:00:00",
  "logging": {
    "logLevel": {
      "default": "Information"
    },
    "console": {
      "isEnabled": "false"
    },
    "applicationInsights": {
      "samplingSettings": {
        "ExcludedTypes": "Request",
        "isEnabled": true
      }
    }
  },
  "extensions": {
    "durableTask": {
      "hubName": "blah",
      "maxConcurrentActivityFunctions": 4,
      "maxConcurrentOrchestratorFunctions": 20
    }
  }
}
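For context, the fan-out described above typically looks something like the following orchestrator written with the durable-functions package for Node.js (a minimal sketch; "processChunk" is a hypothetical activity name, not one from the question):

const df = require("durable-functions");

module.exports = df.orchestrator(function* (context) {
  const tasks = [];
  for (let i = 0; i < 12; i++) {
    // each callActivity becomes one of the 12 smaller tasks
    tasks.push(context.df.callActivity("processChunk", i));
  }
  // fan-in: wait for all of the activities to complete
  const results = yield context.df.Task.all(tasks);
  return results;
});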
My expectation is that all 12 tasks should begin immediately, and all 3 instances should be busy processing the data, instead of only 1 instance running 4 concurrent tasks.
Am I doing anything wrong, or am I misunderstanding something here?
As far as I know, and as per the Microsoft documentation, multiple applications in the same app service plan will share all the instances you have in your premium plan.
For example, if the app service plan is configured to run multiple VM instances, then all the apps in the plan will run across those instances.
In your case, there is only one application, but that application has many sub-units (functions), so the application is using only one instance.
If you want to use all the instances, then try deploying multiple function apps into the same app service plan.
Also, you can use the scaling functionality, or configure autoscaling for the app service plan.

Jest tests leaking due to improper teardown

While testing with Jest I am getting a warning saying "A worker process has failed to exit gracefully and has been force exited. This is likely caused by tests leaking due to improper teardown. Try running with --detectOpenHandles to find leaks." I realize this is happening because inside one of my functions I use Bull (https://github.com/OptimalBits/bull), which uses Redis, so adding a task to the queue results in this warning. I use the default Bull configuration (no configuration). I do have a mock for the queue's add function which is used by Jest, but it didn't help:
const notificationQueue = {
  add: jest.fn().mockImplementation((data: any, opts?: JobOptions) => {}),
};
I'd like to know if there is a way to avoid this warning. If it helps, I use an in-memory Mongo for testing, but the Redis is a real one. As a side note, when I run each test suite separately I don't see this warning, only when I run all tests.
As suggested in the warning, add the --detectOpenHandles option to Jest's script in the package.json file:
"scripts": {
"test": "jest --watchAll --detectOpenHandles"
}
Don't forget to stop and then start the server!
That option can help whatever your problem is, but in your case the problem is coming from the Redis connection. You need to close Redis at the end of the tests:
import { redis } from "redis_file_path";
afterAll(async () => {
await redis.quit();
});
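If it is the Bull queue itself that holds the open Redis connections (as with the default configuration mentioned in the question), closing the queue in a teardown hook is another option. A minimal sketch, assuming the queue instance is exported from a hypothetical queues module:

import { notificationQueue } from "./queues"; // hypothetical module exporting the Bull queue

afterAll(async () => {
  // Queue#close() shuts down the queue's underlying Redis connections,
  // so Jest no longer sees them as open handles
  await notificationQueue.close();
});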

Docker exit status 1 for Node app on AWS

I'm hosting a beta app on AWS using Express.js, Node, Mongoose and Docker. Daily active users < 10, mainly friends of mine for testing. The app is down almost every day for some reason. Initially I thought it was an AWS issue, so I stopped my app, changed it from the free tier to t2.medium and started it again.
That didn't resolve the issue. I checked the Docker log for the container; it was not caused by OOMKilled.
"State": {
"Status": "exited",
"Running": false,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 0,
"ExitCode": 1,
"Error": "",
"StartedAt": "2017-03-22T00:51:59.234643501Z",
"FinishedAt": "2017-03-22T07:21:41.351927073Z"
},
"Config": {
...
"AttachStdin": false,
"AttachStdout": true,
"AttachStderr": true,
...
}
I could set Docker to always restart, but I want to figure out what the root cause is. Any suggestions?
That happens to everyone. Lots of things can kill Express applications, like weird HTTP requests. The Docker log should show the exception. Add an uncaughtException handler to log the issue.
process.on('uncaughtException', (e) => {
  console.error(e); // try console.log if that doesn't work
  process.exit(10);
});
If you can't find an error in the Docker log, then instead of logging to the console, maybe you can log to a file (make sure it is in a volume that stays between Docker runs though).
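For example, a minimal sketch of logging to a file on a mounted volume instead of the console (the path is just an example, not something from the question):

const fs = require('fs');

process.on('uncaughtException', (e) => {
  // append the stack trace to a file on a volume that outlives the container
  fs.appendFileSync('/var/log/myapp/uncaught.log',
    `${new Date().toISOString()} ${e.stack}\n`);
  process.exit(10);
});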
People don't like admitting this, but many applications actually just log the exception in the uncaughtException handler, eat it, and keep going without exiting. Because things like broken requests or other stuff that doesn't matter often kill the server, you can usually get away with this. But once in a while you will get burned by something strange happening to the server state that it can't recover from, and you will have no idea because you just ate the exception.
You may be able to have Docker auto-restart the app (https://docs.docker.com/docker-cloud/apps/autorestart/), which might be a good solution.
Otherwise, look into an example of using pm2 along with Docker if possible; pm2 will handle restarting for you.
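If you go the pm2 route, a minimal sketch of an ecosystem file could look like this (the app name and entry point are assumptions, not taken from the question):

// ecosystem.config.js
module.exports = {
  apps: [
    {
      name: 'beta-app',          // hypothetical app name
      script: './server.js',     // assumed entry point
      instances: 1,
      autorestart: true,         // pm2 restarts the process if it crashes
      max_memory_restart: '300M' // restart if memory usage grows past this
    }
  ]
};

The container would then be started with something like pm2-runtime start ecosystem.config.js instead of node.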

Service Fabric application package deployment Operation Timeout exception

I have a Service Fabric cluster with 3 nodes created on 3 systems, and they are interconnected. I am able to connect to each of the nodes. These nodes were created on Windows Server, and the Windows Server VMs are on-premises.
I am trying to deploy my package manually into my cluster (one of the nodes), and I am getting an Operation Timeout exception. I have used the commands below for the deployment.
Service Fabric PowerShell commands:
Copy-ServiceFabricApplicationPackage -ApplicationPackagePath 'c:\sample\etc' -ApplicationPackagePathInImageStore 'abc.app.portaltype'
After executing the above command, it runs for 2-3 minutes and throws an Operation Timeout exception. My package size is almost 250 MB and it contains approximately 15,000 files. I then passed an extra parameter, -TimeoutSec 600 (10 minutes), explicitly to the above command; it then executed successfully and the package was copied to the Service Fabric image store.
Register-ServiceFabricApplicationType -ApplicationPathInImageStore 'abc.app.portaltype'
After the Copy-ServiceFabricApplicationPackage command had executed, I ran the Register-ServiceFabricApplicationType command above to register my application type in the cluster, but it also throws an Operation Timeout exception. I then passed the extra -TimeoutSec 600 (10 minutes) parameter explicitly to this command as well, but no luck; it throws the same operation timeout exception.
Just to make sure whether the operation timeout issue is caused by the number of files in the package or not, I created a simple, empty Service Fabric ASP.NET Core app, created a package, and tried to deploy it to the same server using the commands above. It deployed within a fraction of a second and works smoothly.
Does anybody have any idea how to overcome the Service Fabric operation timeout issue?
How do you handle the operation timeout issue if the package contains a large set of files?
Any help/suggestion would be very much appreciated.
Thanks,
If this is taking longer than the default 10-minute maximum, it's probably one of the following issues:
Large application packages (>100s of MB)
Slow network connections
A large number of files within the application package (>1000s).
The following workarounds should help you.
Add the following settings to your cluster config:
"fabricSettings": [
{
"name": "NamingService",
"parameters": [
{
"name": "MaxOperationTimeout",
"value": "3600"
},
]
}
]
Also add:
"fabricSettings": [
{
"name": "EseStore",
"parameters": [
{
"name": "MaxCursors",
"value": "32768"
},
]
}
]
There are a couple of additional features which are currently rolling out. For these to be present and functional, you need to be sure that the client is at least 2.4.28 and that the runtime of your cluster is at least 5.4.157. If you're staying up to date, these should already be present in your environment.
For registration you can specify the -Async flag, which handles the registration asynchronously, reducing the required timeout to just the time necessary to send the command rather than to process the application package. You can also query the status of the registration with Get-ServiceFabricApplicationType. 5.5 fixes some issues with these commands, so if they aren't working for you, you'll have to wait for that release to hit your environment.

How to run a job through a queue in ArangoDB

I am moving from ArangoDB 2.5.7 to ArangoDB 3.1.7. I have managed to make everything work except the jobs. I looked at the documentation and I don't understand if I have to create a separate service just for that.
So, I have a Foxx application, myApp:
manifest.json
{
  "name": "myApp",
  "version": "0.0.1",
  "author": "Deepak",
  "files": {
    "/static": "static"
  },
  "engines": {
    "arangodb": "^3.1.7"
  },
  "scripts": {
    "setup": "./scripts/setup.js",
    "myJob": "./scripts/myJob.js"
  },
  "main": "index.js"
}
index.js
'use strict';
module.context.use('/one', require('./app'));
app.js
const createRouter = require('org/arangodb/foxx/router');
const controller = createRouter();
module.exports = controller;
const queues = require('@arangodb/foxx/queues');
const queue = queues.create('myQueue', 2);
queue.push({mount:"/myJob", name:"myJob"}, {"a":4}, {"allowUnknown": true});
myJob.js
const argv = module.context.argv;
var obj = argv[0];
console.log('obj:'+obj);
I get the following error:
Job failed:
ArangoError: service not found
Mount path: "/myJob".
I am not sure if I have to extract myJob into an external service. Can you help me? I don't see a complete example of how to do it.
To answer your question:
You do not have to extract the job script into a new service. You can specify the mount point of the current service by using module.context.mount.
You can find more information about the context object in the documentation: https://docs.arangodb.com/3.1/Manual/Foxx/Context.html
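Concretely, the hard-coded mount path in the push call from app.js can be replaced with the service's own mount (a minimal sketch based on the code above):

const queues = require('@arangodb/foxx/queues');
const queue = queues.create('myQueue', 2);

// module.context.mount resolves to wherever this service is mounted,
// so the "myJob" script is looked up in the current service
queue.push(
  { mount: module.context.mount, name: 'myJob' },
  { a: 4 },
  { allowUnknown: true }
);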
By the way, it's probably not a good idea to arbitrarily create jobs at mount-time. The common use case for the queue is to create jobs in route handlers as a side-effect of incoming requests (e.g. to dispatch a welcome e-mail on signup).
If you create a job at mount-time (e.g. in your main file or a file required by it), the job will be created whenever the file is executed, which will be at least once for each Foxx thread (by default ArangoDB uses multiple Foxx threads to handle parallel requests) or, when development mode is enabled, once per request (!).
Likewise, if you create a job in your setup script, it will be created whenever the setup script is executed, although this will only happen in one thread each time (but still once per request when development mode is active).
If you need e.g. a periodic job that lives alongside your service, you should put it in a unique queue and only create it in your setup script after checking whether it already exists.
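A minimal sketch of that setup-script pattern (the queue name and repeat interval are just examples, not taken from the question):

// scripts/setup.js
'use strict';
const queues = require('@arangodb/foxx/queues');
const queue = queues.create('myApp-jobs', 1);

// only schedule the job if it is not already queued,
// so repeated setup runs don't create duplicates
const alreadyQueued = queue.pending({
  name: 'myJob',
  mount: module.context.mount
}).length > 0;

if (!alreadyQueued) {
  queue.push(
    { mount: module.context.mount, name: 'myJob' },
    { a: 4 },
    { repeatTimes: Infinity, repeatDelay: 60 * 1000 } // hypothetical 1-minute interval
  );
}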
On the changes in the queue API:
The queue API changed in 2.6 due to a serious issue with the old API that would frequently result in pending jobs not being properly rescheduled when the ArangoDB daemon was restarted after a job had been pushed to the queue.
Specifically ArangoDB 2.6 introduced so-called script-based (rather than function-based) job types: https://docs.arangodb.com/3.1/Manual/ReleaseNotes/UpgradingChanges26.html#foxx-queues
Support for the old function-based job types was dropped in ArangoDB 2.7 and the cookbook recipe was updated to reflect script-based job types: https://docs.arangodb.com/2.8/cookbook/FoxxQueues.html
A more detailed description of the new queue can be found in the documentation: https://docs.arangodb.com/3.1/Manual/Foxx/Scripts.html

Resources