I am working on a spaCy 3.1 NER pipeline.
I have created an application using FastAPI and am running it from a VM server, exposing the IP with the Uvicorn command [uvicorn main:app --host 0.0.0.0 --port 00].
Now, when I call my training API and start the training, none of the other APIs work; it seems like they are all queued.
I have used async and await.
Can anyone guide me on how to run multiple APIs while training is in progress?
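This symptom is typical when a blocking, CPU-bound call such as model training runs inside an async endpoint: it holds the event loop, so every other request waits behind it. A minimal sketch of one way to offload the work to a separate process, where train_ner() is a hypothetical wrapper around the blocking spaCy training call and the routes are illustrative:

import asyncio
from concurrent.futures import ProcessPoolExecutor

from fastapi import FastAPI

app = FastAPI()
# a single-worker process pool keeps training off the event loop
executor = ProcessPoolExecutor(max_workers=1)

def train_ner(config_path: str) -> None:
    # hypothetical wrapper around the blocking spaCy training call,
    # e.g. spacy.cli.train(config_path, ...)
    ...

@app.post("/train")
async def start_training(config_path: str):
    loop = asyncio.get_running_loop()
    # schedule the CPU-bound training in another process so the other
    # endpoints stay responsive while it runs
    loop.run_in_executor(executor, train_ner, config_path)
    return {"status": "training started"}

With this pattern the other routes keep answering while training runs; declaring the heavy endpoint as a plain def (so FastAPI runs it in its thread pool) also helps, but a separate process avoids the GIL entirely for CPU-bound work.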
Can someone please explain the main use cases to consider when deciding how to serve a model from MLflow:
(a) using the command line "mlflow models serve -m ...."
(b) deploying a local Docker container with the same model
(c) deploying the model online, for example on AWS SageMaker
I am mainly interested in the differences between options (a) and (b), because as I understand it both can be accessed as REST API endpoints. And I assume that if network rules are in place, both can also be called externally.
Imho, the main difference is described in the documentation:
NB: by default, the container will start nginx and gunicorn processes. If you don’t need the nginx process to be started (for instance if you deploy your container to Google Cloud Run), you can disable it via the DISABLE_NGINX environment variable
And mlflow models serve uses only Flask, so it could be less scalable.
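From a client's point of view, options (a) and (b) expose the same scoring route, /invocations; the difference is the server stack behind it (plain Flask vs. nginx plus gunicorn in the container). A rough sketch of a call, where the host, port and payload shape are assumptions for illustration:

import requests

# pandas "split"-style payload; newer MLflow versions expect it wrapped as
# {"dataframe_split": {...}}, so adjust to the version in use
payload = {"columns": ["feature_1", "feature_2"], "data": [[1.0, 2.0]]}

# port 5000 matches a default `mlflow models serve` run; for the Docker
# container, use whatever host port the container's port 8080 is mapped to
resp = requests.post(
    "http://localhost:5000/invocations",
    json=payload,
    headers={"Content-Type": "application/json"},
)
print(resp.status_code, resp.text)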
I have written a Python REST API using FastAPI. It connects to JanusGraph on a remote machine and runs some Gremlin queries using the GremlinPython API. While writing my unit tests with FastAPI's built-in test client, I cannot mock JanusGraph and test my APIs. In the worst case I need to run JanusGraph on Docker in my local setup and test there; however, I would like to do a pure unit test. I've not come across any useful documentation so far. Can anyone please help?
I think running Gremlin Server locally is how a lot of people do local testing. If you do not need to test data persistence you could configure JanusGraph to use the "inmemory" backend and avoid the need to provision any storage nodes.
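If a pure unit test without any running Gremlin Server is the goal, another option is to stub out the traversal source and drive the endpoints through FastAPI's test client. A minimal sketch, where app.main, app.graph, the module-level g traversal source and the /vertices/count endpoint are hypothetical names standing in for the real project layout:

from unittest.mock import MagicMock, patch

from fastapi.testclient import TestClient

from app.main import app  # hypothetical FastAPI application module

client = TestClient(app)

def test_vertex_count():
    fake_g = MagicMock()
    # stub the Gremlin traversal chain the endpoint uses,
    # e.g. g.V().count().next() -> 42
    fake_g.V.return_value.count.return_value.next.return_value = 42
    with patch("app.graph.g", fake_g):
        resp = client.get("/vertices/count")  # hypothetical endpoint
    assert resp.status_code == 200
    assert resp.json() == {"count": 42}

Because the traversal source is replaced before the request is made, GremlinPython never opens a connection, so the test runs without JanusGraph or Gremlin Server.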
I am working on the popular Mask R-CNN instance segmentation problem using TensorFlow and Keras. I am also using Django REST framework for the processing. When Celery receives the task it starts processing and gets stuck in the middle. For example, I am processing two images and getting inference from them, but after processing the first image Celery gets stuck.
I am using an Amazon EC2 g3s.xlarge instance with the Ubuntu Deep Learning AMI, Django REST framework, Celery, TensorFlow and Keras.
It is a big project, which is why I can't show the code.
I get inference from processing the two images, but it gets stuck in the middle.
I got the fix with this command:
celery -A prodapi worker -l info --without-gossip --without-mingle --without-heartbeat -Ofair --pool=solo
I am working on an AI image processing job where I am using Django REST framework, Python 3, TensorFlow and Keras, along with Celery to process asynchronous tasks. I am also using a Redis server. But while executing the Celery task, the worker receives the tasks but gets stuck in the middle. It happens all the time. I am trying to serve it on an Amazon EC2 g3s.xlarge instance, although it runs fine on my local machine.
I am trying to deploy it on an Amazon EC2 g3s.xlarge instance with the Deep Learning AMI (Linux) version.
from celery import shared_task

@shared_task(name="predict")
def work_out(cow_front_image, cow_back_image):
    # detect_cow_weight is the project's Mask R-CNN inference function,
    # defined elsewhere in the codebase
    return detect_cow_weight(cow_front_image, cow_back_image)
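For context, a task defined like this is typically enqueued from the view/API code and its result collected roughly as follows (a sketch; the file names and timeout are placeholders):

# send the task to the broker and block, with a timeout, for the result
async_result = work_out.delay("cow_front.jpg", "cow_back.jpg")
weight = async_result.get(timeout=600)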
This is a large project, so I am not sure how to show all the code here.
I repeat, it runs fine and quite comfortably on the local machine, and I used the same configuration as one of our existing production-grade served products.
I expect the Celery task to get executed: I pass two images as arguments, it processes the images and returns the result of what it has detected.
I got the fix: --pool=solo
celery -A prodapi worker -l info --without-gossip --without-mingle --without-heartbeat -Ofair --pool=solo
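For context, the solo pool runs tasks in the worker's main process instead of forked child processes, which avoids the fork-related issues TensorFlow/CUDA often hit under the default prefork pool. The same choice can be pinned in the Celery app configuration; a minimal sketch, where the app name and broker URL are assumptions for illustration:

from celery import Celery

app = Celery("prodapi", broker="redis://localhost:6379/0")

# equivalent of passing --pool=solo on the command line: run tasks in the
# main worker process instead of prefork children
app.conf.worker_pool = "solo"

Note that the solo pool handles one task at a time per worker, so overall throughput depends on how many worker processes are started.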
This might be very simple to fix, but it seems that I cannot deploy two Node.js OpsWorks layers on AWS. I need one layer for Node.js for my web front-end, and a middle tier that consumes messages from a queue. I have the web Node.js layer running, but now when I try to add a second Node.js layer, Node.js is not one of the options in the drop-down. Is this intentional? I've been forced to create a second app for my Node.js layer to deal with this, but it is an ugly solution, since by default the same Chef scripts run on all the Node.js instances and on my load balancing layer. Any help appreciated!
Creating a second App is the best way to go.
In your recipes, you can use stack configuration and deployment attribute values to see in which layer the current instance resides, and decide what you should do (if anything) when a configure/deploy lifecycle event runs.
On your front-end layer, you would deploy the front-end app and ignore the second app, and vice versa on your middle-tier layer.
On your load balancer, you would probably do nothing on deploy.
To identify which app is being deployed in a Deploy lifecycle event, you can leverage the search function within Chef and a "deploy" attribute placed on the application.
# find the app(s) being deployed in the current Deploy event
search(:aws_opsworks_app, "deploy:true").each do |app|
  # your deploy logic here, e.g. branch on app['shortname']
end