The search time for Milvus is always 200ms - search

Milvus 2.0.0 server, local test, following the example on the official website: what could be the reason for a search execution time of 0.2 seconds? The pymilvus version is also 2.0.0.
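For reference, here is a minimal pymilvus 2.0.0 timing sketch, assuming the "book" collection from the official quick-start (the collection name, vector field, and dimension are assumptions; adjust them to your schema). Timing the first search after a load separately from later searches helps tell warm-up cost apart from steady-state latency:

import time

from pymilvus import connections, Collection

# Connect to the local Milvus 2.0.0 server (default host/port assumed).
connections.connect("default", host="127.0.0.1", port="19530")

# "book" / "book_intro" come from the official quick-start; adjust to your schema.
collection = Collection("book")
collection.load()  # make sure the data is loaded before timing searches

search_params = {"metric_type": "L2", "params": {"nprobe": 10}}
query_vectors = [[0.1, 0.2]]  # placeholder; dimension must match the schema

# Warm up once: the first search after a load is usually slower than later ones.
collection.search(query_vectors, "book_intro", search_params, limit=10)

start = time.perf_counter()
collection.search(query_vectors, "book_intro", search_params, limit=10)
print(f"steady-state search took {time.perf_counter() - start:.3f}s")

If the steady-state number is much lower than 0.2 seconds, the reported time likely includes one-off work such as loading or the first-search warm-up.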

Related

Google cloud run speed varies

Thanks for reading my question.
I've deployed my GCR image to Cloud Run. The app is based on FastAPI and was built on Cloud Build with a Dockerfile.
I've set the minimum number of instances to 2 to avoid cold starts and set the CPU to always allocated.
The problem is that the Cloud Run response time varies: sometimes a request takes 2 minutes and sometimes 15 minutes, and I can't tell why it takes 2 minutes in one case and 15 minutes in another.
I'm wondering whether this is a problem with Google Cloud's internal systems.
Please answer my question.
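One way to narrow down where the time goes is to measure request duration inside the app itself; if the app-side numbers stay low while clients see 15-minute responses, the variance is happening outside your code. A minimal FastAPI middleware sketch (the logger and header names are just illustrations):

import logging
import time

from fastapi import FastAPI, Request

app = FastAPI()
logger = logging.getLogger("timing")

@app.middleware("http")
async def log_request_time(request: Request, call_next):
    # Time how long the application itself spends on each request.
    start = time.perf_counter()
    response = await call_next(request)
    elapsed = time.perf_counter() - start
    logger.info("%s %s took %.2fs", request.method, request.url.path, elapsed)
    response.headers["X-App-Time"] = f"{elapsed:.2f}"  # illustrative header name
    return response

Comparing these numbers with the Cloud Run request logs should show whether the extra minutes are spent in the application or in the platform.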

Alternative to Cloud Tasks / Cron / Task Queue on GCP in Python 3 that doesn't have a 10 minute timeout

I've recently started using App Engine on Google Cloud Platform and have set up some cron jobs to get some scheduled work done. However, one of my tasks recently took more than 10 minutes and timed out... obviously I could break this work into batches or find another way around the problem, but I'd rather not have to always be mindful of how long a job might take, and I want future jobs to run until they complete or fail.
I've looked into various services that Google offers but with no success; Task Queue is Python 2.x only, and Cloud Tasks has the same 10 minute limit unless you manually manage scaling (which I would prefer to keep automatic, as that's the point of App Engine for me).
Am I missing something? This 10 minute limit seems like a big unnecessary blocker and I have no idea where to look.
https://cloud.google.com/tasks/docs/creating-appengine-handlers
Thanks for your time.
Google services such as App Engine are designed around a web server's HTTP request/response model. You are trying to use them as task-execution engines.
Use the correct service if you require long execution times, which usually means requests that take longer than a few minutes to complete: use Cloud Tasks and Compute Engine. Otherwise you will need to architect your application to fit App Engine's requirements and limitations.
Cloud Tasks for Asynchronous task execution
If you want to use App Engine, you need to use either basic scaling or manual scaling. I understand that manual scaling isn't your favorite; I don't like that mode either. But basic scaling is acceptable.
In addition, it is designed for background tasks, which is exactly what you are trying to achieve.
If you accept this change, you can use Cloud Tasks: you get a timeout of up to 24 hours when your App Engine service uses basic (or manual) scaling.
You can find the same information in the scaling description in the App Engine documentation.
When you use basic scaling, your instance class needs to be updated to BXXX.
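For illustration, a rough sketch of enqueuing a Cloud Task that targets an App Engine service configured with basic scaling, using the Python client (the project, location, queue, service, and handler path below are assumptions, not values from the question):

from google.cloud import tasks_v2

# Hypothetical names -- replace with your own project, queue, and service.
PROJECT, LOCATION, QUEUE = "my-project", "us-central1", "long-jobs"

client = tasks_v2.CloudTasksClient()
parent = client.queue_path(PROJECT, LOCATION, QUEUE)

task = {
    "app_engine_http_request": {
        "http_method": tasks_v2.HttpMethod.POST,
        "relative_uri": "/tasks/long-job",
        # Route to a service whose app.yaml uses basic scaling and a BXXX
        # instance class, which is what raises the deadline to up to 24 hours.
        "app_engine_routing": {"service": "worker"},
    }
}

response = client.create_task(request={"parent": parent, "task": task})
print("created task", response.name)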

How to run scripts for more than 15 mins in App Engine

So I wrote a Node script which fetches data from WP.org themes/plugins. The theme script takes around 4-5 hours to complete (scraping and inserting data into BigQuery).
The problem arises when I use Google App Engine to deploy the script: it works fine for 15 minutes and then it stops. Is there any way to increase the execution time of scripts in App Engine?
These scripts will run weekly or every fortnight and should run until they are done, but App Engine stops them after 15 minutes. They work fine on my localhost, so it's not an issue with Node.
The maximum allowed run time of a request depends on your selected scaling type, so it sounds like you will need to create a separate service to run this task with basic or manual scaling.
https://cloud.google.com/appengine/docs/standard/nodejs/how-instances-are-managed#scaling_types
You could also try breaking the work up into multiple 10-minute tasks and chaining them together, as in the sketch below.
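A rough sketch of that chaining pattern, shown in Python for brevity (the same idea applies in Node); fetch_items, insert_into_bigquery, and enqueue_task are hypothetical stand-ins for your own code:

CHUNK_SIZE = 500  # tune so one chunk finishes comfortably inside the request limit

def handle_scrape_task(offset: int) -> None:
    # Process one bounded slice of the work per request.
    items = fetch_items(offset, CHUNK_SIZE)      # hypothetical data access
    for item in items:
        insert_into_bigquery(item)               # hypothetical insert

    if len(items) == CHUNK_SIZE:
        # More work remains: enqueue the next slice as a fresh task
        # (e.g. via Cloud Tasks) instead of continuing in this request.
        enqueue_task("/tasks/scrape", {"offset": offset + CHUNK_SIZE})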

How can I decrease deployment time of Node app on Google App Engine

Right now the time is around 10 minutes, but my app spends 2 minutes on npm install, which App Engine runs on every deploy, and then starts in about 5 seconds. Why does it take so long, and are there any tricks that can be done to lower this?
I have heard elsewhere that this is because of changing routes and that Docker slows things down, but I would think a company like Google could manage to cut this down to at least a third of the current time.
There are some older questions, but I would like an up-to-date answer:
Google cloud deploy so slow
why does google appengine deployment take several minutes to update service
https://groups.google.com/forum/#!topic/google-appengine/hZMEkmmObDU
At the moment, App Engine flexible deployments are indeed quite slow, but as stated in the links you provided (this still holds true), most of the deployment time is incurred by actions you can't act upon (load balancer and network configuration, etc.). What you CAN do to speed it up is to:
limit the size of the app you're deploying
limit the complexity of the build necessary in the Dockerfile, if present
ensure you have a fast and reliable internet connection during deployment
Now, there is one option to bypass most of the setting-up overhead during development. You may specify an already existing version name as a parameter during deployment and also pass the --no-promote flag:
gcloud app deploy --version <existing-version-number> --no-promote
I've tried it myself and it drastically reduced the deployment time, to ~1m30s for a Hello World app. It does an in-place replacement of the existing version instead of creating a new one. Of course, most of the saved time comes from the skipped overhead, and you'll have to manually direct traffic to that version. Also, versioning clarity will obviously be impacted, which is why I wouldn't recommend it for production deployments.

Meteor Node Process CPU Usage Nears 100%

I'm having trouble with my Meteor app when it gets to its peak amount of traffic (peak for this is nothing, 1k visits, maybe 2,500 pageviews in a day). CPU usage spikes and never recovers, so I've taken to using Nodetime to monitor usage and I've been reloading the process (forever restart) to get things back to normal.
I'm fairly new to profiling, so finding the underlying cause has me at a loss for where to start. I'm fairly certain it has to do with my app's server code, but the profiling seems to point to the Fibers module as a "hotspot" which I understand aids in making my server code synchronous.
Below is a snippet from the profiling results. I hope someone can guide me in the right direction in troubleshooting this!
While I don't have a specific answer to your question, I have experience dealing with CPU issues for our production meteor app, so I can give you a list of things to investigate.
Upgrade to the latest version of meteor and the appropriate node version (see the changelog). As of this writing that's meteor 0.8.2 and node 0.10.28.
Read this and this article. The latter makes a great point that you really should always try to delay activation of subscriptions until you need them. In particular you may not need to publish anything for users who are not logged in. In my experience, meteor CPU problems have everything to do with subscriptions.
Be careful with observe and observeChanges. These are expensive and are easy to abuse. In particular:
Make sure you are calling stop() on your handles when they are no longer needed (consider using a package like publish-with-relations so this is done for you).
Fetch only the collections and fields that you absolutely need. Observe works by continually diffing objects (requires lots of CPU). The fewer and smaller objects you have, the less there is to compute.
Consider using smart-collections before it is retired. Use oplog tailing - this can make for a night and day difference in performance and CPU usage in your app.
Consider making some things not reactive (also mentioned in the articles above). For us that was a big win. We had one extremely expensive join that was used on two frequently accessed pages on the site. When it got to the point where the CPU was pegged at 100% about every 30 minutes I gave up on reactivity for that element and just did the join on the server and shipped the data to the client via a method call. I also created a server-side expiring cache for these results and stored them by user (special thanks to Matt DeBergalis for this suggestion).
Do a preventative nightly restart. I have a cron job that tells forever to restart our app once a day in the middle of the night. That brings the CPU down from ~10% to 1%. This seems like black magic, but the fact that the CPU usage changes after a reset leads me to believe this is a good idea.
Updated thoughts (1/13/14)
We migrated to oplog tailing as soon as it was available (meteor 0.7) and that made a big difference. Note that in order to get access to the oplog, you'll probably need to either host your own db or run a dedicated instance on the hosting provider of your choice. I'd also recommend adding the facts package to actually tell if it's working.
There was a memory leak discovered in publish-with-relations, and as of this writing the atmosphere version (v0.1.5) hasn't been bumped to reflect these changes. If you are using it in production, I strongly recommend checking out the HEAD version and running it locally.
We stopped doing nightly restarts a couple of weeks ago. So far everything has been fine (fingers crossed).
Updated thoughts (7/2/14)
A few months ago we switched over to using an Elastic Deployment on mongohq. It's affordable, the performance has been great, and they even have a blog post which tells you how to enable oplog tailing.
I'd strongly recommend checking out kadira to help diagnose performance issues in your app. Also check out the academy articles which have a number of good tips in them.
I'm also having this problem. Actually, there is an issue with 0.6.6.1; I ran meteor --release 0.6.6 and the CPU is back to normal now.
