I have a service that displays logs from pods running in my Kubernetes cluster. I retrieve them via the Kubernetes /pods/{name}/log API. The logs tend to grow large, so I'd like to be able to paginate the response to avoid loading them whole every time. A result similar to how the Kubernetes dashboard displays logs would be perfect.
The dashboard, however, seems to solve the problem by running a separate backend service that loads the logs, chops them into pieces and prepares them for the frontend to consume.
I'd like to avoid that and use only the API with its query parameters, like limitBytes and sinceSeconds, but those seem to be insufficient to make proper pagination work.
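For reference, a minimal sketch (using the Python kubernetes client, assumed available; pod and namespace names are hypothetical) of what those query parameters give you today: you can bound the window with sinceSeconds, limitBytes and tailLines, but there is no real page cursor:

from kubernetes import client, config

config.load_kube_config()   # or config.load_incluster_config() when running inside the cluster
v1 = client.CoreV1Api()

# Reads a bounded chunk of the pod log via GET /api/v1/namespaces/{ns}/pods/{name}/log
chunk = v1.read_namespaced_pod_log(
    name="my-pod",              # hypothetical pod name
    namespace="default",
    tail_lines=1000,            # maps to ?tailLines=
    limit_bytes=512 * 1024,     # maps to ?limitBytes=
    since_seconds=3600,         # maps to ?sinceSeconds=
    timestamps=True,            # timestamps make it possible to fake "pages" by time window
)
print(len(chunk))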
Does anyone have a good solution for this? Or does anyone know whether Kubernetes plans to implement pagination in the logs API?
I have deployed a Python API into OCP with 3 pod replicas. All the incoming requests seem to be going to only one pod while the other 2 are idle all the time.
The configuration I have is:
haproxy.router.openshift.io/timeout: 1800s
haproxy.router.openshift.io/balance: roundrobin
haproxy.router.openshift.io/disable_cookies: "true"
I need help resolving this issue.
I tried changing the balance annotation above between leastconn and roundrobin; I don't see any difference.
I found the fix for my issue. I was actually making the API requests to these pods from another pod in the same namespace. I used the name of the service in my URL instead of the OCP route URL in the API call, e.g. http://ocpservicename:port
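For illustration, an in-cluster call to the Service DNS name goes through the Service's cluster IP and is spread across all ready pods, instead of going out through the HAProxy router; a minimal sketch (service name, port and path are hypothetical):

import requests

# Calling the ClusterIP Service by name: kube-proxy balances these requests across the 3 replicas.
resp = requests.get("http://ocpservicename:8080/api/health")
print(resp.status_code)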
Background
We have a PHP App Service and MySQL that is deployed using an Azure DevOps Pipeline (YML). The content itself is a PHP site that is packaged up into a single file using Akeeba by an external supplier. The package is a Zip file (which can be deployed as a standard Zip deployment) and inside the Zip file is a huge JPA file. The JPA is essentially the whole web site plus database tables, settings, file renames and a ton of other stuff all rolled into one JPA file. Akeeba essentially unzips the files, copies them to the right places, does all the DB stuff and so on. To kick the process off, we can simply connect to a specific URL (web site + path) and run the PHP, which does all the clever unpackaging via a web GUI. But we want to include this stage in the pipeline instead so that the process is fully automated end to end. Akeeba has a CLI as an alternative to the Web GUI deployment, so it should go like this:
Create web app
Deploy the web site ZIP (zipDeploy)
Use the REST API to access Kudu and run the relevant command (php install.php web.jpa) to unpack the jpa and do the MySQL stuff - this normally takes well over 30 minutes (it is a big site and it has a lot of "stuff" to do - but, it does actually work in the end).
The problem is that the SCM REST API has a hard-coded 230s limit as described here: https://blog.headforcloud.com/2016/11/15/azure-app-service-hard-timeout-limit/
So, the unpack stage keeps throwing "Invoke-RestMethod : 500 - The request timed out" exactly on the 230s mark.
We have tried SCM_COMMAND_IDLE_TIMEOUT and WEBJOBS_IDLE_TIMEOUT but, unsurprisingly, they did not make any difference.
# Hashtable describing the command to run, POSTed to the Kudu command API:
$cmd = @{ "command" = "php .\site\wwwroot\install.php .\site\wwwroot\web.jpa .\site\wwwroot" }
Invoke-RestMethod -Uri $url -Headers @{ "Authorization" = "Basic $creds" } -Body (ConvertTo-Json $cmd) -Method Post -ContentType "application/json" -TimeoutSec 7200
I can think of a few hypothetical ways around it (some quite eccentric):
Find another way to run CLI commands inside the Web App after deployment other than the Kudu REST API. Is there such a thing? I Googled and checked SO but all I found were pointers to the way we do it (or try to do it) now.
Use something like Selenium to click the GUI buttons instead of using the CLI. (I do not know if they would suffer a timeout.)
Instead of running the command via Kudu REST, use the same API to create and deploy a script to the web server, start it, and then let the REST API call exit whilst the script still runs on the Web App. Essentially, bodge an async call without the callback and have the pipeline check in on the site at, say, 5-minute intervals (see the sketch after this list). Clunky.
Extend the 230s limit - but I do not think that Microsoft make this possible.
Make the web site as fast as possible during the deployment in the hope of getting it under the 4-minute mark and then down-scale it. Yuk!
See what the Akeeba JPA unpacking actually does, unpack it pre-deployment and do what the unpackage process does but controlled via the Pipeline. This is potentially a lot of work and would lose the support of the supplier.
Give up on an automated deployment. That would rather defeat much of the purpose of a Devops pipeline.
Try AWS + Terraform instead. That's not an approved infrastructure environment, however.
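For what it's worth, a rough sketch of what the "bodge an async call" option above could look like from the pipeline side. It assumes a hypothetical wrapper script (start_unpack.cmd) has already been deployed that starts php install.php in the background and writes a status.txt under wwwroot when it finishes; all names and paths are illustrative:

import base64, time, requests

SCM = "https://<yourapp>.scm.azurewebsites.net"   # Kudu endpoint of the Web App
AUTH = {"Authorization": "Basic " + base64.b64encode(b"user:password").decode()}

# 1) Kick off the long-running unpack via the hypothetical wrapper, which returns immediately.
requests.post(SCM + "/api/command",
              json={"command": "start_unpack.cmd", "dir": "site\\wwwroot"},
              headers=AUTH, timeout=60)

# 2) Poll the (hypothetical) status file through the Kudu VFS API until the unpack reports done.
while True:
    r = requests.get(SCM + "/api/vfs/site/wwwroot/status.txt", headers=AUTH)
    if r.status_code == 200 and "done" in r.text:
        break
    time.sleep(300)   # check in every 5 minutes, as described above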
Given that Microsoft understandably do not want long-running API calls hanging around, I understand why the limit exists. However, I would therefore expect there to be some other mechanism for interacting with an App Service file system via a CLI. Does anyone know of one?
The 4-minute idle timeout is at the TCP level and is implemented on the Azure hardware load balancer. This timeout is not configurable and cannot be changed. One thing I want to mention is that this is an idle timeout at the TCP level, which means it is only hit if the connection is idle and no data transfer is happening. To provide more info, it will be hit if the web application receives the request and keeps processing it for more than 4 minutes without sending any data back.
Resolution
Ideally, in a web application, it is not good to keep the underlying HTTP request open, and 4 minutes is a decent amount of time. If you have a requirement for background processing within your web application, then the recommended solution is to use Azure WebJobs and have the Azure Web App interact with the Azure WebJob to be notified once the background processing is done (Azure provides many ways to do this, such as queue triggers, and you can choose the method that suits you best). Azure WebJobs are designed for background processing and you can do as much background processing as you want within them. I am sharing a few articles that talk about WebJobs in detail:
· http://www.hanselman.com/blog/IntroducingWindowsAzureWebJobs.aspx
· https://azure.microsoft.com/en-us/documentation/articles/websites-webjobs-resources/
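As a hedged illustration, a continuous WebJob can be as simple as a loop that polls a Storage queue and does the heavy work outside of any HTTP request; a minimal sketch using the azure-storage-queue package (the queue name, the connection-string setting and do_long_running_work are assumptions):

import os, time
from azure.storage.queue import QueueClient

def do_long_running_work(payload):
    # Placeholder for the actual background processing.
    print("processing", payload)

queue = QueueClient.from_connection_string(
    os.environ["AzureWebJobsStorage"],   # assumed connection-string app setting
    queue_name="background-jobs")        # hypothetical queue name

while True:                              # continuous WebJob: loop forever
    for msg in queue.receive_messages():
        do_long_running_work(msg.content)
        queue.delete_message(msg)        # remove only after successful processing
    time.sleep(10)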
============================================================================
It totally depends on the app. Message Queue comes to mind. There are a lot of potential solutions and it will be up to you to decide.
============================================================================
Option #1)
You can change the code to send some sort of "continue" header to the client to keep the session open.
A sample is shown here
This shows the HTTP Headers with the Expect 100-continue header:
https://msdn.microsoft.com/en-us/library/aa287673%28v=vs.71%29.aspx?f=255&MSPPError=-2147217396
This shows how to add a Header to the collection:
https://msdn.microsoft.com/en-us/library/aa287502(v=vs.71).aspx
Option #2) Progress bar
This sample shows how to use a progress bar:
https://social.msdn.microsoft.com/Forums/vstudio/en-US/d84f4c89-ebbf-44d3-bc4e-43525ae1df45/how-to-increase-progressbar-when-i-running-havey-query-in-sql-server-or-oracle-?forum=csharpgeneral
Option #3) A common practice to keep the connection active for a longer period is to use TCP keep-alive. Packets are sent when no activity is detected on the connection. By keeping ongoing network activity, the idle timeout value is never hit and the connection is maintained for a long period.
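For illustration, TCP keep-alive is enabled per socket by the client; a minimal sketch in Python (the TCP_KEEP* tuning options shown are Linux-specific, and the host/port are placeholders):

import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)      # turn keep-alive on
# Linux-specific tuning: start probing after 60s idle, probe every 30s, give up after 5 failed probes
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 30)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)
s.connect(("example.com", 443))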
Option #4) You can also try hosting your application in an IaaS VM instead of an App Service. This may avoid the ARR timeout issue because the architecture is different, and I believe the timeout is configurable there.
We have a .NET Core 3.0 solution running in a Docker image on Azure. For now, we haven't set it up in a k8s cluster. Our App Service plan is PremiumV2, which basically means that we're running on dedicated hardware and not sharing our resources with anyone else.
We have a simple API call to get the executing user based on the JWT. This validates the JWT, gets the user's mail from the claims and queries Cosmos to get more information about the user. When the request is sent from Postman the first time, it takes roughly 320 ms; subsequent requests take around 50 ms. If we then wait, let's say, 10 minutes more or less, the first request is back at around 300 ms and again the subsequent requests take around 50 ms. This indicates that the behavior is reproducible. It's worth mentioning that it's not only this call that shows this behavior; every "first" request to our API takes more time than the ones that follow.
Looking into Application Insights, apparently Cosmos is not the bottleneck here. We've also configured the App Service to be "Always On".
Any ideas on how we can track down this issue? Has anyone else experienced the same behavior? Are there any settings or configuration we should look at in Azure?
I have an Azure function app triggered by an HttpRequest. The function app reads the request, tosses one copy of it into a storage table for safekeeping and sends another copy to a queue for further processing by another element of the system. I have a client running an ApacheBench test that reports approximately 148 requests per second processed. That rate of processing will not be enough for our expected load.
My understanding of function apps is that it should spawn as many instances as is needed to handle the load sent to it. But this function app might not be scaling out quickly enough as it’s only handling that 148 requests per second. I need it to handle at least 200 requests per second.
I’m not 100% sure the problem is on my end, though. In analyzing the performance of my function app I found a LOT of 429 errors. What I found online, particularly https://learn.microsoft.com/en-us/azure/azure-resource-manager/resource-manager-request-limits, suggests that these errors could be due to too many requests being sent from a single IP. Would several ApacheBench 10K and 20K request load tests within a given day cause the 429 error?
However, if that’s not it, if the problem is with my function app, how can I force my function app to spawn more instances more quickly? I assume this is the way to get more throughput per second. But I’m still very new at working with function apps so if there is a different way, I would more than welcome your input.
Maybe the Premium app service plan that’s in public preview would handle more throughput? I’ve thought about switching over to that and running a quick test but am unsure if I’d be able to switch back?
Maybe EventHub is something I need to investigate? Is that something that might increase my apparent throughput by catching more requests and holding on to them until the function app could accept and process them?
Thanks in advance for any assistance you can give.
You don't provide much context about your app, but here are a few steps you can take to improve things:
If you want more control, use an App Service plan with Always On to avoid cold starts. You will also need to configure autoscaling, since you are responsible for scaling in this plan and autoscale is not enabled by default in an App Service plan.
Your Azure Function must be fully async: you have external dependencies, so you don't want to block a thread while calling them.
Look at the limits; you can tweak them using host.json.
A 429 error means the function is too busy to process your request, so probably when you are writing to the table you are not using async calls and are blocking a thread (see the sketch below).
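As a hedged illustration of what "fully async" could look like, here is a sketch of an HTTP-triggered function in Python (regardless of the language the real app uses); the table name, queue name, row-key scheme and connection setting are assumptions:

import os
import azure.functions as func
from azure.data.tables.aio import TableClient
from azure.storage.queue.aio import QueueClient

async def main(req: func.HttpRequest) -> func.HttpResponse:
    body = req.get_body().decode()

    # Await the storage calls instead of blocking a worker thread on them.
    async with TableClient.from_connection_string(
            os.environ["AzureWebJobsStorage"], table_name="requests") as table:
        await table.create_entity({
            "PartitionKey": "requests",
            "RowKey": req.headers.get("x-request-id", "unknown"),  # hypothetical key scheme
            "Payload": body,
        })

    async with QueueClient.from_connection_string(
            os.environ["AzureWebJobsStorage"], queue_name="to-process") as queue:
        await queue.send_message(body)

    return func.HttpResponse(status_code=202)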
Function apps work very well and scale as documented. It could be that, because the requests come from a single IP, Azure is treating them as a DDoS attack. You can do the following:
Azure DevOps Load Test
You can load test using one of the Azure services; I am fairly sure they have better criteria for handling IPs. See Azure DevOps Load Test.
Provision a VM in Azure
The way I normally do it is to provision a VM (Windows 10 Pro) in Azure and use JMeter to load test. I have used this method and it works fine. You can provision a couple of them and subdivide the load.
Use professional load testing services
If possible, you may use services like Loader.io. They use sophisticated algorithms to run the load test and provision a bunch of VMs to run the same test.
Use Application Insights
If you are not already, you should be using Application Insights to get a better look from the server's perspective. Go to the Live Metrics Stream and see how many instances are provisioned to handle the load test. You can easily look into the events and error logs that arise and investigate. You can also deep-dive into each associated dependency and investigate the problem.
As part of my next assignment, I need to prepare a scalable, fully concurrent Node.js architecture. I am confused by the Kubernetes/containers concepts and really need some help. And I cannot use any paid service! Just plain raw DO servers and load balancers.
Basically, I need a basic sketch/idea/explanation/pointers for the architecture, which should explain the API endpoints, data service connectivity and the data flows between database, server and client.
Here is what I have in my mind:
Client <-> NginX -> Nodejs <-> MongoDB
The above is a standard setup for Node.js-based REST APIs, I believe. Now how do I add scalability and concurrency to this?
Any help would be appreciated!
Let me give you a quick overview and after that just ask more questions in the comments of my answer if you need to know more.
You need a Docker image for each of your services:
You will need an nginx image which contains your frontend code. (https://serversforhackers.com/c/dckr-nginx-image)
You will need a Docker image which contains your backend code.
(https://nodejs.org/en/docs/guides/nodejs-docker-webapp/)
You will need a simple MongoDB base image.
(https://medium.com/@pablo_ezequiel/creating-a-docker-image-with-mongodb-4c8aa3f828f2)
Now, for beginners, I would go to Google Cloud Platform and set up a managed Kubernetes cluster. This is done in a minute and you will have a fully functional Kubernetes environment. (https://cloud.google.com/kubernetes-engine/docs/quickstart) - In the first year you will have $300 of free usage, so this is more than enough to play around and set up an environment for your assignment.
Now you will need an Ingress. The Ingress is the only access point to the Services you will later deploy on your cluster. Let's say your Ingress is listening on 14.304.233. When you request 14.304.233/customerBackend, it will redirect this request to the customerBackend Service (you need to define this, of course). More information here: https://kubernetes.io/docs/concepts/services-networking/ingress/#what-is-ingress
Now you need to deploy the images you created. In Kubernetes you have the concept of Pods (see here: https://kubernetes.io/docs/concepts/workloads/pods/pod/). Normally only one container runs in each Pod. Each Pod group (e.g. all Pods which have a Node container inside) has one so-called Service, which manages access to the pods. Let's say you want to have 3 instances of your NodeJS backend. Each of the 3 containers will run in an individual pod. If you want to send a request to the backend, it will go through the Service, which then redirects the request to one of the pods. When you need to scale, you simply deploy more pods; the Service automatically balances the load over the deployed pods.
How many pods you want deployed is defined in a so-called deployment.yaml
(see: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/).
This is very similar to a docker-compose.yaml but with some more attributes you can configure.
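To keep the examples in this thread in one language, here is a sketch of the same idea (3 backend replicas behind a Service) expressed with the official Python kubernetes client rather than a raw deployment.yaml; the image name, labels and ports are assumptions:

from kubernetes import client, config

config.load_kube_config()
labels = {"app": "node-backend"}   # hypothetical label tying the Deployment and Service together

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="node-backend"),
    spec=client.V1DeploymentSpec(
        replicas=3,                # scale out by raising this number
        selector=client.V1LabelSelector(match_labels=labels),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels=labels),
            spec=client.V1PodSpec(containers=[client.V1Container(
                name="node-backend",
                image="myrepo/node-backend:1.0",   # hypothetical image
                ports=[client.V1ContainerPort(container_port=3000)],
            )]),
        ),
    ),
)

service = client.V1Service(
    metadata=client.V1ObjectMeta(name="node-backend"),
    spec=client.V1ServiceSpec(
        selector=labels,           # the Service balances traffic across all pods matching these labels
        ports=[client.V1ServicePort(port=80, target_port=3000)],
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
client.CoreV1Api().create_namespaced_service(namespace="default", body=service)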