For Laravel Vapor we can define multiple queues.
Example from the official docs:
id: 2
name: vapor-laravel-app
environments:
  production:
    queues:
      - emails
      - invoices
We can also define queue concurrency.
Example from the official docs:
id: 2
name: vapor-laravel-app
environments:
  production:
    queue-concurrency: 50
    build:
      - 'composer install --no-dev'
Can we define concurrency for each queue separately?
Something like this is what I'm expecting:
id: 2
name: vapor-laravel-app
environments:
  production:
    queues:
      emails:
        queue-concurrency: 50
      invoices:
        queue-concurrency: 20
Reply received from the official Laravel Vapor developers (2022-05-10):
It is not possible at the moment.
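A possible workaround, not confirmed by the Vapor docs and only a sketch: since queue-concurrency is set per environment, you could split the queues across two environments, each with its own concurrency. The extra environment name below is made up, and this assumes a second deployment per queue is acceptable for your app:
id: 2
name: vapor-laravel-app
environments:
  production:
    queues:
      - emails
    queue-concurrency: 50
  production-invoices:
    queues:
      - invoices
    queue-concurrency: 20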
Related
I am new to Datadog and getting back into working with Windows Servers. I am trying to push Event Viewer logs (Security, System, etc.) to Datadog Logs. I have been successful in setting it up (I used their documentation: https://docs.datadoghq.com/integrations/win32_event_log/) and am getting logs into Datadog for that server for System and Security:
logs:
  - type: windows_event
    channel_path: "System"
    source: "System"
    service: System_Event
  - type: windows_event
    channel_path: "Security"
    source: "Security"
    service: Security_Event
I know that you can push items from the Event Viewer to Events in DD by using Instances, and you can be more granular there. But I want that granularity in the logs section, since we rarely view Events. Right now it is showing me all the items in the logs, success, etc. I am looking to get only the Errors and Warnings piped to Logs.
Thanks for the help.
D
I came across the same problem and came up with the config below, which excludes "Information" events.
- type: windows_event
  channel_path: System
  source: System
  service: eventlog
  log_processing_rules:
    - type: exclude_at_match
      name: exclude_information_event
      pattern: ^.*[Ll]evel.*Information.*
Vincent
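If you prefer the allow-list version (only Errors and Warnings reach Logs, which is what the question asks for), the same log_processing_rules mechanism also supports include_at_match. The pattern below is an assumption about how the level appears in the rendered event text, so verify it against your own logs:
- type: windows_event
  channel_path: Security
  source: Security
  service: eventlog
  log_processing_rules:
    - type: include_at_match
      name: include_error_warning_events
      pattern: '[Ll]evel.*(Error|Warning)'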
In the official documentation here, pod.spec.container.resources.limits is defined as follows:
"Limits describes the maximum amount of compute resources allowed."
I understand that k8s prohibits a pod from consuming more resources than specified in limits.
The documentation does not say that a pod will not be scheduled on a node that does not have the amount of resources specified in limits.
For instance, if each node in my cluster has 2 cpus, and I try to deploy a pod defining a cpu limit of 3, my pod will never run and will stay in status Pending.
Here is the example template, mypod.yml:
apiVersion: v1
kind: Pod
metadata:
  name: secondbug
spec:
  containers:
    - name: container
      image: nginx
      resources:
        limits:
          cpu: 3
Is this behaviour intended and why?
There are 3 kinds of pods: pods with no resource definition at all, burstable pods which have both limits and requests, and a third kind which has only limits or only requests defined.
For the third case, if you define only limits or only requests, the other one is derived from what you defined, meaning:
resources:
  limits:
    cpu: 3
Is actually:
resources:
  limits:
    cpu: 3
  requests:
    cpu: 3
So you can't really define one without the other. In your case, you want to define requests explicitly so they aren't copied from the limits.
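For example, here is a minimal sketch of the pod above with an explicit request that fits the 2-cpu nodes; the scheduler only checks requests against node capacity, so the limit of 3 no longer blocks scheduling:
apiVersion: v1
kind: Pod
metadata:
  name: secondbug
spec:
  containers:
    - name: container
      image: nginx
      resources:
        requests:
          cpu: 1 # checked at scheduling time; fits a 2-cpu node
        limits:
          cpu: 3 # enforced at runtime only, not at scheduling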
I'm trying to set up Alertmanager in a simple setup where it would send one Slack notification for each notification it receives.
I hoped to disable grouping by removing the group_by configuration.
The problem is that when I send 2 alerts one after the other, even though Alertmanager shows the 2 alerts as 'Not Grouped', in Slack I get one message for the first alert and then a second message where the 2 alerts are grouped.
Here is the config.yml:
route:
  receiver: default-receiver
  group_wait: 1s # 30s
  group_interval: 1s # 5m
  # repeat_interval: 10m
  # group_by: [cluster, alertname]
receivers:
  - name: default-receiver
    slack_configs:
      - channel: "#alerts-test"
Any ideas?
From the Prometheus documentation for configuration:
You can use group_by: ['...'] in your Alertmanager as a solution.
However, this was introduced in v0.16. For more info, see this GitHub issue.
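Applied to the config above, that would look like the following; group_by: ['...'] treats every distinct label set as its own group, so each alert produces its own Slack message:
route:
  receiver: default-receiver
  group_by: ['...'] # requires Alertmanager v0.16 or later
receivers:
  - name: default-receiver
    slack_configs:
      - channel: "#alerts-test"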
I have been using the Serverless Framework (1.61.0). I have many, many scheduled events that sync data from another source. For instance, I am syncing Category entities within one Lambda function.
categories:
  handler: functions/scheduled/categories/index.default
  name: ${self:provider.stage}-categories-sync
  description: Sync categories
  events:
    - schedule:
        name: ${self:provider.stage}-moe-categories
        rate: rate(1 hour)
        enabled: true
        input:
          params:
            appId: moe
            mallCode: moe
            scheduled: true
So for this worker, I have another 15 scheduled events. Each is created as a new resource on CloudWatch, which makes the stack really big. We are exceeding the CloudWatch Events limit even after increasing it by submitting a limit increase request to AWS.
Is there any way to define multiple targets for the same CloudWatch Events rule? So that instead of defining
lambda_func_count (15) x event_count (15) x stage_count (dev, staging, prod) resources on CloudWatch, we could just define one event rule with multiple targets for the individual Lambda functions.
Currently, this is supported in the AWS console, but I couldn't find a way to achieve it with the Serverless Framework.
One way to help mitigate this issue is to not use the same AWS account for all your stages. Take a look at the AWS Organizations feature, which helps you create sub-accounts under a master account; if you use Serverless Framework Pro, even on the free tier, you can easily have specific stages deploy to specific AWS accounts. Each sub-account has its own set of resources that don't affect other accounts. You could take this even further if you have multiple ways of breaking things across accounts; perhaps you can break it up per Category?
Here is an example of a single CloudWatch rule with multiple targets (each either an AWS Lambda function or a Lambda alias):
"LCSCombinedKeepWarmRule2":{
"Type":"AWS::Events::Rule",
"Properties": {
"Description":"LCS Keep Functions Warm Rule",
"ScheduleExpression": "rate(3 minutes)",
"State":"ENABLED",
"Targets":[
{
"Arn":{"Fn::GetAtt":["CheckCustomer","Arn"]},
"Id":"CheckCustomerId"
},
{
"Arn":{"Fn::GetAtt":["PatchCustomerId","Arn"]},
"Id":"PatchCustomerId"
},
{
"Arn":{"Ref":"GetTierAttributes.Alias"},
"Id":"GetTierAttributes"
},
{
"Arn":{"Ref":"ValidateToken.Alias"},
"Id":"ValidateTokenId"
},
{
"Arn":{"Ref":"EventStoreRecVoucher.Alias"},
"Id":"EventStoreRecVoucherId"
}
]
}
},
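Note that each Lambda target also needs a permission that allows CloudWatch Events to invoke it, or the rule fires without effect. Here is a sketch for the first target (the resource name below is made up):
"CheckCustomerInvokePermission": {
  "Type": "AWS::Lambda::Permission",
  "Properties": {
    "FunctionName": { "Fn::GetAtt": ["CheckCustomer", "Arn"] },
    "Action": "lambda:InvokeFunction",
    "Principal": "events.amazonaws.com",
    "SourceArn": { "Fn::GetAtt": ["LCSCombinedKeepWarmRule2", "Arn"] }
  }
}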
I'm trying to set up a Message Hub topic as an event source for the cloud function like so:
custom:
  org: MyOrganization
  space: dev
  mhServiceName: my-kafka-service
functions:
  main:
    handler: src/handler.main
    events:
    - message_hub:
      package: /${self:custom.org}_${self:custom.space}/Bluemix_${self:custom.mhServiceName}_Credentials-1
      topic: test_topic
When I deploy the service, no triggers or rules are created, so the function is not invoked when I publish messages to the Kafka topic.
I also tried to explicitly set a trigger and rule, but that only creates a trigger of type custom instead of type message hub. Custom triggers seem not to work in this scenario.
What am I missing?
Update
As James pointed out, the reason the triggers and rules were not created was that the indentation wasn't correct.
I was still running into problems with the package not being found (see my reply to James's solution) when trying to deploy the function, and I've found out what the problem was.
It turns out you have to do two more things that are not explicitly mentioned in the documentation:
1) You have to manually create service credentials (the documentation assumes you called them Credentials-1, so I did the same).
2) You have to bind Kafka (Message Hub, now called Event Streams) to your function in your serverless.yml.
The resulting function definition should look like this:
functions:
  main:
    handler: src/handler.main
    bind:
      - service:
          name: messagehub
          instance: ${self:custom.mhServiceName}
    events:
      - message_hub:
          package: /${self:custom.org}_${self:custom.space}/Bluemix_${self:custom.mhServiceName}_Credentials-1
          topic: test_topic
The YAML indentation in the serverless.yml is incorrect. This means the event properties aren't registered by the framework during deployment.
Change the serverless.yml file to the following format and it should work.
custom:
  org: MyOrganization
  space: dev
  mhServiceName: my-kafka-service
functions:
  main:
    handler: src/handler.main
    events:
      - message_hub:
          package: /${self:custom.org}_${self:custom.space}/Bluemix_${self:custom.mhServiceName}_Credentials-1
          topic: test_topic