Ansible handler only runs once when notified from parameterized role - Linux

I have an Ansible playbook for some init services that are broadly similar, with a few tweaks. In the top-level playbook, I include the role twice, like this:
roles:
  - {role: "my-service", service: webserver}
  - {role: "my-service", service: scheduler}
The my-service role has tasks, which write the init scripts, and handlers, which (re)start the service. tasks/main.yml looks like this:
- name: setup init scripts
  template: src=../../service-common/templates/my-service.conf dest=/etc/init/my-{{ service }}.conf
  notify:
    - restart my services
and handlers/main.yml has this content:
- name: restart my services
  service: name=my-{{ service }} state=restarted
But after the playbook runs, we're left with only the webserver service running, and the scheduler is stop/waiting. How can I make the handler see these as two separate notifications to be handled?

The Ansible documentation states:
Handlers are lists of tasks, not really any different from regular tasks, that are referenced by a globally unique name.
So it doesn't make use of any parameters, variables, etc. when determining when/how to invoke a handler. Only the name is used.
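One possible workaround (an untested sketch; the host group app_servers is a placeholder, since the original playbook's hosts aren't shown) is to apply the role in two separate plays. Handlers are flushed, and de-duplicated, per play, so each play restarts its own service with its own value of service.
# Sketch of a top-level playbook split into two plays. Each play flushes
# its own handlers at the end, so the "restart my services" handler runs
# once per play with that play's value of the "service" parameter.
- hosts: app_servers
  roles:
    - {role: "my-service", service: webserver}

- hosts: app_servers
  roles:
    - {role: "my-service", service: scheduler}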

Related

prefect.io kubernetes agent and task execution

While reading the Kubernetes agent documentation, I got confused by this line:
"Configure a flow-run to run as a Kubernetes Job."
Does it mean that the process in charge of submitting the flow and communicating with the API server will run as a Kubernetes Job?
On the other side, the use case I am trying to solve is:
1. Set up the backend server
2. Execute a flow composed of 2 tasks
3. If k8s infrastructure is available, the tasks should be executed as Kubernetes Jobs
4. If only Docker infrastructure is available, the tasks should be executed as Docker containers
Can somebody suggest how to solve the above scenario in prefect.io?
That's exactly right. When you use KubernetesAgent, Prefect deploys your flow runs as Kubernetes jobs.
For #1 - you can do that in your agent YAML file as follows:
env:
  - name: PREFECT__CLOUD__AGENT__AUTH_TOKEN
    value: ''
  - name: PREFECT__CLOUD__API
    value: "http://some_ip:4200/graphql" # paste your GraphQL Server endpoint here
  - name: PREFECT__BACKEND
    value: server
#2 - write your flow
#3 and #4 - this is more challenging to do in Prefect, as there is currently no load balancing mechanism aware of your infrastructure. There are some hacky solutions that you may try, but there is no first-class way to handle this in Prefect.
One hack would be to build a parent flow that checks your infrastructure resources and, depending on the outcome, spins up the child flow run with either a DockerRun or a KubernetesRun run config:
from prefect import Flow, task, case
from prefect.tasks.prefect import create_flow_run, wait_for_flow_run
from prefect.run_configs import DockerRun, KubernetesRun


@task
def check_the_infrastructure():
    # Replace this stub with real infrastructure detection logic
    return "kubernetes"


with Flow("parent_flow") as flow:
    infra = check_the_infrastructure()
    with case(infra, "kubernetes"):
        # Run the child flow as a Kubernetes job and wait for it to finish
        child_flow_run_id = create_flow_run(
            flow_name="child_flow_name", run_config=KubernetesRun()
        )
        k8_child_flowrunview = wait_for_flow_run(
            child_flow_run_id, raise_final_state=True, stream_logs=True
        )
    with case(infra, "docker"):
        # Run the child flow as a Docker container and wait for it to finish
        child_flow_run_id = create_flow_run(
            flow_name="child_flow_name", run_config=DockerRun()
        )
        docker_child_flowrunview = wait_for_flow_run(
            child_flow_run_id, raise_final_state=True, stream_logs=True
        )
But note that this would require you to have two agents, a Kubernetes agent and a Docker agent, running at all times.

NODE_APP_INSTANCE-like variable inside a Kubernetes Node app

When we use PM2 in cluster mode, we can find out the instance number inside the Node app using process.env.NODE_APP_INSTANCE, but how can we find that inside a Kubernetes cluster without PM2? I'm looking for something similar, like finding the replica instance number.
Imagine a Node app with 2 or more replicas where we need to run a node-cron scheduler only inside one of the pods.
I found that when using a StatefulSet instead of a Deployment, it's possible to inject the stable pod name as an environment variable:
...
containers:
  - ...
    env:
      - name: "POD_NAME"
        valueFrom:
          fieldRef:
            fieldPath: metadata.name
The POD_NAME variable then has a value like pod-0 or pod-1, and so on, so we can derive the instance number from that ordinal suffix.
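For context, here is a minimal sketch of where that env block sits in a full manifest; the name my-app, the image, and the replica count are placeholders rather than values from the question.
# Hypothetical StatefulSet named "my-app"; its pods get the stable names
# my-app-0, my-app-1, ... which POD_NAME then exposes inside each container.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-app
spec:
  serviceName: my-app
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:1.0.0   # placeholder image
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name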

Ansible to update HKEY on Azure Batch node

As part of an Ansible workflow, I am looking to update an Azure Batch pool's Windows images at runtime with Ansible to disable Windows Update.
I have created an Azure Batch account:
- name: Create Batch Account
  azure_rm_batchaccount:
    resource_group: MyResGroup
    name: mybatchaccount
    location: eastus
    auto_storage_account:
      name: mystorageaccountname
    pool_allocation_mode: batch_service
I know for a fact that I can use a start task on the Azure Batch node and execute a cmd to change the registry key to NoAutoUpdate = 1.
I have an ansible snippet ready :
- name: "Ensure 'Configure Automatic Updates' is set to 'Disabled'"
win_regedit:
path: HKLM:\Software\Policies\Microsoft\Windows\Windowsupdate\Au
name: "NoAutoUpdate"
data: "1"
type: dword
I would like to execute it at runtime in the Azure Batch pool.
Does anyone know how this can be achieved with Ansible?
To run something on boot on a Batch pool's nodes, you should simply include it as part of your start task (https://learn.microsoft.com/en-us/rest/api/batchservice/pool/add#starttask).
In this instance, however, you should probably just use the built-in Azure functionality to turn off automatic updates: https://learn.microsoft.com/en-us/rest/api/batchservice/pool/add#windowsconfiguration
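For illustration, a sketch of the relevant fragment of a Pool Add request body, written here as YAML for readability; enableAutomaticUpdates under windowsConfiguration is the documented switch, while the image reference and node agent SKU values are placeholders rather than anything taken from the question.
# Fragment of a pool definition; windowsConfiguration is the part that matters.
virtualMachineConfiguration:
  imageReference:
    publisher: MicrosoftWindowsServer
    offer: WindowsServer
    sku: 2019-datacenter           # placeholder SKU
  nodeAgentSKUId: batch.node.windows amd64
  windowsConfiguration:
    enableAutomaticUpdates: false  # turns off Windows Update on the pool nodes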

spring-cloud-kubernetes stops listening to ConfigMap events

My simple Spring Boot 2.1.5.RELEASE application, running on Azure Kubernetes Service and responsible for listening to ConfigMap changes, receives a 'Force closing' message.
spring-boot-starter-parent = 2.1.5.RELEASE
spring-cloud-dependencies = Greenwich.SR1
Relevant configuration snippet:
cloud:
  kubernetes:
    reload:
      enabled: true
    secrets:
      enabled: false
After some time, AKS signals Exec Failure java.io.EOFException: null and the kubernetes-client tries to reconnect.
Eventually it succeeds with the WebSocket successfully opened message, but within the same second it also signals Force closing the watch io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager#59ec7020 and appears to terminate the connection. After that, no further ConfigMap updates trigger any event :(
Permissions in general are set, as events are properly caught by the service during initial test runs:
- apiGroups: [""]
  resources: ["services", "pods", "configmaps", "endpoints"]
  verbs: ["get", "watch", "list"]
Has anyone come across a similar problem, and could you help me narrow down the potential root cause?
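For reference, the reload mechanism involved also supports a polling mode (per the spring-cloud-kubernetes documentation), which avoids relying on a long-lived watch connection at all. A hedged configuration sketch, assuming the documented property names and an example poll period:
spring:
  cloud:
    kubernetes:
      reload:
        enabled: true
        mode: polling    # poll the ConfigMap instead of watching for events
        period: 15000    # example poll interval in milliseconds
      secrets:
        enabled: false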

Message Hub as event source in Serverless project doesn't create any triggers or rules

I'm trying to set up a Message Hub topic as an event source for the cloud function like so:
custom:
org: MyOrganization
space: dev
mhServiceName: my-kafka-service
functions:
main:
handler: src/handler.main
events:
- message_hub:
package: /${self:custom.org}_${self:custom.space}/Bluemix_${self:custom.mhServiceName}_Credentials-1
topic: test_topic
When I deploy the service, there are no triggers or rules being created. Thus the function is not being invoked when I publish messages to the Kafka topic.
I also tried to explicitly set a trigger and rule, but that only creates a trigger of type custom, instead of type message hub. Custom triggers seem to not work in this scenario.
What am I missing?
Update
As James pointed out, the triggers and rules were not created because the indentation wasn't correct.
I was still running into problems with the package not being found (see my reply to James' solution) when trying to deploy the function, and I've found out what the problem was.
Turns out, you have to do two more things that are not explicitly mentioned in the documentation.
1) You have to manually create service credentials (the documentation assumes you called them Credentials-1 so I did the same)
2) You have to bind Kafka (Message Hub, now called Event Streams) to your function in your serverless.yml
The resulting function definition should look like this:
functions:
  main:
    handler: src/handler.main
    bind:
      - service:
          name: messagehub
          instance: ${self:custom.mhServiceName}
    events:
      - message_hub:
          package: /${self:custom.org}_${self:custom.space}/Bluemix_${self:custom.mhServiceName}_Credentials-1
          topic: test_topic
The YAML indentation on the serverless.yml is incorrect. This means the event properties aren't registered by the framework during deployment.
Change the serverless.yml file to the following format and it should work.
custom:
  org: MyOrganization
  space: dev
  mhServiceName: my-kafka-service

functions:
  main:
    handler: src/handler.main
    events:
      - message_hub:
          package: /${self:custom.org}_${self:custom.space}/Bluemix_${self:custom.mhServiceName}_Credentials-1
          topic: test_topic
