How does puppet decide if a node is active? (Changed)

I'm trying to change the default behavior in Puppet when a Notify resource is applied on a node. The current state (3.8) is that whenever the client applies a simple Notify resource, it sends a report with state "Active". This is somewhat misleading because nothing really changed on the node.
Is it possible to change this so that a Notify resource does not set the node status to "Active"? Thanks.

You should get the echo module and refrain from using notify if this is a concern for you.
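For illustration, the difference looks roughly like this; the message is a placeholder, and the assumption is that "the echo module" refers to a module such as binford2k/echo, whose echo() function prints during catalog compilation instead of declaring a resource:

# A notify resource is part of the catalog, so every run records a resource
# event and the node is reported as changed/active:
notify { 'deploying app version X': }

# echo() (assumption: binford2k/echo) only prints while the catalog is being
# compiled and adds nothing to the catalog, so the run stays unchanged:
echo('deploying app version X')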

Related

Axon creating aggregate inside saga

I'm not sure how to properly ask this question, but here it is:
I'm starting the saga on a specific event, then I'm dispatching a command which is supposed to create some aggregate and then send another event, which will be handled by the saga to proceed with the logic.
However, each time I restart the application I get an error saying that an event for the aggregate at sequence x was already inserted, which I suppose is because the saga has not yet finished, and when I restart the application it starts again by trying to create a new aggregate.
The question is: is there any way in Axon to track the progress of the saga? Should I set some flags when I receive the event and wrap the aggregate creation in ifs?
Maybe there is another way which I'm not seeing; I just don't want the saga to be replayed from the start.
Thanks
The solution you've posted definitely would work.
Let me explain the scenario you've hit here though, for other people's reference too.
In an Axon Framework 4.x application, any Event Handling Component, thus also your Saga instances, is backed by a TrackingEventProcessor.
The Tracking Event Processor "keeps track of" the point in the Event Stream up to which it has handled events. It stores this information as a TrackingToken, with the TokenStore being the component that persists it.
If you haven't specified a TokenStore, however, you will have in-memory TrackingTokens for every Tracking Event Processor.
This means that on a restart, your Tracking Event Processor thinks "oh, I haven't done any event handling yet, let me start from the beginning of time".
Because of this, your Saga instances will start anew every time, trying to recreate the given Aggregate instance.
Hence, specifying the TokenStore as you did resolves the problem you had.
Note that in a Spring Boot environment, with for example the Spring Data starter present, Axon will automatically create a JpaTokenStore for you.
I've solved my issue by simply adding the token store configuration; it does exactly what I require: tracking processed events.
Basic Spring config:
@Bean
fun tokenStore(client: MongoClient): TokenStore = MongoTokenStore.builder()
    .mongoTemplate(DefaultMongoTemplate.builder().mongoDatabase(client).build())
    .serializer(JacksonSerializer.builder().build())
    .build()

Asana API Sync Error

I currently have an application running that passes data between Asana and Zendesk.
I have webhooks created for all my Projects in Asana, and all project events are sent to my webhook endpoint, which verifies the request, tries to identify the event, and updates Zendesk with the relevant data depending on the event type (some events aren't required).
However, I have recently started receiving the following payload from the webhooks:
"events": [
{
"action": "sync_error",
"message": "There was an error with the event queue, which may have resulted in missed events. If you are keeping resources in sync, you may need to manually re-fetch them.",
"created_at": "2017-05-23T16:29:13.994Z"
}
]
Because I don't poll the API for event updates and instead react when the events arrive, I haven't considered using a sync key; the docs suggest it is only required when polling for events. Do I need to use one when using webhooks as well?
What am I missing?
Thanks in advance for any suggestions.
You're correct, you don't need to track a sync key for webhooks - we proactively try to reach out with them when something changes in Asana, and we track the events that haven't yet been delivered across webhooks (essentially, akin to us updating the sync key server-side whenever webhooks have been successfully delivered).
Basically what's happening here is that for some reason, our event queues detect that there's a problem with their internal state. This means that events didn't get recorded, or webhooks didn't get delivered after a long time. Our events and webhooks try to track changes in a best-effort sense, and there are some things that can happen with our production machines that can cause these sorts of issues, like a machine dying at an inopportune time.
Unfortunately, then, the only way to get back to a good state is to do a full scan of the projects you're tracking, which is what is meant by "you may need to manually re-fetch them". Basically, a robust implementation of syncing Asana to external resources looks like the following (a rough sketch of the dispatch logic follows after these points):
A diff function that, given a particular task and external resource, detects which state is out of date or different between the two resources and chooses a merge/patch resolution (i.e. "make Zendesk look like Asana").
Receiving a webhook runs that diff/patch process for that one task in a "live" fashion.
Periodically (on script startup, say, or when webhooks/events are missed and you get an error message like this) update all resources that might have been missed by scanning the entire project and doing the diff/patch for every task. This is more expensive, but should be significantly rarer.
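For illustration, here is a minimal sketch of that dispatch logic in Python; handle_webhook is assumed to run after the request signature has been verified, and resync_all_projects / diff_and_patch_task are hypothetical helpers standing in for the full-scan and single-task reconciliation described above (newer API versions deliver the event's resource as an object with a gid field, older ones as a bare id):

import json

def resync_all_projects():
    # Placeholder: walk every tracked project (GET /projects/{gid}/tasks)
    # and diff/patch each task against its Zendesk counterpart.
    pass

def diff_and_patch_task(task_gid):
    # Placeholder: fetch one task, compare it with the Zendesk ticket,
    # and push any differences ("make Zendesk look like Asana").
    pass

def handle_webhook(body):
    payload = json.loads(body)
    for event in payload.get("events", []):
        if event.get("action") == "sync_error":
            # The event queue lost state, so per-event handling can no longer
            # be trusted: fall back to a full re-scan.
            resync_all_projects()
        else:
            # Normal case: reconcile only the task the event refers to.
            resource = event.get("resource")
            task_gid = resource.get("gid") if isinstance(resource, dict) else resource
            if task_gid:
                diff_and_patch_task(task_gid)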

Is there a way to make saltstack behave like puppet for managed files?

Is there a way to make SaltStack behave like Puppet when it comes to managing config files? The task I want to accomplish is to have sshd_config managed by SaltStack. If changes are made to the file, I want the Salt master to overwrite it by pushing in the master copy when it detects that the managed config file is different or has been modified. Is this setup possible using SaltStack?
There is the file.managed state, which may look like this:
/etc/http/conf/http.conf:
  file.managed:
    - source: salt://apache/http.conf
    - user: root
    - group: root
    - mode: 644
Salt will transfer this file over to a minion whenever you apply this state to a minion.
Monitoring the file from the master can be achieved, but it is more complex to do. You can make use of Salt's reactor system and write a small Python program on the minion which watches the file and fires a custom event onto the Salt event bus in case of changes. Inside the reactor you can listen for those custom events and apply the desired state on the minion.
Note that this custom setup is only needed if you have to react more or less just in time (and if you really want to react instantly, all of this should be done on the minion itself, without the reactor system).
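Under those caveats, here is a rough sketch of that wiring using Salt's inotify beacon instead of a hand-written watcher; the beacon needs the inotify Python bindings on the minion, the exact beacon syntax varies between Salt releases, and the state and file names below are placeholders:

# Minion config (e.g. /etc/salt/minion.d/beacons.conf): watch sshd_config for writes.
beacons:
  inotify:
    - files:
        /etc/ssh/sshd_config:
          mask:
            - modify
    - disable_during_state_run: True

# Master config: map the beacon event to a reactor SLS.
reactor:
  - 'salt/beacon/*/inotify//etc/ssh/sshd_config':
    - /srv/reactor/sshd_config.sls

# /srv/reactor/sshd_config.sls: re-apply the state that manages the file.
restore_sshd_config:
  local.state.apply:
    - tgt: {{ data['id'] }}
    - arg:
      - ssh.sshd_config   # placeholder: the state containing the file.managed above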
Note that a more common approach in Salt looks like this: you schedule highstates for your minions at your preferred interval and make sure they are in your desired state every x hours.

Systemd http health check

I have a service on Red Hat 7.1 which I control with systemctl start, stop, restart, and status. At one point systemctl status returned active, but the application "behind" the service responded with an HTTP code other than 200.
I know that I can use Monit or Nagios to check this and do the systemctl restart, but I would like to know if there is something available by default when using systemd, so that I do not need to have other tools installed.
My preferred solution would be to have my service restarted automatically whenever the HTTP return code is different from 200, without any tools other than systemd itself (and maybe with a possibility to notify a HipChat room or send an email...).
I've tried googling the topic - without luck. Please help :-)
The Short Answer
systemd has a native (socket-based) healthcheck method, but it's not HTTP-based. You can write a shim that polls status over HTTP and forwards it to the native mechanism, however.
The Long Answer
The Right Thing in the systemd world is to use the sd_notify socket mechanism to inform the init system when your application is fully available. Use Type=notify for your service to enable this functionality.
You can write to this socket directly using the sd_notify() call, or you can inspect the NOTIFY_SOCKET environment variable to get the name and have your own code write READY=1 to that socket when the application is returning 200s.
If you want to hand this off to a separate process that polls your application over HTTP and then writes to the socket, you can do that; just ensure that NotifyAccess is set appropriately (by default, only the main process of the service is allowed to write to the socket).
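For reference, the relevant unit directives might look like the fragment below; the path and values are placeholders, WatchdogSec enables the watchdog behaviour described next, and NotifyAccess would need to be all (rather than main) if a separate helper process sends the notifications:

[Service]
Type=notify
NotifyAccess=main
ExecStart=/usr/local/bin/myapp
WatchdogSec=30
Restart=on-failure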
Inasmuch as you're interested in detecting cases where the application fails after it was fully initialized, and triggering a restart, the sd_notify socket is appropriate in this scenario as well:
Send WATCHDOG_USEC=... to set the amount of time which is permissible between successful tests, then WATCHDOG=1 whenever you have a successful self-test; whenever no successful test is seen for the configured period, your service will be restarted.
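A minimal sketch of such a shim in Python, assuming it runs as (or is spawned by) the notifying process so that NOTIFY_SOCKET is available to it, and that /health on port 8080 is a placeholder for your application's real endpoint:

import os
import socket
import time
import urllib.request

def sd_notify(message):
    # Write a raw sd_notify datagram to the socket systemd exported for us.
    addr = os.environ.get("NOTIFY_SOCKET")
    if not addr:
        return  # not started by systemd with Type=notify
    if addr.startswith("@"):
        addr = "\0" + addr[1:]  # abstract-namespace socket
    with socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM) as sock:
        sock.sendto(message.encode(), addr)

HEALTH_URL = "http://127.0.0.1:8080/health"  # placeholder health endpoint

sd_notify("READY=1")  # tell systemd the service is fully up (Type=notify)
while True:
    try:
        with urllib.request.urlopen(HEALTH_URL, timeout=5) as resp:
            if resp.status == 200:
                sd_notify("WATCHDOG=1")  # keep-alive ping, sent only when healthy
    except OSError:
        pass  # no ping -> systemd restarts the service once WatchdogSec elapses
    time.sleep(10)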

Starting or stopping an agent through another agent

I currently have a VOLTTRON agent that periodically downloads some data from the web in the form of a CSV. I would like to use the DataPublisher example to take that CSV data and push it to pubsub. However, from looking at the code, it seems like the DataPublisher is designed to run once, automatically, as soon as the agent starts up.
So my question then becomes: is there a way to start up the DataPublisher from the original agent (which would itself have some sort of timer or loop)? I would then also need to stop it afterwards.
If I can't do this, my alternatives seem to be modifying the datapublisher to work on a schedule, or altering my other agent to publish the data from the csvs to pubsub.
Any assistance would be greatly appreciated.
The way I would do this is to expose an RPC method on the data publisher that accepts a filename to publish. When this is called, it would change the filename to publish and "start" publishing the data. The data publisher agent would always be running, so there isn't a true restart of the publisher.
The other agent (the one that downloaded the data?) would then just need to "kick off" the publishing through the RPC call.
This sounds like a very good feature that could be committed back to the VOLTTRON repository if you saw fit.
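For illustration, a rough sketch of that RPC wiring, assuming VOLTTRON's standard Agent/RPC classes; the VIP identity 'my.datapublisher', the method name publish_file, and the _start_publishing hook are all made-up placeholders:

from volttron.platform.vip.agent import Agent, RPC

class DataPublisher(Agent):
    @RPC.export
    def publish_file(self, csv_path):
        # Remember which CSV to publish and start pushing its rows to pubsub.
        self.csv_path = csv_path
        self._start_publishing()

    def _start_publishing(self):
        pass  # placeholder for the publishing loop the agent already has

class Downloader(Agent):
    def on_download_complete(self, csv_path):
        # Kick off publication on the (always-running) publisher agent.
        self.vip.rpc.call('my.datapublisher', 'publish_file', csv_path).get(timeout=10)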
