Collectd multiple write_http plugin with rule chain - linux

I am trying to set up a collectd configuration that sends metrics to two separate HTTP server endpoints.
My setup has a client (C) running collectd. All of the metrics it collects need to be sent to server (A), and a subset of those metrics also needs to be sent to server (B).
Setup:
I am using the following configuration for the write_http plugin:
<Plugin write_http>
  <Node "serverA">
    URL "http://servera/collectd-post"
    Format "JSON"
    StoreRates false
  </Node>
  <Node "serverB">
    URL "http://serverb/collectd-post"
    Format "JSON"
    StoreRates false
  </Node>
</Plugin>
Further, to selectively send the metrics, I tried using the following flow control configuration:
PostCacheChain "bchain"
<Chain "bchain">
  <Rule "brule">
    <Match "regex">
      Plugin "(load|interface|disk|cpu|memory|df|swap|processes)"
    </Match>
    <Target "write">
      Plugin "write_http/serverB"
    </Target>
    <Target "return">
    </Target>
  </Rule>
  <Target "return">
  </Target>
</Chain>
Per my understanding of the flow control rules, the above configuration should send metrics from the listed plugins (load, interface, disk, cpu, memory, df, swap, and processes) to the serverB node of the write_http plugin (via the "brule" Rule). These matched metrics should also remain available for the other node of the write_http plugin (because of the Target "return" inside the "brule" Rule).
All other metrics should be processed and sent by the other node of the write_http plugin (because of the Target "return" outside the "brule" Rule).
The problem I am facing is that I cannot get the functionality working the way I want it to work.
What's working:
ALL metrics are duplicated to BOTH server A and server B if I remove the PostCacheChain configuration.
Selected metrics are sent ONLY to server B if the PostCacheChain configuration is kept.
If another write plugin is used, ALL metrics are sent to that plugin and only the SELECTED metrics are sent to server B when the PostCacheChain configuration is used.
What's not working:
When using the PostCacheChain as listed, NO metrics are sent to server A.
Any solutions or suggestions to get the split destination working would be greatly appreciated.
PS:
The documentation for the write_http plugin and for collectd flow control both seem to indicate that my approach is correct. However, it looks to me as if a write_http plugin is processed only once (even when it has multiple nodes), and that once a write plugin is referenced in a rule, it will not be processed outside of that rule.
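If that really is the behaviour, one workaround is to stop relying on the implicit "dispatch to every write plugin" step and instead write to server A explicitly from the chain. A sketch (untested, node names as above; the regex anchors are my addition):
PostCacheChain "bchain"
<Chain "bchain">
  <Rule "brule">
    <Match "regex">
      # Anchored so e.g. "cpufreq" does not match the "cpu" alternative by accident.
      Plugin "^(load|interface|disk|cpu|memory|df|swap|processes)$"
    </Match>
    # Send the matched values to server B; the write target does not stop
    # processing, so these values also fall through to the default target below.
    <Target "write">
      Plugin "write_http/serverB"
    </Target>
  </Rule>
  # Default target: every value (matched or not) is written to server A.
  <Target "write">
    Plugin "write_http/serverA"
  </Target>
</Chain>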

Related

Simple Spring Integration xml config to read ftp file list and split it further

I am studying Spring Integration and want to write a simple app that retrieves the file list from an FTP server on a schedule and splits it across a few channels for parallel handling.
But I couldn't understand how to run it from an XML-configured scheduler, whether it will work as an outbound gateway, and what should go into the inbound1 channel (see the code section below).
I searched for such examples but failed, and have been reading the reference docs.
This is what I found in the reference documentation:
<int-ftp:outbound-gateway id="gateway1"
    session-factory="ftpSessionFactory"
    request-channel="inbound1"
    command="ls"
    command-options="-1"
    expression="payload"
    reply-channel="toSplitter"/>

<int:channel id="inbound1"/>

<int:inbound-channel-adapter id="i_hope_it_start_run_on_app_start"
    channel="inbound1"
    auto-startup="true">
    <int:poller fixed-rate="2000" max-messages-per-poll="10"/>
</int:inbound-channel-adapter>
What I expect is a Spring Integration XML config that retrieves the file list from FTP on a schedule.
Actually you are going the right way: the <int-ftp:outbound-gateway> with the LS command does return the list of files in the remote directory supplied by expression="payload".
Your understanding of <int:inbound-channel-adapter> is also correct: with it you initiate a task that is invoked every time the trigger fires.
What you need here is something like expression="'/YOUR_REMOTE_DIR'". The result of that expression is sent as the payload to channel="inbound1", and that is how the remote directory becomes available for listing in the FTP gateway via the aforementioned expression="payload".
I wouldn't use fixed-rate="2000" though, because there is no reason to poll the remote directory concurrently; consider fixed-delay instead. The max-messages-per-poll="10" doesn't bring value here either: you would just send a message with /YOUR_REMOTE_DIR 10 times on a single polling task. Configure it to 1, which is the default for <int:inbound-channel-adapter>.
Also, with such polling logic you will find that you get the same list of files in toSplitter every time. I guess that is not what you expect and that your goal is really to poll only new files. For that purpose, consider using the Idempotent Receiver approach to filter out files you have already processed: https://docs.spring.io/spring-integration/docs/current/reference/html/messaging-endpoints-chapter.html#idempotent-receiver
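Putting those suggestions together, a minimal sketch of the adjusted configuration (the directory '/YOUR_REMOTE_DIR' and the id values are placeholders):
<!-- Every 2 seconds (fixed delay), send the remote directory name as the payload. -->
<int:inbound-channel-adapter id="ftpListTrigger"
    channel="inbound1"
    expression="'/YOUR_REMOTE_DIR'"
    auto-startup="true">
    <int:poller fixed-delay="2000" max-messages-per-poll="1"/>
</int:inbound-channel-adapter>

<int:channel id="inbound1"/>

<!-- LS the directory carried in the payload and send the resulting file list to the splitter. -->
<int-ftp:outbound-gateway id="gateway1"
    session-factory="ftpSessionFactory"
    request-channel="inbound1"
    command="ls"
    command-options="-1"
    expression="payload"
    reply-channel="toSplitter"/>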

File locking not happening on either server1 or server2 using a Spring poller

I have implemented a Spring poller in my application. The application runs on two servers, and both servers point at the same inbound folder, into which I place txt/pdf/xlsx files. Once I place multiple files, say 10-15, both servers (both JVMs) try to pick up the same file: one server processes the file and moves it from inbound to inprocess, and when the other server tries to do the same it throws a FileNotFoundException.
Is there any way to lock the file so the other server cannot read it, or is there any other solution to fix this issue?
Thanks in advance.
The <int-file:inbound-channel-adapter> can be supplied with the <locker> sub-element. See Reference Manual:
When multiple processes are reading from the same directory it can be desirable to lock files to prevent them from being picked up concurrently. To do this you can use a FileLocker. There is a java.nio based implementation available out of the box, but it is also possible to implement your own locking scheme. The nio locker can be injected as follows
<int-file:inbound-channel-adapter id="filesIn"
    directory="file:${input.directory}" prevent-duplicates="true">
    <int-file:nio-locker/>
</int-file:inbound-channel-adapter>

Spring Integration Mail Inbound Channel Adapter configured for POP3 access and using a poller configuration hangs after running for some time

<int:channel id="emailInputChannel"/>
<!-- Email Poller. Only one poller thread -->
<task:executor id="emailPollingExecutor" pool-size="1" />
<int-mail:inbound-channel-adapter id="pop3EmailAdapter" store-uri="pop3://${pop3.user}:${pop3.pwd}#${pop3.server.host}/Inbox"
channel="emailInputChannel" should-delete-messages="true" auto-startup="true" java-mail-properties="javaMailProperties">
<int:poller max-messages-per-poll="1" fixed-delay="${email.poller.delay}" task-executor="emailPollingExecutor"/>
</int-mail:inbound-channel-adapter>
<!-- Java Mail POP3 properties -->
<util:properties id="javaMailProperties">
<beans:prop key="mail.debug">true</beans:prop>
<beans:prop key="mail.pop3.port">${pop3.server.port}</beans:prop>
</util:properties>
This application polls for emails whose attachments contain the data to process. The attachments arrive sporadically, typically a few a day. Since the files contain data for bulk loading, we resorted to this configuration with a single poller for the inbound POP3 mail adapter: having multiple pollers caused duplicate poller invocations to pull the same email while another poller was still processing it. With this configuration, however, the single poller hangs after some time with no indication of the problem in the logs. Please review what is wrong with this configuration. Also, is there an alternative way to trigger the email adapter (e.g. cron at a periodic interval)? I am using Spring Integration 2.1.
A hung poller is most likely caused by the thread being stuck in user code. I see you have mail.debug=true; if that shows no activity, then a hung thread is probably the cause. Use jstack to take a thread dump.
Yes, you can use a cron expression but that's unlikely to change things.
2.1 is extremely old but I still think a hung thread is the cause.
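If a cron trigger is preferred over the fixed delay, the poller inside the adapter can be swapped for a cron-based one. A sketch (the expression fires at the top of every fifth minute and is only an example):
<int-mail:inbound-channel-adapter id="pop3EmailAdapter" store-uri="pop3://${pop3.user}:${pop3.pwd}@${pop3.server.host}/Inbox"
    channel="emailInputChannel" should-delete-messages="true" auto-startup="true" java-mail-properties="javaMailProperties">
    <!-- Poll on a schedule instead of a fixed delay; still at most one message per poll. -->
    <int:poller cron="0 0/5 * * * *" max-messages-per-poll="1" task-executor="emailPollingExecutor"/>
</int-mail:inbound-channel-adapter>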

Spring Batch Partitioned Job Using Durable Subscriber

We are using Spring Batch with partitioned jobs on a 10-server JBoss EAP 5.2 cluster. Because of a problem in JBoss Messaging, we needed to use a topic for the reply messages from the partitioned steps. All has been working fine until JBoss Messaging glitches (on the server that launches the job) drop that server from the cluster. It recovers, but the main partition does not pick up the messages sent from the partition steps. I can see the messages on the topic in the JMX console, but I also see that the subscription and the messages are non-durable. I would therefore like to make the reply communication for the partition step a durable subscription, but I can't find a documented way to do this. This is my current configuration of the partitioned step and the associated beans.
Inbound Gateway Configuration
<int:channel id="springbatch.slave.jms.request"/>
<int:channel id="springbatch.slave.jms.response" />
<int-jms:inbound-gateway
id="springbatch.master.inbound.gateway"
connection-factory="springbatch.listener.jmsConnectionFactory"
request-channel="springbatch.slave.jms.request"
request-destination="springbatch.partition.jms.requestsQueue"
reply-channel="springbatch.slave.jms.response"
concurrent-consumers="${springbatch.partition.concurrent.consumers}"
max-concurrent-consumers="${springbatch.partition.concurrent.maxconsumers}"
max-messages-per-task="${springbatch.partition.concurrent.maxmessagespertask}"
reply-time-to-live="${springbatch.partition.reply.time.to.live}"
/>
Outbound Gateway Configuration
<int:channel id="jms.requests">
<int:dispatcher task-executor="springbatch.partitioned.jms.taskExecutor" />
</int:channel>
<int:channel id="jms.reply" />
<int-jms:outbound-gateway id="outbound-gateway"
auto-startup="false" connection-factory="springbatch.jmsConnectionFactory"
request-channel="jms.requests"
request-destination="springbatch.partition.jms.requestsQueue"
reply-channel="jms.reply"
reply-destination="springbatch.partition.jms.repliesQueue"
correlation-key="JMSCorrelationID">
<int-jms:reply-listener />
</int-jms:outbound-gateway>
Further to Michael's comment: there is currently no way to configure a topic for the <reply-listener/>; it's rather unusual to use a topic in a request/reply scenario and we didn't anticipate that requirement.
Feel free to open a JIRA Issue against Spring Integration.
An alternative would be to wire in an outbound-channel-adapter for the requests and an inbound-channel-adapter for the replies. However, some special handling of the replyChannel header is needed when doing that - see the docs here for more information about that.
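A rough sketch of that alternative for the reply side, using a listener container with a durable subscription (the bean ids, the repliesTopic destination, and the subscription name are placeholders; the replyChannel/correlation handling mentioned above still has to be added as described in the docs):
<!-- Requests still go out over an outbound-channel-adapter. -->
<int-jms:outbound-channel-adapter id="partitionRequestsOut"
    channel="jms.requests"
    connection-factory="springbatch.jmsConnectionFactory"
    destination="springbatch.partition.jms.requestsQueue"/>

<!-- Replies come back from the topic through a durable subscription. -->
<beans:bean id="partitionReplyContainer"
    class="org.springframework.jms.listener.DefaultMessageListenerContainer">
    <beans:property name="connectionFactory" ref="springbatch.listener.jmsConnectionFactory"/>
    <beans:property name="destination" ref="springbatch.partition.jms.repliesTopic"/>
    <beans:property name="pubSubDomain" value="true"/>
    <beans:property name="subscriptionDurable" value="true"/>
    <beans:property name="durableSubscriptionName" value="partitionReplies"/>
    <beans:property name="clientId" value="partitionMaster"/>
</beans:bean>

<int-jms:message-driven-channel-adapter id="partitionRepliesIn"
    container="partitionReplyContainer"
    channel="jms.reply"/>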

Unavailable exception for CassandraLog4net appender

I want to implement logging using the CassandraLog4net appender, but I am getting an UnavailableException.
Can you tell me whether I have to create a keyspace or database before running this code?
Also, I am not able to use nodetool: when I click on it, it disappears again.
What changes should I make?
Please find the details of the CassandraLog4net appender configuration below.
<KeyspaceName value="Logging" /><ColumnFamily value="LogEntries"/>\
<PlacementStrategy value="org.apache.cassandra.locator.NetworkTopologyStrategy" />
<StrategyOptions value="Datacentre1:1" /><ReplicationFactor value="1" />
<ConsistencyLevel value="QUORUM" />
<MaxBufferedRows value="1" />
UnavailableException means there aren't enough replicas available to satisfy your query. From what you've posted I see a lot of inconsistency in your cluster configuration: your log4net appender's strategy options point to "Datacentre1"; your topology file lists a number of machines in "DC1", "DC2", and "DC3" across multiple racks; your keyspace is set up with only one DC called "DC1"; and nodetool shows a single node listening on 127.0.0.1, which doesn't correspond to any of the configured machines. So you're getting UnavailableException because you're asking for something that doesn't exist. You need a consistent configuration across all of these pieces.
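On the keyspace question: if the appender does not create the keyspace itself, it has to exist before logging starts, and whatever creates it must use the same data centre name as your snitch/topology files. A sketch in CQL, assuming the data centre really is called Datacentre1 (adjust the name and replication factor to match your topology):
CREATE KEYSPACE "Logging"
  WITH replication = {'class': 'NetworkTopologyStrategy', 'Datacentre1': 1};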
