Logstash Multiline with Syslog

I'm having some difficulties getting Logstash and multiline to work together.
I am using the Logspout container, which forwards all stdout log entries as syslog to Logstash.
This is the final content that Logstash receives. The following lines should represent two events.
<14>2015-02-09T14:25:01Z logspout dev_zservice_1[1]: 2015-02-10 11:55:38.496 INFO 1 --- [tp1302304527-19] c.z.service.DefaultInvoiceService : Creating with DefaultInvoiceService started...
<14>2015-02-09T14:25:01Z logspout dev_zservice_1[1]: 2015-02-10 11:55:48.596 WARN 1 --- [tp1302304527-19] o.eclipse.jetty.servlet.ServletHandler :
<14>2015-02-09T14:25:01Z logspout dev_zservice_1[1]:
<14>2015-02-09T14:25:01Z logspout dev_zservice_1[1]: org.springframework.web.util.NestedServletException: Request processing failed; nested exception is org.springframework.dao.DataAccessResourceFailureException: Timed out after 10000 ms while waiting for a server that matches AnyServerSelector{}. Client view of cluster state is {type=Unknown, servers=[{address=mongo:27017, type=Unknown, state=Connecting, exception={com.mongodb.MongoException$Network: Exception opening the socket}, caused by {java.net.UnknownHostException: mongo: unknown error}}]; nested exception is com.mongodb.MongoTimeoutException: Timed out after 10000 ms while waiting for a server that matches AnyServerSelector{}. Client view of cluster state is {type=Unknown, servers=[{address=mongo:27017, type=Unknown, state=Connecting, exception={com.mongodb.MongoException$Network: Exception opening the socket}, caused by {java.net.UnknownHostException: mongo: unknown error}}]
<14>2015-02-09T14:25:01Z logspout dev_zservice_1[1]: at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:978)
<14>2015-02-09T14:25:01Z logspout dev_zservice_1[1]: at org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:868)
<14>2015-02-09T14:25:01Z logspout dev_zservice_1[1]: at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
<14>2015-02-09T14:25:01Z logspout dev_zservice_1[1]: at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:842)
Every log line starts with a syslog header.
Based on the above log content I created this Logstash config file:
input {
  udp {
    port => 5000
    type => syslog
  }
}
filter {
  multiline {
    pattern => "^<%{NUMBER}>%{TIMESTAMP_ISO8601} %{SYSLOGHOST:container_name} %{DATA}(?:\[%{POSINT}\])?:%{SPACE}%{TIMESTAMP_ISO8601}"
    negate => true
    what => "previous"
    stream_identity => "%{container_name}"
  }
  grok {
    match => [ "message", "(?m)^<%{NUMBER}>%{TIMESTAMP_ISO8601} %{SYSLOGHOST} %{DATA:container_name}(?:\[%{POSINT}\])?:%{SPACE}%{TIMESTAMP_ISO8601:timestamp}%{SPACE}%{LOGLEVEL:loglevel}%{SPACE}%{NUMBER}%{SPACE}---%{SPACE}(?:\[%{DATA:threadname}\])?%{SPACE}%{JAVACLASS:clas
  }
  date {
    match => [ "timestamp", "yyyy-MM-dd HH:mm:ss.SSS" ]
    remove_field => ["timestamp"]
  }
  if !("_grokparsefailure" in [tags]) {
    mutate {
      replace => [ "source_host", "%{container_name}" ]
      replace => [ "raw_message", "%{message}" ]
      replace => [ "message", "%{logmessage}" ]
      remove_field => [ "logmessage", "host", "source_host" ]
    }
  }
  mutate {
    strip => [ "threadname" ]
  }
}
output {
  elasticsearch { }
}
Now when the above events arrive, the first event is correctly parsed and displayed:
message = "Creating with DefaultInvoiceService started..."
The second event contains this message, which has three issues:
<14>2015-02-10T12:59:09Z logspout dev_zservice_1[1]:
<14>2015-02-10T12:59:09Z logspout dev_zservice_1[1]: org.springframework.web.util.NestedServletException: Request processing failed; nested exception is org.springframework.dao.DataAccessResourceFailureException: Timed out after 10000 ms while waiting for a server that matches AnyServerSelector{}. Client view of cluster state is {type=Unknown, servers=[{address=mongo:27017, type=Unknown, state=Connecting, exception={com.mongodb.MongoException$Network: Exception opening the socket}, caused by {java.net.UnknownHostException: mongo: unknown error}}]; nested exception is com.mongodb.MongoTimeoutException: Timed out after 10000 ms while waiting for a server that matches AnyServerSelector{}. Client view of cluster state is {type=Unknown, servers=[{address=mongo:27017, type=Unknown, state=Connecting, exception={com.mongodb.MongoException$Network: Exception opening the socket}, caused by {java.net.UnknownHostException: mongo: unknown error}}]
<14>2015-02-10T12:59:09Z logspout dev_zservice_1[1]: at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:978)
<14>2015-02-10T12:59:09Z logspout dev_zservice_1[1]: at org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:868)
<14>2015-02-10T12:59:09Z logspout dev_zservice_1[1]: at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
<14>2015-02-10T12:59:09Z logspout dev_zservice_1[1]: at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:842)
<14>2015-02-10T12:59:09Z logspout dev_nginx_1[1]: 192.168.59.3 - - [10/Feb/2015:12:59:09 +0000] "POST /api/invoice/ HTTP/1.1" 500 1115 "http://192.168.59.103/"; "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/40.0.2214.94 Safari/537.36" "-"
The message text contains a line with a dev_nginx_1 entry which does not belong here. It should be treated as a separate event.
Each line contains the prefix <14>2015-02-10T12:59:09Z logspout dev_zservice_1[1]:
Each line has an additional newline.
Questions:
1. Why is the dev_nginx_1 entry not an event of its own? Why is it considered to belong to the previous one?
2. How can I get rid of the syslog prefix in each line of the message?
3. How can I get rid of the additional newline?

As for (1), you're using container_name in the multiline filter, and there it's the field right after the timestamp. In your example that's "logspout" for every line, so merging them seems right to me.
As for (2), each line comes in with the prefix and the timestamp, so you would expect them to be there by default. You are doing a mutate{} to replace message with logmessage, but I don't see where you're setting logmessage. So, how did you think the prefix and timestamp were being removed?

For (1), replace %{SYSLOGHOST:container_name} %{DATA} in your multiline pattern with %{SYSLOGHOST} %{DATA:container_name} (as you use in your grok).
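Applied to your config, the multiline filter would then look like this (a sketch; only the pattern changes, the other settings stay as in your original config):
multiline {
  pattern => "^<%{NUMBER}>%{TIMESTAMP_ISO8601} %{SYSLOGHOST} %{DATA:container_name}(?:\[%{POSINT}\])?:%{SPACE}%{TIMESTAMP_ISO8601}"
  negate => true
  what => "previous"
  stream_identity => "%{container_name}"
}
That way stream_identity is keyed on the container name (dev_zservice_1 vs. dev_nginx_1) rather than the syslog host, so the nginx line is no longer appended to the dev_zservice_1 event.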
For (2) and (3), you can try something like this:
mutate {
  gsub => [
    "message", "<\d+>.*?:\s", "",
    "message", "\n(\n)", "\1"
  ]
}
Here, the gsub setting performs two operations:
Examine the field "message", find the substrings from "<14>" up to a colon followed by whitespace, and replace those substrings with empty strings.
Examine the field "message", find the substrings consisting of two consecutive newline characters, and replace them with one newline character. The substitution uses the \1 backreference to the group (\n), because if you try to use \n itself, Logstash will actually replace it with \\n, which won't work.
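With those two substitutions the second event's message should come out roughly like this (illustrative and abbreviated with ...; the stray dev_nginx_1 line is issue (1) and is handled by the multiline change above):
org.springframework.web.util.NestedServletException: Request processing failed; nested exception is org.springframework.dao.DataAccessResourceFailureException: Timed out after 10000 ms while waiting for a server that matches AnyServerSelector{}. ...
at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:978)
at org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:868)
...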

Related

spring-integration-smb : jcifs.smb.SmbException: The parameter is incorrect while connect to NAS

I encountered a problem when connecting to a NAS shared directory using spring-integration-smb.
The problem is that I am able to connect to another NAS shared directory, but for the pre-prod NAS I get this error.
Also, the shared server administrator confirmed that both directories have the same configuration.
You will find the stack trace below:
07 mars 2022;14:49:50.702 [scheduling-1] WARN jcifs.smb.SmbTransportImpl - Disconnecting transport while still in use Transport12[NAS03/XXXXXXXX:445,state=5,signingEnforced=false,usage=1]: [SmbSession[credentials=XXXXXXXXXX,targetHost=nas03,targetDomain=null,uid=0,connectionState=2,usage=1]]
07 mars 2022;14:49:50.702 [scheduling-1] WARN jcifs.smb.SmbSessionImpl - Logging off session while still in use SmbSession[credentials=XXXXXXXXX,targetHost=nas03,targetDomain=null,uid=0,connectionState=3,usage=1]:[SmbTree[share=PPD,service=null,tid=4,inDfs=false,inDomainDfs=false,connectionState=0,usage=2]]
07 mars 2022;14:49:50.737 [scheduling-1] ERROR o.s.i.handler.LoggingHandler - org.springframework.messaging.MessagingException: Problem occurred while synchronizing '' to local directory; nested exception is org.springframework.messaging.MessagingException: Failure occurred while copying '/test.csv' from the remote to the local directory; nested exception is org.springframework.core.NestedIOException: Failed to read resource [/test.csv].; nested exception is jcifs.smb.SmbException: The parameter is incorrect.
at org.springframework.integration.file.remote.synchronizer.AbstractInboundFileSynchronizer.synchronizeToLocalDirectory(AbstractInboundFileSynchronizer.java:348)
at org.springframework.integration.file.remote.synchronizer.AbstractInboundFileSynchronizingMessageSource.doReceive(AbstractInboundFileSynchronizingMessageSource.java:267)
at org.springframework.integration.file.remote.synchronizer.AbstractInboundFileSynchronizingMessageSource.doReceive(AbstractInboundFileSynchronizingMessageSource.java:69)
at org.springframework.integration.endpoint.AbstractFetchLimitingMessageSource.doReceive(AbstractFetchLimitingMessageSource.java:47)
at org.springframework.integration.endpoint.AbstractMessageSource.receive(AbstractMessageSource.java:142)
at org.springframework.integration.endpoint.SourcePollingChannelAdapter.receiveMessage(SourcePollingChannelAdapter.java:212)
at org.springframework.integration.endpoint.AbstractPollingEndpoint.doPoll(AbstractPollingEndpoint.java:444)
at org.springframework.integration.endpoint.AbstractPollingEndpoint.pollForMessage(AbstractPollingEndpoint.java:413)
at org.springframework.integration.endpoint.AbstractPollingEndpoint.lambda$createPoller$4(AbstractPollingEndpoint.java:348)
at org.springframework.integration.util.ErrorHandlingTaskExecutor.lambda$execute$0(ErrorHandlingTaskExecutor.java:57)
at org.springframework.core.task.SyncTaskExecutor.execute(SyncTaskExecutor.java:50)
at org.springframework.integration.util.ErrorHandlingTaskExecutor.execute(ErrorHandlingTaskExecutor.java:55)
at org.springframework.integration.endpoint.AbstractPollingEndpoint.lambda$createPoller$5(AbstractPollingEndpoint.java:341)
at org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:54)
at org.springframework.scheduling.concurrent.ReschedulingRunnable.run(ReschedulingRunnable.java:95)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run$$$capture(FutureTask.java:264)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java)
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: org.springframework.messaging.MessagingException: Failure occurred while copying '/BE1_2_MOUVEMENTS_Valorisation_20211231_20220218_164451.csv' from the remote to the local directory; nested exception is org.springframework.core.NestedIOException: Failed to read resource [/BE1_2_MOUVEMENTS_Valorisation_20211231_20220218_164451.csv].; nested exception is jcifs.smb.SmbException: The parameter is incorrect.
at org.springframework.integration.file.remote.synchronizer.AbstractInboundFileSynchronizer.copyRemoteContentToLocalFile(AbstractInboundFileSynchronizer.java:551)
at org.springframework.integration.file.remote.synchronizer.AbstractInboundFileSynchronizer.copyFileToLocalDirectory(AbstractInboundFileSynchronizer.java:488)
at org.springframework.integration.file.remote.synchronizer.AbstractInboundFileSynchronizer.copyIfNotNull(AbstractInboundFileSynchronizer.java:403)
at org.springframework.integration.file.remote.synchronizer.AbstractInboundFileSynchronizer.transferFilesFromRemoteToLocal(AbstractInboundFileSynchronizer.java:386)
at org.springframework.integration.file.remote.synchronizer.AbstractInboundFileSynchronizer.lambda$synchronizeToLocalDirectory$0(AbstractInboundFileSynchronizer.java:342)
at org.springframework.integration.file.remote.RemoteFileTemplate.execute(RemoteFileTemplate.java:452)
at org.springframework.integration.file.remote.synchronizer.AbstractInboundFileSynchronizer.synchronizeToLocalDirectory(AbstractInboundFileSynchronizer.java:341)
... 21 more
Caused by: org.springframework.core.NestedIOException: Failed to read resource [/BE1_2_MOUVEMENTS_Valorisation_20211231_20220218_164451.csv].; nested exception is jcifs.smb.SmbException: The parameter is incorrect.
at org.springframework.integration.smb.session.SmbSession.read(SmbSession.java:188)
at org.springframework.integration.file.remote.synchronizer.AbstractInboundFileSynchronizer.copyRemoteContentToLocalFile(AbstractInboundFileSynchronizer.java:545)
... 27 more
Caused by: jcifs.smb.SmbException: The parameter is incorrect.
at jcifs.smb.SmbTransportImpl.checkStatus2(SmbTransportImpl.java:1467)
at jcifs.smb.SmbTransportImpl.checkStatus(SmbTransportImpl.java:1578)
at jcifs.smb.SmbTransportImpl.sendrecv(SmbTransportImpl.java:1027)
at jcifs.smb.SmbTransportImpl.send(SmbTransportImpl.java:1549)
at jcifs.smb.SmbSessionImpl.send(SmbSessionImpl.java:409)
at jcifs.smb.SmbTreeImpl.send(SmbTreeImpl.java:472)
at jcifs.smb.SmbTreeConnection.send0(SmbTreeConnection.java:404)
at jcifs.smb.SmbTreeConnection.send(SmbTreeConnection.java:318)
at jcifs.smb.SmbTreeConnection.send(SmbTreeConnection.java:298)
at jcifs.smb.SmbTreeHandleImpl.send(SmbTreeHandleImpl.java:130)
at jcifs.smb.SmbTreeHandleImpl.send(SmbTreeHandleImpl.java:117)
at jcifs.smb.SmbFile.withOpen(SmbFile.java:1775)
at jcifs.smb.SmbFile.withOpen(SmbFile.java:1744)
at jcifs.smb.SmbFile.queryPath(SmbFile.java:793)
at jcifs.smb.SmbFile.exists(SmbFile.java:879)
at jcifs.smb.SmbFile.isFile(SmbFile.java:1102)
at org.springframework.integration.smb.session.SmbSession.read(SmbSession.java:182)
... 28 more
Here is my code:
@Bean
public SmbSessionFactory smbSessionFactory() {
    VaultResponse vaultResponse = vaultTemplate
            .opsForKeyValue(vaultPath, VaultKeyValueOperationsSupport.KeyValueBackend.KV_2).get(vaultSecretsPath.toLowerCase());
    SmbSessionFactory smbSession = new SmbSessionFactory();
    smbSession.setHost(properties.getNasHost());
    smbSession.setPort(properties.getNasPort());
    smbSession.setDomain(properties.getNasDomain());
    if (vaultResponse != null) {
        Map<String, Object> data = vaultResponse.getData();
        smbSession.setUsername(data != null && data.get("nasUsername") != null ? (String) data.get("nasUsername") : "");
        smbSession.setPassword(data != null && data.get("nasPassword") != null ? (String) data.get("nasPassword") : "");
    }
    smbSession.setShareAndDir(properties.getNasShareAndDir());
    smbSession.setReplaceFile(true);
    smbSession.setSmbMinVersion(DialectVersion.SMB1);
    smbSession.setSmbMaxVersion(DialectVersion.SMB311);
    return smbSession;
}
Thank you in advance.

How to capture data change in YugabyteDB?

terminal 1:
postgres=# \c yugastore
You are now connected to database "yugastore" as user "postgres".
yugastore=# select count(*) from yugastore.users;
count
-------
2500
(1 row)
yugastore=# delete from yugastore.users;
DELETE 2500
(After starting insertion script at terminal 2)
yugastore=# select count(*) from yugastore.users;
ERROR: Query error: Restart read required at: { read: { physical: 1580057095845877 } local_limit: { physical: 1580057095880226 } global_limit: <min> in_txn_limit: <max> serial_no: 0 }
yugastore=# select count(*) from yugastore.users;
ERROR: Query error: Restart read required at: { read: { physical: 1580057098605539 } local_limit: { physical: 1580057098715271 } global_limit: <min> in_txn_limit: <max> serial_no: 0 }
terminal 2:
yugastore.users table is created and being populated.
time: 11:44:31.796 cumulative records: 100
time: 11:44:32.608 cumulative records: 200
time: 11:44:32.909 cumulative records: 300
time: 11:44:33.213 cumulative records: 400
time: 11:44:33.661 cumulative records: 500
...
time: 11:46:24.710 cumulative records: 18900
time: 11:46:25.137 cumulative records: 19000
time: 11:46:25.606 cumulative records: 19100
terminal 3:
[root@srvr0 ~]# java -jar ./yb_cdc_connector.jar --table_name yugastore.users --master_addrs 127.0.0.1:7100 --log_only
[2020-01-26 11:45:57,844] INFO Starting CDC Kafka Connector... (org.yb.cdc.Main:28)
2020-01-26 11:45:58,201 [INFO|org.yb.cdc.KafkaConnector|KafkaConnector] Creating new YB client...
[2020-01-26 11:46:02,853] INFO Discovered tablet YB Master for table YB Master with partition ["", "") (org.yb.client.AsyncYBClient:1593)
[2020-01-26 11:46:03,724] ERROR [Peer fakeUUID -> 127.0.0.1:9100] Tablet server sent error Invalid argument (yb/rpc/yb_rpc.cc:411): Call on service yb.cdc.CDCService received from Connection (0x0000000005b8e2d0) server 127.0.0.1:46926 => 127.0.0.1:9100 with an invalid method name: CreateCDCStream (org.yb.client.TabletClient:380)
2020-01-26 11:46:03,725 [ERROR|org.yb.cdc.Main|Main] Application ran into error:
org.yb.client.NonRecoverableException: [Peer fakeUUID -> 127.0.0.1:9100] Tablet server sent error Invalid argument (yb/rpc/yb_rpc.cc:411): Call on service yb.cdc.CDCService received from Connection (0x0000000005b8e2d0) server 127.0.0.1:46926 => 127.0.0.1:9100 with an invalid method name: CreateCDCStream
at org.yb.client.TabletClient.decode(TabletClient.java:379)
at org.yb.client.TabletClient.decode(TabletClient.java:98)
at org.jboss.netty.handler.codec.replay.ReplayingDecoder.callDecode(ReplayingDecoder.java:500)
at org.jboss.netty.handler.codec.replay.ReplayingDecoder.messageReceived(ReplayingDecoder.java:435)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.yb.client.TabletClient.handleUpstream(TabletClient.java:608)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.jboss.netty.handler.timeout.ReadTimeoutHandler.messageReceived(ReadTimeoutHandler.java:184)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
at org.yb.client.AsyncYBClient$TabletClientPipeline.sendUpstream(AsyncYBClient.java:2002)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Update 1:
After installing YugabyteDB 2.0.10.0, the "Restart read required" error is resolved, but no change logs are printed.
Error logs:
[root@srvr0 ~]# java -jar ./yb-cdc-connector.jar --table_name yugastore.users --master_addrs 127.0.0.1:7100 --stream_id 1 --log_only
[2020-01-28 08:27:31,101] INFO Starting CDC Kafka Connector... (org.yb.cdc.Main:28)
2020-01-28 08:27:31,154 [INFO|org.yb.cdc.KafkaConnector|KafkaConnector] Creating new YB client...
[2020-01-28 08:27:32,288] INFO Discovered tablet YB Master for table YB Master with partition ["", "") (org.yb.client.AsyncYBClient:1593)
2020-01-28 08:27:32,597 [INFO|org.yb.cdc.KafkaConnector|KafkaConnector] Polling for new tablet ce5115a780224cd0ab8a8e9c1a46b961
2020-01-28 08:27:32,604 [INFO|org.yb.cdc.KafkaConnector|KafkaConnector] Polling for new tablet cca5b30bb7784ae2a8796097d6fd5b2f
2020-01-28 08:27:32,694 [ERROR|org.yb.cdc.Poller|Poller] Invalid Request
2020-01-28 08:27:32,695 [ERROR|org.yb.cdc.Poller|Poller] Invalid Request
[root@srvr0 ~]#
Please help me in resolving the issues.
The read restart issue that you see with the select count(*) query has been fixed and is available from version 2.0.5.2: https://github.com/yugabyte/yugabyte-db/commit/3212616e351647436f808d4963d229e7881996c8.
Similarly, it seems like you are using an older, deprecated version of the CDC connector. You can get the current connector using:
wget -O yb-cdc-connector.jar https://github.com/yugabyte/yb-kafka-connector/blob/master/yb-cdc/yb-cdc-connector.jar?raw=true
And then run:
java -jar ./yb-cdc-connector.jar --table_name yugastore.users --master_addrs 127.0.0.1:7100 --log_only

I have authentication problems with a CosmosDB connection

I'm trying to connect to CosmosDB and I get the following error:
System.TimeoutException: A timeout occured after 30000ms selecting a server using CompositeServerSelector{ Selectors = MongoDB.Driver.MongoClient+AreSessionsSupportedServerSelector, LatencyLimitingServerSelector{ AllowedLatencyRange = 00:00:00.0150000 } }. Client view of cluster state is { ClusterId : "1", ConnectionMode : "ReplicaSet", Type : "ReplicaSet", State : "Disconnected", Servers : [{ ServerId: "{ ClusterId : 1, EndPoint : "Unspecified/daniel.documents.azure.com:10255" }", EndPoint: "Unspecified/daniel.documents.azure.com:10255", State: "Disconnected", Type: "Unknown", HeartbeatException: "MongoDB.Driver.MongoConnectionException: An exception occurred while opening a connection to the server. ---> System.IO.IOException: Authentication failed because the remote party has closed the transport stream.
at System.Net.Security.SslState.InternalEndProcessAuthentication(LazyAsyncResult lazyResult)
at System.Net.Security.SslState.EndProcessAuthentication(IAsyncResult result)
at System.Threading.Tasks.TaskFactory`1.FromAsyncCoreLogic(IAsyncResult iar, Func`2 endFunction, Action`1 endAction, Task`1 promise, Boolean requiresSynchronization)
--- End of stack trace from previous location where exception was thrown ---
Update:
I'm using persistence with NServiceBus.
Can it be that this is the problem?
Do you know if NServiceBus supports Azure CosmosDB?

Intermittent connectionTimeout errors in spark streaming job

I have a Spark (2.1) streaming job that writes processed data to Azure blob storage every batch (with a batch interval of 1 min). Every now and then (once every couple of hours) I get a 'java.net.ConnectException' with a connection timeout message. This does get retried and eventually succeeds, but the issue delays the completion of the 1 min streaming batch, causing it to take 2 to 3 min when the error occurs.
Below is the executor log snippet with error message. I have spark.executor.cores=5.
Is there some kind of limit on the number of connections that might be causing this?
17/10/11 16:09:02 INFO root: {89f867cc-cbd3-4fa9-a549-4e07be3f69b0}: {Starting operation.}
17/10/11 16:09:02 INFO root: {89f867cc-cbd3-4fa9-a549-4e07be3f69b0}: {Starting operation with location 'PRIMARY' per location mode 'PRIMARY_ONLY'.}
17/10/11 16:09:02 INFO root: {89f867cc-cbd3-4fa9-a549-4e07be3f69b0}: {Starting request to 'http://<name>.blob.core.windows.net/rawData/2017/10/11/16/data1.json' at 'Wed, 11 Oct 2017 16:09:02 GMT'.}
17/10/11 16:09:02 INFO root: {89f867cc-cbd3-4fa9-a549-4e07be3f69b0}: {Waiting for response.}
..
17/10/11 16:11:09 WARN root: {89f867cc-cbd3-4fa9-a549-4e07be3f69b0}: {Retryable exception thrown. Class = 'java.net.ConnectException', Message = 'Connection timed out (Connection timed out)'.}
17/10/11 16:11:09 INFO root: {89f867cc-cbd3-4fa9-a549-4e07be3f69b0}: {Checking if the operation should be retried. Retry count = '0', HTTP status code = '-1', Error Message = 'An unknown failure occurred : Connection timed out (Connection timed out)'.}
17/10/11 16:11:09 INFO root: {89f867cc-cbd3-4fa9-a549-4e07be3f69b0}: {The next location has been set to 'PRIMARY', per location mode 'PRIMARY_ONLY'.}
17/10/11 16:11:09 INFO root: {89f867cc-cbd3-4fa9-a549-4e07be3f69b0}: {The retry policy set the next location to 'PRIMARY' and updated the location mode to 'PRIMARY_ONLY'.}
17/10/11 16:11:09 INFO root: {89f867cc-cbd3-4fa9-a549-4e07be3f69b0}: {Operation will be retried after '0'ms.}
17/10/11 16:11:09 INFO root: {89f867cc-cbd3-4fa9-a549-4e07be3f69b0}: {Retrying failed operation.}

How can I drop an empty line in logstash

In my Logstash logs I sometimes have empty lines or lines with only spaces.
To drop the empty lines I created a dropemptylines filter file:
# drop empty lines
filter {
  if [message] =~ /^\s*$/ {
    drop { }
  }
}
But the empty-line filter is not working as expected, mainly because this particular filter is inside a chain of other filters, with more filters coming afterwards:
00_input.conf
05_syslogfilter.conf
06_dropemptylines.conf
07_classifier.conf
So I think my particular filter would work if it were the only one, but it's not.
2015-02-11 15:02:12.347 WARN 1 --- [tp1812226644-23] o.eclipse.jetty.servlet.ServletHandler :
org.springframework.web.util.NestedServletException: Request processing failed; nested exception is org.springframework.dao.DataAccessResourceFailureException: Timed out after 10000 ms while waiting for a server that matches AnyServerSelector{}. Client view of cluster state is {type=Unknown, servers=[{address=mongo:27017, type=Unknown, state=Connecting, exception={com.mongodb.MongoException$Network: Exception opening the socket}, caused by {java.net.UnknownHostException: mongo: unknown error}}]; nested exception is com.mongodb.MongoTimeoutException: Timed out after 10000 ms while waiting for a server that matches AnyServerSelector{}. Client view of cluster state is {type=Unknown, servers=[{address=mongo:27017, type=Unknown, state=Connecting, exception={com.mongodb.MongoException$Network: Exception opening the socket}, caused by {java.net.UnknownHostException: mongo: unknown error}}]
My question is: how can I drop out of all the filters and go directly to the output?
You can just ignore the empty lines entirely using a grok filter:
%{GREEDYDATA:1st}(\n{1,})%{GREEDYDATA:2nd}
It will generate:
{
"1st": [
[
"2015-02-11 15:02:12.347 WARN 1 --- [tp1812226644-23] o.eclipse.jetty.servlet.ServletHandler : "
]
],
"2nd": [
[
"org.springframework.web.util.NestedServletException: Request processing failed; nested exception is org.springframework.dao.DataAccessResourceFailureException: Timed out after 10000 ms while waiting for a server that matches AnyServerSelector{}. Client view of cluster state is {type=Unknown, servers=[{address=mongo:27017, type=Unknown, state=Connecting, exception={com.mongodb.MongoException$Network: Exception opening the socket}, caused by {java.net.UnknownHostException: mongo: unknown error}}]; nested exception is com.mongodb.MongoTimeoutException: Timed out after 10000 ms while waiting for a server that matches AnyServerSelector{}. Client view of cluster state is {type=Unknown, servers=[{address=mongo:27017, type=Unknown, state=Connecting, exception={com.mongodb.MongoException$Network: Exception opening the socket}, caused by {java.net.UnknownHostException: mongo: unknown error}}]"
]
]
}
Or, in a more elegant way:
(?m)%{GREEDYDATA:log}
Output:
{
"log": [
[
"2015-02-11 15:02:12.347 WARN 1 --- [tp1812226644-23] o.eclipse.jetty.servlet.ServletHandler : \n\n\n\norg.springframework.web.util.NestedServletException: Request processing failed; nested exception is org.springframework.dao.DataAccessResourceFailureException: Timed out after 10000 ms while waiting for a server that matches AnyServerSelector{}. Client view of cluster state is {type=Unknown, servers=[{address=mongo:27017, type=Unknown, state=Connecting, exception={com.mongodb.MongoException$Network: Exception opening the socket}, caused by {java.net.UnknownHostException: mongo: unknown error}}]; nested exception is com.mongodb.MongoTimeoutException: Timed out after 10000 ms while waiting for a server that matches AnyServerSelector{}. Client view of cluster state is {type=Unknown, servers=[{address=mongo:27017, type=Unknown, state=Connecting, exception={com.mongodb.MongoException$Network: Exception opening the socket}, caused by {java.net.UnknownHostException: mongo: unknown error}}]"
]
]
}
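For reference, dropping either of these patterns into the pipeline would look roughly like this (a sketch using the (?m) variant; "log" is just the field name from the output above):
filter {
  grok {
    match => [ "message", "(?m)%{GREEDYDATA:log}" ]
  }
}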
