AdoNetAppender in log4net not logging or throwing exception - log4net
I have defined a logger in my log4net.config file that is supposed to use the AdoNetAppender to log to an instance of SQL Server 2005. The logger is being called in my code, but no messages are being logged and no exceptions are being thrown.
Here is the part of my config file defining the logger and the appender:
<logger name="Log4NetSummarySqlLogger">
  <level value="INFO" />
  <appender-ref ref="SummarySqlAppender" />
</logger>
<appender name="SummarySqlAppender" type="log4net.Appender.AdoNetAppender">
  <bufferSize value="100" />
  <!--<connectionType value="System.Data.SqlClient.SqlConnection, System.Data, Version=1.0.3300.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" />-->
  <connectionType value="System.Data.SqlClient.SqlConnection, System.Data, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" />
  <connectionString value="data source=[removed];initial catalog=[removed];integrated security=false;persist security info=True;User ID=[removed];Password=[removed]" />
  <commandText value="INSERT INTO Log ([Date],[Thread],[Level],[Logger],[Message],[Exception]) VALUES (#log_date, #thread, #log_level, #logger, #message, #exception)" />
  <parameter>
    <parameterName value="#log_date" />
    <dbType value="DateTime" />
    <layout type="log4net.Layout.RawTimeStampLayout" />
  </parameter>
  <parameter>
    <parameterName value="#thread" />
    <dbType value="String" />
    <size value="255" />
    <layout type="log4net.Layout.PatternLayout">
      <conversionPattern value="%thread" />
    </layout>
  </parameter>
  <parameter>
    <parameterName value="#log_level" />
    <dbType value="String" />
    <size value="50" />
    <layout type="log4net.Layout.PatternLayout">
      <conversionPattern value="%level" />
    </layout>
  </parameter>
  <parameter>
    <parameterName value="#logger" />
    <dbType value="String" />
    <size value="255" />
    <layout type="log4net.Layout.PatternLayout">
      <conversionPattern value="%logger" />
    </layout>
  </parameter>
  <parameter>
    <parameterName value="#message" />
    <dbType value="String" />
    <size value="4000" />
    <layout type="log4net.Layout.PatternLayout">
      <conversionPattern value="%message" />
    </layout>
  </parameter>
  <parameter>
    <parameterName value="#exception" />
    <dbType value="String" />
    <size value="2000" />
    <layout type="log4net.Layout.ExceptionLayout" />
  </parameter>
</appender>
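For context, the logger is retrieved in code along these lines (a minimal sketch; the class and method names are hypothetical, but the string passed to `GetLogger` must match the `name` attribute of the `<logger>` element exactly, or log4net silently hands back a differently-configured logger):

```csharp
using log4net;

public static class SummaryLogging
{
    // The name here must match <logger name="Log4NetSummarySqlLogger"> in
    // log4net.config; any other name falls back to the root configuration.
    private static readonly ILog Log =
        LogManager.GetLogger("Log4NetSummarySqlLogger");

    public static void LogSummary(string message)
    {
        // INFO meets the configured threshold; Debug() calls would be dropped.
        Log.Info(message);
    }
}
```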
I also enabled log4net's internal debugging, and it wasn't of much help either:
log4net: log4net assembly [log4net, Version=1.2.10.0, Culture=neutral, PublicKeyToken=1b44e1d426115821]. Loaded from [<executable directory>\log4net.dll]. (.NET Runtime [4.0.30319.237] on Microsoft Windows NT 5.1.2600 Service Pack 3)
log4net: DefaultRepositorySelector: defaultRepositoryType [log4net.Repository.Hierarchy.Hierarchy]
log4net: DefaultRepositorySelector: Creating repository for assembly [WindowsService, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null]
log4net: DefaultRepositorySelector: Assembly [WindowsService, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null] Loaded From [<executable directory>\WindowsService.exe]
log4net: DefaultRepositorySelector: Assembly [WindowsService, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null] does not have a RepositoryAttribute specified.
log4net: DefaultRepositorySelector: Assembly [WindowsService, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null] using repository [log4net-default-repository] and repository type [log4net.Repository.Hierarchy.Hierarchy]
log4net: DefaultRepositorySelector: Creating repository [log4net-default-repository] using type [log4net.Repository.Hierarchy.Hierarchy]
log4net: XmlConfigurator: configuring repository [log4net-default-repository] using file [<executable directory>\log4net.config]
log4net: XmlConfigurator: configuring repository [log4net-default-repository] using stream
log4net: XmlConfigurator: loading XML configuration
log4net: XmlConfigurator: Configuring Repository [log4net-default-repository]
log4net: XmlHierarchyConfigurator: Configuration update mode [Merge].
log4net: XmlHierarchyConfigurator: Logger [root] Level string is [INFO].
log4net: XmlHierarchyConfigurator: Logger [root] level set to [name="INFO",value=40000].
log4net: XmlHierarchyConfigurator: Loading Appender [FileAppender] type: [log4net.Appender.RollingFileAppender]
log4net: XmlHierarchyConfigurator: Setting Property [File] to String value [<log directory>/info.log]
log4net: XmlHierarchyConfigurator: Setting Property [AppendToFile] to Boolean value [True]
log4net: XmlHierarchyConfigurator: Setting Property [RollingStyle] to RollingMode value [Size]
log4net: XmlHierarchyConfigurator: Setting Property [MaxSizeRollBackups] to Int32 value [10]
log4net: XmlHierarchyConfigurator: Setting Property [MaximumFileSize] to String value [100KB]
log4net: XmlHierarchyConfigurator: Setting Property [StaticLogFileName] to Boolean value [True]
log4net: PatternParser: Converter [message] Option [] Format [min=-1,max=2147483647,leftAlign=False]
log4net: PatternParser: Converter [newline] Option [] Format [min=-1,max=2147483647,leftAlign=False]
log4net: XmlHierarchyConfigurator: Setting Property [ConversionPattern] to String value [{%level}%date{MM/dd HH:mm:ss} - %message%newline]
log4net: PatternParser: Converter [literal] Option [{] Format [min=-1,max=2147483647,leftAlign=False]
log4net: PatternParser: Converter [level] Option [] Format [min=-1,max=2147483647,leftAlign=False]
log4net: PatternParser: Converter [literal] Option [}] Format [min=-1,max=2147483647,leftAlign=False]
log4net: PatternParser: Converter [date] Option [MM/dd HH:mm:ss] Format [min=-1,max=2147483647,leftAlign=False]
log4net: PatternParser: Converter [literal] Option [ - ] Format [min=-1,max=2147483647,leftAlign=False]
log4net: PatternParser: Converter [message] Option [] Format [min=-1,max=2147483647,leftAlign=False]
log4net: PatternParser: Converter [newline] Option [] Format [min=-1,max=2147483647,leftAlign=False]
log4net: XmlHierarchyConfigurator: Setting Property [Layout] to object [log4net.Layout.PatternLayout]
log4net: RollingFileAppender: Searched for existing files in [<log directory>]
log4net: RollingFileAppender: curSizeRollBackups starts at [0]
log4net: FileAppender: Opening file for writing [<log directory>\info.log] append [True]
log4net: XmlHierarchyConfigurator: Created Appender [FileAppender]
log4net: XmlHierarchyConfigurator: Adding appender named [FileAppender] to logger [root].
log4net: XmlHierarchyConfigurator: Retrieving an instance of log4net.Repository.Logger for logger [Log4NetSummarySqlLogger].
log4net: XmlHierarchyConfigurator: Setting [Log4NetSummarySqlLogger] additivity to [True].
log4net: XmlHierarchyConfigurator: Logger [Log4NetSummarySqlLogger] Level string is [INFO].
log4net: XmlHierarchyConfigurator: Logger [Log4NetSummarySqlLogger] level set to [name="INFO",value=40000].
log4net: XmlHierarchyConfigurator: Loading Appender [SummarySqlAppender] type: [log4net.Appender.AdoNetAppender]
log4net: XmlHierarchyConfigurator: Setting Property [BufferSize] to Int32 value [100]
log4net: XmlHierarchyConfigurator: Setting Property [ConnectionType] to String value [System.Data.SqlClient.SqlConnection, System.Data, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]
log4net: XmlHierarchyConfigurator: Setting Property [ConnectionString] to String value [data source=<server>;initial catalog=<database>;integrated security=false;persist security info=True;User ID=<user>;Password=<password>]
log4net: XmlHierarchyConfigurator: Setting Property [CommandText] to String value [INSERT INTO Log ([Date],[Thread],[Level],[Logger],[Message],[Exception]) VALUES (#log_date, #thread, #log_level, #logger, #message, #exception)]
log4net: XmlHierarchyConfigurator: Setting Property [ParameterName] to String value [#log_date]
log4net: XmlHierarchyConfigurator: Setting Property [DbType] to DbType value [DateTime]
log4net: XmlHierarchyConfigurator: Setting Property [Layout] to object [log4net.Layout.RawTimeStampLayout]
log4net: XmlHierarchyConfigurator: Setting Collection Property [AddParameter] to object [log4net.Appender.AdoNetAppenderParameter]
log4net: XmlHierarchyConfigurator: Setting Property [ParameterName] to String value [#thread]
log4net: XmlHierarchyConfigurator: Setting Property [DbType] to DbType value [String]
log4net: XmlHierarchyConfigurator: Setting Property [Size] to Int32 value [255]
log4net: PatternParser: Converter [message] Option [] Format [min=-1,max=2147483647,leftAlign=False]
log4net: PatternParser: Converter [newline] Option [] Format [min=-1,max=2147483647,leftAlign=False]
log4net: XmlHierarchyConfigurator: Setting Property [ConversionPattern] to String value [%thread]
log4net: PatternParser: Converter [thread] Option [] Format [min=-1,max=2147483647,leftAlign=False]
log4net: XmlHierarchyConfigurator: Setting Property [Layout] to object [log4net.Layout.Layout2RawLayoutAdapter]
log4net: XmlHierarchyConfigurator: Setting Collection Property [AddParameter] to object [log4net.Appender.AdoNetAppenderParameter]
log4net: XmlHierarchyConfigurator: Setting Property [ParameterName] to String value [#log_level]
log4net: XmlHierarchyConfigurator: Setting Property [DbType] to DbType value [String]
log4net: XmlHierarchyConfigurator: Setting Property [Size] to Int32 value [50]
log4net: PatternParser: Converter [message] Option [] Format [min=-1,max=2147483647,leftAlign=False]
log4net: PatternParser: Converter [newline] Option [] Format [min=-1,max=2147483647,leftAlign=False]
log4net: XmlHierarchyConfigurator: Setting Property [ConversionPattern] to String value [%level]
log4net: PatternParser: Converter [level] Option [] Format [min=-1,max=2147483647,leftAlign=False]
log4net: XmlHierarchyConfigurator: Setting Property [Layout] to object [log4net.Layout.Layout2RawLayoutAdapter]
log4net: XmlHierarchyConfigurator: Setting Collection Property [AddParameter] to object [log4net.Appender.AdoNetAppenderParameter]
log4net: XmlHierarchyConfigurator: Setting Property [ParameterName] to String value [#logger]
log4net: XmlHierarchyConfigurator: Setting Property [DbType] to DbType value [String]
log4net: XmlHierarchyConfigurator: Setting Property [Size] to Int32 value [255]
log4net: PatternParser: Converter [message] Option [] Format [min=-1,max=2147483647,leftAlign=False]
log4net: PatternParser: Converter [newline] Option [] Format [min=-1,max=2147483647,leftAlign=False]
log4net: XmlHierarchyConfigurator: Setting Property [ConversionPattern] to String value [%logger]
log4net: PatternParser: Converter [logger] Option [] Format [min=-1,max=2147483647,leftAlign=False]
log4net: XmlHierarchyConfigurator: Setting Property [Layout] to object [log4net.Layout.Layout2RawLayoutAdapter]
log4net: XmlHierarchyConfigurator: Setting Collection Property [AddParameter] to object [log4net.Appender.AdoNetAppenderParameter]
log4net: XmlHierarchyConfigurator: Setting Property [ParameterName] to String value [#message]
log4net: XmlHierarchyConfigurator: Setting Property [DbType] to DbType value [String]
log4net: XmlHierarchyConfigurator: Setting Property [Size] to Int32 value [4000]
log4net: PatternParser: Converter [message] Option [] Format [min=-1,max=2147483647,leftAlign=False]
log4net: PatternParser: Converter [newline] Option [] Format [min=-1,max=2147483647,leftAlign=False]
log4net: XmlHierarchyConfigurator: Setting Property [ConversionPattern] to String value [%message]
log4net: PatternParser: Converter [message] Option [] Format [min=-1,max=2147483647,leftAlign=False]
log4net: XmlHierarchyConfigurator: Setting Property [Layout] to object [log4net.Layout.Layout2RawLayoutAdapter]
log4net: XmlHierarchyConfigurator: Setting Collection Property [AddParameter] to object [log4net.Appender.AdoNetAppenderParameter]
log4net: XmlHierarchyConfigurator: Setting Property [ParameterName] to String value [#exception]
log4net: XmlHierarchyConfigurator: Setting Property [DbType] to DbType value [String]
log4net: XmlHierarchyConfigurator: Setting Property [Size] to Int32 value [2000]
log4net: XmlHierarchyConfigurator: Setting Property [Layout] to object [log4net.Layout.Layout2RawLayoutAdapter]
log4net: XmlHierarchyConfigurator: Setting Collection Property [AddParameter] to object [log4net.Appender.AdoNetAppenderParameter]
'QTAgent32.exe' (Managed (v4.0.30319)): Loaded 'C:\WINDOWS\Microsoft.Net\assembly\GAC_32\System.Transactions\v4.0_4.0.0.0__b77a5c561934e089\System.Transactions.dll', Skipped loading symbols. Module is optimized and the debugger option 'Just My Code' is enabled.
'QTAgent32.exe' (Managed (v4.0.30319)): Loaded 'C:\WINDOWS\Microsoft.Net\assembly\GAC_32\System.EnterpriseServices\v4.0_4.0.0.0__b03f5f7f11d50a3a\System.EnterpriseServices.dll', Skipped loading symbols. Module is optimized and the debugger option 'Just My Code' is enabled.
'QTAgent32.exe' (Managed (v4.0.30319)): Loaded 'C:\WINDOWS\Microsoft.Net\assembly\GAC_32\System.EnterpriseServices\v4.0_4.0.0.0__b03f5f7f11d50a3a\System.EnterpriseServices.Wrapper.dll', Skipped loading symbols. Module is optimized and the debugger option 'Just My Code' is enabled.
log4net: XmlHierarchyConfigurator: Created Appender [SummarySqlAppender]
log4net: XmlHierarchyConfigurator: Adding appender named [SummarySqlAppender] to logger [Log4NetSummarySqlLogger].
log4net: XmlHierarchyConfigurator: Retrieving an instance of log4net.Repository.Logger for logger [Log4NetEventLogger].
log4net: XmlHierarchyConfigurator: Setting [Log4NetEventLogger] additivity to [True].
log4net: XmlHierarchyConfigurator: Logger [Log4NetEventLogger] Level string is [INFO].
log4net: XmlHierarchyConfigurator: Logger [Log4NetEventLogger] level set to [name="INFO",value=40000].
log4net: XmlHierarchyConfigurator: Loading Appender [EventLogAppender] type: [log4net.Appender.EventLogAppender]
log4net: XmlHierarchyConfigurator: Setting Property [LogName] to String value [Application]
log4net: XmlHierarchyConfigurator: Setting Property [ApplicationName] to String value [ConcurFilesService]
log4net: PatternParser: Converter [message] Option [] Format [min=-1,max=2147483647,leftAlign=False]
log4net: PatternParser: Converter [newline] Option [] Format [min=-1,max=2147483647,leftAlign=False]
log4net: XmlHierarchyConfigurator: Setting Property [ConversionPattern] to String value [{%level}%date{MM/dd HH:mm:ss} - %message%newline]
log4net: PatternParser: Converter [literal] Option [{] Format [min=-1,max=2147483647,leftAlign=False]
log4net: PatternParser: Converter [level] Option [] Format [min=-1,max=2147483647,leftAlign=False]
log4net: PatternParser: Converter [literal] Option [}] Format [min=-1,max=2147483647,leftAlign=False]
log4net: PatternParser: Converter [date] Option [MM/dd HH:mm:ss] Format [min=-1,max=2147483647,leftAlign=False]
log4net: PatternParser: Converter [literal] Option [ - ] Format [min=-1,max=2147483647,leftAlign=False]
log4net: PatternParser: Converter [message] Option [] Format [min=-1,max=2147483647,leftAlign=False]
log4net: PatternParser: Converter [newline] Option [] Format [min=-1,max=2147483647,leftAlign=False]
log4net: XmlHierarchyConfigurator: Setting Property [Layout] to object [log4net.Layout.PatternLayout]
log4net: EventLogAppender: Source [ConcurFilesService] is registered to log []
log4net: XmlHierarchyConfigurator: Created Appender [EventLogAppender]
log4net: XmlHierarchyConfigurator: Adding appender named [EventLogAppender] to logger [Log4NetEventLogger].
log4net: XmlHierarchyConfigurator: Hierarchy Threshold []
FWIW, I have also defined a FileAppender elsewhere and it works fine.
Try setting the buffer size to 1. It is possible that your application exits before the buffer is flushed, so you never see any log messages:
<bufferSize value="1" />
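Relatedly, with a buffered appender it can help to flush explicitly when the application stops. A sketch, placed wherever your service shuts down (`LogManager.Shutdown` flushes and closes all configured appenders):

```csharp
// Flush and close all appenders so buffered events reach the database
// before the process exits.
log4net.LogManager.Shutdown();
```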
Another possible problem is the logger name. Do you really have a logger with this name? Try configuring the root logger instead:
<root>
  <level value="INFO" />
  <appender-ref ref="SummarySqlAppender" />
</root>
Sounds like log4net is being initialized okay, but something's amiss with this particular appender or logger. Try adding a file appender to the Log4NetSummarySqlLogger and make sure the logger is actually being used.
The AdoNetAppender can also fail if the generated SQL is invalid. The failure won't throw an exception; instead, an ERROR entry is written to STDERR. If this is a Windows GUI or service app, I highly recommend testing your config from a console application so appender errors are easier to spot. These ERROR entries look like this:
log4net:ERROR [AdoNetAppender] Exception while writing to database
System.Data.SqlClient.SqlException: Cannot insert the value NULL into column 'Date', table 'Test1.dbo.Log'; column does not allow nulls. INSERT fails.
The statement has been terminated.
   at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection)
   at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection)
   at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj)...
I read that you've tested the connection string; have you also tested the SQL generated by the appender? And have you verified that the database column sizes and types match the parameters in your config?
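One quick way to do that is to run the appender's INSERT by hand (in SSMS or sqlcmd) with representative literal values; the table and column names below mirror the config, and any type, size, or nullability mismatch will surface immediately. It is also worth double-checking the parameter markers, since System.Data.SqlClient conventionally expects @-prefixed parameter names:

```sql
-- Hand-rolled version of the appender's INSERT, with sample literals
-- standing in for the # parameters from the config.
INSERT INTO Log ([Date],[Thread],[Level],[Logger],[Message],[Exception])
VALUES (GETDATE(), 'worker-1', 'INFO', 'Log4NetSummarySqlLogger', 'test message', '');
```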
I just ran into this, and in my case the cause was that the configuration was never initialized, so this may differ from the OP's issue. In your application startup, don't forget this line:
[assembly: log4net.Config.XmlConfigurator(Watch = true)]
Without it, log4net just happily continues without issuing any warnings. I ended up downloading the source and rebuilding log4net so I could step through it. For reference, my Global.asax code-behind looks like this:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Http;
using System.Web.Mvc;
using System.Web.Optimization;
using System.Web.Routing;
[assembly: log4net.Config.XmlConfigurator(Watch = true)]
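If the assembly-level attribute is awkward to place (e.g. in a Windows service rather than a web app), the programmatic equivalent is a sketch like this, run once at startup; the filename `log4net.config` next to the executable is an assumption based on the debug output above:

```csharp
using System;
using System.IO;
using log4net.Config;

public class Global : System.Web.HttpApplication
{
    protected void Application_Start()
    {
        // Equivalent of [assembly: XmlConfigurator(Watch = true)]:
        // configure log4net explicitly and watch the file for changes.
        XmlConfigurator.ConfigureAndWatch(
            new FileInfo(Path.Combine(
                AppDomain.CurrentDomain.BaseDirectory, "log4net.config")));
    }
}
```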
Related
Spark Structured streaming : ClassCastException: .streaming.SerializedOffset cannot be cast to class .spark.sql.streaming.CouchbaseSourceOffset
I am using Couchbase spark connector in spark structured streaming. I have enabled checkpointing on the streaming query. But I get the class cast exception "java.lang.ClassCastException: class org.apache.spark.sql.execution.streaming.SerializedOffset cannot be cast to class com.couchbase.spark.sql.streaming.CouchbaseSourceOffset" when I rerun the spark structured streaming application on previously checkpointed location. If I delete the contents of checkpoint spark runs fine. Is it a bug on spark? I am using spark 2.4.5 20/04/23 19:11:29 ERROR MicroBatchExecution: Query [id = 1ce2e002-20ee-401e-98de-27e70b27f1a4, runId = 0b89094f-3bae-4927-b09c-24d9deaf5901] terminated with error java.lang.ClassCastException: class org.apache.spark.sql.execution.streaming.SerializedOffset cannot be cast to class com.couchbase.spark.sql.streaming.CouchbaseSourceOffset (org.apache.spark.sql.execution.streaming.SerializedOffset and com.couchbase.spark.sql.streaming.CouchbaseSourceOffset are in unnamed module of loader 'app') at com.couchbase.spark.sql.streaming.CouchbaseSource.$anonfun$getBatch$2(CouchbaseSource.scala:172) at scala.Option.map(Option.scala:230) at com.couchbase.spark.sql.streaming.CouchbaseSource.getBatch(CouchbaseSource.scala:172) at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$populateStartOffsets$3(MicroBatchExecution.scala:284) at scala.collection.Iterator.foreach(Iterator.scala:943) at scala.collection.Iterator.foreach$(Iterator.scala:943) at scala.collection.AbstractIterator.foreach(Iterator.scala:1431) at scala.collection.IterableLike.foreach(IterableLike.scala:74) at scala.collection.IterableLike.foreach$(IterableLike.scala:73) at org.apache.spark.sql.execution.streaming.StreamProgress.foreach(StreamProgress.scala:25) at org.apache.spark.sql.execution.streaming.MicroBatchExecution.populateStartOffsets(MicroBatchExecution.scala:281) at 
org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runActivatedStream$2(MicroBatchExecution.scala:169) at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23) at org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken(ProgressReporter.scala:351) at org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken$(ProgressReporter.scala:349) at org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:58) at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runActivatedStream$1(MicroBatchExecution.scala:166) at org.apache.spark.sql.execution.streaming.ProcessingTimeExecutor.execute(TriggerExecutor.scala:56) at org.apache.spark.sql.execution.streaming.MicroBatchExecution.runActivatedStream(MicroBatchExecution.scala:160) at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:281) at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:193)
Why cqlsh "CREATE" statement creates a "uuid" field as "timeuuid"?
Cassandra 3.11.0 running inside Ubuntu 16.04.4, JRE-8 I am trying to create a table with uuid field as follows : CREATE TABLE IF NOT EXISTS policy ( tenant_id text, policy_id uuid, name text, enabled boolean, creation_time bigint, PRIMARY KEY (tenant_id, name) ); After executing this, schema shows "policy_id" created as "timeuuid" instead of "uuid". Saw a [Similar issue] : creating uuid type field in Cassandra table using CassandraAdminOperations.createTable But is this applicable to cqlsh ? Dropped table/keyspace and tried again but no luck. This is intermittent issue. Getting following exception: Exception while loading CQL script: com.datastax.driver.core.exceptions.InvalidQueryException: Type error: cannot assign result of function system.uuid (type uuid) to id (type timeuuid) at com.datastax.driver.core.exceptions.InvalidQueryException.copy(InvalidQueryException.java:50) at com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37) at com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:245) at com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:68) at com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:43) Also seen following in cassandra logs: org.apache.cassandra.exceptions.ConfigurationException: Column family ID mismatch (found e686a660-8994-11e9-984c-2767f9f5fd28; expected e5d72c80-8994-11e9-b706-831d59206120) at org.apache.cassandra.config.CFMetaData.validateCompatibility(CFMetaData.java:808) ~[apache-cassandra-3.11.0.jar:3.11.0] at org.apache.cassandra.config.CFMetaData.apply(CFMetaData.java:770) ~[apache-cassandra-3.11.0.jar:3.11.0] at org.apache.cassandra.config.Schema.updateTable(Schema.java:621) ~[apache-cassandra-3.11.0.jar:3.11.0] at org.apache.cassandra.schema.SchemaKeyspace.updateKeyspace(SchemaKeyspace.java:1430) ~[apache-cassandra-3.11.0.jar:3.11.0] at org.apache.cassandra.schema.SchemaKeyspace.mergeSchema(SchemaKeyspace.java:1386) 
~[apache-cassandra-3.11.0.jar:3.11.0] at org.apache.cassandra.schema.SchemaKeyspace.mergeSchemaAndAnnounceVersion(SchemaKeyspace.java:1336) ~[apache-cassandra-3.11.0.jar:3.11.0] at org.apache.cassandra.service.MigrationTask$1.response(MigrationTask.java:91) ~[apache-cassandra-3.11.0.jar:3.11.0] at org.apache.cassandra.net.ResponseVerbHandler.doVerb(ResponseVerbHandler.java:53) [apache-cassandra-3.11.0.jar:3.11.0] at org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:66) [apache-cassandra-3.11.0.jar:3.11.0] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_131] at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_131] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_131] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_131] at org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:81) [apache-cassandra-3.11.0.jar:3.11.0] at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_131] policy_id should have datatype "uuid" but it has datatype "timeuuid". Any additional keyword required in cql statement to create table?
Azure oozie workflow
i am trying to run a oozie workflow on azurehdinsight cluster the job definition looks like this: <workflow-app xmlns="uri:oozie:workflow:0.2" name="oozie-sqoop"> <start to="sqoop1" /> <action name="sqoop1"> <sqoop xmlns="uri:oozie:sqoop-action:0.4"> <job-tracker>jobtrackerhost:9010</job-tracker> <name-node>wasb://abc#def.blob.core.windows.net</name-node> <configuration> <property> <name>mapred.job.queue.name</name> <value>default</value> </property> </configuration> <arg>import</arg> <arg>--connect</arg> <arg>jdbc:mysql://{ip}/svnadmin</arg> <arg>--username</arg> <arg>uname</arg> <arg>--password</arg> <arg>password</arg> <arg>--table</arg> <arg>rights</arg> <arg>--hive-import</arg> </sqoop> <ok to="end" /> <error to="fail" /> </action> <kill name="fail"> <message>sqoop failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message> </kill> <end name="end" /> </workflow-app> log: 2016-10-04 06:16:06,816 INFO ActionStartXCommand:520 - SERVER[hn0-saint.3oitbdwtly0uzabcmledackovts0a.bx.internal.cloudapp.net] USER[saint] GROUP[-] TOKEN[] APP[oozie-sqoop] JOB[0000015-160928235712742-oozie-oozi-W] ACTION[0000015-160928235712742-oozie-oozi-W#:start:] Start action [0000015-160928235712742-oozie-oozi-W#:start:] with user-retry state : userRetryCount [0], userRetryMax [0], userRetryInterval [10] 2016-10-04 06:16:06,827 INFO ActionStartXCommand:520 - SERVER[hn0-saint.3oitbdwtly0uzmledackovts0a.bx.internal.cloudapp.net] USER[saint] GROUP[-] TOKEN[] APP[oozie-sqoop] JOB[0000015-160928235712742-oozie-oozi-W] ACTION[0000015-160928235712742-oozie-oozi-W#:start:] [***0000015-160928235712742-oozie-oozi-W#:start:***]Action status=DONE 2016-10-04 06:16:06,828 INFO ActionStartXCommand:520 - SERVER[hn0-saint.3oitbdwtly0uzmleklasackovts0a.bx.internal.cloudapp.net] USER[saint] GROUP[-] TOKEN[] APP[oozie-sqoop] JOB[0000015-160928235712742-oozie-oozi-W] ACTION[0000015-160928235712742-oozie-oozi-W#:start:] [***0000015-160928235712742-oozie-oozi-W#:start:***]Action updated in DB! 
2016-10-04 06:16:07,508 INFO WorkflowNotificationXCommand:520 - SERVER[hn0-saint.3oitbdxbtly0uzmledackovts0a.bx.internal.cloudapp.net] USER[-] GROUP[-] TOKEN[-] APP[-] JOB[0000015-160928235712742-oozie-oozi-W] ACTION[] No Notification URL is defined. Therefore nothing to notify for job 0000015-160928235712742-oozie-oozi-W but it is not importing anything to hive . When i tried the same with sqoop command it succeeded and successfully imported to hive. it always showing status as running and it never change to anything
Looks like your WASB path is wrong , not sure if you changed it for posting purpose. I believe it should be as below. <name-node>wasbs://abc#def.blob.core.windows.net</name-node> You are missing s , please check.
Deploying App_Data folder to IIS in Orchard CMS
I use Orchard CMS 1.10.1, I have problem with deploying an existing App_Data folder (It already Contains a finished website), I get this error when I try to load website The resource cannot be found. Description: HTTP 404. The resource you are looking for (or one of its dependencies) could have been removed, had its name changed, or is temporarily unavailable. Please review the following URL and make sure that it is spelled correctly. Requested URL: / Version Information: Microsoft .NET Framework Version:4.0.30319; ASP.NET Version:4.0.30319.34209 When I use a fresh App_Data, It works fine and shows me the Setup Page. But When Click on Finish Setup button, this error comes up : Setup failed: Exception has been thrown by the target of an invocation. ________________UPDATE___________________ I deployed this to another server (in different host company) and it worked fine. I don't know What this server lack for running Orchard. I called them but they had no idea what to do. I looked at App_Data/logs and this was there: 2016-08-24 23:12:25,672 [10] Orchard.Environment.DefaultOrchardHost - (null) - A tenant could not be started: Default Attempt number: 0 [(null)] NHibernate.HibernateException: Could not create the driver from Orchard.Data.Providers.SqlCeDataServicesProvider+OrchardSqlServerCeDriver, Orchard.Framework, Version=1.10.1.0, Culture=neutral, PublicKeyToken=null. ---> System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> System.Data.SqlServerCe.SqlCeException: Unable to load the native components of SQL Server Compact corresponding to the ADO.NET provider of version 8876. Install the correct version of SQL Server Compact. Refer to KB article 974247 for more details. 
at System.Data.SqlServerCe.NativeMethods.LoadNativeBinaries() at System.Data.SqlServerCe.SqlCeCommand..ctor() --- End of inner exception stack trace --- at System.RuntimeTypeHandle.CreateInstance(RuntimeType type, Boolean publicOnly, Boolean noCheck, Boolean& canBeCached, RuntimeMethodHandleInternal& ctor, Boolean& bNeedSecurityCheck) at System.RuntimeType.CreateInstanceSlow(Boolean publicOnly, Boolean skipCheckThis, Boolean fillCache, StackCrawlMark& stackMark) at System.RuntimeType.CreateInstanceDefaultCtor(Boolean publicOnly, Boolean skipCheckThis, Boolean fillCache, StackCrawlMark& stackMark) at System.Activator.CreateInstance(Type type, Boolean nonPublic) at System.Activator.CreateInstance(Type type) at NHibernate.Bytecode.ActivatorObjectsFactory.CreateInstance(Type type) at NHibernate.Driver.ReflectionDriveConnectionCommandProvider.CreateCommand() at NHibernate.Driver.ReflectionBasedDriver.CreateCommand() at NHibernate.Driver.SqlServerCeDriver.Configure(IDictionary`2 settings) at Orchard.Data.Providers.SqlCeDataServicesProvider.OrchardSqlServerCeDriver.Configure(IDictionary`2 settings) at NHibernate.Connection.ConnectionProvider.ConfigureDriver(IDictionary`2 settings) --- End of inner exception stack trace --- at NHibernate.Connection.ConnectionProvider.ConfigureDriver(IDictionary`2 settings) at NHibernate.Connection.ConnectionProvider.Configure(IDictionary`2 settings) at NHibernate.Connection.ConnectionProviderFactory.NewConnectionProvider(IDictionary`2 settings) at NHibernate.Cfg.SettingsFactory.BuildSettings(IDictionary`2 properties) at NHibernate.Cfg.Configuration.BuildSettings() at NHibernate.Cfg.Configuration.BuildSessionFactory() at Orchard.Data.SessionFactoryHolder.BuildSessionFactory() at Orchard.Data.SessionFactoryHolder.GetSessionFactory() at Orchard.Data.TransactionManager.EnsureSession(IsolationLevel level) at Orchard.Data.TransactionManager.GetSession() at Orchard.Data.Repository`1.get_Session() at Orchard.Data.Repository`1.get_Table() at 
Orchard.Data.Repository`1.Fetch(Expression`1 predicate) at Orchard.Data.Repository`1.Get(Expression`1 predicate) at Orchard.Data.Repository`1.Orchard.Data.IRepository<T>.Get(Expression`1 predicate) at Orchard.Core.Settings.Descriptor.ShellDescriptorManager.GetDescriptorRecord() at Orchard.Core.Settings.Descriptor.ShellDescriptorManager.GetShellDescriptor() at Orchard.Environment.ShellBuilders.ShellContextFactory.CreateShellContext(ShellSettings settings) at Orchard.Environment.DefaultOrchardHost.CreateShellContext(ShellSettings settings) at Orchard.Environment.DefaultOrchardHost.<CreateAndActivateShells>b__41_1(ShellSettings settings) 2016-08-24 23:12:27,453 [10] Orchard.Environment.DefaultOrchardHost - (null) - A tenant could not be started: Default after 1 retries. [(null)] 2016-08-24 23:12:27,938 [10] Orchard.Environment.DefaultOrchardHost - (null) - A tenant could not be started: Default Attempt number: 0 [(null)] NHibernate.HibernateException: Could not create the driver from Orchard.Data.Providers.SqlCeDataServicesProvider+OrchardSqlServerCeDriver, Orchard.Framework, Version=1.10.1.0, Culture=neutral, PublicKeyToken=null. ---> System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> System.Data.SqlServerCe.SqlCeException: Unable to load the native components of SQL Server Compact corresponding to the ADO.NET provider of version 8876. Install the correct version of SQL Server Compact. Refer to KB article 974247 for more details. 
(The same HibernateException/SqlCeException stack trace repeats for each subsequent retry; the later attempts at 23:12:29 and 23:12:31 are logged against [http://studiosefid.com/] instead of [(null)].)
The log contains the following error message: "Unable to load the native components of SQL Server Compact corresponding to the ADO.NET provider of version 8876. Install the correct version of SQL Server Compact. Refer to KB article 974247 for more details." Make sure that the version of SQL Server Compact installed on the server matches the one Orchard references, including the correct bitness (x86 vs. x64) for your application pool.
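As an alternative to a machine-wide install, SQL Server Compact 4.0 also supports "private deployment", where the native binaries ship inside the application's bin folder and the bitness mismatch goes away because both x86 and amd64 copies are present. This is a sketch of the documented layout, not something taken from the question:

    bin\
      System.Data.SqlServerCe.dll
      x86\
        sqlceca40.dll  sqlcecompact40.dll  sqlceer40EN.dll
        sqlceme40.dll  sqlceqp40.dll       sqlcese40.dll
      amd64\
        (the same six native DLLs, 64-bit builds)

If the server's installed SQL CE runtime and the assembly the application references get out of sync, checking which copies exist under bin and under Program Files is a quick way to confirm the mismatch before reinstalling.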
How to monitor multiple directories in a Spark Streaming task
I want to use fileStream in Spark Streaming to monitor multiple HDFS directories, like this:

    val list_join_action_stream = ssc.fileStream[LongWritable, Text, TextInputFormat]("/user/root/*/*", check_valid_file(_), false).map(_._2.toString).print

By the way, I could not understand the meaning of the three type parameters: LongWritable, Text, TextInputFormat.

But it doesn't work:

java.io.FileNotFoundException: File /user/root/*/*
at org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:697)
at org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:105)
at org.apache.hadoop.hdfs.DistributedFileSystem$15.doCall(DistributedFileSystem.java:755)
at org.apache.hadoop.hdfs.DistributedFileSystem$15.doCall(DistributedFileSystem.java:751)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:751)
at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1485)
at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1525)
at org.apache.spark.streaming.dstream.FileInputDStream.findNewFiles(FileInputDStream.scala:176)
at org.apache.spark.streaming.dstream.FileInputDStream.compute(FileInputDStream.scala:134)
at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:300)
at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:300)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:299)
at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:287)
at scala.Option.orElse(Option.scala:257)
at org.apache.spark.streaming.dstream.DStream.getOrCompute(DStream.scala:284)
at org.apache.spark.streaming.dstream.MappedDStream.compute(MappedDStream.scala:35)
at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:300)
at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:300)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:299)
at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:287)
at scala.Option.orElse(Option.scala:257)
at org.apache.spark.streaming.dstream.DStream.getOrCompute(DStream.scala:284)
at org.apache.spark.streaming.dstream.ForEachDStream.generateJob(ForEachDStream.scala:38)
at org.apache.spark.streaming.DStreamGraph$$anonfun$1.apply(DStreamGraph.scala:116)
at org.apache.spark.streaming.DStreamGraph$$anonfun$1.apply(DStreamGraph.scala:116)
at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:251)
at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:251)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:251)
at scala.collection.AbstractTraversable.flatMap(Traversable.scala:105)
at org.apache.spark.streaming.DStreamGraph.generateJobs(DStreamGraph.scala:116)
at org.apache.spark.streaming.scheduler.JobGenerator$$anonfun$2.apply(JobGenerator.scala:243)
at org.apache.spark.streaming.scheduler.JobGenerator$$anonfun$2.apply(JobGenerator.scala:241)
at scala.util.Try$.apply(Try.scala:161)
at org.apache.spark.streaming.scheduler.JobGenerator.generateJobs(JobGenerator.scala:241)
at org.apache.spark.streaming.scheduler.JobGenerator.org$apache$spark$streaming$scheduler$JobGenerator$$processEvent(JobGenerator.scala:177)
at org.apache.spark.streaming.scheduler.JobGenerator$$anonfun$start$1$$anon$1$$anonfun$receive$1.applyOrElse(JobGenerator.scala:86)
at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
at org.apache.spark.streaming.scheduler.JobGenerator$$anonfun$start$1$$anon$1.aroundReceive(JobGenerator.scala:84)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
at akka.actor.ActorCell.invoke(ActorCell.scala:487)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
at akka.dispatch.Mailbox.run(Mailbox.scala:220)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
From the Spark documentation:

    def fileStream[K, V, F <: InputFormat[K, V]](directory: String, filter: (Path) ⇒ Boolean, newFilesOnly: Boolean, conf: Configuration)(implicit arg0: ClassTag[K], arg1: ClassTag[V], arg2: ClassTag[F]): InputDStream[(K, V)]

Create an input stream that monitors a Hadoop-compatible filesystem for new files and reads them using the given key-value types and input format. Files must be written to the monitored directory by "moving" them from another location within the same file system. File names starting with . are ignored.

K - key type for reading the HDFS file
V - value type for reading the HDFS file
F - input format for reading the HDFS file
directory - HDFS directory to monitor for new files
filter - function to filter the paths to process
newFilesOnly - whether to process only new files and ignore existing files in the directory
conf - Hadoop configuration

The filter parameter takes a function that identifies which input files to process, for example (Java API):

    new Function<Path, Boolean>() {
        @Override
        public Boolean call(Path v1) throws Exception {
            return Boolean.TRUE;
        }
    }

TextInputFormat is the default input format and is used for plain or compressed text files. Files are broken into lines; the keys are the positions (byte offsets) in the file and the values are the lines of text. So ssc.fileStream[LongWritable, Text, TextInputFormat](directory) behaves exactly like ssc.textFileStream(directory).

If you want to customize the file-reading process, you need to define a custom input format and specify the key-value pairs it returns. To implement one, refer to "defining a Hadoop MapReduce custom input format". See also the Spark API reference and source code.
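On the original question of watching several directories: the FileNotFoundException shows that, in this Spark version, fileStream does not expand glob patterns like /user/root/*/*. A common workaround is to create one fileStream per directory and union them. This is a sketch, not from the answer above: `ssc`, the `dirs` list, and `isValidFileName` are assumed names, with the Spark-dependent calls shown as comments and a pure, testable filter helper below.

```scala
// Sketch of the union workaround, assuming a StreamingContext `ssc`
// and a known list of directories (Spark/Hadoop-dependent code kept
// in comments; `dirs` is a hypothetical name):
//
//   import org.apache.hadoop.fs.Path
//   import org.apache.hadoop.io.{LongWritable, Text}
//   import org.apache.hadoop.mapreduce.lib.input.TextInputFormat
//
//   val dirs = Seq("/user/root/dir1", "/user/root/dir2")
//   val streams = dirs.map { d =>
//     ssc.fileStream[LongWritable, Text, TextInputFormat](
//       d, (p: Path) => isValidFileName(p.getName), false)
//       .map(_._2.toString)
//   }
//   ssc.union(streams).print()

// A pure, name-based predicate usable from the `filter` parameter:
// skip hidden files and the temporary/marker files Hadoop writers
// leave behind (e.g. _SUCCESS, *.tmp).
def isValidFileName(name: String): Boolean =
  !name.startsWith(".") && !name.startsWith("_") && !name.endsWith(".tmp")
```

Unioning per-directory streams keeps each FileInputDStream pointed at a concrete path, which is what findNewFiles can actually list.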