I get the following error when trying to use the IN_RANGE function in Cognos 10. Any ideas why this is happening?
V5 syntax error found in expression "[Appointment Date] IN_RANGE", invalid token "" found after "[Appointment Date] IN_RANG".
The detailed error:
=== JAVA STACK TRACE === XQE-V5-0011 V5 syntax error found in expression "[Appointment Date] IN_RANGE", invalid token "" found after "[Appointment Date] IN_RANG".
at com.cognos.xqe.ast.v5Exp.V5ExpressionProcessor.processWithV5ExpressionParser(V5ExpressionProcessor.java:252)
at com.cognos.xqe.ast.v5Exp.V5ExpressionProcessor.process(V5ExpressionProcessor.java:125)
at com.cognos.xqe.transformation.v5.binding.ConvertV5ExpressionToBinary.apply(ConvertV5ExpressionToBinary.java:120)
at com.cognos.xqe.query.engine.QTETransformationEngine.applyTransformation(QTETransformationEngine.java:1225)
at com.cognos.xqe.query.engine.QTETransformationEngine.applyApplicableTransformation(QTETransformationEngine.java:1143)
at com.cognos.xqe.query.engine.QTETransformationEngine.applyIndexedTransformations(QTETransformationEngine.java:633)
at com.cognos.xqe.query.engine.QTETransformationEngine.transformationIteration(QTETransformationEngine.java:568)
at com.cognos.xqe.query.engine.QTETransformationEngine.applyTransformations(QTETransformationEngine.java:370)
at com.cognos.xqe.query.planner.QueryPlanner.plan(QueryPlanner.java:328)
at com.cognos.xqe.query.planner.QueryPlanner.planQuery(QueryPlanner.java:389)
at com.cognos.xqe.query.planner.QueryPlanner.planQuery(QueryPlanner.java:374)
at com.cognos.xqe.query.engine.QueryEngine.prepareRequest(QueryEngine.java:639)
at com.cognos.xqe.query.engine.QueryEngine.fetchRSAPIDatasets(QueryEngine.java:469)
at com.cognos.xqe.query.engine.QueryEngine.executeRequest(QueryEngine.java:421)
at com.cognos.xqe.bibushandler.ExecuteRequestAdapter.executeRequest(ExecuteRequestAdapter.java:112)
at com.cognos.xqe.cubingservices.V5QueryHandler.executeRequestInSequence(V5QueryHandler.java:622)
at com.cognos.xqe.cubingservices.V5QueryHandler.execute(V5QueryHandler.java:463)
at com.ibm.cubeservices.mdx.v5.V5ProviderFacade.execute(V5ProviderFacade.java:109)
at com.cognos.cubics.providers.cubingservices.CSStatement.execute(CSStatement.java:156)
at com.ibm.cubeservices.mdx.v5.V5RequestHandler.handleRequest(V5RequestHandler.java:59)
at com.ibm.cubeservices.mdx.comms.Servlet.processMessage(Servlet.java:128)
at com.ibm.cubeservices.mdx.comms.ComWorker.processQueryInputMsg(ComWorker.java:347)
at com.ibm.cubeservices.mdx.comms.ComWorker.processInput(ComWorker.java:252)
at com.ibm.cubeservices.mdx.comms.ComWorker.call(ComWorker.java:162)
at com.ibm.cubeservices.mdx.comms.ComWorker.call(ComWorker.java:64)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:314)
at java.util.concurrent.FutureTask.run(FutureTask.java:149)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:897)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:919)
at java.lang.Thread.run(Thread.java:738)
=== END OF JAVA STACK TRACE ===
RSV-SRV-0042 Trace back:
RSReportService.cpp(752): XQEException: CCL_CAUGHT: RSReportService::process()
RSReportServiceMethod.cpp(263): XQEException: CCL_RETHROW: RSReportServiceMethod::process(): asynchRunSpecification_Request
RSASyncExecutionThread.cpp(808): XQEException: RSASyncExecutionThread::checkException
RSASyncExecutionThread.cpp(260): XQEException: CCL_CAUGHT: RSASyncExecutionThread::runImpl(): asynchRunSpecification_Request
RSASyncExecutionThread.cpp(864): XQEException: CCL_RETHROW: RSASyncExecutionThread::processCommand(): asynchRunSpecification_Request
Execution/RSRenderExecution.cpp(670): XQEException: CCL_RETHROW: RSRenderExecution::execute
Assembly/RSDocAssemblyDispatch.cpp(291): XQEException: CCL_RETHROW: RSDocAssemblyDispatch::dispatchAssembly
Assembly/RSLayoutAssembly.cpp(79): XQEException: CCL_RETHROW: RSLayoutAssembly::assemble
Assembly/RSDocAssemblyDispatch.cpp(358): XQEException: CCL_RETHROW: RSDocAssemblyDispatch::dispatchChildrenAssemblyForward
Assembly/RSReportPagesAssembly.cpp(179): XQEException: CCL_RETHROW: RSReportPagesAssembly::assemble
Assembly/RSDocAssemblyDispatch.cpp(308): XQEException: CCL_RETHROW: RSDocAssemblyDispatch::dispatchAssembly
Assembly/RSPageAssembly.cpp(303): XQEException: CCL_RETHROW: RSPageAssembly::assemble
Assembly/RSDocAssemblyDispatch.cpp(308): XQEException: CCL_RETHROW: RSDocAssemblyDispatch::dispatchAssembly
Assembly/RSTableRowAssembly.cpp(177): XQEException: CCL_RETHROW: RSTableRowAssembly::assemble
Assembly/RSDocAssemblyDispatch.cpp(308): XQEException: CCL_RETHROW: RSDocAssemblyDispatch::dispatchAssembly
Assembly/RSTableCellAssembly.cpp(137): XQEException: CCL_RETHROW: RSTableCellAssembly::assemble
Assembly/RSDocAssemblyDispatch.cpp(358): XQEException: CCL_RETHROW: RSDocAssemblyDispatch::dispatchChildrenAssemblyForward
Assembly/RSAssembly.cpp(662): XQEException: CCL_RETHROW: RSAssembly::createListIterator
Assembly/RSAssembly.cpp(717): XQEException: CCL_RETHROW: RSAssembly::createListIterator
RSQueryMgr.cpp(1055): XQEException: CCL_RETHROW: RSQueryMgr::getListIterator
RSQueryMgr.cpp(1131): XQEException: CCL_RETHROW: RSQueryMgr::getResultSetIterator
RSQueryMgr.cpp(1295): XQEException: CCL_RETHROW: RSQueryMgr::createIterator
RSQueryMgr.cpp(1569): XQEException: CCL_RETHROW: RSQueryMgr::executeRsapiCommand
RSQueryMgrExecutionHandlerImpl.cpp(168): XQEException: CCL_RETHROW: RSQueryMgrExecutionHandlerImpl::execute()
QFSSession.cpp(1166): XQEException: CCL_RETHROW: QFSSession::ProcessDoRequest()
QFSSession.cpp(1164): XQEException: CCL_CAUGHT: QFSSession::ProcessDoRequest()
QFSSession.cpp(1125): XQEException: CCL_RETHROW: QFSSession::ProcessDoRequest()
QFSConnection.cpp(753): XQEException: CCL_RETHROW: QFSConnection::Execute
QFSQuery.cpp(135): XQEException: CCL_RETHROW: QFSQuery::Execute v2
XQEConnector.cpp(266): XQEException: CCL_THROW: XQEConnector::send
I managed to figure out where the issue is. The column is not actually a date data type (even though it holds date values), so a simple CAST was the quick fix. For a long-term solution the data type should be changed in either Framework Manager or Data Manager.
e.g. CAST([DATE] AS DATE)
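With the cast in place, the IN_RANGE expression validates. For example, a filter along these lines should work (the range values here are placeholders, not from the original report):
CAST([Appointment Date] AS DATE) in_range {2016-01-01 : 2016-12-31}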
Thanks,
Zak
Related
In the following piece of Spark code, I am reading ~300,000 HDFS files (2.4 TB in total), running a combineByKey() operation, and then saving the data in a separate HDFS directory. Pretty simple operations except for a large number of partitions.
sc.textFile(input_path + "*"). \
    map(lambda v: (v[0], v[1])). \
    combineByKey(to_list, append, extend). \
    map(lambda v: json.dumps(v)). \
    saveAsTextFile(output_path)
The problem is that after the combineByKey stage completes, the code halts, producing a stack trace that I have no clue about. I ran the same code on a sample subset (~50,000 files) and it worked without any error, so I assumed the cause was the high number of tasks, and I threw in a coalesce() after the combineByKey() to reduce the number of partitions. That resulted in the same error. Any idea how to recover from this?
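The coalesce() variant looked roughly like this (5000 is an arbitrary partition count for illustration, not necessarily the value I used):

sc.textFile(input_path + "*"). \
    map(lambda v: (v[0], v[1])). \
    combineByKey(to_list, append, extend). \
    coalesce(5000). \
    map(lambda v: json.dumps(v)). \
    saveAsTextFile(output_path)

And the stack trace: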
Caused by: java.io.IOException: Read error or truncated source
at com.github.luben.zstd.ZstdInputStream.readInternal(ZstdInputStream.java:154)
at com.github.luben.zstd.ZstdInputStream.read(ZstdInputStream.java:120)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
at java.io.ObjectInputStream$PeekInputStream.read(ObjectInputStream.java:2781)
at java.io.ObjectInputStream$BlockDataInputStream.refill(ObjectInputStream.java:3014)
at java.io.ObjectInputStream$BlockDataInputStream.read(ObjectInputStream.java:3095)
at java.io.DataInputStream.readShort(DataInputStream.java:313)
at java.io.ObjectInputStream$BlockDataInputStream.readShort(ObjectInputStream.java:3276)
at java.io.ObjectInputStream.readShort(ObjectInputStream.java:1082)
at org.roaringbitmap.RoaringArray.deserialize(RoaringArray.java:343)
at org.roaringbitmap.RoaringArray.readExternal(RoaringArray.java:818)
at org.roaringbitmap.RoaringBitmap.readExternal(RoaringBitmap.java:2134)
at org.apache.spark.scheduler.HighlyCompressedMapStatus.$anonfun$readExternal$2(MapStatus.scala:210)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at org.apache.spark.util.Utils$.tryOrIOException(Utils.scala:1343)
at org.apache.spark.scheduler.HighlyCompressedMapStatus.readExternal(MapStatus.scala:207)
at java.io.ObjectInputStream.readExternalData(ObjectInputStream.java:2236)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2185)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1667)
at java.io.ObjectInputStream.readArray(ObjectInputStream.java:2093)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1655)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:503)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:461)
at org.apache.spark.MapOutputTracker$.$anonfun$deserializeMapStatuses$1(MapOutputTracker.scala:956)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
at org.apache.spark.MapOutputTracker$.deserializeObject$1(MapOutputTracker.scala:958)
at org.apache.spark.MapOutputTracker$.deserializeMapStatuses(MapOutputTracker.scala:972)
at org.apache.spark.MapOutputTrackerWorker.$anonfun$getStatuses$2(MapOutputTracker.scala:856)
at org.apache.spark.util.KeyLock.withLock(KeyLock.scala:64)
at org.apache.spark.MapOutputTrackerWorker.getStatuses(MapOutputTracker.scala:851)
at org.apache.spark.MapOutputTrackerWorker.getMapSizesByExecutorId(MapOutputTracker.scala:808)
at org.apache.spark.shuffle.sort.SortShuffleManager.getReader(SortShuffleManager.scala:128)
at org.apache.spark.rdd.ShuffledRDD.compute(ShuffledRDD.scala:106)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:349)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:313)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:349)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:313)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:65)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:349)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:313)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:349)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:313)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:349)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:313)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:127)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:446)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:449)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
... 1 more
Traceback (most recent call last):
File "xxx.py", line 450, in <module>
step5()
File "xxx.py", line 387, in step5
sc.textFile(input_path + "*"). \
File "/usr/local/hadoop/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 1656, in saveAsTextFile
File "/usr/local/hadoop/spark/python/lib/py4j-0.10.9-src.zip/py4j/java_gateway.py", line 1304, in __call__
File "/usr/local/hadoop/spark/python/lib/pyspark.zip/pyspark/sql/utils.py", line 128, in deco
File "/usr/local/hadoop/spark/python/lib/py4j-0.10.9-src.zip/py4j/protocol.py", line 326, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o71.saveAsTextFile.
: org.apache.spark.SparkException: Job aborted.
As per the Databricks docs (https://docs.databricks.com/data/data-sources/azure/cosmosdb-connector.html), I've downloaded the latest azure-cosmosdb-spark library (azure-cosmosdb-spark_2.4.0_2.11-2.1.2-uber.jar) and placed it in the libraries location on DBFS.
When trying to write data from a DataFrame to a Cosmos container, I get the error below; any help would be much appreciated.
My Databricks runtime version is: 7.0 (includes Apache Spark 3.0.0, Scala 2.12)
Imports from the notebook:
import java.time.LocalDateTime
import java.time.format.DateTimeFormatter
import org.apache.spark.sql.{DataFrame, SaveMode, SparkSession}
import org.apache.spark.sql.types.{StructField, _}
import org.apache.spark.sql.functions._
import com.microsoft.azure.cosmosdb.spark.schema._
import com.microsoft.azure.cosmosdb.spark.CosmosDBSpark
import com.microsoft.azure.cosmosdb.spark.config._
val dtcCANWrite = Config(Map(
"Endpoint" -> "NOT DISPLAYED",
"Masterkey" -> "NOT DISPLAYED",
"Database" -> "NOT DISPLAYED",
"Collection" -> "NOT DISPLAYED",
"preferredRegions" -> "NOT DISPLAYED",
"Upsert" -> "true"
))
distinctCANDF.write.mode(SaveMode.Append).cosmosDB(dtcCANWrite)
Error:
at com.microsoft.azure.cosmosdb.spark.config.CosmosDBConfigBuilder.<init>(CosmosDBConfigBuilder.scala:31)
at com.microsoft.azure.cosmosdb.spark.config.Config$.apply(Config.scala:259)
at com.microsoft.azure.cosmosdb.spark.config.Config$.apply(Config.scala:240)
at line6d80624d7a774601af6eb962eb59453253.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw.<init>(command-3649834446724317:7)
at line6d80624d7a774601af6eb962eb59453253.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw.<init>(command-3649834446724317:69)
at line6d80624d7a774601af6eb962eb59453253.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw.<init>(command-3649834446724317:71)
at line6d80624d7a774601af6eb962eb59453253.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw.<init>(command-3649834446724317:73)
at line6d80624d7a774601af6eb962eb59453253.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw.<init>(command-3649834446724317:75)
at line6d80624d7a774601af6eb962eb59453253.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw.<init>(command-3649834446724317:77)
at line6d80624d7a774601af6eb962eb59453253.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw.<init>(command-3649834446724317:79)
at line6d80624d7a774601af6eb962eb59453253.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw.<init>(command-3649834446724317:81)
at line6d80624d7a774601af6eb962eb59453253.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw.<init>(command-3649834446724317:83)
at line6d80624d7a774601af6eb962eb59453253.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw.<init>(command-3649834446724317:85)
at line6d80624d7a774601af6eb962eb59453253.$read$$iw$$iw$$iw$$iw$$iw$$iw.<init>(command-3649834446724317:87)
at line6d80624d7a774601af6eb962eb59453253.$read$$iw$$iw$$iw$$iw$$iw.<init>(command-3649834446724317:89)
at line6d80624d7a774601af6eb962eb59453253.$read$$iw$$iw$$iw$$iw.<init>(command-3649834446724317:91)
at line6d80624d7a774601af6eb962eb59453253.$read$$iw$$iw$$iw.<init>(command-3649834446724317:93)
at line6d80624d7a774601af6eb962eb59453253.$read$$iw$$iw.<init>(command-3649834446724317:95)
at line6d80624d7a774601af6eb962eb59453253.$read$$iw.<init>(command-3649834446724317:97)
at line6d80624d7a774601af6eb962eb59453253.$read.<init>(command-3649834446724317:99)
at line6d80624d7a774601af6eb962eb59453253.$read$.<init>(command-3649834446724317:103)
at line6d80624d7a774601af6eb962eb59453253.$read$.<clinit>(command-3649834446724317)
at line6d80624d7a774601af6eb962eb59453253.$eval$.$print$lzycompute(<notebook>:7)
at line6d80624d7a774601af6eb962eb59453253.$eval$.$print(<notebook>:6)
at line6d80624d7a774601af6eb962eb59453253.$eval.$print(<notebook>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at scala.tools.nsc.interpreter.IMain$ReadEvalPrint.call(IMain.scala:745)
at scala.tools.nsc.interpreter.IMain$Request.loadAndRun(IMain.scala:1021)
at scala.tools.nsc.interpreter.IMain.$anonfun$interpret$1(IMain.scala:574)
at scala.reflect.internal.util.ScalaClassLoader.asContext(ScalaClassLoader.scala:41)
at scala.reflect.internal.util.ScalaClassLoader.asContext$(ScalaClassLoader.scala:37)
at scala.reflect.internal.util.AbstractFileClassLoader.asContext(AbstractFileClassLoader.scala:41)
at scala.tools.nsc.interpreter.IMain.loadAndRunReq$1(IMain.scala:573)
at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:600)
at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:570)
at com.databricks.backend.daemon.driver.DriverILoop.execute(DriverILoop.scala:215)
at com.databricks.backend.daemon.driver.ScalaDriverLocal.$anonfun$repl$1(ScalaDriverLocal.scala:202)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at com.databricks.backend.daemon.driver.DriverLocal$TrapExitInternal$.trapExit(DriverLocal.scala:714)
at com.databricks.backend.daemon.driver.DriverLocal$TrapExit$.apply(DriverLocal.scala:667)
at com.databricks.backend.daemon.driver.ScalaDriverLocal.repl(ScalaDriverLocal.scala:202)
at com.databricks.backend.daemon.driver.DriverLocal.$anonfun$execute$10(DriverLocal.scala:396)
at com.databricks.logging.UsageLogging.$anonfun$withAttributionContext$1(UsageLogging.scala:238)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:62)
at com.databricks.logging.UsageLogging.withAttributionContext(UsageLogging.scala:233)
at com.databricks.logging.UsageLogging.withAttributionContext$(UsageLogging.scala:230)
at com.databricks.backend.daemon.driver.DriverLocal.withAttributionContext(DriverLocal.scala:49)
at com.databricks.logging.UsageLogging.withAttributionTags(UsageLogging.scala:275)
at com.databricks.logging.UsageLogging.withAttributionTags$(UsageLogging.scala:268)
at com.databricks.backend.daemon.driver.DriverLocal.withAttributionTags(DriverLocal.scala:49)
at com.databricks.backend.daemon.driver.DriverLocal.execute(DriverLocal.scala:373)
at com.databricks.backend.daemon.driver.DriverWrapper.$anonfun$tryExecutingCommand$1(DriverWrapper.scala:653)
at scala.util.Try$.apply(Try.scala:213)
at com.databricks.backend.daemon.driver.DriverWrapper.tryExecutingCommand(DriverWrapper.scala:645)
at com.databricks.backend.daemon.driver.DriverWrapper.getCommandOutputAndError(DriverWrapper.scala:486)
at com.databricks.backend.daemon.driver.DriverWrapper.executeCommand(DriverWrapper.scala:598)
at com.databricks.backend.daemon.driver.DriverWrapper.runInnerLoop(DriverWrapper.scala:391)
at com.databricks.backend.daemon.driver.DriverWrapper.runInner(DriverWrapper.scala:337)
at com.databricks.backend.daemon.driver.DriverWrapper.run(DriverWrapper.scala:219)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException: scala.Product$class
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
at com.databricks.backend.daemon.driver.ClassLoaders$LibraryClassLoader.loadClass(ClassLoaders.scala:151)
at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
at com.microsoft.azure.cosmosdb.spark.config.CosmosDBConfigBuilder.<init>(CosmosDBConfigBuilder.scala:31)
at com.microsoft.azure.cosmosdb.spark.config.Config$.apply(Config.scala:259)
at com.microsoft.azure.cosmosdb.spark.config.Config$.apply(Config.scala:240)
at line6d80624d7a774601af6eb962eb59453253.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw.<init>(command-3649834446724317:7)
at line6d80624d7a774601af6eb962eb59453253.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw.<init>(command-3649834446724317:69)
at line6d80624d7a774601af6eb962eb59453253.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw.<init>(command-3649834446724317:71)
at line6d80624d7a774601af6eb962eb59453253.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw.<init>(command-3649834446724317:73)
at line6d80624d7a774601af6eb962eb59453253.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw.<init>(command-3649834446724317:75)
at line6d80624d7a774601af6eb962eb59453253.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw.<init>(command-3649834446724317:77)
at line6d80624d7a774601af6eb962eb59453253.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw.<init>(command-3649834446724317:79)
at line6d80624d7a774601af6eb962eb59453253.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw.<init>(command-3649834446724317:81)
at line6d80624d7a774601af6eb962eb59453253.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw.<init>(command-3649834446724317:83)
at line6d80624d7a774601af6eb962eb59453253.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw.<init>(command-3649834446724317:85)
at line6d80624d7a774601af6eb962eb59453253.$read$$iw$$iw$$iw$$iw$$iw$$iw.<init>(command-3649834446724317:87)
at line6d80624d7a774601af6eb962eb59453253.$read$$iw$$iw$$iw$$iw$$iw.<init>(command-3649834446724317:89)
at line6d80624d7a774601af6eb962eb59453253.$read$$iw$$iw$$iw$$iw.<init>(command-3649834446724317:91)
at line6d80624d7a774601af6eb962eb59453253.$read$$iw$$iw$$iw.<init>(command-3649834446724317:93)
at line6d80624d7a774601af6eb962eb59453253.$read$$iw$$iw.<init>(command-3649834446724317:95)
at line6d80624d7a774601af6eb962eb59453253.$read$$iw.<init>(command-3649834446724317:97)
at line6d80624d7a774601af6eb962eb59453253.$read.<init>(command-3649834446724317:99)
at line6d80624d7a774601af6eb962eb59453253.$read$.<init>(command-3649834446724317:103)
at line6d80624d7a774601af6eb962eb59453253.$read$.<clinit>(command-3649834446724317)
at line6d80624d7a774601af6eb962eb59453253.$eval$.$print$lzycompute(<notebook>:7)
at line6d80624d7a774601af6eb962eb59453253.$eval$.$print(<notebook>:6)
at line6d80624d7a774601af6eb962eb59453253.$eval.$print(<notebook>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at scala.tools.nsc.interpreter.IMain$ReadEvalPrint.call(IMain.scala:745)
at scala.tools.nsc.interpreter.IMain$Request.loadAndRun(IMain.scala:1021)
at scala.tools.nsc.interpreter.IMain.$anonfun$interpret$1(IMain.scala:574)
at scala.reflect.internal.util.ScalaClassLoader.asContext(ScalaClassLoader.scala:41)
at scala.reflect.internal.util.ScalaClassLoader.asContext$(ScalaClassLoader.scala:37)
at scala.reflect.internal.util.AbstractFileClassLoader.asContext(AbstractFileClassLoader.scala:41)
at scala.tools.nsc.interpreter.IMain.loadAndRunReq$1(IMain.scala:573)
at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:600)
at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:570)
at com.databricks.backend.daemon.driver.DriverILoop.execute(DriverILoop.scala:215)
at com.databricks.backend.daemon.driver.ScalaDriverLocal.$anonfun$repl$1(ScalaDriverLocal.scala:202)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at com.databricks.backend.daemon.driver.DriverLocal$TrapExitInternal$.trapExit(DriverLocal.scala:714)
at com.databricks.backend.daemon.driver.DriverLocal$TrapExit$.apply(DriverLocal.scala:667)
at com.databricks.backend.daemon.driver.ScalaDriverLocal.repl(ScalaDriverLocal.scala:202)
at com.databricks.backend.daemon.driver.DriverLocal.$anonfun$execute$10(DriverLocal.scala:396)
at com.databricks.logging.UsageLogging.$anonfun$withAttributionContext$1(UsageLogging.scala:238)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:62)
at com.databricks.logging.UsageLogging.withAttributionContext(UsageLogging.scala:233)
at com.databricks.logging.UsageLogging.withAttributionContext$(UsageLogging.scala:230)
at com.databricks.backend.daemon.driver.DriverLocal.withAttributionContext(DriverLocal.scala:49)
at com.databricks.logging.UsageLogging.withAttributionTags(UsageLogging.scala:275)
at com.databricks.logging.UsageLogging.withAttributionTags$(UsageLogging.scala:268)
at com.databricks.backend.daemon.driver.DriverLocal.withAttributionTags(DriverLocal.scala:49)
at com.databricks.backend.daemon.driver.DriverLocal.execute(DriverLocal.scala:373)
at com.databricks.backend.daemon.driver.DriverWrapper.$anonfun$tryExecutingCommand$1(DriverWrapper.scala:653)
at scala.util.Try$.apply(Try.scala:213)
at com.databricks.backend.daemon.driver.DriverWrapper.tryExecutingCommand(DriverWrapper.scala:645)
at com.databricks.backend.daemon.driver.DriverWrapper.getCommandOutputAndError(DriverWrapper.scala:486)
at com.databricks.backend.daemon.driver.DriverWrapper.executeCommand(DriverWrapper.scala:598)
at com.databricks.backend.daemon.driver.DriverWrapper.runInnerLoop(DriverWrapper.scala:391)
at com.databricks.backend.daemon.driver.DriverWrapper.runInner(DriverWrapper.scala:337)
at com.databricks.backend.daemon.driver.DriverWrapper.run(DriverWrapper.scala:219)
at java.lang.Thread.run(Thread.java:748)
Thanks for Rayan Ral's suggestion. The problem comes from a version conflict: the connector jar (azure-cosmosdb-spark_2.4.0_2.11) is built for Scala 2.11, while Databricks runtime 7.0 ships Spark 3.0.0 on Scala 2.12, where scala.Product$class no longer exists.
The solution is to downgrade the version. In this case, you can downgrade from Spark 3.0.0 / Scala 2.12 to Spark 2.4.4 / Scala 2.11.
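A quick way to confirm the mismatch is to print the Scala version from a notebook cell. A minimal check (the exact patch version shown is only what I'd expect on runtime 7.0):

// Scala notebook cell
println(scala.util.Properties.versionString)
// e.g. "version 2.12.10" on Databricks runtime 7.0, while the
// azure-cosmosdb-spark_2.4.0_2.11 uber jar requires a 2.11.x runtime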
I tried to create a recoverable Spark Streaming job with some arguments fetched from a database. But then I got a problem: it always gives me a serialization error when I try to restart the job from a checkpoint.
18/10/18 09:54:33 ERROR Executor: Exception in task 1.0 in stage 56.0 (TID 132)
java.lang.ClassCastException: org.apache.spark.util.SerializableConfiguration cannot be cast to scala.collection.MapLike
at com.ptnj.streaming.alertJob.InputDataParser$.kafka_stream_handle(InputDataParser.scala:37)
at com.ptnj.streaming.alertJob.InstanceAlertJob$$anonfun$1.apply(InstanceAlertJob.scala:38)
at com.ptnj.streaming.alertJob.InstanceAlertJob$$anonfun$1.apply(InstanceAlertJob.scala:38)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:410)
at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:463)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:462)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:126)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
at org.apache.spark.scheduler.Task.run(Task.scala:99)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
I followed the advice from maxime G in this existing SO question, and it seems to help. But now there is another exception. Because of that issue, I have to create broadcast variables during the stream transformation, like

val kafka_data_streaming = stream.map(x => DstreamHandle.kafka_stream_handle(url, x.value(), sc))

so I end up having to pass the SparkContext as a parameter into the transformation function, and then this occurs:
Exception in thread "main" org.apache.spark.SparkException: Task not serializable
at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:298)
at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:288)
at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:108)
at org.apache.spark.SparkContext.clean(SparkContext.scala:2094)
at org.apache.spark.streaming.dstream.DStream$$anonfun$map$1.apply(DStream.scala:546)
at org.apache.spark.streaming.dstream.DStream$$anonfun$map$1.apply(DStream.scala:546)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.SparkContext.withScope(SparkContext.scala:701)
at org.apache.spark.streaming.StreamingContext.withScope(StreamingContext.scala:264)
at org.apache.spark.streaming.dstream.DStream.map(DStream.scala:545)
at com.ptnj.streaming.alertJob.InstanceAlertJob$.streaming_main(InstanceAlertJob.scala:38)
at com.ptnj.streaming.AlarmMain$.create_ssc(AlarmMain.scala:36)
at com.ptnj.streaming.AlarmMain$.main(AlarmMain.scala:14)
at com.ptnj.streaming.AlarmMain.main(AlarmMain.scala)
Caused by: java.io.NotSerializableException: org.apache.spark.SparkContext
Serialization stack:
- object not serializable (class: org.apache.spark.SparkContext, value: org.apache.spark.SparkContext@5fb7183b)
- field (class: com.ptnj.streaming.alertJob.InstanceAlertJob$$anonfun$1, name: sc$1, type: class org.apache.spark.SparkContext)
- object (class com.ptnj.streaming.alertJob.InstanceAlertJob$$anonfun$1, )
at org.apache.spark.serializer.SerializationDebugger$.improveException(SerializationDebugger.scala:40)
at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:46)
at org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:100)
at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:295)
... 14 more
I have never seen this situation before. Every example shows broadcast variables being created in an output operation function, not in a transformation function, so is that even possible?
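For reference, the pattern those examples use is a lazily initialized singleton that builds the broadcast inside an output operation (foreachRDD) from rdd.sparkContext, so the SparkContext itself is never captured in a closure. A rough sketch with illustrative names (the broadcast value stands in for my database-sourced url):

import org.apache.spark.SparkContext
import org.apache.spark.broadcast.Broadcast

object UrlBroadcast {
  @volatile private var instance: Broadcast[String] = _

  // Created on first use and re-created after a checkpoint restart,
  // instead of being serialized with the job's closures.
  def getInstance(sc: SparkContext): Broadcast[String] = {
    if (instance == null) synchronized {
      if (instance == null) instance = sc.broadcast("url-loaded-from-db") // hypothetical value
    }
    instance
  }
}

// Output operation: fetch the broadcast from the RDD's own context.
stream.foreachRDD { rdd =>
  val urlBc = UrlBroadcast.getInstance(rdd.sparkContext)
  rdd.foreach(record => println(urlBc.value + " " + record)) // placeholder processing
}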
We're evaluating AWS Glue for a big data project, with some ETL. We added a crawler, which is correctly picking up a CSV file from S3. Initially, we simply want to transform that CSV to JSON, and drop the file in another S3 location (same bucket, different path).
We used the script as provided by AWS (no custom script here). And just mapped all the columns.
The target folder is empty (the job has just been created), but the job fails with "File already exists".
The S3 location where we intend to drop the output was empty before starting the job. However, after the error we do see two files, but those seem to be partial.
Any ideas on what might be going on?
Here's the full stack:
Container: container_1513099821372_0007_01_000001 on ip-172-31-49-38.ec2.internal_8041
LogType:stdout
Log Upload Time:Tue Dec 12 19:12:04 +0000 2017
LogLength:8462
Log Contents:
Traceback (most recent call last):
  File "script_2017-12-12-19-11-08.py", line 30, in <module>
    datasink2 = glueContext.write_dynamic_frame.from_options(frame = applymapping1, connection_type = "s3", connection_options = {"path": "s3://primero-viz/output/tcw_entries"}, format = "json", transformation_ctx = "datasink2")
  File "/mnt/yarn/usercache/root/appcache/application_1513099821372_0007/container_1513099821372_0007_01_000001/PyGlue.zip/awsglue/dynamicframe.py", line 523, in from_options
  File "/mnt/yarn/usercache/root/appcache/application_1513099821372_0007/container_1513099821372_0007_01_000001/PyGlue.zip/awsglue/context.py", line 175, in write_dynamic_frame_from_options
  File "/mnt/yarn/usercache/root/appcache/application_1513099821372_0007/container_1513099821372_0007_01_000001/PyGlue.zip/awsglue/context.py", line 198, in write_from_options
  File "/mnt/yarn/usercache/root/appcache/application_1513099821372_0007/container_1513099821372_0007_01_000001/PyGlue.zip/awsglue/data_sink.py", line 32, in write
  File "/mnt/yarn/usercache/root/appcache/application_1513099821372_0007/container_1513099821372_0007_01_000001/PyGlue.zip/awsglue/data_sink.py", line 28, in writeFrame
  File "/mnt/yarn/usercache/root/appcache/application_1513099821372_0007/container_1513099821372_0007_01_000001/py4j-0.10.4-src.zip/py4j/java_gateway.py", line 1133, in __call__
  File "/mnt/yarn/usercache/root/appcache/application_1513099821372_0007/container_1513099821372_0007_01_000001/pyspark.zip/pyspark/sql/utils.py", line 63, in deco
  File "/mnt/yarn/usercache/root/appcache/application_1513099821372_0007/container_1513099821372_0007_01_000001/py4j-0.10.4-src.zip/py4j/protocol.py", line 319, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o86.pyWriteDynamicFrame.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 3, ip-172-31-63-141.ec2.internal, executor 1): java.io.IOException: File already exists:s3://primero-viz/output/tcw_entries/run-1513105898742-part-r-00000
at com.amazon.ws.emr.hadoop.fs.s3n.S3NativeFileSystem.create(S3NativeFileSystem.java:604)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:915)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:896)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:793)
at com.amazon.ws.emr.hadoop.fs.EmrFileSystem.create(EmrFileSystem.java:176)
at com.amazonaws.services.glue.hadoop.TapeOutputFormat.getRecordWriter(TapeOutputFormat.scala:65)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1$$anonfun$12.apply(PairRDDFunctions.scala:1119)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1$$anonfun$12.apply(PairRDDFunctions.scala:1102)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
at org.apache.spark.scheduler.Task.run(Task.scala:99)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1435)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1423)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1422)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1422)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:802)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1650)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1605)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1594)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:628)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1918)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1931)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1951)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply$mcV$sp(PairRDDFunctions.scala:1158)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply(PairRDDFunctions.scala:1085)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply(PairRDDFunctions.scala:1085)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
at org.apache.spark.rdd.PairRDDFunctions.saveAsNewAPIHadoopDataset(PairRDDFunctions.scala:1085)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopFile$2.apply$mcV$sp(PairRDDFunctions.scala:1005)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopFile$2.apply(PairRDDFunctions.scala:996)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopFile$2.apply(PairRDDFunctions.scala:996)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
at org.apache.spark.rdd.PairRDDFunctions.saveAsNewAPIHadoopFile(PairRDDFunctions.scala:996)
at com.amazonaws.services.glue.HadoopDataSink$$anonfun$2.apply$mcV$sp(DataSink.scala:192)
at com.amazonaws.services.glue.HadoopDataSink.writeDynamicFrame(DataSink.scala:202)
at com.amazonaws.services.glue.DataSink.pyWriteDynamicFrame(DataSink.scala:48)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:280)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:214)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: File already exists:s3://primero-viz/output/tcw_entries/run-1513105898742-part-r-00000
For me, a similar error message turned out to be unrelated to the file already existing; the message can be a little misleading. There was a problem in a previous stage (in my case, I was reading data from a MySQL database, and the source DB contained an invalid date, which resulted in partial data being written and the task crashing).
I would suggest checking the other stages leading up to this write.
See also this other StackOverflow answer.
Set the write mode to "append" if your load is incremental, or to "overwrite" if it's a full load.
One example could be:
events.toDF().write.json(events_dir, mode="append", partitionBy=["partition_0", "partition_1"])
The target folder is empty
Empty is not the same as non-existent. It doesn't look like write_dynamic_frame supports write modes, so you might have to drop the directory first.
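Alternatively, converting the DynamicFrame to a DataFrame lets you use Spark's write modes directly. A minimal sketch, reusing the frame and output path from the question:

applymapping1.toDF() \
    .write \
    .mode("overwrite") \
    .json("s3://primero-viz/output/tcw_entries")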
I use Orchard CMS 1.10.1 and I have a problem deploying an existing App_Data folder (it already contains a finished website). I get this error when I try to load the website:
The resource cannot be found.
Description: HTTP 404. The resource you are looking for (or one of its dependencies) could have been removed, had its name changed, or is temporarily unavailable. Please review the following URL and make sure that it is spelled correctly.
Requested URL: /
Version Information: Microsoft .NET Framework Version:4.0.30319; ASP.NET Version:4.0.30319.34209
When I use a fresh App_Data, it works fine and shows me the setup page. But when I click the Finish Setup button, this error comes up:
Setup failed: Exception has been thrown by the target of an invocation.
UPDATE:
I deployed this to another server (at a different hosting company) and it worked fine.
I don't know what this server lacks for running Orchard.
I called the hosting company, but they had no idea what to do.
I looked at App_Data/logs and this was there:
2016-08-24 23:12:25,672 [10] Orchard.Environment.DefaultOrchardHost - (null) - A tenant could not be started: Default Attempt number: 0 [(null)]
NHibernate.HibernateException: Could not create the driver from Orchard.Data.Providers.SqlCeDataServicesProvider+OrchardSqlServerCeDriver, Orchard.Framework, Version=1.10.1.0, Culture=neutral, PublicKeyToken=null. ---> System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> System.Data.SqlServerCe.SqlCeException: Unable to load the native components of SQL Server Compact corresponding to the ADO.NET provider of version 8876. Install the correct version of SQL Server Compact. Refer to KB article 974247 for more details.
at System.Data.SqlServerCe.NativeMethods.LoadNativeBinaries()
at System.Data.SqlServerCe.SqlCeCommand..ctor()
--- End of inner exception stack trace ---
at System.RuntimeTypeHandle.CreateInstance(RuntimeType type, Boolean publicOnly, Boolean noCheck, Boolean& canBeCached, RuntimeMethodHandleInternal& ctor, Boolean& bNeedSecurityCheck)
at System.RuntimeType.CreateInstanceSlow(Boolean publicOnly, Boolean skipCheckThis, Boolean fillCache, StackCrawlMark& stackMark)
at System.RuntimeType.CreateInstanceDefaultCtor(Boolean publicOnly, Boolean skipCheckThis, Boolean fillCache, StackCrawlMark& stackMark)
at System.Activator.CreateInstance(Type type, Boolean nonPublic)
at System.Activator.CreateInstance(Type type)
at NHibernate.Bytecode.ActivatorObjectsFactory.CreateInstance(Type type)
at NHibernate.Driver.ReflectionDriveConnectionCommandProvider.CreateCommand()
at NHibernate.Driver.ReflectionBasedDriver.CreateCommand()
at NHibernate.Driver.SqlServerCeDriver.Configure(IDictionary`2 settings)
at Orchard.Data.Providers.SqlCeDataServicesProvider.OrchardSqlServerCeDriver.Configure(IDictionary`2 settings)
at NHibernate.Connection.ConnectionProvider.ConfigureDriver(IDictionary`2 settings)
--- End of inner exception stack trace ---
at NHibernate.Connection.ConnectionProvider.ConfigureDriver(IDictionary`2 settings)
at NHibernate.Connection.ConnectionProvider.Configure(IDictionary`2 settings)
at NHibernate.Connection.ConnectionProviderFactory.NewConnectionProvider(IDictionary`2 settings)
at NHibernate.Cfg.SettingsFactory.BuildSettings(IDictionary`2 properties)
at NHibernate.Cfg.Configuration.BuildSettings()
at NHibernate.Cfg.Configuration.BuildSessionFactory()
at Orchard.Data.SessionFactoryHolder.BuildSessionFactory()
at Orchard.Data.SessionFactoryHolder.GetSessionFactory()
at Orchard.Data.TransactionManager.EnsureSession(IsolationLevel level)
at Orchard.Data.TransactionManager.GetSession()
at Orchard.Data.Repository`1.get_Session()
at Orchard.Data.Repository`1.get_Table()
at Orchard.Data.Repository`1.Fetch(Expression`1 predicate)
at Orchard.Data.Repository`1.Get(Expression`1 predicate)
at Orchard.Data.Repository`1.Orchard.Data.IRepository<T>.Get(Expression`1 predicate)
at Orchard.Core.Settings.Descriptor.ShellDescriptorManager.GetDescriptorRecord()
at Orchard.Core.Settings.Descriptor.ShellDescriptorManager.GetShellDescriptor()
at Orchard.Environment.ShellBuilders.ShellContextFactory.CreateShellContext(ShellSettings settings)
at Orchard.Environment.DefaultOrchardHost.CreateShellContext(ShellSettings settings)
at Orchard.Environment.DefaultOrchardHost.<CreateAndActivateShells>b__41_1(ShellSettings settings)
2016-08-24 23:12:27,453 [10] Orchard.Environment.DefaultOrchardHost - (null) - A tenant could not be started: Default after 1 retries. [(null)]
2016-08-24 23:12:27,938 [10] Orchard.Environment.DefaultOrchardHost - (null) - A tenant could not be started: Default Attempt number: 0 [(null)]
[same NHibernate.HibernateException and stack trace as above]
2016-08-24 23:12:29,266 [10] Orchard.Environment.DefaultOrchardHost - (null) - A tenant could not be started: Default after 1 retries. [(null)]
2016-08-24 23:12:29,891 [10] Orchard.Environment.DefaultOrchardHost - (null) - A tenant could not be started: Default Attempt number: 0 [http://studiosefid.com/]
[same NHibernate.HibernateException and stack trace as above]
2016-08-24 23:12:31,344 [10] Orchard.Environment.DefaultOrchardHost - (null) - A tenant could not be started: Default after 1 retries. [http://studiosefid.com/]
2016-08-24 23:12:31,891 [19] Orchard.Environment.DefaultOrchardHost - (null) - A tenant could not be started: Default Attempt number: 0 [http://studiosefid.com/]
[same NHibernate.HibernateException and stack trace as above]
The log contains the following error message:
Unable to load the native components of SQL Server Compact corresponding to the ADO.NET provider of version 8876. Install the correct version of SQL Server Compact. Refer to KB article 974247 for more details.
Make sure that you have installed the correct version of SQL Server Compact (the same one as referenced by Orchard).