I am pretty new to StreamSets and Groovy, and I am replacing my “Dev Raw Data Source” with a “Groovy Scripting” origin as my input. The issue I face is that the code below in my “Groovy Scripting” origin throws an error.
Groovy scripting:
import com.streamsets.pipeline.stage.origin.scripting.*
record = sdc.createRecord("bulkUpdateRequestTemplate")
bulkUpdateRequestTemplate = "{ \"payload\": { \"objects\": { \"filter\": \"(equals(type,'configuration/entityTypes/Contract') and equals(attributes.ContractStatus, 'ACTIVE') and lt (attributes.ExpirationDate,'currentTimestampEpoch'))\", \"options\": \"searchByOv,ovOnly\" }, \"actions\": [ { \"operation\": \"UpdateAttribute\", \"operationParameters\": { \"attributeURI\": \"configuration/entityTypes/Contract/attributes/ContractStatus\", \"attributeValue\": \"INACTIVE\" } } ] } }"
record.value = bulkUpdateRequestTemplate
sdc.state['bulkUpdateRequestTemplate'] = record
Error:
com.streamsets.pipeline.api.StageException: SCRIPTING_10 - Script error in user script: javax.script.ScriptException: groovy.lang.MissingPropertyException: No such property: state for class: com.streamsets.pipeline.stage.origin.scripting.ScriptingOriginBindings
at com.streamsets.pipeline.stage.origin.scripting.AbstractScriptingSource.produce(AbstractScriptingSource.java:124)
at com.streamsets.pipeline.api.base.configurablestage.DPushSource.produce(DPushSource.java:44)
at com.streamsets.datacollector.runner.StageRuntime.lambda$execute$1(StageRuntime.java:258)
at com.streamsets.datacollector.runner.StageRuntime.execute(StageRuntime.java:232)
at com.streamsets.datacollector.runner.StageRuntime.execute(StageRuntime.java:267)
at com.streamsets.datacollector.runner.SourcePipe.process(SourcePipe.java:67)
at com.streamsets.datacollector.runner.preview.PreviewPipelineRunner.runPushSource(PreviewPipelineRunner.java:233)
at com.streamsets.datacollector.runner.preview.PreviewPipelineRunner.run(PreviewPipelineRunner.java:218)
at com.streamsets.datacollector.runner.Pipeline.run(Pipeline.java:535)
at com.streamsets.datacollector.runner.preview.PreviewPipeline.run(PreviewPipeline.java:39)
at com.streamsets.datacollector.execution.preview.sync.SyncPreviewer.start(SyncPreviewer.java:227)
at com.streamsets.datacollector.execution.preview.async.AsyncPreviewer.lambda$start$1(AsyncPreviewer.java:93)
at com.streamsets.pipeline.lib.executor.SafeScheduledExecutorService$SafeCallable.lambda$call$0(SafeScheduledExecutorService.java:214)
at com.streamsets.datacollector.security.GroupsInScope.execute(GroupsInScope.java:44)
at com.streamsets.datacollector.security.GroupsInScope.execute(GroupsInScope.java:25)
at com.streamsets.pipeline.lib.executor.SafeScheduledExecutorService$SafeCallable.call(SafeScheduledExecutorService.java:210)
at com.streamsets.pipeline.lib.executor.SafeScheduledExecutorService$SafeCallable.lambda$call$0(SafeScheduledExecutorService.java:214)
at com.streamsets.datacollector.security.GroupsInScope.execute(GroupsInScope.java:44)
at com.streamsets.datacollector.security.GroupsInScope.execute(GroupsInScope.java:25)
at com.streamsets.pipeline.lib.executor.SafeScheduledExecutorService$SafeCallable.call(SafeScheduledExecutorService.java:210)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at com.streamsets.datacollector.metrics.MetricSafeScheduledExecutorService$MetricsTask.run(MetricSafeScheduledExecutorService.java:88)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Am I missing something here? Please help.
state works only for processors, and only within a given processor: it stores values you want to preserve between invocations of that processor.
The Groovy Scripting origin doesn't have "invocations"; it runs continuously throughout the pipeline's execution, so it doesn't need state.
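For reference, an origin script emits data by building records and pushing them through a batch rather than stashing them in state. A minimal sketch, assuming the standard scripting-origin bindings (sdc.createBatch(), batch.add(), batch.process()) documented for the Groovy Scripting origin:
bulkUpdateRequestTemplate = '...' // the JSON payload from the question

batch = sdc.createBatch()
record = sdc.createRecord('bulkUpdateRequestTemplate')
record.value = bulkUpdateRequestTemplate
batch.add(record)

// the arguments are an entity name and an offset, used only for offset tracking
batch.process('bulkUpdateRequestTemplate', '0')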
I am using JMeter to run query load tests against an Elasticsearch cluster. I have a csv file of 100 names that I am iterating through to populate the ES query with a different search term on each call.
The column in the csv file is called name, so in the JMeter HTTP request, the query in the Body Data looks like this:
{
"query": {
"match": {
"search_name": "${name}"
}
}
}
And to iterate through the csv file to get the names, I have a JSR223 PreProcessor as a child of the HTTP request as such:
upto(1, {
if (vars.get('param' + "$it") != null) {
sampler.addArgument(vars.get('param' + "$it"),'name')
}
})
then a JSR223 PostProcessor as such:
upto(1, {
vars.remove("param" + "$it")
})
I came up with this approach by reading the accepted answer at this thread: How to make the search parameters in http request as dynamic in jmeter and adjusting as I saw fit.
The processing code is doing what it's supposed to do as far as correctly populating the queries, and all HTTP requests are successful. However, every call also throws an error in the JMeter log at both the PreProcessor and PostProcessor steps:
ERROR o.a.j.m.JSR223PreProcessor: Problem in JSR223 script, JSR223 PreProcessor
javax.script.ScriptException: groovy.lang.MissingMethodException: No signature of method: org.codehaus.groovy.jsr223.GroovyScriptEngineImpl.upto() is applicable for argument types: (Integer, Script1$_run_closure1) values: [1, Script1$_run_closure1@7c7ff704]
Possible solutions: put(java.lang.String, java.lang.Object), wait(), grep(), any(), dump(), find()
at org.codehaus.groovy.jsr223.GroovyScriptEngineImpl.eval(GroovyScriptEngineImpl.java:320) ~[groovy-jsr223-3.0.11.jar:3.0.11]
at org.codehaus.groovy.jsr223.GroovyCompiledScript.eval(GroovyCompiledScript.java:71) ~[groovy-jsr223-3.0.11.jar:3.0.11]
at javax.script.CompiledScript.eval(CompiledScript.java:93) ~[java.scripting:?]
at org.apache.jmeter.util.JSR223TestElement.processFileOrScript(JSR223TestElement.java:217) ~[ApacheJMeter_core.jar:5.5]
at org.apache.jmeter.modifiers.JSR223PreProcessor.process(JSR223PreProcessor.java:45) ~[ApacheJMeter_components.jar:5.5]
at org.apache.jmeter.threads.JMeterThread.runPreProcessors(JMeterThread.java:978) ~[ApacheJMeter_core.jar:5.5]
at org.apache.jmeter.threads.JMeterThread.executeSamplePackage(JMeterThread.java:561) ~[ApacheJMeter_core.jar:5.5]
at org.apache.jmeter.threads.JMeterThread.processSampler(JMeterThread.java:501) ~[ApacheJMeter_core.jar:5.5]
at org.apache.jmeter.threads.JMeterThread.run(JMeterThread.java:268) ~[ApacheJMeter_core.jar:5.5]
at java.lang.Thread.run(Thread.java:833) ~[?:?]
Caused by: groovy.lang.MissingMethodException: No signature of method: org.codehaus.groovy.jsr223.GroovyScriptEngineImpl.upto() is applicable for argument types: (Integer, Script1$_run_closure1) values: [1, Script1$_run_closure1@7c7ff704]
Possible solutions: put(java.lang.String, java.lang.Object), wait(), grep(), any(), dump(), find()
at org.codehaus.groovy.jsr223.GroovyScriptEngineImpl.callGlobal(GroovyScriptEngineImpl.java:404) ~[groovy-jsr223-3.0.11.jar:3.0.11]
at org.codehaus.groovy.jsr223.GroovyScriptEngineImpl.access$100(GroovyScriptEngineImpl.java:90) ~[groovy-jsr223-3.0.11.jar:3.0.11]
at org.codehaus.groovy.jsr223.GroovyScriptEngineImpl$3.invokeMethod(GroovyScriptEngineImpl.java:303) ~[groovy-jsr223-3.0.11.jar:3.0.11]
at org.codehaus.groovy.runtime.callsite.PogoMetaClassSite.callCurrent(PogoMetaClassSite.java:73) ~[groovy-3.0.11.jar:3.0.11]
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCallCurrent(CallSiteArray.java:51) ~[groovy-3.0.11.jar:3.0.11]
at org.codehaus.groovy.runtime.callsite.PogoMetaClassSite.callCurrent(PogoMetaClassSite.java:83) ~[groovy-3.0.11.jar:3.0.11]
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callCurrent(AbstractCallSite.java:194) ~[groovy-3.0.11.jar:3.0.11]
at Script1.run(Script1.groovy:1) ~[?:?]
at org.codehaus.groovy.jsr223.GroovyScriptEngineImpl.eval(GroovyScriptEngineImpl.java:317) ~[groovy-jsr223-3.0.11.jar:3.0.11]
In the interest of saving space, I won't paste the PostProcessor code, which is virtually identical.
I am not well-versed in Groovy at all, and again, I did my best to adapt the pre- and post-processor code from the other thread. Even though everything appears to be working, I'd like to resolve this error because I can't imagine I have a clean test with all these errors being thrown.
The error you're getting is essentially a syntax problem: you cannot call the upto() function on its own, because it is a method on a Number. According to its JavaDoc:
Iterates from this number up to the given number, inclusive, incrementing by one each time.
So I think you need to amend it to look like:
0.upto(1, {
if (vars.get('param' + "$it") != null) {
sampler.addArgument(vars.get('param' + "$it"),'name')
}
})
The PostProcessor can simply be removed.
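As a quick standalone illustration of the receiver requirement (plain Groovy, nothing JMeter-specific):
// upto() is a method on Number: it iterates from the receiver
// up to the argument, inclusive
0.upto(2) { println it } // prints 0, 1, 2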
More information on Groovy scripting in JMeter: Apache Groovy: What Is Groovy Used For?
Groovy has its own very comprehensive documentation: https://groovy-lang.org/documentation.html#gettingstarted
I have multiple threads in Spark 1.6 writing into the same Hive table (as Parquet files). When they try to write at the same time, an error is thrown during the step that renames the written files into HDFS. I'm searching for a solution to bypass this known Spark issue.
class MyThread extends Runnable {
def run {
//some code
myTable.write.format("parquet").mode("append")
.saveAsTable("hdfstable")
//some code
}
}
Executors.defaultThreadFactory().newThread(new MyThread).start()
I get this error:
org.apache.spark.SparkException: Job aborted.
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1.apply$mcV$sp(InsertIntoHadoopFsRelation.scala:156)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1.apply(InsertIntoHadoopFsRelation.scala:108)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1.apply(InsertIntoHadoopFsRelation.scala:108)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:53)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation.run(InsertIntoHadoopFsRelation.scala:108)
at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:58)
at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:56)
at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:70)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)
at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:55)
at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:55)
at org.apache.spark.sql.DataFrameWriter.insertInto(DataFrameWriter.scala:189)
at org.apache.spark.sql.DataFrameWriter.saveAsTable(DataFrameWriter.scala:239)
at org.apache.spark.sql.DataFrameWriter.saveAsTable(DataFrameWriter.scala:221)
at fr.neolink.spark.streaming.StreamingNeo$.algo(StreamingNeo.scala:837)
at fr.neolink.spark.streaming.StreamingNeo$$anonfun$main$3$$anonfun$apply$18$MyThread$1.run(StreamingNeo.scala:374)
at java.lang.Thread.run(Thread.java:748)
Caused by:
java.io.IOException: Failed to rename
FileStatus{path=hdfs://my_hdfs_master/user/hive/warehouse/MYDB.db/hdfstable/_temporary/0/task_201812281010_1770_m_000000/part-r-00000-9a70cbea-d105-4f50-ba1b-372f555906ce.gz.parquet;
isDirectory=false; length=4608; replication=3; blocksize=134217728; modification_time=1545988247575;
access_time=1545988247494; owner=owner; group=hive; permission=rw-r--r--; isSymlink=false}
to hdfs://my_hdfs_master/user/hive/warehouse/MYDB.db/hdfstable/part-r-00000-9a70cbea-d105-4f50-ba1b-372f555906ce.gz.parquet
I found this issue on JIRA: https://issues.apache.org/jira/browse/SPARK-18626
Is there a way to make the writing part thread-safe, so the executions happen one by one, one after another?
Thanks.
SOLUTION
Use this.synchronized {} like below:
class MyThread extends Runnable{
def run{
//some code
this.synchronized{
myTable.write.format("parquet").mode("append")
.saveAsTable("hdfstable")
}
//some code
}
}
Executors.defaultThreadFactory().newThread(new MyThread).start()
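Note that synchronized only serializes threads that lock the same monitor object, so this.synchronized is safe only if every writer shares the same instance. A hedged Groovy sketch of the same pattern (class and names are illustrative), using a single shared lock so it works even with separate Runnable instances:
class TableWriter implements Runnable {
    // a single monitor shared by ALL writer instances
    static final Object WRITE_LOCK = new Object()

    void run() {
        // ... some non-critical work ...
        synchronized (WRITE_LOCK) {
            // the table write goes here; only one thread
            // at a time can enter this block
        }
        // ... some non-critical work ...
    }
}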
I have the following task ahead of me.
The user provides a set of IP addresses in a config file when executing the spark-submit command.
Let's say the array looks like this:
val ips = Array(1,2,3,4,5)
There can be up to 100,000 values in the array.
For each element in the array, I should read data from Cassandra, perform some computation, and insert the data back into Cassandra.
If I do:
ips.foreach(ip => {
  // read data from Cassandra for the specific "ip"; for each IP there is a different
  // amount of data to read (within the function I determine the start and end date for each IP)
  // process it
  // save it back to Cassandra
})
This works fine.
I believe, though, that the process runs sequentially and I don't exploit parallelism.
On the other hand if I do:
val IPRdd = sc.parallelize(Array(1,2,3,4,5))
IPRdd.foreach(ip => {
  // read data from Cassandra; I need to use the Spark context to make the query
  // process it
  // save it back to Cassandra
})
I get a serialization exception, because Spark is trying to serialize the Spark context, which is not serializable.
How can I make this work while still exploiting parallelism?
Thanks
Edit
This is the exception I get:
Exception in thread "main" org.apache.spark.SparkException: Task not serializable
at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:304)
at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:294)
at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:122)
at org.apache.spark.SparkContext.clean(SparkContext.scala:2055)
at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:919)
at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:918)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
at org.apache.spark.rdd.RDD.foreachPartition(RDD.scala:918)
at com.enerbyte.spark.jobs.wibeeebatch.WibeeeBatchJob$$anonfun$main$1.apply(WibeeeBatchJob.scala:59)
at com.enerbyte.spark.jobs.wibeeebatch.WibeeeBatchJob$$anonfun$main$1.apply(WibeeeBatchJob.scala:54)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
at com.enerbyte.spark.jobs.wibeeebatch.WibeeeBatchJob$.main(WibeeeBatchJob.scala:54)
at com.enerbyte.spark.jobs.wibeeebatch.WibeeeBatchJob.main(WibeeeBatchJob.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.io.NotSerializableException: org.apache.spark.SparkContext
Serialization stack:
- object not serializable (class: org.apache.spark.SparkContext, value: org.apache.spark.SparkContext@311ff287)
- field (class: com.enerbyte.spark.jobs.wibeeebatch.WibeeeBatchJob$$anonfun$main$1, name: sc$1, type: class org.apache.spark.SparkContext)
- object (class com.enerbyte.spark.jobs.wibeeebatch.WibeeeBatchJob$$anonfun$main$1, )
- field (class: com.enerbyte.spark.jobs.wibeeebatch.WibeeeBatchJob$$anonfun$main$1$$anonfun$apply$1, name: $outer, type: class com.enerbyte.spark.jobs.wibeeebatch.WibeeeBatchJob$$anonfun$main$1)
- object (class com.enerbyte.spark.jobs.wibeeebatch.WibeeeBatchJob$$anonfun$main$1$$anonfun$apply$1, )
at org.apache.spark.serializer.SerializationDebugger$.improveException(SerializationDebugger.scala:40)
at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:47)
at org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:101)
at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:301)
The easiest thing to do is to use the Spark Cassandra Connector, which can handle connection pooling and serialization for you.
With that, you could do something like:
sc.parallelize(inputData, numTasks)
  .mapPartitions { it =>
    val con = CassandraConnector(yourConf)
    con.withSessionDo { session =>
      // use the session
    }
    // do any other processing
  }.saveToCassandra("ks", "table")
This would be a completely manual operation of a Cassandra connection. The sessions would all be automatically pooled and cached, and if you prepare a statement, it will be cached on the executor as well.
If you would like to use more built-in methods, there is also joinWithCassandraTable, which may work in your situation.
sc.parallelize(inputData, numTasks)
  .joinWithCassandraTable("ks", "table") // retrieves all records for which the input data is the primary key
  .map( /* manipulate returned results if needed */ )
  .saveToCassandra("ks", "table")
I'm getting a cast exception thrown at the line "handDetailList.each". I don't understand why my code is trying to cast a list to the "Hand" class. It seems to me that Groovy sometimes does strange things with casting...?
private Hand buildHands(List handDetailList) {
def parsedHand = new Hand()
parsedHand.setTableName(handDetailList.get(1))
handDetailList.each {
}
}
I get the following exception (I have edited the exception, line 70 is "handDetailList.each {"):
Exception in thread "main" org.codehaus.groovy.runtime.typehandling.GroovyCastException: Cannot cast object <details of the list, omitted> with class 'java.util.ArrayList' to class 'gameMechanics.Hand' due to: groovy.lang.GroovyRuntimeException: Could not find matching constructor for: gameMechanics.Hand(java.lang.String,........
at org.codehaus.groovy.runtime.typehandling.DefaultTypeTransformation.castToType(DefaultTypeTransformation.java:358)
at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.castToType(ScriptBytecodeAdapter.java:599)
at advisor.HistoryParser.buildHands(HistoryParser.groovy:70)
at advisor.HistoryParser.this$2$buildHands(HistoryParser.groovy)
at advisor.HistoryParser$this$2$buildHands.callCurrent(Unknown Source)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCallCurrent(CallSiteArray.java:49)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callCurrent(AbstractCallSite.java:133)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callCurrent(AbstractCallSite.java:141)
at advisor.HistoryParser.parse(HistoryParser.groovy:57)
at advisor.HistoryParser$parse.call(Unknown Source)
each returns the list that each was called on.
You have declared that the function returns an object of type Hand, and since Groovy automatically returns the last evaluated statement in a method, it is trying to convert the list to an instance of Hand and failing.
What is it you want to return? The parsedHand variable? If so, maybe try:
private Hand buildHands(List handDetailList) {
def parsedHand = new Hand()
parsedHand.setTableName(handDetailList.get(1))
handDetailList.each {
}
parsedHand
}
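Here is a minimal standalone reproduction of the same trap (hypothetical Hand class, plain Groovy):
class Hand { }

// each() returns the list it was called on, and the last evaluated
// expression becomes the method's implicit return value, so Groovy
// tries to cast the ArrayList to Hand via a matching constructor:
Hand broken(List details) {
    details.each { /* ... */ } // last statement: returns details
}

broken(['a', 'b']) // throws GroovyCastException: Cannot cast object ... ArrayList ... to ... Hand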
I want to create a custom Gradle task. The problem is that sometimes, when I change something in the source code, Gradle reports an error referring to code that no longer exists. For example, I created a local variable timestamp and compiled the project; everything worked fine. But later I renamed that variable to final ... TIMESTAMP. After that change, Gradle shows me the error Could not find property 'timestamp'.... How can I solve this? I assume Gradle has some kind of cache?
Code
import org.gradle.api.DefaultTask
import org.gradle.api.tasks.TaskAction

class MyCustomTask extends DefaultTask {
    private final String TIMESTAMP = System.currentTimeMillis().toString()

    public MyCustomTask() {
    }

    @TaskAction
    def build() {
        // do something
    }
}
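For context, the stack trace below shows the task being created from build.gradle (line 19); the registration presumably looks something like this hypothetical snippet:
// build.gradle: hypothetical task registration matching the trace
task is2k8Test(type: pl.ydp.gradle.MyCustomTask)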
Exception
* Exception is:
org.gradle.api.GradleScriptException: A problem occurred evaluating project ':customer'.
at org.gradle.groovy.scripts.internal.DefaultScriptRunnerFactory$ScriptRunnerImpl.run(DefaultScriptRunnerFactory.java:54)
at org.gradle.configuration.DefaultScriptPluginFactory$ScriptPluginImpl.apply(DefaultScriptPluginFactory.java:127)
at org.gradle.configuration.BuildScriptProcessor.execute(BuildScriptProcessor.java:36)
at org.gradle.configuration.BuildScriptProcessor.execute(BuildScriptProcessor.java:23)
at org.gradle.configuration.ConfigureActionsProjectEvaluator.evaluate(ConfigureActionsProjectEvaluator.java:34)
at org.gradle.configuration.LifecycleProjectEvaluator.evaluate(LifecycleProjectEvaluator.java:55)
at org.gradle.api.internal.project.AbstractProject.evaluate(AbstractProject.java:465)
at org.gradle.api.internal.project.AbstractProject.evaluate(AbstractProject.java:76)
at org.gradle.configuration.DefaultBuildConfigurer.configure(DefaultBuildConfigurer.java:31)
at org.gradle.initialization.DefaultGradleLauncher.doBuildStages(DefaultGradleLauncher.java:142)
at org.gradle.initialization.DefaultGradleLauncher.doBuild(DefaultGradleLauncher.java:113)
at org.gradle.initialization.DefaultGradleLauncher.run(DefaultGradleLauncher.java:81)
at org.gradle.launcher.exec.InProcessBuildActionExecuter$DefaultBuildController.run(InProcessBuildActionExecuter.java:64)
at org.gradle.launcher.cli.ExecuteBuildAction.run(ExecuteBuildAction.java:33)
at org.gradle.launcher.cli.ExecuteBuildAction.run(ExecuteBuildAction.java:24)
at org.gradle.launcher.exec.InProcessBuildActionExecuter.execute(InProcessBuildActionExecuter.java:35)
at org.gradle.launcher.exec.InProcessBuildActionExecuter.execute(InProcessBuildActionExecuter.java:26)
at org.gradle.launcher.cli.RunBuildAction.run(RunBuildAction.java:50)
at org.gradle.api.internal.Actions$RunnableActionAdapter.execute(Actions.java:171)
at org.gradle.launcher.cli.CommandLineActionFactory$ParseAndBuildAction.execute(CommandLineActionFactory.java:201)
at org.gradle.launcher.cli.CommandLineActionFactory$ParseAndBuildAction.execute(CommandLineActionFactory.java:174)
at org.gradle.launcher.cli.CommandLineActionFactory$WithLogging.execute(CommandLineActionFactory.java:170)
at org.gradle.launcher.cli.CommandLineActionFactory$WithLogging.execute(CommandLineActionFactory.java:139)
at org.gradle.launcher.cli.ExceptionReportingAction.execute(ExceptionReportingAction.java:33)
at org.gradle.launcher.cli.ExceptionReportingAction.execute(ExceptionReportingAction.java:22)
at org.gradle.launcher.Main.doAction(Main.java:48)
at org.gradle.launcher.bootstrap.EntryPoint.run(EntryPoint.java:45)
at org.gradle.launcher.Main.main(Main.java:39)
at org.gradle.launcher.bootstrap.ProcessBootstrap.runNoExit(ProcessBootstrap.java:50)
at org.gradle.launcher.bootstrap.ProcessBootstrap.run(ProcessBootstrap.java:32)
at org.gradle.launcher.GradleMain.main(GradleMain.java:26)
Caused by: org.gradle.api.tasks.TaskInstantiationException: Could not create task of type 'MyCustomTask'.
at org.gradle.api.internal.project.taskfactory.TaskFactory$1.call(TaskFactory.java:115)
at org.gradle.api.internal.project.taskfactory.TaskFactory$1.call(TaskFactory.java:110)
at org.gradle.api.internal.AbstractTask.injectIntoNewInstance(AbstractTask.java:141)
at org.gradle.api.internal.project.taskfactory.TaskFactory.createTaskObject(TaskFactory.java:110)
at org.gradle.api.internal.project.taskfactory.TaskFactory.createTask(TaskFactory.java:70)
at org.gradle.api.internal.project.taskfactory.AnnotationProcessingTaskFactory.createTask(AnnotationProcessingTaskFactory.java:98)
at org.gradle.api.internal.project.taskfactory.DependencyAutoWireTaskFactory.createTask(DependencyAutoWireTaskFactory.java:39)
at org.gradle.api.internal.tasks.DefaultTaskContainer.create(DefaultTaskContainer.java:53)
at org.gradle.api.internal.project.AbstractProject.task(AbstractProject.java:908)
at org.gradle.api.internal.BeanDynamicObject$MetaClassAdapter.invokeMethod(BeanDynamicObject.java:216)
at org.gradle.api.internal.BeanDynamicObject.invokeMethod(BeanDynamicObject.java:122)
at org.gradle.api.internal.CompositeDynamicObject.invokeMethod(CompositeDynamicObject.java:147)
at org.gradle.groovy.scripts.BasicScript.methodMissing(BasicScript.java:83)
at build_1i1akohldagki7rlpcas5oct58.run(C:\workspace-eclipse-juno\Is2k8\customer\build.gradle:19)
at org.gradle.groovy.scripts.internal.DefaultScriptRunnerFactory$ScriptRunnerImpl.run(DefaultScriptRunnerFactory.java:52)
... 30 more
Caused by: groovy.lang.MissingPropertyException: Could not find property 'timestamp' on task ':customer:is2k8Test'.
at org.gradle.api.internal.AbstractDynamicObject.propertyMissingException(AbstractDynamicObject.java:43)
at org.gradle.api.internal.AbstractDynamicObject.getProperty(AbstractDynamicObject.java:35)
at org.gradle.api.internal.CompositeDynamicObject.getProperty(CompositeDynamicObject.java:94)
at pl.ydp.gradle.MyCustomTask_Decorated.getProperty(Unknown Source)
at pl.ydp.gradle.MyCustomTask.<init>(MyCustomTask.groovy:15)
at pl.ydp.gradle.MyCustomTask_Decorated.<init>(Unknown Source)
at org.gradle.api.internal.DependencyInjectingInstantiator.newInstance(DependencyInjectingInstantiator.java:62)
at org.gradle.api.internal.ClassGeneratorBackedInstantiator.newInstance(ClassGeneratorBackedInstantiator.java:36)
at org.gradle.api.internal.project.taskfactory.TaskFactory$1.call(TaskFactory.java:113)
... 44 more
At the end you will find Caused by: groovy.lang.MissingPropertyException: Could not find property 'timestamp' on task ':customer:is2k8Test'. The failure occurs in MyCustomTask.<init> (MyCustomTask.groovy:15), which means Gradle is still loading a stale compiled version of the class, one that references the old timestamp name. Stopping the Gradle daemon (gradle --stop) or doing a clean rebuild should clear the cached class.