Laravel scheduler runInBackground not working - cron

I have two tasks that take some time to execute and that should be run every minute.
class TestProcess
{
    public static function test_task_1()
    {
        Logger::info("Launch test_task 1");
        sleep(5);
        Logger::info("End test_task 1");
    }

    public static function test_task_2()
    {
        Logger::info("Launch test_task 2");
        sleep(10);
        Logger::info("End test_task 2");
    }
}
The tasks are scheduled to run every minute in app/Console/Kernel.php:
protected function schedule(Schedule $schedule)
{
    $schedule->command(TestProcess::test_task_1())->everyMinute();
    $schedule->command(TestProcess::test_task_2())->everyMinute();
}
Both tasks are correctly launched every minute, but since the Laravel scheduler runs commands sequentially, the second task only starts once the first one has finished.
[2022-04-15 14:44:01] development.INFO: Launch test_task 1
[2022-04-15 14:44:06] development.INFO: End test_task 1
[2022-04-15 14:44:06] development.INFO: Launch test_task 2
[2022-04-15 14:44:16] development.INFO: End test_task 2
[2022-04-15 14:45:01] development.INFO: Launch test_task 1
[2022-04-15 14:45:06] development.INFO: End test_task 1
[2022-04-15 14:45:06] development.INFO: Launch test_task 2
[2022-04-15 14:45:16] development.INFO: End test_task 2
I would like both tasks to be launched simultaneously every minute.
According to the task scheduling documentation, this can be done with the runInBackground method.
However, when I add the runInBackground call, I get strange results.
protected function schedule(Schedule $schedule)
{
    $schedule->command(TestProcess::test_task_1())->everyMinute()->runInBackground();
    $schedule->command(TestProcess::test_task_2())->everyMinute()->runInBackground();
}
Not only are the tasks not launched simultaneously, they are even launched several times per minute.
[2022-04-15 14:56:02] development.INFO: Launch test_task 1
[2022-04-15 14:56:07] development.INFO: End test_task 1
[2022-04-15 14:56:07] development.INFO: Launch test_task 2
[2022-04-15 14:56:17] development.INFO: End test_task 2
[2022-04-15 14:56:17] development.INFO: Launch test_task 1
[2022-04-15 14:56:17] development.INFO: Launch test_task 1
[2022-04-15 14:56:22] development.INFO: End test_task 1
[2022-04-15 14:56:22] development.INFO: End test_task 1
[2022-04-15 14:56:22] development.INFO: Launch test_task 2
[2022-04-15 14:56:22] development.INFO: Launch test_task 2
[2022-04-15 14:56:32] development.INFO: End test_task 2
[2022-04-15 14:56:32] development.INFO: End test_task 2
Does anybody know why the runInBackground method behaves this way?
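For reference, the documented examples pass an Artisan command signature string (or a command class name) to command(), rather than the return value of a plain PHP method call as in my code above. A minimal sketch of that documented form, using hypothetical command signatures app:test-task-1 and app:test-task-2:

protected function schedule(Schedule $schedule)
{
    // Hypothetical Artisan commands wrapping the two tasks; with a string
    // signature, runInBackground() launches each command as its own process.
    $schedule->command('app:test-task-1')->everyMinute()->runInBackground();
    $schedule->command('app:test-task-2')->everyMinute()->runInBackground();
}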

Related

How to fix this error running Nutch 1.15 ERROR fetcher.Fetcher - Fetcher job did not succeed, job status:FAILED, reason: NA

When I'm starting a crawl using Nutch 1.15 with this:
/usr/local/nutch/bin/crawl --i -s urls/seed.txt crawldb 5
Then it starts to run and I get this error when it tries to fetch:
2019-02-10 15:29:32,021 INFO mapreduce.Job - Running job: job_local1267180618_0001
2019-02-10 15:29:32,145 INFO fetcher.FetchItemQueues - Using queue mode : byHost
2019-02-10 15:29:32,145 INFO fetcher.Fetcher - Fetcher: threads: 50
2019-02-10 15:29:32,145 INFO fetcher.Fetcher - Fetcher: time-out divisor: 2
2019-02-10 15:29:32,149 INFO fetcher.QueueFeeder - QueueFeeder finished: total 1 records hit by time limit : 0
2019-02-10 15:29:32,234 WARN mapred.LocalJobRunner - job_local1267180618_0001
java.lang.Exception: java.lang.NullPointerException
at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:522)
Caused by: java.lang.NullPointerException
at org.apache.nutch.net.URLExemptionFilters.<init>(URLExemptionFilters.java:39)
at org.apache.nutch.fetcher.FetcherThread.<init>(FetcherThread.java:154)
at org.apache.nutch.fetcher.Fetcher$FetcherRun.run(Fetcher.java:222)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
2019-02-10 15:29:33,023 INFO mapreduce.Job - Job job_local1267180618_0001 running in uber mode : false
2019-02-10 15:29:33,025 INFO mapreduce.Job - map 0% reduce 0%
2019-02-10 15:29:33,028 INFO mapreduce.Job - Job job_local1267180618_0001 failed with state FAILED due to: NA
2019-02-10 15:29:33,038 INFO mapreduce.Job - Counters: 0
2019-02-10 15:29:33,039 ERROR fetcher.Fetcher - Fetcher job did not succeed, job status:FAILED, reason: NA
2019-02-10 15:29:33,039 ERROR fetcher.Fetcher - Fetcher: java.lang.RuntimeException: Fetcher job did not succeed, job status:FAILED, reason: NA
at org.apache.nutch.fetcher.Fetcher.fetch(Fetcher.java:503)
at org.apache.nutch.fetcher.Fetcher.run(Fetcher.java:543)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.nutch.fetcher.Fetcher.main(Fetcher.java:517)
And I get this error in the console, which shows the command it runs:
Error running:
/usr/local/nutch/bin/nutch fetch -D mapreduce.job.reduces=2 -D mapred.child.java.opts=-Xmx1000m -D mapreduce.reduce.speculative=false -D mapreduce.map.speculative=false -D mapreduce.map.output.compress=true -D fetcher.timelimit.mins=180 crawlsites/segments/20190210152929 -noParsing -threads 50
I had to delete the Nutch folder and do a fresh install; it worked after that.

Using JMeter, how can I extract a string from the body of a response of an API and save it to a csv file?

I am getting different body responses from an API which look something like this:
"eyJhbRciOiRRRzI1RiIsInR5cCI6IkpXVCJ9.eyJ1c2RybmRtZSI6IlRlcmhhdCIsInR1aWQiOiJRZXJoYXQiLRJyb2xlRjoibW9iaWxlRRRwcCIsImVtYWlsIjoidGVzdEB0ZXN0LmNvbSIsImlkIjoiU2RyaGF0IiwiaWR0IjoxRRR5MDU5OTUxLCJleRRiOjE1NRkwNzQzRRR9.RRzm3VvioZ_iR-v5EGSSfYJLf0d9aZ-9R-RP4UbER04"
I am extracting it using JMeter's Regular Expression Extractor.
How can I print the values from the Regular Expression Extractor to a .csv file for later use in a different test?
I tried the solution presented here, but it did not help. Maybe that script requires more tweaking, but I have no Groovy knowledge:
JMeter extract all values from regular expression and store in a csv
I would greatly appreciate any help you can offer.
Updated question:
Using the exact code from the answer to the previously mentioned question, I get the error below.
Code:
def csv = new File("my.csv")
1.upto(vars.get("foo_matchNr") as int, {
    csv << vars.get("foo_$it") << System.getProperty("line.separator")
})
Error:
2019-02-02 11:30:26,972 INFO o.a.j.e.StandardJMeterEngine: Running the test!
2019-02-02 11:30:26,974 INFO o.a.j.s.SampleEvent: List of sample_variables: []
2019-02-02 11:30:26,978 INFO o.a.j.g.u.JMeterMenuBar: setRunning(true, *local*)
2019-02-02 11:30:27,223 INFO o.a.j.e.StandardJMeterEngine: Starting ThreadGroup: 1 : Thread Group
2019-02-02 11:30:27,223 INFO o.a.j.e.StandardJMeterEngine: Starting 3 threads for group Thread Group.
2019-02-02 11:30:27,223 INFO o.a.j.e.StandardJMeterEngine: Thread will continue on error
2019-02-02 11:30:27,224 INFO o.a.j.t.ThreadGroup: Starting thread group... number=1 threads=3 ramp-up=1 perThread=333.33334 delayedStart=false
2019-02-02 11:30:27,226 INFO o.a.j.t.ThreadGroup: Started thread group number 1
2019-02-02 11:30:27,225 INFO o.a.j.t.JMeterThread: Thread started: Thread Group 1-1
2019-02-02 11:30:27,226 INFO o.a.j.s.FileServer: Stored: D:/Software/apache-jmeter-5.0/bin/testUser.csv
2019-02-02 11:30:27,226 INFO o.a.j.e.StandardJMeterEngine: All thread groups have been started
2019-02-02 11:30:27,559 INFO o.a.j.t.JMeterThread: Thread started: Thread Group 1-2
2019-02-02 11:30:27,892 INFO o.a.j.t.JMeterThread: Thread started: Thread Group 1-3
2019-02-02 11:30:28,428 ERROR o.a.j.e.JSR223PostProcessor: Problem in JSR223 script, JSR223 PostProcessor
javax.script.ScriptException: org.codehaus.groovy.runtime.typehandling.GroovyCastException: Cannot cast object 'null' with class 'null' to class 'int'. Try 'java.lang.Integer' instead
at org.codehaus.groovy.jsr223.GroovyScriptEngineImpl.eval(GroovyScriptEngineImpl.java:324) ~[groovy-all-2.4.15.jar:2.4.15]
at org.codehaus.groovy.jsr223.GroovyCompiledScript.eval(GroovyCompiledScript.java:72) ~[groovy-all-2.4.15.jar:2.4.15]
at javax.script.CompiledScript.eval(Unknown Source) ~[?:1.8.0_191]
at org.apache.jmeter.util.JSR223TestElement.processFileOrScript(JSR223TestElement.java:221) ~[ApacheJMeter_core.jar:5.0 r1840935]
at org.apache.jmeter.extractor.JSR223PostProcessor.process(JSR223PostProcessor.java:44) [ApacheJMeter_components.jar:5.0 r1840935]
at org.apache.jmeter.threads.JMeterThread.runPostProcessors(JMeterThread.java:925) [ApacheJMeter_core.jar:5.0 r1840935]
at org.apache.jmeter.threads.JMeterThread.executeSamplePackage(JMeterThread.java:564) [ApacheJMeter_core.jar:5.0 r1840935]
at org.apache.jmeter.threads.JMeterThread.processSampler(JMeterThread.java:486) [ApacheJMeter_core.jar:5.0 r1840935]
at org.apache.jmeter.threads.JMeterThread.run(JMeterThread.java:253) [ApacheJMeter_core.jar:5.0 r1840935]
at java.lang.Thread.run(Unknown Source) [?:1.8.0_191]
Caused by: org.codehaus.groovy.runtime.typehandling.GroovyCastException: Cannot cast object 'null' with class 'null' to class 'int'. Try 'java.lang.Integer' instead
at org.codehaus.groovy.runtime.typehandling.DefaultTypeTransformation.castToNumber(DefaultTypeTransformation.java:176) ~[groovy-all-2.4.15.jar:2.4.15]
at org.codehaus.groovy.runtime.typehandling.DefaultTypeTransformation.intUnbox(DefaultTypeTransformation.java:82) ~[groovy-all-2.4.15.jar:2.4.15]
at Script5.run(Script5.groovy:2) ~[?:?]
at org.codehaus.groovy.jsr223.GroovyScriptEngineImpl.eval(GroovyScriptEngineImpl.java:321) ~[groovy-all-2.4.15.jar:2.4.15]
... 9 more
2019-02-02 11:30:28,436 INFO o.a.j.t.JMeterThread: Thread is done: Thread Group 1-1
2019-02-02 11:30:28,439 INFO o.a.j.t.JMeterThread: Thread finished: Thread Group 1-1
2019-02-02 11:30:28,583 ERROR o.a.j.e.JSR223PostProcessor: Problem in JSR223 script, JSR223 PostProcessor
javax.script.ScriptException: org.codehaus.groovy.runtime.typehandling.GroovyCastException: Cannot cast object 'null' with class 'null' to class 'int'. Try 'java.lang.Integer' instead
at org.codehaus.groovy.jsr223.GroovyScriptEngineImpl.eval(GroovyScriptEngineImpl.java:324) ~[groovy-all-2.4.15.jar:2.4.15]
etc.
Plus 2 more errors like that.
I think that "foo_matchNr" is being seen as a single variable, and "foo" alone is my variable, which either changes its value every time or is a list of values? I don't know how the Regular Expression Extractor works, or Groovy for that matter.
Using code adapted (with my zero knowledge of Groovy) from the answer to the previously mentioned question, I get the error below.
Changed code:
def csv = new File("my.csv")
1.upto(vars.get("foo") as int, {
    csv << vars.get("foo_$it") << System.getProperty("line.separator")
})
Error:
2019-02-02 11:08:27,613 INFO o.a.j.e.StandardJMeterEngine: Running the test!
2019-02-02 11:08:27,614 INFO o.a.j.s.SampleEvent: List of sample_variables: []
2019-02-02 11:08:27,635 INFO o.a.j.g.u.JMeterMenuBar: setRunning(true, *local*)
2019-02-02 11:08:27,850 INFO o.a.j.e.StandardJMeterEngine: Starting ThreadGroup: 1 : Thread Group
2019-02-02 11:08:27,850 INFO o.a.j.e.StandardJMeterEngine: Starting 3 threads for group Thread Group.
2019-02-02 11:08:27,850 INFO o.a.j.e.StandardJMeterEngine: Thread will continue on error
2019-02-02 11:08:27,850 INFO o.a.j.t.ThreadGroup: Starting thread group... number=1 threads=3 ramp-up=1 perThread=333.33334 delayedStart=false
2019-02-02 11:08:27,851 INFO o.a.j.t.JMeterThread: Thread started: Thread Group 1-1
2019-02-02 11:08:27,852 INFO o.a.j.s.FileServer: Stored: D:/Software/apache-jmeter-5.0/bin/testUser.csv
2019-02-02 11:08:27,853 INFO o.a.j.t.ThreadGroup: Started thread group number 1
2019-02-02 11:08:27,853 INFO o.a.j.e.StandardJMeterEngine: All thread groups have been started
2019-02-02 11:08:28,187 INFO o.a.j.t.JMeterThread: Thread started: Thread Group 1-2
2019-02-02 11:08:28,520 INFO o.a.j.t.JMeterThread: Thread started: Thread Group 1-3
2019-02-02 11:08:28,794 ERROR o.a.j.e.JSR223PostProcessor: Problem in JSR223 script, JSR223 PostProcessor
javax.script.ScriptException: java.lang.NumberFormatException: For input string: "eyJhbRciOiRRRzI1RiIsInR5cCI6IkpXVCJ9.eyJ1c2RybmRtZSI6IlRlcmhhdCIsInR1aWQiOiJRZXJoYXQiLRJyb2xlRjoibW9iaWxlRRRwcCIsImVtYWlsIjoidGVzdEB0ZXN0LmNvbSIsImlkIjoiU2RyaGF0IiwiaWR0IjoxRRR5MDU5OTUxLCJleRRiOjE1NRkwNzQzRRR9.RRzm3VvioZ_iR-v5EGSSfYJLf0d9aZ-9R-RP4UbER04"
at org.codehaus.groovy.jsr223.GroovyScriptEngineImpl.eval(GroovyScriptEngineImpl.java:324) ~[groovy-all-2.4.15.jar:2.4.15]
at org.codehaus.groovy.jsr223.GroovyCompiledScript.eval(GroovyCompiledScript.java:72) ~[groovy-all-2.4.15.jar:2.4.15]
at javax.script.CompiledScript.eval(Unknown Source) ~[?:1.8.0_191]
at org.apache.jmeter.util.JSR223TestElement.processFileOrScript(JSR223TestElement.java:221) ~[ApacheJMeter_core.jar:5.0 r1840935]
at org.apache.jmeter.extractor.JSR223PostProcessor.process(JSR223PostProcessor.java:44) [ApacheJMeter_components.jar:5.0 r1840935]
at org.apache.jmeter.threads.JMeterThread.runPostProcessors(JMeterThread.java:925) [ApacheJMeter_core.jar:5.0 r1840935]
at org.apache.jmeter.threads.JMeterThread.executeSamplePackage(JMeterThread.java:564) [ApacheJMeter_core.jar:5.0 r1840935]
at org.apache.jmeter.threads.JMeterThread.processSampler(JMeterThread.java:486) [ApacheJMeter_core.jar:5.0 r1840935]
at org.apache.jmeter.threads.JMeterThread.run(JMeterThread.java:253) [ApacheJMeter_core.jar:5.0 r1840935]
at java.lang.Thread.run(Unknown Source) [?:1.8.0_191]
Caused by: java.lang.NumberFormatException: For input string: "eyJhbRciOiRRRzI1RiIsInR5cCI6IkpXVCJ9.eyJ1c2RybmRtZSI6IlRlcmhhdCIsInR1aWQiOiJRZXJoYXQiLRJyb2xlRjoibW9iaWxlRRRwcCIsImVtYWlsIjoidGVzdEB0ZXN0LmNvbSIsImlkIjoiU2RyaGF0IiwiaWR0IjoxRRR5MDU5OTUxLCJleRRiOjE1NRkwNzQzRRR9.RRzm3VvioZ_iR-v5EGSSfYJLf0d9aZ-9R-RP4UbER04"
at java.lang.NumberFormatException.forInputString(Unknown Source) ~[?:1.8.0_191]
at java.lang.Integer.parseInt(Unknown Source) ~[?:1.8.0_191]
at java.lang.Integer.valueOf(Unknown Source) ~[?:1.8.0_191]
at org.codehaus.groovy.runtime.StringGroovyMethods.toInteger(StringGroovyMethods.java:3319) ~[groovy-all-2.4.15.jar:2.4.15]
at org.codehaus.groovy.runtime.StringGroovyMethods.asType(StringGroovyMethods.java:178) ~[groovy-all-2.4.15.jar:2.4.15]
at org.codehaus.groovy.runtime.dgm$1048.doMethodInvoke(Unknown Source) ~[groovy-all-2.4.15.jar:2.4.15]
at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1225) ~[groovy-all-2.4.15.jar:2.4.15]
at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1034) ~[groovy-all-2.4.15.jar:2.4.15]
at org.codehaus.groovy.runtime.InvokerHelper.invokePojoMethod(InvokerHelper.java:935) ~[groovy-all-2.4.15.jar:2.4.15]
at org.codehaus.groovy.runtime.InvokerHelper.invokeMethod(InvokerHelper.java:926) ~[groovy-all-2.4.15.jar:2.4.15]
at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.invokeMethodN(ScriptBytecodeAdapter.java:181) ~[groovy-all-2.4.15.jar:2.4.15]
at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.asType(ScriptBytecodeAdapter.java:604) ~[groovy-all-2.4.15.jar:2.4.15]
at Script4.run(Script4.groovy:2) ~[?:?]
at org.codehaus.groovy.jsr223.GroovyScriptEngineImpl.eval(GroovyScriptEngineImpl.java:321) ~[groovy-all-2.4.15.jar:2.4.15]
... 9 more
2019-02-02 11:08:28,797 INFO o.a.j.t.JMeterThread: Thread is done: Thread Group 1-1
2019-02-02 11:08:28,797 INFO o.a.j.t.JMeterThread: Thread finished: Thread Group 1-1
2019-02-02 11:08:29,160 ERROR o.a.j.e.JSR223PostProcessor: Problem in JSR223 script, JSR223 PostProcessor
javax.script.ScriptException: java.lang.NumberFormatException: For input string: "
This time it "sees" the strings I am interested in processing, but it seems to encounter some issues processing them.
I get two more errors like that for the other two variables, which come from the other two responses (I am making three requests and getting three responses, each containing a different body string). Only the value after "For input string" differs.
If you have only one match in your response, use this code in a JSR223 PostProcessor:
def csv = new File("my.csv")
csv << vars.get("foo") << System.getProperty("line.separator")
For your first piece of code:
def csv = new File("my.csv")
1.upto(vars.get("foo_matchNr") as int, {
    csv << vars.get("foo_$it") << System.getProperty("line.separator")
})
the error is telling you that the ${foo_matchNr} variable does not exist; double-check that it is present and has a numeric value using the Debug Sampler and View Results Tree listener combination.
For your second piece of code:
def csv = new File("my.csv")
1.upto(vars.get("foo") as int, {
    csv << vars.get("foo_$it") << System.getProperty("line.separator")
})
the error is telling you that it cannot cast this line:
eyJhbRciOiRRRzI1RiIsInR5cCI6IkpXVCJ9.eyJ1c2RybmRtZSI6IlRlcmhhdCIsInR1aWQiOiJRZXJoYXQiLRJyb2xlRjoibW9iaWxlRRRwcCIsImVtYWlsIjoidGVzdEB0ZXN0LmNvbSIsImlkIjoiU2RyaGF0IiwiaWR0IjoxRRR5MDU5OTUxLCJleRRiOjE1NRkwNzQzRRR9.RRzm3VvioZ_iR-v5EGSSfYJLf0d9aZ-9R-RP4UbER04
to an Integer. My expectation is that you need to change the code to something like:
new File("my.csv") << vars.get("foo") << System.getProperty("line.separator")
If the above code does not produce what you're looking for, update your question with the output of the aforementioned Debug Sampler and indicate which variable(s) you would like to store and how.
Check out The Groovy Templates Cheat Sheet for JMeter article to get started with Groovy scripting.
In general, I would recommend using the Sample Variables property in order to store generated JMeter variables in the .jtl results file. If you want the output to go into a separate file, consider using the Flexible File Writer.
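As an illustration (assuming the extractor's reference name is foo), the Sample Variables approach is just a JMeter property, set either in user.properties or on the command line:

# user.properties -- record the variable "foo" in the .jtl results file
sample_variables=foo

# or per run, without editing user.properties (test plan and result file names are placeholders):
jmeter -n -t test.jmx -l results.jtl -Jsample_variables=foo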

Spark streaming batches stopping and waiting for GC

I have a simple Spark Streaming job. It reads events from a Kafka topic, does a simple event transformation (e.g. replacing some characters with others) and sends the transformed events to a second Kafka topic. Everything works OK for some time (1-1.5 h), after which we see that batches are scheduled and waiting to run. The pause takes about 5-6 minutes, during which GC is working and cleaning memory. After that everything works OK again, but sometimes processing stops and we see errors like the stack trace below in the logs. Please advise which Spark / Java parameters should be set to avoid this GC overhead.
Batches are scheduled every 10 seconds; one batch execution takes about 5 seconds.
Stack trace
2017-09-21 11:26:15 WARN TaskSetManager:66 - Lost task 33.0 in stage 115.0 (TID 4699, work8, executor 6): java.lang.OutOfMemoryError: GC overhead limit exceeded
at org.apache.kafka.clients.consumer.internals.Fetcher.createFetchRequests(Fetcher.java:724)
at org.apache.kafka.clients.consumer.internals.Fetcher.sendFetches(Fetcher.java:176)
at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1042)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:995)
at org.apache.spark.streaming.kafka010.CachedKafkaConsumer.poll(CachedKafkaConsumer.scala:99)
at org.apache.spark.streaming.kafka010.CachedKafkaConsumer.get(CachedKafkaConsumer.scala:70)
at org.apache.spark.streaming.kafka010.KafkaRDD$KafkaRDDIterator.next(KafkaRDD.scala:228)
at org.apache.spark.streaming.kafka010.KafkaRDD$KafkaRDDIterator.next(KafkaRDD.scala:194)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
at scala.collection.Iterator$class.foreach(Iterator.scala:893)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
at org.apache.spark.rdd.RDD$$anonfun$foreach$1$$anonfun$apply$28.apply(RDD.scala:918)
at org.apache.spark.rdd.RDD$$anonfun$foreach$1$$anonfun$apply$28.apply(RDD.scala:918)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1951)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1951)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
at org.apache.spark.scheduler.Task.run(Task.scala:99)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
2017-09-21 11:26:15 INFO TaskSetManager:54 - Lost task 37.0 in stage 115.0 (TID 4702) on work8, executor 6: java.lang.OutOfMemoryError (GC overhead limit exceeded) [duplicate 1]
2017-09-21 11:26:15 INFO TaskSetManager:54 - Lost task 26.0 in stage 115.0 (TID 4695) on work8, executor 6: java.lang.OutOfMemoryError (GC overhead limit exceeded) [duplicate 2]
Parameters of spark-submit
spark-2.1.1-bin-hadoop2.6/bin/spark-submit \
--master yarn \
--deploy-mode client \
--executor-cores 8 \
--executor-memory 20g \
--driver-memory 20g \
--num-executors 4 \
--conf "spark.driver.maxResultSize=8g" \
--conf "spark.streaming.receiver.maxRate=1125" \
--conf "spark.streaming.kafka.maxRatePerPartition=1125" \
// Job
val sendToKafka = KafkaSender.sendToKafka(spark, kafkaServers, outputTopic, kafkaEnabled) _
val stream = KafkaUtils
  .createDirectStream(ssc, PreferConsistent, Subscribe[String, String](inputTopics, kafkaParams))
stream.foreachRDD { statementsStreamBatch =>
  val offsetRanges = statementsStreamBatch.asInstanceOf[HasOffsetRanges].offsetRanges
  if (!statementsStreamBatch.isEmpty) {
    val inputCsvRDD = statementsStreamBatch.map(_.value)
    var outputCsvRDD: RDD[String] = null
    if (enrichmerEnabled) {
      outputCsvRDD = Enricher.processStream(inputCsvRDD, enricherNumberOfFields)
    } else {
      outputCsvRDD = inputCsvRDD
    }
    sendToKafka(outputCsvRDD)
  }
  stream.asInstanceOf[CanCommitOffsets].commitAsync(offsetRanges)
}
ssc.start()
ssc.awaitTermination()
// Enricher
object Enricher {
  def processStream(eventStream: RDD[String], numberOfFields: Integer): RDD[String] = {
    eventStream.map(
      csv => if (csv.count(_ == ',') <= numberOfFields) {
        csv
      } else {
        csv.replaceAll(",(?=[^']*',)", "#")
      }
    )
  }
}
// KafkaSender
object KafkaSender {
  def sendToKafka(spark: SparkSession, servers: String, topic: String, enabled: Boolean)(message: RDD[String]): Unit = {
    val kafkaSink = spark.sparkContext.broadcast(KafkaSink(getKafkaProperties(servers)))
    val kafkaTopic = spark.sparkContext.broadcast(topic)
    message.foreach(kafkaSink.value.send(kafkaTopic.value, _))
  }
}
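One common starting point for this kind of GC pressure (a sketch only, not a verified fix for this particular job) is to pass JVM GC options to the executors and driver via Spark's extraJavaOptions settings in the spark-submit call, for example switching to G1 and enabling GC logging:

--conf "spark.executor.extraJavaOptions=-XX:+UseG1GC -XX:+PrintGCDetails -XX:+PrintGCTimeStamps" \
--conf "spark.driver.extraJavaOptions=-XX:+UseG1GC" \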

Cassandra throws OutOfMemory

In our test environment, we have a one-node Cassandra cluster with RF=1 for all keyspaces.
VM arguments of interest are listed below
-XX:+CMSClassUnloadingEnabled -XX:+UseThreadPriorities -XX:ThreadPriorityPolicy=42 -Xms2G -Xmx2G -Xmn1G -XX:+HeapDumpOnOutOfMemoryError -Xss256k -XX:StringTableSize=1000003 -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled -XX:SurvivorRatio=8
We noticed Full GC happening frequently and Cassandra becoming unresponsive during GC.
INFO [Service Thread] 2016-12-29 15:52:40,901 GCInspector.java:252 - ParNew GC in 238ms. CMS Old Gen: 782576192 -> 802826248; Par Survivor Space: 60068168 -> 32163264
INFO [Service Thread] 2016-12-29 15:52:40,902 GCInspector.java:252 - ConcurrentMarkSweep GC in 1448ms. CMS Old Gen: 802826248 -> 393377248; Par Eden Space: 859045888 -> 0; Par Survivor Space: 32163264 -> 0
We are getting java.lang.OutOfMemoryError with the exception below:
ERROR [SharedPool-Worker-5] 2017-01-26 09:23:13,694 JVMStabilityInspector.java:94 - JVM state determined to be unstable. Exiting forcefully due to:
java.lang.OutOfMemoryError: Java heap space
at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57) ~[na:1.7.0_80]
at java.nio.ByteBuffer.allocate(ByteBuffer.java:331) ~[na:1.7.0_80]
at org.apache.cassandra.utils.memory.SlabAllocator.getRegion(SlabAllocator.java:137) ~[apache-cassandra-2.1.8.jar:2.1.8]
at org.apache.cassandra.utils.memory.SlabAllocator.allocate(SlabAllocator.java:97) ~[apache-cassandra-2.1.8.jar:2.1.8]
at org.apache.cassandra.utils.memory.ContextAllocator.allocate(ContextAllocator.java:57) ~[apache-cassandra-2.1.8.jar:2.1.8]
at org.apache.cassandra.utils.memory.ContextAllocator.clone(ContextAllocator.java:47) ~[apache-cassandra-2.1.8.jar:2.1.8]
at org.apache.cassandra.utils.memory.MemtableBufferAllocator.clone(MemtableBufferAllocator.java:61) ~[apache-cassandra-2.1.8.jar:2.1.8]
at org.apache.cassandra.db.Memtable.put(Memtable.java:192) ~[apache-cassandra-2.1.8.jar:2.1.8]
at org.apache.cassandra.db.ColumnFamilyStore.apply(ColumnFamilyStore.java:1237) ~[apache-cassandra-2.1.8.jar:2.1.8]
at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:400) ~[apache-cassandra-2.1.8.jar:2.1.8]
at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:363) ~[apache-cassandra-2.1.8.jar:2.1.8]
at org.apache.cassandra.db.Mutation.apply(Mutation.java:214) ~[apache-cassandra-2.1.8.jar:2.1.8]
at org.apache.cassandra.service.StorageProxy$7.runMayThrow(StorageProxy.java:1033) ~[apache-cassandra-2.1.8.jar:2.1.8]
at org.apache.cassandra.service.StorageProxy$LocalMutationRunnable.run(StorageProxy.java:2224) ~[apache-cassandra-2.1.8.jar:2.1.8]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) ~[na:1.7.0_80]
at org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164) ~[apache-cassandra-2.1.8.jar:2.1.8]
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) [apache-cassandra-2.1.8.jar:2.1.8]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_80]
We were able to restore Cassandra after executing nodetool repair.
nodetool status
Datacenter: DC1
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns Host ID Rack
UN 10.3.211.3 5.74 GB 256 ? 32251391-5eee-4891-996d-30fb225116a1 RAC1
Note: Non-system keyspaces don't have the same replication settings, effective ownership information is meaningless
nodetool info
ID : 32251391-5eee-4891-996d-30fb225116a1
Gossip active : true
Thrift active : true
Native Transport active: true
Load : 5.74 GB
Generation No : 1485526088
Uptime (seconds) : 330651
Heap Memory (MB) : 812.72 / 1945.63
Off Heap Memory (MB) : 7.63
Data Center : DC1
Rack : RAC1
Exceptions : 0
Key Cache : entries 68, size 6.61 KB, capacity 97 MB, 1158 hits, 1276 requests, 0.908 recent hit rate, 14400 save period in seconds
Row Cache : entries 0, size 0 bytes, capacity 0 bytes, 0 hits, 0 requests, NaN recent hit rate, 0 save period in seconds
Counter Cache : entries 0, size 0 bytes, capacity 48 MB, 0 hits, 0 requests, NaN recent hit rate, 7200 save period in seconds
Token : (invoke with -T/--tokens to see all 256 tokens)
In system.log, I see lots of "Compacting large partition" warnings:
WARN [CompactionExecutor:33463] 2016-12-24 05:42:29,550 SSTableWriter.java:240 - Compacting large partition mydb/Table_Name:2016-12-23 00:00+0530 (142735455 bytes)
WARN [CompactionExecutor:33465] 2016-12-24 05:47:57,343 SSTableWriter.java:240 - Compacting large partition mydb/Table_Name_2:22:0c2e6c00-a5a3-11e6-a05e-1f69f32db21c (162203393 bytes)
For tombstones, I notice the following in system.log:
[main] 2016-12-28 18:23:06,534 YamlConfigurationLoader.java:135 - Node
configuration:[authenticator=PasswordAuthenticator;
authorizer=CassandraAuthorizer; auto_snapshot=true;
batch_size_warn_threshold_in_kb=5;
batchlog_replay_throttle_in_kb=1024;
cas_contention_timeout_in_ms=1000;
client_encryption_options=; cluster_name=bankbazaar;
column_index_size_in_kb=64; commit_failure_policy=ignore;
commitlog_directory=/var/cassandra/log/commitlog;
commitlog_segment_size_in_mb=32; commitlog_sync=periodic;
commitlog_sync_period_in_ms=10000;
compaction_throughput_mb_per_sec=16; concurrent_counter_writes=32;
concurrent_reads=32; concurrent_writes=32;
counter_cache_save_period=7200; counter_cache_size_in_mb=null;
counter_write_request_timeout_in_ms=15000; cross_node_timeout=false;
data_file_directories=[/cryptfs/sdb/cassandra/data,
/cryptfs/sdc/cassandra/data, /cryptfs/sdd/cassandra/data];
disk_failure_policy=best_effort; dynamic_snitch_badness_threshold=0.1;
dynamic_snitch_reset_interval_in_ms=600000;
dynamic_snitch_update_interval_in_ms=100;
endpoint_snitch=GossipingPropertyFileSnitch;
hinted_handoff_enabled=true; hinted_handoff_throttle_in_kb=1024;
incremental_backups=false; index_summary_capacity_in_mb=null;
index_summary_resize_interval_in_minutes=60;
inter_dc_tcp_nodelay=false; internode_compression=all;
key_cache_save_period=14400; key_cache_size_in_mb=null;
listen_address=127.0.0.1; max_hint_window_in_ms=10800000;
max_hints_delivery_threads=2; memtable_allocation_type=heap_buffers;
native_transport_port=9042; num_tokens=256;
partitioner=org.apache.cassandra.dht.Murmur3Partitioner;
permissions_validity_in_ms=2000; range_request_timeout_in_ms=20000;
read_request_timeout_in_ms=10000;
request_scheduler=org.apache.cassandra.scheduler.NoScheduler;
request_timeout_in_ms=20000; row_cache_save_period=0;
row_cache_size_in_mb=0; rpc_address=127.0.0.1; rpc_keepalive=true;
rpc_port=9160; rpc_server_type=sync;
saved_caches_directory=/var/cassandra/data/saved_caches;
seed_provider=[{class_name=org.apache.cassandra.locator.SimpleSeedProvider,
parameters=[{seeds=127.0.0.1}]}];
server_encryption_options=;
snapshot_before_compaction=false; ssl_storage_port=9001;
sstable_preemptive_open_interval_in_mb=50;
start_native_transport=true; start_rpc=true; storage_port=9000;
thrift_framed_transport_size_in_mb=15;
tombstone_failure_threshold=100000; tombstone_warn_threshold=1000;
trickle_fsync=false; trickle_fsync_interval_in_kb=10240;
truncate_request_timeout_in_ms=60000;
write_request_timeout_in_ms=5000]
nodetool tpstats
Pool Name Active Pending Completed Blocked All time blocked
CounterMutationStage 0 0 0 0 0
ReadStage 32 4061 50469243 0 0
RequestResponseStage 0 0 0 0 0
MutationStage 32 22 27665114 0 0
ReadRepairStage 0 0 0 0 0
GossipStage 0 0 0 0 0
CacheCleanupExecutor 0 0 0 0 0
AntiEntropyStage 0 0 0 0 0
MigrationStage 0 0 0 0 0
Sampler 0 0 0 0 0
ValidationExecutor 0 0 0 0 0
CommitLogArchiver 0 0 0 0 0
MiscStage 0 0 0 0 0
MemtableFlushWriter 0 0 7769 0 0
MemtableReclaimMemory 1 57 13433 0 0
PendingRangeCalculator 0 0 1 0 0
MemtablePostFlush 0 0 9279 0 0
CompactionExecutor 3 47 169022 0 0
InternalResponseStage 0 0 0 0 0
HintedHandoff 0 1 148 0 0
Is there any YAML or other configuration that can be used to avoid these "large compactions"?
What is the correct compaction strategy to use? Can an OutOfMemory error be caused by the wrong compaction strategy?
In one keyspace, each row is written once and read multiple times.
Another keyspace holds time-series-like data that is insert-only with multiple reads.
Seeing this: Heap Memory (MB): 812.72 / 1945.63 tells me that your one machine is probably underpowered. There's a good chance that you're not able to keep up with GC.
While in this case I think this is probably related to being undersized, access patterns, data model, and payload size can also affect GC, so if you'd like to update your post with that info, I can update my answer to reflect it.
EDIT to reflect new information
Thanks for adding the additional information. Based on what you posted, there are two immediate things I notice that can blow out your heap:
Large Partition:
It looks like compaction had to compact two partitions that exceeded 100 MB (140 MB and 160 MB respectively). Normally that would still be OK (not great), but because you're running on underpowered hardware with such a small heap, that's quite a lot.
The thing about compaction
It uses a healthy mix of resources when it runs. It's business as usual, so it's something you should test and plan for. In this case, I'm certain that compaction is working harder because of the large partitions, which consume CPU (which GC also needs), heap, and IO.
This brings me to another concern:
Pool Name Active Pending Completed Blocked All time blocked
CounterMutationStage 0 0 0 0 0
ReadStage 32 4061 50469243 0 0
This is usually a sign that you need to scale up and/or scale out. In your case, you might want to do both. You can exhaust a single, underpowered node pretty quickly with an unoptimized data model. You also don't get to experience the nuances of a distributed system when you test in a single-node environment.
So the TL;DR:
For a read-heavy workload (which this seems to be), you'll need a larger heap. For overall sanity and cluster health, you'll need to revisit your data model to make sure the partitioning logic is sound. If you're not sure how or why to do either, I suggest spending some time here: https://academy.datastax.com/courses
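For instance, once the box has more RAM, the heap for Cassandra 2.1 is typically raised in conf/cassandra-env.sh; the values below are only illustrative and depend on the memory and cores actually available:

# conf/cassandra-env.sh -- illustrative sizing, adjust to the hardware
MAX_HEAP_SIZE="8G"
HEAP_NEWSIZE="800M"   # with CMS, new-gen is commonly sized around 100 MB per CPU core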

Nutch crawl no error , but result is nothing

I am trying to crawl some URLs with Nutch 2.1 as follows:
bin/nutch crawl urls -dir crawl -depth 3 -topN 5
http://wiki.apache.org/nutch/NutchTutorial
There is no error, but the following folders are not created:
crawl/crawldb
crawl/linkdb
crawl/segments
Can anyone help me?
I have not been able to resolve this for two days.
Thanks a lot!
The output is as follows.
FetcherJob: threads: 10
FetcherJob: parsing: false
FetcherJob: resuming: false
FetcherJob : timelimit set for : -1
Using queue mode : byHost
Fetcher: threads: 10
QueueFeeder finished: total 0 records. Hit by time limit :0
-finishing thread FetcherThread1, activeThreads=0
Fetcher: throughput threshold: -1
Fetcher: throughput threshold sequence: 5
-finishing thread FetcherThread2, activeThreads=7
-finishing thread FetcherThread3, activeThreads=6
-finishing thread FetcherThread4, activeThreads=5
-finishing thread FetcherThread5, activeThreads=4
-finishing thread FetcherThread6, activeThreads=3
-finishing thread FetcherThread7, activeThreads=2
-finishing thread FetcherThread0, activeThreads=1
-finishing thread FetcherThread8, activeThreads=0
-finishing thread FetcherThread9, activeThreads=0
0/0 spinwaiting/active, 0 pages, 0 errors, 0.0 0.0 pages/s, 0 0 kb/s, 0 URLs in 0 queues
-activeThreads=0
ParserJob: resuming: false
ParserJob: forced reparse: false
ParserJob: parsing all
FetcherJob: threads: 10
FetcherJob: parsing: false
FetcherJob: resuming: false
FetcherJob : timelimit set for : -1
Using queue mode : byHost
Fetcher: threads: 10
QueueFeeder finished: total 0 records. Hit by time limit :0
-finishing thread FetcherThread1, activeThreads=0
Fetcher: throughput threshold: -1
Fetcher: throughput threshold sequence: 5
-finishing thread FetcherThread2, activeThreads=7
-finishing thread FetcherThread3, activeThreads=6
-finishing thread FetcherThread4, activeThreads=5
-finishing thread FetcherThread5, activeThreads=4
-finishing thread FetcherThread6, activeThreads=3
-finishing thread FetcherThread7, activeThreads=2
-finishing thread FetcherThread0, activeThreads=1
-finishing thread FetcherThread8, activeThreads=0
-finishing thread FetcherThread9, activeThreads=0
0/0 spinwaiting/active, 0 pages, 0 errors, 0.0 0.0 pages/s, 0 0 kb/s, 0 URLs in 0 queues
-activeThreads=0
ParserJob: resuming: false
ParserJob: forced reparse: false
ParserJob: parsing all
FetcherJob: threads: 10
FetcherJob: parsing: false
FetcherJob: resuming: false
FetcherJob : timelimit set for : -1
Using queue mode : byHost
Fetcher: threads: 10
QueueFeeder finished: total 0 records. Hit by time limit :0
Fetcher: throughput threshold: -1
Fetcher: throughput threshold sequence: 5
-finishing thread FetcherThread9, activeThreads=9
-finishing thread FetcherThread0, activeThreads=8
-finishing thread FetcherThread1, activeThreads=7
-finishing thread FetcherThread2, activeThreads=6
-finishing thread FetcherThread3, activeThreads=5
-finishing thread FetcherThread4, activeThreads=4
-finishing thread FetcherThread5, activeThreads=3
-finishing thread FetcherThread6, activeThreads=2
-finishing thread FetcherThread7, activeThreads=1
-finishing thread FetcherThread8, activeThreads=0
0/0 spinwaiting/active, 0 pages, 0 errors, 0.0 0.0 pages/s, 0 0 kb/s, 0 URLs in 0 queues
-activeThreads=0
ParserJob: resuming: false
ParserJob: forced reparse: false
ParserJob: parsing all
runtime/local/conf/nutch-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>http.agent.name</name>
<value>My Nutch Spider</value>
</property>
<property>
<name>storage.data.store.class</name>
<value>org.apache.gora.hbase.store.HBaseStore</value>
<description>Default class for storing data</description>
</property>
<property>
<name>http.robots.agents</name>
<value>My Nutch Spider</value>
<description>The agent strings we'll look for in robots.txt files,
comma-separated, in decreasing order of precedence. You should
put the value of http.agent.name as the first agent name, and keep the
default * at the end of the list. E.g.: BlurflDev,Blurfl,*
</description>
</property>
<property>
<name>http.content.limit</name>
<value>262144</value>
</property>
</configuration>
runtime/local/conf/regex-urlfilter.txt
# accept anything else
+.
runtime/local/urls/seed.txt
http://nutch.apache.org/
As you are using Nutch 2.x, you need to follow the relevant tutorial; the one you linked is for Nutch 1.x. Nutch 2.x uses external storage backends such as HBase and Cassandra, so the crawldb, segments, etc. directories won't be created.
Also, use the bin/crawl script instead of the bin/nutch crawl command.
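From memory, the 2.x crawl script takes a seed directory, a crawl ID of your choosing, and a number of rounds; run bin/crawl with no arguments to confirm the exact usage for your version. Roughly:

# roughly: bin/crawl <seedDir> <crawlId> <numberOfRounds>
bin/crawl urls myCrawlId 3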
