I'm trying to set up the new Nutch 2.1 in a local environment: a fresh download, then "ant build". I'm following the wiki document at http://wiki.apache.org/nutch/Nutch2Tutorial, but so far no luck.
I got the following errors:
java[1815:1903] Unable to load realm info from SCDynamicStore
InjectorJob: org.apache.gora.util.GoraException: org.apache.hadoop.hbase.MasterNotRunningException
at org.apache.gora.store.DataStoreFactory.createDataStore(DataStoreFactory.java:167)
at org.apache.gora.store.DataStoreFactory.createDataStore(DataStoreFactory.java:135)
at org.apache.nutch.storage.StorageUtils.createWebStore(StorageUtils.java:75)
at org.apache.nutch.crawl.InjectorJob.run(InjectorJob.java:214)
at org.apache.nutch.crawl.InjectorJob.inject(InjectorJob.java:228)
at org.apache.nutch.crawl.InjectorJob.run(InjectorJob.java:248)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at org.apache.nutch.crawl.InjectorJob.main(InjectorJob.java:258)
Caused by: org.apache.hadoop.hbase.MasterNotRunningException
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getMaster(HConnectionManager.java:394)
at org.apache.hadoop.hbase.client.HBaseAdmin.<init>(HBaseAdmin.java:94)
at org.apache.gora.hbase.store.HBaseStore.initialize(HBaseStore.java:108)
at org.apache.gora.store.DataStoreFactory.initializeDataStore(DataStoreFactory.java:102)
at org.apache.gora.store.DataStoreFactory.createDataStore(DataStoreFactory.java:161)
... 7 more
Your help is highly appreciated. Thanks.
Caused by: org.apache.hadoop.hbase.MasterNotRunningException
This indicates that the cluster setup is not done correctly. The Nutch tutorial page mentions this:
Install and configure HBase. You can get it here (N.B. Gora 0.2 uses
HBase 0.90.4, however the setup is known to work with more recent
versions of the HBase 0.90.x branch)
Have you performed this step correctly?
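If you want to double-check, a quick sanity pass before re-running the inject might look like this (a sketch; paths are illustrative, adjust HBASE_HOME and the Nutch conf directory to your layout):
$HBASE_HOME/bin/start-hbase.sh        # start HBase; the master must be up
$HBASE_HOME/bin/hbase shell           # then run 'status' inside the shell
# Per the Nutch2Tutorial, Nutch must also be pointed at the HBase store:
# conf/gora.properties should contain
#   gora.datastore.default=org.apache.gora.hbase.store.HBaseStore
# and conf/nutch-site.xml should set storage.data.store.class to the same class.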
I'm using Spark in HDInsight with a Jupyter notebook, and the %%configure "magic" to import packages. Every time there is a problem with the package, Spark crashes with the error:
The code failed because of a fatal error: Status 'shutting_down' not
supported by session..
or
The code failed because of a fatal error: Session 28 unexpectedly
reached final status 'dead'. See logs:
Usually the problem was that I had mistyped the name of the package, so after a few attempts I could solve it. Now I'm trying to import spark-streaming-eventhubs_2.11 and I think I got the name right, but I still receive the error. I looked at all kinds of logs but still couldn't find one that shows any relevant info. Any idea how to troubleshoot errors like this?
%%configure -f
{ "conf": {"spark.jars.packages": "com.microsoft.azure:spark-streaming-eventhubs_2.11:2.0.5" }}
Additional info: when I run
spark-shell --conf spark.jars.packages=com.microsoft.azure:spark-streaming-eventhubs_2.11:2.0.5
The shell starts fine and downloads the package.
I was finally able to find the log files that contain the error. There are two log files that could be interesting:
Livy log: livy-livy-server.out
Yarn log
On my HDInsight cluster, I found the Livy log by connecting to one of the head nodes with SSH and downloading a file at this path (this log didn't contain useful info):
/var/log/livy/livy-livy-server.out
The actual error was in the YARN log file, accessible from the YARN UI. In the HDInsight Azure Portal, go to "Cluster dashboard" -> "Yarn", find your session (KILLED status), click on "Logs" in the table, find "Log Type: stderr", and click "click here for full log".
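If you prefer the command line, the same container logs can usually be fetched over SSH with the standard YARN CLI (the application id below is a placeholder; take the real one from the YARN UI):
yarn logs -applicationId application_1490000000000_0001 | less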
The problem in my case was a Scala version incompatibility between one of the dependencies of spark-streaming_2.11 and Livy. This is supposed to be fixed in Livy 0.4. More info here.
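If upgrading Livy is not an option, one thing worth trying is excluding the conflicting transitive dependency through Spark's spark.jars.excludes setting. This is only a sketch: org.scala-lang:scala-reflect is a placeholder, substitute whatever artifact your logs actually complain about.
%%configure -f
{ "conf": {
    "spark.jars.packages": "com.microsoft.azure:spark-streaming-eventhubs_2.11:2.0.5",
    "spark.jars.excludes": "org.scala-lang:scala-reflect"
} }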
I am trying to start GlassFish 3.1.2 with Eclipse 3.7. Previously it was starting fine (I enter login/password), but when I tried to start it today and entered the login and password, this is what I got in my console:
Info: JMX005: JMXStartupService had Started JMXConnector on JMXService URL service:jmx:rmi://x.xx.xx.xx:8686/jndi/rmi://x.xx.xx.xx:8686/jmxrmi
...
Info: User [] from host x.xx.xx.xx does not have administration access
...
Severe: PWC6117: File "/.../glassfish3/glassfish/lib/install/applications/__admingui/login.jsp" not found
If I don't delete the expired cert, as said here, I see this exception:
Severe: SEC5054: Certificate has expired:
I tried http://localhost:4848/common/index.jsf but I still get a 404 error :(
So my question is: how do I find out why the admin console doesn't start, and how do I fix it?
Thanks
The solution for my standalone GlassFish 3.1.2 was to select JDK 1.7 instead of JDK 1.6.
P.S. I am still not sure why, because basically it should work on 1.6; it's working, but I am still searching for the reason.
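For reference, a standalone GlassFish can be pinned to a specific JDK by setting AS_JAVA in glassfish/config/asenv.conf (the JDK path below is illustrative; point it at your actual JDK 1.7 install):
AS_JAVA=/usr/lib/jvm/jdk1.7.0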
There is a similar question:
Glassfish 3.1 says "admingui login.jsp" not found
Re-installing the server was the proposed solution there.
Could be worth a try if it is an option for you.
I have been following this tutorial to set up Zeppelin on a Spark cluster (version 1.5.2) in HDInsight, on Linux. Everything worked fine, and I managed to connect successfully to the Zeppelin notebook through the SSH tunnel. However, when I try to run any kind of paragraph, the first time I get the following error:
java.io.IOException: No FileSystem for scheme: wasb
After getting this error, if I try to rerun the paragraph, I get another error:
java.net.SocketException: Broken pipe
at java.net.SocketOutputStream.socketWrite0(Native Method)
These errors occur regardless of the code I enter, even if there is no reference to HDFS. What I'm saying is that I get the "No FileSystem" error even for a trivial Scala expression, such as parallelize.
Is there a missing configuration step?
I am downloading the tarball from the script you pointed to as I type. But what I am guessing is that your Zeppelin and Spark installs are not set up to work with wasb. In order to get Spark to work with wasb, you need to add some jars to the classpath. To do this, add something like the following to your spark-defaults.conf (the paths might be different in HDInsight; this is from HDP on IaaS):
spark.driver.extraClassPath /usr/hdp/2.3.0.0-2557/hadoop/lib/azure-storage-2.2.0.jar:/usr/hdp/2.3.0.0-2557/hadoop/lib/microsoft-windowsazure-storage-sdk-0.6.0.jar:/usr/hdp/2.3.0.0-2557/hadoop/hadoop-azure-2.7.1.2.3.0.0-2557.jar
spark.executor.extraClassPath /usr/hdp/2.3.0.0-2557/hadoop/lib/azure-storage-2.2.0.jar:/usr/hdp/2.3.0.0-2557/hadoop/lib/microsoft-windowsazure-storage-sdk-0.6.0.jar:/usr/hdp/2.3.0.0-2557/hadoop/hadoop-azure-2.7.1.2.3.0.0-2557.jar
Once you have Spark working with wasb, the next step is to get those same jars onto the Zeppelin classpath. A good way to test your setup is to make a notebook that prints your env vars and classpath:
sys.env.foreach(println(_))
val cl = ClassLoader.getSystemClassLoader
cl.asInstanceOf[java.net.URLClassLoader].getURLs.foreach(println)
Also, looking at the install script, it is trying to pull the Zeppelin jar from wasb; you might want to point that config somewhere else while you try some of these changes out (zeppelin.sh):
export SPARK_YARN_JAR=wasb:///apps/zeppelin/zeppelin-spark-0.5.5-SNAPSHOT.jar
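A sketch of that workaround: copy the jar out of wasb once, then point the variable at the local copy (the target directory is illustrative):
hadoop fs -copyToLocal wasb:///apps/zeppelin/zeppelin-spark-0.5.5-SNAPSHOT.jar /opt/zeppelin/
export SPARK_YARN_JAR=/opt/zeppelin/zeppelin-spark-0.5.5-SNAPSHOT.jar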
I hope this helps. If you are still having problems I have some other ideas, but I would start with these first.
I have configured HBase 0.94.14 and Nutch 2.3 following this tutorial and made a seed directory which contains a text file with the URLs. When I try to inject these URLs using this command:
$NUTCH_ROOT/runtime/local/bin/nutch inject /seed
I get the following output:
InjectorJob: starting at 2015-07-23 14:00:24
InjectorJob: Injecting urlDir: /seed
and it stays in this state forever.
Can anybody help me with this problem?
Your Nutch version is 2.3. You should not run the command from $NUTCH_ROOT/runtime/local/bin/nutch; run it from $NUTCH_ROOT/runtime/deploy/bin/nutch instead.
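With the seed directory from the question, that would be:
$NUTCH_ROOT/runtime/deploy/bin/nutch inject /seed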
Hope this helps,
Le Quoc Do
I'm trying to create a RichFaces page theme using the instructions here. I know NOTHING about Maven, so I've followed the instructions as best I can, but I've run into an error and don't know what I'm doing wrong. I followed the instructions on the page and then ran this command:
mvn archetype:create -DarchetypeGroupId=org.richfaces.cdk -DarchetypeArtifactId=maven-archetype-theme -DarchetypeVersion=3.3.3.Final -DartifactId=test -DgroupId=org.richfaces.docs -Dversion=1.0
However, when I run the command I get the following error message:
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-archetype-plugin:2.0:create (default-cli) on project standalone-pom: Error creating from archetype: org.apache.maven.archetype.downloader.DownloadException: Error downloading org.richfaces.cdk:maven-archetype-theme:jar:3.3.3. Could not transfer artifact org.richfaces.cdk:maven-archetype-theme:jar:3.3.3 from/to repository.jboss.com (http://repository.jboss.com/maven2/): Access denied to: http://repository.jboss.com/maven2/org/richfaces/cdk/maven-archetype-theme/3.3.3/maven-archetype-theme-3.3.3.jar
[ERROR] org.richfaces.cdk:maven-archetype-theme:jar:3.3.3
I tried browsing to http://repository.jboss.com/maven2/, but I get an "Access Denied" error, just as stated in the error message. My question is, how do I rectify this? Is there a different URL that I should be using? If so, do I edit the Maven settings.xml file and use the new URL? I'd REALLY appreciate anyone who can give me some direction on this.
The JBoss Maven repository URL specified in jboss_profile.txt seems to be outdated.
You can try replacing the <url> of every JBoss repository entry in your Maven settings.xml with https://repository.jboss.org/nexus/content/groups/public-jboss/.
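For example, a repository entry in settings.xml would look something like this (the profile and repository ids are illustrative):
<profile>
  <id>jboss-public</id>
  <repositories>
    <repository>
      <id>jboss-public-repository-group</id>
      <url>https://repository.jboss.org/nexus/content/groups/public-jboss/</url>
    </repository>
  </repositories>
</profile>
Remember to activate the profile via <activeProfiles> if you go this route.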