I want to disable the HDFS web UI at http://localhost:50070.
I tried to disable it with the configuration below; however, it is still accessible.
<property>
  <name>dfs.webhdfs.enabled</name>
  <value>false</value>
  <description>Enable or disable webhdfs. Defaults to false</description>
</property>
That property controls WebHDFS (the REST API), not the web UI.
What you want is dfs.namenode.http-bind-host set to 127.0.0.1, which makes the NameNode's HTTP server bind only to the loopback interface, so it is not externally reachable.
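For example, in hdfs-site.xml (this bind-host property exists on recent Hadoop 2.x releases; check your version's hdfs-default.xml to confirm):

<property>
  <name>dfs.namenode.http-bind-host</name>
  <value>127.0.0.1</value>
</property>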
You must restart any Hadoop process after editing its configuration files.
If you use Apache Ambari or Cloudera Manager, it will prompt you to do this right away.
I would advise against doing this, though, since you need the UI to stay informed about the cluster's overall health if you are not using one of the management tools mentioned above.
I am trying to detect drive failures on DataNodes in a Hadoop cluster. The Cloudera Manager API doesn't have a specific endpoint for that; its calls only cover the NameNode or restarting services. Are there any suggestions here? Thanks a lot!
If you have access to the NameNode UI, the JMX page will give you this information. If you hit the JMX endpoint directly, you get a JSON-formatted page, which can be parsed easily.
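For example, something like this should work (the FSNamesystemState bean and attribute names are as I recall them on Hadoop 2.x; verify against your version):

curl -s 'http://namenode-host:50070/jmx?qry=Hadoop:service=NameNode,name=FSNamesystemState' | grep -E 'VolumeFailuresTotal|NumDeadDataNodes'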
We primarily use Hortonworks and haven't touched Cloudera in a long time, but I assume the same endpoint can be made available there somehow.
I'm trying to proxy individual Spark applications, which means I need a single UI per Spark application. To achieve that, I use the Spark reverse proxy feature: if my Spark master UI is running at http://localhost:8080, then when I click on an application name in that UI, I'm redirected to http://localhost:8080/proxy/{application-id}/jobs/, where application-id is the ID of the application I'm trying to access.

So far so good: I get the Spark job UI for that particular application along with some other tabs. But when I click on another tab, for instance "Environment", I'm redirected to http://localhost:8080/environment instead of http://localhost:8080/proxy/{application-id}/environment/.
This is the single line I added to my spark-defaults.conf file:
spark.ui.reverseProxy=true
I use Spark 2.1.0 in standalone mode and deployed some sample applications to reproduce the issue. Any clue? How can I make this proxy work without this issue? Thanks.
I had this problem.
Make sure that you correctly supply the spark.ui.reverseProxy and spark.ui.reverseProxyUrl properties to all masters, workers, and drivers.
In my case I used spark-submit (cluster mode) from a remote machine and forgot to update the local spark-defaults.conf on the machine I was submitting from.
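For example, the same two lines in spark-defaults.conf on every machine involved, including the one you submit from (the URL is illustrative; use whatever address your master UI is actually reachable at):

spark.ui.reverseProxy true
spark.ui.reverseProxyUrl http://localhost:8080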
I've used the Windows version of HDInsight before, and that has a tab where you can set the number of cores and amount of RAM per worker node for Zeppelin.
I followed this tutorial to get Zeppelin working:
https://azure.microsoft.com/en-us/documentation/articles/hdinsight-apache-spark-use-zeppelin-notebook/
The Linux version of HDInsight uses Ambari to manage the resources, but I can't seem to find a way to change the settings for Zeppelin.
Zeppelin is not selectable as a separate service in the list of services on the left, and it doesn't seem to be available when I choose 'Add Service' under Actions.
I tried editing the general Spark configs in Ambari using an override, adding the worker nodes to my new config group, and increasing the number of cores and RAM in custom spark-defaults. (Then I clicked save and restarted all affected services.)
I tried editing the spark settings using
vi /etc/spark/conf/spark-defaults.conf
on the headnode, but that wasn't picked up by Ambari.
The performance in Zeppelin seems to stay the same: the same query takes about 1000-1100 seconds every time.
Zeppelin is not an Ambari-managed service, so it shouldn't show up in Ambari. If you are committed to managing it that way, you may be able to get this plugin to work:
https://github.com/tzolov/zeppelin-ambari-plugin
To edit via SSH, you'll need to edit the zeppelin-env.sh file. First give yourself write permission:
sudo chmod u+w /usr/hdp/current/incubator-zeppelin/conf/zeppelin-env.sh
and then edit the Zeppelin configuration using
vi /usr/hdp/current/incubator-zeppelin/conf/zeppelin-env.sh
Here you can configure the ZEPPELIN_JAVA_OPTS variable, adding:
-Dspark.executor.memory=1024m -Dspark.executor.cores=16
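In zeppelin-env.sh that would look something like this (the values are illustrative):

export ZEPPELIN_JAVA_OPTS="-Dspark.executor.memory=1024m -Dspark.executor.cores=16"

Then restart Zeppelin so the change is picked up, e.g. with /usr/hdp/current/incubator-zeppelin/bin/zeppelin-daemon.sh restart.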
All that being said... any reason you can't just use a Jupyter notebook instead?
I am trying to do the following from my program:
val file = sc.textFile("cfs://ip/.....")
but I get a java.io.IOException: No FileSystem for scheme: cfs exception.
How should I modify core-site.xml, and where? Should it live on the DSE nodes, or should I add it as a resource in my jar?
I use Maven to build my jar and execute the jobs remotely, from a non-DSE node which does not have Cassandra, Spark, or anything similar installed. Other kinds of flows that don't involve CFS files work fine, so the jar is OK so far.
Thanks!
There is some info in the middle of this page about Spark using Hadoop for some operations, such as CFS access: http://www.datastax.com/documentation/datastax_enterprise/4.5/datastax_enterprise/spark/sparkCassProps.html
I heard about a problem using Hive from a non-DSE node that was solved by adding a property to core-site.xml. This is really a long shot since you're on Spark, but if you're willing to experiment, try adding the IP address of the remote machine to the core-site.xml file:
<property>
  <name>cassandra.host</name>
  <value>192.168.2.100</value>
</property>
Find the core-site.xml in /etc/dse/hadoop/conf/ or install_location/resources/hadoop/conf/, depending on the type of installation.
I assume you started the DSE cluster in hadoop and spark mode: http://www.datastax.com/documentation/datastax_enterprise/4.5/datastax_enterprise/spark/sparkStart.html
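If I'm remembering the DSE 4.5 tooling correctly, that means starting each node with both the Hadoop trackers and Spark enabled, along the lines of:

dse cassandra -t -k

(-t enables Hadoop, -k enables Spark; verify the flags against the linked docs.)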
Been quite some time.
The integration works as it does for any Hadoop client talking to a compatible Hadoop filesystem.
Copy core-site.xml (appending the contents of dse-core-default.xml to it) along with dse.yaml and cassandra.yaml, and then set up the proper dependencies on the classpath, e.g. dse.jar, cassandra-all, etc.
Note: this is not officially supported, so it's better to use another approach.
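For what it's worth, a rough sketch of launching from a non-DSE node under those assumptions (every path and the main class here are illustrative, not official):

java -cp "myjob.jar:/opt/dse-libs/dse.jar:/opt/dse-libs/cassandra-all.jar:/opt/dse-conf" com.example.MyJob

where /opt/dse-conf is a directory containing the copied core-site.xml, dse.yaml, and cassandra.yaml so they are picked up from the classpath.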
I am trying to set up Titan (server 0.4.4) with Cassandra embedded. My environment is Windows 8.1 x64 + Cygwin.
The install is in E:\titan-server-0.4.4.
I also need to be able to access this setup via Rexster.
For my configuration, I referred to https://github.com/thinkaurelius/titan/wiki/Using-Cassandra.
I've modified the graph section of the graph configuration
E:\titan-server-0.4.4\conf\rexster-cassandra-es.xml
to:
<graph>
  <graph-name>graph</graph-name>
  <graph-type>com.thinkaurelius.titan.tinkerpop.rexster.TitanGraphConfiguration</graph-type>
  <graph-read-only>false</graph-read-only>
  <properties>
    <auto-type>none</auto-type>
    <storage.batch-loading>true</storage.batch-loading>
    <storage.cassandra-config-dir>file:///E:\titan-server-0.4.4\conf\cassandra.yaml</storage.cassandra-config-dir>
    <storage.backend>embeddedcassandra</storage.backend>
    <storage.index.search.backend>elasticsearch</storage.index.search.backend>
    <storage.index.search.directory>../db/es</storage.index.search.directory>
    <storage.index.search.client-only>false</storage.index.search.client-only>
    <storage.index.search.local-mode>true</storage.index.search.local-mode>
  </properties>
  <extensions>
    <allows>
      <allow>tp:gremlin</allow>
    </allows>
  </extensions>
</graph>
(Note: <auto-type>none</auto-type> and <storage.batch-loading>true</storage.batch-loading> are there to allow bulk inserts; the whole point of embedding Cassandra is to improve insertion performance.)
However, when I tried starting the service with ./bin/titan.sh -v start, the start failed with:
org.apache.cassandra.exceptions.ConfigurationException:
localhost/127.0.0.1:7000 is in use by another process. Change
listen_address:storage_port in cassandra.yaml to values that do not
conflict with other services
at org.apache.cassandra.net.MessagingService.getServerSocket(MessagingService.java:439)
at org.apache.cassandra.net.MessagingService.listen(MessagingService.java:387)
at org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:549)
at org.apache.cassandra.service.StorageService.initServer(StorageService.java:514)
at org.apache.cassandra.service.StorageService.initServer(StorageService.java:411)
at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:278)
at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:366)
at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:409)
at com.thinkaurelius.titan.diskstorage.cassandra.utils.CassandraDaemonWrapper.start(CassandraDaemonWrapper.java:51)
at com.thinkaurelius.titan.diskstorage.cassandra.embedded.CassandraEmbeddedStoreManager.<init>(CassandraEmbeddedStoreManager.java:102)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
at com.thinkaurelius.titan.diskstorage.Backend.instantiate(Backend.java:344)
at com.thinkaurelius.titan.diskstorage.Backend.getImplementationClass(Backend.java:367)
at com.thinkaurelius.titan.diskstorage.Backend.getStorageManager(Backend.java:311)
at com.thinkaurelius.titan.diskstorage.Backend.<init>(Backend.java:121)
at com.thinkaurelius.titan.graphdb.configuration.GraphDatabaseConfiguration.getBackend(GraphDatabaseConfiguration.java:1173)
at com.thinkaurelius.titan.graphdb.database.StandardTitanGraph.<init>(StandardTitanGraph.java:75)
at com.thinkaurelius.titan.core.TitanFactory.open(TitanFactory.java:40)
at com.thinkaurelius.titan.tinkerpop.rexster.TitanGraphConfiguration.configureGraphInstance(TitanGraphConfiguration.java:25)
at com.tinkerpop.rexster.config.GraphConfigurationContainer.getGraphFromConfiguration(GraphConfigurationContainer.java:119)
at com.tinkerpop.rexster.config.GraphConfigurationContainer.<init>(GraphConfigurationContainer.java:54)
at com.tinkerpop.rexster.server.XmlRexsterApplication.reconfigure(XmlRexsterApplication.java:99)
at com.tinkerpop.rexster.server.XmlRexsterApplication.<init>(XmlRexsterApplication.java:47)
at com.tinkerpop.rexster.Application.<init>(Application.java:96)
at com.tinkerpop.rexster.Application.main(Application.java:188)
localhost/127.0.0.1:7000 is in use by another process. Change
listen_address:storage_port in cassandra.yaml to values that do not
conflict with other services
I tried modifying the ports in E:\titan-server-0.4.4\conf\cassandra.yaml, but after some investigation I realized that the port is actually taken by Cassandra itself; i.e., in this configuration, ./bin/titan.sh -v start tries to start multiple instances of Cassandra?!
I copied cassandra.yaml to cassandra2.yaml with different port settings and specified the path to cassandra2.yaml in the graph configuration XML.
After this, I was able to start Rexster with Titan and Cassandra embedded by running ./bin/titan.sh -v start.
However, I strongly believe that something is wrong with this setup. Besides, the system does not behave well: sometimes I cannot save a graph from Rexster's (web-based) Gremlin shell using g.commit() - the command succeeds, but nothing gets saved.
So what is the right way to run Titan with Cassandra embedded? What is the configuration supposed to look like?
If you use the Titan server via the shell or bat script, it will automatically start a Cassandra instance for you and attempt to connect to it over localhost.
When you then configure Titan to use embedded Cassandra, the two Cassandra instances naturally conflict: that is the port collision you saw.
Is there a particular reason you want to use embedded Cassandra? I'd strongly encourage you to try the out-of-the-box version first. Embedded Cassandra is mostly meant for low-latency applications and requires a solid understanding of the JVM.
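For comparison, the stock rexster-cassandra-es.xml talks to the Cassandra instance that titan.sh already starts, with a properties section roughly like this (a sketch from memory for the 0.4.x line; double-check against the file shipped with your install):

<properties>
  <storage.backend>cassandra</storage.backend>
  <storage.hostname>127.0.0.1</storage.hostname>
  <storage.index.search.backend>elasticsearch</storage.index.search.backend>
  <storage.index.search.directory>../db/es</storage.index.search.directory>
  <storage.index.search.client-only>false</storage.index.search.client-only>
  <storage.index.search.local-mode>true</storage.index.search.local-mode>
</properties>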
Good luck!