Access denied issue with Radoop: connecting RapidMiner to a Cloudera Quickstart VM (Linux)

I have stood up a Cloudera Quickstart VM on a PC with all services running, allocating 14 GB of RAM. On the desktop that the VM runs on (not in the VM) I installed RapidMiner in order to test out Radoop before it goes onto a production server. I used "Import from Cluster Manager" in RapidMiner, which retrieved the correct configuration from CDH. When I run the full test, I run into an Access Denied error when RapidMiner tests whether it can create a table on Hive.
Logs:
May 18, 2018 3:45:29 PM FINE: Hive query: SHOW TABLES
May 18, 2018 3:45:29 PM FINE: Hive query: set -v
May 18, 2018 3:45:32 PM INFO: Getting radoop_hive-v4.jar file from plugin jar...
May 18, 2018 3:45:32 PM INFO: Remote radoop_hive-v4.jar is up to date.
May 18, 2018 3:45:32 PM INFO: Getting radoop_hive-v4.jar file from plugin jar...
May 18, 2018 3:45:32 PM INFO: Remote radoop_hive-v4.jar is up to date.
May 18, 2018 3:45:32 PM FINE: Hive query: SHOW FUNCTIONS
May 18, 2018 3:45:33 PM INFO: Remote radoop-mr-8.2.1.jar is up to date.
May 18, 2018 3:45:33 PM FINE: Hive query: CREATE TABLE radoop__tmp_cloudera_1526672733223_qznjpj8 (a1 DOUBLE , a2 DOUBLE , a3 DOUBLE , a4 DOUBLE , id0 STRING COMMENT 'role:"id" ', label0 STRING COMMENT 'role:"label" ') ROW FORMAT DELIMITED FIELDS TERMINATED BY ';' STORED AS TEXTFILE
May 18, 2018 3:45:33 PM FINE: Hive query: LOAD DATA INPATH '/tmp/radoop/cloudera/tmp_1526672733088_x0ldwew/' OVERWRITE INTO TABLE radoop__tmp_cloudera_1526672733223_qznjpj8
May 18, 2018 3:45:33 PM FINE: Hive query failed: LOAD DATA INPATH '/tmp/radoop/cloudera/tmp_1526672733088_x0ldwew/' OVERWRITE INTO TABLE radoop__tmp_cloudera_1526672733223_qznjpj8
May 18, 2018 3:45:33 PM FINE: Error: java.sql.SQLException: Error while processing statement: FAILED: Execution Error, return code 20009 from org.apache.hadoop.hive.ql.exec.MoveTask. Access denied: Unable to move source hdfs://quickstart.cloudera:8020/tmp/radoop/cloudera/tmp_1526672733088_x0ldwew to destination hdfs://quickstart.cloudera:8020/user/hive/warehouse/radoop__tmp_cloudera_1526672733223_qznjpj8: Permission denied by sticky bit: user=hive, path="/tmp/radoop/cloudera/tmp_1526672733088_x0ldwew":cloudera:supergroup:drwxrwxrwx, parent="/tmp/radoop/cloudera":cloudera:supergroup:drwxrwxrwt
May 18, 2018 3:45:33 PM FINER: Connecting to Hive. JDBC url: radoop_hive_0.13.0jdbc:hive2://192.168.100.113:10000/default
May 18, 2018 3:45:33 PM FINER: Connecting to Hive took 108 ms.
May 18, 2018 3:45:33 PM FINE: Hive query failed again, error: java.sql.SQLException: Error while processing statement: FAILED: Execution Error, return code 20009 from org.apache.hadoop.hive.ql.exec.MoveTask. Access denied: Unable to move source hdfs://quickstart.cloudera:8020/tmp/radoop/cloudera/tmp_1526672733088_x0ldwew to destination hdfs://quickstart.cloudera:8020/user/hive/warehouse/radoop__tmp_cloudera_1526672733223_qznjpj8: Permission denied by sticky bit: user=hive, path="/tmp/radoop/cloudera/tmp_1526672733088_x0ldwew":cloudera:supergroup:drwxrwxrwx, parent="/tmp/radoop/cloudera":cloudera:supergroup:drwxrwxrwt
May 18, 2018 3:45:33 PM FINE: Error while processing statement: FAILED: Execution Error, return code 20009 from org.apache.hadoop.hive.ql.exec.MoveTask. Access denied: Unable to move source hdfs://quickstart.cloudera:8020/tmp/radoop/cloudera/tmp_1526672733088_x0ldwew to destination hdfs://quickstart.cloudera:8020/user/hive/warehouse/radoop__tmp_cloudera_1526672733223_qznjpj8: Permission denied by sticky bit: user=hive, path="/tmp/radoop/cloudera/tmp_1526672733088_x0ldwew":cloudera:supergroup:drwxrwxrwx, parent="/tmp/radoop/cloudera":cloudera:supergroup:drwxrwxrwt
May 18, 2018 3:45:33 PM FINER: Connecting to Hive. JDBC url: radoop_hive_0.13.0jdbc:hive2://192.168.100.113:10000/default
Maybe this is just a configuration change I can make in CDH, like modifying the Hive config, or some other way to allow RapidMiner to read and write.

Long story short: on the Cloudera Quickstart 5.13 VM you should use the same username for "Hadoop username" on the Global tab and for "Hive username" on the Hive tab.
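For context, the sticky bit (the trailing "t" in drwxrwxrwt) on /tmp/radoop/cloudera means only the owner of a file can move or delete it there, and the staging directory is owned by cloudera while HiveServer2 executes the MoveTask as hive. Matching the two usernames sidesteps the cross-user move. If you would rather keep separate users, here is a hedged sketch of how to inspect and reassign the staging directory from inside the VM (it assumes you can run commands as the hdfs superuser):
# Inspect the staging directory; the trailing "t" marks the sticky bit
hdfs dfs -ls -d /tmp/radoop/cloudera
# Hand the Radoop staging area to the hive user so the MoveTask can succeed
sudo -u hdfs hdfs dfs -chown -R hive /tmp/radoop/cloudera
Note that Radoop recreates these temporary directories on each run, so fixing the usernames in the connection settings is the more durable option.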

Related

How to use io.cucumber.core.cli.Main

Nov 15, 2022 6:52:42 PM cucumber.api.cli.Main run
WARNING: You are using deprecated Main class. Please use io.cucumber.core.cli.Main
Exception in thread "main" java.lang.NoSuchMethodError: 'java.util.stream.Stream io.cucumber.gherkin.GherkinParser.parse(java.lang.String, java.io.InputStream)'
at io.cucumber.core.gherkin.messages.GherkinMessagesFeatureParser.parse(GherkinMessagesFeatureParser.java:44)
at io.cucumber.core.feature.FeatureParser.parseResource(FeatureParser.java:46)
at java.base/java.util.function.BiFunction.lambda$andThen$0(BiFunction.java:70)
at io.cucumber.core.resource.ResourceScanner.lambda$processResource$1(ResourceScanner.java:79)
at io.cucumber.core.resource.PathScanner$ResourceFileVisitor.visitFile(PathScanner.java:75)
at io.cucumber.core.resource.PathScanner$ResourceFileVisitor.visitFile(PathScanner.java:60)
at java.base/java.nio.file.Files.walkFileTree(Files.java:2811)
at io.cucumber.core.resource.PathScanner.findResourcesForPath(PathScanner.java:53)
at io.cucumber.core.resource.PathScanner.findResourcesForUri(PathScanner.java:31)
at io.cucumber.core.resource.ResourceScanner.findResourcesForUri(ResourceScanner.java:61)
at io.cucumber.core.resource.ResourceScanner.scanForResourcesUri(ResourceScanner.java:134)
at io.cucumber.core.runtime.FeaturePathFeatureSupplier.loadFeatures(FeaturePathFeatureSupplier.java:62)
at io.cucumber.core.runtime.FeaturePathFeatureSupplier.get(FeaturePathFeatureSupplier.java:45)
at io.cucumber.core.runtime.Runtime.run(Runtime.java:80)
at io.cucumber.core.cli.Main.run(Main.java:87)
at cucumber.api.cli.Main.run(Main.java:30)
at cucumber.api.cli.Main.main(Main.java:15)
Can someone help with the above issue? I am not able to run the feature file in my project.
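The NoSuchMethodError almost always means mixed Cucumber versions on the classpath: a gherkin jar from one release sitting alongside a cucumber-core from another, so GherkinParser no longer has the signature the feature parser expects. A hedged sketch of how to diagnose this and run the non-deprecated entry point (the glue package com.example.steps and the paths are placeholders for your project):
# List every io.cucumber artifact and check they all share one version
mvn dependency:tree -Dincludes=io.cucumber
# After aligning the versions, run the current CLI entry point
# (the classpath must also include your test dependencies; shortened here)
java -cp "target/classes:target/test-classes" io.cucumber.core.cli.Main \
    --glue com.example.steps src/test/resources/features
Aligning all io.cucumber dependencies to a single version also removes the deprecation warning, since you stop invoking the old cucumber.api.cli.Main shim.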

How to generate graph from SQL Server database?

I'm trying to generate a graph of SQL Server tables. For some reason -command=graph with -outputformat=pdf is not working.
I was able to generate the HTML report when I ran with -command=schema, but -command=graph does not work at all.
_schemacrawler>schemacrawler.cmd -server=sqlserver -host=sqlserver -port=1433 -database=test -schemas=test\..* -user=test -password=******** -command=graph -noinfo -infolevel=standard -routines= -infolevel=maximum -tabletypes=TABLE -grepcolumns=.*\.TestID -only-matching -outputformat=pdf -outputfile=graph.pdf -loglevel=CONFIG
Output:
Jan 16, 2019 3:12:09 PM schemacrawler.tools.executable.SchemaCrawlerExecutable execute
INFO: Executing command <graph> using <schemacrawler.tools.executable.CommandDaisyChain>
Jan 16, 2019 3:12:09 PM schemacrawler.tools.text.operation.OperationCommand execute
INFO: Output format <pdf> not supported for command <graph>
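In SchemaCrawler versions of that era, the graph command renders through Graphviz, so the dot executable has to be installed and on the PATH; the log above suggests the graph command was never picked up and the request fell through to the plain operation engine. Note also that the original command passes -infolevel twice. A hedged sketch of what to check, plus a cleaned-up invocation (server, credentials, and the grep pattern copied from the question):
:: Confirm Graphviz is reachable; the graph command shells out to dot
dot -V
:: Re-run with a single -infolevel and no duplicate flags
schemacrawler.cmd -server=sqlserver -host=sqlserver -port=1433 -database=test -schemas=test\..* -user=test -password=******** -command=graph -noinfo -infolevel=maximum -tabletypes=TABLE -grepcolumns=.*\.TestID -only-matching -outputformat=pdf -outputfile=graph.pdf -loglevel=CONFIG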

RetryableHazelcastException when launching cluster using Hazelcast 3.7

I have a cluster with two members that use map loaders backed by a database.
Version 3.6.1 showed no issues during startup; however, when I upgraded to 3.7 I was presented with lots of exceptions like the one below, and the cluster failed to start!
Any ideas what it means?
Thanks
Sep 09, 2016 2:32:50 PM com.hazelcast.spi.impl.operationservice.impl.Invocation
14:32:50.613), waitTimeout=-1, callTimeout=60000, name=TRADE_SETTLEMENT}, tryCount=250, tryPauseMillis=500, invokeCount=240, callTimeoutMillis=60000, firstInvocationTimeMs=1473427838152, firstInvocationTime='2016-09-09 14:30:38.152', lastHeartbeatMillis=0, lastHeartbeatTime='1970-01-01 01:00:00.000', target=[xxx.co.uk]:5702, pendingResponse={VOID}, backupsAcksExpected=0, backupsAcksReceived=0, connection=null}, Reason: com.hazelcast.spi.exception.RetryableHazelcastException: Map TRADE_SETTLEMENT is still loading data from external store
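For what it's worth, RetryableHazelcastException here means the cluster is rejecting and retrying operations against TRADE_SETTLEMENT while the MapLoader's initial load from the database is still in progress: the dump shows the invocation has already retried 240 of its 250 allowed attempts within the 60-second call timeout. One knob to experiment with (a sketch, not a confirmed fix for the 3.7 behavior change) is giving invocations more headroom while loading completes:
# Raise the operation call timeout from the 60 s default; the value is
# illustrative and your-app.jar is a placeholder for your launcher
java -Dhazelcast.operation.call.timeout.millis=300000 -jar your-app.jar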

nslcd: errors in /var/log/messages for deleted users

I'm running a Linux LDAP environment with multiple servers on the domain. As we have added and removed users from our environment, I started getting these error messages:
Nov 9 05:07:25 ops1 nslcd[1377]: [35895e] lookup of user cn=Deleted1 User,ou=People,dc=company,dc=net failed: No such object
Nov 9 05:07:25 ops1 nslcd[1377]: [35895e] ldap_result() failed: No such object
Nov 9 05:07:25 ops1 nslcd[1377]: [35895e] lookup of user cn=Deleted2 User,ou=People,dc=company,dc=net failed: No such object
Nov 9 05:07:25 ops1 nslcd[1377]: [35895e] ldap_result() failed: No such object
The users showing up are only users whom I have deleted, and not just recently, either.
I found the answer to this in this bug:
http://tracker.clearfoundation.com/view.php?id=1752
Basically, I had a group that was still referencing those users as members even though I had deleted them. In my case, they were part of the uniqueMember list.
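To find which groups still carry the stale references, an ldapsearch along these lines (a sketch: the base DN is taken from the log above and an anonymous bind is assumed; add -D/-W for an authenticated bind) returns every group whose uniqueMember still points at a deleted user:
# List groups that still reference the deleted user's DN
ldapsearch -x -b "dc=company,dc=net" \
    "(uniqueMember=cn=Deleted1 User,ou=People,dc=company,dc=net)" dn
# Remove the stale uniqueMember values from each group returned,
# e.g. with ldapmodify or your directory management tool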

Unable to start Titan server with embedded Cassandra and Rexster

I am trying to run Titan with embedded Cassandra and Rexster. I downloaded the Titan distribution titan-all-0.3.2 and unpacked it on a Linux box. After unpacking, this is the command I ran:
$ ./bin/titan.sh config/titan-server-rexster.xml config/titan-server-cassandra.properties
This is what I see in the logs: after starting the RexPro services, it is unable to deploy and start Grizzly. Has anyone had this issue?
Exception stack trace:
13/10/18 14:51:31 INFO server.RexProRexsterServer: RexPro serving on port: [8184]
Oct 18, 2013 2:51:31 PM org.glassfish.grizzly.servlet.WebappContext deploy
INFO: Starting application [jersey] ...
Oct 18, 2013 2:51:31 PM org.glassfish.grizzly.servlet.WebappContext deploy
SEVERE: [jersey] Exception deploying application. See stack trace for details.
java.lang.RuntimeException: com.sun.jersey.api.container.ContainerException: No WebApplication provider is present
at org.glassfish.grizzly.servlet.WebappContext.initServlets(WebappContext.java:1479)
at org.glassfish.grizzly.servlet.WebappContext.deploy(WebappContext.java:265)
There were some packaging problems in some of the 0.3.2 zip files. You basically need to replace a jar file or two around Jersey to get it to work (or, I think, use the titan-cassandra distribution instead of titan-all).
You can read more about the issue and its solution in the linked reports, but the answer is:
You should be able to patch 0.3.2 by replacing this jar file in the Titan lib directory:
jersey-core-1.8.jar
with:
jersey-core-1.17.jar
(http://repo1.maven.org/maven2/com/sun/jersey/jersey-core/1.17/jersey-core-1.17.jar)
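A hedged sketch of the swap itself (assuming the distribution was unpacked to titan-all-0.3.2; adjust the path to your install):
cd titan-all-0.3.2/lib
# Drop the jar from the broken packaging
rm jersey-core-1.8.jar
# Fetch the replacement named above
wget http://repo1.maven.org/maven2/com/sun/jersey/jersey-core/1.17/jersey-core-1.17.jar
Then restart the server with the same bin/titan.sh command as before.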
