Hadoop Permission denied on Bash on Ubuntu on Windows - linux

I'm trying to install Hadoop in order to use the HDFS service. I'm doing it in Bash on Ubuntu on Windows (not a VM):
https://www.microsoft.com/fr-fr/store/p/ubuntu/9nblggh4msv6
The tutorial that I followed was this one (which is really similar to most tutorials for Hadoop installation):
https://www.youtube.com/watch?v=Nb1sinaTlmo
So everything goes well until I try to run start-dfs.sh, but then I get these error messages:
17/12/12 22:19:06 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [localhost]
localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-hduser-namenode-DESKTOP-QG5RB4T.out
localhost: nice: cannot set niceness: Permission denied
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hduser-datanode-DESKTOP-QG5RB4T.out
localhost: nice: cannot set niceness: Permission denied
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-hduser-secondarynamenode-DESKTOP-QG5RB4T.out
0.0.0.0: nice: cannot set niceness: Permission denied
17/12/12 22:19:25 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
I made sure that ssh localhost works. I also gave permissions to the Hadoop folder with chmod -R 755 hadoop/ and granted root privileges to the user I created for the Hadoop installation.
If you have any leads or solutions for this problem, it would be amazing.

I have the exact same problem and I can't figure it out.
It has been about two months and nothing has helped. I have asked many professionals and they can't figure it out either.
The only tip I received was to check the Windows SSH Broker and disable it, but as far as I can tell that can't be done, because it's an integral part of Windows. You can only partially disable it, which changes absolutely nothing.
I hope you can find the solution to this problem. I'm sorry I can't help you any further, but I would be very glad to know the solution as well.

Edit your /etc/passwd file, giving the {hadoop} user root permissions (replacing the 1001 values with 0).
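For reference, the change described above looks roughly like this (the hduser name and the 1001 values are illustrative; your entry will differ):

# Before: regular user with UID/GID 1001
hduser:x:1001:1001::/home/hduser:/bin/bash
# After: UID and GID replaced with 0, i.e. root
hduser:x:0:0::/home/hduser:/bin/bash

Be aware that this effectively makes the account equivalent to root, so treat it as a workaround rather than a proper fix.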

Maybe you can look at this answer: https://github.com/Linuxbrew/brew/issues/695#issuecomment-386121530.
It says "You can safely ignore this warning. It's an upstream bug in Microsoft Windows."

Related

Apache Pulsar Zookeeper: Unable to access datadir, exiting abnormally

I am using these steps to run Apache Pulsar on Docker: https://github.com/streamnative/tgip/blob/master/episodes/001/demo.md
I was able to follow these steps before to install and use Pulsar, but for some reason the directory I create now ends up write-protected, and the Pulsar ZooKeeper container exits with the following logs as soon as it is created:
ERROR org.apache.zookeeper.server.ZooKeeperServerMain - Unable to access datadir, exiting abnormally
org.apache.zookeeper.server.persistence.FileTxnSnapLog$DatadirException: Unable to create data directory data/zookeeper/version-2
at org.apache.zookeeper.server.persistence.FileTxnSnapLog.<init>(FileTxnSnapLog.java:136) ~[org.apache.zookeeper-zookeeper-3.6.3.jar:3.6.3]
at org.apache.zookeeper.server.ZooKeeperServerMain.runFromConfig(ZooKeeperServerMain.java:137) ~[org.apache.zookeeper-zookeeper-3.6.3.jar:3.6.3]
at org.apache.zookeeper.server.ZooKeeperServerMain.initializeAndRun(ZooKeeperServerMain.java:112) ~[org.apache.zookeeper-zookeeper-3.6.3.jar:3.6.3]
at org.apache.zookeeper.server.ZooKeeperServerMain.main(ZooKeeperServerMain.java:67) [org.apache.zookeeper-zookeeper-3.6.3.jar:3.6.3]
at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:140) [org.apache.zookeeper-zookeeper-3.6.3.jar:3.6.3]
at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:90) [org.apache.zookeeper-zookeeper-3.6.3.jar:3.6.3]
Unable to access datadir, exiting abnormally
23:36:15.223 [main] INFO org.apache.zookeeper.audit.ZKAuditProvider - ZooKeeper audit is disabled.
23:36:15.226 [main] ERROR org.apache.zookeeper.util.ServiceUtils - Exiting JVM with code 3
23:36:15.196 [PurgeTask] ERROR org.apache.zookeeper.server.DatadirCleanupManager - Error occurred while purging.
org.apache.zookeeper.server.persistence.FileTxnSnapLog$DatadirException: Unable to create data directory data/zookeeper/version-2
at org.apache.zookeeper.server.persistence.FileTxnSnapLog.<init>(FileTxnSnapLog.java:136) ~[org.apache.zookeeper-zookeeper-3.6.3.jar:3.6.3]
at org.apache.zookeeper.server.PurgeTxnLog.purge(PurgeTxnLog.java:80) ~[org.apache.zookeeper-zookeeper-3.6.3.jar:3.6.3]
at org.apache.zookeeper.server.DatadirCleanupManager$PurgeTask.run(DatadirCleanupManager.java:141) [org.apache.zookeeper-zookeeper-3.6.3.jar:3.6.3]
at java.util.TimerThread.mainLoop(Timer.java:556) [?:?]
at java.util.TimerThread.run(Timer.java:506) [?:?]
23:36:15.229 [PurgeTask] INFO org.apache.zookeeper.server.DatadirCleanupManager - Purge task completed
I have made sure that SELinux is disabled and tried changing permissions with chmod 777 data/, along with every other step I could find, but I am still unable to resolve this. Please help me with a possible resolution.
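For reference, the checks described above look roughly like this on the host (the paths follow the demo's data/zookeeper layout; whether the user inside the container can write there is the thing to verify):

# inspect ownership and mode of the bind-mounted directory
ls -ld data data/zookeeper
# the step mentioned above: open up permissions on the host side
chmod -R 777 data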

Running Derby as a server on Linux using JDK11

I am at my wits' end!
I have a minimal install of Ubuntu Server 18.04 and OpenJDK 11 (headless).
Downloaded to a local folder are the Java 9+ binaries for Derby (db-derby-10.15.2.0-bin).
Path and environment settings are all correct!
When I start the server with startNetworkServer -h 0.0.0.0, I get an error when doing a simple connect using the ij command-line tool:
ij> connect 'jdbc:derby://localhost:1527/dbname;create=true';
ERROR XJ041: DERBY SQL error: ERRORCODE: 40000, SQLSTATE: XJ041, SQLERRMC: Failed to create database 'dbname', see the next exception for details.::SQLSTATE: XBM01::SQLSTATE: XJ001
The derby.log file makes reference to:
java.security.AccessControlException: access denied ("java.lang.RuntimePermission" "getenv.SOURCE_DATE_EPOCH")
Looking further into this error, I learned that I somehow need a security policy. I found this website that seemed to be the answer to my problems: https://www.javacodegeeks.com/2020/04/apache-derby-database-jvm-security-policy.html
Following these pretty straightforward instructions, I get:
java.security.AccessControlException: access denied
org.apache.derby.shared.common.security.SystemPermission( "engine", "usederbyinternals" )
For the next person who has this strange problem (it seems to happen with some regularity), here's a simple workaround, copied from this FAQ page at Chalmers Institute of Technology:
Q: When we try to create a database in Derby and the database explorer in NetBeans, we get one or more of the following error(s):
An error occurred while creating the database:
java.sql.NonTransientConnectionException: DERBY SQL error: ERRORCODE:
40000, SQLSTATE: XJ041, SQLERRMC: ...
Caused by: java.security.AccessControlException: access denied
("java.lang.RuntimePermission" "getenv.SOURCE_DATE_EPOCH")
A: This is some kind of misconfiguration in the JVM with a very aggressive security policy that doesn't allow applications to fetch the system time (since the epoch). The solution is to edit ~/.java.policy or [java.home]/lib/security/java.policy and add the following:
grant {
    permission java.lang.RuntimePermission "getenv.SOURCE_DATE_EPOCH", "read";
};
If you are on Windows, you can read about where this policy file is supposed to be located here:
https://docs.oracle.com/javase/7/docs/technotes/guides/security/PolicyFiles.html
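As an aside (not part of the FAQ above), you can also point the JVM at a specific policy file explicitly when starting the network server. A minimal sketch, assuming DERBY_HOME points at the extracted db-derby-10.15.2.0-bin folder:

# start the Derby network server with an explicit security policy file
java -Djava.security.policy=$HOME/.java.policy \
     -classpath "$DERBY_HOME/lib/*" \
     org.apache.derby.drda.NetworkServerControl start -h 0.0.0.0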
Apache Derby is a database management system prepared for a multi-user environment. Therefore, when you execute the startNetworkServer -h 0.0.0.0 instruction, you are telling it by default to take certain security measures into account, and that is why it does not let you make an insecure connection such as ij> connect 'jdbc:derby://172.16.17.31:1527/BBDD_server;create=true';
You are connecting without specifying a username and password, so you should either connect specifying a username and password, or start the server without any security:
startNetworkServer -h 0.0.0.0 -noSecurityManager
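For the first option, the ij connection string carries the credentials as URL attributes; a minimal sketch (dbname, dbadmin, and secret are placeholders):

ij> connect 'jdbc:derby://localhost:1527/dbname;create=true;user=dbadmin;password=secret';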
More help:
https://db.apache.org/derby/docs/10.4/adminguide/tadminnetservopen.html
https://db.apache.org/derby/docs/10.4/adminguide/tadminnetservbasic.html

Documentum CMIS: too many open files error

We have deployed our application on RHEL 7 (migrating from RHEL 6), and after deployment we are seeing the following error in catalina.properties; because of this, my VM link goes down frequently. We are using Documentum CMIS 16.4 on Tomcat 8.5.
Following is the error's details:
27-Nov-2018 01:57:00.536 SEVERE [https-jsse-nio-0.0.0.0-12510-Acceptor-0] org.apache.tomcat.util.net.NioEndpoint$Acceptor.run Socket accept failed
java.io.IOException: Too many open files
at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:422)
at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:250)
at org.apache.tomcat.util.net.NioEndpoint$Acceptor.run(NioEndpoint.java:457)
at java.lang.Thread.run(Thread.java:748)
Here is what I have tried so far to solve this problem:
I increased the ulimit value from 1024 to 8192 for the specific user, rebooted the machine, and restarted the Tomcat service, but nothing changed. I also made changes to the file /etc/security/limits.d/20-nproc.conf. Kindly help here.
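For reference, the kind of entry described above looks roughly like this (the tomcat username is a placeholder for whichever user runs the service):

# /etc/security/limits.d/20-nproc.conf: raise the open-file limit for that user
tomcat    soft    nofile    8192
tomcat    hard    nofile    8192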
I don't have the privileges to add a comment, so I'm posting this as an answer. Try to find out how many files are open by using the command
lsof -p <pid> | wc -l
Listing the lsof output without the wc -l will show you which files are not getting closed.
You can also check the limits of a running process with
cat /proc/<pid>/limits
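To see which paths dominate that open-file count, a rough one-liner over the standard lsof output (the last column is the file name) is:

lsof -p <pid> | awk '{print $NF}' | sort | uniq -c | sort -rn | head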

Unable to set up my own Stanford CoreNLP server with error "Could not delete shutdown key file"

I am trying to set up my own Stanford CoreNLP server following the official guide. However, I am not able to start the server using the following command:
java -mx4g -cp "*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer -port 9000 -timeout 15000
I paste the error messages below:
my_server_name$ java -mx4g -cp "*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer -port 9000 -timeout 15000
[main] INFO CoreNLP - --- StanfordCoreNLPServer#main() called ---
[main] INFO CoreNLP - setting default constituency parser
[main] INFO CoreNLP - warning: cannot find edu/stanford/nlp/models/srparser/englishSR.ser.gz
[main] INFO CoreNLP - using: edu/stanford/nlp/models/lexparser/englishPCFG.ser.gz instead
[main] INFO CoreNLP - to use shift reduce parser download English models jar from:
[main] INFO CoreNLP - http://stanfordnlp.github.io/CoreNLP/download.html
Exception in thread "main" java.lang.IllegalStateException: Could not delete shutdown key file
at edu.stanford.nlp.pipeline.StanfordCoreNLPServer.<init>(StanfordCoreNLPServer.java:195)
at edu.stanford.nlp.pipeline.StanfordCoreNLPServer.main(StanfordCoreNLPServer.java:1323)
[Thread-0] INFO CoreNLP - CoreNLP Server is shutting down.
The main problem is the IllegalStateException: Could not delete shutdown key file. I wonder whether the cause of this problem is missing sudo access. The official guide doesn't explicitly state that this command needs sudo access.
I want to ask 1) whether the above command requires sudo access, and 2) if it doesn't, what the potential cause of my IllegalStateException could be.
Thanks.
PS: I am running on a server with Ubuntu 16.04.3 LTS.
This error happens when the shutdown key file already exists on your filesystem, you're starting a new CoreNLP server instance, and it can't delete the old shutdown key file. Are you running the server as two different users?
More generally, do you have permissions to the directory stored in java property java.io.tmpdir? This is, traditionally, /tmp on Linux machines. The shutdown key is stored in:
System.getProperty("java.io.tmpdir") + File.separator + "corenlp.shutdown"
So, for a Linux system:
/tmp/corenlp.shutdown
The error says that this file exists, and cannot be deleted by Java. You should check your permissions on this file, and that should help you debug what's wrong.
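A quick way to check, assuming the default /tmp location described above:

ls -l /tmp/corenlp.shutdown   # see which user owns the stale key file
rm /tmp/corenlp.shutdown      # remove it if it belongs to you or an old run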
An easy workaround, in the worst case, is to set the tmpdir yourself when starting the server. For example:
java -Djava.io.tmpdir=/path/to/tmp -mx4g -cp "*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer -port 9000

(bdutil) Unable to get hadoop/spark cluster working with a fresh install

I'm setting up a tiny cluster on GCE to play around with, but although the instances are created, some failures prevent it from working. I'm following the steps in https://cloud.google.com/hadoop/downloads
So far I'm using the latest (as of now) versions of gcloud (143.0.0) and bdutil (1.3.5), freshly installed.
./bdutil deploy -e extensions/spark/spark_env.sh
using debian-8 as the image (as bdutil still uses debian-7-backports).
At some point I got
Fri Feb 10 16:19:34 CET 2017: Command failed: wait ${SUBPROC} on line 326.
Fri Feb 10 16:19:34 CET 2017: Exit code of failed command: 1
full debug output is in https://gist.github.com/jlorper/4299a816fc0b140575ed70fe0da1f272
(project id and bucket names changed)
Instances are created, but Spark is not even installed. Digging a bit, I've managed to run the Spark installation and start Hadoop commands on the master after ssh-ing in. But it fails badly when starting the spark-shell:
17/02/10 15:53:20 INFO gcs.GoogleHadoopFileSystemBase: GHFS version: 1.4.5-hadoop1
17/02/10 15:53:20 INFO gcsio.FileSystemBackedDirectoryListCache: Creating '/hadoop_gcs_connector_metadata_cache' with createDirectories()...
java.lang.RuntimeException: java.lang.RuntimeException: java.nio.file.AccessDeniedException: /hadoop_gcs_connector_metadata_cache
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:522)
and I'm not able to import sparkSQL. From what I've read, everything should be started automatically.
Up to this point I'm a bit lost and don't know what else to do.
Am I missing any step? Are any of the commands faulty? Thanks in advance.
Update: solved
As pointed out in the accepted solution, I cloned the repo and the cluster was created without issues. When trying to start the spark-shell, though, it gave
java.lang.RuntimeException: java.io.IOException: GoogleHadoopFileSystem has been closed or not initialized.
That sounded to me like connectors were not initialized properly, so after running
./bdutil --env_var_files extensions/spark/spark_env.sh,bigquery_env.sh run_command_group install_connectors
it worked as expected.
The version of bdutil available on https://cloud.google.com/hadoop/downloads is a bit stale, and I'd instead recommend using the version of bdutil at head on GitHub: https://github.com/GoogleCloudPlatform/bdutil.
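A minimal sketch of that route, reusing the deploy command from the question:

git clone https://github.com/GoogleCloudPlatform/bdutil.git
cd bdutil
./bdutil deploy -e extensions/spark/spark_env.sh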
