I have installed CDH4 in pseudo-distributed mode on CentOS without any problems, but when I install it on Ubuntu 12.04 I get errors related to my JAVA_HOME environment variable.
I installed the JDK and have JAVA_HOME set correctly in /etc/profile.d and in ~/.bashrc using the following lines:
export JAVA_HOME=/usr/local/java/latest
export PATH=${JAVA_HOME}/bin:$PATH
I know it is redundant to define it in both places, but apparently setting it in /etc/profile.d alone wasn't working. From my user, when I run echo $JAVA_HOME I get:
/usr/local/java/latest
With sudo, when I run sudo -E echo $JAVA_HOME, I get:
/usr/local/java/latest
If you are wondering, I am specifying the -E option for sudo to preserve my environment.
So my real problem is when I try to start HDFS using the following command:
for x in `cd /etc/init.d ; ls hadoop-hdfs-*` ; do sudo service $x start ; done
I get the following error:
* Starting Hadoop datanode:
Error: JAVA_HOME is not set and could not be found.
Running the same command with the -E option gives me the same result. Has anyone had this problem?
Thanks in advance.
After some research, I found the answer to my question.
I am using CDH4 and have hadoop installed in pseudo-distributed mode.
To fix my JAVA_HOME problem, I created the hadoop-env.sh file in /etc/hadoop/conf.pseudo.mr1
The file contained the line:
export JAVA_HOME=/usr/local/java/latest
where /usr/local/java/latest is the path to my Java installation.
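For reference, a minimal sketch of that file plus a service restart so the daemons pick up the change (the restart loop mirrors the start loop above and assumes the init scripts support restart):
# /etc/hadoop/conf.pseudo.mr1/hadoop-env.sh
export JAVA_HOME=/usr/local/java/latest
# then restart the HDFS services so they read the new JAVA_HOME
for x in `cd /etc/init.d ; ls hadoop-hdfs-*` ; do sudo service $x restart ; done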
I've installed spark-2.3.0-bin-hadoop2.7 on Ubuntu and I don't think there is a problem with my Java path, yet when I run "spark-submit --version", "spark-shell", or "pyspark" I get the following error:
/usr/local/spark-2.3.0-bin-hadoop2.7/bin/spark-class: line 71: /usr/lib/jvm/java-8-openjdk-amd-64/jre/bin/java: No such file or directory
It seems "/bin/java" is problematic, but I'm not sure where to change the configuration. The spark-class file has the following lines:
if [ -n "${JAVA_HOME}" ]; then
RUNNER="${JAVA_HOME}/bin/java
Trying to open /etc/environment gives:
bash: /etc/environment: Permission denied
What I now have in ~/.bashrc (edited with gedit) is:
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd-64/jre
export PATH=$PATH:JAVA_HOME/bin
This is the current java setup that I have:
root@ubuntu:~# update-alternatives --config java
There is only one alternative in link group java (providing /usr/bin/java): /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java
Nothing to configure.
My ~/.bashrc also has the following:
export PATH=$PATH:/usr/share/scala-2.11.8/bin
export SPARK_HOME=/usr/local/spark-2.3.0-bin-hadoop2.7
export PATH=$PATH:$SPARK_HOME/bin
Please suggest which files I need to change and how to change them.
Java Home
Your JAVA_HOME should be set to your JDK, not the JRE, so
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd-64/jre
should be
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
(note the directory is amd64, not amd-64, as your update-alternatives output shows)
Here is the Oracle doc on JAVA_HOME (which should apply to OpenJDK as well):
https://docs.oracle.com/cd/E19182-01/820-7851/inst_cli_jdk_javahome_t/
Spark Environmental Variables
JAVA_HOME should also be set in $SPARK_HOME/conf/spark-env.sh:
https://spark.apache.org/docs/latest/configuration.html#environment-variables
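For example, a minimal sketch, assuming the JDK directory shown by your update-alternatives output (adjust the path if yours differs):
# create conf/spark-env.sh from the template shipped with Spark, then set JAVA_HOME in it
cp $SPARK_HOME/conf/spark-env.sh.template $SPARK_HOME/conf/spark-env.sh
echo 'export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64' >> $SPARK_HOME/conf/spark-env.sh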
I am currently working on this: https://mozilla.github.io/ichnaea/install/devel.html#prerequisites and have been a bit stuck, since the guide recommends working on Linux/Mac but I am limited to Windows. I tried getting the steps to work with Git Bash, PowerShell, and Command Prompt, but with no success. I am currently trying Cygwin to see if it will work; however, I am running into some issues.
When I run docker-machine env default I see the output:
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="..."
export DOCKER_CERT_PATH="..."
export DOCKER_MACHINE_NAME="default"
export COMPOSE_CONVERT_WINDOWS_PATHS="true"
# Run this command to configure your shell:
# eval $("C:\Program Files\Docker Toolbox\docker-machine.exe" env default)
When I attempt to run the command to configure the shell, eval $("C:\Program Files\Docker Toolbox\docker-machine.exe" env default), I get an error saying: -bash: C:\Program Files\Docker Toolbox\docker-machine.exe: command not found.
This is different from Git Bash, PowerShell, and CMD: when I ran the respective commands in those shells there were no issues at all, and I was able to move on to the next steps. Is there any reason why I am getting this "command not found" error in Cygwin, and what should I do to fix it?
Thanks for reading
Add a cygpath invocation to convert the executable's Windows path into one Cygwin can run:
eval "$("$(cygpath -u "C:\Program Files\Docker Toolbox\docker-machine.exe")" env default)"
I am trying to check my installation of hadoop. I did create the environment variables and when I call printenv, I do see my HADOOP_HOME and PATH variables printed and correct (home/hadoop and HADOOP_HOME/bin respectively).
If I go to home/hadoop in the terminal and call ls, I see the hadoop file there. If I try to run it by calling hadoop, it still tells me command not found.
First day on Linux, so there may be a stupid answer to this problem.
Your current working directory is probably not part of your PATH; that is the default on Linux systems.
If you are in the same directory as your hadoop file, run the command with a relative path, like: ./hadoop
HOME DIRECTORY:
/home/hadoop is a home directory created by Linux, similar to Documents and Settings on Windows.
Open your terminal and type:
ls -l /home/hadoop
Post your result for this command: ls -l /home/hadoop
SETTING GLOBAL PATH:
Go to /home/hadoop and open .bashrc in text editor.
Add these lines at the end:
export HADOOP_HOME=/path/to/your/hadoop/installation/folder
export PATH=$PATH:$HADOOP_HOME/bin
Save and exit. Now type this in your terminal:
echo $PATH
echo $HADOOP_HOME
If these commands show the correct directories, try the hadoop command. It should work.
Post your results for these commands: echo $PATH and echo $HADOOP_HOME
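A quick way to apply and verify the change in the current shell, assuming the export lines above (or open a new terminal instead of sourcing):
source ~/.bashrc        # reload the updated file
echo $HADOOP_HOME       # should print your hadoop installation folder
echo $PATH              # should now include $HADOOP_HOME/bin
hadoop version          # should print the Hadoop version banner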
Go to the Hadoop-x.x.x/bin folder,
check that the hadoop executable is there, and
run ./hadoop version
You must run the "hadoop version" command.
If the hadoop setup is fine, then you should see the following result:
Hadoop 2.4.1
Subversion https://svn.apache.org/repos/asf/hadoop/common -r 1529768
Compiled by hortonmu on 2013-10-07T06:28Z
Compiled with protoc 2.5.0
From source with checksum 79e53ce7994d1628b240f09af91e1af4
For an installation guide you can refer to:
Hadoop Environment Setup
Link to my Quora answer: https://qr.ae/TWngHN
Hope this helps.
Thanks
Enter which hadoop in your terminal. If you see a path as output, hadoop is set in your system's PATH. If you get something similar to this,
/usr/bin/which: no hadoop in (/usr/local/hadoop.... you might not have set everything up properly. Modify /etc/bash.bashrc with
export HADOOP_HOME=/path/to/hadoop/folder and add it to PATH using export PATH=$PATH:$HADOOP_HOME/bin
You may be editing the wrong ~/.bashrc file.
Open a terminal, run sudo gedit ~/.bashrc, and add these lines:
export HADOOP_HOME=/usr/local/hadoop
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
Note: make sure you edit ~/.bashrc and not ~/.bashrc.sh; the two behave differently on newer OS versions.
Please suggest a solution to this issue. When I give the command:
sqlplus /nolog
I get the following error:
sqlplus: error while loading shared libraries:
libsqlplus.so: cannot open shared object file: No such file or directory
The minimum configuration to properly run sqlplus from the shell is to set ORACLE_HOME and LD_LIBRARY_PATH. For ease of use, you might want to set the PATH accordingly too.
Assuming you have unzipped the required archives in /opt/oracle/instantclient_11_1:
$ export ORACLE_HOME=/opt/oracle/instantclient_11_1
$ export LD_LIBRARY_PATH="$ORACLE_HOME"
$ export PATH="$ORACLE_HOME:$PATH"
$ sqlplus
SQL*Plus: Release 11.1.0.7.0 - Production on Wed Dec 31 14:06:06 2014
...
sudo sh -c "echo /usr/lib/oracle/12.2/client64/lib > /etc/ld.so.conf.d/oracle-instantclient.conf";sudo ldconfig
from https://help.ubuntu.com/community/Oracle%20Instant%20Client
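To confirm the dynamic linker now finds the library, you can query the ldconfig cache (the path matches the 12.2 client used above; adjust for your version):
ldconfig -p | grep libsqlplus
# should list libsqlplus.so pointing into /usr/lib/oracle/12.2/client64/lib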
I solved this error by setting
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$ORACLE_HOME/lib:$ORACLE_HOME
Yes, not only $ORACLE_HOME/lib but $ORACLE_HOME too.
You should already have all needed variables in /etc/profile.d/oracle.sh. Make sure you source it:
$ source /etc/profile.d/oracle.sh
The file's content looks like:
ORACLE_HOME=/usr/lib/oracle/11.2/client64
PATH=$ORACLE_HOME/bin:$PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib
export ORACLE_HOME
export LD_LIBRARY_PATH
export PATH
If you don't have it, create it and source it.
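If you need to create it, a minimal sketch using the same 11.2 client64 paths as above (adjust for your version):
sudo tee /etc/profile.d/oracle.sh > /dev/null <<'EOF'
ORACLE_HOME=/usr/lib/oracle/11.2/client64
PATH=$ORACLE_HOME/bin:$PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib
export ORACLE_HOME PATH LD_LIBRARY_PATH
EOF
source /etc/profile.d/oracle.sh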
I know it's an old thread, but I ran into this once again with Oracle 12c, even though LD_LIBRARY_PATH was set correctly.
I used strace to see what exactly it was looking for and why it failed:
strace sqlplus /nolog
sqlplus tries to load this lib from different dirs; some didn't exist in my install. Then it tried the one I already had on my LD_LIBRARY_PATH:
open("/oracle/product/12.1.0/db_1/lib/libsqlplus.so", O_RDONLY) = -1 EACCES (Permission denied)
So in my case the lib had 740 permissions, and since my user wasn't the owner and didn't have the oracle group assigned, I couldn't read it. A simple chmod +r helped.
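A sketch of that check and fix, using the library path from the strace output above (adjust to your own install):
ls -l /oracle/product/12.1.0/db_1/lib/libsqlplus.so    # 740 shows up as -rwxr-----
sudo chmod +r /oracle/product/12.1.0/db_1/lib/libsqlplus.so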
On Ubuntu Server 20.04, using Instant Client version 19.10.0.0, I used alien to install the rpm packages. I got this error when I used only the -i option; however, when I added -c I did not have this issue. From the man page for alien:
-c, --scripts
    Try to convert the scripts that are meant to be run when the package is installed and removed. Use this with caution, because these scripts might be designed to work on a system unlike your own, and could cause problems. It is recommended that you examine the scripts by hand and check to see what they do before using this option.
So it seems the correct configuration (in 19c) or the environment variables (in earlier versions) are set by these scripts, which are not generated unless you run alien like this (thanks @Christopher Jones for correcting me on this):
sudo alien -i -c BasicPackage.rpm
sudo alien -i -c SqlPlus.rpm
PERMISSIONS:
I want to stress the importance of permissions for "sqlplus".
For any "Other" UNIX user other than the Owner/Group to be able to run sqlplus and access an ORACLE database , read/execute permissions are required (rx) for these 4 directories :
$ORACLE_HOME/bin , $ORACLE_HOME/lib, $ORACLE_HOME/oracore, $ORACLE_HOME/sqlplus
ENVIRONMENT: Set these properly (a consolidated example follows this list):
A. ORACLE_HOME
(example: ORACLE_HOME=/u01/app/oranpgm/product/12.1.0/PRMNRDEV/)
B. LD_LIBRARY_PATH
(example: LD_LIBRARY_PATH=/u01/app/oranpgm/product/12.1.0/PRMNRDEV/lib)
C. ORACLE_SID
D. PATH
export PATH="$ORACLE_HOME/bin:$PATH"
You can try:
# echo "/usr/lib/oracle/12.2/client64/lib" > /etc/ld.so.conf.d/oracle.conf
# ldconfig
This problem occurs because the Oracle Instant Client's shared libraries are not configured for the dynamic linker.
Could you please check whether LD_LIBRARY_PATH points to the Oracle libs?
Don't forget
apt-get install libaio1 libaio-dev
or
yum install libaio
On Oracle's own Linux (Version 7.7, PRETTY_NAME="Oracle Linux Server 7.7"
in /etc/os-release), if you installed the 18.3 client libraries with
sudo yum install oracle-instantclient18.3-basic.x86_64
sudo yum install oracle-instantclient18.3-sqlplus.x86_64
then you need to put the following in your .bash_profile:
export ORACLE_HOME=/usr/lib/oracle/18.3/client64
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$ORACLE_HOME/lib:$ORACLE_HOME
in order to be able to invoke the SQL*Plus client, which, incidentally, is called sqlplus64 on this platform.
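A quick check after reloading the profile (a sketch, assuming the paths above):
source ~/.bash_profile
sqlplus64 -V        # should print the SQL*Plus release banner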
This worked for me: sudo dnf install libnsl
It means you didn't set the ORACLE_HOME and ORACLE_SID variables. Set a proper working $ORACLE_HOME and $ORACLE_SID, and after that execute the sqlplus /nolog command. It will work.
@laryx-decidua: I think you are only seeing the 18.x Instant Client releases that are in the ol7_oci_included repo. The 19.x Instant Client RPMs, at the moment, are only in the ol7_oracle_instantclient repo. The easiest way to access that repo is:
yum install oracle-release-el7
Everything is okay (the OpenGTS dir and the Tomcat dir have permission 777), but I am getting this error again and again. Why?
When executing sudo ant all I get this error:
BUILD FAILED
/usr/local/OpenGTS_2.4.5/build.xml:111: CATALINA_HOME environment variable has not been defined.
(make sure CATALINA_HOME is defined and exported to the list of environment variables)
I get this message when starting Tomcat:
sudo ./startup.sh
Using CATALINA_BASE: /usr/local/apache-tomcat-6.0.36
Using CATALINA_HOME: /usr/local/apache-tomcat-6.0.36
Using CATALINA_TMPDIR: /usr/local/apache-tomcat-6.0.36/temp
Using JRE_HOME: /usr
Using CLASSPATH: /usr/local/apache-tomcat-6.0.36/bin/bootstrap.jar
If anyone has the solution, please tell me how to fix this error.
Try to configure OpenGTS in your home directory (rather than /usr/local/), and use the ant all command (not sudo ant all).
Good luck. :)
First run this command:
echo $CATALINA_HOME
It should give you the path to your Tomcat directory, which I'm assuming is /usr/local/apache-tomcat-6.0.36. If you see a different path, or if the output is blank, try running this command:
export CATALINA_HOME=/usr/local/apache-tomcat-6.0.36
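To make the setting persist across sessions, you could also append it to your shell startup file (a sketch, assuming bash):
echo 'export CATALINA_HOME=/usr/local/apache-tomcat-6.0.36' >> ~/.bashrc
source ~/.bashrc
echo $CATALINA_HOME     # should now print the tomcat directory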
If you read the OpenGTS Configuration Manual, it covers the CATALINA_HOME environment variable for Linux in section 2.4a. There are other environment variables too that you must set to install OpenGTS successfully (all mentioned in the manual).
To solve this problem, make sure all files and directories under OpenGTS 2.6.x are owned by the logged-in user (user:group), then run the ant all command.
Please note that sudo ant all doesn't work.
Use these commands to change the ownership of the OpenGTS files/directories:
/usr/local/OpenGTS2.6.2> cd ..
sudo chown -R ranjan:ranjan OpenGTS2.6.2
(change ranjan:ranjan to your username:groupname)
/usr/local/OpenGTS2.6.2> ant all
It will work.
Try installing tomcat7 rather than tomcat6 from the command line:
apt-get update
apt-get install tomcat7
Then configure CATALINA_HOME with:
export CATALINA_HOME=/usr/share/tomcat7