Where is SCALA_HOME on Ubuntu?

I installed Scala on Ubuntu using the following:
sudo apt-get install scala
~$ which scala
/usr/bin/scala
~$ whereis scala
scala: /usr/bin/scala /usr/bin/X11/scala /usr/share/man/man1/scala.1.gz
~$ scala -version
Scala code runner version 2.9.1 -- Copyright 2002-2011, LAMP/EPFL
My question is: what should I put in the SCALA_HOME variable? /usr/bin?

Today I installed Scala using "apt-get install scala" and confirmed that the Scala jar files are located in /usr/share/java.
You should be able to set SCALA_HOME to /usr/share/java and have it all work. I assume you want to use NetBeans, so you will need to set SCALA_HOME in your .profile (or .bash_profile) rather than in your .bashrc, because NetBeans won't see any variables set in your .bashrc unless you start it from the command line.
$ find / -maxdepth 6 -iname \*scala\*jar 2> /dev/null
/usr/share/java/scala-dbc.jar
/usr/share/java/scala-partest.jar
/usr/share/java/scala-partest-2.9.1.jar
/usr/share/java/scala-dbc-2.9.1.jar
/usr/share/java/scalacheck.jar
/usr/share/java/scalap.jar
/usr/share/java/scala-library-2.9.1.jar
/usr/share/java/scala-compiler-2.9.1.jar
/usr/share/java/scala-library.jar
/usr/share/java/scalacheck-2.9.1.jar
/usr/share/java/scala-compiler.jar
/usr/share/java/scala-swing-2.9.1.jar
/usr/share/java/scalap-2.9.1.jar
/usr/share/java/scala-swing.jar
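A minimal sketch of the .profile entry, assuming the apt layout shown above:
# In ~/.profile (or ~/.bash_profile), not ~/.bashrc, so that NetBeans can also see it
export SCALA_HOME=/usr/share/java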

For me it's /usr/share/java/scala.
I determined this by running:
dpkg -L scala
This assumes you installed Scala using APT.

As of today I couldn't find an easy (and reliable) way of setting this.
As per Alex (in the comment above), I installed from the tarball (downloaded from scala-lang.org) into /location/of/scala/untar.
Then I set export SCALA_HOME=/location/of/scala/untar in my .bashrc.
Everything works for now!

I had the same issue, so I did some digging.
This assumes you installed with sudo dpkg -i scala-2.11.4.deb after downloading the Debian package.
SCALA_HOME should be /usr/share/scala. This is based on the following:
/usr/bin/scala is a symbolic link to /usr/share/scala/bin/scala
/usr/bin/X11/scala is also a symbolic link to /usr/share/scala/bin/scala
The way I see it, the scala package is installed in /usr/share/scala, which should be your SCALA_HOME.
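A quick way to confirm the link targets (readlink is standard coreutils; the output shown assumes the .deb layout described above):
$ readlink -f /usr/bin/scala
/usr/share/scala/bin/scala
$ export SCALA_HOME=/usr/share/scala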

I installed the untarred Scala into /usr/local/share, as suggested on the Scala download site.
In my .bashrc, I placed the following line:
export PATH="/usr/local/share/scala-2.11.8/bin:$PATH"
It works great from the terminal regardless of what directory I'm in.

If you have installed Scala using
$ apt-get install scala
then, to see where it was installed after a successful install, run
which scala
This command shows you the path to the scala binary. Change into that directory and run
pwd
Now export the SCALA_HOME path in either of these environment files:
~/.bashrc
or
/etc/profile
export SCALA_HOME=<output of pwd>
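Putting it together, a sketch of that procedure, assuming scala resolves to /usr/bin/scala as in the question:
$ which scala
/usr/bin/scala
$ cd $(dirname $(which scala)) && pwd
/usr/bin
# then, in ~/.bashrc or /etc/profile:
export SCALA_HOME=/usr/bin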

SCALA_HOME should be the directory where you installed Scala.
For example, the name of this directory may be scala-2.9.2.

Related

start-all.sh, start-dfs.sh command not found

I am using Ubuntu 16.04 LTS and installed Hadoop 2.7.2. The output of
hadoop version
is
Hadoop 2.7.2
Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r b165c4fe8a74265c792ce23f546c64604acf0e41
Compiled by jenkins on 2016-01-26T00:08Z
Compiled with protoc 2.5.0
From source with checksum d0fda26633fa762bff87ec759ebe689c
This command was run using /usr/local/hadoop-2.7.2/share/hadoop/common/hadoop-common-2.7.2.jar
and when I run
whereis hadoop
it gives the output
hadoop: /usr/local/hadoop /usr/local/hadoop-2.7.2/bin/hadoop.cmd /usr/local/hadoop-2.7.2/bin/hadoop
But when I run the command
start-all.sh
it says command not found.
Also, when I run
start-dfs.sh
it gives the output command not found.
I am able to run these commands when I navigate to the Hadoop directory, but I want to run them without navigating into the Hadoop directory.
Your problem is that bash doesn't know where to look for ./start-all.sh.
You can fix this by opening $HOME/.bashrc and adding a line that looks like this:
PATH=$PATH:/usr/local/hadoop/sbin
This tells bash that it should look in '/usr/local/hadoop/sbin' for start-all.sh.
Note:
Changes to $HOME/.bashrc will not take effect in any terminals that are currently open.
If you need the changes to take effect in a terminal that is currently open, run
source $HOME/.bashrc
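A compact sketch of the fix (the sbin path follows the suggestion above; adjust it to your install):
# Append the Hadoop sbin directory to PATH and apply it in the current shell
echo 'PATH=$PATH:/usr/local/hadoop/sbin' >> $HOME/.bashrc
source $HOME/.bashrc
start-dfs.sh   # should now resolve without cd-ing into the Hadoop directory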
I had to search for it with find.
find / -iname start-all.sh 2> /dev/null
It found:
/usr/local/sbin/start-all.sh
/usr/local/Cellar/hadoop/3.3.4/libexec/sbin/start-all.sh
/usr/local/Cellar/hadoop/3.3.4/sbin/start-all.sh
So, in addition to the previous answer, the variable in $HOME/.bashrc looks like this:
PATH=$PATH:/usr/local/sbin/
Note: I'm not sure which of the found copies should be added to PATH.
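To check which copy the shell actually picks up once PATH is set, a quick check (hypothetical output):
$ type start-all.sh
start-all.sh is /usr/local/sbin/start-all.sh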

Installing Apache Spark on linux

I am installing Apache Spark on Linux. I already have Java, Scala, and Spark downloaded, and they are all in the Downloads folder inside the home folder, with the path /home/alex/Downloads/X where X = scala, java, spark; literally, that's what the folders are called.
I got Scala to work, but when I try to run Spark by typing ./bin/spark-shell it says:
/home/alex/Downloads/spark/bin/saprk-class: line 100: /usr/bin/java/bin/java: Not a directory
I have already included the file path by editing the bashrc with sudo gedit ~/.bashrc:
# JAVA
export JAVA_HOME=/home/alex/Downloads/java
export PATH=$PATH:$JAVA_HOME/bin
# scala
export SCALA_HOME=/home/alex/Downloads/scala
export PATH=$PATH:$SCALA_HOME/bin
# spark
export SPARK_HOME=/home/alex/Downloads/spark
export PATH=$PATH:$SPARK_HOME/bin
When I try to type sbt/sbt package in the spark folder, it also says no such file or directory. What should I do from here?
It seems you have a few issues. Your JAVA_HOME is not pointing to a directory containing Java, and when you run sbt in Spark you should run ./sbt/sbt (or in newer versions ./build/sbt). While you can download Java and Scala by hand, you may find that your system packages are sufficient (make sure to get JDK 7 or later).
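A sketch of repointing JAVA_HOME, assuming an OpenJDK system package (the jvm path below is an example; find yours with readlink):
# Find the real JDK root behind the java binary on PATH (output is illustrative)
$ readlink -f $(which java)
/usr/lib/jvm/java-7-openjdk-amd64/jre/bin/java
# Point JAVA_HOME at the JDK root, not at a path ending in /bin/java
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64
export PATH=$PATH:$JAVA_HOME/bin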
Furthermore, after switching to the system packages as Holden points out, on Linux you may use the whereis command to confirm the right paths.
Finally, the following link may prove useful:
http://www.tutorialspoint.com/apache_spark/apache_spark_installation.htm
Hope this helps.
Note: It looks like there may be a configuration issue, a misspelling in the script name:
/home/alex/Downloads/spark/bin/saprk-class: line 100: /usr/bin/java/bin/java: Not a directory
saprk-class
That could be just a typo in the error message, but it's worth checking whether the file is called spark-class elsewhere to see if the misspelling is causing related issues.
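A quick way to look for the misspelled name (a sketch; the directory is the one from the error message):
grep -rn "saprk" /home/alex/Downloads/spark/bin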

Ubuntu: hadoop command not found

I am trying to check my installation of Hadoop. I did create the environment variables, and when I call printenv I do see my HADOOP_HOME and PATH variables printed and correct (home/hadoop and HADOOP_HOME/bin respectively).
If I go to home/hadoop in the terminal and call ls, I see the hadoop file there. If I try to run it by calling hadoop, it still tells me command not found.
First day on Linux, so there may be a stupid answer to this problem.
Your current working directory is probably not part of your PATH.
That is the default on Linux systems.
If you are in the same directory as your hadoop file, run the command with a relative path, like: ./hadoop
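For example (hypothetical session):
$ cd /home/hadoop
$ hadoop          # not found: the current directory is not on PATH
$ ./hadoop        # an explicit relative path works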
HOME DIRECTORY:
/home/hadoop is a home directory created by Linux, similar to Documents and Settings in Windows.
Open your terminal and type:
ls -l /home/hadoop
Post your result for this command: ls -l /home/hadoop
SETTING GLOBAL PATH:
Go to /home/hadoop and open .bashrc in text editor.
Add these lines at the end:
export HADOOP_HOME=/path/to/your/hadoop/installation/folder
export PATH=$PATH:$HADOOP_HOME/bin
Save and exit. Now type this in your terminal:
echo $PATH
echo $HADOOP_HOME
If these commands show the correct directories, try the hadoop command. It should work.
Post your results for these commands: echo $PATH and echo $HADOOP_HOME
Go to the Hadoop-x.x.x/bin folder,
check for the hadoop file there, and
run ./hadoop version
You must run the "hadoop version" command.
If the Hadoop setup is fine, you should see a result like the following:
Hadoop 2.4.1
Subversion https://svn.apache.org/repos/asf/hadoop/common -r 1529768
Compiled by hortonmu on 2013-10-07T06:28Z
Compiled with protoc 2.5.0
From source with checksum 79e53ce7994d1628b240f09af91e1af4
For an installation-related guide you can refer here:
Hadoop Environment Setup
Link to my Quora answer: https://qr.ae/TWngHN
Hope this helps.
Thanks
Enter which hadoop in your terminal. If you see a path as output, hadoop is set in your system's PATH. If you get something similar to
usr/bin/which: no hadoop in (/usr/local/hadoop....
you might not have set everything up properly. Modify /etc/bash.bashrc with
export HADOOP_HOME=/path/to/hadoop/folder
and add it to PATH using
export PATH=$PATH:$HADOOP_HOME/bin
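After editing, a quick check that the lookup now succeeds (hypothetical session; paths are placeholders):
$ source /etc/bash.bashrc
$ which hadoop
/path/to/hadoop/folder/bin/hadoop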
You may be editing the wrong ~/.bashrc file.
Open a terminal, run sudo gedit ~/.bashrc, and add these commands:
export HADOOP_HOME=/usr/local/hadoop
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
Note: do not use sudo gedit ~/.bashrc.sh; the two files behave differently on newer OS versions.

How to install the SQL*Plus client in Linux

I am working with AWS services. I have an EC2 (CentOS) instance. I need to configure the SQL*Plus client on this CentOS machine.
The server I want to connect to is remote. The server version is oracle-se (11.2.0.2).
How can I get the client installed on the CentOS machine?
Go to Oracle Linux x86-64 instant clients download page
Download the matching client
oracle-instantclient11.2-basic-11.2.0.2.0.x86_64.rpm
oracle-instantclient11.2-sqlplus-11.2.0.2.0.x86_64.rpm
Install
rpm -ivh oracle-instantclient11.2-basic-11.2.0.2.0.x86_64.rpm
rpm -ivh oracle-instantclient11.2-sqlplus-11.2.0.2.0.x86_64.rpm
Set environment variables in your ~/.bash_profile
ORACLE_HOME=/usr/lib/oracle/11.2/client64
PATH=$ORACLE_HOME/bin:$PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib
export ORACLE_HOME
export LD_LIBRARY_PATH
export PATH
Reload your .bash_profile by simply typing source ~/.bash_profile (suggested by jbass), or log out and log in again.
Now you're ready to use SQL*Plus and connect to your server. Type:
sqlplus "username/pass@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.2.1)(PORT=1521))(CONNECT_DATA=(SID=YOURSID)))"
The solution by @ChamaraKeragala is good, but it is unnecessary to log out and log in. Instead, type:
source ~/.bash_profile
For everyone still getting the following error:
sqlplus command not found
The original post refers to a set of environment variables, the most important of which is ORACLE_HOME. This is the parent directory where the oracle binaries get installed.
Depending on what version of oracle you downloaded you'll have to change the ORACLE_HOME accordingly. For example, the original question's ORACLE_HOME was set to:
ORACLE_HOME=/usr/lib/oracle/11.2/client64
My version of Oracle happens to be 12.1, so my ORACLE_HOME is set to:
ORACLE_HOME=/usr/lib/oracle/12.1/client64
If you are unsure of the version that you downloaded, you can:
cd /usr/lib/oracle after the installation and find the version there, or
look at the RPM file name, oracle-instantclient12.1, where the 12.1 part refers to the version number.
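For example (hypothetical output; the directory name depends on the installed version):
$ ls /usr/lib/oracle
12.1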
There's a good blog post [1] on this subject: setting up the Oracle client on Ubuntu with minimum effort. The following are the main steps for setting up the client.
In my case, I was installing the rpm files using the alien package.
Install alien and related packages
sudo apt-get install alien
Install oracle client packages using alien.
sudo alien -i oracle-instantclient11.2-basic-11.2.0.3.0-1.x86_64.rpm
sudo alien -i oracle-instantclient11.2-sqlplus-11.2.0.3.0-1.x86_64.rpm
In my opinion these two steps are the easiest way to install Oracle client rpm's on your Ubuntu system. (I'm not going to mention exporting the Oracle-specific variables, as that's already clearly explained in the answers above.)
Hope it helps someone.
[1] http://pumuduruhunage.blogspot.com/2016/04/setup-oracle-sql-plus-client-on-aws.html
For anyone who is using a proxy, you'd need to add an extra line to the bash profile. At least this is what made it work for me. I'm using cntlm.
export no_proxy=
Install via zip (tried with 12_2)
First of all there is no need to set ORACLE_HOME.
Simply download the .zip files from here, starting with the first one (Basic), followed by SQL*Plus, and any additional zips you may need.
Extract them all under /opt/oracle
You will then have a directory: /opt/oracle/instantclient_x_y
On Ubuntu I also had to do:
sudo apt install libaio1
To run:
# This can be also done by adding only the path below in: /etc/ld.so.conf.d/oracle-instantclient.conf
export LD_LIBRARY_PATH=/opt/oracle/instantclient_x_y:$LD_LIBRARY_PATH
# This can be added in ~/.profile or ~/.bashrc
export ORACLE_HOME=/opt/oracle/instantclient_x_y
/opt/oracle/instantclient_x_y/sqlplus user/pass@hostname:1521/sid_or_servicename
At the bottom of the page linked above there are more details.

Why can't I get Openfire to start?

I am having trouble getting Openfire to work. I have done the following:
[root@jiaoyou logs]# which java
/usr/bin/java
and I've run this command:
ln -s /usr/bin/java /opt/openfire/jre/bin/java
but when starting Openfire, it still says:
cannot run command `/opt/openfire/jre/bin/java': No such file or directory
It seems like a permission issue, but I don't know how to fix that.
This was solved for me, on CentOS 6 64-bit, using the following commands:
cd /opt/openfire/jre/bin
cp java java.bak
rm java
ln -s /usr/bin/java java
service openfire start
If you're on a 64-bit machine, you should install the zlib package for the 32-bit architecture.
For Red Hat/CentOS, use:
yum install -y zlib.i686
/usr/bin/java is just a shell script that runs the actual binary. If you don't have the JAVA_HOME environment variable set correctly, it might not be able to locate the binary if invoked through a symlink like that.
Another thing to keep in mind is that some Linux distros put /usr/bin/java in place even though you haven't installed the Sun JRE. Don't bother trying to use the GNU version of Java; it's rubbish. Do you know whether the Sun JRE is installed or not? What does "java -version" tell you?
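A quick way to check both things (hypothetical session; the jvm path shown is an example):
$ readlink -f /usr/bin/java
/usr/lib/jvm/java-6-sun/jre/bin/java
$ java -version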
I think the correct answer is to use the right command to start Openfire. I have found that "service openfire start" actually doesn't work.
I attempted the above-mentioned method of removing the java executable from /opt/openfire/jre/bin, and all that did was force me into reconfiguring all of my current Openfire settings. Thank god I made that java.bak file.
I believe the proper method to stop|start|restart is to go to /opt/openfire/bin and run ./openfire start, or from anywhere run "/opt/openfire/bin/openfire start".
At least that's what worked for me.
