In the official Cassandra documentation (https://wiki.apache.org/cassandra/GettingStarted) it states that to start the service you use
'bin/cassandra -f'
Then use
'bin/cqlsh'
to access. But to use cqlsh in this way I always have to go to the bin folder. What is the procedure to make it work such that I can type 'cqlsh' from anywhere in the console to access (not have to be in the bin folder of Cassandra setup) ?
(just like we access python directly from anywhere by just typing python3 in console )
To get this to work, you have to add your Cassandra bin directory to your $PATH.
From a terminal prompt, check the contents of your $PATH.
$ echo $PATH
On my Ubuntu VM, this is what I see:
/usr/local/apache-maven/apache-maven-3.1.1/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/usr/lib/jvm/jdk1.7.0_45/bin
Since you mention Python3, I'll check the location of that on my system as well:
$ which python3
/usr/bin/python3
As you can see, Python3 is in my /usr/bin directory, and /usr/bin is in my $PATH, which is why simply typing python3 works for me (and you as well).
There are a few ways to get your Cassandra bin directory into your $PATH, and there is some debate about which is the "correct" way to accomplish this. So in lieu of telling you how I would do it, I will provide a link to a question on AskUbuntu that details something like 3 ways to add a directory to your $PATH: How to add a directory to my path?
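For instance, here is a minimal sketch of the per-user approach, assuming Cassandra was extracted to /opt/cassandra (that location is an assumption; adjust it to wherever your installation actually lives):
# Append Cassandra's bin directory to PATH in ~/.bashrc
# (/opt/cassandra is an assumed location; substitute your own)
echo 'export PATH=$PATH:/opt/cassandra/bin' >> ~/.bashrc
# Reload the file so the change takes effect in the current shell
source ~/.bashrc
# cqlsh should now resolve from any directory
which cqlsh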
Use cassandra -f in your root folder, and then you should be able to use cqlsh anywhere you have Cassandra installed.
Related
For personal preference reasons, and for simplicity when updating, I prefer the installation location /opt (when I install from source).
But if, for example, I install node or ffmpeg to /opt (./configure --prefix=/opt), the commands are not available on the command line, whereas they would be if I didn't use the prefix.
I guess I should be creating a script, but I have no idea in which location or how.
Some more detail: I have installed the nginx server in /opt/, and I have created an executable script in /etc/init.d/ and it's working fine, but I have no idea how to do that with node or ffmpeg, since as far as I know they are not services but something more like "environment variables".
Any solution appreciated, thanks.
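A common way to handle this, as a sketch only (the per-package directory /opt/node/bin below is an assumption; adjust it to your actual layout), is either to put that directory on the PATH system-wide or to symlink the individual binaries into a directory that is already on the PATH:
# Option 1: add the directory to PATH for all users via a profile.d snippet
echo 'export PATH=$PATH:/opt/node/bin' | sudo tee /etc/profile.d/opt-node.sh
# Option 2: symlink a single binary into /usr/local/bin, which is already on PATH
sudo ln -s /opt/node/bin/node /usr/local/bin/node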
I have zero experience with Hadoop and am trying to set it up in an EC2 environment. After formatting the filesystem, I tried to start Hadoop and it keeps saying command not found.
I think I have tried every advice I found on stackoverflow previous questions/answers.
Here is the line I am having trouble with:
[root@ip-172-31-22-92 ~]# start-hadoop.sh
-bash: start-hadoop.sh: command not found
I have tried all of the following commands (which I found in previous answers)
[root@ip-172-31-22-92 ~]# hadoop-daemon.sh start namenode
-bash: hadoop-daemon.sh: command not found
[root@ip-172-31-22-92 ~]# ./start-all.sh
-bash: ./start-all.sh: No such file or directory
[root@ip-172-31-22-92 ~]# cd /usr/local/hadoop/
-bash: cd: /usr/local/hadoop/: No such file or directory
Honestly, I don't know what I am doing wrong. Plus, I am doing this as root... is this right? It seems like I should be doing it as a regular user...?! (Disregard this question if it sounds dumb.)
I am not sure whether you have downloaded/installed the Hadoop package or not, so let me walk you through the process briefly:
Download the latest package using wget:
wget http://apache.cs.utah.edu/hadoop/common/hadoop-2.7.1/hadoop-2.7.1.tar.gz
Extract the package in the directory where you downloaded it:
tar xzf hadoop-2.7.1.tar.gz
Change into the extracted directory:
cd hadoop-2.7.1
Now you will be able to start the Hadoop daemons using:
sbin/start-all.sh
You can find the scripts you are trying to use in the extracted directory's (hadoop-2.7.1) sbin folder.
Make sure you follow the proper documentation to get this completed properly, because I haven't really covered installing Java or configuring Hadoop, which are extensively covered in the following documentation link:
http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SingleCluster.html
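Since your original error was "command not found", note that after extracting the package you either have to call the scripts with a path relative to the extracted directory (as above), or put its bin and sbin folders on your PATH. A rough sketch, assuming the tarball was extracted under /root (adjust to your actual location):
# Point HADOOP_HOME at the extracted directory (the path here is an assumption)
export HADOOP_HOME=/root/hadoop-2.7.1
# Make the hadoop command and the start/stop scripts resolvable from anywhere
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
# Quick check that the shell can now find them
which start-all.sh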
The scripts in this repository could help you understand the steps to install Hadoop: https://github.com/lalosam/EasyHadoop (hadoop.sh). You could try to download it and execute it. The script should download the Hadoop library and configure it as a pseudo-distributed cluster. The start-hadoop and stop-hadoop scripts start and stop all the services required for Hadoop.
First, you may have to add your HADOOP_HOME variable to your .bashrc file.
Ex:
export HADOOP_HOME=/usr/local/bigdata/hadoop/hadoop-1.2.1
export CLASSPATH=$JAVA_HOME:/usr/local/bigdata/hadoop/hadoop-1.2.1/hadoop-core-1.2.1.jar
export PATH=$PATH:$HADOOP_HOME/bin
Then open a new session and execute start-all.sh (the script lives in $HADOOP_HOME/bin, which is now on your PATH).
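Alternatively, instead of opening a new session, you can reload the file in the shell you already have (a small sketch, assuming the exports above went into ~/.bashrc):
# Re-read ~/.bashrc so the new variables take effect in the current shell
source ~/.bashrc
# Verify they are set
echo $HADOOP_HOME
echo $PATH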
I am trying to check my installation of hadoop. I did create the environment variables and when I call printenv, I do see my HADOOP_HOME and PATH variables printed and correct (home/hadoop and HADOOP_HOME/bin respectively).
If I go to home/hadoop in the terminal and call ls, I see the hadoop file there. If I try to run it by calling hadoop, it still tells me command not found.
First day on Linux, so there may be a stupid answer to this problem.
Your current working directory is probably not part of your path.
That is the default on Linux systems.
If you are in the same directory where your hadoop file is, run the command with a relative path, like: ./hadoop
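For example, something along these lines (assuming the hadoop file really is in /home/hadoop as you describe):
cd /home/hadoop
# Explicitly run the file in the current directory; a bare "hadoop" only works
# once that directory is on your PATH
./hadoop version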
HOME DIRECTORY:
/home/hadoop is a home directory created by Linux, similar to Documents and Settings in Windows.
Open your terminal and type:
ls -l /home/hadoop
Post your result for this command: ls -l /home/hadoop
SETTING GLOBAL PATH:
Go to /home/hadoop and open .bashrc in a text editor.
Add these lines at the end:
export HADOOP_HOME=/path/to/your/hadoop/installation/folder
export PATH=$PATH:$HADOOP_HOME/bin
Save and exit. Now type this in your terminal:
echo $PATH
echo $HADOOP_HOME
If these commands show the correct directories, try the hadoop command. It should work.
Post your results for these commands: echo $PATH and echo $HADOOP_HOME
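Once those two variables look right, a quick sanity check (just a sketch) would be:
# Confirm the shell can now resolve the hadoop executable
which hadoop
# Confirm it actually runs
hadoop version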
Go to the Hadoop-x.x.x/bin folder
check for the hadoop executable there
run ./hadoop version
You must run the “hadoop version” command.
If the hadoop setup is fine, then you should see the following result:
Hadoop 2.4.1
Subversion https://svn.apache.org/repos/asf/hadoop/common -r 1529768
Compiled by hortonmu on 2013-10-07T06:28Z
Compiled with protoc 2.5.0
From source with checksum 79e53ce7994d1628b240f09af91e1af4
For an installation guide, you can refer here:
Hadoop Environment Setup
Link to my quora answer https://qr.ae/TWngHN
Hope this helps.
Thanks
Enter which hadoop in your terminal. If you see a path as output, hadoop is set in the PATH of your system. If you get something similar to this,
/usr/bin/which: no hadoop in (/usr/local/hadoop....
you might not have set up everything properly. Modify /etc/bash.bashrc with export HADOOP_HOME=/path/to/hadoop/folder and add it to PATH using export PATH=$PATH:$HADOOP_HOME/bin
You may be editing the wrong ~/.bashrc file.
Open a terminal, run sudo gedit ~/.bashrc, and add these commands:
export HADOOP_HOME=/usr/local/hadoop
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
Note: you must not use sudo gedit ~/.bashrc.sh; the two files behave differently on newer OS versions.
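If you are unsure which ~/.bashrc you are actually editing, a quick check (sketch) is:
# ~ expands to the current user's home directory; confirm which one that is
echo $HOME
# Confirm the file exists there and see when it was last modified
ls -l $HOME/.bashrc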
I have repackaged a Bash RPM to include automatic logging to syslog. I am trying to work out a way to set this up so that it is used ONLY when a user or service account runs a command as root. The option I'm looking at is installing this version of Bash to an alternate location and then pointing root to use that version as its default shell.
Can someone go through the process of installing this RPM to an alternate path and associating the root account to it as the default shell? I have been having difficulty finding a way to do this when searching online.
Since you are repackaging the RPM, it is probably best to change the destination path directly in the RPM.
As for the default shell, run chsh -s /path/to/your/bash root to change it.
Be aware that this solution may not work for all purposes though. For example, running a script that starts with #!/bin/bash will still execute it with /bin/bash instead of your default login shell.
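As a sketch of that change and a quick way to confirm it (the install path below is only an assumed example; use wherever your repackaged RPM actually puts bash):
# Point root's login shell at the alternate bash build
# (depending on the distribution, the new path may also need to be listed in /etc/shells)
chsh -s /opt/bash-logging/bin/bash root
# Confirm the change took effect in /etc/passwd
grep '^root:' /etc/passwd
Because of the shebang caveat above, scripts that start with #!/bin/bash will keep using the stock bash unless their shebang is pointed at the alternate path as well.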
I am trying to give a non-root user the ability to run mercurial commands from the shell. When I log in as the user and type "hg", I get this message:
abort: couldn't find mercurial libraries in [/usr/local/bin /usr/lib/python24.zip /usr/lib/python2.4 /usr/lib/python2.4/plat-linux2 /usr/lib/python2.4/lib-tk /usr/lib/python2.4/lib-dynload /usr/lib/python2.4/site-packages /usr/lib/python2.4/site-packages/Numeric /usr/lib/python2.4/site-packages/gtk-2.0]
(check your install and PYTHONPATH)
I do not have this problem as the root. I can run mercurial commands from any directory.
My problem is that I'm not very familiar with Linux at all, and so I don't know exactly how I'm supposed to change my PYTHONPATH variable (if indeed that's what I'm trying to do). I don't even know where the PYTHONPATH variable is being stored to see what's written there now.
Can someone tell me where the PYTHONPATH (or even regular PATH) environment variable is stored in Linux, and what steps I might take to remove the error method I'm getting above? If it helps, I'm using Putty and SSH to access the server.
Thanks! :)
The PYTHONPATH is just an environment variable that gets prepended to Python's internal search path. To see what is in there, do the following in a Python shell:
>>> import sys
>>> sys.path
It should print something like:
['', '/usr/lib64/python27.zip', '/usr/lib64/python2.7', '/usr/lib64/python2.7/plat-linux2', '/usr/lib64/python2.7/lib-tk', '/usr/lib64/python2.7/lib-old', '/usr/lib64/python2.7/lib-dynload', '/usr/lib64/python2.7/site-packages', '/usr/lib64/python2.7/site-packages/PIL', '/usr/lib64/python2.7/site-packages/gst-0.10', '/usr/lib64/python2.7/site-packages/gtk-2.0', '/usr/lib64/python2.7/site-packages/webkit-1.0', '/usr/lib64/python2.7/site-packages/wx-2.8-gtk2-unicode', '/usr/lib/python2.7/site-packages', '/usr/lib/python2.7/site-packages/setuptools-0.6c11-py2.7.egg-info']
In practice, I would guess your shell is bash, so the places where the environment variables can be set are:
/etc/profile, /etc/bashrc, ~/.profile and ~/.bashrc - the first two being system-wide and the latter two per-user.
For further explanation, see this blog article about bashrc and profile.
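For example, a per-user setting in ~/.bashrc might look like this (the directory below is only a placeholder; point it at wherever the Mercurial modules are actually installed):
# Prepend the directory that contains the mercurial package to Python's search path
# (hypothetical example path)
export PYTHONPATH=/usr/local/lib/python2.4/site-packages:$PYTHONPATH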
EDIT
To fix this, probably the easiest way is to install Mercurial via pip (I am assuming that Mercurial is not available in the official repository for your Linux distribution, but python-setuptools or something similar that provides easy_install usually is). See this question for instructions.
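A rough sketch of that route, assuming easy_install (or pip) is already present on the machine:
# Get pip if only easy_install is available
sudo easy_install pip
# Install Mercurial from PyPI for the system Python
sudo pip install mercurial
# Verify that hg can now find its libraries
hg version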