I am not able to find the CASSANDRA_HOME variable being set anywhere in the Cassandra installation path.
I can guess that it is my Cassandra installation directory, because the log files are created in installed_dir/logs.
Where can I find CASSANDRA_HOME being set?
You haven't provided a lot of information, but I'll try to answer.
CASSANDRA_HOME is set in cassandra.in.sh, or in cassandra.bat if you are running on Windows. If CASSANDRA_HOME isn't already set, the script sets it to the parent of the directory the script is running in.
I'm assuming that you are running from a tarball installation, since you say that the log files end up under your install directory; hence your bin directory is directly under the install directory.
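For reference, the fallback in bin/cassandra.in.sh looks roughly like this (paraphrased, not verbatim from any particular Cassandra version):

# Default CASSANDRA_HOME to the parent of the directory
# containing this script, if the user hasn't set it.
if [ -z "$CASSANDRA_HOME" ]; then
    CASSANDRA_HOME="$(dirname "$0")/.."
fi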
I am following instructions from here:
https://www.datacamp.com/community/tutorials/apache-spark-python#gs.WEktovg
I downloaded a prebuilt version of Spark, untarred it, and moved it to /usr/local/spark.
According to the tutorial, this is all I should have to do.
Unfortunately, I can't run the interactive shell, as it can't find the file.
When I run:
./bin/pyspark
I get
-bash: ./bin/pyspark: No such file or directory.
I also notice that installing it this way does not add it to a bin directory on my PATH.
Is this tutorial wrong or am I missing a trick?
You need to change your working directory to /usr/local/spark. Then this command will work.
Also, when you untar it, it will not be added to any bin folder automatically; you need to add the path to your environment variables yourself.
Update your working directory to /usr/local/spark and execute the command; hopefully this will fix the issue.
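For example, assuming the tarball really was moved to /usr/local/spark:

# Run pyspark from the install directory...
cd /usr/local/spark
./bin/pyspark

# ...or put it on your PATH so it works from anywhere
# (e.g. in ~/.bashrc):
export SPARK_HOME=/usr/local/spark
export PATH="$PATH:$SPARK_HOME/bin"
pyspark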
I'm completely new to the Hadoop framework, and I've only been using Linux for the past few months. After installing Hadoop to the /usr/local directory, I tried to run the hadoop command in the CLI and it responded with "hadoop: command not found". I figured out that the environment variables weren't set, so I set them with the following commands:
export HADOOP_HOME=/usr/local/hadoop/
export PATH=$PATH:$HADOOP_HOME/bin/
It worked. I know what an environment variable is, but my question is: how does the shell find the hadoop command using the HADOOP_HOME variable?
HADOOP_HOME does nothing when you type the hadoop command (or anything else in $HADOOP_HOME/bin, for that matter).
$PATH is where all the commands you type into the terminal are looked up.
Just echo $PATH and you'll see all the folders.
HADOOP_HOME is read by the Hadoop components themselves, and isn't Linux specific.
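To see this in action (the hadoop path shown is the one from the question above):

echo $PATH      # colon-separated list of directories the shell searches
which hadoop    # e.g. /usr/local/hadoop/bin/hadoop -- resolved via PATH, not HADOOP_HOME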
The HADOOP_HOME variable is used by shell scripts such as yarn-config.sh and mapred-config.sh; setting it is required so that when those config scripts access it, they can locate the main Hadoop folder.
If you do not want to define HADOOP_HOME, you would need to edit those config scripts, replacing every reference to HADOOP_HOME with the required directory path.
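As a hypothetical sketch (not verbatim from any Hadoop release) of how such a config script uses the variable:

# Fail early if HADOOP_HOME isn't set, then derive other
# paths relative to it.
if [ -z "$HADOOP_HOME" ]; then
  echo "Error: HADOOP_HOME is not set" >&2
  exit 1
fi
HADOOP_CONF_DIR="${HADOOP_CONF_DIR:-$HADOOP_HOME/etc/hadoop}"
HADOOP_LIB_DIR="$HADOOP_HOME/lib"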
I'm running Cygwin 1.7.17 on Windows Server 2012. My user account is "Administrator". Where should I put a .bashrc file for the Cygwin bash to pick it up?
I've tried the c:\users\Administrator folder, which seems to be HOME in Cygwin 1.7. I tried c:\cygwin\home\Administrator as well.
Start a shell instance and run the command echo $HOME to see what your home path is set to. That's where all your user config files will be read from. It might not be one of the paths you tried.
Once you know where it is, just copy the template .bash_profile and .bashrc files from the /etc/skel folder to get you started.
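For example, from a Cygwin shell:

# Copy the template dotfiles into your home directory
cp /etc/skel/.bash_profile /etc/skel/.bashrc "$HOME"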
If you don't like the path that's currently being used as your home, you can change it by editing /etc/passwd. Here's more info on that... Safely change home directory
I'm trying to compile a piece of software (OpenMPI) in my home directory. The version of one of the build dependencies (autoconf) installed on my system is older than the version required by the OpenMPI autogen script. I compiled and installed the newer version of autoconf in my home directory.
Is there any way for the binary installed in my home directory to "override" the version installed on the system for my session?
I tried setting an alias which works via command line but not for the script used to generate the configure script.
Prepend the directory containing the binary you want to use to your $PATH environment variable.
Like PATH=/path/to/dir:$PATH ./compile
Your added directory will then be searched first when looking up the compile command. The change is only valid for that single invocation and will not persist after the command returns. You can use export PATH=/path/to/dir:$PATH and it will last for the rest of the session.
EDIT: As Eric.J states, you can use which compile, which will output the path to the command, just to make sure it's the right one.
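For instance, if the newer autoconf was installed under $HOME/autoconf (a hypothetical prefix -- adjust to wherever you ran make install):

export PATH="$HOME/autoconf/bin:$PATH"
which autoconf    # should now print $HOME/autoconf/bin/autoconf
./autogen.sh      # or autogen.pl, depending on the OpenMPI version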
You can change the PATH environment variable so that your home directory appears before the system directory, e.g.
PATH=$HOME/bin:$PATH
You can then use the which command to ensure the correct binary is being picked up.
I need to update Apache Ant on my server.
I downloaded the newest Ant, built it, and (I thought) installed it. But when I check, it says the old version is still installed.
How do I update/replace the previous version of Apache Ant on a CentOS 5.? server?
take care,
lee
As mentioned, the old version is probably getting picked up in your PATH. Post the output of echo $PATH.
To configure your CentOS system after installing a new version of Apache Ant, proceed with the following steps:
Locate the directory where the new Ant is located
Set the ANT_HOME environment variable to this directory
Add $ANT_HOME/bin to your PATH
P.S. To modify environment variables, you can either edit the /etc/environment file and reboot, or modify your local .bashrc. Inspect your current environment variables with printenv, e.g. to see the current value of PATH, and then add the Ant path to it, e.g.
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/lib/jvm/adoptopenjdk-8-hotspot-amd64/bin:/usr/local/ant/bin
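For example, in your .bashrc (assuming the new Ant was unpacked to /usr/local/ant, as in the PATH above; prepending it so it wins over any system-installed Ant):

export ANT_HOME=/usr/local/ant
export PATH="$ANT_HOME/bin:$PATH"
ant -version    # should now report the new version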