Cannot run Hadoop locally using Cygwin - Linux

I am new to Hadoop, so I am following this tutorial to install a single node on my computer. My OS is Windows 7, so I installed Cygwin. In the tutorial I've been asked to:
Try the following command:
$ bin/hadoop
but when I try it I get this response:
-bash: binhadoop: command not found
The file "hadoop" is in the directory and have not changed it. As I mentioned I am new to Hadoop and Linux\Cygwin so maybe its something trivial that I do wrong.
Thanks

The command to change directory would be cd /bin/hadoop. I would suggest using Linux to make things easier.
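For example, if Hadoop was extracted under your home directory, the full sequence would look like this (a minimal sketch; the ~/hadoop path is an assumption, so substitute your own extraction directory):
# Change into the Hadoop installation directory first (path assumed)
cd ~/hadoop
# Then run the launcher with a forward slash; a backslash (bin\hadoop)
# is swallowed by bash, which would explain the "binhadoop" error above
bin/hadoop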

Related

Using Wine in Heroku

Can anybody help me figure out how to use Wine on Heroku?
I deployed Wine to Heroku with the button in the readme of https://github.com/TheBotlyNoob/heroku-buildpack-wine.
But when I tried to run Wine, it didn't work. Am I running Wine wrong? Or is there another step I need to do?
Thanks.
I've installed both versions of Wine (stable and release), but when I try to execute the command in bash, the shell answers that the file doesn't exist:
bash: /app/vendor/wine/bin/wine: No such file or directory
Heroku bash: wine no such file
Even if wine is installed and the file is executable:
wine
If you try to launch notepad, the answer from bash is the same:
/app/vendor/wine/bin/notepad: 46: exec: /app/vendor/wine/bin/wine: not found
even though wine is in the PATH.
Conclusion: wine doesn't execute.
A way to have Wine on Heroku is to download the package from Ubuntu along with its dependencies, put them in the repository, and install it.
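A minimal sketch of that approach, run on an Ubuntu box matching Heroku's stack (the wine-stable package name and the vendor/wine target directory are assumptions):
# Download the .deb without installing it
apt-get download wine-stable
# List its dependencies and repeat the download for each of them
apt-cache depends wine-stable
# Unpack into the repository so it can be committed with the app
dpkg-deb -x wine-stable_*.deb vendor/wine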
Hope it was useful.

start-all.sh, start-dfs.sh command not found

I am using Ubuntu 16.04 LTS and installed Hadoop 2.7.2. The output of
hadoop version
is
Hadoop 2.7.2
Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r b165c4fe8a74265c792ce23f546c64604acf0e41
Compiled by jenkins on 2016-01-26T00:08Z
Compiled with protoc 2.5.0
From source with checksum d0fda26633fa762bff87ec759ebe689c
This command was run using /usr/local/hadoop-2.7.2/share/hadoop/common/hadoop-common-2.7.2.jar
and when I run
whereis hadoop
it gives this output:
hadoop: /usr/local/hadoop /usr/local/hadoop-2.7.2/bin/hadoop.cmd /usr/local/hadoop-2.7.2/bin/hadoop
But when I run the command
start-all.sh
it says command not found.
Also, when I run
start-dfs.sh
the output is again command not found.
I am able to run these commands when I navigate to the hadoop directory, but I want to run them without navigating into it.
Your problem is that bash doesn't know where to look for ./start-all.sh.
You can fix this by opening $HOME/.bashrc and adding a line that looks like this:
PATH=$PATH:/usr/local/hadoop/sbin
This tells bash that it should look in '/usr/local/hadoop/sbin' for start-all.sh.
Note:
Changes to $HOME/.bashrc will not take effect in any terminals that are currently open.
If you need the changes to take effect in a terminal that is currently open, run
source $HOME/.bashrc
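Putting the two steps together (the sbin path is taken from the answer above):
# Append the Hadoop sbin directory to the shell's search path
echo 'export PATH=$PATH:/usr/local/hadoop/sbin' >> $HOME/.bashrc
# Reload the file in the current terminal
source $HOME/.bashrc
# Verify: this should now print the script's full path
command -v start-all.sh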
I had to search for it with find.
find / -iname start-all.sh 2> /dev/null
It found:
/usr/local/sbin/start-all.sh
/usr/local/Cellar/hadoop/3.3.4/libexec/sbin/start-all.sh
/usr/local/Cellar/hadoop/3.3.4/sbin/start-all.sh
So in addition to the previous answer, the variable in $HOME/.bashrc looks like this:
PATH=$PATH:/usr/local/sbin/
Note: I'm not sure which of them should be added to PATH.
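Under Homebrew, the entries in /usr/local/sbin are usually just symlinks into the Cellar, so adding that one directory is typically enough; a quick sketch to confirm, using the paths from the find output above:
# Check whether the /usr/local/sbin copy points into the Cellar
ls -l /usr/local/sbin/start-all.sh
# If it does, this single PATH entry covers all three results
export PATH=$PATH:/usr/local/sbin
command -v start-all.sh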

How to execute a .run file using Mac Terminal?

Once I navigate to a certain directory that has a .run file in it, how do I execute that file using the Mac Terminal?
I changed the permissions with chmod 777 on the file-name.run file and ran it:
$ chmod 777 file-name.run
$ ./file-name.run
but it does not work for me. Any idea? It gives me the following error:
$ ./ppasmeta-9.4.1.3-linux-x64.run: cannot execute binary file
It's Postgres Plus Advanced Server for Linux that I want to install on Mac OS X.
As suggested by Muhammad Iqbal, you won't be able to run a Linux-specific binary/executable on Mac OS X without some modifications. It's like attempting to run an .exe on Linux without Wine. While Linux and OS X are similar, they are not that compatible. If you can get your desired program in the form of a .deb, that may work. If not, I would suggest either dual-booting your Mac with a small Linux distro (e.g. DSL) or picking up a virtual machine. I've dual-booted before; it's a decent option for this situation if you have enough hard drive space. If another solution shows up, I'll be sure to let you know through this channel.
I am not sure what you are asking here, but try this:
./run
As I said in the comments, I don't think that Postgres Advanced Server is available for OSX. If Postgres 9.5.3 is suitable for your purposes, you could install homebrew and use it to install Postgres 9.5.3 with:
brew install postgresql
You can download the one-liner to install homebrew from here.
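A sketch of that route; the install one-liner below is the current command published on brew.sh and may differ from the one behind the answer's link:
# Install Homebrew, then use it to install and start Postgres
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
brew install postgresql
brew services start postgresql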

Starting hadoop - command not found

I have zero experience with Hadoop and am trying to set it up in an EC2 environment. After formatting the filesystem, I tried to start Hadoop and it keeps saying command not found.
I think I have tried every piece of advice I found in previous Stack Overflow questions/answers.
Here is the line I am having trouble with:
[root@ip-172-31-22-92 ~]# start-hadoop.sh
-bash: start-hadoop.sh: command not found
I have tried all the following commands (which I found on previous answers)
[root@ip-172-31-22-92 ~]# hadoop-daemon.sh start namenode
-bash: hadoop-daemon.sh: command not found
[root@ip-172-31-22-92 ~]# ./start-all.sh
-bash: ./start-all.sh: No such file or directory
[root@ip-172-31-22-92 ~]# cd /usr/local/hadoop/
-bash: cd: /usr/local/hadoop/: No such file or directory
Honestly, I don't know what I am doing wrong. Plus, I am doing this as root... is this right? It seems like I should be doing this as a regular user...?! (Disregard this question if it just sounded dumb.)
I am not sure whether you have downloaded/installed the hadoop package or not, so let me briefly walk you through the process:
Download the latest package using wget:
wget http://apache.cs.utah.edu/hadoop/common/hadoop-2.7.1/hadoop-2.7.1.tar.gz
Extract the package in the directory where you downloaded it:
tar xzf hadoop-2.7.1.tar.gz
Change into the extracted directory:
cd hadoop-2.7.1
Now you should be able to find and start the hadoop daemons using:
sbin/start-all.sh
You can find the scripts you are trying to use in the extracted directory's (hadoop-2.7.1) sbin folder.
Make sure you follow the proper documentation to get it completed properly, because I haven't really covered installing Java or configuring hadoop, which are extensively covered in the following documentation link:
http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SingleCluster.html
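Condensed into one runnable sequence (commands and mirror URL exactly as in the steps above; installing and configuring Java is assumed to be done already):
# Download, extract, and start a single-node Hadoop 2.7.1
wget http://apache.cs.utah.edu/hadoop/common/hadoop-2.7.1/hadoop-2.7.1.tar.gz
tar xzf hadoop-2.7.1.tar.gz
cd hadoop-2.7.1
sbin/start-all.sh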
The scripts in this repository could help you understand the steps to install hadoop: https://github.com/lalosam/EasyHadoop (hadoop.sh). You could try to download and execute it. The script should download the hadoop library and configure it as a pseudo cluster. The start-hadoop and stop-hadoop scripts start and stop all the services required for hadoop.
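A sketch of trying it out (the repository URL and the hadoop.sh name come from the answer; the exact invocation is an assumption):
# Clone the repository and run its setup script
git clone https://github.com/lalosam/EasyHadoop.git
cd EasyHadoop
chmod +x hadoop.sh
./hadoop.sh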
First, you may have to add the HADOOP_HOME variable to your .bashrc file.
Ex:
export HADOOP_HOME=/usr/local/bigdata/hadoop/hadoop-1.2.1
export CLASSPATH=$JAVA_HOME:/usr/local/bigdata/hadoop/hadoop-1.2.1/hadoop-core-1.2.1.jar
export PATH=$PATH:$HADOOP_HOME/bin
Then open a new session and execute start-all.sh.
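To confirm the environment took effect in the new session (paths from the exports above):
# Verify the variable and that the script now resolves via PATH
echo $HADOOP_HOME
command -v start-all.sh
start-all.sh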

Drush on Cygwin setup

I followed the instructions here to install PEAR, download Drush into the usr/local/src folder, and create the symlink in usr/bin/drush.
At the end of the instructions it says you can test by running drush. I get this output:
-bash: /cygdrive/c/xampp/php/drush: No such file or directory
Note the bash root of xampp/php. Does that need to be changed?
So, then I tried running /usr/bin/drush and got this output:
Unable to untar C:\cygwin\usr\local\src\drush\lib\dru6B61.tmp.
[error]
Does anyone know where I'm going wrong here?
I had the same issue. I reinstalled the Cygwin packages listed in the tutorial you mentioned (I thought I already had them from other installs). I think it may have been the 'bsdtar' package.
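If it was bsdtar, one way to reinstall just that package without clicking through the GUI is Cygwin's unattended setup mode; a sketch, assuming setup-x86_64.exe sits in the current directory:
# Re-run the Cygwin installer unattended to (re)install bsdtar
./setup-x86_64.exe -q -P bsdtar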
Good luck!
