I have Hadoop 2.7.2 installed on Ubuntu 16.04. When I run the command:
start-yarn.sh
It gives the following output:
starting yarn daemons
/usr/local/hadoop-2.7.2/etc/hadoop/yarn-env.sh: line 122: rt: command not found
starting resourcemanager, logging to /usr/local/hadoop-2.7.2/logs/yarn-hduser-resourcemanager-brij-Compaq-15-Notebook-PC.out
/usr/local/hadoop-2.7.2/etc/hadoop/yarn-env.sh: line 122: rt: command not found
localhost: /usr/local/hadoop-2.7.2/etc/hadoop/yarn-env.sh: line 122: rt: command not found
localhost: starting nodemanager, logging to /usr/local/hadoop-2.7.2/logs/yarn-hduser-nodemanager-brij-Compaq-15-Notebook-PC.out
localhost: /usr/local/hadoop-2.7.2/etc/hadoop/yarn-env.sh: line 122: rt: command not found
I am just curious about the last line, the one ending in yarn-env.sh: line 122: rt: command not found.
Should I be concerned? Or have I done anything wrong which resulted in this error?
Sorry, I have figured out my mistake. I had replaced export with rt in .bashrc by mistake. I have corrected it and now it's working.
Related
I am trying to use Flow in a React Native project via the npm package flow-bin, but when I try to run Flow it gives the error Unix.Unix_error(Unix.ENOTSOCK, "select", ""). I have been looking for a solution but no luck so far. The details of the error are below. I have also tried completely uninstalling Node.js and installing it again, but I still get the same result.
Any help would be highly appreciated!
node --version
v15.9.0
npm --version
7.5.3
Operating System
Windows 10 Pro 64bit
Build 20H2
Steps to reproduce the error
npm init
npm i -D flow-bin
add a flow script to package.json (see the snippet after these steps)
npm run flow init
npm run flow
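For reference, the package.json script entry from step 3 presumably looks like this minimal fragment, so that npm run flow invokes the locally installed flow-bin binary:

"scripts": {
  "flow": "flow"
}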
ERROR
Error Details
PowerShell Screenshot
Logs
[2021-02-24 03:44:28.921] argv=E:\LearningLab\Work\TechnicianApp\node_modules\flow-bin\flow-win64-v0.135.0\flow.exe start --flowconfig-name .flowconfig --temp-dir C:\Users\ajplu\AppData\Local\Temp\flow E:\LearningLab\Work\TechnicianApp
[2021-02-24 03:44:28.921] lazy_mode=off
[2021-02-24 03:44:28.921] arch=types_first
[2021-02-24 03:44:28.921] abstract_locations=off
[2021-02-24 03:44:28.921] max_workers=4
[2021-02-24 03:44:28.921] debug=false
[2021-02-24 03:44:28.922] Initializing Server (This might take some time)
[2021-02-24 03:44:28.922] executable=E:\LearningLab\Work\TechnicianApp\node_modules\flow-bin\flow-win64-v0.135.0\flow.exe
[2021-02-24 03:44:28.923] version=0.135.0
[2021-02-24 03:44:28.923] No saved state available
[2021-02-24 03:44:28.924] Parsing
Monitor died unexpectedly
Monitor Logs
Feb 24 03:44:28.796 [info] argv=E:\LearningLab\Work\TechnicianApp\node_modules\flow-bin\flow-win64-v0.135.0\flow.exe start --flowconfig-name .flowconfig --temp-dir C:\Users\ajplu\AppData\Local\Temp\flow E:\LearningLab\Work\TechnicianApp
Unix.Unix_error(Unix.ENOTSOCK, "select", "")
Raised by primitive operation at file "src/common/lwt/lwtInit.ml", line 36, characters 18-46
Called from file "list.ml", line 117, characters 24-34
Called from file "src/common/lwt/lwtInit.ml", line 34, characters 8-206
Called from file "src/unix/lwt_engine.ml", line 344, characters 8-19
Called from file "src/unix/lwt_main.ml", line 33, characters 4-78
Called from file "src/common/lwt/lwtInit.ml", line 129, characters 4-135
Called from file "src/hack_forked/utils/sys/daemon.ml", line 150, characters 4-20
I found the reason.
It turned out that somehow Astril VPN was causing the Flow server to crash, even though it was only installed and not running. After uninstalling Astril, Flow works like a charm.
If someone encounters a similar problem, they should try uninstalling their VPN/proxy software. I am not sure whether other VPN software can cause this issue, because I have only used Astril.
When I submit the spark-shell command, I see the following error:
# spark-shell
> SPARK_MAJOR_VERSION is set to 2, using Spark2
File "/usr/bin/hdp-select", line 249
print "Packages:"
^
SyntaxError: Missing parentheses in call to 'print'. Did you mean print("Packages:")?
ls: cannot access /usr/hdp//hadoop/lib: No such file or directory
Exception in thread "main" java.lang.IllegalStateException: hdp.version is not set while running Spark under HDP, please set through HDP_VERSION in spark-env.sh or add a java-opts file in conf with -Dhdp.version=xxx
at org.apache.spark.launcher.Main.main(Main.java:118)
The problem is that the HDP script /usr/bin/hdp-select is apparently being run under Python 3, whereas it contains incompatible Python 2 specific code.
You may port /usr/bin/hdp-select to Python 3 by (see the sketch after this list):
adding parentheses to the print statements
replacing the line "packages.sort()" with "packages = sorted(packages)" (in Python 3 the value being sorted may be an iterator, which has no .sort() method)
replacing the line "os.mkdir(current, 0755)" with "os.mkdir(current, 0o755)" (Python 3 octal literals need the 0o prefix)
You may also try to force HDP to run /usr/bin/hdp-select under Python2:
PYSPARK_DRIVER_PYTHON=python2 PYSPARK_PYTHON=python2 spark-shell
I had the same problem; I set HDP_VERSION before running Spark:
export HDP_VERSION=<your hadoop version>
spark-shell
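If you are unsure of the exact version string, it can usually be read from the HDP installation directory (the /usr/hdp path is the standard location, as also seen in the ls error above):

ls /usr/hdp
# e.g. 2.6.5.0-292  current     <- version string shown here is hypothetical
export HDP_VERSION=2.6.5.0-292  # substitute the version listed on your system
spark-shell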
When I try to deploy on Windows, this error comes up. I am using Apache Spark 2.0.
Command: ./bin/spark-class org.apache.spark.deploy.master.Master
Error: ./bin/spark-class: line 84: [: too many arguments
It's the same error reported here.
The command was wrong; I forgot the ".cmd". The right command is:
./bin/spark-class.cmd org.apache.spark.deploy.master.Master
I am very new to Cassandra. I am unable to start the nodes locally using CCM and am getting the error below. Does anyone have any idea about it?
D:\ccm>python ccm status
node1: DOWN (Not initialized)
node3: DOWN (Not initialized)
node2: DOWN (Not initialized)
D:\ccm>python ccm start
ERROR: Problem starting node1 (Timed out waiting for dirty_pid file.)
Traceback (most recent call last):
File "ccm", line 72, in <module>
cmd.run()
File "D:\ccm\ccmlib\cmds\cluster_cmds.py", line 458, in run
profile_options=profile_options) is None:
File "D:\ccm\ccmlib\cluster.py", line 260, in start
p = node.start(update_pid=False, jvm_args=jvm_args, profile_options=profile_options)
File "D:\ccm\ccmlib\node.py", line 459, in start
self.__clean_win_pid()
File "D:\ccm\ccmlib\node.py", line 1183, in __clean_win_pid
raise Exception('Error while parsing <node>/dirty_pid.tmp in path: ' + self.get_path())
Exception: Error while parsing <node>/dirty_pid.tmp in path: C:\Users\Ram\.ccm\cluster2\node1
D:\ccm>
Please help me with this.
Thanks in advance.
Take a look at this example of using ccm to trace consistency changes: http://www.datastax.com/documentation/cql/3.1/cql/cql_using/useTracingSetup.html
If you are not using Cassandra source code, the procedure is a little different. See the ccm README or let me know and I'll send you the details.
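For reference, a minimal ccm session against a binary (non-source) Cassandra build looks roughly like this; the cluster name, version, and node count are illustrative, and on Windows you would prefix the commands with python as in the question:

ccm create cluster2 -v 2.0.9 -n 3   # download a binary build and define 3 nodes
ccm start                           # start all nodes
ccm status                          # each node should now report UP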
Don't forget to set up the aliases on the local IP, for example:
$ sudo ifconfig lo0 alias 127.0.0.2 up
$ sudo ifconfig lo0 alias 127.0.0.3 up
$ sudo ifconfig lo0 alias 127.0.0.4 up
$ sudo ifconfig lo0 alias 127.0.0.5 up
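Note that these are macOS-style commands; on Linux the whole 127.0.0.0/8 range usually already routes to lo, and if aliases are still needed the ip equivalent would be along these lines:

sudo ip addr add 127.0.0.2/8 dev lo
sudo ip addr add 127.0.0.3/8 dev lo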
root@ubuntu:~$ apachectl restart
Gives me this error:
apache2: Syntax error on line 140 of /etc/apache2/apache2.conf: Syntax error on line 1 of /etc/apache2/mods-enabled/perl.load: Cannot load /usr/lib/apache2/modules/mod_perl.so into server: /usr/lib/apache2/modules/mod_perl.so: cannot open shared object file: No such file or directory
Action 'restart' failed.
The Apache error log may have more information.
On line 140 of apache2.conf there is this:
Include mods-enabled/*.load
Include mods-enabled/*.conf
In the file perl.load there is only one line:
LoadModule perl_module /usr/lib/apache2/modules/mod_perl.so
Kindly assist me on how I can rectify this, as it was working properly until Apache did an update and now it won't restart.
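If anyone else hits this after an upgrade, a likely recovery path on Debian/Ubuntu, assuming mod_perl came from the distribution package libapache2-mod-perl2, is the following sketch (not a confirmed fix for this exact setup):

sudo a2dismod perl                                     # disable the broken module so Apache can start
sudo apt-get install --reinstall libapache2-mod-perl2  # restore /usr/lib/apache2/modules/mod_perl.so
sudo a2enmod perl                                      # re-enable mod_perl
sudo apachectl restart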