Run a Visual Studio VSPackage in a custom hive (not Exp) - vspackage

I'm developing a VSPackage and trying to test it under a hive other than Exp.
I created a new hive from the VS developer command prompt using the following command: devenv /rootsuffix MyExp
I then ran VS with the /rootsuffix MyExp command-line argument, but failed to make my extension work in that hive. It only works when I run it under Exp.
The main reason I don't want to run it under the Exp hive is that I don't want to affect the other extensions in that hive.

After searching for the 'Exp' string under my package project's subfolders, I found the place that declares where the VSSDK build tools will copy my VSPackage.
On my machine, it's under my package folder:
MyPackage\packages\Microsoft.VSSDK.BuildTools.14.1.24720\tools\vssdk\Microsoft.VsSDK.Common.targets
I changed the value of VSSDKTargetPlatformRegRootSuffix in the PropertyGroup section from:
<VSSDKTargetPlatformRegRootSuffix Condition="'$(VSSDKTargetPlatformRegRootSuffix)' == ''">Exp</VSSDKTargetPlatformRegRootSuffix>
to:
<VSSDKTargetPlatformRegRootSuffix Condition="'$(VSSDKTargetPlatformRegRootSuffix)' == ''">MyExp</VSSDKTargetPlatformRegRootSuffix>
MyExp is the name of my new hive.
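Note that the targets file only sets VSSDKTargetPlatformRegRootSuffix when it is empty, so instead of editing a file under packages\ (which a NuGet restore can overwrite) you should also be able to pass the suffix on the MSBuild command line. A minimal sketch, assuming your solution is named MyPackage.sln:
:: deploy the package to the MyExp hive instead of Exp
msbuild MyPackage.sln /p:VSSDKTargetPlatformRegRootSuffix=MyExp
:: then launch Visual Studio on that hive
devenv /rootsuffix MyExp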

Related

Run spark from source code on Windows - no such file or directory error

I would like to run Spark from source code on my Windows machine. I did the following steps:
git clone https://github.com/apache/spark
Added the SPARK_HOME variable into the user variables.
Added %SPARK_HOME%\bin to the PATH variable.
./build/mvn -DskipTests clean package
./bin/spark-shell
The last command fails with a "no such file or directory" error.
What should I do to fix the error?
First, refer to the question linked below; its top-voted answer gave me a working script for this problem:
Failed to start master for Spark in Windows
The reason is that the Spark launch scripts do not support Windows. The Spark documentation (https://spark.apache.org/docs/1.2.0/spark-standalone.html) instructs you to start the master and workers manually if you are a Windows user. So you need to start the master first and then run spark-shell.
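As a rough sketch of that manual startup (the deploy classes come from the standalone documentation; host and port are placeholders for your setup):
bin\spark-class org.apache.spark.deploy.master.Master
bin\spark-class org.apache.spark.deploy.worker.Worker spark://localhost:7077
bin\spark-shell --master spark://localhost:7077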

How to install talend in silent mode (unattended mode)?

I have the Talend installer in my directory and now I want to run it in silent mode so it does not prompt me for options while running on the command line.
My command to execute Talend in silent mode:
./Talend-Installer-20151214_1327-V6.1.1-linux64-installer.run --optionfile silentMode.txt
As soon as I run the above command I get the version and build details, but I cannot find the installation in the directory that I specified in my key-value text file, i.e. /opt/talend-6.1.1. Below is the output I get after executing the command.
"Talend 6.1.1 --- Built on 2015-12-14 18:02:36 IB: 9.5.3-201412111637"
Can anyone help me install Talend in silent mode?
You are not specifying which components of Talend you are installing in unattended mode; the option file is there for that. So, let's say you would like to install TAC using the included H2 DB and a JobServer; your option file (the silentMode.txt you are specifying) would probably look similar to the following:
mode=unattended
enable-components=tac,jobserver,serv
disable-components=logserv,mdm,dsc,cmdline,soa,runtime,svn,tdqp,sap_rfc,studio,esb
prefix=/opt
installStyle=advanced
installType=custom
licenseFile=/talend_packages/license
tacAdminUser=admin@company.com
tacAdminPwd=admin
tacWebAppName=tac611
svnInstall=create
svnUser=svnadmin
svnPass=admin
Please notice that you have to specify both enable-components and disable-components together. If you don't, the installer might try to install all components and fail with an error (since it is missing necessary information from the option file). Don't ask me why ...
In order to get an exhaustive list of options to put in your option file, simply take a look at the installer's options:
# ./Talend-Installer-20151214_1327-V6.1.1-linux64-installer.run --help
The same content is available here
Please also notice that it is recommended to install Talend as root (in order to activate the RC scripts, etc.); you can then manually change the Talend directory's owner to another user and slightly modify the RC scripts to run the services as that user.
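For example (the service user name here is an assumption; the path matches the prefix above plus the Talend version):
# hand the install directory over to a non-root service user after installing as root
chown -R talend:talend /opt/talend-6.1.1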
Some more documentation could be read here

Microsoft Azure HDInsight - "Not a Valid JAR"

I get the following error when I run an example from the Implementing Big Data Analysis course:
"Not a Valid JAR"
The command:
C:\apps\dist\hadoop-2.6.0.2.2.7.1-0004>hadoop jar hadoop-examples.jar wordcount /example/data/gutenberg/davici.txt /example/results
Please advise how to resolve this issue.
Thanks
The examples file was renamed when YARN was added in Hadoop 2.x / HDInsight 3.x. If you do a dir listing at the command prompt, you will see that it's now called hadoop-mapreduce-examples.jar, so the following command should work:
hadoop jar hadoop-mapreduce-examples.jar wordcount /example/data/gutenberg/davinci.txt /example/results
(you also had a typo in davinci.txt)
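For example, a quick way to confirm the jar name in your distribution (the output line is the renamed jar mentioned above):
C:\apps\dist\hadoop-2.6.0.2.2.7.1-0004>dir /b *examples*.jar
hadoop-mapreduce-examples.jar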

Opening Microsoft Visual Studio Code from command prompt Windows

Is there a way to launch Microsoft Visual Studio Code from the command line in Windows? I can't even seem to find the directory for Code on my computer. It didn't even ask me where to download it.
Navigate to the directory that you want to open and type code . to launch VS Code.
Many folks have already suggested opening Code from the command prompt via the code . command, but that only opens the Visual Studio Code Stable build. If you have downloaded the Visual Studio Code Insiders build (which has all the latest features but is less stable), then you need to follow the instructions below on Windows:
Go to Control Panel\System and Security\System and click on Advanced system settings.
Click on Environment Variables.
Under the System variables section, click Edit for the Path variable.
Add a new path: C:\Users\tsabu\AppData\Local\Programs\Microsoft VS Code Insiders\bin
(or)
C:\Program Files\Microsoft VS Code Insiders\bin, depending on where you installed VS Code Insiders on your machine.
Open a new command prompt and type code-insiders . to open the VS Code Insiders build.
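If you prefer doing this from a command prompt instead of the Environment Variables dialog, something like the following should also work (a sketch; note that setx writes the user-level PATH and truncates values longer than 1024 characters, so the dialog is safer for long paths):
setx PATH "%PATH%;C:\Program Files\Microsoft VS Code Insiders\bin"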
Short answer:
code your_path your_filename
Long answer:
Here your_path can simply be . if you want to use the current directory as your working path, or .. for one level up, etc.
code is the name of the executable of Visual Studio Code (code.exe). If it doesn't launch, your VS Code path probably hasn't been added to the PATH environment variable. Run this command to add it for the current session:
set PATH=%PATH%;C:\Program Files\Microsoft VS Code\bin
Of course you'll need to specify a different path if your VSC is installed somewhere else.
How can you find out the installation path? Go to the Start menu, type in "Visual Studio Code", right-click on the found program, choose Properties, and check Target. Now you'll see!
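Alternatively, if code is already on your PATH, you can locate the executable directly from the command prompt:
where code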
It may already be added to your PATH when installed. Try using code <filename> in your command line. If it's not, you can add the command-line script's directory to your PATH. The command-line script is installed by default in the following location:
C:\Users\<username>\AppData\Local\Code\bin
Point your command prompt to the folder that contains the file you want to open. Let's say you want to open the file titled main.scss. Simply run this command:
start code main.scss
If Visual Studio Code is already open, you can simply do:
code main.scss

Why does spark-submit and spark-shell fail with "Failed to find Spark assembly JAR. You need to build Spark before running this program."?

I was trying to run spark-submit and I got:
"Failed to find Spark assembly JAR.
You need to build Spark before running this program."
When I try to run spark-shell I get the same error.
What do I have to do in this situation?
On Windows, I found that if Spark is installed in a directory that has a space in the path (e.g. C:\Program Files\Spark), the installation will fail. Move it to the root or another directory with no spaces.
Your Spark package doesn't include compiled Spark code. That's why you got the error message from the spark-submit and spark-shell scripts.
You have to download one of the pre-built versions from the "Choose a package type" section of the Spark download page.
Try running mvn -DskipTests clean package first to build Spark.
If your Spark binaries are in a folder whose name contains spaces (for example, "Program Files (x86)"), it won't work. I changed it to "Program_Files", and then the spark-shell command worked in cmd.
In my case, I installed Spark via pip3 install pyspark on macOS, and the error was caused by an incorrect SPARK_HOME variable. It works when I run a command like the one below:
PYSPARK_PYTHON=python3 SPARK_HOME=/usr/local/lib/python3.7/site-packages/pyspark python3 wordcount.py a.txt
Go to SPARK_HOME. Note that your SPARK_HOME variable should not include /bin at the end. Keep that in mind when adding it to your path, like this: export PATH=$SPARK_HOME/bin:$PATH
Run export MAVEN_OPTS="-Xmx2g -XX:ReservedCodeCacheSize=1g" to give Maven more memory.
Run ./build/mvn -DskipTests clean package and be patient. It took my system 1 hour and 17 minutes to finish this.
Run ./dev/make-distribution.sh --name custom-spark --pip. This is just for python/pyspark. You can add more flags for Hive, Kubernetes, etc.
Running pyspark or spark-shell will now start pyspark and spark respectively.
If you have downloaded the binary and are getting this exception,
then check whether your SPARK_HOME path contains spaces, like "apache spark"/bin.
Removing the spaces will fix it.
Just to add to @jurban1997's answer:
If you are running Windows, make sure that the SPARK_HOME and SCALA_HOME environment variables are set up correctly. SPARK_HOME should be set so that {SPARK_HOME}\bin\spark-shell.cmd exists.
For a Windows machine with the pre-built version as of today (21.01.2022):
In order to verify all the edge cases you may have, and to avoid tedious guesswork about what exactly is not configured properly:
Find spark-class2.cmd and open it with a text editor.
Inspect the arguments of commands starting with call or if exists by typing the arguments into Command Prompt like this:
Open Command Prompt. (For PowerShell you need to print the variable another way.)
Copy-paste %SPARK_HOME%\bin\ as is and press Enter.
If you see something like bin\bin in the path displayed now, then you have appended \bin to your %SPARK_HOME% environment variable.
Now you have to add the path to spark/bin to your PATH variable, or it will not find the spark-submit command.
Try out and correct every path variable that the script in this file uses, and you should be good to go.
After that, enter spark-submit ... You may now encounter the missing Hadoop winutils.exe problem, for which you can go get the tool and paste it where spark-submit.cmd is located.
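A quick sanity check for the bin\bin symptom described above:
:: in Command Prompt (PowerShell expands variables differently)
echo %SPARK_HOME%
:: if the value ends in \bin, trim \bin off SPARK_HOME
:: and put %SPARK_HOME%\bin on your PATH instead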
Spark installation:
For a Windows machine:
Download spark-2.1.1-bin-hadoop2.7.tgz from this site: https://spark.apache.org/downloads.html
Unzip it, paste your spark folder into the C:\ drive, and set the environment variable.
If you don't have Hadoop, you need to create a Hadoop folder, create a bin folder inside it, and then copy and paste the winutils.exe file into it.
Download the winutils file from https://codeload.github.com/gvreddy1210/64bit/zip/master
and paste the winutils.exe file into the Hadoop\bin folder, then set the environment variable for C:\hadoop\bin.
Create a temp\hive folder in the C:\ drive and give full permission to this folder, like this:
C:\Windows\system32>C:\hadoop\bin\winutils.exe chmod 777 /tmp/hive
Open a command prompt, first run C:\hadoop\bin>winutils.exe, then navigate to C:\spark\bin> and
run spark-shell.
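The environment variables mentioned in the steps above could be set from a command prompt like this (a sketch; the paths assume the C:\spark and C:\hadoop locations used in the steps):
setx SPARK_HOME C:\spark
setx HADOOP_HOME C:\hadoop
setx PATH "%PATH%;C:\spark\bin;C:\hadoop\bin"
Open a new command prompt afterwards so that the new values are picked up.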
