Remote Desktop Not Working on Hadoop on Azure

I am able to allocate a Hadoop cluster on Windows Azure by entering my Windows Live ID, but after that, I am unable to do Remote Desktop to the master node there.
Before the cluster creation, it shows a message saying that Microsoft has received overwhelmingly positive feedback from Hadoop on Azure users and is therefore offering a free trial for 5 days with 2 slave nodes.
[P.S. This Preview version of HoA was working before.]
Any suggestions for this problem?
Thanks in advance.

When you created your Hadoop cluster, you were asked to enter a DNS name for the cluster, which would be something like your_hadoop_cluster.cloudapp.net.
So first, ping your Hadoop cluster name to see whether it returns an IP address; this will confirm whether you actually have a cluster configured at all. If you don't get an IP back, then you don't have a Hadoop cluster on Azure, and you should try creating one.
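If you prefer to script that check rather than pinging by hand, here is a minimal Python sketch; the host name below is just the placeholder from above, so substitute the DNS name you actually entered.

```python
import socket

# Placeholder cluster DNS name; replace with the one you entered at creation time.
cluster_host = "your_hadoop_cluster.cloudapp.net"

try:
    ip = socket.gethostbyname(cluster_host)
    print(f"Cluster resolves to {ip}; it exists, so the RDP problem lies elsewhere.")
except socket.gaierror:
    print("Name does not resolve; the cluster was probably never provisioned.")
```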
If you are sure you do have a Hadoop cluster on Windows Azure, try posting your question to the following Hadoop on Azure CTP forum, where you will get the proper help you need:
http://tech.groups.yahoo.com/group/HadoopOnAzureCTP/

Related

Databricks: can't create a new cluster on Azure Databricks

I tried to create a new cluster on Databricks (I lost the one I used; someone deleted it) and it doesn't work. I get the following message:
Time: ....
Message: Cluster terminated. Reason: Network Configuration Failure
The data plane network is misconfigured. Please verify that the network for your data plane is configured correctly.
Instance ID: ...............
Error message: Failed to launch HostedContainer{hostPrivateIP=......,
containerIp=....,
clusterId=...., resources=InstantiatedResources
{memoryMB=9105, ECUs=3.0, cgroupShares=...},
isSpot=false, id=...., instanceId=InstanceId(....),
state=Pending,instanceType=Standard_DS3_v2,
metadata=ContainerMetadata(Standard_DS3_v2)}
Because starting the FUSE daemon timed out.
This may happen because your VMs do not have outbound connectivity to DBFS storage.
Also consider upgrading your cluster to a later spark version.
What can I do? Maybe I have to unmount?
Can you please play around with the instance type and see if that helps? Maybe this particular VM instance type is not available in the region. Also check with another version of PySpark. AFAIK unmounting will not help, but I may be wrong.
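If you want to test a different instance type programmatically rather than through the UI, a rough sketch against the Databricks Clusters REST API could look like the following; the workspace URL, token, and runtime version are placeholders, and the call is only an illustration of the suggestion above.

```python
import requests

# Placeholders: use your own workspace URL and a personal access token.
WORKSPACE = "https://<your-workspace>.azuredatabricks.net"
TOKEN = "<personal-access-token>"

payload = {
    "cluster_name": "test-cluster",
    "spark_version": "7.3.x-scala2.12",  # example runtime; pick one listed in your workspace
    "node_type_id": "Standard_DS3_v2",   # swap in a different instance type if this one fails
    "num_workers": 1,
}

resp = requests.post(
    f"{WORKSPACE}/api/2.0/clusters/create",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=payload,
)
print(resp.status_code, resp.text)
```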

How to detect in a Hadoop cluster if any DataNode drive (storage) failed

I am trying to detect drive failures on the DataNodes in a Hadoop cluster. The Cloudera Manager API doesn't have any specific endpoint for that; the CM APIs only cover the NameNode or restarting services. Are there any suggestions here? Thanks a lot!
If you have access to the NameNode UI, the JMX page will give you this information. If you hit the JMX endpoint directly, you get a JSON-formatted page that can be parsed easily.
We use Hortonworks primarily and haven't touched Cloudera in a long time, but I assume the same information can be exposed there somehow.
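As an illustration, here is a rough Python sketch that queries the NameNode JMX endpoint and reports failed volumes per DataNode; the hostname is a placeholder, the port depends on your Hadoop version (50070 for 2.x, 9870 for 3.x), and the exact field names can differ between releases.

```python
import json
import urllib.request

# Placeholder NameNode web UI address; use port 9870 for Hadoop 3.x, 50070 for 2.x.
NAMENODE = "http://namenode.example.com:50070"

url = f"{NAMENODE}/jmx?qry=Hadoop:service=NameNode,name=NameNodeInfo"
with urllib.request.urlopen(url) as resp:
    bean = json.load(resp)["beans"][0]

# LiveNodes is itself a JSON string keyed by DataNode; field names vary by version.
live_nodes = json.loads(bean["LiveNodes"])
for node, info in live_nodes.items():
    failed = info.get("volfails", 0)
    if failed:
        print(f"{node}: {failed} failed volume(s)")
```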

Error when creating cluster environment for azureML: "Failed to get scoring front-end info"

I just started using the Azure Machine Learning Services and ran into this problem. Creating a local environment and deploying my model to localhost works perfectly fine.
Can anyone identify what could have caused this error, because I do not know where to start.
I tried to create a cluster for the location "eastus2" as well, which caused the same error.
Thank you very much in advance!
Btw, the resource group and resources are being created in my Azure account.
Image of error
Ashvin [MSFT]
Sorry to hear that you were facing issues. We checked logs on our side using the info you provided in the screenshot. The cluster setup failed because there weren't enough cores to fit AzureML and system components in the cluster. You specified agent-vm-size of D1v2 which has 1 CPU core. By default we create 2 agents so total cores were 2. To resolve, can you please try creating a new cluster without specifying agent size? Then AzureML will create 2 agents of D3v2 which is 8 cores total. This should fit the AzureML and system components and leave some room for you to deploy your services.
If you want a bigger cluster, you can specify agent-count along with agent-vm-size to size your cluster appropriately, but please keep a minimum total of 8 cores, with each individual VM having at least 2 cores, to ensure the cluster works smoothly. Hope this helps.
We are working on our side to add error handling to ensure the request fails with a clear error message.
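For reference, sizing the cluster explicitly along the lines suggested above can also be done from the Azure ML Python SDK; this is only a hedged sketch, assuming an existing workspace config file, and the cluster name and VM size are illustrative rather than anything from the original question.

```python
from azureml.core import Workspace
from azureml.core.compute import AksCompute, ComputeTarget

# Assumes a workspace config file is present; all names below are illustrative.
ws = Workspace.from_config()

# 2 agents x Standard_D3_v2 (4 cores each) = 8 cores total, matching the guidance above.
prov_config = AksCompute.provisioning_configuration(
    agent_count=2,
    vm_size="Standard_D3_v2",
)

aks_target = ComputeTarget.create(
    workspace=ws,
    name="aml-aks-cluster",
    provisioning_configuration=prov_config,
)
aks_target.wait_for_completion(show_output=True)
```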

Typical Hadoop setup for remote job submission

So I am still a bit new to Hadoop and am currently in the process of setting up a small test cluster on Amazon AWS. My question relates to some tips on structuring the cluster so that it is possible to submit jobs from remote machines.
Currently I have 5 machines. 4 of them are basically the Hadoop cluster with the NameNode, YARN, etc. One machine is used as a manager machine (Cloudera Manager). I am going to describe my thinking on the setup, and if anyone can chime in on the points I am not clear about, that would be great.
I was thinking about what the best setup for a small cluster would be. I decided to expose only the manager machine and probably use it to submit all the jobs. The other machines will see each other, etc., but not be accessible from the outside world. I have a conceptual idea of how to do this, but I am not sure how to properly go about it, so if anyone could point me in the right direction that would be great.
Another big point is that I want to be able to submit jobs to the cluster through the exposed machine from a client machine (which might be Windows). I am not so clear on this setup either. Do I need to have Hadoop installed on that machine in order to use the normal hadoop commands and to write/submit jobs, say from Eclipse or something similar?
So to sum it up, my questions are:
Is this an OK setup for a small test cluster?
How can I go about using one exposed machine to submit/route jobs to the cluster, without having any of the Hadoop nodes on it?
How do I set up a client machine to submit jobs to a remote cluster, with an example of how to do it on Windows? Also, are there any reasons not to use Windows as a client machine in this setup?
Thanks, I would greatly appreciate any advice or help on this.
Since this is not answered I will attempt to answer it.
1. REST API to submit an application:
Resource 1(Cluster Applications API(Submit Application)): https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/ResourceManagerRest.html#Cluster_Applications_APISubmit_Application
Resource 2: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.5/bk_yarn-resource-management/content/ch_yarn_rest_apis.html
Resource 3: https://hadoop-forum.org/forum/general-hadoop-discussion/miscellaneous/2136-how-can-i-run-mapreduce-job-by-rest-api
Resource 4: Run a MapReduce job via rest api
2. Submitting a Hadoop job from a client machine
Resource 1: https://pravinchavan.wordpress.com/2013/06/18/submitting-hadoop-job-from-client-machine/
3. Sending a program to a remote Hadoop cluster
It is possible to send a program to a remote Hadoop cluster and run it there. All you need to ensure is that the resource manager address, fs.defaultFS, library files, and mapreduce.framework.name are set correctly before running the actual job.
Resource 1: (how to submit mapreduce job with yarn api in java)
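As a concrete illustration of option 1, below is a minimal Python sketch against the YARN ResourceManager REST API. The ResourceManager hostname is a placeholder, and the container spec is deliberately trivial; a real MapReduce or Spark submission would also carry local resources, environment variables, and the ApplicationMaster command, in line with the configuration points mentioned in option 3.

```python
import requests

# Placeholder ResourceManager address; 8088 is the default RM web port.
RM = "http://resourcemanager.example.com:8088"

# Step 1: ask the RM for a new application id.
new_app = requests.post(f"{RM}/ws/v1/cluster/apps/new-application").json()
app_id = new_app["application-id"]

# Step 2: submit an application context. This toy spec just echoes a string;
# a real job would fill in local-resources, environment, and the AM command.
submission = {
    "application-id": app_id,
    "application-name": "rest-api-demo",
    "application-type": "YARN",
    "am-container-spec": {
        "commands": {"command": "echo hello from yarn"},
    },
    "resource": {"memory": 1024, "vCores": 1},
}
resp = requests.post(f"{RM}/ws/v1/cluster/apps", json=submission)
print(resp.status_code, f"track it at {RM}/cluster/app/{app_id}")
```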

How to configure high availability with Hadoop 1.0 on AWS EC2 virtual machines

I have already configured this setup using Heartbeat and a virtual IP mechanism on a non-VM setup.
I am using Hadoop 1.0.3 with a shared directory for NameNode metadata sharing. The problem is that on the Amazon cloud there is nothing like a virtual IP to get high availability using Linux-HA.
Has anyone been able to achieve this? Kindly let me know the steps required.
For now I am using HBase WAL replication; HBase versions later than 0.92 support this.
For Hadoop clustering on the cloud, I will wait for the 2.0 release to become stable.
I used the following:
http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/replication/package-summary.html#requirements
On the client side I added logic to keep 2 master servers, used alternately to reconnect in case of network disruption.
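The reconnect logic was roughly of this shape; the following is only an illustrative Python sketch with made-up hostnames and the default HBase master port, not the actual client code.

```python
import socket

# Hypothetical pair of master servers used alternately, as described above.
MASTERS = [("master1.example.com", 60000), ("master2.example.com", 60000)]

def connect_with_failover(masters, timeout=5):
    """Try each master in turn and return the first socket that connects."""
    last_error = None
    for host, port in masters:
        try:
            return socket.create_connection((host, port), timeout=timeout)
        except OSError as err:
            last_error = err
    raise ConnectionError(f"No master reachable: {last_error}")

conn = connect_with_failover(MASTERS)
```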
This worked for a simple setup of 2 machines backing each other up; it is not recommended for a larger number of servers.
Hope it helps.
Well, there are 2 parts of Hadoop to make highly available. The first and more important is, of course, the NameNode. There's a secondary/checkpoint NameNode that you can start up and configure. This will help keep HDFS up and running in the event that your primary NameNode goes down. Next is the JobTracker, which runs all the jobs. To the best of my (outdated by 10 months) knowledge, there is no backup for the JobTracker that you can configure, so it's up to you to monitor and start up a new one with the correct configuration in the event that it goes down.
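Since there is no built-in JobTracker failover in Hadoop 1.x, the monitoring part is left to you; a very rough watchdog sketch is below, with placeholder hostnames and the Hadoop 1.x web UI default ports (50070 for the NameNode, 50030 for the JobTracker), assuming you wire in your own alerting or restart step.

```python
import time
import urllib.request

# Placeholder hosts; 50070 and 50030 are the Hadoop 1.x NameNode/JobTracker web UI defaults.
SERVICES = {
    "namenode": "http://namenode.example.com:50070",
    "jobtracker": "http://jobtracker.example.com:50030",
}

def is_up(url, timeout=5):
    """Return True if the service's web UI answers at all."""
    try:
        with urllib.request.urlopen(url, timeout=timeout):
            return True
    except OSError:
        return False

while True:
    for name, url in SERVICES.items():
        if not is_up(url):
            # Hook in your alerting, or restart the daemon with the correct configuration.
            print(f"{name} appears to be down at {url}")
    time.sleep(30)
```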
