Databricks cluster failing to start

Cluster terminated. Reason: Cloud provider launch failure
A cloud provider error was encountered while launching worker nodes. See the Databricks guide for more information.
GCP error message: Compute Quota Exceeded for databricksharish2022 in region asia-southeast1: Quota: SSD_TOTAL_GB, used 0.0 and requested 1000.0 out of 500.0

Related

Issue loading big data using Apache Spark Connector for SQL Server to Azure SQL

I'm trying to load a PySpark DataFrame into Azure SQL DB using the Apache Spark Connector for SQL Server and Azure SQL, in an Azure Databricks environment.
[Environment] - Azure DataBricks
DBR: 9.1 LTS
Driver and Worker nodes: DS3_V2
No. of workers: 2 to 8 [AutoScaling]
[Dataset] - NYC Yellow Taxi Dataset
It works fine for data sizes around 30M rows, but for data sizes around 90M rows I get the issue below:
org.apache.spark.SparkException: Job aborted due to stage failure:
Task 5 in stage 20.0 failed 4 times, most recent failure: Lost task
5.3 in stage 20.0 (TID 381) (10.139.64.7 executor 5): com.microsoft.sqlserver.jdbc.SQLServerException: Database
'[database]' on server '[servername]' is not currently
available. Please retry the connection later. If the problem persists,
contact customer support, and provide them the session tracing ID of
[some id]
The code that I use:
try:
    df.write \
        .format("com.microsoft.sqlserver.jdbc.spark") \
        .mode("overwrite") \
        .option("truncate", True) \
        .option("url", url) \
        .option("dbtable", "dbo.nyc_yellow_trip_test_2017") \
        .option("user", username) \
        .option("password", password) \
        .save()
except Exception as error:
    # Note: connector failures surface as generic JVM exceptions,
    # so catching only ValueError would miss them.
    print("Connector write failed", error)
Sometimes this error is the result of intermittent failures in specific regions.
You can check Resource health in the left vertical panel of the database's page in the Azure portal.
In the cloud environment you'll find that failed and dropped database connections happen periodically. That's partly because you're going through more load balancers compared to the on-premises environment where your web server and database server have a direct physical connection. Also, sometimes when you're dependent on a multi-tenant service you'll see calls to the service get slower or time out because someone else who uses the service is hitting it heavily. In other cases you might be the user who is hitting the service too frequently, and the service deliberately throttles you – denies connections – in order to prevent you from adversely affecting other tenants of the service.
Refer to: https://learn.microsoft.com/en-us/answers/questions/212108/database-x-on-server-y-is-not-currently-available.html
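Since these are transient faults, a common mitigation is to wrap the write in retry logic with exponential backoff. A minimal sketch, assuming the write is passed in as a zero-argument callable; the names `write_with_retry`, `max_attempts`, and `base_delay` are illustrative, not part of the connector's API:

```python
import time

def write_with_retry(writer, max_attempts=4, base_delay=5):
    """Retry a Spark write, backing off exponentially between attempts.

    `writer` is a zero-argument callable that performs the write, e.g.
    lambda: df.write.format("com.microsoft.sqlserver.jdbc.spark")...save()
    """
    for attempt in range(1, max_attempts + 1):
        try:
            writer()
            return
        except Exception as error:  # connector errors surface as generic exceptions
            if attempt == max_attempts:
                raise  # give up after the last attempt
            delay = base_delay * 2 ** (attempt - 1)
            print(f"Write attempt {attempt} failed ({error}); retrying in {delay}s")
            time.sleep(delay)
```

This only papers over short outages; if the database is unavailable for minutes at a time, a quota or capacity issue is more likely the cause.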

Azure unexpected deployment template failure

I was trying to create a cluster in Databricks, but every time I try it doesn't start and shows this message: Error code: UnexpectedDeploymentTemplateFailure, error message: Failing to launch instances for the cluster because of unexpected deployment failure. Message: {"error":{"code":"MultipleErrorsOccurred","message":"Multiple error occurred: BadRequest,BadRequest. Please see details.","details":[{"code":"InvalidTemplateDeployment","message":"The template deployment failed with error: 'The resource with id: '/subscriptions/efbb03c8-943f-477e-8c81-568425a73b74/resourceGroups/databricks-rg-DPC-ovvxul4l4o77a/providers/Microsoft.Compute/virtualMachines/4233ecc1fb88403caec0a5d994698bb7' failed validation with message: 'The requested size for resource '/subscriptions/efbb03c8-943f-477e-8c81-
Any help, please?
Actually, I realized that I can't use Databricks with a free Azure account because of the required cluster size. You can use the Community Edition instead.

Error while creating AKS Kubernetes cluster using Azure DevOps: Failure sending request: StatusCode=400 -- Original Error: Code="QuotaExceeded"

I am trying to create a Kubernetes cluster using Terraform and the azurerm provider, but I am getting the error below:
Error: creating Managed Kubernetes Cluster "k8stest_dev" (Resource
Group "kubernetes_dev"):
containerservice.ManagedClustersClient#CreateOrUpdate: Failure sending
request: StatusCode=400 --
Original Error: Code="QuotaExceeded" Message="Provisioning of
resource(s) for container service k8stest_dev in resource group
kubernetes_dev failed.
Message: Operation could not be completed as it results in exceeding
approved standardDSv3Family Cores quota. Additional details -
Deployment Model: Resource Manager, Location: centralus, Current
Limit: 4, Current Usage: 0, Additional Required: 6, (Minimum) New
Limit Required: 6.
Submit a request for Quota increase
The issue is that you are using the standardDSv3 family, and you only have a quota of 4 cores for that family in the region you are deploying to, while the cluster requires 6.
To resolve this, raise a quota increase request, or create VMs from another family that needs fewer cores.
Reference:
You can refer to this Microsoft documentation to learn how to request a quota increase.
You can refer to this Microsoft documentation for the available VM sizes.
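The QuotaExceeded message already contains every number needed to decide whether a deployment fits: current usage, additional cores required, and the current limit. A minimal sketch of that check (the function name and shape are mine, not part of any Azure SDK):

```python
def fits_quota(current_usage, additional_required, current_limit):
    """Mirror the arithmetic in a QuotaExceeded message.

    Returns (fits, minimum_new_limit): the deployment fits if
    usage plus the additional cores stays within the current limit;
    otherwise minimum_new_limit is what to ask for in a quota request.
    """
    minimum_new_limit = current_usage + additional_required
    return minimum_new_limit <= current_limit, minimum_new_limit

# Numbers from the error above: Current Usage 0, Additional Required 6,
# Current Limit 4 for standardDSv3Family in centralus.
fits, new_limit = fits_quota(0, 6, 4)
print(fits, new_limit)  # False 6 -> request a limit of at least 6
```

Running this against other VM families' quotas in the target region tells you whether switching families avoids the request entirely.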

Azure Synapse Spark Pool Submission Failure with error message "SparkJobDefinitionActionFailed"

I am following the tutorials below and am trying to submit a Spark job using the Spark engine in Azure Synapse.
The submission failed with the following error:
Error:
{
    "code": "SparkJobDefinitionActionFailed",
    "message": "Spark job batch request for workspace contosows, spark compute contosospark with session id null failed with a system error. Please try again",
    "target": null,
    "details": null,
    "error": null
}
Can anyone give some guidance/suggestions on how to resolve it?
More information about my setups.
Region: Southeast Asia for both Azure Synapse workspace + ADLS Gen2
I granted myself both the Storage Blob Data Owner and Storage Blob Data Contributor roles, as suggested.
Tutorials used:
https://learn.microsoft.com/en-us/azure/synapse-analytics/quickstart-create-workspace
https://learn.microsoft.com/en-us/azure/synapse-analytics/quickstart-create-apache-spark-pool-portal
https://learn.microsoft.com/en-us/azure/synapse-analytics/spark/apache-spark-job-definitions
Thanks!
After investigation, the product group deployed a hotfix for this problem.
You can now submit the Spark job without any issues. Please let me know if you are still experiencing this issue.

Azure ML free trial: how to submit pipeline?

I'm using a free trial account on MS Azure and I'm following this tutorial:
https://learn.microsoft.com/en-us/azure/machine-learning/tutorial-designer-automobile-price-train-score
I'm stuck when I try to "submit the pipeline".
The reason seems to be that I can't create a compute instance or a training cluster on a free plan.
I still have 200 USD of free credits, so I guess there must be a solution?
Error messages:
Invalid graph: The pipeline compute target is invalid.
400: Compute Test3 in state Failed, which is not able to use
Compute instance: creation failed
The specified subscription has a total vCPU quota of 0 and is less than the requested compute training cluster and/or compute instance's min nodes of 1 which maps to 4 vCPUs
Please check this announcement from Microsoft regarding the issue:
https://azure.microsoft.com/blog/our-commitment-to-customers-and-microsoft-cloud-services-continuity/
As of now, none of the free trials will work for this.
