Databricks "Import failed with error: Service Unavailable" - databricks

I have a .dbc file, and when I import it into Databricks I get the error "Import failed with error: Service Unavailable".
I tried this in Azure Databricks as well as Databricks Community Edition and got the same error. Is there a problem with my .dbc file, or is it something else?

The problem was with abc.dbc itself. I had renamed it to abc.dbc.txt to send it by email, then opened the file in Notepad and saved it as "abc.dbc"; that copy gave the error. When I instead just changed the extension of "abc.dbc.txt" back to "abc.dbc" without opening it, the import worked.
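If you want to do the rename programmatically, here is a minimal sketch (the filenames are the ones from this post; otherwise everything is hypothetical) that changes the extension back without opening the file in an editor:

import os

# Rename the attachment back to .dbc without opening it in an editor,
# so the binary archive contents are left untouched.
os.rename("abc.dbc.txt", "abc.dbc")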

Related

How to read a CSV file from an FTP server using PySpark in Databricks Community

I am trying to fetch a file over FTP (hosted on Hostinger) using PySpark in Databricks Community.
Everything works fine until I try to read that file using spark.read.csv('MyFile.csv').
Below are the code and the error.
PySpark Code:
from pyspark import SparkFiles

res = spark.sparkContext.addFile('ftp://USERNAME:PASSWORD@URL/<folder_name>/MyFile.csv')
myfile = SparkFiles.get("MyFile.csv")
spark.read.csv(path=myfile)  # Errors out here
print(myfile, sc.getRootDirectory())  # Outputs almost the same path (except dbfs://)
Error:
AnalysisException: Path does not exist: dbfs:/local_disk0/spark-ce4b313e-00cf-4f52-80d8-dff98fc3eea5/userFiles-90cd99b4-00df-4d59-8cc2-2e18050d395/MyFile.csv
Because spark.addFile downloads the file to the driver's local disk, while Databricks uses DBFS as the default filesystem, spark.read.csv resolves the bare path under dbfs:/ and fails. Please try the code below, which prefixes the path with the file: scheme so Spark reads from the driver's local filesystem, and see if it fixes your issue:
from pyspark import SparkFiles

res = spark.sparkContext.addFile('ftp://USERNAME:PASSWORD@URL/<folder_name>/MyFile.csv')
myfile = SparkFiles.get("MyFile.csv")
spark.read.csv(path='file:' + myfile)  # the file: prefix forces the local filesystem instead of dbfs:/
print(myfile, sc.getRootDirectory())
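If you would rather keep the file on DBFS, another option (a sketch using the Databricks dbutils utility and a hypothetical dbfs:/tmp target path) is to copy the downloaded file from the driver's local disk into DBFS and then read it with the default scheme:

dbutils.fs.cp('file:' + myfile, 'dbfs:/tmp/MyFile.csv')  # copy from the driver's local disk into DBFS
df = spark.read.csv('dbfs:/tmp/MyFile.csv')              # now the default dbfs:/ resolution works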

Azur "Error message: \"Failed to download all specified files. Exiting. Error Message: invalid uri fileUri_{1}\r\ninvalid uri fileUri_{2}\r\ninvalid u

Azure custom script execution failed; please help me fix it. The error is:
"Failed to download all specified files. Exiting. Error Message: invalid uri fileUri_{1}\r\ninvalid uri fileUri_{2}\r\ninvalid uri fileUri_{3}\r\n" (more information on troubleshooting is available at https://aka.ms/VMExtensionCSEWindowsTroubleshoot)
Custom script for Windows (installs the IIS service):
# Load the Server Manager module, then install IIS with all sub-features,
# ASP.NET 4.5 support, and the .NET Framework features
Import-Module ServerManager
Add-WindowsFeature Web-Server -IncludeAllSubFeature
Add-WindowsFeature Web-Asp-Net45
Add-WindowsFeature NET-Framework-Features
Error:
{"code":"DeploymentFailed","message":"At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details.","details":[{"code":"VMExtensionProvisioningError","message":"VM has reported a failure when processing extension 'CustomScriptExtension'. Error message: "Failed to download all specified files. Exiting. Error Message: invalid uri fileUri_{1}\r\ninvalid uri fileUri_{2}\r\ninvalid uri fileUri_{3}\r\n"\r\n\r\nMore information on troubleshooting is available at https://aka.ms/VMExtensionCSEWindowsTroubleshoot "}]}
After further checking, I was able to fix the issue as described below.
The custom script and storage account names have to follow Azure's naming rules; I had originally used underscores and capital letters when naming the storage account, and that was the main cause of the error.
Deployment error and naming screenshots (not shown here): before the change, the deployment failed with the invalid-name error; after renaming the storage account to the required format, the custom script extension installed successfully.
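For reference, Azure storage account names must be 3 to 24 characters long and may contain only lowercase letters and numbers. A small Python check (a sketch; the function name is mine, not anything Azure provides):

import re

def is_valid_storage_account_name(name: str) -> bool:
    # Azure storage account names: 3-24 characters, lowercase letters and digits only.
    return re.fullmatch(r"[a-z0-9]{3,24}", name) is not None

print(is_valid_storage_account_name("My_Storage"))   # False - capitals and underscore are rejected
print(is_valid_storage_account_name("mystorage01"))  # True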

aws CLI: get-job-output erroring with either [Errno 9] Bad file descriptor or [Errno 2] No such file or directory

I'm having some problems retrieving job output from an AWS Glacier vault.
I initiated a job (aws glacier initiate-job); the job is reported as complete by aws glacier, so I then tried to retrieve the job output:
aws glacier get-job-output --account-id - --vault-name <myvaultname> --job-id <jobid> output.json
However, I receive an error: [Errno 2] No such file or directory: 'output.json'
Thinking that perhaps the file needed to be created first (which really doesn't make sense), I created it beforehand, but then I received the [Errno 9] Bad file descriptor error instead.
I'm currently using the following version of the AWS CLI:
aws-cli/2.4.10 Python/3.8.8 Windows/10 exe/AMD64 prompt/off
I tried using the aws CLI from both an Administrative and non-Administrative command prompt with the same result. Any ideas on making this work?
From a related reported issue, you can try running this command in a command prompt:
copy "c:\Program Files\Amazon\AWSCLI\botocore\vendored\requests\cacert.pem" "c:\Program Files\Amazon\AWSCLI\certifi"
It seems to be a certificate error.
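If the CLI keeps failing, the same retrieval can also be attempted with boto3 as a cross-check (a sketch, assuming default AWS credentials; the vault name and job ID are the same placeholders as above):

import boto3

glacier = boto3.client("glacier")
resp = glacier.get_job_output(accountId="-", vaultName="<myvaultname>", jobId="<jobid>")
# resp["body"] is a streaming object, so write the bytes out ourselves
with open("output.json", "wb") as f:
    f.write(resp["body"].read())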

Azure DevOps Download Secure File

I am facing an issue where I uploaded a server.key via the Download Secure File utility, but in my pipeline it is resolved as D:\a\_temp\server.key when I reference it in my task using echo $(server.secureFilePath). Any idea what could be the issue?
Error:
ERROR running auth:jwt:grant: We encountered a JSON web token error, which is likely not an issue with Salesforce CLI. Here’s the error: ENOENT: no such file or directory, open 'D:\a\1\s\a_tempserver.key'
In your current situation, we recommend using $(Agent.TempDirectory)/server.key instead of $(server.secureFilePath). In Azure DevOps, when you upload a secure file and download it in a pipeline, the file is placed in the agent's temp directory, not under the source directory.
Here are some screenshots from my test (not shown here): the Secure Files library and the resolved value of $(Agent.TempDirectory). Hope this helps.
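For reference, a minimal pipeline YAML sketch (the task alias serverKey is illustrative) that downloads the secure file and then reads it through the task's output variable, which points into the agent's temp directory:

steps:
- task: DownloadSecureFile@1
  name: serverKey
  inputs:
    secureFile: server.key
- script: echo $(serverKey.secureFilePath)  # prints a path under $(Agent.TempDirectory)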

How to troubleshoot a package loading error in Spark

I'm using Spark on HDInsight with a Jupyter notebook, and I'm using the %%configure "magic" to import packages. Every time there is a problem with the package, Spark crashes with the error:
The code failed because of a fatal error: Status 'shutting_down' not supported by session..
or
The code failed because of a fatal error: Session 28 unexpectedly reached final status 'dead'. See logs:
Usually the problem was that I had mistyped the name of the package, so after a few attempts I could solve it. Now I'm trying to import spark-streaming-eventhubs_2.11 and I think I got the name right, but I still receive the error. I looked at all kinds of logs but still couldn't find one that shows any relevant info. Any idea how to troubleshoot similar errors?
%%configure -f
{ "conf": {"spark.jars.packages": "com.microsoft.azure:spark-streaming-eventhubs_2.11:2.0.5" }}
Additional info: when I run
spark-shell --conf spark.jars.packages=com.microsoft.azure:spark-streaming-eventhubs_2.11:2.0.5
The shell starts fine and downloads the package.
I was finally able to find the log files that contain the error. There are two log files that could be interesting:
Livy log: livy-livy-server.out
Yarn log
On my HDInsight cluster, I found the Livy log by connecting to one of the head nodes with SSH and downloading a file at this path (this log didn't contain useful info):
/var/log/livy/livy-livy-server.out
The actual error was in the yarn log file accessible from YarnUI. In HDInsight Azure Portal, go to "Cluster dashboard" -> "Yarn", find your session (KILLED status), click on "Logs" in the table, find "Log Type: stderr", click "click here for full log".
The problem in my case was a Scala version incompatibility between one of the dependencies of spark-streaming_2.11 and Livy. This is supposed to be fixed in Livy 0.4. More info here.
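If you prefer the command line, the same container logs can usually also be pulled from a head node over SSH with the YARN CLI (the application ID below is a placeholder; the real one is shown in the Yarn UI next to the killed session):

yarn logs -applicationId application_1234567890123_0001 > app.log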
