No details in KqlError when I try to use KqlMagic - azure

I'm trying to connect to an Azure Data Explorer cluster, but I keep getting a non-descriptive error. I'm following this tutorial:
https://learn.microsoft.com/en-us/sql/azure-data-studio/notebooks/notebooks-kqlmagic?view=sql-server-ver16.
Has anyone seen this?
click here for screenshot
I was trying to connect to Azure Data Explorer from Azure Machine Learning Studio notebooks. I also tried it in Jupyter notebooks with an Anaconda environment and got the same error.
However, the command %reload_ext Kqlmagic worked for me.
Maybe it's because that Azure login has access to multiple directories (tenants)?
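For what it's worth, after the extension loads, the actual connection is made with a %kql magic. A minimal sketch, assuming device-code authentication and with placeholder tenant/cluster/database values (pinning the tenant can help when the login spans multiple directories):
%reload_ext Kqlmagic
%kql azureDataExplorer://tenant='contoso.onmicrosoft.com';code;cluster='mycluster';database='mydatabase'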

Related

runOutput isn't appearing even after using dbutils.notebook.exit in ADF

I am using the code below to return some information from an Azure Databricks notebook, but runOutput isn't appearing even after the successful completion of the notebook activity.
The code that I used:
import json

# Return a JSON string from the notebook; ADF surfaces it as the activity's runOutput
dbutils.notebook.exit(json.dumps({
    "num_records": dest_count,
    "source_table_name": table_name
}))
The Databricks notebook exits properly, but the Notebook activity isn't showing runOutput.
Can someone please help me figure out what is wrong here?
When I tried the above in my environment, it worked fine for me.
These are my linked service configurations:
Result:
I suggest you try troubleshooting steps such as switching to a new notebook, creating a new Databricks workspace, or using an existing cluster in the linked service.
If it still gives the same result, it's better to raise a support ticket for your issue.
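For completeness: the JSON string passed to dbutils.notebook.exit is what ADF exposes as the Notebook activity's runOutput, and in a later pipeline activity it is usually read with an expression along these lines (the activity name Notebook1 is a placeholder; since the value is a JSON string, individual fields may need a json() conversion first):
@activity('Notebook1').output.runOutput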

How to resolve "Uncaught ApiError Error: Project project-name has been deleted." issue in VSCode for Google Cloud function development?

I am very new to GCP and Node.js, and I recently started working on Cloud Functions development in Node.js. I researched many websites to get started, so I don't know exactly which steps I followed, but somehow I was able to build a simple Node.js project that connects to Google BigQuery and executes a SQL statement on my local machine in the VS Code IDE. At first it gave an error while connecting to GCP, and I found a few solutions, like configuring the project in Google Cloud Shell to authenticate it.
Everything went smoothly and my code connected to BigQuery successfully. Some days later I shut down a project I had created in the GCP Console, then tried to execute my local code that had worked perfectly earlier. It now throws an error as soon as it reaches the part of the code that connects to BigQuery.
Error is:
Uncaught ApiError Error: Project project-X has been deleted.
I tried to configure a new project using the command:
gcloud config set project myProject-XYZ
The output is: Updated property [core/project].
Then I tried to run my code in VS Code again, but the problem still persists. I am not sure where to set a new project or remove the reference to the old/shut-down project.
I am expecting some guidance in order to understand this development.
Reviewing the information for GCP projects, the following could help with the project you will use.
There are several possible identifiers for GCP projects:
Each project is granted a 12-digit identifier called a "project number."
When creating a project, you can choose a unique alphanumeric project ID; however, it cannot be modified later. The default value is frequently something like obvious-animal-1234.
The "project name" is a freeform text string that you can alter at any time.
In addition to the information above, you can run gcloud projects list to see your projects with their ID, number, and name, and verify you are using the right identifier.
Remember that you should use the project ID with gcloud.
Also keep in mind that the Cloud Resource Manager API should be enabled in your Google Cloud console.
Here is the direct link to the App Engine Admin API.
Now, regarding the last part of your question, you can use the command: gcloud config unset account
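If gcloud is already pointing at the right project but the code still fails, another thing worth checking (this is a sketch, not something from the original answer) is whether the client is being constructed with an explicit project ID rather than picking up the deleted one from the environment. In the Python BigQuery client that looks roughly like the lines below, with a placeholder project ID; the Node.js client used in the question accepts an equivalent projectId option in its constructor.
from google.cloud import bigquery

# Pass the new project ID explicitly instead of relying on whatever the
# environment or cached credentials still point to (placeholder ID below).
client = bigquery.Client(project="myproject-xyz")
print(list(client.query("SELECT 1 AS ok").result()))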

UNABLE_TO_GET_ISSUER_CERT_LOCALLY error with Azure storage explorer on MacOS

I am using Azure Storage Explorer on macOS to connect to ADLS using Azure AD. I am able to access the containers right after I log in to my Mac, but after the Mac goes to sleep, trying to access the containers gives the error UNABLE_TO_GET_ISSUER_CERT_LOCALLY. If I restart the MacBook, it works fine again.
Is there anything I can do to overcome this issue, like clearing a temp folder or some cached files to make it work? Any help is appreciated.
I am not sure if this is the right place to ask this question.
For the certificate issue, you could refer to this doc to troubleshoot.
Or you can launch Storage Explorer from the command line with the --ignore-certificate-errors flag, which makes it ignore the certificate error.
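On macOS, launching it that way would look something like the line below; the app name and the open --args mechanism are assumptions based on a default install, so adjust it to match your machine.
open -a "Microsoft Azure Storage Explorer" --args --ignore-certificate-errors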

Deep Learning Virtual Machine can't run jupyter "No such notebook dir: ''/dsvm/Notebooks''"

I've set up a vm with Deep Learning Virtual Machine (Microsoft Azure).
Normally, I connect to the VM over SSH, etc.
Then I run Jupyter with jupyter notebook --no-browser.
But this time I can't run Jupyter Notebook because I get this message: Bad config encountered during initialization: "No such notebook dir: ''/dsvm/Notebooks''"
How can I fix that?
Thanks for your help!
I presume you are trying to run Jupyter Notebook, and with that goal in mind, I suggest you follow these steps:
Move your notebooks to ~/notebooks/
Find the public IP address of your VM from the Azure dashboard
Open https://your_public_ip_address:8000 in your web browser and log in using your VM login credentials
You should then be able to see all the files you have in ~/notebooks/
I presume this setup is defined by Azure for security reasons, to prevent people from having an open port without authentication. Hope this helps!
This worked for me:
jupyter notebook --notebook-dir=/home/$USER/notebooks --no-browser
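If you would rather have the plain jupyter notebook command work again without the extra flag, a small sketch of a persistent fix (the path below is an example; point it at a directory that actually exists) is to set the default notebook directory in the Jupyter config file:
# ~/.jupyter/jupyter_notebook_config.py
c.NotebookApp.notebook_dir = '/home/<your-user>/notebooks'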

Azure ML Workbench File from Blob

When trying to reference/load a .dsource or .dprep file generated from a data source that points to blob storage, I receive the error "No files for given path(s)".
Tested with .py and .ipynb files. Here's the code:
# Use the Azure Machine Learning data source package
from azureml.dataprep import datasource

df = datasource.load_datasource('POS.dsource')  # Error generated here

# Remove this line and add code that uses the DataFrame
df.head(10)
Please let me know what other information would be helpful. Thanks!
Encountered the same issue and it took some research to figure out!
Currently, data source files from blob storage are only supported for two cluster types: Azure HDInsight PySpark and Docker (Linux VM) PySpark.
In order to get this to work, it's necessary to follow the instructions in Configuring Azure Machine Learning Experimentation Service.
I also ran az ml experiment prepare -c <compute_name> to install all dependencies on the cluster before submitting the first command, since that deployment takes quite a bit of time (at least 10 minutes for my D12 v2 cluster).
Got the .py files to run with an HDInsight PySpark compute cluster (for data stored in Azure blobs), but .ipynb files are still not working on my local Jupyter server - the cells never finish.
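For context, submitting the .py script against the prepared compute target was done through the same experimentation CLI, roughly as follows; the compute name and script name are placeholders, and this assumes the Workbench-era az ml CLI rather than the current Azure ML tooling.
az ml experiment submit -c <compute_name> load_pos.py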
I'm from the Azure Machine Learning team - sorry you are having issues with the Jupyter notebook. Have you tried running the notebook from the CLI? If you run it from the CLI you should see stderr/stdout; the IFrame in Workbench swallows the actual error messages. This might help you troubleshoot.
