Facing issue while installing Databricks CLI. Not able to enter token value at command prompt - databricks

I am trying to install the Databricks CLI. At the command prompt, I entered the command databricks configure --token and provided the Databricks host URL. After that, it asks for the value of the token. I have generated a token value, but it's not letting me enter it; the copy-paste option is also not working, so I am not able to finish setting up the Databricks CLI. Can someone please help me resolve this issue?

Recent versions of the Databricks CLI don't echo the pasted text because the token is a sensitive value. Just paste the token into the terminal using the standard shortcut (as described in this article), or via the terminal menu (there should be a Paste entry in the Edit menu), and press ENTER.
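If pasting into the prompt still fails, one workaround (a minimal sketch, not from the original answer) is to skip the interactive prompt entirely and write the CLI's configuration file yourself. The legacy Databricks CLI reads ~/.databrickscfg; the host and token below are placeholders:

[DEFAULT]
host = https://<your-workspace-url>
token = <your-personal-access-token>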

Related

I am trying to use databricks cli and invoke the databricks configure

I am trying to use the databricks CLI and invoke databricks configure. At the point where we need to enter the token, I am neither able to type anything nor copy-paste anything.
You just need to copy the token, paste it, and hit Enter.
Note: Even after pasting, the token will show as blank.
Now you can run the databricks fs ls command to check whether you have configured it successfully.
For more details, refer to this similar thread: databricks configure using cmd and R
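To verify the configuration (using the databricks fs ls command mentioned above; dbfs:/ is just the DBFS root):

databricks fs ls dbfs:/

If this prints a directory listing instead of an authentication error, the token was accepted. As a further workaround for the paste problem, the CLI can also read credentials from the DATABRICKS_HOST and DATABRICKS_TOKEN environment variables, which avoids the interactive prompt altogether.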

Volume Shadow Copy (VSS) - Catastrophic failure

I have an issue with Volume Shadow Copies (VSS). This issue started a few days ago. I’ve tried MANY things from Google but cannot find a solution.
What’s frustrating (and surprising) is that even after I restored the computer from a sector-by-sector image backup, to a time when this issue did not exist, I still get this issue.
SYMPTOMS:
When trying to create an image in Macrium Reflect, it fails with the error VSS_E_UNEXPECTED_PROVIDER_ERROR.
When trying to run “check” on any disk from “tools,” I get “Windows was unable to scan the drive.”
From Windows Events: “Volume Shadow Copy Service error: Error calling a routine on a Shadow Copy Provider {b5946137-7b9f-4925-af80-51abd60b20d5}. Routine details Cannot ask provider {b5946137-7b9f-4925-af80-51abd60b20d5} if volume is supported. [0x8000ffff] [hr = 0x8000ffff, Catastrophic failure].”
FACTS:
When I try “vssadmin delete shadows /all” in cmd (to clean up any dead VSS snapshots), I get “No items that satisfy the query.”
The only VSS provider in the registry (HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\VSS\Providers) is Microsoft V1.0.
The CMD command VSSADMIN LIST PROVIDERS only shows Microsoft.
The “Microsoft Storage Spaces SMP”, “Microsoft Software Shadow Copy Provider”, and “Volume Shadow Copy” services are set to automatic and run OK.
THINGS I’VE TRIED (not a complete list)
I re-registered the VSS components with a .bat file.
Tried resizing the VSS storage with “vssadmin Resize ShadowStorage /For=C: /On=C: /Maxsize=25GB” in cmd; I get “The shadow copy provider had an error.”
In safe mode, I ran chkdsk /f, SFC /SCANNOW, and DISM.
In safe mode, I ran VSS repair and WMI repair via “Tweaking.com Windows Repair (All in One)” software.
All disks are reported ok in CrystalDiskInfo.
In HKEY_LOCAL_MACHINE\SYSTEM\Setup, SystemSetupInProgress is set to 0.
When running “vssadmin list writers” in CMD, they all say “no error.”
I am lost as to what to do next.
The UpperFilters value in the registry got deleted for some reason.
Carry out the following steps:
Open a new Notepad window
Copy and paste the below script into Notepad, and save as vss_fix.reg:
Windows Registry Editor Version 5.00
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{71a27cdd-812a-11d0-bec7-08002be2092f}]
"UpperFilters"=hex(7):76,00,6f,00,6c,00,73,00,6e,00,61,00,70,00,00,00,00,00
This will restore the UpperFilters value at this location to volsnap (the hex payload decodes to the string volsnap; see the quick check below). Once you've copied and pasted the registry script, save it in a location you can easily access (e.g. the desktop), and double-click the file to run the fix.
You will need to restart your machine once the fix has run.
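If you want to confirm what that hex payload decodes to before running the fix, here is a quick check (a small Python sketch, not part of the original answer; hex(7) denotes a REG_MULTI_SZ value, which is a list of UTF-16LE strings separated by NUL characters):

# The hex(7) payload from the .reg script above, with the commas removed
raw = bytes.fromhex("76006f006c0073006e006100700000000000")
# REG_MULTI_SZ: UTF-16LE strings, NUL-separated, double-NUL-terminated
print(raw.decode("utf-16-le").split("\x00"))  # prints ['volsnap', '', '']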
I'll share the event log error from when I tried and failed to run Windows Backup, in case it helps searchers find this post.
Volume Shadow Copy Service error: Error calling a routine on a Shadow Copy Provider {b5946137-7b9f-4925-af80-51abd60b20d5}. Routine details IVssSnapshotProvider::IsVolumeSupported() failed with 0x8000ffff [hr = 0x8000ffff, Catastrophic failure].
FWIW, I don't know what messed it all up. I had installed AOMEI Backupper right before this happened, aiming to use it to clone to a new flash drive. I didn't actually use it since it was a paid version, but I did install and run the app. I highly suspect it played a role here. I've seen others have similar problems with Acronis utilities, TrueCrypt, etc.

How to list Databricks secret scopes using Python when working with its Secrets API

I can create a scope. However, I want to be sure to create the scope only when it does not already exist. Also, I want to do the check using Python. Is that doable?
What I have found is that I can create the scope multiple times without getting an error message -- is this the right way to handle it? The documentation at https://docs.databricks.com/security/secrets/secret-scopes.html#secret-scopes points out using
databricks secrets list-scopes
to list the scopes. However, I created a cell and ran
%sh
databricks secrets list-scopes
I got an error message saying "/bin/bash: databricks: command not found".
Thanks!
This will list all the scopes.
dbutils.secrets.listScopes()
You can't run CLI commands from your Databricks cluster (through a notebook). The CLI needs to be installed and configured on your own workstation, and then you can run these commands there after you configure the connection to a Databricks workspace using the generated token.
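Putting this together for the original question, here is a hedged sketch that runs inside a notebook: the existence check uses dbutils.secrets.listScopes() from above, and the creation call uses the Databricks Secrets REST API endpoint /api/2.0/secrets/scopes/create. HOST, TOKEN, and the scope name are placeholders you must supply:

import requests

HOST = "https://<your-workspace-url>"  # placeholder workspace URL
TOKEN = "<personal-access-token>"      # placeholder token
scope_name = "my-scope"                # hypothetical scope name

# listScopes() returns objects with a .name attribute
existing = [s.name for s in dbutils.secrets.listScopes()]
if scope_name not in existing:
    resp = requests.post(
        f"{HOST}/api/2.0/secrets/scopes/create",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"scope": scope_name},
    )
    resp.raise_for_status()  # raises if the scope could not be created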
Still, you can run Databricks CLI commands in a notebook by doing the same kind of databricks-cli setup at the cluster level and running them as bash commands. Install the Databricks CLI with pip install databricks-cli.

Using ArangoDB through the command line not working

Thanks for taking the time. Arango is installed and the WebUI has been working fine. I've been doing the tutorials and finished the basics, but on attempting to move on and import my own data, I'm getting stuck.
I stored my data in a Google Sheet, so I exported that for ingest, but when trying to access the command line tools to ingest it, I hit a hurdle of "command not found". I tried "arangoimport", for example, as recommended here, as well as "arangoimp" and other recommendations I found online from a search. I also tried other command line tools and had the same issue.
Where should I be running this command from (and how can I get there)? If the command can and should be run from the first terminal window I open, then please can you tell me what I'm missing and need to do :)
Thanks,
Josh
Open a terminal and use cd to go to the directory in which arangoimport.exe is stored.
Once the terminal is in that directory, you can run the arangoimport command.
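Once you are in that directory (or have added it to your PATH), a typical import of a CSV exported from Google Sheets looks something like this (a sketch; the file name, collection name, and user are placeholders, and arangoimport will prompt for the password):

arangoimport --file sheet_export.csv --type csv --collection mycollection --server.database _system --server.username root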

Removing need for passphrase key in Google Cloud Projects

I am trying to set up Datalab notebooks in a Google Cloud project. I screwed up and entered a passphrase during the first
$ datalab connect INSTANCE_NAME
install. I quickly realized that I wished I hadn't done that, so I deleted the instance and tried to reinstall. It asked again.
So, I did a bit of googling (after just deleting the new project and creating a new one), and discovered that the passphrase is required across projects.
So, I went to the metadata tab and deleted it there, but it comes back whenever I try to create an instance (on any project) through the terminal.
Ok. So, I tried using gcloud to change the instance to not need the project passphrase, using
$ gcloud compute instances add-metadata [INSTANCE_NAME] --metadata block-project-ssh-keys=TRUE
Same thing.
Please, what the heck am I missing? How do I just permanently remove the need for a passphrase when setting up an instance in datalab from the ssh terminal?
I wouldn't mind using the passphrase so much, but whenever I enter it, the terminal just stops (not a hard stop; it just sits there without processing until I Ctrl+C and force-stop it. I can type and press Enter and whatever, but it doesn't register my passphrase.)
Any help would be greatly appreciated.
FYI, I am setting all this up using a stock Pixelbook. That shouldn't matter since everything is through Google Cloud, but there ya go.
Thanks!
The passphrase isn't tied to your GCP project or Datalab instances in any way.
Instead, it is a property of your local private SSH key. This file usually winds up under ~/.ssh and is named something like google_compute_engine.
Since you mention using a Pixelbook, I assume you are running the datalab connect command from Cloud Shell. In that case, this file is stored inside of your Cloud Shell instance.
Delete that file, and then the next run of datalab connect will generate a new one (for which you can leave the passphrase empty).
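In a Cloud Shell session, that looks something like this (a sketch based on the steps above; the .pub file is the matching public key, and the file names can vary):

rm ~/.ssh/google_compute_engine ~/.ssh/google_compute_engine.pub
datalab connect INSTANCE_NAME

When datalab connect regenerates the key pair, just press ENTER at the passphrase prompt to leave it empty.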
