I need to make a backup of my Azure database. I'm trying with this command:
"C:\Program Files\Microsoft SQL Server\120\DAC\bin\SqlPackage.exe" /Action:Export /SourceServerName:"tcp:MyDataServer.database.windows.net,1433" /SourceDatabaseName:MyDatabase /SourceUser:MyUser /SourcePassword:MyPass /TargetFile:C:\backups\backup.bacpac
but it seems the format it downloads is ".bacpac" and I need it to be ".bak". I tried to change the extension, but it says: "The TargetFile argument must refer to a file with a '.bacpac' extension".
Any idea how to download the database in ".bak" format?
> Any idea how to download the database in ".bak" format?
SQL Azure doesn't provide a native way to generate a '.bak' format backup file. If you need that format, you can import the BACPAC file into a local SQL Server instance to create a new user database, and then generate a '.bak' file from your local SQL Server.
In addition, you could also try a tool named SqlAzureBakMaker, which can create a '.bak' file for you easily.
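A minimal sketch of that workflow, assuming a local default SQL Server instance and the same SqlPackage.exe path as in the question (database and file names are placeholders):

```bat
REM Import the .bacpac into a local SQL Server instance
"C:\Program Files\Microsoft SQL Server\120\DAC\bin\SqlPackage.exe" /Action:Import /SourceFile:C:\backups\backup.bacpac /TargetServerName:localhost /TargetDatabaseName:MyDatabaseLocal

REM Take a native .bak backup from the local copy
sqlcmd -S localhost -E -Q "BACKUP DATABASE [MyDatabaseLocal] TO DISK = N'C:\backups\backup.bak'"
```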
I have an xlsx file in Azure blob storage. I want to access this xlsx file, make some edits, and save it back. I have tried it locally, but I don't know how to do this when the file is in blob storage. The following code is what I use to do it locally. Note: I don't want to save the file to my local drive first and then edit it; I want to edit and save it directly via PowerShell.
```powershell
# Assumes Excel is installed locally (COM automation)
$excel = New-Object -ComObject Excel.Application
$workbook = $excel.Workbooks.Open("C:\Users\jubaiaral\OneDrive - BMW\Documents\Book1.xlsx")
$sheet = $workbook.worksheets | where {$_.name -eq 'Sheet1'}
# ..................my edits come here...................
$workbook.Save()
$excel.Quit()
```
Thanks in advance!
> I want to directly edit it and save it via powershell.
I don't think you can directly update the file and save it using PowerShell. You can:
1. Copy the file locally.
2. Do the work.
3. Copy it back to the storage (a sketch of these steps follows below).
If you want to do this in place, you can use Spark (Azure Synapse / Azure Databricks).
This may be helpful: https://community.databricks.com/s/feed/0D53f00001HKHeOCAX
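A minimal PowerShell sketch of that copy / edit / copy-back flow, assuming the Az.Storage module is installed (the storage account, container, and blob names are placeholders):

```powershell
# Placeholders: replace the account, key, container, and blob names with yours
$ctx = New-AzStorageContext -StorageAccountName "mystorageaccount" -StorageAccountKey "<key>"
$local = Join-Path $env:TEMP "Book1.xlsx"

# 1. Copy the blob down to a temp file
Get-AzStorageBlobContent -Container "mycontainer" -Blob "Book1.xlsx" -Destination $local -Context $ctx -Force

# 2. Do the work (same Excel COM edits as in the question)
$excel = New-Object -ComObject Excel.Application
$workbook = $excel.Workbooks.Open($local)
# ...edits...
$workbook.Save()
$excel.Quit()

# 3. Copy it back to the storage
Set-AzStorageBlobContent -File $local -Container "mycontainer" -Blob "Book1.xlsx" -Context $ctx -Force
```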
I am new to Databricks and Spark and am learning from this demo from Databricks. I have a Databricks workspace set up on AWS.
The code below is from the official demo and it runs OK. But where is this CSV file? I want to check the file and also understand how the path parameter works.
```sql
DROP TABLE IF EXISTS diamonds;

CREATE TABLE diamonds
USING csv
OPTIONS (path "/databricks-datasets/Rdatasets/data-001/csv/ggplot2/diamonds.csv",
         header "true")
```
I have checked the Databricks location in the S3 bucket and have not found the file:
/databricks-datasets is a special mount location that is owned by Databricks and available out of the box in all workspaces. You can't browse it via an S3 browser, but you can use display(dbutils.fs.ls("/databricks-datasets")), or %fs ls /databricks-datasets, or the DBFS file browser (in the "Data" tab) to explore its content - see the separate documentation page about it.
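For example, from a notebook cell you can list the folder and read the same CSV to confirm the path (a quick sketch; output depends on your workspace):

```python
# List the folder the CREATE TABLE statement points at
display(dbutils.fs.ls("/databricks-datasets/Rdatasets/data-001/csv/ggplot2/"))

# Read the same CSV directly to see its contents
df = spark.read.option("header", "true").csv(
    "/databricks-datasets/Rdatasets/data-001/csv/ggplot2/diamonds.csv")
display(df.limit(5))
```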
We want to copy files from Google Storage to Azure Storage.
We followed this guide: https://learn.microsoft.com/en-us/azure/storage/common/storage-use-azcopy-google-cloud
We ran this command:
azcopy copy 'https://storage.googleapis.com/telia-ddi-delivery-plaace/activity_daily_al1_20min/' 'https://plaacedatalakegen2.blob.core.windows.net/teliamovement?<SASKEY>' --recursive=true
And got this error:
INFO: Scanning...
INFO: Any empty folders will not be processed, because source and/or destination doesn't have full folder support
failed to perform copy command due to error: cannot start job due to error: cannot scan the path /Users/peder/Downloads/https:/storage.googleapis.com/telia-ddi-delivery-plaace/activity_daily_al1_20min, please verify that it is a valid.
It seems to us that azcopy interprets the source as a local file path and therefore prepends the directory we run it from, which is /Users/peder/Downloads/. But we are unable to find any argument to indicate that it is a web location, and our command is identical to the one in the guide:
azcopy copy 'https://storage.cloud.google.com/mybucket/mydirectory' 'https://mystorageaccount.blob.core.windows.net/mycontainer/mydirectory' --recursive=true
What we have tried:
- We are doing this on a Mac in Terminal, but we also tested PowerShell for Mac.
- We have tried single and double quotes.
- We copied the Azure Storage URL with the SAS key from the console to make sure it has the correct syntax.
- We tried cp instead of copy, as the help page for azcopy uses that.
Is there anything wrong with our command? Or can it be that azcopy has been changed since the guide was written?
I also created an issue for this on the Azure Documentation git page: https://github.com/MicrosoftDocs/azure-docs/issues/78890
The reason you're running into this issue is because the URL storage.cloud.google.com is hardcoded in the application source code for Google Cloud Storage. From this link:
```go
const gcpHostPattern = "^storage.cloud.google.com"
const invalidGCPURLErrorMessage = "Invalid GCP URL"
const gcpEssentialHostPart = "google.com"
```
Since you're using storage.googleapis.com instead of storage.cloud.google.com, azcopy does not recognize it as a valid Google Cloud Storage endpoint and treats the value as a directory on your local file system.
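In other words, switching the source to the storage.cloud.google.com host should let the scan succeed. A sketch using the bucket and container from the question (the SAS key stays a placeholder):

```sh
# Per the linked guide, azcopy also expects Google credentials to be exported first:
# export GOOGLE_APPLICATION_CREDENTIALS=<path-to-service-account-key.json>

azcopy copy 'https://storage.cloud.google.com/telia-ddi-delivery-plaace/activity_daily_al1_20min/' 'https://plaacedatalakegen2.blob.core.windows.net/teliamovement?<SASKEY>' --recursive=true
```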
I am accessing a website that allows me to download a CSV file. I would like to store the CSV file directly in a blob container. I know that one way is to download the file locally and then upload it, but I would like to skip the step of downloading the file locally. Is there a way I could achieve this?
I tried the following:
block_blob_service.create_blob_from_path('containername','blobname','https://*****.blob.core.windows.net/containername/FlightStats',content_settings=ContentSettings(content_type='application/CSV'))
but I keep getting errors stating that the path is not found.
Any help is appreciated. Thanks!
The file_path argument of create_blob_from_path is the path of a local file, something like "C:\xxx\xxx". The path you passed ('https://*****.blob.core.windows.net/containername/FlightStats') is a blob URL.
You could download your file to a byte array or stream, then use the create_blob_from_bytes or create_blob_from_stream method.
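A minimal sketch with the same legacy SDK, fetching the CSV into memory with requests and uploading the bytes directly (the CSV URL, account, container, and blob names are placeholders):

```python
import requests
from azure.storage.blob import BlockBlobService, ContentSettings

block_blob_service = BlockBlobService(account_name='myaccount', account_key='<key>')

# Fetch the CSV into memory instead of saving it to disk
resp = requests.get('https://example.com/path/to/flightstats.csv')
resp.raise_for_status()

# Upload the bytes straight to the container
block_blob_service.create_blob_from_bytes(
    'containername', 'FlightStats.csv', resp.content,
    content_settings=ContentSettings(content_type='text/csv'))
```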
The other answer uses the so-called "Azure SDK for Python (legacy)".
I recommend that, if this is a fresh implementation, you use a Gen2 storage account (instead of Gen1 or Blob storage).
For a Gen2 storage account, see the example here:
```python
from azure.storage.filedatalake import DataLakeFileClient

data = b"abc"
file = DataLakeFileClient.from_connection_string(
    "my_connection_string",
    file_system_name="myfilesystem", file_path="myfile")
file.create_file()  # create (or overwrite) the file before appending
file.append_data(data, offset=0, length=len(data))
file.flush_data(len(data))
```
It's painful: if you're appending multiple times, you'll have to keep track of the offset on the client side.
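For example, appending several chunks looks roughly like this (a sketch reusing the file client from the snippet above):

```python
chunks = [b"abc", b"def", b"ghi"]  # whatever you are streaming in

offset = 0
for chunk in chunks:
    file.append_data(chunk, offset=offset, length=len(chunk))
    offset += len(chunk)

file.flush_data(offset)  # commit everything appended so far
```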
I am writing an Azure Function that uses the WinSCP library to download files over SFTP and upload them to blob storage. This library doesn't allow getting files as a stream; the only option is to download them locally. My code also uses a private key file. So I have two questions.
sessionOptions.SshPrivateKeyPath = Path.GetFullPath("privateKey2.ppk");
is working locally. I have added this file to the solution with the "Copy to Output Directory" option and it works. But will it work when the Azure Function is deployed?
While getting the files, I need to specify a local path where the files will be downloaded.
```csharp
var transferResult = session.GetFiles(
    file.FullName, Path.GetTempPath() + @"SomeFolder\" + file.Name, false,
    transferOptions);
```
The second parameter is the local path.
What should I use in place of Path.GetTempPath() that will work when the Azure Function is deployed?
For the private key, just deploy it along with your function project. You can simply add it to your VS project.
See also Including a file when I publish my Azure function in Visual Studio.
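For reference, marking the key file to be copied to the output directory usually looks like this in the .csproj (file name taken from the question; treat it as a sketch):

```xml
<ItemGroup>
  <!-- Copy the private key next to the compiled function assemblies on publish -->
  <None Update="privateKey2.ppk">
    <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
  </None>
</ItemGroup>
```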
For the download: The latest version of WinSCP already supports streaming the files. Use the Session.GetFile method.
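A rough sketch of streaming straight from SFTP into blob storage with that method, assuming Azure.Storage.Blobs on the upload side (session is an open WinSCP Session and containerClient is a BlobContainerClient you have already created):

```csharp
// Assumes: using System.IO; using WinSCP; using Azure.Storage.Blobs;
foreach (RemoteFileInfo file in session.EnumerateRemoteFiles(
             "/remote/path", "*", WinSCP.EnumerationOptions.None))
{
    BlobClient blobClient = containerClient.GetBlobClient(file.Name);

    // Session.GetFile returns a Stream, so nothing is written to local disk
    using (Stream stream = session.GetFile(file.FullName))
    {
        blobClient.Upload(stream, overwrite: true);
    }
}
```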
To answer your question about the temporary location, see:
Azure Functions Temp storage.
Where to store files for Azure function?