Within an Azure Data Factory Pipeline I am attempting to remove an empty directory.
The files within the directory were removed by a previous pipeline's iterative operation, leaving an empty directory to be removed.
The directory is a sub-folder; the hierarchy is:
container / top-level-folder (always present) / directory (dynamically created as the result of an unzip operation).
I have defined a specific dataset that points to
container / @concat('top-level-folder/', dataset().dataset_folder)
where 'dataset_folder' is the only parameter.
The Delete Activity is configured like this:
On running the pipeline, it fails with this error:
Failed to execute delete activity with data source 'AzureBlobStorage' and error 'The required Blob is missing. Folder path: container/top level directory/Directory to be removed/.'. For details, please reference log file here:
The log is an empty spreadsheet.
What am I missing from either the dataset or delete activity?
In Azure Blob Storage, when all the contents of a folder are deleted, the folder itself is automatically removed.
When I deleted each file and then tried to delete the folder at the end, I got the same error.
This is because the folder is gone as soon as the files inside it are deleted. Only with Azure Data Lake Storage would you have to delete the folder explicitly as well.
Since the requirement is to delete the folder in Azure Blob Storage, you can simply remove the Delete activity that you are using to remove the folder.
I used an Azure function suggested here:
Delete folder using Microsoft.WindowsAzure.Storage.Blob : blockBlob.DeleteAsync();
to perform the action.
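For reference, here is a minimal sketch of the same idea using the @azure/storage-blob JavaScript SDK instead of the .NET Microsoft.WindowsAzure.Storage library mentioned above; the connection string, container name, and folder prefix are placeholders, not values from the question:

import { BlobServiceClient } from "@azure/storage-blob";

// Deleting every blob that shares the folder prefix removes the "folder" itself,
// since in flat blob storage a folder only exists while at least one blob uses its prefix.
async function deleteFolder(connectionString: string, containerName: string, folderPrefix: string): Promise<void> {
    const containerClient = BlobServiceClient
        .fromConnectionString(connectionString)
        .getContainerClient(containerName);

    for await (const blob of containerClient.listBlobsFlat({ prefix: folderPrefix })) {
        await containerClient.deleteBlob(blob.name); // e.g. "top-level-folder/unzipped-dir/file1.csv"
    }
}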
Related
I want to write an extension that will:
Read the folders and files in the current workspace
Reorganise those files by creating new folders and moving the files into them
So I need an API to do the above. I am not clear on whether I should use
the standard fs node module,
the File System Provider or
the workspace namespace.
I read the answer here, but it doesn't say what to use to create folders and move files within the workspace.
The FileSystemProvider is to be used if you want to serve files from non-local storage like FTP sites or virtual file systems inside remote devices and present them to Visual Studio Code as storage directories. The FileSystemProvider is a view/controller of the remote storage. In other words, you have to implement all the file manipulation operations by communicating with the remote storage.
If you want to just manipulate the files in the current workspace and also be able to use URIs from FileSystemProviders, use vscode.workspace.fs.
You can also use the Node.js fs module, but that only handles local-disk workspaces (URIs with the scheme file:). I recommend using the synchronous versions of the fs methods; I had some trouble with the asynchronous fs methods in Visual Studio Code (I did not know of vscode.workspace.fs at that time).
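To illustrate, here is a minimal sketch of the vscode.workspace.fs approach for the reorganising scenario in the question; the folder and file names are hypothetical:

import * as vscode from 'vscode';

// Create a new folder in the first workspace root and move a file into it.
// vscode.workspace.fs works with any URI scheme backed by a FileSystemProvider,
// including ordinary file: workspaces on the local disk.
async function reorganise(): Promise<void> {
    const root = vscode.workspace.workspaceFolders?.[0]?.uri;
    if (!root) {
        return; // no folder open
    }

    const newFolder = vscode.Uri.joinPath(root, 'sorted');        // hypothetical target folder
    await vscode.workspace.fs.createDirectory(newFolder);

    const source = vscode.Uri.joinPath(root, 'notes.txt');        // hypothetical file to move
    const target = vscode.Uri.joinPath(newFolder, 'notes.txt');
    await vscode.workspace.fs.rename(source, target, { overwrite: true });
}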
I found extension.ts, sample code provided by Microsoft for implementing a FileSystemProvider.
The steps below help you understand how to create a new folder (createDirectory) and move files within the workspace (using the copy command to copy all files from an old folder to a new folder, then the delete command if you don't want the files to remain in the old folder).
The first step in FileSystemProvider is to register the filesystem provider for a given scheme using registerFileSystemProvider:
const memFs = new MemFS();
context.subscriptions.push(
    vscode.workspace.registerFileSystemProvider('memfs', memFs, {
        isCaseSensitive: true
    })
);
Next, register a command with registerCommand to perform your FileSystemProvider operations such as readDirectory, readFile, createDirectory, copy, and delete.
context.subscriptions.push(
    vscode.commands.registerCommand('memfs.init', _ => {
        // TODO: add the required functionality
    })
);
Read folders and read files
for (const [name] of memFs.readDirectory(vscode.Uri.parse('memfs:/'))) {
    memFs.readFile(vscode.Uri.parse(`memfs:/${name}`));
}
Create the new folder
memFs.createDirectory(vscode.Uri.parse(`memfs:/folder/`));
Move files to the newly created folder. There isn't a dedicated move method (although rename can move a file to a new URI); the approach shown here uses the copy command to copy into the new folder and then delete the original, as sketched after the snippets below:
COPY
copy(source: Uri, destination: Uri, options: {overwrite: boolean})
DELETE
context.subscriptions.push(vscode.commands.registerCommand('memfs.reset', _ => {
    for (const [name] of memFs.readDirectory(vscode.Uri.parse('memfs:/'))) {
        memFs.delete(vscode.Uri.parse(`memfs:/${name}`));
    }
}));
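Putting the pieces together, here is a rough sketch of a "move" against the memfs scheme; the file names are placeholders, and it assumes the provider implements the optional copy method (otherwise a single rename call achieves the move):

// Move old-folder/report.txt to new-folder/report.txt by copying and then deleting.
const oldUri = vscode.Uri.parse('memfs:/old-folder/report.txt');   // hypothetical source
const newUri = vscode.Uri.parse('memfs:/new-folder/report.txt');   // hypothetical destination

memFs.createDirectory(vscode.Uri.parse('memfs:/new-folder/'));
memFs.copy(oldUri, newUri, { overwrite: true });   // copy into the new folder (optional provider method)
memFs.delete(oldUri);                              // remove the original to complete the move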
I would like to add URL rewrite code to an Azure web app's web.config without redeploying the whole app. For this I am using the App Service Editor and the Kudu debug console to edit the web.config, but at first I can't save the file and I get an error.
After some searching I found that the value of the app setting key should be 0 instead of 1.
I edited the value from 1 to 0 and saved the app setting; after that I was able to edit the config file. To test the code again I changed the value from 0 back to 1 and saved the setting, but when I refresh the file opened in the editor or in Kudu, the pasted code has disappeared. The site is connected to an automatic Azure deployment pipeline.
How can I edit the web.config file without redeploying the app?
Yes, it's possible to make changes without redeploying the app.
Some details:
Check the Run the package documentation and we can find:
1. The zip package won't be extracted to D:\home\site\wwwroot; instead it is uploaded directly to D:\home\data\SitePackages.
2. A packagename.txt, which contains the name of the ZIP package to load at runtime, is created in the same directory.
3. App Service mounts the uploaded package as the read-only wwwroot directory and runs the app directly from that mounted directory. (That's why we can't edit the read-only wwwroot directory directly.)
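To make that concrete, this is roughly what the relevant paths look like under this model (the zip name is the one used later in this answer):

D:\home\data\SitePackages\
    20200929072235.zip   <- the uploaded package the app actually runs from
    packagename.txt      <- a single line naming the zip to load at runtime
D:\home\site\wwwroot     <- read-only mount of the zip named in packagename.txt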
So my workaround is:
1. Navigate to D:\home\data\SitePackages via the Kudu debug console:
Download the zip (in my case 20200929072235.zip) which represents your deployed app, extract it, and make your changes to the web.config file.
2. Zip those files (select the files and right-click...) into childtest.zip. Please follow these steps carefully; the folder structure required by Run From Package is a bit unusual.
3. Then zip childtest.zip into parenttest.zip (when uploading a zip, Kudu automatically extracts it, so we have to wrap childtest.zip inside parenttest.zip first).
4. Drag and drop the local parenttest.zip into the online SitePackages folder in the Kudu debug console, and we get a childtest.zip there:
5. Modify packagename.txt, changing its content from 20200929072235.zip to childtest.zip, and save:
Done~
Check and test:
Now let's open App Service Editor to check the changes:
In addition: though this answers the original question, I recommend using another deployment method (Web Deploy, etc.) instead; it can be much easier.
I have a node script which needs to create a file in the root directory of my application before the application is built.
The data this file will contain is specific to each build that gets triggered; however, I'm having no luck with this on Azure DevOps.
For the writing of the file I'm using fs.writeFile(...), something similar to this:
fs.writeFile(targetPath, file, function(err) { // removed for brevity });
however, this throws an exception:
[Error: ENOENT: no such file or directory, open '/home/vsts/work/1/s/data-file.json']
Locally this works, so I'm assuming it has to do with permissions. I tried adding a blank version of the file to my project; however, it still throws this exception.
Possible to create file in sources directory on Azure DevOps during build
The answer is yes. This is a fully supported scenario in Azure DevOps Services if you're using a Microsoft-hosted Ubuntu agent.
If you hit this issue when using a Microsoft-hosted agent, it is more likely a path issue. Please check:
The function the no such file or directory error comes from. Apart from fs.writeFile, do you also use fs.readFile in the xx.js file? If so, make sure the two paths are the same.
The structure of your source files and your real requirement. According to your question you want to create the file in the sources directory /home/vsts/work/1/s, but the first line indicates that you actually want to create it in the root directory of your application.
1) If you want to create the file in the sources directory /home/vsts/work/1/s: in your js file, use a targetPath like './data-file.json', and make sure you run the node xx.js command from the sources directory (leave the CMD/PowerShell/Bash task's working directory blank).
2) If you want to do that in the root of the application folder, e.g. /home/vsts/work/1/s/MyApp: in your js file, use __dirname, e.g. fs.writeFile(__dirname + '/data-file.json', file, function(err) { // removed for brevity }); and fs.readFile(__dirname + '/data-file.json', ...). See the sketch below.
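As a rough sketch of option 2) (the file name and JSON content are placeholders; on hosted agents the Build.SourcesDirectory variable is exposed to scripts as the BUILD_SOURCESDIRECTORY environment variable, which you could use instead of __dirname if the file should live in the sources directory):

import * as fs from 'fs';
import * as path from 'path';

// Resolve the target relative to an explicit base directory rather than the process
// working directory, which differs between a local run and a hosted Azure DevOps agent.
const baseDir = process.env.BUILD_SOURCESDIRECTORY || __dirname;
const targetPath = path.join(baseDir, 'data-file.json');   // hypothetical output file

fs.writeFile(targetPath, JSON.stringify({ createdAt: new Date().toISOString() }), err => {
    if (err) {
        console.error(`Could not write ${targetPath}`, err);
        process.exit(1);
    }
    console.log(`Wrote ${targetPath}`);
});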
I'm learning Logic Apps and I'm trying to create a simple flow that takes files from Azure Blob Storage, performs Liquid parsing, and then saves the parsed file to another blob container.
How it should work:
1. Whenever a new file is added to the blob container ("from") [containing XML files]
2. The Liquid action takes place (XML -> JSON)
3. The new .json file is saved to the blob container ("too") :)
What I have learned:
1. I managed to write a Liquid template for XML files - tested, working
2. I know how to copy a file between blob containers - tested, working
For each loop:
https://i.imgur.com/ImaT3tf.jpg "FE loop"
Completed:
https://i.imgur.com/g6M9eLJ.jpg "Completed..."
Current LA:
https://i.imgur.com/ImaT3tf.jpg "Current"
What I don't know how to do:
1. How do I "insert" the current file's content from the for each loop into the Liquid action? It looks like Logic Apps is skipping that step.
The main problem is that you cannot use the Current item as the XML content; you need to get the content with the Get blob content action inside For_each, then parse the XML to JSON. After that, create the blob in another container with the JSON value.
You could refer to my workflow.
We are modelling a directory structure in Azure Blob storage. I would like to be able to copy all files from a folder to a local directory. Is there any way to copy multiple files from blob storage at once that match a pattern or do I have to get these files individually?
As you may already know, blob storage only supports a one-level hierarchy: you have blob containers (folders) and each container contains blobs (files). There's no concept of a folder hierarchy there. The way you create the illusion of a folder hierarchy is via something called a blob prefix. For example, look at the screenshot below:
In the picture above, you see two folders under the images blob container: 16x16 and 24x24. In the cloud, the blob names include these folder names, so the name of the AddCertificate.png file in folder 24x24 is 24x24/AddCertificate.png.
Now coming to your question: you would still need to download individual files, but what the storage client library allows you to do is fetch a list of blobs by blob prefix. So if you want to download all files in folder 24x24 (in other words, all blobs with the prefix 24x24), you would first list the blobs with prefix 24x24 and then download each blob individually. On the local computer, you could create a folder named after the prefix.
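As a rough modern sketch of that pattern using the @azure/storage-blob SDK (a newer library than the sample linked below; the connection string, container name, and prefix are placeholders):

import { BlobServiceClient } from "@azure/storage-blob";
import * as fs from "fs";
import * as path from "path";

// List every blob whose name starts with the "folder" prefix and download each one,
// recreating the prefix as a local folder structure.
async function downloadFolder(connectionString: string, containerName: string, prefix: string): Promise<void> {
    const containerClient = BlobServiceClient
        .fromConnectionString(connectionString)
        .getContainerClient(containerName);

    for await (const blob of containerClient.listBlobsFlat({ prefix })) {
        const localPath = path.join(".", blob.name);                 // e.g. ./24x24/AddCertificate.png
        fs.mkdirSync(path.dirname(localPath), { recursive: true });
        await containerClient.getBlobClient(blob.name).downloadToFile(localPath);
    }
}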
You can refer to the code below as a sample reference (it's written in JavaScript, but you can easily map the logic to any language). This code is maintained by Microsoft.
https://github.com/WindowsAzure/azure-sdk-tools-xplat/blob/master/lib/commands/storage.blob._js#L144
https://github.com/WindowsAzure/azure-sdk-tools-xplat/blob/master/lib/commands/storage.blob._js#L689
The second link explains how to parse blob prefixes and get the folder hierarchy out of them.
It also shows how to download a blob using multiple threads and how to ensure the integrity of the blob using MD5.
I'm including just the high-level code that processes blob paths containing prefixes; please refer to the link above for the full implementation, as I cannot copy and paste the entire implementation here.
if (!fileName) {
    var structure = StorageUtil.getStructureFromBlobName(specifiedBlobName);
    fileName = structure.fileName;
    fileName = utils.escapeFilePath(fileName);
    structure.dirName = StorageUtil.recursiveMkdir(dirName, structure.dirName);
    fileName = path.join(structure.dirName, fileName);
    dirName = '.'; // fileName already contains the dirName
}