Pulumi: query pulumi stack value if not given - azure

Here is my use case:
I have a blob resource that is created only if a file (an artifact from my CI server) is present on the build machine.
Now, I may have to run Pulumi on my local machine, where the file does not exist. But I don't want the blob resource to be deleted; the blob is still present on Azure.
if (fs.existsSync(fullFileName)) {
    // On the build server, I update the blob with the new artifact
    const blob = new azure.storage.Blob("myblob-b", {
        name: fileName,
        source: fullFileName,
        resourceGroupName: resourceGroup.name,
        storageAccountName: storageAccount.name,
        storageContainerName: zipDeployContainer.name,
        type: "block"
    })
} else {
    // On my local machine, the artifact does not exist, but I want to keep the blob
    const stackRef = new pulumi.StackReference(`${organization}/${projectName}/${stackName}`);
    const srblob = stackRef.getOutput("zipblob");
    // How do I tell Pulumi to keep the resource from the stack reference?
}
export const zipblob = blob;

OK, I'm not smart enough for this; people on the Pulumi Slack helped me out. Basically you can use a StackReference, specifically its getOutput method.
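A sketch of how that can be wired together, assuming the StackReference points back at this same stack (so `organization`, `projectName`, and `stackName` identify the stack the build server last updated) and that `fullFileName`, `fileName`, `resourceGroup`, `storageAccount`, and `zipDeployContainer` come from the surrounding program:

```typescript
// Sketch only: every identifier except the Pulumi/Azure APIs is a
// placeholder taken from the question's program.
import * as fs from "fs";
import * as pulumi from "@pulumi/pulumi";
import * as azure from "@pulumi/azure";

let zipblobUrl: pulumi.Output<string>;

if (fs.existsSync(fullFileName)) {
    // Build server: (re)create the blob from the fresh artifact.
    const blob = new azure.storage.Blob("myblob-b", {
        name: fileName,
        source: fullFileName,
        resourceGroupName: resourceGroup.name,
        storageAccountName: storageAccount.name,
        storageContainerName: zipDeployContainer.name,
        type: "block",
    });
    zipblobUrl = blob.url;
} else {
    // Local machine: read the value this stack exported on its last
    // build-server run and re-export it unchanged.
    const stackRef = new pulumi.StackReference(`${organization}/${projectName}/${stackName}`);
    zipblobUrl = stackRef.getOutput("zipblob").apply(v => v as string);
}

export const zipblob = zipblobUrl;
```

One caveat: this keeps the stack *output* stable for consumers, but if the Blob resource itself is absent from the program, a local `pulumi up` would still plan to delete the resource, so this pattern fits best when local runs only need to read the output value.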

Related

Unable to get image details from azure container registry

I tried using Azure Container Registry. Can anyone assist me with this?
Please let me know how to log in to ACR using a username and password in C#.
ContainerRegistryClient client = new ContainerRegistryClient(endpoint, new DefaultAzureCredential(),
    new ContainerRegistryClientOptions()
    {
        Audience = ContainerRegistryAudience.AzureResourceManagerPublicCloud
    });

// Obtain a RegistryArtifact object to get access to image operations
RegistryArtifact image = client.GetArtifact("kube01.azurecr.io/sensor:6", "latest");

// List the set of tags on the image tagged as "latest"
Pageable<ArtifactTagProperties> tags = image.GetAllTagProperties();

// Iterate through the image's tags, listing the tagged alias for the image
Console.WriteLine($"{image.FullyQualifiedReference} has the following aliases:");
foreach (ArtifactTagProperties tag in tags)
{
    Console.WriteLine($"  {image.RegistryEndpoint.Host}/{image.RepositoryName}:{tag}");
}
I tried this in my environment and got the results below.
I set up an image in the container registry with two tags through the portal.
I took the same code and modified some lines to get the image details.
Code:
using Azure;
using Azure.Containers.ContainerRegistry;
using Azure.Identity;

string registryName = "registry1";

// Create a new instance of the ContainerRegistryClient class
var credential = new DefaultAzureCredential();
var client = new ContainerRegistryClient(new Uri($"https://{registryName}.azurecr.io"), credential, new ContainerRegistryClientOptions()
{
    Audience = ContainerRegistryAudience.AzureResourceManagerPublicCloud
});

RegistryArtifact image = client.GetArtifact("nginx", "latest");
Pageable<ArtifactTagProperties> tags = image.GetAllTagProperties();

Console.WriteLine($"{image.FullyQualifiedReference} has the following aliases:");
foreach (ArtifactTagProperties tag in tags)
{
    Console.WriteLine($"Name: {tag.Name}");
    Console.WriteLine($"Digest:{tag.Digest}");
    Console.WriteLine($"Lastmodifiedtime:{tag.LastUpdatedOn}");
}
Console:
registry1.azurecr.io/nginx:latest has the following aliases:
Name: latest
Digest:sha256:6650513efd1d27c1xxxxxxxxxxx
Lastmodifiedtime:15-02-2023 05:05:55 +00:00
Name: v1
Digest:sha256:6650513efd1d27c1xxxxxxxxxxx
Lastmodifiedtime:15-02-2023 06:26:16 +00:00
Reference:
Azure Container Registry client library for .NET - Azure for .NET Developers | Microsoft Learn

How to check existence of soft deleted file in Azure blob container with node js?

I have a file which was stored in an Azure blob directory, "folder1/folder2/file.txt". This file was soft deleted; I can see it in the Azure web console. I need a function which checks for this file's existence.
I tried the "azure-storage" library. It works perfectly with files that are NOT deleted:
const blobService = azure.createBlobService(connectingString);
blobService.doesBlobExist(container, blobPath, callback)
Maybe someone knows how to use the same approach with soft-deleted files?
I tried the "@azure/storage-blob" library.
But I got stuck in its endless entities (BlobServiceClient, ContainerItem, BlobClient, ContainerClient, etc.) and couldn't find a way to see a particular file in a particular blob directory.
Following this MS doc, I was able to restore the soft-deleted blobs, and list their names, with the code snippet below.
const { BlobServiceClient } = require('@azure/storage-blob');

const connstring = "DefaultEndpointsProtocol=https;AccountName=kvpstorageaccount;AccountKey=<Storage_Account_Key>;EndpointSuffix=core.windows.net"
if (!connstring) throw Error('Azure Storage Connection string not found');
const blobServiceClient = BlobServiceClient.fromConnectionString(connstring);

async function main() {
    const containerName = 'kpjohncontainer';
    const blobName = 'TextFile05.txt';
    const containerClient = blobServiceClient.getContainerClient(containerName);
    await undeleteBlob(containerClient, blobName);
}

main()
    .then(() => console.log(`done`))
    .catch((ex) => console.log(ex.message));

async function undeleteBlob(containerClient, blobName) {
    const blockBlobClient = containerClient.getBlockBlobClient(blobName);
    await blockBlobClient.undelete(); // restore the soft-deleted blob
    console.log(`undeleted blob ${blobName}`);
}
Output:
To check whether the blob exists, including when it exists only in the soft-deleted state, I found the relevant code, but it's in C#, provided by @Gaurav Mantri. To achieve the same in NodeJS, refer here.
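To answer the original existence question directly (rather than restoring the blob), one option with @azure/storage-blob is to list blobs with the `includeDeleted` flag and look for the path. A minimal sketch, assuming soft delete is enabled on the storage account; `containerName` and `blobPath` are placeholders:

```typescript
import { BlobServiceClient } from "@azure/storage-blob";

// Returns true if the blob exists in the soft-deleted state,
// false if it is live or absent.
async function isSoftDeleted(
    serviceClient: BlobServiceClient,
    containerName: string,
    blobPath: string
): Promise<boolean> {
    const containerClient = serviceClient.getContainerClient(containerName);
    // includeDeleted asks the service to return soft-deleted blobs too;
    // prefix narrows the listing to the "directory" path we care about.
    for await (const item of containerClient.listBlobsFlat({
        includeDeleted: true,
        prefix: blobPath,
    })) {
        if (item.name === blobPath) {
            // item.deleted is set on soft-deleted entries.
            return item.deleted === true;
        }
    }
    return false; // not found at all
}
```

This avoids dealing with most of the client entities: only BlobServiceClient and ContainerClient are needed.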

Azure Storage: Enable blob versioning on storage account programmatically

I'm creating several storage accounts programmatically via StorageManagementClient and would like to enable blob versioning at the account level at the time of account creation. How is this accomplished?
var storageManagementClient = new StorageManagementClient(azureCredentials)
{
    SubscriptionId = subscriptionId
};

var storageAccountCreateParameters = new StorageAccountCreateParameters
{
    // set properties
};

await storageManagementClient.StorageAccounts.CreateAsync(resourceGroupName, accountName, storageAccountCreateParameters);
I thought that this would be available as a create parameter in StorageAccountCreateParameters, but I don't see anything there.
Also see https://learn.microsoft.com/en-us/azure/storage/blobs/versioning-enable?tabs=portal
Blob versioning is not included in StorageAccountCreateParameters; it belongs to the BlobServiceProperties class.
So after you create the storage account with the code above, you can use the following code to enable blob versioning:
var p1 = new BlobServiceProperties()
{
    IsVersioningEnabled = true
};
storageManagementClient.BlobServices.SetServiceProperties("resource_group", "account_name", p1);

How can I pass a sub-folder name and get a result in the response using a Node.js API and SolidBucket?

I am using the solid-bucket npm package to build an API over cloud storage, with Azure Blob Storage as the provider. Passing a folder name to the provider's getListOfFiles returns the file list. The issue I'm facing is that my Azure Blob Storage has sub-folders, and when I send a sub-folder name to the API it returns result code 0 and a response body with no content.
The code is:
const SolidBucket = require('solid-bucket')

let provider = new SolidBucket('azure', {
    accountName: 'accountName',
    accountKey: 'accountKey'
})

let bucketName = 'example'
provider.getListOfFiles(bucketName).then((resp) => {
    if (resp.status === 200) {
        console.log(resp.message)
        // Output: The list of objects was fetched successfully from bucket "example"
    }
}).catch((resp) => {
    if (resp.status === 400) {
        console.log(resp.message)
        // Output: Some error coming from the provider...
    }
})
Actually, the bucketName value should be the container name in Azure Blob Storage, not the virtual sub-folder name within a container.
So I used my Azure storage account name and key and my test container to run the same code, adding console.log(resp) before if (resp.status === 200); the result is as below.
It corresponds to the file structure in my test container as seen in Azure Storage Explorer.
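Since SolidBucket takes the container name and returns a flat listing for the whole container, one way to get just a sub-folder's files is to filter the returned names by the virtual folder prefix. A sketch of that filtering step (the exact response shape from SolidBucket is an assumption; here it is treated as an array of blob names):

```typescript
// Blob "directories" are just name prefixes ending in "/", so a
// sub-folder listing is a prefix filter over the flat listing.
function filesInFolder(blobNames: string[], folder: string): string[] {
    const prefix = folder.endsWith("/") ? folder : folder + "/";
    return blobNames.filter((name) => name.startsWith(prefix));
}

// Example with a hypothetical listing:
const names = ["a.txt", "sub/one.txt", "sub/two.txt", "subx/three.txt"];
console.log(filesInFolder(names, "sub")); // ["sub/one.txt", "sub/two.txt"]
```

Note that "subx/three.txt" is excluded: matching on the "/"-terminated prefix avoids accidentally catching sibling folders that share a name prefix.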

GitHub webhook created twice when using Terraform aws_codebuild_webhook

I'm creating an AWS CodeBuild project using the following (partial) Terraform configuration:
resource "aws_codebuild_webhook" "webhook" {
  project_name  = "${aws_codebuild_project.abc-web-pull-build.name}"
  branch_filter = "master"
}

resource "github_repository_webhook" "webhook" {
  name       = "web"
  repository = "${var.github_repo}"
  active     = true
  events     = ["pull_request"]

  configuration {
    url          = "${aws_codebuild_webhook.webhook.payload_url}"
    content_type = "json"
    insecure_ssl = false
    secret       = "${aws_codebuild_webhook.webhook.secret}"
  }
}
For some reason, two webhooks are created on GitHub for that project: one with the events pull_request and push, and a second with pull_request only (the only one I expected).
I've tried removing the first block (aws_codebuild_webhook), even though the Terraform documentation gives an example with both:
https://www.terraform.io/docs/providers/aws/r/codebuild_webhook.html
But then I'm in a pickle, because there is no other way to acquire the payload_url the webhook requires, which I currently take from aws_codebuild_webhook.webhook.payload_url.
I'm not sure what the right approach is here; I'd appreciate any suggestions.
