AWS S3 Static Website via Node.js SDK

I am trying to create a static website on AWS S3 via the AWS node SDK. I am at the step where I am putting the bucket website. I am calling putBucketWebsite(params = {}, callback) with the following parameters:
{
  "Bucket": "xxx.example.com",
  "WebsiteConfiguration": {
    "IndexDocument": {
      "Suffix": "index.html"
    },
    "RoutingRules": []
  }
}
but I am getting the following error:
MalformedXML: The XML you provided was not well-formed or did not validate against our published schema
What am I doing wrong?
When I getBucketWebsite from a site that works, I get:
{
  "IndexDocument": {
    "Suffix": "index.html"
  },
  "RoutingRules": []
}

Try removing RoutingRules from your request entirely. Per the documentation, each entry in RoutingRules must contain certain properties (at minimum a Redirect), so an empty array does not validate against the schema and produces the MalformedXML error.
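For reference, a minimal sketch of the corrected call with the AWS SDK for JavaScript v2 (assuming credentials and region are already configured; the bucket name is the placeholder from the question):

const AWS = require('aws-sdk');
const s3 = new AWS.S3();

const params = {
  Bucket: 'xxx.example.com',
  WebsiteConfiguration: {
    IndexDocument: {
      Suffix: 'index.html'
    }
    // RoutingRules omitted entirely: an empty array fails schema validation,
    // since each RoutingRule must contain properties such as Redirect.
  }
};

s3.putBucketWebsite(params, (err, data) => {
  if (err) console.error(err);
  else console.log('Website configuration applied');
});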

Related

Shopify NodeJS: Error creating Metafields: error 422 "type can't be blank"

I'm trying to create metafields using the Shopify Admin REST API (with NodeJS).
This is a simplified version of the call I'm trying to make (see documentation):
const data = await client.post({
  path: 'metafields',
  data: {
    "metafield": {
      "namespace": "namespace123",
      "key": "key123",
      "value": 42,
      "type": "number_integer"
    }
  },
  type: DataType.JSON,
});
But I get this error:
HttpResponseError: Received an error response (422 Unprocessable Entity) from Shopify:
{
  "type": [
    "can't be blank"
  ]
}
I've checked that the type attributes are set properly:
The root-level type is set correctly to application/json, and data.metafield.type is set following the rules here.
I've also tried other types, but I get the same error.
Problem: I was using an old API version to initialize my Shopify Context:
Shopify.Context.initialize({
  // ... other stuff ... //
  API_VERSION: ApiVersion.October20,
});
None of my other API calls relied on old behaviour that has since changed, so updating my API version fixed the issue without too much hassle (presumably the metafield type property simply did not exist yet in the October20 version, which is why the API reported it as blank):
Shopify.Context.initialize({
  // ... other stuff ... //
  API_VERSION: ApiVersion.January22,
});

How to create a fork of a parent repository using Azure DevOps REST API?

How can we create a new repository copying all the contents from a parent repository? I have tried forking an existing repository, but the REST API throws a 400 Bad Request exception.
The sample request provided in the Microsoft documentation here does not work as expected and throws the exception below.
{
  "$id": "1",
  "innerException": null,
  "message": "A team project ID or name is required in the URL or request body.\r\nParameter name: ProjectReference",
  "typeName": "Microsoft.TeamFoundation.SourceControl.WebServer.InvalidArgumentValueException, Microsoft.TeamFoundation.SourceControl.WebServer",
  "typeKey": "InvalidArgumentValueException",
  "errorCode": 0,
  "eventId": 0
}
I could reproduce this issue with the sample request body.
To resolve this issue, try replacing the project name with the project ID:
{
  "name": "forkRepository",
  "project": {
    "id": "MyFirstProject_ID"
  },
  "parentRepository": {
    "name": "MyFirstRepo",
    "project": {
      "id": "MyFirstProject_ID"
    }
  }
}
My test result and the resulting fork repo confirmed the request succeeded (screenshots omitted).
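For illustration, a sketch of sending that body from Node.js (Node 18+ for the global fetch; 'myOrg', the project/repo IDs, and the AZDO_PAT environment variable are placeholders, and your api-version may differ):

(async () => {
  const body = {
    name: 'forkRepository',
    project: { id: 'MyFirstProject_ID' },
    parentRepository: {
      name: 'MyFirstRepo',
      project: { id: 'MyFirstProject_ID' }
    }
  };

  const res = await fetch('https://dev.azure.com/myOrg/_apis/git/repositories?api-version=6.0', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      // PATs use basic auth with an empty user name
      'Authorization': 'Basic ' + Buffer.from(':' + process.env.AZDO_PAT).toString('base64')
    },
    body: JSON.stringify(body)
  });
  console.log(res.status, await res.json());
})();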

Azure: generate URL for a standard Logic App with a connection to CosmosDB

I have a workflow in a Standard logic app with an HTTP trigger. When the workflow is triggered, it retrieves some data from Cosmos DB. Something like the following (workflow screenshot omitted).
This method requires an API connection. I have already created and deployed a 'V2' API connection; let's call it myCosmosCon.
In the ARM template for my logic app I have also added the connectionRuntimeUrl of my API connection (myCosmosCon) to the appSettings (configuration):
....
"siteConfig": {
  "appSettings": [
    {
      "name": "subscriptionId",
      "value": "[subscription().subscriptionId]"
    },
    {
      "name": "resourceGroup_name",
      "value": "[resourceGroup().name]"
    },
    {
      "name": "location_name",
      "value": "[resourceGroup().location]"
    },
    {
      "name": "connectionRuntimeUrl",
      "value": "[reference(resourceId('Microsoft.Web/connections', parameters('connection_name')),'2016-06-01', 'full').properties.connectionRuntimeUrl]"
    },
    .....
  ]
},
Then I wrote the following in the connections.json:
{
  "managedApiConnections": {
    "documentdb": {
      "api": {
        "id": "/subscriptions/@appsetting('subscriptionId')/providers/Microsoft.Web/locations/@appsetting('location_name')/managedApis/documentdb"
      },
      "connection": {
        "id": "/subscriptions/@appsetting('subscriptionId')/resourceGroups/@appsetting('resourceGroup_name')/providers/Microsoft.Web/connections/myCosmosCon"
      },
      "connectionRuntimeUrl": "@appsetting('connection_runtimeUrl')",
      "authentication": {
        "type": "ManagedServiceIdentity"
      }
    }
  }
}
Now, when I deploy the ARM template for my logic app, workflow, etc., I see no errors and the workflow looks good. The only problem is that the URL for the HTTP trigger is not generated, so I can't run the workflow.
However, if I change connection_runtimeUrl in the connections.json file to the actual value, so that it looks something like:
"connectionRuntimeUrl": "https://xxxxxxxxxxxxx.xx.common.logic-norwayeast.azure-apihub.net/apim/myCosmosCon/xxxxxxxxxxxxxxxxxxxxxxxx/",
the URL is generated immediately and I can simply run the workflow. AFTER that, if I revert connection_runtimeUrl to what it was (a call to appsetting()), it keeps working!! The link stays there as well.
It looks like connections.json is not evaluated when I deploy the Logic App and the workflow (the appsetting() call is never resolved), so Azure thinks there is an error and does not generate the link.
Any idea about how to solve the problem??
Thanks!
I'm not sure, but this could be the issue:
When you create an API connection for a Logic App Standard, you also need to create an access policy at the API-connection level for the system-assigned identity running the Logic App Standard.
param location string = resourceGroup().location
param cosmosDbAccountName string
param connectorName string = '${cosmosDbAccountName}-connector'

// The principal id of the logic app standard system assigned identity
param principalId string

// get a reference to the cosmos db account
resource cosmosDbAccount 'Microsoft.DocumentDB/databaseAccounts@2021-06-15' existing = {
  name: cosmosDbAccountName
}

// create the related connection api
resource cosmosDbConnector 'Microsoft.Web/connections@2016-06-01' = {
  name: connectorName
  location: location
  kind: 'V2'
  properties: {
    displayName: connectorName
    parameterValues: {
      databaseAccount: cosmosDbAccount.name
      accessKey: listKeys(cosmosDbAccount.id, cosmosDbAccount.apiVersion).primaryMasterKey
    }
    api: {
      id: 'subscriptions/${subscription().subscriptionId}/providers/Microsoft.Web/locations/${location}/managedApis/documentdb'
    }
  }
}

// Grant permission to the logic app standard to access the connection api
resource cosmosDbConnectorAccessPolicy 'Microsoft.Web/connections/accessPolicies@2016-06-01' = {
  name: '${cosmosDbConnector.name}/${principalId}'
  location: location
  properties: {
    principal: {
      type: 'ActiveDirectory'
      identity: {
        tenantId: subscription().tenantId
        objectId: principalId
      }
    }
  }
}

output connectionRuntimeUrl string = reference(cosmosDbConnector.id, cosmosDbConnector.apiVersion, 'full').properties.connectionRuntimeUrl
I'm having trouble with the exact same issue/bug. The only workaround as I see it is to deploy the workflow twice: the first time with an actual URL pointing to a dummy connection, and the second time with the appsetting reference.

Cloud Storage: deleted file still accessible

I am deleting a JSON file from Cloud Storage. However, the file I deleted is still accessible. I know it sounds silly. When I list the files that exist in Cloud Storage, this file is not listed, yet I can still access it via its URL.
I'll try to give you an example.
I'm calling the file from cloud storage with Postman:
[
  {
    "_id": "60ad0e33b7161e270d7f9bf2",
    "id": 1,
    "city": "Rotterdam",
    "hours_0_sun": 2.4,
    "daily_0_temp_day": 11.5,
    ....
  },
  {
    ...
  }
]
When I remove the file:
const key = `someid-someid.json`;
const bucket = storage.bucket(process.env.GCLOUD_BUCKET_NAME);
const file = bucket.file(key);
const response = await file.delete();
And call the file again:
[
  {
    "_id": "60ad0e33b7161e270d7f9bf2",
    "id": 1,
    "city": "Rotterdam",
    "hours_0_sun": 2.4,
    "daily_0_temp_day": 11.5,
    ....
  },
  {
    ...
  }
]
File's still accessible...
When I try to get the file from storage:
// Find file
const options = {
  prefix: `someid-someid.json`
};
let files = await storage.bucket(process.env.GCLOUD_BUCKET_NAME).getFiles(options);
console.log(files);
Console:
[[]]
This is driving me mad. Is this normal? How can I delete the file completely?
Note: After I delete the file, I can't see it in the storage browser either, so the file no longer exists in the bucket. But it's still accessible...
I found the problem thanks to @Kolban, who mentioned CDN caching in the comments.
I had set a one-hour caching option when uploading the files to Cloud Storage, which I had totally forgotten about.
I changed the one hour to one minute and the problem was solved!
await bucket.upload(filePath, {
  destination: key,
  gzip: true,
  metadata: {
    cacheControl: "public, max-age=60" // 1 minute caching
  },
  public: true
});
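If you want to verify the header after uploading, a quick sketch using the same @google-cloud/storage objects as above (bucket and key as defined earlier):

// Read back the object's metadata to confirm the Cache-Control header took effect
const [metadata] = await bucket.file(key).getMetadata();
console.log(metadata.cacheControl); // expected: "public, max-age=60"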

Google Cloud Function always receives / from API Gateway

Let's set up the basics:
I'm using Google API Gateway with different backends like Google Cloud Functions.
First, I was parsing the request parameters with a switch statement on a header containing the original request URL. (Very messy, but it worked.)
So I decided to use an express app instead for my cloud function.
But here is the thing: my functions always receive / from the gateway, generating errors like Cannot GET / when my path is https://mygateway/api/subservice/action.
So my question is: can I change the handling of the express app so it parses my header containing the original request URL instead of the default path?
Here is a part of my config:
{
  "swagger": "2.0",
  "info": {
    "title": "my API",
    "version": "1.0.0"
  },
  "basePath": "/api",
  "host": "mygateway.[REGION].gateway.dev",
  "schemes": [
    "https"
  ],
  "paths": {
    "/subservice/action": {
      "get": {
        "x-google-backend": {
          "address": "https://[REGION]-[ProjectID].cloudfunctions.net/[mycloudfunction]"
        },
        "security": [
          {
            "jwt_security": []
          }
        ],
I found something similar on this question, which guided my search for the answer (possible duplicate here).
According to Google's explanation of path translation, when you use x-google-backend the backend only receives the base request. You have to specify the behaviour you expect with the path_translation parameter. In my case, I want the backend to receive the same path, so I use APPEND_PATH_TO_ADDRESS, as in the snippet below.
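A minimal sketch of how that looks in the OpenAPI config above (the address is the same placeholder from the question; path_translation accepts APPEND_PATH_TO_ADDRESS or CONSTANT_ADDRESS):

"x-google-backend": {
  "address": "https://[REGION]-[ProjectID].cloudfunctions.net/[mycloudfunction]",
  "path_translation": "APPEND_PATH_TO_ADDRESS"
},

With this setting, a request to /api/subservice/action should reach the Express app with its full path appended to the backend address instead of arriving as /.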
