AWS SDK - change autoscaling group update policy - node.js

I have an Auto Scaling group on AWS and I'd like to change its update policy to get rolling updates.
I've tried
var autoScaling = new AWS.AutoScaling(awsConfig);
autoScaling.updateAutoScalingGroup({
  AutoScalingGroupName: <some name>,
  UpdatePolicy: {
    AutoScalingReplacingUpdate: {
      WillReplace: true,
    },
  },
})
But this is failing with:
{ [UnexpectedParameter: Unexpected key 'UpdatePolicy' found in params]
  message: 'Unexpected key \'UpdatePolicy\' found in params',
  code: 'UnexpectedParameter',
  time: Tue Nov 08 2016 22:15:42 GMT-0800 (PST) }

UpdatePolicy is a feature of AWS CloudFormation. It is not part of the AWS API itself, so none of the SDKs expose it. This is the CloudFormation documentation:
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-updatepolicy.html
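For illustration, here is a minimal sketch of how that policy is declared on the Auto Scaling group resource in a CloudFormation template rather than through the SDK (the resource names and property values below are placeholders, not taken from the question):

"Resources": {
  "MyAutoScalingGroup": {
    "Type": "AWS::AutoScaling::AutoScalingGroup",
    "UpdatePolicy": {
      "AutoScalingReplacingUpdate": {
        "WillReplace": true
      }
    },
    "Properties": {
      "MinSize": "1",
      "MaxSize": "2",
      "LaunchConfigurationName": { "Ref": "MyLaunchConfig" }
    }
  }
}

If you need this driven from Node.js, the route would be to call CloudFormation (e.g. updateStack with a template carrying this attribute) instead of calling updateAutoScalingGroup directly.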

Related

I am unable to connect to a MongoDB Atlas cluster from Node.js and am getting the following "unable to connect to DB" error

{ error: 1, message: 'Command failed: mongodump -h cluster0.yckk6.mongodb.net --port=27017 -d databaseName -p -u --gzip --archive=/tmp/file_name_2022-09-19T09-42-05.gz\n' + '2022-09-19T14:42:08.931+0000\tFailed: error connecting to db server: no reachable servers\n' }
Can anyone help me solve this problem? The following is my backup code:
function databaseBackup() {
  let backupConfig = {
    // MongoDB connection URI
    mongodb: "mongodb+srv://<username>:<password>@cluster0.yckk6.mongodb.net:27017/databaseName?retryWrites=true&w=majority&authMechanism=SCRAM-SHA-1",
    s3: {
      accessKey: "SDETGGAKIA2GL", // access key
      secretKey: "Asad23rdfdg2teE8lOS3JWgdfgfdgfg", // secret key
      region: "ap-south-1", // S3 bucket region
      accessPerm: "private", // S3 bucket privacy; since you'll be storing a database, private is HIGHLY recommended
      bucketName: "backupDatabase" // bucket name
    },
    keepLocalBackups: false, // if true, creates a folder in the project root named after the database and stores backups in it; if false, uses the OS temporary directory
    noOfLocalBackups: 5, // keeps only the most recent 5 backups and deletes all older ones from the local backup directory
    timezoneOffset: 300 // timezone; assumed to be in hours if less than 16 and in minutes otherwise
  }
  // MBackup is the backup helper imported elsewhere (not shown in the question)
  MBackup(backupConfig).then(onResolve => {
    // when everything was successful
    console.log(onResolve);
  }).catch(onReject => {
    // when anything goes wrong
    console.log(onReject);
  });
}
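One observation, not a confirmed fix: a mongodb+srv:// connection string must not carry an explicit port, because the host and port are resolved from DNS SRV records, so the URI would be expected to look more like this (placeholders kept from the question):

// mongodb+srv URIs resolve host and port via DNS SRV records, so no :27017
mongodb: "mongodb+srv://<username>:<password>@cluster0.yckk6.mongodb.net/databaseName?retryWrites=true&w=majority&authMechanism=SCRAM-SHA-1",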

How can I debug a "Build failed: Too many concurrent builds" error when only one function is being deployed via Google Cloud Functions?

I'm currently trying to deploy a function via the console. I have added variables, package specs, and service account credentials.
When I hit deploy, the status sat at "in build" with the spinning wheel for about ten minutes before coming back with a build-failed icon.
When I went to the logs, I saw the following:
status: {
  code: 8
  message: "Build failed: Too many concurrent builds, please stagger your deployments."
}
with severity: ERROR under resource.
There are several other cloud functions that are already deployed and active; they were deployed some time ago and are not currently being redeployed.
I have attempted to redeploy the function in question but that resulted in a timeout after 60 seconds.
Full logs below:
{
  protoPayload: {
    @type: "type.googleapis.com/google.cloud.audit.AuditLog"
    status: {
      code: 8
      message: "Build failed: Too many concurrent builds, please stagger your deployments."
    }
    authenticationInfo: {
      principalEmail: "user@user"
    }
    serviceName: "cloudfunctions.googleapis.com"
    methodName: "google.cloud.functions.v1.CloudFunctionsService.CreateFunction"
    resourceName: "projects/resource_name"
  }
  insertId: "-n11hqacqvq"
  resource: {
    type: "cloud_function"
    labels: {3}
  }
  timestamp: "2021-02-18T22:16:56.681559Z"
  severity: "ERROR"
  logName: "projects/.../logs/cloudaudit.googleapis.com%2Factivity"
  operation: {
    id: "operations/..."
    producer: "cloudfunctions.googleapis.com"
    last: true
  }
  receiveTimestamp: "2021-02-18T22:16:56.858611526Z"
}
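If console deploys keep tripping this limit, one workaround sketch (the function name and settings below are hypothetical, not from the question) is to deploy from the CLI, where retries and staggering are easier to script:

gcloud functions deploy my-function --region=us-central1 --runtime=nodejs14 --trigger-http

The "Too many concurrent builds" message appears to come from Cloud Build, whose concurrency quota is project-wide, so builds triggered by other services in the same project count against the same limit.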

Azure CLI hangs when deleting blobs

I'm using the Azure CLI to delete multiple blobs (in this case there's only 3 to delete), by specifying a pattern:
az storage blob delete-batch --connection-string myAzureBlobConnectionString -s my-container --pattern clients/client_name/*
This hangs and seems to get stuck in some kind of loop. I've tried adding --debug onto the end, and it appears to enter a never-ending cycle of requests:
x-ms-client-request-id:16144555-a87c-11e9-bf86-sd391bc3b6f9
x-ms-date:Wed, 17 Jul 2019 10:17:12 GMT
x-ms-version:2018-11-09
/fsonss7393djamxomaa/mycontainer
comp:list
marker:2!152!XJJ4HDHKANnmLWUIWUDCN75DSDS89DXNNAKNK3NNINI4NKLNXLNLA88NSAMOXAyOCE5OTk5LTEyLTMxVDIzOjU5OjU5Ljk5OTk5OTlaIQ--
restype:container
azure.multiapi.storage.v2018_11_09.common.storageclient : Client-Request-ID=446db2f0-d87e-11e9-ac19-jj324kc3b6f9 Outgoing request: Method=GET, Path=/mycontainer, Query={'restype': 'container', 'comp': 'list', 'prefix': None, 'delimiter': None, 'marker': '2!152!MDAwMDY4IWNsaXASADYnJpc3RvbG9sZHZpYyOKD87986xlcy8wYWY3YTllYi02MzUyLTRmMmUtODE3MaSDXXZTdkYmYzOTcuanBnITAwMDAyOCE5DADATEyLTMxVDIzOjUDD8223HKjk5OTk5OTlaIQ--', 'maxresults': None, 'include': None, 'timeout': None}, Headers={'x-ms-version': '2018-11-09', 'User-Agent': 'Azure-Storage/2.0.0-2.0.1 (Python CPython 3.6.6; Windows 2008ServerR2) AZURECLI/2.0.68', 'x-ms-client-request-id': '1664324-a87c-1fsfs-bf86-ee291b5252f9', 'x-ms-date': 'Wed, 17 Jul 2019 10:19:14 GMT', 'Authorization': 'REDACTED'}.
urllib3.connectionpool : https://fsonss7393djamxomaa.blob.core.windows.net:443 "GET /mycontainer?restype=container&comp=list&marker=2%21452%21MDXAXMDY4IWNsaWVudHMvYnJpc3RvbG9sZHZpYySnsns8sWY3YTllYi02MzUyLTRDASXXDE3MS01YzJmZTdkYmYzOTcuanBnFFSFSAyOXASAOTk5LTEyLTMxGSGSOjU4535Ljk5OTk5OTlaIQ-- HTTP/1.1" 200 None
azure.multiapi.storage.v2018_11_09.common.storageclient : Client-Request-ID=544db2f0-a88c-23x9-ac19-jkjd89bc3b6f9 Receiving Response: Server-Timestamp=Wed, 17 Jul 2019 10:19:14 GMT, Server-Request-ID=44fsfs2-701e-004e-2589-3cae723232000, HTTP Status Code=200, Message=OK, Headers={'transfer-encoding': 'chunked', 'content-type': 'application/xml', 'server': 'Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0', 'x-ms-request-id': '4a43c59b2-701e-44c-2989-3cdsd70000000', 'x-ms-version': '2018-11-09', 'date': 'Wed, 17 Jul 2019 10:19:14 GMT'}.
azure.multiapi.storage.v2018_11_09.common._auth : String_to_sign=GET
It loops these requests over and over. Running az storage blob list with a prefix returns the 3 files immediately.
Any ideas?
I think there is a minor error in your CLI command: the container name is incorrect (it does not contain the path clients/client_name).
In your command the container name is my-container, but in the debug info the container name is mycontainer, which is not consistent with your command.
Please make sure you specify the correct container name, one that actually contains the path clients/client_name.
I tested this on my side with a container that does not contain the path clients/client_name and got the same error as you. Testing with a container that does contain clients/client_name deletes all the blobs inside it.
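Based on the debug output, the corrected command would presumably look like this (assuming the blobs really live under clients/client_name in mycontainer):

az storage blob delete-batch --connection-string myAzureBlobConnectionString -s mycontainer --pattern "clients/client_name/*"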
Otherwise, check your CLI version with az --version; the latest version is 2.0.69.

Authentication error using new Pulumi azuread module

I've installed the latest Pulumi azuread module, and I get this error when I run pulumi preview:
Previewing update (int):

     Type                          Name      Plan  Info
     pulumi:pulumi:Stack           test-int
     └─ azuread:index:Application  test            1 error

Diagnostics:
  azuread:index:Application (test):
    error: Error obtaining Authorization Token from the Azure CLI: Error waiting for the Azure CLI: exit status 1
My index.ts is very basic:
import * as pulumi from "@pulumi/pulumi";
import * as azure from "@pulumi/azure";
import * as azuread from "@pulumi/azuread";

const projectName = pulumi.getProject();
const stack = pulumi.getStack();
const config = new pulumi.Config(projectName);
const baseName = `${projectName}-${stack}`;

const testRg = new azure.core.ResourceGroup(baseName, {
  name: baseName
});

const test = new azuread.Application("test", {
  availableToOtherTenants: false,
  homepage: "https://homepage",
  identifierUris: ["https://uri"],
  oauth2AllowImplicitFlow: true,
  replyUrls: ["https://replyurl"],
  type: "webapp/api",
});
Creating resources and the AD application with the old azure.ad module works fine.
I have no clue what I'm missing now...
EDIT:
index.ts the old way
import * as pulumi from "@pulumi/pulumi";
import * as azure from "@pulumi/azure";

const projectName = pulumi.getProject();
const stack = pulumi.getStack();
const config = new pulumi.Config(projectName);
const baseName = `${projectName}-${stack}`;

const testRg = new azure.core.ResourceGroup(baseName, {
  name: baseName
});

const test = new azure.ad.Application("test", {
  homepage: "https://homepage",
  availableToOtherTenants: false,
  identifierUris: ["https://uri"],
  oauth2AllowImplicitFlow: true,
  replyUrls: ["https://replyurl"]
});
Result of pulumi preview:
Previewing update (int):

     Type                      Name      Plan    Info
     pulumi:pulumi:Stack       test-int
 +   └─ azure:ad:Application   test      create  1 warning

Diagnostics:
  azure:ad:Application (test):
    warning: urn:pulumi:int::test::azure:ad/application:Application::test verification warning: The Azure Active Directory resources have been split out into their own Provider.
    Information on migrating to the new AzureAD Provider can be found here: https://terraform.io/docs/providers/azurerm/guides/migrating-to-azuread.html
    As such the Azure Active Directory resources within the AzureRM Provider are now deprecated and will be removed in v2.0 of the AzureRM Provider.

Resources:
    + 1 to create
    2 unchanged
EDIT 2:
I'm running this on Windows 10:
az cli = 2.0.68
pulumi cli = 0.17.22
@pulumi/azure = 0.19.2
@pulumi/azuread = 0.18.2
@pulumi/pulumi = 0.17.21
My service principal's permissions for Azure Active Directory Graph and for Microsoft Graph were shown in screenshots here (images not reproduced).
I ran into this issue, and after hours I realized that Fiddler was somehow interfering with the Azure CLI.
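A quick check along those lines (a debugging sketch, not part of the original answer) is to ask the Azure CLI for a token directly, outside of Pulumi; if a proxy such as Fiddler is in the way, this fails with a similar error. The resource URL here assumes the Azure AD Graph endpoint that the early azuread provider authenticated against:

az account get-access-token --resource=https://graph.windows.net/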

Azure Functions Proxy - unable to set a HTTP header if value contains JSON

I am trying to set Report-To HTTP header with a proxy function, but the proxy doesn't even start when the value of the header contains a JSON value.
{
  "$schema": "http://json.schemastore.org/proxies",
  "proxies": {
    "proxy1": {
      "debug": true,
      "matchCondition": {
        "methods": [ "GET" ],
        "route": "/{*all}"
      },
      "backendUri": "https://*****.z6.web.core.windows.net/{all}",
      "responseOverrides": {
        "response.headers.Reply-To": "{{ \"TEST\":0 }}"
      }
    }
  }
}
This function returns HTTP error 503 Service Unavailable ("Function host is not running.") if I try it on Azure. If started locally, the runtime shows the following error message:
[26. 11. 2018 21:29:45] A ScriptHost error has occurred
[26. 11. 2018 21:29:45] Microsoft.Azure.AppService.Proxy.Common: ; expected
[26. 11. 2018 21:29:45] ; expected
[26. 11. 2018 21:29:45] The name 'TEST' does not exist in the current context
[26. 11. 2018 21:29:45] Only assignment, call, increment, decrement, and new object expressions can be used as a statement.
[26. 11. 2018 21:29:45] Stopping Host
Is something wrong with my proxy definition or is it a bug in Azure Functions?
That is valid JSON, so I feel this is a bug. I've logged an issue here in our repo for this.
As a workaround, you can change your header value to "{{ 'TEST':0 }}", using single quotes instead of escaped double quotes.
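Applied to the proxy definition above, that would look like this (only the header value changes):

"responseOverrides": {
  "response.headers.Reply-To": "{{ 'TEST':0 }}"
}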
