Creating AWS S3 object life cycle using NodeJS
I want to create an S3 object lifecycle configuration via the API using NodeJS. In the documentation, AWS only provides an example with multiple lifecycle transitions in Java:
https://docs.aws.amazon.com/AmazonS3/latest/userguide/how-to-set-lifecycle-configuration-intro.html
I also checked this URL:
https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#getBucketLifecycle-property
General concern
How do I set multiple Transitions with NodeJS, the way Java allows?
BucketLifecycleConfiguration.Rule rule2 = new BucketLifecycleConfiguration.Rule()
.withId("Archive and then delete rule")
.withFilter(new LifecycleFilter(new LifecycleTagPredicate(new Tag("archive", "true"))))
.addTransition(new Transition().withDays(30).withStorageClass(StorageClass.StandardInfrequentAccess))
.addTransition(new Transition().withDays(365).withStorageClass(StorageClass.Glacier))
.withExpirationInDays(3650)
.withStatus(BucketLifecycleConfiguration.ENABLED);
Any help would be great.

We need to call putBucketLifecycleConfiguration (the older putBucketLifecycle is deprecated and does not accept Filter-based rules) and pass a Rules array inside LifecycleConfiguration, similar to the CLI example:
s3.putBucketLifecycleConfiguration(
  {
    Bucket: "sample-temp-bucket",
    LifecycleConfiguration: {
      Rules: [
        {
          Filter: {
            And: {
              Prefix: "myprefix",
              Tags: [
                {
                  Value: "mytagvalue1",
                  Key: "mytagkey1",
                },
                {
                  Value: "mytagvalue2",
                  Key: "mytagkey2",
                },
              ],
            },
          },
          Status: "Enabled",
          Expiration: {
            Days: 1,
          },
        },
        {
          Filter: {
            Prefix: "documents/",
          },
          Status: "Enabled",
          Transitions: [
            {
              Days: 365,
              StorageClass: "GLACIER",
            },
          ],
          Expiration: {
            Days: 3650,
          },
          ID: "ExampleRule",
        },
      ],
    },
  },
  (error, result) => {
    if (error) console.log("error", error);
    if (result) console.log("result", result);
  }
);
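To get multiple transitions like the Java rule in the question, a rule's Transitions array simply takes several entries. A minimal sketch mirroring the Java "Archive and then delete" rule; the bucket name and the archive tag are illustrative assumptions:

```javascript
// Sketch: one rule with two transitions (Standard-IA after 30 days,
// Glacier after a year) and a 10-year expiration, filtered by tag.
const lifecycleParams = {
  Bucket: "sample-temp-bucket",
  LifecycleConfiguration: {
    Rules: [
      {
        ID: "Archive and then delete rule",
        Filter: { Tag: { Key: "archive", Value: "true" } },
        Status: "Enabled",
        Transitions: [
          { Days: 30, StorageClass: "STANDARD_IA" },
          { Days: 365, StorageClass: "GLACIER" },
        ],
        Expiration: { Days: 3650 },
      },
    ],
  },
};

// s3.putBucketLifecycleConfiguration(lifecycleParams, (err, data) => { ... });
```

Each entry in Transitions plays the role of one addTransition(...) call in the Java builder.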

Related

node-ews Update email to mark as read

I'm using the node-ews library, version 3.5.0, but when I try to update any property I get the following error:
{
  "ResponseMessages": {
    "UpdateItemResponseMessage": {
      "attributes": {
        "ResponseClass": "Error"
      },
      "MessageText": "An internal server error occurred. The operation failed., Object reference not set to an instance of an object.",
      "ResponseCode": "ErrorInternalServerError",
      "DescriptiveLinkKey": 0,
      "Items": null
    }
  }
}
I'm trying to mark email as read using the following code:
const markFolderAsRead = async (ews, id, changeKey) => {
  const args = {
    attributes: {
      MessageDisposition: "SaveOnly",
    },
    ItemChanges: {
      ItemChange: {
        ItemId: {
          attributes: {
            Id: id,
            ChangeKey: changeKey,
          },
        },
        Updates: {
          SetItemField: {
            FieldURI: {
              attributes: {
                FieldURI: "message:IsRead",
              },
              Message: {
                IsRead: true,
              },
            },
          },
        },
      },
    },
  };
  await ews.run("UpdateItem", args).then((result) => {
    console.log("email read:", JSON.stringify(result));
  });
};
I tried several modifications, including updating other fields, but none of them worked.
I followed this documentation: https://learn.microsoft.com/pt-br/exchange/client-developer/web-service-reference/updateitem-operation
The library doesn't show any example of this, but when I change the JSON to a wrong SOAP construction the error shows different messages, as it also does if I omit a required parameter such as ChangeKey.
So maybe this error is related to the Microsoft EWS SOAP construction and I'm missing some parameters.
Got it working!
My JSON was wrong: the FieldURI object was closing after the Message element, when it should close before it (FieldURI and Message must be siblings inside SetItemField).
Correct JSON:
const args = {
  attributes: {
    MessageDisposition: "SaveOnly",
    ConflictResolution: "AlwaysOverwrite",
    SendMeetingInvitationsOrCancellations: "SendToNone",
  },
  ItemChanges: {
    ItemChange: {
      ItemId: {
        attributes: {
          Id: id,
          ChangeKey: changeKey,
        },
      },
      Updates: {
        SetItemField: {
          FieldURI: {
            attributes: {
              FieldURI: "message:IsRead",
            },
          },
          Message: {
            IsRead: "true",
          },
        },
      },
    },
  },
};
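To make the fix easy to verify (and reuse), the corrected payload can be wrapped in a small builder. A sketch under the same structure; the function name is hypothetical:

```javascript
// Builds the corrected UpdateItem payload. Note that FieldURI and
// Message are siblings inside SetItemField, not nested.
const buildMarkAsReadArgs = (id, changeKey) => ({
  attributes: {
    MessageDisposition: "SaveOnly",
    ConflictResolution: "AlwaysOverwrite",
    SendMeetingInvitationsOrCancellations: "SendToNone",
  },
  ItemChanges: {
    ItemChange: {
      ItemId: { attributes: { Id: id, ChangeKey: changeKey } },
      Updates: {
        SetItemField: {
          FieldURI: { attributes: { FieldURI: "message:IsRead" } },
          Message: { IsRead: "true" },
        },
      },
    },
  },
});

// Usage (assuming an initialized node-ews client):
// await ews.run("UpdateItem", buildMarkAsReadArgs(itemId, changeKey));
```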

Add metadata to output .wav file using AWS MediaConvert

I am creating a Node API to convert .mp4 files to .wav using AWS MediaConvert.
I need to add some metadata to the output .wav file, but I am not able to figure out how.
Is there any specific MediaConvert setting I need to configure when creating the MediaConvert job that will let me add metadata to the .wav file?
I currently have the following settings for my MediaConvert job:
const mediaConvertSetting = {
  Queue: queueArn, // your MediaConvert queue ARN
  Role: roleArn,   // your MediaConvert IAM role ARN
  Settings: {
    Inputs: [
      {
        AudioSelectors: {
          "Audio Selector 1": {
            DefaultSelection: "DEFAULT",
          },
        },
        VideoSelector: {},
        TimecodeSource: "ZEROBASED",
        FileInput: "s3://source-bucket/input_file.mp4",
      },
    ],
    TimecodeConfig: {
      Source: "ZEROBASED",
    },
    OutputGroups: [
      {
        Name: "File Group",
        Outputs: [
          {
            ContainerSettings: {
              Container: "RAW",
            },
            AudioDescriptions: [
              {
                AudioTypeControl: "FOLLOW_INPUT",
                AudioSourceName: "Audio Selector 1",
                CodecSettings: {
                  Codec: "WAV",
                  WavSettings: {
                    BitDepth: "16",
                    Channels: "1",
                    Format: "RIFF",
                    SampleRate: "8000",
                  },
                },
                LanguageCodeControl: "FOLLOW_INPUT",
                AudioType: 0,
              },
            ],
            Extension: "wav",
            NameModifier: "_wav",
          },
        ],
        OutputGroupSettings: {
          Type: "FILE_GROUP_SETTINGS",
          FileGroupSettings: {
            Destination: "s3://destination-bucket/",
            S3Settings: {
              AccessControl: {
                CannedAcl: "PUBLIC_READ",
              },
            },
          },
        },
      },
    ],
    TimedMetadataInsertion: {
      Id3Insertions: [
        {
          Id3: base64EncodedTestMetadata,
          Timecode: "00:00:00:01",
        },
      ],
    },
  },
  UserMetadata: {
    "Some Metadata": "TEST",
  },
};
At present MediaConvert cannot set metadata on the RAW container type.
A possible workflow alternative is to embed the desired metadata into the newly created WAV files in an automated fashion using an AWS Lambda function and ffmpeg.
If you would like to see this feature added to MediaConvert, please use the 'Feedback' button in the lower left of the AWS Console, and select 'Feature Request' in the resulting dialog box.
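The Lambda + ffmpeg workaround mentioned above might look like the following sketch. The paths, tag names, and the ffmpeg layer location are illustrative assumptions; ffmpeg writes RIFF INFO tags via its -metadata flag while -c copy leaves the audio stream untouched:

```javascript
// Builds the ffmpeg argument list for re-muxing a WAV file with
// added container metadata, without re-encoding the audio.
const buildFfmpegArgs = (inputPath, outputPath, tags) => {
  const args = ["-i", inputPath];
  for (const [key, value] of Object.entries(tags)) {
    args.push("-metadata", `${key}=${value}`);
  }
  args.push("-c", "copy", outputPath); // copy stream, rewrite container only
  return args;
};

// Inside a Lambda handler (ffmpeg bundled in a layer, path assumed):
// const { execFileSync } = require("child_process");
// execFileSync("/opt/bin/ffmpeg",
//   buildFfmpegArgs("/tmp/in.wav", "/tmp/out.wav", { comment: "TEST" }));
```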

Need to update the elastic search doc without id field with nodejs client

I have docs in Elasticsearch with no id field in them; I have another field, _id, as the unique identifier.
I want to update a doc by the _id field via the nodejs client, but it throws the error Missing required parameter: id.
One solution could be to reindex the whole doc.
Any other suggestions are welcome.
I am using the following query:
await esClient.updateByQuery({
  index: 'storyv2',
  refresh: true,
  body: {
    script: {
      lang: 'painless',
      source: 'ctx._source.like = 100',
    },
    query: {
      match: {
        _id: storyId,
      },
    },
  },
});
Elastic version: 2.2
Elastic nodejs client version: 7.10.0
You can also update your documents by query:
POST your-index/_update_by_query
{
  "script": {
    "source": " ...do something to your document... ",
    "lang": "painless"
  },
  "query": {
    "term": {
      "unique-id-field": "the-doc-id-to-update"
    }
  }
}
UPDATE:
await esClient.updateByQuery({
  index: 'storyv2',
  type: '_doc', // <-- add this
  refresh: true,
  body: {
    script: {
      lang: 'painless',
      source: 'ctx._source.like = 100',
    },
    query: {
      match: {
        _id: storyId,
      },
    },
  },
});
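Since _id is a metadata field rather than a document field, matching on it with a regular match query can behave inconsistently; Elasticsearch also offers the dedicated ids query for targeting document ids. A sketch of the same update built around it (an alternative, not verified against the ES 2.2 / client 7.10 version mix in the question):

```javascript
// Builds an update_by_query request that targets the document id
// via the dedicated `ids` query instead of `match` on _id.
const updateByIdQuery = (storyId) => ({
  index: "storyv2",
  refresh: true,
  body: {
    script: { lang: "painless", source: "ctx._source.like = 100" },
    query: { ids: { values: [storyId] } },
  },
});

// await esClient.updateByQuery(updateByIdQuery(storyId));
```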

TagSpecifications with requestSpotInstances UnexpectedParameter with aws-sdk

I'm trying to add a tag to my AWS Spot Request, but it returns { UnexpectedParameter: Unexpected key 'TagSpecifications' found in params.LaunchSpecification.
I have followed this documentation, and I have already tried moving this code out of LaunchSpecification, but the error persists.
const params = {
  InstanceCount: 1,
  LaunchSpecification: {
    ImageId: config.aws.instanceAMI,
    KeyName: 'backoffice',
    InstanceType: config.aws.instanceType,
    SecurityGroupIds: [config.aws.instanceSecurityGroupId],
    TagSpecifications: [{
      ResourceType: 'instance',
      Tags: [{
        Key: 'Type',
        Value: 'Mongo-Dump',
      }],
    }],
    BlockDeviceMappings: [{
      DeviceName: '/dev/xvda',
      Ebs: {
        DeleteOnTermination: true,
        SnapshotId: 'snap-06e838ce2a80337a4',
        VolumeSize: 50,
        VolumeType: 'gp2',
        Encrypted: false,
      },
    }],
    IamInstanceProfile: {
      Name: config.aws.instanceProfileIAMName,
    },
    Placement: {
      AvailabilityZone: `${config.aws.region}a`,
    },
  },
  SpotPrice: config.aws.instancePrice,
  Type: 'one-time',
};
return ec2.requestSpotInstances(params).promise();
return ec2.requestSpotInstances(params).promise();
Something makes me think that the problem is in the documentation or in the aws-sdk for Javascript itself. My options are exhausted.
The error message is correct. According to the documentation, the RequestSpotLaunchSpecification object doesn't have an attribute called TagSpecifications.
However, you can tag your Spot Instance request after you create it.
The response from ec2.requestSpotInstances(params) contains an array of SpotInstanceRequest objects, each with a SpotInstanceRequestId (e.g. sir-012345678). Use the CreateTags API with these Spot Instance request ids to add the tags.
const createTagParams = {
  Resources: ['sir-12345678'],
  Tags: [
    {
      Key: 'Type',
      Value: 'Mongo-Dump'
    }
  ]
};
ec2.createTags(createTagParams, function(err, data) {
  // ...
});
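Chaining the two calls, the request ids can be pulled out of the response and fed straight into CreateTags. A sketch; the helper name is hypothetical and the response shape follows the aws-sdk EC2 documentation:

```javascript
// Extracts the Spot Instance request ids from a requestSpotInstances
// response and builds the params object for ec2.createTags.
const buildTagParams = (data, tags) => ({
  Resources: data.SpotInstanceRequests.map((r) => r.SpotInstanceRequestId),
  Tags: tags,
});

// const data = await ec2.requestSpotInstances(params).promise();
// await ec2.createTags(
//   buildTagParams(data, [{ Key: 'Type', Value: 'Mongo-Dump' }])
// ).promise();
```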

Creating azure VM from image with Node SDK

I'm trying to use the Azure SDK (azure-sdk-for-node) to create a virtual machine based on an image I've already saved. I've also already created the service.
Here is what I've got:
// Create a virtual machine in the cloud service.
computeManagementClient.virtualMachines.createDeployment('prerender-pro', {
  name: "prerender-pro",
  deploymentSlot: "Production",
  label: "for heavy duty caching",
  roles: [{
    roleName: "prerender-pro",
    roleType: "PersistentVMRole",
    label: "for heavy duty caching",
    oSVirtualHardDisk: {
      sourceImageName: "prerender-os-2014-07-16",
      mediaLink: "https://XXXXXXX.blob.core.windows.net/vhds/prerender-os-2014-07-16.vhd"
    },
    dataVirtualHardDisks: [],
    configurationSets: [{
      configurationSetType: "LinuxProvisioningConfiguration",
      adminUserName: "Blah",
      adminPassword: "Blahblah2014!",
      computerName: 'prerender-pro',
      enableAutomaticUpdates: true,
      resetPasswordOnFirstLogon: false,
      storedCertificateSettings: [],
      inputEndpoints: []
    }, {
      configurationSetType: "NetworkConfiguration",
      subnetNames: [],
      storedCertificateSettings: [],
      inputEndpoints: [{
        localPort: 3389,
        protocol: "tcp",
        name: "RemoteDesktop"
      }]
    }]
  }]
}, function (err, result) {
  if (err) {
    console.error(err);
  } else {
    console.info(result);
  }
});
The error I'm getting is below. I followed the example in the GitHub readme almost exactly, so I'm not sure why this is an issue.
{ [Error: A computer name must be specified.]
code: 'BadRequest',
statusCode: 400,
requestId: '9206ea1e591eb4dd8ea21a9196da5d74' }
Thanks!
It turns out that the error message is inaccurate. When deploying a Linux instance, only the "HostName" is required when defining the Configuration Set. "ComputerName" applies only to Windows instances. Here's an example of C# code:
ConfigurationSet configSet = new ConfigurationSet
{
    HostName = "VMTest",
    UserName = "xxxxx",
    UserPassword = "xxxx",
    ConfigurationSetType = ConfigurationSetTypes.LinuxProvisioningConfiguration
};
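Translated back to the Node call in the question, the fix would be to replace computerName with hostName in the Linux provisioning configuration set. A sketch keeping the question's own values; the exact property casing follows the question's code and may differ in other SDK versions:

```javascript
// Linux provisioning configuration set with hostName instead of
// computerName (which applies only to Windows instances).
const linuxConfigSet = {
  configurationSetType: "LinuxProvisioningConfiguration",
  hostName: "prerender-pro",
  adminUserName: "Blah",
  adminPassword: "Blahblah2014!",
  resetPasswordOnFirstLogon: false,
  storedCertificateSettings: [],
  inputEndpoints: []
};
```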
