I'm trying to create an AWS instance with a Node.js API that will manage other instances and install a Docker image, but I can't find any docs or tutorials.
To use the AWS SDK
First of all, you should npm install aws-sdk.
It's a bit confusing: groups of launched instances are actually called "reservations", and creating one of them is done with "runInstances".
So, of course you first need to initialize your EC2 object.
import { EC2, config } from 'aws-sdk';
config.loadFromPath(__dirname + "/../aws.config.json");
const ec2: EC2 = new EC2(); // to start/stop instances
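For reference, loadFromPath expects a JSON credentials file; its documented shape looks like this (the values are placeholders):

{
  "accessKeyId": "YOUR_ACCESS_KEY_ID",
  "secretAccessKey": "YOUR_SECRET_ACCESS_KEY",
  "region": "us-east-1"
}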
Next: I personally try to use promises whenever I can when working with AWS instances. They clean up the code considerably.
import { promisify } from 'util';
If you already have a reservation, you can start it using its instance id.
const params: EC2.StartInstancesRequest = { InstanceIds: [instanceId] };
const result: EC2.StartInstancesResult = await promisify((cb) => ec2.startInstances(params, cb))();
And you can stop it like this:
const params: EC2.StopInstancesRequest = { InstanceIds: [instanceId] };
const result: EC2.StopInstancesResult = await promisify((cb) => ec2.stopInstances(params, cb))();
To create your instance you need this:
const params: EC2.RunInstancesRequest = { InstanceType: "t1.micro", ImageId: "ami-31814f58", MinCount: 1, MaxCount: 1 };
const result: EC2.Reservation = await promisify((cb) => ec2.runInstances(params, cb))();
And finally, to list your instances/reservations (with some optional filters):
const stateFilter = { Name: "instance-state-name", Values: ["running"] };
const idFilter = { Name: "instance-id", Values: [instanceId] };
const params: EC2.DescribeInstancesRequest = { Filters: [stateFilter, idFilter] };
const result: EC2.DescribeInstancesResponse = await promisify((cb) => ec2.describeInstances(params, cb))();
The response contains a collection of reservations, and for each reservation a collection of instances.
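So to get a flat list you have to walk both levels; a small sketch:

// Collect the ids of all instances across all reservations.
const instanceIds = (result.Reservations ?? [])
  .flatMap(r => r.Instances ?? [])
  .map(i => i.InstanceId);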
Your own images
To use your own images, you should create an "Amazon Machine Image" (AMI) in advance.
You may want to set up an "Elastic Container Registry" (ECR). You can push a docker image to this repository: https://docs.aws.amazon.com/AmazonECR/latest/userguide/docker-push-ecr-image.html
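Since the goal is to install a Docker image on the new instance, one common approach is to pass a user-data boot script to runInstances; a sketch (the AMI id, account id, region, and image URI are all placeholders, and it assumes an Amazon Linux AMI plus an instance profile that is allowed to pull from ECR):

// Boot script: install Docker, log in to ECR, and run the image.
const userData = `#!/bin/bash
yum install -y docker
service docker start
$(aws ecr get-login --no-include-email --region us-east-1)
docker run -d 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-image:latest`;

const params: EC2.RunInstancesRequest = {
  InstanceType: "t2.micro",
  ImageId: "ami-xxxxxxxx", // placeholder
  MinCount: 1,
  MaxCount: 1,
  UserData: Buffer.from(userData).toString("base64") // UserData must be base64-encoded
};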
I am facing an issue where the Datadog RESOURCE column is not showing the correct value (as shown in the image). I really need some help here.
My assumption is that this happens because the http tags are not appearing correctly. I think Datadog itself adds the http tags and their values.
http.path_group & http.route should have the value "/api-pim2/v1/attribute/search", but for some reason they are not coming through correctly.
I am using the dd-trace library on the backend. The tracer options I provided are these:
{"logInjection":true,"logLevel":"debug","runtimeMetrics":true,"analytics":true,"debug":true,"startupLogs":true,"tags":{"env":"dev02","region":"us-east-1","service":"fabric-gateway-pim-ecs"}}
The initialising code, which runs at the start of my app, looks like this:
app/lib/tracer.js:
const config = require('config')
const tracerOptions = config.get('dataDog.tracer.options')
const logger = require('app/lib/logger')
const tracer = require('dd-trace').init({
  ...tracerOptions,
  enabled: true,
  logger
})
module.exports = tracer
I also tried to set the http.path_group & http.route tags manually, but it still doesn't update their values. However, I can add new tags such as http.test with the same value I was trying to write into http.path_group & http.route.
const addTagsToRootSpan = tags => {
  const span = tracer.scope().active()
  if (span) {
    const root = span.context()._trace.started[0]
    for (const tag of tags) {
      root.setTag(tag.key, tag.value)
    }
    log.debug('Tags added')
  } else {
    log.debug('Trace span could not be found')
  }
}
...
const tags = [
  { key: 'http.path_group', value: request.originalUrl },
  { key: 'http.route', value: request.originalUrl },
  { key: 'http.test', value: request.originalUrl }
]
addTagsToRootSpan(tags)
...
I was requiring the tracer.js file at the start of my app, where the server starts listening:
require('app/lib/tracer')
app.listen(port, err => {
  if (err) {
    log.error(err)
    return err
  }
  log.info(`Your server is ready for ${config.get('stage')} on PORT ${port}!`)
})
By enabling the debug option in the Datadog tracer init function, I can see the tracer logs and the values the library passes for http.route and resource.
I was confused by this line; according to the Datadog tracer docs, you should init first, before importing any instrumented module:
// This line must come before importing any instrumented module.
const tracer = require('dd-trace').init();
But for me, http.route & resource get the correct values if I initialise the tracer in my routing file. They start giving me the complete route "/api-pim2/v1/attribute/search" instead of only "/api-pim2".
routes/index.js:
const router = require('express').Router()
require('app/lib/tracer')
const attributeRouter = require('app/routes/v1/attribute')
router.use('/v1/attribute', attributeRouter)
module.exports = router
I am not accepting this answer yet because I am still confused about where to initialise the tracer. Maybe someone can explain it better. I am just writing this answer so that anyone else facing this issue can try it; it might just resolve their problem.
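For what it's worth, my reading of the quoted doc line is that init() must execute before express is first required anywhere in the process, so the safest place is the very first line of the entry file; a sketch of that layout:

// index.js -- the tracer must be required before any instrumented module
require('app/lib/tracer')

const express = require('express') // express is now patched by dd-trace
const app = express()
// ... mount routes, then app.listen(port, ...)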
I was wondering whether any of the following is possible to implement with the Vite build tool.
Consider that I have files in a directory matching the pattern /content/file-[id].md:
/content/file-1.md
/content/file-2.md
Every time I serve the SPA with the vite command or build the app with vite build, I would like to:
grab all the files /content/file-[id].md and transform them into /content_parsed/file-[id].html
/content_parsed/file-1.html
/content_parsed/file-2.html
grab all files /content_parsed/file-[id].html and generate a manifest file /files.manifest containing the paths of all files.
/files.manifest
This has to be done automatically in watch mode when the app is served (vite command), and on demand when the app is built (vite build).
I am pretty sure this could be done with a manual script run as node ./prepareFiles.js && vite, but in that case I lose the reactivity when serving the app (i.e. the watch mode), so a direct integration into Vite would be a step up in terms of usability and testability (I think).
Given the above use case: can Vite do this? Do I need to write a custom plugin for it? Or do you recommend creating a separate watch-files/watch-directory script for it?
I have been able to partially accomplish what I wanted. The only remaining issue is the hot-reload functionality.
If you import the manifest as
import doc from 'docs.json'
then the page will be auto-reloaded if the module is updated.
On the other hand, if you want to dynamically load the data with the fetch API:
fetch('docs.json')
  .then(r => r.json())
  .then(json => {
    //...
  })
Then the only way to refresh the page contents is a manual refresh. If anyone has a suggestion for how to trigger a reload from within the Vite plugin context, please let me know; I will update the post once I figure it out.
Also, I should mention that I decided not to pre-generate the HTML pages, so that functionality is missing from the plugin, but it could easily be added with marked, markdown-it, remark, etc.
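If you do want the HTML step, the transform could live inside generateManifest() below; a sketch using marked (markdown-it or remark would work the same way; htmlOutputPath is a made-up name):

const { marked } = require('marked');
// frontMatter.body is the markdown with the front-matter header stripped
const html = marked.parse(frontMatter.body);
fs.writeFileSync(htmlOutputPath, html);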
Plugin: generateFilesManifest.ts
import {PluginOption} from "vite";
import {NormalizedInputOptions} from "rollup";
import {FSWatcher} from "chokidar";
import fs from "fs";
import path from 'path'

const matter = require('front-matter');
const chokidar = require('chokidar');

export type GenerateFilesManifestConfigType = {
  watchDirectory: string,
  output: string
}

export type MatterOutputType<T> = {
  attributes: T,
  body: string,
  bodyBegin: number,
  frontmatter: string,
  path: string,
  filename: string,
  filenameNoExt: string,
}

export default function generateFilesManifest(userConfig: GenerateFilesManifestConfigType): PluginOption {
  let config: GenerateFilesManifestConfigType = userConfig
  let rootDir: string
  let publicDir: string
  let command: string

  function generateManifest() {
    const watchDirFullPath = path.join(rootDir, config.watchDirectory)
    const files = fs.readdirSync(watchDirFullPath);

    // regenerate the manifest
    const manifest: any[] = []
    files.forEach(fileName => {
      const fileFullPath = path.join(watchDirFullPath, fileName)

      // get the front matter data
      const fileContents = fs.readFileSync(fileFullPath).toString()
      const frontMatter = matter(fileContents)

      // get the file path relative to the public directory
      const fileRelativePath = path.relative(publicDir, fileFullPath);

      const fileInfo = JSON.parse(JSON.stringify(frontMatter)) as MatterOutputType<any>;
      fileInfo.path = fileRelativePath
      fileInfo.filename = fileName
      fileInfo.filenameNoExt = fileName.substring(0, fileName.lastIndexOf('.'));
      fileInfo.frontmatter = ''
      manifest.push(fileInfo);
    });

    const outputString = JSON.stringify(manifest, null, 2);
    fs.writeFileSync(config.output, outputString, {encoding: 'utf8', flag: 'w'})
    console.log('Auto-generated file updated')
  }

  let watcher: FSWatcher | undefined = undefined;

  return {
    name: 'generate-files-manifest',
    configResolved(resolvedConfig) {
      publicDir = resolvedConfig.publicDir
      rootDir = resolvedConfig.root
      command = resolvedConfig.command
    },
    buildStart(options: NormalizedInputOptions) {
      generateManifest();
      if (command === 'serve') {
        const watchDirFullPath = path.join(rootDir, config.watchDirectory)
        watcher = chokidar.watch(watchDirFullPath, {
          ignoreInitial: true
        });
        watcher
          .on('add', () => generateManifest())
          .on('change', () => generateManifest())
          .on('unlink', () => generateManifest())
          .on('error', error => console.error('Error happened', error))
      }
    },
    buildEnd(err?: Error) {
      console.log('build end')
      watcher?.close();
    }
  }
}
In vite.config.ts, use it like this:
export default defineConfig({
  plugins: [
    vue(),
    generateFilesManifest({
      watchDirectory: '/public/docs',
      output: './public/docs.json'
    })
  ]
})
You might want to cover edge cases such as the watch directory not being present, etc.
front-matter is the library that parses the markdown front matter; an alternative is gray-matter.
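For reference, this is roughly what front-matter returns, which is where the MatterOutputType shape above comes from:

const matter = require('front-matter');
const parsed = matter('---\ntitle: Hello\n---\nBody text');
// parsed.attributes  -> { title: 'Hello' }
// parsed.body        -> 'Body text'
// parsed.bodyBegin   -> the line on which the body starts
// parsed.frontmatter -> 'title: Hello' (the raw header)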
EDIT: thanks to @flydev's response I was able to dig up some more examples of the page-reload functionality. Here's the experimental functionality that you could add:
import {ViteDevServer, WebSocketServer} from "vite";

function generateManifest() {
  // ...
  ws?.send({type: 'full-reload', path: '*'})
}

let ws: WebSocketServer | undefined = undefined;

return {
  name: 'generate-files-manifest',
  //...
  configureServer(server: ViteDevServer) {
    ws = server.ws
  },
  // ...
}
Currently the whole page is reloaded regardless of the path. Not sure if there is a way to make it smart enough to reload only the pages that loaded the manifest file; I guess it's currently limited by my own ability to write better code :)
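One avenue worth exploring: Vite supports custom HMR events, so instead of a full reload the plugin could emit its own event and only the pages that consume the manifest would react (a sketch; the event name 'manifest-updated' is made up here):

// In the plugin, instead of the full reload:
ws?.send({ type: 'custom', event: 'manifest-updated' })

// In client code that fetches docs.json:
if (import.meta.hot) {
  import.meta.hot.on('manifest-updated', () => {
    fetch('docs.json').then(r => r.json()).then(json => {
      // re-render with the fresh manifest
    })
  })
}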
I have the following code:
NOTE: getDb() is a wrapper around admin.firestore(); see the link at the end of the question for more details.

let wordRef = getDb()
  .collection(DOC_HAS_WORD_COUNT)
  .doc(word)
await wordRef.set({ word: word, 'count': 0 })
await wordRef.update('count', admin.firestore.FieldValue.increment(1))
When I execute it I get
FirebaseError: Function DocumentReference.update() called with invalid data. Unsupported field value: a custom object (found in field count)
How do I increment the value in Node.js, Firestore, and Cloud Functions?
NOTE: this problem is specific to Mocha testing; I didn't check, but it will probably not fail in a real environment.
The problem is caused by the code using the real implementation in tests, which needs to be overridden by an emulator implementation, as explained in:
https://claritydev.net/blog/testing-firestore-locally-with-firebase-emulators/
where you can also find the definition of getDb() that I used in the code snippet.
The following code replaces the firebase admin at run time, only when running in the test env.
NOTE: this code is based on https://claritydev.net/blog/testing-firestore-locally-with-firebase-emulators/ and, for a full solution, you need to do the same trick for db as explained in the link.
//db.js
const admin = require("firebase-admin");
let firebase;
if (process.env.NODE_ENV !== "test") {
  firebase = admin
}

exports.getFirebase = () => {
  return firebase;
};

exports.setFirebase = (fb) => {
  firebase = fb;
};
test:
// package.test.js
process.env.NODE_ENV = "test"
beforeEach(() => {
  // Set the emulator firebase before each test
  setFirebase(firebase)
});
import:
// package.test.js and package.js (at the top)
const { setFirebase } = require("../db.js")
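For completeness, the firebase passed to setFirebase() is the emulator-backed module from the linked article; a sketch of where it comes from (assuming @firebase/testing and a running Firestore emulator, e.g. firebase emulators:start; the project id is a placeholder):

// package.test.js -- sketch
const firebase = require('@firebase/testing');
// The same trick for db uses an emulator-backed admin app:
const db = firebase.initializeAdminApp({ projectId: 'my-test-project' }).firestore();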
code:
// package.js
let wordRef = getDb()
  .collection(DOC_HAS_WORD_COUNT)
  .doc(word)
await wordRef.set({ word: word, 'count': 0 })
await wordRef.update('count', getFirebase().firestore.FieldValue.increment(1))
I am trying to implement AWS X-Ray in my current project (using Node.js and the Serverless framework). While wiring X-Ray into one of my lambda functions, I got this problem:
Error: Failed to get the current sub/segment from the context.
at Object.contextMissingRuntimeError [as contextMissing] (/.../node_modules/aws-xray-sdk-core/lib/context_utils.js:21:15)
at Object.getSegment (/.../node_modules/aws-xray-sdk-core/lib/context_utils.js:92:45)
at Object.resolveSegment (/.../node_modules/aws-xray-sdk-core/lib/context_utils.js:73:19).....
code below:
import { DynamoDB } from "aws-sdk";
import AWSXRay from 'aws-xray-sdk';
export const handler = async (event, context, callback) => {
  const dynamo = new DynamoDB.DocumentClient({
    service: new DynamoDB({ region })
  });
  AWSXRay.captureAWSClient(dynamo.service);

  try {
    // call dynamoDB function
  } catch (err) {
    //...
  }
}
For this problem, I used the solution from https://forums.aws.amazon.com/thread.jspa?messageID=821510
The other solution I tried is from https://forums.aws.amazon.com/thread.jspa?messageID=829923
The code is like this:
import AWSXRay from 'aws-xray-sdk';
const AWS = AWSXRay.captureAWS(require('aws-sdk'));
export const handler = async (event, context, callback) => {
  const dynamo = new AWS.DynamoDB.DocumentClient({region});
  //....
}
Still not working...
Any kind of help is appreciated.
As you mentioned, that happens because you're running locally (using the serverless-offline plugin), and the serverless-offline plugin doesn't provide a valid X-Ray context.
One possible way to get past this error and still be able to call your function locally is to set the AWS_XRAY_CONTEXT_MISSING environment variable to LOG_ERROR instead of RUNTIME_ERROR (the default).
Something like:
serverless invoke local -f functionName -e AWS_XRAY_CONTEXT_MISSING=LOG_ERROR
I didn't test this using the Serverless framework, but it worked when the same error occurred calling an Amplify function locally:
amplify function invoke <function-name>
I encountered this error as well. To fix it, I disabled X-Ray when running locally. X-Ray isn't needed when running locally, because I can just use debug log statements instead.
This is what the code would look like
let AWS = require('aws-sdk');
if (!process.env.IS_OFFLINE) {
  const AWSXRay = require('aws-xray-sdk');
  AWS = AWSXRay.captureAWS(require('aws-sdk'));
}
If you don't like this approach, you can set up a context-missing strategy so that it doesn't error out when the context is missing:
AWSXRay.setContextMissingStrategy("LOG_ERROR");
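Combining the two approaches, you can keep capture enabled everywhere and only relax the strategy locally; a minimal sketch (IS_OFFLINE is set by serverless-offline):

const AWSXRay = require('aws-xray-sdk');
if (process.env.IS_OFFLINE) {
  // Log the missing-context error instead of throwing when running locally.
  AWSXRay.setContextMissingStrategy('LOG_ERROR');
}
const AWS = AWSXRay.captureAWS(require('aws-sdk'));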
If you don't want the error clogging up your output you can add a helper that ignores only that error.
// Ignores the noisy "Failed to get the current sub/segment from the context" X-Ray error
export function disableXrayError() {
  const originalError = console.error;
  console.error = jest.fn((err) => {
    if (err.message.includes("Failed to get the current sub/segment from the context")) {
      return;
    }
    // Delegate everything else to the real console.error to avoid infinite recursion
    originalError(err);
  });
}
Here is a prior post that discusses how the pre-loaded Imagemagick is limited for security reasons on AWS Lambda.
"Note: This update contains an updated /etc/ImageMagick/policy.xml
file that disables the EPHEMERAL, HTTPS, HTTP, URL, FTP, MVG, MSL,
TEXT, and LABEL coders"
I need to use the 'label' function (which works successfully on my development machine; example pic further below).
Within the discussion in the linked post, frenchie4111 generously offers a node module he created that uploads ImageMagick to a lambda app: https://github.com/DoubleDor/imagemagick-prebuilt
I would like to understand how uploading a fresh version of ImageMagick works, and how I would then use that version with the GM module that ties ImageMagick and Node.js together.
If I read correctly, the full version of ImageMagick will be downloaded to the path below each time my lambda app boots up?
/tmp/imagemagick
DoubleDor's readme provides the option below:
var q = require('q'); // missing from the original snippet, but required for q.async
var imagemagick_prebuilt = require('imagemagick-prebuilt');
var child_process = require('child_process');

exports.handler = function (event, context) {
  return q
    .async(function* () {
      var imagemagick_bin_location = yield imagemagick_prebuilt();
      console.log(`ImageMagick installed: ${imagemagick_bin_location}`);

      // ImageMagick logo creation test:
      // convert logo: logo.gif
      var convert_process = child_process
        .spawn(imagemagick_bin_location, ['logo:', 'logo.gif']);
      convert_process
        .on('close', function () {
          context.succeed();
        });
    })();
};
What would I include/require to define 'gm' so it works within my partial file below (in my Node.js lambda app)? Will I need to edit the GM module too? (See the sketch at the end of this post.)
// imagemaker.js > gets included and called from another file that uploads the picture to S3 and/or tweets it after the picture is created in /tmp/filename.jpg. This works presently: I can make and upload ImageMagick text-generated images, but I just can't use the 'label' tool, which scales text within appended gm images.
'use strict';
var Promise = require('bluebird');
var exec = require('child_process').exec;
var async = require('async');
var request = require('request');
var fs = require('fs');
var dateFormat = require('dateformat');
var gm = require('gm').subClass({imageMagick: true});
var aws = require('aws-sdk');

var performers = [
  { name: 'Matt Daemon', score: 99 },
  { name: 'Jenifer Lawrence', score: 101 }
];

// created in a makeTitle function I omit for brevity's sake.
var url = '/tmp/pathtotitlepicture.jpg';

// the function below successfully appends a gm title image created with
// other functions that I haven't included
function makeBody(url) {
  var img = gm(400, 40)
    .append(url)
    .gravity('West')
    .fill('black')
    .quality('100')
    .font('bigcaslon.ttf')
    .background('#f0f8ff');

  for (var i = 0; i < performers.length; i++) {
    var pname = " " + (i + 1) + ") " + performers[i].name;
    img.out('label:' + pname);
  }

  img.borderColor('#c5e4ff')
    .border('5', '5')
    .write(url, function (err) {
      if (err) throw err;
      var stream = fs.createReadStream("/tmp/fooberry.jpg");
      return resolve(stream); // resolve comes from an enclosing Promise wrapper (not shown)
    });
}
Just for fun, the image below shows what I've been able to do with gm (GraphicsMagick for node) and ImageMagick on my development machine, which I'd now like to get working on AWS Lambda. I really need that 'label' function, and I guess that means learning how to get the whole library uploaded to AWS Lambda each time it boots(?)
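One possible wiring, based on gm's subClass options rather than anything specific to imagemagick-prebuilt (a sketch; check that your gm version supports the appPath option, which is prepended to the binary name and therefore needs a trailing slash):

var path = require('path');
var imagemagickPrebuilt = require('imagemagick-prebuilt');

exports.handler = function (event, context) {
  imagemagickPrebuilt().then(function (convertPath) {
    // convertPath points at the downloaded convert binary;
    // gm wants the directory that contains it.
    var gm = require('gm').subClass({
      imageMagick: true,
      appPath: path.dirname(convertPath) + '/'
    });
    // ... build images with gm as in makeBody() above, 'label:' included
    context.succeed();
  });
};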