JSON stored in AWS EB environment variables is retrieved without quotes - node.js

I'm running a Node.js Elastic Beanstalk (EB) container and trying to store JSON inside an environment variable. The JSON is stored correctly, but when I retrieve it via process.env.MYVARIABLE it comes back with all the double quotes stripped.
E.g. MYVARIABLE looks like this:
{ "prop": "value" }
when I retrieve it via process.env.MYVARIABLE its value is actually { prop: value}, which isn't valid JSON. I've tried escaping the quotes with '\', i.e. { \"prop\": \"value\" }, but that just adds more weird behavior, where the string comes back as {\ \"prop\\":\ \"value\\" }. I've also tried wrapping the whole thing in single quotes, e.g. '{ "prop": "value" }', but it seems to strip those out too.
Anyone know how to store JSON in environment variables?
EDIT: Some more info: it would appear that certain characters are being doubly escaped when you set an environment variable. E.g. if I wrap the object in single quotes, the value when I fetch it using the SDK becomes:
\'{ "prop": "value"}\'
Also, if I leave the single quotes out, forward slashes get escaped with backslashes, so if the object looks like {"url": "http://..."} the result when I query via the SDK is {"url": "http:\\/\\/..."}
Not only is it mangling the text, it's also rearranging the JSON properties, so they appear in a different order than the one I set.
UPDATE
So I would say this seems to be a bug in AWS, given that the submitted values are being mangled. This happens whether I use the Node.js SDK or the web console. As a workaround I've taken to replacing double quotes with single quotes in the JSON object during deployment and swapping them back in the application (a rough sketch of that swap is below).
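A minimal sketch of that quote-swapping workaround, assuming the stored value never contains literal quote characters inside its string values (variable names are illustrative):
// Stored in the EB environment as: { 'prop': 'value' }
// Swap the single quotes back to double quotes before parsing.
const raw = process.env.MYVARIABLE || '';
const settings = JSON.parse(raw.replace(/'/g, '"'));
console.log(settings.prop); // "value"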

Use base64 encoding
An important string is being auto-magically mangled. We don't know the internals of EB, but we can guess it is parsing JSON. So don't store JSON, store the base64-encoded JSON:
a = `{ "public": { "s3path": "https://d2v4p3rms9rvi3.cloudfront.net" } }`
x = btoa(a) // store this as B_MYVAR
// "eyAicHVibGljIjogeyAiczNwYXRoIjogImh0dHBzOi8vZDJ2NHAzcm1zOXJ2aTMuY2xvdWRmcm9udC5uZXQiIH0gfQ=="
settings = JSON.parse(atob(process.env.B_MYVAR))
settings.public.s3path
// "https://d2v4p3rms9rvi3.cloudfront.net"
// Or even:
process.env.MYVAR = atob(process.env.B_MYVAR)
// Sets MYVAR at runtime, hopefully soon enough for your purposes
Since this is JS, there are caveats about UTF-8 and Node/browser support, but atob and btoa are widely available; check the docs for your runtime.
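If atob/btoa aren't available in your Node version, here is a sketch of the same idea using Buffer (B_MYVAR is the same variable as above):
// Encode once, then store the result as B_MYVAR in the EB environment.
const encoded = Buffer.from('{ "public": { "s3path": "https://d2v4p3rms9rvi3.cloudfront.net" } }', 'utf8').toString('base64');

// Decode and parse at runtime.
const settings = JSON.parse(Buffer.from(process.env.B_MYVAR, 'base64').toString('utf8'));
console.log(settings.public.s3path); // "https://d2v4p3rms9rvi3.cloudfront.net"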

Related

How to copy S3 object with special character in key

I have objects in an S3 bucket, and I do not have control over the names of the keys. Some of these keys have special characters, and the AWS SDK does not like them.
For example, one object key is: folder/‍Johnson, Scott to JKL-Discovery.pdf, it might look fine at first glance, but if I URL encode it: folder%2F%E2%80%8DJohnson%2C+Scott+to+JKL-Discovery.pdf, you can see that after folder/ (or folder%2F when encoded) there is a random sequence of characters %E2%80%8D before Johnson.
It is unclear where these characters come from; however, I need to be able to handle this use case. When I try to make a copy of this object using the Node.js AWS SDK,
const copyParams = {
Bucket,
CopySource,
Key : `folder/‍Johnson, Scott to JKL-Discovery.pdf`
};
let metadata = await s3.copyObject(copyParams).promise();
It fails and can't find the object; if I encodeURI() the key, it also fails.
How can I deal with this?
DO NOT SUGGEST I CHANGE THE ALLOWED CHARACTERS IN THE KEY NAME. I DO NOT HAVE CONTROL OVER THIS
I faced the same problem, but with PHP. The copyObject() method automatically encodes the destination parameters (Bucket and Key), but not the source parameter (CopySource), so it has to be encoded manually. In PHP it looks like this:
$s3->copyObject([
  'Bucket' => $targetBucket,
  'Key' => $targetFilePath,
  'CopySource' => $s3::encodeKey("{$sourceBucket}/{$sourceFilePath}"),
]);
I'm not familiar with Node.js, but a similar encodeKey() helper (or manually encoding CopySource) should work there too; a possible approach is sketched below.
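A rough Node.js equivalent, assuming the AWS SDK v2 client and illustrative variable names; as far as I know the JS SDK has no encodeKey() helper, so the CopySource string is encoded by hand:
// The SDK encodes Bucket and Key for you, but CopySource must be
// URL-encoded manually (bucket and key joined with '/').
const copyParams = {
  Bucket: targetBucket,
  Key: targetKey,
  CopySource: encodeURIComponent(`${sourceBucket}/${sourceKey}`),
};
let metadata = await s3.copyObject(copyParams).promise();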
Trying your string, there's a tricky zero-width Unicode character in there...
http://www.ltg.ed.ac.uk/~richard/utf-8.cgi?input=342+200+213&mode=obytes
I would sanitize the string by removing non-ASCII Unicode characters and then proceed with URL encoding, as suggested by the official docs.
encodeURI('folder/‍johnson, Scott to JKL-Discovery.pdf'.replace(/[^\x00-\x7F]/g, ""))

ConvertFrom-Json complains of an unterminated string

I have a PowerShell script that pulls data from a DLL. The DLL returns the first 3000 characters of the JSON. Most of the time this is fine, because the complete JSON is less than 3000 characters. However, if a row returns longer JSON, I only get the first 3000 characters.
If I have a JSON string that hits the cap and I run:
$myString = $returnedArray[$currentrow] | ConvertFrom-Json
I get:
ConvertFrom-Json : Unterminated string passed in. (3000):
The proper fix would be to deal with the source that is truncating the JSON. However, I don't have access to that source code (owned by a third party company).
Now that I have the output, is this just a question of adding a string terminator? Or do I have to parse the JSON myself, figure out what it needs to correctly end the current field, and add that?
I have been trying various things to terminate the JSON string, but none have worked. For the moment my PowerShell script is simply skipping any row that fails the ConvertFrom-Json line.
You will need to parse the JSON string yourself and keep track of any open strings or nested values so you can properly terminate them once you stop receiving data; a rough sketch of that tracking follows the example below.
Here is a short example of some JSON that will be hard to terminate without tracking it in the first place.
{
  "glossary": {
    "title": "example glossary",
    "GlossDiv": {
      "title": "S",
      "GlossList": {
        "GlossEntry": {

Generate a consistent sha256 hash from an object in Node

I have an object that I'd like to hash with sha256 in Node. The contents of the object are simple JavaScript types. For example's sake, let's say:
var payload = {
  "id": "3bab3f00-7d55-11e7-9b0a-4c32759242a5",
  "foo": "a message",
  "version": 7,
};
I create a hash like this:
const crypto = require('crypto');
var hash = crypto.createHash('sha256');
hash.update( ... ).digest('hex');
The question is, what to pass to update? The documentation for crypto says you can pass a <string> | <Buffer> | <TypedArray> | <DataView>, which seems to suggest an object is not a good thing to pass.
I can't use toString(), because that prints "[object Object]". I could use JSON.stringify; however, I have read elsewhere that the output of stringify is not guaranteed to be deterministic for the same input.
Are there any other options? I do not want to download a package from NPM.
The right terms are "canonical" and "canonicalization" (I'm assuming EN-US here); you can find stringify implementations that produce canonical output.
Beware that you must also make sure the output has the right character set (UTF-8 should be preferred) and line endings. Spurious data must not be present; e.g. a byte order mark or a NUL terminator is enough to invalidate the hash value.
After that you can pass it in as a string, I suppose.
You can of course use any canonical encoding. Note that XML has XML-dsig, which includes canonicalization during signature generation and verification; this means verification will still succeed even if the XML is reformatted (whitespace and indentation won't matter, as long as the structure and contents are unchanged).
I'd still recommend regression testing between implementations and even version updates of the libraries.
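If you'd rather not pull in a package at all, here is a minimal sketch of the idea, assuming plain JSON-compatible values (no Dates, undefined, or cycles): recursively sort the object keys so the same logical object always produces the same string, then hash that string.
const crypto = require('crypto');

// Deterministic stringify: arrays keep their order, object keys are sorted.
function canonicalStringify(value) {
  if (Array.isArray(value)) {
    return '[' + value.map(canonicalStringify).join(',') + ']';
  }
  if (value !== null && typeof value === 'object') {
    return '{' + Object.keys(value).sort()
      .map(key => JSON.stringify(key) + ':' + canonicalStringify(value[key]))
      .join(',') + '}';
  }
  return JSON.stringify(value);
}

var payload = { "id": "3bab3f00-7d55-11e7-9b0a-4c32759242a5", "foo": "a message", "version": 7 };
const digest = crypto.createHash('sha256')
  .update(canonicalStringify(payload), 'utf8')
  .digest('hex');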

Parameterized query in Postgresql with a json array

I would like to invoke array_prepend on a json[] column using a parameterized query. I am using the pg-promise npm package, which uses the normal node-postgres adapter under the hood.
My query is:
db.query(`update ${schema}.chats set messages =
  array_prepend('{"sender":"${sender}","tstamp":${lib.ustamp()},"body":$1}', messages)
  where chat_id = ${chat_id}`, message);
Same with "$1".
It works with a non-parameterized query.
The above code produces:
{ [error: syntax error at or near "hiya"]
The main reason for this is to avoid SQL injection (the docs say values are escaped adequately when using parameterized queries).
There are 2 problems in your query.
The first one is that you are using ES6 template strings while also using the library's SQL formatting with ${propName} syntax.
From the library's documentation:
Named Parameters are defined using syntax $*propName*, where * is any of the following open-close pairs: {}, (), [], <>, //, so you can use one to your liking, but remember that {} are also used for expressions within ES6 template strings.
So either change from ES6 template strings to standard strings, or simply switch to a different variable syntax, like $/propName/ or $[propName]; this way you will avoid the conflict.
The second problem is, as I pointed out earlier in the comments, that when injecting dynamic SQL names you should use what is documented as SQL Names.
Below is a cleaner approach to the query formatting:
db.query('update ${schema~}.chats set messages = array_prepend(${message}, messages) where chat_id = ${chatId}', {
  schema: 'your schema name',
  chatId: 'your chat id',
  message: {
    sender: 'set the sender here',
    tstamp: 'set the time you need',
    body: 'set the body as needed'
  }
});
When in doubt about what kind of query you are trying to execute, the quickest way to peek at it is via pgp.as.format(query, values), which will give you the exact query string.
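For example (a hedged sketch with illustrative values; pgp here is the initialized pg-promise library object):
// Preview the exact SQL that would be executed, without running it.
const preview = pgp.as.format(
  'update ${schema~}.chats set messages = array_prepend(${message}, messages) where chat_id = ${chatId}',
  { schema: 'myschema', chatId: 123, message: { sender: 'bob', tstamp: 1502272800, body: 'hiya' } }
);
console.log(preview);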
And if you still want to use ES6 template strings for something else, then you can change the string to:
`update $/schema~/.chats set messages = array_prepend($/message/, messages) where chat_id = $/chatId/`
That's only one example; the syntax is flexible. Just remember not to use ES6 template string formatting to inject values into queries, because ES6 templates have no knowledge of how to properly format JavaScript types for PostgreSQL; only the library knows how to do that.

Should I Parse JSON Data Before Inserting To MongoDB?

So, I am receiving some JSON data from a client on my Node.js server. I want to insert that JSON into my MongoDB instance using Mongoose.
I can insert the JSON as-is, and it works great, because it's just text. However, I want to parse it before insertion so that when I extract it later it will be all nice and neat.
So, this works:
wordStream.words.push(wordData);
And this doesn't:
wordStream.words.push(JSON.parse(wordData));
So, should I even want to parse the JSON before insertion?
And if I should parse the JSON, how do I do it without throwing an error? I need to put everything in double quotes "", I believe, before it will parse, but for some reason whenever I make a string with double quotes and parse it, it turns everything all wrong.
Here is the JSON:
{ word: 'bundle',
  definitions:
    [ { definition: 'A group of objects held together by wrapping or tying.',
        partOfSpeech: 'noun' } ],
  urlSource: 'testurl',
  otherSource: '' }
And the error when I try to parse
/Users/spence450/Documents/development/wordly-dev/wordly-server/node_modules/mongoose/lib/utils.js:409
throw err;
^
SyntaxError: Unexpected token o
Ideas?
So, should I even want to parse the JSON before insertion?
Converting the strings to objects will benefit you later, when you need to run queries against your MongoDB database.
And if I should parse the JSON, how do I do it without throwing an error? I need to put everything in double quotes "", I believe, before it will parse, but for some reason whenever I make a string with double quotes and parse it, it turns everything all wrong.
You aren't receiving valid JSON documents. JSON requires keys (and string values) to be double-quoted.
You can:
Use a library that recognizes invalid JSON objects (please don't)
Use eval (this is a security issue, so don't do it)
Fix the source of the problem, creating real JSON documents. This isn't difficult; you can review the JSON syntax rules, and a corrected version of the document above is shown below.
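For illustration, here is the same document written as valid JSON (keys and strings double-quoted), which JSON.parse accepts before pushing into the Mongoose array:
const wordData = `{
  "word": "bundle",
  "definitions": [
    {
      "definition": "A group of objects held together by wrapping or tying.",
      "partOfSpeech": "noun"
    }
  ],
  "urlSource": "testurl",
  "otherSource": ""
}`;

wordStream.words.push(JSON.parse(wordData));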
