Properly enabling security for filepicker.io in Meteor - node.js

Filepicker by default lets pretty much anybody who is clever enough to copy your API key out of the client code add files to your S3 bucket. Luckily, it also offers a security option with expiring policies.
But I have no idea how to implement this in Meteor.js. I have tried back and forth: installing the meteor-crypto-base package, trying to generate the hashes on the server, and trying RGBboy's urlsafe-base64 algorithm from https://github.com/RGBboy/urlsafe-base64. But I just do not get any further. Maybe someone can help! Thank you in advance.

This is an example of how to do Filepicker signed URLs in Meteor, based on the documentation here:

var crypto = Npm.require('crypto');

var FILEPICKER_KEY = 'Z3IYZSH2UJA7VN3QYFVSVCF7PI';
var BASE_URL = 'https://www.filepicker.io/api/file';

Meteor.methods({
  signedUrl: function (handle) {
    // Policy expires one hour from now (Unix time, in seconds)
    var expiry = Math.floor(new Date().getTime() / 1000 + 60 * 60);
    // Base64-encode the JSON policy
    var policy = new Buffer(JSON.stringify({
      handle: handle,
      expiry: expiry
    })).toString('base64');
    // Sign the encoded policy with the Filepicker secret (HMAC-SHA256, hex digest)
    var signature = crypto
      .createHmac('sha256', FILEPICKER_KEY)
      .update(policy)
      .digest('hex');
    return BASE_URL + "/" + handle +
      "?signature=" + signature + "&policy=" + policy;
  }
});
Note that this will need to live somewhere inside your server directory so you don't ship the key to the client. To demonstrate that it works, you can call it from the client like so:
Meteor.call('signedUrl', 'KW9EJhYtS6y48Whm2S6D', function(err, url){console.log(url)});
If everything worked, you should see a photo when you visit the returned URL.
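The question mentions urlsafe-base64 for a reason: a plain-Base64 policy can contain '+', '/' or '=' characters that may get mangled inside a query string. If the URL produced by the method above is rejected, one hedged tweak is to make the policy URL-safe before signing and appending it (this helper is illustrative, not part of the Filepicker SDK):

// Hypothetical helper: convert plain Base64 to URL-safe Base64 so the
// policy survives inside a query string. The HMAC would then be computed
// over urlSafe(policy), and urlSafe(policy) is what goes into the URL.
function urlSafe(base64String) {
  return base64String.replace(/\+/g, '-').replace(/\//g, '_');
}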

Related

How to rewrite NodeJS CryptoJS functions so they work in ReactJS web?

I have some code that was created for me by a past contractor that provided a working implementation of CryptoJS between a Unity3D client and NodeJS server app (as in encryption of client-server messages).
I am working on an implementation that needs to call the same server endpoints from a React web app. Basically, I need to be able to ensure that the React app provides the same results when encrypting and decrypting messages as the server (and hence the Unity3D implementation) does.
The existing encrypt/decrypt functions in the NodeJS app are shown below. They are somewhat more advanced than the standard CryptoJS examples and when I attempt to use them as-is in React, I get this error:
TypeError: cryptoJS.createHash is not a function. (In 'cryptoJS.createHash('md5')', 'cryptoJS.createHash' is undefined)
After some research, I understand that the 'use as-is' approach will likely fail because it uses a server-side library on the client side, but I have not been able to find anything that explains how to recreate the same code so it works client-side.
encrypt(text, key, iv) {
  const keyBuffer = Buffer.from(cryptoJS.createHash('md5').update(key).digest('hex'), "hex")
  const ivBuffer = Buffer.from(cryptoJS.createHash('md5').update(iv).digest('hex'), "hex")
  const textBuffer = Buffer.from(text, 'utf8')
  let cipher = cryptoJS.createCipheriv(algorithm, keyBuffer, ivBuffer)
  let encryptedText = Buffer.concat([cipher.update(textBuffer), cipher.final()])
  return encryptedText.toString("base64")
}

decrypt(text, key, iv) {
  const keyBuffer = Buffer.from(cryptoJS.createHash('md5').update(key).digest('hex'), "hex")
  const ivBuffer = Buffer.from(cryptoJS.createHash('md5').update(iv).digest('hex'), "hex")
  let decipher = cryptoJS.createDecipheriv(algorithm, keyBuffer, ivBuffer)
  const textBuffer = Buffer.from(text, 'base64')
  var decipheredContent = Buffer.concat([decipher.update(textBuffer), decipher.final()])
  return decipheredContent.toString("utf8")
}
So, I need to figure out how to replace these encrypt/decrypt functions with ones that compile and run client-side in a React web app. Any help steering me in the right direction or assisting with the code would be greatly appreciated. Thanks for taking the time to read my question.
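For what it's worth, one way this is sometimes handled client-side is with the crypto-js package, which provides MD5 and AES in the browser. Below is a minimal sketch, not a drop-in from the original project: it assumes the server's undeclared algorithm is AES-CBC with the 16-byte MD5-derived key shown above, and that ciphertexts travel as Base64.

// Browser-side sketch using the crypto-js package (npm install crypto-js).
// Assumption: the Node `algorithm` was aes-128-cbc with PKCS#7 padding;
// adjust the key/IV derivation if the server actually uses a different cipher.
import CryptoJS from 'crypto-js';

export function encrypt(text, key, iv) {
  const keyWA = CryptoJS.MD5(key); // 16-byte key, same as the Node MD5-hex -> Buffer step
  const ivWA = CryptoJS.MD5(iv);   // 16-byte IV
  const encrypted = CryptoJS.AES.encrypt(text, keyWA, {
    iv: ivWA,
    mode: CryptoJS.mode.CBC,
    padding: CryptoJS.pad.Pkcs7,
  });
  return encrypted.toString(); // Base64 ciphertext, like the Node version
}

export function decrypt(base64Text, key, iv) {
  const keyWA = CryptoJS.MD5(key);
  const ivWA = CryptoJS.MD5(iv);
  const decrypted = CryptoJS.AES.decrypt(base64Text, keyWA, {
    iv: ivWA,
    mode: CryptoJS.mode.CBC,
    padding: CryptoJS.pad.Pkcs7,
  });
  return decrypted.toString(CryptoJS.enc.Utf8);
}

The browser's built-in Web Crypto API (crypto.subtle) is another option, but it does not implement MD5, so crypto-js is the closer match for this particular key-derivation scheme.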

Google Cloud Storage get signedUrl from CDN npm

I am using a code as the following to create a signed Url for my content:
var storage = require('@google-cloud/storage')();
var myBucket = storage.bucket('my-bucket');
var file = myBucket.file('my-file');

//-
// Generate a URL that allows temporary access to download your file.
//-
var request = require('request');

var config = {
  action: 'read',
  expires: '03-17-2025'
};

file.getSignedUrl(config, function (err, url) {
  if (err) {
    console.error(err);
    return;
  }
  // The file is now available to read from the URL.
});
This creates a URL that starts with https://storage.googleapis.com/my-bucket/
If I open that URL in the browser, the file is readable.
However, I guess that URL is direct access to the bucket file and does not pass through my configured CDN.
I see in the docs (https://cloud.google.com/nodejs/docs/reference/storage/1.6.x/File#getSignedUrl) that you can pass a cname option, which rewrites the URL to replace https://storage.googleapis.com/my-bucket/ with my bucket's CDN domain.
However, when I copy the resulting URL, the service account or resulting URL doesn't seem to have access to the resource.
I have added the Firebase Admin service account to the bucket, but I still get no access.
Also, from the docs, the CDN signed URL looks quite different from the one signed through that API. Is it possible to create a CDN signed URL from the API, or should I create it manually as explained in https://cloud.google.com/cdn/docs/using-signed-urls?hl=en_US&_ga=2.131493069.-352689337.1519430995#configuring_google_cloud_storage_permissions?
For anyone interested in the node code for that signing:
var crypto = require("crypto");
var URLSafeBase64 = require('urlsafe-base64');

var url = 'URL of the endpoint served by Cloud CDN';
var key_name = 'Name of the signing key added to the Google Cloud Storage bucket or service';
var key = 'Signing key as urlsafe base64 encoded string';
var expiration = Math.round(new Date().getTime() / 1000) + 600; // ten minutes from now, in seconds

// Decode the URL-safe Base64 encoded key
var decoded_key = URLSafeBase64.decode(key);

// Build the URL to sign
var urlToSign = url
  + (url.indexOf('?') > -1 ? "&" : "?")
  + "Expires=" + expiration
  + "&KeyName=" + key_name;

// Sign the URL using the key and URL-safe Base64 encode the signature
var hmac = crypto.createHmac('sha1', decoded_key);
var signature = hmac.update(urlToSign).digest();
var encoded_signature = URLSafeBase64.encode(signature);

// Concatenate the URL and encoded signature
urlToSign += "&Signature=" + encoded_signature;
The Cloud CDN content delivery network works with HTTP(S) load balancing to deliver content to your users. Are you using HTTPS Load Balancer to deliver content to your users?
You can see this attached document[1] on using Google Cloud CDN and HTTP(S) load balancing and inserting content into the cache.
[1] https://cloud.google.com/cdn/docs/overview
[2] https://cloud.google.com/cdn/docs/concepts
What error code are you getting? Can you run a curl command and send the output along with the error code for further analysis?
Could you confirm that your configuration meets the cacheability requirements, as not all HTTP responses are cacheable? Google Cloud CDN caches only responses that satisfy specific conditions [3]; please confirm. Upon confirmation, I will investigate further and advise you accordingly.
[3] Cacheability: https://cloud.google.com/cdn/docs/caching#cacheability
Could you also provide the output of the two commands below, which will help me verify whether there is a permission issue on these objects? These commands dump all the current permission settings on the object.
gsutil acl get gs://[full_path_to_file_to_be_cached]
gsutil ls -L gs://[full_path_to_file_to_be_cached]
For more details on permissions, refer to the GCP documentation [4].
[4] Setting bucket permissions: https://cloud.google.com/storage/docs/cloud-console#_bucketpermission
No, it is not possible to create a CDN signed URL from the API.
Based on what Google documents here, the answer provided by @htafoya seems legit.
However, I spent a couple of hours struggling to work out why the signed URL was not working, with the CDN endpoint complaining about access denied. Eventually I found that the code using the crypto module doesn't produce the same HMAC-SHA1 hash value as what gcloud compute sign-url computes; I still don't know why.
At the same time, I found this library (jsSHA) to be pretty cool: it generates an HMAC-SHA1 hash value exactly the same as gcloud and it has a neat API, so I thought I should comment here so that others with the same struggle can benefit. This is the final code I used to sign a gcloud CDN URL:
import jsSHA from 'jssha';

const url = `https://{domain}/{path}`;
const expire = Math.round(new Date().getTime() / 1000) + daySeconds;
const extendedUrl = `${url}${url.indexOf('?') > -1 ? "&" : "?"}Expires=${expire}&KeyName=${keyName}`;

// Use jsSHA to compute the HMAC-SHA1 signature over the URL
const shaObj = new jsSHA("SHA-1", "TEXT", { hmacKey: { value: signKey, format: "B64" } });
shaObj.update(extendedUrl);
const signature = safeSign(shaObj.getHMAC('B64'));

return `${extendedUrl}&Signature=${signature}`;
working great!
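One note: safeSign is not defined in the snippet above. Since Cloud CDN expects the signature in URL-safe Base64, my guess (an assumption, not part of jsSHA or the original answer) is that it simply swaps the Base64 alphabet:

// Hypothetical helper: convert standard Base64 output from jsSHA into
// the URL-safe Base64 alphabet that Cloud CDN expects in the Signature
// query parameter.
function safeSign(base64Signature) {
  return base64Signature.replace(/\+/g, '-').replace(/\//g, '_');
}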

How to use a proxy with the instagram-private-api for Node.js

Using a proxy with instagram-private-api.
Hi all, I spent a decent amount of time trying to figure this out; there is probably a really simple answer, but I was seriously confused. When creating a session for the Instagram Node.js API (private) you need a proxyUrl. I was wondering how to do / configure this. Do you need to create and host your own proxy server?
Here is my code so far.
var Upload = require('instagram-private-api').V1;
var Client = require('instagram-private-api').V1;
var device = new Client.Device('test');
var storage = new Client.CookieFileStorage(__dirname + '/cookies/test.json');
var photo = require('instagram-private-api').V1;

var username = 'testusername'
var password = 'testpassword'
var proxyUrl = '???'

Client.Session.create(device, storage, username, password, proxyUrl)

var Upload = require('./node_modules/instagram-private-api/client/v1/Upload.js')
var session = new Client.Session(device, storage, 'test', 'test')

Upload.photo(session, 'aaaa.jpg')
  .then(function (upload) {
    console.log(upload.params.uploadId);
    return Media.configurePhoto(session, upload.params.uploadId, 'henlo world');
  })
  .then(function (medium) {
    console.log(medium.params)
  })
I know my code is probably seriously flawed as well; criticism is appreciated! Here's the link to the GitHub repo of the Node.js wrapper mentioned.
Hi there Benjamin,
I'm really looking forward to helping you.
As already stated, this library is not being kept up or maintained, and could become irrelevant as the official API is updated.
I believe one has to set up a proxy URL on their phone.
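For what it's worth, the proxyUrl argument to Session.create is generally just the address of an HTTP(S) proxy you run or rent, written as an ordinary proxy URL. A hedged sketch, with the host, port and credentials as placeholders:

var Client = require('instagram-private-api').V1;

// Placeholder proxy endpoint -- replace with a proxy server you control or rent.
// The general shape is protocol://user:password@host:port
var proxyUrl = 'http://proxyUser:proxyPass@proxy.example.com:8080';

Client.Session.create(device, storage, username, password, proxyUrl);

In other words, you don't necessarily have to build a proxy server yourself; pointing proxyUrl at any HTTP proxy you trust should be enough.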

Authentication proxy - req.user.sub persistent?

Situation
I used the Node.js quickstart project from Auth0 to build an authentication proxy. The reason for this is that I cannot merge my Spring backend with the quickstart Spring example.
In order to let the Spring backend identify the user, I pass the user's sub as shown below.
var authenticate = jwt({
  secret: new Buffer(process.env.AUTH0_CLIENT_SECRET, 'base64'),
  audience: process.env.AUTH0_CLIENT_ID
});

...

app.get('/secured/*', function (req, res) {
  var url = apiUrl + req.url;
  var userId = req.user.sub; // 'auth0|muchoCrypto123'
  url += "?userId=" + userId;
  req.pipe(request(url)).pipe(res);
});
I am currently also investigating the use of HttpServletRequest in Spring to retrieve user details.
Question
Is req.user.sub a value that I can use to identify the user without worrying that it might change? So far I haven't detected any changes.
In the user management console I found the following:
user_id auth0|muchoCrypto123
Thus I assume that the user_id won't change. Can anyone confirm?

S3, Signed-URLs and Caching

I am generating signed URLs in my web app (Node.js) using the knox Node.js library.
However, the issue arises that for every request I need to regenerate a unique signed GET URL for the current user, which takes the browser's cache control out of the game.
I've searched the web without success: browsers seem to use the full URL as the caching key, so I am really curious how I can, under the given circumstances (Node.js, knox library), solve this and use cache control while still generating signed URLs for each and every request, since I need to verify the user's access rights.
I cannot believe there's no solution to that, though.
I am working with Java AmazonS3 client, but the process should be the same.
There is a strategy that can be used to handle this situation.
You could use a fixed date-time as the expiration date; for example, I set this date to tomorrow at 12 pm.
Now every URL you generate during that day will be identical (until 00:00, when "tomorrow" rolls over), so browser caching can be used to some extent.
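Translated to the knox library from the question, a rough sketch of that idea might look like this (the bucket name and path are placeholders, and it assumes knox's client.signedUrl(filename, expirationDate) helper):

var knox = require('knox');

var client = knox.createClient({
  key: process.env.AWS_ACCESS_KEY_ID,
  secret: process.env.AWS_SECRET_ACCESS_KEY,
  bucket: 'my-bucket' // placeholder bucket name
});

// Pin the expiration to a fixed point (midnight UTC tomorrow) so that every
// URL generated during the same day is byte-for-byte identical and the
// browser can reuse its cached copy.
function cacheFriendlySignedUrl(s3Path) {
  var expires = new Date();
  expires.setUTCHours(0, 0, 0, 0);
  expires.setUTCDate(expires.getUTCDate() + 1);
  return client.signedUrl(s3Path, expires);
}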
Expanding on @semir-deljić's answer.
Every time we call the getSignedUrl function it generates a new URL. This results in images not being cached even if the Cache-Control header is present.
Thus, we use the timekeeper library to freeze time. Now when the function is called, it thinks that time has not passed, and it returns the same URL.
const moment = require('moment');
const tk = require("timekeeper");

function url4download(awsPath, awsKey) {
  function getFrozenDate() {
    return moment().startOf('week').toDate();
  }

  // Parameters for the getSignedUrl function
  const params = {
    // Ref: https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObject.html
    // Ref: https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-query-string-auth.html
    Bucket: awsBucket,
    Key: `${awsPath}/${awsKey}`,
    // 604800 == 7 days
    ResponseCacheControl: `public, max-age=604800, immutable`,
    Expires: 604800, // 7 days is the max
  };

  const url = tk.withFreeze(getFrozenDate(), () => {
    return S3.getSignedUrl('getObject', params);
  });
  return url;
}
Note:
Using moment().toDate(), as timekeeper requires a native Date object.
Even though the question is about the knox library, my answer uses the official AWS SDK.
// This is how AWS & S3 are initialised.
const AWS = require('aws-sdk');

const S3 = new AWS.S3({
  accessKeyId: awsAccessId,
  secretAccessKey: awsSecretKey,
  region: 'ap-south-1',
  apiVersion: '2006-03-01',
  signatureVersion: 'v4',
});
Inspiration: https://advancedweb.hu/cacheable-s3-signed-urls/
If you use CloudFront with S3, you can use a custom policy. If you restrict each URL to the user's IP and give it a reasonably long timeout, then when they request the same content again they will get the same URL, so their browser can cache the content, yet the URL will not work for someone else on a different IP (see the sketch below).
(See: http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-creating-signed-url-custom-policy.html)
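For illustration, here is a rough sketch of that approach using the AWS SDK for JavaScript v2 and its AWS.CloudFront.Signer. The key-pair ID, private key and the choice of a fixed midnight expiry are assumptions, not part of the original answer:

const AWS = require('aws-sdk');

// Trusted-signer credentials for your CloudFront distribution (placeholders).
const signer = new AWS.CloudFront.Signer(cloudFrontKeyPairId, cloudFrontPrivateKeyPem);

function signedUrlForUser(resourceUrl, userIp) {
  // Pin the expiry to a fixed boundary (midnight UTC tomorrow) so repeated
  // requests from the same user yield an identical, cache-friendly URL.
  const expires = new Date();
  expires.setUTCHours(0, 0, 0, 0);
  expires.setUTCDate(expires.getUTCDate() + 1);

  // Custom policy: restrict the URL to the requesting IP address.
  const policy = JSON.stringify({
    Statement: [{
      Resource: resourceUrl,
      Condition: {
        IpAddress: { 'AWS:SourceIp': userIp + '/32' },
        DateLessThan: { 'AWS:EpochTime': Math.floor(expires.getTime() / 1000) }
      }
    }]
  });

  return signer.getSignedUrl({ url: resourceUrl, policy: policy });
}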
When calculating the signed URL you can set signingDate to a fixed moment in the past, e.g. yesterday morning, and then calculate the expiration from that moment.
Don't forget to use UTC and account for timezones.
import { S3Client, GetObjectCommand, GetObjectCommandInput } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

let signingDate = new Date();
signingDate.setUTCHours(0, 0, 0, 0);
signingDate.setUTCDate(signingDate.getUTCDate() - 1);

let params: GetObjectCommandInput = {
  Bucket: BUCKET_NAME,
  Key: filename
};

const command = new GetObjectCommand(params);
const url = await getSignedUrl(s3Client, command, {
  expiresIn: 3 * 3600 * 24, // 1 day until today + 1 day expiration + 1 day for timezones
  signableHeaders: new Set<string>(),
  signingDate: signingDate
});
