I have made a Meteor app with the autopublish and insecure packages removed, so to receive data from my collections I have to subscribe to them in the client. I also have a Python program that communicates with my Meteor server over DDP using the python-meteor package; in it I simply subscribe to my collections and get complete access to all my data, and I can also use Meteor.call to invoke functions on the server. This is convenient, but I can't help feeling it is a major security hole: anyone can write a client, subscribe to my collections, and grab all my data on a whim if they guess the collection names.
Is there a way to only let certain clients subscribe to collections and perform server calls?
Yes, you should add security checks to all publishers and methods.
Here's an example publisher that ensures the user is logged in and is a member of the group before receiving any posts related to the group:
Meteor.publish('postsForGroup', function(groupId) {
  check(groupId, String);

  // make sure the group exists and the user is a member of it
  var group = Groups.findOne(groupId);
  if (!group || !_.contains(group.members, this.userId))
    throw new Meteor.Error(403, 'You must be a member of the group!');

  return Posts.find({groupId: groupId});
});
Here's an example method that ensures the user is logged in and is an admin of the group before being allowed to change the group's name:
Meteor.methods({
  'groups.update.name': function(groupId, name) {
    check(groupId, String);
    check(name, String);

    // make sure the group exists and the user is an admin of it
    var group = Groups.findOne(groupId);
    if (!group || !_.contains(group.admins, this.userId))
      throw new Meteor.Error(403, 'You must be an admin of the group!');

    // make sure the name isn't empty
    if (!name.length)
      throw new Meteor.Error(403, 'Name can not be empty!');

    return Groups.update(groupId, {$set: {name: name}});
  }
});
One detail to watch out for: if you are using iron:router, be careful not to cause any errors in your publishers. Doing so will cause waitOn to never return. If you think throwing an error is possible under normal operation, I'd recommend return this.ready() instead of throw new Meteor.Error in your publisher.
Related
I am working on an e-commerce site. There are times when a product is no longer available, but a user has already added it to their cart or to their saved items. How do I implement a feature such that, when the product has been updated, the user is notified as soon as possible?
I thought about a cron job that would check whether the product is still available or has recently been updated, but I do not know if that is feasible. I am open to better ideas.
Thanks
What you are trying to achieve falls into the real-time updates category, and technically there is more than one way to achieve it.
The chosen solution will depend on your application architecture and requirements. In the meantime, I can suggest looking into the Ably SDK for Node.js, which offers a good starting point.
Below is a sample implementation where, on the back end, you publish a message when an item's stock reaches its limit:
// create a client
var client = new Ably.Realtime('your-api-key');

// get the appropriate channel
var channel = client.channels.get('product');

// publish a named message (the name could be the product type in your
// case) with the remaining quantity as the message payload
channel.publish('some-product-type', 0);
On the subscriber side, which would be your web client, you can subscribe to messages and update your UI accordingly:
// create client using same API key
var client = new Ably.Realtime('your-api-key');
// get product channel
var channel = client.channels.get('product');
// subscribe to messages and update your UI
channel.subscribe(function (message) {
  const productName = message.name;
  const updatedQuantity = message.data;
  // update your UI or perform whatever action
});
I built a live betting app once, and of course live updates were the most important part.
I suggest taking a look at WebSockets. The idea is pretty straightforward: on the back end you emit an event, say itemGotDisabled, and on the front end you connect to your WebSocket and listen for events.
You can create a custom component that handles the logic related to WebSocket events, to keep the code cleaner and more organized, and you can run whatever update logic you want as easily as yourFEWebsocketInstance.onmessage = (event) => {}.
Of course it's not the only way, and I am sure there are packages that implement this in an even more straightforward, easy-to-understand way.
I'm learning TypeORM and I'm trying to implement an email verification system for after a user creates an account.
Let's say I have two entities, User and EmailVerification. When the user is created, an EmailVerification related to this user is inserted in the database. The next step would be to send an email to this user right after the EmailVerification is created.
But I'm not sure which TypeORM feature to use to call my email service's send function.
I was thinking of two ways to achieve this:
1 - From the transaction, as a complementary step after inserting the user and emailVerification in the database:
await getManager().transaction(async entityManager => {
  await entityManager.save(user);
  await entityManager.save(emailVerification);
  // send the message directly from the transaction right after the
  // user and emailVerification are created
  await emailService.send(message);
});
2 - From an EntitySubscriber, right after the creation of the EmailVerification entity:
@EventSubscriber()
export class EmailVerificationSubscriber implements EntitySubscriberInterface<EmailVerification> {
  @AfterInsert()
  async sendEmail() {
    // get the related user's email
    // ...
    // then send the message
    await emailService.send(message);
  }
}
These two ways seem sufficient to me, but I would like to know if there is some kind of best practice for this use case.
I can provide more information if needed.
It doesn't matter very much which option you choose.
That being said, the email you send is part of a workflow that starts when you insert your emailVerification object. So it makes sense to associate it with that operation.
If you use a similar workflow in future for password recovery, it will become obvious why that makes sense.
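The point about associating the email with the insert workflow can be sketched with plain promises (saveUser, saveVerification, and emailService below are stubs for illustration, not TypeORM APIs): the send step only runs if both inserts succeed, so a failed insert never triggers an email:

```javascript
// stubbed persistence steps, standing in for entityManager.save(...)
const saveUser = async (user) => ({ ...user, id: 1 });
const saveVerification = async (user) => ({ userId: user.id, token: 'abc' });

// stubbed email service; a real one would talk to an SMTP or API provider
const emailService = {
  sent: [],
  async send(message) { this.sent.push(message); },
};

async function registerUser(user) {
  // the two inserts form one workflow; if either throws,
  // control never reaches the send step
  const saved = await saveUser(user);
  const verification = await saveVerification(saved);
  await emailService.send({ to: user.email, token: verification.token });
  return saved;
}
```

With a real transaction you would send after the transaction callback resolves, so a rollback cannot leave a verification email already in someone's inbox.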
I have a model, let's call it Client, and another model called Accounts. They are in different collections; a client can have many different accounts. I reference the accounts within the client doc, as well as referencing back to the client from the account doc.
const Client = new mongoose.Schema({
  accounts: [{
    type: mongoose.ObjectId,
    ref: 'Accounts',
  }],
  other....
})

const Accounts = new mongoose.Schema({
  name: String,
  clientID: mongoose.ObjectId
})
So as you can see, they reference each other. I'm doing this for easy access when populating the accounts and such while requesting client info.
What I'm trying to do: when I create a new Client, I also want to create a new default Account and reference it in the accounts array. I tried using a pre hook when creating my new Client to create a new Account; however, that doesn't update the Client's accounts array with the newly created Account doc's _id. I've tried using this.update():
Client.pre('save', async function(next) {
  if (this.isNew) {
    await Accounts.create({clientID: this._id})
      .then(async doc => {
        console.log(doc) // this logs my account doc just fine, which means it got created
        await this.update({$push: {accounts: doc._id}}) // this doesn't seem to do anything
      })
      .catch(err => next(err))
  }
  next()
})
So the pre hook almost did what I wanted, but I can't figure out a way to update my newly created Client doc with the info from the newly created Account doc. It creates the Client doc, and it creates the Account doc. And the beauty of it is that if there is an error when creating the Account doc, then, since the operation is atomic, the Client doesn't get created. But alas, no updated accounts array...
So instead, I tried putting it into a post hook:
Client.post('save', async function(doc, next) {
  await Accounts.create({clientID: doc._id})
    .then(async acc => {
      await doc.update({$push: {accounts: [acc._id]}})
    })
    .catch(err => next(err))
})
And hey! This works!... kind of. I can create a Client document, which creates an Account document, and then updates the Client to include the Account _id in its accounts array.
BUT!!! The issue with this approach is that the operation no longer seems to be atomic. If I deliberately make the account creation fail (for example by passing it a non-ObjectId argument), then it calls next(err), which in my HTTP request properly returns the error message and even says the operation failed. But in my database the Client document still got created. Unlike the pre hook, which stops the whole operation, the post hook does not 'undo' the creation of the Client.
SUMMARY AND SOLUTIONS
Basically I need a way to update a brand new doc inside its pre('save') hook so it stores any data I changed inside the hook.
Or some way to guarantee the atomicity of the operation if I use a post hook to update the new doc.
Other things I've tried:
I also tried calling save() inside the pre hook after creating the Account doc, but this resulted in a recursive loop that maxed out the doc's memory.
I tried using a pre hook on the Accounts model so it would reference back to the Client model and update it, but this gives me both issues together: it does not update the new Client doc (since it's technically not queryable yet), AND if the account creation fails, the Client still gets created.
Sorry for the long question; I appreciate any feedback or recommendations to fix this issue, or a different approach to achieve my goal. If you made it this far, thanks for reading!
My question was built up of a few questions, but I want to post the solution.
While I still don't know how to guarantee that an error in a post hook makes the whole operation behave atomically, the solution to the main problem was quite simple.
Inside the pre hook, to modify the accounts array I just had to push() onto it. No need for this.set, this.update, or any other actual query; just direct modification of this:
// inside the Client pre('save') hook: create the account doc,
// then push its _id directly onto this.accounts
await Accounts.create({clientID: this._id})
  .then(doc => {
    this.accounts.push(doc._id)
  })
  .catch(err => next(err))
Is there a Stripe API call that we can use to create a customer if they don't exist, and otherwise retrieve the existing customer?
say we do this:
export const createCustomer = function (email: string) {
return stripe.customers.create({email});
};
even if a customer with that email already exists, it will always create a new customer id. Is there a method that will create a customer only if the email does not already exist in Stripe?
I just want to avoid a race condition where more than one stripe.customers.create({email}) call might happen in the same timeframe. For example, we check whether customer.id exists, it does not, and two different server requests each attempt to create a new customer.
Here is the race condition:
const email = 'foo@example.com';

Promise.all([
  stripe.customers.retrieve(email).then(function(user) {
    if (!user) {
      return stripe.customers.create({email});
    }
  }),
  stripe.customers.retrieve(email).then(function(user) {
    if (!user) {
      return stripe.customers.create({email});
    }
  })
]);
Obviously the race condition is more likely to happen across two different processes or two different server requests than within the same server request, but you get the idea.
No, there is no built-in way to do this in Stripe. Stripe does not require that a customer's email address be unique, so you would have to validate it on your side. You can either track your users in your own database and avoid duplicates that way, or you can check with the Stripe API whether customers already exist for the given email:
let email = "test@example.com";
let existingCustomers = await stripe.customers.list({email: email});
if (existingCustomers.data.length) {
  // don't create the customer
} else {
  let customer = await stripe.customers.create({
    email: email
  });
}
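Within a single process, the check-then-create race can be narrowed by caching the in-flight promise per email, so concurrent callers share one lookup/create. Here is a sketch with a stubbed client whose list/create mimic the Stripe SDK's shapes (it is not the real library, and this does not protect across multiple servers):

```javascript
// stub standing in for the Stripe SDK; only the shapes of
// customers.list and customers.create are imitated
const client = {
  customers: {
    _store: [],
    async list({ email }) {
      return { data: this._store.filter(c => c.email === email) };
    },
    async create({ email }) {
      const customer = { id: 'cus_' + (this._store.length + 1), email };
      this._store.push(customer);
      return customer;
    },
  },
};

// one in-flight promise per email: concurrent calls share it,
// so only one create can happen per email in this process
const inFlight = new Map();

function findOrCreateCustomer(email) {
  if (!inFlight.has(email)) {
    inFlight.set(email, (async () => {
      const existing = await client.customers.list({ email });
      if (existing.data.length) return existing.data[0];
      return client.customers.create({ email });
    })());
  }
  return inFlight.get(email);
}
```

Across several servers you still need a unique constraint on email in your own database, as the answer above suggests; this sketch only removes the intra-process race.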
Indeed, it can be solved by validating Stripe's customer-retrieval result against your stored DB, and then calling another API to create the customer afterward.
However, for simplicity's sake, I agree with @user7898461 and would vouch for a retrieveOrCreate customer API :)
As karllekko's comment mentions, idempotency keys won't work here because they only last 24 hours.
email isn't a unique field in Stripe; if you want to enforce this in your application, you'll need to handle it within your application - i.e., you'll need to store email -> Customer ID mappings and do a lookup there to decide whether to create.
Assuming you have a user object in your application, this logic would be better located there anyway; every user would then have exactly one Stripe Customer, so the problem is solved elsewhere.
If your use case is that you don't want to create a customer with the same email twice, you can use Stripe's idempotent request mechanism. I used it to avoid duplicate charges for the same order.
You can use the customer email as an idempotency key. Stripe handles this on their end; two requests with the same idempotency key won't be processed twice.
Also, if you want to restrict it to a timeframe, create the idempotency key from the customer email plus that timeframe. That will work.
The API supports idempotency for safely retrying requests without accidentally performing the same operation twice. For example, if a request to create a charge fails due to a network connection error, you can retry the request with the same idempotency key to guarantee that only a single charge is created.
You can read more about this here. I hope this helps
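The replay behavior described in that quote can be imitated with a small in-memory sketch (illustrative only, not how Stripe implements it): the first result for a key is cached, and any retry with the same key gets the original response back instead of performing the operation again.

```javascript
// minimal in-memory idempotency sketch: results are cached per key
const results = new Map();
let nextId = 0;

function createCharge(params, idempotencyKey) {
  if (results.has(idempotencyKey)) {
    // replay the original response instead of charging again
    return results.get(idempotencyKey);
  }
  const charge = { id: 'ch_' + (++nextId), amount: params.amount };
  results.set(idempotencyKey, charge);
  return charge;
}
```

Retrying with the same key returns the original charge; a different key performs a fresh operation. (A real implementation would also expire keys, which is why Stripe's 24-hour window matters.)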
Dear community (and hopefully vitaly-t),
I am building a website/server with pg-promise.
I use PostgreSQL role/group login for authentication.
I don't know if I am doing things correctly, but I would like each user to use their own Postgres connection to query the database.
So in practice, I create a connection for each user when they connect (if one does not already exist).
To do so, I have created a Pool object with an ugly 'fake promise' and a pgUser object:
var pgPool = function(pg) {
  var _this = this;
  var fakePromise = function(err) {
    var _this = this;
    _this.err = err;
    _this.then = function(cb) { if (!err) { cb(); return _this; } else { return _this; } };
    _this.catch = function(cb) { if (err) { cb(_this.err); } else { return _this; } };
    return _this;
  };
  _this.check = function(user) {
    if (_this[user]) {
      return _this[user].check();
    } else {
      return new fakePromise({error: 'Echec de connection à la base de données'});
    }
  };
  _this.add = function(user, password) {
    var c = {};
    c.host = 'localhost';
    c.port = 5432;
    c.database = 'pfe';
    c.poolSize = 10;
    c.poolIdleTimeout = 30000;
    c.user = user;
    c.password = password;
    if (!_this[user]) {
      _this[user] = new pgUser(c, pg);
      return _this[user].check();
    } else {
      _this[user].config.password = password;
      return _this[user].check();
    }
  };
  return _this;
};
var pgUser = function(c, pg) {
  var _this = this;
  _this.config = c;
  _this.db = new pg(_this.config);
  _this.check = function() {
    return _this.db.connect();
  };
  return _this;
};
And here is how I add a user to the 'pool' during the login POST handler:
pool.add(req.body.user, req.body.password).then(function(obj) {
  obj.done();
  req.session.user = req.body.user;
  req.session.password = req.body.password;
  res.redirect("/");
  return;
}).catch(function(err) {
  options.error = 'incorrect password/login';
  res.render('login', options);
  return;
});
I am sure this could irritate professional developers, and it would be kind of you to explain the best way:
is it a good idea to have one connection to the database per user
(it seems legitimate for good security)?
how can I better use the pg-promise library to avoid this ugly custom 'pool' object?
Sincerely, thank you.
I have contacted the security lead of my project, who does research as an associate professor in security (CITI lab). Here is his comment:
====================
Since it is my fault, I will try to explain ;-). First, to be clear, I
work on the security side (notably access control and RDBMS security)
but am not very familiar with JS or promises.
Our aim is to implement the principle of least privilege with a defense
in depth approach. In this particular case, this means that a query sent
by an unprivileged user should not have admin rights on the database
side. RDBMS such as PostgreSQL provide very powerful, expressive and
well-tested access control mechanisms : RBAC, row-level security,
parametrized views, etc. These controls, indeed, are usually totally
ignored in web applications which use the paradigm "1 application == 1
user", this user has thus admin role. But heavy clients often use
several different users on the database side (either one per final user
or one per specific role) and thus benefit from the access control of
the database.
Access control from the DB is an addition to access control in the web
application. AC in the webapp will be more precise but may suffer from
some bugs; AC in the DB will be a bit more lax but better enforced,
limiting damage in case of an application bug.
So in our case, we want to create a DB user for every application user.
Then, the connection to the database belongs to this specific user and
the database can thus enforce that a simple user cannot execute admin
operations. An intermediate possibility would be to drop some privileges
before executing a query, but our preferred way is to connect to the
database as the currently logged-in user. The login-password is sent by
the user when he authenticates and we just pass it to the DBMS.
Scalability is not (yet) an issue for our application, we can sacrifice
some scalability for this type of security.
Would you have any hints to help us achieve this ?
==================