In my app I have stores and products, and on the store side I have a reference to products. My doubt now is how to set up the route to delete a product from a store.
Should I do something like router.delete('products/:id')? This would delete a product, and since the store only holds a reference I just need to delete that reference. But my store also has an id (actually my store is my user), so I want to be sure there is a user before allowing the delete. I was imagining something like this:
router.delete('stores/:id/products/:id'), but that feels kind of strange. What do you guys think about this?
I believe your concern is that you have two id parameters in the request path. Express supports multiple route parameters and you can name them whatever you like; they are parsed and made available to you later (e.g. req.params.productId).
So you can do
router.delete('/stores/:storeId/products/:productId')
You can find additional information in the Express routing documentation.
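As a rough sketch, the matching handler could look something like this (the Store model and the Mongoose-style $pull update are assumptions based on the store holding a reference to its products):

router.delete('/stores/:storeId/products/:productId', async (req, res) => {
    const { storeId, productId } = req.params;   // both named parameters are available here
    // Remove the product reference from the store's products array (assumed schema).
    await Store.findByIdAndUpdate(storeId, { $pull: { products: productId } });
    res.sendStatus(204);
});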
In the code below I have tried to make the id of the user a global variable after creating and saving the user to MongoDB, so that I can use the id in the posts route, but it does not seem to work. Any help on this?
newuser.save().then(function (user) {
    if (user) {
        id = user._id;                      // attempt to keep the id in a global variable
        req.session.user = user;
        req.flash("success", "post added");
        return res.redirect("/navashanti");
    }
}).catch((err) => {
    if (err) {
        console.log(err);
    }
});

app.get("/posts", function (req, res) {
    user.findById(id).then(function (idresult) {
        if (idresult) {
            console.log(idresult);
            res.render("home", { idresult: idresult });
        }
    });
});
You can define it using GLOBAL or global.
global.id; // Declare the global variable - undefined
global.id = 1; // Initialize the global variable to the value 1.
var userID = global.id; // Use the global variable.
You CANNOT put data for one specific user into a global variable.
Even if you tried, it would be massively subject to timing and concurrency bugs. Your server can serve many users. If you put some data for userA into a global and then userB comes along, how does the server keep the right data with the right user? If you're just using a single global, it doesn't: the data gets mixed up, one user gets the wrong data and another user's data is overwritten. It's a disaster. You cannot code servers this way.
Instead, data on a server has to be stored in a user-specific location so that each user's data is stored separately. Globals are shared by all users so they will not work for this. Some examples of a user-specific storage location are:
In a database record for that specific user
In a server-side session object that is unique to each specific user.
In a cookie which will be presented back to the server on future requests for that specific user.
If you want a specific coding suggestion for your specific issue, you'd have to show us an appropriate part of your code and describe exactly what problem you are trying to solve. The code you have in your question so far is incomplete, improperly formatted (so it's difficult to read) and you do not fully describe what problem you're actually trying to solve.
Adding on to the responses here, it's not a good idea to store the user id in a global variable. I'm not too familiar with the capabilities of MongoDB, but I think you could try using sessions, as that allows you to securely preserve user data for the duration of a user session.
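For example, a minimal sketch with the express-session middleware (the route paths and models come from the question; the secret and the signup route are placeholders):

const session = require("express-session");

app.use(session({
    secret: "replace-with-a-real-secret",   // placeholder value
    resave: false,
    saveUninitialized: false
}));

// Store the id on the session when the user is created (hypothetical signup route,
// with newuser built from req.body as in the question)...
app.post("/signup", function (req, res) {
    newuser.save().then(function (savedUser) {
        req.session.userId = savedUser._id;   // per-user storage instead of a global
        res.redirect("/navashanti");
    }).catch((err) => console.log(err));
});

// ...and read it back in the posts route.
app.get("/posts", function (req, res) {
    user.findById(req.session.userId).then(function (idresult) {
        res.render("home", { idresult: idresult });
    });
});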
I was looking around the internet to find out how I can send a user's status (offline, online, etc.) to friends only, using Socket.IO. Some people suggested using Redis, so I had a look and played around with it. I am also using MongoDB to store friends and users.
This is my setup right now:
// Status list:
// 0 - offline
// 1 - online
// 2 - away
// 3 - busy
// Set the status
redisClient.hmset("online_status:userID", "status", "1");
// Check if someone is online
redisClient.hgetall("online_status:userID", (err, reply) => {
    console.log(reply);
});
Is it fine if I use it like this to get a user's status, or is there a better way to do this?
Another question: is it fine to keep looping hgetall, or is there a better way to get multiple statuses at once?
You are using a hash type to store a single piece of information and hgetall to retrieve it, so I assume you are not that familiar with Redis data types yet. So first let me briefly explain the three data types I'll talk about (you can find all types in the docs: https://redis.io/topics/data-types-intro):
String: Is a simple key/value type, access it with set(key, value) and get(key)
Hash: Is a bunch of key/values stored under one redis key. Useful for storing attributes of an entity, like you could have a "userdata:userID" key and store name, avatar, status... with it. Access it with hset(key, field, value), hget(key, field), hgetall(key)
Set: Is a collection of unique strings, access it with sadd(key, member), sismember(key, member), smembers(key)
If you are only going to save the online status it would be cleaner to use a string type with set, get and del (since usually most users are offline most of the time, delete their keys and save space). For this simple key/value use case Redis is actually not even better than good old Memcached.
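A short sketch of that string approach with the same callback-style client as in the question (the key naming is just an example):

redisClient.set("online_status:" + userID, "1");        // mark online ("2" away, "3" busy)
redisClient.del("online_status:" + userID);             // offline users simply have no key
redisClient.get("online_status:" + userID, (err, status) => {
    console.log(status);   // "1", "2", "3" or null when the user is offline
});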
If you intend to store more user related attributes (mood, motto, avatar...) you should rename it to "userdata:userID" and check with hget("userdata:userID", "status") and use hgetall only to retrieve all attributes.
Another approach could be to store all users in a SET: sadd('users:online', userID) and check with sismember('users:online', userID), or get all online users with smembers('users:online'). Suppose you store all friends in another SET friends:userID, then you could grab all online friends of a user with a single intersect command sinter('friends:userID', 'users:online') - pretty nice and elegant IMHO, but this gets complicated with more different states and doesn't work with redis-cluster.
I would prefer the SET approach. Multiple hgets should also be fine until you encounter issues due to the one guy (there is always one) who has thousands of contacts and refreshes all the time. At that point you could still introduce some friendship limits or caching.
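A sketch of the SET approach with the same client (the key names are examples, not a fixed convention):

redisClient.sadd("users:online", userID);    // user comes online
redisClient.srem("users:online", userID);    // user goes offline
// All online friends of one user in a single command:
redisClient.sinter("friends:" + userID, "users:online", (err, onlineFriends) => {
    if (err) return console.error(err);
    console.log(onlineFriends);   // e.g. ["7", "42"]
});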
I am trying to implement iCloud sync in my Core Data app. I am not that experienced in programming and this is a really advanced topic for me... I found the Core Data sync framework "Ensembles" by Drew McCormack. It seems to make iCloud sync much easier.
I integrated it in my App and syncing does work quite well as long as I add new objects to my Core Data model. But when I delete an object, it creates duplicates. And then duplicates from duplicates. I ended up having the same Entry (object) like 3-4 times...
Why is that? What am I doing wrong? I did some research and my guess is that global identifiers could solve this?
What are global identifiers? My guess is that they help to avoid duplicates!? But how do I set them? I really have no idea; I did a lot of research but couldn't find an answer to that.
Thanks for the help!
Update:
Thanks for the help! I read the readme and the book, but since I am a beginner not everything is clear to me.
I think I understand the use of global identifiers in Ensembles now, but I don't know if I'm doing it correctly.
If I understand it right, I have to assign an identifier to each object. I can do this by storing it in an attribute. This identifier can be anything as long as it is unique and an NSString?
In my app the user can store different things, let's say name, text, title, date and so on. The app is based on the Master-Detail View template in Xcode and uses Core Data. My Core Data model has only a single entity with some attributes, most of them strings plus an NSDate. No relationships or anything. If the user hits "+" a new object is created and I store the things the user enters in the attributes.
What I did to add global identifiers is to add a new attribute that stores it.
So when a new object is created I do
// I found this to use as the identifier !?
NSString *taskUniqueStringKey = newManagedObject.objectID.URIRepresentation.absoluteString;
// ...and store it in the attribute.
[newManagedObject setValue:taskUniqueStringKey forKey:@"coreDataObjectID"];
Then I use this:
- (NSArray *)persistentStoreEnsemble:(CDEPersistentStoreEnsemble *)ensemble globalIdentifiersForManagedObjects:(NSArray *)objects
{
    return [objects valueForKeyPath:@"coreDataObjectID"];
}
This seems to work for me. But am I doing it right? Is this the right place to assign a global identifier? I don't use awakeFromInsert!?
If this is working, I've got the next problem. My app is already live, and older entries that the user saved before the update will be missing the global identifier. What can I do about that? I thought about what I already have that is unique, and the only thing I can think of is an attribute that saves [NSDate date] when the object is created.
I tried to use this but failed because Ensembles will only accept an NSString and not an NSDate!? Can I use this date attribute, is it unique enough to work as a global identifier? And if yes, could you please give me a code example of how to convert the date to a string?
Syncing with Ensembles works quite well. No duplicates anymore; you can switch iCloud off and the entries stay, then switch it on again and it syncs like it should without losing locally stored objects. Ensembles is really cool! I am seeing some minor strange behaviors: sometimes the sync takes long, sometimes it's really quick, and if I edit things within a short time period on two different devices it gets a bit messed up, e.g. an object that I just deleted reappears. But I guess that's normal? If I take some time between using the app on the different devices, everything works fine.
Do I understand it right that there is only this one method to call for a sync:
- (void)syncWithCompletion:(void(^)(void))completion
{
    if (self.ensemble.isMerging) return;
    if (!self.ensemble.isLeeched) {
        [self.ensemble leechPersistentStoreWithCompletion:^(NSError *error) {
            if (error) NSLog(@"Error in leech: %@", error);
            if (completion) completion();
        }];
    }
    else {
        [self.ensemble mergeWithCompletion:^(NSError *error) {
            if (completion) completion();
        }];
    }
}
and you just call it when needed? There is nothing else, like doing a merge without leeching first, or a method like "this is the current state - save it like it is now"?
There are different points in the app where you want to sync. App start and app termination would be good points. In my app there are two points where I should sync, I guess: when adding an object and saving it to Core Data, and when I save changes to an object. I could also provide a button like "sync now". Is this a good approach, and do I always just call
[self syncWithCompletion:NULL];
Another question that came up: can I exclude objects from sync with Ensembles? My app loads tutorial entries as objects once on first app start. I don't want to sync them, if that's possible somehow?
Thanks a lot for your help! If I can help you with anything, like localizing in German or so, let me know! ;)
Yes, this is almost certainly due to not setting up global identifiers for your objects, or at least not doing it properly.
When you leech your ensemble, the local persistent store is imported into the sync data. Without global identifiers, Ensembles will assign random ids to your objects, so it can track them across devices.
Duplicates arise when you leech a second device that has the same data. Ensembles has no way to know that the data represents the same logical objects as on the other device, so it again assigns random ids. Effectively, it treats the objects on each device as being completely independent, so that all end up in your data set after syncing.
The solution is global identifiers. By implementing a CDEPersistentStoreEnsemble delegate method, you can provide Ensembles with global ids, which it can use to identify which objects on different devices belong together.
What should you use for global ids? Often just a UUID, though for singleton-like objects you will just want to pick an id.
You can initialize them in awakeFromInsert. You can store the global ids in attributes on your entities. (Note that if you are migrating an existing app, you will want to check with a fetch if the global ids have been generated BEFORE you try to leech the store for syncing.)
More details are in the README on GitHub and in the book at leanpub.
Update
To answer your update questions:
Yes, an identifier just has to be a string, and immutable. It should not change once assigned.
The NSManagedObjectID is not a very good global identifier, in that it will be different on different devices. We really want something that is global across devices.
If you are starting from scratch, using NSUUID is a good approach. Just create a unique id, and store it in the object.
If you have an existing app, and it has been syncing via another mechanism, you need to come up with a way to provide the same global identifiers on each device. One way to do that is to mash up the object properties in some way. Usually that will give you a pretty-close-to-unique value, and it will be good enough for the transition.
As an example, you do a quick fetch, and discover that your objects don't yet have global ids. You go through the objects, and set the global ids to a string comprised of creationDate + text. (You could even shorten this by taking a hash, but it probably isn't that important.) After this initial 'migration' to global identifiers, you would just use UUIDs for any newly created objects.
Note that you don't have to use awakeFromInsert. That is simply a convenient place to put it. As long as you assign the global identifier before saving the object you should be fine.
The easiest way to get a string from an NSDate is to call the description method, but another way would be to get a double using timeIntervalSince1970, and turning that into a string. (Be careful with dates as unique identifiers on their own: often objects created together will have the same creation date.)
You are correct about how you should do a sync: you can simply call syncWithCompletion:.
To answer the question about excluding objects: You can't exclude individual objects, mainly because it could become tricky when those objects have relationships to synced objects. You can handle these objects in one of two ways:
Put them in a separate persistent store, and add that store to the same persistent store coordinator.
Sync the objects, but give them global ids manually, so that the objects are treated the same on each device. E.g. you could just give them global ids like 'Sample1', 'Sample2', etc.
To integrate Drew's answer, I guess the two steps are the following.
1. Implement the CDEPersistentStoreEnsemble delegate method (see the README)
- (NSArray *)persistentStoreEnsemble:(CDEPersistentStoreEnsemble *)ensemble
       globalIdentifiersForManagedObjects:(NSArray *)objects {
    return [objects valueForKeyPath:@"yourUniqueIdentifier"];
}
2. Generate the unique identifier in an NSManagedObject subclass
- (void)awakeFromInsert {
    [super awakeFromInsert];
    if (!self.yourUniqueIdentifier) {
        self.yourUniqueIdentifier = [[NSUUID UUID] UUIDString];
    }
}
In awakeFromInsert you can initialize special default property values, like for example an identifier.
The check is necessary, for example, when you have parent-child contexts. Otherwise you are overwriting the identifier previously set. See Why is awakeFromInsert called twice?.
I am looking at the documentation for Meteor and it gives a few examples. I'm a bit confused about two things: First, where do you build the db (keeping security in mind)? Do I keep it all in the server/private folder to restrict client-side access? And second, how do I define the structure? For example, the code they show:
Rooms = new Meteor.Collection("rooms");
Messages = new Meteor.Collection("messages");
Parties = new Meteor.Collection("parties");
Rooms.insert({name: "Conference Room A"});
var myRooms = Rooms.find({}).fetch();
Messages.insert({text: "Hello world", room: myRooms[0]._id});
Parties.insert({name: "Super Bowl Party"});
I don't understand how a collection's structure is defined. Are they just able to define a collection and throw arbitrary data into it?
To answer your first question about where to put the new Meteor.Collection statements, they should go in a .js file in a folder accessible by both client and server, such as /collections. (With some exceptions: any collections that are never synced to the client, like server logs, should be defined inside /server somewhere; and any local collections should be defined in client code.)
As for your second question about structure: MongoDB is a document database, which by definition has no structure. Per the docs:
A database holds a set of collections. A collection holds a set of documents. A document is a set of key-value pairs. Documents have dynamic schema. Dynamic schema means that documents in the same collection do not need to have the same set of fields or structure, and common fields in a collection's documents may hold different types of data.
You may also have heard this called NoSQL. Each document (record in SQL parlance) can have different fields. Hence, there's no place where you define initial structure for a collection; each document gets its "structure" defined when it's inserted or updated.
In practice, I like to create a block comment above each new Meteor.Collection statement explaining what I intend the structure to be for most or all documents in that collection, so I have something to refer to later on when I insert or update the collection's documents. But it's up to me in those insert or update functions to follow whatever structure I define for myself.
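For example, a quick sketch of that habit (the /collections path and the field names are just illustrative, nothing enforces them):

// in /collections/messages.js
// Intended structure of Messages documents:
//   text:      String - the message body
//   room:      String - _id of the room this message belongs to
//   createdAt: Date   - when the message was inserted
Messages = new Meteor.Collection("messages");
// Nothing stops a document from having extra or missing fields:
Messages.insert({ text: "Hello world", room: "someRoomId", createdAt: new Date() });
Messages.insert({ text: "No room, no date" });   // also perfectly legal as far as MongoDB is concerned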
A good practice would probably be to define your collection on both client and server with a single bit of JavaScript code. In other words, put the following
MyCollection = new Meteor.Collection("rooms");
// ...
anywhere except the client and server directories. Note that this directive alone does not expose any sensitive data to anybody.
A brand new Meteor project contains the insecure and autopublish packages by default. The former will basically allow any client to alter your database in every possible way, i.e. insert, update and remove documents. The latter will make sure that all database content is published to everyone, no matter how ridiculous that may sound. But fear not! Their only goal is to simplify the development process at the very early stage. You should get rid of these two packages from your project as soon as you start considering security issues of any kind.
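Removing them is done with the meteor CLI:

meteor remove insecure
meteor remove autopublish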
As soon as the insecure package is removed from your project you can control the database privileges by defining MyCollection.allow and MyCollection.deny rules. Please check the documentation for more details. The only thing I would like to mention here is that this code should probably be considered sensitive, so I guess you should put it into your server directory.
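A minimal sketch of such rules in server code (the owner field is an assumption; Rooms is the collection from the question):

Rooms.allow({
    insert: function (userId, doc) {
        // only logged-in users may insert, and only rooms they own
        return !!userId && doc.owner === userId;
    },
    update: function (userId, doc, fields, modifier) {
        return doc.owner === userId;
    },
    remove: function (userId, doc) {
        return doc.owner === userId;
    }
});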
Removing the autopublish package affects the set of data that will be sent to your clients. Again you can control it and define the privileges of your choice by implementing a custom Meteor.publish routine. This is all documented here. Here you have no option: the code can only run in the server environment, so the best choice is to put it in the server directory.
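A minimal sketch of such a publish function (the publication name and the owner field are assumptions):

// server
Meteor.publish("myRooms", function () {
    // only this user's rooms, and only the fields the client actually needs
    return Rooms.find({ owner: this.userId }, { fields: { name: 1, owner: 1 } });
});
// client
Meteor.subscribe("myRooms");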
About your second question: the whole buzz about NoSQL databases (like MongoDB) is to put as few restrictions on the structure of your database as possible. In other words, how the collections are structured is entirely up to you. You don't have to define any models, and you can change the structure of your documents (and/or remove fields) any time you want. Doesn't it sound great? :)
I have noticed in Geddy that when I create a model and a subsequent record for that model, I get a very ugly model ID associated with the record. Something like:
http://localhost:4000/posts/3FEEDE8D-2669-445B-AEA1-A31092A7FEDA
Is there a way to change this?
Ideally, I would always want this to be some sort of string, whether it be for a post or a user:
http://localhost:4000/posts/this-is-a-post-title
http://localhost:4000/profile/meebix
If this is possible, how should I:
Configure routes
Change primary key for model
Other implementation steps I may need
Thanks!
Yes, you can change the id if you really want to, but you'll be going off the beaten path there, so it's quite a bad idea. Let Geddy handle IDs for you.
The way I would do this (and certainly how many others have too) is to have a "slugging" function create a slug from the post title, and save that in your database. Then, query on that instead in your show action. You won't have to change your routes.
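Geddy doesn't ship a slugging helper, so here is a rough sketch of what one could look like (the slug attribute name and the slugify helper are assumptions):

// Hypothetical helper - collapse the title into a URL-friendly slug
function slugify(title) {
    return title
        .toLowerCase()
        .trim()
        .replace(/[^a-z0-9]+/g, '-')   // runs of non-alphanumerics become single dashes
        .replace(/^-+|-+$/g, '');      // trim leading/trailing dashes
}
// e.g. when creating the post:
post.slug = slugify(post.title);   // "This is a Post Title" -> "this-is-a-post-title"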
This is what your query will look like in the show action:
Post.first({slug: params.id}, function (err, post) {
    // handle err, then respond/render with `post` just as you would for an id lookup
});
params.id is whatever string you use in the route /posts/<this string>
So once you change your show links to use the slug instead of the ID you will be all set!