I need to add a global variable to the Redis cache.
For example:
Consider an application with student, employee, and staff roles.
Every role has a unique object. When a student logs in to the application, we need to get the student's information from Redis, and the same applies when the other roles log in.
If we store all the details at application initialization, we don't need to send a request for the role-related details on every login. If we store them in the session instead, that data has to be loaded at every user's login, and the session ID also varies for every user.
Is it possible?
If yes, how can we store the values at the time of application initialization?
First of all, since Redis is a cache, the objects you store may be evicted over time. When Redis becomes full, it will start clearing objects according to your configured eviction policy.
Caching everything upon initialization is probably not the best course of action. I'd cache each object when it is first requested: if it doesn't exist in Redis, read it from the database and store it for future retrievals. This way, even if your Redis instance evicts that object, your application logic will always find it (from the cache or from the underlying store). That's called the cache-aside pattern.
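Here is a minimal sketch of the cache-aside pattern in C#, assuming StackExchange.Redis and Newtonsoft.Json; the Student type, the key format, and LoadStudentFromDbAsync are illustrative placeholders for your own model and data access:
// Cache-aside sketch: try Redis first, fall back to the database, then populate the cache.
using System;
using System.Threading.Tasks;
using Newtonsoft.Json;
using StackExchange.Redis;

public class Student
{
    public string Id { get; set; }
    public string Name { get; set; }
}

public class StudentCache
{
    private readonly IDatabase _redis;

    public StudentCache(IConnectionMultiplexer connection)
    {
        _redis = connection.GetDatabase();
    }

    public async Task<Student> GetStudentAsync(string studentId)
    {
        string key = "student:" + studentId;

        // 1. Try the cache first.
        RedisValue cached = await _redis.StringGetAsync(key);
        if (cached.HasValue)
            return JsonConvert.DeserializeObject<Student>(cached);

        // 2. Cache miss: load from the database (your own data access code).
        Student student = await LoadStudentFromDbAsync(studentId);

        // 3. Store it for future requests, with an expiry so evicted or stale data ages out.
        await _redis.StringSetAsync(key, JsonConvert.SerializeObject(student), TimeSpan.FromMinutes(30));

        return student;
    }

    private Task<Student> LoadStudentFromDbAsync(string studentId)
    {
        // Placeholder for your actual repository/ORM call.
        throw new NotImplementedException();
    }
}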
Your initialization logic varies depending on which technology / platform you are using.
ASP.NET MVC 5 or lower has the Global.asax file; ASP.NET 5 / MVC 6 has the Startup.cs file.
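For example, with MVC 5 or lower you could establish a single shared Redis connection in Application_Start (with ASP.NET 5 / MVC 6 you would register it as a singleton in Startup.ConfigureServices instead). This is only a sketch assuming StackExchange.Redis; the RedisConnection holder class is an illustrative name:
// Global.asax.cs: create one shared, lazily initialized Redis connection at startup.
using System;
using StackExchange.Redis;

public static class RedisConnection
{
    private static readonly Lazy<ConnectionMultiplexer> Lazy =
        new Lazy<ConnectionMultiplexer>(() => ConnectionMultiplexer.Connect("localhost:6379"));

    public static ConnectionMultiplexer Instance
    {
        get { return Lazy.Value; }
    }
}

public class MvcApplication : System.Web.HttpApplication
{
    protected void Application_Start()
    {
        // ... the usual route/area/bundle registration ...

        // Touch the connection once so it is established during startup;
        // any eager cache warming could also go here.
        var redis = RedisConnection.Instance;
    }
}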
I have just taken the plunge and started to learn the OWIN style of authorizing users in MVC applications. One issue I'm having is where to store objects, since the move away from session objects and towards claims.
Traditionally what I would do is authenticate the user, and then store the User object in the session. This is useful when you are regularly using the data from that object all over the application.
Now that I have moved to OWIN with Identity, I instead store the UserId as a claim. I understand that the use of complex objects is best avoided with claims.
So I find that I'm regularly having to hit the database to read User information based on the UserId.
Here is how I am reading the UserId claim:
List<Claim> claims = HttpContext.Current.GetOwinContext().Authentication.User.Claims.ToList();
var ret = claims.FirstOrDefault(x => x.Type == StaffClaims.OrganisationId);
Is there a way that I can avoid taking this ID and reading the corresponding record from the DB each time? I want to achieve something like having the User object stored in memory somewhere.
Alternatively, does Entity Framework 6 allow caching so that I don't hit the database when repeating the same query (unless I know it has changed and should be re-read)?
First, storing the user object in the session is a hugely bad idea. Don't do that ever.
Second, you don't need to store the user id in a claim; you can get it anytime with User.Identity.GetUserId().
Third, Entity Framework does utilize caching, but not in a way I'd consider something you can rely on. If you want to cache something, do it explicitly with System.Runtime.Caching. You can also use the OutputCache attribute on actions to cache the rendered view, which has the side effect of not requiring database calls to render it again.
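For the explicit option, a minimal sketch with System.Runtime.Caching could look like this; CachedUserStore, GetCachedUser, and LoadUserFromDb are illustrative names for your own code:
// Explicit in-memory caching of the user with System.Runtime.Caching.
using System;
using System.Runtime.Caching;

public class User
{
    public string Id { get; set; }
    // ... your other user fields ...
}

public class CachedUserStore
{
    private static readonly MemoryCache Cache = MemoryCache.Default;

    public User GetCachedUser(string userId)
    {
        string key = "user:" + userId;

        var user = Cache.Get(key) as User;
        if (user != null)
            return user;

        user = LoadUserFromDb(userId);

        // Keep the entry for a short sliding window so stale data eventually ages out.
        Cache.Set(key, user, new CacheItemPolicy { SlidingExpiration = TimeSpan.FromMinutes(10) });

        return user;
    }

    private User LoadUserFromDb(string userId)
    {
        // Your EF query goes here, e.g. context.Users.Find(userId).
        throw new NotImplementedException();
    }
}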
Finally, this is not a big deal in the first place. Just fetch the user when you need it. Before you worry about this one simple query, there are probably 10,000 other areas of your application that could and should be optimized first.
I am currently developing a session store for ArangoDB (connect-arango). It works almost identically to the MongoDB session store (connect-mongo, hence 'connect-arango'), but the problem is that ArangoDB does not have a built-in TTL for its entries.
MongoDB has this, so it's not a problem there, but in ArangoDB I have to handle it in the session store myself.
Would checking for expired sessions every 60 seconds (using setTimeout) be sufficient, or should I use something else, like checking every time the "get" function is called?
I would use an AQL query to clear them, similar to this:
FOR s IN sessions
FILTER s.expires < DATE_NOW()
REMOVE s IN sessions
If the user were to clear his cookies, the session would never be accessed using the "get" function, which means I can't check if it has expired there.
What I can do, however, is run the above query every time the "get" function is called, but I think that would be quite unnecessary and would put more load on the server.
Edit: Just so no one misunderstands, I know how to clear the expired sessions, I just don't know how often to run the clear function (in this case, it's the AQL query above).
If you put a skiplist index on expires, running the above query every 60 seconds should not create any problems. You can also create a periodic job within ArangoDB that runs this query every minute.
Alan Plum has added a session Foxx app to ArangoDB which basically implements all of the above. I'm not sure if he has already released documentation for it. The API documentation is visible at:
localhost:8529/_db/_system/_admin/aardvark/standalone.html#!/sessions
If you have any questions about this Foxx application, please feel free to contact Alan at hackers (at) arangodb.org
As of ArangoDB 2.3 Foxx comes with a built-in sessions app you can use in your Foxx apps. You can re-use the session app even if you don't want to use Foxx.
You can simply mount a copy of the sessions app at a mount point of your choice. This allows you to configure the session TTL as well as other details (e.g. length of the session IDs). The app exposes an HTTP API that lets you create new sessions, update sessions, retrieve existing sessions and delete them. It automagically enforces the TTL (i.e. deletes expired sessions) when you try to retrieve or update a session.
Currently the TTL is only enforced whenever a session is accessed. Depending on your use case this may still clutter up the collection with expired sessions. Currently it's not possible to schedule recurring tasks directly inside ArangoDB; there's a job queue but it is not a good fit for this kind of problem. This will likely be solved in a future version of ArangoDB.
I would recommend monitoring over time the amount of expired sessions that pile up in the collection of your mounted copy of the sessions app. It's probably sufficient to prune the expired sessions once a week (or even less). As the sessions app will automatically delete expired sessions when they are accessed via its API, the only problem are abandoned sessions (e.g. private browsing mode, or one-time users).
Disclosure: I wrote the new sessions/auth apps introduced in ArangoDB 2.3.
I've always wondered whether it's better to check the database for account access permissions on every single request, or to cache them (say, as an ACL) in the session state.
My current case isn't particularly mission-critical, but I feel it would be annoying to have to log out and log back in to refresh cached credentials. I've also considered using a temporary data store with a TTL. Seems like it might be the best of both worlds.
Security-wise, it is better to check the DB every time for permissions. The vulnerability is that if the user's permissions are reduced after the session is created, they could still retain a higher level of access than they should.
There are a few things you can do to stay secure without performing a full query, provided you're early enough in the development cycle. If you have role-based access control (RBAC), you can store a fast lookup table that contains each user's role. If a user's role changes during the session, you mark their entry "dirty" in the lookup table, which forces a query of the DB for the new role. As long as the user's role stays the same, there's no need to query the DB. The lookup table, then, is basically just a flag that the backend can set when a user's role changes. The same technique can be used even with individual access controls, provided the granularity is not too fine; if it is, it starts to become a bloat on your server. We use this technique at work to speed up transactions.
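A rough sketch of that lookup-table idea in C#; RoleCache and LoadRoleFromDb are illustrative names, not part of any framework:
// "Dirty flag" role lookup: fast in-memory reads, DB query only when a role is unknown or marked dirty.
using System;
using System.Collections.Concurrent;

public class RoleCache
{
    private class Entry
    {
        public string Role;
        public bool Dirty;
    }

    private readonly ConcurrentDictionary<string, Entry> _roles =
        new ConcurrentDictionary<string, Entry>();

    public string GetRole(string userId)
    {
        Entry entry;
        if (_roles.TryGetValue(userId, out entry) && !entry.Dirty)
            return entry.Role;  // fast path: no DB query

        // Slow path: role unknown or marked dirty, so re-read it from the DB.
        string role = LoadRoleFromDb(userId);
        _roles[userId] = new Entry { Role = role, Dirty = false };
        return role;
    }

    // Called from the admin/backend code path whenever a user's role changes.
    public void MarkDirty(string userId)
    {
        Entry entry;
        if (_roles.TryGetValue(userId, out entry))
            entry.Dirty = true;
    }

    private string LoadRoleFromDb(string userId)
    {
        // Your actual permissions query goes here.
        throw new NotImplementedException();
    }
}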
If you are late in the development cycle or if you value simplicity more than performance (simple is usually more secure), then I would query the DB every time unless the load gets too heavy for the DB.
Recently I discovered how useful and easy parse.com is.
It really speeds up the development and gives you an off-the-shelf database to store all the data coming from your web/mobile app.
But how secure is it? From what I understand, you have to embed your app private key in the code, thus granting access to the data.
But what if someone is able to recover the key from your app? I tried it myself: it took me 5 minutes to find the private key in a standard APK, and it's also possible to build a web app with the private key hard-coded in your JavaScript source, where pretty much anyone can see it.
The only way to secure the data I've found are ACLs (https://www.parse.com/docs/data), but this still means that anyone may be able to tamper with writable data.
Can anyone enlighten me, please?
As with any backend server, you have to guard against potentially malicious clients.
Parse has several levels of security to help you with that.
The first step is ACLs, as you said. You can also change permissions in the Data Browser to disable unauthorized clients from making new classes or adding rows or columns to existing classes.
If that level of security doesn't satisfy you, you can proxy your data access through Cloud Functions. This is like creating a virtual application server to provide a layer of access control between your clients and your backend data store.
I've taken the following approach in the case where I just needed to expose a small view of the user data to a web app.
a. Create a secondary object which contains a subset of the secure object's fields.
b. Using ACLs, make the secure object only accessible from an appropriate login.
c. Make the secondary object publicly readable.
d. Write a trigger to keep the secondary object synchronised with updates to the primary.
I also use cloud functions most of the time but this technique is useful when you need some flexibility and may be simpler than cloud functions if the secondary object is a view over multiple secure objects.
What I did was the following.
Restrict public read/write for all classes. The only way to access the class data is then through Cloud Code.
Verify that the user is logged in via the request.user parameter, that the user's session is not null, and that the object ID is legitimate.
Once the user is verified, allow the data to be retrieved using the master key.
Just keep tight control over your Global Level Security options (client class creation, etc.), your Class Level Security options (you can, for instance, prevent clients from deleting _Installation entries; it's also common to disable client field creation for all classes), and, most important of all, watch the ACLs.
Usually I use beforeSave triggers to make sure the ACLs are always correct. So, for instance, _User objects are where the recovery email is located. We don't want other users to be able to see each other's recovery emails, so all objects in the _User class must have read and write set to the user only (with public read false and public write false).
This way only the user itself can tamper with their own row. Other users won't even notice this row exists in your database.
One way to limit this further in some situations, is to use cloud functions. Let's say one user can send a message to another user. You may implement this as a new class Message, with the content of the message, and pointers to the user who sent the message and to the user who will receive the message.
Since the user who sent the message must be able to cancel it, and since the user who received the message must be able to receive it, both need to be able to read this row (so the ACL must have read permissions for both of them). However, we don't want either of them to tamper with the contents of the message.
So you have two alternatives: either you create a beforeSave trigger that checks whether the modifications the users are trying to make to the row are valid before committing them, or you set the ACL of the message so that nobody has write permissions and create a cloud function that validates the user and then modifies the message using the master key.
Point is, you have to make these considerations for every part of your application. As far as I know, there's no way around this.
I'm using my own User class as an entity provider for the security system in Symfony 2.0.
I noticed that on each page reload Symfony fetches the user from the database:
SELECT t0.id AS id1, t0.username AS username2, t0.salt AS salt3, t0.password AS password4, t0.email AS email5, t0.is_active AS is_active6, t0.credentials AS credentials7 FROM w9_users t0 WHERE t0.id = ?
Parameters: ['23']
Time: 4.43 ms
Is there any easy way to disable this behaviour? Maybe serialize the user data into session variables or cache it in some way?
You can change this behavior in the refreshUser method of your UserProvider.
You should be careful when doing this with Doctrine: there is an issue on the FOSUserBundle GitHub explaining the pitfalls:
Storing it in the session would lead to several issues, which is why it is not done by default:
if an admin changes the permissions of a user, the changes will take effect only the next time you retrieve the user from the database, so caching the user must be done carefully to avoid security issues
if you simply reuse the user which was serialized in the session, it will no longer be managed by Doctrine. This means that as soon as you want to modify the user or use it in a relation, you will have to merge it back into the UnitOfWork (which will return a different object than the one used by the firewall). Merging triggers a DB query too. And requiring such logic will break some of the built-in controllers, which expect to be able to use the user object for updates.