As above really, if I store something (e.g. website session data) in memcached, is it possible to remove the data securely so that it would not be evident in a later memory dump?
I assume delete just unassigns the memory rather than wiping it? Could I manually junk the allocated memory by updating the key with random data before deleting it?
Obviously encrypting the data before storing it would be a solution but this also adds a performance overhead.
You can't... Replacing a value is another allocation and will not overwrite the old value in memory.
Check the FAQ. If you want to secure your data because you are in a hostile environment, use SASL authentication (see the SASL documentation).
And make sure no one has access to memcached from the outside! Bind it to localhost.
Excerpt from the manual:
When do expired cached items get deleted from the cache?
memcached uses lazy expiration, which means it uses no extra CPU expiring items. When an item is requested (a get request) it checks the expiration time to see if the item is still valid before returning it to the client.
Similarly, when adding a new item to the cache, if the cache is full, it will look for expired items to replace before replacing the least used items in the cache.
I am building an API which will send data to another API once it has collected 10 hashes. The client sends 1 hash per hour.
For example:
The client POSTs a hash to the API
The API needs to store it somewhere until the number of hashes reaches 10
When the number of hashes reaches 10, the API needs to send the data to the other API and start from 0 again
My question relates to the 2nd point. I could store the hashes in an array, but the problem is that the data will be lost if the server shuts down suddenly.
This is the only data I need to store in the API, so I don't want to use a database.
By the way, this is my first time developing an API, so I would be glad of your help.
Thanks in advance.
Sorry, but your only options for storing data are memory or disk.
If you store data in variables, you're using memory. It is fast and instant, but it's not durable, as you already said.
If you store data in a database, you're using disk storage. It is slower, but it is durable.
If you need durability, then a database is your only option. Or, if you don't want to store the data on your own machine, you could use a cloud database such as Firebase.
Maybe your problem will be solved with Redis.
I had a feature where I needed to use some pieces of user information on the server side at runtime, and they could not be persisted in the database.
So I used Redis for that.
In simple words, Redis keeps the information in an in-memory cache and you can retrieve it whenever you need it.
There's no disk use, and it is more stable than hand-rolled memory management.
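If it helps, here is a minimal sketch of that approach in Node.js, assuming the node-redis (v4) client; sendToOtherApi() is a hypothetical helper that forwards the batch, and the key name is just an example:

    // Sketch only: buffer incoming hashes in a Redis list so they survive a
    // restart, and flush them downstream once 10 have been collected.
    const { createClient } = require('redis');

    const client = createClient();               // localhost:6379 by default

    async function handleIncomingHash(hash, sendToOtherApi) {
      const count = await client.rPush('pending_hashes', hash);  // returns new list length
      if (count >= 10) {
        const batch = await client.lRange('pending_hashes', 0, -1);
        await sendToOtherApi(batch);              // forward the 10 hashes
        await client.del('pending_hashes');       // start from 0 again
      }
    }

    // Call client.connect() once at startup, then wire handleIncomingHash
    // into the POST route that receives each hash.

Because the list lives in Redis rather than in a process variable, a sudden restart of the API does not lose the partially collected batch.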
I hope this helps you.
Our Google Cloud Function has access to an encrypted API key which it can decrypt by using an external service. Once the API key is decrypted, is it then safe to cache it as a global variable, so that in cases where a Google Cloud Function is reused, the decrypted variable can be used instead of contacting the decryption service again?
EDIT:
Our thinking is that the function will use a decrypted version of the API key while running (i.e. store it in its memory for use) anyway, and that its cache, I believe, is in memory and per function. To the best of my knowledge, that would make caching the decrypted API key per function no less safe than fetching and decrypting it on every function invocation.
'Safe' was a bad word choice - there is no such thing as safe, everything is, to an extent, a balancing act.
Statistically speaking, the longer you hold sensitive information in memory, the easier it is for a bad actor to get hold of it. But you can never really eliminate the chance of this happening. The issue is really how a bad actor gets into Cloud Functions in the first place. The moment that becomes a possibility, you've got a problem. This can happen by trusting third-party code that you pull into your deployment, by someone getting hold of your project's admin credentials, or by a lapse in security at Google.
But if you assume that there is no possibility of a bad actor entering the system, it doesn't really matter how long you hold on to something in memory, since you trust every bit of code that could access it (and of course Google for providing that memory).
The memory isn't held strictly "per function". It would be held "per function per instance". Depending on the load, you could have many server instances all decrypting and holding sensitive information. But the code running on the instance would only be triggered from the one function and never others.
Caching API keys in memory in this manner does make changing them a bit more complex if you ever have to rotate them quickly, e.g. due to a leak. One way around this is to also store a timestamp in a global variable and invalidate the key after a certain amount of time has passed, or to restart all of the function instances so the memory cache is cleared and fresh versions of the API keys are fetched; the latter happens whenever you push a new version of the function to Google Cloud Functions.
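As a rough illustration, a per-instance cache with time-based invalidation could look like this in Node.js; decryptApiKey() is a hypothetical stand-in for the external decryption call and the 1-hour TTL is an arbitrary example:

    // Global variables survive across invocations on a warm Cloud Functions instance.
    const CACHE_TTL_MS = 60 * 60 * 1000;   // example: invalidate after 1 hour

    let cachedKey = null;
    let cachedAt = 0;

    // Hypothetical stand-in for the external decryption service.
    async function decryptApiKey() {
      // Replace with the real call (e.g. a KMS decrypt request).
      return 'decrypted-api-key';
    }

    async function getApiKey() {
      const now = Date.now();
      if (!cachedKey || now - cachedAt > CACHE_TTL_MS) {
        cachedKey = await decryptApiKey();  // only contact the service when stale
        cachedAt = now;
      }
      return cachedKey;
    }

A cold start (or a new deployment) resets the globals, so the first invocation on each instance still pays the decryption cost.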
I am currently running into an issue with concurrent requests in NodeJS that all access a cookie holding information I obtain from a server. The requests being made are asynchronous, and need to remain that way, but I am in charge of asking for the new data when the cookie is about to become stale. How do I keep updating the cookie without bogging the server down with requests for a new cookie, if multiple concurrent requests all assume that they are the ones that should be in charge of refreshing the cookie's value?
I.e. Req1->Req30 are fired off. In the process of handling Req17 the cookie's time-to-live is caught, so it sends out the refresh command. The thing is, Req18->Req30 all assume that they should be the ones to refresh the cookie's value, because they also do the staleness check and fail it.
I have limited ability to change the server-side code, and due to the sensitive nature of the data I cannot readily decide to place it in a DB, because at that point I become charged with ensuring that the data is again secured.
Should I just store multiple key/value pairs in the cookie and iterate through them? That could become an expensive operation. It could also overwrite the cookie with invalid data on some request, since updating the cookie and appending the new key/value pairs requires creating a new one, due to the immutability of the cookies themselves.
To handle concurrent access to the cookie:
Use a timestamp; only perform the change if the data is more recent.
To handle cookie data renewal:
Instead of having every worker check for new data concurrently, ask one specific worker to handle the data update, while the other workers use the data in read-only mode.
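One way to realize that "single refresher, everyone else reads" idea inside a Node process is to share a single in-flight refresh promise. A rough sketch, where fetchFreshCookie() is a hypothetical call that returns the new cookie value and its expiry:

    let cookie = { value: null, expiresAt: 0 };
    let refreshInFlight = null;

    async function getCookie(fetchFreshCookie) {
      if (Date.now() < cookie.expiresAt) {
        return cookie.value;                       // fresh: read-only path
      }
      if (!refreshInFlight) {
        // Only the first request that notices staleness triggers the refresh.
        refreshInFlight = fetchFreshCookie()
          .then((fresh) => { cookie = fresh; return fresh.value; })
          .finally(() => { refreshInFlight = null; });
      }
      // Every other concurrent request awaits the same refresh instead of firing its own.
      return refreshInFlight;
    }
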
I'm building an application with ExpressJS and MongoDB (Mongoose). The application contains routes where the user has to be authenticated before accessing them.
Currently I have written an Express middleware to do this. With the help of the JWT token I'm making a MongoDB query to check whether the user is authenticated or not, but I feel this might put unnecessary request load on my database.
Should I integrate Redis for this specific task?
Will it improve API performance, or should I go ahead with the existing MongoDB approach?
It would be helpful to get more insight on this.
TLDR: If you want the capability to revoke a JWT at some point, you'll need to look it up. So yes, something fast like Redis can be useful for that.
One of the well documented drawbacks of using JWTs is that there's no simple way to revoke a token if for example a user needs to be logged out or the token has been compromised. Revoking a token would mean to look it up in some storage and then deciding what to do next. Since one of the points of JWTs is to avoid round trips to the db, a good compromise would be to store them in something less taxing than an rdbms. That's a perfect job for Redis.
Note however that having to look up tokens in storage for validity still reintroduces statefulness and negates some of the main benefits of JWTs. To mitigate this drawback, make the list a blacklist (or blocklist, i.e. a list of invalid tokens). To validate a token, you look it up on the list and verify that it is not present.
You can further improve on space and performance by staggering the lookup steps. For instance, you could have a tiny in-app storage that only tracks the first 2 or 3 bytes of your blacklisted tokens. Then the Redis cache would track a slightly larger version of the same tokens (e.g. the first 4 or 5 bytes). You can then store a full version of the blacklisted tokens using a more persistent solution (filesystem, rdbms, etc.). This is an optimistic lookup strategy that will quickly confirm that a token is valid (which would be the more common case). If a token happens to match an item in the in-app blacklist (because its first few bytes match), then move on to do an extra lookup in the Redis store, then the persistent store if need be.
Some (or all) of the stores may be implemented as tries or hash tables. Another efficient and relatively simple to implement data structure to consider is something called a Bloom filter.
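A compressed sketch of that staggered lookup, assuming a small in-process Set of short prefixes, a Redis set of longer prefixes, and some persistent store exposing a has() lookup (all names here are illustrative):

    const SHORT = 3;   // example: bytes tracked in the in-app set
    const LONG = 5;    // example: bytes tracked in Redis

    async function isRevoked(token, prefixSet, redis, persistentStore) {
      // Optimistic fast path: most tokens won't match any blacklisted prefix.
      if (!prefixSet.has(token.slice(0, SHORT))) return false;
      // Possible match: check the slightly longer prefixes kept in Redis.
      const maybe = await redis.sIsMember('revoked-prefixes', token.slice(0, LONG));
      if (!maybe) return false;
      // Still a possible match: the persistent store of full tokens decides.
      return persistentStore.has(token);
    }
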
As your revoked tokens expire (of old age), a periodic routine can remove them from the stores. Keep your blacklist short and manageable by also shortening the lifespan of your tokens.
Remember that JWTs shine in scenarios where revoking them is the exception. If you routinely blacklist millions of long-lasting tokens, it may indicate that you have a different problem.
You can use Redis for storing the JWT identifier. Redis is much faster and more convenient for storing such data, and the request to Redis should not greatly affect performance. You can try the library jwt-redis.
A JWT contains claims. You can store a claim such as
session : guid
and maintain a set in Redis of all blacklisted keys. The key should stay in the set for as long as the JWT is valid.
When your API is hit:
verify the JWT signature; if it has been tampered with, stop
extract the claims into a list of key/value pairs
get the session key and check it against the blacklisted set in Redis
if found, stop; else continue
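For illustration, those steps could look roughly like the following Express middleware, assuming the jsonwebtoken and node-redis (v4) packages; the secret handling and the 'jwt:blacklist' set name are just examples:

    const jwt = require('jsonwebtoken');
    const { createClient } = require('redis');

    const redis = createClient();           // remember to call redis.connect() at startup
    const JWT_SECRET = process.env.JWT_SECRET;

    async function requireAuth(req, res, next) {
      try {
        const token = (req.headers.authorization || '').replace('Bearer ', '');
        // 1. Verify the signature; this throws if the token has been tampered with.
        const claims = jwt.verify(token, JWT_SECRET);
        // 2./3. Check the session claim against the blacklisted set in Redis.
        const revoked = await redis.sIsMember('jwt:blacklist', claims.session);
        if (revoked) {
          return res.status(401).send('Token revoked');
        }
        // 4. Not blacklisted: continue.
        req.user = claims;
        next();
      } catch (err) {
        res.status(401).send('Invalid token');
      }
    }

Revoking a session then amounts to adding its GUID to that set and removing it once the token would have expired anyway.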
I have the following scenario:
A user logs in, and a session entry is created via connect-redis which is valid for 2 weeks. The user can now access certain parts of the app using that session ID.
Now, if 1. the user deletes that cookie in the browser (together with the session ID) and 2. logs in again, there are now 2 session entries in Redis associated with the same user, with the older one being obsolete.
What is the best way to deal with such old/obsolete sessions? Should I use a client library for Redis, search through all sessions to find the ones that match the currently logging-in user (after she potentially removed the cookie manually), and purge those obsolete sessions, or is there a better way?
Thanks,
nik
That depends on whether this (the user deleting the cookie) is a common scenario and, if it is, whether the obsolete sessions cause a problem on the server.
Two potential "problems" that I can think of are:
Security - could the stale session be exploited for malicious intent? I do not see how that's possible, but I may be wrong(tm).
Storage - are the stale sessions taking up too many (RAM) resources? If there are a lot of stale sessions and each one is large enough, this could become a problem.
Unless 1 or 2 applies to your use case, I don't see why you'd want to go through the trouble of "manually" cleansing old sessions. Assuming that you're giving a TTL value to each session (2 weeks?), outdated entries will be purged automatically after that period, so no extra action is needed to handle them.
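In other words, you can let the store's TTL do the cleanup. A sketch of that setup, assuming express-session with connect-redis (v7-style require; other versions construct the store slightly differently) and node-redis:

    const express = require('express');
    const session = require('express-session');
    const RedisStore = require('connect-redis').default;   // import form varies by version
    const { createClient } = require('redis');

    const redisClient = createClient();
    redisClient.connect().catch(console.error);

    const app = express();
    app.use(session({
      store: new RedisStore({
        client: redisClient,
        ttl: 14 * 24 * 60 * 60,                     // 2 weeks, in seconds
      }),
      secret: process.env.SESSION_SECRET || 'dev',  // example secret handling only
      resave: false,
      saveUninitialized: false,
      cookie: { maxAge: 14 * 24 * 60 * 60 * 1000 }, // keep the cookie lifetime in sync
    }));
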