I have entities Account and CreditCard in Core Data. An account can have multiple creditCards. Each creditCard has a number. How do I encrypt the number?
I know I could use Keychain Services without Core Data, but could I use them together? The reason I want to use Core Data instead of something like NSUserDefaults is because I want to handle multiple accounts. I haven't used Keychain Services, so I'm not sure if it'd be good for multiple accounts.
You can store your keychain object in Core Data by transforming it into an NSData object. This is not entirely trivial, as you need to transform it back and forth correctly. Check out Apple's documentation on Non-Standard Persistent Attributes for help with this.
You can change the attributes that you want to encrypt to type Transformable, and create your own NSValueTransformer that encrypts when transformedValue is called and decrypts when reverseTransformedValue is called.
Transformable attributes:
https://developer.apple.com/library/prerelease/ios/samplecode/PhotoLocations/Introduction/Intro.html
Example of decrypt/encrypt AES256:
https://gist.github.com/m1entus/f70d4d1465b90d9ee024
As part of storing data, does Hazelcast store metadata information the way Riak does? If so, can we store custom metadata information?
Thanks in advance,
Dinesh
What kind of metadata does Riak store, and what kind of data do you want to store?
We store some internal metadata such as the last access timestamp or hit counts, but that is probably not what you're asking for. We also do not support user-defined custom metadata, depending on what you expect to store. Custom metadata can certainly be stored in another map using the same key.
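For example, a minimal sketch with the Hazelcast Node.js client (hazelcast-client); the map names and metadata fields are made up for illustration:

```ts
import { Client } from 'hazelcast-client';

async function main() {
  // Connect with default settings (a local cluster is assumed).
  const client = await Client.newHazelcastClient();

  // One map for the data itself, one for your own metadata, sharing the same key.
  const users = await client.getMap<string, string>('users');
  const usersMeta = await client.getMap<string, string>('users-meta');

  await users.put('user-42', JSON.stringify({ name: 'Dinesh' }));
  await usersMeta.put('user-42', JSON.stringify({ source: 'crm-import', tag: 'vip' }));

  console.log(await usersMeta.get('user-42'));
  await client.shutdown();
}

main().catch(console.error);
```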
I'm wondering whether a mechanism exists that allows client-to-client encryption. For example, when enabled, any information entered on one client can only be decrypted using a specific key.
Similar to how regular public-key transactions work, but server-agnostic.
A use case:
Everything on my Facebook profile is encrypted, and nobody would be able to view that information (not even Facebook). Only the users I give the key to would be able to decrypt that information.
This would allow complete control of data stored online.
The same idea can be applied for pictures uploaded to the internet.
One issue I see is having a practical mechanism to manage keys and a secure way to distribute keys to other users.
Has anyone done something like this before?
In the case of Facebook, I can imagine encrypting the data with OpenPGP keys into armored (text) format. Then you can post the encrypted block to Facebook or anywhere else. Other users would take the block, decrypt it on the client side, and read it.
The same applies to other social networks and places where you can store a block of text.
You can easily do the encryption in a client application, and even in JavaScript (if you manage to have JavaScript load the local user's keys somehow).
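As a rough sketch of that flow with the openpgp.js library (v5-style API; key generation, storage, and distribution are left out, and the function names are mine):

```ts
import * as openpgp from 'openpgp';

// Sender side: encrypt a piece of profile text for one recipient's public key.
// The result is an ASCII-armored block you can paste into any text field.
async function encryptForFriend(text: string, friendArmoredPublicKey: string) {
  const publicKey = await openpgp.readKey({ armoredKey: friendArmoredPublicKey });
  return openpgp.encrypt({
    message: await openpgp.createMessage({ text }),
    encryptionKeys: publicKey,
  });
}

// Recipient side: decrypt the armored block locally with the private key.
async function decryptBlock(armoredMessage: string, armoredPrivateKey: string, passphrase: string) {
  const privateKey = await openpgp.decryptKey({
    privateKey: await openpgp.readPrivateKey({ armoredKey: armoredPrivateKey }),
    passphrase,
  });
  const { data } = await openpgp.decrypt({
    message: await openpgp.readMessage({ armoredMessage }),
    decryptionKeys: privateKey,
  });
  return data as string;
}
```

The hard part, as the question notes, remains getting the right public keys to the right people.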
I'm investigating ServiceStack's Authorization feature and want to use Couchbase as my data store. I understand there isn't an IUserAuthRepository implementation for Couchbase so I'd have to develop my own, which isn't a problem.
The issue I am having is that if I store the built-in UserAuth object as-is, Couchbase uses the Id field as the document identifier. This is a problem because I believe the identifier should be object-type specific, otherwise a separate 'bucket' would be required to prevent conflicting ids across different objects. I don't really want to have lots of buckets unless I have to.
My preference would be to have the document id set to the type of the object plus the object specific identifier.
e.g. using Id: "UserAuth_1234", or using UserName: "UserAuth_MikeGoldsmith"
Is my assumption of trying to re-use a bucket for different application objects valid or should I be thinking about a bucket per object-type / namespace?
Any direction would be welcome, both from Couchbase and ServiceStack enthusiasts.
Thanks
Additional Info
Ok, so from John's answer I will assume my additional property for the object type is valid.
I found this post where Mythz suggests the BootStrapApi example extends the AuthUser with custom properties. However, to me it looks like the AuthUser is persisted twice, first as the AuthUser and again as the User object (both times using the OrmLiteAuthRepository). Am I right?
Essentially, I want to utilise the SS auth feature, but control the POCO object that will be saved into Couchbase. Can someone give me some direction on whether this is possible and, if so, what I need to implement / hook into?
I tried implementing a Couchbase version of IUserAuthRepository; however, it uses the UserAuth concrete type, so I can't use my own object.
I also tried hooking into the OnAuthenticated method of AuthUserSession, but at that point the UserAuth POCO will already have been persisted using the registered IUserAuthRepository.
I'm happy to use the CredentialsAuthProvider as I just want username/password authentication. More could be added later.
Thanks again!
Buckets are loosely analogous to databases in the relational world, so generally they shouldn't be mapped to application objects. I'm not familiar with ServiceStack's auth feature, but your suggestion to use meaningful, prefixed keys seems reasonable and is a common approach for providing document taxonomy.
Keep in mind that in Couchbase, there's no field in the document that's considered an "id" or "key" field. The key used to store the document is available in metadata, but is not part of the JSON document itself. So if you're able to take advantage of views, you could also store a document with a type attribute and then query by some non-id property. In other words, the key in the key/value pair doesn't have to be the way you retrieve the user auth document.
Also, there are developers who use key prefixing as a way to provide document taxonomy for views, so your key pattern above would work for that too. My preference is a type property, but that's no more valid than your suggestion.
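To make the two options concrete, here is a small TypeScript sketch; the UserAuthDoc shape and the helper are hypothetical and not part of ServiceStack or the Couchbase SDK:

```ts
// Option 1: a "type" property inside the JSON document, usable in view map functions.
interface UserAuthDoc {
  type: 'UserAuth';
  userName: string;
  passwordHash: string;
}

// Option 2: a prefixed key, "<Type>_<object-specific id>".
function userAuthKey(userName: string): string {
  return `UserAuth_${userName}`;
}

const doc: UserAuthDoc = {
  type: 'UserAuth',
  userName: 'MikeGoldsmith',
  passwordHash: '<hash>',
};

// With the Couchbase SDK you would then upsert `doc` under userAuthKey(doc.userName)
// into a single shared bucket, alongside documents of other types.
```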
I've come across the ServiceStack UseCase examples, one of which addresses my Custom Authentication issue directly.
I was able to override the TryAuthenticate method and use my own UserRepository that backs onto Couchbase.
Recently I discovered how useful and easy parse.com is.
It really speeds up the development and gives you an off-the-shelf database to store all the data coming from your web/mobile app.
But how secure is it? From what I understand, you have to embed your app private key in the code, thus granting access to the data.
But what if someone is able to recover the key from your app? I tried it myself: it took me 5 minutes to find the private key in a standard APK, and there is also the possibility of building a web app with the private key hard-coded in your JavaScript source, where pretty much anyone can see it.
The only way to secure the data I've found is ACLs (https://www.parse.com/docs/data), but this still means that anyone may be able to tamper with writable data.
Can anyone enlighten me, please?
As with any backend server, you have to guard against potentially malicious clients.
Parse has several levels of security to help you with that.
The first step is ACLs, as you said. You can also change permissions in the Data Browser to disable unauthorized clients from making new classes or adding rows or columns to existing classes.
If that level of security doesn't satisfy you, you can proxy your data access through Cloud Functions. This is like creating a virtual application server to provide a layer of access control between your clients and your backend data store.
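As a hedged sketch of what such a proxy can look like (parse-server-style Cloud Code; the Order class and owner field are invented for illustration):

```ts
// Cloud Code: the client never queries Order directly; it calls this function instead.
Parse.Cloud.define('getMyOrders', async (request) => {
  if (!request.user) {
    throw new Parse.Error(Parse.Error.INVALID_SESSION_TOKEN, 'Login required');
  }
  const query = new Parse.Query('Order');
  query.equalTo('owner', request.user);
  // The master key bypasses ACLs, so this function decides what the client may see.
  return query.find({ useMasterKey: true });
});
```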
I've taken the following approach in the case where I just needed to expose a small view of the user data to a web app.
a. Create a secondary object which contains a subset of the secure object's fields.
b. Using ACLs, make the secure object only accessible from an appropriate login.
c. Make the secondary object public-read.
d. Write a trigger to keep the secondary object synchronised with updates to the primary (see the sketch below).
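A rough sketch of step (d) as parse-server-style Cloud Code; the SecureProfile/PublicProfile class names and the displayName field are made up:

```ts
// Keep a public-read "view" object in sync whenever the secure object is saved.
Parse.Cloud.afterSave('SecureProfile', async (request) => {
  const secure = request.object;

  const query = new Parse.Query('PublicProfile');
  query.equalTo('owner', secure.get('owner'));
  const view = (await query.first({ useMasterKey: true })) ?? new Parse.Object('PublicProfile');

  view.set('owner', secure.get('owner'));
  view.set('displayName', secure.get('displayName')); // only the exposed subset of fields

  const acl = new Parse.ACL();
  acl.setPublicReadAccess(true); // step (c): public read, nobody writes from the client
  view.setACL(acl);

  await view.save(null, { useMasterKey: true });
});
```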
I also use cloud functions most of the time but this technique is useful when you need some flexibility and may be simpler than cloud functions if the secondary object is a view over multiple secure objects.
What I did was the following.
Restrict public read/write for all classes. The only way to access the class data is then through Cloud Code.
Verify that the user is logged in via the request.user parameter, check that the user session is not null, and check that the object id is legitimate.
Once the user is verified, I allow the data to be retrieved using the master key.
Just keep tight control of your Global Level Security options (client class creation, etc.), your Class Level Security options (you can, for instance, stop clients from deleting _Installation entries; it's also common to disable field creation by clients for all classes), and, most important of all, watch out for the ACLs.
Usually I use beforeSave triggers to make sure the ACLs are always correct. So, for instance, _User objects are where the recovery email is located. We don't want other users to be able to see each other's recovery emails, so all objects in the _User class must have read and write set to the user only (with public read false and public write false).
This way only the user itself can tamper with their own row. Other users won't even notice this row exists in your database.
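One way to enforce that automatically is a beforeSave trigger on _User; a minimal sketch in parse-server-style Cloud Code:

```ts
// Every _User row gets an ACL that only the user themselves can read or write.
Parse.Cloud.beforeSave(Parse.User, (request) => {
  const user = request.object;
  const acl = new Parse.ACL(user); // read/write for this user only
  acl.setPublicReadAccess(false);
  acl.setPublicWriteAccess(false);
  user.setACL(acl);
});
```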
One way to limit this further in some situations is to use Cloud Functions. Let's say one user can send a message to another user. You may implement this as a new class Message, with the content of the message and pointers to the user who sent the message and to the user who will receive it.
Since the user who sent the message must be able to cancel it, and the user who received it must be able to read it, both need read access to this row (so the ACL must have read permissions for both of them). However, we don't want either of them to tamper with the contents of the message.
So you have two alternatives: either you create a beforeSave trigger that checks whether the modifications the users are trying to make to this row are valid before committing them, or you set the ACL of the message so that nobody has write permissions and create a cloud function that validates the user and then modifies the message using the master key.
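A sketch of the first alternative, assuming parse-server-style Cloud Code and a content field on Message:

```ts
// Reject any edit to the message content after it has been created,
// unless the change is made by trusted code using the master key.
Parse.Cloud.beforeSave('Message', (request) => {
  if (request.master) return;    // cloud code with the master key may do anything
  if (!request.original) return; // brand-new message: creation is allowed
  if (request.object.dirty('content')) {
    throw new Parse.Error(
      Parse.Error.OPERATION_FORBIDDEN,
      'The content of a message cannot be changed after it is sent.'
    );
  }
});
```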
Point is, you have to make these considerations for every part of your application. As far as I know, there's no way around this.
Assume I have an ASP.NET MVC 3 application that runs in a web farm where each web server belongs to a workgroup (as opposed to a domain with shared accounts). The web farm is also auto-scalable, meaning that the number of instances depends on the load. Sensitive data is encrypted and decrypted when stored in / retrieved from the database. The symmetric and asymmetric keys are stored on each machine, protected with ACLs, and encrypted using DPAPI (with the machine key).
For compliance and security reasons it is required that keys be rotated on a regular interval. How would you design/modify the system to automatically rotate keys at a regular interval without bringing the system offline? Assume that there are an arbitrary number of tables each with an arbitrary number of columns that are encrypted using the keys.
Many Q&As are related to which algorithms to use and how to secure the keys; however, few actually address how to design and implement an application that allows those keys to be rotated, especially in a dynamic (autoscaling) environment sharing a database.
Having multiple keys in your system
When you have multiple encodings (or encryption schemes, or keys), what you usually want to do first is introduce some kind of versioning scheme, since you need to know which key was used for a particular piece of data. You have several choices for this:
Timestamps: Save the timestamp at which the data was encrypted alongside the data. Then divide time into intervals of some length during which the same key is used.
Version numbers: You can also simply assign increasing version numbers.
Key fingerprint: Store the key's fingerprint with the data.
In every case, you need to store all keys that are currently in use to be able to decrypt data. When reading data, just look up the key matching your version identifier and decrypt. When writing, use the currently active key and store the encrypted data + your version identifier. You can retire (aka delete) a key when you are sure there is no data encrypted with this key in your database.
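Language aside (the question is about .NET), the shape of the version-number variant looks something like this; a minimal TypeScript sketch using Node's crypto module with AES-256-GCM, where the keyring layout and the EncryptedField record are assumptions rather than a prescribed format:

```ts
import { createCipheriv, createDecipheriv, randomBytes } from 'crypto';

// Keyring: every key still in use, indexed by version. Keys are 32-byte values
// loaded from wherever you keep them (here: base64 environment variables).
const keyring = new Map<number, Buffer>([
  [1, Buffer.from(process.env.KEY_V1!, 'base64')],
  [2, Buffer.from(process.env.KEY_V2!, 'base64')],
]);
const currentVersion = 2; // the key used for all new writes

interface EncryptedField {
  v: number;   // version identifier stored alongside the ciphertext
  iv: string;
  tag: string;
  data: string;
}

function encrypt(plaintext: string): EncryptedField {
  const iv = randomBytes(12);
  const cipher = createCipheriv('aes-256-gcm', keyring.get(currentVersion)!, iv);
  const data = Buffer.concat([cipher.update(plaintext, 'utf8'), cipher.final()]);
  return {
    v: currentVersion,
    iv: iv.toString('base64'),
    tag: cipher.getAuthTag().toString('base64'),
    data: data.toString('base64'),
  };
}

function decrypt(field: EncryptedField): string {
  const key = keyring.get(field.v); // look up the key matching the stored version
  if (!key) throw new Error(`No key available for version ${field.v}`);
  const decipher = createDecipheriv('aes-256-gcm', key, Buffer.from(field.iv, 'base64'));
  decipher.setAuthTag(Buffer.from(field.tag, 'base64'));
  return Buffer.concat([
    decipher.update(Buffer.from(field.data, 'base64')),
    decipher.final(),
  ]).toString('utf8');
}
```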
Deploying new keys
Whenever you roll over to a new key, this key has to be generated and deployed. You can do this in a central fashion or use some distributed key agreement protocol.
Re-encrypt data
If you need to re-encrypt data, you can do it in two ways:
Background process: Have a background process that retrieves N data items carrying an old versioning identifier, decrypts and re-encrypts them, and stores the result. Sleep a bit between runs so you don't overload your system.
Update on access: Whenever you read data and notice that it has an old versioning identifier, re-encrypt it with the current key and store the result (see the sketch below). Depending on your data-access pattern this might not re-encrypt everything, so an additional background process might still be necessary.
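Sticking with the sketch above, "update on access" can be as small as this; the store parameter stands in for whatever data-access layer you have:

```ts
// Reuses encrypt/decrypt, EncryptedField and currentVersion from the previous sketch.
async function readAndUpgrade(
  id: string,
  store: {
    load(id: string): Promise<EncryptedField>;
    save(id: string, field: EncryptedField): Promise<void>;
  }
): Promise<string> {
  const field = await store.load(id);
  const plaintext = decrypt(field);
  if (field.v !== currentVersion) {
    // Lazily migrate the row to the current key the first time it is read.
    await store.save(id, encrypt(plaintext));
  }
  return plaintext;
}
```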
Asymmetric crypto
If you are using asymmetric crypto (for example, for storing credit card numbers, where web servers only have the public key to encrypt and the payment processor has the private key to decrypt), it gets a bit trickier, since only the machines with the private keys can re-encrypt data. All other aspects are the same.
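To illustrate that split with Node's crypto module (RSA-OAEP; key management omitted):

```ts
import { publicEncrypt, privateDecrypt, constants } from 'crypto';

// Web tier: has only the public key, so it can write but never read back.
function encryptForProcessor(cardNumber: string, publicKeyPem: string): Buffer {
  return publicEncrypt(
    { key: publicKeyPem, padding: constants.RSA_PKCS1_OAEP_PADDING },
    Buffer.from(cardNumber, 'utf8')
  );
}

// Payment processor: holds the private key, so only it can decrypt and
// therefore only it can re-encrypt the data under a new key pair.
function decryptOnProcessor(blob: Buffer, privateKeyPem: string): string {
  return privateDecrypt(
    { key: privateKeyPem, padding: constants.RSA_PKCS1_OAEP_PADDING },
    blob
  ).toString('utf8');
}
```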
Google's Keyczar provides such a framework, but there isn't a .NET version.
Maybe you could wrap the C++ version in a .NET wrapper?