Is it necessary to check isUpdatable() when a process runs in system mode/trigger to pass the security review process? - security

In Salesforce we have a scenario: a trigger on the Lead object updates some Campaign records, but the user on whose behalf the trigger runs does not have edit permission on Campaign. The update itself works fine, because the trigger performs the operation in system mode.
We then applied for the security review, made the changes, and added an isUpdatable() check on the object; after that we can no longer update the Campaign, because the check returns false.
My questions are: can we pass the security review without applying that isUpdatable() check, given that our business logic needs to update the Campaign/Opportunity on behalf of a user who doesn't have permissions on the Campaign/Opportunity?
If we cannot pass the security review without that check, what could be an alternative for the case where a user who doesn't have permission on Campaign/Opportunity performs some operation on a Lead/Contact, and we want to update the Campaign/Opportunity in system mode after that operation?
Or is it necessary to grant that user permissions on Campaign/Opportunity?

It's not a coding question as such so it might be closed here. Consider cross-posting to https://salesforce.stackexchange.com/
Generally speaking, your app should be simplifying Salesforce: adding value by being a pre-built, pre-tested use case for the customer and saving clicks for the end user (let's ignore situations like pulling data from other systems or running some crazy Excel file generation that SF can't do easily). With that philosophy, you're supposed to respect the System Administrator's wishes when it comes to security. If the admin didn't grant Profile X edit access to field Y, the security review answer is that you should detect it. If you can recover gracefully and continue with your program, cool. If it's a critical field, throw an error and force the admin to make a conscious decision. Because if you're saving clicks, the user would face the same problem in the normal UI. It's not only "describes"; it's also about "without sharing", for example.
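In code, that "detect and decide" approach might look like the following Apex sketch (hedged: `campaignsToUpdate` is a hypothetical list built earlier in the trigger handler):

```apex
// Sketch only: check object-level edit access before the system-mode DML.
if (Schema.sObjectType.Campaign.isUpdatable()) {
    update campaignsToUpdate; // the running user could also do this in the UI
} else {
    // Non-critical update: recover gracefully and carry on...
    System.debug(LoggingLevel.WARN,
        'Skipping Campaign update: user lacks edit access');
    // ...or, for a critical update, surface the problem to the admin:
    // throw new DmlException('Edit access to Campaign is required.');
}
```

Which branch you take (skip silently, log, or throw) is exactly the conscious decision the review wants to see you make.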
There's another layer to it: licensing. In the old days "Marketing User" (needed to access Campaigns) was a separate license; you assigned it by ticking a checkbox on the User, but it had to be purchased. Now it's a bit simpler, part of the full user license (I think). But there are still situations where you can't access Opportunities, for example (Platform License), or can access only Account, Contact and 10 custom objects (Chatter Plus License, or whatever the new name is).
If you abuse system mode to fetch data from objects the user is not allowed to see (or to save to them), the official answer is that SF loses money because of you. The permission should really be assigned and, if needed, a license purchased. I'm not a lawyer; I don't know who violates the Master Service Agreement with Salesforce, you or the client that installed the app, or both. I'd say read the contracts and see what you can do to protect yourself. If that means your app can't be installed by customers on Essentials/Professional (or it can be installed anywhere but used only by full users, not by Platform/Chatter/Community users), well, it is what it is. Weigh the pros and cons, and the legal consequences if it gets caught...
They're humans, talk with them. Maybe you'll pass the review. Better have a rock-solid business case for why you need to bypass the check, though.
P.S. You know you don't have to do describes anymore? Spring '20 goes live this and next week, and "WITH SECURITY_ENFORCED" and "Security.stripInaccessible" become generally available :)
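For reference, a minimal sketch of those two Spring '20 features (the Status field and variable names are illustrative):

```apex
// WITH SECURITY_ENFORCED throws a System.QueryException if the running
// user lacks read access to any queried object or field:
List<Campaign> camps =
    [SELECT Id, Status FROM Campaign WITH SECURITY_ENFORCED];

// Security.stripInaccessible silently removes the fields the user cannot
// update, instead of failing the whole operation:
SObjectAccessDecision decision =
    Security.stripInaccessible(AccessType.UPDATABLE, camps);
update decision.getRecords();
```

Note the different philosophies: the SOQL clause fails loudly, while stripInaccessible degrades gracefully; pick whichever matches the "critical field vs. recover gracefully" decision above.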
P.P.S. Do you really need a trigger? Workflow, Process Builder, Flow (yuck), anything to not have code means no need for the isAccessible checks, and if it effectively dies on permissions, it's the client's sysadmin's problem?

Related

Enterprise way to handle server access

I don't know exactly how to properly title this question.
The problem is, ... we have a set of tickets that the infrastructure team needs to take care of. In order to work them out they need access to the servers, sometimes with sudo/root privileges.
Is there a way to handle access to servers with temporary users that can be deleted automatically after a while, and also to disallow these users from changing their own settings, creating new users, or performing other root-specific actions?
Is it possible to link this "way" to tickets, in ... JIRA for example, so that if someone needs access to any server it is because there was a ticket for that?
Am I clear enough?

Why do services provide a list of recovery codes for two factor authentication instead of just one?

The title pretty much says it all. I am currently implementing two-factor authentication and wonder whether I should provide a list of recovery codes. I wouldn't even have thought of it myself, but most implementations I have seen in the wild do it this way. GitHub, for example, generates a list of 16 recovery codes at once.
Is there any security benefit?
Great question! I'm pretty sure it's an ease-of-implementation thing for both parties: if you generate a list of recovery codes up front, then you don't have to regenerate a code every time a user uses a backup code. The idea is that the user will print/save them somewhere, so from a usability perspective this saves the user from having to re-print or re-save codes after each use.
You don't have to use recovery codes, though. GitHub supports a few different recovery options. Some companies use security questions or allow you to fall back to SMS.
Some companies make you call support and provide account details; this works for enterprise use cases where there's a trusted contact, where you expect a small number of account-recovery cases, or where you have a good way of verifying identity over the phone (disclaimer: I wrote that post for Twilio).
Facebook lets you choose friends to vouch for your identity if you get locked out.
I gave a talk about this last year and have some more recommendations in the slides. Hope this helps!
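As an illustration of the up-front approach, here is a minimal Python sketch (function names are my own invention) that generates a batch of one-time codes, stores only their hashes, and invalidates each code on first use:

```python
import hashlib
import secrets

def generate_recovery_codes(n=16):
    """Generate n one-time recovery codes; persist only their hashes."""
    codes = [secrets.token_hex(5) for _ in range(n)]  # 10 hex chars each
    # SHA-256 is acceptable here because the codes are high-entropy random
    # values, unlike user-chosen passwords.
    hashes = {hashlib.sha256(c.encode()).hexdigest() for c in codes}
    return codes, hashes  # show codes to the user exactly once

def redeem(code, stored_hashes):
    """Each code works exactly once: remove its hash on successful use."""
    h = hashlib.sha256(code.encode()).hexdigest()
    if h in stored_hashes:
        stored_hashes.remove(h)
        return True
    return False
```

Because the whole batch is generated at enrollment time, the server never has to interrupt a recovery flow to mint and deliver a fresh code, which is the ease-of-implementation point above.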

What steps are there to prevent someone inside a company from altering user data (e.g. Facebook, Google, etc.)?

I've always wondered what security mechanisms there are to prevent an employee (DBA, developer, manager, etc.) from modifying users' data. Let's say a user has a Facebook account. Knowing how databases work, I know that at least two employees in that company would have root access to it. So my question is: what if such an employee decides to alter someone's profile, insert bad or misleading comments, etc.?
Any input is appreciated. Thanks.
If a person has full write access to a database, there is nothing preventing them from writing to that database. A user who has unrestricted access to Facebook's database engine has nothing other than company policy to prevent them from altering that data.
Company policy and personal honor are usually good enough. In the end, though, there's always that risk; a Google employee was fired in July for reading users' private account data. In short, the people who write software for a system can make that system do whatever they like, and there is absolutely no way to prevent this; people who can read a source of data can read that source of data, and people who can edit it can edit it. There is no theoretical way to prevent this from being the case.
In short, all that can be done is to have more than one person watching the database, and fire people who try to damage it. As a user, all you can do is trust the company that controls the data.
This is a user access control problem. You should limit who has DBA access, and limit what code developers have access to, such as only the projects they need to do their job.
An afterthought is to keep backups and logs. If someone does change a record in the database, you should have a system in place to identify and fix the problem.

Backdoor Strategy- opinion needed

I'm creating an application to track publications and grants for a university. Professors will need to put their CVs into the system when it is up and running. Yeah, right.
The person in charge is planning on hiring someone to input all of the information, but my question is: how?
The strategy I'm thinking of is to install a backdoor. The lucky undergrad can log in as any professor using the backdoor. Once all the data is entered, the backdoor can be removed.
Doing so would probably be as simple as editing out a comment in the config file. The IT guys would still have access, but since they control the machines, they would have access anyway. Are there any flaws in this strategy?
Instead of installing a backdoor, why not create a privileged user role? Users with this role can view and modify data for any other user (or a select group of users, if you want to be fancy, and more secure, about it). The undergrad could then use an account with this role to input the necessary data. When he is done, an admin can remove the role from his account, effectively closing the "back door".
You risk the undergrad doing some other damage. What you should do is have them create a new user, give that user a small partition, and have them enter the data onto that. Then just copy it over when they're done. It's a bad idea to give a student actual access, and even worse to have him log on as the professor; he should have his own user.
Don't underestimate the ongoing need for staff, students, or temps to enter and maintain the data. As simple as upkeep may be after the initial loading (typing) period, some professors simply will not do it, and will delegate it to staff.
In an eerily similar application (ours tracks publications and grants, among other things, as part of a career review for raises and promotions), our decision was to use a "proxy" system, where certain users can "switch to" other users. It's not really a switch, because we store who did the input/editing along with who the data applies to.
Contrary to what Justin Ethier said about privileged roles, these people are the least privileged in the system, allowed only to switch to another account and do data entry.
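A toy sketch of that attribution model (all names hypothetical): every edit records both the acting user and the user the data applies to, so "switching" never hides who typed what:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ProfileEdit:
    """One data-entry action: who typed it vs. whose record it is."""
    acting_user: str    # e.g. the hired data-entry temp
    subject_user: str   # the professor the data applies to
    field: str
    new_value: str
    timestamp: datetime

audit_log: list[ProfileEdit] = []

def record_edit(acting_user, subject_user, field, new_value):
    """Append an attributed edit instead of impersonating the subject."""
    audit_log.append(ProfileEdit(acting_user, subject_user, field,
                                 new_value, datetime.now(timezone.utc)))
```

The key design choice is that impersonation is impossible by construction: the acting user is captured on every write, not inferred from the login.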

What security events does one audit for a line of business application?

I already audit authorization success, failure and logout.
I've considered auditing (logging) every method call and keeping a version of every row and column that was ever modified, but both of those options would greatly increase the complexity of auditing. Auditing a random subset seems too random.
The legal specs (FISMA, C&A) just say something needs to be audited.
Are there any other non-domain specific auditing strategies I'm forgetting?
Considering that auditing is often about accountability, you may wish to log those actions that could contribute to any event where someone or something needs to be held accountable:
Alteration of client records
Alteration of configuration
Deletion of data
It is a good idea to keep some of these things versioned, so that you can roll back changes to vital data. Adding 'who altered it' in the first place is quite straightforward.
Unless someone has direct access to the database, application logging of any event that affects the database, such as a transaction altering many tables, may often be sufficient. So long as you can link an auditable logical action to a logical unit of accountability, regardless of which subsystem it affects, you should be able to trace accountability.
You should not be logging method calls and database alterations directly, but the business logic that led to those calls and changes, and who used that logic. Some small amount of backend code linking causality between calls/table alterations and some audit-message would be beneficial too (if you have resources).
Think of your application's audit elements as a tree of events. The root is what you log, for example 'Dave deleted customer record 2938'. Any children of the root can also be logged and tied to the root, if it is important to log them as part of the audit event. For example, you can assert that some audit event, 'Dave deleted ...', was tied to some billing information also going walkies as part of a constraint or something.
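That tree shape can be sketched in a few lines (a hypothetical in-memory event store; a real system would persist this): each child event carries a pointer back to the root business action.

```python
import uuid

def log_event(events, message, parent_id=None):
    """Append an audit event; children reference the root business action."""
    event = {"id": str(uuid.uuid4()), "parent": parent_id, "message": message}
    events.append(event)
    return event["id"]

events = []
# Root: the business-level action performed by an accountable person.
root = log_event(events, "Dave deleted customer record 2938")
# Child: a side effect tied back to that action, not logged in isolation.
log_event(events, "billing rows removed by cascade constraint",
          parent_id=root)
```

Querying by the root id then reconstructs everything that happened as a consequence of one accountable action.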
But I am not an expert.
I agree with a lot of what Aiden said, but I strongly believe that auditing should be at the database level. Too many databases are accessed with dynamic SQL, so permissions are at the table level (at least in SQL Server). So a person committing fraud can insert, delete, or change data directly in the database, bypassing all the rules. In a well-designed system only a couple of people (the DBA and a backup) would have the rights to change audit triggers in prod, and thus most people could get caught if they changed data they were not authorized to change. Auditing through the app would never catch these people. Of course there is almost no way to prevent the DBAs from committing fraud if they choose to, but someone must have admin rights to the database, so you must be extra careful in choosing such people.
We audit all changes to data, all inserts and all deletes on most tables in our database. This allows for easy backing out of a change as well as providing an audit trail. Depending on what your database stores, you may not need to do that. But I would audit every financial transaction, every personnel transaction, and every transaction having to do with orders, warehousing or anything else that might be subject to criminal activity.
If you really want to know what absolutely must be audited, talk to the people who will be auditing you and ask what they will want to see.
