In Salesforce, we have a scenario where a trigger on the Lead object updates some Campaign records, but the user on whose behalf the trigger runs does not have edit permission on Campaign. The update itself works fine, because the trigger runs the operation in system mode.
We then applied for the security review and, as part of the required changes, added an isUpdatable() check on the object. Since then we can no longer update the Campaign, because isUpdatable() returns false for that user.
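For context, the check we added looks roughly like this (a simplified sketch, not our exact code; the class and method names are placeholders):

```
// Simplified sketch of the CRUD check added for the security review.
public class CampaignUpdater {
    public static void updateCampaigns(List<Campaign> campaigns) {
        // isUpdatable() reflects the running user's profile/permission sets,
        // even though the trigger itself executes in system mode.
        if (!Schema.sObjectType.Campaign.isUpdatable()) {
            // A user without Campaign edit access lands here,
            // so the DML below never runs for them.
            return;
        }
        update campaigns;
    }
}
```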
My questions are: Can we pass the security review without applying that isUpdatable() check, given that our business logic requires updating the Campaign/Opportunity on behalf of a user who doesn't have permissions on the Campaign/Opportunity?
If we cannot pass the security review without that check, what would be an alternative for this scenario: a user who doesn't have permission on Campaign/Opportunity performs some operation on a Lead/Contact, and we want to update the Campaign/Opportunity in system mode after that operation?
Or is it necessary to grant that user permissions on Campaign/Opportunity?
It's not a coding question as such so it might be closed here. Consider cross-posting to https://salesforce.stackexchange.com/
Generally speaking, your app should be simplifying Salesforce: adding value by being a pre-built, pre-tested use case for the customer and saving clicks for the end user (let's ignore situations like pulling data from other systems or running some crazy Excel file generation that SF can't do easily). With that philosophy, you're supposed to respect the System Administrator's wishes when it comes to security. If the admin didn't grant Profile X edit access to field Y, the security review answer is that you should detect it. If you can recover gracefully and continue with your program - cool. If it's a critical field - throw an error and force the admin to make a conscious decision. Because if you're just saving clicks, the user would face the same problem in the normal UI. And it's not only about describes, it's also about "without sharing", for example.
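As a rough illustration of "detect and decide" (the field and message are only examples, not a prescribed implementation):

```
// Sketch only: a Lead trigger that respects the admin's permission setup.
trigger LeadCampaignSync on Lead (after update) {
    // Field-level check; Status is just an example field.
    if (!Schema.sObjectType.Campaign.fields.Status.isUpdatable()) {
        // Critical field: fail loudly so the admin makes a conscious decision,
        // the same way the user would hit a permission error in the normal UI.
        for (Lead l : Trigger.new) {
            l.addError('This feature needs edit access to Campaign Status. Ask your admin to grant it or disable the feature.');
        }
        return;
    }
    // ...otherwise carry on with the Campaign update.
}
```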
There's another layer to it - licensing. In the old days "Marketing User" (to access campaigns) was a separate license; you assigned it by ticking a checkbox on the User, but it had to be purchased. Now it's a bit simpler, part of the full user license (I think). But there are still situations where you can't access Opportunities, for example (Platform License), or can access only Account, Contact and 10 custom objects (Chatter Plus License or whatever the new name is).
If you abuse system mode to fetch data from objects the user is not allowed to see (or to save to them) - the official answer is that SF loses money because of you. The permission should really be assigned and, if needed, a license purchased. I'm not a lawyer; I don't know who violates the Master Service Agreement with Salesforce - you, the client that installed the app, or both. I'd say read the contracts and see what you can do to protect yourself. If that means your app can't be installed by customers on Essentials/Professional (or it can be installed anywhere but used only by full users, not by Platform/Chatter/Community ones) - well, it is what it is. Weigh the pros and cons, and the legal consequences if it gets caught...
The security review team are humans - talk with them. Maybe you'll pass the review. You'd better have a rock-solid business case for why you need to bypass the check, though.
P.S. You know you don't have to do describes anymore? Spring '20 goes live this week and next, and "WITH SECURITY_ENFORCED" and Security.stripInaccessible() become generally available :)
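A rough sketch of what those two give you (assuming Spring '20 or later; campaignIds is just a placeholder for whatever IDs your trigger collected):

```
// 1. SOQL that throws an exception if the running user lacks read access
//    to any queried object or field:
List<Campaign> campaigns = [
    SELECT Id, Status
    FROM Campaign
    WHERE Id IN :campaignIds
    WITH SECURITY_ENFORCED
];

// 2. Strip fields the user can't update before the DML, instead of
//    hand-rolling isUpdatable() checks:
SObjectAccessDecision decision =
    Security.stripInaccessible(AccessType.UPDATABLE, campaigns);
update decision.getRecords();
```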
P.P.S. Do you really need a trigger? Workflow, Process Builder, Flow (yuck) - anything without code means no need for the isAccessible()/isUpdatable() checks, and if it effectively dies on permissions - that's the client's sysadmin's problem?
Before posting this question I searched and found that a lot of questions have been asked about "how to implement single user sessions" in different tools or frameworks.
But my question is why should we implement single user sessions?
I discussed this with some friends, did some research on Google, and found two reasons:
1. Some applications maintain a per-user "working state", so allowing more than one session can mess up that state.
2. Some tools/applications need it for licensing: the license allows only a fixed number of users, so enforcing a single session per user prevents misuse.
Neither of the above points is related to security. Is there any other reason why it is considered a good security practice?
From a security point of view, single user sessions can help a user detect that their account is being used elsewhere. For example, upon logon if a user receives a message that their account is already logged on at another location, they will be alerted to the fact that their credentials have been compromised and that they should perhaps change their password.
Apart from that, I can only think of it being a good security mechanism to protect content (i.e. licensing that you covered in point (2)). However, there may be a secondary advantage to preventing multiple logons per account - users are less likely to share their credentials with other users if the account cannot be used simultaneously. Therefore their account details are likely to be kept more secure as the details have not been emailed to friends or colleagues.
I've always wondered what security mechanisms there are to prevent an employee (DBA, developer, manager, etc.) from modifying users' data. Let's say a user has a Facebook account. Knowing how databases work, I know that at least two employees in that company would have root access to them. So my question is: what if such an employee decides to alter someone's profile, insert bad or misleading comments, etc.?
Any input is appreciated. Thanks.
If a person has full write access to a database, there is nothing preventing them from writing to that database. A user who has unrestricted access to Facebook's database engine has nothing other than company policy to prevent them from altering that data.
Company policy and personal honor are usually good enough. In the end, though, there's always that risk; a Google employee was fired in July for reading users' private account data. In short, the people who write software for a system can make that system do whatever they like, and there is absolutely no way to prevent this; people who can read a source of data can read that source of data, and people who can edit it can edit it. There is no theoretical way to prevent this from being the case.
In short, all that can be done is to have more than one person watching the database, and fire people who try to damage it. As a user, all you can do is trust the company that controls the data.
This is a user access control problem. You should limit who has DBA access, and limit what code developers have access to - only the projects they need to do their job.
An afterthought: keep backups and logs. If someone does change a record in the database, you should have a system in place to identify and fix the problem.
I am the webmaster for a small, growing industrial association. Soon, I will have to implement a restricted, members-only section for the website.
The problem is that our organization's membership includes both big companies and amateur "clubs" (it's a relatively new industry…).
It is clear that those clubs will share the login ID they use to log onto our website. The problem is detecting whether one of their members shares the login credentials with people who are not supposed to be accessing the website (there is no objection to such a club having all of its members get on the website).
I have thought about logging, along with each sign-on, the IP address as well as the OS and browser used; if the OS/browser stays constant and there are no more than, say, 10 different IP addresses, the account is clearly used by very few different computers.
But if there are 50 OS/browser combinations and 150 different IPs, the credentials have obviously been disseminated widely, and there would then be cause for action, such as changing the password.
Of course, it is extremely annoying to have your password changed unilaterally. So, for this problem, I thought about allowing the "clubs" to manage their own list of sub-accounts; if abuse is suspected, the user responsible can easily be pinned down, and this "sub-member" alone would face the annoyance of a password change.
Question:
What potential problems would anyone see with such an approach?
Any particular reason why you can't force each club member to register (just straight-up, not necessarily as a sub-account or some similarly complex structure)? Perhaps give each club some sort of code to use when its users register, so you can automatically create their accounts and affiliate them with a club; you then have direct accounting of each member without an onerous process that the club has to manage itself. It's then much easier to determine if a given account is being spread around (disparate IP accesses in given periods of time).
Clearly then you can also set a limit on the number of affiliated accounts per club, should you want to do so. This is basically what you've suggested, I suppose, but I would try to keep any onerous management tasks out of the hands of your users if at all possible. If you can manage club-affiliated signups, you should, rather than forcing someone at the club to manage them for you.
Also, while some sort of heuristic based on IP and credentials is probably fine, I would stay away from incorporating user-agent, or at least caring too much about it. Seeing a few different UAs from the same IP - depending on your expected userbase, I suppose - isn't really that unusual. I use several browsers in the course of my day due to website bugs, etc. and unless someone is using a machine as a proxy, it's not evidence of anything nefarious.
Within your organization, is every developer required to lock his workstation when leaving it?
What do you see as the risks when workstations are left unlocked, and how important do you think those risks are compared to "over-the-wire" (network hacking) security risks?
What policies do you think are most effective at enforcing workstation locking? (The policies might be "technical", like domain group security settings that make the screen saver lock the machine, or "social", like applying penalties to those who do not lock, or encouraging goating.)
The primary real-world risk is your co-workers "goating" you. You can enforce locking by setting a group policy to run the screen saver after X minutes and have it lock the computer as well.
For me, this has become habit. On a Windows machine, pressing Windows-L is a quick way of locking the machine.
The solution might be social rather than technical. Convince people that they don't want anyone else reading their email or spoofing their accounts when they step away.
In my org (government), yes. We deal with sensitive data (medical and SSN). It's instinctual for me: Windows+L every time I walk away.
The policy is both social and technical. On the social side, we're reminded that personal security is important, and everyone is aware of the data with which we are privy. On the technical side, the workstations use a group policy that turns on the screensaver after 2 minutes, with "On resume, password protect" turned on (and unable to be turned off).
No, but I'm an organization of one. The last time I worked in a large organization, we were not required to, but were encouraged to. If I were in an environment with other people, I would probably lock my workstation now when I left it.
While certainly people with physical access can add hardware keyloggers, locking it does add an additional level of security. Depending on the type of organization you are, I think the risks are more from internal organizational snooping than over-the-wire attacks.
I used to work at a very large corporation where the workstation required your badge to be inserted into it to work. You weren't allowed to move around the building without that badge on you (you needed it to open the doors anyway). Taking the badge out of the workstation's smartcard reader logged you out automatically.
Off topic but even neater: the workstations were more like "network stations" (though that isn't a necessity for the badge system I've just described), and the badge held your session. Pop it into another workstation in another building and there's your session, just as you left it when you pulled the badge on the other computer.
So they basically solved the issue by physically forcing people to log off their workstation, which I think is the best way to enforce any kind of security-critical policy. People forget, it's human nature.
The only place I have seen where this is truly important is government, defense, and medical facilities. The best way to enforce it is through user policies on Windows and "dot files" on Unix systems where a screensaver and timeout are pre-chosen for you when you log in and you aren't allowed to change them.
I never lock my workstation.
When my coworker and friend mocked me and threatened to send embarrassing emails from my machine, I mocked him back for thinking that locking does ANYTHING when I have physical access to his machine, and I linked him to this URL:
http://www.google.com/search?hl=en&q=USB+keylogger
I don't work with any sensitive data my coworkers wouldn't already have equal access to, but I am doubtful of the effectiveness of workstation locking against a determined snoopish coworker.
Edit: the reason I don't lock is that I used to, but it kept creating weird instabilities in Windows. I reboot only on demand, so I expect my machine to run for months without becoming unstable, and locking was getting in the way.
Goating can get you fired, so I don't recommend it. However, if that's not the case where you work, it can be effective, even if it's just a broad email that says, "I will always lock my workstation from now on."
At the very least, machines should lock themselves after X minutes of inactivity, and this should be set via group policy.
Security is about raising the bar, making a greater amount of effort necessary to accomplish something bad. Not locking your workstation at all lowers that bar.
We combine social and technical methods to encourage IT people to lock their PCs: default screen saver/locking settings plus the threat of goating. (The last place I worked actually locked the screen saver settings.)
One thing to keep in mind is that if you have applications (particularly if they are SSO) that track activity, changes, or both, the data you collect may be less valuable if you can't be sure the user recorded in the data is the user who actually made those changes.
Even at a company like ours, where there isn't a lot of company-related sensitive information available to most users, there's certainly potential for someone to acquire NBR data from another employee via an unlocked workstation. How many people save passwords to websites on their computer? Amazon? Fantasy football? (A dangerous goating technique: drop a key player from someone else's roster. It's really only funny if the commish is in on it with you, so the player can be restored ...)
Another thing to consider is that you can't be sure that everyone in your building belongs there. It's much easier to hack into a network if you're actually in the building: of course the vast majority of people in the building are there because they're allowed to be, but you really don't want to be the victimized company when that one guy in a million does get into the building. (It doesn't even have to be an intentionally bad guy: it could be somebody's kid, a friend, a relative ...) Of course the employee who let that person in could also let that person use their computer, but that kind of attack is much more difficult to stop.
Locking your workstation each time you go for a coffee means that you type your password 10 times per day rather than once, and everyone around you can see you type it. And once they have that password they can impersonate you from remote computers, which is far more difficult to prove than using your PC in the office with everyone watching. So surely locking your workstation is actually more of a security risk?
I'm running Pageant and have my SSH public key distributed across all the servers here. Whoever sits down at my workstation can basically log into any account anywhere with my keys.
Therefore I always lock my machine, even for a 30s break. (Windows-L is basically the only Windows-key based shortcut I know.)
I personally think the risk is low, but in my experience most of the time it's not matter of opinion -- it's often a requirement for big corporate or government clients who will actually come in and audit your security. In that case, some kind of technical (group policy) solution would be best because you can actually prove you are complying with the requirement. I would also do it in cases where there is a legal privacy requirement (like medical data and HIPAA.)
I worked at a place where the people who supplied some of our equipment were from a company in direct competition with us. They were in the building when the equipment required maintenance. An email would go out every now and then saying they would be there, please lock your machine when you're not at it. If a competitor got our source because a developer forgot to lock their machine, the developer would be looking for a new job.
We are required to at work, and we enforce it ourselves. Mass chats are started professing love for people, emails are sent, backgrounds are changed, etc. Gotta love the first day when it happens to a new hire, everyone is sure to leave a nice note :)
The place I used to work had a policy on always locking your workstation. They enforced it by setting up a company wide mailing list - if you left your workstation unlocked, your co-workers would send an embarrassing mail to the list from your account, then lock your machine. It was kind of funny, and also kind of annoying, but it generally worked.
You could start sitting down at people's workstations and loading up [insert anything bad here] right after they walk away. That will work I'm sure.
In some/most government offices I've visited that may have members of the public walking about, they have smartcards that plug into a USB reader on the PC. The card is on a necklace around the user's neck, and the workstation locks when the card is removed.
The owner of my company (also a developer) will make a minor change in your code window if you leave your computer unlocked, making you go crazy wondering why your code isn't working until you find it.
I have to say, I never leave my computer unlocked after hearing about that prank; I go crazy enough as it is with some of my code.
You could rig up a simple, foolproof setup: a fingerprint reader plugged into the computer and programmed for your password, plus a necklace with a USB receiver that you wear. When you move away from the workstation, the screen saver actively locks it; when you come back within range, you swipe your finger on the fingerprint reader to unlock it. I think that would be quite a cheap way of doing it - simple, unintrusive and clutter-free, with no forgetting to lock via WinKey+L.
Is it important, or is it just a good habit? Locking your computer may not be important, say, in your own house, but if you are at a client's office and walk away, then I would say it is important.
As for how to enforce...
After reading Jeff's blog entry "Don't Forget To Lock Your Computer", I like to change co-workers' desktop backgrounds to...
Needless to say, co-workers started locking their computers.
GateKeeper is an easy solution to this. It locks the workstation automatically when the user walks away and unlocks it automatically when the user comes back within range of the computer. It can also require two-factor authentication and other lock/unlock methods.