I am using the Intuit AggCat Beta in a development environment and noticed that when I discover new accounts, they often don't include important information like balances, current interest rates, and other key pieces of data. This is true even after calling functions like getAccountTransactions or getCustomerAccounts (not simply discoverAndAddNewAccounts).
However, after a few hours, I refresh the accounts and this information shows up. It's very important that new accounts include this data during the discovery process, and I wanted to check whether this is an issue with the development environment (i.e. something that will go away in production) or whether other users are having this issue too.
This is how it works in both the dev and prod environments. During account discovery, the service may not be able to get all of the information for some accounts at some financial institutions because of their website layout or data limitations.
The refresh call will retrieve this information, so the best practice is to refresh the account before you try to read these fields.
The title pretty much says it all. I am currently implementing two factor authentication and wonder whether I should provide a list of recovery codes. I wouldn't even have thought of it myself, but most implementations I have seen in the wild do it this way. Github for example generates a list of 16 recovery codes at once.
Is there any security benefit?
Great question - I'm pretty sure it's an ease of implementation thing for both parties - if you generate a list of recovery codes up front then you don't have to regenerate a code every time a user uses a backup code. The idea is that the user will print/save them somewhere so from a usability perspective this saves the user from having to re-print or re-save codes.
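For illustration, here is a minimal Python sketch (not any particular site's implementation) of the generate-up-front approach: the plain codes are shown to the user once, only hashes are stored, and each code is invalidated when redeemed.

```python
import hashlib
import secrets

def generate_recovery_codes(count=16, nbytes=5):
    """Generate one-time recovery codes plus the hashes to store server-side."""
    codes = [secrets.token_hex(nbytes) for _ in range(count)]          # shown to the user once
    hashes = {hashlib.sha256(c.encode()).hexdigest() for c in codes}   # persisted for later checks
    return codes, hashes

def redeem(code, stored_hashes):
    """Return True and invalidate the code if it matches an unused hash."""
    digest = hashlib.sha256(code.encode()).hexdigest()
    if digest in stored_hashes:
        stored_hashes.discard(digest)   # single use: the code can't be replayed
        return True
    return False
```

Because all 16 codes exist from the start, the server never has to mint a replacement mid-recovery, which is the implementation convenience mentioned above.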
You don't have to use recovery codes, though. GitHub supports a few different recovery options. Some companies use security questions or allow you to fall back to SMS.
Some companies make you call support and provide account details (this works for enterprise use cases where there's a trusted contact, you expect a small number of account recovery cases, and you have a good way of verifying identity over the phone [disclaimer: I wrote that post for Twilio]).
Facebook lets you choose friends to vouch for your identity if you get locked out.
I gave a talk about this last year and have some more recommendations in the slides. Hope this helps!
In Salesforce we have a scenario where a trigger on the Lead object updates some Campaign records, but the user on whose behalf the trigger runs does not have edit permission on Campaign. We are not facing any issue updating the Campaign, because the trigger runs the operation in system mode.
Further, we applied for the security review, made the required changes, and added an isUpdatable() check on the object; after that, we are no longer able to update the Campaign, because isUpdatable() returns false.
My questions are: can we pass the security review without applying that isUpdatable() check, given that our process has business logic to update the Campaign/Opportunity on behalf of a user who doesn't have permissions on the Campaign/Opportunity?
If the review requires that check, what could be an alternative where a user who doesn't have permission on Campaign/Opportunity performs some operation on Lead/Contact, and we then update the Campaign/Opportunity in system mode after that operation?
Or is it necessary to grant that user permissions on Campaign/Opportunity?
It's not a coding question as such so it might be closed here. Consider cross-posting to https://salesforce.stackexchange.com/
Generally speaking, your app should be simplifying Salesforce, adding value as a pre-built, pre-tested use case for the customer and saving clicks for the end user (let's ignore situations like pulling data from other systems or running some crazy Excel file generation that SF can't do easily). With that philosophy, you're supposed to respect the System Administrator's wishes when it comes to security. If the admin didn't grant Profile X edit access to field Y, the security review answer is that you should detect it. If you can recover gracefully and continue with your program, cool. If it's a critical field, throw an error and force the admin to make a conscious decision, because if you're just saving clicks, the user would face the same problem in the normal UI. And it's not only about "describes"; it's also about "without sharing", for example.
There's another layer to it: licensing. In the old days, "Marketing User" (needed to access Campaigns) was a separate license; you assigned it by ticking a checkbox on the User record, but it had to be purchased. Now it's a bit simpler, part of the full user license (I think). But there are still situations where you can't access Opportunities, for example (Platform license), or can access only Account, Contact, and 10 custom objects (Chatter Plus license, or whatever the new name is).
If you abuse system mode to fetch data from objects the user is not allowed to see (or to save to them), the official answer is that SF loses money because of you. The permission should really be assigned and, if needed, a license purchased. I'm not a lawyer; I don't know who violates the Master Service Agreement with Salesforce, you or the client that installed the app, or both. I'd say read the contracts and see what you can do to protect yourself. If that means your app can't be installed by customers on Essentials/Professional (or can be installed anywhere but used only by full users, not by Platform/Chatter/Community users), well, it is what it is. Weigh the pros and cons, and the legal consequences if it gets caught...
The reviewers are humans; talk with them. Maybe you'll pass the review. Better have a rock-solid business case for why you need to bypass the check, though.
P.S. You know you don't have to do describes anymore? Spring '20 goes live this week and next, and "WITH SECURITY ENFORCED" and "stripInaccessible" become generally available :)
P.P.S. Do you really need a trigger? Workflow, Process Builder, Flow (yuck) - anything that avoids code means you don't need the isAccessible()/isUpdatable() checks, and if it effectively dies on permissions, that's the client's sysadmin's problem.
I'm looking for a way of programmatically exporting Facebook insights data for my pages, in a way that I can automate it. Specifically, I'd like to create a scheduled task that runs daily, and that can save a CSV or Excel file of a page's insights data using a Facebook API. I would then have an ETL job that puts that data into a database.
I checked out the OData service for Excel, which appears to be broken. Does anyone know of a way to programmatically automate the export of insights data for Facebook pages?
It's possible and not too complicated once you know how to access the insights.
Here is how I proceed:
Log the user in with the offline_access and read_insights permissions.
read_insights allows me to access the insights for all the pages and applications the user is admin of.
offline_access gives me a permanent token that I can use to update the insights without having to wait for the user to login.
Retrieve the list of pages and applications the user is admin of, and store those in database.
When I want to get the insights for a page or application, I don't query FQL, I query the Graph API: first I calculate how many queries to graph.facebook.com/[object_id]/insights are necessary, according to the date range chosen. Then I generate a query to use with the Batch API (http://developers.facebook.com/docs/reference/api/batch/). That allows me to get all the data for all the available insights, for all the days in the date range, in only one request (a rough sketch of such a call follows these steps).
I parse the rather large JSON object obtained (be aware that it weighs a few MB) and store everything in the database.
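To make the batch step concrete, here is a rough Python sketch of that kind of Batch API call. The page ID, access token, metric name, and date ranges are placeholders, and the exact metrics and required parameters depend on the Graph API version you target:

```python
import json
import requests

GRAPH_URL = "https://graph.facebook.com"
PAGE_ID = "YOUR_PAGE_ID"            # placeholder
ACCESS_TOKEN = "LONG_LIVED_TOKEN"   # placeholder: the stored long-lived token

# Each batch entry is an independent GET against /<object_id>/insights,
# one entry per slice of the chosen date range.
batch = [
    {"method": "GET",
     "relative_url": f"{PAGE_ID}/insights?metric=page_impressions"
                     f"&since=2023-01-01&until=2023-01-31"},
    {"method": "GET",
     "relative_url": f"{PAGE_ID}/insights?metric=page_impressions"
                     f"&since=2023-02-01&until=2023-02-28"},
]

resp = requests.post(GRAPH_URL,
                     data={"access_token": ACCESS_TOKEN,
                           "batch": json.dumps(batch)})
resp.raise_for_status()

# The response is a list with one element per batch entry; each "body"
# is itself a JSON string containing that query's insights data.
for part in resp.json():
    body = json.loads(part["body"])
    for metric in body.get("data", []):
        print(metric["name"], metric["values"])
```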
Now that you have all the insights parsed and stored in database, you're just a few SQL queries away from manipulating the data the way you want, like displaying charts, or exporting in CSV or Excel format.
I have the code already made (and published as a temporarily free tool on www.social-insights.net), so exporting to Excel would be quite fast and easy.
Let me know if I can help you with that.
It can be done before the weekend.
You would need to write something that uses the Insights part of the Facebook Graph API. I haven't seen something already written for this.
Check out http://megalytic.com. This is a service that exports FB Insights (along with Google Analytics, Twitter, and some others) to Excel.
A new tool is available: the Analytics Edge add-ins now have a Facebook connector that makes downloads a snap.
http://www.analyticsedge.com/facebook-connector/
There are a number of ways that you could do this. I would suggest your choice depends on two factors:
What is your level of coding skill?
How much data are you looking to move?
I can't answer 1 for you, but in your case you aren't moving that much data (in relative terms). I will still share three options of many.
HARD CODE IT
This would require a script that accesses Facebook's Graph API and a computer/server to process that request automatically.
I primarily use AWS and would suggest launching an EC2 instance scheduled to run your script at set times. I haven't used AWS Data Pipeline, but I do know it is designed so that you can have it run a script automatically as well... supposedly with a little less server know-how.
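As a rough sketch of this "hard code it" route, the daily job can simply dump whatever rows your insights fetch returns into a dated CSV for the downstream ETL; rows here stands in for the output of your own Graph API code:

```python
import csv
import datetime

def write_insights_csv(rows, out_dir="."):
    """Write one CSV per day; rows is a list of dicts from your insights fetch."""
    if not rows:
        return None
    date_tag = datetime.date.today().isoformat()
    path = f"{out_dir}/page_insights_{date_tag}.csv"
    with open(path, "w", newline="") as fh:
        writer = csv.DictWriter(fh, fieldnames=sorted(rows[0]))
        writer.writeheader()
        writer.writerows(rows)
    return path
```

A cron entry (or a scheduled task on the EC2 instance) then runs the fetch-and-write script once a day.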
USE THIRD PARTY ADD-ON
There are a lot of people who have similar data needs, which has led to a number of easy-to-use tools. I use Supermetrics Free to run occasional audits and make sure that our tools are running properly. Supermetrics is fast and has a really easy interface for accessing Facebook's APIs and several others. I believe you can also schedule refreshes and updates with it.
USE THIRD PARTY FULL-SERVICE ETL
There are also several services and freelancers that can set this up for you with little to no work on your own, depending on where you want the data. Stitch is a service I have worked with for FB ads. There might be better services, but it has fulfilled our needs for now.
MY SUGGESTION
You would probably be best served by using a third-party add-on like Supermetrics. It's fast and easy to use. The other methods might be more worth looking into if you had a lot more data to move, or needed it to be refreshed more often than daily.
Does anyone have recommendations on the best way to set up development and testing environments for Microsoft CRM 2011 On Demand?
The recommendations I have seen so far include:
Paying for another account with only one user
Creating a VM
Going with a partner hosted environment
You will need to be a little more specific. What's wrong with the three you have listed? Is it the cost, or the time to configure?
That being said, what I do is sign up for the free 30 day trial.
First, sign up for a new Windows Live account.
Second, click here to sign up for the 30 day account.
Third, I always write down the login & URL because I always forget them.
I'll have anywhere from 1 - 5 of these running at once.
The main benefit is the control this gives me. Since you can't access the SQL server directly with On Demand, it forces you to make your configurations & customizations the correct way.
Your other option is to set up a VM environment and create a new instance every time you need a clean setup. This is not my preferred option, since you need good hardware to run the environment (otherwise the performance penalty is huge).
Currently, I'm in the process of hiring a web developer who will be working on a site that processes credit cards. While he won't have the credentials to log into the payment gateway's UI, he will have access to the API login and transaction key, since they're embedded in the application's code.
I'd like to be aware of all the "what if" scenarios pertaining to the type of damage one could do with that information. Obviously, he can process credit cards but the money goes into the site owner's bank account so I'm not sure how much damage that could cause. Can anyone think of any other possible scenarios?
UPDATE: The payment gateway being used is Authorize.net.
Do they really need access to your production sites?
Don't store the key in your code, store it in your production database, or on a file on the production server.
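As a minimal sketch of that separation (variable names are illustrative, not part of any gateway SDK): read the API login and transaction key from the production server's environment at runtime, so they never appear in the source tree the developer works in.

```python
import os

def load_gateway_credentials():
    """Read gateway credentials from the production environment at runtime.

    The variable names are illustrative; the point is that nothing sensitive
    is committed to source code the developer can read.
    """
    api_login = os.environ["GATEWAY_API_LOGIN"]
    transaction_key = os.environ["GATEWAY_TRANSACTION_KEY"]
    return api_login, transaction_key
```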
Some good answers here, I'll just add that you'd probably have some trouble with PCI.
PCI-DSS specifically dictates separation of duties, isolation of production environments from dev/test, protection of encryption keys from anyone who does not require it, and more.
As @Matthew Watson said, rethink this, and don't grant production access to developers.
As an aside, if he can access the API directly, how do you ensure that "the money goes into the site owner's bank account"? Not to mention access to all that credit card data...
If the developer gets access to the raw credit card numbers, that can become a bigger problem, as your site can be associated with fraudulent activity if the developer turns out to be a bad apple. (They could redirect account numbers, CVV, and expiration dates to another site, though this should be spottable through network tools and a comprehensive code review.)
Does the API perform the "$1.00" charge (or "$X.XX") to verify that a credit card can be charged a certain amount, and thus return the result to the caller, such as "yes" or "no"? If so, it could be used to automate the validation of credit card numbers traded on the Internet, and abuse of such a system could lead back to you.
With any gateway I have worked with, the payment processor ties the API key to the specific IP or IP range of the merchant's site. With that said, unless the malicious(?) code in question is executed on the same server as the merchant's, there shouldn't be any security concerns in that regard.
If this is not the case for your merchant site - contact them and ask if this is feasible.
Does the payment gateway allow for reversal of charges? If so there is the possibility of a number of scams being run.
Does the site process refunds? Will it ever in the future?
If we're talking about nefarious uses, then the site owner might be investigated if lots of unauthorized purchases are made. How would that affect you if the owner is investigated?
From your description it seems that this developer will have access to customers' card details, in which case the customers' privacy may be compromised. You might consider wording the contract appropriately to make sure that this angle is covered.
However the main point is that if you're working on a sensitive project/information it's better for you to find people you could trust. Hiring a software house to do the job may save you some sleep later on.
First and foremost, it is best that you never store this type of information in plain text. People usually treat this as a given for credit card numbers (sadly, only because of legal reasons), but any sort of private data that you don't want others with database or source-code access viewing should be encrypted. You should store the account information somewhere in a well-encrypted format, and you should provide a test account for your developers to use on their development workstations. This way, only people with server access can see even the encrypted information.
This way, you can have a database on the developer's workstation with the test account's API information stored (hopefully encrypted) in its local database, but when the code is mirrored onto the production server it will still use the live, real gateway information stored in the production server's database, without extra code or configuration.
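As one way to handle the "well encrypted format" part, here is a minimal sketch assuming the Python cryptography package and a Fernet key that lives only on the production server (the function names are illustrative):

```python
from cryptography.fernet import Fernet

# Generate the Fernet key once on the production server and keep it outside
# the source tree (e.g. a root-only file or a secrets manager):
#   Fernet.generate_key()

def encrypt_transaction_key(plain_key: str, fernet_key: bytes) -> bytes:
    """Encrypt the gateway transaction key before writing it to the database."""
    return Fernet(fernet_key).encrypt(plain_key.encode())

def decrypt_transaction_key(ciphertext: bytes, fernet_key: bytes) -> str:
    """Decrypt at runtime; only the production box holds fernet_key."""
    return Fernet(fernet_key).decrypt(ciphertext).decode()
```

Developer workstations store the test account's key the same way, so the code path is identical in both environments.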
With this said, I don't think that a programmer with API authentication details can do too much. Either way, it's not worth the risk - in my opinion.
Hope this helps.
PS: If something bad does end up happening, you can always generate a new key in the web interface on authorize.net after you've taken precautions to make sure it won't happen again.
In the specific case of Authorize.Net, they would not be able to issue credits to their own credit cards, since Authorize.Net only allows credits against transactions performed through them within the last six months. The only exception is if you have been granted approval for unlinked refunds. If you have signed the proper paperwork for that and someone has your API login and transaction key, they can then process credits to their own credit cards. The only way for you to catch this would be to monitor your statements carefully.
To help mitigate this you should change your transaction key immediately upon completion of the work they perform for you. That would render the key they have useless after 24 hours.