Salesforce - Are Schema class describe methods cached at the Salesforce end?

I have a requirement to check permissions on Salesforce custom objects & fields before every DML operation. As a solution, I have implemented my own versions of the DML statements, such as Util.insert, Util.update, etc.
In those methods, I do the permission checks using Salesforce 'Schema Describe' methods.
I was trying to find information about the internal workings of these methods: does Salesforce cache this information for a single execution context, or should caching be handled at my end for better performance?
I could not find any information about this in the official documentation.
I'd appreciate any help on this.
Thanks.

You can do it manually if you want, but there is this project on GitHub that uses an access controller to test for FLS and CRUD issues. We implemented it and it works very well.
https://github.com/CodeScience/CSUtils/wiki/Using-Security-Coding-Library-(ESAPI)
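If you do roll your own checks, here is a minimal Apex sketch of what a describe-based CRUD/FLS check could look like, with the describe results memoized in a static map (the Util method and exception names are hypothetical; since the caching behaviour isn't documented, memoizing defensively for the transaction costs little):

    public class Util {
        public class CrudException extends Exception {}

        // Describe results memoized per transaction; statics live for a
        // single execution context, so this acts as a per-request cache.
        private static Map<String, Schema.DescribeSObjectResult> describeCache =
            new Map<String, Schema.DescribeSObjectResult>();

        private static Schema.DescribeSObjectResult describeFor(SObject record) {
            String typeName = String.valueOf(record.getSObjectType());
            if (!describeCache.containsKey(typeName)) {
                describeCache.put(typeName, record.getSObjectType().getDescribe());
            }
            return describeCache.get(typeName);
        }

        public static void insertRecords(List<SObject> records) {
            for (SObject record : records) {
                Schema.DescribeSObjectResult d = describeFor(record);
                if (!d.isCreateable()) {
                    throw new CrudException('No create access on ' + d.getName());
                }
                // FLS check on the fields actually populated on the record.
                Map<String, Schema.SObjectField> fields = d.fields.getMap();
                for (String fieldName : record.getPopulatedFieldsAsMap().keySet()) {
                    Schema.SObjectField f = fields.get(fieldName.toLowerCase());
                    if (f != null && !f.getDescribe().isCreateable()) {
                        throw new CrudException('No create access on field ' + fieldName);
                    }
                }
            }
            insert records;
        }
    }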

Is there a way to restrict CRUD operations on Notes/Domino data using an alternative application?

We have a (super)user who has been using VBA in an Excel spreadsheet to create and manipulate documents in a Domino database application.
The user has 'Editor' access to the application, and should normally be able to create/edit the document contents.
They have been, however, creating documents using VBA. That logic doesn't consider such important document fields as Readers, Authors, etc.
We would like to restrict access to all Domino data so that it can only be created/modified using an IBM Notes client.
I have tried looking through the ECL, but that only restricts what 'others' do.
Since he has his Notes client available, the external logic is using his normal Notes credentials.
I have tried setting a hidden field with the Notes client and looking for that in the QuerySave event of the form design.
Unfortunately, the external code pays no attention to the form events and the save is executed despite the missing field.
Similarly, the Database Script has no bearing on the execution of external logic.
I was going to inspect the client version upon database open and restrict activity based on a variance in the version (I was hoping!).
I have de-selected the 'Don't prompt for a password...' option in the user security preferences, but that has no effect at all (suspected as much!).
The ONLY thing I have been able to suggest is to hide the database design... That's really only designed to thwart a user's efforts to understand the underlying design.
It won't prevent them from creating hundreds of thousands of documents with a fictitious form and throwing the app into disarray.
I'm hoping that there is a solution out there that I'm missing.
The user has been instructed not to undertake such activity in the future.
We were lucky that there really wasn't any malicious intent - "Just trying to be more efficient" we're told.
The effects of the activity have been remedied, and the user has been warned.
What I want to know is... how can I prevent this from ever happening again?
The circumstances are rare I know, but I would've thought there'd be a means of restricting the platforms used to manage Notes/Domino data.
Is there a way to ensure no external applications are able to access, create or modify Notes database documents?
I am currently focussing on access to Notes via COM.
I thought that unregistering 'nlsxbe.dll' from the registry would prevent such activity - it has not.
I also tried removing the .TLB files from the Notes executable folder - removal of 'notes32.tlb' and 'domobj.tlb' have no effect at all. Removal of 'ltsci3.tlb' screws everything up (as expected!).
I'm really having no luck at all - Any/all suggestions would be most appreciated!
I'm not aware of any way to detect that a connection has been made by standalone code instead of by the Notes client, but you do have two paths available to you:
1. A Domino server add-in that prevents documents from being saved in that particular database if certain criteria aren't met.
2. An agent that is triggered to run shortly after documents are saved or modified in that particular database. The agent code can delete (or modify, if you prefer) the documents that don't conform to the required criteria.
The server add-in route would normally require coding in C, but thanks to the OpenNTF Trigger Happy project, the hard part is done for you, and the rest can be filled in with either LotusScript or Java agent code that is "triggered" by the pre-written C code. You will need some basic knowledge of how the Notes Extension Manager interface works, but once you get past that and write your agent code to enforce your data consistency/integrity requirements, the only real hurdle is your willingness to host open source code on your server.
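For the agent route, here is a minimal LotusScript sketch of the enforcement logic, assuming the agent runs on new & modified documents (or is invoked by the Trigger Happy add-in) and that a missing Authors item named 'RequiredAuthors' is the criterion - both assumptions are made up for illustration. It repairs the document rather than deleting it, but the delete variant is shown too:

    Sub Initialize
        Dim session As New NotesSession
        Dim db As NotesDatabase
        Dim docs As NotesDocumentCollection
        Dim doc As NotesDocument
        Dim item As NotesItem

        Set db = session.CurrentDatabase
        ' Documents saved/modified since the agent last processed the database.
        Set docs = db.UnprocessedDocuments

        Set doc = docs.GetFirstDocument
        While Not doc Is Nothing
            If Not doc.HasItem("RequiredAuthors") Then
                ' Repair: stamp a proper Authors item onto the document ...
                Set item = doc.ReplaceItemValue("RequiredAuthors", doc.Authors)
                item.IsAuthors = True
                Call doc.Save(True, False)
                ' ... or, if you prefer, delete the offender instead:
                ' Call doc.Remove(True)
            End If
            Call session.UpdateProcessedDoc(doc)
            Set doc = docs.GetNextDocument(doc)
        Wend
    End Sub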
There may be two other possibilities, but I can't say if either will solve or deal with the issue...
In the ECL you can disable 'COM' access for the user (also known as OLE or ActiveX automation), since VBA access is usually via COM. This has stopped Notes from allowing external COM access for me, but I don't know whether it also prevents VBA from using Notes. Additional steps may be needed to enforce the ECL and apply it to the specific users.
There is an (old) notes.ini setting, 'DisableExternalApps' (or something similar), that disables some external access. This can affect many things (DDE/prompts/@DbLookups), but again I don't know whether it will disable VBA/COM, and it's not user-specific, but server-wide.
I would have thought that removing the nlsxbe.dll or restricting access to execute it might work, but the ECL may be the best bet.
Alternatively, rather than add hidden flags to your design (and the documents), and then delete the offending documents, your agent could apply the correct author/reader fields to the documents instead.
Very tricky. Did you find a better solution?

PowerBI Embedded API functionality

I have some queries about the Power BI Embedded API - mostly whether certain functionality exists, and if so, where I can find it.
In particular, I am looking to find where, in the APIs (Power BI, Embedded, or Azure), I can complete the following functions:
View the number of Rendered Views within a Workspace Collection
Delete a report/import which has been uploaded
Ability to find out how many renders a single report would create - I would find this especially useful given it is billable per render.
Additionally, I am looking for the ability to save a rendered chart to an image or PDF, and for responsiveness in the dashboards.
I do realise it's still in public preview; however, has anyone managed to find the above functionality within the current APIs?
Thanks
David
View Number of Rendered Views within a Workspace Collection:
Make a POST request to the following ARM API with Content-Length: 0:
https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroup}/providers/Microsoft.PowerBI/workspaceCollections/{workspaceCollectionName}/billingUsage?api-version=2016-01-29
Delete import:
Make a DELETE request to the following Power BI API:
https://api.powerbi.com/beta/collections/{workspaceCollectionName}/workspaces/{workspaceId}/datasets('{datasetKey}')
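A minimal C# sketch of both calls above with HttpClient (everything in braces plus the tokens/keys are placeholders; I'm assuming the ARM call wants an Azure AD bearer token, and the collection-level call wants the workspace collection access key sent as an 'AppKey' Authorization header):

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    class PowerBIEmbeddedCalls
    {
        static async Task Main()
        {
            var armToken = "<azure-ad-bearer-token>";      // placeholder
            var accessKey = "<workspace-collection-key>";  // placeholder

            using (var client = new HttpClient())
            {
                // 1. Rendered views (billing usage) via ARM: POST with Content-Length: 0.
                var usageUrl = "https://management.azure.com/subscriptions/{subscriptionId}" +
                    "/resourceGroups/{resourceGroup}/providers/Microsoft.PowerBI" +
                    "/workspaceCollections/{workspaceCollectionName}/billingUsage?api-version=2016-01-29";
                var usage = new HttpRequestMessage(HttpMethod.Post, usageUrl);
                usage.Headers.Add("Authorization", "Bearer " + armToken);
                usage.Content = new ByteArrayContent(new byte[0]); // forces Content-Length: 0
                var usageResponse = await client.SendAsync(usage);
                Console.WriteLine(await usageResponse.Content.ReadAsStringAsync());

                // 2. Delete an uploaded import (dataset) via the Power BI API.
                var deleteUrl = "https://api.powerbi.com/beta/collections/{workspaceCollectionName}" +
                    "/workspaces/{workspaceId}/datasets('{datasetKey}')";
                var del = new HttpRequestMessage(HttpMethod.Delete, deleteUrl);
                del.Headers.Add("Authorization", "AppKey " + accessKey);
                var deleteResponse = await client.SendAsync(del);
                Console.WriteLine((int)deleteResponse.StatusCode);
            }
        }
    }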
Number of renders a single report would create:
There is no API for this yet. Consider making the suggestion at https://ideas.powerbi.com/.

Register Callback endpoint with NetSuite CRM

We have a requirement to update/notify enterprise applications about any (or specific) data changes in NetSuite CRM.
I googled but could not find whether NetSuite allows you to register an endpoint so that any update would be pushed to it. Is there any such configuration that NetSuite CRM provides?
Aside from purchasing an off-the-shelf integration package, there are two main ways to accomplish this. Neither of them is pretty.
Polling SuiteTalk. Poll SuiteTalk at a specified frequency for any updated records. I've implemented a Ruby gem that provides most of this functionality out of the box: https://github.com/NetSweet/netsuite_rails
Writing User Event SuiteScripts. These let you trigger a script to run when a record changes; that script can call an external URL and pass along the NS internalId & type of the record that was updated (a sketch follows below).
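For the second option, a minimal SuiteScript 1.0 sketch of such a User Event script (the endpoint URL and JSON payload shape are assumptions - point it at whatever your middleware expects):

    // Deploy as an afterSubmit User Event script on each record type to watch.
    function notifyExternalSystem(type) {
        if (type == 'delete') {
            return; // skip deletes if you only care about creates/updates
        }
        var payload = {
            recordType: nlapiGetRecordType(),
            internalId: nlapiGetRecordId(),
            event: type
        };
        try {
            // Hypothetical endpoint on the enterprise application's side.
            nlapiRequestURL('https://example.com/netsuite/webhook',
                JSON.stringify(payload),
                { 'Content-Type': 'application/json' },
                null, 'POST');
        } catch (e) {
            // Don't block the save if the notification fails; just log it.
            nlapiLogExecution('ERROR', 'Notify failed', String(e));
        }
    }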
sudhirk,
There's no mechanism for what you ask. NetSuite allows for a lot of different ways to configure 'incoming' data (i.e. RESTlets, SuiteTalk (web services), Suitelets, etc.). However, support for building a 'push' endpoint is nearly non-existent. If your external systems have simple HTTP GET options, you could use nlapiRequestURL to push data. Otherwise, you're looking at a more formal integration project. I do this daily in my job, and we prefer to use Boomi as the middleware.
Hope this helps!

Drupal: should a custom searchable user profile be based on nodes or a custom DB?

I have to work on a Drupal project to create user profiles, with some special fields, for some specific users on the website. The users can have different roles. The main point is search: the user profiles must be searchable by the provided criteria.
I have two options:
1. Use nodes (with content_profile).
2. Create my own form and tables.
My question is: is it possible to create a separate search mechanism for a custom database, and is there a way to cache the search results? Or should I use the node-based approach? Please advise if you have any ideas on this.
Thanks.
Yes, it is possible to create a search mechanism by using Views and exposing the custom table to Views via the API. There is a blog post on this here: http://blog.menhir.be/2008/10/22/expose-database-fields-to-views-in-a-custom-drupal-module/ and there is more info in the Advanced Help module (http://drupal.org/project/advanced_help) - install it and look through the Views documentation. You could then also use Views caching.
A custom table and fields would be my preferred method if you have a lot of users, as the profile tables can get pretty big (this may not be an issue for you). Alternatively, you could use the content_profile module (http://drupal.org/project/content_profile) and possibly save yourself some work!
If you wanted to perform a completely custom search without Views, you'd probably need to implement both the search and the caching yourself if you went the custom field/table route, but you'd gain a lot of flexibility.
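If you do go the custom table route, the Views integration is essentially one hook. A minimal Drupal 6 / Views 2 style sketch (the module, table, and field names are made up):

    <?php
    // mymodule.module - expose a custom profile table to Views.
    function mymodule_views_data() {
      $data = array();

      // Declare the table as a base table so Views can query it directly.
      $data['mymodule_profile']['table']['group'] = t('Custom profiles');
      $data['mymodule_profile']['table']['base'] = array(
        'field' => 'pid',
        'title' => t('Custom profile'),
        'help' => t('Searchable custom profile records.'),
      );

      // A text field with a string filter, so it can be an exposed search filter.
      $data['mymodule_profile']['full_name'] = array(
        'title' => t('Full name'),
        'help' => t('The full name stored on the profile.'),
        'field' => array('handler' => 'views_handler_field'),
        'filter' => array('handler' => 'views_handler_filter_string'),
        'sort' => array('handler' => 'views_handler_sort'),
      );

      return $data;
    }

Views' own time-based caching can then be switched on per view in the view's settings.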

MS CRM and BizTalk 2010 Integration

I am planning to start a POC for MS CRM and BizTalk 2010 Integration.
Before that, I wanted to know: does anybody use BizTalk 2010 for integration with MS CRM?
We're using BizTalk 2010 to call into Microsoft Dynamics CRM 2011 Organization service.
There are basically two ways to do this, but I'm committed to finding others.
The first way is to use the BizTalk schemas that ship with the SDK, along with an external C#-based class library helper. This scenario is pretty well covered on the internet. Note that it will not allow BizTalk to call into the CRM early-bound classes (Account, etc.); it will only allow using the generic CrmEntity object, which makes dealing with the mapping a painful experience.
The external helper is necessary to deal with the LiveID federation idiosyncrasies.
This first method has the advantage of being simple. But you cannot use native CRM types from BizTalk.
The second way is to somehow solve the above problems, at least partly. First, it involves building a WCF façade that exposes native early-bound CRM objects (such as Account, etc.) and that deals with LiveID federation.
As generated, the early-bound classes are not serializable, so they can't be part of a WCF interface (and service). This can be solved by decorating the generated classes and their properties with the appropriate DataContract/DataMember attributes. Also, read-only properties need an extra empty set {} added to them. Please note that there is a huge number of such (simple) changes to make in the generated classes. Fortunately, as the file is generated, the syntax is consistent and a couple of simple RegExes will do, as illustrated below.
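For illustration, the shape of each change (the entity and property names are made up): the class gets [DataContract], members get [DataMember], and read-only properties gain an empty setter so the DataContractSerializer can handle them:

    using System.Runtime.Serialization;

    [DataContract]
    public partial class Account
    {
        private string accountId;

        [DataMember]
        public string Name { get; set; }

        [DataMember]
        public string AccountId
        {
            get { return this.accountId; }
            set { } // added: generated as read-only, but the serializer needs a setter
        }
    }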
On the BizTalk side, you will consume the WCF façade's metadata in order to generate BizTalk schemas. Unfortunately, you will end up with huge multi-megabyte files and cross-dependent schemas.
So, first, you have to break the circular dependencies. In my case, I had to add an extra schema to hold shared complex types that were used by both the "contract" and the "metadata" schemas.
Next, you cannot easily use the huge generated schemas in your maps. First, merely opening the map (or the schema alone) will take ages. Second, the compiler will choke and Visual Studio will crash.
To solve this, you need to manually change the GenerateDefaultFixedNodes attribute in your map's .btm XML file.
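The .btm file is plain XML; the attribute sits on the root mapsource element, and the usual workaround for maps over very large schemas is setting it to "No" (a sketch only - the other attributes of the element are omitted here):

    <mapsource Name="MyMap" GenerateDefaultFixedNodes="No">
      <!-- ... rest of the map unchanged ... -->
    </mapsource>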
What I recommend, however, is to use a simplified version of the generated schemas, where you only include nodes and structures that are part of the mapping. Since most nodes are optional, the resulting XML request to the WCF façade will end up being the same.
The advantage of this second method is to be able to deal with native CRM types from BizTalk. But the implementation might sound complicated at first. With proper automation, in practice it works pretty well, even in the face of changes on the CRM side.
Neither method, however, feels like "native" BizTalk integration. That's why I'm working on finding an alternate way, perhaps by building a dedicated custom binding, but so far without success.
See my question here.
Hope this helps.
