I am trying to import all customers into the system, but it keeps giving me this error.
What does View name: Group mean?
I created the Customer Access for all customers, a Data Provider with a Data Object, and an Import Scenario.
Is there any other way to import all the customers into the system so that I can implement row-level security?
This is the Import Scenario page:
You should first set the Group's ID and only then the Customers' IDs.
Here is a working example:
Here is the Data Provider:
And finally the Import Scenario:
Also, before importing records, check that all the Customers in the import data already exist in the system; otherwise you will get errors on those Customers.
Currently I have the following scenario. I have a report in Power BI which reads from a dataset which has data of all companies. In my ASP .NET MVC application the user will select the company for which to display the report and with Power BI Embedded the application filters the report by the ID of the company through the embed config defined in JS (filter parameters passed from server).
I am using app owns data approach where I have a master account and the embed token is generated for the master account.
The user accessing the report does not have access rights to all companies and this is being handled server-side. With this approach however, the user can easily alter the embed config in JS and display the report for a company which he is not authorized to access.
I looked into row-level security and I found the following approach https://community.powerbi.com/t5/Developer/PowerBi-Embedded-API-Works-with-RLS/td-p/231064 where there is a role for every company and the embed token is generated for that particular company. This would be an ideal approach, but in my scenario the companies are not pre-defined and can be created at any time. Therefore, I would need to create a role per company. This, however, cannot be achieved programmatically, as Power BI does not provide a means to automate role creation.
The only approach I can think of is to clone a report for each new company and create a dataset specific to that report which will only have the data for that particular company. Then the generated embed token will only be valid for that particular report.
Has anyone else experienced this dilemma? Any suggestions on what I should do in such a scenario?
You can still use RLS, but without a role per company. Use the USERPRINCIPALNAME() DAX function to find out which user is viewing the report. In the database, make a table that specifies which company can be seen by which user and add it to your model. Then use RLS to filter this table to only the row (or rows) where the user is the current one (this is where USERPRINCIPALNAME() comes into play), and let the relationship between this table and your data tables filter out what should not be seen. This way there are no JavaScript filters at all, so nothing can be changed by a malicious user.
See Using the username() or userprincipalname() DAX function.
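A minimal sketch of that row filter, defined on the user-to-company mapping table; the table name UserCompanyAccess and the column UserEmail are illustrative, not from the original post:

```dax
// RLS role filter on the UserCompanyAccess mapping table:
// keep only the rows belonging to the signed-in user.
[UserEmail] = USERPRINCIPALNAME()
```

With a relationship from this mapping table to your company dimension (with the filter direction propagating toward the data tables), the remaining rows automatically restrict what the user can see.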
I have a customer that owns a carpet cleaning business, and we have all of his franchisees' data in a multi-tenant database model that we would like to move into a data warehouse in Snowflake. I don't want to build a separate database for each customer, because then I would have to keep each database up to date with the latest data model; I want one data model to rule them all. I keep a tenant ID on each record to identify the franchisee's data.

I want to give each franchisee a set of credentials so they can hook up their analytics tool of choice (Tableau, Power BI, etc.) and only get access to the rows that are applicable to them. Is there a way to secure the rows they see in each table based on their user? In other words, I am looking for some sort of row-level access control similar to profiles in Postgres. Are there any better methods for handling this type of scenario? Ultimately I want to maintain and manage the least number of ELT jobs and data models.
This is the purpose of either Secure Views or Reader Accounts.
We are using both, and they have about the same technical hassle/setup costs. But we are using an internal tool to build/alter the schemas.
To expand on Simeon's answer:
You could have a single Snowflake account and create a Snowflake role & user for each franchisee. These roles would have access to a Secure View which uses the CURRENT_ROLE / CURRENT_USER context functions as in this example from the Snowflake documentation.
You'd have to have a role -> tenant ID "mapping table" which is used in the Secure View to limit the rows to the correct franchisee.
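A minimal sketch of that setup; the table names (franchise_data, role_tenant_map) and the role name (franchisee_a) are illustrative:

```sql
-- Maps each franchisee's Snowflake role to its tenant ID.
CREATE TABLE role_tenant_map (role_name STRING, tenant_id INT);

-- The secure view only returns rows whose tenant is mapped
-- to the role running the query.
CREATE SECURE VIEW franchise_data_v AS
SELECT d.*
FROM franchise_data AS d
JOIN role_tenant_map AS m
  ON m.tenant_id = d.tenant_id
WHERE m.role_name = CURRENT_ROLE();

-- Each franchisee's role gets the view, never the base table.
GRANT SELECT ON VIEW franchise_data_v TO ROLE franchisee_a;
```

Because the view is SECURE, franchisees connecting with their own role from Tableau or Power BI cannot see the view definition or bypass the tenant filter.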
I'm building an import scenario so CSV's from an external system can be imported into Sales Orders.
The external system has a list of customers, with their own ids.
Acumatica also has a list of customers, and a custom field that stores the external systems customer id.
Is it possible to configure the import scenario to designate the customer by matching by this custom field?
i.e., on the Order Summary target object, rather than setting Customer -> Customer ID, can I use Customer -> Ext. Customer ID?
Is it possible (programmatically or via the UI) to assign custom roles to BigQuery datasets? We would like to have access controls at a more granular level within a project, but I cannot find any indication that these are supported, or that they are not supported. The "share dataset" UI in BigQuery does not offer an obvious way to specify which roles have access.
This is the best I could do so far, but it throws an error about the entity_id:
for dataset in datasets:
    dataset_ref = bigquery.Dataset(dataset, frankie_client)
    entry = bigquery.AccessGrant(
        role='projects/xxxxxx/roles/custom_role1',
        entity_type='specialGroup',
        entity_id='projects/xxxxxx/roles/custom_role1')
    assert entry not in dataset_ref.access_grants
    entries = list(dataset_ref.access_grants)
    entries.append(entry)
    dataset_ref.access_grants = entries
    dataset = dataset_ref.update()  # API request
    assert entry in dataset.access_entries
By the way, Google's API is incredibly unstable and poorly documented... If anyone knows how to do this, it would be much appreciated.
*UPDATE AT THE END.
There is a long documentation page describing all the roles, and it discusses custom roles as well.
https://cloud.google.com/bigquery/docs/access-control
I think what you missed is that you need to apply a group to a dataset. If you set up the group correctly, you can do a lot of flexible things.
Check out the example scenarios section on the linked page. It has your use case explained.
Read and write access to data in a dataset
CompanyProject is a project that includes dataset1 and dataset2. AnalystGroup1 is a group of data scientists who work only on dataset1 and AnalystGroup2 is a group that works only on dataset2. The data scientists should have access only to the dataset that they work on and should not be able to run queries.
Full access to a dataset
On dataset CompanyProject:dataset1 Add AnalystGroup1 to the predefined role bigquery.dataOwner.
On dataset CompanyProject:dataset2 Add AnalystGroup2 to the predefined role bigquery.dataOwner.
In addition to the pre-defined roles, BigQuery also supports custom roles. For more information, see Creating and Managing Custom Roles in the Cloud IAM documentation.
To add more: on the IAM page, it is much easier to see the roles an assigned user has. And the custom role you created is grouped under the Custom label.
Update
The documentation has been improved following my issue ticket.
Note: Currently, you cannot apply a custom role to a dataset. For more information on dataset access controls, see Assigning access controls to datasets.
My team is in the process of migrating from another CRM system to Acumatica. We have several Microsoft SQL Server tables that we are trying to import into Acumatica using a number of Import Scenarios. Currently, we are reusing the same Data Provider for all of the Import Scenarios.
I notice that the database credentials (username, password, etc.) are currently duplicated, in plain text, across each of the Import Scenario XML files in our customization project. We want to avoid committing these credentials to our repo on Github.
One possibility we've considered is to have a single "config" file with the credentials, add it to gitignore, then commit an encrypted version to the repo. Is there a way to do this?
Is there another convenient way to encrypt credentials in Import Scenarios?
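A generic sketch of the "gitignore the plaintext, commit an encrypted copy" pattern described above; it is independent of Acumatica, and the file names and passphrase handling are illustrative:

```shell
# Create a placeholder plaintext credentials file for the example.
printf 'user=sa\npassword=secret\n' > credentials.config
export CONFIG_PASSPHRASE='example-passphrase'

# Keep the plaintext file out of version control.
echo 'credentials.config' >> .gitignore

# Commit only an encrypted copy.
openssl enc -aes-256-cbc -pbkdf2 -salt \
  -in credentials.config -out credentials.config.enc \
  -pass env:CONFIG_PASSPHRASE

# After cloning, teammates decrypt locally.
openssl enc -d -aes-256-cbc -pbkdf2 \
  -in credentials.config.enc -out credentials.decrypted \
  -pass env:CONFIG_PASSPHRASE
```

Tools such as git-crypt or git-secret automate the same idea with per-file filters, so the encrypt/decrypt steps happen transparently on commit and checkout.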
Please have your Acumatica partner create a support case with this info and a screenshot to see if this can be considered. I am not sure why the system still requires a username and password if authentication is set to Integrated Windows. Perhaps you can also investigate whether this validation can be overridden with a customization of the provider.