I have two different websites created with the same web template on two different web applications.
On one, my visitor membershipgroupid = 6
On the other, 7.
How is this possible?
Thank you in advance.
Since different web applications as a rule have different databases, they do not share IDs for web- and site-specific objects. If there were 5 records in a table X in database A and 6 records in a table X in database B, then a new record created in table X of database A would get the incremented ID 6, while a new record created in table X of database B would get the ID 7.
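For illustration only (this is just the counter arithmetic described above, not SharePoint code):
# Two independent databases each keep their own identity counter,
# so "the next id" they hand out differs between them.
table_x_in_db_a = [1, 2, 3, 4, 5]      # 5 existing records -> next id will be 6
table_x_in_db_b = [1, 2, 3, 4, 5, 6]   # 6 existing records -> next id will be 7
print(max(table_x_in_db_a) + 1)  # 6
print(max(table_x_in_db_b) + 1)  # 7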
I am working with Power Apps and SharePoint.
I am trying to get the last (maximum) employee ID from SharePoint and store it in Power Apps.
SharePoint
column name: EmpNo
column datatype: Number
Power Apps canvas
Default - max length // but it gives an error
In SharePoint the last employee ID is 3; I want 4 to be stored automatically in the EmpNo textbox.
What I'd do is get the highest number from your data source to begin with, then increment it.
Max('SP DATASOURCE',EmpNo)+1
Something along those lines in the Default property should achieve what you're after.
I have a list where I am getting all the data, approx. 50 columns, including foreign-key table data, through a model.
My second list contains only column names as rows, approx. 10 column names (it can be more depending on the condition).
Now I want to bind a table using the column names from the second list, with the data coming from the first list, because those columns are available in the first list. So I want only those columns, using MVC. Is it possible?
Note: I am using this for view settings, which differ for the different views we are creating. So please tell me how I can map the rows of the second list to the columns of the first list using an MVC query.
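For what it's worth, here is a rough sketch of the projection being asked about, written in Python with made-up data (the real code would live in the MVC controller or view; the column names here are purely illustrative):
# first list: full records (about 50 columns in the real case)
rows = [
    {"Id": 1, "Name": "Alice", "Dept": "HR", "City": "Pune"},
    {"Id": 2, "Name": "Bob", "Dept": "IT", "City": "Delhi"},
]
# second list: the column names chosen for this particular view
visible_columns = ["Name", "City"]
# keep only the columns from the first list whose names appear in the second list
projected = [{c: row[c] for c in visible_columns if c in row} for row in rows]
print(projected)  # [{'Name': 'Alice', 'City': 'Pune'}, {'Name': 'Bob', 'City': 'Delhi'}]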
I have recently started exploring Cassandra for our project, and I have a doubt related to Cassandra data modelling. Let's take the example of a Google web analytics product: Google collects/aggregates information about URL statistics in different dimensions and over different time ranges. Take a simple example of collecting the access count of www.yahoo.com from desktop browsers vs mobile browsers for a period of 30 days (daily sums). We can model this in 2 ways:
One row key per browser type for the same url, with each day as a column name, using an aggregate counter column type
One generic row key for the url, with a composite column key of day, url and browser type, using an aggregate counter column type
What are the pros and cons of each approach?
Long column names are not a good idea, as they will be stored repeatedly in each row.
You should use date, url, platform, day as the primary key, and one column for the count. This way, if you need all days of the month, you specify date, url, platform.
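As a sketch only, here is one possible reading of that key layout via the Python driver; the keyspace, table name, and exact partitioning are assumptions (partitioning by url and platform, clustering by day, with a counter column for the count):
from cassandra.cluster import Cluster

session = Cluster(["127.0.0.1"]).connect("analytics")  # assumed keyspace

# Counter table partitioned by (url, platform) with one clustered row per day,
# so a month of daily counts for one url/platform is a single-partition read.
session.execute("""
    CREATE TABLE IF NOT EXISTS url_hits (
        url      text,
        platform text,     -- e.g. 'desktop' or 'mobile'
        day      date,
        hits     counter,
        PRIMARY KEY ((url, platform), day)
    )
""")

# Each page view increments the counter for that url/platform/day.
session.execute(
    "UPDATE url_hits SET hits = hits + 1 WHERE url=%s AND platform=%s AND day=%s",
    ("www.yahoo.com", "desktop", "2015-06-01"),
)

# 30 days of daily sums for one url and platform come from one partition.
rows = session.execute(
    "SELECT day, hits FROM url_hits WHERE url=%s AND platform=%s",
    ("www.yahoo.com", "desktop"),
)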
I have a few tables in my data source in Power Pivot:
One is a "Calendar" table from Azure Marketplace (http://datamarket.azure.com/dataset/boyanpenev/datestream) that I'm using for user-friendly date representation. A Clients table has basic information about clients (ClientId, Client Name, Recent Address). Each client may have multiple Accounts (AccountId, Account Name, ClientId). And I have an AccountActivity table (AccountId, Date, Income). I've set up relations between the tables accordingly.
I need to build a resulting table that is based off AccountActivity and will have Date (Month, Year), Sum of Incomes, # of accounts and # of clients. I was able to get everything except # of clients. Once I add # of clients into the table, it starts to complain that there is no relationship, and instead of showing the proper # of clients it shows the total # of clients, which has no relation to the accounts. Is there any way of making this work, or will I have to add a ClientId column to AccountActivity?
You shouldn't need to add the ClientID to your source. You can just make a calculated column in your Power Pivot model. I recreated what I think you have.
I made 3 tables with fake data that include 4 clients and 6 accounts.
I added them to my data model, then added the DateStream table and created the relationships.
Then I:
Went to the activity table in my model and added a calculated column [ClientID] with the formula =RELATED(Client[Client ID]).
Created a calculated measure [# Clients]:=DISTINCTCOUNT(Activity[ClientID])
Created a calculated measure [# accounts]:=DISTINCTCOUNT([AccountID])
Created a calculated measure [Sum of Income]:=SUM([Income])
Then I created a pivot table. These calculated measures seem to work across all of your dimensions.
What do you recommend in the following scenario:
I have an Azure table called Users whose columns are:
PartitionKey
RowKey
Timestamp
FirstName
LastName
Email
Phone
Then there are different types of tasks for each user; let's call them TaskType1 and TaskType2.
Both task types have common columns but also have type-specific columns, like this:
PartitionKey (this is the same as the user's PartitionKey in Users, to find all tasks belonging to one user)
RowKey
Timestamp
Name
DueDate
Description
Priority
then TaskType1 has additional columns:
EstimationCompletionDate
IsFeasible
and TaskType2 has its own specific column:
EstimatedCosts
I know I can store both types in the same table and my question is:
If I use different tables for TaskType1 and TaskType2, what will be the impact on transaction costs? My guess: if I have 2 tables, one per task type, and I issue a query like "get me all tasks whose PartitionKey equals a specific user's PartitionKey from the Users table", then I will have to run 2 queries, one per type (because users can have both task types), which means more transactions. Instead, if both task types are in the same table, it will be just 1 query (within the limit of 1000 entities per transaction before pagination), because I will get all the rows where the PartitionKey is the user's PartitionKey, so the partition is not split, which means 1 transaction, right?
So did I understand it right that I will have more transactions if I store the tasks in different tables?
Your understanding is completely correct. Having the tasks split into 2 separate tables would mean 2 separate queries, and thus 2 transactions (let's keep more than 1000 entities out of the equation for now). Though transaction cost is one reason to keep them in the same table, there are other reasons too:
By keeping them in the same table, you would be making full use of the schema-less nature of Azure Table Storage.
2 tables means 2 network calls. Though the service is highly available, you would need to consider the scenario where the call to the 1st table succeeds but the call to the 2nd table fails. How would your application behave then? Would you discard the result from the 1st table as well? Keeping them in just one table saves you from this scenario.
Assume you have a scenario in your application where a user could subscribe to both Task 1 and Task 2 simultaneously. If you keep them in the same table, you can make use of Entity Group Transactions, as both entities (one for Task 1 and the other for Task 2) will have the same PartitionKey (i.e. the user id). If you keep them in separate tables, you would not be able to take advantage of entity group transactions.
One suggestion I would give is to have a "TaskType" attribute in your Tasks table. That way you would have an easier way of filtering tasks by type as well.
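To illustrate, here is a rough sketch using the azure-data-tables Python package; the table name, TaskType values, RowKeys and connection handling are illustrative assumptions, not part of the question:
from azure.data.tables import TableClient

# One "Tasks" table holds both task types, distinguished by a TaskType attribute.
tasks = TableClient.from_connection_string("<connection string>", table_name="Tasks")

user_pk = "user-123"  # same PartitionKey as this user's row in Users (assumed value)

# Entity group transaction: both inserts share the PartitionKey, so they can be
# submitted as one atomic batch.
tasks.submit_transaction([
    ("create", {"PartitionKey": user_pk, "RowKey": "task-1", "TaskType": "TaskType1",
                "Name": "Estimate", "IsFeasible": True,
                "EstimationCompletionDate": "2015-07-01"}),
    ("create", {"PartitionKey": user_pk, "RowKey": "task-2", "TaskType": "TaskType2",
                "Name": "Budget", "EstimatedCosts": 1500.0}),
])

# One query (one transaction per page of up to 1000 entities) returns both task types.
all_tasks = tasks.query_entities("PartitionKey eq @pk", parameters={"pk": user_pk})

# The TaskType attribute then makes filtering by type within the same table easy.
type1_tasks = tasks.query_entities(
    "PartitionKey eq @pk and TaskType eq @t",
    parameters={"pk": user_pk, "t": "TaskType1"},
)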