I have a customization package that adds new columns to several main tables and also several new tables. The new tables are in SQL scripts and work fine. The new columns for existing tables are currently in SQL scripts as well. I did this a couple of years ago and I honestly do not remember how I ended up there. I recently ran into a problem with the Carrier table in 2017 R2. Apparently, columns had been added to the base table, and publishing my project removed them. I removed the SQL script and re-added the custom column through the DAC object, and now it seems to work fine. I thought I had read somewhere that it was best to convert it to a main SQL script.
I have other columns that I have added to SOOrder and SOShipment. I want to make sure I don't have this problem again so I want to confirm the correct method for customizing tables.
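For what it's worth, re-adding a column through the DAC usually comes down to a cache extension like the minimal sketch below. This assumes a simple string field; the field name, length, and display name are illustrative, not your actual column.

// Hypothetical DAC extension adding a custom column to the Carrier table.
// The UsrMyField name and its attributes are placeholders for illustration.
using PX.Data;
using PX.Objects.CS;

public class CarrierExt : PXCacheExtension<Carrier>
{
    public abstract class usrMyField : IBqlField { }

    // The "Usr" prefix marks the column as customization-owned, so publishing
    // the project manages it separately from the base table's own columns.
    [PXDBString(60, IsUnicode = true)]
    [PXUIField(DisplayName = "My Field")]
    public string UsrMyField { get; set; }
}

As far as I can tell, when the column is declared this way the publish process generates and maintains the schema change for the Usr field itself, which is why the hand-written ALTER script is no longer needed and can no longer wipe out columns that Acumatica adds to the base table.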
While working on a Python 3 program, I installed phpMyAdmin on Debian 11 to verify that the database was being updated. Somehow, along the way, my non-joined tables started supplying the values shown in my base table. For example, the series_id column showed the actual series value from the series table. I had not set up any join or view. I closed phpMyAdmin and when I reopened it, it was still showing values from the other table. This database has four tables referenced by "xxxx_id" values in four different columns.
Also, changes to those columns automatically updated the related tables, and the Python program used the values instead of the "xxxx_id". I can use a view to simulate this, but after closing and reopening it is gone until I reactivate the view, and then it does not automatically update the other tables.
Does anyone know how this happened and how to make it happen in my other databases and tables? It would save me a lot of time and programming if I could do this for my other databases.
I have not been able to reproduce this, and adding another column did not have the same effect.
I've recently started using Power BI Desktop. There are some tables with 100+ columns that I want to pull into reports. Currently I'm selecting every column/field one by one; editing the query seems to only change the source I pulled the table from. Is there a way to select and add all columns of a table at once?
It would also be helpful if I could stop the refresh that happens every time I add a column. It gets really slow after a point when adding new columns.
Thanks
I have a table in Excel that is populated by a live connection to an external database. The SQL used to generate this table and refresh it looks similar to this (only with more fields and such):
SELECT DISTINCT shr_pf_student_v.ID
FROM shr_pf_student_v
What my customer wants to be able to do is add additional columns and manually add data that correlates to the ID in each row in Excel. Of course, when the ID data is refreshed, if new rows are added or deleted, the manually added data no longer correlates with the ID it was originally intended to match up with.
I've thoroughly explored the Excel Connection "External Data Properties" options and none solve this issue. I've only found this one solution here: http://www.mrexcel.com/forum/excel-questions/376984-database-query-possibilities.html but after several hours of attempted application, I can't get it to work and I'm not sure that it's possible to do this way either.
Lookup formulas won't work of course because as soon as the data is refreshed, the data looks just like the new refreshed set.
Any new viable options are welcome. I've searched high and low, but I can't help thinking that this is such a valuable process that it must be fairly common and have a solution developed for it somewhere out there.
Many thanks,
Lindsay
I asked a similar yet slightly different question before here. I am using CRM 2013 Online and there are a couple of thousand records in it. The records were created by an import of Excel sheet data that came from a SQL database.
There were some fields in each record that had no data when the first import from Excel was made. The system works in such a way that the Excel sheet is updated from the SQL database periodically, and that data then needs to be imported into CRM Online. As far as I know, and as mentioned in the shared link, you can only bulk update records in CRM by first exporting the data from CRM to Excel and then reimporting the same sheet back into CRM.
Is there a way to bulk update the records in CRM Online if I get data from the client in an Excel sheet?
Right now I compare their Excel sheet to my exported Excel sheet and make the required changes. That works well for a small number of records but is infeasible for bulk record updates. Any ideas?
2) Or is there a way to compare two Excel sheets and make sure that if you copy columns from one sheet to another, the data in the columns ends up in the right rows?
I faced a similar issue with updating records from a CSV file. It is true that SSIS is one way. To solve our problem, I created a .NET executable application which is scheduled to run once per week. The .NET application does the following:
1. Connects to the organisation.
2. Imports all records from the Excel spreadsheet using a pre-existing data map in the CRM organisation.
3. Runs a duplicate detection rule (already existing in the CRM organisation) and brings back all duplicates.
4. Sorts through each duplicate and stores the GUIDs in two arrays: a list of original records and a list of newly imported records (based on the created date of the record).
5. Performs a merge of the old data on the record with the new data (this is performed through the CRM 2013 SDK MergeResponse class; see the sketch below).
6. Now that the original records have been updated with the new data from the spreadsheet, deletes the duplicate records which have just been created and then made inactive by the MergeResponse class in step 5. (For us, we were updating contact info, but wanted the original contact to stay in CRM because they will have cases etc. related to that contact's GUID.)
If you want to go down this route, I suggest looking at the example on the MS website which uses the CRM SDK (\CRM 2013 SDK\SDK\SampleCode\CS\DataManagement\DataImport\ImportWithCreate.cs). That is the sample code which I used to create the web service.
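For anyone following the same route, steps 5 and 6 boil down to roughly the sketch below. It is only an outline against the CRM 2013 SDK; the "contact" entity name and the way the two GUIDs are obtained are assumptions on my part.

using System;
using Microsoft.Xrm.Sdk;
using Microsoft.Crm.Sdk.Messages;

public static class ContactMerger
{
    // 'service' is an IOrganizationService already connected to the
    // organisation; the two GUIDs come from the duplicate detection pass.
    public static void MergeAndClean(IOrganizationService service,
                                     Guid originalId, Guid duplicateId)
    {
        var request = new MergeRequest
        {
            // The original record stays as the master...
            Target = new EntityReference("contact", originalId),
            // ...and the newly imported duplicate is merged into it.
            SubordinateId = duplicateId,
            // Attributes set here are written onto the master during the merge.
            UpdateContent = new Entity("contact"),
            PerformParentingChecks = false
        };

        var response = (MergeResponse)service.Execute(request);

        // The merge deactivates the subordinate record; step 6 then deletes it
        // so only the original contact (and its GUID) remains in CRM.
        service.Delete("contact", duplicateId);
    }
}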
As you have thousands of records, I guess that an SSIS package is the best option for you. It's very efficient in such scenarios.
This is the approach I would use:
Create a Duplication Detection Rule under Settings > Data Management
Download the Import Template
Adjust your source system to generate the spreadsheet in that particular format
Depending on the frequency of your updates I'd look into CRM web services to import your data.
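If the web-service route is the one you pick, connecting and pushing a single update looks roughly like the sketch below (CRM 2013 SDK). The URL, credentials, entity, and field name are placeholders, and in practice you would loop over the rows from your spreadsheet.

using System;
using System.ServiceModel.Description;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Client;

public static class CrmUpdater
{
    public static void UpdateContactEmail(Guid contactId, string newEmail)
    {
        // Placeholder Office 365 credentials and organization service URL.
        var credentials = new ClientCredentials();
        credentials.UserName.UserName = "user@yourorg.onmicrosoft.com";
        credentials.UserName.Password = "********";
        var orgUri = new Uri(
            "https://yourorg.api.crm.dynamics.com/XRMServices/2011/Organization.svc");

        using (var proxy = new OrganizationServiceProxy(orgUri, null, credentials, null))
        {
            IOrganizationService service = proxy;

            // Only the attributes set on the Entity are written; the rest of
            // the record is left untouched.
            var contact = new Entity("contact") { Id = contactId };
            contact["emailaddress1"] = newEmail;
            service.Update(contact);
        }
    }
}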
I have an Excel file and I want to import it into an existing database table using Entity Framework. Right now I first convert the Excel sheet to a DataTable and loop through each row of the DataTable. Each row has an id field; if the id exists in the database table I need to update that row, otherwise I need to insert the row into the database table. I want to use Entity Framework to wrap my loop in one transaction for rollback purposes in case of an error. But I run into a scenario with two rows that have the same id but different values. The first row is checked and added to my entity collection, but the second row cannot correctly update that first row, because the first row has not actually been saved yet, since context.SaveChanges() is only called after the loop. How can I update the previously added row in the entity collection without repeatedly calling context.SaveChanges() inside my loop? Thanks.
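One way to handle the duplicate-id case, sketched below, is to keep a dictionary of the entities already touched during the loop, keyed by id, and update those in memory instead of querying the database again; SaveChanges is still called only once after the loop. The Student entity and MyDbContext here are illustrative assumptions (EF 6), not your actual model.

using System;
using System.Collections.Generic;
using System.Data;
using System.Data.Entity;   // Entity Framework 6

// Illustrative entity and context; the real names come from your own model.
public class Student
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class MyDbContext : DbContext
{
    public DbSet<Student> Students { get; set; }
}

public static class ExcelImporter
{
    public static void Import(DataTable table)
    {
        using (var context = new MyDbContext())
        {
            // Rows already handled in this import, keyed by id, so a second
            // spreadsheet row with the same id updates the tracked entity
            // instead of looking (and failing to find it) in the database.
            var processed = new Dictionary<int, Student>();

            foreach (DataRow row in table.Rows)
            {
                int id = Convert.ToInt32(row["Id"]);

                Student entity;
                if (!processed.TryGetValue(id, out entity))
                {
                    // First time this id appears: check the database.
                    entity = context.Students.Find(id);
                    if (entity == null)
                    {
                        entity = new Student { Id = id };
                        context.Students.Add(entity);
                    }
                    processed[id] = entity;
                }

                // Whether loaded, newly added, or seen earlier in this file,
                // update the same tracked instance in place.
                entity.Name = row["Name"].ToString();
            }

            // A single SaveChanges call runs the whole import in one
            // transaction, so any error rolls everything back.
            context.SaveChanges();
        }
    }
}

Checking context.Students.Local before querying achieves much the same thing, since Local exposes the entities the context is already tracking but has not yet saved.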
I don't think I have done it in the past decade or so, but I have used Microsoft Word's Mail Merge to create the SQL statements I needed (SELECT, INSERT and UPDATE) for each line in an Excel sheet. Once I had the long SQL statement as text, I simply copy-pasted it into the console, the statement was executed, and the job was done. I am confident that there are better ways of doing this, but it worked at the time with limited knowledge and a real need. This answer is probably in the category "don't try this at work, but it is fine to do it at home if it does the job".