Does anyone have any idea how to do bulk deletes from SharePoint 2013 document lists? I need to delete nearly a million records at once. Using the delete option from the list itself is very time consuming and can only delete the first 100 records at a time, which makes cleanup really tedious.
For document libraries it is easier to open the library in Explorer view, where you can select any number of files and delete them easily, but if you have set up a view you can only delete the first 100 files, then move on to the next page, and so on until everything under that view has been deleted.
Is there a simpler way to export and bulk delete, or a good tool for it?
To summarize, I need to export and bulk delete documents from document libraries using views (we use views because we only need to export and delete documents that match certain filter criteria, e.g. a date range of [Today]-100), and also to bulk delete from lists containing millions of records.
Take a look at https://msdn.microsoft.com/en-us/library/microsoft.sharepoint.spweb.processbatchdata.aspx
https://social.technet.microsoft.com/wiki/contents/articles/19036.sharepoint-using-powershell-to-perform-a-bulk-delete-operation.aspx#Example
That's probably the fastest way to perform bulk deletes in your scenario.
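For reference, the pattern described in both links boils down to building a CAML batch of Delete methods and handing it to SPWeb.ProcessBatchData, which avoids deleting items one request at a time. A minimal server-side C# sketch (site URL, list title, and batch size are placeholders; the TechNet article shows the same approach from PowerShell):

    using System.Collections.Generic;
    using System.Text;
    using Microsoft.SharePoint;

    // Deletes every item in a list in batches of 1000 via SPWeb.ProcessBatchData.
    using (SPSite site = new SPSite("http://server/sites/yoursite"))
    using (SPWeb web = site.OpenWeb())
    {
        SPList list = web.Lists["YourList"];
        string listId = list.ID.ToString();

        // Collect the item IDs first so the collection is not modified while enumerating.
        // For a very large list you would page through the IDs with an SPQuery instead.
        List<int> ids = new List<int>();
        foreach (SPListItem item in list.Items)
        {
            ids.Add(item.ID);
        }

        const string method =
            "<Method><SetList Scope=\"Request\">{0}</SetList>" +
            "<SetVar Name=\"ID\">{1}</SetVar>" +
            "<SetVar Name=\"Cmd\">Delete</SetVar></Method>";

        StringBuilder batch = new StringBuilder("<?xml version=\"1.0\" encoding=\"UTF-8\"?><Batch>");
        for (int i = 0; i < ids.Count; i++)
        {
            batch.AppendFormat(method, listId, ids[i]);

            // Flush every 1000 methods so a single request does not grow too large.
            if ((i + 1) % 1000 == 0 || i == ids.Count - 1)
            {
                batch.Append("</Batch>");
                web.ProcessBatchData(batch.ToString());
                batch.Clear();
                batch.Append("<?xml version=\"1.0\" encoding=\"UTF-8\"?><Batch>");
            }
        }
    }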
I have a site in SharePoint with a main folder and subfolders where I store some monthly .csv files. There are a total of three subfolders, each of which gets a new file every month. The files have the same structure every month.
I have managed to merge the files in Power Query, which works well despite slow query times against SharePoint.
Now to my question:
Is it possible for Power BI to automatically include and merge new files when they are uploaded or will I have to manually edit the query monthly?
If this is not possible, what are the best alternatives?
This is straightforward using custom functions.
Import the file once and create a custom function out of your steps
Filter all your import files using the SharePoint connector
Add a column by invoking your custom function
Expand the results
This way you are appending your data files to one table, but I would be surprised if you actually need to merge them.
I have an Excel file containing 50,000 records, and I want to implement a routine job that copies all of these records into a SharePoint list in SharePoint Online and then keeps the list updated on a periodic basis.
As a trial, I set up a Microsoft Flow (Power Automate) workflow and found that it copies only up to 5,000 records.
Is there any other way this could be achieved? Any help would be appreciated.
Because the SharePoint list view threshold is 5,000, you can only copy 5,000 records per view.
As a workaround, you could create 10 views in the list, then use Flow to copy the 5,000 records in each view.
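If maintaining ten views gets awkward, a small script or console app outside Flow is another possibility, since the question asks for alternatives. A rough CSOM sketch, assuming the Excel data has first been saved as CSV, and with the site URL, list title, column mapping, and credentials all as placeholders (newer tenants may need a different authentication method than SharePointOnlineCredentials):

    using System.IO;
    using System.Linq;
    using System.Security;
    using Microsoft.SharePoint.Client;

    // Reads rows from a CSV export of the Excel file and adds them to a SharePoint Online list.
    var password = new SecureString();
    foreach (char c in "placeholder-password") password.AppendChar(c);

    using (var ctx = new ClientContext("https://tenant.sharepoint.com/sites/yoursite"))
    {
        ctx.Credentials = new SharePointOnlineCredentials("user@tenant.onmicrosoft.com", password);
        List list = ctx.Web.Lists.GetByTitle("YourList");

        int pending = 0;
        // Skip(1) drops the CSV header row.
        foreach (string line in File.ReadLines("records.csv").Skip(1))
        {
            string[] cols = line.Split(',');

            ListItem item = list.AddItem(new ListItemCreationInformation());
            item["Title"] = cols[0];   // map the remaining CSV columns to list columns here
            item.Update();

            // Send the adds in batches instead of one round trip per row.
            if (++pending % 100 == 0) ctx.ExecuteQuery();
        }
        ctx.ExecuteQuery();
    }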
I want to delete some unnecessary columns that were created but are not currently being used. How can I delete columns manually in Microsoft Azure Storage Explorer without deleting the table or the table data?
It's mildly annoying, but you can fix the issue by deleting the entire contents of:
AppData\Roaming\StorageExplorer
You'll need to reauthorize the accounts, but that's a mild inconvenience. There is likely a file or two within that directory that actually caches that data, which would be a more surgical approach, but the few most obvious candidates didn't work for me, so I just deleted the whole directory.
It's not possible to delete columns for all entities in a table, since Azure Storage Table is a schema-less database. In other words, the entities within a table can have different properties respectively. You have to query all the entities, remove the useless properties from them one by one and then replace the modified entities back to the table.
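A short sketch of that query-modify-replace loop, assuming the Azure.Data.Tables .NET SDK, with the connection string, table name, and property name all as placeholders:

    using Azure;
    using Azure.Data.Tables;

    // Connection string, table name, and property name are placeholders.
    var client = new TableClient("<connection-string>", "MyTable");

    // Walk every entity, drop the unwanted property, and write the entity back.
    foreach (TableEntity entity in client.Query<TableEntity>())
    {
        if (entity.Remove("ObsoleteColumn"))
        {
            // Use Replace (not the default Merge) so the removed property is actually dropped.
            client.UpdateEntity(entity, ETag.All, TableUpdateMode.Replace);
        }
    }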
Spinning off of the answer from #frigon I performed the following steps:
Close the storage explorer
Delete \AppData\Roaming\StorageExplorer\Local Storage
Open storage explorer
It kept me logged in, kept all my settings, and only appears to have reset my account filters and table caches.
It looks like the caches are stored in leveldb in that folder, so you may be able to crack that open and find the specific value(s) to drop.
I've found that if you copy the table and paste it into a different storage account or simply rename the table then the new table will not reference the unused columns. However, if you paste it back to the original location or rename the table back to the original name then the unused columns will still be shown, even if you delete the table first.
Strangely, if you create a brand-new table with the same name, it will only have the default columns. But if you import the contents of the original table from a file, the superfluous columns reappear, even though there is no reference to those columns in the CSV file.
I have a project in Access where we use tables that hold our customers' information. These tables were created by downloading an Excel export from another site of ours and then importing it into Access.
The problem is that the information on our other site changes sometimes, and we don't know what has changed in our existing data. When we append a new Excel download, it adds customeraccountIDs that are not in the table yet, but I need a way of finding out whether there are any changes to the existing information.
I have tried an update query, but that makes forms that have a relationship to the customer information tables stop showing the detail section. From what I have researched, this is possibly due to the update query making the updated table read-only.
I have made a query that gives me a list of all the duplicates between the newly downloaded Excel data and the existing table, but now I need some way to find whether there are any changes. There are 60 columns where changes could occur.
We are not against manually updating our tables if we can find a way of determining what has changed.
I have considered downloading the duplicates report to Excel and running a formula using EXACT(A2:A61,B2:B61), but then I would have to copy that formula to every other row across thousands of rows. I have no preference as to whether we find the changes in Excel or Access.
The best solution would be for Access to replace the existing information when appending the new data, not just drop the duplicates. Is that possible, or can a report be created that shows where the information differs?
I asked a similar yet slightly different question here before. I am using CRM 2013 Online and there are a couple of thousand records in it. The records were created by importing Excel sheet data that came from a SQL database.
Some fields in each record had no data when the first import from Excel was made. The system works in such a way that the Excel sheet is updated from the SQL database periodically, and that data then needs to be imported into CRM Online. As far as I know, and as mentioned in the linked question, you can only bulk update records in CRM by first exporting the data from CRM to Excel and then reimporting the same sheet back into CRM.
Is there a way to bulk update the records in CRM Online if I get data from the client in an Excel sheet?
Right now I compare their Excel sheet to my exported Excel sheet and make the required changes. That works well for a small number of records, but it is infeasible for a bulk record update. Any ideas?
2) Or is there a way to compare two Excel sheets and make sure that, if you copy columns from one sheet to another, the data in the columns ends up in the right rows?
I faced a similar issue with updating records from a CSV file. It is true that SSIS is one way. To solve our problem, I created a .NET executable application which is scheduled to run once per week. The .NET application does the following:
1. Connects to the organisation
2. Imports all records from the Excel spreadsheet using a pre-existing data map in the CRM organisation
3. Runs a duplicate detection rule (already existing in the CRM organisation) and brings back all duplicates
4. Sorts through each duplicate and stores the GUIDs in two arrays: a list of original records and a list of newly imported records (based on the created date of the record)
5. Performs a merge of the old data on the record with the new data (this is performed through the CRM 2013 SDK MergeResponse class; a short sketch of the call is below)
6. Now that the original records have been updated with the new data from the spreadsheet, deletes the duplicate records which have just been created and then made inactive by the MergeResponse call in step 5. (For us, we were updating contact info but wanted the original contact to stay in CRM because it will have cases etc. related to that contact's GUID.)
If you want to go down this route, I suggest looking at the example on the MS website which uses the CRM SDK (\CRM 2013 SDK\SDK\SampleCode\CS\DataManagement\DataImport\ImportWithCreate.cs). That is the sample code which I used to create the web service.
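For illustration, the merge in step 5 is a single SDK message. A minimal sketch, assuming contact records, an already-connected IOrganizationService called service, and placeholder GUIDs and field values:

    using Microsoft.Xrm.Sdk;
    using Microsoft.Crm.Sdk.Messages;

    // Merge a newly imported duplicate into the original contact.
    // originalId and duplicateId come from the duplicate-detection step.
    var merge = new MergeRequest
    {
        Target = new EntityReference("contact", originalId),  // the record that survives
        SubordinateId = duplicateId,                           // deactivated by the merge
        UpdateContent = new Entity("contact")                  // values copied onto the survivor
        {
            ["telephone1"] = "555-0100"
        },
        PerformParentingChecks = false
    };
    service.Execute(merge);   // returns a MergeResponse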
As you have thousands of records, I would guess that an SSIS package is the best option for you. It is very efficient in such scenarios.
This is the approach I would use:
Create a Duplicate Detection Rule under Settings > Data Management
Download the Import Template
Adjust your source system to generate the spreadsheet in that particular format
Depending on the frequency of your updates, I'd look into the CRM web services to import your data; a rough sketch of that route is below.
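A rough sketch, assuming contacts keyed by a custom account-number column; all entity, field, and value names are placeholders, and service is a connected IOrganizationService:

    using Microsoft.Xrm.Sdk;
    using Microsoft.Xrm.Sdk.Query;

    // Look up the existing contact by a business key taken from the spreadsheet row,
    // then update only the columns that have changed.
    var query = new QueryExpression("contact") { ColumnSet = new ColumnSet("contactid") };
    query.Criteria.AddCondition("new_accountnumber", ConditionOperator.Equal, "A-1001");

    EntityCollection matches = service.RetrieveMultiple(query);
    if (matches.Entities.Count == 1)
    {
        var update = new Entity("contact") { Id = matches.Entities[0].Id };
        update["telephone1"] = "555-0100";
        update["emailaddress1"] = "someone@example.com";
        service.Update(update);
    }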