I am trying to copy two different columns into two different named columns in a SharePoint library (Doc Type -> Acc Doc Type and Series -> Series Name). Currently I have been highlighting cells and dragging them into the second column, but that is very inefficient: it cannot be used when you highlight the entire column, and there are often several hundred files per library. Is there an easier way to copy the columns? Much appreciated!
Note: the new columns I'm trying to copy them into are global columns, so I believe making new ones will not solve my problem.
I believe what you are looking for is Datasheet view (SharePoint 2010 and earlier versions), which in SharePoint 2013 (the version it sounds like you are using) is called Quick Edit.
In any document library, click the Library tab in the ribbon and then Quick Edit. This will take you to a view that allows you to manipulate the metadata as you would in Excel, e.g., copying and pasting large amounts of data at a time.
I am trying to make a list in Excel whose output is the set of unique items that appear multiple times in different source ranges of the sheet. Ideally, the list should update automatically as more data is entered in the sources, but no additional sources will be added. I used a formula I found here, but it only works for a single source of data (and that data then needs to be adjacent).
I attached a picture of my document with circles enclosing the sources and pointing to where the list should be created. I highlighted in yellow a cell in the top row that does not get output (because I don't know how to do this). Picture for reference.
I can provide the Excel document if need be.
I am thinking of consolidating the sources to a single source, but I would like to solve this in a more sophisticated way that does not involve creating more tables.
Judging from your screenshot, you are using tables, so try the formula below:
=IFERROR(INDEX(FILTERXML("<t><s>"&TEXTJOIN("</s><s>",TRUE,Table1[Machine],Table2[Machine])&"</s></t>","//s[not(preceding::*=.)]"),ROW(1:1)),"")
Please note: TEXTJOIN() is only available in Excel 2019 and Excel 365, and this approach is limited to around 50,000 items.
To learn more about FILTERXML() read this article from JvdV.
I know this is a little backwards, but unfortunately this is what I have to do based on what I have to work with. Essentially we are getting data from one place, putting it into Excel, running some pivot tables, and then copying and pasting the results to SharePoint. There is no way of getting the data into SharePoint from its initial source.
So here are the steps I have to do on a daily basis:
Go into 3 lists on SharePoint and delete the current data in there
Copy the data from each Excel table and then paste it into the associated SharePoint list
Go into the Titles, etc. part of SharePoint and update the file name to append the current date (i.e. change it from Data05032017 to Data05042017)
I would like a way of automating this on a daily basis from Excel by simply pressing a button via VBA, or, if there is a way I can accomplish this via linking somehow, that would work too (not sure if it's possible; I know you can link from SharePoint to Excel but am unsure if you can go in the reverse direction).
So I first would like to know if this is possible and then if it is, how I go about doing it.
I have never used Visual Basic before but could do with a pointer on where to begin.
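In case it helps as a starting point, here is a minimal sketch of pushing an Excel table to a SharePoint list with VBA's ListObject.Publish. The site URL, sheet name, table name, and list name are all placeholders, and note that Publish creates a new list each time it runs; replacing the rows of an existing list generally needs a different route (for example, the SharePoint web services), so treat this as a sketch rather than a finished solution.

Sub PushTableToSharePoint()
    ' Sketch: publish an Excel table to a (new) SharePoint list.
    ' The site URL and sheet/table/list names below are placeholders.
    Dim tbl As ListObject
    Dim listUrl As String

    Set tbl = ThisWorkbook.Worksheets("Sheet1").ListObjects("Table1")

    ' Publish takes an array of {site URL, list name, optional description};
    ' True links the workbook table to the newly created list.
    listUrl = tbl.Publish(Array("https://yourserver/sites/yoursite", _
                                "DailyData", "Pushed from Excel"), True)

    MsgBox "Published to " & listUrl
End Sub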
I have 750 Excel spreadsheets that contain various amounts of data of different types. The columns are always the same, but the number of data rows varies per spreadsheet. I need to extract data and put it into two new spreadsheets.
Obviously, doing this 750 times manually would be a nightmare. I just want to run a script that can do it for me, and thus thought of Visual Basic, although I've never used it before.
My specific questions are:
What type of command should I research that would allow me to copy data where the starting row varies (as the data above varies in its number of rows)? There is a title before this new data; how can I get it to search for this title and then choose the row below?
Would all my spreadsheets have to be in one folder so that the script goes through them all, or can I have some kind of folder structure within that folder too?
Can anyone recommend any good resources for me to get to grips with Visual Basic and grasp what I need to do?
Thanks
Tom
The compilation task got easier with the introduction of Microsoft Power Query. If you are using Excel 2013, you already have this; if not, you can download the add-in from Microsoft.
The following guide outlines the process: Using Power Query to Combine Data from Multiple Excel Files into One Table. With Power Query (PQ), Microsoft has made easy aggregation possible with a few simple button clicks. PQ is a lightweight alternative for a lot of tasks that used to require VBA.
In this example, you will use PQ to point to an entire folder (750 files should be no problem) of commonly formatted Excel files. The only limitation is that each data file should have a similarly named tab.
I won't repeat the details of the guide for how to do it, as it is in-depth and visual. But if you run into issues, get in touch.
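That said, if you do end up on the VBA route your questions describe, here is a minimal sketch of the folder loop. It walks every .xlsx file in a single folder (Dir() does not recurse, so keep the files in one flat folder or add your own recursion), finds the row containing a known title, and copies the rows below it. The folder path, title text, and sheet layout are assumptions.

Sub CollectBelowTitle()
    ' Sketch: loop every .xlsx in one folder, find a title row,
    ' and copy the rows below it to an output sheet.
    Dim folderPath As String, fileName As String
    Dim wb As Workbook, found As Range
    Dim dest As Worksheet, nextRow As Long

    folderPath = "C:\Data\Input\"          ' placeholder folder
    Set dest = ThisWorkbook.Worksheets("Output")
    nextRow = 1

    fileName = Dir(folderPath & "*.xlsx")  ' Dir() walks one folder only
    Do While fileName <> ""
        Set wb = Workbooks.Open(folderPath & fileName, ReadOnly:=True)

        ' Find the title, then copy everything from the row below it
        ' down to the last used cell in that column.
        With wb.Worksheets(1)
            Set found = .Cells.Find(What:="My Title", LookAt:=xlWhole)
            If Not found Is Nothing Then
                .Range(found.Offset(1, 0), _
                       .Cells(.Rows.Count, found.Column).End(xlUp)) _
                    .EntireRow.Copy dest.Cells(nextRow, 1)
                nextRow = dest.Cells(dest.Rows.Count, 1).End(xlUp).Row + 1
            End If
        End With

        wb.Close SaveChanges:=False
        fileName = Dir                      ' next file
    Loop
End Sub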
My current employer (to remain nameless) has a collection of incredibly sophisticated Microsoft Excel 2003 worksheets (developed by contractors, also to remain nameless).
The employer is replacing the Excel-based solution with a Salesforce-based solution (developed by other contractors, likewise to remain unnamed). The Salesforce solution is also very complex, using dozens of related objects and "Dynamic SOQL" to contain the data and formulas that were previously contained in the Excel-based solution.
The employer's problem, which has become my problem, is that the data from the Excel spreadsheets needs to be meticulously and tediously recreated in .CSV files so it can be imported into Salesforce.
While I've recently learned I can use Ctrl+` to review formulas in Excel, this doesn't solve the problem that variables in Excel have cryptic names like $O$15. If I'm lucky, when I investigate $O$15, I'll find some explanatory metadata n cells up and/or some other data m cells to the left, and/or (in rare instances) there may be a comment on the cell.
Patterns within the Excel spreadsheets are very limited, rarely lasting more than 6 consecutive rows or columns, and no two sheets that need to be imported have much similarity.
Documentation of all systems is very limited.
Without my revealing any confidential data, does anyone have any good ideas how I might optimize my workflow?
It's not clear exactly what you need to do: here are 3 possible scenarios, requiring increasing knowledge of Excel.
1. If all you want is to convert the Excel spreadsheets into CSV format, then just save the worksheets as CSVs.
2. If you just want the data and not the formulae, then it would be simple (using VBA) to output anything that isn't a formula (the cell's .Formula won't start with =); see the sketch after this list.
3. If you need to create a linkage Excel --> CSV --> existing Salesforce objects/SOQL, then you will need to understand both the Excel spreadsheets and the Salesforce objects/SOQL that have been created. This will be difficult unless you have good knowledge and experience of Excel and also understand what the Salesforce app requires.
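For option 2, a minimal sketch (the active sheet is an assumption, and the Debug.Print would be adapted to write CSV lines instead):

Sub ExportValuesOnly()
    ' Sketch: list every non-formula, non-empty cell in the used range.
    Dim c As Range
    For Each c In ActiveSheet.UsedRange
        If Not c.HasFormula And Not IsEmpty(c.Value) Then
            Debug.Print c.Address & vbTab & c.Value
        End If
    Next c
End Sub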
Brian, if you're still working on this, here's one way to approach the problem. I use this kind of process often for updating data between SFDC and marketing automation apps.
1) Analyze the formulae that you're re-creating in Salesforce.com to determine what base data fields you need (stuff that doesn't have to be calculated from something else).
2) Find those columns/rows in your spreadsheets and use Paste Special -> Values in a new spreadsheet to create an upload file, with values instead of formulae, for each data area (leads, prospects, accounts, etc.).
3) If you have to associate the info with leads, contacts, or accounts, and you have already uploaded or created those records in Salesforce.com, be sure to export them with their ID numbers. That makes it easy to use the VLOOKUP formula in Excel to match up the fields you need to add and then re-upload the data into Salesforce; see the example below.
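As a hedged illustration of step 3: assuming the exported records sit on a sheet named Contacts with the matching key (say, an email address) in column A and the Salesforce ID in column B, a formula like this, filled down beside your upload rows, pulls in the matching ID (FALSE forces an exact match):

=VLOOKUP(A2,Contacts!$A:$B,2,FALSE)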
Like data cleaning, this can be a tedious process. But if you take it step by step it shouldn't be too hard. Good luck.
My Access programming is a little rusty, & I've never worked with Excel files all that much.
I have a requirement to bring data from Excel spreadsheets into Access 2007. These spreadsheets have a fixed (predictable) format, but it includes a "header area" where I need to read single data items from specific cells, followed by a mass of tabular data (~500 rows in the one sample I've seen so far). I will be processing all of this into a set of tables that are normalized quite differently from the flat layout of the spreadsheet.
I know how to open an ADO recordset on the tabular data, and it should work fairly well for my purposes. I also figure that I can reference the Excel object model and open the sheets through Automation to get the "header area" data items.
My question is this: since I have to (I think) use the Automation approach for the "header area", am I better off just leaving it open in this mode and moving on to the tabular data (with cell/range navigation), or closing that mode and going over to ADO? I suspect it's the latter, and I'd be more comfortable with it, but I don't want to do the wrong thing just because it's more familiar.
Edit
It seems I wasn't clear that I need to build this capability into the "application", as something that a user can repeat down the line. I'm assured that I can trust the format of the spreadsheet (though I'll include error trapping for graceful failure if that turns out to be false). These spreadsheets are "official design documents" for hardware, and my app needs to handle bringing in new and/or updated ones to track the things described in the tabular data in ways that the flat Excel format doesn't allow for.
Of those two options, I would choose the second simply because I find it more convenient to work with an ADO recordset. It should be fairly simple if you can assign a named range to your spreadsheet's tabular data.
Edit: If your spreadsheet includes field names, the recordset approach would be less prone to breaking due to spreadsheet changes, such as new columns inserted before or between the existing columns or a re-ordering of the existing columns.
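For illustration, here is a minimal sketch of opening such a recordset from Access VBA through the ACE provider; the file path and range name are placeholders:

Sub ReadExcelRange()
    ' Sketch: open an ADO recordset on a named range in a workbook.
    ' Uses late binding, so no ADO reference is required.
    Dim cn As Object, rs As Object

    Set cn = CreateObject("ADODB.Connection")
    cn.Open "Provider=Microsoft.ACE.OLEDB.12.0;" & _
            "Data Source=C:\Data\Design.xlsx;" & _
            "Extended Properties=""Excel 12.0 Xml;HDR=Yes"""

    Set rs = CreateObject("ADODB.Recordset")
    rs.Open "SELECT * FROM [TabularData]", cn   ' [TabularData] = named range

    Do While Not rs.EOF
        Debug.Print rs.Fields(0).Value          ' process each row here
        rs.MoveNext
    Loop

    rs.Close
    cn.Close
End Sub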
But actually, I think the TransferSpreadsheet Method might be more convenient. You can specify the spreadsheet range as a named range or by cell address, as in this example from the linked page:
DoCmd.TransferSpreadsheet acImport, 3, _
"Employees","C:\Lotus\Newemps.wk3", True, "A1:G12"
Also, you can choose between importing the spreadsheet range directly into an Access table, or linking to the range as a "virtual" table ... whichever best meets your application's needs.
Edit2: Creating a link (acLink instead of acImport) with TransferSpreadsheet would allow you to execute SQL statements against the linked table:
INSERT INTO DestinationTable (field1, field2, field3)
SELECT foo, bar, bat FROM LinkedTable;
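For reference, creating that link might look like this (the file path and range name are placeholders):

DoCmd.TransferSpreadsheet acLink, acSpreadsheetTypeExcel12Xml, _
    "LinkedTable", "C:\Data\Design.xlsx", True, "TabularData"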
If the header information is really complicated, this can simplify your coding work:
In the official design Excel file, create a hidden tab.
In that tab, make a 1-row table connecting to all the header elements you're interested in (i.e., set row 1, column 1 to "Document#" and row 2, column 1 to =Sheet1!A1).
Then you can re-use the same VBA procedure to import both your tabular data and your header data.
I would do it all via Automation. Why have two separate processes where one will do? After you've read the header information, reading the tabular information will be quite easy.
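A minimal sketch of that single-pass approach; the file path, sheet, cell addresses, and starting row are all assumptions:

Sub ImportViaAutomation()
    ' Sketch: open the workbook once, read the header cells,
    ' then walk the tabular rows in the same pass.
    Dim xl As Object, wb As Object, ws As Object
    Dim docNum As String, r As Long

    Set xl = CreateObject("Excel.Application")
    Set wb = xl.Workbooks.Open("C:\Data\Design.xlsx", ReadOnly:=True)
    Set ws = wb.Worksheets(1)

    docNum = ws.Range("B2").Value          ' a header-area item

    r = 10                                 ' first row of the tabular data
    Do While ws.Cells(r, 1).Value <> ""
        ' ... insert into your normalized Access tables here ...
        Debug.Print docNum, ws.Cells(r, 1).Value, ws.Cells(r, 2).Value
        r = r + 1
    Loop

    wb.Close SaveChanges:=False
    xl.Quit
End Sub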
I inherited an application back in mid-2000 that was built to import Excel spreadsheets that were basically reporting output from MYOB (an accounting program). What had been done was simply to create a template table that had all the columns necessary to accommodate the report, using the text data type for all columns. Then the non-data rows were filtered out and the remainder processed into the eventual destination table.
It's not elegant and doesn't require a lot of programming, though the implementation I inherited used a dedicated temp table for each report layout being imported. You could easily replace all of those with a single table with 100 text columns of 255 characters (or memo fields, for that matter, if that were a requirement), and just re-use it.
I'm not sure if I'd recommend it or not, but it really is quite easy without requiring much in the way of code.