Working with Office "open" XML - just how hard is it?

I'm considering replacing a (very) large body of Office-automation code with something that works with the Office XML format directly. I'm just starting out, but already I'm worried that it's too big a task.
I'll be dealing with Word, Excel and PowerPoint. So far I've only looked at Word and Excel. It looks like Word documents should be reasonably easy to manipulate, but Excel workbooks look like a nightmare. For example...
In Word, it looks like you could delete a paragraph simply by deleting the corresponding "w:p" tag. However, the supplied code snippet for deleting a row in Excel takes about 150 lines of code(!).
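For instance, here is roughly what I have in mind for the Word case, using the Open XML SDK (just a sketch I haven't run; the file name and marker text are made up):

    using System.Linq;
    using DocumentFormat.OpenXml.Packaging;
    using DocumentFormat.OpenXml.Wordprocessing;

    class DeleteParagraph
    {
        static void Main()
        {
            // "sample.docx" is a placeholder path; open the document for editing.
            using (WordprocessingDocument doc = WordprocessingDocument.Open("sample.docx", true))
            {
                Body body = doc.MainDocumentPart.Document.Body;

                // Find the first w:p whose text contains the marker and remove the element.
                Paragraph target = body.Elements<Paragraph>()
                                       .FirstOrDefault(p => p.InnerText.Contains("DELETE ME"));
                if (target != null)
                {
                    target.Remove();
                }

                doc.MainDocumentPart.Document.Save();
            }
        }
    }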
The reason the Excel code is so big is that deleting a row means updating the row indexes of all the subsequent rows, fixing up the "shared strings" table, etc. According to a comment at the top, the code snippet is not even complete, in that it won't deal with a workbook that has tables in it (I can live with that).
What I'm not clear on is whether that's the only restriction that the sample code has. For example, would there also be a problem if the workbook contained a Pivot Table? Or a chart that references data from the same sheet? Or some named ranges? Wouldn't you also have to update the formulae for any cells (etc.) that referenced a row whose row index had changed?
[That's not to mention the "calc chain", which (thankfully) I think you can simply delete, since it's only a cache that can be re-built.]
And that's my question, woolly though it is. Just how hard do you have to work to do something as simple as deleting a row properly? Is it an insurmountable task?
Also, if there are other, similar issues either with Excel or with Word or PowerPoint, I'd love to hear about them now, before I waste too much time going down a blind alley. Thanks.

Having worked with the Open XML SDK 2.0 for almost two years now, I can say that doing seemingly trivial tasks can take many hours, and sometimes days, to figure out how to do properly. For example, deleting an Excel row should be fairly straightforward and easy to do, right? Nope, because not only do you need code to delete your row, you then have to update all the row indices, update any merged cell references, update hyperlink references, etc. Our internal delete method is close to 500 lines of code just to delete a row, and I'm sure we don't have all the cases accounted for either.
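To give a feel for why it balloons, here is just the skeleton of a row delete - a sketch only, not our real code: it removes the row and renumbers what follows, and deliberately ignores formulas, merged cells, hyperlinks, defined names, tables and everything else that makes the real method so long.

    using System.Linq;
    using System.Text.RegularExpressions;
    using DocumentFormat.OpenXml.Packaging;
    using DocumentFormat.OpenXml.Spreadsheet;

    static void DeleteRowNaively(WorksheetPart worksheetPart, uint rowIndex)
    {
        SheetData sheetData = worksheetPart.Worksheet.GetFirstChild<SheetData>();

        // 1. Remove the row element itself.
        Row row = sheetData.Elements<Row>().FirstOrDefault(r => r.RowIndex.Value == rowIndex);
        if (row != null)
        {
            row.Remove();
        }

        // 2. Shift every subsequent row up by one, fixing the row index and each
        //    cell reference ("A7" becomes "A6", and so on).
        foreach (Row r in sheetData.Elements<Row>().Where(r => r.RowIndex.Value > rowIndex))
        {
            uint newIndex = r.RowIndex.Value - 1;
            r.RowIndex = newIndex;

            foreach (Cell c in r.Elements<Cell>().Where(c => c.CellReference != null))
            {
                string column = Regex.Match(c.CellReference.Value, "[A-Za-z]+").Value;
                c.CellReference = column + newIndex;
            }
        }

        // Still untouched: formulas that point at the shifted rows, merged cells,
        // hyperlinks, defined names, pivot caches, charts, tables, the calc chain...
        // which is where the other few hundred lines go.
        worksheetPart.Worksheet.Save();
    }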
The biggest complaint I have is the lack of documentation on how to do the most common tasks. The MSDN section on the Open XML SDK is very limited and whenever you need to do anything complicated you are really on your own. I've had to read the Open XML standard a lot to figure out what certain elements mean and how they should be implemented since I could find very little online.
The other challenging part is that if you insert an element in a spot where it doesn't belong, or put an invalid attribute on an element, you will get a corrupt file when you try to open it. Most of the time you will not get any information on what caused the error, and you will have to look at the Open XML standard spec to see what you did wrong.
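One small mitigation: the SDK ships a validator you can run before opening the file in Excel. It won't catch everything, but it usually points at the offending element far more precisely than Excel's repair dialog does. Something along these lines (a sketch for a spreadsheet, targeting the Office 2007 schema):

    using System;
    using DocumentFormat.OpenXml;
    using DocumentFormat.OpenXml.Packaging;
    using DocumentFormat.OpenXml.Validation;

    static void ValidatePackage(string path)
    {
        using (SpreadsheetDocument doc = SpreadsheetDocument.Open(path, false))
        {
            var validator = new OpenXmlValidator(FileFormatVersions.Office2007);

            // Reports schema violations: misplaced elements, unknown attributes, bad values.
            foreach (ValidationErrorInfo error in validator.Validate(doc))
            {
                Console.WriteLine("{0}: {1}", error.Path.XPath, error.Description);
            }
        }
    }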
If you need a fast turnaround time on converting that Office automation code into Open XML and what you are doing is not really basic, then I would say pass. If you have time and the patience to read up on the Word, Excel and PowerPoint XML structures and get familiar with how they relate then I say go for it. In my opinion it is really the only way to have very fine control over these office documents, but there will be a great learning curve when you start.
Oh, and just for fun, here is the sort of thing that's needed just to add a comment to an Excel cell.
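The original snippet ran to a couple of hundred lines, so I'll just sketch the shape of it here: you need a WorksheetCommentsPart for the comment text and authors, plus a hand-written legacy VML drawing part for the comment box itself, and a legacyDrawing element on the worksheet tying them together. The cell reference, author and text below are placeholders, the minimal VML anchors the box at A1 only, and the worksheet's element-ordering rules still apply.

    using System.IO;
    using DocumentFormat.OpenXml.Packaging;
    using DocumentFormat.OpenXml.Spreadsheet;

    static void AddComment(WorksheetPart worksheetPart, string cellReference,
                           string author, string text)
    {
        // 1. The structured comment lives in a WorksheetCommentsPart.
        WorksheetCommentsPart commentsPart = worksheetPart.AddNewPart<WorksheetCommentsPart>();
        commentsPart.Comments = new Comments(
            new Authors(new Author(author)),
            new CommentList(
                new Comment(new CommentText(new Run(new Text(text))))
                {
                    Reference = cellReference,
                    AuthorId = 0U
                }));
        commentsPart.Comments.Save();

        // 2. The visible comment box is a legacy VML shape in its own part.
        //    This minimal shape is hard-coded to anchor at A1 (x:Row/x:Column below).
        VmlDrawingPart vmlPart = worksheetPart.AddNewPart<VmlDrawingPart>();
        const string vml =
            "<xml xmlns:v=\"urn:schemas-microsoft-com:vml\"" +
            " xmlns:o=\"urn:schemas-microsoft-com:office:office\"" +
            " xmlns:x=\"urn:schemas-microsoft-com:office:excel\">" +
            "<v:shapetype id=\"_x0000_t202\" coordsize=\"21600,21600\" o:spt=\"202\"" +
            " path=\"m,l,21600r21600,l21600,xe\"/>" +
            "<v:shape id=\"_x0000_s1025\" type=\"#_x0000_t202\" fillcolor=\"#ffffe1\"" +
            " style=\"position:absolute;width:108pt;height:60pt;z-index:1;visibility:hidden\">" +
            "<x:ClientData ObjectType=\"Note\"><x:MoveWithCells/><x:SizeWithCells/>" +
            "<x:Anchor>1, 15, 0, 2, 3, 15, 3, 16</x:Anchor>" +
            "<x:AutoFill>False</x:AutoFill><x:Row>0</x:Row><x:Column>0</x:Column>" +
            "</x:ClientData></v:shape></xml>";
        using (var writer = new StreamWriter(vmlPart.GetStream(FileMode.Create)))
        {
            writer.Write(vml);
        }

        // 3. Point the worksheet at the VML part. Appending works on a simple sheet,
        //    but legacyDrawing has a fixed place in the schema, so real code should
        //    insert it at the correct position.
        worksheetPart.Worksheet.Append(
            new LegacyDrawing { Id = worksheetPart.GetIdOfPart(vmlPart) });
        worksheetPart.Worksheet.Save();
    }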

Just for completeness, here are some libraries I found for working with Excel XML:
www.extremexml.com - a layer on top of the Open XML SDK classes; focuses on injecting data into an existing spreadsheet and handles many of the cross-reference problems I identified in my question. Open source, but GPL2 rather than LGPL. The code looks nice and the documentation is excellent, though the project does not appear terribly active on CodePlex.
ClosedXML - another layer on top of the Open XML SDK - again open source, but with a less restrictive license (MIT). Looks nice, and looks more "active" than the above (see the short sketch after this list).
SpreadsheetLight - from what I can tell, a closed-source library sitting atop the Open XML SDK classes. Targeted more at those looking to create a spreadsheet from scratch rather than making changes to existing spreadsheets.
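For comparison, the row-delete that prompted my question looks something like this in ClosedXML (an untested sketch based on its documented API; the file name is a placeholder):

    using ClosedXML.Excel;

    class ClosedXmlRowDelete
    {
        static void Main()
        {
            using (var workbook = new XLWorkbook("book.xlsx"))
            {
                // Deletes the row and shifts everything below it up,
                // adjusting references along the way.
                workbook.Worksheet(1).Row(5).Delete();
                workbook.Save();
            }
        }
    }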

Here is another third party library dedicated to working with OpenXML:
http://www.officewriter.com
In the example cited by amurra above of deleting Excel spreadsheet rows, this is a single method call with this tool. It updates formulas and all the other references for which it seems 500 lines of code would otherwise be required.
The OpenXML SDK itself is a great tool for very simple things, but you still have to concern yourself with a lot of the internals of the file format and packaging structure to get things really right.

Here are some additional libraries that can manipulate OOXML formats:
- GemBox.Spreadsheet (XLSX)
- GemBox.Document (DOCX)
GemBox has also published some articles that demonstrate how to manipulate OOXML file formats with pure .NET (without the use of any library); I think you'll find these interesting (a short sketch of that approach follows the links):
www.codeproject.com/Articles/15593/Read-and-write-Open-XML-files-MS-Office
(An introduction to the SpreadsheetML format and an explanation of how to read and write a worksheet's cell content)
www.codeproject.com/Articles/649064/Show-Word-File-in-WPF
(An introduction to the WordprocessingML format and a demonstration of how to read a document's text)
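To give a taste of the "pure .NET" approach from the first article: reading cell values boils down to unzipping the package and walking the SpreadsheetML, roughly like this (a simplified sketch - it only handles plain values and shared strings, assumes the first worksheet is xl/worksheets/sheet1.xml, and the file name is a placeholder):

    using System;
    using System.IO.Compression;
    using System.Linq;
    using System.Xml.Linq;

    class ReadSheetValues
    {
        static void Main()
        {
            XNamespace ns = "http://schemas.openxmlformats.org/spreadsheetml/2006/main";

            using (ZipArchive zip = ZipFile.OpenRead("book.xlsx"))
            {
                // The shared strings table holds the text of every string cell.
                var sharedEntry = zip.GetEntry("xl/sharedStrings.xml");
                string[] sharedStrings = sharedEntry == null
                    ? new string[0]
                    : XDocument.Load(sharedEntry.Open())
                               .Descendants(ns + "si")
                               .Select(si => string.Concat(si.Descendants(ns + "t").Select(t => (string)t)))
                               .ToArray();

                // Walk the cells of the first worksheet.
                XDocument sheet = XDocument.Load(zip.GetEntry("xl/worksheets/sheet1.xml").Open());
                foreach (XElement cell in sheet.Descendants(ns + "c"))
                {
                    string reference = (string)cell.Attribute("r");
                    string type = (string)cell.Attribute("t");        // "s" means shared string
                    string rawValue = (string)cell.Element(ns + "v");  // null for empty cells

                    string value = (type == "s" && rawValue != null)
                        ? sharedStrings[int.Parse(rawValue)]
                        : rawValue;

                    Console.WriteLine("{0} = {1}", reference, value);
                }
            }
        }
    }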

Related

Extracting specific data from a pdf into excel

I need to extract select data from a pdf form into excel. Eventually, the data gathered will be used in another step (excel table) as part of an additional calculation.
I am hoping to find a way to automate this process, so I tried importing the pdf file into excel using Power Query. Unfortunately, each time I loaded the pdf, I got a message saying the page is blank.
After doing some initial searching, I found out that this may be due to the way the pdf file was originally built (not as a table converted to a pdf).
I went back and converted the pdf file into a spreadsheet, and now I can actually see the data that I need to extract in excel, but it needs a lot of cell formatting and rearranging.
I would really like to know if there is an alternative to solving this problem. More importantly, I'd very much appreciate any bright ideas or recommendations on how to best tackle this task since I have to repeat the same process 30+ times.
Also, I don't have a lot of coding experience or knowledge - very minimal.
Thank you so much

Finding the cause of Excel file corruption

I have a feature that downloads things to an xls file using Apache POI. Mostly it works. But on one particular database, the resulting files are corrupted and won't open in Excel. I get the message "We found a problem with some content in 'DownloadFoo.xls'. Do you want us to try to recover as much as we can? If you trust the source of this workbook, click Yes." Clicking yes results in all the formatting, data validation, etc. being stripped out. On the other hand, if I open the file in Open Office Calc and save it, it's fine and can be opened in Excel from then on. (The people who want to use these files aren't allowed to download Open Office Calc, so this is not considered an acceptable workaround.)
I have tried narrowing it down to see which data is causing the problem, but it seems to occur whenever 10 or more items are downloaded, regardless of which items they are. (On other databases, it's fine to download 100+). Excluding some of the columns helps, but they are perfectly innocuous looking columns (and virtually identical to other columns which are fine) so this still hasn't got me to the bottom of it.
Are there any techniques I could use to find out what Excel has a problem with in the corrupted spreadsheets?
I can't make major changes like getting it to download to xlsx instead as this feature is going to be scrapped and replaced with something completely different in the near future, so I'd like to just focus on the problem at hand.
It turns out that the solution to the problem was to reset the data validation lists more often. Quite a lot of the cells in my spreadsheet have data validation. When the data validation lists are longer, they are stored on a hidden sheet. If several cells need the same validation, I try to get them referencing the same list in order not to write out too much stuff on the hidden sheet. However, Excel apparently dislikes it when too many cells reference the same list - it's not against the rules as far as I can tell, but it doesn't like it anyway. When I changed it to rewrite the validation lists for every 5 items, it started working.
The reason this database was different was that the items had an unusually high number of subitems, so they occupied a lot of rows even though it didn't seem like many things were being downloaded. Some of the problem columns just had true or false validation rather than using the lists on the hidden sheet, so I don't know what that was about, but resetting the validation lists helped anyway.
This doesn't really answer my question, as I never managed to get any information from Excel about what the problem was, or use a particular technique - it was just a series of coincidental findings. I'm putting it here anyway in case anyone else has a similar problem. Also, the thing that started me on the right track was finding an old comment while double-checking (in response to Andrew Morton's comment) that it doesn't do anything different for over 10 items (it doesn't), so thanks Andrew!

Excel Files and Visual Basic

I have never used Visual Basic before but could do with a pointer on where to begin.
I have 750 excel spreadsheets that contain various amounts of data of different types. The columns are always the same, but the number of data rows varies per spreadsheet. I need to extract data and put it into two new spreadsheets.
Obviously to do this 750 times manually would be a nightmare. I just want to run a script that can do it for me, and thus thought of Visual Basic, although I've never used it before.
My specific questions are:
What type of command should I research that would allow me to copy data where the row number to start at varies (as the data above varies in number of rows)? There is a title before this new data - how can I get it to search for this title and then choose the row below?
Would all my spreadsheets have to be in one folder so that the script goes through them all, or can I have some kind of folder structure in that folder too?
Can anyone recommend any good resources for me to get to grips with Visual Basic and grasp what I need to do?
thanks
Tom
The compilation task got easier with the introduction of MS Power Query. If you are using MS Excel 2013, you already have this. If not, you should download the add-in from MS and use that.
The following guide, "Using Power Query to Combine Data from Multiple Excel Files into One Table", outlines how to do it. With Power Query (PQ), MS has enabled easy aggregation using a few simple button clicks. PQ is a lightweight alternative to a lot of tasks that used to require VBA.
In this example, you will use PQ to point at an entire folder's worth of commonly formatted Excel files (750 should be no problem). The only limitation is that each data file should have a similarly named tab.
I won't repeat the details of the guide for how to do it, as it is in-depth and visual. But if you run into issues, get in touch.

how to display data that is related to a specific cell in excel 2010?

I have created 2 columns, the first has a category of a system using data validation, and the second has the description and failures of that system.
The purpose of that is to open a malfunction on some parts.
In a different sheet I wish to do the same only this time I want to choose the system and the description will automatically appear in the next column showing me all the malfunctions I have written on this system.
I am not very good at all the functions of Excel, but I still searched for one that might help me. I have tried using the DGET function but it got me nowhere.
Perhaps try the solution here - it's a bit tricky to explain without copy-pasting the whole thing:
https://superuser.com/questions/536234/excel-how-to-vlookup-to-return-multiple-values
Also take a look at vlookup() if you're working across spreadsheets.
As expected, all of the responses you've seen here - and probably elsewhere - are pointers to VLookup, or a refusal to answer your question.
I'm guessing that you're using DGET() because you need to retrieve data from one named column, using a match for a search term in another named column; and you're doing that because you can't rely on column ordinals or addresses - you have to do it by name.
VLookup won't do that for you: not without extremely complex and fragile array formulae.
The bad news is: Microsoft NEVER published a working example of a DGET() formula or any corresponding VBA Worksheet Function code.
There's page after page of descriptive text and general explanation in the helpfiles and on MSDN: but no working example. Nobody in Redmond ever sat down and made the DGET() function work with a reproducible set of function parameters and published a screen-shot of the working formula.
I'll let you guess why that is.
Maybe there's an example somewhere that is, in effect, a VLookup implemented for known column ordinals using DGET(). If there is, I never found it and you won't either: and it would, of course, be useless for any application where you're working with field names instead of known ordinals.
What you need to do is capture the tabulated data range, with field names in the top row, and pass it to a SQL query using ADODB or MS-Query. The bad news there is that all the MS-JET Excel drivers have a fatal memory leak.
After that fails, you're left exporting the data somewhere that a proper database app can run the SQL: and that's actually the right thing to do, because your attempt at using DGET() is a relational data query.
If you're left with the need to do this entirely in Excel, you have reached a level of desperation normally associated with the last survivor of an airplane crash who, having devoured the charred remains of his unlucky fellow passengers, is finally forced to contemplate the awful exigency of opening and eating the inflight catering meals.
The grisly details for the equivalent in Excel are a Horrible Hack published here:
http://excellerando.blogspot.com/2014/09/from-time-to-time-it-necessary-to.html

What is the best way to import data from sophisticated formula enriched Excel files into SalesForce.com?

My current employer (to remain nameless) has a collection of incredibly sophisticated Microsoft Excel 2003 worksheets (developed by contractors, also to remain nameless).
The employer is replacing the Excel-based solution with a SalesForce-based solution (developed by other contractors, likewise to remain unnamed). The SalesForce solution is also very complex using dozens of related objects and "Dynamic SOQL" to contain the data and formulas which previously was contained in the Excel-based solution.
The employer's problem, which has become my problem, is that the data from the Excel spreadsheets needs to be meticulously and tediously recreated in .CSV files so it can be imported into SalesForce.
While I've recently learned I can use CTRL-` to review formulas in Excel, this doesn't solve the problem that variables in Excel have cryptic names like $O$15. If I'm lucky, when I investigate $O$15, I'll find some metadata explaining it n cells up and/or some other data m cells to the left, and/or (in rare instances) there may be a comment on the cell.
Patterns within the Excel spreadsheets are very limited, rarely lasting more than 6 concurrent rows or columns and no two sheets which need to be imported have much similarity.
Documentation of all systems is very limited.
Without my revealing any confidential data, does anyone have any good ideas how I might optimize my workflow?
It's not clear exactly what you need to do: here are 3 possible scenarios, requiring increasing knowledge of Excel.
1. If all you want is to convert the Excel spreadsheets into CSV format then just save the worksheets as CSVs.
2. If you just want the data and not the formulae, then it would be simple (using VBA) to output anything that isn't a formula (the cell.Formula won't start with =); a non-VBA sketch along these lines follows this list.
3. If you need to create a linkage excel-->csv-->existing Salesforce objects/SOQL then you will need to understand both the Excel Spreadsheets and the Salesforce objects/SOQL that have been created. This will be difficult unless you have good knowledge and experience of Excel and also understand what the salesforce App requires.
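If you would rather not do the formula-filtering in VBA, the same idea is straightforward with the Open XML SDK instead (a rough sketch, not production code: it reads the first worksheet part only, skips formula cells, resolves shared strings, and does no CSV escaping; both paths are placeholders):

    using System.IO;
    using System.Linq;
    using DocumentFormat.OpenXml.Packaging;
    using DocumentFormat.OpenXml.Spreadsheet;

    static void DumpConstantsToCsv(string xlsxPath, string csvPath)
    {
        using (var doc = SpreadsheetDocument.Open(xlsxPath, false))
        using (var csv = new StreamWriter(csvPath))
        {
            // Note: WorksheetParts.First() is not guaranteed to be the first sheet
            // in workbook order; a real version should resolve sheets by name.
            WorksheetPart sheetPart = doc.WorkbookPart.WorksheetParts.First();
            SharedStringTablePart strings = doc.WorkbookPart.SharedStringTablePart;

            foreach (Row row in sheetPart.Worksheet.Descendants<Row>())
            {
                var values = row.Elements<Cell>()
                    .Where(c => c.CellFormula == null)   // constants only, no formulae
                    .Select(c =>
                    {
                        string v = c.CellValue == null ? "" : c.CellValue.Text;
                        bool isShared = c.DataType != null
                                        && c.DataType.Value == CellValues.SharedString;
                        return isShared
                            ? strings.SharedStringTable.ElementAt(int.Parse(v)).InnerText
                            : v;
                    });

                csv.WriteLine(string.Join(",", values));
            }
        }
    }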
Brian, if you're still working on this, here's one way to approach the problem. I use this kind of process often for updating data between SFDC and marketing automation apps.
1) Analyze the formulae that you're re-creating in Salesforce.com to determine what base data fields you need (stuff that doesn't have to be calculated from something else).
2) Find those columns/rows in your spreadsheets and use Paste Special -> Values in a new spreadsheet to create an upload file with values instead of formulae that you need for each data area (leads, prospects, accounts, etc.)
3) If you have to associate the info with leads or contacts or accounts and you have already uploaded or created those records in Salesforce.com, be sure to export them with their ID numbers. That makes it easy to use the vlookup formula in Excel to match up fields that you need to add and then re-upload the data into Salesforce.
Like data cleaning, this can be a tedious process. But if you take it step by step it shouldn't be too hard. Good luck.
