Access VBA "Record Too Large" - Excel

I have not found an answer to this issue on the net: maybe an Access bug?
I have Windows 10 and Access 2016. The table has 102 fields and 40 records, with 14 long text fields per record. None of the long text entries contain more than 200 characters. The fields in question are set to "Long Text" (called "Memo" in earlier Access versions).
The software that I wrote, and have used for 5 years with Access 2010, imports an Excel workbook. Now I use that same software with Access 2016 and have started getting the error described here. This is the 4th db I have set up using Access 2016, and it is the first time I have seen this problem.
When I tried to type entries into one or two cells of a long text field in a record, an error was generated saying the "Record is too large". The same field on other records works as expected; only the cell on that particular record generates the error. Like I said, I have never seen this error in other versions of Access.
I have tried 1) "compact and repair", 2) exporting the table to a new table, and 3) exporting the table to Excel and, record by record, cutting and pasting all 102 fields by hand into a new Access record. Option 3) works most of the time; options 1) and 2) have never fixed the problem.
The incident that led me to seek help is that this time, after performing step 3) above with a new record, one cell generates the "record too large" error again. I noticed the source cell in Excel had several semicolons; I removed them and tried to cut and paste into the Access cell, with no success. Typing the entry into the cell instead of pasting gives the same error.
I am really at a loss as to what the issue is, and I need some help. Has anyone ever experienced this problem?

I have 102 fields and 40 records. There are 14 long text fields for each record. None of the long text entries contain more than 200 characters.
I'd refactor the schema, yesterday. This is exactly what one-to-one relationships are for: move a subset of columns into another table and relate PK to PK. 102 columns is too many concerns stuffed into one single table; break it down, regardless of the "record too large" error. (For what it's worth, Access caps the non-Long-Text portion of a record at roughly 4,000 bytes, which is likely what a table this wide is running into.)
That said, if none of the long text entries contain more than 200 characters, why are they long text in the first place? I'd make them variable-length character columns (that would be nvarchar on SQL Server; in Access, a Short Text field), with perhaps 255 characters of capacity.
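For illustration, here is roughly what that split could look like in Access VBA/DDL. This is a minimal sketch; all table and field names (tblMain, tblMainNotes, ID, Note1, Note2) are invented:

    Sub SplitWideTable()
        ' Move a subset of columns into a side table, related one-to-one
        ' on the primary key. Names below are placeholders.
        With CurrentDb
            .Execute "CREATE TABLE tblMainNotes (" & _
                     "ID LONG CONSTRAINT PK_Notes PRIMARY KEY, " & _
                     "Note1 TEXT(255), Note2 TEXT(255))", dbFailOnError
            .Execute "ALTER TABLE tblMainNotes ADD CONSTRAINT FK_Notes " & _
                     "FOREIGN KEY (ID) REFERENCES tblMain (ID)", dbFailOnError
            .Execute "INSERT INTO tblMainNotes (ID, Note1, Note2) " & _
                     "SELECT ID, Note1, Note2 FROM tblMain", dbFailOnError
            ' Once the copy is verified, drop the moved columns:
            .Execute "ALTER TABLE tblMain DROP COLUMN Note1", dbFailOnError
            .Execute "ALTER TABLE tblMain DROP COLUMN Note2", dbFailOnError
        End With
    End Sub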

Thank you for all of the input. I cannot post the code, but I was able to work out a solution, described below in case others end up in the same predicament.
1. Export the offending table to Excel, preserving formatting.
2. Export the offending table to Access, same db, preserving the definition only (do not export data).
3. Change the several offending fields in the blank (new) table from "Short Text" to "Long Text" (Memo).
4. Append the Excel sheet created in step 1 into the blank table created in step 2.
These steps resolved my issue and got me out of a painful jam. Thank you all for the help and ideas.
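In case anyone wants to script these steps, here is a rough VBA version. Table, field, and file names are placeholders, and I re-create the structure with a SELECT INTO where the wizard's "definition only" export was used above:

    Sub RebuildBrokenTable()
        ' 1. Export the offending table to Excel.
        DoCmd.TransferSpreadsheet acExport, acSpreadsheetTypeExcel12Xml, _
            "tblBroken", "C:\temp\tblBroken.xlsx", True
        ' 2. Re-create the structure as a fresh, empty table
        '    (primary key and indexes must be re-added afterwards).
        CurrentDb.Execute "SELECT * INTO tblFixed FROM tblBroken WHERE False", dbFailOnError
        ' 3. Switch the offending fields from Short Text to Long Text (Memo).
        CurrentDb.Execute "ALTER TABLE tblFixed ALTER COLUMN Notes1 MEMO", dbFailOnError
        ' 4. Append the exported sheet into the new table.
        DoCmd.TransferSpreadsheet acImport, acSpreadsheetTypeExcel12Xml, _
            "tblFixed", "C:\temp\tblBroken.xlsx", True
    End Sub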
v/r,
Johnny

There is a chance you have found an Excel-import-related bug in Microsoft Access.
Create the table anew to work around the defect and get its internal data right.
The problem occurs in larger tables auto-created on import from Excel. Even if the length or count of their fields does not exceed any limit, you can still start getting the error "Record too Large". Running Compact and Repair does not remedy the issue.
So if you are sure that your data structure does not exceed Access limitations, and you then re-create the table with the same fields and lengths, the error is gone.
As proof, I currently have two tables in my database with identical internal structure. The one created by import reports "Record too Large"; the one I created manually (by copy-pasting fields in design view) is fine.
So we can say that there is a specific Access bug on import from Excel which, as of today (2018-10-17), has not been corrected in Access 2016. Work around it as described above.

Related

Excel Power Query Connection and Loading into an existing Table (Query results cannot overlap a table or XML Mapping)

First post here.
I have an existing workbook, created some time ago, in which a user set up a table. Basically the user got an extract file and simply cut and pasted the data into the table; to the right of this data are a whole bunch of formulas within the table, so they then just copy and paste the formulas down.
What I am trying to do is remove this effort and have the table refreshed from the updated extract file of raw data.
I know I could do VB to deal with this (although I've not done any VB for a few years); HOWEVER, I notice there is a data connection and Power Query, so I thought this would be a better way.
The problem is that, as there is already a table, I cannot import into it due to the said error "Query results cannot overlap a table or XML Mapping". I understand that the connection creates/recreates the table.
I have tried a couple of methods to get around this...
Recreating the table and then using Find & Replace on the name references, but a lot of the formulas are too long for Find & Replace to handle.
Converting the table to a range and then importing, but I ended up with two tables side by side. I can't see any way to merge the two, and I'm not sure it would solve the problem when I try to import again.
Any starter for ten on this would help, as I've not dabbled with Excel in this way for a few years.
Regards
Gary

Issue with counting events by time range

I have a data export from software that tracks when people swipe access cards to unlock a door. I am trying to find out how many card swipes we get for each two-hour block.
Something weird is going on: the formula works for only two of the time blocks, even though the first several rows of data show there are other time slots that should be counted. I have made sure that all the right cells are formatted with the same category (Time) and type (*1:30:55 PM). The attached screenshot shows the formula used.
I think the issue is that this was a CSV export from the software, but from there I don't know where to go. Any suggestions? And yes, I tried a pivot table first, but when I tried to group the data I got an error message saying something like "Excel cannot group on this value."
I found the problem.
I opened the CSV export in a text editor and found a leading space before every hour entry. I used search and replace to remove the extra spaces; after that the file loaded into Excel cleanly, all of the time entries were right-justified, and my formulas worked.
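For anyone who would rather do that cleanup in Excel than in a text editor, a small VBA sketch along these lines should work; the assumption here is that the time values sit in column B of the active sheet:

    Sub TrimLeadingSpaces()
        Dim rng As Range, c As Range
        Set rng = Intersect(ActiveSheet.UsedRange, ActiveSheet.Columns("B"))
        If rng Is Nothing Then Exit Sub
        For Each c In rng
            ' Leading-space entries come in as text; re-entering the trimmed
            ' string lets Excel re-parse it as a real time value.
            If VarType(c.Value) = vbString Then c.Value = Trim$(c.Value)
        Next c
    End Sub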

MS Access - Data in top row appears and disappears when focus on the cell changes.

It's a bit of a weird one, but I have a linked table within my database. The table is an Excel table with identical field headings and data types, and until recently it worked fine. Now, when I traverse the linked table in Access, the data changes every other move, switching from the original row to show the data in the row below. I've had a script output the values of the top row and it displays normally; however, I can't append this linked table into anything, and I assume it's this glitch.
I'm stumped and would love any ideas as to how this happened and how it can be fixed.
This is an unusual post, as I've never quite heard of this type of issue. To sanity-check things, I would suggest that you delete your Excel table from the navigation pane in Access and then relink it.
So then perhaps I didn't understand, and I am wondering what is meant in your first post by: "The table is an excel table with identical field headings and data types"
A link to Excel is a qualified "table", so to speak. You should be able to double-click it within Access; it opens in datasheet view and you see all the data, but you can't write to it. You can't write back into the Excel file.
You can query it....
You can append the query results from the Excel link into a true Access table.
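If you want to script the delete-and-relink suggestion, a rough sketch follows; the link name, file path, and target table are assumptions:

    Sub RelinkAndAppend()
        On Error Resume Next
        DoCmd.DeleteObject acTable, "lnkExcelData"   ' removes the link only, not the file
        On Error GoTo 0
        ' Re-create the link to the workbook.
        DoCmd.TransferSpreadsheet acLink, acSpreadsheetTypeExcel12Xml, _
            "lnkExcelData", "C:\data\source.xlsx", True
        ' The link is read-only, but its query results can be appended:
        CurrentDb.Execute "INSERT INTO tblLocal SELECT * FROM lnkExcelData", dbFailOnError
    End Sub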

Easy CSV to Excel

My customer has an issue with certain .csv files: Excel auto-detects data types and alters the data when the files are opened. The current workaround is to open an instance of Excel, open the file, and go through the many-step process of choosing data types.
There is no standard format for which data elements will be in each csv file, so I've been thinking up methods to write code that is fairly flexible. To keep this short: I think I have a good idea of how to make something flexible enough to support the customer's needs, involving an append query in Access to dynamically alter/create import specifications, but I cannot figure out how to obtain values for the "Start" and "Width" fields in the MSysIMEXColumns table.
Is there a function in VBA that can help me read a csv file and gather the column names along with the "Start" and "Width" values? Bonus if you can also help me plug those values into an Access table. Thanks for your help!
First of all... there is NO "easy CSV to Excel" conversion when your customer has:
"...no standard format for which data elements will be in each csv file."
I used to work for a data processor where we combined thousands of different customer files, trying to shoehorn them into a structured database. The ways customers can mangle data are endless, and just when you think you've figured them out, they find new ways of mangling it.
I once had a customer who had the brilliant idea of storing their "Dead Beat" flag IN their full-name field, and then didn't tell us they did so. When we mailed the list out to their customers, they tried to blame us for not catching it. Can you imagine someone waking up one morning and getting junk mail addressed to "Dear, Dead Beat"?
But that's only one way "no standard format" customers can make it impossible to catch their errors. They are notorious for mixing text into number fields, and for including invisible escape characters in text fields that make printers crash. And don't even get started on how many different ways abbreviations can make data inconsistent.
Anyway... to answer your question:
If you are using CSV files, they are comma delimited. You don't need "Start" and "Width".
"Start" and "Width" are for Fixed Width files. And if your customer is giving you a fixed width file, they NEED to give you a "standard format". If they don't then you are just trying to mind read what they did. And while you can probably guess correctly most of the time, inevitably, you are going to guess wrong and your customer is going to try to blame you for the error.
Other than that, sometimes you just have to go through the long slog of having a human visually inspect things to make sure the conversion went as planned. I'd also suggest lots of counts and groupings on your data afterwards to make sure they didn't do something unexpected.
Trying to convert undocumented files is a very difficult and time consuming task. It's why they are paying you big bucks to do it.
So to answer your question again, "Start" and "Width" are for Fixed Width files. If they are sending you Fixed Width files, they need to send specifications.
If it's a csv file, you don't need "Start" and "Width". The delimiter (usually a comma) is what separates your fields.
** EDIT **
Ok... thinking through this some more... I'll take a guess at what you are doing:
1) You create and save a generic spec in Access for delimited files.
2) You open your CSV file through VBA and read the delimited header record with all the column header names.
3) You modify the MSysIMEXColumns table to include new fields and update old ones.
4) You then run your import based on the new spec you created.
If that is the case, you need to do a couple of things:
First, understand that this is a dangerous thing to do. Access uses wizards to create its system tables, and if you muck with them, you don't know how it might affect the wizards when they try to access those tables again. You are best off creating a new spec for each new file type using the Access wizards.
Second, once you come to the conclusion that you are smarter than Microsoft (which is probably a good bet anyway), you can try to make a dynamic spec file.
Third, you NEED to make sure your spec record in MSysIMEXSpecs is correct. That means you need to have it set as a delimited file and have the correct delimiter in there. Plus you need to have the FileType correct. You need to know if it's Unicode or any number of other file types that your customer could be sending you.
And by "correct delimiter" I mean... try to get your customer to send you "pipe delimited" | files. If they send you "comma delimited" files, you run the risk of them sending you text fields with comments or addresses that include a comma in the data. Say they cut and paste a street address that has a comma in it... that has the fantastic effect of splitting that address into two fields and pushing ALL of your subsequent columns off by one. It's lots of work to figure out if your data is corrupted this way. Pipes are much less likely to be populated in your data by accident.
Fourth, assuming your MSysIMEXSpecs is correct, you can then modify your MSysIMEXColumns table. You will need to extract your column headers from your csv file, and you'll need to know how many fields there are and their order. You can then modify your current records to have the new field names, add new records for new fields, or delete records if there are fewer fields than before.
I'd suggest saving them all as text fields (DataType=10, i.e. dbText) in a staging table. That way you can go back and analyze each field to see if they mixed text into numeric fields or did any other kind of craziness that customers love to do.
And since you know your spec in MSysIMEXSpecs is a delimited field file, you can give each record a "Start" field equal to the sequence your Header record calls for.
Attributes  DataType  FieldName  IndexType  SkipColumn  SpecID  Start  Width
0           10        Rule       0          0           3       1      1
0           10        From Bin   0          0           3       2      1
0           10        toBin      0          0           3       3      1
0           10        zone       0          0           3       4      1
0           10        binType    0          0           3       5      1
0           10        Warehouse  0          0           3       6      1
0           10        comment    0          0           3       7      1
Thus, the first field will have a "Start" of 1, the second field a "Start" of 2, and so on.
Your "Width" values can then all be 1; since the file is delimited, Access will figure out the correct width when it does the import.
And as long as your SpecID is pointing to the correct delimited spec, you should be able to import any csv file.
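To tie this together, here is a rough VBA sketch that reads the header row of a delimited file and rebuilds MSysIMEXColumns the way the table above shows. It assumes the spec row already exists in MSysIMEXSpecs, saved by the wizard and correctly configured as delimited; the SpecID and file path are whatever yours are:

    Sub BuildColumnSpec(csvPath As String, specId As Long)
        Dim f As Integer, header As String, cols() As String, i As Long
        ' Read just the first (header) line of the file.
        f = FreeFile
        Open csvPath For Input As #f
        Line Input #f, header
        Close #f
        cols = Split(header, ",")   ' use "|" for pipe-delimited files
        Dim db As DAO.Database
        Set db = CurrentDb
        ' Replace the spec's column rows with one row per header field.
        db.Execute "DELETE FROM MSysIMEXColumns WHERE SpecID = " & specId, dbFailOnError
        For i = 0 To UBound(cols)
            ' DataType 10 = dbText; Start = column sequence; Width = 1 (delimited).
            db.Execute "INSERT INTO MSysIMEXColumns " & _
                "(Attributes, DataType, FieldName, IndexType, SkipColumn, SpecID, Start, Width) " & _
                "VALUES (0, 10, '" & Replace(Trim$(cols(i)), "'", "''") & "', 0, False, " & _
                specId & ", " & (i + 1) & ", 1)", dbFailOnError
        Next i
    End Sub

With the columns rebuilt, something like DoCmd.TransferText acImportDelim, "MySpecName", "tblStaging", csvPath, True should run the import, where "MySpecName" is the SpecName in MSysIMEXSpecs that matches your SpecID.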
Fifth, after the data is in your staging table, you should analyze each field to make sure you know what type of data you really have and that your suspected data type doesn't violate any data-type rules. At that point you can transfer the data to a second staging table where you convert to the correct data types. You can do all of this through VBA. You'll probably need to create quite a few home-grown functions to validate your data. (This is the NOT-so-easy part of "easy CSV to Excel" coding.)
Sixth, after you are done with all of your data massaging, you can transfer the good data to your live database or Excel spreadsheet. Inevitably you'll have some records that don't fit the rules, and someone will have to eyeball the data and figure out what to do with them.
You are at the very tip of the iceberg in data-conversion management. You will soon learn about all kinds of amazingly horrible stuff, and it can take years to write automated procedures that catch all the craziness that data processors and customers send you. There are a gazillion media the data can arrive on, a gazillion data types it can be represented in, and a gazillion formats it can reside in. And that's all before you even get to think about data integrity.
You'll probably become very well acquainted with GIGO (Garbage In, Garbage Out). If your customer tries to slip a new format past you (and they will) without telling you what the "standard format for which data elements will be" is, you will be left trying to guess what the data is. And if it's garbage... best of luck trying to create an automated system for that.
Anyway, I hope the MSysIMEXColumns info helps. And if they ever give you Fixed Length files, just know you'll have to write a whole new system to get those into your database.
Best of luck :)

Importing Excel into SS2000; Error on Field; DTS

I'm trying to import an Excel file into a SQL Server 2000 database using DTS. This is nothing fancy, just a straight import. Where I work, we do this a thousand times a day. The procedure usually works without an issue, but something must have changed in the file.
I'm getting the below error:
Screen shot of Error http://www.understandingguitar.org/wp-content/uploads/2008/12/packageerror-screenshot-20081212.jpg
I've checked to ensure that the column "AssignmentID" is stored as "text" in the Excel sheet. I've also tried changing it to general; the exact same error occurs regardless of the setting. The field does contain numbers. I appreciate everyone's help on this!
Regards, Frank
Try opening the Excel file and looking at the column content.
Are any of the row values in that column right-aligned (as numbers generally are)?
I am guessing that such a row could be the problem.
It may be obvious, but is the destination string long enough to hold the string representation of the float? I'm not sure if Excel is rounding what it displays to you, so it may be worth trying with a wider column.
The answer has something to do with the fact that the procedure is expecting text, but even if you set the properties (in the Format dialog) to "text", Excel may not handle the data as text, and hence SQL Server (or the libraries) won't handle it as text.
When the procedure tries to import it, the system thinks it is converting from a number to text and expects that data may be lost (even though none will be), and the error is raised.
I figured out that I can get around this error by placing a ' (apostrophe) before each listed number [e.g. '124321]. This forces Excel to treat the number as text.
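If the file is large, a quick Excel VBA loop can add the apostrophes for you; the assumption here is that the AssignmentID values sit in column A with a header in row 1:

    Sub ForceColumnToText()
        Dim c As Range, lastRow As Long
        With ActiveSheet
            lastRow = .Cells(.Rows.Count, "A").End(xlUp).Row
            For Each c In .Range("A2:A" & lastRow)
                If Len(c.Value) > 0 And Not c.HasFormula Then
                    ' The leading apostrophe makes Excel store the value as text.
                    c.Value = "'" & CStr(c.Value)
                End If
            Next c
        End With
    End Sub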
Hopefully this will save others the headache I now have from this. :-)
Regards, Frank
