Anyone who has worked with VFP data will tell you a DBF has a fixed record length and its structure does not have variable-length columns, even with the introduction of data types like VarChar. A VarChar is still a fixed-length field in the table structure; VFP just treats it differently internally when you work with the data. If you define a numeric field as 5,2 you get two digits to the left of the decimal point, the decimal point itself, and two decimal places. Right??
Yes, but only most of the time. {g}
Today I ran into a situation that has probably happened to me only once before in more than 17 years of working with DBF files. I am working on a death march conversion. Part of the conversion does some mathematical calculations with costs. The columns we are fitting data into are N(5,2), so any value of 99.99 or less should fit just fine. We ran the conversion and the client complained about data being off by 2 cents. We figured there was some rounding problem with the data. My co-worker on the project watched the process in the debugger. The data in the SCATTER NAME object had 78 cents (.78). The next line executed was an INSERT INTO … FROM NAME, and the column ended up with 80 cents (.80).
My co-worker asked me to confirm he was seeing what he thought was crazy behavior in VFP. I asked him to BROWSE the table. Sure enough, it was true. This is roughly what we saw in a column with two decimal places of precision.
89.54
98.29
05.78
147.8
If it were a rounding problem we would have seen several of the values rounded to the tenth. What caught my eye was the missing hundredths digit in the last record, which was the current one. At worst I expected .80, but instead we saw three whole digits and one decimal place stuffed into an N(5,2) column.
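To make this easier to reproduce, here is a minimal sketch of the kind of insert that produces the last value above. The table and column names are made up for illustration; only the N(5,2) definition and the over-precise value match the situation described.

* Minimal sketch: a free table with the same N(5,2) definition
* (costdemo and cost are illustrative names, not from the conversion)
CREATE TABLE costdemo FREE (cost N(5,2))

* 147.78 needs six characters, one more than the field is wide
INSERT INTO costdemo (cost) VALUES (147.78)

* No overflow error is raised; the hundredths digit is silently dropped
BROWSE    && the cost column shows 147.8, just like the data above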
Argh!
Visual FoxPro is dynamically interpreting the structure a different way. I think it should raise an overflow condition, which would immediately stick out like a sore thumb. Instead, we lost a couple of hours tracking this down.
It is not until I enter a value with more than five significant digits into the numeric field that I get “Numeric overflow. Data was lost” (Error 39). Interesting behavior for sure. I will not be reporting this as a bug because it is probably the way DBFs have worked since the beginning of time, but it sure can lead to some expensive debugging sessions if you are not aware of it.
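Continuing with the same illustrative costdemo table, this sketch contrasts the silent case with the point where Error 39 finally shows up, based on the more-than-five-significant-digits threshold described above.

* Five significant digits: squeezed into the N(5,2) column without complaint
INSERT INTO costdemo (cost) VALUES (147.78)    && stored as 147.8, no error

* More than five significant digits: this is where
* "Numeric overflow. Data was lost" (Error 39) is raised
INSERT INTO costdemo (cost) VALUES (1234.56)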
I wrote a short test program to see whether this is specific to a SCATTER NAME object or also happens with a standard APPEND BLANK and REPLACE; the behavior is the same either way (see the sketch below).
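For reference, here is a sketch of the kind of comparison that test program made. The cursor, table, and object names are mine, not from the actual conversion code; both paths push the same over-precise value into an N(5,2) column and end up with the same truncated result.

* Sketch of the comparison test (all names are illustrative)
CREATE TABLE testcost FREE (cost N(5,2))

* Path 1: SCATTER NAME object, then INSERT INTO ... FROM NAME
CREATE CURSOR srcdata (cost N(8,2))
INSERT INTO srcdata (cost) VALUES (147.78)
SELECT srcdata
SCATTER NAME loData
INSERT INTO testcost FROM NAME loData

* Path 2: plain APPEND BLANK and REPLACE
SELECT testcost
APPEND BLANK
REPLACE cost WITH 147.78

* Both records show the same truncated value
BROWSE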
Hopefully this blog entry will save you some time with this problem down the road.