VFP: Not always fixed field structures
Jul 19

Anyone who has worked with VFP data will tell you that a DBF has a fixed record length and its structures do not have variable-length columns, even with the introduction of data types like VarChar. A VarChar is still a fixed-length field in the table structure; VFP just treats it differently internally when you work with the data. If you define a numeric field as N(5,2), you get two whole digits, a decimal point, and two decimal places. Right??

Yes, but only most of the time. {g}
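In the common case an N(5,2) field behaves exactly as advertised. A quick sketch (the cursor and field names are illustrative, not from any real code):

* Nominal layout: two whole digits, a decimal point, two decimals.
CREATE CURSOR curNominal (nCost N(5, 2))
APPEND BLANK
REPLACE nCost WITH 99.99
? nCost    && 99.99, exactly as defined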

Today I ran into a situation that has probably happened to me only once before in more than 17 years of working with DBF files. I am working on a death march conversion. Part of the conversion does some mathematical calculations with cost. The columns we are fitting data into are N(5,2), so any value of 99.99 or less should fit just fine. We ran the conversion and the client complained about data being off by 2 cents. We figured there was some rounding problem with the data. My co-worker on the project watched the process in the debugger. The data in the SCATTER NAME object had 78 cents (.78). The next line executed was an INSERT INTO … NAME, and the value that landed in the column had 80 cents (.80).

My co-worker asked me to confirm he was seeing what he thought was crazy behavior in VFP. I asked him to BROWSE the table. Sure enough, it was true. This is roughly what we saw in a column with two decimal places of precision.

89.54
98.29
05.78
147.8

If it were a rounding problem we would have seen several of the values rounded to the tenth. What caught my eye was the missing hundredths digit in the last (and current) record. At worst I expected .80, but instead we saw three whole digits and one decimal place stuffed into an N(5,2) column.

Argh!

Visual FoxPro is dynamically interpreting the structure a different way: the five in N(5,2) is the total field width, and 147.8 takes exactly five characters once you count the decimal point, so VFP sacrifices a decimal place to make the whole number fit. I think it should raise an overflow condition, which would immediately stick out like a sore thumb. Instead, we lost a couple of hours tracking this down.

It is not until I enter a value with more than five significant digits into the field that I get “Numeric overflow. Data was lost” (Error 39). Interesting behavior, for sure. I will not be reporting this as a bug because it is probably the way DBFs have worked since the beginning of time, but it sure can lead to some expensive debugging sessions if you are not aware of it.
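To make the boundary concrete, here is a minimal sketch against a hypothetical N(5,2) column; the cursor and field names are mine, not the conversion code’s:

* Hypothetical column; in N(5,2) the five is the total field width.
CREATE CURSOR curLimit (nCost N(5, 2))
APPEND BLANK
REPLACE nCost WITH 12345    && five significant digits still fit the width of five
REPLACE nCost WITH 123456   && six digits raise Error 39, “Numeric overflow. Data was lost”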

I wrote a short test program to see if this was an issue with a SCATTER NAME object versus a standard APPEND BLANK and REPLACE, but the behavior is the same either way.
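Here is a minimal sketch of that kind of test program; the cursor and field names are illustrative, and both paths quietly store 147.8 without raising an error:

* Throwaway cursor with the same N(5,2) shape as the real column.
CREATE CURSOR curTest (nCost N(5, 2))

* Path 1: APPEND BLANK plus REPLACE.
APPEND BLANK
REPLACE nCost WITH 147.78
? nCost    && displays 147.8, not 147.78, and no overflow error

* Path 2: SCATTER NAME object plus INSERT INTO ... FROM NAME.
SCATTER BLANK NAME loRecord
loRecord.nCost = 147.78
INSERT INTO curTest FROM NAME loRecord
? nCost    && same result: 147.8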

Hopefully this blog entry will save you some time with this problem down the road.

6 Responses to “VFP: Not always fixed field structures”

  1. Brian
    July 24th, 2006 at 10:16

    I am not a database programmer, so I don’t understand why that error is not a bug even if it “rarely” occurs. Either the calculation is correct or it’s not, so why is it acceptable? If I were the client, I would be very concerned for a number of reasons, including customer complaints, income reporting for tax purposes, etc. The acceptance of this calculation error by the programmer would also cause the client to question the competence of the programmer and the software, and it may lead to litigation against the company contracted to do the database programming.

  2. Anonymous
    July 24th, 2006 at 17:26

    I hate to say this, but it is not a bug; it is by design. N(5,2) means numeric, length of five, precision of two decimals. This doesn’t mean you MUST have two decimals. You can store 12345 in this field if you want.

    What I think is a bug is if you do:

    REPLACE MyField WITH 12345.1

    and you still don’t get a numeric overflow. But 123.4 is a perfectly valid value for a 5,2 numeric field.

    BOb Archer

  3. July 24th, 2006 at 18:55

    I understand your concern, Brian, but in this case the competency of the programmer was the reason the problem was discovered. When converting data from one system to another, programmers run into all kinds of problems; it is a natural course to follow. This is why we put in a check-and-balance system: to ensure the data is rock solid. The reason this problem was discovered was that the numbers from the old system did not add up to the numbers in the new system.

    So I am not accepting the error in calculation, as you suggest. I am only accepting the behavior of a database product with a 20-year legacy of backward compatibility and conformance with the DBF standard. You see, each database engine has its quirks and trade-offs. This happens to be one of them with DBFs.

    When it comes to developing applications that track numeric data, database developers add rules to ensure the size of the data stored meets the guidelines of the columns. That is why this problem rarely occurs, and it is one of the reasons I have seen it only a couple of times in more than 20 years of software development.

  4. July 24th, 2006 at 19:04

    Thanks for the additional discussion, BOb. You are correct on the example: the overflow does not fire until the value rounds to six significant digits.

    ** Works: 99999.49 rounds to 99999, which still fits the total width of five
    REPLACE nCost WITH 99999.49

    ** Triggers numeric overflow: 99999.91 rounds to 100000, six digits
    REPLACE nCost WITH 99999.91

    Any way you look at it, this is something that can bite you in the behind when you least expect it. With disk space so inexpensive these days, it makes sense to use larger numeric fields.

  5. July 2nd, 2009 at 10:07

    Use the TRANSFORM() function to solve this problem.

    For example:

    TRANSFORM(12345.1234, "999.99")

    can fit the large number 12345.1234 into a 6.2-digit field with the value 0.

  6. July 2nd, 2009 at 10:25

    Subrat,

    While your solution may not raise errors, it certainly is not the best solution, because it completely changes the true value of the data.
