[freetds] Force use of UTF-8 instead of ISO8859-1
jklowden at schemamania.org
Fri Feb 5 12:17:42 EST 2010
On Fri, Feb 05, 2010 at 04:38:31PM +0100, Frediano Ziglio wrote:
> 2010/2/5 <jklowden at schemamania.org>:
> > On Fri, Feb 05, 2010 at 03:57:03PM +0100, Frediano Ziglio wrote:
> >> Using nchar(10), "foo" gets stored as "foo       " ("foo" followed by 7
> >> spaces). Using UTF-8 encoding, a 5-character string gets stored in 10
> >> nchars, but if a character is not ASCII the value gets encoded in more
> >> than 10 bytes. SQLDescribeCol returns 10 characters, and you provided a
> >> 10-character buffer, which is insufficient to store the original data.
> >> Now... On SQLDescribeCol
> >>
> >> ColumnSizePtr
> >>
> >> [Output] Pointer to a buffer in which to return the size (in
> >> characters) of the column on the data source.
> >>
> >> so is it fine to return 10, or should we return the maximum buffer space
> >> (40 in this case)? Or perhaps we should return 0 (undetermined)?
> >
> > SQLDescribeCol is correct. It returns the logical size, the size as reported by the server. The column size is measured in characters, by definition; the server's storage requirements are not the client's concern.
> >
> > To determine the appropriate buffer size, bsqlodbc should use SQLGetDescRec and SQL_DESC_OCTET_LENGTH.
Hi Freddy,
> Our implementation correctly returns
> 10 and.... (rumble!) SQL_WCHAR... I forgot this SMALL detail....
What should it return if the encoding is UTF-8? SQL_WCHAR seems correct, unless it means UCS-2LE. I think SQL_WCHAR means "Unicode".
> I think that our driver
> is 100% correct...
SQLDescribeCol correctly returns ColumnSizePtr = 10 for nchar(10), yes.
> it detects truncation!
Hurrah! :-)
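For the archives, here's roughly what we're describing, as a sketch. It's untested, error checking is elided, and it assumes hstmt has already executed a SELECT of a single nchar(10) column over a UTF-8 connection:

    #include <sql.h>
    #include <sqlext.h>
    #include <stdio.h>

    void describe_and_fetch(SQLHSTMT hstmt)
    {
        SQLCHAR name[64];
        SQLSMALLINT namelen, type, digits, nullable;
        SQLULEN size;         /* logical size, in characters */
        SQLCHAR buf[10 + 1];  /* sized from ColumnSize: too small for UTF-8 */
        SQLLEN len;

        SQLDescribeCol(hstmt, 1, name, sizeof(name), &namelen,
                       &type, &size, &digits, &nullable);
        /* for nchar(10): type == SQL_WCHAR, size == 10 */

        SQLBindCol(hstmt, 1, SQL_C_CHAR, buf, sizeof(buf), &len);

        /* If the value holds non-ASCII characters, its UTF-8 form
         * exceeds 10 bytes.  The driver truncates, and SQLFetch
         * returns SQL_SUCCESS_WITH_INFO with SQLSTATE 01004. */
        if (SQLFetch(hstmt) == SQL_SUCCESS_WITH_INFO)
            printf("truncated: column holds %ld bytes\n", (long)len);
    }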
> At this point I would ask myself
> whether SQL_DESC_OCTET_LENGTH is correct... but perhaps we can't even use
> this value to compute the character buffer size...
The ODBC specification does not contemplate UTF-8; it assumes UCS-2. So we're extending it, and we need to remember POLA, the Principle of Least Astonishment.
SQL_DESC_OCTET_LENGTH is the length in bytes of the buffer needed to hold the data. The driver knows how the buffer is encoded. It should return -- as dbcollen() does -- the maximum size that could be required to hold any value the column can contain. For nchar(10) in UTF-8, that's 40.
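In pseudo-C, the arithmetic I have in mind is just the worst case per character. A sketch only; odbc_octet_length() is a made-up name, not anything in FreeTDS:

    /* Assuming the driver converts to UTF-8, which needs at most
     * 4 bytes per character: nchar(10) -> 10 * 4 = 40 bytes. */
    static SQLLEN
    odbc_octet_length(SQLULEN column_size_in_chars)
    {
        return (SQLLEN) (column_size_in_chars * 4);
    }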
Now let me grumble for a moment, OK? This is a perfect example of ODBC's needless complexity. Who *cares* about the server's idea of the length? The application needs to know how many bytes to allocate; it calls SQLDescribeCol and gets a useless answer. So it has to call SQLGetStmtAttr to get the SQLHDESC for the row descriptor, and then call SQLGetDescField! Wouldn't it be better to have a bytes-per-character function that takes a type as input and returns a size?
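To spell out the dance an application has to do, here's a sketch (untested, error checking elided):

    #include <sql.h>
    #include <sqlext.h>
    #include <stdlib.h>

    /* Allocate a buffer big enough for column 1's data, sized from the
     * implementation row descriptor instead of SQLDescribeCol. */
    char *
    alloc_column_buffer(SQLHSTMT hstmt, SQLLEN *sizep)
    {
        SQLHDESC ird;
        SQLLEN octet_length;

        /* first, fetch the IRD handle from the statement ... */
        SQLGetStmtAttr(hstmt, SQL_ATTR_IMP_ROW_DESC,
                       &ird, SQL_IS_POINTER, NULL);

        /* ... then ask how many bytes the column's data can occupy.
         * For nchar(10) in UTF-8, the driver should say 40. */
        SQLGetDescField(ird, 1, SQL_DESC_OCTET_LENGTH,
                        &octet_length, SQL_IS_INTEGER, NULL);

        *sizep = octet_length + 1;    /* room for the terminator */
        return malloc((size_t)*sizep);
    }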
Mind, there's no consistency. If SQLDescribeCol returns the logical length, in characters, of nchar, why doesn't it return 1 for TINYINT, SMALLINT, INT, and BIGINT? After all, each of those holds one logical value, and SQLDescribeCol doesn't describe *storage*, right? Or perhaps SQLDescribeCol's job is only to report what the server says, and the *server* is inconsistent!
OK, I'm done. I hope to try SQL_DESC_OCTET_LENGTH this weekend.
Regards,
--jkl