  • From: "Lowden, James K" <LowdenJK AT bernstein.com>
  • To: "'FreeTDS Development Group'" <freetds AT lists.ibiblio.org>
  • Subject: RE: [freetds] column_unicodedata considered harmful
  • Date: Tue, 1 Apr 2003 12:45:25 -0500

> From: ZIGLIO Frediano [mailto:Frediano.Ziglio AT vodafoneomnitel.it]
> Sent: April 1, 2003 4:02 AM
>
> > Functions such as tds7_get_data_info() that set the column width
> > should fix up the width according to the client charset. We need to
> > keep server-side and client-side representations of column sizes;
> > most of libtds only needs to know about the client side. The only
> > time server-side sizes are important is when calling functions that
> > perform conversions.
>
> This is a problem. Assume the server is latin1 and we convert to
> utf8; we have a varchar(3) holding all accented letters (like 'ééé').
> Converting to utf8 yields 6 bytes... If the client asks for the data
> length we return 6, but what if the client asks for the column size?
> IMHO we should return 3, because the server size is 3... but if the
> client uses the column size to compute the size of its buffer, it
> fails...

That is exactly my point in talking about "fixing up" the metadata. The
purpose of TDSCOLINFO is to convey the server's description of the data to
the client. If we change the data (with iconv), then we must change the
metadata to match. Using your example, we must make it look as though the
server had sent UTF-8.
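
To make that concrete, here's an untested sketch of the fixup; the struct
and field names are illustrative, not the real TDSCOLINFO layout:

/* Illustrative only -- not the actual TDSCOLINFO.  Keep the server's
 * declared size alongside the size in the client's charset. */
typedef struct
{
	int on_server_size;	/* bytes, as the server declared the column */
	int column_size;	/* bytes, in the client's charset */
} COLINFO_SKETCH;

/* Rewrite the client-side size so the metadata matches the data we will
 * hand over after iconv.  bytes_per_char describes the server charset;
 * max_bytes_per_char is the worst case in the client's (see below). */
static void
fixup_column_size(COLINFO_SKETCH * col, int bytes_per_char,
		  int max_bytes_per_char)
{
	int nchars = col->on_server_size / bytes_per_char;

	col->column_size = nchars * max_bytes_per_char;
}

With that in place, a Latin1 varchar(3) converted to UTF-8 reports a
column size of 3 * 2 = 6, as if the server itself had sent UTF-8.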

(A good way to think about all this, but much harder to implement, is as a
server gateway. Pretend you're a proxy TDS server, doing protocol and/or
charset conversions by adjusting the TDS packets flowing through you.
Your representation to your client has to be consistent, else it's useless.)

Just to be clear: the column size is a byte count, not a character count. I
don't think the APIs ever use a character count.

What should we say for UTF-8? UTF-8 uses 1-4 bytes/char, so 4 would be
safe. But how many bytes it uses is a function of the Unicode code point
it's representing, and we know the charset we're converting from. For
Latin1, we know the max is 2 bytes/char. Therefore, if your example client
asks for the column size, we'd better say 3 * 2 = 6. Every time.

Pretending for a moment that the server charset is Latin1 and the client is
UCS-2, the arithmetic is even simpler: every character converts to exactly
two bytes, so the reported column size just doubles.
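
Choosing that worst case could be table-driven. A hypothetical helper,
assuming iconv-style charset names:

#include <string.h>

/* Worst-case bytes per character when converting *from* Latin1.
 * Latin1 is U+0000..U+00FF, so UTF-8 needs at most 2 bytes and UCS-2
 * is always exactly 2.  Fall back to 4, the UTF-8 maximum, for
 * anything we haven't analyzed. */
static int
max_bytes_per_char_from_latin1(const char *to_charset)
{
	if (strcmp(to_charset, "UTF-8") == 0)
		return 2;
	if (strcmp(to_charset, "UCS-2") == 0)
		return 2;
	return 4;
}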

> > Freddy: tds7_get_data_info() discards 5 collation bytes because we
> > haven't installed our collation structure in TDSCOLINFO. When we do,
> > we'll have server-reported per-column charset information. We'll then
> > have to make a design decision about whether or not to convert all
> > such columns to a single client charset.
>
> In db-lib and ct-lib you set the charset only before connecting, so
> this is the correct way. In ODBC I can't see any method to set the
> charset... very strange...

Actually, it's an option. I should have known. :/

In ODBC, I think you want SQL_COPT_SS_TRANSLATE. The documentation says it
"causes the driver to translate characters between the client and server
code pages".

In db-lib, it's the very poorly described pair of options DBANSItoOEM and
DBOEMtoANSI. The documentation for dbsetopt() implies the option is sent to
the server, but I don't believe it. Elsewhere, the docs speak of using this
"DB-Library option", and there's no DBCC or SET option for this
functionality. The conversion is performed by the driver.

I'm still running 7.0, not 2000. In SQL Server 2000, individual columns in
a result set may each have their own character set (it's no longer a
server-wide setting). It's not clear to me how db-lib is supposed to cope
with that; if anyone has clear information or is willing to dig through the
docs, I'd be interested. For the present, I'd say we do any and all
conversions if DBANSItoOEM is set (the default), else none.
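
Turning it on would look something like the below. Caveats: this uses the
FreeTDS dbsetopt() prototype, which carries an int_param that Microsoft's
version lacks, and it assumes DBANSItoOEM is defined in our headers as it
is in Microsoft's; I haven't verified whether the parameters are even
examined for this option.

#include <sybfront.h>
#include <sybdb.h>

/* Enable driver-side ANSI-to-OEM translation on one connection. */
static RETCODE
enable_ansi_to_oem(DBPROCESS * dbproc)
{
	return dbsetopt(dbproc, DBANSItoOEM, "", 1);
}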

> perhaps MS uses Windows to discover the charset

It's in the registry.

> (you know MS products are very unportable...) ??

So I've heard. ;-)

> Perhaps the problem with ODBC is that the client can read characters
> using single-byte or UCS2... (MS very seldom uses utf8...)

Sybase can, though. We should study Sybase's metadata while it uses UTF-8.


> I discovered how to skip a character using iconv !!

Maybe I'm missing something. It doesn't look hard to me.

For single- or double-byte encoded inputs, we know how many bytes to skip.

For UTF-8, see http://czyborra.com/utf/#UTF-8 especially this table:

bytes | bits | representation
------+------+------------------------------------
  1   |   7  | 0vvvvvvv
  2   |  11  | 110vvvvv 10vvvvvv
  3   |  16  | 1110vvvv 10vvvvvv 10vvvvvv
  4   |  21  | 11110vvv 10vvvvvv 10vvvvvv 10vvvvvv

Any UTF-8 value >127 is part of a multibyte character. If (byte & 64) is
true, you have a lead byte that will be followed by some "10vvvvvv" bytes; a
simple shift-loop will get you to the end. Insert a '?' in the output, and
continue from there.
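
A sketch of that loop (untested; the helper name is mine):

/* Skip one unconvertible UTF-8 character starting at p, write a '?'
 * placeholder, and return a pointer to the next character. */
static const char *
utf8_skip_char(const char *p, char *out)
{
	unsigned char b = (unsigned char) *p;
	int len = 0;

	*out = '?';

	if (b < 0x80)			/* 0vvvvvvv: one byte */
		return p + 1;
	if ((b & 0x40) == 0)		/* 10vvvvvv: a stray trail byte */
		return p + 1;

	/* lead byte: count the leading 1-bits to learn the length */
	while (b & 0x80) {
		++len;
		b <<= 1;
	}

	/* consume the lead byte and its "10vvvvvv" followers, stopping
	 * early if the sequence is truncated or malformed */
	++p;
	while (--len > 0 && ((unsigned char) *p & 0xC0) == 0x80)
		++p;
	return p;
}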

Self-synchronization was part of UTF-8's design; its authors wanted
exactly this kind of recovery to be easy.

Regards,

--jkl

