  • From: "James K. Lowden" <jklowden AT schemamania.org>
  • To: TDS Development Group <freetds AT lists.ibiblio.org>
  • Subject: [freetds] column_unicodedata considered harmful
  • Date: Mon, 31 Mar 2003 07:27:55 -0500

Generalizing from "ASCII client and sometimes UCS-2 server" to "any
client charset and any server charset" is going to be hard. There
must be a hundred "/2" and "*2" adjustments and "if unicode" tests in
the code. In the general case, they're all wrong.

Consider line 1634 in tds/token.c:

if (curcol->column_unicodedata) {
        colsize /= 2;

Bzzt. Thanks for playing. In the general case, the question isn't "Does
this column hold unicode data?" but rather "Does this column's data
require conversion to the client's character set?" Examples of situations
where the above wouldn't work:

1. Mac OS X (or Win32), where the client is UCS-2.
2. Sybase server sends UTF-8.

In programming as in life, you can't get the right answer if you don't ask
the right question. We need a better design principle.
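To make it concrete, the test at line 1634 ought to look something like
the sketch below. Every name in it is invented; we have no such fields
today:

#include <string.h>

/* The right question: do the server's and client's charsets differ? */
static int
col_needs_conversion(const char *server_charset, const char *client_charset)
{
        return strcmp(server_charset, client_charset) != 0;
}

/* Scale a wire size to a client size by each charset's maximum
 * bytes-per-character, instead of hardcoding "/2". */
static int
client_colsize(int wiresize, int server_maxbytes, int client_maxbytes)
{
        return wiresize / server_maxbytes * client_maxbytes;
}

With that, nvarchar(256) from a UCS-2 server comes out as 512/2*1 = 256
bytes for an ASCII client, and the UTF-8 server and UCS-2 client cases
fall out of the same arithmetic.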

In the large, there are 3 ways we could manage character set conversion:

1. At the wire. When reading/writing, perform the conversion to/from the
local character set. The bulk of FreeTDS then handles client-charset
data.

2. At the API. Perform the conversion at the last possible moment, when
presenting/accepting data to/from the client. The bulk of FreeTDS then
handles server-charset data.

3. Canonical. Combine #1 and #2. I made this suggestion and we
discarded it, but I include it here for completeness.

At the moment, I think we agree that #1 is the best option. It requires
the fewest calls to iconv(), and has so far let the bulk of FreeTDS
operate on ASCII data.
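As a sketch of what "at the wire" means, something like this (the
function is invented for illustration, and error handling is minimal):

#include <iconv.h>
#include <stddef.h>

/* Convert bytes as they come off the wire, so everything above
 * this layer sees only client-charset data.  Returns the number
 * of bytes produced, or (size_t)-1 on failure. */
static size_t
wire_to_client(iconv_t cd, char *wire, size_t wirelen,
               char *out, size_t outmax)
{
        size_t inleft = wirelen, outleft = outmax;

        if (iconv(cd, &wire, &inleft, &out, &outleft) == (size_t)-1)
                return (size_t)-1;      /* conversion failed */

        return outmax - outleft;        /* bytes produced */
}

Here cd would come from something like iconv_open("ISO8859-1",
"UCS-2LE") for an ASCII-ish client reading from a UCS-2 server.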

It has some undesirable qualities, the foremost being that the data and
its metadata don't always match. Example: an nvarchar(256) will be
represented by the server, correctly, as 512 bytes wide, and we'll pass
that information to the client with e.g. dbcollen(). As far as an ASCII
client is concerned, that column is 256 bytes wide, not 512. We can't
munge the metadata immediately on receipt from the server, though, because
libtds needs to know the true size according to the server. [Objects.
Wouldn't objects be nice?]

This metadata desynchronization isn't insurmountable; we can keep or
compute both sizes if we know the char/byte ratio of the controlling
character set. It just means we need two metadata structures for each
such column, and must remember which one to use when.
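In code terms, something as simple as this would do (the field names are
invented):

/* Keep both widths side by side. */
struct col_width {
        int server_bytes;       /* size on the wire, per the server */
        int client_bytes;       /* size dbcollen() should report    */
};

/* nvarchar(256): UCS-2 on the wire, single-byte client charset */
struct col_width nv = { 256 * 2, 256 * 1 };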

Sticking with option #1, I think we should isolate all client-server
charset conversions (and endian handling, while we're at it) in a single
module. The functions in that module should be the only ones that deal
with the server's character set and with conversion to/from the client
charset. Only they should deal with the char/byte ratios. They should
also be responsible for fixing up the metadata, so that the rest of the
code doesn't worry about dividing by two, or even about what the
server's character set happens to be.
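One possible shape for that module's interface, with every name invented
for illustration:

typedef struct tds_conv TDSCONV;        /* opaque conversion state */

TDSCONV *conv_open(const char *server_charset, const char *client_charset);
size_t   conv_wire_to_client(TDSCONV *conv, const char *in, size_t inlen,
                             char *out, size_t outmax);
size_t   conv_client_to_wire(TDSCONV *conv, const char *in, size_t inlen,
                             char *out, size_t outmax);
int      conv_client_colsize(const TDSCONV *conv, int wiresize);
void     conv_close(TDSCONV *conv);

Everything else in libtds would call these and never touch iconv() or a
char/byte ratio directly.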

--jkl



