  • From: "James K. Lowden" <jklowden AT schemamania.org>
  • To: TDS Development Group <freetds AT lists.ibiblio.org>
  • Subject: [freetds] Committed: Basic UTF-8 changes
  • Date: Sun, 6 Apr 2003 19:32:21 -0400

All,

Note: What's in CVS right now probably doesn't work. I don't have a test
server up, and I'm running out of time. Given the extent of my changes, I
wanted to update the repository so that things don't get too far out of
synch. I could have created a branch, but my development practice is: go
straight ahead. Whatever I've broken, we have more than enough collective
talent to fix.

Anyone looking for a recent working snapshot should use
ftp://ftp.ibiblio.org/pub/Linux/ALPHA/freetds/current/freetds-pre-utf8.tgz.
That's the last snapshot without my changes.

What's up
---------

To use a Microsoft server, FreeTDS used to assume its client would use a
single-byte character set, e.g. ISO-8859-1. This assumption pervaded the
data structures and logic. What I've done is excise as much of that as I
could find, replacing the ASCII<->UCS-2 idea with a generalized
client<->server one.

My immediate goal is to substitute my generalized approach while retaining
the existing functionality. I am not trying to add new capabilities right
now. Once we can handle the current conversions more generally, it will
be possible to add new ones.

An important consequence (apart from nonworking code) of the change is
that *all* character data will be passed through iconv if it's linked in.
Previously, iconv was employed only to convert UCS-2 data. Now, what
we're saying is that the server and client may have any encoding. For
example, we could have UTF-8 on the client and ISO-8859-15 on the server.
A simple varchar(30) field with accented characters (>127) would need up
to 2 byte/char on the client.
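
To make the expansion concrete, here's the kind of conversion we're
talking about, written against plain iconv(3). This is an illustration
only, not FreeTDS code, and error handling is omitted:

    #include <iconv.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char in[] = "caf\xe9";       /* "café": 4 bytes in ISO-8859-15 */
        char out[16];
        char *pin = in, *pout = out;
        size_t inleft = strlen(in), outleft = sizeof(out);

        iconv_t cd = iconv_open("UTF-8", "ISO-8859-15");
        iconv(cd, &pin, &inleft, &pout, &outleft);
        iconv_close(cd);

        /* Prints 5: the accented character needs 2 byte/char in UTF-8;
         * the plain ASCII characters still need only 1. */
        printf("%zu bytes in UTF-8\n", sizeof(out) - outleft);
        return 0;
    }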

We will need a way in freetds.conf to indicate that iconv should not be
used, meaning the client and server are "guaranteed" to use the same
encoding. I haven't done anything about that yet.

Details
-------

TDSSOCKET used to have "void *iconv_info"; it now has "TDSICONVINFO
iconv_info" (no pointer), because I could see no reason to allocate the
memory dynamically. mem.c is commensurately smaller. ;-)

The TDSICONVINFO is completely different. It has two iconv conversion
descriptors, one each for input and output. In addition, it has a new
pair of TDS_ENCODING structures that describe the client and server
character sets. Unfortunately, I haven't been able to test their
initialization.
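
In code, the shape is roughly this; the member names are my shorthand
for this message, not necessarily what you'll find in tds.h:

    #include <iconv.h>

    typedef struct tds_encoding      /* see the next paragraph */
    {
        const char *name;            /* as reported by "iconv -l" */
        unsigned char min_bytes_per_char;
        unsigned char max_bytes_per_char;
    } TDS_ENCODING;

    typedef struct tds_iconv_info
    {
        iconv_t to_server;           /* client encoding -> server encoding */
        iconv_t from_server;         /* server encoding -> client encoding */
        TDS_ENCODING client_charset;
        TDS_ENCODING server_charset;
    } TDSICONVINFO;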

Some character sets use a fixed byte/char ratio and some don't.
TDS_ENCODING keeps min/max values for that ratio. For fixed-length
character sets, the min and max are equal. By keeping both values, it's
possible to calculate the most pessimistic (largest) conversion ratio, for
allocation needs.
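
Using those ratios, the pessimistic size for a conversion works out to
something like this (illustrative names again):

    /* Worst case: every input character is as short as the source
     * encoding allows, and every output character is as long as the
     * destination encoding allows. */
    static size_t
    worst_case_size(size_t input_bytes,
                    const TDS_ENCODING *from, const TDS_ENCODING *to)
    {
        size_t nchars = input_bytes / from->min_bytes_per_char;
        return nchars * to->max_bytes_per_char;
    }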

These ratios are listed in a new file, src/tds/character_sets.h. I think
I have all the 8859 variants correct, but I might not. And, since I know
nothing about ideographic languages, I didn't attempt anything with them.
The names in the file were taken from "iconv -l"; if more than one name
describes the same encoding, I used just the first one. We can map
synonyms later, perhaps when we begin inferring the character set from
nl_langinfo(3) instead of freetds.conf.
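
Each entry amounts to little more than a name and the two ratios. The
array name and exact layout below are my guesses; look at the file for
the real thing. (The UTF-8 maximum of 3 is explained under Notes.)

    static const TDS_ENCODING character_sets[] = {
        { "ISO-8859-1",  1, 1 },
        { "ISO-8859-15", 1, 1 },
        { "UCS-2LE",     2, 2 },
        { "UTF-8",       1, 3 },
    };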

You'll note there's no provision for per-column conversions, even though
we know SQL Server 2000 sends per-column "collation" (character set
information). The solution will be to add a TDSICONVINFO pointer to
TDSCOLINFO, and apply the same strategy.
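
In other words, something along these lines; this is only a sketch of
the idea, nothing in the tree yet:

    typedef struct tds_colinfo
    {
        /* ... existing column metadata ... */
        TDSICONVINFO *iconv_info;    /* NULL: use the connection-wide one */
    } TDSCOLINFO;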

Notes
-----

An important principle to keep in mind as you're dealing with sizes: A
size is a number of bytes. That goes for column widths and data lengths.
When the server sends us an nchar(30) column, it says it's 60 bytes wide.
For an ASCII client, we convert that to 30 bytes. The client libraries
should never, ever see 60. Likewise, on a UTF-8 client, that same column
will be 3 * 30 = 90 bytes wide, and that's what the client libraries (and
clients, of course) should always see.
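
In terms of the byte/char ratios, here's the same arithmetic as a tiny
worked example:

    #include <stdio.h>

    int main(void)
    {
        int wire_bytes = 60;        /* nchar(30) as the server reports it */
        int chars = wire_bytes / 2; /* UCS-2 is a fixed 2 byte/char: 30   */

        /* ASCII client: 1 byte/char -> 30 bytes */
        printf("ASCII client sees %d bytes\n", chars * 1);

        /* UTF-8 client, with the 3 byte/char cap discussed below -> 90 */
        printf("UTF-8 client sees %d bytes\n", chars * 3);

        return 0;
    }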

You'll note that UTF-8 has a maximum byte/char of 3. Why 3? As far as we
know, none of the character sets Microsoft supports requires more than 3
byte/char in UTF-8. This is probably just wrong, because Unicode columns
can hold any Unicode character, Microsoft's choices notwithstanding.

UTF-8's true upper limit is 4 byte/char. In converting from single-byte
encodings, though, it will need only 2 or 3 byte/char, max, depending on
the encoding (or even just one, in the case of ASCII). That's an
opportunity for optimization, and it would be very useful with large text
columns. All the required information is in TDSICONVINFO. What remains
is to provide the logic.

It should be obvious, but I've never seen it stated as such: there's no
fixed relationship between the server's datatype and the data's size.
Many of us have internalized the idea that nchar columns are 2 byte/char
Because They Are Unicode, and that char columns are 1 byte/char Because
They Are Not. It is just not so. Those things are true on the server,
because that's how the server encodes them. The size of a [n]char field
on the client reflects the client's encoding, not the server's.

Apologia
--------

A more responsible maintainer would have tested these changes (at least
somewhat!) before committing them. I figure in the next few days I'll be
able to correct whatever's broken, but in the meantime no one has working
code. I'm sorry for the inconvenience.

--jkl





