
[freetds] UTF-8 progress

  • From: "James K. Lowden" <jklowden AT schemamania.org>
  • To: TDS Development Group <freetds AT lists.ibiblio.org>
  • Subject: [freetds] UTF-8 progress
  • Date: Sun, 16 Nov 2003 03:21:12 -0500

Freddy,

Trying to get src/tds/unittests/utf_1.c to work.

Nice test, by the way.

When the column metadata arrive, we call adjust_character_column_size().
As far as the client is concerned, the column is as wide as need be for
post-converted data. An nvarchar(10) would have a column_size of 5 for an
ISO-8859-1 client, 10 for UCS-2 (one day), and 40 for UTF-8 (allowing for
the worst-case scenario).
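
In sketch form, the adjustment amounts to something like this (the names
and factors below are my own illustration, not the actual
adjust_character_column_size(), whose worst-case arithmetic may be more
pessimistic):

    /* Illustrative only; the real adjust_character_column_size()
     * lives in src/tds and may use different worst-case factors. */
    enum { SERVER_BYTES_PER_CHAR = 2 }; /* nchar/nvarchar arrive as UCS-2 */

    static int
    adjusted_column_size(int wire_column_size, int client_max_bytes_per_char)
    {
            int nchars = wire_column_size / SERVER_BYTES_PER_CHAR;

            /* Size the column for the worst post-conversion expansion,
             * so a fixed buffer always holds the converted data. */
            return nchars * client_max_bytes_per_char;
    }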

For nchar/nvarchar, we then allocate a fixed buffer for the column to read
the row data into. We can't do that for blobs, because their stated
maximum length is 2 GB.

But we were doing something both unnecessary and ugly, afaict. Instead of
passing blob_info->textvalue to tds_get_char_data() as dest, we cast
blob_info to char*. Then, in tds_get_char_data(), we reversed the cast.

I changed it to pass blob_info->textvalue, and fixed a bunch of other
things along the way.
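
In outline, the change is just this (prototypes approximate, from memory,
not the exact code):

    /* Before: smuggle the struct through the dest argument ... */
    tds_get_char_data(tds, (char *) blob_info, colsize, curcol);
    /* ... and undo the cast on the other side, inside tds_get_char_data(): */
    blob_info = (TDSBLOBINFO *) dest;

    /* After: pass the real destination buffer; no casts anywhere. */
    tds_get_char_data(tds, blob_info->textvalue, colsize, curcol);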

utf_1.c now works with nvarchar for all strings[], and with text for:

English,
Spanish,
French,
Portuguese

It breaks on Russian. I'm sure that's because text is single-byte
encoded, and Russian can't be represented in my server's charset. So I
think the test is broken.

read_and_convert() is now simpler and more robust (if I do say so
myself), and handles UTF-8, as promised. I haven't tested the
chunk-boundary logic yet; I was kinda hoping the unit test would do that
for me.
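
For the curious, the chunk-boundary problem is the usual iconv(3) one: a
multibyte sequence can be split across two reads, and iconv() reports the
cut-off tail with EINVAL. A standalone sketch of the technique (names and
buffer sizes made up; this is not the actual read_and_convert()):

    #include <errno.h>
    #include <iconv.h>
    #include <stdio.h>
    #include <string.h>

    /* Sketch only: convert a stream in small chunks, carrying any
     * partial multibyte sequence over to the next read. */
    static int
    convert_stream(iconv_t cd, FILE *in, FILE *out)
    {
            char inbuf[16], outbuf[64];
            char *inp, *outp;
            size_t inleft = 0, outleft, n, r;

            for (;;) {
                    /* Top up the buffer behind any leftover bytes. */
                    n = fread(inbuf + inleft, 1, sizeof(inbuf) - inleft, in);
                    if (n == 0 && inleft == 0)
                            break;                  /* clean EOF */
                    inleft += n;

                    inp = inbuf;
                    outp = outbuf;
                    outleft = sizeof(outbuf);
                    r = iconv(cd, &inp, &inleft, &outp, &outleft);
                    fwrite(outbuf, 1, (size_t) (outp - outbuf), out);

                    if (r == (size_t) -1) {
                            if (errno == EILSEQ)
                                    return -1;      /* genuinely bad input */
                            if (errno == EINVAL && n == 0)
                                    return -1;      /* truncated at EOF */
                            /* EINVAL mid-stream (sequence split across
                             * chunks) or E2BIG (outbuf full): keep going. */
                    }
                    /* Slide the unconverted tail to the front so the
                     * next fread() can complete it. */
                    memmove(inbuf, inp, inleft);
            }
            return 0;
    }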

I also manually re-indented some header files, so the comments line up and
things like that. Please don't run them through indent(1) again. :-)

Enjoy.

--jkl



