- From: "James K. Lowden" <jklowden AT schemamania.org>
- To: TDS Development Group <freetds AT lists.ibiblio.org>
- Subject: [freetds] UCS-2 strings
- Date: Mon, 14 Apr 2003 01:41:39 -0400
All,
Many, many API functions pass or return null-terminated strings. The
question I'm wrestling with is how to interpret those strings for
non-ASCII clients.
Right now, db-lib, for instance, doesn't track the character set very closely.
You pass dbcmd() a char*; it uses strlen(), strcat(), and strcpy() to fill
the command buffer for you. The buffer is converted to UCS-2 on the way
out the door, in write.c, if the server is UCS-2. If your table happens
to have a column "År" (Year), and your character set is set up correctly,
libtds will convert it to UCS-2 and everything works. At least in theory.
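To make the conversion step concrete, it amounts to something like the
standalone iconv(3) sketch below. This is only an illustration, not the
actual code in write.c; the charset names are examples, and the client
here is assumed to be ISO-8859-1.

#include <iconv.h>
#include <stdio.h>
#include <string.h>

int
main(void)
{
	char in[] = "select \xc5r from t";	/* "År" in ISO-8859-1 */
	char out[256];
	char *pin = in, *pout = out;
	size_t inleft = strlen(in), outleft = sizeof(out);
	iconv_t cd = iconv_open("UCS-2LE", "ISO-8859-1");

	if (cd == (iconv_t) -1)
		return 1;
	if (iconv(cd, &pin, &inleft, &pout, &outleft) == (size_t) -1)
		return 1;
	printf("%u bytes of UCS-2 written\n",
	       (unsigned) (sizeof(out) - outleft));
	iconv_close(cd);
	return 0;
}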
Now suppose we want to support a UCS-2 client. strlen() & Co. become our
nemesis, because a UCS-2 buffer has zeros throughout. Termination is
signified by 16 zero bits, not 8.
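Here is the failure mode in miniature, with a hypothetical ucs2_strlen()
showing what a replacement would have to do; the UCS-2LE buffer is
hand-built for the example:

#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Length in characters of a UCS-2 string: scan 16 bits at a time. */
static size_t
ucs2_strlen(const char *s)
{
	size_t n = 0;

	while (s[0] != '\0' || s[1] != '\0') {	/* stop only at 16 zero bits */
		s += 2;
		++n;
	}
	return n;
}

int
main(void)
{
	/* "År" in UCS-2LE: 0x00C5, 0x0072, then the 16-bit terminator. */
	const char ucs2[] = { '\xc5', '\0', 'r', '\0', '\0', '\0' };

	assert(strlen(ucs2) == 1);	/* fooled by the first embedded zero */
	assert(ucs2_strlen(ucs2) == 2);	/* two UCS-2 characters */
	return 0;
}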
The question is, should we accept UCS-2 buffers everywhere, and, if so,
how do we know that's what we're getting?
There are two alternatives I can see:
1. Accept UCS-2 buffers. Determine terminator from
DBPROCESS::tds_socket->iconv_info->client_charset. Write static, local
replacements for strlen() etc. that use client_charset to do the Right
Thing. Could get interesting for things like DBSETLUSER() that don't have
a dbproc. Might *have* to rely on nl_langinfo(3) for some things (a rough
sketch follows below the list).
2. Require buffers in a single-byte character set. I think this is the
way Microsoft clients work in NT-ish systems.
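For the no-dbproc case in alternative 1, I'm picturing something roughly
like the sketch below. The function name is made up and the name test is
far too naive for real use; with a dbproc in hand we'd consult
tds_socket->iconv_info->client_charset instead of passing NULL.

#include <langinfo.h>
#include <string.h>

/*
 * No dbproc, so no iconv_info->client_charset to consult: fall back to
 * the locale's codeset.  Assumes the application has already called
 * setlocale(LC_ALL, "").
 */
static int
client_is_ucs2(const char *client_charset)
{
	if (client_charset == NULL)
		client_charset = nl_langinfo(CODESET);
	return strncmp(client_charset, "UCS-2", 5) == 0;
}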
I'm interested in your thoughts on these two, or any Third Way you might
have in mind.
Thanks.
--jkl