freetds - Re: Unicode, UTF-8, and Greek

  • From: Steve Langasek <vorlon AT netexpress.net>
  • To: TDS Development Group <freetds AT franklin.oit.unc.edu>
  • Subject: Re: Unicode, UTF-8, and Greek
  • Date: Tue, 17 Jul 2001 08:04:53 -0500 (CDT)


On Mon, 16 Jul 2001, Brian Bruns wrote:

> On Mon, 16 Jul 2001, Steve Langasek wrote:
>
> > On Sat, 14 Jul 2001, Brian Bruns wrote:
> >
> > > That was very insightful. I think we have three major issues here:
> >
> > > 1) TDS 7/8. Everything is unicode, so we should just be able to read the
> > > field in freetds.conf and use iconv to convert between UCS2 and that. If
> > > there is no entry in freetds.conf, we could try to grab the value of
> > > $LANG, although I'm not sure how that maps to the iconv labels. Failing
> > > there we simply use the present 7bit ASCII conversion
> >
> > I rather favor defaulting to UTF-8 instead of ASCII when no charset has
> > been specified. If there's non-ascii data in a database, it's because
> > someone put it there; doesn't it make more sense to treat this data
> > losslessly by default, passing it through for the client app to deal with?

> Ok, my concern is application support: do PHP, Perl (DBD::Sybase,
> DBD::ODBC), sqsh, Gnome-DB, SybSQL, sybtcl, and Python (although we don't
> have this working yet) support UTF-8 out of the box in some reasonable
> fashion that is going to make this work?
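
(On the side question in point 1 about mapping $LANG onto iconv labels: on
glibc, at least, nl_langinfo(CODESET) already hands back a name that
iconv_open() will accept, so the fallback could be roughly the sketch below.
Treat it as a rough illustration rather than tested code; other libcs may
need a lookup table.)

    #include <langinfo.h>
    #include <locale.h>
    #include <stdio.h>

    int main(void)
    {
            /* Honour $LANG / $LC_CTYPE from the environment. */
            setlocale(LC_CTYPE, "");
            /* On glibc this returns a charset name that iconv_open()
             * understands, e.g. "UTF-8" or "ISO-8859-7". */
            printf("client charset: %s\n", nl_langinfo(CODESET));
            return 0;
    }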

Two significant reasons why so many people like UTF-8 as an encoding for
Unicode are that it's byte-oriented, and therefore endian-neutral; and that
it's byte-stream compatible with the ISO-8859-x encodings in the sense that
the set of bytes used in the representation of printable characters in UTF-8
is equivalent to the set of bytes used for printable characters in the 8859
charsets. Using UTF-8 as a default encoding means that Unicode-aware apps
will have full access to the data in the database, and other apps will
occasionally see some characters which don't make sense in context, but are
nevertheless printable -- IOW, precisely the same situation people are in
now with the ASCII-truncation solution, except that UTF-8 makes *all* of the
salient data available by default. The only downside I can see to using
UTF-8 is that Unicode-dumb apps will incorrectly calculate string lengths,
seeing a three-byte multibyte character as three characters instead of one.
I can't imagine where this would be a problem, but I may be overlooking
something.
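
To make the length issue concrete, here's a trivial sketch (assuming a UTF-8
locale is available); the Greek string below is three characters but six
bytes, and a byte-counting strlen() reports the latter:

    #include <locale.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
            /* alpha, beta, gamma in UTF-8: two bytes per character */
            const char *s = "\xce\xb1\xce\xb2\xce\xb3";

            /* needs a UTF-8 locale, otherwise mbstowcs() reports an error */
            setlocale(LC_CTYPE, "");
            printf("bytes: %lu\n", (unsigned long) strlen(s));            /* 6 */
            printf("chars: %lu\n", (unsigned long) mbstowcs(NULL, s, 0)); /* 3 */
            return 0;
    }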

In terms of NT-compatibility, then, UTF-8 by default seems like the best
possible solution to me. And with either default, users of non-UTF8 apps
are going to need to do /some/ configuration in order to handle extended
data; at least a UTF8 default works out of the box with some applications.
:)
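
(For concreteness, the sort of freetds.conf entry I'm picturing -- the option
name "client charset" is just a placeholder until we settle on one:)

    [myserver]
            host = ntbox.example.com
            port = 1433
            tds version = 7.0
            client charset = UTF-8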

> My belief is that several of them are ok, but some may not be.

> When accessing a database via TDS 4.2 the data is returned using the
> default character set of the database. I would assume the appropriate
> thing to do in this case is preserve that charset, whatever it may be.

> So, if we have a MS SQL server being accessed with 4.2, it comes down in a
> national charset, but the same database under TDS 7.0 is unicode and is
> converted to UTF-8. That's a big change for just switching the protocol
> version.

But isn't this essentially what happens on the Windows side, as well? :)
Or, I suppose, Microsoft may convert everything to UCS2 for API
compatibility; in which case it doesn't seem a bad idea to do likewise, and
use UTF8 as freetds's default client-side charset even for 4.2.

The way I see it, if there's a pre-populated database that someone's now
trying to connect to from Unix, they've been connecting to it using
Microsoft APIs, so they're probably already used to using UCS-2; and if
there's nothing in the database yet, giving users UTF8 support by default
isn't necessarily a bad thing.


> > For the most part, I think iconv will be doing all the hard work. If we
> > have working conversion from UCS2->UTF8 (wchar->mbyte) and
> > UCS2->ISO-8859-x (wchar->char), I expect we'll get working conversion to
> > other multibyte charsets for free.

> iconv does most of it, yes. However, I have no idea what multibyte or
> variable byte (UTF-8) streams look like on the wire, so handling these in
> libtds is currently very broken, and unlikely to be fixed anytime soon.

Again, except where something cares about the length of a string in
characters (as opposed to the length in bytes), there's really nothing
special that needs to be done. I would presume the protocol specifies all
lengths in terms of bytes, since this is the most practical way of providing
indices into a bytestream.
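
For what it's worth, the conversion call itself is short; something along
these lines (assuming glibc's iconv and the little-endian UCS-2 that TDS 7
puts on the wire -- ucs2_to_utf8 is just an illustrative name, and error
handling is pared down):

    #include <iconv.h>
    #include <stddef.h>

    /* Convert a UCS-2LE buffer into a caller-supplied UTF-8 buffer.
     * Returns 0 on success, -1 on failure.  Assumes the caller leaves
     * room in 'out' for the terminating NUL. */
    int ucs2_to_utf8(const char *in, size_t inlen, char *out, size_t outlen)
    {
            iconv_t cd = iconv_open("UTF-8", "UCS-2LE");
            char *ip = (char *) in, *op = out;

            if (cd == (iconv_t) -1)
                    return -1;
            if (iconv(cd, &ip, &inlen, &op, &outlen) == (size_t) -1) {
                    iconv_close(cd);
                    return -1;
            }
            iconv_close(cd);
            *op = '\0';
            return 0;
    }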

Steve Langasek
postmodern programmer




