freetds - Re: [freetds] Help with charset

freetds AT lists.ibiblio.org

  • From: "James K. Lowden" <jklowden AT freetds.org>
  • To: freetds AT lists.ibiblio.org
  • Subject: Re: [freetds] Help with charset
  • Date: Tue, 15 Nov 2011 20:34:47 -0500

On Mon, 14 Nov 2011 13:11:49 -0800
Steve Langasek <vorlon AT dodds.net> wrote:

> > > I *think* that the client charset needs to match what the server
> > > is sending.  So if the server is sending CP850, you need to set
> > > freetds to expect that, or things will get garbled.  Did you try
> > > that?  What was the result?
>
> > Hmm... indeed.
> > Making "client charset = CP850" did the trick.
>
> How did FreeTDS end up with such awful semantics for 'client
> charset'? Why should 'client charset' be interpreted as meaning "the
> charset the server uses for encoding data"?

Hi Steve,

You'll be pleased to know things are nowhere near as broken as they
seem. It's not clear to me they're broken at all.

Let me state this explicitly, so that no one reading the archives gets
the wrong impression: "client charset" describes the client, not the
server.
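
If it helps to see that spelled out, here is a minimal db-lib sketch of
a client declaring its own encoding. The server name and credentials
are invented for illustration; the call serves the same purpose as
"client charset = UTF-8" in freetds.conf:

#include <sybfront.h>
#include <sybdb.h>

int main(void)
{
    LOGINREC *login;

    if (dbinit() == FAIL)
        return 1;

    login = dblogin();
    DBSETLUSER(login, "me");            /* made-up credentials */
    DBSETLPWD(login, "secret");
    DBSETLCHARSET(login, "UTF-8");      /* the encoding *this client* wants;
                                           it says nothing about the server */

    /* ... dbopen(login, "myserver"), queries, and so on ... */

    dbloginfree(login);
    dbexit();
    return 0;
}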

I just retrieved Swedish text from today's Svenska Dagbladet to a file.
I created a table on my server using Finnish_Swedish_CS_AS collation,
and uploaded ISO 8859-1 data. I then downloaded it with "client
charset utf-8". These are the files:

$ file se*
se.8859-1: ISO-8859 text # input
se.utf8: UTF-8 Unicode text # output

In my TDSDUMP file, I see these references to encoding:

$ grep -E 'charset|iconv' dump
iconv.c:328:tds_iconv_open(0x7f7ffd637120, UTF-8)
iconv.c:351:Using trivial iconv
iconv.c:185:local name for ISO-8859-1 is ISO-8859-1
iconv.c:185:local name for UTF-8 is UTF-8
iconv.c:185:local name for UCS-2LE is UCS-2LE
iconv.c:185:local name for UCS-2BE is UCS-2BE
iconv.c:347:setting up conversions for client charset "UTF-8"
iconv.c:349:preparing iconv for "UTF-8" <-> "UCS-2LE" conversion
iconv.c:389:preparing iconv for "ISO-8859-1" <-> "UCS-2LE" conversion
iconv.c:392:tds_iconv_open: done

Those are the preliminaries. I'm set up for UTF-8. The server hasn't
announced its encoding yet, so we have to wait. It shows up in an
ENVCHANGE packet:

token.c:2086:server indicated charset change to "iso_1"
iconv.c:986:setting server single-byte charset to "CP1252"
iconv.c:460:Charset 15 not supported by iconv, using "ISO-8859-1" instead
Server charset: CP1252
Client charset: UTF-8
[end grep output]

The server reports the encoding. FreeTDS matches up the name to one
used by the prevailing iconv, and opens an iconv handle to manage the
conversion. Never does FreeTDS request or accept a server encoding
from the user.
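
For the archives, the machinery behind those log lines is ordinary
iconv(3). A standalone sketch of the idea (not FreeTDS's actual code):
the two names handed to iconv_open() are the charset the client asked
for and whatever the server announced.

#include <iconv.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    iconv_t cd = iconv_open("UTF-8", "ISO-8859-1");  /* to, from */
    char in[] = "h\xe5rt";                  /* "hårt" in ISO 8859-1 */
    char out[16];
    char *inp = in, *outp = out;
    size_t inleft = strlen(in), outleft = sizeof(out) - 1;

    if (cd == (iconv_t)-1)
        return 1;
    if (iconv(cd, &inp, &inleft, &outp, &outleft) == (size_t)-1)
        return 1;
    *outp = '\0';
    printf("%s\n", out);                    /* "hårt", now in UTF-8 */
    iconv_close(cd);
    return 0;
}

Note that the user supplies only the left-hand name; the right-hand one
comes from the server's ENVCHANGE, never from a configuration file.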

Elsewhere in the dump, I see the resultset processing:

token.c:1498:processing TDS7 result metadata.
mem.c:594:tds_free_all_results()
token.c:1523:set current_results (1 column) to tds->res_info
token.c:1530:setting up 1 columns
token.c:3014:adjust_character_column_size:
Server charset: CP1252
Server column_size: 75
Client charset: UTF-8
Client column_size: 300

FreeTDS allocated 300 bytes for a 75-character column because a
character can occupy as many as 4 bytes once converted to UTF-8
(75 × 4 = 300). (It decided to use 8859-1 instead of CP1252 as a
heuristic, because I'm not linked to a real iconv.)
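
The sizing rule itself is nothing more than the worst case for the
client's encoding; roughly this (a sketch of the rule, not FreeTDS's
actual code):

#include <stdio.h>

/* A UTF-8 client may need up to 4 bytes for every character the
   server sends, so size the client buffer for that worst case. */
static int client_column_size(int server_column_size, int max_bytes_per_char)
{
    return server_column_size * max_bytes_per_char;
}

int main(void)
{
    printf("%d\n", client_column_size(75, 4));   /* prints 300 */
    return 0;
}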

∴ I think we can say the semantics are OK. ;-)

Now, that leads to the harder but less alarming question: what happened
in your case?

This much is for sure: if you set "client charset" to match the
server's charset, the data arrive on the client encoded exactly as
they are on the server, unchanged.

Also known: it's quite easy for the server to lie, if it was lied to.
You can bcp text in any encoding whatsoever into any char/varchar
column, no matter what collation it carries. On retrieval, the server
will report the encoding recorded in the metadata, regardless of the
data. More than once on this list I have helped people (1) understand
the issue and (2) prove that the data-as-received did not match the
encoding promised by the server. How it got that way, I have no way
of knowing, but bcp is certainly one way.

Other things that sometimes confuse people (but not you, I'm sure) are
terminal and locale settings. Most people start with the naïve idea
that "å" is a character, and so it is. But computers store characters
as numbers. Not only can quite a few numbers mean å, but those
numbers can also mean other characters, viz:

# with ISO 8859-1 charset
$ grep ^EU se*
se.8859-1:EUROPAKRISEN Även stabila nationer straffades hårt.
se.utf8:EUROPAKRISEN Ã
ven stabila nationer straffades hårt.

# with xterm -u8 (UTF-8 charset)
$ grep ^EU se*
se.8859-1:EUROPAKRISEN ?en stabila nationer straffades h?t.
se.utf8:EUROPAKRISEN Även stabila nationer straffades hårt.

If you can't trust grep(1) and xterm(1), whom *can* you trust?

While we're on the subject (sort of): I look forward to the day when

$ cat se.* | translate

produces English text by sensing the input and using locale settings
to decide the default output. I would also gleefully incorporate
translate.so into FreeTDS.

HTH.

--jkl




