
freetds - Re: [freetds] Sybase: Charset conversion

  • From: Frediano Ziglio <freddy77 AT gmail.com>
  • To: FreeTDS Development Group <freetds AT lists.ibiblio.org>
  • Subject: Re: [freetds] Sybase: Charset conversion
  • Date: Thu, 26 Nov 2009 09:15:56 +0100

2009/11/26 Sebastian Podjasek <sebastian.podjasek AT morenet.pl>:
> On Thursday, 2009-11-26, at 07:17 +0100, Frediano Ziglio wrote:
>> > What does the top of your TDSDUMP log say about the conversions?  What
>> > encoding do you think the server expects, and what does the client want?
>
>> Which database server are you using? Sybase or MSSQL? Which version?
>> Which platform?
>> Which client platform are you using? Big endian or little endian?
>>
>> I suspect the client is big endian and the server sends UTF-16 encoded as
>> big endian using UNIVARCHAR. Which database type are you using? Could you
>> post a TDSDUMP?
>
>
>
>> This is the description of the tables in xxxxxx.db v17, released at the
>> beginning of xxxxxx. This database is released with version 10 of Adaptive
>> Server Anywhere. All character fields in this version of the database have
>> been converted to Unicode character fields. Please consider this change
>> when connecting to the database, and always perform several tests with your
>> software before working on live data.
>
> I use Perl DBD::Sybase with the following connection code:
> DBI->connect('dbi:Sybase:' . $dsn, 'xxx', 'xxx', {
>    RaiseError => 1,
>    syb_show_sql => 1,
>    syb_show_eed => 1,
>    LongReadLen => 4000,
> }) or die $DBI::errstr;
>
> My client is a standard x86_64 laptop running kernel 2.6.31-15-generic
> from the Ubuntu distro, so it's standard little-endian hardware.
> As for the server hardware, I have no idea about its endianness.
>
> Below are a few interesting lines from the TDSDUMP:
>
> log.c:190:Starting log file for FreeTDS 0.82
>        on 2009-11-24 17:09:44 with debug flags 0x4fff.
> iconv.c:197:names for ISO-8859-1: ISO-8859-1
> iconv.c:197:names for UTF-8: UTF-8
> iconv.c:197:names for UCS-2LE: UCS-2LE
> iconv.c:197:names for UCS-2BE: UCS-2BE
> iconv.c:363:iconv to convert client-side data to the "UTF-8" character
> set
> iconv.c:516:tds_iconv_info_init: converting "UTF-8"->"UCS-2LE"
> net.c:210:Connecting to x.x.x.x port x (TDS version 5.0)
> net.c:264:tds_open_socket: connect(2) returned "Operation now in
> progress"
> net.c:303:tds_open_socket() succeeded
> util.c:162:Changed query state from DEAD to IDLE
>
> ....
>
> net.c:671:Received packet
> 0000 e3 07 00 03 04 75 74 66-38 00 ad 16 00 05 05 00 |.....utf 8.......|
> 0010 00 00 0c 53 51 4c 20 41-6e 79 77 68 65 72 65 0a |...SQL A nywhere.|
> 0020 00 00 00 e2 16 00 01 09-00 00 0e 61 01 ff ff fe |........ ...a....|
> 0030 e6 02 09 01 fe 7f 1e e2-68 00 00 0a fd 00 00 01 |........ h.......|
> 0040 00 00 00 00 00         -                        |.....|
>
> token.c:316:looking for login token, got  e3(ENVCHANGE)
> token.c:108:tds_process_default_tokens() marker is e3(ENVCHANGE)
> token.c:2356:server indicated charset change to "utf8"
> iconv.c:985:setting server single-byte charset to "UTF-8"
> iconv.c:516:tds_iconv_info_init: converting "ISO-8859-1"->"UTF-8"
> token.c:316:looking for login token, got  ad(LOGINACK)
> token.c:316:looking for login token, got  e2(CAPABILITY)
> token.c:108:tds_process_default_tokens() marker is e2(CAPABILITY)
> token.c:316:looking for login token, got  fd(DONE)
> token.c:108:tds_process_default_tokens() marker is fd(DONE)
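
As an aside, the "UTF-8" client charset seen at the top of this dump comes
from the locale (the tsql run further down shows the same thing); it can also
be pinned explicitly with the "client charset" setting in freetds.conf. A
minimal, illustrative server entry, with host, port and alias obfuscated the
same way as in the dump:

    [xxxxxx]
            host = x.x.x.x
            port = xxxx
            tds version = 5.0
            # overrides the locale-derived client charset
            client charset = UTF-8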
>
>
> After executing a simple SQL SELECT on a Unicode column, I get the
> following (some values have been obfuscated for legal reasons):
>
>
> net.c:671:Received packet
> 0000 61 73 00 00 00 02 00 07-xx xx xx xx xx xx xx 0c |as...... xxxxxxx.|
> 0010 xx xx xx xx xx xx xx xx-xx xx xx xx 03 64 62 61 |xxxxxxxx xxxx.dba|
> 0020 0b xx xx xx xx xx xx xx-xx xx xx xx 07 xx xx xx |.xxxxxxx xxxx.xxx|
> 0030 xx xx xx xx 20 00 00 00-50 00 00 00 6f 08 00 06 |xxxx ... P...o...|
> 0040 xx xx xx xx xx xx 0c xx-xx xx xx xx xx xx xx xx |xxxxxx.x xxxxxxxx|
> 0050 xx xx xx 03 64 62 61 0b-xx xx xx xx xx xx xx xx |xxx.dba. xxxxxxxx|
> 0060 xx xx xx 06 xx xx xx xx-xx xx 20 00 00 00 22 00 |xxx.xxxx xx ...".|
> 0070 00 00 e1 80 02 00 00 00-d1 08 c5 9c 00 00 60 cf |........ ......`.|
> 0080 c0 00 3e 00 00 00 00 50-00 6f 01 42 01 05 00 63 |..>....P .o.B...c|
> 0090 00 7a 00 65 00 6e 00 69-00 65 00 20 00 7a 00 20 |.z.e.n.i .e. .z. |
> 00a0 00 54 00 43 00 47 00 20-00 6a 00 65 00 73 00 74 |.T.C.G.  .j.e.s.t|
> 00b0 00 20 00 7a 00 61 00 6b-01 42 00 75 00 63 00 6f |. .z.a.k .B.u.c.o|
> 00c0 00 6e 00 65 fd 10 00 01-00 01 00 00 00          |.n.e.... .....|
>
> token.c:510:processing result tokens.  marker is  61(ROWFMT2)
> token.c:1713:tds5_process_result
> mem.c:563:tds_free_all_results()
> token.c:1737:num_cols=2
> token.c:1840:col 0:
> token.c:1841:   column_name=[dt_made]
> token.c:1847:   flags=20 utype=80 type=111 varint=1
> token.c:1850:   colsize=8 prec=0 scale=0
> token.c:3294:adjust_character_column_size:
>        Server charset: UCS-2LE
>        Server column_size: 640
>        Client charset: UTF-8
>        Client column_size: 1280
> token.c:1840:col 1:
> token.c:1841:   column_name=[c_text]
> token.c:1847:   flags=20 utype=34 type=225 varint=5
> token.c:1850:   colsize=1280 prec=0 scale=0
> util.c:162:Changed query state from READING to PENDING
> ct.c:1157:ct_results() process_result_tokens returned 1 (type 4049)
> token.c:495:tds_process_tokens(0x13daf90, 0x7fff09c47d5c,
> 0x7fff09c47d58, 0x6914)
> util.c:162:Changed query state from PENDING to READING
>
>
> And here you can find the values returned by DBI and my 'conversion flow'
> to obtain the expected result (a Polish string)...
>
>> Data returned by DBI...
>> > '倀漀䈁ԁ挀稀攀渀椀攀 稀 吀䌀䜀 樀攀猀琀 稀愀欀䈁甀挀漀渀攀'
>> 0x00000000 (00000)   e58080e6 bc80e488 81d481e6 8c80e7a8   ................
>> 0x00000010 (00016)   80e69480 e6b880e6 a480e694 80e28080   ................
>> 0x00000020 (00032)   e7a880e2 8080e590 80e48c80 e49c80e2   ................
>> 0x00000030 (00048)   8080e6a8 80e69480 e78c80e7 9080e280   ................
>> 0x00000040 (00064)   80e7a880 e68480e6 ac80e488 81e79480   ................
>> 0x00000050 (00080)   e68c80e6 bc80e6b8 80e69480            ............
>> Conversion from UTF-8 to UCS-2LE
>> > 'Po B czenie z TCG jest zak Bucone'
>> 0x00000000 (00000)   0050006f 01420105 0063007a 0065006e   .P.o.B...c.z.e.n
>> 0x00000010 (00016)   00690065 0020007a 00200054 00430047   .i.e. .z. .T.C.G
>> 0x00000020 (00032)   0020006a 00650073 00740020 007a0061   . .j.e.s.t. .z.a
>> 0x00000030 (00048)   006b0142 00750063 006f006e 0065       .k.B.u.c.o.n.e
>> Conversion from UCS-2BE to UTF-8
>> > 'Połączenie z TCG jest zakłucone'
>> 0x00000000 (00000)   506fc582 c485637a 656e6965 207a2054   Po....czenie z T
>> 0x00000010 (00016)   4347206a 65737420 7a616bc5 8275636f   CG jest zak..uco
>> 0x00000020 (00032)   6e65                                  ne
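
The round trip described above (the garbled UTF-8 string from DBI, encoded
back to UCS-2LE to recover the wire bytes, then reinterpreted as UCS-2BE) can
be reproduced outside DBI with Perl's Encode module. A minimal sketch; the
expected string is hard-coded only to make the example self-contained:

    use strict;
    use warnings;
    use Encode qw(encode decode);

    # Simulate what DBI returned: the server's UCS-2BE bytes decoded as
    # UCS-2LE, which byte-swaps every 16-bit unit and yields the
    # CJK-looking garbage shown above.
    my $expected = "Po\x{142}\x{105}czenie z TCG jest zak\x{142}ucone";
    my $garbled  = decode('UCS-2LE', encode('UCS-2BE', $expected));

    # The workaround: encode back to UCS-2LE to get the raw 16-bit units,
    # then decode them with the correct (big-endian) byte order.
    my $fixed = decode('UCS-2BE', encode('UCS-2LE', $garbled));
    binmode STDOUT, ':encoding(UTF-8)';
    print "$fixed\n";    # Połączenie z TCG jest zakłucone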
>
> The same thing done with tsql:
>
>> seba:~/$ LANG=en_US.UTF-8 tsql -S xxxxxx -U xxxxx -P xxxxx
>> locale is "en_US.UTF-8"
>> locale charset is "UTF-8"
>> 1> select xxxxxx ltext from dba.XXXXXXX where xxxxx > '2009-11-18'
>> 2> go
>> ltext
>> e58080e6bc80e1a880e1a880e68c80e7a880e69480e6b880e6a480e69480e28080e7a880e28080e59080e48c80e49c80e28080e6a880e69480e78c80e79080e28080e7a880e68480e6ac80e1a880e79480e68c80e6bc80e6b880e69480
>> (1 row affected)
>
> I hope that helps...
>

I tried this test

create table #tmp1(a univarchar(10))
insert into #tmp1 values(u&'\0041\0141')
select * from #tmp1

I'm sorry to say, but my Sybase 15 returns the data in little-endian
format, so it seems to be a problem with your database. It is possible,
however, that the problem lies in how the data was inserted.
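
One way to check is to repeat the same test against the problematic server
through tsql with a protocol dump enabled and compare the row bytes on the
wire with the ones above. The dump path is just an example, credentials as in
the earlier tsql run:

    TDSDUMP=/tmp/freetds.log tsql -S xxxxxx -U xxxxx -P xxxxx
    1> create table #tmp1(a univarchar(10))
    2> insert into #tmp1 values(u&'\0041\0141')
    3> select * from #tmp1
    4> go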

freddy77



