[freetds] SQL Server 2019: CHAR(N) using UTF-8 is seen as SQL_NVARCHAR by SQLDescribeCol()

Sebastien FLAESCH sf at 4js.com
Fri Aug 28 11:43:12 EDT 2020


Hello!

What is the status of UTF-8 support for CHAR/VARCHAR columns in SQL Server 2019
when using a _UTF8 DB collation?

I now have FreeTDS 1.2.3 installed ...

Here is my ODBC data source:

[ftm_msvtest1_lison2_utf8_2019]
Description     = SQL Server 2019
Server          = lison2
Database        = msvtest1
Port            = 1433
TDS_Version     = 7.4
ClientCharset   = UTF-8


My application C locale (my strings are in UTF-8):

$ locale
LANG=
LANGUAGE=
LC_CTYPE="en_US.utf8"
LC_NUMERIC="en_US.utf8"
LC_TIME="en_US.utf8"
LC_COLLATE="en_US.utf8"
LC_MONETARY="en_US.utf8"
LC_MESSAGES="en_US.utf8"
LC_PAPER="en_US.utf8"
LC_NAME="en_US.utf8"
LC_ADDRESS="en_US.utf8"
LC_TELEPHONE="en_US.utf8"
LC_MEASUREMENT="en_US.utf8"
LC_IDENTIFICATION="en_US.utf8"
LC_ALL=en_US.utf8



A) When fetching a CHAR(n) column, SQLDescribeCol() still returns SQL type -9,
which is SQL_WVARCHAR, when I expect to get 1 / SQL_CHAR ...

(See my previous emails about this.)
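
For reference, a minimal sketch of the describe call I am doing (assuming a
connected statement handle hstmt and the mytab1 / col1 table from the earlier
thread below; error checking omitted):

#include <stdio.h>
#include <sql.h>
#include <sqlext.h>

/* hstmt: statement handle on an already-open connection */
static void describe_col1(SQLHSTMT hstmt)
{
    SQLCHAR name[128];
    SQLSMALLINT namelen, sqltype, scale, nullable;
    SQLULEN colsize;

    SQLExecDirect(hstmt, (SQLCHAR *) "SELECT col1 FROM mytab1", SQL_NTS);
    SQLDescribeCol(hstmt, 1, name, (SQLSMALLINT) sizeof(name), &namelen,
                   &sqltype, &colsize, &scale, &nullable);
    /* prints sqltype = -9 (SQL_WVARCHAR), where 1 (SQL_CHAR) is expected */
    printf("%s: sqltype = %d\n", (char *) name, (int) sqltype);
}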


B) When I try to insert UTF-8 data, I get this error:

(-9833) [FreeTDS][SQL Server]Invalid data for UTF8-encoded characters

I am binding my string buffers with SQL_VARCHAR / SQL_C_CHAR and providing
UTF-8 data directly, as I do when using an ISO-8859-15 locale.
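
Roughly like this (a minimal sketch; hstmt and the target table are just
examples from the earlier thread, error checking omitted):

#include <sql.h>
#include <sqlext.h>

/* hstmt: statement handle on an already-open connection */
static void insert_utf8(SQLHSTMT hstmt)
{
    SQLCHAR buf[] = "\xC3\xA9\xC3\xA0\xC3\xB4";  /* "éàô" as raw UTF-8 bytes */
    SQLLEN ind = SQL_NTS;

    SQLBindParameter(hstmt, 1, SQL_PARAM_INPUT,
                     SQL_C_CHAR,    /* C buffer type */
                     SQL_VARCHAR,   /* server SQL type: this combination fails */
                     10, 0, buf, sizeof(buf), &ind);
    SQLExecDirect(hstmt, (SQLCHAR *) "INSERT INTO mytab1 VALUES (?)", SQL_NTS);
    /* -> (-9833) Invalid data for UTF8-encoded characters */
}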

When binding with SQL_WVARCHAR it works better, but I can see in SQL Profiler
that string parameters are passed as

exec sp_executesql ...  '... @px NVARCHAR', ...N'éàô'


But the goal, in my opinion, is to have the whole chain use UTF-8:

   app string in UTF-8  <=>  TDS protocol using UTF-8  <=>  DB column in UTF-8

And thus avoid any charset conversion.


When using an ISO-8859-15 client config with

ClientCharset   = ISO-8859-15

and my application strings in ISO-8859-15 and LC_ALL=en_US.iso885915,
I still get the error:

(-9833) [FreeTDS][SQL Server]Invalid data for UTF8-encoded characters


Any ideas?

Seb



On 2/28/19 7:03 PM, Frediano Ziglio wrote:
> Fine, it's the not-yet-documented extension 10. If set to 1 (a 1-byte
> value), data are returned as UTF-8.
> I don't know what extension 9 is, but 10 is enough.
> The collation has a weird bit set if the encoding is UTF-8.
> 
> Frediano
> 
> On Thu, 28 Feb 2019 at 14:22, Frediano Ziglio
> <freddy77 at gmail.com> wrote:
>>
>> Hi,
>>     there seem to be two new "features" (they are like capabilities, for TDS
>> 7.4), 9 and 10; the corresponding ack values are 0x0101 and 0x01, and the
>> client is sending 0x01 for both.
>> They are currently not documented in the TDS protocol specification
>> (https://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-tds/b46a581a-39de-4745-b076-ec4dbb7d13ec).
>> The protocol version is still 7.4.
>> I suppose I can try to send these feature extensions and see what
>> happens, i.e. whether the server changes the encoding.
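>>
>> A sketch of what such an entry might look like on the wire, assuming the
>> usual FeatureId / length / data layout of LOGIN7 feature extensions (the
>> exact meaning of the bytes is my guess here):
>>
>> /* LOGIN7 FeatureExt entry: feature 10, with a 1-byte payload */
>> static const unsigned char featureext_utf8[] = {
>>     0x0A,                   /* FeatureId = 10 */
>>     0x01, 0x00, 0x00, 0x00, /* FeatureDataLen = 1 (little-endian DWORD) */
>>     0x01                    /* FeatureData: 0x01 = enable */
>> };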
>>
>> Frediano
>>
>> On Thu, 28 Feb 2019 at 10:56, Frediano Ziglio
>> <freddy77 at gmail.com> wrote:
>>>
>>> Hi,
>>>    so, I installed a SQL Server 2019 and tried some queries.
>>> Using the driver provided inside the docker image, "abc" is
>>> returned as "abc       ", so
>>> CHAR(10) values are returned encoded in UTF-8 and the "10" is just the
>>> number of bytes.
>>> Now, the issue is why the server decided to return UTF-8 rather than
>>> converting to NVARCHAR (which would make sense to me for old clients).
>>> Maybe they bumped the protocol version, or maybe they are using some
>>> other flags/version (like a client version passed through the
>>> pre-login). Unfortunately the login is encrypted, so I'll have to
>>> dig a bit more (well, it's just a question of setting up and using
>>> the bounce utility; I'm not sure whether it supports pre-login).
>>>
>>> Frediano
>>>
>>> On Thu, 28 Feb 2019 at 10:50, Sebastien FLAESCH
>>> <sf at 4js.com> wrote:
>>>>
>>>> Hello Craig,
>>>>
>>>> I don't think so: An ODBC program should be able to properly identify
>>>> (as much as possible) the original database type used for a column.
>>>>
>>>> We must also distinguish "variable length" in the sense of the number
>>>> of bytes used to encode characters from the actual length of the
>>>> string in character units.
>>>>
>>>> Plus the fact that [N]CHAR columns can be blank-padded, while [N]VARCHAR
>>>> columns are not and preserve the actual trailing white spaces of
>>>> the string; try for example:
>>>>
>>>> create table tab1 ( c1 char(10), c2 varchar(10) )
>>>> insert into tab1 values ( 'abc', 'abc  ' )
>>>> select '['+c1+']', '['+c2+']' from tab1
>>>>
>>>> You will see:
>>>>
>>>> [abc       ] [abc  ]
>>>>
>>>> (check SET ANSI_PADDING ON/OFF)
>>>>
>>>> ...
>>>>
>>>> UTF-8 is a variable-width encoding:
>>>>
>>>> a  => needs 1 byte
>>>> é  => needs 2 bytes
>>>> 木 => needs 3 bytes
>>>> ... etc. (a character can need up to 4 bytes)
>>>>
>>>> In that context, one can see a "variable length" in byte units...
>>>>
>>>> But when counting in characters, 'aé木' is 3 characters, no matter the
>>>> encoding used to store this string.
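>>>>
>>>> As a small illustration (a hypothetical helper, not FreeTDS code),
>>>> counting characters in UTF-8 just means skipping the continuation
>>>> bytes, which all match the bit pattern 10xxxxxx:
>>>>
>>>> #include <stddef.h>
>>>>
>>>> /* Count UTF-8 code points by skipping 10xxxxxx continuation bytes. */
>>>> static size_t u8_charlen(const unsigned char *s)
>>>> {
>>>>     size_t n = 0;
>>>>     for (; *s; s++)
>>>>         if ((*s & 0xC0) != 0x80)
>>>>             n++;
>>>>     return n;
>>>> }
>>>>
>>>> /* For "aé木" (6 bytes in UTF-8), u8_charlen() returns 3. */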
>>>>
>>>> There is also the concept of byte-length semantics versus char-length
>>>> semantics: when creating a CHAR(10), is this 10 bytes or 10 characters?
>>>> Depending on the database brand, it's bytes (SQL Server), characters
>>>> (PostgreSQL), or it can even be a mix, specified explicitly (as in
>>>> Oracle DB).
>>>>
>>>> Note: there is then also the concept of "width" or "number of columns":
>>>> a Latin character takes one column, while an Asian logogram takes two.
>>>> See wcwidth() for more details.
>>>>
>>>> ...
>>>>
>>>> In SQL Server, NCHAR and NVARCHAR can store UTF-16 when the SC modifier
>>>> is used in the database collation (they store UCS-2 when no SC is used).
>>>>
>>>> In UTF-16, a first range of characters is encoded on 2 bytes, but there
>>>> can be "surrogate pairs" encoded on 4 bytes (two 16-bit code units).
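>>>>
>>>> For example (a hypothetical C fragment, just to illustrate):
>>>>
>>>> /* U+1F600 is one character but needs a surrogate pair,
>>>>    i.e. two 16-bit code units (4 bytes) in UTF-16: */
>>>> static const unsigned short emoji_utf16[] = { 0xD83D, 0xDE00, 0x0000 };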
>>>>
>>>> In your logic, the NCHAR/UTF-16 type could also be considered a
>>>> "variable length" type, and SQLDescribeCol() could return SQL_WVARCHAR
>>>> instead of SQL_WCHAR... It must not, and it does not.
>>>>
>>>> Cheers,
>>>> Seb
>>>>
>>>>
>>>> On 2/27/19 4:32 PM, Craig Jackson wrote:
>>>>> I don't know anything about SQL Server 2019, but if they added the option
>>>>> to allow UTF-8 encoding of character strings, it would seem natural that
>>>>> said strings would be variable-length, even if declared CHAR(n). Strings
>>>>> encoded in UTF-8 are inherently variable length.
>>>>>
>>>>> Craig Jackson
>>>>>
>>>>> On Wed, Feb 27, 2019 at 10:27 AM Sebastien FLAESCH <sf at 4js.com> wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> Just tried with TDS 7.4, it did not help.
>>>>>>
>>>>>> I have tested the MS ODBC driver 17.3.1.1; it works as expected
>>>>>> (I get 1/SQL_CHAR from SQLDescribeCol()).
>>>>>>
>>>>>> You may know that it's very easy to set up an SQL Server 2019 with
>>>>>> docker, don't you?
>>>>>>
>>>>>> Cheers,
>>>>>> Seb
>>>>>>
>>>>>> On 2/27/19 3:08 PM, Frediano Ziglio wrote:
>>>>>>> Hi,
>>>>>>>      I would try to use TDS 7.4, not 7.3.
>>>>>>> Also I would try to use the Microsoft ODBC driver and a network
>>>>>>> analyzer (like Wireshark).
>>>>>>>
>>>>>>> Regards,
>>>>>>>      Frediano
>>>>>>>
>>>>>>> On Tue, 26 Feb 2019 at 11:26, Sebastien FLAESCH
>>>>>>> <sf at 4js.com> wrote:
>>>>>>>>
>>>>>>>> Testing FreeTDS 1.1rc3 with SQL Server 2019:
>>>>>>>>
>>>>>>>> I get an unexpected SQL type code with a CHAR(10) column, when the
>>>>>>>> database collation is using UTF-8.
>>>>>>>>
>>>>>>>> Instead of SQL_CHAR (or SQL_WCHAR), I get SQL_WVARCHAR.
>>>>>>>>
>>>>>>>> I have created a database with this collation:
>>>>>>>>
>>>>>>>>       Latin1_General_100_CI_AS_SC_UTF8
>>>>>>>>
>>>>>>>> Then created a table like this:
>>>>>>>>
>>>>>>>>       CREATE TABLE mytab1 ( col1 CHAR(10) )
>>>>>>>>
>>>>>>>> and inserted a row:
>>>>>>>>
>>>>>>>>       INSERT INTO mytab1 VALUES ( 'abc' )
>>>>>>>>
>>>>>>>> Then in the ODBC program:
>>>>>>>>
>>>>>>>>       SQLExecDirect(.."SELECT * FROM mytab1"..)
>>>>>>>>       SQLDescribeCol(...)
>>>>>>>>
>>>>>>>> Check the TDSDUMP trace in attachment (freetds-2019-1.log)
>>>>>>>>
>>>>>>>> token.c:1541:tds7_get_data_info:
>>>>>>>>             colname = col1
>>>>>>>>             type = 39 (varchar)
>>>>>>>>             server's type = 231 (x UCS-2 varchar)           <-- 231 ?
>>>>>>>>             column_varint_size = 2
>>>>>>>>             column_size = 20 (20 on server)
>>>>>>>>
>>>>>>>>
>>>>>>>> The same program, connecting to SQL Server 2017 (where the DB collation
>>>>>>>> is Latin1 without UTF-8):
>>>>>>>>
>>>>>>>>
>>>>>>>> token.c:1541:tds7_get_data_info:
>>>>>>>>             colname = col1
>>>>>>>>             type = 47 (char)
>>>>>>>>             server's type = 175 (xchar)          <-- 175
>>>>>>>>             column_varint_size = 2
>>>>>>>>             column_size = 10 (10 on server)
>>>>>>>>
>>>>>>>> With SQL Server 2019, when using another collation (non-UTF-8):
>>>>>>>>
>>>>>>>>        CREATE TABLE mytab2 ( col1 CHAR(10) COLLATE Latin1_General_CI_AS )
>>>>>>>>
>>>>>>>> I get the expected TDS type code...
>>>>>>>>
>>>>>>>> Seb