- From: "James K. Lowden" <jklowden AT freetds.org>
- To: TDS Development Group <freetds AT lists.ibiblio.org>
- Subject: [freetds] libtds2: status
- Date: Sun, 2 Oct 2011 00:56:33 -0400
Work on the new libtds continues. I have resolved questions of
repetition and recursion and have begun to generate code and data to
handle all versions of the protocol currently supported.
The protocol is currently described by 325 rows of data in 15 tables.
Supplementing that are 100 lines of primitive C functions and structs,
and 110 lines of Perl to generate the C tables and logic from the
database. I am optimistic that the entire state machine can be
described in 1000 lines of data. By comparison, the current
implementation requires 26,620 lines of C.
One question I was debating with myself was how to read the data.
Should it be a packet at a time, and copied to C structs, as is
currently done? The state machine is more conducive to one read per
field, directly to the user-accessible C structs, a zero-copy model.
But that would entail calling read(2) for every field, at a cost of two
user/kernel transitions each.
Happily, the answer is both: one read and no copies.
Every packet has a header stating the packet's size. A single function
can read the header, allocate memory based on the size, and read the
entire packet as one bundle-of-bytes. That's how it's done today in
libtds.
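For concreteness, here's a minimal sketch of that read-the-header-then-the-rest idea. It assumes the usual 8-byte TDS packet header, whose bytes 2-3 carry the total packet length in network byte order; the helper names (full_read, read_packet) are invented for illustration and are not the existing libtds functions.

	#include <stdlib.h>
	#include <string.h>
	#include <unistd.h>

	/* Loop until exactly len bytes have arrived (read(2) may return short). */
	int
	full_read(int fd, unsigned char *buf, size_t len)
	{
		size_t got = 0;

		while (got < len) {
			ssize_t n = read(fd, buf + got, len - got);
			if (n <= 0)
				return -1;	/* error or unexpected EOF */
			got += n;
		}
		return 0;
	}

	/* One packet, one allocation, one bundle-of-bytes. */
	unsigned char *
	read_packet(int fd, size_t *plen)
	{
		unsigned char header[8], *packet;
		size_t len;

		if (full_read(fd, header, sizeof(header)) < 0)
			return NULL;

		len = (header[2] << 8) | header[3];	/* total size, header included */
		if (len < sizeof(header) || (packet = malloc(len)) == NULL)
			return NULL;

		memcpy(packet, header, sizeof(header));
		if (full_read(fd, packet + sizeof(header), len - sizeof(header)) < 0) {
			free(packet);
			return NULL;
		}
		*plen = len;
		return packet;
	}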
The raw packet buffers can be attached to the TDSSOCKET structure. The
number of them varies with the returned results. Very large columns
can require many buffers. The current library uses one packet buffer,
basically:
read packet
copy parts of packet to C struct
repeat until EOM
But: what if we keep all the packet buffers? Then, having read the
data, all memory needed is already allocated!
There is no need to copy any character data. The C struct char pointers
-- such things as the column name, for example --
can be pointers into the packet buffer. We need to copy only pointers
and (for alignment reasons) integers.
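One possible shape for that, with invented names (tds_packet_buf, tds_column_meta): a linked list of retained raw packets hung off the TDSSOCKET, with the column metadata pointing into them instead of into malloc'd copies.

	#include <stddef.h>

	/* A raw packet, kept alive as long as the results that point into it. */
	struct tds_packet_buf {
		struct tds_packet_buf *next;
		size_t len;
		unsigned char data[];	/* the packet, header and all */
	};

	/* Column metadata referring into a retained packet, not a copy. */
	struct tds_column_meta {
		const char *name;	/* points into some packet's data[] */
		size_t namelen;		/* TDS strings are length-prefixed,
					   not null-terminated */
	};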
Today, libtds has many allocation and copy functions. One reason is to
create null-terminated strings: in the packet buffer, all strings are
prefixed by a length, and almost never have terminators. Another
reason is to convert UCS2 to ISO 8859-1. These do not have to be
libtds tasks. Character set conversion and null-termination can be
left to the client libraries.
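For the conversion half, a client library could do it with iconv(3), along these lines; the encoding names, the UCS-2LE assumption, and the function name are illustrative only.

	#include <iconv.h>
	#include <stddef.h>
	#include <sys/types.h>

	/* Convert a UCS-2 string taken from the packet buffer into a
	 * caller-supplied ISO 8859-1 buffer; returns bytes written or -1. */
	ssize_t
	ucs2_to_latin1(const char *in, size_t inbytes, char *out, size_t outbytes)
	{
		iconv_t cd = iconv_open("ISO-8859-1", "UCS-2LE");
		char *inp = (char *) in;	/* iconv(3) wants non-const */
		char *outp = out;
		size_t rc;

		if (cd == (iconv_t) -1)
			return -1;
		rc = iconv(cd, &inp, &inbytes, &outp, &outbytes);
		iconv_close(cd);

		return rc == (size_t) -1 ? -1 : outp - out;
	}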
If we decide we want libtds to null-terminate, we can, without calling
malloc(3). Character data can be "shifted" within the buffer with
memmove(3). Imagine a buffer "buf"
000000 ... 05 68 65 6c 6c 6f ...
5: h e l l o
The C struct for it might be
struct {
...
unsigned char len;
char * data;
...
} packet;
Today the logic is basically (cf. tds_get_n())
memcpy(&packet.len, buf, 1);		/* if the length is kept */
packet.data = malloc(1 + packet.len);
memcpy(packet.data, buf + 1, packet.len);
packet.data[packet.len] = '\0';
Simplest, without null terminators, would be
memcpy(&packet.len, buf, 1);
packet.data = buf + 1;
To keep null terminators
memcpy(&packet.len, buf, 1);
memmove(buf, buf + 1, packet.len);
buf[packet.len] = '\0';
packet.data = buf;
(That logic does perturb the raw packet data, but only slightly
and only in a single low-level function.)
That's fine as far as it goes, but a single column can span packets. How
to give it back to the client as one string?
$ grep -E 'Received header|marker' pdf.dump | uniq -c
1 net.c:555:Received header
1 token.c:122:tds_process_default_tokens() marker is e3(ENVCHANGE)
1 token.c:122:tds_process_default_tokens() marker is ab(INFO)
1 token.c:122:tds_process_default_tokens() marker is e3(ENVCHANGE)
1 token.c:122:tds_process_default_tokens() marker is ab(INFO)
4 token.c:122:tds_process_default_tokens() marker is e3(ENVCHANGE)
1 token.c:122:tds_process_default_tokens() marker is fd(DONE)
1 net.c:555:Received header
1 token.c:554:processing result tokens. marker is fd(DONE)
2 token.c:554:processing result tokens. marker is 81(TDS7_RESULT)
2 token.c:554:processing result tokens. marker is d1(ROW)
202 net.c:555:Received header
1 token.c:554:processing result tokens. marker is fd(DONE)
The ROW token used 203 packets. What to do?
The TDS protocol makes this pretty easy. The 202 subsequent packets of
column data have zero overhead except for the 8-byte header. The size
(but not necessarily the number) of all those packets is known when the
first one is received, based on the column size.
Steps:
1. Read whole packet.
2. Compute number of unread bytes
for last column in packet. Call it N.
3. Call realloc(3) for N + 2 * packet_size bytes.
4. Read packets until N is satisfied.
5. Continue parsing.
The successive packets are pure data. They can be read into the
allocated buffer.
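A rough sketch of steps 3 and 4, reusing the hypothetical full_read() helper from the earlier sketch; the function name and the way N ("remaining") is supplied are invented, and 8 is the TDS header length.

	#include <stdlib.h>

	enum { TDS_HDRLEN = 8 };

	int full_read(int fd, unsigned char *buf, size_t len);	/* sketched earlier */

	/* Append the payload of successive packets to *bufp until "remaining"
	 * more bytes of the column have arrived.  The single realloc() leaves
	 * room for the slack described in step 3. */
	int
	read_column_tail(int fd, unsigned char **bufp, size_t used,
			 size_t remaining, size_t packet_size)
	{
		unsigned char *buf, header[TDS_HDRLEN];
		size_t payload;

		if ((buf = realloc(*bufp, used + remaining + 2 * packet_size)) == NULL)
			return -1;
		*bufp = buf;

		while (remaining > 0) {
			if (full_read(fd, header, sizeof(header)) < 0)
				return -1;
			payload = ((header[2] << 8) | header[3]) - TDS_HDRLEN;

			/* pure data: straight into the column buffer, no copy */
			if (full_read(fd, buf + used, payload) < 0)
				return -1;
			used += payload;
			remaining = payload < remaining ? remaining - payload : 0;
		}
		return 0;
	}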
This requires that the entire column fit in memory. That's
OK. It's very difficult to imagine a real-world process that retrieves
data from a database and does not expect to hold it in memory. Unlike
today's library, this logic keeps only one copy.
(It skips using txtptr in the protocol. We can implement txtptr in
db-lib for compatibility.)
The beauty of the automaton is that the state machine handles
this situation naturally. When the parser exits, it tells the state
machine what it read: in this case, PARTIAL_PACKET. That leads the
state machine to invoke the packet reader, which also returns
PARTIAL_PACKET, whereby the state machine invokes the packet reader
again. When the whole column has been read, the parser returns
END_OF_PACKET or PARTIAL_PACKET, and the state machine continues
accordingly.
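A cartoon of that interplay, with invented names throughout (tds_socket, parse_token, read_packet_for); only the return-code vocabulary comes from the description above.

	struct tds_socket;				/* opaque connection state */

	enum tds_step { END_OF_PACKET, PARTIAL_PACKET, END_OF_MESSAGE };

	enum tds_step parse_token(struct tds_socket *tds);	/* hypothetical parser */
	enum tds_step read_packet_for(struct tds_socket *tds);	/* hypothetical reader */

	/* The state machine just reacts to what the last action reported. */
	enum tds_step
	tds_drive(struct tds_socket *tds)
	{
		enum tds_step step;

		do {
			step = parse_token(tds);

			/* The parser ran dry mid-column: keep invoking the
			 * packet reader until it stops reporting PARTIAL_PACKET. */
			while (step == PARTIAL_PACKET)
				step = read_packet_for(tds);
		} while (step == END_OF_PACKET);	/* more tokens to process */

		return step;				/* END_OF_MESSAGE */
	}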
For the curious -- and if you've read this far, you must be curious --
here's how an example packet is handled:
These structures are written by hand:
struct tds_packet_member_t
{
	int type;			/* SYBINT1, SYBINT4, ... */
	void *datum;			/* address of the corresponding struct member */
};

struct tds_packet_members_t
{
	double first_version, last_version;	/* TDS versions this row covers */
	size_t nelem;
	struct tds_packet_member_t *members;
};
This code is generated:
/* TDS_RETURNSTATUS_TOKEN */
struct tds_TDS_RETURNSTATUS_TOKEN_t
{
	/* members */
	TDS_TINYINT token;
	TDS_INT return_status;

	/* access */
	struct tds_packet_member_t
	data_42_73[2] =
		{ { SYBINT1, &token }
		, { SYBINT4, &return_status }
		};

	/* versions */
	struct tds_packet_members_t
	members[1] =
		{ { 4.2, 7.3, 2, data_42_73 }
		};
};
The struct holds the union of all elements that can appear in this kind
of packet for all versions of the protocol. Which elements are filled
is governed by a list of type+pointer pairs. Traversing that list
yields a sequential set of members used by some version of the
protocol. Which list to use is determined by the members array.
The packet parser looks up the members row (in this case, there's only
one) based on the TDS version currently in use. There it finds a
pointer to a list of addresses of structure members and their types.
Each of these is passed to the "reader", which looks up the type's size
and copies N bytes to the given address, swapping bytes as needed.
Reaching the end of the list, it returns END_OF_PACKET.
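Here's a sketch of that lookup and traversal, assuming the hand-written tds_packet_member_t / tds_packet_members_t declarations above; type_size() and read_datum() are invented stand-ins for the size table and the byte-swapping reader.

	#include <stddef.h>

	struct tds_socket;			/* opaque connection state */

	size_t type_size(int type);		/* hypothetical: SYBINT1 -> 1, SYBINT4 -> 4, ... */
	int read_datum(struct tds_socket *tds,	/* hypothetical: copy + byte-swap */
		       void *addr, size_t len);

	/* Pick the members row whose version range covers the TDS version in use. */
	const struct tds_packet_members_t *
	find_members(const struct tds_packet_members_t *rows, size_t nrows,
		     double version)
	{
		size_t i;

		for (i = 0; i < nrows; i++)
			if (rows[i].first_version <= version
			    && version <= rows[i].last_version)
				return &rows[i];
		return NULL;
	}

	/* Walk the member list, filling each address in turn. */
	int
	parse_members(struct tds_socket *tds, const struct tds_packet_members_t *row)
	{
		size_t i;

		for (i = 0; i < row->nelem; i++) {
			const struct tds_packet_member_t *m = &row->members[i];
			if (read_datum(tds, m->datum, type_size(m->type)) < 0)
				return -1;
		}
		return 0;	/* END_OF_PACKET, in the terms used above */
	}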
The question arises: how does the client library that's trying to use
this information know which members to use and which to ignore?
An example of a protocol change is the ERROR packet, where the
line_number expanded from 16 to 32 bits. That's typical. It's
possible to capture all such cases in the database, and generate code
to assign the shorter member to the longer one. Client code then need
concern itself only with the longer version. Alternatively -- and this
works for all types, not just integers -- the client code can identify
the valid members the same way the reader did, by traversing the list
and matching up the addresses. It is not necessary to use
e.g. IS_TDS7_PLUS.
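One way a client library could make that test, again assuming the structures above; member_present() is an invented name.

	#include <stdbool.h>
	#include <stddef.h>

	/* Was this struct member filled by the reader for the version in use? */
	bool
	member_present(const struct tds_packet_members_t *row, const void *addr)
	{
		size_t i;

		for (i = 0; i < row->nelem; i++)
			if (row->members[i].datum == addr)
				return true;
		return false;
	}

Client code would then ask, say, member_present(row, &err.line_number) rather than testing a version macro.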
Very few casts are needed, and all casts are in the generated code,
which of course is produced from the database. There is no opportunity
for an incorrect cast if the database accurately describes the
protocol.
About threads. Frediano will want me to think about threads. OK,
let's do that.
If two threads attempt to use the same connection, one starting
before the other finishes, chaos will certainly result. Read/write
functions will have to be serialized.
Other than that, I don't see any issues. Any variables used by the
state transition table are (surprise!) *state* variables and need to be
attached to the connection. There's no global state.
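If libtds itself is to enforce that, the obvious sketch is a per-connection lock, shown here with POSIX threads; the structure and function names are invented, and the real library might equally leave the locking to the client libraries.

	#include <pthread.h>

	struct tds_connection {
		pthread_mutex_t lock;	/* guards the socket and the state variables */
		/* ... per-connection state only; no globals ... */
	};

	int tds_read_token(struct tds_connection *conn);	/* hypothetical: run the state machine */

	/* Serialize whole read operations, not individual packets. */
	int
	tds_read_serialized(struct tds_connection *conn)
	{
		int rc;

		pthread_mutex_lock(&conn->lock);
		rc = tds_read_token(conn);
		pthread_mutex_unlock(&conn->lock);
		return rc;
	}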
In sum:
1. I had been concerned about memory management. This is solved by
keeping all packets.
2. The inputs to generate the state machine and the associated
generator code appear to require about 10% of the size of the current
libtds.
3. Protocol variations are captured in the database and are handled by
the generated code. The database also drives the generation of
structures that encapsulate the logic of parsing the packets.
4. The exported, client-callable functions will be radically
different, and far fewer. The state machine engages when it's
requested to read, and stops when the requested token is processed or
at EOM, whichever comes first. I haven't thought much about the
interface yet, but I see (a speculative C sketch follows this list):
connect
login
send sql
send bcp
send rpc?
read meta
read row
cancel
disconnect
5. The client libraries will require substantial re-writing to use
this library. I hope the resulting libraries will be smaller and
simpler, too.
6. Much remains to be done; the work so far in no way threatens the
raj. First I just want to generate code that can parse a static TDS
stream and print a report. Once that works, the path to a working
tsql-style program should become clear. Only then would adoption by the
client libraries begin.
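To make point 4 concrete, here is a purely speculative C rendering of that entry-point list; none of these names or signatures exist yet, they only restate the verbs above.

	#include <stddef.h>

	struct tds2_connection;			/* opaque */

	struct tds2_connection *tds2_connect(const char *server, int port);
	int  tds2_login(struct tds2_connection *, const char *user, const char *password);
	int  tds2_send_sql(struct tds2_connection *, const char *sql);
	int  tds2_send_bcp(struct tds2_connection *, const void *rows, size_t len);
	int  tds2_send_rpc(struct tds2_connection *, const char *procname);	/* perhaps */
	int  tds2_read_meta(struct tds2_connection *);
	int  tds2_read_row(struct tds2_connection *);
	int  tds2_cancel(struct tds2_connection *);
	void tds2_disconnect(struct tds2_connection *);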
--jkl