freetds - RE: [freetds] Longstanding issue and 0.64...

  • From: Frediano Ziglio <freddyz77 AT tin.it>
  • To: FreeTDS Development Group <freetds AT lists.ibiblio.org>
  • Subject: RE: [freetds] Longstanding issue and 0.64...
  • Date: Tue, 30 Nov 2004 22:13:35 +0100

On Tue, 2004-11-30 at 21:12, Lowden, James K wrote:
> > From: ZIGLIO, Frediano, VF-IT
> > Sent: Tuesday, November 30, 2004 10:43 AM
> > > From: Lowden, James K
> > > Sent: Monday, November 29, 2004 20.03
> > >
> > > I think the log format should be standardized; it would be easier to
> > > grep and scan. I don't see much value in logging the time on every
> > > line. It's enough that we know when packets were sent and received.
> > >
> > > Here's a format suggestion:
> > >
> > > LEVEL PID Function (File:Line): text
> > >
> > > LEVEL would be one of:
> > >
> > > LOGIN
> > > API
> > > ASYNC
> > > DIAG
> > > ERROR
> > > PACKET
> > > LIBTDS
> > > CONFIG
> > >
> >
> > What's the difference between API and LIBTDS??
>
> If you set the API bit, you get db-lib/ct-lib/odbc logging: a log
> entry for every API call (and probably some internal ones). If you
> set the LIBTDS bit, you get a log entry for libtds function calls.
>
> > > Surely stdio synchronizes writes to a file?
> > >
> >
> > stdio?? If I open 2 connections I need to write to the same file. If
> > I use the same handle I'm sure it writes the information correctly,
> > but if I open two handles for the same file one write will probably
> > overwrite the other. Well... I'm not sure, but this needs a bit of
> > testing.
>
> [Interesting test program omitted.]
>
> I had in mind opening the file once and writing through a global
> handle.
>

I was thinking that:
- we read the filename from the connection
- we set the logging file in tds_connect
So I always thought I could have logging for a "Server1" connection in
"dump1.log" and logging for a "Server2" connection in "dump2.log"...
This would allow doing testing and production on the same machine
without performance problems. I don't know, however, whether this would
require some sort of synchronization for every log file opened.
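
Just to show what I mean, a rough sketch (the struct and function names
here are invented, not the real libTDS ones):

#include <stdio.h>

/* Rough sketch only: tds_open_connection_log() and these fields are
 * hypothetical names, not the current libTDS structures. */
typedef struct sketch_connection {
        char  dump_file[256];  /* filename read from the config for this server */
        FILE *dump_fp;         /* private log handle for this connection */
} SKETCH_CONNECTION;

/* called from tds_connect(): every connection gets its own file, so
 * "Server1" can log to dump1.log and "Server2" to dump2.log without
 * the two connections ever sharing a handle */
static int
tds_open_connection_log(SKETCH_CONNECTION *conn)
{
        conn->dump_fp = fopen(conn->dump_file, "a");
        return conn->dump_fp != NULL ? 0 : -1;
}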

> > Also we need a macro to support file:line syntax...
>
> Well, if TDSDUMP() is itself a macro, __FILE__ and __LINE__ are no
> problem. I'd use __FUNCTION__, too, as long as it's supported by the
> compiler (which we can determine with autoconf). If the compiler
> can't provide the function name, it won't appear in the log.
>
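
Something like this could work, just a sketch (tds_log_write() and
LOG_LOGIN are invented names, it assumes a compiler with variadic macro
support, and autoconf can define __FUNCTION__ to "" where the compiler
lacks it):

#include <unistd.h>   /* getpid() */

/* Sketch only: tds_log_write() and LOG_LOGIN are hypothetical names. */
#define TDSDUMP(level, ...) \
        tds_log_write(level, (int) getpid(), __FUNCTION__, __FILE__, \
                      __LINE__, __VA_ARGS__)

/* TDSDUMP(LOG_LOGIN, "connecting to %s\n", servername);
 * would give something like:
 * LOGIN 1234 tds_connect (login.c:87): connecting to myserver */
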
> > > I don't think FreeTDS should catch signals. It's up to the
> > > application to catch signals and call the API accordingly. If we
> > > decide to use O_ASYNC, that's another story.
> >
> > I know that it's up to the application, however the dbcancel docs
> > state that code inside a signal handler can call this function (to
> > support alarm or similar), and the SQLCancel docs state that this
> > function can be called from another thread while the connection is
> > busy.
>
> So we have to make sure that dbcancel() only does things that are
> allowed within a signal-handler.
>
> I think the low-level networking code will have to be reworked to
> support timeouts and cancels correctly.
>

Some time ago I thought that cancel could just check whether we are
receiving data, set a flag, and send the cancel request.
Code that reads/writes data should just wait for data from the server,
checking the cancel flag. This does not require a change in the network
layer because we don't need to stop recv (the server will send the
cancel reply).
I don't know if this solution really works...
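
Something like this is what I have in mind (every name here is invented,
just to show the idea):

/* Sketch only: none of these names are the real libTDS API. */
enum { TOK_NONE = 0, TOK_DONE_CANCEL_ACK = 1, TDS_SKETCH_CANCELLED = -2 };

typedef struct sketch_tds {
        volatile int in_cancel;   /* set by a cancel request */
} SKETCH_TDS;

/* dbcancel()/SQLCancel() side: only set the flag and send the cancel
 * packet; there is no need to interrupt a recv() in progress */
static void
sketch_cancel_request(SKETCH_TDS *tds)
{
        tds->in_cancel = 1;
        sketch_send_cancel_packet(tds);          /* hypothetical helper */
}

/* read side: keep reading normally; the server will answer the cancel,
 * so recv() never has to be stopped */
static int
sketch_read_reply(SKETCH_TDS *tds)
{
        for (;;) {
                int tok = sketch_read_next_token(tds);   /* hypothetical helper */
                if (!tds->in_cancel)
                        return tok;                      /* normal path */
                if (tok == TOK_DONE_CANCEL_ACK) {
                        tds->in_cancel = 0;
                        return TDS_SKETCH_CANCELLED;
                }
                /* cancelling: discard tokens until the acknowledgement */
        }
}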

> > > We need to support binary bindings of character data. We lost that
> > > feature when we implemented iconv throughout.
>
> > There are many solutions/optimizations/changes that we can combine
> > to relax these difficulties. For example, thinking about inserting
> > (bulk, rpc, dynamic call), we can call a libTDS function to start
> > sending data and then call different functions to insert one
> > parameter/column at a time. Or we can provide a way for libTDS to
> > grab data directly from the bound data (calling, for example, a
> > function that converts data for use or similar). Or a mix of the two
> > (libTDS stops when it doesn't know how to convert parameters and so
> > the higher API can do the job...).
>
> This is an interesting problem. We should discuss it, and try to
> devise an internal API. Like you, I see layers:
>
> 1. net. open, read/write, close. Timeouts, interrupt handlers,
> O_ASYNC. Also, for BCP purposes, netlib should let the caller use
> sockets and files in exactly the same way, so that we can BCP from
> server to server or from file to file (to do format/encoding
> conversions). This layer only deals with buffers (not files or
> callback functions).
>

I already started separating the network stuff into net.c, however there
is still knowledge of TDS packets in there. I don't know if it's a good
thing to support interrupts; perhaps it's better to tell the caller that
we got an interruption (errno == EINTR) or a timeout.
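
For example the read could simply report it (sketch only, net_read() is
an invented name, the -2 return for timeout is just a convention for the
example):

#include <errno.h>
#include <sys/time.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <unistd.h>

/* Sketch only: instead of handling signals inside libTDS, the net
 * layer just reports EINTR or a timeout and lets the caller decide. */
static ssize_t
net_read(int fd, void *buf, size_t len, int timeout_sec)
{
        fd_set fds;
        struct timeval tv = { timeout_sec, 0 };
        int rc;

        FD_ZERO(&fds);
        FD_SET(fd, &fds);
        rc = select(fd + 1, &fds, NULL, NULL, timeout_sec ? &tv : NULL);
        if (rc == 0)
                return -2;      /* timeout: caller decides what to do */
        if (rc < 0)
                return -1;      /* errno == EINTR etc., caller decides */
        return recv(fd, buf, len, 0);
}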

> 2. tds. The current code has no packet layer. Each packet is
> hand-built (if you will): put the token, put the length, put this
> byte, put these two bytes, put this string, etc. It would be clearer
> if there were functions like "write_<token>_packet(struct
> <token>_packet_type* data)". Most packets -- everything except rows
> -- are quite small and can easily be described by a structure.
>

Better to call it the token layer. "Packet" conflicts with TDS packets
(the packets sent/received to/from the server).
A small digression on write_<token>_packet: one problem is portability.
We need to swap bytes as needed, so in write_<token>_packet we would
just access the elements and call tds_put_XXX, and in the caller we
would just pack the data into the structure and call
write_<token>_packet... Perhaps it produces more readable code, however
it's IMHO more code to write and less performance. On the same subject,
I thought of using the extra space at the end of a TDS packet to safely
write an entire small packet of bytes. Many processors can swap bytes
and read/write unaligned data in an efficient way, so this would reduce
code size (just an optimization, I know... we can live without it :) ).
Perhaps a metalanguage to describe token structures and some
metaprocessor (perl??) to translate this metalanguage to code??
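
To give an idea of the write_<token>_packet style (the token struct and
function are invented for the example; tds_put_byte/tds_put_smallint/
tds_put_int stand for the usual tds_put_XXX helpers, which take care of
the byte order):

/* Sketch only: the token struct and function are made up. */
typedef struct sketch_small_token {
        unsigned char  token;       /* token id */
        unsigned short length;      /* token length */
        int            cursor_id;   /* example payload */
} SKETCH_SMALL_TOKEN;

static void
write_sketch_small_token(TDSSOCKET *tds, const SKETCH_SMALL_TOKEN *p)
{
        tds_put_byte(tds, p->token);
        tds_put_smallint(tds, p->length);
        tds_put_int(tds, p->cursor_id);
}

/* the caller only fills the struct and calls write_sketch_small_token() */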

> 3. row. Using metadata and data, compose a row, sending pieces via
> the "net" layer. Read data from "net", constructing metadata and
> filling data buffers. The column read/write functions allocate
> buffers to communicate with the "net" layer. No buffers would be
> passed to/from callers. Instead, they should take function pointers
> as arguments, and those functions will provide/accept data. This
> would allow BCP to work directly with files, using only one buffer.
>
> The "row" layer has a lot of work to do:
>
> * iconv. We put all iconv work in libtds, next to the wire. We
> convert immediately to the client's encoding. That's OK, but we have to
> honor the client's binding, too. I guess this is part of "row".
>
> * text. Also part of "row". As you say, sending a row is really
> just calling several send-a-column functions, same for reading a row.
> Text is tricky, because a partial column can be read/written.
>
> * column. ct-lib and ODBC both provide for column-wise result sets.
> I haven't read Bill's code, so I don't know how he does it, but if
> we're clever, the row-reading function will know how to build columns,
> too. Without double buffering.
>
> Does that look like a good direction to you?
>

It sounds good. Do you have some idea about some abstract
code/declarations?
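
For example, something along these lines, purely hypothetical, just to
make the three layers concrete (none of these names or types exist
today; TDSSOCKET aside, NET, LOGIN_TOKEN, TOKEN, COLUMN and METADATA are
placeholders):

#include <stddef.h>

/* 1. net: buffers only; timeouts and EINTR are reported to the caller */
int  net_open(NET *net, const char *host, int port, int timeout);
int  net_read(NET *net, void *buf, size_t len);
int  net_write(NET *net, const void *buf, size_t len);
void net_close(NET *net);

/* 2. token: one read/write function per token, built from a struct */
int  tds_write_login_token(TDSSOCKET *tds, const LOGIN_TOKEN *login);
int  tds_read_token(TDSSOCKET *tds, TOKEN *out);

/* 3. row: callers pass function pointers that provide/accept column
 *    data, so BCP can stream from a file or a socket with one buffer,
 *    and iconv/binding are honored here */
typedef int (*row_source)(void *ctx, COLUMN *col, void *buf, size_t len);
typedef int (*row_sink)(void *ctx, const COLUMN *col, const void *buf, size_t len);

int  tds_send_row(TDSSOCKET *tds, const METADATA *meta, row_source get, void *ctx);
int  tds_recv_row(TDSSOCKET *tds, METADATA *meta, row_sink put, void *ctx);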

freddy77