
freetds - RE: [freetds] Bulkcopy in ct-lib

  • From: "ZIGLIO, Frediano, VF-IT" <Frediano.Ziglio AT vodafone.com>
  • To: "FreeTDS Development Group" <freetds AT lists.ibiblio.org>
  • Subject: RE: [freetds] Bulkcopy in ct-lib
  • Date: Fri, 6 Feb 2004 16:05:31 +0100

> > >
> > > The best alternative I've come up with is to support
> > bcp_moretext(). That would require, ISTM, some kind of
> > "bcp_bind_tmpfile()", something that associates a chunk of a
> > file with a column. The row image need not be precomposed in
> > memory; rather, a "row image description" -- a list of
> > pointers/handles and lengths of post-iconv, post-dbconvert,
> > server-ready data -- would be enough.
> > >
> >
> > I didn't understand this paragraph very well... What do you mean by
> > "something that associates a chunk of a file with a column"?
>
> bcp_bind() associates a buffer in a host variable with a
> column. When the column is sent (with bcp_sendrow), the data
> are read from the buffer.
>
> For text/binary data, I propose bcp_bind_tmpfile() to do the
> same thing, but using a workfile instead of a memory buffer.
> The column's data would be held in a workfile, where it
> wouldn't tax virtual memory.
>
> My other point is that we don't have to allocate a buffer to
> hold the row image, before sending it to the server. It's
> enough to know where each column's data will be read from.
>

And the length to read... However, I'd start writing to a file only once
the size gets big, for example 10K or so, to avoid creating a file for
small text/image values. I did a test with a query and a cancel. The test
is: create a table, build a query with "insert into ... values(...)" and
so on, and then send a cancel (without flushing the query, of course!).
The result is that no rows are inserted. So we can start sending data and,
on error, send a cancel. This could save a lot of memory in query.c too!
We only need to know the wire length, start sending the data, and if
something goes wrong simply send a cancel and return an error to the
client. About fseek, I'd test whether we can rewind before creating a
temporary file.
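
Roughly the pattern I have in mind is something like this (only a sketch;
col_source and the wire_* helpers are names I just made up, not real
libTDS functions):

    /* sketch only: these types and helpers are invented to illustrate
     * the idea, they are not part of libTDS */
    #include <stdio.h>

    struct col_source {          /* one column of the prepared row   */
        FILE       *file;        /* data parked on disk...           */
        const char *buf;         /* ...or parked in memory           */
        size_t      len;         /* server-ready (wire) length       */
    };

    extern int wire_put_length(size_t len);                /* hypothetical */
    extern int wire_put_data(const struct col_source *c);  /* hypothetical */
    extern int wire_send_cancel(void);                      /* hypothetical */

    static int send_row(struct col_source *cols, int ncols)
    {
        int i;
        for (i = 0; i < ncols; ++i) {
            /* the length goes on the wire first, then the data */
            if (wire_put_length(cols[i].len) < 0
                || wire_put_data(&cols[i]) < 0) {
                /* error mid-row: we can't take back what was already
                 * sent, but a cancel makes the server drop everything */
                wire_send_cancel();
                return -1;
            }
        }
        return 0;
    }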

> >
> > The harder thing is the length. You need to send the length first and
> > then the data. To know the length you need to scan the whole column if
> > it is terminated, or to convert it to the proper charset first if a
> > conversion is needed (text).
>
> Exactly. The server wants each column's length before its
> data. That's why we read the whole row, passing each
> column's data (depending on the datatype) through iconv() and
> dbconvert(). The server-ready data are parked, either in
> memory or on disk, and the sizes noted. When the whole row
> has been read from the file, processed, and parked, we're
> ready to call bcp_sendrow(), which reads the prepared data
> and writes them column-by-column to the server.
>
> > If we know the length beforehand we can just copy from file to wire.
>
> No, we can't. If the row crosses packet boundaries, we'd
> send the first part of the row before we're done reading the
> rest of it from disk. If we encounter a problem -- iconv(),
> unexpected EOF -- we have no way to retract the partial row
> from the server, and no clean way to end the session.
>

We can also send a cancel.

> > I don't think it's good to use temporary files just to avoid fseek...
> > perhaps it would be better to test whether we can fseek before falling
> > back to a temporary file.
>
> I'm not proposing to use temporary files just to avoid
> fseek(). I'm saying:
>
> 1. We need, then, to support bcp_moretext(), because there's
> no reason to allocate memory for very large columns.
> 2. To do that, we have to restructure bcp_exec(), something
> we'll have to work on anyway when we move bcp wire handling
> to libtds.

Fully agreed.

> 3. When we restructure bcp_exec(), we get an opportunity to
> reorder the processing, such that "measuring" the data file
> is no longer needed. It's that "measuring" that requires fseek(3).
>

I don't know the bulk code that well, however if I understood correctly,
your suggestion for large data is:
- read from the bcp file
- convert
- write to a temporary file
- read from the temporary file
- write to the wire

fseek would reduce disk usage (no temporary files), while a small memory
buffer would reduce disk access for small fields.
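
What I mean by a small memory buffer is roughly this (just a sketch in
plain C; the 10K limit and all the names are arbitrary, nothing of this
exists in the code):

    #include <stdio.h>
    #include <string.h>

    #define SPILL_LIMIT (10 * 1024)     /* keep up to ~10K in memory */

    struct spill_buf {
        char   mem[SPILL_LIMIT];
        size_t len;          /* bytes currently held in mem            */
        FILE  *tmp;          /* temporary file, created only if needed */
    };

    /* append converted data, switching to a temporary file past the limit */
    static int spill_write(struct spill_buf *sb, const char *data, size_t n)
    {
        if (!sb->tmp && sb->len + n <= SPILL_LIMIT) {
            memcpy(sb->mem + sb->len, data, n);
            sb->len += n;
            return 0;
        }
        if (!sb->tmp) {                  /* first overflow: spill to disk */
            sb->tmp = tmpfile();
            if (!sb->tmp || fwrite(sb->mem, 1, sb->len, sb->tmp) != sb->len)
                return -1;
        }
        return fwrite(data, 1, n, sb->tmp) == n ? 0 : -1;
    }

    /* rewind before reading the data back to write it to the wire */
    static int spill_rewind(struct spill_buf *sb)
    {
        return sb->tmp ? fseek(sb->tmp, 0L, SEEK_SET) : 0;
    }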

> If you're worried about throughput, don't. :-) Disk I/O is
> orders of magnitude faster than network I/O. We involve
> workfiles only for huge columns, and we need them for
> text/ntext, if we're going to pass the data through iconv().
>
> Is that any clearer?
>

Yes, now it's clear. I know disk I/O is faster than network I/O, however
you know... I'm stingy :) Well, you know the bcp code better than I do,
however I'd like to see bulk support in libTDS, and all this optimization
applied to all data processing too (results, input/output params). So I
think the first change is to move bcp into libTDS, then start changing it.
My idea of a roadmap for bcp is:
1- move the bcp functions to libTDS (new names, etc.)
2- adapt the current bcp library functions (loop until acceptable; the
whole library should keep working as it does now)
3- integrate libTDS/bulk (remove BCP_COLINFO, use TDSCOLUMN)
4- optimize libTDS data handling (well, stream support is only my
suggestion... see the rough sketch below)
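
By "stream support" I mean roughly a per-column (or per-param) data source
that libTDS could pull from in chunks, whether the data sits in memory or
in a file. This is only my idea, nothing like it exists yet, so the types
below are invented:

    #include <stddef.h>

    /* read callback: fills buf with up to maxlen bytes, returns 0 at end */
    typedef size_t (*tds_stream_read_t)(void *ctx, void *buf, size_t maxlen);

    typedef struct tds_data_stream {
        tds_stream_read_t read;       /* where the column data comes from     */
        void             *ctx;        /* caller state: a FILE*, a buffer, ... */
        size_t            total_len;  /* wire length, known before sending    */
    } TDS_DATA_STREAM;
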
How does it sound?

freddy77



