  • From: baslinux AT lists.ibiblio.org
  • To: baslinux AT lists.ibiblio.org
  • Subject: [BL] Importing csv to sqlite db, was Re: dbf to sql convertor, was Re: Spanish dollybase,
  • Date: Mon, 14 Jul 2008 19:33:29 +0000 (UTC)

Can you explain how to import filename.csv (which includes header info)
into filename.db? The csv file created from the dbf with the same
convertor (viewed with pico) has no random garbage in it to start with.

I could not find an understandable explanation of this feature of sqlite3 (not plain sqlite) on the web, so I am posting full details here; it took me a few days to figure out by experimenting.


First, make an sql file from the dbf file (--trim b removes some garbage), then use pico to delete all of the actual data, so that filename.sql contains only the header information.

dbf --sql --trim b filename.sql filename.dbf
(then delete the data lines in pico)

Next, make a csv file from the same dbf file, and use pico to remove the header information, so that filename.csv contains only the data.
dbf --csv filename.csv filename.dbf
(then delete the header line in pico)
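
If you would rather not do the hand editing in pico, something like the following might do the same stripping automatically. It is only a sketch: it assumes the data lines in the .sql file all begin with INSERT and that the csv header is just the first line, which is worth checking against your own files first.

grep -vi '^insert' filename.sql > headers.sql   # keep the table definition, drop the INSERT lines
tail -n +2 filename.csv > data.csv              # drop the first (header) line of the csv

You would then .read headers.sql and .import data.csv below instead of the hand-edited files.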


sqlite3 filename.db
.read filename.sql (This sets up an empty table inside the database)
.separator '","' (Better to use a different separator - see below).
.import filename.csv filename

select * from filename;
Displays all the records in list form, including the first page (which is missing if I import dbf > sql > db).

I ended up this time with a 230K db file (not 150K like I got when I stripped the sql version of it - maybe I should also strip the csv version before importing the data). Still some garbage characters.

Slight problem:

My csv file has commas inside some of the entries: "data,data,data...","data","data data"......

If I set the separator to ',', sqlite3 finds too many columns and leaves in all the " characters (and the result will not import anyway because of the mismatch between the expected and detected number of columns), so I changed the separator to '","'. That imported properly, but still left a " at the start of the first column and at the end of the last column.
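
For example (the line below is invented, not from my real file), take a record like

"Abba,Waterloo","LP","1974"

With the separator ',' sqlite3 splits at every comma and sees four columns instead of three (the comma inside the first field counts as a separator, and each piece keeps its stray " characters). With the separator '","' it sees the three columns I want, but the first column still starts with a " and the last still ends with one.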

The dbf convertor has an option to create the csv with a separator other than ",", but I have not got this working yet:
dbf --csv --separator # file.csv file.dbf
ERROR: found no file for input
Please make sure that the last argument is a valid dBASE file.
The same error appears no matter where I put --separator #.

It works without the separator option.
Can someone get the separator option to work?


I then used nano's search and replace (Ctrl-\) to change "," to # and then " to nothing.
Ctrl-G lists a lot of useful key combinations in the full nano.
In pico (the minimal nano which Steven compiled for BL) Ctrl-G for 'Get help' is disabled, but search and replace works.

When I tried to import this file, sqlite3 found 12 instead of 11 columns on line 57 and did not import it. I deleted an initial # from that line with nano (Ctrl-_ is nano's go-to-line key). It then complained about line 233, on which I had the key C#m (this is my vinyl record database), so I started over again, this time replacing "," with @ and then " with nothing. (I may have lost a few unneeded quotation marks in my records, which I could first have changed to something else in the dbf file before converting to csv and then changed back later.)
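
For what it is worth, the same two replacements could probably be done in one pass with sed instead of nano (assuming, as with nano, that @ really does not occur anywhere in the data):

sed -e 's/","/@/g' -e 's/"//g' filename.csv > filename.txt

The first expression turns the field separators into @ before the second one strips the remaining " characters, so the order of the two -e options matters.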

It then imported properly including the first page, which was lost when importing from sql to db.
--------------

Another problem is that in list mode nothing lined up any more (since the padding had disappeared on the way through the csv file), so I set

.mode column

All columns were then truncated to the default 10 characters.

.width 30

The first column then displayed as 30 characters and the others were truncated to 10 if they were longer to start with.

The proper way is to list the width of _each_ column:
.width 10 15 20 15 4 4

.headers on
(Lets you see the names of the columns).

In column mode separators are not displayed.
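
Put together, my display settings inside sqlite3 come out roughly like this (the six widths are only an example; give one number per column of your own table):

.mode column
.headers on
.width 10 15 20 15 4 4
select * from filename;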



To scroll around the wide and long database I could print it to a text file (as explained by Ian in an email) and view it with vi or nano -w. The tutorials explain how to sort, edit, and add records using sql commands.

Or I could edit the text file and import it back in, I think, which would be an easy way to copy a record to the middle of the file and change part of it, or to insert a record, since the master file does not seem to keep track of the number of records.
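
A sketch of the print-to-a-text-file step, using sqlite3's .output command (records.txt is just a name I made up):

.mode list
.separator @
.output records.txt
select * from filename;
.output stdout

After editing records.txt you could, in principle, empty the table with 'delete from filename;' and .import the edited file back in with the same separator, though I have not tried that yet.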

I can in theory use commands to display all records containing a certain string in a certain column. My non-buggy DOS database program only searches on initial strings. I can sort on several columns at once rather than doing two sorts in a row.
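
For example (the column names here are invented for illustration; the real ones show up with .headers on):

select * from filename where title like '%love%';
select * from filename order by artist, year;

The first finds the string anywhere in the column, not just at the start, and the second sorts on two columns in a single pass.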

Eventually this will have saved time over using the DOS dbf program.
A friend with a Mac x86 is now interested in sqlite. Thanks for the program, Steven, and the help, Ian.

Summary:

I had hoped sqlite could import the header info from the csv file made by the dbf convertor, but apparently you need to set up the table first and then import the data from a text file with separators. So I had to make a separate sql file of the header info, set up the table by reading that file, and then import the data in text form (the csv file) with a carefully chosen separator not already used in the file. Since the dbf convertor would not let me change the separator from the default "," (and it also put " at the start and end of each line), I used nano to edit the csv file it did produce, choosing a separator not found in the data.

Someone clever could probably do this all with a bash script that lets you enter the name of the dbf file, choose a separator (character not used in the data) and output a db file using 'dbf', 'nano' and 'sqlite'.
Any volunteers?
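
Here is a rough, untested sketch of such a script to start from, using sed instead of nano for the separator change. It assumes the dbf invocations shown above, that the data lines in the .sql file start with INSERT, that the csv header is the first line, and that the table created by the .sql file has the same name as the dbf file:

#!/bin/sh
# usage: dbf2db file.dbf @   (the second argument is a separator not used in the data)
DBF=$1
SEP=$2
BASE=`basename $DBF .dbf`

dbf --sql --trim b $BASE.sql $DBF                   # table definition plus data
dbf --csv $BASE.csv $DBF                            # header line plus data

grep -vi '^insert' $BASE.sql > $BASE-headers.sql    # keep only the table definition
tail -n +2 $BASE.csv | sed -e "s/\",\"/$SEP/g" -e 's/"//g' > $BASE-data.txt

sqlite3 $BASE.db <<EOF
.read $BASE-headers.sql
.separator $SEP
.import $BASE-data.txt $BASE
EOF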

Sindi



