  • From: Sean Maley <semaley@yahoo.com>
  • To: Permaculture Plant Database <pcplantdb@lists.ibiblio.org>
  • Subject: Re: [pcplantdb] Re: [piw] Information I would like to see
  • Date: Thu, 3 Feb 2005 15:36:56 -0800 (PST)

We keep the symbol from the USDA, for example, then periodically request
information for it. This work is scalable if we coordinate hosts.
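
Something like this is what I have in mind for the coordination (the host
count and host ID here are made up; any deterministic split over the symbol
list would do):

use strict;
use warnings;

# Hypothetical host coordination: each crawler takes a disjoint slice
# of the symbol list, so no profile page is fetched twice.
my $host_count = 4;    # assumed number of cooperating hosts
my $host_id    = 0;    # this host's index, 0 .. $host_count - 1

my @symbols = qw( OCBA ABBA ACRU );    # placeholder symbols from plantlst.txt

my @mine;
for my $sym (@symbols) {
    my $sum = 0;
    $sum += ord($_) for split //, $sym;    # cheap deterministic hash
    push @mine, $sym if $sum % $host_count == $host_id;
}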

For instance:
Start with:
http://plants.usda.gov/Data/statdist/plantlst.txt

Then you have:
http://plants.usda.gov/cgi_bin/plant_profile.cgi?symbol=SYMBOL
or the more usable printable version:
http://plants.usda.gov/cgi_bin/plant_profile.cgi?symbol=OCBA&mode=Print&photoID=ocba_001_ahp.tif
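
Grabbing the symbols out of that list might look something like this (I'm
assuming the symbol is the first comma-separated field of each line; check
the file's actual layout before relying on it):

use strict;
use warnings;
use LWP::UserAgent;

my $ua = LWP::UserAgent->new();
$ua->agent('Permabot');

my $list = $ua->get('http://plants.usda.gov/Data/statdist/plantlst.txt');
die 'fetch failed: ' . $list->status_line unless $list->is_success;

my @symbols;
for my $line ( split /\n/, $list->content ) {
    my ($symbol) = split /,/, $line;        # assumes symbol is the first field
    next unless defined $symbol;
    $symbol =~ s/^"//; $symbol =~ s/"$//;   # strip quoting, if any
    push @symbols, $symbol if $symbol =~ /^[A-Z0-9]+$/;
}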

Write a program starting with this:

use strict;
use warnings;
use LWP::UserAgent;    # LWP::Protocol::http is loaded automatically

my $browser;           # reused across calls
sub do_GET
{
    # Parameters: $url [, %headers]
    $browser = LWP::UserAgent->new() unless $browser;
    $browser->agent( "Permabot" );
    my $resp = $browser->get( @_ );
    return ( $resp->content, $resp->status_line, $resp->is_success, $resp )
        if wantarray;
    return unless $resp->is_success;
    return $resp->content;
}

my $symbol = 'OCBA';   # Sweet Basil
my $url = qq|http://plants.usda.gov/cgi_bin/plant_profile.cgi?symbol=$symbol&mode=Print|;
# pass e.g. Cookie => "sessionid=..." as extra arguments if a session is needed
my( $doc, $status, $successful, $response ) = do_GET( $url );

$doc =~ tr/\xd/\xa/;           # normalize carriage returns to newlines
my @data = split /\n/, $doc;

my( $table, $level, $tr, $td ) = ( 0, 0, 0, 0 );
# Should clean the tags down to bare <TABLE><TR><TD> elements first;
# multi-line tags or multiple tags on one line could cause problems, too.

for( @data ) {
    if( m|<TABLE|i ) {
        $table++; $level++;
    }

    $tr++ if m|<TR|i;
    $td++ if m|<TD|i;

    # Work the data by a coordinate system.
    # TABLEs are nested, so this is a little complex, but not impossible:
    # TD = 1 is often the field name,
    # TD = 2 is often the data value.

    if( m|</TABLE|i ) {    # need to spend time here
        $level--;
    }
}
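
If the hand-rolled tag counting gets tedious, HTML::TableExtract from CPAN
tracks the nesting for you. A minimal sketch, assuming $doc already holds
the fetched profile page:

use HTML::TableExtract;

my $te = HTML::TableExtract->new();
$te->parse($doc);

for my $ts ( $te->tables ) {
    # depth and count locate the table within the page's nesting
    printf "table at depth %d, count %d\n", $ts->depth, $ts->count;
    for my $row ( $ts->rows ) {
        my @cells = map { defined $_ ? $_ : '' } @$row;
        # cell 0 is often the field name, cell 1 the value
        print join( ' | ', @cells ), "\n";
    }
}

Each table's depth/count pair is essentially the coordinate system the
comments above are reaching for.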


--- "Lawrence F. London, Jr." <lfl@intrex.net> wrote:

> Sean Maley wrote:
>
> > ... spelling error, scraping. Basically, we do what google does. We
> > scan the web pages from other plant database web sites and archive
> > the information with our own data. This gathers information quickly
> > and adds to the available information that this database provides
> > for our users. We may not have our own information for a given
> > species, but could still have something to provide. A simple report
> > would provide all the available opportunities to add to the archive
> > from a permaculture perspective.
>
> Could you provide more explanation of what this is about and how it
> happens? Seems we have been talking about incorporating data from
> other databases and other text-based and marked-up (for the Web) data
> in archives, i.e. mailing list archives, ftp archives, html-ized
> archives (mhonarc and hypermail), and you suggest "scanning" select
> web addresses for such data. What does this "scanning" consist of?
> Would this produce an index of urls used to locate the various pieces
> of data themselves? Or would this actually gather and archive this
> data on our machine for access and redistribution through PIW,
> referencing the source and credit to its original location?
>
> This sounds like a great idea.
>
> > Sorry for the scrappy attention to detail, it's been a busy week.
> > (MSRB has been having their fun this week)
>
> MRSB?
>
> LL
> --
> L.F.London
> lfl@intrex.net
> http://market-farming.com
> Market Farming Forum
>
> http://lists.ibiblio.org/mailman/listinfo/marketfarming



