  • From: sindi keesan <keesan AT sdf.lonestar.org>
  • To: Lee Forrest <lforrestster AT gmail.com>
  • Cc: baslinux AT lists.ibiblio.org
  • Subject: Re: [BL] script to extract http://'s from a doc.
  • Date: Fri, 2 Mar 2007 20:08:44 +0000 (UTC)

On Thu, 1 Mar 2007, Lee Forrest wrote:

On Fri, Mar 02, 2007 at 02:58:37AM +0000, sindi keesan wrote:

[delete]

This produced a file called links.html with only the html tags.

I just read over both of your responses about the script.

Something is screwy with the "file_with_urls": non-printing characters (including newlines), non-US-ASCII characters, or spaces in the URLs.

It is a 186K .doc file with lots of garbage in it.
I also tested on a file I made with nothing but two lines in it:

http://www.grex.org
http://www.freeshell.org

Called 'testfile'. No garbage at all.

What's it look like with cat instead of less?

My test file would look the same both ways.


Do you have a real vi handy? (nvi/elvis/vim etc.)

If so, open the file and in command mode (hit Esc), do

:%list
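
If there is no real vi around (busybox vi may not support :%list), od -c
shows the same information; od is in both coreutils and busybox, so this
is just another way to look at the file:

od -c file.doc | less

Non-printing bytes show up as \r, \0, or octal escapes.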

If I knew what was going on, then I could probably write a clean-up stage.
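
A minimal sketch of such a clean-up stage, assuming the garbage is binary
.doc formatting rather than broken URLs (the clean.txt name is only for
illustration):

strings "$1" > clean.txt
# or with tr alone: keep only tabs, newlines, CRs, and printable ASCII
tr -cd '\11\12\15\40-\176' < "$1" > clean.txt

Then point the sed pipeline at clean.txt instead of "$1". strings (from
binutils) puts each run of printable characters on its own line, which
also rescues urls buried in binary junk.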

If you want, gzip it and ftp it to me. Current IP is 66.52.195.14

Try my 2-line test file first.

[delete]

Lee


Here is my 'urlextract', which I chmod +x'ed. I typed ./urlextract testfile or ./urlextract file.doc (both files in the same directory as urlextract).
-------------

#!/bin/sh

#script.sh

#usage: script.sh file_with_urls

echo "<html><head></head><body>" > links.html

sed -n 's@^\(.*\)\(http://\|www\)\([^ ><"]*\)\(.*\)@<p><a href="\2\3">\2\3</a>@p' "$1" | sed
's@\"www@\"http://www@' >> links.html

echo "</body></html>" >> links.html

links links.html
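
For the two-line testfile, the script should produce a links.html like
this (assuming GNU sed; the \| alternation is a GNU extension that a
strictly POSIX sed may reject):

<html><head></head><body>
<p><a href="http://www.grex.org">www.grex.org</a>
<p><a href="http://www.freeshell.org">www.freeshell.org</a>
</body></html>

Note that the visible link text keeps the bare www form; the second sed
only patches the href attribute.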



