b-hebrew - [b-hebrew] Karl's lexicon

  • From: "JAMES CHRISTIAN READ" <JCR128 AT student.anglia.ac.uk>
  • To: b-hebrew AT lists.ibiblio.org
  • Subject: [b-hebrew] Karl's lexicon
  • Date: Sat, 25 Aug 2007 08:55:04 +0100

Hi Karl,

KWR: Where should the separator be? At the ends of the lines? If so, then
the return is already a unique character.

JCR: The separator should go between the form and the
entry. The form and the entry would be the natural
columns to use in a database-driven version of your
dictionary.
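
For illustration, here is a minimal Python sketch of
reading such a two-column file. The file name
"lexicon.txt" and the choice of a tab as the separator
are just assumptions for the example's sake.

# Sketch: parse a lexicon file with one record per line,
# the form and the entry separated by a tab (assumed format).
with open("lexicon.txt", encoding="utf-8") as f:
    for line in f:
        form, sep, entry = line.rstrip("\n").partition("\t")
        if not sep:
            continue  # skip lines that have no separator
        print(form, "->", entry)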

KWR: Making a quick electronic count of irregular forms and forms which
could come from two or more sources, I got a result of 777 entries in
my dictionary. The problem is that I know I am missing many such
forms.

JCR: How did you make such an automated count?

KWR: Because I don't know database programming, I don't
understand how this would work.

JCR: I actually made a mistake. A database table is
basically a set of rows (each row is unique) with
columns, and the columns hold different pieces of
information. To achieve the functionality you suggest,
we would need one large table with two columns: the
first would hold the word being looked up, and the
second the root it derives from. When looking up the
definition of a word found in the text, the application
would ask the database to return the row whose first
column matches the word. Taking the root from the
second column, it could then query a second table,
dedicated to your dictionary entries, to return the
desired definition.
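
To make this concrete, here is a minimal sketch using
Python's built-in sqlite3 module. The table names,
column names and sample row are all illustrative, not
taken from your actual dictionary.

# Sketch of the two-table lookup: form -> root, then root -> definition.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE forms (word TEXT PRIMARY KEY, root TEXT);
    CREATE TABLE entries (root TEXT PRIMARY KEY, definition TEXT);
""")
con.execute("INSERT INTO forms VALUES (?, ?)", ("wayyiqtol", "qtl"))
con.execute("INSERT INTO entries VALUES (?, ?)", ("qtl", "to kill"))

def look_up(word):
    # First query: map the surface form to its root.
    row = con.execute("SELECT root FROM forms WHERE word = ?",
                      (word,)).fetchone()
    if row is None:
        return None
    # Second query: fetch the dictionary entry for that root.
    entry = con.execute("SELECT definition FROM entries WHERE root = ?",
                        (row[0],)).fetchone()
    return entry[0] if entry else None

print(look_up("wayyiqtol"))  # -> to kill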

However, as I said, I think it would be far more
functional and user-friendly if a definition were
provided for every single word. That way we make do
with one table instead of two, and each request would
cost one query to the database rather than two, so we
would also gain in performance.
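
Purely as a sketch again, here is the one-table version
of the same lookup, with a single query per request
(names and sample data are once more made up):

import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE lexicon (word TEXT PRIMARY KEY, definition TEXT)")
con.execute("INSERT INTO lexicon VALUES (?, ?)",
            ("wayyiqtol", "and he killed (3ms waw-consecutive of qtl, to kill)"))

# A single query answers the whole request.
row = con.execute("SELECT definition FROM lexicon WHERE word = ?",
                  ("wayyiqtol",)).fetchone()
print(row[0] if row else "not found")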

I don't think that hand-feeding the student in this
way will impede their progress in recognising forms.
On the contrary, I think that with time they will
start to notice the similarities between forms and
work out the identifying marks of, for example, the
3rd person masculine singular for themselves. Such
learning is more representative of the natural way of
learning grammar and makes reading more accessible to
non-academic learners. There is a large category of
learners who are linguistic geniuses in the sense that
if you put them in any culture in the world they will
soak up the language like sponges, but who are
academically challenged in the sense that the very
second you start talking about grammar their brains
switch off and they start looking at you as if you
were from another planet. My feeling is that reading
the Hebrew text should be made accessible to all with
an interest, and such 'hand-holding' would open the
text up to a much wider range of learners.

KWR: OK, it's in your concordance, but not your frequency table.


JCR: If it's in the concordance then it's in the
frequency tables: both are generated from the same
data. Note that there are hundreds of pages in the
frequency tables; you may not have found the bi-gram
in question because you were expecting it to have a
specific count. Please note that the frequencies were
not generated from a critical edition of the Tanakh
but from the XML Aleppo Codex, so all counts are
specific to that text. I plan to do something similar
with Chris's online Leningrad Codex but haven't got
around to it yet.
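
For the curious, here is a rough sketch of how 1-gram
and bi-gram counts can be generated from the same
tokenised text. This is not the actual pipeline I ran
over the XML Aleppo Codex, just an illustration with
stand-in tokens.

from collections import Counter

# Stand-in for the tokenised text of the codex.
tokens = "word1 word2 word1 word3 word2 word1".split()

unigrams = Counter(tokens)                  # 1-gram counts
bigrams = Counter(zip(tokens, tokens[1:]))  # bi-gram counts over adjacent pairs

for gram, count in bigrams.most_common():
    print(" ".join(gram), count)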

One way to find the bi-gram more quickly in the
frequency charts is to first find its first word in
the 1-gram column, click on it and search through the
much reduced results. Hope this helps.

I suppose my whole architecture is lacking in
documentation and I really should get round to
publishing a page or two about how the concordance,
reader and frequency charts can be used.

James Christian Read - BSc Computer Science
http://www.lamie.org/hebrew - thesis1: concept-driven
machine translation using the Aleppo Codex
http://www.lamie.org/lad-sim.doc - thesis2: language acquisition
simulation




