
b-hebrew - Re: [b-hebrew] Biblical Hebrew orthographical practices in light of epigraphy

b-hebrew AT lists.ibiblio.org

Subject: Biblical Hebrew Forum

  • From: James Christian <jc.bhebrew AT googlemail.com>
  • To: Yitzhak Sapir <yitzhaksapir AT gmail.com>
  • Cc: B-Hebrew <b-hebrew AT lists.ibiblio.org>
  • Subject: Re: [b-hebrew] Biblical Hebrew orthographical practices in light of epigraphy
  • Date: Wed, 2 Jun 2010 09:11:30 +0300

Hi Yitzhak,

In response to the suggestion that the missing yodhs in plurals may be a
matter of convenience, you provided more examples of non-plural words with
missing yodhs in order to back up your strong position that this was not for
convenience. I won't comment on your logic here; I think it speaks for
itself, and most list members can see how self-contradictory that kind of
reasoning is.

You go on to misrepresent me by saying I have made strong claims about the
amount of data needed to know anything with any certainty about a language.
I won't go into this, because I'm tired of the hair-splitting game this
discussion has become.

However, there are a few points on which you need to be corrected. You seem
to be under the misguided impression that statistical models of language
contain no linguistic knowledge. You are not alone; most theoretical
linguists (Chomsky et al.) share the same misguided impression. In fact,
quite the contrary: statistical models are currently the computational
grammars with the most sophisticated grammatical models of language in
existence. Phrase-based statistical models represent not the high-level
subject-verb-object type of linguistic rule but the very low-level,
fine-grained and extremely sophisticated collocational rules about which
words can have which relationships with which other words. Models like those
of Bod (2007) and Chiang (2005), which I referenced for you, make these
models even more sophisticated by catering for higher-level linguistic
knowledge such as subject-verb-object structure. Chiang achieves this with
SCFGs (synchronous context-free grammars), Bod with UDOP (unsupervised
data-oriented parsing). Both approaches are, in principle, very similar and,
in my view, the right kind of direction if we are ever to approach a
complete computational explanation of any natural language.
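
To make the point about collocational knowledge concrete, here is a minimal
sketch in Python of the kind of statistic such models are built on:
co-occurrence counts and pointwise mutual information for adjacent word
pairs. The toy corpus and the pmi helper are my own illustrative
assumptions, not anything taken from Bod's or Chiang's actual systems.

    import math
    from collections import Counter

    # Toy corpus standing in for training data; any tokenised text would do.
    corpus = [
        "the old man read the scroll",
        "the young scribe copied the scroll",
        "the old scribe read the letter",
    ]

    unigrams = Counter()
    bigrams = Counter()
    for sentence in corpus:
        tokens = sentence.split()
        unigrams.update(tokens)
        bigrams.update(zip(tokens, tokens[1:]))

    total_unigrams = sum(unigrams.values())
    total_bigrams = sum(bigrams.values())

    def pmi(w1, w2):
        """Pointwise mutual information of an adjacent word pair."""
        p_pair = bigrams[(w1, w2)] / total_bigrams
        p_w1 = unigrams[w1] / total_unigrams
        p_w2 = unigrams[w2] / total_unigrams
        return math.log2(p_pair / (p_w1 * p_w2))

    # Word pairs with high PMI are the low-level collocational "rules"
    # that the statistical model has learned from the data.
    for (w1, w2), count in bigrams.most_common():
        print(f"{w1} {w2}: count={count}, pmi={pmi(w1, w2):.2f}")

Nothing here "knows" English grammar in the textbook sense, yet the scored
pairs already encode which words go with which: that is the linguistic
knowledge a phrase-based model carries.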

You go on to claim that translators are not native in both languages. I
agree entirely, but you seem to have missed the point. Translators are
typically native in the *target language*. This means they have all of that
collocational knowledge in the language they are translating into: they know
what sounds like, say, good colloquial English or formal legalistic English.
Their task with the source language is simply to understand the text.

You then go on to mention how small the Hebrew corpus is and how we
shouldn't be able to know with certainty things like how the personal
pronouns translate. What you don't mention is that we have an unbroken
tradition of translating Hebrew into various languages with much larger
corpora, which gives us a good key for cracking the code.

Let's take Egyptian hieroglyphs as an example. The corpus was fairly large,
yet nobody could crack it: we had no translations to serve as a key. Then
the Rosetta Stone was cracked with the help of its trilingual inscription,
and that vital key greatly accelerated the decipherment of Egyptian
hieroglyphs as a whole.

Now, a lot of the rest of what you said sounded like the typical
pronouncements of a theoretical linguist. To any serious computational
linguist the remedy for such misguided impressions is quite simple: if you
really think your theories are worth anything, put them in code and test
them against real data. Once you have actually got your hands dirty with
real data, your eyes will gradually start to open and you will leave the
fuzzy, misguided world of theoretical linguistics for the concrete and very
real world of practical linguistics. Here is a simple exercise I have
suggested to you:

1) Take a 1,000-word sample of English (I'll even give you free rein to
choose the most representative sample) as your training data.
2) From your training data, construct your rule set, with justification
drawn from that training data.
3) Test the explanatory power of your rule set against the test set of all
English utterances on the internet (a rough sketch of how such a test might
be automated follows below).
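
For what it's worth, here is a rough sketch of how step 3 could be
automated, under the simplifying assumption that the "rule set" is just the
set of adjacent word pairs observed in the training sample and "explanatory
power" is the fraction of word pairs in a held-out text that those rules
license. The file names train.txt and test.txt, and the pair-based rule
format, are placeholders of my own, not a claim about how you would actually
formulate your rules.

    # Sketch: measure how much of unseen text a rule set drawn from a small
    # training sample can account for.

    def word_pairs(text):
        tokens = text.lower().split()
        return set(zip(tokens, tokens[1:]))

    with open("train.txt", encoding="utf-8") as f:
        rules = word_pairs(f.read())       # "rules" induced from ~1,000 words

    with open("test.txt", encoding="utf-8") as f:
        test_pairs = word_pairs(f.read())  # much larger held-out sample

    covered = len(test_pairs & rules)
    coverage = covered / len(test_pairs) if test_pairs else 0.0
    print(f"{covered} of {len(test_pairs)} word pairs covered ({coverage:.1%})")

Run it with any 1,000-word sample against any larger test text and you will
see how quickly the coverage falls: that is the point of the exercise.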

Once you have done this, come back and we can start having some form of
sensible discussion.

James Christian



