[b-hebrew] Hebrew transliteration
peterkirk at qaya.org
Mon Jan 19 14:53:31 EST 2004
On 19/01/2004 11:16, Trevor Peterson wrote:
>>Now the relics of these non-standard encodings are causing no end of
>>trouble. Literally. I and my former SIL colleagues have spent man-years
>>trying to sort out the incredible mess these old practices have caused,
>>every person doing what is right in their own eyes, and there is no end
>>to the task in sight.
>Could you be more specific about what the problems are? I'm truly curious to know.
I was just looking at version 57 of a mapping file intended for
converting text in just one legacy format into Unicode. This is 45 KB of
code. I made the first draft of this more than three years ago, and
several others have been working on it since. It should be released
shortly, after going through more revisions, as part of a new version of
the Ezra SIL package. This file is full of all kinds of mapping
complexities because the legacy encoding used conventions quite different
from Unicode's. And this package doesn't even cover the trickiest issue
of all, which is bidirectional ordering.
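
To give a flavour of what such a mapping involves, here is a minimal
sketch in Python. It is not the actual Ezra SIL mapping; the legacy code
values, the table name and the sample input are all invented for
illustration. The point is that several legacy positional variants have to
collapse onto a single Unicode mark, and every unmapped code has to be
accounted for somehow:

# Hypothetical sketch of one step of a legacy-to-Unicode conversion.
# The legacy byte values below are invented; a real mapping file for a
# Hebrew hack font can run to hundreds of entries plus contextual rules.

LEGACY_TO_UNICODE = {
    0x61: "\u05D0",  # legacy 'a' slot -> HEBREW LETTER ALEF
    0x62: "\u05D1",  # legacy 'b' slot -> HEBREW LETTER BET
    0x4D: "\u05DE",  # legacy 'M' slot -> HEBREW LETTER MEM
    0x6D: "\u05DD",  # legacy 'm' slot -> HEBREW LETTER FINAL MEM
    0x51: "\u05B8",  # legacy qamats slot -> HEBREW POINT QAMATS
    0x3B: "\u05B9",  # one legacy positional holam variant -> the one Unicode HOLAM
    0x3A: "\u05B9",  # another positional holam variant collapses onto the same point
}

def convert(legacy_bytes):
    """Map legacy codes to logical-order Unicode, flagging gaps in the table."""
    out = []
    for b in legacy_bytes:
        out.append(LEGACY_TO_UNICODE.get(b, "\uFFFD"))  # U+FFFD marks unmapped codes
    return "".join(out)

if __name__ == "__main__":
    sample = bytes([0x61, 0x51, 0x62])   # invented legacy "alef + qamats + bet"
    print(convert(sample))               # logical-order Unicode string
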
>>If all the Hebrew users in the world (biblical scholars, Israelis, Jews
>>around the world, etc) could get together and make their own decision on
>>a single technology which they could all use, that would be fine, as
>>they could then share their data.
>Who says they need to? Why do I need to share data with an Israeli physicist? This is why I say that I think Unicode has its place, but not necessarily for every field that needs to work with Hebrew script.
Oh here we go, biblical Hebrew is a quite different language from modern
Hebrew, Jewish commentators from DSS times to the present day have
nothing useful to tell us about the meaning of the biblical text, and so
we can erect a barrier around biblical Hebrew to isolate it from the
possibly dangerous influences of degraded later Hebrew. Is that what you
believe? Or do you think we just might have something to learn from
scholars like Tov and Dotan, to name a couple of moderns whose names
spring to mind, as well as from generations of older Jewish scholars? In
that case we need to be able to communicate with them. A good start is
to be able to read the Bible passages which they quote, and for which
they will of course use the encoding that everyone else in Israel uses.
Plus, whether you like it or not, many non-Jewish biblical scholars are
starting to use Unicode, so you need to be able to communicate with them.
>>How about communicate with colleagues who prefer SIL Ezra
>>while you are using SP Tiberian, or vice versa?
>Working in what capacity? If it's simply a matter of e-mailing back and forth, we can get by with whatever is available. We can transliterate according to accepted standards, we can depend on each other's access to basic texts, we can use Unicode if that seems to work for everyone, or we can agree on a common font. ...
Wouldn't it be a lot easier if we could just assume that everyone has
the same standard Hebrew setup?
>... If it's a matter of publishing, we'll all have to conform to the publisher's standards, whatever they might be. If the publisher wants camera-ready material, it doesn't matter how we generate it.
>>How about make use of the large range of Unicode fonts which are already
>>available (although not many support accents properly yet)?
>If there are good fonts available otherwise, what difference does it make?
But what if there are none for our chosen encoding which meet the
publisher's requirement for good quality camera-ready copy? What if the
publisher specifies a house font, which is likely to be Unicode-based,
at least in the near future? Mind you, I trust no publishers are really
using cameras these days. I suppose some accept PDFs, which comes to
almost the same thing.
>>How about use all the nice Hebrew
>>language support provided by OSs and standard software?
>Like what? A spell-checker for Biblical Hebrew?
I have never heard of one, but it's not a crazy idea. No, I was thinking
more of keyboard and rendering engine support. The rendering engine is
the really tricky one, automatically positioning the points in the
correct places. Legacy encodings don't do that for you: you have to choose
manually which of a number of holams or dageshes looks best with a
particular consonant and/or accent. Even the best results look shoddy.
Unicode allows rendering engines like MS Uniscribe to do a much better
job of this, to produce output which is really fit for publication.
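
To make that concrete, here is a minimal sketch (it assumes nothing beyond
Python's standard unicodedata module, and the pointed letter is just an
example I chose): in Unicode a pointed consonant is stored as a base
letter followed by combining marks, and the combining classes tell the
rendering engine that these are marks to be positioned on the base letter,
rather than pre-shifted glyphs the typist has to pick by hand:

import unicodedata

# A pointed BET stored the Unicode way: base consonant plus combining marks.
# The rendering engine (MS Uniscribe in the discussion above) uses the
# combining classes to position the dagesh and the vowel on the letter.
pointed_bet = "\u05D1\u05BC\u05B7"   # BET + DAGESH + PATAH

for ch in pointed_bet:
    print(f"U+{ord(ch):04X}  combining class {unicodedata.combining(ch):>2}  "
          f"{unicodedata.name(ch)}")

# Expected output:
# U+05D1  combining class  0  HEBREW LETTER BET
# U+05BC  combining class 21  HEBREW POINT DAGESH OR MAPIQ
# U+05B7  combining class 17  HEBREW POINT PATAH
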
>>getting proper bidirectional behaviour e.g. word wrapping with RTL text
>>instead of being able to view text only in a fixed-width display where
>>all line breaks have to be hard coded?
>With good typesetting software that accepts LTR transliteration input, this is not an issue.
OK, if you are talking about printed output rather than ad hoc
communication and web publication. But where can I get software using
transliteration input which does as good a job as OpenOffice, using
Unicode, for a lower price? Trick question, I know, because OpenOffice
is free. I guess you will say LaTeX, which is also free. Well, 20+ years
ago I too used this kind of batch processing (remember nroff?) to
produce formatted English text. Then I discovered WYSIWYG word
processing, and have never looked back since. I can do WYSIWYG Hebrew
word processing with OpenOffice and MS Office, and typesetting with MS
Publisher. Why should I go back to the dark ages of batch processing?
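
And to pin down the bidirectional point from the exchange above with
another small sketch (again only Python's standard unicodedata module is
assumed, and the sample verse reference is my own choice): Unicode stores
mixed Hebrew and English in logical reading order, and each character
carries a directionality class which the bidi algorithm in the display
layer uses to reorder and wrap the text. With a legacy visual-order
encoding that reordering, and therefore every line break, has to be done
by hand.

import unicodedata

# Mixed English and Hebrew stored in logical order.  The bidirectional
# class of each character (L = left-to-right, R = right-to-left, EN =
# European number, WS = whitespace, ...) is what a bidi-aware renderer
# uses to work out display order and where an RTL run may wrap.
mixed = "Genesis 1:1 \u05D1\u05E8\u05D0\u05E9\u05D9\u05EA"  # "Genesis 1:1 " + bereshit

for ch in mixed:
    print(f"U+{ord(ch):04X}  bidi class {unicodedata.bidirectional(ch):<3}  "
          f"{unicodedata.name(ch)}")
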
peter at qaya.org (personal)
peterkirk at qaya.org (work)