OK, I agree this isn't much of an issue.
Let me just explain briefly why I think it could be useful. Take, for instance, HTML or wiki text (when editing a page on Wikipedia). In both cases one can type English text with Arabic words interleaved (or vice versa) without any further markup. The browser runs the Unicode bidi algorithm to decide what must be laid out left-to-right and what right-to-left; the preferred directionality of each Unicode character is recorded in a table (its bidirectional class in the Unicode character database). Likewise, the browser can select a font based on the characters it encounters.
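Just to illustrate that table lookup (this is only a sketch, nothing that ArabXeTeX itself does): Python's standard unicodedata module exposes exactly this per-character bidirectional class from the Unicode tables.

    import unicodedata

    # Bidi class from the Unicode character database:
    # 'L' = left-to-right (Latin), 'AL' = right-to-left Arabic letter.
    for ch in ("a", "\u0628"):      # 'a' and the Arabic letter beh
        print(ch, unicodedata.bidirectional(ch))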
Suppose I write some kind of converter from HTML or wiki text (or whatever) to XeLaTeX: I would then have to require the source text to carry explicit markup saying what is in Arabic and what is in English, even though HTML or wiki text doesn't need that extra markup.
Yet much of that information can be recovered just by looking at the Unicode characters themselves, so deciding whether a word is Arabic or English can be done automatically. Extra markup means redundancy, which one usually wants to avoid.
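To make the idea concrete, here is a rough sketch (purely hypothetical, not tied to any existing converter) of what such automatic marking could look like; \textarabic is only a placeholder name for whatever markup command the converter would emit.

    import unicodedata

    def is_arabic(ch):
        # 'AL' = Arabic letter, 'AN' = Arabic-Indic digit (Unicode bidi classes).
        # Spaces and punctuation count as non-Arabic here, so they split runs;
        # a real converter would resolve such neutrals like the bidi algorithm does.
        return unicodedata.bidirectional(ch) in ("AL", "AN")

    def flush(run, arabic, out):
        if run:
            text = "".join(run)
            out.append("\\textarabic{%s}" % text if arabic else text)

    def mark_arabic_runs(text):
        out, run, in_arabic = [], [], False
        for ch in text:
            if is_arabic(ch) != in_arabic:
                flush(run, in_arabic, out)
                run, in_arabic = [], not in_arabic
            run.append(ch)
        flush(run, in_arabic, out)
        return "".join(out)

    print(mark_arabic_runs("The word \u0643\u062a\u0627\u0628 means book."))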
Now I understand that looking at the Unicode characters only tells you the script, not whether the text is in, e.g., Arabic, Farsi or Urdu. But maybe that could be set at a higher level, e.g. with a command such as \UseLanguages{English,Farsi}.
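Here is a hypothetical sketch of how such a declaration could resolve the ambiguity: the characters only reveal the script, and the declared set then picks the language. The names \UseLanguages, DECLARED and SCRIPT_TO_CANDIDATES are all made up for illustration.

    # e.g. \UseLanguages{English,Farsi} at the top of the document
    DECLARED = ["English", "Farsi"]

    # Which languages are usually written in which script (illustrative, not exhaustive).
    SCRIPT_TO_CANDIDATES = {
        "Arabic": ["Arabic", "Farsi", "Urdu"],
        "Latin": ["English", "French", "German"],
    }

    def language_for(script):
        # Pick the first declared language whose usual script matches
        # the one detected from the characters themselves.
        for lang in DECLARED:
            if lang in SCRIPT_TO_CANDIDATES.get(script, []):
                return lang
        return None

    print(language_for("Arabic"))   # Farsi
    print(language_for("Latin"))    # English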
Well, these are just some thoughts. ArabXeTeX and XeLaTeX give great results already!
