I rewrite formulas in a format that can be used in either simple ASCII or UNICODE text files, for several reasons :

1. Peer review systems were the initial motivation. Especially in the past, these systems accepted only ASCII text comments, sometimes UNICODE, and some of them do not accept non-ASCII characters such as Greek letters. That is no longer a problem (although it was tedious to adapt programming code to UNICODE!).

2. Reduce, where possible, the use of special UNICODE characters (eg Greek letters where an equivalent English letter will do!). An additional problem with UNICODE characters is that, even in Courier constant-width character sets, they tend NOT to have constant width, which mis-aligns expressions and therefore makes proof-reading more difficult (see "constant-character-width fonts" in point 4 below).

3. Eliminate [superscripts, subscripts, special line formatting], which are not handled by simple text editors (other than via coding like HTML).

4. Use constant-character-width fonts like Courier 10 pitch. This greatly aids the vertical alignment of successive equations, making derivations much easier to follow, and often making it obvious when mistakes occur in expressions!

5. Restructure a few conventional ways of writing formulae, which :
   - makes for faster typing
   - makes for faster and better review by readers
   - reduces ambiguities that often arise with conventional notations
   - "brings together" information about an operation in front of the expression being evaluated, which is faster and easier to see
   - facilitates use in symbolic programs
   Most notably :
      dp(t  : f(x,v,a,t))     = ∂f(x,v,a,t)/∂t      "partial" derivative
      dp(dt : f(x,v,a,t))     = ∂/∂t f(x,v,a,t)
      (dt, t0 to t1 : f(t))   definite integral of f(t) wrt dt, from t0 to t1

6. Points [2-5] above make typing MUCH [faster, easier], plus allow more precise alignment of [characters, symbols, expressions, functions, formulae] that greatly improves error-checking!
   Careful alignments often make [errors, omissions, incompleteness] stand out automatically.

7. Symbolic processing - [formulae, descriptive text, variables] are entered in an "everything in a simple character line" format that is directly usable as a computer program (QNial - Queen's University Nested Interactive Array Language, www.nial.com - an interpreted language with no compiler). In some peer reviews that I do, this allows simple and powerful "symbolic" processing, especially of large higher-dimensional arrays, which is much more accurate than trying to imagine precisely the outputs of formulae.

8. Create many sub-sub-sections for each key symbol etc, so that hyperlinks from the table of contents make it very easy to get to each derivation. (The index could be used for that as well, but a Table of Contents provides a more powerful [ordering, organisation].)

Note that QNial is not a "high usage" language, and perhaps I should migrate to OCaml or something else, but I usually end up frustrated with the limitations of other languages, and I've grown used to it. I've not used C for many years except for Linux language ports and limited work, and I never got into C++ or C# etc, as I rarely need the programming frameworks and standard coding, which are easy to adapt to QNial (but much more work to go the other way).

HFLN nomenclature is intentionally redundant (like language) to help reinforce symbol meanings and make it easier to spot errors. For example :

   dp[dt : E0ods(POIo,t)] = dp[dt : E0pds(POIo(t),t)]

E0pds(POIo(t),t) is a measure of the static electric field at POIo (the Point of Interest fixed in the observer frame RFo) at time t. Note that POIo shifts position in RFp with time, and that the static electric field at that point, which arises from the moving particle, also changes with time - so the total time variation combines the change due to POIo's motion through RFp with the explicit time dependence of the field itself.
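As a minimal sketch of what the two HFLN forms in point 5 denote, here is a pure-Python stand-in (not QNial; the names dp_ and integral_ are illustrative only, not part of HFLN) that evaluates the partial derivative and the definite integral numerically:

```python
# Pure-Python sketch (not QNial) of what two HFLN forms denote.
#   dp(t : f(x,v,a,t))        -> partial derivative of f with respect to t
#   (dt, t0 to t1 : f(t))     -> definite integral of f(t) over [t0, t1]
# The names dp_ and integral_ are illustrative, not part of HFLN itself.

def dp_(f, args, i, h=1e-6):
    """Central-difference estimate of the partial derivative of f
    with respect to its i-th argument, at the point 'args'."""
    up, dn = list(args), list(args)
    up[i] += h
    dn[i] -= h
    return (f(*up) - f(*dn)) / (2 * h)

def integral_(f, t0, t1, n=100_000):
    """Trapezoid-rule estimate of the definite integral of f over [t0, t1]."""
    h = (t1 - t0) / n
    s = 0.5 * (f(t0) + f(t1)) + sum(f(t0 + k * h) for k in range(1, n))
    return s * h

# Example : f(x,v,a,t) = x + v*t + 0.5*a*t**2
f = lambda x, v, a, t: x + v * t + 0.5 * a * t ** 2

# dp(t : f(x,v,a,t)) at (x,v,a,t) = (0,3,2,1) is v + a*t = 5
print(dp_(f, (0.0, 3.0, 2.0, 1.0), i=3))

# (dt, 0 to 2 : f(0,3,2,t)) = [3*t**2/2 + t**3/3] at t=2 = 6 + 8/3
print(integral_(lambda t: f(0.0, 3.0, 2.0, t), 0.0, 2.0))
```

The "operation in front" convention shows up here too: dp_(f, ..., i=3) states the operation and its target before the expression is evaluated, mirroring dp(t : f(x,v,a,t)).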
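The array-based checking described in point 7 can also be sketched in Python (again standing in for QNial, with illustrative names): instead of imagining the outputs of a formula, evaluate both sides of a claimed identity over a whole grid of values, so any mistake stands out as a concrete mismatch.

```python
# Sketch (in Python rather than QNial) of point 7: check a claimed
# identity numerically over a grid of values at once, instead of
# imagining the outputs of formulae.  All names here are illustrative.

def dpdt(f, t, h=1e-6):
    """Central-difference derivative of f at t."""
    return (f(t + h) - f(t - h)) / (2 * h)

lhs = lambda t: t ** 3          # f(t)
rhs = lambda t: 3 * t ** 2      # claimed dp(dt : f(t))

# Evaluate both sides over a grid; any real discrepancy stands out.
grid = [k * 0.1 for k in range(-20, 21)]
mismatches = [t for t in grid if abs(dpdt(lhs, t) - rhs(t)) > 1e-4]
print(mismatches)   # empty list : the identity holds on the grid
```

If rhs were mistyped (say 2*t**2), the mismatches list would immediately flag every grid point where the two sides disagree - the same "errors stand out automatically" effect that careful alignment gives on paper.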