Channel: The Sagittal forum

Mathematical theory • Terminological and notational reform proposals for logarithms, powers, and roots

Intro

The logarithm of a number x to the base y is conventionally written as:

\(
\large\hspace{64px} \log_y{x}
\)


This popular notation leaves much to be desired; instead of helping us see how logs relate to other simple operations — such as powers, roots, and division — it makes them seem mysterious and difficult.

In this post, I will propose some new and improved notation for logs. I'm far from the first person to propose reforms to log notation, but I hope you will find my proposals particularly practical and powerful. Here's a summary:
  1. An ASCII-only operator, x /_ y.
  2. A Unicode-based notation, also usable in handwriting (whiteboard, notebook, etc.) or specialized math tools (\(\LaTeX\), Wolfram Language, etc.): the existing radical sign, but with a subscript instead of a superscript, \(_y\sqrt{x}\).
  3. A notation that can't be reproduced in single lines of text, typically requiring handwriting or specialized math tools: an L-shaped division bar (detailed in Proposal 4 below).

Along with these three log notation proposals, I also suggest some terminological reform, including a new term: logdivision. I also propose an ASCII-only operator for roots, x /^ y.

These proposed reforms were developed in collaboration with Dave Keenan. Ultimately, however, our priorities diverged, and we didn't end up agreeing on any of the final notations except for the ASCII-only log notation. Dave's efforts turned to his James notation proposal, which I encourage you to check out here: viewtopic.php?t=573. I've ensured that there is no conflict between our proposals, in the sense that we never give different meanings to the same symbol, only different symbols to the same meaning in some cases.

Anyway, before I delve into the details of my proposals, I'd first like to review the problem in more detail. Let's all get on the same page.



Problem 1: the lost relationship between logs and roots

Ah, the issues with our currently conventional log notation…

For starters, although taking a log is just as fundamental a thing to do as taking a root, the notation for logs is much bulkier than the notation for roots. With roots we get to use that compact and distinctive symbol, \(\surd\), as in \(\sqrt[^y]{x}\), but with logs we force ourselves to spell it out in cumbersome text, "\(\log\)".

But the problem is even worse than logs taking up an unnecessary amount of space compared with roots. The two very different notations obscure an even deeper relationship that logs and roots share. And so, sadly, many people don't even realize that logs and roots are a pair! They're the two possible inverses of raising \(x\) to the \(y\)th power: roots recover \(x\) for you; logs recover \(y\). Said another way, if \(\text{pow}(x, y) = z\), then \(\text{root}(z, y) = x\) and \(\log(z, x) = y\).

Said yet another way, all three of these operations involve the same set of three related numbers:
  1. a base,
  2. an exponent, and
  3. a power which is the result of raising this base to this exponent (usually phrased as raising the base to the exponentth power).
Together we could call three numbers in such a relation a power relation.

Where these three operations differ is in which of these three numbers is the unknown:
  1. Powers take a base and exponent to find the unknown power.
  2. Roots take a power (here called a radicand) and an exponent (rather, its inverse, called a degree) to find the unknown base (called a root).
  3. Logs take a power (here called a logarithmand) and a base to find the unknown exponent (called a logarithm).
So all three of these expressions actually state the same fact, but it's hard to tell, because of how different they look from each other:
  1. \(x^y = z\)
  2. \(^y\sqrt{z} = x\)
  3. \(\log_x{z} = y\)
Well, to be fair, the situation with power and root notations isn't too bad: \(x^y\) and \(^y\sqrt{z}\). They both use superscripted text (the \(y\)). And by placing it on opposite sides of the main text, they help suggest how they're each other's inverses. (Roots additionally include that \(\surd\) symbol, of course, which by the way is called the radical, from the Latin root for, well, "root".)

But the notation for logs is way different. Sure, it also incorporates some smaller text — subscript here, instead of superscript — but it's really the presence of that spelled-out "\(\log\)" that's the dealbreaker. Many learners, I'm sure, become needlessly intimidated by logs merely because of the way they are notated; they decide that logs must be some sort of advanced math operation, or at least akin to the trigonometric operations such as sin, cos, and tan. But in fact — along with powers and roots — they're nothing more than the next tier of operations past the additive tier — addition and its inverse, subtraction — and multiplicative tier — multiplication and its inverse, division.
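To make the three-way relationship concrete, here's a minimal Python sketch of the three operations on a power relation. (The names pow_, root_, and log_ are mine, chosen to match the pow/root/log phrasing above; they are not standard library names.)

```python
import math

def pow_(x, y):
    """Raise base x to exponent y, giving the power z."""
    return x ** y

def root_(z, y):
    """Recover the base x from the power z and the exponent y."""
    return z ** (1 / y)

def log_(z, x):
    """Recover the exponent y from the power z and the base x."""
    return math.log(z) / math.log(x)

# All three expressions state the same fact about the triple (x, y, z) = (2, 3, 8):
assert pow_(2, 3) == 8               # x^y = z
assert math.isclose(root_(8, 3), 2)  # the 3rd root of 8 is 2
assert math.isclose(log_(8, 2), 3)   # log base 2 of 8 is 3
```

Each function takes two of the three numbers in the power relation and returns the missing one.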



Problem 2: the lost relationship between logs and division

Speaking of division, taking the log also shares a very close relationship with division. Unfortunately, like logs' relationship with roots, this relationship is very much obscured by our present conventional notation. Let's review:
  • Subtraction is the inverse of addition, because if \(x + y = z\), then \(z - y = x\) and \(z - x = y\).
  • Division is the inverse of multiplication, because if \(x × y = z\), then \(z \text{/} y = x\) and \(z \text{/} x = y\).
  • With the additive and multiplicative tiers, there is only one inverse operation, and that's because both addition and multiplication are commutative operations, or in other words, \(x + y = y + x\), and \(x × y = y × x\). Not so with powers! \(x \text{^} y \neq y \text{^} x\), and so to get \(x\) back from \(z\) and \(y\) we need one operation, and to get \(y\) back from \(z\) and \(x\) we need another. As we've already discussed, these two operations are roots and logs; they are the two inverses of powers.
So it's natural to ask now: is one of root and log more analogous to division than the other? Or are they both just as analogous?

Well, there is a pretty solid answer to this question, and it's logs; logs, not roots, are more closely analogous to division. And here are a few reasons why.


Reason 1: repeated lower-tier operation

So one way to think of dividing \(x\) by \(y\) is counting how many times I subtract \(y\) from \(x\) to reach zero.
For example, \(8 \text{/} 2 = 4\), because:

\(
\begin{align}
\hspace{64px}8 - 2 &= 6 \\
\hspace{64px}6 - 2 &= 4 \\
\hspace{64px}4 - 2 &= 2 \\
\hspace{64px}2 - 2 &= 0 \\
\end{align}
\)


Well, another way to think of the log base \(y\) of \(x\) is counting how many times I divide \(x\) by \(y\) to reach one.
For example, \(\log_2{8} = 3\), because:

\(
\begin{align}
\hspace{64px}8 \text{/} 2 &= 4 \\
\hspace{64px}4 \text{/} 2 &= 2 \\
\hspace{64px}2 \text{/} 2 &= 1 \\
\end{align}
\)


There is no such great analogy between division and roots. \(^3\sqrt{8}\) essentially asks \(n × n × n = 8\), solve for \(n\). So we could look at \(8 \text{/} 2\) as asking \(n + n = 8\), solve for \(n\). But here, the counting part is already done for us; this is more like seeking along a continuum of values for the correct value that satisfies this pre-counted-out expression. It's not nearly as clear and powerful an analogy.
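The two counting procedures line up exactly in code. Here's a sketch, assuming (as in the examples above) that the counts come out whole:

```python
def div_by_counting(x, y):
    """Count how many times y can be subtracted from x to reach zero."""
    count = 0
    while x > 0:
        x -= y
        count += 1
    return count

def log_by_counting(x, y):
    """Count how many times x can be divided by y to reach one."""
    count = 0
    while x > 1:
        x /= y
        count += 1
    return count

assert div_by_counting(8, 2) == 4   # 8 - 2 - 2 - 2 - 2 = 0
assert log_by_counting(8, 2) == 3   # 8 / 2 / 2 / 2 = 1
```

The two functions are identical in shape; only the lower-tier operation (subtraction vs. division) and the stopping point (the additive identity 0 vs. the multiplicative identity 1) differ.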


Reason 2: roots are mere conveniences

Another strike against roots here is that they can be thought of as a mere convenience operation. This is because we can get the same answer as any root of \(x\) with degree \(y\) by finding the power of \(x\) with exponent \(\frac{1}{y}\).

Reciprocating is a basic form of division, analogous to negation as a basic form of subtraction. So raising to a reciprocal power is analogous to multiplying by a negative number, i.e. \(x^y\) : \(x^{\frac{1}{y}}\) :: \(x×y\) : \(x×{-}y\). We have no extra operator for multiplying by a negative number, like we see in that fourth expression there, and don't have any use for one. For raising to a reciprocal power, though, it turns out to be pretty handy, enough so that for convenience we gave it a dedicated name "root" and sugarcoated notation \(\sqrt[y]{x}\) rather than \(x^{\frac{1}{y}}\). But these conveniences just aren't strictly necessary.

With logs, on the other hand, there's simply no way to replicate a log with a power, reciprocal or otherwise.
  • Fun fact, re: the etymology for "exponent". It comes from the Latin for "put forth". The opposite of that seems to me to be "take back". And in Latin, that's "recipere", where we trace the etymology for "reciprocal". So we could think of exponing \(x\)'s, \(x x x x…\). Then take them away again, by recipring them. Once you take away one more than you started with, you end up with \(x^{-1}\), AKA the reciprocal. Or the reciprocal could be thought of as a unit of taking-away-x.
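The "mere convenience" claim is easy to check directly: any root is a power with a reciprocated exponent. A minimal Python check:

```python
import math

x, y = 32, 5

# The yth root of x is exactly x raised to the reciprocated exponent 1/y.
root = x ** (1 / y)
assert math.isclose(root, 2)       # the 5th root of 32 is 2
assert math.isclose(root ** y, x)  # raising the root back recovers x

# Logs, by contrast, cannot be written as x raised to any fixed exponent;
# no constant k makes x ** k equal log_y(x) across all such power relations.
```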



Reason 3: logs can be expressed as quotients of logs to a shared base

Perhaps you've found yourself in this situation before: you want to take a log base \(n\), but your calculator only supports a couple standard bases: \(10\), and \(e\). What to do?

Well, the short answer is, if you're taking \(\log_n{x}\), what you need to do is choose one of these two standard bases — let's say you choose \(e\) — and find

\(
\hspace{64px} \LARGE \frac{\log_e{x}}{\log_e{n}}
\)


because that's equal to \(\log_n{x}\). If you're like me, you memorized this logarithmic identity in algebra class back in high school and just went with it; if anyone ever did manage to explain to you why it works, it didn't stick. Well, I recently found a way to get my head around this identity. It works for me, so I hope it'll help you too.

It begins with this key observation (I'm using my x /_ y notation for \(\log_y{x}\) here):

a /_ c            = (a /_ b)(b /_ c)               The exponent from \(c\) to \(a\) \(=\) the one from \(b\) to \(a\) \(×\) the one from \(c\) to \(b\).
(a /_ c)/(b /_ c) = (a /_ b)(b /_ c)/(b /_ c)      Divide both sides by the exponent from \(c\) to \(b\).
(a /_ c)/(b /_ c) = a /_ b                         Cancel out.
a /_ b            = (a /_ c)/(b /_ c)              Flip sides of the equation.
a /_ b            = (a /_ e)/(b /_ e)              Finally, choose \(c = e\), and we can see the key result.

So, asking, "Which power do I raise \(b\) to, to get \(a\)?" is the same as choosing some other base \(c\) and asking, "What is the ratio between the power I need to raise \(c\) to, to get \(a\), and the power I need to raise \(c\) to, to get \(b\)?" In other words, we shift both values into any log space (as long as it's the same log space, base \(c\) in this example), then we divide one by the other. So logarithms are literally a special type of division.
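The change-of-base identity is easy to confirm numerically with the standard library (math.log(x, b) is log base b of x; with one argument it is the natural log):

```python
import math

a, b = 8, 2

# log base 2 of 8, computed directly...
direct = math.log(a, b)

# ...equals a ratio of two logs taken to any shared base, e.g. e or 10:
assert math.isclose(direct, math.log(a) / math.log(b))      # natural logs
assert math.isclose(direct, math.log10(a) / math.log10(b))  # base-10 logs
```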



Proposal 1: Reserve "logarithmization" and "exponentiation" for the natural case; use "logdivision" and "power" otherwise

Importantly, the logarithmic identity we looked at in the previous section lets us centralize all of our logarithms — logarithms to any base, that is — under a single unified base. Any expression log base anything else can be instead expressed as a ratio of two logs both to that chosen standard base. And the natural base to choose for this job is, well, the one we call the natural base: \(e\). So that's why many calculators only offer this one base (or maybe also base \(10\) sometimes).

It bothers many mathematicians that when the base is left off of \(\log\), it often is taken by default as \(10\), while \(e\) has to use the separate \(\ln\) notation. But \(10\) has no profound mathematical meaning, like \(e\) does; \(10\) is there only because we apes have ten fingers and set ourselves up with a decimal number system. When we see a naked \(\log\), if we want to impress the extraterrestrials who may have no such special connection with \(10\), we should really see it default to base \(e\).

We'll go a step further, even, and propose that the word "logarithm" itself should be reserved for the natural logarithm. Taking the log to any base other than \(e\) is best thought of as a compound operation: finding the ratio of one (natural) logarithm to another, or in other words, dividing two logs. And it's this intimate relationship with division that leads us to comfortably propose the alternative terminology to "logarithm" for any non-natural case: logdivision.

It's nice that "logdivision" sounds a bit like "logarithm" — close enough, without being too close.

Along with "logdivision", then we get a bunch of new words to refer to the parts of logarithms:

concept            | division term            | new logdivision term        | logarithmization term
operation          | division                 | logdivision                 | logarithmization
verb phrase        | divide \(x\) (by \(y\))  | logdivide \(x\) (by \(y\))  | logarithmize \(x\) (by \(y\))
adjective          | divisive                 | logdivisive                 | logarithmic
preposition        | \(x\) over \(y\)         | \(x\) logover \(y\)         | n/a
left operand       | dividend                 | logdividend                 | logarithmand
right operand      | divisor                  | logdivisor                  | base
output             | quotient                 | logquotient                 | logarithm
alt. concept       | fraction                 | logfraction                 | n/a
alt. left operand  | numerator                | lognumerator                | n/a
alt. right operand | denominator              | logdenominator              | n/a

You can use words from either the logdivision or logarithmization column to describe logarithms and logdivisions; however, we advise against mixing them, i.e. don't speak of a "logdividend and base" or a "logarithmand and logdivisor"; only "logdividend and logdivisor" and "logarithmand and base".

In the earlier parts of this post, I've been using "log" as shorthand, to avoid confronting the issue that I didn't want to use "logarithm" for any base other than \(e\). Actually, though, I don't think that "log" should be used as short for "logdivision" (and even if it should, I don't think we have a snowflake's chance in hell of convincing anyone of that); "log" should always be short for "logarithm". Either "log to a variable base" or "log with a variable base" could be synonymous with "logdivision"; "log to/with a variable base" can be read as equivalent to "log(arithm) to/with a divisor that is the log(arithm) of a variable". If you want a shorter term for "logdivision", try "logdiv". I'll use these shortened terms like this for the rest of the post.

A couple side points:
  • Etymologically, "logarithm" comes from Greek "logos" and "arithmos", which mean ratio and number, respectively. So what does Napier mean by "ratio-number"? This resource explains it rather well; it's the relationship between a geometric = ratiometric = logometric sequence and an arithmetic sequence. In another timeline, it might have been called the "geomarithm".
  • The internet seems to be split on whether the operation should be referred to as logarithmication, logarithmation, or logarithmization, with thousands of web results for each. However, only the last of these three, logarithmization, which we've gone with here, also has an official definition online.

Next up, similar to our suggestion to reserve the word "logarithm" for the base \(e\) case, we suggest that the word "exponentiation" should be reserved for the case of raising \(e\) to some power, as we see in the exponential function \(\exp{x}\). In the other two cases:
  1. raising any other (non-\(e\)) constant to a variable exponent, or
  2. raising any variable to a constant exponent (\(e\) or otherwise),
we suggest the word power should be used instead. The default type of power is the second kind here, the variable to constant exponent type; the other type of power should be qualified as "power of constant". See these tables:

type          | formula         | name                | standard function notation
constant exp  | f_y(x) = x ^ y  | power (by constant) | pow_y(x)
constant base | f_y(x) = y ^ x  | power of constant   | exp_y(x)
primitive     | f(x) = e ^ x    | exponentiation      | exp(x)

type                 | formula          | name                        | standard function notation
constant logdividend | f_y(x) = y /_ x  | logdivision of constant     | n/a
constant base        | f_y(x) = x /_ y  | logdivision (from constant) | log_y(x)
primitive            | f(x) = x /_ e    | logarithmization            | log(x)
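The function families in these two tables can be sketched in a few lines of Python. (The names pow_by, exp_by, and log_by are my stand-ins for the tables' pow_y, exp_y, and log_y; they're not standard names.)

```python
import math

def pow_by(y):
    """power (by constant): f_y(x) = x ^ y"""
    return lambda x: x ** y

def exp_by(y):
    """power of constant: f_y(x) = y ^ x"""
    return lambda x: y ** x

def log_by(y):
    """logdivision (from constant): f_y(x) = x /_ y, i.e. log base y of x"""
    return lambda x: math.log(x) / math.log(y)

# The primitives are just the y = e cases:
exp_ = exp_by(math.e)   # exponentiation: e ^ x
log_ = log_by(math.e)   # logarithmization: ln x

assert math.isclose(pow_by(3)(2), 8)      # 2 ^ 3
assert math.isclose(exp_by(2)(10), 1024)  # 2 ^ 10
assert math.isclose(log_by(2)(8), 3)      # 8 /_ 2
assert math.isclose(log_(exp_(5)), 5)     # logarithmization inverts exponentiation
```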

So ^ should be known as the "power sign". And \(x^y\) might be read "\(x\) power \(y\)", "power \(x\) by \(y\)", or "the \(y\)th power of \(x\)".

Some more side points:
  • A popularly accepted reading of this is "\(x\) to the power of \(y\)", but we disapprove of this. Even the Wikipedia page for exponentiation gives "\(b\) (raised) to the power of \(n\)" as the second reading of \(b^n\). But how could we refer to this as a power of \(n\) when we already accept referring to it as a power of \(b\)?
  • We recognize that it's a bit confusing that "exponentiation of x" as in \(\exp_k(x)\) literally makes \(x\) into an exponent, like \(k^x\), while "power of x" as in \(\text{pow}_k(x)\) does not make it into a power, but rather a base, like \(x^k\); this suggests that "power (by constant)" should really be called "basification", and all "powers of constants" (not just those of \(e\)) should really be called "exponentiation". However, "power" and "exponentiation" are simply too entrenched to consider this extent of reform proposal, and we wish to reserve "exponentiation" for "(natural) exponentiation", anyway.
  • We also recognize that it may have been more parallel for us to go with the operation name "powerization" and "powerful operation tier", i.e. it would have more closely paralleled "addition" and "multiplication", as well as "additive operation tier" and "multiplicative operation tier". But the reason no one uses "powerization", or needs it, is that "power" functions as all parts of speech in both math and the real world. "How much power will it take?" "She's a power dresser." "Overhead power lines." "Power it up." "Just power through." "It's powering the whole town."
  • While "expmultiplication" might seem like the parallel thing to "logdivision", this in fact would be \(\exp(x) × \exp(y) = \exp(x+y)\), which is certainly not the same thing as power, and not particularly useful.
  • "power of constant" is sometimes seen as antilogarithmization, though this is somewhat archaic. It made sense when logs were just a means to speed up multiplication (adding values from lookup tables) and the log was the primary operation, so its inverse was naturally the "antilog"; but now that we treat exponentiation as the primary operation, it makes no sense for a primary operation to be called "anti"-anything.




Proposal 2: Division slash based operators for plain text situations

As my first notational proposal, here are a pair of two-character operators for use in plain computer text situations (ASCII-only), such as a person typing with only the characters directly available on a standard keyboard, potentially without the ability to use subscripts or superscripts, as is the case with many email clients, spreadsheet programs, or computer languages:
  • x /_ y for logs, as a replacement for log_y(x) or sometimes log(x,y) or log(x)/log(y).
  • x /^ y for roots, as a replacement for x^(1/y) or sometimes x**(1/y). (Without access to the \(\surd\) symbol, we're forced to write roots as reciprocal powers.)
Here's the reasoning:
  • Pairing them like this — a pair of two-character operators both starting with the / character — helps to show how logs and roots are related, addressing the first problem we described toward the beginning of this post.
  • Using the division slash character / shows how logs are like division, addressing the second problem we described toward the beginning of this post. (As for roots, well, it's not wrong to associate them with division; they still are a "breaking-down" type operation, akin to subtraction or division.)
  • /_ visually looks like an 𝐿 for "logdivision" and /^ visually looks like an 𝑟 for "root". As a bonus, as pairs of letters go, L and R are a particularly memorable pair; they are the two letters that make the two liquid consonants in English, and come up frequently as the initials for "left" and "right" (which is apt because logs and roots can be understood as the left and right inverses of exponentiation). We could even imagine reading x/_y as "\(x\) ell \(y\)", and x/^y as "\(x\) arr \(y\)".
  • The underscore _ in /_ for logdivision reminds us of the _ in log_y(x), while accordingly the ^ in /^ reminds us of the ^ in x^(1/y).
We call these "operators" rather than "symbols" or "signs" — as in "addition sign" +, "division sign" ÷, etc. — because they consist of more than one character. Many programming languages have two-char operators like this, such as the Elvis operator ?:, logical operators && and ||, or addition assignment +=, etc. etc. etc.
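Most programming languages won't let you define /_ or /^ as literal operators, but their meanings are trivial to sketch as plain functions. (The names logdiv and recippow are my hypothetical choices, echoing the terminology above.)

```python
import math

def logdiv(x, y):
    """x /_ y : the logdivision of x by y, i.e. log base y of x."""
    return math.log(x) / math.log(y)

def recippow(x, y):
    """x /^ y : the yth root of x, i.e. x raised to the reciprocal of y."""
    return x ** (1 / y)

assert math.isclose(logdiv(8, 2), 3)    # 8 /_ 2 = 3
assert math.isclose(recippow(8, 3), 2)  # 8 /^ 3 = 2
```

Note that the operand order matches the proposed operators: the power (the "whole thing") on the left, the base or degree on the right.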

So /_ is obviously called the logdivision operator, but as for /^, we can't really call it the "root operator" or "radical operator", because those would inevitably get confused with the \(\surd\) symbol, which is not interchangeable (more on this in the next paragraph). So instead, let's call it the reciprocal power operator or recipropower operator for short; you can remember this easily because / matches "recipro" and ^ matches "power", in that order. And just like how we sometimes call ^ after the generic symbol, "caret" or "hat", we might also call this operator the "caret-slash" or "hat-slash".

Okay, more regarding the non-interchangeability of \(\sqrt[y]{x}\) with x/^y: note in particular that the radicand and degree are on opposite sides of each other in the two notations. You'll notice that the same swapping of order applies for x/_y and the conventional \(\log_y{x}\). Well, Dave's James notation helps demonstrate why this is the more logical order for these two numbers, with parallelism to subtraction and division:

  • \(
    \def \ex #1{\enclose{top left}{#1}\,}
    \def \lo #1{\enclose{bottom left}{#1}\,}
    \def \re #1{\enclose{angletop}{#1}\,}
    \def \no #1{\enclose{top right}{#1}\,}
    \def \fracnl #1#2{\genfrac{}{}{0pt}{1}{#1}{#2}}
    \def \dfracnl #1#2{\genfrac{}{}{0pt}{0}{#1}{#2}}
    \Large
    \begin{array}{c|c|c}
    \\
    \texttt{x - y} & \texttt{x / y} & \texttt{x /_ y} \\
    \\
    \hline \\
    {\lo{\dfracnl{\ex{x}}{\re{\ex{y}}}}} & {\dfracnl{x}{\re{y}}} & {\dfracnl{\lo{x}}{\re{\lo{y}}}} \\
    \\
    \hline \\
    x-y & \dfrac{x}{y} & \log_y{x} \\
    \\
    \end{array}
    \)

And with radication, then, we simply wish for parallelism between the two operators, so that each one takes the whole thing, the power (logarithmand or radicand), as its left operand, and one of its two parts, the base or the exponent, as its right operand, and spits out the remaining part.

Using this operator and thinking in terms of logdivision makes facts like
\(\log_y{x} = \frac{1}{\log_x{y}} = -\log_{\frac{1}{y}}{x}\)
much more accessible:
x /_ y = 1/(y /_ x) = -(x /_ (1/y)).
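These reciprocal-flip identities can be checked numerically (writing x /_ y as a Python function; note that the second assertion reciprocates the divisor and negates the result):

```python
import math

def logdiv(x, y):
    """x /_ y, i.e. the log of x base y."""
    return math.log(x) / math.log(y)

x, y = 8.0, 2.0
assert math.isclose(logdiv(x, y), 1 / logdiv(y, x))   # x /_ y = 1/(y /_ x)
assert math.isclose(logdiv(x, y), -logdiv(x, 1 / y))  # x /_ y = -(x /_ (1/y))
```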

Just as the /, ^, and _ do not appear in handwriting or math-software environments, there would be no "connected-up", "proper" versions of /_ and /^. These two operators are truly for ASCII-only environments.

  • The technical name for the division slash is the "solidus". Perhaps informally we may call this the "over sign" per "\(x\) over \(y\)", by analogy with the informal "plus sign" per "\(x\) plus \(y\)", where formally that should be the "addition sign", or "times sign" per "\(x\) times \(y\)", where formally that should be the "multiplication sign".
  • The technical name for ÷ is the "obelus".
  • The technical name for roots is "radication". Fun fact: "eradication" essentially means "destruction by uprooting".
  • It's just as well that /_ isn't a "log sign", as that might get confused with "log sine"!
  • We might also call /_ the "logdivisive", the adjective form of "logdivision", by analogy with calling \(\surd\) the "radical", which is the adjective form of "radication".




Proposal 3: Subscripted radical notation, for Unicode and enhanced text situations

Next, I propose an extension of the radical sign so that it supports logdivision in addition to radication. This can be used in enhanced text environments where a person is either individually looking up and copying-and-pasting specific Unicode characters or has otherwise given themselves the ability to type them using key combinations, or else is using dedicated math notation tools like \(\LaTeX\) or Wolfram Language, or has total freedom in a handwriting environment (whiteboards, notebooks, etc.). At a computer, admittedly, 99% of people in the world are only willing or able to use ASCII, so /_ and /^ are for this majority; but for the minority doing subtle and/or expansive work, the extra effort for \(\LaTeX\) or Unicode is well worth it.

So here we simply use the radical sign with a subscripted number instead of a superscripted number:

\(
\large\hspace{64px} _x\sqrt{z}
\)


Here's the reasoning:
  • Like the Triangle of Power, invented by Alex Jordan and named by Grant Sanderson, this positions the base, exponent, and power of a power relation in the shape of a triangle, with the base in the bottom-left, the exponent in the top-center, and the power in the bottom-right. Having the exponent up and to the right from the base is in accordance with how we already write powers, \(x^y\) (in this case, we don't actually draw the radical sign, though). Having the exponent up and to the left from the power is in accordance with how we already write roots, \(\sqrt[y]{z}\) (though there the power is called a radicand and the exponent a degree; applied to the radicand as an exponent, it is actually the inverse of the degree). So if we just place the base to the left of the power (considered a logarithmand), we find the logical remaining third form, for logdivision, \(_x\sqrt{z}\). Each form evaluates to the missing number of the three, i.e. \(x^y=z\), \(\sqrt[y]{z} = x\), and \(_x\sqrt{z} = y\). Putting them all together, we'd have \(_x\sqrt[y]{z}\), but this wouldn't be useful in ordinary contexts. So while the Triangle of Power is pretty bulky and seems more useful as a mnemonic than actual notation, this extension of the existing radical sign is compact and readily available. (Admittedly, this notation does not do much for showing the relationship between logdivision and division, but what it does for the relationship between logdivision and radication makes up for it.)
  • By tucking the base underneath the serif-like hook at the end of the radical sign (as it is often rendered, anyway), we create the shape of a caret, which is how we write powers x^y, and that's exactly the relationship between the subscripted base and the superscripted exponent here. Honestly, this almost makes me feel like the radical sign must have been designed with this use in mind! In handwriting environments, this part of the radical sign could be drawn exaggerated in size, to emphasize the difference from taking a root. (See image below.)
  • Of course, there are pros and cons to reusing a symbol like we are doing with \(\surd\) here. The pros are: no need to add anything to Unicode, and people already know how to read it and write it and recognize it and potentially type it or insert it into \(\LaTeX\) or Wolfram Language. The cons are: it breaks what some may consider to be a law that one symbol should only be used with one operation, and might thereby surprise or confuse people who would only expect to see a radical sign used for taking a root; and, following the pattern of "root mean square" — which asks us to take the squares, take their mean, then take the root — and "log sine" — which asks us to take the sine, then the log — it would be reasonable for someone to interpret "reciprocal power" or "recipropower" as "take the power then take the reciprocal", which contradicts what it actually is. (These are Dave's arguments, and I have responses to them in the small text below.)

Just as \(x^y\) (and x^y) can be read "\(x\) to the \(y\)th power", we suggest that \(\sqrt[y]{z}\) (and z/^y) can be read "\(z\) from the \(y\)th root", and \(_x\sqrt{z}\) (and z/_x) can be read "\(z\) from the \(x\)th log". In other words, the "building-up" operation that goes from the base and exponent to the power is in the to direction, while either of the "breaking-down" operations, which go from the power and one of the base or the exponent to the other of those two, is in the from direction, as shown here:


We note that while in \(x^y\), the number \(y\) is the exponent, it still makes sense to say "to the \(y\)th power"; the power is the result, not the superscripted argument. Think of \(y\) as an index: imagine a list of all the possible powers of \(x\), and \(y\) gives us the address for the power we want out of that list. For example, "The powers of 2 are [2, 4, 8, 16, 32, 64, 128, …]; give me the 4th power of 2." Answer: 16. (And in fact, "index" is another commonly used word for the "degree" of a root. But we find it confusing to use such a generic term in this context, and have preferred "degree" here.) And alternatively, as \(x^y\) is sometimes read "\(x\) to exponent \(y\)", we could say \(\sqrt[y]{x}\) as "\(x\) from degree \(y\)" or \(_y\sqrt{x}\) as "\(x\) from base \(y\)".
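The "list of powers" reading can be made concrete in a couple of lines (a trivial Python sketch):

```python
# The powers of 2, indexed from the 1st power upward.
powers_of_2 = [2 ** k for k in range(1, 8)]
assert powers_of_2 == [2, 4, 8, 16, 32, 64, 128]

# "Give me the 4th power of 2" -> index into the list (1-based).
assert powers_of_2[4 - 1] == 16
```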

  • I've also got to give a shout-out to "Lars" for his "Y of Power", which I think is even better than the Triangle of Power.
  • This is a response to Dave's argument that we should use one symbol per operation. There is an argument that because my proposal simply shows another vantage on the power relation, it is in an important sense the same operation. We need only recognize that the subscripted number is the root, where root is another word for base, and I see no reason to forbid using it in the context of logarithmization. Unlike with radication, we're not asking for the root, we're asking for the exponent, but the root is still very much involved.
  • This is a response to Dave's argument that the name "reciprocal power" invites a misinterpretation. I contend that people still have to learn what "reciprocal power" actually means, as opposed to all the other possible things it could just as well mean per the words of its name; the words of its name will then help them remember what it is, but no one ever hoped that they'd completely unambiguously capture everything about what it is and what it isn't. Also, we can read the "reciprocal power" as the power that corresponds to the \(n\)th power but is its reciprocal, namely the \(\frac{1}{n}\)th power.



Proposal 4: L-shaped division bar, for enhanced situations

Finally, I propose a variation of the division bar to express logdivision. Like the ordinary division bar, which we could call a sort of "2D notation", this one can't easily be reproduced with Unicode characters; this notation is only for use with dedicated math notation tools like \(\LaTeX\) or Wolfram Language, or else handwriting environments where the writer has total expressive freedom:


Here's the reasoning:
  • The division-bar-like notation emphasizes the similarity between logdivision and division. (Admittedly, this does nothing to emphasize the similarity between logdivision and radication.)
  • If you turn your head sideways, it looks like an 𝐿, that is, an italic L, such as is evoked by the logdivision operator /_ for plain-text environments.
  • The shape can also be seen as the relevant portion of the radical sign as my third proposal extends it to work for logdivision. That is, the corner of this L-shaped division bar looks like a caret pointing off to the side where the exponent would be.

(Note that we haven't written any custom \(\LaTeX\) yet to render this L-shaped division bar. It's an open problem, if anyone is interested in taking it on.)

  • The technical name for the "division bar" is the "vinculum".




Conclusion

This is the Sagittal forum, after all, so I ought to at least provide some examples of microtonal intervals notated following my reform proposals.

One step of 12-EDO, as a frequency ratio:
2/^12
\(\sqrt[12]{2}\) (established)

The quarter-comma meantone fifth, as a frequency ratio:
5/^4
\(\sqrt[4]{5}\) (established)

The perfect fifth, in octaves:
(3/2)/_2
\(_2\sqrt{\frac32}\)
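For the curious, the three interval examples compute as follows (standard-library Python; the /^ and /_ readings are shown in the comments):

```python
import math

edo_step = 2 ** (1 / 12)         # 2 /^ 12: one step of 12-EDO, as a frequency ratio
qcm_fifth = 5 ** (1 / 4)         # 5 /^ 4: the quarter-comma meantone fifth
p5_octaves = math.log(3 / 2, 2)  # (3/2) /_ 2: the perfect fifth, in octaves

# Each value inverts back to its defining power relation:
assert math.isclose(edo_step ** 12, 2)
assert math.isclose(qcm_fifth ** 4, 5)
assert math.isclose(2 ** p5_octaves, 3 / 2)
```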


Thanks for reading!

Statistics: Posted by cmloegcmluin — Tue Jan 02, 2024 1:59 pm


