• Artyom@lemm.ee · 3 months ago

    Just wait until they learn that computers subtract by adding, and multiply by adding, and divide by adding, and do exponents by adding, and do logarithms by adding.
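    For instance, on a two’s-complement machine, subtraction really is addition under the hood: a - b comes out as a + (~b + 1). A minimal sketch in C (the snippet is mine, not a description of any particular CPU):

    ```c
    #include <assert.h>

    int main(void) {
        unsigned a = 42, b = 17;
        /* Subtract by adding: a - b == a + (~b + 1),
           i.e. add the two's complement of b. */
        assert(a - b == a + (~b + 1u));
        return 0;
    }
    ```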

    • ZILtoid1991@lemmy.world · 3 months ago

      It multiplies using a complex set of gate arrays that do some adding; alternatively, a hardware multiplier can be built like a multiplication table out of logic gates. Early CPUs did multiplication by adding (a multiplication is essentially just adding the same number to itself repeatedly), and if you were lucky it was optimized to use bit-shifts.
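      Roughly like this, as a sketch (the function name is mine, not from any real CPU):

      ```c
      #include <stdint.h>

      /* Shift-and-add multiplication: for each set bit of b,
         add a copy of a shifted to that bit's position. */
      uint32_t mul_shift_add(uint32_t a, uint32_t b) {
          uint32_t result = 0;
          while (b) {
              if (b & 1)
                  result += a;
              a <<= 1;
              b >>= 1;
          }
          return result;
      }
      ```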

      Division is a lot more complicated, though. I once did some optimization by multiplying with reciprocals instead, but the speed gain was negligible due to memory bandwidth limitations.
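      The reciprocal trick looks roughly like this (a sketch with made-up names, not my actual code): pay for one division up front, then every element is just a multiply.

      ```c
      /* Replace n divisions by the same divisor with one division
         and n multiplications (costs a little float precision). */
      void scale_all(float *data, int n, float divisor) {
          float inv = 1.0f / divisor;
          for (int i = 0; i < n; i++)
              data[i] *= inv;
      }
      ```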

  • affiliate@lemmy.world · 3 months ago

    from a formal perspective, division is an “abbreviation” for multiplying by a reciprocal. for example, you first define what 1/3 is, and then 2/3 is shorthand for 2 * (1/3). so in this sense, multiplication and division are extremely similar.

    same thing goes for subtraction, but now the analogy is even stronger, since you can subtract any two numbers (whereas you “can’t” divide by 0). so x - y is shorthand for x + (-y), where -y is defined to be “the number such that y + (-y) = 0”.
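    in symbols, just restating the two definitions above:

    ```latex
    x - y := x + (-y), \qquad \text{where } -y \text{ satisfies } y + (-y) = 0 \\
    \frac{x}{y} := x \cdot \tfrac{1}{y}, \qquad \text{where } \tfrac{1}{y} \text{ satisfies } y \cdot \tfrac{1}{y} = 1 \quad (y \neq 0)
    ```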

    • Wandering_Uncertainty@lemmy.world · 3 months ago

      The way I think of it, there is no subtraction, and there is no division. Or square roots.

      There are just layers of operations (the adding/subtracting layer, which I think of as counting; the multiplying/dividing layer, which I think of as grouping; etc.).

      Everything within a layer is fundamentally the same thing; we just have multiple ways of saying it.

      Partly because teaching kids negative numbers is harder than teaching subtraction, and fractions are hard enough to think about without framing them as relationships expressed through multiplication.

      Again, just how my brain does things. I’m not a mathematician or anything, but I’m pretty decent at regular math.

      • affiliate@lemmy.world · 3 months ago

        i think this is a really nice way of thinking about things, especially for regular everyday life.

        as a mathematician though, i wanted to mention how utterly and terribly cursed square roots are. (mainly just to share some of the horrors that lurk beneath the surface.) they’ve been a problem for quite some time. even in ancient greece, people were running into trouble with √2. it was only fairly recently (around the 17th century) that they started looking at complex numbers in order to get a handle on √-1. square roots led to the invention of two different “extensions” of the standard number systems: the real numbers (e.g. for √2), and later, the complex numbers (e.g. for √-1).

        at the heart of it, the problem is that there’s a fairly straightforward way to define exponentiation by whole numbers: 3^n just means multiply 3 by itself n times. but square roots want us to exponentiate things by a fraction, and it’s not really clear what 3^(1/2) is supposed to mean. it ends up being that 3^(1/2) is just defined as “the number x that satisfies x^2 = 3”. and so we’re in this weird situation where exponentiating by a fraction is somehow defined differently than exponentiating by a whole number.
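        side by side, in symbols:

        ```latex
        3^n := \underbrace{3 \cdot 3 \cdots 3}_{n \text{ times}}
        \qquad \text{vs.} \qquad
        3^{1/2} := \text{the } x \geq 0 \text{ such that } x^2 = 3
        ```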

        but this is similar to how multiplication is defined: when you multiply something by a whole number, you just add the number to itself a bunch of times; but if you want to multiply by a fraction, you have to get a bit creative. and in a very real sense, multiplication “is the exponentiation of addition”.
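        a tiny sketch of that tower in C (the names are mine), where each operation just repeats the one below it:

        ```c
        /* multiplication as repeated addition, exponentiation as
           repeated multiplication: the same "repeat" pattern twice */
        unsigned mul(unsigned a, unsigned n) {   /* a + a + ... (n times) */
            unsigned acc = 0;
            while (n--) acc += a;
            return acc;
        }

        unsigned pow_u(unsigned a, unsigned n) { /* a * a * ... (n times) */
            unsigned acc = 1;
            while (n--) acc = mul(acc, a);
            return acc;
        }
        ```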