• chortle_tortle@mander.xyz
    9 months ago

    Mathematicians will in one breath tell you they aren’t fractions, then in the next tell you dz/dx = dz/dy * dy/dx
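    The fraction-like cancellation does check out numerically. A quick sketch; the choice of z = sin(y), y = x² is mine, not from the comment:

    ```python
    # Chain rule "cancellation": z = sin(y), y = x**2, so dz/dx = dz/dy * dy/dx.
    import math

    def dz_dy(y):   # derivative of z = sin(y) with respect to y
        return math.cos(y)

    def dy_dx(x):   # derivative of y = x**2 with respect to x
        return 2 * x

    def dz_dx(x):   # chain rule: multiply the two "fractions"
        return dz_dy(x ** 2) * dy_dx(x)

    # Compare against a centered finite difference of z(x) = sin(x**2).
    x, h = 1.3, 1e-6
    numeric = (math.sin((x + h) ** 2) - math.sin((x - h) ** 2)) / (2 * h)
    print(abs(dz_dx(x) - numeric) < 1e-6)  # the two agree
    ```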

  • benignintervention@lemmy.world

    I found math in physics to have this really fun duality of “these are rigorous rules that must be followed” and “if we make a set of edge case assumptions, we can fit the square peg in the round hole”

    Also I will always treat the derivative operator as a fraction

  • rudyharrelson@lemmy.radio

    Derivatives started making more sense to me after I started learning their practical applications in physics class. d/dx was too abstract when I learned it in precalc, but once physics introduced d/dt (change with respect to time t), derivative formulas felt more intuitive, like “velocity is the change in position with respect to time, which is the derivative of position” and “acceleration is the change in velocity with respect to time, which is the derivative of velocity”
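    That physics framing is easy to play with numerically. A rough sketch; the constant-acceleration setup is my own example, not from the comment:

    ```python
    # Position under constant acceleration a = 2 m/s^2: x(t) = 0.5 * a * t**2.
    a = 2.0
    dt = 1e-5

    def position(t):
        return 0.5 * a * t ** 2

    def velocity(t):   # dx/dt: change in position over a small change in time
        return (position(t + dt) - position(t)) / dt

    def accel(t):      # dv/dt: change in velocity over a small change in time
        return (velocity(t + dt) - velocity(t)) / dt

    t = 3.0
    print(round(velocity(t), 3))  # ~ a * t = 6.0
    print(round(accel(t), 3))     # ~ a = 2.0
    ```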

    • Prunebutt@slrpnk.net

      Possibly you just had to hear it more than once.

      I learned it the other way around, since my physics teacher was speedrunning the math sections to get to the fun physics stuff, and it really clicked for me after hearing it a second time in math class.

      But yeah: it often helps to have practical examples and it doesn’t get any more applicable to real life than d/dt.

      • exasperation@lemmy.dbzer0.com

        I always needed practical examples, which is why it was helpful to learn physics alongside calculus my senior year in high school. Knowing where the physics equations came from was easier than just blindly memorizing the formulas.

        The specific example of things clicking for me was understanding where the “1/2” came from in distance = 1/2 (acceleration)(time)^2 (the simpler case of initial velocity being 0).
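        For what it’s worth, that 1/2 falls straight out of integrating twice (with initial velocity and position both zero, as in the comment):

        ```latex
        v(t) = \int_0^t a \, d\tau = a t
        \qquad
        x(t) = \int_0^t v(\tau) \, d\tau = \int_0^t a\tau \, d\tau = \tfrac{1}{2} a t^2
        ```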

        And then later on, complex numbers didn’t make any sense to me until phase angles in AC circuits showed me a practical application, and vector calculus didn’t make sense to me until I had to actually work out practical applications of Maxwell’s equations.

  • vaionko@sopuli.xyz

    Except you can kinda treat it as a fraction when dealing with differential equations
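    For example, in a separable ODE the dy/dx gets split up exactly as if it were a fraction (the standard textbook manipulation, spelled out):

    ```latex
    \frac{dy}{dx} = k y
    \;\Rightarrow\;
    \frac{dy}{y} = k \, dx
    \;\Rightarrow\;
    \int \frac{dy}{y} = \int k \, dx
    \;\Rightarrow\;
    \ln|y| = kx + C
    \;\Rightarrow\;
    y = A e^{kx}
    ```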

  • Avicenna@lemmy.world

    Look it is so simple, it just acts on an uncountably infinite dimensional vector space of differentiable functions.

    • gandalf_der_12te@discuss.tchncs.de

      fun fact: the vector space of differentiable functions (at least on compact domains) is actually of countable dimension.

      still infinite though

      • Avicenna@lemmy.world

        Doesn’t BCT imply that infinite-dimensional Banach spaces cannot have a countable basis?

        • gandalf_der_12te@discuss.tchncs.de

          Uhm, yeah, but there are two different definitions of basis iirc. And I’m using the analytical definition here; you’re talking about the linear algebra definition.

          • Avicenna@lemmy.world

            So I call an infinite-dimensional vector space countable- or uncountable-dimensional according to whether it has a countable or uncountable basis. What is the analytical definition? Or do you mean basis in the sense of topology?

            • gandalf_der_12te@discuss.tchncs.de

              Uhm, i remember there’s two definitions for basis.

              The basis in linear algebra says that you can compose every vector v as a finite sum v = sum over i from 1 to N of a_i * v_i, where the a_i are arbitrary coefficients.

              The basis in analysis says that you can compose every vector v as an infinite sum v = sum over i from 1 to infinity of a_i * v_i. So that makes a convergent series. It requires that a topology is defined on the vector space first, so convergence becomes well-defined. We say such a vector space has countably infinite dimension if a basis (v_1, v_2, …) exists such that every vector v can be represented as a convergent series.
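              A concrete instance of that analytic notion, assuming the standard ℓ² example: the unit vectors e_i are not a basis in the linear algebra sense, but every element is a convergent series in them:

              ```latex
              x = (a_1, a_2, a_3, \dots) \in \ell^2
              \quad\Longrightarrow\quad
              x = \sum_{i=1}^{\infty} a_i e_i
              \quad\text{(convergence in the } \ell^2 \text{ norm)}
              ```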

              • Avicenna@lemmy.world

                Ah, that makes sense; the regular definition of basis is not of much use in infinite dimensions anyway, as far as I recall. Wonder if differentiability is required for what you said, since polynomials on compact domains (probably required for uniform convergence or sth) would also work for continuous functions, I think.

                • gandalf_der_12te@discuss.tchncs.de

                  regular definition of basis is not much of use in infinite dimension anyways as far as I recall.

                  yeah, that’s exactly why we have an alternative definition for that :D

                  Wonder if differentiability is required for what you said since polynomials on compact domains (probably required for uniform convergence or sth) would also work for cont functions I think.

                  Differentiability is not required; what is required is a topology, i.e. a definition of convergence to make sure the infinite series are well-defined.

    • marcos@lemmy.world

      And it denotes an operation that gives you that fraction in operational algebra…

      Instead of making it clear that d is an operator, not a value, and thus the entire thing becomes an operator, physicists keep claiming that there’s no fraction involved. I guess they like confusing people.

  • Gladaed@feddit.org

    Why does using it as a fraction work just fine then? Checkmate, Maths!

    • Kogasa@programming.dev

      It doesn’t. It only sometimes does, because d/dx can be seen as an operator involving a limit of a fraction, and sometimes you can commute the limit when the expression is sufficiently regular.
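      A classic case where naive cancellation fails (for partial derivatives, admittedly, rather than the single-variable case above): the triple product rule gives −1, not the +1 that fraction-cancelling would suggest:

      ```latex
      \left(\frac{\partial z}{\partial x}\right)_y
      \left(\frac{\partial x}{\partial y}\right)_z
      \left(\frac{\partial y}{\partial z}\right)_x
      = -1
      ```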

  • Caveman@lemmy.world

    The thing is that it’s legit a fraction, and d/dx actually explains what’s going on under the hood. People interact with it as an operator because it’s mostly looking up common derivatives and using the properties.

    Take for example ∫ f(x) dx to mean “the sum (∫) of supersmall sections of x (dx) multiplied by the value of f at that point (f(x)).” This is why there’s a dx at the end of all integrals.

    In the same way you can say that the slope at x is a tiny change in f(x) divided by a tiny change in x, or df(x)/dx, or more traditionally (d/dx) f(x).
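    That “sum of supersmall sections” reading can be made literal. A rough sketch; the choice of f(x) = x² on [0, 1] is my own example:

    ```python
    # Riemann sum: add up f(x) * dx over supersmall slices of [a, b].
    def integral(f, a, b, n=100_000):
        dx = (b - a) / n                 # width of each supersmall section
        return sum(f(a + i * dx) * dx for i in range(n))

    approx = integral(lambda x: x ** 2, 0.0, 1.0)
    print(round(approx, 4))  # ~ 1/3, the exact value of the integral
    ```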