What dx and dy actually mean (a calculus student’s guide)

The first time anyone shows you the chain rule, they will write it as something like

$$\frac{dy}{dx} = \frac{dy}{du} \cdot \frac{du}{dx},$$

and then say something reassuring like “you can think of it as the $du$’s cancelling.” And you will think: hang on, the textbook just spent two pages telling me $\tfrac{dy}{dx}$ is not a fraction. Now I am supposed to cancel things in it as if it were? Which is it?

Both, sort of. This is the kind of question textbooks tend to wave away with “it works out because of [reasons]” rather than addressing head-on. Here is the head-on version.

The short answer

$\tfrac{dy}{dx}$ is the limit of a fraction. The thing it is the limit of looks like

$$\frac{\Delta y}{\Delta x} = \frac{f(x + \Delta x) - f(x)}{\Delta x},$$

which is a perfectly ordinary fraction in which both top and bottom are real numbers. As you let $\Delta x$ shrink, the fraction approaches a limit, and that limit is what we call $\tfrac{dy}{dx}$.
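You can watch this limit happen numerically. A minimal sketch (the function $f(x) = x^2$ and the point $x = 1$ are my choices for illustration, not the article's):

```python
# Watch the difference quotient (f(x + dx) - f(x)) / dx settle down
# as dx shrinks, for the sample function f(x) = x^2 at x = 1.
# The true derivative there is 2.

def f(x):
    return x * x

x = 1.0
for dx in [0.1, 0.01, 0.001, 0.0001]:
    # An ordinary fraction of two real numbers -- no infinitesimals involved.
    slope = (f(x + dx) - f(x)) / dx
    print(f"dx = {dx}: secant slope = {slope}")
```

Each printed slope is a genuine $\Delta y / \Delta x$; the sequence creeps toward $2$, which is the number we call $\tfrac{dy}{dx}$ at $x = 1$.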

The notation $\tfrac{dy}{dx}$ is Leibniz’s way of suggesting “the limit of $\Delta y / \Delta x$ as both go to zero together.” The notation deliberately looks like a fraction because in the limit it came from a fraction. But the limit itself is a single number, not a literal fraction of two other numbers, because $dy$ and $dx$ separately would have to be “infinitesimal,” which is not a thing real numbers do.

Newton vs Leibniz

Newton and Leibniz both invented calculus in the 1660s–1670s, independently of each other, with very different notations and very different attitudes about what the symbols meant.

Newton wrote derivatives as $\dot y$ — just a dot above the variable — and thought of the derivative as a single object, “the rate of change of $y$ with time.” He was nervous about the idea of infinitely small quantities and tried to define derivatives in a way that didn’t need them. (He didn’t quite succeed; the rigorous limit definition we use today came about 200 years later.)

Leibniz wrote derivatives as $\tfrac{dy}{dx}$ and was happy to think of $dx$ and $dy$ as “infinitely small” quantities that you could manipulate algebraically. This is what makes Leibniz notation feel so natural for things like the chain rule and substitution: you really can pretend the $dx$’s cancel, and you get the right answer.

The catch is that “infinitely small” is not a real number, so for two centuries Leibniz’s notation was easy to use but hard to defend logically. Mathematicians used it because it worked, and dealt with the philosophical objections by ignoring them. Eventually, in the 1800s, Cauchy and Weierstrass put calculus on a rigorous limit-based footing — and a century later, in the 1960s, Abraham Robinson constructed a fully rigorous theory of “non-standard analysis” in which infinitesimals genuinely exist. Both approaches give the same answers as Leibniz did. He was just two centuries early.

The picture: secant slopes approaching the tangent slope

The geometric picture matches the limit definition exactly. Pick a point $(x, y)$ on a curve. Pick a second point a little to the right, at horizontal distance $\Delta x$. Draw the straight line through the two points. Its slope is exactly $\Delta y / \Delta x$ — the rise over the run.

Now slide the second point closer to the first. The straight line swings — it gets closer to being tangent to the curve at the first point. As the two points get arbitrarily close, $\Delta x$ approaches $0$, $\Delta y$ approaches $0$, and the fraction $\Delta y / \Delta x$ approaches a particular finite number: the slope of the tangent line at $(x, y)$. That number is what we write $\tfrac{dy}{dx}$.

$dx$ on its own does not have a numerical value. Neither does $dy$. The symbol $\tfrac{dy}{dx}$ as a whole has a value — it is the limit. You can see why Leibniz wrote it the way he did: the way the limit emerges is genuinely “a small change in $y$ divided by a small change in $x$, in the limit.”

Why you can sometimes treat it as a fraction

The chain rule statement

$$\frac{dy}{dx} = \frac{dy}{du} \cdot \frac{du}{dx}$$

looks like fraction cancellation. It is not literally that, because none of the “fractions” involved are fractions of real numbers — they are all limits. But it looks like fraction cancellation because, in the underlying $\Delta$ form, it is fraction cancellation:

$$\frac{\Delta y}{\Delta x} = \frac{\Delta y}{\Delta u} \cdot \frac{\Delta u}{\Delta x}.$$

This is just the algebraic identity that two fractions multiplied together with a matching middle term cancel — valid as long as $\Delta u \neq 0$ (handling the case where $\Delta u$ vanishes is the fiddly part of the rigorous proof). The chain rule says that this identity survives the limit-taking process. So the fraction-cancellation intuition is correct as a memory device, even though it is not what is technically happening in the limit version.
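You can check the $\Delta$ identity and the chain rule together numerically. A sketch, with $y = \sin u$, $u = x^2$, and the point $x = 0.5$ as my own illustrative choices:

```python
import math

# y = sin(u), u = x^2, so the chain rule predicts dy/dx = cos(x^2) * 2x.
x = 0.5
dx = 1e-6

u = lambda t: t * t
y = lambda t: math.sin(u(t))

# Finite increments: these are honest real numbers, so the
# "fractions" below really are fractions.
du = u(x + dx) - u(x)
dy = y(x + dx) - y(x)

# The Delta identity (dy/du) * (du/dx) = dy/dx is pure algebra.
print(dy / dx)
print((dy / du) * (du / dx))
# And both agree (to finite-difference accuracy) with the chain rule's answer.
print(math.cos(x * x) * 2 * x)
```

The first two numbers agree up to floating-point rounding because they are literally the same fraction rearranged; the third agrees to within the finite-difference error, which is the limit statement the chain rule makes precise.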

The same trick works for substitution in integration. When you do “$u$-substitution” and write

$$\int f(g(x))\, g'(x)\,dx = \int f(u)\,du,$$

you are formally setting $u = g(x)$ and then writing $du = g'(x)\,dx$ as if $du$ and $dx$ were genuine quantities you could swap one for the other. This is the Leibniz move, and it is the reason $u$-substitution feels like cancellation. Behind the scenes, what makes it valid is the chain rule worked in reverse. But the fraction-style notation lets you skip the proof every time you do the technique, which is most of the time.
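A concrete instance (the integrand is my example, not the article's): take $u = x^2$, so $du = 2x\,dx$, and

```latex
\int 2x \cos(x^2)\,dx
  = \int \cos(u)\,du
  = \sin(u) + C
  = \sin(x^2) + C.
```

Differentiating $\sin(x^2)$ by the chain rule gives back $2x\cos(x^2)$, which is the "chain rule in reverse" that justifies the swap.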

When the “fraction” intuition breaks down

There are exactly two situations where treating $\tfrac{dy}{dx}$ as a fraction will lead you astray, and you should know both.

First: second derivatives. You write $\tfrac{d^2 y}{dx^2}$ for the second derivative. This looks like $\tfrac{(dy)^2}{(dx)^2}$, and it is not. The notation is just historical bookkeeping for “differentiate twice with respect to $x$.” If you try to do algebra on $\tfrac{d^2 y}{dx^2}$ as if it were a literal fraction with squared bits, you will get nonsense. Treat it as a single symbol whose meaning is “second derivative,” not as a fraction.
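What the notation actually abbreviates is "apply $\tfrac{d}{dx}$ twice":

```latex
\frac{d^2 y}{dx^2} \;=\; \frac{d}{dx}\!\left(\frac{dy}{dx}\right),
\qquad\text{not}\qquad \frac{(dy)^2}{(dx)^2}.
```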

Second: partial derivatives. Once you move from one variable to several, you write $\tfrac{\partial f}{\partial x}$ instead of $\tfrac{df}{dx}$. The chain rule for several variables looks like it should still be cancellation:

$$\frac{\partial f}{\partial x} \stackrel{?}{=} \frac{\partial f}{\partial u} \cdot \frac{\partial u}{\partial x},$$

but in fact the correct multivariable chain rule is more like a sum of such products, one term per intermediate variable. Trying to cancel partial derivatives the way you cancel ordinary derivatives will give wrong answers. The Leibniz fraction trick is a single-variable shortcut.
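For the record, here is the correct version when $f$ depends on two intermediate variables $u$ and $v$, each depending on $x$ (the two-variable case is my illustrative choice; the pattern extends to any number):

```latex
\frac{\partial f}{\partial x}
  \;=\; \frac{\partial f}{\partial u}\,\frac{\partial u}{\partial x}
  \;+\; \frac{\partial f}{\partial v}\,\frac{\partial v}{\partial x},
```

one term per intermediate variable. No single pair of $\partial$'s "cancels" on its own; the sum as a whole is what the one-variable cancellation collapses to when there is only one path from $x$ to $f$.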

Just call it “the derivative”

For practical exam purposes, the right way to think about $\tfrac{dy}{dx}$ is: it is a single number (or a single function), representing the slope of the tangent. The notation looks like a fraction because of historical convenience and because the fraction-style manipulations work in the single-variable case. But the object itself is “the derivative of $y$ with respect to $x$,” not a literal ratio.

That is the whole truth. The Leibniz notation is one of the cleanest and most powerful pieces of notation in all of mathematics, but it takes a moment to internalise that it is doing two things at once: suggesting an intuitive picture (slope of secant lines) and permitting algebraic manipulations (the cancellation pattern) that are technically theorems about limits in disguise.

The practical advice

Use the fraction intuition. It will make most of single-variable calculus feel natural. Just remember that what is really happening underneath is that all the “$d$”s are limits, and that when you get to second derivatives or several variables, the intuition stops being a free pass and you have to think more carefully. The notation is a tool, and like every powerful tool, it has a sharp end and a handle.
