Part 3: The Inequality Machinery

The decisive edge — transforming derivative information into competition-grade inequality proofs through the Mean Value Theorems, convexity hierarchies, and a systematic heuristic arsenal.

In Part 1, we built the algebraic and topological foundations — the schism between pointwise and interval monotonicity, Darboux’s Theorem, Froda’s jumps. In Part 2, we forged the analytical tools — the Discrete Zero Theorem, recursive derivatives, the wavy curve, and the trap catalogue. Now we arrive at the culmination: the inequality machinery.

This is the decisive edge. The transition from calculating derivatives to constructing proofs through the engine of monotonicity is what separates the top 0.1% from everyone else. We will build a systematic arsenal of heuristics — the Auxiliary Function Method, Multi-Stage Differentiation, the Tangent Line Method, Structure-Matching, Positive Stripping, and Maclaurin’s Algebraic Trick — and deploy them against a ranked hierarchy of increasingly fearsome problems from Putnam, ISI, CMI, and JEE Advanced.


I. The Mean Value Bridge

The Mean Value Theorems are the primary analytical bridge between bounding a function and bounding its derivative. The key insight: $f(b) - f(a) = f'(c)(b-a)$ reduces the complex task of bounding a transcendental function to the vastly simpler task of bounding its derivative at a single unknown point.

Lagrange’s Mean Value Theorem

LMVT — The Fundamental Bridge

If $f$ is continuous on $[a, b]$ and differentiable on $(a, b)$, then there exists $c \in (a, b)$ such that:

$$f(b) - f(a) = f'(c)(b - a)$$

Competition power: If you can bound $f'$ on an interval — say $m \le f'(x) \le M$ — then LMVT immediately gives $m(b-a) \le f(b) - f(a) \le M(b-a)$. Many transcendental inequality problems reduce to nothing more than this.
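This bound can be sanity-checked numerically. A minimal Python sketch (illustrative only, not a proof): with $f = \sin$ we have $|f'| \le 1$, so LMVT forces $|\sin b - \sin a| \le b - a$ for every $b > a$.

```python
import math
import random

random.seed(0)

def lmvt_bound_holds(f, M, a, b):
    """LMVT consequence: if |f'| <= M on [a, b], then |f(b) - f(a)| <= M*(b - a)."""
    return abs(f(b) - f(a)) <= M * (b - a) + 1e-12  # tolerance for float noise

checks = []
for _ in range(1000):
    a = random.uniform(-10, 10)
    b = a + random.uniform(0, 10)          # guarantee b >= a
    checks.append(lmvt_bound_holds(math.sin, 1.0, a, b))

print(all(checks))  # every random trial satisfies the LMVT bound
```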

Cauchy’s Mean Value Theorem

For problems involving ratios of functions — particularly the asymmetric fractional inequalities prevalent in ISI and CMI subjective papers — LMVT is insufficient. The Cauchy MVT extends the bridge to functional ratios.

CMVT — The Ratio Bridge

If $f$ and $g$ are continuous on $[a, b]$, differentiable on $(a, b)$, and $g'(x) \neq 0$ on $(a, b)$, then there exists $c \in (a, b)$ such that:

$$\frac{f(b) - f(a)}{g(b) - g(a)} = \frac{f'(c)}{g'(c)}$$

The Parametric Interpretation of CMVT

Students frequently treat CMVT as an algebraic artifact. In reality, it is equivalent to applying LMVT to the parametric curve $(g(t), f(t))$ for $t \in [a, b]$.

When to reach for CMVT:

  1. The inequality involves a ratio like $\frac{\ln b - \ln a}{b - a}$ or $\frac{e^b - e^a}{\sin b - \sin a}$.
  2. The structure has $f(b) - f(a)$ in the numerator and $g(b) - g(a)$ in the denominator (or vice versa).
  3. Structure-matching: If you see $\frac{f(b) - f(a)}{g(b) - g(a)}$, identify $f$ and $g$ from the context. The theorem then gives a point $c$ where the ratio equals $f'(c)/g'(c)$ — which is often trivially bounded.

Critical check: Always verify $g'(x) \neq 0$ on $(a, b)$ before applying. Failing this check is the Denominator Collapse trap (see Section IV).

CMVT ≠ L'Hôpital

CMVT and L’Hôpital’s Rule are not the same theorem. L’Hôpital requires $\lim_{x \to a} f(x)/g(x)$ to be an indeterminate form ($0/0$ or $\infty/\infty$), then replaces the limit with $\lim f'(x)/g'(x)$. CMVT makes no limit assumption — it gives an exact equality at a specific point $c$. In competition proofs, CMVT gives a concrete existence result; L’Hôpital gives an asymptotic one.
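The existence claim is concrete enough to locate numerically. A Python sketch (an illustration, not part of any proof): take $f(x) = e^x$, $g(x) = \sin x$ on $[0, 1]$, where $g'(x) = \cos x \neq 0$ since $1 < \pi/2$; since $e^c/\cos c$ is increasing there, bisection finds the CMVT point $c$.

```python
import math

# CMVT data on [a, b] = [0, 1] with f(x) = e^x, g(x) = sin x.
a, b = 0.0, 1.0
R = (math.exp(b) - math.exp(a)) / (math.sin(b) - math.sin(a))

def h(c):
    """f'(c)/g'(c) - R; CMVT guarantees a root c in (a, b)."""
    return math.exp(c) / math.cos(c) - R

# e^c / cos c is strictly increasing on (0, 1), so bisection isolates c.
lo, hi = a, b
for _ in range(60):
    mid = (lo + hi) / 2
    if h(mid) < 0:
        lo = mid
    else:
        hi = mid
c = (lo + hi) / 2
print(0 < c < 1, abs(h(c)) < 1e-9)
```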


II. The Convexity Hierarchy

Convexity is “super-monotonicity” — it governs the monotonicity of the slope itself. A strictly convex function curves upward: its graph lies below every secant and above every tangent. This geometric property spawns the most powerful inequality theorems in competitive mathematics.

Jensen’s Inequality

Jensen's Inequality

If $f$ is convex on an interval $I$ and $x_1, x_2, \ldots, x_n \in I$ with weights $w_i \ge 0$ summing to $1$, then:

$$f\!\left(\sum_{i=1}^n w_i x_i\right) \le \sum_{i=1}^n w_i f(x_i)$$

For concave $f$, the inequality reverses. For the equal-weight case: $f\!\left(\frac{x_1 + \cdots + x_n}{n}\right) \le \frac{f(x_1) + \cdots + f(x_n)}{n}$.

The calculus check: $f''(x) > 0$ on $I$ ⟹ $f$ is strictly convex ⟹ Jensen applies with strict inequality (unless all $x_i$ are equal).
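A quick numerical check of Jensen for the convex $f(x) = x^2$ (a sketch for intuition, not a proof): the weighted average of the values always dominates the value at the weighted average.

```python
import random

random.seed(1)

def jensen_gap(f, xs, ws):
    """Return sum(w*f(x)) - f(sum(w*x)); nonnegative when f is convex."""
    mean = sum(w * x for w, x in zip(ws, xs))
    return sum(w * f(x) for w, x in zip(ws, xs)) - f(mean)

gaps = []
for _ in range(500):
    xs = [random.uniform(-5, 5) for _ in range(4)]
    raw = [random.uniform(0.01, 1) for _ in range(4)]
    total = sum(raw)
    ws = [r / total for r in raw]          # normalize so weights sum to 1
    gaps.append(jensen_gap(lambda x: x * x, xs, ws))

print(min(gaps) >= -1e-12)  # convexity forces every gap to be nonnegative
```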

The Tangent Line Bound

A direct consequence of convexity: for a convex function $f$, the tangent at any point lies entirely below the graph:

$$f(x) \ge f(a) + f'(a)(x - a) \quad \text{for all } x \in I$$

The Tangent Line Method

When a problem asks you to prove $f(x) \ge g(x)$ and equality holds at $x = a$:

  1. Verify $f$ is convex (check $f'' \ge 0$).
  2. Compute the tangent to $f$ at $x = a$: $T(x) = f(a) + f'(a)(x - a)$.
  3. If $T(x) \ge g(x)$ is easy to prove, you’re done — since $f(x) \ge T(x) \ge g(x)$.

Why this works: The tangent line is the tightest linear lower bound for a convex function. This often bypasses complicated higher-order derivative analysis entirely.
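The tangent-below-graph property is easy to see numerically. A Python sketch (illustrative only) with the convex $f = \exp$, whose derivative is also $\exp$: at several tangent points, the tangent line never exceeds the function on a grid.

```python
import math

def tangent_below(f, fprime, a, xs, tol=1e-12):
    """Check f(x) >= f(a) + f'(a)(x - a) on a grid of points xs."""
    return all(f(x) >= f(a) + fprime(a) * (x - a) - tol for x in xs)

# f = exp is convex (f'' = exp > 0), so every tangent is a global lower bound.
grid = [k / 10 for k in range(-50, 51)]
ok = all(tangent_below(math.exp, math.exp, a, grid) for a in (-2.0, 0.0, 1.5))
print(ok)
```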

Karamata’s Inequality (Majorization)

Majorization

A sorted sequence $(x_1, \ldots, x_n)$ with $x_1 \ge \cdots \ge x_n$ majorizes $(y_1, \ldots, y_n)$ with $y_1 \ge \cdots \ge y_n$ if:

  • $\sum_{i=1}^k x_i \ge \sum_{i=1}^k y_i$ for all $k = 1, \ldots, n-1$, and
  • $\sum_{i=1}^n x_i = \sum_{i=1}^n y_i$ (equal total sums).

We write $(x_1, \ldots, x_n) \succ (y_1, \ldots, y_n)$.

Karamata's Inequality

If $f$ is convex and $(x_1, \ldots, x_n) \succ (y_1, \ldots, y_n)$, then:

$$\sum_{i=1}^n f(x_i) \ge \sum_{i=1}^n f(y_i)$$

Why it matters: Karamata subsumes Jensen (Jensen is the special case where $(y_1, \ldots, y_n)$ is the constant sequence of averages). It provides a robust framework for symmetric and cyclic inequalities that resist standard AM-GM or Cauchy-Schwarz.
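Both the majorization check and the resulting inequality are mechanical enough to verify in code. A Python sketch (assumes descending-sorted inputs with equal sums; illustrative, not a proof), using $f(t) = t^2$ and the pair $(3,1,0) \succ (2,1,1)$:

```python
def majorizes(xs, ys, tol=1e-12):
    """xs, ys sorted descending with equal sums: check prefix-sum dominance."""
    assert abs(sum(xs) - sum(ys)) < tol    # equal total sums is required
    px = py = 0.0
    for x, y in zip(xs[:-1], ys[:-1]):     # last prefix is the full (equal) sum
        px += x
        py += y
        if px < py - tol:
            return False
    return True

def karamata_gap(f, xs, ys):
    """sum f(x_i) - sum f(y_i); nonnegative for convex f when xs majorizes ys."""
    return sum(map(f, xs)) - sum(map(f, ys))

xs, ys = [3.0, 1.0, 0.0], [2.0, 1.0, 1.0]
print(majorizes(xs, ys), karamata_gap(lambda t: t * t, xs, ys) >= 0)
```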

Operator Monotonicity (For the Curious)

At the frontier of university mathematics, convexity extends to matrix-valued functions. Lieb’s Concavity Theorem states that certain trace-exponential functions of Hermitian matrices preserve concavity. While matrices rarely appear in JEE inequalities, the underlying principle — that convexity is preserved under logarithmic and exponential transformations — is a powerful heuristic for simplifying expressions involving $e^{f(x)}$ or $\ln(f(x))$.


III. The Heuristic Arsenal

The gap between a proficient calculus student and a top 0.1% scorer is defined by the arsenal of specific heuristics deployed under extreme time pressure. Here are the six most powerful techniques.

| Technique | Mechanism | When to Use |
|---|---|---|
| Auxiliary Function | Construct $F(x) = g(x) - h(x)$, prove $F'$ has constant sign | Comparing two continuous functions on an interval |
| Multi-Stage Differentiation | Compute $F^{(n)}$ until sign is obvious, back-propagate | $F'$ has ambiguous sign (mixed algebraic + transcendental) |
| Tangent Line Method | Linearize: $f(x) \ge f(a) + f'(a)(x-a)$ for convex $f$ | Complex fractions or logarithms with equality at a specific point |
| Structure-Matching | Rearrange to $f(x) > f(y)$ for $x > y$ | Disguised monotonicity in functional equations (ISI favorite) |
| Positive Stripping | Divide out strictly positive factors ($e^{g(x)}$, $x^2 + a$) | Simplifying wavy curve or derivative sign analysis |
| Maclaurin’s Trick | Rewrite $\frac{a}{1+b^2}$ as $a - \frac{ab^2}{1+b^2}$ | Denominator bounding would reverse inequality direction |
1. The Auxiliary Function Method

The most fundamental technique. To prove $g(x) > h(x)$ on $(a, b)$:

  1. Define $F(x) = g(x) - h(x)$.
  2. Compute $F'(x)$ and determine its sign.
  3. If $F' > 0$ on $(a, b)$, then $F$ is strictly increasing.
  4. Evaluate $F$ at the left endpoint: if $F(a) \ge 0$, then $F(x) > 0$ for all $x > a$.

The golden rule: The initial condition $F(a) = 0$ (or $F(a) \ge 0$) is non-negotiable. Without it, a positive derivative tells you the function is increasing — but says nothing about whether it is positive.
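The steps above can be exercised numerically on the classic target $x > \sin x$. A Python sketch (a sanity check of $F(x) = x - \sin x$ on a grid, not a proof):

```python
import math

# Auxiliary function for "x > sin x on x > 0":
# F(x) = x - sin x, F'(x) = 1 - cos x >= 0, F(0) = 0.
F = lambda x: x - math.sin(x)

xs = [k / 100 for k in range(1, 1000)]     # grid over (0, 10)
positive = all(F(x) > 0 for x in xs)       # F stays positive past the base point
increasing = all(F(b) > F(a) for a, b in zip(xs, xs[1:]))
print(positive, increasing)
```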

2. Multi-Stage Differentiation (Recursive Derivatives)

When $F'(x)$ has ambiguous sign (e.g., $F'(x) = e^x - 1 - x$, which isn’t obviously positive for beginners):

  1. Compute $F''(x)$. If its sign is clear, stop.
  2. If not, compute $F'''(x)$. Repeat until $F^{(n)}(x)$ has obvious sign.
  3. Back-propagate:
    • $F^{(n)} > 0$ ⟹ $F^{(n-1)}$ is increasing.
    • Evaluate $F^{(n-1)}$ at the base point. If $F^{(n-1)}(a) = 0$ and $F^{(n-1)}$ is increasing, then $F^{(n-1)} > 0$ for $x > a$.
    • Repeat downward until you reach $F$.

Covered in depth in Part 2, Section III.

3. The Tangent Line Method (Linearization)

For proving $f(x) \ge L(x)$ where equality holds at $x = a$:

  1. Verify $f''(x) \ge 0$ (convexity) on the relevant domain.
  2. The tangent line at $x = a$ is $T(x) = f(a) + f'(a)(x - a)$.
  3. By convexity, $f(x) \ge T(x)$ for all $x$ in the domain.

The power move: This completely bypasses higher-order derivative analysis. For fractional or logarithmic terms, linearizing around the equality point often collapses the problem to a trivial algebraic inequality.

4. Structure-Matching

When a problem presents an asymmetric inequality like $\frac{f(a)}{g(b)} > \frac{f(c)}{g(d)}$:

  1. Rearrange both sides to isolate a single monotonic function applied to different arguments.
  2. If you can write the inequality as $\varphi(u) > \varphi(v)$, then monotonicity of $\varphi$ and the ordering $u > v$ (or $u < v$ for decreasing $\varphi$) finishes the proof.

ISI/CMI signature: Many subjective problems disguise monotonicity inside functional equations. The relation $f(x) > f(y)$ is hidden — your job is to extract the common function $f$ and the ordering.

5. Positive Stripping

Before analyzing the sign of a complex expression, strip out any factor that is unconditionally positive over the domain:

  • $e^{g(x)} > 0$ always.
  • $x^2 + a > 0$ for $a > 0$.
  • $1 + \sin^2 x \ge 1 > 0$ always.
  • $\cosh x > 0$ always.

Rule: You may safely divide both sides of an inequality by a strictly positive factor without changing the inequality direction. This often reduces a terrifying expression to a manageable polynomial or simple transcendental.

6. Maclaurin's Algebraic Trick

When you need a lower bound for $\frac{a}{1 + b^2}$ but bounding the denominator via $1 + b^2 \ge 2b$ gives $\frac{a}{1+b^2} \le \frac{a}{2b}$ — the wrong direction:

The fix: Rewrite as subtraction: $\frac{a}{1+b^2} = a - \frac{ab^2}{1+b^2}$.

Now bound the subtracted term: $\frac{ab^2}{1+b^2} \le \frac{ab^2}{2b} = \frac{ab}{2}$.

Since you’re subtracting, the inequality flips to the correct direction: $\frac{a}{1+b^2} \ge a - \frac{ab}{2}$.

When to use: Any time bounding a denominator produces an inequality in the wrong direction. The trick converts division into subtraction, making AM-GM safe to apply.
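The resulting bound is cheap to stress-test. A Python sketch (random positive inputs; a sanity check, not a proof) of $\frac{a}{1+b^2} \ge a - \frac{ab}{2}$:

```python
import random

random.seed(2)

def maclaurin_lower(a, b):
    """Subtraction-trick bound: a/(1+b^2) >= a - a*b/2 for a, b > 0."""
    return a / (1 + b * b) >= a - a * b / 2 - 1e-12  # float tolerance

print(all(maclaurin_lower(random.uniform(0, 10), random.uniform(0, 10))
          for _ in range(1000)))
```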


IV. The Error Catalogue

Even top-tier students fall into specific traps under exam pressure. The inequality machinery is notoriously unforgiving of imprecise domain analysis or irreversible operations.

| Trap | Error | Consequence |
|---|---|---|
| Domain Paradox | Differentiating $\ln(f(x))$ or $f(x)^{1/3}$ without checking boundary continuity | False extrema; monotonicity claimed where function is undefined |
| False Convexity | Proving $f'' > 0$ locally, then applying Jensen globally | Jensen applied in a concave region reverses the inequality |
| Irreversible Squaring | Squaring both sides without confirming both sides are positive | Extraneous roots and false positive intervals |
| Parenthetical Loss | Dropping parentheses during distribution of negatives or IBP | Total structural failure of $F(x)$; all subsequent analysis invalid |
| Infinity Misjudgment | Using L’Hôpital without asymptotic verification | Incorrect growth comparisons (polynomial vs. exponential) |
| Denominator Collapse | Applying CMVT without checking $g'(x) \neq 0$ | Division by zero; ratio theorem produces garbage |
Trap 1: Domain Paradoxes

Error: Differentiating a function and analyzing $f'$ without checking that $f$ is actually defined on the claimed domain.

Example: Analyzing $f(x) = x^{2/3}(3x - 7)$ and writing $f'(x) = \frac{15x - 14}{3x^{1/3}}$. The derivative is undefined at $x = 0$. If you ignore this and treat the number line as continuous through $0$, your wavy curve analysis includes a phantom interval.

Fix: Always list domain restrictions before computing $f'$. Critical points include both zeros of $f'$ and points where $f'$ is undefined.

Trap 2: False Convexity

Error: Proving $f''(x) > 0$ on a limited sub-interval, then applying Jensen’s Inequality as if $f$ were convex everywhere.

Fix: Jensen requires convexity on the entire interval containing the variables $x_1, \ldots, x_n$. If $f$ transitions from convex to concave, the inequality reverses in the concave region. Always verify that the domain of convexity covers all inputs.

Trap 3: Irreversible Squaring

Error: Squaring both sides of $f(x) \ge g(x)$ to obtain $f(x)^2 \ge g(x)^2$.

This is valid only when both $f(x) \ge 0$ and $g(x) \ge 0$. If $g(x) < 0$, squaring can introduce extraneous solutions. If $f(x) < 0$ and $g(x) < 0$, squaring reverses the inequality.

Fix: Before squaring, explicitly verify the sign of both sides. If uncertain, use the auxiliary function method instead.

Trap 4: The Freshman's Dream (Extended)

Error: Splitting denominators in fractions containing sums:

$$\frac{f(x)}{g(x) + h(x)} \neq \frac{f(x)}{g(x)} + \frac{f(x)}{h(x)}$$

This immediately corrupts the structure-matching process and prevents the application of the auxiliary function method. This error is devastatingly common in JEE Advanced subjective questions involving cyclic fractional inequalities.

Fix: Keep compound denominators intact. Use Maclaurin’s Trick (Strategy 6) or the tangent line method to handle them.

Trap 5: Infinity Misjudgment

Error: Evaluating $\lim_{x \to \infty} f(x)/g(x)$ using L’Hôpital’s Rule without verifying the indeterminate form, or comparing polynomial and exponential growth without Taylor expansion.

Fix: For growth comparisons, use the hierarchy $(\ln x)^a \ll x^b \ll c^x \ll x! \ll x^x$ (for $a, b > 0$ and $c > 1$). For subtle limits, expand via Maclaurin series before applying L’Hôpital.
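The first two links of the hierarchy are easy to witness numerically. A Python sketch (two sample ratios at two sample points; an illustration, not a limit proof): each ratio is below $1$ and shrinks as $x$ grows.

```python
import math

def ratio_chain(x):
    """Two adjacent ratios in the hierarchy: (ln x)^3 / x^2 and x^2 / 2^x."""
    return (math.log(x) ** 3 / x ** 2, x ** 2 / 2 ** x)

r_small = ratio_chain(50)
r_big = ratio_chain(500)
# Both ratios are already < 1 at x = 50 and strictly smaller at x = 500.
print(r_big[0] < r_small[0] < 1, r_big[1] < r_small[1] < 1)
```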

Trap 6: Denominator Collapse (CMVT)

Error: Applying the Cauchy Mean Value Theorem $\frac{f(b)-f(a)}{g(b)-g(a)} = \frac{f'(c)}{g'(c)}$ without verifying that $g'(x) \neq 0$ on $(a, b)$.

If $g'(x_0) = 0$ for some $x_0 \in (a, b)$, the ratio $\frac{f'(c)}{g'(c)}$ may involve division by zero, and the theorem’s hypothesis is violated. The resulting “bounds” are mathematically meaningless.

Fix: Before invoking CMVT, explicitly verify $g'(x) \neq 0$ on the open interval. If $g'$ has zeros, partition the interval or use a different approach.


Illustrative Examples: The Ranked Hierarchy

The following 8 problems form a pedagogical ladder from foundational to fearsome. Each level introduces a new technique or combines previous ones in a harder context.

Problem Easy

(Direct Comparison via Auxiliary Function)

Prove that $x > \sin x$ for all $x > 0$.

View Solution

Step 1 — Auxiliary function: Define $F(x) = x - \sin x$.

Step 2 — Derivative: $F'(x) = 1 - \cos x$.

Since $\cos x \le 1$ for all real $x$, we have $F'(x) \ge 0$ for all $x$.

Step 3 — Discrete Zero check: $F'(x) = 0$ when $\cos x = 1$, i.e., at $x = 2k\pi$ for integer $k$. These are isolated points — no plateaus. By the Discrete Zero Theorem, $F$ is strictly increasing for $x > 0$.

Step 4 — Initial condition: $F(0) = 0 - 0 = 0$.

Conclusion: $F$ is strictly increasing from $F(0) = 0$, so $F(x) > 0$ for all $x > 0$, i.e., $x > \sin x$. $\blacksquare$

5 min Standard Curriculum — Level 1
Problem Easy

(Multi-Stage Differentiation)

Prove that $e^x > 1 + x + \dfrac{x^2}{2}$ for all $x > 0$.

View Solution

Step 1 — Auxiliary function: $F(x) = e^x - 1 - x - \frac{x^2}{2}$.

Step 2 — First derivative: $F'(x) = e^x - 1 - x$.

The sign of $F'(x)$ for $x > 0$ is not immediately obvious. Escalate.

Step 3 — Second derivative: $F''(x) = e^x - 1$.

For $x > 0$: $e^x > 1$, so $F''(x) > 0$. ✓ Sign is clear.

Step 4 — Back-propagation:

  • $F''(x) > 0$ ⟹ $F'$ is strictly increasing on $(0, \infty)$.
  • $F'(0) = e^0 - 1 - 0 = 0$.
  • Since $F'$ is increasing from $F'(0) = 0$: $F'(x) > 0$ for all $x > 0$.

Step 5 — Back-propagation to $F$:

  • $F'(x) > 0$ ⟹ $F$ is strictly increasing on $(0, \infty)$.
  • $F(0) = 1 - 1 - 0 - 0 = 0$.
  • Since $F$ is increasing from $F(0) = 0$: $F(x) > 0$ for all $x > 0$.

Conclusion: $e^x > 1 + x + \frac{x^2}{2}$ for all $x > 0$. $\blacksquare$

7 min MIT / AP Calculus — Level 2
Problem Medium

(Cauchy Mean Value Theorem Application)

Prove that for $b > a > 0$: $\quad\dfrac{b - a}{b} < \ln\!\left(\dfrac{b}{a}\right) < \dfrac{b - a}{a}$.

View Solution

Step 1 — Recognition: $\ln(b/a) = \ln b - \ln a$. This is a difference of function values over the interval $[a, b]$ — a direct invitation to apply MVT.

Step 2 — Apply LMVT: Define $f(t) = \ln t$ on $[a, b]$. Since $f$ is continuous on $[a, b]$ and differentiable on $(a, b)$, LMVT guarantees a point $c \in (a, b)$ such that:

$$\frac{\ln b - \ln a}{b - a} = f'(c) = \frac{1}{c}$$

Step 3 — Bound $c$: Since $a < c < b$, taking reciprocals (and reversing inequalities, since all quantities are positive):

$$\frac{1}{b} < \frac{1}{c} < \frac{1}{a}$$

Step 4 — Substitute: Replace $\frac{1}{c}$ with $\frac{\ln(b/a)}{b - a}$:

$$\frac{1}{b} < \frac{\ln(b/a)}{b - a} < \frac{1}{a}$$

Step 5 — Multiply through: Since $b - a > 0$ (given $b > a$), multiplying preserves direction:

$$\frac{b - a}{b} < \ln\!\left(\frac{b}{a}\right) < \frac{b - a}{a} \qquad \blacksquare$$
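The logarithmic sandwich just proved can be spot-checked in a few lines of Python (random pairs with $b > a > 0$; a sanity check, not a substitute for the LMVT argument):

```python
import math
import random

random.seed(3)

def log_sandwich(a, b):
    """(b-a)/b < ln(b/a) < (b-a)/a for b > a > 0."""
    lhs, mid, rhs = (b - a) / b, math.log(b / a), (b - a) / a
    return lhs < mid < rhs

# Draw a from (0.1, 5) and b from (5.1, 20), so b > a always holds.
print(all(log_sandwich(random.uniform(0.1, 5), random.uniform(5.1, 20))
          for _ in range(1000)))
```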

10 min CMI / ISI B.Math — Level 3
Problem Medium

(Convexity and Jensen’s Inequality)

Given positive reals $a, b, c$ with $a + b + c = 1$, prove that $a^2 + b^2 + c^2 \ge \dfrac{1}{3}$.

View Solution

Step 1 — Identify convexity: Define $f(x) = x^2$. Then $f''(x) = 2 > 0$ for all $x$, so $f$ is strictly convex globally.

Step 2 — Apply Jensen’s Inequality with equal weights $w_i = 1/3$:

$$f\!\left(\frac{a + b + c}{3}\right) \le \frac{f(a) + f(b) + f(c)}{3}$$

$$\left(\frac{1}{3}\right)^2 \le \frac{a^2 + b^2 + c^2}{3}$$

Step 3 — Simplify:

$$\frac{1}{9} \le \frac{a^2 + b^2 + c^2}{3} \implies a^2 + b^2 + c^2 \ge \frac{3}{9} = \frac{1}{3} \qquad \blacksquare$$

Remark: The Cauchy-Schwarz approach ($(\sum a)^2 \le 3 \sum a^2$) is faster for this specific problem, but the Jensen machinery generalizes effortlessly to higher powers and transcendental functions.

8 min Putnam / Standard Olympiad — Level 4
Problem Hard

(Bounding Trigonometric Sums via Limit Derivatives)

Let $f(x) = a_1 \sin x + a_2 \sin 2x + \cdots + a_n \sin nx$, where $a_1, \ldots, a_n \in \mathbb{R}$. Given that $|f(x)| \le |\sin x|$ for all $x$, prove that $|a_1 + 2a_2 + \cdots + na_n| \le 1$.

View Solution

Step 1 — Key recognition: The target expression $a_1 + 2a_2 + \cdots + na_n$ is exactly $f'(0)$.

To verify: $f'(x) = a_1 \cos x + 2a_2 \cos 2x + \cdots + na_n \cos nx$, so $f'(0) = \sum_{k=1}^n k a_k$. ✓

Step 2 — Use the definition of derivative: Since $f(0) = 0$:

$$|f'(0)| = \lim_{x \to 0} \left|\frac{f(x)}{x}\right|$$

Step 3 — Factor through $\sin x$:

$$\left|\frac{f(x)}{x}\right| = \left|\frac{f(x)}{\sin x}\right| \cdot \left|\frac{\sin x}{x}\right|$$

Step 4 — Bound each factor:

  • By hypothesis, $|f(x)| \le |\sin x|$, so $\left|\frac{f(x)}{\sin x}\right| \le 1$ for $x \neq 0$ near $0$.
  • The standard limit: $\lim_{x \to 0} \frac{\sin x}{x} = 1$.

Step 5 — Combine:

$$|f'(0)| = \lim_{x \to 0} \left|\frac{f(x)}{\sin x}\right| \cdot \left|\frac{\sin x}{x}\right| \le 1 \cdot 1 = 1 \qquad \blacksquare$$

15 min Putnam 1967 A1 — Level 5
Problem Hard

(Integral Bounding of Discrete Sums)

Prove that $\dfrac{2}{3}n^{3/2} < \displaystyle\sum_{i=1}^{n} \sqrt{i} < \dfrac{2}{3}n^{3/2} + \dfrac{1}{2}\sqrt{n}$ for all positive integers $n$.

View Solution

Step 1 — Setup: The function $f(x) = \sqrt{x}$ is strictly increasing ($f'(x) = \frac{1}{2\sqrt{x}} > 0$) and strictly concave ($f''(x) = -\frac{1}{4x^{3/2}} < 0$).

Step 2 — Lower bound (left Riemann sum vs integral):

Since $f$ is increasing, on each sub-interval $[i-1, i]$ we have $f(i-1) \le f(x) \le f(i)$. Integrating the right inequality:

$$\int_{i-1}^{i} \sqrt{x}\, dx \le \sqrt{i}$$

Summing from $i = 1$ to $n$:

$$\int_0^n \sqrt{x}\, dx \le \sum_{i=1}^n \sqrt{i}$$

$$\frac{2}{3}n^{3/2} \le \sum_{i=1}^n \sqrt{i}$$

Strict inequality holds because $f$ is not constant on any sub-interval. ✓

Step 3 — Upper bound (trapezoidal/concavity argument):

Since $f$ is strictly concave, every chord lies below the graph, so the trapezoidal rule underestimates the integral:

$$\int_{i-1}^{i} \sqrt{x}\, dx > \frac{\sqrt{i-1} + \sqrt{i}}{2}$$

Summing from $i = 1$ to $n$: each interior value $\sqrt{1}, \ldots, \sqrt{n-1}$ appears in two trapezoids, while $\sqrt{0} = 0$ and $\sqrt{n}$ appear in one each, so

$$\int_0^n \sqrt{x}\, dx > \sum_{i=1}^n \frac{\sqrt{i-1} + \sqrt{i}}{2} = \sum_{i=1}^n \sqrt{i} - \frac{1}{2}\sqrt{n}$$

Since $\int_0^n \sqrt{x}\, dx = \frac{2}{3}n^{3/2}$, rearranging gives the upper bound:

$$\sum_{i=1}^n \sqrt{i} < \frac{2}{3}n^{3/2} + \frac{1}{2}\sqrt{n} \qquad \blacksquare$$
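Both bounds are concrete enough to verify for specific $n$. A Python sketch (a numerical spot-check of the sandwich, not part of the proof):

```python
import math

def bounds_hold(n):
    """Check (2/3) n^{3/2} < sum_{i<=n} sqrt(i) < (2/3) n^{3/2} + (1/2) sqrt(n)."""
    s = sum(math.sqrt(i) for i in range(1, n + 1))
    lower = (2 / 3) * n ** 1.5
    upper = lower + 0.5 * math.sqrt(n)
    return lower < s < upper

print(all(bounds_hold(n) for n in (1, 2, 10, 100, 10000)))
```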

15 min Putnam 1953 A1 — Level 6
Problem Hard

(The Tangent Line Method with Maclaurin’s Trick)

For positive reals $a, b, c, d$ with $a + b + c + d = 4$, prove that $\displaystyle\sum_{\text{cyc}} \frac{a}{1 + b^2} \ge 2$.

Bounding the denominator directly

Attempt: $1 + b^2 \ge 2b$ (AM-GM), so $\frac{a}{1+b^2} \le \frac{a}{2b}$. But this gives an upper bound, not a lower bound. The inequality direction is wrong because bounding a denominator from below gives a bound from above on the fraction. Dead end.

View Solution

Step 1 — Maclaurin’s Trick: Decompose the fraction:

$$\frac{a}{1 + b^2} = a - \frac{ab^2}{1 + b^2}$$

Step 2 — Now bound the subtracted term: Since $1 + b^2 \ge 2b$ (AM-GM):

$$\frac{ab^2}{1 + b^2} \le \frac{ab^2}{2b} = \frac{ab}{2}$$

Step 3 — Combine: Since we’re subtracting, the inequality flips to the desired direction:

$$\frac{a}{1 + b^2} \ge a - \frac{ab}{2}$$

Step 4 — Sum cyclically:

$$\sum_{\text{cyc}} \frac{a}{1 + b^2} \ge \sum_{\text{cyc}} a - \frac{1}{2}\sum_{\text{cyc}} ab = 4 - \frac{1}{2}(ab + bc + cd + da)$$

Step 5 — Bound the product sum: Note $ab + bc + cd + da = (a+c)(b+d)$. By AM-GM:

$$(a+c)(b+d) \le \left(\frac{(a+c)+(b+d)}{2}\right)^2 = \left(\frac{4}{2}\right)^2 = 4$$

Step 6 — Conclude:

$$\sum_{\text{cyc}} \frac{a}{1+b^2} \ge 4 - \frac{1}{2}(4) = 2 \qquad \blacksquare$$
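A randomized spot-check of the cyclic bound in Python (rescaled positive quadruples with $a+b+c+d=4$; illustration only, with equality at $a=b=c=d=1$):

```python
import random

random.seed(4)

def cyc_sum(a, b, c, d):
    """Cyclic sum a/(1+b^2) + b/(1+c^2) + c/(1+d^2) + d/(1+a^2)."""
    return sum(x / (1 + y * y) for x, y in ((a, b), (b, c), (c, d), (d, a)))

ok = True
for _ in range(2000):
    v = [random.uniform(0.01, 1) for _ in range(4)]
    s = sum(v)
    a, b, c, d = (4 * x / s for x in v)    # rescale so a + b + c + d = 4
    ok = ok and cyc_sum(a, b, c, d) >= 2 - 1e-9

print(ok)
```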

18 min Olympiad Treasures — Level 7
Problem Hard

(Domain Discontinuities in Derivative Analysis)

Find the intervals on which $f(x) = (3x - 7)x^{2/3}$ is strictly increasing.

View Solution

Step 1 — Derivative (product rule):

$$f'(x) = 3 \cdot x^{2/3} + (3x - 7) \cdot \frac{2}{3}x^{-1/3}$$

Step 2 — Simplify by factoring out $x^{-1/3}$:

$$f'(x) = x^{-1/3}\!\left[3x + \frac{2(3x-7)}{3}\right] = \frac{1}{x^{1/3}}\!\left[\frac{9x + 6x - 14}{3}\right] = \frac{15x - 14}{3x^{1/3}}$$

Step 3 — Critical points:

  • Numerator zero: $x = \frac{14}{15}$.
  • Denominator zero (derivative undefined): $x = 0$.

Step 4 — Sign analysis (wavy curve with two critical points):

| Interval | $15x - 14$ | $x^{1/3}$ | $f'(x)$ |
|---|---|---|---|
| $x < 0$ | $-$ | $-$ | $+$ |
| $0 < x < 14/15$ | $-$ | $+$ | $-$ |
| $x > 14/15$ | $+$ | $+$ | $+$ |

Conclusion: $f$ is strictly increasing on $(-\infty, 0)$ and on $\left(\frac{14}{15}, \infty\right)$. $\blacksquare$

Trap alert: The critical point $x = 0$ is where $f'$ is undefined, not where $f' = 0$. Students who ignore this and treat the domain as continuous through $0$ miss the decreasing interval $(0, 14/15)$.

12 min JEE Advanced PYQ — Level 8

V. Beyond the Syllabus

Number Theory: Stirling's Approximation and the Euler-Mascheroni Constant

Stirling’s formula $\ln(n!) = n\ln n - n + \mathcal{O}(\ln n)$ can be derived entirely via the inequality machinery. The discrete sum $\sum_{k=1}^{n} \ln k = \ln(n!)$ is sandwiched by Riemann integrals: $\int_1^n \ln x\, dx \le \sum_{k=1}^{n} \ln k \le \int_1^{n+1} \ln x\, dx$, using the monotonicity of $\ln$. Evaluating $\int \ln x\, dx = x\ln x - x$ gives the leading term.

Similarly, the Euler-Mascheroni constant $\gamma \approx 0.577$ (measuring the gap between the harmonic series and $\ln n$) exists because the sequence $D_n = \sum_{k=1}^n \frac{1}{k} - \ln n$ is proved to be strictly decreasing and bounded below — using exactly the auxiliary function method from Level 2.
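The monotone convergence of $D_n$ is easy to watch numerically. A Python sketch (illustrative only): the sequence decreases at every step and its tail sits near $0.5772$.

```python
import math

# D_n = H_n - ln n: strictly decreasing, bounded below,
# converging to the Euler-Mascheroni constant (~0.5772).
H = 0.0
D = []
for n in range(1, 10001):
    H += 1 / n                  # running harmonic sum H_n
    D.append(H - math.log(n))

decreasing = all(b < a for a, b in zip(D, D[1:]))
print(decreasing, round(D[-1], 4))
```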

Combinatorics: Analytic Combinatorics and Catalan Bounds

When a combinatorial sequence $a_n$ satisfies a recursion, its generating function $f(x) = \sum a_n x^n$ becomes a continuous function amenable to calculus. The Catalan numbers $C_n = \frac{1}{n+1}\binom{2n}{n}$ have generating function $C(x) = \frac{1 - \sqrt{1-4x}}{2x}$. Analyzing the monotonicity and complete monotonicity of this generating function via multi-stage differentiation yields tight analytic bounds on combinatorial growth rates — circumventing tedious inductive algebra. This intersection of fields, analytic combinatorics, relies directly on tracking the monotonicity of power series and their derivatives.

Differential Equations: Gronwall's Inequality

The pinnacle of inequality cross-connections: if a non-negative continuous function $u(t)$ satisfies $u(t) \le \alpha(t) + \int_a^t \beta(s)\, u(s)\, ds$ (with $\beta \ge 0$ and $\alpha$ non-decreasing), Gronwall’s lemma proves $u(t) \le \alpha(t)\exp\!\left(\int_a^t \beta(s)\, ds\right)$.

This provides the foundational proof of uniqueness for ODE solutions (the Picard-Lindelöf theorem). In Putnam/CMI problems, this appears as: given $f''(x) \le -f(x)$ with $f(0) = f'(0) = 0$, prove $f(x) \le 0$ for $x \ge 0$. The solution constructs a bounding envelope using exponential auxiliary functions — directly mirroring the multi-stage differentiation and integrating factor techniques of this module.


Selected Problems

Problem Hard

For which real numbers $c$ is $\dfrac{e^x + e^{-x}}{2} \le e^{cx^2}$ for all real $x$?

Hint
Take logarithms and use Taylor series expansion on both sides. Compare the $x^2$ coefficient to find the minimum bounding $c$.
Problem Medium

Prove or disprove: if $x$ and $y$ are real numbers with $y \ge 0$ and $y(y+1) \le (x+1)^2$, then $y(y-1) \le x^2$.

Hint
Construct an auxiliary function linking the two inequalities. The domain constraint $y \ge 0$ is essential — it prevents the irreversible squaring paradox.
Problem Hard

Let $f: \mathbb{R} \to \mathbb{R}$ be twice differentiable with $f''(x) > 0$ for all $x$. Show that there exist $a, b \in \mathbb{R}$ such that $f(x) \ge ax + b$ for all $x$.

Hint
Translate the algebraic expression $ax + b$ into the geometric concept of a tangent line. Use the Tangent Line Method: if $f$ is convex, every tangent line lies below the graph.
Problem Hard

Let $0 < x_i < \pi$ for $i = 1, \ldots, n$ and set $\bar{x} = \frac{1}{n}\sum x_i$. Prove that $\displaystyle\prod_{i=1}^n \frac{\sin x_i}{x_i} \le \left(\frac{\sin \bar{x}}{\bar{x}}\right)^n$.

Hint
Convert the product to a sum using logarithms. Prove that $g(t) = \ln\!\left(\frac{\sin t}{t}\right)$ is strictly concave on $(0, \pi)$ by checking $g''(t) < 0$, then apply Jensen’s Inequality.
Problem Hard

If $f: \mathbb{R} \to \mathbb{R}$ is differentiable with $f'(x) > 2f(x)$ for all $x$, and $f(0) = 1$, establish a strict lower bound for $f(x)$.

Hint
Multiply both sides of $f'(x) - 2f(x) > 0$ by the integrating factor $e^{-2x}$. Recognize that $\frac{d}{dx}[f(x)e^{-2x}] = (f'(x) - 2f(x))e^{-2x} > 0$, making $f(x)e^{-2x}$ strictly increasing. Evaluate at $x = 0$ and propagate.
Problem Medium

Let $P(x)$ be a polynomial with integer coefficients. Define a rigorous bound for the number of real roots of $P(x)$ using the monotonicity of its derivative polynomials.

Hint
Use Rolle’s Theorem sequentially: between any two consecutive roots of $P(x)$, there is at least one root of $P'(x)$. Working downward from $P^{(n-1)}$ (which is linear, hence has exactly one root), count the maximum possible real roots at each level.
Problem Hard

Suppose $a_1, \ldots, a_n$ are real numbers ($n > 1$) and $A + \sum_{i=1}^n a_i^2 < \frac{1}{n-1}\!\left(\sum_{i=1}^n a_i\right)^2$. Prove that $A < 2a_i a_j$ for all $1 \le i < j \le n$.

Hint
Assume WLOG that the terms are ordered. Isolate $A$ and use completion of squares. The key is to show that $\frac{(\sum a_i)^2}{n-1} - \sum a_i^2$ can be decomposed into pairwise products.
Problem Advanced

Prove that for any strictly convex function $f$, if $(x_1, \ldots, x_n) \succ (y_1, \ldots, y_n)$ (majorization), then $\sum f(x_i) \ge \sum f(y_i)$.

Hint
Use Abel’s summation formula (summation by parts) combined with the fact that $f'$ is strictly increasing (since $f'' > 0$). The key identity is $\sum f(x_i) - \sum f(y_i) = \sum_{k=1}^{n-1} (S_k^x - S_k^y)(f'(\xi_k) - f'(\eta_k))$, where $S_k$ denotes partial sums and $\xi_k, \eta_k$ come from MVT.
Problem Hard

The hands of an accurate clock have lengths 3 and 4. Find the distance between the tips of the hands when that distance is increasing most rapidly.

Hint
Set up the distance function using the Law of Cosines: $D(\theta) = \sqrt{25 - 24\cos\theta}$. The problem asks for the maximum of $D'(\theta)$, which requires finding zeros of $D''(\theta)$ — the inflection points of the distance function.

Challenge Problem

Problem Advanced

(The Optimization Constraint Problem)

Maximize the function $f(x) = x^3 - 3x$ subject to the quartic constraint $x^4 - 4x^3 + 4x^2 \le 1$.

Hint 1: Factorize the constraint
The quartic $x^4 - 4x^3 + 4x^2$ is a perfect square: $(x^2 - 2x)^2$. So the constraint becomes $|x^2 - 2x| \le 1$, which splits into two quadratics: $-1 \le x^2 - 2x \le 1$.
Hint 2: Domain and boundary analysis
The right inequality $x^2 - 2x \le 1$ gives $x \in [1 - \sqrt{2}, 1 + \sqrt{2}]$. The left inequality $x^2 - 2x \ge -1$, i.e., $(x-1)^2 \ge 0$, is trivially true. Now apply the Extreme Value Theorem: evaluate $f$ at the critical points $f'(x) = 0$ (which are $x = \pm 1$) that lie inside the domain, and at the boundary points $x = 1 \pm \sqrt{2}$. Watch out — one critical point falls outside the restricted domain.