Chapter 17: Functions of Several Variables (Set-1)
A multivariable limit exists when
A One path matches
B Only along axes
C Values are bounded
D All paths match
A limit at a point exists only if the function approaches the same value along every possible path to that point. If two paths give different values, the limit does not exist.
If two different paths give two different limit values, then
A Limit does not exist
B Limit exists
C Limit is zero
D Limit is infinite
For a multivariable limit to exist, it must be unique. If approaching the point along two paths yields two different values, uniqueness fails and the limit cannot exist.
For continuity of f(x,y) at (a,b), we need
A Only partials exist
B Limit equals value
C Only limit exists
D Only value exists
Continuity at (a,b) requires three things: f(a,b) is defined, the limit as (x,y)→(a,b) exists, and that limit equals f(a,b).
If f is continuous at (a,b), then
A Mixed partials equal
B Gradient is zero
C Jacobian is one
D Limit equals f(a,b)
Continuity directly means the limit of f(x,y) as (x,y) approaches (a,b) equals the function’s value at that point. Other conditions are separate concepts.
A quick test that may show non-existence of a limit
A Differentiate once
B Integrate over region
C Try two paths
D Find gradient
Testing along two simple paths (such as y=0 and y=x) can quickly reveal different limiting values. If the results differ, the limit definitely does not exist.
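The two-path test can be sketched numerically; the function below is a standard illustrative example chosen here, not one taken from the questions:

```python
# Illustrative function: f(x, y) = x*y / (x**2 + y**2), undefined at (0, 0).
def f(x, y):
    return x * y / (x**2 + y**2)

# Path 1: approach the origin along the x-axis (y = 0).
along_axis = [f(t, 0.0) for t in (0.1, 0.01, 0.001)]      # all 0.0

# Path 2: approach the origin along the line y = x.
along_diagonal = [f(t, t) for t in (0.1, 0.01, 0.001)]    # all 0.5

# The two paths give different values, so the limit at (0,0) does not exist.
print(along_axis, along_diagonal)
```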
If the limit exists, then every directional limit must
A Be infinite
B Be zero
C Not exist
D Be equal
A true multivariable limit forces agreement along all directions and paths. Directional limits are necessary checks: if any direction differs, the overall limit cannot exist.
Continuity of a polynomial in x,y over R² is
A Always continuous
B Never continuous
C Only at origin
D Only on axes
Polynomials are built from sums and products of continuous functions. Hence they are continuous everywhere in their domain, including in R² and R³.
A rational function is continuous where
A Numerator nonzero
B Denominator nonzero
C x and y positive
D x equals y
Rational functions are continuous wherever they are defined. The only problem points occur where the denominator becomes zero, making the function undefined.
A removable discontinuity typically occurs when
A Denominator constant
B Degree increases
C Gradient vanishes
D Factor cancels
A removable discontinuity happens when the function is undefined at a point due to 0/0, but simplification cancels the problematic factor, allowing a limit value to be assigned.
In polar substitution for limits, we usually set
A x=r cosθ, y=r sinθ
B x=r sinθ, y=r cosθ
C x=rθ, y=r/θ
D x=r², y=θ²
Converting to polar form is common near (0,0). If the expression depends only on r, or is bounded by a function of r alone for every angle θ, and tends to a single value as r→0, the limit is confirmed.
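A quick sketch of the polar technique, with an illustrative function chosen here: for f(x,y) = x²y/(x²+y²), substituting x = r cosθ, y = r sinθ gives f = r cos²θ sinθ, so |f| ≤ r regardless of the angle:

```python
import math

# Illustrative function: in polar form f = r * cos(t)**2 * sin(t).
def f(x, y):
    return x**2 * y / (x**2 + y**2)

# The worst-case |f| over many angles is bounded by r, so f -> 0 as r -> 0.
for r in (0.1, 0.01, 0.001):
    worst = max(abs(f(r * math.cos(t), r * math.sin(t)))
                for t in [k * math.pi / 50 for k in range(100)])
    assert worst <= r * (1 + 1e-9)  # bound shrinks with r: the limit is 0
```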
The partial derivative ∂f/∂x means
A Vary y, fix x
B Vary both equally
C Vary x, fix y
D Hold x at zero
In ∂f/∂x, we change x while keeping the other variables (like y, z) constant. It measures the rate of change along the x-direction.
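The "vary x, fix y" idea can be checked numerically with a central difference; the function and point here are illustrative:

```python
# Illustrative function: f(x, y) = x**2 * y, with exact partial f_x = 2*x*y.
def f(x, y):
    return x**2 * y

def partial_x(f, x, y, h=1e-6):
    # Vary x, hold y fixed: central difference approximation to df/dx.
    return (f(x + h, y) - f(x - h, y)) / (2 * h)

# At (2, 3) the exact value is 2*2*3 = 12.
print(partial_x(f, 2.0, 3.0))  # ≈ 12.0
```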
The partial derivative ∂f/∂y means
A Fix y, vary x
B Fix both variables
C Vary x and y
D Fix x, vary y
∂f/∂y measures how f changes with y while all other variables are held constant. It is like ordinary differentiation but in one chosen direction.
Second order partial ∂²f/∂x² is
A Differentiate f w.r.t y
B Differentiate ∂f/∂x w.r.t x
C Integrate ∂f/∂x
D Differentiate w.r.t θ
∂²f/∂x² is the derivative of the first partial ∂f/∂x again with respect to x. It describes curvature along the x-direction.
Mixed partial ∂²f/∂x∂y means
A x then y differentiation
B x only differentiation
C y only differentiation
D Integration then derivative
∂²f/∂x∂y means differentiating with respect to one variable and then the other (conventions differ on which comes first; in subscript form, fxy means x first, then y). When the second partials are continuous, both orders give the same result.
Clairaut’s theorem (basic) states that, if smooth enough
A fx = fy
B fxx = fyy
C f = constant
D fxy = fyx
If second partial derivatives are continuous near a point, the mixed partials are equal: ∂²f/∂x∂y = ∂²f/∂y∂x. This is a key symmetry result.
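Clairaut's symmetry can be verified numerically with nested central differences; the polynomial below is an illustrative choice with exact mixed partial fxy = 6x²y:

```python
# Illustrative function: f = x**3 * y**2, so f_xy = f_yx = 6 * x**2 * y.
def f(x, y):
    return x**3 * y**2

def mixed(f, x, y, order, h=1e-4):
    # Nested central differences approximating f_xy or f_yx.
    if order == "xy":  # differentiate in x first, then in y
        g = lambda yy: (f(x + h, yy) - f(x - h, yy)) / (2 * h)
        return (g(y + h) - g(y - h)) / (2 * h)
    # differentiate in y first, then in x
    g = lambda xx: (f(xx, y + h) - f(xx, y - h)) / (2 * h)
    return (g(x + h) - g(x - h)) / (2 * h)

# At (1, 2) both orders give 6*1*2 = 12, as Clairaut's theorem predicts.
print(mixed(f, 1.0, 2.0, "xy"), mixed(f, 1.0, 2.0, "yx"))
```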
The gradient ∇f points in direction of
A Maximum decrease
B Zero change
C Maximum increase
D Minimum curvature
The gradient gives the direction in which f increases fastest, and its magnitude gives the maximum rate of increase. It is fundamental in directional derivatives and optimization.
Directional derivative at a point needs
A Unit direction vector
B Any random vector
C Only Jacobian
D Only polar form
Directional derivative uses a unit vector to represent direction only, not magnitude. Using a non-unit vector would scale the value incorrectly.
Directional derivative formula uses
A f × u
B ∇f / u
C u / ∇f
D ∇f · u
The directional derivative in direction u is the dot product of the gradient and the unit vector u. This projects the gradient onto that direction.
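The dot-product formula can be sketched as follows; the function f = x² + y² and the direction (3, 4) are illustrative choices:

```python
import math

def grad_f(x, y):
    # Gradient of the illustrative function f(x, y) = x**2 + y**2.
    return (2 * x, 2 * y)

def directional_derivative(gx, gy, vx, vy):
    # Normalize the direction to a unit vector, then take grad·u.
    norm = math.hypot(vx, vy)
    return (gx * vx + gy * vy) / norm

# At (1, 2) in direction (3, 4): u = (3/5, 4/5), so D_u f = (2*3 + 4*4)/5 = 4.4.
gx, gy = grad_f(1.0, 2.0)
print(directional_derivative(gx, gy, 3.0, 4.0))  # 4.4
```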
Tangent plane to z=f(x,y) at (a,b) uses
A fxx(a,b), fyy(a,b)
B fx(a,b), fy(a,b)
C Jacobian only
D Euler theorem only
Tangent plane near (a,b) uses linear approximation: z ≈ f(a,b) + fx(a,b)(x−a) + fy(a,b)(y−b). It depends on first partial derivatives.
A normal vector to surface F(x,y,z)=0 is
A ∇F
B ∇f
C Any tangent vector
D Position vector only
For an implicit surface F=0, the gradient ∇F is perpendicular to level surfaces, so it serves as a normal vector at points where ∇F ≠ 0.
Total differential of f(x,y) is
A f dx + f dy
B dx + dy
C fx dx + fy dy
D fx dy + fy dx
The total differential gives the best linear change: df = fx dx + fy dy. It’s used for approximations and error estimates for small changes in x and y.
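A small error-estimate sketch, with an illustrative function f = xy (so fx = y, fy = x):

```python
# Illustrative function: f(x, y) = x * y.
def f(x, y):
    return x * y

# Total differential df = f_x dx + f_y dy = y dx + x dy.
x, y, dx, dy = 3.0, 4.0, 0.01, -0.02
df_estimate = y * dx + x * dy                  # ≈ -0.02
actual_change = f(x + dx, y + dy) - f(x, y)    # ≈ -0.0202
print(df_estimate, actual_change)  # the linear estimate is close for small changes
```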
Chain rule for z=f(x,y), x=x(t), y=y(t) gives
A dz/dt = fx + fy
B dz/dt = dx/dt + dy/dt
C dz/dt = f/t
D dz/dt = fx dx/dt + fy dy/dt
When x and y depend on t, z changes through both variables. Chain rule adds contributions: rate through x plus rate through y, using partial derivatives.
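The chain rule can be checked against a direct computation; z = xy with x = t², y = t³ is an illustrative choice (then z = t⁵ and dz/dt = 5t⁴):

```python
def chain_dz_dt(t):
    # Illustrative case: z = x*y with x = t**2, y = t**3, so f_x = y, f_y = x.
    x, y = t**2, t**3
    dxdt, dydt = 2 * t, 3 * t**2
    return y * dxdt + x * dydt   # f_x * dx/dt + f_y * dy/dt

# Direct check: z = t**5 gives dz/dt = 5 * t**4 = 80 at t = 2.
print(chain_dz_dt(2.0))  # 80.0
```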
If z=f(x,y), and y=y(x), then dz/dx equals
A fx + fy (dy/dx)
B fx fy
C fx − fy
D fy / fx
This is the single-parameter chain rule. z changes directly with x and indirectly through y(x). So dz/dx = fx + fy·dy/dx.
A function f(x,y) is homogeneous of degree n if
A f(tx,ty)=f(x,y)
B f(tx,ty)=t^n f(x,y)
C f(tx,ty)=t f(x,y) always
D f(tx,ty)=0
Homogeneity means scaling inputs by t scales output by t^n. The power n is the degree of homogeneity and helps identify structure in multivariable functions.
For homogeneous f of degree n, Euler theorem gives
A fx + fy = n
B fxx + fyy = 0
C x+y = n
D xfx + yfy = n f
Euler’s theorem connects a homogeneous function to its first partial derivatives: x fx + y fy = n f when f is homogeneous of degree n. It is useful for simplifying expressions and checking homogeneity quickly.
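Euler's identity x fx + y fy = n f can be verified directly for an illustrative degree-2 function:

```python
# Illustrative function: f = x**2 + y**2, homogeneous of degree n = 2.
def f(x, y):
    return x**2 + y**2

def euler_lhs(x, y):
    # Left-hand side of Euler's theorem: x*f_x + y*f_y with f_x = 2x, f_y = 2y.
    fx, fy = 2 * x, 2 * y
    return x * fx + y * fy

# Check x*f_x + y*f_y = 2*f at a few points.
for (x, y) in [(1.0, 2.0), (3.0, -4.0)]:
    assert euler_lhs(x, y) == 2 * f(x, y)
```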
If f is homogeneous of degree 0, then Euler theorem implies
A xfx + yfy = 0
B xfx + yfy = f
C xfx + yfy = 1
D f is constant
Degree 0 means scaling inputs does not change the output. Substituting n=0 in Euler’s theorem yields xfx + yfy = 0, a key property of ratio-type functions.
If f(x,y)=x²+y², its degree is
A 1
B 0
C 2
D 3
Replacing (x,y) by (tx,ty) gives t²x² + t²y² = t²(x²+y²). Hence f(tx,ty)=t² f(x,y), so the degree of homogeneity is 2.
If f(x,y)=x/y, its degree is
A 1
B 0
C 2
D −1
f(tx,ty)=(tx)/(ty)=x/y, unchanged by scaling. Therefore the function is homogeneous of degree 0, commonly seen in ratio expressions.
Euler theorem in three variables for degree n is
A xfx+yfy+zfz = n f
B fx+fy+fz = n f
C x+y+z = n
D fxx+fyy+fzz = n
For f(x,y,z) homogeneous of degree n, scaling gives Euler’s result with three terms: xfx + yfy + zfz = n f. Same idea extends from two variables.
A quick way to check homogeneity is to
A Scale variables by t
B Integrate function
C Find Jacobian only
D Compute Hessian first
Substitute (tx,ty) or (tx,ty,tz) into f and factor out t^n. If a single power t^n factors for all terms, the function is homogeneous of degree n.
Jacobian ∂(u,v)/∂(x,y) is a
A Sum of partials
B Product of variables
C Directional derivative
D Determinant of partials
The Jacobian is the determinant of the matrix of first partial derivatives. It measures local change of area/scale under transformation from (x,y) to (u,v).
If u=u(x,y), v=v(x,y), then Jacobian matrix is
A [[ux, uy],[vx, vy]]
B [[u, v],[x, y]]
C [[x, u],[y, v]]
D [[ux, vx],[uy, vy]]
The Jacobian matrix collects the partial derivatives of u and v with respect to x and y. The correct arrangement is rows as functions, columns as variables: [[ux, uy],[vx, vy]].
If transformation is locally invertible, then Jacobian is
A Always zero
B Nonzero
C Always one
D Always negative
A nonzero Jacobian determinant at a point indicates the transformation has a local inverse near that point (basic invertibility test). Zero Jacobian suggests collapse of area/scale.
Inverse Jacobian relation (basic) is
A Their sum is 1
B Their difference is 0
C Their product is 0
D ∂(u,v)/∂(x,y) · ∂(x,y)/∂(u,v) = 1
When the inverse transformation exists and the derivatives behave well, the Jacobians are reciprocals. Their product equals 1: the scaling produced by one map is undone by its inverse.
Jacobian helps most directly in
A Solving quadratic
B Change of variables
C Series expansion
D Matrix diagonalization
In multiple integrals, Jacobian accounts for stretching or shrinking of area/volume when switching coordinate systems, like Cartesian to polar or other transformations.
Jacobian for polar conversion is
A r
B 1/r
C r²
D sinθ
For x=r cosθ and y=r sinθ, the Jacobian determinant |∂(x,y)/∂(r,θ)| equals r. That’s why dA becomes r dr dθ in polar integrals.
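The polar Jacobian can be computed from the 2×2 determinant directly (the specific point is illustrative):

```python
import math

def polar_jacobian(r, theta):
    # Matrix of partials: rows are (x, y), columns are (r, theta).
    dxdr, dxdt = math.cos(theta), -r * math.sin(theta)
    dydr, dydt = math.sin(theta), r * math.cos(theta)
    # Determinant = r*cos(t)**2 + r*sin(t)**2 = r.
    return dxdr * dydt - dxdt * dydr

print(polar_jacobian(2.0, 0.7))  # ≈ 2.0, i.e. equals r for any theta
```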
If J = 0 at a point, transformation may be
A Not locally one-to-one
B Always continuous
C Always linear
D Always invertible
A zero Jacobian suggests the mapping collapses dimensions locally (area factor becomes zero). This can break local invertibility and may create singular behavior in change of variables.
Functional dependence test uses Jacobian
A Equal to one
B Equal to zero
C Greater than one
D Negative always
If u, v, w are functionally dependent on each other (not independent), the Jacobian ∂(u,v,w)/∂(x,y,z) vanishes, which is the standard indicator of dependence in basic tests.
Chain rule using Jacobian often forms product of
A Two Jacobians
B Two gradients
C Two Hessians
D Two integrals
For composite transformations, Jacobians multiply like derivatives in one variable: ∂(u,v)/∂(x,y) = ∂(u,v)/∂(r,θ) · ∂(r,θ)/∂(x,y), when valid.
For u=x+y, v=x−y, Jacobian ∂(u,v)/∂(x,y) equals
A −2
B 0
C 1
D 2
Compute determinant: ux=1, uy=1; vx=1, vy=−1. So J = (1)(−1) − (1)(1) = −2. Absolute value is 2, but determinant is −2.
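The computation above, together with the reciprocal relation from the earlier inverse-Jacobian question, can be sketched as:

```python
# u = x + y, v = x - y: the partials are constants.
ux, uy = 1.0, 1.0
vx, vy = 1.0, -1.0
J = ux * vy - uy * vx
print(J)  # -2.0

# Inverse map: x = (u+v)/2, y = (u-v)/2, with partials all +/- 1/2.
xu, xv = 0.5, 0.5
yu, yv = 0.5, -0.5
J_inv = xu * yv - xv * yu   # -0.5
print(J * J_inv)  # 1.0 -> the two Jacobians are reciprocals
```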
Differentiability at a point generally implies
A Discontinuity at point
B Jacobian equals 1
C Continuity at point
D Euler theorem holds
If a function is differentiable at a point, it has a good linear approximation there, which forces continuity. However, continuity alone does not guarantee differentiability.
A critical point of f(x,y) occurs when
A fx=0 and fy=0
B f=0 only
C Jacobian is zero
D x=y
For interior extrema in two variables, we first find points where both first partial derivatives vanish (or derivatives fail to exist). These are candidates for maxima, minima, or saddles.
Second derivative test uses the
A Jacobian determinant
B Euler identity
C Polar identity
D Hessian determinant
The test uses D = fxx fyy − (fxy)² at a critical point. Sign of D and fxx helps classify local min, local max, or saddle point.
If D>0 and fxx>0 at a critical point, it is
A Local minimum
B Local maximum
C Saddle point
D No conclusion
When D>0, the surface is bowl-like. If fxx>0, curvature is upward, giving a local minimum. This is the standard second derivative test in two variables.
If D>0 and fxx<0 at a critical point, it is
A Local minimum
B Saddle point
C Local maximum
D No conclusion
D>0 indicates consistent curvature in all directions. If fxx<0, the surface bends downward like an inverted bowl, producing a local maximum near that critical point.
If D<0 at a critical point, it indicates
A Local minimum
B Local maximum
C Saddle point
D Absolute minimum
D<0 means the surface curves upward in one direction and downward in another, like a saddle. So the point is neither a local max nor a local min.
If D=0 in second derivative test, then
A It is minimum
B It is maximum
C It is saddle always
D Test is inconclusive
When D=0, the usual quadratic approximation is insufficient to classify the point. One must use higher-order terms or other analysis to determine the nature of the point.
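The four cases of the second derivative test can be collected into one classifier; the sample inputs correspond to the illustrative functions noted in the comments:

```python
def classify(fxx, fyy, fxy):
    # Second derivative test at a critical point: D = fxx*fyy - fxy**2.
    D = fxx * fyy - fxy**2
    if D > 0:
        return "local min" if fxx > 0 else "local max"
    if D < 0:
        return "saddle"
    return "inconclusive"   # D = 0: higher-order analysis needed

print(classify(2, 2, 0))    # f = x**2 + y**2 at (0,0): local min
print(classify(-2, -2, 0))  # f = -(x**2 + y**2): local max
print(classify(2, -2, 0))   # f = x**2 - y**2: saddle
print(classify(0, 0, 0))    # inconclusive
```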
Lagrange multipliers are used to find
A Constrained extrema
B Ordinary limits
C Jacobian zeros
D Homogeneous degree
Lagrange multipliers help optimize a function subject to constraints like g(x,y)=c. The method links gradients: ∇f = λ∇g, giving candidate points on the constraint curve/surface.
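A minimal worked instance (the problem is chosen here for illustration): maximize f = xy subject to g = x + y = 10. The condition ∇f = λ∇g gives (y, x) = λ(1, 1), so x = y, and the constraint forces x = y = 5 with λ = 5:

```python
# Candidate from the Lagrange condition: x = y = 5, lam = 5.
x = y = 5.0
lam = 5.0
assert (y, x) == (lam * 1.0, lam * 1.0)   # grad f = lam * grad g holds
assert x + y == 10.0                      # constraint holds

# Numeric sanity check: along the constraint, f(t) = t*(10 - t) peaks at t = 5.
best = max(range(0, 11), key=lambda t: t * (10 - t))
print(best)  # 5
```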
On level curve f(x,y)=c, the gradient ∇f is
A Tangent to curve
B Normal to curve
C Parallel to x-axis
D Always zero
Level curves keep f constant, so movement along the curve causes no change in f. The gradient points in the steepest increase direction, hence it is perpendicular to the level curve.
Best linear approximation of f(x,y) near (a,b) is
A f(a,b)+fxΔx+fyΔy
B f(a,b)+fxxΔx²
C f(a,b)+fyyΔy²
D f(a,b)+ΔxΔy
Near (a,b), f changes approximately by its tangent plane: f(a+Δx,b+Δy) ≈ f(a,b) + fx(a,b)Δx + fy(a,b)Δy. This supports quick estimates and error analysis.
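The tangent-plane estimate can be compared against the true value; the function f = x²y and the base point (2, 1) are illustrative (there fx = 2xy = 4 and fy = x² = 4):

```python
# Illustrative function: f(x, y) = x**2 * y, with f(2, 1) = 4.
def f(x, y):
    return x**2 * y

a, b = 2.0, 1.0
dx, dy = 0.01, 0.02
estimate = f(a, b) + 4.0 * dx + 4.0 * dy   # ≈ 4.12 (tangent-plane value)
actual = f(a + dx, b + dy)                 # ≈ 4.120902 (true value)
print(estimate, actual)  # the linear estimate is accurate for small dx, dy
```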