Chapter 25: Finite Differences, Interpolation and Numerical Integration (Set-5)
For equally spaced x with step h, if Δ³y is constant and nonzero, the simplest matching function type is
A Quadratic polynomial
B Linear function
C Cubic polynomial
D Exponential function
For equally spaced data, constant third forward differences indicate a degree-3 polynomial trend. Quadratic gives constant second differences, and linear gives constant first differences, so cubic is the correct match.
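This is easy to verify numerically. Below is a minimal sketch (the helper name `forward_differences` is my own, not from the text) that builds a forward-difference table for y = x³ on unit-spaced x and shows the third differences are constant while the fourth vanish:

```python
def forward_differences(y):
    """Return the rows of a forward-difference table: y, Δy, Δ²y, ..."""
    rows = [list(y)]
    while len(rows[-1]) > 1:
        prev = rows[-1]
        rows.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    return rows

ys = [x**3 for x in range(6)]      # cubic data on x = 0..5, step h = 1
rows = forward_differences(ys)
print(rows[3])   # third differences: [6, 6, 6] — constant, equal to 3!·h³
print(rows[4])   # fourth differences: [0, 0]
```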
Using operator algebra, the identity E = (1−∇)⁻¹ implies E⁻¹ equals
A 1 − ∇
B 1 + ∇
C 1 − Δ
D 1 + Δ
Since ∇ = 1 − E⁻¹, rearranging gives E⁻¹ = 1 − ∇. This expresses a backward shift in terms of the backward difference operator and is useful in deriving relations.
For central operators, if δ = E^(1/2) − E^(−1/2), then δ² equals
A E − 1
B E − 2 + E⁻¹
C 1 − E⁻¹
D E + 2 + E⁻¹
Square δ: (E^(1/2)−E^(−1/2))² = E − 2 + E⁻¹. This operator form corresponds to a second central difference structure used in central-difference expansions.
If Δ = E − 1, then Δ(Ey) equals
A Δy
B ∇y
C δy
D EΔy
Compute Δ(Ey) = (E−1)(Ey) = E²y − Ey = E(Ey − y) = EΔy. This shows Δ and E commute in a controlled way for equally spaced shifts.
For equally spaced data, the identity Δ∇ equals
A ∇Δ
B δμ
C EΔ
D ΔE⁻¹
Using Δ = E−1 and ∇ = 1−E⁻¹, we get Δ∇ = (E−1)(1−E⁻¹) = E+E⁻¹−2, which is symmetric and equals ∇Δ as well.
The operator expression E + E⁻¹ − 2 is equal to
A Δ + ∇
B E − 1
C Δ∇
D 1 − E⁻¹
Expand Δ∇ = (E−1)(1−E⁻¹) = E − 1 − 1 + E⁻¹ = E + E⁻¹ − 2. This identity appears in central second-difference relations.
In Newton forward interpolation, the term with Δ³y₀ has coefficient
A p(p−1)(p−2)/3!
B p(p+1)(p+2)/3!
C p³/6
D p(p−1)/2
Newton forward uses falling factorial coefficients. The third-difference term is p(p−1)(p−2)/3! · Δ³y₀. This comes directly from the Gregory–Newton forward expansion.
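A sketch of how these falling-factorial coefficients accumulate in practice (the function name `newton_forward` is illustrative): at step k the running coefficient is multiplied by (p − k + 1)/k, producing p(p−1)…(p−k+1)/k!.

```python
def newton_forward(xs, ys, x):
    """Gregory–Newton forward interpolation for equally spaced xs."""
    h = xs[1] - xs[0]
    p = (x - xs[0]) / h
    diffs = list(ys)
    value = diffs[0]
    coeff = 1.0
    for k in range(1, len(ys)):
        diffs = [diffs[i + 1] - diffs[i] for i in range(len(diffs) - 1)]
        coeff *= (p - (k - 1)) / k      # builds p(p-1)...(p-k+1)/k!
        value += coeff * diffs[0]       # times Δ^k y0
    return value

# exact on a quadratic: y = x² interpolated at x = 1.5 gives 2.25
print(newton_forward([0.0, 1.0, 2.0, 3.0], [0.0, 1.0, 4.0, 9.0], 1.5))
```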
In Newton backward interpolation, the term with ∇³yₙ has coefficient
A p(p−1)(p−2)/3!
B p³/6
C p(p+1)/2
D p(p+1)(p+2)/3!
Newton backward uses rising factorial coefficients because expansion is around xₙ. The third-difference term becomes p(p+1)(p+2)/3! · ∇³yₙ, with p=(x−xₙ)/h.
For Lagrange interpolation with nodes x₀,x₁,x₂, the basis L₁(x) equals
A (x−x₁)(x−x₂)/((x₀−x₁)(x₀−x₂))
B (x−x₀)(x−x₁)/((x₂−x₀)(x₂−x₁))
C (x−x₀)(x−x₂)/((x₁−x₀)(x₁−x₂))
D (x₁−x)/(x₁−x₀)
L₁(x) must be 1 at x₁ and 0 at x₀,x₂. The numerator vanishes at x₀ and x₂, and the denominator normalizes it so L₁(x₁)=1.
If the x-values are distinct, the Lagrange interpolating polynomial is
A Unique
B Not unique
C Always quadratic
D Always cubic
For n+1 distinct x-values, there is exactly one polynomial of degree at most n that matches all points. Both Lagrange and Newton forms represent the same unique polynomial.
In divided differences, f[x₀,x₁,x₂] equals
A (f[x₀,x₂]−f[x₀,x₁])/(x₂−x₁)
B (f[x₁,x₂]−f[x₀,x₁])/(x₂−x₀)
C (f[x₀,x₁]+f[x₁,x₂])/(x₂−x₀)
D (f₂−f₀)/(x₂−x₀)
Second divided difference measures change in first divided differences over a wider x span. This recursive definition is central to building Newton’s divided-difference polynomial for unequal spacing.
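The recursion can be applied column by column to get all Newton coefficients at once; this sketch (helper name `divided_difference_coeffs` is my own) overwrites a single array in place:

```python
def divided_difference_coeffs(xs, ys):
    """Newton coefficients f[x0], f[x0,x1], ..., f[x0..xn] via the recursion."""
    n = len(xs)
    c = list(ys)
    for level in range(1, n):
        # update from the bottom up so lower-order entries survive
        for i in range(n - 1, level - 1, -1):
            c[i] = (c[i] - c[i - 1]) / (xs[i] - xs[i - level])
    return c

# f(x) = x² on unequally spaced nodes: f[x0,x1,x2] = 1 (half of f″)
print(divided_difference_coeffs([1.0, 2.0, 4.0], [1.0, 4.0, 16.0]))  # [1.0, 3.0, 1.0]
```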
Newton divided-difference polynomial in nested form starts as
A a₀ + xa₁ + …
B a₀ + (x+x₀)a₁ + …
C a₀ + (x−x₀)a₁ + …
D a₀ + (x/x₀)a₁ + …
Newton’s form is P(x)=a₀ + a₁(x−x₀) + a₂(x−x₀)(x−x₁)+… where aᵣ are divided differences. The (x−xᵢ) structure enables easy extension.
For numerical differentiation, the central first-derivative formula has truncation error order
A h²
B h
C h³
D h⁴
Central difference f′(x)≈[f(x+h)−f(x−h)]/(2h) cancels first-order error terms from Taylor series, leaving leading error proportional to h² for smooth functions.
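The O(h²) behaviour can be observed directly: halving h should shrink the error by roughly 4. A minimal check on f = sin:

```python
import math

def central_diff(f, x, h):
    """First derivative via the symmetric quotient [f(x+h) - f(x-h)] / (2h)."""
    return (f(x + h) - f(x - h)) / (2 * h)

e1 = abs(central_diff(math.sin, 1.0, 0.10) - math.cos(1.0))
e2 = abs(central_diff(math.sin, 1.0, 0.05) - math.cos(1.0))
print(e1 / e2)   # close to 4, consistent with O(h²)
```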
For second derivative, the central formula [f(x+h)−2f(x)+f(x−h)]/h² has error order
A h
B h³
C h⁴
D h²
This second-derivative approximation comes from Taylor expansions and has leading truncation error proportional to h². It is commonly used in finite-difference methods for curvature and PDE discretization.
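The same halving experiment works for the second-derivative stencil; again the error ratio near 4 confirms second-order accuracy (sketch only, with illustrative names):

```python
import math

def second_central(f, x, h):
    """[f(x+h) - 2 f(x) + f(x-h)] / h² ≈ f''(x), leading error O(h²)."""
    return (f(x + h) - 2 * f(x) + f(x - h)) / (h * h)

# f = sin, so f''(1) = -sin(1)
e1 = abs(second_central(math.sin, 1.0, 0.10) - (-math.sin(1.0)))
e2 = abs(second_central(math.sin, 1.0, 0.05) - (-math.sin(1.0)))
print(e1 / e2)   # close to 4
```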
Composite trapezoidal rule exactness is guaranteed for polynomials up to degree
A 2
B 1
C 3
D 0
Trapezoidal rule is based on linear interpolation between points, so it integrates any linear function exactly. For higher-degree polynomials, there is generally nonzero error.
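A short sketch makes the exactness boundary concrete: the rule reproduces a linear integral exactly but overestimates a convex quadratic (`trapezoid` is an illustrative helper):

```python
def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n subintervals."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return s * h

# exact for a linear integrand: ∫₀¹ (2x + 1) dx = 2
print(trapezoid(lambda x: 2 * x + 1, 0.0, 1.0, 4))   # 2.0 exactly
# but not for a quadratic: ∫₀¹ x² dx = 1/3
print(trapezoid(lambda x: x * x, 0.0, 1.0, 4))       # 0.34375, slightly high
```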
Composite Simpson’s 1/3 rule exactness is guaranteed for polynomials up to degree
A 2
B 4
C 3
D 1
Simpson’s 1/3 uses quadratic interpolation on each pair of subintervals and ends up being exact for polynomials up to degree 3. Its error involves the fourth derivative.
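The degree-3 exactness is worth seeing numerically: even a single Simpson panel pair integrates a cubic without error (sketch with an illustrative helper name):

```python
def simpson13(f, a, b, n):
    """Composite Simpson 1/3 rule; n must be even."""
    assert n % 2 == 0
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)   # 4 at odd, 2 at even interior
    return s * h / 3

# exact for a cubic: ∫₀² x³ dx = 4
print(simpson13(lambda x: x**3, 0.0, 2.0, 2))   # 4.0
```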
In Simpson’s 1/3, if n is even, number of odd-index interior points equals
A n/2
B (n−1)/2
C n
D n/3
With indices 0 to n, the odd interior indices are 1,3,5,…,n−1. That is exactly n/2 points when n is even. These are the points weighted by 4.
In Simpson’s 1/3, the sum of weights (for n subintervals) equals
A 2n
B 4n
C 3n
D n
Weights are 1 at endpoints, 4 at odd interior (n/2 points), and 2 at even interior ((n/2)−1 points). Total = 2 + 4(n/2) + 2((n/2)−1) = 3n.
For Simpson’s 3/8 composite rule with n subintervals (n a multiple of 3), the count of interior indices that are multiples of 3 is
A n/3
B n/2
C n − 1
D n/3 − 1
Multiples of 3 among interior indices are 3,6,…,n−3. There are (n/3)−1 such indices, and they get weight 2 in the 3/8 composite pattern.
If trapezoidal approximations T(h) and T(h/2) are known, Richardson extrapolation for order h² gives
A (4T(h/2)−T(h))/3
B (2T(h/2)−T(h))
C (T(h)+T(h/2))/2
D (T(h)−T(h/2))/3
Trapezoidal error behaves like C h². Combining T(h) and T(h/2) cancels C h²: improved estimate = (4T(h/2)−T(h))/3. This is the first Romberg step.
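A quick numerical check of the first Romberg step (variable names T1, T2, R are illustrative): the extrapolated value lands far closer to the true integral than either trapezoidal estimate.

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n subintervals."""
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

T1 = trapezoid(math.sin, 0.0, math.pi, 4)   # T(h)
T2 = trapezoid(math.sin, 0.0, math.pi, 8)   # T(h/2)
R = (4 * T2 - T1) / 3                       # first Romberg step
# true value of ∫₀^π sin x dx is 2
print(abs(T2 - 2), abs(R - 2))              # R's error is much smaller
```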
In Romberg integration table, R(1,2) (first extrapolated value) is computed from
A Simpson and trapezoid
B T(h) and T(h/2)
C Euler and Heun
D Lagrange and Newton
Romberg starts with trapezoidal values in the first column. The next column uses Richardson extrapolation combining T(h) and T(h/2) to eliminate the leading h² error term.
If a method has error C h⁴, Richardson extrapolation uses factor
A 4
B 2
C 16
D 8
If error is proportional to h⁴, halving h reduces error by 2⁴=16. Richardson uses this factor to eliminate the leading term, combining approximations at h and h/2 accordingly.
For Euler’s method, the update yₙ₊₁ = yₙ + h f(xₙ,yₙ) is derived from Taylor series by truncating after
A First derivative term
B Second derivative term
C Third derivative term
D Fourth derivative term
Taylor expansion gives y(x+h)=y(x)+h y′(x)+h²/2 y″(ξ). Euler keeps only y+h y′ and replaces y′ by f(x,y), causing the h² truncation error per step.
For the IVP y′ = y, y(0)=1, Euler with step h gives the recurrence
A yₙ₊₁ = (1−h)yₙ
B yₙ₊₁ = (1+h)yₙ
C yₙ₊₁ = yₙ/h
D yₙ₊₁ = yₙ + h
Here f(x,y)=y, so Euler gives yₙ₊₁ = yₙ + h yₙ = (1+h)yₙ. This shows exponential growth approximated by repeated multiplication.
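A minimal Euler sketch (helper name `euler` is my own) confirms that for f(x, y) = y the iteration is literally repeated multiplication by (1 + h):

```python
def euler(f, x0, y0, h, steps):
    """Explicit Euler: y_{n+1} = y_n + h f(x_n, y_n)."""
    x, y = x0, y0
    for _ in range(steps):
        y += h * f(x, y)
        x += h
    return y

# y' = y, y(0) = 1: each step multiplies y by (1 + h)
h, n = 0.1, 10
print(euler(lambda x, y: y, 0.0, 1.0, h, n))   # equals (1 + h)**n
```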
For y′ = −ky (k>0), Euler stability requires roughly
A h > 2/k
B h = k/2
C h < k/2
D h < 2/k
Euler gives yₙ₊₁=(1−kh)yₙ. For a decaying solution without sign-flipping growth, |1−kh|<1 is needed, giving 0 < kh < 2, i.e. h < 2/k.
In Heun’s method, the corrected slope is the average of
A f at two starts
B f at two ends
C f at start/end
D f at midpoint only
Heun predicts y* then computes slopes f(xₙ,yₙ) and f(xₙ₊₁,y*). The method updates using their average, reducing error compared with using only the starting slope.
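The predictor–corrector structure fits in a few lines; this sketch (names k1, k2, `heun` are illustrative) integrates y′ = y to x = 1 and lands much nearer e than plain Euler's ≈ 2.594:

```python
import math

def heun(f, x0, y0, h, steps):
    """Heun (improved Euler): average of start and predicted-end slopes."""
    x, y = x0, y0
    for _ in range(steps):
        k1 = f(x, y)                  # slope at the start
        k2 = f(x + h, y + h * k1)     # slope at the Euler-predicted end
        y += 0.5 * h * (k1 + k2)      # update with the averaged slope
        x += h
    return y

print(heun(lambda x, y: y, 0.0, 1.0, 0.1, 10))   # ≈ 2.714, vs e ≈ 2.71828
```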
If the function is smooth, Simpson’s rule error bound over [a,b] contains factor
A (b−a)h⁴
B (b−a)h²
C (b−a)h
D (b−a)/h
Composite Simpson’s 1/3 global error is proportional to (b−a)h⁴ times max|f⁽⁴⁾(x)|. This shows why Simpson converges faster than trapezoidal for smooth functions.
Trapezoidal global error bound over [a,b] contains factor
A (b−a)h⁴
B (b−a)h²
C (b−a)h
D (b−a)/h
Composite trapezoidal error is proportional to (b−a)h² times max|f″(x)|. Because it is second-order, halving h typically reduces error by about 4 for smooth integrands.
A central-difference interpolation method is generally most accurate when x is
A Near first entry
B Near last entry
C Near table center
D Outside range
Central methods use symmetric information on both sides of the target x. Near the center, both forward and backward data are available and balanced, improving accuracy and reducing bias.
In Bessel’s formula, the “best” situation is when x is near
A Midpoint of nodes
B First node
C Last node
D Far beyond
Bessel’s interpolation is designed for points near the midpoint between two central tabulated values. Its averaging structure suits half-step positions, often giving better accuracy there than Stirling.
If x-values are equally spaced, divided differences reduce to expressions involving
A Only derivatives
B Only integrals
C Only limits
D Finite differences
With constant step h, divided differences relate to finite differences by scaling factors involving h and factorials. This links Newton divided-difference interpolation to Gregory–Newton formulas.
The main numerical danger in high-degree interpolation on a wide interval is
A Runge oscillations
B Exact matching
C Lower truncation
D No uniqueness
High-degree polynomials can oscillate strongly between points, especially near interval ends. This can produce large errors even if data points are matched exactly, motivating splines or local interpolation.
A common stability improvement in interpolation is using
A One high degree
B Extrapolation only
C Piecewise splines
D Random nodes
Splines use low-degree polynomials on small subintervals with smooth joining. This reduces oscillations and improves numerical stability while still giving a smooth curve through data.
In numerical differentiation using tables, differentiating an interpolating polynomial is useful because it
A Removes rounding
B Uses nearby data
C Gives exact always
D Avoids subtraction
Building a polynomial from nearby points and differentiating it gives a systematic derivative estimate. It captures local trend better than using a single difference and can improve accuracy for smooth data.
If a forward difference table is built for y, then Δ²yᵢ relates to y by
A yᵢ₊₂ − 2yᵢ₊₁ + yᵢ
B yᵢ₊₁ − yᵢ
C yᵢ − yᵢ₋₁
D yᵢ₊₂ − yᵢ
Expanding Δ² = (E−1)² = E² − 2E + 1 and applying it to yᵢ gives yᵢ₊₂ − 2yᵢ₊₁ + yᵢ, the standard forward second difference read off the difference table.
The central second-difference operator δ²y(x) corresponds to
A y(x+h) − 2y(x) + y(x−h)
B y(x+h) − y(x)
C y(x) − y(x−h)
D y(x+h/2) − y(x)
From δ² = E − 2 + E⁻¹, applying to y gives y(x+h) − 2y(x) + y(x−h). This is the symmetric second-difference pattern used in central derivative formulas.
In Newton forward, using p close to 0 mainly means x is close to
A xₙ
B midpoint
C x₀
D outside
Newton forward uses p=(x−x₀)/h. If p≈0, then x≈x₀, so the interpolation point is near the first node, making higher-order terms smaller and the series efficient.
In Newton backward, using p close to 0 mainly means x is close to
A x₀
B xₙ
C midpoint
D outside
Newton backward uses p=(x−xₙ)/h. If p≈0, then x≈xₙ, meaning the required value is near the last table entry where backward differences give best accuracy.
For trapezoidal rule, the first Romberg extrapolation step increases order from
A h² to h⁴
B h to h²
C h⁴ to h⁶
D h² to h³
Trapezoidal error is O(h²). Richardson extrapolation cancels the h² term, producing a new estimate with leading error O(h⁴). This is why Romberg converges rapidly on smooth functions.
In Gauss–Legendre quadrature on [−1,1], nodes are chosen as zeros of
A Chebyshev polynomials
B Taylor polynomials
C Legendre polynomials
D Bernoulli polynomials
Gauss–Legendre quadrature uses Legendre polynomial roots as nodes on [−1,1]. With these nodes and weights, it achieves the highest possible exactness degree for a given number of points.
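NumPy exposes these nodes and weights via `numpy.polynomial.legendre.leggauss`; the 2-point rule (exactness degree 2·2 − 1 = 3) integrates a cubic-plus-quadratic exactly:

```python
import numpy as np

# 2-point Gauss–Legendre: nodes are the roots of P₂(x), weights both 1
nodes, weights = np.polynomial.legendre.leggauss(2)
approx = sum(w * (x**3 + x**2) for x, w in zip(nodes, weights))
print(approx)   # ∫₋₁¹ (x³ + x²) dx = 2/3; the odd x³ part contributes 0
```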
A main reason Gauss quadrature can beat Newton–Cotes is
A Equal spacing needed
B No weights used
C Uses differences only
D Higher exactness degree
For the same number of function evaluations, Gauss quadrature integrates polynomials of much higher degree exactly by choosing optimal nodes and weights, while Newton–Cotes is limited by equal spacing.
If n+1 data points are used, the maximum degree of interpolating polynomial is
A n+1
B n
C 2n
D n−1
With n+1 distinct x-values, a unique polynomial of degree at most n interpolates them. This is a core theorem behind both Lagrange and Newton interpolation formulas.
In Euler method, reducing h by half typically reduces global error by about factor
A 2
B 4
C 8
D 16
Euler’s global error is O(h). If h is halved, the leading error term roughly halves as well. This slow convergence is why higher-order methods are preferred for accuracy.
In Heun’s method (order 2), reducing h by half typically reduces global error by about factor
A 2
B 8
C 4
D 16
Heun’s method has global error O(h²). Halving h makes h² become (h/2)² = h²/4, so the error typically reduces by a factor of about 4 for smooth problems.
If a table is noisy, central difference derivatives can still be unstable because differentiation
A Amplifies noise
B Averages noise
C Removes noise
D Hides noise
Differentiation behaves like a high-pass operation: small fluctuations in data can produce large changes in difference quotients. Even accurate formulas can give poor derivatives if the data is noisy.
In composite Simpson’s 1/3, if the number of data points is 9, then the number of subintervals n equals
A 9
B 7
C 6
D 8
Number of subintervals is n = (points − 1). With 9 points, n=8, which is even, so Simpson’s 1/3 can be applied over the full interval with correct weighting.
For the composite 3/8 rule, if the number of data points is 10, then the number of subintervals n equals
A 10
B 9
C 8
D 7
Subintervals are n = points − 1 = 9. Since 9 is a multiple of 3, composite Simpson’s 3/8 rule can be applied across the whole interval without leftover subintervals.
A standard reason to prefer divided differences over forward differences is
A Lower computation always
B No subtraction needed
C Unequal spacing support
D Uses only endpoints
Forward difference formulas assume equal spacing h. Divided differences work for any distinct x-values and allow Newton’s polynomial to be constructed directly from irregular data.
In finite difference methods for PDEs, grid refinement (smaller h) usually decreases truncation error but may increase
A Round-off effects
B Uniqueness guarantee
C Exactness degree
D Smoothness of f
Smaller h reduces truncation error in difference approximations, but computations involve more steps and more subtractions of close numbers, which can magnify rounding errors in floating-point arithmetic.
In numerical integration, “adaptive” algorithms mainly choose step size based on
A Fixed table length
B Constant derivatives
C Estimated local error
D Random selection
Adaptive methods estimate error on a subinterval (often by comparing two rules or two step sizes). If error is too large, they subdivide; if small, they keep larger steps to save work.
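The compare-and-subdivide idea can be sketched with the classic adaptive Simpson scheme (all names here are illustrative): each interval's one-panel and two-panel estimates are compared, and the interval is split only when their difference exceeds the local tolerance.

```python
import math

def adaptive_simpson(f, a, b, tol):
    """Adaptive Simpson: compare 1-panel vs 2-panel estimates per interval."""
    def panel(lo, hi):
        mid = 0.5 * (lo + hi)
        return (hi - lo) / 6 * (f(lo) + 4 * f(mid) + f(hi))

    def recurse(lo, hi, whole, tol):
        mid = 0.5 * (lo + hi)
        left, right = panel(lo, mid), panel(mid, hi)
        if abs(left + right - whole) < 15 * tol:       # local error small enough
            return left + right + (left + right - whole) / 15
        return (recurse(lo, mid, left, 0.5 * tol)
                + recurse(mid, hi, right, 0.5 * tol))

    return recurse(a, b, panel(a, b), tol)

print(adaptive_simpson(math.sin, 0.0, math.pi, 1e-8))   # ≈ 2
```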