Chapter 25: Finite Differences, Interpolation and Numerical Integration (Set-2)
If a table has constant third differences (equally spaced x), the data best fits a
A Quadratic polynomial
B Cubic polynomial
C Linear function
D Constant function
For equally spaced x-values, a polynomial of degree 3 has constant third differences. This property helps identify the likely degree of the underlying polynomial trend from tabulated data.
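The property above is easy to verify numerically. A minimal sketch, assuming a hypothetical cubic y = x³ sampled at unit spacing:

```python
# Forward difference table for a cubic sampled at equally spaced x.
# Assumed example: y = x^3 at x = 0, 1, ..., 6.
def difference_table(y):
    """Return successive forward-difference rows [Δy, Δ²y, Δ³y, ...]."""
    rows = []
    current = list(y)
    while len(current) > 1:
        current = [b - a for a, b in zip(current, current[1:])]
        rows.append(current)
    return rows

y = [x**3 for x in range(7)]        # 0, 1, 8, 27, 64, 125, 216
rows = difference_table(y)
print(rows[2])                      # third differences: all equal to 3! = 6
```

For a degree-n polynomial the nth differences equal n!·hⁿ times the leading coefficient, which is why they come out constant.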
For shift operator E, the identity for E⁻¹y(x) is
A y(x+h)
B y(x)/h
C y(x−h)
D y′(x)
E shifts forward by one step: Ey(x)=y(x+h). Therefore E⁻¹ shifts backward: E⁻¹y(x)=y(x−h). This is used to express backward difference relations neatly.
The operator form of central difference is commonly
A δ = E − 1
B δ = 1 − E⁻¹
C δ = E + 1
D δ = E^(1/2) − E^(−1/2)
Central difference compares values equally spaced around x. Using half-step shifts, δy(x)=y(x+h/2)−y(x−h/2), which in operator form becomes (E^(1/2)−E^(−1/2))y.
For equally spaced x, the mean operator μ is usually defined as
A (E+1)/2
B (E^(1/2)+E^(−1/2))/2
C (1−E)/2
D (E−1)/2
The mean operator averages values at symmetric half steps: μy(x) = [y(x+h/2)+y(x−h/2)]/2. It appears in central interpolation formulas like Stirling and Bessel.
In factorial notation, p(p−1)(p−2) is written as
A p^(3)
B p₃
C (p)³
D p^(−3)
In finite-difference interpolation, factorial notation p^(n) means the falling factorial p(p−1)(p−2)…(p−n+1), a product of n factors. This form naturally matches the coefficients in Newton forward and backward series expansions.
Newton forward interpolation uses terms like
A p^r ∇^r yₙ
B p^(r) δ^r y₀
C p^(r) Δ^r y₀
D p^r Ey₀
The Gregory–Newton forward formula builds from forward differences at the start: y ≈ y₀ + pΔy₀ + p(p−1)/2! Δ²y₀ + …, where p=(x−x₀)/h, with falling-factorial coefficients.
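The series above can be sketched directly in code. This is a minimal illustration, assuming a hypothetical table sampled from y = x³ + 1 at unit spacing:

```python
def newton_forward(xs, ys, x):
    """Gregory–Newton forward interpolation for equally spaced xs:
    y0 + p Δy0 + p(p-1)/2! Δ²y0 + ...  with p = (x - x0)/h."""
    h = xs[1] - xs[0]
    p = (x - xs[0]) / h
    # Leading column of the difference table: y0, Δy0, Δ²y0, ...
    diffs = [ys[0]]
    current = list(ys)
    while len(current) > 1:
        current = [b - a for a, b in zip(current, current[1:])]
        diffs.append(current[0])
    total, coeff = 0.0, 1.0
    for r, d in enumerate(diffs):
        total += coeff * d
        coeff *= (p - r) / (r + 1)   # falling factorial divided by r!
    return total

xs = [0, 1, 2, 3]
ys = [1, 2, 9, 28]                   # assumed samples of y = x^3 + 1
print(newton_forward(xs, ys, 1.5))   # → 4.375, which is 1.5³ + 1
```

Because four points determine a cubic exactly, the result matches the true function here; with truncated series it would only approximate it.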
Newton backward interpolation commonly uses terms like
A p^r Δ^r y₀
B p^(r) ∇^r yₙ
C p^(r) δ^r y₀
D p^r μy₀
The Gregory–Newton backward formula expands near the end: y ≈ yₙ + p∇yₙ + p(p+1)/2! ∇²yₙ + … where p=(x−xₙ)/h.
The main reason Newton forward formula is efficient is
A Avoids any subtraction
B Needs no data points
C Always exact integral
D Reuses difference table
Once a forward difference table is computed, many interpolations near the beginning can be done quickly by reusing Δy₀, Δ²y₀, etc., avoiding repeated polynomial construction.
Lagrange interpolation polynomial is formed as a sum of
A Δyᵢ Lᵢ(x)
B ∇yᵢ Lᵢ(x)
C yᵢ Lᵢ(x)
D δyᵢ Lᵢ(x)
Lagrange form is P(x)=Σ yᵢ Lᵢ(x), where each Lᵢ(x) is a basis polynomial that equals 1 at xᵢ and 0 at all other data points.
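The basis-polynomial construction can be sketched as below; the data are an assumed example from y = x², chosen with unequal spacing to show that Lagrange form does not need a uniform table:

```python
def lagrange(xs, ys, x):
    """P(x) = Σ y_i L_i(x), where L_i(x_j) = 1 if i == j, else 0."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        Li = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                Li *= (x - xj) / (xi - xj)   # vanishes at every other node
        total += yi * Li
    return total

xs = [1.0, 2.0, 4.0]                 # unequal spacing is fine
ys = [1.0, 4.0, 16.0]                # assumed samples of y = x²
print(lagrange(xs, ys, 3.0))         # → 9.0, exact since y is degree ≤ 2
```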
A key advantage of Newton divided-difference form is
A Requires even n
B Works for unequal x
C Needs constant Δy
D Uses only endpoints
Divided differences do not need equal spacing of x-values. Newton’s divided-difference polynomial can be built step-by-step as new data points are added, making it practical for irregular tables.
A first divided difference f[x₀,x₁] equals
A (f₁−f₀)/(x₁−x₀)
B (f₁+f₀)/(x₁−x₀)
C (x₁−x₀)/(f₁−f₀)
D (f₁−f₀)/(x₁+x₀)
The first divided difference is the slope between two points. It generalizes the idea of first difference when spacing is not uniform and forms the building block for higher divided differences.
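The divided-difference table and the resulting Newton polynomial can be sketched together. A minimal illustration, assuming hypothetical unequally spaced samples of y = x² + x + 1:

```python
def divided_diff_coeffs(xs, ys):
    """Coefficients f[x0], f[x0,x1], f[x0,x1,x2], ... built in place."""
    coeffs = list(ys)
    n = len(xs)
    for order in range(1, n):
        for i in range(n - 1, order - 1, -1):
            coeffs[i] = (coeffs[i] - coeffs[i - 1]) / (xs[i] - xs[i - order])
    return coeffs

def newton_divided(xs, ys, x):
    c = divided_diff_coeffs(xs, ys)
    total = c[-1]
    for i in range(len(xs) - 2, -1, -1):   # Horner-style nested evaluation
        total = total * (x - xs[i]) + c[i]
    return total

xs = [0.0, 1.0, 3.0]                 # unequal spacing
ys = [1.0, 3.0, 13.0]                # assumed samples of y = x² + x + 1
print(newton_divided(xs, ys, 2.0))   # → 7.0
```

Adding one more data point only appends one coefficient; the earlier ones are unchanged, which is the incremental-construction advantage mentioned above.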
The “error term” in interpolation mainly depends on
A Only first difference
B Next higher derivative
C Only step size sign
D Data units only
Interpolation error involves a higher derivative of the true function and a product of (x−xᵢ) terms. Smooth functions with bounded derivatives typically give smaller errors within the data range.
In numerical differentiation, forward difference is generally best near
A End of table
B Middle only
C Beginning of table
D Outside range
Forward difference uses values ahead of the target point, so it is convenient near the start where forward values exist. Near the end, forward values may be unavailable.
Backward difference derivative is generally best near
A Beginning of table
B Middle only
C At x=0 only
D End of table
Backward difference uses values behind the target point, so it is convenient near the end where backward values exist. It avoids needing unknown values beyond the last tabulated point.
Central difference derivative is preferred mainly because it is
A Always exact
B More accurate order
C Uses one point only
D Needs no h
Central difference uses symmetric points, causing leading error terms to cancel. For smooth functions, it typically has smaller truncation error than forward or backward approximations using the same step size.
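The accuracy difference is easy to observe numerically. A minimal sketch, assuming f = sin differentiated at a hypothetical point x = 1 with h = 0.1:

```python
import math

# Derivative of sin at x = 1; true value is cos(1) ≈ 0.5403.
f, x, h = math.sin, 1.0, 0.1
forward = (f(x + h) - f(x)) / h                # truncation error O(h)
central = (f(x + h) - f(x - h)) / (2 * h)      # truncation error O(h²)
true = math.cos(x)
print(abs(forward - true), abs(central - true))
```

For this step size the central estimate is roughly an order of magnitude closer, consistent with the O(h) versus O(h²) error orders.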
A basic second-derivative central formula is
A [f(x+h)−f(x)]/h
B [f(x)−f(x−h)]/h
C [f(x+h)−2f(x)+f(x−h)]/h²
D [f(x+2h)−f(x)]/h²
This formula comes from Taylor expansions and approximates curvature at x using symmetric neighbors. It is widely used in difference tables and finite-difference methods for PDE/ODE discretization.
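A quick numerical check of the formula, assuming a hypothetical test function f = eˣ whose second derivative at 0 is exactly 1:

```python
import math

# Central second-derivative formula applied to e^x at x = 0.
f, x, h = math.exp, 0.0, 0.01
d2 = (f(x + h) - 2 * f(x) + f(x - h)) / h**2
print(d2)          # close to 1; the truncation error is O(h²)
```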
Composite trapezoidal rule gives exact result for
A Any linear function
B Any quadratic function
C Any cubic function
D Any exponential function
Trapezoidal rule approximates the curve by straight line segments. If the true function is linear, the approximation matches it exactly on each subinterval, making the integral exact.
Simpson’s 1/3 rule gives exact result for any
A Quartic polynomial
B Cubic polynomial
C Exponential function
D Logarithmic function
Simpson’s 1/3 rule is derived by integrating a quadratic interpolant over pairs of subintervals, yet the symmetric placement of nodes makes the cubic error term cancel, so it integrates polynomials up to degree 3 exactly.
When using Simpson’s 1/3 rule, the interval [a,b] must have
A Odd subintervals
B Prime subintervals
C No partition
D Even subintervals
Simpson’s 1/3 works on two subintervals at a time (three points). Therefore the number of subintervals n must be even, so the full interval can be grouped properly.
In Simpson’s 3/8 rule, each block uses
A Two subintervals
B Four subintervals
C Three subintervals
D One subinterval
Simpson’s 3/8 rule fits a cubic over four equally spaced points, covering three subintervals. That’s why n must be a multiple of 3 in the composite version.
The composite trapezoidal formula weights endpoints as
A Double weight each
B Half weight each
C Four weight each
D Zero weight each
In composite trapezoidal rule, endpoints are counted once while interior points are counted twice in the h/2 form. Equivalently, in the h[ … ] form, each endpoint has weight 1/2.
In composite Simpson’s 1/3, the endpoint weights are
A 2 each
B 4 each
C 3 each
D 1 each
Composite Simpson’s 1/3 uses h/3 times [y₀ + yₙ + 4(sum odd) + 2(sum even)]. The first and last values appear once, so each endpoint has weight 1.
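A minimal sketch of the composite 1/3 rule with its 1–4–2–…–4–1 weights; the integrand is an assumed cubic example, which the rule integrates exactly even with the minimum n = 2:

```python
def simpson(f, a, b, n):
    """Composite Simpson's 1/3: h/3 [y0 + 4·(odd sum) + 2·(even sum) + yn].
    The number of subintervals n must be even."""
    if n % 2:
        raise ValueError("n must be even for Simpson's 1/3 rule")
    h = (b - a) / n
    total = f(a) + f(b)                          # endpoint weight 1 each
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return h / 3 * total

# Exact for cubics: ∫₀² x³ dx = 4.
print(simpson(lambda x: x**3, 0.0, 2.0, 2))      # → 4.0 even with n = 2
```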
The main idea of Richardson extrapolation is to
A Increase step size
B Cancel leading error
C Remove all rounding
D Replace integration
Richardson extrapolation combines two approximations computed with different step sizes to eliminate the leading error term. This improves accuracy without changing the underlying basic method.
Romberg integration is built from repeated use of
A Trapezoidal rule
B Simpson 3/8 rule
C Euler method
D Lagrange basis
Romberg starts with trapezoidal approximations at h, h/2, h/4, etc. Then it applies Richardson extrapolation in a table to rapidly improve accuracy for smooth functions.
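The trapezoid-plus-Richardson table can be sketched compactly. This is a minimal illustration, assuming a hypothetical smooth integrand sin on [0, π] whose exact integral is 2:

```python
import math

def romberg(f, a, b, levels=5):
    """Romberg table: trapezoid at h, h/2, h/4, ... refined by
    Richardson extrapolation along each row."""
    R = [[0.5 * (b - a) * (f(a) + f(b))]]        # coarsest trapezoid
    for k in range(1, levels):
        n = 2**k
        h = (b - a) / n
        # Refined trapezoid reuses the previous value, adding new midpoints.
        new_pts = sum(f(a + i * h) for i in range(1, n, 2))
        row = [0.5 * R[-1][0] + h * new_pts]
        for m in range(1, k + 1):                # Richardson extrapolation
            row.append(row[m - 1] + (row[m - 1] - R[-1][m - 1]) / (4**m - 1))
        R.append(row)
    return R[-1][-1]

print(romberg(math.sin, 0.0, math.pi))           # ≈ 2.0 to high accuracy
```

Each extrapolation column cancels another even power of h, which is why accuracy improves so quickly for smooth integrands.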
In Euler’s method, the slope used for one step is evaluated at
A Next point
B Midpoint only
C Current point
D Random point
Euler’s method uses the slope f(xₙ,yₙ) at the current point to advance: yₙ₊₁=yₙ+h f(xₙ,yₙ). It is simple but may need small h for good accuracy.
Euler’s method is classified as a
A Two-step method
B One-step method
C Multi-step only
D Implicit method
Euler uses only the current value (xₙ,yₙ) to compute the next value. It does not require older points, so it is a basic one-step explicit numerical method for IVPs.
A major stability issue with Euler’s method appears for
A Linear integrals
B Polynomial tables
C Constant functions
D Stiff equations
For stiff ODEs, explicit Euler may require extremely small step sizes to remain stable. Otherwise, numerical solutions can blow up even when the true solution is smooth and decaying.
Heun’s method improves Euler by using
A Only backward slope
B Only forward slope
C Average of slopes
D No slope at all
Heun predicts a value using Euler, then corrects it using the slope at the predicted point. The step uses an average of initial and final slopes, improving accuracy over simple Euler.
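The predict-then-average structure can be sketched side by side with plain Euler. An assumed test problem y′ = y, y(0) = 1 is used, whose true value at x = 1 is e:

```python
import math

def euler_step(f, x, y, h):
    return y + h * f(x, y)                        # slope at current point only

def heun_step(f, x, y, h):
    k1 = f(x, y)                                  # slope at the start
    y_pred = y + h * k1                           # Euler predictor
    k2 = f(x + h, y_pred)                         # slope at predicted point
    return y + h * (k1 + k2) / 2                  # corrector: average slope

f = lambda x, y: y                                # y' = y, y(0) = 1
h, n = 0.1, 10
ye = yh = 1.0
for i in range(n):
    ye = euler_step(f, i * h, ye, h)
    yh = heun_step(f, i * h, yh, h)
print(ye, yh)        # Heun lands much closer to e ≈ 2.71828 than Euler
```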
A predictor–corrector method generally means
A Integrate then differentiate
B Predict then refine
C Divide then multiply
D Randomly perturb
Predictor–corrector approaches first predict an approximate next value, then correct it using additional information, often slope recalculation. This increases accuracy while keeping computations structured.
In numerical integration, decreasing h usually
A Reduces truncation error
B Increases exactness always
C Removes rounding fully
D Breaks all formulas
Smaller step size gives better geometric approximation and reduces truncation error. However, too small h increases computations and may increase round-off error, so practical step choice is important.
For tabulated data with unequal spacing, direct use of Simpson’s 1/3 is
A Always perfect
B Required by rule
C Better than all
D Not directly valid
Standard Simpson’s 1/3 assumes equal subinterval width h. With unequal spacing, weights are not correct. One may use alternative formulas, local fitting, or methods designed for irregular data.
Discrete differentiation idea means approximating derivatives using
A Exact antiderivatives
B Laplace transforms
C Differences of values
D Only limits
When a function is known only at discrete x-values, derivatives are approximated using finite differences like forward, backward, or central difference formulas, rather than taking an exact limit.
In operator algebra, the identity E = 1 + Δ implies
A Backward shift relation
B Forward shift relation
C Central shift relation
D No relation
Since Δ = E−1, rearranging gives E = 1+Δ. This expresses shifting forward in terms of the forward difference operator and helps expand function values using difference series.
The inverse relation of (1+Δ) is related to
A Backward shift
B Forward shift
C Mean operator
D Central operator
Because E = 1+Δ, the inverse (1+Δ)⁻¹ equals E⁻¹, the backward shift. In operator terms, E⁻¹ moves values to x−h, connecting forward differences to backward movement.
In Newton forward, truncation is done because higher differences become
A Always zero
B Always negative
C Always unstable
D Small or negligible
Practical interpolation uses only a finite number of difference terms. For smooth data, higher-order differences often decrease in magnitude, so truncating after a few terms gives a good approximation.
Stirling’s formula is mainly suited for interpolation near
A Start of table
B Middle of table
C End of table
D Outside table
Stirling’s central interpolation uses symmetric differences around a central point. It performs best when the required x is near the middle of the dataset, where forward and backward information balance.
Bessel’s formula is especially useful when x is near
A First entry exactly
B Last entry exactly
C Midpoint between entries
D Outside range
Bessel interpolation is a central method tailored for points near the midpoint between two central tabulated values. It often gives improved accuracy when x lies between central nodes.
Newton–Cotes formulas are a family of methods for
A Root finding only
B Eigenvalues only
C Matrix inversion
D Numerical integration
Newton–Cotes rules approximate integrals using polynomial interpolation over equally spaced points, leading to formulas like trapezoidal, Simpson’s 1/3, Simpson’s 3/8, and higher-order rules.
Gauss quadrature differs from Newton–Cotes mainly by
A Using equal spacing
B Choosing optimal nodes
C Using differences only
D Avoiding weights
Gauss quadrature selects integration points (nodes) and weights optimally to maximize exactness for polynomials of high degree. Unlike Newton–Cotes, nodes are not equally spaced.
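The two-point Gauss–Legendre rule illustrates the node-placement idea: with nodes at ±1/√3 and unit weights on [−1, 1], just two evaluations integrate cubics exactly, where Newton–Cotes would need Simpson's three points. A minimal sketch with an assumed cubic integrand:

```python
import math

def gauss2(f, a, b):
    """Two-point Gauss–Legendre: nodes ±1/√3, weights 1 on [-1, 1],
    mapped linearly to [a, b]. Exact for polynomials up to degree 3."""
    mid, half = (a + b) / 2, (b - a) / 2
    t = 1 / math.sqrt(3)
    return half * (f(mid - half * t) + f(mid + half * t))

# ∫₀² x³ dx = 4, recovered exactly from only two samples.
print(gauss2(lambda x: x**3, 0.0, 2.0))   # → 4.0 (up to round-off)
```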
Spline interpolation is mainly used to avoid
A Any computation
B Step size definition
C High-degree oscillations
D Difference tables
Using one high-degree polynomial can cause unwanted oscillations. Splines use low-degree polynomials piecewise with smooth joining, giving stable and smooth interpolation across many data points.
In finite differences, “order” of difference refers to
A Times differenced
B Size of table
C Units of y
D Units of x
First difference is applied once, second difference is difference of first differences, and so on. The order shows how many times the differencing operation has been repeated.
If y values are noisy, very small h in differentiation may cause
A Always exact slope
B Large numerical error
C Smaller rounding only
D Constant derivative
Differentiation amplifies noise. With very small h, subtraction of close values and measurement noise can dominate, making the derivative estimate unstable even if the underlying function is smooth.
The composite Simpson’s 1/3 rule is best applied when the function is
A Highly discontinuous
B Randomly jumping
C Undefined at all
D Smooth on interval
Simpson’s method assumes the function can be well approximated by parabolas over small intervals. Smoothness ensures the fourth derivative stays bounded, giving reliable high accuracy.
In error bounds, trapezoidal rule accuracy improves when |f″(x)| is
A Very large
B Undefined
C Small on interval
D Negative only
Trapezoidal error depends on the second derivative magnitude. If curvature is small across [a,b], the straight-line approximation matches the curve closely, reducing error for a given h.
For Simpson’s 1/3, error decreases quickly when |f⁽⁴⁾(x)| is
A Large at ends
B Small on interval
C Constant negative
D Discontinuous
Simpson’s error term depends on the fourth derivative. If the function is very smooth so the fourth derivative remains small, Simpson’s rule yields highly accurate integral approximations.
Finite-difference solution of PDEs mainly replaces derivatives with
A Difference quotients
B Exact integrals
C Fourier series only
D Random sampling
In PDE discretization, derivatives like ∂u/∂x are replaced by finite difference expressions on a grid. This converts the PDE into algebraic equations solvable numerically.
In Newton forward, choosing origin at x₀ mainly reduces
A Polynomial degree
B Computational work
C Data points count
D Function smoothness
Setting origin at the first point aligns the formula with Δy₀, Δ²y₀, etc. It makes evaluation direct and efficient, especially when repeated calculations are needed near the table start.
The best simple check for “equal spacing” in a table is verifying
A Constant y differences
B Constant ratios
C Constant x differences
D Constant slopes
Equal spacing means xᵢ₊₁−xᵢ is constant for all i. Many formulas in finite differences and Newton forward/backward interpolation assume this condition for correct application.
A typical application of interpolation in practice is
A Solving eigenvectors
B Finding primes
C Computing determinants
D Reading table values
Interpolation is used to estimate values not listed in a table, such as trigonometric, logarithmic, or engineering tabulations. It provides quick approximations without recomputing the entire function.
Runge–Kutta methods are mainly used to improve over Euler by
A Removing all tables
B Higher accuracy steps
C Using only integrals
D Avoiding derivatives
Runge–Kutta methods use multiple slope evaluations within each step to achieve higher-order accuracy. They reduce error significantly compared to Euler for the same step size, improving stability and precision.
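The classical fourth-order scheme (RK4) makes the multiple-slope idea concrete. A minimal sketch on the assumed test problem y′ = y, y(0) = 1, compared against the true value e at x = 1:

```python
import math

def rk4_step(f, x, y, h):
    """Classical fourth-order Runge–Kutta: four slope evaluations per step."""
    k1 = f(x, y)
    k2 = f(x + h / 2, y + h / 2 * k1)
    k3 = f(x + h / 2, y + h / 2 * k2)
    k4 = f(x + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

y, h = 1.0, 0.1
for i in range(10):
    y = rk4_step(lambda x, y: y, i * h, y, h)
print(abs(y - math.e))     # error far below Euler's for the same step size
```

With the same h = 0.1, explicit Euler is off by roughly 0.12 here, while RK4's error is orders of magnitude smaller, reflecting its O(h⁴) global accuracy.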