Chapter 25: Finite Differences, Interpolation and Numerical Integration (Set-4)
For equally spaced x, if a dataset comes from a quadratic function, which differences are constant?
A First differences
B Third differences
C Second differences
D Fourth differences
A quadratic polynomial has degree 2, so Δ²y becomes constant for equally spaced x. This is a standard test to identify quadratic behavior from tabulated values.
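A quick way to verify this numerically is to build a forward difference table. The sketch below (illustrative Python, not part of the original question set) tabulates y = x² at unit spacing and shows the second differences are constant:

```python
# Build a forward difference table and confirm that for quadratic data
# the second differences are constant (and third differences vanish).

def difference_table(y):
    """Return successive forward-difference rows of a sequence y."""
    rows = [list(y)]
    while len(rows[-1]) > 1:
        prev = rows[-1]
        rows.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    return rows

xs = [0, 1, 2, 3, 4]            # equally spaced, h = 1
ys = [x**2 for x in xs]         # quadratic data: 0, 1, 4, 9, 16
table = difference_table(ys)

first_diffs = table[1]   # [1, 3, 5, 7]
second_diffs = table[2]  # [2, 2, 2] -> constant
third_diffs = table[3]   # [0, 0]
```

The same check identifies cubic data by constant third differences, and so on.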
If Ey(x)=y(x+h), then (E−1)²y represents
A Δ²y
B ∇²y
C δ²y
D μ²y
Since Δ = E−1, applying twice gives Δ² = (E−1)². So (E−1)²y means the second forward difference, used to measure curvature in the table.
The relation between forward and backward difference operators is
A Δ = E∇
B ∇ = EΔ
C Δ = E⁻¹∇
D ∇ = E⁻¹Δ
Δy(x−h)=y(x)−y(x−h)=∇y(x). Since E⁻¹ shifts back, ∇ = E⁻¹Δ. This identity links backward and forward differences through shifting.
If ∇yₙ = yₙ − yₙ₋₁, then ∇²yₙ equals
A yₙ₊₂ − 2yₙ₊₁ + yₙ
B yₙ − 2yₙ₋₁ + yₙ₋₂
C yₙ₊₁ − yₙ
D yₙ − yₙ₊₁
Second backward difference is ∇²yₙ = ∇(∇yₙ) = (yₙ−yₙ₋₁) − (yₙ₋₁−yₙ₋₂) = yₙ − 2yₙ₋₁ + yₙ₋₂.
In Newton forward interpolation, the term containing Δ²y₀ has coefficient
A p(p+1)/2!
B p²/2
C p(p−1)/2!
D (p−1)/2
Newton forward series is y ≈ y₀ + pΔy₀ + p(p−1)/2! Δ²y₀ + … The falling factorial p(p−1) matches the forward difference expansion.
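As an illustration (a hedged Python sketch; the function name `newton_forward` is ours), the series with coefficients 1, p, p(p−1)/2!, … reproduces a quadratic exactly from its forward differences:

```python
# Newton forward interpolation for equally spaced data, using the
# falling-factorial coefficients p, p(p-1)/2!, p(p-1)(p-2)/3!, ...
from math import factorial

def newton_forward(xs, ys, x):
    h = xs[1] - xs[0]
    p = (x - xs[0]) / h
    diffs = list(ys)          # will hold successive forward differences
    result = diffs[0]         # y0
    term = 1.0
    for k in range(1, len(ys)):
        diffs = [diffs[i + 1] - diffs[i] for i in range(len(diffs) - 1)]
        term *= (p - (k - 1))                 # falling factorial p(p-1)...
        result += term * diffs[0] / factorial(k)   # term with Δ^k y0
    return result

xs = [0.0, 1.0, 2.0, 3.0]
ys = [x**2 for x in xs]                 # quadratic: interpolation is exact
value = newton_forward(xs, ys, 1.5)     # expect 1.5**2 = 2.25
```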
In Newton backward interpolation, the term containing ∇²yₙ has coefficient
A p(p+1)/2!
B p(p−1)/2!
C p²/2
D (p+1)/2
Newton backward series is y ≈ yₙ + p∇yₙ + p(p+1)/2! ∇²yₙ + …, with p = (x−xₙ)/h. The rising factorial p(p+1) replaces the forward formula’s falling factorial because the expansion steps backward from the last node.
For a forward difference table, Δ²y₀ is computed as
A y₂ − y₀
B y₁ − y₀
C Δy₀ − Δy₁
D Δy₁ − Δy₀
Second forward difference is the difference of first differences: Δ²y₀ = Δ(Δy₀) = Δy₁ − Δy₀. This describes change in slope across the table.
For equally spaced x, a basic approximation for f′(x₀) using forward differences is
A (y₀ − y₁)/h
B (y₂ − y₀)/h
C (y₁ − y₀)/h
D (y₀ + y₁)/h
The simplest forward derivative approximation uses the first forward difference: f′(x₀) ≈ Δy₀/h = (y₁−y₀)/h. It is easy but less accurate than central difference.
A basic approximation for f′(xₙ) using backward differences is
A (yₙ₊₁ − yₙ)/h
B (yₙ − yₙ₋₁)/h
C (yₙ − yₙ₋₂)/h
D (yₙ + yₙ₋₁)/h
Backward derivative at the last node uses ∇yₙ/h = (yₙ−yₙ₋₁)/h. This is convenient near the end where forward values beyond yₙ are not available.
For smooth data, central difference derivative has leading error order
A h²
B h
C h⁴
D 1/h
Central difference f′(x) ≈ [f(x+h)−f(x−h)]/(2h) has truncation error proportional to h² for smooth functions, giving better accuracy than forward/backward which are typically order h.
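The stated error orders can be checked empirically. In this sketch (illustrative Python, f = sin at x = 1), halving h roughly halves the forward-difference error (order h) but quarters the central-difference error (order h²):

```python
# Compare forward and central difference errors for f = sin at x = 1.
import math

f, x = math.sin, 1.0
exact = math.cos(1.0)

def fwd(h): return (f(x + h) - f(x)) / h          # order h
def cen(h): return (f(x + h) - f(x - h)) / (2 * h)  # order h^2

h = 0.1
fwd_ratio = abs(fwd(h) - exact) / abs(fwd(h / 2) - exact)  # ~2
cen_ratio = abs(cen(h) - exact) / abs(cen(h / 2) - exact)  # ~4
```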
The forward difference derivative f′(x) approximation typically has error order
A h²
B h³
C h
D 1/h
Forward difference f′(x) ≈ [f(x+h)−f(x)]/h comes from Taylor expansion and has leading truncation error proportional to h. This is why it is less accurate than central difference.
The trapezoidal rule for ∫ᵃᵇ f(x)dx with one interval is
A (b−a)(f(a)−f(b))/2
B (b−a)(f(a)+4f(m)+f(b))/6
C (b−a)f(m)
D (b−a)(f(a)+f(b))/2
The trapezoidal rule joins the endpoints with a straight line and takes the area of the resulting trapezoid. It is exact for linear functions and is the building block of the composite trapezoidal rule.
Composite trapezoidal rule for n subintervals uses factor
A h
B h/2
C h/3
D 3h
Composite trapezoidal can be written h[(y₀+yₙ)/2 + Σyᵢ] over the interior points, or equivalently (h/2)[y₀ + 2(y₁+…+yₙ₋₁) + yₙ] with outside factor h/2. Endpoints carry half weight, interior points full weight.
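A minimal composite trapezoidal implementation (illustrative Python) makes the weighting explicit:

```python
# Composite trapezoidal rule: half weight at endpoints, full weight inside.
def trapezoid(f, a, b, n):
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))          # endpoint terms, half weight
    for i in range(1, n):
        total += f(a + i * h)            # interior terms, full weight
    return h * total

approx = trapezoid(lambda x: x**2, 0.0, 1.0, 100)   # exact integral is 1/3
```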
If n = 4 subintervals, Simpson’s 1/3 rule uses how many points?
A 4 points
B 3 points
C 5 points
D 6 points
Number of points is n+1. With n=4, points are 5: x₀ to x₄. Simpson’s 1/3 works because n is even, allowing pairing of subintervals.
In composite Simpson’s 1/3 rule with n=6, how many odd-index interior points exist?
A 2 points
B 3 points
C 4 points
D 5 points
For n=6, indices are 0 to 6. Odd interior indices are 1, 3, 5 → three points. These receive weight 4 in the Simpson’s 1/3 weighted sum.
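The weight pattern (4 at odd interior indices, 2 at even interior indices) translates directly into code; this is a sketch, with `simpson13` an illustrative name:

```python
# Composite Simpson's 1/3 rule: n must be even.
def simpson13(f, a, b, n):
    assert n % 2 == 0, "n must be even for Simpson's 1/3"
    h = (b - a) / n
    total = f(a) + f(b)                         # weight 1 at the ends
    for i in range(1, n):
        total += (4 if i % 2 == 1 else 2) * f(a + i * h)
    return h / 3 * total

approx = simpson13(lambda x: x**3, 0.0, 1.0, 4)   # exact for cubics: 1/4
```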
Simpson’s 3/8 rule single-interval formula uses outside factor
A h/3
B h/2
C 8h/3
D 3h/8
Simpson’s 3/8 fits a cubic over three subintervals (four points). The integral approximation is (3h/8)[y₀ + 3y₁ + 3y₂ + y₃], assuming equal spacing h.
For Simpson’s 3/8 composite rule, points with index multiple of 3 (excluding ends) get weight
A 3
B 4
C 2
D 1
In composite 3/8, coefficients follow 1, 3, 3, 2, 3, 3, 2, …, 3, 3, 1. Interior points at indices 3, 6, 9,… get weight 2.
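The composite 3/8 coefficient pattern 1, 3, 3, 2, … can be sketched as follows (hedged Python; n must be a multiple of 3):

```python
# Composite Simpson's 3/8 rule: weight 2 at indices divisible by 3
# (excluding the ends), weight 3 elsewhere in the interior.
def simpson38(f, a, b, n):
    assert n % 3 == 0, "n must be a multiple of 3 for Simpson's 3/8"
    h = (b - a) / n
    total = f(a) + f(b)                          # weight 1 at the ends
    for i in range(1, n):
        total += (2 if i % 3 == 0 else 3) * f(a + i * h)
    return 3 * h / 8 * total

approx = simpson38(lambda x: x**3, 0.0, 1.0, 6)   # exact for cubics: 1/4
```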
The error term of trapezoidal rule involves maximum of
A |f″(x)|
B |f'(x)|
C |f⁽⁴⁾(x)|
D |f(x)|
Trapezoidal error bound is proportional to (b−a)h² times max|f″(x)| on [a,b]. Curvature governs how much the straight-line approximation deviates from the true curve.
The error term of Simpson’s 1/3 rule involves maximum of
A |f″(x)|
B |f⁽⁴⁾(x)|
C |f'(x)|
D |f(x)|
Simpson’s 1/3 error bound depends on the fourth derivative because it integrates a quadratic interpolant. For smooth functions with small fourth derivative, Simpson’s accuracy improves greatly.
For fixed interval length, halving h in trapezoidal rule typically reduces error by factor
A 2
B 8
C 4
D 16
Trapezoidal rule error is order h². If h is halved, h² becomes (h/2)² = h²/4, so the error is roughly quartered for smooth f with bounded f″.
For fixed interval length, halving h in Simpson’s 1/3 rule typically reduces error by factor
A 16
B 4
C 2
D 8
Simpson’s 1/3 rule has error order h⁴. Halving h gives (h/2)⁴ = h⁴/16, so the error typically reduces by about a factor of 16 for smooth functions.
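Both reduction factors can be observed numerically. This sketch (using ∫₀¹ eˣ dx = e − 1 as the reference value) compares errors at n = 8 and n = 16:

```python
# Halving h cuts trapezoidal error ~4x and Simpson's 1/3 error ~16x.
import math

def trapezoid(f, a, b, n):
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

def simpson13(f, a, b, n):
    h = (b - a) / n
    return h / 3 * (f(a) + f(b)
                    + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n)))

exact = math.e - 1                      # integral of e^x over [0, 1]
trap_ratio = (abs(trapezoid(math.exp, 0, 1, 8) - exact)
              / abs(trapezoid(math.exp, 0, 1, 16) - exact))   # ~4
simp_ratio = (abs(simpson13(math.exp, 0, 1, 8) - exact)
              / abs(simpson13(math.exp, 0, 1, 16) - exact))   # ~16
```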
In Euler’s method for y’ = f(x,y), if f is negative, y typically
A Increases per step
B Stays constant
C Becomes undefined
D Decreases per step
Euler update is yₙ₊₁ = yₙ + h f(xₙ,yₙ). With positive h, if f(xₙ,yₙ) is negative, the increment is negative, so y decreases at that step.
In Euler’s method, local truncation error order is
A h
B h³
C h²
D 1/h
Euler’s method is derived from Taylor series truncation after the first derivative term. The neglected second-order term makes one-step error proportional to h², while global error becomes order h.
Euler’s method global error order is
A h²
B h
C h³
D 1/h
Over a fixed interval, the number of steps is about 1/h. Local errors of size h² accumulate over many steps, producing an overall error typically proportional to h.
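A small experiment (illustrative Python, using y′ = −y with exact solution e^(−x)) shows the global error roughly halving when h is halved, consistent with order h:

```python
# Euler's method for y' = -y, y(0) = 1, integrated to x = 1.
import math

def euler(f, x0, y0, h, steps):
    x, y = x0, y0
    for _ in range(steps):
        y += h * f(x, y)    # y_{n+1} = y_n + h f(x_n, y_n)
        x += h
    return y

f = lambda x, y: -y
err_h  = abs(euler(f, 0.0, 1.0, 0.1, 10) - math.exp(-1))
err_h2 = abs(euler(f, 0.0, 1.0, 0.05, 20) - math.exp(-1))
ratio = err_h / err_h2   # ~2 for a first-order method
```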
A basic predictor-corrector idea improves accuracy by
A Recomputing slope
B Ignoring slope
C Increasing interval
D Removing h
Predictor gives an initial estimate, then corrector recomputes slope using the predicted value and averages or refines it. This reduces truncation error compared to using only the initial slope.
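One concrete predictor-corrector of this kind is Heun's method: predict with an Euler step, then correct using the average of the slopes at both ends of the step. A sketch (the name `heun_step` is ours):

```python
# Heun's method: Euler predictor followed by an averaged-slope corrector.
import math

def heun_step(f, x, y, h):
    y_pred = y + h * f(x, y)                         # predictor (Euler)
    return y + h / 2 * (f(x, y) + f(x + h, y_pred))  # corrector (average slope)

f = lambda x, y: -y          # same test problem: y' = -y, y(0) = 1
x, y, h = 0.0, 1.0, 0.1
for _ in range(10):
    y = heun_step(f, x, y, h)
    x += h
heun_err = abs(y - math.exp(-1))   # far smaller than plain Euler's ~0.019
```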
In numerical differentiation, using too large h mainly increases
A Round-off error
B Table size error
C Truncation error
D Memory error
Large h makes the finite difference a poor approximation to the true derivative because Taylor series higher-order terms become significant. This increases truncation error even if rounding is small.
In numerical differentiation, using too small h mainly increases
A Truncation error
B Step error sign
C Curvature itself
D Round-off error
Very small h causes subtraction of nearly equal numbers, leading to loss of significant digits. Floating-point rounding and data noise can dominate, making derivative estimates unstable.
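This trade-off is easy to demonstrate: in the sketch below, a central difference for sin′(1) is accurate at a moderate h but degrades at a tiny h, where cancellation between nearly equal sine values dominates (the exact error values depend on the floating-point platform):

```python
# Truncation vs round-off: central-difference error for d/dx sin(x) at x = 1.
import math

def cen_err(h):
    est = (math.sin(1 + h) - math.sin(1 - h)) / (2 * h)
    return abs(est - math.cos(1))

moderate = cen_err(1e-5)    # truncation-dominated: very small error
tiny = cen_err(1e-13)       # round-off-dominated: noticeably larger error
```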
In Lagrange interpolation, the polynomial passes through all given points because
A Differences are constant
B Basis property holds
C h is small
D n is even
Each basis polynomial Lᵢ(x) is 1 at xᵢ and 0 at other nodes. So P(xⱼ)=yⱼ exactly, ensuring the interpolating polynomial passes through every provided data point.
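The basis property can be verified directly: evaluating each Lᵢ at every node yields an identity-like matrix (illustrative Python):

```python
# Lagrange basis property: L_i(x_j) = 1 if i == j, else 0.
def lagrange_basis(xs, i, x):
    val = 1.0
    for j, xj in enumerate(xs):
        if j != i:
            val *= (x - xj) / (xs[i] - xj)
    return val

xs = [0.0, 1.0, 3.0]   # spacing need not be uniform
basis_at_nodes = [[lagrange_basis(xs, i, xj) for xj in xs]
                  for i in range(len(xs))]
# rows: L_0, L_1, L_2 evaluated at the nodes -> 1 on the diagonal, 0 off it
```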
The Newton divided-difference form is convenient for adding a new point because
A Extends polynomial easily
B Needs no recalculation
C Removes all errors
D Keeps degree fixed
Newton form builds P(x)=a₀ + a₁(x−x₀) + a₂(x−x₀)(x−x₁)+… Adding a new point only requires computing a new coefficient, not rebuilding everything.
In Newton divided differences, the coefficient a₂ equals
A f[x₀,x₁]
B f[x₁,x₂]
C f[x₀,x₁,x₂]
D f[x₂]
In Newton’s divided-difference polynomial, coefficients are the divided differences: a₀=f[x₀], a₁=f[x₀,x₁], a₂=f[x₀,x₁,x₂], etc. Each captures higher-order change.
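A compact divided-difference computation (a hedged sketch using the common in-place scheme) recovers these coefficients for f(x) = x² on unequally spaced nodes:

```python
# Divided-difference coefficients for the Newton form:
# returns [f[x0], f[x0,x1], f[x0,x1,x2], ...].
def divided_coeffs(xs, ys):
    coeffs = list(ys)
    for k in range(1, len(xs)):
        # update in place from the end so lower-order entries survive
        for i in range(len(xs) - 1, k - 1, -1):
            coeffs[i] = (coeffs[i] - coeffs[i - 1]) / (xs[i] - xs[i - k])
    return coeffs

xs, ys = [1.0, 2.0, 4.0], [1.0, 4.0, 16.0]   # f(x) = x^2, unequal spacing
a = divided_coeffs(xs, ys)
# For x^2: a0 = 1, a1 = f[x0,x1] = 3, a2 = f[x0,x1,x2] = 1 (leading coeff.)
```

Appending a fourth node would only add one more coefficient; the first three are unchanged, which is the extendability advantage described above.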
For equally spaced data, Gregory–Newton forward formula is essentially
A Lagrange form
B Romberg form
C Gauss form
D Newton forward form
Gregory–Newton forward is another name for Newton forward interpolation using forward differences and falling factorial terms in p. It is designed for equally spaced nodes near the start.
For equally spaced data, Gregory–Newton backward formula is essentially
A Newton backward form
B Lagrange form
C Romberg form
D Euler form
Gregory–Newton backward is Newton backward interpolation using backward differences and p=(x−xₙ)/h. It is best for points near the end of equally spaced tables.
Central difference δ in operator form relates to E by
A E − 1
B 1 − E⁻¹
C E^(1/2) − E^(−1/2)
D (E+1)/2
Central difference compares values at x±h/2. Since E^(1/2) shifts by +h/2 and E^(−1/2) shifts by −h/2, δ = E^(1/2) − E^(−1/2).
The mean operator μ is mainly used to
A Compute derivatives only
B Average symmetric values
C Compute integrals only
D Shift backward
μ averages values symmetrically around x: μy(x) = [y(x+h/2)+y(x−h/2)]/2. It appears in central interpolation formulas to keep expressions balanced and stable.
A key reason central formulas can be more accurate is
A Uses fewer points
B Avoids step size
C Uses no subtraction
D Error terms cancel
Symmetric expansions around x cause odd-order error terms to cancel, leaving smaller leading error. This is why central difference derivatives are often order h² instead of order h.
In numerical integration, Simpson’s 1/3 can fail badly if function is
A Smooth polynomial
B Constant function
C Highly oscillatory
D Linear function
Simpson’s assumes smooth curvature over each pair of subintervals. If the function oscillates rapidly, a quadratic fit can miss key changes, causing large error unless h is made small enough.
A simple benefit of composite rules over single-interval rules is
A Better accuracy
B Fewer points needed
C No step size needed
D Always exact
Splitting [a,b] into more subintervals lets the approximating shapes follow the function more closely. This reduces truncation error, especially for curved or varying functions.
In Romberg integration table, moving right across a row generally means
A Larger step size
B Higher accuracy estimate
C Fewer computations
D Less extrapolation
Each Romberg column applies Richardson extrapolation to remove leading error terms. As you move right, estimates typically improve in order and accuracy, assuming the integrand is smooth enough.
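A minimal Romberg table (illustrative Python; the extrapolation step R[i][j] = R[i][j−1] + (R[i][j−1] − R[i−1][j−1])/(4ʲ − 1) is the standard Richardson update) shows the rightmost entry is the most refined estimate:

```python
# Romberg integration: column 0 holds trapezoidal estimates with halved h;
# each column to the right removes the leading error term by extrapolation.
import math

def romberg(f, a, b, levels):
    R = [[0.0] * levels for _ in range(levels)]
    h = b - a
    R[0][0] = h * (f(a) + f(b)) / 2
    for i in range(1, levels):
        h /= 2
        # halved-step trapezoid, reusing the previous row's estimate
        R[i][0] = R[i - 1][0] / 2 + h * sum(
            f(a + (2 * k - 1) * h) for k in range(1, 2 ** (i - 1) + 1))
        for j in range(1, i + 1):
            R[i][j] = R[i][j - 1] + (R[i][j - 1] - R[i - 1][j - 1]) / (4 ** j - 1)
    return R

R = romberg(math.exp, 0.0, 1.0, 4)
best = R[3][3]             # rightmost entry: most extrapolated estimate
exact = math.e - 1
```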
In Euler’s method, choosing h too large may cause
A Exact solution always
B Zero iterations
C Better global order
D Instability or big error
Large step size makes Euler’s straight-line stepping inaccurate. For some problems, especially stiff ones, large h can also cause numerical instability, producing wildly wrong values.
If a data table is equally spaced and x is near the center, a central interpolation approach is
A Newton forward only
B Newton backward only
C Stirling or Bessel
D Trapezoidal only
Near the center, central difference formulas use symmetric information and often improve accuracy. Stirling suits near a central node, while Bessel is useful near the midpoint between two central nodes.
When using Lagrange interpolation, selecting the “nearest” points mainly helps to
A Reduce error term
B Increase polynomial degree
C Increase oscillations
D Remove uniqueness
The error term includes factors (x−xᵢ). Choosing points close to the target x keeps these factors small, generally improving accuracy and reducing sensitivity to polynomial oscillations.
For unequal spacing, using forward differences Δ directly is problematic mainly because
A y is not constant
B h is not constant
C x is not real
D f is discontinuous
Forward/backward difference tables assume a constant step size h to form p=(x−x₀)/h and consistent difference levels. With unequal spacing, divided differences are the correct tool.
In numerical differentiation from tabulated data, a practical method is to differentiate
A Trapezoidal formula
B Romberg table
C Interpolating polynomial
D Simpson weights
One way to estimate derivatives is to first build an interpolating polynomial (Newton or Lagrange) from nearby points, then differentiate that polynomial to obtain derivative approximations at desired x.
The key meaning of “discrete differentiation” is estimating derivatives from
A Tabulated values
B Exact formulas
C Continuous limits
D Symbolic algebra
When you only know function values at discrete x points, you approximate derivatives using finite differences. This is common in experiments, tables, and numerical simulations where formulas are unknown.
In Newton forward interpolation, if p is small (close to 0), the series converges faster because
A h becomes zero
B Δ becomes zero
C y becomes constant
D Higher terms shrink
Terms include p(p−1)(p−2)… which are small when p is near 0. That makes higher-order contributions small, so fewer difference terms are needed for a good approximation.
In Newton backward interpolation, if p is near 0, it means x is near
A x₀
B xₙ
C midpoint
D outside range
Backward p is defined as (x−xₙ)/h. If p≈0, then x≈xₙ, meaning the interpolation point is near the last tabulated value, where backward formula is most accurate.
A common numerical sign that polynomial interpolation degree is too high is
A Constant differences
B Exact endpoints
C Large oscillations
D Smaller table
High-degree polynomials can swing sharply between points, especially over wide intervals. If estimates fluctuate unrealistically, it suggests using fewer points, lower degree, or spline interpolation for stability.
In Gauss quadrature, the key improvement comes from choosing nodes as
A Equally spaced points
B Table endpoints only
C Random sample points
D Roots of orthogonal polynomials
Gauss quadrature chooses nodes as roots of orthogonal polynomials (like Legendre for [-1,1]). This selection maximizes exactness degree for a given number of points, improving efficiency.
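As a small example (a sketch, not a general implementation), the two-point Gauss-Legendre rule with nodes ±1/√3 and weights 1 on [−1, 1] integrates cubics exactly despite using only two function evaluations:

```python
# Two-point Gauss-Legendre quadrature, mapped from [-1, 1] to [a, b].
import math

def gauss2(f, a, b):
    mid, half = (a + b) / 2, (b - a) / 2   # affine map of reference nodes
    node = 1 / math.sqrt(3)                # roots of the Legendre P2
    return half * (f(mid - half * node) + f(mid + half * node))

approx = gauss2(lambda x: x**3, 0.0, 1.0)   # exact value is 1/4
```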
In Simpson’s 1/3 rule, if function values are given at x=0,1,2,3,4 (equal spacing), n equals
A 4
B 5
C 3
D 2
Points are 5, so subintervals are n = points−1 = 4. Since n is even, Simpson’s 1/3 can be applied on the whole interval using the standard coefficient pattern.
In finite differences, using smaller h in difference tables generally makes differences approximate derivatives
A Less closely
B More closely
C Not related
D Always exact
Differences like (yᵢ₊₁−yᵢ)/h approximate derivatives as h becomes small, similar to the derivative limit idea. However, very small h can increase round-off and noise effects.