Chapter 25: Finite Differences, Interpolation and Numerical Integration (Set-3)
For equally spaced x with step h, the forward difference at xᵢ is
A yᵢ − yᵢ₋₁
B yᵢ₊₁ − yᵢ₋₁
C yᵢ₊₁ − yᵢ
D yᵢ/h
Forward difference compares the next tabulated value with the current one. With equal spacing, Δyᵢ = yᵢ₊₁ − yᵢ. It is the first building block of forward interpolation.
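The definition Δyᵢ = yᵢ₊₁ − yᵢ is easy to verify on a small table. A minimal Python sketch (the table y = x² with h = 1 is a hypothetical example, not from the text):

```python
# First and second forward differences of a small table (hypothetical: y_i = i^2).
y = [0, 1, 4, 9, 16]
delta = [y[i + 1] - y[i] for i in range(len(y) - 1)]               # Δy_i = y_{i+1} − y_i
delta2 = [delta[i + 1] - delta[i] for i in range(len(delta) - 1)]  # Δ²y_i
print(delta)    # [1, 3, 5, 7]
print(delta2)   # [2, 2, 2] — constant, as expected for a quadratic
```

For a degree-n polynomial sampled at equal spacing, the n-th differences are constant, which is a handy sanity check when building a difference table.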
If Eyᵢ = yᵢ₊₁, then E²yᵢ equals
A yᵢ₊₂
B yᵢ₋₂
C yᵢ₊₁
D yᵢ
The shift operator E moves one step forward each time. Applying it twice shifts by two steps: E²yᵢ = E(yᵢ₊₁) = yᵢ₊₂. This is useful in operator manipulations.
Using operators, the identity (1−E⁻¹)y equals
A Δy
B δy
C μy
D ∇y
Backward difference is ∇y = y(x) − y(x−h). Since E⁻¹y(x)=y(x−h), we get (1−E⁻¹)y = y(x) − y(x−h) = ∇y.
If Δ = E−1, then E can be written as
A 1 − Δ
B Δ − 1
C 1 + Δ
D 1/Δ
Rearranging Δ = E − 1 gives E = 1 + Δ. This shows a shift can be expressed using the difference operator, helping expand shifted values in terms of differences.
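The identity can be checked numerically by expanding E² = (1 + Δ)² = 1 + 2Δ + Δ² against a table. A small Python sketch (the cubic table is a hypothetical example):

```python
# Check E² = (1 + Δ)² = 1 + 2Δ + Δ² on a hypothetical table y = x³.
y = [0, 1, 8, 27]
d1 = [y[i + 1] - y[i] for i in range(3)]    # Δy_i
d2 = [d1[i + 1] - d1[i] for i in range(2)]  # Δ²y_i
lhs = y[2]                      # E²y₀ shifts two steps forward: y₂
rhs = y[0] + 2 * d1[0] + d2[0]  # (1 + 2Δ + Δ²)y₀
print(lhs, rhs)                 # 8 8
```

This is exactly how shifted values get expanded in terms of differences when deriving interpolation formulas.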
If p = (x−x₀)/h and x = x₀ + 2h, then p is
A 1/2
B 2
C −2
D 0
Substitute x = x₀ + 2h into p = (x−x₀)/h. Then p = (2h)/h = 2. This p value decides the coefficients in Newton forward interpolation.
For Newton backward, if x = xₙ − h, then p = (x−xₙ)/h equals
A 1
B 0
C −1
D −2
Here x is one step left of the last point: x = xₙ − h. So p = (x−xₙ)/h = (−h)/h = −1. This fits the backward series coefficients.
Lagrange interpolation through three points gives a polynomial of degree at most
A 2
B 1
C 3
D 0
Three distinct data points determine a unique polynomial of degree ≤2. It could become linear if points lie on a straight line, but the maximum degree possible is 2.
In Lagrange interpolation, basis polynomial L₀(x) is zero at
A Only at x₀
B Nowhere
C All x-values
D All xⱼ, j≠0
L₀(x) is constructed so L₀(x₀)=1 and L₀(xⱼ)=0 for j≠0. This ensures the final sum matches each yᵢ exactly at its node.
The Newton divided-difference polynomial is most convenient when points are
A Equally spaced only
B Symmetric only
C Unequally spaced
D Periodic only
Divided differences generalize finite differences without needing constant spacing. Newton’s divided-difference form builds polynomial terms using (x−x₀), (x−x₁), etc., working well for irregular x.
A second divided difference f[x₀,x₁,x₂] is computed using
A Two first differences
B Two second differences
C One trapezoid area
D One Simpson area
The second divided difference is formed from first divided differences: f[x₀,x₁,x₂] = ( f[x₁,x₂] − f[x₀,x₁] ) / (x₂ − x₀). It measures curvature trend.
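The recursion can be followed step by step in a short Python sketch (the unequally spaced data, sampled from f(x) = x², are a hypothetical example):

```python
# Second divided difference built from two first divided differences
# (hypothetical unequally spaced nodes with f(x) = x²).
x = [0.0, 1.0, 3.0]
f = [0.0, 1.0, 9.0]
f01 = (f[1] - f[0]) / (x[1] - x[0])   # f[x0, x1] = 1.0
f12 = (f[2] - f[1]) / (x[2] - x[1])   # f[x1, x2] = 4.0
f012 = (f12 - f01) / (x[2] - x[0])    # f[x0, x1, x2] = 1.0
print(f012)
```

For data from f(x) = x², the second divided difference equals the leading coefficient 1 regardless of which nodes are chosen.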
If the second derivative is large on [a,b], trapezoidal rule error is generally
A Smaller
B Always zero
C Larger
D Unchanged
Trapezoidal error depends on f″(ξ). Larger curvature means straight-line segments deviate more from the curve, so the area approximation becomes less accurate for the same step size.
If a function is almost linear on [a,b], trapezoidal rule tends to be
A Very poor
B Very accurate
C Not applicable
D Always negative
Trapezoidal rule uses straight lines. If the curve is nearly a straight line, the trapezoids closely match the true area, making the approximation accurate even with moderate step sizes.
Composite trapezoidal rule uses interior points with weight
A 1
B 1/2
C 2
D 4
In composite trapezoidal, endpoints get half weight, while each interior y-value is counted fully. The formula is h[(y₀+yₙ)/2 + Σᵢ₌₁ⁿ⁻¹ yᵢ], so interior weights are 1.
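A minimal Python sketch of the composite rule (the test integrand x² on [0, 1] is a hypothetical example; its exact integral is 1/3):

```python
# Composite trapezoidal rule: h[(y0 + yn)/2 + sum of interior y-values].
def trapezoid(f, a, b, n):
    h = (b - a) / n
    interior = sum(f(a + i * h) for i in range(1, n))  # weight 1 each
    return h * (0.5 * (f(a) + f(b)) + interior)

approx = trapezoid(lambda x: x**2, 0.0, 1.0, 100)  # exact value is 1/3
print(approx)
```

With n = 100 the error is about h²/12 times the second derivative, i.e. around 10⁻⁵ here.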
In composite Simpson’s 1/3 rule, the factor outside the bracket is
A h/2
B h
C 3/h
D h/3
Composite Simpson’s 1/3 integrates parabolic fits over pairs of subintervals. The standard coefficient is (h/3)[y₀+yₙ+4(sum odd)+2(sum even)], derived from integrating quadratics.
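The 1–4–2–…–4–1 weighting translates directly into code. A sketch (the cubic test integrand is a hypothetical example chosen because Simpson's 1/3 is exact for cubics):

```python
# Composite Simpson's 1/3: (h/3)[y0 + 4*(odd y) + 2*(even y) + yn], n even.
def simpson13(f, a, b, n):
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    y = [f(a + i * h) for i in range(n + 1)]
    return (h / 3) * (y[0] + y[n] + 4 * sum(y[1:n:2]) + 2 * sum(y[2:n:2]))

approx = simpson13(lambda x: x**3, 0.0, 1.0, 4)
print(approx)  # ≈ 0.25, exact for a cubic up to round-off
```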
Simpson’s 1/3 rule is usually more accurate than trapezoidal because it uses
A Linear fitting
B Constant fitting
C Parabolic fitting
D Random fitting
Simpson’s rule approximates the function locally by a parabola instead of a straight line. For smooth functions, parabolic approximation matches curvature better, reducing error significantly compared to trapezoids.
If n is odd, composite Simpson’s 1/3 rule is
A Not directly usable
B Always exact
C Always stable
D Preferred method
Composite Simpson’s 1/3 requires an even number of subintervals so they can be grouped in pairs. If n is odd, you must adjust by using another rule on the last interval(s).
In Simpson’s 3/8 rule (single application), number of subintervals used is
A 2
B 4
C 3
D 1
Simpson’s 3/8 uses four equally spaced points, forming three subintervals. It fits a cubic curve through these points and integrates that cubic approximation over the interval.
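A single-application sketch in Python (the integrand t³ on [0, 2] is a hypothetical example; the 3/8 rule is exact for cubics, and the exact integral is 4):

```python
# Single-application Simpson's 3/8: four points, three subintervals,
# formula (3h/8)[y0 + 3*y1 + 3*y2 + y3].
def simpson38(f, a, b):
    h = (b - a) / 3
    y = [f(a + i * h) for i in range(4)]
    return (3 * h / 8) * (y[0] + 3 * y[1] + 3 * y[2] + y[3])

result = simpson38(lambda t: t**3, 0.0, 2.0)
print(result)  # ≈ 4.0, the exact integral of t³ over [0, 2]
```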
In Euler’s method for y’ = f(x,y), the next x-value is
A xₙ − h
B xₙ + h
C xₙ/h
D xₙ²
Euler moves forward by a constant step size h in x. So xₙ₊₁ = xₙ + h. Then y is updated using the slope at (xₙ,yₙ).
For y’ = f(x,y), Euler’s method uses slope taken at
A (xₙ₊₁,yₙ₊₁)
B Midpoint only
C Endpoint only
D (xₙ,yₙ)
Euler is an explicit method: it computes yₙ₊₁ from known values at the current step. The slope f(xₙ,yₙ) is used as a constant slope over the interval [xₙ, xₙ₊₁].
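The update pair xₙ₊₁ = xₙ + h, yₙ₊₁ = yₙ + h f(xₙ, yₙ) is a few lines of Python (the test problem y′ = y, y(0) = 1, with exact answer y(1) = e, is a hypothetical example):

```python
# Euler's method for y' = f(x, y): constant slope f(x_n, y_n) over each step.
def euler(f, x0, y0, h, nsteps):
    x, y = x0, y0
    for _ in range(nsteps):
        y = y + h * f(x, y)   # y_{n+1} = y_n + h f(x_n, y_n)
        x = x + h             # x_{n+1} = x_n + h
    return y

# Hypothetical test problem: y' = y, y(0) = 1, so y(1) = e ≈ 2.71828.
val = euler(lambda x, y: y, 0.0, 1.0, 0.01, 100)
print(val)  # ≈ 2.7048 — Euler undershoots; smaller h would get closer
```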
If h is reduced in Euler’s method, the number of steps over fixed interval
A Decreases
B Stays same
C Increases
D Becomes zero
For a fixed interval length (b−a), the number of steps is n = (b−a)/h. Smaller h means more subintervals, so more iterations are needed, improving accuracy but increasing computation.
A simple “predictor” in Heun’s method is obtained using
A Euler step
B Simpson step
C Romberg step
D Lagrange step
Heun first predicts y* using an Euler step: y* = yₙ + h f(xₙ,yₙ). Then it corrects using an averaged slope, improving accuracy compared to using only the first slope.
The corrected Heun update uses average of slopes at
A Two midpoints
B Two random points
C Start and predicted end
D Endpoints only
Heun computes slopes f(xₙ,yₙ) and f(xₙ₊₁,y*). The corrected value is yₙ₊₁ = yₙ + (h/2)[f(xₙ,yₙ)+f(xₙ₊₁,y*)], which reduces error.
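The predictor–corrector pair is compact in Python (the same hypothetical test problem y′ = y, y(0) = 1 as for Euler):

```python
# Heun's method: Euler predictor, then average the start and end slopes.
def heun(f, x0, y0, h, nsteps):
    x, y = x0, y0
    for _ in range(nsteps):
        s1 = f(x, y)
        y_star = y + h * s1            # predictor y* (plain Euler step)
        s2 = f(x + h, y_star)          # slope at the predicted endpoint
        y = y + (h / 2) * (s1 + s2)    # corrector: averaged slope
        x = x + h
    return y

# Hypothetical test problem: y' = y, y(0) = 1, so y(1) = e ≈ 2.71828.
val = heun(lambda x, y: y, 0.0, 1.0, 0.1, 10)
print(val)  # ≈ 2.714 with only 10 steps
```

Even with a step ten times larger than the Euler example, the averaged slope lands much closer to e.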
A key cause of numerical instability in differentiation is
A Subtracting close values
B Adding close values
C Multiplying large values
D Squaring small values
Finite difference formulas often subtract nearby values, which can lose significant digits (cancellation). This makes results sensitive to rounding and noise, especially when h is very small.
The best “balanced” h choice often tries to trade off
A Speed and memory
B Addition and division
C Truncation and round-off
D Graph and table
Larger h increases truncation error (ignored higher terms), while very small h increases round-off error due to cancellation. Practical numerical work chooses h to minimize total error.
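The trade-off is easy to see empirically. A sketch using a forward-difference estimate of (sin x)′ at x = 1 (the function and h values are hypothetical choices for illustration):

```python
import math

# Forward-difference estimate of (sin x)' at x = 1 for a range of h.
# Moderate h is limited by truncation; extremely small h by cancellation.
x, exact = 1.0, math.cos(1.0)
errs = {}
for h in (1e-1, 1e-4, 1e-8, 1e-12):
    errs[h] = abs((math.sin(x + h) - math.sin(x)) / h - exact)
    print(f"h = {h:.0e}   error = {errs[h]:.2e}")
```

Typically the error falls as h shrinks and then rises again once round-off dominates, so the best h sits somewhere in between.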
If data are equally spaced, Newton forward interpolation polynomial uses differences taken at
A Last entry
B Middle entry
C Any entry
D First entry
The forward formula uses Δy₀, Δ²y₀, etc., based at the first row of the difference table. This keeps computation simple when the interpolation point lies near the beginning.
If x lies near xₙ, Newton backward formula is usually chosen because it uses
A First differences
B Last differences
C Central differences
D Random differences
Newton backward uses ∇yₙ, ∇²yₙ, etc., taken at the end of the table. That minimizes the size of p and usually improves accuracy when interpolating near xₙ.
In Lagrange interpolation, using too many points can cause
A Exact stability
B Zero error always
C Oscillations
D Less computation
High-degree polynomial interpolation may oscillate between points, especially over wide intervals (Runge-type behavior). Often, choosing nearby points or using splines gives more stable estimates.
A practical way to reduce interpolation error is to choose points
A Near target x
B Far from x
C Only endpoints
D Randomly spaced
Interpolation error depends on the product Π(x−xᵢ). Using data points close to the desired x reduces the magnitude of this product and typically gives better local accuracy than distant points.
Central interpolation formulas like Stirling mainly require
A Only forward differences
B Only backward differences
C Symmetric differences
D No differences
Stirling uses both forward and backward differences around a central value to build a balanced approximation. This symmetry improves accuracy when the interpolation point is near the table center.
Bessel’s formula is often used when the interpolation point is near
A Extreme left
B Extreme right
C Far outside
D Half-step center
Bessel’s formula is designed for points near the midpoint between two central entries. It combines central differences in a way that improves accuracy when x is not exactly at a tabulated node.
If Δy₀, Δ²y₀, Δ³y₀ are available, a forward difference approximation to y(x₀+ph) uses
A Only y₀
B Series in p
C Only Δy₀
D Only Δ³y₀
Newton forward expresses y(x) as a series in p with factorial coefficients and higher differences: y ≈ y₀ + pΔy₀ + [p(p−1)/2!]Δ²y₀ + [p(p−1)(p−2)/3!]Δ³y₀ + …
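The series up to third differences can be evaluated directly. A Python sketch (the equally spaced table y = x² at x = 0, 1, 2, 3 with h = 1 is a hypothetical example):

```python
# Newton forward interpolation up to third differences
# (hypothetical table: y = x² at x = 0, 1, 2, 3, so h = 1).
y = [0.0, 1.0, 4.0, 9.0]
d1 = [y[i + 1] - y[i] for i in range(3)]     # Δy
d2 = [d1[i + 1] - d1[i] for i in range(2)]   # Δ²y
d3 = [d2[1] - d2[0]]                         # Δ³y

p = 1.5  # target x = x0 + p*h = 1.5
est = (y[0] + p * d1[0]
       + p * (p - 1) / 2 * d2[0]
       + p * (p - 1) * (p - 2) / 6 * d3[0])
print(est)  # 2.25 = 1.5², exact because the data come from a quadratic
```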
When integrating tabulated values with equal h, Simpson’s 1/3 rule needs
A Even number points
B Two points only
C Odd number points
D Three points only
Simpson’s 1/3 needs an even number of subintervals n, so the number of data points is n+1, which is odd. This ensures the 1–4–2–4–…–1 weighting pattern fits correctly.
Composite Simpson’s 3/8 rule requires the number of subintervals to be
A Multiple of 3
B Even number
C Prime number
D Multiple of 2
Composite 3/8 works in blocks of three subintervals (four points each). Therefore total subintervals must be divisible by 3, otherwise the last part must be handled using another method.
If a function has a bounded fourth derivative and is smooth, Simpson’s rule error generally decreases like
A h order
B h² order
C h⁴ order
D 1/h order
Simpson’s 1/3 rule has truncation error proportional to h⁴ for smooth functions (over a fixed interval). That is why it usually converges faster than trapezoidal, which is typically order h².
Trapezoidal rule error for smooth functions typically decreases like
A h order
B h⁴ order
C 1/h order
D h² order
For a smooth function with bounded second derivative, trapezoidal rule has error proportional to h² over a fixed interval. Halving h typically reduces the error by about a factor of 4.
If f(x) is constant on [a,b], both trapezoidal and Simpson’s give
A Approximate only
B Random error
C Exact integral
D Undefined result
For constant functions, the area is simply the constant value times the interval length. Both rules use weights that sum to (b−a), so when every sampled value is identical the weighted sum reproduces the exact area.
In Romberg integration, the first column usually contains
A Simpson estimates
B Trapezoidal estimates
C Euler estimates
D Lagrange values
Romberg starts with trapezoidal approximations with successively halved step sizes. These values fill the first column; Richardson extrapolation is then applied to generate improved values across the table.
Richardson extrapolation in Romberg mainly removes the leading
A Function values
B Interval endpoints
C Error term power
D Step size itself
If an approximation has error like C h^p + higher terms, Richardson combines results at h and h/2 to cancel the C h^p part. This yields a higher-order accurate estimate.
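A minimal Romberg table in Python, combining both ideas: trapezoid estimates down the first column, Richardson extrapolation across the rows (the quartic test integrand is a hypothetical example):

```python
# Romberg integration (sketch): trapezoid estimates in the first column,
# Richardson extrapolation filling the rest of the table.
def romberg(f, a, b, levels):
    R = [[0.0] * levels for _ in range(levels)]
    h = b - a
    R[0][0] = h * (f(a) + f(b)) / 2
    for i in range(1, levels):
        h /= 2
        # Refined trapezoid: halve the previous estimate, add new midpoints.
        new_pts = sum(f(a + (2 * k - 1) * h) for k in range(1, 2 ** (i - 1) + 1))
        R[i][0] = R[i - 1][0] / 2 + h * new_pts
        for j in range(1, i + 1):
            # Richardson step: cancel the leading h^(2j) error term.
            R[i][j] = R[i][j - 1] + (R[i][j - 1] - R[i - 1][j - 1]) / (4 ** j - 1)
    return R[levels - 1][levels - 1]

val = romberg(lambda x: x**4, 0.0, 1.0, 4)
print(val)  # ≈ 0.2, the exact integral of x⁴ on [0, 1]
```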
A difference table is especially helpful for quickly computing
A Higher differences
B Determinants
C Eigenvectors
D Fourier series
Difference tables systematically compute Δy, Δ²y, Δ³y, etc., row by row. This organized structure reduces mistakes and speeds up interpolation and numerical differentiation computations.
If the tabulated x spacing is not constant, Newton forward/backward with Δ/∇ is
A Always best
B Always exact
C Required method
D Not suitable directly
Newton forward/backward formulas assume equal spacing so that p=(x−x₀)/h works uniformly. With unequal spacing, divided differences or Lagrange interpolation is the correct approach.
In numerical differentiation, using central difference at interior points typically gives
A Worse accuracy
B No meaning
C Better accuracy
D Same as Euler
Central difference uses symmetric points around x, canceling leading error terms from Taylor expansions. For interior points with available data on both sides, it often produces more accurate slopes.
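The accuracy gap shows up even for a single step size. A sketch comparing the two formulas for f = exp at x = 0, where f′(0) = 1 (the function and h are hypothetical choices):

```python
import math

# Forward vs central difference for f = exp at x = 0, where f'(0) = 1.
# Central is O(h²) versus O(h) for forward, given data on both sides.
h = 0.01
forward = (math.exp(h) - 1.0) / h                  # [f(x+h) − f(x)] / h
central = (math.exp(h) - math.exp(-h)) / (2 * h)   # [f(x+h) − f(x−h)] / (2h)
print(abs(forward - 1.0), abs(central - 1.0))
```

Here the forward error is about h/2 while the central error is about h²/6, roughly two orders of magnitude smaller.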
The main use of Runge–Kutta overview in this chapter is to show
A Higher order ODE methods
B New interpolation basis
C New difference operator
D Exact integration trick
Runge–Kutta methods improve ODE solutions by evaluating slopes multiple times per step, achieving higher-order accuracy than Euler. They are widely used because they are accurate and relatively stable.
In Newton–Cotes rules, “closed” formulas mean endpoints
A Excluded as nodes
B Included as nodes
C Randomly chosen
D Always repeated
Closed Newton–Cotes rules include both endpoints a and b among the sample points, like trapezoidal and Simpson’s. Open rules exclude endpoints and use interior points instead.
Adaptive step size methods mainly attempt to
A Remove all tables
B Avoid derivatives
C Control error automatically
D Fix n always
Adaptive methods adjust step size based on estimated error: smaller steps where the function changes quickly, larger steps where it is smooth. This often achieves accuracy efficiently with fewer computations.
Numerical stability in interpolation generally improves when calculations avoid
A Small numbers
B Using h
C Using y-values
D Large cancellations
When two close numbers are subtracted, significant digits can be lost, magnifying rounding errors. Stable interpolation methods aim to reduce such cancellations and avoid ill-conditioned polynomial forms.
A common “safe” rule for interpolation is to avoid extrapolation because error can
A Become zero
B Stay constant
C Grow quickly
D Always decrease
Outside the data interval, the interpolating polynomial may behave unpredictably. The error term contains products of (x−xᵢ), which can become large beyond the known range, causing rapid error growth.
In Euler’s method, local truncation error refers to error made in
A Single step
B Whole interval
C Table creation
D Integration only
Local truncation error measures the error introduced in one step assuming the previous value is exact. Global error measures the total accumulated error after many steps across the interval.
In Euler’s method, global error mainly grows because of
A Exact cancellation
B Constant derivatives
C Step-by-step accumulation
D No iteration
Even small local errors add up over many steps. Since the number of steps is about (b−a)/h, the overall accumulated effect often results in global error of order h for Euler’s method.
For numerical integration, a basic requirement for applying composite rules is knowing
A Exact antiderivative
B Laplace transform
C Infinite series
D Function at nodes
Composite numerical integration rules use sampled function values at specified x-nodes. You only need f(xᵢ) values (or table values) and step size to approximate the integral.
In finite differences, operator algebra is mainly used to
A Change data values
B Remove step size
C Derive formulas quickly
D Avoid computation
Treating Δ, ∇, δ, and E like algebraic operators helps derive and relate formulas systematically. It provides compact expressions linking shifts and differences, supporting interpolation and differentiation derivations.