Chapter 25: Finite Differences, Interpolation and Numerical Integration (Set-1)
In a forward difference table, which operator represents the first forward difference?
A Δ operator
B ∇ operator
C δ operator
D D operator
Forward difference measures change from one tabulated value to the next in the forward direction. For equally spaced x-values with step size h, it is defined as Δy₀ = y₁ − y₀.
Which operator is used for backward difference?
A Δ operator
B E operator
C ∇ operator
D I operator
Backward difference measures change in the backward direction. It is defined as ∇y₀ = y₀ − y₋₁. It is useful when interpolation is near the end of a table.
The shift operator E is defined by
A Ey = y(x−h)
B Ey = y(x)/h
C Ey = y′(x)
D Ey = y(x+h)
The shift operator moves the argument forward by one step. So Ey(x) means the function value at x+h. This operator helps express relations among Δ, ∇, and δ.
Which relation is correct for forward difference and shift operator?
A Δ = E − 1
B Δ = 1 − E
C Δ = E + 1
D Δ = 1/E
Since Ey = y(x+h), the forward difference is Δy = y(x+h) − y(x) = (E − 1)y. This compact relation is widely used in operator algebra.
For equally spaced data, step size h means
A Constant y spacing
B Constant x spacing
C Constant slope
D Constant error
Equally spaced data means consecutive x-values differ by the same amount h. Many formulas like Newton forward and Simpson’s 1/3 rule assume a uniform step size.
The “difference table” is mainly used to
A Solve determinants
B Factor polynomials
C Compute higher differences
D Find eigenvalues
A difference table organizes values of y and successive differences (Δy, Δ²y, etc.). It makes interpolation and numerical differentiation easier by providing needed difference terms clearly.
If Δy is constant for a dataset, the underlying function is likely
A Linear polynomial
B Quadratic polynomial
C Cubic polynomial
D Exponential function
For a linear function, first differences are constant when x is equally spaced. For a quadratic, second differences are constant; for a cubic, third differences are constant.
If Δ²y is constant, the function is most consistent with
A Linear polynomial
B Log function
C Random data
D Quadratic polynomial
A quadratic function produces constant second forward differences for equally spaced x-values. This property is often used to check the degree trend of tabulated data.
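The difference table and the constant-difference checks above can be sketched in a few lines of Python; the helper name `difference_table` is illustrative, not from the text:

```python
def difference_table(ys):
    """Return successive forward-difference columns: [y, Δy, Δ²y, ...]."""
    table = [list(ys)]
    while len(table[-1]) > 1:
        prev = table[-1]
        table.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    return table

# y = x² at x = 0, 1, 2, 3: second differences are constant (2),
# consistent with a quadratic on equally spaced x-values.
print(difference_table([0, 1, 4, 9]))
# [[0, 1, 4, 9], [1, 3, 5], [2, 2], [0]]
```

Each new column is simply the difference of adjacent entries in the previous column, which is exactly how a hand-written difference table is built.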
Central difference operator δ is commonly defined as
A δy = y(x+h) − y(x)
B δy = y(x) − y(x−h)
C δy = y(x+h/2) − y(x−h/2)
D δy = y(x)/y(x+h)
Central difference measures change around the point symmetrically. It often gives better accuracy than forward or backward differences for derivatives, especially when data is centered around x.
The operator identity involving E and ∇ is
A ∇ = 1 − E⁻¹
B ∇ = E − 1
C ∇ = 1 − E
D ∇ = E⁻¹ + 1
Since E⁻¹ shifts backward, E⁻¹y(x)=y(x−h). Thus ∇y = y(x) − y(x−h) = (1 − E⁻¹)y. This is standard in finite differences.
Newton forward interpolation is preferred when the required x is
A Near table end
B Near table start
C Far outside table
D Randomly selected
Newton forward uses forward differences taken from the beginning of the table and works best when the interpolation point lies near the first entries, reducing truncation and rounding effects.
Newton backward interpolation is preferred when the required x is
A Near table start
B At midpoint only
C Near table end
D Outside interval
Newton backward uses backward differences based near the last tabulated point. It is convenient and usually more accurate when x is close to the end of the given data.
In Newton forward interpolation, the variable p is defined as
A (x − x₀)/h
B (x₀ − x)/h
C (y − y₀)/h
D (x + x₀)/h
Here x₀ is the first x-value in the table and h is the step size. The quantity p scales how far x lies from x₀ in units of h, simplifying the formula terms.
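A minimal sketch of Newton forward interpolation using the parameter p = (x − x₀)/h; the function name `newton_forward` is illustrative:

```python
def newton_forward(xs, ys, x):
    """Newton forward interpolation for equally spaced xs (a sketch)."""
    h = xs[1] - xs[0]
    p = (x - xs[0]) / h
    diffs = list(ys)
    result = diffs[0]      # start from y0
    term = 1.0
    for k in range(1, len(ys)):
        # reduce to the next higher difference column; diffs[0] is Δᵏy₀
        diffs = [diffs[i + 1] - diffs[i] for i in range(len(diffs) - 1)]
        term *= (p - (k - 1)) / k   # p(p-1)...(p-k+1)/k!
        result += term * diffs[0]
    return result

# y = x² at x = 0, 1, 2, 3; interpolating near the table start at x = 1.5.
print(newton_forward([0, 1, 2, 3], [0, 1, 4, 9], 1.5))  # 2.25
```

Because the data comes from a quadratic, the third and higher differences vanish and the formula reproduces x² exactly.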
In Newton backward interpolation, p is commonly taken as
A (x − x₀)/h
B (xₙ − x)/h
C (x + xₙ)/h
D (x − xₙ)/h
Newton backward uses the last tabulated point xₙ as the origin. The parameter p measures the offset from xₙ in step-size units, matching backward-difference terms.
Lagrange interpolation is especially useful for
A Only equally spaced x
B Only periodic data
C Unequally spaced x
D Only integer x
Lagrange’s formula does not require equal spacing of x-values. It directly builds the interpolating polynomial using basis polynomials from given data points.
The Lagrange interpolating polynomial through n+1 points has degree at most
A n
B n+1
C 2n
D n−1
With n+1 distinct data points, the interpolating polynomial is unique and its degree is at most n. It may be lower if the data lies on a lower-degree curve.
A key property of Lagrange basis polynomial Lᵢ(x) is
A Lᵢ(xⱼ)=0 if i=j
B Lᵢ(xⱼ)=1 if i=j
C All Lᵢ are equal
D Lᵢ has no roots
Lagrange basis polynomials are constructed so that Lᵢ(xᵢ)=1 and Lᵢ(xⱼ)=0 for j≠i. This makes the sum match each data point exactly.
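The basis-polynomial property can be seen directly in a short sketch (the function name `lagrange` is illustrative):

```python
def lagrange(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial at x (a sketch)."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        Li = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                Li *= (x - xj) / (xi - xj)  # Li(xi)=1 and Li(xj)=0 for j≠i
        total += yi * Li
    return total

# Unequally spaced points taken from y = x²; exact recovery at x = 2.
print(lagrange([0, 1, 4], [0, 1, 16], 2))  # ≈ 4.0
```

Note the x-values 0, 1, 4 are unequally spaced; no step size h is needed anywhere in the formula.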
Interpolation means estimating a value
A Outside given range
B At x=0 only
C Within given range
D At integer points
Interpolation estimates function values inside the interval covered by data points. Estimating outside the interval is extrapolation, which can be less reliable and more error-prone.
Extrapolation is generally
A Always exact
B Always stable
C Required by formula
D More risky
Extrapolation uses the same polynomial or model beyond known data. Errors can grow quickly because the function behavior outside the data range may differ from the fitted trend.
Divided differences are mainly associated with
A Newton general interpolation
B Simpson’s rule only
C Trapezoidal rule only
D Euler’s method only
Newton’s divided difference form works for unequal spacing and builds the polynomial incrementally. Divided differences generalize finite differences when the spacing between x-values is not constant.
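A compact sketch of Newton's divided-difference form (function names are illustrative), computing the coefficients in place and evaluating with nested multiplication:

```python
def divided_diff_coeffs(xs, ys):
    """Newton divided-difference coefficients for possibly unequal spacing."""
    coeffs = list(ys)
    n = len(xs)
    for k in range(1, n):
        # work from the bottom up so lower-order entries stay intact
        for i in range(n - 1, k - 1, -1):
            coeffs[i] = (coeffs[i] - coeffs[i - 1]) / (xs[i] - xs[i - k])
    return coeffs  # [f[x0], f[x0,x1], f[x0,x1,x2], ...]

def newton_eval(xs, coeffs, x):
    """Evaluate the Newton form with Horner-like nesting."""
    result = coeffs[-1]
    for i in range(len(coeffs) - 2, -1, -1):
        result = result * (x - xs[i]) + coeffs[i]
    return result

# y = x² sampled at unequal spacing; evaluating at x = 3 gives 9.
c = divided_diff_coeffs([0, 1, 4], [0, 1, 16])
print(newton_eval([0, 1, 4], c, 3))  # 9.0
```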
The simplest numerical derivative using forward difference is
A [f(x)−f(x−h)]/h
B [f(x+h)−f(x−h)]/h
C [f(x+h)−f(x)]/h
D [f(x)−f(x+h)]/2h
Forward difference approximates the slope using the next point. It is simple but usually less accurate than central difference because its truncation error is larger for smooth functions.
The backward difference derivative approximation is
A [f(x)−f(x−h)]/h
B [f(x+h)−f(x)]/h
C [f(x+h)−f(x−h)]/2h
D [f(x−h)−f(x)]/2h
Backward difference uses the previous point to estimate slope at x. It is useful near the end of a table where forward values may not be available.
Central difference for first derivative is commonly written as
A [f(x+h)−f(x)]/h
B [f(x+h)−f(x−h)]/2h
C [f(x)−f(x−h)]/h
D [f(x+2h)−f(x)]/h
Central difference uses symmetric points around x, typically giving better accuracy than forward/backward methods for smooth functions, because leading error terms cancel out.
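The three derivative approximations above can be compared numerically in a short sketch (function names are illustrative):

```python
def forward_diff(f, x, h):
    return (f(x + h) - f(x)) / h            # truncation error O(h)

def backward_diff(f, x, h):
    return (f(x) - f(x - h)) / h            # truncation error O(h)

def central_diff(f, x, h):
    return (f(x + h) - f(x - h)) / (2 * h)  # truncation error O(h²)

f = lambda x: x**3   # f'(2) = 12 exactly
h = 0.1
print(forward_diff(f, 2, h), backward_diff(f, 2, h), central_diff(f, 2, h))
```

With h = 0.1 the forward and backward estimates are off by about 0.6, while the central estimate is off by only about 0.01, illustrating the cancellation of leading error terms.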
Truncation error is mainly due to
A Wrong table reading
B Using calculator
C Changing units
D Ignoring higher terms
Numerical formulas come from series expansions. When we drop higher-order terms, a truncation error remains. Smaller step size generally reduces this, but too small h can raise round-off issues.
Round-off error increases mainly when
A h becomes very small
B h becomes very large
C function is constant
D interval is zero
Very small step size can cause subtraction of nearly equal numbers and amplify floating-point rounding effects. Good numerical work balances truncation error and round-off error.
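This truncation/round-off balance can be observed in a small sketch (names are illustrative): a moderate h gives a tiny central-difference error, while an extremely small h usually makes the error worse, not better:

```python
import math

def central(f, x, h):
    return (f(x + h) - f(x - h)) / (2 * h)

exact = math.cos(1.0)  # derivative of sin at x = 1
err_moderate = abs(central(math.sin, 1.0, 1e-3) - exact)  # truncation dominates
err_tiny = abs(central(math.sin, 1.0, 1e-13) - exact)     # round-off usually dominates
print(err_moderate, err_tiny)
```

With h = 10⁻³ the error is around 10⁻⁷; with h = 10⁻¹³ the subtraction of nearly equal sine values typically amplifies floating-point rounding and the error is usually far larger.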
Numerical integration methods are mainly used to
A Solve linear equations
B Factor polynomials
C Approximate definite integrals
D Find exact roots
When an integral is difficult to evaluate exactly or only tabulated values are available, numerical integration estimates the area under a curve using rules like trapezoidal and Simpson’s.
The trapezoidal rule approximates the curve by
A Straight line segments
B Parabolic arcs
C Cubic splines
D Exponential arcs
Trapezoidal rule joins consecutive points by straight lines, forming trapezoids. The total area is the sum of trapezoid areas, giving a simple and widely used approximation.
Composite trapezoidal rule is used when the interval is
A Not divided at all
B Exactly one point
C Only symmetric
D Divided into many parts
Composite trapezoidal rule applies trapezoidal rule on multiple subintervals and adds results. It improves accuracy, especially when the function changes noticeably over the interval.
For composite trapezoidal rule, the step size h equals
A (b+a)/n
B (a−b)/n
C (b−a)/n
D (b−a)/(n−1)
If [a,b] is split into n equal subintervals, each has width h=(b−a)/n. Then x₀=a and xₙ=b, with n+1 points used in the formula.
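The composite trapezoidal rule with h = (b − a)/n can be sketched as follows (the function name `trapezoidal` is illustrative):

```python
def trapezoidal(f, a, b, n):
    """Composite trapezoidal rule with n equal subintervals (a sketch)."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))     # endpoints get weight 1/2
    for i in range(1, n):
        total += f(a + i * h)       # interior points get weight 1
    return h * total

# ∫₀¹ x² dx = 1/3; with n = 100 the estimate is close but not exact.
print(trapezoidal(lambda x: x**2, 0, 1, 100))  # ≈ 0.33335
```

The small overshoot of about 1.7 × 10⁻⁵ matches the second-derivative error term: for f = x², f'' = 2 is constant and the error is (b − a)h²f''/12.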
The trapezoidal rule error term involves the
A Second derivative
B First derivative
C Fifth derivative
D No derivative
Trapezoidal rule error depends on the curvature of the function, measured by the second derivative. If the function is nearly linear, the second derivative is small and the rule is accurate.
Simpson’s 1/3 rule approximates the curve by
A Straight lines
B Parabolic arcs
C Triangles only
D Step function
Simpson’s 1/3 rule fits a parabola through groups of points and integrates that parabola exactly. This usually gives better accuracy than trapezoidal for smooth functions.
Simpson’s 1/3 rule requires number of subintervals n to be
A Even number
B Prime number
C Odd number
D Zero
Simpson’s 1/3 rule works on pairs of subintervals at a time, so n must be even. Then the points count is n+1, and coefficients follow the 1–4–2–4–…–1 pattern.
Simpson’s rule error term involves the
A Second derivative
B First derivative
C Fourth derivative
D No derivative
Simpson’s 1/3 rule is more accurate because its leading error term depends on the fourth derivative. For smooth functions with small fourth derivative, Simpson’s gives very good results.
In Simpson’s 1/3 rule, the coefficient of interior odd-index y values is
A 2
B 1
C 3
D 4
The composite Simpson’s 1/3 formula uses weights 1, 4, 2, 4, 2, …, 4, 1. Odd-index interior points get weight 4, reflecting parabolic fitting structure.
In Simpson’s 1/3 rule, the coefficient of interior even-index y values is
A 2
B 4
C 1
D 0
Even-index interior points (excluding endpoints) receive weight 2 in composite Simpson’s 1/3 rule. This alternating pattern is essential for correct application and accuracy.
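The 1, 4, 2, 4, …, 4, 1 weight pattern can be sketched directly (the function name `simpson13` is illustrative):

```python
def simpson13(f, a, b, n):
    """Composite Simpson's 1/3 rule; n must be even (a sketch)."""
    if n % 2 != 0:
        raise ValueError("n must be even for Simpson's 1/3 rule")
    h = (b - a) / n
    total = f(a) + f(b)                              # endpoints: weight 1
    for i in range(1, n):
        total += (4 if i % 2 == 1 else 2) * f(a + i * h)  # 4 odd, 2 even
    return h * total / 3

# Simpson's 1/3 integrates cubics exactly: ∫₀¹ x³ dx = 0.25.
print(simpson13(lambda x: x**3, 0, 1, 4))  # 0.25 (exact for cubics)
```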
Simpson’s 3/8 rule commonly requires n to be
A Even number
B Prime number
C Multiple of 3
D Multiple of 5
Simpson’s 3/8 rule works over blocks of three subintervals, so the number of subintervals must be a multiple of 3. It is another Newton–Cotes integration formula.
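A matching sketch for the 3/8 rule, whose composite weights are 1, 3, 3, 2, 3, 3, 2, …, 3, 3, 1 (the function name `simpson38` is illustrative):

```python
def simpson38(f, a, b, n):
    """Composite Simpson's 3/8 rule; n must be a multiple of 3 (a sketch)."""
    if n % 3 != 0:
        raise ValueError("n must be a multiple of 3 for Simpson's 3/8 rule")
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        # weight 2 where a block of three subintervals ends, 3 elsewhere
        total += (2 if i % 3 == 0 else 3) * f(a + i * h)
    return 3 * h * total / 8

# Like the 1/3 rule, the 3/8 rule is exact for cubics: ∫₀¹ x³ dx = 0.25.
print(simpson38(lambda x: x**3, 0, 1, 3))  # ≈ 0.25
```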
If only equally spaced table values are given, a good method for integration is
A Composite rules
B Exact antiderivative
C Laplace transform
D Matrix inversion
With tabulated equally spaced values, composite trapezoidal or composite Simpson’s rules are practical. They use the spacing h and the listed y-values directly without needing an explicit formula.
Euler’s method is used to approximate solutions of
A Definite integral
B Initial value ODE
C Algebraic equation
D Partial fraction
Euler’s method numerically solves y′=f(x,y) with a given starting value y(x₀)=y₀. It steps forward using slope estimates, producing an approximate solution curve.
The Euler update formula is
A yₙ₊₁ = yₙ − h f(xₙ,yₙ)
B yₙ₊₁ = f(xₙ,yₙ)/h
C yₙ₊₁ = yₙ + f(xₙ,yₙ)
D yₙ₊₁ = yₙ + h f(xₙ,yₙ)
Euler’s method uses the slope at the current point to move to the next point by a step h. It is simple and fast but can be inaccurate if h is too large.
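The update yₙ₊₁ = yₙ + h f(xₙ, yₙ) in a minimal sketch (the function name `euler` is illustrative):

```python
def euler(f, x0, y0, h, n):
    """Euler's method for y' = f(x, y), y(x0) = y0, taking n steps (a sketch)."""
    x, y = x0, y0
    for _ in range(n):
        y = y + h * f(x, y)   # step along the slope at the current point
        x = x + h
    return y

# y' = y, y(0) = 1, so y(1) = e ≈ 2.71828; Euler underestimates here.
print(euler(lambda x, y: y, 0, 1, 0.1, 10))  # ≈ 2.5937
```

For this problem each step multiplies y by (1 + h), so ten steps of h = 0.1 give (1.1)¹⁰ ≈ 2.5937, visibly below e.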
In Euler’s method, local truncation error per step is of order
A h²
B h
C h³
D 1/h
The one-step error (local truncation error) of Euler’s method is proportional to h², while the accumulated (global) error over many steps is typically proportional to h.
Global error of Euler’s method is typically of order
A h²
B h³
C h
D 1/h
Although each step has error about h², errors accumulate across roughly 1/h steps over a fixed interval, leading to global error of order h for Euler’s method.
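The order-h global error can be checked numerically: halving h should roughly halve the final error. A self-contained sketch for y' = y on [0, 1]:

```python
import math

def euler_final(h, steps):
    """Euler's method for y' = y, y(0) = 1, integrated to x = 1."""
    y = 1.0
    for _ in range(steps):
        y += h * y
    return y

e1 = abs(math.e - euler_final(0.1, 10))    # error with h = 0.1
e2 = abs(math.e - euler_final(0.05, 20))   # error with h = 0.05
print(e1 / e2)  # close to 2, consistent with global error O(h)
```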
A smaller step size h in Euler’s method usually makes the solution
A More accurate
B Always unstable
C Always exact
D Less computed
Decreasing h reduces truncation error and typically improves accuracy. However, too small h increases computation and may increase round-off effects, so a practical balance is needed.
Heun’s method is best described as
A Exact ODE solver
B Trapezoidal integration
C Lagrange interpolation
D Improved Euler method
Heun's method improves Euler with a predictor-corrector idea: it predicts with an Euler step, then averages the slopes at the start and at the predicted end of the step. This usually increases accuracy for the same step size.
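The predictor-corrector step can be sketched as follows (the function name `heun` is illustrative):

```python
def heun(f, x0, y0, h, n):
    """Heun's (improved Euler) method: average start and end slopes (a sketch)."""
    x, y = x0, y0
    for _ in range(n):
        k1 = f(x, y)                 # slope at the start (Euler predictor)
        k2 = f(x + h, y + h * k1)    # slope at the predicted endpoint
        y = y + h * (k1 + k2) / 2    # corrector: average of the two slopes
        x = x + h
    return y

# y' = y, y(0) = 1, so y(1) = e ≈ 2.71828; Heun lands much closer than
# plain Euler (≈ 2.5937) at the same step size h = 0.1.
print(heun(lambda x, y: y, 0, 1, 0.1, 10))  # ≈ 2.7141
```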
A “difference equation” is mainly a relation involving
A Continuous integrals
B Discrete values
C Complex residues
D Matrix rank
Difference equations relate values at discrete points, like yₙ₊₁ and yₙ. They are discrete analogs of differential equations and appear naturally in numerical methods and sequences.
Higher order differences like Δ³y represent
A Differences of differences
B Derivatives exactly
C Integrals exactly
D Random noise only
Δ²y is the difference of Δy values, and Δ³y is the difference of Δ²y values. Higher differences help represent polynomial trends and are used in interpolation formulas.
In interpolation, the interpolating polynomial is unique when
A y-values are distinct
B h is zero
C x-values are distinct
D n is negative
For n+1 distinct x-values, there is exactly one polynomial of degree at most n that passes through all points. This uniqueness supports both Lagrange and Newton interpolation methods.
A common practical warning in interpolation is to avoid
A Using very high degree
B Using any data points
C Using step size
D Using difference table
Very high-degree polynomials can oscillate and become unstable (the Runge phenomenon), especially with many points or uneven spacing. Often, using fewer points near the target x gives a better estimate.
Newton forward formula mainly uses
A Backward differences
B Divided differences only
C Fourier coefficients
D Forward differences
Newton forward interpolation is built from Δy₀, Δ²y₀, etc., taken at the start of the table. It is computationally efficient when data is equally spaced and x is near x₀.
Newton backward formula mainly uses
A Forward differences
B Central differences only
C Backward differences
D Laplace values
Newton backward interpolation uses ∇yₙ, ∇²yₙ, etc., based near the last point. It is convenient for equally spaced data when the required x lies near xₙ.
Romberg integration is best described as
A Lagrange interpolation use
B Richardson extrapolation use
C Euler method variant
D Newton forward variant
Romberg integration improves trapezoidal approximations by applying Richardson extrapolation repeatedly. It builds a table of refined estimates, often achieving high accuracy for smooth integrands.
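A sketch of the Romberg table (the function name `romberg` is illustrative): each row halves h and reuses the previous trapezoidal estimate, and each column applies one more round of Richardson extrapolation.

```python
def romberg(f, a, b, levels):
    """Romberg integration: trapezoid estimates refined by Richardson
    extrapolation (a sketch). Returns the most-refined entry."""
    R = [[0.0] * levels for _ in range(levels)]
    h = b - a
    R[0][0] = h * (f(a) + f(b)) / 2
    for i in range(1, levels):
        h /= 2
        # trapezoid with doubled subintervals, reusing the previous row
        # and adding only the function values at the new midpoints
        R[i][0] = R[i - 1][0] / 2 + h * sum(
            f(a + (2 * k - 1) * h) for k in range(1, 2 ** (i - 1) + 1))
        for j in range(1, i + 1):
            # Richardson extrapolation across columns
            R[i][j] = R[i][j - 1] + (R[i][j - 1] - R[i - 1][j - 1]) / (4 ** j - 1)
    return R[levels - 1][levels - 1]

# ∫₀¹ x⁴ dx = 0.2; four levels already give essentially the exact value.
print(romberg(lambda x: x**4, 0, 1, 4))  # ≈ 0.2
```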