Chapter 24: Numerical Methods and Approximation (Set-5)
For bisection on [a,b] with tolerance ε, which inequality gives the minimum n ensuring interval length ≤ ε?
A 2ⁿ ≤ (b−a)/ε
B 2ⁿ ≥ (b−a)/ε
C n ≥ (b−a)ε
D n ≤ (b−a)/ε
After n bisections, interval length is (b−a)/2ⁿ. To make it ≤ ε, we need (b−a)/2ⁿ ≤ ε, so 2ⁿ ≥ (b−a)/ε.
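This bound can be computed directly. A minimal sketch (the helper name `min_bisections` is illustrative):

```python
import math

def min_bisections(a, b, eps):
    """Smallest n with (b - a) / 2**n <= eps, i.e. 2**n >= (b - a) / eps."""
    return math.ceil(math.log2((b - a) / eps))

# Example: bracket [0, 1] with tolerance 1e-6 needs n = ceil(log2(1e6)) steps
n = min_bisections(0.0, 1.0, 1e-6)
```

Note that n depends only on the bracket width and the tolerance, not on f itself.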
In bisection, if f(a)f(b)<0 but f is discontinuous on [a,b], what is true?
A Root guaranteed
B Newton works
C Secant guaranteed
D No guarantee
The root guarantee uses the intermediate value theorem, which requires continuity. If f is discontinuous, a sign change may come from a jump, so a root inside is not ensured.
Newton’s method loses quadratic convergence for a root of multiplicity m>1 because:
A f not continuous
B Bracket missing
C f′(α)=0
D Error becomes complex
For multiple roots, derivatives at the root often vanish, weakening the tangent-step correction. Standard Newton then typically becomes only linearly convergent unless modified to handle multiplicity.
A standard modified Newton step for a root of known multiplicity m is:
A x− f/(m f′)
B x− m f/f′
C x− f′/f
D x− m f′/f
If a root has multiplicity m, using xₙ₊₁ = xₙ − m·f(xₙ)/f′(xₙ) often restores faster convergence by compensating for the flattened slope near the root.
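A minimal sketch of this scaled step (the helper `modified_newton` is illustrative; it assumes the multiplicity m is known in advance):

```python
def modified_newton(f, fp, x0, m, tol=1e-12, max_iter=50):
    """Scaled Newton step for a root of known multiplicity m:
    x_{n+1} = x_n - m * f(x_n) / f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        dfx = fp(x)
        if dfx == 0.0:            # flat tangent: no usable step
            break
        step = m * f(x) / dfx
        x -= step
        if abs(step) < tol:
            break
    return x

# f(x) = (x - 1)**2 has a double root (m = 2) at x = 1
root = modified_newton(lambda x: (x - 1)**2, lambda x: 2*(x - 1), x0=3.0, m=2)
```

With m=1 (plain Newton) the same problem converges only linearly; the factor m compensates for the flattened slope.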
Newton’s method can diverge when the initial guess is near:
A Flat tangent region
B Exact root
C Continuous interval
D Small residual
If f′(x) is very small near the starting point, the Newton step becomes huge. This can jump away from the root region, causing divergence or oscillations.
If a fixed-point iteration x=g(x) satisfies |g′(x)|>1 near the fixed point, then:
A Converges faster
B Becomes exact
C Always oscillates
D Diverges typically
When |g′(x)|>1, the mapping expands errors rather than shrinking them. Small deviations grow each step, so the fixed-point iteration usually diverges near that point.
In the secant method, the formula for the next iterate is based on:
A Midpoint halving
B Two-point linear fit
C Tangent slope
D Quadratic spline
Secant method uses the line through (xₙ,f(xₙ)) and (xₙ₋₁,f(xₙ₋₁)). The x-intercept of this line gives xₙ₊₁, avoiding derivative calculation.
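A sketch of this two-point update, with a guard against a near-zero denominator (`secant` is an illustrative name):

```python
def secant(f, x0, x1, tol=1e-12, max_iter=100):
    """Next iterate = x-intercept of the line through
    (x0, f(x0)) and (x1, f(x1)); no derivative needed."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if f1 == f0:              # denominator ~ 0: step would blow up
            break
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        x0, f0 = x1, f1
        x1, f1 = x2, f(x2)
        if abs(x1 - x0) < tol:
            break
    return x1

# Root of x**2 - 2 near sqrt(2)
r = secant(lambda x: x * x - 2.0, 1.0, 2.0)
```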
The secant method may be numerically unstable when:
A f is differentiable
B Sign change exists
C f(xₙ)≈f(xₙ₋₁)
D Root is simple
The secant step divides by f(xₙ)−f(xₙ₋₁). If this difference is tiny, the computed step can blow up, leading to large jumps and instability.
For a simple root, Newton has quadratic convergence when:
A f′(α)=0
B f″(α)=0
C f is constant
D f′(α)≠0
A simple root α satisfies f(α)=0 and f′(α)≠0. Near such roots, Newton’s local error analysis shows quadratic convergence under suitable smoothness.
In false position, stagnation is most likely when:
A |f(a)| ≪ |f(b)|
B f′ exists
C interval halves
D g′ < 1
When function values at endpoints are highly unbalanced, the secant x-intercept tends to fall near the endpoint with smaller |f|, causing the other endpoint to barely move.
A practical way to reduce false-position stagnation is:
A Use midpoint always
B Remove sign change
C Modify endpoint weight
D Use matrix inverse
Modified regula falsi methods reduce stagnation by adjusting or scaling the function value at an endpoint that remains fixed. This forces movement of both endpoints and improves convergence behavior.
In bisection, “monotonic convergence” means the bracket:
A Always expands
B Always shrinks
C Becomes complex
D Loses sign change
Each bisection step selects the half-interval that still brackets the root. Therefore the interval length halves every iteration, making progress steady and predictable.
In the Jacobi method for Ax=b, the iteration inverts which part of A?
A Lower matrix L
B Upper matrix U
C Full matrix A
D Diagonal matrix D
Jacobi splits A into D+L+U and uses D⁻¹ to compute updates. Each variable is solved from its own equation using old values of other variables.
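The update can be sketched in a few lines for a small dense system (a minimal illustration; `jacobi` is a hypothetical helper):

```python
def jacobi(A, b, x0, iters=50):
    """One Jacobi sweep solves each equation for its own variable
    using only values from the previous iterate (the D^-1 update)."""
    n = len(b)
    x = list(x0)
    for _ in range(iters):
        x_new = [0.0] * n
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x_new[i] = (b[i] - s) / A[i][i]
        x = x_new
    return x

# Strictly diagonally dominant system: 4x + y = 9, x + 3y = 7
x = jacobi([[4.0, 1.0], [1.0, 3.0]], [9.0, 7.0], [0.0, 0.0])
```

All entries of `x_new` are built from the old iterate, which is what distinguishes Jacobi from Gauss–Seidel.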
Gauss–Seidel differs from Jacobi by using which values during the same iteration?
A Only old values
B Newly computed values
C Only residuals
D Only eigenvalues
Gauss–Seidel updates variables sequentially and immediately uses the latest updates. This changes the iteration matrix and often makes convergence faster than Jacobi.
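The in-place update is the only change from the Jacobi sweep (a minimal sketch; `gauss_seidel` is an illustrative name):

```python
def gauss_seidel(A, b, x0, iters=25):
    """Like Jacobi, but each updated component is used immediately
    within the same sweep, which usually shrinks errors faster."""
    n = len(b)
    x = list(x0)
    for _ in range(iters):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x

# Same diagonally dominant system: 4x + y = 9, x + 3y = 7
x = gauss_seidel([[4.0, 1.0], [1.0, 3.0]], [9.0, 7.0], [0.0, 0.0])
```

Because `x` is overwritten in place, later equations in a sweep already see the newest values.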
A sufficient condition ensuring Jacobi convergence is:
A Strict diagonal dominance
B Zero determinant
C Symmetric only
D Negative trace
Strict diagonal dominance guarantees that the Jacobi iteration matrix has spectral radius less than 1. Errors then shrink each iteration, giving convergence for Jacobi (and for Gauss–Seidel as well).
If a matrix is strictly diagonally dominant by rows, then Gauss–Seidel:
A Diverges guaranteed
B Needs derivatives
C Uses bracketing
D Converges guaranteed
Strict diagonal dominance by rows is a well-known sufficient condition for Gauss–Seidel convergence. It ensures the iterative update acts like a contraction in many norms.
In iterative methods, the convergence check using spectral radius requires:
A ρ(B) > 1
B ρ(B) = 1
C ρ(B) < 1
D ρ(B) = 0 only
For many stationary iterations x^(k+1)=Bx^(k)+c, error evolves as e^(k+1)=Be^(k). If ρ(B)<1, the error tends to zero, giving convergence.
For Ax=b, the residual r=b−Ax becomes zero when:
A x is initial
B x is exact
C A is diagonal
D b is zero
If x is the exact solution, Ax=b holds exactly, so r=b−Ax=0. Residual norms close to zero indicate the approximation is nearly satisfying the system.
A small residual does NOT always guarantee a small solution error when:
A System diagonal
B System symmetric
C Root bracketed
D System ill-conditioned
In ill-conditioned systems, small changes or errors can produce large solution changes. So even if residual is small, the true solution error may still be significant.
Condition number mainly measures:
A Root multiplicity
B Interval halving
C Error amplification
D Iteration count
Condition number indicates how much relative input errors can be amplified into relative output errors. Larger condition numbers mean the problem is more sensitive to rounding or data errors.
In Newton’s method, if the initial guess is far from the root, the tangent step may:
A Always shrink
B Jump away
C Keep bracket
D Become exact
Newton is an open method. The tangent line approximation can lead to a new estimate far from the root, especially in regions of high curvature or small derivative, causing divergence.
A safe hybrid strategy often used in practice is:
A Jacobi for roots
B Bisection for Ax=b
C LU for f(x)=0
D Newton with bracketing
A common robust approach uses bisection to maintain a bracket and switches to Newton when safe. This combines reliability of bracketing with speed of Newton near the root.
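One way such a safeguard can look (a sketch, not a production routine; `safe_newton` and its bracket-checking logic are illustrative):

```python
def safe_newton(f, fp, a, b, tol=1e-12, max_iter=100):
    """Take a Newton step when it stays inside the bracket [a, b];
    otherwise fall back to a bisection step."""
    fa = f(a)
    x = 0.5 * (a + b)
    for _ in range(max_iter):
        fx = f(x)
        # maintain the bracket: keep a sign change between a and b
        if fa * fx < 0:
            b = x
        else:
            a, fa = x, fx
        dfx = fp(x)
        if dfx != 0.0:
            x_new = x - fx / dfx
            if not (a < x_new < b):      # Newton left the bracket
                x_new = 0.5 * (a + b)
        else:                            # flat tangent: bisect instead
            x_new = 0.5 * (a + b)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Root of x**3 - x - 2 on [1, 2] (sign change: f(1) = -2, f(2) = 4)
r = safe_newton(lambda x: x**3 - x - 2.0, lambda x: 3*x*x - 1.0, 1.0, 2.0)
```

The bracket guarantees progress even when the tangent step misbehaves; near the root, the Newton steps dominate and convergence is fast.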
In bisection, the best error bound after n steps is:
A (b−a)/2ⁿ⁺¹
B (b−a)/2ⁿ
C (b−a)·2ⁿ
D (b−a)/n
After n steps, interval length is (b−a)/2ⁿ. The midpoint approximation error is at most half of that, so maximum error ≤ (b−a)/2ⁿ⁺¹.
If |xₙ₊₁−xₙ| is small but |f(xₙ₊₁)| is not small, it suggests:
A Exact root found
B Quadratic convergence
C Stagnation possible
D Perfect stability
Small change in iterates means the method is not moving, but a large residual means it is not near a root. This can happen due to poor scaling, rounding limits, or stagnation.
Round-off errors become more serious in long iterations mainly because they:
A Cancel always
B Accumulate over steps
C Remove truncation
D Create continuity
Each operation may introduce a tiny rounding error. Over many steps, these can accumulate or be amplified, especially in unstable methods or ill-conditioned problems.
Truncation error is reduced by:
A Fewer iterations
B Less precision
C More rounding
D Smaller step size
Truncation error often comes from approximations like series truncation or finite differences. Using smaller step sizes or higher-order formulas typically reduces truncation error.
In fixed-point iteration, choosing a different g(x) is useful because it can:
A Remove f(x)
B Force bracketing
C Change convergence
D Make exact root
Different rearrangements x=g(x) can produce different derivatives g′ near the fixed point. Since convergence depends strongly on |g′|, choosing g wisely can improve or ruin convergence.
For linear systems, Jacobi may converge but be slow; a common improvement is:
A Bisection method
B Gauss–Seidel
C Simpson rule
D Secant method
Gauss–Seidel uses newly updated values immediately, often reducing the spectral radius of the iteration matrix. This usually speeds up convergence compared to Jacobi for suitable matrices.
SOR method introduces a relaxation parameter ω to:
A Speed convergence
B Increase rounding
C Remove residual
D Ensure discontinuity
SOR modifies the Gauss–Seidel update by mixing old and new values with factor ω. With a good ω (often between 1 and 2), convergence can be significantly faster.
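The mixing step can be written as a one-line change to the Gauss–Seidel update (a minimal sketch; `sor` is an illustrative name):

```python
def sor(A, b, x0, omega, iters=50):
    """Gauss-Seidel value blended with the old value:
    x_i <- (1 - omega) * x_i_old + omega * x_i_GS."""
    n = len(b)
    x = list(x0)
    for _ in range(iters):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x_gs = (b[i] - s) / A[i][i]
            x[i] = (1.0 - omega) * x[i] + omega * x_gs
    return x

# Slight over-relaxation on 4x + y = 9, x + 3y = 7
x = sor([[4.0, 1.0], [1.0, 3.0]], [9.0, 7.0], [0.0, 0.0], omega=1.1)
```

Setting `omega=1.0` makes the blend a no-op and recovers plain Gauss–Seidel.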
If ω=1 in SOR, the method becomes:
A Jacobi
B Bisection
C Newton
D Gauss–Seidel
SOR generalizes Gauss–Seidel. When ω=1, the relaxation has no effect, and the update rule reduces exactly to the standard Gauss–Seidel iteration.
In iterative linear solvers, “splitting” A=M−N is used to form:
A x^(k+1)=Ax^(k)+b
B x^(k+1)=M x^(k)+b
C x^(k+1)=M⁻¹Nx^(k)+M⁻¹b
D x^(k+1)=N⁻¹Mx^(k)
Stationary iterations come from A=M−N so that Mx=Nx+b. Multiplying by M⁻¹ yields x^(k+1)=M⁻¹N x^(k)+M⁻¹b.
For Jacobi, the splitting typically uses:
A M=L, N=U
B M=D, N=−(L+U)
C M=U, N=L
D M=A, N=0
With A=D+L+U, Jacobi rearranges to Dx=−(L+U)x+b. Thus M=D and N=−(L+U), giving the iteration x^(k+1)=D⁻¹(b−(L+U)x^(k)).
For Gauss–Seidel, the splitting typically uses:
A M=D, N=−(L+U)
B M=U, N=−(D+L)
C M=L, N=−(D+U)
D M=D+L, N=−U
Gauss–Seidel uses (D+L)x=−Ux+b, so M=D+L and N=−U. Using M⁻¹ allows immediate use of updated components during the iteration.
Order of convergence p is defined using errors eₙ via:
A lim |eₙ₊₁|/|eₙ|ᵖ = C
B lim |eₙ|/|eₙ₊₁| = C
C lim |eₙ₊₁|+|eₙ| = C
D lim eₙ = 0 only
If the limit exists and equals a nonzero constant C, then the method has order p. This formula captures how the next error depends on a power of the current error.
If p=1 and 0 < C < 1, the convergence is classified as:
A Quadratic
B Cubic
C Linear
D Divergent
Order p=1 means eₙ₊₁ ≈ C·eₙ for small errors. With 0 < C < 1, the error shrinks by roughly the same factor C at every step, which is linear convergence.
If the order p>1, the method is usually classified as:
A Sublinear
B Superlinear
C Non-iterative
D Discontinuous
When p>1, the method converges faster than linear. Newton (p=2) is quadratic, and secant (p≈1.618) is superlinear but not quadratic.
Aitken’s Δ² acceleration is most helpful when the base iteration is:
A Quadratically convergent
B Exact in one step
C Divergent always
D Linearly convergent
Aitken’s method targets slowly converging sequences, especially linear convergence, by extrapolating using three successive iterates. It can significantly speed up convergence when the sequence behaves regularly.
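The extrapolation formula x′ = x₀ − (x₁−x₀)²/(x₂−2x₁+x₀) can be applied to successive triples (a small sketch; `aitken` is an illustrative name):

```python
def aitken(seq):
    """Aitken's delta-squared extrapolation over successive triples:
    x' = x0 - (x1 - x0)**2 / (x2 - 2*x1 + x0)."""
    out = []
    for x0, x1, x2 in zip(seq, seq[1:], seq[2:]):
        denom = x2 - 2.0 * x1 + x0
        if denom == 0.0:          # avoid division by zero on flat triples
            break
        out.append(x0 - (x1 - x0)**2 / denom)
    return out

# Linearly convergent toy sequence x_n = 1 + 0.5**n (limit 1)
seq = [1.0 + 0.5**n for n in range(8)]
acc = aitken(seq)
```

For an exactly geometric error, as here, a single Aitken step lands on the limit; real sequences behave less perfectly but still accelerate.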
If a method is stable, small rounding errors typically:
A Grow without limit
B Force divergence
C Stay bounded
D Create new roots
Stability means small perturbations do not get amplified dramatically across iterations. Rounding errors still exist, but a stable method prevents them from dominating the computed solution.
In Newton’s method, the “tangent line” update is derived from:
A Second-order Taylor
B First-order Taylor
C Fourier series
D Matrix splitting
Newton uses f(x)≈f(xₙ)+f′(xₙ)(x−xₙ). Setting this approximation to zero and solving for x gives xₙ₊₁ = xₙ − f(xₙ)/f′(xₙ).
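The derivation above translates directly into code (a minimal sketch; `newton` is an illustrative name):

```python
def newton(f, fp, x0, tol=1e-12, max_iter=50):
    """Solve f(x) = 0 by zeroing the first-order Taylor model:
    x_{n+1} = x_n - f(x_n) / f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        dfx = fp(x)
        if dfx == 0.0:            # flat tangent: no usable step
            break
        step = f(x) / dfx
        x -= step
        if abs(step) < tol:
            break
    return x

# sqrt(2) as the positive root of x**2 - 2
r = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0)
```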
In numerical differentiation preview, the forward difference f′(x)≈[f(x+h)−f(x)]/h has error order:
A O(h²)
B O(1)
C O(h³)
D O(h)
The forward difference comes from truncating the Taylor expansion after the first derivative term. The neglected second derivative term introduces an error proportional to h, so it is first-order accurate.
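The O(h) behavior can be observed numerically: halving h should roughly halve the error (a small sketch using f(x) = eˣ, whose derivative at 0 is 1):

```python
import math

def forward_diff(f, x, h):
    """Forward difference: first-order accurate, error ~ (h/2) * f''(x)."""
    return (f(x + h) - f(x)) / h

# Halving h roughly halves the error for exp at x = 0
e1 = abs(forward_diff(math.exp, 0.0, 1e-3) - 1.0)
e2 = abs(forward_diff(math.exp, 0.0, 5e-4) - 1.0)
```

The observed error ratio is close to 2, the signature of first-order accuracy.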
For the composite trapezoidal rule with step size h, the global error order is:
A O(h²)
B O(h)
C O(h³)
D O(1)
For a smooth function, the composite trapezoidal rule has global error proportional to h². Halving step size typically reduces the overall error by about a factor of four.
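The factor-of-four behavior can be checked on a simple integral (a small sketch; `trapezoid` is an illustrative name):

```python
def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n subintervals (h = (b - a) / n)."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s

# Integral of x**2 on [0, 1] is 1/3; halving h cuts the error ~4x
e1 = abs(trapezoid(lambda x: x * x, 0.0, 1.0, 64) - 1.0 / 3.0)
e2 = abs(trapezoid(lambda x: x * x, 0.0, 1.0, 128) - 1.0 / 3.0)
```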
In bisection, root isolation is commonly done by:
A Computing derivatives
B Checking sign change
C Solving exactly
D Using eigenvalues
Root isolation means finding an interval where the function changes sign. For continuous functions, this guarantees at least one root inside and provides a safe starting bracket.
If f(a)=0 in bracketing, then:
A b must be root
B No root exists
C Use secant only
D a is a root
If f(a)=0, then x=a satisfies f(x)=0 exactly, so it is already a root. No iteration is needed unless searching for other roots.
Newton’s method applied to f(x)=x² has trouble near x=0 because:
A f is discontinuous
B f has no root
C f′(0)=0
D f is negative
For f(x)=x², the derivative is f′(x)=2x, which becomes zero at the root. Near zero, Newton steps can behave poorly, and convergence becomes slow compared to simple-root cases.
In Gauss–Seidel, divergence can occur when the matrix is:
A Diagonally dominant
B Not diagonally dominant
C Matrix is triangular
D b is nonzero
Without diagonal dominance or other suitable properties, the iteration matrix can have spectral radius ≥1, causing divergence. Diagonal dominance is a strong sufficient condition that helps avoid this.
Iterative methods are preferred over direct methods mainly when:
A Very large sparse
B Very small system
C Exact symbolic needed
D Only one equation
For very large sparse systems, direct methods can be expensive in memory and time due to fill-in. Iterative methods exploit sparsity and can reach acceptable accuracy efficiently.
If LU decomposition is used, solving Ax=b is done by:
A One bisection step
B Secant iterations
C Fixed point only
D Two triangular solves
With A=LU, first solve Ly=b by forward substitution, then solve Ux=y by backward substitution. This is efficient and avoids computing A⁻¹ explicitly.
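The two triangular solves can be sketched for a small system (a minimal illustration assuming L and U are already computed; `lu_solve` is a hypothetical helper):

```python
def lu_solve(L, U, b):
    """Solve Ax = b with A = LU: forward substitution for Ly = b,
    then backward substitution for Ux = y."""
    n = len(b)
    y = [0.0] * n
    for i in range(n):                      # forward: L is lower triangular
        y[i] = (b[i] - sum(L[i][j] * y[j] for j in range(i))) / L[i][i]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):          # backward: U is upper triangular
        x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x

# A = [[2, 1], [4, 5]] factors as L = [[1, 0], [2, 1]], U = [[2, 1], [0, 3]]
x = lu_solve([[1.0, 0.0], [2.0, 1.0]], [[2.0, 1.0], [0.0, 3.0]], [3.0, 9.0])
```

Each triangular solve costs O(n²), far cheaper than refactoring A, which is why LU pays off when solving with many right-hand sides.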
In the secant method, if the starting guesses are very poor, the method may:
A Converge always
B Keep bracket
C Diverge or wander
D Halve interval
Secant is an open method without bracketing guarantees. Bad starting values can lead to steps that move away from the root, oscillate, or approach an unintended root.
A good stopping rule should balance accuracy and:
A Root multiplicity
B Computation cost
C Interval sign
D Matrix symmetry
Very strict stopping rules increase iterations and may hit precision limits, while loose rules reduce accuracy. A good rule achieves required accuracy with reasonable time and stable computation.
In practice, a common combined stopping criterion uses:
A Only step size
B Only iteration count
C Only derivative sign
D Residual and step
Many robust implementations check both |f(xₙ)| (residual) and |xₙ₊₁−xₙ| (step change). This avoids stopping too early when iterates stagnate but residual remains large.
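A combined check might look like this (a sketch; the helper `converged` and its tolerances are illustrative):

```python
def converged(f, x_new, x_old, f_tol=1e-10, x_tol=1e-10):
    """Accept x_new only if BOTH the residual and the step are small;
    a small step with a large residual signals stagnation, not a root."""
    return abs(f(x_new)) < f_tol and abs(x_new - x_old) < x_tol

def f(x):
    return x * x - 2.0

near_root = converged(f, 2**0.5, 2**0.5 - 1e-12)   # small residual AND step
stagnated = converged(f, 0.5, 0.5 - 1e-12)         # small step, big residual
```

The second call returns False even though the iterates barely moved, which is exactly the stagnation case the combined criterion is designed to catch.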