Chapter 24: Numerical Methods and Approximation (Set-5)

For bisection on [a,b] with tolerance ε, which inequality gives the minimum n ensuring interval length ≤ ε?

A 2ⁿ ≤ (b−a)/ε
B 2ⁿ ≥ (b−a)/ε
C n ≥ (b−a)ε
D n ≤ (b−a)/ε

In bisection, if f(a)f(b)<0 but f is discontinuous on [a,b], what is true?

A Root guaranteed
B Newton works
C Secant guaranteed
D No guarantee

Newton’s method loses quadratic convergence for a root of multiplicity m>1 because:

A f not continuous
B Bracket missing
C f′(α)=0
D Error becomes complex

A standard modified Newton step for a root of known multiplicity m is:

A x− f/(m f′)
B x− m f/f′
C x− f′/f
D x− m f′/f

Newton’s method can diverge when the initial guess is near:

A Flat tangent region
B Exact root
C Continuous interval
D Small residual

If a fixed-point iteration x=g(x) satisfies |g′(x)|>1 near the fixed point, then:

A Converges faster
B Becomes exact
C Always oscillates
D Diverges typically
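The contrast between |g′| < 1 and |g′| > 1 near the fixed point can be seen directly with two toy maps (a sketch; the helper name is illustrative):

```python
def iterate(g, x0, n):
    # Apply the fixed-point map x <- g(x) n times.
    x = x0
    for _ in range(n):
        x = g(x)
    return x

# |g'| = 0.5 near the fixed point x* = 2: iterates contract toward 2.
contract = iterate(lambda x: 0.5 * x + 1.0, 0.0, 30)

# |g'| = 2 near the fixed point x* = 0: any nonzero start blows up.
expand = iterate(lambda x: 2.0 * x, 0.1, 30)
```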

In secant method, the next iterate formula is based on:

A Midpoint halving
B Two-point linear fit
C Tangent slope
D Quadratic spline

Secant method may be numerically unstable when:

A f is differentiable
B Sign change exists
C f(xₙ)≈f(xₙ₋₁)
D Root is simple
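A sketch of the secant update built on the two-point linear fit, with a guard for the unstable case f(xₙ) ≈ f(xₙ₋₁) where the denominator nearly vanishes (tolerances are illustrative choices):

```python
def secant(f, x0, x1, tol=1e-12, max_iter=50):
    # Next iterate from the line through (x0, f(x0)) and (x1, f(x1)).
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        denom = f1 - f0
        if abs(denom) < 1e-15:  # f(x_n) ~ f(x_{n-1}): step is unreliable, stop
            break
        x0, x1 = x1, x1 - f1 * (x1 - x0) / denom
        if abs(x1 - x0) < tol:
            break
    return x1

# Root of x**2 - 2, i.e. sqrt(2).
root = secant(lambda x: x * x - 2.0, 1.0, 2.0)
```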

For a simple root, Newton has quadratic convergence when:

A f′(α)=0
B f″(α)=0
C f is constant
D f′(α)≠0
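A minimal Newton iteration for a simple root, where f′(α) ≠ 0 gives the quadratic convergence the question refers to (tolerances and names are illustrative):

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    # Tangent-line update x <- x - f(x)/f'(x); quadratic near a simple root.
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# sqrt(2) as the positive root of f(x) = x**2 - 2, with f'(x) = 2x.
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
```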

In false position, stagnation is most likely when:

A |f(a)| ≪ |f(b)|
B f′ exists
C interval halves
D g′ < 1

A practical way to reduce false-position stagnation is:

A Use midpoint always
B Remove sign change
C Modify endpoint weight
D Use matrix inverse

In bisection, “monotonic convergence” means the bracket:

A Always expands
B Always shrinks
C Becomes complex
D Loses sign change

In Jacobi method for Ax=b, the iteration uses which matrix part inverted?

A Lower matrix L
B Upper matrix U
C Full matrix A
D Diagonal matrix D

Gauss–Seidel differs by using which during the same iteration?

A Only old values
B Newly computed values
C Only residuals
D Only eigenvalues

A sufficient condition ensuring Jacobi convergence is:

A Strict diagonal dominance
B Zero determinant
C Symmetric only
D Negative trace
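A Jacobi sweep in which only the diagonal is "inverted" (each update divides by aᵢᵢ), run on a strictly diagonally dominant system so convergence is guaranteed (a sketch, not a production solver):

```python
def jacobi(A, b, iters=50):
    # x_i^(k+1) = (b_i - sum_{j != i} a_ij * x_j^(k)) / a_ii
    # All updates use the OLD vector x^(k); only D is inverted.
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

# Strictly diagonally dominant by rows: |4| > |1| and |3| > |1|.
# Exact solution of 4x + y = 6, x + 3y = 7 is x = 1, y = 2.
A = [[4.0, 1.0], [1.0, 3.0]]
b = [6.0, 7.0]
x = jacobi(A, b)
```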

If a matrix is strictly diagonally dominant by rows, then Gauss–Seidel:

A Diverges guaranteed
B Needs derivatives
C Uses bracketing
D Converges guaranteed
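The same sweep becomes Gauss–Seidel by writing each new component back immediately, so later rows in the same pass already use updated values (a sketch under the same 2×2 example):

```python
def gauss_seidel(A, b, iters=25):
    # Same formula as Jacobi, but x[i] is overwritten in place, so
    # rows below i use the NEW value within the same sweep.
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x

# Strictly diagonally dominant by rows, so convergence is guaranteed.
A = [[4.0, 1.0], [1.0, 3.0]]
b = [6.0, 7.0]
x = gauss_seidel(A, b)
```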

In iterative methods, the convergence check using spectral radius requires:

A ρ(B) > 1
B ρ(B) = 1
C ρ(B) < 1
D ρ(B) = 0 only

For Ax=b, the residual r=b−Ax becomes zero when:

A x is initial
B x is exact
C A is diagonal
D b is zero

A small residual does NOT always guarantee a small solution error when:

A System diagonal
B System symmetric
C Root bracketed
D System ill-conditioned
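The ill-conditioned case can be demonstrated with a nearly singular 2×2 system: a candidate far from the true solution still produces a tiny residual (numbers chosen for illustration):

```python
def residual(A, x, b):
    # r = b - A x, computed row by row.
    n = len(b)
    return [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]

# Nearly singular system with exact solution x = [1, 1].
A = [[1.0, 1.0], [1.0, 1.0001]]
b = [2.0, 2.0001]

x_bad = [2.0, 0.0]          # error of order 1 in each component...
r = residual(A, x_bad, b)   # ...yet the residual is only about 1e-4
```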

Condition number mainly measures:

A Root multiplicity
B Interval halving
C Error amplification
D Iteration count

In Newton’s method, if the initial guess is far from root, the tangent step may:

A Always shrink
B Jump away
C Keep bracket
D Become exact

A safe hybrid strategy often used in practice is:

A Jacobi for roots
B Bisection for Ax=b
C LU for f(x)=0
D Newton with bracketing

In bisection, the best error bound after n steps is:

A (b−a)/2ⁿ⁺¹
B (b−a)/2ⁿ
C (b−a)·2ⁿ
D (b−a)/n

If |xₙ₊₁−xₙ| is small but |f(xₙ₊₁)| is not small, it suggests:

A Exact root found
B Quadratic convergence
C Stagnation possible
D Perfect stability

Round-off errors become more serious in long iterations mainly because they:

A Cancel always
B Accumulate over steps
C Remove truncation
D Create continuity

Truncation error is reduced by:

A Fewer iterations
B Less precision
C More rounding
D Smaller step size

In fixed-point iteration, choosing a different g(x) is useful because it can:

A Remove f(x)
B Force bracketing
C Change convergence
D Make exact root

For linear systems, Jacobi may converge but be slow; a common improvement is:

A Bisection method
B Gauss–Seidel
C Simpson rule
D Secant method

SOR method introduces a relaxation parameter ω to:

A Speed convergence
B Increase rounding
C Remove residual
D Ensure discontinuity

If ω=1 in SOR, the method becomes:

A Jacobi
B Bisection
C Newton
D Gauss–Seidel
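An SOR sweep blends the old value and the Gauss–Seidel update with weight ω; setting ω = 1 recovers plain Gauss–Seidel exactly (a sketch on the same example system):

```python
def sor(A, b, omega, iters=25):
    # Relaxed Gauss-Seidel: x_i <- (1 - omega) * x_i + omega * (GS update).
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (1.0 - omega) * x[i] + omega * (b[i] - s) / A[i][i]
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [6.0, 7.0]
x = sor(A, b, omega=1.0)  # omega = 1: identical to Gauss-Seidel
```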

In iterative linear solvers, “splitting” A=M−N is used to form:

A x^(k+1)=Ax^(k)+b
B x^(k+1)=M x^(k)+b
C x^(k+1)=M⁻¹Nx^(k)+M⁻¹b
D x^(k+1)=N⁻¹Mx^(k)

For Jacobi, the splitting typically uses:

A M=L, N=U
B M=D, N=−(L+U)
C M=U, N=L
D M=A, N=0

For Gauss–Seidel, the splitting typically uses:

A M=D, N=−(L+U)
B M=U, N=−(D+L)
C M=L, N=−(D+U)
D M=D+L, N=−U

Order of convergence p is defined using errors eₙ via:

A lim |eₙ₊₁|/|eₙ|ᵖ = C
B lim |eₙ|/|eₙ₊₁| = C
C lim |eₙ₊₁|+|eₙ| = C
D lim eₙ = 0 only

If p=1 and 0 < C < 1, the convergence is classified as:

A Quadratic
B Cubic
C Linear
D Divergent

If the order p>1, the method is usually classified as:

A Sublinear
B Superlinear
C Non-iterative
D Discontinuous

Aitken’s Δ² acceleration is most helpful when the base iteration is:

A Quadratically convergent
B Exact in one step
C Divergent always
D Linearly convergent
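Aitken's Δ² formula extrapolates from three consecutive iterates; applied to a linearly convergent fixed-point sequence it lands much closer to the limit than the last iterate (a sketch using the classic map x = cos x):

```python
import math

def aitken(x0, x1, x2):
    # Delta-squared extrapolation: x0 - (x1 - x0)**2 / (x2 - 2*x1 + x0).
    return x0 - (x1 - x0) ** 2 / (x2 - 2.0 * x1 + x0)

# Three iterates of the linearly convergent map x = cos(x).
x0 = 0.5
x1 = math.cos(x0)
x2 = math.cos(x1)
acc = aitken(x0, x1, x2)  # noticeably closer to the fixed point ~0.739085 than x2
```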

If a method is stable, small rounding errors typically:

A Grow without limit
B Force divergence
C Stay bounded
D Create new roots

In Newton’s method, the “tangent line” update is derived from:

A Second-order Taylor
B First-order Taylor
C Fourier series
D Matrix splitting

As a preview of numerical differentiation: the forward difference f′(x)≈[f(x+h)−f(x)]/h has error order:

A O(h²)
B O(1)
C O(h³)
D O(h)
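The O(h) behavior is easy to observe numerically: halving h roughly halves the error, the signature of first-order accuracy (a sketch using sin, whose derivative is known exactly):

```python
import math

def forward_diff(f, x, h):
    # One-sided difference quotient; truncation error is O(h).
    return (f(x + h) - f(x)) / h

# Errors against the exact derivative cos(1) for h and h/2.
e1 = abs(forward_diff(math.sin, 1.0, 1e-2) - math.cos(1.0))
e2 = abs(forward_diff(math.sin, 1.0, 5e-3) - math.cos(1.0))
# e1 / e2 is close to 2: first-order convergence.
```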

For the composite trapezoidal rule with step size h, the global error order is:

A O(h²)
B O(h)
C O(h³)
D O(1)
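Likewise, the O(h²) global error of the composite trapezoidal rule shows up as a factor-of-4 error drop when h is halved (a sketch using ∫₀^π sin x dx = 2):

```python
import math

def trapezoid(f, a, b, n):
    # Composite trapezoidal rule with n equal subintervals of width h.
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * total

# Halving the step (doubling n) cuts the error roughly 4x: O(h**2).
e1 = abs(trapezoid(math.sin, 0.0, math.pi, 64) - 2.0)
e2 = abs(trapezoid(math.sin, 0.0, math.pi, 128) - 2.0)
```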

In bisection, root isolation is commonly done by:

A Computing derivatives
B Checking sign change
C Solving exactly
D Using eigenvalues

If f(a)=0 in bracketing, then:

A b must be root
B No root exists
C Use secant only
D a is a root

Newton’s method applied to f(x)=x² has trouble near x=0 because:

A f is discontinuous
B f has no root
C f′(0)=0
D f is negative

In Gauss–Seidel, divergence can occur even if:

A Diagonally dominant
B Not diagonally dominant
C Matrix is triangular
D b is nonzero

Iterative methods are preferred over direct methods mainly when:

A Very large sparse
B Very small system
C Exact symbolic needed
D Only one equation

If LU decomposition is used, solving Ax=b is done by:

A One bisection step
B Secant iterations
C Fixed point only
D Two triangular solves
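Given the factors L and U, the "two triangular solves" are forward substitution for Ly = b followed by back substitution for Ux = y (a sketch with hand-picked Doolittle factors, no pivoting):

```python
def forward_sub(L, b):
    # Solve L y = b for lower-triangular L, top row down.
    n = len(b)
    y = [0.0] * n
    for i in range(n):
        y[i] = (b[i] - sum(L[i][j] * y[j] for j in range(i))) / L[i][i]
    return y

def back_sub(U, y):
    # Solve U x = y for upper-triangular U, bottom row up.
    n = len(y)
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x

# Factors of A = [[4, 1], [1, 3]] (check: L @ U reproduces A).
L = [[1.0, 0.0], [0.25, 1.0]]
U = [[4.0, 1.0], [0.0, 2.75]]
b = [6.0, 7.0]
x = back_sub(U, forward_sub(L, b))  # solves A x = b; exact answer [1, 2]
```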

In secant method, if starting guesses are very poor, the method may:

A Converge always
B Keep bracket
C Diverge or wander
D Halve interval

A good stopping rule should balance accuracy and:

A Root multiplicity
B Computation cost
C Interval sign
D Matrix symmetry

In practice, a common combined stopping criterion uses:

A Only step size
B Only iteration count
C Only derivative sign
D Residual and step
