Chapter 24: Numerical Methods and Approximation (Set-3)
A sequence {xₙ} is said to converge to L when xₙ approaches:
A 0 always
B Infinity only
C L as n→∞
D Random values
Convergence means terms of the sequence get closer and closer to a fixed limit L as n increases. In numerical methods, iterates should converge to the true solution.
If eₙ is error at step n, “rate of convergence” mainly describes:
A Memory usage
B Error decrease speed
C Graph shape only
D Matrix size
Rate of convergence explains how quickly the error shrinks as iterations proceed. Faster rates mean fewer iterations are needed to reach a desired tolerance.
If eₙ₊₁ ≈ C·eₙ² near the root, the method shows:
A Quadratic convergence
B Linear convergence
C Divergent behavior
D No convergence
Quadratic convergence means the next error is roughly proportional to the square of the current error. Once close to the solution, accuracy improves very rapidly.
Which method uses an interval and always halves it each step?
A Newton method
B Secant method
C Bisection method
D Jacobi method
Bisection starts with [a,b] where f(a) and f(b) have opposite signs. It repeatedly takes the midpoint and selects the half interval that still brackets the root.
For bisection on a continuous function, which condition must be true first?
A f′(x) exists
B Sign change occurs
C f(x) is periodic
D Root is integer
Bisection needs f(a) and f(b) to have opposite signs to ensure a root exists in the interval. This comes from continuity and the intermediate value theorem.
In bisection, if f(a)f(mid) < 0, the new interval becomes:
A [mid, b]
B [a, b] same
C [0, b]
D [a, mid]
Opposite signs between a and mid mean the root lies in that subinterval. So b is replaced by mid, keeping the root bracketed for the next iteration.
Bisection method error bound after n steps is related to:
A Derivative size
B Function maximum
C Interval length
D Matrix eigenvalues
Each bisection step halves the interval, so after n steps a starting bracket [a,b] has length (b−a)/2ⁿ. Taking the midpoint as the estimate, the error is at most (b−a)/2ⁿ⁺¹, a bound known before any iteration runs.
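The halving loop can be sketched in a few lines of Python; the test function x² − 2 and the tolerance here are illustrative choices, not from the text:

```python
def bisect(f, a, b, tol=1e-8, max_iter=100):
    """Bisection: repeatedly halve a sign-change interval [a, b]."""
    if f(a) * f(b) >= 0:
        raise ValueError("need f(a) and f(b) with opposite signs")
    for _ in range(max_iter):
        mid = (a + b) / 2
        if f(a) * f(mid) < 0:
            b = mid            # root bracketed in [a, mid]
        else:
            a = mid            # root bracketed in [mid, b]
        if b - a < tol:        # interval length bounds the error
            break
    return (a + b) / 2

root = bisect(lambda x: x**2 - 2, 1.0, 2.0)   # approximates sqrt(2)
```

Because the interval halves every step, progress is guaranteed whenever the initial sign change holds.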
False position differs from bisection mainly because it chooses next point using:
A Linear interpolation
B Midpoint only
C Derivative value
D Random selection
False position uses a secant line through the endpoints to estimate the root, instead of taking the midpoint. It still keeps bracketing but can progress faster in some cases.
A known problem in false position where one endpoint stays fixed is called:
A Pivoting
B Stagnation
C Oscillation
D Overflow
In regula falsi, one endpoint may remain unchanged for many iterations. This reduces interval improvement, making convergence slow even though the method still brackets the root.
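A minimal regula falsi sketch under the same illustrative assumptions (function and tolerances chosen for demonstration); note how one endpoint can survive many iterations unchanged, which is exactly the stagnation described above:

```python
def false_position(f, a, b, tol=1e-10, max_iter=200):
    """Regula falsi: next estimate is the x-intercept of the secant
    through (a, f(a)) and (b, f(b)); the bracket is always kept."""
    fa, fb = f(a), f(b)
    if fa * fb >= 0:
        raise ValueError("need a sign change on [a, b]")
    c = a
    for _ in range(max_iter):
        c = b - fb * (b - a) / (fb - fa)   # secant x-intercept
        fc = f(c)
        if abs(fc) < tol:
            break
        if fa * fc < 0:
            b, fb = c, fc                  # root in [a, c]
        else:
            a, fa = c, fc                  # root in [c, b]
    return c

root = false_position(lambda x: x**2 - 2, 1.0, 2.0)
```

On this convex example the endpoint b = 2 never moves, yet the iterates still converge (linearly) to √2.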
Newton’s method update uses which two function values at xₙ?
A f and f″
B f′ and f″
C f and f′
D f only
Newton’s method uses f(xₙ) and f′(xₙ) to form the tangent line. The next iterate is xₙ₊₁ = xₙ − f(xₙ)/f′(xₙ).
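The update formula translates directly into code; this sketch uses an illustrative function, and the guard threshold on the derivative is an assumption, not a standard value:

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton's method: x_{n+1} = x_n - f(x_n) / f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        fp = fprime(x)
        if abs(fp) < 1e-14:    # near-zero derivative: the step would blow up
            raise ZeroDivisionError("f'(x) too close to zero")
        x_next = x - f(x) / fp
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

root = newton(lambda x: x**2 - 2, lambda x: 2 * x, 1.5)
```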
Newton’s method can become unstable when the derivative at xₙ is:
A Very large
B Exactly one
C Negative only
D Near zero
When f′(xₙ) is close to zero, the Newton step f(xₙ)/f′(xₙ) can become huge. This may cause jumping away from the root and divergence.
For a multiple root, standard Newton usually becomes:
A Faster than usual
B Slower than usual
C Always divergent
D Always exact
At multiple roots, Newton’s method often loses quadratic convergence and becomes closer to linear convergence. Modified Newton methods adjust the step to recover faster speed.
Secant method is most accurately described as Newton’s method but:
A Uses bracketing
B Uses midpoint
C Uses no derivative
D Uses matrices
Secant replaces the derivative in Newton’s formula with a slope estimated using two previous points. This avoids derivative computation while still giving relatively fast convergence.
Secant method requires which starting information?
A One starting value
B Two starting values
C Three starting values
D Exact root
Secant needs two initial approximations x₀ and x₁. It uses them to draw a secant line and compute the next estimate using a difference-quotient slope.
Secant method may fail because:
A Denominator becomes zero
B It always brackets
C It uses midpoint
D It needs diagonals
The secant formula divides by f(xₙ) − f(xₙ₋₁). If these function values become equal or very close, the step becomes undefined or unstable.
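A Python sketch (illustrative function and thresholds) that makes the failure mode explicit by guarding the denominator f(xₙ) − f(xₙ₋₁):

```python
def secant(f, x0, x1, tol=1e-12, max_iter=100):
    """Secant method: Newton's step with the derivative replaced
    by the slope (f(x1) - f(x0)) / (x1 - x0)."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        denom = f1 - f0
        if abs(denom) < 1e-15:   # the failure mode: (near-)equal function values
            raise ZeroDivisionError("f(x_n) - f(x_{n-1}) is (near) zero")
        x2 = x1 - f1 * (x1 - x0) / denom
        if abs(x2 - x1) < tol:
            return x2
        x0, f0 = x1, f1
        x1, f1 = x2, f(x2)
    return x1

root = secant(lambda x: x**2 - 2, 1.0, 2.0)
```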
Which method typically has guaranteed monotonic interval shrinking?
A Secant
B Newton
C Bisection
D Fixed point
Bisection halves the bracketing interval each step, so the interval shrinks monotonically. This gives strong reliability, though it may converge slower than open methods.
The order of convergence of the secant method is approximately:
A 1.0
B 1.618
C 2.0
D 3.0
Secant converges superlinearly with order (1+√5)/2 ≈ 1.618, the golden ratio. It improves faster than linear methods but does not reach Newton's quadratic speed.
In iterative linear solvers, “initial guess” refers to:
A Final answer
B Matrix determinant
C Pivot choice
D Starting vector x⁰
Iterative methods start with an initial approximation vector x⁰. Repeated updates refine it toward the true solution, so a better initial guess can reduce iterations.
Jacobi method computes xᵢ^(k+1) using values from:
A Current iteration only
B Exact inverse matrix
C Previous iteration only
D Random perturbation
In Jacobi, all components of the new vector are computed using the old vector values. No newly updated component is used until the next iteration begins.
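The "old vector only" rule is visible in a vectorized sketch (the 2×2 system here is an illustrative diagonally dominant example, not from the text):

```python
import numpy as np

def jacobi(A, b, x0=None, tol=1e-10, max_iter=500):
    """Jacobi: every component of the new vector is computed
    from the previous iterate only."""
    x = np.zeros(len(b)) if x0 is None else np.asarray(x0, dtype=float)
    d = np.diag(A)                     # diagonal entries of A
    R = A - np.diagflat(d)             # off-diagonal part
    for _ in range(max_iter):
        x_new = (b - R @ x) / d        # uses only the OLD vector x
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new
        x = x_new
    return x

A = np.array([[4.0, 1.0], [2.0, 5.0]])   # strictly diagonally dominant
b = np.array([1.0, 1.0])
x = jacobi(A, b)                         # ~ [2/9, 1/9]
```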
Gauss–Seidel improves Jacobi mainly by:
A Using latest updates
B Using midpoint
C Avoiding equations
D Using interpolation
Gauss–Seidel uses newly computed values immediately within the same iteration. This generally reduces errors faster, so it often converges in fewer iterations than Jacobi.
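The contrast with Jacobi is one line: component i reads the already-updated values x[:i] in the same sweep. A sketch on the same illustrative 2×2 system:

```python
import numpy as np

def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=500):
    """Gauss-Seidel: component i uses the NEW values x[:i] updated
    earlier in this sweep, and old values for components j > i."""
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            return x
    return x

A = np.array([[4.0, 1.0], [2.0, 5.0]])   # strictly diagonally dominant
b = np.array([1.0, 1.0])
x = gauss_seidel(A, b)
```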
A commonly used sufficient condition for Jacobi and Gauss–Seidel convergence is:
A Zero trace
B Diagonal dominance
C Orthogonal columns
D Negative diagonal
If A is strictly diagonally dominant, both Jacobi and Gauss–Seidel are guaranteed to converge. The strong diagonal makes the iteration matrix contractive, with spectral radius below 1.
If A is not diagonally dominant, iterative methods may:
A Always converge
B Become exact
C Possibly diverge
D Need no tolerance
Without conditions like diagonal dominance, the iteration matrix may have spectral radius ≥ 1. Then errors may not shrink and the method can oscillate or diverge.
The residual for linear system Ax=b at approximation x is:
A Ax − b
B b − Ax
C A + b
D x − b
Residual r = b − Ax measures how far the current approximation is from satisfying Ax=b. A small residual norm indicates a good approximate solution.
In iterative solvers, a common stopping rule is:
A Residual below tolerance
B Residual increases
C Determinant is zero
D Eigenvalues are negative
Iterations stop when the residual norm or successive difference becomes smaller than tolerance. This indicates the approximate solution is sufficiently accurate for practical use.
Spectral radius of iteration matrix must be less than 1 for:
A Divergence
B Instability only
C Convergence
D Exact solution
Many iterative methods converge only if the spectral radius ρ(B) of the iteration matrix is less than 1. Then repeated multiplication reduces errors rather than amplifying them.
Spectral radius means:
A Minimum eigenvalue
B Sum of eigenvalues
C Trace of matrix
D Maximum |eigenvalue|
Spectral radius is the largest absolute value among eigenvalues. It is important because it controls whether errors shrink in repeated iterations of an iteration matrix.
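The quantity is easy to compute numerically; here it is checked for the Jacobi iteration matrix of an illustrative diagonally dominant system (matrix values are assumptions for the demo):

```python
import numpy as np

def spectral_radius(B):
    """Largest absolute value among the eigenvalues of B."""
    return max(abs(np.linalg.eigvals(B)))

# Jacobi iteration matrix B = D^{-1}(D - A) for an illustrative system
A = np.array([[4.0, 1.0], [2.0, 5.0]])
D = np.diagflat(np.diag(A))
B = np.linalg.inv(D) @ (D - A)
rho = spectral_radius(B)   # sqrt(0.1) ~ 0.316 < 1, so Jacobi converges here
```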
A fixed-point iteration is stable near the solution when:
A |g′(x)| > 1
B |g′(x)| < 1
C g′(x) is complex
D g(x) is constant
If |g′(x)| is less than 1 near the fixed point, the mapping contracts distances. Small errors shrink each step, giving stable convergence in practice.
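A fixed-point loop with a contraction as the illustrative example: g(x) = (x + 2/x)/2 has fixed point √2 and g′(√2) = 0, so errors shrink very fast (this g is an assumed demo choice, not from the text):

```python
def fixed_point(g, x0, tol=1e-10, max_iter=200):
    """Iterate x_{n+1} = g(x_n); stable near a fixed point where |g'| < 1."""
    x = x0
    for _ in range(max_iter):
        x_next = g(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# g(x) = (x + 2/x)/2 fixes sqrt(2); g'(sqrt(2)) = 0, a strong contraction
root = fixed_point(lambda x: (x + 2 / x) / 2, 1.0)
```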
“Error term” in an approximation usually represents:
A Exact value
B Input data
C Difference from truth
D Matrix diagonal
Error term measures how far the approximation is from the exact value. It can be due to truncation, rounding, or method limitations, and guides how accurate results are.
Truncation error mostly appears due to:
A Finite precision
B Sign change
C Diagonal dominance
D Cutting infinite process
Truncation error is caused by stopping an infinite process early, like truncating a series or limiting a step size. The neglected remainder creates the approximation error.
Round-off error mainly comes from:
A Interval halving
B Finite digit storage
C Derivative usage
D Root bracketing
Computers store numbers with limited digits. During arithmetic, rounding occurs, producing tiny errors. In long computations, these may accumulate and affect the final result.
Error propagation means:
A Errors carry through steps
B Errors stay constant
C Errors vanish instantly
D Errors become roots
Small numerical errors can influence later computations. In iterative algorithms, rounding or truncation errors can accumulate or be amplified, affecting stability and accuracy.
Condition number is mainly associated with:
A Root count
B Number of iterations
C Sensitivity of solution
D Graph symmetry
Condition number indicates how much output can change for a small input change. Large condition numbers mean the problem is ill-conditioned and numerical errors can be magnified.
“Iteration function” in Newton’s method can be written as:
A g(x)= (a+b)/2
B g(x)= x − f/f′
C g(x)= f′/f
D g(x)= f(x)
Newton iteration can be written as xₙ₊₁ = g(xₙ) where g(x)= x − f(x)/f′(x). This view helps analyze convergence using g′(x).
For Newton near a simple root, the method is fastest because:
A It has quadratic order
B It brackets root
C It halves interval
D It avoids f(x)
Quadratic order means errors shrink roughly like the square of the previous error. This rapidly increases accuracy once the estimate is close to the simple root.
Which method uses “root isolation” as a first step often?
A Jacobi method
B LU factorization
C Bisection method
D Aitken method
Bisection usually starts by isolating a root in an interval where a sign change occurs. This root isolation ensures the method begins with a valid bracket.
Graph intuition for bisection is mainly about:
A Tangent intersection
B Matrix splitting
C Eigenvalue plotting
D Curve crossing x-axis
A sign change means the curve crosses the x-axis between endpoints. Bisection repeatedly narrows this region, visually shrinking the crossing interval around the root.
In false position, the next x is the x-intercept of:
A Tangent line
B Secant line
C Normal line
D Horizontal line
False position draws a secant through two endpoints of opposite sign. The x-intercept of that secant provides the next approximation while keeping the bracket.
“Stopping criteria” is important mainly to:
A Increase errors
B Avoid functions
C Decide when to stop
D Make roots complex
Stopping criteria prevent unnecessary iterations and ensure required accuracy. Common rules use tolerance on residual, tolerance on successive change, or maximum iteration limits.
In practical computing, why are too-small tolerances risky?
A Reduce accuracy
B Increase iterations
C Remove convergence
D Change the root
Very small tolerances can require many iterations and may expose rounding limitations. After a point, extra iterations may not improve accuracy due to finite precision arithmetic.
“Stability of iteration” is mainly about:
A Error not amplified
B Fast convergence only
C Always exact answer
D Using derivatives
A stable method does not magnify small numerical errors through repeated steps. Stability helps ensure computed results remain reliable even with rounding and truncation effects.
Aitken’s Δ² method is used to:
A Find brackets
B Compute Jacobian
C Accelerate sequences
D Solve integrals
Aitken’s delta-squared technique improves a slowly converging sequence by combining three consecutive terms. The transformed sequence often reaches the limit faster in fewer steps.
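The three-term combination can be sketched directly; the geometric demo sequence below is an illustrative choice on which Aitken's formula is exact:

```python
def aitken(seq):
    """Aitken's delta-squared: combine three consecutive terms
    x0, x1, x2 into x0 - (x1 - x0)**2 / (x2 - 2*x1 + x0)."""
    out = []
    for x0, x1, x2 in zip(seq, seq[1:], seq[2:]):
        denom = x2 - 2 * x1 + x0
        out.append(x2 if denom == 0 else x0 - (x1 - x0) ** 2 / denom)
    return out

# A linearly converging sequence x_n = 1 + 0.5**n (limit 1)
seq = [1 + 0.5 ** n for n in range(8)]
acc = aitken(seq)          # every accelerated term is (essentially) 1.0
```

For a purely geometric error the transform hits the limit in one shot; for real iterations it typically just converges noticeably faster.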
“Relaxation methods” in linear solvers include:
A Bisection method
B Secant method
C Simpson rule
D SOR method
Relaxation methods modify iterative updates using a relaxation factor. SOR is a well-known extension of Gauss–Seidel that can speed convergence for suitable systems.
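An SOR sketch: each Gauss–Seidel value is blended with the old component via the relaxation factor ω (the 2×2 symmetric positive definite system and ω = 1.1 are illustrative assumptions):

```python
import numpy as np

def sor(A, b, omega=1.1, x0=None, tol=1e-10, max_iter=500):
    """SOR: blend each Gauss-Seidel update with the old value via the
    relaxation factor omega (omega = 1 is plain Gauss-Seidel)."""
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            gs = (b[i] - s) / A[i, i]          # Gauss-Seidel value
            x[i] = (1 - omega) * x_old[i] + omega * gs
        if np.linalg.norm(x - x_old, np.inf) < tol:
            return x
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # symmetric positive definite
b = np.array([1.0, 2.0])
x = sor(A, b)      # for SPD A, SOR converges for any omega in (0, 2)
```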
Gauss–Seidel method works best when matrix is:
A Nearly singular
B Diagonally dominant
C Random rectangular
D Zero diagonal
Diagonal dominance helps ensure the iteration matrix shrinks errors each sweep. For many diagonally dominant systems, Gauss–Seidel converges reliably and usually in fewer iterations than Jacobi.
In Jacobi method, “splitting” often means writing A as:
A L+U only
B D−L−U
C D+L+U
D I+A
The matrix is split as A = D − L − U, where D is the diagonal and −L, −U are the strictly lower and upper triangular parts. Jacobi solves with D each sweep, so x^(k+1) = D⁻¹(b + (L+U)x^(k)) uses old values only.
In Gauss–Seidel, the splitting is used so updates depend on:
A New and old values
B Only old values
C Only random values
D Only exact values
Gauss–Seidel uses updated components as soon as they are computed (new values) and still uses old values for components not yet updated. This speeds convergence.
“Computational cost” in iteration is often tied to:
A Color of graph
B Steps per iteration
C Root sign only
D Proof length
Cost depends on operations required each iteration, like multiplications and additions. Total cost also depends on how many iterations are needed to reach tolerance.
“Iteration count estimation” is easiest for which method?
A Newton method
B Secant method
C Bisection method
D Gauss–Seidel
Bisection has a predictable error bound based on interval halving. So the number of iterations needed to achieve a tolerance can be estimated directly from the initial interval length.
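The estimate follows from the halving bound: the smallest n with (b − a)/2ⁿ ≤ tol is n = ⌈log₂((b − a)/tol)⌉. A one-function sketch (interval and tolerance are illustrative):

```python
import math

def bisection_steps(a, b, tol):
    """Smallest n with (b - a) / 2**n <= tol, i.e. n >= log2((b - a)/tol)."""
    return math.ceil(math.log2((b - a) / tol))

n = bisection_steps(1.0, 2.0, 1e-6)   # a unit interval needs about 20 halvings
```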
In root-finding, a “bracketing method” means:
A Uses derivative
B Uses two tangents
C Uses matrix inverse
D Keeps sign-change interval
Bracketing methods always maintain an interval where f(a) and f(b) have opposite signs. This ensures the root remains inside, improving reliability over open methods.
Which pair are both bracketing root methods?
A Newton and secant
B Bisection and false position
C Jacobi and SOR
D Newton and Jacobi
Both bisection and false position require a sign change interval and keep the root bracketed. Newton and secant are open methods and can fail without bracketing.
“Numerical approximation” mainly means:
A Close computed estimate
B Exact symbolic solving
C Random guessing only
D Pure geometry only
Numerical approximation produces a value close to the true answer using computations rather than exact algebra. It is essential when exact solutions are difficult or impossible to obtain.