Chapter 24: Numerical Methods and Approximation (Set-4)
When comparing two methods, which one is said to converge faster near the root if its error drops more quickly?
A Lower order
B Same order
C Higher order
D No order
Higher order convergence means the error decreases faster as iterations proceed, especially near the solution. For example, quadratic convergence usually outperforms linear convergence in speed.
If a method has linear convergence, the error typically behaves like:
A eₙ₊₁ ≈ C·eₙ
B eₙ₊₁ ≈ C·eₙ²
C eₙ₊₁ ≈ C·√eₙ
D eₙ₊₁ ≈ C/eₙ
Linear convergence means each new error is roughly a constant fraction of the previous error. It steadily improves but is slower than quadratic or higher-order methods.
A method with convergence order p>1 is generally called:
A Sublinear
B Nonconvergent
C Superlinear
D Exact method
If the order p is greater than 1, the method improves faster than linear. Secant is superlinear (p≈1.618), while Newton is quadratic (p=2) near a simple root.
Which is a basic reason for stopping an iteration when solving f(x)=0?
A f(xₙ) grows
B |f(xₙ)| small
C xₙ becomes negative
D interval doubles
When |f(xₙ)| becomes smaller than a chosen tolerance, xₙ nearly satisfies f(x)=0. This residual-based stopping rule is common and easy to compute.
In bisection, the number of iterations depends mainly on:
A Initial interval
B Derivative size
C Polynomial degree
D Matrix symmetry
Bisection halves the interval each step, so required iterations depend on starting interval length and desired tolerance. Longer initial intervals need more halvings to reach the same accuracy.
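The required number of halvings follows directly from the rule (b−a)/2ⁿ ≤ tol; a minimal Python sketch (the function name is my own, for illustration):

```python
import math

def bisection_steps(a, b, tol):
    """Smallest n with (b - a) / 2**n <= tol, i.e. n >= log2((b - a) / tol)."""
    return math.ceil(math.log2((b - a) / tol))

# A unit interval narrowed to tolerance 1e-6 needs 20 halvings, since 2**20 > 10**6.
print(bisection_steps(0.0, 1.0, 1e-6))  # → 20
```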
Bisection gives a guaranteed root only when the function is:
A Discontinuous
B Complex-valued
C Constant everywhere
D Continuous on [a,b]
Continuity is required for the intermediate value theorem. If f is continuous and changes sign on [a,b], then at least one root exists inside the interval.
The intermediate value theorem is used in bisection to justify:
A Tangent crossing
B Matrix splitting
C Sign-change root
D Eigenvalue bound
The theorem states that a continuous function takes every value between f(a) and f(b). If f(a) and f(b) have opposite signs, zero lies between them.
In bisection, after n iterations, the interval length becomes:
A (b−a)/2ⁿ
B (b−a)/n
C (b−a)·n
D (b−a)²
Each bisection step halves the interval. After n steps, the interval shrinks by a factor 2ⁿ, giving a simple predictable bound on how tightly the root is located.
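The halving rule can be seen in a minimal bisection sketch (assuming f is continuous and changes sign on [a, b]; this is an illustration, not a production solver):

```python
def bisect(f, a, b, tol=1e-10, max_iter=100):
    """Keep the half-interval where f changes sign until the bracket is tiny."""
    if f(a) * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        m = (a + b) / 2            # midpoint splits the bracket in half
        if b - a < tol or f(m) == 0:
            return m
        if f(a) * f(m) < 0:        # sign change in the left half
            b = m
        else:                      # sign change in the right half
            a = m
    return (a + b) / 2

root = bisect(lambda x: x * x - 2, 1.0, 2.0)   # approximates sqrt(2)
```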
Which method can suffer from “stagnation” even while bracketing the root?
A Newton method
B False position
C Secant method
D Fixed point
In false position, one endpoint may remain nearly fixed while the other moves. This endpoint stagnation slows improvement, making convergence sometimes disappointingly slow.
False position selects the next approximation using:
A Midpoint rule
B Quadratic fit
C Linear interpolation
D Derivative step
The method draws a straight line between two bracketing points and takes its x-intercept. This uses linear interpolation to estimate where the function crosses zero.
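A sketch of this linear-interpolation step (names and the sample equation are my own; endpoint stagnation is visible in how one of a, b may barely move):

```python
def false_position(f, a, b, tol=1e-10, max_iter=200):
    """Regula falsi: next estimate is the x-intercept of the straight line
    through (a, f(a)) and (b, f(b)); the sign-change bracket is preserved."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    c = a
    for _ in range(max_iter):
        c = b - fb * (b - a) / (fb - fa)   # linear interpolation
        fc = f(c)
        if abs(fc) < tol:                  # residual-based stopping rule
            break
        if fa * fc < 0:
            b, fb = c, fc
        else:
            a, fa = c, fc
    return c

root = false_position(lambda x: x * x - 2, 1.0, 2.0)
```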
Newton’s method requires which extra condition to compute xₙ₊₁ reliably?
A f′(xₙ) not zero
B f(xₙ) always positive
C root bracketed
D diagonal dominance
Newton’s update divides by f′(xₙ). If the derivative is zero or nearly zero, the step may become huge or undefined, causing instability or divergence.
Newton’s method is often fast near a simple root because it has:
A Linear order
B Zero order
C Negative order
D Quadratic order
Quadratic convergence means errors shrink roughly like the square of the previous error. This can quickly increase correct digits once the iterate is close to the simple root.
A poor initial guess in Newton’s method can cause:
A Guaranteed convergence
B Interval halving
C Divergence or cycling
D Exact answer
Newton is an open method and does not keep a bracket. If the starting guess is far from the root, tangent steps can jump away, oscillate, or diverge.
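Both points above — the division by f′(xₙ) and the sensitivity to the starting guess — show up in a minimal Newton–Raphson sketch (illustrative, with a simple step-size stopping rule):

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson: x_{n+1} = x_n - f(x_n) / f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        d = df(x)
        if d == 0:
            raise ZeroDivisionError("derivative vanished; tangent step undefined")
        step = f(x) / d
        x -= step
        if abs(step) < tol:          # iterates have stopped changing
            return x
    raise RuntimeError("no convergence; try a starting guess closer to the root")

root = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.5)   # sqrt(2)
```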
Secant method replaces the derivative by using:
A Midpoint slope
B Two-point slope
C Exact tangent
D Second derivative
Secant approximates f′ using the slope between two recent points. This avoids derivative calculations but still uses slope information to move toward the root.
After its two starting points, how many new function evaluations does the secant method typically need per step?
A One new evaluation
B Zero evaluations
C Three evaluations
D Ten evaluations
After starting with two points, each new secant step typically needs one new function value f(xₙ₊₁). The previous values are reused, so cost per step is low.
Secant method can fail when:
A f is continuous
B root is real
C f(xₙ)=f(xₙ₋₁)
D tolerance is used
The secant formula divides by f(xₙ)−f(xₙ₋₁). If this difference becomes zero or extremely small, the computed step becomes undefined or numerically unstable.
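A secant sketch making the three points above concrete: the two-point slope replaces f′, one new f-value is needed per step, and a zero denominator is the failure mode (names are my own):

```python
def secant(f, x0, x1, tol=1e-12, max_iter=100):
    """Secant: replace f' by the slope (f(x1) - f(x0)) / (x1 - x0)."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        denom = f1 - f0
        if denom == 0:
            raise ZeroDivisionError("f(x_n) == f(x_{n-1}); secant step undefined")
        x2 = x1 - f1 * (x1 - x0) / denom
        if abs(x2 - x1) < tol:
            return x2
        x0, f0 = x1, f1
        x1, f1 = x2, f(x2)       # the single new function evaluation per step
    return x1

root = secant(lambda x: x * x - 2, 1.0, 2.0)
```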
For solving Ax=b, Jacobi method updates each component using:
A Updated same-step values
B Matrix inverse
C Random guesses
D Previous-step values
Jacobi computes all new components from the old vector x^(k). It does not reuse newly computed values until the next iteration, making it simple but often slower.
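The old-vector update can be sketched in a few lines (a minimal illustration, not a production solver; the sample system is my own and strictly diagonally dominant):

```python
def jacobi(A, b, x0, iters=50):
    """Jacobi: every new component is computed from the previous-step vector."""
    n = len(b)
    x = list(x0)
    for _ in range(iters):
        x = [
            (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            for i in range(n)       # whole new vector built from the old x
        ]
    return x

A = [[4.0, 1.0], [2.0, 5.0]]        # strictly diagonally dominant
b = [6.0, 12.0]                     # exact solution is x = [1, 2]
x = jacobi(A, b, [0.0, 0.0])
```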
Gauss–Seidel often converges faster than Jacobi mainly because it:
A Uses updated values
B Avoids subtraction
C Uses exact inverse
D Uses bisection
Gauss–Seidel updates variables one by one and immediately uses the newest values. This typically reduces errors more per iteration than Jacobi for many suitable systems.
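The only structural change from Jacobi is updating in place, so later components already see the newest values (again an illustrative sketch on a made-up dominant system):

```python
def gauss_seidel(A, b, x0, iters=25):
    """Gauss-Seidel: overwrite x[i] immediately so later rows use new values."""
    n = len(b)
    x = list(x0)
    for _ in range(iters):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]   # in-place update
    return x

x = gauss_seidel([[4.0, 1.0], [2.0, 5.0]], [6.0, 12.0], [0.0, 0.0])
# converges to the exact solution [1, 2] in fewer sweeps than Jacobi here
```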
A common sufficient condition that helps Jacobi and Gauss–Seidel converge is:
A Zero diagonal entries
B Non-square matrix
C Strict diagonal dominance
D Random coefficients
If, in every row, the magnitude of the diagonal entry exceeds the sum of the magnitudes of the other entries, the matrix is strictly diagonally dominant. This often ensures iterative convergence.
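The row condition is easy to check mechanically; a small sketch (the function name is my own):

```python
def is_strictly_diagonally_dominant(A):
    """True if |a_ii| > sum of |a_ij| over j != i, for every row i."""
    return all(
        abs(row[i]) > sum(abs(v) for j, v in enumerate(row) if j != i)
        for i, row in enumerate(A)
    )

print(is_strictly_diagonally_dominant([[4, 1], [2, 5]]))  # True
print(is_strictly_diagonally_dominant([[1, 3], [2, 5]]))  # False: |1| < |3|
```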
In iterative solvers, the residual for Ax=b is defined as:
A Ax+b
B b−Ax
C A−x
D x−b
Residual r=b−Ax measures how far the current approximation x is from satisfying the equation. A small residual norm indicates x is a good approximate solution.
Spectral radius is the:
A Sum of eigenvalues
B Smallest eigenvalue
C Largest |eigenvalue|
D Determinant sign
Spectral radius ρ(B) equals the maximum absolute value among eigenvalues of B. For many iterative methods, convergence requires ρ(B)<1 so errors shrink with iterations.
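For a 2×2 iteration matrix the spectral radius can be computed by hand from the characteristic polynomial; a sketch (restricted to 2×2 to stay self-contained, and using the Jacobi matrix of the dominant system A = [[4, 1], [2, 5]] as an assumed example):

```python
import cmath

def spectral_radius_2x2(B):
    """Largest |eigenvalue| of a 2x2 matrix via lambda^2 - tr*lambda + det = 0."""
    (a, b), (c, d) = B
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)   # cmath handles complex eigenvalues
    l1, l2 = (tr + disc) / 2, (tr - disc) / 2
    return max(abs(l1), abs(l2))

# Jacobi iteration matrix B = -D^{-1}(L + U) for A = [[4, 1], [2, 5]]:
B = [[0.0, -0.25], [-0.4, 0.0]]
print(spectral_radius_2x2(B))   # about 0.316 < 1, so Jacobi converges for this A
```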
If the iteration matrix has spectral radius ≥ 1, the method may:
A Diverge
B Converge faster
C Become exact
D Halve intervals
When ρ(B) is at least 1, the error is not reduced by repeated iteration and can grow. This often leads to divergence or failure to reach tolerance.
“Condition number” mainly indicates:
A Function continuity
B Root location
C Sensitivity to data
D Graph curvature
Condition number measures how sensitive the solution is to small input changes. A high condition number means small rounding or measurement errors can cause large output errors.
In computations, error propagation means:
A Errors always cancel
B Errors carry forward
C Errors become zero
D Errors are ignored
Rounding and truncation errors from earlier steps can influence later results. In long iterative processes, such errors may accumulate or be amplified, affecting reliability.
Which type of error mainly comes from cutting off a series or method early?
A Truncation error
B Round-off error
C Residual error
D Relative error
Truncation error occurs when an infinite process is replaced by a finite one, like truncating a Taylor series or stopping an iteration. The neglected remainder causes error.
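A quick sketch of truncation error using the Taylor series of eˣ (function name is my own); keeping more terms shrinks the neglected remainder:

```python
import math

def exp_taylor(x, n_terms):
    """Partial sum of the Taylor series of e^x, truncated after n_terms terms."""
    return sum(x ** k / math.factorial(k) for k in range(n_terms))

# Truncation error at x = 1 shrinks as the cut-off moves later:
for n in (2, 4, 8):
    print(n, abs(math.exp(1.0) - exp_taylor(1.0, n)))
```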
Round-off error is mainly due to:
A Interval selection
B Sign change
C Diagonal dominance
D Finite precision
Computers represent numbers with limited digits. Arithmetic operations round values, producing small errors. These may build up in repeated calculations, especially in iterative methods.
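A standard two-line demonstration of round-off in binary floating point:

```python
# Binary floating point cannot represent 0.1 exactly, so tiny round-off
# errors appear and accumulate across repeated operations.
print(0.1 + 0.2 == 0.3)           # False
print(abs((0.1 + 0.2) - 0.3))     # a tiny nonzero residue, around 5.6e-17

total = sum(0.1 for _ in range(10))
print(total == 1.0)               # False: ten round-off errors accumulated
```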
Fixed-point iteration is written in the form:
A x=f′(x)
B x=1/f(x)
C x=g(x)
D x=f(x)²
A root problem f(x)=0 can be rewritten as x=g(x). Iteration xₙ₊₁=g(xₙ) is used to reach a fixed point, which corresponds to a solution.
A basic convergence condition for fixed-point iteration near solution is:
A |g′(x)|<1
B |g′(x)|>1
C g′(x)=2 always
D g(x)=0 always
If |g′(x)| is less than 1 near the fixed point, the mapping contracts distances. Then errors shrink each step and convergence is usually stable.
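A fixed-point sketch using the classic contraction x = cos(x), where |g′(x)| = |sin x| < 1 near the fixed point (names are my own):

```python
import math

def fixed_point(g, x0, tol=1e-12, max_iter=200):
    """Iterate x_{n+1} = g(x_n) until successive iterates stop changing."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("no convergence; check that |g'(x)| < 1 near the fixed point")

root = fixed_point(math.cos, 1.0)   # fixed point of cos near 0.739
```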
In root finding, a “bracketing method” means the method:
A Uses derivatives
B Keeps sign-change interval
C Uses no function values
D Uses matrix inverse
Bracketing methods maintain an interval where f(a) and f(b) have opposite signs. This guarantees at least one root inside and improves reliability compared to open methods.
Which pair are both open (non-bracketing) root methods?
A Bisection and regula
B Bisection and Newton
C Newton and secant
D Regula and fixed point
Newton and secant do not keep a guaranteed sign-change bracket. They can converge faster, but they may also diverge if the starting guesses are poor.
When solving Ax=b iteratively, the “error norm” is used to:
A Choose graph scale
B Compute determinant
C Find eigenvalues
D Measure error size
Error norm gives a single number describing how large the error or residual is. It helps decide whether the approximation is close enough to stop iterating.
In bisection, the midpoint formula is:
A (a+b)/2
B (a−b)/2
C ab/2
D a/b
Bisection uses the midpoint to split the interval. Then it checks the sign change and keeps the half that contains the root, steadily narrowing the bracket.
In false position, the next estimate is the x-intercept of the:
A Tangent line
B Secant line
C Normal line
D Horizontal line
Regula falsi uses the secant line between two endpoints of opposite sign. The point where this line crosses the x-axis becomes the next root approximation.
Newton’s method is also called:
A Regula falsi
B Simpson method
C Newton–Raphson
D Euler method
Newton’s method is commonly known as the Newton–Raphson method. It uses derivative information and tangent-line approximation to iteratively improve the root estimate.
For Newton’s method, the update step is based on:
A Tangent x-intercept
B Interval midpoint
C Secant midpoint
D Matrix diagonal
The tangent line at xₙ approximates the curve near that point. Its x-intercept provides xₙ₊₁, usually giving rapid improvement when the guess is near the root.
The main advantage of bisection over Newton is:
A Faster convergence
B No continuity needed
C Guaranteed convergence
D Needs derivatives
With continuity and sign change, bisection always converges. Newton can be faster, but it may diverge for poor initial guesses or when the derivative becomes very small.
In iterative computation, “stopping criteria” helps avoid:
A Any computation
B Function continuity
C Root existence
D Unnecessary iterations
Stopping rules ensure the algorithm stops when accuracy is sufficient. Without stopping criteria, iterations may waste time or even become unstable due to accumulating round-off errors.
For the same tolerance, which method usually needs more iterations?
A Bisection method
B Newton method
C Secant method
D Aitken method
Bisection improves accuracy by halving the interval, so it progresses steadily but slowly. Newton and secant often reach the same tolerance in fewer steps.
Aitken’s Δ² process is mainly used to:
A Solve linear systems
B Bracket a root
C Accelerate convergence
D Compute derivatives
Aitken’s method speeds up a slowly converging sequence by using three successive approximations to form an improved estimate, often reducing iterations significantly.
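The Δ² formula needs only three successive iterates; a sketch applied to the linearly convergent iteration x = cos(x) (example choice is my own):

```python
import math

def aitken(x0, x1, x2):
    """Aitken's delta-squared extrapolation from three successive iterates."""
    d = x2 - 2 * x1 + x0
    if d == 0:                 # sequence already (numerically) converged
        return x2
    return x0 - (x1 - x0) ** 2 / d

x0 = 1.0
x1 = math.cos(x0)
x2 = math.cos(x1)
better = aitken(x0, x1, x2)    # closer to the fixed point than x2 alone
```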
SOR method is closely related to:
A Newton–Raphson
B Gauss–Seidel
C Bisection
D Secant
SOR is a modification of Gauss–Seidel that introduces a relaxation factor. With a good factor, it can speed up convergence for many large linear systems.
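SOR differs from Gauss–Seidel by one line: the new component is a blend of the old value and the Gauss–Seidel value, weighted by the relaxation factor ω. A sketch (the ω value and example system are my own; ω = 1 recovers Gauss–Seidel exactly):

```python
def sor(A, b, x0, omega=1.1, iters=25):
    """Successive over-relaxation: Gauss-Seidel sweep blended via omega."""
    n = len(b)
    x = list(x0)
    for _ in range(iters):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            gs = (b[i] - s) / A[i][i]              # plain Gauss-Seidel value
            x[i] = (1 - omega) * x[i] + omega * gs  # relaxed update
    return x

x = sor([[4.0, 1.0], [2.0, 5.0]], [6.0, 12.0], [0.0, 0.0])
# converges to the exact solution [1, 2]
```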
“Computational complexity” in basic terms means:
A Graph smoothness
B Root sign
C Operation count
D Interval size
Computational complexity describes how the number of operations grows with problem size. It helps compare efficiency of algorithms, especially when solving large systems or many iterations.
In numerical methods, “accuracy” mainly refers to:
A Closeness to true
B Speed only
C Memory use only
D Graph type
Accuracy means how close the computed result is to the true value. It is commonly measured using absolute or relative error, or by checking residual and tolerance.
“Precision” in computation is mainly about:
A Algorithm type
B Root location
C Function continuity
D Digits stored
Precision refers to how many digits a computer can store and operate with reliably. Limited precision leads to round-off errors and can affect stability in iterative algorithms.
For diagonal dominance in row i, the requirement is:
A |aᵢᵢ| < sum others
B |aᵢᵢ| = 0
C |aᵢᵢ| > sum others
D all entries equal
Strict diagonal dominance means the magnitude of the diagonal entry exceeds the sum of magnitudes of other entries in the row. This condition often supports convergence of iterative solvers.
“Iteration matrix” is mainly used to analyze:
A Curve sketching
B Convergence behavior
C Numerical integration
D Differentiation rules
The iteration matrix describes how errors transform between steps. Studying its eigenvalues and spectral radius helps predict whether the iterative method will converge or diverge.
A simple error estimate used in iterations is:
A |xₙ₊₁−xₙ|
B |a+b|
C |f′(x)| only
D determinant value
The difference between successive iterates often estimates how much the solution is still changing. If this difference falls below tolerance, the approximation is treated as stable.
In bracketing methods, “root isolation” means:
A Derivative equals one
B Computing eigenvalues
C Finding sign-change interval
D Choosing random guesses
Root isolation is locating an interval [a,b] where f(a) and f(b) have opposite signs. This ensures a root is inside and allows methods like bisection to start safely.
In Newton’s method, the “initial guess” should ideally be:
A Very far away
B Always negative
C Always zero
D Close to root
Newton converges fastest and most reliably when the starting value is near the root. A far guess can lead to divergence because the tangent step may jump away.
In Jacobi method, the diagonal part D is important because:
A It is ignored
B It is inverted
C It is set zero
D It is squared
Jacobi rearranges equations using the diagonal entries. Updates typically involve dividing by aᵢᵢ, effectively using D⁻¹ to compute the next approximation vector.
“Numerical integration preview” in this chapter mainly means:
A Approximate area methods
B Exact antiderivative
C Matrix convergence
D Root bracketing
Numerical integration approximates the area under a curve using rules like trapezoidal or Simpson’s rule. It is used when exact antiderivatives are difficult or impossible to compute.