Chapter 24: Numerical Methods and Approximation (Set-1)
Which error comes from chopping an infinite process in computations?
A Round-off error
B Relative error
C Truncation error
D Absolute error
Truncation error happens when an infinite procedure is replaced by a finite one, like stopping a series or limiting iterations. The remaining neglected part creates this error in numerical methods.
Which error is mainly due to limited machine precision?
A Truncation error
B Residual error
C Relative error
D Round-off error
Round-off error occurs because computers store only a limited number of digits. During arithmetic operations, numbers are rounded, and small rounding differences can accumulate in iterative calculations.
If the exact value is known, what is |approx − exact| called?
A Absolute error
B Relative error
C Residual norm
D Truncation error
Absolute error is the magnitude of the difference between the approximate value and the true value. It tells how far the approximation is from the exact value in the same units.
Which measure scales error by the true value size?
A Absolute error
B Truncation error
C Relative error
D Iteration error
Relative error is the absolute error divided by the true value (often as a fraction or percent). It is useful when comparing accuracy across quantities of different magnitudes.
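As a minimal sketch, the two error measures can be computed directly (the exact and approximate values here are illustrative, not from the text):

```python
# Illustrative values: pi approximated by 3.14
exact = 3.141592653589793
approx = 3.14

abs_err = abs(approx - exact)     # absolute error, same units as the quantity
rel_err = abs_err / abs(exact)    # relative error, a dimensionless fraction

print(abs_err)   # ~0.0016
print(rel_err)   # ~0.0005
```

Because relative error is scaled by the true value, it stays meaningful whether the quantity is around 3 or around 3 million.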
What is a common stopping rule in iterations?
A Random guess change
B Error below tolerance
C Increase step size
D Always 100 steps
Iterations are typically stopped when an error estimate, residual, or successive change becomes smaller than a chosen tolerance. This ensures the approximation is accurate enough for the problem.
What does “order of convergence” describe?
A Memory used
B Matrix determinant
C Equation degree
D Speed of convergence
Order of convergence tells how fast errors shrink from one iteration to the next. Higher order means error decreases more quickly near the solution, giving faster improvement per iteration.
Which convergence is usually the slowest type?
A Quadratic convergence
B Superlinear convergence
C Linear convergence
D Cubic convergence
Linear convergence means the error decreases by a roughly constant ratio each step. It is slower than quadratic or cubic convergence, where errors shrink dramatically as iterations progress.
“Quadratic convergence” means the error roughly squares each step near the root
A True
B False
C Only for bisection
D Only for secant
In quadratic convergence, the new error is approximately proportional to the square of the previous error near the solution. This makes the method extremely fast once it gets close to the root.
Which root-finding method always keeps the root bracketed?
A Newton method
B Secant method
C Fixed point method
D Bisection method
Bisection always maintains an interval [a, b] where f(a) and f(b) have opposite signs. It repeatedly halves the interval, so the root remains safely bracketed.
Bisection method requires which basic condition?
A f′(x) exists
B Two close guesses
C f(a)f(b) < 0
D Matrix is diagonal
Bisection needs a sign change over the interval, meaning f(a) and f(b) have opposite signs. For a continuous function, this guarantees at least one root inside the interval.
Which theorem supports the sign-change idea for bisection?
A Mean value theorem
B Intermediate value theorem
C Taylor theorem
D Green’s theorem
For a continuous function, the intermediate value theorem guarantees that if the function changes sign on [a, b], it must cross zero somewhere in that interval.
What is the main drawback of bisection?
A Needs derivative
B Can diverge easily
C Converges slowly
D Needs matrix inverse
Bisection is reliable but slow because it only halves the interval each iteration. Compared to Newton or secant, it usually needs many more steps for high accuracy.
Bisection error bound after n steps is based on interval length halving
A False
B Only for Newton
C Only for Jacobi
D True
After each bisection step, the interval length is halved, and the maximum possible error is at most half the current interval length. After n steps this gives a clear error bound of (b − a)/2ⁿ.
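The bracketing-and-halving idea can be sketched as follows; the function and interval are illustrative choices, and the stopping rule uses exactly the half-interval error bound described above:

```python
def bisect(f, a, b, tol=1e-8, max_iter=100):
    """Bracketing root finder; requires f(a) and f(b) of opposite sign."""
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        m = (a + b) / 2
        if f(a) * f(m) <= 0:
            b = m                   # root lies in [a, m]
        else:
            a = m                   # root lies in [m, b]
        if (b - a) / 2 < tol:       # error bound: half the interval length
            break
    return (a + b) / 2

# Illustrative example: root of x^2 - 2 on [1, 2] is sqrt(2)
root = bisect(lambda x: x**2 - 2, 1.0, 2.0)
```

The midpoint returned is guaranteed to be within half the final interval length of the true root, which is why the tolerance check is reliable.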
Regula falsi is also called which method?
A False position
B Newton method
C Secant method
D Jacobi method
Regula falsi is the false position method. It finds a new approximation using a straight-line interpolation between two endpoints that bracket the root.
False position uses which idea between endpoints?
A Quadratic fit
B Cubic spline
C Linear interpolation
D Fourier series
The false position method draws a line between (a, f(a)) and (b, f(b)) and takes where it crosses the x-axis as the next root estimate.
False position still requires which condition like bisection?
A Second derivative
B Sign change bracket
C Complex numbers
D Matrix symmetry
Like bisection, false position needs f(a) and f(b) to have opposite signs so that a root is bracketed. This helps keep the iteration stable.
A known issue in false position is sometimes called what?
A Overflow problem
B Pivoting
C Orthogonality
D Stagnation
False position can stagnate when one endpoint stays fixed for many iterations. This slows convergence because the interval does not shrink evenly like bisection.
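A minimal sketch of false position, assuming a continuous function with a sign-change bracket; note in the update branch how one endpoint can stay fixed for many iterations (the stagnation issue just described):

```python
def false_position(f, a, b, tol=1e-10, max_iter=200):
    """Regula falsi: linear interpolation between bracketing endpoints."""
    fa, fb = f(a), f(b)
    if fa * fb >= 0:
        raise ValueError("root must be bracketed: f(a)*f(b) < 0")
    c = a
    for _ in range(max_iter):
        c = b - fb * (b - a) / (fb - fa)   # x-intercept of the chord
        fc = f(c)
        if abs(fc) < tol:
            break
        if fa * fc < 0:
            b, fb = c, fc
        else:
            a, fa = c, fc   # for convex f, endpoint b may never move
    return c

# Illustrative example: root of x^2 - 2 on [1, 2]
root = false_position(lambda x: x*x - 2, 1.0, 2.0)
```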
Which method uses tangent line at current estimate?
A Bisection
B Jacobi
C Newton method
D Gauss elimination
Newton’s method uses the tangent line at the current approximation xₙ. The next estimate is where that tangent crosses the x-axis, giving fast convergence near the root.
Newton–Raphson iteration formula is:
A xₙ₊₁ = xₙ − f(xₙ)/f′(xₙ)
B xₙ₊₁ = (a+b)/2
C xₙ₊₁ = xₙ + f(xₙ)
D xₙ₊₁ = f(xₙ)/f′(xₙ)
Newton–Raphson updates the guess by subtracting the ratio of function value to derivative. This uses local linearization and often converges very quickly when the starting guess is close.
Newton method typically has which convergence near a simple root?
A Linear convergence
B No convergence
C Logarithmic convergence
D Quadratic convergence
Near a simple root and with a good initial guess, Newton’s method has quadratic convergence. Errors shrink extremely fast, often doubling correct digits each iteration.
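A minimal Newton–Raphson sketch, assuming the derivative is available and the starting guess is reasonably close; the example function and guess are illustrative:

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson: x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        dfx = fprime(x)
        if dfx == 0:
            raise ZeroDivisionError("derivative vanished during iteration")
        x = x - fx / dfx
    return x

# sqrt(2) as the root of f(x) = x^2 - 2, starting from 1.5
root = newton(lambda x: x*x - 2, lambda x: 2*x, x0=1.5)
```

With quadratic convergence, this typically reaches full double precision in only four or five iterations from x0 = 1.5.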
Newton’s method may fail if which value becomes zero?
A f(xₙ)
B xₙ
C f′(xₙ)
D tolerance
Newton’s update divides by f′(xₙ). If the derivative is zero or nearly zero, the step can blow up or become unstable, causing divergence or large jumps.
Newton method needs which additional information compared to secant?
A Sign change
B Derivative value
C Two guesses only
D Diagonal dominance
Newton’s method requires computing the derivative at each step. Secant avoids derivatives by approximating slope from two previous points, making it derivative-free but slightly slower.
Secant method uses how many starting values?
A Two values
B One value
C Three values
D Four values
Secant method starts with two initial approximations. It uses them to estimate the slope and then finds the x-intercept of the secant line as the next root estimate.
Secant method is best described as:
A Bracketing only
B Matrix factorization
C Polynomial division
D Derivative-free Newton
Secant method resembles Newton but replaces the derivative by a finite difference slope using two recent points. This avoids derivative computation while still giving fairly fast convergence.
Secant method generally converges faster than:
A Newton method
B Exact algebra
C Bisection method
D LU decomposition
Secant often converges faster than bisection because it uses slope information to jump closer to the root. However, it may be less reliable than bracketing methods.
Secant method usually has convergence order about:
A 1.618
B 1.0
C 2.0
D 3.0
The secant method has superlinear convergence with order approximately 1.618 (the golden ratio). It is faster than linear methods but slower than Newton’s quadratic convergence.
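A minimal secant sketch; the two starting guesses are illustrative, and the derivative of Newton's formula is replaced by the finite-difference slope through the two most recent points:

```python
def secant(f, x0, x1, tol=1e-12, max_iter=50):
    """Secant method: Newton-like update with a slope estimated
    from the two most recent iterates (no derivative needed)."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if abs(f1) < tol:
            break
        if f1 == f0:
            raise ZeroDivisionError("flat secant: f(x0) == f(x1)")
        x0, x1 = x1, x1 - f1 * (x1 - x0) / (f1 - f0)
        f0, f1 = f1, f(x1)
    return x1

# Illustrative example: root of x^2 - 2 from guesses 1 and 2
root = secant(lambda x: x*x - 2, 1.0, 2.0)
```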
Which method does NOT require sign-change bracketing?
A Bisection
B False position
C Both A and B
D Secant
Secant does not require the root to be bracketed by opposite signs. It can work with two guesses on the same side of the root, though it may fail if the guesses are poor.
A “residual” in root finding commonly means:
A interval length
B derivative value
C f(x) value
D matrix diagonal
Residual usually refers to how close the current estimate is to solving the equation. For f(x)=0, the residual is |f(xₙ)|, and smaller residual indicates a better root approximation.
In linear systems Ax=b, residual is:
A Ax + b
B b − Ax
C A − b
D x − A
For a guessed solution x, the residual r = b − Ax measures the mismatch. Iterative methods aim to reduce the residual norm to meet a stopping tolerance.
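A small sketch of the residual idea, using an illustrative 2×2 system:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
b = np.array([1.0, 2.0])

x_guess = np.array([0.0, 0.0])        # poor guess: residual equals b
r = b - A @ x_guess
print(np.linalg.norm(r))              # large norm -> far from solving Ax=b

x_exact = np.linalg.solve(A, b)
print(np.linalg.norm(b - A @ x_exact))  # near zero -> system solved
```

An iterative solver would stop once this residual norm drops below the chosen tolerance.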
Gauss–Jacobi method is used to solve:
A Nonlinear equations
B Differential equations only
C Integrals only
D Linear systems
Gauss–Jacobi is an iterative method for solving systems of linear equations Ax=b. It updates each variable using values from the previous iteration, making it simple and parallel-friendly.
Jacobi method updates xᵢ using values from:
A Same iteration only
B Exact solution
C Previous iteration only
D Random values
In Jacobi, all new components are computed from the old vector x^(k). No newly computed values are used in the same iteration, which makes the method straightforward.
A common sufficient condition for Jacobi convergence is:
A Diagonally dominant
B Orthogonal matrix
C Singular matrix
D Zero determinant
If a matrix is strictly diagonally dominant, both Jacobi and Gauss–Seidel are guaranteed to converge. Strict diagonal dominance means each diagonal entry is larger in magnitude than the sum of the magnitudes of the other entries in its row.
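A minimal Jacobi sketch, assuming a strictly diagonally dominant system; the 2×2 example is an illustrative choice:

```python
import numpy as np

def jacobi(A, b, x0=None, tol=1e-10, max_iter=500):
    """Jacobi iteration: every component of x^(k+1) is computed
    from the previous vector x^(k) only."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float)
    D = np.diag(A)                 # diagonal entries
    R = A - np.diagflat(D)         # off-diagonal part
    for _ in range(max_iter):
        x_new = (b - R @ x) / D    # whole vector built from old x
        if np.linalg.norm(x_new - x, ord=np.inf) < tol:
            return x_new
        x = x_new
    return x

# Strictly diagonally dominant system: |4| > |1| and |3| > |1|
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = jacobi(A, b)
```

Because every new component depends only on the old vector, all components can in principle be updated in parallel.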
Gauss–Seidel differs from Jacobi mainly by:
A Using interpolation
B Using derivatives
C Halving intervals
D Using updated values
Gauss–Seidel uses the newest computed values immediately within the same iteration. This often speeds up convergence compared to Jacobi, which waits until the next iteration to use updates.
Which method usually converges faster for the same system?
A Jacobi method
B Bisection method
C Gauss–Seidel method
D False position
Gauss–Seidel typically converges faster than Jacobi because it uses updated values during iteration. This reduces error more effectively per cycle for many diagonally dominant systems.
In Gauss–Seidel, x₁^(k+1) is used to compute x₂^(k+1)
A False
B True
C Only for Newton
D Only for bisection
Gauss–Seidel computes variables sequentially and reuses updated values immediately. So after finding x₁^(k+1), it is used right away in the formula for x₂^(k+1).
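A minimal Gauss–Seidel sketch on an illustrative diagonally dominant system; the inner loop shows how fresh values from the current sweep are reused immediately:

```python
import numpy as np

def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=500):
    """Gauss-Seidel: newly updated components are reused at once
    within the same sweep."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # x[:i] holds this sweep's fresh values, x[i+1:] the old ones
            s = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, ord=np.inf) < tol:
            break
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = gauss_seidel(A, b)
```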
A basic reason iterative methods may diverge is:
A Large diagonal entries
B Too much memory
C Having a solution
D Poor matrix properties
Iterative methods can diverge if the iteration matrix has spectral radius ≥ 1 or if the system is not suitable (like lack of diagonal dominance). Then errors may grow instead of shrinking.
Spectral radius relates to:
A Matrix trace only
B Determinant sign
C Maximum eigenvalue magnitude
D Number of equations
Spectral radius is the largest absolute value of eigenvalues of a matrix. For many iterative schemes, convergence depends on spectral radius of the iteration matrix being less than 1.
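As a sketch of this convergence test, the spectral radius of the Jacobi iteration matrix T = I − D⁻¹A can be checked directly; the matrix here is an illustrative example:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])

# Jacobi iteration matrix: T = I - D^{-1} A
D_inv = np.diag(1.0 / np.diag(A))
T = np.eye(2) - D_inv @ A

rho = max(abs(np.linalg.eigvals(T)))   # spectral radius
print(rho)   # less than 1, so Jacobi converges for this A
```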
A tolerance in stopping criteria represents:
A Allowed error limit
B Exact answer
C Number of variables
D Matrix order
Tolerance is the acceptable threshold for error or residual. When the computed measure falls below this limit, the approximation is considered accurate enough and the algorithm stops.
Fixed point iteration solves f(x)=0 by rewriting as:
A x = f′(x)
B x = g(x)
C x = 1/f(x)
D x = x²
Fixed point iteration transforms the root problem into x = g(x). Starting from an initial guess, it repeatedly computes xₙ₊₁ = g(xₙ) to approach a fixed point solution.
A common convergence condition for fixed point iteration is:
A |g′(x)| > 1
B g(x) = 0 always
C |g′(x)| < 1
D f′(x) = 0
If |g′(x)| is less than 1 near the fixed point, iterations tend to move closer to the solution. If |g′(x)| is greater than 1, errors usually grow and divergence occurs.
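A minimal fixed point sketch using the classic example x = cos x, where |g′(x)| = |sin x| < 1 near the fixed point, so the iteration contracts:

```python
import math

def fixed_point(g, x0, tol=1e-12, max_iter=500):
    """Iterate x_{n+1} = g(x_n) until successive change is tiny."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Fixed point of g(x) = cos(x), i.e. the solution of x - cos(x) = 0
root = fixed_point(math.cos, 1.0)
```

Since |g′| ≈ 0.67 at the fixed point, the error shrinks by roughly that factor each step, which is linear convergence.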
“Condition number” informally measures:
A Root count
B Matrix symmetry
C Iteration step size
D Sensitivity to input
Condition number indicates how sensitive the solution is to small changes in input data. A large condition number means small rounding or data errors can cause large changes in the computed answer.
Error propagation mainly means:
A Errors always vanish
B Only truncation happens
C Errors spread through steps
D Only bisection uses it
Error propagation describes how small errors from rounding or approximation pass through calculations. In iterative methods, early small errors can influence later steps and affect final accuracy.
“Iteration count estimation” is used to:
A Predict steps needed
B Prove theorem
C Remove derivatives
D Make matrix diagonal
Iteration count estimation helps predict how many iterations are required to meet a desired tolerance. For methods like bisection, it can be computed directly from interval length and tolerance.
Bisection iterations needed depend on:
A f′(x) value
B Matrix symmetry
C Second derivative
D Initial interval length
In bisection, the interval is halved each time. The number of steps to reach a tolerance depends on the starting interval length and how small you want the final interval (error bound).
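This count can be computed in advance from the bound (b − a)/2ⁿ ≤ tol, i.e. n ≥ log₂((b − a)/tol); a small sketch with illustrative numbers:

```python
import math

def bisection_steps(a, b, tol):
    """Smallest n with (b - a) / 2**n <= tol."""
    return math.ceil(math.log2((b - a) / tol))

# Interval length 1, tolerance 1e-6 -> 20 halvings needed
n = bisection_steps(1.0, 2.0, 1e-6)
print(n)  # 20
```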
Newton’s method can diverge mainly due to:
A Too small tolerance
B Bad initial guess
C Using bracketing
D Diagonal dominance
Newton’s method may fail if the starting guess is far from the root or near a point where derivative is small. Then tangent steps can jump away, causing oscillation or divergence.
Multiple roots in Newton’s method often cause:
A Faster convergence
B No derivatives needed
C Slower convergence
D Guaranteed bracketing
When the root has multiplicity greater than one, standard Newton steps often lose quadratic speed and become slower. Modified Newton methods can restore faster convergence by adjusting the update.
Aitken’s Δ² process is used for:
A Root bracketing
B Matrix inversion
C Numerical integration
D Acceleration of convergence
Aitken’s delta-squared method accelerates a slowly converging sequence. It uses three successive iterates to produce a new estimate that often converges faster than the original iteration.
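A minimal Aitken Δ² sketch; the input sequence here is an illustrative geometric sequence converging linearly to 0, for which the acceleration formula happens to give the limit exactly:

```python
def aitken(seq):
    """Aitken's delta-squared: accelerate a sequence using
    consecutive triples (x_n, x_{n+1}, x_{n+2})."""
    out = []
    for x0, x1, x2 in zip(seq, seq[1:], seq[2:]):
        denom = x2 - 2*x1 + x0
        if denom == 0:
            out.append(x2)                      # already converged
        else:
            out.append(x0 - (x1 - x0)**2 / denom)
    return out

# Linearly converging sequence x_n = 2^{-n} toward 0
xs = [2.0**-n for n in range(8)]
accel = aitken(xs)
```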
SOR is commonly related to which method family?
A Gauss–Seidel family
B Bisection family
C Newton family
D Secant family
Successive Over-Relaxation (SOR) is an improvement over Gauss–Seidel. It introduces a relaxation factor to speed convergence for suitable linear systems, especially large sparse ones.
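A minimal SOR sketch: each Gauss–Seidel value is blended with the old value via the relaxation factor omega (omega = 1 recovers plain Gauss–Seidel). The value omega = 1.25 and the 2×2 system are illustrative choices, not tuned optima:

```python
import numpy as np

def sor(A, b, omega=1.25, x0=None, tol=1e-10, max_iter=500):
    """Successive Over-Relaxation on top of a Gauss-Seidel sweep."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
            gs = (b[i] - s) / A[i, i]              # Gauss-Seidel value
            x[i] = (1 - omega) * x_old[i] + omega * gs
        if np.linalg.norm(x - x_old, ord=np.inf) < tol:
            break
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = sor(A, b)
```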
Direct methods differ from iterative methods because direct methods:
A Always use derivatives
B Give exact solution in finite steps
C Need sign change
D Use only tolerance
Direct methods like Gaussian elimination or LU decomposition aim to solve in a finite number of operations (ignoring rounding). Iterative methods refine guesses repeatedly until meeting tolerance.
LU decomposition is mainly used to:
A Find roots only
B Compute integrals
C Solve linear systems
D Trace curves
LU decomposition factors A into L and U matrices. This allows solving Ax=b efficiently by forward substitution (Ly=b) and backward substitution (Ux=y), especially for multiple right-hand sides.
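A compact Doolittle-style sketch of this factor-then-substitute pattern, assuming no pivoting is needed (i.e. no zero pivots arise); the 2×2 system is illustrative:

```python
import numpy as np

def lu_solve(A, b):
    """Factor A = LU (Doolittle, no pivoting), then solve
    Ly = b forward and Ux = y backward."""
    n = len(b)
    L, U = np.eye(n), A.astype(float)
    for k in range(n - 1):                      # elimination builds L and U
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]
            U[i, k:] -= L[i, k] * U[k, k:]
    y = np.zeros(n)
    for i in range(n):                          # forward substitution
        y[i] = b[i] - L[i, :i] @ y[:i]
    x = np.zeros(n)
    for i in reversed(range(n)):                # backward substitution
        x[i] = (y[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = lu_solve(A, b)
```

Once L and U are known, solving for a new right-hand side b costs only the two cheap substitution passes, which is the main payoff for multiple right-hand sides.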