Chapter 24: Numerical Methods and Approximation (Set-2)
While iterating for a root, what does a “tolerance” mainly represent in practice?
A Exact root value
B Function maximum
C Interval starting size
D Allowed error limit
Tolerance is the acceptable accuracy target. Iterations stop when the change in successive approximations, or the residual |f(x)|, becomes smaller than this chosen limit.
When a computer stores 3.14159 as 3.142, which type of error is introduced?
A Round-off error
B Truncation error
C Residual error
D Relative error
Round-off error happens because machines store only finite digits. Rounding 3.14159 to 3.142 changes the value slightly, and repeated operations can accumulate this small error.
In numerical work, what is the absolute error for exact = 8 and approx = 7.96?
A 0.004
B 0.04
C 0.4
D 4
Absolute error equals |approx − exact|. Here |7.96 − 8| = 0.04. It measures the actual size of the difference in the same units as the quantity.
Relative error is best described as:
A |approx − exact|
B |error| × |exact|
C |exact| − |approx|
D |error| / |exact|
Relative error scales the absolute error by the true value size. This makes it useful for comparing accuracy when the magnitudes of quantities are very different.
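The two error measures above can be checked in a few lines (a minimal sketch using the values 8 and 7.96 from the question above):

```python
# Absolute and relative error for exact = 8, approx = 7.96
exact, approx = 8.0, 7.96

abs_err = abs(approx - exact)      # |approx - exact|
rel_err = abs_err / abs(exact)     # |error| / |exact|

print(abs_err)  # ~0.04   (up to floating-point rounding)
print(rel_err)  # ~0.005
```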
If an iteration reduces error by a constant factor each step, it is called:
A Quadratic convergence
B Cubic convergence
C Linear convergence
D No convergence
Linear convergence means eₙ₊₁ ≈ C·eₙ for a constant C with 0 < C < 1. The error shrinks by roughly the same factor each step, so accuracy improves steadily but not dramatically.
In quadratic convergence, the new error is roughly proportional to:
A eₙ
B √eₙ
C 1/eₙ
D eₙ²
Quadratic convergence means eₙ₊₁ ≈ C·(eₙ)² near the solution. This makes the method very fast once close to the root because errors drop dramatically.
In root-finding, the residual at x = xₙ is commonly:
A |f(xₙ)|
B |xₙ|
C |f′(xₙ)|
D |xₙ₊₁ − xₙ|
For solving f(x)=0, residual measures how well the current guess satisfies the equation. A small |f(xₙ)| indicates xₙ is close to a root.
Which method guarantees the root stays inside the chosen interval every step?
A Newton method
B Bisection method
C Secant method
D Fixed point
Bisection always keeps a bracket [a,b] with opposite signs at endpoints. Each step halves the interval while preserving the sign change, so the root remains inside.
Bisection method needs which key requirement for continuous f(x)?
A f′(x) exists
B f(a) = f(b)
C a = 0 always
D f(a)f(b) < 0
A sign change across [a,b] implies at least one root exists inside for a continuous function. This is the foundation of bracketing methods like bisection.
Bisection method is most directly supported by:
A Intermediate value theorem
B Taylor theorem
C Cauchy theorem
D Rolle’s theorem
The intermediate value theorem says a continuous function takes all values between f(a) and f(b). If signs differ, zero must occur somewhere in between.
After each bisection step, the interval length becomes:
A Doubled
B Tripled
C Halved
D Unchanged
Bisection picks the midpoint and then chooses the half-interval that still has a sign change. This halves the interval length each time, giving a predictable error bound.
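The bisection questions above can be summarized in one short sketch (the test function f(x) = x² − 2 and the bracket [1, 2] are illustrative choices, not from the text):

```python
# Bisection: keep a sign-change bracket [a, b] and halve it each step.
def bisect(f, a, b, tol=1e-8, max_iter=100):
    if f(a) * f(b) >= 0:
        raise ValueError("need f(a) and f(b) of opposite signs")
    for _ in range(max_iter):
        mid = (a + b) / 2                  # midpoint = (a + b)/2
        if abs(f(mid)) < tol or (b - a) / 2 < tol:   # stopping rules
            return mid
        if f(a) * f(mid) < 0:              # sign change in [a, mid]
            b = mid                        # replace b, keep the bracket
        else:                              # sign change in [mid, b]
            a = mid
    return (a + b) / 2

root = bisect(lambda x: x**2 - 2, 1.0, 2.0)   # converges to sqrt(2)
```

Note how both stopping rules from later questions appear here: a small residual |f(mid)| and a small interval.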
Bisection method is often described as:
A Fast but risky
B Derivative based
C Matrix based
D Slow but reliable
Bisection converges steadily as the interval halves each step, so it is very reliable. However, compared to Newton or secant, it usually requires more iterations.
The false position method is also called:
A Aitken method
B Regula falsi
C Euler method
D Simpson method
False position is known as regula falsi. It combines bracketing with a straight-line interpolation step to estimate where the function crosses the x-axis.
False position finds the next estimate using:
A Linear interpolation
B Interval midpoint
C Quadratic approximation
D Random selection
False position draws a secant line through endpoints (a,f(a)) and (b,f(b)). The x-intercept of this line is used as the next root approximation.
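The interpolation step can be sketched as follows (the example equation x³ − x − 2 = 0 is an illustrative assumption):

```python
# Regula falsi: next estimate is the x-intercept of the secant line
# through (a, f(a)) and (b, f(b)); keep the half with the sign change.
def false_position(f, a, b, tol=1e-10, max_iter=200):
    fa, fb = f(a), f(b)
    if fa * fb >= 0:
        raise ValueError("need a sign-change bracket")
    c = a
    for _ in range(max_iter):
        c = b - fb * (b - a) / (fb - fa)   # linear interpolation step
        fc = f(c)
        if abs(fc) < tol:
            break
        if fa * fc < 0:
            b, fb = c, fc
        else:
            a, fa = c, fc                  # one endpoint may stagnate
    return c

root = false_position(lambda x: x**3 - x - 2, 1.0, 2.0)
```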
Like bisection, false position requires:
A Two equal endpoints
B Derivative at start
C Complex values
D Sign change bracket
False position also needs f(a) and f(b) of opposite signs to ensure the root lies between them. This bracketing property provides stability and a root guarantee.
A common drawback of false position is:
A Always diverges
B Needs matrix inverse
C Endpoint stagnation
D Needs second derivative
Sometimes one endpoint barely moves because the interpolated point repeatedly falls near the same side. This stagnation slows down progress even though bracketing is maintained.
Newton’s method uses which geometric idea to update the root estimate?
A Tangent line step
B Midpoint rule
C Area under curve
D Parabola fitting
Newton’s method uses the tangent line at xₙ to approximate the curve. The next estimate is where this tangent crosses the x-axis, often giving fast improvement.
Newton–Raphson iteration formula is:
A xₙ₊₁ = (a+b)/2
B xₙ₊₁ = xₙ + f
C xₙ₊₁ = f′/f
D xₙ₊₁ = xₙ − f/f′
The method updates xₙ by subtracting f(xₙ)/f′(xₙ). This comes from linearizing f around xₙ and solving the tangent-line approximation for the root.
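The tangent-line update reads directly as code (computing √2 as the root of x² − 2 is an illustrative choice):

```python
# Newton-Raphson: x_{n+1} = x_n - f(x_n)/f'(x_n)
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:                  # residual small: stop
            break
        dfx = fprime(x)
        if dfx == 0:
            raise ZeroDivisionError("f'(x) = 0: tangent step undefined")
        x = x - fx / dfx                   # tangent crosses the x-axis here
    return x

root = newton(lambda x: x**2 - 2, lambda x: 2 * x, 1.0)   # -> sqrt(2)
```

The explicit check for f′(x) = 0 reflects the failure mode discussed a few questions below.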
Newton’s method generally needs what additional input compared to secant?
A Sign change bracket
B Derivative values
C Two endpoints
D Matrix diagonal
Newton requires f′(x) at each step, which may be hard or costly to compute. Secant avoids derivatives by using two past points to estimate the slope.
Newton’s method can fail badly when:
A f′(xₙ) ≈ 0
B f(x) is continuous
C interval is small
D root is real
If the derivative at the current guess is near zero, division by f′(xₙ) produces a huge step. This can jump away from the root and cause divergence.
Near a simple root with good initial guess, Newton often shows:
A Linear convergence
B No convergence
C Quadratic convergence
D Cyclic behavior
For a simple root and a starting point close enough, Newton’s method converges quadratically. The error approximately squares each step, quickly increasing correct digits.
Secant method starts with:
A One initial guess
B Three initial guesses
C No starting value
D Two initial guesses
The secant method uses two starting approximations. It forms a secant line through the two points on the curve and uses its x-intercept as the next estimate.
Secant method avoids which requirement of Newton?
A Derivative computation
B Continuity
C Function evaluation
D Root existence
Secant replaces the derivative by a finite-difference slope using two recent points. This makes it useful when derivatives are difficult to compute, while still converging fairly quickly.
Secant method convergence order is approximately:
A 1.0
B 1.618
C 2.0
D 3.0
The secant method converges superlinearly with order about 1.618 (golden ratio). It is faster than linear methods like bisection but slower than Newton’s quadratic convergence.
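A minimal secant sketch, using the same illustrative function x² − 2 (note the two starting guesses and the absence of any derivative):

```python
# Secant: a finite-difference slope from two recent points replaces f'.
def secant(f, x0, x1, tol=1e-12, max_iter=100):
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if abs(f1) < tol:
            break
        # slope (f1 - f0)/(x1 - x0) stands in for the derivative
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        x0, f0 = x1, f1                    # keep the two newest points
        x1, f1 = x2, f(x2)
    return x1

root = secant(lambda x: x**2 - 2, 1.0, 2.0)   # -> sqrt(2)
```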
Compared to bisection, secant is usually:
A Slower always
B Always divergent
C Always bracketed
D Faster usually
Secant often reaches the root in fewer iterations because it uses slope information. However, it does not guarantee bracketing, so poor starting guesses can cause failure.
Which root method does NOT require sign-change bracketing?
A Secant method
B Bisection method
C False position
D Both A and B
Secant method can start with two guesses that do not bracket a root. This flexibility can be helpful, but it also reduces reliability compared to bracketing methods.
In fixed-point iteration, the update rule is:
A xₙ₊₁ = xₙ − f/f′
B xₙ₊₁ = (a+b)/2
C xₙ₊₁ = g(xₙ)
D xₙ₊₁ = Ax − b
Fixed-point iteration rewrites the problem as x = g(x). Starting from x₀, it repeatedly computes xₙ₊₁ = g(xₙ) hoping the sequence approaches the fixed point.
A common basic convergence condition for fixed-point iteration is:
A |g′(x)| > 1
B g′(x) = 0 always
C f(x) = 1 always
D |g′(x)| < 1
If |g′(x)| is less than 1 near the fixed point, the iteration tends to be a contraction, pulling guesses closer to the solution and producing convergence.
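A classic illustration (the choice g(x) = cos x is an assumption for the example): |g′(x)| = |sin x| < 1 near the fixed point, so the iteration contracts.

```python
import math

# Fixed-point iteration: x_{n+1} = g(x_n), stopping when the iterates
# stop changing. Converges here because |g'(x)| = |sin x| < 1 nearby.
def fixed_point(g, x0, tol=1e-10, max_iter=1000):
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:   # successive approximations agree
            return x_new
        x = x_new
    return x

x = fixed_point(math.cos, 1.0)     # fixed point of cos(x), ~0.739085
```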
Gauss–Jacobi method is mainly used to solve:
A Linear systems
B Integrals
C Curve tracing
D Differential roots
Gauss–Jacobi solves Ax=b iteratively. It rearranges each equation to compute one variable using other variables from the previous iteration, repeating until changes are small.
In Jacobi method, the new values use:
A Only updated values
B Only old values
C Random values
D Exact inverse
Jacobi computes x^(k+1) entirely from x^(k). It does not use freshly computed components within the same iteration, which makes it simple but often slower.
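A short Jacobi sketch (the 2×2 strictly diagonally dominant system is an illustrative assumption; its exact solution is x = [1, 2]):

```python
# Jacobi: each new component uses ONLY the previous iterate x_old.
def jacobi(A, b, x0, sweeps=50):
    n = len(b)
    x = list(x0)
    for _ in range(sweeps):
        x_old = x[:]                          # freeze the previous iterate
        for i in range(n):
            s = sum(A[i][j] * x_old[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]       # only old values on the right
    return x

# 4x + y = 6,  x + 3y = 7  ->  x = 1, y = 2
x = jacobi([[4.0, 1.0], [1.0, 3.0]], [6.0, 7.0], [0.0, 0.0])
```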
A common sufficient condition for Jacobi convergence is:
A Zero diagonal entries
B Non-square matrix
C Negative determinant
D Strict diagonal dominance
If the matrix is strictly diagonally dominant, the Jacobi iteration often converges. Diagonal dominance helps ensure the iteration matrix has spectral radius less than 1.
Gauss–Seidel differs from Jacobi because it:
A Uses updated values
B Uses midpoint
C Uses no equations
D Uses bracketing
Gauss–Seidel updates components sequentially and immediately uses the newest values in subsequent calculations within the same iteration, which usually improves convergence speed.
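The one-line difference from Jacobi is visible in code (same illustrative system as before, exact solution x = [1, 2]):

```python
# Gauss-Seidel: updated components are used immediately within a sweep.
def gauss_seidel(A, b, x0, sweeps=30):
    n = len(b)
    x = list(x0)
    for _ in range(sweeps):
        for i in range(n):
            # x[j] for j < i already holds THIS sweep's new value
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x

x = gauss_seidel([[4.0, 1.0], [1.0, 3.0]], [6.0, 7.0], [0.0, 0.0])
```

There is no frozen `x_old` copy: reading from the live `x` is exactly what distinguishes Gauss–Seidel from Jacobi.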
For many diagonally dominant systems, Gauss–Seidel is typically:
A Slower than Jacobi
B Same as bisection
C Faster than Jacobi
D Not iterative
Because Gauss–Seidel uses updated values during iteration, it often reduces error more per step than Jacobi, so it typically converges in fewer iterations for similar systems.
In linear systems, the residual vector is:
A Ax + b
B A − x
C x − b
D b − Ax
Residual r = b − Ax measures how well the current approximate solution x satisfies Ax=b. Iterative solvers stop when the residual norm drops below a tolerance.
Spectral radius of an iteration matrix means:
A Sum of eigenvalues
B Max |eigenvalue|
C Product of eigenvalues
D Number of pivots
Spectral radius is the largest absolute eigenvalue of a matrix. For many iterative schemes, convergence requires the spectral radius of the iteration matrix to be less than 1.
A common reason iterative solvers diverge is:
A Spectral radius ≥ 1
B Very small tolerance
C Too many digits
D Continuous function
If the iteration matrix has spectral radius 1 or more, errors do not shrink and may grow. Then the sequence of approximations can diverge instead of converging.
“Error propagation” in computations mainly means:
A Error always disappears
B Only truncation occurs
C Only rounding occurs
D Error carries forward
Small rounding or truncation errors can influence later steps. In long computations or iterations, these errors can accumulate or get amplified, affecting final accuracy.
Condition number mainly indicates:
A Root location
B Sensitivity measure
C Matrix symmetry
D Exact convergence
A high condition number means the solution is sensitive to small changes in data or rounding. Poor conditioning can cause large output errors even when input errors are tiny.
A direct method for Ax=b is:
A Gauss elimination
B Jacobi method
C Gauss–Seidel
D Fixed point
Gaussian elimination is a direct method because it solves the system in a finite sequence of operations (ignoring rounding). Iterative methods instead refine an initial guess repeatedly.
LU decomposition is mainly a technique to:
A Find derivatives
B Bracket a root
C Integrate numerically
D Factor a matrix
LU decomposition factors A into L (lower triangular) and U (upper triangular). This makes solving Ax=b efficient using forward and backward substitution, especially for multiple right-hand sides.
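A minimal sketch of the idea (Doolittle-style factorization without pivoting, which assumes no zero pivots arise; the 2×2 system is illustrative):

```python
# Factor A = L*U (L unit lower triangular, U upper triangular), then
# solve Ax = b by forward substitution (Ly = b) and back substitution (Ux = y).
def lu_solve(A, b):
    n = len(b)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        L[i][i] = 1.0
        for j in range(i, n):          # row i of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):      # column i of L
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    y = [0.0] * n                      # forward substitution: L y = b
    for i in range(n):
        y[i] = b[i] - sum(L[i][k] * y[k] for k in range(i))
    x = [0.0] * n                      # back substitution: U x = y
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))) / U[i][i]
    return x

x = lu_solve([[4.0, 1.0], [1.0, 3.0]], [6.0, 7.0])   # -> [1.0, 2.0]
```

For a new right-hand side b, only the two substitution passes are repeated; the factorization is reused.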
In bisection, if f(a) and f(mid) have opposite signs, the next interval is:
A [mid, b]
B [a, b] unchanged
C [a, mid]
D [0, mid]
The sign change indicates the root lies between a and mid. So b is replaced by mid to keep a bracket where the function values have opposite signs.
In bisection, midpoint is computed as:
A (a + b)/2
B a − b
C ab/2
D a/b
The midpoint splits the interval into two equal halves. Bisection then checks which half contains the sign change and repeats, steadily shrinking the interval around the root.
Newton’s method is faster than bisection mainly because it uses:
A Random intervals
B Only integers
C Matrix splitting
D Slope information
Newton uses derivative (slope) to predict where the function crosses zero via the tangent line. This often leads to much larger accuracy gains per step than interval halving.
Secant method uses slope approximated from:
A One point only
B Two recent points
C Three midpoints
D Exact derivative
Secant method replaces f′(x) with the slope between two points (xₙ,f(xₙ)) and (xₙ₋₁,f(xₙ₋₁)). This avoids derivatives while still using slope-like information.
A standard stopping rule in root finding is:
A f(xₙ) maximum
B xₙ negative
C |f(xₙ)| small
D interval grows
If |f(xₙ)| becomes very small, xₙ nearly satisfies f(x)=0, so it is close to a root. Many algorithms stop when this residual is below tolerance.
A standard stopping rule in iterations can also be:
A |xₙ₊₁ − xₙ| large
B f′(xₙ) large
C a = b
D |xₙ₊₁ − xₙ| small
When consecutive approximations change very little, the sequence is stabilizing. If |xₙ₊₁ − xₙ| is below tolerance, we consider the result accurate enough and stop.
Aitken’s Δ² method is mainly used to:
A Speed up convergence
B Compute integrals
C Bracket roots
D Solve determinants
Aitken’s delta-squared process accelerates a slowly converging sequence. Using three successive iterates, it produces an improved estimate that often approaches the limit faster.
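The three-iterate formula is short enough to show directly (the geometric test sequence xₙ = 1 + 0.5ⁿ is an illustrative assumption; for an exactly geometric error, one Aitken step recovers the limit exactly):

```python
# Aitken's delta-squared: improved estimate from three successive iterates.
def aitken(x0, x1, x2):
    d1, d2 = x1 - x0, x2 - x1
    return x2 - d2 * d2 / (d2 - d1)   # equivalent to x0 - d1**2/(d2 - d1)

# xn = 1 + 0.5**n gives iterates 2, 1.5, 1.25 converging linearly to 1
print(aitken(2.0, 1.5, 1.25))  # 1.0
```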
SOR is an extension of:
A Newton method
B Bisection method
C Secant method
D Gauss–Seidel method
SOR (Successive Over-Relaxation) modifies Gauss–Seidel by adding a relaxation factor. With a suitable factor, convergence can improve significantly for many large linear systems.
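SOR can be sketched as a Gauss–Seidel sweep blended with the old value (ω = 1.1 and the 2×2 system are illustrative assumptions; ω = 1 reduces to plain Gauss–Seidel):

```python
# SOR: over-relax each Gauss-Seidel update by a factor omega.
def sor(A, b, x0, omega=1.1, sweeps=50):
    n = len(b)
    x = list(x0)
    for _ in range(sweeps):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            gs = (b[i] - s) / A[i][i]              # Gauss-Seidel value
            x[i] = (1 - omega) * x[i] + omega * gs  # relaxed update
    return x

x = sor([[4.0, 1.0], [1.0, 3.0]], [6.0, 7.0], [0.0, 0.0])   # -> [1, 2]
```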
“Truncation error” is most closely linked to:
A Limited digits storage
B Replacing infinite by finite
C Using wrong bracket
D Derivative becoming zero
Truncation error appears when an infinite process is cut off, such as truncating a Taylor series or stopping an iteration early. The ignored remainder causes approximation error.
“Stability of iteration” in simple terms refers to:
A Small errors not explode
B Always same root
C Using large step
D No function needed
An iteration is stable if small numerical errors, like rounding, do not grow uncontrollably through repeated steps. Stable methods keep errors bounded, improving reliability of computed results.