Chapter 24: Numerical Methods and Approximation (Set-4)

When comparing two iterative methods, the one whose error drops more quickly near the root is said to have which order of convergence?

A Lower order
B Same order
C Higher order
D No order

If a method has linear convergence, the error typically behaves like:

A eₙ₊₁≈Ceₙ
B eₙ₊₁≈Ceₙ²
C eₙ₊₁≈C√eₙ
D eₙ₊₁≈C/eₙ

A method with convergence order p>1 is generally called:

A Sublinear
B Nonconvergent
C Superlinear
D Exact method

Which is a basic reason for stopping an iteration when solving f(x)=0?

A f(xₙ) grows
B |f(xₙ)| small
C xₙ becomes negative
D interval doubles

In bisection, the number of iterations depends mainly on:

A Initial interval
B Derivative size
C Polynomial degree
D Matrix symmetry

Given a sign change on [a,b], bisection guarantees a root only when the function is:

A Discontinuous
B Complex-valued
C Constant everywhere
D Continuous on [a,b]

The intermediate value theorem is used in bisection to justify:

A Tangent crossing
B Matrix splitting
C Sign-change root
D Eigenvalue bound

In bisection, after n iterations, the interval length becomes:

A (b−a)/2ⁿ
B (b−a)/n
C (b−a)·n
D (b−a)²
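The bisection facts above (midpoint (a+b)/2, interval length (b−a)/2ⁿ after n steps, sign-change requirement) can be sketched in a few lines of Python; this is my illustration, not code from the chapter:

```python
import math

def bisect(f, a, b, tol=1e-10):
    """Halve [a, b] until it is shorter than tol; requires a sign change."""
    if f(a) * f(b) > 0:
        raise ValueError("f must change sign on [a, b]")
    while (b - a) > tol:
        m = (a + b) / 2          # midpoint (a+b)/2
        if f(a) * f(m) <= 0:
            b = m                # root lies in [a, m]
        else:
            a = m                # root lies in [m, b]
    return (a + b) / 2

# After n steps the bracket has length (b-a)/2^n, so the iteration count
# for a tolerance tol is about log2((b-a)/tol), independent of f's slope.
root = bisect(math.cos, 0.0, 2.0)  # cos changes sign on [0, 2]; root ~ pi/2
```

Because the interval halves every step, the iteration count depends only on the initial interval and the tolerance, which is exactly why bisection's speed is insensitive to the derivative.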

Which method can suffer from “stagnation” even while bracketing the root?

A Newton method
B False position
C Secant method
D Fixed point

False position selects the next approximation using:

A Midpoint rule
B Quadratic fit
C Linear interpolation
D Derivative step

Newton’s method requires which extra condition to compute xₙ₊₁ reliably?

A f′(xₙ) not zero
B f(xₙ) always positive
C root bracketed
D diagonal dominance

Newton’s method is often fast near a simple root because it has:

A Linear order
B Zero order
C Negative order
D Quadratic order

A poor initial guess in Newton’s method can cause:

A Guaranteed convergence
B Interval halving
C Divergence or cycling
D Exact answer

Secant method replaces the derivative by using:

A Midpoint slope
B Two-point slope
C Exact tangent
D Second derivative

How many new function evaluations does the secant method typically need per step?

A One new evaluation
B Zero evaluations
C Three evaluations
D Ten evaluations

Secant method can fail when:

A f is continuous
B root is real
C f(xₙ)=f(xₙ₋₁)
D tolerance is used
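The three secant-method points above can be seen in one short sketch (my illustration): the derivative is replaced by a two-point slope, each step costs one new evaluation, and the step is undefined when f(xₙ) = f(xₙ₋₁):

```python
def secant(f, x0, x1, tol=1e-12, max_iter=60):
    """Replace f'(x) by the two-point slope (f(x1)-f(x0))/(x1-x0)."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if f1 == f0:
            raise ZeroDivisionError("f(x_n) = f(x_{n-1}): slope undefined")
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        x0, f0 = x1, f1
        x1, f1 = x2, f(x2)        # the single new evaluation per step
        if abs(x1 - x0) < tol:
            return x1
    return x1

r_sec = secant(lambda x: x * x - 2, 1.0, 2.0)  # approximates sqrt(2)
```

Reusing the previous value f(xₙ₋₁) is what keeps the cost at one evaluation per step, versus two (f and f′) for Newton.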

For solving Ax=b, the Jacobi method updates each component using:

A Updated same-step values
B Matrix inverse
C Random guesses
D Previous-step values

Gauss–Seidel often converges faster than Jacobi mainly because it:

A Uses updated values
B Avoids subtraction
C Uses exact inverse
D Uses bisection

A common sufficient condition that helps Jacobi and Gauss–Seidel converge is:

A Zero diagonal entries
B Non-square matrix
C Strict diagonal dominance
D Random coefficients

In iterative solvers, the residual for Ax=b is defined as:

A Ax+b
B b−Ax
C A−x
D x−b
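A minimal sketch of the last four ideas together (my illustration, plain Python lists rather than a linear-algebra library): Jacobi uses only previous-step values, Gauss–Seidel uses fresh ones immediately, strict diagonal dominance guarantees both converge, and the residual b − Ax measures how far an iterate is from solving the system:

```python
def jacobi(A, b, x0, sweeps=50):
    """Each component is computed from previous-step values only."""
    n = len(b)
    x = list(x0)
    for _ in range(sweeps):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

def gauss_seidel(A, b, x0, sweeps=50):
    """Same splitting, but freshly updated components are used immediately."""
    n = len(b)
    x = list(x0)
    for _ in range(sweeps):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x

A = [[4.0, 1.0], [2.0, 3.0]]   # strictly diagonally dominant rows
b = [6.0, 8.0]                 # exact solution: x = [1, 2]
xj = jacobi(A, b, [0.0, 0.0])
xg = gauss_seidel(A, b, [0.0, 0.0])
# residual r = b - Ax: near zero once the iteration has converged
res = [b[i] - sum(A[i][j] * xj[j] for j in range(2)) for i in range(2)]
```

On this small dominant system both iterations converge; Gauss–Seidel typically needs fewer sweeps because each update already sees the newest components.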

Spectral radius is the:

A Sum of eigenvalues
B Smallest eigenvalue
C Largest |eigenvalue|
D Determinant sign

If the iteration matrix has spectral radius ≥ 1, the method may:

A Diverge
B Converge faster
C Become exact
D Halve intervals

“Condition number” mainly indicates:

A Function continuity
B Root location
C Sensitivity to data
D Graph curvature

In computations, error propagation means:

A Errors always cancel
B Errors carry forward
C Errors become zero
D Errors are ignored

Which type of error mainly comes from cutting off a series or method early?

A Truncation error
B Round-off error
C Residual error
D Relative error

Round-off error is mainly due to:

A Interval selection
B Sign change
C Diagonal dominance
D Finite precision

Fixed-point iteration is written in the form:

A x=f′(x)
B x=1/f(x)
C x=g(x)
D x=f(x)²

A basic convergence condition for fixed-point iteration near the solution is:

A |g′(x)|<1
B |g′(x)|>1
C g′(x)=2 always
D g(x)=0 always
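A fixed-point sketch (my illustration) of the form x = g(x) with the contraction condition |g′(x)| < 1 near the solution:

```python
import math

def fixed_point(g, x0, tol=1e-12, max_iter=200):
    """Iterate x = g(x); converges locally when |g'(x)| < 1 near the solution."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:   # simple error estimate |x_{n+1} - x_n|
            return x_new
        x = x_new
    return x

# g(x) = cos(x): |g'(x)| = |sin(x)| ~ 0.67 < 1 near the fixed point x* ~ 0.739
r_fix = fixed_point(math.cos, 1.0)
```

Since |g′| ≈ 0.67 here, each step shrinks the error by roughly that factor: linear convergence, as question 2 of this set describes.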

In root finding, a “bracketing method” means the method:

A Uses derivatives
B Keeps sign-change interval
C Uses no function values
D Uses matrix inverse

Which pair are both open (non-bracketing) root methods?

A Bisection and regula
B Bisection and Newton
C Newton and secant
D Regula and fixed point

When solving Ax=b iteratively, the “error norm” is used to:

A Choose graph scale
B Compute determinant
C Find eigenvalues
D Measure error size

In bisection, the midpoint formula is:

A (a+b)/2
B (a−b)/2
C ab/2
D a/b

In false position, the next estimate is the x-intercept of the:

A Tangent line
B Secant line
C Normal line
D Horizontal line

Newton’s method is also called:

A Regula falsi
B Simpson method
C Newton–Raphson
D Euler method

For Newton’s method, the update step is based on:

A Tangent x-intercept
B Interval midpoint
C Secant midpoint
D Matrix diagonal

The main advantage of bisection over Newton is:

A Faster convergence
B No continuity needed
C Guaranteed convergence
D Needs derivatives

In iterative computation, “stopping criteria” helps avoid:

A Any computation
B Function continuity
C Root existence
D Unnecessary iterations

For the same tolerance, which method usually needs more iterations?

A Bisection method
B Newton method
C Secant method
D Aitken method

Aitken’s Δ² process is mainly used to:

A Solve linear systems
B Bracket a root
C Accelerate convergence
D Compute derivatives
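Aitken's Δ² process accelerates a linearly convergent sequence using x̂ₙ = xₙ − (Δxₙ)²/Δ²xₙ; a small sketch of this (my illustration), applied to the cos fixed-point iteration:

```python
import math

def aitken(seq):
    """Aitken's delta-squared: x_hat_n = x_n - (dx_n)**2 / (d2x_n)."""
    out = []
    for n in range(len(seq) - 2):
        d1 = seq[n + 1] - seq[n]                      # forward difference
        d2 = seq[n + 2] - 2 * seq[n + 1] + seq[n]     # second difference
        out.append(seq[n] - d1 * d1 / d2 if d2 != 0 else seq[n])
    return out

# Linearly convergent fixed-point sequence x_{n+1} = cos(x_n)
xs = [1.0]
for _ in range(8):
    xs.append(math.cos(xs[-1]))
accel = aitken(xs)   # entries lie much closer to the limit 0.739085...
```

The accelerated entries approach the fixed point noticeably faster than the raw iterates, which is the whole point of the Δ² transform.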

SOR method is closely related to:

A Newton–Raphson
B Gauss–Seidel
C Bisection
D Secant

“Computational complexity” in basic terms means:

A Graph smoothness
B Root sign
C Operation count
D Interval size

In numerical methods, “accuracy” mainly refers to:

A Closeness to true
B Speed only
C Memory use only
D Graph type

“Precision” in computation is mainly about:

A Algorithm type
B Root location
C Function continuity
D Digits stored

For strict diagonal dominance in row i, the requirement is:

A |aᵢᵢ| < sum others
B |aᵢᵢ| = 0
C |aᵢᵢ| > sum others
D all entries equal
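The row condition |aᵢᵢ| > Σⱼ≠ᵢ |aᵢⱼ| is a one-liner to check; a small sketch (my illustration):

```python
def strictly_diagonally_dominant(A):
    """True if |a_ii| > sum of |a_ij| over j != i, for every row i."""
    n = len(A)
    return all(
        abs(A[i][i]) > sum(abs(A[i][j]) for j in range(n) if j != i)
        for i in range(n)
    )

ok = strictly_diagonally_dominant([[4, 1], [2, 3]])    # True
bad = strictly_diagonally_dominant([[1, 2], [3, 4]])   # False
```

When this check passes, both Jacobi and Gauss–Seidel are guaranteed to converge, which is why it appears as the standard sufficient condition earlier in this set.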

“Iteration matrix” is mainly used to analyze:

A Curve sketching
B Convergence behavior
C Numerical integration
D Differentiation rules

A simple error estimate used in iterations is:

A |xₙ₊₁−xₙ|
B |a+b|
C |f′(x)| only
D determinant value

In bracketing methods, “root isolation” means:

A Derivative equals one
B Computing eigenvalues
C Finding sign-change interval
D Choosing random guesses

In Newton’s method, the “initial guess” should ideally be:

A Very far away
B Always negative
C Always zero
D Close to root

In the Jacobi method, the diagonal part D is important because:

A It is ignored
B It is inverted
C It is set zero
D It is squared

“Numerical integration preview” in this chapter mainly means:

A Approximate area methods
B Exact antiderivative
C Matrix convergence
D Root bracketing
