Chapter 11: Matrices and Determinants (Set-2)

A 1×n matrix is commonly called

A Row matrix
B Column matrix
C Square matrix
D Null matrix

An n×1 matrix is commonly called

A Row matrix
B Diagonal matrix
C Column matrix
D Scalar matrix

Two matrices are equal when

A Same order only
B Same entries positionwise
C Same determinant
D Same diagonal sum

Subtraction A − B is possible when

A det(A)=det(B)
B A is square
C Same order only
D B is diagonal

For matrices, distributive law means

A A(B+C)=AB+AC
B A(B+C)=A+B+C
C AB+AC=A(BC)
D A(B+C)=AB·AC

If A is m×n, then Aᵀ is

A m×n
B m×m
C n×m
D n×n

For matrices A and B of the same order, (A+B)ᵀ equals

A AᵀBᵀ
B BᵀAᵀ
C Aᵀ−Bᵀ
D Aᵀ+Bᵀ
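As a quick numeric check of the property in this question, the transpose of a sum equals the sum of the transposes. The 2×3 matrices below are arbitrary examples of my own, not from the text:

```python
# Check (A + B)^T = A^T + B^T on small example matrices.

def transpose(M):
    """Transpose of a matrix stored as a list of rows."""
    return [list(col) for col in zip(*M)]

def add(M, N):
    """Entrywise sum of two matrices of the same order."""
    return [[a + b for a, b in zip(r, s)] for r, s in zip(M, N)]

A = [[1, 2, 3], [4, 5, 6]]
B = [[7, 8, 9], [0, 1, 2]]

# Both sides give the same 3x2 matrix.
assert transpose(add(A, B)) == add(transpose(A), transpose(B))
```

Note also that transposing flips the order from 2×3 to 3×2, which matches the earlier m×n → n×m question.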

For any matrix A and scalar k, the transpose of kA is

A kAᵀ
B kA
C (1/k)Aᵀ
D Aᵀ+k

If A is skew-symmetric, diagonal entries are

A Any real values
B Always one
C Always zero
D Always equal

A square matrix with det ≠ 0 is

A Singular matrix
B Zero matrix
C Nilpotent matrix
D Non-singular matrix

If A² = 0 for some nonzero A, A is

A Diagonal
B Nilpotent
C Orthogonal
D Scalar

For an orthogonal matrix A, the inverse equals

A A
B −A
C Aᵀ
D adj(A)
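The defining property here is AᵀA = I, so the transpose acts as the inverse. A sketch using a 2D rotation matrix (a standard orthogonal example; the 30° angle is my own choice):

```python
import math

# For an orthogonal matrix, A^T * A = I, so A^{-1} = A^T.
# Checked numerically on a 2D rotation matrix.

t = math.pi / 6  # rotate by 30 degrees
A = [[math.cos(t), -math.sin(t)],
     [math.sin(t),  math.cos(t)]]

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(len(N)))
             for j in range(len(N[0]))] for i in range(len(M))]

At = [list(col) for col in zip(*A)]
P = matmul(At, A)  # should be the identity, up to float rounding

assert all(abs(P[i][j] - (1 if i == j else 0)) < 1e-12
           for i in range(2) for j in range(2))
```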

A diagonal matrix with all diagonal entries 1 is

A Identity matrix
B Zero matrix
C Scalar matrix
D Row matrix

An elementary row operation “Rᵢ ↔ Rⱼ” means

A Add rows
B Multiply row by k
C Swap two rows
D Make row zero

Determinant equals zero when rows are

A Independent
B Linearly dependent
C Orthogonal only
D All nonzero

In Laplace expansion along a row, terms are

A element × minor
B minor × minor
C cofactor × cofactor
D element × cofactor

A 3×3 determinant expanded along a column uses

A Only diagonal product
B Row operations only
C Cofactors of that column
D Trace values

If one row of a determinant D is multiplied by k, the determinant becomes

A kD
B D/k
C D+k
D D−k
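A small check of this linearity-in-one-row property; the 2×2 entries and the factor k = 4 are arbitrary example values:

```python
# Multiplying one row of a determinant by k multiplies the whole
# determinant by k. Demonstrated on a 2x2 example.

def det2(M):
    """Determinant of a 2x2 matrix: ad - bc."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

A = [[3, 1], [2, 5]]                 # det = 15 - 2 = 13
k = 4
Ak = [[k * x for x in A[0]], A[1]]   # scale the first row only

assert det2(Ak) == k * det2(A)       # 52 == 4 * 13
```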

The determinant is unchanged under the operation

A Swap two rows
B Multiply a row by k
C Rᵢ → Rᵢ + kRⱼ
D Make two rows equal

For square matrix A, det(A⁻¹) equals

A det(A)
B 1/det(A)
C −det(A)
D det(A)²

If A is invertible, then A·A⁻¹ equals

A Identity matrix
B Zero matrix
C Aᵀ
D adj(A)

If det(A)=5, det(2A) for 3×3 A is

A 10
B 20
C 40
D 5/2
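The reasoning: det(kA) = kⁿ·det(A) for an n×n matrix, since every one of the n rows is scaled by k. So here det(2A) = 2³·5 = 40. A sketch with a triangular 3×3 matrix chosen (my own example) to have determinant 5:

```python
# det(kA) = k^n * det(A). For det(A) = 5 and a 3x3 A: det(2A) = 2**3 * 5 = 40.

def det3(M):
    """3x3 determinant by cofactor expansion along the first row."""
    a, b, c = M[0]
    return (a * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - b * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + c * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

A = [[5, 7, 1], [0, 1, 3], [0, 0, 1]]    # triangular, det = 5*1*1 = 5
A2 = [[2 * x for x in row] for row in A]  # this is 2A

assert det3(A) == 5
assert det3(A2) == 40    # 2**3 * 5
```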

If det(A)=3 and det(B)=4, det(AB) is

A 7
B 12
C 1
D 3/4
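This uses multiplicativity: det(AB) = det(A)·det(B) = 3·4 = 12. A check with 2×2 matrices picked (arbitrarily) to have determinants 3 and 4:

```python
# det(AB) = det(A) * det(B), checked on a 2x2 example.

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[3, 0], [1, 1]]   # det = 3
B = [[2, 1], [0, 2]]   # det = 4

AB = matmul(A, B)
assert det2(A) == 3 and det2(B) == 4
assert det2(AB) == 12
```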

A matrix with Aᵀ=A and det(A)=0 is

A Skew non-singular
B Orthogonal diagonal
C Nilpotent scalar
D Symmetric singular

For 2×2 matrix A, if det(A)=0 then A is

A Singular
B Invertible
C Non-singular
D Orthogonal

Gaussian elimination mainly aims to convert the matrix to

A Identity form
B Diagonal form only
C Row echelon form
D Symmetric form

The Gauss–Jordan method mainly aims to convert the matrix to

A Row echelon only
B Reduced row echelon
C Upper triangular only
D Lower triangular only

A system is consistent when

A rank(A) = rank([A|B])
B rank(A) < rank([A|B])
C det(A) = 0 always
D trace(A) = 0

Infinitely many solutions occur when

A ranks unequal
B det(A)≠0 always
C ranks equal and less than n
D rank(A)=n always

Inverse method solves AX=B by

A X = A+B
B X = AB⁻¹
C X = AᵀB
D X = A⁻¹B
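A sketch of the inverse method for a 2×2 system, building A⁻¹ = adj(A)/det(A) (valid when det(A) ≠ 0). The system x + 2y = 5, 3x + 4y = 11 is an assumed example, not from the text:

```python
from fractions import Fraction

# Solve AX = B as X = A^{-1} B, with the 2x2 inverse from the adjugate.

A = [[1, 2], [3, 4]]
B = [5, 11]

det = A[0][0] * A[1][1] - A[0][1] * A[1][0]          # 1*4 - 2*3 = -2
inv = [[Fraction(A[1][1], det), Fraction(-A[0][1], det)],
       [Fraction(-A[1][0], det), Fraction(A[0][0], det)]]

x = inv[0][0] * B[0] + inv[0][1] * B[1]
y = inv[1][0] * B[0] + inv[1][1] * B[1]

assert (x, y) == (1, 2)   # check: 1 + 2*2 = 5 and 3*1 + 4*2 = 11
```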

For 2×2 A, if A·adj(A)=kI, then k equals

A trace(A)
B rank(A)
C det(A)
D 0 always
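For a 2×2 matrix [[a, b], [c, d]], the adjugate is [[d, −b], [−c, a]], and multiplying gives A·adj(A) = det(A)·I. A check with arbitrary example entries:

```python
# For 2x2 A: A * adj(A) = det(A) * I, so the constant k is det(A).

A = [[3, 2], [1, 4]]
adjA = [[A[1][1], -A[0][1]], [-A[1][0], A[0][0]]]   # [[4, -2], [-1, 3]]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]          # 12 - 2 = 10

P = [[sum(A[i][k] * adjA[k][j] for k in range(2)) for j in range(2)]
     for i in range(2)]

assert P == [[det, 0], [0, det]]   # i.e. det(A) * I
```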

The adjoint helps compute inverse when

A det(A)≠0
B det(A)=0
C A is rectangular
D A is zero matrix

A reflection matrix in 2D has determinant

A 1
B 0
C −1
D 2

A 2D rotation matrix is

A Singular symmetric
B Nilpotent diagonal
C Skew with det 0
D Orthogonal with det 1

Matrix representing linear map must be

A Only square always
B Any m×n possible
C Only diagonal
D Only symmetric

Rank of a matrix means

A Sum of diagonal
B Determinant value
C Number of pivots
D Product of entries
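The rank equals the number of pivots left after row reduction. A minimal Gaussian-elimination sketch (my own, without the refinements a library routine would use); the 3×3 test matrix has a dependent third row, so its rank is 2:

```python
from fractions import Fraction

# Rank = number of pivots found during row reduction.

def rank(M):
    M = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0  # next pivot row
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if pivot is None:
            continue                      # no pivot in this column
        M[r], M[pivot] = M[pivot], M[r]   # swap pivot row up
        for i in range(r + 1, rows):      # clear entries below the pivot
            f = M[i][c] / M[r][c]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

A = [[1, 2, 3],
     [4, 5, 6],
     [5, 7, 9]]   # row3 = row1 + row2, so the rows are dependent

assert rank(A) == 2
```

This also illustrates the earlier questions: rank 2 < 3 for a 3×3 matrix forces det(A) = 0, i.e. A is singular.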

Row-reduced echelon form has

A Leading 1s with zeros elsewhere in pivot columns
B Zeros below pivots only
C Only diagonal nonzero
D Same as transpose

Elementary matrices are formed by

A Taking transpose
B Multiplying by scalar only
C Single row operation on I
D Using minors only

If E is an elementary matrix, then E is

A Always singular
B Always invertible
C Always symmetric
D Always nilpotent

If A is 3×3 and rank(A)=3, then A is

A Singular
B Nilpotent
C Skew-symmetric
D Non-singular

If rank(A)=2 for a 3×3 matrix, det(A) is

A Nonzero
B Always 1
C Zero
D Always −1

If A is symmetric, then A + Aᵀ equals

A 2A
B 0 matrix
C Aᵀ only
D Identity matrix

For any square matrix A, A − Aᵀ is always

A Symmetric
B Skew-symmetric
C Diagonal
D Identity

In 2D, a linear map scales area by

A Trace value
B Rank number only
C |det(A)| factor
D Cofactor sum

In 3D, a linear map scales volume by

A |det(A)| factor
B trace(A)
C det(A) only
D rank(A)

For a 2×2 system, Cramer’s rule uses

A Two traces
B Two ranks
C Two transposes
D Ratios of determinants
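Cramer's rule for a 2×2 system computes x = Dₓ/D and y = D_y/D, where Dₓ and D_y replace one column of A with the right-hand side. The system 2x + y = 5, x + 3y = 10 is an assumed example:

```python
from fractions import Fraction

# Cramer's rule for a 2x2 system A [x, y] = b.

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

A = [[2, 1], [1, 3]]
b = [5, 10]

D  = det2(A)                                     # 2*3 - 1*1 = 5
Dx = det2([[b[0], A[0][1]], [b[1], A[1][1]]])    # b replaces column 1
Dy = det2([[A[0][0], b[0]], [A[1][0], b[1]]])    # b replaces column 2

x, y = Fraction(Dx, D), Fraction(Dy, D)
assert (x, y) == (1, 3)   # check: 2*1 + 3 = 5 and 1 + 3*3 = 10
```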

For a square coefficient matrix A, the determinant test for solutions shows

A det(A)=0 only
B trace(A)=0 gives unique
C det(A)≠0 gives unique
D rank always equals n

LU factorization idea is to write A as

A A = LU
B A = U + L
C A = UL only
D A = LᵀUᵀ
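A minimal Doolittle-style sketch of A = LU, with L unit lower triangular and U upper triangular. No pivoting is done, so it assumes nonzero pivots; the 3×3 matrix is an arbitrary example:

```python
from fractions import Fraction

# LU factorization sketch (Doolittle, no pivoting): A = L * U.

def lu(A):
    n = len(A)
    U = [[Fraction(x) for x in row] for row in A]
    L = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    for c in range(n - 1):
        for i in range(c + 1, n):
            L[i][c] = U[i][c] / U[c][c]                    # multiplier
            U[i] = [a - L[i][c] * b for a, b in zip(U[i], U[c])]
    return L, U

def matmul(M, N):
    n = len(M)
    return [[sum(M[i][k] * N[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[2, 1, 1], [4, 3, 3], [8, 7, 9]]
L, U = lu(A)

# Multiplying the factors back recovers A.
assert matmul(L, U) == [[Fraction(x) for x in row] for row in A]
```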

Characteristic equation is linked to

A Trace only
B Determinant always zero
C Eigenvalues idea
D Row operations

Adjacency matrix idea is used in

A Geometry only
B Graph networks
C Determinant expansion
D Inverse formula
