Order of a matrix is written as (number of rows) × (number of columns). Here rows = 3 and columns = 2, so the order is 3 × 2.
Which matrix has all entries zero
A Identity matrix
B Diagonal matrix
C Zero matrix
D Scalar matrix
A zero matrix has every element equal to 0. It can be of any order, and it acts as the additive identity because A + 0 = A for all matrices A.
Addition of matrices is possible when
A Same number of entries
B Same order only
C Same trace value
D Same determinant
Two matrices can be added only if they have the same number of rows and the same number of columns. Then addition is done entry-wise, position by position.
Scalar multiplication of a matrix means
A Multiply two matrices
B Add each row
C Transpose entries
D Multiply each entry
In scalar multiplication, every element of the matrix is multiplied by the same scalar (number). If k is a scalar, then (kA) has entries k·aᵢⱼ.
Matrix multiplication AB is defined when
A A and B same order
B Rows of A = columns of B
C Columns of A = rows of B
D det(A) = det(B)
If A is m×n and B is n×p, then AB is defined and the result is an m×p matrix. Inner dimensions (n) must match.
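The dimension rule above can be sketched in plain Python (the function name `matmul` and the example matrices are illustrative, not from the text):

```python
# Sketch: multiply an m×n matrix by an n×p matrix using nested lists.
# The inner dimension n must match; the result has order m×p.
def matmul(A, B):
    m, n, p = len(A), len(B), len(B[0])
    assert len(A[0]) == n, "columns of A must equal rows of B"
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(m)]

A = [[1, 2, 3],
     [4, 5, 6]]        # 2×3
B = [[1, 0],
     [0, 1],
     [1, 1]]           # 3×2
C = matmul(A, B)       # 2×2 result
print(C)               # → [[4, 5], [10, 11]]
```

Swapping the operands (trying `matmul(B, A)` with, say, a 3×2 times a 3×2) would trip the assertion, matching the next question.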
Which property is generally false for matrices
A Commutative of multiplication
B Associative of addition
C Distributive over addition
D Additive identity exists
In general, AB ≠ BA for matrices, so multiplication is not commutative. However, matrix addition is commutative and associative, and distributive laws hold.
For any compatible matrix A, the identity matrix I satisfies
A AI = 0
B AI = A and IA = A
C IA = 0
D AI = Aᵀ
The identity matrix I acts like 1 in multiplication. For any compatible matrix A, multiplying by I on either side keeps A unchanged: AI = A and IA = A.
Transpose of a matrix changes
A Values to negatives
B Diagonal into zeros
C Rows into columns
D Order unchanged always
Transpose interchanges rows and columns. If A is m×n, then Aᵀ is n×m. Element at (i, j) becomes the element at (j, i).
A matrix A is symmetric if
A A = −A
B Aᵀ = 0
C det(A) = 0
D A = Aᵀ
A square matrix is symmetric when it equals its transpose, meaning aᵢⱼ = aⱼᵢ for all i and j. Only square matrices can be symmetric.
A matrix A is skew-symmetric if
A A = Aᵀ
B A = −Aᵀ
C A = I
D A = 0 only
In a skew-symmetric matrix, aᵢⱼ = −aⱼᵢ. This forces all diagonal entries to be zero because aᵢᵢ = −aᵢᵢ implies aᵢᵢ = 0.
Which matrix must be square
A Symmetric matrix
B Row matrix
C Column matrix
D Rectangular matrix
Symmetry requires comparing aᵢⱼ with aⱼᵢ, which needs both positions to exist. That is only possible when the matrix has an equal number of rows and columns, i.e. when it is square.
If A is 2×3 and B is 3×4, order of AB is
A 3×3
B 3×4
C 2×4
D 2×3
When A is m×n and B is n×p, AB becomes m×p. Here m=2 and p=4, so AB has order 2×4.
If A is 3×2 and B is 3×2, AB is
A Defined
B Not defined
C Identity always
D Zero always
For AB to exist, columns of A must equal rows of B. A has 2 columns, B has 3 rows, so sizes do not match and AB cannot be formed.
The transpose property for product is
A (AB)ᵀ = AᵀBᵀ
B (AB)ᵀ = ABᵀ
C (AB)ᵀ = AᵀB
D (AB)ᵀ = BᵀAᵀ
Transpose reverses the order of multiplication. So the transpose of a product equals the product of transposes in reverse order, provided dimensions are compatible.
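A quick numerical check of the reversal rule (AB)ᵀ = BᵀAᵀ, using numpy on example matrices of my own choosing:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])    # 2×3
B = np.array([[1, 0],
              [2, 1],
              [0, 3]])       # 3×2

lhs = (A @ B).T              # transpose of the product
rhs = B.T @ A.T              # product of transposes, reversed order
print(np.array_equal(lhs, rhs))   # → True
```

Note that AᵀBᵀ would not even be dimension-compatible here (3×2 times 2×3 gives 3×3, the wrong shape), which is one way to remember why the order must reverse.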
Which is always true for transpose
A (Aᵀ)ᵀ = 0
B (Aᵀ)ᵀ = Aᵀ
C (Aᵀ)ᵀ = A
D (Aᵀ)ᵀ = I
Taking transpose twice returns the original matrix. First transpose swaps rows and columns; the second swap brings them back, so (Aᵀ)ᵀ = A.
Trace of a square matrix is
A Product of diagonal
B Sum of diagonal entries
C Sum of all entries
D Determinant value
Trace is defined only for square matrices and equals the sum of the main diagonal elements a₁₁ + a₂₂ + … + aₙₙ. It is useful in many matrix properties.
A diagonal matrix has nonzero entries only on
A Main diagonal
B Any row
C Any column
D Off-diagonal only
In a diagonal matrix, all off-diagonal elements are zero. Only the main diagonal may have nonzero values. It is always a square matrix.
A scalar matrix is a diagonal matrix with
A All entries equal
B Zero diagonal only
C All diagonal equal
D Ones off-diagonal
A scalar matrix is of the form kI, where all diagonal entries are the same scalar k and every off-diagonal entry is 0. Identity matrix is a special case with k=1.
For a square matrix A, A + (−A) equals
A I
B 0 matrix
C A
D Aᵀ
The additive inverse of A is −A, formed by negating each entry. Adding A and −A cancels each element to zero, giving the zero matrix of the same order.
Determinant is defined for
A Any matrix
B Only diagonal matrix
C Only zero matrix
D Only square matrix
Determinant is defined only for square matrices (n×n). It produces a single number that helps test invertibility, solve systems, and compute geometric scaling.
Determinant of a 2×2 matrix [[a, b], [c, d]] is
A ad + bc
B ab − cd
C ad − bc
D ac − bd
For a 2×2 matrix, determinant is computed as ad − bc. This value decides if the matrix is invertible: if ad − bc ≠ 0, inverse exists.
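The ad − bc rule is a one-liner; a small sketch (the helper name `det2` is mine):

```python
# 2×2 determinant: for [[a, b], [c, d]], det = a*d - b*c
def det2(M):
    (a, b), (c, d) = M
    return a * d - b * c

print(det2([[3, 1], [4, 2]]))   # 3*2 - 1*4 → 2, so invertible
print(det2([[1, 2], [2, 4]]))   # 1*4 - 2*2 → 0, so singular
```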
Determinant of an identity matrix of order n is
A 0
B 1
C n
D −1
The identity matrix has ones on the diagonal and zeros elsewhere. Its determinant equals 1 for any order. This matches the idea that it preserves volume/area scaling.
Determinant of a triangular matrix equals
A Product of diagonal
B Sum of diagonal
C Zero always
D Twice trace
For upper or lower triangular matrices, determinant is simply the product of diagonal entries. Off-diagonal terms do not affect it due to determinant expansion properties.
If two rows of a determinant are equal, determinant is
A 1
B −1
C 0
D Depends on size
If any two rows (or columns) are identical, the determinant becomes zero. This indicates linear dependence and also means the matrix is singular (non-invertible).
Swapping two rows changes determinant by factor
A +1
B −1
C 0
D 2
Interchanging any two rows (or columns) reverses the sign of the determinant. So det changes from D to −D. This is a key row-operation effect.
Multiplying one row by k changes determinant to
A D + k
B D/k
C D²
D kD
If a single row (or column) is multiplied by k, the determinant is multiplied by k. The other rows are unchanged, so scaling one row scales the determinant by the same factor.
Adding a multiple of one row to another row makes determinant
A Doubled
B Zero always
C Unchanged
D Negative always
Replacing a row by (row + k × another row) does not change the determinant. This operation is used in simplifying determinants and in row-reduction methods.
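The three row-operation effects described above (swap flips the sign, scaling a row scales the determinant, adding a multiple of one row to another leaves it unchanged) can all be verified on one small example; the matrix is my own:

```python
import numpy as np

A = np.array([[2., 1.],
              [5., 3.]])           # det = 2*3 - 1*5 = 1
d = np.linalg.det(A)

swapped = A[[1, 0], :]             # interchange the two rows
scaled = A.copy(); scaled[0] *= 4  # multiply row 0 by k = 4
added = A.copy(); added[1] += 3 * A[0]   # row1 ← row1 + 3·row0

print(np.linalg.det(swapped))  # ≈ -1  (sign flipped)
print(np.linalg.det(scaled))   # ≈  4  (multiplied by k)
print(np.linalg.det(added))    # ≈  1  (unchanged)
```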
Determinant of transpose satisfies
A det(Aᵀ) = det(A)
B det(Aᵀ) = −det(A)
C det(Aᵀ) = 1/det(A)
D det(Aᵀ) = 0 always
Transposing only swaps rows with columns, and determinant remains the same. So det(Aᵀ) equals det(A) for every square matrix A.
Determinant of product satisfies
A det(AB)=det(A)+det(B)
B det(AB)=det(A)det(B)
C det(AB)=det(A)−det(B)
D det(AB)=det(A)/det(B)
For square matrices of same order, determinant of a product equals product of determinants. This helps in finding determinants of complicated products quickly.
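A one-example sanity check of det(AB) = det(A)·det(B), with matrices chosen for illustration:

```python
import numpy as np

A = np.array([[2., 1.],
              [0., 3.]])   # det = 6
B = np.array([[1., 4.],
              [2., 5.]])   # det = 1*5 - 4*2 = -3

lhs = np.linalg.det(A @ B)
rhs = np.linalg.det(A) * np.linalg.det(B)
print(np.isclose(lhs, rhs))   # → True (both ≈ -18)
```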
A matrix is singular when
A det(A) = 1
B trace(A) = 0
C det(A) = 0
D A = Aᵀ
A square matrix is singular if its determinant is zero. Then it has no inverse, and its rows/columns are linearly dependent, causing systems to have no unique solution.
Inverse of A exists when
A det(A)=0
B trace(A)=0
C A is rectangular
D det(A)≠0
A square matrix has an inverse only if it is non-singular, meaning det(A) ≠ 0. This ensures unique solutions and that A can be “undone” by A⁻¹.
Inverse formula using adjoint is
A A⁻¹ = adj(A)/det(A)
B A⁻¹ = det(A)·adj(A)
C A⁻¹ = adj(Aᵀ)
D A⁻¹ = det(Aᵀ)
For a non-singular matrix A, inverse is given by A⁻¹ = adj(A)/det(A). The adjoint is transpose of cofactor matrix, and det(A) must be nonzero.
Inverse of 2×2 matrix [[a, b], [c, d]] is
A (1/(ad+bc))·[[d, b], [c, a]]
B (1/(ab−cd))·[[a, d], [b, c]]
C (1/(ad−bc))·[[d, −b], [−c, a]]
D (1/(ac−bd))·[[d, −c], [−b, a]]
For 2×2 matrix, inverse exists if ad−bc ≠ 0. Swap a and d, change signs of b and c, then divide by determinant (ad−bc).
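The swap-and-negate recipe translates directly into code; a minimal sketch (function name `inv2` is mine, and division by the determinant assumes ad − bc ≠ 0):

```python
# 2×2 inverse via the adjoint formula, assuming ad - bc != 0:
# swap a and d, negate b and c, divide every entry by the determinant.
def inv2(M):
    (a, b), (c, d) = M
    det = a * d - b * c
    if det == 0:
        raise ValueError("singular matrix: no inverse")
    return [[d / det, -b / det],
            [-c / det, a / det]]

print(inv2([[3, 1], [4, 2]]))   # det = 2 → [[1.0, -0.5], [-2.0, 1.5]]
```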
Property of inverse of product is
A (AB)⁻¹ = A⁻¹B⁻¹
B (AB)⁻¹ = B⁻¹A⁻¹
C (AB)⁻¹ = (Aᵀ)(Bᵀ)
D (AB)⁻¹ = A+B
Inverse of a product reverses order: (AB)⁻¹ = B⁻¹A⁻¹, when A and B are invertible. This is similar to transpose rule but for inverses.
Inverse of transpose property is
A (Aᵀ)⁻¹ = −A⁻¹
B (Aᵀ)⁻¹ = A⁻¹
C (Aᵀ)⁻¹ = A
D (Aᵀ)⁻¹ = (A⁻¹)ᵀ
If A is invertible, then transpose is also invertible and (Aᵀ)⁻¹ equals (A⁻¹)ᵀ. This comes from transposing both sides of AA⁻¹ = I.
If A is orthogonal, then
A AᵀA = 0
B det(A) = 0
C AᵀA = I
D A = −Aᵀ
An orthogonal matrix satisfies AᵀA = I, meaning its columns are perpendicular unit vectors. Also, A⁻¹ = Aᵀ, which makes computations easier in rotations.
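Rotation matrices are the standard example of orthogonality; a quick numpy check that AᵀA = I and A⁻¹ = Aᵀ (the angle is arbitrary):

```python
import numpy as np

theta = np.pi / 6
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # 2D rotation matrix

print(np.allclose(R.T @ R, np.eye(2)))            # → True: RᵀR = I
print(np.allclose(np.linalg.inv(R), R.T))         # → True: R⁻¹ = Rᵀ
```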
A matrix is idempotent if
A A² = I
B A² = A
C A² = 0
D Aᵀ = A
Idempotent means multiplying the matrix by itself gives the same matrix: A² = A. Such matrices act like “projection” operators in advanced topics.
A matrix is involutory if
A A² = I
B A² = A
C A³ = 0
D det(A)=0
Involutory matrices satisfy A² = I, so they are their own inverses: A⁻¹ = A. Many reflection matrices are involutory in geometry.
A matrix is nilpotent if
A A² = I
B A = Aᵀ
C Aᵏ = 0 for some k
D det(A) = 1
Nilpotent means some positive power of A becomes the zero matrix. For example, a strictly upper triangular matrix is nilpotent. Such matrices always have determinant 0.
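The strictly upper triangular example mentioned above, checked numerically (the matrix entries are my own):

```python
import numpy as np

N = np.array([[0, 1, 2],
              [0, 0, 3],
              [0, 0, 0]])    # strictly upper triangular 3×3

# N² is still nonzero, but N³ is the zero matrix, so N is nilpotent with k = 3.
print(N @ N)        # has a single nonzero entry (top-right = 3)
print(N @ N @ N)    # zero matrix
```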
Cofactor of element aᵢⱼ equals
A Minor ÷ (−1)^{i+j}
B Minor × (−1)^{i+j}
C Minor × (−1)^{i−j}
D Minor + (−1)^{i+j}
The cofactor Cᵢⱼ is defined as (−1)^{i+j} times the minor Mᵢⱼ. This alternating sign pattern is used in Laplace expansion of determinants.
Minor of element aᵢⱼ is determinant of
A Same matrix
B Matrix after transposing
C Matrix after swapping rows
D Matrix after deleting the i-th row and j-th column
Minor Mᵢⱼ is the determinant formed by removing the i-th row and j-th column. It is used to build cofactors and to compute determinant by expansion.
Expansion of determinant along a row uses
A Only minors
B Cofactors and elements
C Only traces
D Only diagonal product
Laplace expansion along a row/column uses sum of (element × corresponding cofactor). This method helps compute determinants of 3×3 or higher matrices systematically.
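The minor/cofactor machinery of the last few questions fits in one short recursive function; a sketch of Laplace expansion along the first row (helper name `det` is mine):

```python
# Recursive Laplace expansion along row 0:
# det(A) = sum over j of a[0][j] * (-1)^j * M(0, j),
# where M(0, j) is the minor from deleting row 0 and column j.
def det(A):
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j+1:] for row in A[1:]]  # delete row 0, col j
        total += ((-1) ** j) * A[0][j] * det(minor)
    return total

print(det([[3, 1], [4, 2]]))   # → 2, matching ad - bc
print(det([[1, 2, 3],
           [4, 5, 6],
           [7, 8, 10]]))       # → -3
```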
Adjoint of A is
A Transpose of cofactor matrix
B Inverse of A
C Matrix of minors only
D Determinant of A
The adjoint (adj(A)) is obtained by taking cofactors of A to form a matrix and then transposing it. It is used in the formula A⁻¹ = adj(A)/det(A).
If det(A)=0, then system AX=B generally has
A Always unique solution
B Always no solution
C No or infinite solutions
D Always infinite only
When det(A)=0, A is singular and cannot have an inverse. Then the linear system may be inconsistent (no solution) or dependent (infinitely many solutions), depending on B.
Cramer’s rule applies to systems with
A det(A)=0
B det(A)≠0
C Rectangular A only
D Homogeneous only
Cramer’s rule gives unique solution for a square system when det(A) ≠ 0. Each variable is a ratio of determinants, so nonzero determinant is necessary.
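A sketch of Cramer's rule for a 2×2 system (the function name and the example system are mine); each variable is the ratio of a modified determinant to det(A), which is why det(A) ≠ 0 is required:

```python
# Cramer's rule for the system, assuming det(A) != 0:
#   a1*x + b1*y = c1
#   a2*x + b2*y = c2
def cramer2(a1, b1, c1, a2, b2, c2):
    D = a1 * b2 - b1 * a2          # det of coefficient matrix
    if D == 0:
        raise ValueError("det(A) = 0: Cramer's rule does not apply")
    Dx = c1 * b2 - b1 * c2         # constants replace the x-column
    Dy = a1 * c2 - c1 * a2         # constants replace the y-column
    return Dx / D, Dy / D

# 2x + y = 5 and x - y = 1 has solution x = 2, y = 1:
print(cramer2(2, 1, 5, 1, -1, 1))   # → (2.0, 1.0)
```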
An augmented matrix represents
A Only coefficients
B Only constants
C Only identity part
D Coefficients with constants
An augmented matrix combines coefficient matrix A and constant vector B into one matrix [A | B]. It is used in Gauss elimination and Gauss-Jordan methods.
A system has a unique solution when
A det(A)=0
B rank(A) < rank([A|B])
C rank(A)=rank([A|B])=n
D rank(A)=0
For n variables, unique solution occurs when the ranks of coefficient and augmented matrices are equal and equal to n. Then the system is consistent and fully determined.
A system has no solution when
A rank(A)=rank([A|B])
B rank(A) < rank([A|B])
C det(A)≠0 always
D rank(A)=n always
If augmented matrix rank is greater than coefficient rank, the system becomes inconsistent. This happens when equations contradict each other after row reduction, giving statements like 0 = nonzero.
For homogeneous system AX=0, it always has
A At least trivial solution
B No solution
C Unique nontrivial solution
D Infinite always
X=0 always satisfies AX=0, so every homogeneous system has the trivial solution. Nontrivial solutions exist when det(A)=0 (or rank < n), giving dependence.
Area of a triangle using coordinates can be found using
A Trace
B Transpose
C Determinant
D Identity matrix
In 2D, the area of a triangle with vertices (x₁,y₁), (x₂,y₂), (x₃,y₃) is (1/2)|det M|, where M is the 3×3 matrix whose rows are (xᵢ, yᵢ, 1). The determinant measures area scaling, which is why it recovers the triangle's area.
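Expanding that 3×3 determinant along its last column gives the familiar coordinate formula, sketched below (the function name is mine):

```python
# Triangle area from vertices via the 3x3 determinant with rows (xi, yi, 1).
# Expanding the determinant gives:
#   det = x1*(y2 - y3) + x2*(y3 - y1) + x3*(y1 - y2)
#   Area = |det| / 2
def triangle_area(p1, p2, p3):
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    det = x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)
    return abs(det) / 2

# Right triangle with legs 4 and 3: area = (1/2)*4*3 = 6.
print(triangle_area((0, 0), (4, 0), (0, 3)))   # → 6.0
```

A zero result means the three points are collinear, mirroring the earlier fact that a determinant with linearly dependent rows vanishes.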