A matrix having exactly one row and n columns is a row matrix. It is written as 1×n and is widely used to represent row vectors in basic algebra.
An n×1 matrix is commonly called
A Row matrix
B Diagonal matrix
C Column matrix
D Scalar matrix
A matrix with n rows and exactly one column is called a column matrix. It is written as n×1 and represents a column vector in many applications.
Two matrices are equal when
A Same order only
B Same entries positionwise
C Same determinant
D Same diagonal sum
Two matrices A and B are equal if and only if they have the same order and every corresponding element is equal, meaning aᵢⱼ = bᵢⱼ for all i and j.
Subtraction A − B is possible when
A det(A)=det(B)
B A is square
C Same order only
D B is diagonal
Matrix subtraction is defined only when both matrices have the same order. Then subtraction is done entry-wise, so each element becomes (aᵢⱼ − bᵢⱼ).
For matrices, distributive law means
A A(B+C)=AB+AC
B A(B+C)=A+B+C
C AB+AC=A(BC)
D A(B+C)=AB·AC
Matrix multiplication distributes over addition: multiplying A with a sum gives the same result as multiplying A with each matrix separately and then adding: A(B+C)=AB+AC.
If A is m×n, then Aᵀ is
A m×n
B m×m
C n×m
D n×n
Transpose swaps rows and columns. So an m×n matrix becomes n×m after transposing. Element at (i,j) becomes the element at (j,i).
For any matrices A,B of same order, (A+B)ᵀ equals
A AᵀBᵀ
B BᵀAᵀ
C Aᵀ−Bᵀ
D Aᵀ+Bᵀ
Transpose of a sum equals sum of transposes. This follows directly from entry-wise addition: (aᵢⱼ+bᵢⱼ) becomes (aⱼᵢ+bⱼᵢ), which is Aᵀ+Bᵀ.
For any matrix A, transpose of kA is
A kAᵀ
B kA
C (1/k)Aᵀ
D Aᵀ+k
Scalar multiplication works entry-wise, and transpose just swaps indices. So (kA)ᵀ has entries k·aⱼᵢ, which equals kAᵀ.
If A is skew-symmetric, diagonal entries are
A Any real values
B Always one
C Always zero
D Always equal
Skew-symmetric means A = −Aᵀ. For diagonal element aᵢᵢ, we get aᵢᵢ = −aᵢᵢ, so 2aᵢᵢ=0 and therefore aᵢᵢ=0.
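A quick check in plain Python (the sample matrix below is an illustrative choice, not from the question):

```python
# Illustrative skew-symmetric matrix: A = -A^T.
A = [[0, 3], [-3, 0]]
n = len(A)
# Verify the defining condition entry-wise.
assert all(A[i][j] == -A[j][i] for i in range(n) for j in range(n))
# The condition a_ii = -a_ii forces every diagonal entry to be zero.
assert all(A[i][i] == 0 for i in range(n))
```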
A square matrix with det ≠ 0 is
A Singular matrix
B Zero matrix
C Nilpotent matrix
D Non-singular matrix
A matrix is non-singular if its determinant is not zero. This guarantees the matrix has an inverse and the related linear system has a unique solution.
If A² = 0 for some nonzero A, A is
A Diagonal
B Nilpotent
C Orthogonal
D Scalar
A nonzero matrix that becomes the zero matrix after some power is called nilpotent. If A²=0, then A is nilpotent of index 2, and det(A)=0.
For orthogonal matrix A, inverse equals
A A
B −A
C Aᵀ
D adj(A)
Orthogonal matrices satisfy AᵀA = I. Multiplying both sides by A⁻¹ gives A⁻¹ = Aᵀ. This property is important in rotations and reflections.
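A sketch verifying AᵀA = I for a sample 2D rotation (the 30° angle is an arbitrary illustrative choice):

```python
import math

def matmul(X, Y):
    # Plain-Python matrix product.
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(row) for row in zip(*X)]

# 2D rotation by 30 degrees, an orthogonal matrix.
t = math.radians(30)
A = [[math.cos(t), -math.sin(t)], [math.sin(t), math.cos(t)]]

# A^T A should equal the identity, so A^{-1} = A^T.
P = matmul(transpose(A), A)
assert all(abs(P[i][j] - (1 if i == j else 0)) < 1e-12
           for i in range(2) for j in range(2))
```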
A diagonal matrix with all diagonal entries 1 is
A Identity matrix
B Zero matrix
C Scalar matrix
D Row matrix
The identity matrix I has ones on the main diagonal and zeros elsewhere. It leaves any compatible matrix unchanged under multiplication: AI=A and IA=A.
An elementary row operation “Rᵢ ↔ Rⱼ” means
A Add rows
B Multiply row by k
C Swap two rows
D Make row zero
The operation Rᵢ ↔ Rⱼ exchanges the i-th and j-th rows. In determinant terms, a row swap changes the sign of the determinant.
Determinant equals zero when rows are
A Independent
B Linearly dependent
C Orthogonal only
D All nonzero
If rows (or columns) are linearly dependent, the matrix does not span full space and det(A)=0. This indicates singularity and lack of inverse.
In Laplace expansion along a row, terms are
A element × minor
B minor × minor
C cofactor × cofactor
D element × cofactor
Laplace expansion uses a sum of aᵢⱼCᵢⱼ along a chosen row or column. Cofactor already includes the sign factor (−1)^{i+j} times minor.
A 3×3 determinant expanded along a column uses
A Only diagonal product
B Row operations only
C Cofactors of that column
D Trace values
Expansion along a column adds products of each element in that column with its cofactor. This is useful when a row or column has many zeros.
If one row is multiplied by k, the determinant becomes
A kD
B D/k
C D+k
D D−k
Multiplying a single row by k multiplies the determinant by k. Determinant scales linearly with each row, so only that row contributes the factor k.
Determinant unchanged under operation
A Swap two rows
B Multiply a row by k
C Rᵢ → Rᵢ + kRⱼ
D Make two rows equal
Adding a multiple of one row to another does not change determinant. This is a key property used to simplify determinants by creating zeros without changing value.
For square matrix A, det(A⁻¹) equals
A det(A)
B 1/det(A)
C −det(A)
D det(A)²
If A is invertible, det(A)≠0. Using det(AA⁻¹)=det(I)=1, we get det(A)·det(A⁻¹)=1, so det(A⁻¹)=1/det(A).
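The identity det(A⁻¹)=1/det(A) can be checked numerically with a 2×2 example (sample matrix chosen for illustration):

```python
def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def inv2(M):
    # 2x2 inverse via adj(M)/det(M); assumes det(M) != 0.
    d = det2(M)
    return [[M[1][1] / d, -M[0][1] / d], [-M[1][0] / d, M[0][0] / d]]

A = [[4, 7], [2, 6]]   # det(A) = 24 - 14 = 10
assert abs(det2(inv2(A)) - 1 / det2(A)) < 1e-12
```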
If A is invertible, then A·A⁻¹ equals
A Identity matrix
B Zero matrix
C Aᵀ
D adj(A)
Inverse is defined so that multiplying a matrix by its inverse gives identity. For invertible A, A·A⁻¹ = I and also A⁻¹·A = I.
If det(A)=5, det(2A) for 3×3 A is
A 10
B 20
C 40
D 5/2
For an n×n matrix, det(kA)=kⁿ det(A). Here n=3 and k=2, so det(2A)=2³·5=8·5=40.
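The scaling rule det(kA)=kⁿ det(A) can be confirmed with a simple 3×3 example (the diagonal matrix below is an illustrative choice with det 5):

```python
def det3(M):
    # 3x3 determinant by cofactor expansion along the first row.
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

A  = [[5, 0, 0], [0, 1, 0], [0, 0, 1]]     # det(A) = 5
A2 = [[2 * x for x in row] for row in A]   # 2A
assert det3(A) == 5
assert det3(A2) == 2**3 * det3(A)          # 8 * 5 = 40
```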
If det(A)=3 and det(B)=4, det(AB) is
A 7
B 12
C 1
D 3/4
Determinant of product equals product of determinants: det(AB)=det(A)det(B)=3×4=12, provided A and B are square matrices of same order.
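A numeric check of det(AB)=det(A)det(B), using sample matrices with the determinants from the question:

```python
def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Illustrative 2x2 matrices with det(A) = 3 and det(B) = 4.
A = [[3, 0], [0, 1]]
B = [[2, 1], [0, 2]]
assert det2(A) == 3 and det2(B) == 4
assert det2(matmul(A, B)) == det2(A) * det2(B) == 12
```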
A matrix with Aᵀ=A and det(A)=0 is
A Skew non-singular
B Orthogonal diagonal
C Nilpotent scalar
D Symmetric singular
Aᵀ=A means symmetric. det(A)=0 means singular. So it is a symmetric singular matrix. Symmetry does not guarantee invertibility; determinant decides that.
For 2×2 matrix A, if det(A)=0 then A is
A Singular
B Invertible
C Non-singular
D Orthogonal
For any square matrix, determinant zero implies singularity. Singular matrices have no inverse, and related linear equations may have no unique solution.
Gauss elimination mainly aims to convert to
A Identity form
B Diagonal form only
C Row echelon form
D Symmetric form
Gauss elimination uses row operations to reach row echelon form with leading entries and zeros below them. This helps solve systems by back substitution efficiently.
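A minimal sketch of elimination plus back substitution, assuming nonzero pivots (no row swaps); the system is an illustrative example:

```python
def gauss_solve(A, b):
    """Solve Ax = b: forward elimination to row echelon form,
    then back substitution. Assumes nonzero pivots (no pivoting)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]   # augmented matrix
    for col in range(n):                               # zero out below pivot
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):                     # back substitution
        x[i] = (M[i][n] - sum(M[i][j] * x[j]
                              for j in range(i + 1, n))) / M[i][i]
    return x

# Example: 2x + y = 5, x + 3y = 10  ->  x = 1, y = 3
sol = gauss_solve([[2, 1], [1, 3]], [5, 10])
assert all(abs(v - w) < 1e-12 for v, w in zip(sol, [1.0, 3.0]))
```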
Gauss–Jordan method mainly aims to convert to
A Row echelon only
B Reduced row echelon
C Upper triangular only
D Lower triangular only
Gauss–Jordan continues elimination to make zeros both below and above pivots, producing reduced row echelon form. It can directly read solutions and can also find inverses.
A system is consistent when
A rank(A) = rank([A|B])
B rank(A) < rank([A|B])
C det(A) = 0 always
D trace(A) = 0
A linear system AX=B is consistent if the rank of coefficient matrix equals the rank of augmented matrix. If ranks differ, contradictions appear and no solution exists.
Infinite solutions occur when
A ranks unequal
B det(A)≠0 always
C ranks equal and less than n
D rank(A)=n always
If rank(A)=rank([A|B])<n, the system is consistent but underdetermined. At least one free variable exists, leading to infinitely many solutions described parametrically.
Inverse method solves AX=B by
A X = A+B
B X = AB⁻¹
C X = AᵀB
D X = A⁻¹B
If A is invertible, multiply both sides of AX=B by A⁻¹ on the left: A⁻¹AX=A⁻¹B, giving IX=A⁻¹B, so X=A⁻¹B.
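The inverse method in plain Python for a 2×2 system (the system below is an illustrative example):

```python
def inv2(M):
    # 2x2 inverse via adj(M)/det(M); assumes det(M) != 0.
    d = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[M[1][1] / d, -M[0][1] / d], [-M[1][0] / d, M[0][0] / d]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

# Illustrative system: x + 2y = 5, 3x + 4y = 11  (solution x = 1, y = 2).
A = [[1, 2], [3, 4]]
B = [5, 11]
X = matvec(inv2(A), B)   # X = A^{-1} B
assert all(abs(v - w) < 1e-9 for v, w in zip(X, [1.0, 2.0]))
```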
For 2×2 A, if A·adj(A)=kI, then k equals
A trace(A)
B rank(A)
C det(A)
D 0 always
For any square matrix A, the identity A·adj(A)=det(A)·I holds. So the scalar k is the determinant of A, giving a key link between adjoint and inverse.
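For a 2×2 matrix the identity A·adj(A)=det(A)·I is easy to verify directly (sample matrix chosen for illustration):

```python
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# For A = [[a, b], [c, d]], the classical adjoint is [[d, -b], [-c, a]].
def adj2(M):
    return [[M[1][1], -M[0][1]], [-M[1][0], M[0][0]]]

A = [[2, 3], [1, 4]]              # det(A) = 2*4 - 3*1 = 5
P = matmul(A, adj2(A))
assert P == [[5, 0], [0, 5]]      # equals det(A) * I, so k = det(A)
```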
The adjoint helps compute inverse when
A det(A)≠0
B det(A)=0
C A is rectangular
D A is zero matrix
The inverse formula A⁻¹=adj(A)/det(A) requires dividing by det(A). Therefore det(A) must be nonzero. If det(A)=0, inverse does not exist.
A reflection matrix in 2D often has determinant
A 1
B 0
C −1
D 2
Reflections reverse orientation in the plane, so their determinant is −1. Rotations preserve orientation and have determinant +1; both are orthogonal matrices.
A 2D rotation matrix is usually
A Singular symmetric
B Nilpotent diagonal
C Skew with det 0
D Orthogonal with det 1
Rotation matrices preserve lengths and angles, so they are orthogonal: AᵀA=I. They also preserve orientation, giving determinant +1 in standard 2D rotation cases.
Matrix representing linear map must be
A Only square always
B Any m×n possible
C Only diagonal
D Only symmetric
A linear map from Rⁿ to Rᵐ can be represented by an m×n matrix. Square matrices arise only when the input and output dimensions are the same.
Rank of a matrix means
A Sum of diagonal
B Determinant value
C Number of pivots
D Product of entries
Rank is the number of linearly independent rows or columns. In row-reduced form, it equals the number of pivot (leading 1) positions, indicating the dimension of the span.
Row-reduced echelon form has
A Leading 1s and pivot columns clean
B Zeros below pivots only
C Only diagonal nonzero
D Same as transpose
In RREF, each pivot is 1, pivots move right as you go down, and each pivot column has zeros everywhere else. This form makes solution reading direct.
Elementary matrices are formed by
A Taking transpose
B Multiplying by scalar only
C Single row operation on I
D Using minors only
An elementary matrix is obtained by performing exactly one elementary row operation on an identity matrix. Multiplying by it performs that same row operation on another matrix.
If E is elementary matrix, then E is
A Always singular
B Always invertible
C Always symmetric
D Always nilpotent
Every elementary row operation can be reversed by another elementary operation. So the corresponding elementary matrix always has an inverse, meaning it is always invertible.
If A is 3×3 and rank(A)=3, then A is
A Singular
B Nilpotent
C Skew-symmetric
D Non-singular
Full rank n for an n×n matrix means rows/columns are independent. Then det(A)≠0, so the matrix is non-singular and has an inverse.
If rank(A)=2 for a 3×3 matrix, det(A) is
A Nonzero
B Always 1
C Zero
D Always −1
For an n×n matrix, if rank is less than n, rows/columns are dependent, making determinant zero. So a 3×3 matrix with rank 2 is singular.
If A is symmetric, then A + Aᵀ equals
A 2A
B 0 matrix
C Aᵀ only
D Identity matrix
If A is symmetric, Aᵀ=A. So A + Aᵀ = A + A = 2A. This identity is useful when splitting matrices into symmetric parts.
For any square matrix A, A − Aᵀ is always
A Symmetric
B Skew-symmetric
C Diagonal
D Identity
(A − Aᵀ)ᵀ = Aᵀ − A = −(A − Aᵀ). So A − Aᵀ satisfies the definition of skew-symmetric, regardless of what A is.
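A quick check that A − Aᵀ is skew-symmetric for an arbitrary sample matrix (any square A would do):

```python
# Arbitrary sample matrix (illustrative choice).
A = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]
n = len(A)
S = [[A[i][j] - A[j][i] for j in range(n)] for i in range(n)]
# S satisfies S^T = -S, the definition of skew-symmetric.
assert all(S[i][j] == -S[j][i] for i in range(n) for j in range(n))
assert all(S[i][i] == 0 for i in range(n))
```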
Determinant helps compute area scale under linear map by
A Trace value
B Rank number only
C |det(A)| factor
D Cofactor sum
For a 2D linear transformation represented by A, the area of a shape scales by |det(A)|. The absolute value gives scaling size, while sign indicates orientation change.
In 3D, a linear map scales volume by
A |det(A)| factor
B trace(A)
C det(A) only
D rank(A)
In 3D, determinant magnitude gives the volume scaling under a linear transformation. If det(A)=0, volume collapses to zero (flattening), meaning transformation is singular.
For a 2×2 system, Cramer’s rule uses
A Two traces
B Two ranks
C Two transposes
D Two determinants
In a 2×2 linear system, Cramer’s rule finds each variable as a ratio of determinants: x = D₁/D and y = D₂/D, where D must be nonzero.
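Cramer's rule for a sample 2×2 system (the coefficients below are an illustrative example):

```python
def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

# System: 2x + y = 5, x - y = 1  ->  x = 2, y = 1
A = [[2, 1], [1, -1]]
b = [5, 1]
D  = det2(A)                                    # must be nonzero
D1 = det2([[b[0], A[0][1]], [b[1], A[1][1]]])   # column 1 replaced by b
D2 = det2([[A[0][0], b[0]], [A[1][0], b[1]]])   # column 2 replaced by b
x, y = D1 / D, D2 / D
assert (x, y) == (2.0, 1.0)
```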
Consistency of system using determinants for square A checks
A det(A)=0 only
B trace(A)=0 gives unique
C det(A)≠0 gives unique
D rank always equals n
For square coefficient matrix A, det(A)≠0 guarantees a unique solution because A is invertible. If det(A)=0, no unique solution exists; comparing ranks then decides between no solution and infinitely many.
LU factorization idea is to write A as
A A = LU
B A = U + L
C A = UL only
D A = LᵀUᵀ
LU factorization expresses A as product of a lower triangular matrix L and an upper triangular matrix U. It simplifies solving multiple systems AX=B with the same A.
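A minimal Doolittle-style LU sketch, assuming nonzero pivots (no pivoting); the matrix is an illustrative choice:

```python
def lu(A):
    """Doolittle LU factorization without pivoting (assumes nonzero pivots).
    Returns L (unit lower triangular) and U (upper triangular) with A = LU."""
    n = len(A)
    L = [[float(i == j) for j in range(n)] for i in range(n)]
    U = [list(map(float, row)) for row in A]
    for col in range(n):
        for r in range(col + 1, n):
            L[r][col] = U[r][col] / U[col][col]
            U[r] = [x - L[r][col] * y for x, y in zip(U[r], U[col])]
    return L, U

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[4, 3], [6, 3]]
L, U = lu(A)
P = matmul(L, U)
assert all(abs(P[i][j] - A[i][j]) < 1e-12 for i in range(2) for j in range(2))
```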
Characteristic equation is linked to
A Trace only
B Determinant always zero
C Eigenvalues idea
D Row operations
Eigenvalues are found by solving det(A − λI)=0, called the characteristic equation. This is an introductory link showing how determinants connect to important matrix concepts.
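For a 2×2 matrix the characteristic equation reduces to λ² − trace(A)·λ + det(A) = 0, which can be solved directly (sample matrix chosen for illustration):

```python
import math

# det(A - lambda*I) = lambda^2 - trace(A)*lambda + det(A) for 2x2 A.
A  = [[4, 1], [2, 3]]
tr = A[0][0] + A[1][1]                        # trace = 7
d  = A[0][0] * A[1][1] - A[0][1] * A[1][0]    # det = 10
disc = tr * tr - 4 * d                        # discriminant = 9
lam1 = (tr + math.sqrt(disc)) / 2
lam2 = (tr - math.sqrt(disc)) / 2
# Each eigenvalue makes det(A - lambda*I) vanish.
for lam in (lam1, lam2):
    assert abs((A[0][0] - lam) * (A[1][1] - lam) - A[0][1] * A[1][0]) < 1e-9
assert (lam1, lam2) == (5.0, 2.0)
```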
Adjacency matrix idea is used in
A Geometry only
B Graph networks
C Determinant expansion
D Inverse formula
An adjacency matrix represents a graph: entry 1 (or weight) indicates a connection between two nodes. It is a simple application of matrices in networks and computer science.