Chapter 13: Vector Spaces and Linear Transformations (Set-3)
Which subset of R² is a subspace?
A Line not through origin
B Line through origin
C Circle centered at origin
D First quadrant only
A line through the origin is closed under addition and scalar multiplication and contains the zero vector. A shifted line, circle, or quadrant fails closure or zero containment.
For nonzero vectors u, v, dependence means
A u⋅v = 0
B ∥u∥ = ∥v∥
C u = cv for some c
D u + v = 0 always
With two nonzero vectors, linear dependence occurs exactly when one is a scalar multiple of the other. Orthogonality or equal length does not imply dependence.
If S = {v₁, v₂, v₃} spans R², then S must be
A Linearly independent
B Orthonormal always
C Same as standard
D Linearly dependent
Any set of three vectors in R² must be dependent because the dimension is 2. Spanning R² is possible, but redundancy is unavoidable.
If dim(V) = n, any set of n+1 vectors in V is
A Dependent
B Independent
C A basis
D A subspace
In an n-dimensional space, at most n vectors can be independent. So any collection with n+1 vectors must have a nontrivial linear relation.
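A quick numerical check of this fact (the three vectors below are arbitrary examples in R²):

```python
import numpy as np

# Three arbitrary vectors in R^2, stacked as the columns of a 2x3 matrix.
V = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 1.0]])

# The rank is at most 2, so the three columns cannot be independent.
print(np.linalg.matrix_rank(V))       # 2

# A nontrivial relation V @ c = 0, taken from the SVD null space.
_, _, vt = np.linalg.svd(V)
c = vt[-1]                            # spans the 1-dimensional null space here
print(V @ c)                          # approximately [0, 0]
```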
If a set has exactly n vectors in n-dimensional V and spans V, then it is
A Dependent set
B Zero subspace
C A basis
D Not possible
In an n-dimensional space, a spanning set with n vectors is automatically linearly independent, hence forms a basis. This is a key equivalence result.
If a set has n independent vectors in n-dimensional V, then it
A Spans V
B Spans kernel only
C Spans image only
D Cannot exist
In dimension n, any linearly independent set of n vectors must be a basis, so it spans the whole space and gives unique coordinates for every vector.
The coordinate vector [v]_B is defined relative to
A Any spanning set
B An ordered basis
C Any subspace
D Any inner product
Coordinates depend on a chosen ordered basis B. The ordering fixes which basis vector corresponds to each coordinate position in [v]_B.
If B is a basis, the map v ↦ [v]_B is
A Not one-one
B Not onto
C An isomorphism
D Nonlinear
The coordinate map is linear, one-one, and onto Fⁿ. Every vector has unique coordinates, and every coordinate tuple corresponds to a vector in V.
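A small sketch of the coordinate map in R², using a made-up basis B (stored as the columns of the matrix below):

```python
import numpy as np

# A made-up ordered basis B of R^2, stored as columns.
B = np.array([[1.0, 1.0],
              [0.0, 2.0]])
v = np.array([3.0, 4.0])

# [v]_B is the unique solution of B @ c = v; uniqueness reflects the isomorphism.
c = np.linalg.solve(B, v)
print(c)          # the coordinate vector of v relative to B
print(B @ c)      # reconstructs v: [3., 4.]
```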
If V = U ⊕ W, every v ∈ V can be written
A Only as u − w
B Only as uw
C Not as sum
D Uniquely as u + w
A direct sum means each vector decomposes uniquely into one part from U and one part from W. Uniqueness depends on U ∩ W = {0}.
If U ∩ W ≠ {0}, uniqueness of the u + w decomposition is
A Not guaranteed
B Always guaranteed
C Depends on norms
D Always impossible
If there is a nonzero vector common to both subspaces, then the same v can be expressed in different ways by shifting that common vector between u and w.
In the quotient space V/W, two cosets are equal when
A Representatives are equal
B Their lengths match
C Representatives differ by an element of W
D Determinants match
v + W = u + W exactly when v − u ∈ W. This equality rule ensures coset operations are consistent with the equivalence relation.
If W = ker(π) for the projection π : V → V/W, then π is
A One-one
B Onto
C Zero map
D Not linear
The natural projection sends every v to its coset v + W. Every coset has a representative in V, so the projection is surjective by definition.
A linear map induced on V/W typically requires W to be
A Invariant under T
B Orthogonal to VV
C Equal to VV
D Finite set
To define the induced map T̄(v + W) = T(v) + W, we need T(W) ⊆ W. This guarantees the output coset does not depend on the chosen representative.
The first isomorphism theorem says V/ker(T) is isomorphic to
A ker(T)
B V always
C Im(T)
D F
The first isomorphism theorem states V/ker(T) ≅ Im(T) for linear T. It connects quotient spaces, kernels, and images in a clean structure statement.
If T : V → W is linear and V is finite-dimensional, then
A rank + nullity = dim(V)
B rank + nullity = dim(W)
C rank = dim(V) always
D nullity = 0 always
Rank–Nullity theorem: dim(V) = dim(Im(T)) + dim(ker(T)). It helps count solution degrees of freedom for T(v) = 0.
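A quick numerical illustration with a hypothetical 2×4 matrix (a linear map R⁴ → R²):

```python
import numpy as np

# A hypothetical matrix of a linear map R^4 -> R^2.
A = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 1.0, 1.0, 3.0]])

rank = np.linalg.matrix_rank(A)
nullity = A.shape[1] - rank      # Rank-Nullity: dim of domain = rank + nullity
print(rank, nullity)             # 2 and 2, and 2 + 2 = 4 = dim(R^4)
```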
If T : R⁴ → R² has rank 2, then nullity is
A 2
B 6
C 4
D 0
Domain dimension is 4. Rank–Nullity gives 4 = 2 + nullity, so nullity is 2. That means a 2-dimensional kernel.
If T : R³ → R³ is onto, then rank is
A 2
B 1
C 0
D 3
Onto means image equals codomain. Therefore dim(Im(T)) = dim(R³) = 3, so rank is 3, and nullity must be 0.
For a linear operator on R³, injective implies
A Not onto
B Onto
C Rank is 1
D Nullity is 3
For linear maps V → V on a finite-dimensional space, injective and surjective are equivalent. If the nullity is 0, the rank is 3, making the map onto.
If a linear operator has nontrivial kernel, then it is
A Always onto
B Always diagonal
C Not one-one
D Always invertible
A nontrivial kernel means some nonzero vector maps to zero. Then that vector and the zero vector share the same output, so the map cannot be injective.
If T is invertible, then its matrix in any basis is
A Nonsingular
B Zero matrix
C Always diagonal
D Always symmetric
Invertible linear maps correspond to invertible matrices in any basis. That means the determinant is nonzero and the rank equals the full dimension of the domain.
In matrix form, null space corresponds to solutions of
A Ax = b always
B Aᵀx = 0 only
C xᵀA = 0
D Ax = 0
The null space is the set of vectors sent to the zero vector by A. So it is exactly the solution set of the homogeneous system Ax = 0.
If Ax = b is consistent, then its solution set is
A Always unique
B Always empty
C One solution + null space
D Always a basis
If xₚ is a particular solution, then every solution has the form x = xₚ + xₕ, where xₕ runs over the null space of A. This describes the complete solution set.
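A sketch of this structure with a small made-up consistent system (NumPy only; the null-space direction is taken from the SVD):

```python
import numpy as np

# A made-up consistent system Ax = b with a 1-dimensional null space.
A = np.array([[1.0, 2.0, 1.0],
              [0.0, 1.0, 1.0]])
b = np.array([3.0, 1.0])

# One particular solution (least squares returns a true solution here, since the system is consistent).
xp, *_ = np.linalg.lstsq(A, b, rcond=None)

# A null-space direction: the last right-singular vector (A has rank 2, so one direction remains).
_, _, vt = np.linalg.svd(A)
xh = vt[-1]

# Every xp + t*xh solves the system.
for t in (0.0, 1.0, -2.5):
    print(A @ (xp + t * xh))     # always approximately [3., 1.]
```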
Rank of a matrix equals the dimension of its
A Column space
B Null space
C Diagonal entries
D Eigenvalue set
Rank is the dimension of the column space (which also equals the dimension of the row space). It counts how many independent columns the matrix has.
Rank of a matrix also equals
A Number of columns
B Number of pivots
C Number of rows always
D Trace value
In row-reduced echelon form, rank equals the number of pivot positions. This matches the number of leading variables and the dimension of row/column spaces.
A change-of-basis matrix is used to
A Find eigenvalues
B Compute determinants
C Convert coordinates
D Build quotient
Change-of-basis matrices translate coordinate vectors from one basis to another. They help represent the same vector consistently under different coordinate systems.
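A short sketch with two made-up bases of R²: the change-of-basis matrix P carries B1-coordinates to B2-coordinates.

```python
import numpy as np

# Two made-up bases of R^2, stored as columns.
B1 = np.array([[1.0, 0.0],
               [1.0, 1.0]])
B2 = np.array([[2.0, 1.0],
               [0.0, 1.0]])

# Change-of-basis matrix from B1-coordinates to B2-coordinates: P = B2^-1 B1.
P = np.linalg.solve(B2, B1)

c1 = np.array([1.0, 2.0])      # coordinates of some vector relative to B1
c2 = P @ c1                    # the same vector's coordinates relative to B2
print(B1 @ c1, B2 @ c2)        # identical standard-coordinate vectors: [1., 3.]
```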
If A represents T in standard bases, then T(eᵢ) equals
A Column i
B Row i
C Diagonal i
D Determinant i
The standard matrix is built from the images of the standard basis vectors. The i-th column is T(eᵢ), so multiplying A by the coordinates of x reproduces T(x).
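A minimal sketch: build the standard matrix of a made-up map T : R³ → R² column by column from T(eᵢ).

```python
import numpy as np

# A made-up linear map T : R^3 -> R^2.
def T(x):
    x1, x2, x3 = x
    return np.array([x1 + 2 * x2, 3 * x3 - x2])

# The i-th column of the standard matrix is T(e_i).
A = np.column_stack([T(e) for e in np.eye(3)])
print(A)

x = np.array([1.0, 1.0, 1.0])
print(A @ x, T(x))     # both give [3., 2.]
```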
A similarity transformation has the form
A PAP⁻¹ only
B PA + P⁻¹
C P⁻¹AP
D A⁻¹P
If P is invertible, B = P⁻¹AP represents the same operator in a different basis. Similar matrices have the same characteristic polynomial and eigenvalues.
If A and B are similar, then they must share
A Row reductions
B Pivot positions
C Entry sums
D Eigenvalues
Similarity preserves the characteristic polynomial, hence the eigenvalues (with multiplicities). Row-reduction patterns can change, but spectral properties remain unchanged under a change of basis.
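A quick numerical check with a made-up A and invertible P: A and P⁻¹AP have the same eigenvalues.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
P = np.array([[1.0, 1.0],
              [1.0, 2.0]])          # invertible (determinant 1)
B = np.linalg.inv(P) @ A @ P        # similar to A

print(np.sort(np.linalg.eigvals(A)))   # [2., 3.]
print(np.sort(np.linalg.eigvals(B)))   # the same values, up to rounding
```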
A nonzero vector v is an eigenvector if
A Av is orthogonal to v
B Av is parallel to v
C Av is always zero
D Av is a unit vector
Eigenvector means Av = λv. So Av points in the same direction as v (possibly reversed if λ < 0), and scaling is by the eigenvalue λ.
If λ = 0 is an eigenvalue, then
A A is singular
B A is invertible
C A is diagonal
D A is orthogonal
If 0 is an eigenvalue, then det(A) = 0 because the product of eigenvalues equals the determinant. Hence A is not invertible (singular).
Eigenspace for λ is a subspace of
A The scalar field
B The quotient space
C The domain space
D The dual space
The eigenspace is ker(A − λI), a subspace of the domain space in which the vectors live. It contains the eigenvectors for λ and the zero vector.
If A is n×n, then det(A − λI) is a polynomial of degree
A n
B n+1
C 2n
D 1
The characteristic polynomial of an n×n matrix has degree n. Its roots (with multiplicity) are exactly the eigenvalues of A.
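As an illustration with a made-up 3×3 matrix, NumPy's np.poly returns the characteristic polynomial's coefficients, and its roots match the eigenvalues:

```python
import numpy as np

A = np.array([[2.0, 0.0, 1.0],
              [0.0, 1.0, 0.0],
              [1.0, 0.0, 2.0]])

coeffs = np.poly(A)            # characteristic polynomial coefficients, highest degree first
print(len(coeffs) - 1)         # degree 3 for a 3x3 matrix
print(np.roots(coeffs))        # roots 3, 1, 1 (in some order)
print(np.linalg.eigvals(A))    # the eigenvalues agree with those roots
```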
For a 2×2 matrix, the trace equals
A Product of diagonal
B Sum of diagonal
C Sum of eigenvectors
D Rank plus nullity
Trace is defined as the sum of diagonal entries. Also, trace equals sum of eigenvalues (with multiplicity), giving a quick check for computed eigenvalues.
Determinant equals
A Sum of eigenvalues
B Dimension of kernel
C Number of pivots
D Product of eigenvalues
For an n×n matrix, determinant equals the product of eigenvalues counted with algebraic multiplicity. This links spectral data to invertibility.
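A quick consistency check on a made-up 2×2 matrix: trace versus sum of eigenvalues, determinant versus product.

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
evals = np.linalg.eigvals(A)           # 5 and 2

print(np.trace(A), evals.sum())        # 7.0 and 7.0
print(np.linalg.det(A), evals.prod())  # 10.0 and (approximately) 10.0
```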
If the eigenvalues of an n×n matrix are all distinct, then it is
A Never diagonalizable
B Always singular
C Diagonalizable
D Always nilpotent
Distinct eigenvalues yield linearly independent eigenvectors. With n distinct eigenvalues, we get n independent eigenvectors, forming an eigenbasis for diagonalization.
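A sketch with a made-up matrix having distinct eigenvalues: the eigenvector matrix V is invertible and diagonalizes A.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 5.0]])            # eigenvalues 2 and 5, distinct

evals, V = np.linalg.eig(A)           # columns of V are eigenvectors
D = np.diag(evals)

print(np.linalg.matrix_rank(V))                    # 2: the eigenvectors are independent
print(np.allclose(V @ D @ np.linalg.inv(V), A))    # True: A = V D V^-1
```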
If the geometric multiplicity is less than the algebraic multiplicity, then the matrix may be
A Not diagonalizable
B Always diagonalizable
C Always invertible
D Always symmetric
Diagonalization needs enough independent eigenvectors. If eigenspace dimension (geometric multiplicity) is too small, you cannot form a full eigenvector basis.
A symmetric real matrix has
A Complex eigenvalues only
B No eigenvectors
C Zero determinant always
D Real eigenvalues
Real symmetric matrices have all real eigenvalues and orthogonal eigenvectors for distinct eigenvalues. This is a key special property used in many applications.
The spectral theorem says a real symmetric matrix can be
A Row-reduced only
B Made triangular only
C Orthogonally diagonalized
D Turned into zero
The spectral theorem states that real symmetric matrices are diagonalizable with an orthonormal eigenbasis, giving A = QDQᵀ where Q is orthogonal.
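A small numerical illustration with a made-up symmetric matrix, using np.linalg.eigh (which returns an orthonormal eigenbasis):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])            # real symmetric

evals, Q = np.linalg.eigh(A)          # Q has orthonormal eigenvector columns

print(np.allclose(Q.T @ Q, np.eye(2)))             # True: Q is orthogonal
print(np.allclose(Q @ np.diag(evals) @ Q.T, A))    # True: A = Q D Q^T
```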
The companion matrix is associated with a
A Dot product space
B Polynomial equation
C Coset operation
D Projection theorem
Companion matrices encode a monic polynomial so that the characteristic polynomial matches it. They are used in linear recurrences and in connecting polynomials to matrices.
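A sketch for the made-up monic polynomial p(x) = x³ − 6x² + 11x − 6 (roots 1, 2, 3): its companion matrix has p as its characteristic polynomial.

```python
import numpy as np

# Companion matrix of p(x) = x^3 - 6x^2 + 11x - 6; the last column holds -c0, -c1, -c2.
C = np.array([[0.0, 0.0,   6.0],
              [1.0, 0.0, -11.0],
              [0.0, 1.0,   6.0]])

print(np.sort(np.linalg.eigvals(C)))   # approximately [1., 2., 3.], the roots of p
print(np.poly(C))                      # [1., -6., 11., -6.], the coefficients of p
```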
A matrix with Aᵏ = 0 for some k is called
A Idempotent
B Orthogonal
C Nilpotent
D Invertible
Nilpotent means some power is the zero matrix. All eigenvalues of a nilpotent matrix are 0, so it is never invertible; it is diagonalizable only when it is the zero matrix.
A projection matrix has eigenvalues typically
A 0 or 1
B 1 or 2
C -1 or 1
D All distinct
For a projection P with P² = P, eigenvalues satisfy λ² = λ, so λ is 0 or 1. This matches “killed” or “kept” components.
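A minimal sketch: the orthogonal projection onto the line spanned by a made-up vector a in R³ has eigenvalues 1 (once) and 0 (twice).

```python
import numpy as np

a = np.array([[1.0], [2.0], [2.0]])      # made-up direction in R^3
P = a @ a.T / (a.T @ a)                  # orthogonal projection onto span{a}

print(np.allclose(P @ P, P))             # True: P^2 = P
print(np.linalg.eigvalsh(P))             # approximately [0., 0., 1.]
```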
Gram–Schmidt produces vectors that are
A Linearly dependent
B Always eigenvectors
C Always in kernel
D Mutually orthogonal
Gram–Schmidt constructs an orthogonal (and usually normalized) set spanning the same subspace. It is widely used to build orthonormal bases in inner product spaces.
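A classical Gram–Schmidt sketch (assuming the input vectors are linearly independent; not production-grade numerically):

```python
import numpy as np

def gram_schmidt(vectors):
    """Return an orthonormal list spanning the same subspace as `vectors`."""
    basis = []
    for v in vectors:
        # Subtract the components of v along the vectors already produced.
        w = v - sum(np.dot(v, q) * q for q in basis)
        basis.append(w / np.linalg.norm(w))    # normalize
    return basis

q1, q2 = gram_schmidt([np.array([1.0, 1.0, 0.0]),
                       np.array([1.0, 0.0, 1.0])])
print(np.dot(q1, q2))                          # approximately 0 (orthogonal)
print(np.linalg.norm(q1), np.linalg.norm(q2))  # both 1 (orthonormal)
```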
Dual space dimension equals
A Zero always
B Original space dimension
C Twice the dimension
D Depends on basis order
For finite-dimensional V, the dual space V∗ has the same dimension as V. A basis of V yields a corresponding dual basis of functionals.
A linear functional is determined by its values on
A Any subset
B Only zero vector
C A basis
D Only eigenvectors
Because the functional is linear, knowing its values on basis vectors determines its value on every vector via linear combination and coordinate expansion.
Jordan form is mainly used when matrix is
A Not diagonalizable
B Always diagonalizable
C Always symmetric
D Always orthogonal
Jordan form generalizes diagonalization by using Jordan blocks when there are not enough eigenvectors. It still captures eigenvalues and near-diagonal structure.
An eigenvalue of AA is a root of
A Minimal basis
B Row space equation
C Characteristic polynomial
D Gram matrix
Eigenvalues satisfy det(A − λI) = 0. So they are exactly the roots of the characteristic polynomial, counted with algebraic multiplicity.
If A is invertible, then 0 is
A Always an eigenvalue
B Not an eigenvalue
C Always a root
D Always repeated
Invertible means det(A) ≠ 0. Since the determinant equals the product of eigenvalues, none can be zero. Hence 0 cannot be an eigenvalue.
The stability of x′ = Ax depends mainly on
A Matrix trace only
B Vector norms only
C Basis choice only
D Eigenvalue real parts
For linear systems x′ = Ax, the signs of the real parts of the eigenvalues largely determine growth or decay. Negative real parts typically lead to decaying solutions.
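A quick check with a made-up system matrix whose eigenvalues have negative real parts (so solutions of x′ = Ax decay):

```python
import numpy as np

A = np.array([[-1.0,  2.0],
              [-2.0, -1.0]])          # eigenvalues -1 +/- 2i

evals = np.linalg.eigvals(A)
print(evals)                          # [-1.+2.j, -1.-2.j]
print(np.all(evals.real < 0))         # True: the origin is asymptotically stable
```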
A linear map preserves
A Distances always
B Angles always
C Addition and scaling
D Determinants always
Linearity means it respects vector addition and scalar multiplication. Preserving distances or angles requires special maps (orthogonal/unitary), not all linear maps.
Orthogonality basics are defined using
A Inner product
B Determinant
C Quotient operation
D Matrix rank
Orthogonality is defined by the inner product condition ⟨u, v⟩ = 0. This general definition works in many spaces (including function spaces), not just Euclidean geometry.