Chapter 13: Vector Spaces and Linear Transformations (Set-4)
For W = {(x, y) ∈ R²: x + y = 0}, which is true?
A Not closed under addition
B Not closed under scalar multiplication
C Subspace of R²
D Missing the zero vector
The condition x + y = 0 defines a line through the origin. It contains (0, 0) and is closed under addition and scalar multiplication, so it is a subspace.
For S = {(x, y): x + y = 1}, which is correct?
A Not a subspace
B Subspace of R²
C Equals zero subspace
D Closed under scaling
(0, 0) does not satisfy x + y = 1, so the set does not contain the zero vector. Without the zero vector, it cannot be a subspace.
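The two checks above can be run numerically. A minimal sketch in plain Python (the helper names `in_W` and `in_S` are illustrative, not from the text):

```python
def in_W(p):
    """Membership in W = {(x, y): x + y = 0}, a line through the origin."""
    return p[0] + p[1] == 0

def in_S(p):
    """Membership in S = {(x, y): x + y = 1}, a shifted line."""
    return p[0] + p[1] == 1

# W contains the zero vector and is closed under + and scaling:
assert in_W((0, 0))
assert in_W((2, -2)) and in_W((3, -3)) and in_W((2 + 3, -2 - 3))
assert in_W((5 * 2, 5 * (-2)))

# S fails the very first subspace test: it misses the zero vector.
print(in_S((0, 0)))  # False
```

The failed zero-vector test alone is enough to rule S out; no closure check is needed.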
If v ∈ span(S), then v can be expressed as a
A Infinite product only
B Dot products sum
C Determinant expression
D Finite linear combo
Span consists of all finite linear combinations of vectors in S. The word "finite" matters in basic linear algebra: span never requires infinite series.
If {v1, v2, v3} is independent, then {v1, v2} is
A Dependent
B Not defined
C Independent
D Always spanning
Any subset of a linearly independent set is also linearly independent. Removing vectors cannot create a new nontrivial linear relation among the remaining vectors.
If {v1, v2} is dependent, then adding v3 makes {v1, v2, v3}
A Independent
B Dependent
C A basis always
D Orthonormal
If some nontrivial combination of v1 and v2 already gives zero, the same combination still works in the larger set. So dependence persists after adding vectors.
In a finite-dimensional space, if a set spans V, then it contains a
A Basis subset
B Kernel subset
C Quotient subset
D Orthogonal subset
From any spanning set, you can remove redundant vectors, keeping the span unchanged, until you reach a minimal spanning set; that minimal set is a basis.
In a finite-dimensional space, any independent set can be extended to a
A Kernel only
B Zero set
C Basis
D Quotient space
A standard theorem says any linearly independent set can be enlarged by adding vectors to form a basis of the whole space (in finite dimensions).
If U, W are subspaces, then dim(U + W) equals
A dim U + dim W + dim(U ∩ W)
B dim U + dim W − dim(U ∩ W)
C dim U − dim W
D dim(U ∩ W)
Dimension formula for sums: the overlap is counted twice in dim U + dim W, so subtract dim(U ∩ W). This is very useful for computing dimensions.
If U ∩ W = {0}, then dim(U + W) becomes
A dim U + dim W
B dim U − dim W
C dim(U ∩ W)
D Always zero
When the intersection is only {0}, there is no overlap to subtract. So dim(U + W) = dim U + dim W, matching the direct-sum intuition.
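The dimension formula can be verified on a concrete pair of subspaces of R³. A sketch using a hand-rolled rank computation (the `rank` helper is illustrative, not from the text):

```python
from fractions import Fraction

def rank(rows):
    """Rank via Gaussian elimination over the rationals (exact arithmetic)."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0  # next pivot row
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue  # no pivot in this column
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# U = span{e1, e2} and W = span{e1, e3} inside R³; U ∩ W = span{e1}.
U = [[1, 0, 0], [0, 1, 0]]
W = [[1, 0, 0], [0, 0, 1]]
dim_sum = rank(U + W)                  # stack both bases: dim(U + W) = 3
dim_int = rank(U) + rank(W) - dim_sum  # rearranged formula: dim(U ∩ W) = 1
print(dim_sum, dim_int)  # 3 1
```

Here 2 + 2 − 1 = 3 confirms that the shared x-axis direction is counted only once.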
A vector in U ∩ W belongs to
A Only U
B Only W
C Neither set
D Both U and W
Intersection means common elements. Any vector in U ∩ W is simultaneously in U and in W. This is the shared part of the two subspaces.
In the quotient V/W, coset addition is
A (v + W) + (u + W) = (v + u) + W
B (v + W) + (u + W) = (vu) + W
C (v + W) + (u + W) = (v − u) + W
D Not defined
Coset addition is defined by adding representatives and then taking the coset. This rule is well defined because changing representatives changes the sum by an element of W.
To ensure coset operations are valid, we need
A Orthogonality
B Symmetry of W
C Well-definedness
D Determinant nonzero
If v + W = u + W, operations must give the same result regardless of choosing v or u. This independence from the choice of representative is well-definedness.
The natural projection π: V → V/W is always
A Injective
B Surjective
C Zero map
D Not linear
Every coset in V/W has at least one representative in V. So π(v) = v + W hits every element of the quotient, making π onto.
The kernel of π(v) = v + W is
A V/W
B {0} only
C V
D W
π(v) is the zero coset W exactly when v ∈ W. Hence ker(π) = W. This is the basic link between subspaces and quotients.
If T: V → W is linear, then T(0) must be
A 1
B Undefined
C 0
D Any vector
By linearity, T(0) = T(0 · v) = 0 · T(v) = 0. Every linear transformation maps the zero vector to the zero vector of the codomain.
If T is linear, then T(−v) equals
A −T(v)
B T(v)
C 0 always
D T(v)−1
Since −v = (−1)v, linearity gives T(−v) = T((−1)v) = (−1)T(v) = −T(v). This property is immediate from homogeneity.
If T is linear and T(v) = T(w), then v − w lies in
A Im(T)
B V/W
C ker(T)
D Dual space
T(v) = T(w) ⇒ T(v − w) = T(v) − T(w) = 0. So v − w ∈ ker(T). This is often used to test injectivity.
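The kernel test can be demonstrated with a concrete matrix map. A small sketch in plain Python (the matrix A and the vectors are illustrative choices):

```python
def T(v, A):
    """Apply the linear map x -> A·x by hand (A is a list of rows)."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

A = [[1, 1, 0],
     [0, 0, 1]]          # a 2×3 matrix, so T: R³ -> R²
v = [1, 2, 5]
w = [2, 1, 5]            # chosen so that T(v) == T(w)

assert T(v, A) == T(w, A)            # equal outputs ...
diff = [a - b for a, b in zip(v, w)]  # v - w = (-1, 1, 0)
print(T(diff, A))        # [0, 0]: v - w lies in ker(T)
```

Because T(v) = T(w), the difference v − w must map to zero, exactly as the argument above predicts.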
If T is injective, then T(v) = T(w) implies
A v = w
B v = −w
C v + w = 0
D Always many solutions
Injective means distinct inputs give distinct outputs, so equality of outputs forces equality of inputs. Equivalently, T is injective exactly when its kernel contains only zero.
If T is surjective, then for each y in the codomain there exists
A Unique x always
B No preimage
C Only zero preimage
D Some x with T(x) = y
Onto means every target vector is hit by at least one input. Uniqueness is not required; many inputs can map to the same output if the kernel is nontrivial.
The rank of T equals the dimension of
A ker(T)
B Domain always
C Im(T)
D Quotient space
Rank is defined as dim(Im(T)). It counts independent output directions. Nullity is dim(ker(T)), measuring collapsed directions.
For T: R⁵ → R³, the rank can be at most
A 5
B 3
C 8
D 15
Rank cannot exceed the dimension of the codomain, which is 3. Even if the domain is larger, the image lies inside R³.
If T: R⁵ → R³ has rank 3, then the nullity is
A 2
B 3
C 5
D 8
Rank–Nullity: dim(domain) = 5 = 3 + nullity. Hence the nullity is 2, so the kernel is a 2-dimensional subspace of R⁵.
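Rank–Nullity can be checked on a concrete 3×5 matrix. A minimal sketch (the matrix below is an illustrative example already in row-echelon form, so the rank is just the number of nonzero rows):

```python
# A 3×5 matrix in row-echelon form representing some T: R⁵ -> R³.
A = [
    [1, 0, 2, 0, 1],
    [0, 1, 3, 0, 4],
    [0, 0, 0, 1, 5],
]
# In echelon form, rank = number of nonzero rows.
rank = sum(1 for row in A if any(x != 0 for x in row))
n_cols = len(A[0])          # dimension of the domain, here 5
nullity = n_cols - rank     # Rank-Nullity: 5 = rank + nullity
print(rank, nullity)  # 3 2
```

The two columns without pivots (columns 3 and 5) correspond to the two free variables, matching nullity 2.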
If matrix A is m × n, then A represents a map
A Rᵐ → Rⁿ
B Rⁿ → Rⁿ
C Rⁿ → Rᵐ
D Scalars to vectors
Multiplication Ax takes an n-component vector x and outputs an m-component vector. So A defines a linear transformation from Rⁿ to Rᵐ.
If A has a pivot in every column, then Ax = 0 has
A Only trivial solution
B Infinitely many solutions
C No solutions
D Exactly two solutions
A pivot in every column means no free variables. For the homogeneous system Ax = 0, that forces x = 0 as the only solution, so the nullity is 0.
If A has a free variable, then the nullity is
A Exactly 0
B Negative
C At least 1
D Always equals rank
A free variable contributes at least one parameter to the solutions of Ax = 0. Therefore the null space has dimension at least 1, so nullity ≥ 1.
For an invertible linear operator T, the matrix of T⁻¹ is the
A Transpose matrix
B Inverse matrix
C Adjoint always
D Row-reduced form
If A represents T in a basis and T is invertible, then T⁻¹ is represented by A⁻¹ in the same basis. Invertibility of T matches nonsingularity of A.
If A is invertible, then its columns are
A Linearly independent
B Always orthogonal
C Always equal length
D Always nonnegative
An invertible square matrix has full rank. Full rank implies the columns form an independent set and also span the space. This is equivalent to the determinant being nonzero.
If A is n × n and the rank is n, then the nullity is
A n
B 2n
C 1
D 0
For A: Rⁿ → Rⁿ, Rank–Nullity gives n = rank + nullity. If the rank is n, then the nullity must be 0.
Eigenvalues satisfy which equation?
A det(A + λI) = 1
B det(A − λI) = 0
C A = λ
D A² = λI always
Eigenvalues are scalars for which (A − λI)v = 0 has a nonzero solution. That happens exactly when A − λI is singular, so its determinant is zero.
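For a 2×2 matrix, det(A − λI) = 0 expands to the quadratic λ² − tr(A)·λ + det(A) = 0, which can be solved directly. A small sketch with an illustrative symmetric matrix:

```python
import math

# Eigenvalues of a 2×2 matrix from det(A - λI) = 0,
# i.e. λ² - tr(A)·λ + det(A) = 0.
A = [[2, 1],
     [1, 2]]
tr = A[0][0] + A[1][1]                    # trace = 4
det = A[0][0]*A[1][1] - A[0][1]*A[1][0]   # determinant = 3
disc = math.sqrt(tr*tr - 4*det)           # discriminant of the quadratic
lams = sorted([(tr - disc) / 2, (tr + disc) / 2])
print(lams)  # [1.0, 3.0]
```

Note that the trace 4 = 1 + 3 and determinant 3 = 1 · 3 recover the sum and product of the eigenvalues.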
If v ≠ 0 and (A − λI)v = 0, then v is
A Row vector only
B Pivot column
C Eigenvector for λ
D Kernel of A
The equation (A − λI)v = 0 rearranges to Av = λv. With v ≠ 0, this is exactly the definition of an eigenvector corresponding to the eigenvalue λ.
If λ is an eigenvalue, then the eigenspace dimension equals the
A Nullity of A − λI
B Rank of A − λI
C Trace of A − λI
D Determinant of A − λI
The eigenspace is ker(A − λI). Its dimension is the nullity of A − λI, also called the geometric multiplicity of λ.
A matrix is diagonalizable if the total number of independent eigenvectors is
A 1
B 0
C n
D n²
For an n × n matrix, diagonalization needs an eigenbasis: n linearly independent eigenvectors. Then A = PDP⁻¹ for some invertible P.
If a matrix has a repeated eigenvalue, it is diagonalizable when
A Trace is nonzero
B Enough eigenvectors exist
C Determinant is one
D It is triangular
Repeated eigenvalues do not automatically block diagonalization. The key is whether the eigenspaces provide enough independent eigenvectors to form a full basis.
The coefficients of the characteristic polynomial relate to
A Trace and determinant
B Rank and nullity
C Basis and dimension
D Dot product only
For an n × n matrix, the coefficients of the characteristic polynomial are (up to sign) sums of principal minors. In particular, the λⁿ⁻¹ coefficient involves the trace and the constant term involves the determinant.
Cayley–Hamilton can help compute
A Vector lengths
B Basis size
C Higher matrix powers
D Coset count
Since p(A) = 0, you can rewrite Aᵏ for large k as a combination of lower powers of A. This simplifies computations in recurrences and systems.
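For a 2×2 matrix the characteristic polynomial is λ² − tr(A)·λ + det(A), so Cayley–Hamilton gives A² = tr(A)·A − det(A)·I. A quick check on an illustrative matrix:

```python
# Cayley-Hamilton for 2×2: A² - tr(A)·A + det(A)·I = 0,
# so A² (and every higher power) reduces to c1·A + c0·I.
A = [[1, 2],
     [3, 4]]
tr = A[0][0] + A[1][1]                    # 5
det = A[0][0]*A[1][1] - A[0][1]*A[1][0]   # -2

def matmul(X, Y):
    """2×2 matrix product, computed by hand."""
    return [[sum(X[i][k]*Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A2_direct = matmul(A, A)
# Cayley-Hamilton prediction: A² = tr·A - det·I
A2_ch = [[tr*A[i][j] - det*(1 if i == j else 0) for j in range(2)]
         for i in range(2)]
assert A2_direct == A2_ch
print(A2_ch)  # [[7, 10], [15, 22]]
```

Iterating the same substitution expresses A³, A⁴, … as linear combinations of A and I without ever forming the full products.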
If A is similar to B, then their characteristic polynomials are
A The same
B Always different
C Same degree only
D Unrelated
Similarity is a change of basis: B = P⁻¹AP. This preserves the characteristic polynomial, so both matrices have identical eigenvalues and algebraic multiplicities.
In R³, the cross product is NOT needed to define
A Area formula
B Normal direction
C Torque concept
D Vector space axioms
The vector space structure uses only addition and scalar multiplication. The cross product is an extra operation defined in R³, not required for the vector space axioms.
In a function space, a typical linear combination looks like
A f(x)g(x)
B f(x)/g(x)
C af(x) + bg(x)
D sin(f)
In spaces of functions, the vectors are functions. A linear combination uses scalars a, b and adds functions pointwise: (af + bg)(x) = af(x) + bg(x).
A linear transformation between vector spaces must preserve
A Linear combinations
B Only lengths
C Only angles
D Only determinants
Linearity ensures T(au + bv) = aT(u) + bT(v). So a linear map preserves all linear combinations, which is stronger than preserving just sums or scaling alone.
If P is the projection onto a subspace U, then Im(P) equals
A ker(P)
B U
C Whole space
D Dual space
A projection onto U sends every vector to a vector in U. Thus its image is exactly U. Its kernel is the set of vectors mapped to zero (the complementary direction).
For a projection P, the kernel represents vectors
A Sent to unit
B Sent to eigenvalue
C Sent to zero
D Sent to trace
Kernel always means the vectors mapped to zero. For a projection onto U, the kernel consists of the vectors in the "ignored" complementary direction, which collapse to zero.
Orthogonal diagonalization requires the matrix to be
A Real symmetric
B Any invertible
C Any triangular
D Any nilpotent
Spectral theorem: real symmetric matrices have an orthonormal eigenbasis. Hence they can be diagonalized by an orthogonal matrix Q: A = QDQᵀ.
Singular values relate to
A Trace of A
B Eigenvalues of AᵀA
C Determinant of A
D Nullity of A only
Singular values are the square roots of the eigenvalues of AᵀA (or AAᵀ). They measure the stretching factors of the transformation and exist for any matrix.
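This can be checked on a matrix simple enough to do by hand. A sketch with an illustrative diagonal A, where AᵀA is also diagonal and its eigenvalues are just its diagonal entries:

```python
import math

# Singular values of A are the square roots of the eigenvalues of AᵀA.
A = [[3, 0],
     [0, 4]]
# Form AᵀA by hand; for this diagonal A it is diag(9, 16).
AtA = [[sum(A[k][i]*A[k][j] for k in range(2)) for j in range(2)]
       for i in range(2)]
# AᵀA is diagonal here, so its eigenvalues sit on the diagonal.
sigmas = sorted(math.sqrt(AtA[i][i]) for i in range(2))
print(sigmas)  # [3.0, 4.0]
```

For a diagonal matrix the singular values are just the absolute values of the diagonal entries, matching the stretching-factor intuition.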
A matrix is idempotent if
A A² = 0
B Aᵀ = A⁻¹
C A = I only
D A² = A
Idempotent means applying the transformation twice equals applying it once. Projection matrices are common examples, and their eigenvalues must satisfy λ² = λ.
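The simplest example is the projection of R² onto the x-axis. A quick check (the matrix P is an illustrative choice):

```python
# P projects R² onto the x-axis; applying it twice changes nothing.
P = [[1, 0],
     [0, 0]]

def matmul(X, Y):
    """2×2 matrix product, computed by hand."""
    return [[sum(X[i][k]*Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

assert matmul(P, P) == P  # A² = A: idempotent
# Its eigenvalues satisfy λ² = λ, so they are 0 (the collapsed y-direction)
# and 1 (the fixed x-direction).
```

The eigenvalue 1 direction is the subspace projected onto; the eigenvalue 0 direction is the kernel.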
If A is nilpotent, then all eigenvalues are
A 0
B 1
C −1
D Nonzero
If Aᵏ = 0, any eigenvalue λ satisfies λᵏ = 0, forcing λ = 0. So nilpotent matrices have only the eigenvalue zero.
If A is upper triangular, the determinant equals the
A Sum of diagonal
B Rank of A
C Product of diagonal
D Nullity of A
For triangular matrices, the determinant is the product of the diagonal entries. This matches the fact that the eigenvalues are the diagonal entries, so the determinant equals the product of the eigenvalues.
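A short check on an illustrative upper triangular matrix:

```python
# For an upper triangular matrix, the determinant is the product of the
# diagonal entries, which are also the eigenvalues.
A = [[2, 7, 1],
     [0, 3, 5],
     [0, 0, 4]]
det = 1
for i in range(3):
    det *= A[i][i]
print(det)  # 24 = 2 · 3 · 4
```

The off-diagonal entries 7, 1, 5 play no role in the determinant, only in the eigenvectors.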
If A and B are equivalent (related by row/column operations), then they must share the
A Same eigenvalues
B Same rank
C Same trace
D Same determinant
Matrix equivalence preserves rank because it applies invertible row and column transformations. Eigenvalues are not preserved under general equivalence, only under similarity.
The solution space of Ax = 0 is a
A Subspace
B Coset only
C Empty set
D Nonlinear curve
Solutions of a homogeneous linear system form the null space. It contains zero and is closed under addition and scalar multiplication, so it is always a subspace.
The solution set of Ax = b (consistent) is a
A Always a subspace
B Always empty
C Coset of null space
D Always a basis
If xₚ is one solution, then the full solution set is xₚ + N(A). This is a translate of the null space, called an affine subspace (a coset).
A linear operator that scales by a factor c has eigenvalues
A 0 only
B 1 only
C ±1 only
D c only
If T(v) = cv for all vectors v, then every nonzero vector is an eigenvector with eigenvalue c. The transformation uniformly scales all directions by c.