Chapter 13: Vector Spaces and Linear Transformations (Set-2)
Which statement matches “closure under addition”?
A Product stays in set
B Sum stays in set
C Inverse always exists
D Order is preserved
Closure under addition means if u and v belong to the set, then u + v must also belong to the same set. This is essential for subspaces.
Which statement matches “closure under scalar multiplication”?
A v/c always exists
B v⋅v exists
C cv stays inside
D v becomes unit
Closure under scalar multiplication means multiplying any vector v in the set by any scalar c from the field keeps the result cv in the set.
A set fails subspace test if it does not contain
A Zero vector
B Two vectors
C A basis
D Any scalar
Every subspace must contain the zero vector. If a set misses the zero vector, it cannot be closed under scalar multiplication or satisfy vector space axioms.
If 0⋅v is computed in a vector space, the result is
A Same vector v
B Unit vector
C Zero vector
D Undefined
One vector space axiom implies 0⋅v = 0. This follows from distributive properties and is consistent with scalar multiplication rules in any vector space.
If 1⋅v is computed in a vector space, the result is
A Zero vector
B Inverse of v
C Orthogonal to v
D Same vector v
The scalar identity axiom states 1⋅v = v for all vectors v. This ensures scalars behave like usual multiplication with identity element 1.
Which condition guarantees uniqueness of coordinates?
A Basis property
B Span only
C Dependence only
D Same lengths
Coordinates are unique only when the chosen set is a basis (spans the space and is linearly independent). If the set is dependent, multiple representations occur.
A generating set means the set’s span equals
A Only the kernel
B Only a line
C The whole space
D Only the zero set
A generating set for V is any set of vectors whose span equals V. It may have extra vectors and need not be independent.
A minimal generating set is typically a
A Basis
B Dependent set
C Empty set
D Random set
A generating set becomes minimal when no vector can be removed without losing spanning. Such a set is usually linearly independent and hence forms a basis.
Extending a basis means
A Remove dependent vectors
B Add vectors to span
C Change field only
D Compute determinant
Extending a basis means adding vectors (usually to an independent set) to obtain a basis for a larger space. The goal is to span the full space.
Reducing a spanning set to a basis involves
A Adding more vectors
B Making all unit
C Taking dot products
D Removing dependent vectors
If a set spans but is dependent, you can remove redundant vectors while keeping the span unchanged. Repeating this produces a linearly independent spanning set.
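The removal procedure can be sketched numerically. A minimal illustration (assuming vectors are NumPy arrays; the spanning set below is invented for the example): keep a vector only if it raises the rank of the vectors kept so far.

```python
import numpy as np

def reduce_to_basis(vectors):
    """Greedily keep only vectors that increase the rank (i.e. enlarge the span)."""
    kept = []
    for v in vectors:
        candidate = kept + [v]
        # kept is independent by construction, so its rank equals len(kept)
        if np.linalg.matrix_rank(np.array(candidate)) > len(kept):
            kept.append(v)
    return kept

# A dependent spanning set for a plane in R^3: the third vector is redundant.
spanning = [np.array([1.0, 0.0, 0.0]),
            np.array([0.0, 1.0, 0.0]),
            np.array([1.0, 1.0, 0.0])]
basis = reduce_to_basis(spanning)   # keeps the first two vectors only
```

The span is unchanged at every step, because only redundant vectors are discarded.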
Dimension of R³ is
A 3
B 2
C 1
D 6
R³ has standard basis (1,0,0), (0,1,0), (0,0,1). Any basis has exactly three vectors, so its dimension is 3.
Dimension of the span of one nonzero vector is
A 0
B 2
C 1
D Infinite
The span of a single nonzero vector forms a line through the origin. Any vector in that span is a scalar multiple of the vector, so dimension is 1.
Intersection U∩W always contains
A All vectors
B Zero vector
C No vectors
D Only basis vectors
Every subspace contains the zero vector. Therefore, the intersection of two subspaces must contain the zero vector, even if they share nothing else.
If U∩W = {0}, then U+W suggests
A Same subspace
B Not a subspace
C Only quotient space
D Direct sum idea
When subspaces intersect only at zero, each vector in U+W has a unique decomposition as u + w with u ∈ U and w ∈ W. This is the key idea behind a direct sum (intro level).
A coset of W in V looks like
A v⋅W
B v/W
C v+W
D v−W only
A coset is the set {v + w : w ∈ W}. Different vectors can generate the same coset if their difference lies in W.
In V/W, the “zero element” is
A W itself
B {0} only
C Any vector v
D Any basis set
In the quotient space, the zero element is the coset 0 + W = W. It acts like the additive identity for coset addition.
Dimension formula (basic) for quotient is
A dim(V/W) = dim V + dim W
B dim(V/W) = dim V − dim W
C dim(V/W) = dim W
D dim(V/W) = 0 always
For finite-dimensional V and subspace W, the quotient reduces dimension by dim(W). This matches the idea of “collapsing” W to zero.
A map induced on V/W must be
A Nonlinear always
B Only injective
C Well-defined
D Only surjective
An induced map on cosets must give the same output for all representatives of a coset. Otherwise, the function depends on choices and is not valid.
A subset is linearly dependent if
A Some nontrivial combo gives zero
B Only zero combo gives zero
C All vectors are unit
D All vectors are orthogonal
Dependence means there exist scalars not all zero such that their linear combination equals zero. This implies at least one vector can be written from others.
A vector is in span{v₁, …, vₖ} if it can be written as
A A dot product
B A linear combination
C A cross product
D A determinant
Being in the span means the vector equals a₁v₁ + ⋯ + aₖvₖ for some scalars. This links directly to solvability of linear systems.
In a basis, representation of a vector is
A Always multiple
B Always impossible
C Always integer
D Unique
A basis is independent and spanning, so each vector has exactly one coordinate vector relative to that basis. Independence prevents alternative representations.
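As a small numerical illustration (the basis B and vector v below are invented for the example), the coordinates come from solving Bc = v, which has exactly one solution when the basis vectors are independent:

```python
import numpy as np

# A basis of R^2, written as the columns of B (chosen purely for illustration).
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])
v = np.array([3.0, 2.0])

# Coordinates c satisfy B @ c = v; invertibility of B (independence) makes c unique.
c = np.linalg.solve(B, v)   # → [1.0, 2.0], since v = 1*(1,0) + 2*(1,1)
```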
Ordered basis mainly affects
A Vector length
B Field choice
C Coordinate order
D Determinant sign
Changing the order of basis vectors changes the order of coordinates in the coordinate column. The geometric vector is unchanged; only coordinate description changes.
Kernel of a linear map is always a
A Subspace
B Coset
C Basis
D Scalar set
If T(u) = 0 and T(v) = 0, then T(u+v) = 0, and T(cu) = 0. Hence the kernel is closed under addition and scalar multiplication.
Image of a linear map is always a
A Single vector
B Coset set
C Subspace
D Empty set
If y₁ = T(u) and y₂ = T(v), then y₁ + y₂ = T(u+v). Also cy₁ = T(cu). So the image is a subspace.
A linear map with zero kernel is
A Onto always
B One-one
C Zero map
D Not linear
For linear maps, injective means only the zero vector maps to zero. So ker(T) = {0} is exactly the condition for one-one.
If dim(V) = 5 and rank = 3, nullity is
A 8
B 15
C 3
D 2
Rank–Nullity gives 5 = 3 + nullity. Hence nullity = 2. This also equals the dimension of the solution space of T(v) = 0.
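A quick numerical check of the same formula (the 3×5 matrix below is an arbitrary rank-3 example, not taken from the text):

```python
import numpy as np

# A map from R^5 to R^3 with rank 3 (rows are clearly independent).
A = np.array([[1.0, 0.0, 0.0, 2.0, 1.0],
              [0.0, 1.0, 0.0, 3.0, 0.0],
              [0.0, 0.0, 1.0, 0.0, 4.0]])

rank = np.linalg.matrix_rank(A)
nullity = A.shape[1] - rank   # Rank–Nullity: dim(domain) = rank + nullity
# rank = 3, nullity = 2
```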
If nullity = 0, then the map is
A Injective
B Not defined
C Always onto
D Always zero
Nullity is dim(ker T). If it is 0, the kernel has only the zero vector, meaning no nonzero vector maps to zero, so the map is one-one.
If rank equals dimension of codomain, the map is
A One-one only
B Constant map
C Onto
D Not possible
Rank is dimension of image. If image dimension equals codomain dimension, the image must be the entire codomain, making the transformation surjective.
The matrix of a linear map changes when you change
A Vector space axioms
B Scalar field identity
C Zero vector
D Basis
A linear map is fixed, but its matrix depends on chosen bases for domain and codomain. Different bases produce different coordinate descriptions and matrices.
Column interpretation of transformation matrix means columns are
A Images of basis vectors
B Kernels of basis vectors
C Determinants of basis
D Orthogonal projections
In standard coordinates, column i equals T(eᵢ). More generally, in a chosen basis, columns represent images of domain basis vectors expressed in the codomain basis.
Composition corresponds to matrix
A Addition
B Multiplication
C Transpose
D Determinant
If T(x) = Ax and S(y) = By, then (S∘T)(x) = B(Ax) = (BA)x. So composition corresponds to the product in the correct order.
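A small sketch confirming the order (the matrices A and B below are arbitrary examples):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 1.0]])   # matrix of T
B = np.array([[0.0, 1.0],
              [1.0, 0.0]])   # matrix of S
x = np.array([3.0, 4.0])

# Applying T first and then S equals one application of the product BA.
step_by_step = B @ (A @ x)
composed = (B @ A) @ x      # same result; note BA, not AB
```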
A projection operator typically satisfies
A P² = 0
B Pᵀ = P⁻¹
C P² = P
D P = I always
A projection applied twice gives the same result as applying once. This idempotent property P(P(v)) = P(v) is a key basic feature of projections.
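A minimal numerical sketch of idempotence (projection onto the line spanned by an arbitrarily chosen vector u):

```python
import numpy as np

# Orthogonal projection onto the line spanned by u: P = u u^T / (u . u).
u = np.array([1.0, 2.0])
P = np.outer(u, u) / np.dot(u, u)

# Projecting twice changes nothing: P @ P equals P (up to rounding).
twice = P @ P
```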
A linear functional maps vectors to
A Scalars
B Matrices
C Subspaces
D Cosets
A linear functional is a linear map from a vector space to its field (like R). The collection of all such maps forms the dual space.
Eigenvalue λ exists if there is a nonzero v with
A Av = v + λ
B A + λI = 0
C det(A) = λ
D Av = λv
Eigenvectors are nonzero vectors whose direction is preserved by A, only scaled by λ. This equation is the defining property of eigenpairs.
The eigenspace for λ equals
A Im(A − λI)
B ker(A + λI)
C ker(A − λI)
D Im(A + λI)
Vectors satisfying Av = λv are exactly the solutions of (A − λI)v = 0. So the eigenspace is the kernel of A − λI.
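A concrete check (the matrix, eigenvalue, and eigenvector below are illustrative choices):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
lam = 3.0

# v lies in ker(A - lam*I), so it is an eigenvector for lam.
v = np.array([1.0, 1.0])
residual = (A - lam * np.eye(2)) @ v   # → the zero vector
```

Equivalently, A @ v equals lam * v, which is the defining eigenpair equation.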
Geometric multiplicity means
A Eigenspace dimension
B Polynomial degree
C Determinant value
D Trace value
Geometric multiplicity of eigenvalue λ is dim(ker(A − λI)). It measures how many independent eigenvectors correspond to λ.
Algebraic multiplicity means
A Eigenspace dimension
B Root multiplicity in polynomial
C Rank of eigenspace
D Number of rows
Algebraic multiplicity is how many times λ appears as a root of the characteristic polynomial. It is always at least the geometric multiplicity.
A matrix is diagonalizable when it has
A Zero trace
B Unit determinant
C Full eigenvector basis
D All positive entries
Diagonalization requires n linearly independent eigenvectors for an n×n matrix. Then A = PDP⁻¹ with D diagonal.
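A short sketch using NumPy's eig (the matrix A below is an arbitrary example with distinct eigenvalues, so P is invertible):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [0.0, 2.0]])

# eig returns the eigenvalues and a matrix P whose columns are eigenvectors.
eigvals, P = np.linalg.eig(A)
D = np.diag(eigvals)

# Reassembling P D P^{-1} recovers A (up to floating-point rounding).
reconstructed = P @ D @ np.linalg.inv(P)
```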
Similar matrices represent
A Same linear operator
B Different dimensions
C Same determinant only
D Same kernel only
Similarity B = P⁻¹AP means A and B describe the same linear operator under different bases. They share eigenvalues and characteristic polynomial.
Characteristic polynomial is unchanged by
A Row operations
B Scaling only
C Transpose always
D Similarity change
Similar matrices have the same characteristic polynomial. This is why eigenvalues are basis-independent properties of a linear operator, not of one matrix form.
Cayley–Hamilton theorem states
A Polynomial equals determinant
B Trace equals determinant
C Matrix satisfies its polynomial
D Rank equals nullity
Cayley–Hamilton says a matrix A satisfies its own characteristic polynomial: p(A) = 0. It helps express higher powers of A using lower powers.
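For a 2×2 matrix the characteristic polynomial is t² − tr(A)t + det(A), so the theorem is easy to verify numerically (the matrix below is an arbitrary example):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# Cayley–Hamilton for 2x2: A^2 - tr(A)*A + det(A)*I should be the zero matrix.
trace = np.trace(A)
det = np.linalg.det(A)
pA = A @ A - trace * A + det * np.eye(2)   # → zero matrix (up to rounding)
```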
Minimal polynomial is the monic polynomial of least degree with
A m(A) = 0
B m(λ) = 0 always
C m(I) = 0
D m(0) = 1
The minimal polynomial is the lowest-degree monic polynomial that annihilates the matrix. It divides the characteristic polynomial and guides diagonalization behavior.
For diagonal matrix, eigenvalues are
A Off-diagonal entries
B Diagonal entries
C Row sums
D Column sums
For a diagonal matrix, det(A − λI) becomes ∏(aᵢᵢ − λ). Hence the eigenvalues are exactly the diagonal entries, with multiplicities.
A matrix norm measures
A Matrix determinant
B Matrix trace
C Matrix rank
D Matrix “size”
A matrix norm assigns a nonnegative number representing magnitude, often consistent with vector norms. Norms are used to estimate errors and stability in computations.
Orthogonality in inner product spaces means
A Determinant zero
B Same direction
C Inner product zero
D Same length
Two vectors are orthogonal if their inner product is zero. This generalizes perpendicularity in R² and R³ to abstract inner product spaces.
Gram–Schmidt starts with a set that is
A Linearly dependent
B Linearly independent
C Empty only
D All zero vectors
Gram–Schmidt converts an independent set into an orthonormal set spanning the same subspace. If the set is dependent, some steps produce zero vectors.
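A classical Gram–Schmidt sketch (assuming real NumPy vectors; the two input vectors are arbitrary independent choices):

```python
import numpy as np

def gram_schmidt(vectors):
    """Classical Gram–Schmidt: orthonormalize a linearly independent list."""
    ortho = []
    for v in vectors:
        # Subtract the components of v along the vectors already produced.
        w = v - sum(np.dot(v, q) * q for q in ortho)
        # For a dependent input, w would be (numerically) zero here.
        ortho.append(w / np.linalg.norm(w))
    return ortho

q1, q2 = gram_schmidt([np.array([1.0, 1.0]), np.array([1.0, 0.0])])
# q1 and q2 are unit vectors with q1 . q2 = 0, spanning the same plane.
```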
An invariant subspace W for T satisfies
A T(W) ⊆ W
B T(W) = V
C T(W) = {0}
D T(W) = Wᶜ
Invariance means applying T to any vector in W stays in W. This is important for understanding eigenvectors, decompositions, and block forms.
A linear differential operator is linear if it satisfies
A Always increases degree
B Has constant solutions
C Add and scalar rules
D Has zero eigenvalues
Operators like D(f) = f′ are linear because D(f+g) = D(f) + D(g) and D(cf) = cD(f). These match the linear transformation axioms.
Similarity differs from equivalence mainly because similarity preserves
A Eigenvalues
B Row space only
C Column space only
D Rank only
Similarity is a change of basis for the same operator, so eigenvalues and characteristic polynomial are preserved. Equivalence allows independent row/column changes and may not preserve eigenvalues.
Eigenvalues with negative real part often indicate
A Always singular matrix
B Always diagonalizable
C Always rank zero
D Stability tendency
In basic dynamical systems x′ = Ax, eigenvalues with negative real part typically lead to solutions decaying toward zero over time. This gives a simple stability interpretation.