Chapter 13: Vector Spaces and Linear Transformations (Set-5)
For W = {(x, y, z) ∈ R³ : x + 2y + 3z = 0}, dim(W) is
A. 1
B. 3
C. 2
D. 0
Answer: C
Explanation: A single nontrivial linear equation in R³ defines a plane through the origin, which is a subspace of dimension 3 − 1 = 2.
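A quick numerical check (a NumPy sketch, not part of the original text): the plane is the null space of the coefficient matrix [1 2 3], so its dimension follows from rank–nullity.

```python
import numpy as np

# The plane x + 2y + 3z = 0 is the null space of the 1x3 matrix [1 2 3].
A = np.array([[1.0, 2.0, 3.0]])

# Rank-nullity: dim(null space) = number of columns - rank.
rank = int(np.linalg.matrix_rank(A))
dim_W = A.shape[1] - rank
print(dim_W)  # 2
```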
For W = {(x, y, z) : x = 1} ⊂ R³, the main reason it is not a subspace is
A. It is not closed under addition
B. It is not closed under scalar multiplication
C. It is not a finite set
D. It does not contain the zero vector
Answer: D
Explanation: The set requires x = 1, so (0, 0, 0) ∉ W. Missing the zero vector already breaks the subspace requirement, so W cannot be a subspace.
If dim(U) = 4 and dim(W) = 3 for subspaces of a space V, then dim(U ∩ W) is at least
A. 1
B. 0
C. 3
D. 7
Answer: B
Explanation: In general, dim(U ∩ W) ≥ dim U + dim W − dim V. Since dim V is not specified, V may have dimension 7 or more, in which case the bound gives dim(U ∩ W) ≥ 0 and the intersection can be trivial. So the guaranteed minimum is 0.
If U, W ⊂ R⁵ with dim U = 4 and dim W = 3, then dim(U ∩ W) can be
A. 0
B. 4
C. 2
D. 5
Answer: C
Explanation: Use dim(U + W) = dim U + dim W − dim(U ∩ W) ≤ 5. So 7 − dim(U ∩ W) ≤ 5, hence dim(U ∩ W) ≥ 2, and the value 2 is attained.
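The dimension count can be verified numerically (a sketch assuming generic random subspaces, which have full rank with probability 1):

```python
import numpy as np

rng = np.random.default_rng(0)
# Generic subspaces of R^5: U spanned by 4 random columns, W by 3.
U = rng.standard_normal((5, 4))
W = rng.standard_normal((5, 3))

dim_U = int(np.linalg.matrix_rank(U))                     # 4
dim_W = int(np.linalg.matrix_rank(W))                     # 3
dim_sum = int(np.linalg.matrix_rank(np.hstack([U, W])))   # dim(U + W) = 5 generically

# dim(U ∩ W) = dim U + dim W - dim(U + W)
dim_int = dim_U + dim_W - dim_sum
print(dim_int)  # 2
```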
If v ∉ W, then the coset v + W is
A. Not equal to W
B. Equal to W
C. The empty set
D. The same as {v}
Answer: A
Explanation: v + W = W would imply v ∈ W. If v ∉ W, its coset is a distinct translate of W, not the zero coset.
For the projection π : V → V/W, the map is injective exactly when
A. W = V
B. W is finite
C. W is invariant
D. W = {0}
Answer: D
Explanation: ker(π) = W, and a linear map is injective iff its kernel is {0}. Therefore the projection is one-one only when W is the zero subspace.
If T : V → W is linear and dim V = n, then dim(V / ker T) equals
A. nullity(T)
B. dim W always
C. rank(T)
D. n + rank(T)
Answer: C
Explanation: By the first isomorphism theorem, V / ker(T) ≅ Im(T). Taking dimensions gives dim(V / ker T) = dim(Im(T)) = rank(T).
If T : V → W has rank r, then dim(V / ker T) is
A. r
B. n − r
C. n + r
D. dim W − r
Answer: A
Explanation: By the isomorphism theorem, the dimension of the quotient V / ker(T) equals the dimension of the image. So it equals the rank r, regardless of the size of the codomain.
If T : R⁶ → R⁴ has nullity 3, then its rank is
A. 4
B. 6
C. 1
D. 3
Answer: D
Explanation: Rank–Nullity: dim(domain) = 6 = rank + nullity = rank + 3. Hence rank = 3. The codomain dimension 4 allows a rank of at most 4.
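Rank–nullity can be confirmed on a concrete matrix (a NumPy sketch; the 4×6 matrix below is an illustrative choice built to have rank 3):

```python
import numpy as np

rng = np.random.default_rng(1)
# A 4x6 matrix of rank 3 (a 4x3 factor times a 3x6 factor), so nullity = 6 - 3 = 3.
A = rng.standard_normal((4, 3)) @ rng.standard_normal((3, 6))

rank = int(np.linalg.matrix_rank(A))
nullity = A.shape[1] - rank   # rank-nullity on the domain R^6
print(rank, nullity)  # 3 3
```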
If T : R⁶ → R⁴ is onto, then its nullity must be
A. 0
B. 4
C. 2
D. 6
Answer: C
Explanation: Onto implies rank = 4 (the dimension of the codomain). Then 6 = rank + nullity = 4 + nullity, so nullity = 2.
If T : R⁴ → R⁴ has rank 3, then T is
A. Invertible
B. Not invertible
C. Onto
D. One-one
Answer: B
Explanation: For an operator on R⁴, invertibility requires rank 4 and nullity 0. Rank 3 implies nullity 1, so T is neither one-one nor onto.
If A is n × n and has eigenvalue 0, then its rank is
A. Equal to n
B. Always 1
C. Less than n
D. Always n − 1
Answer: C
Explanation: Eigenvalue 0 implies det(A) = 0, so A is singular and cannot have full rank. Therefore the rank must be strictly less than n.
If λ is an eigenvalue of A, then ker(A − λI) is
A. Always {0}
B. Always the whole space
C. Never a subspace
D. Nontrivial
Answer: D
Explanation: An eigenvalue means there exists a nonzero v such that (A − λI)v = 0. So the kernel contains a nonzero vector, hence is nontrivial.
If A is diagonalizable, then the sum of the dimensions of all its eigenspaces is
A. Less than n
B. n
C. Greater than n
D. Always 1
Answer: B
Explanation: Diagonalizable means there is a basis of eigenvectors. The direct sum of the eigenspaces spans the space, so the total dimension contributed by the eigenspaces equals n.
If A has distinct eigenvalues, then eigenvectors for different eigenvalues are
A. Always orthogonal
B. Always equal
C. Linearly independent
D. Always zero
Answer: C
Explanation: Eigenvectors belonging to distinct eigenvalues are guaranteed to be linearly independent. Orthogonality is only guaranteed for symmetric matrices, not for general matrices.
For a 2 × 2 matrix, if the trace is 5 and the determinant is 6, then the eigenvalues satisfy
A. λ₁ + λ₂ = 5
B. λ₁ − λ₂ = 5
C. λ₁λ₂ = 5
D. λ₁ + λ₂ = 6
Answer: A
Explanation: For any square matrix, the trace equals the sum of the eigenvalues (with multiplicity) and the determinant equals their product. Here the sum must be 5 and the product must be 6.
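This can be seen numerically (a sketch; the matrix below is a hypothetical example chosen to have trace 5 and determinant 6, giving eigenvalues 2 and 3):

```python
import numpy as np

# A hypothetical 2x2 matrix with trace 5 and determinant 6.
A = np.array([[4.0, 1.0],
              [-2.0, 1.0]])

vals = np.sort(np.linalg.eigvals(A).real)
print(round(np.trace(A), 6), round(np.linalg.det(A), 6))  # 5.0 6.0
print(np.round(vals, 6))  # [2. 3.]
```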
For a 2 × 2 matrix with determinant 0, one eigenvalue must be
A. 1
B. −1
C. 2
D. 0
Answer: D
Explanation: The determinant equals the product of the eigenvalues. If the determinant is 0, at least one eigenvalue is 0, consistent with the matrix being singular.
If A is upper triangular, the roots of its characteristic polynomial are the
A. Row sums
B. Column sums
C. Diagonal entries
D. Pivot entries
Answer: C
Explanation: For triangular A, det(A − λI) equals the product of the factors (aᵢᵢ − λ). Hence the eigenvalues are the diagonal entries, with algebraic multiplicities.
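A quick check (a NumPy sketch with an illustrative triangular matrix):

```python
import numpy as np

# Upper triangular matrix: the eigenvalues are exactly the diagonal entries.
A = np.array([[2.0, 7.0, 1.0],
              [0.0, 5.0, 3.0],
              [0.0, 0.0, 9.0]])

vals = np.sort(np.linalg.eigvals(A).real)
print(vals)  # [2. 5. 9.]
```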
If A is n × n and nilpotent, then its characteristic polynomial is
A. λⁿ
B. (λ − 1)ⁿ
C. (λ + 1)ⁿ
D. λⁿ⁻¹
Answer: A
Explanation: Nilpotent implies all eigenvalues are 0. Therefore the characteristic polynomial has only the root 0 with multiplicity n, giving p(λ) = λⁿ.
If A is a projection matrix, its minimal polynomial must divide
A. λ² + 1
B. λ(λ − 1)
C. λ² − 2λ
D. λ³
Answer: B
Explanation: A projection satisfies A² = A, so A² − A = 0. Hence A is annihilated by m(λ) = λ(λ − 1), and the minimal polynomial divides λ(λ − 1).
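The annihilation relation can be checked directly (a sketch; the matrix below is an illustrative oblique projection onto the x-axis in R²):

```python
import numpy as np

# An oblique projection onto the x-axis in R^2: A^2 = A.
A = np.array([[1.0, 1.0],
              [0.0, 0.0]])
I = np.eye(2)

print(np.allclose(A @ A, A))        # True: A is idempotent
print(np.allclose(A @ (A - I), 0))  # True: A is annihilated by λ(λ - 1)
```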
If A is idempotent and neither zero nor the identity, then its rank is
A. Always 0
B. Always n
C. Always n − 1
D. Between 1 and n − 1
Answer: D
Explanation: For idempotent A, the eigenvalues are 0 or 1. If they are not all 0 or all 1, both appear, so the rank (the number of eigenvalues equal to 1) is strictly between 0 and n.
If A is similar to B, then rank(A) and rank(B) are
A. Always different
B. Related by trace
C. Equal
D. Related by determinant only
Answer: C
Explanation: Similarity uses invertible matrices: B = P⁻¹AP. Multiplying by invertible matrices does not change rank, so similar matrices have the same rank.
If A and B are similar, then det(A) and det(B) are
A. Opposite in sign
B. Equal
C. Unrelated
D. Always zero
Answer: B
Explanation: det(P⁻¹AP) = det(P⁻¹) det(A) det(P) = det(A). So the determinant is invariant under similarity, just like the trace and the characteristic polynomial.
If B = P⁻¹AP, then the trace satisfies
A. tr(B) = det(A)
B. tr(B) = 0 always
C. tr(B) = rank(A)
D. tr(B) = tr(A)
Answer: D
Explanation: The trace is invariant under similarity, which can be proved using the cyclic property tr(XY) = tr(YX). Thus tr(P⁻¹AP) = tr(A).
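The similarity invariants from the last three questions (rank, determinant, trace) can all be verified at once (a sketch using seeded random matrices; a random Gaussian P is invertible with probability 1):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3))
P = rng.standard_normal((3, 3))     # invertible with probability 1
B = np.linalg.inv(P) @ A @ P        # B = P^{-1} A P, similar to A

print(np.isclose(np.trace(B), np.trace(A)))                   # True
print(np.isclose(np.linalg.det(B), np.linalg.det(A)))         # True
print(np.linalg.matrix_rank(B) == np.linalg.matrix_rank(A))   # True
```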
If T : V → V and T² = T, then V decomposes as
A. ker T + ker T
B. Im T ∩ ker T
C. ker T ⊕ Im T
D. Only Im T
Answer: C
Explanation: For a projection (idempotent) operator, every vector splits into a part mapped to itself (in the image) plus a part mapped to zero (in the kernel). The intersection is {0}.
If T is linear and Im(T) = {0}, then T is the
A. Zero map
B. Identity map
C. Invertible map
D. Projection map
Answer: A
Explanation: If the image contains only the zero vector, every output is zero. So T(v) = 0 for all v, which is exactly the zero transformation.
If T is linear and ker(T) = V, then T must be the
A. Onto map
B. One-one map
C. Zero map
D. Rotation map
Answer: C
Explanation: If every vector maps to zero, the kernel is the whole domain. That happens only for the zero map, which cannot be injective unless V = {0}.
In Rⁿ, a set of n vectors is a basis iff the matrix having them as columns is
A. Singular
B. The zero matrix
C. Upper triangular only
D. Invertible
Answer: D
Explanation: n vectors form a basis iff they are independent, iff the n × n matrix with those vectors as columns has full rank, hence is invertible.
A linear map T is determined completely by its values on
A. A basis
B. Any one vector
C. Any coset
D. Any eigenvalue
Answer: A
Explanation: Every vector is a linear combination of basis vectors. Linearity then fixes T on all vectors once T is known on the basis vectors.
If A and B represent the same operator in different bases, then they are
A. Equivalent only
B. Unrelated
C. Similar
D. Always equal
Answer: C
Explanation: A change of basis gives B = P⁻¹AP, which is similarity. So matrices of the same linear operator in different bases are similar and share eigenvalues.
If λ is an eigenvalue of A, then the corresponding eigenvalue of Aᵏ is
A. kλ
B. λ + k
C. λ/k
D. λᵏ
Answer: D
Explanation: If Av = λv, then Aᵏv = λᵏv. So the eigenvectors remain the same and the eigenvalues are raised to the power k.
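A numerical check with k = 3 (a sketch; the triangular matrix below is an illustrative choice with eigenvalues 2 and 3):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])   # triangular, so eigenvalues are 2 and 3

vals = np.sort(np.linalg.eigvals(A).real)
vals_cubed = np.sort(np.linalg.eigvals(np.linalg.matrix_power(A, 3)).real)

print(vals)        # eigenvalues of A:   [2. 3.]
print(vals_cubed)  # eigenvalues of A^3: [ 8. 27.]
```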
If A is invertible, the eigenvalues of A⁻¹ are the
A. Negatives
B. Reciprocals
C. Squares
D. Same values
Answer: B
Explanation: From Av = λv with λ ≠ 0, apply A⁻¹: v = λA⁻¹v, so A⁻¹v = (1/λ)v. So the eigenvalues invert.
If A is diagonalizable, then there exists an invertible P such that
A. PAP⁻¹ = 0
B. P⁻¹AP = I
C. P⁻¹AP = D
D. P⁻¹AP = A
Answer: C
Explanation: Diagonalizable means A = PDP⁻¹ for some invertible P whose columns are eigenvectors. Equivalently, P⁻¹AP = D is diagonal.
If the geometric multiplicity of λ equals 1 for an n × n matrix, then
A. The eigenspace is one-dimensional
B. The eigenvalue is distinct
C. The matrix is diagonal
D. The rank is 1
Answer: A
Explanation: Geometric multiplicity is defined as the dimension of the eigenspace ker(A − λI). If it is 1, there is exactly one independent eigenvector direction for λ.
If A has characteristic polynomial (λ − 2)³, then its trace equals
A. 2
B. 8
C. 0
D. 6
Answer: D
Explanation: The eigenvalue 2 has algebraic multiplicity 3. The trace equals the sum of the eigenvalues with multiplicity, so 2 + 2 + 2 = 6. This holds even if A is not diagonalizable.
If A has characteristic polynomial λ²(λ − 3), then its determinant equals
A. 3
B. 9
C. 0
D. −3
Answer: C
Explanation: The determinant equals the product of the eigenvalues with multiplicity: 0 · 0 · 3 = 0. Therefore A is singular.
If T : V → V has ker(T) = {0} in finite dimension, then
A. T has rank 0
B. T is onto
C. T is the zero map
D. T is nilpotent
Answer: B
Explanation: For linear operators on finite-dimensional spaces, injective implies surjective. Kernel {0} means injective, so the rank equals dim V, hence T is onto.
If T : V → V is onto in finite dimension, then
A. The kernel equals V
B. The nullity is dim V
C. The rank is 0
D. The kernel is {0}
Answer: D
Explanation: Onto implies rank equals dim V. Rank–Nullity then forces nullity 0, so the kernel contains only zero. Thus surjective implies injective for operators.
The space V* of all linear functionals on V is called the
A. Quotient space
B. Kernel space
C. Dual space
D. Image space
Answer: C
Explanation: The dual space consists of all linear maps from V to the scalar field. It is a vector space and has the same dimension as V when V is finite-dimensional.
If dim(V) = n, then dim(V*) equals
A. n
B. n²
C. 2n
D. 1
Answer: A
Explanation: Finite-dimensional vector spaces have dual spaces of equal dimension. Each basis of V produces a dual basis of V*, giving exactly n independent functionals.
In the Gram–Schmidt process, each step subtracts
A. Determinants
B. Projections
C. Eigenvectors
D. Cosets
Answer: B
Explanation: Gram–Schmidt makes vectors orthogonal by subtracting the projection of the current vector onto the span of the previous orthogonal vectors, removing the components in those directions.
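The projection-subtraction step can be sketched in code (a minimal NumPy implementation; the input vectors are an illustrative choice):

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize vectors by subtracting, at each step, the
    projections onto the previously accepted orthonormal vectors."""
    basis = []
    for v in vectors:
        w = np.array(v, dtype=float)
        for q in basis:
            w = w - (q @ w) * q      # subtract the projection of w onto q
        norm = np.linalg.norm(w)
        if norm > 1e-12:             # skip (nearly) dependent vectors
            basis.append(w / norm)
    return np.array(basis)

Q = gram_schmidt([[1.0, 1.0, 0.0],
                  [1.0, 0.0, 1.0],
                  [0.0, 1.0, 1.0]])
print(np.allclose(Q @ Q.T, np.eye(3)))  # True: the rows are orthonormal
```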
If A is real symmetric, eigenvectors for distinct eigenvalues are
A. Parallel
B. Of equal length only
C. Always dependent
D. Orthogonal
Answer: D
Explanation: Real symmetric matrices have orthogonal eigenspaces for distinct eigenvalues. This is part of the spectral theorem and enables orthogonal diagonalization.
The singular values of A are the square roots of the eigenvalues of
A. A + Aᵀ
B. A − Aᵀ
C. AᵀA
D. AA⁻¹
Answer: C
Explanation: By definition, the singular values are σᵢ = √μᵢ, where the μᵢ are the eigenvalues of the positive semidefinite matrix AᵀA. This works for any real matrix.
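The relation σᵢ = √μᵢ can be checked numerically (a sketch with a seeded random matrix; tiny negative eigenvalues from rounding are clipped before the square root):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 3))

svals = np.sort(np.linalg.svd(A, compute_uv=False))
eigs = np.linalg.eigvalsh(A.T @ A)          # A^T A is symmetric PSD; ascending order
roots = np.sqrt(np.clip(eigs, 0.0, None))   # clip tiny negative rounding errors

print(np.allclose(svals, roots))  # True
```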
For A : Rⁿ → Rᵐ, the rank of A equals the rank of
A. Aᵀ
B. A²
C. A⁻¹
D. A + I
Answer: A
Explanation: The rank of a matrix equals the rank of its transpose because row rank equals column rank, a fundamental theorem of linear algebra.
If A is m × n, then rank(A) ≤
A. m + n
B. mn
C. m²
D. min(m, n)
Answer: D
Explanation: The rank cannot exceed the number of rows or columns: there cannot be more independent columns than columns, nor more independent rows than rows.
If A has full column rank n (with m ≥ n), then Ax = b has
A. Always infinitely many solutions
B. No solutions ever
C. At most one solution
D. Exactly two solutions
Answer: C
Explanation: Full column rank means the nullity is 0, so the homogeneous system has only the trivial solution. Therefore, if Ax = b is consistent, its solution is unique.
For a consistent system Ax = b, the solution is unique exactly when
A. The rank is 0
B. The nullity is 0
C. The trace is 0
D. The determinant is 0
Answer: B
Explanation: If the null space is {0}, then any two solutions differ by a null vector, forcing them to be equal. Hence consistency plus nullity 0 gives a unique solution.
If A is n × n and has n independent eigenvectors, then its minimal polynomial has
A. Degree always n
B. Only a constant term
C. Always λⁿ
D. No repeated factors
Answer: D
Explanation: Diagonalizable matrices have a minimal polynomial that splits into distinct linear factors (no repeated roots). Repeated factors are associated with Jordan blocks and a failure of diagonalization.
A linear transformation T : V → W preserves the dimension of every subspace U ⊆ V (i.e., dim(T(U)) = dim(U) for all U) exactly when T is
A. Injective (one-one)
B. The zero map
C. A projection map
D. A nilpotent map
Answer: A
Explanation: If T is injective, then the restriction T|U is injective for any subspace U, so dim(T(U)) = dim(U). If T is not injective, some nonzero vector maps to zero, reducing the dimension for a suitable subspace.
If a linear operator has all eigenvalues equal to 1, it must be
A. The identity always
B. The zero map always
C. Not necessarily the identity
D. Diagonal always
Answer: C
Explanation: Having all eigenvalues equal to 1 does not force A = I. A nontrivial Jordan block with eigenvalue 1 is not diagonalizable, giving A ≠ I with the same eigenvalues.
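The Jordan-block counterexample can be exhibited directly (a NumPy sketch):

```python
import numpy as np

# A 2x2 Jordan block: both eigenvalues are 1, yet J is not the identity.
J = np.array([[1.0, 1.0],
              [0.0, 1.0]])

vals = np.linalg.eigvals(J)
print(np.allclose(vals, [1.0, 1.0]))   # True: all eigenvalues equal 1
print(np.array_equal(J, np.eye(2)))    # False: J != I
```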