Chapter 13: Vector Spaces and Linear Transformations (Set-1)
Which item must exist in a vector space?
A. Prime element
B. Division operation
C. Order relation
D. Scalar multiplication rule
A vector space requires addition and scalar multiplication satisfying axioms (closure, associativity, distributive laws, identity, inverses). Prime, division, or order are not required properties.
A subset W of V is a subspace if it is
A. Closed under addition
B. Non-empty only
C. Finite set
D. Contains all scalars
Subspace test needs nonempty (or contains zero), closed under vector addition, and closed under scalar multiplication. Being finite or “containing scalars” is irrelevant.
Which set is always a subspace of any vector space?
A. All nonzero vectors
B. The zero subspace
C. Unit vectors only
D. Positive vectors only
{0} is always a subspace because it contains zero and is closed under addition and scalar multiplication. The other sets generally fail closure properties.
If a set of vectors is linearly independent, it means
A. Vectors are perpendicular
B. Vectors have unit length
C. Only trivial combination gives zero
D. Vectors are all distinct
Linear independence means a1v1 + ⋯ + akvk = 0 forces all scalars ai = 0. Perpendicularity, unit length, and distinctness are not required.
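As a concrete check (a small sketch in plain Python; the helper name is mine, not from the text): two vectors in R^2 are linearly independent exactly when the determinant of the 2×2 matrix with those vectors as columns is nonzero.

```python
# Dependence test for two vectors in R^2: det [[v0, w0], [v1, w1]] != 0
# holds exactly when neither vector is a scalar multiple of the other.
def independent_2d(v, w):
    return v[0] * w[1] - v[1] * w[0] != 0

print(independent_2d((1, 0), (0, 1)))  # True: the standard basis vectors
print(independent_2d((1, 2), (2, 4)))  # False: (2, 4) = 2 * (1, 2)
```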
The span of a set of vectors is the set of all
A. Linear combinations
B. Dot products
C. Eigenvalues
D. Determinants
Span is the collection of all finite linear combinations of given vectors. It forms the smallest subspace containing those vectors.
A basis of a vector space is a set that is
A. Dependent and spanning
B. Independent but not spanning
C. Spanning but redundant
D. Independent and spanning
A basis must both span the space and be linearly independent. This ensures every vector has a unique coordinate representation relative to that basis.
The dimension of a finite-dimensional vector space equals
A. Number of all vectors
B. Number of subspaces
C. Number of basis vectors
D. Number of scalars
Dimension is defined as the size (cardinality) of any basis in a finite-dimensional space. All bases have the same number of vectors.
Which condition is necessary for a subset W to be a subspace?
A. 0 ∈ W
B. W has at least two vectors
C. W is bounded
D. W is a closed set
Containing the zero vector is necessary for a subspace (also follows from closure if nonempty). Being bounded/closed (topology) is not required.
If vectors v1, v2 are dependent, then
A. Both must be zero
B. They must be orthogonal
C. One is a scalar multiple of the other
D. They must be equal length
Two vectors are linearly dependent exactly when one is a scalar multiple of the other (including the case where one is the zero vector).
A linear combination uses
A. Only vectors
B. Only scalars
C. Only matrices
D. Scalars and vectors
A linear combination has the form a1v1 + ⋯ + akvk, where the ai are scalars and the vi are vectors.
In R^n, the standard basis has
A. n^2 vectors
B. 2n vectors
C. n vectors
D. 1 vector
The standard basis is {e1, …, en}, where each ei has a 1 in one position and 0 elsewhere, so there are n vectors.
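A minimal sketch of this construction in plain Python (the helper name is my own): build the standard basis of R^n as a list of n coordinate vectors.

```python
# Build the standard basis of R^n: e_i has a 1 in position i, 0 elsewhere.
def standard_basis(n):
    return [[1 if j == i else 0 for j in range(n)] for i in range(n)]

print(standard_basis(3))  # [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
```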
Which is a trivial subspace of V?
A. Any line not through origin
B. {0}
C. Any circle
D. Any finite set
Trivial subspaces are {0} and V itself. A line not through the origin is not closed under scalar multiplication, so it is not a subspace.
The intersection of two subspaces is always
A. A subspace
B. Not a subspace
C. Empty always
D. A basis
Intersection of subspaces contains zero and is closed under addition and scalar multiplication, so it forms a subspace.
The sum U + W of subspaces means
A. Only common elements
B. Only basis vectors
C. {u + w} combinations
D. Only scalar sums
U + W = {u + w : u ∈ U, w ∈ W}. It is a subspace containing both U and W.
A direct sum V = U ⊕ W means
A. Both are equal sets
B. Both are finite
C. Both are orthogonal
D. Intersection is only zero
For V = U ⊕ W, every vector has a unique decomposition u + w; for two subspaces this is equivalent to U + W = V with U ∩ W = {0}. Orthogonality is extra structure, not required.
Coordinates of a vector depend on
A. Chosen basis
B. Vector length
C. Field characteristic
D. Matrix determinant
The coordinate vector changes when the basis changes. The same geometric vector can have different coordinate tuples in different bases.
A quotient space V/W consists of
A. Elements of W only
B. Matrices in V
C. Cosets of W
D. Scalars in field
V/W is formed by equivalence classes v + W. Addition and scalar multiplication are defined on these cosets and must be well-defined.
Two vectors v, u are equivalent mod W if
A. v + u ∈ W
B. vu ∈ W
C. v = u always
D. v − u ∈ W
In quotient spaces, v ∼ u when their difference lies in W. Then they represent the same coset: v + W = u + W.
The natural projection map sends v to
A. W
B. v + W
C. 0
D. −v
The projection π : V → V/W is defined by π(v) = v + W. Its kernel is W.
The kernel of the natural projection is
A. V/W
B. {0}
C. W
D. V
π(v) = W (the zero coset) exactly when v ∈ W. So ker(π) = W, a key link to the first isomorphism theorem.
In a quotient space, “well-defined” means
A. Independent of representative
B. Depends on chosen vector
C. Uses only basis vectors
D. Requires determinant nonzero
Operations on cosets must not depend on which representative vector you pick. If v + W = u + W, then the results must match for both.
A linear transformation T must satisfy
A. Only additivity
B. Only homogeneity
C. Additivity and homogeneity
D. Only continuity
Linearity means T(u + v) = T(u) + T(v) and T(cv) = cT(v). Continuity is not part of the definition in basic linear algebra.
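The two axioms can be spot-checked numerically. A sketch in plain Python (all names here are mine, chosen for illustration): passing finitely many checks does not prove linearity, but a single failure disproves it.

```python
def is_linear_on_samples(T, pairs, scalars):
    # Spot-check additivity T(u+v) = T(u)+T(v) and homogeneity T(c*u) = c*T(u)
    # on a few sample vectors and scalars.
    add = lambda u, v: [a + b for a, b in zip(u, v)]
    mul = lambda c, v: [c * a for a in v]
    for u, v in pairs:
        if T(add(u, v)) != add(T(u), T(v)):
            return False
        for c in scalars:
            if T(mul(c, u)) != mul(c, T(u)):
                return False
    return True

rotate90 = lambda v: [-v[1], v[0]]   # rotation by 90 degrees: linear
shift = lambda v: [v[0] + 1, v[1]]   # translation: affine, not linear
samples = [([1, 2], [3, -1]), ([0, 5], [2, 2])]
print(is_linear_on_samples(rotate90, samples, [2, -3]))  # True
print(is_linear_on_samples(shift, samples, [2, -3]))     # False
```

The translation fails additivity immediately: shift([4, 1]) = [5, 1], but shift([1, 2]) + shift([3, -1]) = [6, 1].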
The kernel of a linear map is the set of vectors mapped to
A. Unit vector
B. Zero vector
C. Any scalar
D. Any basis
ker(T) = {v : T(v) = 0}. It is always a subspace of the domain and describes the solutions of homogeneous equations.
The image (range) of a linear map is
A. All inputs of T
B. Only zero output
C. Only basis outputs
D. All outputs of T
Im(T) = {T(v) : v ∈ V}. It is a subspace of the codomain, and its dimension is the rank of T.
A linear map is one-one if
A. Image equals domain
B. Rank is zero
C. Kernel is only zero
D. Nullity equals rank
An injective (one-one) linear map satisfies T(v) = 0 ⇒ v = 0. For linear maps, that is equivalent to ker(T) = {0}.
A linear map is onto if
A. Image equals codomain
B. Kernel equals codomain
C. Domain equals image
D. Nullity is maximal
Surjective (onto) means every vector in the codomain is achieved as T(v) for some v. So Im(T) = codomain.
Rank of a transformation is
A. Dimension of kernel
B. Size of domain
C. Number of equations
D. Dimension of image
Rank is dim(Im(T)). It measures how many independent output directions the transformation produces.
Nullity of a transformation is
A. Dimension of image
B. Dimension of codomain
C. Dimension of kernel
D. Number of rows
Nullity is dim(ker(T)). It counts the independent directions collapsed to zero by the map.
Rank–Nullity theorem states
A. dim(domain) = rank + nullity
B. dim(codomain) = rank + nullity
C. rank = determinant
D. nullity = trace
For linear T : V → W with finite-dimensional V, dim(V) = rank(T) + nullity(T). It links the solution-space size and the output dimension.
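The theorem can be verified computationally: row-reduce a matrix, count the pivots (rank), and check that rank plus nullity equals the number of columns. A sketch in plain Python over the rationals (the function name and the sample matrix are mine):

```python
from fractions import Fraction

def rank(A):
    # Gauss-Jordan elimination over the rationals; the number of pivot rows
    # found equals the rank of A.
    M = [[Fraction(x) for x in row] for row in A]
    r, rows, cols = 0, len(M), len(M[0])
    for c in range(cols):
        piv = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if piv is None:
            continue                      # no pivot in this column: free variable
        M[r], M[piv] = M[piv], M[r]
        M[r] = [x / M[r][c] for x in M[r]]
        for i in range(rows):
            if i != r and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

A = [[1, 2, 3],
     [2, 4, 6],      # = 2 * row 1, so it adds nothing to the rank
     [1, 0, 1]]
rk = rank(A)
nullity = len(A[0]) - rk   # rank-nullity: dim(domain) = rank + nullity
print(rk, nullity)         # 2 1
```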
In Ax = 0, the solution set equals
A. Column space of A
B. Null space of A
C. Row space of A
D. Eigenspace only
The homogeneous system Ax = 0 describes the vectors mapped to zero, i.e., the kernel (null space). Its dimension is the nullity.
Pivot columns help determine
A. Determinant only
B. Trace only
C. Rank of matrix
D. Eigenvalues only
In row-reduced form, pivot columns indicate leading variables and the number of pivots equals matrix rank, which also equals dimension of the column space.
Free variables appear when
A. Rank is less than columns
B. Rank equals columns
C. Determinant is nonzero
D. Matrix is diagonal
If there are fewer pivots than variables, some variables become free parameters, giving infinitely many solutions of Ax = 0.
Composition of linear maps is
A. Nonlinear always
B. Undefined
C. Only for square matrices
D. Linear
If T and S are linear, then S ∘ T preserves addition and scalar multiplication, so it is also a linear transformation.
An inverse linear map exists when T is
A. Only onto
B. Only one-one
C. Bijective
D. Zero map
A linear map has an inverse iff it is both one-one and onto. Then it becomes an isomorphism and preserves vector space structure.
A linear operator is a linear map from
A. V to scalars
B. V to V
C. Scalars to V
D. Matrices to vectors
A linear operator has the same domain and codomain. Examples include projections, rotations, and scalings on R^n.
The standard matrix of T : R^n → R^m has columns
A. Eigenvectors only
B. Row-reduced basis
C. Kernel basis
D. T(ei) vectors
The matrix A representing T in the standard bases has i-th column T(ei). Then T(x) = Ax for all x.
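This recipe translates directly to code. A sketch in plain Python (function names are mine): apply T to each standard basis vector and collect the results as columns.

```python
def standard_matrix(T, n):
    # Column i of the matrix of T is T(e_i); the matrix is stored as rows.
    cols = [T([1 if j == i else 0 for j in range(n)]) for i in range(n)]
    m = len(cols[0])
    return [[cols[j][i] for j in range(n)] for i in range(m)]

def matvec(A, x):
    # Multiply a matrix (list of rows) by a vector.
    return [sum(a * b for a, b in zip(row, x)) for row in A]

rotate90 = lambda v: [-v[1], v[0]]
A = standard_matrix(rotate90, 2)
print(A)                   # [[0, -1], [1, 0]]
print(matvec(A, [3, 4]))   # [-4, 3], same as rotate90([3, 4])
```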
Matrix representation depends on
A. Choice of bases
B. Vector length
C. Dot product only
D. Determinant sign
The same linear transformation can have different matrices under different bases. Changing bases changes coordinate vectors and the transformation matrix accordingly.
The identity map matrix is
A. Zero matrix
B. Any diagonal matrix
C. Identity matrix
D. Any upper matrix
The identity map sends every vector to itself, so in any basis its matrix is I, producing unchanged coordinate vectors.
If T(x) = Ax, then the kernel of T equals
A. Column space of A
B. Null space of A
C. Row space of A
D. Eigenspace only
T(x) = 0 means Ax = 0. So ker(T) is exactly the null space of the representing matrix A in the chosen bases.
An eigenvector v ≠ 0 satisfies
A. Av = λv
B. Av = 0 only
C. vA = λ
D. A + λI = 0
Eigenvector definition: multiplying by A scales the vector by λ without changing its direction (up to sign). The condition v ≠ 0 is required.
The set of all eigenvectors for λ plus zero is
A. Kernel only
B. Row space
C. Quotient set
D. Eigenspace
The eigenspace for eigenvalue λ is ker(A − λI). It contains all vectors satisfying Av = λv, including the zero vector.
Eigenvalues are found by solving
A. det(A + λI) = 1
B. A = λI
C. det(A − λI) = 0
D. A^2 = 0
The characteristic equation comes from requiring nontrivial solutions of (A − λI)v = 0. That requires the determinant to be zero, giving the eigenvalues as roots.
The characteristic polynomial of A is
A. det(A − λI)
B. det(A + λI)
C. det(λA − I)
D. trace(A − λ)
The characteristic polynomial is defined as p(λ) = det(A − λI) for an n×n matrix. Its roots are the eigenvalues.
For a 2×2 matrix, the characteristic polynomial degree is
A. 1
B. 2
C. 3
D. 4
For an n×n matrix, det(A − λI) is a polynomial of degree n. So for 2×2, the degree is 2.
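In the 2×2 case the characteristic polynomial is λ^2 − trace(A)·λ + det(A), so the eigenvalues come straight from the quadratic formula. A sketch in plain Python (the function name is mine; this version assumes real eigenvalues, i.e. a nonnegative discriminant):

```python
import math

def eigenvalues_2x2(A):
    # Characteristic polynomial of [[a, b], [c, d]] is
    # lambda^2 - (a + d)*lambda + (a*d - b*c); solve via the quadratic formula.
    (a, b), (c, d) = A
    tr, det = a + d, a * d - b * c
    root = math.sqrt(tr * tr - 4 * det)  # assumes real eigenvalues
    return (tr + root) / 2, (tr - root) / 2

print(eigenvalues_2x2([[2, 1], [1, 2]]))  # (3.0, 1.0)
```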
Trace of a matrix equals
A. Product of eigenvalues
B. Number of pivots
C. Sum of eigenvalues
D. Dimension of kernel
Counting algebraic multiplicities, the sum of the eigenvalues equals the trace of A. This follows from the coefficients of the characteristic polynomial.
Determinant of a matrix equals
A. Sum of eigenvalues
B. Rank plus nullity
C. Number of columns
D. Product of eigenvalues
Counting algebraic multiplicities, the product of the eigenvalues equals det(A). This comes from the constant term of the characteristic polynomial.
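Both identities can be checked on a small example. For [[2, 1], [1, 2]] the eigenvalues are 3 and 1 (with eigenvectors (1, 1) and (1, −1)); a quick numeric check in plain Python:

```python
# Verify trace = sum of eigenvalues and det = product of eigenvalues
# for A = [[2, 1], [1, 2]], whose eigenvalues are 3 and 1.
A = [[2, 1], [1, 2]]
eigs = [3, 1]
trace = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
print(trace == sum(eigs), det == eigs[0] * eigs[1])  # True True
```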
Eigenvalues of a triangular matrix are
A. Diagonal entries
B. Off-diagonal entries
C. Row sums
D. Column sums
For triangular matrices, det(A − λI) is the product of the diagonal terms (aii − λ). Hence the eigenvalues are exactly the diagonal entries.
Diagonalization requires enough
A. Zero determinant
B. Independent eigenvectors
C. Positive trace
D. Symmetric entries
A matrix is diagonalizable if it has a basis of eigenvectors, i.e., n linearly independent eigenvectors for an n×n matrix.
Gram–Schmidt process is used to
A. Find eigenvalues
B. Compute determinant
C. Create orthonormal set
D. Solve cosets
Gram–Schmidt takes a linearly independent set in an inner product space and constructs an orthonormal basis spanning the same subspace.
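The process itself is short: subtract from each vector its projections onto the orthonormal vectors built so far, then normalize. A sketch in plain Python for the standard dot product on R^n (the function name is mine; it assumes the input vectors are linearly independent, so no zero vector ever appears):

```python
import math

def gram_schmidt(vectors):
    # Classical Gram-Schmidt with the standard dot product on R^n.
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    basis = []
    for v in vectors:
        w = list(v)
        for q in basis:                 # remove the component along each q
            c = dot(w, q)
            w = [a - c * b for a, b in zip(w, q)]
        norm = math.sqrt(dot(w, w))     # nonzero if the input is independent
        basis.append([a / norm for a in w])
    return basis

Q = gram_schmidt([[1, 1, 0], [1, 0, 1]])
print(Q)  # two orthonormal vectors spanning the same plane
```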
The dual space consists of
A. Linear functionals
B. All subspaces
C. All eigenvectors
D. All cosets
The dual space V* is the set of all linear maps from V to the field of scalars. These maps are called linear functionals.