Chapter 13: Vector Spaces and Linear Transformations (Set-1)

Which item must exist in a vector space?

A. Prime element
B. Division operation
C. Order relation
D. Scalar multiplication rule
Answer: D

A subset W of V is a subspace if it is

A. Closed under addition and scalar multiplication
B. Non-empty only
C. A finite set
D. Contains all scalars
Answer: A

Which set is always a subspace of any vector space?

A. All nonzero vectors
B. The zero subspace
C. Unit vectors only
D. Positive vectors only
Answer: B

If a set of vectors is linearly independent, it means

A. The vectors are perpendicular
B. The vectors have unit length
C. Only the trivial combination gives zero
D. The vectors are all distinct
Answer: C

The span of a set of vectors is the set of all

A. Linear combinations
B. Dot products
C. Eigenvalues
D. Determinants
Answer: A

A basis of a vector space is a set that is

A. Dependent and spanning
B. Independent but not spanning
C. Spanning but redundant
D. Independent and spanning
Answer: D

The dimension of a finite-dimensional vector space equals the

A. Number of all vectors
B. Number of subspaces
C. Number of basis vectors
D. Number of scalars
Answer: C

Which condition is necessary for a subset W to be a subspace?

A. 0 ∈ W
B. W has at least two vectors
C. W is bounded
D. W is a closed set
Answer: A

If vectors v₁, v₂ are dependent, then

A. Both must be zero
B. They must be orthogonal
C. One is a scalar multiple of the other
D. They must have equal length
Answer: C
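
The dependence criterion above can be sketched numerically: for two vectors in R², "one is a scalar multiple of the other" is equivalent to a zero 2×2 determinant. The function name `dependent` is an illustrative assumption, not anything fixed by the quiz.

```python
# Sketch: test whether two vectors in R^2 are linearly dependent.
# v1 = (a, b) and v2 = (c, d) are dependent iff a*d - b*c == 0,
# i.e. the 2x2 determinant with v1, v2 as columns vanishes.

def dependent(v1, v2):
    """Return True if v1 and v2 are linearly dependent in R^2."""
    a, b = v1
    c, d = v2
    return a * d - b * c == 0

print(dependent((1, 2), (2, 4)))  # (2, 4) = 2 * (1, 2) -> True
print(dependent((1, 0), (0, 1)))  # standard basis vectors -> False
```

Note that the zero vector is dependent with any vector, which this test also reports correctly.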

A linear combination uses

A. Only vectors
B. Only scalars
C. Only matrices
D. Scalars and vectors
Answer: D

In ℝⁿ, the standard basis has

A. n² vectors
B. 2n vectors
C. n vectors
D. 1 vector
Answer: C

Which is a trivial subspace of V?

A. Any line not through the origin
B. {0}
C. Any circle
D. Any finite set
Answer: B

The intersection of two subspaces is always

A. A subspace
B. Not a subspace
C. Always empty
D. A basis
Answer: A

The sum U + W of subspaces means

A. Only common elements
B. Only basis vectors
C. All sums u + w with u ∈ U, w ∈ W
D. Only scalar sums
Answer: C

A direct sum U ⊕ W requires that

A. Both are equal sets
B. Both are finite
C. Both are orthogonal
D. The intersection is only zero
Answer: D

Coordinates of a vector depend on the

A. Chosen basis
B. Vector length
C. Field characteristic
D. Matrix determinant
Answer: A

A quotient space V/W consists of

A. Elements of W only
B. Matrices in V
C. Cosets of W
D. Scalars in the field
Answer: C

Two vectors v, u are equivalent mod W if

A. v + u ∈ W
B. vu ∈ W
C. v = u always
D. v − u ∈ W
Answer: D

The natural projection map sends v to

A. W
B. v + W
C. 0
D. −v
Answer: B

The kernel of the natural projection is

A. V/W
B. {0}
C. W
D. V
Answer: C

In a quotient space, "well-defined" means

A. Independent of the representative
B. Depends on the chosen vector
C. Uses only basis vectors
D. Requires a nonzero determinant
Answer: A

A linear transformation T must satisfy

A. Only additivity
B. Only homogeneity
C. Additivity and homogeneity
D. Only continuity
Answer: C

The kernel of a linear map is the set of vectors mapped to

A. A unit vector
B. The zero vector
C. Any scalar
D. Any basis
Answer: B

The image (range) of a linear map is

A. All inputs of T
B. Only the zero output
C. Only basis outputs
D. All outputs of T
Answer: D

A linear map is one-to-one if

A. The image equals the domain
B. The rank is zero
C. The kernel is only zero
D. The nullity equals the rank
Answer: C

A linear map is onto if

A. The image equals the codomain
B. The kernel equals the codomain
C. The domain equals the image
D. The nullity is maximal
Answer: A

The rank of a transformation is the

A. Dimension of the kernel
B. Size of the domain
C. Number of equations
D. Dimension of the image
Answer: D

The nullity of a transformation is the

A. Dimension of the image
B. Dimension of the codomain
C. Dimension of the kernel
D. Number of rows
Answer: C

The Rank–Nullity theorem states

A. dim(domain) = rank + nullity
B. dim(codomain) = rank + nullity
C. rank = determinant
D. nullity = trace
Answer: A
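
The rank–nullity statement can be checked concretely. The sketch below computes the rank of a small matrix by Gaussian elimination in exact fractions (the `rank` helper is an assumption for illustration, not a library call) and reads off the nullity as dim(domain) − rank.

```python
# Sketch of rank-nullity: for T(x) = Ax with A an m x n matrix,
# dim(domain) = n = rank(A) + nullity(A).
from fractions import Fraction

def rank(rows):
    """Rank via Gaussian elimination with exact rational arithmetic."""
    M = [[Fraction(x) for x in row] for row in rows]
    r = 0                                  # next pivot row
    for c in range(len(M[0])):             # walk the columns
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue                       # no pivot in this column
        M[r], M[piv] = M[piv], M[r]        # move pivot row into place
        for i in range(r + 1, len(M)):     # eliminate below the pivot
            f = M[i][c] / M[r][c]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

A = [[1, 2, 3],
     [2, 4, 6],     # 2 * first row, so A has a dependent row
     [1, 0, 1]]
n = len(A[0])                       # dimension of the domain
print(rank(A), n - rank(A))         # rank 2, nullity 1: 2 + 1 = 3 = n
```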

In Ax = 0, the solution set equals the

A. Column space of A
B. Null space of A
C. Row space of A
D. Eigenspace only
Answer: B

Pivot columns help determine the

A. Determinant only
B. Trace only
C. Rank of the matrix
D. Eigenvalues only
Answer: C

Free variables appear when the

A. Rank is less than the number of columns
B. Rank equals the number of columns
C. Determinant is nonzero
D. Matrix is diagonal
Answer: A
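
A small worked instance of a free variable, as a sketch: for the 1×2 matrix A = [1 2], the rank (1) is less than the number of columns (2), so one variable is free. Setting x₂ = t forces x₁ = −2t, so every null-space vector is a multiple of (−2, 1). The helper name `in_null_space` is illustrative only.

```python
# Sketch: Ax = 0 with A = [1 2] has rank 1 and 2 columns,
# so one free variable. Parametrize: x2 = t, x1 = -2t.

def in_null_space(x1, x2):
    """Check the single equation 1*x1 + 2*x2 = 0."""
    return 1 * x1 + 2 * x2 == 0

t = 3
print(in_null_space(-2 * t, t))   # the whole line (-2t, t) solves Ax = 0
print(in_null_space(1, 1))        # an arbitrary point generally does not
```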

Composition of linear maps is

A B. Nonlinear always
B C. Undefined
C D. Only for square matrices
D A. Linear

An inverse linear map exists when TT is

A B. Only onto
B C. Only one-one
C A. Bijective
D D. Zero map

A linear operator is a linear map from

A B. VV to scalars
B A. VV to VV
C C. Scalars to VV
D D. Matrices to vectors

The standard matrix of T:Rn→RmT:Rn→Rm has columns

A B. Eigenvectors only
B C. Row-reduced basis
C D. Kernel basis
D A. T(ei)T(ei) vectors
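
The "columns are T(eᵢ)" fact can be sketched with a concrete map. The map T below is an assumed example (a shear-and-scale on R²), chosen only to make the construction visible.

```python
# Sketch: build the standard matrix of a linear map T: R^2 -> R^2
# by applying T to the standard basis vectors e1, e2.

def T(v):
    """An example linear map: (x, y) -> (x + 2y, 3y)."""
    x, y = v
    return (x + 2 * y, 3 * y)

e1, e2 = (1, 0), (0, 1)
cols = [T(e1), T(e2)]        # the columns of the standard matrix
print(cols)                  # [(1, 0), (2, 3)], i.e. A = [[1, 2], [0, 3]]

# Check: multiplying by the matrix (x * col1 + y * col2) reproduces T.
x, y = 4, 5
Av = (x * cols[0][0] + y * cols[1][0], x * cols[0][1] + y * cols[1][1])
print(Av == T((x, y)))       # True
```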

Matrix representation depends on the

A. Choice of bases
B. Vector length
C. Dot product only
D. Sign of the determinant
Answer: A

The matrix of the identity map is

A. The zero matrix
B. Any diagonal matrix
C. The identity matrix
D. Any upper triangular matrix
Answer: C

If T(x) = Ax, then the kernel of T equals the

A. Column space of A
B. Null space of A
C. Row space of A
D. Eigenspace only
Answer: B

An eigenvector v ≠ 0 satisfies

A. Av = λv
B. Av = 0 only
C. vA = λ
D. A + λI = 0
Answer: A

The set of all eigenvectors for λ, together with zero, is

A. The kernel only
B. The row space
C. A quotient set
D. The eigenspace
Answer: D

Eigenvalues are found by solving

A. det(A + λI) = 1
B. A = λI
C. det(A − λI) = 0
D. A² = 0
Answer: C
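
For a 2×2 matrix, det(A − λI) = 0 expands to λ² − trace(A)·λ + det(A) = 0, so the eigenvalues come from the quadratic formula. A minimal sketch, assuming the eigenvalues are real (the helper name `eigenvalues_2x2` is illustrative):

```python
# Sketch: eigenvalues of a 2x2 matrix [[a, b], [c, d]] from the
# characteristic equation lambda^2 - tr*lambda + det = 0.
import math

def eigenvalues_2x2(a, b, c, d):
    tr = a + d               # trace
    det = a * d - b * c      # determinant
    disc = math.sqrt(tr * tr - 4 * det)   # assumes real eigenvalues
    return (tr + disc) / 2, (tr - disc) / 2

print(eigenvalues_2x2(2, 1, 1, 2))   # (3.0, 1.0)
```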

The characteristic polynomial of A is

A. det(A − λI)
B. det(A + λI)
C. det(λA − I)
D. trace(A − λ)
Answer: A

For a 2×2 matrix, the degree of the characteristic polynomial is

A. 1
B. 2
C. 3
D. 4
Answer: B

The trace of a matrix equals the

A. Product of the eigenvalues
B. Number of pivots
C. Sum of the eigenvalues
D. Dimension of the kernel
Answer: C

The determinant of a matrix equals the

A. Sum of the eigenvalues
B. Rank plus nullity
C. Number of columns
D. Product of the eigenvalues
Answer: D
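
The two identities above (trace = sum of eigenvalues, determinant = product of eigenvalues) can be checked on one concrete matrix. A = [[2, 1], [1, 2]] has eigenvalues 3 and 1, a standard fact assumed here for the check.

```python
# Sketch: verify trace = sum of eigenvalues and det = product of
# eigenvalues for A = [[2, 1], [1, 2]], whose eigenvalues are 3 and 1.
a, b, c, d = 2, 1, 1, 2
trace = a + d                 # 4
det = a * d - b * c           # 3
lam1, lam2 = 3, 1             # eigenvalues of this particular matrix
print(trace == lam1 + lam2)   # True
print(det == lam1 * lam2)     # True
```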

The eigenvalues of a triangular matrix are its

A. Diagonal entries
B. Off-diagonal entries
C. Row sums
D. Column sums
Answer: A

Diagonalization of a matrix requires

A. A zero determinant
B. Enough independent eigenvectors
C. A positive trace
D. Symmetric entries
Answer: B

The Gram–Schmidt process is used to

A. Find eigenvalues
B. Compute determinants
C. Create an orthonormal set
D. Solve cosets
Answer: C
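
A minimal Gram–Schmidt sketch in R²: normalize the first vector, subtract its component from the second, then normalize the remainder. The function name `gram_schmidt` and the two-vector restriction are simplifying assumptions for illustration.

```python
# Sketch: Gram-Schmidt on two independent vectors in R^2,
# producing an orthonormal pair (u1, u2).
import math

def gram_schmidt(v1, v2):
    n1 = math.hypot(*v1)
    u1 = (v1[0] / n1, v1[1] / n1)                     # normalize v1
    dot = v2[0] * u1[0] + v2[1] * u1[1]
    w = (v2[0] - dot * u1[0], v2[1] - dot * u1[1])    # strip u1 component
    n2 = math.hypot(*w)
    u2 = (w[0] / n2, w[1] / n2)                       # normalize remainder
    return u1, u2

u1, u2 = gram_schmidt((3, 0), (1, 1))
print(u1, u2)   # (1.0, 0.0) (0.0, 1.0)
```

The result is orthonormal by construction: u1 and u2 have unit length and zero dot product.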

The dual space consists of

A. Linear functionals
B. All subspaces
C. All eigenvectors
D. All cosets
Answer: A
