Chapter 4: Computer Organization and Architecture (Set-7)
In the instruction cycle, which step typically places the fetched instruction into the instruction register for further processing?
A Execute stage
B Fetch stage
C Store stage
D Interrupt stage
During fetch, the CPU uses the Program Counter to read the next instruction from memory and loads it into the Instruction Register. This prepares the instruction for decoding in the next stage.
Which CPU register must be updated when a jump instruction changes program flow to a new memory address?
A Memory Data Register
B Instruction Register
C Accumulator
D Program Counter
A jump or branch changes the address of the next instruction. The Program Counter must be updated with the target address so the CPU fetches the correct next instruction from memory.
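The fetch step and the jump's effect on the Program Counter can be sketched in Python. This is a toy machine, not any real ISA; the instruction strings ("LOAD", "ADD", "JMP", "HALT") are hypothetical.

```python
# Toy fetch loop: the PC selects the next instruction, which is loaded
# into the instruction register (IR); a JMP rewrites the PC.
memory = ["LOAD 5", "ADD 3", "JMP 0", "HALT"]  # hypothetical program

pc = 0        # Program Counter
ir = None     # Instruction Register
trace = []

for _ in range(6):              # run a few fetches
    ir = memory[pc]             # fetch: IR <- M[PC]
    trace.append((pc, ir))
    pc += 1                     # default: point to the next instruction
    if ir.startswith("JMP"):
        pc = int(ir.split()[1])  # jump: PC <- target address

print(trace)
```

Note how the PC is incremented on every fetch but overwritten by the jump target, so the fourth fetch comes from address 0 again.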
Which field of an instruction tells the CPU which operation, such as ADD or LOAD, should be performed?
A Operand field
B Address bus
C Opcode field
D Cache tag
The opcode identifies the operation type. The control unit decodes the opcode and generates control signals that guide ALU actions, register transfers, and memory access needed to execute the instruction.
In immediate addressing mode, the operand value is found
A Inside instruction
B In main memory
C In cache only
D In disk sector
Immediate addressing stores the actual constant value within the instruction itself. This avoids an extra memory access for the operand, making it fast for fixed values used in computations.
Direct addressing mode means the instruction contains
A Operand constant
B Cache index bits
C Operand address
D Register file list
In direct addressing, the instruction specifies the memory address where the operand is stored. The CPU uses this address to read or write data, requiring memory access during execution.
Indirect addressing mode is best described as
A Address points to address
B Address equals operand
C Operand is opcode
D Data always immediate
Indirect addressing uses a pointer: the instruction gives an address that contains the effective address of the operand. It supports pointers and dynamic structures but often needs extra memory access.
Register addressing mode improves speed mainly because operands are
A In hard disk
B In CPU registers
C In ROM chips
D In optical media
Register operands are accessed much faster than main memory. Using registers reduces memory fetch delays and makes ALU operations and data movement faster during instruction execution.
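The four addressing modes above can be contrasted in one small sketch. The memory contents, register names, and the `operand` helper are all illustrative, not part of any real architecture.

```python
# Toy resolution of the four addressing modes discussed above.
memory = {10: 99, 20: 10}      # address -> value
registers = {"R1": 7}

def operand(mode, field):
    if mode == "immediate":    # operand is inside the instruction
        return field
    if mode == "direct":       # instruction holds the operand's address
        return memory[field]
    if mode == "indirect":     # address points to another address
        return memory[memory[field]]
    if mode == "register":     # operand sits in a CPU register
        return registers[field]

print(operand("immediate", 5))    # 5
print(operand("direct", 10))      # 99
print(operand("indirect", 20))    # memory[memory[20]] -> 99
print(operand("register", "R1"))  # 7
```

Counting the dictionary lookups shows the cost ordering: immediate and register need no memory access, direct needs one, indirect needs two.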
Which stage of the instruction cycle primarily interprets the instruction bits to decide the required actions?
A Fetch stage
B Store stage
C Decode stage
D Power stage
The decode stage identifies opcode and operand information and prepares the control signal sequence. The control unit decides which micro-operations are needed for execution based on the decoded instruction.
In many CPUs, which stage may read operands from registers and perform arithmetic or logic work?
A Execute stage
B Fetch stage
C Decode stage
D Sleep stage
The execute stage performs the main instruction operation, such as ALU calculations, comparisons, or address computations. It may also initiate memory access or change control flow for branches.
The store or write-back stage mainly ensures
A Instruction is decoded
B Cache is flushed
C CPU is cooled
D Result is saved
Write-back stores the computed result to a destination register or memory. This updates system state so later instructions can use the new value, completing the instruction’s effect.
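The fetch, decode, execute, and write-back stages described in the last few questions can be strung together as a minimal accumulator machine. The opcodes ("LOADI", "ADDI", "STORE", "HALT") are invented for the sketch.

```python
# Minimal accumulator machine: every instruction passes through
# fetch, decode, execute, and write-back.
program = [("LOADI", 4), ("ADDI", 3), ("STORE", 0), ("HALT", 0)]
memory = [0] * 8
acc = 0   # accumulator register
pc = 0

while True:
    instr = program[pc]        # fetch
    pc += 1
    opcode, arg = instr        # decode: split opcode and operand
    if opcode == "HALT":
        break
    if opcode == "LOADI":      # execute + write-back to ACC
        acc = arg
    elif opcode == "ADDI":
        acc = acc + arg
    elif opcode == "STORE":    # write-back: save the result to memory
        memory[arg] = acc

print(acc, memory[0])
```

After the HALT, both the accumulator and memory location 0 hold 7, showing that write-back is what makes a result visible to later instructions.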
A machine cycle is most closely associated with
A Full program execution
B Internet data transfer
C Basic memory operation
D Disk partition process
A machine cycle commonly refers to a basic CPU operation like memory read or write. A complete instruction may require multiple machine cycles such as fetch, operand read, and result write.
A clock cycle is best defined as
A One timing tick
B One instruction always
C One file operation
D One I/O device
A clock cycle is one tick of the CPU clock that synchronizes internal actions. Many instructions require several clock cycles because they are completed through multiple micro-operations.
Micro-operations are best described as
A Large software apps
B Network security rules
C Internal tiny steps
D Monitor refresh tasks
Micro-operations are small actions like loading a register, incrementing the PC, or transferring data on an internal bus. The CPU executes a sequence of micro-operations to complete one instruction.
An interrupt cycle occurs when the CPU
A Prints output pages
B Services an interrupt
C Formats the disk
D Loads a driver
When an interrupt occurs, the CPU saves its current state and jumps to an interrupt service routine. After handling the event, it restores the state and resumes the interrupted program.
A key benefit of interrupts over polling is that interrupts
A Reduce CPU waiting
B Force constant checking
C Increase disk capacity
D Reduce word length
With interrupts, the CPU does not constantly check devices. Devices signal the CPU only when service is needed, allowing better CPU utilization and improved overall system efficiency.
DMA is primarily used to
A Decode instructions faster
B Increase cache size
C Lower clock speed
D Transfer data directly
Direct Memory Access allows an I/O device to transfer data to or from main memory without CPU handling each byte. This reduces CPU overhead and improves speed for large transfers.
A main advantage of DMA is
A More CPU involvement
B Less ROM storage
C Less CPU overhead
D More monitor pixels
DMA handles bulk data transfers while the CPU continues other tasks. The CPU is typically interrupted only when the transfer completes, improving throughput for I/O-heavy workloads.
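The DMA idea, a whole block moved with a single completion interrupt rather than per-byte CPU involvement, can be sketched as follows. The buffer sizes and the `dma_transfer` helper are illustrative only.

```python
# Sketch of a DMA-style bulk transfer: the "DMA controller" moves a
# whole block, and the CPU is notified once, at completion.
io_buffer = list(range(8))     # data arriving from a device
main_memory = [0] * 16
cpu_interrupts = []

def dma_transfer(src, dst, dst_base):
    # Copy the whole block without per-byte CPU involvement.
    for i, value in enumerate(src):
        dst[dst_base + i] = value
    cpu_interrupts.append("transfer complete")  # one interrupt at the end

dma_transfer(io_buffer, main_memory, 4)
print(main_memory[4:12])   # the block landed in memory
print(cpu_interrupts)      # one interrupt, not one per byte
```

With programmed I/O the CPU would instead execute a load and a store for every byte; here it sees a single event for the whole block.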
Which part of the system bus carries the actual data values during transfers?
A Data bus
B Address bus
C Control bus
D Power bus
The data bus carries actual data bits between CPU, memory, and I/O. Its width influences how many bits move at once, which can impact overall transfer speed.
Which part of the system bus selects the memory or I/O location to access?
A Data bus
B Address bus
C Control bus
D Video bus
The address bus carries the address of the target memory or I/O location. It identifies where data should be read from or written to during bus operations.
Which bus carries signals such as read, write, and interrupt request?
A Data bus
B Address bus
C Control bus
D Storage bus
The control bus carries command and timing signals that manage bus operations. It includes signals such as memory read/write and interrupt control, ensuring correct coordination among components.
CPU clock speed mainly tells
A Cycles per second
B Disk size in GB
C RAM type used
D Monitor refresh rate
Clock speed indicates how many clock cycles occur each second, typically in GHz. It affects how quickly CPU steps occur, though overall performance also depends on IPC, cache, and memory.
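The point that clock speed is only one factor follows from the classic CPU-time relation, time = (instructions × CPI) / clock rate. The numbers below are made up for illustration.

```python
# CPU execution time depends on instruction count and CPI,
# not on clock rate alone: time = instructions * CPI / clock_rate
instructions = 2_000_000
cpi = 2.0                   # average clock cycles per instruction
clock_hz = 1_000_000_000    # 1 GHz clock

seconds = instructions * cpi / clock_hz
print(seconds)              # 0.004 s
```

Doubling the clock rate halves this figure only if CPI does not worsen; if a faster clock causes more memory-stall cycles, the effective CPI rises and eats the gain.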
CPU word length generally means
A Instruction count list
B Number of cores
C Bits handled together
D Cache hit ratio
Word length is the number of bits the CPU processes as a unit, such as 32-bit or 64-bit. It affects register size, data range, and how much data is handled efficiently.
Which factor often improves throughput for parallel programs without helping single-thread tasks much?
A Smaller cache size
B More CPU cores
C Lower clock speed
D Narrower bus width
More cores allow multiple threads or tasks to run simultaneously. This boosts throughput for parallel workloads, while single-thread programs may not benefit much if only one core is used.
A CPU thread is best described as
A Storage capacity unit
B Bus signal group
C Cache replacement rule
D Execution flow unit
A thread is an independent flow of execution within a program, its own sequence of instructions. Multiple threads can run concurrently on multi-core CPUs, improving responsiveness and utilization when tasks can be parallelized.
Cache size influences performance mainly by affecting
A Cache hit chances
B Monitor color depth
C Keyboard response
D Printer speed
A larger cache can store more frequently used data and instructions, increasing the chance of cache hits. More hits reduce slow main memory accesses, improving overall CPU efficiency and speed.
Bus width mainly affects
A Disk rotation time
B Screen resolution
C Data per transfer
D Fan speed control
Bus width is the number of bits transferred at once on a bus. Wider buses move more data each cycle, improving bandwidth for memory and I/O communications.
Throughput is best defined as
A Delay before response
B Work per time
C Total storage size
D CPU heat level
Throughput measures how much work a system completes per unit time, such as tasks per second. It improves with parallelism, efficient caching, and reduced waiting due to memory and I/O delays.
Latency is best defined as
A Waiting time delay
B Data per second
C Number of cache lines
D Instruction set size
Latency is the delay between requesting a service or data and receiving a response. Lower latency improves responsiveness, especially in memory access and real-time system operations.
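The distinction between throughput and latency can be made concrete with a small, invented batch example: the two quantities are computed from different measurements.

```python
# Throughput counts completed work per unit time; latency is the
# delay experienced by an individual request. (Illustrative numbers.)
requests_completed = 100
elapsed_seconds = 2.0        # time to finish the whole batch
per_request_delay = 0.2      # seconds from one request to its response

throughput = requests_completed / elapsed_seconds
print(throughput)            # 50 requests per second
print(per_request_delay)     # latency is measured separately
```

A system can have high throughput and poor latency at the same time, for example a deep pipeline that finishes many requests per second but takes a long time on each one.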
MIPS is generally used to measure
A Memory access delay
B Disk space usage
C Million instructions rate
D Screen refresh cycles
MIPS stands for Million Instructions Per Second. It is a rough measure and can be misleading because different processors may execute different amounts of work per instruction.
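The MIPS figure is a simple ratio; the instruction count and time below are made-up values used only to show the arithmetic.

```python
# MIPS = instruction_count / (execution_time * 10**6).
# The same hardware can score differently on a different instruction mix.
instr_count = 50_000_000    # instructions executed by the test program
exec_time = 0.25            # seconds taken

mips = instr_count / (exec_time * 10**6)
print(mips)                 # 200.0 MIPS
```

Because a CISC instruction may do the work of several RISC instructions, identical MIPS numbers on two different ISAs do not mean equal performance.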
FLOPS is mainly used to measure
A Floating math rate
B Text printing speed
C Disk read rate
D Network packet count
FLOPS measures floating-point operations per second, which is important for science, engineering, and graphics workloads. It reflects how efficiently the processor performs arithmetic on real (non-integer) numbers.
Benchmarking is mainly used to
A Increase cache memory
B Fix hardware issues
C Change BIOS password
D Compare system performance
Benchmarking runs standard tests to measure performance under controlled workloads. It helps compare systems, identify bottlenecks, and evaluate upgrades, though results depend on the benchmark type.
A bottleneck occurs when
A CPU never waits
B Cache always hits
C One part limits speed
D Disk never used
A bottleneck is a component that slows the whole system, such as slow memory or storage. The CPU may remain idle waiting for data, reducing throughput even if the CPU is powerful.
RISC processors generally use
A Simple instructions
B Complex instructions
C No pipelining
D No registers
RISC focuses on simpler instructions that execute efficiently, often with fixed length. This design supports pipelining and relies on registers, improving speed and simplifying control logic.
CISC processors are known for
A Only immediate mode
B Complex instructions
C No memory access
D No addressing modes
CISC includes instructions that can perform multi-step operations within one instruction. This can reduce instruction count, but decoding and pipelining may become more complex compared to RISC designs.
Instruction Set Architecture (ISA) defines
A Software-hardware rules
B Monitor display modes
C Disk file format
D Network routing plan
ISA specifies the instructions, registers, addressing modes, and data formats visible to programmers. It acts as a contract enabling software to run on any CPU that implements that ISA.
Which term best describes a standard measure of how fast a CPU handles typical tasks under a fixed test?
A Disk partition count
B File size label
C Benchmark score
D Pixel density value
A benchmark score comes from running a standard workload and measuring completion time or throughput. It helps compare different CPUs or systems using the same tasks and settings.
A microprocessor is best described as
A CPU on a chip
B Complete embedded system
C Main memory device
D Input device controller
A microprocessor contains CPU logic on a single integrated circuit. It typically needs external memory and peripherals for a full system and is widely used in general-purpose computing.
A microcontroller typically includes
A Only ALU unit
B Only cache memory
C Only control unit
D CPU with peripherals
A microcontroller integrates CPU, memory, and I/O peripherals on one chip. It is designed for embedded control tasks, such as controlling sensors, motors, or appliances efficiently.
Multiprocessor basics refers to
A Multiple printers connected
B Multiple hard disks only
C Multiple CPUs together
D Multiple keyboards used
A multiprocessor system uses more than one processor to share workload. It improves throughput for parallel tasks and is used in servers and high-performance systems.
Cache mapping deals with
A Screen color mapping
B Placement of blocks
C Keyboard mapping keys
D Disk space mapping
Cache mapping defines where a memory block can be placed in cache. Methods like direct-mapped and set-associative balance speed, complexity, and the chance of conflict misses.
Direct-mapped cache means each block maps to
A One fixed line
B Any cache line
C Any cache set
D Two random lines
In direct-mapped cache, each memory block has exactly one cache line it can go to. This is fast and simple but can cause conflicts when different blocks map to the same line.
Set-associative cache means a block can go into
A Only one line
B A specific set
C Only main memory
D Only ROM chip
Set-associative cache divides cache into sets. A block maps to one set but may be stored in any line within that set, reducing conflict misses compared to direct mapping.
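The placement rules for the two mapping schemes above reduce to a modulo calculation. The cache geometry (8 lines, 2 ways) is an assumption chosen for the sketch.

```python
# Where a memory block may be placed: direct-mapped vs 2-way
# set-associative, for a toy cache with 8 lines.
NUM_LINES = 8
WAYS = 2
NUM_SETS = NUM_LINES // WAYS   # 4 sets in the 2-way cache

def direct_mapped_line(block):
    return block % NUM_LINES   # exactly one legal line

def set_index(block):
    return block % NUM_SETS    # one set, any of its 2 lines

# Blocks 3 and 11 collide on the same direct-mapped line...
print(direct_mapped_line(3), direct_mapped_line(11))
# ...and map to the same set, but can coexist in its two ways.
print(set_index(3), set_index(11))
```

In the direct-mapped cache, alternating accesses to blocks 3 and 11 evict each other every time (conflict misses); in the 2-way cache both blocks fit in set 3 simultaneously.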
Register Transfer Language (RTL) is mainly used to describe
A Network routing tables
B Printer paper feeding
C Register data movement
D Disk formatting steps
RTL is a notation used to describe micro-operations like transfers between registers and ALU actions. It helps explain CPU datapath behavior and control logic at a low level.
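A standard fetch sequence in RTL notation can be mimicked statement by statement; the Python below is a sketch, with the RTL form of each micro-operation shown as a comment.

```python
# RTL-style description of one instruction fetch, with the register
# transfer written as a comment beside each mimicking statement.
memory = {0: "ADD R1, R2", 1: "HALT"}           # toy instruction memory
regs = {"PC": 0, "MAR": None, "MBR": None, "IR": None}

regs["MAR"] = regs["PC"]             # MAR <- PC
regs["MBR"] = memory[regs["MAR"]]    # MBR <- M[MAR]
regs["IR"] = regs["MBR"]             # IR  <- MBR
regs["PC"] = regs["PC"] + 1          # PC  <- PC + 1

print(regs["IR"], regs["PC"])
```

Each line is one micro-operation; the control unit's job is to issue exactly this kind of sequence, in order, for every stage of every instruction.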
Which situation best matches “interrupt-driven I/O” in practice?
A Device signals CPU event
B CPU checks device repeatedly
C Disk stores interrupts
D Cache deletes device data
In interrupt-driven I/O, the device notifies the CPU when it needs service. This avoids continuous polling and allows the CPU to do other work until an interrupt occurs.
In a typical CPU, the instruction register is most important during
A Disk writing process
B Screen refresh process
C Battery charging cycle
D Decoding instructions
The instruction register holds the fetched instruction while the control unit decodes it. This decoding decides the operation, operand handling, and control signals needed to execute the instruction correctly.
When an interrupt is accepted, the CPU typically
A Ignores current state
B Deletes cache memory
C Saves current state
D Stops clock permanently
Before servicing an interrupt, the CPU saves key state information like PC and flags. After the interrupt service routine finishes, the CPU restores the state and continues the interrupted program.
A common effect of increasing clock speed without improving memory system is that performance may be limited by
A Memory bottleneck
B Screen resolution
C Keyboard layout
D Printer paper size
A faster CPU may still wait for data if memory access is slow. If cache and memory cannot feed data fast enough, the memory system becomes the bottleneck and limits real performance gains.
In performance terms, “latency” is more important than throughput when the goal is
A Many tasks per time
B Fast single response
C Large file storage
D High screen quality
Latency matters most when you want quick response to a single request, like opening a small file or reading one memory location. Throughput matters more when processing many tasks continuously.
Which statement about MIPS is correct for comparisons?
A Always reliable measure
B Equals cache hit rate
C Depends on instruction mix
D Measures memory capacity
MIPS depends on instruction type and count. Different CPUs may perform different work per instruction, so MIPS alone can mislead; it is best used with context or same-ISA comparisons.
Which statement best matches the idea of “benchmarking basics”?
A Measure with standard tests
B Increase RAM physically
C Remove cache memory
D Change instruction set
Benchmarking uses standard workloads to measure performance under comparable conditions. It helps compare systems, detect bottlenecks, and evaluate upgrades, but results vary with the chosen test and setup.