Chapter 4: Computer Organization and Architecture (Set-2)
During the instruction cycle, which step generally reads the next instruction from memory into the CPU
A Decode stage
B Execute stage
C Store stage
D Fetch stage
In the fetch step, the CPU uses the Program Counter address to read the next instruction from memory and places it into the Instruction Register, preparing it for decoding.
Which CPU register normally holds the address of the next instruction to be fetched
A Instruction Register
B Program Counter
C Memory Data Register
D Accumulator register
The Program Counter stores the memory address of the next instruction. After an instruction is fetched, the PC is updated to point to the next sequential instruction or a branch target.
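The PC update described above can be sketched in a few lines; the 4-byte instruction size and the addresses below are assumptions for illustration only.

```python
# Hypothetical sketch of Program Counter updates, assuming fixed
# 4-byte instructions and a branch that overwrites the PC.
INSTR_SIZE = 4  # assumed instruction width in bytes

def next_pc(pc, branch_taken=False, branch_target=None):
    """Return the address of the next instruction to fetch."""
    if branch_taken:
        return branch_target   # jump/branch: PC gets the target address
    return pc + INSTR_SIZE     # sequential flow: PC advances one instruction

pc = 100
pc = next_pc(pc)                                        # sequential step
pc = next_pc(pc, branch_taken=True, branch_target=200)  # branch redirects PC
```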
In machine instructions, which part tells the CPU what operation to perform
A Operand field
B Address field
C Opcode field
D Data field
The opcode specifies the action such as add, load, store, or jump. The control unit decodes the opcode to generate control signals for the required CPU operations.
In a basic instruction format, operands usually represent
A Operation type
B Data or address
C Clock signal
D Cache policy
Operands provide the data to operate on, or a location where data is stored. They may be immediate values, register names, or memory addresses depending on instruction design.
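Splitting an instruction into opcode and operand fields is simple bit masking. The 16-bit layout below (top 4 bits opcode, low 12 bits operand) is an invented format for illustration, not any particular CPU's encoding.

```python
# Hypothetical 16-bit instruction format (an assumption for illustration):
# bits 15..12 = opcode, bits 11..0 = operand (address or immediate value).
def decode(word):
    opcode = (word >> 12) & 0xF   # top 4 bits select the operation
    operand = word & 0xFFF        # low 12 bits carry data or an address
    return opcode, operand

# e.g. the word 0x12A4 decodes to opcode 1 with operand 0x2A4
op, arg = decode(0x12A4)
```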
After decoding an instruction, the CPU primarily prepares
A Control signal sequence
B Power supply lines
C Screen refresh rate
D Disk formatting table
During decode, the control unit interprets the instruction and determines the micro-operations it requires. It then produces control signals for registers, ALU, memory, and buses to complete execution.
Which step of the instruction cycle usually performs the actual arithmetic or logic work
A Fetch stage
B Decode stage
C Store stage
D Execute stage
In the execute stage, the ALU may perform calculations or comparisons, and the CPU may access memory or I/O as required. This stage completes the main action of the instruction.
Which step typically writes the result back to a register or memory location
A Fetch stage
B Store stage
C Decode stage
D Execute stage
In the store or write-back step, results from ALU or memory reads are written into destination registers or memory. This finalizes the instruction’s effect and updates system state.
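The fetch, decode, execute, and write-back steps above can be put together in a toy loop. The two-field instructions, the accumulator, and the tiny program here are all made up for illustration.

```python
# Minimal sketch of the fetch-decode-execute-store cycle for an
# invented two-field instruction set: (opcode, operand).
memory = {0: ("LOAD", 7), 1: ("ADD", 3), 2: ("HALT", 0)}
acc, pc, running = 0, 0, True

while running:
    opcode, operand = memory[pc]   # fetch: read the instruction at PC
    pc += 1                        # PC now points at the next instruction
    if opcode == "LOAD":           # decode + execute + write-back
        acc = operand              # result lands in the accumulator
    elif opcode == "ADD":
        acc += operand
    elif opcode == "HALT":
        running = False
```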
A clock cycle in simple terms is
A One instruction only
B One disk rotation
C One timing pulse
D One file transfer
A clock cycle is a single tick of the CPU clock that synchronizes internal operations. Many instructions need multiple clock cycles to complete, depending on instruction complexity and CPU design.
Micro-operations are best described as
A Big software updates
B Tiny CPU steps
C Internet data packets
D Monitor pixel changes
Micro-operations are small internal actions like “load register,” “increment PC,” or “transfer data.” The CPU performs these steps in sequence to carry out a full machine instruction.
In the stored-program concept, programs are stored in
A Keyboard buffer only
B Printer memory only
C Monitor display unit
D Main memory area
Stored-program concept means instructions and data are kept in the same memory system. The CPU fetches instructions from memory just like it fetches data, enabling flexible and reprogrammable computing.
The instruction set of a CPU refers to
A Keyboard shortcuts list
B Monitor refresh rules
C Supported machine commands
D Disk partition scheme
An instruction set is the collection of machine-level operations the CPU can execute, such as arithmetic, data transfer, and branching. It defines what programs can directly ask the processor to do.
A machine cycle is commonly linked to
A One CPU operation
B One instruction step
C One full program
D One network request
A machine cycle often means a basic CPU action like memory read or memory write. A complete instruction may require multiple machine cycles, such as fetching, decoding, and executing.
An interrupt cycle happens when the CPU
A Services an interrupt
B Deletes temporary files
C Increases screen size
D Cools down hardware
When an interrupt occurs, the CPU saves its current state, jumps to an interrupt service routine, and performs urgent handling. After completion, it restores state and continues the earlier task.
Which is a common example of a decoding activity
A Adding two numbers
B Printing a document
C Identifying opcode meaning
D Saving to hard disk
During decode, the control unit examines the instruction bits to identify opcode and addressing details. This lets it select proper micro-operations and control signals needed for execution.
If the Program Counter is incremented, it usually means
A Disk is full
B Next sequential instruction
C CPU is halted
D Cache is cleared
Incrementing the PC moves it to the next instruction address in memory. This supports sequential program flow, unless a jump or branch instruction changes the PC to another target address.
CPU performance is most directly affected by
A Mouse sensitivity
B Printer ink level
C Speaker volume
D Clock rate
Clock rate sets how many cycles the CPU completes per second, influencing how quickly internal steps occur. However, true performance also depends on cores, cache, instruction efficiency, and memory speed.
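One standard way to see the clock rate's role is the classic CPU-time relation: time = instructions × CPI ÷ clock rate. The figures below are invented for illustration.

```python
# CPU time sketch: time = instruction count * cycles-per-instruction
# / clock rate. All numbers below are illustrative assumptions.
def cpu_time(instructions, cpi, clock_hz):
    return instructions * cpi / clock_hz

# 1 billion instructions at CPI 2 on a 2 GHz clock -> 1 second
t = cpu_time(1_000_000_000, 2, 2_000_000_000)
```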
A CPU core can be understood as
A One power supply
B One memory module
C One processing unit
D One display device
A core is an independent execution unit with its own ALU and control logic. Multiple cores allow parallel execution of tasks, improving throughput for workloads that can be split.
A thread is best described as
A Hardware cooling pipe
B Execution flow unit
C Disk storage block
D Network cable type
A thread is a sequence of instructions that can be scheduled and executed. Modern CPUs may handle multiple threads to improve utilization, especially when one thread waits for memory or I/O.
Cache improves performance mainly by reducing
A Memory access delay
B Monitor brightness
C Keyboard typing errors
D File compression time
Cache stores frequently used data near the CPU, so many memory requests avoid slower main memory. This lowers average access time and keeps the CPU from waiting too often for data.
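The "lowers average access time" claim is usually quantified with the average memory access time (AMAT) formula; the cycle counts below are assumptions for illustration.

```python
# AMAT sketch: the cache turns most accesses into fast hits, so the
# average cost stays close to the hit time. Values are illustrative.
def amat(hit_time, miss_rate, miss_penalty):
    return hit_time + miss_rate * miss_penalty

# 1-cycle hits with 5% of accesses missing at 100 cycles each
# -> 6 cycles on average instead of 100
avg = amat(hit_time=1, miss_rate=0.05, miss_penalty=100)
```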
Bus width usually indicates
A Screen pixel count
B Disk rotation speed
C Bits moved together
D Audio frequency range
Bus width is the number of bits that can be transferred in one bus operation. Wider buses can move more data per cycle, improving data transfer capacity between CPU and memory or devices.
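The "more data per cycle" point becomes concrete as a bandwidth calculation; the bus width and transfer rate below are assumed values for illustration.

```python
# Bus capacity sketch: bytes per second = (width in bits / 8)
# * transfers per second. Numbers are illustrative assumptions.
def bus_bandwidth(width_bits, transfers_per_sec):
    return (width_bits // 8) * transfers_per_sec

# a 64-bit bus at 100 million transfers/s moves 800 MB/s
bw = bus_bandwidth(64, 100_000_000)
```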
Throughput in computer performance means
A Screen size ratio
B Data per time
C Keyboard key travel
D File naming style
Throughput measures how much work a system completes per unit time, such as tasks per second. Higher throughput often comes from parallelism, efficient instruction execution, and reduced waiting for memory or I/O.
Latency is best explained as
A Total storage size
B Number of CPU cores
C Instruction list length
D Delay before response
Latency is the waiting time between a request and the start of a response. Lower latency improves responsiveness, especially for small frequent operations like cache access or memory reads.
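Throughput and latency are distinct, and a pipelined system shows why: each task still takes several cycles (latency), yet one task can finish per cycle (throughput). The cycle counts below are illustrative assumptions.

```python
# Throughput vs latency sketch, assuming a 5-cycle task latency and
# one new task completing per cycle once the pipeline is full.
tasks = 1000
latency_cycles = 5                         # each task is in flight 5 cycles
total_cycles = latency_cycles + (tasks - 1)
throughput = tasks / total_cycles          # tasks completed per cycle
# throughput approaches 1 per cycle even though latency stays at 5
```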
MIPS is a rough measure of
A Memory size
B Monitor refresh rate
C Instructions per second
D Disk write errors
MIPS stands for Million Instructions Per Second. It estimates instruction execution speed, but comparisons can be misleading because instruction complexity differs across CPUs and instruction set designs.
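A rough MIPS estimate follows directly from clock rate and average CPI; the numbers below are assumed for illustration, and as the text notes, such ratings are not comparable across different instruction sets.

```python
# MIPS sketch: clock rate / (average CPI * 1e6). Values assumed.
def mips(clock_hz, cpi):
    return clock_hz / (cpi * 1_000_000)

# a 500 MHz clock with average CPI of 2 rates about 250 MIPS
rating = mips(500_000_000, 2)
```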
FLOPS is commonly used for measuring
A Text typing speed
B Floating operations rate
C Disk storage capacity
D Printer page count
FLOPS means Floating-Point Operations Per Second and is used for math-heavy tasks like scientific computing. It measures how quickly the CPU or GPU can perform floating-point calculations.
Benchmarking in simple terms is
A Testing performance with tasks
B Changing wallpapers
C Replacing keyboard keys
D Cleaning computer dust
Benchmarking uses standard tests to measure system performance. It helps compare CPUs or systems using the same workload, showing how fast tasks execute under controlled conditions.
A bottleneck occurs when
A All parts equal speed
B Battery is fully charged
C One part limits speed
D Screen is too bright
A bottleneck happens when one component, like slow memory or I/O, restricts overall system performance. Even if the CPU is fast, the system slows if it must wait on the bottleneck.
RISC architecture generally uses
A Many complex instructions
B Fewer simple instructions
C No registers inside
D No memory access
RISC designs emphasize simple, fixed-length instructions and efficient pipelining. This can reduce instruction complexity and improve speed per cycle, often relying on more instructions to complete complex tasks.
CISC architecture is known for
A Very small memory only
B Only one addressing mode
C No control unit
D More complex instructions
CISC aims to reduce the number of instructions per program by offering complex instructions that do more work. These instructions may take multiple cycles and can be harder to pipeline efficiently.
Instruction set architecture (ISA) defines
A Cabinet design rules
B Monitor color scheme
C CPU-program interface
D Internet routing method
ISA specifies instructions, registers, data types, and addressing modes visible to programmers and compilers. It is the contract between software and hardware, enabling programs to run on compatible processors.
The system bus typically includes
A Water cooling pipes
B Data, address, control
C Speaker and mic wires
D Only USB connectors
A system bus is commonly divided into data bus, address bus, and control bus. Together they transfer data values, memory locations, and control signals between CPU, memory, and I/O devices.
The address bus mainly carries
A Memory location numbers
B Device names list
C Screen pixel values
D Audio sample data
The address bus carries the address of the memory or I/O location being accessed. It tells where the CPU wants to read from or write to, enabling correct selection of memory cells.
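The number of address lines fixes how many locations can be selected, since n lines encode 2^n distinct addresses:

```python
# Address-bus reach sketch: n address lines select 2**n locations.
def addressable_locations(address_lines):
    return 2 ** address_lines

# a 16-bit address bus reaches 65,536 locations (64 KiB if byte-addressed)
span = addressable_locations(16)
```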
The control bus carries signals such as
A Printer colors only
B File names only
C Keyboard symbols only
D Read and write
Control bus signals coordinate operations like memory read, memory write, interrupt requests, and clock-related control. These signals ensure each component acts at the correct time and in the correct way.
DMA is best described as
A Manual file backup
B Disk formatting method
C Direct memory transfer
D Keyboard buffer tool
Direct Memory Access allows an I/O device to transfer data to or from main memory with minimal CPU involvement. This reduces CPU load and speeds up large data transfers like disk or network data.
A key benefit of DMA is
A More screen resolution
B Less CPU waiting
C More printer speed
D Less RAM capacity
DMA handles bulk data movement without the CPU managing every byte. The CPU can perform other tasks while DMA transfers occur, improving overall efficiency and throughput for I/O-heavy operations.
Which interrupt type is usually generated by hardware devices
A Software interrupt
B Logical interrupt
C Virtual interrupt
D Hardware interrupt
Hardware interrupts come from external devices like keyboards, timers, or network cards to request attention. They inform the CPU that a device needs service, often for input, output, or timing events.
Software interrupts are commonly triggered by
A Power supply failure
B Program instruction request
C Fan speed change
D Screen brightness change
Software interrupts are generated by executing special instructions, often to request operating system services. They are used for system calls, allowing user programs to safely access privileged services.
Memory addressing mode “immediate” means
A Address in cache
B Data in hard disk
C Data in instruction
D Address in monitor
Immediate addressing stores the operand value directly in the instruction itself. This avoids an extra memory read for the operand, making execution faster for small constant values.
Direct addressing mode generally means
A Operand in register
B Operand in opcode
C Address in keyboard
D Address is given
In direct addressing, the instruction contains the memory address of the operand. The CPU uses that address to access the operand from memory, making addressing simple and easy to understand.
Indirect addressing mode generally means
A Data stored in IR
B Address points to address
C Cache stores opcode
D PC stores data value
In indirect addressing, the instruction points to a location that contains the effective address. This allows flexible memory referencing, but typically requires extra memory access, increasing execution time.
Register addressing mode uses operand from
A Main memory only
B Hard disk sector
C CPU register
D Optical disk track
Register addressing uses operands located in CPU registers. Since registers are extremely fast, this mode speeds up execution and reduces memory access, improving overall instruction performance.
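The four addressing modes above can be contrasted in one sketch, using a dict as toy memory. The addresses, values, and register names are assumptions for illustration.

```python
# Addressing-mode sketch with invented memory and register contents.
memory = {10: 99, 20: 10}      # address -> value
registers = {"R1": 42}

def operand(mode, field):
    if mode == "immediate":    # value is inside the instruction itself
        return field
    if mode == "direct":       # instruction holds the operand's address
        return memory[field]
    if mode == "indirect":     # instruction holds the address of the address
        return memory[memory[field]]
    if mode == "register":     # operand lives in a CPU register
        return registers[field]
```

Note the extra memory lookup in the indirect case: that is the added access time the explanation above refers to.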
Register Transfer Language (RTL) is used to describe
A Data movement operations
B Network cable rules
C Printer paper size
D Monitor refresh process
RTL describes how data transfers between registers and functional units occur, along with micro-operations. It helps explain internal CPU behavior at a low level without needing full circuit details.
Microprocessor mainly refers to
A RAM chip only
B CPU on a chip
C Hard disk controller
D Monitor driver board
A microprocessor integrates the CPU functions on a single integrated circuit. It typically needs external memory and peripherals, and it is used in computers and many digital systems for general processing.
A microcontroller generally includes
A Only ALU inside
B Only cache memory
C Only output devices
D CPU plus peripherals
A microcontroller integrates CPU, memory, and I/O peripherals on one chip. It is designed for embedded control tasks, such as controlling appliances or sensors, where compact and low-power operation is needed.
Multiprocessor system means
A Two input devices
B Many monitors used
C Multiple CPUs working
D Multiple keyboards used
A multiprocessor system has more than one processor to share workload. It can improve performance and reliability by running tasks in parallel, especially in servers and high-performance computing environments.
Which term best describes the number of parallel processing units in a CPU package
A CPU cores
B Cache lines
C Disk sectors
D Screen pixels
CPU cores are separate execution units inside one processor chip. More cores allow simultaneous processing of multiple tasks, improving throughput when software supports parallel execution.
Cache mapping is mainly related to
A Screen color mapping
B Cache placement rules
C File permission mapping
D Keyboard layout mapping
Cache mapping decides where a main memory block can be stored in cache. Common methods include direct-mapped, set-associative, and fully associative, each balancing speed, hardware cost, and hit rate.
Direct-mapped cache means each block maps to
A Any cache line
B Two random lines
C All cache sets
D One fixed line
In direct-mapped cache, each main memory block has exactly one possible cache location. This is simple and fast, but it can cause conflict misses when multiple blocks compete for the same line.
Set-associative cache allows a block to go into
A One location only
B Any set line
C Specific set lines
D Only main memory
Set-associative cache divides cache into sets, and a memory block maps to one set but can be placed in any line within that set. This reduces conflicts compared to direct mapping.
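Both placement rules come down to a modulo calculation; the cache sizes below are assumed for illustration.

```python
# Cache placement sketch: direct-mapped fixes one line per block;
# set-associative fixes one set but allows any way within it.
NUM_LINES = 8   # assumed direct-mapped cache with 8 lines
NUM_SETS = 4    # assumed 2-way set-associative: 8 lines / 2 ways

def direct_mapped_line(block):
    return block % NUM_LINES   # exactly one legal cache line

def set_index(block):
    return block % NUM_SETS    # one set; either way inside it may hold the block

# blocks 3 and 11 conflict in the direct-mapped cache (both want line 3),
# but in the set-associative cache they can share set 3's two ways
```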
A common reason for using pipelining is to increase
A Memory size
B Instruction throughput
C Screen resolution
D File compression ratio
Pipelining overlaps instruction stages so multiple instructions are processed at once in different stages. This increases instruction completion rate per unit time, even though individual instruction latency may not decrease.
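The throughput gain can be estimated with the standard cycle-count comparison, assuming k one-cycle stages and no stalls (an idealization):

```python
# Pipelining sketch: n instructions through k one-cycle stages take
# about k + (n - 1) cycles pipelined, versus n * k unpipelined.
def unpipelined_cycles(n, k):
    return n * k

def pipelined_cycles(n, k):
    return k + (n - 1)

n, k = 100, 5
speedup = unpipelined_cycles(n, k) / pipelined_cycles(n, k)
# speedup approaches k (here 5x) as n grows, matching the text's point
# that throughput rises while per-instruction latency does not shrink
```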
Which statement best matches “interrupt-driven I/O”
A Device signals CPU
B CPU polls continuously
C Disk stores interrupts
D Cache clears on I/O
In interrupt-driven I/O, a device notifies the CPU only when it needs attention. This avoids constant polling, allowing the CPU to do useful work until an interrupt requests service.