Last Sync: 2022-08-31 13:30:04

This commit is contained in:
tactonbishop 2022-08-31 13:30:04 +01:00
parent 1103a46204
commit e81be0c2df
5 changed files with 70 additions and 57 deletions

---
categories:
- Computer Architecture
tags: [CPU, electromagnetism]
---
# CPU architecture
The CPU comprises three core components:
- Registers (a form of memory positioned on the same chip as the CPU)
- the Arithmetic Logic Unit (ALU)
- the Control Unit (CU)
> This method of putting together a computer is known as the **Von Neumann Architecture**. It was devised by John von Neumann in about 1945, well before any of the components that would be needed to produce it had actually been invented.
## Registers
This is the part of the CPU that stores data. The memory cells that comprise it do not have capacitors (unlike DRAM), so they cannot store much data, but they work faster, which is what matters.
In terms of speed, registers sit at the top part of the overall memory hierarchy...
There are five main types of register in the CPU:
| Register type | What it stores |
| ----------------------- | ----------------------------------------------------------- |
| Accumulator | The results of calculations |
| Instruction Register    | The **instruction** currently being decoded and executed    |
| Memory Address Register | The DRAM address of the **data** to be processed            |
| Memory Data Register    | The data currently being transferred to or from memory      |
| Program Counter | The RAM address of the **next instruction** to be processed |
## Arithmetic Logic Unit
This is the hub of the CPU, where the binary calculations occur. It comprises [logic gates](/Hardware/Logic_Gates/Logic_gates.md) that execute the instructions passed from memory. This is where the data stored by the registers is acted upon.
It can execute arithmetic on binary numbers as well as logical operations.
This is the heart of the CPU; all the other components on the CPU chip are appendages to the execution that occurs within the ALU. It is also what is meant by a processor **core** in the hardware specs of computers, for instance _dual-core_, _quad-core_ etc.
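The ALU's repertoire can be sketched with Python's integer operators: arithmetic on binary numbers alongside the bitwise logical operations that the logic gates implement. A minimal illustration, not a model of any real ALU:

```python
# Two 4-bit values written as binary literals.
a, b = 0b1100, 0b1010

print(a + b)        # arithmetic: addition of binary numbers -> 22
print(bin(a & b))   # logical AND -> 0b1000
print(bin(a | b))   # logical OR  -> 0b1110
print(bin(a ^ b))   # logical XOR -> 0b110
```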
Below is a schematic of a series of logical circuits within the CPU core:
![74181aluschematic.png](/img/74181aluschematic.png)
The vast majority of general purpose computers are multi-core. This means that the CPU contains more than one processing unit. They are best thought of as mini-CPUs within the main CPU since they each have the same overall Von Neumann architecture.
With Intel processors, the two main consumer lines are the i5 and the i7. The latter typically has more cores than the former, and consequently greater concurrency through its additional threads.
## Control Unit
The CPU's [controller](/Hardware/Chipset_and_controllers.md). It takes the instructions in binary form from RAM (separate from the CPU, but connected) and then signals to the ALU and the memory registers what they must do to execute them. Think of it as the overseer that gets the ALU and registers to work together to run program instructions.
## Fetch, decode, execute
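The cycle can be sketched as a toy loop in code. This is a hypothetical miniature machine, not any real instruction set: "memory" is a list of (opcode, operand) pairs, and the variables mirror the registers in the table above (program counter, instruction register, accumulator).

```python
def run(memory):
    pc = 0   # Program Counter: address of the next instruction
    acc = 0  # Accumulator: results of calculations
    while pc < len(memory):
        ir = memory[pc]      # FETCH: copy the instruction into the Instruction Register
        pc += 1              # advance the Program Counter
        op, operand = ir     # DECODE: split into opcode and operand
        if op == "LOAD":     # EXECUTE: act on the decoded instruction
            acc = operand
        elif op == "ADD":
            acc += operand
        elif op == "HALT":
            break
    return acc

program = [("LOAD", 2), ("ADD", 3), ("HALT", None)]
print(run(program))  # 5
```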
## The system clock
Whilst modern CPUs and threading make it appear as though the CPU is capable of running multiple processes at once, access to the CPU is in fact sequential. The illusion of simultaneous computation arises because the processor is so fast that we do not detect the sequential changes. For this to happen, the CPU needs a means of scheduling and sequencing processes. This is made possible by the system clock, hence when talking about the speed of the CPU we do so with reference to _clock speeds_ and the _clock cycle_.
The clock's circuitry is based on a quartz crystal system like that used in watches. At precisely timed intervals, the clock sends out pulses of electricity that cause bits to move from place to place within [logic gates](/Hardware/Logic_Gates/Logic_gates.md) or between logic gates and [registers](/Hardware/CPU/Architecture.md#registers).
Simple instructions such as add can often be executed in just one clock cycle, whilst complex operations such as divide will require a number of smaller steps, each using one cycle.
We measure the speed of a chip process within the CPU in **Hertz (Hz)**. One Hertz is equivalent to _1 cycle per second_, where a "cycle" is a single clock **tick**. At 1 Hz, a tick therefore lasts one second.
A speed of 2 GHz, for example, means two billion cycles per second, so each tick lasts just half a nanosecond.
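The arithmetic is simply the reciprocal of the frequency: one tick lasts 1/f seconds. A quick sketch:

```python
def tick_seconds(hz):
    """Duration of one clock tick (cycle) for a clock running at `hz` Hertz."""
    return 1 / hz

print(tick_seconds(1))    # 1 Hz  -> 1.0 second per tick
print(tick_seconds(2e9))  # 2 GHz -> 5e-10 seconds, i.e. half a nanosecond
```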
## Electromagnetism: broader scientific context
Hertz was the scientist who first detected electromagnetic waves, and more broadly in science we use Hertz to measure the number of electromagnetic wave cycles in a signal.
![](/img/hertz_wave_freq.gif)
As the diagram above shows, a cycle is equal to one ascending and one descending crest. The more cycles per second, the greater the Hertz.

# The role of memory in computation
The following steps outline the way in which memory interacts with the processor during computational cycles, once the [bootstrapping](/Operating_Systems/Boot_process.md) process has completed and the OS kernel is itself loaded into memory.
1. A file is loaded from the hard disk into memory.
2. The instruction at the first address is sent to the CPU, travelling across the data bus part of the [system bus](/Hardware/Bus.md#system-bus).
3. The CPU processes this instruction and then sends a request for the next instruction across the address bus part of the system bus to the memory controller within the [chipset](/Hardware/Chipset_and_controllers.md).
4. The chipset finds where this instruction is stored within the [DRAM](/Hardware/Memory/RAM_types.md#dram) and issues a request to have it read out and sent to the CPU over the data bus.
> This is a simplified account; it is not the case that only single requests are passed back and forth. This would be inefficient and time-wasting. The kernel sends to the CPU not just the first instruction in the requested file but also a number of instructions that immediately follow it.
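The batching described in the note can be sketched as a toy memory controller: asked for one address, it reads out a small run of consecutive cells. The batch size of 4 and the `instr_N` names are illustrative only:

```python
# Pretend main memory: sixteen consecutively addressed instructions.
DRAM = [f"instr_{i}" for i in range(16)]

def request(address, batch=4):
    """Locate `address` in DRAM and read out that cell plus the ones after it."""
    return DRAM[address:address + batch]

print(request(0))  # ['instr_0', 'instr_1', 'instr_2', 'instr_3']
```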
![](/img/memory-flow.svg)
Every part of the above process - the journey across the bus, the lookup in the controller, the operations on the DRAM, the journey back across the bus - takes multiple CPU clock cycles.
## The role of the cache
The cache is SRAM memory that is separate from the DRAM that comprises the main memory. It exists in order to boost performance when executing the read/request cycles detailed above.
There are two types of cache memory:
- L1 cache
  - Situated on the CPU chip itself
- L2 cache
  - Situated outside of the CPU on its own chip
The L1 cache is the fastest since the data has less distance to travel when moving to and from the CPU. This said, the L2 cache is still very fast when compared to the main memory, both because it is SRAM rather than DRAM and because it is closer to the processor than the main memory.
Cache controllers use complex algorithms to determine what should go into the cache to facilitate the best performance, but generally they work on the principle that what has been previously used by the CPU will be requested again soon. If the CPU has just asked for an instruction at memory location 555 it's very likely that it will next ask for the one at 556, and after that the one at 557 and so on. The cache's controller circuits therefore go ahead and fetch these from slow DRAM to fast SRAM.
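That prefetch-on-miss principle can be sketched in a few lines. The dictionaries standing in for DRAM and SRAM, and the prefetch depth of 3, are illustrative assumptions, not how a real controller is built:

```python
DRAM = {addr: f"instr_{addr}" for addr in range(1000)}  # slow main memory
SRAM = {}                                               # fast cache

def read(addr, prefetch=3):
    """Return (value, 'hit'|'miss'); on a miss, also pull the next few addresses in."""
    if addr in SRAM:
        return SRAM[addr], "hit"
    # Miss: fetch the requested cell plus the ones that follow it.
    for a in range(addr, addr + 1 + prefetch):
        SRAM[a] = DRAM[a]
    return SRAM[addr], "miss"

print(read(555))  # ('instr_555', 'miss') - the slow path to DRAM
print(read(556))  # ('instr_556', 'hit')  - already prefetched into SRAM
```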

- The three numbers are load averages for the past 1 minute, 5 minutes and 15 minutes respectively.
- A load average close to 0 is usually a good sign: it means your processor isn't being challenged and you are conserving power. Anything equal to or above 1 means that a single process is using the CPU nearly all the time. You can identify that process with `htop`; it will be near the top. (This is often caused by Chrome and Electron-based software.)
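The same three numbers can be read programmatically: Python's `os.getloadavg()` (Unix only) returns the 1-, 5- and 15-minute load averages that `uptime` prints.

```python
import os

# Returns a 3-tuple: load averaged over the last 1, 5 and 15 minutes.
one, five, fifteen = os.getloadavg()
print(f"1m: {one:.2f}  5m: {five:.2f}  15m: {fifteen:.2f}")
```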
## Memory

---
categories:
- Operating systems
tags: [systems-programming, processes, memory]
---
# The Kernel
The kernel acts as the primary mediator between the hardware (CPU, memory) and user [processes](../Programming_Languages/Shell_Scripting/Processes.md). Let's look at each of its responsibilities in greater depth:
- process management
- memory management
- device drivers
- system calls
## Process management
> A process is just another name for a running program. Process management is the starting, pausing, resuming, scheduling and terminating of processes.
On modern computers it appears that multiple processes run simultaneously. This is only because the processor is so fast that we do not detect the changes; in fact, access to the CPU is always sequential. The sequence in which multiple programs are allowed to access the CPU is managed by the kernel.
> Consider a system with a one-core CPU. Many processes may be _able_ to use the CPU, but only one process can actually use the CPU at any given time...Each process uses the CPU for a fraction of a second, then pauses, then another process uses it for a fraction of a second and so on... (_How Linux Works: Third Edition_, Brian Ward 2021)
This process of the CPU shuffling between multiple processes is called _context switching_.
The role of the kernel in facilitating this is as follows:
1. The CPU runs the process for a time slice based on its internal timer, then hands control back to the kernel (kernel mode).
2. The kernel records the current state of the CPU and memory. This is necessary in order to resume the process that was just interrupted.
3. The kernel executes any tasks that arose in the last time slice executed by the CPU (e.g. collecting data from I/O).
7. Kernel switches the CPU into user mode and hands control of CPU to the process.
## Memory management
During the context switch from CPU to user space, the kernel allocates memory. It has the following jobs to manage:
- Keeping its own private area in memory for itself that user processes cannot access
- Assigning each user process its own section of memory
- Managing shared memory between processes and ensuring the private memory of processes is not accessed by others
- Managing read-only memory
- Allowing for the use of disk space as auxiliary memory
> Modern CPUs include a memory management unit which provides the kernel with **virtual** memory. In this scenario, memory isn't accessed directly by the process; instead, the process works on the assumption that it has access to the entire memory of the machine, and this is translated via a map onto the real memory, managed by the kernel.
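Virtual memory in miniature: the process uses virtual addresses, and a page-table map (maintained by the kernel, walked by the MMU) translates them to physical locations. The page size and frame numbers below are illustrative assumptions, not any particular machine's layout:

```python
PAGE_SIZE = 4096                 # a common page size; illustrative here
page_table = {0: 7, 1: 3}        # virtual page number -> physical frame number

def translate(vaddr):
    """Map a virtual address to a physical address via the page table."""
    page, offset = divmod(vaddr, PAGE_SIZE)
    return page_table[page] * PAGE_SIZE + offset

print(translate(10))    # virtual page 0 -> physical frame 7: 28682
print(translate(4096))  # virtual page 1 -> physical frame 3: 12288
```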
Devices are managed by the kernel and are not accessible directly via user space, since improper usage could crash the whole machine. There is little uniformity between devices, and as a result drivers are needed. These are kernel code that enable different OS kernels to access and control the devices.
## System calls
Syscalls are what enable programs to start and are required for the acts of opening, reading and writing files. System calls in Linux are typically managed via C.
In Linux there are two particularly important system calls:
- `fork()`
  - When a process calls fork, the kernel creates a nearly identical copy of this running process
- `exec()`
  - When a process calls exec it passes a program name as a parameter. Then the kernel loads and starts this program, replacing the current process.
Example with a terminal program like `ls`:
> When you enter `ls` into the terminal window, the shell that's running inside the terminal window calls `fork()` to create a copy of the shell, and then the new copy of the shell calls `exec(ls)` to run `ls`. (_Ibid._)
## Controlling processes
In Linux we can view, kill, pause and resume processes using [ps](../Programming_Languages/Shell_Scripting/Processes.md).
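The same signals that `kill` sends from the shell can be sent from code with `os.kill` (Unix only). A sketch that pauses, resumes and then terminates a sleeping child process:

```python
import os
import signal
import time

pid = os.fork()
if pid == 0:
    time.sleep(60)                # child: just sleep
    os._exit(0)

os.kill(pid, signal.SIGSTOP)      # pause the child
os.kill(pid, signal.SIGCONT)      # resume it
os.kill(pid, signal.SIGTERM)      # terminate it
os.waitpid(pid, 0)                # reap the child so it doesn't become a zombie
```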

---
categories:
- Computer Architecture
- Hardware
tags: [memory]
---