Initial commit, pure MD

thomasabishop 2022-04-23 13:26:53 +01:00
commit 0a1569ca26
136 changed files with 7810 additions and 0 deletions


@ -0,0 +1,256 @@
---
tags:
- Algorithms_Data_Structures
---
![Screenshot_2021-05-11_at_18.55.23.png](../img/Screenshot_2021-05-11_at_18.55.23.png)
*Summary of the main classes of algorithmic complexity*
## Distinguish algorithms from programs
Algorithms are general sets of instructions that take data in one state, follow a prescribed series of steps and return data in another state. Programs are a specific application of one or more algorithms to achieve an outcome in a specific context. With algorithms, the actual detail of the steps is mostly abstracted and it is irrelevant to what end the algorithm is being put. For instance you may create a program that returns addresses from a database using a postcode. It is irrelevant to the efficiency or value of the algorithm whether or not you are looking up postcodes or some other form of alphanumeric string.
## Algorithmic efficiency
Algorithms can be classified by their efficiency. Efficiency is a function of runtime speed. However, this doesn't always mean that the fastest algorithms are best.
If we are landing the Curiosity Rover on Mars we may choose an algorithm that is slower on average in exchange for a guarantee that it will never take longer than we find acceptable. In other cases, for example a video game, we may choose an algorithm that keeps the average time down, even if this occasionally leads to processes that need to be aborted because they take too long.
We need a generalised measure of efficiency to compare algorithms, across variant hardware. We can't simply use the number of steps, since some steps will be quicker to complete than others in the course of the overall algorithm and may take longer on different machines. Moreover the same algorithm could run at different speeds on the same machine, depending on its internal state at the given time that it ran. So we use the following: **the number of steps required relative to the input.**
>
> Two given computers may differ in how quickly they can run an algorithm depending on clock speed, available memory and so forth. They will however tend to require approximately the same number of instructions and we can measure the rate at which the number of instructions increases with the problem size.
This is what **asymptotic runtime** means: the rate at which the runtime of an algorithm grows compared to the size of its input. For precision and accuracy we use the worst case scenario as the benchmark.
So: the efficiency of algorithm *A* can be judged relative to the efficiency of algorithm *B* based on the rate at which the runtime of *A* grows compared to its input, compared to the same property in *B*, assuming the worst possible performance.
From now on we will use the word 'input' to denote the data that the algorithm receives (in most cases we will envision this as an array containing a certain data type) and 'execution' to denote the computation that is applied by the algorithm to each item of the data input. Rephrasing the above with these terms we can say that 'algorithmic efficiency' is a measure that describes the rate at which the execution time of an algorithm increases relative to the size of its input.
We will find that for some algorithms, the size of the input does not change the time each execution takes. In these cases, the total runtime is simply proportional to the input quantity: regardless of whether the input is an array of one hundred elements or an array of ten elements, the amount of work that is executed on each element is the same.
For other cases, this will not hold true. We will find that there is a relationship between input size and execution time such that the length of the input affects the amount of work that needs to be performed on each item at execution.
## Linear time
Let's start with linear time, which is the easiest runtime to grasp.
We need an example to make this tangible and show how an algorithm's runtime changes compared to the size of its input. Let's take a simple function that takes a sequence of integers and returns their sum:
````js
function findSum(arr) {
  let total = 0;
  for (let i = 0; i < arr.length; i++) {
    total += arr[i]; // add each element to the running total
  }
  return total;
}
````
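For instance, calling the function:
````js
findSum([1, 2]);       // 3
findSum([1, 2, 3, 4]); // 10: twice the elements, roughly twice the work
````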
The input of this function is an array of integers. It returns their sum as the output. Let's say that it takes 1ms for the function to sum an array of two integers.
If we passed in an array of four integers, how would this change the runtime? The answer is that, providing that the time it takes to sum two integers doesn't change, it would take twice as long.
As the time it takes to execute each step of `findSum` doesn't change, we can say confidently that the runtime grows in direct proportion to the number of integers we pass in.
A more general way to say this is that the runtime grows in step with the size of the input. For algorithms of the class of which `findSum` is a member: **the total runtime is proportional to the number of items to be processed**.
## Introducing asymptotic notation
If we say that it takes 1ms to process each integer, this gives us the following data set:
|Length of input|Runtime (ms)|
|:--------------|-----------:|
|2|2|
|3|3|
|4|4|
|5|5|
If we plotted this as a graph it is clear that this is equivalent to a linear distribution:
![lin.svg](../img/lin.svg)
Algorithms which display this distribution are therefore called **linear algorithms**.
The crucial point is that the amount of time it takes to sum the integers does not increase as the algorithm proceeds and the input size grows: this time remains the same. If it did increase, we would have a fluctuating curve on the graph. The per-execution time remains constant; only the number of instructions increases. This is why we have a nice steadily-advancing distribution in the graph.
We can now introduce notation to formalise the algorithmic properties we have been discussing.
## Big O notation
To express linear time algorithms formally, we say that:
>
> it takes some constant amount of time ($c$) to sum one integer and $n$ times as long to sum $n$ integers
Here the constant is the time for each execution to run and $n$ is the length of the input. Thus the complexity is equal to that time multiplied by the input.
The algebraic expression of this is $cn$: the constant multiplied by the length of the input. In algorithmic notation, the reference to the constant is always removed. Instead we just use $n$ and combine it with a 'big O', which stands for 'order of complexity'. Likewise, if an algorithm made four passes over its input we could technically express it as O(4n), but we don't, because we are interested in the general case, not the specific details of the runtime. So a linear algorithm is expressed algebraically as $O(n)$, which is read as "oh of n" and means
>
> $O(n)$ = with an order of complexity equal to (some constant) multiplied by n
Applied, this means an input of length 6 ($n$) where the per-execution runtime is constant ($c$) at 1ms has a total runtime of 6 × 1 = 6ms. Exactly the same as our table and graph. $O(n)$ is just a mathematical way of saying *the runtime grows on the order of the size of the input.*
>
> It's really important to remember that when we talk about the execution runtime being constant at 1ms, this is just an arbitrary placeholder. We are not really bothered about whether it's 1ms or 100ms: 'constant' in the mathematical sense doesn't mean a unit of time, it means 'unchanging'. We are using 1ms to get traction on this concept, but the fundamental point is that the size of the input doesn't affect the time each individual execution takes.
## Constant time
Constant time is another of the main classes of algorithmic complexity. It is expressed as O(1). Here we do away with $n$ because with constant time we are only ever dealing with a single execution, so we don't need a variable to express the nth member of a series or 'more than one'. Constant time covers all singular processes, without iteration.
An example in practice would be printing `array[0]`. Regardless of the size of the array, this only ever takes one step: the constant multiplied by one. On a graph this is equivalent to a flat line along the time axis. Since it only happens for one instant, it doesn't persist over time or have multiple iterations.
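As a minimal sketch (the function name is illustrative):
````js
function getFirst(arr) {
  return arr[0]; // one step, however long the array is: O(1)
}

getFirst([9, 8, 7]); // 9: the cost is the same for three elements or three million
````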
### Relation to linear time
If you think about it, there is a clear logical relationship between constant and linear time: because each individual execution within a linear algorithm takes constant time, regardless of the size of n, each single execution of an O(n) algorithm is equal to O(1). Thus O(n) is simply O(1) writ large, or iterated: at any given execution of an O(n) algorithm, we are effectively performing an O(1) operation.
## Quadratic time
With the examples of constant and linear time, the size of the input doesn't change the amount of work that needs to be performed for each item, but this only covers one subset of algorithms. In cases other than O(1) and O(n), the length of the input **can** affect the amount of work that needs to be performed at each execution. The most common example of this scenario is known as quadratic time, represented as $O(n^2)$.
Let's start with an example.
````js
const letters = ['A', 'B', 'C'];

function quadratic(arr) {
  for (let i = 0; i < arr.length; i++) {
    for (let j = 0; j < arr.length; j++) {
      console.log(arr[i]); // the inner loop runs arr.length times for every i
    }
  }
}

quadratic(letters);
````
This function takes an array. The outer loop runs once for each element of the array that is passed to the function. For each iteration of the outer loop, the inner loop also runs once for each element of the array.
In the example this means that the following is output:
````
A A A B B B C C C (length: 9)
````
Mathematically this means that the runtime grows at a rate of $n^2$: the size of the input multiplied by itself. Our outer loop (`i`) performs n iterations (just like in linear time), but for every one of those iterations the inner loop (`j`) also performs n iterations: three `j`s for every one `i`. So runtime here is directly proportional to the squared size of the input data set. As the input array has a length of 3, and the inner loop runs once for every element in the array, this is equal to 3 × 3, or 3 squared (9).
If the input had length 4, the runtime would be 16, or 4 × 4. For every iteration of the outer loop (which is itself linear), the inner loop runs as many times as the length of the input.
This is not a linear algorithm, because as n grows the runtime does not grow proportionally to the size of the input; it grows proportionally to the size of the input squared.
Graphically this is represented with a curved line as follows:
![square.svg](../img/square.svg)
We can clearly see that as n grows, the runtime curve gets steeper and more pronounced.
## Logarithmic time (log n)
A logarithm is best understood as the inverse of exponentiation:
$$ \log_{2} 8 = 3 \leftrightarrow 2^3 = 8 $$
When we use log in the context of algorithms we almost always mean base 2 (the binary number system), so we omit the 2 and just say log.
>
> With base two logarithms, the logarithm of a number roughly measures the number of times you can divide that number by 2 before you get a value that is less than or equal to 1
So applying this to the example of $\log 8$ , it is borne out as follows:
* 8 / 2 = 4 — count: 1
* 4 / 2 = 2 — count: 2
* 2 / 2 = 1 — count: 3
As we are now at 1, we can't divide any more, so $\log 8$ is equal to 3.
Obviously this doesn't work so neatly with odd numbers, so we approximate.
For example, with $\log 25$:
* 25 / 2 = 12.5 — count: 1
* 12.5 / 2 = 6.25 — count: 2
* 6.25 / 2 = 3.125 — count: 3
* 3.125 / 2 = 1.5625 — count: 4
* 1.5625 / 2 = 0.78125
Now we are lower than 1 so we have to stop. We can only say that the answer to $\log 25$ is somewhere between 4 and 5.
The exact answer is $\log 25 \approx 4.64$
Back to algorithms: $O(\log n)$ is a really good complexity to have. It sits between O(1) and O(n), much closer to O(1). Represented graphically, it starts off with a slight increase in runtime but then quickly levels off:
![Screenshot_2021-05-11_at_18.51.02.png](../img/Screenshot_2021-05-11_at_18.51.02.png)
Binary search is the classic algorithm that runs in log n time, and many divide-and-conquer sorting algorithms, such as merge sort, run in the closely related n log n time. A sketch of binary search follows.
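As a minimal sketch, assuming a sorted array of numbers: each comparison halves the remaining search space, so at most roughly log n comparisons are needed.
````js
function binarySearch(sortedArr, target) {
  let low = 0;
  let high = sortedArr.length - 1;
  while (low <= high) {
    const mid = Math.floor((low + high) / 2);
    if (sortedArr[mid] === target) return mid; // found: return the index
    if (sortedArr[mid] < target) {
      low = mid + 1; // discard the lower half
    } else {
      high = mid - 1; // discard the upper half
    }
  }
  return -1; // target is not present
}

binarySearch([2, 5, 8, 12, 16, 23, 38], 16); // 4
````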
## Reducing O complexity to the general case
When we talk about big O we are looking for the most general case; slight deviations, additions or diminutions in n are not as important as the big picture. We are looking for the underlying logic and patterns that are summarised by the classes O(1), O(n), $O(n^2)$ and others.
For example, with the following function:
````js
function sumAndAddTwo(arr) {
  let total = 0;
  for (let i = 0; i < arr.length; i++) {
    total += arr[i];
  }
  total += 2; // a single constant-time step after the loop
  return total;
}
````
The formal representation of the above complexity would be O(n) + O(1). But it's easier just to say O(n), since the O(1) that comes from adding two to the result of the loop makes a marginal difference overall.
Similarly, with the following function:
````js
function processSomeIntegers(integers) {
  let sum = 0;
  let product = 1; // start the product at 1 so multiplication accumulates correctly
  integers.forEach(function (int) {
    sum += int;
  });
  integers.forEach(function (int) {
    product *= int;
  });
  console.log(`The sum is ${sum} and the product is ${product}`);
}
````
It might appear to be more complex than the earlier summing function but it isn't really. We have one array (`integers`) and two loops. Each loop is of O(n) complexity and does a constant amount of work per element. If we add O(n) and O(n) we still say O(n), not O(2n). Looping twice through the array in separate passes just doubles the constant factor, and big O disregards constants. So rather than formalising this as O(n) + O(n), we reduce it to O(n).
When seeking to simplify algorithms to their most general level of complexity, we should keep in mind the following shorthands:
* Arithmetic operations always take constant time
* Variable assignment always takes constant time
* Accessing an element in an array by index or an object value by key is always constant
* In a loop, the complexity is the length of the loop times the complexity of whatever happens inside the loop
With this in mind we can break down the `findSum` function like so:
![breakdown.svg](../img/breakdown.svg)
This gives us:
$$ O(1) + O(1) + O(n) $$
Which, as noted above, can be reduced to just O(n).
## Space complexity
So far we have talked about time complexity only: how the runtime changes relative to the size of the input. With space complexity, we are interested in how much memory (conceived as an abstract spatial quantity corresponding to the machine's hardware) is required by the algorithm. We can use Big O notation for space complexity as well as time complexity.
Space complexity in this sense is called 'auxiliary space complexity'. This means the space that the algorithm itself takes up, independent of the size of the inputs. We are not focusing on the space that each input item takes up, only the overall space of the algorithm.
Again there are some rules of thumb:
* Booleans, `undefined`, and `null` take up constant space
* Strings require O(n) space, where n is the string length
* Reference types take up O(n): an array of length 4 takes up twice as much space as an array of length 2
So with space complexity we are not really interested in how many times the function executes if it contains a loop. We are looking at where data is stored: how many variables are initialised, how many items there are in the array.
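A minimal sketch contrasting the two cases (the function names are illustrative):
````js
function double(n) {
  return n * 2; // a single number is stored, whatever n is: O(1) space
}

function doubleAll(arr) {
  const doubled = []; // this array grows in step with the input: O(n) space
  for (const item of arr) {
    doubled.push(item * 2);
  }
  return doubled;
}
````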


@ -0,0 +1,24 @@
---
tags:
- Algorithms_Data_Structures
---
>
> Arrays are used both on their own and to implement many other data structures that place additional restrictions on how the data is manipulated.
## Algorithmic complexity, strict arrays compared with JavaScript
In terms of data retrieval, arrays have a good runtime: retrieving or storing an element takes constant time, and an entire array takes up O(n) space.
This only applies in the case of strict arrays as they exist in strictly typed languages such as Java and C++. In JavaScript an array is really closer to a list in other languages: its size (and of course the types it holds) does not need to be known in advance or specified at all. In stricter languages, you would declare the type for the array and specify its length, for example:
````cpp
int anArray[10] = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 };
````
Whilst storage and retrieval take place in constant time, inserting and deleting elements at the front of the array through methods like `shift` and `unshift` are more time consuming, since the index of every subsequent item has to be updated in response.
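As a rough JavaScript illustration (the comments describe typical engine behaviour rather than guarantees of the spec):
````js
const arr = [10, 20, 30, 40];

arr[2];       // constant time: the index maps directly to the element
arr.push(50); // typically amortised constant time: appends at the end

arr.shift();  // linear time: removes 10, then every remaining element must be re-indexed
````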
## Random access
As we can access any element of the array at any time through index notation, arrays offer random, non-sequential access. This is in contrast to stacks and queues (where only the top or front can be accessed) and linked lists.


@ -0,0 +1,44 @@
---
tags:
- Algorithms_Data_Structures
---
*Visualization of the queue data structure*
![queue.svg](../img/queue.svg)
## A queue is a sequential data structure and most similar to a stack
A queue is basically a stack inverted and can be visualised as a line of people waiting to be served. The person that is first into the queue is served first. The people who join the queue behind this person will be served after them.
As a result it can be summarised as **'first in, first out'** or FIFO.
Just like a stack it is a sequential data structure without random access. You cannot access all the elements at one instant, you can only access the oldest element. If you wish to access the newest element, you have to move through all the others that are ahead of it first, at which point it becomes the oldest.
It differs from a stack in that a stack only has one point of transaction: the front or 'top' of the stack. With a stack you add and remove from the top or front. With a queue, you have two points of transaction: the front of the queue for removing elements and the back of the queue for adding elements.
We can however add a 'peek' method to see which is the next element in line to come out.
As we are removing the first element added, we use the array `shift` method to remove items from the front of the array.
Removing an element from the queue is called **dequeuing**. Adding an element to the queue is called **enqueuing**. In terms of the tail/head nomenclature, the end of the queue where elements are enqueued is the **tail**, and the front of the queue, where elements are removed, is the **head**.
````js
class Queue {
  items = [] // array to store the elements comprising the queue
  enqueue = (element) => this.items.push(element) // add element to back
  dequeue = () => this.items.shift() // remove element from the front

  // Optional helper methods:
  isEmpty = () => this.items.length === 0 // return true if the queue is empty
  clear = () => (this.items.length = 0) // empty the queue
  size = () => this.items.length // count elements in queue
  peek = () => (!this.isEmpty() ? this.items[0] : undefined) // check which element is next in line
}
````
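A short run-through of the class above:
````js
const queue = new Queue();
queue.enqueue('a');
queue.enqueue('b'); // queue is now ['a', 'b']
queue.peek();    // 'a': the oldest element is next in line
queue.dequeue(); // removes and returns 'a': first in, first out
queue.size();    // 1
````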
## Use cases
* A queue sequences data in the order that it was first received. Thus it is most beneficial in scenarios where receipt time is a factor. For example, imagine a service whereby tickets go on sale at a certain time for a limited period. You may want to prioritise those who sent their payment earliest over those who arrived later.
* Serving requests on a single shared resource like a printer or CPU task scheduling.


@ -0,0 +1,119 @@
---
tags:
- Algorithms_Data_Structures
- recursion
---
>
> A recursive function is a function that calls itself in its definition.
More generally, recursion means a thing being defined in terms of itself. There are visual analogues that help to represent the idea, such as the Droste effect, where an image contains a version of itself within itself. The ouroboros is another example, and fractals also display recursive properties.
## Schema
The general structure of a recursive function is as follows:
![javascript-recursion.png](../img/javascript-recursion.png)
## Why use recursive functions?
Recursion is suited to solving problems that can be broken down into smaller, repetitive sub-problems. It is especially good for working on things that have many possible branches but are too complex for an iterative approach, or too costly in terms of memory and time complexity.
Recursive programming differs from **iterative** programming but both have similar use cases. Looping is a canonical example of working iteratively.
## Base condition
>
> Once a condition is met, the function stops calling itself. This is called a base condition.
Because recursion has the potential to continue infinitely, to use it we must specify a point at which it ends. However, as we are not using an iterative approach, we cannot rely on `while` or `forEach` to specify the boundaries of its operation. We call this the **base condition**, and it will typically be specified using conditional logic.
The schema for a recursive function with a base condition is:
````jsx
function recurse() {
  if (condition) {
    recurse(); // keep recursing while the condition holds
  } else {
    // stop calling recurse()
  }
}

recurse();
````
## Demonstrations
### Countdown
````jsx
// program to count down numbers to 1
function countDown(number) {
  // display the number
  console.log(number);

  // decrease the number value
  let newNumber = number - 1;

  // base case
  if (newNumber > 0) {
    countDown(newNumber);
  }
}

countDown(4);
````
* This code takes `4` as its input and outputs a countdown: `4, 3, 2, 1`
* In each iteration, the number value is decreased by 1
* The base condition is `newNumber > 0`. This breaks the recursive loop once the output reaches `1` and stops it continuing into negative integers.
Each stage in the process is noted below:
````
countDown(4) prints 4 and calls countDown(3)
countDown(3) prints 3 and calls countDown(2)
countDown(2) prints 2 and calls countDown(1)
countDown(1) prints 1 and stops: newNumber is 0, so the base condition fails and no further call is made
````
### Finding factorials
>
> The factorial of a positive integer **n** is the product of all the positive integers less than or equal to **n**.
To arrive at the factorial of **n**, you subtract 1 from **n** and multiply **n** by the result, repeating until the subtractive process runs out of positive integers. For example, if **n** is 4, the factorial of **n** is **24**:
$$ 4 \times 3 \times 2 \times 1 $$
4 multiplied by 3 gives you 12, 12 multiplied by 2 gives you 24, and 24 multiplied by 1 is 24.
This is clearly a process that could be implemented with a recursive function:
````js
// program to find the factorial of a number
function factorial(x) {
  // base case: the factorial of 0 is 1
  if (x === 0) {
    return 1;
  }
  // if the number is positive, recurse on the next integer down
  else {
    return x * factorial(x - 1);
  }
}

let num = 3;

// calling factorial() if num is non-negative
if (num >= 0) {
  let result = factorial(num);
  console.log(`The factorial of ${num} is ${result}`);
}
````
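Each stage in the process, in the style of the countdown trace above:
````
factorial(3) returns 3 * factorial(2)
factorial(2) returns 2 * factorial(1)
factorial(1) returns 1 * factorial(0)
factorial(0) hits the base case and returns 1
so factorial(1) = 1, factorial(2) = 2 and factorial(3) = 6
````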
![javascript-factorial 1.png](../img/javascript-factorial%201.png)


@ -0,0 +1,79 @@
---
tags:
- Algorithms_Data_Structures
---
*A stack visualised vertically*
![stack2.svg](../img/stack2.svg)
*A stack visualised horizontally*
![stack1.svg](../img/stack1.svg)
## A stack is a linear data structure that observes LIFO
Think of a stack like a pile of books: the last book that you add is the nearest to the top and therefore the first one that you can remove.
If you want to get a book that is not at the top, you have to first remove the books that are above it to get to it.
This type of data structure is linear and only allows sequential, not random, access. It is an example of 'last in, first out' (LIFO).
## We can build a stack from an array
A stack is an example of a data structure that can be built by adapting an array. If you think about it, all that is needed is an array to store the data, the array `push` method to add elements to the end of the array (the 'top' of the stack) and the array `pop` method to remove the element at the top.
## Demonstration
Below we create a stack constructor, using a class. An object created from this template will have the following properties and methods:
* `items[]` → an array to store the data
* `push()` → a method to add an element to the end of the stack
* `pop()` → a method to remove an element from the top
In addition we have the following helpers, which allow us to check the status of the stack and retrieve information about it:
* `isEmpty()` → check if the stack is populated or not
* `clear()` → empty the stack of its content (therefore making `isEmpty()` return `true`)
* `size()` → a method returning the stack's length
````js
class Stack {
  items = [] // the array that will store the elements that comprise the stack
  push = (element) => this.items.push(element) // add an element to the top of the stack
  pop = () => this.items.pop() // remove and return the last element from the stack

  // We can add some useful helper methods that return info about the state of the stack:
  isEmpty = () => this.items.length === 0 // return true if the stack is empty
  clear = () => (this.items.length = 0) // empty the stack
  size = () => this.items.length // count elements in stack
}
````
## Run through
````js
let stack = new Stack();
stack.push(1); // Add some elements to the stack
stack.push(2);
stack.push(3);

// Stack now looks like:
console.log(stack.items); // [1, 2, 3]

// Let's try removing the last element
stack.pop(); // 3 -> this was the last element we added, so it's the first one that comes out

// Now the stack looks like this:
// [1, 2]

// Let's add a new element
stack.push(true);

// Now the stack looks like:
// [1, 2, true]
````
## Practical applications
* Any application that wants to go 'back in time' must utilise a stack. For example, the 'undo' function in most software is a function of a stack: the most recent action is at the top, under that is the second most recent, and so on all the way back to the first action.
* Recursive functions: a function that calls itself repeatedly until a boundary condition is met is using a stack structure (the call stack). As the calls resolve, the most recent call returns first, then the one before it, back down to the first call.
* Balancing parentheses. Say you want to check whether the string `[()]` is balanced. Every time you find an opening parenthesis, you push it onto a stack; every time you find a closing parenthesis, you pop the stack and check that the pair matches. The same approach can be used when seeking to find palindromes. This sort of thing could be a code challenge, so a sketch follows below.
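A minimal sketch of the parenthesis-balancing idea (the function name is illustrative):
````js
function isBalanced(str) {
  const pairs = { ')': '(', ']': '[', '}': '{' };
  const stack = [];
  for (const char of str) {
    if (char === '(' || char === '[' || char === '{') {
      stack.push(char); // opening bracket: push it onto the stack
    } else if (char in pairs) {
      if (stack.pop() !== pairs[char]) return false; // closer doesn't match the most recent opener
    }
  }
  return stack.length === 0; // leftover openers mean the string is unbalanced
}

isBalanced('[()]'); // true
isBalanced('[(])'); // false
````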


@ -0,0 +1,55 @@
---
tags:
- Programming_Languages
- Databases
---
>
> A database is a collection of organised data that can be efficiently stored, sorted, and searched.
How the data is organised will often determine the *type* of database used. There are many different types of database; some examples are relational, object-oriented, graph, NoSQL, and distributed databases.
## ACID principle
To ensure the integrity of a database, each change or transaction must conform to a set of rules known as ACID:
* **atomicity**
* when changing data within a database, if any part of the change fails, the whole change will fail and the data will remain as it was before the change was made; this prevents partial records being created. Basically a safeguard
* **consistency**
* before data can be changed in a database, it must be validated against a set of rules
* **isolation**
* databases allow multiple changes at the same time, but each change is isolated from others
* **durability**
* once a change has been made, the data is safe, even in the event of system failure
>
> Databases will have mechanisms for **backup**, **distribution**, and **redundancy**, to ensure data is not lost.
## Database management system
A DBMS is software that can retrieve, add, and alter existing data in a database. MySQL, PostgreSQL, MongoDB and MariaDB are all examples of DBMSs. You can work with them via programming languages like PHP or through graphical clients such as phpMyAdmin, MicrosoftSQL, Adminer etc. There is also SQLite, which runs embedded in the application rather than as a separate server, making it useful for learning and local development. SQLite is also useful when you need a database specific to a single device without networked communication, such as on mobile.
There are also CLI tools for all the major databases.
While I will be working primarily through PHP, graphical database software is useful for visual grepping and checking that scripts are working as they should.
## Relational database architecture
Tables, fields and records are the basic building blocks of databases.
![FL-Databases-1.5_terminology 1.gif](../img/FL-Databases-1.5_terminology%201.gif)
### Table
A group of similar data with rows for **records** and columns for each **field**.
### Record
Horizontal/"row": a collection of items which may be of different data types all relating to the individual or object that the record describes
### Field
Vertical/ "column" : stores a single particular unit of data for each record. Each field must use the same data type.
Each individual field has **properties:** such as the data type, length or the total memory allocation.


@ -0,0 +1,22 @@
---
tags:
- Databases
- Networks
- http
---
## GET
* Get data
## POST
* Create data
## PUT
* Update data
## DELETE
* Remove data

Databases/Primary key.md

@ -0,0 +1,18 @@
---
tags:
- Programming_Languages
- Databases
---
>
> Every table in a relational database should have a **primary key**. A primary key is one **field that uniquely identifies each record**.
This is essential for carrying out operations across database tables and for creating and deleting database entries. It is also a safeguard: it means you can always identify a record by itself and don't have to rely on generic queries to identify it.
Sometimes you will have a dedicated field such as `UNIQUE_ID` for the primary key. Other times you can use an existing field to fulfil that function (a bit like using the `key` attribute in React). In both cases the following constraints **must be met:**
1. No two records can have the **same** primary key data
1. The primary key value should **never be reused**. Thus, if a record is deleted from the table, its key should not be re-allocated to a new record.
1. A primary key value **must not be modified** once it has been created
1. A primary key **must have a value**; it cannot be `null`

Databases/RESTful APIs.md

@ -0,0 +1,43 @@
---
tags:
- Databases
- REST
- apis
---
## Definition of an API
An application programming interface is a set of definitions and protocols for building and integrating application software. It can be thought of as a contract between an information provider and an information consumer. The API is a mediator between the clients and the resources they wish to acquire from a server or database.
## REST
REST stands for **Representational State Transfer**. It is a set of *architectural constraints* on the structure of an API rather than a fixed protocol. It is a particular way of implementing client-server interaction over HTTP.
When a request is made from a client to a resource via a RESTful API, the API transfers a representation of the state of the resource to the requester or endpoint. The information is delivered via HTTP. The format can be of several types (HTML, XML, plain text etc.) but is generally JSON, because of its broad compatibility with multiple programming languages.
### Key constraints
In order to qualify as RESTful, an API must meet the following constraints:
1. **Uniform interface**:
Possess a client-server architecture with requests managed through HTTP.
1. **Client-server decoupling**:
The client and server applications must be completely independent of one another. The *only* information the client should know about the server is the URI it uses to request the resource; it can't interact with the server in any other way. Likewise, the server shouldn't modify the client application in any way (contrast, for example, SSR) other than passing the requested data via HTTP.
1. **Statelessness**
Server applications may not store any data related to a client request between requests. Each request alone should contain all the information necessary for processing it, without recourse to any specifics of the client application. For example, a specification of POST with a certain JSON body and header authentication will be all that is provided to the server.
1. **Cacheability**
Where possible, resources should be cacheable on the client or server side. Server responses must contain information about whether caching is allowed for the delivered resource (you can see this in the headers in the DevTools console). The goal here is to improve performance on the client side whilst increasing scalability on the server side.
1. **Layered system architecture**
It may be the case that the data flow between the client and the server is not direct. For instance the request may be funneled through middleware or another program before it is received by the server. Similarly there may be several steps before the client receives the requested data. Whilst one should not assume a direct correspondence, REST APIs need to be designed so that neither the client nor the server can tell whether it communicates with the end application or an intermediary.
## Example
A basic example of a REST API would be a series of methods corresponding to the main [HTTP request types](HTTP%20request%20types.md).
| HTTP request type | URI                 | Action                      | Body?                    |
| ----------------- | ------------------- | --------------------------- | ------------------------ |
| GET               | /api/customers      | Retrieve customers as array | No                       |
| GET               | /api/customers/guid | Get a specific customer     | No, data comes from GUID |
| PUT               | /api/customers/guid | Update an existing customer | Yes                      |
| DELETE            | /api/customers/guid | Delete a customer           | No, data comes from GUID |
| POST              | /api/customers      | Create a new customer       | Yes                      |
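As a sketch, a client might call the POST endpoint above like this (the endpoint and payload are illustrative, and the call assumes it runs inside an async function):
````js
const response = await fetch('/api/customers', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ name: 'Ada Lovelace' }), // POST requires a body
});
const newCustomer = await response.json(); // the created resource comes back as JSON
````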

Databases/SQL syntax.md

@ -0,0 +1,492 @@
---
tags:
- Programming_Languages
- Databases
- sql
---
## Demonstration database
For the purposes of demonstration we will work from a made up database. This database stores information about computers, their manufacturers, properties and sale data:
* Overall database: **`computer_sales`**
* Tables: `manufacturer`, `model`, `sales`
* Example fields: `manufacturer_id`, `model_id`, `name`, `year_founded`, `ram`, `sale_date`
Below are the `model` and `manufacturer` tables output from the SQLite terminal client.
The model table:
````
model_id manufacturer_id name cpu_speed ram cores wifi release_date
---------- --------------- ---------------------- ---------- ---------- ---------- ---------- ------------
1 1 Raspberry Pi 1 Model A 0.7 256.0 1 0 2013-02-01
2 1 Raspberry Pi 1 Model B 0.7 256.0 1 0 2012-04-01
3 1 Raspberry Pi 1 Model B 0.7 512.0 1 0 2012-10-01
4 1 Raspberry Pi 1 Model A 0.7 512.0 1 0 2014-11-01
5 1 Raspberry Pi 1 Model B 0.7 512.0 1 0 2014-07-01
6 1 Raspberry Pi 2 Model B 0.9 1024.0 4 0 2015-02-01
7 1 Raspberry Pi 3 Model B 1.2 1024.0 4 1 2016-02-01
8 1 Raspberry Pi 3 Model B 1.4 1024.0 4 1 2018-03-14
9 1 Raspberry Pi 3 Model A 1.4 1024.0 4 1 2018-11-01
10 1 Raspberry Pi 4 Model B 1.5 1024.0 4 1 2019-06-24
11 1 Raspberry Pi 4 Model B 1.5 2048.0 4 1 2019-06-24
12 1 Raspberry Pi 4 Model B 1.5 4096.0 4 1 2019-06-24
13 1 Raspberry Pi Zero 1.0 512.0 1 0 2015-11-01
14 1 Raspberry Pi Zero W 1.0 512.0 1 1 2017-02-28
15 2 Apple Lisa 0.008 1.0 1 0 1983-01-19
16 2 Apple iMac 3.7 8192.0 4 1 2019-03-19
17 2 Apple MacBook Pro 2.6 16384.0 6 1 2019-05-21
18 2 Apple MacBook Air 2.6 8192.0 2 1 2019-07-09
19 3 Commodore VIC-20 0.00102 0.005 1 0 1980-01-01
20 3 Commodore 64 0.001023 0.064 1 0 1982-08-01
21 3 Amiga 500 0.00716 0.5 1 0 1987-04-01
````
The manufacturer table:
````
manufacturer_id name url year_founded trading
--------------- ------------ ----------------------- ------------ ----------
1               Raspberry Pi https://raspberrypi.org 2008         1
2               Apple        https://apple.com       1976         1
3               Commodore    https://www.commodore.c 1954         0
````
## Main commands
There are obviously many SQL commands but most standard CRUD actions can be executed with a small number of commands:
* `SELECT`
* `UPDATE`
* `CREATE`
* `INSERT`
* `DELETE`
## Language structure
Before we start using the syntax we need to understand the grammar:
![Pasted image 20220314155028.png](../img/Pasted%20image%2020220314155028.png)
Expressions differ from clauses and predicates in that they are not the mechanism for returning data (i.e. declaring a clause and a logical condition); they do something to the data as part of the retrieval. This is a bit subtle:
* `SELECT name FROM model WHERE cores = 4`
  * This retrieves the models that have 4 cores
* `SELECT count(*) FROM model WHERE cores = 4`
  * This counts the number of models that are returned, where the counting is a function over and above the retrieval itself.
### Examples from `computer_sales.db`
`sqlite> SELECT * from model WHERE cpu_speed=0.7` : return all models with a CPU speed equal to 0.7:
````
model_id manufacturer_id name cpu_speed ram cores wifi release_date
---------- --------------- ---------------------- ---------- ---------- ---------- ---------- ------------
1 1 Raspberry Pi 1 Model A 0.7 256.0 1 0 2013-02-01
2 1 Raspberry Pi 1 Model B 0.7 256.0 1 0 2012-04-01
3 1 Raspberry Pi 1 Model B 0.7 512.0 1 0 2012-10-01
4 1 Raspberry Pi 1 Model A 0.7 512.0 1 0 2014-11-01
5 1 Raspberry Pi 1 Model B 0.7 512.0 1 0 2014-07-01
````
````
`sqlite> SELECT count(*) FROM model WHERE cpu_speed=0.7` : count the models with a CPU speed equal to 0.7:
````
count(*)
----------
5
````
>
> Any value that is not a number should be in single-quotes, never double quotes
## The `WHERE` clause
Within the `SELECT` statement, the `WHERE` clause specifies the search criterion. The clauses are always written in this order: `FROM` followed by `WHERE`.
`SELECT name, cores, release_date FROM model WHERE cores = 4;`:
````
name cores release_date
---------------------- ---------- ------------
Raspberry Pi 2 Model B 4 2015-02-01
Raspberry Pi 3 Model B 4 2016-02-01
Raspberry Pi 3 Model B 4 2018-03-14
Raspberry Pi 3 Model A 4 2018-11-01
Raspberry Pi 4 Model B 4 2019-06-24
Raspberry Pi 4 Model B 4 2019-06-24
Raspberry Pi 4 Model B 4 2019-06-24
Apple iMac 4 2019-03-19
````
## Compound statements
Compound statements allow you to apply more filters to your clauses within an SQL statement. SQL allows you to build complex, combinatorial `WHERE` clauses by using Boolean and mathematical operators (i.e `AND` , `OR` , `>` , `<` , `!=` , `<=` ...)
Multiple clauses:
````sql
SELECT name, ram, release_date
FROM model
WHERE release_date > '2018-01-01' AND ram > 512;
````
More complex logic is achieved with parentheses:
````sql
SELECT name, cores, release_date
FROM model
WHERE (manufacturer_id = 1 OR manufacturer_id = 2) AND cores >= 2;
````
### Wildcards
SQL does not use Regex. Instead it has a simpler glob-like syntax for carrying out string matching.
In order to signal that you wish to compare by a wildcard and not a value, you have to use the `LIKE` keyword. The actual wildcard operator is `%` .
In an SQL statement, the `%` wildcard will match any number of occurrences of any character.
In the following query, any characters can appear before or after `MacBook` and the record will still be returned:
````sql
SELECT name, cores, release_date
FROM model
WHERE name LIKE '%MacBook%';
````
This wildcard only filters characters that come after `Raspberry` :
````sql
SELECT name, cores, release_date
FROM model
WHERE name LIKE 'Raspberry%';
````
## Retrieving data queries (`SELECT`)
### Print/retrieve/write an entire table, unfiltered
````sql
SELECT * FROM [table_name]
SELECT * FROM model
````
### Retrieve all data from a specific field
````sql
SELECT [field_name] FROM [table_name]
SELECT name FROM manufacturer
````
### Retrieve data and order it
This example orders alphabetically:
````sql
SELECT [field_name] FROM [table_name] ORDER BY [property]
SELECT name FROM model ORDER BY name
````
>
> When `ORDER BY` is used the default method for strings is alphabetical and for integers it is ascending order.
Here's a more complex real-life request:
````sql
SELECT name, cores, ram FROM model ORDER BY ram, name
````
It gives us:
````
name cores ram
---------------- ---------- ----------
Commodore VIC-20 1 0.005
Commodore 64 1 0.064
Amiga 500 1 0.5
Apple Lisa 1 1.0
Raspberry Pi 1 M 1 256.0
Raspberry Pi 1 M 1 256.0
Raspberry Pi 1 M 1 512.0
Raspberry Pi 1 M 1 512.0
Raspberry Pi 1 M 1 512.0
Raspberry Pi Zer 1 512.0
````
But we can obviously specify our own ordering method:
````sql
SELECT name, cores, release_date FROM model ORDER BY cores DESC, name;
````
Returns:
````
name cores release_date
----------------- ---------- ------------
Apple MacBook Pro 6 2019-05-21
Apple iMac 4 2019-03-19
Raspberry Pi 2 Mo 4 2015-02-01
Raspberry Pi 3 Mo 4 2018-11-01
Raspberry Pi 3 Mo 4 2016-02-01
Raspberry Pi 3 Mo 4 2018-03-14
Raspberry Pi 4 Mo 4 2019-06-24
Raspberry Pi 4 Mo 4 2019-06-24
Raspberry Pi 4 Mo 4 2019-06-24
````
>
> `ORDER BY` always comes last, after the selection and any filtering clauses such as `WHERE`
## Inserting data (`INSERT`)
### Adding a record
````sql
INSERT INTO sales
VALUES (1, 11, '2020-01-01','mhogan');
````
If you intend to omit a value, you shouldn't leave it blank; you should instead use `NULL`:
````sql
INSERT INTO sales
VALUES (1, 11, '2020-01-01', NULL);
````
>
> There is a problem with this format: it only works so long as the order of the values in the `VALUES` clause corresponds to the order of the fields in the table. To rule out error we should instead specify these fields along with the table name:
````sql
INSERT INTO sales (employee_id, sale_id, model_id, sale_date)
VALUES ('mhogan', 1, 11, '2020-01-01');
````
## Modifying existing records (`UPDATE`)
### Schematic syntax
````sql
UPDATE [table_name]
SET [field]
WHERE [conditional expression/filter]
````
### Real example
````sql
UPDATE manufacturer
SET url = 'http://www.hp.co.uk'
WHERE manufacturer_id = 4; -- typically this will be the primary key, as you are updating an existing record and need to identify it uniquely
````
### Multiple fields
````sql
UPDATE manufacturer
SET url = 'http://www.apple.co.uk',
year_founded = 1977
WHERE manufacturer_id = 2;
````
## Deleting records (`DELETE`)
````sql
DELETE FROM sales WHERE sale_id = 1;
````
## Change table structure (`ALTER`)
We use the `ALTER` query to add, remove and otherwise change the structural properties of a table.
### Add an additional field to existing table (`ALTER`)
This adds a `price` field to the `sales` table. The `price` field accepts data of the type `real`: a floating-point type that is slightly less precise (and uses less memory) than larger float types.
````sql
ALTER TABLE sales ADD price real;
````
## Create a table (`CREATE`)
````sql
CREATE TABLE employee (
employee_id text PRIMARY KEY,
first_name text,
surname text,
address_number integer,
address_1 text,
address_2 text,
locality text,
region text,
postal_code text,
phone_number text,
days_per_week real
);
````
We specify the new table name first, then its fields and their corresponding data types. We also set a primary key.
## Creating relationships between tables with `PRIMARY` and `FOREIGN` keys
We will demonstrate with an example. We already have the `sales` table. We want to create a new table called `returns` that will sustain a one-to-one relationship with `sales`. We are going to use `sale_id` as our foreign key in `returns`; this is the primary key in `sales`.
The `sales` table:
````
sale_id model_id sale_date employee_id price
---------- ---------- ---------- ----------- ----------
1 44 2020-07-27 tbishop 399.99
2 22 2021-02-07 tbishop 200.99
````
Creating the `returns` table and establishing relationship with `sales` using the `FOREIGN KEY` keyword:
````sql
CREATE TABLE returns (
return_id integer PRIMARY KEY,
sale_id integer NOT NULL,
date_returned text,
reason text,
FOREIGN KEY (sale_id) REFERENCES sales(sale_id)
);
````
Here's an example with more than one foreign key:
````sql
CREATE TABLE returns (
return_id integer PRIMARY KEY,
sale_id integer NOT NULL,
employee_id text NOT NULL,
date_returned text,
reason text,
FOREIGN KEY(sale_id) REFERENCES sales(sale_id),
FOREIGN KEY(employee_id) REFERENCES employee(employee_id)
);
````
## Selecting and combining data from multiple tables
Once a relationship has been created using primary and foreign keys (as detailed in the previous section), you are able to combine and integrate data from the different tables. This is known as performing **joins**.
### Inner joins
We can demonstrate this with the following scenario:
>
> We want to create a list of the name of all computers that have been sold and when they were sold.
This will require us to use the `name` field from the `model` table and the `sale_date` field from the `sales` table.
Here's the SQL:
````sql
SELECT model.name, sales.sale_date
FROM model
INNER JOIN sales on model.model_id = sales.model_id;
````
* We use dot notation to distinguish the `table.field` for each table.
* We use `INNER JOIN` to join the `sales` table with the `model` table where `model_id` field in `model` is the same as the `model_id` field in `sales`
This returns:
````sql
name                 sale_date
-------------------- ----------
Raspberry Pi 2 Mo    2015-02-01
Raspberry Pi 3 Mo    2018-11-01
````
Note that data will only be returned when there is a match on the joined field (`model_id`) in both tables. For example, if there is a model that has never been sold, there will be a `model.name` but no corresponding `sales.sale_date`, so that model won't appear in the results.
![model_sales_inner_join_step2.jpg](../img/model_sales_inner_join_step2.jpg)
### Outer joins
In the example above, we used the `INNER JOIN` method. This enshrines the logic:
>
> return only rows where there is a matching row in both tables
Which in the applied context means:
* If there is a model that has never been sold, it won't be returned
* If there is a sale without a model, it won't be returned
But there are other types of join that satisfy other types of logic.
The logical state that obtains in the case of **inner joins**:
![1_3.7-Inner_Join_Venn.png](../img/1_3.7-Inner_Join_Venn.png)
The logical state that obtains in the case of **left outer joins**
![2_3.7-Inner_Join_Left 1.png](../img/2_3.7-Inner_Join_Left%201.png)
The logical state that obtains in the case of **right outer joins**:
![3_3.7-Inner_Join_Right.png](../img/3_3.7-Inner_Join_Right.png)
The logical state that obtains in the case of **full outer joins**:
![4_3.7-Full_Outer_Join.png](../img/4_3.7-Full_Outer_Join.png)
This type of join is used when you want to discern when there is *not* a match between two fields across tables. For example, imagine that you wanted a list of computers that had never been sold. In this case, you would be interested in rows where there is a `model_id` without a corresponding `sale_id`.
In SQL this would be achieved with:
````sql
SELECT model.name, sales.sale_date
FROM model
LEFT JOIN sales on model.model_id = sales.model_id;
````
Note that this would return all the model names, but where there isn't a sale date, `NULL` would be returned. This is an **important distinction**: the outer join method doesn't just return the rows with a `NULL` value for `sale_date` as we might expect. It returns all models, including those that have not been sold. This is because it is oriented to the "left" table: the table cited first in the SQL, before the `on` keyword.
>
> A left outer join returns all the records from the left (model) table and those that match in the right (sales) table. Where there are no matching records in the right (sales) table, a `NULL` value is returned.
A **right outer join**, often referred to as a right join, is the opposite of a left outer join: it returns all the records from the right (sales) table and those that match in the left (model) table. In our scenario this would be every sale; any sale without a matching model would return `NULL` for the model fields.
Finally, a **full outer join** returns all the records from both tables, and where a record cannot be matched, a NULL value is returned. So this would mean there could be `NULL`s in both fields of the returned rows.
We can combine multiple types of join in the same SQL query:
````sql
SELECT model.name, sales.sale_date, manufacturer.url
FROM model
LEFT JOIN sales on model.model_id = sales.model_id
INNER JOIN manufacturer on model.manufacturer_id = manufacturer.manufacturer_id;
````
## Aggregate functions
Count rows, aliasing the result to a custom field name:
````sql
SELECT COUNT(*) AS total_sales
FROM sales
````
Sum:
````sql
SELECT SUM(price) as total_value
FROM sales
````
Average:
````sql
SELECT AVG(price) as average_income
FROM sales
````
Applying an aggregate function per group (here, counting sales per employee):
````sql
SELECT employee_id, COUNT(*) AS total_sales
FROM sales
GROUP BY employee_id
````


@ -0,0 +1,32 @@
---
tags:
- Linguistics
---
The following properties are widely believed by linguists to be the defining hallmarks of spoken language. They were originally formulated by [Charles Hockett](https://en.wikipedia.org/wiki/Charles_F._Hockett). They provide a way of distinguishing linguistic behaviour from other behaviours of organisms that may be communicative but not linguistic: for example, the dances that bees do to inform other bees about the location of nectar, or a dog dropping a ball at your feet.
## Displacement
Human language can talk about things that are beyond the immediate here and now. It can express concepts which transcend the current location or circumstances of the speakers (for example abstract ideas, past events, future events). For instance, two people could be at a watercooler in a work environment, but this doesn't mean they must be talking about the watercooler; they could be talking about the causes of the French Revolution.
[Hockett's design features - Wikipedia](https://en.wikipedia.org/wiki/Hockett%27s_design_features)
## Arbitrariness of the sign
## Duality of patterning
Speech can be analysed on two levels at once:
1. As made up of meaningless elements (i.e. a finite inventory of phonemes)
1. As made up of meaningful elements (i.e. an infinite array of morphemes)
>
> Spoken languages are composed of a limited set of meaningless speech sounds that are combined according to rules to form meaningful words
## Reflexivity
In essence, the ability of speakers of language to engage in linguistics: to use language to talk about language. Due to reflexivity humans can describe what language is, talk about the structure of language and discuss the idea of language with others, using language.
## Additional features
Add shorter notes on additional features listed by Hockett

Linguistics/Morphology.md

@ -0,0 +1,19 @@
---
tags:
- Linguistics
- morphology
---
Morphology is the linguistic study of words.
We can distinguish two meanings of word:
* The big notion (lexemes):
* notion of something we can look up in a dictionary. Lexicographers describe words as **the largest unpredictable combination of form and meaning.** In this context words are *lexemes* or *lexical items* which comprise a *lexicon* (dictionary)
* The smaller notion (morphemes):
* morphemes are the **smallest unpredictable combinations of form and meaning**. Linguists call these units morphemes and the study of them is morphology
For example, *RABBIT HOLE* viewed as a lexeme is a single word. We have used a space, but we could have used a hyphen to separate the parts; in German, compound words are simply squashed together with no space. Viewed morphologically, it comprises two morphemes, *RABBIT* and *HOLE* (each of which also has several meanings), which together make up a single larger lexeme.
In contrast DEEP and HOLE are both lexemes but DEEPHOLE is not.
Consider now FALLING. This is a single lexeme yet it comprises two morphemes. In contrast to RABBIT HOLE, not both of its morphemes are lexemes: FALL is, but ING is not. ING nevertheless has a meaning, denoting the duration of a process or some related modification of a verb.


@ -0,0 +1,8 @@
---
tags:
- Linux
---
1. Clone the repo from GitHub
1. `cd` into the repo
1. Run `makepkg -c`


@ -0,0 +1,13 @@
---
tags:
- Linux
- arch
---
## Pacman, Yay
|Function|Command / flag|
|:-------|-------------:|
|List all installed packages|`-Q`|
|List foreign (e.g. AUR) installed packages|`-Qm`|
|Remove package|`-R [package_name]`|

Linux/User management.md

@ -0,0 +1,14 @@
---
tags:
- Linux
---
## Switch user
If already logged in as a user you can switch users with the command `su - [username]`.
## Login as root
If you are logged in as a standard user and use `su -` without specifying a username, it will assume you wish to log in as root.
If you wish to login as root in the tty at startup, then use `root` as the username.


@ -0,0 +1,51 @@
---
tags:
- Logic
- propositional-logic
---
Sentences or propositions (we will use 'sentences' for consistency) are expressions **that have truth values**, either true or false.
We call a sentence which does not contain a logical connective (or 'sentential connective') a **simple sentence**.
We call a sentence that does contain a logical connective, a **compound sentence**.
Simple sentences are represented within a formal language of sentential logic with a single character, customarily *P* or *Q*. When we refer to the formal representation of such sentences in our system of sentential logic (SL) we call them **atomic sentences**.
Compound sentences consist in single characters for each atomic sentence that they comprise, combined with a symbol for the logical connective. When we refer to the formal representation of such sentences in SL we call them **molecular sentences**.
### Demonstration
Atomic sentence:
````
Socrates was a philosopher.
(P)
````
Molecular sentence:
````
Socrates was a philosopher and a drinker.
(P & Q)
````
Connectives in natural language often obscure the logical basis of the proposition being expressed (where such a sentence contains a proposition, i.e. excluding sentences that are *logically indeterminate*). The molecular sentence above is such an example. In this instance the sentence can be expressed more precisely as:
````
Socrates was a philosopher and Socrates was a drinker.
````
Where sentences in natural language cannot be elucidated by the addition of implied logical connectives in the manner above, they must be treated not as molecular sentences but as atomic sentences. Example:
````
Two splashes of gin and a few drops of vermouth make a great martini.
````
If we were to formalise this as:
````
Two splashes of gin make a great martini and a few drops of vermouth make a great martini.
````
We would lose the sense of the original and we would not be uncovering any logic that is in the original.


@ -0,0 +1,10 @@
---
tags:
- Logic
- propositional-logic
- derivation-rules
---
Given that the biconditional means that if $P$ is the case, $Q$ is the case and if $Q$ is the case, $P$ must be the case, if we have $P \equiv Q$ and $P$, we can derive $Q$, and vice versa.
![biconditional-elim.png](../img/biconditional-elim.png)


@ -0,0 +1,10 @@
---
tags:
- Logic
- propositional-logic
- derivation-rules
---
The biconditional means if $P$ is the case, $Q$ is the case and if $Q$ is the case, $P$ must be the case. Thus to introduce this operator we must demonstrate both that $Q$ follows from $P$ and that $P$ follows from $Q$. We do this via two sub-proofs.
![bi-intro.png](../img/bi-intro.png)


@ -0,0 +1,10 @@
---
tags:
- Logic
- propositional-logic
- derivation-rules
---
If we have a conditional and we have independently derived its antecedent, we may invoke its consequent. This is often referred to as *Modus ponens* (affirming the antecedent).
![cond-elim.png](../img/cond-elim.png)


@ -0,0 +1,10 @@
---
tags:
- Logic
- propositional-logic
- derivation-rules
---
If we can show that $Q$ follows from $P$ (typically via a subproof) then we can assert that $P$ implies $Q$. This is also sometimes known as *Conditional Proof*.
![cond-intro.png](../img/cond-intro.png)


@ -0,0 +1,10 @@
---
tags:
- Logic
- propositional-logic
- derivation-rules
---
If a conjunction exists, it means that both conjuncts are the case; therefore we can legitimately extract either one of them. Also known as *Simplification*.
![conjunc-elim.png](../img/conjunc-elim.png)


@ -0,0 +1,10 @@
---
tags:
- Logic
- propositional-logic
- derivation-rules
---
If two conjuncts have each been independently derived then they can be conjoined. Also known more simply as *Conjunction*.
![conjunc-intro.png](../img/conjunc-intro.png)

66
Logic/Consistency.md Normal file
View file

@ -0,0 +1,66 @@
---
tags:
- Logic
- propositional-logic
- consistency
---
## Informal definition
A set of sentences is consistent if and only if **it is possible for all the members of the set to be true at the same time**. A set of sentences is inconsistent if and only if it is not consistent.
### Demonstration
The following set of sentences form an inconsistent set:
````
(1) Anyone who takes astrology seriously is a lunatic.
(2) Alice is my sister and no sister of mine has a lunatic for a husband.
(3) David is Alice's husband and he reads the horoscope column every morning.
(4) Anyone who reads the horoscope column every morning takes astrology seriously.
````
The set is inconsistent because not all of its members can be true at once. If (1), (3), (4) are true, (2) cannot be. If (2), (3), (4) are true, (1) cannot be.
## Formal definition
>
> A finite set of sentences $\Gamma$ is truth-functionally consistent if and only if there is at least one truth-assignment in which all sentences of $\Gamma$ are true.
### Informal expression
````
The book is blue or the book is brown
The book is brown
````
### Formal expression
````
{P v Q, Q}
````
### Truth-table
````
P Q   P v Q   Q
T T T T *
T F T F
F T T T *
F F F F
````
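This definition lends itself to a brute-force mechanical check. The following is a minimal sketch, assuming we model sentences as Python functions from a truth-value assignment to `True`/`False`; the `consistent` helper is illustrative, not part of SL:
````python
from itertools import product

def consistent(sentences, letters):
    """Return a truth-value assignment on which every member of the
    set is true, or None if the set is truth-functionally inconsistent."""
    for values in product([True, False], repeat=len(letters)):
        assignment = dict(zip(letters, values))
        if all(sentence(assignment) for sentence in sentences):
            return assignment
    return None

# The set {P v Q, Q} from the informal expression above:
members = [lambda a: a["P"] or a["Q"], lambda a: a["Q"]]
print(consistent(members, ["P", "Q"]))  # {'P': True, 'Q': True}
````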
## Derivation
>
> In terms of logical derivation, a finite set $\Gamma$ of propositions is **inconsistent** in a system of derivation for propositional logic if and only if a sentence of the form $P \& \sim P$ is derivable from $\Gamma$. It is **consistent** just if this is not the case.
In other terms, if you can derive a contradiction from the set, the set is logically inconsistent.
A [contradiction](Logical%20truth%20and%20falsity.md#logical-falsity) has very important consequences for reasoning because if a set of propositions is inconsistent, any and all other propositions are derivable from that set.
![proofs-drawio-Page-5.drawio 3.png](../img/proofs-drawio-Page-5.drawio%203.png)
*A demonstration of the consequences of deriving a contradiction in a sequence of reasoning.*
Here we want to derive some proposition $Q$. If we can derive a contradiction from its negation as an assumption then, by the [negation elimination](Negation%20Elimination.md) rule, we can assert $Q$. This is why contradictions should be avoided in arguments: they 'prove' everything which, by association, undermines any particular premise you are trying to assert.

View file

@ -0,0 +1,41 @@
---
tags:
- Logic
- propositional-logic
---
## Corresponding material conditional to show validity
To demonstrate *truth-functional validity* we have to construct a truth-table which contains each of the premises and the conclusion and then review each row to see whether there is an assignment on which the premises are all true and the conclusion false. If there is no such assignment, the argument is valid.
A simpler way to get the same result is to invoke the corresponding material conditional. Here we concatenate the premises using conjunction and then join them to the conclusion using the material conditional, which then becomes the main connective. We then populate the truth table for this compound sentence. If it is logically true, the argument is valid.
### Demonstration
We will demonstrate with the following set:
$$ \{ P \equiv Q, P \lor Q, P \& Q \} $$
````
P Q   P ≡ Q   P v Q   P & Q
T T T T T *
T F F T F
F T F T F
F F T F F
````
````
P Q   ( ( P ≡ Q ) & ( P v Q ) ) ⊃ ( P & Q )
T T T
T F T
F T T
F F T
````
We see above that the main connective, the material conditional, returns true on every truth-value assignment. In other words, the sentence is logically true. Consequently the argument is valid.
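This check can also be carried out mechanically by exhausting the truth-value assignments. A short sketch, where the `tautology` helper and the lambda encoding are assumptions for illustration:
````python
from itertools import product

def tautology(sentence, letters):
    """True iff the sentence is true on every truth-value assignment."""
    return all(sentence(dict(zip(letters, values)))
               for values in product([True, False], repeat=len(letters)))

# ((P ≡ Q) & (P v Q)) ⊃ (P & Q), the corresponding material conditional above
cmc = lambda a: ((not ((a["P"] == a["Q"]) and (a["P"] or a["Q"])))
                 or (a["P"] and a["Q"]))
print(tautology(cmc, ["P", "Q"]))  # True, so the argument is valid
````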
## Corresponding material biconditional
We can use the corresponding material biconditional as a shorthand for demonstrating logical equivalence between two sentences.
For two putatively equivalent sentences $P$ and $Q$, $P$ and $Q$ are logically equivalent if the compound sentence $P \equiv Q$ is logically true.

31
Logic/DeMorgan's Laws.md Normal file
View file

@ -0,0 +1,31 @@
---
tags:
- Logic
- propositional-logic
- derivation-rules
- theorems-axioms-laws
---
DeMorgan's laws express some fundamental equivalences that obtain between the Boolean [connectives](Truth-functional%20connectives.md):
## First Law
> The negation of a conjunction is logically equivalent to the disjunction of the negations of the original conjuncts.
$$
\sim (P \& Q) \equiv \sim P \lor \sim Q
$$
The equivalence is demonstrated with the following truth-table:
![demorgan-1.png](../img/demorgan-1.png)
## Second Law
> The negation of a disjunction is equivalent to the conjunction of the negations of the original disjuncts.
$$
\sim (P \lor Q) \equiv \sim P \& \sim Q
$$
![demorgan-2.png](../img/demorgan-2.png)
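Both laws can also be confirmed mechanically by checking every truth-value assignment; a minimal Python sketch:
````python
from itertools import product

# Verify both laws on all four truth-value assignments for P and Q.
for P, Q in product([True, False], repeat=2):
    assert (not (P and Q)) == ((not P) or (not Q))  # first law
    assert (not (P or Q)) == ((not P) and (not Q))  # second law
print("Both equivalences hold on every assignment.")
````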

View file

@ -0,0 +1,19 @@
---
tags:
- Logic
- propositional-logic
- derivation-rules
---
This rule is sometimes also referred to as *Constructive Dilemma*. This can be a bit tricky to understand because the goal is to derive or *introduce* a new proposition separate from the disjunction you start out with. This may be a disjunction, a single proposition, or a proposition containing any other logical connective. You do this by constructing two sub-proofs, one for each of the disjuncts comprising the disjunction you start out with. If you can derive your target proposition as the conclusion of each subproof then you may invoke the conclusion in the main proof and take it to be derived.
![disjunc-elim.png](../img/disjunc-elim.png)
*Here is an example where Disjunction Elimination is used to derive a new disjunction.*
![proofs-drawio-Page-6.drawio.png](../img/proofs-drawio-Page-6.drawio.png)
*Here are two further examples that use Disjunction Elimination to derive singular propositions*
![ORelim1.png](../img/ORelim1.png)
![ORelim2.png](../img/ORelim2.png)

View file

@ -0,0 +1,11 @@
---
tags:
- Logic
- propositional-logic
- derivation-rules
---
This rule can seem a little odd: like we are randomly introducing an additional proposition without giving any justification. However this is just a consequence of the fact that if $P$ is true, so is $P \lor Q$: unlike conjunction, only one disjunct needs to be true for the compound disjunction to be true. This is represented in the context of [truth-trees](Truth-trees.md#disjunction-decomposition) by the fact that truth can pass up via either branch of a disjunction pattern.
This rule is sometimes also referred to (confusingly) as *Addition*.
![disjunc-intro.png](../img/disjunc-intro.png)

View file

@ -0,0 +1,66 @@
---
tags:
- Logic
- propositional-logic
- proofs
---
When we construct a formal proof in logic we are seeking to show that a certain proposition is **derivable** from other propositions. We use the words *derivation* and *proof* interchangeably.
>
> A sentence $P$ is derivable in a system of propositional logic from a finite set of sentences if and only if there is a derivation in this system in which all and only the members of the set are **primary assumptions** and $P$ is the sentence on the last line.
We express the above symbolically as $\Gamma \vdash P$. (Be careful not to confuse *derivable* ($\vdash$) with *entails* ($\vDash$).)
Derivability is therefore a property that a proposition can possess relative to a set.
For instance to demonstrate derivability for:
$$
\{\sim F \lor D, F, D \supset (G \& H)\} \vdash G \& H
$$
We would establish $\sim F \lor D, F, D \supset (G \& H)$ as primary assumptions and then, using the derivation rules of the system, conclude with $G \& H$. Every sentence in the derivation is either a **primary assumption**, an **auxiliary assumption**, or justified by the rules of the derivation. An auxiliary assumption is one belonging to a sub-derivation. The primary assumptions belong to the main derivation.
For any given derivation of the form $\Gamma \vdash P$ there may be a number of ways of demonstrating the derivation (more than one application of the rules governing the system) but any one of them is sufficient to establish derivability.
>
> We will tend to use the terms *derivation* and *proof* interchangeably but we should note that there is a technical distinction in that a **proof is a derivation in which all of the assumptions have been discharged**
## Constructing proofs
We place assumptions above derivations and mark them *A* in the right-hand margin. We use a shorthand for the derivation rules and also place these in the right-hand margin.
We divide assumptions from derivations with a horizontal line. We number each line and use this to refer to the line we are applying the derivation to. Sub-proofs follow this structure recursively.
This is known as *Fitch notation*
*Schematically*
![proofs-drawio-Page-5.drawio.png](../img/proofs-drawio-Page-5.drawio.png)
*Applied example*
![proofs-drawio-Page-6.drawio.png](../img/proofs-drawio-Page-6.drawio.png)
## Sub-proofs
When a sub-proof is terminated, the assumption with which it starts is said to be *discharged*. Its conclusion can then be drawn upon and invoked within the main proof by reference to its start and end line numbers. However, statements within the sub-proof cannot be referred to again from within the main proof; only its result can.
## Derivation rules
Derivation rules are [syntactic](Syntax%20of%20sentential%20logic.md) rather than semantic. They are applied on the basis of their form rather than on the basis of the truth conditions of the sentences they are applied to.
>
> Derivation rules can be applied without having an interpretation of the symbols in mind. A derivation rule tells us that: given a group of symbols with a certain structure, we can write down another group of symbols with a certain structure.
Each of the main logical connectives has an associated derivation rule. The binary connectives have pairs of rules, one for the introduction of the connective and one for its elimination.
The main derivation rules:
* [Negation Introduction](Negation%20Introduction.md)
* [Negation Elimination](Negation%20Elimination.md)
* [Conjunction Introduction](Conjunction%20Introduction.md)
* [Conjunction Elimination](Conjunction%20Elimination.md)
* [Disjunction Introduction](Disjunction%20Introduction.md)
* [Disjunction Elimination](Disjunction%20Elimination.md)
* [Conditional Introduction](Conditional%20Introduction.md)
* [Conditional Elimination](Conditional%20Elimination.md)
* [Biconditional Introduction](Biconditional%20Introduction.md)
* [Biconditional Elimination](Biconditional%20Elimination.md)

50
Logic/Indeterminacy.md Normal file
View file

@ -0,0 +1,50 @@
---
tags:
- Logic
- propositional-logic
---
The vast majority of sentences in natural and formal logical languages are neither [logically true](Logical%20truth%20and%20falsity.md#logical-truth) nor [logically false](Logical%20truth%20and%20falsity.md#logical-falsity). This makes sense because logically true and logically false sentences are all either tautologies or contradictions and as such do not express information about the state of events in the world. We call sentences that are neither logically true nor logically false, logically indeterminate sentences.
## Informal definition
A sentence is logically indeterminate if it is neither logically true nor logically false. This is to say: it can be both [consistently](Consistency.md) asserted and consistently denied.
For example the sentence:
````
It is raining.
````
May be true or false, thus it can be both asserted and denied quite consistently. It is true if it actually is raining and false if it actually is not raining. There is no logical contradiction in saying it is raining when it isn't raining; this assertion is simply false. There is a contradiction in asserting both states at once. Thus the sentence:
````
It is raining and it is not raining.
````
Cannot be consistently asserted as there is no possibility of the sentence being true. It is either raining or it isn't raining. Given the law for conjunction both conjuncts must be true for the sentence as a whole to be true. But in the case of this sentence if one conjunct is true, the other must be false and vice versa, hence it is not possible for the sentence to be true at all. It can *only* be false.
Contrariwise the sentence:
````
It is raining or it is not raining.
````
Cannot be consistently denied as there is no possibility of it being false. It is either raining or not raining. Given the law for disjunction, either disjunct can be true to make the sentence as a whole true. Given that it is either raining or not raining in either scenario, the sentence as a whole will be true. Therefore there is no possibility of it being false, it can *only* be true.
## Formal definition
>
> A sentence P is truth-functionally indeterminate if and only if it is neither truth-functionally true nor truth-functionally false.
````
P
````
### Truth-table
````
P P
T T
F F
````
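The trichotomy between logically true, logically false and logically indeterminate sentences can be computed by inspecting a sentence's full column of truth-values. A sketch, assuming the same illustrative function-based modelling of sentences (the `classify` helper is hypothetical):
````python
from itertools import product

def classify(sentence, letters):
    """Classify a truth-functional sentence by its column of truth-values."""
    column = [sentence(dict(zip(letters, values)))
              for values in product([True, False], repeat=len(letters))]
    if all(column):
        return "logically true"
    if not any(column):
        return "logically false"
    return "logically indeterminate"

print(classify(lambda a: a["P"], ["P"]))                 # logically indeterminate
print(classify(lambda a: a["P"] and not a["P"], ["P"]))  # logically false
print(classify(lambda a: a["P"] or not a["P"], ["P"]))   # logically true
````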

View file

@ -0,0 +1,13 @@
---
tags:
- Logic
- propositional-logic
- theorems-axioms-laws
---
>
> A proposition cannot be true and false at the same time.
> $$
> \sim (P \& \sim P)
> $$

View file

@ -0,0 +1,13 @@
---
tags:
- Logic
- propositional-logic
- theorems-axioms-laws
---
>
> Every proposition has to be either true or false. There can be no middle ground.
> $$
> P \lor \sim P
> $$

View file

@ -0,0 +1,51 @@
---
tags:
- Logic
- propositional-logic
---
>
> Two sentences, P and Q, are truth-functionally equivalent if and only if there is no truth assignment in which P is true and Q is false
### Informal expression
````
P: If it is raining then the pavement will be wet.
Q: The pavement is not wet unless it is raining.
````
### Formal expression
$$
(P \supset Q) \equiv (\sim P \lor Q)
$$
### Truth-tables
````
P Q P ⊃ Q
T T T
T F F
F T T
F F T
````
````
P Q   ~ P v Q
T T T
T F F
F T T
F F T
````
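Truth-functional equivalence can likewise be checked mechanically by comparing the two sentences column by column; a short sketch in which the `equivalent` helper and the lambda encoding are assumptions for illustration:
````python
from itertools import product

def equivalent(s1, s2, letters):
    """True iff there is no assignment on which the two sentences differ."""
    return all(s1(a) == s2(a)
               for a in (dict(zip(letters, values))
                         for values in product([True, False], repeat=len(letters))))

conditional = lambda a: a["Q"] if a["P"] else True  # P ⊃ Q
disjunction = lambda a: (not a["P"]) or a["Q"]      # ~P v Q
print(equivalent(conditional, disjunction, ["P", "Q"]))  # True
````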
### Derivation
>
> Propositions $P$ and $Q$ are equivalent in a system of [derivation](Formal%20proofs%20in%20propositional%20logic.md) for propositional logic if $Q$ is derivable from $P$ and $P$ is derivable from $Q$.
Note that the property of equivalence stated in terms of derivability above is identical to the derivation rule for the [material biconditional](Biconditional%20Introduction.md):
![bi-intro.png](../img/bi-intro.png)
//TODO: Add demonstration of this by deriving two equivalents from one of DeMorgan's Laws

View file

@ -0,0 +1,40 @@
---
tags:
- Logic
- Philosophy
- propositional-logic
- modality
---
## Logical possibility
In distinguishing the properties of [logical consistency](Consistency.md) and [validity](Validity%20and%20entailment.md#validity) we make tacit use of the notion of **possibility**. This is because when we consider the validity of an argument we are assessing truth-conditions and this consists in asking ourselves what could or could not be the case: were it such that *P*, then it would be the case that *Q*. It is important to understand what possibility means in the context of logic and how it differs from what we might mean ordinarily when we use the term.
It is evident from the case of arguments that are valid but not sound that logic operates with a specialised notion of possibility. For example it has to be the case that the proposition *Every woman can levitate* is logically possible since the following argument is valid:
````
1. P: Janice is a woman.
2. P: Every woman can levitate.
3. C: Janice can levitate.
````
But we know of course that women cannot levitate. When we assert that this is impossible we are relying on a stronger notion of possibility than logical possibility. It follows that the concept of possibility can have different degrees. The scope of the concept of possibility has been the concern of logicians and philosophers since at least the time of Plato and numerous different formulations exist. The notion that we mostly work with unreflectively in everyday life is nomological possibility: being governed by laws, where these laws pertain to our current understanding of the natural world as determined by physics. Levitation is therefore nomologically impossible but logically possible.
If logical possibility is not constrained by the laws of physics does it place any restrictions on what is possible? Logic applies a single restriction, the law of non-contradiction: a proposition cannot be both true and false at once.
Some examples of contradictory propositions:
* There is a dog that is not a dog
* Today is Tuesday and today is not Tuesday
* The cat that is dead is alive
From this we can derive the following property of logical possibility:
>
> A proposition is logically possible just if it does not imply a contradiction.
## Logical necessity
A sentence is *logically necessary* if it is true in every logically possible circumstance, which is to say: true on every possible truth-functional assignment. Necessity and [logical truth](Logical%20truth%20and%20falsity.md#logical-truth) are therefore synonyms: anything that is logically true (a tautology) is true by necessity (could not be otherwise).
Further, every logical truth is logically possible but not everything that is logically possible is logically true. It is possible that it is raining but this is not logically necessary - it could be otherwise, i.e. not raining. However it is not possible that it could be both raining and not raining.

View file

@ -0,0 +1,109 @@
---
tags:
- Logic
- propositional-logic
---
We say of certain sentences that they are logically true or logically false.
## Logical falsity
### Informal definition
A sentence is logically false if and only if **it is not possible for the sentence to be true**. The sentence itself cannot be consistently asserted.
**Demonstration**
````
There is a country that is not a country.
Apples are fruits and apples are not fruits.
````
Neither sentence can be true because the truth of the first clause is contradicted by the second. By the principle of [consistency](Consistency.md), it is not possible for both clauses to be true at once; therefore the sentence overall has the truth value of falsity.
The first example above is a simple sentence but logical falsity also applies to compound sentences, and it is actually easier to see the logical principle at work with compound sentences since one simple sentence of the compound contradicts the other such that the overall sentence cannot be consistently asserted:
````
It is raining and it is not raining.
````
### Formal definition
>
> A sentence P is truth-functionally false if and only if P is false on every truth-value assignment
### Formal expression
````
P & ~ P
````
### Truth-table
````
P   P & ~P
T     F
F     F
````
## Logical truth
### Informal definition
A sentence is logically true if and only if it is not possible for the sentence to be false. The sentence itself cannot be [consistently](Consistency.md) denied.
**Demonstration**
````
A rose is a rose.
Today is Tuesday unless today is not Tuesday.
````
Regardless of any facts obtaining in the world, these sentences cannot be false.
As with logically false sentences, logical truth can also apply to compound sentences:
````
It is Monday and Monday is a day of the week.
````
### Formal definition
>
> A sentence P is truth-functionally true if and only if P is true on every truth-value assignment
````
P v ~P
````
### Truth-table
````
P   P v ~P
T T
F T
````
### Consequences
The existence of logically false and logically true sentences affects the validity and soundness of arguments in which they are used. These are technicalities that have philosophically interesting consequences.
* If an argument contains premises which are logically false then this argument will perforce be valid. This is because one cannot consistently assert the premises and deny the conclusion, which is the definition of validity. However the *reason* why one cannot consistently assert the premises and deny the conclusion is because one cannot consistently assert the premises - they conflict with each other. Furthermore, as the argument contains false premises, it cannot be sound.
````
(P1) Russia is a country.
(P2) Russia is not a country.
(P3) All countries have languages.
____________________________________________
(C) Russian is a language.
````
* Any argument with a logically true conclusion is valid. Because the conclusion cannot be consistently denied it follows that we cannot consistently assert the premises *and* deny the conclusion. Whether or not the argument is sound remains an open question however. If the premises happen to be true then the argument will be sound on the strength of the conclusion being logically true but if the premises are false it will be unsound regardless of the truth of the conclusion.
````
(P1) Horses have legs.
(P2) Animals with legs can move.
____________________________________________
(C) A horse is a horse.
````

View file

@ -0,0 +1,9 @@
---
tags:
- Logic
- propositional-logic
- derivation-rules
---
![negate-elim 1.png](../img/negate-elim%201.png)
Like the [introduction](Negation%20Introduction.md) rule for negation, the elimination rule also works by deriving a contradiction. It is basically *Negation Introduction* in reverse. Instead of starting the subproof with a true proposition from which you derive a contradiction, you start with the negation of a proposition, derive a contradiction and then assert the positive of the negated proposition you started out with.

View file

@ -0,0 +1,10 @@
---
tags:
- Logic
- propositional-logic
- derivation-rules
---
This is also known as *proof by contradiction*. You start with an assumption declared in a subproof. If you can derive a contradiction from this assumption (typically from the introduction of another proposition and its negation), then you are permitted to derive the negation of the auxiliary assumption in the main proof.
![negate-intro 1.png](../img/negate-intro%201.png)

View file

@ -0,0 +1,31 @@
---
tags:
- Logic
- propositional-logic
---
## Object and metalanguages
When we talk about a language we call that language the **object language**. A **metalanguage** is a language used to describe some object language.
When we are developing a formal logical language (which we may call SL or PL for 'sentential' and 'propositional' logic respectively), the formal language is the object language and natural language (e.g. English) is the metalanguage.
**Demonstration**
If we talk about German in English, German is the object language and English is the metalanguage.
## Use and mention
There is an associated distinction: that of use and mention.
When we create an expression in a language we are said to *use* that language. When we remark upon said expression we are said to be *mentioning* the language. This distinction may correspond to the object and metalanguage difference above but doesn't have to; use and mention can happen in the same language. For example:
````
'London' is the word that denotes the capital of the UK.
````
## Metavariables
A metalinguistic variable (metavariable for short) is an expression in the metalanguage that is used to talk generally about expressions of the object language. The convention in these notes will be to embolden single letters when these letters are used as metavariables.
For example, instead of saying *'P & Q' is an expression comprising two atomic sentences and a conjunction* we might say ***P** is an expression comprising two atomic sentences and a conjunction*. In this instance **P** is a metavariable in the metalanguage mentioning the expression P & Q in the object language.

10
Logic/Reiteration.md Normal file
View file

@ -0,0 +1,10 @@
---
tags:
- Logic
- propositional-logic
- derivation-rules
---
**Reiteration (R)** allows us to restate any proposition already in the proof within the main proof or a more deeply nested subproof. Reiteration allows us to reuse any assumptions, or propositions derived from assumptions, without having to introduce a new dependency with another assumption.
![reiteration.png](../img/reiteration.png)

60
Logic/Soundness.md Normal file
View file

@ -0,0 +1,60 @@
---
tags:
- Logic
- propositional-logic
---
### Soundness
Recall that in the definition of [deductive validity](Validity%20and%20entailment.md#validity) we do not say: an argument is valid iff the premises *are true* and the conclusion *is true*. We say *if it is possible for the premises to be true*. This is important: we are not interested in the actual truth of the premises or the conclusion.
#### Demonstration
Therefore this argument is valid:
````
(P1) Oranges are the same colour as bananas.
(P2) Bananas are yellow.
____________________________________________
(C) Oranges are yellow.
````
Of course oranges are not yellow but *were* (P1) true, then given (P2), the conclusion must also be true.
This argument is also valid:
````
(P1) Oranges are the same colour as carrots.
(P2) Carrots are orange.
____________________________________________
(C) Oranges are orange in colour.
````
The difference here is that the premises happen to be true and, given that the argument is valid, the conclusion must also be true. What we have defined here is **soundness**: the argument is said to be sound as well as valid. This is an additional and stronger criterion of evaluation.
>
> An argument is sound if and only if it is deductively valid and all its premises are true.
We must not forget that truth alone is not the sole condition for soundness. We can have arguments whose conclusion and premises are all true without the argument being sound:
````
(P1) London is the capital of the United Kingdom
(P2) The capital of the United Kingdom is in the southern part of the United Kingdom.
(P3) Cambridge is not the capital of the United Kingdom
____________________________________________
(C) London is south of Cambridge
````
All the sentences here are true but the argument is not deductively valid: the truth of the premises does not guarantee the truth of the conclusion, since nothing in the premises rules out Cambridge also being in the southern part of the United Kingdom.
We can also have arguments which are valid but which are not sound:
````
(P1) Vitamin C prevents colds.
(P2) Vitamin C does not prevent colds.
____________________________________________
(C) Vitamin C is harmless
````
This argument is valid because we cannot consistently assert the premises and deny the conclusion. In either case, the conclusion can be said to follow from the premises. The problem is that we cannot consistently assert both premises: it is not possible for both sentences to be true at the same time.

View file

@ -0,0 +1,92 @@
---
tags:
- Logic
- propositional-logic
- proofs
---
## General strategy
* Break complex propositions into simpler sentences by using the elimination rules
* Recombine simple propositions into complex propositions using the introduction rules.
## Goal analysis
The approach above describes the general form of a proof but of course it will not always work and there will be cases where the route to the desired derivation is more circuitous. In these instances it is best to combine this general top-level strategy with goal analysis.
Goal analysis is a [recursive](../Algorithms%20&%20Data%20Structures/Recursion.md) strategy which proceeds by using a 'goal' proposition to guide the construction of intermediary derivations.
Assume that we want to show that an argument is [valid](Validity%20and%20entailment.md#validity). Then our ultimate goal is to derive the conclusion from the premises we are given. We first ask ourselves: *which propositions if we could derive them, would allow us to easily derive the conclusion*? (For example, these propositions might be two simple propositions that when combined with [Conjunction Introduction](Conjunction%20Introduction.md) give us the conclusion.) Deriving these propositions then becomes the new intermediate goal.
If arriving at these propositions is not trivial, we then ask ourselves the question again: *which propositions would permit us to derive the intermediary propositions we need*? You keep working back in this manner until you reach a base level. Then it is just a matter of working upwards from each set of derived intermediary propositions until you reach the ultimate goal.
### Demonstration
Let's say we want to prove $(L \lor A) \& D$ from the propositions $\sim N$ and $(\sim N \supset L) \& (D \equiv \sim N)$.
First, we consider what is the easiest possible way of achieving the proposition $(L \lor A) \& D$. Clearly it is to separately derive each conjunct ($L \lor A$ and $D$) and then combine them with [Conjunction Introduction](Conjunction%20Introduction.md). This provides us with our first goal: to derive each of the separate conjuncts.
Let's start with $D$: where does it occur in the assumptions? It occurs in the compound $(\sim N \supset L) \& (D \equiv \sim N)$, but only in the second conjunct. We can extract that conjunct simply by applying [Conjunction Elimination](Conjunction%20Elimination.md).
So far we have:
![step1.png](../img/step1.png)
Now we just need to get $D$ from the proposition at line 3. This is easy since we already have access to the consequent of the biconditional at line 1. Therefore we can apply [Biconditional Elimination](Biconditional%20Elimination.md) at line 3 to get $D$. We are now halfway there:
![step2.png](../img/step2.png)
Next we need to turn our attention to deriving $L \lor A$. How can we obtain $L$ ? Well it is contained within the first conjunct of the assumption on line 2. Again, we can get this through the application of [Conjunction Elimination](Conjunction%20Elimination.md).
Now, how do we get $L$ from $(\sim N \supset L)$? Well, we already have the antecedent $\sim N$ as an assumption on the first line, so we can use [Conditional Elimination](Conditional%20Elimination.md) to derive $L$. These two steps give us:
![step3.png](../img/step3.png)
Now we need to get from $L$ to $L \lor A$. This is really straightforward because by using [Disjunction Introduction](Disjunction%20Introduction.md) we can get from any sentence to a disjunction. Finally, having assembled all the constituent parts of the conjunction that is the conclusion, we can combine them with [Conjunction Introduction](Conjunction%20Introduction.md) as we had planned at the outset.
![step4.png](../img/step4.png)
### A further example
We will seek to prove the following:
$$
\{ \sim L \equiv [X \& (\sim S \lor B)], (E \& C) \supset \sim L, (E \& R) \& C \} \vdash X \& (\sim S \lor B)
$$
The requirements here could easily mislead us. We see that the target proposition is a conjunction so we might think that the best strategy is to seek to derive each conjunct and then combine them via [Conjunction Introduction](Conjunction%20Introduction.md).
Actually, if we look more closely, there is a better approach. The target proposition is contained in the first premise as the consequent to the biconditional ($\sim L \equiv [X \& (\sim S \lor B)]$). A better approach is therefore to seek to derive the antecedent ($\sim L$) and then use [Biconditional Elimination](Biconditional%20Elimination.md) to extract the target sentence which is the consequent.
![proof.png](../img/proof.png)
## Proving theorems
When we are proving [theorems](Theorems%20and%20empty%20sets.md) we do not have a set of assumptions to work from when constructing the proof. We must derive the target sentence from the 'empty set': we start with nothing other than the target sentence itself. It is therefore like a process of reverse engineering.
### Demonstration
*Prove* $\vdash (U \& Y) \supset [L \supset (U \& L)]$
Our strategy here is to identify the main connective in the proposition we want to derive (the [material conditional](Truth-functional%20connectives.md#material-conditional-a-k-a-implication)). We then assume the antecedent and attempt to derive the consequent from it.
![proofs-drawio-Page-5.drawio 4.png](../img/proofs-drawio-Page-5.drawio%204.png)
## A complex theorem proof
*Prove* $\vdash (\sim A \lor \sim B) \equiv \sim(A \& B)$
![dsfdsfsdfwe.png](../img/dsfdsfsdfwe.png)
### Walkthrough
**Lines 1-12**
* Our auxiliary goal is to prove $(\sim A \lor \sim B) \supset \sim(A \& B)$.
* Our starting assumption is a disjunction. Thus we can apply [Disjunction Elimination](Disjunction%20Elimination.md) to show that our goal sentence $\sim(A \& B)$ follows from each of the disjuncts ($\sim A$ and $\sim B$) in dedicated subproofs. If we can do this, we have the right to derive $\sim(A \& B)$.
* In both cases ($\sim A \vdash \sim(A \& B)$ and $\sim B \vdash \sim(A \& B)$) we require another subproof to reach the target as there is no easy path available. So we assume $A \& B$, derive a contradiction, and thereby negate it as $\sim(A \& B)$.
* Having done this, we can discharge the [Disjunction Elimination](Disjunction%20Elimination.md) subproofs and derive $\sim(A \& B)$ from $\sim A \lor \sim B$.
**Lines 13-26**
* Our auxiliary goal is to prove $\sim(A \& B) \supset (\sim A \lor \sim B)$. This will require a different approach to the above because we are not working from a disjunction anymore; we have a negated conjunction.
* We will do this by assuming the negation of what we want to prove ($\sim (\sim A \lor \sim B)$) and then applying [Negation Elimination](Negation%20Elimination.md) to get $\sim A \lor \sim B$.
* This requires us to derive a contradiction. We get this on lines 23 and 24. This requires, as previous steps, two subproofs that use [Negation Elimination](Negation%20Elimination.md) to release $A$ and $B$.

21
Logic/Syllogism.md Normal file
View file

@ -0,0 +1,21 @@
---
tags:
- Logic
- propositional-logic
---
In order to make assertions about the relative [consistency](Consistency.md) or inconsistency of a set of propositions we advance arguments. Consider everyday life: if we are having an argument with someone, we believe that they are wrong. A more logical way to say this is that we believe that their beliefs are inconsistent. In order to change their viewpoint or point out why they are wrong we advance an argument intended to show that belief A conflicts with belief B. Or if C is true, then you cannot believe that D.
In formal terms **an argument is a set of sentences comprising one or more premises and a conclusion. The conclusion is taken to be supported by the premises.**
>
> The terms **argument** and **syllogism** are used interchangeably in logic to describe the above feature of a set of propositions.
### Demonstration
````
(P1) All men are mortal.
(P2) Socrates is a man.
_____________________
(C) Socrates is mortal
````

View file

@ -0,0 +1,65 @@
---
tags:
- Logic
- propositional-logic
---
## Syntax of formal languages versus semantics
>
> The syntactical study of a language is the study of the expressions of the language and the relations among them *without regard* to the possible interpretations or 'meaning' of these expressions.
Syntax is about the order and placement of propositions relative to connectives and what constitutes a well-formed expression in these terms. Semantics is about what the connectives mean, in other words: truth-functions and truth-values and not just placement and order.
## Formal specification of the syntax of the language of Sentential Logic
### Vocabulary
Sentences in SL are capitalised Roman letters (non-bold) with or without natural number subscripts. We may call these sentence letters. For example:
````plain
P, Q, R...P1, Q1, R1...
````
The connectives of SL are the five truth-functional connectives:
````
~, &, v, ⊃, ≡
````
The punctuation marks of SL consist in the left and right parentheses:
````
( )
````
### Grammar
1. Every sentence letter is a sentence.
1. If **P** is a sentence then **~P** is a sentence.
1. If **P** and **Q** are sentences, then **(P & Q)** is a sentence
1. If **P** and **Q** are sentences, then **(P v Q)** is a sentence
1. If **P** and **Q** are sentences, then **(P ⊃ Q)** is a sentence
1. If **P** and **Q** are sentences, then **(P ≡ Q)** is a sentence
1. Nothing is a sentence unless it can be formed by repeated application of clauses 1-6
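Because clause 7 closes the grammar, well-formedness is decidable by structural recursion. A rough sketch, assuming a hypothetical nested-tuple representation of SL sentences (nothing in SL itself mandates this encoding):
````python
CONNECTIVES = {"&", "v", "⊃", "≡"}

def is_sentence(expr):
    """Mirror grammar clauses 1-7: letters, negations, binary compounds."""
    if isinstance(expr, str):                        # clause 1: sentence letters
        return expr[:1].isupper() and (len(expr) == 1 or expr[1:].isdigit())
    if isinstance(expr, tuple) and len(expr) == 2:   # clause 2: ('~', P)
        return expr[0] == "~" and is_sentence(expr[1])
    if isinstance(expr, tuple) and len(expr) == 3:   # clauses 3-6: (P, '&', Q) etc.
        left, connective, right = expr
        return connective in CONNECTIVES and is_sentence(left) and is_sentence(right)
    return False                                     # clause 7: nothing else

print(is_sentence(("P", "&", ("~", "Q1"))))  # True: (P & ~Q1)
print(is_sentence(("P", "&", "Q", "R")))     # False: connectives are binary
````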
### Additional syntactic concepts
We also distinguish:
* the **main connective**
* **immediate sentential components**
* **sentential components**
* **atomic components**
These definitions provide a formal specification of the concepts of atomic and molecular sentences *introduced earlier*.
1. If **P** is an atomic sentence, **P** contains no connectives and hence does not have a main connective. **P** has no immediate sentential components.
1. If **P** is of the form **~Q** where **Q** is a sentence, then the main connective of **P** is the tilde that occurs before **Q** and **Q** is the immediate sentential component of **P**.
1. If P is of the form:
1. **Q & R**
1. **Q v R**
1. **Q ⊃ R**
1. **Q ≡ R**
where **Q** and **R** are sentences, then the main connective of **P** is the connective that occurs between **Q** and **R** and **Q** and **R** are the immediate sentential components of **P**.

View file

@ -0,0 +1,26 @@
---
tags:
- Logic
- propositional-logic
- proofs
- theorems-axioms-laws
---
We know that when we construct a [derivation](Formal%20proofs%20in%20propositional%20logic.md#constructing-proofs) we start from a set of assumptions and then attempt to reach a proposition that is a consequence of the starting assumptions. However it does not always have to be the case that the starting set contains members. The set can in fact be empty.
*Demonstration*
![proofs-drawio-Page-5.drawio 2.png](../img/proofs-drawio-Page-5.drawio%202.png)
We see in this example that there is no starting set and thus no primary assumptions. Instead we start with nothing other than the proposition we wish to derive. The proposition is effectively derived from itself. In these scenarios we say that we are constructing a derivation from an **empty set**.
Propositions which possess this property are called theorems:
>
> A proposition $P$ or a system of propositions in propositional logic is a theorem in a system of derivation for that logic if $P$ is derivable from the empty set.
We represent a theorem as:
$$
\vdash P
$$
(There is no preceding $\Gamma$ as the set is empty.)

View file

@ -0,0 +1,290 @@
---
tags:
- Logic
- propositional-logic
- truth-tables
---
## Truth values: simple and compound sentences, symbolic representation of each
## Truth-functional connectives
Sentences generated from other (simple) sentences by means of sentential connectives are [compound sentences](Atomic%20and%20molecular%20sentences.md).
We know that logically determinate sentences express a truth value. When simple sentences are joined with a connective to make a compound sentence they also have a truth value. This is determined by the nature of the connective and the truth values of the constituent sentences. We therefore call connectives of this nature truth-*functional* connectives since the **truth value of the compound is a function of the truth values of its components**.
>
> A sentential connective is used truth-functionally if and only if it is used to generate a compound sentence from one or more sentences in such a way that the truth value of the generated compound is wholly determined by the truth-values of those one or more sentences from which the compound is generated, no matter what the truth values may be.
Each truth-functional connective has a characteristic **truth-table**. This discloses the conditions under which the constituent sentences have a given truth value when combined with one or more connectives.
We shall now review each of the truth-functional connectives in detail.
### Conjunction
Conjunction is equivalent to the word AND in natural language. We use `&` as the symbol for this connective.
A molecular sentence joining two conjuncts P and Q is true iff both conjuncts are true and false otherwise:
````
P Q P & Q
T T T
T F F
F T F
F F F
````
### Disjunction
Disjunction is equivalent to the word OR in natural language. We use `v` as the symbol of this connective.
A molecular sentence joining two disjuncts P and Q is true if either disjunct is true or if both disjuncts are true and false otherwise. This corresponds to the inclusive sense of OR in natural language.
````
P Q   P v Q
T T T
T F T
F T T
F F F
````
### Negation
In contrast to the two previous connectives, negation is a unary connective, not a binary connective. We use `~` to symbolise negation. It does not join two or more sentences; it applies to one sentence as a whole. This can be a simple sentence or a complex sentence. It simply negates the truth-value of whichever sentence it is applied to. Hence applied to P, ~P is true if P is false and false if P is true.
````
P ~ P
T F
F T
````
### Material conditional (a.k.a implication)
The material conditional approximates the meaning expressed in natural language when we say *if* such-and-such is the case *then* such-and-such will be the case. Another way of expressing the sense of the material conditional is to say that **P** implies **Q**.
````
If it rains today the pavement will be wet.
````
We call the proposition that expresses the 'if' sentence the **antecedent** and the proposition that expresses the 'then' statement the **consequent**. The symbol we use to represent the material conditional is `⊃` although you may see `→` used as well.
The truth table is as follows:
````
P Q P ⊃ Q
T T T
T F F
F T T
F F T
````
The material conditional is perhaps the least intuitive of the logical connectives. The first case (TT) closely matches what we expect the connective to mean: it has rained so the pavement is wet. The antecedent is true and therefore the consequent is true. This chimes with what we tend to mean by 'if' in natural language. In the second case (TF) it also makes sense: the complex sentence is false because it rained and the pavement wasn't wet, which falsifies the claim. The final case (FF) is also straightforward: it didn't rain, therefore the pavement wasn't wet, and the overall assertion that rain implies wet pavements is retained.
FT is less intuitive:
````
It did not rain today. The pavement was wet.
````
To some degree one just has to take these statements as axioms, whether or not they have intuitive sense is a secondary, more philosophical question. The semantic issues arise because we tacitly assume the material conditional to be a causal connective: there is something about the nature of **P** that *engenders* or *brings about* **Q** but causality is not a logical concern.
If we instead just focus on the simple sentences that comprise the compound, it is more plausible. In the FT case, the fact that it didn't rain yet the pavement was wet does not stop the pavement being wet when it rains. The fact that I can pour a beer on the pavement thereby making it wet doesn't stop or render false the idea that the rain can also make the pavement wet. The same explanation covers the FF case: that it hasn't rained and the pavement is not wet does not contradict the assertion that when it rains the pavement will be wet.
Things are elucidated when we look at an equivalent expression of P ⊃ Q, ~P v Q:
````
P Q   ~ P v Q
T T T
T F F
F T T
F F T
````
A disjunction is true whenever either disjunct is true, so the expression is false only when both disjuncts are false. That happens only in the TF row, exactly matching the material conditional.
### Material biconditional (a.k.a equivalence)
The material biconditional equates to the English expression 'if and only if'; as a conditional connective it therefore avoids some of the perplexity aroused by the material conditional. In this scenario both sides must have the same truth value for the overall expression to be true; if they differ, the complex sentence is false. Other ways of expressing the semantics of this connective are to say that each sentence implies the other or that **P** and **Q** are equivalent.
````
If and only if James studies every day he will pass the exam.
````
There is no possibility in which James passes the exam and has not studied every day. If he studies for three out of the seven days leading up to the exam he will not pass. Alternatively, there is no possibility that James studied every day yet failed the exam. The antecedent and consequent are locked, as indicated by the truth-table:
````
P Q P ≡ Q
T T T
T F F
F T F
F F T
````
The last condition (FF) perhaps requires some explanation: if he has not studied every day then he has not passed the exam. Both sides are false together, so the claim that he will pass iff he studies every day is rendered true.
## Combinations of truth-functional connectives
---
So far we have applied connectives to simple sentences. In so doing we generate complex sentences. However sentences and connectives are inherently generative: we can build more complex expressions from less complex parts, using more than one type of connective or several different connectives to make larger complex sentences and express more detailed logical conditions and statements about the world.
For example the sentence:
````
Socrates was either a philosopher or a drinker but he wasn't a politician.
````
Can be expressed with greater logical clarity as:
````
Socrates was a philosopher or Socrates was a drinker and Socrates was not a politician.
````
Using P for 'Socrates was a philosopher', Q for 'Socrates was a drinker' and R for 'Socrates was a politician' we can express this symbolically as:
````
(P v Q) & ~R
````
Which has the truth table:
````
P Q R   ( P v Q ) & ~ R
T T T F
T T F T
T F T F
T F F T
F T T F
F T F T
F F T F
F F F F
````
Let's walk through each case where S stands for the overall sentence.
1. S is false if Socrates was a philosopher, a drinker and a politician.
1. **S is true if Socrates was a philosopher, a drinker but not a politician.**
1. S is false if Socrates was a philosopher, a politician but not a drinker.
1. **S is true if Socrates was a philosopher but not a drinker or politician.**
1. S is false if Socrates was not a philosopher but was a drinker and politician
1. **S is true if Socrates was not a philosopher or politician but was a drinker.**
1. S is false if Socrates was neither a philosopher nor a drinker but was a politician.
1. S is false if Socrates was neither a philosopher, drinker, or politician.
If we look just at the true cases for simplicity, it becomes obvious that the truth value of the whole is a function of the truth-values of the parts.
At the highest level of generality the sentence is a conjunction with two conjuncts: `P v Q` and `~R` . Therefore, for the sentence to be true both conjuncts must be true. The first conjunct is true just if one of the subordinate disjuncts is true (Socrates is either a philosopher, a drinker, or both). The second conjunct is true just if Socrates is not a politician. Thus there is only one variation for the second conjunct (not being a politician) and three ways of making the first conjunct true (being a philosopher, being a drinker, or both), hence there are three cases where the overall sentence is true. These can be confirmed mechanically, as in the sketch below.
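A one-screen Python sketch that prints exactly the assignments on which the sentence is true:
````python
from itertools import product

# Print the assignments on which (P v Q) & ~R is true.
for P, Q, R in product([True, False], repeat=3):
    if (P or Q) and not R:
        print(f"P={P}, Q={Q}, R={R}")
# P=True Q=True R=False; P=True Q=False R=False; P=False Q=True R=False
````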
### Logical equivalence
Once we start working with complex sentences with more than one truth-functional connective it becomes clear that the same sentence expressed in natural language can be expressed formally more than one way and thus that in logical terms, both formal expressions are equivalent. We can prove this equivalence by comparing truth tables.
For example the sentence:
````
I am going to the shops and the gym.
````
Can obviously be expressed formally as:
````
P & Q
````
But also as:
````
~ (~P v ~Q)
````
And we know this because the truth-tables are identical:
````
P Q P & Q
T T T
T F F
F T F
F F F
````
````
P Q   ~ ( ~ P v ~ Q )
T T T
T F F
F T F
F F F
````
Another example of equivalent expressions:
````
Neither Watson nor Sherlock Holmes is fond of criminals.
````
The first formalisation:
````
~P & ~Q
````
Equivalent to:
````
~(P v Q)
````
Again the truth-tables for verification:
````
P Q ~ P & ~ Q
T T F
T F F
F T F
F F T
````
`~(P v Q)`
````
P Q   ~ ( P v Q )
T T F
T F F
F T F
F F T
````
### Important equivalences
The example above is a key equivalence that you will encounter a lot especially when deriving formal proofs. It goes together with another one. We have noted them both below for future reference:
````
~P & ~Q = ~(P v Q)
````
````
~P v ~Q = ~(P & Q)
````
## Enforcing binary connectives through bracketing
---
If we had a sentence of the form
````
Socrates is a man, is mortal and is a philosopher.
````
We could not write this as:
````
P & Q & R
````
This would not be a well-formed sentence because a binary truth-functional connective can only connect two sentences at a time. It would not be possible to generate truth conditions for this sentence in its current form. Instead we introduce brackets to enforce a binary grouping of simple sentences. In this instance, the placement of the brackets does not affect the accurate interpretation of the truth conditions of the compound, so the following two formalisations are equivalent:
````
(P & Q) & R
P & (Q & R)
````

113
Logic/Truth-tables.md Normal file
View file

@ -0,0 +1,113 @@
---
tags:
- Logic
- propositional-logic
- recursion
- truth-tables
---
We are already familiar with truth-tables from the previous entry on the *truth-functional connectives* and the relationship between sentences, connectives and the overall truth-value of a sentence. Here we will look in further depth at how to build truth-tables and on their mathematical relation to binary truth-values. We will also look at examples of complex truth-tables for large compound expressions and the systematic steps we follow to derive the truth conditions of compound sentences from their simple constituents.
## Formulae for constructing truth-tables
For any truth-table, the number of rows it will contain is equal to $2^n$ where:
* $n$ stands for the number of sentences
* $2$ is the total number of possible truth-values that a sentence may have: true or false.
When we count the number of sentences, we mean atomic sentences. And we only count each sentence once. Hence for a compound sentence of the form $(\sim B \supset C) \& (A \equiv B)$, $B$ occurs twice but there are only three sentences: $A$, $B$, and $C$.
Thus for the sentence $P \& Q$, we have two sentences so $n$ is 2, which gives 4 rows ($2 \times 2$):
````
P Q P & Q
T T T
T F F
F T F
F F F
````
For the sentence $(P \lor Q) \& R$ we have three sentences so $n$ is 3, which gives 8 rows ($2 \times 2 \times 2$):
````
P Q R   ( P v Q ) & R
T T T T
T T F F
T F T T
T F F F
F T T T
F T F F
F F T F
F F F F
````
For the single sentence $P$ we have one sentence so $n$ is 1, which gives 2 rows ($2^1$):
````
P P
T T
F F
````
This tells us how many rows the truth-table should have but it doesn't tell us what each row should consist in. In other words: how many Ts and Fs it should contain. This is fine with simple truth-tables since we can just alternate each value but for truth-tables with three sentences and more it is easy to make mistakes.
To simplify this and ensure that we are including the right number of possible truth-values we can extend the formula to $2^{n-i}$. For column $i$ (counting from the left), this formula tells us how many elements each alternating group of Ts and Fs in that column should contain.
We can already see that there is a pattern at work by looking at the columns of the truth tables above. If we take the sentence $(P \lor Q) \& R$ we can see that for each sentence:
* $P$ consists in two sets of ${\textsf{T,T,T,T}}$ and ${\textsf{F,F,F,F}}$ with **four** elements per set
* $Q$ consists in four sets of ${\textsf{T,T}}$ , ${\textsf{F,F}}$, ${\textsf{T,T}}$ , ${\textsf{F,F}}$ with **two** elements per set
* $R$ consists in eight sets of ${\textsf{T}}$, ${\textsf{F}}$, ${\textsf{T}}$, ${\textsf{F}}$, ${\textsf{T}}$, ${\textsf{F}}$, ${\textsf{T}}$, ${\textsf{F}}$ with **one** element per set.
If we work through the formula for each column we see that it returns 4, 2, 1:
$$
2^{n-1} = 2^{3-1} = 2^2 = 4
$$
$$
2^{n-2} = 2^{3-2} = 2^1 = 2
$$
$$
2^{n-3} = 2^{3-3} = 2^0 = 1
$$
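The grouping rule translates directly into a procedure for generating the columns. A minimal Python sketch (the `truth_columns` function is illustrative, assuming columns are numbered $i = 1 \dots n$ from the left):
````python
def truth_columns(n):
    """Build the n reference columns of a 2^n-row truth-table using
    alternating blocks of 2^(n-i) Ts and Fs for column i."""
    rows = 2 ** n
    columns = []
    for i in range(1, n + 1):
        group = 2 ** (n - i)            # block size for this column
        column = []
        while len(column) < rows:
            column += ["T"] * group + ["F"] * group
        columns.append(column)
    return columns

for row in zip(*truth_columns(3)):      # the eight rows for three letters
    print(" ".join(row))                # T T T, T T F, T F T, ... F F F
````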
## Truth-table concepts
### Recursion
When we move to complex truth-tables with more than one connective we realise that truth-tables are recursive. The truth-tables for the truth-functional connectives provide all that we need to determine the truth-values of complex sentences:
>
> The core truth-tables tell us how to determine the truth-value of a molecular sentence given the truth-values of its [immediate sentential components](Syntax%20of%20sentential%20logic.md). And if the immediate sentential components of a molecular sentence are also molecular, we can use the information in the characteristic truth-tables to determine how the truth-value of each immediate component depends on the truth-values of *its* components and so on.
### Truth-value assignment
>
> A truth-value assignment is an assignment of truth-values (either T or F) to the atomic sentences of SL.
When working on complex truth-tables, we use the truth-value assignments of the atomic sentences as the values that we feed into the larger expressions at a higher level of sentential abstraction.
### Partial assignment
We talk about partial assignments of truth-values when we look at one specific row of the truth-table, independently of the others. The total set of partial assignments comprises all possible truth assignments for the given sentence.
## Working through complex truth-tables
The truth-table below shows all truth-value assignments for the sentence $(\sim B \supset C) \& (A \equiv B)$:
````
A B C ( ~ B ⊃ C ) & ( A ≡ B )
T T T F T T T T T T T
T T F F T T F T T T T
T F T T F T T F T F F
T F F T F F F F T F F
F T T F T T T F F F T
F T F F T T F F F F T
F F T T F T T T F T F
F F F T F F F F F T F
````
As with algebra we work outwards from each set of brackets. The sequence for manually arriving at the above table would be roughly as follows:
1. For each sentence letter, copy its truth-value into each row.
1. Identify the connectives of the subsentences and the main connective of the overall sentence.
1. Work out the truth-values for the smallest subsentences first, beginning with any negations of sentence letters.
1. Feed the truth-values of the components into progressively larger subsentences until you reach the main connective, whose column gives the truth-value of the whole sentence on each assignment. A sketch of this recursive procedure follows.
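The recursive character of this procedure can be made explicit in code. A rough illustration, assuming a hypothetical nested-tuple encoding of sentences (the encoding and `evaluate` helper are assumptions, not part of SL):
````python
from itertools import product

def evaluate(expr, assignment):
    """Apply the characteristic truth-tables recursively, working outwards
    from the sentence letters exactly as in the manual procedure above."""
    if isinstance(expr, str):                        # a sentence letter
        return assignment[expr]
    if expr[0] == "~":                               # negation
        return not evaluate(expr[1], assignment)
    left, connective, right = expr                   # a binary compound
    l, r = evaluate(left, assignment), evaluate(right, assignment)
    return {"&": l and r, "v": l or r, "⊃": (not l) or r, "≡": l == r}[connective]

# (~B ⊃ C) & (A ≡ B), the sentence tabulated above:
sentence = ((("~", "B"), "⊃", "C"), "&", ("A", "≡", "B"))
for values in product([True, False], repeat=3):
    row = dict(zip("ABC", values))
    print(row, evaluate(sentence, row))
````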

242
Logic/Truth-trees.md Normal file
View file

@ -0,0 +1,242 @@
---
tags:
- Logic
- propositional-logic
---
## Rationale
Like [truth-tables](Truth-tables.md), truth-trees are a means of graphically representing the logical relationships that may obtain between propositions. Truth-trees and truth-tables complement each other and which method you choose depends on which logical property you are seeking to derive.
Whilst truth-tables have the benefit of being exhaustive - every possible truth assignment is factored into the representation - their complexity grows exponentially with each additional proposition they contain. This can make manually constructing truth tables long-winded and prone to mistakes.
Truth-trees are less onerous but they lack the exhaustive scope of a truth-table. They are more targeted and are best used for demonstrating *that something is the case* rather than *all the possible states that could be the case*. For example, a truth tree will tell us that a set *S is logically consistent* whereas a truth-table will tell us that *S is consistent on the following three assignments.*
## Logical consistency
Recall that a set of propositions is logically or truth-functionally [consistent](Consistency.md) just if there is at least one assignment of truth conditions which results in all members of the set being true. To identify consistency for a set of three propositions via the truth table approach we would need to construct a truth table with $2^3$ (8) rows. Assume that this set is consistent on one partial assignment only. This means that 87.5% of our rows are redundant, they are not required to prove the consistency of the set. However we can only know this and we can only be sure of consistency once we have gone through the process of generating an assignment for each row.
Truth trees allow us to reduce the amount of work required and go straight to the assignment that proves consistency, disregarding the rest which are irrelevant.
## Truth tree structure and key terms
**When using a truth tree to derive logical consistency, the goal is to determine whether there is a truth-value assignment on which all of the sentences of a set are true. If the set is consistent we should be able to derive a partial assignment from the tree that demonstrates consistency.**
Each truth tree begins with a series of sentences one on top of the other in a column. We call the sentences that comprise the initial column **set members**. In constructing the tree, we work downwards from the initial column decomposing set members into their atomic constituents. We call the results of decomposition **literals**: a literal is either an atomic sentence or the negation of an atomic sentence. If one of the set members is already a literal, there is no need to decompose it; it can remain as it is.
Once every set member has been decomposed the truth tree is complete. It can then be interpreted in order to derive logical consistency or inconsistency. If the set is consistent, we are able to derive the partial assignment(s) that demonstrate consistency.
The rules for decomposing compound sentences match the truth conditions of the logical connectives. There are rules for every connective and for the negation of every connective; however, in terms of their tree shape they all correspond to either a conjunction or a disjunction. Disjunctive decomposition results in new branches being formed off the main column (or trunk). Conjunctive decomposition is non-branching, which means the decomposed constituents are placed within the trunk of whichever tree or branch they are decomposed within.
As we construct the tree we list each line in the left-hand margin and the decomposition rule in the right-hand margin. When we apply a decomposition rule we must cite the lines to which it applies.
### Closed and open branches
Any branch on which an atomic sentence ($P$) and the negation of that sentence ($\sim P$) both occur is a **closed branch**. A branch that is not closed is an **open branch**. No partial assignment is recoverable from a closed branch. An open branch allows truth to flow up to the original set members whereas a closed branch blocks this passage.
### Completed open branch
A completed open branch occurs when we have an open branch that has been fully decomposed: the branch is open and all molecular sentences have been ticked off such that it contains only literals.
### Completed tree
A tree where all its branches are either completed open branches or closed branches.
### Closed tree
A tree where all the branches are closed.
### Open tree
A tree with at least one completed open branch.
## Deriving consistency
Using the definitions above, we can now define truth-functional consistency and inconsistency in terms of truth trees:
>
> A finite set of sentences $\Gamma$ is truth-functionally inconsistent if $\Gamma$ has a closed tree
>
> A finite set of sentences $\Gamma$ is truth-functionally consistent if $\Gamma$ has an open tree
## Examples
### First example
The following is a truth tree for the set $\{P \lor Q, \sim P\}$:
![basic-open-tree 1.svg](../img/basic-open-tree%201.svg)
### Interpretation
* We decompose the disjunction at line 1 on line 3. We tick off the compound sentence to indicate that it is now decomposed and no longer under consideration.
* Both $P$ and its negation occur on a single branch ($\sim P$ at line 2 and $P$ at line 3). This makes it a closed branch. We indicate this by the X beneath the branch that is closed, citing the source of the closure by line number.
* The rightward branch is a completed open branch given the decomposition at 3 and the lack of negation of Q. Overall this makes the tree an open tree.
As the set gives us an open tree, it must be truth-functionally consistent. If this is the case we should be able to determine the partial assignment on which each set member is true. The completed open branch contains the literals $\sim P$ and $Q$, so the partial assignment that demonstrates consistency is $\sim P, Q$: $P$ false and $Q$ true. This is confirmed by the truth-table:
````
P Q   P ∨ Q   ~ P
T T     T      F
T F     T      F
F T     T      T   *
F F     F      T
````
**When an open tree also contains a closed branch, only the literals on the completed open branch(es) contribute to the resultant assignment; the literals that close a branch are discarded, since no partial assignment is recoverable from a closed branch.**
Invoking the truth-table highlights the differences between the two techniques. The values that are derived when we interpret a truth tree are not the truth-functions of the set members but the truth-values for when they are simultaneously true. With truth-tables in contrast, we are deriving the truth functions for every possible truth-value assignment. In other words the values derived from a truth tree correspond to the left hand side of the truth table not the right hand side.
### Second example
The following is a truth tree for the set $\{A & \sim B, C, \sim A \lor \sim C\}$.
![basic-closed-tree 1.svg](../img/basic-closed-tree%201.svg)
### Interpretation
* The two molecular set members are decomposed. The disjunction (line 3) results in a branching tree. The conjunction (line 1) results in the continuation of the trunk.
* Both branches are completed making it a completed tree. As each branch is closed this is a closed tree.
As this is a closed tree, the set is not truth-functionally consistent. This is confirmed by the truth table where there is no partial assignment where all set members are true.
````
A B C   A & ~B   C   ~A ∨ ~C
T T T     F      T      F
T T F     F      F      T
T F T     T      T      F
T F F     T      F      T
F T T     F      T      T
F T F     F      F      T
F F T     F      T      T
F F F     F      F      T
````
## Truth tree decomposition rules
---
So far we have encountered the decomposition rules for conjunction (`&D`) and disjunction (`vD`). We will now list all the rules. We will see that for each rule the decomposition either branches or does not branch, which is to say that each rule has the shape of either a conjunction or a disjunction (though the permitted values of the specific disjuncts/conjuncts obviously differ in each case). Moreover, there is a parallel rule for the decomposition of the negation of each of the main connectives, and these rules rely on logical equivalences.
### Negated negation decomposition: `~~D`
![negated-negation-decomposition-rule 2.svg](../img/negated-negation-decomposition-rule%202.svg)
Truth passes only if $P$ is true.
### Conjunction decomposition: `&D`
![conjunction-decomposition-rule.svg](../img/conjunction-decomposition-rule.svg)
Truth passes only if $P$ and $Q$ are both true.
### Negated Conjunction decomposition: `~&D`
![negated-conjunction-decomposition-rule.svg](../img/negated-conjunction-decomposition-rule.svg)
Truth passes if either $\sim P$ or $\sim Q$ is true. This rule is a consequence of the equivalence between $\sim (P & Q)$ and $\sim P \lor \sim Q$, the first of De Morgan's Laws.
### Disjunction decomposition: `vD`
![disjunction-decomposition-rule.svg](../img/disjunction-decomposition-rule.svg)
Truth passes if either $P$ or $Q$ is true.
### Negated Disjunction decomposition: `~vD`
![negated-disjunction-decomposition-rule.svg](../img/negated-disjunction-decomposition-rule.svg)
Truth passes if both $P$ and $Q$ are false. This rule is a consequence of the equivalence between $\sim (P \lor Q)$ and $\sim P & \sim Q$, the second of De Morgan's Laws.
### Conditional decomposition: `⊃D`
![conditional-decomposition-rule.svg](../img/conditional-decomposition-rule.svg)
Truth passes if either $\sim P$ or $Q$ is true. This rule is a consequence of the equivalence between $P \supset Q$ and $\sim P \lor Q$; the branch therefore has the shape of a disjunction with $\sim P$, $Q$ as its disjuncts.
### Negated Conditional decomposition: `~⊃D`
Truth passes if both $P$ and $\sim Q$ are true. This is a consequence of the equivalence between $\sim (P \supset Q)$ and $P & \sim Q$.
![negated-conditional-decomposition-rule.svg](../img/negated-conditional-decomposition-rule.svg)
### Biconditional decomposition: `≡D`
![biconditional-decomposition-rule.drawio(1).svg](../img/biconditional-decomposition-rule.drawio%281%29.svg)
Truth passes if both $P$ and $Q$ are true or both $\sim P$ and $\sim Q$ are true. This is an interesting rule because it combines the disjunction and conjunction tree shapes.
### Negated biconditional decomposition: `~≡D`
![negated-biconditional-decomposition-rule.drawio.svg](../img/negated-biconditional-decomposition-rule.drawio.svg)
Truth passes if both $P$ and $\sim Q$ are true or both $\sim P$ and $Q$ are true.
## Further examples and heuristics for complex truth trees
With truth-trees, regardless of the order in which you decompose the set members, the conclusion should always be the same. This said, some ways of constructing a tree are more efficient than others. You want to find the route that demonstrates consistency or inconsistency with the least amount of work. The following heuristic techniques, applied in order, facilitate this:
1. Decompose first those molecular sentences whose decomposition does not produce new branches: double negations and pure conjunctions.
1. Perform those decompositions that will rapidly generate closed branches.
1. If neither (1) or (2) is applicable, decompose **the most complex** sentence first.
Here are some examples of these rules applied:
![complex-tree.svg](../img/complex-tree.svg)
Observe that here we don't bother to decompose the sentence on line 1. This is because, having decomposed the sentences on lines 2 and 3, we have already arrived at a closed tree. It is unnecessary to go any further: if two sentences in the set are inconsistent with each other, adding another sentence is not going to change the overall verdict of inconsistency.
## Deriving properties other than logical consistency from truth trees
So far truth trees have been discussed purely in terms of logical consistency; however, they can be used to derive all the other key truth-functional properties of propositional logic. Given the foundational role of consistency in logic, these properties are expressible in terms of consistency, which is what makes them amenable to formulation in terms of truth trees.
### Logical falsity
For a given finite set $\Gamma$, $\Gamma$ is logically consistent just if all of its members can be true at once. Expressed in terms of truth trees, this is equivalent to an open tree. Contrariwise, $\Gamma$ is inconsistent if it is not possible for every member of the set to be true at once. This is the same as a tree where all of the branches are closed (i.e. a closed tree).
When we wish to assess [logical falsity](Logical%20truth%20and%20falsity.md#logical-falsity) we are not focused on sets, however; we are interested in a property of a single sentence. We can easily construe single sentences as unit sets: sets with a single member. With this in mind, and the above accounts of consistency and inconsistency, we are equipped to express logical falsity in terms of truth-trees with the following rule:
>
> A sentence $P$ is logically false if and only if the unit set $\{P\}$ has a closed tree
A logically false sentence cannot be true on any assignment. This is the same thing as an inconsistent set. Thus it will be represented in a truth tree as inconsistency which is disclosed via a closed tree.
![logical-falsity-tree.svg](../img/logical-falsity-tree.svg)
### Logical truth
For a sentence $P$ to be [logically true](Logical%20truth%20and%20falsity.md#logical-truth), there must be no possible assignment on which $P$ is false. We express this informally by saying *it is not possible to consistently deny $P$.* We know that in terms of truth trees an inconsistent set is a closed tree, therefore a sentence $P$ is logically true if the unit set $\{\sim P\}$ has a closed tree. This is to say: if the negation of $P$ is inconsistent.
>
> A sentence $P$ is logically true if and only if the unit set $\{\sim P\}$ has a closed tree
### Logical indeterminacy
[Indeterminacy](Indeterminacy.md) follows from the two definitions above; we do not require any additional apparatus. We recall that a sentence $P$ is logically indeterminate just if it is neither logically true nor logically false. Thus the truth tree test for an indeterminate sentence is straightforward:
>
> A sentence $P$ is logically indeterminate if and only if neither the unit set $\{P\}$ nor the unit set $\{\sim P\}$ has a closed tree
This follows because an open tree for $\{P\}$ means $P$ is not logically false, and an open tree for $\{\sim P\}$ means it is not logically true. If it is neither of these things, $P$ must be indeterminate.
### Logical equivalence
Recall that $P$ and $Q$ are [logically equivalent](Logical%20equivalence.md) just if there is no truth assignment on which one is true and the other is false. We know from the [material biconditional shorthand](Corresponding%20material%20and%20biconditional.md#corresponding-material-biconditional) that this state of affairs can be expressed as $P \equiv Q$, and that if this compound sentence is true on every assignment then both sentences are equivalent. But true on every assignment is another way of saying *logically true*, since there is no possibility of a false assignment. We already know what logical truth looks like as a truth tree: a closed tree for the negation of the sentence being tested. Therefore, to test the logical equivalence of two sentences we construct a truth tree for the negation of the two sentences joined by the biconditional (i.e. $\sim (P \equiv Q)$) and see if this results in a closed tree. If it does, the two sentences are logically equivalent.
>
> Sentences $P$ and $Q$ are truth-functionally equivalent if and only if the set $\{\sim (P \equiv Q)\}$ has a closed tree
![logical-equivalence-tree.svg](../img/logical-equivalence-tree.svg)
### Logical entailment and validity
Let's remind ourselves of the meaning of truth-functional [entailment](Validity%20and%20entailment.md#entailment) and [validity](Validity%20and%20entailment.md#validity) and the relation between the two. $\Gamma \vDash P$ holds if and only if there is no truth-assignment on which every member of $\Gamma$ is true and $P$ is false. Entailment is closely related to validity; it is really just a matter of emphasis: we say that the members of $\Gamma$ are the premises and $P$ is the conclusion, and that this is a valid argument if there is no assignment on which every member of $\Gamma$ is true and $P$ is false.
As with the previous properties, to express validity and entailment in terms of truth trees we need to express these concepts in the language of logical consistency. $\Gamma$ entails $P$ just if one cannot consistently assert $\Gamma$ whilst denying $P$. This is to say that the set $\Gamma \cup \{\sim P\}$ is inconsistent. So we just need a closed truth tree for $\Gamma \cup \{\sim P\}$ to demonstrate the entailment.
>
> A finite set of sentences $\Gamma$ truth-functionally entails a sentence $P$ if and only if the set $\Gamma \cup \{\sim P\}$ has a closed truth tree.
>
> An argument is truth-functionally valid if and only if the set consisting of the premises and the negation of the conclusion has a closed truth tree.
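As a worked sketch (following the margin conventions described earlier; the `SM` annotation for set members is an assumption of this layout), testing the inference from $\{P, P \supset Q\}$ to $Q$ means building a tree for the premises plus the negated conclusion:
````
1. P            SM
2. P ⊃ Q        SM
3. ~Q           SM (negated conclusion)
4. ~P     Q     2 ⊃D
    ×     ×
   1,4   3,4
````
Both branches close, so the set $\{P, P \supset Q, \sim Q\}$ is inconsistent and the inference is valid.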

View file

@ -0,0 +1,100 @@
---
tags:
- Logic
- propositional-logic
- validity
- entailment
---
## Validity
### Informal definition
In order to say whether an argument is 'good' or 'bad' we must have criteria of evaluation. In logic there are different criteria of evaluation:
* **Deductive validity**
An **argument is deductively valid if and only if it is not possible for the premises to be true and the conclusion false**. Linking to consistency: it is not possible to consistently assert all of the premises but deny the conclusion.
* **Inductive strength**
We do not say that inductive arguments have 'validity' because even when inductive premises are true, the conclusion may still be false. Therefore we speak of inductive 'strength' rather than 'validity'. An argument is inductively strong if and only if the conclusion is probably true given the premises.
#### Demonstration
The classic Socrates syllogism ('All men are mortal; Socrates is a man; therefore Socrates is mortal') is an example of deductive validity.
The following is an example of an argument that is inductively strong:
````
99% of deaf persons have no musical talent.
Beethoven was deaf.
___________________________________________
Beethoven had no musical talent.
````
The test for a strong inductive argument is not whether the conclusion is true, rather it concerns the evidence the premises provide in support of the conclusion.
>
> In propositional logic we are concerned solely with deductive validity or invalidity.
### Formal definition
>
> An argument is truth-functionally valid if and only if there is no truth-assignment on which all the premises are true and the conclusion is false.
Linking this to [derivation](Formal%20proofs%20in%20propositional%20logic.md), we say:
>
> In a system of derivation in propositional logic, an argument is valid if the conclusion of the argument is derivable within the system of derivation from the set consisting of the premises, and invalid otherwise.
### Demonstration
The inference from the set $\{P, P \supset Q\}$ to $Q$ is valid.
### Truth-table
````
P Q   P ⊃ Q   P   Q
T T     T     T   T   *
T F     F     T   F
F T     T     F   T
F F     T     F   F
````
## Entailment
### Informal definition
Entailment as a concept is almost identical to validity. We say that a proposition is entailed by a set of propositions if it is not possible for every member of this set to be true and the proposition to be false.
The difference from validity resides in the fact that the propositions are distinguished according to whether they are premises or a conclusion. So, technically, validity is a special case of entailment: a case of entailment in which we mark out the propositions as premises and conclusion. A proposition may be entailed by a given set without being presented as the *conclusion* of an argument.
### Formal definition
>
> A finite set of sentences $\Gamma$ truth-functionally entails a sentence $P$ ($\Gamma \vDash P$) if and only if there is no truth-assignment on which every member of $\Gamma$ is true and $P$ is false.
#### Informal demonstration
````
It is raining.
If it is raining then the pavement will be wet.
The pavement is wet.
````
#### Formal demonstration
````
{P, P ⊃ Q} ⊨ Q
````
#### Truth-table
````
P Q   P ⊃ Q   P   Q
T T     T     T   T   *
T F     F     T   F
F T     T     F   T
F F     T     F   F
````

View file

@ -0,0 +1,20 @@
---
tags:
- Mathematics
- Algebra
---
* **Variable**
* A symbol that stands for a value which may vary
* **Equation**
* A mathematical statement that equates two mathematical expressions (states that they are the same, i.e. establishes an identity relation)
* **Solution** ^678811
* A numerical value that **satisfies** an equation. When the variable in the equation is replaced by the solution, a true statement results
### Example
$$ 4 = y - 11 $$
The example above is an **equation**. $y$ is the variable. This can be replaced by $15$ which is the **solution** to the equation:
$$ 4 = 15 -11 $$

View file

@ -0,0 +1,70 @@
---
tags:
- Mathematics
- Algebra
---
## Equivalent equations
>
> Two equations are equivalent if they have the same [solution](Algebra%20key%20terms.md#678811) set.
We know from the distributive property of multiplication that the equation $a \cdot (b + c )$ is equivalent to $a \cdot b + a \cdot c$. If we assign values to the variables such that $b$ is $5$ and $c$ is $2$ we can demonstrate the equivalence that obtains in the case of the distributive property by showing that both $a \cdot (b + c )$ and $a \cdot b + a \cdot c$ have the same solution:
$$ 2 \cdot (5 + 2) = 14 $$
$$ 2 \cdot 5 + 2 \cdot 2 =14 $$
When we substitute $a$ with $2$ (the solution) we arrive at a true statement (the assertion that this arrangement of values results in $14$). Since both expressions have the same solution they are equivalent.
## Creating equivalent equations
We can create equivalent equations by adding, subtracting, multiplying and dividing the *same quantity* from both sides of the equation (i.e. either side of the $=$ symbol).
Adding or subtracting the same quantity from both sides (either side of the $=$ ) of the equation results in an equivalent equation.
### Demonstration with addition
$$ x - 4 = 3 $$
The [solution](Algebra%20key%20terms.md#678811) to this equation is $7$
$$ x - 4 (+4) = 3 (+ 4) $$
Here we have added $4$ to each side of the equation. If $x = 7$ then:
$$ 7 - 4 (+ 4) = 7 $$
and:
$$ 3 + 4 = 7 $$
### Demonstration with subtraction
$$ x + 4 = 9 $$
The [solution](Algebra%20key%20terms.md#678811) to this equation is $5$.
$$ x + 4 (-4) = 9(-4) $$
Here we have subtracted $4$ from each side of the equation. If $x = 5$ then:
$$ 5 + 4 (-4) = 5 $$
and
$$ 9 - 4 = 5 $$
### Demonstration with multiplication
$$x \cdot 2 = 10 $$
The [solution](Algebra%20key%20terms.md#678811) to this equation is $5$.
$$ (x \cdot 2) \cdot 3 = 10 \cdot 3 $$
Here we have multiplied each side of the equation by $3$. If $x =5$ then
$$ (5 \cdot 2) \cdot 3 = 30$$
$$ 10 \cdot 3 = 30$$
### Demonstration with division
$$x \cdot 3 = 18 $$
The [solution](Algebra%20key%20terms.md#678811) to this equation is $6$.
$$\frac{x \cdot 3}{3} = \frac{18}{3} $$
Here we have divided each side of the equation by $3$. If $x$ is 6, then
$$\frac{6 \cdot 3}{3} = 6$$
$$\frac{18}{3} = 6 $$

View file

@ -0,0 +1,47 @@
---
tags:
- Mathematics
- Algebra
- exponents
---
## Equivalent equations
>
> Two equations are equivalent if they have the same solution set.
We know from the distributive property of multiplication that the equation $a \cdot (b + c )$ is equivalent to $a \cdot b + a \cdot c$. If we assign values to the variables such that $b$ is equal to $5$ and $c$ is equal to $2$ we can demonstrate the equivalence that obtains in the case of the distributive property by showing that both $a \cdot (b + c )$ and $a \cdot b + a \cdot c$ have the same solution:
$$ 2 \cdot (5 + 2) = 14 $$
$$ 2 \cdot 5 + 2 \cdot 2 =14 $$
When we substitute $a$ with $2$ (the solution) we arrive at a true statement (the assertion that this arrangement of values results in $14$). Since both expressions have the same solution they are equivalent.
## Creating equivalent equations
Adding or subtracting the same quantity from both sides (either side of the $=$ ) of the equation results in an equivalent equation.
### Demonstration with addition
$$ x - 4 = 3 \\ x -4 (+ 4) = 3 (+ 4) $$
Here we have added $4$ to each side of the equation. If $x = 7$ then:
$$ 7 - 4 (+ 4) = 7 $$
and:
$$ 3 + 4 = 7 $$
### Demonstration with subtraction
$$ x + 4 = 9 \\ x + 4 (-4) = 9 (-4) $$
Here we have subtracted $4$ from each side of the equation. If $x = 5$ then:
$$ 5 + 4 (-4) = 5 $$
and
$$ 9 - 4 = 5 $$

View file

@ -0,0 +1,96 @@
---
tags:
- Mathematics
- Algebra
- logarithms
---
Most simply, a logarithm is a way of answering the question:
>
> How many of one number do we need to multiply together to get another number? How many factors of $x$ do we need to multiply to get $y$?
More formally:
>
> $x$ raised to what power gives me $y$?
Below is an example of a logarithm:
$$ \log_{3} 9
$$
We read it:
>
> log base 3 of 9
And it means:
>
> 3 raised to what power gives me 9?
In this case the answer is easy: $3^2$ gives me nine, which is to say: three multiplied by itself.
## Using exponents to calculate logarithms
This approach rapidly becomes difficult when working with larger numbers. It's not as obvious what $\log_{5} 625$ would be using this method. For this reason, we use exponents, which are intimately related to logarithms.
A logarithm can be expressed identically using exponents for example:
$$ \log_{3} 9 = 2 \leftrightarrow 3^2 = 9
$$
By carrying out the conversion in stages, we can work out the answer to the question a logarithm poses.
Let's work out $\log_{2} 8$ using this method.
1. First we add a variable (x) to the expression on the right hand:
$$ \log_{2} 8 \leftrightarrow x
$$
1. Next we take the base of the logarithm and combine it with x as an exponent. Now our formula looks like this:
$$ \log_{2} 8 \leftrightarrow 2^x
$$
1. Next we add an equals and the number that is left from the logarithm (8):
$$ \log_{2} 8 \leftrightarrow 2^x = 8
$$
Then the problem is reduced to: how many times do you need to multiply two by itself to get 8? The answer is 3: $2 \times 2 \times 2$, or $2^3$. Hence we have the balanced equation:
$$ \log_{2} 8 \leftrightarrow 2^3 = 8
$$
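Logarithms with arbitrary bases can also be checked programmatically. A small sketch using JavaScript's built-in `Math` functions and the change-of-base rule (the `logBase` helper name is an assumption for illustration):
````js
// Built-in base-2 and base-10 logarithms
console.log(Math.log2(8));     // 3
console.log(Math.log10(1000)); // 3, the common logarithm

// Any other base via change of base: log_b(y) = ln(y) / ln(b)
const logBase = (b, y) => Math.log(y) / Math.log(b);
console.log(logBase(5, 625));  // 4 (up to floating-point rounding)
````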
## Common base values
Often a base won't be specified in a log expression. For example:
$$ \log 1000
$$
This is just a shorthand and it means that the base value is ten, i.e. that the logarithm is base 10. So the above actually means:
$$ \log_{10} 1000 = 3
$$
This is referred to as the **common logarithm**.
Another frequent base is Euler's number $e$ (approx. 2.71828); a logarithm with this base is known as the **natural logarithm**.
An example:
$$ \log_{e} 7.389 = 2
$$

View file

@ -0,0 +1,13 @@
---
tags:
- Mathematics
- Algebra
- exponents
---
When squaring a negative number the answer will always be positive:
$$
(-5)^2 = 25
$$
This confused me but it was because I was thinking of it in terms of $-5 \cdot 5$ when in fact it is $-5 \cdot -5$, and when two negative numbers are multiplied the product is always positive. More generally, a negative number raised to an even power is positive, while a negative number raised to an odd power is negative.
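A quick check of the sign rule in JavaScript (note that `-5 ** 2` without parentheses is a syntax error in JS precisely because of this ambiguity):
````js
console.log((-5) ** 2); // 25   (even power: positive)
console.log((-5) ** 3); // -125 (odd power: negative)
````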

View file

@ -0,0 +1,39 @@
---
tags:
- Mathematics
- Algebra
- operators
---
## Use inversion of operators
When solving equations we frequently make use of the [ operator inversion rules](../Prealgebra/Inversion%20of%20operators.md) to find the solutions.
### Example: inversion of addition
For example, the equation $9 = 3 + x$ has the solution $6$ ($x$ is equal to $6$). To arrive at this, we can use the inverse of the main operator in the equation (addition): $9-3 = 6$.
### Example: inversion of subtraction
Now consider $19 = x - 3$. The solution to this equation is $22$ ($x$ is equal to $22$). To arrive at this, we can use the inverse of the main operator in the equation (subtraction): $19 + 3 = 22$.
### Example: inversion of division
The equation we want to solve:
$$\frac{x}{6} = 4$$
Now we invert it by multiplying the denominator by the quotient: $6\cdot 4 = 24$. Therefore:
$$ \frac{24}{6} = 4$$
The solution is $24$
### Example: inversion of multiplication
The equation we want to solve:
$$4x = 36$$
Now we invert it by dividing the product by the coefficient:
!Add link to 'coefficient'
$$\frac{36}{4} = 9$$
Therefore the solution is $9$:
$$ 4(9) = 36$$
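Each inversion can be verified by substituting the solution back into the original equation; a small sketch:
````js
// Substitute each solution back into its equation to verify it
console.log(9 === 3 + 6);   // inversion of addition:       x = 6
console.log(19 === 22 - 3); // inversion of subtraction:    x = 22
console.log(24 / 6 === 4);  // inversion of division:       x = 24
console.log(4 * 9 === 36);  // inversion of multiplication: x = 9
````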

View file

@ -0,0 +1,12 @@
---
tags:
- Mathematics
- Prealgebra
- theorems-axioms-laws
---
# The Property of Additive Identity
**Let $a$ represent any member of $\mathbb{W}$ or $\mathbb{Z}$ then:**
$$ a + 0 = a $$

View file

@ -0,0 +1,12 @@
---
tags:
- Mathematics
- Prealgebra
- theorems-axioms-laws
---
**Let $a$ represent any member of $\mathbb{Z}$. Then there is a unique member of $\mathbb{Z}$ $-a$ such that:**
$$ a + (-a) = 0 $$
The sum of a number and its negative (called **the additive inverse**) is always zero.

View file

@ -0,0 +1,16 @@
---
tags:
- Mathematics
- Prealgebra
- theorems-axioms-laws
---
# The Associative Property of Addition and Multiplication
**Let $a$, $b$ , $c$ represent members of $\mathbb{W}$ or $\mathbb{Z}$ then:**
$$ (a + b) + c = a + (b + c) $$
$$ a \cdot (b \cdot c) = (a \cdot b) \cdot c $$
When grouping symbols (parentheses, brackets, braces) are used with the multiplication and addition of whole numbers and integers, the particular placement of the grouping symbols relative to each of the addends or multiplicands does not change the sum/product.

View file

@ -0,0 +1,14 @@
---
tags:
- Mathematics
- Prealgebra
- theorems-axioms-laws
---
# The Commutative Property of Addition and Multiplication
**Let $a$, $b$ represent members of $\mathbb{W}$ or $\mathbb{Z}$ then:**
$$ a + b = b + a $$
$$ a \cdot b = b \cdot a $$

View file

@ -0,0 +1,30 @@
---
tags:
- Mathematics
- Prealgebra
- theorems-axioms-laws
---
# The Distributive Property of Multiplication
**Let $a$, $b$ represent members of $\mathbb{W}$ or $\mathbb{Z}$ then:**
$$ a \cdot (b + c) = a \cdot b + a \cdot c $$
### Demonstration
When faced with $4(2+3)$ we may proceed with the official order of operations in algebra, namely:
````
4 x (2 + 3) = 4 x (5)
= 20
````
In other words we find the sum of the values in parentheses and then multiply this by the value outside of the brackets.
When we use the distributive property we *distribute* each value in the parentheses against the value outside of the parentheses:
````
4 x (2 + 3) = (4 x 2) + (4 x 3)
8 + 12 = 20
````

View file

@ -0,0 +1,69 @@
---
tags:
- Mathematics
- Prealgebra
- fractions
- division
---
Suppose you have the following shape:
![draw.io-Page-9.drawio 1.png](../../img/draw.io-Page-9.drawio%201.png)
One part is shaded. This represents one-eighth of the original shape.
![one-eighth-a.png](../../img/one-eighth-a.png)
Now imagine there are four instances of the shape and in each one one-eighth remains shaded. How many one-eighths are there in four?
![draw.io-Page-9.drawio 2.png](../../img/draw.io-Page-9.drawio%202.png)
*The shaded proportion represents $\frac{1}{8}$ of each shape; with four of these shapes, how many eighths are there?*
This is a division statement: to find how many one-eighths there are we would calculate:
$$
4 \div \frac{1}{8}
$$
But actually it makes more sense to think of this as a multiplication. There are four shapes of eight parts, meaning there are $4 \cdot 8$ parts in total: 32. A single one of these parts therefore represents $\frac{1}{32}$ of the total.
From this we realise that when we divide by a fractional amount, we can express the calculation in terms of multiplication and arrive at the correct answer:
$$
4 \div \frac{1}{8} = 4 \cdot 8 = 32
$$
Note that the quotient $32$ tells us how many one-eighths there are in four wholes; each individual part, by contrast, is $\frac{1}{32}$ of the total.
### Formal specification of how to divide fractions
We combine the foregoing (that it is easier to divide by fractional amounts using multiplication) with the concept of a [reciprocal](Reciprocals.md) to arrive at a definitive method for dividing two fractions.
It boils down to: *invert and multiply*:
>
> If $\frac{a}{b}$ and $\frac{c}{d}$ are fractions then: $$\frac{a}{b} \div \frac{c}{d} = \frac{a}{b} \cdot \frac{d}{c}$$
We invert the divisor (the second factor) and change the operator from division to multiplication.
#### Demonstration
*Divide $\frac{1}{2}$ by $\frac{3}{5}$*
$$
\begin{split}
\frac{1}{2} \div \frac{3}{5} = \frac{1}{2} \cdot \frac{5}{3} \\
= \frac{5}{6}
\end{split}
$$
*Divide $\frac{-6}{x}$ by $\frac{-12}{x^2}$*
$$
\begin{split}
\frac{-6}{x} \div \frac{-12}{x^2} = \frac{-6}{x} \cdot \frac{x^2}{-12} \\ =
\frac{(\cancel{3} \cdot \cancel{2} )}{\cancel{x}} \cdot \frac{(\cancel{x} \cdot \cancel{x} )}{\cancel{3} \cdot \cancel{2} \cdot 2} \\ =
\frac{x}{2}
\end{split}
$$
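The invert-and-multiply rule is mechanical enough to express directly in code. A sketch using `[numerator, denominator]` pairs (note that in the second worked example above the two negative signs cancel, which is why the quotient is positive):
````js
// Invert and multiply: (a/b) ÷ (c/d) = (a/b) · (d/c)
function divideFractions([a, b], [c, d]) {
  return [a * d, b * c];
}

console.log(divideFractions([1, 2], [3, 5])); // [5, 6] i.e. 5/6
````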

View file

@ -0,0 +1,43 @@
---
tags:
- Mathematics
- Prealgebra
- fractions
---
Two fractions are equivalent if they represent the same value.
To begin with we can represent this visually:
![equiv-fractions.png](../../img/equiv-fractions.png)
*Each shaded area is taking up the same proportion of the whole.*
The same properties can be represented arithmetically by multiplying the numerator and denominator at each step by 2. Thus:
$$
\frac{1 (\cdot 2)}{3 (\cdot 2)} = \frac{2}{6}
$$
Therefore the following rule obtains:
>
> If you start with a fraction and multiply both its numerator and denominator by the same value, the resulting fraction is equivalent to the original fraction.
$$
\frac{a}{b} = \frac{a \cdot x}{b \cdot x}
$$
This process works in reverse when we invert the operator and use division:
$$
\frac{2 (/ 2)}{6 (/ 2)} = \frac{1}{3}
$$
Thus:
>
> If you start with a fraction and divide both its numerator and denominator by the same value, the resulting fraction is equivalent to the original fraction.
$$
\frac{a}{b} = \frac{a / x}{b / x}
$$
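A compact way to test the rule above: two fractions $\frac{a}{b}$ and $\frac{c}{d}$ are equivalent exactly when the cross-products $a \cdot d$ and $c \cdot b$ are equal. A sketch:
````js
// Cross-multiplication test: a/b equals c/d exactly when a*d === c*b
const equivalent = ([a, b], [c, d]) => a * d === c * b;

console.log(equivalent([1, 3], [2, 6])); // true
console.log(equivalent([1, 3], [2, 5])); // false
````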

View file

@ -0,0 +1,14 @@
---
tags:
- Mathematics
- Prealgebra
- fractions
---
Being asked to express a natural number as a fraction seems confusing at first but you need to just know that for any whole number $n$, you express it as a fraction with $\frac{n}{1}$.
*Express 8 as an equivalent fraction having the denominator 5*
$$
8 = \frac{8}{1} = \frac{8 \cdot 5}{1 \cdot 5} = \frac{40}{5}
$$

View file

@ -0,0 +1,80 @@
---
tags:
- Mathematics
- Prealgebra
- factors
- divisors
---
## Factors and divisors
The terms **factor** and **divisor** are used interchangeably. They are different ways of expressing the same mathematical truth and this is because of the inverse relationship between division and multiplication.
### Divisors
>
> For a number $n$, a divisor is any number that divides $n$ evenly, without remainder: $$ \frac{n}{d} = q $$
In this operation, $d$ is the **divisor**, $n$ is the **dividend** and $q$ is the **quotient** (a whole number, with remainder $0$).
### Factors
>
> For a given number $n$, its factors are any pair of numbers that when multiplied together return $n$ as the product: $$ a \cdot b = n $$
We can see the relationship consists in the fact that factors are associated with multiplication and divisors are associated with division: two different perspectives on the same number relationships.
For example, 6 is both a factor and divisor of 18 and 24. To be precise, it is the greatest common divisor of these two numbers.
As a divisor:
$$
\frac{18/6}{24/6} = \frac{3}{4}
$$
As a factor:
$$
\frac{3 \cdot 6}{4 \cdot 6} = \frac{18}{24}
$$
When we divide by the common divisor it acts as a divisor. When we multiply by the common divisor it acts as a factor. The fact that the fractions are [equivalent](Equivalent%20fractions.md) in both cases indicates that the two perspectives are equivalent.
## Greatest common divisor
>
> For two integers $a, b$, $D$ is a common divisor of $a$ and $b$ if it is a divisor of both. The greatest common divisor is the largest value that $D$ can take whilst remaining a divisor of both $a$ and $b$.
### Demonstration
*Find the greatest common divisor of $18$ and $24$*
The divisors of 18:
$$1, 2, 3, 6, 9, 18$$
The divisors of 24:
$$ 1, 2, 3, 4, 6, 8, 12, 24$$
Thus the common divisors are:
$$ 1, 2, 3, 6 $$
The largest value in the above set is 6, thus 6 is the greatest common divisor.
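Listing every divisor works for small numbers; a standard shortcut (not covered in this note) is Euclid's algorithm, which repeatedly replaces the pair with the smaller number and the remainder of the division:
````js
// Euclid's algorithm: gcd(a, b) = gcd(b, a % b), stopping when the remainder is 0
function gcd(a, b) {
  while (b !== 0) {
    [a, b] = [b, a % b];
  }
  return a;
}

console.log(gcd(18, 24)); // 6
````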
## Heuristics for finding divisors
1. For dividend $n$ , if $n$ ends in an even number or zero, $n$ is **divisible by 2**.
1. $\frac{12}{2} = 6$
1. $\frac{84}{2} = 42$
1. For dividend $n$ if the sum of the digits is divisible by 3 then $n$ is **divisible by 3**.
1. $\frac{72}{3} = 24$
1. $\frac{21}{3} = 7$
1. For a dividend $n$, if the number represented by the last two digits of $n$ is divisible by 4, then $n$ is divisible by 4.
1. $\frac{324}{4} = 81$
1. $\frac{532}{4} = 133$
1. For a dividend $n$, if the last digit of $n$ is 0 or 5, then $n$ is divisible by 5.
1. $\frac{25}{5} = 5$
1. For a dividend $n$, if $n$ is divisible by 2 and 3, then $n$ is divisible by 6.
1. $\frac{12}{6} = 2$
1. $\frac{18}{6} = 3$
1. For a dividend $n$, if the last three digits of $n$ are divisible by 8, then $n$ is divisible by 8.
1. $\frac{73024}{8} = 9128$
1. For a dividend $n$, if the sum of the digits of $n$ is divisible by 9 then $n$ is divisible by 9.
1. $\frac{117}{9} = 13$
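As a quick sanity check of the digit-sum heuristics (divisibility by 3 and by 9), here is a small sketch:
````js
// Sum the decimal digits of n
function digitSum(n) {
  return String(n)
    .split("")
    .reduce((sum, digit) => sum + Number(digit), 0);
}

console.log(digitSum(72) % 3 === 0);  // true: 7 + 2 = 9, so 72 is divisible by 3
console.log(digitSum(117) % 9 === 0); // true: 1 + 1 + 7 = 9, so 117 is divisible by 9
````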

View file

@ -0,0 +1,17 @@
---
tags:
- Mathematics
- Prealgebra
---
## Grouping symbols
We use parentheses to delimit the part of an expression we want evaluated first. If grouping symbols are nested, evaluate the expression in the innermost pair of grouping symbols first.
## Writing mathematical statements: placement of $=$
We only write one equals sign per line. For example, if we are resolving parentheses:
$$ \begin{equation} \begin{split} 2 + [3 + (4+5)] = 2 + [3 + 9] \\ = 2 + 12 \\ = 14 \end{split} \end{equation} $$
We call parentheses (`()`), brackets (`[]`) and braces (`{}`) grouping symbols. When grouping symbols are used, the expression inside any pair of them must be evaluated first.

View file

@ -0,0 +1,16 @@
---
tags:
- Mathematics
- Prealgebra
- theorems-axioms-laws
---
>
> Every integer greater than one is either a prime number itself or is product of a unique combination of primes.
This is also known as the **Unique Factorisation Theorem**.
'Unique' means that there is not more than one way to derive the whole number. Once you reduce the factorisation to primes, there can only be one set of numbers that results in the target number.
For example, $24$ can be factorised as $12 \cdot 2$ or as $6 \cdot 4$, but these factors are composite numbers. The unique prime factorisation of $24$ is $2, 2, 2, 3$.

View file

@ -0,0 +1,36 @@
---
tags:
- Mathematics
- Prealgebra
- fractions
- negative-numbers
---
To work with negative fractions we draw on the [Rules for operations on like and unlike terms](Rules%20for%20operations%20on%20like%20and%20unlike%20terms.md).
## Fractions with unlike terms
* A fraction is just one number divided by another. $\frac{5}{10}$ is just five divided by ten.
* A positive integer divided by a negative (or vice versa) will always result in a negative. Thus $\frac{5}{-15}$ is equal to $-\frac{1}{3}$.
* We can therefore express the whole fraction as a negative:
$$
- \frac{5}{15}
$$
* Or we could apply the negative symbol to the numerator. It would stand for the same value:
$$
\frac{-5}{15}
$$
Therefore:
>
> Let $a,b$ be any integers. The following three fractions are [equivalent](Equivalent%20fractions.md): $$\frac{-a}{b}, \frac{a}{-b}, - \frac{a}{b}$$
## Fractions with like terms
* In cases where the numerator and denominator are both negative, the value that the fraction represents will be positive overall. This is because the quotient of a negative integer divided by a negative integer is always positive.
* Thus: $$ \frac{- 12xy^2}{ - 18xy^2} = \frac{12xy^2}{18xy^2}$$

View file

@ -0,0 +1,36 @@
---
tags:
- Mathematics
- Prealgebra
- fractions
- divisors
---
Given the equivalence between factors and divisors we can increase fractions to higher terms in a very similar way to when we reduce fractions. In the latter case we are dividing by divisors to reduce. In the former, we are multiplying by factors to increase.
>
> Whenever we increase a fraction, the resultant fraction will always be [equivalent](Equivalent%20fractions.md) to the fraction we started with.
## Demonstration
*Express $\frac{3}{5}$ as an equivalent fraction having the denominator 20*
$$
\frac{3 \cdot 4}{5 \cdot 4} = \frac{12}{20}
$$
*Express $\frac{2}{3}$ as an equivalent fraction having the denominator 21*
$$
\frac{2 \cdot 7}{3 \cdot 7} = \frac{14}{21}
$$
## Increasing fractions with variables to higher terms
*Express $\frac{2}{9}$ as an equivalent fraction having the denominator 18a*
In these cases, just append the variable to the factor:
$$
\frac{2 \cdot 2a}{9 \cdot 2a} = \frac{4a}{18a}
$$

View file

@ -0,0 +1,7 @@
---
tags:
- Mathematics
- Prealgebra
---
Come back to as many back links

View file

@ -0,0 +1,24 @@
---
tags:
- Mathematics
- Prealgebra
- operators
---
## Addition, subtraction
Addition is the inverse of subtraction:
$$(x - a) + a = x$$
$$ (6 - 2) + 2 = 6 $$
Subtraction is the inverse of addition:
$$(x + a) - a = x$$
$$ (3 + 2) - 2 = 3$$
Division is the inverse of multiplication
$$ \frac{a \cdot x}{a} = x$$
$$ \frac{6 \cdot 3}{6} = 3$$
Multiplication is the inverse of division
$$ a \cdot \frac{x}{a} = x$$
$$ 2 \cdot \frac{8}{2} = 8$$

View file

@ -0,0 +1,12 @@
---
tags:
- Mathematics
- Prealgebra
- theorems-axioms-laws
---
# The Property of Multiplicative Identity
**Let $a$ represent any member of $\mathbb{W}$ or $\mathbb{Z}$ then:**
$$ a \cdot 1 = a $$

View file

@ -0,0 +1,13 @@
---
tags:
- Mathematics
- Prealgebra
- theorems-axioms-laws
---
# The Multiplicative Property of Negative One
>
> **Let $a$ represent any member of $\mathbb{Z}$, then:**
$$ (-1) \cdot a = -a $$

View file

@ -0,0 +1,63 @@
---
tags:
- Mathematics
- Prealgebra
- fractions
- multiplication
---
>
> To find the product of two fractions $\frac{a}{b}$ and $\frac{c}{d}$ multiply their numerators and denominators and then reduce: $$\frac{a}{b} \cdot \frac{c}{d} = \frac{a \cdot c}{b \cdot d}$$
## Example
$$
\frac{1}{3} \cdot \frac{2}{5} = \frac{1 \cdot 2}{3 \cdot 5} = \frac{2}{15}
$$
## Prime factorisation in place
The example above did not require a reduction, so here is a more complex example:
$$
\frac{14}{15} \cdot \frac{30}{140} = \frac{420}{2100}
$$
It would be laborious to reduce such a large product using factor trees or the repeated application of divisors. We can use a more efficient method.
This method can be applied at the point at which we conduct the multiplication rather than afterwards once we have the product. We express the initial multiplicands as factors:
$$
\frac{14}{15} \cdot \frac{30}{140} = \frac{(2 \cdot 7) \cdot (2 \cdot 3 \cdot 5) }{(3 \cdot 5) \cdot (2 \cdot 2 \cdot 7 \cdot 5)}
$$
We now have the product in factorised form before we have applied the multiplication so we can go ahead and cancel:
$$
\frac{\cancel{2} \cdot \cancel{7} \cdot \cancel{2} \cdot \cancel{3} \cdot \cancel{5}}{\cancel{3} \cdot \cancel{5} \cdot \cancel{2} \cdot \cancel{2} \cdot \cancel{7} \cdot 5} = \frac{1}{5}
$$
**Note that in the above case there was only a single 5 left in the denominator and no value left in the numerator. This is equivalent to there being "one five", so we write $\frac{1}{5}$.**
## Example with negative fractions containing variables
*Calculate: $$ - \frac{6x}{55y} \cdot - \frac{110y^2}{105x^2} $$*
First multiply in place:
$$
\frac{(3 \cdot 2 \cdot x) \cdot (5 \cdot 2 \cdot 11 \cdot y \cdot y)}{(5 \cdot 11 \cdot y) \cdot (7 \cdot 5 \cdot 3 \cdot x \cdot x)}
$$
Then cancel:
$$
\frac{(\cancel{3} \cdot 2 \cdot \cancel{x}) \cdot (\cancel{5} \cdot 2 \cdot \cancel{11} \cdot \cancel{y} \cdot y)}{(\cancel{5} \cdot \cancel{11} \cdot \cancel{y}) \cdot (7 \cdot 5 \cdot \cancel{3} \cdot \cancel{x} \cdot x)} =
\frac{2 \cdot 2 \cdot y}{7 \cdot 5 \cdot x}
$$
Then reduce:
$$
\frac{2 \cdot 2 \cdot y}{7 \cdot 5 \cdot x} = \frac{4y}{35x}
$$

View file

@ -0,0 +1,11 @@
---
tags:
- Mathematics
- Prealgebra
---
## The set of natural numbers
$$ \mathbb{N} = \{1, 2, 3, ...\} $$
Natural numbers are most simply expressed as **the set of numbers we use for counting**. The set of natural numbers starts at one and continues to infinity.

View file

@ -0,0 +1,22 @@
---
tags:
- Mathematics
- Prealgebra
---
1. Evaluate expressions in **parentheses**
1. Evaluate **exponents**
1. Evaluate **multiplications and divisions** from left to right in the order that they appear
1. Evaluate **additions and subtractions** from left to right in the order that they appear.
In the absence of grouping symbols, addition holds no precedence over subtraction and vice versa; we simply work from left to right. Evaluating left to right gives the correct result:
````
15 - 8 + 4 = 7 + 4
           = 11
````
whereas performing the addition first gives the wrong result:
````
15 - 8 + 4 = 15 - 12
           = 3
````

View file

@ -0,0 +1,24 @@
---
tags:
- Mathematics
- Prealgebra
- factors
- primes
---
### Prime factorisation
Prime factorisation is the activity of expressing a composite number as the unique product of [prime numbers](Primes%20and%20composites.md). There are two main approaches to this:
* *factor trees*
* repeated division by two
>
> **Factor trees:** we take a number $n$ and break it down into two factors of $n$. We then repeat this process with the resulting factors working recursively until the numbers we are left with are primes.
![Untitled Diagram-Page-1.drawio.png](../../img/Untitled%20Diagram-Page-1.drawio.png)
*The prime factors of 18 are 2, 3, 3*
It doesn't matter which products we choose as the interim factors; we should always reach the same outcome:
![Untitled Diagram-Page-3.drawio 1.png](../../img/Untitled%20Diagram-Page-3.drawio%201.png)
![Untitled Diagram-Page-2.drawio.png](../../img/Untitled%20Diagram-Page-2.drawio.png)
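A programmatic equivalent of the factor tree is trial division (a sketch, not from the original note): divide out the smallest factor repeatedly until only 1 remains.
````js
// Trial division: repeatedly divide out the smallest remaining factor,
// which is guaranteed to be prime
function primeFactors(n) {
  const factors = [];
  let divisor = 2;
  while (n > 1) {
    while (n % divisor === 0) {
      factors.push(divisor);
      n /= divisor;
    }
    divisor += 1;
  }
  return factors;
}

console.log(primeFactors(18)); // [2, 3, 3]
````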

View file

@ -0,0 +1,18 @@
---
tags:
- Mathematics
- Prealgebra
- primes
---
## Prime and composite numbers
Definition of a **prime number**:
>
> For any whole number $n$ where $n \neq 1$, $n$ is prime if and only if its sole [factors](Factors%20and%20divisors.md) are $1$ and $n$
Definition of a **composite number**:
>
> For any whole number $n$, $n$ is composite just if $n$ is not prime

View file

@ -0,0 +1,31 @@
---
tags:
- Mathematics
- Prealgebra
- fractions
- division
- theorems-axioms-laws
---
The [Property of Multiplicative Identity](Multiplicative%20identity.md) applies to fractions as well as to whole numbers:
$$
\frac{a}{b} \cdot 1 = \frac{a}{b}
$$
With fractions there is a related property: the **Multiplicative Inverse**.
>
> If $\frac{a}{b}$ is any fraction, the fraction $\frac{b}{a}$ is called the *multiplicative inverse* or *reciprocal* of $\frac{a}{b}$. The product of a fraction multiplied by its reciprocal will always be 1: $$ \frac{a}{b} \cdot \frac{b}{a} = 1$$
For example:
$$
\frac{3}{4} \cdot \frac{4}{3} = \frac{12}{12} = 1
$$
In this case $\frac{4}{3}$ is the reciprocal or multiplicative inverse of $\frac{3}{4}$.
This accords with what we know a fraction to be: a representation of parts of a whole. When we multiply a fraction by its reciprocal, we demonstrate that its parts make up one whole.
This also means that whenever we have a whole number $n$, we can represent it fractionally by expressing it as $\frac{n}{1}$.

View file

@ -0,0 +1,155 @@
---
tags:
- Mathematics
- Prealgebra
- fractions
- divisors
---
## Reducing fractions to their lowest terms
>
> A fraction is said to be *reduced to its lowest terms* if the [greatest common divisor](Factors%20and%20divisors.md#greatest-common-divisor) of the numerator and the denominator is $1$.
>
> Whenever we reduce a fraction, the resultant fraction will always be [equivalent](Equivalent%20fractions.md) to the fraction we started with.
Thus the fraction $\frac{2}{3}$ is reduced to its lowest terms because the greatest common divisor is 1. Neither the numerator or the denominator can be reduced to any lower terms. In contrast, the fraction $\frac{4}{6}$ is not reduced to its lowest terms because the greatest common divisor of both 4 and 6 is 2, not 1.
### 1. Reducing with repeated application of divisors
The following demonstrates the process of reducing a fraction to its lowest terms in a series of steps:
$$
\frac{18}{24} = \frac{18/2}{24/2} = \frac{9}{12} = \frac{9/3}{12/3} = \frac{3}{4}
$$
*Once we get to $\frac{3}{4}$ the greatest common divisor is 1, therefore $\frac{18}{24}$ has been reduced to its lowest terms.*
### 2. Reducing in one step with the highest common divisor
In the previous example the reduction took two steps: first we divided by two and then we divided by three. There is a more efficient way: find the [highest common divisor](Factors%20and%20divisors.md#greatest-common-divisor) of the numerator and denominator and then use this as the basis for the reduction. With this method, the reduction can be completed in a single step.
The greatest common divisor of 18 and 24 is 6, thus:
$$
\frac{18}{24} = \frac{18/6}{24/6} = \frac{3}{4}
$$
Note how our earlier two divisors 2 and 3 are [factors](Factors%20and%20divisors.md#factors) of 6, showing the consistency between the two methods.
### 3. Reducing with factors and cancellation
The two methods above are not very systematic and are quite heuristic. The third approach is more systematic and relies on the [interchangeability of factors and divisors](Factors%20and%20divisors.md).
Instead of asking what the greatest common divisor of 18 and 24 is, we could ask: which single number can we multiply by to get both 18 and 24? Obviously both numbers are in the six times table. This is to say that 6 is a [factor](Factors%20and%20divisors.md#factors) of both: we can multiply some number by 6 to arrive at both 18 and 24. The numbers are 3 and 4 respectively:
$$
\begin{split}
3 \cdot 6 = 18 \\
4 \cdot 6 = 24
\end{split}
$$
Here, 3 and 4 are the multiplicands of the factor 6. As 3 and 4 have no common factor other than 1, $\frac{3}{4}$ is in its lowest terms.
Once we have reached this point we no longer need the common factor 6; we can therefore cancel it out, leaving the multiplicands as the reduced fraction:
$$
\begin{split}
3 \cancel{\cdot6= 18}\\
4 \cancel{\cdot6= 24}
\end{split}
$$
### 4. Reducing with prime factorisation
This is still a bit long-winded, however, particularly when finding the factors of larger numbers, because we have to go through the factors of both numbers to find the largest held in common.
A better method is to utilise [prime factorization](Prime%20factorization.md) combined with the canceling technique.
First we find the prime factors of both the numerator and denominator:
![drawio-Page-7.drawio.png](../../img/drawio-Page-7.drawio.png)
This gives us:
$$
\frac{18}{24} = \frac{2 \cdot 3 \cdot 3}{2 \cdot 2 \cdot 2 \cdot 3}
$$
We then cancel out the factors held in common between the numerator and denominator:
$$
\frac{\cancel{2} \cdot \cancel{3} \cdot 3}{\cancel{2} \cdot 2 \cdot 2 \cdot \cancel{3}}
$$
This gives us:
$$
\frac{3}{2 \cdot 2}
$$
We then simplify the fraction as normal to its lowest term (conducting any multiplications required by what is left from the prime factorization):
$$
\frac{3}{4}
$$
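Method 2 translates directly into code. A sketch assuming an integer numerator and denominator:
````js
// Reduce a fraction by dividing numerator and denominator
// by their greatest common divisor
function reduce(numerator, denominator) {
  const gcd = (a, b) => (b === 0 ? a : gcd(b, a % b));
  const divisor = gcd(numerator, denominator);
  return [numerator / divisor, denominator / divisor];
}

console.log(reduce(18, 24)); // [3, 4] i.e. 3/4
````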
## Reducing fractions that contain variables
Superficially this looks to be more difficult but in fact we can apply the same prime factorization method to get the result.
### Demonstration
*Reduce the following fraction to its lowest terms: $$\frac{25a^3b}{40a^2b^3}$$*
The prime factors of the numerator and denominator:
$$
\begin{split}
25 = \{5, 5\} \\
40 = \{2,2,2,5\}
\end{split}
$$
Now we apply canceling but we include the variable parts, treating them exactly the same as the coefficients. We break them out of their exponents however.
$$\frac{25a^3b}{40a^2b^3} =\frac{5 \cdot 5 \cdot a \cdot a \cdot a \cdot b}{2 \cdot 2 \cdot 2 \cdot 5 \cdot a \cdot a \cdot b \cdot b \cdot b }$$
Canceled:
$$\frac{\cancel{5} \cdot 5 \cdot \cancel{a} \cdot \cancel{a} \cdot a \cdot \cancel{b}}{2 \cdot 2 \cdot 2 \cdot \cancel{5} \cdot \cancel{a} \cdot \cancel{a} \cdot \cancel{b} \cdot b \cdot b }$$
Which gives us:
$$
\frac{5 \cdot a}{2 \cdot 2 \cdot 2 \cdot b \cdot b} = \frac{5a}{8b^2}
$$
## Reducing fractions that contain negative values
*Reduce the following fraction to its lowest terms: $$\frac{14y^5}{-35y^3}$$*
* This fraction is an instance of a [fraction with unlike terms](Handling%20negative%20fractions.md#fractions-with-unlike-terms).
* Apply [Prime factorization](Prime%20factorization.md):
![draw.io-Page-8.drawio.png](../../img/draw.io-Page-8.drawio.png)
* Cancel the coefficients and variable parts
$$
\frac{14y^5}{-35y^3}=\frac{7 \cdot 2 \cdot y \cdot y \cdot y \cdot y \cdot y}{-5 \cdot 7 \cdot y \cdot y \cdot y} = - \frac{2y^2}{5}
$$
*Reduce the following fraction to its lowest terms:
$$\frac{- 12xy^2}{ - 18xy^2}$$*
* This fraction is an instance of a [fraction with like terms](Handling%20negative%20fractions.md#fractions-with-like-terms).
* Apply [Prime factorization](Prime%20factorization.md):
![draw.io-Page-8.drawio 1.png](../../img/draw.io-Page-8.drawio%201.png)
* Cancel the coefficients and variable parts
$$
\frac{-12xy^2}{-18xy^2}=\frac{3 \cdot 2 \cdot 2 \cdot x \cdot y \cdot y}{3 \cdot 3 \cdot 2 \cdot x \cdot y \cdot y} = \frac{2}{3}
$$

View file

@ -0,0 +1,126 @@
---
tags:
- Mathematics
- Prealgebra
---
## Addition
### Like terms
>
> Sum the absolute values and add the negative sign afterwards.
$$
\begin{split}
-4 + -3 \\
= -(4 + 3) \\
= -7
\end{split}
$$
### Unlike terms
>
> Subtract smaller from larger amount and then affix the sign of the larger amount to the sum.
#### Negative plus positive
$$
\begin{split}
-8 + 5 \\
= -(8 - 5) \\
= -3
\end{split}
$$
#### Positive plus negative
$$
\begin{split}
4 + -1 \\
= 4 - 1 \\
= 3
\end{split}
$$
## Subtraction
### Like terms
>
> Turn the operator and second negative into a plus sign and execute as an addition.
$$
\begin{split}
-4 - -3 \\
= -4 + 3 \\
= -1
\end{split}
$$
### Unlike terms
#### Positive subtract negative
>
> Turn the negative after the operator to a positive. (Same as previous.)
> $$
> \begin{split}
> 2 - -3 \\
> = 2 + 3 \\
> = 5
> \end{split}
> $$
#### Negative subtract positive
>
> Start at the negative value and count backwards on the number-line
$$
-2 - 3 = -5
$$
## Multiplication
### Like terms
>
> The product of two negative numbers will always be a positive number.
$$
-15 \cdot -3 = 45
$$
### Unlike terms
>
> The product of a positive and a negative number will always be a negative number.
$$
-3 \cdot 5 = -15
$$
## Division
**Division follows the same rules as multiplication.**
### Like terms
>
> The quotient of two negative numbers will always be a positive number.
$$
-15 / -3 = 5
$$
### Unlike terms
>
> The quotient of a positive and a negative number will always be a negative number.
$$
-15 / 3 = -5
$$

View file

@ -0,0 +1,98 @@
---
tags:
- Mathematics
- Prealgebra
---
# The set of whole numbers
We recall the set of whole numbers:
$$ \mathbb{W} = \{0, 1, 2, 3, ...\} $$
# The properties of $\mathbb{W}$
>
> In mathematics, a **property** is any characteristic that applies to a given set.
## The commutative property
### Addition
When **adding** whole numbers, the placement of the addends does not affect the sum.
Let **a**, **b** represent whole numbers, then:
$$ a + b = b + a $$
### Multiplication
When **multiplying** whole numbers the placement of the [multiplicands](https://www.notion.so/Symbols-and-formal-conventions-80aeaf1872f94a0d97a2e8d07e3855bd) does not affect the [product](https://www.notion.so/Symbols-and-formal-conventions-80aeaf1872f94a0d97a2e8d07e3855bd).
Let **a, b** represent whole numbers, then:
$$ a \cdot b = b \cdot a $$
### Subtraction
**Subtraction** is not commutative, viz:
$$ a - b \neq b - a $$
### Division
Division is not commutative, viz:
$$ a \div b \neq b \div a $$
## The associative property
### Addition
When grouping symbols (parentheses, brackets, braces) are used with addition, the particular placement of the grouping symbols relative to each of the addends does not change the sum.
Let **a**, **b, c** represent whole numbers, then:
$$ (a + b) + c = a + (b + c) $$
### Multiplication
Let **a, b, c** represent whole numbers, then:
$$ a \cdot (b \cdot c) = (a \cdot b) \cdot c $$
### Subtraction
Subtraction is not associative, viz:
$$ (a - b) - c \neq a - (b - c) $$
### Division
Division is not associative
$$ (a \div b) \div c \neq a \div (b \div c) $$
## The property of additive identity
If **a** is any whole number, then:
$$ a + 0 = a $$
We therefore call zero the additive identity: whenever we add zero to a whole number, the sum is equal to the whole number itself.
## The property of multiplicative identity
If **a** is any whole number, then:
$$ a \cdot 1 = 1 \cdot a = a $$
## Multiplication by zero
If **a** is any whole number, then:
$$ a \cdot 0 = 0 \cdot a = 0 $$
## Division by zero
Division by zero is **undefined**, but zero divided by any nonzero number is zero.

View file

@ -0,0 +1,10 @@
---
tags:
- Mathematics
- Prealgebra
- theorems-axioms-laws
---
**Let $a$ represent any member of $\mathbb{W}$ or $\mathbb{Z}$ then:**
$$ a \cdot 0 = 0 $$

View file

@ -0,0 +1,214 @@
---
tags:
- Programming_Languages
- backend
- node-js
- express
- REST
- apis
---
## Core Express methods
The following Express methods correspond to the main [HTTP request types](../../Databases/HTTP%20request%20types.md):
* `app.get()`
* `app.post()`
* `app.put()`
* `app.delete()`
## Instantiate instance of Express
````js
const express = require('express')
const app = express()
````
## Nodemon
We don't want to have to restart the server every time we make a change to our files. We can use `nodemon` instead of `node` when running our index file so that file-changes are immediately registered without the need for a restart. It's a good idea to set your NPM start script to `nodemon index.js`.
## Creating GET requests
We are going to return the following array in the GET examples:
````js
const courses = [
{
id: 1,
name: "First course",
},
{
id: 2,
name: "Second course",
},
{
id: 3,
name: "Third course",
},
];
````
### Basic GET without params
We create an [event emitter](Events%20module.md#event-emitters) and listener that listens for GET requests on a specified port and sends data in response to requests.
````js
// Return a value as response from specified URI
app.get('/api/courses', (req, res) => {
res.send(courses)
})
app.listen(3000, () => console.log('Listening on port 3000...'))
````
When creating our API, this structure of creating handlers for specific routes will be repeated. Every endpoint will be specified with the `[app].[http_request_type]` syntax.
### GET with parameters and queries
The previous example just serves an array. This corresponds to the entire set of our data. But we will also need to retrieve specific values; we do this by adding (and allowing for) parameters in our requests.
#### Parameters
We will create a GET path that accepts parameters, these parameters will correspond to the specific entry in our main data array.
````js
app.get("/api/courses/:id", (req, res) => {
res.send(req.params.id);
});
````
We use the `:` symbol in the URI to indicate that we are looking to parse for a specific value in the data. Now if we call `/api/courses/2`, we will get the second item in the array.
Here is a more detailed example, this time with more than one parameter:
````js
app.get("/api/posts/:year/:month", (req, res) => {
res.send(req.params);
});
````
If we navigate to a URL that uses this structure, such as `/api/posts/2021/1`, we would receive a JSON object corresponding to the parameters passed:
````json
{
"year":"2021",
"month":"1"
}
````
This shows us how parameters are represented by Node internally, but we need to provide a way for these parameters to actually be matched, for the values that match them to be returned to the client, and for errors to be handled if no match is found.
Let's say that we want to return a course by its ID:
````js
app.get("/api/courses/:id", (req, res) => {
const course = courses.find((c) => c.id === parseInt(req.params.id));
  if (!course) return res.status(404).send("A course with the given ID was not found");
res.send(course);
});
````
### Queries
Whereas parameters return specific data points, queries don't fetch data; they aggregate or present the data that is returned in a certain way, for instance by applying a sort or search function. We indicate queries with a `?` in our URI.
For example: `/api/posts/2018/1?sortBy=name`.
To facilitate a request like this we have to create a GET path that allows for it:
````js
app.get("/api/posts/:year/:month", (req, res) => {
res.send(req.query);
});
````
We would get the following back:
````json
{
  "sortBy": "name"
}
````
Again a JSON object with key-value pairs is returned.
## Creating POST requests
(Validating the data we receive as the `req` body with the Joi schema validator is covered under Validation below.)
In our example we are going to demonstrate how to allow for POST requests in an API with the scenario of adding a new course to our array.
````js
app.post('/api/courses', (req, res) => {
const course = {
id: courses.length + 1,
name: req.body.name
}
courses.push(course);
res.send(course)
})
````
Here we use the body that is sent from the client and isolate the field `name`. This presupposes that the client is sending us data with the following shape as the body:
````json
{
  "name": "some string"
}
````
The id is added by the server, not the client. Having created the new value we add it to our `courses` array. (In reality we would be creating a new entry in a database.) Then we follow the convention of returning the new value back to the client.
### Validation
We would also typically use a JSON [schema validator](Validation.md) to simplify the process of checking that the `req` body is valid before anything is sent to the database.
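A minimal sketch of such a helper, assuming the course shape used above (this is the `validateCourse` function that the PUT example below relies on):
````js
const Joi = require("joi");

// Validate a request body against the course schema
function validateCourse(course) {
  const schema = Joi.object({
    name: Joi.string().min(3).required(),
  });
  return schema.validate(course); // returns { error, value }
}
````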
## Creating PUT requests
````js
app.put("/api/courses/:id", (req, res) => {
const course = courses.find((c) => c.id === parseInt(req.params.id));
if (!course)
return res.status(404).send("A course with the given ID was not found");
const { error } = validateCourse(req.body);
if (error)
return error.details.map((joiErr) => res.status(400).send(joiErr.message));
course.name = req.body.name;
res.send(course);
});
````
## Creating DELETE requests
````js
app.delete("/api/course/:id", (req, res) => {
const course = courses.find((c) => c.id === parseInt(req.params.id));
if (!course)
return res.status(404).send("A course with the given ID was not found");
  const index = courses.indexOf(course);
courses.splice(index, 1);
res.send(course);
});
````

View file

@ -0,0 +1,48 @@
---
tags:
- Programming_Languages
- backend
- node-js
---
## Read file from directory (JSON)
````js
const fs = require("fs");
// Get raw JSON
let inputJson = fs.readFileSync("source.json");
// Convert to JS
let data = JSON.parse(inputJson);
````
## Write file to directory (JSON)
````js
let newFile = 'new.json';
// Write a JS object (e.g. `data` from the read example above) to the new file as JSON
fs.writeFileSync(newFile, JSON.stringify(data));
````
## Delete file from directory
````js
let filePath = 'file-to-delete.json';
fs.unlinkSync(filePath);
````
## Applications
### Overwrite file
````js
if (fs.existsSync(writePath)) {
fs.unlinkSync(writePath);
fs.writeFileSync(writePath, JSON.stringify(someJS));
} else {
  fs.writeFileSync(writePath, JSON.stringify(someJS));
}
````

View file

@ -0,0 +1,13 @@
---
tags:
- Programming_Languages
- backend
- node-js
- async
---
We know that Node works by managing [request-response transactions asynchronously](Single-threaded%20asynchronous%20architecture.md) but how does it achieve this? It does so via the Event Queue. This is the mechanism by which Node keeps track of incoming requests and their fulfillment status: whether the data has been returned successfully or whether there has been an error.
Node is continually monitoring the Event Queue in the background.
This makes Node ideal for applications that require a lot of disk or network I/O access. However it means it is not well-positioned to build applications that are CPU intensive such as image rendering and manipulation.
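A minimal sketch of this non-blocking behaviour (the file name is illustrative). The read is dispatched, its callback joins the queue, and the synchronous code finishes first:
````js
const fs = require("fs");

console.log("Before the I/O request");

// Dispatched to the Event Queue; the callback runs once the data is ready
fs.readFile("example.txt", "utf8", (err, data) => {
  if (err) return console.error(err);
  console.log("File contents returned");
});

console.log("After the I/O request"); // logs before the file contents
````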

View file

@ -0,0 +1,88 @@
---
tags:
- Programming_Languages
- backend
- node-js
- node-modules
---
* Much of the NodeJS core is built around an [asynchronous event-driven architecture](Single-threaded%20asynchronous%20architecture.md) in which certain kinds of objects (called "emitters") emit named events that cause `Function` objects ("listeners") to be called.
* For example: a `fs.ReadStream` emits an event when the file is opened
## Event Emitters
* All objects that emit events are instances of the `EventEmitter` class. These objects expose an `eventEmitter.on()` function that allows one or more functions to be attached to named events emitted by the object.
* These functions are listeners of the emitter.
## Basic syntax
````js
const EventEmitter = require('events') // import the module

const emitter = new EventEmitter()

// Register a listener
emitter.on('messageLogged', function() {
    console.log('The listener was called.')
})

// Raise the event
emitter.emit('messageLogged')
````
* If we ran this file, we would see `The listener was called` logged to the console.
* Without a listener (similar to a subscriber in Angular) nothing happens.
* When the emission occurs the emitter works *synchronously* through each listener function that is attached to it.
## Event arguments
* Typically we would not just emit a string, we would attach an object to the emitter to pass more useful data. This data is called an **Event Argument**.
* Refactoring the previous example:
````js
// Register a listener that receives the event argument
emitter.on('messageLogged', function(eventArg) {
    console.log('Listener called', eventArg)
})

// Raise the event, passing an object as the event argument
emitter.emit('messageLogged', {id: 1, url: 'http://www.example.com'})
````
## Extending the `EventEmitter` class
* It's not best practice to call the EventEmitter class directly in `app.js`. If we want to use the capabilities of the class we should create our own module that extends `EventEmitter`, inheriting its functionality with specific additional features that we want to add.
* So, refactoring the previous example:
````js
// File: Logger.js
const EventEmitter = require('events')
class Logger extends EventEmitter {
log(message){
console.log(message)
this.emit('messageLogged', {id: 1, url: 'http://www.example.com'})
}
}

module.exports = Logger
````
*The `this` in the `log` method refers to the properties and methods of `EventEmitter` which we have extended.*
* We also need to refactor our listener code within `app.js` so that it calls the extended class rather than the `EventEmitter` class directly:
````js
// File app.js
const Logger = require('./Logger')
const logger = new Logger()
logger.on('messageLogged', function(eventArg){
console.log('Listener called', eventArg)
})
logger.log('message')
````

View file

@ -0,0 +1,31 @@
---
tags:
- Programming_Languages
- backend
- node-js
- node-modules
---
File System is an essential built-in module of Node that contains utility methods for working with files and directories.
Every method associated with `fs` has a *blocking* (synchronous) and an *asynchronous* implementation. The former blocks the [event queue](Event%20queue.md); the latter does not.
The synchronous methods are useful to have in some contexts, but in general and with real-world applications you should be using the async implementation.
## Methods
### Read directory
Return a string array of all files in the current directory.
````js
fs.readdir('./', function(err, files) {
if (err) {
console.error(err)
} else {
console.log(files)
}
})
````
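For comparison, a sketch of the blocking counterpart: `fs.readdirSync` is the synchronous sibling of `fs.readdir`, with a try/catch replacing the error-first callback:
````js
const fs = require('fs')

try {
  // Blocks the event queue until the directory has been read
  const files = fs.readdirSync('./')
  console.log(files)
} catch (err) {
  console.error(err)
}
````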

View file

@ -0,0 +1,18 @@
---
tags:
- Programming_Languages
- backend
- node-js
---
>
> In Node every function and variable should be scoped to a module. We should not define functions and variables within the global scope.
* In Node the equivalent to the browser's `window` object is `global`. The properties and methods that belong to this object are available anywhere in a program.
* Just as we can technically write `window.console.log()`, we can write `global.console.log()`, however in both cases it is more sane to use the shorthand.
* However, if we declare a variable at the top level in browser-based JavaScript, this variable becomes accessible via the `window` object and is thus in the global scope. The same is not true for Node: if you declare a variable at this level and look it up on `global`, it will return undefined.
* This is because of Node's modular nature. If you were to define a function `foo` in a module and then also define it in the global scope, when you call `foo` the Node interpreter would not know which function to call. Hence it chooses not to recognise the global `foo`, returning undefined.
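A minimal sketch of the difference (the variable name is illustrative; run the file with `node`):
````js
var x = 5;

console.log(x);        // 5: accessible within this module
console.log(global.x); // undefined: x is scoped to the module, not to global
````
In browser-based JavaScript, the equivalent `window.x` would log `5`.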

View file

@ -0,0 +1,84 @@
---
tags:
- Programming_Languages
- backend
- node-js
- node-modules
---
The HTTP Module allows us to create a web server that listens for HTTP requests on a given port. It is therefore perfect for creating backends for client-side JavaScript.
## Creating a server
An HTTP server is another instance of an [event emitter](Events%20module.md#event-emitters). It therefore has all the same methods as the `EventEmitter` class: `on`, `emit`, `addListener` etc. This demonstrates again how much of Node's core functionality is based on event emitters.
*Creating a server*
````js
const http = require('http')
const server = http.createServer() // Create server as emitter
// Register a function to run when the 'connection' event is raised
server.on('connection', (socket) => {
console.log('new connection...')
})
server.listen(3000)
console.log('Listening on port 3000')
````
This server is functionally equivalent to a generic event emitter:
````js
const EventEmitter = require('events')
const emitter = new EventEmitter()

// Register a listener
emitter.on('messageLogged', function() {
    console.log('The listener was called.')
})
````
Whenever a request is made to this server, it raises an event. We can therefore target it with the `on` method and make it execute a function when requests are made.
If we were to start the server by running the file and we then used a browser to navigate to the port, we would see `new connection` logged every time we refresh the page.
### Sockets and `req, res`
A socket is an endpoint for client-server communication. Crucially it allows simultaneous communication both ways: the client can contact the server but the server can also contact the client. Our listener function above receives a socket in its callback, but in most cases this is quite low-level, not distinguishing requests from responses. It is more likely that you would use the `request, response` architecture in place of a raw socket:
````js
const server = http.createServer((req, res) => {
if (req.url === '/'){
res.write('hello')
res.end()
}
})
````
#### Return JSON
Below is an example of using this architecture to return JSON to the client:
````js
const server = http.createServer((req, res) => {
if (req.url === '/products'){
res.write(JSON.stringify(['shoes', 'lipstick', 'cups']))
res.end()
}
})
````
### Express
In reality you would rarely use the `http` module directly to create a server. This is because it is quite low level and each route must be checked manually, as with the `req.url` conditionals in the previous examples. Instead we use Express, which is a framework for creating servers and routing that is an abstraction on top of the core HTTP module.
* [Create RESTful API with Express](Create%20RESTful%20API%20with%20Express.md)

View file

@ -0,0 +1,74 @@
---
tags:
- Programming_Languages
- backend
- node-js
- middleware
---
## What is middleware?
* Anything that intervenes in the `req, res` cycle counts as middleware: it acts as an intermediary once the request is received but before the response is sent. A good example would be the `app.use(express.json())` or `app.use(bodyParser.json())` functions we call in order to be able to parse JSON that is sent from the client.
* You will most likely have multiple middleware functions running at once. We call this intermediary part of the cycle the **request processing pipeline**.
* Generally all middleware will be added as a property on the Express `app` instance with the `app.use(...)` syntax.
## Creating custom middleware functions
### Basic schema
````js
app.use((req, res, next) => {
  // do some middleware
next()
})
````
### `next`
The `next` parameter is key: it allows Express to move on to the next middleware function once the custom middleware executes. Without it, the request processing pipeline gets blocked. Middleware functions run in sequence, each handing control to the next, in a manner reminiscent of chained Promises (e.g. `then`).
### Example of sequence
````js
app.use((req, res, next) => {
console.log('Do process A...')
next()
})
app.use((req, res, next) => {
console.log('Do process B...')
next()
})
````
Would return the following once the server starts:
````plain
Do process A...
Do process B...
````
>
> It makes more sense of course to define our middlewear within a function and then pass it as an argument to `app.use()`
## Useful built-in middleware
### `express.static()`
>
> `app.use(express.static())`
Allows you to serve static files.
Let's say we have a file called `something.txt` that resides at `public/something.txt`.
We can expose this to Express with `app.use(express.static('public'))`. Then if we navigate to `localhost:3000/something.txt` the file will be served in the browser. (Note the `public` subdirectory is not included in the URL; files are served from the root.)
### `express.urlencoded()`
>
> `app.use(express.urlencoded())`
Generally we handle the data of API requests via a JSON body and the `express.json()` middleware. However, in cases where the client sends data URL-encoded, i.e. as `key=value&key=value` pairs (as with traditional HTML form submissions), `urlencoded` allows us to parse it.

View file

@ -0,0 +1,22 @@
---
tags:
- Programming_Languages
- backend
- node-js
- node-modules
---
## The Module Wrapper Function
When Node runs, each of our module files is wrapped within an immediately-invoked function expression that has the following parameters:
````js
(function (exports, require, module, __filename, __dirname))
````
This is called the **module wrapper function**
Note that one of these parameters is the [module object](Modules.md#structure-of-a-module).
Within any module we can access these parameters: you can think of them as metadata about the module itself. `__filename` and `__dirname` are particularly useful when writing to files and modifying directories.
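A minimal sketch of accessing them (the logged paths will differ per machine):
````js
// Any module file: these identifiers are injected by the module wrapper
console.log(__filename); // absolute path of the current module file
console.log(__dirname);  // absolute path of the directory containing it
````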

View file

@ -0,0 +1,191 @@
---
tags:
- Programming_Languages
- backend
- node-js
- node-modules
---
>
> Modules are small files where we define our variables and functions. Values defined in modules are scoped to that specific module, constituting a unique name space. This avoids name clashes in large programs.
* Every file in a Node application is considered a module.
* The variables and methods in a module are equivalent to `private` properties and methods in object-oriented programming.
* If you wish to use a function or variable defined in a module outside of its modular container you need to explicitly export it and make it public.
## Structure of a module
Node keeps an internal record of the properties of a module. To see this we can log the property `module` to the console.
````js
// index.js
console.log(module)
````
This gives us:
````json
Module {
  id: '.',
  path: '/home/thomas/repos/node-learning',
  exports: {},
  filename: '/home/thomas/repos/node-learning/index.js',
  loaded: false,
  children: [],
  paths: [
    '/home/thomas/repos/node-learning/node_modules',
    '/home/thomas/repos/node_modules',
    '/home/thomas/node_modules',
    '/home/node_modules',
    '/node_modules'
  ]
}
````
## Exports
* Whenever we export a property or method from a module we are directly targeting the `exports` property of the module object.
* Once we add exports to a file they will be displayed under that property of the module object.
* We can export the entire module itself as the export (typically used when the module is a single function or class) or individual properties.
### Exporting a whole module
*The example below is a module file that consists in a single function*
````js
module.exports = function (...params) {
// function body
}
````
Note the module is unnamed. We would name it when we import:
````js
const myFunction = require('./filename')
````
### Exporting sub-components from a module
In the example below we export a variable and function from the same module. Note only those values prefixed with `exports` are exported.
````js
exports.myFunc = (...params) => {
    // function body
}
exports.aVar = 321.3
var nonExportedVar = true
````
This time the exports are already named so we would import with the following:
````js
const { myFunc, aVar } = require("./filename");
````
We can also do the exporting at the bottom when the individual components are named:
````js
const myNamedFunc = (val) => {
return val + 1;
};
function anotherNamedFunc(val) {
return val * 2;
}
// This time we export at the bottom
exports.myNamedFunc = myNamedFunc;
exports.differentName = anotherNamedFunc; // We can use different names
// Alternatively, export them together (note: assigning to module.exports
// replaces any individual `exports.` assignments above)
module.exports = { myNamedFunc, anotherNamedFunc };
````
The import is the same:
````js
const { myNamedFunc, anotherNamedFunc } = require("./modules/multiExports");
````
## Structuring modules
The techniques above are useful to know but generally you would want to enforce a stricter structure than a mix of exported and private values in one file. The best way to do this is with a single default export.
Here the thing exported could be a composite function or an object that basically acts like a class with methods and properties.
*Export a composite single function*
````js
module.exports = () => {
foo() {...}
bar() {...}
}
````
*Export an object*
````js
module.exports = {
foo : () => {...},
bar: () => {...}
}
````
**Both of these structures would be referred to in the same way when importing and using them**
Or you could export an actual class as the default. This is practically the same as the two above, other than that you would have to use `new` to create an instance of the class.
````js
module.exports = class {
foo() {}
bar() {}
}
````
## Built-in modules
Node has numerous built-in modules that provide helpful utility methods:
* [File system module](File%20system%20module.md)
* [Events module](Events%20module.md)
## Structure of Node module methods
Every asynchronous method belonging to the in-built modules of Node has the same callback structure: the method takes a callback function whose first argument is an error handler and whose second is the (typically asynchronous) returned value. This is known as the 'error-first' callback convention.
For example:
````js
fs.readdir('./', function(err, files) {
if (err) {
console.error(err)
} else {
console.log(files)
}
})
// ['files', 'that', 'were', 'returned']
````

View file

@ -0,0 +1,35 @@
---
tags:
- Programming_Languages
- backend
- node-js
- npm
---
## List installed packages
````
npm list
````
This will return a recursive tree that lists dependencies of dependencies, and so on.
To limit the depth you can add the `--depth=` flag. For example to see only your installed packages and their versions use `npm list --depth=0`.
## View `package.json` data for an installed package
We could go to the NPM registry and view details or we can quickly view the `package.json` for the dependency with the command `npm view [package_name]`
We can pinpoint specific dependencies in the `package.json`, e.g. `npm view [package_name] dependencies`
## View outdated modules
To see whether your dependency versions are out of date, use `npm outdated`. This gives us a table, for example:
![Pasted image 20220411082627.png](../../img/Pasted%20image%2020220411082627.png)
* *Latest* tells us the latest release available from the developers
* *Wanted* tells us the version that our `package.json` rules target. To take the first dependency as an example: we must have set our SemVer syntax to `^0.4.x`, since it is telling us that there is a minor release more recent than the one we have installed but is not advising that we update to the latest major release.
* *Current* tells us which version we currently have installed regardless of the version that our `package.json` is targeting or the most recent version available.
## Updating
`npm update` only updates from *current* to *wanted*. In other words it only updates in accordance with your caret and tilde rules applied to semantic versioning.

View file

@ -0,0 +1,21 @@
---
tags:
- Programming_Languages
- backend
- node-js
- node-modules
---
Like Bash, Node utilises [environment variables](../Shell%20Scripting/Environmental%20and%20shell%20variables.md) and the syntax is the same since Node must run in a Bash environment or emulator (like GitBash on Windows).
When working in development we are able to specify the port from which we want to serve our application. In production, we do not always have this control: the port will most likely be set by the provider of the server environment.
While we may not know the specific port, whichever it is, it will be accessible via the `PORT` environment variable. So we can use this when writing our [event listeners](Events%20module.md#event-emitters):
````js
const port = process.env.PORT || 3000;
app.listen(port, () => console.log(`Listening on ${port}`));
````
This way, if a port is set by the provider it will use it. If not, it will fall back to 3000.

View file

@ -0,0 +1,21 @@
---
tags:
- Programming_Languages
- backend
- node-js
- async
---
In the context of back-end, we can think of a thread as an instance of a request-response transaction.
For example a request is made from the client for a resource contained in a database. The back-end language is an intermediary between the client machine and the server. It receives the request and returns the resource as a response.
Many backend frameworks are synchronous but multithreaded. This means that a thread can only process one request-response cycle at a time. The thread cannot initiate a new cycle until it has finished with its current cycle.
If there was only one thread, this would be inefficient and unworkable. Consequently the framework will be multi-threaded: multiple request-response cycles can be executed at once by different threads. To increase scalability of applications built with frameworks of this nature you need to be able to spawn more threads commensurate to the increased demand and this increases the resource consumption of the framework (more cores, more memory etc). Moreover it is possible to reach a point where all threads are active and no more can be spawned. In this case there will simply be delays.
In contrast, Node has only a single thread, but it works asynchronously, not synchronously. Thus it has a **single-threaded asynchronous architecture**. This means that whilst there is only a single thread, it can juggle many requests by dispatching them asynchronously: when a request is made, Node dispatches it and continues with its execution, handling new requests as they arrive. Once a dispatched request resolves, the data is returned to the main thread.
![sync-thread.svg](../../img/sync-thread.svg)
![async.svg](../../img/async.svg)

View file

@ -0,0 +1,38 @@
---
tags:
- Programming_Languages
- backend
- node-js
- validation
---
We can provide server-side validation for our projects by using a **schema validator**. This is a program that declaratively parses the JSON values received as requests from the client. This makes it easy to systematically validate the data that we receive from any HTTP requests where the client sends a body to the endpoint.
One of the most popular schema validators for NodeJS is [joi](https://www.npmjs.com/package/joi).
## Demonstration
Let's say we have a POST request that expects a single field as the body that must be a string and greater than two characters long. First we define our schema:
````js
const Joi = require("joi");

const schema = Joi.object({
name: Joi.string().min(3).required(),
});
const { error } = schema.validate(req.body);
````
The `schema` variable is an object whose keys should match those of the intended request body. Instead of actual values we provide Joi's in-built validators, concatenated as necessary. We then store the results of the validation in a variable.
Next we add handling in the case of errors:
````js
if (error) {
error.details.map((joiErr) => res.status(400).send(joiErr.message));
return;
}
````
We map over the error details and return 400s as the response if any are found. If there are no errors, the `error` property will be `undefined`.

View file

@ -0,0 +1,12 @@
---
tags:
- Programming_Languages
- shell
---
## If statements
* Conditional blocks start with `if` and end with the inversion `fi` (this is a common syntactic pattern in bash)
* The conditional expression must be placed in square brackets with spaces either side. The spaces matter: if you omit them, the code will not run
* We designate the code to run when the conditional is met with `then`
* We can incorporate else if logic with `elif`
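A minimal sketch combining these elements (the variable and thresholds are illustrative):
````bash
#!/bin/bash
count=5

if [ $count -gt 10 ]; then
    echo "Greater than ten"
elif [ $count -gt 3 ]; then
    echo "Greater than three"
else
    echo "Three or fewer"
fi
````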

View file

@ -0,0 +1,64 @@
---
tags:
- Programming_Languages
- shell
- automation
---
In Arch Linux I use `cronie` for cron jobs
````bash
# View list of cron jobs
crontab -l
# Open cron file
crontab -e
````
**Syntax**
````bash
m h d mon dow command
# minute, hour, day of month, day of week, bash script/args
# 0-59, 0-23, 1-31, 1-12, 0-6
````
**Examples**
Run on the hour every hour
````
0 * * * * mysqlcheck --all-databases --check-only-changed --silent
````
At 01:42 every day:
````
42 1 * * * mysqlcheck --all-databases --check-only-changed --silent
````
**Shorthands**
* `@reboot` Run once, at startup
* `@yearly` Run once a year, `0 0 1 1 *`
* `@annually` same as `@yearly`
* `@monthly` Run once a month, `0 0 1 * *`
* `@weekly` Run once a week, `0 0 * * 0`
* `@daily` Run once a day, `0 0 * * *`
* `@midnight` same as `@daily`
* `@hourly` Run once an hour, `0 * * * *`
**Examples**
````
@hourly mysqlcheck --all-databases --check-only-changed --silent
````
**View the logs**
````bash
sudo grep CRON /var/log/syslog
````

View file

@ -0,0 +1,160 @@
---
tags:
- Programming_Languages
- shell
---
## Important!
To understand the difference between environmental and shell variables know that:
* You can spawn child shells from the parent shell that is initiated when you first open the terminal. To do this just run `bash` or `zsh`.
* This is a self-contained new instance of the shell. This means:
* It **will have** access to environmental variables (since they belong to the parent / are global)
* It **will not have** access to any shell variables that are defined in the parent.
* **How do you get back to the upper parent shell?** Type `exit`.
* Note that:
* Custom (user-created) shell variables **do not** pass down to spawned shell instances, nor do they pass up to the parent
* Custom (user-created) environment variables do pass down to spawned shell instances but do not pass up to the parent. They are lost on `exit`.
## Questions, research
1. If you create a variable manually I guess it won't make it to any config file. How would you create a persistent variable that is added to the `.bashrc` and thus would be initialised on every session? Is this where the path comes in?
1. What methods are there for keeping track of, preserving, and jumping between spawned instances? Is this even possible or do they die on `exit` ?
## What is the shell environment and what are environment variables?
* Every time that you interact with the shell you do so within an **environment**. This is the context within which you are working and it determines your access to resources and the behaviour that is permitted.
* The environment is an area that the shell builds every time that it starts a session. It contains variables that define system properties.
* Every time a [shell session](https://www.notion.so/Shell-sessions-e6dd743dec1d4fe3b1ee672c8f9731f6) spawns, a process takes place to gather and compile information that should be available to the shell process and its child processes. It obtains the data for these settings from a variety of different files and settings on the system.
* The environment is represented by strings comprising key-value pairs. For example:
````bash
KEY=value1:value2
KEY="value with spaces":"another value with spaces"
````
As the above shows, a key can have multiple related values. Each one is demarcated with a `:`. If the value is longer than a single word, quotation marks are used.
* The keys are **variables**. They come in two types: **environmental variables** and **shell variables:**
* Environmental variables are much more permanent and pertain to things like the user and his path (the overall session)
* Shell variables are more changeable for instance the current working directory (the current program instance)
Variables can be created via config files that run on the initialisation of the session or manually created via the command line in the current session
## What are shell variables useful for?
Some deployment mechanisms rely on environmental variables to configure authentication information. This is useful because it does not require keeping these in files that may be seen by outside parties.
More generally they are used for when you will need to read or alter the environment of your system.
## Viewing shell and environmental variables
To view the settings of your current environment you can execute the `env` command which returns a list of the key-value pairs introduced above. Here are some of the more intelligible variables that are returned when I run this command:
````bash
SHELL=/usr/bin/zsh
DESKTOP_SESSION=plasma
HOME=/home/thomas
USER=thomas
PWD=/home/thomas/repos/bash-scripting
PATH=/home/thomas/.nvm/versions/node/v16.8.0/bin:/home/thomas/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_perl:/var/lib/snapd/snap/bin
````
However if you want to target a specific variable you need to invoke `printenv` with the relevant key, for example:
````bash
printenv SHELL
# SHELL=/usr/bin/zsh
````
Note that `env` and `printenv` do not show all the shell variables, only a selection. To view all the shell variables along with the environmental variables use `set`.
## Creating, exporting and deleting variable shell and environment variables
* You set shell variables using the same syntax you would within a script file:
````bash
TEST_SHELL_VAR="This is a test"
set | grep TEST_SH
# TEST_SHELL_VAR='This is a test'

# We can also print it with an echo, again exactly as we would with a shell script
echo ${TEST_SHELL_VAR}
````
* We can verify that it is not an environmental variable based on the fact that following does not return anything:
````bash
printenv | grep TEST_SH
````
* We can verify that this is a shell variable by spawning a new shell and calling it. Nothing will be returned from the child shell.
* You can upgrade a shell variable to an environment variable with `export` :
````bash
export TEST_SHELL_VAR
# And confirm:
printenv | grep TEST_SH
# TEST_SHELL_VAR='This is a test'
````
* We can use the same syntax to create new environment variables from scratch:
````bash
export NEW_ENV_VAR="A new var"
````
### Using config files to create variables
You can also add variables to config files that run on login such as your user `.bashrc` / `.zshrc`. This is obviously best for when you want the variables to persist and be accessible within every [shell session](https://www.notion.so/Shell-sessions-e6dd743dec1d4fe3b1ee672c8f9731f6).
## Important environmental and shell variables
### `PATH`
A list of directories that the system will check when looking for commands. When a user types in a command, the system will check directories in this order for the executable.
````bash
echo ${PATH}
# /home/thomas/.nvm/versions/node/v16.8.0/bin:/home/thomas/.local/bin:/usr/local/sbin:/usr/local/bin:
# /usr/bin:/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_perl:/var/lib/snapd/snap/bin
````
For example, if you wish to use `npm` commands globally (in any directory) you will need to have the requisite Node executable in your path, which you can see above.
TODO: Add more info about the path when I have it.
### `SHELL`
This describes the shell that will be interpreting any commands you type in. In most cases, this will be bash by default, but other values can be set if you prefer other options.
````bash
echo ${SHELL}
# /usr/bin/zsh
````
### `USER`
The current logged in user.
````bash
echo ${USER}
# thomas
````
### `PWD`
The current working directory.
````bash
echo ${PWD}
# /home/thomas
````

View file

@ -0,0 +1,60 @@
---
tags:
- Programming_Languages
- shell
---
## Viewing file permissions
In order to see file permissions within the terminal, use the `-l` flag (often combined with others, e.g. `-la`) with the `ls` command. Remember this command can be applied at both the directory and single-file level. For example:
````bash
drwxr-xr-x 7 thomas thomas 4096 Oct 2 19:22 angular-learning-lab
drwxr-xr-x 5 thomas thomas 4096 Oct 17 18:05 code-exercises
drwxr-xr-x 5 thomas thomas 4096 Sep 4 16:15 js-kata
drwxr-xr-x 9 thomas thomas 4096 Sep 26 18:10 sinequanon
drwxr-xr-x 12 thomas thomas 4096 Sep 19 17:41 thomas-bishop
drwxr-xr-x 5 thomas thomas 4096 Sep 4 19:24 ts-kata
````
## `chmod`
We use `chmod` for changing file permissions quickly from the command line. (Transferring ownership is handled by the separate `chown` command.)
### Octal notation
`chmod` uses octal notation. Each numeral refers to a permission set. There are three numerals. The placement denotes the user group. From left to right this is:
* user
* group
* everyone else.
If you are working solo and not with group access to files, you can disregard the other numerals by putting zeros in as placeholders.
[Permission codes](https://www.notion.so/685254916b2642f189e6316b876e09c9)
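As a reminder of how the octal digits are built up: read = 4, write = 2, execute = 1, and each digit is the sum of the permissions it grants. A couple of illustrative invocations (the file names are hypothetical):
````bash
# 7 = 4+2+1 = rwx; 6 = 4+2 = rw-; 0 = ---
chmod 700 script.sh   # owner: rwx, group: ---, others: ---
chmod 600 notes.txt   # owner: rw-, group: ---, others: ---
````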
### Example
````bash
$ chmod -v 700 dummy.txt
$ ls -l dummy.txt
-rwx------ 1 thomasbishop staff 27 13 May 15:42 dummy.txt
````
### Useful options
`-v` → verbose: tell the user what `chmod` is doing
`-R` → work recursively, i.e. apply the action to directories as well as files
`-f` → silent: suppress most error messages
## Running bash files
In most cases, especially when you are working alone, the most frequent codes you are going to need are 700 and 600. When shell scripting, you need to make your scripts executable for them to work, therefore you should always `chmod 700` when creating a `.sh` file.
Then to invoke the script from the shell you simply enter:
````bash
./your-bash-script.sh
````

View file

@ -0,0 +1,46 @@
---
tags:
- Programming_Languages
- shell
---
## Purpose of `grep`
`grep` stands for “global regular expression print”. It allows you to search plain text data sets for strings which match a regular expression or pattern.
## Syntax
### Schematic
````bash
grep [options] [pattern] [source file] > [output file]
````
Note that above we redirect the file matches to a new file. You don't have to do this. If you omit the redirection, `grep` will output to standard output.
### Applied
````bash
grep -i -n "banana" fruits.txt > banana.txt
````
The above example searches, using regex, for strings matching the pattern “banana” in the file `fruits.txt` regardless of the character case (`-i` ensures this) and outputs its findings to the file `banana.txt`, with the line number where the match occurs appended to the output (`-n` takes care of this).
Note that for simplicity, you can chain optional values together, i.e. the options in the above example could be input as `-in`.
## Useful options
* ignore case: `-i`
* count matches instead of returning actual match: `-c`
* precede each match with the line number where it occurs: `-n`
* invert the match (show everything that doesn't match the expression): `-v`
* search entire directories recursively: `-r`
* list file names where matches occur (in the scenario of a recursive match): `-l`
## `ripgrep`
`ripgrep` is generally faster; however, it does not come by default with Unix and it searches recursively by default, i.e. it is designed to find strings within files across multiple directories, not just single files or piped streams.
It also respects any `.gitignore` files that it finds within directories and skips `node_modules` by default, which is really handy.
Most of the standard `grep` options transfer over.

View file

@ -0,0 +1,50 @@
---
tags:
- Programming_Languages
- shell
- unix
---
## Unix based systems
Many operating systems are based on the UNIX software architecture. macOS/OSX (Darwin) and GNU/Linux are two very popular examples. Most web servers run a version of Linux as their native OS, thus a knowledge of the command line for UNIX systems is invaluable for web developers. Windows systems are not UNIX based, so what is written here is not applicable to the Windows command line program Command Prompt, although there are obvious conceptual overlaps.
UNIX was initially developed by AT&T; the UNIX trademark is now owned by The Open Group. GNU (the basis for Linux) is a recursive acronym for "GNU's Not Unix". Functionally it is the same as Unix; it just doesn't contain the proprietary code.
## Key terms
### Kernel
The kernel is the central part of an operating system. It manages the operations of the computer and the hardware; most notably memory and CPU time. There are two types of kernels: a **micro-kernel**, which only contains basic functionality; and a **monolithic kernel**, which contains many device drivers.
### Shell
A shell is a user interface for access to an operating system's services. In general, operating system shells use either a command-line interface or GUI, depending on a computer's role and particular operation. It is named a shell because it is the outermost layer around the operating system kernel.
### Bash
Bash is a Unix shell and command language written by Brian Fox for the GNU Project as a free software replacement for the Bourne shell. First released in 1989, it has been distributed widely as the default login shell for most Linux distributions and Apple's macOS.
## What is the command line?
Command line is both a general computing concept and a specific utility. In its general sense, engaging with a computer via command line means inputting abbreviated written commands into a console or terminal window on a computer. The computer then runs these commands on the basis of the instructions you have given.
You can get the computer to do things (make files, connect to networks etc) via command line and you can use it to check the status of systems within your computer.
This mode of engagement contrasts with the engagement facilitated by modern graphical user interfaces (GUIs) where the user navigates via a combination of keyboard and mouse inputs and interacts with windows whose functionality is usually self-explanatory and therefore more user-friendly. Command line predates GUIs but it is considered a more efficient and expansive means of interacting with the whole of your computer especially if you are a developer or programmer.
In the more specific sense, the command line refers to the location in the console window where you input your commands. The command line consists of a user name and the current directory you are in, followed by a dollar sign and a cursor waiting for input.
## Basic orientation
When you are using the command line you are always situated somewhere within a file system, typically within your own user files. By default you start in your home directory, which is represented by the tilde symbol: `~`. This is why you see the tilde as part of the command line. In the command line we do not use the term folders; instead they are called directories. (I will follow this convention from hereon.) You do not have to worry too much about remembering the names of specific directories and files: you can always ask the computer to display them by using the `ls` command or `pwd`.
## Command line syntax
The syntax of the language used to input commands is analogous to the grammar of natural languages. We have a verb that is operative on an object/noun and which can be modified through adverbs. The syntax of a command sequence is as follows:
1. Command (verb): what we want to do
1. Option (adverb): modifying the command - always starts with a hyphen
1. Argument (noun/object): what we want our command to operate on
We will see that not all commands require arguments, but this is the general structure.
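For instance, annotating a typical command against this structure (the directory name is illustrative):
````bash
ls -l ~/Documents
# ls          -> command (verb): list directory contents
# -l          -> option (adverb): use the long listing format
# ~/Documents -> argument (noun): the directory to operate on
````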

View file

@ -0,0 +1,20 @@
---
tags:
- Programming_Languages
- shell
---
## Kill a process running on a port
For example a local server.
````bash
sudo lsof -t -i:8000
# -t prints only the process ID (terse output); -i:8000 selects internet connections on port 8000
# Output: 7890
# This is the process ID of whatever is running on the port
sudo kill -9 7890
# Kill the process that is running there
````

View file

@ -0,0 +1,23 @@
---
tags:
- Programming_Languages
- shell
---
## Listing options
Obviously we know that in order to list the files and sub-directories in our current directory we use `ls` but here are some of the most useful of the different modifiers:
* `ls -a`
* list and include hidden dot files
* `ls -l`
* list with user permissions, file-size and date-modified (most detailed)
* `ls *`
* list organised by folders one level deep
## Navigation shorthands
* `cd -`
* Return to the directory you were last in
* `!!`
* Repeat the last command

View file

@ -0,0 +1,89 @@
---
tags:
- Programming_Languages
- shell
- arrays
---
## List variables
When we use the term **list** in bash, we are not actually referring to a specific type of data structure. Instead a **list variable** is really just a normal variable wrapped in quote marks that has strings separated by spaces. Despite the fact that this is not an actual iterative data structure, we are still able to loop through variables of this type.
````bash
A_STR_LIST="cat dog hamster"
AN_INT_LIST="1 2 3"
````
To iterate through a list variable, we can use a for loop:
````bash
for ele in $A_STR_LIST; do
echo $ele
done
````
## Brace expansion for listed variables
With a sequence of variables that follow a pattern, for example the natural numbers (1, 2, 3, 4, ...) we can represent them in a condensed format using something called **brace expansion**. For instance to represent the natural numbers from 1 through 10:
````bash
{1..10}
````
Here the **two dots** stand for the intervening values.
We can iterate through brace expanded variables just the same:
````bash
for num in {1..4}; do
echo $num
done
````
## Arrays
We define an array as follows:
````bash
words=(here are some words)
````
We can also explicitly define an array using `declare`:
````bash
declare -a words=("element1" "element2" "element3")
````
### Index notation
We access specific array elements by their index using the same braces style we use with variables:
````bash
echo ${words[2]}
# element3
````
### Iterating through arrays
````bash
for i in "${words[@]}"
do
echo "$i"
# or do whatever with individual element of the array
done
# element1 element2 element3
````
Note that `@` here is a special symbol standing for all the members of the `words` array.
## Looping through file system
The following script loops through all files in a directory that begin with `l` and which are of the bash file type (`.sh`) :
````bash
for x in ./l*.sh; do
echo -n "$x "
done
echo
````

View file

@ -0,0 +1,50 @@
---
tags:
- Programming_Languages
- shell
---
## Relation between commands and programs
Whenever we issue a command in bash we are really running an executable program that is associated with the command. This is why when we create our own bash scripts we must run `chmod` to make them executables. When we issue a command like `./file.sh` we are running an executable program.
How come, however, when we use a program like `cd` or `npm`, do we not have to type `./cd.sh` or `./npm.sh`? Remember from our discussion of the `PATH` environment variable that whenever we use inbuilt commands like `ls` and `cd` we are automatically sourcing them from the binary directories because these directories are in our `PATH`. Hence the shell knows in advance what these commands mean. In the case of custom scripts, these aren't typically added to the `PATH`, so we have to source them directly in order to run them.
## Passing arguments
If you think about it, a script is really just a function that runs when you source it. As such there needs to be a way for you to pass data to the function so that it can actually act like a function and take arguments. When we use for example `cd ./Desktop` we are passing a directory name as an argument to the `cd` program. We can do the same thing with our custom bash scripts.
To pass an argument we simply add the values after the script in the command. For example:
````bash
./arguments.sh Thomas 33
````
The script is as follows:
````bash
#!/bin/bash
echo "File is called $0"
echo "The arguments provided are $@"
echo "The first argument is $1"
echo "The second argument is $2"
echo "Your name is $1 and you are $2 years old"
````
This outputs:
````
File is called ./arguments.sh
The arguments provided are Thomas 33
The first argument is Thomas
The second argument is 33
Your name is Thomas and you are 33 years old
````
Some points to note on syntax. The `$` is used to individuate the script itself and its arguments.
* Each argument passed is accessible from an index starting at `1` (`$1`)
* The script itself occupies the `0` position, hence we are able to log the name of the script at line 1 with `$0`
* To log the arguments as a group (for instance to later loop through them) we use `$@`.
* To get the number of arguments use `$#`

View file

@ -0,0 +1,21 @@
---
tags:
- Programming_Languages
- shell
---
## Redirection operator
The symbol `>` is called the **redirection operator** because it redirects the output of a command to another location. You most frequently use this when you want to save contents to a file rather than standard output.
````bash
ls | grep "d*" > result.txt
````
## Appending operator
We use `>>` to append contents on the next available line of a pre-existing file. Continuing on from the example above:
````bash
echo 'These are the files I just grepped' >> result.txt
````

View file

@ -0,0 +1,29 @@
---
tags:
- Programming_Languages
- shell
---
## Types of shell session
Shell sessions can be one of or several instances of the following types:
* **login shell**
* A session that must be authenticated such as when you access remote resources using SSH
* **non-login shell**
* Not the above
* **interactive shell**
* A shell session that runs in the terminal and thus that the user can *interact* with
* **non-interactive shell**
* A shell session that runs without a terminal
If you are working with a remote server you will be in an **interactive login shell**. If you run a script from the command line you will be in a **non-interactive non-login shell**.
## Shell sessions and access
The type of shell session that you are currently in affects the [environmental and shell variables](https://www.notion.so/Environmental-and-shell-variables-04d5ec7e8e2b486a93f002bf686e4bbb) that you can access. This is because the order in which configuration files are read on initialisation differs depending on the type of shell.
* a session defined as a non-login shell will read `/etc/bash.bashrc` and then the user-specific `~/.bashrc` file to build its environment.
* A session started as a login session will read configuration details from the `/etc/profile` file first. It will then look for the first login shell configuration file in the user's home directory to get user-specific configuration details.
In Linux, if you want the environmental variable to be accessible from both login and non-login shells, you must put them in `~/.bashrc`

View file

@ -0,0 +1,22 @@
---
tags:
- Programming_Languages
- shell
- aliases
---
>
> A symbolic link, also termed a soft link, is a special kind of file that points to another file. Unlike a hard link, a symbolic link does not contain the data in the target file. It simply points to another entry somewhere in the file system.
## Syntax
````
ln -s -f ~/[existing_file] ~/.[file_you_want_to_symbolise]
````
Real example:
````
ln -s -f ~/dotfiles/.vimrc ~/.vimrc
````

View file

@ -0,0 +1,96 @@
---
tags:
- Programming_Languages
- shell
---
## Sorting strings: `sort`
If you have a `.txt` file containing text strings, each on a new line you can use the sort function to quickly put them in alphabetical order:
````bash
sort file.txt
````
Note that this will not save the sort, it only presents it as a standard output. To save the sort you need to direct the sort to a file in the standard way:
````bash
sort file.txt > output.txt
````
### Options
* `-r`
* reverse sort
* `-c`
* check if file is already sorted. If not, it will highlight the strings which are not sorted
## Find and replace: `sed`
The `sed` program can be used to implement find and replace procedures. In `sed`, find and replace are covered by the substitution command: `s`:
````bash
sed 's/word/replacement word/' file.txt
````
This however will only change the first instance of the word on each line; in order to apply the substitution to every instance you need to append the global flag `g` to the expression: `'s/word/replacement word/g'`.
As sed is a stream editor, any changes you make using it will only occur within the standard output; they will not be saved to file. In order to save to file you need to specify a new file output (using `> output.txt`) in addition to the original file. This has the benefit of leaving the original file untouched whilst ensuring the desired outcome is stored permanently.
Alternatively, you can use the `-i` option, which will make the changes in place, directly in the source file.
Note that this will overwrite the original version of the file and it cannot be regained. If this is an issue then it is recommended to include a backup command in the overall argument like so:
````bash
sed -i.bak 's/word/replacement word/' file.txt
````
This will create the file `file.txt.bak` in the directory you are working within which is the original file before the replacement was carried out.
### Remove duplicates
We can use the `sort -u` command to remove duplicates:
````bash
sort -u file.txt
````
It is important to sort before attempting to remove duplicates since the `-u` flag works on the basis of the strings being adjacent.
## Split a large file into multiple smaller files: `split`
Suppose you have a file containing 1000 lines. You want to break the file up into five separate files, each containing two hundred lines. You can use `split` to accomplish this, like so:
````bash
split -l 200 big-file.txt new-file-
````
`split` will name the resulting five files as follows:
* new-file-aa
* new-file-ab
* new-file-ac
* new-file-ad
* new-file-ae
If you would rather have numeric suffixes, use the option `-d`. You can also split a file by its number of bytes, using the option `-b` and specifying a constituent file size.
## Merge multiple files into one with `cat`
We can use `cat` to read multiple files at once and then append a redirect to save them to a single file:
````bash
cat file_a.txt file_b.txt file_c.txt > merged-file.txt
````
## Count lines, words, etc: `wc`
To count lines, words and bytes:
````bash
wc file.txt
````
When we use the command three numbers are outputted, in order: lines, words, bytes.
You can use modifiers to get just one of the numbers: `-l` (lines), `-w` (words), `-c` (bytes).

View file

@ -0,0 +1,79 @@
---
tags:
- Programming_Languages
- shell
---
We know that `$PATH` is an [environment variable](Environmental%20and%20shell%20variables.md). It is an important one because it keeps track of certain directories. Not just any directories but the directories **where executables are found**.
Whenever any command is run, the shell looks up the directories contained in the `PATH` for the target executable file and runs it. We can see this is the case by using the `which` command which traces the executable of bash commands. Take the `echo` program:
````bash
which echo
/usr/bin/echo
````
Or `npm` :
````bash
which npm
/home/trinity/.nvm/versions/node/v16.10.0/bin/npm
````
By default the path will always contain the following locations:
* `/usr/bin`
* `/usr/sbin`
* `/usr/local/bin`
* `/usr/local/sbin`
* `/bin`
* `/sbin`
All the inbuilt terminal programs reside at these locations and most of them are at `/usr/bin`. This is why they run automatically without error. If you attempt to run a program that doesn't reside at these locations then you will get an error along the lines of "program x is not found in PATH".
## Structure of the PATH
````bash
/home/trinity/.nvm/versions/node/v16.10.0/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/usr/lib/wsl/lib:/mnt/c/Python39/Scripts/:/mnt/c/Python39/:/mnt/c/Windows/system32:/mnt/c/Windows:/mnt/c/Windows/System32/Wbem:/mnt/c/Windows/System32/WindowsPowerShell/v1.0/:/mnt/c/Windows/System32/OpenSSH/:/mnt/c/Program Files/dotnet/:/mnt/c/Program Files/nodejs/:/mnt/c/ProgramData/chocolatey/bin:/mnt/c/Users/thomas.bishop/AppData/Local/Microsoft/WindowsApps:/mnt/c/Users/thomas.bishop/AppData/Local/Programs/Microsoft VS Code/bin:/mnt/c/Users/thomas.bishop/AppData/Local/Programs/Hyper/resources/bin:/mnt/c/Users/thomas.bishop/AppData/Roaming/npm
````
## Adding to the PATH
Only the default directories load to the PATH on every session. How then can we add custom directories to the path without them being lost every time we start a new session? Remember that the user config `.bashrc` loads on init for every bash session. Therefore, if we set the custom path in this file, it will be added every time we start a session. This is why when you add a new program it often asks you to append a script to the bottom of your `.bashrc`.
For example, at the bottom of my `.zshrc` on my work computer I have:
````bash
export CHROME_BIN=/mnt/c/Program\ Files\ \(x86\)/Google/Chrome/Application/chrome.exe
````
This enables me to access the Chromium binaries from my terminal session (needed for running Angular tests), but it doesn't add them to the path; it creates an environment variable on every session.
For demonstration, let's add a user's desktop directory to the PATH.
First we go to the `.bashrc` and add the `export` command. [Remember](https://www.notion.so/Environmental-and-shell-variables-04d5ec7e8e2b486a93f002bf686e4bbb) that this is the command for creating a new environment variable:
````bash
export PATH="$PATH:~/Desktop"
````
We force a reload of the `.bashrc` with the command:
````bash
source ~/.bashrc
````
Then we can check this directory has been added to the path with an echo:
````bash
echo $PATH
...:~/Desktop
````
## Relation between commands and programs
Whenever we issue a command in bash we are really running an executable program that is associated with the command. After all, this is why when we create our own bash scripts we must run `chmod` to make them executables.
When we issue `./file.sh` we are running an executable program.
How come, however, when we use a program like `cd` or `npm`, do we not have to type `./cd.sh` or `./npm.sh`?

View file

@ -0,0 +1,36 @@
---
tags:
- Programming_Languages
- shell
---
The following are useful built-in utility methods that you can use for checking and validation in the course of your bash scripts.
## Flags
### Prevent newline
Prevent bash from adding a new line after an echo with the `-n` flag. By default, each `echo` ends with a newline:
````bash
echo 'Your name is Thomas'
echo 'and you are 33 years old'
# Your name is Thomas
# and you are 33 years old
````
````bash
echo -n 'Your name is Thomas '
echo 'and you are 33 years old'
# Your name is Thomas and you are 33 years old
````
## Operators
### Mathematical
````bash
-eq  # equal to
-ne  # not equal to
-lt  # less than
-le  # less than or equal to
-gt  # greater than
-ge  # greater than or equal to
````

View file

@ -0,0 +1,82 @@
---
tags:
- Programming_Languages
- shell
---
## Types
>
> There is no typing in bash!
* Bash variables do not have types, thus bash is neither loosely nor strictly typed. Anything you assign with the `=` operator becomes a character string variable.
* Bash is however able to distinguish numerical strings which is why arithmetic operations and comparisons work.
* Consequently there is no `null` type either. The closest thing is an empty string, i.e. `APPROX_NULL=""` .
## Variables
### Variables that hold character strings
As noted, we use the `=` symbol to create a variable:
````bash
PRIM_VAR_STR="My first variable"
PRIM_VAR_FLOAT="50.3"
PRIM_VAR_BOOL="true"
````
As there is no typing in bash, the names of these variables are purely notional.
To invoke a variable we use special brackets:
````bash
echo ${PRIM_VAR_STR} # My first variable
echo ${PRIM_VAR_FLOAT} # 50.3
echo ${PRIM_VAR_BOOL} # true
````
* there is no requirement to use capitals for variables but it can be helpful to distinguish custom variables from program variables (see below)
* quotation marks at declaration are also not strictly necessary however they can help avoid bugs. Also serves as a reminder that every type is basically a string at the end of the day
### Variables that hold references to programs
We can store a reference to a bash program with slightly different syntax:
````bash
user="$(whoami)"
````
When we want to invoke a program variable we don't need to use brackets:
````bash
echo $user # thomasbishop
````
>
> Note that when we declare anything in bash (any time `=` is used) we **do not use spaces!** If you do, the variable will not be set.
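A quick sketch of what goes wrong with spaces. The error shown is what bash typically reports, since it parses the variable name as a command:
````bash
GREETING="hello"   # works
GREETING = "hello" # fails: bash: GREETING: command not found
````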
## Declarations
You can achieve a sort of typing through the `declare` keyword, although bear in mind this is not enforced and you do not have to use it.
### `-r` : readonly
````bash
declare -r var1="I'm read only"
````
Roughly equivalent to a `const` : if you attempt to change the value of `var1` it will fail with an error message.
### `-i` : integer
````bash
declare -i var2="43"
````
The script will treat all subsequent occurrences of `var2` as an integer.
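A sketch of the effect. Assignments to a `-i` variable are evaluated arithmetically, so plain text (an unset variable name, as far as bash is concerned) collapses to `0`:
````bash
declare -i var2="43"

var2="hello"
echo $var2 # 0

var2=10+5
echo $var2 # 15
````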
### `-a` : array
````bash
declare -a anArray
````
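A minimal sketch of using the declared array:
````bash
declare -a anArray
anArray+=("first")
anArray+=("second")

echo ${anArray[0]}  # first
echo ${anArray[@]}  # first second
echo ${#anArray[@]} # 2
````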

View file

@ -0,0 +1,17 @@
---
tags:
- Programming_Languages
- typescript
---
`any` is a TS-specific type. We can think of it as a higher level parent to all the other types that exist in JS and TS. It means in effect that either no type declaration has been made or that the TS compiler cannot infer the type that you mean. Because `any` does not have a data type it is equivalent to all the individual scalar and reference types. In TS this kind of type is called a **supertype**, and specific types that actually correspond to a scalar or reference type are known as **subtypes**. `any` is the supertype of all types and `string` (e.g.) is a subtype of `any`.
>
> Every value of `string` can be assigned to its supertype`any` but not every value of `any` can be assigned to its subtype `string`
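A sketch of the asymmetry in practice. Note that the compiler will also let `any` flow into a `string` slot, which is precisely why it is unsafe:
````tsx
let anything: any = 42;

// any disables checking, so this compiles...
const s: string = anything;
// ...but s actually holds the number 42 at runtime,
// so s.toUpperCase() would throw a TypeError

// The other direction is always safe: every string is an any
anything = 'hello';
````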
You can declare `any` as a type if you wish, however it is discouraged because it effectively undermines the whole purpose of TS. Doing so is basically the same thing as declaring a value in normal JS: there is no designation at left-hand assignment of which type the data belongs to.
>
> `any` reflects JavaScript's overarching flexibility; you can see it as a backdoor to a world where you want neither tooling nor type safety.
`any` means you can escape errors during development. If you are using custom types/interfaces and you keep getting an annoying error saying that property X doesn't exist on type, `any` will allow you to overcome it until you go back later and refine.

View file

@ -0,0 +1,112 @@
---
tags:
- Programming_Languages
- typescript
---
## Type declarations
TypeScript offers full type annotations for classes, as well as several TypeScript-specific options (control access modifiers, interfaces etc) that seek to bring JavaScript into closer alignment with more strict object-oriented languages like Java and C#. Here we just focus on the basics.
````jsx
class Age {
constructor(personName, birthYear) {
this.personName = personName;
this.birthYear = birthYear;
}
currentYear() {
return new Date().getFullYear();
}
get age() {
return this.currentYear() - this.birthYear;
}
get dataOutput(){
return `${this.personName} is ${this.age} years old`;
}
}
````
````tsx
class Age {
personName: string; // property types must be declared up front (see 'Without constructor' below)
birthYear: number;
constructor(personName: string, birthYear: number) {
this.personName = personName;
this.birthYear = birthYear;
}
currentYear(): number {
return new Date().getFullYear();
}
get age(): number {
return this.currentYear() - this.birthYear;
}
get dataOutput(): string {
return `${this.personName} is ${this.age} years old`;
}
}
````
The main points to note are:
* methods must specify their return type, as with functions
* the constructor function must specify its parameters' types
* we must declare the types of any properties we intend to use at the start of the class.
### Instantiating a class
In order to create an object instance of `Age`, we can use the standard constructor function, viz:
````jsx
const mum = new Age('Mary Jo', 1959);
console.log(mum);
/* Age { personName: 'Mary Jo', birthYear: 1959 } */
````
But given that classes define objects, we can also now use `Age` as a new custom type and define an object that way
````jsx
const thomas: Age = new Age('Thomas', 1988);
````
### Without constructor
If your class does not use a constructor, you still need to define your class property types at the top:
````tsx
class Dummy {
aNum: number = 4;
get getSquare(): number {
return this.aNum * this.aNum;
}
}
````
## Interfaces
In most cases the difference between using the `type` and `interface` keywords when defining a custom type is marginal; however, interfaces are specifically designed for classes and OOP-style programming in TypeScript. This is obviously most apparent in a framework like Angular where interfaces are used heavily.
When we use an interface with a class we are asserting that the class must have certain properties and methods in order to qualify as that type. This is most helpful when you are working with several developers and want to ensure consistency.
Let's say we have the following interface:
````tsx
interface Person {
firstName: string,
secondName: string,
age: number,
employed: () => boolean
}
````
Now we want to create a class that must share this shape. We go ahead and create the class and say that it **implements** `Person` :
````tsx
class Programmer implements Person {
// If the below are not included, TS will generate an error
firstName: string;
secondName: string;
age: number;
employed: () => boolean;
}
````

View file

@ -0,0 +1 @@

View file

@ -0,0 +1,56 @@
---
tags:
- Programming_Languages
- typescript
- functions
---
## Function overloads
Function overloading is not a feature of JavaScript but something close to it can be achieved with TypeScript. It proceeds by defining multiple function types (defined above the function) that may serve as the actual function's parameters. Then with the actual function, you leave the changeable parameters open as optional unions and/or `unknown` :
````ts
// First overload type:
function logSearch(term: string, options?: string): void;
// Second overload type:
function logSearch(term: string, options?: number): void;
// Implementation:
function logSearch(term: string, p2?: unknown) {
let query = `https://searchdatabase/${term}`;
if (typeof p2 === "string") {
query = `${query}/tag=${p2}`;
console.log(query);
} else {
query = `${query}/page=${p2}`;
console.log(query);
}
}
logSearch("apples", "braeburn");
logSearch("bananas", 3);
````
````ts
// First overload type:
function logSearchUnion(term: string, options?: string): void;
// Second overload type:
function logSearchUnion(term: string, options?: number): void;
// Implementation:
function logSearchUnion(term: string, p2?: string | number) {
let query = `https://searchdatabase/${term}`;
if (typeof p2 === "string") {
query = `${query}/tag=${p2}`;
console.log(query);
} else {
query = `${query}/page=${p2}`;
console.log(query);
}
}
logSearchUnion("melon", "honey-dew");
logSearchUnion("oranges", 4);
````

View file

@ -0,0 +1,63 @@
---
tags:
- Programming_Languages
- typescript
---
## Basic typing within a function: arguments and return values
With functions we can apply types to the return value, the parameters and any values that are included within the function body.
````tsx
function search(query: string, tags: string[]): string { return query; }
````
We can also specify optional parameters with use of the `?` symbol:
````tsx
function search(query: string, tags?: string[]): string { return query; }
````
### Utilising custom types
---
Whilst we can use standard JS types with the parameters and return value, the real benefit comes when you use custom types. For instance we can specify that an object passed to a function must match the shape of a custom type or interface. Similarly we can ensure that for functions that return objects, the object that is returned must satisfy the shape of the custom object.
````tsx
async function getContributorData(
contributorName: string
): Promise<IContributor> {}
````
For example, this function has a return signature which indicates that it will return a promise matching a type of shape `IContributor`
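For context, a minimal end-to-end sketch. The `IContributor` shape and the endpoint URL here are assumptions for illustration, not from the original:
````tsx
// Hypothetical shape of the returned data:
interface IContributor {
  name: string;
  commits: number;
}

async function getContributorData(
  contributorName: string
): Promise<IContributor> {
  // Hypothetical endpoint:
  const response = await fetch(`https://api.example.com/contributors/${contributorName}`);
  return response.json() as Promise<IContributor>;
}
````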
## Typing functions themselves
---
As well as typing the values that a function receives and returns, you can type the function itself. **This is most useful when you are using higher-order functions and passing functions as parameters to another function.** In these scenarios you will want to type the function that is being passed as a parameter. There are several ways to do this. We'll use the following basic function as our demonstration:
````tsx
function hoFunc(integer: number, addFunction: any): number {
return addFunction(integer);
}
````
### Use `typeof`
````tsx
// Declare an adding function
const addTwo = (int: number) => int + 2;
// Apply it:
hoFunc(3, addTwo);
// We can now define the higher-order function with a specific type:
function hoFunc(integer: number, addFunction: typeof addTwo): number {
return addFunction(integer);
}
````
This way we just use the native `typeof` keyword to assert that any call of `hoFunc` must pass a function with the same type as `addTwo`.
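`typeof` is one option. Another common approach (an alternative sketch, not from the original) is to write the function type inline, so any function with a matching signature is accepted:
````tsx
function hoFunc(integer: number, addFunction: (int: number) => number): number {
  return addFunction(integer);
}

hoFunc(3, (int) => int + 2);  // 5
hoFunc(3, (int) => int * 10); // 30
````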

View file

@ -0,0 +1,95 @@
---
tags:
- Programming_Languages
- typescript
---
## Installation and configuration
---
TypeScript offers custom installations for most modern JS-based frameworks including Webpack, React.js and Vue.js. The instructions below cover minimal TS set-up outside of a specific framework. TypeScript adds an additional step to any build process since your code must compile to JS before any bundling and transpilation can occur. This is why using the custom installations helps to de-complicate things.
## Installation
````
mkdir typescript-project
cd typescript-project
npm i typescript --save-dev
````
## Initialise project
````
npx tsc --init
````
## Basic configuration
````json
"compilerOptions": {
"target" : "es2020", //es2015 for prod
"module" : "es2020",
"allowJs": true,
"checkJs": true,
"strict": true,
"esModuleInterop": true
}
````
## Specify output paths (for production code)
````json
"compilerOptions": {
...
"outDir": "dist",
"sourceMap": true,
}
````
## Compile-time commands
````
tsc --noEmit
````
Type-checks the project without emitting any compiled JS. Add `--watch` to re-run the check on every file change:
````
tsc --noEmit --watch
````
## Global requirements
````
npm install -g typescript
npm install -g ts-node
````
`ts-node` allows you to run TS files the way you would run standard JS with `node filename`; you just use `ts-node filename` instead.
This is a convenience that saves you compiling from TS → JS and then running Node against the compiled JS, which is useful for debugging and checking values as you work.
## Imports and exports
You may wish to define your custom types outside of their application, for instance in a `/types/` directory. The custom type or types can then be imported using standard ES6 imports:
````tsx
export type Article = {
title: string,
price: number,
}
export type AnotherType = {
...
}
````
````tsx
import { Article, AnotherType } from './types/allTypes.js';

// A default import like the following would require a default export:
// import Article from './types/allTypes.js';
````
TypeScript also provides a specific import keyword `type` so that you can distinguish type imports from other module imports. This is handy when you have multiple components in the one file:
````tsx
import type { Article } from "./types/allTypes.js";
````

View file

@ -0,0 +1,216 @@
---
tags:
- Programming_Languages
- typescript
---
## Scalar data types
The most basic type level is the one already present in JavaScript itself: the primitive data types: `boolean`, `string` and `number` . These are the types that will be returned from a `typeof` expression.
You can explicitly declare these data types yourself when you create a variable with `var`, `const`, or `let` if you like, but it is generally unnecessary since TypeScript is intelligent enough to perform **type inference** and will know from what you write which type you mean. However with complex code or code that requires mutations, it may be helpful for your own understanding to explicitly declare the type. The syntax is as follows:
````tsx
const age: number = 32;
const name: string = 'Thomas';
const one: boolean = true;
const zero: boolean = false;
````
## Reference types
This is where you want to start declaring types explicitly.
### Arrays
With arrays that you populate at declaration, the type will be inferred, but I think it will be helpful to declare them anyway, to help with debugging. If you declare an unpopulated/empty array, it is necessary to declare the type explicitly since there are no values to infer from.
````tsx
const store: number[] = [1, 2, 3]; // Populated array
const emptyStore: number[] = []; // Empty array
````
### Objects
Objects (and classes) are where TypeScript becomes really useful and powerful, especially when you fuse custom types and shape.
In Typescript you don't really have type annotations for the key pairs of an object. This is to say: you don't declare the types as you write the object. Instead you declare a custom type, which is a type-annotated object, and then create instances of that object which **match the shape** of the custom declaration.
So say we have this object:
````jsx
const age = {
name: 'Thomas',
yearOfBirth: 1988,
currentYear: 2021,
ageNow: function(){
return this.currentYear - this.yearOfBirth;
}
};
````
We could write this as type with the following:
````tsx
let Age : {
name: string,
yearOfBirth: number
currentYear: number,
ageNow: () => number
}
````
We use `:` because we are declaring a type, not initialising a value of the type.
We could now re-write the first `age` object as an object of type `Age` :
````tsx
let thomas: typeof Age;
thomas = {
name: 'Thomas',
yearOfBirth: 1988,
currentYear: 2021,
ageNow: function () {
return this.currentYear - this.yearOfBirth;
},
};
````
In practice, defining the type, then asserting that a new variable is of this type, and then initialising it is rather long-winded. It is better practice to simplify the process by creating a type alias.
````tsx
type Age = {
name: string,
yearOfBirth: number,
currentYear: number,
ageNow():number, // we type the method on the basis of the value it returns
}
````
We could then create objects based on this:
````tsx
const thomas: Age = {
name: 'Thomas',
yearOfBirth: 1988,
currentYear: 2021,
ageNow: function () {
return this.currentYear - this.yearOfBirth;
},
};
````
Note that we pass in `:Age` as our type declaration, using the custom type in the same way as we would use `:string` or `number[]`. We can now use this custom type as a type annotation anywhere we use type annotations; it works exactly the same way as a scalar or reference type in our code.
Note that when we do this we are using a **type alias**. `Age` is an alias for the type that `thomas` conforms to.
The benefit is that TS will correct you if you:
* attempt to assign a value that does not match the custom type declaration (for instance: assigning a string value to a property you have typed as number)
* attempt to add a property that is not specified in the custom type declaration
Although you can subsequently extend the custom type (see below)
### Interlude: object methods
In our example we include a method in the definition of the custom `Age` type. This is fine but it means that when we create instances of `Age` like `thomas` , we have to constantly rewrite the same method with each new instance:
````tsx
...
ageNow: function () {
return this.currentYear - this.yearOfBirth;
},
````
This is always going to be the same so it violates DRY to write it every time. In these cases it would be better to either use a class (since the method would carry over to each instance of the class) or, if you want to remain within the realm of objects, create a function that takes an `Age` type as a parameter and then applies the method, for instance:
````tsx
function ageNow(person: Age): number {
return person.currentYear - person.yearOfBirth;
}
console.log(ageNow(thomas)) // 33
````
See below for more info on functions [link] and classes [link].
For more info, see the discussion I started on /r/typescript: [Object methods in TypesScript](https://www.reddit.com/r/typescript/comments/m8rck4/object_methods_in_typesscript/)
### Interlude: duck typing 🦆
>
> Types are defined by the collection of their properties not their name.
Typescript's implementation of types is as a **structural type system**, which contrasts with a nominal type system. This is often referred to colloquially as 'duck typing': *if it looks like a duck, walks like a duck, and sounds like a duck, it probably is a duck*.
With custom (object) types this means that the following expression of an object of type `Age` doesn't generate an error: TS is satisfied that the shapes match.
````tsx
const martha = {
  name: 'Martha',
  yearOfBirth: 1997,
  currentYear: 2021,
  ageNow: function () {
    return this.currentYear - this.yearOfBirth;
  },
  gender: 'female',
};
const addition: Age = martha;
````
But if we tried to add this extra property whilst defining `martha` as an instance of the custom type `Age` , we would get an error:
````tsx
const martha: Age = {
  name: 'Martha',
  yearOfBirth: 1997,
  currentYear: 2021,
  ageNow: function () {
    return this.currentYear - this.yearOfBirth;
  },
  gender: 'female',
};
````
````
Type '{ name: string; yearOfBirth: number; currentYear: number; ageNow: () => number; gender: string; }' is not assignable to type 'Age'. Object literal may only specify known properties, and 'gender' does not exist in type 'Age'.
````
It means that even though in the following, the variable `point` is never declared to be of the type `Point` , it matches the shape of the custom type. As the structural integrity is maintained, it can be passed to the function without error.
````tsx
interface Point {
x: number;
y: number;
}
function logPoint(p: Point) {
console.log(`${p.x}, ${p.y}`);
}
// logs "12, 26"
const point = { x: 12, y: 26 };
logPoint(point);
````
Shape matching only requires the object to have at least the fields the type specifies; extra fields are ignored:
````tsx
const point3 = { x: 12, y: 26, z: 89 };
logPoint(point3); // logs "12, 26"
const rect = { x: 33, y: 3, width: 30, height: 80 };
logPoint(rect);
````
## Interfaces
For most purposes the keywords `type` and `interface` are interchangeable. For me, the main decider is that Angular favours `interface` over `type`.
An interface is a concept that crosses over from strict OOP.
>
> In Object Oriented Programming, an Interface is a description of all functions that an object must have in order to be an "X". Again, as an example, anything that "ACTS LIKE" a light, should have a `turn_on()` method and a `turn_off()` method. The purpose of interfaces is to allow the computer to enforce these properties and to know that an object of TYPE T (whatever the interface is ) must have functions called X,Y,Z, etc. **An interface is about actions that are allowed, not about data or implementation of those actions.**
>
> But think also about the real semantics of the word: an interface could be a gear stick, a light switch or a door lock accessed with a key. Interfaces allow an external consumer to interact with a complex system that lies behind the interface. In code, the interface represents the ways to use the capabilities of the object.
So in standard OOP, interfaces concern the functions that an object possesses. We can include function typings in TS interfaces but generally an interface/type outlines the structure of a JS *object*.

View file

@ -0,0 +1,7 @@
---
tags:
- Programming_Languages
- typescript
---
Type narrowing is the process of working out from a supertype like `any` or `unknown` what type the value should be in the course of your code. Related to this is type guarding: ensuring that a value is of the suitable type as a factor of control flow, for instance using `typeof` to ensure that an input is numerical before proceeding with a function's logic.
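A minimal sketch of a `typeof` type guard narrowing an `unknown` input:
````tsx
function double(input: unknown): number {
  // Inside this branch TS narrows input from unknown to number
  if (typeof input === 'number') {
    return input * 2;
  }
  throw new Error('Expected a number');
}

double(21); // 42
````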

View file

@ -0,0 +1,7 @@
---
tags:
- Programming_Languages
- typescript
---
You might think that a good use case for `any` is a scenario where you don't know in advance what the data type is going to be. In order to mark this, you put `any` there as a placeholder. Actually TS provides a type for this specific purpose: `unknown`. Like `any`, `unknown` is equivalent to every type in TS (it is a supertype) but it is deliberately inhibiting, in contrast to `any`. When you use `unknown` you have to narrow the type before you can do anything with the value. So if your code starts with `unknown`, you should work through the possible types with type guards; if you try to use the value while it is still `unknown`, the compiler will raise an error.
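A sketch of the contrast:
````tsx
const a: any = 'hello';
a.toUpperCase(); // compiles, no questions asked

const u: unknown = 'hello';
// u.toUpperCase(); // error: 'u' is of type 'unknown'

if (typeof u === 'string') {
  u.toUpperCase(); // fine: narrowed to string
}
````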

View file

@ -0,0 +1,16 @@
---
tags:
- Logic
- Set_Theory
- theorems-axioms-laws
---
The basic notions of set theory are defined in [Basic properties of sets](Basic%20properties%20of%20sets.md). There we introduced a formal syntax that will be utilised to define the axioms. For easy reference:
- variables $a,b,c,...$ to range over sets
- variables $x,y,z$ to range over ordinary objects as well as sets.
## Axiom of Extensionality
Sets which contain the same members are the same set. If sets A and B contain the same elements then A = B.
$$\forall a \forall b [\forall x (x \in a \longleftrightarrow x \in b) \rightarrow a =b]$$
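For instance, extensionality is why neither repetition nor order in a list description makes any difference to the identity of a set:
$$\{1, 2, 3\} = \{3, 1, 2\} = \{1, 1, 2, 3\}$$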

View file

@ -0,0 +1,112 @@
---
tags:
- Logic
- Set_Theory
---
## Set theory
Set theory is a sub-discipline of both mathematics and formal logic. In mathematics it is used as a universal framework for describing other mathematical theories. It is also utilised in computer science and linguistics.
It is useful because it provides tools for modelling an extraordinary variety of structures.
>
> Set theory and the theory of infinite sets was created by Georg Cantor (1845-1918), a German mathematician.
## Method of formalisation
We can use the symbols of predicate logic to simplify and clarify natural language expression of set-theoretic principles. There are different ways to do this but we will use the standard quantifiers and:
* variables $a,b,c,...$ to range over sets
* variables $x,y,z$ to range over ordinary objects as well as sets.
More generally we will use capital Latin letters ($A, B, ...$) to denote some specific set, i.e. not a generalised/quantified notion of a set.
### Example
'Everything is a member of some set or another':
$$ \forall x \exists a (x\in a) $$
## What are sets?
A set is a collection of objects. In mathematics the objects are mathematical objects.
A **finite set:**
$$ BG = \{ \textsf{Barry, Maurice, Robin} \} $$
An **infinite set:**
$$ I = \{1, 2, 3, 4, ...\} $$
>
> When we use braces to indicate the members of a set we are providing a **list description** of the set.
## Set membership
If a set S is a collection of objects, to say that object x is a member of S is just to say that x is one of those objects.
We might also express this in natural language as:
* the object x is an element of the set S
* the object x belongs to S
* the set S contains the object x
Formally, we use epsilon to express set membership:
$$ x \in A $$
This asserts that x is a member of the set A.
The negation of this proposition is expressed:
$$ x \notin A $$
This asserts that x is not a member of the set A.
### Subsets
>
> Set A is a subset of set B if every member of A is also a member of B.
For example the set of women is a subset of the set of humans because every woman is a human. We express subset relations like so:
$$ A \subseteq B $$
This asserts that set A is a subset of set B.
The negation of this proposition is expressed:
$$ A \not\subseteq B $$
We must not confuse the relation of being a subset with being a member. Jane is a member of the set of women but Jane is not a subset of the set of women since Jane is not herself a set, she is an object/individual member.
There is also the notion of a **proper subset.**
>
> If subset *A* of *B* is a proper subset of *B* then *B* contains some elements that are not in *A*.
In other words, if B contains objects other than/ in addition to A.
$$ A \subset B $$
This asserts that set A is a proper subset of set B.
For example, the set of women is a proper subset of the set of humans because the set of humans also includes the set of men. If there were only women and no men, then the set of women would still be a subset of the set of humans, but not a proper subset: the two sets would be identical.
### Supersets
If A is a subset of B then we say that B is a **superset** of A. Being a superset, B contains every element of A and may also contain other elements. This is just a different way of asserting that A is a subset of B.
$$ B \supseteq A $$
This asserts B is a superset of A. The negation:
$$ B \not\supseteq A $$
This asserts that B is not a superset of A.
## Resources
[Set symbols](https://www.mathsisfun.com/sets/symbols.html)

View file

@ -0,0 +1,37 @@
---
tags:
- Software_Engineering
---
````
3.4.1 === major.minor.patch
````
* Major
* New feature which may potentially cause breaking changes to applications dependent on the previous major version.
* Minor
* New features that do not break the existing API
* Patch
* Bug fixes for the current minor version
## Glob patterns for versioning
### Caret
Interested in any version so long as the major version remains at $n$. E.g. if we are at `^4.2.1` and we upgrade, we are OK with `4.5.3` or `4.8.2`. We are not bothered about the minor or patch version.
This is equivalent to `4.x`
### Tilde
Interested in any patch version within set major and minor parameters. For example `~1.8.3` means you don't mind any patch version so long as it is a patch for `1.8`. This is equivalent to `1.8.x`.
### No tilde or caret
Use the *exact* version specified
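A sketch of how these appear in a `package.json` (the package names and versions are just placeholders): the caret entry accepts any `4.x.x` at or above `4.2.1`, the tilde entry accepts any `1.8.x` at or above `1.8.3`, and the bare version is pinned exactly.
````json
{
  "dependencies": {
    "some-lib": "^4.2.1",
    "other-lib": "~1.8.3",
    "pinned-lib": "2.0.0"
  }
}
````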

View file

@ -0,0 +1,220 @@
---
tags:
- Software_Engineering
- publication
---
## General
### Meyer's Uniform Access Principle
>
> All services offered by a module should be available through a uniform notation, which does not betray whether they are implemented through storage or through computation
This is a clear recommendation for using getters and setters with classes. You should not see method calls outside of the class; they should appear as properties of the object.
## Don't Repeat Yourself
>
> Every piece of knowledge must have a single, unambiguous, authoritative representation within a system
## The Principle of Orthogonality
This notion comes from geometry. Two lines are orthogonal to each other if they form a right-angle when they meet.
Their meeting isn't the important part. Think of a simple x, y graph:
>
> If you move along one of the lines, **your position projected onto the other doesn't change**
In computing this is expressed in terms of **decoupling** and is implemented through modular, component-based architectures. As much as possible code should be scoped narrowly so that a change in one area does not cause changes in others. By keeping components discrete it is easier to make changes, refactor, improve and extend the codebase.
>
> We want to design components that are self-contained: independent and with a single, well-defined purpose. When components are isolated from one another, you know that you can change one without having to worry about the rest. As long as you don't change that component's external interfaces, you can be comfortable that you won't cause problems that ripple through the entire system.
### Benefits of orthogonality: productivity
* Changes are localised so development time and testing time are reduced
* Orthogonality promotes reuse: if components have specific, well-defined responsibilities, they can be combined with new components in ways that were not envisioned by their original implementors. The more loosely coupled your systems, the easier they are to reconfigure and reengineer.
* Assume that one component does *M* distinct things and another does *N* things. If they are orthogonal and you combine them, the result does *M x N* things. However if the two components are not orthogonal, there will be overlap, and the result will do less. You get more functionality per unit effort by combining orthogonal components.
### Benefits of orthogonality: reduced risk
* Diseased sections of code are isolated. If a module is sick, it is less likely to spread the symptoms around the rest of the system.
* Overall the system is less fragile: make small changes to a particular area and any problems you generate will be restricted to that area.
* Orthogonal systems are better tested because it is easier to run and design discrete tests on modularised components.
>
> Building a unit test is itself an interesting test of orthogonality: what does it take to build and link a unit test? Do you have to drag in a large percentage of the rest of the system just to get a test to compile or link? **If so, you've found a module that is not well decoupled from the rest of the system**
### Relationship between DRY and orthogonality
With DRY you're looking to minimize duplication within a system, whereas with orthogonality, you reduce the interdependency among the system's components. If you use the principle of orthogonality combined closely with the DRY principle, you'll find that the systems you develop are more flexible, more understandable and easier to debug, test, and maintain.
### Reversibility
The principles of orthogonality and DRY result in code that is reversible. This means it is able to change in an agile way when the circumstances of its use and deployment change. This is important because when developing software in a business setting, the best decisions are not always made the first time around. By following the principles it should be relatively easy to change your program's interfaces, platform and scale. In other words, with the principle of orthogonality and DRY, refactoring becomes less of a chore.
## Prototyping and Tracer Bullets
'Tracer bullets' are used by the military for night warfare. They are phosphorous bullets that are included in the magazines of guns alongside normal bullets. They are not intended to kill but instead light-up the surrounding area making it easier to see the terrain and target more efficiently.
The authors use the notion of tracer bullets as a metaphor for developing software at the early stages of a project. This is **not** the same thing as prototyping. A tracer bullet model is useful for building things that haven't been built before. They exist to 'shed light' on the project's needs and to help the client understand what they want.
They differ from prototypes in that they include integrated overall functionality but in a rough state. Whereas prototypes are more for singular, specific subcomponents of the project. Because tracer bullet models are joined-up in this way, even if they turn out to be inappropriate in some regard, they can be adapted and developed into a better form, without losing the core functionality.
>
> Tracer bullets work because they operate in the same environment and under the same constraints as the real bullets. They get to the target fast, so the gunner gets immediate feedback. And from a practical standpoint they are a relatively cheap solution. To get the same effect in code, we're looking for something that gets us from a requirement to some aspect of the final system quickly, visibly and repeatably.
>
> Tracer code is not disposable: you write it for keeps. It contains all the error-checking, structuring, documentation and self-checking that a piece of production code has. It simply is not fully functional. However, once you have made an end-to-end connection among the components of your system, you can check how close to the target you are, adjusting as necessary.
### Distinguishing from prototyping
>
> Prototyping generates disposable code. Tracer code is lean but complete, and forms part of the skeleton of the final system. Think of prototyping as the reconnaissance and intelligence gathering that takes place before a single tracer bullet is fired.
## Design by contract
To understand DBC we have to think of a computational process as involving two stages: the call and the execution of the routine that happens in response to the call (henceforth **caller** and **routine**).
* the caller could be a function expression that invokes a function and passes arguments to it expecting a given output. The function that executes is the routine
* the caller could be an object instantiation that calls a method belonging to its parent class
* the caller could be a parent React component that passes props to a child component
Design by contract means specifying clear and inviolable rules detailing what must obtain at both the call stage and the routine stage if the process is to execute.
Every function and method in a software system does something. Before it starts that something, the routine may have some expectation of the state of the world and it may be able to make a statement about the state of the world when it concludes. These expectations are defined in terms of preconditions, postconditions, and invariants. They form the basis of a **contract** between the caller and the routine. Hence *design by contract*.
### Preconditions
Preconditions specify what must be true in order for the routine to be called. In other words, the requirements of the routine: what it needs and what should be the case before it even considers executing the task. **A routine should never get called when its preconditions would be violated**.
### Postconditions
Providing the preconditions are met, this is what the routine is guaranteed to do. In other words: the state of affairs that must obtain after the routine has run.
### Invariants
Once established, the preconditions and postconditions should not change. If they need to change, that is a separate process and contract. In the processing of a routine, the data may be variant relative to the contract, but by the end the overall conditions establish the equilibrium of the contract.
There is an analogue here with functional programming philosophy: the function should always return the same sort of output, without ancillary processes happening, i.e side-effects.
One way to achieve this is to be miserly when setting up the contract, which overlaps with orthogonality. Only specify the minimum return on a contract rather than multiple postconditions; piling on postconditions only increases the likelihood that the contract will be breached at some point. If you need multiple postconditions, spread them out and achieve them in a compositional way, with multiple separate and modular processes.
>
> Be strict in what you will accept before you begin, and promise as little as possible in return. If your contract indicates that you'll accept anything and promise the world in return, then you've got a lot of code to write!
### Division of responsibilities
>
> If all the routine's preconditions are met by the caller, the routine shall guarantee that all postconditions and invariants will be true when it completes.
Note that the emphasis of responsibilities is on the caller.
Imagine that we have a function that returns the count of an array of integers. It is not the job of the count routine to verify that it has been passed integers and then to execute the count. Or, in the event that it is not passed integers, to mutate the data to integers and then execute.
This should be resolved by the caller: it is the responsibility of the caller to pass integers. If it doesn't, the routine simply crashes or raises an exception. It doesn't try to accommodate the input because that does not come down on its side of the contract. The caller has failed to meet the preconditions. If, due to some bug, the routine receives integers and fails to output the count, then it has failed on its side.
### Example: type checking
An obvious example of this philosophy is when you perform checks or validation within your code (although validation is more of an issue when you are dealing with user data, not your own internal code). For instance using type checking with dynamically-typed languages.
When we use the `prop-types` library with React we are specifying preconditions: so long as the prop (effectively the caller) passed to the component (effectively the routine) is of type X, the component will render invariantly as R. If the prop is of type Y, an exception will be raised highlighting a breach in the contract.
Another example would be more advanced type checking with JavaScript written using TypeScript.
## The Law of Demeter
Demeter's Law has applicability chiefly when programming with classes.
It's a fancy name for a simple principle summarised by 'don't talk to strangers'. Demeter's law is violated when code has more than one step between classes. You should avoid invoking methods of an object returned by another method. You should only use your own methods when dealing with it.
### Formal
A method *m* of object *O* may only invoke the methods of the following kinds of objects:
* *O* itself
* *m*'s parameters
* any objects created or instantiated within *m*
* *O*'s direct component objects (in other words nested objects)
* a global variable (over and above *O*) accessible by *O*, within the scope of *m*
## Model, View, Controller design pattern
The key concept behind the MVC idiom is separating the model from both the GUI that represents it and the controls that manage the view.
* **Model**
* The abstract data model representing the target object
* The model has no direct knowledge of any views or controllers
* **View**
* A way to interpret the model. It subscribes to changes in the model and logical events from the controller
* **Controller**
* A way to control the view and provide the model with new data. It publishes events to both the model and the view
For comparison, distinguish React from MVC. In React data is unidirectional: the JSX component as controller cannot change the state. The state is passed down to the controller. Also MVC lends itself to separation of technologies: code used to create the View is different from Code that manages Controller and data Model. In React it's all one integrated system.
## Refactoring
>
> Rewriting, reworking, and re-architecting code is collectively known as refactoring
### When to refactor
* **Duplication**: you've discovered a violation of the DRY principle
* **Non-orthogonal design**: you've discovered some code or design that could be made more orthogonal
* **Outdated knowledge**: your knowledge about the problem and your skills at implementing a solution have changed since the code was first written. Update and improve the code to reflect these changes
* **Performance**: you need to move functionality from one area of the system to another to improve performance
### Tips when refactoring
* Don't try to refactor and add new functionality at the same time!
* Make sure you have good tests before you begin refactoring. Run the tests as you refactor. That way you will know quickly if your changes have broken anything
* Take short, deliberative steps. Refactoring often involves making many localised changes that result in a larger-scale change.
## Testing
>
> Most developers hate testing. They tend to test-gently, subconsciously knowing where the code will break and avoiding the weak spots. Pragmatic Programmers are different. We are *driven* to find our bugs *now*, so we don't have to endure the shame of others finding our bugs later.
### Unit testing
A unit test is code that exercises a module. It consists in testing each module in isolation to verify its behaviour. Unit testing is the foundation of all other forms of testing. If the parts don't work by themselves, they probably won't work well together. All the modules you are using must pass their own unit tests before you can proceed.
We can think of unit testing as **testing against contract** (detailed above). We want to test that the module delivers the functionality it promises over a wide range of test cases and boundary conditions.
Scope for unit testing should cover:
* Obviously, returning the expected value/outcome
* Ensuring that faulty arguments/ types are rejected and initiate error handling (deliberately breaking your code to ensure it is handled appropriately)
* Pass in the boundary and maximum value
* Pass in values between zero and the maximum expressible argument to cover a range of cases
Benefits of unit testing include:
* It creates an example to other developers how to use all of the functionality of a given module
* It is a means to build **regression tests** which can be used to validate any future changes to the code. In other words, the future changes should pass the older tests to prove they are consistent with the code base
### Integration testing
Integration testing shows that the major subsystems that make up the project work and play well with each other.
Integration testing is really just an extension of the unit testing described, only now you're testing how entire subsystems honour their contracts.
## Commenting your code
In general, comments should detail **why** something is done, its purpose and its goal. The code already shows *how* it's done, so commenting on this is redundant, and violates the DRY principle.
>
> We like to see a simple module-level comment, comments for significant data and type declarations, and a brief class and per-method header describing how the function is used and anything it does that is not obvious
````js
/*
Find the highest value within a specified data range of samples
Parameter: aRange = range of dates to search for data
Parameter: aThreshold = minimum value to consider
Return: the value, or null if no value found that is greater than or equal to the threshold
*/
````

View file

@ -0,0 +1,16 @@
---
tags:
- Theory_of_Computation
- assembly
---
The Little Man Computer was an example of a computer programmed in machine code → this can be very hard to decipher as it functions at such a low level of abstraction. For this reason there is a class of programming languages called **assembly languages**.
Assembly languages are slightly higher level languages that can easily be converted into machine code. Think of them as one stage up. They are slightly easier to manage and understand. The conversion is carried out by something called an **assembler**.
The table below is an example of how the LMC machine code could be converted into assembly.
![Pasted image 20220319180227.png](../img/Pasted%20image%2020220319180227.png)
>
> While Assembly is rarely used in modern computer programming, it is worthwhile spending a little time experimenting with this set of languages. Programming in Assembly can give you an appreciation for how much complexity is abstracted away by modern languages, and also what are the hardware and software limitations of modern computers. Although programmers nowadays use high-level programming languages, before computers can run them, they must be translated (or compiled) into machine code, which as you have seen is very close to Assembly.

View file

@ -0,0 +1,56 @@
---
tags:
- Theory_of_Computation
- electronics
- binary
---
Now that we know how to add and multiply using binary numbers we can apply this knowledge to our previous understanding of circuits.
Our aim will be to have our inputs as the numbers that we will add or multiply, and our outputs as the sum or product.
## Half adder
Let's start with the most basic example:
*Half adder circuit*
![maths_with_logic_gates_1.png](../img/maths_with_logic_gates_1.png)
This circuit has the following possible range of outputs, where A and B are the input switches and X and Y are the output signals. The logic gates (an `XOR` and an `AND` ) are equivalent to the add function.
````
A B X Y
_ _ _ _
0 0 0 0
0 1 0 1
1 0 0 1
1 1 1 0
````
We can see that if we treat A and B as single binary digits (either `0` or `1`), then the X and Y outputs can be viewed collectively as the two-digit binary sum of A and B, with X as the twos column and Y as the units column (we have put the denary equivalent in brackets):
````
A B X Y
_ _ _ _
0 0 0 0 0 + 0 = 00 [0]
0 1 0 1 0 + 1 = 01 [1]
1 0 0 1  1 + 0 = 01 [1]
1 1 1 0  1 + 1 = 10 [2]
````
This is called a half adder because it cannot count beyond binary `10` (denary 2): it has no carry input of its own, so it can only serve as the first column of a multi-digit addition.
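The half adder's logic can be sketched in code (TypeScript, purely for illustration): the sum output is the `XOR` of the inputs and the carry output is the `AND`:
````ts
function halfAdder(a: number, b: number): { sum: number; carry: number } {
  return {
    sum: a ^ b,   // XOR: 1 when exactly one input is 1
    carry: a & b, // AND: 1 only when both inputs are 1
  };
}

halfAdder(1, 1); // { sum: 0, carry: 1 }, i.e. binary 10 = denary 2
````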
### Representing binary output as denary values
There are special output components that can represent the combination of binary inputs and logic gates as denary values. Here is an example using a **seven-segment display** :
![maths_with_logic_gates_5.gif.crdownload](../img/maths_with_logic_gates_5.gif.crdownload)
## Full adder
To represent sums higher than denary 2 we need a carrying function: a carry input that lets a single column add three bits, taking its totals up to denary 3 (`11`). The limit of a half adder is $2^1$.
We do this by adding another switch input:
![maths_with_logic_gates_7.gif](../img/maths_with_logic_gates_7.gif)

View file

@ -0,0 +1,71 @@
---
tags:
- Theory_of_Computation
- Mathematics
- binary
---
## Binary addition
When we add two binary numbers we use place value and carrying as we do in the denary number system. The only difference is that when we reach two in one column (`10`) we put a zero and carry the `1` to the next column.
For example:
````
1101 + 0111 // 13 + 7
---------------------
    1 1 0 1
  + 0 1 1 1
  _________
  1 0 1 0 0
carries: 1 1 1
````
Let's break down each column from the right:
* `1` and `1` is two. As two is `10` in binary, we place a zero and carry the `1`
* In the next column we have `1` and `0` which is one, but because we have carried the previous `1` we have two again, so we put a `0` and again carry a `1`
* Now we have `1` and `1` which is two, but we also have the carried `1` which makes three. In binary three is `11` so we put a `1` and carry the extra `1`
* This gives us two (`10`) in the final column, and as there are no more columns to carry into we write `10` itself in the final place
* In total we have `10100`, which is twenty
### More examples to practise with
![Pasted image 20220319174839.png](../img/Pasted%20image%2020220319174839.png)
## Binary multiplication
Let's remind ourselves of how we do long multiplication within the denary number system:
$$ 36 * 12 $$
So we multiply the bottom unit by the top unit and the top ten and then repeat the process with the bottom ten and sum the results.
````
36
12
__
2 * 6 = 12
2 * 30 = 60
10 * 6 = 60
10 * 30 = 300
_____________
432
````
It is the same in binary multiplication but is actually easier because we are only ever multiplying by ones and zeros.
When we multiply binary numbers in columns we multiply each of the top numbers by the bottom in sequence and then sum the results as in denary.
An important difference is that as we move along the bottom row from the $2^0$ digit to the $2^1$ digit, to the $2^2$ digit and so on, we must put a zero in the preceding column as a place holder. The sequence is shown below:
![multiplication_01.gif](../img/multiplication_01.gif)
![multiplication_02.gif](../img/multiplication_02.gif)
![multiplication_03.gif](../img/multiplication_03.gif)
![multiplication_04.gif](../img/multiplication_04.gif)

View file

@ -0,0 +1,10 @@
---
tags:
- Theory_of_Computation
---
So far, when talking about binary values we have referred to them as combinations of 1s and 0s or $2^2, 2^3$ etc.
But actually there are nouns for the different groupings of binary digits:
![Pasted image 20220319175450.png](../img/Pasted%20image%2020220319175450.png)

View file

@ -0,0 +1,50 @@
---
tags:
- Theory_of_Computation
- history
---
>
> A general-purpose computer is one that, given the appropriate instructions and required time, should be able to perform most common computing tasks.
This sets a general purpose computer aside from a special-purpose computer, like the one you might find in your dishwasher, which may have its instructions hardwired or coded into the machine. Special purpose computers only perform a single set of tasks according to prewritten instructions. We'll take the term *computer* to mean general purpose computer.
Simplified model of what a computer is:
![1.4-Input-Process-Output.png](../img/1.4-Input-Process-Output.png)
Although the input, output and storage parts of a computer are very important, they will not be the focus of this course. Instead we are going to learn all about the process part, which will focus on how the computer is able to follow instructions to make calculations.
## Supplementary Resources
### Early computing (*Crash Course Computer Science)*
[Early Computing: Crash Course Computer Science #1](https://www.youtube.com/watch?v=O5nskjZ_GoI)
* The abacus was created because the scale of society had become greater than what a single person could create and manipulate in their mind.
* Eg thousands of people in a village and tens of thousands of cattle
* In a basic abacus each row of beads represents a different power of ten
* As well as aiding calculation, the abacus acts as a primitive storage device
* Similar early computing devices: astrolabe, slide rule, sunrise clocks, tide clocks
>
> As each increase in knowledge, as well as on the contrivance of every new tool, human labour becomes abridged. **Charles Babbage**
* One of the first computers of the modern era was the Step Reckoner built by Leibniz in 1694.
* In addition to adding, this machine was able to multiply and divide basically through hacks because from a mechanical point of view, multiplications and divisions are just many additions and subtractions
* For example, to divide 17/5, we just subtract 5, then 5, then 5 again until we can't subtract any more: three subtractions, with two left over
* But as these machines were expensive and slow, people used pre-computed tables in book form generated by human computers. Useful particularly for things like square roots.
* Similarly range tables were created that aided the military in calculating distances for gunboat artillery which factored in contextual factors like wind, drift, slope and elevation. These were used well into WW2 but they were limited to the particular type of cannon or shell
![Screenshot_2020-08-09_at_21.32.54 1.png](../img/Screenshot_2020-08-09_at_21.32.54%201.png)
![Screenshot_2020-08-09_at_21.34.48.png](../img/Screenshot_2020-08-09_at_21.34.48.png)
>
> Before the invention of actual computers, 'computer' was a job-title denoting people who were employed to conduct complex calculations, sometimes with the aid of machinery, but most often not. This persisted until the late nineteenth century when the word changed to include devices like adding machines.
* Babbage sought to overcome this by designing the **Difference Engine**, which was able to compute polynomials: complex mathematical expressions that have constants, variables and exponents. He failed to complete it in his lifetime because of the complexity and number of intricate parts required. His model was eventually successfully built in the 1990s using his designs, and it worked.
* But while he was coming up with this he also conceived of a better and general purpose computing device that wasn't limited to polynomial calculations → the Analytical Engine.
* It could run operations in sequence and had memory and a primitive printer. It was way ahead of its time and was never completed.
* Ada Lovelace wrote hypothetical programs for the Analytical Engine, hence she is considered the world's first computer programmer.
* At this point then, computing was limited to scientific and engineering disciplines but in 1890, the US govt needed a computer in order to comply with the constitutional stipulation to have a census every ten years. This was getting increasingly difficult with the growing population - it would take more than 13 years to complete. This led to the punch cards designed by Herman Hollerith. From this IBM was born

View file

@ -0,0 +1,69 @@
---
tags:
- Theory_of_Computation
- Logic
- Electronics
- binary
---
>
> Now that we are familiar with the individual logic gates and their truth conditions we are in a position to create **logic circuits**. These are combinations of logic gates controlled by inputs that can provide a range of useful outputs.
## Basic example
In the below circuit we have the following gates connected to two inputs with one output, moving through the following stages:
1. `AND`, `NOT`, `NOT`
2. `AND`, `NOR`
This is equivalent to the following truth table:
````
A B Output
_ _ _____
0 0 0 (1)
1 0 1 (2)
0 1 1 (3)
1 1 0 (4)
````
![Screenshot_2020-08-31_at_13.52.25.png](../img/Screenshot_2020-08-31_at_13.52.25.png)
*Line 1 of the truth table*
![Screenshot_2020-08-31_at_13.52.34.png](../img/Screenshot_2020-08-31_at_13.52.34.png)
*Line 2 and 3 of the truth table (equivalent to each other)*
![Screenshot_2020-08-31_at_13.52.42.png](../img/Screenshot_2020-08-31_at_13.52.42.png)
*Line 4 of the truth table*
## Applied example
With this circuit we have a more interesting applied example.
It corresponds to an automatic sliding door and has the following states:
* a proximity sensor that opens the doors when someone approaches from the outside
* a proximity sensor that opens the doors when someone approaches from the inside
* a manual override that locks both approaches (inside and out) meaning no one can enter or leave
Here's a visual representation:
![logic_circuits_5.gif](../img/logic_circuits_5.gif)
The following truth table represents this behaviour, with A and B as the two sensors, C as the override and X as the door action (0 = open, 1 = closed):
````
A B C X
_ _ _ _
0 0 0 0
1 0 0 0
0 1 0 0
1 1 0 0
0 0 1 0
1 0 1 1
0 1 1 1
1 1 1 1
````
![Screenshot_2020-08-31_at_14.12.48.png](../img/Screenshot_2020-08-31_at_14.12.48.png)
*Automatic door sensor with manual override*

View file

@ -0,0 +1,254 @@
---
tags:
- Theory_of_Computation
- Logic
- Electronics
- binary
---
## Logic gates
Logic gates are the basic building blocks of digital computing. **A logic gate is an electrical circuit that has one or more than one input and only one output.** The input controls the output and is isomorphic with logical conditions that can be expressed in the form of truth-tables.
### Truth tables
I know from my study of logic that truth tables enable us to present the conditions under which logical propositions are true or false. To take the `AND` operator: `AND` evaluates to `true` if both of its constituent expressions are `true` and `false` in any other circumstances (e.g. if one proposition is `true` and the other `false` (or vice versa) and if both propositions are `false` ).
This is most clearly expressed in the following truth table:
**Truth table for `AND`**
````
p q p & q
_ _ _____
t t t
t f f
f t f
f f f
````
Another example is the negation (`NOT`) operator in logic which is highly trivial. The negation operator (`¬` or `~` ) switches the value of a proposition from true to false. When we put `~` before `true` it becomes false and when we put `~` before `false` it becomes `true`. We will see shortly that this corresponds to a basic on/off switch.
**Truth table for `NOT`**
````
p ~ p
_ __
t f
f t
````
## NAND gates
A NAND gate is a logic gate that combines the truth conditions for `AND` and `NOT`.
Let's first introduce the circuit:
The real-life circuit showing two switches corresponding to two transistors which control the LED light.
![NAND_from_transitors.png](../img/NAND_from_transitors.png)
In this circuit, there are two transistors, each connected to a switch. The switches control the LED light. So the switches are the input and the LED is the output.
For clarity, we are not going to draw both transistors; we will simplify the diagram with a symbol that stands for the NAND gate:
![NAND.png](../img/NAND.png)
>
> Remember that a 'logic gate' is a logical abstraction of a physical process: the voltage passing through a transistor. The transistors register the charge and the switches control its flow; the 'gate' is just the combination of transistors and how they are arranged. There is not a physical gate per se, there is only the transistor whose output we characterize in terms of logic.
The diagram below shows how the circuit models the truth conditions for `NAND`:
Diagram representing NAND gate:
![NAND.gif](../img/NAND.gif)
* When both switches are off (corresponding to `false` `false`) the output is on (the bulb lights up).
* If either one of the switches is on, the output remains on (corresponding to `true` `false` or `false` `true`).
* It is only when both switches are on that the output is off (corresponding to `true` `true`).
>
> Remember that switch circuitry is counter intuitive: the switches being on corresponds to the output ceasing to execute because the switches break the circuit, they don't join it.
## Transliterating the logic truth table to the switch behaviour
We can now present a truth table for NAND alongside the truth conditions for `AND` and `NOT`
````
// AND
p q p & q
_ _ _____
t t t (1)
t f f (2)
f t f (3)
f f f (4)
// NOT
p ~ p
_ __
t f
f t
````
````
A B Output
_ _ _____
0 0 1 (1)
1 0 1 (2)
0 1 1 (3)
1 1 0 (4)
````
* So we can see that the binary representation of the circuit accords with `NOT` at rows (1) and (4): when both switches are off (`false`), the bulb is on (`true`); and when both switches are on (`true`), the bulb is off (`false`).
* Rows (2) and (3) of the binary truth table accord with rows (2) and (3) of the `AND` truth table: if one switch is `true` and the other `false`, the conjunction is `false`, so the NAND output is `true` (the bulb remains on).
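As a sanity check, NAND behaviour is easy to model in a few lines of Python — a sketch, with `NAND` as an illustrative name:
````
def NAND(a, b):
    # The output is 0 only when both inputs are 1
    return 0 if (a == 1 and b == 1) else 1

for a, b in [(0, 0), (1, 0), (0, 1), (1, 1)]:
    print(a, b, NAND(a, b))  # reproduces rows (1) to (4) above
````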
### More complex outputs from combining NANDs
The example we have looked at so far is fairly simple because there is just one NAND gate corresponding to two inputs (the two switches) and one output (the bulb).
When we add more NAND gates and combine them with each other in different ways we can create more complex output sequences, and these too will have corresponding truth tables.
## `NOT` gate
This gate corresponds to the Boolean `NOT`, the logical connective of negation. It is really simple: it just inverts its input, taking `true` to `false` and `false` to `true`.
### Natural language
>
> The negation operator (`¬` or `~` ) switches the value of a proposition from `true` to `false`. When we put `~` before `true` it becomes `false` and when we put `~` before `false` it becomes `true` .
### Truth table
![1-w2ILS6M9pgmLcK6V1PEs3Q.png](../img/1-w2ILS6M9pgmLcK6V1PEs3Q.png)
This corresponds to a simple on-off switch.
In terms of logic gates we would create this by using a single NAND gate. Although it can take a total of two inputs, it would be controlled by a single switch, so both inputs would be set to `1 1` or `0 0` when the switch is activated and deactivated. This would remove the `AND` aspect of `NAND` and reduce it to `NOT` .
A NAND gate simulating NOT logic
![Screenshot_2020-08-25_at_15.09.01.png](../img/Screenshot_2020-08-25_at_15.09.01.png)
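Assuming the `NAND` function from the sketch above, this wiring is one line of Python:
````
def NOT(a):
    # Both NAND inputs are tied to the same switch, so the gate only
    # ever sees 1 1 or 0 0, and it inverts them
    return NAND(a, a)

print(NOT(1), NOT(0))  # 0 1
````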
### Symbol for `NOT` gate
NOT has its own electrical symbol to distinguish it from a NAND:
![Screenshot_2020-08-25_at_15.18.34.png](../img/Screenshot_2020-08-25_at_15.18.34.png)
## `AND` gate
Just as we can create `NOT` logic from a NAND gate, without the `AND` conditions, we can create a circuit that exemplifies the truth conditions of `AND` without including those of `NOT`.
When we attach two NAND gates in sequence, connected to two switches as input, this creates the following binary conditions:
````
A B Output
_ _ _____
0 0 0 (1)
1 0 0 (2)
0 1 0 (3)
1 1 1 (4)
````
This is identical to the truth table for `AND`:
````
p q p & q
_ _ _____
t t t (1)
t f f (2)
f t f (3)
f f f (4)
````
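Again assuming the `NAND` helper sketched earlier, the two-gate construction can be modelled directly: the first NAND produces `NOT (a AND b)`, and the second, wired as a `NOT`, inverts it back:
````
def AND(a, b):
    nand_out = NAND(a, b)            # first gate: NOT (a AND b)
    return NAND(nand_out, nand_out)  # second gate wired as a NOT

print(AND(1, 1), AND(1, 0), AND(0, 1), AND(0, 0))  # 1 0 0 0
````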
### Natural language
>
> `AND` (`&`) is `true` when both constituent propositions are `true`, and `false` in all other circumstances, viz. `false false` (`¬P & ¬Q` / `0 0`), `true false` (`P & ¬Q` / `1 0`), and `false true` (`¬P & Q` / `0 1`).
AND at 0 0
![Screenshot_2020-08-25_at_15.04.10 1.png](../img/Screenshot_2020-08-25_at_15.04.10%201.png)
AND at 1 0 or 0 1
![Screenshot_2020-08-25_at_15.05.20.png](../img/Screenshot_2020-08-25_at_15.05.20.png)
![Screenshot_2020-08-25_at_15.05.36.png](../img/Screenshot_2020-08-25_at_15.05.36.png)
### Symbol for `AND` gate
It's very similar to the NAND symbol, so be careful not to confuse the two:
![Pasted image 20220319173651.png](../img/Pasted%20image%2020220319173651.png)
## `OR` gate
>
> `OR` (in logic known as **disjunction**) in its non-exclusive form is `true` if either of its propositions is `true`, or both are `true`. It is `false` otherwise.
![Pasted image 20220319173819.png](../img/Pasted%20image%2020220319173819.png)
````
p q p V q
_ _ _____
t t t (1)
t f t (2)
f t t (3)
f f f (4)
````
## `XOR` gate
>
> `XOR` stands for **exclusive or**, also known as **exclusive disjunction**. This means it is `true` only when exactly one of its propositions is `true`. If both are `true` the exclusivity fails, so the overall statement is `false`. This is the only change in the truth conditions from `OR`.
![Pasted image 20220319173834.png](../img/Pasted%20image%2020220319173834.png)
Electrical symbol for XOR
````
// XOR
p q p X V q
_ _ ________
t t f (1)
t f t (2)
f t t (3)
f f f (4)
````
## `NOR` gate
>
> This is equivalent to saying 'neither' in natural language. It is `true` only when both propositions are `false`. If either one of the propositions is `true`, or both are, the outcome is `false`.
![Pasted image 20220319173900.png](../img/Pasted%20image%2020220319173900.png)
## `XNOR` gate
>
> This one is harder to parse at first. It is the negation of `XOR`: it is `true` if both propositions are `false` (like `NOR`) or if both are `true`, and `false` otherwise. In other words, it is `true` exactly when the two propositions have the same truth value.
````
// NOR
p q p ¬V q
_ _ ________
t t f (1)
t f f (2)
f t f (3)
f f t (4)
````
````
// XNOR
p q p X¬V q
_ _ ________
t t t (1)
t f f (2)
f t f (3)
f f t (4)
````
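All four of these gates can likewise be composed from NANDs alone. The following Python sketch uses the standard textbook constructions (not taken from these notes) and prints each gate's column of the tables above:
````
def NAND(a, b):
    return 0 if (a == 1 and b == 1) else 1

def OR(a, b):
    # NOT(NOT a AND NOT b) = a OR b
    return NAND(NAND(a, a), NAND(b, b))

def XOR(a, b):
    c = NAND(a, b)
    return NAND(NAND(a, c), NAND(b, c))

def NOR(a, b):
    x = OR(a, b)
    return NAND(x, x)  # invert OR

def XNOR(a, b):
    x = XOR(a, b)
    return NAND(x, x)  # invert XOR: true when the inputs match

for a, b in [(1, 1), (1, 0), (0, 1), (0, 0)]:
    print(a, b, OR(a, b), XOR(a, b), NOR(a, b), XNOR(a, b))
````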
@@ -0,0 +1 @@
I am more than ever now the bride of science. Religion to me is science, and science is religion. In that deeply-felt truth lies the secret of my intense devotion to the reading of God's natural works… And when I behold the scientific and so-called philosophers full of selfish feelings, and of a tendency to war against circumstances and Providence, I say to myself: They are not true priests, they are but half prophets — if not absolutely false ones. They have read the great page simply with the physical eye, and with none of the spirit within. The intellectual, the moral, the religious seem to me all naturally bound up and interlinked together in one great and harmonious whole… There is too much tendency to making separate and independent bundles of both the physical and the moral facts of the universe. Whereas, all and everything is naturally related and interconnected. A volume could I write on this subject…
@@ -0,0 +1,88 @@
---
tags:
- Theory_of_Computation
- Mathematics
- binary
---
## Decimal (denary) number system
Binary is a **positional number system**, just like the decimal number system. This means that the value of an individual digit is conferred by its position relative to other digits. Another way of expressing this is to say that number systems work on the basis of **place value**.
In the decimal system the columns increase by **powers of 10**. This is because there are ten digits in the system:
$0, 1, 2, 3, 4, 5, 6, 7, 8, 9$
When we have completed all the possible intervals between $0$ and $9$, we start again in a new column.
The principle of counting in decimal:
![denary.gif](../img/denary.gif)
Thus each column is ten times larger than the previous column:
* Ten \[$10^1$\] is ten times larger than one \[$10^0$\]
* A hundred \[$10^2$\] is ten times larger than ten \[$10^1$\]
We use this knowledge of the exponents of the base of 10 to read integers that contain multiple digits (i.e. any number greater than 9).
Thus 264 is the sum of
* $4 * (10^0)$
* $6 * (10^1)$
* $2 * (10^2)$
## Binary number system
In the binary number system, the columns increase by powers of two. This is because there are only two digits: 0 and 1. As a result, you are required to begin a new column every time you complete the interval from 0 to 1.
So instead of:
$$ 10^0, 10^1, 10^2, 10^3 ... (1, 10, 100, 1000) $$
You have:
$$ 2^0, 2^1, 2^2, 2^3, 2^4... (1, 2, 4, 8, 16) $$
When counting in binary, we put zeros as placeholders in the columns we have not yet filled. This helps to indicate when we need to begin a new column. Thus the counting sequence:
$$ 1, 2, 3, 4 $$
is equal to:
$$ 0001, 0010, 0011, 0100 $$
Counting in binary:
![binary.gif](../img/binary.gif)
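Python's format specifiers can reproduce this padded counting sequence, if it helps to check it:
````
for n in range(1, 5):
    print(n, format(n, "04b"))  # pad with zeros to four binary columns
# 1 0001
# 2 0010
# 3 0011
# 4 0100
````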
## Relation to Turing Machines
It's obvious that there is a clear relation between the binary number system and Turing Machines, since in their most basic instance Turing Machines work with ones and zeros. In order to get a Turing Machine to compute decimal numbers we only need to convert from decimal to binary.
### Example decimal to binary conversion
Let's convert 6 into binary:
If we have before us the binary place values ($1, 2, 4, 8$), we know that 6 is equal to **1 in the fours column and 1 in the twos column → 110**.
More clearly:
![Pasted image 20220319135558.png](../img/Pasted%20image%2020220319135558.png)
And for comparison:
![Pasted image 20220319135805.png](../img/Pasted%20image%2020220319135805.png)
Or we can express the binary as:
$$ (1 * 2) + (1 * 4) $$
Or more concisely as:
$$ 2^1 + 2^2 $$
### Another example
Let's convert 23 into binary:
![Pasted image 20220319135823.png](../img/Pasted%20image%2020220319135823.png)
![binary_to_denary.gif](../img/binary_to_denary.gif)
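The same place-value reasoning can be automated. Here is a minimal sketch (the function name is illustrative) using repeated division by two, where each remainder is the digit for the next column:
````
def to_binary(n):
    # Each division by 2 peels off one place-value column;
    # the remainder is that column's digit
    digits = ""
    while n > 0:
        digits = str(n % 2) + digits
        n //= 2
    return digits or "0"

print(to_binary(6))   # 110
print(to_binary(23))  # 10111
````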
@@ -0,0 +1,30 @@
---
tags:
- Theory_of_Computation
- turing
---
## What is a Turing Machine?
A Turing Machine consists of an infinitely long tape that has been divided up into cells. Each cell can contain either a 1, a 0 or an empty space. Above one cell of the tape sits a head, which can move left or right and can read the symbols written in the cells. The head is also capable of erasing symbols and writing new symbols into the cells.
![Turing_machines_01.gif](../img/Turing_machines_01.gif)
The direction that the head moves, which values it erases, and which values it writes are dependent on a set of instructions provided to the machine.
Different sets of instructions can be divided into **states**. States are like subroutines and can themselves feature as part of instructions.
For example:
### State 2
* If 0 then erase
* Write 1 then move right
* Go to state 5
### State 5
* If 1, then erase
* Write 0 then move left
* Go to state *n*
Alan Turing proved that **any problem that is computable** can be computed by a Turing Machine using this simple system.
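A minimal simulator makes the tape, head and state mechanics concrete. This is a sketch with an invented rule format (a dictionary mapping state and symbol to an action), not Turing's own notation:
````
def run(tape, rules, state="A", pos=0):
    # rules maps (state, symbol) -> (symbol to write, direction, next state)
    while state != "HALT":
        symbol = tape.get(pos, " ")
        write, move, state = rules[(state, symbol)]
        tape[pos] = write                # erase and write in one step
        pos += 1 if move == "R" else -1  # move the head left or right
    return tape

# Toy rule set: invert the cell under the head, then halt
rules = {
    ("A", "0"): ("1", "R", "HALT"),
    ("A", "1"): ("0", "R", "HALT"),
}
print(run({0: "0"}, rules))  # {0: '1'}
````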
@@ -0,0 +1,95 @@
---
tags:
- Theory_of_Computation
- cpu
- von-neumann
---
At the core of a computer sits the Central Processing Unit. This is what manages and executes all computation.
The CPU comprises three core components:
* Registers
* the Arithmetic Logic Unit (ALU)
* the Control Unit (CU)
>
> This method of putting together a computer is known as the **Von Neumann Architecture**. It was devised by John von Neumann in about 1945, before many of the components that would be needed to produce it had actually been invented.
## Registers
This is the part of the CPU that stores data. The memory cells that comprise it do not have capacitors (unlike RAM), so they cannot store very much data, but they work faster, which is what matters here.
There are five main types of register in the CPU:
![Pasted image 20220319175645.png](../img/Pasted%20image%2020220319175645.png)
## Arithmetic Logic Unit
This is the hub of the CPU, where the binary work gets done. It contains logic gates and executes operations with them. This is where the data stored by the registers is processed and altered.
It can execute arithmetic on binary numbers as well as logical operations.
This is the **core** that is referred to in the hardware specs of computers, for instance *dual-core*, *quad-core* etc.
![74181aluschematic.png](../img/74181aluschematic.png)
## Control Unit
The control unit takes the instructions in binary form from RAM (separate from the CPU, but connected) and then signals to the ALU and the registers what they are supposed to do to execute the instructions. Think of it as the overseer that gets the ALU and registers to work together to run program instructions.
In addition to these three active components of the CPU, we also have:
* Buses
Bundles of wires that transfer data between the CPU constituents. There is a bus to carry data, another for addresses and another for instructions.
* Input and output
Devices that connect to the CPU, receive external data and output the results. For instance keyboards and monitors.
## Fetch, decode, execute
*Fetch, decode, execute* is the operating principle of the CPU. We will run through how this works with reference to the CPU components detailed above.
* **Fetch**
* The Program Counter keeps track of and sequences the instructions that the CPU will work on. The first place it looks for an instruction is RAM address `0000`, equivalent to zero in the count: the starting point. This address is therefore copied to the Memory Address Register for future reference.
* The instruction at this address is then copied to the Instruction Register.
* As the first instruction has been fetched, the system reaches the end of the first cycle. The Program Counter therefore increments by 1 to log this.
* At this point the next fetch cycle begins.
* **Decode**
* Now that the instruction has been fetched, it needs to be decoded. It is sent from the Instruction Register to the Control Unit of the CPU.
* There are two parts to the instruction:
1. The operation code → the command that the computer will carry out.
1. The operand → that which will be operated on: an address in RAM from which data will be read, or to which it will be written, as part of the execution.
* The Control Unit converts the operation code and operand into signals that are fed to the execute phase of the cycle.
* **Execute**
* Now the command will be executed. The operand is copied to the Memory Address Register, the data at that address is placed in the Memory Data Register, and the command is carried out by the ALU. One such cycle is sketched below.
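A single cycle can be sketched in Python, assuming the LMC-style three-digit denary instructions introduced in the next section (the program itself is explained there):
````
ram = {0: 560, 1: 161, 2: 362}  # a small program held in RAM
pc = 0                          # Program Counter

instruction = ram[pc]           # fetch: read the instruction at the PC's address
pc += 1                         # the counter increments, ready for the next cycle

opcode, operand = divmod(instruction, 100)  # decode: split command from address
print(opcode, operand)  # 5 60 -> "load the value at address 60"
````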
## The Little Man Computer
The Little Man Computer is a simplified computer that works on von Neumann architecture. It has all the CPU components we have detailed above. It is programmed in machine code (as we saw with the fetch, decode, execute cycle above) but for simplicity it uses denary, not binary.
![LMC_5.gif](../img/LMC_5.gif)
On the left is the instruction set. Each number constitutes an execution routine and the `xx` stands for the address in RAM that the execution will work on.
Each row of the RAM has a denary address, 0 through to 99. Each address can hold three digits.
* So the instruction `560` would mean *load the number at address 60.*
* The instruction `340` would mean *store a datum at address 40*
### Working through a basic computation
We are going to add two numbers together as a basic example.
1. First we need to place the two numbers in RAM; we are going to use `5` and `3`
* At address `60` we will put the number `5` and at address `61` we will put the number `3`
* We are going to start at address `0` in the top left of the RAM grid
1. The first instruction will be *load address 60* which in the assembly will be `560` . We put this in address `0`, our starting point.
1. When this first instruction runs, the number at address `60` (i.e. `5`) is loaded into the accumulator.
1. Now we want to *add this number (in the accumulator) to the number in address 61*
1. This second instruction is `161` . We write this in address `1`
1. Finally we want to store the output of the calculation in the RAM, let's say at address `62`
1. So we store the command `362` at address `2`. The complete run is sketched below.
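Putting the walkthrough together, here is a minimal interpreter for just the three instructions used above (`5xx` load, `1xx` add, `3xx` store); a real LMC has a fuller instruction set and an explicit halt, so treat this as a sketch:
````
def run_lmc(ram):
    pc, acc = 0, 0  # Program Counter and accumulator
    while pc in ram:
        opcode, addr = divmod(ram[pc], 100)
        pc += 1
        if opcode == 5:    # load the number at addr into the accumulator
            acc = ram[addr]
        elif opcode == 1:  # add the number at addr to the accumulator
            acc += ram[addr]
        elif opcode == 3:  # store the accumulator at addr
            ram[addr] = acc
        else:
            break
    return ram

ram = {0: 560, 1: 161, 2: 362, 60: 5, 61: 3}
print(run_lmc(ram)[62])  # 8
````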
@@ -0,0 +1,52 @@
---
tags:
- Theory_of_Computation
- binary
---
## Now we know how binary works, how does it relate to computing?
The reason is straightforward: at the level of pure engineering it is the simplest way of representing numerical and logical values, both of which are basic foundations of programming. An electronic circuit or transistor only needs to represent two states: on (1) and off (0), corresponding to a switch in an electrical circuit.
A single circuit representing the binary values of 1 and 0:
![multi_on_off 1.gif](../img/multi_on_off%201.gif)
It would be much more complicated to have to represent ten different states under the decimal number system, although denary computers have existed.
>
> We will see later that 1/0 also corresponds to the basic values of logic: true and false. This is what allows us to build up complex circuits and programs based on primitive truth conditions.
If we want more digits, we just need to add in more circuits, and we can represent as large a binary number as we need. We just need one switch for every digit we want to represent. The switches used in modern computers are so cheap and so small that you can literally get billions of them on a single circuit board.
![multiple_circuits.gif](../img/multiple_circuits.gif)
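To see why each extra circuit buys another digit, here is a sketch that reads a row of switches as a single binary number (the names are illustrative):
````
switches = [1, 0, 1, 1]  # four circuits, each on (1) or off (0)

value = 0
for bit in switches:
    value = value * 2 + bit  # each additional switch doubles the range
print(value)  # 1011 in binary = 11 in decimal
````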
When we use the term 'switch' we actually mean the transistor components of a circuit. We don't need to know the physical details at this level, but we can say that a transistor turns a current on and off; transistors can also be used to amplify the current.
On the level of electrical engineering, and in the subsequent examples using a light bulb and a simple circuit, the switch being OFF corresponds to the light being ON. This is because the switch breaks the current: when the circuit is unbroken, the current passes to the bulb. It also means that 1 corresponds to the switch being OFF and 0 corresponds to the switch being ON, which can be confusing at first.
## From circuits to programs
The following (from my earlier notes) breaks down how we get from the binary number system → electrical circuits → to computer programs:
1. "Data" = a piece or pieces of **information**
2. In computing all data is represented in **binary form**:
   * "Binary" means that the value of a piece of data is exclusively one thing or another
   * The values in binary code are exclusively 1 and 0 (either a piece of data is a 1 or it is a 0, it can never be both). Binary can also be expressed in logical terms where 1 is equal to `true` and 0 is equal to `false`
3. The smallest piece of data is a **bit**
4. We apply the binary number system to bits
5. Thus by 1-4, **a bit is either 1 or 0**
6. Binary form bears an isomorphic relation to switches on electrical circuits (the hardware of computers). Thus binary code can be *mapped onto circuits*:
   * Circuits are controlled by switches
   * A switch is a binary function: it is either on or off, with on/off corresponding to 1/0
   * In this way, switches control the flow of electric current
7. Thus by the above, hardware (which is a physical structure made up of electric circuits) is **readily programmable in binary terms**
8. Restatement of 7: in order for hardware to be programmed, the program must provide the hardware with something that it can actually perform; the program must be formulated in binary code.