More reindexing

This commit is contained in:
thomasabishop 2022-09-06 15:44:40 +01:00
parent 78024ac846
commit c63c288e76
77 changed files with 805 additions and 743 deletions

View file

@ -1,6 +1,7 @@
---
categories:
- Programming Languages
tags:
- Programming_Languages
- javascript
- react
---

View file

@ -1,6 +1,7 @@
---
categories:
- Programming Languages
tags:
- Programming_Languages
- javascript
- react
- react-classes

View file

@ -1,6 +1,7 @@
---
categories:
- Programming Languages
tags:
- Programming_Languages
- javascript
- react
- react-classes

View file

@ -1,6 +1,7 @@
---
categories:
- Programming Languages
tags:
- Programming_Languages
- javascript
- react
---

View file

@ -1,6 +1,7 @@
---
categories:
- Programming Languages
tags:
- Programming_Languages
- javascript
- react
- react-classes

View file

@ -1,6 +1,7 @@
---
categories:
- Programming Languages
tags:
- Programming_Languages
- javascript
- react
---

View file

@ -1,6 +1,7 @@
---
categories:
- Programming Languages
tags:
- Programming_Languages
- javascript
- react
- react-hooks

View file

@ -1,6 +1,7 @@
---
categories:
- Programming Languages
tags:
- Programming_Languages
- javascript
- react
- react-hooks

View file

@ -1,6 +1,7 @@
---
categories:
- Programming Languages
tags:
- Programming_Languages
- javascript
- react
- react-hooks

View file

@ -1,6 +1,7 @@
---
categories:
- Programming Languages
tags:
- Programming_Languages
- javascript
- react
- react-hooks

View file

@ -1,6 +1,7 @@
---
categories:
- Programming Languages
tags:
- Programming_Languages
- javascript
- react
- react-hooks

View file

@ -1,6 +1,7 @@
---
categories:
- Programming Languages
tags:
- Programming_Languages
- javascript
- react
- react-hooks

View file

@ -1,6 +1,7 @@
---
categories:
- Programming Languages
tags:
- Programming_Languages
- javascript
- react
- react-hooks

View file

@ -1,6 +1,7 @@
---
categories:
- Programming Languages
tags:
- Programming_Languages
- javascript
- react
- react-hooks

View file

@ -1,6 +1,7 @@
---
categories:
- Programming Languages
tags:
- Programming_Languages
- javascript
- react
- react-hooks

View file

@ -1,6 +1,7 @@
---
categories:
- Programming Languages
tags:
- Programming_Languages
- javascript
- react
- react-hooks

View file

@ -1,6 +1,7 @@
---
categories:
- Programming Languages
tags:
- Programming_Languages
- javascript
- react
- data-types

View file

@ -1,6 +1,7 @@
---
categories:
- Programming Languages
tags:
- Programming_Languages
- typescript
- react
- react-hooks

View file

@ -1,6 +1,7 @@
---
categories:
- Programming Languages
tags:
- Programming_Languages
- typescript
- react
---

View file

@ -1,6 +1,7 @@
---
categories:
- Programming Languages
tags:
- Programming_Languages
- typescript
- react
---

View file

@ -1,6 +1,7 @@
---
categories:
- Programming Languages
tags:
- Programming_Languages
- typescript
- react
---

View file

@ -1,6 +1,7 @@
---
categories:
- Programming Languages
tags:
- Programming_Languages
- typescript
- react
---

View file

@ -1,5 +0,0 @@
# React TypeScript Learning Resources
https://www.toptal.com/react/react-hooks-typescript-example
https://react-typescript-cheatsheet.netlify.app/

View file

@ -1,6 +1,7 @@
---
categories:
- Programming Languages
tags:
- Programming_Languages
- backend
- node-js
- async
@ -11,72 +12,69 @@ tags:
Node.js provides a single-threaded asynchronous architecture which is achieved by means of the Event Loop.
## Multi-threaded synchronous architectures
In the context of the backend, a thread is an instance of a request-response transaction.

For example, a request is made from the client for a resource contained in a database. The back-end language is an intermediary between the client machine and the server. It receives the request and returns the resource as a response.

Many backend frameworks are synchronous but multithreaded. This means that a thread can only process one request-response cycle at a time. The thread cannot initiate a new cycle until it has finished with its current cycle.

If there were only one thread, this would be inefficient and unworkable. Therefore the framework will be multi-threaded: multiple request-response cycles can be executed at once by different threads.
![sync-thread.svg](/img/sync-thread.svg)
To accommodate increased scale in synchronous applications you need to be able to spawn more threads commensurate with demand. This increases the resource consumption of the framework (more cores, more memory etc.). Moreover, it is possible to reach a point where all threads are active and no more can be spawned. In this case there will simply be delays in the return of data.
## Node as a single-threaded asynchronous architecture

In contrast, Node has only a single thread but it works asynchronously, not synchronously. Thus it has a **single-threaded asynchronous architecture**. This means that whilst there is only a single thread, it can juggle responses by dispatching them asynchronously: when a request is made it sends it off and continues with its execution, handling new requests. Once these resolve, the data is returned to the main thread.
![async.svg](/img/async.svg)
## The Event Loop
Node implements its single-threaded asynchronous architecture through the Event Loop.
This is the mechanism by which Node keeps track of incoming requests and their fulfillment status: whether the data has been returned successfully or whether there has been an error.
Node is continually monitoring the Event Loop in the background.
A running Node application is a single running process. Like everything that happens within the OS, a process is managed by the [kernel](/Operating_Systems/The_Kernel.md) that dispatches operations to the CPU in a clock cycle. A thread is a sequence of code that resides within the process and utilises its memory pool (the amount of memory assigned by the kernel to the Node process). The Event Loop runs on CPU ticks: a tick is a single run of the Event Loop.
### Phases of the Event Loop
The Event Loop comprises six phases. The Event Loop starts at the moment Node begins to execute your `index.js` file or any other application entry point. These six phases create one cycle, or loop, equal to one **tick**. A Node.js process exits when there is no more pending work in the Event Loop, or when `process.exit()` is called manually. A program only runs for as long as there are tasks queued in the Event Loop, or present on the [call stack](/Software_Engineering/Call_stack.md).
![](/img/node-event-loop.svg)
The phases are as follows:
1. **Timers**
   - These are functions that execute callbacks after a set period of time. As in standard JavaScript there are two global timer functions: `setTimeout` and `setInterval`. Interestingly, these are not core parts of the JavaScript language; they are made available to JS by the particular browser. As Node does not run in the browser, it has to provide this functionality itself, which it does through the core `timers` module.
   - At the beginning of this phase the Event Loop updates its own time. Then it checks a queue, or pool, of timers. This queue consists of all timers that are currently set. The Event Loop takes the timer with the shortest wait time and compares it with the Event Loop's current time. If the wait time has elapsed, then the timer's callback is queued to be called once the [call stack](/Software_Engineering/Call_stack.md) is empty.
2. **I/O callbacks**
   - Once timers have been checked and scheduled, Node jumps to I/O operations.
   - Node implements a non-blocking input/output interface. This is to say, writing and reading to disk (files in the Node application directory) is implemented asynchronously. The asynchronous I/O request is recorded into the queue and then the call stack continues.
3. **Idle / waiting / preparation**
   - This phase is internal to Node and is not accessible to the programmer.
   - It is primarily used for gathering information and planning what needs to be executed during the next tick of the Event Loop.
4. **I/O polling**
   - This is the phase at which the main block of code is read and executed by Node.
   - During this phase the Event Loop is managing the I/O workload, calling the functions in the queue until the queue is empty, and calculating how long it should wait until moving to the next phase. All callbacks in this phase are called synchronously (although they return asynchronously) in the order that they were added to the queue, from oldest to newest.
   - This is the phase that can potentially block our application if any of these callbacks are slow or do not return asynchronously.
5. **`setImmediate` callbacks**
   - This phase runs as soon as the poll phase becomes idle. If `setImmediate()` is scheduled within the I/O cycle it will always be executed before other timers, regardless of how many timers are present (see the sketch after this list).
   - This is your opportunity to grant precedence to certain callbacks within the Node process.
6. **Close events**
   - This phase occurs when the Event Loop is wrapping up one cycle and is ready to move to the next one.
   - It is an opportunity for clean-up and to guard against memory leaks.
   - This phase can be targeted via the `process.exit()` function or the close event of a web socket.
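A small sketch of the ordering mentioned in phase 5. This is well-documented Node behaviour: inside an I/O callback, `setImmediate` always fires before a zero-delay timer, because the `setImmediate` phase follows the poll phase within the same tick.

```js
const fs = require('fs');

fs.readFile(__filename, () => {
  setTimeout(() => console.log('timeout'), 0);
  setImmediate(() => console.log('immediate'));
  // Logs 'immediate' then 'timeout': after the poll phase completes,
  // the Event Loop reaches the setImmediate (check) phase before the
  // timers phase of the next tick.
});
```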
## Event _loop_ and event _queue_
The terms _event loop_ and _event queue_ are often used interchangeably in the literature but in fact they are distinct.

The Event Loop is the Node runtime's method of execution; the Event Queue is the series of tasks that are lined up and executed by the loop. We can think of the queue as the input and the loop as what acts on the input. The queue emerges from the program we write, but it is scheduled, organised and sequenced by the loop.
https://blog.appsignal.com/2022/07/20/an-introduction-to-multithreading-in-nodejs.html
https://school.geekwall.in/p/Bk2xFs1DV

View file

@ -1,18 +1,19 @@
---
categories:
- Programming Languages
tags:
- Programming_Languages
- backend
- node-js
---
# Global object
> In Node every function and variable should be scoped to a module. We should not define functions and variables within the global scope.

- In Node the equivalent of the browser's `Window` object is `global`. The properties and methods that belong to this object are available anywhere in a program.
- Just as we can technically write `Window.console.log()`, we can write `global.console.log()`; however, in both cases it is more sensible to use the shorthand.
- However, if we declare a variable in this scope in browser-based JavaScript, the variable becomes accessible via the `Window` object and thus is accessible in the global scope. The same is not true for Node: if you declare a variable at this level, it will return `undefined`.
- This is because of Node's modular nature. If you were to define a function `foo` in a module and then also define it in the global scope, when you call `foo` the Node interpreter would not know which function to call. Hence it chooses not to recognise the global `foo`, returning `undefined`.
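A minimal sketch of the difference (run as a Node module):

```js
// A top-level `var` is scoped to the module, not attached to `global`
var foo = 'bar';

console.log(global.foo); // undefined (in a browser, window.foo would be 'bar')
console.log(global.setTimeout === setTimeout); // true: built-ins do live on `global`
```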

View file

@ -1,25 +1,29 @@
---
categories:
- Programming Languages
tags:
- Programming_Languages
- backend
- node-js
---
# Managing environments

With a full-scale Node application you will typically run three environments:

- Development
- Testing
- Production
## Accessing the current environment
To determine the current environment we can use the variable **`process.env.NODE_ENV`**. This works globally regardless of the kind of Node app we are building.
If you have not manually set up your environments, **`process.env.NODE_ENV`** will return `undefined`.
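Since `process.env.NODE_ENV` can be `undefined`, a common guard is to fall back to development; a minimal sketch:

```js
// Default to 'development' when NODE_ENV has not been set
const env = process.env.NODE_ENV || 'development';

if (env === 'production') {
  // e.g. enable caching, silence verbose logging
}
```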
### Setting the Node environment
#### For the session
`NODE_ENV` is a bash [environment variable](/Programming_Languages/Shell_Scripting/Environmental_and_shell_variables.md) like any other. So we can set it in the normal way:
```bash
export NODE_ENV=production
```
### In Express
In Express, there is a built-in method for retrieving the current environment: `app.get('env')`. Express will default to the development environment.
<p style="color:red">! How to keep Express and Node environment in sync?</p>
## Configuring environments

We use the third-party [Config](https://github.com/node-config/node-config) package to manage different configurations based on the environment.
Once installed we set up a dedicated config directory with a structure as follows:
```
config/
  ...
  production.json
```
For example:
```json
// default.json
{
  "name": "My Express app"
}
```
Then to utilise config variables:
```js
const config = require('config');
console.log('Application name: ' + config.get('name'));
```
If we toggled the different environments, we would see different outputs from the above code (assuming we had different config files in `/config` with different names).
### Sensitive config data
We will need to store passwords, API keys and other kinds of authentication data for our application. We obviously shouldn't store this data openly in our config files since it would be made public.
We can do so securely by utilising [environmental variables](../Shell_Scripting/Environmental_and_shell_variables.md) alongside the config package.

We create a file called `custom-environment-variables` (it must be called this) and map a property to an environment variable we have already set.
Let's create an environmental variable for a password:
```bash
export APP_PASSWORD='mypassword123'
```
@ -78,17 +85,16 @@ Then in our custom variable file:
```json
{
  "password": "APP_PASSWORD"
}
```
We can then safely reference this value in the course of our normal code:
```js
console.log(config.get('password'));
```
<p style="color:red">! But how would this be achieved in a production server?</p>
<p style="color:red">! And how could we do this programmatically at the start of a local development session without manually setting each environment variable in the terminal?</p>

View file

@ -1,70 +1,72 @@
---
categories:
- Programming Languages
tags:
- Programming_Languages
- backend
- node-js
- middleware
---
# Middleware
## What is middleware?
- Anything that terminates the `req, res` cycle counts as middleware. It is basically anything that acts as an intermediary once the request is received but before the resource is sent. A good example would be the `app.use(express.json())` or `app.use(bodyParser.json())` functions we call in order to be able to parse JSON that is sent from the client.
- You will most likely have multiple middleware functions running at once. We call this intermediary part of the cycle the **request processing pipeline**.
- Generally all middleware will be added as a property on the Express `app` instance with the `app.use(...)` syntax.
## Creating custom middleware functions
### Basic schema
```js
app.use((req, res, next) => {
  // do some middleware
  next();
});
```
### `next`
The `next` parameter is key: it allows Express to move on to the next middleware function once the custom middleware executes. Without it, the request processing pipeline will get blocked.

Middleware functions are processed asynchronously and, as with Promises (e.g. `then`), they use explicit hand-offs to sequence processes.
### Example of sequence
```js
app.use((req, res, next) => {
  console.log('Do process A...');
  next();
});

app.use((req, res, next) => {
  console.log('Do process B...');
  next();
});
```
> It makes more sense of course to define our middleware within a function and then pass it as an argument to `app.use()`
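A sketch of that pattern (the `logger` name is illustrative):

```js
// Defined once, then registered with app.use()
function logger(req, res, next) {
  console.log(`${req.method} ${req.url}`);
  next();
}

app.use(logger);
```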
## Including middleware based on environment
With a full-scale Node application you will typically run three environments:
- Development
- Testing
- Production
We will not want to run certain types of middleware in all environments. For example, it would be costly to run logging in the app's production environment. It would make more sense to run this only in development.
### Accessing current Node environment
We can control which middleware we run via the Node environment variables: `process.env` (see for instance [ports](./Ports.md)).
We could set [Morgan](/Programming_Languages/NodeJS/Modules/Third_party/Morgan.md) to run only in development with:
```js
if (app.get('env') === 'development') {
  app.use(morgan('common'));
  console.log('Morgan enabled');
}
```

View file

@ -1,6 +1,7 @@
---
categories:
- Programming Languages
tags:
- Programming_Languages
- backend
- node-js
- node-modules
@ -10,10 +11,10 @@ tags:
When Node runs, each of our module files is wrapped within an immediately-invoked function expression that has the following parameters:
```js
(function (exports, require, module, __filename, __dirname) {
  // module code lives here
});
```
This is called the **module wrapper function**.
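A quick way to see two of the wrapper's parameters in action from any module:

```js
// Available in every module because of the wrapper function:
console.log(__filename); // absolute path of the current file
console.log(__dirname); // absolute path of its directory
```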

View file

@ -1,6 +1,7 @@
---
categories:
- Programming Languages
tags:
- Programming_Languages
- backend
- node-js
- node-modules
@ -8,12 +9,12 @@ tags:
# Ports
When working in development we are able to specify the port from which we want to serve our application. In production we do not always have this control: the port will most likely be set by the provider of the server environment.
While we may not know the specific port, whatever it is, it will be accessible via the `PORT` environment variable. So we can use this when writing our [event listeners](Events%20module.md#event-emitters):
```js
const port = process.env.PORT || 3000;
```
This way, if a port is set by the provider it will use it. If not, it will fall back to 3000.
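For example, assuming a `server` created with the `http` module as elsewhere in these notes:

```js
const port = process.env.PORT || 3000;

server.listen(port, () => {
  console.log(`Listening on port ${port}`);
});
```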

View file

@ -1,49 +1,50 @@
---
categories:
- Programming Languages
tags:
- Programming_Languages
- backend
- node-js
---
# I/O with files
## Read file from directory (JSON)
```js
const fs = require('fs');

// Get raw JSON
let inputJson = fs.readFileSync('source.json');

// Convert to JS
let data = JSON.parse(inputJson);
```
## Write file to directory (JSON)
```js
let writePath = 'new.json';

// Write JS object to JSON file as JSON
fs.writeFileSync(writePath, JSON.stringify(data));
```
## Delete file from directory
```js
let filePath = 'file-to-delete.json';
fs.unlinkSync(filePath);
```
## Applications
### Overwrite file
```js
if (fs.existsSync(writePath)) {
  fs.unlinkSync(writePath);
  fs.writeFileSync(writePath, JSON.stringify(someJS));
} else {
  fs.writeFileSync(writePath, JSON.stringify(someJS));
}
```
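Because the synchronous methods block the Event Loop, the promise-based API is usually preferable in real applications; a sketch using the core `fs/promises` module (available in modern Node):

```js
const fsp = require('fs/promises');

// Read and parse a JSON file without blocking the Event Loop
async function readJson(path) {
  const raw = await fsp.readFile(path, 'utf8');
  return JSON.parse(raw);
}
```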

View file

@ -1,83 +1,82 @@
---
categories:
- Programming Languages
tags:
- Programming_Languages
- backend
- node-js
- node-modules
---
# `events` module
In most cases you won't interact with the `events` module directly since other modules and third-party modules are abstractions on top of it. For instance the `http` module is using events under the hood to handle requests and responses.
Another way of putting this is to say that all events in Node inherit from the `EventEmitter` constructor, which is the class you instantiate to create a new event. At bottom everything in Node is an event with a callback, created via event emitters.

Because Node's runtime is [event-driven](/Programming_Languages/NodeJS/Architecture/Event_loop.md), it is event-emitter cycles that are being processed by the Event Loop, although you may know them as `fs` or `http` (etc.) events. The call stack that the Event Loop works through is just a series of event emissions and their associated callbacks.
## Event Emitters
- All objects that emit events are instances of the `EventEmitter` class. This object exposes an `eventEmitter.on()` function that allows one or more functions to be attached to named events emitted by the object.
- These functions are **listeners** of the emitter.
## Basic syntax
```js
const EventEmitter = require('events'); // import the module

const emitter = new EventEmitter();

// Register a listener
emitter.on('messageLogged', function () {
  console.log('The listener was called.');
});

// Raise an event
emitter.emit('messageLogged');
```
- If we ran this file, we would see `The listener was called.` logged to the console.
- Without a listener (similar to a subscriber in Angular) nothing happens.
- When the emission occurs the emitter works _synchronously_ through each listener function that is attached to it.
## Event arguments
- Typically we would not just emit a string; we would attach an object to the emitter to pass more useful data. This data is called an **event argument**.
- Refactoring the previous example:
```js
// Register a listener that receives an event argument
emitter.on('messageLogged', function (eventArg) {
  console.log('Listener called', eventArg);
});

// Raise the event, passing the event argument
emitter.emit('messageLogged', {id: 1, url: 'http://www.example.com'});
```
## Extending the `EventEmitter` class
- It's not best practice to call the `EventEmitter` class directly in `app.js`. If we want to use the capabilities of the class we should create our own module that extends `EventEmitter`, inheriting its functionality along with the specific additional features that we want to add.
- So, refactoring the previous example:
```js
// File: Logger.js
const EventEmitter = require('events');

class Logger extends EventEmitter {
  log(message) {
    console.log(message);
    this.emit('messageLogged', {id: 1, url: 'http://www.example.com'});
  }
}

// Export so that app.js can require the class
module.exports = Logger;
```

_The `this` in the `log` method refers to the properties and methods of `EventEmitter` which we have extended._

- We also need to refactor our listener code within `app.js` so that it calls the extended class rather than the `EventEmitter` class directly:
```js
// File: app.js
const Logger = require('./Logger');
const logger = new Logger();

logger.on('messageLogged', function (eventArg) {
  console.log('Listener called', eventArg);
});

logger.log('message');
```

View file

@ -1,6 +1,7 @@
---
categories:
- Programming Languages
tags:
- Programming_Languages
- backend
- node-js
- node-modules
@ -10,7 +11,7 @@ tags:
File System is an essential built-in module of Node that contains utility methods for working with files and directories.
Every method associated with `fs` has a _blocking_ and an _asynchronous_ implementation. The former obviously blocks the [event queue](Event%20queue.md); the latter does not.
The synchronous methods are useful to have in some contexts but in general and with real-world applications, you should be using the async implementation so as to accord with the single-threaded event-driven architecture of Node.
@ -18,16 +19,14 @@ The synchronous methods are useful to have in some contexts but in general and w
### Read directory
Return a string array of all files in the current directory.
```js
fs.readdir('./', function (err, files) {
  if (err) {
    console.error(err);
  } else {
    console.log(files);
  }
});
```

View file

@ -1,4 +1,6 @@
---
categories:
- Programming Languages
tags:
- Programming_Languages
- backend
@ -8,42 +10,41 @@ tags:
# `http` module
The HTTP Module allows us to create a web server that listens for HTTP requests on a given port. It is therefore perfect for creating backends for client-side JavaScript.
## Creating a server
An HTTP server is another instance of an [event emitter](/Programming_Languages/NodeJS/Modules/Core/events.md). It therefore has all the same methods as the `EventEmitter` class: `on`, `emit`, `addListener` etc. This demonstrates again how much of Node's core functionality is based on event emitters.
_Creating a server_

```js
const http = require('http');

const server = http.createServer(); // Create server as emitter

// Register functions to run when listener is triggered
server.on('connection', (socket) => {
  console.log('new connection...');
});

server.listen(3000);
console.log('Listening on port 3000');
```
This server is functionally equivalent to a generic event emitter:
```js
const EventEmitter = require('events');

const emitter = new EventEmitter();

// Register a listener
emitter.on('messageLogged', function () {
  console.log('The listener was called.');
});

// Raise an event
emitter.emit('messageLogged');
```
Whenever a request is made to this server, it raises an event. We can therefore target it with the `on` method and make it execute a function when requests are made.
If we were to start the server by running the file and we then used a browser to navigate to the port, we would see `new connection` logged every time we refresh the page.
@ -51,32 +52,28 @@ If we were to start the server by running the file and we then used a browser to
A socket is a generic protocol for client-server communication. Crucially it **allows simultaneous communication both ways**: the client can contact the server but the server can also contact the client. Our listener function above uses a socket as the callback function, but in most cases this is quite low-level, not distinguishing responses from requests. It is more likely that you would use a `req, res` (request, response) architecture in place of a socket:
```js
const server = http.createServer((req, res) => {
  if (req.url === '/') {
    res.write('hello');
    res.end();
  }
});
```
#### Return JSON
Below is an example of using this architecture to return JSON to the client:
```js
const server = http.createServer((req, res) => {
  if (req.url === '/products') {
    res.write(JSON.stringify(['shoes', 'lipstick', 'cups']));
    res.end();
  }
});
```
### Express
In reality you would rarely use the `http` module directly to create a server. This is because it is quite low level and each response must be written in a linear fashion, as with the two URLs in the previous example. Instead we use Express, which is a framework for creating servers and routing that is an abstraction on top of the core HTTP module.
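For comparison, a hedged sketch of the same two routes in Express (assuming the `express` package is installed):

```js
const express = require('express');
const app = express();

app.get('/', (req, res) => res.send('hello'));
app.get('/products', (req, res) => res.json(['shoes', 'lipstick', 'cups']));

app.listen(3000, () => console.log('Listening on port 3000'));
```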

View file

@ -1,6 +1,7 @@
---
categories:
- Programming Languages
tags:
- Programming_Languages
- backend
- node-js
- node-modules
@ -10,11 +11,11 @@ tags:
> Modules are partitioned files where we define our variables and functions. Values defined in modules are scoped to that specific module, constituting a unique name space. This avoids name clashes in large programs.
- Every file in a Node application is considered a module.
- The variables and methods in a module are equivalent to `private` properties and methods in object-oriented programming.
- If you wish to use a function or variable defined in a module outside of its modular container you need to explicitly export it and make it public.
## Structure of a module
@ -22,81 +23,74 @@ Node keeps an internal record of the properties of a module. To see this we can
```js
// index.js
console.log(module);
```
This gives us:
```plaintext
Module {
  id: '.',
  path: '/home/thomas/repos/node-learning',
  exports: {},
  filename: '/home/thomas/repos/node-learning/index.js',
  loaded: false,
  children: [],
  paths: [
    '/home/thomas/repos/node-learning/node_modules',
    '/home/thomas/repos/node_modules',
    '/home/thomas/node_modules',
    '/home/node_modules',
    '/node_modules'
  ]
}
```
## Exports
- Whenever we export a property or method from a module we are directly targeting the `exports` property of the module object.
- Once we add exports to a file they will be displayed under that property of the module object.
- We can export the entire module itself as the export (typically used when the module is a single function or class) or individual properties.
### Exporting a whole module
_The example below is a module file that consists of a single function_

```js
module.exports = function (...params) {
  // function body
};
```
Note the module is unnamed. We would name it when we import:
```js
const myFunction = require('./filename');
```
### Exporting sub-components from a module
In the example below we export a variable and a function from the same module. Note that only those values prefixed with `exports` are exported.

```js
exports.myFunc = (...params) => {
  // function body
};

exports.aVar = 321.3;

var nonExportedVar = true;
```
This time the exports are already named, so we would import with the following:
```js
const {myFunc, aVar} = require('./filename');
```
We can also do the exporting at the bottom when the individual components are named:

```js
const myNamedFunc = (val) => {
  return val + 1;
};
// ...
exports.myNamedFunc = myNamedFunc;
exports.differentName = anotherNamedFunc; // We can use different names
// Or we could export them together
module.exports = {myNamedFunc, anotherNamedFunc};
```
The import is the same:
```js
const {myNamedFunc, anotherNamedFunc} = require('./modules/multiExports');
```
## Structuring modules
The techniques above are useful to know but generally you would want to enforce a stricter structure than a mix of exported and private values in the one file. The best way to do this is with a single default export.
Here the thing exported could be a composite function or an object that basically acts like a class with methods and properties.

_Export a composite single function_

```js
module.exports = () => {
  foo() {...}
  bar() {...}
}
```

_Export an object_

```js
module.exports = {
  foo: () => {...},
  bar: () => {...}
}
```
@ -152,8 +145,9 @@ Or you could export an actual class as the default. This is practically the same
```js
export default class {
  foo() {}
  bar() {}
}
```
Every method and property within the export will be public by default, whether it is an object, class or function. If you wanted to keep certain methods/properties private, the best approach is to define them as variables and functions within the module file but outside of the `export` block.

View file

@ -1,37 +1,39 @@
---
categories:
- Programming Languages
tags:
- Programming_Languages
- backend
- node-js
- npm
---
# Package management
## List installed packages
```bash
npm list
```
This will return a recursive tree that lists dependencies, dependencies of dependencies, and so on.
To limit the depth you can add the `--depth=` flag. For example to see only your installed packages and their versions use `npm list --depth=0`.
## View `package.json` data for an installed package
We could go to the NPM registry and view details, or we can quickly view the `package.json` for the dependency with the command `npm view [package_name]`.
We can pinpoint specific dependencies in the `package.json`, e.g. `npm view [package_name] dependencies`.
## View outdated modules
To see whether your dependency versions are out of date, use `npm outdated`. This gives us a table, for example:
![Pasted image 20220411082627.png](/img/Pasted_image_20220411082627.png)
- _Latest_ tells us the latest release available from the developers
- _Wanted_ tells us the version that our `package.json` rules target. To take the first dependency as an example: we must have set our SemVer syntax to `^0.4.x`, since it is telling us that there is a minor release more recent than the one we have installed, but it is not advising that we update to the latest major release.
- _Current_ tells us which version we currently have installed, regardless of the version that our `package.json` is targeting or the most recent version available.
## Updating
`npm update` only updates from _current_ to _wanted_. In other words it only updates in accordance with your caret and tilde rules applied to [semantic versioning](/Software_Engineering/Semantic_versioning.md).

View file

@ -1,6 +1,7 @@
---
categories:
- Programming Languages
tags:
- Programming_Languages
- backend
- node-js
- middleware
@ -8,16 +9,18 @@ tags:
# Morgan
Morgan is middleware that is used to log HTTP requests to the Express instance.
```js
app.use(morgan('dev'));
```
With Morgan in place, every time we run a request it will be logged in the console that is running our Node application, e.g.:
```plain
GET /api/courses 200 95 - 1.774 ms
```
This output uses the `tiny` format, which logs the bare minimum, giving us: request type, endpoint, response code, and time to execute. But there are more verbose settings.
It defaults to logging on the console but can also be configured to write to a log file.
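A sketch of the log-file configuration using Morgan's documented `stream` option (the file name is illustrative):

```js
const fs = require('fs');
const path = require('path');
const morgan = require('morgan');

// Append each request line to access.log rather than the console
const accessLogStream = fs.createWriteStream(path.join(__dirname, 'access.log'), {flags: 'a'});

app.use(morgan('combined', {stream: accessLogStream}));
```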

View file

@ -1,12 +1,13 @@
---
categories:
- Programming Languages
tags:
- Programming_Languages
- backend
- node-js
---
# Nodemon
We don't want to have to restart the local server every time we make a change to our files. We can use `nodemon` instead of `node` when running our `index.js` file so that file-changes are immediately registered without the need for a restart.
This is a wrapper around the `node` command so it doesn't require any configuration. Once installed, update your start script from `node index.js` to `nodemon index.js`.
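For example, a typical `package.json` start script (assuming `nodemon` is installed as a dev dependency):

```json
{
  "scripts": {
    "start": "nodemon index.js"
  }
}
```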

View file

@ -1,12 +1,13 @@
---
categories:
- Programming Languages
tags:
- Programming_Languages
- shell
---
## If statements
- Conditional blocks start with `if` and end with the inversion `fi` (this is a common syntactic pattern in bash)
- The conditional expression must be placed in square brackets with spaces on either side. The spaces matter: if you omit them, the code will not run
- We designate the code to run when the conditional is met with `then`
- We can incorporate else-if logic with `elif` (see the sketch below)
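A minimal sketch combining these points:

```bash
#!/bin/bash

count=7

if [ "$count" -gt 10 ]; then
  echo "Greater than ten"
elif [ "$count" -gt 5 ]; then
  echo "Greater than five"
else
  echo "Five or less"
fi
```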

View file

@ -1,30 +1,37 @@
---
categories:
- Programming Languages
- Linux
tags:
- Programming_Languages
- shell
---
# Cron
## `cronie`

In Arch Linux I use `cronie` for cron jobs. (There is no cron service installed by default.) Install `cronie` and then enable it in systemd with:

```bash
systemctl enable --now cronie.service
```
## Commands
### List cron jobs
```
crontab -l
```
### Open cron file
```
crontab -e
```
### Check cron log
```bash
journalctl | grep CRON
```
## Syntax
```bash
m h d mon dow command
# minute, hour, day of month, day of week, bash script/args
# 0-59, 0-23, 1-31, 1-12, 0-6
```
**Examples**
Run on the hour, every hour:
```
0 * * * * mysqlcheck --all-databases --check-only-changed --silent
```
At 01:42 every day:
```
42 1 * * * mysqlcheck --all-databases --check-only-changed --silent
```
Every half hour:
```
0,30 * * * * ${HOME}/bash_scripts/automate_commit.sh
```
**Shorthands**
- `@reboot` Run once, at startup
- `@yearly` Run once a year: `0 0 1 1 *`
- `@annually` Same as `@yearly`
- `@monthly` Run once a month: `0 0 1 * *`
- `@weekly` Run once a week: `0 0 * * 0`
- `@daily` Run once a day: `0 0 * * *`
- `@midnight` Same as `@daily`
- `@hourly` Run once an hour: `0 * * * *`
**Examples**
```
@hourly mysqlcheck --all-databases --check-only-changed --silent
```
**View the logs**
```bash
sudo grep crontab syslog
```

View file

@ -1,6 +1,7 @@
---
categories:
- Programming Languages
tags:
- Programming_Languages
- shell
---
@ -8,14 +9,14 @@ tags:
To understand the difference between environmental and shell variables know that:
- You can spawn child shells from the parent shell that is initiated when you first open the terminal. To do this just run `bash` or `zsh`.
- This is a self-contained new instance of the shell. This means:
  - It **will have** access to environmental variables (since they belong to the parent / are global)
  - It **will not have** access to any shell variables that are defined in the parent
- **How do you get back to the upper parent shell?** Type `exit`.
- Note that:
  - Custom (user-created) shell variables **do not** pass down to spawned shell instances, nor do they pass up to the parent
  - Custom (user-created) environment variables do pass down to spawned shell instances but do not pass up to the parent. They are lost on `exit`.
Q. What methods are there for keeping track of, preserving, and jumping between spawned instances? Is this even possible or do they die on `exit`?
@ -26,25 +27,25 @@ Q. What methods are there for keeping track of, preserving, and jumping between
## What is the shell environment and what are environment variables?
- Every time that you interact with the shell you do so within an **environment**. This is the context within which you are working and it determines your access to resources and the behaviour that is permitted.
- The environment is an area that the shell builds every time that it starts a session. It contains variables that define system properties.
- Every time a [shell session](https://www.notion.so/Shell-sessions-e6dd743dec1d4fe3b1ee672c8f9731f6) spawns, a process takes place to gather and compile information that should be available to the shell process and its child processes. It obtains the data for these settings from a variety of different files and settings on the system.
- The environment is represented by strings comprising key-value pairs. For example:

```bash
KEY=value1:value2
KEY="value with spaces":"another value with spaces"
```
As the above shows, a key can have multiple related values, each demarcated with a `:`. If a value is longer than a single word, quotation marks are used.
- The keys are **variables**. They come in two types: **environmental variables** and **shell variables**:
  - Environmental variables are much more permanent and pertain to things like the user and their path (the overall session)
  - Shell variables are more changeable, for instance the current working directory (the current program instance)
Variables can be created via config files that run on the initialisation of the session, or manually via the command line in the current session.
@@ -58,59 +59,59 @@ More generally they are used for when you will need to read or alter the environ
To view the settings of your current environment you can execute the `env` command which returns a list of the key-value pairs introduced above. Here are some of the more intelligible variables that are returned when I run this command:
````bash
```bash
SHELL=/usr/bin/zsh
DESKTOP_SESSION=plasma
HOME=/home/thomas
USER=thomas
PWD=/home/thomas/repos/bash-scripting
PATH=/home/thomas/.nvm/versions/node/v16.8.0/bin:/home/thomas/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_perl:/var/lib/snapd/snap/bin
````
```
However, if you want to target a specific variable, you need to invoke `printenv` with the relevant key, for example:
````bash
```bash
printenv SHELL
# /usr/bin/zsh
````
```
Note that `env` and `printenv` do not show all the shell variables, only a selection. To view all the shell variables along with the environmental variables use `set` .
## Creating, exporting and deleting variable shell and environment variables
* You set shell variables using the same syntax you would within a script file:
````bash
- You set shell variables using the same syntax you would within a script file:
```bash
TEST_SHELL_VAR="This is a test"
set | grep TEST_SH
set | grep TEST_SH
TEST_SHELL_VAR='This is a test'
# We can also print it with an echo, again exactly as we would with a shell script
echo ${TEST_SHELL_VAR}
````
```
* We can verify that it is not an environmental variable based on the fact that following does not return anything:
````bash
- We can verify that it is not an environmental variable based on the fact that following does not return anything:
```bash
printenv | grep TEST_SH
````
```
* We can verify that this is a shell variable by spawning a new shell and calling it. Nothing will be returned from the child shell.
- We can verify that this is a shell variable by spawning a new shell and calling it. Nothing will be returned from the child shell.
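For instance, a minimal check (using the variable from the example above):
```bash
TEST_SHELL_VAR="This is a test"
bash                       # spawn a child shell
echo "${TEST_SHELL_VAR}"   # prints an empty line: shell variables are not inherited
exit                       # return to the parent shell
```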
* You can upgrade a shell variable to an environment variable with `export` :
````bash
- You can upgrade a shell variable to an environment variable with `export` :
```bash
export TEST_SHELL_VAR
# And confirm:
printenv | grep TEST_SH
TEST_SHELL_VAR='This is a test'
````
```
* We can use the same syntax to create new environment variables from scratch:
````bash
- We can use the same syntax to create new environment variables from scratch:
```bash
export NEW_ENV_VAR="A new var"
````
```
### Using config files to create variables
@@ -122,11 +123,11 @@ You can also add variables to config files that run on login such as your user `
A list of directories that the system will check when looking for commands. When a user types in a command, the system will check directories in this order for the executable.
````bash
echo ${PATH}
```bash
echo ${PATH}
# /home/thomas/.nvm/versions/node/v16.8.0/bin:/home/thomas/.local/bin:/usr/local/sbin:/usr/local/bin:
# /usr/bin:/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_perl:/var/lib/snapd/snap/bin
````
```
For example, if you wish to use `npm` commands globally (in any directory) you will need to have the requisite Node executable in your path, which you can see above.
@@ -136,25 +137,25 @@ TODO: Add more info about the path when I have it.
This describes the shell that will be interpreting any commands you type in. In most cases, this will be bash by default, but other values can be set if you prefer other options.
````bash
```bash
echo ${SHELL}
# /usr/bin/zsh
````
```
### `USER`
The current logged in user.
````bash
```bash
echo ${USER}
# thomas
````
# thomas
```
### `PWD`
The current working directory.
````bash
```bash
echo ${PWD}
# /home/thomas
````
# /home/thomas
```
View file
@@ -1,31 +1,36 @@
---
categories:
- Programming Languages
tags:
- Programming_Languages
- shell
---
# File permissions and executables
Every Unix file has a set of permissions that determine whether you can read, write or run (execute) the file.
Every Unix file has a set of permissions that determine whether you can read, write or run (execute) the file.
## Viewing file permissions
In order to see file permissions within the terminal, use the `-l` or `-rfl` flags with the `ls` command. Remember this command can be applied at both the directory and single-file level. For example:
````bash
```bash
drwxr-xr-x 7 thomas thomas 4096 Oct 2 19:22 angular-learning-lab
drwxr-xr-x 5 thomas thomas 4096 Oct 17 18:05 code-exercises
drwxr-xr-x 5 thomas thomas 4096 Sep 4 16:15 js-kata
drwxr-xr-x 9 thomas thomas 4096 Sep 26 18:10 sinequanon
drwxr-xr-x 12 thomas thomas 4096 Sep 19 17:41 thomas-bishop
drwxr-xr-x 5 thomas thomas 4096 Sep 4 19:24 ts-kata
````
```
### What the output means
The first column of the permissions output is known as the file's *mode*. The sequence from left to right is as follows:
The first column of the permissions output is known as the file's _mode_. The sequence from left to right is as follows:
```
- - - - - - - - - -
- - - - - - - - - -
type user permissions group permissions other permissions
```
<dl>
<dt>type</dt>
<dd>The file type. A dash just means an ordinary file. `d` means directory </dd>
@@ -37,8 +42,6 @@ type user permissions group permissions other permissions
<dd>group is obviously what anyone belonging to the current file's user group can do. Everyone else (outside of the user and the group) is covered by the other permissions, sometimes known as 'world' permissions</dd>
</dl>
## Modifying permissions: `chmod`
We use `chmod` for transferring ownership and file permissions quickly from the command-line.
@@ -47,9 +50,9 @@ We use `chmod` for transferring ownership and file permissions quickly from the
`chmod` uses octal notation. Each numeral refers to a permission set. There are three numerals. The placement denotes the user group. From left to right this is:
* user
* group
* everyone else.
- user
- group
- everyone else.
If you are working solo and not with group access to files, you can disregard assigning the other numerals, by putting zeros in as placeholders.
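As a quick reference sketch (file names here are just placeholders), each octal numeral is the sum of read (4), write (2) and execute (1):
```bash
chmod 700 script.sh   # rwx------ : owner has full access, nobody else has any
chmod 644 notes.txt   # rw-r--r-- : owner reads/writes; group and others read only
chmod 755 deploy.sh   # rwxr-xr-x : owner full; group and others read and execute
```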
@@ -57,11 +60,11 @@ If you are working solo and not with group access to files, you can disregard as
### Example
````bash
```bash
$ chmod -v 700 dummy.txt
$ ls -l dummy.txt
-rwx------ 1 thomasbishop staff 27 13 May 15:42 dummy.txt
````
```
### Useful options
@@ -77,6 +80,6 @@ In most cases, especially when you are working alone, the most frequent codes yo
Then to invoke the script from the shell you simply enter:
````bash
```bash
./your-bash-script.sh
````
```
View file
@@ -1,6 +1,7 @@
---
categories:
- Programming Languages
tags:
- Programming_Languages
- shell
---
@@ -11,9 +12,11 @@ tags:
## Main syntax
### No options
Without options specified, `find` alone will return a recursive index of all the files in the directory from which it is run.
### Sub-directory
If we pass a directory to `find` it will repeat the above process but specifically for that directory.
```bash
@@ -23,9 +26,11 @@ i3/config
```
### Filters
We can specify flags as filters (known as 'tests' within the program).
#### Type
Filter by type: file or directory
```
@@ -40,6 +45,7 @@ $ find i3 -type f
```
#### Filename
This is the most frequent use case: filter files by name with globbing.
```bash
@@ -59,6 +65,7 @@ $ find -iname "*.JS"
```
#### Path
As above but this time includes directory names in the match. `ipath` is the case-insensitive version.
```bash
@@ -68,6 +75,7 @@ utils/do-something.js
```
### Operators
We can combine `find` commands by using logical operators: `-and`, `-or`, `-not`. For example:
```bash
@@ -82,6 +90,7 @@ dfdf
```
## Actions
Using the `-exec` action we can run a program against the files that are returned from `find`.
In this syntax we use `{}` as a placeholder for the path of the file that is matched. We use `;` (escaped) to indicate the end of the operation.
@@ -93,7 +102,9 @@ This script deletes the files that match the filter criteria:
```bash
$ find -name "*.js" -exec rm {} \;
```
This script finds all the files with the substring 'config' in their name and writes their file size to a file.
```bash
find -name '*config*' -exec wc -c {} \; > config-sizes
```
find -name '*config*' -exec wc -c {} \; > config-sizes
```
View file
@@ -1,6 +1,7 @@
---
categories:
- Programming Languages
tags:
- Programming_Languages
- shell
---
@@ -12,17 +13,17 @@ tags:
### Schematic
````bash
```bash
grep [options] [pattern] [source file] > [output file]
````
```
Note that above we redirect the file matches to a new file. You don't have to do this. If you omit the redirection, `grep` will output to standard output.
### Applied
````bash
```bash
grep -i -n "banana" fruits.txt > banana.txt
````
```
The above example searches, using regex, for strings matching the pattern “banana” in the file `fruits.txt` regardless of the character case (`-i` ensures this) and outputs its findings to the file `banana.txt`, with the line number where the match occurs appended to the output (`-n` takes care of this).
@@ -30,12 +31,12 @@ Note that for simplicity, you can chain optional values together, i.e. the optio
## Useful options
* ignore case: `i`
* count matches instead of returning actual match: `-c`
* precede each match with the line number where it occurs: `-n`
* invert the match (show everything that doesn't match the expression): `-v`
* search entire directories recursively: `-r`
* list file names where matches occur (in the scenario of a recursive match): `-l`
- ignore case: `-i`
- count matches instead of returning actual match: `-c`
- precede each match with the line number where it occurs: `-n`
- invert the match (show everything that doesn't match the expression): `-v`
- search entire directories recursively: `-r`
- list file names where matches occur (in the scenario of a recursive match): `-l`
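Several of these options combine naturally. A couple of illustrative invocations (the directory name is hypothetical):
```bash
# Recursively list just the file names under src/ that contain "todo", ignoring case
grep -ril "todo" src/

# Show every line in fruits.txt that does not match, with line numbers
grep -vn "banana" fruits.txt
```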
## `ripgrep`
View file
@@ -1,6 +1,7 @@
---
categories:
- Programming Languages
tags:
- Programming_Languages
- shell
- unix
---
View file
@@ -1,6 +1,7 @@
---
categories:
- Programming Languages
tags:
- Programming_Languages
- shell
---
@@ -8,7 +9,7 @@ tags:
For example a local server.
````bash
```bash
sudo lsof -t -i:8000
# List only the process ID (-t) of whatever has an internet connection (-i) on the port number
@@ -17,4 +18,4 @@ $ 7890
sudo kill -9 7890
# Kill the process that is running there
````
```
View file
@@ -1,6 +1,7 @@
---
categories:
- Programming Languages
tags:
- Programming_Languages
- shell
---
@@ -8,16 +9,16 @@ tags:
Obviously we know that in order to list the files and sub-directories in our current directory we use `ls`, but here are some of the most useful modifiers:
* `**ls -a**`
* list and include hidden dot files
* `**ls -l**`
* list with user permissions, file-size and date-modified (most detailed)
* `**ls ***`
* list organised by folders one level deep
- `ls -a`
- list and include hidden dot files
- `ls -l`
- list with user permissions, file-size and date-modified (most detailed)
- `ls *`
- list organised by folders one level deep
## Navigation shorthands
* `cd -`
* Return to the directory you were last in
* `!!`
* Repeat the last command
- `cd -`
- Return to the directory you were last in
- `!!`
- Repeat the last command
View file
@@ -1,79 +1,79 @@
---
categories:
- Programming Languages
tags:
- Programming_Languages
- shell
- arrays
---
## List variables
When we use the term **list** in bash, we are not actually referring to a specific type of data structure. Instead a **list variable** is really just a normal variable wrapped in quote marks that has strings separated by spaces. Despite the fact that this is not an actual iterative data structure, we are still able to loop through variables of this type.
````bash
```bash
A_STR_LIST="cat dog hamster"
AN_INT_LIST="1 2 3"
````
```
To iterate through a list variable, we can use a for loop:
````bash
```bash
for ele in $A_STR_LIST; do
echo $ele
done
````
```
## Brace expansion for listed variables
With a sequence of variables that follow a pattern, for example the natural numbers (1, 2, 3, 4, ...) we can represent them in a condensed format using something called **brace expansion**. For instance to represent the natural numbers from 1 through 10:
````bash
```bash
{1..10}
````
```
Here the **two dots** stand for the intervening values.
We can iterate through brace expanded variables just the same:
````bash
```bash
for num in {1..4}; do
echo $num
done
````
```
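Brace expansion also accepts letter ranges and (in bash 4 and later) a step value, e.g.:
```bash
echo {1..10..2}   # 1 3 5 7 9
echo {a..e}       # a b c d e
```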
## Arrays
We define an array as follows:
````bash
```bash
words=(here are some words)
````
```
We can also explicitly define an array using `declare` :
````bash
```bash
declare -a words=("element1" "element2" "element3")
````
```
### Index notation
We access specific array elements by their index using the same braces style we use with variables:
````bash
```bash
echo ${words[2]}
# element3
````
```
### Iterating through arrays
````bash
```bash
for i in "${words[@]}"
do
echo "$i"
# or do whatever with individual element of the array
done
# element1 element2 element3
````
```
Note that `@` here is a special symbol standing for all the members of the `words` array.
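Two related expansions are worth knowing alongside `@`: `${#words[@]}` gives the number of elements and `${!words[@]}` the list of indices:
```bash
words=(here are some words)
echo ${#words[@]}    # 4
echo ${!words[@]}    # 0 1 2 3
```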
@@ -81,9 +81,9 @@ Note that `@` here is a special symbol standing for all the members of the `word
The following script loops through all files in a directory that begin with `l` and which are of the bash file type (`.sh`) :
````bash
```bash
for x in ./l*.sh; do
echo -n "$x "
done
echo
````
```
View file
@@ -1,9 +1,13 @@
---
categories:
- Programming Languages
tags:
- Programming_Languages
- shell
---
# Passing arguments to scripts
## Relation between commands and programs
Whenever we issue a command in bash we are really running an executable program that is associated with the command. This is why when we create our own bash scripts we must run `chmod` to make them executables. When we issue a command like `./file.sh` we are running an executable program.
@@ -16,13 +20,13 @@ If you think about it, a script is really just a function that runs when you sou
To pass an argument we simply add the values after the script in the command. For example:
````bash
```bash
./arguments.sh Thomas 33
````
```
The script is as follows:
````bash
```bash
#!/bin/bash
echo "File is called $0"
@@ -30,21 +34,21 @@ echo "The arguments provided are $@"
echo "The first argument is $1"
echo "The second argument is $2"
echo "Your name is $1 and you are $2 years old"
````
```
This outputs:
````
```
File is called ./arguments.sh
The arguments provided are Thomas 33
The first argument is Thomas
The second argument is 33
Your name is Thomas and you are 33 years old
````
```
Some points to note on syntax. The `$` is used to individuate the script itself and its arguments.
* Each argument passed is accessible from an index starting at `1` (`$1`)
* The script itself occupies the `0` position, hence we are able to log the name of the script at line 1 `$0` )
* To log the arguments as a group (for instance to later loop through them) we use `$@` .
* To get the number of arguments use `$#`
- Each argument passed is accessible from an index starting at `1` (`$1`)
- The script itself occupies the `0` position, hence we are able to log the name of the script at line 1 (`$0`)
- To log the arguments as a group (for instance to later loop through them) we use `$@` .
- To get the number of arguments use `$#`
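Putting `$#` and `$@` together, a small sketch that validates the argument count before looping:
```bash
#!/bin/bash
if [ "$#" -lt 2 ]; then
  echo "Usage: $0 <name> <age>"
  exit 1
fi
for arg in "$@"; do
  echo "Received: $arg"
done
```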
View file
@@ -1,6 +1,7 @@
---
categories:
- Programming Languages
tags:
- Programming_Languages
- shell
- processes
---
View file
@@ -1,6 +1,7 @@
---
categories:
- Programming Languages
tags:
- Programming_Languages
- shell
---
@@ -8,14 +9,14 @@ tags:
The symbol `>` is called the **redirection operator** because it redirects the output of a command to another location. You most frequently use this when you want to save contents to a file rather than standard output.
````bash
```bash
ls | grep d* > result.txt
````
```
## Appending operator
We use `>>` to append contents on the next available line of a pre-existing file. Continuing on from the example above:
````bash
```bash
echo 'These are the files I just grepped' >> result.txt
````
```
View file
@@ -1,6 +1,7 @@
---
categories:
- Programming Languages
tags:
- Programming_Languages
- shell
---
@@ -8,14 +9,14 @@ tags:
Shell sessions can be one of or several instances of the following types:
* **login shell**
* A session that must be authenticated such as when you access remote resources using SSH
* **non-login shell**
* Not the above
* **interactive shell**
* A shell session that runs in the terminal and thus that the user can *interact* with
* **non-interactive shell**
* A shell session that runs without a terminal
- **login shell**
- A session that must be authenticated such as when you access remote resources using SSH
- **non-login shell**
- Not the above
- **interactive shell**
- A shell session that runs in the terminal and thus that the user can _interact_ with
- **non-interactive shell**
- A shell session that runs without a terminal
If you are working with a remote server you will be in an **interactive login shell**. If you run a script from the command line you will be in a **non-interactive non-login shell**.
@@ -23,7 +24,7 @@ If you are working with a remote server you will be in an **interactive login sh
The type of shell session that you are currently in affects the [environmental and shell variables](https://www.notion.so/Environmental-and-shell-variables-04d5ec7e8e2b486a93f002bf686e4bbb) that you can access. This is because the order in which configuration files are read on initialisation differs depending on the type of shell.
* a session defined as a non-login shell will read `/etc/bash.bashrc` and then the user-specific `~/.bashrc` file to build its environment.
* A session started as a login session will read configuration details from the `/etc/profile` file first. It will then look for the first login shell configuration file in the users home directory to get user-specific configuration details.
- a session defined as a non-login shell will read `/etc/bash.bashrc` and then the user-specific `~/.bashrc` file to build its environment.
- A session started as a login session will read configuration details from the `/etc/profile` file first. It will then look for the first login shell configuration file in the users home directory to get user-specific configuration details.
In Linux, if you want the environmental variable to be accessible from both login and non-login shells, you must put them in `~/.bashrc`
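In bash you can check which kind of session you are in from the shell itself (`$-` lists the active shell flags; the `login_shell` option is set on login shells):
```bash
# "i" appears in $- only in interactive shells
case $- in
  *i*) echo "interactive" ;;
  *)   echo "non-interactive" ;;
esac

# bash sets the login_shell option for login shells
shopt -q login_shell && echo "login shell" || echo "non-login shell"
```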
View file
@@ -1,22 +1,20 @@
---
categories:
- Programming Languages
tags:
- Programming_Languages
- shell
- aliases
---
>
> A symbolic link, also termed a soft link, is a special kind of file that points to another file. Unlike a hard link, a symbolic link does not contain the data in the target file. It simply points to another entry somewhere in the file system.
> A symbolic link, also termed a soft link, is a special kind of file that points to another file. Unlike a hard link, a symbolic link does not contain the data in the target file. It simply points to another entry somewhere in the file system.
# Syntax
````
```
ln -s -f ~/[existing_file] ~/.[file_you_want_to_symbolise]
````
```
Real example:
````
```
ln -s -f ~/dotfiles/.vimrc ~/.vimrc
````
```
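A symlink shows up in `ls -l` with file type `l` and an arrow to its target (output illustrative):
```bash
ls -l ~/.vimrc
# lrwxrwxrwx 1 thomas thomas 24 Sep  6 12:00 /home/thomas/.vimrc -> /home/thomas/dotfiles/.vimrc
```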
View file
@@ -1,6 +1,7 @@
---
categories:
- Programming Languages
tags:
- Programming_Languages
- shell
---
@@ -8,30 +9,30 @@ tags:
If you have a `.txt` file containing text strings, each on a new line you can use the sort function to quickly put them in alphabetical order:
````bash
```bash
sort file.txt
````
```
Note that this will not save the sort; it only presents it on standard output. To save the sort you need to redirect it to a file in the standard way:
````bash
```bash
sort file.txt > output.txt
````
```
### Options
* `-r`
* reverse sort
* `c`
* check if file is already sorted. If not, it will highlight the strings which are not sorted
- `-r`
- reverse sort
- `-c`
- check if file is already sorted. If not, it will highlight the strings which are not sorted
## Find and replace: `sed`
The `sed` programme can be used to implement find and replace procedures. In `sed`, find and replace are covered by the substitution command: `s`:
````bash
```bash
sed 's/word/replacement word/' file.txt
````
```
This however will only change the first instance on each line; to apply the replacement to every instance you need to append the global flag to the substitution: `'s/word/replacement word/g'` .
@@ -41,9 +42,9 @@ Alternatively, you can use the `-i` option which will make the changes take plac
Note that this will overwrite the original version of the file and it cannot be regained. If this is an issue then it is recommended to include a backup command in the overall argument like so:
````bash
```bash
sed -i.bak 's/word/replacement word/' file.txt
````
```
This will create the file `file.txt.bak` in the directory you are working within; this is the original file as it was before the replacement was carried out.
@@ -51,9 +52,9 @@ This will create the file `file.txt.bak` in the directory you are working within
We can use the `sort -u` command to remove duplicates:
````bash
```bash
sort -u file.txt
````
```
Strictly, it is `uniq` that requires sorted input, since it only removes _adjacent_ duplicates; `sort -u` sorts and de-duplicates in a single step.
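For comparison, the same result via `uniq`, plus a variant that counts occurrences:
```bash
sort file.txt | uniq      # equivalent to sort -u file.txt
sort file.txt | uniq -c   # prefix each distinct line with its number of occurrences
```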
@@ -61,17 +62,17 @@ It is important to sort before attempting to remove duplicates since the `-u` fl
Suppose you have a file containing 1000 lines. You want to break the file up into five separate files, each containing two hundred lines. You can use `split` to accomplish this, like so:
````bash
```bash
split -l 200 big-file.txt new-files
````
```
`split` will name the resulting five files as follows:
* new-file-aa,
* new-file-ab
* new-file-ac,
* newfile-ad,
* new-file-ae.
- new-file-aa
- new-file-ab
- new-file-ac
- new-file-ad
- new-file-ae
If you would rather have numeric suffixes, use the option `-d` . You can also split a file by its number of bytes, using the option `-b` and specifying a constituent file size.
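For example (file names illustrative):
```bash
split -d -l 200 big-file.txt new-file-   # numeric suffixes: new-file-00, new-file-01, ...
split -b 1M big-file.txt chunk-          # split into 1 MB pieces
```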
@@ -79,17 +80,17 @@ If you would rather have numeric suffixes, use the option `-d` . You can also sp
We can use `cat` to read multiple files at once and then append a redirect to save them to a file:
````bash
```bash
cat file_a.txt file_b.txt file_c.txt > merged-file.txt
````
```
## Count lines, words, etc: `wc`
To count the lines, words and bytes of a file:
````bash
```bash
wc file.txt
````
```
When we use the command three numbers are outputted, in order: lines, words, bytes.
View file
@@ -1,6 +1,7 @@
---
categories:
- Programming Languages
tags:
- Programming_Languages
- shell
---
@@ -8,34 +9,34 @@ We know that `$PATH` is an [environment variable](Environmental%20and%20shell%20
Whenever any command is run, the shell looks up the directories contained in the `PATH` for the target executable file and runs it. We can see this is the case by using the `which` command which traces the executable of bash commands. Take the `echo` program:
````bash
```bash
which echo
/usr/bin/echo
````
```
Or `npm` :
````bash
```bash
which npm
/home/trinity/.nvm/versions/node/v16.10.0/bin/npm
````
```
By default the path will always contain the following locations:
* `/usr/bin`
* `/usr/sbin`
* `/usr/local/bin`
* `/usr/local/sbin`
* `/bin`
* `/sbin`
- `/usr/bin`
- `/usr/sbin`
- `/usr/local/bin`
- `/usr/local/sbin`
- `/bin`
- `/sbin`
All the inbuilt terminal programs reside at these locations and most of them are at `/usr/bin`. This is why they run automatically without error. If you attempt to run a program that doesn't reside at these locations then you will get an error along the lines of 'program x is not found in PATH'.
## Structure of the PATH
````bash
```bash
/home/trinity/.nvm/versions/node/v16.10.0/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/usr/lib/wsl/lib:/mnt/c/Python39/Scripts/:/mnt/c/Python39/:/mnt/c/Windows/system32:/mnt/c/Windows:/mnt/c/Windows/System32/Wbem:/mnt/c/Windows/System32/WindowsPowerShell/v1.0/:/mnt/c/Windows/System32/OpenSSH/:/mnt/c/Program Files/dotnet/:/mnt/c/Program Files/nodejs/:/mnt/c/ProgramData/chocolatey/bin:/mnt/c/Users/thomas.bishop/AppData/Local/Microsoft/WindowsApps:/mnt/c/Users/thomas.bishop/AppData/Local/Programs/Microsoft VS Code/bin:/mnt/c/Users/thomas.bishop/AppData/Local/Programs/Hyper/resources/bin:/mnt/c/Users/thomas.bishop/AppData/Roaming/npm
````
```
## Adding to the PATH
@@ -43,9 +44,9 @@ Only the default directories load to the PATH on every session. How then can we
For example, at the bottom of my `.zshrc` on my work computer I have:
````bash
```bash
export CHROME_BIN=/mnt/c/Program\ Files\ \(x86\)/Google/Chrome/Application/chrome.exe
````
```
This enables me to access the Chromium binaries from my terminal session (needed for running Angular tests) but it doesn't add it to the path; it creates an environment variable on every session.
@@ -53,22 +54,22 @@ For demonstration, let's add a user's desktop directory to the PATH.
First we go to the `.bashrc` and add the `export` command. [Remember](https://www.notion.so/Environmental-and-shell-variables-04d5ec7e8e2b486a93f002bf686e4bbb) that this is the command for creating a new environment variable:
````bash
```bash
export PATH="$PATH:~/Desktop"
````
```
We force a reload of the `.bashrc` with the command:
````bash
```bash
source ~/.bashrc
````
```
Then we can check this directory has been added to the path with an echo:
````bash
```bash
echo $PATH
...:~/Desktop
````
```
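Order matters: directories earlier in the PATH win, so prepend rather than append when a local version of a program should shadow the system one (a sketch, assuming `~/bin` exists):
```bash
export PATH="$HOME/bin:$PATH"
```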
## Relation between commands and programs
View file
@@ -1,6 +1,7 @@
---
categories:
- Programming Languages
tags:
- Programming_Languages
- shell
---
@@ -12,25 +13,25 @@ The following are useful built-in utility methods that you can use for checking
By default `echo` adds a new line after its output; the `-n` flag prevents this:
````bash
```bash
echo 'Your name is Thomas'
echo 'and you are 33 years old'
# Your name is Thomas
# and you are 33 years old
````
```
````bash
```bash
echo -n 'Your name is Thomas '
echo 'and you are 33 years old'
# Your name is Thomas and you are 33 years old
````
```
## Operators
### Mathematical
````bash
-lt , -gt,
````
```bash
-lt, -gt, -le, -ge, -eq, -ne
```
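In use, these operators compare integers inside test brackets, e.g.:
```bash
a=3; b=5
if [ "$a" -lt "$b" ]; then
  echo "$a is less than $b"
fi
```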
View file
@@ -1,17 +1,17 @@
---
categories:
- Programming Languages
tags:
- Programming_Languages
- shell
---
## Types
>
> There is no typing in bash!
> There is no typing in bash!
* Bash variables do not have types thus bash is neither loosely or strictly typed. Anything you apply the identity operator against becomes a character string variable.
* Bash is however able to distinguish numerical strings which is why arithmetic operations and comparisons work.
* Consequently there is no `null` type either. The closest thing is an empty string, i.e. `APPROX_NULL=""` .
- Bash variables do not have types, thus bash is neither loosely nor strictly typed. Anything you apply the assignment operator against becomes a character string variable.
- Bash is however able to distinguish numerical strings which is why arithmetic operations and comparisons work.
- Consequently there is no `null` type either. The closest thing is an empty string, i.e. `APPROX_NULL=""` .
## Variables
@@ -19,41 +19,40 @@ tags:
As noted we use the equality symbol to create a variable:
````bash
```bash
PRIM_VAR_STR="My first variable"
PRIM_VAR_FLOAT="50.3"
PRIM_VAR_BOOL="true"
````
```
As there is no typing in bash, the names of these variables are purely notional.
To invoke a variable we use special brackets:
````bash
```bash
echo ${PRIM_VAR_STR} # My first variable
echo ${PRIM_VAR_FLOAT} # 50.3
echo ${PRIM_VAR_BOOL} # true
````
```
* there is no compunction to use capitals for variables but it can be helpful to distinguish custom variables from program variables (see below)
* quotation marks at declaration are also not strictly necessary however they can help avoid bugs. Also serves as a reminder that every type is basically a string at the end of the day
- there is no requirement to use capitals for variables but it can be helpful to distinguish custom variables from program variables (see below)
- quotation marks at declaration are also not strictly necessary, however they can help avoid bugs; they also serve as a reminder that every type is basically a string at the end of the day
### Variables that hold references to programs
We can store the output of a bash program using command substitution, with slightly different syntax:
````bash
```bash
user="$(whoami)"
````
```
When we want to invoke a program variable we don't need to use braces:
````bash
```bash
echo $user # thomasbishop
````
```
>
> Note that when we declare anything in bash (any time `=` is used) we **do not use spaces!** If you do, the variable will not be set.
> Note that when we declare anything in bash (any time `=` is used) we **do not use spaces!** If you do, the variable will not be set.
## Declarations
@@ -61,22 +60,22 @@ You can achieve a sort of typing through the `declare` keyword, although bear in
### `-r` : readonly
````bash
```bash
declare -r var1="I'm read only"
````
```
Roughly equivalent to a `const` : if you attempt to change the value of `var1` it will fail with an error message.
### `-i` : integer
````bash
```bash
declare -i var2="43"
````
```
The script will treat all subsequent occurrences of `var2` as an integer
### `-a` : array
````bash
```bash
declare -a anArray
````
```
View file
@@ -1,20 +1,20 @@
---
categories:
- Programming Languages
tags:
- Programming_Languages
- typescript
---
# Any
`any` is a TS-specific type that we can think of it as a higher level parent to all the other types that exist in TypeScript.
`any` is a TS-specific type that we can think of as a higher-level parent to all the other types that exist in TypeScript.
It means in effect that either no type declaration has been asserted or that the TS compiler cannot infer the type that you mean. Because `any` does not have a data type it is equivalent to all the individual scalar and reference types combined. In TS this kind of type is called a **supertype**, and specific types that actually correspond to a scalar or reference type are known as **subtypes**. `any` is the supertype of all types and `string` for example is a subtype of `any`.
>
> Every value of `string` can be assigned to its supertype`any` but not every value of `any` can be assigned to its subtype `string`
> Every value of `string` can be assigned to its supertype `any` but not every value of `any` can be assigned to its subtype `string`
You can declare `any` as a type if you wish however it is discouraged because it effectively undermines the whole purpose of TS. Doing so is basically the same thing as declaring a value in normal JS - there is no designation at left hand assignation of which type the data belongs to.
>
> `any` reflects JavaScript's overarching flexibility; you can see it as a backdoor to a world where you want neither tooling nor type safety.
> `any` reflects JavaScript's overarching flexibility; you can see it as a backdoor to a world where you want neither tooling nor type safety.
`any` means you can escape errors during development. If you are using custom types/interfaces and you keep getting an annoying error saying that property X doesn't exist on type, `any` will allow you to overcome it until you go back later and refine.
View file
@@ -1,6 +1,8 @@
---
categories:
- Programming Languages
tags:
- Programming_Languages
- typescript
---
View file
@@ -1,6 +1,7 @@
---
categories:
- Programming Languages
tags:
- Programming_Languages
- typescript
---
View file
@@ -1,6 +1,7 @@
---
categories:
- Programming Languages
tags:
- Programming_Languages
- typescript
---
View file
@@ -1,8 +1,8 @@
---
categories:
- Programming Languages
tags:
- Programming_Languages
- typescript
- functions
---
## Function overloads
@@ -19,7 +19,7 @@ function logSearch(term: string, options?: number): void;
// Implementation:
function logSearch(term: string, p2?: unknown) {
let query = `https://searchdatabase/${term}`;
if (typeof p2 === "string") {
if (typeof p2 === 'string') {
query = `${query}/tag=${p2}`;
console.log(query);
} else {
@@ -28,8 +28,8 @@ function logSearch(term: string, p2?: unknown) {
}
}
logSearch("apples", "braeburn");
logSearch("bananas", 3);
logSearch('apples', 'braeburn');
logSearch('bananas', 3);
```
```ts
@@ -42,7 +42,7 @@ function logSearchUnion(term: string, options?: number): void;
// Implementation:
function logSearchUnion(term: string, p2?: string | number) {
let query = `https://searchdatabase/${term}`;
if (typeof p2 === "string") {
if (typeof p2 === 'string') {
query = `${query}/tag=${p2}`;
console.log(query);
} else {
@@ -51,6 +51,6 @@ function logSearchUnion(term: string, p2?: string | number) {
}
}
logSearchUnion("melon", "honey-dew");
logSearchUnion("oranges", 4);
logSearchUnion('melon', 'honey-dew');
logSearchUnion('oranges', 4);
```
View file
@@ -1,8 +1,8 @@
---
categories:
- Programming Languages
tags:
- Programming_Languages
- typescript
- functions
---
# Functions
View file
@@ -1,6 +1,7 @@
---
categories:
- Programming Languages
tags:
- Programming_Languages
- typescript
---
View file
@@ -1,6 +1,7 @@
---
categories:
- Programming Languages
tags:
- Programming_Languages
- typescript
---
View file
@@ -1,6 +1,7 @@
---
categories:
- Programming Languages
tags:
- Programming_Languages
- typescript
---
View file
@@ -1,6 +1,7 @@
---
categories:
- Programming Languages
tags:
- Programming_Languages
- typescript
- data-types
---
View file
@@ -1,6 +1,7 @@
---
categories:
- Programming Languages
tags:
- Programming_Languages
- typescript
---
View file
@@ -1,6 +1,7 @@
---
categories:
- Programming Languages
tags:
- Programming_Languages
- typescript
---
View file
@@ -1,6 +1,7 @@
---
categories:
- Programming Languages
tags:
- Programming_Languages
- typescript
---
View file
@@ -1,6 +1,7 @@
---
categories:
- Programming Languages
tags:
- Programming_Languages
- typescript
---
View file
@@ -1,7 +1,7 @@
---
category: Software Engineering
tags:
- Software_Engineering
- call-stack
- callstack
---
# The call-stack
@@ -11,20 +11,21 @@ A [stack](/Data_Structures/Stacks.md) data structure that holds the information
## Example
```js
function greet(who){
console.log("Hello " + who);
function greet(who) {
console.log('Hello ' + who);
}
greet("Harry");
greet('Harry');
console.log("Bye");
console.log('Bye');
```
### Breakdown
### Breakdown
1. Interpreter receives call to `greet()`
2. Goes to the definition of this function (`function greet(who)...`)
2. Goes to the definition of this function (`function greet(who)...`)
3. Executes the `console.log` within this function
4. Returns to the location that called it (`greet('Harry')`)
5. Finds that there is nothing else to do in this function so moves to the next statement: `console.log('Bye')`
6. Executes
7. Returns to line that called it. Finds nothing else to do. Exits program.
View file
@@ -1,6 +1,7 @@
---
tags:
- Software_Engineering
categories:
- Software Engineering
tags: [memory]
---
# Memory leaks
View file
@@ -1,28 +1,32 @@
---
tags:
- Software_Engineering
categories:
- Software Engineering
tags: [semver]
---
# Semantic versioning
# Semantic versioning
```
3.4.1 === major.minor.patch
```
* Major
* New feature which may potentially cause breaking changes to applications dependent on the previous major version.
* Minor
* New features that do not break the existing API
* Patch
* Bug fixes for the current minor version
- Major
- New feature which may potentially cause breaking changes to applications dependent on the previous major version.
- Minor
- New features that do not break the existing API
- Patch
- Bug fixes for the current minor version
## Glob patterns for versioning
### Caret
Interested in any version so long as the major version remains at $n$. E.g if we are at `^4.2.1` and we upgrade, we are ok with `4.5.3` or `4.8.2`. We are not bothered about the minor or patch version.
Interested in any version so long as the major version remains at $n$. E.g. if we are at `^4.2.1` and we upgrade, we are OK with `4.5.3` or `4.8.2`. We are not bothered about the minor or patch version.
This is equivalent to `4.x`
@@ -32,4 +36,4 @@ Interested in any patch version within set major and minor parameters. For examp
### No tilde or caret
Use the *exact* version specified
Use the _exact_ version specified
View file
@@ -1,22 +1,21 @@
---
categories:
- Software Engineering
tags:
- Software_Engineering
- publication
- resources
---
## General
### Meyer's Uniform Access Principle
>
> All services offered by a module should be available through a uniform notation, which does not betray whether they are implemented through storage or through computation
> All services offered by a module should be available through a uniform notation, which does not betray whether they are implemented through storage or through computation
This is a clear recommendation for using getters and setters with classes. You should not see method calls outside of the class; they should appear as properties of the object.
## Don't Repeat Yourself
>
> Every piece of knowledge must have a single, unambiguous, authoritative representation within a system
> Every piece of knowledge must have a single, unambiguous, authoritative representation within a system
## The Principle of Orthogonality
@@ -24,28 +23,25 @@ This notion comes from geometry. Two lines are orthogonal to each other if they
Their meeting isn't the important part. Think of a simple x, y graph:
>
> If you move along one of the lines, **your position projected onto the other doesn't change**
> If you move along one of the lines, **your position projected onto the other doesn't change**
In computing this is expressed in terms of **decoupling** and is implemented through modular, component-based architectures. As much as possible code should be scoped narrowly so that a change in one area does not cause changes in others. By keeping components discrete it is easier to make changes, refactor, improve and extend the codebase.
>
> We want to design components that are self-contained: independent and with a single, well-defined purpose. When components are isolated from one another, you know that you can change one without having to worry about the rest. As long as you don't change that component's external interfaces, you can be comfortable that you won't cause problems that ripple through the entire system.
> We want to design components that are self-contained: independent and with a single, well-defined purpose. When components are isolated from one another, you know that you can change one without having to worry about the rest. As long as you don't change that component's external interfaces, you can be comfortable that you won't cause problems that ripple through the entire system.
### Benefits of orthogonality: productivity
* Changes are localised so development time and testing time are reduced
* Orthogonality promotes reuse: if components have specific, well-defined responsibilities, they can be combined with new components in ways that were not envisioned by their original implementors. The more loosely coupled your systems, the easier they are to reconfigure and reengineer.
* Assume that one component does *M* distinct things and another does *N* things. If they are orthogonal and you combine them, the result does *M x N* things. However if the two components are not orthogonal, there will be overlap, and the result will do less. You get more functionality per unit effort by combining orthogonal components.
- Changes are localised so development time and testing time are reduced
- Orthogonality promotes reuse: if components have specific, well-defined responsibilities, they can be combined with new components in ways that were not envisioned by their original implementors. The more loosely coupled your systems, the easier they are to reconfigure and reengineer.
- Assume that one component does _M_ distinct things and another does _N_ things. If they are orthogonal and you combine them, the result does _M x N_ things. However if the two components are not orthogonal, there will be overlap, and the result will do less. You get more functionality per unit effort by combining orthogonal components.
### Benefits of orthogonality: reduced risk
* Diseased sections of code are isolated. If a module is sick, it is less likely to spread the symptoms around the rest of the system.
* Overall the system is less fragile: make small changes to a particular area and any problems you generate will be restricted to that area.
* Orthogonal systems are better tested because it is easier to run and design discrete tests on modularised components.
- Diseased sections of code are isolated. If a module is sick, it is less likely to spread the symptoms around the rest of the system.
- Overall the system is less fragile: make small changes to a particular area and any problems you generate will be restricted to that area.
- Orthogonal systems are better tested because it is easier to run and design discrete tests on modularised components.
>
> Building a unit test is itself an interesting test of orthogonality: what does it take to build and link a unit test? Do you have to drag in a large percentage of the rest of the system just to get a test to compile or link? **If so, you've found a module that is not well decoupled from the rest of the system**
> Building a unit test is itself an interesting test of orthogonality: what does it take to build and link a unit test? Do you have to drag in a large percentage of the rest of the system just to get a test to compile or link? **If so, you've found a module that is not well decoupled from the rest of the system**
### Relationship between DRY and orthogonality
@@ -63,28 +59,25 @@ The authors use the notion of tracer bullets as a metaphor for developing softwa
They differ from prototypes in that they include integrated overall functionality but in a rough state. Whereas prototypes are more for singular, specific subcomponents of the project. Because tracer bullet models are joined-up in this way, even if they turn out to be inappropriate in some regard, they can be adapted and developed into a better form, without losing the core functionality.
>
> Tracer bullets work because they operate in the same environment and under the same constraints as the real bullets. They get to the target fast, so the gunner gets immediate feedback. And from a practical standpoint they are a relatively cheap solution. To get the same effect in code, we're looking for something that gets us from a requirement to some aspect of the final system quickly, visibly and repeatably.
> Tracer bullets work because they operate in the same environment and under the same constraints as the real bullets. They get to the target fast, so the gunner gets immediate feedback. And from a practical standpoint they are a relatively cheap solution. To get the same effect in code, we're looking for something that gets us from a requirement to some aspect of the final system quickly, visibly and repeatably.
>
> Tracer code is not disposable: you write it for keeps. It contains all the error-checking, structuring, documentation and self-checking that a piece of production code has. It simply is not fully functional. However, once you have made an end-to-end connection among the components of your system, you can check how close to the target you are, adjusting as necessary.
> Tracer code is not disposable: you write it for keeps. It contains all the error-checking, structuring, documentation and self-checking that a piece of production code has. It simply is not fully functional. However, once you have made an end-to-end connection among the components of your system, you can check how close to the target you are, adjusting as necessary.
### Distinguishing from prototyping
>
> Prototyping generates disposable code. Tracer code is lean but complete, and forms part of the skeleton of the final system. Think of prototyping as the reconnaissance and intelligence gathering that takes place before a single tracer bullet is fired.
> Prototyping generates disposable code. Tracer code is lean but complete, and forms part of the skeleton of the final system. Think of prototyping as the reconnaissance and intelligence gathering that takes place before a single tracer bullet is fired.
## Design by contract
To understand DBC we have to think of a computational process as involving two stages: the call and the execution of the routine that happens in response to the call (henceforth **caller** and **routine**).
* the caller could be a function expression that invokes a function and passes arguments to it expecting a given output. The function that executes is the routine
* the caller could be an object instantiation that calls a method belonging to its parent class
* the caller could be a parent React component that passes props to a child component
- the caller could be a function expression that invokes a function and passes arguments to it expecting a given output. The function that executes is the routine
- the caller could be an object instantiation that calls a method belonging to its parent class
- the caller could be a parent React component that passes props to a child component
Design by contract means specifying clear and inviolable rules detailing what must obtain at both the call stage and the routine stage if the process is to execute.
Every function and method in a software system does something. Before it starts that something, the routine may have some expectation of the state of the world and it may be able to make a statement about the state of the world when it concludes. These expectations are defined in terms of preconditions, postconditions, and invariants. They form the basis of a **contract** between the caller and the routine. Hence *design by contract*.
Every function and method in a software system does something. Before it starts that something, the routine may have some expectation of the state of the world and it may be able to make a statement about the state of the world when it concludes. These expectations are defined in terms of preconditions, postconditions, and invariants. They form the basis of a **contract** between the caller and the routine. Hence _design by contract_.
### Preconditions
@@ -102,13 +95,11 @@ There is an analogue here with functional programming philosophy: the function s
One way to achieve this is to be miserly when setting up the contract, which overlaps with orthogonality. Only specify the minimum return on a contract rather than multiple postconditions, since each additional postcondition increases the likelihood that the contract will be breached at some point. If you need multiple postconditions, spread them out and achieve them in a compositional way, with multiple separate and modular processes.
>
> Be strict in what you will accept before you begin, and promise as little as possible in return. If your contract indicates that you'll accept anything and promise the world in return, then you've got a lot of code to write!
> Be strict in what you will accept before you begin, and promise as little as possible in return. If your contract indicates that you'll accept anything and promise the world in return, then you've got a lot of code to write!
### Division of responsibilities
>
> If all the routine's preconditions are met by the caller, the routine shall guarantee that all postconditions and invariants will be true when it completes.
> If all the routine's preconditions are met by the caller, the routine shall guarantee that all postconditions and invariants will be true when it completes.
Note that the emphasis of responsibilities is on the caller.
@@ -132,50 +123,48 @@ It's a fancy name for a simple principle summarised by 'don't talk to strangers'
### Formal
A method *m* of object *O* may only invoke the methods of the following kinds of objects:
A method _m_ of object _O_ may only invoke the methods of the following kinds of objects:
* *O* itself
* *m*'s parameters
* any objects created or instantiated within *m*
* *O*'s direct component objects (in other words nested objects)
* a global variable (over and above *O*) accessible by *O*, within the scope of *m*
- _O_ itself
- _m_'s parameters
- any objects created or instantiated within _m_
- _O_'s direct component objects (in other words nested objects)
- a global variable (over and above _O_) accessible by _O_, within the scope of _m_
## Model, View, Controller design pattern
The key concept behind the MVC idiom is separating the model from both the GUI that represents it and the controls that manage the view.
* **Model**
* The abstract data model representing the target object
* The model has no direct knowledge of any views or controllers
* **View**
* A way to interpret the model. It subscribes to changes in the model and logical events from the controller
* **Controller**
* A way to control the view and provide the model with new data. It publishes events to both the model and the view
- **Model**
- The abstract data model representing the target object
- The model has no direct knowledge of any views or controllers
- **View**
- A way to interpret the model. It subscribes to changes in the model and logical events from the controller
- **Controller**
- A way to control the view and provide the model with new data. It publishes events to both the model and the view
For comparison, distinguish React from MVC. In React data is unidirectional: the JSX component as controller cannot change the state. The state is passed down to the controller. Also MVC lends itself to separation of technologies: code used to create the View is different from Code that manages Controller and data Model. In React it's all one integrated system.
## Refactoring
>
> Rewriting, reworking, and re-architecting code is collectively known as refactoring
> Rewriting, reworking, and re-architecting code is collectively known as refactoring
### When to refactor
* **Duplication**: you've discovered a violation of the DRY principle
* **Non-orthogonal design**: you've discovered some code or design that could be made more orthogonal
* **Outdated knowledge**: your knowledge about the problem and you skills at implementing a solution have changed since the code was first written. Update and improve the code to reflect these changes
* **Performance: y**ou need to move functionality from one area of the system to another to improve performance
- **Duplication**: you've discovered a violation of the DRY principle
- **Non-orthogonal design**: you've discovered some code or design that could be made more orthogonal
- **Outdated knowledge**: your knowledge about the problem and your skills at implementing a solution have changed since the code was first written. Update and improve the code to reflect these changes
- **Performance**: you need to move functionality from one area of the system to another to improve performance
### Tips when refactoring
* Don't try to refactor and add new functionality at the same time!
* Make sure you have good tests before you begin refactoring. Run the tests as you refactor. That way you will know quickly if your changes have broken anything
* Take short, deliberative steps. Refactoring often involves making many localised changes that result in a larger-scale change.
- Don't try to refactor and add new functionality at the same time!
- Make sure you have good tests before you begin refactoring. Run the tests as you refactor. That way you will know quickly if your changes have broken anything
- Take short, deliberate steps. Refactoring often involves making many localised changes that result in a larger-scale change.
## Testing
>
> Most developers hate testing. They tend to test-gently, subconsciously knowing where the code will break and avoiding the weak spots. Pragmatic Programmers are different. We are *driven* to find our bugs *now*, so we don't have to endure the shame of others finding our bugs later.
> Most developers hate testing. They tend to test-gently, subconsciously knowing where the code will break and avoiding the weak spots. Pragmatic Programmers are different. We are _driven_ to find our bugs _now_, so we don't have to endure the shame of others finding our bugs later.
### Unit testing
@@ -185,15 +174,15 @@ We can think of unit testing as **testing against contract** (detailed above). W
Scope for unit testing should cover:
* Obviously, returning the expected value/outcome
* Ensuring that faulty arguments/ types are rejected and initiate error handling (deliberately breaking your code to ensure it is handled appropriately)
* Pass in the boundary and maximum value
* Pass in values between the zero and the maximum expressible argument to cover a range of cases
- Obviously, returning the expected value/outcome
- Ensuring that faulty arguments/ types are rejected and initiate error handling (deliberately breaking your code to ensure it is handled appropriately)
- Pass in boundary and maximum values
- Pass in values between zero and the maximum expressible argument to cover a range of cases
Benefits of unit testing include:
* It creates an example to other developers how to use all of the functionality of a given module
* It is a means to build **regression tests** which can be used to validate any future changes to the code. In other words, the future changes should pass the older tests to prove they are consistent with the code base
- It creates an example to other developers how to use all of the functionality of a given module
- It is a means to build **regression tests** which can be used to validate any future changes to the code. In other words, the future changes should pass the older tests to prove they are consistent with the code base
### Integration testing
@@ -203,12 +192,11 @@ Integration testing is really just an extension of the unit testing described, o
## Commenting your code
In general, comments should detail **why** something is done, its purpose and its goal. The code already shows *how* it's done, so commenting on this is redundant, and violates the DRY principle.
In general, comments should detail **why** something is done, its purpose and its goal. The code already shows _how_ it's done, so commenting on this is redundant, and violates the DRY principle.
>
> We like to see a simple module-level comment, comments for significant data and type declarations, and a brief class and per-method header describing how the function is used and anything it does that is not obvious
> We like to see a simple module-level comment, comments for significant data and type declarations, and a brief class and per-method header describing how the function is used and anything it does that is not obvious
````js
```js
/*
Find the highest value within a specified data range of samples
@@ -217,4 +205,4 @@ Parameter: aThreshold = minimum value to consider
Return: the value, or null if no value found that is greater than or equal to the threshold
*/
````
```
View file
@@ -1,50 +1,48 @@
---
categories:
- Computer Architecture
tags:
- Theory_of_Computation
- theory-of-computation
- history
---
> A general-purpose computer is one that, given the appropriate instructions and required time, should be able to perform most common computing tasks.
>
> A general-purpose computer is one that, given the appropriate instructions and required time, should be able to perform most common computing tasks.
This sets a general purpose computer apart from a special-purpose computer, like the one you might find in your dishwasher, which may have its instructions hardwired or coded into the machine. Special purpose computers only perform a single set of tasks according to prewritten instructions. We'll take the term _computer_ to mean general purpose computer.
This sets a general purpose computer apart from a special-purpose computer, like the one you might find in your dishwasher, which may have its instructions hardwired or coded into the machine. Special purpose computers only perform a single set of tasks according to prewritten instructions. We'll take the term *computer* to mean general purpose computer.
Simplified model of what a computer is:
Simplified model of what a computer is:
![1.4-Input-Process-Output.png](../img/1.4-Input-Process-Output.png)
Although the input, output and storage parts of a computer are very important, they will not be the focus of this course. Instead we are going to learn all about the process part, which will focus on how the computer is able to follow instructions to make calculations.
## **Supplementary Resources**
### Early computing (*Crash Course Computer Science)*
### Early computing (_Crash Course Computer Science_)
[Early Computing: Crash Course Computer Science #1](https://www.youtube.com/watch?v=O5nskjZ_GoI)
- The abacus was created because the scale of society had become greater than what a single person could hold and manipulate in their mind
- E.g. thousands of people in a village and tens of thousands of cattle
- In a basic abacus, each row of beads (say, each differently coloured) represents a different power of ten
- As well as aiding calculation, the abacus acts as a primitive storage device
- Similar early computing devices: astrolabe, slide rule, sunrise clocks, tide clocks
> At each increase of knowledge, as well as on the contrivance of every new tool, human labour becomes abridged. **Charles Babbage**
- One of the first computers of the modern era was the Step Reckoner, built by Leibniz in 1694.
- In addition to adding, this machine was able to multiply and divide, essentially through a mechanical trick: from a mechanical point of view, multiplications and divisions are just many additions and subtractions
- For example, to divide 17 by 5, we just subtract 5, then 5, then 5 again until we can't subtract any more, leaving two over (see the sketch after this list)
- But as these machines were expensive and slow, people used pre-computed tables in book form, generated by human computers. These were particularly useful for things like square roots.
- Similarly, range tables were created to aid the military in calculating distances for gunboat artillery, factoring in contextual variables like wind, drift, slope and elevation. These were used well into WW2, but each was limited to a particular type of cannon or shell
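A quick sketch of that mechanical idea in code (the function name is illustrative):

```js
// Division as repeated subtraction, the trick the Step Reckoner relied on
function divideBySubtraction(dividend, divisor) {
  let quotient = 0;
  let remainder = dividend;
  while (remainder >= divisor) {
    remainder -= divisor; // one mechanical subtraction
    quotient += 1;
  }
  return { quotient, remainder };
}

console.log(divideBySubtraction(17, 5)); // { quotient: 3, remainder: 2 }, i.e. "two left over"
```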
![Screenshot_2020-08-09_at_21.32.54 1.png](../img/Screenshot_2020-08-09_at_21.32.54%201.png)
![Screenshot_2020-08-09_at_21.34.48.png](../img/Screenshot_2020-08-09_at_21.34.48.png)
> Before the invention of actual computers, 'computer' was a job title denoting people who were employed to conduct complex calculations, sometimes with the aid of machinery, but most often not. This persisted until the late 19th century, when the word's meaning changed to include devices like adding machines.
- Babbage sought to overcome this by designing the **Difference Engine**, which was able to compute polynomials: complex mathematical expressions that have constants, variables and exponents. He failed to complete it in his lifetime because of the complexity and number of intricate parts required. A model was eventually built successfully in the 1990s from his designs, and it worked.
- But while he was working on this he also conceived of a better, general-purpose computing device that wasn't limited to polynomial calculations → the Analytical Engine.
- It could run operations in sequence and had memory and a primitive printer. It was way ahead of its time and was never completed.
- Ada Lovelace wrote hypothetical programs for the Analytical Engine, hence she is considered the world's first computer programmer.
- At this point, computing was limited to scientific and engineering disciplines, but in 1890 the US government needed a computer in order to comply with the constitutional stipulation to hold a census every ten years. This was becoming increasingly difficult with the growing population: a manual count would have taken more than 13 years. This led to the punch cards designed by Herman Hollerith, from whose company IBM was born

View file

@ -1 +0,0 @@
I am more than ever now the bride of science. Religion to me is science, and science is religion. In that deeply-felt truth lies the secret of my intense devotion to the reading of God's natural works… And when I behold the scientific and so-called philosophers full of selfish feelings, and of a tendency to war against circumstances and Providence, I say to myself: They are not true priests, they are but half prophets — if not absolutely false ones. They have read the great page simply with the physical eye, and with none of the spirit within. The intellectual, the moral, the religious seem to me all naturally bound up and interlinked together in one great and harmonious whole… There is too much tendency to making separate and independent bundles of both the physical and the moral facts of the universe. Whereas, all and everything is naturally related and interconnected. A volume could I write on this subject…

View file

@ -1,6 +1,8 @@
---
categories:
- Computer Architecture
tags:
- Theory_of_Computation
- theory-of-computation
- turing
---
@ -17,14 +19,14 @@ For example:
### State 2
- If 0 then erase
- Write 1 then move right
- Go to state 5
### State 5
- If 1, then erase
- Write 0 then move left
- Go to state _n_
Alan Turing proved that **any problem that is computable** can be computed by a Turing Machine using this simple system.
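To make the state-table idea concrete, here is a minimal Turing-machine step loop in JavaScript (the particular states, tape alphabet and halting convention are illustrative assumptions, not Turing's original formulation):

```js
// Each state maps a read symbol to { write, move, next }.
// This machine flips 0s and 1s until it reads a blank, then halts.
const rules = {
  flip: {
    "0": { write: "1", move: 1, next: "flip" },
    "1": { write: "0", move: 1, next: "flip" },
    " ": { write: " ", move: 0, next: "halt" },
  },
};

function run(tape, state = "flip", head = 0) {
  const cells = tape.split("");
  while (state !== "halt") {
    const symbol = cells[head] ?? " "; // read (blank past the end of the tape)
    const { write, move, next } = rules[state][symbol];
    cells[head] = write; // erase and write, as in the states above
    head += move;        // move right (+1), left (-1) or stay (0)
    state = next;        // go to the next state
  }
  return cells.join("");
}

console.log(run("0110 ")); // "1001 "
```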