# Machine Learning

### simpleLinearRegression

`simpleLinearRegression` calculates the linear relationship between two variables, an independent (x-axis) and a dependent (y-axis) variable, resulting in values for `a` and `b` in the equation `y = ax + b`, where `a` is the slope of the line and `b` is where the line crosses the y-axis.

**Parameters:**

- `independent` - The set representing the x values.
- `dependent` - The set representing the y values, in the same position as their corresponding `independent` values if arrays are used.

**Usage:**

`simpleLinearRegression(independent, dependent)`

**Returns:**

- a tuple of `Float64`, `(a, b)`, where `a` is the slope of the line and `b` is the intersection with the y axis according to the linear equation `y = ax + b`.
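The closed-form least-squares fit behind a function like this can be sketched in Python. This is an illustration of the math only, not the SQL implementation, and the function name is invented for the example:

```python
def simple_linear_regression(xs, ys):
    """Ordinary least-squares fit of y = a*x + b.

    Returns (a, b): slope and y-intercept.
    """
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope is covariance(x, y) divided by variance(x).
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Points lying exactly on y = 2x + 1 recover a = 2, b = 1.
a, b = simple_linear_regression([0, 1, 2, 3], [1, 3, 5, 7])
```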

### stochasticLinearRegression

When fitting a line to a set of data, a single value of the independent variable (x-axis) often has more than one value of the dependent variable (y-axis). Stochastic gradient descent optimization accounts for this variability by working on batches of the overall set: it compares expected outcomes with the actual outcomes of the original data set while minimizing the overall error, to find the best-fitting line.

`stochasticLinearRegression` performs linear regression on two variables, one independent and one dependent, using **stochastic gradient descent optimization** before returning `(slope, intercept)`. This is particularly important on large sets of data.

Regularization penalizes parameters, except the intercept, to avoid over-fitting. Ridge regression adds the "squared magnitude" of the coefficients as a penalty to the loss function.

**Parameters:**

- `learning_rate` - the maximum rate at which to change the model each time in reaction to the calculated error. Default is `0.00001`.
- `L2` - the ridge regression coefficient. Default is `0.1`.
- `batch_size` - the size of the mini-batches used to adjust the model. Smaller batch sizes mean smaller adjustments. Default is `15`.
- `method` - the method for updating the weights. It should be contained in single quotes, like `'Momentum'`. Possible values are:
  - `Adam` (default) - designed specifically for training deep neural networks.
  - `Momentum` - takes the last several updates into account.
  - `Nesterov` - looks ahead to avoid over-adjusting.
  - `SGD` - vanilla stochastic gradient descent.

**Usage:**

`stochasticLinearRegression(learning_rate, L2, batch_size, method)(independent, dependent)`

**Returns:**

- a tuple of `Float64`, `(a, b)`, where `a` is the slope of the line and `b` is the intersection with the y axis according to the linear equation `y = ax + b`.
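The mini-batch gradient descent procedure described above can be sketched in Python. This is a simplified illustration using plain SGD updates with a ridge (L2) penalty; the function name and default values are chosen for the example and are not the SQL function's API:

```python
import random

def stochastic_linear_regression(xs, ys, learning_rate=0.01, l2=0.1,
                                 batch_size=15, epochs=500, seed=0):
    """Mini-batch SGD fit of y = a*x + b with an L2 penalty on the slope.

    Returns (a, b): slope and y-intercept.
    """
    rng = random.Random(seed)
    a, b = 0.0, 0.0
    data = list(zip(xs, ys))
    for _ in range(epochs):
        rng.shuffle(data)
        for start in range(0, len(data), batch_size):
            batch = data[start:start + batch_size]
            # Gradient of the mean squared error over this mini-batch.
            grad_a = sum(2 * (a * x + b - y) * x for x, y in batch) / len(batch)
            grad_b = sum(2 * (a * x + b - y) for x, y in batch) / len(batch)
            # The L2 penalty applies to the slope only; the intercept
            # is not regularized.
            a -= learning_rate * (grad_a + 2 * l2 * a)
            b -= learning_rate * grad_b
    return a, b
```

Each epoch shuffles the data and walks through it in mini-batches, so every batch nudges `(a, b)` toward a smaller error on that batch rather than on the whole set at once.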


### stochasticLogisticRegression

Binary logistic regression applies when the dependent variable is categorical with a pass/fail condition, like Hotdog/Not Hotdog. It calculates the probability of an outcome based on an independent variable. A 'pass' is represented as a `1` and a 'fail' is represented with a `0`. The resulting equation gives the probability, for a given value of the independent variable, that the dependent variable will 'pass' or 'fail' the categorization.

When fitting a line to a set of data, for a single value of the independent variable (x-axis), some observations may pass the categorization and some may fail. Stochastic gradient descent optimization accounts for this variability by working on batches of the overall set: it compares expected outcomes with the actual outcomes of the original data set while minimizing the overall error, to find the best fit.

`stochasticLogisticRegression` performs binary logistic regression on two variables, one independent and the other the 'pass'/'fail' categorization, using **stochastic gradient descent optimization** before returning `(slope, intercept)`. This is particularly important on large sets of data.

Regularization penalizes parameters, except the intercept, to avoid over-fitting. Ridge regression adds the "squared magnitude" of the coefficients as a penalty to the loss function.

**Parameters:**

- `learning_rate` - the maximum rate at which to change the model each time in reaction to the calculated error. Default is `0.00001`.
- `L2` - the ridge regression coefficient. Default is `0.1`.
- `batch_size` - the size of the mini-batches used to adjust the model. Smaller batch sizes mean smaller adjustments. Default is `15`.
- `method` - the method for updating the weights. It should be contained in single quotes, like `'Momentum'`. Possible values are:
  - `Adam` (default) - designed specifically for training deep neural networks.
  - `Momentum` - takes the last several updates into account.
  - `Nesterov` - looks ahead to avoid over-adjusting.
  - `SGD` - vanilla stochastic gradient descent.

**Usage:**

`stochasticLogisticRegression(learning_rate, L2, batch_size, method)(independent, categorization)`

**Returns:**

- a tuple of `Float64`, `(a, b)`, where `a` is the slope of the line and `b` is the intersection with the y axis according to the linear equation `y = ax + b`.
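The logistic variant can be sketched the same way: the linear prediction `a*x + b` is passed through a sigmoid to give a pass probability, and mini-batch SGD minimizes the log-loss. This is an illustration of the technique only; the function names and defaults are invented for the example:

```python
import math
import random

def stochastic_logistic_regression(xs, labels, learning_rate=0.1, l2=0.0,
                                   batch_size=15, epochs=500, seed=0):
    """Mini-batch SGD fit of P(pass) = sigmoid(a*x + b).

    labels are 1 ('pass') or 0 ('fail'). Returns (a, b).
    """
    rng = random.Random(seed)
    a, b = 0.0, 0.0
    data = list(zip(xs, labels))
    for _ in range(epochs):
        rng.shuffle(data)
        for start in range(0, len(data), batch_size):
            batch = data[start:start + batch_size]
            # Gradient of the mean log-loss over this mini-batch;
            # the intercept is not regularized.
            grad_a = grad_b = 0.0
            for x, y in batch:
                p = 1.0 / (1.0 + math.exp(-(a * x + b)))
                grad_a += (p - y) * x
                grad_b += (p - y)
            a -= learning_rate * (grad_a / len(batch) + 2 * l2 * a)
            b -= learning_rate * grad_b / len(batch)
    return a, b

def predict(a, b, x):
    """Probability that x 'passes' the categorization."""
    return 1.0 / (1.0 + math.exp(-(a * x + b)))
```

Given negative x values labeled `0` and positive x values labeled `1`, the fitted model assigns a high pass probability to large positive x and a low one to large negative x.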

