FunctionIntrabarCrossValue
Library "FunctionIntrabarCrossValue"
intrabar_cross_value(a, b, step) Finds the minimum difference of an intrabar cross and returns its median value.
Parameters:
a : float, series a.
b : float, series b.
step : float, step to iterate x axis, default=0.01
Returns: float
MATH
OrdinaryLeastSquares
Library "OrdinaryLeastSquares"
One of the most common ways to estimate the coefficients for a linear regression is to use the Ordinary Least Squares (OLS) method.
This library implements OLS in pine. This implementation can be used to fit a linear regression of multiple independent variables onto one dependent variable,
as long as the assumptions behind OLS hold.
solve_xtx_inv(x, y) Solve a linear system of equations using the Ordinary Least Squares method.
This function returns both the estimated OLS solution and a matrix that essentially measures the model stability (linear dependence between the columns of 'x').
NOTE: The latter is an intermediate step when estimating the OLS solution but is useful when calculating the covariance matrix and is returned here to save computation time
so that this step doesn't have to be calculated again when things like standard errors should be calculated.
Parameters:
x : The matrix containing the independent variables. Each column is regarded by the algorithm as one independent variable. The row count of 'x' and 'y' must match.
y : The matrix containing the dependent variable. This matrix can only contain one dependent variable and can therefore only contain one column. The row count of 'x' and 'y' must match.
Returns: Returns both the estimated OLS solution and a matrix that essentially measures the model stability (xtx_inv is equal to (X'X)^-1).
solve(x, y) Solve a linear system of equations using the Ordinary Least Squares method.
Parameters:
x : The matrix containing the independent variables. Each column is regarded by the algorithm as one independent variable. The row count of 'x' and 'y' must match.
y : The matrix containing the dependent variable. This matrix can only contain one dependent variable and can therefore only contain one column. The row count of 'x' and 'y' must match.
Returns: Returns the estimated OLS solution.
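For illustration, the normal-equations step these two functions describe can be sketched with Pine's built-in matrix functions as below. This is a sketch of the math only (beta_hat = (X'X)^-1 X'y), not necessarily the library's exact code, and it assumes X'X is invertible:
ols_solve_sketch(matrix<float> x, matrix<float> y) =>
    matrix<float> xt      = matrix.transpose(x)
    matrix<float> xtx_inv = matrix.inv(matrix.mult(xt, x))   // (X'X)^-1, the stability matrix mentioned above
    [matrix.mult(matrix.mult(xtx_inv, xt), y), xtx_inv]      // [beta_hat, xtx_inv], mirroring solve_xtx_inv()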
standard_errors(x, y, beta_hat, xtx_inv) Calculate the standard errors.
Parameters:
x : The matrix containing the independent variables. Each column is regarded by the algorithm as one independent variable. The row count of 'x' and 'y' must match.
y : The matrix containing the dependent variable. This matrix can only contain one dependent variable and can therefore only contain one column. The row count of 'x' and 'y' must match.
beta_hat : The Ordinary Least Squares (OLS) solution provided by solve_xtx_inv() or solve().
xtx_inv : This is (X'X)^-1, which means we take the transpose of the X matrix, multiply it by the X matrix, and then take the inverse of the result.
This essentially measures the linear dependence between the columns of the X matrix.
Returns: The standard errors.
estimate(x, beta_hat) Estimate the next step of a linear model.
Parameters:
x : The matrix containing the independent variables. Each column is regarded by the algorithm as one independent variable. The row count of 'x' and 'y' must match.
beta_hat : The Ordinary Least Squares (OLS) solution provided by solve_xtx_inv() or solve().
Returns: Returns the new estimate of Y based on the linear model.
FunctionMatrixSolve
Library "FunctionMatrixSolve"
Matrix Equation solution for Ax = B, finds the value of x.
solve(A, B) Solves Matrix Equation for Ax = B, finds value for x.
Parameters:
A : matrix, Square matrix with data values.
B : matrix, One column matrix with data values.
Returns: matrix x, where x = A^-1 * B, assuming A is square and has full rank
introcs.cs.princeton.edu
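As a sketch of the relationship stated above (x = A^-1 * B), using Pine's built-in matrix functions; illustrative only, and it assumes A is invertible:
solve_sketch(matrix<float> A, matrix<float> B) =>
    matrix.mult(matrix.inv(A), B)   // x = A^-1 * B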
FunctionPolynomialFit
Library "FunctionPolynomialFit"
Performs Polynomial Regression fit to data.
In statistics, polynomial regression is a form of regression analysis in which
the relationship between the independent variable x and the dependent variable
y is modelled as an nth degree polynomial in x.
reference:
en.wikipedia.org
www.bragitoff.com
gauss_elimination(A, m, n) Performs Gauss elimination and returns the upper triangular matrix and the solution of the equations.
Parameters:
A : float matrix, data samples.
m : int, defval=na, number of rows.
n : int, defval=na, number of columns.
Returns: float array with coefficients.
polyfit(X, Y, degree) Fits a polynomial of a degree to (x, y) points.
Parameters:
X : float array, data sample x point.
Y : float array, data sample y point.
degree : int, defval=2, degree of the polynomial.
Returns: float array with coefficients.
note:
p(x) = p[0] * x**deg + ... + p[deg]
interpolate(coeffs, x) interpolate the y position at the provided x.
Parameters:
coeffs : float array, coefficients of the polynomial.
x : float, position x to estimate y.
Returns: float.
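For illustration, a small sketch that evaluates a polynomial at x from a coefficient array, assuming the coefficients are ordered from the highest-degree term down to the constant as in the note above (check the library's actual ordering before relying on this):
poly_eval_sketch(array<float> coeffs, float x) =>
    float acc = 0.0
    for c in coeffs
        acc := acc * x + c   // Horner's scheme
    acc
// example: array.from(1.0, -2.0, 3.0) represents x^2 - 2x + 3, so poly_eval_sketch(array.from(1.0, -2.0, 3.0), 2.0) = 3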
DominantCycle
Collection of Dominant Cycle estimators. Length adaptation used in the Adaptive Moving Averages and the Adaptive Oscillators tries to follow price movements and accelerate/decelerate accordingly (usually quite rapidly with a huge range). Cycle estimators, on the other hand, try to measure the cycle period of the current market, which does not reflect price movement or the rate of change (the rate of change may also differ depending on the cycle phase, but the cycle period itself usually changes slowly). This collection may become encyclopaedic, so if you have any working cycle estimator, drop me a line in the comments below. Suggestions are welcome. Currently included estimators are based on the work of John F. Ehlers
mamaPeriod(src, dynLow, dynHigh) MESA Adaptation - MAMA Cycle
Parameters:
src : Series to use
dynLow : Lower bound for the dynamic length
dynHigh : Upper bound for the dynamic length
Returns: Calculated period
Based on MESA Adaptive Moving Average by John F. Ehlers
Performs Hilbert Transform Homodyne Discriminator cycle measurement
Unlike MAMA Alpha function (in LengthAdaptation library), this does not compute phase rate of change
Introduced in the September 2001 issue of Stocks and Commodities
Inspired by the @everget implementation:
Inspired by the @anoojpatel implementation:
paPeriod(src, dynLow, dynHigh, preHP, preSS, preHW) Pearson Autocorrelation
Parameters:
src : Series to use
dynLow : Lower bound for the dynamic length
dynHigh : Upper bound for the dynamic length
preHP : Use High Pass prefilter (default)
preSS : Use Super Smoother prefilter (default)
preHW : Use Hann Windowing prefilter
Returns: Calculated period
Based on Pearson Autocorrelation Periodogram by John F. Ehlers
Introduced in the September 2016 issue of Stocks and Commodities
Inspired by the @blackcat1402 implementation:
Inspired by the @rumpypumpydumpy implementation:
Corrected many errors, and made small speed optimizations, so this could be the best implementation to date (still slow, though, so may revisit in future)
High Pass and Super Smoother prefilters are used in the original implementation
dftPeriod(src, dynLow, dynHigh, preHP, preSS, preHW) Discrete Fourier Transform
Parameters:
src : Series to use
dynLow : Lower bound for the dynamic length
dynHigh : Upper bound for the dynamic length
preHP : Use High Pass prefilter (default)
preSS : Use Super Smoother prefilter (default)
preHW : Use Hann Windowing prefilter
Returns: Calculated period
Based on Spectrum from Discrete Fourier Transform by John F. Ehlers
Inspired by the @blackcat1402 implementation:
High Pass, Super Smoother and Hann Windowing prefilters are used in the original implementation
phasePeriod(src, dynLow, dynHigh, preHP, preSS, preHW) Phase Accumulation
Parameters:
src : Series to use
dynLow : Lower bound for the dynamic length
dynHigh : Upper bound for the dynamic length
preHP : Use High Pass prefilter (default)
preSS : Use Super Smoother prefilter (default)
preHW : Use Hann Windowing prefilter
Returns: Calculated period
Based on Dominant Cycle from Phase Accumulation by John F. Ehlers
High Pass and Super Smoother prefilters are used in the original implementation
doAdapt(type, src, len, dynLow, dynHigh, chandeSDLen, chandeSmooth, chandePower, preHP, preSS, preHW) Execute a particular Length Adaptation or Dominant Cycle Estimator from the list
Parameters:
type : Length Adaptation or Dominant Cycle Estimator type to use
src : Series to use
len : Reference lookback length
dynLow : Lower bound for the dynamic length
dynHigh : Upper bound for the dynamic length
chandeSDLen : Lookback length of Standard deviation for Chande's Dynamic Length
chandeSmooth : Smoothing length of Standard deviation for Chande's Dynamic Length
chandePower : Exponent of the length adaptation for Chande's Dynamic Length (lower is smaller variation)
preHP : Use High Pass prefilter for the Estimators that support it (default)
preSS : Use Super Smoother prefilter for the Estimators that support it (default)
preHW : Use Hann Windowing prefilter for the Estimators that support it
Returns: Calculated period (float, not limited)
doEstimate(type, src, dynLow, dynHigh, preHP, preSS, preHW) Execute a particular Dominant Cycle Estimator from the list
Parameters:
type : Dominant Cycle Estimator type to use
src : Series to use
dynLow : Lower bound for the dynamic length
dynHigh : Upper bound for the dynamic length
preHP : Use High Pass prefilter for the Estimators that support it (default)
preSS : Use Super Smoother prefilter for the Estimators that support it (default)
preHW : Use Hann Windowing prefilter for the Estimators that support it
Returns: Calculated period (float, not limited)
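Since the returned period is a float and not limited, a typical next step is to clamp it to the dynLow..dynHigh bounds and turn it into a smoothing factor. A rough sketch of that step (illustrative only, not part of the library):
adaptive_ema_sketch(float src, float period, float dynLow, float dynHigh) =>
    float p     = math.min(math.max(period, dynLow), dynHigh)   // limit the estimated period
    float alpha = 2.0 / (p + 1.0)                               // EMA smoothing factor derived from the period
    var float ema = na
    ema := na(ema) ? src : alpha * src + (1.0 - alpha) * ema
    ema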
least_squares_regression
Library "least_squares_regression"
least_squares_regression: Least squares regression algorithm to find the optimal price interval for a given time period
basic_lsr(series, series, series) basic_lsr: Basic least squares regression algorithm
Parameters:
series : int t: time scale value array corresponding to price
series : float p: price scale value array corresponding to time
series : int array_size: the length of regression array
Returns: reg_slop, reg_intercept, reg_level, reg_stdev
trend_line_lsr(series, series, series, string, series, series) top_trend_line_lsr: Trend line fitting based on least square algorithm
Parameters:
series : int t: time scale value array corresponding to price
series : float p: price scale value array corresponding to time
series : int array_size: the length of regression array
string : reg_type: regression type in 'top' and 'bottom'
series : int max_iter: maximum fitting iterations
series : int min_points: the threshold of regression point numbers
Returns: reg_slop, reg_intercept, reg_level, reg_stdev, reg_point_num
simple_squares_regression
Library "simple_squares_regression"
simple_squares_regression: simple squares regression algorithm to find the optimal price interval for a given time period
basic_ssr(series, series, series) basic_ssr: Basic simple squares regression algorithm
Parameters:
series : float src: the regression source such as close
series : int region_forward: number of candle lines at the right end of the regression region from the current candle line
series : int region_len: the length of regression region
Returns: left_loc, right_loc, reg_val, reg_std, reg_max_offset
search_ssr(series, series, series, series) search_ssr: simple squares regression region search algorithm
Parameters:
series : float src: the regression source such as close
series : int max_forward: max number of candle lines at the right end of the regression region from the current candle line
series : int region_lower: the lower length of regression region
series : int region_upper: the upper length of regression region
Returns: left_loc, right_loc, reg_val, reg_level, reg_std_err, reg_max_offset
on_balance_volume
Library "on_balance_volume"
on_balance_volume: custom on balance volume
obv_diff(string, simple) obv_diff: custom on balance volume diff version
Parameters:
string : type: the moving average type of on balance volume
simple : int len: the moving average length of on balance volume
Returns: obv_diff: custom on balance volume diff value
obv_diff_norm(string, simple) obv_diff_norm: custom normalized on balance volume diff version
Parameters:
string : type: the moving average type of on balance volume
simple : int len: the moving average length of on balance volume
Returns: obv_diff: custom normalized on balance volume diff value
moving_average
Library "moving_average"
moving_average: moving average variants
variant(string, series, simple) variant: moving average variants
Parameters:
string : type: the moving average type (one of the supported type strings)
series : float src: the source series of moving average
simple : int len: the length of moving average
Returns: float: the moving average variant value
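A minimal sketch of the kind of dispatch a variant() function performs; the type strings here are placeholders and the library's accepted values may differ. Every variant is computed on each bar so the ta.* calls keep a consistent history, and only the selected one is returned:
variant_sketch(string type, float src, simple int len) =>
    float sma = ta.sma(src, len)
    float ema = ta.ema(src, len)
    float rma = ta.rma(src, len)
    float wma = ta.wma(src, len)
    switch type
        "ema" => ema
        "rma" => rma
        "wma" => wma
        => sma   // fallback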
NormalizedOscillators
Library "NormalizedOscillators"
Collection of some common Oscillators. All are zero-mean and normalized to fit in the -1..1 range. Some are modified, so that the internal smoothing function could be configurable (for example, to enable Hann Windowing, that John F. Ehlers uses frequently). Some are modified for other reasons (see comments in the code), but never without a reason. This collection is neither encyclopaedic, nor reference, however I try to find the most correct implementation. Suggestions are welcome.
rsi2(upper, lower) RSI - second step
Parameters:
upper : Upwards momentum
lower : Downwards momentum
Returns: Oscillator value
Modified by Ehlers from Wilder's implementation to have a zero mean (oscillator from -1 to +1)
Originally: 100.0 - (100.0 / (1.0 + upper / lower))
Ignoring the 100 scale factor, we get: upper / (upper + lower)
Multiplying by two and subtracting 1, we get: (2 * upper) / (upper + lower) - 1 = (upper - lower) / (upper + lower)
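As a minimal sketch of that zero-mean step (illustrative, not necessarily the library's exact code):
rsi2_sketch(float upper, float lower) =>
    (upper - lower) / (upper + lower)   // ranges -1..+1 instead of 0..100
// example inputs: upper = math.sum(math.max(ta.change(close), 0), 14), lower = math.sum(-math.min(ta.change(close), 0), 14)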
rms(src, len) Root mean square (RMS)
Parameters:
src : Source series
len : Lookback period
Based on John F. Ehlers' implementation
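The RMS definition in use can be sketched in one line (the Ehlers-based code may differ in details):
rms_sketch(float src, simple int len) =>
    math.sqrt(math.sum(src * src, len) / len)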
ift(src) Inverse Fisher Transform
Parameters:
src : Source series
Returns: Normalized series
Based on John F. Ehlers' implementation
The input values have been multiplied by 2 (was "2*src", now "4*src") to force expansion - not compression
The inputs may be further modified, if needed
stoch(src, len) Stochastic
Parameters:
src : Source series
len : Lookback period
Returns: Oscillator series
ssstoch(src, len) Super Smooth Stochastic (part of MESA Stochastic) by John F. Ehlers
Parameters:
src : Source series
len : Lookback period
Returns: Oscillator series
Introduced in the January 2014 issue of Stocks and Commodities
This is not an implementation of MESA Stochastic, as that is based on a High Pass filter which is not present in this function (but you can construct it)
This implementation is scaled by 0.95, so that Super Smoother does not exceed +1/-1
I do not know if this is the right way to fix this issue, but it works for now
netKendall(src, len) Noise Elimination Technology by John F. Ehlers
Parameters:
src : Source series
len : Lookback period
Returns: Oscillator series
Introduced in the December 2020 issue of Stocks and Commodities
Uses simplified Kendall correlation algorithm
Implementation by @QuantTherapy:
rsi(src, len, smooth) RSI
Parameters:
src : Source series
len : Lookback period
smooth : Internal smoothing algorithm
Returns: Oscillator series
vrsi(src, len, smooth) Volume-scaled RSI
Parameters:
src : Source series
len : Lookback period
smooth : Internal smoothing algorithm
Returns: Oscillator series
This is my own version of RSI. It scales price movements by the proportion of RMS of volume
mrsi(src, len, smooth) Momentum RSI
Parameters:
src : Source series
len : Lookback period
smooth : Internal smoothing algorithm
Returns: Oscillator series
Inspired by RocketRSI by John F. Ehlers (Stocks and Commodities, May 2018)
rrsi(src, len, smooth) Rocket RSI
Parameters:
src : Source series
len : Lookback period
smooth : Internal smoothing algorithm
Returns: Oscillator series
Inspired by RocketRSI by John F. Ehlers (Stocks and Commodities, May 2018)
Does not include Fisher Transform of the original implementation, as the output must be normalized
Does not include momentum smoothing length configuration, so always assumes half the lookback length
mfi(src, len, smooth) Money Flow Index
Parameters:
src : Source series
len : Lookback period
smooth : Internal smoothing algorithm
Returns: Oscillator series
lrsi(src, in_gamma, len) Laguerre RSI by John F. Ehlers
Parameters:
src : Source series
in_gamma : Damping factor (default is -1 to generate from len)
len : Lookback period (alternatively, if gamma is not set)
Returns: Oscillator series
The original implementation is with gamma. As it is impossible to collect gamma in my system, where the only user input is length,
an alternative calculation is included, where gamma is set by dividing len by 30. Maybe different calculation would be better?
fe(len) Choppiness Index or Fractal Energy
Parameters:
len : Lookback period
Returns: Oscillator series
The Choppiness Index (CHOP) was created by E. W. Dreiss
This indicator is sometimes called Fractal Energy
er(src, len) Efficiency ratio
Parameters:
src : Source series
len : Lookback period
Returns: Oscillator series
Based on Kaufman Adaptive Moving Average calculation
This is the correct Efficiency ratio calculation, and most other implementations are wrong:
the number of bar differences is 1 less than the length, otherwise we are adding the change outside of the measured range!
For reference, see Stocks and Commodities June 1995
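A small sketch of the off-by-one point made above (illustrative; the library's own smoothing and scaling may differ):
er_sketch(float src, simple int len) =>
    float net   = math.abs(src - src[len - 1])                  // net change across the window = len-1 bar-to-bar steps
    float noise = math.sum(math.abs(ta.change(src)), len - 1)   // sum of the same len-1 absolute steps
    noise == 0 ? 0.0 : net / noise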
dmi(len, smooth) Directional Movement Index
Parameters:
len : Lookback period
smooth : Internal smoothing algorithm
Returns: Oscillator series
Based on the original Tradingview algorithm
Modified with inspiration from John F. Ehlers DMH (but not implementing the DMH algorithm!)
Only ADX is returned
Rescaled to fit -1 to +1
Unlike most oscillators, there is no src parameter as DMI works directly with high and low values
fdmi(len, smooth) Fast Directional Movement Index
Parameters:
len : Lookback period
smooth : Internal smoothing algorithm
Returns: Oscillator series
Same as DMI, but without secondary smoothing. Can be smoothed later. Instead, +DM and -DM smoothing can be configured
doOsc(type, src, len, smooth) Execute a particular Oscillator from the list
Parameters:
type : Oscillator type to use
src : Source series
len : Lookback period
smooth : Internal smoothing algorithm
Returns: Oscillator series
Chande Momentum Oscillator (CMO) is RSI without smoothing. No idea why some authors use different calculations
LRSI with Fractal Energy is a combo oscillator that uses Fractal Energy to tune LRSI gamma, as seen here: www.prorealcode.com
doPostfilter(type, src, len) Execute a particular Oscillator Postfilter from the list
Parameters:
type : Oscillator type to use
src : Source series
len : Lookback period
Returns: Oscillator series
CommonFilters
Library "CommonFilters"
Collection of some common Filters and Moving Averages. This collection is not encyclopaedic, but to declutter my other scripts. Suggestions are welcome, though. Many filters here are based on the work of John F. Ehlers
sma(src, len) Simple Moving Average
Parameters:
src : Series to use
len : Filtering length
Returns: Filtered series
ema(src, len) Exponential Moving Average
Parameters:
src : Series to use
len : Filtering length
Returns: Filtered series
rma(src, len) Wilder's Smoothing (Running Moving Average)
Parameters:
src : Series to use
len : Filtering length
Returns: Filtered series
hma(src, len) Hull Moving Average
Parameters:
src : Series to use
len : Filtering length
Returns: Filtered series
vwma(src, len) Volume Weighted Moving Average
Parameters:
src : Series to use
len : Filtering length
Returns: Filtered series
hp2(src) Simple denoiser
Parameters:
src : Series to use
Returns: Filtered series
fir2(src) Zero at 2 bar cycle period by John F. Ehlers
Parameters:
src : Series to use
Returns: Filtered series
fir3(src) Zero at 3 bar cycle period by John F. Ehlers
Parameters:
src : Series to use
Returns: Filtered series
fir23(src) Zero at 2 bar and 3 bar cycle periods by John F. Ehlers
Parameters:
src : Series to use
Returns: Filtered series
fir234(src) Zero at 2, 3 and 4 bar cycle periods by John F. Ehlers
Parameters:
src : Series to use
Returns: Filtered series
hp(src, len) High Pass Filter for cyclic components shorter than length. Part of Roofing Filter by John F. Ehlers
Parameters:
src : Series to use
len : Filtering length
Returns: Filtered series
supers2(src, len) 2-pole Super Smoother by John F. Ehlers
Parameters:
src : Series to use
len : Filtering length
Returns: Filtered series
filt11(src, len) Filt11 is a variant of 2-pole Super Smoother with error averaging for zero-lag response by John F. Ehlers
Parameters:
src : Series to use
len : Filtering length
Returns: Filtered series
supers3(src, len) 3-pole Super Smoother by John F. Ehlers
Parameters:
src : Series to use
len : Filtering length
Returns: Filtered series
hannFIR(src, len) Hann Window Filter by John F. Ehlers
Parameters:
src : Series to use
len : Filtering length
Returns: Filtered series
hammingFIR(src, len) Hamming Window Filter (inspired by John F. Ehlers). Simplified implementation as Pedestal input parameter cannot be supplied, so I calculate it from the supplied length
Parameters:
src : Series to use
len : Filtering length
Returns: Filtered series
triangleFIR(src, len) Triangle Window Filter by John F. Ehlers
Parameters:
src : Series to use
len : Filtering length
Returns: Filtered series
doPrefilter(type, src) Execute a particular Prefilter from the list
Parameters:
type : Prefilter type to use
src : Series to use
Returns: Filtered series
doMA(type, src, len) Execute a particular MA from the list
Parameters:
type : MA type to use
src : Series to use
len : Filtering length
Returns: Filtered series
math_utils
Library "math_utils"
Collection of math functions that are not part of the standard math library
num_of_non_decimal_digits(number) num_of_non_decimal_digits - The number of the most significant digits on the left of the dot
Parameters:
number : - The floating point number
Returns: the number of non-decimal digits
num_of_decimal_digits(number) num_of_decimal_digits - The number of the most significant digits on the right of the dot
Parameters:
number : - The floating point number
Returns: number of decimal digits
floor(number, precision) floor - floor with precision to the given most significant decimal point
Parameters:
number : - The floating point number
precision : - The number of decimal places a.k.a the most significant decimal digit - e.g precision 2 will produce 0.01 minimum change
Returns: number floored to the decimal digits defined by the precision
ceil(number, precision) ceil - ceil with precision to the given most significant decimal point
Parameters:
number : - The floating point number
precision : - The number of decimal places a.k.a the most significant decimal digit - e.g precision 2 will produce 0.01 minimum change
Returns: number ceiled to the decimal digits defined by the precision
clamp(number, lower, higher, precision) clamp - clamp with precision to the given most significant decimal point
Parameters:
number : - The floating point number
lower : - The lowest number limit to return
higher : - The highest number limit to return
precision : - The number of decimal places a.k.a the most significant decimal digit - e.g precision 2 will produce 0.01 minimum change
Returns: number clamped to the decimal digits defined by the precision
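For illustration, the precision handling can be sketched by scaling with a power of ten; this is an assumption about the approach, not necessarily the library's exact code:
floor_sketch(float number, int precision) =>
    float f = math.pow(10, precision)
    math.floor(number * f) / f
ceil_sketch(float number, int precision) =>
    float f = math.pow(10, precision)
    math.ceil(number * f) / f
// e.g. floor_sketch(1.2345, 2) = 1.23 and ceil_sketch(1.2345, 2) = 1.24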
ConditionalAverages
█ OVERVIEW
This library is a Pine Script™ programmer’s tool containing functions that average values selectively.
█ CONCEPTS
Averaging can be useful to smooth out unstable readings in the data set, provide a benchmark to see the underlying trend of the data, or to provide a general expectancy of values in establishing a central tendency. Conventional averaging techniques tend to apply indiscriminately to all values in a fixed window, but it can sometimes be useful to average values only when a specific condition is met. As conditional averaging works on specific elements of a dataset, it can help us derive more context-specific conclusions. This library offers a collection of averaging methods that not only accomplish these tasks, but also exploit the efficiencies of the Pine Script™ runtime by foregoing unnecessary and resource-intensive for loops.
█ NOTES
To Loop or Not to Loop
Though for and while loops are essential programming tools, they are often unnecessary in Pine Script™. This is because the Pine Script™ runtime already runs your scripts in a loop where it executes your code on each bar of the dataset. Pine Script™ programmers who understand how their code executes on charts can use this to their advantage by designing loop-less code that will run orders of magnitude faster than functionally identical code using loops. Most of this library's functions illustrate how you can achieve loop-less code to process past values. See the User Manual page on loops for more information. If you are looking for ways to measure execution time for your scripts, have a look at our LibraryStopwatch library.
Our `avgForTimeWhen()` and `totalForTimeWhen()` are exceptions in the library, as they use a while structure. Only a few iterations of the loop are executed on each bar, however, as its only job is to remove the few elements in the array that are outside the moving window defined by a time boundary.
Cumulating and Summing Conditionally
The ta.cum() or math.sum() built-in functions can be used with ternaries that select only certain values. In our `avgWhen(src, cond)` function, for example, we use this technique to cumulate only the occurrences of `src` when `cond` is true:
float cumTotal = ta.cum(cond ? src : 0)
We then use:
float cumCount = ta.cum(cond ? 1 : 0)
to calculate the number of occurrences where `cond` is true, which corresponds to the quantity of values cumulated in `cumTotal`.
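Put together, a minimal sketch of `avgWhen()` built from those two cumulative series (not necessarily the library's exact code):
avgWhen_sketch(float src, bool cond) =>
    float cumTotal = ta.cum(cond ? src : 0)
    float cumCount = ta.cum(cond ? 1 : 0)
    cumTotal / cumCount   // na until `cond` has been true at least once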
Building Custom Series With Arrays
The advent of arrays in Pine has enabled us to build our custom data series. Many of this library's functions use arrays for this purpose, saving newer values that come in when a condition is met, and discarding the older ones, implementing a queue .
`avgForTimeWhen()` and `totalForTimeWhen()`
These two functions warrant a few explanations. They operate on a number of values included in a moving window defined by a timeframe expressed in milliseconds. We use a 1D timeframe in our example code. The number of bars included in the moving window is unknown to the programmer, who only specifies the period of time defining the moving window. You can thus use `avgForTimeWhen()` to calculate a rolling moving average for the last 24 hours, for example, that will work whether the chart is using a 1min or 1H timeframe. A 24-hour moving window will typically contain many more values on a 1min chart than on a 1H chart, but their calculated average will be very close.
Problems will arise on non-24x7 markets when large time gaps occur between chart bars, as will be the case across holidays or between trading sessions. For example, if you were using a 24H timeframe and there is a two-day gap between two bars, then no chart bars would fit in the moving window after the gap. The `minBars` parameter mitigates this by guaranteeing that a minimum number of bars are always included in the calculation, even if including those bars requires reaching outside the prescribed timeframe. We use a minimum value of 10 bars in the example code.
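A rough sketch of that queue logic, storing values with their timestamps and trimming the window on each bar while honouring `minBars` (names are illustrative, not the library's code):
totalForTimeWhen_sketch(float src, simple int ms, bool cond, simple int minBars) =>
    var array<float> values = array.new<float>()
    var array<int>   times  = array.new<int>()
    if cond
        array.unshift(values, src)   // newest values at the front of the queue
        array.unshift(times, time)
    while array.size(times) > minBars
        if time - array.get(times, array.size(times) - 1) <= ms
            break                    // oldest element still inside the window; stop trimming
        array.pop(values)
        array.pop(times)
    array.sum(values)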
Using var in Constant Declarations
In the past, we have been using var when initializing so-called constants in our scripts, which, as per the Style Guide's recommendations, we identify using UPPER_SNAKE_CASE. It turns out that var variables incur slightly higher maintenance overhead in the Pine Script™ runtime when compared to variables initialized on each bar. We thus no longer use var to declare our "int/float/bool" constants, but still use it when an initialization on each bar would require too much time, such as when initializing a string or with a heavy function call.
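A small illustration of that guideline (identifiers are arbitrary):
color      BULL_COLOR = color.new(color.teal, 0)       // cheap to re-initialize on each bar: no `var`
var string HELP_TEXT  = str.repeat("long text ", 50)   // heavier initialization: keep `var`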
Look first. Then leap.
█ FUNCTIONS
avgWhen(src, cond)
Gathers values of the source when a condition is true and averages them over the total number of occurrences of the condition.
Parameters:
src : (series int/float) The source of the values to be averaged.
cond : (series bool) The condition determining when a value will be included in the set of values to be averaged.
Returns: (float) A cumulative average of values when a condition is met.
avgWhenLast(src, cond, cnt)
Gathers values of the source when a condition is true and averages them over a defined number of occurrences of the condition.
Parameters:
src : (series int/float) The source of the values to be averaged.
cond : (series bool) The condition determining when a value will be included in the set of values to be averaged.
cnt : (simple int) The quantity of last occurrences of the condition for which to average values.
Returns: (float) The average of `src` for the last `cnt` occurrences where `cond` is true.
avgWhenInLast(src, cond, cnt)
Gathers values of the source when a condition is true and averages them over the total number of occurrences during a defined number of bars back.
Parameters:
src : (series int/float) The source of the values to be averaged.
cond : (series bool) The condition determining when a value will be included in the set of values to be averaged.
cnt : (simple int) The quantity of bars back to evaluate.
Returns: (float) The average of `src` in last `cnt` bars, but only when `cond` is true.
avgSince(src, cond)
Averages values of the source since a condition was true.
Parameters:
src : (series int/float) The source of the values to be averaged.
cond : (series bool) The condition determining when the average is reset.
Returns: (float) The average of `src` since `cond` was true.
avgForTimeWhen(src, ms, cond, minBars)
Averages values of `src` when `cond` is true, over a moving window of length `ms` milliseconds.
Parameters:
src : (series int/float) The source of the values to be averaged.
ms : (simple int) The time duration in milliseconds defining the size of the moving window.
cond : (series bool) The condition determining which values are included. Optional.
minBars : (simple int) The minimum number of values to keep in the moving window. Optional.
Returns: (float) The average of `src` when `cond` is true in the moving window.
totalForTimeWhen(src, ms, cond, minBars)
Sums values of `src` when `cond` is true, over a moving window of length `ms` milliseconds.
Parameters:
src : (series int/float) The source of the values to be summed.
ms : (simple int) The time duration in milliseconds defining the size of the moving window.
cond : (series bool) The condition determining which values are included. Optional.
minBars : (simple int) The minimum number of values to keep in the moving window. Optional.
Returns: (float) The sum of `src` when `cond` is true in the moving window.
MovingAveragesLibrary
Library "MovingAveragesLibrary"
This is a library allowing one to select between many different Moving Average formulas to smooth out any float variable.
You can use this library to apply a Moving Average function to any series of data as long as your source is a float.
The default application would be for applying Moving Averages onto your chart. However, the scope of this library is beyond that. Any indicator or strategy you are building can benefit from this library.
You can apply different types of smoothing and moving average functions to your indicators, momentum oscillators, average true range calculations, support and resistance zones, envelope bands, channels, and anything you can think of to attempt to smooth out noise while finding a delicate balance against lag.
If you are developing an indicator, you can use 'ave_func' to allow your users to select any Moving Average for any function or variable by creating a string input with the following structure:
var_name = input.string(defval, title, options)
where the types of Moving Average you would like to offer are included in options.
Example:
i_ma_type = input.string(title = "Moving Average Type", defval = "Hull Moving Average", options = [...])
where, after options, you would add the strings I have included for you at the top of the Pine Script for your convenience.
Then for the output you desire, simply call 'ave_func' like so:
ma = ave_func(source, length, i_ma_type)
Now the plotted Moving Average will be the same as what you or your users select from the Input.
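A short, self-contained sketch of that input pattern; the option strings below are placeholders that must match the exact strings ave_func() checks, and the commented line shows where the library call would go:
//@version=5
indicator("MA selector sketch", overlay = true)
i_ma_type = input.string("Hull Moving Average", "Moving Average Type",
     options = ["Simple Moving Average", "Exponential Moving Average", "Hull Moving Average"])   // placeholder strings
i_len = input.int(20, "Length")
// ma = ave_func(close, i_len, i_ma_type)   // with the library imported, this returns the selected MA
plot(ta.hma(close, i_len))                  // stand-in plot so the sketch runs on its own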
ema(src, len) Exponential Moving Average.
Parameters:
src : Series to use ('close' is used if no argument is supplied).
len : Lookback length to use.
Returns: Float value.
sma(src, len) Simple Moving Average.
Parameters:
src : Series to use ('close' is used if no argument is supplied).
len : Lookback length to use.
Returns: Float value.
rma(src, len) Relative Moving Average.
Parameters:
src : Series to use ('close' is used if no argument is supplied).
len : Lookback length to use.
Returns: Float value.
wma(src, len) Weighted Moving Average.
Parameters:
src : Series to use ('close' is used if no argument is supplied).
len : Lookback length to use.
Returns: Float value.
dv2(len) Donchian V2 function.
Parameters:
len : Lookback length to use.
Returns: Open + Close / 2 for the selected length.
ModFilt(src, len) Modular Filter smoothing function.
Parameters:
src : Series to use ('close' is used if no argument is supplied).
len : Lookback length to use.
Returns: Float value.
EDSMA(src, len) Ehlers Dynamic Smoothed Moving Average.
Parameters:
src : Series to use ('close' is used if no argument is supplied).
len : Lookback length to use.
Returns: EDSMA smoothing.
dema(x, t) Double Exponential Moving Average.
Parameters:
x : Series to use ('close' is used if no argument is supplied).
t : Lookback length to use.
Returns: DEMA smoothing.
tema(src, len) Triple Exponential Moving Average.
Parameters:
src : Series to use ('close' is used if no argument is supplied).
len : Lookback length to use.
Returns: TEMA smoothing.
smma(x, t) Smoothed Moving Average.
Parameters:
x : Series to use ('close' is used if no argument is supplied).
t : Lookback length to use.
Returns: SMMA smoothing.
vwma(x, t) Volume Weighted Moving Average.
Parameters:
x : Series to use ('close' is used if no argument is supplied).
t : Lookback length to use.
Returns: VWMA smoothing.
hullma(x, t) Hull Moving Average.
Parameters:
x : Series to use ('close' is used if no argument is supplied).
t : Lookback length to use.
Returns: Hull smoothing.
covwma(x, t) Coefficient of Variation Weighted Moving Average.
Parameters:
x : Series to use ('close' is used if no argument is supplied).
t : Lookback length to use.
Returns: COVWMA smoothing.
frama(x, t) Fractal Reactive Moving Average.
Parameters:
x : Series to use ('close' is used if no argument is supplied).
t : Lookback length to use.
Returns: FRAMA smoothing.
kama(x, t) Kaufman's Adaptive Moving Average.
Parameters:
x : Series to use ('close' is used if no argument is supplied).
t : Lookback length to use.
Returns: KAMA smoothing.
donchian(len) Donchian Calculation.
Parameters:
len : Lookback length to use.
Returns: Average of the highest price and the lowest price for the specified look-back period.
tma(src, len) Triangular Moving Average.
Parameters:
src : Series to use ('close' is used if no argument is supplied).
len : Lookback length to use.
Returns: TMA smoothing.
VAMA(src, len) Volatility Adjusted Moving Average.
Parameters:
src : Series to use ('close' is used if no argument is supplied).
len : Lookback length to use.
Returns: VAMA smoothing.
Jurik(src, len) Jurik Moving Average.
Parameters:
src : Series to use ('close' is used if no argument is supplied).
len : Lookback length to use.
Returns: JMA smoothing.
MCG(src, len) McGinley smoothing.
Parameters:
src : Series to use ('close' is used if no argument is supplied).
len : Lookback length to use.
Returns: McGinley smoothing.
zlema(series, length) Zero Lag Exponential Moving Average.
Parameters:
series : Series to use ('close' is used if no argument is supplied).
length : Lookback length to use.
Returns: ZLEMA smoothing.
xema(src, len) Optimized Exponential Moving Average.
Parameters:
src : Series to use ('close' is used if no argument is supplied).
len : Lookback length to use.
Returns: XEMA smoothing.
EhlersSuperSmoother(src, lower) Ehlers Super Smoother.
Parameters:
src : Series to use ('close' is used if no argument is supplied).
lower : Smoothing value to use.
Returns: Ehlers Super smoothing.
EhlersEmaSmoother(sig, smoothK, smoothP) Ehlers EMA Smoother.
Parameters:
sig : Series to use ('close' is used if no argument is supplied).
smoothK : Lookback length to use.
smoothP : Smoothing value to use.
Returns: Ehlers EMA smoothing.
ave_func(in_src, in_len, in_type) Returns the source after running it through a Moving Average function.
Parameters:
in_src : Series to use ('close' is used if no argument is supplied).
in_len : Lookback period to be used for the Moving Average function.
in_type : Type of Moving Average function to use. Must be supplied from a string input whose options MUST match the type-casing used in the function below.
Returns: The source as a float after running it through the Moving Average function.
lib_Militzer
Library "lib_Militzer"
// This is a collection of functions either found on the internet, or made by me.
// This is only public so my other scripts that reference this can also be public.
// But if you find anything useful here, be my guest.
print()
strToInt()
timeframeToMinutes()
MathProbabilityDistribution
Library "MathProbabilityDistribution"
Probability Distribution Functions.
name(idx) Indexed names helper function.
Parameters:
idx : int, position in the range (0, 6).
Returns: string, distribution name.
usage:
.name(1)
Notes:
(0) => 'StdNormal'
(1) => 'Normal'
(2) => 'Skew Normal'
(3) => 'Student T'
(4) => 'Skew Student T'
(5) => 'GED'
(6) => 'Skew GED'
zscore(position, mean, deviation) Z-score helper function for x calculation.
Parameters:
position : float, position.
mean : float, mean.
deviation : float, standard deviation.
Returns: float, z-score.
usage:
.zscore(1.5, 2.0, 1.0)
std_normal(position) Standard Normal Distribution.
Parameters:
position : float, position.
Returns: float, probability density.
usage:
.std_normal(0.6)
normal(position, mean, scale) Normal Distribution.
Parameters:
position : float, position in the distribution.
mean : float, mean of the distribution, default=0.0 for standard distribution.
scale : float, scale of the distribution, default=1.0 for standard distribution.
Returns: float, probability density.
usage:
.normal(0.6)
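A minimal sketch of the z-score and normal density relationships described above (a sketch of the formulas, not necessarily the library's exact code):
zscore_sketch(float position, float mean, float deviation) =>
    (position - mean) / deviation
normal_sketch(float position, float mean = 0.0, float scale = 1.0) =>
    float z = zscore_sketch(position, mean, scale)
    math.exp(-0.5 * z * z) / (scale * math.sqrt(2.0 * math.pi))
// usage: normal_sketch(0.6) gives the standard normal density at 0.6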
skew_normal(position, skew, mean, scale) Skew Normal Distribution.
Parameters:
position : float, position in the distribution.
skew : float, skewness of the distribution.
mean : float, mean of the distribution, default=0.0 for standard distribution.
scale : float, scale of the distribution, default=1.0 for standard distribution.
Returns: float, probability density.
usage:
.skew_normal(0.8, -2.0)
ged(position, shape, mean, scale) Generalized Error Distribution.
Parameters:
position : float, position.
shape : float, shape.
mean : float, mean, default=0.0 for standard distribution.
scale : float, scale, default=1.0 for standard distribution.
Returns: float, probability.
usage:
.ged(0.8, -2.0)
skew_ged(position, shape, skew, mean, scale) Skew Generalized Error Distribution.
Parameters:
position : float, position.
shape : float, shape.
skew : float, skew.
mean : float, mean, default=0.0 for standard distribution.
scale : float, scale, default=1.0 for standard distribution.
Returns: float, probability.
usage:
.skew_ged(0.8, 2.0, 1.0)
student_t(position, shape, mean, scale) Student-T Distribution.
Parameters:
position : float, position.
shape : float, shape.
mean : float, mean, default=0.0 for standard distribution.
scale : float, scale, default=1.0 for standard distribution.
Returns: float, probability.
usage:
.student_t(0.8, 2.0, 1.0)
skew_student_t(position, shape, skew, mean, scale) Skew Student-T Distribution.
Parameters:
position : float, position.
shape : float, shape.
skew : float, skew.
mean : float, mean, default=0.0 for standard distribution.
scale : float, scale, default=1.0 for standard distribution.
Returns: float, probability.
usage:
.skew_student_t(0.8, 2.0, 1.0)
select(distribution, position, mean, scale, shape, skew, log) Conditional Distribution.
Parameters:
distribution : string, distribution name.
position : float, position.
mean : float, mean, default=0.0 for standard distribution.
scale : float, scale, default=1.0 for standard distribution.
shape : float, shape.
skew : float, skew.
log : bool, if true apply log() to the result.
Returns: float, probability.
usage:
.select('StdNormal', position, log=true)
historicalrange
Library "historicalrange"
Library provides a method to calculate the historical percentile range of a series.
hpercentrank(source) calculates historical percentrank of the source
Parameters:
source : Source for which historical percentrank needs to be calculated. Source should be ranging between 0-100. If using a source which can go beyond 0-100, use a short term percentrank to baseline it.
Returns: pArray - percentrank array which contains how many instances of source occurred at different levels.
upperPercentile - percentile based on higher value
lowerPercentile - percentile based on lower value
median - median value of the source
max - max value of the source
distancefromath(source) returns stats on historical distance from ath in terms of percentage
Parameters:
source : for which stats are calculated
Returns: percentile and related historical stats regarding distance from ath
distancefromma(maType, length, source) returns stats on historical distance from moving average in terms of percentage
Parameters:
maType : Moving Average Type : Can be sma, ema, hma, rma, wma, vwma, swma, highlow, linreg, median
length : Moving Average Length
source : for which stats are calculated
Returns: percentile and related historical stats regarding distance from the moving average
bpercentb(source, maType, length, multiplier, sticky) returns percentrank and stats on historical bpercentb levels
Parameters:
source : Moving Average Source
maType : Moving Average Type : Can be sma, ema, hma, rma, wma, vwma, swma, highlow, linreg, median
length : Moving Average Length
multiplier : Standard Deviation multiplier
sticky : - sticky boundaries which will only change when value is outside boundary.
Returns: percentile and related historical stats regarding Bollinger Percent B
kpercentk(source, maType, length, multiplier, useTrueRange, sticky) returns percentrank and stats on historical kpercentk levels
Parameters:
source : Moving Average Source
maType : Moving Average Type : Can be sma, ema, hma, rma, wma, vwma, swma, highlow, linreg, median
length : Moving Average Length
multiplier : Standard Deviation multiplier
useTrueRange : - if set to false, uses high-low.
sticky : - sticky boundaries which will only change when value is outside boundary.
Returns: percentile and related historical stats regarding Keltner Percent K
dpercentd(useAlternateSource, alternateSource, length, sticky) returns percentrank and stats on historical dpercentd levels
Parameters:
useAlternateSource : - Custom source is used only if useAlternateSource is set to true
alternateSource : - Custom source
length : - donchian channel length
sticky : - sticky boundaries which will only change when value is outside boundary.
Returns: percentile and related historical stats regarding Donchian Percent D
oscillator(type, length, shortLength, longLength, source, highSource, lowSource, method, highlowLength, sticky) oscillator - returns Choice of oscillator with custom overbought/oversold range
Parameters:
type : - oscillator type. Valid values : cci, cmo, cog, mfi, roc, rsi, stoch, tsi, wpr
length : - Oscillator length - not used for TSI
shortLength : - shortLength only used for TSI
longLength : - longLength only used for TSI
source : - custom source if required
highSource : - custom high source for stochastic oscillator
lowSource : - custom low source for stochastic oscillator
method : - Valid values for method are : sma, ema, hma, rma, wma, vwma, swma, highlow, linreg, median
highlowLength : - length on which highlow of the oscillator is calculated
sticky : - overbought, oversold levels won't change unless crossed
Returns: percentile and related historical stats regarding oscillator
ArrayOperations
Library "ArrayOperations"
Array element wise basic operations.
add(sample_a, sample_b) Adds sample_b to sample_a and returns a new array.
Parameters:
sample_a : values to be added to.
sample_b : values to add.
Returns: array with added results.
- sample_a provides type format for output.
- arrays do not need to be symmetric.
- sample_a must have same or more elements than sample_b
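A minimal sketch of the element-wise pattern these notes describe, where sample_a sets the output size and sample_b may be shorter (illustrative, not the library's exact code; assumes sample_b is non-empty and not longer than sample_a):
add_sketch(array<float> sample_a, array<float> sample_b) =>
    array<float> out = array.copy(sample_a)   // sample_a provides the output format
    for i = 0 to array.size(sample_b) - 1
        array.set(out, i, array.get(sample_a, i) + array.get(sample_b, i))
    out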
subtract(sample_a, sample_b) Subtracts sample_b from sample_a and returns a new array.
sample_a : values to be subtracted from.
sample_b : values to subtract.
Returns: array with subtracted results.
- sample_a provides type format for output.
- arrays do not need to be symmetric.
- sample_a must have same or more elements than sample_b
multiply(sample_a, sample_b) multiply sample_a by sample_b and returns a new array.
sample_a : values to multiply.
sample_b : values to multiply with.
Returns: array with multiplied results.
- sample_a provides type format for output.
- arrays do not need to be symmetric.
- sample_a must have same or more elements than sample_b
divide(sample_a, sample_b) Divide sample_a by sample_b and returns a new array.
sample_a : values to divide.
sample_b : values to divide with.
Returns: array with divided results.
- sample_a provides type format for output.
- arrays do not need to be symmetric.
- sample_a must have same or more elements than sample_b
power(sample_a, sample_b) power sample_a by sample_b and returns a new array.
sample_a : values to power.
sample_b : values to power with.
Returns: float array with power results.
- sample_a provides type format for output.
- arrays do not need to be symmetric.
- sample_a must have same or more elements than sample_b
remainder(sample_a, sample_b) Remainder sample_a by sample_b and returns a new array.
sample_a : values to remainder.
sample_b : values to remainder with.
Returns: array with remainder results.
- sample_a provides type format for output.
- arrays do not need to be symmetric.
- sample_a must have same or more elements than sample_b
equal(sample_a, sample_b) Check element wise sample_a equals sample_b and returns a new array.
sample_a : values to check.
sample_b : values to check.
Returns: int array with results.
- sample_a provides type format for output.
- arrays do not need to be symmetric.
- sample_a must have same or more elements than sample_b
not_equal(sample_a, sample_b) Check element wise sample_a not equals sample_b and returns a new array.
sample_a : values to check.
sample_b : values to check.
Returns: int array with results.
- sample_a provides type format for output.
- arrays do not need to be symmetric.
- sample_a must have same or more elements than sample_b
over_or_equal(sample_a, sample_b) Check element wise sample_a over or equals sample_b and returns a new array.
sample_a : values to check.
sample_b : values to check.
Returns: int array with results.
- sample_a provides type format for output.
- arrays do not need to be symmetric.
- sample_a must have same or more elements than sample_b
under_or_equal(sample_a, sample_b) Check element wise sample_a under or equals sample_b and returns a new array.
sample_a : values to check.
sample_b : values to check.
Returns: int array with results.
- sample_a provides type format for output.
- arrays do not need to be symmetric.
- sample_a must have same or more elements than sample_b
over(sample_a, sample_b) Check element wise sample_a over sample_b and returns a new array.
sample_a : values to check.
sample_b : values to check.
Returns: int array with results.
- sample_a provides type format for output.
- arrays do not need to be symmetric.
- sample_a must have same or more elements than sample_b
under(sample_a, sample_b) Check element wise sample_a under sample_b and returns a new array.
sample_a : values to check.
sample_b : values to check.
Returns: int array with results.
- sample_a provides type format for output.
- arrays do not need to be symmetric.
- sample_a must have same or more elements than sample_b
and_(sample_a, sample_b) Check element wise sample_a and sample_b and returns a new array.
sample_a : values to check.
sample_b : values to check.
Returns: int array with results.
- sample_a provides type format for output.
- arrays do not need to be symmetric.
- sample_a must have same or more elements than sample_b
or_(sample_a, sample_b) Check element wise sample_a or sample_b and returns a new array.
sample_a : values to check.
sample_b : values to check.
Returns: int array with results.
- sample_a provides type format for output.
- arrays do not need to be symmetric.
- sample_a must have same or more elements than sample_b
all(sample) Check element wise if all numeric samples are true (!= 0).
sample : values to check.
Returns: int.
any(sample) Check element wise if any numeric samples are true (!= 0).
sample : values to check.
Returns: int.
FunctionWeekofmonth
Library "FunctionWeekofmonth"
Week of Month function.
weekofmonth(utime) Week of month for provided unix time.
Parameters:
utime : int, unix timestamp.
Returns: int
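One possible way to compute it is sketched below; this assumes weeks that start on Sunday (Pine's default dayofweek convention) and is only an approximation, not necessarily the library's logic:
weekofmonth_sketch(int utime) =>
    int dom      = dayofmonth(utime)
    int dowFirst = dayofweek(timestamp(year(utime), month(utime), 1, 0, 0, 0))   // weekday of the 1st (Sunday = 1)
    math.ceil((dom + dowFirst - 1) / 7.0)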
WIPNNetwork
Library "WIPNNetwork"
this is a work in progress (WIP) and prone to have some errors, so use at your own risk...
let me know if you find any issues..
Method for a generalized Neural Network.
network(x) Generalized Neural Network Method.
Parameters:
x : TODO: add parameter x description here
Returns: TODO: add what function returns
FunctionBlackScholes
Library "FunctionBlackScholes"
Some methods for the Black Scholes Options Model, demonstrating several approaches to the valuation of a European call.
// reference:
// people.math.sc.edu
// people.math.sc.edu
asset_path(s0, mu, sigma, t1, n) Simulates the behavior of an asset price over time.
Parameters:
s0 : float, asset price at time 0.
mu : float, growth rate.
sigma : float, volatility.
t1 : float, time to expiry date.
n : int, time steps to expiry date.
Returns: option values at each equal timed step (0 -> t1)
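The simulated path is a geometric Brownian motion; one step of it can be sketched as below, using a Box-Muller draw for the standard normal variate (illustrative only, not the library's exact code):
gbm_step_sketch(float s, float mu, float sigma, float dt) =>
    float u1 = math.max(math.random(), 1.0e-10)   // uniform draw, kept away from zero for the log
    float u2 = math.random()
    float z  = math.sqrt(-2.0 * math.log(u1)) * math.cos(2.0 * math.pi * u2)   // standard normal draw
    s * math.exp((mu - 0.5 * sigma * sigma) * dt + sigma * math.sqrt(dt) * z)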
binomial(s0, e, r, sigma, t1, m) Uses the binomial method for a European call.
Parameters:
s0 : float, asset price at time 0.
e : float, exercise price.
r : float, interest rate.
sigma : float, volatility.
t1 : float, time to expiry date.
m : int, time steps to expiry date.
Returns: option value at time 0.
bsf(s0, t0, e, r, sigma, t1) Evaluates the Black-Scholes formula for a European call.
Parameters:
s0 : float, asset price at time 0.
t0 : float, time at which the price is known.
e : float, exercise price.
r : float, interest rate.
sigma : float, volatility.
t1 : float, time to expiry date.
Returns: option value at time 0.
forward(e, r, sigma, t1, nx, nt, smax) Forward difference method to value a European call option.
Parameters:
e : float, exercise price.
r : float, interest rate.
sigma : float, volatility.
t1 : float, time to expiry date.
nx : int, number of space steps in interval (0, L).
nt : int, number of time steps.
smax : float, maximum value of S to consider.
Returns: option values for the european call, float array of size ((nx-1) * (nt+1)).
mc(s0, e, r, sigma, t1, m) Uses Monte Carlo valuation on a European call.
Parameters:
s0 : float, asset price at time 0.
e : float, exercise price.
r : float, interest rate.
sigma : float, volatility.
t1 : float, time to expiry date.
m : int, time steps to expiry date.
Returns: confidence interval for the estimated range of valuation.
FunctionMinkowskiDistance
Library "FunctionMinkowskiDistance"
Method for Minkowski Distance,
The Minkowski distance or Minkowski metric is a metric in a normed vector space
which can be considered as a generalization of both the Euclidean distance and
the Manhattan distance.
It is named after the German mathematician Hermann Minkowski.
reference: en.wikipedia.org
double(point_ax, point_ay, point_bx, point_by, p_value) Minkowski Distance for single points.
Parameters:
point_ax : float, x value of point a.
point_ay : float, y value of point a.
point_bx : float, x value of point b.
point_by : float, y value of point b.
p_value : float, p value, default=1.0 (1: Manhattan, 2: Euclidean); does not support Chebyshev.
Returns: float
ndim(point_x, point_y, p_value) Minkowski Distance for N dimensions.
Parameters:
point_x : float array, point x dimension attributes.
point_y : float array, point y dimension attributes.
p_value : float, p value, default=1.0 (1: Manhattan, 2: Euclidean); does not support Chebyshev.
Returns: float
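The N-dimensional form reduces to d = (sum of |x_i - y_i|^p)^(1/p); a minimal sketch, assuming both arrays have the same size (not necessarily the library's exact code):
minkowski_sketch(array<float> point_x, array<float> point_y, float p_value) =>
    float acc = 0.0
    for i = 0 to array.size(point_x) - 1
        acc += math.pow(math.abs(array.get(point_x, i) - array.get(point_y, i)), p_value)
    math.pow(acc, 1.0 / p_value)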
TimeLockedMA
Library "TimeLockedMA"
Library & function(s) which generate a moving average that stays locked to the user's desired time preference.
TODO - Add functionality for more moving average types. IE: smooth, weighted etc...
Example:
time_locked_ma(close, length=1, timeframe='days', type='ema')
Will generate a 1 day exponential moving average that will stay consistent across all chart intervals.
Error Handling
On small time frames with large moving averages (IE: 1min chart with a 50 week moving average), you'll get a study error that says "(function "sma") references too many candles in history".
To fix this, make sure you have timeframe="" in your indicator() header. Next, in the indicator settings, increase the timeframe to a higher interval until the error goes away.
By default, it's set to "Chart". Bringing the interval up to 1hr will usually solve the issue.
Furthermore, adding timeframe_gaps=false to your indicator() header will give you an approximation of real-time values.
Misc Info
For time_locked_ma(), setting type='na' will return the relative length value that adjusts dynamically to the user's chart time interval.
This is good for plugging into other functions where a lookback or length is required. (IE: Bollinger Bands)
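The core idea behind both the time-locked average and the relative length can be sketched by translating the desired time span into a bar count on the current chart. This is an illustration only (it assumes a 24x7 market and a chart interval of one minute or more), not the library's code:
//@version=5
indicator("Time-locked EMA sketch", overlay = true)
int chartMinutes = timeframe.in_seconds(timeframe.period) / 60       // current chart interval in minutes
int oneDayBars   = math.max(1, math.round(1440.0 / chartMinutes))    // relative lookback for roughly 1 day
plot(ta.ema(close, oneDayBars), "about 1 day EMA")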
time_locked_ma(source, length, timeframe, type) Creates a moving average that is locked to a desired timeframe
Parameters:
source : float, Moving average source
length : int, Moving average length
timeframe : string, Desired timeframe. Use: "minutes", "hours", "days", "weeks", "months", "chart"
type : string, string Moving average type. Use "SMA" (default) or "EMA". Value of "NA" will return relative lookback length.
Returns: moving average that is locked to desired timeframe.
timeframe_convert(t, a, b) Converts timeframe to desired timeframe. From a --> b
Parameters:
t : int, Time interval
a : string, Time period
b : string, Time period to convert to
Returns: Converted timeframe value
chart_time(timeframe_period, timeframe_multiplier) Separates timeframe.period function and returns chart interval and period
Parameters:
timeframe_period : string, timeframe.period
timeframe_multiplier : int, timeframe.multiplier
Enjoy :)