Machine Learning: Dominant Cycle Elastic Volume KNN
About the Script
Dominant Cycle Elastic Volume KNN is a non-parametric algorithm, which means it initially makes no assumptions about the underlying distribution of the time-series price and volume data.
This approach gives it the flexibility to be used on a wide variety of securities at a variety of timeframes (even on lower timeframes such as seconds).
The main purpose of this indicator is to predict the trend of the underlying by combining price, volume and the dominant cycle as dimensions and generating actionable signals.
Key terms :
Dominant cycle is a time cycle that has a greater influence on the overall behaviour of a system than other cycles.
The system uses Ehlers' method to calculate the Dominant Cycle/Period.
The dominant cycle is used to determine the influencing period for the underlying.
Once the dominant cycle/period is identified, it is treated as a dynamic length for further calculations.
Elastic Volume MA is a volume-based moving average that is generally used to blend volume with price; the dominant period is used here as the length parameter.
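For illustration only, here is a minimal Python sketch of one common Elastic Volume Weighted MA formulation. The exact formula used by this script is not published here, so treat the rolling-volume denominator and the `length` value (assumed to come from the dominant-cycle calculation) as assumptions rather than the script's actual code:

```python
import numpy as np

def elastic_volume_ma(close, volume, length):
    """Elastic Volume Weighted MA (one common formulation, for illustration).

    `length` is assumed to come from the dominant-cycle calculation; the
    volume denominator N is the rolling volume sum over `length` bars.
    """
    close = np.asarray(close, dtype=float)
    volume = np.asarray(volume, dtype=float)
    evwma = np.empty_like(close)
    evwma[0] = close[0]
    for i in range(1, len(close)):
        start = max(0, i - length + 1)
        n = volume[start:i + 1].sum()      # elastic volume denominator
        if n <= 0:
            evwma[i] = evwma[i - 1]
            continue
        v = min(volume[i], n)              # keep the blend weight in [0, 1]
        evwma[i] = evwma[i - 1] * (n - v) / n + close[i] * v / n
    return evwma
```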
KNN (K-Nearest Neighbours) is one of the simplest machine learning algorithms, based on the supervised learning technique.
The K-NN algorithm assumes similarity between the new case/data and the available cases and puts the new case into the category that is most similar to the available categories.
The K-NN algorithm stores all the available data and classifies a new data point based on similarity. This means that when new data appears, it can easily be classified into a well-suited category using K-NN. The algorithm can be used for regression as well as classification, but it is mostly used for classification problems.
So, K-NN is used here to classify the trend of the Dominant Cycle Elastic Volume and generate signals on top of it.
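As a rough illustration of that classification step, the sketch below (Python, not the script's actual PineScript code) builds a majority-vote K-NN classifier; the specific feature choices named in the comments are hypothetical:

```python
import numpy as np

def knn_trend_signal(features, labels, query, k=8):
    """Classify the current bar's trend by majority vote of its k nearest neighbors.

    features: (n, d) array of historical feature vectors (e.g. normalized price,
              elastic-volume MA slope, dominant-cycle phase -- hypothetical choices).
    labels:   (n,) array of +1 (uptrend) / -1 (downtrend) outcomes.
    query:    (d,) feature vector of the newest bar.
    """
    dists = np.linalg.norm(features - query, axis=1)  # Euclidean distance to every stored case
    nearest = np.argsort(dists)[:k]                   # indices of the k most similar cases
    vote = np.sign(labels[nearest].sum())             # majority vote of the neighbors
    return int(vote)                                  # +1 = buy bias, -1 = sell bias, 0 = tie
```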
How to Use the Indicator ?
The Buy Signal Candle
The Sell Signal Candle
The Buy Setup
The Sell Setup
Stop and Reverse Structure
What Timeframes and Symbols can this indicator be used on ?
The above indicator can be used on any liquid security whose ticker has volume information intact,
and it can be used on any timeframe, but the best timeframes are
The indicator can also be used as a trend-confirmation indicator on lower timeframes, such as 30 seconds.
The script has provision for alerts.
There are two alerts:
Alert 1= "LONG CONDITION : DCEV-ML"
Alert 2= "SHORT CONDITION : DCEV-ML"
How to request access?
Simply private message me !
Machine Learning: Cosine Similarity & Euclidean Distance
Introduction:
This script implements a comprehensive trading strategy that adheres to TradingView's established house rules and guidelines. It leverages advanced machine learning techniques and incorporates customised moving averages, including the Conceptive Price Moving Average (CPMA), to provide accurate signals for informed trading decisions. Additionally, signal processing techniques such as Lorentzian, Euclidean distance, Cosine similarity, Know Sure Thing, Rational Quadratic, and sigmoid transformation are utilised to enhance the signal quality and improve trading accuracy.
Features:
Market Analysis: The script utilizes advanced machine learning methods such as Lorentzian, Euclidean distance, and Cosine similarity to analyse market conditions. These techniques measure the similarity and distance between data points, enabling more precise signal identification and enhancing trading decisions.
Cosine similarity:
Cosine similarity is a measure used to determine the similarity between two vectors, typically in a high-dimensional space. It calculates the cosine of the angle between the vectors, indicating the degree of similarity or dissimilarity.
In the context of trading or signal processing, cosine similarity can be employed to compare the similarity between different data points or signals. The vectors in this case represent the numerical representations of the data points or signals.
Cosine similarity ranges from -1 to 1, with 1 indicating perfect similarity, 0 indicating no similarity, and -1 indicating perfect dissimilarity. A higher cosine similarity value suggests a closer match between the vectors, implying that the signals or data points share similar characteristics.
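A minimal Python illustration of the calculation described above; the two return windows are made-up example values:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two signal vectors; ranges from -1 to 1."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

# Example: compare two short windows of returns
recent = [0.4, 0.1, -0.2, 0.3]
past   = [0.5, 0.2, -0.1, 0.2]
print(cosine_similarity(recent, past))   # close to 1.0 -> similar shape
```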
Lorentzian Classification:
Lorentzian classification is a machine learning algorithm used for classification tasks. It is based on the Lorentzian distance metric, which measures the similarity or dissimilarity between two data points. The Lorentzian distance takes into account the shape of the data distribution and can handle outliers better than other distance metrics.
Euclidean Distance:
Euclidean distance is a distance metric widely used in mathematics and machine learning. It calculates the straight-line distance between two points in Euclidean space. In two-dimensional space, the Euclidean distance between two points (x1, y1) and (x2, y2) is calculated using the formula sqrt((x2 - x1)^2 + (y2 - y1)^2).
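The same formula in a short Python sketch:

```python
import math

def euclidean_distance(p, q):
    """Straight-line distance between two points of equal dimension."""
    return math.sqrt(sum((qi - pi) ** 2 for pi, qi in zip(p, q)))

print(euclidean_distance((1, 2), (4, 6)))   # sqrt(3^2 + 4^2) = 5.0
```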
Dynamic Time Windows: The script incorporates a dynamic time window function that allows users to define specific time ranges for trading. It checks if the current time falls within the specified window to execute the relevant trading signals.
Custom Moving Averages: The script includes the CPMA, a powerful moving average calculation. Unlike traditional moving averages, the CPMA provides improved support and resistance levels by considering multiple price types and employing a combination of Exponential Moving Averages (EMAs) and Simple Moving Averages (SMAs). Its adaptive nature ensures responsiveness to changes in price trends.
Signal Processing Techniques: The script applies signal processing techniques such as Know sure thing, Rational Quadratic, and sigmoid transformation to enhance the quality of the generated signals. These techniques improve the accuracy and reliability of the trading signals, aiding in making well-informed trading decisions.
Trade Statistics and Metrics: The script provides comprehensive trade statistics and metrics, including total wins, losses, win rate, win-loss ratio, and early signal flips. These metrics offer valuable insights into the performance and effectiveness of the trading strategy.
Usage:
Configuring Time Windows: Users can customize the time windows by specifying the start and finish time ranges according to their trading preferences and local market conditions.
Signal Interpretation: The script generates long and short signals based on the analysis, custom moving averages, and signal processing techniques. Users should pay attention to these signals and take appropriate action, such as entering or exiting trades, depending on their trading strategies.
Trade Statistics: The script continuously tracks and updates trade statistics, providing users with a clear overview of their trading performance. These statistics help users assess the effectiveness of the strategy and make informed decisions.
Conclusion:
With its adherence to TradingView's house rules, advanced machine learning methods, customized moving averages like the CPMA, and signal processing techniques such as Lorentzian, Euclidean distance, Cosine similarity, Know Sure Thing, Rational Quadratic, and sigmoid transformation, this script offers users a powerful tool for market analysis and trading. By leveraging the provided signals, time windows, and trade statistics, users can enhance their trading strategies and improve their overall trading performance.
Disclaimer:
Please note that while this script incorporates established TradingView house rules, advanced machine learning techniques, customized moving averages, and signal processing techniques, it should be used for informational purposes only. Users are advised to conduct their own analysis and exercise caution when making trading decisions. The script's performance may vary based on market conditions, user settings, and the accuracy of the machine learning methods and signal processing techniques. The trading platform and developers are not responsible for any financial losses incurred while using this script.
By publishing this script on the platform, traders can benefit from its professional presentation, clear instructions, and the utilisation of advanced machine learning techniques, customised moving averages, and signal processing techniques for enhanced trading signals and accuracy.
I extend my gratitude to TradingView, LUX ALGO, and JDEHORTY for their invaluable contributions to the trading community. Their innovative scripts, meticulous coding patterns, and insightful ideas have profoundly enriched traders' strategies, including my own.
N-Rho To Noise (Reinforcement Learning)
N-Rho To Noise is a ratio of two components. Rho is my own calculation of a signal that is differenced (differencing forces the time series toward stationarity, allowing for more predictability) and its relation to a unit measure of noise. N is the number of times it is differenced. Using a simplified Q-learning reinforcement-learning agent, the length of the ratio is calibrated to its optimal value.
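The script's exact reward and calibration logic is not published, but as a rough, generic sketch of the two building blocks named above (n-th order differencing and a single-state tabular Q-learning search over candidate lengths), something like the following could be assumed; `reward_fn` is a placeholder for whatever score the indicator assigns to a length:

```python
import numpy as np

def difference(series, n=1):
    """Apply n-th order differencing to push a series toward stationarity."""
    out = np.asarray(series, dtype=float)
    for _ in range(n):
        out = np.diff(out)
    return out

def q_learn_length(reward_fn, lengths, episodes=200, alpha=0.1, gamma=0.0, eps=0.2):
    """Single-state tabular Q-learning over candidate lookback lengths (illustrative only)."""
    rng = np.random.default_rng(0)
    q = np.zeros(len(lengths))
    for _ in range(episodes):
        a = rng.integers(len(lengths)) if rng.random() < eps else int(q.argmax())
        r = reward_fn(lengths[a])                       # placeholder reward, e.g. signal-to-noise
        q[a] += alpha * (r + gamma * q.max() - q[a])    # standard Q-update, single state
    return lengths[int(q.argmax())], q.max()            # optimal length and its reward
```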
- Purple indicates the undifferenced signal is above the RMSE error bands
- Red indicates both the differenced and undifferenced signals are above the threshold for a strong positive deviation, suggesting a short
- Blue indicates the undifferenced signal is below the RMSE error bands
- Green indicates both the differenced and undifferenced signals are below the threshold for a strong negative deviation, suggesting a long
- Strong long signal when you have both an undifferenced Rho and differenced Rho giving you local agreement (blue bar followed by green)
- Strong short signal when you have an undifferenced and differenced Rho giving you identical signals (purple bar followed by red)
Optimal length: the parameter of the length that the model configures to be the best parameter
Optimal reward: the reward corresponding to the optimal length (green=strong value, orange=intermediate strength, red=poor)
Average reward: the average reward of the set of lengths used over all episodes (green=strong value, orange=intermediate strength, red=poor)
Cumulative reward: the sum of all the rewards
Variance: a measure of how varied the data is (too much variance can suggest it cannot generalize too well to unseen data)
Endpointed SSA of Price [Loxx]
The Endpointed SSA of Price: A Comprehensive Tool for Market Analysis and Decision-Making
The financial markets present sophisticated challenges for traders and investors as they navigate the complexities of market behavior. To effectively interpret and capitalize on these complexities, it is crucial to employ powerful analytical tools that can reveal hidden patterns and trends. One such tool is the Endpointed SSA of Price, which combines the strengths of Caterpillar Singular Spectrum Analysis, a sophisticated time series decomposition method, with insights from the fields of economics, artificial intelligence, and machine learning.
The Endpointed SSA of Price has its roots in the interdisciplinary fusion of mathematical techniques, economic understanding, and advancements in artificial intelligence. This unique combination allows for a versatile and reliable tool that can aid traders and investors in making informed decisions based on comprehensive market analysis.
The Endpointed SSA of Price is not only valuable for experienced traders but also serves as a useful resource for those new to the financial markets. By providing a deeper understanding of market forces, this innovative indicator equips users with the knowledge and confidence to better assess risks and opportunities in their financial pursuits.
█ Exploring Caterpillar SSA: Applications in AI, Machine Learning, and Finance
Caterpillar SSA (Singular Spectrum Analysis) is a non-parametric method for time series analysis and signal processing. It is based on a combination of principles from classical time series analysis, multivariate statistics, and the theory of random processes. The method was initially developed in the early 1990s by a group of Russian mathematicians, including Golyandina, Nekrutkin, and Zhigljavsky.
Background Information:
SSA is an advanced technique for decomposing time series data into a sum of interpretable components, such as trend, seasonality, and noise. This decomposition allows for a better understanding of the underlying structure of the data and facilitates forecasting, smoothing, and anomaly detection. Caterpillar SSA is a particular implementation of SSA that has proven to be computationally efficient and effective for handling large datasets.
Uses in AI and Machine Learning:
In recent years, Caterpillar SSA has found applications in various fields of artificial intelligence (AI) and machine learning. Some of these applications include:
1. Feature extraction: Caterpillar SSA can be used to extract meaningful features from time series data, which can then serve as inputs for machine learning models. These features can help improve the performance of various models, such as regression, classification, and clustering algorithms.
2. Dimensionality reduction: Caterpillar SSA can be employed as a dimensionality reduction technique, similar to Principal Component Analysis (PCA). It helps identify the most significant components of a high-dimensional dataset, reducing the computational complexity and mitigating the "curse of dimensionality" in machine learning tasks.
3. Anomaly detection: The decomposition of a time series into interpretable components through Caterpillar SSA can help in identifying unusual patterns or outliers in the data. Machine learning models trained on these decomposed components can detect anomalies more effectively, as the noise component is separated from the signal.
4. Forecasting: Caterpillar SSA has been used in combination with machine learning techniques, such as neural networks, to improve forecasting accuracy. By decomposing a time series into its underlying components, machine learning models can better capture the trends and seasonality in the data, resulting in more accurate predictions.
Application in Financial Markets and Economics:
Caterpillar SSA has been employed in various domains within financial markets and economics. Some notable applications include:
1. Stock price analysis: Caterpillar SSA can be used to analyze and forecast stock prices by decomposing them into trend, seasonal, and noise components. This decomposition can help traders and investors better understand market dynamics, detect potential turning points, and make more informed decisions.
2. Economic indicators: Caterpillar SSA has been used to analyze and forecast economic indicators, such as GDP, inflation, and unemployment rates. By decomposing these time series, researchers can better understand the underlying factors driving economic fluctuations and develop more accurate forecasting models.
3. Portfolio optimization: By applying Caterpillar SSA to financial time series data, portfolio managers can better understand the relationships between different assets and make more informed decisions regarding asset allocation and risk management.
Application in the Indicator:
In the given indicator, Caterpillar SSA is applied to a financial time series (price data) to smooth the series and detect significant trends or turning points. The method is used to decompose the price data into a set number of components, which are then combined to generate a smoothed signal. This signal can help traders and investors identify potential entry and exit points for their trades.
The indicator applies the Caterpillar SSA method by first constructing the trajectory matrix using the price data, then computing the singular value decomposition (SVD) of the matrix, and finally reconstructing the time series using a selected number of components. The reconstructed series serves as a smoothed version of the original price data, highlighting significant trends and turning points. The indicator can be customized by adjusting the lag, number of computations, and number of components used in the reconstruction process. By fine-tuning these parameters, traders and investors can optimize the indicator to better match their specific trading style and risk tolerance.
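As an outline of those steps (embedding into a trajectory matrix, SVD, reconstruction from selected components, and diagonal averaging), here is a simplified Python sketch; it is not the indicator's optimized PineScript implementation and omits the endpointing and normalization details:

```python
import numpy as np

def ssa_smooth(price, lag=30, ncomp=2):
    """Caterpillar-SSA style smoothing sketch: embed, SVD, reconstruct."""
    x = np.asarray(price, dtype=float)
    n = len(x)
    k = n - lag + 1
    # 1) Trajectory (Hankel) matrix: lagged copies of the series as columns
    traj = np.column_stack([x[i:i + lag] for i in range(k)])
    # 2) Singular value decomposition of the trajectory matrix
    u, s, vt = np.linalg.svd(traj, full_matrices=False)
    # 3) Keep the leading `ncomp` components and rebuild the matrix
    approx = (u[:, :ncomp] * s[:ncomp]) @ vt[:ncomp]
    # 4) Diagonal averaging (Hankelization) turns the matrix back into a series
    recon = np.zeros(n)
    counts = np.zeros(n)
    for col in range(k):
        recon[col:col + lag] += approx[:, col]
        counts[col:col + lag] += 1
    return recon / counts
```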
Caterpillar SSA is versatile and can be applied to various types of financial instruments, such as stocks, bonds, commodities, and currencies. It can also be combined with other technical analysis tools or indicators to create a comprehensive trading system. For example, a trader might use Caterpillar SSA to identify the primary trend in a market and then employ additional indicators, such as moving averages or RSI, to confirm the trend and generate trading signals.
In summary, Caterpillar SSA is a powerful time series analysis technique that has found applications in AI and machine learning, as well as financial markets and economics. By decomposing a time series into interpretable components, Caterpillar SSA enables better understanding of the underlying structure of the data, facilitating forecasting, smoothing, and anomaly detection. In the context of financial trading, the technique is used to analyze price data, detect significant trends or turning points, and inform trading decisions.
█ Input Parameters
This indicator takes several inputs that affect its signal output. These inputs can be classified into three categories: Basic Settings, UI Options, and Computation Parameters.
Source: This input represents the source of price data, which is typically the closing price of an asset. The user can select other price data, such as opening price, high price, or low price. The selected price data is then utilized in the Caterpillar SSA calculation process.
Lag: The lag input determines the window size used for the time series decomposition. A higher lag value implies that the SSA algorithm will consider a longer range of historical data when extracting the underlying trend and components. This parameter is crucial, as it directly impacts the resulting smoothed series and the quality of extracted components.
Number of Computations: This input, denoted as 'ncomp,' specifies the number of eigencomponents to be considered in the reconstruction of the time series. A smaller value results in a smoother output signal, while a higher value retains more details in the series, potentially capturing short-term fluctuations.
SSA Period Normalization: This input is used to normalize the SSA period, which adjusts the significance of each eigencomponent to the overall signal. It helps in making the algorithm adaptive to different timeframes and market conditions.
Number of Bars: This input specifies the number of bars to be processed by the algorithm. It controls the range of data used for calculations and directly affects the computation time and the output signal.
Number of Bars to Render: This input sets the number of bars to be plotted on the chart. A higher value slows down the computation but provides a more comprehensive view of the indicator's performance over a longer period. This value controls how far back the indicator is rendered.
Color bars: This boolean input determines whether the bars should be colored according to the signal's direction. If set to true, the bars are colored using the defined colors, which visually indicate the trend direction.
Show signals: This boolean input controls the display of buy and sell signals on the chart. If set to true, the indicator plots shapes (triangles) to represent long and short trade signals.
Static Computation Parameters:
The indicator also includes several internal parameters that affect the Caterpillar SSA algorithm, such as Maxncomp, MaxLag, and MaxArrayLength. These parameters set the maximum allowed values for the number of computations, the lag, and the array length, ensuring that the calculations remain within reasonable limits and do not consume excessive computational resources.
█ A Note on Endpointed, Non-repainting Indicators
An endpointed indicator is one that does not recalculate or repaint its past values based on new incoming data. In other words, the indicator's previous signals remain the same even as new price data is added. This is an important feature because it ensures that the signals generated by the indicator are reliable and accurate, even after the fact.
When an indicator is non-repainting or endpointed, it means that the trader can have confidence in the signals being generated, knowing that they will not change as new data comes in. This allows traders to make informed decisions based on historical signals, without the fear of the signals being invalidated in the future.
In the case of the Endpointed SSA of Price, this non-repainting property is particularly valuable because it allows traders to identify trend changes and reversals with a high degree of accuracy, which can be used to inform trading decisions. This can be especially important in volatile markets where quick decisions need to be made.
Gamma Bands v. 7.0
Gamma Bands are based on the previous day's data for the base instrument, Volatility, Options flow (imported from the external source Quandl via the TradingView API, as TV does not support Options as instruments) and a few other factors to calculate intraday levels. Those levels, in correlation with even pure Price Action, work like a charm, which is confirmed by big orders often placed exactly on those levels on Futures Contracts. We have levels of +/- 0.25, 0.5 and 1.0 that are calculated from the Pivot Point and work like Support and Resistance. The higher the Gamma number, the stronger the level. Passing Gamma +1/-1 would be a good entry point for trades, as it almost always coincides with a Trend Day. Levels are calculated by a Machine Learning algorithm written in Python which downloads data from the Options and Darkpool markets, processes it, calculates the levels and exports them to Quandl; in PineScript the data is then imported into the indicator. Levels are refreshed each day and are valid for that particular trading day.
There is also the option to display the Initial Balance range (the high and low of the bars/candles from the first hour of the regular cash session). Breaking one of the extremes of the Initial Balance very often drives sentiment for the rest of the session.
Volatility Reversal Levels
They are calculated taking into account the Options flow imported to TV (strikes, Call/Put types and expiration dates) in combination with Volatility and Volume flow. Based on that, we calculate on a daily basis a Significant Close level and a "Stop and Reversal" level.
Very often, reaching the area close to those levels either triggers an immediate reversal of the previous trend or at least pushes price into a consolidation range.
Lorentzian ML [Sublime Traders]
Lorentzian ML
Context: The whole idea of this indicator is to use the Lorentzian Classifier (a popular machine learning model suited to analyzing time-series data), add some oscillators, and filter them with volume averages in order to get precise swing-move indications.
The Lorentzian ML indicator uses the Lorentzian Classifier (LDC) algorithm, which takes the Commodity Channel Index (CCI) and Relative Strength Index (RSI) signals as raw material to provide buy and sell signals. The indicator is accompanied by take-profit, stop-loss and entry lines based on the Average True Range (ATR).
Features:
1. Lorentzian Classifier:
Uses the difference between the current and previous values of CCI and RSI to generate buy and sell signals.
The classifier threshold can be adjusted using the input parameter.
2. ATR-based Take Profit Line:
A horizontal take profit line is plotted when buy or sell signals occur.
The line is based on the ATR value and a user-defined multiplier (an illustrative sketch of this calculation appears after the feature list).
3. VMA filtering
Using the simple switches Scalper, Swing or Holder, users can easily filter the frequency of the signals in addition to the lookback and threshold filters. This will affect the VMA lines used, which draw on data gathered from multiple timeframes.
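Below is a minimal sketch (in Python rather than PineScript) of the ATR-based take-profit idea mentioned in feature 2 above; the entry price, ATR value and multiplier are made-up example numbers:

```python
def atr_take_profit(entry_price, atr_value, multiplier=2.0, direction="long"):
    """Take-profit level offset from the entry by a multiple of ATR."""
    offset = atr_value * multiplier
    return entry_price + offset if direction == "long" else entry_price - offset

print(atr_take_profit(100.0, 1.5, 2.0, "long"))    # 103.0
print(atr_take_profit(100.0, 1.5, 2.0, "short"))   # 97.0
```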
Visual Representation:
The indicator plots green candles for buy signals and red candles for sell signals.
Buy and sell labels are displayed on the chart to mark the points where signals occur.
The ATR-based take profit line is displayed in a user-defined color and line width.
Visual representation of the VMA lines: Red - bearish, Blue - uncertain, Green - bullish
Changes and features to come
Fix "holder" switch on sell side that sometimes bugs the whole chart.
Add more intuitive filtering methods.
Add two more oscillators to the Lorentzian pool.
Create switches for Lorentzian source.
GKD-C Smooth Step [Loxx]
Giga Kaleidoscope Smooth Step is a Confirmation module included in Loxx's "Giga Kaleidoscope Modularized Trading System".
█ Giga Kaleidoscope Modularized Trading System
What is Loxx's "Giga Kaleidoscope Modularized Trading System"?
The Giga Kaleidoscope Modularized Trading System is a trading system built on the philosophy of the NNFX (No Nonsense Forex) algorithmic trading.
What is an NNFX algorithmic trading strategy?
The NNFX algorithm is built on the principles of trend, momentum, and volatility. There are seven core components in the NNFX trading algorithm:
1. Volatility - price volatility; e.g., Average True Range, True Range Double, Close-to-Close, etc.
2. Baseline - a moving average to identify price trend
3. Confirmation 1 - a technical indicator used to identify trends.
4. Confirmation 2 - a technical indicator used to identify trends.
5. Continuation - a technical indicator used to identify trends.
6. Volatility/Volume - a technical indicator used to identify volatility/volume breakouts/breakdown.
7. Exit - a technical indicator used to determine when a trend is exhausted.
How does Loxx's GKD (Giga Kaleidoscope Modularized Trading System) implement the NNFX algorithm outlined above?
Loxx's GKD v1.0 system has five types of modules (indicators/strategies). These modules are:
1. GKD-BT - Backtesting module (Volatility, Number 1 in the NNFX algorithm)
2. GKD-B - Baseline module (Baseline and Volatility/Volume, Numbers 1 and 2 in the NNFX algorithm)
3. GKD-C - Confirmation 1/2 and Continuation module (Confirmation 1/2 and Continuation, Numbers 3, 4, and 5 in the NNFX algorithm)
4. GKD-V - Volatility/Volume module (Confirmation 1/2, Number 6 in the NNFX algorithm)
5. GKD-E - Exit module (Exit, Number 7 in the NNFX algorithm)
(additional module types will be added in future releases)
Each module interacts with every module by passing data between modules. Data is passed between each module as described below:
GKD-B => GKD-V => GKD-C(1) => GKD-C(2) => GKD-C(Continuation) => GKD-E => GKD-BT
That is, the Baseline indicator passes its data to Volatility/Volume. The Volatility/Volume indicator passes its values to the Confirmation 1 indicator. The Confirmation 1 indicator passes its values to the Confirmation 2 indicator. The Confirmation 2 indicator passes its values to the Continuation indicator. The Continuation indicator passes its values to the Exit indicator, and finally, the Exit indicator passes its values to the Backtest strategy.
This chaining of indicators requires that each module conform to Loxx's GKD protocol, therefore allowing for the testing of every possible combination of technical indicators that make up the six components of the NNFX algorithm.
What does the application of the GKD trading system look like?
Example trading system:
Backtest: Strategy with 1-3 take profits, trailing stop loss, multiple types of PnL volatility, and 2 backtesting styles
Baseline: Hull Moving Average as shown on the chart above
Volatility/Volume: Average Directional Index (ADX) as shown on the chart above
Confirmation 1: Smooth Step as shown on the chart above
Confirmation 2: Williams Percent Range
Continuation: Fisher Transform
Exit: Rex Oscillator
Each GKD indicator is denoted with a module identifier of either: GKD-BT, GKD-B, GKD-C, GKD-V, or GKD-E. This allows traders to understand to which module each indicator belongs and where each indicator fits into the GKD protocol chain.
Giga Kaleidoscope Modularized Trading System Signals (based on the NNFX algorithm)
Standard Entry
1. GKD-C Confirmation 1 Signal
2. GKD-B Baseline agrees
3. Price is within a range of 0.2x Volatility and 1.0x Volatility of the Goldie Locks Mean
4. GKD-C Confirmation 2 agrees
5. GKD-V Volatility/Volume agrees
Baseline Entry
1. GKD-B Baseline signal
2. GKD-C Confirmation 1 agrees
3. Price is within a range of 0.2x Volatility and 1.0x Volatility of the Goldie Locks Mean
4. GKD-C Confirmation 2 agrees
5. GKD-V Volatility/Volume agrees
6. GKD-C Confirmation 1 signal was less than 7 candles prior
Continuation Entry
1. Standard Entry, Baseline Entry, or Pullback; entry triggered previously
2. GKD-B Baseline hasn't crossed since entry signal trigger
3. GKD-C Confirmation Continuation Indicator signals
4. GKD-C Confirmation 1 agrees
5. GKD-B Baseline agrees
6. GKD-C Confirmation 2 agrees
1-Candle Rule Standard Entry
1. GKD-C Confirmation 1 signal
2. GKD-B Baseline agrees
3. Price is within a range of 0.2x Volatility and 1.0x Volatility of the Goldie Locks Mean
Next Candle:
1. Price retraced (Long: current close is below the prior close; Short: current close is above the prior close)
2. GKD-B Baseline agrees
3. GKD-C Confirmation 1 agrees
4. GKD-C Confirmation 2 agrees
5. GKD-V Volatility/Volume agrees
1-Candle Rule Baseline Entry
1. GKD-B Baseline signal
2. GKD-C Confirmation 1 agrees
3. Price is within a range of 0.2x Volatility and 1.0x Volatility of the Goldie Locks Mean
4. GKD-C Confirmation 1 signal was less than 7 candles prior
Next Candle:
1. Price retraced (Long: current close is below the prior close; Short: current close is above the prior close)
2. GKD-B Baseline agrees
3. GKD-C Confirmation 1 agrees
4. GKD-C Confirmation 2 agrees
5. GKD-V Volatility/Volume Agrees
PullBack Entry
1. GKD-B Baseline signal
2. GKD-C Confirmation 1 agrees
3. Price is beyond 1.0x Volatility of Baseline
Next Candle:
1. Price is within a range of 0.2x Volatility and 1.0x Volatility of the Goldie Locks Mean
2. GKD-C Confirmation 1 agrees
3. GKD-C Confirmation 2 agrees
4. GKD-V Volatility/Volume agrees
█ Smooth Step
What is Smooth Step?
In many cases (computer graphics, machine learning, technical analysis) we need normalized values. Smooth step is one of the possible ways to do it.
It belongs to the family of sigmoidal functions (clamping in this case, since this indicator, as is, is not used for interpolation), and it produces a subset of what the built-in stochastic produces, except that I kept it in its original range of 0 to 1 and it filters out some values that the stochastic produces. Also, this indicator can use all the usual prices (not just close/close or low/high/close).
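For reference, a minimal Python sketch of the classic smoothstep function with the clamping described above; mapping price into its recent low/high range is shown only as a hypothetical usage:

```python
def smooth_step(x, lo, hi):
    """Classic smoothstep: clamp x into [lo, hi], then map to 0..1 with 3t^2 - 2t^3."""
    if hi == lo:
        return 0.0
    t = (x - lo) / (hi - lo)
    t = max(0.0, min(1.0, t))          # clamping, as described above
    return t * t * (3.0 - 2.0 * t)

# e.g. normalize price within its recent low/high range (stochastic-like, but smoothed)
print(smooth_step(105, 100, 110))      # 0.5
```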
Requirements
Inputs
Confirmation 1 and Solo Confirmation: GKD-V Volatility / Volume indicator
Confirmation 2: GKD-C Confirmation indicator
Outputs
Confirmation 2 and Solo Confirmation: GKD-E Exit indicator
Confirmation 1: GKD-C Confirmation indicator
Continuation: GKD-E Exit indicator
Additional features will be added in future releases.
This indicator is only available to ALGX Trading VIP group members . You can see the Author's Instructions below to get more information on how to get access.
Machine Learning: Lorentzian Classification
█ OVERVIEW
A Lorentzian Distance Classifier (LDC) is a Machine Learning classification algorithm capable of categorizing historical data from a multi-dimensional feature space. This indicator demonstrates how Lorentzian Classification can also be used to predict the direction of future price movements when used as the distance metric for a novel implementation of an Approximate Nearest Neighbors (ANN) algorithm.
█ BACKGROUND
In physics, Lorentzian space is perhaps best known for its role in describing the curvature of space-time in Einstein's theory of General Relativity (2). Interestingly, however, this abstract concept from theoretical physics also has tangible real-world applications in trading.
Recently, it was hypothesized that Lorentzian space was also well-suited for analyzing time-series data (4), (5). This hypothesis has been supported by several empirical studies that demonstrate that Lorentzian distance is more robust to outliers and noise than the more commonly used Euclidean distance (1), (3), (6). Furthermore, Lorentzian distance was also shown to outperform dozens of other highly regarded distance metrics, including Manhattan distance, Bhattacharyya similarity, and Cosine similarity (1), (3). Outside of Dynamic Time Warping based approaches, which are unfortunately too computationally intensive for PineScript at this time, the Lorentzian Distance metric consistently scores the highest mean accuracy over a wide variety of time series data sets (1).
Euclidean distance is commonly used as the default distance metric for NN-based search algorithms, but it may not always be the best choice when dealing with financial market data. This is because financial market data can be significantly impacted by proximity to major world events such as FOMC Meetings and Black Swan events. This event-based distortion of market data can be framed as similar to the gravitational warping caused by a massive object on the space-time continuum. For financial markets, the analogous continuum that experiences warping can be referred to as "price-time".
Below is a side-by-side comparison of how neighborhoods of similar historical points appear in three-dimensional Euclidean Space and Lorentzian Space:
This figure demonstrates how Lorentzian space can better accommodate the warping of price-time since the Lorentzian distance function compresses the Euclidean neighborhood in such a way that the new neighborhood distribution in Lorentzian space tends to cluster around each of the major feature axes in addition to the origin itself. This means that, even though some nearest neighbors will be the same regardless of the distance metric used, Lorentzian space will also allow for the consideration of historical points that would otherwise never be considered with a Euclidean distance metric.
Intuitively, the advantage inherent in the Lorentzian distance metric makes sense. For example, it is logical that the price action that occurs in the hours after Chairman Powell finishes delivering a speech would resemble at least some of the previous times when he finished delivering a speech. This may be true regardless of other factors, such as whether or not the market was overbought or oversold at the time or if the macro conditions were more bullish or bearish overall. These historical reference points are extremely valuable for predictive models, yet the Euclidean distance metric would miss these neighbors entirely, often in favor of irrelevant data points from the day before the event. By using Lorentzian distance as a metric, the ML model is instead able to consider the warping of price-time caused by the event and, ultimately, transcend the temporal bias imposed on it by the time series.
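A common formulation of the Lorentzian distance in the time-series literature cited below sums ln(1 + |difference|) over the features; the sketch compares it with Euclidean distance on made-up feature values and is an illustration, not the indicator's source code:

```python
import math

def lorentzian_distance(a, b):
    """Lorentzian distance: sum of ln(1 + |ai - bi|) over all features."""
    return sum(math.log(1.0 + abs(ai - bi)) for ai, bi in zip(a, b))

def euclidean_distance(a, b):
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

current  = [55.0, 0.2, 140.0, 22.0]     # e.g. RSI, WT, CCI, ADX for the newest bar (made up)
historic = [70.0, 0.9, 310.0, 35.0]     # a bar near a major news event (outlier-like, made up)
print(euclidean_distance(current, historic))    # dominated by the large CCI difference
print(lorentzian_distance(current, historic))   # log damping shrinks the outlier's influence
```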
For more information on the implementation details of the Approximate Nearest Neighbors (ANN) algorithm used in this indicator, please refer to the detailed comments in the source code.
█ HOW TO USE
Below is an explanatory breakdown of the different parts of this indicator as it appears in the interface:
Below is an explanation of the different settings for this indicator:
General Settings:
Source - This has a default value of "hlc3" and is used to control the input data source.
Neighbors Count - This has a default value of 8, a minimum value of 1, a maximum value of 100, and a step of 1. It is used to control the number of neighbors to consider.
Max Bars Back - This has a default value of 2000.
Feature Count - This has a default value of 5, a minimum value of 2, and a maximum value of 5. It controls the number of features to use for ML predictions.
Color Compression - This has a default value of 1, a minimum value of 1, and a maximum value of 10. It is used to control the compression factor for adjusting the intensity of the color scale.
Show Exits - This has a default value of false. It controls whether to show the exit threshold on the chart.
Use Dynamic Exits - This has a default value of false. It is used to control whether to attempt to let profits ride by dynamically adjusting the exit threshold based on kernel regression.
Feature Engineering Settings:
Note: The Feature Engineering section is for fine-tuning the features used for ML predictions. The default values are optimized for the 4H to 12H timeframes for most charts, but they should also work reasonably well for other timeframes. By default, the model can support features that accept two parameters (Parameter A and Parameter B, respectively). Even though there are only 4 features provided by default, the same feature with different settings counts as two separate features. If the feature only accepts one parameter, then the second parameter will default to EMA-based smoothing with a default value of 1. These features represent the most effective combination I have encountered in my testing, but additional features may be added as additional options in the future.
Feature 1 - This has a default value of "RSI" and options are: "RSI", "WT", "CCI", "ADX".
Feature 2 - This has a default value of "WT" and options are: "RSI", "WT", "CCI", "ADX".
Feature 3 - This has a default value of "CCI" and options are: "RSI", "WT", "CCI", "ADX".
Feature 4 - This has a default value of "ADX" and options are: "RSI", "WT", "CCI", "ADX".
Feature 5 - This has a default value of "RSI" and options are: "RSI", "WT", "CCI", "ADX".
Filters Settings:
Use Volatility Filter - This has a default value of true. It is used to control whether to use the volatility filter.
Use Regime Filter - This has a default value of true. It is used to control whether to use the trend detection filter.
Use ADX Filter - This has a default value of false. It is used to control whether to use the ADX filter.
Regime Threshold - This has a default value of -0.1, a minimum value of -10, a maximum value of 10, and a step of 0.1. It is used to control the Regime Detection filter for detecting Trending/Ranging markets.
ADX Threshold - This has a default value of 20, a minimum value of 0, a maximum value of 100, and a step of 1. It is used to control the threshold for detecting Trending/Ranging markets.
Kernel Regression Settings:
Trade with Kernel - This has a default value of true. It is used to control whether to trade with the kernel.
Show Kernel Estimate - This has a default value of true. It is used to control whether to show the kernel estimate.
Lookback Window - This has a default value of 8 and a minimum value of 3. It is used to control the number of bars used for the estimation. Recommended range: 3-50
Relative Weighting - This has a default value of 8 and a step size of 0.25. It is used to control the relative weighting of time frames. Recommended range: 0.25-25
Start Regression at Bar - This has a default value of 25. It is used to control the bar index on which to start regression. Recommended range: 0-25
Display Settings:
Show Bar Colors - This has a default value of true. It is used to control whether to show the bar colors.
Show Bar Prediction Values - This has a default value of true. It controls whether to show the ML model's evaluation of each bar as an integer.
Use ATR Offset - This has a default value of false. It controls whether to use the ATR offset instead of the bar prediction offset.
Bar Prediction Offset - This has a default value of 0 and a minimum value of 0. It is used to control the offset of the bar predictions as a percentage from the bar high or close.
Backtesting Settings:
Show Backtest Results - This has a default value of true. It is used to control whether to display the win rate of the given configuration.
█ WORKS CITED
(1) R. Giusti and G. E. A. P. A. Batista, "An Empirical Comparison of Dissimilarity Measures for Time Series Classification," 2013 Brazilian Conference on Intelligent Systems, Oct. 2013, DOI: 10.1109/bracis.2013.22.
(2) Y. Kerimbekov, H. Ş. Bilge, and H. H. Uğurlu, "The use of Lorentzian distance metric in classification problems," Pattern Recognition Letters, vol. 84, 170–176, Dec. 2016, DOI: 10.1016/j.patrec.2016.09.006.
(3) A. Bagnall, A. Bostrom, J. Large, and J. Lines, "The Great Time Series Classification Bake Off: An Experimental Evaluation of Recently Proposed Algorithms." ResearchGate, Feb. 04, 2016.
(4) H. Ş. Bilge, Yerzhan Kerimbekov, and Hasan Hüseyin Uğurlu, "A new classification method by using Lorentzian distance metric," ResearchGate, Sep. 02, 2015.
(5) Y. Kerimbekov and H. Şakir Bilge, "Lorentzian Distance Classifier for Multiple Features," Proceedings of the 6th International Conference on Pattern Recognition Applications and Methods, 2017, DOI: 10.5220/0006197004930501.
(6) V. Surya Prasath et al., "Effects of Distance Measure Choice on KNN Classifier Performance - A Review."
█ ACKNOWLEDGEMENTS
@veryfid - For many invaluable insights, discussions, and advice that helped to shape this project.
@capissimo - For open sourcing his interesting ideas regarding various KNN implementations in PineScript, several of which helped inspire my original undertaking of this project.
@RikkiTavi - For many invaluable physics-related conversations and for helping me develop a mechanism for visualizing various distance algorithms in 3D using JavaScript.
@jlaurel - For invaluable literature recommendations that helped me to understand the underlying subject matter of this project.
@annutara - For help in beta-testing this indicator and for sharing many helpful ideas and insights early on in its development.
@jasontaylor7 - For helping to beta-test this indicator and for many helpful conversations that helped to shape my backtesting workflow
@meddymarkusvanhala - For helping to beta-test this indicator
@dlbnext - For incredibly detailed backtesting testing of this indicator and for sharing numerous ideas on how the user experience could be improved.
Golden Slope
Golden Slope is an ATR-based trend tool that mixes in KNN machine learning to let you confirm your entries and exits, which can give significantly more accurate signals.
Flag and rectangle signals are machine learning signals; they confirm an entry and an exit position. You can use the entry and exit signals alone, but it is more accurate to confirm them with the machine learning signals. The idea is either to see a machine learning signal first and confirm it with a Golden Slope entry, or the other way around.
P.S. Watch out if a candle starts hitting the golden belly (the yellow area) after an entry signal is given, because it can indicate a reversal before the machine learning signals or the Golden Slope itself catch it, but these events happen rarely.
Machine Learning: kNN (New Approach)
Description:
kNN is a very robust and simple method for data classification and prediction. It is very effective if the training data is large. However, it is distinguished by the difficulty of determining its main parameter, K (the number of nearest neighbors), beforehand. The computation cost is also quite high because we need to compute the distance from each instance to all training samples. Nevertheless, in algorithmic trading kNN is reported to perform on a par with such techniques as SVM and Random Forest. It is also widely used in the area of data science.
The input data is just a long series of prices over time without any particular features. The value to be predicted is just the next bar's price. The way that this problem is solved for both nearest neighbor techniques and for some other types of prediction algorithms is to create training records by taking, for instance, 10 consecutive prices and using the first 9 as predictor values and the 10th as the prediction value. Done this way, given 100 data points in your time series you could create 10 different training records. It's possible to create even more training records than 10 by creating a new record starting at every data point. For instance, you could take the first 10 data points and create a record. Then you could take the 10 consecutive data points starting at the second data point, the 10 consecutive data points starting at the third data point, etc.
By default, shown are only 10 initial data points as predictor values and the 6th as the prediction value.
Here is a step-by-step walkthrough of how to compute the K nearest neighbors (KNN) algorithm for quantitative data:
1. Determine parameter K = number of nearest neighbors.
2. Calculate the distance between the instance and all the training samples. As we are dealing with one-dimensional distance, we simply take absolute value from the instance to value of x (| x – v |).
3. Rank the distance and determine nearest neighbors based on the K'th minimum distance.
4. Gather the values of the nearest neighbors.
5. Use average of nearest neighbors as the prediction value of the instance.
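Putting steps 1-5 together with the sliding-window training records described earlier, a compact Python sketch could look like the following; the distance here sums the absolute differences across the window's values, a direct multi-value extension of the | x – v | idea:

```python
import numpy as np

def make_training_records(prices, window=10):
    """Sliding windows: the first window-1 values are predictors, the last is the target."""
    recs = np.asarray([prices[i:i + window] for i in range(len(prices) - window + 1)], dtype=float)
    return recs[:, :-1], recs[:, -1]

def knn_predict(prices, k=5, window=10):
    """Predict the next price as the average target of the k closest windows."""
    X, y = make_training_records(prices, window)
    query = np.asarray(prices[-(window - 1):], dtype=float)   # most recent predictors
    dists = np.abs(X - query).sum(axis=1)                      # step 2: distance to every record
    nearest = np.argsort(dists)[:k]                            # step 3: k minimum distances
    return float(y[nearest].mean())                            # steps 4-5: average the neighbors
```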
The original logic of the algorithm was slightly modified, and as a result at approx. N=17 the resulting curve nicely approximates that of the sma(20). See the description below. Beside the sma-like MA this algorithm also gives you a hint on the direction of the next bar move.
Machine Learning & Optimization Moving Average (Expo)
█ An indicator that finds the best moving average
We all know that market characteristics change over time; volatility, volume, momentum, etc., keep changing. Therefore, traders fine-tune their indicators and strategies to fit the constantly changing market. Unfortunately, that means there is no "best" MA period that suits all these conditions. That is why we have developed this algorithm that self-adapts and finds the best MA period based on Machine Learning and Optimization calculations.
This indicator helps traders and investors use the best possible moving average period on the selected timeframe and asset and ensures that the period stays updated even as market characteristics change over time.
█ Self-optimizing moving average
There is no doubt that different markets and timeframes need different MA periods. Therefore, our algorithm optimizes the moving average period within the given parameter range and optimizes its value based on either performance, win rate, or the combined results. The moving average period updates automatically on the chart for you.
Traders can choose to use our Machine Learning Algorithm to optimize the MA values or can optimize only using the optimization algorithm.
Performance
If you select to optimize based on performance, the calculation returns the period with the highest gains.
Winrate
If you select to optimize based on win rate, the calculation returns the period that gives the best win rate.
Combined
If you select to optimize based on combined results, the calculations score the performance and win rate separately and choose the best period with the highest ranking in both aspects.
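The indicator's actual optimization and machine-learning logic is proprietary; as a rough illustration of the general idea, the brute-force Python sketch below scores each candidate period by the total return and win rate of a naive price/MA crossover rule and picks the best combined score:

```python
import numpy as np

def sma(x, n):
    return np.convolve(x, np.ones(n) / n, mode="valid")

def optimize_ma_period(close, periods=range(5, 101)):
    """Brute-force search over MA periods using a simple price/MA crossover rule."""
    close = np.asarray(close, dtype=float)
    best = None
    for n in periods:
        ma = sma(close, n)
        px = close[n - 1:]                                 # prices aligned with the MA
        ret = np.diff(px) / px[:-1]
        pos = np.where(px[:-1] > ma[:-1], 1.0, -1.0)       # long above the MA, short below
        trade = pos * ret
        perf = trade.sum()
        winrate = (trade > 0).mean() if len(trade) else 0.0
        score = perf + winrate                             # naive "combined" ranking
        if best is None or score > best[0]:
            best = (score, n, perf, winrate)
    return best[1], best[2], best[3]                       # best period, its return, its win rate
```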
█ Finding the best moving average for any asset and timeframe
Traders can choose to find the best moving average based on price crossings.
█ Finding the best combination of moving averages for any asset and timeframe
Traders can choose to find the best crossing strategy, where the algorithm compares the 2 averages and returns the best fast and slow period.
█ Alerts
Traders can choose to be alerted when a new best moving average is found or when a moving average cross occurs.
-----------------
Disclaimer
The information contained in my Scripts/Indicators/Ideas/Algos/Systems does not constitute financial advice or a solicitation to buy or sell any securities of any type. I will not accept liability for any loss or damage, including without limitation any loss of profit, which may arise directly or indirectly from the use of or reliance on such information.
All investments involve risk, and the past performance of a security, industry, sector, market, financial product, trading strategy, backtest, or individual's trading does not guarantee future results or returns. Investors are fully responsible for any investment decisions they make. Such decisions should be based solely on an evaluation of their financial circumstances, investment objectives, risk tolerance, and liquidity needs.
My Scripts/Indicators/Ideas/Algos/Systems are only for educational purposes!
Esqvair's Neural Reversal Probability Indicator
Introduction
Esqvair's Neural Reversal Probability Indicator is an indicator that shows the probability of a reversal.
Warning: This script should only be used on the 1-minute chart.
How to use
When a signal appears (by default it is a green bar), a reversal should be expected.
The signal appears when the indicator value >= Threshold.
If you want more signals, you must lower the threshold; if you want fewer, you must increase it.
For some assets, like Forex pairs, you have to optimize the threshold yourself, but for most stocks, the default threshold works well.
How well a threshold fits an asset depends on the volatility of the asset.
For most assets, the indicator ranges from 35 to 75.
Settings
Smoothing - The default is 1, which means no smoothing. Indicator smoothing by SMA.
Threshold - default 71.0 is responsible for the occurrence of signals, read "How to use" part to learn more
The Indicator
This indicator is a pre-trained neural network that was trained outside of TradingView and then its structure and weights values were converted to PineScript.
Warning: A neural network is a black box in the sense that although it can approximate any function, studying its structure will not give you any idea about the structure of the function being approximated.
Possible questions
Why does the indicator value range from 35 to 75 most of the time when the probability should range from 0 to 100?
-Due to some randomness in the markets, a neural network can never be 100% sure.
What data was used to train the neural network?
-This was BTCUSD 1 minute chart data from 02/05/2020 to 02/05/2022.
Where did you train the neural network and convert it to PineScript?
-I used a programming language that I know.
Tesla Coil ML
This is a re-implementation of @veryfid's wonderful Tesla Coil indicator to leverage basic Machine Learning Algorithms to help classify coil crossovers. The original Tesla Coil indicator requires extensive training and practice for the user to develop adequate intuition to interpret coil crossovers. The goal for this version is to help the user understand the underlying logic of the Tesla Coil indicator and provide a more intuitive way to interpret the indicator. The signals should be interpreted as suggestions rather than as a hard-coded set of rules.
NOTE: Please do NOT trade off the signals blindly. Always try to use your own intuition for understanding the coils and check for confluence with other indicators before initiating a trade.
Mean Shift Pivot Clustering
Core Concepts
According to Jeff Greenblatt in his book "Breakthrough Strategies for Predicting Any Market", Fibonacci and Lucas sequences are observed repeated in the bar counts from local pivot highs/lows. They occur from high to high, low to high, high to low, or low to high. Essentially, this phenomenon is observed repeatedly from any pivot points on any time frame. Greenblatt combines this observation with Elliott Waves to predict the price and time reversals. However, I am no Elliottician so it was not easy for me to use this in a practical manner. I decided to only use the bar count projections and ignore the price. I projected a subset of Fibonacci and Lucas sequences along with the Fibonacci ratios from each pivot point. As expected, a projection from each pivot point resulted in a large set of plotted data and looks like a huge gong show of lines. Surprisingly, I did notice clusters and have observed those clusters to be fairly accurate.
Fibonacci Sequence: 1, 2, 3, 5, 8, 13, 21, 34...
Lucas Sequence: 2, 1, 3, 4, 7, 11, 18, 29, 47...
Fibonacci Ratios (converted to whole numbers): 23, 38, 50, 61, 78, 127, 161...
Light Bulb Moment
My eyes may suck at grouping the lines together, but what about clustering algorithms? I chose to use a gimped version of Mean Shift because it doesn't require me to know in advance how many lines to expect, like K-Means does. Mean shift is computationally expensive, and with Pinescript's 500ms timeout I had to make do without the KDE. In other words, I skipped the weighting part, but I may try to incorporate it in the future. The code is from Harrison Kinsley. He's a fantastic teacher!
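A bare-bones version of that flat-kernel (no KDE weighting) mean shift on one-dimensional bar positions might look like the following; the projected bar counts are made-up examples, not output from the script:

```python
import numpy as np

def mean_shift_1d(points, radius=3, iters=50, tol=1e-3):
    """Flat-kernel mean shift: every point repeatedly moves to the mean of its
    neighbors within `radius` bars until the shifts die out."""
    pts = np.asarray(points, dtype=float)
    centers = pts.copy()
    for _ in range(iters):
        new = np.array([pts[np.abs(pts - c) <= radius].mean() for c in centers])
        if np.max(np.abs(new - centers)) < tol:
            centers = new
            break
        centers = new
    # Collapse near-identical centers into cluster locations
    return sorted({round(c, 1) for c in centers})

# Hypothetical projected pivot bar counts from several pivots
projected = [21, 22, 23, 34, 35, 55, 56, 57, 89]
print(mean_shift_1d(projected, radius=3))   # clusters near 22, 34.5, 56 and 89
```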
Usage
Search Radius: how far apart should the bars be before they are excluded from the cluster? Try to stick with a figure between 1-5. Too large a figure will give meaningless results.
Pivot Offset: looks left and right X number of bars for a pivot. Same setting as the default TradingView pivot high/low script.
Show Lines Back: show historical predicted lines. (These can change)
Use this script in conjunction with Fibonacci price retracement/extension levels and/or other support/resistance levels. If a projected time pivot is coming up but price is nowhere near a support/resistance level, it's probably a fake-out.
Notes
Re-painting is intended. When a new pivot is found, it will project out the Fib/Lucas sequences so the algorithm will run again with additional information.
The script is for informational and educational purposes only.
Do not use this indicator by itself to trade!
Unreal Algo [UPRIGHT] (cc)
Hello Traders,
It's finally that time, I'm releasing my baby out into the world.
Unreal Algo is the answer to the question you didn't know you were asking.
It's for beginners and advanced traders alike. I've made the settings very customizable, but also easy to just jump right in.
How it works:
It uses tons of calculations, confirmations, and filters to bring you the most accurate predictive algorithm possible. The algo will automatically adjust to different volatility in the market to still provide accurate signals and confirmation. It will automatically show support and resistance in real-time. A Moving Average cloud with speeds varying from extra fast to slow; they will help traders confirm whether they should stay in the trade. Also, I added 2 stoplosses, because the importance of risk management should always be emphasized even with strong accuracy.
Features:
---The Most Accurate Signals on the planet.
--------Buy/Sell, Up/Down direction change, and Red/Green arrows.
--- MA cloud with beautiful color blend that can act as a confirmation of direction.
-------- 17 different types/versions of moving Averages to choose from.
--------Easy line transparency and toggle adjustments.
--------Easy cloud transparency adjustments.
--- Support and Resistance .
--- Advanced PSAR that will show red when bearish while in a bullish trend, and vice versa.
---Potential Orderblocks that can be extended to show a grid (adding additional support/resistance information).
--- Fibonacci Lines.
--- Pivot bar that changes colors based on pivot direction.
---Resistance Breakout and Support Breakdown Signals .
--- Relative volume & momentum bar coloring.
---Two Separate Stoplosses .
--------Circles change color and flip to top and red for Short, bottom and green for long.
--------Horizontal stoploss that tracks the price and flags to take profit. White for Long and Yellow for short.
---As always... Fully customizable .
Different customization options:
Without stoplosses and Support/Resistance.
Without Support/Resistance, arrows and psar removed.
Added back Support/Resistance, lightened MA cloud
Fully loaded (minus trailing stoploss)
[UPRIGHT Trading] MoneyFlowTrend Oscillator (cc) Premium
Hey Traders,
Tonight I'm updating my beloved original MoneyFlowTrend Oscillator with a Premium version.
A little background:
This is an indicator that I've been working to bring to life for years; learning pinescript code has allowed me to do just that.
Built on the idea of Supply & Demand Zones, this utilizes money flow and numerous calculations to create a picture of what is happening underneath the surface of the price action.
Richard Wyckoff was one of the first market analysts to explain how the economic cycle can be applied to explain market price action; thus, technical analysis. He described two zones among the total of 4 phases; the two zones are Distribution and Accumulation zones, also known as Supply & Demand zones.
______________________________
Since most of you already know the economic cycle, I will try to be concise.
The basic ideas:
When supply > demand, the price goes down.
When demand > supply, price goes up.
When demand = supply, the price stays about the same (going sideways).
Price action has --Uptrends, downtrends, and price ranges (consolidation).
Wyckoff's 4 phases to explain this price action:
1) Accumulation (Demand zone)
2) Markup (Uptrend)
3) Distribution (Supply zone)
4) Markdown (Downtrend)
______________________________
With all that said, usually you will either see a sharp jump from a supply or demand zone, or price will consolidate within it until a new zone forms on the chart.
This indicator attempts to put all of that into a lower indicator. I tried to separate the retailers and the banks and then put them back together to get a full picture.
Premium:
-Even MORE accurate signals (in both quality and quantity).
-Reversal Signal added (Circle- shown on chart)
-Cleaner Scaling and Organization.
The chart shown above should look like this:
Good luck traders.
Cheers,
Mike
(UPRIGHT Trading)
M.Right Awesome RSI+ (cc)Hey Traders,
Tonight I figured I'd release a special indicator that I've had in the works for years and finally was able to piece it together using pine. It's an extremely accurate take on the RSI. I plan to continue to refine the indicator and add more features, but as it is this is still one you can make a lot of money with.
(((((Please note: all circles and arrows in the chart above are drawn for illustration. Below is a chart showing regular session)))))
This indicator will act similarly to a regular RSI (Relative Strength Index) in that there are Oversold and Overbought levels, but it also adds volatility bands around it to allow for more accurate signals while moving the Oversold (OS) and Overbought (OB) levels further apart (fewer false OB/OS signals). As shown in the chart above, it's able to detect some pretty big moves with both speed and accuracy.
Most of you are familiar with and use an RSI indicator, so I will keep this description as brief as possible: the Relative Strength Index (RSI), developed by the legendary J. Welles Wilder, is a momentum oscillator that measures the speed and change of price movements; it oscillates between 0 and 100, with levels set as Overbought and Oversold. These levels are where a trader may look for a reversal; however, they must keep in mind that in an uptrend or bull market, the RSI tends to remain in the 40 - 90 range, and the 40 - 50 zone often acts as support. More advanced traders will also look for divergences between the price and the oscillator (i.e. price trending upward while the oscillator trends downward). As far as oscillators go, the RSI is one of the most frequently used, by both advanced and beginner traders alike.
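For readers who want to see the arithmetic behind a standard RSI, here is a minimal sketch in Python using Wilder's smoothing. It only illustrates the classic calculation summarized above; it is not this indicator's code, and the function name and defaults are hypothetical.

import numpy as np

def wilder_rsi(closes, period=14):
    """Classic RSI with Wilder's smoothing (illustrative sketch, not this indicator's code)."""
    closes = np.asarray(closes, dtype=float)
    deltas = np.diff(closes)
    gains = np.where(deltas > 0, deltas, 0.0)
    losses = np.where(deltas < 0, -deltas, 0.0)
    # Seed the running averages with a simple mean of the first `period` changes
    avg_gain = gains[:period].mean()
    avg_loss = losses[:period].mean()
    values = []
    for gain, loss in zip(gains[period:], losses[period:]):
        avg_gain = (avg_gain * (period - 1) + gain) / period
        avg_loss = (avg_loss * (period - 1) + loss) / period
        rs = avg_gain / avg_loss if avg_loss != 0 else float("inf")
        values.append(100.0 - 100.0 / (1.0 + rs))
    return values  # one RSI value per bar after the warm-up period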
Works great on multiple timeframes. It may not catch every rally, but it will catch most --even on smaller timeframes (i.e. 5 minutes in image below).
As with all of my scripts I like to make them customizable:
You can change the up and down colors on the RSI ribbons and the color and style (dotted shown) of Overbought / Oversold lines. In future versions, I will add more color customizations and additions.
Can toggle 1 or both of the 2 highlight signals off to make it a little more plain.
Lots of ways to make it look the way you'd like it to.
--The alerts include both the super accurate Bullish and Bearish signals shown with the background highlights. They are pre-filled so it will automatically display the price and time that the alert went off for you.
If I missed anything or you have a question, please let me know!
Cheers,
Mike
Please note: I have made this indicator invite only, send me a DM if you're interested in trying it out.
Financial Astrology Indexes ML Daily TrendDaily trend indicator based on financial astrology cycles detected with advanced machine learning techniques for some of the most important market indexes: DJI, UK100, SPX, IBC, IXIC, NI225, BANKNIFTY, NIFTY and the GLD fund (not an index) for Gold predictions. The daily price trend is forecasted through planetary cycles (angular aspects, speed phases, declination zone); fast cycles are based on the Moon, Mercury, Venus and the Sun, and mid-term cycles are based on Mars, Vesta and Ceres. The combination of all these cycles produces a daily price trend prediction that is encoded into a PineScript array using binary format "0 or 1", representing sell and buy signals respectively. The indicator provides signals from 2021-01-01 to 2022-12-31; the purpose of the past months' signals is to support backtesting of the indicator combined with other technical indicator entries like MAs, RSI or Stochastic. For predictions beyond 2022, a machine learning model re-training phase will be required.
When the signal moving average is increasing from 0 to 1, it indicates an increase in buy force; when it is decreasing from 1 to 0, it indicates an increase in sell force; finally, when it moves sideways around the 0.4-0.6 area, it predicts a period of buy/sell force equilibrium and trader indecision, which results in price congestion within a narrow price range.
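As an illustration of how a binary buy/sell signal series can be smoothed and read the way described above, here is a small Python sketch. The 0.4-0.6 equilibrium zone comes from the description; the sample values, smoothing length and helper name are hypothetical, and this is not the published PineScript code.

# Hypothetical daily signal array: 1 = buy, 0 = sell (values are illustrative only)
daily_signals = [0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0]

def sma(values, length):
    """Simple moving average over a plain Python list."""
    return [sum(values[i - length + 1:i + 1]) / length
            for i in range(length - 1, len(values))]

for value in sma(daily_signals, 4):
    if value > 0.6:
        state = "buy force dominating"
    elif value < 0.4:
        state = "sell force dominating"
    else:
        state = "equilibrium / likely price congestion"
    print(f"{value:.2f} -> {state}")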
We have also published the same indicator for our crypto-currencies research portfolio:
DISCLAIMER: This indicator is experimental and does not provide financial or investment advice; its main purpose is to demonstrate the predictive power of financial astrology. Any allocation of funds following the documented machine learning model prediction is a high-risk endeavour, and it is the user's responsibility to practice healthy risk management according to their situation.
Financial Astrology Crypto ML Daily TrendThis daily trend indicator is based on financial astrology cycles detected with advanced machine learning techniques for the crypto-currencies research portfolio: ADA, BAT, BNB, BTC, DASH, EOS, ETC, ETH, LINK, LTC, XLM, XMR, XRP, ZEC and ZRX. The daily price trend is forecasted through planetary cycles (angular aspects, speed, declination); fast cycles are based on the Moon, Mercury, Venus and the Sun, and mid-term cycles are based on Mars, Vesta and Ceres. The combination of all these cycles produces a daily price trend prediction that is encoded into a PineScript array using binary format "0 or 1", representing sell and buy signals respectively. The indicator provides signals from 2021-01-01 to 2022-12-31; the purpose of the past months' signals is to support backtesting of the indicator combined with other technical indicator entries like MAs, RSI or Stochastic. For predictions beyond 2022, a machine learning model re-training phase will be required.
The resolution of this indicator is 1D. You can tune a parameter that determines how many future bars of the daily trend are plotted, and adjust an hours shift to bring future signals into the current bar, producing a leading-indicator effect that anticipates trend changes by some hours. Combined with technical analysis indicators, this daily trend is very powerful because it can help produce approximately 60% profitable signals based on the backtesting results. You can look at our open-source Github repositories to validate accuracy using the backtesting strategies we have implemented in the Jesse Crypto Trading Framework as proof of concept of the predictive potential of this indicator. Alternatively, we have implemented a PineScript strategy that uses this indicator; just consider that a signals update for the period July 2021 to December 2022 is still pending. This strategy has accumulated more than 110 likes, and many traders have validated the predictive power of Financial Astrology.
DISCLAIMER: This indicator is experimental and does not provide financial or investment advice; its main purpose is to demonstrate the predictive power of financial astrology. Any allocation of funds following the documented machine learning model prediction is a high-risk endeavour, and it is the user's responsibility to practice healthy risk management according to their situation.
Machine Learning: kNN-based Strategy (update)kNN-based Strategy (FX and Crypto)
Description:
This update to the popular kNN-based strategy features:
improvements in the business logic,
an adjustable k value for the kNN model,
one more feature (MOM),
a streamlined signal filter and
some other minor fixes.
Now this script works in all timeframes!
I intentionally decided to publish this script separately
in order for the users to see the differences.
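For readers curious about the classification idea behind a kNN strategy like the one above, here is a minimal Python sketch of k-nearest-neighbours with an adjustable k. The feature vectors, labels and function name are illustrative assumptions; the actual script's features, distance metric and signal filter are not reproduced here.

import numpy as np

def knn_predict(train_features, train_labels, query, k=5):
    """Classify `query` by majority vote among its k nearest training points (Euclidean distance)."""
    train_features = np.asarray(train_features, dtype=float)
    distances = np.linalg.norm(train_features - np.asarray(query, dtype=float), axis=1)
    nearest = np.argsort(distances)[:k]
    votes = [train_labels[i] for i in nearest]
    return max(set(votes), key=votes.count)

# Toy feature vectors, e.g. (normalized momentum, normalized volume change) -- illustrative only
features = [(0.9, 0.8), (0.8, 0.7), (0.1, 0.2), (0.2, 0.1)]
labels = ["long", "long", "short", "short"]
print(knn_predict(features, labels, (0.85, 0.75), k=3))  # -> "long"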
Machine Learning: LVQ-based StrategyLVQ-based Strategy (FX and Crypto)
Description:
Learning Vector Quantization (LVQ) can be understood as a special case of an artificial neural network; more precisely, it applies a winner-take-all, learning-based approach. It is a prototype-based, supervised classification method that trains its weights through a competitive learning algorithm (a minimal sketch follows the step list below).
Algorithm:
Initialize weights
Train for 1 to N number of epochs
- Select a training example
- Compute the winning vector
- Update the winning vector
Classify test sample
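Here is a minimal Python sketch of the LVQ1 training loop outlined in the steps above: initialize prototypes, pick a training sample, find the winning vector, update it, then classify a test sample. The function names, learning rate, epoch count and toy data are illustrative assumptions, not the published PineScript implementation.

import numpy as np

def train_lvq(samples, labels, prototypes, proto_labels, lrate=0.1, epochs=20):
    """LVQ1 training: pull the winning prototype toward same-label samples, push it away otherwise."""
    prototypes = np.asarray(prototypes, dtype=float).copy()
    samples = np.asarray(samples, dtype=float)
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            winner = int(np.argmin(np.linalg.norm(prototypes - x, axis=1)))
            direction = 1.0 if proto_labels[winner] == y else -1.0
            prototypes[winner] += direction * lrate * (x - prototypes[winner])
    return prototypes

def lvq_classify(prototypes, proto_labels, x):
    """Assign the test sample the label of its nearest prototype."""
    dists = np.linalg.norm(np.asarray(prototypes, dtype=float) - np.asarray(x, dtype=float), axis=1)
    return proto_labels[int(np.argmin(dists))]

# Toy 2-D data: one prototype per class (all values illustrative)
protos = train_lvq(samples=[[0.2, 0.1], [0.9, 0.8]], labels=["sell", "buy"],
                   prototypes=[[0.5, 0.5], [0.6, 0.6]], proto_labels=["sell", "buy"])
print(lvq_classify(protos, ["sell", "buy"], [0.85, 0.9]))  # -> "buy"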
The LVQ algorithm offers a framework to easily test various indicators to see if they have any *predictive value*. One can easily add cog, wpr and others.
Note: TradingView's playback feature helps to see this strategy in action. The algo is tested with BTCUSD/1Hour.
Warning: This is a preliminary version! Signals ARE repainting.
***Warning***: Signals LARGELY depend on hyperparams (lrate and epochs).
Style tags: Trend Following, Trend Analysis
Asset class: Equities, Futures, ETFs, Currencies and Commodities
Dataset: FX Minutes/Hours+++/Days
Machine Learning: Logistic RegressionMulti-timeframe Strategy based on Logistic Regression algorithm
Description:
This strategy uses a classic machine learning algorithm that came from statistics - Logistic Regression (LR).
The first and most important thing about logistic regression is that it is not a 'Regression' but a 'Classification' algorithm. The name itself is somewhat misleading. Regression gives a continuous numeric output, but most of the time we need the output in classes (i.e. categorical, discrete). For example, we want to classify emails into 'spam' or 'not spam', classify a treatment into 'success' or 'failure', classify a statement into 'right' or 'wrong', classify election data into 'fraudulent vote' or 'non-fraudulent vote', classify a market move into 'long' or 'short' and so on. These are examples of logistic regression having a binary output (also called dichotomous).
You can also think of logistic regression as a special case of linear regression when the outcome variable is categorical, where we are using log of odds as dependent variable. In simple words, it predicts the probability of occurrence of an event by fitting data to a logit function.
Basically, the theory behind Logistic Regression is very similar to the one from Linear Regression, where we seek to draw a best-fitting line over data points; but in Logistic Regression we don't directly fit a straight line to our data as in linear regression. Instead, we fit an S-shaped curve, called the sigmoid, to our observations, one that best SEPARATES the data points. Technically speaking, the main goal of building the model is to find the parameters (weights) using gradient descent.
In this script the LR algorithm is retrained on each new bar trying to classify it into one of the two categories. This is done via the logistic_regression function by updating the weights w in the loop that continues for iterations number of times. In the end the weights are passed through the sigmoid function, yielding a prediction.
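As a rough illustration of the idea described above (a sigmoid applied to a weighted sum, with gradient-descent weight updates over a fixed number of iterations), here is a generic Python sketch of logistic regression. It mirrors the concept, not the script's actual PineScript logistic_regression function, and the function name, parameters and toy data are assumptions.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, lrate=0.1, iterations=1000):
    """Gradient-descent fit of logistic regression weights; returns weights and fitted probabilities."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    w = np.zeros(X.shape[1])      # start from zero weights
    for _ in range(iterations):
        p = sigmoid(X @ w)        # current predicted probabilities
        gradient = X.T @ (p - y) / len(y)
        w -= lrate * gradient     # step against the gradient of the log-loss
    return w, sigmoid(X @ w)

# Toy example: first column is a bias term, second a single feature (illustrative only)
X = [[1, -2.0], [1, -1.0], [1, 1.0], [1, 2.0]]
y = [0, 0, 1, 1]
weights, probs = fit_logistic(X, y)
print(np.round(probs, 2))  # probabilities rise with the feature value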
Mind that some assets require modifying the script's input parameters. For instance, when used with BTCUSD and USDJPY, the 'Normalization Lookback' parameter should be set down to 4 (2,...,5..), and optionally the 'Use Price Data for Signal Generation?' parameter should be checked. The defaults were tested with EURUSD.
Note: TradingView's playback feature helps to see this strategy in action.
Warning: Signals ARE repainting.
Style tags: Trend Following, Trend Analysis
Asset class: Equities, Futures, ETFs, Currencies and Commodities
Dataset: FX Minutes/Hours/Days
Machine Learning: Perceptron-based strategyPerceptron-based strategy
Description:
The Learning Perceptron is the simplest possible artificial neural network (ANN), consisting of just a single neuron and capable of learning a certain class of binary classification problems. The idea behind ANNs is that by selecting good values for the weight parameters (and the bias), the ANN can model the relationships between the inputs and some target.
Generally, ANN neurons receive a number of inputs, weight each of those inputs, sum the weighted inputs, and then transform that sum using a special function called an activation function. The output of that activation function is then either used as the prediction (in a single neuron model) or is combined with the outputs of other neurons for further use in more complex models.
The purpose of the activation function is to take the input signal (that’s the weighted sum of the inputs and the bias) and turn it into an output signal. Think of this activation function as firing (activating) the neuron when it returns 1, and doing nothing when it returns 0. This sort of computation is accomplished with a function called step function: f(z) = {1 if z > 0 else 0}. This function then transforms any weighted sum of the inputs and converts it into a binary output (either 1 or 0). The trick to making this useful is finding (learning) a set of weights that lead to good predictions using this activation function.
Training our perceptron is simply a matter of initializing the weights to zero (or a random value) and then implementing the perceptron learning rule, which just updates the weights based on the error of each observation with the current weights. This has the effect of moving the classifier's decision boundary in the direction that would have helped it classify the last observation correctly. This is achieved via a for loop which iterates over each observation, making a prediction for it, calculating the error of that prediction and then updating the weights accordingly. In this way, the weights are gradually updated until they converge. Each sweep through the training data is called an epoch.
In this script the perceptron is retrained on each new bar trying to classify this bar by drawing the moving average curve above or below the bar.
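Here is a minimal Python sketch of the perceptron learning rule described above: a step activation, weights initialized at zero, and per-observation weight updates over several epochs. It is a generic illustration under those assumptions, not the script's PineScript code, and the toy data and parameter names are hypothetical.

import numpy as np

def step(z):
    """Step activation: 1 if z > 0 else 0."""
    return 1 if z > 0 else 0

def train_perceptron(X, y, lrate=0.1, epochs=10):
    """Perceptron learning rule: adjust weights and bias on every misclassified observation."""
    X = np.asarray(X, dtype=float)
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):                      # each full sweep over the data is one epoch
        for xi, target in zip(X, y):
            error = target - step(w @ xi + b)    # 0 when correct, +1/-1 when wrong
            w += lrate * error * xi
            b += lrate * error
    return w, b

# Toy linearly separable data (illustrative only): class 1 when the feature values are large
X = [[0.0, 0.1], [0.2, 0.1], [0.8, 0.9], [1.0, 0.8]]
y = [0, 0, 1, 1]
w, b = train_perceptron(X, y)
print([step(w @ np.asarray(xi) + b) for xi in X])  # reproduces y after training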
This script was tested with BTCUSD, USDJPY, and EURUSD.
Note: TradingView's playback feature helps to see this strategy in action.
Warning: Signals ARE repainting.
Style tags: Trend Following, Trend Analysis
Asset class: Equities, Futures, ETFs, Currencies and Commodities
Dataset: FX Minutes/Hours+/Days