Volatility Targeting: Single Asset [BackQuant]
An educational example that demonstrates how volatility targeting can scale exposure up or down on one symbol, then applies a simple EMA cross for long or short direction and a higher timeframe style regime filter to gate risk. It builds a synthetic equity curve and compares it to a buy and hold benchmark curve.
Important disclaimer
This script is a concept and educational example only. It is not a complete trading system and it is not meant for live execution. It does not model many real world constraints, and its equity curve is only a simplified simulation. If you want to trade any idea like this, you need a proper strategy() implementation, realistic execution assumptions, and robust backtesting with out of sample validation.
Single asset vs the full portfolio concept
This indicator is the single asset, long short version of the broader volatility targeted momentum portfolio concept. The original multi asset concept and full portfolio implementation is here:
That portfolio script is about allocating across multiple assets with a portfolio view. This script is intentionally simpler and focuses on one symbol so you can clearly see how volatility targeting behaves, how the scaling interacts with trend direction, and what an equity curve comparison looks like.
What this indicator is trying to demonstrate
Volatility targeting is a risk scaling framework. The core idea is simple:
If realized volatility is low relative to a target, you can scale position size up so the strategy behaves like it has a stable risk budget.
If realized volatility is high relative to a target, you scale down to avoid getting blown around by the market.
Instead of always being 1x long or 1x short, exposure becomes dynamic. This is often used in risk parity style systems, trend following overlays, and volatility controlled products.
This script combines that risk scaling with a simple trend direction model:
Fast and slow EMA cross determines whether the strategy is long or short.
A second, longer EMA cross acts as a regime filter that decides whether the system is ACTIVE or effectively in CASH.
An equity curve is built from the scaled returns so you can visualize how the framework behaves across regimes.
How the logic works step by step
1) Returns and simple momentum
The script uses log returns for the base return stream:
ret = log(price / price[1])
It also computes a simple momentum value:
mom = price / price[lookback] - 1
In this version, momentum is mainly informational since the directional signal is the EMA cross. The lookback input is shared with volatility estimation to keep the concept compact.
2) Realized volatility estimation
Realized volatility is estimated as the standard deviation of returns over the lookback window, then annualized:
vol = stdev(ret, lookback) * sqrt(tradingdays)
The Trading Days/Year input controls annualization:
252 is typical for traditional markets.
365 is typical for crypto since it trades daily.
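As a rough illustration, a minimal Pine v5 sketch of this return and realized volatility calculation might look like the following. The input names mirror the ones described above, but the published script's exact implementation may differ.

//@version=5
indicator("Realized Vol Sketch", overlay=false)
lookback    = input.int(20, "Lookback")
tradingdays = input.int(252, "Trading Days/Year")   // 365 is typical for crypto
// Per bar log return and simple momentum over the lookback
ret = math.log(close / close[1])
mom = close / close[lookback] - 1
// Realized volatility: stdev of returns over the window, annualized
vol = ta.stdev(ret, lookback) * math.sqrt(tradingdays)
plot(vol, "Annualized Realized Vol")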
3) Volatility targeting multiplier
Once realized vol is estimated, the script computes a scaling factor that tries to push realized volatility toward the target:
volMult = targetVol / vol
This is then clamped into a reasonable range:
Minimum 0.1 so exposure never goes to zero just because vol spikes.
Maximum 5.0 so exposure is not allowed to lever infinitely during ultra low volatility periods.
This clamp is one of the most important “sanity rails” in any volatility targeted system. Without it, very low volatility regimes can create unrealistic leverage.
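A hedged sketch of the clamped multiplier, assuming a targetVol input and the 0.1 to 5.0 rails described above:

//@version=5
indicator("Vol Target Multiplier Sketch")
lookback    = input.int(20, "Lookback")
tradingdays = input.int(252, "Trading Days/Year")
targetVol   = input.float(0.20, "Target Volatility (annualized)")
ret = math.log(close / close[1])
vol = ta.stdev(ret, lookback) * math.sqrt(tradingdays)
// Scale exposure toward the target, clamped to the sanity rails
rawMult = vol > 0 ? targetVol / vol : 1.0
volMult = math.min(math.max(rawMult, 0.1), 5.0)
plot(volMult, "Vol Multiplier")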
4) Scaled return stream
The per bar return used for the equity curve is the raw return multiplied by the volatility multiplier:
sr = ret * volMult
Think of this as the return you would have earned if you scaled exposure to match the volatility budget.
5) Long short direction via EMA cross
Direction is determined by a fast and slow EMA cross on price:
If fast EMA is above slow EMA, direction is long.
If fast EMA is below slow EMA, direction is short.
This produces dir as either +1 or -1. The scaled return stream is then signed by direction:
avgRet = dir * sr
So the strategy return is volatility targeted and directionally flipped depending on trend.
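To make the signing explicit, here is a small sketch; the EMA lengths are placeholders rather than the script's defaults, and sr stands in for the volatility scaled return from the previous step:

//@version=5
indicator("Direction Sketch")
fastLen = input.int(20, "Fast EMA")
slowLen = input.int(50, "Slow EMA")
// +1 when the fast EMA is above the slow EMA, -1 otherwise
dir = ta.ema(close, fastLen) > ta.ema(close, slowLen) ? 1 : -1
// Sign the scaled return stream by trend direction
ret = math.log(close / close[1])
sr  = ret                 // placeholder for ret * volMult
avgRet = dir * sr
plot(avgRet, "Directional Scaled Return")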
6) Regime filter: ACTIVE vs CASH
A second EMA pair acts as a top level regime filter:
If fast regime EMA is above slow regime EMA, the system is ACTIVE.
If fast regime EMA is below slow regime EMA, the system is considered CASH, meaning it does not compound equity.
This is designed to reduce participation in long bear phases or low quality environments, depending on how you set the regime lengths. By default it is a classic 50 and 200 EMA cross structure.
An important detail: the script applies the prior bar's regime state (regime_filter[1]) when compounding equity, which avoids ambiguous same bar updates.
7) Equity curve construction
The script builds a synthetic equity curve starting from Initial Capital after the Start Date. Each bar:
If regime was ACTIVE on the previous bar, equity compounds by (1 + netRet).
If regime was CASH, equity stays flat.
Fees are modeled very simply as a per bar penalty on returns:
netRet = avgRet - (fee_rate * avgRet)
This is not realistic execution modeling; it is just a simple turnover penalty knob to show how friction can reduce compounded performance. Real backtesting should model trade based costs, spreads, funding, and slippage.
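A compact sketch of this compounding rule, using placeholder signals and a fixed 50/200 EMA regime; the fee input and defaults are assumptions, not the script's actual values:

//@version=5
indicator("Equity Curve Sketch")
initialCapital = input.float(100000, "Initial Capital")
feeRate        = input.float(0.0005, "Per Bar Fee Rate")
ret    = math.log(close / close[1])
avgRet = ret                          // stands in for dir * ret * volMult
netRet = avgRet - feeRate * avgRet    // simple turnover penalty
// Regime filter: classic 50/200 EMA cross
regimeOn = nz(ta.ema(close, 50), close) > nz(ta.ema(close, 200), close)
// Compound only while the prior bar's regime was ACTIVE; stay flat in CASH
var float equity = na
equity := na(equity) ? initialCapital : regimeOn[1] ? equity * (1 + nz(netRet)) : equity
plot(equity, "Synthetic Equity")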
Benchmark and buy and hold comparison
The script pulls a benchmark symbol via request.security and builds a buy and hold equity curve starting from the same date and initial capital. The buy and hold curve is based on benchmark price appreciation, not the strategy’s asset price, so you can compare:
Strategy equity on the chart symbol.
Buy and hold equity for the selected benchmark instrument.
By default the benchmark is TVC:SPX, but you can set it to anything; for crypto you might set it to BTC, a sector index, or a dominance proxy depending on your study.
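A hedged sketch of the buy and hold comparison; the symbol, start date, and capital defaults are illustrative:

//@version=5
indicator("Buy and Hold Benchmark Sketch")
benchSym       = input.symbol("TVC:SPX", "Benchmark")
initialCapital = input.float(100000, "Initial Capital")
startTime      = input.time(timestamp("01 Jan 2023 00:00 +0000"), "Start Date")
benchClose = request.security(benchSym, timeframe.period, close)
// Lock in the benchmark price at the start date, then track price appreciation
var float benchStart = na
if time >= startTime and na(benchStart)
    benchStart := benchClose
bhEquity = na(benchStart) ? na : initialCapital * benchClose / benchStart
plot(bhEquity, "Buy and Hold Equity")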
What it plots
If enabled, the indicator plots:
Strategy Equity as a line, colored by the recent direction of equity change, using Positive Equity Color and Negative Equity Color.
Buy and Hold Equity for the chosen benchmark as a line.
Optional labels that tag each curve on the right side of the chart.
This makes it easy to visually see when volatility targeting and regime gating change the shape of the equity curve relative to a simple passive hold.
Metrics table explained
If Show Metrics Table is enabled, a table is built and populated with common performance statistics based on the simulated daily returns of the strategy equity curve after the start date. These include:
Net Profit (%): total return relative to initial capital.
Max DD (%): maximum drawdown computed from equity peaks tracked over time.
Win Rate: percent of positive return bars.
Annual Mean Returns (% p/y): mean daily return annualized.
Annual Stdev Returns (% p/y): volatility of daily returns annualized.
Variance: variance of the annualized returns.
Sortino Ratio: annualized return divided by downside deviation, using the stdev of negative returns.
Sharpe Ratio: risk adjusted return using the risk free rate input.
Omega Ratio: positive return sum divided by negative return sum.
Gain to Pain: total return sum divided by absolute loss sum.
CAGR (% p/y): compounded annual growth rate based on time since the start date.
Portfolio Alpha (% p/y): alpha versus the benchmark using beta and the benchmark mean.
Portfolio Beta: covariance of strategy returns with benchmark returns divided by benchmark variance.
Skewness of Returns: despite the name, the script computes a conditional value based on the lower 5 percent tail of returns, so it behaves more like a simple CVaR style tail loss estimate than classic skewness.
Important note: these are calculated from the synthetic equity stream in an indicator context. They are useful for concept exploration, but they are not a substitute for professional backtesting where trade timing, fills, funding, and leverage constraints are accurately represented.
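For orientation, a simplified sketch of a few of these statistics; it uses a rolling window and simple returns, whereas the script accumulates statistics from the start date onward, so treat it as an approximation:

//@version=5
indicator("Metrics Sketch")
tradingdays = input.int(252, "Trading Days/Year")
rfRate      = input.float(0.03, "Risk Free Rate (annual)")
window      = input.int(252, "Stats Window")
dailyRet = close / close[1] - 1          // stand-in for the strategy's per bar return
annMean  = ta.sma(dailyRet, window) * tradingdays
annStdev = ta.stdev(dailyRet, window) * math.sqrt(tradingdays)
sharpe   = annStdev > 0 ? (annMean - rfRate) / annStdev : na
// Max drawdown from a running equity peak
var float equity = 1.0
equity := equity * (1 + nz(dailyRet))
var float peak = 1.0
peak := math.max(peak, equity)
var float maxDD = 0.0
maxDD := math.max(maxDD, (peak - equity) / peak)
plot(sharpe, "Sharpe (rolling)")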
How to interpret the system conceptually
Vol targeting effect
When volatility rises, volMult falls, so the strategy de risks and the equity curve typically becomes smoother. When volatility compresses, volMult rises, so the system takes more exposure and tries to maintain a stable risk budget.
This is why volatility targeting is often used as a “risk equalizer”: it can reduce the “biggest drawdowns happen only because vol expanded” problem, at the cost of potentially under participating in explosive upside if volatility rises during a trend.
Long short directional effect
Because direction is an EMA cross:
In strong trends, the direction stays stable and the scaled return stream compounds in that trend direction.
In choppy ranges, the EMA cross can flip and create whipsaws, which is where fees and regime filtering matter most.
Regime filter effect
The 50 and 200 style filter tries to:
Keep the system active in sustained up regimes.
Reduce exposure during long down regimes or extended weakness.
It will always be late at turning points, by design. It is a slow filter meant to reduce deep participation, not to catch bottoms.
Common applications
This script is mainly for understanding and research, but conceptually, volatility targeting overlays are used for:
Risk budgeting: normalize risk so your exposure is not accidentally huge in high vol regimes.
System comparison: see how a simple trend model behaves with and without vol scaling.
Parameter exploration: test how target volatility, lookback length, and regime lengths change the shape of equity and drawdowns.
Framework building: a reference blueprint before implementing a proper strategy() version with trade based execution logic.
Tuning guidance
Lookback: lower values react faster to vol shifts but can create unstable scaling; higher values smooth scaling but react slower to regime changes.
Target volatility: higher targets increase exposure and drawdown potential; lower targets reduce exposure and usually lower drawdowns, but can under perform in strong trends.
Signal EMAs: tighter EMAs increase trade frequency; wider EMAs reduce churn but react slower.
Regime EMAs: slower regime filters reduce false toggles but will miss early trend transitions.
Fees: if you crank this up you will see how sensitive higher turnover parameter sets are to friction.
Final note
This is a compact educational demonstration of a volatility targeted, long short single asset framework with a regime gate and a synthetic equity curve. If you want a production ready implementation, the correct next step is to convert this concept into a strategy() script, add realistic execution and cost modeling, test across multiple timeframes and market regimes, and validate out of sample before making any decision based on the results.
Dynamic Equity Allocation Model
"Cash is Trash"? Not Always. Here's Why Science Beats Guesswork.
Every retail trader knows the frustration: you draw support and resistance lines, you spot patterns, you follow market gurus on social media—and still, when the next bear market hits, your portfolio bleeds red. Meanwhile, institutional investors seem to navigate market turbulence with ease, preserving capital when markets crash and participating when they rally. What's their secret?
The answer isn't insider information or access to exotic derivatives. It's systematic, scientifically validated decision-making. While most retail traders rely on subjective chart analysis and emotional reactions, professional portfolio managers use quantitative models that remove emotion from the equation and process multiple streams of market information simultaneously.
This document presents exactly such a system—not a proprietary black box available only to hedge funds, but a fully transparent, academically grounded framework that any serious investor can understand and apply. The Dynamic Equity Allocation Model (DEAM) synthesizes decades of financial research from Nobel laureates and leading academics into a practical tool for tactical asset allocation.
Stop drawing colorful lines on your chart and start thinking like a quant. This isn't about predicting where the market goes next week—it's about systematically adjusting your risk exposure based on what the data actually tells you. When valuations scream danger, when volatility spikes, when credit markets freeze, when multiple warning signals align—that's when cash isn't trash. That's when cash saves your portfolio.
The irony of "cash is trash" rhetoric is that it ignores timing. Yes, being 100% cash for decades would be disastrous. But being 100% equities through every crisis is equally foolish. The sophisticated approach is dynamic: aggressive when conditions favor risk-taking, defensive when they don't. This model shows you how to make that decision systematically, not emotionally.
Whether you're managing your own retirement portfolio or seeking to understand how institutional allocation strategies work, this comprehensive analysis provides the theoretical foundation, mathematical implementation, and practical guidance to elevate your investment approach from amateur to professional.
The choice is yours: keep hoping your chart patterns work out, or start using the same quantitative methods that professionals rely on. The tools are here. The research is cited. The methodology is explained. All you need to do is read, understand, and apply.
The Dynamic Equity Allocation Model (DEAM) is a quantitative framework for systematic allocation between equities and cash, grounded in modern portfolio theory and empirical market research. The model integrates five scientifically validated dimensions of market analysis—market regime, risk metrics, valuation, sentiment, and macroeconomic conditions—to generate dynamic allocation recommendations ranging from 0% to 100% equity exposure. This work documents the theoretical foundations, mathematical implementation, and practical application of this multi-factor approach.
1. Introduction and Theoretical Background
1.1 The Limitations of Static Portfolio Allocation
Traditional portfolio theory, as formulated by Markowitz (1952) in his seminal work "Portfolio Selection," assumes an optimal static allocation where investors distribute their wealth across asset classes according to their risk aversion. This approach rests on the assumption that returns and risks remain constant over time. However, empirical research demonstrates that this assumption does not hold in reality. Fama and French (1989) showed that expected returns vary over time and correlate with macroeconomic variables such as the spread between long-term and short-term interest rates. Campbell and Shiller (1988) demonstrated that the price-earnings ratio possesses predictive power for future stock returns, providing a foundation for dynamic allocation strategies.
The academic literature on tactical asset allocation has evolved considerably over recent decades. Ilmanen (2011) argues in "Expected Returns" that investors can improve their risk-adjusted returns by considering valuation levels, business cycles, and market sentiment. The Dynamic Equity Allocation Model presented here builds on this research tradition and operationalizes these insights into a practically applicable allocation framework.
1.2 Multi-Factor Approaches in Asset Allocation
Modern financial research has shown that different factors capture distinct aspects of market dynamics and together provide a more robust picture of market conditions than individual indicators. Ross (1976) developed the Arbitrage Pricing Theory, a model that employs multiple factors to explain security returns. Following this multi-factor philosophy, DEAM integrates five complementary analytical dimensions, each tapping different information sources and collectively enabling comprehensive market understanding.
2. Data Foundation and Data Quality
2.1 Data Sources Used
The model draws its data exclusively from publicly available market data via the TradingView platform. This transparency and accessibility is a significant advantage over proprietary models that rely on non-public data. The data foundation encompasses several categories of market information, each capturing specific aspects of market dynamics.
First, price data for the S&P 500 Index is obtained through the SPDR S&P 500 ETF (ticker: SPY). The use of a highly liquid ETF instead of the index itself has practical reasons, as ETF data is available in real-time and reflects actual tradability. In addition to closing prices, high, low, and volume data are captured, which are required for calculating advanced volatility measures.
Fundamental corporate metrics are retrieved via TradingView's Financial Data API. These include earnings per share, price-to-earnings ratio, return on equity, debt-to-equity ratio, dividend yield, and share buyback yield. Cochrane (2011) emphasizes in "Presidential Address: Discount Rates" the central importance of valuation metrics for forecasting future returns, making these fundamental data a cornerstone of the model.
Volatility indicators are represented by the CBOE Volatility Index (VIX) and related metrics. The VIX, often referred to as the market's "fear gauge," measures the implied volatility of S&P 500 index options and serves as a proxy for market participants' risk perception. Whaley (2000) describes in "The Investor Fear Gauge" the construction and interpretation of the VIX and its use as a sentiment indicator.
Macroeconomic data includes yield curve information through US Treasury bonds of various maturities and credit risk premiums through the spread between high-yield bonds and risk-free government bonds. These variables capture the macroeconomic conditions and financing conditions relevant for equity valuation. Estrella and Hardouvelis (1991) showed that the shape of the yield curve has predictive power for future economic activity, justifying the inclusion of these data.
2.2 Handling Missing Data
A practical problem when working with financial data is dealing with missing or unavailable values. The model implements a fallback system where a plausible historical average value is stored for each fundamental metric. When current data is unavailable for a specific point in time, this fallback value is used. This approach ensures that the model remains functional even during temporary data outages and avoids systematic biases from missing data. The use of average values as fallback is conservative, as it generates neither overly optimistic nor pessimistic signals.
3. Component 1: Market Regime Detection
3.1 The Concept of Market Regimes
The idea that financial markets exist in different "regimes" or states that differ in their statistical properties has a long tradition in financial science. Hamilton (1989) developed regime-switching models that allow distinguishing between different market states with different return and volatility characteristics. The practical application of this theory consists of identifying the current market state and adjusting portfolio allocation accordingly.
DEAM classifies market regimes using a scoring system that considers three main dimensions: trend strength, volatility level, and drawdown depth. This multidimensional view is more robust than focusing on individual indicators, as it captures various facets of market dynamics. Classification occurs into six distinct regimes: Strong Bull, Bull Market, Neutral, Correction, Bear Market, and Crisis.
3.2 Trend Analysis Through Moving Averages
Moving averages are among the oldest and most widely used technical indicators and have also received attention in academic literature. Brock, Lakonishok, and LeBaron (1992) examined in "Simple Technical Trading Rules and the Stochastic Properties of Stock Returns" the profitability of trading rules based on moving averages and found evidence for their predictive power, although later studies questioned the robustness of these results when considering transaction costs.
The model calculates three moving averages with different time windows: a 20-day average (approximately one trading month), a 50-day average (approximately one quarter), and a 200-day average (approximately one trading year). The relationship of the current price to these averages and the relationship of the averages to each other provide information about trend strength and direction. When the price trades above all three averages and the short-term average is above the long-term, this indicates an established uptrend. The model assigns points based on these constellations, with longer-term trends weighted more heavily as they are considered more persistent.
3.3 Volatility Regimes
Volatility, understood as the standard deviation of returns, is a central concept of financial theory and serves as the primary risk measure. However, research has shown that volatility is not constant but changes over time and occurs in clusters—a phenomenon first documented by Mandelbrot (1963) and later formalized through ARCH and GARCH models (Engle, 1982; Bollerslev, 1986).
DEAM calculates volatility not only through the classic method of return standard deviation but also uses more advanced estimators such as the Parkinson estimator and the Garman-Klass estimator. These methods utilize intraday information (high and low prices) and are more efficient than simple close-to-close volatility estimators. The Parkinson estimator (Parkinson, 1980) uses the range between high and low of a trading day and is based on the recognition that this information reveals more about true volatility than just the closing price difference. The Garman-Klass estimator (Garman and Klass, 1980) extends this approach by additionally considering opening and closing prices.
The calculated volatility is annualized by multiplying it by the square root of 252 (the average number of trading days per year), enabling standardized comparability. The model compares current volatility with the VIX, the implied volatility from option prices. A low VIX (below 15) signals market comfort and increases the regime score, while a high VIX (above 35) indicates market stress and reduces the score. This interpretation follows the empirical observation that elevated volatility is typically associated with falling markets (Schwert, 1989).
3.4 Drawdown Analysis
A drawdown refers to the percentage decline from the highest point (peak) to the lowest point (trough) during a specific period. This metric is psychologically significant for investors as it represents the maximum loss experienced. Calmar (1991) developed the Calmar Ratio, which relates return to maximum drawdown, underscoring the practical relevance of this metric.
The model calculates current drawdown as the percentage distance from the highest price of the last 252 trading days (one year). A drawdown below 3% is considered negligible and maximally increases the regime score. As drawdown increases, the score decreases progressively, with drawdowns above 20% classified as severe and indicating a crisis or bear market regime. These thresholds are empirically motivated by historical market cycles, in which corrections typically encompassed 5-10% drawdowns, bear markets 20-30%, and crises over 30%.
3.5 Regime Classification
Final regime classification occurs through aggregation of scores from trend (40% weight), volatility (30%), and drawdown (30%). The higher weighting of trend reflects the empirical observation that trend-following strategies have historically delivered robust results (Moskowitz, Ooi, and Pedersen, 2012). A total score above 80 signals a strong bull market with established uptrend, low volatility, and minimal losses. At a score below 10, a crisis situation exists requiring defensive positioning. The six regime categories enable a differentiated allocation strategy that not only distinguishes binarily between bullish and bearish but allows gradual gradations.
4. Component 2: Risk-Based Allocation
4.1 Volatility Targeting as Risk Management Approach
The concept of volatility targeting is based on the idea that investors should maximize not returns but risk-adjusted returns. Sharpe (1966, 1994) defined with the Sharpe Ratio the fundamental concept of return per unit of risk, measured as volatility. Volatility targeting goes a step further and adjusts portfolio allocation to achieve constant target volatility. This means that in times of low market volatility, equity allocation is increased, and in times of high volatility, it is reduced.
Moreira and Muir (2017) showed in "Volatility-Managed Portfolios" that strategies that adjust their exposure based on volatility forecasts achieve higher Sharpe Ratios than passive buy-and-hold strategies. DEAM implements this principle by defining a target portfolio volatility (default 12% annualized) and adjusting equity allocation to achieve it. The mathematical foundation is simple: if market volatility is 20% and target volatility is 12%, equity allocation should be 60% (12/20 = 0.6), with the remaining 40% held in cash with zero volatility.
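As a minimal sketch of this sizing rule in Pine v5 (the 20 day window and 252 day annualization are assumptions for illustration):

//@version=5
indicator("Volatility Targeting Allocation Sketch")
targetVol = input.float(0.12, "Target Portfolio Vol (annualized)")
ret       = math.log(close / close[1])
marketVol = ta.stdev(ret, 20) * math.sqrt(252)
// Equity weight that would bring portfolio vol to the target, capped at fully invested
equityWeight = marketVol > 0 ? math.min(targetVol / marketVol, 1.0) : 1.0
plot(equityWeight * 100, "Equity Allocation %")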
4.2 Market Volatility Calculation
Estimating current market volatility is central to the risk-based allocation approach. The model uses several volatility estimators in parallel and selects the higher value between traditional close-to-close volatility and the Parkinson estimator. This conservative choice ensures the model does not underestimate true volatility, which could lead to excessive risk exposure.
Traditional volatility calculation uses logarithmic returns, as these have mathematically advantageous properties (additive linkage over multiple periods). The logarithmic return is calculated as ln(P_t / P_{t-1}), where P_t is the price at time t. The standard deviation of these returns over a rolling 20-trading-day window is then multiplied by √252 to obtain annualized volatility. This annualization is based on the assumption of independently identically distributed returns, which is an idealization but widely accepted in practice.
The Parkinson estimator uses additional information from the trading range (High minus Low) of each day. The formula is: σ_P = (1/√(4ln2)) × √(1/n × Σln²(H_i/L_i)) × √252, where H_i and L_i are high and low prices. Under ideal conditions, this estimator is approximately five times more efficient than the close-to-close estimator (Parkinson, 1980), as it uses more information per observation.
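A sketch of the Parkinson estimator alongside the close-to-close estimator, following the formulas above (the window length is illustrative):

//@version=5
indicator("Parkinson Volatility Sketch")
n = input.int(20, "Window")
// Parkinson (1980): uses the daily high-low range instead of close-to-close moves
hlTerm  = math.pow(math.log(high / low), 2)
parkAnn = math.sqrt(ta.sma(hlTerm, n) / (4 * math.log(2))) * math.sqrt(252)
// Conservative choice described in the text: take the larger of the two estimators
ccAnn = ta.stdev(math.log(close / close[1]), n) * math.sqrt(252)
vol   = math.max(parkAnn, ccAnn)
plot(vol, "Annualized Volatility")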
4.3 Drawdown-Based Position Size Adjustment
In addition to volatility targeting, the model implements drawdown-based risk control. The logic is that deep market declines often signal further losses and therefore justify exposure reduction. This behavior corresponds with the concept of path-dependent risk tolerance: investors who have already suffered losses are typically less willing to take additional risk (Kahneman and Tversky, 1979).
The model defines a maximum portfolio drawdown as a target parameter (default 15%). Since portfolio volatility and portfolio drawdown are proportional to equity allocation (assuming cash has neither volatility nor drawdown), allocation-based control is possible. For example, if the market exhibits a 25% drawdown and target portfolio drawdown is 15%, equity allocation should be at most 60% (15/25).
4.4 Dynamic Risk Adjustment
An advanced feature of DEAM is dynamic adjustment of risk-based allocation through a feedback mechanism. The model continuously estimates what actual portfolio volatility and portfolio drawdown would result at the current allocation. If risk utilization (ratio of actual to target risk) exceeds 1.0, allocation is reduced by an adjustment factor that grows exponentially with overutilization. This implements a form of dynamic feedback that avoids overexposure.
Mathematically, a risk adjustment factor r_adjust is calculated: if risk utilization u > 1, then r_adjust = exp(-0.5 × (u - 1)). This exponential function ensures that moderate overutilization is gently corrected, while strong overutilization triggers drastic reductions. The factor 0.5 in the exponent was empirically calibrated to achieve a balanced ratio between sensitivity and stability.
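A sketch of this feedback rule with a fixed example allocation; the numbers are placeholders:

//@version=5
indicator("Risk Utilization Adjustment Sketch")
targetVol = input.float(0.12, "Target Portfolio Vol")
ret       = math.log(close / close[1])
marketVol = ta.stdev(ret, 20) * math.sqrt(252)
allocation   = 0.60                     // example equity weight
portfolioVol = allocation * marketVol   // cash assumed to contribute zero volatility
u = targetVol > 0 ? portfolioVol / targetVol : 0.0
// Gentle correction for moderate overuse, drastic reduction for strong overuse
rAdjust = u > 1 ? math.exp(-0.5 * (u - 1)) : 1.0
plot(allocation * rAdjust * 100, "Adjusted Allocation %")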
5. Component 3: Valuation Analysis
5.1 Theoretical Foundations of Fundamental Valuation
DEAM's valuation component is based on the fundamental premise that the intrinsic value of a security is determined by its future cash flows and that deviations between market price and intrinsic value are eventually corrected. Graham and Dodd (1934) established in "Security Analysis" the basic principles of fundamental analysis that remain relevant today. Translated into modern portfolio context, this means that markets with high valuation metrics (high price-earnings ratios) should have lower expected returns than cheaply valued markets.
Campbell and Shiller (1988) developed the Cyclically Adjusted P/E Ratio (CAPE), which smooths earnings over a full business cycle. Their empirical analysis showed that this ratio has significant predictive power for 10-year returns. Asness, Moskowitz, and Pedersen (2013) demonstrated in "Value and Momentum Everywhere" that value effects exist not only in individual stocks but also in asset classes and markets.
5.2 Equity Risk Premium as Central Valuation Metric
The Equity Risk Premium (ERP) is defined as the expected excess return of stocks over risk-free government bonds. It is the theoretical heart of valuation analysis, as it represents the compensation investors demand for bearing equity risk. Damodaran (2012) discusses in "Equity Risk Premiums: Determinants, Estimation and Implications" various methods for ERP estimation.
DEAM calculates ERP not through a single method but combines four complementary approaches with different weights. This multi-method strategy increases estimation robustness and avoids dependence on single, potentially erroneous inputs.
The first method (35% weight) uses earnings yield, calculated as 1/P/E or directly from operating earnings data, and subtracts the 10-year Treasury yield. This method follows Fed Model logic (Yardeni, 2003), although this model has theoretical weaknesses as it does not consistently treat inflation (Asness, 2003).
The second method (30% weight) extends earnings yield by share buyback yield. Share buybacks are a form of capital return to shareholders and increase value per share. Boudoukh et al. (2007) showed in "The Total Shareholder Yield" that the sum of dividend yield and buyback yield is a better predictor of future returns than dividend yield alone.
The third method (20% weight) implements the Gordon Growth Model (Gordon, 1962), which models stock value as the sum of discounted future dividends. Under constant growth g assumption: Expected Return = Dividend Yield + g. The model estimates sustainable growth as g = ROE × (1 - Payout Ratio), where ROE is return on equity and payout ratio is the ratio of dividends to earnings. This formula follows from equity theory: unretained earnings are reinvested at ROE and generate additional earnings growth.
The fourth method (15% weight) combines total shareholder yield (Dividend + Buybacks) with implied growth derived from revenue growth. This method considers that companies with strong revenue growth should generate higher future earnings, even if current valuations do not yet fully reflect this.
The final ERP is the weighted average of these four methods. A high ERP (above 4%) signals attractive valuations and increases the valuation score to 95 out of 100 possible points. A negative ERP, where stocks have lower expected returns than bonds, results in a minimal score of 10.
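To show how the four estimates combine, here is a hedged sketch with placeholder fundamentals; the live model pulls these values from TradingView's financial data, and the fourth estimate is a simplified stand-in for the total shareholder yield plus implied growth method:

//@version=5
indicator("Equity Risk Premium Sketch")
// Placeholder fundamentals (the published script retrieves these from financial data)
earningsYield = 0.045     // 1 / P/E
buybackYield  = 0.015
dividendYield = 0.015
roe           = 0.18
payoutRatio   = 0.35
revenueGrowth = 0.06
treasury10y   = 0.042
erp1 = earningsYield - treasury10y                                  // 35%: earnings yield vs bonds
erp2 = (earningsYield + buybackYield) - treasury10y                 // 30%: plus buybacks
erp3 = (dividendYield + roe * (1 - payoutRatio)) - treasury10y      // 20%: Gordon growth
erp4 = (dividendYield + buybackYield + revenueGrowth) - treasury10y // 15%: total yield + growth
erp  = 0.35 * erp1 + 0.30 * erp2 + 0.20 * erp3 + 0.15 * erp4
plot(erp * 100, "ERP (%)")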
5.3 Quality Adjustments to Valuation
Valuation metrics alone can be misleading if not interpreted in the context of company quality. A company with a low P/E may be cheap or fundamentally problematic. The model therefore implements quality adjustments based on growth, profitability, and capital structure.
Revenue growth above 10% annually adds 10 points to the valuation score, moderate growth above 5% adds 5 points. This adjustment reflects that growth has independent value (Modigliani and Miller, 1961, extended by later growth theory). Net margin above 15% signals pricing power and operational efficiency and increases the score by 5 points, while low margins below 8% indicate competitive pressure and subtract 5 points.
Return on equity (ROE) above 20% characterizes outstanding capital efficiency and increases the score by 5 points. Piotroski (2000) showed in "Value Investing: The Use of Historical Financial Statement Information" that fundamental quality signals such as high ROE can improve the performance of value strategies.
Capital structure is evaluated through the debt-to-equity ratio. A conservative ratio below 1.0 multiplies the valuation score by 1.2, while high leverage above 2.0 applies a multiplier of 0.8. This adjustment reflects that high debt constrains financial flexibility and can become problematic in crisis times (Korteweg, 2010).
6. Component 4: Sentiment Analysis
6.1 The Role of Sentiment in Financial Markets
Investor sentiment, defined as the collective psychological attitude of market participants, influences asset prices independently of fundamental data. Baker and Wurgler (2006, 2007) developed a sentiment index and showed that periods of high sentiment are followed by overvaluations that later correct. This insight justifies integrating a sentiment component into allocation decisions.
Sentiment is difficult to measure directly but can be proxied through market indicators. The VIX is the most widely used sentiment indicator, as it aggregates implied volatility from option prices. High VIX values reflect elevated uncertainty and risk aversion, while low values signal market comfort. Whaley (2009) refers to the VIX as the "Investor Fear Gauge" and documents its role as a contrarian indicator: extremely high values typically occur at market bottoms, while low values occur at tops.
6.2 VIX-Based Sentiment Assessment
DEAM uses statistical normalization of the VIX by calculating the Z-score: z = (VIX_current - VIX_average) / VIX_standard_deviation. The Z-score indicates how many standard deviations the current VIX is from the historical average. This approach is more robust than absolute thresholds, as it adapts to the average volatility level, which can vary over longer periods.
A Z-score below -1.5 (VIX is 1.5 standard deviations below average) signals exceptionally low risk perception and adds 40 points to the sentiment score. This may seem counterintuitive—shouldn't low fear be bullish? However, the logic follows the contrarian principle: when no one is afraid, everyone is already invested, and there is limited further upside potential (Zweig, 1973). Conversely, a Z-score above 1.5 (extreme fear) adds -40 points, reflecting market panic but simultaneously suggesting potential buying opportunities.
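A minimal sketch of the Z-score normalization; the VIX symbol and lookback are assumptions:

//@version=5
indicator("VIX Z-Score Sketch")
window = input.int(252, "Lookback")
vix    = request.security("VIX", timeframe.period, close)  // symbol is an assumption
sd = ta.stdev(vix, window)
z  = sd > 0 ? (vix - ta.sma(vix, window)) / sd : 0.0
// Contrarian scoring direction described in the text
sentimentPoints = z < -1.5 ? 40 : z > 1.5 ? -40 : 0
plot(z, "VIX Z-Score")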
6.3 VIX Term Structure as Sentiment Signal
The VIX term structure provides additional sentiment information. Normally, the VIX trades in contango, meaning longer-term VIX futures have higher prices than short-term. This reflects that short-term volatility is currently known, while long-term volatility is more uncertain and carries a risk premium. The model compares the VIX with VIX9D (9-day volatility) and identifies backwardation (VIX > 1.05 × VIX9D) and steep backwardation (VIX > 1.15 × VIX9D).
Backwardation occurs when short-term implied volatility is higher than longer-term, which typically happens during market stress. Investors anticipate immediate turbulence but expect calming. Psychologically, this reflects acute fear. The model subtracts 15 points for backwardation and 30 for steep backwardation, as these constellations signal elevated risk. Simon and Wiggins (2001) analyzed the VIX futures curve and showed that backwardation is associated with market declines.
6.4 Safe-Haven Flows
During crisis times, investors flee from risky assets into safe havens: gold, US dollar, and Japanese yen. This "flight to quality" is a sentiment signal. The model calculates the performance of these assets relative to stocks over the last 20 trading days. When gold or the dollar strongly rise while stocks fall, this indicates elevated risk aversion.
The safe-haven component is calculated as the difference between safe-haven performance and stock performance. Positive values (safe havens outperform) subtract up to 20 points from the sentiment score, negative values (stocks outperform) add up to 10 points. The asymmetric treatment (larger deduction for risk-off than bonus for risk-on) reflects that risk-off movements are typically sharper and more informative than risk-on phases.
Baur and Lucey (2010) examined safe-haven properties of gold and showed that gold indeed exhibits negative correlation with stocks during extreme market movements, confirming its role as crisis protection.
7. Component 5: Macroeconomic Analysis
7.1 The Yield Curve as Economic Indicator
The yield curve, represented as yields of government bonds of various maturities, contains aggregated expectations about future interest rates, inflation, and economic growth. The slope of the yield curve has remarkable predictive power for recessions. Estrella and Mishkin (1998) showed that an inverted yield curve (short-term rates higher than long-term) predicts recessions with high reliability. This is because inverted curves reflect restrictive monetary policy: the central bank raises short-term rates to combat inflation, dampening economic activity.
DEAM calculates two spread measures: the 2-year-minus-10-year spread and the 3-month-minus-10-year spread. A steep, positive curve (spreads above 1.5% and 2% respectively) signals healthy growth expectations and generates the maximum yield curve score of 40 points. A flat curve (spreads near zero) reduces the score to 20 points. An inverted curve (negative spreads) is particularly alarming and results in only 10 points.
The choice of two different spreads increases analysis robustness. The 2-10 spread is most established in academic literature, while the 3M-10Y spread is often considered more sensitive, as the 3-month rate directly reflects current monetary policy (Ang, Piazzesi, and Wei, 2006).
7.2 Credit Conditions and Spreads
Credit spreads—the yield difference between risky corporate bonds and safe government bonds—reflect risk perception in the credit market. Gilchrist and Zakrajšek (2012) constructed an "Excess Bond Premium" that measures the component of credit spreads not explained by fundamentals and showed this is a predictor of future economic activity and stock returns.
The model approximates credit spread by comparing the yield of high-yield bond ETFs (HYG) with investment-grade bond ETFs (LQD). A narrow spread below 200 basis points signals healthy credit conditions and risk appetite, contributing 30 points to the macro score. Very wide spreads above 1000 basis points (as during the 2008 financial crisis) signal credit crunch and generate zero points.
Additionally, the model evaluates whether "flight to quality" is occurring, identified through strong performance of Treasury bonds (TLT) with simultaneous weakness in high-yield bonds. This constellation indicates elevated risk aversion and reduces the credit conditions score.
7.3 Financial Stability at Corporate Level
While the yield curve and credit spreads reflect macroeconomic conditions, financial stability evaluates the health of companies themselves. The model uses the aggregated debt-to-equity ratio and return on equity of the S&P 500 as proxies for corporate health.
A low leverage level below 0.5 combined with high ROE above 15% signals robust corporate balance sheets and generates 20 points. This combination is particularly valuable as it represents both defensive strength (low debt means crisis resistance) and offensive strength (high ROE means earnings power). High leverage above 1.5 generates only 5 points, as it implies vulnerability to interest rate increases and recessions.
Korteweg (2010) showed in "The Net Benefits to Leverage" that optimal debt maximizes firm value, but excessive debt increases distress costs. At the aggregated market level, high debt indicates fragilities that can become problematic during stress phases.
8. Component 6: Crisis Detection
8.1 The Need for Systematic Crisis Detection
Financial crises are rare but extremely impactful events that suspend normal statistical relationships. During normal market volatility, diversified portfolios and traditional risk management approaches function, but during systemic crises, seemingly independent assets suddenly correlate strongly, and losses exceed historical expectations (Longin and Solnik, 2001). This justifies a separate crisis detection mechanism that operates independently of regular allocation components.
Reinhart and Rogoff (2009) documented in "This Time Is Different: Eight Centuries of Financial Folly" recurring patterns in financial crises: extreme volatility, massive drawdowns, credit market dysfunction, and asset price collapse. DEAM operationalizes these patterns into quantifiable crisis indicators.
8.2 Multi-Signal Crisis Identification
The model uses a counter-based approach where various stress signals are identified and aggregated. This methodology is more robust than relying on a single indicator, as true crises typically occur simultaneously across multiple dimensions. A single signal may be a false alarm, but the simultaneous presence of multiple signals increases confidence.
The first indicator is a VIX above the crisis threshold (default 40), adding one point. A VIX above 60 (as in 2008 and March 2020) adds two additional points, as such extreme values are historically very rare. This tiered approach captures the intensity of volatility.
The second indicator is market drawdown. A drawdown above 15% adds one point, as corrections of this magnitude can be potential harbingers of larger crises. A drawdown above 25% adds another point, as historical bear markets typically encompass 25-40% drawdowns.
The third indicator is credit market spreads above 500 basis points, adding one point. Such wide spreads occur only during significant credit market disruptions, as in 2008 during the Lehman crisis.
The fourth indicator identifies simultaneous losses in stocks and bonds. Normally, Treasury bonds act as a hedge against equity risk (negative correlation), but when both fall simultaneously, this indicates systemic liquidity problems or inflation/stagflation fears. The model checks whether both SPY and TLT have fallen more than 10% and 5% respectively over 5 trading days, adding two points.
The fifth indicator is a volume spike combined with negative returns. Extreme trading volumes (above twice the 20-day average) with falling prices signal panic selling. This adds one point.
A crisis situation is diagnosed when at least 3 indicators trigger, a severe crisis at 5 or more indicators. These thresholds were calibrated through historical backtesting to identify true crises (2008, 2020) without generating excessive false alarms.
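A simplified sketch of the counter logic with a subset of the signals; the symbols and the thresholds not stated in the text are illustrative:

//@version=5
indicator("Crisis Counter Sketch")
vix = request.security("VIX", timeframe.period, close)  // symbol is an assumption
// Market drawdown from the 1-year high
dd = (ta.highest(close, 252) - close) / ta.highest(close, 252)
// Volume spike combined with a negative return
volSpike = volume > 2 * nz(ta.sma(volume, 20), volume) and close < nz(close[1], close)
int signals = 0
signals += nz(vix) > 40 ? 1 : 0
signals += nz(vix) > 60 ? 2 : 0
signals += nz(dd) > 0.15 ? 1 : 0
signals += nz(dd) > 0.25 ? 1 : 0
signals += volSpike ? 1 : 0
// The full model also counts wide credit spreads and joint stock/bond losses
crisis       = signals >= 3
severeCrisis = signals >= 5
bgcolor(severeCrisis ? color.new(color.red, 70) : crisis ? color.new(color.orange, 80) : na)
plot(signals, "Crisis Signal Count")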
8.3 Crisis-Based Allocation Override
When a crisis is detected, the system overrides the normal allocation recommendation and caps equity allocation at maximum 25%. In a severe crisis, the cap is set at 10%. This drastic defensive posture follows the empirical observation that crises typically require time to develop and that early reduction can avoid substantial losses (Faber, 2007).
This override logic implements a "safety first" principle: in situations of existential danger to the portfolio, capital preservation becomes the top priority. Roy (1952) formalized this approach in "Safety First and the Holding of Assets," arguing that investors should primarily minimize ruin probability.
9. Integration and Final Allocation Calculation
9.1 Component Weighting
The final allocation recommendation emerges through weighted aggregation of the five components. The standard weighting is: Market Regime 35%, Risk Management 25%, Valuation 20%, Sentiment 15%, Macro 5%. These weights reflect both theoretical considerations and empirical backtesting results.
The highest weighting of market regime is based on evidence that trend-following and momentum strategies have delivered robust results across various asset classes and time periods (Moskowitz, Ooi, and Pedersen, 2012). Current market momentum is highly informative for the near future, although it provides no information about long-term expectations.
The substantial weighting of risk management (25%) follows from the central importance of risk control. Wealth preservation is the foundation of long-term wealth creation, and systematic risk management is demonstrably value-creating (Moreira and Muir, 2017).
The valuation component receives 20% weight, based on the long-term mean reversion of valuation metrics. While valuation has limited short-term predictive power (bull and bear markets can begin at any valuation), the long-term relationship between valuation and returns is robustly documented (Campbell and Shiller, 1988).
Sentiment (15%) and Macro (5%) receive lower weights, as these factors are subtler and harder to measure. Sentiment is valuable as a contrarian indicator at extremes but less informative in normal ranges. Macro variables such as the yield curve have strong predictive power for recessions, but the transmission from recessions to stock market performance is complex and temporally variable.
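A compact sketch of the weighted aggregation, together with the 3-period smoothing discussed in section 9.3; the component scores are placeholders, since each is itself the output of the sub-models described above:

//@version=5
indicator("Component Aggregation Sketch")
// Placeholder component scores on a 0-100 scale
regimeScore    = 70.0
riskScore      = 55.0
valuationScore = 45.0
sentimentScore = 60.0
macroScore     = 50.0
// Default weights: regime 35%, risk 25%, valuation 20%, sentiment 15%, macro 5%
rawAllocation = 0.35 * regimeScore + 0.25 * riskScore + 0.20 * valuationScore + 0.15 * sentimentScore + 0.05 * macroScore
// Final smoothing: simple moving average over 3 periods
smoothedAllocation = ta.sma(rawAllocation, 3)
plot(smoothedAllocation, "Equity Allocation %")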
9.2 Model Type Adjustments
DEAM allows users to choose between four model types: Conservative, Balanced, Aggressive, and Adaptive. This choice modifies the final allocation through additive adjustments.
Conservative mode subtracts 10 percentage points from allocation, resulting in consistently more cautious positioning. This is suitable for risk-averse investors or those with limited investment horizons. Aggressive mode adds 10 percentage points, suitable for risk-tolerant investors with long horizons.
Adaptive mode implements procyclical adjustment based on short-term momentum: if the market has risen more than 5% in the last 20 days, 5 percentage points are added; if it has declined more than 5%, 5 points are subtracted. This logic follows the observation that short-term momentum persists (Jegadeesh and Titman, 1993), but the moderate size of adjustment avoids excessive timing bets.
Balanced mode makes no adjustment and uses raw model output. This neutral setting is suitable for investors who wish to trust model recommendations unchanged.
9.3 Smoothing and Stability
The allocation resulting from aggregation undergoes final smoothing through a simple moving average over 3 periods. This smoothing is crucial for model practicality, as it reduces frequent trading and thus transaction costs. Without smoothing, the model could fluctuate between adjacent allocations with every small input change.
The choice of 3 periods as smoothing window is a compromise between responsiveness and stability. Longer smoothing would excessively delay signals and impede response to true regime changes. Shorter or no smoothing would allow too much noise. Empirical tests showed that 3-period smoothing offers an optimal ratio between these goals.
10. Visualization and Interpretation
10.1 Main Output: Equity Allocation
DEAM's primary output is a time series from 0 to 100 representing the recommended percentage allocation to equities. This representation is intuitive: 100% means full investment in stocks (specifically: an S&P 500 ETF), 0% means complete cash position, and intermediate values correspond to mixed portfolios. A value of 60% means, for example: invest 60% of wealth in SPY, hold 40% in money market instruments or cash.
The time series is color-coded to enable quick visual interpretation. Green shades represent high allocations (above 80%, bullish), red shades low allocations (below 20%, bearish), and neutral colors middle allocations. The chart background is dynamically colored based on the signal, enhancing readability in different market phases.
10.2 Dashboard Metrics
A tabular dashboard presents key metrics compactly. This includes current allocation, cash allocation (complement), an aggregated signal (BULLISH/NEUTRAL/BEARISH), current market regime, VIX level, market drawdown, and crisis status.
Additionally, fundamental metrics are displayed: P/E Ratio, Equity Risk Premium, Return on Equity, Debt-to-Equity Ratio, and Total Shareholder Yield. This transparency allows users to understand model decisions and form their own assessments.
Component scores (Regime, Risk, Valuation, Sentiment, Macro) are also displayed, each normalized on a 0-100 scale. This shows which factors primarily drive the current recommendation. If, for example, the Risk score is very low (20) while other scores are moderate (50-60), this indicates that risk management considerations are pulling allocation down.
10.3 Component Breakdown (Optional)
Advanced users can display individual components as separate lines in the chart. This enables analysis of component dynamics: do all components move synchronously, or are there divergences? Divergences can be particularly informative. If, for example, the market regime is bullish (high score) but the valuation component is very negative, this signals an overbought market not fundamentally supported—a classic "bubble warning."
This feature is disabled by default to keep the chart clean but can be activated for deeper analysis.
10.4 Confidence Bands
The model optionally displays uncertainty bands around the main allocation line. These are calculated as ±1 standard deviation of allocation over a rolling 20-period window. Wide bands indicate high volatility of model recommendations, suggesting uncertain market conditions. Narrow bands indicate stable recommendations.
This visualization implements a concept of epistemic uncertainty—uncertainty about the model estimate itself, not just market volatility. In phases where various indicators send conflicting signals, the allocation recommendation becomes more volatile, manifesting in wider bands. Users can understand this as a warning to act more cautiously or consult alternative information sources.
11. Alert System
11.1 Allocation Alerts
DEAM implements an alert system that notifies users of significant events. Allocation alerts trigger when smoothed allocation crosses certain thresholds. An alert is generated when allocation reaches 80% (from below), signaling strong bullish conditions. Another alert triggers when allocation falls to 20%, indicating defensive positioning.
These thresholds are not arbitrary but correspond with boundaries between model regimes. An allocation of 80% roughly corresponds to a clear bull market regime, while 20% corresponds to a bear market regime. Alerts at these points are therefore informative about fundamental regime shifts.
11.2 Crisis Alerts
Separate alerts trigger upon detection of crisis and severe crisis. These alerts have highest priority as they signal large risks. A crisis alert should prompt investors to review their portfolio and potentially take defensive measures beyond the automatic model recommendation (e.g., hedging through put options, rebalancing to more defensive sectors).
11.3 Regime Change Alerts
An alert triggers upon change of market regime (e.g., from Neutral to Correction, or from Bull Market to Strong Bull). Regime changes are highly informative events that typically entail substantial allocation changes. These alerts enable investors to proactively respond to changes in market dynamics.
11.4 Risk Breach Alerts
A specialized alert triggers when actual portfolio risk utilization exceeds target parameters by 20%. This is a warning signal that the risk management system is reaching its limits, possibly because market volatility is rising faster than allocation can be reduced. In such situations, investors should consider manual interventions.
12. Practical Application and Limitations
12.1 Portfolio Implementation
DEAM generates a recommendation for allocation between equities (S&P 500) and cash. Implementation by an investor can take various forms. The most direct method is using an S&P 500 ETF (e.g., SPY, VOO) for equity allocation and a money market fund or savings account for cash allocation.
A rebalancing strategy is required to synchronize actual allocation with model recommendation. Two approaches are possible: (1) rule-based rebalancing at every 10% deviation between actual and target, or (2) time-based monthly rebalancing. Both have trade-offs between responsiveness and transaction costs. Empirical evidence (Jaconetti, Kinniry, and Zilbering, 2010) suggests rebalancing frequency has moderate impact on performance, and investors should optimize based on their transaction costs.
12.2 Adaptation to Individual Preferences
The model offers numerous adjustment parameters. Component weights can be modified if investors place more or less belief in certain factors. A fundamentally-oriented investor might increase valuation weight, while a technical trader might increase regime weight.
Risk target parameters (target volatility, max drawdown) should be adapted to individual risk tolerance. Younger investors with long investment horizons can choose higher target volatility (15-18%), while retirees may prefer lower volatility (8-10%). This adjustment systematically shifts average equity allocation.
Crisis thresholds can be adjusted based on preference for sensitivity versus specificity of crisis detection. Lower thresholds (e.g., VIX > 35 instead of 40) increase sensitivity (more crises are detected) but reduce specificity (more false alarms). Higher thresholds have the reverse effect.
12.3 Limitations and Disclaimers
DEAM is based on historical relationships between indicators and market performance. There is no guarantee these relationships will persist in the future. Structural changes in markets (e.g., through regulation, technology, or central bank policy) can break established patterns. This is the fundamental problem of induction in financial science (Taleb, 2007).
The model is optimized for US equities (S&P 500). Application to other markets (international stocks, bonds, commodities) would require recalibration. The indicators and thresholds are specific to the statistical properties of the US equity market.
The model cannot eliminate losses. Even with perfect crisis prediction, an investor following the model would lose money in bear markets—just less than a buy-and-hold investor. The goal is risk-adjusted performance improvement, not risk elimination.
Transaction costs are not modeled. In practice, spreads, commissions, and taxes reduce net returns. Frequent trading can cause substantial costs. Model smoothing helps minimize this, but users should consider their specific cost situation.
The model reacts to information; it does not anticipate it. During sudden shocks (e.g., 9/11, COVID-19 lockdowns), the model can only react after price movements, not before. This limitation is inherent to all reactive systems.
12.4 Relationship to Other Strategies
DEAM is a tactical asset allocation approach and should be viewed as a complement, not replacement, for strategic asset allocation. Brinson, Hood, and Beebower (1986) showed in their influential study "Determinants of Portfolio Performance" that strategic asset allocation (long-term policy allocation) explains the majority of portfolio performance, but this leaves room for tactical adjustments based on market timing.
The model can be combined with value and momentum strategies at the individual stock level. While DEAM controls overall market exposure, within-equity decisions can be optimized through stock-picking models. This separation between strategic (market exposure) and tactical (stock selection) levels follows classical portfolio theory.
The model does not replace diversification across asset classes. A complete portfolio should also include bonds, international stocks, real estate, and alternative investments. DEAM addresses only the US equity allocation decision within a broader portfolio.
13. Scientific Foundation and Evaluation
13.1 Theoretical Consistency
DEAM's components are based on established financial theory and empirical evidence. The market regime component follows from regime-switching models (Hamilton, 1989) and trend-following literature. The risk management component implements volatility targeting (Moreira and Muir, 2017) and modern portfolio theory (Markowitz, 1952). The valuation component is based on discounted cash flow theory and empirical value research (Campbell and Shiller, 1988; Fama and French, 1992). The sentiment component integrates behavioral finance (Baker and Wurgler, 2006). The macro component uses established business cycle indicators (Estrella and Mishkin, 1998).
This theoretical grounding distinguishes DEAM from purely data-mining-based approaches that identify patterns without causal theory. Theory-guided models have greater probability of functioning out-of-sample, as they are based on fundamental mechanisms, not random correlations (Lo and MacKinlay, 1990).
13.2 Empirical Validation
While this document does not present detailed backtest analysis, it should be noted that rigorous validation of a tactical asset allocation model should include several elements:
In-sample testing establishes whether the model functions at all in the data on which it was calibrated. Out-of-sample testing is crucial: the model should be tested in time periods not used for development. Walk-forward analysis, where the model is successively trained on rolling windows and tested in the next window, approximates real implementation.
Performance metrics should be risk-adjusted. Looking at returns alone is misleading, as higher returns often only compensate for higher risk. Sharpe Ratio, Sortino Ratio, Calmar Ratio, and Maximum Drawdown are relevant metrics. Comparison with benchmarks (Buy-and-Hold S&P 500, 60/40 stock/bond portfolio) contextualizes performance.
Robustness checks test sensitivity to parameter variation. If the model only functions at specific parameter settings, this indicates overfitting. Robust models show consistent performance over a range of plausible parameters.
13.3 Comparison with Existing Literature
DEAM fits into the broader literature on tactical asset allocation. Faber (2007) presented a simple momentum-based timing system that goes long when the market is above its 10-month average, otherwise cash. This simple system avoided large drawdowns in bear markets. DEAM can be understood as a sophistication of this approach that integrates multiple information sources.
Ilmanen (2011) discusses various timing factors in "Expected Returns" and argues for multi-factor approaches. DEAM operationalizes this philosophy. Asness, Moskowitz, and Pedersen (2013) showed that value and momentum effects work across asset classes, justifying cross-asset application of regime and valuation signals.
Ang (2014) emphasizes in "Asset Management: A Systematic Approach to Factor Investing" the importance of systematic, rule-based approaches over discretionary decisions. DEAM is fully systematic and eliminates emotional biases that plague individual investors (overconfidence, hindsight bias, loss aversion).
References
Ang, A. (2014) *Asset Management: A Systematic Approach to Factor Investing*. Oxford: Oxford University Press.
Ang, A., Piazzesi, M. and Wei, M. (2006) 'What does the yield curve tell us about GDP growth?', *Journal of Econometrics*, 131(1-2), pp. 359-403.
Asness, C.S. (2003) 'Fight the Fed Model', *The Journal of Portfolio Management*, 30(1), pp. 11-24.
Asness, C.S., Moskowitz, T.J. and Pedersen, L.H. (2013) 'Value and Momentum Everywhere', *The Journal of Finance*, 68(3), pp. 929-985.
Baker, M. and Wurgler, J. (2006) 'Investor Sentiment and the Cross-Section of Stock Returns', *The Journal of Finance*, 61(4), pp. 1645-1680.
Baker, M. and Wurgler, J. (2007) 'Investor Sentiment in the Stock Market', *Journal of Economic Perspectives*, 21(2), pp. 129-152.
Baur, D.G. and Lucey, B.M. (2010) 'Is Gold a Hedge or a Safe Haven? An Analysis of Stocks, Bonds and Gold', *Financial Review*, 45(2), pp. 217-229.
Bollerslev, T. (1986) 'Generalized Autoregressive Conditional Heteroskedasticity', *Journal of Econometrics*, 31(3), pp. 307-327.
Boudoukh, J., Michaely, R., Richardson, M. and Roberts, M.R. (2007) 'On the Importance of Measuring Payout Yield: Implications for Empirical Asset Pricing', *The Journal of Finance*, 62(2), pp. 877-915.
Brinson, G.P., Hood, L.R. and Beebower, G.L. (1986) 'Determinants of Portfolio Performance', *Financial Analysts Journal*, 42(4), pp. 39-44.
Brock, W., Lakonishok, J. and LeBaron, B. (1992) 'Simple Technical Trading Rules and the Stochastic Properties of Stock Returns', *The Journal of Finance*, 47(5), pp. 1731-1764.
Young, T.W. (1991) 'Calmar Ratio: A Smoother Tool', *Futures*, October issue.
Campbell, J.Y. and Shiller, R.J. (1988) 'The Dividend-Price Ratio and Expectations of Future Dividends and Discount Factors', *Review of Financial Studies*, 1(3), pp. 195-228.
Cochrane, J.H. (2011) 'Presidential Address: Discount Rates', *The Journal of Finance*, 66(4), pp. 1047-1108.
Damodaran, A. (2012) *Equity Risk Premiums: Determinants, Estimation and Implications*. Working Paper, Stern School of Business.
Engle, R.F. (1982) 'Autoregressive Conditional Heteroskedasticity with Estimates of the Variance of United Kingdom Inflation', *Econometrica*, 50(4), pp. 987-1007.
Estrella, A. and Hardouvelis, G.A. (1991) 'The Term Structure as a Predictor of Real Economic Activity', *The Journal of Finance*, 46(2), pp. 555-576.
Estrella, A. and Mishkin, F.S. (1998) 'Predicting U.S. Recessions: Financial Variables as Leading Indicators', *Review of Economics and Statistics*, 80(1), pp. 45-61.
Faber, M.T. (2007) 'A Quantitative Approach to Tactical Asset Allocation', *The Journal of Wealth Management*, 9(4), pp. 69-79.
Fama, E.F. and French, K.R. (1989) 'Business Conditions and Expected Returns on Stocks and Bonds', *Journal of Financial Economics*, 25(1), pp. 23-49.
Fama, E.F. and French, K.R. (1992) 'The Cross-Section of Expected Stock Returns', *The Journal of Finance*, 47(2), pp. 427-465.
Garman, M.B. and Klass, M.J. (1980) 'On the Estimation of Security Price Volatilities from Historical Data', *Journal of Business*, 53(1), pp. 67-78.
Gilchrist, S. and Zakrajšek, E. (2012) 'Credit Spreads and Business Cycle Fluctuations', *American Economic Review*, 102(4), pp. 1692-1720.
Gordon, M.J. (1962) *The Investment, Financing, and Valuation of the Corporation*. Homewood: Irwin.
Graham, B. and Dodd, D.L. (1934) *Security Analysis*. New York: McGraw-Hill.
Hamilton, J.D. (1989) 'A New Approach to the Economic Analysis of Nonstationary Time Series and the Business Cycle', *Econometrica*, 57(2), pp. 357-384.
Ilmanen, A. (2011) *Expected Returns: An Investor's Guide to Harvesting Market Rewards*. Chichester: Wiley.
Jaconetti, C.M., Kinniry, F.M. and Zilbering, Y. (2010) 'Best Practices for Portfolio Rebalancing', *Vanguard Research Paper*.
Jegadeesh, N. and Titman, S. (1993) 'Returns to Buying Winners and Selling Losers: Implications for Stock Market Efficiency', *The Journal of Finance*, 48(1), pp. 65-91.
Kahneman, D. and Tversky, A. (1979) 'Prospect Theory: An Analysis of Decision under Risk', *Econometrica*, 47(2), pp. 263-292.
Korteweg, A. (2010) 'The Net Benefits to Leverage', *The Journal of Finance*, 65(6), pp. 2137-2170.
Lo, A.W. and MacKinlay, A.C. (1990) 'Data-Snooping Biases in Tests of Financial Asset Pricing Models', *Review of Financial Studies*, 3(3), pp. 431-467.
Longin, F. and Solnik, B. (2001) 'Extreme Correlation of International Equity Markets', *The Journal of Finance*, 56(2), pp. 649-676.
Mandelbrot, B. (1963) 'The Variation of Certain Speculative Prices', *The Journal of Business*, 36(4), pp. 394-419.
Markowitz, H. (1952) 'Portfolio Selection', *The Journal of Finance*, 7(1), pp. 77-91.
Modigliani, F. and Miller, M.H. (1961) 'Dividend Policy, Growth, and the Valuation of Shares', *The Journal of Business*, 34(4), pp. 411-433.
Moreira, A. and Muir, T. (2017) 'Volatility-Managed Portfolios', *The Journal of Finance*, 72(4), pp. 1611-1644.
Moskowitz, T.J., Ooi, Y.H. and Pedersen, L.H. (2012) 'Time Series Momentum', *Journal of Financial Economics*, 104(2), pp. 228-250.
Parkinson, M. (1980) 'The Extreme Value Method for Estimating the Variance of the Rate of Return', *Journal of Business*, 53(1), pp. 61-65.
Piotroski, J.D. (2000) 'Value Investing: The Use of Historical Financial Statement Information to Separate Winners from Losers', *Journal of Accounting Research*, 38, pp. 1-41.
Reinhart, C.M. and Rogoff, K.S. (2009) *This Time Is Different: Eight Centuries of Financial Folly*. Princeton: Princeton University Press.
Ross, S.A. (1976) 'The Arbitrage Theory of Capital Asset Pricing', *Journal of Economic Theory*, 13(3), pp. 341-360.
Roy, A.D. (1952) 'Safety First and the Holding of Assets', *Econometrica*, 20(3), pp. 431-449.
Schwert, G.W. (1989) 'Why Does Stock Market Volatility Change Over Time?', *The Journal of Finance*, 44(5), pp. 1115-1153.
Sharpe, W.F. (1966) 'Mutual Fund Performance', *The Journal of Business*, 39(1), pp. 119-138.
Sharpe, W.F. (1994) 'The Sharpe Ratio', *The Journal of Portfolio Management*, 21(1), pp. 49-58.
Simon, D.P. and Wiggins, R.A. (2001) 'S&P Futures Returns and Contrary Sentiment Indicators', *Journal of Futures Markets*, 21(5), pp. 447-462.
Taleb, N.N. (2007) *The Black Swan: The Impact of the Highly Improbable*. New York: Random House.
Whaley, R.E. (2000) 'The Investor Fear Gauge', *The Journal of Portfolio Management*, 26(3), pp. 12-17.
Whaley, R.E. (2009) 'Understanding the VIX', *The Journal of Portfolio Management*, 35(3), pp. 98-105.
Yardeni, E. (2003) 'Stock Valuation Models', *Topical Study*, 51, Yardeni Research.
Zweig, M.E. (1973) 'An Investor Expectations Stock Price Predictive Model Using Closed-End Fund Premiums', *The Journal of Finance*, 28(1), pp. 67-78.
Infinity Signal Momentum Consensus
Multi-Timeframe Momentum Fusion & Projection
Infinity Signal — Momentum Consensus is a multi-timeframe momentum oscillator designed to identify early turning points, directional bias, and momentum structure by blending momentum data across multiple timeframes into a single, unified signal.
Instead of relying on a traditional single-timeframe Stochastic RSI, this indicator creates a consensus momentum curve that reflects how short-, medium-, and long-term momentum align in real time.
The result is a smoother, more stable oscillator that often turns before price and before standard momentum indicators react.
This approach reduces noise while preserving the geometric structure required for forward projection and swing analysis.
🔍 How It Works
The indicator computes Stochastic RSI momentum across multiple timeframes (1H, 4H, 1D, 1W, 1M), normalizes those values, and combines them into a single composite curve.
Each timeframe contributes differently:
Higher timeframes shape overall curvature and bias
Mid timeframes influence impulse strength
Lower timeframes refine timing
When averaged together, these form a momentum consensus that highlights genuine shifts in market behavior.
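For readers who want to see the general shape of the idea, here is a minimal Pine v5 sketch of a weighted multi-timeframe Stochastic RSI consensus. The lengths, weights, and timeframes are illustrative placeholders, not the indicator's actual internals, and the projection and grading logic are omitted:
//@version=5
indicator("Momentum Consensus Sketch", overlay = false)
// Standard Stochastic RSI %K
stochRsi() =>
    r = ta.rsi(close, 14)
    ta.sma(ta.stoch(r, r, r, 14), 3)
// Sample the same calculation on higher timeframes
k1h = request.security(syminfo.tickerid, "60", stochRsi())
k4h = request.security(syminfo.tickerid, "240", stochRsi())
k1d = request.security(syminfo.tickerid, "D", stochRsi())
k1w = request.security(syminfo.tickerid, "W", stochRsi())
// Weighted consensus: higher timeframes shape bias, lower timeframes refine timing
consensus = (1.0 * k1h + 1.5 * k4h + 2.0 * k1d + 2.5 * k1w) / 7.0
plot(consensus, "Consensus", color.teal)
hline(80, "Overbought")
hline(20, "Oversold")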
The indicator also includes:
A forward momentum projection based on prior curvature
A multi-timeframe alignment table with weighted bias and grading
Visual context for overbought, oversold, and transitional states
🧭 How to Use
1️⃣ Identify Directional Bias
Use the Composite Momentum Curve to determine the dominant market bias.
Rising curve → bullish momentum pressure
Falling curve → bearish momentum pressure
Flattening or compressing curve → consolidation or transition
Because the curve blends multiple timeframes, its direction is often more reliable than single-TF oscillators.
2️⃣ Watch for Early Turning Points
Key signals occur when the composite curve bends, flattens, or crosses.
Momentum turns frequently appear before price reversals
Signals near overbought or oversold zones carry greater significance
The smoother curve helps reduce whipsaw
These inflection points are particularly useful for swing and position traders.
3️⃣ Use the Multi-Timeframe Table for Confirmation
The table summarizes momentum alignment across all tracked timeframes.
Bull / Bear / Mixed shows agreement or divergence
Weighted scores reveal which timeframes dominate
Signal grades (A+ → F) reflect alignment quality
The strongest setups occur when table bias and momentum direction agree.
4️⃣ Interpret Projections as Context
Projected momentum paths visualize how momentum may evolve based on prior structure.
Use projections as guidance, not guarantees
Look for symmetry, slope changes, and recurring curvature
Combine projections with structure or support/resistance
Projections are most effective in stable momentum regimes.
5️⃣ Combine with Price Action & Risk Management
Infinity Signal — Momentum Consensus is designed as a decision-support tool.
Confirm signals with market structure and price behavior
Use clear invalidation levels and risk controls
Reduce exposure during mixed or low-alignment conditions
No indicator replaces proper risk management.
🎯 Ideal Use Cases
Swing trading & position trading
Momentum-based trend analysis
Early reversal and pivot detection
Multi-timeframe confirmation
⚠️ Disclaimer
This indicator is for educational and analytical purposes only and does not constitute financial advice. Always manage risk appropriately.
VB-MainLite
VB-MainLite – v1.0 Initial Release
Overview
VB-MainLite is a consolidated market-structure and execution framework designed to streamline decision-making into a single chart-level view. The script combines multi-timeframe trend, volatility, volume, and liquidity signals into one cohesive visual layer, reducing indicator clutter while preserving depth of information for active traders.
Core Architecture
Trend Backbone – EMA 200
Dedicated EMA 200 acts as the primary trend filter and higher-timeframe bias reference.
Serves as the “spine” of the system for contextualizing all secondary signals (swings, reversals, volume events, etc.).
Custom MA Suite (Envelope Ready)
Four configurable moving averages with flexible source, length, and smoothing.
Default configuration (preset idea: “8/89 Envelope”):
MA #1: EMA 8 on high
MA #2: EMA 8 on low
MA #3: EMA 89 on high
MA #4: EMA 89 on low
All four are disabled by default to keep the chart minimal. Users can toggle them on from the Custom MAs group for envelope or cloud-style configurations.
Nadaraya–Watson Smoother (Swing Framework)
Gaussian-kernel Nadaraya–Watson regression applied to price (hl2) to build a smooth synthetic curve.
Two layers of functionality:
Swing labels (▲ / ▼) at inflection points in the smoothed curve.
Optional curve line that visually tracks the turning structure over the last ~500 bars.
Designed to surface early swing potential before standard MAs react.
Hull Moving Average (Trend Overlay)
Optional Hull MA (HMA) for faster trend visualization.
Color-coded by slope (buy/sell bias).
Default: off to prevent overloading the chart; can be enabled under Hull MA settings.
Momentum, Exhaustion & Pattern Engine
CCI-Based Bar Coloring
CCI applied to close with configurable thresholds.
Overbought / oversold CCI zones map directly into candle coloring to visually highlight short-term momentum extremes.
RSI Top / Bottom Exhaustion Finder
RSI logic applied separately to high-driven (tops) and low-driven (bottoms) sequences.
Plots:
Top arrows where high-side RSI stretches into high-risk territory.
Bottom arrows where low-side RSI indicates exhaustion on the downside.
Useful as confluence around the Nadaraya swing turns and EMA 200 regime.
Engulfing + MA Trend Engine (“Fat Bull / Fat Bear”)
Detects bullish and bearish engulfing patterns, then combines them with MA trend cross logic.
Only when both pattern and MA regime align does the engine flag:
Fat Bull (Engulf + MA aligned long)
Fat Bear (Engulf + MA aligned short)
Candles are marked via conditional barcolor to highlight strong, structured shifts in control.
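As a rough illustration of this pattern-plus-regime idea (not the script's actual code), a minimal Pine v5 sketch could look like the following; the 8/89 EMA pair and the engulfing definition are assumptions:
//@version=5
indicator("Fat Bull / Fat Bear Sketch", overlay = true)
fastMa = ta.ema(close, 8)
slowMa = ta.ema(close, 89)
// Basic engulfing: the current body engulfs the prior body
bullEngulf = close > open and close[1] < open[1] and close >= open[1] and open <= close[1]
bearEngulf = close < open and close[1] > open[1] and close <= open[1] and open >= close[1]
// The pattern only counts when the MA regime agrees with it
fatBull = bullEngulf and fastMa > slowMa
fatBear = bearEngulf and fastMa < slowMa
barcolor(fatBull ? color.lime : fatBear ? color.maroon : na)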
Fat Finger Detection (Wick Spikes / Stop Runs)
Identifies abnormal wick extensions relative to the prior bar’s body range with configurable tolerance.
Supports detection of potential liquidity grabs, stop runs, or “excess” that may precede reversals or mean-reversion behavior.
Volume & Liquidity Intelligence
Bull Snort (Aggressive Buy Spikes)
Flags events where:
Volume is significantly above the 50-period average, and
Price closes in the upper portion of the bar and above prior close.
Plots a labeled marker below the bar to indicate aggressive upside initiative by buyers.
Pocket Pivots (Accumulation Flags)
Compares current volume vs prior 10 sessions with a filter on prior “up” days.
Highlights pocket pivot days where current green candle volume outclasses recent down-day volumes, suggesting stealth accumulation.
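A hedged Pine v5 sketch of these two volume events is shown below; the 1.5x volume multiplier, the 0.7 close-location cutoff, and the 10-bar window are illustrative assumptions rather than the script's exact thresholds:
//@version=5
indicator("Volume Events Sketch", overlay = true)
avgVol = ta.sma(volume, 50)
// Bull Snort: well above-average volume with a strong close near the high of the bar
closeLoc = high == low ? 0.5 : (close - low) / (high - low)
bullSnort = volume > 1.5 * avgVol and close > close[1] and closeLoc > 0.7
// Pocket Pivot: up-day volume exceeding the largest down-day volume of the prior 10 bars
maxDownVol = 0.0
for i = 1 to 10
    if close[i] < open[i]
        maxDownVol := math.max(maxDownVol, volume[i])
pocketPivot = close > open and volume > maxDownVol
plotshape(bullSnort, title = "Bull Snort", style = shape.triangleup, location = location.belowbar, color = color.green, size = size.tiny)
plotshape(pocketPivot, title = "Pocket Pivot", style = shape.circle, location = location.belowbar, color = color.blue, size = size.tiny)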
Delta Volume Core (Directional Volume by Price)
Internal volume-by-price style engine over a user-defined lookback.
Splits volume into up-close and down-close buckets across dynamic price bins.
Feeds into S&R and ICT zone logic to quantify where buying vs selling pressure built up.
Structural Context: S&R and ICT Zones
S&R Power Channel
Computes local high/low band over a configurable lookback window.
Renders:
Upper and lower S&R channel lines.
Shaded support / resistance zones using boxes.
Adds Buy Power / Sell Power metrics based on the ratio of up vs down bars inside the window, displayed directly in the zone overlays.
Drops ◈ markers where price interacts dynamically with the top or bottom band, highlighting reaction points.
ICT-Style Premium / Discount & Macro Zones
Two tiered structures:
Local Premium / Discount zones over a shorter SR window.
Macro Premium / Discount zones over a longer macro window.
Each zone:
Uses underlying directional volume to annotate accumulation vs distribution bias.
Provides Delta Volume Bias shading in the mid-band region, visually encoding whether local power flows are net-buying or net-selling.
Enables traders to quickly see whether current trade location is in a local/macro discount or premium context while still respecting volume profile.
Positioning Intelligence: PCD (Stocks)
Position Cost Distribution (PCD) – Stocks Only
Available for stock symbols on intraday up to daily timeframe (≤ 1D).
Uses:
TOTAL_SHARES_OUTSTANDING fundamentals,
Daily OHLCV snapshot, and
A bucketed distribution engine
to approximate cost basis distribution across price.
Outputs:
Horizontal “PCD bars” to the right of current price, density-scaled by estimated share concentration.
Color-coding by profitability relative to current price (profitable vs unprofitable positions).
Labels for:
Current price
Average cost
Profit ratio (share % below current price)
90% cost range
70% cost range
Range overlap as a measure of clustering / concentration.
Multi-Timeframe Trend: Two-Pole Gaussian Dashboard
Two-Pole Gaussian Filter (Line + Cloud)
Smooths a user-selected source (default: close) using a two-pole Gaussian filter with tunable alpha.
Plots:
A thin Gaussian trend line, and
A thick Gaussian “cloud” line with transparency, colored by slope vs past (offsetG).
Functions as a responsive trend backbone that is more sensitive than EMA 200 but less noisy than raw price.
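For context, a generic two-pole recursive smoother of this kind can be sketched in a few lines of Pine v5; the coefficient form and default alpha below are illustrative and not necessarily identical to VB-MainLite's internals:
//@version=5
indicator("Two-Pole Gaussian Sketch", overlay = true)
alpha = input.float(0.15, "Alpha", minval = 0.01, maxval = 1.0)
src = input.source(close, "Source")
var float g = na
g := alpha * alpha * src + 2 * (1 - alpha) * nz(g[1], src) - (1 - alpha) * (1 - alpha) * nz(g[2], src)
// Slope-based coloring, similar in spirit to the line/cloud rendering described above
plot(g, "Gaussian", color = g > nz(g[1], g) ? color.green : color.fuchsia, linewidth = 2)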
Multi-Timeframe Gaussian Dashboard
Evaluates Gaussian trend direction across up to six timeframes (e.g., 1H / 2H / 4H / Daily / Weekly).
Renders a compact bottom-right table:
Header: symbol + overall bias arrow (up / down) based on average trend alignment.
Row of colored cells per timeframe (green for uptrend, magenta for downtrend) with human-readable TF labels (e.g., “60M”, “4H”, “1D”).
Gives an immediate read on whether intraday, swing, and higher-timeframe flows are aligned or fragmented.
Default Configuration & Usage Guidance
Default state after adding the script:
Enabled by default:
EMA 200 trend backbone
Nadaraya–Watson swing labels and curve
CCI bar coloring
RSI top/bottom arrows
Fat Bull / Fat Bear engine
Bull Snort & Pocket Pivots
S&R Power Channel
ICT Local + Macro zones
Two-pole Gaussian line + cloud + dashboard
PCD engine for stocks (auto-active where data is available)
Disabled by default (opt-in):
Custom MA suite (4x MAs, preset as EMA 8/8/89/89)
Hull MA overlay
How traders can use VB-MainLite in practice:
Use EMA 200 + Gaussian dashboard to define top-down directional bias and avoid trading directly against multi-TF trend.
Use Nadaraya swing labels, RSI exhaustion arrows, and CCI bar colors to time entries within that higher-timeframe bias.
Use Fat Bull / Fat Bear events as structured confirmation that both pattern and MA regime have flipped in the same direction.
Use Bull Snort, Pocket Pivots, and S&R / ICT zones to align execution with liquidity, volume, and location (premium vs discount).
On stocks, use PCD as a positioning map to understand trapped supply, support zones near crowded cost basis, and where profit-taking is likely.
LogNormal
Library "LogNormal"
A collection of functions used to model skewed distributions as log-normal.
Prices are commonly modeled using log-normal distributions (e.g., in Black-Scholes) because they exhibit multiplicative changes with long tails: skewed exponential growth and high variance. This approach is particularly useful for understanding price behavior and estimating risk, assuming continuously compounding returns are normally distributed.
Because log space analysis is not as simple as just using math.log(price), this library extends the Error Functions library to make working with log-normally distributed data as simple as possible.
- - -
QUICK START
Import library into your project
Initialize model with a mean and standard deviation
Pass model params between methods to compute various properties
var LogNorm model = LN.init(arr.avg(), arr.stdev()) // Assumes the library is imported as LN
var mode = model.mode()
Outputs from the model can be adjusted to better fit the data.
var Quantile data = arr.quantiles()
var more_accurate_mode = mode.fit(model, data) // Fits value from model to data
Inputs to the model can also be adjusted to better fit the data.
datum = 123.45
model_equivalent_datum = datum.fit(data, model) // Fits value from data to the model
area_from_zero_to_datum = model.cdf(model_equivalent_datum)
- - -
TYPES
There are two requisite UDTs: LogNorm and Quantile. They are used to pass parameters between functions and are set automatically (see Type Management).
LogNorm
Object for log space parameters and linear space quantiles.
Fields:
mu (float) : Log space mu ( µ ).
sigma (float) : Log space sigma ( σ ).
variance (float) : Log space variance ( σ² ).
quantiles (Quantile) : Linear space quantiles.
Quantile
Object for linear quantiles, most similar to a seven-number summary.
Fields:
Q0 (float) : Smallest Value
LW (float) : Lower Whisker Endpoint
LC (float) : Lower Whisker Crosshatch
Q1 (float) : First Quartile
Q2 (float) : Second Quartile
Q3 (float) : Third Quartile
UC (float) : Upper Whisker Crosshatch
UW (float) : Upper Whisker Endpoint
Q4 (float) : Largest Value
IQR (float) : Interquartile Range
MH (float) : Midhinge
TM (float) : Trimean
MR (float) : Mid-Range
- - -
TYPE MANAGEMENT
These functions reliably initialize and update the UDTs. Because parameterization is interdependent, avoid setting the LogNorm and Quantile fields directly .
init(mean, stdev, variance)
Initializes a LogNorm object.
Parameters:
mean (float) : Linearly measured mean.
stdev (float) : Linearly measured standard deviation.
variance (float) : Linearly measured variance.
Returns: LogNorm Object
set(ln, mean, stdev, variance)
Transforms linear measurements into log space parameters for a LogNorm object.
Parameters:
ln (LogNorm) : Object containing log space parameters.
mean (float) : Linearly measured mean.
stdev (float) : Linearly measured standard deviation.
variance (float) : Linearly measured variance.
Returns: LogNorm Object
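For reference, the standard moment-matching formulas that map a linear mean and standard deviation to log space parameters are sigma² = ln(1 + stdev² / mean²) and mu = ln(mean) - sigma² / 2. Below is a small Pine v5 sketch of that conversion; the lookback is illustrative, and the library's set() presumably does something equivalent internally:
//@version=5
indicator("Linear-to-Log Params Sketch", overlay = false)
len = input.int(200, "Lookback", minval = 2)
m = ta.sma(close, len)
s = ta.stdev(close, len)
logVariance = math.log(1.0 + (s * s) / (m * m))   // log space sigma^2
logMu       = math.log(m) - logVariance / 2.0     // log space mu
logSigma    = math.sqrt(logVariance)
plot(logMu, "mu")
plot(logSigma, "sigma")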
quantiles(arr)
Gets empirical quantiles from an array of floats.
Parameters:
arr (array) : Float array object.
Returns: Quantile Object
- - -
DESCRIPTIVE STATISTICS
Using only the initialized LogNorm parameters, these functions compute a model's central tendency and standardized moments.
mean(ln)
Computes the linear mean from log space parameters.
Parameters:
ln (LogNorm) : Object containing log space parameters.
Returns: Between 0 and ∞
median(ln)
Computes the linear median from log space parameters.
Parameters:
ln (LogNorm) : Object containing log space parameters.
Returns: Between 0 and ∞
mode(ln)
Computes the linear mode from log space parameters.
Parameters:
ln (LogNorm) : Object containing log space parameters.
Returns: Between 0 and ∞
variance(ln)
Computes the linear variance from log space parameters.
Parameters:
ln (LogNorm) : Object containing log space parameters.
Returns: Between 0 and ∞
skewness(ln)
Computes the linear skewness from log space parameters.
Parameters:
ln (LogNorm) : Object containing log space parameters.
Returns: Between 0 and ∞
kurtosis(ln, excess)
Computes the linear kurtosis from log space parameters.
Parameters:
ln (LogNorm) : Object containing log space parameters.
excess (bool) : Excess Kurtosis (true) or regular Kurtosis (false).
Returns: Between 0 and ∞
hyper_skewness(ln)
Computes the linear hyper skewness from log space parameters.
Parameters:
ln (LogNorm) : Object containing log space parameters.
Returns: Between 0 and ∞
hyper_kurtosis(ln, excess)
Computes the linear hyper kurtosis from log space parameters.
Parameters:
ln (LogNorm) : Object containing log space parameters.
excess (bool) : Excess Hyper Kurtosis (true) or regular Hyper Kurtosis (false).
Returns: Between 0 and ∞
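For orientation, these functions correspond to the textbook log-normal identities: mean = exp(mu + sigma²/2), median = exp(mu), mode = exp(mu - sigma²), and variance = (exp(sigma²) - 1) * exp(2*mu + sigma²). A tiny Pine v5 sketch, with mu and sigma supplied as inputs purely for illustration:
//@version=5
indicator("Log-Normal Moments Sketch", overlay = false)
mu = input.float(0.0, "mu (log space)")
sigma = input.float(0.5, "sigma (log space)", minval = 0.001)
s2 = sigma * sigma
plot(math.exp(mu + s2 / 2), "Mean")
plot(math.exp(mu), "Median")
plot(math.exp(mu - s2), "Mode")
plot((math.exp(s2) - 1) * math.exp(2 * mu + s2), "Variance")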
- - -
DISTRIBUTION FUNCTIONS
These wrap Gaussian functions to make working with model space more direct. Because they are contained within a log-normal library, they describe estimations relative to a log-normal curve, even though they fundamentally measure a Gaussian curve.
pdf(ln, x, empirical_quantiles)
A Probability Density Function estimates the probability density. For clarity, density is not a probability.
Parameters:
ln (LogNorm) : Object of log space parameters.
x (float) : Linear X coordinate for which a density will be estimated.
empirical_quantiles (Quantile) : Quantiles as observed in the data (optional).
Returns: Between 0 and ∞
cdf(ln, x, precise)
A Cumulative Distribution Function estimates the area under a Log-Normal curve between Zero and a linear X coordinate.
Parameters:
ln (LogNorm) : Object of log space parameters.
x (float) : Linear X coordinate .
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and 1
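Conceptually, this is the normal CDF applied in log space: cdf(x) = 0.5 * (1 + erf((ln(x) - mu) / (sigma * sqrt(2)))). The hypothetical sketch below rebuilds it with a coarse Abramowitz-Stegun approximation of erf; the library's own Error Functions implementation is more precise:
//@version=5
indicator("Log-Normal CDF Sketch", overlay = false)
// Abramowitz & Stegun 7.1.26 approximation of the error function
erf(float z) =>
    s = z < 0 ? -1.0 : 1.0
    a = math.abs(z)
    t = 1.0 / (1.0 + 0.3275911 * a)
    y = 1.0 - (((((1.061405429 * t - 1.453152027) * t) + 1.421413741) * t - 0.284496736) * t + 0.254829592) * t * math.exp(-a * a)
    s * y
lognormCdf(float x, float mu, float sigma) =>
    x <= 0 ? 0.0 : 0.5 * (1.0 + erf((math.log(x) - mu) / (sigma * math.sqrt(2.0))))
mu = input.float(0.0, "mu")
sigma = input.float(0.5, "sigma", minval = 0.001)
plot(lognormCdf(close, mu, sigma), "P(X <= close)")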
ccdf(ln, x, precise)
A Complementary Cumulative Distribution Function estimates the area under a Log-Normal curve between a linear X coordinate and Infinity.
Parameters:
ln (LogNorm) : Object of log space parameters.
x (float) : Linear X coordinate .
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and 1
cdfinv(ln, a, precise)
An Inverse Cumulative Distribution Function reverses the Log-Normal cdf() by estimating the linear X coordinate from an area.
Parameters:
ln (LogNorm) : Object of log space parameters.
a (float) : Normalized area .
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and ∞
ccdfinv(ln, a, precise)
An Inverse Complementary Cumulative Distribution Function reverses the Log-Normal ccdf() by estimating the linear X coordinate from an area.
Parameters:
ln (LogNorm) : Object of log space parameters.
a (float) : Normalized area .
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and ∞
cdfab(ln, x1, x2, precise)
A Cumulative Distribution Function from A to B estimates the area under a Log-Normal curve between two linear X coordinates (A and B).
Parameters:
ln (LogNorm) : Object of log space parameters.
x1 (float) : First linear X coordinate .
x2 (float) : Second linear X coordinate .
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and 1
ott(ln, x, precise)
A One-Tailed Test transforms a linear X coordinate into an absolute Z Score before estimating the area under a Log-Normal curve between Z and Infinity.
Parameters:
ln (LogNorm) : Object of log space parameters.
x (float) : Linear X coordinate .
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and 0.5
ttt(ln, x, precise)
A Two-Tailed Test transforms a linear X coordinate into symmetrical ± Z Scores before estimating the area under a Log-Normal curve from Zero to -Z, and +Z to Infinity.
Parameters:
ln (LogNorm) : Object of log space parameters.
x (float) : Linear X coordinate .
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and 1
ottinv(ln, a, precise)
An Inverse One-Tailed Test reverses the Log-Normal ott() by estimating a linear X coordinate for the right tail from an area.
Parameters:
ln (LogNorm) : Object of log space parameters.
a (float) : Half a normalized area .
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and ∞
tttinv(ln, a, precise)
An Inverse Two-Tailed Test reverses the Log-Normal ttt() by estimating two linear X coordinates from an area.
Parameters:
ln (LogNorm) : Object of log space parameters.
a (float) : Normalized area .
precise (bool) : Double precision (true) or single precision (false).
Returns: Linear space tuple of the two X coordinates
- - -
UNCERTAINTY
Model-based measures of uncertainty, information, and risk.
sterr(sample_size, fisher_info)
The standard error of a sample statistic.
Parameters:
sample_size (float) : Number of observations.
fisher_info (float) : Fisher information.
Returns: Between 0 and ∞
surprisal(p, base)
Quantifies the information content of a single event.
Parameters:
p (float) : Probability of the event .
base (float) : Logarithmic base (optional).
Returns: Between 0 and ∞
entropy(ln, base)
Computes the differential entropy (average surprisal).
Parameters:
ln (LogNorm) : Object of log space parameters.
base (float) : Logarithmic base (optional).
Returns: Between 0 and ∞
perplexity(ln, base)
Computes the average number of distinguishable outcomes from the entropy.
Parameters:
ln (LogNorm)
base (float) : Logarithmic base used for Entropy (optional).
Returns: Between 0 and ∞
value_at_risk(ln, p, precise)
Estimates a risk threshold under normal market conditions for a given confidence level.
Parameters:
ln (LogNorm) : Object of log space parameters.
p (float) : Probability threshold, aka. the confidence level .
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and ∞
value_at_risk_inv(ln, value_at_risk, precise)
Reverses the value_at_risk() by estimating the confidence level from the risk threshold.
Parameters:
ln (LogNorm) : Object of log space parameters.
value_at_risk (float) : Value at Risk.
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and 1
conditional_value_at_risk(ln, p, precise)
Estimates the average loss beyond a confidence level, aka. expected shortfall.
Parameters:
ln (LogNorm) : Object of log space parameters.
p (float) : Probability threshold, aka. the confidence level .
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and ∞
conditional_value_at_risk_inv(ln, conditional_value_at_risk, precise)
Reverses the conditional_value_at_risk() by estimating the confidence level of an average loss.
Parameters:
ln (LogNorm) : Object of log space parameters.
conditional_value_at_risk (float) : Conditional Value at Risk.
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and 1
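Under the usual closed-form log-normal expressions, and assuming a lower-tail convention, these quantities reduce to:
value_at_risk(p) = exp(mu + sigma * z_p), where z_p = Φ⁻¹(p) is the standard normal quantile of p
conditional_value_at_risk(p) = exp(mu + sigma² / 2) * Φ(z_p - sigma) / p, where Φ is the standard normal CDF
The library's exact sign and confidence-level conventions may differ, so treat these as orientation rather than a specification of its output.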
partial_expectation(ln, x, precise)
Estimates the partial expectation of a linear X coordinate.
Parameters:
ln (LogNorm) : Object of log space parameters.
x (float) : Linear X coordinate .
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and µ
partial_expectation_inv(ln, partial_expectation, precise)
Reverses the partial_expectation() by estimating a linear X coordinate.
Parameters:
ln (LogNorm) : Object of log space parameters.
partial_expectation (float) : Partial Expectation .
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and ∞
conditional_expectation(ln, x, precise)
Estimates the conditional expectation of a linear X coordinate.
Parameters:
ln (LogNorm) : Object of log space parameters.
x (float) : Linear X coordinate .
precise (bool) : Double precision (true) or single precision (false).
Returns: Between X and ∞
conditional_expectation_inv(ln, conditional_expectation, precise)
Reverses the conditional_expectation by estimating a linear X coordinate.
Parameters:
ln (LogNorm) : Object of log space parameters.
conditional_expectation (float) : Conditional Expectation .
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and ∞
fisher(ln, log)
Computes the Fisher Information Matrix for the distribution, not a linear X coordinate.
Parameters:
ln (LogNorm) : Object of log space parameters.
log (bool) : Sets if the matrix should be in log (true) or linear (false) space.
Returns: FIM for the distribution
fisher(ln, x, log)
Computes the Fisher Information Matrix for a linear X coordinate, not the distribution itself.
Parameters:
ln (LogNorm) : Object of log space parameters.
x (float) : Linear X coordinate .
log (bool) : Sets if the matrix should be in log (true) or linear (false) space.
Returns: FIM for the linear X coordinate
confidence_interval(ln, x, sample_size, confidence, precise)
Estimates a confidence interval for a linear X coordinate.
Parameters:
ln (LogNorm) : Object of log space parameters.
x (float) : Linear X coordinate .
sample_size (float) : Number of observations.
confidence (float) : Confidence level .
precise (bool) : Double precision (true) or single precision (false).
Returns: CI for the linear X coordinate
- - -
CURVE FITTING
An overloaded function that helps transform values between spaces. The primary function uses quantiles, and the overloads wrap the primary function to make working with LogNorm more direct.
fit(x, a, b)
Transforms X coordinate between spaces A and B.
Parameters:
x (float) : Linear X coordinate from space A .
a (LogNorm | Quantile | array) : LogNorm, Quantile, or float array.
b (LogNorm | Quantile | array) : LogNorm, Quantile, or float array.
Returns: Adjusted X coordinate
- - -
EXPORTED HELPERS
Small utilities to simplify extensibility.
z_score(ln, x)
Converts a linear X coordinate into a Z Score.
Parameters:
ln (LogNorm) : Object of log space parameters.
x (float) : Linear X coordinate.
Returns: Between -∞ and +∞
x_coord(ln, z)
Converts a Z Score into a linear X coordinate.
Parameters:
ln (LogNorm) : Object of log space parameters.
z (float) : Standard normal Z Score.
Returns: Between 0 and ∞
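These two helpers are inverses of each other; a minimal Pine v5 sketch of the round trip, with mu and sigma supplied as inputs for illustration:
//@version=5
indicator("Log-Space Z Sketch", overlay = false)
mu = input.float(0.0, "mu")
sigma = input.float(0.5, "sigma", minval = 0.001)
z = (math.log(close) - mu) / sigma   // linear X -> Z score
x = math.exp(mu + z * sigma)         // Z score -> linear X (recovers close)
plot(z, "Z Score")
plot(x, "Recovered X")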
iget(arr, index)
Gets an interpolated value of a pseudo-element (fictional element between real array elements). Useful for quantile mapping.
Parameters:
arr (array) : Float array object.
index (float) : Index of the pseudo element.
Returns: Interpolated value of the array's pseudo element.
Polynomial Regression Heatmap
Polynomial Regression Heatmap – Advanced Trend & Volatility Visualizer
Overview
The Polynomial Regression Heatmap is a sophisticated trading tool designed for traders who require a clear and precise understanding of market trends and volatility. By applying a second-degree polynomial regression to price data, the indicator generates a smooth trend curve, augmented with adaptive volatility bands and a dynamic heatmap. This framework allows users to instantly recognize trend direction, potential reversals, and areas of market strength or weakness, translating complex price action into a visually intuitive map.
Unlike static trend indicators, the Polynomial Regression Heatmap adapts to changing market conditions. Its visual design—including color-coded candles, regression bands, optional polynomial channels, and breakout markers—ensures that price behavior is easy to interpret. This makes it suitable for scalping, swing trading, and longer-term strategies across multiple asset classes.
How It Works
The core of the indicator relies on fitting a second-degree polynomial to a defined lookback period of price data. This regression curve captures the non-linear nature of market movements, revealing the true trajectory of price beyond the distortions of noise or short-term volatility.
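To make the underlying math concrete, the sketch below fits y = a + b*x + c*x² over the last len bars by solving the least-squares normal equations with Cramer's rule. It is a bare-bones illustration assuming a 50-bar lookback, and it omits the bands, heatmap, and channel described next:
//@version=5
indicator("Quadratic Regression Sketch", overlay = true)
len = input.int(50, "Regression Length", minval = 5)
s1 = 0.0
s2 = 0.0
s3 = 0.0
s4 = 0.0
t0 = 0.0
t1 = 0.0
t2 = 0.0
for i = 0 to len - 1
    x = float(i)
    y = close[len - 1 - i]
    s1 += x
    s2 += x * x
    s3 += x * x * x
    s4 += x * x * x * x
    t0 += y
    t1 += x * y
    t2 += x * x * y
s0  = float(len)
det = s0 * (s2 * s4 - s3 * s3) - s1 * (s1 * s4 - s3 * s2) + s2 * (s1 * s3 - s2 * s2)
a   = (t0 * (s2 * s4 - s3 * s3) - s1 * (t1 * s4 - s3 * t2) + s2 * (t1 * s3 - s2 * t2)) / det
b   = (s0 * (t1 * s4 - t2 * s3) - t0 * (s1 * s4 - s3 * s2) + s2 * (s1 * t2 - s2 * t1)) / det
c   = (s0 * (s2 * t2 - s3 * t1) - s1 * (s1 * t2 - s2 * t1) + t0 * (s1 * s3 - s2 * s2)) / det
xN  = float(len - 1)
plot(a + b * xN + c * xN * xN, "Quadratic Fit", color = color.orange, linewidth = 2)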
Adaptive upper and lower bands are constructed using ATR-based scaling, surrounding the regression line to reflect periods of high and low volatility. When price moves toward or beyond these bands, it signals areas of potential overextension or support/resistance.
The heatmap colors each candle based on its relative position within the bands. Green shades indicate proximity to the upper band, red shades indicate proximity to the lower band, and neutral tones represent mid-range positioning. This continuous gradient visualization provides immediate feedback on trend strength, market balance, and potential turning points.
Optional polynomial channels can be overlaid around the regression curve. These three-line channels are based on regression residuals and a fixed width multiplier, offering additional reference points for analyzing price deviations, trend continuation, and reversion zones.
Signals and Breakouts
The Polynomial Regression Heatmap includes statistical pivot-based signals to highlight actionable price movements:
Buy Signals – A triangular marker appears below the candle when a pivot low occurs below the lower regression band.
Sell Signals – A triangular marker appears above the candle when a pivot high occurs above the upper regression band.
These markers identify significant deviations from the regression curve while accounting for volatility, providing high-quality visual cues for potential entry points.
The indicator ensures clarity by spacing markers vertically using ATR-based calculations, preventing overlap during periods of high volatility. Users can rely on these signals in combination with heatmap intensity and regression slope for contextual confirmation.
Interpretation
Trend Analysis :
The slope of the polynomial regression line represents trend direction. A rising curve indicates bullish bias, a falling curve indicates bearish bias, and a flat curve indicates consolidation.
Steeper slopes suggest stronger momentum, while gradual slopes indicate more moderate trend conditions.
Volatility Assessment :
Band width provides an instant visual measure of market volatility. Narrow bands correspond to low volatility and potential consolidation, whereas wide bands indicate higher volatility and significant price swings.
Heatmap Coloring :
Candle colors visually represent price position within the bands. This allows traders to quickly identify zones of bullish or bearish pressure without performing complex calculations.
Channel Analysis (Optional) :
The polynomial channel defines zones for evaluating potential overextensions or retracements. Price interacting with these lines may suggest areas where mean-reversion or trend continuation is likely.
Breakout Signals :
Buy and Sell markers highlight pivot points relative to the regression and volatility bands. These are statistical signals, not arbitrary triggers, and should be interpreted in context with trend slope, band width, and heatmap intensity.
Strategy Integration
The Polynomial Regression Heatmap supports multiple trading approaches:
Trend Following – Enter trades in the direction of the regression slope while using the heatmap for momentum confirmation.
Pullback Entries – Use breakouts or deviations from the regression bands as low-risk entry points during trend continuation.
Mean Reversion – Price reaching outer channel boundaries can indicate potential reversal or retracement opportunities.
Multi-Timeframe Alignment – Overlay on higher and lower timeframes to filter noise and improve entry timing.
Stop-loss levels can be set just beyond the opposing regression band, while take-profit targets can be informed by the distance between the bands or the curvature of the polynomial line.
Advanced Techniques
For traders seeking greater precision:
Combine the Polynomial Regression Heatmap with volume, momentum, or volatility indicators to validate signals.
Observe the width and slope of the regression bands over time to anticipate expanding or contracting volatility.
Track sequences of breakout signals in conjunction with heatmap intensity for systematic trade management.
Adjusting regression length allows customization for different assets or timeframes, balancing responsiveness and smoothing. The combination of polynomial curve, adaptive bands, heatmap, and optional channels provides a comprehensive statistical framework for informed decision-making.
Inputs and Customization
Regression Length – Determines the number of bars used for polynomial fitting. Shorter lengths increase responsiveness; longer lengths improve smoothing.
Show Bands – Toggle visibility of the ATR-based regression bands.
Show Channel – Enable or disable the polynomial channel overlay.
Color Settings – Customize bullish, bearish, neutral, and accent colors for clarity and visual preference.
All other internal parameters are fixed to ensure consistent statistical behavior and minimize potential misconfiguration.
Why Use Polynomial Regression Heatmap
The Polynomial Regression Heatmap transforms complex price action into a clear, actionable visual framework. By combining non-linear trend mapping, adaptive volatility bands, heatmap visualization, and breakout signals, it provides a multi-dimensional perspective that is both quantitative and intuitive.
This indicator allows traders to focus on execution, interpret market structure at a glance, and evaluate trend strength, overextensions, and potential reversals in real time. Its design is compatible with scalping, swing trading, and long-term strategies, providing a robust tool for disciplined, data-driven trading.
Neural Pulse System [Alpha Extract]
Neural Pulse System (NPS)
The Neural Pulse System (NPS) is a custom technical indicator that analyzes price action through a probabilistic lens, offering a dynamic view of bullish and bearish tendencies.
Unlike traditional binary classification models, NPS employs Ordinary Least Squares (OLS) regression with dynamically computed coefficients to produce a smooth probability output ranging from -1 to 1.
Paired with ATR-based bands, this indicator provides an intuitive and volatility-aware approach to trend analysis.
🔶 CALCULATION
The Neural Pulse System utilizes OLS regression to compute probabilities of bullish or bearish price action while incorporating ATR-based bands for volatility context:
Dynamic Coefficients: Coefficients are recalculated in real-time and scaled up to ensure the regression adapts to evolving market conditions.
Ordinary Least Squares (OLS): Uses OLS regression instead of gradient descent for more precise and efficient coefficient estimation.
ATR Bands: Smoothed Average True Range (ATR) bands serve as dynamic boundaries, framing the regression within market volatility.
Probability Output: Instead of a binary result, the output is a continuous probability curve (-1 to 1), helping traders gauge the strength of bullish or bearish momentum.
Formula:
OLS Regression = Line of best fit minimizing squared errors
Probability Signal = Transformed regression output scaled to -1 (bearish) to 1 (bullish)
ATR Bands = Smoothed Average True Range (ATR) to frame price movements within market volatility
🔶 DETAILS
📊 Visual Features:
Probability Curve: Smooth probability signal ranging from -1 (bearish) to 1 (bullish)
ATR Bands: Price action is constrained within volatility bands, preventing extreme deviations
Color-Coded Signals:
Blue to Green: Increasing probability of bullish momentum
Orange to Red: Increasing probability of bearish momentum
Interpretation:
Bullish Bias: Probability output consistently above 0 suggests a bullish trend.
Bearish Bias: Probability output consistently below 0 indicates bearish pressure.
Reversals: Extreme values near -1 or 1, followed by a move toward 0, may signal potential trend reversals.
🔶 EXAMPLES
📌 Trend Identification: Use the probability output to gauge trend direction.
📌Example: On a 1-hour chart, NPS moves from -0.5 to 0.8 as price breaks resistance, signaling a bullish trend.
Reversal Signals: Watch for probability extremes near -1 or 1 followed by a reversal toward 0.
Example: NPS hits 0.9, price touches the upper ATR band, then both retreat—indicating a potential pullback.
Volatility Context: ATR bands help assess whether price action aligns with typical market conditions.
Example: During low volatility, the probability signal hovers near 0, and ATR bands tighten, suggesting a potential breakout.
🔶 SETTINGS
Customization Options:
ATR Period – Defines lookback length for ATR calculation (shorter = more responsive, longer = smoother).
ATR Multiplier – Adjusts band width for better volatility capture.
Regression Length – Controls how many bars feed into the coefficient calculation (longer = smoother, shorter = more reactive).
Scaling Factor – Adjusts the strength of regression coefficients.
Output Smoothing – Option to apply a moving average for a cleaner probability curve
Dynamic ALMA with signals
Enhanced ALMA with Signals
This TradingView indicator is designed to enhance your trading strategy by utilizing the Arnaud Legoux Moving Average (ALMA), a unique moving average that provides smoother price action while minimizing lag. The script not only plots the ALMA line but also dynamically adjusts its parameters based on market volatility to adapt to different trading conditions. Additionally, it highlights potential bounce points off the line, as well as breakout points, giving traders clear signals for potential support, resistance levels, and breakouts.
Key Features:
Dynamic ALMA Line with Glow Effect:
The core of this indicator is the ALMA line, which is dynamically adjusted to market volatility, providing more accurate signals in varying conditions. The line adapts to both trending and consolidating markets by adjusting its sensitivity in real time. A glow effect is created by plotting the ALMA line multiple times with increasing transparency, making it visually distinct.
Bounce Detection Signals with Volatility Filter:
The script detects and labels potential support and resistance bounces based on the crossover and crossunder of the price with the ALMA line, further filtered by a volatility condition. This helps in filtering out false signals during low-volatility conditions, making the signals more reliable.
Visual Enhancements:
Custom glow effects and labels for bounce detection enhance chart readability and help traders quickly identify key levels.
Inputs:
Base Window Size: Sets the number of bars used in calculating the ALMA, allowing traders to adjust the sensitivity of the moving average. This parameter is dynamically adjusted based on current market volatility.
Offset: Determines the position of the ALMA curve. Higher values move the curve further away from the price. This value remains constant for stability.
Sigma: Controls the smoothness of the ALMA curve; a higher sigma results in a smoother curve. This value also remains constant.
ATR Period and Threshold Multiplier: Used to calculate the Average True Range (ATR) for the volatility filter, which determines whether the market conditions are sufficiently volatile to consider bounce signals.
How It Works:
Dynamic ALMA Calculation:
The script calculates the ALMA (Arnaud Legoux Moving Average) using the ta.alma function, dynamically adjusting the window size based on market volatility measured by the ATR (Average True Range). This ensures that the ALMA line remains responsive in high-volatility environments and smooth in low-volatility conditions.
Glow Effect:
To create a glow effect around the ALMA line, the script plots the ALMA multiple times with varying degrees of transparency. This visual enhancement helps the ALMA line stand out on the chart.
Bounce Detection with Volatility Filter:
The script uses two conditions to detect potential bounces:
Support Bounce: Detected when the low of the bar crosses above the ALMA line (ta.crossover(low, alma)) and the close is above the ALMA, while the volatility filter confirms sufficient market activity. This suggests potential support at the ALMA line.
Resistance Bounce: Detected when the high of the bar crosses below the ALMA line (ta.crossunder(high, alma)) and the close is below the ALMA, while the volatility filter confirms sufficient market activity. This indicates potential resistance at the ALMA line.
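A minimal Pine v5 sketch of these two conditions is shown below; the ALMA window, offset, sigma, and the volatility threshold are illustrative values rather than the indicator's dynamically adjusted settings:
//@version=5
indicator("ALMA Bounce Sketch", overlay = true)
almaLine = ta.alma(close, 50, 0.85, 6)
volOk = ta.atr(14) > 0.5 * ta.sma(ta.atr(14), 50)   // crude stand-in for the volatility filter
supportBounce    = ta.crossover(low, almaLine) and close > almaLine and volOk
resistanceBounce = ta.crossunder(high, almaLine) and close < almaLine and volOk
plot(almaLine, "ALMA", color = color.aqua, linewidth = 2)
plotshape(supportBounce, title = "S", style = shape.labelup, location = location.belowbar, color = color.blue, text = "S")
plotshape(resistanceBounce, title = "R", style = shape.labeldown, location = location.abovebar, color = color.gray, text = "R")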
Labeling Bounce Points:
When a bounce is detected, the script labels it on the chart:
Support Bounces (S): Labeled with a blue "S" below the bar where a support bounce is detected.
Resistance Bounces (R): Labeled with a white "R" above the bar where a resistance bounce is detected.
Usage:
This enhanced indicator helps traders visualize key support and resistance levels more effectively by dynamically adjusting the ALMA moving average to market conditions. By detecting and labeling potential bounce points and filtering these signals based on volatility, traders can better identify entry and exit points in their trading strategy. The dynamic adjustments and visual enhancements make it easier to spot critical levels quickly and adapt to changing market conditions.
Customize the inputs to fit your trading style, and use this enhanced ALMA indicator to gain a more refined understanding of market trends, potential reversals, and breakouts.
SMIIO + Volume
This indicator generates long and short signals.
The operation of the indicator is as follows;
First, true strength index is calculated with closing prices. We call this the "ergodic" curve.
Then the average of the ergodic (ema) is calculated to obtain the "signal" curve.
To calculate the "oscillator", the signal is subtracted from ergodic (oscillator = ergodic - signal).
The last variable to be used in the calculation is the average volume, calculated with sma.
Calculation for long signal:
- If the ergodic curve crosses up through the zero line (ergodic > 0 AND ergodic[1] < 0) and,
- If the current oscillator is greater than the previous oscillator (oscillator > oscillator[1]) and,
- If the current ergodic is greater than the previous signal (ergodic > signal[1]) and,
- If the current volume is greater than the average volume (volume > averageVolume) and,
- If the current candle closing price is greater than the opening price (close > open)
If all the above conditions are fulfilled, the long entry signal is issued with a "Buy" label.
Calculation for short signal:
- If the ergodic curve crosses down through the zero line (ergodic < 0 AND ergodic[1] > 0) and,
- If the current oscillator is smaller than the previous oscillator (oscillator < oscillator[1]) and,
- If the current ergodic is smaller than the previous signal (ergodic < signal[1]) and,
- If the current volume is greater than the average volume (volume > averageVolume) and,
- If the current candle closing price is smaller than the opening price (close < open)
If all the above conditions are fulfilled, the short entry signal is issued with a "Sell" label.
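Putting the pieces together, a hedged Pine v5 sketch of this logic might look like the following, using the built-in TSI as the ergodic curve; the lengths and the volume average period are illustrative placeholders:
//@version=5
indicator("SMIIO + Volume Sketch", overlay = true)
ergodic    = ta.tsi(close, 13, 25)   // true strength index ("ergodic" curve)
signal     = ta.ema(ergodic, 13)     // signal curve
oscillator = ergodic - signal
avgVolume  = ta.sma(volume, 20)
longSignal  = ergodic > 0 and ergodic[1] < 0 and oscillator > oscillator[1] and ergodic > signal[1] and volume > avgVolume and close > open
shortSignal = ergodic < 0 and ergodic[1] > 0 and oscillator < oscillator[1] and ergodic < signal[1] and volume > avgVolume and close < open
plotshape(longSignal, title = "Buy", style = shape.labelup, location = location.belowbar, color = color.green, text = "Buy")
plotshape(shortSignal, title = "Sell", style = shape.labeldown, location = location.abovebar, color = color.red, text = "Sell")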
Treasury Yields Heatmap [By MUQWISHI]
▋ INTRODUCTION:
The “Treasury Yields Heatmap” generates a dynamic heat map table, showing treasury bond yield values corresponding with dates. In the last column, it presents the status of the yield curve, discerning whether it’s in a normal, flat, or inverted configuration, which is determined using the Pearson correlation coefficient (R). This tool is built to offer traders essential insights for effectively tracking bond yields and monitoring yield curve status, featuring the flexibility to set a starting period, choose a timeframe, and select from a range of major countries' bond data.
_______________________
▋ OVERVIEW:
______________________
▋ YIELD CURVE:
It is determined through the Pearson correlation coefficient (R) and classified as follows:
R ≥ 0.7 → Normal
0.7 > R ≥ 0.35 → Slight Normal
0.35 > R > -0.35 → Flat
-0.35 ≥ R > -0.7 → Slight Inverted
-0.7 ≥ R → Inverted
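As an illustration of how such a classification can be computed, the sketch below measures Pearson's R between a handful of maturities and their yields and maps it onto the buckets above. The yield tickers and the four-point curve are simplifications for the example, not the script's actual symbol list:
//@version=5
indicator("Yield Curve Status Sketch", overlay = false)
// Illustrative US Treasury yield tickers; an exchange prefix (e.g., "TVC:US02Y") may be needed
y02 = request.security("US02Y", "D", close)
y05 = request.security("US05Y", "D", close)
y10 = request.security("US10Y", "D", close)
y30 = request.security("US30Y", "D", close)
maturities = array.from(2.0, 5.0, 10.0, 30.0)
yields     = array.from(y02, y05, y10, y30)
// Pearson correlation between maturity and yield across the curve
r = array.covariance(yields, maturities) / (array.stdev(yields) * array.stdev(maturities))
status = r >= 0.7 ? "Normal" : r >= 0.35 ? "Slight Normal" : r > -0.35 ? "Flat" : r > -0.7 ? "Slight Inverted" : "Inverted"
plot(r, "Pearson R")
if barstate.islast
    label.new(bar_index, r, status, style = label.style_label_left)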
_______________________
▋ INDICATOR SETTINGS:
#Section One: Table Setting
#Section Two: Technical Setting
(1) Country: Select country’s treasury yields data
(2) Timeframe: Time interval.
(3) Fetch By:
(3A) Date: Retrieve data starting from a chosen beginning date.
(3B) Period: Retrieve data by specifying the number of periods to look back.
Enjoy. Please let me know if you have any questions.
Thank you.
Crude Oil: Backwardation Vs Contango
Crude Oil, CL
Plots Futures Curve: Futures contract prices over the next 3.5 years; to easily visualize Backwardation Vs Contango (carrying charge) markets.
Carrying charge (contract prices increasing into the future) = normal, representing the costs of carrying/storage of a commodity. When this is flipped to Backwardation (as above; contract prices decreasing into the future), it's a bullish sign: Buyers want this commodity, and they want it NOW.
Note: indicator does not map to time axis in the same way as price; it simply plots the progression of contract months out into the future; left to right; so timeframe DOESN'T MATTER for this plot
TO UPDATE (every year or so): in the REQUEST CONTRACTS section, delete old contracts (top) and add new ones (bottom). Then in the PLOTTING section, delete old contract labels (bottom), add new contract labels (top), and adjust the X in the 'bar_index-(X+_historical)' numbers accordingly.
This is one of several similar Futures Curve indicators: Meats | Metals | Grains | VIX | Crude Oil
If you want to build from this to work on other commodities, be aware that TradingView limits the number of contract calls to 40 (hence the multiple indicators).
Tips:
-Right click and reset chart if you can't see the plot; or if you have trouble with the scaling.
-Right click and add to new scale if you prefer this not to overlay directly on price. Or move to new pane below.
-If this takes too long to load (due to so many security calls); comment out the more distant future half of the contracts; and their respective labels. Or comment out every other contract and every other label if you prefer.
--Added historical input: input days back in time; to see the historical shape of the Futures curve via selecting 'days back' snapshot
updated 20th June 2022
© twingall
Nadaraya-Watson: Rational Quadratic Kernel (Opening Gap Shift)
What we did to fix it: We didn't throw out the old data (that made it too jumpy early in the day).
Instead, we "tricked" the kernel by shifting all the previous day's prices up or down by the exact gap amount (e.g., if it gapped up 50 points, add 50 to every old price point). This makes the history "line up" with the new day's starting level.
Created so that with a fresh session the Nadaraya-Watson regression kernel is relevant from the get-go, with no catch-up on opening gaps.
All credit to jdehorty; his full description is below.
What is Nadaraya–Watson Regression?
Nadaraya–Watson Regression is a type of Kernel Regression, which is a non-parametric method for estimating the curve of best fit for a dataset. Unlike Linear Regression or Polynomial Regression, Kernel Regression does not assume any underlying distribution of the data. For estimation, it uses a kernel function, which is a weighting function that assigns a weight to each data point based on how close it is to the current point. The computed weights are then used to calculate the weighted average of the data points.
How is this different from using a Moving Average?
A Simple Moving Average is actually a special type of Kernel Regression that uses a Uniform (Rectangular) Kernel function. This means that all data points in the specified lookback window are weighted equally. In contrast, the Rational Quadratic Kernel function used in this indicator assigns a higher weight to data points that are closer to the current point. This means that the indicator will react more quickly to changes in the data.
Why use the Rational Quadratic Kernel over the Gaussian Kernel?
The Gaussian Kernel is one of the most commonly used Kernel functions and is used extensively in many Machine Learning algorithms due to its general applicability across a wide variety of datasets. The Rational Quadratic Kernel can be thought of as a Gaussian Kernel on steroids; it is equivalent to adding together many Gaussian Kernels of differing length scales. This allows the user even more freedom to tune the indicator to their specific needs.
The formula for the Rational Quadratic function is:
K(x, x') = (1 + ||x - x'||^2 / (2 * alpha * h^2))^(-alpha)
where x and x' are data points, alpha is a hyperparameter that controls the smoothness (i.e. overall "wiggle") of the curve, and h is the bandwidth of the kernel.
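Below is a minimal, non-repainting Pine v5 sketch of a Nadaraya-Watson estimate built from this kernel, where each recent bar is weighted by its lag from the current bar; the defaults and the number of bars used are illustrative, and the gap-shift adjustment described above is not included:
//@version=5
indicator("NW Rational Quadratic Sketch", overlay = true)
h     = input.float(8.0, "Bandwidth", minval = 1.0)
alpha = input.float(1.0, "Relative Weighting", minval = 0.1)
nBars = input.int(25, "Bars Used", minval = 2)
num = 0.0
den = 0.0
for i = 0 to nBars - 1
    w = math.pow(1.0 + (i * i) / (2.0 * alpha * h * h), -alpha)
    num += close[i] * w
    den += w
plot(num / den, "NW Estimate", color = color.yellow, linewidth = 2)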
Does this Indicator Repaint?
No, this indicator has been intentionally designed to NOT repaint. This means that once a bar has closed, the indicator will never change the values in its plot. This is useful for backtesting and for trading strategies that require a non-repainting indicator.
Settings:
Bandwidth. This is the number of bars that the indicator will use as a lookback window.
Relative Weighting Parameter. The alpha parameter for the Rational Quadratic Kernel function. This is a hyperparameter that controls the smoothness of the curve. A lower value of alpha will result in a smoother, more stretched-out curve, while a higher value will result in a more wiggly curve with a tighter fit to the data. As this parameter approaches 0, the longer time frames will exert more influence on the estimation, and as it approaches infinity, the curve will become identical to the one produced by the Gaussian Kernel.
Color Smoothing. Toggles the mechanism for coloring the estimation plot between rate of change and cross over modes.
SMC Structures and Multi-Timeframe FVG PY
SMC Structures and Multi-Timeframe FVG Indicator
Tip: For optimal performance, adjust the number of FVGs displayed per timeframe in the settings. On high-performance devices, up to 8 FVGs per timeframe can be used without issues. If you experience slowdowns, reduce to 3 or 4 FVGs per timeframe. If the chart flashes, disable indicators one by one to identify conflicts, or try using the TradingView Mobile or Windows App for a smoother experience.
Overview
This Pine Script indicator enhances market analysis by integrating Smart Money Concepts (SMC) with Fair Value Gaps (FVG) across multiple timeframes. It identifies trend continuations (Break of Structure, BOS) and trend reversals (Change of Character, CHoCH) while highlighting liquidity zones through FVG detection. The indicator includes eight customizable Moving Average (MA) curve templates, disabled by default, to complement SMC and FVG analysis. Its originality lies in combining multi-timeframe FVG detection with SMC structure analysis, providing traders with a cohesive tool to visualize price action patterns and liquidity zones efficiently.
Features and Functionality
1. Fair Value Gaps (FVG)
The indicator detects and displays bullish, bearish, and mitigated FVGs, representing liquidity zones where price inefficiencies occur. These gaps are dynamically updated based on price action:
Bullish FVG: Displayed in green when unmitigated, indicating potential upward liquidity zones.
Bearish FVG: Displayed in red when unmitigated, signaling potential downward liquidity zones.
Mitigated FVG: Shown in gray once the gap is partially filled by price action.
Fully Mitigated FVG: Automatically removed from the chart when the gap is fully filled, reducing visual clutter.
Users can customize the number of historical FVGs displayed via the settings, allowing focus on recent liquidity zones for targeted analysis.
2. SMC Structures
The indicator identifies key SMC price action patterns:
Break of Structure (BOS): Marked with gray lines, indicating trend continuation when price breaks a significant high or low.
Change of Character (CHoCH): Highlighted with yellow lines, signaling potential trend reversals when price fails to maintain the current structure.
High/Low Values: Blue lines denote the highest high and lowest low of the current structure, providing reference points for market context.
3. Multi-Timeframe FVG Analysis
A standout feature is the ability to analyze FVGs across multiple timeframes simultaneously. This allows traders to align higher-timeframe liquidity zones with lower-timeframe entries, improving trade precision. The indicator fetches FVG data from user-selected timeframes, displaying them cohesively on the chart.
4. Moving Average (MA) Templates
The indicator includes eight customizable MA curve templates in the Settings > Template section, disabled by default. These templates allow users to overlay MAs (e.g., SMA, EMA, WMA) to complement SMC and FVG analysis. Each template is pre-configured with different periods and types, enabling quick adaptation to various trading strategies, such as trend confirmation or dynamic support/resistance.
How It Works
The script processes price action to detect FVGs by analyzing three-candle patterns where a gap forms between the high/low of the first and third candles. Multi-timeframe data is retrieved using Pine Script’s request.security() function, ensuring accurate FVG plotting across user-defined timeframes. BOS and CHoCH are identified by tracking swing highs and lows, with logic to differentiate trend continuation from reversals. The MA templates are computed using standard Pine Script TA functions, with user inputs controlling visibility and parameters.
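As a rough illustration of the three-candle gap test and the higher-timeframe request, here is a minimal Pine v5 sketch (the 60-minute timeframe, variable names, and plotting choices are assumptions for illustration, not the script's actual settings):

//@version=5
indicator("FVG detection sketch", overlay=true)
// Bullish FVG: the low of the current (third) candle sits above the high
// of the candle two bars back, leaving an unfilled gap; bearish is the mirror case.
bullFVG = low > high[2]
bearFVG = high < low[2]
// Hypothetical higher-timeframe check via request.security()
htfBullFVG = request.security(syminfo.tickerid, "60", low > high[2])
plotshape(bullFVG, "Bullish FVG", style=shape.triangleup, location=location.belowbar, color=color.green)
plotshape(bearFVG, "Bearish FVG", style=shape.triangledown, location=location.abovebar, color=color.red)
plotshape(htfBullFVG, "HTF Bullish FVG", style=shape.circle, location=location.top, color=color.teal)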
How to Use
Add to Chart: Apply the indicator to any TradingView chart.
Configure Settings:
FVG Settings: Adjust the number of historical FVGs to display (default: 10). Enable/disable specific FVG types (bullish, bearish, mitigated).
Timeframe Selection: Choose up to three timeframes for FVG analysis (e.g., 1H, 4H, 1D) to align with your trading strategy.
Structure Settings: Toggle BOS (gray lines) and CHoCH (yellow lines) visibility. Adjust sensitivity for structure detection if needed.
MA Templates: Enable MA curves via the Template section. Select from eight pre-configured MA types and periods to suit your analysis.
Interpret Signals:
Use green/red FVGs for potential entry points targeting liquidity zones.
Monitor gray lines (BOS) for trend continuation and yellow lines (CHoCH) for reversal signals.
Align multi-timeframe FVGs with BOS/CHoCH for high-probability setups.
Optionally, use MA curves for trend confirmation or dynamic levels.
Clean Chart Usage: The indicator is designed to work standalone. Ensure no conflicting scripts are applied unless explicitly needed for your strategy.
Why This Indicator Is Unique
Unlike standalone FVG or SMC indicators, this script combines both concepts with multi-timeframe analysis, offering a comprehensive view of market structure and liquidity. The addition of customizable MA templates enhances flexibility, while the dynamic removal of mitigated FVGs keeps the chart clean. This mashup is purposeful, as it integrates complementary tools to streamline decision-making for traders using SMC strategies.
Credits
This indicator builds on foundational SMC and FVG concepts from the TradingView community. Some open-source code was reused and enhanced for performance, as you can see in the code. Inspiration was drawn from public-domain SMC methodologies. The code is partly original, with manual work on performance optimization in Pine Script.
Notes
Ensure your chart is clean (no unnecessary drawings or indicators) to maximize clarity.
The indicator is open-source, and traders are encouraged to review the code for deeper understanding.
For optimal use, test the indicator on a demo account to familiarize yourself with its signals.
High Probability Order Blocks [AlgoAlpha]
🟠 OVERVIEW
This script detects and visualizes high-probability order blocks by combining a volatility-based z-score trigger with a statistical survival model inspired by Kaplan-Meier estimation. It builds and manages bullish and bearish order blocks dynamically on the chart, displays live survival probabilities per block, and plots optional rejection signals. What makes this tool unique is its use of historical mitigation behavior to estimate and plot how likely each zone is to persist, offering traders a probabilistic perspective on order block strength—something rarely seen in retail indicators.
🟠 CONCEPTS
Order blocks are regions of strong institutional interest, often marked by large imbalances between buying and selling. This script identifies those areas using z-score thresholds on directional distance (up or down candles), detecting statistically significant moves that signal potential smart money footprints. A bullish block is drawn when a strong up-move (zUp > 4) follows a down candle, and vice versa for bearish blocks. Over time, each block is evaluated: if price “mitigates” it (i.e., closes cleanly past the opposite side and confirmed with a 1 bar delay), it’s considered resolved and logged. These resolved blocks then inform a Kaplan-Meier-like survival curve, estimating the likelihood that future blocks of a given age will remain unbroken. The indicator then draws a probability curve for each side (bull/bear), updating it in real time.
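The exact survival estimator is not published in this description, but the idea can be sketched: keep a record of the ages (in bars) at which past blocks were mitigated, and estimate survival at a given age as the share of historical blocks that outlived it. A minimal, hypothetical Pine v5 illustration (the sample ages and function name are placeholders, not the indicator's internals):

//@version=5
indicator("Order block survival sketch")
// Hypothetical store of ages at which past blocks were mitigated.
// In a real implementation this would be filled each time a block resolves,
// e.g. array.push(mitAges, blockAge).
var float[] mitAges = array.new_float()
if barstate.isfirst
    // purely illustrative sample of mitigation ages
    array.push(mitAges, 12.0)
    array.push(mitAges, 35.0)
    array.push(mitAges, 60.0)
// Empirical survival estimate: fraction of recorded blocks that lasted longer than 'age'
survivalProb(float age) =>
    int   n    = array.size(mitAges)
    float surv = 0.0
    if n > 0
        for i = 0 to n - 1
            if array.get(mitAges, i) > age
                surv += 1.0
    n > 0 ? surv / n : na
plot(survivalProb(20.0), "Survival at age 20")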
🟠 FEATURES
Live label inside each block showing survival probability or “N.E.D.” if insufficient data.
Kaplan-Meier survival curves drawn directly on the chart to show estimated strength decay.
Rejection markers (▲ ▼) if price bounces cleanly off an active order block.
Alerts for zone creation and rejection signals, supporting rule-based trading workflows.
🟠 USAGE
Read the label inside each block for Age | Survival% (or N.E.D. if there aren’t enough samples yet); higher survival % suggests blocks of that age have historically lasted longer.
Use the right-side survival curves to gauge how probability decays with age for bull vs bear blocks, and align entries with the side showing stronger survival at current age.
Treat ▲ (bullish rejection) and ▼ (bearish rejection) as optional confluence when price tests a boundary and fails to break.
Turn on alerts for “Bullish Zone Created,” “Bearish Zone Created,” and rejection signals so you don’t need to watch constantly.
If your chart gets crowded, enable Prevent Overlap ; tune Max Box Age to your timeframe; and adjust KM Training Window / Minimum Samples to trade off responsiveness vs stability.
Nonlinear Regression, Zero-lag Moving Average [Loxx]
Nonlinear Regression and Zero-lag Moving Average
Technical indicators are widely used in financial markets to analyze price data and make informed trading decisions. This indicator presents an implementation of two popular indicators: Nonlinear Regression and Zero-lag Moving Average (ZLMA). Let's explore the functioning of these indicators and discuss their significance in technical analysis.
Nonlinear Regression
The Nonlinear Regression indicator aims to fit a nonlinear curve to a given set of data points. It calculates the best-fit curve by minimizing the sum of squared errors between the actual data points and the predicted values on the curve. The curve is determined by solving a system of equations derived from the data points.
We define a function "nonLinearRegression" that takes two parameters: "src" (the input data series) and "per" (the period over which the regression is calculated). It calculates the coefficients of the nonlinear curve using the least squares method and returns the predicted value for the current period. The nonlinear regression curve provides insights into the overall trend and potential reversals in the price data.
Zero-lag Moving Average (ZLMA)
Moving averages are widely used to smoothen price data and identify trend directions. However, traditional moving averages introduce a lag due to the inclusion of past data. The Zero-lag Moving Average (ZLMA) overcomes this lag by dynamically adjusting the weights of past values, resulting in a more responsive moving average.
We create a function named "zlma" that calculates the ZLMA. It takes two parameters: "src" (the input data series) and "per" (the period over which the ZLMA is calculated). The ZLMA is computed by first calculating a weighted moving average (LWMA) using a linearly decreasing weight scheme. The LWMA is then used to calculate the ZLMA by applying the same weight scheme again. The ZLMA provides a smoother representation of the price data while reducing lag.
Combining Nonlinear Regression and ZLMA
The ZLMA is applied to the input data series using the function "zlma(src, zlmaper)". The ZLMA values are then passed as input to the "nonLinearRegression" function, along with the specified period for nonlinear regression. The output of the nonlinear regression is stored in the variable "out".
To enhance the visual representation of the indicator, colors are assigned based on the relationship between the nonlinear regression value and a signal value (sig) calculated from the previous period's nonlinear regression value. If the current "out" value is greater than the previous "sig" value, the color is set to green; otherwise, it is set to red.
The indicator also includes optional features such as coloring the bars based on the indicator's values and displaying signals for potential long and short positions. The signals are generated based on the crossover and crossunder of the "out" and "sig" values.
Wrapping Up
This indicator combines two important concepts: Nonlinear Regression and Zero-lag Moving Average indicators, which are valuable tools for technical analysis in financial markets. These indicators help traders identify trends, potential reversals, and generate trading signals. By combining the nonlinear regression curve with the zero-lag moving average, this indicator provides a comprehensive view of the price dynamics. Traders can customize the indicator's settings and use it in conjunction with other analysis techniques to make well-informed trading decisions.
Any Oscillator Underlay [TTF]
We are proud to release a new indicator that has been a while in the making - the Any Oscillator Underlay (AOU)!
Note: There is a lot to discuss regarding this indicator, including its intent and some of how it operates, so please be sure to read this entire description before using this indicator to help ensure you understand both the intent and some limitations with this tool.
Our intent for building this indicator was to accomplish the following:
Combine all of the oscillators that we like to use into a single indicator
Take up a bit less screen space for the underlay indicators for strategies that utilize multiple oscillators
Provide a tool for newer traders to be able to leverage multiple oscillators in a single indicator
Features:
Includes 8 separate, fully-functional indicators combined into one
Ability to easily enable/disable and configure each included indicator independently
Clearly named plots to support user customization of color and styling, as well as manual creation of alerts
Ability to customize sub-indicator title position and color
Ability to customize sub-indicator divider lines style and color
Indicators that are included in this initial release:
TSI
2x RSIs (dubbed the Twin RSI )
Stochastic RSI
Stochastic
Ultimate Oscillator
Awesome Oscillator
MACD
Outback RSI (Color-coding only)
Quick note on OB/OS:
Before we get into covering each included indicator, we first need to cover a core concept for how we're defining OB and OS levels. To help illustrate this, we will use the TSI as an example.
The TSI by default has a mid-point of 0 and a range of -100 to 100. As a result, a common practice is to place lines on the -30 and +30 levels to represent OS and OB zones, respectively. Most people tend to view these levels as distance from the edges/outer bounds or as absolute levels, but we feel a more intuitive way to frame the OB/OS concept is to instead define it as distance ("offset") from the mid-line. In keeping with the -30 and +30 levels in our example, the offset in this case would be "30".
Taking this a step further, let's say we decided we wanted an offset of 25. Since the mid-point is 0, we'd then calculate the OB level as 0 + 25 (+25), and the OS level as 0 - 25 (-25).
Now that we've covered the concept of how we approach defining OB and OS levels (based on offset/distance from the mid-line), and since we did apply some transformations, rescaling, and/or repositioning to all of the indicators noted above, we are going to discuss each component indicator to detail both how it was modified from the original to fit the stacked-indicator model, as well as the various major components that the indicator contains.
TSI:
This indicator contains the following major elements:
TSI and TSI Signal Line
Color-coded fill for the TSI/TSI Signal lines
Moving Average for the TSI
TSI Histogram
Mid-line and OB/OS lines
Default TSI fill color coding:
Green : TSI is above the signal line
Red : TSI is below the signal line
Note: The TSI traditionally has a range of -100 to +100 with a mid-point of 0 (range of 200). To fit into our stacking model, we first shrunk the range to 100 (-50 to +50 - cut it in half), then repositioned it to have a mid-point of 50. Since this is the "bottom" of our indicator-stack, no additional repositioning is necessary.
Twin RSI:
This indicator contains the following major elements:
Fast RSI (useful if you want to leverage 2x RSIs as it makes it easier to see the overlaps and crosses - can be disabled if desired)
Slow RSI (primary RSI)
Color-coded fill for the Fast/Slow RSI lines (if Fast RSI is enabled and configured)
Moving Average for the Slow RSI
Mid-line and OB/OS lines
Default Twin RSI fill color coding:
Dark Red : Fast RSI below Slow RSI and Slow RSI below Slow RSI MA
Light Red : Fast RSI below Slow RSI and Slow RSI above Slow RSI MA
Dark Green : Fast RSI above Slow RSI and Slow RSI below Slow RSI MA
Light Green : Fast RSI above Slow RSI and Slow RSI above Slow RSI MA
Note: The RSI naturally has a range of 0 to 100 with a mid-point of 50, so no rescaling or transformation is done on this indicator. The only manipulation done is to properly position it in the indicator-stack based on which other indicators are also enabled.
Stochastic and Stochastic RSI:
These indicators contain the following major elements:
Configurable lengths for the RSI (for the Stochastic RSI only), K, and D values
Configurable base price source
Mid-line and OB/OS lines
Note: The Stochastic and Stochastic RSI both have a normal range of 0 to 100 with a mid-point of 50, so no rescaling or transformations are done on either of these indicators. The only manipulation done is to properly position it in the indicator-stack based on which other indicators are also enabled.
Ultimate Oscillator (UO):
This indicator contains the following major elements:
Configurable lengths for the Fast, Middle, and Slow BP/TR components
Mid-line and OB/OS lines
Moving Average for the UO
Color-coded fill for the UO/UO MA lines (if UO MA is enabled and configured)
Default UO fill color coding:
Green : UO is above the moving average line
Red : UO is below the moving average line
Note: The UO naturally has a range of 0 to 100 with a mid-point of 50, so no rescaling or transformation is done on this indicator. The only manipulation done is to properly position it in the indicator-stack based on which other indicators are also enabled.
Awesome Oscillator (AO):
This indicator contains the following major elements:
Configurable lengths for the Fast and Slow moving averages used in the AO calculation
Configurable price source for the moving averages used in the AO calculation
Mid-line
Option to display the AO as a line or pseudo-histogram
Moving Average for the AO
Color-coded fill for the AO/AO MA lines (if AO MA is enabled and configured)
Default AO fill color coding (Note: Fill was disabled in the image above to improve clarity):
Green : AO is above the moving average line
Red : AO is below the moving average line
Note: The AO technically has an infinite (unbound) range - -∞ to ∞ - and the effective range is bound to the underlying security price (e.g. BTC will have a wider range than SP500, and SP500 will have a wider range than EUR/USD). We employed some special techniques to rescale this indicator into our desired range of 100 (-50 to 50), and then repositioned it to have a midpoint of 50 (range of 0 to 100) to meet the constraints of our stacking model. We then do one final repositioning to place it in the correct position in the indicator-stack based on which other indicators are also enabled. For more details on how we accomplished this, read our section "Binding Infinity" below.
MACD:
This indicator contains the following major elements:
Configurable lengths for the Fast and Slow moving averages used in the MACD calculation
Configurable price source for the moving averages used in the MACD calculation
Configurable length and calculation method for the MACD Signal Line calculation
Mid-line
Note: Like the AO, the MACD also technically has an infinite (unbound) range. We employed the same principles here as we did with the AO to rescale and reposition this indicator as well. For more details on how we accomplished this, read our section "Binding Infinity" below.
Outback RSI (ORSI):
This is a stripped-down version of the Outback RSI indicator (linked above) that only includes the color-coding background (suffice it to say that it was not technically feasible to attempt to rescale the other components in a way that could consistently be clearly seen on-chart). As this component is a bit of a niche/special-purpose sub-indicator, it is disabled by default, and we suggest it remain disabled unless you have some pre-defined strategy that leverages the color-coding element of the Outback RSI that you wish to use.
Binding Infinity - How We Incorporated the AO and MACD (Warning - Math Talk Ahead!)
Note: This applies only to the AO and MACD at time of original publication. If any other indicators are added in the future that also fall into the category of "binding an infinite-range oscillator", we will make that clear in the release notes when that new addition is published.
To help set the stage for this discussion, it's important to note that the broader challenge of "equalizing inputs" is nothing new. In fact, it's a key element in many of the most popular fields of data science, such as AI and Machine Learning. They need to take a diverse set of inputs with a wide variety of ranges and seemingly-random inputs (referred to as "features"), and build a mathematical or computational model in order to work. But, when the raw inputs can vary significantly from one another, there is an inherent need to do some pre-processing to those inputs so that one doesn't overwhelm another simply due to the difference in raw values between them. This is where feature scaling comes into play.
With this in mind, we implemented 2 of the most common methods of Feature Scaling - Min-Max Normalization (which we call "Normalization" in our settings), and Z-Score Normalization (which we call "Standardization" in our settings). Let's take a look at each of those methods as they have been implemented in this script.
Min-Max Normalization (Normalization)
This is one of the most common - and most basic - methods of feature scaling. The basic formula is: y = (x - min)/(max - min) - where x is the current data sample, min is the lowest value in the dataset, and max is the highest value in the dataset. In this transformation, the max would evaluate to 1, and the min would evaluate to 0, and any value in between the min and the max would evaluate somewhere between 0 and 1.
The key benefits of this method are:
It can be used to transform datasets of any range into a new dataset with a consistent and known range (0 to 1).
It has no dependency on the "shape" of the raw input dataset (i.e. does not assume the input dataset can be approximated to a normal distribution).
But there are a couple of "gotchas" with this technique...
First, it assumes the input dataset is complete, or an accurate representation of the population via random sampling. While in most situations this is a valid assumption, in trading indicators we don't really have that luxury as we're often limited in what sample data we can access (i.e. number of historical bars available).
Second, this method is highly sensitive to outliers. Since the crux of this transformation is based on the max-min to define the initial range, a single significant outlier can result in skewing the post-transformation dataset (i.e. major price movement as a reaction to a significant news event).
You can potentially mitigate those 2 "gotchas" by using a mechanism or technique to find and discard outliers (e.g. calculate the mean and standard deviation of the input dataset and discard any raw values more than 5 standard deviations from the mean), but if your most recent datapoint is an "outlier" as defined by that algorithm, processing it using the "scrubbed" dataset would result in that new datapoint being outside the intended range of 0 to 1 (e.g. if the new datapoint is greater than the "scrubbed" max, its post-transformation value would be greater than 1). Even though this is a bit of an edge-case scenario, it is still sure to happen in live markets processing live data, so it's not an ideal solution in our opinion (which is why we chose not to attempt to discard outliers in this manner).
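For illustration, here is a minimal Pine v5 sketch of min-max normalization over a rolling window (a rolling window is the practical stand-in for "the dataset" on a chart; the window length and the RSI example are assumptions):

//@version=5
indicator("Min-max normalization sketch")
minmaxNorm(src, len) =>
    lo = ta.lowest(src, len)
    hi = ta.highest(src, len)
    hi == lo ? 0.5 : (src - lo) / (hi - lo)   // guard against a flat window
plot(minmaxNorm(ta.rsi(close, 14), 100), "Normalized RSI")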
Z-Score Normalization (Standardization)
This method of rescaling is a bit more complex than the Min-Max Normalization method noted above, but it is also a widely used process. The basic formula is: y = (x – μ) / σ - where x is the current data sample, μ is the mean (average) of the input dataset, and σ is the standard deviation of the input dataset. While this transformation still results in a technically-infinite possible range, the output of this transformation has 2 very significant properties - the output dataset has a mean (μ) of 0 and a standard deviation (σ) of 1.
The key benefits of this method are:
As it's based on normalizing the mean and standard deviation of the input dataset instead of a linear range conversion, it is far less susceptible to outliers significantly affecting the result (and in fact has the effect of "squishing" outliers).
It can be used to accurately transform disparate sets of data into a similar range regardless of the original dataset's raw/actual range.
But there are a couple of "gotchas" with this technique as well...
First, it still technically does not do any form of range-binding, so it is still technically unbounded (range -∞ to ∞ with a mid-point of 0).
Second, it implicitly assumes that the raw input dataset to be transformed is normally distributed, which won't always be the case in financial markets.
The first "gotcha" is a bit of an annoyance, but isn't a huge issue as we can apply principles of normal distribution to conceptually limit the range by defining a fixed number of standard deviations from the mean. While this doesn't totally solve the "infinite range" problem (a strong enough sudden move can still break out of our "conceptual range" boundaries), the amount of movement needed to achieve that kind of impact will generally be pretty rare.
The bigger challenge is how to deal with the assumption of the input dataset being normally distributed. While most financial markets (and indicators) do tend towards a normal distribution, they are almost never going to match that distribution exactly. So let's dig a bit deeper into how distributions are defined and how things like trending markets can affect them.
Skew (skewness): This is a measure of asymmetry of the bell curve, or put another way, how and in what way the bell curve is disfigured when comparing the 2 halves. The easiest way to visualize this is to draw an imaginary vertical line through the apex of the bell curve, then fold the curve in half along that line. If both halves are exactly the same, the skew is 0 (no skew/perfectly symmetrical) - which is what a normal distribution has (skew = 0). Most financial markets tend to have short, medium, and long-term trends, and these trends will cause the distribution curve to skew in one direction or another. Bullish markets tend to skew to the right (positive), and bearish markets to the left (negative).
Kurtosis: This is a measure of the "tail size" of the bell curve. Another way to state this could be how "flat" or "steep" the bell-shape is. If the bell is steep with a strong drop from the apex (like a steep cliff), it has low kurtosis. If the bell has a shallow, more sweeping drop from the apex (like a tall hill), it has high kurtosis. Translating this to financial markets, kurtosis is generally a metric of volatility as the bell shape is largely defined by the strength and frequency of outliers. This is effectively a measure of volatility - volatile markets tend to have a high level of kurtosis (>3), and stable/consolidating markets tend to have a low level of kurtosis (<3). A normal distribution (our reference) has a kurtosis value of 3.
So to try and bring all that back together, here's a quick recap of the Standardization rescaling method:
The Standardization method has an assumption of a normal distribution of input data by using the mean (average) and standard deviation to handle the transformation
Most financial markets do NOT have a normal distribution (as discussed above), and will have varying degrees of skew and kurtosis
Q: Why are we still favoring the Standardization method over the Normalization method, and how are we accounting for the innate skew and/or kurtosis inherent in most financial markets?
A: Well, since we're only trying to rescale oscillators that by-definition have a midpoint of 0, kurtosis isn't a major concern beyond the effect it has on the post-transformation scaling (specifically, the number of standard deviations from the mean we need to include in our "artificially-bound" range definition).
Q: So that answers the question about kurtosis, but what about skew?
A: So - for skew, the answer is in the formula - specifically the mean (average) element. The standard mean calculation assumes a complete dataset and therefore uses a standard (i.e. simple) average, but we're limited by the data history available to us. So we adapted the transformation formula to leverage a moving average that included a weighting element to it so that it favored recent datapoints more heavily than older ones. By making the average component more adaptive, we gained the effect of reducing the skew element by having the average itself be more responsive to recent movements, which significantly reduces the effect historical outliers have on the dataset as a whole. While this is certainly not a perfect solution, we've found that it serves the purpose of rescaling the MACD and AO to a far more well-defined range while still preserving the oscillator behavior and mid-line exceptionally well.
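A minimal Pine v5 sketch of that adaptive standardization idea, with a recency-weighted mean substituted for the simple average (the weighting method, lengths, and the MACD-histogram example are illustrative assumptions; the script's exact choices are not spelled out above):

//@version=5
indicator("Adaptive z-score sketch")
zscoreAdaptive(src, len) =>
    mu    = ta.wma(src, len)      // recency-weighted mean reduces the influence of old outliers
    sigma = ta.stdev(src, len)
    sigma == 0.0 ? 0.0 : (src - mu) / sigma
// Example: rescale the MACD histogram, then clamp to a "conceptual range" of ±3 standard deviations
[macdLine, signalLine, hist] = ta.macd(close, 12, 26, 9)
z = zscoreAdaptive(hist, 100)
plot(math.max(-3.0, math.min(3.0, z)), "Standardized MACD histogram")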
The most difficult parts to compensate for are periods where markets have low volatility for an extended period of time - to the point where the oscillators are hovering around the 0/midline (in the case of the AO), or when the oscillator and signal lines converge and remain close to each other (in the case of the MACD). It's during these periods where even our best attempt at ensuring accurate mirrored-behavior when compared to the original can still occasionally lead or lag by a candle.
Note: If this is a make-or-break situation for you or your strategy, then we recommend you do not use any of the included indicators that leverage this kind of bounding technique (the AO and MACD at time of publication) and instead use the TradingView built-in versions!
We know this is a lot to read and digest, so please take your time and feel free to ask questions - we will do our best to answer! And as always, constructive feedback is always welcome!
Nadaraya-Watson: Rational Quadratic Kernel (Non-Repainting)
What is Nadaraya–Watson Regression?
Nadaraya–Watson Regression is a type of Kernel Regression, which is a non-parametric method for estimating the curve of best fit for a dataset. Unlike Linear Regression or Polynomial Regression, Kernel Regression does not assume any underlying distribution of the data. For estimation, it uses a kernel function, which is a weighting function that assigns a weight to each data point based on how close it is to the current point. The computed weights are then used to calculate the weighted average of the data points.
How is this different from using a Moving Average?
A Simple Moving Average is actually a special type of Kernel Regression that uses a Uniform (Rectangular) Kernel function. This means that all data points in the specified lookback window are weighted equally. In contrast, the Rational Quadratic Kernel function used in this indicator assigns a higher weight to data points that are closer to the current point. This means that the indicator will react more quickly to changes in the data.
Why use the Rational Quadratic Kernel over the Gaussian Kernel?
The Gaussian Kernel is one of the most commonly used Kernel functions and is used extensively in many Machine Learning algorithms due to its general applicability across a wide variety of datasets. The Rational Quadratic Kernel can be thought of as a Gaussian Kernel on steroids; it is equivalent to adding together many Gaussian Kernels of differing length scales. This allows the user even more freedom to tune the indicator to their specific needs.
The formula for the Rational Quadratic function is:
K(x, x') = (1 + ||x - x'||^2 / (2 * alpha * h^2))^(-alpha)
where x and x' are data points, alpha is a hyperparameter that controls the smoothness (i.e. overall "wiggle") of the curve, and h is the band length of the kernel.
Does this Indicator Repaint?
No, this indicator has been intentionally designed to NOT repaint. This means that once a bar has closed, the indicator will never change the values in its plot. This is useful for backtesting and for trading strategies that require a non-repainting indicator.
Settings:
Bandwidth. This is the number of bars that the indicator will use as a lookback window.
Relative Weighting Parameter. The alpha parameter for the Rational Quadratic Kernel function. This is a hyperparameter that controls the smoothness of the curve. A lower value of alpha will result in a smoother, more stretched-out curve, while a higher value will result in a more wiggly curve with a tighter fit to the data. As this parameter approaches 0, the longer time frames will exert more influence on the estimation, and as it approaches infinity, the curve will become identical to the one produced by the Gaussian Kernel.
Color Smoothing. Toggles the mechanism for coloring the estimation plot between rate of change and cross over modes.
Moving Average Delta (Deviation = Absolute/Pips Simple MA)
MAD stands for Moving Average Delta; it calculates the difference between the moving average and price. The curve shows the difference in Pips.
By calculating the delta between two points we can see more small changes in the direction of the moving average curve which are normally hard to see. You can see the MAD curve as a look through the microscope at a simple moving average curve. It may help predict a trend change before it happens; the sample shows a beginning trend change from long to short.
Interpretation:
If the MAD curve is bigger than 0, the moving average is above the price
conversely;
If the MAD curve is smaller than 0, the moving average is below the price
Before a trend change, the moving average gets flatter and the MAD curve points towards zero
We can see the maximum rise/fall of the difference and predict an upcoming trend change
Usage:
Moving Average Delta Indicator by KIVANC fr3762
Description:
MAD stands for Moving Average Delta, it calculates the difference between moving average and price. The curve shows the difference in Pips.
By calculating the delta between two points we can see more small changes in the direction of the moving average curve which are normally hard to see. You can see the MAD curve as a look through the microscope at a simple moving average curve. It may help predict a trend change before it happens; the sample shows a beginning trend change from long to short.
Interpretation:
If the MAD curve is bigger than 0, the moving average is above the price
conversely;
If the MAD curve is smaller than 0, the moving average is below the price
Before a trend change, the moving average gets flatter and the MAD curve points towards zero
We can see the maximum rise/fall of the difference and predict an upcoming trend change
Usage:
Drop a simple moving average to a chart and set the period in a way that it best fits the movements. There are no "magic" settings for the moving average period; you may double-click the MA line to set it to a different period.
Drop the MAD indicator to the chart and give it the same period as your simple moving average.
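For reference, a minimal Pine v5 sketch of the delta calculation described above (dividing by syminfo.mintick gives the difference in ticks and is only an approximation; true pip size depends on the instrument):

//@version=5
indicator("MAD sketch")
maLen = input.int(20, "MA period")
// Positive when the moving average is above price, negative when below
madTicks = (ta.sma(close, maLen) - close) / syminfo.mintick
plot(madTicks, "MA minus price (ticks)", color=color.blue)
hline(0)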
Adaptive Genesis Engine [AGE]
ADAPTIVE GENESIS ENGINE (AGE)
Pure Signal Evolution Through Genetic Algorithms
Where Darwin Meets Technical Analysis
🧬 WHAT YOU'RE GETTING - THE PURE INDICATOR
This is a technical analysis indicator - it generates signals, visualizes probability, and shows you the evolutionary process in real-time. This is NOT a strategy with automatic execution - it's a sophisticated signal generation system that you control .
What This Indicator Does:
Generates Long/Short entry signals with probability scores (35-88% range)
Evolves a population of up to 12 competing strategies using genetic algorithms
Validates strategies through walk-forward optimization (train/test cycles)
Visualizes signal quality through premium gradient clouds and confidence halos
Displays comprehensive metrics via enhanced dashboard
Provides alerts for entries and exits
Works on any timeframe, any instrument, any broker
What This Indicator Does NOT Do:
Execute trades automatically
Manage positions or calculate position sizes
Place orders on your behalf
Make trading decisions for you
This is pure signal intelligence. AGE tells you when and how confident it is. You decide whether and how much to trade.
🔬 THE SCIENCE: GENETIC ALGORITHMS MEET TECHNICAL ANALYSIS
What Makes This Different - The Evolutionary Foundation
Most indicators are static - they use the same parameters forever, regardless of market conditions. AGE is alive . It maintains a population of competing strategies that evolve, adapt, and improve through natural selection principles:
Birth: New strategies spawn through crossover breeding (combining DNA from fit parents) plus random mutation for exploration
Life: Each strategy trades virtually via shadow portfolios, accumulating wins/losses, tracking drawdown, and building performance history
Selection: Strategies are ranked by comprehensive fitness scoring (win rate, expectancy, drawdown control, signal efficiency)
Death: Weak strategies are culled periodically, with elite performers (top 2 by default) protected from removal
Evolution: The gene pool continuously improves as successful traits propagate and unsuccessful ones die out
This is not curve-fitting. Each new strategy must prove itself on out-of-sample data through walk-forward validation before being trusted for live signals.
🧪 THE DNA: WHAT EVOLVES
Every strategy carries a 10-gene chromosome controlling how it interprets market data:
Signal Sensitivity Genes
Entropy Sensitivity (0.5-2.0): Weight given to market order/disorder calculations. Low values = conservative, require strong directional clarity. High values = aggressive, act on weaker order signals.
Momentum Sensitivity (0.5-2.0): Weight given to RSI/ROC/MACD composite. Controls responsiveness to momentum shifts vs. mean-reversion setups.
Structure Sensitivity (0.5-2.0): Weight given to support/resistance positioning. Determines how much price location within swing range matters.
Probability Adjustment Genes
Probability Boost (-0.10 to +0.10): Inherent bias toward aggressive (+) or conservative (-) entries. Acts as personality trait - some strategies naturally optimistic, others pessimistic.
Trend Strength Requirement (0.3-0.8): Minimum trend conviction needed before signaling. Higher values = only trades strong trends, lower values = acts in weak/sideways markets.
Volume Filter (0.5-1.5): Strictness of volume confirmation. Higher values = requires strong volume, lower values = volume less important.
Risk Management Genes
ATR Multiplier (1.5-4.0): Base volatility scaling for all price levels. Controls whether strategy uses tight or wide stops/targets relative to ATR.
Stop Multiplier (1.0-2.5): Stop loss tightness. Lower values = aggressive profit protection, higher values = more breathing room.
Target Multiplier (1.5-4.0): Profit target ambition. Lower values = quick scalping exits, higher values = swing trading holds.
Adaptation Gene
Regime Adaptation (0.0-1.0): How much strategy adjusts behavior based on detected market regime (trending/volatile/choppy). Higher values = more reactive to regime changes.
The Magic: AGE doesn't just try random combinations. Through tournament selection and fitness-weighted crossover, successful gene combinations spread through the population while unsuccessful ones fade away. Over 50-100 bars, you'll see the population converge toward genes that work for YOUR instrument and timeframe.
📊 THE SIGNAL ENGINE: THREE-LAYER SYNTHESIS
Before any strategy generates a signal, AGE calculates probability through multi-indicator confluence:
Layer 1 - Market Entropy (Information Theory)
Measures whether price movements exhibit directional order or random walk characteristics:
The Math:
Shannon Entropy = -Σ(p × log(p))
Market Order = 1 - (Entropy / 0.693)
What It Means:
High entropy = choppy, random market → low confidence signals
Low entropy = directional market → high confidence signals
Direction determined by up-move vs down-move dominance over lookback period (default: 20 bars)
Signal Output: -1.0 to +1.0 (bearish order to bullish order)
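A minimal, hypothetical Pine v5 sketch of the order calculation from up/down move frequencies (the exact probability definition used by the engine is not published; this sketch treats each bar's direction as the event):

//@version=5
indicator("Market entropy sketch")
len = 20
float ups = 0.0
for i = 0 to len - 1
    ups += close[i] > close[i + 1] ? 1.0 : 0.0
pUp = ups / len
pDn = 1.0 - pUp
// Shannon entropy of the two-outcome distribution; 0.693 = log(2) is its maximum
entropy     = -(pUp > 0 ? pUp * math.log(pUp) : 0.0) - (pDn > 0 ? pDn * math.log(pDn) : 0.0)
marketOrder = 1.0 - entropy / 0.693
// Sign the order score by up/down dominance to get a -1..+1 signal
entropySignal = marketOrder * (pUp >= 0.5 ? 1.0 : -1.0)
plot(entropySignal, "Entropy signal")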
Layer 2 - Momentum Synthesis
Combines three momentum indicators into single composite score:
Components:
RSI (40% weight): Normalized to -1/+1 scale using (RSI-50)/50
Rate of Change (30% weight): Percentage change over lookback (default: 14 bars), clamped to ±1
MACD Histogram (30% weight): Fast(12) - Slow(26), normalized by ATR
Why This Matters: RSI catches mean-reversion opportunities, ROC catches raw momentum, MACD catches momentum divergence. Weighting favors RSI for reliability while keeping other perspectives.
Signal Output: -1.0 to +1.0 (strong bearish to strong bullish)
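A minimal sketch of the composite in Pine v5 (the normalization details are assumptions where not specified above, e.g. treating the ROC percentage as a fraction before clamping and scaling the histogram by ATR):

//@version=5
indicator("Momentum composite sketch")
rsiNorm = (ta.rsi(close, 14) - 50.0) / 50.0                        // -1..+1
rocNorm = math.max(-1.0, math.min(1.0, ta.roc(close, 14) / 100))   // percent change as a fraction, clamped
[macdLine, signalLine, hist] = ta.macd(close, 12, 26, 9)
macdNorm = math.max(-1.0, math.min(1.0, hist / ta.atr(14)))        // histogram scaled by ATR, clamped
momentumScore = 0.4 * rsiNorm + 0.3 * rocNorm + 0.3 * macdNorm
plot(momentumScore, "Momentum composite")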
Layer 3 - Structure Analysis
Evaluates price position within swing range (default: 50-bar lookback):
Position Classification:
Bottom 20% of range = Support Zone → bullish bounce potential
Top 20% of range = Resistance Zone → bearish rejection potential
Middle 60% = Neutral Zone → breakout/breakdown monitoring
Signal Logic:
At support + bullish candle = +0.7 (strong buy setup)
At resistance + bearish candle = -0.7 (strong sell setup)
Breaking above range highs = +0.5 (breakout confirmation)
Breaking below range lows = -0.5 (breakdown confirmation)
Consolidation within range = ±0.3 (weak directional bias)
Signal Output: -1.0 to +1.0 (bearish structure to bullish structure)
Confluence Voting System
Each layer casts a vote (Long/Short/Neutral). The system requires minimum 2-of-3 agreement (configurable 1-3) before generating a signal:
Examples:
Entropy: Bullish, Momentum: Bullish, Structure: Neutral → Signal generated (2 long votes)
Entropy: Bearish, Momentum: Neutral, Structure: Neutral → No signal (only 1 short vote)
All three bullish → Signal generated with +5% probability bonus
This is the key to quality. Single indicators give too many false signals. Triple confirmation dramatically improves accuracy.
📈 PROBABILITY CALCULATION: HOW CONFIDENCE IS MEASURED
Base Probability:
Raw_Prob = 50% + (Average_Signal_Strength × 25%)
Then AGE applies strategic adjustments:
Trend Alignment:
Signal with trend: +4%
Signal against strong trend: -8%
Weak/no trend: no adjustment
Regime Adaptation:
Trending market (efficiency >50%, moderate vol): +3%
Volatile market (vol ratio >1.5x): -5%
Choppy market (low efficiency): -2%
Volume Confirmation:
Volume > 70% of 20-bar SMA: no change
Volume below threshold: -3%
Volatility State (DVS Ratio):
High vol (>1.8x baseline): -4% (reduce confidence in chaos)
Low vol (<0.7x baseline): -2% (markets can whipsaw in compression)
Moderate elevated vol (1.0-1.3x): +2% (trending conditions emerging)
Confluence Bonus:
All 3 indicators agree: +5%
2 of 3 agree: +2%
Strategy Gene Adjustment:
Probability Boost gene: -10% to +10%
Regime Adaptation gene: scales regime adjustments by 0-100%
Final Probability: Clamped between 35% (minimum) and 88% (maximum)
Why These Ranges?
Below 35% = too uncertain, better not to signal
Above 88% = unrealistic, creates overconfidence
Sweet spot: 65-80% for quality entries
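A minimal sketch of how the layered adjustments stack (the adjustment values below are the ones quoted above; the signal strength and which adjustments fire are placeholder assumptions):

//@version=5
indicator("AGE probability sketch")
// Placeholder inputs standing in for the engine's internal state
avgSignalStrength = 0.5   // average of the three layer signals, -1..+1
trendAdj          = 0.04  // with-trend bonus
regimeAdj         = 0.03  // trending-regime bonus
volumeAdj         = 0.0   // volume confirmed, no penalty
volStateAdj       = 0.02  // moderately elevated volatility
confluenceAdj     = 0.05  // all three layers agree
geneBoost         = 0.0   // strategy's Probability Boost gene
baseProb  = 0.50 + avgSignalStrength * 0.25
rawProb   = baseProb + trendAdj + regimeAdj + volumeAdj + volStateAdj + confluenceAdj + geneBoost
finalProb = math.max(0.35, math.min(0.88, rawProb))
plot(finalProb, "Final probability")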
🔄 THE SHADOW PORTFOLIO SYSTEM: HOW STRATEGIES COMPETE
Each active strategy maintains a virtual trading account that executes in parallel with real-time data:
Shadow Trading Mechanics
Entry Logic:
Calculate signal direction, probability, and confluence using strategy's unique DNA
Check if signal meets quality gate:
Probability ≥ configured minimum threshold (default: 65%)
Confluence ≥ configured minimum (default: 2 of 3)
Direction is not zero (must be long or short, not neutral)
Verify signal persistence:
Base requirement: 2 bars (configurable 1-5)
Adapts based on probability: high-prob signals (75%+) enter 1 bar faster, low-prob signals need 1 bar more
Adjusts for regime: trending markets reduce persistence by 1, volatile markets add 1
Apply additional filters:
Trend strength must exceed strategy's requirement gene
Regime filter: if volatile market detected, probability must be 72%+ to override
Volume confirmation required (volume > 70% of average)
If all conditions met for required persistence bars, enter shadow position at current close price
Position Management:
Entry Price: Recorded at close of entry bar
Stop Loss: ATR-based distance = ATR × ATR_Mult (gene) × Stop_Mult (gene) × DVS_Ratio
Take Profit: ATR-based distance = ATR × ATR_Mult (gene) × Target_Mult (gene) × DVS_Ratio
Position: +1 (long) or -1 (short), only one at a time per strategy
Exit Logic:
Check if price hit stop (on low) or target (on high) on current bar
Record trade outcome in R-multiples (profit/loss normalized by ATR)
Update performance metrics:
Total trades counter incremented
Wins counter (if profit > 0)
Cumulative P&L updated
Peak equity tracked (for drawdown calculation)
Maximum drawdown from peak recorded
Enter cooldown period (default: 8 bars, configurable 3-20) before next entry allowed
Reset signal age counter to zero
Walk-Forward Tracking:
During position lifecycle, trades are categorized:
Training Phase (first 250 bars): Trade counted toward training metrics
Testing Phase (next 75 bars): Trade counted toward testing metrics (out-of-sample)
Live Phase (after WFO period): Trade counted toward overall metrics
Why Shadow Portfolios?
No lookahead bias (uses only data available at the bar)
Realistic execution simulation (entry on close, stop/target checks on high/low)
Independent performance tracking for true fitness comparison
Allows safe experimentation without risking capital
Each strategy learns from its own experience
🏆 FITNESS SCORING: HOW STRATEGIES ARE RANKED
Fitness is not just win rate. AGE uses a comprehensive multi-factor scoring system:
Core Metrics (Minimum 3 trades required)
Win Rate (30% of fitness):
WinRate = Wins / TotalTrades
Normalized directly (0.0-1.0 scale)
Total P&L (30% of fitness):
Normalized_PnL = (PnL + 300) / 600
Clamped 0.0-1.0. Assumes P&L range of -300R to +300R for normalization scale.
Expectancy (25% of fitness):
Expectancy = Total_PnL / Total_Trades
Normalized_Expectancy = (Expectancy + 30) / 60
Clamped 0.0-1.0. Rewards consistency of profit per trade.
Drawdown Control (15% of fitness):
Normalized_DD = 1 - (Max_Drawdown / 15)
Clamped 0.0-1.0. Penalizes strategies that suffer large equity retracements from peak.
Sample Size Adjustment
Quality Factor:
<50 trades: 1.0 (full weight, small sample)
50-100 trades: 0.95 (slight penalty for medium sample)
>100 trades: 0.85 (larger penalty for large sample)
Why penalize more trades? Prevents strategies from gaming the system by taking hundreds of tiny trades to inflate statistics. Favors quality over quantity.
Bonus Adjustments
Walk-Forward Validation Bonus:
if (WFO_Validated):
Fitness += (WFO_Efficiency - 0.5) × 0.1
Strategies proven on out-of-sample data receive up to +10% fitness boost based on test/train efficiency ratio.
Signal Efficiency Bonus (if diagnostics enabled):
if (Signals_Evaluated > 10):
Pass_Rate = Signals_Passed / Signals_Evaluated
Fitness += (Pass_Rate - 0.1) × 0.05
Rewards strategies that generate high-quality signals passing the quality gate, not just profitable trades.
Final Fitness: Clamped at 0.0 minimum (prevents negative fitness values)
Result: Elite strategies typically achieve 0.50-0.75 fitness. Anything above 0.60 is excellent. Below 0.30 is prime candidate for culling.
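A minimal sketch of the core fitness blend using the weights and normalization constants quoted above (the per-strategy statistics are placeholder assumptions):

//@version=5
indicator("AGE fitness sketch")
// Placeholder shadow-portfolio statistics for one strategy
float trades = 40.0
float wins   = 22.0
float pnlR   = 55.0   // cumulative P&L in R-multiples
float maxDD  = 6.0    // maximum drawdown in R
winRate   = wins / trades
normPnl   = math.max(0.0, math.min(1.0, (pnlR + 300.0) / 600.0))
expectR   = pnlR / trades
normExp   = math.max(0.0, math.min(1.0, (expectR + 30.0) / 60.0))
normDD    = math.max(0.0, math.min(1.0, 1.0 - maxDD / 15.0))
coreFitness = 0.30 * winRate + 0.30 * normPnl + 0.25 * normExp + 0.15 * normDD
plot(coreFitness, "Core fitness")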
🔬 WALK-FORWARD OPTIMIZATION: ANTI-OVERFITTING PROTECTION
This is what separates AGE from curve-fitted garbage indicators.
The Three-Phase Process
Every new strategy undergoes a rigorous validation lifecycle:
Phase 1 - Training Window (First 250 bars, configurable 100-500):
Strategy trades normally via shadow portfolio
All trades count toward training performance metrics
System learns which gene combinations produce profitable patterns
Tracks independently: Training_Trades, Training_Wins, Training_PnL
Phase 2 - Testing Window (Next 75 bars, configurable 30-200):
Strategy continues trading without any parameter changes
Trades now count toward testing performance metrics (separate tracking)
This is out-of-sample data - strategy has never seen these bars during "optimization"
Tracks independently: Testing_Trades, Testing_Wins, Testing_PnL
Phase 3 - Validation Check:
Minimum_Trades = 5 (configurable 3-15)
IF (Train_Trades >= Minimum AND Test_Trades >= Minimum):
WR_Efficiency = Test_WinRate / Train_WinRate
Expectancy_Efficiency = Test_Expectancy / Train_Expectancy
WFO_Efficiency = (WR_Efficiency + Expectancy_Efficiency) / 2
IF (WFO_Efficiency >= 0.55): // configurable 0.3-0.9
Strategy.Validated = TRUE
Strategy receives fitness bonus
ELSE:
Strategy receives 30% fitness penalty
ELSE:
Validation deferred (insufficient trades in one or both periods)
What Validation Means
Validated Strategy (Green "✓ VAL" in dashboard):
Performed at least 55% as well on unseen data compared to training data
Gets fitness bonus: +(efficiency - 0.5) × 0.1
Receives priority during tournament selection for breeding
More likely to be chosen as active trading strategy
Unvalidated Strategy (Orange "○ TRAIN" in dashboard):
Failed to maintain performance on test data (likely curve-fitted to training period)
Receives 30% fitness penalty (0.7x multiplier)
Makes strategy prime candidate for culling
Can still trade but with lower selection probability
Insufficient Data (continues collecting):
Hasn't completed both training and testing periods yet
OR hasn't achieved minimum trade count in both periods
Validation check deferred until requirements met
Why 55% Efficiency Threshold?
If a strategy earned 10R during training but only 5.5R during testing, it still proved an edge exists beyond random luck. Requiring 100% efficiency would be unrealistic - market conditions change between periods. But requiring >50% ensures the strategy didn't completely degrade on fresh data.
The Protection: Strategies that work great on historical data but fail on new data are automatically identified and penalized. This prevents the population from being polluted by overfitted strategies that would fail in live trading.
🌊 DYNAMIC VOLATILITY SCALING (DVS): ADAPTIVE STOP/TARGET PLACEMENT
AGE doesn't use fixed stop distances. It adapts to current volatility conditions in real-time.
Four Volatility Measurement Methods
1. ATR Ratio (Simple Method):
Current_Vol = ATR(14) / Close
Baseline_Vol = SMA(Current_Vol, 100)
Ratio = Current_Vol / Baseline_Vol
Basic comparison of current ATR to 100-bar moving average baseline.
2. Parkinson (High-Low Range Based):
For each bar: HL = log(High / Low)
Parkinson_Vol = sqrt(Σ(HL²) / (4 × Period × log(2)))
More stable than close-to-close volatility. Captures intraday range expansion without overnight gap noise.
3. Garman-Klass (OHLC Based):
HL_Term = 0.5 × [log(High / Low)]²
CO_Term = (2×log(2) - 1) × [log(Close / Open)]²
GK_Vol = sqrt(Σ(HL_Term - CO_Term) / Period)
Most sophisticated estimator. Incorporates all four price points (open, high, low, close) plus gap information.
4. Ensemble Method (Default - Median of All Three):
Ratio_1 = ATR_Current / ATR_Baseline
Ratio_2 = Parkinson_Current / Parkinson_Baseline
Ratio_3 = GK_Current / GK_Baseline
DVS_Ratio = Median(Ratio_1, Ratio_2, Ratio_3)
Why Ensemble?
Takes median to avoid outliers and false spikes
If ATR jumps but range-based methods stay calm, median prevents overreaction
If one method fails, other two compensate
Most robust approach across different market conditions
Sensitivity Scaling
Scaled_Ratio = (Raw_Ratio) ^ Sensitivity
Sensitivity 0.3: Cube root - heavily dampens volatility impact
Sensitivity 0.5: Square root - moderate dampening
Sensitivity 0.7 (Default): Balanced response to volatility changes
Sensitivity 1.0: Linear - full 1:1 volatility impact
Sensitivity 1.5: Exponential - amplified response to volatility spikes
Safety Clamps: Final DVS Ratio always clamped between 0.5x and 2.5x baseline to prevent extreme position sizing or stop placement errors.
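A minimal Pine v5 sketch of the ensemble measurement, sensitivity scaling, and clamps (the lengths and the moving-average baseline construction are assumptions, not the script's exact internals):

//@version=5
indicator("DVS ensemble sketch")
len     = 14
baseLen = 100
sens    = 0.7
parkinson(int l) =>
    float s = 0.0
    for i = 0 to l - 1
        hl = math.log(high[i] / low[i])
        s += hl * hl
    math.sqrt(s / (4.0 * l * math.log(2)))
garmanKlass(int l) =>
    float s = 0.0
    for i = 0 to l - 1
        hl = math.log(high[i] / low[i])
        co = math.log(close[i] / open[i])
        s += 0.5 * hl * hl - (2.0 * math.log(2) - 1.0) * co * co
    math.sqrt(s / l)
atrVal  = ta.atr(len)
parkVal = parkinson(len)
gkVal   = garmanKlass(len)
atrRatio  = atrVal  / ta.sma(atrVal,  baseLen)
parkRatio = parkVal / ta.sma(parkVal, baseLen)
gkRatio   = gkVal   / ta.sma(gkVal,   baseLen)
// Median of the three ratios, sensitivity exponent, then safety clamps
dvsRaw   = array.median(array.from(atrRatio, parkRatio, gkRatio))
dvsRatio = math.max(0.5, math.min(2.5, math.pow(dvsRaw, sens)))
plot(dvsRatio, "DVS ratio")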
How DVS Affects Shadow Trading
Every strategy's stop and target distances are multiplied by the current DVS ratio:
Stop Loss Distance:
Stop_Distance = ATR × ATR_Mult (gene) × Stop_Mult (gene) × DVS_Ratio
Take Profit Distance:
Target_Distance = ATR × ATR_Mult (gene) × Target_Mult (gene) × DVS_Ratio
Example Scenario:
ATR = 10 points
Strategy's ATR_Mult gene = 2.5
Strategy's Stop_Mult gene = 1.5
Strategy's Target_Mult gene = 2.5
DVS_Ratio = 1.4 (40% above baseline volatility - market heating up)
Stop = 10 × 2.5 × 1.5 × 1.4 = 52.5 points (vs. 37.5 in normal vol)
Target = 10 × 2.5 × 2.5 × 1.4 = 87.5 points (vs. 62.5 in normal vol)
Result:
During volatility spikes: Stops automatically widen to avoid noise-based exits, targets extend for bigger moves
During calm periods: Stops tighten for better risk/reward, targets compress for realistic profit-taking
Strategies adapt risk management to match current market behavior
🧬 THE EVOLUTIONARY CYCLE: SPAWN, COMPETE, CULL
Initialization (Bar 1)
AGE begins with 4 seed strategies (if evolution enabled):
Seed Strategy #0 (Balanced):
All sensitivities at 1.0 (neutral)
Zero probability boost
Moderate trend requirement (0.4)
Standard ATR/stop/target multiples (2.5/1.5/2.5)
Mid-level regime adaptation (0.5)
Seed Strategy #1 (Momentum-Focused):
Lower entropy sensitivity (0.7), higher momentum (1.5)
Slight probability boost (+0.03)
Higher trend requirement (0.5)
Tighter stops (1.3), wider targets (3.0)
Seed Strategy #2 (Entropy-Driven):
Higher entropy sensitivity (1.5), lower momentum (0.8)
Slight probability penalty (-0.02)
More trend tolerant (0.6)
Wider stops (1.8), standard targets (2.5)
Seed Strategy #3 (Structure-Based):
Balanced entropy/momentum (0.8/0.9), high structure (1.4)
Slight probability boost (+0.02)
Lower trend requirement (0.35)
Moderate risk parameters (1.6/2.8)
All seeds start with WFO validation bypassed if WFO is disabled, or must validate if enabled.
Spawning New Strategies
Timing (Adaptive):
Historical phase: Every 30 bars (configurable 10-100)
Live phase: Every 200 bars (configurable 100-500)
Automatically switches to live timing when barstate.isrealtime triggers
Conditions:
Current population < max population limit (default: 8, configurable 4-12)
At least 2 active strategies exist (need parents)
Available slot in population array
Selection Process:
Run tournament selection 3 times with different seeds
Each tournament: randomly sample active strategies, pick highest fitness
Best from 3 tournaments becomes Parent 1
Repeat independently for Parent 2
Ensures fit parents but maintains diversity
Crossover Breeding:
For each of 10 genes:
Parent1_Fitness = fitness
Parent2_Fitness = fitness
Weight1 = Parent1_Fitness / (Parent1_Fitness + Parent2_Fitness)
Gene1 = parent1's value
Gene2 = parent2's value
Child_Gene = Weight1 × Gene1 + (1 - Weight1) × Gene2
Fitness-weighted crossover ensures fitter parent contributes more genetic material.
Mutation:
For each gene in child:
IF (random < mutation_rate):
Gene_Range = GENE_MAX - GENE_MIN
Noise = (random - 0.5) × 2 × mutation_strength × Gene_Range
Mutated_Gene = Clamp(Child_Gene + Noise, GENE_MIN, GENE_MAX)
Historical mutation rate: 20% (aggressive exploration)
Live mutation rate: 8% (conservative stability)
Mutation strength: 12% of gene range (configurable 5-25%)
Initialization of New Strategy:
Unique ID assigned (total_spawned counter)
Parent ID recorded
Generation = max(parent generations) + 1
Birth bar recorded (for age tracking)
All performance metrics zeroed
Shadow portfolio reset
WFO validation flag set to false (must prove itself)
Result: New strategy with hybrid DNA enters population, begins trading in next bar.
Competition (Every Bar)
All active strategies:
Calculate their signal based on unique DNA
Check quality gate with their thresholds
Manage shadow positions (entries/exits)
Update performance metrics
Recalculate fitness score
Track WFO validation progress
Strategies compete indirectly through fitness ranking - no direct interaction.
Culling Weak Strategies
Timing (Adaptive):
Historical phase: Every 60 bars (configurable 20-200, should be 2x spawn interval)
Live phase: Every 400 bars (configurable 200-1000, should be 2x spawn interval)
Minimum Adaptation Score (MAS):
Initial MAS = 0.10
MAS decays: MAS × 0.995 every cull cycle
Minimum MAS = 0.03 (floor)
MAS represents the "survival threshold" - strategies below this fitness level are vulnerable.
Culling Conditions (ALL must be true):
Population > minimum population (default: 3, configurable 2-4)
At least one strategy has fitness < MAS
Strategy's age > culling interval (prevents premature culling of new strategies)
Strategy is not in top N elite (default: 2, configurable 1-3)
Culling Process:
Find worst strategy:
For each active strategy:
IF (age > cull_interval):
Fitness = base_fitness
IF (not WFO_validated AND WFO_enabled):
Fitness = Fitness × 0.7 // 30% penalty for unvalidated
IF (Fitness < MAS AND Fitness < worst_fitness_found):
worst_strategy = this_strategy
worst_fitness = Fitness
IF (worst_strategy found):
Count elite strategies with fitness > worst_fitness
IF (elite_count >= elite_preservation_count):
Deactivate worst_strategy (set active flag = false)
Increment total_culled counter
Elite Protection:
Even if a strategy's fitness falls below MAS, it survives if fewer than N strategies are better. This prevents culling when population is generally weak.
Result: Weak strategies removed from population, freeing slots for new spawns. Gene pool improves over time.
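A compressed sketch of the culling scan, assuming the population lives in parallel arrays (the real script uses its own data layout). The 0.7 penalty for unvalidated strategies and the MAS decay match the figures above; the demo fitness values and the omitted elite/minimum-population checks are illustrative only.

//@version=5
indicator("Culling sketch (illustrative)", overlay=false)

var float mas = 0.10                                   // Minimum Adaptation Score
var fitness   = array.from(0.55, 0.08, 0.31, 0.12)     // demo fitness values
var validated = array.from(true, false, true, false)   // demo WFO flags
var active    = array.from(true, true, true, true)

if barstate.islast
    mas := math.max(0.03, mas * 0.995)                 // decaying survival threshold
    worstIdx = -1
    worstFit = mas
    for i = 0 to array.size(fitness) - 1
        if array.get(active, i)
            f = array.get(fitness, i)
            f := array.get(validated, i) ? f : f * 0.7 // 30% penalty if unvalidated
            if f < worstFit
                worstFit := f
                worstIdx := i
    if worstIdx >= 0                                   // elite and min-population checks omitted
        array.set(active, worstIdx, false)

plot(mas, title="MAS (demo)")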
Selection for Display (Every Bar)
AGE chooses one strategy to display signals:
best_fitness = -1
selected_strategy = none
For each active strategy:
Fitness = base_fitness
IF (WFO_validated):
Fitness = Fitness × 1.3 // 30% bonus for validated strategies
IF (Fitness > best_fitness):
best_fitness = Fitness
selected_strategy = this_strategy
Display selected strategy's signals on chart
Result: Only the highest-fitness (optionally validated-boosted) strategy's signals appear as chart markers. Other strategies trade invisibly in shadow portfolios.
🎨 PREMIUM VISUALIZATION SYSTEM
AGE includes sophisticated visual feedback that standard indicators lack:
1. Gradient Probability Cloud (Optional, Default: ON)
Multi-layer gradient showing signal buildup 2-3 bars before entry:
Activation Conditions:
Signal persistence > 0 (same directional signal held for multiple bars)
Signal probability ≥ minimum threshold (65% by default)
Signal hasn't yet executed (still in "forming" state)
Visual Construction:
7 gradient layers by default (configurable 3-15)
Each layer is a line-fill pair (top line, bottom line, filled between)
Layer spacing: 0.3 to 1.0 × ATR above/below price
Outer layers = faint, inner layers = bright
Color transitions from base to intense based on layer position
Transparency scales with probability (high prob = more opaque)
Color Selection:
Long signals: Gradient from theme.gradient_bull_mid to theme.gradient_bull_strong
Short signals: Gradient from theme.gradient_bear_mid to theme.gradient_bear_strong
Base transparency: 92%, reduces by up to 8% for high-probability setups
Dynamic Behavior:
Cloud grows/shrinks as signal persistence increases/decreases
Redraws every bar while signal is forming
Disappears when signal executes or invalidates
Performance Note: Computationally expensive due to linefill objects. Disable or reduce layers if chart performance degrades.
2. Population Fitness Ribbon (Optional, Default: ON)
Histogram showing fitness distribution across active strategies:
Activation: Only draws on last bar (barstate.islast) to avoid historical clutter
Visual Construction:
10 histogram layers by default (configurable 5-20)
Plots 50 bars back from current bar
Positioned below price at: lowest_low(100) - 1.5×ATR (doesn't interfere with price action)
Each layer represents a fitness threshold (evenly spaced min to max fitness)
Layer Logic:
For layer_num from 0 to ribbon_layers:
Fitness_threshold = min_fitness + (max_fitness - min_fitness) × (layer_num / ribbon_layers)
Count strategies with fitness ≥ threshold
Height = ATR × 0.15 × (count / total_active)
Y_position = base_level + ATR × 0.2 × layer
Color = Gradient from weak to strong based on layer position
Line_width = Scaled by height (taller = thicker)
Visual Feedback:
Tall, bright ribbon = healthy population, many fit strategies at high fitness levels
Short, dim ribbon = weak population, few strategies achieving good fitness
Ribbon compression (layers close together) = population converging to similar fitness
Ribbon spread = diverse fitness range, active selection pressure
Use Case: Quick visual health check without opening dashboard. Ribbon growing upward over time = population improving.
3. Confidence Halo (Optional, Default: ON)
Circular polyline around entry signals showing probability strength:
Activation: Draws when new position opens (shadow_position changes from 0 to ±1)
Visual Construction:
20-segment polyline forming approximate circle
Center: Low - 0.5×ATR (long) or High + 0.5×ATR (short)
Radius: 0.3×ATR (low confidence) to 1.0×ATR (elite confidence)
Scales with: (probability - min_probability) / (1.0 - min_probability)
Color Coding:
Elite (85%+): Cyan (theme.conf_elite), large radius, minimal transparency (40%)
Strong (75-85%): Strong green (theme.conf_strong), medium radius, moderate transparency (50%)
Good (65-75%): Good green (theme.conf_good), smaller radius, more transparent (60%)
Moderate (<65%): Moderate green (theme.conf_moderate), tiny radius, very transparent (70%)
Technical Detail:
Uses chart.point array with index-based positioning
5-bar horizontal spread for circular appearance (±5 bars from entry)
Curved=false (Pine Script polyline limitation)
Fill color matches line color but more transparent (88% vs line's transparency)
Purpose: Instant visual probability assessment. No need to check dashboard - halo size/brightness tells the story.
4. Evolution Event Markers (Optional, Default: ON)
Visual indicators of genetic algorithm activity:
Spawn Markers (Diamond, Cyan):
Plots when total_spawned increases on current bar
Location: bottom of chart (location.bottom)
Color: theme.spawn_marker (cyan/bright blue)
Size: tiny
Indicates new strategy just entered population
Cull Markers (X-Cross, Red):
Plots when total_culled increases on current bar
Location: bottom of chart (location.bottom)
Color: theme.cull_marker (red/pink)
Size: tiny
Indicates weak strategy just removed from population
What It Tells You:
Frequent spawning early = population building, active exploration
Frequent culling early = high selection pressure, weak strategies dying fast
Balanced spawn/cull = healthy evolutionary churn
No markers for long periods = stable population (evolution plateaued or optimal genes found)
5. Entry/Exit Markers
Clear visual signals for selected strategy's trades:
Long Entry (Triangle Up, Green):
Plots when selected strategy opens long position (position changes 0 → +1)
Location: below bar (location.belowbar)
Color: theme.long_primary (green/cyan depending on theme)
Transparency: Scales with probability:
Elite (85%+): 0% (fully opaque)
Strong (75-85%): 10%
Good (65-75%): 20%
Acceptable (55-65%): 35%
Size: small
Short Entry (Triangle Down, Red):
Plots when selected strategy opens short position (position changes 0 → -1)
Location: above bar (location.abovebar)
Color: theme.short_primary (red/pink depending on theme)
Transparency: Same scaling as long entries
Size: small
Exit (X-Cross, Orange):
Plots when selected strategy closes position (position changes ±1 → 0)
Location: absolute (at actual exit price if stop/target lines enabled)
Color: theme.exit_color (orange/yellow depending on theme)
Transparency: 0% (fully opaque)
Size: tiny
Result: Clean, probability-scaled markers that don't clutter chart but convey essential information.
6. Stop Loss & Take Profit Lines (Optional, Default: ON)
Visual representation of shadow portfolio risk levels:
Stop Loss Line:
Plots when selected strategy has active position
Level: shadow_stop value from selected strategy
Color: theme.short_primary with 60% transparency (red/pink, subtle)
Width: 2
Style: plot.style_linebr (breaks when no position)
Take Profit Line:
Plots when selected strategy has active position
Level: shadow_target value from selected strategy
Color: theme.long_primary with 60% transparency (green, subtle)
Width: 2
Style: plot.style_linebr (breaks when no position)
Purpose:
Shows where shadow portfolio would exit for stop/target
Helps visualize strategy's risk/reward ratio
Useful for manual traders to set similar levels
Disable for cleaner chart (recommended for presentations)
7. Dynamic Trend EMA
Gradient-colored trend line that visualizes trend strength:
Calculation:
EMA(close, trend_length) - default 50 period (configurable 20-100)
Slope calculated over 10 bars: (current_ema - ema[10]) / ema[10] × 100
Color Logic:
Trend_direction:
Slope > 0.1% = Bullish (1)
Slope < -0.1% = Bearish (-1)
Otherwise = Neutral (0)
Trend_strength = abs(slope)
Color = Gradient between:
- Neutral color (gray/purple)
- Strong bullish (bright green) if direction = 1
- Strong bearish (bright red) if direction = -1
Gradient factor = trend_strength (0 to 1+ scale)
Visual Behavior:
Faint gray/purple = weak/no trend (choppy conditions)
Light green/red = emerging trend (low strength)
Bright green/red = strong trend (high conviction)
Color intensity = trend strength magnitude
Transparency: 50% (subtle, doesn't overpower price action)
Purpose: Subconscious awareness of trend state without checking dashboard or indicators.
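For reference, the gradient-colored trend EMA can be approximated with a few lines of Pine v5. The ±0.1% slope thresholds follow the description above; the specific colors are placeholders for the theme palette.

//@version=5
indicator("Trend EMA gradient sketch (illustrative)", overlay=true)

trendLen = input.int(50, "Trend EMA Length", minval=20, maxval=100)
trendEma = ta.ema(close, trendLen)

slopePct = (trendEma - trendEma[10]) / trendEma[10] * 100   // slope over 10 bars, in percent
strength = math.abs(slopePct)

neutralCol = color.gray
bullCol    = color.from_gradient(strength, 0, 1, neutralCol, color.lime)
bearCol    = color.from_gradient(strength, 0, 1, neutralCol, color.red)
emaColor   = slopePct > 0.1 ? bullCol : slopePct < -0.1 ? bearCol : neutralCol

plot(trendEma, "Trend EMA", color=color.new(emaColor, 50), linewidth=2)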
8. Regime Background Tinting (Subtle)
Ultra-low opacity background color indicating detected market regime:
Regime Detection:
Efficiency = directional_movement / total_range (over trend_length bars)
Vol_ratio = current_volatility / average_volatility
IF (efficiency > 0.5 AND vol_ratio < 1.3):
Regime = Trending (1)
ELSE IF (vol_ratio > 1.5):
Regime = Volatile (2)
ELSE:
Regime = Choppy (0)
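One plausible Pine v5 reading of this regime logic uses a Kaufman-style efficiency ratio and an ATR ratio as the volatility proxy. AGE's internal measures are not published in this description, so treat the efficiency and vol_ratio formulas below as assumptions.

//@version=5
indicator("Regime detection sketch (illustrative)", overlay=true)

trendLen = input.int(50, "Trend Length")

// Efficiency: net directional move divided by the summed bar-to-bar movement (assumption)
netMove    = math.abs(ta.change(close, trendLen))
pathMoved  = math.sum(math.abs(ta.change(close)), trendLen)
efficiency = pathMoved > 0 ? netMove / pathMoved : 0.0

// Volatility ratio: current ATR versus its longer-run average (assumption)
volRatio = ta.atr(14) / ta.sma(ta.atr(14), 100)

// 1 = trending, 2 = volatile, 0 = choppy
regime = efficiency > 0.5 and volRatio < 1.3 ? 1 : volRatio > 1.5 ? 2 : 0

bgcolor(regime == 1 ? color.new(color.green, 92) : regime == 2 ? color.new(color.red, 93) : na)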
Background Colors:
Trending: theme.regime_trending (dark green, 92-93% transparency)
Volatile: theme.regime_volatile (dark red, 93% transparency)
Choppy: No tint (normal background)
Purpose:
Subliminal regime awareness
Helps explain why signals are/aren't generating
Trending = ideal conditions for AGE
Volatile = fewer signals, higher thresholds applied
Choppy = mixed signals, lower confidence
Important: Extremely subtle by design. Not meant to be obvious, just subconscious context.
📊 ENHANCED DASHBOARD
Comprehensive real-time metrics in single organized panel (top-right position):
Dashboard Structure (5 columns × 14 rows)
Header Row:
Column 0: "🧬 AGE PRO" + phase indicator (🔴 LIVE or ⏪ HIST)
Column 1: "POPULATION"
Column 2: "PERFORMANCE"
Column 3: "CURRENT SIGNAL"
Column 4: "ACTIVE STRATEGY"
Column 0: Market State
Regime (📈 TREND / 🌊 CHAOS / ➖ CHOP)
DVS Ratio (current volatility scaling factor, format: #.##)
Trend Direction (▲ BULL / ▼ BEAR / ➖ FLAT with color coding)
Trend Strength (0-100 scale, format: #.##)
Column 1: Population Metrics
Active strategies (count / max_population)
Validated strategies (WFO passed / active total)
Current generation number
Total spawned (all-time strategy births)
Total culled (all-time strategy deaths)
Column 2: Aggregate Performance
Total trades across all active strategies
Aggregate win rate (%) - color-coded:
Green (>55%)
Orange (45-55%)
Red (<45%)
Total P&L in R-multiples - color-coded by positive/negative
Best fitness score in population (format: #.###)
MAS - Minimum Adaptation Score (cull threshold, format: #.###)
Column 3: Current Signal Status
Status indicator:
"▲ LONG" (green) if selected strategy in long position
"▼ SHORT" (red) if selected strategy in short position
"⏳ FORMING" (orange) if signal persisting but not yet executed
"○ WAITING" (gray) if no active signal
Confidence percentage (0-100%, format: #.#%)
Quality assessment:
"🔥 ELITE" (cyan) for 85%+ probability
"✓ STRONG" (bright green) for 75-85%
"○ GOOD" (green) for 65-75%
"- LOW" (dim) for <65%
Confluence score (X/3 format)
Signal age:
"X bars" if signal forming
"IN TRADE" if position active
"---" if no signal
Column 4: Selected Strategy Details
Strategy ID number (#X format)
Validation status:
"✓ VAL" (green) if WFO validated
"○ TRAIN" (orange) if still in training/testing phase
Generation number (GX format)
Personal fitness score (format: #.### with color coding)
Trade count
P&L and win rate (format: #.#R (##%) with color coding)
Color Scheme:
Panel background: theme.panel_bg (dark, low opacity)
Panel headers: theme.panel_header (slightly lighter)
Primary text: theme.text_primary (bright, high contrast)
Secondary text: theme.text_secondary (dim, lower contrast)
Positive metrics: theme.metric_positive (green)
Warning metrics: theme.metric_warning (orange)
Negative metrics: theme.metric_negative (red)
Special markers: theme.validated_marker, theme.spawn_marker
Update Frequency: Only on barstate.islast (current bar) to minimize CPU usage
Purpose:
Quick overview of entire system state
No need to check multiple indicators
Trading decisions informed by population health, regime state, and signal quality
Transparency into what AGE is thinking
🔍 DIAGNOSTICS PANEL (Optional, Default: OFF)
Detailed signal quality tracking for optimization and debugging:
Panel Structure (3 columns × 8 rows)
Position: Bottom-right corner (doesn't interfere with main dashboard)
Header Row:
Column 0: "🔍 DIAGNOSTICS"
Column 1: "COUNT"
Column 2: "%"
Metrics Tracked (for selected strategy only):
Total Evaluated:
Every signal that passed initial calculation (direction ≠ 0)
Represents total opportunities considered
✓ Passed:
Signals that passed quality gate and executed
Green color coding
Percentage of evaluated signals
Rejection Breakdown:
⨯ Probability:
Rejected because probability < minimum threshold
Most common rejection reason typically
⨯ Confluence:
Rejected because confluence < minimum required (e.g., only 1 of 3 indicators agreed)
⨯ Trend:
Rejected because signal opposed strong trend
Indicates counter-trend protection working
⨯ Regime:
Rejected because volatile regime detected and probability wasn't high enough to override
Shows regime filter in action
⨯ Volume:
Rejected because volume < 70% of 20-bar average
Indicates volume confirmation requirement
Color Coding:
Passed count: Green (success metric)
Rejection counts: Red (failure metrics)
Percentages: Gray (neutral, informational)
Performance Cost: Slight CPU overhead for tracking counters. Disable when not actively optimizing settings.
How to Use Diagnostics
Scenario 1: Too Few Signals
Evaluated: 200
Passed: 10 (5%)
⨯ Probability: 120 (60%)
⨯ Confluence: 40 (20%)
⨯ Others: 30 (15%)
Diagnosis: Probability threshold too high for this strategy's DNA.
Solution: Lower min probability from 65% to 60%, or allow strategy more time to evolve better DNA.
Scenario 2: Too Many False Signals
Evaluated: 200
Passed: 80 (40%)
Strategy win rate: 45%
Diagnosis: Quality gate too loose, letting low-quality signals through.
Solution: Raise min probability to 70%, or increase min confluence to 3 (all indicators must agree).
Scenario 3: Regime-Specific Issues
⨯ Regime: 90 (45% of rejections)
Diagnosis: Frequent volatile regime detection blocking otherwise good signals.
Solution: Either accept fewer trades during chaos (recommended), or disable regime filter if you want signals regardless of market state.
Optimization Workflow:
Enable diagnostics
Run 200+ bars
Analyze rejection patterns
Adjust settings based on data
Re-run and compare pass rate
Disable diagnostics when satisfied
⚙️ CONFIGURATION GUIDE
🧬 Evolution Engine Settings
Enable AGE Evolution (Default: ON):
ON: Full genetic algorithm (recommended for best results)
OFF: Uses only 4 seed strategies, no spawning/culling (static population for comparison testing)
Max Population (4-12, Default: 8):
Higher = more diversity, more exploration, slower performance
Lower = faster computation, less exploration, risk of premature convergence
Sweet spot: 6-8 for most use cases
4 = minimum for meaningful evolution
12 = maximum before diminishing returns
Min Population (2-4, Default: 3):
Safety floor - system never culls below this count
Prevents population extinction during harsh selection
Typically around one-third to one-half of max population (the defaults are 3 of 8)
Elite Preservation (1-3, Default: 2):
Top N performers completely immune to culling
Ensures best genes always survive
1 = minimal protection, aggressive selection
2 = balanced (recommended)
3 = conservative, slower gene pool turnover
Historical: Spawn Interval (10-100, Default: 30):
Bars between spawning new strategies during historical data
Lower = faster evolution, more exploration
Higher = slower evolution, more evaluation time per strategy
30 bars ≈ 7.5 hours of chart time on a 15min chart
Historical: Cull Interval (20-200, Default: 60):
Bars between culling weak strategies during historical data
Should be 2x spawn interval for balanced churn
Lower = aggressive selection pressure
Higher = patient evaluation
Live: Spawn Interval (100-500, Default: 200):
Bars between spawning during live trading
Much slower than historical for stability
Prevents population chaos during live trading
200 bars ≈ 50 hours of chart time on a 15min chart (about two days on a 24-hour market)
Live: Cull Interval (200-1000, Default: 400):
Bars between culling during live trading
Should be 2x live spawn interval
Conservative removal during live trading
Historical: Mutation Rate (0.05-0.40, Default: 0.20):
Probability each gene mutates during breeding (20% = 2 out of 10 genes on average)
Higher = more exploration, slower convergence
Lower = more exploitation, faster convergence but risk of local optima
20% balances exploration vs exploitation
Live: Mutation Rate (0.02-0.20, Default: 0.08):
Mutation rate during live trading
Much lower for stability (don't want population to suddenly degrade)
8% = mostly inherits parent genes with small tweaks
Mutation Strength (0.05-0.25, Default: 0.12):
How much genes change when mutated (% of gene's total range)
0.05 = tiny nudges (fine-tuning)
0.12 = moderate jumps (recommended)
0.25 = large leaps (aggressive exploration)
Example: If gene range is 0.5-2.0, 12% strength = ±0.18 possible change
📈 Signal Quality Settings
Min Signal Probability (0.55-0.80, Default: 0.65):
Quality gate threshold - signals below this never generate
0.55-0.60 = More signals, accept lower confidence (higher risk)
0.65 = Institutional-grade balance (recommended)
0.70-0.75 = Fewer but higher-quality signals (conservative)
0.80+ = Very selective, very few signals (ultra-conservative)
Min Confluence Score (1-3, Default: 2):
Required indicator agreement before signal generates
1 = Any single indicator can trigger (not recommended - too many false signals)
2 = Requires 2 of 3 indicators agree (RECOMMENDED for balance)
3 = All 3 must agree (very selective, few signals, high quality)
Base Persistence Bars (1-5, Default: 2):
Base bars signal must persist before entry
System adapts automatically:
High probability signals (75%+) enter 1 bar faster
Low probability signals (<68%) need 1 bar more
Trending regime: -1 bar (faster entries)
Volatile regime: +1 bar (more confirmation)
1 = Immediate entry after quality gate (responsive but prone to whipsaw)
2 = Balanced confirmation (recommended)
3-5 = Patient confirmation (slower but more reliable)
Cooldown After Trade (3-20, Default: 8):
Bars to wait after exit before next entry allowed
Prevents overtrading and revenge trading
3 = Minimal cooldown (active trading)
8 = Balanced (recommended)
15-20 = Conservative (position trading)
Entropy Length (10-50, Default: 20):
Lookback period for market order/disorder calculation
Lower = more responsive to regime changes (noisy)
Higher = more stable regime detection (laggy)
20 = works across most timeframes
Momentum Length (5-30, Default: 14):
Period for RSI/ROC calculations
14 = standard (RSI default)
Lower = more signals, less reliable
Higher = fewer signals, more reliable
Structure Length (20-100, Default: 50):
Lookback for support/resistance swing range
20 = short-term swings (day trading)
50 = medium-term structure (recommended)
100 = major structure (position trading)
Trend EMA Length (20-100, Default: 50):
EMA period for trend detection and direction bias
20 = short-term trend (responsive)
50 = medium-term trend (recommended)
100 = long-term trend (position trading)
ATR Period (5-30, Default: 14):
Period for volatility measurement
14 = standard ATR
Lower = more responsive to vol changes
Higher = smoother vol calculation
📊 Volatility Scaling (DVS) Settings
Enable DVS (Default: ON):
Dynamic volatility scaling for adaptive stop/target placement
Highly recommended to leave ON
OFF only for testing fixed-distance stops
DVS Method (Default: Ensemble):
ATR Ratio: Simple, fast, single-method (good for beginners)
Parkinson: High-low range based (good for intraday)
Garman-Klass: OHLC based (sophisticated, considers gaps)
Ensemble: Median of all three (RECOMMENDED - most robust)
DVS Memory (20-200, Default: 100):
Lookback for baseline volatility comparison
20 = very responsive to vol changes (can overreact)
100 = balanced adaptation (recommended)
200 = slow, stable baseline (minimizes false vol signals)
DVS Sensitivity (0.3-1.5, Default: 0.7):
How much volatility affects scaling (power-law exponent)
0.3 = Conservative, heavily dampens vol impact (cube root)
0.5 = Moderate dampening (square root)
0.7 = Balanced response (recommended)
1.0 = Linear, full 1:1 vol response
1.5 = Aggressive, amplified response (super-linear)
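To show how an ensemble DVS factor with a power-law sensitivity could look in Pine v5, here is a sketch. The Parkinson and Garman-Klass lines are the standard per-bar textbook estimators; the exact blending, smoothing, and clamping inside AGE are not reproduced, so this is an approximation only.

//@version=5
indicator("DVS sketch (illustrative)", overlay=false)

memory      = input.int(100, "DVS Memory", minval=20, maxval=200)
sensitivity = input.float(0.7, "DVS Sensitivity", minval=0.3, maxval=1.5)

// Three volatility estimates, each expressed as current value over its baseline
atrRatio  = ta.atr(14) / ta.sma(ta.atr(14), memory)
parkinson = math.sqrt(math.pow(math.log(high / low), 2) / (4 * math.log(2)))
parkRatio = parkinson / ta.sma(parkinson, memory)
gk        = math.sqrt(math.max(0.5 * math.pow(math.log(high / low), 2) - (2 * math.log(2) - 1) * math.pow(math.log(close / open), 2), 0))
gkRatio   = gk / ta.sma(gk, memory)

// Ensemble = median of the three ratios (classic min/max trick for a 3-value median)
ensemble = math.max(math.min(atrRatio, parkRatio), math.min(math.max(atrRatio, parkRatio), gkRatio))

// Power-law sensitivity: 0.5 behaves like a square root, 1.0 is linear
dvs = math.pow(ensemble, sensitivity)

plot(dvs, "DVS factor (demo)")
plot(1.0, "Neutral", color=color.gray)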
🔬 Walk-Forward Optimization Settings
Enable WFO (Default: ON):
Out-of-sample validation to prevent overfitting
Highly recommended to leave ON
OFF only for testing or if you want unvalidated strategies
Training Window (100-500, Default: 250):
Bars for in-sample optimization
100 = fast validation, less data (risky)
250 = balanced (recommended) - roughly a year of daily bars, or about two weeks of 15min bars on a standard equity session
500 = patient validation, more data (conservative)
Testing Window (30-200, Default: 75):
Bars for out-of-sample validation
Should be ~30% of training window
30 = minimal test (fast validation)
75 = balanced (recommended)
200 = extensive test (very conservative)
Min Trades for Validation (3-15, Default: 5):
Required trades in BOTH training AND testing periods
3 = minimal sample (risky, fast validation)
5 = balanced (recommended)
10+ = conservative (slow validation, high confidence)
WFO Efficiency Threshold (0.3-0.9, Default: 0.55):
Minimum test/train performance ratio required
0.30 = Very loose (test must be 30% as good as training)
0.55 = Balanced (recommended) - test must be 55% as good
0.70+ = Strict (test must closely match training)
Higher = fewer validated strategies, lower risk of overfitting
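A toy illustration of the WFO efficiency test: out-of-sample performance must reach a fixed fraction of in-sample performance, with enough trades on both sides. The per-phase expectancy numbers and variable names below are placeholders; in the live script these accumulate bar by bar for each strategy.

//@version=5
indicator("WFO validation sketch (illustrative)", overlay=false)

effThreshold = input.float(0.55, "WFO Efficiency Threshold", minval=0.3, maxval=0.9)
minTrades    = input.int(5, "Min Trades for Validation")

// Hypothetical stats for one strategy (placeholders)
trainExpectancy = 0.42   // avg R per trade over the 250-bar training window
testExpectancy  = 0.26   // avg R per trade over the 75-bar testing window
trainTrades     = 9
testTrades      = 6

efficiency = trainExpectancy > 0 ? testExpectancy / trainExpectancy : 0.0
validated  = efficiency >= effThreshold and trainTrades >= minTrades and testTrades >= minTrades

plot(efficiency, "Test/Train efficiency")
plot(validated ? 1 : 0, "Validated flag", color=color.teal)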
🎨 Premium Visuals Settings
Visual Theme:
Neon Genesis: Cyberpunk aesthetic (cyan/magenta/purple)
Carbon Fiber: Industrial look (blue/red/gray)
Quantum Blue: Quantum computing (blue/purple/pink)
Aurora: Northern lights (teal/orange/purple)
⚡ Gradient Probability Cloud (Default: ON):
Multi-layer gradient showing signal buildup
Turn OFF if chart lags or for cleaner look
Cloud Gradient Layers (3-15, Default: 7):
More layers = smoother gradient, more CPU intensive
Fewer layers = faster, blockier appearance
🎗️ Population Fitness Ribbon (Default: ON):
Histogram showing fitness distribution
Turn OFF for cleaner chart
Ribbon Layers (5-20, Default: 10):
More layers = finer fitness detail
Fewer layers = simpler histogram
⭕ Signal Confidence Halo (Default: ON):
Circular indicator around entry signals
Size/brightness scales with probability
Minimal performance cost
🔬 Evolution Event Markers (Default: ON):
Diamond (spawn) and X (cull) markers
Shows genetic algorithm activity
Minimal performance cost
🎯 Stop/Target Lines (Default: ON):
Shows shadow portfolio stop/target levels
Turn OFF for cleaner chart (recommended for screenshots/presentations)
📊 Enhanced Dashboard (Default: ON):
Comprehensive metrics panel
Should stay ON unless you want zero overlays
🔍 Diagnostics Panel (Default: OFF):
Detailed signal rejection tracking
Turn ON when optimizing settings
Turn OFF during normal use (slight performance cost)
📈 USAGE WORKFLOW - HOW TO USE THIS INDICATOR
Phase 1: Initial Setup & Learning
Add AGE to your chart
Recommended timeframes: 15min, 30min, 1H (best signal-to-noise ratio)
Works on: 5min (day trading), 4H (swing trading), Daily (position trading)
Load 1000+ bars for sufficient evolution history
Let the population evolve (100+ bars minimum)
First 50 bars: Random exploration, poor results expected
Bars 50-150: Population converging, fitness improving
Bars 150+: Stable performance, validated strategies emerging
Watch the dashboard metrics
Population should grow toward max capacity
Generation number should advance regularly
Validated strategies counter should increase
Best fitness should trend upward toward 0.50-0.70 range
Observe evolution markers
Diamond markers (cyan) = new strategies spawning
X markers (red) = weak strategies being culled
Frequent early activity = healthy evolution
Activity slowing = population stabilizing
Be patient. Evolution takes time. Don't judge performance before 150+ bars.
Phase 2: Signal Observation
Watch signals form
Gradient cloud builds up 2-3 bars before entry
Cloud brightness = probability strength
Cloud thickness = signal persistence
Check signal quality
Look at confidence halo size when entry marker appears
Large bright halo = elite setup (85%+)
Medium halo = strong setup (75-85%)
Small halo = good setup (65-75%)
Verify market conditions
Check trend EMA color (green = uptrend, red = downtrend, gray = choppy)
Check background tint (green = trending, red = volatile, clear = choppy)
Trending background + aligned signal = ideal conditions
Review dashboard signal status
Current Signal column shows:
Status (Long/Short/Forming/Waiting)
Confidence % (actual probability value)
Quality assessment (Elite/Strong/Good)
Confluence score (2/3 or 3/3 preferred)
Only signals meeting ALL quality gates appear on chart. If you're not seeing signals, population is either still learning or market conditions aren't suitable.
Phase 3: Manual Trading Execution
When Long Signal Fires:
Verify confidence level (dashboard or halo size)
Confirm trend alignment (EMA sloping up, green color)
Check regime (preferably trending or choppy, avoid volatile)
Enter long manually on your broker platform
Set stop loss at displayed stop line level (if lines enabled), or use your own risk management
Set take profit at displayed target line level, or trail manually
Monitor position - exit if X marker appears (signal reversal)
When Short Signal Fires:
Same verification process
Confirm downtrend (EMA sloping down, red color)
Enter short manually
Use displayed stop/target levels or your own
AGE tells you WHEN and HOW CONFIDENT. You decide WHETHER and HOW MUCH.
Phase 4: Set Up Alerts (Never Miss a Signal)
Right-click on indicator name in legend
Select "Add Alert"
Choose condition:
"AGE Long" = Long entry signal fired
"AGE Short" = Short entry signal fired
"AGE Exit" = Position reversal/exit signal
Set notification method:
Sound alert (popup on chart)
Email notification
Webhook to phone/trading platform
Mobile app push notification
Name the alert (e.g., "AGE BTCUSD 15min Long")
Save alert
Recommended: Set alerts for both long and short, enable mobile push notifications. You'll get alerted in real-time even if not watching charts.
Phase 5: Monitor Population Health
Weekly Review:
Check dashboard Population column:
Active count should be near max (6-8 of 8)
Validated count should be >50% of active
Generation should be advancing (1-2 per week typical)
Check dashboard Performance column:
Aggregate win rate should be >50% (target: 55-65%)
Total P&L should be positive (may fluctuate)
Best fitness should be >0.50 (target: 0.55-0.70)
MAS should be declining slowly (normal adaptation)
Check Active Strategy column:
Selected strategy should be validated (✓ VAL)
Personal fitness should match best fitness
Trade count should be accumulating
Win rate should be >50%
Warning Signs:
Zero validated strategies after 300+ bars = settings too strict or market unsuitable
Best fitness stuck <0.30 = population struggling, consider parameter adjustment
No spawning/culling for 200+ bars = evolution stalled (may be optimal or need reset)
Aggregate win rate <45% sustained = system not working on this instrument/timeframe
Health Check Pass:
50%+ strategies validated
Best fitness >0.50
Aggregate win rate >52%
Regular spawn/cull activity
Selected strategy validated
Phase 6: Optimization (If Needed)
Enable Diagnostics Panel (bottom-right) for data-driven tuning:
Problem: Too Few Signals
Evaluated: 200
Passed: 8 (4%)
⨯ Probability: 140 (70%)
Solutions:
Lower min probability: 65% → 60% or 55%
Reduce min confluence: 2 → 1
Lower base persistence: 2 → 1
Increase mutation rate temporarily to explore new genes
Check if regime filter is blocking signals (⨯ Regime high?)
Problem: Too Many False Signals
Evaluated: 200
Passed: 90 (45%)
Win rate: 42%
Solutions:
Raise min probability: 65% → 70% or 75%
Increase min confluence: 2 → 3
Raise base persistence: 2 → 3
Enable WFO if disabled (validates strategies before use)
Check if volume filter is being ignored (⨯ Volume low?)
Problem: Counter-Trend Losses
⨯ Trend: 5 (only 5% rejected)
Losses often occur against trend
Solutions:
System should already filter trend opposition
May need stronger trend requirement
Consider only taking signals aligned with higher timeframe trend
Use longer trend EMA (50 → 100)
Problem: Volatile Market Whipsaws
⨯ Regime: 100 (50% rejected by volatile regime)
Still getting stopped out frequently
Solutions:
System is correctly blocking volatile signals
Losses happening because vol filter isn't strict enough
Consider not trading during volatile periods (respect the regime)
Or disable regime filter and accept higher risk
Optimization Workflow:
Enable diagnostics
Run 200+ bars with current settings
Analyze rejection patterns and win rate
Make ONE change at a time (scientific method)
Re-run 200+ bars and compare results
Keep change if improvement, revert if worse
Disable diagnostics when satisfied
Never change multiple parameters at once - you won't know what worked.
Phase 7: Multi-Instrument Deployment
AGE learns independently on each chart:
Recommended Strategy:
Deploy AGE on 3-5 different instruments
Different asset classes ideal (e.g., ES futures, EURUSD, BTCUSD, SPY, Gold)
Each learns optimal strategies for that instrument's personality
Take signals from all 5 charts
Natural diversification reduces overall risk
Why This Works:
When one market is choppy, others may be trending
Different instruments respond to different news/catalysts
Portfolio-level win rate more stable than single-instrument
Evolution explores different parameter spaces on each chart
Setup:
Same settings across all charts (or customize if preferred)
Set alerts for all
Take every validated signal across all instruments
Position size based on total account (don't overleverage any single signal)
⚠️ REALISTIC EXPECTATIONS - CRITICAL READING
What AGE Can Do
✅ Generate probability-weighted signals using genetic algorithms
✅ Evolve strategies in real-time through natural selection
✅ Validate strategies on out-of-sample data (walk-forward optimization)
✅ Adapt to changing market conditions automatically over time
✅ Provide comprehensive metrics on population health and signal quality
✅ Work on any instrument, any timeframe, any broker
✅ Improve over time as weak strategies are culled and fit strategies breed
What AGE Cannot Do
❌ Win every trade (typical win rate: 55-65% at best)
❌ Predict the future with certainty (markets are probabilistic, not deterministic)
❌ Work perfectly from bar 1 (needs 100-150 bars to learn and stabilize)
❌ Guarantee profits under all market conditions
❌ Replace your trading discipline and risk management
❌ Execute trades automatically (this is an indicator, not a strategy)
❌ Prevent all losses (drawdowns are normal and expected)
❌ Adapt instantly to regime changes (re-learning takes 50-100 bars)
Performance Realities
Typical Performance After Evolution Stabilizes (150+ bars):
Win Rate: 55-65% (excellent for trend-following systems)
Profit Factor: 1.5-2.5 (realistic for validated strategies)
Signal Frequency: 5-15 signals per 100 bars (quality over quantity)
Drawdown Periods: 20-40% of time in equity retracement (normal trading reality)
Max Consecutive Losses: 5-8 losses possible even with 60% win rate (probability says this is normal)
Evolution Timeline:
Bars 0-50: Random exploration, learning phase - poor results expected, don't judge yet
Bars 50-150: Population converging, fitness climbing - results improving
Bars 150-300: Stable performance, most strategies validated - consistent results
Bars 300+: Mature population, optimal genes dominant - best results
Market Condition Dependency:
Trending Markets: AGE excels - clear directional moves, high-probability setups
Choppy Markets: AGE struggles - fewer signals generated, lower win rate
Volatile Markets: AGE cautious - higher rejection rate, wider stops, fewer trades
Market Regime Changes:
When market shifts from trending to choppy overnight
Validated strategies can become temporarily invalidated
AGE will adapt through evolution, but not instantly
Expect 50-100 bar re-learning period after major regime shifts
Fitness may temporarily drop then recover
This is NOT a holy grail. It's a sophisticated signal generator that learns and adapts using genetic algorithms. Your success depends on:
Patience during learning periods (don't abandon after 3 losses)
Proper position sizing (risk 0.5-2% per trade, not 10%)
Following signals consistently (cherry-picking defeats statistical edge)
Not abandoning system prematurely (give it 200+ bars minimum)
Understanding probability (60% win rate means 40% of trades WILL lose)
Respecting market conditions (trending = trade more, choppy = trade less)
Managing emotions (AGE is emotionless, you need to be too)
Expected Drawdowns:
Single-strategy max DD: 10-20% of equity (normal)
Portfolio across multiple instruments: 5-15% (diversification helps)
Losing streaks: 3-5 consecutive losses expected periodically
No indicator eliminates risk. AGE manages risk through:
Quality gates (rejecting low-probability signals)
Confluence requirements (multi-indicator confirmation)
Persistence requirements (no knee-jerk reactions)
Regime awareness (reduced trading in chaos)
Walk-forward validation (preventing overfitting)
But it cannot prevent all losses. That's inherent to trading.
🔧 TECHNICAL SPECIFICATIONS
Platform: TradingView Pine Script v5
Indicator Type: Overlay indicator (plots on price chart)
Execution Type: Signals only - no automatic order placement
Computational Load:
Moderate to High (genetic algorithms + shadow portfolios)
8 strategies × shadow portfolio simulation = significant computation
Premium visuals add additional load (gradient cloud, fitness ribbon)
TradingView Resource Limits (Built-in Caps):
Max Bars Back: 500 (sufficient for WFO and evolution)
Max Labels: 100 (plenty for entry/exit markers)
Max Lines: 150 (adequate for stop/target lines)
Max Boxes: 50 (not heavily used)
Max Polylines: 100 (confidence halos)
Recommended Chart Settings:
Timeframe: 15min to 1H (optimal signal/noise balance)
5min: Works but noisier, more signals
4H/Daily: Works but fewer signals
Bars Loaded: 1000+ (ensures sufficient evolution history)
Replay Mode: Excellent for testing without risk
Performance Optimization Tips:
Disable gradient cloud if chart lags (most CPU intensive visual)
Disable fitness ribbon if still laggy
Reduce cloud layers from 7 to 3
Reduce ribbon layers from 10 to 5
Turn off diagnostics panel unless actively tuning
Close other heavy indicators to free resources
Browser/Platform Compatibility:
Works on all modern browsers (Chrome, Firefox, Safari, Edge)
Mobile app supported (full functionality on phone/tablet)
Desktop app supported (best performance)
Web version supported (may be slower on older computers)
Data Requirements:
Real-time or delayed data both work
No special data feeds required
Works with TradingView's standard data
Historical + live data seamlessly integrated
🎓 THEORETICAL FOUNDATIONS
AGE synthesizes advanced concepts from multiple disciplines:
Evolutionary Computation
Genetic Algorithms (Holland, 1975): Population-based optimization through natural selection metaphor
Tournament Selection: Fitness-based parent selection with diversity preservation
Crossover Operators: Fitness-weighted gene recombination from two parents
Mutation Operators: Random gene perturbation for exploration of new parameter space
Elitism: Preservation of top N performers to prevent loss of best solutions
Adaptive Parameters: Different mutation rates for historical vs. live phases
Technical Analysis
Support/Resistance: Price structure within swing ranges
Trend Following: EMA-based directional bias
Momentum Analysis: RSI, ROC, MACD composite indicators
Volatility Analysis: ATR-based risk scaling
Volume Confirmation: Trade activity validation
Information Theory
Shannon Entropy (1948): Quantification of market order vs. disorder
Signal-to-Noise Ratio: Directional information vs. random walk
Information Content: How much "information" a price move contains
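For readers new to the entropy idea, one common way to estimate Shannon entropy of recent price action in Pine v5 is to bin bars into up/down/flat states and measure how evenly they are distributed. AGE's actual entropy construction is not disclosed here, so the binning scheme below is purely illustrative.

//@version=5
indicator("Shannon entropy sketch (illustrative)", overlay=false)

len = input.int(20, "Entropy Length")

// Count up, down, and flat bars over the window
up   = math.sum(close > close[1] ? 1 : 0, len)
down = math.sum(close < close[1] ? 1 : 0, len)
flat = len - up - down

// -p*log(p) term, log base 3 so that maximum entropy with three states equals 1
f_term(count) =>
    p = count / len
    p > 0 ? -p * math.log(p) / math.log(3) : 0.0

entropy = f_term(up) + f_term(down) + f_term(flat)
plot(entropy, "Entropy (0 = ordered, 1 = random)")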
Statistics & Probability
Walk-Forward Analysis: Rolling in-sample/out-of-sample optimization
Out-of-Sample Validation: Testing on unseen data to prevent overfitting
Monte Carlo Principles: Shadow portfolio simulation with realistic execution
Expectancy Theory: Win rate × avg win - loss rate × avg loss
Probability Distributions: Signal confidence quantification
Risk Management
ATR-Based Stops: Volatility-normalized risk per trade
Volatility Regime Detection: Market state classification (trending/choppy/volatile)
Drawdown Control: Peak-to-trough equity measurement
R-Multiple Normalization: Performance measurement in risk units
Machine Learning Concepts
Online Learning: Continuous adaptation as new data arrives
Fitness Functions: Multi-objective optimization (win rate + expectancy + drawdown)
Exploration vs. Exploitation: Balance between trying new strategies and using proven ones
Overfitting Prevention: Walk-forward validation as regularization
Novel Contribution:
AGE is the first TradingView indicator to apply genetic algorithms to real-time indicator parameter optimization while maintaining strict anti-overfitting controls through walk-forward validation.
Most "adaptive" indicators simply recalibrate lookback periods or thresholds. AGE evolves entirely new strategies through competitive selection - it's not parameter tuning, it's Darwinian evolution of trading logic itself.
The combination of:
Genetic algorithm population management
Shadow portfolio simulation for realistic fitness evaluation
Walk-forward validation to prevent overfitting
Multi-indicator confluence for signal quality
Dynamic volatility scaling for adaptive risk
...creates a system that genuinely learns and improves over time while avoiding the curse of curve-fitting that plagues most optimization approaches.
🏗️ DEVELOPMENT NOTES
This project represents months of intensive development, facing significant technical challenges:
Challenge 1: Making Genetics Actually Work
Early versions spawned garbage strategies that polluted the gene pool:
Random gene combinations produced nonsensical parameter sets
Weak strategies survived too long, dragging down population
No clear convergence toward optimal solutions
Solution:
Comprehensive fitness scoring (4 factors: win rate, P&L, expectancy, drawdown)
Elite preservation (top 2 always protected)
Walk-forward validation (unproven strategies penalized 30%)
Tournament selection (fitness-weighted breeding)
Adaptive culling (MAS decay creates increasing selection pressure)
Challenge 2: Balancing Evolution Speed vs. Stability
Too fast = population chaos, no convergence. Too slow = can't adapt to regime changes.
Solution:
Dual-phase timing: Fast evolution during historical (30/60 bar intervals), slow during live (200/400 bar intervals)
Adaptive mutation rates: 20% historical, 8% live
Spawn/cull ratio: Always 2:1 to prevent population collapse
Challenge 3: Shadow Portfolio Accuracy
Needed realistic trade simulation without lookahead bias:
Can't peek at future bars for exits
Must track multiple portfolios simultaneously
Stop/target checks must use bar's high/low correctly
Solution:
Entry on close (realistic)
Exit checks on current bar's high/low (realistic)
Independent position tracking per strategy
Cooldown periods to prevent unrealistic rapid re-entry
ATR-normalized P&L (R-multiples) for fair comparison across volatility regimes
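Here is a stripped-down, long-only sketch of that shadow-portfolio bookkeeping: entry on the close, stop and target checked against the current bar's low/high only, and P&L accumulated in ATR-normalized R-multiples. The EMA-cross entry trigger and the variable names are placeholders, not AGE's actual signal.

//@version=5
indicator("Shadow exit sketch (illustrative)", overlay=false)

atrMult    = 2.5   // stop distance in ATRs (illustrative)
targetMult = 2.5   // target distance in ATRs (illustrative)

var float entry = na
var float stop  = na
var float tgt   = na
var float risk  = na
var float pnlR  = 0.0
var int   pos   = 0

atr     = ta.atr(14)
emaLine = ta.ema(close, 50)
longSig = ta.crossover(close, emaLine)     // placeholder entry signal

// Exits checked against the CURRENT bar's low/high only (no lookahead)
if pos == 1
    if low <= stop
        pnlR += (stop - entry) / risk
        pos  := 0
    else if high >= tgt
        pnlR += (tgt - entry) / risk
        pos  := 0

// Entry on close; bracket sized from ATR at entry time
if pos == 0 and longSig
    pos   := 1
    entry := close
    risk  := atrMult * atr
    stop  := close - risk
    tgt   := close + targetMult * atr

plot(pnlR, "Cumulative R (demo)")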
Challenge 4: Pine Script Compilation Limits
Hit TradingView's execution limits multiple times:
Too many array operations
Too many variables
Too complex conditional logic
Solution:
Optimized data structures (single DNA array instead of 8 separate arrays)
Minimal visual overlays (only essential plots)
Efficient fitness calculations (vectorized where possible)
Strategic use of barstate.islast to minimize dashboard updates
Challenge 5: Walk-Forward Implementation
Standard WFO is difficult in Pine Script:
Can't easily "roll forward" through historical data
Can't re-optimize strategies mid-stream
Must work in real-time streaming environment
Solution:
Age-based phase detection (first 250 bars = training, next 75 = testing)
Separate metric tracking for train vs. test
Efficiency calculation at fixed interval (after test period completes)
Validation flag persists for strategy lifetime
Challenge 6: Signal Quality Control
Early versions generated too many signals with poor win rates:
Single indicators produced excessive noise
No trend alignment
No regime awareness
Instant entries on single-bar spikes
Solution:
Three-layer confluence system (entropy + momentum + structure)
Minimum 2-of-3 agreement requirement
Trend alignment checks (penalty for counter-trend)
Regime-based probability adjustments
Persistence requirements (signals must hold multiple bars)
Volume confirmation
Quality gate (probability + confluence thresholds)
The Result
A system that:
Truly evolves (not just parameter sweeps)
Truly validates (out-of-sample testing)
Truly adapts (ongoing competition and breeding)
Stays within TradingView's platform constraints
Provides institutional-quality signals
Maintains transparency (full metrics dashboard)
Development time: 3+ months of iterative refinement
Lines of code: ~1500 (highly optimized)
Test instruments: ES, NQ, EURUSD, BTCUSD, SPY, AAPL
Test timeframes: 5min, 15min, 1H, Daily
🎯 FINAL WORDS
The Adaptive Genesis Engine is not just another indicator - it's a living system that learns, adapts, and improves through the same principles that drive biological evolution. Every bar it observes adds to its experience. Every strategy it spawns explores new parameter combinations. Every strategy it culls removes weakness from the gene pool.
This is evolution in action on your charts.
You're not getting a static formula locked in time. You're getting a system that thinks, that competes, that survives through natural selection. The strongest strategies rise to the top. The weakest die. The gene pool improves generation after generation.
AGE doesn't claim to predict the future - it adapts to whatever the future brings. When markets shift from trending to choppy, from calm to volatile, from bullish to bearish - AGE evolves new strategies suited to the new regime.
Use it on any instrument. Any timeframe. Any market condition. AGE will adapt.
This indicator gives you the pure signal intelligence. How you choose to act on it - position sizing, risk management, execution discipline - that's your responsibility. AGE tells you when and how confident. You decide whether and how much.
Trust the process. Respect the evolution. Let Darwin work.
"In markets, as in nature, it is not the strongest strategies that survive, nor the most intelligent - but those most responsive to change."
Taking you to school. — Dskyz, Trade with insight. Trade with anticipation.
— Happy Holidays
Pattern Match & Forward Projection – Weekly (EN)
Overview
This indicator searches for recurring price patterns in weekly data and projects their average forward performance.
The logic is based on historical pattern repetition: it scans past price sequences similar to the most recent one, then aggregates their forward returns to estimate potential outcomes.
⚠️ Important: The indicator is designed for weekly timeframe only. Using it on daily or intraday charts will trigger an error message.
Settings (Inputs)
Pattern Settings
Pattern length (weeks): Number of weeks used to define the reference pattern.
Forward length (weeks): Number of weeks into the future to evaluate after each pattern match.
Lookback (weeks): Historical window to scan for past pattern matches.
Normalize by shape (z-score): If enabled, patterns are normalized by z-score, focusing on shape similarity rather than absolute values.
Distance threshold (Euclidean): Maximum allowed Euclidean distance between the reference pattern and historical candidates. Smaller values = stricter matching.
Min. required matches: Minimum number of valid matches needed for analysis.
Quality Filters
Min required Hit%: Minimum percentage of positive outcomes (upside forward returns) required for the pattern to be considered valid.
Return filter mode:
Either: absolute average return ≥ threshold
Long only: average return ≥ threshold
Short only: average return ≤ -threshold
Min avg return (%): Minimum average forward return threshold for validation.
Visual Options
Highlight historical matches (labels): Marks where in history similar patterns occurred.
Max match labels to draw: Caps the number of match markers shown to avoid clutter.
Draw average projection: Displays the average projected forward curve if conditions are met.
Show summary panel: Enables/disables the information panel.
Show weekly avg curve in panel: Adds a breakdown of average returns week by week.
Projection color: Choose the color of the projected forward curve.
What the Screen Shows
Summary Panel (top-left by default)
Total matches found in history
Matches with valid forward data
Average, minimum, and maximum distance (similarity measure)
Average forward return and Hit%
Distance threshold and normalization setting
Weekly average forward curve (if enabled)
Quality filter results (pass/fail)
Projection Curve (dotted line on price chart)
Drawn only if enough valid matches are found and filters are satisfied
Represents the average forward performance of historical matches, anchored at the current bar
Historical Match Labels (▲ markers)
Small arrows below past bars where similar patterns occurred
Tooltip: “Historical match”
Forecast Logic
The indicator does not predict the future in a deterministic way.
Instead, it relies on a pattern-matching algorithm:
The most recent N weeks (defined by Pattern length) are taken as the reference.
The algorithm scans the last Lookback (weeks) for segments with similar shape and magnitude.
Similarity is measured using Euclidean distance (optionally z-score normalized).
For each valid match, the subsequent Forward length weeks are collected.
These forward paths are averaged to generate a composite forward projection.
The summary panel reports whether the current setup passes the quality filters (Hit% and minimum average return).
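The matching core is easy to sketch in Pine v5: z-score normalize a fixed-length window, then take the Euclidean distance between the reference window and a candidate segment. The full scan over the lookback and the forward-path averaging are omitted for brevity; the function names and the single 52-weeks-ago candidate are illustrative.

//@version=5
indicator("Pattern distance sketch (illustrative)", overlay=false)

patLen = input.int(12, "Pattern length (weeks)")
max_bars_back(close, 500)

// z-score normalize the patLen closes ending `shift` bars ago
f_window(shift) =>
    w = array.new_float()
    for i = 0 to patLen - 1
        array.push(w, close[shift + patLen - 1 - i])
    m = array.avg(w)
    s = array.stdev(w)
    if s > 0
        for i = 0 to array.size(w) - 1
            array.set(w, i, (array.get(w, i) - m) / s)
    w

// Euclidean distance between two equally sized arrays
f_dist(a, b) =>
    acc = 0.0
    for i = 0 to array.size(a) - 1
        d = array.get(a, i) - array.get(b, i)
        acc += d * d
    math.sqrt(acc)

// Distance between the current pattern and the segment 52 weeks ago (one candidate)
dist = barstate.islast ? f_dist(f_window(0), f_window(52)) : na
plot(dist, "Distance to 52w-ago segment", style=plot.style_circles)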
Usage Notes
Best used as a contextual tool, not a standalone trading system.
Works only on weekly timeframe.
Quality filters help distinguish between noisy and statistically meaningful patterns.
A higher number of matches usually improves reliability, but very strict thresholds may reduce sample size.
📊 This tool is useful for traders who want to evaluate how similar historical setups have behaved and to visualize potential forward paths in a statistically aggregated way.
Linear Regression Forecast Tool [Daveatt]
Hello traders,
Navigating through the financial markets requires a blend of analysis, insight, and a touch of foresight.
My Linear Regression Forecast Tool is here to add that touch of foresight to your analysis toolkit on TradingView!
Linear Regression is the heart of this tool, a statistical method that explores the relationship between a dependent variable and one (or more) independent variable(s).
In simpler terms, it finds a straight line that best fits a set of data points.
This "line of best fit" then becomes a visual representation of the relationship in the data, providing a basis for making predictions.
Here's what the Linear Regression Forecast Tool brings to your trading table:
Multiple Indicator Choices: Select from various market indicators like Simple Moving Averages, Bollinger Bands, or the Volume Weighted Average Price as the basis for your linear regression analysis.
Customizable Forecast Periods: Define how many periods ahead you want to forecast, adjusting to your analysis needs, whether that's looking 5, 7, or 10 periods into the future.
On-Chart Forecast Points: The tool plots the forecasted points on your chart, providing a straightforward visual representation of potential future values based on past data.
In this script:
1. We first calculate the indicator using the specified period.
2. We then use the ta.linreg function to calculate a linear regression curve fitted to the indicator over the last Period bars.
3. We calculate the slope of the linear regression curve using the last two points on the curve.
We use this slope to extrapolate the linear regression curve to forecast the next X points of the indicator.
4. Finally, we use the plot function to plot the original indicator and the forecasted points on the chart, using the offset parameter to shift the forecasted points to the right (into the future).
This method assumes that the trend represented by the linear regression curve will continue, which may not always be the case, especially in volatile or changing market conditions.
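Under the hood, the four steps above need only a few lines of Pine v5. This sketch uses an SMA as the base series and plots five fixed forecast points with the plot offset trick; the parameter values are arbitrary examples.

//@version=5
indicator("Linear regression forecast sketch (illustrative)", overlay=true)

basePeriod = input.int(20, "Indicator period")
lrLen      = input.int(50, "Linear regression length")

base  = ta.sma(close, basePeriod)         // 1) the indicator being forecast
lr    = ta.linreg(base, lrLen, 0)         // 2) regression value at the current bar
slope = lr - ta.linreg(base, lrLen, 1)    // 3) slope from the last two points on the curve

plot(base, "Base indicator", color=color.orange)
// 4) extrapolate and shift each projected point into the future with `offset`
plot(lr + slope * 1, "t+1", color=color.aqua, offset=1, style=plot.style_circles)
plot(lr + slope * 2, "t+2", color=color.aqua, offset=2, style=plot.style_circles)
plot(lr + slope * 3, "t+3", color=color.aqua, offset=3, style=plot.style_circles)
plot(lr + slope * 4, "t+4", color=color.aqua, offset=4, style=plot.style_circles)
plot(lr + slope * 5, "t+5", color=color.aqua, offset=5, style=plot.style_circles)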
Examples:
Works with a moving average
Works with a Bollinger band
The code can be adapted to work with any other indicator (imagine RSI, MACD, other Moving Average Type, PSAR, Supertrend, etc...)
Conclusion
The Linear Regression Forecast Tool doesn't promise to tell the future but provides a structured way to visualize possible future price trends based on historical data.
Remember, no tool can predict market conditions with certainty.
It's always advisable to corroborate findings with other analysis methods and stay updated with market news and events.
Happy trading!
Risk-Adjusted Return Oscillator
The Risk-Adjusted Return Oscillator (RAR) is designed to aid traders in predicting future price action by analysing the risk-adjusted performance of an asset. This oscillator is displayed directly on the price chart, unlike other oscillators.
By considering the risk-return relationship, the indicator helps identify periods of overvaluation or undervaluation, allowing traders to anticipate potential price reversals or trend accelerations.
HOW TO USE
The Risk-Adjusted Return Oscillator analyses the risk-adjusted performance of an asset to detect price reversals and accelerations. Here's how to interpret its signals:
Ranging Market:
Overbought Signal: When the RAR curve reaches the overbought level (upper red line), it suggests a potential reversal signal. It indicates that the asset may be overvalued, and a price correction or trend reversal could occur.
Oversold Signal: When the RAR curve reaches the oversold level (lower red line), it indicates a potential reversal signal. It suggests that the asset may be undervalued, and a price correction or trend reversal could take place.
Trending Market:
Overbought Signal: In a trending market, an overbought signal (RAR curve reaching upper red line) suggests trend acceleration. It indicates that the existing trend is gaining strength, and buying pressure is increasing.
Oversold Signal: In a trending market, an oversold signal (RAR curve reaching lower red line) also signifies trend acceleration. It suggests that the prevailing trend is intensifying, and selling pressure is increasing.
Thus, it's important to consider the market context when interpreting overbought and oversold signals. In ranging markets, these signals act as potential reversal points. However, in trending markets, they indicate trend acceleration, reinforcing the current price direction.
SETTINGS
Period Length: Adjust the number of bars used to calculate returns and standard deviation.
Smoothing: Define the smoothing period for the RAR curve.
Show Overbought/Oversold Signals: Choose whether to display triangular shapes for overbought and oversold conditions.
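A rough Pine v5 sketch of the core calculation: the mean of recent returns divided by their standard deviation, then smoothed. The published indicator maps this value onto the price chart and uses its own overbought/oversold levels; that overlay mapping is not reproduced here, so the levels below are placeholders.

//@version=5
indicator("Risk-adjusted return sketch (illustrative)", overlay=false)

periodLen = input.int(20, "Period Length")
smoothLen = input.int(5, "Smoothing")

ret    = close / close[1] - 1                    // one-bar simple return
avgRet = ta.sma(ret, periodLen)                  // mean return over the window
risk   = ta.stdev(ret, periodLen)                // dispersion of returns
rar    = risk > 0 ? avgRet / risk : 0.0          // risk-adjusted return
rarSm  = ta.sma(rar, smoothLen)                  // smoothed RAR curve

plot(rarSm, "RAR (smoothed)", color=color.blue)
hline(1.0, "Overbought (placeholder)", color=color.red)
hline(-1.0, "Oversold (placeholder)", color=color.red)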