Fibonacci 3-D 🟩
The Fibonacci 3-D indicator is a visual tool that introduces a three-dimensional approach to Fibonacci projections, grounded in market geometry. Unlike traditional Fibonacci tools that rely on two points and project horizontal levels, this indicator derives slopes from three points to introduce a dynamic element into the calculations. The Fibonacci 3-D indicator uses three user-defined points to form a triangular structure, enabling multi-dimensional projections based on the relationships between the triangle’s sides.
This triangular framework forms the foundation for the indicator’s calculations, with each slope (⌳AB, ⌳AC, and ⌳BC) representing the rate of price change between its respective points. By incorporating these slopes into Fibonacci projections, the indicator provides an alternate approach to identifying potential support and resistance levels. The Fibonacci 3-D expands on traditional methods by integrating both historical price trends and recent momentum, offering deeper insights into market dynamics and aligning with broader market geometry.
The indicator operates across three modes, each defined by the triangular framework formed by three user-selected points (A, B, and C):
1-Dimensional (1-D): Fibonacci levels are based on a single side of the triangle, such as AB, AC, or BC. The slope of the selected side determines the angle of the projection, allowing users to analyze linear trends or directional price movements.
2-Dimensional (2-D): Combines two slopes derived from the sides of the triangle, such as AB and BC or AC and BC. This mode adds depth to the projections, accounting for both historical price swings and recent market momentum.
3-Dimensional (3-D): Integrates all three slopes into a unified projection. This mode captures the full geometric relationship between the points, revealing a comprehensive view of geometric market structure.
🌀 THEORY & CONCEPT 🌀
The Fibonacci 3-D indicator builds on the foundational principles of traditional Fibonacci analysis while expanding its scope to capture more intricate market structures. At its core, the indicator operates based on three user-selected points (A, B, and C), forming the vertices of a triangle that provides the structural basis for all calculations. This triangle determines the slopes, projections, and Fibonacci levels, aligning with the unique geometric relationships between the chosen points. By introducing multiple dimensions and leveraging this triangular framework, the indicator enables a deeper examination of price movements.
1️⃣ First Dimension (1-D)
In technical analysis, traditional Fibonacci retracement and extension tools operate as one-dimensional instruments. They rely on two price points, often a swing high and a swing low, to calculate and project horizontal levels at predefined Fibonacci ratios. These levels identify potential support and resistance zones based solely on the price difference between the selected points.
A one-dimensional Fibonacci showing levels derived from two price points (B and C).
The Fibonacci 3-D indicator extends this one-dimensional concept by introducing Ascending and Descending projection options. These options calculate the levels to align with the directional movement of price, creating sloped projections instead of purely horizontal levels.
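To illustrate the idea only (this is not the indicator's code), a sloped level can be produced by offsetting a Fibonacci ratio from one anchor point and extending it along a slope expressed in price units per bar. The placeholder coordinates and names below are assumptions:

```pine
//@version=5
indicator("Sloped Fibonacci level (sketch)", overlay = true)

// Placeholder anchors standing in for user-selected points B (swing high) and C (swing low)
barB   = bar_index - 60
priceB = high[60]
barC   = bar_index - 20
priceC = low[20]

ratio = 0.618
slope = (priceC - priceB) / (barC - barB)   // price change per bar along ⌳BC

// Level price at point C, then extended along the slope to the current bar
y1 = priceC + ratio * (priceB - priceC)
y2 = y1 + slope * (bar_index - barC)

if barstate.islast and bar_index > 60
    line.new(barC, y1, bar_index, y2, extend = extend.right, color = color.orange)
```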
1-D mode with an ascending projection along the ⌳BC slope aligned to the market's slope. Potential support is observed at 0.236 and 0.382, while resistance appears at 1.0 and 0.5.
2️⃣ Second Dimension (2-D)
The second dimension incorporates a second side of the triangle, introducing relationships between two slopes (e.g., ⌳AB and ⌳BC) to form a more dynamic three-point structure (A, B, and C) on the chart. This structure enables the indicator to move beyond the single-axis (price) calculations of traditional Fibonacci tools. The sides of the triangle (AB, AC, BC) represent slopes calculated as the rate of price change over time, capturing distinct components of market movement, such as trend direction and momentum.
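In Pine terms, each side's slope is simply the price difference divided by the bar distance between its endpoints. A minimal sketch, with placeholder coordinates standing in for the resolved A, B, and C points:

```pine
//@version=5
indicator("ABC side slopes (sketch)")

// Placeholder coordinates standing in for the user-selected A, B, and C points
barA   = bar_index - 90
barB   = bar_index - 50
barC   = bar_index - 10
priceA = close[90]
priceB = close[50]
priceC = close[10]

// Rate of price change (price units per bar) along each side of the triangle
slopeAB = (priceB - priceA) / (barB - barA)
slopeAC = (priceC - priceA) / (barC - barA)
slopeBC = (priceC - priceB) / (barC - barB)

plot(slopeAB, "slope AB", color.blue)
plot(slopeAC, "slope AC", color.teal)
plot(slopeBC, "slope BC", color.orange)
```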
2-D mode of the Fibonacci 3-D indicator using the ⌳AC slope with a descending projection. The Fibonacci projections align closely with observed market behavior, providing support at 0.236 and resistance at 0.618. Unlike traditional zigzag setups, this configuration uses two swing highs (A and B) and a swing low (C). The alignment along the descending slope highlights the geometric relationships between selected points in identifying potential support and resistance levels.
3️⃣ Third Dimension (3-D)
The third dimension expands the analysis by integrating all three slopes into a unified calculation, encompassing the entire triangle structure formed by points A, B, and C. Unlike the second dimension, which analyzes pairwise slope relationships, the 3-D mode reflects the combined geometry of the triangle. Each slope contributes a distinct perspective: AB and AC provide historical context, while BC emphasizes the most recent price movement and is given greater weight in the calculations to ensure projections remain responsive to current dynamics.
Using this integrated framework, the 3-D mode dynamically adjusts Fibonacci projections to balance long-term patterns and short-term momentum. The projections extend outward in alignment with the triangle’s geometry, offering a comprehensive framework for identifying potential support and resistance zones and capturing market structures beyond the scope of simpler 1-D or 2-D modes.
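The description does not disclose the actual weighting, but the idea of a BC-weighted composite slope can be sketched by extending the side-slope snippet above; the weight values here are purely illustrative assumptions:

```pine
// Hypothetical weights: the recent ⌳BC side counts double relative to ⌳AB and ⌳AC.
// The indicator's real weighting is not disclosed, so treat these numbers as placeholders.
wAB = 1.0
wAC = 1.0
wBC = 2.0
slope3D = (wAB * slopeAB + wAC * slopeAC + wBC * slopeBC) / (wAB + wAC + wBC)
plot(slope3D, "3-D composite slope", color.purple)
```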
Three-dimensional Fibonacci projection using the ⌳AC slope, aligning closely with the market's directional movement. The projection highlights key levels: resistance at 0.0 and 0.618, and support at 1.0, 0.786, and 0.382.
By leveraging all three slopes simultaneously, the 3-D mode introduces a level of complexity particularly suited for volatile or non-linear markets. The weighted slope calculations ensure no single price movement dominates the analysis, allowing the projections to adapt dynamically to the broader market structure while remaining sensitive to recent momentum.
Three-dimensional ascending projection. In 3D mode, the indicator integrates all three slopes to calculate the angle of projection for the Fibonacci levels. The resulting projections adapt dynamically to the overall geometry of the ABC structure, aligning with the market’s current direction.
🔂 Interactions: Dimensions, Slope Source, Projections, and Orientation
The Dimensions, Slope Source, Projections, and Orientation settings work together to define Fibonacci projections within the triangular framework. Each setting plays a specific role in the geometric analysis of price movements.
♾️ Dimension determines which of the three modes (1-D, 2-D, or 3-D) is used for Fibonacci projections. In 1-D mode, the projections are based on a single side of the triangle, such as AB, AC, or BC. In 2-D mode, two sides are combined, producing levels based on their geometric relationship. The 3-D mode integrates all three sides of the triangle, calculating projections using weighted averages that emphasize the BC side for its relevance to recent price movement while maintaining historical context from the AB and AC sides.
A one-dimensional Fibonacci projection using the ⌳AB slope with a neutral projection. Important levels of interaction are highlighted: repeated resistance at Level 1.0 and repeated support at Levels 0.5 and 0.618. The projection aligns horizontally, reflecting the relationship between points A, B, and C while identifying recurring zones of market structure.
🧮 Slope Source determines which side of the triangle (AB, AC, or BC) serves as the foundation for Fibonacci projections. This selection directly impacts the calculations by specifying the slope that anchors the geometric relationships within the chosen Dimension mode (1-D, 2-D, or 3-D).
In 1-D mode, the selected Source defines the single side used for the projection. In 2-D and 3-D modes, the Source works in conjunction with other settings to refine projections by integrating the selected slope into the multi-dimensional framework.
One-dimensional Fibonacci projection using the ⌳AC Slope Source and Ascending projection. The projection continues on the AC slope line.
🎯 Projection controls the direction and alignment of Fibonacci levels. Neutral projections produce horizontal levels, similar to traditional Fibonacci tools. Ascending and Descending projections adjust the levels along the calculated slope to reflect market trends. These options allow the indicator’s outputs to align with different market behaviors.
An ascending projection along the ⌳BC slope aligns with resistance levels at 1.0, 0.618, and 0.236. The geometric relationship between points A, B, and C illustrates how the projection adapts to market structure, identifying resistance zones that may not be captured by traditional Fibonacci tools.
🧭 Orientation modifies the alignment of the setup area defined by points A, B, and C, which influences Fibonacci projections in 2-D and 3-D modes. In Default mode, the triangle aligns naturally based on the relative positions of points B and C. In Inverted mode, the geometric orientation of the setup area is reversed, altering the slope calculations while preserving the projection direction specified in the Projection setting. In 1-D mode, Orientation has no effect since only one side is used for the projection.
Adjusting the Orientation setting provides alternative views of how Fibonacci levels align with the market's structure. By recalibrating the triangle’s setup, the inverted orientation can highlight different relationships between the sides, providing additional perspectives on support and resistance zones.
2-D inverted. The ⌳AC slope defines the projection, and the inverted orientation adjusts the alignment of the setup area, altering the angles used in level calculations. Key levels are highlighted: resistance at 0.786, strong support at 0.5 and 0.236, and a resistance-turned-support interaction at 0.618.
🛠️ CONFIGURATION AND SETTINGS 🛠️
The Fibonacci 3-D indicator includes configurable settings to adjust its functionality and visual representation. These options include customization of the dimensions (1-D, 2-D, or 3-D), slope calculations, orientations, projections, Fibonacci levels, and visual elements.
When adding the indicator to a new chart, select three reference points (A, B, and C). These are usually set to recent swing points. All three points can be easily changed at any time by clicking on the reference point and dragging it to a new location.
By default, all settings are set to Auto. The indicator uses an internal algorithm to estimate the projections based on the orientation and relative positions of the reference points. However, all values can be overridden to reflect the user's interpretation of the current market geometry.
⚙️ Core Settings
Dimensions: Defines how many sides of the triangle formed by points A, B, and C are incorporated into the calculations for Fibonacci projections. This setting determines the level of complexity and detail in the analysis.
1-D: Projects levels along the angle of a single user-selected side of the triangle.
2-D: Projects levels based on a composite slope derived from the angles of two sides of the triangle.
3-D: Projects levels based on a composite slope derived from all three sides of the triangle (A-B, A-C, and B-C), providing a multi-dimensional projection that adapts to both historical and recent market movements.
Slope Source: Determines which side of the triangle is used as the basis for slope calculations.
A–B: The slope between points A and B. In 1-D mode, this determines the projection. In 2-D and 3-D modes, it contributes to the composite slope calculation.
A–C: The slope between points A and C. In 1-D mode, this determines the projection. In 2-D and 3-D modes, it contributes to the composite slope calculation.
B–C: The slope between points B and C. In 1-D mode, this determines the projection. In 2-D and 3-D modes, it contributes to the composite slope calculation.
Orientation: Defines the orientation of the triangle formed by points A, B, and C, which influences slope calculations.
Auto: Automatically determines orientation based on the relative positions of points B and C. If point C is to the right of point B, the orientation is "normal." If point C is to the left, the orientation is inverted.
Inverted: Reverses the orientation set in "Auto" mode. This flips the triangle, reversing the slope calculations (⌳AB becomes ⌳BA).
Projection: Determines the direction of Fibonacci projections:
Auto: Automatically determines the projection direction based on the triangle formed by A, B, and C.
Ascending: Projects the levels upward.
Neutral: Projects the levels horizontally, similar to traditional Fibonacci retracements.
Descending: Projects the levels downward.
⚙️ Fibonacci Level Settings
Show or hide specific levels.
Level Value: Adjust Fibonacci ratios for each level. The 0.0 and 1.0 levels are fixed.
Color: Set level colors.
⚙️ Visibility Settings
Show Setup: Toggle the display of the setup area, which includes the projected lines used in calculations.
Show Triangle: Toggle the display of the triangle formed by points A, B, and C.
Triangle Color: Set triangle line colors.
Show Point Labels: Toggle the display of labels for points A, B, and C.
Show Left/Right Labels: Toggle price labels on the left and right sides of the chart.
Fill %: Adjust the fill intensity between Fibonacci levels (0% for no fill, 100% for full fill).
Info: Set the location or hide the Slope Source and Dimension. If Orientation is Inverted, the Slope Source will display with an asterisk (*).
⚙️ Time-Price Points: Set the time and price for points A, B, and C, which define the Fibonacci projections.
A, B, and C Points: User-defined time and price coordinates that form the foundation of the indicator's calculations.
Interactive Adjustments: Changes made to points on the chart automatically synchronize with the settings panel and update projections in real time.
Notes
Unlike traditional Fibonacci tools that include extensions beyond 1.0 (e.g., 1.618 or 2.618), the Fibonacci 3-D indicator restricts Fibonacci levels to the range between 0.0 and 1.0. This is because the projections are tied directly to the proportional relationships along the sides of the triangle formed by points A, B, and C, rather than extending beyond its defined structure.
The indicator's calculations dynamically sort the user-defined A, B, and C points by time, ensuring point A is always the earliest, point C the latest, and point B the middle. This automatic sorting allows users to freely adjust the points directly on the chart without concern for their sequence, maintaining consistency in the triangular structure.
🖼️ ADDITIONAL CHART EXAMPLES 🖼️
Three-dimensional ⌳AC slope is used with an ascending projection, even as the broader market trend moves downward. Despite the apparent contradiction, the projected Fibonacci levels align closely with price action, identifying zones of support and resistance. These levels highlight smaller countertrend movements, such as pullbacks to 0.382 and 0.236, followed by continuations at resistance levels like 0.618 and 0.786.
In 2-D mode, an ascending projection based on the BC slope highlights the market's geometric structure. A setup triangle, defined by a swing high (A), a swing low (B), and another swing high (C), reveals Fibonacci projections aligning with support at 0.236, 0.382, and 0.5, and resistance at 0.618, 0.786, and 1.0, as shown by the green and red arrows. This demonstrates the ability to uncover dynamic support and resistance levels not calculated in traditional Fibonacci tools.
In 2-D mode with an ascending projection from the ⌳AB slope, price movement is contained within the 0.5 and 0.786 levels. The 0.5 level serves as support, while the 0.786 level acts as resistance, with price action consistently interacting with these boundaries.
An AC (2-D) ascending projection is derived from two swing highs (A and B) and a swing low (C), reflecting a non-linear market structure that deviates from traditional zigzag patterns. The ascending projection aligns closely with the market's upward trajectory, forming a channel between the 0.0 and 0.5 Fibonacci levels. Note how price action interacts with the projected levels, showing support at 0.236 and 0.382, with the 0.5 level acting as a mid-channel equilibrium.
Two-dimensional ascending Fibonacci projection using the ⌳AC slope. Arrows highlight resistance at 0.786 and support at 0.0 and 0.236. The projection follows the ⌳AC slope, reflecting the geometric relationship between points A, B, and C to identify these levels.
Three-dimensional Fibonacci projection using the ⌳AC slope, aligned with the actual market's directional trend. By removing additional Fibonacci levels, the image emphasizes key areas: resistance at Level 0.0 and support at Levels 1.0 and 0.5. The projection dynamically follows the ⌳AC slope, adapting to the market's structure as defined by points A, B, and C.
A three-dimensional configuration uses the ⌳AB slope as the baseline for projections while incorporating the geometric influence of point C. Only the 0.0 and 0.618 levels are enabled, emphasizing the relationship between support at 0.0 and resistance at 0.618. Unlike traditional Fibonacci tools, which operate in a single plane, this setup reveals levels that rely on the triangular relationship between points A, B, and C. The third dimension allows for projections that align more closely with the market’s structure and reflect its multi-dimensional geometry.
The Fibonacci 3-D indicator can adapt to non-traditional point selection. Point A serves as a swing low, while points B and C are swing highs, forming an unconventional configuration. The ⌳BC slope is used in 2-D mode with an inverted orientation, flipping the projection direction and revealing resistance at Level 0.786 and support at Levels 0.618 and 0.5.
⚠️ DISCLAIMER ⚠️
The Fibonacci 3-D indicator is a visual analysis tool designed to illustrate Fibonacci relationships. While the indicator employs precise mathematical and geometric formulas, no guarantee is made that its calculations will align with other Fibonacci tools or proprietary methods. Like all technical and visual indicators, the Fibonacci projections generated by this tool may appear to visually align with key price zones in hindsight. However, these projections are not intended as standalone signals for trading decisions. This indicator is intended for educational and analytical purposes, complementing other tools and methods of market analysis.
🧠 BEYOND THE CODE 🧠
The Fibonacci 3-D indicator, like other xxattaxx indicators , is designed to encourage both education and community engagement. Your feedback and insights are invaluable to refining and enhancing the Fibonacci 3-D indicator. We look forward to the creative applications, adaptations, and observations this tool inspires within the trading community.
Uptrick: Arbitrage Opportunity
INTRODUCTION
This script, titled Uptrick: Arbitrage Monitor, is a Pine Script™ indicator that aims to help traders quickly visualize potential arbitrage scenarios across multiple cryptocurrency exchanges. Arbitrage, in general, involves taking advantage of price differences for the same asset across different trading platforms. By comparing market prices of the same symbol on two user-selected exchanges, as well as scanning a broader list of exchanges, this script attempts to signal areas where you might want to buy on one exchange and sell on another. It includes various graphical tools, calculations, and an optional Automated Detection signal feature, allowing users to incorporate more advanced data scanning into their trading decisions. Keep in mind that transaction fees must also be considered in real-world scenarios. These fees can negate potential profits and, in some cases, result in a net loss.
PURPOSE
The primary purpose of this indicator is to show potential percentage differences between the same cryptocurrency trading pairs on two different exchanges. This difference is displayed numerically, visually as a line chart, and it is also tested against user-defined thresholds. With the threshold in place, buy and sell signals can be generated. The script allows you to quickly gauge how significant a spread is between two exchanges and whether that spread surpasses a specified threshold. This is particularly useful for arbitrage trading, where an asset is bought at a lower price on one exchange and sold at a higher price on another, capitalizing on price discrepancies. By identifying these opportunities, traders can potentially secure profits across different markets.
WHY IT WAS MADE
This script was developed to help traders who frequently look for arbitrage opportunities in the fast-paced cryptocurrency market. Cryptocurrencies sometimes experience quick price divergences across different exchanges. By having an automated approach that compares and displays prices, traders can spend less time manually tracking price discrepancies and more time focusing on actual trading strategies. The script was also made with user customization in mind, allowing you to toggle an optional Automated-based approach and choose different moving average methods to smooth out the displayed price difference.
WHAT ARBITRAGE IS
Arbitrage is the practice of buying an asset on one market (or exchange) at a lower price and simultaneously selling it on another market where the price is higher, thus profiting from the price difference. In cryptocurrency markets, these price differentials can occur across multiple exchanges due to varying liquidity, trading volume, geographic factors, or market inefficiencies. Though sometimes small, these differences can be exploited for profit when approached methodically.
EXPLANATION OF INPUTS
The script includes a variety of user inputs that help tailor the indicator to your specific needs:
1. Compared Symbol 1: This is the primary symbol you want to track (for example, BTCUSDT). Make sure it is written in all capitals and that its price for the selected exchange is available on TradingView.
2. Compare Exchange 1: The first exchange on which the script will request pricing data for the chosen symbol.
3. Compared to Exchange: The second exchange, used for the comparison.
4. Opportunity Threshold (%): A percentage threshold that, when exceeded by the price difference, can trigger buy or sell signals.
5. Plot Style?: Allows you to choose between plotting the raw difference line or a moving average of that difference.
6. MA Type: Select among SMA, EMA, WMA, RMA, or HMA for your moving average calculation.
7. MA Length: The lookback period for the selected moving average.
8. Plot Buy/Sell Signals?: Enables or disables the plotting of arrows signaling potential buy or sell zones based on threshold crossovers.
9. Automated Detection?: Toggles an additional multi-exchange data scan feature that calculates the highest and lowest prices for the specified symbol across a predefined list of exchanges.
CALCULATIONS
At its core, the script calculates price1 and price2 using the request.security function to fetch close prices from two selected exchanges. The difference is measured as (price1 - price2) / price2 * 100. This results in a percentage that indicates how much higher or lower price1 is relative to price2. Additionally, the script calculates a slope for this difference, which helps color the line depending on whether it is trending up or down. If you choose the moving average option, the script will replace the raw difference data with one of several moving average calculations (SMA, EMA, WMA, RMA, or HMA).
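A minimal Pine sketch of that core spread calculation (the symbol defaults and input names are illustrative, not the script's actual identifiers):

```pine
//@version=5
indicator("Arbitrage spread (sketch)")

sym1 = input.symbol("BINANCE:BTCUSDT", "Exchange 1 symbol")
sym2 = input.symbol("BYBIT:BTCUSDT", "Exchange 2 symbol")

price1 = request.security(sym1, timeframe.period, close)
price2 = request.security(sym2, timeframe.period, close)

// Percentage by which price1 is above (positive) or below (negative) price2
diffPct = (price1 - price2) / price2 * 100

// Color by slope: aqua when the spread is rising, purple when it is falling
plot(diffPct, "Spread %", diffPct > diffPct[1] ? color.aqua : color.purple)
```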
The script also includes an iterative scan of up to 15 different exchanges for Automated detection, collecting the highest and lowest price across all those exchanges. If the Automated option is enabled, it compiles a potential recommendation: buy at the cheapest exchange price and sell at the most expensive one. The difference across all exchanges (allExDiffPercent) is calculated using (highestPriceAll - lowestPriceAll) / lowestPriceAll * 100.
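Extending the same sketch, the all-exchange spread reduces to taking the highest and lowest of every fetched price; shown here with only three feeds for brevity (the real script queries far more):

```pine
// A third feed, then the widest spread across all fetched prices
sym3   = input.symbol("KRAKEN:BTCUSD", "Exchange 3 symbol")
price3 = request.security(sym3, timeframe.period, close)

prices = array.from(price1, price2, price3)
highestPriceAll  = array.max(prices)
lowestPriceAll   = array.min(prices)
allExDiffPercent = (highestPriceAll - lowestPriceAll) / lowestPriceAll * 100
```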
WHAT AUTOMATED DETECTION SIGNAL DOES
If enabled, the Automated Detection feature scans all 15 supported exchanges for the specified symbol. It then identifies the exchange with the highest price and the exchange with the lowest price. The script displays a recommended action: buy on the exchange with the lowest price and sell on the exchange with the highest price. While called “Automated,” it is essentially a multi-exchange data query that automates a portion of research by consolidating different price points. It does not replace thorough analysis, nor does it guarantee execution; it simply provides an overview of potential extremes.
WHAT ALL-EX-DIFF IS
The variable allExDiffPercent is used to show the overall difference between the highest price and the lowest price found among the 15 pre-chosen exchanges. This figure can be useful for anyone wanting a big-picture view of how large the arbitrage spread might be across the broader market.
SIGNALS AND HOW THEY ARE GENERATED
The script provides two main modes of signal generation:
1. Raw Difference Mode: If the user chooses “Use Normal Line,” the script compares the percentage difference of the two selected exchanges (price1 and price2) to the user-defined threshold. When the difference crosses under the positive threshold, a sell signal is displayed (red arrow). Conversely, when the difference crosses above the negative threshold, a buy signal is displayed (green arrow).
2. Moving Average Mode: If the user selects “Use Moving Average,” the script instead references the moving average values (maValue). The signals fire under similar conditions but use the average line to gauge whether the threshold has been crossed.
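Continuing the spread sketch above, both modes reduce to threshold crossovers; the raw-difference mode compares the spread itself, while the moving-average mode applies the same checks to the smoothed series (the threshold default and MA length below are illustrative):

```pine
threshold = input.float(1.0, "Opportunity Threshold (%)")
maValue   = ta.sma(diffPct, 14)   // one of the selectable smoothing options

// Raw-difference mode: compare the spread itself; MA mode would use maValue instead
sellSignal = ta.crossunder(diffPct, threshold)
buySignal  = ta.crossover(diffPct, -threshold)

plotshape(buySignal, "Buy", shape.triangleup, location.bottom, color.green)
plotshape(sellSignal, "Sell", shape.triangledown, location.top, color.red)
```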
HOW TO USE THE INDICATOR
1. Add the script to your chart in TradingView.
2. In the script’s settings panel, configure the symbol you wish to compare (for example, BTCUSDT), choose the two exchanges you want to evaluate, and set your desired threshold.
3. Optionally, pick a moving average type and length if you prefer a smoother representation of the difference.
4. Enable or disable buy/sell signals according to your preference.
5. If you’d like to see potential extremes among a broader list of exchanges, enable Automated Detection. Keep in mind that this feature runs additional security requests, so it might slow down performance on weaker devices or if you already have many scripts running.
EXCHANGES TO USE
The script currently supports up to 15 exchanges: BYBIT, BINANCE, MEXC, BLOFIN, BITGET, OKX, KUCOIN, COINBASE, COINEX, PHEMEX, POLONIEX, GATEIO, BITSTAMP, and KRAKEN. You can choose any two of these for direct comparison, and if you enable the Automated detection, it will attempt to query them all to find extremes in real time.
VISUALS
The exchanges, current prices, and differences are all shown in the table, while the colored line represents the price difference. The two red thresholds are where signals are generated: a cross below the upper threshold is a sell signal, and a cross above the lower threshold is a buy signal. For the line at the bottom, purple indicates a negative slope and aqua a positive slope.
LIMITATIONS AND POTENTIAL PROBLEMS
If you enable too many visual elements, such as signals, additional lines, and the Automated-based scanning table, you may find that your chart becomes cluttered or text overlaps. One workaround is to remove and reapply the indicator to refresh its display. You may also want to reduce the number of displayed table rows by disabling some features if your chart becomes too crowded. Occasionally an error may occur because the price of an asset is not available on an exchange. To fix this, select another exchange to compare against, or, if it happens with Automated Detection, choose a different asset, ideally one listed on more exchanges.
UNIQUENESS
This indicator stands out due to its multifaceted approach: it doesn’t just look at two exchanges but optionally scans up to 15 exchanges in real time, presenting users with a much broader view of the market. The dual-mode system (raw difference vs. moving average) allows for both immediate, unfiltered signals and smoother, noise-reduced signals depending on user preference. By default, it introduces dynamic visual cues through color changes when the slope of the difference transitions upward or downward. The optional Automated detection, while not a deep learning system, adds a functional intelligence layer by collating extreme price points from multiple exchanges in one place, thereby streamlining the manual research process. This combination of features gives the script a unique edge in the TradingView ecosystem, catering equally to novices wanting a straightforward approach and to advanced users looking for an aggregated multi-exchange analysis.
CONCLUSION
Uptrick: Arbitrage Monitor is a versatile and customizable Pine Script™ indicator that highlights price differences for a specified symbol between two user-selected exchanges. Through signals, threshold-based alerts, and optional Automated detection across multiple exchanges, it aims to support traders in identifying potential arbitrage opportunities quickly and efficiently. This script makes no guarantees of profitability but can serve as a valuable tool to add to your trading toolkit. Always use caution when implementing arbitrage strategies, and be mindful of market risks, exchange fees, and latency.
ADDITIONAL DISCLOSURES
This script is provided for educational and informational purposes only. It does not constitute financial advice or a guarantee of performance. Users are encouraged to conduct thorough research and consider the inherent risks of arbitrage trading. Market conditions can change rapidly, and orders may fail to execute at desired prices, especially when large price discrepancies attract competition from other traders.
Advanced Pattern Detector
**Script Overview**
**Indicator Name:** Advanced Pattern Detector
**Pine Script Version:** v5
**Indicator Type:** Overlaid on the chart (overlay=true)
**Main Features:**
- Detection and visualization of various technical patterns.
- Generation of BUY and SELL signals based on detected patterns.
- Display of Fibonacci levels to identify potential support and resistance levels.
- Ability to enable or disable each pattern through the indicator settings.
---
**Indicator Settings**
**Switches to Enable/Disable Patterns**
At the top of the indicator, there are parameters that allow the user to select which patterns will be displayed on the chart:
- Three Drives
- Rounding Top
- Rounding Bottom
- ZigZag Pattern
- Inverse Head and Shoulders
- Fibonacci Retracement
**Parameters for ZigZag**
Settings are also available for the ZigZag pattern, such as the depth of peak and trough detection, allowing the user to adjust the indicator's sensitivity to price changes.
---
**Pattern Detection**
Each pattern is implemented with its own logic, which checks specific conditions on the current bar (candle). Below are the main patterns (a short Pine sketch of two of the simpler conditions follows the list):
1. **Three Drives**
- **Description:** This pattern consists of three consecutive price movements in one direction (up or down). It can signal the continuation of the current trend or its reversal.
- **How It Works:**
- **Upward Drive:** The indicator checks that the closing price of each subsequent candle is higher than the previous one for three bars.
- **Downward Drive:** The indicator checks that the closing price of each subsequent candle is lower than the previous one for three bars.
2. **Rounding Top**
- **Description:** A pattern representing a smooth decrease in maximum prices over several bars, which may indicate a potential downward trend reversal.
- **How It Works:**
- The indicator checks that the maximum prices of the last five bars are gradually decreasing, and the current bar shows a decrease in the maximum price.
3. **Rounding Bottom**
- **Description:** A pattern characterized by a smooth increase in minimum prices over several bars, signaling a possible upward trend reversal.
- **How It Works:**
- The indicator checks that the minimum prices of the last five bars are gradually increasing, and the current bar shows an increase in the minimum price.
4. **ZigZag Pattern**
- **Description:** Used to identify corrective movements on the chart. The pattern shows peak and trough points connected by lines, helping to visualize the main price movement.
- **How It Works:**
- The indicator uses a function to determine local maxima and minima based on the specified depth.
- Detected peaks and troughs are connected by lines to create a visual zigzag structure.
5. **Inverse Head and Shoulders**
- **Description:** An inverted head and shoulders formation signals a possible reversal of a downward trend to an upward one.
- **How It Works:**
- The indicator looks for three local minima: the left shoulder, the head (the lowest minimum), and the right shoulder.
- It checks that the left and right shoulders are approximately at the same level and below the head.
6. **Fibonacci Retracement Levels**
- **Description:** Automatically builds key Fibonacci levels based on the maximum and minimum prices over the last 50 bars. These levels are often used as potential support and resistance levels.
- **How It Works:**
- Daily, the minimum and maximum prices over the last 50 bars are calculated.
- Based on these values, Fibonacci levels are drawn: 100%, 23.6%, 38.2%, 50%, 61.8%, and 0%.
- Old levels are removed when a new day begins to keep the chart clean and up-to-date.
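The sketch below illustrates two of the simpler conditions from the list above, the Three Drives close sequence (item 1) and the 50-bar Fibonacci range (item 6); the exact comparisons and thresholds used by the indicator are not published, so treat this as an approximation:

```pine
//@version=5
indicator("Pattern conditions (sketch)", overlay = true)

// Three Drives: three consecutive higher (or lower) closes
threeDrivesUp   = close > close[1] and close[1] > close[2] and close[2] > close[3]
threeDrivesDown = close < close[1] and close[1] < close[2] and close[2] < close[3]

plotshape(threeDrivesUp, "3 Drives Up", shape.triangleup, location.belowbar, color.green)
plotshape(threeDrivesDown, "3 Drives Down", shape.triangledown, location.abovebar, color.red)

// Fibonacci range over the last 50 bars (a subset of the levels listed in the description)
hi50 = ta.highest(high, 50)
lo50 = ta.lowest(low, 50)
fib382 = lo50 + (hi50 - lo50) * 0.382
fib618 = lo50 + (hi50 - lo50) * 0.618

plot(fib382, "38.2%", color.new(color.gray, 30))
plot(fib618, "61.8%", color.new(color.gray, 30))
```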
---
**Generation of Buy and Sell Signals**
The indicator combines the results of detected patterns to generate trading signals:
- **Buy Signals (BUY):**
- Rounding Bottom
- Three Drives Up
- Inverse Head and Shoulders
- ZigZag Low
- **Sell Signals (SELL):**
- Rounding Top
- Three Drives Down
- Inverse Head and Shoulders
- ZigZag High
**How It Works:**
- If one or more buy conditions are met, a "BUY" label is displayed below the corresponding bar on the chart.
- If one or more sell conditions are met, a "SELL" label is displayed above the corresponding bar on the chart.
---
**Visualization of Patterns on the Chart**
Each detected pattern is visualized using various graphical elements, allowing traders to easily identify them on the chart:
- **Three Drives Up:** Green upward triangle below the bar.
- **Three Drives Down:** Red downward triangle above the bar.
- **Rounding Top:** Orange "RT" label above the bar.
- **Rounding Bottom:** Blue "RB" label below the bar.
- **Inverse Head and Shoulders:** Turquoise "iH&S" label below the bar.
- **ZigZag High/Low:** Purple circles at the peaks and troughs of the zigzag.
---
**Displaying Fibonacci Levels**
Fibonacci levels are displayed as horizontal lines on the chart with corresponding labels. These levels help traders determine potential entry and exit points, as well as support and resistance levels.
---
**Drawing ZigZag Lines**
ZigZag lines connect the detected peaks and troughs, visualizing corrective movements. To avoid cluttering the chart, the number of lines is limited, and old lines are automatically removed as new ones are added.
Supply and demand
Hi all!
This is my take on supply/demand. The gist is that it creates a zone if there is a big enough reaction. This is configurable in the settings as "Minimum range (ATR factor)" (a factor of the Average True Range of length 14), which is the distance that the price must travel, and "Reaction bars", which is the maximum number of bars in which price must travel that distance. The zones that are shown are the ones that have a retest, a break and retest, or are unmitigated (untouched). If a zone is mitigated (entered) or broken, it is temporarily hidden. For a zone to be created, it needs to have this reaction while the previous bar does not.
So this script will show you zones that are fresh (unmitigated), retested, or broken and retested. This means that the zones shown have "proven" themselves to be good zones. Basically, the script creates a bunch of zones and then picks the good ones. This gives the script some latency, but will hopefully give you good zones. A zone is completely removed if it's broken twice (it's okay if it's broken once; it can still have a retest after it has flipped from previous supply (or resistance) into demand (or support)).
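One possible way to express that reaction test in Pine is sketched below; this is an interpretation of the description, not the published code, and the exact comparison is an assumption:

```pine
//@version=5
indicator("Reaction test (sketch)", overlay = true)

atrFactor    = input.float(2.0, "Minimum range (ATR factor)")
reactionBars = input.int(3, "Reaction bars", minval = 1)

atrValue = ta.atr(14)

// Did price travel at least atrFactor * ATR upward (or downward) within the last reactionBars bars?
reactionUp   = high - ta.lowest(low, reactionBars)  >= atrFactor * atrValue
reactionDown = ta.highest(high, reactionBars) - low >= atrFactor * atrValue

barcolor(reactionUp ? color.green : reactionDown ? color.red : na)
```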
Here is a zone (the one that has the lowest opacity) that is broken and retested that could have resulted in a good long trade (the settings are default but has a stop in the beginning of 2024):
You have a setting to remove zones that are pierced (broken by price wicks). The following zone is pierced by price (in the beginning of May) that will not be shown after the start of May if you have "Pierced" checked (the indicator has default settings but a stop in the middle of April):
You have a trend section. Zones that create a reaction upwards can only be created if the trend is considered to be up, and vice versa. The options here are "SMA50" (the current price needs to be over the Simple Moving Average of length 50) and "SMA50, SMA200" (price needs to be over the Simple Moving Average of length 50 and the Simple Moving Average of length 50 needs to be over the Simple Moving Average of length 200). If these conditions are met the trend is considered to be up, otherwise it's down. You can disable this by choosing "No detection".
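In Pine, the two trend options come down to simple moving-average comparisons, roughly:

```pine
//@version=5
indicator("Trend filter (sketch)", overlay = true)

sma50  = ta.sma(close, 50)
sma200 = ta.sma(close, 200)

// "SMA50" option: price above the 50-period SMA
trendUp50 = close > sma50
// "SMA50, SMA200" option: price above SMA50 and SMA50 above SMA200
trendUp50_200 = close > sma50 and sma50 > sma200

bgcolor(trendUp50_200 ? color.new(color.green, 90) : color.new(color.red, 90))
```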
The zones that are shown also need to be within a limit (of the current price). This limit is 10 (a factor of the Average True Range of length 14) by default. Set this to 0 to deactivate it. This is useful for not showing zones that are far away from the current price and therefore unlikely to be interacted with.
You can stop the calculation of zones (through the "Stop" value in the settings). This is useful to see if previous zones were any good. I used it in my testing of the script but left it because it can be nice to have.
The zones created by the script have different transparency based upon the zone's interaction. The clearest zones are the ones that are unmitigated, the second clearest ones are the ones having a retest and lastly the zones which are most unclear are the ones having a break and then a retest.
You can see the concept of this script to be a mix of supply/demand and support/resistance, having zones being unmitigated (untouched) as the most important but also show the zones having an interaction (in the form of a retest or a break and retest).
This is from a previous supply (or resistance) zone that has flipped into demand (or support) and has shown to be a good zone through a retest followed by a rally (default settings):
This zone has multiple retests followed by rallies that could have given good long trades (it has the default settings but a "Stop" time at 2022-01-14):
TODO:
- Create zones based on pivots
- Handle overlapping zones
- Incorporate volume in the creation and/or interaction with zones
- Add alerts
- Add ability to set maximum zone width
- Add ability to set the maximum number of retest bars
- ...?
The example for this publication has the default settings but a "Stop" and a tighter "Limit" of 4.
I hope this explanation makes sense, let me know otherwise. Also let me know if you have any suggestions on improvements.
Best of trading luck!
ICT Killzones and Sessions W/ Silver Bullet + Macros
Forex and Equity Session Tracker with Killzones, Silver Bullet, and Macro Times
This Pine Script indicator is a comprehensive timekeeping tool designed specifically for ICT traders using any time-based strategy. It helps you visualize and keep track of forex and equity session times, kill zones, macro times, and silver bullet hours.
Features:
Session and Killzone Lines:
Green: London Open (LO)
White: New York (NY)
Orange: Australian (AU)
Purple: Asian (AS)
Includes AM and PM session markers.
Dotted/Striped Lines indicate overlapping kill zones within the session timeline.
Customization Options:
Display sessions and killzones in collapsed or full view.
Hide specific sessions or killzones based on your preferences.
Customize colors, texts, and sizes.
Option to hide drawings older than the current day.
Automatic Updates:
The indicator draws all lines and boxes at the start of a new day.
Automatically adjusts time-based boxes according to the New York timezone.
Killzone Time Windows (for indices):
London KZ: 02:00 - 05:00
New York AM KZ: 07:00 - 10:00
New York PM KZ: 13:30 - 16:00
Silver Bullet Times:
03:00 - 04:00
10:00 - 11:00
14:00 - 15:00
Macro Times:
02:33 - 03:00
04:03 - 04:30
08:50 - 09:10
09:50 - 10:10
10:50 - 11:10
11:50 - 12:50
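For reference, a window such as the London killzone can be detected in Pine with a session string in the New York timezone; this is a generic sketch, not the script's own code:

```pine
//@version=5
indicator("Killzone window (sketch)", overlay = true)

// true while the current bar falls inside 02:00–05:00 New York time
inLondonKZ = not na(time(timeframe.period, "0200-0500", "America/New_York"))

bgcolor(inLondonKZ ? color.new(color.green, 85) : na)
```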
Latest Update:
January 15:
Added option to automatically change text coloring based on the chart.
Included additional optional macro times per user request:
12:50 - 13:10
13:50 - 14:15
14:50 - 15:10
15:50 - 16:15
ICT Sessions and Kill Zones
What They Are:
ICT Sessions: These are specific times during the trading day when market activity is expected to be higher, such as the London Open, New York Open, and the Asian session.
Kill Zones: These are specific time windows within these sessions where the probability of significant price movements is higher. For example, the New York AM Kill Zone is typically from 8:30 AM to 11:00 AM EST.
How to Use Them:
Identify the Session: Determine which trading session you are in (London, New York, or Asian).
Focus on Kill Zones: Within that session, focus on the kill zones for potential trade setups. For instance, during the New York session, look for setups between 8:30 AM and 11:00 AM EST.
Silver Bullets
What They Are:
Silver Bullets: These are specific, high-probability trade setups that occur within the kill zones. They are designed to be "one shot, one kill" trades, meaning they aim for precise and effective entries and exits.
How to Use Them:
Time-Based Setup: Look for these setups within the designated kill zones. For example, between 10:00 AM and 11:00 AM for the New York AM session.
Chart Analysis: Start with higher time frames like the 15-minute chart and then refine down to 5-minute and 1-minute charts to identify imbalances or specific patterns.
Macros
What They Are:
Macros: These are broader market conditions and trends that influence your trading decisions. They include understanding the overall market direction, seasonal tendencies, and the Commitment of Traders (COT) reports.
How to Use Them:
Understand Market Conditions: Be aware of the macroeconomic factors and market conditions that could affect price movements.
Seasonal Tendencies: Know the seasonal patterns that might influence the market direction.
COT Reports: Use the Commitment of Traders reports to understand the positioning of large traders and commercial hedgers.
Putting It All Together
Preparation: Understand the macro conditions and review the COT reports.
Session and Kill Zone: Identify the trading session and focus on the kill zones.
Silver Bullet Setup: Look for high-probability setups within the kill zones using refined chart analysis.
Execution: Execute the trade with precision, aiming for a "one shot, one kill" outcome.
By following these steps, you can effectively use ICT sessions, kill zones, silver bullets, and macros to enhance your trading strategy.
Usage:
To maximize your experience, shrink the pane where the script is drawn. This minimizes distractions while keeping the essential time markers visible. The script is designed to help traders by clearly annotating key trading periods without overwhelming their charts.
Originality and Justification:
This indicator uniquely integrates various time-based strategies essential for ICT traders. Unlike other indicators, it consolidates session times, kill zones, macro times, and silver bullet hours into one comprehensive tool. This allows traders to have a clear and organized view of critical trading periods, facilitating better decision-making.
Credits:
This script incorporates open-source elements with significant improvements to enhance functionality and user experience. All credit goes to itradesize for the SB + Macro boxes
Trendzones
Hi all!
This indicator plots trendlines. These lines are not plotted as traditional lines, but are instead zones. This is useful if you think that trend lines are more of an area of importance than a line.
It does so by finding pivots and connecting two of them if they have not been broken (more about that later) in-between the pivots.
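Pivot detection of this kind typically relies on Pine's built-in pivot functions; a minimal sketch with the default 3/3 lengths:

```pine
//@version=5
indicator("Pivot detection (sketch)", overlay = true)

leftLen  = input.int(3, "Pivot length (left)",  minval = 1)
rightLen = input.int(3, "Pivot length (right)", minval = 1)

// Non-na only on the bar where a pivot is confirmed (rightLen bars after the pivot itself)
ph = ta.pivothigh(high, leftLen, rightLen)
pl = ta.pivotlow(low, leftLen, rightLen)

plotshape(not na(ph), "Pivot high", shape.triangledown, location.abovebar, color.red, offset = -rightLen)
plotshape(not na(pl), "Pivot low", shape.triangleup, location.belowbar, color.green, offset = -rightLen)
```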
These trend zones can be used as support/resistance that the price can react to.
• The first trendline is drawn between the high/low of the first and second pivot.
• The second trendline's first point is at the open/close of the pivot (either the first pivot or the second one) that has the smallest difference between the high/low and the nearest open/close. The same difference (between the high/low and the open/close) is then subtracted from the other pivot's high/low. This creates a point at the other pivot bar. A trendline is then drawn between the points.
This creates two trendlines and a zone between the two trendlines. This zone is the one kept and is shown by the script.
You can define the pivot lengths used to find trend zones (defaults to 3/3). You can also define the number of pivots to look back for, to find trend zones and the number of active zones, both of these defaults to 3. You can also choose to let the script create new zones based on time ("Oldest") or the zone that is furthest away in price, this defaults to be based on time but it can be useful for letting the script remove the one which is furthest away in price. Another useful setting is the one called "Cross source". This defines the price that has to cross the trend zone to make it invalid (broken). This defaults to "Close", i.e. the bar has to close on the "wrong side" of the trend zone.
The current zones are shown with an extension to the right, but you can also choose to keep the previous lines (without extension). Please note that kept zones are only the ones that are broken, not the replaced ones. I.e. the zones that are kept are the ones that are crossed by the user defined "cross source" (defaults to the closing/current price of the bar).
Hope this makes sense, let me know if you have any questions.
Best of trading luck!
TRADING
Library "TRADING"
This library is a client script for making a webhook signal formatted string to PoABOT server.
entry_message(password, percent, leverage, margin_mode, kis_number)
Create an entry message for POABOT
Parameters:
password (string) : (string) The password of your bot.
percent (float) : (float) The percent for entry based on your wallet balance.
leverage (int) : (int) The leverage of entry. If not set, your leverage doesn't change.
margin_mode (string) : (string) The margin mode for trade (only for OKX). "cross" or "isolated"
kis_number (int) : (int) The number of koreainvestment account. Default 1
Returns: (string) A json formatted string for webhook message.
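A hypothetical usage sketch follows; the import path (publisher and version), password, and entry logic are placeholders rather than part of the library's documentation, but the entry_message arguments follow the signature above:

```pine
//@version=5
strategy("POABOT entry example (sketch)", overlay = true)

// Placeholder import: replace "author" and the version with the library's actual publisher and version
import author/TRADING/1 as poa

// Build the webhook payload once; the arguments follow the documented entry_message signature
entryMsg = poa.entry_message("my_password", 10.0, 5, "isolated", 1)

// Placeholder entry logic purely for illustration
longCondition = ta.crossover(ta.sma(close, 20), ta.sma(close, 60))
if longCondition
    strategy.entry("Long", strategy.long, alert_message = entryMsg)
```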
order_message(password, percent, leverage, margin_mode, kis_number)
Create an order message for POABOT
Parameters:
password (string) : (string) The password of your bot.
percent (float) : (float) The percent for entry based on your wallet balance.
leverage (int) : (int) The leverage of entry. If not set, your leverage doesn't change.
margin_mode (string) : (string) The margin mode for trade (only for OKX). "cross" or "isolated"
kis_number (int) : (int) The number of koreainvestment account. Default 1
Returns: (string) A json formatted string for webhook message.
close_message(password, percent, margin_mode, kis_number)
Create a close message for POABOT
Parameters:
password (string) : (string) The password of your bot.
percent (float) : (float) The percent for close based on your wallet balance.
margin_mode (string) : (string) The margin mode for trade (only for OKX). "cross" or "isolated"
kis_number (int) : (int) The number of koreainvestment account. Default 1
Returns: (string) A json formatted string for webhook message.
exit_message(password, percent, margin_mode, kis_number)
Create a exit message for POABOT
Parameters:
password (string) : (string) The password of your bot.
percent (float) : (float) The percent for exit based on your wallet balance.
margin_mode (string) : (string) The margin mode for trade (only for OKX). "cross" or "isolated"
kis_number (int) : (int) The number of koreainvestment account. Default 1
Returns: (string) A json formatted string for webhook message.
manual_message(password, exchange, base, quote, side, qty, price, percent, leverage, margin_mode, kis_number, order_name)
Create a manual message for POABOT
Parameters:
password (string) : (string) The password of your bot.
exchange (string) : (string) The exchange
base (string) : (string) The base
quote (string) : (string) The quote of order message
side (string) : (string) The side of order message
qty (float) : (float) The qty of order message
price (float) : (float) The price of order message
percent (float) : (float) The percent for order based on your wallet balance.
leverage (int) : (int) The leverage of entry. If not set, your leverage doesn't change.
margin_mode (string) : (string) The margin mode for trade (only for OKX). "cross" or "isolated"
kis_number (int) : (int) The number of koreainvestment account.
order_name (string) : (string) The name of order message
Returns: (string) A json formatted string for webhook message.
in_trade(start_time, end_time, hide_trade_line)
Create a trade start line
Parameters:
start_time (int) : (int) The start of time.
end_time (int) : (int) The end of time.
hide_trade_line (bool) : (bool) if true, hide trade line. Default false.
Returns: (bool) Get bool for trade based on time range.
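Continuing the hypothetical entry sketch shown under entry_message, the helper can gate orders to a back-test window (the dates are placeholders):

```pine
// Restrict orders to a user-defined back-test window
startTime = input.time(timestamp("01 Jan 2024 00:00 +0000"), "Start")
endTime   = input.time(timestamp("31 Dec 2024 23:59 +0000"), "End")
canTrade  = poa.in_trade(startTime, endTime, false)

if longCondition and canTrade
    strategy.entry("Long", strategy.long, alert_message = entryMsg)
```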
real_qty(qty, precision, leverage, contract_size, default_qty_type, default_qty_value)
Get exchange specific real qty
Parameters:
qty (float) : (float) qty
precision (float) : (float) precision
leverage (int) : (int) leverage
contract_size (float) : (float) contract_size
default_qty_type (string)
default_qty_value (float)
Returns: (float) exchange specific qty.
method set(this, password, start_time, end_time, leverage, initial_capital, default_qty_type, default_qty_value, margin_mode, contract_size, kis_number, entry_percent, close_percent, exit_percent, fixed_qty, fixed_cash, real, auto_alert_message, hide_trade_line)
Set bot object.
Namespace types: bot
Parameters:
this (bot)
password (string) : (string) password for poabot.
start_time (int) : (int) start_time timestamp.
end_time (int) : (int) end_time timestamp.
leverage (int) : (int) leverage.
initial_capital (float)
default_qty_type (string)
default_qty_value (float)
margin_mode (string) : (string) The margin mode for trade (only for OKX). "cross" or "isolated"
contract_size (float)
kis_number (int) : (int) kis_number for poabot.
entry_percent (float) : (float) entry_percent for poabot.
close_percent (float) : (float) close_percent for poabot.
exit_percent (float) : (float) exit_percent for poabot.
fixed_qty (float) : (float) fixed qty.
fixed_cash (float) : (float) fixed cash.
real (bool) : (bool) convert qty for exchange specific.
auto_alert_message (bool) : (bool) convert alert_message for exchange specific.
hide_trade_line (bool) : (bool) if true, Hide trade line. Default false.
Returns: (void)
method print(this, message)
Print message using log table.
Namespace types: bot
Parameters:
this (bot)
message (string)
Returns: (void)
method start_trade(this)
start trade using start_time and end_time
Namespace types: bot
Parameters:
this (bot)
Returns: (void)
method entry(this, id, direction, qty, limit, stop, oca_name, oca_type, comment, alert_message, when)
It is a command to enter market position. If an order with the same ID is already pending, it is possible to modify the order. If there is no order with the specified ID, a new order is placed. To deactivate an entry order, the command strategy.cancel or strategy.cancel_all should be used. In comparison to the function strategy.order, the function strategy.entry is affected by pyramiding and it can reverse market position correctly. If both 'limit' and 'stop' parameters are 'NaN', the order type is market order.
Namespace types: bot
Parameters:
this (bot)
id (string) : (string) A required parameter. The order identifier. It is possible to cancel or modify an order by referencing its identifier.
direction (string) : (string) A required parameter. Market position direction: 'strategy.long' is for long, 'strategy.short' is for short.
qty (float) : (float) An optional parameter. Number of contracts/shares/lots/units to trade. The default value is 'NaN'.
limit (float) : (float) An optional parameter. Limit price of the order. If it is specified, the order type is either 'limit', or 'stop-limit'. 'NaN' should be specified for any other order type.
stop (float) : (float) An optional parameter. Stop price of the order. If it is specified, the order type is either 'stop', or 'stop-limit'. 'NaN' should be specified for any other order type.
oca_name (string) : (string) An optional parameter. Name of the OCA group the order belongs to. If the order should not belong to any particular OCA group, there should be an empty string.
oca_type (string) : (string) An optional parameter. Type of the OCA group. The allowed values are: "strategy.oca.none" - the order should not belong to any particular OCA group; "strategy.oca.cancel" - the order should belong to an OCA group, where as soon as an order is filled, all other orders of the same group are cancelled; "strategy.oca.reduce" - the order should belong to an OCA group, where if X number of contracts of an order is filled, number of contracts for each other order of the same OCA group is decreased by X.
comment (string) : (string) An optional parameter. Additional notes on the order.
alert_message (string) : (string) An optional parameter which replaces the {{strategy.order.alert_message}} placeholder when it is used in the "Create Alert" dialog box's "Message" field.
when (bool) : (bool) An optional parameter. Condition, deprecated.
Returns: (void)
method order(this, id, direction, qty, limit, stop, oca_name, oca_type, comment, alert_message, when)
It is a command to place order. If an order with the same ID is already pending, it is possible to modify the order. If there is no order with the specified ID, a new order is placed. To deactivate order, the command strategy.cancel or strategy.cancel_all should be used. In comparison to the function strategy.entry, the function strategy.order is not affected by pyramiding. If both 'limit' and 'stop' parameters are 'NaN', the order type is market order.
Namespace types: bot
Parameters:
this (bot)
id (string) : (string) A required parameter. The order identifier. It is possible to cancel or modify an order by referencing its identifier.
direction (string) : (string) A required parameter. Market position direction: 'strategy.long' is for long, 'strategy.short' is for short.
qty (float) : (float) An optional parameter. Number of contracts/shares/lots/units to trade. The default value is 'NaN'.
limit (float) : (float) An optional parameter. Limit price of the order. If it is specified, the order type is either 'limit', or 'stop-limit'. 'NaN' should be specified for any other order type.
stop (float) : (float) An optional parameter. Stop price of the order. If it is specified, the order type is either 'stop', or 'stop-limit'. 'NaN' should be specified for any other order type.
oca_name (string) : (string) An optional parameter. Name of the OCA group the order belongs to. If the order should not belong to any particular OCA group, there should be an empty string.
oca_type (string) : (string) An optional parameter. Type of the OCA group. The allowed values are: "strategy.oca.none" - the order should not belong to any particular OCA group; "strategy.oca.cancel" - the order should belong to an OCA group, where as soon as an order is filled, all other orders of the same group are cancelled; "strategy.oca.reduce" - the order should belong to an OCA group, where if X number of contracts of an order is filled, number of contracts for each other order of the same OCA group is decreased by X.
comment (string) : (string) An optional parameter. Additional notes on the order.
alert_message (string) : (string) An optional parameter which replaces the {{strategy.order.alert_message}} placeholder when it is used in the "Create Alert" dialog box's "Message" field.
when (bool) : (bool) An optional parameter. Condition, deprecated.
Returns: (void)
method close_all(this, comment, alert_message, immediately, when)
Exits the current market position, making it flat.
Namespace types: bot
Parameters:
this (bot)
comment (string) : (string) An optional parameter. Additional notes on the order.
alert_message (string) : (string) An optional parameter which replaces the {{strategy.order.alert_message}} placeholder when it is used in the "Create Alert" dialog box's "Message" field.
immediately (bool) : (bool) An optional parameter. If true, the closing order will be executed on the tick where it has been placed, ignoring the strategy parameters that restrict the order execution to the open of the next bar. The default is false.
when (bool) : (bool) An optional parameter. Condition, deprecated.
Returns: (void)
method cancel(this, id, when)
It is a command to cancel/deactivate pending orders by referencing their names, which were generated by the functions: strategy.order, strategy.entry and strategy.exit.
Namespace types: bot
Parameters:
this (bot)
id (string) : (string) A required parameter. The order identifier. It is possible to cancel an order by referencing its identifier.
when (bool) : (bool) An optional parameter. Condition, deprecated.
Returns: (void)
method cancel_all(this, when)
It is a command to cancel/deactivate all pending orders, which were generated by the functions: strategy.order, strategy.entry and strategy.exit.
Namespace types: bot
Parameters:
this (bot)
when (bool) : (bool) An optional parameter. Condition, deprecated.
Returns: (void)
method close(this, id, comment, qty, qty_percent, alert_message, immediately, when)
It is a command to exit from the entry with the specified ID. If there were multiple entry orders with the same ID, all of them are exited at once. If there are no open entries with the specified ID by the moment the command is triggered, the command will not come into effect. The command uses market order. Every entry is closed by a separate market order.
Namespace types: bot
Parameters:
this (bot)
id (string) : (string) A required parameter. The order identifier. It is possible to close an order by referencing its identifier.
comment (string) : (string) An optional parameter. Additional notes on the order.
qty (float) : (float) An optional parameter. Number of contracts/shares/lots/units to exit a trade with. The default value is 'NaN'.
qty_percent (float) : (float) Defines the percentage (0-100) of the position to close. Its priority is lower than that of the 'qty' parameter. Optional. The default is 100.
alert_message (string) : (string) An optional parameter which replaces the {{strategy.order.alert_message}} placeholder when it is used in the "Create Alert" dialog box's "Message" field.
immediately (bool) : (bool) An optional parameter. If true, the closing order will be executed on the tick where it has been placed, ignoring the strategy parameters that restrict the order execution to the open of the next bar. The default is false.
when (bool) : (bool) An optional parameter. Condition, deprecated.
Returns: (void)
ticks_to_price(ticks, from)
Converts ticks to a price offset from the supplied price or the average entry price.
Parameters:
ticks (float) : (float) Ticks to convert to a price.
from (float) : (float) A price that can be used to calculate from. Optional. The default value is `strategy.position_avg_price`.
Returns: (float) A price level that has a distance from the entry price equal to the specified number of ticks.
method exit(this, id, from_entry, qty, qty_percent, profit, limit, loss, stop, trail_price, trail_points, trail_offset, oca_name, comment, comment_profit, comment_loss, comment_trailing, alert_message, alert_profit, alert_loss, alert_trailing, when)
It is a command to exit either a specific entry, or whole market position. If an order with the same ID is already pending, it is possible to modify the order. If an entry order was not filled, but an exit order is generated, the exit order will wait till entry order is filled and then the exit order is placed. To deactivate an exit order, the command strategy.cancel or strategy.cancel_all should be used. If the function strategy.exit is called once, it exits a position only once. If you want to exit multiple times, the command strategy.exit should be called multiple times. If you use a stop loss and a trailing stop, their order type is 'stop', so only one of them is placed (the one that is supposed to be filled first). If all the following parameters 'profit', 'limit', 'loss', 'stop', 'trail_points', 'trail_offset' are 'NaN', the command will fail. To use market order to exit, the command strategy.close or strategy.close_all should be used.
Namespace types: bot
Parameters:
this (bot)
id (string) : (string) A required parameter. The order identifier. It is possible to cancel or modify an order by referencing its identifier.
from_entry (string) : (string) An optional parameter. The identifier of a specific entry order to exit from. To exit all entries, an empty string should be used. The default value is an empty string.
qty (float) : (float) An optional parameter. Number of contracts/shares/lots/units to exit a trade with. The default value is 'NaN'.
qty_percent (float) : (float) Defines the percentage (0-100) of the position to close. Its priority is lower than that of the 'qty' parameter. Optional. The default is 100.
profit (float) : (float) An optional parameter. Profit target (specified in ticks). If it is specified, a limit order is placed to exit market position when the specified amount of profit (in ticks) is reached. The default value is 'NaN'.
limit (float) : (float) An optional parameter. Profit target (requires a specific price). If it is specified, a limit order is placed to exit market position at the specified price (or better). Priority of the parameter 'limit' is higher than priority of the parameter 'profit' ('limit' is used instead of 'profit', if its value is not 'NaN'). The default value is 'NaN'.
loss (float) : (float) An optional parameter. Stop loss (specified in ticks). If it is specified, a stop order is placed to exit market position when the specified amount of loss (in ticks) is reached. The default value is 'NaN'.
stop (float) : (float) An optional parameter. Stop loss (requires a specific price). If it is specified, a stop order is placed to exit market position at the specified price (or worse). Priority of the parameter 'stop' is higher than priority of the parameter 'loss' ('stop' is used instead of 'loss', if its value is not 'NaN'). The default value is 'NaN'.
trail_price (float) : (float) An optional parameter. Trailing stop activation level (requires a specific price). If it is specified, a trailing stop order will be placed when the specified price level is reached. The offset (in ticks) to determine initial price of the trailing stop order is specified in the 'trail_offset' parameter: X ticks lower than activation level to exit long position; X ticks higher than activation level to exit short position. The default value is 'NaN'.
trail_points (float) : (float) An optional parameter. Trailing stop activation level (profit specified in ticks). If it is specified, a trailing stop order will be placed when the calculated price level (specified amount of profit) is reached. The offset (in ticks) to determine initial price of the trailing stop order is specified in the 'trail_offset' parameter: X ticks lower than activation level to exit long position; X ticks higher than activation level to exit short position. The default value is 'NaN'.
trail_offset (float) : (float) An optional parameter. Trailing stop price (specified in ticks). The offset in ticks to determine initial price of the trailing stop order: X ticks lower than 'trail_price' or 'trail_points' to exit long position; X ticks higher than 'trail_price' or 'trail_points' to exit short position. The default value is 'NaN'.
oca_name (string) : (string) An optional parameter. Name of the OCA group (oca_type = strategy.oca.reduce) the profit target, the stop loss / the trailing stop orders belong to. If the name is not specified, it will be generated automatically.
comment (string) : (string) Additional notes on the order. If specified, displays near the order marker on the chart. Optional. The default is na.
comment_profit (string) : (string) Additional notes on the order if the exit was triggered by crossing `profit` or `limit` specifically. If specified, supersedes the `comment` parameter and displays near the order marker on the chart. Optional. The default is na.
comment_loss (string) : (string) Additional notes on the order if the exit was triggered by crossing `stop` or `loss` specifically. If specified, supersedes the `comment` parameter and displays near the order marker on the chart. Optional. The default is na.
comment_trailing (string) : (string) Additional notes on the order if the exit was triggered by crossing `trail_offset` specifically. If specified, supersedes the `comment` parameter and displays near the order marker on the chart. Optional. The default is na.
alert_message (string) : (string) Text that will replace the '{{strategy.order.alert_message}}' placeholder when one is used in the "Message" field of the "Create Alert" dialog. Optional. The default is na.
alert_profit (string) : (string) Text that will replace the '{{strategy.order.alert_message}}' placeholder when one is used in the "Message" field of the "Create Alert" dialog. Only replaces the text if the exit was triggered by crossing `profit` or `limit` specifically. Optional. The default is na.
alert_loss (string) : (string) Text that will replace the '{{strategy.order.alert_message}}' placeholder when one is used in the "Message" field of the "Create Alert" dialog. Only replaces the text if the exit was triggered by crossing `stop` or `loss` specifically. Optional. The default is na.
alert_trailing (string) : (string) Text that will replace the '{{strategy.order.alert_message}}' placeholder when one is used in the "Message" field of the "Create Alert" dialog. Only replaces the text if the exit was triggered by crossing `trail_offset` specifically. Optional. The default is na.
when (bool) : (bool) An optional parameter. Condition, deprecated.
Returns: (void)
percent_to_ticks(percent, from)
Converts a percentage of the supplied price or the average entry price to ticks.
Parameters:
percent (float) : (float) The percentage of supplied price to convert to ticks. 50 is 50% of the entry price.
from (float) : (float) A price that can be used to calculate from. Optional. The default value is `strategy.position_avg_price`.
Returns: (float) A value in ticks.
percent_to_price(percent, from)
Converts a percentage of the supplied price or the average entry price to a price.
Parameters:
percent (float) : (float) The percentage of the supplied price to convert to price. 50 is 50% of the supplied price.
from (float) : (float) A price that can be used to calculate from. Optional. The default value is `strategy.position_avg_price`.
Returns: (float) A value in the symbol's quote currency (USD for BTCUSD).
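A hypothetical sketch tying the conversion helpers to the exit method: a 1% stop loss and a 2% profit target are expressed in ticks and attached to an entry. The import path, constructor call and the choice of `from = close` are assumptions, not part of the documented API:
//@version=5
strategy("bot exit demo (sketch)", overlay=true)
import user/bot/1 as tradebot // hypothetical import path and version
var b = tradebot.bot.new() // assumed constructor, as in the previous sketch
b.start_trade()
if ta.crossover(ta.ema(close, 9), ta.ema(close, 21))
    // 1% / 2% of the current price expressed in ticks; `from = close` is passed because
    // the average entry price is not known until the entry order fills.
    lossTicks   = tradebot.percent_to_ticks(1.0, close)
    profitTicks = tradebot.percent_to_ticks(2.0, close)
    b.entry("L1", strategy.long)
    b.exit("X1", from_entry = "L1", loss = lossTicks, profit = profitTicks)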
bot
Fields:
password (series__string)
start_time (series__integer)
end_time (series__integer)
leverage (series__integer)
initial_capital (series__float)
default_qty_type (series__string)
default_qty_value (series__float)
margin_mode (series__string)
contract_size (series__float)
kis_number (series__integer)
entry_percent (series__float)
close_percent (series__float)
exit_percent (series__float)
log_table (series__table)
fixed_qty (series__float)
fixed_cash (series__float)
real (series__bool)
auto_alert_message (series__bool)
hide_trade_line (series__bool)
IDX Financials v2
This indicator adds financial data, ratios, and valuations to your chart. The main objective is to present a financial overview that can be glanced at quickly to add to your considerations.
The visualization of the indicator consists of two parts:
A. Plots (lines alongside the candlestick)
B. Financial table on the right. Drag your candlestick to the left to provide a blank area for the table.
Programmatically, the financial data is obtained using these Pine APIs:
request.earnings(...) API for the EPS values that are used by the price at average PER line, and
request.financial(...) API for the rest of the financial data required by the indicator.
See What financial data is available in Pine for more info on getting financial data in Pine.
A. THE PLOTS
The plots produce two lines: the price at average PER line in blue and the price at average PBV line in pink, calculated over an adjustable time period (the default is one year). By default, only the price at average PER line is shown.
Note that PER stands for Price to Earnings Ratio.
The price at average PER line shows the price level at the average PER. It is calculated as follows:
line = AVGPER * EPSTTM
where AVGPER is the average PER over some time period (default is one year, adjustable) and EPSTTM is the standardized EPS TTM.
Note that the EPS is updated at the actual time of the earnings report publication, and not at standard quarter dates such as March 31st, Dec 31st, etc. This approach is chosen to represent the actual PE at the time.
The price at average PBV line (PBV stands for Price to Book Value), which can be enabled in settings, shows the price at the average PBV. It is calculated as follows:
line = AVGPBV * BVPS
where AVGPBV is the average PBV over some period of time (default is one year, adjustable) and BVPS is the book value per share. Note that the PBV is clipped to a range to avoid values that are too small or too large.
Also note that unlike PER, the BVPS is updated at each quarterly date (such as March 31st, Dec 31st, etc.).
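To illustrate the idea, here is a hedged Pine sketch of the PBV branch only (the averaging window and financial field are assumptions, and the indicator's actual source may differ; the PER line follows the same pattern, using request.earnings() for the EPS values):
//@version=5
indicator("Price at average PBV (sketch)", overlay=true)
len  = input.int(252, "Averaging window (bars, roughly one year on a daily chart)")
bvps = request.financial(syminfo.tickerid, "BOOK_VALUE_PER_SHARE", "FQ") // quarterly book value per share
pbv    = close / bvps
avgPbv = ta.sma(pbv, len)
plot(avgPbv * bvps, "Price at average PBV", color=color.fuchsia)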
Apart from those lines, some values are written to the status line (i.e. the numbers next to indicator name), which represent the corresponding value at the currently hovered bar:
PER TTM
Average PER
Std value (z-value) of PER TTM, equal to (PERTTM - AVGPER) / STDPER
PBV
The meaning for these abbreviations should be straightforward.
Using the price at average PER line
There are several ways to use the price at average PER line.
You can quickly get a sense of the current valuation by seeing the price relative to the price at average PER line. If the price is above the line, the valuation is higher than the average valuation, and vice versa if the price is lower.
The distance between the price and the average is measured in units of standard deviation. This is represented by the third number in the status line. A value of zero indicates the price is exactly at the average PER line. A positive value indicates the price is higher than average, and a negative value that the price is lower than average. Usually people use values of +2 and -2 to indicate extreme positions.
The second way to use the line is to see how the line jumps up or down at the earnings report date. If the line jumps up, this indicates an increase of EPSTTM. And vice versa when the line jumps down.
When EPSTTM is trending up over several quarters, or if EPSTTM is expected to go up, usually the price is also trending up and the valuation is over the average. And vice versa when EPSTTM is trending down or expected to go down. Deviation from this pattern may present some buying or selling opportunity.
B. THE FINANCIAL TABLE
The second visual part is the financial table. The financial table contains financial information at the last bar. It has four sections:
1. Revenue
2. Income
3. Valuations
4. Ratios
Let's discuss them in detail.
1. Revenue and income sections
The revenue and income sections are organized into rows and columns.
Each row shows the data at the specified time frame, as follows:
The first four rows show quarterly revenue/income of the last four quarters.
Then followed by TTM data.
Then followed by forecast of next quarter revenue/income, if such forecast exists. Note the "(F)" notation next to the quarter name.
Then followed by forecast of TTM data of next quarter (only for income), if such forecast exists. Note the "(F)" notation next to the TTM name.
The columns of revenue and income sections show the following:
The time frame information (such as quarter name, TTM, etc.)
The revenue/income value, in billions or millions (configurable).
YoY (year on year) growth, i.e. comparing the value with the value one year earlier, if any.
QoQ (quarter on quarter) growth, i.e. comparing the value with previous quarter value, if any.
GPM/NPM (gross profit margin or net profit margin), i.e. the margin on the specified time period.
Using the Revenue and Income table
The table provides a quick way to see the revenue and income trend. You can see the YoY growth as well as QoQ, where applicable (non-seasonal stocks). You can also see how the margins change over the periods.
The values are also presented with a relevant background color. Green indicates a "good" value and red indicates a "bad" value. The intensity represents how good/bad the value is. The limits of the good and bad values are currently hardcoded in the script.
2. Valuations section
The valuations section shows the current stock valuation. The section is organized in rows and columns. Each row contains one type of valuation criterion, as follows:
PER (Price Earning Ratio)
Next quarter PER forecast (marked by "(F)" notation) when available
PBV (Price to Book value)
For each valuation criteria, several values are presented as columns:
The current value of the criteria. By current, it means the value at the last bar.
The one year standard deviation position
The three years standard deviation position
3. Ratios Section
The ratios section contains the following useful financial ratios:
ROA (Return on Asset), equal to: NET_INCOME_TTM / TOTAL_ASSETS
ROE (Return on Equity), equal to: NET_INCOME_TTM / BOOK_VALUE_PER_SHARE
PEG (PER to Growth Ratio), equal to PER_TTM / (INCOME_TTM_GROWTH*100)
DER (Debt to Equity Ratio), taken from request.financial(syminfo.tickerid, "DEBT_TO_EQUITY", "FQ")
DPR (Dividend Payout Ratio), taken from request.financial(syminfo.tickerid, "DIVIDEND_PAYOUT_RATIO", "FY")
Dividend yield, equal to (DPR * (NET_INCOME_TTM / TOTAL_SHARES_OUTSTANDING)) / close
KNOWN BUGS
Currently does not handle cases where the financial quarters contain a gap, i.e. there is a missing quarter. This usually happens with newly IPO'd stocks.
BreakoutTrendFollowing
INFO:
The "BreakoutTrendFollowing" indicator is a comprehensive trading system designed for trend-following in various market environments. It combines multiple technical indicators, including Moving Averages (MA), MACD, and RSI,
along with volume analysis and breakout detection from consolidation, to identify potential entry points in trending markets. This strategy is particularly effective for assets that exhibit strong trends and significant price movements.
Note that using the consolidation filter significantly reduces the number of entries the strategy detects, and it should be used if we want increased confidence in the trend via a breakout.
However, the strategy can easily be transformed into various trend-following-only strategies by applying different filters and configurations.
The indicator can be used to connect to the Signal input of the TTS (TemplateTradingStrategy) by jason5480 in order to backtest it, thus effectively turning it into a strategy (instructions below in the TTS SETTINGS section)
DETAILS:
The strategy's core is built upon several key components:
Moving Average (MA): Used to determine the general trend direction. The strategy checks if the price is above the selected MA type and length.
MACD Filter: Analyzes the relationship between two moving averages to confirm the trend's momentum.
Consolidation Detection: Identifies periods of price consolidation and triggers trades on breakouts from these ranges.
Volume Analysis: Assesses trading volume to confirm the strength and validity of the breakout.
RSI: Used to avoid overbought conditions, ensuring trades are entered in favorable market situations.
Wick filters: make sure there is not a long wick that indicates selling pressure from above
The strategy generates buy signals when several conditions are met concurrently (each one of them can be individually enabled/disabled; a minimal sketch combining them follows the signal description below):
The price is above the selected MA.
A breakout occurs from a configurable consolidation range.
The MACD line is above the signal line, indicating bullish momentum.
The RSI is below the overbought threshold.
There's an increase in trading volume, confirming the breakout's strength.
Currently the strategy fires SL signals by checking for loss of momentum, i.e. a crossunder of the MACD line below the signal line, but it is up to everyone to determine their own exit conditions.
The buy and SL signals are shown on the chart as green or orange triangles below/above the price action.
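A minimal sketch of how these filters might be combined (assumed logic and default lengths, not the script's actual source; the consolidation-breakout and wick filters are omitted for brevity):
//@version=5
indicator("Breakout trend-following filters (sketch)", overlay=true)
maLen  = input.int(50, "MA length")
rsiOB  = input.int(70, "RSI overbought")
volPct = input.float(20, "Volume increase %")
ma  = ta.sma(close, maLen)
[macdLine, signalLine, _] = ta.macd(close, 12, 26, 9)
rsi = ta.rsi(close, 14)
volUp = volume > volume[1] * (1 + volPct / 100) // volume increase vs the previous bar
buy  = close > ma and macdLine > signalLine and rsi < rsiOB and volUp
sell = ta.crossunder(macdLine, signalLine) // loss of momentum, used as the SL signal
plotshape(buy,  style=shape.triangleup,   location=location.belowbar, color=color.green,  size=size.tiny)
plotshape(sell, style=shape.triangledown, location=location.abovebar, color=color.orange, size=size.tiny)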
SETTINGS:
Users can customize various parameters, including MA type and period, MACD settings, consolidation length, and volume increase percentage. The strategy is equipped with alert conditions for both entry (buy signals) and exit (set stop loss) points, facilitating both manual and automated trading.
Each one of the technical indicators, as well as the consolidation range and breakout/wick settings, can be configured and enabled/disabled individually.
Please thoroughly review the available settings of the script, but here is an outline of the most important ones:
Use bar wicks (instead of open/close) - the ref_high/low will be taken based on the bar wicks, rather than the open/close when determining the breakout and MA
Enter position only on green candles - additional filters to make sure that we enter only on strong momentum
MA Filter: (enable, source, type, length) - general settings for MA filter to be checked against the stock price (close or upper wick)
MACD Filter: (enable, source, Osc MA type, Signal MA type, Fast MA length, Slow MA length, Low MACD Hist) - detailed settings for fine MACD tuning
Consolidation:
Consolidation Type: we have two different ways of detecting the consolidation, note the types below.
CONSOLIDATION_BASIC - detects consolidation areas by looking for the pivot point of a trend and counting the number of bars that have not broken the consolidation high/low levels.
CONSOLIDATIO_RANGE_PERCENT - identifies consolidation by comparing the range between the highest and lowest price points over a specified period.
So in summary the CONSOLIDATIO_RANGE_PERCENT uses a percentage-based range to define consolidation, while CONSOLIDATION_BASIC uses a count of bars within a high-low range to establish consolidation.
Thus the former is more focused on the tightness of the price range, whereas the latter emphasizes the duration of the consolidation phase.
The CONSOLIDATIO_RANGE_PERCENT might be more sensitive to recent price movements and suitable for shorter-term analysis, while CONSOLIDATION_BASIC could be better for identifying longer-term consolidation patterns.
Min consolidation length - applicable for CONSOLIDATION_BASIC case, the min number of bars for the price to be in the range to consider consolidation
Consolidation Loopback period - applicable for CONSOLIDATION_BASIC case, the loopback number of bars to look for consolidation
Consolidation Range percent - applicable for CONSOLIDATIO_RANGE_PERCENT, the percent between the high and low in the range to consider consolidation
Plot consolidation - enables plotting of the consolidation (only for debug purposes)
Breakout: (enable, low, high) - the definition of the breakout from the previous consolidation range; the price should be between these levels for the breakout to be considered successful
Upper wick: (enable, percent) - defines the percent of the upper wick compared to the whole candle to allow a breakout (if the wick is too big a part of the candle, entering the position can be considered riskier)
RSI: (enable, length, overbought) - general settings for RSI TA
Volume (enable, percentage increase, average volume filter en, loopback bars) - the percentage increase of volume to consider for a breakout. There are two modes - percentage increase compared to the previous bar, or percentage against the average volume for the last loopback bars.
Note that there are many different configurations that you can play with, and I believe this is the strength of the strategy, as it can provide a single solution for different cases and scenarios.
My advice is to try and play with the different options for different markets based on the approach you want to implement and try turning features on/off and tuning them further.
TTS SETTINGS (NEEDED IF USED TO BACKTEST WITH TTS):
The TemplateTradingStrategy is a strategy script developed in Pine by jason5480, which I recommend for quick turnaround when testing different ideas on a proven and tested framework.
I cannot give enough credit to the developer for the effort put into building the infrastructure, so I advise everyone who wants to use it to first get familiar with the concept
by checking jason5480's profile www.tradingview.com
The TTS itself is extremely functional and has a lot of properties, so its functionality is beyond the scope of the current script.
Again, I strongly recommend it be thoroughly explored by everyone who plans on using it.
In a nutshell, it is a script that can be fed buy/sell signals from an external indicator script and, based on many configuration options, it determines how to execute the trades.
The TTS has many settings that can be applied, so below I will cover only the ones that differ from the default ones, at least according to my testing - do your own research, you may find something even better :)
The current/latest version that I've been using as of writing and testing this script is TTSv48
Settings which differ from the default ones:
Deal Conditions Mode - External (take enter/exit conditions from an external script)
🔌Signal 🛈➡ - BreakoutTrendFollowing: 🔌Signal to TTS (this is the output from the indicator script, according to the TTS convention)
Order Type - STOP (perform stop order)
Distance Method - HHLL (HigherHighLowerLow - in order to set the SL according to the strategy definition from above)
The next are just personal preferences, you can feel free to experiment according to your trading style
Take Profit Targets - 0 (either 100% in or out, no incremental stepping in or out of positions)
Dist Mul|Len Long/Short - 10 (make sure that we don't close profitable trades for any reason)
Quantity Method - EQUITY (personal backtesting preference is to consider each backtest as a separate portfolio, so determine the position size by 100% of the allocated equity size)
Equity % - 100 (note above)
SPTS_StatsPakLib
Finally getting around to releasing the library component of the SPTS indicator!
This library is packed with a ton of great statistics functions to supplement SPTS; these functions add to the capabilities of SPTS, including a forecast function.
The library includes the following functions:
1. Linear Regression (single independent and single dependent)
2. Multiple Regression (2 independent variables, 1 dependent)
3. Standard Error of Residual Assessment
4. Z-Score
5. Effect Size
6. Confidence Interval
7. Paired Sample Test
8. Two Tailed T-Test
9. Qualitative assessment of T-Test
10. T-test table and p value assigner
11. Correlation of two arrays
12. Quadratic correlation (curvilinear)
13. R Squared value of 2 arrays
14. R Squared value of 2 floats
15. Test of normality
16. Forecast function which will push the desired forecasted variables into an array.
One of the biggest added functionalities of this library is the forecasting function.
This function provides an autoregressive, trainable model that will export forecasted values to 3 arrays, one contains the autoregressed forecasted results, the other two contain the upper confidence forecast and the lower confidence forecast.
Hope you enjoy and find use for this!
Library "SPTS_StatsPakLib"
f_linear_regression(independent, dependent, len, variable)
TODO: creates a simple linear regression model between two variables.
Parameters:
independent (float)
dependent (float)
len (int)
variable (float)
Returns: TODO: returns 6 float variables
result: The result of the regression model
pear_cor: The pearson correlation of the regression model
rsqrd: the R2 of the regression model
std_err: the error of residuals
slope: the slope of the model (coefficient)
intercept: the intercept of the model (in y = mx + b form, y = slope * x + intercept)
f_multiple_regression(y, x1, x2, input1, input2, len)
TODO: creates a multiple regression model between two independent variables and 1 dependent variable.
Parameters:
y (float)
x1 (float)
x2 (float)
input1 (float)
input2 (float)
len (int)
Returns: TODO: returns 7 float variables
result: The result of the regression model
pear_cor: The pearson correlation of the regression model
rsqrd: the R2 of the regression model
std_err: the error of residuals
b1 & b2: the slopes of the model (coefficients)
intercept: the intercept of the model (in y = mx + b form, y = b1 * x1 + b2 * x2 + intercept)
f_stanard_error(result, dependent, length)
TODO: performs an assessment on the error of residuals; can be used with any variable for which there are residual values (such as moving averages or more complex models)
result: the output; for example, if you are calculating the residuals of a 200 EMA, the result would be the 200 EMA
dependent: the dependent variable. In the above example with the 200 EMA, your dependent would be the source for your 200 EMA
Parameters:
result (float)
dependent (float)
length (int)
Returns: TODO: the standard error of the residual, which can then be multiplied by standard deviations or used as is.
f_zscore(variable, length)
TODO: Calculates the z-score
Parameters:
variable (float)
length (int)
Returns: TODO: returns float z-score
f_effect_size(array1, array2)
TODO: Calculates the effect size between two arrays of equal scale.
Parameters:
array1 (float )
array2 (float )
Returns: TODO: returns the effect size (float)
f_confidence_interval(array1, array2, ci_input)
TODO: Calculates the confidence interval between two arrays
Parameters:
array1 (float )
array2 (float )
ci_input (float)
Returns: TODO: returns the upper_bound and lower_bound confidence interval as float values
paired_sample_t(src1, src2, len)
TODO: Performs a paired sample t-test
Parameters:
src1 (float)
src2 (float)
len (int)
Returns: TODO: Returns the t-statistic and degrees of freedom of a paired sample t-test
two_tail_t_test(array1, array2)
TODO: Performs a two-tailed t-test
Parameters:
array1 (float )
array2 (float )
Returns: TODO: Returns the t-statistic and degrees of freedom of a two-tailed t-test
t_table_analysis(t_stat, df)
TODO: This is to make a qualitative assessment of your paired and single sample t-test
Parameters:
t_stat (float)
df (float)
Returns: TODO: the function will return 2 string variables and 1 colour variable. The 2 string variables indicate whether the results are significant or not, and the colour will
output red for insignificant and green for significant
t_table_p_value(df, t_stat)
TODO: This performs a quantitative assessment of your t-tests to determine the p value for statistical significance
Parameters:
df (float)
t_stat (float)
Returns: TODO: The function will return 1 float variable, the p value of the t-test.
cor_of_array(array1, array2)
TODO: This performs a pearson correlation assessment of two arrays. They need to be of equal size!
Parameters:
array1 (float )
array2 (float )
Returns: TODO: The function will return the pearson correlation.
quadratic_correlation(src1, src2, len)
TODO: This performs a quadratic (curvilinear) pearson correlation between two values.
Parameters:
src1 (float)
src2 (float)
len (int)
Returns: TODO: The function will return the pearson correlation (quadratic based).
f_r2_array(array1, array2)
TODO: Calculates the r2 of two arrays
Parameters:
array1 (float )
array2 (float )
Returns: TODO: returns the R2 value
f_rsqrd(src1, src2, len)
TODO: Calculates the r2 of two float variables
Parameters:
src1 (float)
src2 (float)
len (int)
Returns: TODO: returns the R2 value
test_of_normality(array, src)
TODO: tests the normal distribution hypothesis
Parameters:
array (float )
src (float)
Returns: TODO: returns 4 variables, 2 float and 2 string
Skew: the skewness of the dataset
Kurt: the kurtosis of the dataset
dist = the distribution type (recognizes 7 different distribution types)
implication = a string assessment of the implication of the distribution (qualitative)
f_forecast(output, input, train_len, forecast_length, output_array, upper_array, lower_array)
TODO: This performs a simple forecast function on a single dependent variable. It will autoregress this based on the train time, to the desired length of output,
then it will push the forecasted values to 3 float arrays, one that contains the forecasted result, 1 that contains the Upper Confidence Result and one with the lower confidence
result.
Parameters:
output (float)
input (float)
train_len (int)
forecast_length (int)
output_array (float )
upper_array (float )
lower_array (float )
Returns: TODO: Will return 3 arrays, one with the forecasted results, one with the upper confidence results, and a final with the lower confidence results. Example is given below.
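For orientation, a hypothetical usage sketch of the forecast function (not the library author's example; the import path/version, the meaning of the first two arguments, and the calling convention are guesses based on the signature above, so defer to the library's own published example):
//@version=5
indicator("SPTS forecast (sketch)", overlay=true)
import username/SPTS_StatsPakLib/1 as spts // hypothetical import path and version
var float[] fc = array.new_float() // forecasted values
var float[] up = array.new_float() // upper confidence values
var float[] lo = array.new_float() // lower confidence values
if barstate.islast
    // Assumed call: autoregress `close` over the last 500 bars and forecast 10 steps ahead.
    spts.f_forecast(close, close, 500, 10, fc, up, lo)
    if array.size(fc) > 0
        label.new(bar_index + 1, array.get(fc, 0), text="first forecast step")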
LNL Trend System
LNL Trend System is an ATR based day trading system specifically designed for intra-day traders and scalpers. The system works on any chart time frame & can be applied to any market. The study consists of two components - the Trend Line and the Stop Line. The Trend System is based on a special ATR calculation that is achieved by combining the previous values of the 13 EMA in relation to the ATR, which creates a line of deviations that visually looks similar to a basic moving average but actually produces very different results, ESPECIALLY in sideways markets.
Trend Line:
Trend Line is a simple line which is basically a fast gauge represented by the 13 EMA that can change its color based on the current trend structure defined by multiple averages (8, 13, 21, 34 EMAs). The Trend Line is there to simply add confluence for the current trend. The colors of the line are pretty much self-explanatory. Whenever the line turns red, it states that the current structure is bearish. Vice versa for a green line. A gray line represents neutral market structure.
Stop Line:
Stop Line is an ATR deviation line with a special calculation based on the previous bar ATRs and the position of the price in relation to the current and previous values of the 13 EMA. As already stated, this creates an ATR deviation marker either above or below the price that trails the price up or down until they touch. Whenever the price comes into the Stop Line, it means it is making an ATR expansion move up or down. This touch will usually resolve into a reaction (a bounce) which provides trade opportunities.
Trend Bars:
When turned ON, Trend Bars can provide additional confluence of the current trend alongside the Trend Line color. Trend Bars are based on the DMI and ADX indicators. Whenever the DMI is bearish and ADX is above 20, the candles paint themselves red. And vice versa applies for the green candles and bullish DMI. Whenever the ADX falls below 20, the candles are neutral (gray), which means there is no real trend in place at the moment.
Trend Mode:
There are a total of 5 different trend modes available. Each mode visualizes different ATR settings, which provide either an aggressive or a more conservative approach. The tighter the mode, the closer the distance between the price and the Stop Line. The first two modes were designed for slower markets, whereas the "Loose" and "FOMC" modes are more suitable for products with high volatility.
Trend Modes:
1. Tight
Ideal for the slowest markets. The slowest market can be any market with unusually small average true range values, or simply a market that has the personality of a "sleeper". Tight mode can also be used for aggressive entries in the most ridiculous trends. Sometimes price will barely pull back to the Trend Line, not even the Stop Line.
2. Normal
Normal Mode is the golden mean between the modes. "Normal" provides the ideal ATR lengths for the most used markets such as S&P Futures (ES) or SPY, AAPL and plenty of other highly popular stocks. More often than not, the length of this mode is respected considering there is no breaking news or high impact market event scheduled.
3. Loose
The "Loose" mode is basically a normal mode but a little bit more loose. This mode is useful whenever the ATRs jump higher than usual or during the days of highly anticipated news events. This mode is also better suited for more active markets such as NQ futures.
4. FOMC
The FOMC mode is called FOMC for a reason. This mode provides the maximum amount of wiggle room between the price and the Stop Line. This mode was designed for extreme volatility, breaking news events or post-FOMC trading. If the market quiets down, this mode will not get the Stop Line touch as frequently as other modes, thus it is not very useful to run this on markets with average volatility. Although never properly tested, perhaps the FOMC mode can find its value in the crypto market?
5. The Net
The net mode is basically a combination of all modes into one stop line system which creates "the net" effect. The Net provides the widest Stop Line zone which can be mainly appreciated by traders that like to use scale-in scale-out methods for their trading. Not to mention the visual side of the indicator which looks pretty great with the net mode on.
HTF (Higher Time Frame) Trend System:
The system also includes an additional higher time frame (HTF) trend system. This can be set to any time frame via the manual HTF mode. HTF mode set to "auto" will automatically choose the most suitable higher time frame trend system based on how appropriate the aggregation is. For everything below 5min the HTF Trend System will stay on 5min. Anything between 5-15min = 30min. 30min - 120min will turn on the 240min. 180min and higher will result in the Daily time frame. Anything above the Daily will result in Weekly HTF aggregation, above W = Monthly, above M = Quarterly.
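A hedged sketch of that auto-selection logic, reconstructed from the description above (boundary handling for the unmentioned ranges is an assumption):
//@version=5
indicator("HTF auto-selection (sketch)", overlay=true)
htfAuto() =>
    s = timeframe.in_seconds()
    switch
        s < 5 * 60    => "5"
        s <= 15 * 60  => "30"
        s <= 120 * 60 => "240"
        timeframe.isintraday => "D" // 180min and higher intraday charts
        timeframe.isdaily    => "W"
        timeframe.isweekly   => "M"
        => "3M" // above monthly -> quarterly
htfClose = request.security(syminfo.tickerid, htfAuto(), close)
plot(htfClose, "HTF close", color=color.gray)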
Background Clouds:
In terms of visualization, each trend system is fully customizable through the input settings. There is also an option to turn on/off the background clouds behind the stop lines. These clouds can make the charts cleaner & more visible.
Tips & Tricks:
1. Different Trend Modes
Try out different modes in different markets. There is no single mode that will fit everyone on the same type of market. I myself actually prefer Loose over Normal.
2. Stop Line Mirroring
Whenever the Stop Lines start to mirror each other (there is one above the price and one below) this means the price is entering a ranging sideways market. It does not matter which Stop Line will the price touch first. They can both be faded until one of them flips.
3. Signs of the Ranging Market
Watch out for signs of a ranging market. Whenever the Trend System loses its colors, whether on the trend line or trend bars, and everything turns neutral (gray), that is usually a solid indication of range-type action for the following moments. Also, as already stated before, the Stop Line mirroring is a good sign of a ranging market.
4. Trailing Tool, Trend System as an Additional Study?
In case you are not a fan of the colorful green/red charts & candles, you can switch all of them off and just leave the Stop Line on. This way you can use the benefits of the trend system and still use other studies on top of that. Similar to how the Parabolic SAR is often used.
5. The Flip Setup
One of my favorite trades is the Flip Setup on the 5min charts. Whenever the Stop Line is broken, the very first opposing touch after the Trend System flips is usually a highly participated touch. If there is a strong reaction, this means this is likely the beginning of a new trend. Once I am in the position I like to trail the Stop Line on the 1min charts.
Hope it helps.
MOST + Moving Average Screener
Screener version of Anıl Özekşi's Moving Stop Loss (MOST) Indicator:
USERS MAY SCREEN MOST WITH 11 DIFFERENT TYPES OF MOVING AVERAGES + THEY CAN ALSO SCREEN SIGNALS WITH THOSE 11 MOVING AVERAGES INSTEAD OF USING THE MOST LINE.
Adjustable Moving Average Types:
SMA : Simple Moving Average
EMA : Exponential Moving Average
WMA : Weighted Moving Average
DEMA : Double Exponential Moving Average
TMA : Triangular Moving Average
VAR : Variable Index Dynamic Moving Average aka VIDYA
WWMA : Welles Wilder's Moving Average
ZLEMA : Zero Lag Exponential Moving Average
TSF : True Strength Force
HULL : Hull Moving Average
TILL : Tillson T3 Moving Average
About Screener Panel:
Users can explore 20 different and user-defined tickers, which can be changed from the SETTINGS (shares, crypto, commodities...) on this screener version.
The screener panel shows up right after the bars on the right side of the chart.
-In this screener version of MOST, users can define the number of demanded tickers (symbols) from 1 to 20 by checking the relevant boxes on the settings tab.
-All selected tickers can be screened in different timeframes.
-Also, different timeframes of the same Ticker can be screened.
IMPORTANT NOTICE:
Screener shows the results in 3 different logic:
1st LOGIC (Default Settings):
BUY AND SELL SIGNALS of MOST and MOVING AVERAGE LINE
Most Buy Signal: Moving Average Crosses ABOVE the MOST LINE
Most Sell Signal: Moving Average Crosses BELOW the MOST LINE
Tickers seen in green are the ones that are in an uptrend, according to MOST.
The ones that appear in red are those with a SELL signal, in a downtrend.
The numbers before each Ticker indicate how many bars passed after MOST's last BUY or SELL signal.
For example, according to the indicator, when BTCUSDT appears (3) in GREEN, Bitcoin switched to a BUY signal 3 bars ago.
2nd LOGIC (Moving Average & Price Flips Screener Mode):
This mode can only be activated by checking the 'Activate Moving Average Screening Mode' box on the settings menu.
The MOST line will disappear after checking the box.
Buy Signal: When the Selected Price crosses ABOVE the selected Moving Average.
Sell Signal: When the Selected Price crosses BELOW the selected Moving Average.
Tickers seen in green are the ones that are in an uptrend, according to Moving Average & Price Flips.
The ones that appear in red are those with a SELL signal, in a downtrend.
The numbers before each Ticker indicate how many bars passed after the last BUY or SELL signal of Moving Average & Price Flips.
For example, according to the indicator, when BTCUSDT appears (3) in GREEN, Bitcoin switched to a BUY signal 3 bars ago.
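Per symbol, this second screening mode boils down to a pair of cross checks plus a bars-since counter, roughly as below (a hedged sketch of the logic, not the screener's source; the actual screener presumably runs this through request.security for each ticker):
//@version=5
indicator("MA flip check (sketch)", overlay=true)
maLen = input.int(14, "MA length")
ma    = ta.ema(close, maLen)
buy     = ta.crossover(close, ma)      // selected price crosses above the selected MA
sell    = ta.crossunder(close, ma)     // selected price crosses below the selected MA
inUp    = close > ma                   // shown in green on the panel when true
barsAgo = ta.barssince(buy or sell)    // the "(n)" printed before each ticker
plot(ma, "MA", color=inUp ? color.green : color.red)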
3rd LOGIC (Moving Average Color Change Screener Mode):
Both 'Activate Moving Average Screening Mode' and 'Activate Moving Average Color Change Screening Mode' boxes must be checked in the settings tab.
The Moving Average line will be displayed in two colors.
Green color means the moving average value is greater than the previous bar's value.
Red color means the moving average value is smaller than the previous bar's value.
Buy Signal: After the Selected Moving Average turns GREEN from red.
Sell Signal: After the Selected Moving Average turns RED from green.
-The screener shows information about the color changes of the selected Moving Average with the default settings.
If this option is preferred, users are advised to increase the length to get better signals.
Tickers seen in green are the ones that are in an uptrend, according to Moving Average Color.
The ones that appear in red are those with a SELL signal, in a downtrend.
The numbers before each Ticker indicate how many bars passed after the last BUY or SELL signal of Moving Average Color Change.
For example, according to the indicator, when BTCUSDT appears (3) in GREEN, Bitcoin switched to a BUY signal 3 bars ago.
Psychological levels (Bank levels) PsychoLevels v3 - Tartigradia
Psychological levels (Bank levels) plots the closest "round" price levels above and below current price, based on neuroscience research of how humans intuitively calculate in logarithms.
Psychological levels, also called bank levels, are "round" price numbers, by truncating after the nth leftmost digits, around which price often experience resistance or support, because traders and investors tend to set orders around these round numbers.
The calculation done here is fully automatic and dynamic. Contrary to other similar scripts, this one uses a mathematical calculation that extracts the 1, 2 or 3 leftmost digits and calculates the previous and next level by incrementing/decrementing these digits. This means it works for any symbol under any price range.
This approach is based on neuroscience research, which found that human brains intuitively approximate numbers on a logarithmic scale, adults and children alike, and similarly to macaques; for more info see Numerical Cognition, Weber-Fechner Law, Zipf law.
For example, if price is at 0.0421, the next major price level is 0.05 and the medium one is 0.043. For another asset currently priced at 19354, the next and previous major price levels are 20000 and 10000 respectively, the next/previous medium levels are 20000 and 19000, and the next/previous weak levels are 19400 and 19300.
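A hedged sketch of that leftmost-digit arithmetic (an illustration of the idea only, not the script's implementation):
//@version=5
indicator("PsychoLevels digit math (sketch)", overlay=true)
// Step size when keeping `digits` leftmost digits: 19354 with 1 digit -> 10000, with 2 digits -> 1000.
levelStep(price, digits) =>
    math.pow(10, math.floor(math.log10(price)) - (digits - 1))
levelBelow(price, digits) =>
    s = levelStep(price, digits)
    math.floor(price / s) * s
levelAbove(price, digits) =>
    levelBelow(price, digits) + levelStep(price, digits)
plot(levelAbove(close, 1), "Major above",  color=color.green,  style=plot.style_circles)
plot(levelBelow(close, 1), "Major below",  color=color.red,    style=plot.style_circles)
plot(levelAbove(close, 2), "Medium above", color=color.blue,   style=plot.style_circles)
plot(levelBelow(close, 2), "Medium below", color=color.yellow, style=plot.style_circles)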
IMPORTANT: Please enable "Scale price chart only" in the chart's scale's options, as otherwise major levels may make the chart's scale very small and hard to read.
How it works
At any time, there are 3 levels of strength (1 leftmost digit, 2 leftmost digits, 3 leftmost digits) represented by different sizes, and 3 directional levels for each of these strengths (level above, level below, and half-level) represented by different colors and positions, around current price.
Indeed, contrary to other similar price levels scripts, we do not plot ALL price levels at all times, because otherwise the chart becomes wayyy too cluttered, and also it's highly processing intensive to plot so many lines. So we here use a dynamical approach: we plot only the relevant levels, the closest ones according to current price.
Hence, when a level disappears, it does not mean that it does not exist anymore, but simply that we are not drawing it right now because it is not pertinent for the current price movement (ie, too far away).
Breakouts can be detected in two different ways depending on if SMA is set to a value higher than 1 or not: if SMA == 1, then there is no smoothing, so the levels adapt instantaneously to the current price, so to detect breakout, you should refer to the levels at the previous tick and whether they were broken by current tick's price; if SMA > 1, then there is some smoothing, and so the levels will stay in-place even if there is a breakout, so it's easier to spot breakouts without having to look at the previous ticks, but on the other hand you won't see the new levels for the new price range until after a few more ticks for the smoothing window to adapt. Hence, by default, smoothing is disabled, so that you can see the currently pertinent levels at all time, even right after or during a breakout.
By default, the strong above level is in green, strong below level is in red, medium above level is in blue, medium below level is in yellow, and weak levels aren't displayed but can be. Half levels are also displayed, in a darker color. Strong levels are increments of the first leftmost digit (eg, 10000 to 20000), medium levels are increments of the second leftmost digit (eg, 19000 to 20000), and weak levels of the third leftmost digit (eg, 19100 to 19200). Instead of plotting all the psychological levels all at once as a grid, which makes the chart unintelligible, here the levels adapt dynamically around the current price, so that they show the above/below/half levels relatively to the current price.
Indeed, "half-levels" are also displayed (eg, medium level can also display 19500 instead of only 19000 or 20000). This was made because otherwise the gap between two levels was too big, especially for the strongest levels (eg, there was no major level between 20000 and 30000, but with a half-step we also get a half-level at 25000, and empirically price tends to respect these half levels - I also tried quarter levels but empirically the results were not good). In addition to this hard-coded half-level, you can also create more subdivisions (eg, quarter levels) by setting the simple moving average to a value higher than 1.
The script can be made to run on the daily timeframe whatever the current chart's timeframe is, to reduce the variability in levels, to make it less noisy than intraday price movement. But by default, the chart resolution is used, because I empirically found that the levels found with this indicator work on all time resolutions quite well.
The step can be adjusted to increase the gap between levels, eg, if you want to display one every 2 levels then input step = 2 (eg, 22000, 24000, 26000, etc), or if you want to display quarter levels, input 0.25 (eg, 22000, 22250, 22500, etc). The default values should fit most use cases and cover most psychological levels.
How to read
Focus first on the bigger dotted levels; they are stronger and more likely to cause a rebound, a major event, or price staying at this level.
Remember that it's not enough to just look at levels, the context is important, because levels have various effects depending on current price movement: if price is above a level, the level is a support on which price can rebound; if price is below a level, the level is a resistance on which price can rebound (or break); and finally sometimes price also stays hovering around a level for some time.
Levels closer to 9 are weaker, and levels closer to 0 are stronger, according to Zipf's law. Since v3, this is reflected in the transparency: levels that are closer to 9 will be more transparent.
The switch in color for the same level illustrates how a level switches from being a support to a resistance and inversely. Eg, if a major level turns from green to red, then it changed from being a resistance (above) to a support (below).
As is well known in trading, longer standing levels are stronger. This indicator provides a direct illustration: in practice, the number of consecutive dots on the same line influences the strength of the level: the longer the chain of dots, the more you can expect this price level to be significant. The length does not mean the level will necessarily hold, but that other traders are likely to monitor if it holds, and if not then price will break down. Hence, longer levels are good spots to place stop losses, or to enter trades depending on your strategy. In general, a single dot is not enough to consider a level significant, but 2 or more is a good enough level, and 10+ is a strong level. Intuitively, this makes sense, and is what pro traders do: the longer a level is tested, the stronger it is. This indicator can visually represent this intuition and allows to use it as a more systematic trading signal.
Motivation
I initially made the first version of the PsychoLevels indicator mainly to train with PineScript, but I found it surprisingly accurate to define levels that are respected by price movements. So I guess it can be useful for new traders and experienced traders alike, as it's easy to forget that psychological levels can often be as strong if not stronger than technical levels. It can also be used to quickly screen other minor assets for trading opportunities. For example, a hybrid strategy would be to manually define levels on BTCUSD but using this script to automatically define levels in crypto altcoins and quickly screen them for a trade opportunity that can be greater than with BTCUSD but with the same trend.
Personally, although initially I did not believe an automated tool would work well for this purpose, I could now empirically verify that it is quite reliable for the purpose of detecting levels, and so I use it all the time to find the levels automatically and help me monitor them like a hawk, so that I only have to draw uber major levels, the ones that last between cycles and that are hard to autodetect, but otherwise all daily/weekly levels are usually covered. However, trendlines must still be drawn manually or with another indicator (but note that up to now I have found none that worked well enough), as PsychoLevels only draws levels (ie, horizontal lines, not oblique ones!).
Differences with the previous version PsychoLevels v2
price levels now have a transparency according to their importance for the human brain: numbers closer to 9 are weaker, and numbers closer to 0 are stronger and represent a major psychological threshold (eg, that's why prices marked as $9.99 sell better than $10.00). This option can be disabled to get the exact same behavior as v2.
modularized and typed code
PsychoLevels v2 can be found here:
libhs.log.DEMO
◼ Overview
This is a demonstration of the dual logging library I have ported from my personal use for public use. Please start bar replay from Bar#4, and progress automatically (slowly) or manually. You would need to go through 450+ bars to see the full capability.
Logger = A dual logging library for developers. Tradingview lacks logging capability. This library provides logging while developing your scripts and is to be used by developers when developing and debugging their scripts.
Using this library would potentially slow down your scripts. Hence, use this for debugging only. Once your code is as you would like it to be, remove the logging code.
◼︎ Usage (Console):
Console = A sleek single cell logging with a limit of 4096 characters. When you dont need a large logging capability.
//@version=5
indicator("demo.Console", overlay=true)
plot(na)
import GETpacman/log/2 as logger
var console = logger.log.new()
console.init() // init() should be called as first line after variable declaration
console.FrameColor:=color.green
console.log('\n')
console.log('\n')
console.log('Hello World')
console.log('\n')
console.log('\n')
console.ShowStatusBar:=true
console.StatusBarAtBottom:=true
console.FrameColor:=color.blue //settings can be changed anytime before show method is called. Even twice. The last call will set the final value
console.ShowHeader:=false //this won't throw an error but is not used for console
console.show(position=position.bottom_right) //this should be the last line of your code, after all methods and settings have been dealt with.
◼︎ Usage (Logx):
Logx = Multiple columns logging with a limit of 4096 characters each message. When you need to log large number of messages.
//@version=5
indicator("demo.Logx", overlay=true)
plot(na)
import GETpacman/log/2 as logger
var logx = logger.log.new()
logx.init() // init() should be called as first line after variable declaration
logx.FrameColor:=color.green
logx.log('\n')
logx.log('\n')
logx.log('Hello World')
logx.log('\n')
logx.log('\n')
logx.ShowStatusBar:=true
logx.StatusBarAtBottom:=true
logx.ShowQ3:=false
logx.ShowQ4:=false
logx.ShowQ5:=false
logx.ShowQ6:=false
logx.FrameColor:=color.olive //settings can be changed anytime before show method is called. Even twice. The last call will set the final value
logx.show(position=position.top_right) //this should be the last line of your code, after all methods and settings have been dealt with.
◼︎ Fields (with default settings)
▶︎ IsConsole = True Log will act as Console if true, otherwise it will act as Logx
▶︎ ShowHeader = True (Log only) Will show a header at top or bottom of logx.
▶︎ HeaderAtTop = True (Log only) Will show the header at the top, or bottom if false, if ShowHeader is true.
▶︎ ShowStatusBar = True Will show a status bar at the bottom
▶︎ StatusBarAtBottom = True Will show the status bar at the bottom, or top if false, if ShowStatusBar is true.
▶︎ ShowMetaStatus = True Will show the meta info within status bar (Current Bar, characters left in console, Paging On Every Bar, Console dumped data etc)
▶︎ ShowBarIndex = True Logx will show column for Bar Index when the message was logged. Console will add Bar index at the front of logged messages
▶︎ ShowDateTime = True Logx will show column for Date/Time passed with the logged message logged. Console will add Date/Time at the front of logged messages
▶︎ ShowLogLevels = True Logx will show column for Log levels corresponding to error codes. Console will log levels in the status bar
▶︎ ReplaceWithErrorCodes = True (Log only) Logx will show error codes instead of log levels, if ShowLogLevels is switched on
▶︎ RestrictLevelsToKey7 = True Log levels will be restricted to Key 7 codes - TRACE, DEBUG, INFO, WARNING, ERROR, CRITICAL, FATAL
▶︎ ShowQ1 = True (Log only) Show the column for Q1
▶︎ ShowQ2 = True (Log only) Show the column for Q2
▶︎ ShowQ3 = True (Log only) Show the column for Q3
▶︎ ShowQ4 = True (Log only) Show the column for Q4
▶︎ ShowQ5 = True (Log only) Show the column for Q5
▶︎ ShowQ6 = True (Log only) Show the column for Q6
▶︎ ColorText = True Log/Console will color text as per error codes
▶︎ HighlightText = True Log/Console will highlight text (like denoting) as per error codes
▶︎ AutoMerge = True (Log only) Merge the queues towards the right if there is no data in those queues.
▶︎ PageOnEveryBar = True Clear data from previous bars on each new bar, in conjunction with the PageHistory setting.
▶︎ MoveLogUp = True Move log in up direction. Setting to false will push logs down.
▶︎ MarkNewBar = True On each change of bar, add a marker to show the bar has changed
▶︎ PrefixLogLevel = True (Console only) Prefix all messages with the log level corresponding to error code.
▶︎ MinWidth = 40 Set the minimum width needed to be seen. Prevents logx/console shrinking below these number of characters.
▶︎ TabSizeQ1 = 0 If set to more than one, the messages on Q1 or Console messages will indent by this size based on error code (Max 4 used)
▶︎ TabSizeQ2 = 0 If set to more than one, the messages on Q2 will indent by this size based on error code (Max 4 used)
▶︎ TabSizeQ3 = 0 If set to more than one, the messages on Q3 will indent by this size based on error code (Max 4 used)
▶︎ TabSizeQ4 = 0 If set to more than one, the messages on Q4 will indent by this size based on error code (Max 4 used)
▶︎ TabSizeQ5 = 0 If set to more than one, the messages on Q5 will indent by this size based on error code (Max 4 used)
▶︎ TabSizeQ6 = 0 If set to more than one, the messages on Q6 will indent by this size based on error code (Max 4 used)
▶︎ PageHistory = 0 Used with PageOnEveryBar. Determines how many historical pages to keep.
▶︎ HeaderQbarIndex = 'Bar#' (Logx only) The header to show for Bar Index
▶︎ HeaderQdateTime = 'Date' (Logx only) The header to show for Date/Time
▶︎ HeaderQerrorCode = 'eCode' (Logx only) The header to show for Error Codes
▶︎ HeaderQlogLevel = 'State' (Logx only) The header to show for Log Level
▶︎ HeaderQ1 = 'h.Q1' (Logx only) The header to show for Q1
▶︎ HeaderQ2 = 'h.Q2' (Logx only) The header to show for Q2
▶︎ HeaderQ3 = 'h.Q3' (Logx only) The header to show for Q3
▶︎ HeaderQ4 = 'h.Q4' (Logx only) The header to show for Q4
▶︎ HeaderQ5 = 'h.Q5' (Logx only) The header to show for Q5
▶︎ HeaderQ6 = 'h.Q6' (Logx only) The header to show for Q6
▶︎ Status = '' Set the status to this text.
▶︎ HeaderColor Set the color for the header
▶︎ HeaderColorBG Set the background color for the header
▶︎ StatusColor Set the color for the status bar
▶︎ StatusColorBG Set the background color for the status bar
▶︎ TextColor Set the color for the text used without error code or code 0.
▶︎ TextColorBG Set the background color for the text used without error code or code 0.
▶︎ FrameColor Set the color for the frame around Logx/Console
▶︎ FrameSize = 1 Set the size of the frame around Logx/Console
▶︎ CellBorderSize = 0 Set the size of the border around cells.
▶︎ CellBorderColor Set the color for the border around cells within Logx/Console
▶︎ SeparatorColor = gray Set the color of the separator between the Console/Logx attachment
◼︎ Methods (summary)
● init ▶︎ Initialise the log
● log ▶︎ Log the messages. Use method show to display the messages
● page ▶︎ Clear messages from previous bar while logging messages on this bar.
● show ▶︎ Shows a table displaying the logged messages
● clear ▶︎ Clears the log of all messages
● resize ▶︎ Resizes the log. If the size is reduced, the oldest messages are lost first.
● turnPage ▶︎ When called, all messages marked with previous page, or from start are cleared
● dateTimeFormat ▶︎ Sets the date time format to be used when displaying date/time info.
● resetTextColor ▶︎ Reset Text Color to library default
● resetTextBGcolor ▶︎ Reset Text BG Color to library default
● resetHeaderColor ▶︎ Reset Header Color to library default
● resetHeaderBGcolor ▶︎ Reset Header BG Color to library default
● resetStatusColor ▶︎ Reset Status Color to library default
● resetStatusBGcolor ▶︎ Reset Status BG Color to library default
● setColors ▶︎ Sets the colors to be used for corresponding error codes
● setColorsBG ▶︎ Sets the background colors to be used for corresponding error codes. If no matching error code is found, the text color is used.
● setColorsHC ▶︎ Sets the highlight colors to be used for corresponding error codes. If no matching error code is found, the text BG color is used.
● resetColors ▶︎ Reset the colors to library default (Total 36, not including error code 0)
● resetColorsBG ▶︎ Reset the background colors to library default
● resetColorsHC ▶︎ Reset the highlight colors to library default
● setLevelNames ▶︎ Set the log level names to be used for corresponding error codes. If no matching error code is found, an empty string is used.
● resetLevelNames ▶︎ Reset the log level names to library default. (Total 36) 1=TRACE, 2=DEBUG, 3=INFO, 4=WARNING, 5=ERROR, 6=CRITICAL, 7=FATAL
● attach ▶︎ Attaches a console to an existing Logx, allowing a dual logging system with the two logs independent of each other
● detach ▶︎ Detaches an already attached console from Logx
loggerLibrary "logger"
◼ Overview
A dual logging library for developers. TradingView lacks logging capability. This library provides logging while developing your scripts and is intended for developers to use when developing and debugging their scripts.
Using this library can potentially slow down your scripts, so use it for debugging only. Once your code works as you would like, remove the logging code.
◼︎ Usage (Console):
Console = A sleek, single-cell log with a limit of 4096 characters, for when you don't need a large logging capability.
//@version=5
indicator("demo.Console", overlay=true)
plot(na)
import GETpacman/logger/1 as logger
var console = logger.log.new()
console.init() // init() should be called as first line after variable declaration
console.FrameColor:=color.green
console.log('\n')
console.log('\n')
console.log('Hello World')
console.log('\n')
console.log('\n')
console.ShowStatusBar:=true
console.StatusBarAtBottom:=true
console.FrameColor:=color.blue //settings can be changed anytime before show method is called. Even twice. The last call will set the final value
console.ShowHeader:=false //this won't throw an error but is not used for console
console.show(position=position.bottom_right) //this should be the last line of your code, after all methods and settings have been dealt with.
◼︎ Usage (Logx):
Logx = Multi-column logging with a limit of 4096 characters per message, for when you need to log a large number of messages.
//@version=5
indicator("demo.Logx", overlay=true)
plot(na)
import GETpacman/logger/1 as logger
var logx = logger.log.new()
logx.init() // init() should be called as first line after variable declaration
logx.FrameColor:=color.green
logx.log('\n')
logx.log('\n')
logx.log('Hello World')
logx.log('\n')
logx.log('\n')
logx.ShowStatusBar:=true
logx.StatusBarAtBottom:=true
logx.ShowQ3:=false
logx.ShowQ4:=false
logx.ShowQ5:=false
logx.ShowQ6:=false
logx.FrameColor:=color.olive //settings can be changed anytime before show method is called. Even twice. The last call will set the final value
logx.show(position=position.top_right) //this should be the last line of your code, after all methods and settings have been dealt with.
◼︎ Fields (with default settings)
▶︎ IsConsole = True Log will act as Console if true, otherwise it will act as Logx
▶︎ ShowHeader = True (Log only) Will show a header at top or bottom of logx.
▶︎ HeaderAtTop = True (Log only) Will show the header at the top, or bottom if false, if ShowHeader is true.
▶︎ ShowStatusBar = True Will show a status bar at the bottom
▶︎ StatusBarAtBottom = True Will show the status bar at the bottom, or top if false, if ShowStatusBar is true.
▶︎ ShowMetaStatus = True Will show the meta info within status bar (Current Bar, characters left in console, Paging On Every Bar, Console dumped data etc)
▶︎ ShowBarIndex = True Logx will show column for Bar Index when the message was logged. Console will add Bar index at the front of logged messages
▶︎ ShowDateTime = True Logx will show column for Date/Time passed with the logged message logged. Console will add Date/Time at the front of logged messages
▶︎ ShowLogLevels = True Logx will show column for Log levels corresponding to error codes. Console will log levels in the status bar
▶︎ ReplaceWithErrorCodes = True (Log only) Logx will show error codes instead of log levels, if ShowLogLevels is switched on
▶︎ RestrictLevelsToKey7 = True Log levels will be restricted to Key 7 codes - TRACE, DEBUG, INFO, WARNING, ERROR, CRITICAL, FATAL
▶︎ ShowQ1 = True (Log only) Show the column for Q1
▶︎ ShowQ2 = True (Log only) Show the column for Q2
▶︎ ShowQ3 = True (Log only) Show the column for Q3
▶︎ ShowQ4 = True (Log only) Show the column for Q4
▶︎ ShowQ5 = True (Log only) Show the column for Q5
▶︎ ShowQ6 = True (Log only) Show the column for Q6
▶︎ ColorText = True Log/Console will color text as per error codes
▶︎ HighlightText = True Log/Console will highlight text (like denoting) as per error codes
▶︎ AutoMerge = True (Log only) Merge the queues towards the right if there is no data in those queues.
▶︎ PageOnEveryBar = True Clear data from previous bars on each new bar, in conjunction with the PageHistory setting.
▶︎ MoveLogUp = True Move log in up direction. Setting to false will push logs down.
▶︎ MarkNewBar = True On each change of bar, add a marker to show the bar has changed
▶︎ PrefixLogLevel = True (Console only) Prefix all messages with the log level corresponding to error code.
▶︎ MinWidth = 40 Set the minimum width needed to be seen. Prevents logx/console shrinking below these number of characters.
▶︎ TabSizeQ1 = 0 If set to more than one, the messages on Q1 or Console messages will indent by this size based on error code (Max 4 used)
▶︎ TabSizeQ2 = 0 If set to more than one, the messages on Q2 will indent by this size based on error code (Max 4 used)
▶︎ TabSizeQ3 = 0 If set to more than one, the messages on Q3 will indent by this size based on error code (Max 4 used)
▶︎ TabSizeQ4 = 0 If set to more than one, the messages on Q4 will indent by this size based on error code (Max 4 used)
▶︎ TabSizeQ5 = 0 If set to more than one, the messages on Q5 will indent by this size based on error code (Max 4 used)
▶︎ TabSizeQ6 = 0 If set to more than one, the messages on Q6 will indent by this size based on error code (Max 4 used)
▶︎ PageHistory = 0 Used with PageOnEveryBar. Determines how many historical pages to keep.
▶︎ HeaderQbarIndex = 'Bar#' (Logx only) The header to show for Bar Index
▶︎ HeaderQdateTime = 'Date' (Logx only) The header to show for Date/Time
▶︎ HeaderQerrorCode = 'eCode' (Logx only) The header to show for Error Codes
▶︎ HeaderQlogLevel = 'State' (Logx only) The header to show for Log Level
▶︎ HeaderQ1 = 'h.Q1' (Logx only) The header to show for Q1
▶︎ HeaderQ2 = 'h.Q2' (Logx only) The header to show for Q2
▶︎ HeaderQ3 = 'h.Q3' (Logx only) The header to show for Q3
▶︎ HeaderQ4 = 'h.Q4' (Logx only) The header to show for Q4
▶︎ HeaderQ5 = 'h.Q5' (Logx only) The header to show for Q5
▶︎ HeaderQ6 = 'h.Q6' (Logx only) The header to show for Q6
▶︎ Status = '' Set the status to this text.
▶︎ HeaderColor Set the color for the header
▶︎ HeaderColorBG Set the background color for the header
▶︎ StatusColor Set the color for the status bar
▶︎ StatusColorBG Set the background color for the status bar
▶︎ TextColor Set the color for the text used without error code or code 0.
▶︎ TextColorBG Set the background color for the text used without error code or code 0.
▶︎ FrameColor Set the color for the frame around Logx/Console
▶︎ FrameSize = 1 Set the size of the frame around Logx/Console
▶︎ CellBorderSize = 0 Set the size of the border around cells.
▶︎ CellBorderColor Set the color for the border around cells within Logx/Console
▶︎ SeparatorColor = gray Set the color of the separator between the Console/Logx attachment
◼︎ Methods (summary)
● init ▶︎ Initialise the log
● log ▶︎ Log the messages. Use method show to display the messages
● page ▶︎ Clear messages from previous bar while logging messages on this bar.
● show ▶︎ Shows a table displaying the logged messages
● clear ▶︎ Clears the log of all messages
● resize ▶︎ Resizes the log. If the size is reduced, the oldest messages are lost first.
● turnPage ▶︎ When called, all messages marked with previous page, or from start are cleared
● dateTimeFormat ▶︎ Sets the date time format to be used when displaying date/time info.
● resetTextColor ▶︎ Reset Text Color to library default
● resetTextBGcolor ▶︎ Reset Text BG Color to library default
● resetHeaderColor ▶︎ Reset Header Color to library default
● resetHeaderBGcolor ▶︎ Reset Header BG Color to library default
● resetStatusColor ▶︎ Reset Status Color to library default
● resetStatusBGcolor ▶︎ Reset Status BG Color to library default
● setColors ▶︎ Sets the colors to be used for corresponding error codes
● setColorsBG ▶︎ Sets the background colors to be used for corresponding error codes. If no matching error code is found, the text color is used.
● setColorsHC ▶︎ Sets the highlight colors to be used for corresponding error codes. If no matching error code is found, the text BG color is used.
● resetColors ▶︎ Reset the colors to library default (Total 36, not including error code 0)
● resetColorsBG ▶︎ Reset the background colors to library default
● resetColorsHC ▶︎ Reset the highlight colors to library default
● setLevelNames ▶︎ Set the log level names to be used for corresponding error codes. If no matching error code is found, an empty string is used.
● resetLevelNames ▶︎ Reset the log level names to library default. (Total 36) 1=TRACE, 2=DEBUG, 3=INFO, 4=WARNING, 5=ERROR, 6=CRITICAL, 7=FATAL
● attach ▶︎ Attaches a console to an existing Logx, allowing a dual logging system with the two logs independent of each other
● detach ▶︎ Detaches an already attached console from Logx
method clear(this)
Clears all the queues, including the bar_index and time queues, of existing messages
Namespace types: log
Parameters:
this (log)
method resize(this, rows)
Resizes the message queues. If the size is decreased, the oldest messages are removed.
Namespace types: log
Parameters:
this (log)
rows (int) : The new size needed for the queues. Default value is 40.
method dateTimeFormat(this, format)
Re/set the date time format used for displaying date and time. Default resets to dd.MMM.yy HH:mm
Namespace types: log
Parameters:
this (log)
format (string)
method resetTextColor(this)
Resets the text color of the log to library default.
Namespace types: log
Parameters:
this (log)
method resetTextColorBG(this)
Resets the background color of the log to library default.
Namespace types: log
Parameters:
this (log)
method resetHeaderColor(this)
Resets the color used for Headers, to library default.
Namespace types: log
Parameters:
this (log)
method resetHeaderColorBG(this)
Resets the background color used for Headers, to library default.
Namespace types: log
Parameters:
this (log)
method resetStatusColor(this)
Resets the text color of the status row, to library default.
Namespace types: log
Parameters:
this (log)
method resetStatusColorBG(this)
Resets the background color of the status row, to library default.
Namespace types: log
Parameters:
this (log)
method resetFrameColor(this)
Resets the color used for the frame around the log table, to library default.
Namespace types: log
Parameters:
this (log)
method resetColorsHC(this)
Resets the color used for the highlighting when Highlight Text option is used, to library default
Namespace types: log
Parameters:
this (log)
method resetColorsBG(this)
Resets the background colors used for the respective error codes, when the Color Text option is used, to library default
Namespace types: log
Parameters:
this (log)
method resetColors(this)
Resets the color used for respective error codes, when the Color Text option is used, to library default
Namespace types: log
Parameters:
this (log)
method setColors(this, c)
Sets the colors corresponding to error codes
Index 0 of input array c is reserved for future use.
Index 1 of input array c is color for debug code 1.
Index 2 of input array c is color for debug code 2.
There are 2 modes of coloring
1 . Using the Foreground color
2 . Using the Foreground color as background color and a white/black/gray color as foreground color
This is called denoting or highlighting, which effectively uses the foreground color as the background color.
Namespace types: log
Parameters:
this (log)
c (color ) : Array of colors to be used for corresponding error codes. If the corresponding code is not found, then text color is used
method setColorsHC(this, c)
Sets the highlight colors corresponding to error codes
Index 0 of input array c is reserved for future use.
Index 1 of input array c is color for debug code 1.
Index 2 of input array c is color for debug code 2.
There are 2 modes of coloring
1 . Using the Foreground color
2 . Using the Foreground color as background color and a white/black/gray color as foreground color
This is called denoting or highlighting, which effectively uses the foreground color as the background color.
Namespace types: log
Parameters:
this (log)
c (color ) : Array of highlight colors to be used for corresponding error codes. If the corresponding code is not found, then text color BG is used
method setColorsBG(this, c)
Sets the background colors corresponding to error codes
Index 0 of input array c is reserved for future use.
Index 1 of input array c is color for debug code 1.
Index 2 of input array c is color for debug code 2.
There are 2 modes of coloring
1 . Using the Foreground color
2 . Using the Foreground color as background color and a white/black/gray color as foreground color
This is called denoting or highlighting, which effectively uses the foreground color as the background color.
Namespace types: log
Parameters:
this (log)
c (color ) : Array of background colors to be used for corresponding error codes. If the corresponding code is not found, then text color BG is used
method resetLevelNames(this, prefix, suffix)
Resets the log level names used for corresponding error codes
With prefix/suffix, the default Level name will be like => prefix + Code + suffix
Namespace types: log
Parameters:
this (log)
prefix (string) : Prefix to use when resetting level names
suffix (string) : Suffix to use when resetting level names
method setLevelNames(this, names)
Sets the log level names used for corresponding error codes
Index 0 of input array names is reserved for future use.
Index 1 of input array names is name used for error code 1.
Index 2 of input array names is name used for error code 2.
Namespace types: log
Parameters:
this (log)
names (string ) : Array of log level names be used for corresponding error codes. If the corresponding code is not found, then an empty string is used
method init(this, rows, isConsole)
Sets up data for logging. It consists of 6 separate message queues, and 3 additional queues for bar index, time and log level/error code. Do not directly alter the contents, as the library could break.
Namespace types: log
Parameters:
this (log)
rows (int) : Log size, excluding the header/status. Default value is 50.
isConsole (bool) : Whether to init the log as console or logx. True= as console, False = as Logx. Default is true, hence init as console.
method log(this, ec, m1, m2, m3, m4, m5, m6, tv, log)
Logs messages to the queues, including time/date, bar_index, and error code
Namespace types: log
Parameters:
this (log)
ec (int) : Error/Code to be assigned.
m1 (string) : Message needed to be logged to Q1, or for console.
m2 (string) : Message needed to be logged to Q2. Not used/ignored when in console mode
m3 (string) : Message needed to be logged to Q3. Not used/ignored when in console mode
m4 (string) : Message needed to be logged to Q4. Not used/ignored when in console mode
m5 (string) : Message needed to be logged to Q5. Not used/ignored when in console mode
m6 (string) : Message needed to be logged to Q6. Not used/ignored when in console mode
tv (int) : Time to be used. Default value is time, which logs the start time of bar.
log (bool) : Whether to log the message or not. Default is true.
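As a minimal usage sketch based on the documented signature above (the error codes, message strings, and sizes here are illustrative assumptions, not values required by the library):
//@version=5
indicator("demo.LogLevels", overlay=true)
plot(na)
import GETpacman/logger/1 as logger
var logx = logger.log.new()
logx.init(rows=50, isConsole=false) // set up as Logx (multi-column) rather than Console
logx.log(ec=2, m1='computing levels', m2='fast pass') // DEBUG-level entry written to Q1 and Q2
logx.log(ec=5, m1='missing data on this bar') // ERROR-level entry, Q1 only
logx.show(position=position.bottom_right) // show should remain the last call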
method page(this, ec, m1, m2, m3, m4, m5, m6, tv, page)
Logs messages to the queues, including time/date, bar_index, and error code. All messages from previous bars are cleared
Namespace types: log
Parameters:
this (log)
ec (int) : Error/Code to be assigned.
m1 (string) : Message needed to be logged to Q1, or for console.
m2 (string) : Message needed to be logged to Q2. Not used/ignored when in console mode
m3 (string) : Message needed to be logged to Q3. Not used/ignored when in console mode
m4 (string) : Message needed to be logged to Q4. Not used/ignored when in console mode
m5 (string) : Message needed to be logged to Q5. Not used/ignored when in console mode
m6 (string) : Message needed to be logged to Q6. Not used/ignored when in console mode
tv (int) : Time to be used. Default value is time, which logs the start time of bar.
page (bool) : Whether to page (clear messages from previous bars) or not. Default is true.
method turnPage(this, turn)
Set the messages to be on a new page, clearing messages from previous page.
This is not dependent on the PageHistory option, as this method simply clears all the messages, like turning old pages to a new page.
Namespace types: log
Parameters:
this (log)
turn (bool)
method show(this, position, hhalign, hvalign, hsize, thalign, tvalign, tsize, show, attach)
Display Message Q, Index Q, Time Q, and Log Levels
All options for position/alignment accept TV values, such as position.bottom_right, text.align_left, size.auto etc.
Namespace types: log
Parameters:
this (log)
position (string) : Position of the table used for displaying the messages. Default is Bottom Right.
hhalign (string) : Horizontal alignment of Header columns
hvalign (string) : Vertical alignment of Header columns
hsize (string) : Size of Header text
thalign (string) : Horizontal alignment of all messages
tvalign (string) : Vertical alignment of all messages
tsize (string) : Size of text across the table
show (bool) : Whether to display the logs or not. Default is true.
attach (log) : Console that has been attached via attach method. If na then console will not be shown
method attach(this, attach, position)
Attaches a console to Logx, or moves an already attached console around the Logx
All options for position/alignment accept TV values, such as position.bottom_right, text.align_left, size.auto etc.
Namespace types: log
Parameters:
this (log)
attach (log) : Console object that has been previously attached.
position (string) : Position of Console in relation to Logx. Can be Top, Right, Bottom, Left. Default is Bottom. If an unknown value is specified, it defaults to Bottom.
method detach(this, attach)
Detaches the attached console from Logx.
All options for position/alignment accept TV values, such as position.bottom_right, text.align_left, size.auto etc.
Namespace types: log
Parameters:
this (log)
attach (log) : Console object that has been previously attached.
Any Oscillator Underlay [TTF]
We are proud to release a new indicator that has been a while in the making - the Any Oscillator Underlay (AOU)!
Note: There is a lot to discuss regarding this indicator, including its intent and some of how it operates, so please be sure to read this entire description before using this indicator to help ensure you understand both the intent and some limitations with this tool.
Our intent for building this indicator was to accomplish the following:
Combine all of the oscillators that we like to use into a single indicator
Take up a bit less screen space for the underlay indicators for strategies that utilize multiple oscillators
Provide a tool for newer traders to be able to leverage multiple oscillators in a single indicator
Features:
Includes 8 separate, fully-functional indicators combined into one
Ability to easily enable/disable and configure each included indicator independently
Clearly named plots to support user customization of color and styling, as well as manual creation of alerts
Ability to customize sub-indicator title position and color
Ability to customize sub-indicator divider lines style and color
Indicators that are included in this initial release:
TSI
2x RSIs (dubbed the Twin RSI )
Stochastic RSI
Stochastic
Ultimate Oscillator
Awesome Oscillator
MACD
Outback RSI (Color-coding only)
Quick note on OB/OS:
Before we get into covering each included indicator, we first need to cover a core concept for how we're defining OB and OS levels. To help illustrate this, we will use the TSI as an example.
The TSI by default has a mid-point of 0 and a range of -100 to 100. As a result, a common practice is to place lines on the -30 and +30 levels to represent OS and OB zones, respectively. Most people tend to view these levels as distance from the edges/outer bounds or as absolute levels, but we feel a better way to frame the OB/OS concept is to instead define it as distance ("offset") from the mid-line. In keeping with the -30 and +30 levels in our example, the offset in this case would be "30".
Taking this a step further, let's say we decided we wanted an offset of 25. Since the mid-point is 0, we'd then calculate the OB level as 0 + 25 (+25), and the OS level as 0 - 25 (-25).
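As a quick illustrative sketch (the values and plot styling here are placeholder assumptions, not settings taken from the indicator), the offset-based levels can be expressed in Pine Script like this:
//@version=5
indicator("OB/OS Offset Sketch")
midLine = 0.0
offset = input.float(25.0, "Offset from mid-line")
obLevel = midLine + offset // overbought = mid-line + offset (0 + 25 = +25)
osLevel = midLine - offset // oversold = mid-line - offset (0 - 25 = -25)
plot(obLevel, "OB", color.red)
plot(osLevel, "OS", color.green)
plot(midLine, "Mid", color.gray)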
Now that we've covered the concept of how we approach defining OB and OS levels (based on offset/distance from the mid-line), and since we did apply some transformations, rescaling, and/or repositioning to all of the indicators noted above, we are going to discuss each component indicator to detail both how it was modified from the original to fit the stacked-indicator model, as well as the various major components that the indicator contains.
TSI:
This indicator contains the following major elements:
TSI and TSI Signal Line
Color-coded fill for the TSI/TSI Signal lines
Moving Average for the TSI
TSI Histogram
Mid-line and OB/OS lines
Default TSI fill color coding:
Green : TSI is above the signal line
Red : TSI is below the signal line
Note: The TSI traditionally has a range of -100 to +100 with a mid-point of 0 (range of 200). To fit into our stacking model, we first shrunk the range to 100 (-50 to +50 - cut it in half), then repositioned it to have a mid-point of 50. Since this is the "bottom" of our indicator-stack, no additional repositioning is necessary.
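A minimal sketch of that rescaling (the TSI lengths and the OB/OS offset below are assumptions for illustration, not this indicator's defaults):
//@version=5
indicator("TSI Rescale Sketch")
tsi = ta.tsi(close, 13, 25) * 100 // ta.tsi() returns -1..+1, so scale to the -100..+100 range first; lengths are assumptions
rescaled = tsi / 2 + 50 // halve the -100..+100 range, then shift the mid-point to 50 (0..100)
plot(rescaled, "Rescaled TSI", color.teal)
hline(50, "Mid-line")
hline(75, "OB (offset 25)")
hline(25, "OS (offset 25)")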
Twin RSI:
This indicator contains the following major elements:
Fast RSI (useful if you want to leverage 2x RSIs as it makes it easier to see the overlaps and crosses - can be disabled if desired)
Slow RSI (primary RSI)
Color-coded fill for the Fast/Slow RSI lines (if Fast RSI is enabled and configured)
Moving Average for the Slow RSI
Mid-line and OB/OS lines
Default Twin RSI fill color coding:
Dark Red : Fast RSI below Slow RSI and Slow RSI below Slow RSI MA
Light Red : Fast RSI below Slow RSI and Slow RSI above Slow RSI MA
Dark Green : Fast RSI above Slow RSI and Slow RSI below Slow RSI MA
Light Green : Fast RSI above Slow RSI and Slow RSI above Slow RSI MA
Note: The RSI naturally has a range of 0 to 100 with a mid-point of 50, so no rescaling or transformation is done on this indicator. The only manipulation done is to properly position it in the indicator-stack based on which other indicators are also enabled.
Stochastic and Stochastic RSI:
These indicators contain the following major elements:
Configurable lengths for the RSI (for the Stochastic RSI only), K, and D values
Configurable base price source
Mid-line and OB/OS lines
Note: The Stochastic and Stochastic RSI both have a normal range of 0 to 100 with a mid-point of 50, so no rescaling or transformations are done on either of these indicators. The only manipulation done is to properly position it in the indicator-stack based on which other indicators are also enabled.
Ultimate Oscillator (UO):
This indicator contains the following major elements:
Configurable lengths for the Fast, Middle, and Slow BP/TR components
Mid-line and OB/OS lines
Moving Average for the UO
Color-coded fill for the UO/UO MA lines (if UO MA is enabled and configured)
Default UO fill color coding:
Green : UO is above the moving average line
Red : UO is below the moving average line
Note: The UO naturally has a range of 0 to 100 with a mid-point of 50, so no rescaling or transformation is done on this indicator. The only manipulation done is to properly position it in the indicator-stack based on which other indicators are also enabled.
Awesome Oscillator (AO):
This indicator contains the following major elements:
Configurable lengths for the Fast and Slow moving averages used in the AO calculation
Configurable price source for the moving averages used in the AO calculation
Mid-line
Option to display the AO as a line or pseudo-histogram
Moving Average for the AO
Color-coded fill for the AO/AO MA lines (if AO MA is enabled and configured)
Default AO fill color coding (Note: Fill was disabled in the image above to improve clarity):
Green : AO is above the moving average line
Red : AO is below the moving average line
Note: The AO technically has an infinite (unbounded) range - -∞ to ∞ - and the effective range is bound to the underlying security price (e.g. BTC will have a wider range than SP500, and SP500 will have a wider range than EUR/USD). We employed some special techniques to rescale this indicator into our desired range of 100 (-50 to 50), and then repositioned it to have a midpoint of 50 (range of 0 to 100) to meet the constraints of our stacking model. We then do one final repositioning to place it in the correct position in the indicator-stack based on which other indicators are also enabled. For more details on how we accomplished this, read our section "Binding Infinity" below.
MACD:
This indicator contains the following major elements:
Configurable lengths for the Fast and Slow moving averages used in the MACD calculation
Configurable price source for the moving averages used in the MACD calculation
Configurable length and calculation method for the MACD Signal Line calculation
Mid-line
Note: Like the AO, the MACD also technically has an infinite (unbound) range. We employed the same principles here as we did with the AO to rescale and reposition this indicator as well. For more details on how we accomplished this, read our section "Binding Infinity" below.
Outback RSI (ORSI):
This is a stripped-down version of the Outback RSI indicator (linked above) that only includes the color-coding background (suffice it to say that it was not technically feasible to attempt to rescale the other components in a way that could consistently be clearly seen on-chart). As this component is a bit of a niche/special-purpose sub-indicator, it is disabled by default, and we suggest it remain disabled unless you have some pre-defined strategy that leverages the color-coding element of the Outback RSI that you wish to use.
Binding Infinity - How We Incorporated the AO and MACD (Warning - Math Talk Ahead!)
Note: This applies only to the AO and MACD at time of original publication. If any other indicators are added in the future that also fall into the category of "binding an infinite-range oscillator", we will make that clear in the release notes when that new addition is published.
To help set the stage for this discussion, it's important to note that the broader challenge of "equalizing inputs" is nothing new. In fact, it's a key element in many of the most popular fields of data science, such as AI and Machine Learning. They need to take a diverse set of inputs with a wide variety of ranges and seemingly-random inputs (referred to as "features"), and build a mathematical or computational model in order to work. But, when the raw inputs can vary significantly from one another, there is an inherent need to do some pre-processing to those inputs so that one doesn't overwhelm another simply due to the difference in raw values between them. This is where feature scaling comes into play.
With this in mind, we implemented 2 of the most common methods of Feature Scaling - Min-Max Normalization (which we call "Normalization" in our settings), and Z-Score Normalization (which we call "Standardization" in our settings). Let's take a look at each of those methods as they have been implemented in this script.
Min-Max Normalization (Normalization)
This is one of the most common - and most basic - methods of feature scaling. The basic formula is: y = (x - min)/(max - min) - where x is the current data sample, min is the lowest value in the dataset, and max is the highest value in the dataset. In this transformation, the max would evaluate to 1, and the min would evaluate to 0, and any value in between the min and the max would evaluate somewhere between 0 and 1.
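For reference, here is a minimal Pine Script sketch of rolling Min-Max Normalization. The lookback window is an assumption, and close is used only for illustration; the indicator itself applies the rescaling to oscillator values such as the AO and MACD.
//@version=5
indicator("Min-Max Normalization Sketch")
len = input.int(100, "Lookback") // window length is an assumption
lo = ta.lowest(close, len)
hi = ta.highest(close, len)
normalized = (close - lo) / (hi - lo) // 0 at the window low, 1 at the window high (na if the window is flat)
plot(normalized, "Normalized", color.blue)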
The key benefits of this method are:
It can be used to transform datasets of any range into a new dataset with a consistent and known range (0 to 1).
It has no dependency on the "shape" of the raw input dataset (i.e. does not assume the input dataset can be approximated to a normal distribution).
But there are a couple of "gotchas" with this technique...
First, it assumes the input dataset is complete, or an accurate representation of the population via random sampling. While in most situations this is a valid assumption, in trading indicators we don't really have that luxury as we're often limited in what sample data we can access (i.e. number of historical bars available).
Second, this method is highly sensitive to outliers. Since the crux of this transformation is based on the max-min to define the initial range, a single significant outlier can result in skewing the post-transformation dataset (i.e. major price movement as a reaction to a significant news event).
You can potentially mitigate those 2 "gotchas" by using a mechanism or technique to find and discard outliers (e.g. calculate the mean and standard deviation of the input dataset and discard any raw values more than 5 standard deviations from the mean), but if your most recent datapoint is an "outlier" as defined by that algorithm, processing it using the "scrubbed" dataset would result in that new datapoint being outside the intended range of 0 to 1 (e.g. if the new datapoint is greater than the "scrubbed" max, its post-transformation value would be greater than 1). Even though this is a bit of an edge-case scenario, it is still sure to happen in live markets processing live data, so it's not an ideal solution in our opinion (which is why we chose not to attempt to discard outliers in this manner).
Z-Score Normalization (Standardization)
This method of rescaling is a bit more complex than the Min-Max Normalization method noted above, but it is also a widely used process. The basic formula is: y = (x – μ) / σ - where x is the current data sample, μ is the mean (average) of the input dataset, and σ is the standard deviation of the input dataset. While this transformation still results in a technically-infinite possible range, the output of this transformation has 2 very significant properties - the output dataset has a mean (μ) of 0 and a standard deviation (σ) of 1.
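A minimal rolling Z-Score sketch in Pine Script (again, the lookback window and the use of close are illustrative assumptions rather than this indicator's settings):
//@version=5
indicator("Z-Score Standardization Sketch")
len = input.int(100, "Lookback")
mu = ta.sma(close, len) // simple mean of the window
sigma = ta.stdev(close, len) // standard deviation of the window
zscore = (close - mu) / sigma
plot(zscore, "Z-Score", color.purple)
hline(0, "Mean")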
The key benefits of this method are:
As it's based on normalizing the mean and standard deviation of the input dataset instead of a linear range conversion, it is far less susceptible to outliers significantly affecting the result (and in fact has the effect of "squishing" outliers).
It can be used to accurately transform disparate sets of data into a similar range regardless of the original dataset's raw/actual range.
But there are a couple of "gotchas" with this technique as well...
First, it still technically does not do any form of range-binding, so it is still technically unbounded (range -∞ to ∞ with a mid-point of 0).
Second, it implicitly assumes that the raw input dataset to be transformed is normally distributed, which won't always be the case in financial markets.
The first "gotcha" is a bit of an annoyance, but isn't a huge issue as we can apply principles of normal distribution to conceptually limit the range by defining a fixed number of standard deviations from the mean. While this doesn't totally solve the "infinite range" problem (a strong enough sudden move can still break out of our "conceptual range" boundaries), the amount of movement needed to achieve that kind of impact will generally be pretty rare.
The bigger challenge is how to deal with the assumption of the input dataset being normally distributed. While most financial markets (and indicators) do tend towards a normal distribution, they are almost never going to match that distribution exactly. So let's dig a bit deeper into how distributions are defined and how things like trending markets can affect them.
Skew (skewness): This is a measure of asymmetry of the bell curve, or put another way, how and in what way the bell curve is disfigured when comparing the 2 halves. The easiest way to visualize this is to draw an imaginary vertical line through the apex of the bell curve, then fold the curve in half along that line. If both halves are exactly the same, the skew is 0 (no skew/perfectly symmetrical) - which is what a normal distribution has (skew = 0). Most financial markets tend to have short, medium, and long-term trends, and these trends will cause the distribution curve to skew in one direction or another. Bullish markets tend to skew to the right (positive), and bearish markets to the left (negative).
Kurtosis: This is a measure of the "tail size" of the bell curve. Another way to state this could be how "flat" or "steep" the bell-shape is. If the bell is steep with a strong drop from the apex (like a steep cliff), it has low kurtosis. If the bell has a shallow, more sweeping drop from the apex (like a tall hill), it has high kurtosis. Translating this to financial markets, kurtosis is generally a metric of volatility, as the bell shape is largely defined by the strength and frequency of outliers: volatile markets tend to have a high level of kurtosis (>3), and stable/consolidating markets tend to have a low level of kurtosis (<3). A normal distribution (our reference) has a kurtosis value of 3.
So to try and bring all that back together, here's a quick recap of the Standardization rescaling method:
The Standardization method has an assumption of a normal distribution of input data by using the mean (average) and standard deviation to handle the transformation
Most financial markets do NOT have a normal distribution (as discussed above), and will have varying degrees of skew and kurtosis
Q: Why are we still favoring the Standardization method over the Normalization method, and how are we accounting for the innate skew and/or kurtosis inherent in most financial markets?
A: Well, since we're only trying to rescale oscillators that by definition have a midpoint of 0, kurtosis isn't a major concern beyond the effect it has on the post-transformation scaling (specifically, the number of standard deviations from the mean we need to include in our "artificially-bound" range definition).
Q: So that answers the question about kurtosis, but what about skew?
A: So - for skew, the answer is in the formula - specifically the mean (average) element. The standard mean calculation assumes a complete dataset and therefore uses a standard (i.e. simple) average, but we're limited by the data history available to us. So we adapted the transformation formula to leverage a moving average that included a weighting element to it so that it favored recent datapoints more heavily than older ones. By making the average component more adaptive, we gained the effect of reducing the skew element by having the average itself be more responsive to recent movements, which significantly reduces the effect historical outliers have on the dataset as a whole. While this is certainly not a perfect solution, we've found that it serves the purpose of rescaling the MACD and AO to a far more well-defined range while still preserving the oscillator behavior and mid-line exceptionally well.
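To make the idea concrete, here is a hedged sketch of that adaptation: the z-score's mean term is replaced with a recency-weighted average so older outliers carry less weight. The choice of WMA, the lengths, and the ±3σ clamp below are illustrative assumptions, not the exact implementation used in this indicator.
//@version=5
indicator("Weighted-Mean Z-Score Sketch")
len = input.int(100, "Standardization lookback")
ao = ta.sma(hl2, 5) - ta.sma(hl2, 34) // Awesome Oscillator (unbounded range)
mu = ta.wma(ao, len) // weighted mean favors recent bars, reducing the skew contributed by old outliers
sigma = ta.stdev(ao, len)
z = (ao - mu) / sigma
bounded = math.max(-3, math.min(3, z)) / 3 // conceptually bind to ±3 sigma, then scale to -1..1
plot(bounded, "Bounded AO", color.orange)
hline(0, "Mid-line")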
The most difficult parts to compensate for are periods where markets have low volatility for an extended period of time - to the point where the oscillators are hovering around the 0/midline (in the case of the AO), or when the oscillator and signal lines converge and remain close to each other (in the case of the MACD). It's during these periods where even our best attempt at ensuring accurate mirrored-behavior when compared to the original can still occasionally lead or lag by a candle.
Note: If this is a make-or-break situation for you or your strategy, then we recommend you do not use any of the included indicators that leverage this kind of bounding technique (the AO and MACD at time of publication) and instead use the TradingView built-in versions!
We know this is a lot to read and digest, so please take your time and feel free to ask questions - we will do our best to answer! And as always, constructive feedback is always welcome!
End-Pointed SSA of Normalized Price Corridor [Loxx]
End-Pointed SSA of Normalized Price Corridor is an end-pointed SSA of normalized input price that outputs a smoothed, normalized oscillator of price. Corridors are added in an attempt to decipher the larger trend direction of price. These corridor trend lines are based on the highs and lows of price. Due to the SSA algorithm, this indicator takes some time to load on the chart, so be patient. You can adjust the lag parameter downward to speed up the indicator load time, but this will also degrade the signal. There are many different ways to use this indicator. It is also Renko chart friendly.
An example of emerging trends (these do not repaint)
What is Singular Spectrum Analysis ( SSA )?
Singular spectrum analysis ( SSA ) is a technique of time series analysis and forecasting. It combines elements of classical time series analysis, multivariate statistics, multivariate geometry, dynamical systems and signal processing. SSA aims at decomposing the original series into a sum of a small number of interpretable components such as a slowly varying trend, oscillatory components and a ‘structureless’ noise. It is based on the singular value decomposition ( SVD ) of a specific matrix constructed upon the time series. Neither a parametric model nor stationarity-type conditions have to be assumed for the time series. This makes SSA a model-free method and hence enables SSA to have a very wide range of applicability.
For our purposes here, we are only concerned with the "Caterpillar" SSA . This methodology was developed in the former Soviet Union independently (the ‘iron curtain effect’) of the mainstream SSA . The main difference between the main-stream SSA and the "Caterpillar" SSA is not in the algorithmic details but rather in the assumptions and in the emphasis in the study of SSA properties. To apply the mainstream SSA , one often needs to assume some kind of stationarity of the time series and think in terms of the "signal plus noise" model (where the noise is often assumed to be ‘red’). In the "Caterpillar" SSA , the main methodological stress is on separability (of one component of the series from another one) and neither the assumption of stationarity nor the model in the form "signal plus noise" are required.
"Caterpillar" SSA
The basic "Caterpillar" SSA algorithm for analyzing one-dimensional time series consists of:
Transformation of the one-dimensional time series to the trajectory matrix by means of a delay procedure (this gives the name to the whole technique);
Singular Value Decomposition of the trajectory matrix;
Reconstruction of the original time series based on a number of selected eigenvectors.
This decomposition initializes forecasting procedures for both the original time series and its components. The method can be naturally extended to multidimensional time series and to image processing.
The method is a powerful and useful tool of time series analysis in meteorology, hydrology, geophysics, climatology and, according to our experience, in economics, biology, physics, medicine and other sciences; that is, where short and long, one-dimensional and multidimensional, stationary and non-stationary, almost deterministic and noisy time series are to be analyzed.
Included
Bar coloring
Signals
Alerts
Loxx's Expanded Source Types
STD-Adaptive T3 [Loxx]
STD-Adaptive T3 is a standard deviation adaptive T3 moving average filter. This indicator acts more like a trend overlay indicator with gradient coloring.
What is the T3 moving average?
Better Moving Averages Tim Tillson
November 1, 1998
Tim Tillson is a software project manager at Hewlett-Packard, with degrees in Mathematics and Computer Science. He has privately traded options and equities for 15 years.
Introduction
"Digital filtering includes the process of smoothing, predicting, differentiating, integrating, separation of signals, and removal of noise from a signal. Thus many people who do such things are actually using digital filters without realizing that they are; being unacquainted with the theory, they neither understand what they have done nor the possibilities of what they might have done."
This quote from R. W. Hamming applies to the vast majority of indicators in technical analysis . Moving averages, be they simple, weighted, or exponential, are lowpass filters; low frequency components in the signal pass through with little attenuation, while high frequencies are severely reduced.
"Oscillator" type indicators (such as MACD , Momentum, Relative Strength Index ) are another type of digital filter called a differentiator.
Tushar Chande has observed that many popular oscillators are highly correlated, which is sensible because they are trying to measure the rate of change of the underlying time series, i.e., are trying to be the first and second derivatives we all learned about in Calculus.
We use moving averages (lowpass filters) in technical analysis to remove the random noise from a time series, to discern the underlying trend or to determine prices at which we will take action. A perfect moving average would have two attributes:
It would be smooth, not sensitive to random noise in the underlying time series. Another way of saying this is that its derivative would not spuriously alternate between positive and negative values.
It would not lag behind the time series it is computed from. Lag, of course, produces late buy or sell signals that kill profits.
The only way one can compute a perfect moving average is to have knowledge of the future, and if we had that, we would buy one lottery ticket a week rather than trade!
Having said this, we can still improve on the conventional simple, weighted, or exponential moving averages. Here's how:
Two Interesting Moving Averages
We will examine two benchmark moving averages based on Linear Regression analysis.
In both cases, a Linear Regression line of length n is fitted to price data.
I call the first moving average ILRS, which stands for Integral of Linear Regression Slope. One simply integrates the slope of a linear regression line as it is successively fitted in a moving window of length n across the data, with the constant of integration being a simple moving average of the first n points. Put another way, the derivative of ILRS is the linear regression slope. Note that ILRS is not the same as a SMA ( simple moving average ) of length n, which is actually the midpoint of the linear regression line as it moves across the data.
We can measure the lag of moving averages with respect to a linear trend by computing how they behave when the input is a line with unit slope. Both SMA (n) and ILRS(n) have lag of n/2, but ILRS is much smoother than SMA .
Our second benchmark moving average is well known, called EPMA or End Point Moving Average. It is the endpoint of the linear regression line of length n as it is fitted across the data. EPMA hugs the data more closely than a simple or exponential moving average of the same length. The price we pay for this is that it is much noisier (less smooth) than ILRS, and it also has the annoying property that it overshoots the data when linear trends are present.
However, EPMA has a lag of 0 with respect to linear input! This makes sense because a linear regression line will fit linear input perfectly, and the endpoint of the LR line will be on the input line.
These two moving averages frame the tradeoffs that we are facing. On one extreme we have ILRS, which is very smooth and has considerable phase lag. EPMA has 0 phase lag, but is too noisy and overshoots. We would like to construct a better moving average which is as smooth as ILRS, but runs closer to where EPMA lies, without the overshoot.
An easy way to attempt this is to split the difference, i.e. use (ILRS(n)+EPMA(n))/2. This will give us a moving average (call it IE /2) which runs in between the two, has phase lag of n/4 but still inherits considerable noise from EPMA. IE /2 is inspirational, however. Can we build something that is comparable, but smoother? Figure 1 shows ILRS, EPMA, and IE /2.
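A rough Pine Script sketch of the two benchmark averages and their midpoint IE/2 (the length and the exact seeding of the integration constant are assumptions, not taken from the article):
//@version=5
indicator("ILRS / EPMA / IE2 Sketch", overlay=true)
n = input.int(20, "Length")
epma = ta.linreg(close, n, 0) // endpoint of the linear regression line
slope = ta.linreg(close, n, 0) - ta.linreg(close, n, 1) // per-bar linear regression slope
var float ilrs = na
ilrs := na(ilrs[1]) ? ta.sma(close, n) : ilrs[1] + slope // integrate the slope, seeded with an SMA of the first n points
ie2 = (ilrs + epma) / 2
plot(ilrs, "ILRS", color.blue)
plot(epma, "EPMA", color.red)
plot(ie2, "IE/2", color.green)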
Filter Techniques
Any thoughtful student of filter theory (or resolute experimenter) will have noticed that you can improve the smoothness of a filter by running it through itself multiple times, at the cost of increasing phase lag.
There is a complementary technique (called twicing by J.W. Tukey) which can be used to improve phase lag. If L stands for the operation of running data through a low pass filter, then twicing can be described by:
L' = L(time series) + L(time series - L(time series))
That is, we add a moving average of the difference between the input and the moving average to the moving average. This is algebraically equivalent to:
2L-L(L)
This is the Double Exponential Moving Average or DEMA , popularized by Patrick Mulloy in TASAC (January/February 1994).
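A short Pine Script sketch of twicing with an EMA as the low-pass filter L (the length is an assumption):
//@version=5
indicator("Twicing (DEMA) Sketch", overlay=true)
n = input.int(21, "Length")
l1 = ta.ema(close, n) // L(time series)
dema = l1 + ta.ema(close - l1, n) // L' = L + L(time series - L), algebraically 2L - L(L)
plot(dema, "DEMA", color.orange)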
In our taxonomy, DEMA has some phase lag (although it exponentially approaches 0) and is somewhat noisy, comparable to IE /2 indicator.
We will use these two techniques to construct our better moving average, after we explore the first one a little more closely.
Fixing Overshoot
An n-day EMA has smoothing constant alpha=2/(n+1) and a lag of (n-1)/2.
Thus EMA (3) has lag 1, and EMA (11) has lag 5. Figure 2 shows that, if I am willing to incur 5 days of lag, I get a smoother moving average if I run EMA (3) through itself 5 times than if I just take EMA (11) once.
This suggests that if EPMA and DEMA have 0 or low lag, why not run fast versions (eg DEMA (3)) through themselves many times to achieve a smooth result? The problem is that multiple runs through these filters increase their tendency to overshoot the data, giving an unusable result. This is because the amplitude response of DEMA and EPMA is greater than 1 at certain frequencies, giving a gain of much greater than 1 at these frequencies when run through themselves multiple times. Figure 3 shows DEMA (7) and EPMA(7) run through themselves 3 times. DEMA^3 has serious overshoot, and EPMA^3 is terrible.
The solution to the overshoot problem is to recall what we are doing with twicing:
DEMA (n) = EMA (n) + EMA (time series - EMA (n))
The second term is adding, in effect, a smooth version of the derivative to the EMA to achieve DEMA . The derivative term determines how hot the moving average's response to linear trends will be. We need to simply turn down the volume to achieve our basic building block:
EMA (n) + EMA (time series - EMA (n))*.7;
This is algebraically the same as:
EMA (n)*1.7-EMA( EMA (n))*.7;
I have chosen .7 as my volume factor, but the general formula (which I call "Generalized Dema") is:
GD (n,v) = EMA (n)*(1+v)-EMA( EMA (n))*v,
Where v ranges between 0 and 1. When v=0, GD is just an EMA , and when v=1, GD is DEMA . In between, GD is a cooler DEMA . By using a value for v less than 1 (I like .7), we cure the multiple DEMA overshoot problem, at the cost of accepting some additional phase delay. Now we can run GD through itself multiple times to define a new, smoother moving average T3 that does not overshoot the data:
T3(n) = GD ( GD ( GD (n)))
In filter theory parlance, T3 is a six-pole non-linear Kalman filter. Kalman filters are ones which use the error (in this case (time series - EMA (n)) to correct themselves. In Technical Analysis , these are called Adaptive Moving Averages; they track the time series more aggressively when it is making large moves.
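Putting the GD and T3 formulas together, a minimal Pine Script sketch might look like the following; the volume factor 0.7 comes from the article, while the length of 21 is an illustrative assumption rather than this indicator's default.
//@version=5
indicator("T3 Sketch", overlay=true)
n = input.int(21, "Length")
v = input.float(0.7, "Volume factor", minval=0, maxval=1)
// GD(n,v) = EMA(n)*(1+v) - EMA(EMA(n))*v
gd(src, len, vf) =>
    e1 = ta.ema(src, len)
    e2 = ta.ema(e1, len)
    e1 * (1 + vf) - e2 * vf
t3 = gd(gd(gd(close, n, v), n, v), n, v) // T3(n) = GD(GD(GD(n)))
plot(t3, "T3", color.fuchsia)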
Included
Bar coloring
Loxx's Expanded Source Types
Softmax Normalized T3 Histogram [Loxx]
Softmax Normalized T3 Histogram is a T3 moving average that is morphed into a normalized oscillator from -1 to 1.
What is the Softmax function?
The softmax function, also known as softargmax: or normalized exponential function, converts a vector of K real numbers into a probability distribution of K possible outcomes. It is a generalization of the logistic function to multiple dimensions, and used in multinomial logistic regression. The softmax function is often used as the last activation function of a neural network to normalize the output of a network to a probability distribution over predicted output classes, based on Luce's choice axiom.
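For reference, here is a self-contained Pine Script sketch of the softmax function itself, applied to a purely hypothetical three-element array; this only illustrates the formula described above and is not how the indicator feeds its T3 data.
//@version=5
indicator("Softmax Sketch")
// softmax(x_i) = exp(x_i) / sum_j exp(x_j)
softmax(x) =>
    m = array.max(x) // subtract the max for numerical stability (does not change the result)
    sumExp = 0.0
    for v in x
        sumExp += math.exp(v - m)
    out = array.new<float>()
    for v in x
        array.push(out, math.exp(v - m) / sumExp)
    out
var demo = array.from(1.0, 2.0, 3.0) // hypothetical inputs
probs = softmax(demo)
plot(array.get(probs, 2), "P(third element)") // ≈ 0.665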
What is the T3 moving average?
Better Moving Averages Tim Tillson
November 1, 1998
Tim Tillson is a software project manager at Hewlett-Packard, with degrees in Mathematics and Computer Science. He has privately traded options and equities for 15 years.
Introduction
"Digital filtering includes the process of smoothing, predicting, differentiating, integrating, separation of signals, and removal of noise from a signal. Thus many people who do such things are actually using digital filters without realizing that they are; being unacquainted with the theory, they neither understand what they have done nor the possibilities of what they might have done."
This quote from R. W. Hamming applies to the vast majority of indicators in technical analysis . Moving averages, be they simple, weighted, or exponential, are lowpass filters; low frequency components in the signal pass through with little attenuation, while high frequencies are severely reduced.
"Oscillator" type indicators (such as MACD , Momentum, Relative Strength Index ) are another type of digital filter called a differentiator.
Tushar Chande has observed that many popular oscillators are highly correlated, which is sensible because they are trying to measure the rate of change of the underlying time series, i.e., are trying to be the first and second derivatives we all learned about in Calculus.
We use moving averages (lowpass filters) in technical analysis to remove the random noise from a time series, to discern the underlying trend or to determine prices at which we will take action. A perfect moving average would have two attributes:
It would be smooth, not sensitive to random noise in the underlying time series. Another way of saying this is that its derivative would not spuriously alternate between positive and negative values.
It would not lag behind the time series it is computed from. Lag, of course, produces late buy or sell signals that kill profits.
The only way one can compute a perfect moving average is to have knowledge of the future, and if we had that, we would buy one lottery ticket a week rather than trade!
Having said this, we can still improve on the conventional simple, weighted, or exponential moving averages. Here's how:
Two Interesting Moving Averages
We will examine two benchmark moving averages based on Linear Regression analysis.
In both cases, a Linear Regression line of length n is fitted to price data.
I call the first moving average ILRS, which stands for Integral of Linear Regression Slope. One simply integrates the slope of a linear regression line as it is successively fitted in a moving window of length n across the data, with the constant of integration being a simple moving average of the first n points. Put another way, the derivative of ILRS is the linear regression slope. Note that ILRS is not the same as an SMA (simple moving average) of length n, which is actually the midpoint of the linear regression line as it moves across the data.
We can measure the lag of moving averages with respect to a linear trend by computing how they behave when the input is a line with unit slope. Both SMA (n) and ILRS(n) have lag of n/2, but ILRS is much smoother than SMA .
Our second benchmark moving average is well known, called EPMA or End Point Moving Average. It is the endpoint of the linear regression line of length n as it is fitted across the data. EPMA hugs the data more closely than a simple or exponential moving average of the same length. The price we pay for this is that it is much noisier (less smooth) than ILRS, and it also has the annoying property that it overshoots the data when linear trends are present.
However, EPMA has a lag of 0 with respect to linear input! This makes sense because a linear regression line will fit linear input perfectly, and the endpoint of the LR line will be on the input line.
These two moving averages frame the tradeoffs that we are facing. On one extreme we have ILRS, which is very smooth and has considerable phase lag. EPMA has 0 phase lag, but is too noisy and overshoots. We would like to construct a better moving average which is as smooth as ILRS, but runs closer to where EPMA lies, without the overshoot.
An easy way to attempt this is to split the difference, i.e. use (ILRS(n)+EPMA(n))/2. This will give us a moving average (call it IE/2) which runs in between the two, has a phase lag of n/4, but still inherits considerable noise from EPMA. IE/2 is inspirational, however. Can we build something that is comparable, but smoother? Figure 1 shows ILRS, EPMA, and IE/2.
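To make the two benchmarks concrete, here is a minimal Python sketch of ILRS, EPMA, and IE/2 over a rolling window of length n (a pandas-based approximation of the description above, not the original code):

import numpy as np
import pandas as pd

def rolling_linreg(price: pd.Series, n: int):
    """Return the rolling linear-regression slope and endpoint over a window of length n."""
    x = np.arange(n)
    def slope(w):
        return np.polyfit(x, w, 1)[0]
    def endpoint(w):
        b, a = np.polyfit(x, w, 1)      # b = slope, a = intercept
        return a + b * (n - 1)          # value of the fitted line at the last bar of the window
    return (price.rolling(n).apply(slope, raw=True),
            price.rolling(n).apply(endpoint, raw=True))

def ilrs(price: pd.Series, n: int) -> pd.Series:
    slopes, _ = rolling_linreg(price, n)
    # Integrate the slope; the constant of integration is the SMA of the first n points.
    return slopes.cumsum() + price.iloc[:n].mean()

def epma(price: pd.Series, n: int) -> pd.Series:
    _, endpoints = rolling_linreg(price, n)
    return endpoints

def ie2(price: pd.Series, n: int) -> pd.Series:
    return (ilrs(price, n) + epma(price, n)) / 2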
Filter Techniques
Any thoughtful student of filter theory (or resolute experimenter) will have noticed that you can improve the smoothness of a filter by running it through itself multiple times, at the cost of increasing phase lag.
There is a complementary technique (called twicing by J.W. Tukey) which can be used to improve phase lag. If L stands for the operation of running data through a low pass filter, then twicing can be described by:
L' = L(time series) + L(time series - L(time series))
That is, we add a moving average of the difference between the input and the moving average to the moving average. This is algebraically equivalent to:
2L-L(L)
This is the Double Exponential Moving Average or DEMA, popularized by Patrick Mulloy in TASC (January/February 1994).
In our taxonomy, DEMA has some phase lag (although it exponentially approaches 0) and is somewhat noisy, comparable to the IE/2 indicator.
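In code, twicing is a one-liner once an EMA routine is available; a minimal Python sketch, using pandas' exponentially weighted mean as the EMA:

import pandas as pd

def ema(s: pd.Series, n: int) -> pd.Series:
    return s.ewm(span=n, adjust=False).mean()

def dema(price: pd.Series, n: int) -> pd.Series:
    e = ema(price, n)
    # Twicing: add a smoothed version of the residual (price - EMA) back to the EMA.
    return e + ema(price - e, n)   # algebraically the same as 2*EMA(n) - EMA(EMA(n))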
We will use these two techniques to construct our better moving average, after we explore the first one a little more closely.
Fixing Overshoot
An n-day EMA has smoothing constant alpha=2/(n+1) and a lag of (n-1)/2.
Thus EMA (3) has lag 1, and EMA (11) has lag 5. Figure 2 shows that, if I am willing to incur 5 days of lag, I get a smoother moving average if I run EMA (3) through itself 5 times than if I just take EMA (11) once.
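This is easy to verify numerically; the sketch below (an illustration, not part of the original article) applies EMA(3) five times and compares its roughness to a single EMA(11) on a noisy linear trend:

import numpy as np
import pandas as pd

def ema(s, n):
    return s.ewm(span=n, adjust=False).mean()

rng = np.random.default_rng(0)
noisy_trend = pd.Series(0.1 * np.arange(500) + rng.normal(0, 1, 500))

once = ema(noisy_trend, 11)     # one pass of EMA(11): lag (11-1)/2 = 5 bars
smooth = noisy_trend
for _ in range(5):              # five passes of EMA(3): 5 * 1 = 5 bars of lag
    smooth = ema(smooth, 3)

# A lower standard deviation of the first difference means a smoother curve.
print(once.diff().std(), smooth.diff().std())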
This suggests that if EPMA and DEMA have 0 or low lag, why not run fast versions (e.g., DEMA(3)) through themselves many times to achieve a smooth result? The problem is that multiple runs through these filters increase their tendency to overshoot the data, giving an unusable result. This is because the amplitude response of DEMA and EPMA is greater than 1 at certain frequencies, giving a gain of much greater than 1 at these frequencies when run through themselves multiple times. Figure 3 shows DEMA(7) and EPMA(7) run through themselves 3 times. DEMA^3 has serious overshoot, and EPMA^3 is terrible.
The solution to the overshoot problem is to recall what we are doing with twicing:
DEMA(n) = EMA(n) + EMA(time series - EMA(n))
The second term is adding, in effect, a smooth version of the derivative to the EMA to achieve DEMA . The derivative term determines how hot the moving average's response to linear trends will be. We need to simply turn down the volume to achieve our basic building block:
EMA(n) + EMA(time series - EMA(n))*.7;
This is algebraically the same as:
EMA(n)*1.7 - EMA(EMA(n))*.7;
I have chosen .7 as my volume factor, but the general formula (which I call "Generalized Dema") is:
GD(n,v) = EMA(n)*(1+v) - EMA(EMA(n))*v,
where v ranges between 0 and 1. When v=0, GD is just an EMA, and when v=1, GD is DEMA. In between, GD is a cooler DEMA. By using a value for v less than 1 (I like .7), we cure the multiple DEMA overshoot problem, at the cost of accepting some additional phase delay. Now we can run GD through itself multiple times to define a new, smoother moving average T3 that does not overshoot the data:
T3(n) = GD(GD(GD(n)))
In filter theory parlance, T3 is a six-pole non-linear Kalman filter. Kalman filters are ones which use the error (in this case, time series - EMA(n)) to correct themselves. In Technical Analysis, these are called Adaptive Moving Averages; they track the time series more aggressively when it is making large moves.
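Putting the pieces together, here is a compact Python sketch of GD and T3 as defined above, with the volume factor v defaulting to 0.7 (the published script may differ in details such as source handling and warm-up):

import pandas as pd

def ema(s: pd.Series, n: int) -> pd.Series:
    return s.ewm(span=n, adjust=False).mean()

def gd(s: pd.Series, n: int, v: float = 0.7) -> pd.Series:
    # Generalized DEMA: v = 0 gives a plain EMA, v = 1 gives DEMA.
    return ema(s, n) * (1 + v) - ema(ema(s, n), n) * v

def t3(s: pd.Series, n: int, v: float = 0.7) -> pd.Series:
    # Run GD through itself three times for a smooth average with reduced overshoot.
    return gd(gd(gd(s, n, v), n, v), n, v)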
Included:
Bar coloring
Signals
Alerts
Loxx's Expanded Source Types
End-pointed SSA of Williams %R [Loxx]
End-pointed SSA of Williams %R is an indicator that runs the Williams %R calculation through a Singular Spectrum Analysis (SSA) algorithm to derive a smoother final output. The reduction in noise compared to the traditional Williams %R is significant.
What is Williams %R?
Williams %R , also known as the Williams Percent Range, is a type of momentum indicator that moves between 0 and -100 and measures overbought and oversold levels. The Williams %R may be used to find entry and exit points in the market. The indicator is very similar to the Stochastic oscillator and is used in the same way. It was developed by Larry Williams and it compares a stock’s closing price to the high-low range over a specific period, typically 14 days or periods.
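For reference, the raw Williams %R that feeds the SSA smoothing can be sketched in a few lines of Python:

import pandas as pd

def williams_r(high: pd.Series, low: pd.Series, close: pd.Series, n: int = 14) -> pd.Series:
    hh = high.rolling(n).max()
    ll = low.rolling(n).min()
    # Ranges from 0 (close at the period high) down to -100 (close at the period low).
    return -100 * (hh - close) / (hh - ll)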
What is Singular Spectrum Analysis ( SSA )?
Singular spectrum analysis ( SSA ) is a technique of time series analysis and forecasting. It combines elements of classical time series analysis, multivariate statistics, multivariate geometry, dynamical systems and signal processing. SSA aims at decomposing the original series into a sum of a small number of interpretable components such as a slowly varying trend, oscillatory components and a ‘structureless’ noise. It is based on the singular value decomposition ( SVD ) of a specific matrix constructed upon the time series. Neither a parametric model nor stationarity-type conditions have to be assumed for the time series. This makes SSA a model-free method and hence enables SSA to have a very wide range of applicability.
For our purposes here, we are only concerned with the "Caterpillar" SSA . This methodology was developed in the former Soviet Union independently (the ‘iron curtain effect’) of the mainstream SSA . The main difference between the main-stream SSA and the "Caterpillar" SSA is not in the algorithmic details but rather in the assumptions and in the emphasis in the study of SSA properties. To apply the mainstream SSA , one often needs to assume some kind of stationarity of the time series and think in terms of the "signal plus noise" model (where the noise is often assumed to be ‘red’). In the "Caterpillar" SSA , the main methodological stress is on separability (of one component of the series from another one) and neither the assumption of stationarity nor the model in the form "signal plus noise" are required.
"Caterpillar" SSA
The basic "Caterpillar" SSA algorithm for analyzing one-dimensional time series consists of:
Transformation of the one-dimensional time series to the trajectory matrix by means of a delay procedure (this gives the name to the whole technique);
Singular Value Decomposition of the trajectory matrix;
Reconstruction of the original time series based on a number of selected eigenvectors.
This decomposition initializes forecasting procedures for both the original time series and its components. The method can be naturally extended to multidimensional time series and to image processing.
The method is a powerful and useful tool of time series analysis in meteorology, hydrology, geophysics, climatology and, according to our experience, in economics, biology, physics, medicine and other sciences; that is, where short and long, one-dimensional and multidimensional, stationary and non-stationary, almost deterministic and noisy time series are to be analyzed.
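As a generic illustration of the three steps above (embedding, SVD, and reconstruction from the leading components), here is a minimal Python sketch of an SSA reconstruction; it is not the exact routine used in the script:

import numpy as np

def ssa_reconstruct(x, L, r):
    """Reconstruct x from the r leading SSA components, using window length L."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    K = N - L + 1
    # 1) Embedding: build the L x K trajectory (Hankel) matrix from lagged copies of the series.
    X = np.column_stack([x[i:i + L] for i in range(K)])
    # 2) Singular Value Decomposition of the trajectory matrix.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = (U[:, :r] * s[:r]) @ Vt[:r, :]   # keep only the r leading components
    # 3) Reconstruction: average over the anti-diagonals to return to a one-dimensional series.
    out = np.zeros(N)
    counts = np.zeros(N)
    for i in range(L):
        for j in range(K):
            out[i + j] += Xr[i, j]
            counts[i + j] += 1
    return out / counts

# Example (hypothetical inputs): trend = ssa_reconstruct(closes, L=30, r=2)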
Included:
Bar coloring
Alerts
Signals
Loxx's Expanded Source Types
Related Williams %R Indicators
Williams %R on Chart w/ Dynamic Zones
Williams %R w/ Bollinger Bands
Intermediate Williams %R w/ Discontinued Signal Lines
Related SSA Indicators
End-pointed SSA of FDASMA
End-pointed SSA of Normalized Price Oscillator
End-pointed SSA of FDASMA [Loxx]
End-pointed SSA of FDASMA is a modification of the Fractal-Dimension-Adaptive SMA (FDASMA) using End-Pointed Singular Spectrum Analysis. This is a multilayer adaptive indicator.
What is the Fractal Dimension Index?
The goal of the fractal dimension index is to determine whether the market is trending or in a trading range. It does not measure the direction of the trend. A value less than 1.5 indicates that the price series is persistent or that the market is trending. Lower values of the FDI indicate a stronger trend. A value greater than 1.5 indicates that the market is in a trading range and is acting in a more random fashion.
See here for more info:
Fractal-Dimension-Adaptive SMA (FDASMA) w/ DSL
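The linked FDASMA script defines the exact calculation; as a rough illustration of the idea, the sketch below uses the FRAMA-style fractal dimension estimate popularized by John Ehlers, which may differ from the formula actually used in FDASMA:

import numpy as np
import pandas as pd

def fractal_dimension(high: pd.Series, low: pd.Series, n: int) -> pd.Series:
    # Values near 1 indicate a trending (persistent) series, values near 2 a choppier one.
    half = n // 2
    n1 = (high.rolling(half).max() - low.rolling(half).min()) / half                              # recent half of the window
    n2 = (high.shift(half).rolling(half).max() - low.shift(half).rolling(half).min()) / half      # earlier half of the window
    n3 = (high.rolling(n).max() - low.rolling(n).min()) / n                                       # whole window
    return (np.log(n1 + n2) - np.log(n3)) / np.log(2)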
What is Singular Spectrum Analysis ( SSA )?
Singular spectrum analysis ( SSA ) is a technique of time series analysis and forecasting. It combines elements of classical time series analysis, multivariate statistics, multivariate geometry, dynamical systems and signal processing. SSA aims at decomposing the original series into a sum of a small number of interpretable components such as a slowly varying trend, oscillatory components and a ‘structureless’ noise. It is based on the singular value decomposition ( SVD ) of a specific matrix constructed upon the time series. Neither a parametric model nor stationarity-type conditions have to be assumed for the time series. This makes SSA a model-free method and hence enables SSA to have a very wide range of applicability.
For our purposes here, we are only concerned with the "Caterpillar" SSA . This methodology was developed in the former Soviet Union independently (the ‘iron curtain effect’) of the mainstream SSA . The main difference between the main-stream SSA and the "Caterpillar" SSA is not in the algorithmic details but rather in the assumptions and in the emphasis in the study of SSA properties. To apply the mainstream SSA , one often needs to assume some kind of stationarity of the time series and think in terms of the "signal plus noise" model (where the noise is often assumed to be ‘red’). In the "Caterpillar" SSA , the main methodological stress is on separability (of one component of the series from another one) and neither the assumption of stationarity nor the model in the form "signal plus noise" are required.
"Caterpillar" SSA
The basic "Caterpillar" SSA algorithm for analyzing one-dimensional time series consists of:
Transformation of the one-dimensional time series to the trajectory matrix by means of a delay procedure (this gives the name to the whole technique);
Singular Value Decomposition of the trajectory matrix;
Reconstruction of the original time series based on a number of selected eigenvectors.
This decomposition initializes forecasting procedures for both the original time series and its components. The method can be naturally extended to multidimensional time series and to image processing.
The method is a powerful and useful tool of time series analysis in meteorology, hydrology, geophysics, climatology and, according to our experience, in economics, biology, physics, medicine and other sciences; that is, where short and long, one-dimensional and multidimensional, stationary and non-stationary, almost deterministic and noisy time series are to be analyzed.
Included:
Bar coloring
Alerts
Signals
Loxx's Expanded Source Types
SSA of Price [Loxx]
SSA of Price is an indicator that runs an SSA calculation on price to derive the final output. This indicator also serves to introduce the concept of SSA to the Pine Coder community. The data returned from this algorithm is an array of modeled values for the past X bars. Unlike the end-pointed SSA posted previously, this version pulls the modeled data from the output array and draws a line backward from the current bar. The indicator recalculates, so past observations aren't very useful; the current observation is useful, however, since the current bar is index 0 of the output array, which makes it the end-pointed value.
What is Singular Spectrum Analysis ( SSA )?
Singular spectrum analysis ( SSA ) is a technique of time series analysis and forecasting. It combines elements of classical time series analysis, multivariate statistics, multivariate geometry, dynamical systems and signal processing. SSA aims at decomposing the original series into a sum of a small number of interpretable components such as a slowly varying trend, oscillatory components and a ‘structureless’ noise. It is based on the singular value decomposition ( SVD ) of a specific matrix constructed upon the time series. Neither a parametric model nor stationarity-type conditions have to be assumed for the time series. This makes SSA a model-free method and hence enables SSA to have a very wide range of applicability.
For our purposes here, we are only concerned with the "Caterpillar" SSA . This methodology was developed in the former Soviet Union independently (the ‘iron curtain effect’) of the mainstream SSA . The main difference between the main-stream SSA and the "Caterpillar" SSA is not in the algorithmic details but rather in the assumptions and in the emphasis in the study of SSA properties. To apply the mainstream SSA , one often needs to assume some kind of stationarity of the time series and think in terms of the "signal plus noise" model (where the noise is often assumed to be ‘red’). In the "Caterpillar" SSA , the main methodological stress is on separability (of one component of the series from another one) and neither the assumption of stationarity nor the model in the form "signal plus noise" are required.
"Caterpillar" SSA
The basic "Caterpillar" SSA algorithm for analyzing one-dimensional time series consists of:
Transformation of the one-dimensional time series to the trajectory matrix by means of a delay procedure (this gives the name to the whole technique);
Singular Value Decomposition of the trajectory matrix;
Reconstruction of the original time series based on a number of selected eigenvectors.
This decomposition initializes forecasting procedures for both the original time series and its components. The method can be naturally extended to multidimensional time series and to image processing.
The method is a powerful and useful tool of time series analysis in meteorology, hydrology, geophysics, climatology and, according to our experience, in economics, biology, physics, medicine and other sciences; that is, where short and long, one-dimensional and multidimensional, stationary and non-stationary, almost deterministic and noisy time series are to be analyzed.
Included:
Bar coloring
Alerts
Signals
Loxx's Expanded Source Types
End-pointed SSA of Normalized Price Oscillator [Loxx]
End-pointed SSA of Normalized Price Oscillator is an indicator that converts the source price into a normalized oscillator and then runs an SSA calculation to derive a smoother final output. This indicator also serves to introduce the concept of SSA to the Pine Coder community. The data returned from this algorithm is an array of modeled values for the past X bars. Rather than using the whole array, we use only the end-pointed value, which is the first value of the array at index 0.
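The description does not spell out the normalization; one simple possibility is a rolling min-max rescale of the source into the -1 to 1 range before the SSA step (a sketch under that assumption, not the script's exact method):

import pandas as pd

def normalized_price_oscillator(src: pd.Series, n: int) -> pd.Series:
    lo = src.rolling(n).min()
    hi = src.rolling(n).max()
    # Map price into [-1, 1] within the rolling window (hypothetical normalization).
    return 2 * (src - lo) / (hi - lo) - 1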
What is Singular Spectrum Analysis (SSA)?
Singular spectrum analysis (SSA) is a technique of time series analysis and forecasting. It combines elements of classical time series analysis, multivariate statistics, multivariate geometry, dynamical systems and signal processing. SSA aims at decomposing the original series into a sum of a small number of interpretable components such as a slowly varying trend, oscillatory components and a ‘structureless’ noise. It is based on the singular value decomposition (SVD) of a specific matrix constructed upon the time series. Neither a parametric model nor stationarity-type conditions have to be assumed for the time series. This makes SSA a model-free method and hence enables SSA to have a very wide range of applicability.
For our purposes here, we are only concerned with the "Caterpillar" SSA. This methodology was developed in the former Soviet Union independently (the ‘iron curtain effect’) of the mainstream SSA. The main difference between the main-stream SSA and the "Caterpillar" SSA is not in the algorithmic details but rather in the assumptions and in the emphasis in the study of SSA properties. To apply the mainstream SSA, one often needs to assume some kind of stationarity of the time series and think in terms of the "signal plus noise" model (where the noise is often assumed to be ‘red’). In the "Caterpillar" SSA, the main methodological stress is on separability (of one component of the series from another one) and neither the assumption of stationarity nor the model in the form "signal plus noise" are required.
"Caterpillar" SSA
The basic "Caterpillar" SSA algorithm for analyzing one-dimensional time series consists of:
Transformation of the one-dimensional time series to the trajectory matrix by means of a delay procedure (this gives the name to the whole technique);
Singular Value Decomposition of the trajectory matrix;
Reconstruction of the original time series based on a number of selected eigenvectors.
This decomposition initializes forecasting procedures for both the original time series and its components. The method can be naturally extended to multidimensional time series and to image processing.
The method is a powerful and useful tool of time series analysis in meteorology, hydrology, geophysics, climatology and, according to our experience, in economics, biology, physics, medicine and other sciences; that is, where short and long, one-dimensional and multidimensional, stationary and non-stationary, almost deterministic and noisy time series are to be analyzed.
Included:
Bar coloring
Alerts
Signals
Loxx's Expanded Source Types