Dividend Calendar (Zeiierman)
█ Overview
The Dividend Calendar is a financial tool designed for investors and analysts in the stock market. Its primary function is to provide a schedule of expected dividend payouts from various companies.
Dividends, which are portions of a company's earnings distributed to shareholders, represent a return on their investment. This calendar is particularly crucial for investors who prioritize dividend income, as it enables them to plan and manage their investment strategies with greater effectiveness. By offering a comprehensive overview of when dividends are due, the Dividend Calendar aids in informed decision-making, allowing investors to time their purchases and sales of stocks to optimize their dividend income. Additionally, it can be a valuable tool for forecasting cash flow and assessing the financial health and dividend-paying consistency of different companies.
█ How to Use
Dividend Yield Analysis:
By tracking dividend growth and payouts, traders can identify stocks with attractive dividend yields. This is particularly useful for income-focused investors who prioritize steady cash flow from their investments.
Income Planning:
For those relying on dividends as a source of income, the calendar helps in forecasting income.
Trend Identification:
Analyzing the growth rates of dividends helps in identifying long-term trends in a company's financial health. Consistently increasing dividends can be a sign of a company's strong financial position, while decreasing dividends might signal potential issues.
Portfolio Diversification:
The tool can assist in diversifying a portfolio by identifying a range of dividend-paying stocks across different sectors. This can help mitigate risk as different sectors may react differently to market conditions.
Timing Investments:
For those who follow a dividend capture strategy, this indicator can be invaluable. It can help in timing the buying and selling of stocks around their ex-dividend dates to maximize dividend income.
█ How it Works
This script is a comprehensive tool for tracking and analyzing stock dividend data. It calculates growth rates, monthly and yearly totals, and allows for custom date handling. Structured to be visually informative, it provides tables and alerts for the easy monitoring of dividend-paying stocks.
Data Retrieval and Estimation: It fetches dividend payout times and amounts for a list of stocks. The script also estimates future values based on historical data.
Growth Analysis: It calculates the average growth rate of dividend payments for each stock, providing insights into dividend consistency and growth over time.
Summation and Aggregation: The script sums up dividends on a monthly and yearly basis, allowing for a clear view of total payouts.
Customization and Alerts: Users can input custom months for dividend tracking. The script also generates alerts for upcoming or current dividend payouts.
Visualization: It produces various tables and visual representations, including full calendar views and income tables, to display the dividend data in an easily understandable format.
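For context on the data retrieval step above, dividend amounts can be fetched natively in Pine with request.dividends(). A minimal sketch (the symbol and field here are illustrative, not the script's actual implementation):
//@version=5
indicator("Dividend data sketch")
// Pull the gross dividend paid on each bar; the symbol is illustrative.
float div = request.dividends("NASDAQ:AAPL", dividends.gross, barmerge.gaps_off)
plot(div, "Gross dividend", style = plot.style_columns)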
█ Settings
Overview:
Currency:
Description: This setting allows the user to specify the currency in which dividend values are displayed. By default, it's set to USD, but users can change it to their local currency.
Impact: Changing this value alters the currency denomination for all dividend values displayed by the script.
Ex-Date or Pay-Date:
Description: Users can select whether to show the Ex-dividend day or the Actual Payout day.
Impact: This changes the reference date for dividend data, affecting the timing of when dividends are shown as due or paid.
Estimate Forward:
Description: Enables traders to predict future dividends based on historical data.
Impact: When enabled, the script estimates future dividend payments, providing a forward-looking view of potential income.
Dividend Table Design:
Description: Choose between viewing the full dividend calendar, just the cumulative monthly dividend, or a summary view.
Impact: This alters the format and extent of the dividend data displayed, catering to different levels of detail a user might require.
Show Dividend Growth:
Description: Users can enable dividend growth tracking over a specified number of years.
Impact: When enabled, the script displays the growth rate of dividends over the selected number of years, providing insight into dividend trends.
Customize Stocks & User Inputs:
Description: This setting allows users to customize the stocks they track, the number of shares they hold, the dividend payout amount, and the payout months.
Impact: Users can tailor the script to their specific portfolio, making the dividend data more relevant and personalized to their investments.
-----------------
Disclaimer
The information contained in my Scripts/Indicators/Ideas/Algos/Systems does not constitute financial advice or a solicitation to buy or sell any securities of any type. I will not accept liability for any loss or damage, including without limitation any loss of profit, which may arise directly or indirectly from the use of or reliance on such information.
All investments involve risk, and the past performance of a security, industry, sector, market, financial product, trading strategy, backtest, or individual's trading does not guarantee future results or returns. Investors are fully responsible for any investment decisions they make. Such decisions should be based solely on an evaluation of their financial circumstances, investment objectives, risk tolerance, and liquidity needs.
My Scripts/Indicators/Ideas/Algos/Systems are only for educational purposes!
Liquidity Price Depth Chart [LuxAlgo]
The Liquidity Price Depth Chart is a unique indicator inspired by the visual representation of order book depth charts, highlighting sorted prices from bullish and bearish candles located on the chart's visible range, as well as their degree of liquidity.
Note that changing the chart's visible range will recalculate the indicator.
🔶 USAGE
The indicator can be used to visualize sorted bullish/bearish prices (in descending order), with bullish prices being highlighted on the left side of the chart, and bearish prices on the right. Prices are highlighted by dots, and connected by a line.
The displacement of a line relative to the x-axis is an indicator of liquidity, with a higher displacement highlighting prices with more volume.
These can also be identified more easily by keeping only the dots. Visible voids can indicate a price associated with significant volume, or a large price movement if the displacement is more visible along the price axis. These areas could play a key role in future trends.
Additionally, the location of the bullish/bearish prices with the highest volume is highlighted with dotted lines, with the returned horizontal lines being useful as potential support/resistances.
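As a rough sketch of the underlying idea, bullish-bar prices inside the visible range can be collected and sorted like this (assumed logic for illustration, not the published source):
//@version=5
indicator("Visible-range prices sketch", overlay = true)
// Collect bullish-bar closes inside the chart's visible range, then sort
// them in descending order, as the depth chart does.
var array<float> bullPrices = array.new<float>()
if time >= chart.left_visible_bar_time and time <= chart.right_visible_bar_time and close > open
    bullPrices.push(close)
if barstate.islast
    bullPrices.sort(order.descending)
    label.new(bar_index, high, str.tostring(bullPrices.size()) + " bullish prices in range")
This also hints at why changing the visible range recalculates the indicator: chart.left_visible_bar_time and chart.right_visible_bar_time change with it.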
🔹 Liquidity Clusters
Clusters of liquidity can be spotted when the Liquidity Price Depth Chart exhibits more rectangular shapes rather than "V" shapes.
The steepest segments of the shape represent periods of non-stationarity/high volatility, while zones with clustered prices highlight zones of potential liquidity clusters, that is zones where traders accumulate positions.
🔹 Liquidity Sentiment
At the bottom of each area, a percentage is displayed. This percentage indicates whether the traded volume is more often associated with bullish or bearish price variations.
In the chart above we can see that bullish price variations make up 63.89% of the total volume in the visible range.
🔶 SETTINGS
🔹 Bullish Elements
Bullish Price Highest Volume Location: Shows the location of the bullish price variation with the highest associated volume using one horizontal and one vertical line.
Bullish Volume %: Displays the bullish volume percentage at the bottom of the depth chart.
🔹 Bearish Elements
Bearish Price Highest Volume Location: Shows the location of the bearish price variation with the highest associated volume using one horizontal and one vertical line.
Bearish Volume %: Displays the bearish volume percentage at the bottom of the depth chart.
🔹 Misc
Volume % Box Padding: Width of the volume % boxes at the bottom of the Liquidity Price Depth Chart, as a percentage of the chart's visible range.
Intersection Value Functions
Winning entry for the first Pinefest contest. The challenge required providing three functions returning the intersection value between two series source1 and source2 in the event of a cross, crossunder, and crossover.
Feel free to use the code however you like.
🔶 CHALLENGE FUNCTIONS
🔹 crossValue()
//@function Finds intersection value of 2 lines/values if any cross occurs - First function of challenge -> crossValue(source1, source2)
//@param source1 (float) source value 1
//@param source2 (float) source value 2
//@returns Intersection value
example:
value = crossValue(close, close[1])
🔹 crossoverValue()
//@function Finds intersection value of 2 lines/values if crossover occurs - Second function of challenge -> crossoverValue(source1, source2)
//@param source1 (float) source value 1
//@param source2 (float) source value 2
//@returns Intersection value
example:
value = crossoverValue(close, close[1])
🔹 crossunderValue()
//@function Finds intersection value of 2 lines/values if crossunder occurs - Third function of challenge -> crossunderValue(source1, source2)
//@param source1 (float) source value 1
//@param source2 (float) source value 2
//@returns Intersection value
example:
value = crossunderValue(close, close[1])
🔶 DETAILS
A series of values can be displayed as a series of points, where the point location highlights its value, however, it is more common to connect each point with a line to have a continuous aspect.
A line is a geometrical object connecting two points, each having y and x coordinates. A line has a slope controlling its steepness and an intercept indicating where the line crosses an axis. With these elements, we can describe a line as follows:
y = slope × x + intercept
A cross between two series of values occurs when one series becomes greater than or less than the other, while on the previous bar it wasn't.
We are interested in finding the "intersection value", that is the value where two crossing lines are equal. This problem can be approached via linear interpolation.
A simple and direct approach to finding our intersection value is to find the common scaling factor of the slopes of the lines, that is, the multiplicative factor that scales both lines' slopes such that the resulting points are equal.
Given:
A = A1 + m1 × scaling_factor
B = B1 + m2 × scaling_factor
where scaling_factor is the common scaling factor, and m1 and m2 are the slopes:
m1 = A2 - A1
m2 = B2 - B1
Here A1/B1 are the values on the previous bar and A2/B2 the values on the current bar.
In our case, since the horizontal distance between two points is simply 1 bar, each line's slope is equal to its vertical distance (rise).
Under the event of a cross, there exists a scaling_factor satisfying A = B, which allows us to directly compute our intersection value. The solution is given by:
scaling_factor = (B1 - A1)/(m1 - m2)
As such our intersection value can be given by the following equivalent calculations:
(1) A1 + m1 × (B1 - A1)/(m1 - m2)
(2) B1 + m2 × (B1 - A1)/(m1 - m2)
(3) A2 - m1 × (A2 - B2)/(m1 - m2)
(4) B2 - m2 × (A2 - B2)/(m1 - m2)
The proposed functions use the third calculation.
This approach is equivalent to expressions using the classical line equation, with:
slope1 × x + intercept1 = slope2 × x + intercept2
By solving for x, the intersection value is obtained by evaluating either line equation at the obtained x solution.
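Putting the pieces together, calculation (3) can be implemented as a compact Pine function. A minimal sketch (the published functions may differ in detail):
//@version=5
indicator("crossValue() sketch", overlay = true)
// Calculation (3): A2 - m1 × (A2 - B2) / (m1 - m2), where A1/B1 are the
// previous-bar values and A2/B2 the current-bar values.
crossValue(float source1, float source2) =>
    float m1 = source1 - source1[1] // rise of line A
    float m2 = source2 - source2[1] // rise of line B
    ta.cross(source1, source2) ? source1 - m1 * (source1 - source2) / (m1 - m2) : na
plot(crossValue(close, ta.sma(close, 20)), "Intersection value", style = plot.style_circles)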
🔶 APPLICATIONS
The intersection point of two crossing lines might lead to interesting applications and creations, in this section various information/tools derived from the proposed calculations are presented.
This supplementary material is available within the script.
🔹 Intersections As Support/Resistances
The script allows extending the lines of the intersection value when a cross is detected, these extended lines could have applications as support/resistance lines.
🔹 Using The Scaling Factor
The core of the proposed calculation method is the common scaling factor, which can be used to return useful information, such as the position of the cross relative to the x coordinates of a line.
The above image highlights two moving averages (in green and red), the cross-interval areas are highlighted in blue, and the intersection point is highlighted as a blue line.
The pane below shows a bar plot displaying the scaling_factor value:
Values closer to 1 indicate that the cross location is closer to x2 (the right coordinate of the lines), while values closer to 0 indicate that the cross location is closer to x1 (the left coordinate).
🔹 Intersection Matrix
The main proposed functions of this challenge focus on the crossings between two series of values, however, we might be interested in applying this over a collection of series.
We can see in the image above how the lines connecting two points intersect with each other, we can construct a matrix populated with the intersection value of two corresponding lines. If (X, Y) represents the intersection value between lines X and Y we have the following matrix:
       | Line A | Line B | Line C | Line D |
-------|--------|--------|--------|--------|
Line A | | (A, B) | (A, C) | (A, D) |
Line B | (B, A) | | (B, C) | (B, D) |
Line C | (C, A) | (C, B) | | (C, D) |
Line D | (D, A) | (D, B) | (D, C) | |
We can see that the upper triangular part of this matrix is redundant, which is why the script does not compute it. This function is provided in the script as intersectionMatrix :
//@function Return the N * N intersection matrix from an array of values
//@param array_series (array) array of values, requires an array supporting historical referencing
//@returns (matrix) Intersection matrix showing intersection values between all array entries
In the script, we create an intersection matrix from an array containing the outputs of simple moving averages with a period in a specific user set range and can highlight if a simple moving average of a certain period crosses with another moving average with a different period, as well as the intersection value.
🔹 Magnification Glass
Crosses on a chart can be quite small and might require zooming in significantly to see a detailed picture of them. Using the obtained scaling factor allows reconstructing crossing events with a higher resolution.
A simple supplementary zoomIn function is provided to this effect:
//@function Display a higher resolution representation of intersecting lines
//@param source1 (float) source value 1
//@param source2 (float) source value 2
//@param css1 (color) color of source 1 line
//@param css2 (color) color of source 2 line
//@param intersec_css (color) color of intersection line
//@param area_css (color) color of box area
Users can obtain a higher resolution by modifying the provided "Resolution" setting.
The function returns a higher resolution representation of the most recent crosses between two input series, the intersection value is also provided.
AR Forecast Scatterplot [SS]
This is a showcase indicator of my recently released SPTS library (the partner of the SPTS indicator).
This is just to show some of the practical applications of the boring statistical functions contained within the library/SPTS indicator :-).
This is an autoregressive (AR), scatter plot forecaster. What this means is it tags a lag of 1, performs an autoregressive assessment over the desired training time, then uses what it learns over that training time to forecast the likely outcome.
It's not machine learning (I am in the process of creating a version that uses it, but that is taking quite some time to complete), but the model still needs to learn the statistical coefficients that best mimic the current trend.
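To illustrate the core idea, a lag-1 autoregressive forecast can be sketched in a few lines (a simplification of what the library does; the names and training window are assumptions):
//@version=5
indicator("AR(1) sketch", overlay = true)
// Estimate the AR(1) coefficient from the lag-1 autocorrelation over the
// training window, then project one step ahead from the mean.
int train = input.int(100, "Training bars")
float phi = ta.correlation(close, close[1], train)
float mu = ta.sma(close, train)
plot(mu + phi * (close - mu), "AR(1) one-step forecast", color = color.blue)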
As of its current state, this actually surpassed my own expectations. I can show you some QQQ examples:
Example #1:
Prediction:
Actual:
Example #2:
Prediction:
Actual:
Pretty nuts, eh?
Statistics, I'm telling you, its the answer haha.
So how do we determine the train time?
Because this does not use machine learning to control for over/under-representation of the data size (again, I am making a version that does this, but it's a slow process), a quick tip for determining an appropriate train time is to use the TradingView regression trend tool:
When you set the parameters to align with the current, strongest trend, it is more reliable.
You will see that it actually forecasts a move back to the exact top of this trend; that is because it uses the same process as the linear regression trend tool on TradingView.
You can use a bar counter indicator ( such as mine available here ) to calculate the number of bars back for your model training.
You can verify that these parameters are appropriate by looking at the Model Data table (which can be toggled on and off). You want to see both a high correlation and a high R2 value.
Quick note on colour:
Green = the upper confidence predictions (best-case scenario)
Blue = the most likely result
Red = the lower confidence predictions (worst-case scenario)
Hope you enjoy!
Safe trades everyone!
Volume Profile with a few polylines
The base of "Volume Profile with a few polylines" is another script of mine, Volume Profile (Maps).
The structure of maps is used to gather the data; however, the drawing is done with polylines.
This enables coders to draw an entire volume profile, spanning a broader range, with just a few polylines, which in turn makes it possible to draw more "lines" than with line.new() / box.new() alone.
🔶 CONCEPTS
🔹 Polylines
polyline.new creates a new polyline instance and displays it on the chart, sequentially connecting all of the points in the `points` array with line segments.
The segments in the drawing can be straight or curved depending on the `curved` parameter.
In this script, points are connected starting from the bottom. The created line moves up until there is a price level where a volume value needs to be displayed, at which point the line goes left to the corresponding volume value and returns to the same price level at its initial x-position, after which the line continues to rise until all values are displayed.
A polyline can contain a maximum of 10000 points (10K).
Since the line has to go back and forth, each price/volume line takes 3 points.
In the case that 20K bars all have a different price, we would need 60K points, or just 6 polylines. A maximum of 100 polylines can be displayed.
The 3 highest volume values are displayed with line.new(), each with their own colour.
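A minimal sketch of the chunking logic (illustrative prices and widths, not the script's actual profile data):
//@version=5
indicator("Polyline chunking sketch", max_polylines_count = 100)
// Each price level consumes 3 points (out, across, back); start a new
// polyline whenever the 10K point budget would be exceeded.
var array<chart.point> pts = array.new<chart.point>()
if barstate.islast
    pts.clear()
    for i = 0 to 50 // hypothetical 51 price levels
        float y = low + i * syminfo.mintick * 10 // placeholder price level
        int w = i % 7 // placeholder volume width in bars
        pts.push(chart.point.from_index(bar_index, y)) // on the axis
        pts.push(chart.point.from_index(bar_index - w, y)) // out to the volume value
        pts.push(chart.point.from_index(bar_index, y)) // and back
        if pts.size() > 10000 - 3
            polyline.new(pts, line_color = color.new(color.blue, 30))
            pts := array.new<chart.point>()
    polyline.new(pts, line_color = color.new(color.blue, 30))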
🔹 Maps
A map object is a collection that consists of key-value pairs.
Each key is unique and can only appear once. When adding a new value with a key that the map already contains, that value replaces the old value associated with the key.
You can, however, update the value of a particular key, for example by adding volume (value) at the same price (key); this latter technique is used in this script.
Volume is added to the map, associated with a particular price (default close, can be set at high, low, open,...)
When the map already contains the same price (key), the value (volume) is added to the existing volume at the associated price.
A map can contain a maximum of 50K values, which is more than enough to hold 20K bars (Basic 5K - Premium plan 20K), so the whole history can be put into a map.
🔹 Rounding function
This publication contains 2 round functions, which can be used to widen the Volume Profile
Round
• "Round" set at zero -> nothing changes to the source number
• "Round" set below zero -> x digit(s) after the decimal point, starting from the right side, and rounded.
• "Round" set above zero -> x digit(s) before the decimal point, starting from the right side, and rounded.
Example: 123456.789
0->123456.789
1->123456.79
2->123456.8
3->123457
-1->123460
-2->123500
Step
Another option is custom steps.
After setting "Round" to "Step", choose the desired step in price.
Examples
• 2 -> 1234.00, 1236.00, 1238.00, 1240.00
• 5 -> 1230.00, 1235.00, 1240.00, 1245.00
• 100 -> 1200.00, 1300.00, 1400.00, 1500.00
• 0.05 -> 1234.00, 1234.05, 1234.10, 1234.15
•••
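Both modes can be sketched as Pine functions (the helper names are assumed; the script's parameter mapping may differ):
//@version=5
indicator("Round/Step sketch", overlay = true)
// roundTo(x, d): round x to d decimal places; a negative d rounds the
// integer part instead (d = -1 -> tens, d = -2 -> hundreds).
roundTo(float x, int d) =>
    float f = math.pow(10, d)
    math.round(x * f) / f
// toStep(x, step): snap x to a fixed price step.
toStep(float x, float step) =>
    math.round(x / step) * step
plot(roundTo(close, 1), "Rounded close")
plot(toStep(close, 5), "Stepped close")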
🔶 FEATURES
🔹 Volume * currency
Take BTCUSD as an example: relative to USD, 10 volume at a price of 100 is very different from 10 volume at a price of 30000 (1K vs. 300K).
If you want volume to be associated with USD, enable Volume * currency . Volume will then be multiplied by the price:
• 10 volume, 1 BTC = 100 -> 1000
• 10 volume, 1 BTC = 30K -> 300K
Polylines have the attributes curved & closed.
When "curved" is enabled the drawing will connect all points from the `points` array using curved line segments.
When "closed" is enabled the drawing will also connect the first point to the last point from the `points` array, resulting in a closed polyline.
They are disabled by default, but can be enabled:
🔶 DETAILS
🔹 Put
When the map doesn't contain a price, it will be added, using map.put(id, key, value)
In our code:
map.put(originalMap, price, volume)
or
originalMap.put(price, volume)
A key (price) is now associated with a value (volume) -> key : value
Since all keys are unique, we don't have to know a key's position to extract its value; we just need to know the key -> map.get(id, key)
We use map.get() when a certain key already exists in the map and we want to add volume to its associated value.
if originalMap.contains(price)
    originalMap.put(price, originalMap.get(price) + volume)
-> At the last bar, all prices (source) are now associated with volume.
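Putting it together, the accumulation step could look like this self-contained sketch (the fixed step is an assumption; the script derives its bucket size from the Round/Step settings):
//@version=5
indicator("Volume-by-price map sketch")
float step = 0.5 // assumed bucket size
var map<float, float> originalMap = map.new<float, float>()
float price = math.round(close / step) * step // bucket the source price
if originalMap.contains(price)
    originalMap.put(price, originalMap.get(price) + volume) // accumulate
else
    originalMap.put(price, volume) // first entry for this price
if barstate.islast
    label.new(bar_index, high, str.tostring(originalMap.size()) + " price levels")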
🔶 SETTINGS
Source : Set source of choice; default close , can be set as high , low , open , ...
Volume & currency : Enable to multiply volume with price (see Features )
Amount of bars : Set amount of bars which you want to include in the Volume Profile
🔹 Round -> ' Round/Step '
Round -> see Concepts
Step -> see Concepts
🔹 Display Volume Profile
Offset: shifts the Volume Profile (max. 500 bars to the right of last bar, see Features )
Max width Volume Profile: largest volume will be x bars wide, the rest is displayed as a ratio against largest volume (see Features )
Colours
Curved: make lines curved
Closed: connect last with first point
🔶 LIMITATIONS
• Lines won't go further back than the first bar (coded).
• The Volume Profile can be placed a maximum of 500 bars to the right of the last price.
MA Sabres [LuxAlgo]
The "MA Sabres" indicator highlights potential trend reversals based on a moving average direction. Detected reversals are accompanied by an extrapolated "Sabre" looking shape that can be used as support/resistance and as a source of breakouts.
🔶 USAGE
If a selected moving average (MA) continues in the same direction for a certain time, a change in that direction could signify a potential reversal.
In this publication, when a trend change occurs, a sabre-shaped figure is drawn which can be used as support/resistance:
A sabre can be indicative of a direction; however, it can also act as a stop-loss should the price go in the opposite direction:
Or show potential areas of interest:
🔶 DETAILS
This publication will look for a change in direction after the MA went in the same direction during x consecutive bars (settings: " Reversal after x bars in the same direction ").
Then a circle-shaped drawing will be drawn 1 bar back, at the previous high/low, depending on the previous direction.
From there originates a sabre-shaped figure whose tip extends as far as the user-set MA length.
The angle of the "sabre" relies on the ATR of the previous 14 bars.
Less volatility will create a flatter sabre while the opposite is true when there is more volatility in the previous 14 bars.
The sabre is created with the latest feature, polylines, which enables us to connect several points, resulting in a polyline.new() object.
Do note that sabres are offset by one bar to the past to align their locations.
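The direction-change detection can be sketched as follows (a simplified stand-in for the script's internals; x and the MA length are placeholders):
//@version=5
indicator("MA direction change sketch", overlay = true)
float ma = ta.sma(close, 50)
int dir = ma > ma[1] ? 1 : -1
// Count consecutive bars in the same direction.
var int runLen = 0
runLen := dir == nz(dir[1], dir) ? runLen + 1 : 1
// A reversal prints when the direction flips after at least x bars one way.
int x = 5
bool reversal = runLen == 1 and nz(runLen[1]) >= x
plotshape(reversal, "Reversal", shape.circle, location.abovebar, color.orange)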
🔶 SETTINGS
MA Type: SMA, EMA, SMMA (RMA), HullMA, WMA, VWMA, DEMA, TEMA, NONE (off)
Length: this sets the length of the MA and the length of the sabre shape
Previous Trend Duration: After the MA direction is the same for x consecutive bars, the first time the direction changes, a sabre is drawn
Zig-Zag Volume Profile (Bull vs. Bear) [Kioseff Trading]
Hello!
Thank you @Pinecoders and @TradingView for putting polylines in production and making this viable!!
This script "Zig Zag Volume Profile" implements the polyline feature for Pine Script!
Features
Volume Profile anchored to zig zag trends
Bull vs Bear profiles!
Delta x price level
Standard POC and value area lines, in addition to separated POCs and value area lines for bull profiles and bear profiles
Up to 9999 profile rows per zigzag trend
Stylistic options for profiles
Configurable zig zag - profiles generated for small to large trends
Polylines!
This script generates Bull vs. Bear volume profiles for zig zag trends!
The zigzag indicator is configurable as normal; minor and major trend volume profiles are calculable. This indicator can be thought of as "Volume Profile/Delta for Trends".
Up to 9999 volume profile levels (price levels) can be calculated for each profile, thanks to the new polyline feature, allowing for less aggregation / more precision of volume at price and volume delta.
Zig Zag Bull Vs Bear Profiles
The image above shows primary functionality!
Green profiles = buying volume
Red profiles = selling volume
Profiles are generated for each trend identified by the zigzag indicator.
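A common way to split a bar's volume by direction looks like this (a simple sketch; the script's actual split may be more granular, e.g. using lower-timeframe data):
//@version=5
indicator("Bull vs bear volume sketch")
// Assign each bar's volume by candle direction.
float bullVol = close >= open ? volume : 0
float bearVol = close < open ? volume : 0
plot(bullVol, "Bull volume", color = color.green, style = plot.style_columns)
plot(bearVol, "Bear volume", color = color.red, style = plot.style_columns)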
The image above shows the indicator calculating volume delta for specific price blocks on the profile. Aggregate volume delta for the identified trend is displayed over the profile!
The image above shows Bull Profile POC lines and value area lines. Bear Profile POC lines and value area lines are also shown!
All colors and transparencies are configurable to the user's liking :D
Additionally, you can select to have the profiles drawn on opposite sides: Bull Profile on the left and Bear Profile on the right.
For a more traditional look, you can select to draw the Bull & Bear profiles at the same x-point.
The indicator is robust enough to calculate on "long zig zags" and "short zig zags"; curved profiles can also be used!
The image above exemplifies usage of the indicator!
Bull & Bear volume profiles are calculated for trends on the 30-second timeframe.
The image above shows a more "utilitarian" presentation of the profiles. Once more, line and linefill colors/transparencies are all customizable; the indicator can look however you would like it to!
The image above shows key levels, the Bull vs. Bear profile, and volume delta for the current trend!
That's about it :D
This indicator is part of a series titled "Bull vs. Bear" - a suite of profile-like indicators I will be releasing over the coming days. Thanks for checking this out!
Of course, a big thank you to @RicardoSantos for his MathOperator library that I use in every script.
If you have any suggestions please feel free to share!
Volume and Price Z-Score [Multi-Asset] - By Leviathan
This script offers in-depth Z-Score analytics on price and volume for 200 symbols. Utilizing visualizations such as scatter plots, histograms, and heatmaps, it enables traders to uncover potential trade opportunities, discern market dynamics, pinpoint outliers, delve into the relationship between price and volume, and much more.
A Z-Score is a statistical measurement indicating the number of standard deviations a data point deviates from the dataset's mean. Essentially, it provides insight into a value's position relative to the mean of a group of values.
- A Z-Score of zero means the data point is exactly at the mean.
- A positive Z-Score indicates the data point is above the mean.
- A negative Z-Score indicates the data point is below the mean.
For instance, a Z-Score of 1 indicates that the data point is 1 standard deviation above the mean, while a Z-Score of -1 indicates that the data point is 1 standard deviation below the mean. In simple terms, the more extreme the Z-Score of a data point, the more “unusual” it is within a larger context.
If data is normally distributed, the following properties can be observed:
- About 68% of the data will lie within ±1 standard deviation (z-score between -1 and 1).
- About 95% will lie within ±2 standard deviations (z-score between -2 and 2).
- About 99.7% will lie within ±3 standard deviations (z-score between -3 and 3).
Datasets like price and volume (in this context) are most often not normally distributed. While the interpretation in terms of percentage of data lying within certain ranges of z-scores (like the ones mentioned above) won't hold, the z-score can still be a useful measure of how "unusual" a data point is relative to the mean.
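For reference, the basic calculation is straightforward in Pine. A minimal sketch with an assumed 20-bar mean:
//@version=5
indicator("Z-score sketch")
// Z = (x - mean) / stdev over the lookback window.
int len = 20
float zPrice = (close - ta.sma(close, len)) / ta.stdev(close, len)
float zVolume = (volume - ta.sma(volume, len)) / ta.stdev(volume, len)
plot(zPrice, "Z-Price", color = color.teal)
plot(zVolume, "Z-Volume", color = color.orange)
hline(2)
hline(0)
hline(-2)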
The aim of this indicator is to offer a unique way of screening the market for trading opportunities by conveniently visualizing where current volume and price activity stands in relation to the average. It also offers features to observe the convergent/divergent relationships between asset’s price movement and volume, observe a single symbol’s activity compared to the wider market activity and much more.
Here is an overview of a few important settings.
Z-SCORE TYPE
◽️ Z-Score Type: Current Z-Score
Calculates the z-score by comparing current bar’s price and volume data to the mean (moving average with any custom length, default is 20 bars). This indicates how much the current bar’s price and volume data deviates from the average over the specified period. A positive z-score suggests that the current bar's price or volume is above the mean of the last 20 bars (or the custom length set by the user), while a negative z-score means it's below that mean.
Example: Consider an asset whose current price and volume both show deviations from their 20-bar averages. If the price's Z-Score is +1.5 and the volume's Z-Score is +2.0, it means the asset's price is 1.5 standard deviations above its average, and its trading volume is 2 standard deviations above its average. This might suggest a significant upward move with strong trading activity.
◽️ Z-Score Type: Average Z-Score
Calculates the custom-length average of symbol's z-score. Think of it as a smoothed version of the Current Z-Score. Instead of just looking at the z-score calculated on the latest bar, it considers the average behavior over the last few bars. By doing this, it helps reduce sudden jumps and gives a clearer, steadier view of the market.
Example: Instead of a single bar, imagine the average price and volume of an asset over the last 5 bars. If the price's 5-bar average Z-Score is +1.0 and the volume's is +1.5, it tells us that, over these recent bars, both the price and volume have been consistently above their longer-term averages, indicating sustained increase.
◽️ Z-Score Type: Relative Z-Score
Calculates a relative z-score by comparing symbol’s current bar z-score to the mean (average z-score of all symbols in the group). This is essentially a z-score of a z-score, and it helps in understanding how a particular symbol's activity stands out not just in its own historical context, but also in relation to the broader set of symbols being analyzed. In other words, while the primary z-score tells you how unusual a bar's activity is for that specific symbol, the relative z-score informs you how that "unusualness" ranks when compared to the entire group's deviations. This can be particularly useful in identifying symbols that are outliers even among outliers, indicating exceptionally unique behaviors or opportunities.
Example: If one asset's price Z-Score is +2.5 and volume Z-Score is +3.0, but the group's average Z-Scores are +0.5 for price and +1.0 for volume, this asset’s Relative Z-Score would be high and therefore stand out. This means that asset's price and volume activities are notably high, not just by its own standards, but also when compared to other symbols in the group.
DISPLAY TYPE
◽️ Display Type: Scatter Plot
The Scatter Plot is a visual tool designed to represent values for two variables, in this case the Z-Scores of price and volume for multiple symbols. Each symbol has it's own dot with x and y coordinates:
X-Axis: Represents the Z-Score of price. A symbol further to the right indicates a higher positive deviation in its price from its average, while a symbol to the left indicates a negative deviation.
Y-Axis: Represents the Z-Score of volume. A symbol positioned higher up on the plot suggests a higher positive deviation in its trading volume from its average, while one lower down indicates a negative deviation.
Here are some guideline insights of plot positioning:
- Top-Right Quadrant (High Volume-High Price): Symbols in this quadrant indicate a scenario where both the trading volume and price are higher than their respective mean.
- Top-Left Quadrant (High Volume-Low Price): Symbols here reflect high trading volumes but prices lower than the mean.
- Bottom-Left Quadrant (Low Volume-Low Price): Assets in this quadrant have both low trading volume and price compared to their mean.
- Bottom-Right Quadrant (Low Volume-High Price): Symbols positioned here have prices that are higher than their mean, but the trading volume is low compared to the mean.
The plot also integrates a set of concentric squares which serve as visual guides:
- 1st Square (1SD): Encapsulates symbols that have Z-Scores within ±1 standard deviation for both price and volume. Symbols within this square are typically considered to be displaying normal behavior or within expected range.
- 2nd Square (2SD): Encapsulates those with Z-Scores within ±2 standard deviations. Symbols within this boundary, but outside the 1 SD square, indicate a moderate deviation from the norm.
- 3rd Square (3SD): Represents symbols with Z-Scores within ±3 standard deviations. Any symbol outside this square is deemed to be a significant outlier, exhibiting extreme behavior in terms of either its price, its volume, or both.
By assessing the position of symbols relative to these squares, traders can swiftly identify which assets are behaving typically and which are showing unusual activity. This visualization simplifies the process of spotting potential outliers or unique trading opportunities within the market. The farther a symbol is from the center, the more it deviates from its typical behavior.
◽️ Display Type: Columns
In this visualization, z-scores are represented using columns, where each symbol is presented horizontally. Each symbol has two distinct nodes:
- Left Node: Represents the z-score of volume.
- Right Node: Represents the z-score of price.
The height of these nodes can vary along the y-axis between -4 and 4, based on the z-score value:
- Large Positive Columns: Signify a high or positive z-score, indicating that the price or volume is significantly above its average.
- Large Negative Columns: Represent a low or negative z-score, suggesting that the price or volume is considerably below its average.
- Short Columns Near 0: Indicate that the price or volume is close to its mean, showcasing minimal deviation.
This columnar representation provides a clear, intuitive view of how each symbol's price and volume deviate from their respective averages.
◽️ Display Type: Circles
In this visualization style, z-scores are depicted using circles. Each symbol is horizontally aligned and represented by:
- Solid Circle: Represents the z-score of price.
- Transparent Circle: Represents the z-score of volume.
The vertical position of these circles on the y-axis ranges between -4 and 4, reflecting the z-score value:
- Circles Near the Top: Indicate a high or positive z-score, suggesting the price or volume is well above its average.
- Circles Near the Bottom: Represent a low or negative z-score, pointing to the price or volume being notably below its average.
- Circles Around the Midline (0): Highlight that the price or volume is close to its mean, with minimal deviation.
◽️ Display Type: Delta Columns
There's also an option to utilize Z-Score Delta Columns. For each symbol, a single column is presented, depicting the difference between the z-score of price and the z-score of volume.
The z-score delta essentially captures the disparity between how much the price and volume deviate from their respective mean:
- Positive Delta: Indicates that the z-score of price is greater than the z-score of volume. This suggests that the price has deviated more from its average than the volume has from its own average. Such a scenario could point to price movements being more significant or pronounced compared to the changes in volume.
- Negative Delta: Represents that the z-score of volume is higher than the z-score of price. This might mean that there are substantial volume changes, yet the price hasn't moved as dramatically. This can be indicative of potential build-up in trading interest without an equivalent impact on price.
- Delta Close to 0: Means that the z-scores for price and volume are almost equal, indicating their deviations from the average are in sync.
◽️ Display Type: Z-Volume/Z-Price Heatmap
This visualization offers a heatmap either for volume z-scores or price z-scores across all symbols. Here's how it's presented:
Each symbol is allocated its own horizontal row. Within this row, bar-by-bar data is displayed using a color gradient to represent the z-score values. The heatmap employs a user-defined gradient scale, where a chosen "cold" color represents low z-scores and a chosen "hot" color signifies high z-scores. As the z-score increases or decreases, the colors transition smoothly along this gradient, providing an intuitive visual indication of the z-score's magnitude.
- Cold Colors: Indicate values significantly below the mean (negative z-score)
- Mild Colors: Represent values close to the mean, suggesting minimal deviation.
- Hot Colors: Indicate values significantly above the mean (positive z-score)
This heatmap format provides a rapid, visually impactful means to discern how each symbol's price or volume is behaving relative to its average. The color-coded rows allow you to quickly spot outliers.
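The gradient itself maps directly onto Pine's color.from_gradient(). A minimal sketch with assumed cold/hot colors:
//@version=5
indicator("Z-heatmap color sketch")
float z = (close - ta.sma(close, 20)) / ta.stdev(close, 20)
// Map z in [-4, 4] onto a cold-to-hot gradient, as the heatmap rows do.
bgcolor(color.from_gradient(z, -4, 4, color.blue, color.red))
plot(z, "Z")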
VOLUME TYPE
The "Volume Type" input allows you to choose the nature of volume data that will be factored into the volume z-score calculation. The interpretation of indicator’s data changes based on this input. You can opt between:
- Volume (Regular Volume): This is the classic measure of trading volume, which represents the volume traded in a given time period - bar.
- OBV (On-Balance Volume): OBV is a momentum indicator that accumulates volume on up bars and subtracts it on down bars, making it a cumulative indicator that sort of measures buying and selling pressure.
Interpretation Implications:
- For Volume Type: Regular Volume:
Positive Z-Score: Indicates that the trading volume is above its average, meaning there's unusually high trading activity .
Negative Z-Score: Suggests that the trading volume is below its average, signifying unusually low trading activity.
- For Volume Type: OBV:
Positive Z-Score: Signifies that “buying pressure” is above its average.
Negative Z-Score: Signifies that “selling pressure” is above its average.
When comparing Z-Score of OBV to Z-Score of price, we can observe several scenarios. If Z-Price and Z-Volume are convergent (have similar z-scores), we can say that the directional price movement is supported by volume. If Z-Price and Z-Volume are divergent (have very different z-scores or one of them being zero), it suggests a potential misalignment between price movement and volume support, which might hint at possible reversals or weakness.
Machine Learning using Neural Networks | Educational
The script provided is a comprehensive illustration of how to implement and execute a simplistic Neural Network (NN) on TradingView using PineScript.
It encompasses the entire workflow from data input, weight initialization, implicit neuron calculation, feedforward computation, backpropagation for weight adjustments, generating predictions, to visualizing the Mean Squared Error (MSE) Loss Curve for monitoring the training phase.
In the visual example above, you can see that the prediction is not aligned with the actual value. This is intentional for demonstrative purposes, and by incrementing the Epochs or Learning Rate, you will see these two values converge as the accuracy increases.
Hyperparameters:
Learning Rate, Epochs, and the choice between Simple Backpropagation and a verbose version are declared as script inputs, allowing users to tailor the training process.
Initialization:
Random initialization of weight matrices (w1, w2) is performed to ensure asymmetry, promoting effective gradient updates. A seed is added for reproducibility.
Utility Functions:
Functions for matrix randomization, sigmoid activation, MSE loss calculation, data normalization, and standardization are defined to streamline the computation process.
Neural Network Computation:
The feedforward function computes the hidden and output layer values given the input.
Two variants of the backpropagation function are provided for weight adjustment, with one offering a more verbose step-by-step computation of gradients.
A wrapper train_nn function iterates through epochs, performing feedforward, loss computation, and backpropagation in each epoch while logging and collecting loss values.
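As an illustration of the structure described, a single forward pass for a 1-2-1 network can be sketched like this (the weight layout is an assumption, not necessarily the script's):
//@version=5
indicator("1-2-1 feedforward sketch")
// Sigmoid activation.
sigmoid(float x) =>
    1.0 / (1.0 + math.exp(-x))
// One forward pass: w1 is the 1x2 input-to-hidden matrix, w2 the 2x1
// hidden-to-output matrix.
feedforward(float x, matrix<float> w1, matrix<float> w2) =>
    float h1 = sigmoid(x * w1.get(0, 0))
    float h2 = sigmoid(x * w1.get(0, 1))
    sigmoid(h1 * w2.get(0, 0) + h2 * w2.get(1, 0))
var matrix<float> w1 = matrix.new<float>(1, 2, 0.5)
var matrix<float> w2 = matrix.new<float>(2, 1, 0.5)
plot(feedforward(close / ta.highest(close, 200), w1, w2), "NN output")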
Training Invocation:
The input data is prepared by normalizing it to a value between 0 and 1 using the maximum standardized value, and the training process is invoked only on the last confirmed bar to preserve computational resources.
Output Forecasting and Visualization:
Post training, the NN's output (predicted price) is computed, standardized and visualized alongside the actual price on the chart.
The MSE loss between the predicted and actual prices is visualized, providing insight into the prediction accuracy.
Optionally, the MSE Loss Curve is plotted on the chart, illustrating the loss trajectory through epochs, assisting in understanding the training performance.
Customizable Visualization:
Various inputs control visualization aspects like Chart Scaling, Chart Horizontal Offset, and Chart Vertical Offset, allowing users to adapt the visualization to their preference.
-------------------------------------------------------
The following is the structure of this Neural Network, consisting of one hidden layer with two hidden neurons.
Through understanding the steps outlined in my code, one should be able to scale the NN in any way they like, such as changing the input / output data and layers to fit their strategy ideas.
Additionally, one could forgo the backpropagation function, and load their own trained weights into the w1 and w2 matrices, to have this code run purely for inference.
-------------------------------------------------------
While this demonstration does create a “prediction”, it is on historical data. The purpose here is educational, rather than providing a ready tool for non-programmer consumers.
Normally in Machine Learning projects, the training process would be split into two segments, the Training and the Validation parts. For the purpose of conveying the core concept in a concise and non-repetitive way, I have foregone the Validation part. However, it is merely the application of your trained network on new data (feedforward), and monitoring the loss curve.
Essentially, checking the accuracy on “unseen” data, while training it on “seen” data.
-------------------------------------------------------
I hope that this code will help developers create interesting machine learning applications within the Tradingview ecosystem.
Sync Frame (MTF Charts) [Kioseff Trading]
Hello!
This indicator "Sync Frame" displays various lower timeframe charts for the asset on your screen!
5 lower timeframe candle charts shown
Timeframes auto-calculated using the new timeframe.from_seconds() function
Heikin-Ashi candles available
Baseline chart type available
Dynamic Scaling for ease of use
User customizable timeframes
Simple script (:
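For example, a lower timeframe can be derived from the chart timeframe along these lines (the one-fifth heuristic is an assumption for illustration):
//@version=5
indicator("Auto lower-timeframe sketch", overlay = true)
// One-fifth of the chart timeframe, floored at 1 minute.
string ltf = timeframe.from_seconds(math.max(60, int(timeframe.in_seconds() / 5)))
if barstate.islastconfirmedhistory
    label.new(bar_index, high, "Lower TF: " + ltf)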
The image above shows the baseline chart type.
The image above shows a traditional candlestick chart.
The image above shows a Heikin-Ashi chart.
The image above shows the indicator when zoomed in nearly as much as possible. The lower timeframe charts adjust to my chart positioning.
The image above shows my screen fully zoomed out; the lower timeframe charts adjust in both height and width to accommodate my chart positioning!
Thank you for checking this out (:
Tops & Bottoms - Time of Day Report█ OVERVIEW
The indicator tracks and reports the percentage of occurrence of daily tops and bottoms by the time of the day.
█ CONCEPTS
At certain times during the trading day, the market reverses and marks the high or low of the day. Tops and bottoms are vital when entering a trade, as they will decide if you are catching the train or being straight offside. They are equally crucial when exiting a position, as they will determine if you are closing at the optimal price or seeing your unrealized profits vanish.
This indicator is before all for educational purposes. It aims to make the knowledge available to all traders, facilitate understanding of the various markets, and ultimately get to know your trading pairs by heart.
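Conceptually, the tracking boils down to recording the hour at which each day's extreme printed, e.g. (a sketch; the time zone is a placeholder for the selectable setting):
//@version=5
indicator("Top time-of-day sketch")
string tz = "Europe/London" // placeholder time zone
var float dayHigh = na
var int hiHour = na
// Reset on a new day, then update whenever a new daily high prints.
if ta.change(time("D")) != 0
    dayHigh := high
    hiHour := hour(time, tz)
else if high > dayHigh
    dayHigh := high
    hiHour := hour(time, tz)
plot(hiHour, "Hour of day high")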
Tops and bottoms percentage of occurrence on EURGBP (London time).
Up days versus down days on EURUSD (London time).
█ FEATURES
Selectable time zones
Present the column chart in your local time zone (or other market participants).
Configurable time range filter
Select the period to report from.
Day type filter
Analyze all days, or filter only up days or down days.
█ HOW TO USE
Plot the indicator and visit the 1-hour or 30-minute timeframe.
█ NOTES
Timeframe choice
The 1-hour timeframe produces a higher number of days sampled. Prefer the usage of the 30-minute timeframe when your market starts at 9:30 AM.
Daylight Saving Time (DST)
The exchange time and geographical time zone options may observe Daylight Saving Time, unlike UTC+0.
Monte Carlo Simulation - Your Strategy [Kioseff Trading]
Hello!
This script “Monte Carlo Simulation - Your Strategy” uses Monte Carlo simulations for your inputted strategy returns or the asset on your chart!
Features
Monte Carlo Simulation: Performs Monte Carlo simulation to generate multiple future paths.
Asset Price or Strategy: Can simulate either future asset prices based on historical log returns or a specific trading strategy's future performance.
User-Defined Input: Allows you to input your own historical returns for simulation.
Statistical Methods: Offers two simulation methods—Gaussian (Normal) distribution and Bootstrapping.
Graphical Display: Provides options for graphical representation, including line plots and histograms.
Cumulative Probability Target: Enables setting a user-defined cumulative probability target to quantify simulation results.
Adjustable Parameters: Offers numerous user-adjustable settings like number of simulations, forecast length, and more.
Historical Data Points: Option to specify the amount of historical data to be used in the simulation (price).
Custom Binning: Allows you to select the binning method for histograms, with options like Sturges, Rice, and Square Root.
Best/Worst Case: Allows you to show only the best case / worst case outcome (range) for all simulations!
Scatterplot: allows you to show up to 1000 potential outcomes for a specified trade number (or bars forward price endpoint) using a scatter plot.
The image above shows the primary components of the indicator!
The image above shows the best/worst case outcome feature in action!
The image above shows a "fun feature" where 1000 simulated end points for a 15-bar price trajectory are shown as a scatter plot!
How To Perform a Monte Carlo Simulation On Your Strategy
Really, you can input any data into the indicator and it will perform a Monte Carlo simulation on it :D
The following instructions show how to export your strategy results from TradingView to an Excel File, copy the data, and input it into the indicator.
However , you are not limited to following this method!
Wherever your strategy results are stored, simply copy and paste them into the indicator text area in the settings and simulations will begin.
Returns Should Follow This Format
1
3
-3
2
-5
The numbers are presented as a single column. No commas or separators used.
The numbers above are in sequential order. A return of "1" for the first trade and a return of "-5" for the last trade. Your strategy returns will likely be in sequential order already so don't worry too much about this (:
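A minimal sketch of both steps - parsing the pasted list and bootstrapping one path, per the Bootstrapping method mentioned above (names and the 50-step horizon are assumptions):
//@version=5
indicator("Returns parsing & bootstrap sketch")
// Parse a newline-separated returns list pasted into a text area.
string txt = input.text_area("1\n3\n-3\n2\n-5", "Strategy returns")
var array<float> rets = array.new<float>()
if barstate.isfirst
    for s in str.split(txt, "\n")
        float v = str.tonumber(s)
        if not na(v)
            rets.push(v)
// Bootstrap: resample returns with replacement to build one simulated path.
simulatePath(array<float> src, int steps) =>
    float eq = 0.0
    for _ = 1 to steps
        eq += src.get(int(math.random(0, src.size())))
    eq
plot(barstate.islast and rets.size() > 0 ? simulatePath(rets, 50) : na, "Simulated end value", style = plot.style_circles)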
How To Perform a Monte Carlo Simulation On Your TradingView Strategy With Excel Data
Export your strategy returns to an excel file using TradingView
Navigate to your downloads folder to column G "Profit"
Click the column and press CTRL + SPACE to highlight the entire column
Press CTRL + C to copy the entire column
Open this indicator's settings and paste the returns into the text area
The image above illustrates the process!
Notes on Inputting Returns
*You must input your returns as a vertical list, without separators
*The initial text area can only hold so many return values. If your list of trades is large you can input additional returns into two additional text areas at the bottom of the indicator settings.
That should be it; thank you for checking this out!
Statistical Package for the Trading Sciences [SS]
This is SPTS.
It stands for Statistical Package for the Trading Sciences.
Its a play on SPSS (Statistical Package for the Social Sciences) by IBM (software that, prior to Pinescript, I would use on a daily basis for trading).
Let's preface this indicator first:
This isn't so much an indicator as it is a project. A passion project really.
This has been in the works for months and I still feel like its incomplete. But the plan here is to continue to add functionality to it and actually have the Pinecoding and Tradingview community contribute to it.
As a math based trader, I relied on Excel, SPSS and R constantly to plan my trades. Since learning a functional amount of Pinescript and coding a lot of what I do and what I relied on SPSS, Excel and R for, I use it perhaps maybe a few times a week.
This indicator, or package, has some of the key things I used Excel and SPSS for on a daily and weekly basis. This also adds a lot of, I would say, fairly complex math functionality to Pinescript. Because this is adding functionality not necessarily native to Pinescript, I have placed most, if not all, of the functionality into actual exportable functions. I have also set it up as a kind of library, with explanations and tips on how other coders can take these functions and implement them into other scripts.
The hope here is that other coders will take it, build upon it, improve it and hopefully share additional functionality that can be added into this package. Hence why I call it a project. Okay, let's get into an overview:
Current Functions of SPTS:
SPTS currently has the following functionality (further explanations will be offered below):
Ability to Perform a One-Tailed, Two-Tailed and Paired Sample T-Test, with corresponding P value.
Standard Pearson Correlation (with functionality to be able to calculate the Pearson Correlation between 2 arrays).
Quadratic (or curvilinear) correlation assessments.
R squared Assessments.
Standard Linear Regression.
Multiple Regression of 2 independent variables.
Tests of Normality (with Kurtosis and Skewness) and recognition of up to 7 Different Distributions.
ARIMA Modeller (Sort of, more details below)
Okay, so let's go over each of them!
T-Tests
So traditionally, most correlation assessments on Pinescript are done with a generic Pearson Correlation using the "ta.correlation" function. However, this is not always the best test to use for assessing correlations and determining effects. One approach to correlation assessments used frequently in economics is the T-Test assessment.
The t-test is a statistical hypothesis test used to determine if there is a significant difference between the means of two groups. It assesses whether the sample means are likely to have come from populations with the same mean. The test produces a t-statistic, which is then compared to a critical value from the t-distribution to determine statistical significance. Lower p-values indicate stronger evidence against the null hypothesis of equal means.
A significant t-test result, indicating the rejection of the null hypothesis, suggests that there is statistical evidence to support that there is a significant difference between the means of the two groups being compared. In practical terms, it means that the observed difference in sample means is unlikely to have occurred by random chance alone. Researchers typically interpret this as evidence that there is a real, meaningful difference between the groups being studied.
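As a rough sketch, a two-sample t-statistic (Welch's form) can be computed from two arrays like this (illustrative only; the indicator's own implementation may differ):
//@version=5
indicator("Two-sample t-test sketch")
// Welch's t-statistic; significance is then read from a t-table given the
// degrees of freedom.
tStat(array<float> a, array<float> b) =>
    float va = math.pow(a.stdev(), 2) / a.size()
    float vb = math.pow(b.stdev(), 2) / b.size()
    (a.avg() - b.avg()) / math.sqrt(va + vb)
// Example: the most recent 50 one-bar returns vs. the 50 before them.
float r = ta.roc(close, 1)
var array<float> recent = array.new<float>()
var array<float> older = array.new<float>()
if barstate.islast
    for i = 0 to 49
        recent.push(r[i])
        older.push(r[i + 50])
    label.new(bar_index, high, "t = " + str.tostring(tStat(recent, older), "#.##"))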
Some uses of the T-Test in finance include:
Risk Assessment: The t-test can be used to compare the risk profiles of different financial assets or portfolios. It helps investors assess whether the differences in returns or volatility are statistically significant.
Pairs Trading: Traders often apply the t-test when engaging in pairs trading, a strategy that involves trading two correlated securities. It helps determine when the price spread between the two assets is statistically significant and may revert to the mean.
Volatility Analysis: Traders and risk managers use t-tests to compare the volatility of different assets or portfolios, assessing whether one is significantly more or less volatile than another.
Market Efficiency Tests: Financial researchers use t-tests to test the Efficient Market Hypothesis by assessing whether stock price movements follow a random walk or if there are statistically significant deviations from it.
Value at Risk (VaR) Calculation: Risk managers use t-tests to calculate VaR, a measure of potential losses in a portfolio. It helps assess whether a portfolio's value is likely to fall below a certain threshold.
There are many other applications, but these are a few of the highlights. SPTS permits 3 different types of T-Test analyses, these being the One Tailed T-Test (if you want to test a single direction), two tailed T-Test (if you are unsure of which direction is significant) and a paired sample t-test.
Which T is the Right T?
Generally, a one-tailed t-test is used to determine if a sample mean is significantly greater than or less than a specified population mean, whereas a two-tailed t-test assesses if the sample mean is significantly different (either greater or less) from the population mean. In contrast, a paired sample t-test compares two sets of paired observations (e.g., before and after treatment) to assess if there's a significant difference in their means, typically used when the data points in each pair are related or dependent.
So which do you use? Well, it depends on what you want to know. As a general rule a one tailed t-test is sufficient and will help you pinpoint directionality of the relationship (that one ticker or economic indicator has a significant affect on another in a linear way).
A two tailed is more broad and looks for significance in either direction.
A paired sample t-test usually looks at identical groups to see if one group has a statistically different outcome. This is usually used in clinical trials to compare treatment interventions in identical groups. Its use in finance is somewhat limited, but it is invaluable when you want to compare equities that track the same thing (for example SPX vs SPY vs ES1!) or you want to test a hypothesis about an index and a leveraged share (for example, the relationship between FNGU and, say, MSFT or NVDA).
Statistical Significance
In general, with a t-test you would need to reference a T-Table to determine the statistical significance of the degree of Freedom and the T-Statistic.
However, because I wanted Pinescript to full fledge replace SPSS and Excel, I went ahead and threw the T-Table into an array, so that Pinescript can make the determination itself of the actual P value for a t-test, no cross referencing required :-).
Left tail (Significant):
Both tails (Significant):
Distributed throughout (insignificant):
As you can see in the images above, the t-test will also display a bell-curve analysis of where the significance falls (left tail, both tails or insignificant, distributed throughout).
That said, I have not included this function for the paired sample t-test because that is a bit more nuanced. But for the one and two tailed assessments, the indicator will provide you the P value.
Pearson Correlation Assessment
I don't think I need to go into too much detail on this one.
I have put in functionality to quickly calculate the Pearson Correlation of two arrays, which is not currently possible with the "ta.correlation" function.
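A sketch of what such a function can look like (assumed name; requires equal-length arrays):
//@version=5
indicator("Array Pearson sketch")
// Pearson correlation between two arrays, since ta.correlation() only
// accepts series.
arrayPearson(array<float> x, array<float> y) =>
    float mx = x.avg()
    float my = y.avg()
    float cov = 0.0
    float vx = 0.0
    float vy = 0.0
    for i = 0 to x.size() - 1
        float dx = x.get(i) - mx
        float dy = y.get(i) - my
        cov += dx * dy
        vx += dx * dx
        vy += dy * dy
    cov / math.sqrt(vx * vy)
var array<float> a = array.new<float>()
var array<float> b = array.new<float>()
a.push(close)
b.push(volume)
plot(barstate.islast ? arrayPearson(a, b) : na, "r")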
Quadratic (Curvilinear) Correlation
Not everything in life is linear, sometimes things are curved!
The Pearson Correlation is great for linear assessments but tends to underestimate the degree of the relationship in curved relationships. There is currently no native function to t-test for quadratic/curvilinear relationships, so I went ahead and created one.
You can see an example of how the Quadratic and Pearson Correlations vary when you look at CME_MINI:ES1! against AMEX:DIA over roughly the past 10 months:
Pearson Correlation:
Quadratic Correlation:
One or the other is not always the best, so it is important to check both!
R-Squared Assessments:
The R-squared value, or the square of the Pearson correlation coefficient (r), is used to measure the proportion of variance in one variable that can be explained by the linear relationship with another variable. It represents the goodness-of-fit of a linear regression model with a single predictor variable.
R-Squared is offered in 3 separate forms within this indicator. The first is the generic R-squared, which squares the result of a Pearson correlation assessment to measure explained variance.
The second is the R-squared calculated from an actual linear regression model done within the indicator.
The third is the R-squared calculated from a multiple regression model done within the indicator.
Regardless of which R-squared value you are using, the meaning is the same. R-squared measures the proportion of variance shared between the variables under assessment and offers insight into the goodness of fit and the ability of the model to account for that variance.
Here is the R Squared assessment of the SPX against the US Money Supply:
Standard Linear Regression
The indicator contains the ability to run a standard linear regression model. You can regress one stock, ticker, or economic indicator onto another. The indicator will provide you with all of the expected information from a linear regression model, including the coefficients, intercept, error assessments, correlation and R2 value.
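For reference, here is a compact sketch of the underlying mechanics using built-ins, regressing MSFT on AAPL over an assumed 100-bar window (an illustration, not the indicator's implementation):

```pine
//@version=5
indicator("OLS sketch")

// Assumed pairing to mirror the example below: y = MSFT regressed on x = AAPL.
float x = request.security("NASDAQ:AAPL", timeframe.period, close)
float y = request.security("NASDAQ:MSFT", timeframe.period, close)

int n = 100  // assumed estimation window
// slope = corr * sd(y)/sd(x); intercept from the sample means
float slope = ta.correlation(x, y, n) * ta.stdev(y, n) / ta.stdev(x, n)
float intercept = ta.sma(y, n) - slope * ta.sma(x, n)
float r2 = math.pow(ta.correlation(x, y, n), 2)

plot(intercept + slope * x, "fitted MSFT")
plot(r2, "R-squared", display=display.data_window)
```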
Here is AAPL and MSFT as an example:
Multiple Regression
Oh man, this was something I really wanted in Pinescript, and now we have it!
I have created a function for multiple regression, which, if you export the function, will permit you to perform multiple regression on any variables available in Pinescript!
Using this functionality in the indicator, you will need to select two independent (predictor) variables and a single dependent variable.
Here is an example of multiple regression for NASDAQ:AAPL using NASDAQ:MSFT and NASDAQ:NVDA :
And an example of SPX using the US Money Supply (M2) and AMEX:GLD :
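Under the hood, a two-predictor fit can be expressed with Pine v5 matrices via the normal equations, beta = (X'X)^-1 X'y. The sketch below is a hedged illustration (the tickers, the 50-bar window, and the last-bar label output are assumptions, not the indicator's code); the same pattern can also handle the quadratic fit mentioned earlier by feeding x and x squared in as the two predictors.

```pine
//@version=5
indicator("Multiple regression sketch")

// Assumed pairing to mirror the example above: AAPL regressed on MSFT and NVDA.
float x1 = request.security("NASDAQ:MSFT", timeframe.period, close)
float x2 = request.security("NASDAQ:NVDA", timeframe.period, close)
float yv = request.security("NASDAQ:AAPL", timeframe.period, close)

int n = 50  // assumed estimation window
if barstate.islast
    matrix<float> X = matrix.new<float>(n, 3, 1.0)  // column 0 stays 1.0 for the intercept
    matrix<float> Y = matrix.new<float>(n, 1, 0.0)
    for i = 0 to n - 1
        matrix.set(X, i, 1, x1[i])
        matrix.set(X, i, 2, x2[i])
        matrix.set(Y, i, 0, yv[i])
    // beta = (X'X)^-1 X'y
    matrix<float> Xt = matrix.transpose(X)
    matrix<float> B = matrix.mult(matrix.inv(matrix.mult(Xt, X)), matrix.mult(Xt, Y))
    label.new(bar_index, high, "b0=" + str.tostring(matrix.get(B, 0, 0)) + " b1=" + str.tostring(matrix.get(B, 1, 0)) + " b2=" + str.tostring(matrix.get(B, 2, 0)))
plot(yv, "AAPL", display=display.none)
```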
Tests of Normality:
Many indicators perform a lot of functions on the assumption of normality, yet there are no indicators that actually test that assumption!
So, I have included a function to assess for normality. It uses the kurtosis and skewness to determine up to 7 different distribution types, and it will explain the implication of the distribution. Here is an example of SP:SPX on the Monthly Perspective since 2010:
And NYSE:BA since the 60s:
And NVDA since 2015:
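For intuition, skewness and excess kurtosis can be approximated in Pine roughly like this. This is a quick rolling approximation, assuming log returns and a 252-bar window; the seven distribution labels themselves are the indicator's own logic and are not reproduced here.

```pine
//@version=5
indicator("Skew/kurtosis sketch")

int n = 252  // assumed window
float r = math.log(close / close[1])
float m = ta.sma(r, n)
float s = ta.stdev(r, n)
float z = (r - m) / s
// Rolling approximation: third/fourth moments of the standardized returns.
float skew = ta.sma(math.pow(z, 3), n)
float kurt = ta.sma(math.pow(z, 4), n) - 3.0  // excess kurtosis (normal = 0)

plot(skew, "skewness")
plot(kurt, "excess kurtosis")
```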
ARIMA Modeller
Okay, so let me disclose, this isn't a fully fledged ARIMA modeller. I took some shortcuts.
True ARIMA modelling would involve decomposing the seasonality from the trend. I omitted this step for simplicity's sake. Instead, you can select between an EMA- or SMA-based approach, and it will perform an autoregressive-type analysis on the EMA or SMA.
I have tested it on lookback against results produced by SPSS, and this actually works better than SPSS's ARIMA function. So I am actually kind of impressed.
You will need to input your parameters for the ARIMA model; I usually use a 14-, 21-, or 50-day EMA of the close price, and it will forecast out that range over the length of the EMA.
So for example, if you select the EMA 50 on the daily, it will plot out the forecast for the next 50 days based on an autoregressive model created on the EMA 50. Here is how it looks on AMEX:SPY :
You can also elect to plot the upper and lower confidence bands:
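To give a flavor of the approach (not the modeller's actual routine), an AR(1) fit on the smoothed series looks roughly like the sketch below. The 200-bar estimation window is an assumption, and iterating the recursion would extend the one-step forecast across the EMA's length as described above.

```pine
//@version=5
indicator("AR(1)-on-EMA sketch", overlay=true)

int len = input.int(50, "EMA length")
int win = 200  // assumed estimation window
float e = ta.ema(close, len)
// OLS fit of e on e[1]: slope = corr * sd(e)/sd(e[1]), intercept from the means.
float b = ta.correlation(e, e[1], win) * ta.stdev(e, win) / ta.stdev(e[1], win)
float a = ta.sma(e, win) - b * ta.sma(e[1], win)
plot(a + b * e, "one-step-ahead EMA forecast", color.orange)
```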
Closing Remarks
So that is the indicator/package.
I do hope to continue expanding its functionality, but as of now, it does already have quite a lot of functionality.
I really hope you enjoy it and find it helpful. This. Has. Taken. AGES! No joke. Between referencing my old statistics textbooks, trying to remember how to calculate some of these things, and wanting to throw my computer against the wall because of errors in the code, this was a task, that's for sure. So I really hope you find some usefulness in it all and enjoy the ability to be able to do functions that previously could really only be done in external software.
As always, leave your comments, suggestions and feedback below!
Take care!
Position Cost Distribution
The Position Cost Distribution indicator (also known as the Market Position Overview, Chip Distribution, or CYQ Algorithm) provides an estimate of how shares are distributed across different price levels. Visually, it resembles the Volume Profile indicator, though they rely on distinct computational approaches.
🟠 Principle
The Position Cost Distribution algorithm is based on the principle that a security's total shares outstanding usually remains constant, except under conditions like stock splits, reverse splits, or new share issuance. It views all trading activity as simply exchanging share positions between holders at different price points.
By analyzing daily trade volume and the prior day's distribution, the algorithm infers the resulting share distribution after each day. By tracking these inferred transpositions over time, the indicator builds up an aggregate view of the estimated share concentration at each price level. This provides insights into potential buying and selling pressure zones that could form support or resistance areas.
Together with the Volume Profile, the Position Cost Distribution gives traders multiple lenses for examining market structure from both a volume and positional standpoint. Both can help identify meaningful technical price levels.
🟠 Algorithm
The algorithm initializes by allocating all shares to the price range encompassed by the first bar displayed on the chart. Preferably, the chart window should include the stock's IPO date, allowing the model to distribute shares specifically to the IPO price.
For subsequent trading sessions, the indicator performs the following calculations:
1. The daily turnover ratio is calculated by dividing the bar's trading volume by total outstanding shares.
2. For each price level (bucket), the number of shares is reduced by the turnover amount to represent shares transferring from existing holders.
3. The bar's total volume is then added to buckets corresponding to that period's price range.
Currently, the model assumes each share has an equal probability of being exchanged, regardless of how long ago it was acquired or at what price. A potential optimization would be to give shares held longer a smaller chance of transfer than more recently purchased shares.
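As a toy illustration of steps 1-3 above (with a fixed price grid and a placeholder share count standing in for real shares-outstanding data, neither of which is the indicator's actual setup):

```pine
//@version=5
indicator("Chip distribution sketch")

const int BUCKETS = 100
const float SHARES_OUT = 1e9  // placeholder; a real script would use actual shares outstanding
const float P_LO = 100.0      // assumed fixed price grid for the sketch
const float P_HI = 500.0

// Step 1: start with all shares spread across the grid.
var array<float> chips = array.new<float>(BUCKETS, SHARES_OUT / BUCKETS)

float turnover = math.min(volume / SHARES_OUT, 1.0)
int b1 = math.max(0, math.min(BUCKETS - 1, int((low - P_LO) / (P_HI - P_LO) * BUCKETS)))
int b2 = math.max(0, math.min(BUCKETS - 1, int((high - P_LO) / (P_HI - P_LO) * BUCKETS)))

// Step 2: every bucket sheds the day's turnover ratio pro rata.
for i = 0 to BUCKETS - 1
    array.set(chips, i, array.get(chips, i) * (1.0 - turnover))
// Step 3: the traded volume is re-deposited across the bar's price range.
for i = b1 to b2
    array.set(chips, i, array.get(chips, i) + volume / (b2 - b1 + 1))

plot(array.sum(chips), "total tracked shares")
```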
────────────────────────────────────────────
Chinese note: this indicator is a TradingView implementation of "Chip Distribution" (筹码分布) :)
[Excalibur] Ehlers AutoCorrelation Periodogram Modified
Keep your coins folks, I don't need them, don't want them. If you wish to be generous, I do hope that charitable peoples worldwide with surplus food stocks may consider stocking local food banks before stuffing monetary bank vaults, for the crusade of remedying the needs of less than fortunate children, parents, elderly, homeless veterans, and everyone else who deserves nutritional sustenance for the soul.
DEDICATION:
This script is dedicated to the memory of Nikolai Dmitriyevich Kondratiev (Никола́й Дми́триевич Кондра́тьев) as tribute for being a pioneering economist and statistician, paving the way for modern econometrics by advocation of rigorous and empirical methodologies. One of his most substantial contributions to the study of business cycle theory include a revolutionary hypothesis recognizing the existence of dynamic cycle-like phenomenon inherent to economies that are characterized by distinct phases of expansion, stagnation, recession and recovery, what we now know as "Kondratiev Waves" (K-waves). Kondratiev was one of the first economists to recognize the vital significance of applying quantitative analysis on empirical data to evaluate economic dynamics by means of statistical methods. His understanding was that conceptual models alone were insufficient to adequately interpret real-world economic conditions, and that sophisticated analysis was necessary to better comprehend the nature of trending/cycling economic behaviors. Additionally, he recognized prosperous economic cycles were predominantly driven by a combination of technological innovations and infrastructure investments that resulted in profound implications for economic growth and development.
I will mention this... nations' economies MUST be supported and defended to continuously evolve incrementally in order to flourish in perpetuity OR suffer through eras with lasting ramifications of societal stagnation and implosion.
Analogous to the realm of economics, aperiodic cycles/frequencies, both enduring and ephemeral, do exist in all facets of life, every second of every day. To name a few that any blind man can naturally see are: heartbeat (cardiac cycles), respiration rates, circadian rhythms of sleep, powerful magnetic solar cycles, seasonal cycles, lunar cycles, weather patterns, vegetative growth cycles, and ocean waves. Do not pretend for one second that these basic aforementioned examples do not affect business cycle fluctuations in minuscule and monumental ways hour to hour, day to day, season to season, year to year, and decade to decade in every nation on the planet. Kondratiev's original seminal theories in macroeconomics from nearly a century ago have proven remarkably prescient with many of his antiquated elementary observations/notions/hypotheses in macroeconomics being scholastically studied and topically researched further. Therefore, I am compelled to honor and recognize his statistical insight and foresight.
If only.. Kondratiev could hold a pocket sized computer in the cup of both hands bearing the TradingView logo and platform services, I truly believe he would be amazed in marvelous delight with a GARGANTUAN smile on his face.
INTRODUCTION:
Firstly, this is NOT technically speaking an indicator like most others. I would describe it as an advanced cycle period detector to obtain market data spectral estimates with low latency and moderate frequency resolution. Developers can take advantage of this detector by creating scripts that utilize a "Dominant Cycle Source" input to adaptively govern algorithms. Be forewarned, I would only recommend this for advanced developers, not novice code dabbling. Although, there is some Pine wizardry introduced here for novice Pine enthusiasts to witness and learn from. AI did describe the code into one super-crunched sentence as, "a rare feat of exceptionally formatted code masterfully balancing visual clarity, precision, and complexity to provide immense educational value for both programming newcomers and expert Pine coders alike."
Understand all of the above aforementioned? Buckle up and proceed for a lengthy read of verbose complexity...
This is my enhanced and heavily modified version of autocorrelation periodogram (ACP) for Pine Script v5.0. It was originally devised by the mathemagician John Ehlers for detecting dominant cycles (frequencies) in an asset's price action. I have been sitting on code similar to this for a long time, but I decided to unleash the advanced code with my fashion. Originally Ehlers released this with multiple versions, one in a 2016 TASC article and the other in his last published 2013 book "Cycle Analytics for Traders", chapter 8. He wasn't joking about "concepts of advanced technical trading" and ACP is nowhere near to his most intimidating and ingenious calculations in code. I will say the book goes into many finer details about the original periodogram, so if you wish to delve into even more elaborate info regarding Ehlers' original ACP form AND how you may adapt algorithms, you'll have to obtain one. Note to reader, comparing Ehlers' original code to my chimeric code embracing the "Power of Pine", you will notice they have little resemblance.
What you see is a new species of autocorrelation periodogram combining Ehlers' innovation with my fascinations of what ACP could be in a Pine package. One other intention of this script's code is to pay homage to Ehlers' lifelong works. Like Kondratiev, Ehlers is also a hardcore cycle enthusiast. I intend to carry on the fire Ehlers envisioned and I believe that is literally displayed here as a pleasant "fiery" example endowed with Pine. With that said, I tried to make the code as computationally efficient as possible, without going into dozens of more crazy lines of code to speed things up even more. There's also a few creative modifications I made by making alterations to the originating formulas that I felt were improvements, one of them being lag reduction. By recently questioning every single thing I thought I knew about ACP, combined with the accumulation of my current knowledge base, this is the innovative revision I came up with. I could have improved it more but decided not to mind thrash too many TV members, maybe later...
I am now confident Pine should have adequate overhead left over to attach various indicators to the dominant cycle via input.source(). TV, I apologize in advance if in the future a server cluster combusts into a raging inferno... Coders, be fully prepared to build entire algorithms from pure raw code, because not all of the built-in Pine functions fully support dynamic periods (e.g. length=ANYTHING). Many of them do, as this was requested and granted a while ago, but some functions are just inherently finicky due to implementation combinations and MUST be emulated via raw code. I would imagine some comprehensive library or numerous authored scripts have portions of raw code for Pine built-ins some where on TV if you look diligently enough.
Notice: Unfortunately, I will not provide any integration support into member's projects at all. I have my own projects that require way too much of my day already. While I was refactoring my life (forgoing many other "important" endeavors) in the early half of 2023, I primarily focused on this code over and over in my surplus time. During that same time I was working on other innovations that are far above and beyond what this code is. I hope you understand.
The best way programmatically may be to incorporate this code into your private Pine project directly, after brutal testing of course, but that may be too challenging for many in early development. Being able to see the periodogram is also beneficial, so input sourcing may be the "better" avenue to tether portions of the dominant cycle to algorithms. Unique indication being able to utilize the dominantCycle may be advantageous when tethering this script to those algorithms. The easiest way is to manually set your indicators to what ACP recognizes as the dominant cycle, but that's actually not considered dynamic real time adaption of an indicator. Different indicators may need a proportion of the dominantCycle, say half it's value, while others may need the full value of it. That's up to you to figure that out in practice. Sourcing one or more custom indicators dynamically to one detector's dominantCycle may require code like this: `int sourceDC = int(math.max(6, math.min(49, input.source(close, "Dominant Cycle Source"))))`. Keep in mind, some algos can use a float, while algos with a for loop require an integer.
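For instance, a minimal consumer sketch might look like the following. Note that ta.sma is used here because its length accepts a dynamic series int, whereas ta.ema would require raw-code emulation, as cautioned above; the adaptive-SMA usage is an assumption for illustration.

```pine
//@version=5
indicator("Dominant cycle consumer sketch", overlay=true)

// Point this source input at the detector's dominantCycle plot;
// it defaults to close until the user re-routes it.
int dc = int(math.max(6, math.min(49, input.source(close, "Dominant Cycle Source"))))
plot(ta.sma(close, dc), "adaptive SMA")
```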
I have witnessed a few attempts by talented TV members for a Pine based autocorrelation periodogram, but not in this caliber. Trust me, coding ACP is no ordinary task to accomplish in Pine and modifying it blessed with applicable improvements is even more challenging. For over 4 years, I have been slowly improving this code here and there randomly. It is beautiful just like a real flame, but... this one can still burn you! My mind was fried to charcoal black a few times wrestling with it in the distant past. My very first attempt at translating ACP was a month long endeavor because PSv3 simply didn't have arrays back then. Anyways, this is ACP with a newer engine, I hope you enjoy it. Any TV subscriber can utilize this code as they please. If you are capable of sufficiently using it properly, please use it wisely with intended good will. That is all I beg of you.
Lastly, you now see how I have rasterized my Pine with Ehlers' swami-like tech. Yep, this whole time I have been using hline() since PSv3, not plot(). Evidently, plot() still has a deficiency limited to only 32 plots when it comes to creating intense eye candy indicators, the last I checked. The use of hline() is the optimal choice for rasterizing Ehlers styled heatmaps. This does only contain two color schemes of the many I have formerly created, but that's all that is essentially needed for this gizmo. Anything else is generally for a spectacle or seeing how brutal Pine can be color treated. The real hurdle is being able to manipulate colors dynamically with Merlin like capabilities from multiple algo results. That's the true challenging part of these heatmap contraptions to obtain multi-colored "predator vision" level indication. You now have basic hline() food for thought empowerment to wield as you can imaginatively dream in Pine projects.
PERIODOGRAM UTILITY IN REAL WORLD SCENARIOS:
This code is a testament to the abilities that have yet to be fully realized with indication advancements. Periodograms, spectrograms, and heatmaps are a powerful tool with real-world applications in various fields such as financial markets, electrical engineering, astronomy, seismology, and neuro/medical applications. For instance, among these diverse fields, it may help traders and investors identify market cycles/periodicities in financial markets, support engineers in optimizing electrical or acoustic systems, aid astronomers in understanding celestial object attributes, assist seismologists with predicting earthquake risks, help medical researchers with neurological disorder identification, and detection of asymptomatic cardiovascular clotting in the vaxxed via full body thermography. In either field of study, technologies in likeness to periodograms may very well provide us with a better sliver of analysis beyond what was ever formerly invented. Periodograms can identify dominant cycles and frequency components in data, which may provide valuable insights and possibly provide better-informed decisions. By utilizing periodograms within aspects of market analytics, individuals and organizations can potentially refrain from making blinded decisions and leverage data-driven insights instead.
PERIODOGRAM INTERPRETATION:
The periodogram renders the power spectrum of a signal, with the y-axis representing the periodicity (frequencies/wavelengths) and the x-axis representing time. The y-axis is divided into periods, with each elevation representing a period. In this periodogram, the y-axis ranges from 6 at the very bottom to 49 at the top, with intermediate values in between, all indicating the power of the corresponding frequency component by color. The higher the position occurs on the y-axis, the longer the period or lower the frequency. The x-axis of the periodogram represents time and is divided into equal intervals, with each vertical column on the axis corresponding to the time interval when the signal was measured. The most recent values/colors are on the right side.
The intensity of the colors on the periodogram indicate the power level of the corresponding frequency or period. The fire color scheme is distinctly like the heat intensity from any casual flame witnessed in a small fire from a lighter, match, or camp fire. The most intense power would be indicated by the brightest of yellow, while the lowest power would be indicated by the darkest shade of red or just black. By analyzing the pattern of colors across different periods, one may gain insights into the dominant frequency components of the signal and visually identify recurring cycles/patterns of periodicity.
SETTINGS CONFIGURATIONS BRIEFLY EXPLAINED:
Source Options: These settings allow you to choose the data source for the analysis. Using the `Source` selection, you may tether to additional data streams (e.g. close, hlcc4, hl2), which also may include samples from any other indicator. For example, this could be my "Chirped Sine Wave Generator" script found in my member profile. By using the `SineWave` selection, you may analyze a theoretical sinusoidal wave with a user-defined period, something already incorporated into the code. The `SineWave` will be displayed over top of the periodogram.
Roofing Filter Options: These inputs control the range of the passband for ACP to analyze. Ehlers had two versions of his highpass filters for his releases, so I included an option for you to see the obvious difference when performing a comparison of both. You may choose between 1st and 2nd order high-pass filters.
Spectral Controls: These settings control the core functionality of the spectral analysis results. You can adjust the autocorrelation lag, adjust the level of smoothing for Fourier coefficients, and control the contrast/behavior of the heatmap displaying the power spectra. I provided two color schemes by checking or unchecking a checkbox.
Dominant Cycle Options: These settings allow you to customize the various types of dominant cycle values. You can choose between floating-point and integer values, and select the rounding method used to derive the final dominantCycle values. Also, you may control the level of smoothing applied to the dominant cycle values.
DOMINANT CYCLE VALUE SELECTIONS:
External to the acs() function, the code takes a dominant cycle value returned from acs() and changes its numeric form based on a specified type and form chosen within the indicator settings. The dominant cycle value can be represented as an integer or a decimal number, depending on the attached algorithm's requirements. For example, FIR filters will require an integer while many IIR filters can use a float. The float forms can be either rounded, smoothed, or floored. If the resulting value is desired to be an integer, it can be rounded up/down or just be in an integer form, depending on how your algorithm may utilize it.
AUTOCORRELATION SPECTRUM FUNCTION BASICALLY EXPLAINED:
In the beginning of the acs() code, the population of caches for precalculated angular frequency factors and smoothing coefficients occur. By precalculating these factors/coefs only once and then storing them in an array, the indicator can save time and computational resources when performing subsequent calculations that require them later.
In the following code block, the "Calculate AutoCorrelations" is calculated for each period within the passband width. The calculation involves numerous summations of values extracted from the roofing filter. Finally, a correlation values array is populated with the resulting values, which are normalized correlation coefficients.
Moving on to the next block of code, labeled "Decompose Fourier Components", Fourier decomposition is performed on the autocorrelation coefficients. It iterates this time through the applicable period range of 6 to 49, calculating the real and imaginary parts of the Fourier components. Frequencies 6 to 49 are the primary focus of interest for this periodogram. Using the precalculated angular frequency factors, the resulting real and imaginary parts are then utilized to calculate the spectral Fourier components, which are stored in an array for later use.
The next section of code smooths the noise ridden Fourier components between the periods of 6 and 49 with a selected filter. This species also employs numerous SuperSmoothers to condition noisy Fourier components. One of the big differences is Ehlers' versions used basic EMAs in this section of code. I decided to add SuperSmoothers.
The final sections of the acs() code determines the peak power component for normalization and then computes the dominant cycle period from the smoothed Fourier components. It first identifies a single spectral component with the highest power value and then assigns it as the peak power. Next, it normalizes the spectral components using the peak power value as a denominator. It then calculates the average dominant cycle period from the normalized spectral components using Ehlers' "Center of Gravity" calculation. Finally, the function returns the dominant cycle period along with the normalized spectral components for later external use to plot the periodogram.
POST SCRIPT:
Concluding, I have to acknowledge a newly found analyst for assistance that I couldn't receive from anywhere else. For one, Claude doesn't know much about Pine, is unfortunately color blind, and can't even see the Pine reference, but it was able to intuitively shred my code with laser precise realizations. Not only that, formulating and reformulating my description needed crucial finesse applied to it, and I couldn't have provided what you have read here without that artificial insight. Finding the right order of words to convey the complexity of ACP and the elaborate accompanying content was a daunting task. No code in my life has ever absorbed so much time and hard fricking work, than what you witness here, an ACP gem cut pristinely. I'm unveiling my version of ACP for an empowering cause, in the hopes a future global army of code wielders will tether it to highly functional computational contraptions they might possess. Here is ACP fully blessed poetically with the "Power of Pine" in sublime code. ENJOY!
Support & Resistance AI (K means/median) [ThinkLogicAI]
█ OVERVIEW
K-means is a clustering algorithm commonly used in machine learning to group data points into distinct clusters based on their similarities. While K-means is not typically used directly for identifying support and resistance levels in financial markets, it can serve as a tool in a broader analysis approach.
K-means is a clustering algorithm commonly used in machine learning to group data points into distinct clusters based on their similarities. While K-means is not typically used directly for identifying support and resistance levels in financial markets, it can serve as a tool in a broader analysis approach.
Support and resistance levels are price levels in financial markets where the price tends to react or reverse. Support is a level where the price tends to stop falling and might start to rise, while resistance is a level where the price tends to stop rising and might start to fall. Traders and analysts often look for these levels as they can provide insights into potential price movements and trading opportunities.
█ BACKGROUND
The K-means algorithm has been around since the late 1950s, making it more than six decades old. The algorithm was introduced by Stuart Lloyd in his 1957 research paper "Least squares quantization in PCM" for telecommunications applications. However, it wasn't widely known or recognized until James MacQueen's 1967 paper "Some Methods for Classification and Analysis of Multivariate Observations," where he formalized the algorithm and referred to it as the "K-means" clustering method.
So, while K-means has been around for a considerable amount of time, it continues to be a widely used and influential algorithm in the fields of machine learning, data analysis, and pattern recognition due to its simplicity and effectiveness in clustering tasks.
█ COMPARE AND CONTRAST SUPPORT AND RESISTANCE METHODS
1) K-means Approach:
Cluster Formation: After applying the K-means algorithm to historical price change data and visualizing the resulting clusters, traders can identify distinct regions on the price chart where clusters are formed. Each cluster represents a group of similar price change patterns.
Cluster Analysis: Analyze the clusters to identify areas where clusters tend to form. These areas might correspond to regions of price behavior that repeat over time and could be indicative of support and resistance levels.
Potential Support and Resistance Levels: Based on the identified areas of cluster formation, traders can consider these regions as potential support and resistance levels. A cluster forming at a specific price level could suggest that this level has been historically significant, causing similar price behavior in the past.
Cluster Standard Deviation: In addition to looking at the means (centroids) of the clusters, traders can also calculate the standard deviation of price changes within each cluster. Standard deviation is a measure of the dispersion or volatility of data points around the mean. A higher standard deviation indicates greater price volatility within a cluster.
Low Standard Deviation: If a cluster has a low standard deviation, it suggests that prices within that cluster are relatively stable and less likely to exhibit sudden and large price movements. Traders might consider placing tighter stop-loss orders for trades within these clusters.
High Standard Deviation: Conversely, if a cluster has a high standard deviation, it indicates greater price volatility within that cluster. Traders might opt for wider stop-loss orders to allow for potential price fluctuations without getting stopped out prematurely.
Cluster Density: Each data point is assigned to a cluster, so a cluster that is more dense will act more like gravity, increasing the likelihood that price reacts and pivots when it reaches that level.
2) Traditional Approach:
Trendlines: Draw trendlines connecting significant highs or lows on a price chart to identify potential support and resistance levels.
Chart Patterns: Identify chart patterns like double tops, double bottoms, head and shoulders, and triangles that often indicate potential reversal points.
Moving Averages: Use moving averages to identify levels where the price might find support or resistance based on the average price over a specific period.
Psychological Levels: Identify round numbers or levels that traders often pay attention to, which can act as support and resistance.
Previous Highs and Lows: Identify significant previous price highs and lows that might act as support or resistance.
The key difference lies in the approach and the foundation of these methods. Traditional methods are based on well-established principles of technical analysis and market psychology, while the K-means approach involves clustering price behavior without necessarily incorporating market sentiment or specific price patterns.
It's important to note that while the K-means approach might provide an interesting way to analyze price data, it should be used cautiously and in conjunction with other traditional methods. Financial markets are influenced by a wide range of factors beyond just price behavior, and the effectiveness of any method for identifying support and resistance levels should be thoroughly tested and validated.
█ K MEANS ALGORITHM
The algorithm for K means is as follows:
Initialize cluster centers
Assign data to clusters based on minimum distance
Calculate each cluster center by taking the average (or median) of its members
Repeat steps 2-3 until the cluster centers stop moving
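A bare-bones sketch of those steps on closing prices follows (k = 3, a 300-bar window, and 20 iterations are assumptions, not the indicator's settings; swapping the mean for the member median in the update step yields the K-medians variant offered here):

```pine
//@version=5
indicator("1D k-means sketch", overlay=true)

int k = 3          // assumed cluster count
int lookback = 300 // assumed training window

if barstate.islast
    array<float> data = array.new<float>()
    for i = 0 to lookback - 1
        array.push(data, close[i])
    // Step 1: initialize centers spread across the price range.
    array<float> centers = array.new<float>()
    float lo = array.min(data)
    float hi = array.max(data)
    for j = 0 to k - 1
        array.push(centers, lo + (hi - lo) * j / (k - 1))
    // Steps 2-3, repeated: assign to the nearest center, then recenter on the cluster mean.
    for iter = 0 to 19
        array<float> sums = array.new<float>(k, 0.0)
        array<float> cnts = array.new<float>(k, 0.0)
        for i = 0 to array.size(data) - 1
            float p = array.get(data, i)
            int best = 0
            for j = 1 to k - 1
                if math.abs(p - array.get(centers, j)) < math.abs(p - array.get(centers, best))
                    best := j
            array.set(sums, best, array.get(sums, best) + p)
            array.set(cnts, best, array.get(cnts, best) + 1.0)
        for j = 0 to k - 1
            if array.get(cnts, j) > 0
                array.set(centers, j, array.get(sums, j) / array.get(cnts, j))
    // Draw each converged center as a horizontal level.
    for j = 0 to k - 1
        line.new(math.max(0, bar_index - lookback), array.get(centers, j), bar_index, array.get(centers, j))
plot(close, display=display.none)
```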
█ LIMITATIONS OF K MEANS
There are 3 main limitations of this algorithm:
Sensitive to Initializations: K-means is sensitive to the initial placement of centroids. Different initializations can lead to different cluster assignments and final results.
Assumption of Equal Sizes and Variances: K-means assumes that clusters have roughly equal sizes and spherical shapes. This may not hold true for all types of data. It can struggle with identifying clusters with uneven densities, sizes, or shapes.
Impact of Outliers: K-means is sensitive to outliers, as a single outlier can significantly affect the position of cluster centroids. Outliers can lead to the creation of spurious clusters or distortion of the true cluster structure.
█ LIMITATIONS IN APPLICATION OF K MEANS IN TRADING
Trading data often exhibits characteristics that can pose challenges when applying indicators and analysis techniques. Here's how the limitations of outliers, varying scales, and unequal variance can impact the use of indicators in trading:
Outliers are data points that significantly deviate from the rest of the dataset. In trading, outliers can represent extreme price movements caused by rare events, news, or market anomalies. Outliers can have a significant impact on trading indicators and analyses:
Indicator Distortion: Outliers can skew the calculations of indicators, leading to misleading signals. For instance, a single extreme price spike could cause indicators like moving averages or RSI (Relative Strength Index) to give false signals.
Risk Management: Outliers can lead to overly aggressive trading decisions if not properly accounted for. Ignoring outliers might result in unexpected losses or missed opportunities to adjust trading strategies.
Different Scales: Trading data often includes multiple indicators with varying units and scales. For example, prices are typically in dollars, volume in units traded, and oscillators have their own scale. Mixing indicators with different scales can complicate analysis:
Normalization: Indicators on different scales need to be normalized or standardized to ensure they contribute equally to the analysis. Failure to do so can lead to one indicator dominating the analysis due to its larger magnitude.
Comparability: Without normalization, it's challenging to directly compare the significance of indicators. Some indicators might have a larger numerical range and could overshadow others.
Unequal Variance: Unequal variance in trading data refers to the fact that some indicators might exhibit higher volatility than others. This can impact the interpretation of signals and the performance of trading strategies:
Volatility Adjustment: When combining indicators with varying volatility, it's essential to adjust for their relative volatilities. Failure to do so might lead to overemphasizing or underestimating the importance of certain indicators in the trading strategy.
Risk Assessment: Unequal variance can impact risk assessment. Indicators with higher volatility might lead to riskier trading decisions if not properly taken into account.
█ APPLICATION OF THIS INDICATOR
This indicator can be used in 2 ways:
1) Make a directional trade:
If a trader thinks price will go higher or lower and price is within a cluster zone, the trader can take a position and place a stop on the 1 SD band around the cluster. As one can see below, the trader can go long at the green arrow and place a stop on the one standard deviation mark for that cluster below it, at the red arrow. Using this, we can calculate a risk-to-reward ratio.
Calculating risk to reward: targeting a risk-reward ratio of 2:1, the trader could clearly achieve it, given that the next resistance area above (in the orange cluster) exceeds this risk-reward ratio.
2) Take a reversal Trade:
We can use cluster centers (support and resistance levels) to go in the opposite direction that price is currently moving in hopes of price forming a pivot and reversing off this level.
Similar to the directional trade, we can use the standard deviation of the cluster to place a stop just in case we are wrong.
In this example below we can see that shorting on the red arrow and placing a stop at the one standard deviation above this cluster would give us a profitable trade with minimal risk.
The cluster density table in the upper right informs the trader just how dense each cluster is. Higher-density clusters give a higher likelihood of a pivot forming at these levels, with price being rejected and switching direction with a larger move.
█ FEATURES & SETTINGS
General Settings:
Number of clusters: The user can select from three to five clusters. A good rule of thumb is that if you are trading intraday, less is more (think 3 rather than 5). For daily charts, 4 to 5 clusters is good.
Cluster Method: To get around the outlier limitation of K-means clustering, the median was added. This gives the user the ability to choose either K-means or K-medians clustering. K-means is the preferred method if the user thinks there are no large outliers; if there appear to be large outliers, or it is assumed there are, then K-medians is preferred.
Bars Back To Train On: This is the number of bars to include in the clustering. This number is important so that the user includes bars that are recent but not so far back that they are outside the scope of where price can realistically go. For example, the S&P 500 has been in a range for the last 2 years, so 505 days in this setting would be more relevant than looking back 5 years, because price would have to move far to revisit those levels.
Show SD Bands: Select this to show the 1 standard deviation bands around the support and resistance level or unselect this to just show the support and resistance level by itself.
Features:
Besides the support and resistance levels and standard deviation bands, this indicator provides a table in the upper right-hand corner showing the density of each cluster (support and resistance level), color-coded to the cluster line on the chart. Higher-density clusters mean price has been there more often than at lower-density clusters, which could imply a higher likelihood of a reversal when price reaches these areas.
█ WORKS CITED
Victor Sim, "Using K-means Clustering to Create Support and Resistance", 2020, towardsdatascience.com
Chris Piech, "K means", stanford.edu
█ ACKNOWLEDGMENTS
@jdehorty - Thanks for the publish template. It made organizing my thoughts and work a lot easier.
ABC on Recursive Zigzag [Trendoscope]
There are several implementations of the ABC pattern in TradingView and Pine Script. However, we have made this indicator to provide users additional quantifiable information along with the flexibility to experiment and develop their own strategy based on the patterns.
🎲 Highlights of this indicator over other ABC implementations are:
The implementation is based on a recursive multi-level zigzag, which allows both bigger and smaller patterns to be identified.
It allows users to set their trading rules with respect to entry, target and stop ratios, and to experiment and build their own strategy based on the ABC pattern.
A backtest summary, including win ratio and risk-reward, will help users understand the profitability of different settings.
🎲 Concept of ABC Pattern
The ABC pattern, also known as the "Corrective Wave" or "Zigzag Pattern," is a fundamental concept in Elliott Wave Theory, which is widely used in technical analysis to identify and predict price movements in financial markets.
The ABC pattern is a three-wave corrective pattern that typically occurs within the context of a larger impulse or trending wave. It consists of two smaller waves in the opposite direction (A and C) separated by a corrective wave (B). These waves are labeled alphabetically and represent price movements.
Wave A (Impulse Wave): Wave A is the first leg of the ABC pattern and is characterized by a strong price move in the opposite direction of the prevailing trend. It is often driven by a fundamental or sentiment-driven event that temporarily disrupts the trend.
Wave B (Corrective Wave): Wave B is the corrective wave that follows Wave A. It represents a partial retracement of Wave A's price movement. Wave B can take various forms, such as a simple correction or a complex correction (e.g., a triangle or a flat correction). It typically doesn't retrace the entire length of Wave A.
Wave C (Impulse Wave): Wave C is the final leg of the ABC pattern and is characterized by a strong price move in the same direction as the prevailing trend. It often surpasses the starting point of Wave A and confirms the resumption of the larger trend.
🎲 Indicator Components
Upon loading the indicator on the chart, we can observe the following components on the chart.
Pattern Drawings is the graphical representation of present patterns. Please note that it is not necessary for patterns to be there on the chart all the time. Patterns will appear on the chart when price makes the patterns.
Trade Box is the box representing trade signals of the pattern. These trade levels are generated based on the user settings.
Summary Table is the back test summary containing details of historical pattern performance including Win Ratio and Risk Reward.
🎲 Indicator Settings
Details of each user setting are provided in the tooltips. Below is a snapshot of them.
🎲 Alerts
A basic level of alerting is built into the script using the alert() function to highlight the following conditions:
New ABC Pattern
Updates to existing Pattern
Both conditions will alert simple text messages. There is not much customization provided as part of this indicator. We will consider providing more options in future versions based on the interest and demand shown by users.
Signal Adapter
This Signal Adapter script can compose a signal based on inputs from other simple (non-signal) indicators and can forward it to the "Template Trailing Strategy".
It allows the user to combine up to eight external inputs and define the conditions that will trigger the start, end, cancel start and cancel end deals.
A signal will be composed from those user-defined conditions. The "indicator on indicator" feature is needed so you can forward the resulting signal to the "Template Trailing Strategy". Thus, you should be a Plus or Premium user to get its full potential. It is very convenient for those who want to create a strategy without coding their own signal indicator, and for those who want to quickly prototype various ideas based on simple conditions.
Multi-Asset Performance [Spaghetti] - By Leviathan
This indicator visualizes the cumulative percentage changes or returns of 30 symbols over a given period and offers a unique set of tools and data analytics for deeper insight into the performance of different assets.
Multi Asset Performance indicator (also called “Spaghetti”) makes it easy to monitor the changes in Price, Open Interest, and On Balance Volume across multiple assets simultaneously, distinguish assets that are overperforming or underperforming, observe the relative strength of different assets or currencies, use it as a tool for identifying mean reversion opportunities and even for constructing pairs trading strategies, detect "risk-on" or "risk-off" periods, evaluate statistical relationships between assets through metrics like correlation and beta, construct hedging strategies, trade rotations and much more.
Start by selecting a time period (e.g., 1 DAY) to set the interval for when data is reset. This will provide insight into how price, open interest, and on-balance volume change over your chosen period. In the settings, asset selection is fully customizable, allowing you to create three groups of up to 30 tickers each. These tickers can be displayed in a variety of styles and colors. Additional script settings offer a range of options, including smoothing values with a Simple Moving Average (SMA), highlighting the top or bottom performers, plotting the group mean, applying heatmap/gradient coloring, generating a table with calculations like beta, correlation, and RSI, creating a profile to show asset distribution around the mean, and much more.
One of the most important script tools is the screener table, which can display:
🔸 Percentage Change (Represents the return or the percentage increase or decrease in Price/OI/OBV over the current selected period)
🔸 Beta (Represents the sensitivity or responsiveness of asset's returns to the returns of a benchmark/mean. A beta of 1 means the asset moves in tandem with the market. A beta greater than 1 indicates the asset is more volatile than the market, while a beta less than 1 indicates the asset is less volatile. For example, a beta of 1.5 means the asset typically moves 150% as much as the benchmark. If the benchmark goes up 1%, the asset is expected to go up 1.5%, and vice versa.)
🔸 Correlation (Describes the strength and direction of a linear relationship between the asset and the mean. Correlation coefficients range from -1 to +1. A correlation of +1 means that two variables are perfectly positively correlated; as one goes up, the other will go up in exact proportion. A correlation of -1 means they are perfectly negatively correlated; as one goes up, the other will go down in exact proportion. A correlation of 0 means that there is no linear relationship between the variables. For example, a correlation of 0.5 between Asset A and Asset B would suggest that when Asset A moves, Asset B tends to move in the same direction, but not perfectly in tandem.)
🔸 RSI (Measures the speed and change of price movements and is used to identify overbought or oversold conditions of each asset. The RSI ranges from 0 to 100 and is typically used with a time period of 14. Generally, an RSI above 70 indicates that an asset may be overbought, while RSI below 30 signals that an asset may be oversold.)
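As a reference point, the beta column boils down to something like the sketch below. SP:SPX standing in for the group mean/benchmark, and the 50-bar reference length, are assumptions for illustration, not the indicator's exact routine.

```pine
//@version=5
indicator("Beta sketch")

int len = input.int(50, "Reference length")
float bench = request.security("SP:SPX", timeframe.period, close)
float ra = close / close[1] - 1   // asset return
float rb = bench / bench[1] - 1   // benchmark return
// beta = cov(ra, rb) / var(rb) = corr * sd(ra) / sd(rb)
float beta = ta.correlation(ra, rb, len) * ta.stdev(ra, len) / ta.stdev(rb, len)
plot(beta, "beta vs benchmark")
```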
⚙️ Settings Overview:
◽️ Period
Periodic inputs (e.g. daily, monthly, etc.) determine when the values are reset to zero and begin accumulating again until the period is over. This visualizes the net change in the data over each period. The input "Visible Range" is auto-adjustable as it starts the accumulation at the leftmost bar on your chart, displaying the net change in your chart's visible range. There's also the "Timestamp" option, which allows you to select a specific point in time from where the values are accumulated. The timestamp anchor can be dragged to a desired bar via Tradingview's interactive option. Timestamp is particularly useful when looking for outperformers/underperformers after a market-wide move. The input positioned next to the period selection determines the timeframe on which the data is based. It's best to leave it at default (Chart Timeframe) unless you want to check the higher timeframe structure of the data.
◽️ Data
The first input in this section determines the data that will be displayed. You can choose between Price, OI, and OBV. The second input lets you select which one out of the three asset groups should be displayed. The symbols in the asset group can be modified in the bottom section of the indicator settings.
◽️ Appearance
You can choose to plot the data in the form of lines, circles, areas, and columns. The colors can be selected by choosing one of the six pre-prepared color palettes.
◽️ Labeling
This input allows you to show/hide the labels and select their appearance and size. You can choose between Label (colored pointed label), Label and Line (colored pointed label with a line that connects it to the plot), or Text Label (colored text).
◽️ Smoothing
If selected, this option will smooth the values using a Simple Moving Average (SMA) with a custom length. This is used to reduce noise and improve the visibility of plotted data.
◽️ Highlight
If selected, this option will highlight the top and bottom N (custom number) plots, while shading the others. This makes the symbols with extreme values stand out from the rest.
◽️ Group Mean
This input allows you to select the data that will be considered as the group mean. You can choose between Group Average (the average value of all assets in the group) or First Ticker (the value of the ticker that is positioned first on the group's list). The mean is then used in calculations such as correlation (as the second variable) and beta (as a benchmark). You can also choose to plot the mean by clicking on the checkbox.
◽️ Profile
If selected, the script will generate a vertical volume profile-like display with 10 zones/nodes, visualizing the distribution of assets below and above the mean. This makes it easy to see how many or what percentage of assets are outperforming or underperforming the mean.
◽️ Gradient
If selected, this option will color the plots with a gradient based on the proximity of the value to the upper extreme, zero, and lower extreme.
◽️ Table
This section includes several settings for the table's appearance and the data displayed in it. The "Reference Length" input determines the number of bars back that are used for calculating correlation and beta, while "RSI Length" determines the length used for calculating the Relative Strength Index. You can choose the data that should be displayed in the table by using the checkboxes.
◽️ Asset Groups
This section allows you to modify the symbols that have been selected to be a part of the 3 asset groups. If you want to change a symbol, you can simply click on the field and type the ticker of another one. You can also show/hide a specific asset by using the checkbox next to the field.
SimilarityMeasures
Library "SimilarityMeasures"
Similarity measures are statistical methods used to quantify the distance between different data sets
or strings. There are various types of similarity measures, including those that compare:
- data points (SSD, Euclidean, Manhattan, Minkowski, Chebyshev, Correlation, Cosine, Camberra, MAE, MSE, Lorentzian, Intersection, Penrose Shape, Meehl),
- strings (Edit(Levenshtein), Lee, Hamming, Jaro),
- probability distributions (Mahalanobis, Fidelity, Bhattacharyya, Hellinger),
- sets (Kumar Hassebrook, Jaccard, Sorensen, Chi Square).
---
These measures are used in various fields such as data analysis, machine learning, and pattern recognition. They
help to compare and analyze similarities and differences between different data sets or strings, which
can be useful for making predictions, classifications, and decisions.
---
References:
en.wikipedia.org
cran.r-project.org
numerics.mathdotnet.com
github.com
github.com
github.com
Encyclopedia of Distances, doi.org
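As a quick orientation before the function reference below, here is a hypothetical usage sketch of the library; the import path's username and version are placeholders, so substitute the publisher's actual path.

```pine
//@version=5
indicator("SimilarityMeasures usage sketch")

// Placeholder import path: replace `username` with the library publisher's handle.
import username/SimilarityMeasures/1 as sim

// Maintain two rolling 50-element windows and compare them each bar.
var array<float> p = array.new<float>()
var array<float> q = array.new<float>()
array.push(p, close)
array.push(q, open)
if array.size(p) > 50
    array.shift(p)
    array.shift(q)
plot(array.size(p) > 1 ? sim.euclidean(p, q) : na, "euclidean distance")
```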
ssd(p, q)
Sum of squared difference for N dimensions.
Parameters:
p (float ) : `array` Vector with first numeric distribution.
q (float ) : `array` Vector with second numeric distribution.
Returns: Measure of distance that calculates the squared euclidean distance.
euclidean(p, q)
Euclidean distance for N dimensions.
Parameters:
p (float ) : `array` Vector with first numeric distribution.
q (float ) : `array` Vector with second numeric distribution.
Returns: Measure of distance that calculates the straight-line (or Euclidean).
manhattan(p, q)
Manhattan distance for N dimensions.
Parameters:
p (float ) : `array` Vector with first numeric distribution.
q (float ) : `array` Vector with second numeric distribution.
Returns: Measure of absolute differences between both points.
minkowski(p, q, p_value)
Minkowsky Distance for N dimensions.
Parameters:
p (float ) : `array` Vector with first numeric distribution.
q (float ) : `array` Vector with second numeric distribution.
p_value (float) : `float` P value, default=1.0 (1: manhattan, 2: euclidean); does not support chebyshev.
Returns: Measure of similarity in the normed vector space.
chebyshev(p, q)
Chebyshev distance for N dimensions.
Parameters:
p (float ) : `array` Vector with first numeric distribution.
q (float ) : `array` Vector with second numeric distribution.
Returns: Measure of maximum absolute difference.
correlation(p, q)
Correlation distance for N dimensions.
Parameters:
p (float ) : `array` Vector with first numeric distribution.
q (float ) : `array` Vector with second numeric distribution.
Returns: Measure of dissimilarity based on the linear correlation between both distributions.
cosine(p, q)
Cosine distance between provided vectors.
Parameters:
p (float ) : `array` 1D Vector.
q (float ) : `array` 1D Vector.
Returns: The Cosine distance between vectors `p` and `q`.
---
angiogenesis.dkfz.de
camberra(p, q)
Camberra distance for N dimensions.
Parameters:
p (float ) : `array` Vector with first numeric distribution.
q (float ) : `array` Vector with second numeric distribution.
Returns: Weighted measure of absolute differences between both points.
mae(p, q)
Mean absolute error is a normalized version of the sum of absolute difference (manhattan).
Parameters:
p (float ) : `array` Vector with first numeric distribution.
q (float ) : `array` Vector with second numeric distribution.
Returns: Mean absolute error of vectors `p` and `q`.
mse(p, q)
Mean squared error is a normalized version of the sum of squared difference.
Parameters:
p (float ) : `array` Vector with first numeric distribution.
q (float ) : `array` Vector with second numeric distribution.
Returns: Mean squared error of vectors `p` and `q`.
lorentzian(p, q)
Lorentzian distance between provided vectors.
Parameters:
p (float ) : `array` Vector with first numeric distribution.
q (float ) : `array` Vector with second numeric distribution.
Returns: Lorentzian distance of vectors `p` and `q`.
---
angiogenesis.dkfz.de
intersection(p, q)
Intersection distance between provided vectors.
Parameters:
p (float ) : `array` Vector with first numeric distribution.
q (float ) : `array` Vector with second numeric distribution.
Returns: Intersection distance of vectors `p` and `q`.
---
angiogenesis.dkfz.de
penrose(p, q)
Penrose Shape distance between provided vectors.
Parameters:
p (float ) : `array` Vector with first numeric distribution.
q (float ) : `array` Vector with second numeric distribution.
Returns: Penrose shape distance of vectors `p` and `q`.
---
angiogenesis.dkfz.de
meehl(p, q)
Meehl distance between provided vectors.
Parameters:
p (float ) : `array` Vector with first numeric distribution.
q (float ) : `array` Vector with second numeric distribution.
Returns: Meehl distance of vectors `p` and `q`.
---
angiogenesis.dkfz.de
edit(x, y)
Edit (aka Levenshtein) distance for indexed strings.
Parameters:
x (int ) : `array` Indexed array.
y (int ) : `array` Indexed array.
Returns: Number of deletions, insertions, or substitutions required to transform source string into target string.
---
generated description:
The Edit distance is a measure of similarity used to compare two strings. It is defined as the minimum number of
operations (insertions, deletions, or substitutions) required to transform one string into another. The operations
are performed on the characters of the strings, and the cost of each operation depends on the specific algorithm
used.
The Edit distance is widely used in various applications such as spell checking, text similarity, and machine
translation. It can also be used for other purposes like finding the closest match between two strings or
identifying the common prefixes or suffixes between them.
---
github.com
www.red-gate.com
planetcalc.com
lee(x, y, dsize)
Distance between two indexed strings of equal length.
Parameters:
x (int ) : `array` Indexed array.
y (int ) : `array` Indexed array.
dsize (int) : `int` Dictionary size.
Returns: Distance between two strings by accounting for dictionary size.
---
www.johndcook.com
hamming(x, y)
Distance between two indexed strings of equal length.
Parameters:
x (int ) : `array` Indexed array.
y (int ) : `array` Indexed array.
Returns: Length of different components on both sequences.
---
en.wikipedia.org
jaro(x, y)
Distance between two indexed strings.
Parameters:
x (int ) : `array` Indexed array.
y (int ) : `array` Indexed array.
Returns: Measure of two strings' similarity: the higher the value, the more similar the strings are.
The score is normalized such that `0` equates to no similarities and `1` is an exact match.
---
rosettacode.org
mahalanobis(p, q, VI)
Mahalanobis distance between two vectors with population inverse covariance matrix.
Parameters:
p (float ) : `array` 1D Vector.
q (float ) : `array` 1D Vector.
VI (matrix) : `matrix` Inverse of the covariance matrix.
Returns: The mahalanobis distance between vectors `p` and `q`.
---
people.revoledu.com
stat.ethz.ch
docs.scipy.org
fidelity(p, q)
Fidelity distance between provided vectors.
Parameters:
p (float ) : `array` 1D Vector.
q (float ) : `array` 1D Vector.
Returns: The Bhattacharyya Coefficient between vectors `p` and `q`.
---
en.wikipedia.org
bhattacharyya(p, q)
Bhattacharyya distance between provided vectors.
Parameters:
p (float ) : `array` 1D Vector.
q (float ) : `array` 1D Vector.
Returns: The Bhattacharyya distance between vectors `p` and `q`.
---
en.wikipedia.org
hellinger(p, q)
Hellinger distance between provided vectors.
Parameters:
p (float ) : `array` 1D Vector.
q (float ) : `array` 1D Vector.
Returns: The hellinger distance between vectors `p` and `q`.
---
en.wikipedia.org
jamesmccaffrey.wordpress.com
kumar_hassebrook(p, q)
Kumar Hassebrook distance between provided vectors.
Parameters:
p (float ) : `array` 1D Vector.
q (float ) : `array` 1D Vector.
Returns: The Kumar Hassebrook distance between vectors `p` and `q`.
---
github.com
jaccard(p, q)
Jaccard distance between provided vectors.
Parameters:
p (float ) : `array` 1D Vector.
q (float ) : `array` 1D Vector.
Returns: The Jaccard distance between vectors `p` and `q`.
---
github.com
sorensen(p, q)
Sorensen distance between provided vectors.
Parameters:
p (float ) : `array` 1D Vector.
q (float ) : `array` 1D Vector.
Returns: The Sorensen distance between vectors `p` and `q`.
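The preceding three vector measures also share structure under their common definitions: Kumar Hassebrook is the peak-to-correlation-energy similarity p·q / (|p|² + |q|² − p·q), Jaccard distance is its complement, and Sorensen distance is Σ|pᵢ − qᵢ| / Σ(pᵢ + qᵢ). A hedged Python sketch, assuming those definitions:
```python
def kumar_hassebrook(p, q):
    # PCE similarity: p.q / (|p|^2 + |q|^2 - p.q)
    pq = sum(a * b for a, b in zip(p, q))
    return pq / (sum(a * a for a in p) + sum(b * b for b in q) - pq)

def jaccard(p, q):
    # Jaccard (Tanimoto) distance as the complement of the PCE similarity.
    return 1.0 - kumar_hassebrook(p, q)

def sorensen(p, q):
    # Sorensen distance: sum |p_i - q_i| / sum (p_i + q_i)
    return sum(abs(a - b) for a, b in zip(p, q)) / sum(a + b for a, b in zip(p, q))

p, q = [1.0, 2.0, 3.0], [2.0, 2.0, 2.0]
print(kumar_hassebrook(p, q))  # ~0.857
print(jaccard(p, q))           # ~0.143
print(sorensen(p, q))          # ~0.167
```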
---
people.revoledu.com
chi_square(p, q, eps)
Chi Square distance between provided vectors.
Parameters:
p (float ) : `array` 1D Vector.
q (float ) : `array` 1D Vector.
eps (float) : `float` Epsilon value to avoid division by zero.
Returns: The Chi Square distance between vectors `p` and `q`.
---
uw.pressbooks.pub
stats.stackexchange.com
www.itl.nist.gov
kulczynsky(p, q, eps)
Kulczynsky distance between provided vectors.
Parameters:
p (float ) : `array` 1D Vector.
q (float ) : `array` 1D Vector.
eps (float) : `float` Epsilon value to avoid division by zero.
Returns: The Kulczynsky distance between vectors `p` and `q`.
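Both eps-guarded measures above most likely use `eps` only to keep denominators nonzero; a sketch assuming the common definitions (Chi Square: Σ(pᵢ − qᵢ)²/(pᵢ + qᵢ); Kulczynsky: Σ|pᵢ − qᵢ| / Σ min(pᵢ, qᵢ)):
```python
def chi_square(p, q, eps=1e-10):
    # eps avoids a zero denominator when p_i + q_i == 0.
    return sum((a - b) ** 2 / (a + b + eps) for a, b in zip(p, q))

def kulczynsky(p, q, eps=1e-10):
    # eps avoids a zero denominator when min(p_i, q_i) == 0 everywhere.
    return sum(abs(a - b) for a, b in zip(p, q)) / (sum(min(a, b) for a, b in zip(p, q)) + eps)

p, q = [1.0, 2.0, 0.0], [2.0, 2.0, 1.0]
print(chi_square(p, q))  # ~1.333
print(kulczynsky(p, q))  # ~0.667
```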
---
github.com
All Candlestick Patterns on Backtest [By MUQWISHI]
▋ INTRODUCTION:
The “All Candlestick Patterns on Backtest” indicator generates a table that offers a clear visualization of the historical return percentages for each candlestick pattern strategy over a specified time period. The table serves as an organized resource and a launching point for in-depth research into candle formations. It may help to correct misconceptions surrounding candlestick patterns, refine trading approaches, and provide a foundation for informed trading decisions.
_______________________
▋ OVERVIEW:
_______________________
▋ CREDIT:
Credit to the public “*All Candlestick Patterns*” technical indicator.
_______________________
▋ TABLE:
_______________________
▋ CHART:
_______________________
▋ INDICATOR SETTINGS:
#Section One: Table Setting
#Section Two: Backtest Setting
(1) Backtest Starting Period.
Note: If the datetime of the first candle on the chart is after the entered datetime, the calculation will start from the first candle on the chart.
(2) Initial Equity ($).
(3) Leverage: Current Equity x Leverage Value.
(4) Entry Mode:
- “At Close”: Execute the entry order as soon as the candle is confirmed.
- “Breakout High (Low for Short)”: A stop-limit buy order; the entry order is executed as soon as the next candle breaks out above the high of the last pattern’s candle (below the low for short).
(5) Cancel Entry Within Bars: This option applies when {Entry Mode = Breakout High (Low for Short)}, cancelling the entry order if it is not executed within the selected number of bars.
(6) Stoploss Range: the range is the pattern’s high minus the pattern’s low.
(7) Risk:Reward: the risk:reward range is measured from the entry price level. For example, a pattern triggers with a range of 10 points and the entry price is 100 (see the sketch after this list):
- For a 1:1 risk:reward, the stoploss would be at 90 and the takeprofit at 110.
- For a 1:3 risk:reward, the stoploss would be at 90 and the takeprofit at 130.
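A minimal sketch of that stoploss/takeprofit arithmetic for a long entry (illustrative Python; the function name and inputs are assumptions, not the indicator's internals):
```python
def long_levels(entry, pattern_range, reward_ratio):
    # Stoploss sits one pattern range below entry;
    # takeprofit sits reward_ratio ranges above entry.
    stoploss = entry - pattern_range
    takeprofit = entry + pattern_range * reward_ratio
    return stoploss, takeprofit

print(long_levels(100, 10, 1))  # (90, 110) -> 1:1
print(long_levels(100, 10, 3))  # (90, 130) -> 1:3
```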
#Section Three: Technical & Candle Patterns
_______________________
▋ Comments:
This table was developed for research and educational purposes.
Candlestick patterns are nearly identical to those seen in the “*All Candlestick Patterns*” indicator.
The table results should not be taken as the main basis for a trading decision.
Personally, I see candlestick patterns as a means to comprehend the psychology of the market and a help in following the price action.
Please let me know if you have any questions.
Thank you.
CE - 42MACRO Equity Factor Table
This is Part 1 of 2 from the 42MACRO Recreation Series
The CE - 42MACRO Equity Factor Table is a whole toolbox packaged in a single indicator.
It aims to provide probabilistic insight into the market-realized GRID Macro Regime, uses a multiplex of important Assets and Indices to form a high-probability Implied Correlation expectation, and allows you to derive extra market insights by showing the most important aggregates and their performance over multiple timeframes... and what that might mean for overall market direction, as well as for the underlying asset.
WARNING
By the nature of macro regimes, the outcomes are more accurate over longer Chart Timeframes (Weeks to Months).
However, it is also a valuable tool for forming a proper,
market-realized, short- to medium-term bias.
NOTE
This Indicator is intended to be used alongside the 2nd part "CE - 42MACRO Yield and Macro"
for a more holistic approach and higher accuracy.
Due to coding limitations they cannot be merged into one Indicator.
Methodology:
The Equity Factor Table tracks specifically chosen Assets to identify their performance and adds the combined performances together to visualize 42MACRO's GRID Equity Model.
For this it uses the below Assets, with more to come:
Dividend Compounders ( AMEX:SPHD )
Mid Caps ( AMEX:VO )
Emerging Markets ( AMEX:EEM )
Small Caps ( AMEX:IWM )
Mega Cap Growth ( NASDAQ:QQQ )
Brazil ( AMEX:EWZ )
United Kingdom ( AMEX:EWU )
Growth ( AMEX:IWF )
United States ( AMEX:SPY )
Japan ( AMEX:DXJ )
Momentum ( AMEX:MTUM )
China ( AMEX:FXI )
Low Beta ( AMEX:SPLV )
International ex-US ( NASDAQ:ACWX )
India ( AMEX:INDA )
Eurozone ( AMEX:EZU )
Quality ( AMEX:QUAL )
Size ( AMEX:OEF )
Functionalities:
1. Correlations
Takes a measure of Cross Market Correlations
2. Implied Trend
Calculates the trend for each Asset and uses the Correlation to obtain the Implied Trend for the underlying Asset
There are multiple functionalities to enhance Signal Speed and precision...
Reading a signal only above a certain threshold; below it, the signal is colored gray to indicate noise or unclear market behavior
Normalization of Signal
Double Normalization of Signal for more Speed... ideal for the Crypto Market
Using an additional Hull Moving Average to enhance Signal Speed
Additional simple Background coloring to get a Signal from the HMA
Barcoloring based on the Implied Correlation
3. Equity Factor Table
Shows market realized Asset performance
Provides the approximate realized GRID market regimes
Informs about "Risk ON" and "Risk OFF" market states
Now into the juicy stuff...
Visuals:
There is a variety of options to change the visual settings of what is plotted and where,
plus additional considerations.
Everything in the underlying logic that is relevant and can improve comprehension can be visualized with these options.
More to come
Market Correlation:
The Market Correlation Table takes the Correlation of all the Assets to the Asset on the Chart;
it furthermore uses the Normalized KAMA Oscillator by IkkeOmar to analyse the current trend of every single Asset.
(To enhance the Signal you can apply the mentioned Indicator on the relevant Assets to find your target Asset movements that you intend to capture...
and then change the length of the Indicator in here)
It then combines each Asset's Trend with its Correlation to the Chart Asset to produce an Implied Trend: a probabilistically adjusted expectation for the future Chart Asset Movement.
This is strengthened by taking the average of all Implied Trends, as sketched below.
Thus the Correlation Table provides valuable insight into the probabilistically likely Movement of the Asset over the defined time duration,
providing alpha for Traders and Investors alike.
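Conceptually, the combination could be sketched like this (a simplified Python illustration of the described logic; the tickers and values are hypothetical, not the indicator's code):
```python
# Hypothetical per-asset trend scores and correlations to the chart asset,
# both in [-1, 1].
trends       = {"SPY": 0.6, "QQQ": 0.8, "IWM": -0.2}
correlations = {"SPY": 0.9, "QQQ": 0.7, "IWM": 0.5}

# Implied trend contributed by each tracked asset:
implied = {k: trends[k] * correlations[k] for k in trends}

# Averaging all implied trends strengthens the overall expectation.
expectation = sum(implied.values()) / len(implied)
print(implied)      # {'SPY': 0.54, 'QQQ': 0.56, 'IWM': -0.1}
print(expectation)  # ~0.333
```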
Equity Factors:
The table provides valuable information about the current market environment (whether it's risk-on or risk-off),
the rough GRID models from 42MACRO, and the actual market performance.
This allows you to obtain a deeper understanding of how the market works, makes it simple to identify the actual market direction,
makes it possible to gauge overall market Health, and shows market strength or weakness.
Utility:
The Equity Factor Table is divided into 4 Sections, which are the GRID regimes:
Economic Growth:
Goldilocks
Reflation
Economic Contraction:
Inflation
Deflation
Top 5 Equity Factors:
Are the values green for a specific Column?
If so then the market reflects the corresponding GRID behavior.
Bottom 5 Equity Factors:
Are the values red for a specific Column?
If so then the market reflects the corresponding GRID behavior.
So if Goldilocks is the current regime, we would see green values in the Top 5 Goldilocks Cells and red values in the Bottom 5 Goldilocks Cells.
You will find that Reflation will look similar, as it is also a sign of Economic Growth.
The same is the case for the two Contraction regimes.
This whole Indicator, as well as the second part, is based largely on 42MACRO's models.
I only brought them into TV and added things on top of it.
If you have questions or need a more in-depth guide DM me.
Will make a guide to all functionalities if necessity becomes apparent.
GM
[SS] Linear Modeler
Hello everyone,
This is the linear modeler indicator.
It is a statistically based indicator that provides a likely price target and range based on a linear regression time series analysis.
To represent this visually, the indicator draws a linear regression channel and plots out the range at various points based on the current trend (see the chart below):
The indicator performs the same assessment, but gives you a working range and timeline for targets.
As well, the indicator will back-test the range and variables to see how it is performing and how reliable the results are likely to be.
General Functions:
In the chart above you can see all the various parameters and functions.
The indicator will display the most likely target (MLT) to be expected within the next predetermined timeframe (in candles).
So for the first target, the indicator is saying that within the next 10 candles, BA's MLT is 221.46, and based on backtest (BT) results the reliability of this assessment is around 46%.
The indicator will also display the anticipated range at each designated timeframe.
In the chart above, we can see that at 20 candles, the likely range BA should be trading in is between 204 and 238, with a reliability of around 62% based on previous performance. (A simplified sketch of this kind of projection follows.)
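Under the hood, a projection of this kind can be sketched as an ordinary least-squares fit plus a residual band (a simplified NumPy illustration of the general idea, not the indicator's actual code; the band multiplier is an assumption):
```python
import numpy as np

def project_range(closes, forecast_bars, band_mult=2.0):
    # Fit close = a + b * t by least squares over the lookback window.
    t = np.arange(len(closes), dtype=float)
    b, a = np.polyfit(t, closes, 1)       # slope, intercept
    sigma = (closes - (a + b * t)).std()  # residual spread
    # Most likely target (MLT) and band at the forecast horizon.
    mlt = a + b * (len(closes) - 1 + forecast_bars)
    return mlt - band_mult * sigma, mlt, mlt + band_mult * sigma

closes = np.linspace(200, 220, 100) + np.random.normal(0, 2, 100)
print(project_range(closes, forecast_bars=10))  # (low, MLT, high)
```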
Plot Functions:
As this is performing a linear time series projection, you can have the indicator plot the projected ranges. Simply go to the settings menu and select the desired forecast length:
This will plot out the desired range and result over the specified time period. Here is an example of BA plotted over the next 50 candles on the hourly:
You can technically use this as an SMA/EMA-type indicator; just keep in mind it may be a bit slower than a traditional EMA or SMA indicator, as it is processing a lot of data and plotting out forecasted data, as opposed to an SMA or EMA.
If you wish to use it as an EMA or SMA, you can deselect the "Display Chart" function to hide the table, and you can also select the "Plot Label" function. This will display the current projection analytics directly on your plotted line so you don't need to reference the table at all:
Tips on use:
I use this on both larger and smaller timeframes. On all timeframes, I look for targets that display 90% to 100% in the BT results.
Bear in mind, this does not mean we will hit the target 100% of the time; these targets can fail. It just means there is higher confidence of hitting this target than other, less reliable targets.
I will plot these targets out if they fall within the implied range of the timeframe I am looking at and will act on them according to the price action.
This is a great indicator to use in combination with other range based indicators. If you use the implied range from options to help guide your trading, you can see which targets are likely to be hit based on the current trend that fall within that implied range.
You can also assess the strength of the trends at various points in time and have an actionable range with a reliability reading at various points in time.
That is pretty much the bulk of the indicator.
Hopefully you find it helpful and useful.
As always, leave your questions and suggestions below.
Thanks for reading and checking it out!