Understanding Sentiment Tracking

Individual models generate predictions for specific assets. Sentiment tracking aggregates predictions from multiple models you track into a unified view of collective market outlook. This allows you to see consensus across your tracked models rather than evaluating each prediction separately.

The Problem Sentiment Solves

Tracking multiple models creates information overload. If you track 10 models that each predict on Bitcoin, you have 10 separate predictions to evaluate every day. Are they agreeing? Disagreeing? How strongly?

Manually comparing predictions across models doesn't scale. You need a systematic way to understand what your tracked models collectively think about current market conditions.

Sentiment tracking solves this by aggregating predictions into a single metric that represents the collective outlook of all your tracked models for each asset.

How Sentiment Is Calculated

Each model generates daily predictions classified as Strong Sell, Sell, Neutral, Buy, or Strong Buy. To calculate sentiment, the system converts these classifications into numerical values:

  • Strong Sell: -2

  • Sell: -1

  • Neutral: 0

  • Buy: +1

  • Strong Buy: +2

These values are then aggregated across all models tracking the same asset.

Basic Aggregation

The simplest approach averages sentiment values across models:
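A minimal sketch of this averaging, using the value mapping listed above (the function name is illustrative, not the platform's API):

```python
# Map each prediction classification to its numerical value, as listed above.
SENTIMENT_VALUES = {
    "Strong Sell": -2,
    "Sell": -1,
    "Neutral": 0,
    "Buy": 1,
    "Strong Buy": 2,
}

def simple_sentiment(predictions):
    """Average the sentiment values of one day's predictions for an asset."""
    values = [SENTIMENT_VALUES[p] for p in predictions]
    return sum(values) / len(values)
```

For example, `simple_sentiment(["Strong Buy", "Buy", "Neutral"])` averages +2, +1, and 0 into +1.0.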


This gives equal weight to each model's prediction.

Weighted Aggregation

In practice, sentiment calculation can incorporate model quality metrics. Higher-performing models may receive more weight in the aggregate calculation:
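One way to sketch this is a normalized weighted average; the weighting scheme and names here are assumptions for illustration, not the platform's exact formula:

```python
def weighted_sentiment(predictions_with_weights):
    """Weighted average of sentiment values.

    `predictions_with_weights` is a list of (sentiment_value, weight) pairs,
    where each weight might come from a model's accuracy or track record.
    """
    total_weight = sum(w for _, w in predictions_with_weights)
    if total_weight == 0:
        return 0.0  # no usable predictions
    return sum(v * w for v, w in predictions_with_weights) / total_weight
```

With this scheme, a Strong Buy from a high-weight model and a Sell from a low-weight model, e.g. `weighted_sentiment([(2, 0.9), (-1, 0.3)])`, yields +1.25 rather than the unweighted +0.5.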


This approach ensures models with stronger track records have more influence on the aggregate sentiment.

Sentiment Score Range

The final sentiment score typically ranges from -3 to +3:

  • -3 to -1.5: Strong Bearish - Most tracked models predict sell conditions

  • -1.5 to -0.5: Bearish - Moderate bearish consensus

  • -0.5 to +0.5: Neutral - No clear consensus or mixed predictions

  • +0.5 to +1.5: Bullish - Moderate bullish consensus

  • +1.5 to +3: Strong Bullish - Most tracked models predict buy conditions

These thresholds are approximate. The key is understanding that larger absolute values indicate stronger consensus among your tracked models.
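The bands above can be expressed as a small helper. The boundaries follow the list above; since the thresholds are approximate, how to handle scores at exactly ±0.5 or ±1.5 is a judgment call, and this sketch simply assigns each boundary to one side:

```python
def interpret_sentiment(score):
    """Map an aggregate sentiment score to the approximate bands above."""
    if score < -1.5:
        return "Strong Bearish"
    if score < -0.5:
        return "Bearish"
    if score <= 0.5:
        return "Neutral"
    if score <= 1.5:
        return "Bullish"
    return "Strong Bullish"
```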

Sentiment Over Time

Sentiment tracking becomes most useful when viewed over time. The sentiment timeline shows how your tracked models' collective outlook evolves across days, weeks, or months.

This temporal view reveals patterns that single-day snapshots miss:

  • Sentiment shifts - When models transition from bullish to bearish (or vice versa)

  • Sentiment strength - How strongly models agree during different periods

  • Sentiment persistence - Whether models maintain consistent outlook or frequently change classification



For example, aggregate daily sentiment for one asset might move like this over twelve days:

  +1.6, +1.4, +0.9, +0.1, -0.8, -1.7, -1.9, -1.6, -0.9, -0.2, +0.4, +1.1

This illustrative sequence shows a clear transition from bullish consensus to a strong bearish period, then a gradual return to a bullish outlook. Understanding these patterns helps you see how your tracked models collectively respond to changing market conditions.

Sentiment vs Price Movement

Sentiment represents what your tracked models think current market conditions are. Price movement represents what actually happened in the market.

These two things are related but distinct:

  • Aligned periods - Bullish sentiment during upward price movement, or bearish sentiment during downward price movement

  • Misaligned periods - Bullish sentiment during downward price movement, or bearish sentiment during upward price movement

Aligned periods indicate your tracked models correctly classified market conditions. Misaligned periods reveal gaps in your tracking coverage - situations where your models collectively missed or misread the market.
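One simple way to check alignment day by day is to compare the sign of the aggregate sentiment with the sign of the price change. The 0.5 neutrality cutoff here is an assumption borrowed from the band definitions above, and the function name is illustrative:

```python
def alignment(sentiment, price_change):
    """Label a period as aligned or misaligned.

    `sentiment` is the aggregate score for the period; `price_change` is the
    asset's price move over the same period. Near-neutral sentiment or a flat
    price gives no clear signal either way.
    """
    if abs(sentiment) <= 0.5 or price_change == 0:
        return "no clear signal"
    return "aligned" if (sentiment > 0) == (price_change > 0) else "misaligned"
```

For instance, bullish sentiment of +1.2 during a -3.5% price move would be labeled "misaligned".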

Why Misalignment Happens

Sentiment can be misaligned with price movement for several reasons:

Bias in tracked models - If you primarily track models with bullish tendencies, your sentiment will skew bullish even during bearish market periods. The aggregate sentiment reflects the models you track, not necessarily market reality.

Lagging recognition - Models may take time to recognize changing market conditions. Sentiment might remain bullish for several days after a downward trend begins.

Model quality - Models that learned poor patterns during training will generate inaccurate predictions. Aggregating inaccurate predictions produces inaccurate sentiment.

Market regime changes - Models trained on certain market patterns may not recognize fundamentally different conditions. Their collective sentiment will be wrong during these periods.

Consensus and Disagreement

High Consensus (Strong Sentiment)

When sentiment values are strongly positive or negative (approaching ±2 to ±3), it indicates high consensus among your tracked models. Most models agree on current market conditions.

High consensus doesn't mean the sentiment is correct - it just means your tracked models agree with each other. If you track models with similar biases or training approaches, they might all be wrong together.

Low Consensus (Mixed Sentiment)

When sentiment hovers near zero despite tracking multiple models, it indicates disagreement. Some models classify conditions as bullish while others see bearish conditions.

Mixed sentiment can mean:

  • Uncertain market conditions - The market is genuinely unclear, and different analytical approaches yield different interpretations

  • Model diversity - You track models trained on different patterns, and current conditions match some patterns but not others

  • Transition periods - The market is changing, and some models recognize the shift faster than others

Practical Example

Suppose you track 8 models predicting on ETH today:

Scenario A - High Bullish Consensus:

  • 6 models: Strong Buy (+2)

  • 2 models: Buy (+1)

  • Average sentiment: +1.75 (strong bullish)

Scenario B - Mixed Sentiment:

  • 3 models: Strong Buy (+2)

  • 2 models: Neutral (0)

  • 3 models: Sell (-1)

  • Average sentiment: +0.375 (weak bullish, but actually mixed)

Scenario C - High Bearish Consensus:

  • 7 models: Sell (-1)

  • 1 model: Strong Sell (-2)

  • Average sentiment: -1.125 (bearish)

Scenario A shows clear agreement. Scenario C also shows agreement (in bearish direction). Scenario B shows disagreement despite a positive average - the models are split on interpretation.
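The three scenarios can be verified with the simple average, and a dispersion measure such as standard deviation is one possible way (not a platform feature described here) to expose the split in Scenario B:

```python
from statistics import mean, pstdev

# Sentiment values per scenario (Strong Buy = +2, Buy = +1, Neutral = 0,
# Sell = -1, Strong Sell = -2).
scenario_a = [2] * 6 + [1] * 2             # high bullish consensus
scenario_b = [2] * 3 + [0] * 2 + [-1] * 3  # mixed: bullish vs bearish split
scenario_c = [-1] * 7 + [-2]               # high bearish consensus

for name, values in [("A", scenario_a), ("B", scenario_b), ("C", scenario_c)]:
    print(f"Scenario {name}: mean={mean(values):+.3f}, spread={pstdev(values):.3f}")
```

Scenario B's average is mildly positive, but its spread is far larger than A's or C's, which is what exposes the disagreement hiding behind the +0.375 average.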

Interpreting Sentiment Strength

The absolute value of sentiment (ignoring positive/negative) indicates strength of consensus:

  • 0 to 0.5 - Weak or no consensus

  • 0.5 to 1.5 - Moderate consensus

  • 1.5 to 3.0 - Strong consensus

Sentiment scores of -1.8 and +1.8 both represent strong consensus: the direction differs, but the strength is the same.

Number of Models Matters

Sentiment aggregated from 2 models is less statistically meaningful than sentiment from 10 models. More models generally provide more reliable aggregate sentiment, assuming the models are reasonably diverse.

However, tracking many similar models doesn't increase reliability - it just amplifies shared biases. Ten models with identical training approaches provide less diverse insight than five models with different analytical methodologies.

What Sentiment Is Not

Sentiment tracking has specific limitations:

Not a trading signal - Sentiment shows what your tracked models collectively think current conditions are. It's not a recommendation to buy or sell. You interpret sentiment for your own trading decisions.

Not a price prediction - Sentiment classifies current market conditions based on patterns models learned. It doesn't predict future price direction or magnitude.

Not inherently accurate - Sentiment accuracy depends entirely on the quality and diversity of models you track. Poor models produce poor aggregate sentiment.

Not a standalone metric - Sentiment is most useful alongside other information: individual model predictions, price data, and your own analysis. It's one input among many.

Key Takeaways

Aggregation reduces complexity - Sentiment converts multiple model predictions into a single metric representing collective outlook. This makes tracking many models practical.

Weighted calculation improves quality - Giving more weight to higher-performing models produces more reliable aggregate sentiment than treating all models equally.

Time reveals patterns - Sentiment becomes most valuable when viewed over time, showing how your tracked models collectively respond to changing market conditions.

Alignment matters - Comparing sentiment to actual price movement reveals whether your tracked models collectively understand current market conditions.

Consensus indicates agreement, not accuracy - High consensus means models agree with each other, but they might all be wrong if they share similar biases or training approaches.

Sentiment quality depends on model quality - Aggregating predictions from poor models produces poor sentiment. The metric is only as good as the models you track.

FAQ

How many models do I need to track for meaningful sentiment?

Three to five models provide basic aggregate sentiment. Ten or more models generally produce more statistically stable sentiment metrics. However, model diversity matters more than quantity - tracking many similar models doesn't improve sentiment quality.

Should I track only high-rarity models for better sentiment?

Not necessarily. High-rarity models have proven track records, but tracking only high-rarity models with similar approaches can create bias. A mix of quality levels and analytical approaches often provides more robust sentiment.

Can sentiment be positive while price goes down?

Yes. This is called misalignment and indicates your tracked models collectively classified conditions as bullish when the market actually moved bearish. Misalignment reveals gaps in your tracking coverage.

What causes sentiment to be neutral (near zero)?

Neutral sentiment can mean either: (1) models are predicting Neutral classification, or (2) models disagree with roughly equal bullish and bearish predictions that cancel out. The second scenario is more interesting - it indicates genuine disagreement about current conditions.

Does weighted sentiment always perform better than simple averaging?

Generally yes, because higher-quality models receive more influence. However, if all your tracked models share similar biases, weighting won't fix the fundamental issue. Weighting improves signal quality, but it cannot compensate for poor model selection.