Model Lifecycle 101

Every model on Criesnyse goes through the same stages from creation to continuous operation. Understanding this lifecycle helps you know what to expect when creating or tracking models.
Training
Training is where the model learns patterns from historical data. The model analyzes years of market data from Criesnyse's proprietary dataset, identifying statistical relationships between market features and labeled conditions.
Supervised learning: You label historical data by assigning market states (Buy, Sell, etc.) to different time periods. During training, the model searches for features and feature combinations in the data that predict these labeled states; validation later tests whether those patterns hold on unseen data.
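To make the idea concrete, here is a minimal sketch of labeling and training. The feature names, state labels, and classifier are illustrative assumptions, not Criesnyse's actual pipeline:

```python
# Illustrative only: labels assigned to historical periods, then a
# classifier learns which feature patterns correspond to each label.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Hypothetical labeled history: each row is a trading day with market
# features and the market state you assigned to that period.
history = pd.DataFrame({
    "momentum":   [0.8, 0.5, -0.3, -0.9, 0.1, 0.7],
    "volatility": [0.2, 0.3,  0.7,  0.9, 0.4, 0.2],
    "label":      ["Buy", "Buy", "Sell", "Sell", "Hold", "Buy"],
})

# The model searches for feature combinations that predict the labels.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(history[["momentum", "volatility"]], history["label"])

# Given new feature values, it predicts the most likely market state.
new_day = pd.DataFrame({"momentum": [0.6], "volatility": [0.25]})
print(model.predict(new_day))  # e.g. ['Buy']
```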
What the model learns: The model doesn't memorize specific scenarios or follow predefined rules. Instead, it learns to recognize patterns that historically corresponded to the states you labeled. Think of it as learning to recognize weather patterns rather than memorizing specific days.
Duration and resources: Training time varies based on model complexity. Basic models (analyzing up to 2 tickers simultaneously) train on standard infrastructure and typically complete within hours. Premium models (3+ tickers) require significantly more computational resources - the complexity grows exponentially with each additional ticker, not linearly.
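A back-of-the-envelope sketch of why the growth is exponential: if each ticker's behavior is discretized into a fixed number of states, every additional ticker multiplies the joint pattern space rather than adding to it. The numbers below are purely illustrative, not Criesnyse's actual resource model:

```python
# Illustrative arithmetic: why each added ticker multiplies, rather
# than adds to, the pattern space a model must search.
STATES_PER_TICKER = 10  # assumed discretization, purely illustrative

for n_tickers in range(1, 6):
    joint_states = STATES_PER_TICKER ** n_tickers
    print(f"{n_tickers} ticker(s): {joint_states:,} joint states")

# 1 ticker(s): 10 joint states
# 2 ticker(s): 100 joint states
# ...
# 5 ticker(s): 100,000 joint states
```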
Training can fail: Not all models successfully complete training. If a model fails to converge or cannot learn meaningful patterns from the labeled data, it receives a "failed" status. Failed models are preserved in the system but cannot proceed to validation or deployment.
Validation
After training completes, the model must prove it can work with data it has never seen before. This validation phase tests whether the model learned genuine patterns or simply memorized the training data.
Cross-validation methodology: The model is tested against historical data that was excluded from training. If the model performs well on this unseen data, it demonstrates that it learned generalizable patterns rather than overfitting to specific examples.
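The sketch below illustrates the principle with walk-forward splits, where each test fold is strictly later than its training data. The random data, labels, and scikit-learn splitter are illustrative stand-ins for Criesnyse's internal validation pipeline:

```python
# Illustrative only: evaluate on time periods excluded from training.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import TimeSeriesSplit

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))        # hypothetical daily features
y = rng.integers(0, 2, size=500)     # hypothetical Buy/Sell labels

# Each split trains on earlier data and tests on strictly later data,
# so good scores imply generalizable patterns, not memorization.
for train_idx, test_idx in TimeSeriesSplit(n_splits=5).split(X):
    model = RandomForestClassifier(random_state=0)
    model.fit(X[train_idx], y[train_idx])
    preds = model.predict(X[test_idx])
    print(f"held-out accuracy: {accuracy_score(y[test_idx], preds):.2f}")
```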
What's being checked: Validation examines multiple metrics including prediction accuracy, consistency across different market conditions, and the model's ability to maintain performance on data from different time periods.
Validation can fail: Models that perform well on training data but poorly on validation data have learned patterns that don't generalize. These models receive a "failed" status and cannot proceed to deployment. Like training failures, failed validations are preserved in the system but remain inactive.
Freezing
Once a model passes validation, it enters the freezing stage. This is the point of no return - the model becomes permanently locked in its current state.
What freezing means: The model is converted into an immutable binary file that contains the complete trained algorithm, all learned parameters, and weights. This file cannot be modified, updated, or retrained.
Why immutability matters: Freezing ensures the model will always behave exactly the same way. Every prediction it makes in the future will use the exact same logic it learned during training. This creates consistency and makes the model's behavior completely verifiable over time.
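One way to picture the guarantee: a frozen model is a fixed sequence of bytes, so its fingerprint can be computed once and re-verified at any time. The sketch below uses pickle and SHA-256 as stand-ins for Criesnyse's internal binary format:

```python
# Illustrative only: an immutable artifact can be fingerprinted once
# and re-verified forever, because the bytes never change.
import hashlib
import pickle

trained_model = {"weights": [0.12, -0.4, 0.9]}  # stand-in for a real model

frozen_bytes = pickle.dumps(trained_model)      # "freeze" to a binary blob
fingerprint = hashlib.sha256(frozen_bytes).hexdigest()

# Any later check of the same artifact must reproduce this digest;
# a single changed parameter would produce a different hash.
assert hashlib.sha256(frozen_bytes).hexdigest() == fingerprint
print(f"frozen model fingerprint: {fingerprint[:16]}...")
```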
What cannot be changed: After freezing, nothing about the model's decision-making logic can be altered. It will never adapt to new market conditions, never retrain on recent data, never adjust its behavior based on performance. This is by design.
Backtesting
The frozen model is now run against historical data to generate every prediction it would have made in the past. This creates a complete performance baseline before the model ever sees live market data.
Running on historical data: The model processes years of historical data, day by day, outputting classifications as if it were running in real-time during those periods. This generates a full prediction history across different market conditions and time periods.
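Conceptually, the backtest is a walk-forward replay. The sketch below assumes a hypothetical feature table and a stand-in decision rule; a real frozen model would apply its learned parameters instead:

```python
# Illustrative only: walk-forward replay of a frozen model over
# historical data, recording the prediction it would have made each day.
import pandas as pd

# Hypothetical daily feature history for one ticker.
daily_history = pd.DataFrame(
    {"momentum": [0.5, -0.2, 0.7, -0.8]},
    index=pd.date_range("2020-01-02", periods=4, freq="B"),
)

def frozen_model_predict(features):
    # Stand-in rule; a real frozen model applies its learned weights.
    return "Buy" if features["momentum"] > 0 else "Sell"

# Replay day by day, as if the model had been running live at the time.
prediction_history = [
    {"date": day, "signal": frozen_model_predict(row)}
    for day, row in daily_history.iterrows()
]
print(prediction_history[0])  # {'date': Timestamp('2020-01-02 ...'), 'signal': 'Buy'}
```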
Calculating performance metrics: These historical predictions are compared against actual market outcomes to calculate performance metrics including win rate, return, Sharpe ratio, risk metrics, and recovery patterns. These metrics determine the model's initial rankings and rarity level.
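Win rate and Sharpe ratio, for example, follow directly from the per-day returns of those historical predictions. The returns below are hypothetical and the formulas are the standard definitions, not necessarily the exact variants Criesnyse computes:

```python
# Illustrative only: standard win rate and annualized Sharpe ratio
# computed from hypothetical daily returns of backtested predictions.
import numpy as np

daily_returns = np.array([0.004, -0.002, 0.011, -0.007, 0.003])  # hypothetical

win_rate = (daily_returns > 0).mean()  # fraction of winning days
sharpe = daily_returns.mean() / daily_returns.std(ddof=1) * np.sqrt(252)

print(f"win rate: {win_rate:.0%}, annualized Sharpe: {sharpe:.2f}")
```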
Establishing baseline track record: The backtest results become the model's verified track record before deployment. This gives users visibility into how the model performed historically before deciding whether to track it.
Deployment
Once backtesting completes, the model is ready for deployment. This is when the model transitions from historical analysis to live operation.
Going live: Deployment means the model begins receiving daily live market data and generating real predictions. The model starts its live track record from this point forward.
Operational transition: From deployment onward, every prediction the model makes is recorded permanently and contributes to its growing live performance history. The model's rankings and metrics will update based on live performance, not just backtest results.
Availability: Deployed models become available for other users to track. Anyone can subscribe to receive the model's daily predictions.
Live Operation
Deployed models run continuously, generating predictions every trading day indefinitely.
Daily prediction cycle: Live data is released on weekdays, several hours after US stock market close. Early the following trading day, the model processes this data and generates predictions for tracked tickers.
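Schematically, the cycle has this shape. The function and parameter names are hypothetical placeholders, not the platform's real interfaces:

```python
# Illustrative only: the shape of the daily prediction cycle.
def run_daily_cycle(frozen_model, tracked_tickers, fetch_latest_data):
    """Process the latest data release and emit one signal per ticker."""
    latest = fetch_latest_data()  # data released hours after US market close
    return {t: frozen_model.predict(latest[t]) for t in tracked_tickers}
```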
Performance tracking: Live performance metrics accumulate over time. The longer a model operates, the more data points contribute to its track record. This growing history makes the model's performance increasingly verifiable.
Continuous operation: There is no automatic shutdown based on performance - models that perform poorly continue generating predictions just like successful models. This "build and forget" approach means any deployed model will keep operating.
Key Takeaways
Immutability is fundamental - Once frozen, a model's logic never changes. This ensures consistent, verifiable behavior over time.
Any model can run - Performance doesn't determine whether a model continues operating. Failed models never deploy, but any model that passes validation and is frozen will run continuously regardless of results.
Track records grow with time - The longer a model operates, the more valuable its verified performance data becomes. Time validates models in ways backtests cannot.
FAQ
Can I stop a deployed model?
No. Once you create a model, it proceeds through the full lifecycle automatically. If the model doesn't fail during training or validation, it will be deployed and run continuously. You cannot retract or stop a deployed model.
You can choose not to track your own model, but it will continue operating and generating predictions regardless. Other users can still track it.
What happens to a model's predictions if it performs poorly?
Nothing. The model continues generating predictions and they continue being recorded. Poor performance doesn't stop a model from operating. This is intentional - all models run continuously regardless of results.
Can I see a model's algorithm or code?
No. Models exist as binary files. You can see the model's complete prediction history, performance metrics, and backtest results, but not the internal logic or code.