New models every week · Launching soon

Find new models,
backtested for real.

Academic finance papers turned into executable strategies by PhDs and researchers. Full Python code, institutional backtests, overfitting detection.


Your data stays private. We never share or sell your information.

Backed by operators from
Citadel
Two Sigma
University of Pennsylvania
UC Berkeley
Latest Model
Live May 10, 2026
Betting Against Beta
Frazzini & Pedersen, Journal of Financial Economics, 2014
factor long-short equity
Equity curve · 2010–2026
Sharpe
1.42
CAGR
14.7%
Max Drawdown
−18.3%
Calmar
0.80
bab_factor.py Python
def compute_bab_portfolio(returns, market_cap, window=252):
    """
    Betting Against Beta, Frazzini & Pedersen (2014), JFE.
    Constructs a dollar-neutral BAB factor portfolio.
    """
    # Rolling beta of each asset against an equal-weighted market proxy
    market = returns.mean(axis=1)
    betas = (returns.rolling(window).cov(market)
             / market.rolling(window).var())

    # Cross-sectional rank: long the bottom 30% of betas, short the top 30%
    beta_rank = betas.rank(axis=1, pct=True)
    w_low  = (beta_rank <= 0.3).astype(float)
    w_high = (beta_rank >= 0.7).astype(float)
    ...
Why not just use AI?
AI can't backtest.
Here's why.

Large language models are powerful. But when it comes to quantitative backtesting, they introduce biases and skip the verification steps that actually matter.

01

It hallucinates parameters

Ask an LLM for a moving average crossover and it'll pick 50/200. Not because it's optimal, but because it's the most common in training data. No statistical justification. No sensitivity analysis. Just heuristics dressed as research.
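The missing step is a parameter sweep. A minimal sketch of what that looks like, on synthetic data (the function name and grid are illustrative, not QuantLab's actual API): instead of trusting one heuristic pair, score every candidate window pair and see how sensitive the result is.

```python
import numpy as np
import pandas as pd

def crossover_sharpe(prices, fast, slow):
    """Annualized Sharpe of a simple long/flat MA crossover."""
    fast_ma = prices.rolling(fast).mean()
    slow_ma = prices.rolling(slow).mean()
    # Lag the signal one bar so we trade on yesterday's information
    position = (fast_ma > slow_ma).astype(float).shift(1).fillna(0.0)
    rets = prices.pct_change().fillna(0.0) * position
    return np.sqrt(252) * rets.mean() / rets.std()

# Synthetic price path stands in for real data in this sketch
rng = np.random.default_rng(0)
prices = pd.Series(100 * np.exp(np.cumsum(rng.normal(0.0003, 0.01, 2000))))

# Sweep a grid of window pairs instead of trusting 50/200
grid = {(f, s): crossover_sharpe(prices, f, s)
        for f in (10, 20, 50) for s in (100, 150, 200)}
```

If the Sharpe varies wildly across the grid, the "optimal" pair is noise; if it is flat, the choice barely matters. Either way, 50/200 was never a finding.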

02

It skips overfitting detection

AI-generated backtests never run Superior Predictive Ability (SPA) tests, Probability of Backtest Overfitting (PBO) estimates, or walk-forward validation. The result: strategies that look perfect in-sample and collapse out-of-sample. A 2.5 Sharpe that's really a 0.3.
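The collapse is easy to demonstrate. A hypothetical sketch (illustrative names, synthetic data): generate many "strategies" that are pure noise, pick the best one in-sample, and watch its edge vanish out-of-sample.

```python
import numpy as np

def sharpe(r):
    """Annualized Sharpe from daily returns."""
    return np.sqrt(252) * r.mean() / r.std()

rng = np.random.default_rng(42)
# 500 strategies with zero true edge, ~4 years of daily returns each
rets = rng.normal(0.0, 0.01, size=(500, 1008))
insample, oos = rets[:, :504], rets[:, 504:]

# Select the best performer in-sample, then score the same pick out-of-sample
best = np.argmax([sharpe(r) for r in insample])
sr_in = sharpe(insample[best])   # inflated by selection over 500 trials
sr_oos = sharpe(oos[best])       # the true (zero) edge reasserts itself
```

Selection over enough trials manufactures an impressive in-sample Sharpe from pure noise; the out-of-sample number is what you would actually earn.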

03

It introduces hidden biases

LLMs approximate academic papers instead of implementing them. They miss survivorship bias adjustments, ignore look-ahead bias in feature construction, skip transaction cost modeling, and use simplified factor definitions. The result looks right but is fundamentally flawed.
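Look-ahead bias in particular is a one-line mistake. A hypothetical sketch on synthetic data: a signal computed from today's close cannot be traded at today's close, and the fix is a one-bar lag.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
prices = pd.Series(100 * np.exp(np.cumsum(rng.normal(0.0, 0.01, 1000))))
rets = prices.pct_change()

# A "signal" computed from the close of day t
signal = np.sign(rets)

# Trading it on day t itself peeks at the future: sign(r_t) * r_t >= 0 always
biased = (signal * rets).mean()

# Lagging one bar trades at t+1, using only information available at t
realistic = (signal.shift(1) * rets).mean()
```

The unlagged version "earns" the average absolute daily move on random data; the lagged version earns roughly nothing, which is the honest answer here.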

04

It never verifies anything

An LLM has no way to verify whether its backtest matches the paper's reported results. It can't compare its Sharpe to Table 3 of Frazzini & Pedersen. It generates code, but it never checks its claims against the source.

Every model on QuantLab is verified against the original paper's results. Every backtest is checked for survivorship bias, look-ahead bias, and overfitting. Zero shortcuts. Zero unverified claims.

FAQ
What is QuantLab? +
QuantLab is a platform where we publish institutional-grade quant models based on peer-reviewed academic papers. Each model comes with full Python source code, detailed backtest results with overfitting detection, and a clear explanation of the underlying research.
Who builds the models? +
Our team includes PhDs in Physics, Applied Mathematics, and Financial Engineering, with operational experience at firms like Citadel and Two Sigma. Every model is verified against the original paper. Every backtest is checked for bias. Nothing is published until it passes.
What does "institutional-grade backtest" mean? +
It means we apply the same validation methods used by top quantitative hedge funds: Stepwise SPA tests for data snooping, Probability of Backtest Overfitting via combinatorial purged cross-validation, walk-forward analysis, deflated Sharpe ratios, and realistic transaction cost modeling.
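As one concrete example from that family, here is a minimal sketch of the Probabilistic Sharpe Ratio of Bailey and López de Prado, the building block behind the deflated Sharpe ratio. It asks how confident we can be that the true Sharpe exceeds a benchmark once skewness, kurtosis, and sample length are accounted for. The function name is illustrative, not QuantLab's actual API.

```python
import numpy as np
from scipy.stats import norm, skew, kurtosis

def probabilistic_sharpe(rets, sr_benchmark=0.0):
    """P(true Sharpe > benchmark), in per-period (not annualized) units."""
    n = len(rets)
    sr = rets.mean() / rets.std(ddof=1)
    g3 = skew(rets)
    g4 = kurtosis(rets, fisher=False)  # raw kurtosis; 3 for a normal
    # Standard error of the Sharpe estimate, adjusted for non-normality
    denom = np.sqrt(1 - g3 * sr + (g4 - 1) / 4 * sr**2)
    return norm.cdf((sr - sr_benchmark) * np.sqrt(n - 1) / denom)
```

With few observations or heavy tails, even a high in-sample Sharpe can yield an unconvincing probability, which is exactly the point of running the test.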
Can I use the code in production? +
Yes. All published model code is open-source Python. You can read it, adapt it, and deploy it on your own infrastructure. We provide the research and validation; how you use it is up to you.
How often are new models published? +
We publish new models every week. Each model is based on a specific academic paper, fully coded from scratch, and backtested before publication.
Will I be able to build my own models? +
That's the roadmap. QuantLab is the first step. We're building a full model builder where you can compose strategies from modular blocks, run your own backtests with institutional infrastructure, and iterate without overfitting. Stay tuned.