How to Compare a Multicap Fund Fairly When Its Benchmark Changed Mid-Period
Compare a multicap fund across a benchmark change by using rolling returns, excess return over the right benchmark for each window, and peer medians inside the same category. Raw trailing returns alone will mislead you whenever the yardstick changes mid-period.
You pulled up the five-year chart of your multicap fund and something feels off. The benchmark showing next to the NAV today is not the same one your fund was tracked against in 2021. That is a real problem when you are trying to figure out how to check mutual fund performance in India, because the goalposts have shifted mid-game.
In 2022, the market regulator forced every multicap fund to adopt a broader benchmark. Funds that once compared themselves to Nifty 500 moved to Nifty 500 Multicap 50:25:25 or a similar blend. A clean before-and-after comparison is no longer obvious to an average investor.
Why the benchmark change even happened
The regulator wanted multicap funds to truly hold a mix of large, mid, and small cap stocks. Before the rule change, many were tilted heavily toward large caps because that felt safer to fund managers. The allocation rule forced the mix, and the new benchmark reflects it, so a lingering large cap tilt now shows up as a gap against the yardstick instead of hiding behind it.
For you as an investor, the shift means old performance numbers were run against one yardstick and new numbers run against another. Lining them up side by side without adjustment is like comparing a race run uphill with the same race run downhill. The runner looks faster in one of them for reasons that have nothing to do with effort.
This problem is not unique to multicap funds. Benchmark changes happen across categories whenever the regulator or the fund house updates classification rules. You just see it most clearly in multicap because so many funds changed at once.
Why your gut says the numbers are off
Most investors look at one number, the fund's trailing three-year or five-year return, and compare it to the benchmark shown on the screen. Your broker's app or the fund factsheet paints a single line for the benchmark, so the shift gets hidden in the middle of the chart.
Behind the scenes, the chart is splicing together two different benchmark series. The NAV line is real and continuous. The benchmark line is actually two different benchmarks joined at the date the switch happened. That join is where your gut feeling that the numbers are off is coming from.
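To see what that join actually does, here is a minimal sketch with made-up quarterly index levels. The dates, values, and the switch date are all invented purely to illustrate the mechanics; your app's chart does something equivalent with the real series.

```python
import pandas as pd

# Made-up quarterly index levels for the old and new benchmarks, purely to
# illustrate how a chart splices two series into one line at the switch date.
dates = pd.date_range("2021-01-01", periods=8, freq="QS")
old_benchmark = pd.Series([100, 104, 109, 113, 118, 121, 127, 133], index=dates)
new_benchmark = pd.Series([100, 103, 111, 114, 122, 129, 138, 146], index=dates)

switch_date = pd.Timestamp("2022-01-01")  # hypothetical "with effect from" date

# Take the old benchmark up to the switch, then rescale the new benchmark so
# the two series meet at the switch date and join them into a single line.
old_part = old_benchmark[old_benchmark.index < switch_date]
new_part = new_benchmark[new_benchmark.index >= switch_date]
new_part = new_part * (old_part.iloc[-1] / new_part.iloc[0])

spliced = pd.concat([old_part, new_part])
print(spliced)
```

The spliced line looks perfectly smooth, which is exactly why the switch is so easy to miss on screen.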
A fair way to compare multicap performance across the switch
Start with rolling returns over shorter windows that sit entirely before or entirely after the benchmark change. Three-year rolling returns from the pre-change period can be compared with three-year rolling returns from the post-change period only when each window is measured against the benchmark that was actually in force.
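A rough sketch of that split, assuming you have exported month-end fund NAVs yourself into a file; the file name, column name, and switch date below are placeholders, not anything a broker app or AMC publishes in this form:

```python
import pandas as pd

def rolling_cagr(levels: pd.Series, years: int = 3) -> pd.Series:
    """Annualised return of every rolling window of the given length."""
    months = years * 12
    return (levels / levels.shift(months)) ** (1 / years) - 1

# Month-end NAV history you have collected yourself (column name assumed).
prices = pd.read_csv("fund_history.csv", parse_dates=["date"], index_col="date")
switch_date = pd.Timestamp("2022-04-01")   # date from the factsheet footnote

fund_roll = rolling_cagr(prices["fund_nav"])

# Keep only windows that sit entirely on one side of the switch:
# pre-change windows end before the switch, post-change windows start after it.
pre_windows = fund_roll[fund_roll.index < switch_date].dropna()
post_windows = fund_roll[fund_roll.index >= switch_date + pd.DateOffset(years=3)].dropna()

print("median 3Y rolling return, pre-change :", round(pre_windows.median(), 4))
print("median 3Y rolling return, post-change:", round(post_windows.median(), 4))
```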
Use excess return over benchmark for each period, not the raw NAV return. If the fund beat its benchmark by 2 percentage points before the change and by 1.5 points after, that is a meaningful comparison. Raw NAV returns mix the benchmark story with the manager's skill, and you will end up blaming or praising the wrong thing.
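A sketch of that calculation, assuming monthly returns for the fund and for whichever benchmark applied in each half; the file and column names are invented for illustration:

```python
import pandas as pd

# Monthly returns you have assembled yourself (column names assumed).
df = pd.read_csv("monthly_returns.csv", parse_dates=["date"], index_col="date")
switch_date = pd.Timestamp("2022-04-01")

pre = df[df.index < switch_date]
post = df[df.index >= switch_date]

def annualised(monthly_returns: pd.Series) -> float:
    """Geometric annualised return from a series of monthly returns."""
    growth = (1 + monthly_returns).prod()
    years = len(monthly_returns) / 12
    return growth ** (1 / years) - 1

# Each period is judged against the benchmark that was actually in force then.
excess_pre = annualised(pre["fund"]) - annualised(pre["old_benchmark"])
excess_post = annualised(post["fund"]) - annualised(post["new_benchmark"])

print(f"excess return vs benchmark in force, pre-change : {excess_pre:.2%}")
print(f"excess return vs benchmark in force, post-change: {excess_post:.2%}")
```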
The third useful number is tracking error against each benchmark. A rising tracking error after the switch can mean the manager is struggling with the new, broader mandate. A falling tracking error can mean the fund has settled into its new skin.
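Tracking error is just the volatility of the monthly excess return, annualised. A sketch under the same assumed file and column names as above:

```python
import numpy as np
import pandas as pd

def tracking_error(fund: pd.Series, benchmark: pd.Series) -> float:
    """Annualised standard deviation of the monthly excess return."""
    excess = fund - benchmark
    return excess.std(ddof=1) * np.sqrt(12)

# Monthly returns you have assembled yourself (column names assumed).
df = pd.read_csv("monthly_returns.csv", parse_dates=["date"], index_col="date")
switch_date = pd.Timestamp("2022-04-01")

te_pre = tracking_error(df.loc[df.index < switch_date, "fund"],
                        df.loc[df.index < switch_date, "old_benchmark"])
te_post = tracking_error(df.loc[df.index >= switch_date, "fund"],
                         df.loc[df.index >= switch_date, "new_benchmark"])

print(f"tracking error pre-change : {te_pre:.2%}")
print(f"tracking error post-change: {te_post:.2%}")
```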
What to look for on the factsheet
Every fund factsheet lists the current benchmark and, in most cases, the previous one. Look for footnotes that say "benchmark changed with effect from..." and note the date. That date splits your analysis into two clean halves.
Also check the category average for both periods. If the fund was in the top quartile before and dropped to the third quartile after, ask whether the new benchmark exposes a real weakness in the manager's process. Sometimes yes, sometimes the drop is just statistical noise from a wider, more volatile mandate.
Do not skip the portfolio breakdown. If the fund is still running a large cap tilt inside a multicap mandate, its numbers will look weak against a truly multicap benchmark. That is a fund problem, not a benchmark problem.
How to prevent this confusion in future
Keep a simple tracking sheet. Every time you start monitoring a fund, record the benchmark it uses on that date. If the fund switches benchmarks later, you know exactly when the break happened and can redo your analysis around that date.
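The sheet can be as plain as a CSV with one row per benchmark regime per fund. A minimal sketch; the fund name and dates below are placeholders, not recommendations:

```python
import csv
from datetime import date

# One row per benchmark regime per fund (all values here are hypothetical).
rows = [
    {"fund": "Example Multicap Fund", "benchmark": "Nifty 500 TRI",
     "effective_from": date(2019, 1, 1)},
    {"fund": "Example Multicap Fund", "benchmark": "Nifty 500 Multicap 50:25:25 TRI",
     "effective_from": date(2022, 4, 1)},
]

with open("benchmark_history.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["fund", "benchmark", "effective_from"])
    writer.writeheader()
    writer.writerows(rows)
```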
Compare against peer medians inside the same category instead of relying only on the benchmark. Peer medians shift together when rules change, so the fund's rank inside the category stays readable even across regulatory resets. You can find published peer data on the AMFI website.
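However you export the peer data, the comparison itself is simple: the category median and the fund's percentile rank within it, period by period. A sketch assuming you have saved peer returns into a CSV with one column per fund and one row per period (layout and fund name are assumptions):

```python
import pandas as pd

# Peer returns you have downloaded and saved yourself; rows = periods,
# columns = funds in the category (file layout assumed for illustration).
peers = pd.read_csv("multicap_peer_returns.csv", index_col=0)

my_fund = "Example Multicap Fund"
for period, row in peers.iterrows():
    median = row.median()
    rank = row.rank(pct=True)[my_fund]   # percentile within the category
    print(f"{period}: category median {median:.2%}, "
          f"fund percentile {rank:.0%} (higher is better)")
```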
Do not judge a fund on less than three years of clean data. Benchmark changes, manager changes, and market cycles all hide the truth over short windows. If the available window is shorter than that, say so out loud and wait rather than drawing a confident conclusion.
Remember that benchmark changes are sometimes good news. They force funds to behave more like their label promises. A multicap fund that was quietly running like a large cap fund before 2022 was failing its mandate even if its returns looked fine. The broader benchmark just made the failure visible, which is exactly what a good measurement system should do.
Frequently Asked Questions
- When did multicap fund benchmarks change?
- Most multicap funds switched to a broader benchmark such as Nifty 500 Multicap 50:25:25 in 2022, after the regulator updated its classification rules.
- Can I trust the five-year return shown on the app?
- Only for the NAV part. The benchmark line on the chart is stitched together from two different benchmarks if the switch falls inside your window.
- What is excess return over benchmark?
- It is the fund's return minus the benchmark's return for the same period. It isolates manager skill from benchmark behaviour.
- Why use peer medians instead of the benchmark?
- Peer medians inside the same category shift together when rules change, so the fund's relative rank stays readable across regulatory resets.
- How long should my analysis window be?
- At least three years of clean data under a single benchmark. Shorter windows hide too much cyclical and regulatory noise.