Documentation
Thresholds Used
- Total Indexers: 175
- Active Indexers: 87
- Indexers with allocations: 79
- Underserving Subgraphs Threshold: 10 subgraphs
- Indexers serving less than 10 allocations: 22
- Small Active Indexers: 15 (less than 1M GRT allocated)
- Medium Active Indexers: 42 (between 1M and 20M GRT allocated)
- Large Active Indexers: 10 (between 20M and 50M GRT allocated)
- Mega Active Indexers: 12 (more than 50M GRT allocated)
How the Indexer Score is Calculated
The Indexer Score is a weighted combination of two critical performance metrics:
- AER (Allocation Efficiency Ratio): Reflects how efficiently an indexer spreads its GRT across subgraphs. Weight: 70%
- QFR (Query Fee Ratio): Indicates how efficiently an indexer generates query fees relative to its allocated GRT. Weight: 30%
This blended metric evaluates an indexer's overall performance, balancing allocation efficiency and query fee generation.
In the following sections, we break down the methodology used to compute both AER and QFR,
including their normalization processes. The final score is adjusted to a uniform scale where 1 represents the best performance
and 10 the worst, despite differing normalization directions for AER and QFR.
How Allocation Efficiency Ratio (AER) is calculated:
AER = Total GRT Allocated / (Number of Allocations × Average GRT per Allocation)
The Allocation Efficiency Ratio (AER) measures how effectively an indexer distributes their staked GRT across subgraphs.
‼️ A lower AER reflects more efficient allocation, while a higher AER suggests over-concentration.
Average allocation targets are based on indexer size:
- Small Indexers: 5,000 GRT per subgraph
- Medium Indexers: 10,000 GRT per subgraph
- Large Indexers: 20,000 GRT per subgraph
- Mega Indexers: 40,000 GRT per subgraph
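Putting the formula and the size-based targets together, a minimal Python sketch might look like this (the function and dictionary names are illustrative, not taken from the dashboard's codebase):

```python
# Average GRT allocation targets by indexer size, from the list above.
# These names are illustrative, not the dashboard's actual code.
ALLOCATION_TARGETS = {
    "small": 5_000,
    "medium": 10_000,
    "large": 20_000,
    "mega": 40_000,
}

def compute_aer(total_grt_allocated: float, num_allocations: int, size: str) -> float:
    """AER = Total GRT Allocated / (Number of Allocations * target GRT per allocation)."""
    target = ALLOCATION_TARGETS[size]
    return total_grt_allocated / (num_allocations * target)

# A medium indexer with 5M GRT spread across 100 allocations:
print(compute_aer(5_000_000, 100, "medium"))  # 5.0
```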
How AER is Normalized:
Normalized AER = 1 + 9 × (min(AER, 500) / 500)
AER is normalized to a scale from 1 (best) to 10 (worst) to account for varying efficiency levels across indexers:
- Capping: AER values are capped at 500 to limit the impact of extreme outliers (e.g., highly concentrated allocations).
- Scaling: The capped AER is scaled linearly from 0 to 500 onto a 1-to-10 range using the formula above.
- Interpretation:
- AER = 0 → Normalized = 1 (most efficient, best)
- AER = 500 or higher → Normalized = 10 (least efficient, worst)
- Example: AER = 23.788 → Normalized = 1 + 9 × (23.788 / 500) ≈ 1.43
This ensures AER values are fairly compared, with lower ratios (better performance) resulting in lower scores.
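The capping and scaling steps above can be sketched in a few lines (the function name is illustrative):

```python
def normalize_aer(aer: float) -> float:
    """Map a raw AER onto a 1 (best) to 10 (worst) scale, capping outliers at 500."""
    return 1 + 9 * (min(aer, 500) / 500)

print(round(normalize_aer(23.788), 2))  # 1.43, matching the example above
print(normalize_aer(0))                 # 1.0 (most efficient)
print(normalize_aer(2_000))             # 10.0 (capped at 500, least efficient)
```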
How Query Fee Ratio (QFR) is calculated:
QFR = Query Fees Generated / Total GRT Allocated
The Query Fee Ratio (QFR) measures how efficiently an indexer generates query fees per unit of allocated GRT.
‼️ A higher QFR indicates better performance, as it means more query fees are earned per GRT allocated.
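The ratio itself is a single division; a hypothetical helper:

```python
def compute_qfr(query_fees: float, total_grt_allocated: float) -> float:
    """QFR = Query Fees Generated / Total GRT Allocated."""
    return query_fees / total_grt_allocated

# 2,856 GRT of query fees against 1M GRT allocated:
print(compute_qfr(2_856, 1_000_000))  # 0.002856
```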
How QFR is Normalized:
Normalized QFR = 1 + 9 × (min(QFR, 1.0) / 1.0)
QFR is normalized to a scale from 10 (best) to 1 (worst) to reflect its efficiency metric, where higher raw QFR values are better:
- Capping: QFR is capped at 1.0, representing a theoretical maximum where query fees equal the allocated GRT (a rare but ideal scenario).
- Scaling: The capped QFR is scaled linearly from 0 to 1.0 onto a 1-to-10 range using the formula above, so that higher QFR values (better performance) result in higher normalized scores.
- Interpretation:
- QFR = 1 or higher → Normalized = 10 (most efficient, best)
- QFR = 0 → Normalized = 1 (least efficient, worst)
- Example: QFR = 0.002856 → Normalized = 1 + 9 × (0.002856 / 1) ≈ 1.03
This normalization preserves QFR's meaning: indexers generating more query fees relative to their allocations receive higher (better) scores,
while those with little to no fees score lower.
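Following the interpretation above (QFR = 1 or higher maps to 10, QFR = 0 maps to 1), the normalization can be sketched as (illustrative name, not the dashboard's actual code):

```python
def normalize_qfr(qfr: float) -> float:
    """Map a raw QFR onto a 1 (worst) to 10 (best) scale, capping at 1.0."""
    return 1 + 9 * (min(qfr, 1.0) / 1.0)

print(normalize_qfr(1.0))  # 10.0 (fees equal allocation: best)
print(normalize_qfr(0.0))  # 1.0 (no fees: worst)
```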
How the Final Indexer Score is calculated:
The final Indexer Score combines AER (70%) and QFR (30%) into a unified scale from 1 (best) to 10 (worst).
Since AER and QFR have opposite normalization directions, QFR is adjusted before combining:
- AER Normalized: Ranges from 1 (best, low AER) to 10 (worst, high AER).
- QFR Normalized: Ranges from 10 (best, high QFR) to 1 (worst, low QFR). To align with the final score's direction, it's adjusted using: QFR Adjusted = 11 - Normalized QFR, flipping it to 1 (best) to 10 (worst).
Final Score = (Normalized AER × 0.7) + ((11 - Normalized QFR) × 0.3)
Hereβs how it works:
- AER Contribution: Normalized AER (1 to 10) is multiplied by 0.7, contributing 70% to the final score. A lower AER (better efficiency) lowers the score.
- QFR Contribution: Normalized QFR (10 to 1) is inverted to QFR Adjusted (1 to 10) by subtracting it from 11, then multiplied by 0.3, contributing 30%. A higher raw QFR (better fee generation) results in a lower adjusted value, lowering the final score.
- Final Scaling: The weighted sum naturally falls between 1 and 10, with 1 indicating the best performance (low AER, high QFR) and 10 the worst (high AER, low QFR).
Examples:
- Best Case: AER = 0 (Normalized = 1), QFR = 1 (Normalized = 10, Adjusted = 1) → Final = (1 × 0.7) + (1 × 0.3) = 1.0
- Worst Case: AER = 500 (Normalized = 10), QFR = 0 (Normalized = 1, Adjusted = 10) → Final = (10 × 0.7) + (10 × 0.3) = 10.0
- Mixed Case: AER = 23.788 (Normalized ≈ 1.43), QFR = 0.002856 (Normalized ≈ 1.03, Adjusted ≈ 9.97) → Final = (1.43 × 0.7) + (9.97 × 0.3) ≈ 3.99
This method ensures that efficient allocation and high query fee generation both drive the final score toward 1 (best),
while poor performance in either metric increases it toward 10 (worst).
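The weighting and the QFR flip described above can be sketched as (illustrative function name):

```python
def final_score(normalized_aer: float, normalized_qfr: float) -> float:
    """Blend normalized AER (1 best .. 10 worst) and normalized QFR (10 best .. 1 worst)."""
    qfr_adjusted = 11 - normalized_qfr  # flip QFR so 1 = best, 10 = worst
    return normalized_aer * 0.7 + qfr_adjusted * 0.3

# Best case (AER normalized = 1, QFR normalized = 10) and worst case:
print(round(final_score(1, 10), 2))  # 1.0
print(round(final_score(10, 1), 2))  # 10.0
```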
⚠️ Underserving Penalty: Indexers serving fewer than 10 subgraphs are considered underserving and receive a penalty to their final score.
The penalty is calculated as Penalty = 2.0 × (10 - number_of_subgraphs) / 10, and the new score is capped at 10.
For example, an Indexer with 1 subgraph receives a penalty of 1.8, while an Indexer with 5 subgraphs receives a penalty of 1.0.
This ensures that Indexers are incentivized to support a diverse set of subgraphs, contributing to the health and decentralization of The Graph Network.
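The penalty step can be applied as follows (a sketch; names are illustrative, not the dashboard's actual code):

```python
def apply_underserving_penalty(score: float, num_subgraphs: int) -> float:
    """Add the underserving penalty for fewer than 10 subgraphs, capping the score at 10."""
    if num_subgraphs >= 10:
        return score
    penalty = 2.0 * (10 - num_subgraphs) / 10
    return min(score + penalty, 10.0)

# 1 subgraph adds a 1.8 penalty; 5 subgraphs add a 1.0 penalty:
print(round(apply_underserving_penalty(1.5, 1), 2))  # 3.3
print(apply_underserving_penalty(1.5, 5))            # 2.5
```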
Performance Flags
Each indexer is assigned a Performance Flag based on its final Indexer Score,
providing a quick visual indicator of its overall efficiency and effectiveness:
- Excellent 🟢 (1.0 - 1.25): Top-tier performance with highly efficient allocation and strong query fee generation.
- Fair 🟡 (1.26 - 2.5): Average performance with moderate inefficiencies or lower query fee generation, indicating room for improvement.
- Poor 🔴 (2.51 - 10.0): Poor performance with significant inefficiencies or negligible query fees, requiring attention.
These ranges are designed to reflect the distribution of scores across the network,
distinguishing top performers (🟢) from those needing optimization (🔴).
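The flag thresholds amount to a simple lookup (a sketch; plain-text labels stand in for the colored markers):

```python
def performance_flag(score: float) -> str:
    """Map a final Indexer Score (1 to 10) to its Performance Flag."""
    if score <= 1.25:
        return "Excellent"
    if score <= 2.5:
        return "Fair"
    return "Poor"

print(performance_flag(1.1))  # Excellent
print(performance_flag(2.0))  # Fair
print(performance_flag(7.5))  # Poor
```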