# Editorial Policy
What counts as a node, how often we review, and how to request a correction.
## What qualifies as a node
Any published machine-learning interatomic potential with a public code repository is eligible. To merit its own node (versus a description inside a parent entry), a model must introduce a distinct architecture, training dataset, or use case. Minor version bumps or re-trainings are folded into their parent.
## Distinct model vs. variant
If a model shares architecture and dataset with an existing entry and differs only in hyperparameters or training budget, we list it as a variant in the parent description — not as its own card. Forks with a genuinely new inductive bias or dataset get their own node.
## Review cadence
Every entry carries a lastReviewed date. We aim to re-review each model at least once every 12 months, and again whenever a broken link is reported or a significant upstream change lands. Entries not reviewed within the last 12 months surface in the coverage table below as candidates for refresh.
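The 12-month refresh rule above can be sketched as a simple date comparison. The entry names, dates, and dict shape here are hypothetical illustrations, not the hub's real data model.

```python
from datetime import date, timedelta

# 12-month re-review target from the policy above
REVIEW_WINDOW = timedelta(days=365)

def needs_refresh(last_reviewed: date, today: date) -> bool:
    """True once an entry's lastReviewed date is over 12 months old."""
    return today - last_reviewed > REVIEW_WINDOW

# hypothetical entries keyed by name, each with a lastReviewed date
entries = {
    "model-a": date(2024, 1, 10),
    "model-b": date(2025, 3, 2),
}
today = date(2025, 6, 1)
stale = sorted(name for name, d in entries.items() if needs_refresh(d, today))
```

Reported broken links and upstream changes trigger a review regardless of this window, so a real implementation would treat the date check as only one of several triggers.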
## What “maintained” means
- active — commits, releases, or issue responses in the last 6 months.
- maintained — receives bug fixes and compatibility updates but no new features.
- archived — repository frozen or marked archived upstream.
- experimental — research code, may break; no maintenance commitment.
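The four statuses above form a closed vocabulary, which could be validated with an enum. This is a hypothetical sketch; MLIP Hub's actual data model may represent the field differently.

```python
from enum import Enum

class Maintenance(Enum):
    """Hypothetical enum mirroring the four maintenance statuses."""
    ACTIVE = "active"              # activity in the last 6 months
    MAINTAINED = "maintained"      # bug fixes and compat updates only
    ARCHIVED = "archived"          # frozen or archived upstream
    EXPERIMENTAL = "experimental"  # research code, no commitment

def parse_status(value: str) -> Maintenance:
    """Validate a card's maintenance field; raises ValueError otherwise."""
    return Maintenance(value)
```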
## Corrections & additions
To report a broken link or a stale description, or to request a new model, open an issue on the GitHub repository or email support@mliphub.com. See Contribute for the full workflow.
## Metadata coverage
MLIP Hub is progressively filling in richer metadata (domain coverage, license, framework support, maintenance status, last-reviewed date) across the 27 tracked models. Overall fill: 14%.
| Field | Filled | Coverage |
|---|---|---|
| coverage | 4/27 | 15% |
| useCases | 4/27 | 15% |
| properties | 4/27 | 15% |
| frameworks | 4/27 | 15% |
| license | 4/27 | 15% |
| maintenance | 4/27 | 15% |
| lastReviewed | 4/27 | 15% |
| lastUpdated | 0/27 | 0% |
| trainingData | 4/27 | 15% |
| tags | 4/27 | 15% |
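The per-field figures above are simple fill ratios: for example, 4 of 27 models with a field set rounds to 15%. A minimal sketch, assuming each model is a dict of optional metadata fields (the record shape is an assumption, not the hub's real schema):

```python
def field_coverage(models: list[dict], field: str) -> tuple[int, int, int]:
    """Return (filled, total, percent) for one metadata field."""
    filled = sum(1 for m in models if m.get(field) is not None)
    total = len(models)
    pct = round(100 * filled / total) if total else 0
    return filled, total, pct

# hypothetical records: 4 of 27 models carry a license value
models = [{"license": "MIT"}] * 4 + [{}] * 23
```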