A curated directory of machine-learning interatomic potentials (MLIPs). Each entry lists the model name, category, release year, authoring group, and a short description.
NequIP
Category:Equivariant · Year: · Authors:Harvard (Kozinsky) / MIT (Smidt)
E(3)-equivariant message-passing potential that set the template for data-efficient, high-accuracy force fields across molecules and materials.
Higher-order equivariant message passing (4-body messages) that reaches SOTA accuracy with only 1-2 layers; later extended into the universal MACE-MP foundation model family.
Graph Atomic Cluster Expansion: a foundation-scale implementation of ACE with explicit multi-body basis functions for wide-coverage materials modelling.
Non-equivariant, non-conservative graph neural network potential that systematically trades off roto-equivariance, conservatism, and graph sparsity for >10x lower latency and >8x lower memory use at near-SOTA accuracy on large periodic systems.
Predecessor to Orb-v3. A non-equivariant graph network exploring trade-offs between accuracy and inference cost on materials simulation; trained on MPTrj + Alexandria with rotation-invariance learnt rather than imposed.
Orb-v3 variant for molecules, electrolytes, metal complexes, and biomolecules, trained on the ~100M-structure OMol25 dataset with explicit total-charge and spin-multiplicity conditioning.
License: Apache-2.0
Tags: transformer, charge-aware, spin-aware, foundation model
Category:Equivariant · Year: · Authors:Thomas et al.
First E(3)-equivariant convolutional architecture for point clouds, introducing spherical-harmonic tensor products that underlie nearly all modern equivariant MLIPs (NequIP, MACE, Equiformer).
License: MIT
Tags: equivariant, E(3), tensor field, point cloud
Category:Transformer · Year: · Authors:Fuchs et al.
Self-attention generalized to SE(3)-equivariant inputs via tensor field attention; an early blueprint for equivariant transformer architectures like Equiformer.
Category:Transformer · Year: · Authors:Shoghi et al. (CMU / Meta)
Joint Multi-domain Pre-training: a strategy that trains one shared GemNet-OC backbone simultaneously on OC20, OC22, ANI-1x and Transition-1x (~120M systems), demonstrating multi-dataset pretraining for transferable potentials — a precursor to the universal MLIP foundation models that followed.
License: CC-BY-NC-4.0
Tags: transformer, multi-task pretraining, foundation model
Equivariant Smooth Energy Network: conservative-force equivariant GNN with a smooth potential energy surface designed for stable long-horizon MD. Serves as the backbone underneath Meta's UMA foundation model.
Universal Model for Atoms: a Mixture of Linear Experts (MoLE) foundation model built on the eSEN backbone, trained on ~500M structures spanning OC20, ODAC23, OMat24, OMC25, and OMol25. NeurIPS 2025 spotlight.
License: proprietary
Tags: transformer, mixture-of-experts, foundation model, charge-aware, spin-aware
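As a rough illustration of the mixture-of-linear-experts idea described above, the sketch below mixes several expert weight matrices with routing coefficients computed once per system, so the layer collapses to a single linear map at inference; all class, parameter, and dimension names are hypothetical and not taken from the UMA code.
```python
# Rough mixture-of-linear-experts (MoLE) sketch; names and sizes are illustrative.
import torch
import torch.nn as nn

class MoLELinear(nn.Module):
    def __init__(self, d_in=64, d_out=64, n_experts=8, d_sys=16):
        super().__init__()
        # One weight matrix per expert, mixed (not hard-routed) per system.
        self.experts = nn.Parameter(torch.randn(n_experts, d_out, d_in) / d_in ** 0.5)
        self.router = nn.Linear(d_sys, n_experts)

    def forward(self, x, sys_embed):
        """x: (N, d_in) atom features; sys_embed: (d_sys,) global system descriptor."""
        coeffs = torch.softmax(self.router(sys_embed), dim=-1)   # mixing weights, fixed per system
        W = torch.einsum("e,eoi->oi", coeffs, self.experts)      # collapse experts into one matrix
        return x @ W.T                                           # so inference is a single linear map

layer = MoLELinear()
out = layer(torch.randn(10, 64), torch.randn(16))                # (10, 64)
```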
Compact E(3)-equivariant foundation potential pairing a simplified NequIP design with equivariant RMS layer normalization and the Muon optimizer; 700K parameters trained in ~100 A100 GPU-hours, with ~20x lower training cost and two orders of magnitude faster inference than the top Matbench-Discovery models.
License: MIT
Tags: equivariant, JAX, foundation model, Muon optimizer
Point Edge Transformer: an unconstrained graph transformer for atomistic systems. Drops strict equivariance in favour of attention-based message passing on point-and-edge inputs, with rotation-invariance learnt from data rather than imposed by the architecture. Direct precursor to PET-MAD.
Lightweight universal transformer-GNN potential (Point-Edge Transformer) trained on the Massive Atomistic Diversity (MAD) dataset spanning solids, surfaces, and molecules; competitive with larger uMLIPs across diverse atomistic systems.
License: BSD-3-Clause
Tags: transformer, PET, foundation model, lightweight
Category:Transformer · Year: · Authors:Liao et al. (MIT Atomic Architects)
Third-generation SE(3)-equivariant graph attention transformer with improved efficiency, expressivity, and generality; achieves the strongest results within the Equiformer family on OC20 S2EF-2M, MPtrj, OMat24, sAlex, and Matbench-Discovery.
License: MIT
Tags: transformer, SE(3)-equivariant, foundation model
Category:Equivariant · Year: · Authors:ACEsuit (Kovacs, Batatia, Csanyi et al.)
Polarisable electrostatic foundation model that augments MACE with a non-self-consistent polarisable field formalism, learning atomic charge and spin densities (Gaussian-type multipoles) directly from energies/forces; global charge/spin constraints are enforced via learnable Fukui equilibration functions. Trained on OMol25 (~100M structures at ωB97M-V), released in M (12 Å) and L (18 Å) receptive-field variants for molecular chemistry and non-covalent interactions.
License: MIT
Tags: equivariant, MACE, polarisable, charge-aware, spin-aware, long-range-electrostatics, foundation model
Multi-domain universal MACE-architecture potential extending the MACE-Osaka series to 97 elements — the broadest elemental coverage to date — by integrating MACE-Osaka24's inorganic + organic data with the newly constructed HE26 heavy-element dataset of minor actinides assembled from experimental and computational literature. Targets nuclear and actinide chemistry while retaining strong performance on the inorganic MPtrj and organic OFF23 test sets.
License: MIT
Tags: MACE, foundation model, actinides, nuclear, 97 elements
Efficient equivariant graph neural network MLIP that introduces a geometry-aware dual-path dynamic attention mechanism inside its message-passing layers and a physics-informed multi-perspective pooling strategy for global system representations. Demonstrates accuracy competitive with mainstream equivariant models at markedly lower computational cost across organic molecules (QM7, MD17), Li-containing crystals, two-dimensional materials (bilayer graphene, black phosphorus), surface catalytic reactions (formate decomposition), and charged systems, while remaining stable in long-time MD simulations.
Category:Equivariant · Year: · Authors:Zhang et al. (Shanghai AI Lab / CUHK)
Unified neural interatomic potential that embeds an Ewald-inspired reciprocal-space formulation inside an irreducible SO(3)-equivariant framework. Performs equivariant message passing in reciprocal space via learned equivariant k-space filters and an equivariant inverse transform, capturing anisotropic tensorial long-range correlations without sacrificing physical consistency; consistently improves energy and force accuracy, data efficiency, and long-range extrapolation across periodic systems, supramolecular assemblies, conjugated molecules, charged dimers, and biomolecular dynamics.
Scalable, energy-conserving, attention-based MLIP that pairs local neighborhood self-attention with a global all-to-all node attention layer in which every atom attends to every other atom. The data-driven all-to-all component captures long-range interactions without explicit electrostatic priors and remains the most durable ingredient as data and model size scale to O(100M) training samples. Sits atop the OMol25 leaderboard at release while remaining competitive on OMat24 (materials) and OC20 (catalysts); cuts long-range distance-scaling error by ~90% versus the next-best foundation model, with stable long-timescale MD recovering experimental densities and heats of vaporisation.
License: MIT
Tags: transformer, all-to-all attention, long-range-electrostatics, foundation model, charge-aware, spin-aware
Category:Equivariant · Year: · Authors:Ho, van der Oord, Darby, Csányi, Ortner et al. (Cambridge / UBC / ACEsuit)
Equivariant many-body message-passing interatomic potential extending the MACE framework to magnetic materials by embedding atomic magnetic moments as explicit degrees of freedom alongside positions. Learns physically consistent and transferable representations of magnetic behaviour beyond collinear approximations and can incorporate spin-orbit coupling, achieving near density-functional-theory accuracy with strong data efficiency by fine-tuning from a pre-trained foundation model. Targets structural transformations, finite-temperature magnetic phenomena, and high-throughput screening of strongly spin-orbit coupled materials.
Tags: equivariant, MACE, magnetic moments, spin-orbit coupling
Multifidelity Mixture-of-Experts framework built on the strictly local E(3)-equivariant Allegro architecture. Spatially partitions the simulation domain into chemically complex regions (e.g. reactive interfaces) and simple regions (e.g. bulk lattices) and assigns Allegro experts of different capacity to each, enabling expensive high-fidelity inference only where required while a cheaper expert handles the rest of the cell.
Hessian-informed machine learning interatomic potential trained with the Hessian-INformed Training (HINT) protocol — Hessian pre-training, configuration sampling, curriculum learning, and a stochastic projected Hessian loss — to attain Hessian-level accuracy with two to four orders of magnitude fewer high-fidelity Hessian labels than standard training. Substantially improves transition-state search and brings Gibbs free-energy predictions close to chemical accuracy in data-scarce regimes, and reproduces phonon renormalization and superconducting Tc of strongly anharmonic hydrides in close agreement with experiment.
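A hedged sketch of the stochastic projected-Hessian loss idea mentioned above, using PyTorch double backpropagation to match model Hessian-vector products against reference ones; the function names, probe count, and loss form are assumptions for illustration, not the published HINT implementation.
```python
# Sketch of a stochastic projected-Hessian loss (illustrative, not the HINT code):
# random probe vectors v are sampled and the model's Hessian-vector products,
# obtained by double backpropagation, are matched to reference H_ref @ v.
import torch

def hessian_vector_loss(energy_fn, pos, H_ref, n_probe=4):
    """energy_fn: positions -> scalar energy; pos: (N, 3); H_ref: (3N, 3N) reference Hessian."""
    pos = pos.detach().requires_grad_(True)
    E = energy_fn(pos)
    g = torch.autograd.grad(E, pos, create_graph=True)[0].reshape(-1)   # dE/dR (forces are -g)
    loss = torch.zeros(())
    for _ in range(n_probe):
        v = torch.randn_like(g)
        Hv = torch.autograd.grad(g @ v, pos, create_graph=True)[0].reshape(-1)  # model H @ v
        loss = loss + ((Hv - H_ref @ v) ** 2).mean()
    return loss / n_probe

# Toy check with a quadratic energy whose Hessian is the identity:
pos = torch.randn(4, 3)
loss = hessian_vector_loss(lambda r: 0.5 * (r ** 2).sum(), pos, torch.eye(12))
```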
MLIP that explicitly captures electronic degrees of freedom and nonlocal effects, modelled via self-attention in a transformer architecture. Total charge and spin multiplicity are injected as global tokens that condition the local atomic embeddings — a direct conceptual ancestor of charge/spin-conditioned foundation models such as UMA and MACE-POLAR-1.
SpookyNet-based biomolecular force-field framework that combines top-down (whole-protein) and bottom-up (fragment) sampling to train transferable, quantum-accurate ML potentials for proteins and condensed-phase biomolecular dynamics.
License: proprietary
Tags: transformer, biomolecular, fragment sampling, charge-aware
The Behler–Parrinello high-dimensional neural network potential — the 2007 paper that introduced symmetry functions and the atomic-decomposition framework underlying essentially every modern MLIP.
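For readers new to the atomic-decomposition framework, the sketch below shows a radial (G2) symmetry function with a smooth cutoff and a total energy assembled as a sum of per-element atomic contributions; the cutoff, eta, and the toy per-element "networks" are illustrative placeholders, not the original 2007 parameterisation.
```python
# Illustrative Behler-Parrinello-style decomposition (not the original code):
# smooth cutoff, radial G2 symmetry functions, and E_total = sum_i NN_{Z_i}(G_i).
import numpy as np

def f_cut(r, r_c=6.0):
    return np.where(r < r_c, 0.5 * (np.cos(np.pi * r / r_c) + 1.0), 0.0)

def g2(r_ij, eta=0.5, r_s=0.0):
    """Radial symmetry function of one atom, given distances r_ij to its neighbours."""
    return np.sum(np.exp(-eta * (r_ij - r_s) ** 2) * f_cut(r_ij))

def atomic_energy(feature, element):
    # stand-in for a trained per-element neural network
    weights = {"H": 0.1, "O": -0.3}
    return weights[element] * feature

positions = np.array([[0.0, 0.0, 0.0], [0.96, 0.0, 0.0], [-0.24, 0.93, 0.0]])  # toy water
elements = ["O", "H", "H"]
E = 0.0
for i, Zi in enumerate(elements):
    r_ij = np.linalg.norm(np.delete(positions, i, axis=0) - positions[i], axis=1)
    E += atomic_energy(g2(r_ij), Zi)
print(E)
```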
Atomic Cluster Expansion: a complete, systematically improvable many-body basis for the local atomic environment; the mathematical backbone of PACE/GRACE and a strong influence on MACE.
Gaussian Approximation Potentials: a kernel-based MLIP fit to ab-initio energies and forces using SOAP-like many-body descriptors; the canonical kernel reference for descriptor-based potentials.
Deep Potential Molecular Dynamics: local frame descriptors + deep networks giving ab-initio-level accuracy with linear scaling, widely used for large MD.
Message-passing graph neural network built on a Line Graph Series (LiGS) that updates bond, angle, and dihedral representations while preserving energy conservation and physical symmetries; designed for Large Atomistic Models with clean scaling in model size, data, and compute. The DPA-3.1-3M variant trained on OpenLAM-v1 tops zero-shot generalization across 12 downstream tasks.
License: LGPL-3.0
Tags: descriptor, message-passing, line graph, foundation model
Microsoft foundation MLIP using a Graphormer transformer backbone with explicit translation/periodic-boundary invariance and equivariant features for materials. Trained on large-scale ab-initio data spanning 0-5000 K and pressures up to 1000 GPa as a reusable simulator for materials discovery and high-throughput computation.
Universal neuroevolution-potential foundation model spanning 89 elements across inorganic and organic materials, trained via separable natural evolution strategies and distributed in GPUMD for empirical-potential-like speed.
License: GPL-3.0
Tags: descriptor, neuroevolution, foundation model
Category:Equivariant · Year: · Authors:Zhejiang Univ. (Hou lab) / Su et al.
Equivariant network with Tensorized Quadrangle Attention (TQA) that captures three- and four-body interactions in linear time; pre-trained on nablaDFT and fine-tuned on SPICE as a quantum-accurate biomolecular force-field foundation model with ~10x faster inference than MACE-OFF.
License: MIT
Tags: equivariant, linear tensor, biomolecular, attention
Category:Invariant · Year: · Authors:Zhou et al. (CAS / UCAS)
Invariant foundation MLIP using a separable O(N) attention mechanism for three-body interactions; 10M-parameter models trained on OMat24 / MPTrj / sAlex match equivariant SOTA on Matbench-Discovery (F1 0.847) at >13x lower training cost than eSEN-30M-MP.
Billion-parameter Mixture-of-Experts extension of MatRIS that inserts sparse expert modules around the self-attention layer — a message-update MoE for message construction and a feature-update MoE for post-attention refinement — with element-type routing that keeps the activated expert set time-independent and the potential energy surface continuous. Released in M (2.47B) and L (11.50B) variants and trained on heterogeneous domains (molecules, materials, catalysis, MOFs, and direct air capture) via the new Janus hybrid-parallel framework, attaining 1.0–1.2 EFLOPS at >90% parallel efficiency and compressing billion-parameter uMLIP training from weeks to hours on Exascale supercomputers.
License: BSD-3-Clause
Tags: invariant, mixture-of-experts, billion-parameter, multi-task, foundation model
Spectral Neighbor Analysis Potential: a linear descriptor MLIP fit to bispectrum components of the local atomic environment, designed for high-throughput LAMMPS simulations. The canonical industrial-scale linear descriptor potential alongside GAP and MTP.
Moment Tensor Potentials: a systematically improvable linear MLIP based on contractions of moment tensors of the local environment. Pairs naturally with D-optimality / MaxVol active learning (Podryabinkin & Shapeev 2017), making the MTP family the canonical reference for AL-native potential fitting.
License: BSD-2-Clause
Tags: descriptor, linear, moment tensors, active-learning
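A toy illustration of the D-optimality / MaxVol selection idea referenced above: a candidate configuration's feature row is expanded in the rows of the current active set, and the largest expansion coefficient serves as the extrapolation grade; the matrix sizes are arbitrary and the code is not the MLIP package's implementation.
```python
# Toy D-optimality / MaxVol extrapolation grade (illustrative, not the MLIP package):
# the candidate's feature row b is expanded in the rows of the active set A;
# a coefficient with |c_j| > 1 signals extrapolation, triggering a MaxVol swap.
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(10, 10))          # active set: feature rows of selected training configs
b = rng.normal(size=10)                # feature row of a new candidate configuration

c = b @ np.linalg.inv(A)               # expansion coefficients of b in the rows of A
grade = np.abs(c).max()                # extrapolation grade gamma
if grade > 1.0:
    print("extrapolating: label this configuration and update the active set")
```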
Symmetric Gradient-Domain Machine Learning: a kernel ridge regression force field fit directly in the gradient domain, with energy obtained by closed-form integration. Strong on small molecules where rigorous symmetry handling is critical.
Fast Learning of Atomistic Rare Events: a Gaussian-process-regression Bayesian potential trained on-the-fly during MD, with GP-uncertainty driving when to call DFT vs. trust the surrogate. Includes tabulated/mapped force-field export for production-speed MD; principal kernel-based AL framework and immediate predecessor of the Kozinsky group's NequIP/Allegro line.
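The on-the-fly loop can be summarised in a few lines: a cheap surrogate predicts each step, and the reference method is only called when the predictive uncertainty is too large. The 1D toy below uses a hand-rolled Gaussian process and a fake "DFT" function purely for illustration; it is not the FLARE API.
```python
# Toy 1D on-the-fly learning loop (illustrative, not FLARE): a GP surrogate predicts
# a force; when its predictive std exceeds a threshold, the reference is called and
# the point is added to the training set.
import numpy as np

def dft_force(x):                        # stand-in for an expensive ab-initio call
    return -np.sin(3.0 * x)              # toy 1D "force field"

def rbf(a, b, ell=0.3):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

X = np.array([0.0]); y = np.array([dft_force(0.0)])    # seed with one reference point
sigma_n, threshold = 1e-6, 0.1

for x in np.linspace(0.0, 2.0, 50):                    # mock MD trajectory
    K = rbf(X, X) + sigma_n * np.eye(len(X))
    k = rbf(np.array([x]), X)
    mean = (k @ np.linalg.solve(K, y)).item()          # surrogate force prediction
    var = (1.0 - k @ np.linalg.solve(K, k.T)).item()   # predictive variance (prior var = 1)
    if np.sqrt(max(var, 0.0)) > threshold:             # too uncertain: call "DFT", grow data
        X = np.append(X, x); y = np.append(y, dft_force(x))
    # otherwise the surrogate force `mean` would drive this MD step
```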
Original Atoms-in-Molecules Network: a self-consistent message-passing potential that propagates atomic environment vectors through repeated neighbour updates, learning charge-aware atomic representations for organic chemistry. The conceptual precursor to AIMNet-NSE and AIMNet2.
Neural Spin Equilibration variant of AIMNet that handles arbitrary total charge and spin multiplicity through an SCF-like message-passing loop, predicting transferable atomic charges and spins. The conceptual ancestor of charge/spin-conditioned models like AIMNet2 and SpookyNet.
Hierarchically Interacting Particle Neural Network: a message-passing potential that decomposes atomic energies into a hierarchy of n-body terms with explicit residual structure, providing both interpretable hierarchical decomposition and natural uncertainty estimates from the residual stack.
ANI variant extending coverage to seven elements (H, C, N, O, S, F, Cl); widely used in drug discovery for fast geometry and energy scans on organic chemistry.
Original ANI: an atomistic neural network potential built on Behler-style symmetry function descriptors and per-element atomic networks, trained on ~22M DFT structures of organic molecules covering H, C, N, O.
Active-learned extension of ANI-1: applies query-by-committee active learning to grow the training set towards a chemically diverse organic-molecule sampling, dramatically improving generalisation while maintaining ANI-style efficiency.
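A minimal sketch of query-by-committee selection as described above: an ensemble scores candidate structures by prediction disagreement and the most-contested ones are sent for new reference labels. Committee members, descriptors, and the batch size are toy stand-ins, not the ANI training pipeline.
```python
# Minimal query-by-committee sketch (toy stand-ins, not the ANI pipeline): an ensemble
# scores candidate structures by prediction disagreement; the most-contested are labelled.
import numpy as np

rng = np.random.default_rng(0)
candidates = rng.uniform(-1.0, 1.0, size=(1000, 8))     # toy per-structure descriptors

def committee_member(x, seed):                           # stand-in for one trained potential
    w = np.random.default_rng(seed).normal(size=x.shape[1])
    return np.tanh(x @ w)

preds = np.stack([committee_member(candidates, s) for s in range(5)])  # 5-member committee
disagreement = preds.std(axis=0)                         # committee spread per structure
to_label = np.argsort(disagreement)[-32:]                # send the 32 most-contested to DFT
```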
ANI-1x transfer-learned to a CCSD(T)/CBS-quality reference, yielding a near-coupled-cluster-accuracy organic-chemistry potential at deep-network speed. The first widely deployed example of using transfer learning to push beyond DFT.
License: MIT
Tags: descriptor, atomic NN, transfer learning, CCSD(T)
Second-generation Atoms-in-Molecules Network potential covering 14 elements (H, B, C, N, O, F, Si, P, S, Cl, As, Se, Br, I) in neutral and charged states; combines ML-parameterised short-range terms with physics-based long-range electrostatics, trained on ~20M hybrid-DFT (ωB97M-D3) calculations for routine use as a DFT replacement in organic and elemental-organic chemistry.
Drug-discovery-oriented MLIP built on the TensorNet2 architecture — a refined vector–scalar equivariant TensorNet that adds scalar partial-charge features, performs neutral charge equilibration, and includes a long-range Coulomb energy term. Pretrained on a large dataset of drug-like compounds covering H, B, C, N, O, F, Si, P, S, Cl, Br, I in neutral and charged states; balances DFT-level accuracy on torsion scans, MD trajectories, and batched minimisations with high-throughput inference suitable for FEP and lead-optimisation workflows.
License: Apache-2.0
Tags: TensorNet2, drug discovery, charge-aware, long-range-electrostatics
Category:Invariant · Year: · Authors:Schütt et al.
Continuous-filter convolutional network that introduced smooth, translation-invariant filters for molecules; the baseline for many later invariant message-passing potentials. Predicts energies and forces with all-atom symmetry preserved.
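The continuous-filter convolution can be sketched compactly: the filter is a small MLP of a radial-basis expansion of the interatomic distance, making the message smooth in atomic positions. The dimensions, basis, and explicit pair loop below are illustrative, not the SchNet reference implementation.
```python
# Schematic continuous-filter convolution (illustrative sizes, not the SchNet code):
# the filter is an MLP of a radial-basis expansion of the distance, so messages vary
# smoothly with atomic positions.
import torch
import torch.nn as nn

n_rbf, n_feat = 20, 64
centers = torch.linspace(0.0, 5.0, n_rbf)
gamma = 10.0
filter_net = nn.Sequential(nn.Linear(n_rbf, n_feat), nn.SiLU(), nn.Linear(n_feat, n_feat))

def cfconv(x, pos, pairs):
    """x: (N, n_feat) atom features; pos: (N, 3) positions; pairs: iterable of (i, j)."""
    out = torch.zeros_like(x)
    for i, j in pairs:
        d = torch.norm(pos[i] - pos[j])
        rbf = torch.exp(-gamma * (d - centers) ** 2)   # smooth Gaussian distance expansion
        out[i] = out[i] + x[j] * filter_net(rbf)       # element-wise filtered message from j to i
    return out
```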
Category:Invariant · Year: · Authors:Gasteiger et al.
Directional message passing network with spherical basis functions that explicitly encode bond angles, improving data efficiency over SchNet-style models. The original paper (arXiv:2003.03123) introduced the directional MP framework; the follow-up DimeNet++ (arXiv:2011.14115) refined it with faster, uncertainty-aware variants.
Category:Equivariant · Year: · Authors:Schütt et al.
Polarizable Atom Interaction Neural Network: uses coupled scalar/vector features to capture forces and dipoles with E(3) equivariance at lower cost than full tensors.
Materials Graph Network with 3-body interactions; trained on Materials Project relaxations to give a universal potential over most of the periodic table.
Charge-informed graph neural network in the same family of universal Materials Project potentials as M3GNet, using site magnetic moments as a proxy for atomic charge and oxidation state; particularly strong for battery and redox-active materials.
Lightweight universal MLIP distilled from the SevenNet-Omni teacher, delivering over an order-of-magnitude speedup while retaining broad transferability for scalable atomistic simulations on thousands of atoms.
License: MIT
Tags: invariant, distilled, lightweight, foundation model
Multi-fidelity universal foundation MLIP built on the SevenNet-MF backbone and trained on 15 open datasets (~250M structures across molecules, crystals, and surfaces); serves as the teacher model for SevenNet-Nano.
Eighth release of the Preferred Potential: a universal MLIP trained on a large r2SCAN meta-GGA dataset, covering 45 elements off-the-shelf across crystals, molecules, surfaces, and adsorption structures without fine-tuning. Distributed commercially via the Matlantis SaaS platform.
Category:Descriptor · Year: · Authors:Chen et al. (NEP framework)
Universal organic force field for C, H, O, N, S, P built within the Neuroevolution Potential (NEP) framework. Trained on a chemically rich dataset assembled through a unified top-down/bottom-up sampling strategy, providing a balanced description of bond breaking/formation, aromatic growth, hydrogen bonding, van der Waals interactions, and π-stacking; reaches near-DFT force accuracy while running ~200x faster than ReaxFF on identical hardware, enabling hundreds-of-nanoseconds reactive MD.
First universal neural network potential for molecular ground and excited electronic states. An ensemble of MS-ANI-style invariant potentials trained on PubChemQC TD-DFT (B3LYP/6-31+G*) excited-state data combined with CCSD(T)/CBS ground-state energies from ANI-1ccx, with a separate head predicting oscillator strengths of interstate transitions. Approaches TD-DFT accuracy for UV/vis spectra and photodynamics at a fraction of the cost while outperforming semiempirical methods.
Category:Equivariant · Year: · Authors:Picha, Karwounopoulos, Erhard, Boresch, Heid (TU Wien / U. Vienna)
GRACE-architecture MLIP for organic systems, trained on the SPICE v2.0 dataset and integrated with ASE for MD. Two-layer GRACE-OFF models outperform MACE-OFF (including MACE-OFF24(M)) on single-point energies, forces, torsional profiles, and condensed-phase properties of organic liquids and water; for water and hexane they also beat the much more expensive UMA(S) on densities and radial distribution functions. Established as an accurate, GPU-efficient foundation potential for organic-liquid and biomolecular MD.
Transformer-based small-molecule MLIP that adapts the Omnilearned Point-Edge-Transformer (PET) foundation model — pre-trained on ~1 billion LHC particle jets — to molecular dynamics via cross-domain transfer learning. Uses an interaction-matrix attention bias to inject pairwise atomic physics into transformer attention; on the OMol25 dataset OmniMol-M outperforms a 1B-parameter baseline transformer with ~20× fewer parameters, demonstrating the first cross-discipline transfer for scientific point-cloud foundation models.
Hybrid Invariant–Equivariant materials foundation potential that interleaves invariant and O(3)-equivariant message-passing layers to leverage invariant-layer scalability while reserving equivariant layers for high-order interactions. Force and stress are obtained as exact derivatives of a conservative energy, and the model achieves SOTA on Matbench Discovery while running ~90% faster than SevenNet-l3i5 and ~140% faster than EquiformerV2.
License: GPL-3.0
Tags: equivariant, hybrid invariant-equivariant, foundation model
Cross-learning multi-head MACE foundation model that bridges molecular, surface, and inorganic crystal chemistry in a single MLIP. Enhances the MACE architecture with stronger element-weight sharing and non-linear tensor-decomposition product bases, then post-trains a multi-head replay scheme on OMAT-24 (PBE crystals), MPTraj, OMol (ωB97M-VV10), OC20 (surfaces), SPICE, RGD1, and MATPES-r2SCAN heads to unify electronic-structure theories.
License: MIT
Tags: equivariant, MACE, multi-head, cross-domain, foundation model
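A schematic of the multi-head pattern used by models like this one: a shared interaction backbone feeds one readout head per dataset / level of theory, and each training batch is routed to its own head. The head names and backbone stand-in below are hypothetical, not the released training code.
```python
# Schematic multi-head readout (illustrative only): shared backbone, per-dataset heads.
import torch
import torch.nn as nn

class MultiHeadPotential(nn.Module):
    def __init__(self, d=128, heads=("omat_pbe", "omol_wb97mv", "spice")):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(d, d), nn.SiLU(), nn.Linear(d, d))  # stand-in
        self.readouts = nn.ModuleDict({h: nn.Linear(d, 1) for h in heads})

    def forward(self, node_feats, head):
        per_atom = self.readouts[head](self.backbone(node_feats))   # (N, 1) atomic energies
        return per_atom.sum()                                       # total energy for this head

model = MultiHeadPotential()
E = model(torch.randn(5, 128), head="spice")
```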
Category:Invariant · Year: · Authors:Chang, Zhu (Wuhan University of Technology)
Moment Graph Neural Network: rotation-invariant message-passing architecture whose node and edge updates operate on Cartesian moment representations of 3D molecular graphs, capturing high-order angular structure without explicit equivariant tensor products. Reaches SOTA on QM9 and revised MD17 (incl. MD17-ethanol) and generalises to 3BPA and 25-element high-entropy alloys, including amorphous-electrolyte MD.
Tags: invariant, moment representation, molecular potential
Exascale multi-task atomistic graph foundation model built on the HydraGNN framework, with a PaiNN-based message-passing backbone selected via large-scale DeepHyper hyperparameter optimization on Frontier. Jointly pre-trained on 16 open first-principles datasets (~544M structures, 85+ elements) using shared message-passing layers and per-dataset output heads, scaled to 16,000 GPUs and able to evaluate 1.1 billion atomistic structures in 50 seconds for downstream materials screening.
License: BSD-3-Clause
Tags: equivariant, PaiNN, multi-task, exascale, foundation model
Charge-aware extension of the neuroevolution potential (NEP) framework that introduces explicit, environment-dependent partial charges represented per-ion by neural networks of the local descriptor vector. Implemented in GPUMD with both Ewald and particle-particle particle-mesh electrostatics, enabling Born-effective-charge tensors, dielectric properties, and infrared spectra alongside long-range MD scalable to million-atom systems on consumer GPUs.
Successor to PET-MAD: a generally applicable r²SCAN universal interatomic potential that extends elemental coverage to 102 elements via the curated MAD-1.5 dataset (~217k structures). Same Point Edge Transformer (PET) backbone with rotation-invariance learnt from data, retrained at the r²SCAN meta-GGA level with targeted enrichment strategies (molecules, clusters, surfaces, low-dimensional structures, bulk crystals) and uncertainty-quantification-driven outlier removal. Reported as more robust, more accurate, and faster than the original PET-MAD across challenging molecular dynamics benchmarks.
License: BSD-3-Clause
Tags: transformer, PET, foundation model, r²SCAN, 102 elements
Density-first machine-learned electronic-structure framework that learns the Hohenberg-Kohn map from nuclear configurations to the ground-state electron density using an SE(3)-equivariant neural network predicting density coefficients of an atom-centred Gaussian basis, with a Δ-learning prior built from superposed atomic densities. A second equivariant network then maps the predicted density to the total energy, providing a unified framework for molecular dynamics, energies, forces, and electronic observables (dipole moments, polarizabilities, infrared spectra). Validated on ethanol, ethanethiol, resorcinol, and polythiophene oligomers (extrapolating from 1-6 to 12 monomers).
Tags: equivariant, SE(3), electron density, Δ-learning, spectroscopy
Extra-large variant of the Point Edge Transformer (PET) trained with an OMat24 + sAlex + MPtrj recipe (the OAM data mixture). Pushes the limits of unconstrained MLIPs by trading explicit E(3) symmetry constraints for capacity, depth, and data, achieving the top position on the Matbench Discovery leaderboard at release. Configuration: d_pet 640, d_node 2560, 5 GNN layers + 3 attention layers, 10 Å cutoff with adaptive 40-neighbour cap, distributed via the upet package alongside the wider PET-MAD / PET-OMat / PET-SPICE family.
License: BSD-3-Clause
Tags: transformer, PET, unconstrained, foundation model, Matbench Discovery
Category:Equivariant · Year: · Authors:Yin, Zhang, Yang et al.
Local-frame-based equivariant interatomic potential that builds atom-centred frames with learnable geometric transitions, replacing the spherical-harmonic tensor products used by NequIP/MACE-style models with cheaper Cartesian-frame operations. Achieves SOTA accuracy and improved efficiency on Matbench Discovery and OC2M benchmarks across molecular reactions, crystal stability, and surface catalysis, with an OMat24-pretrained foundation variant (alphanet-v1-oam) released on the Matbench Discovery leaderboard.
Tensor Atomic Cluster Expansion: a unified Cartesian-space framework that decomposes atomic environments into a complete hierarchy of irreducible Cartesian tensors, providing symmetry-consistent invariant and equivariant representations without spherical-harmonic Clebsch-Gordan overhead. Universal embeddings expose computational level, total charge, magnetic moments, and external-field perturbations as conditioning inputs, while a Latent Ewald Summation module handles long-range electrostatics. Released with an OMat24-pretrained foundation variant (tace-v1-oam-m) on the Matbench Discovery leaderboard, with TorchSim, LAMMPS-ML-IAP, and ASE calculators.
License: MIT
Tags: equivariant, ACE, Cartesian tensors, long-range, charge-aware, magnetic, foundation model
Deep Tensor Neural Network — the direct precursor to SchNet that introduced learned per-element embeddings refined by tensorised pairwise interaction blocks for quantum-chemical energies on QM9-style organic molecules.
License: MIT
Tags: invariant, tensor, deep network, precursor to SchNet
Crystal Graph Convolutional Neural Network — the first GNN built explicitly on periodic crystal graphs with multi-edge bond convolutions. Foundational ancestor of nearly every later universal materials GNN (MEGNet / M3GNet / CHGNet) and a workhorse for materials property prediction.
License: MIT
Tags: invariant, crystal graph, convolutional, foundation
Modular invariant message-passing potential that simultaneously predicts energies, forces, dipole moments, and partial charges with explicit electrostatic and dispersion energy terms. One of the first MLIPs that handled molecules with non-zero net charge through learnable atomic charges plus Coulomb correction.
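The "learned charges plus Coulomb correction" recipe mentioned above can be sketched as follows: predicted atomic charges are shifted so they sum to the known total charge, and a pairwise Coulomb term is added to the short-range learned energy. Units and function names are illustrative, not the PhysNet reference code.
```python
# Illustrative "learned charges + Coulomb correction" (not the PhysNet reference code).
import numpy as np

KE = 14.399645   # e^2 / (4 pi eps0) in eV * Angstrom

def corrected_charges(q_pred, total_charge):
    """Shift predicted atomic charges so they sum exactly to the molecular charge."""
    return q_pred - (q_pred.sum() - total_charge) / len(q_pred)

def coulomb_energy(q, pos):
    """Pairwise point-charge electrostatics added on top of the learned short-range energy."""
    E = 0.0
    for i in range(len(q)):
        for j in range(i + 1, len(q)):
            E += KE * q[i] * q[j] / np.linalg.norm(pos[i] - pos[j])
    return E

q = corrected_charges(np.array([-0.72, 0.35, 0.35]), total_charge=0.0)   # toy water-like charges
E_elec = coulomb_energy(q, np.array([[0.0, 0.0, 0.0], [0.96, 0.0, 0.0], [-0.24, 0.93, 0.0]]))
```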
Category:Invariant · Year: · Authors:Chen, Ye, Zuo, Zheng, Ong (UCSD / Materials Virtual Lab)
MatErials Graph Network — universal graph network with global state attributes that unifies molecules and crystals in one framework. Direct architectural ancestor of M3GNet (from the same group) and of CHGNet; widely used for materials property prediction.
License: BSD-3-Clause
Tags: invariant, graph network, global attributes, materials
Covariant Molecular Neural Network — an early end-to-end SO(3)-equivariant architecture using Clebsch-Gordan tensor products of irreducible spherical-tensor representations. Predates and informs the NequIP-style family of equivariant message-passing potentials.
E(n)-Equivariant Graph Neural Network: a simple, scalar-only equivariant architecture that achieves rotation/translation equivariance without higher-order tensor representations. Highly cited for its simplicity and broadly applied beyond MLIPs to generative modelling, protein structure, and dynamics.
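A minimal sketch of an EGNN-style layer: messages depend only on invariant squared distances, and coordinates are updated along relative position vectors, which keeps the layer equivariant without higher-order tensors. The layer widths and the fully connected neighbourhood are illustrative choices, not the published hyperparameters.
```python
# Minimal E(n)-equivariant (EGNN-style) layer sketch; illustrative only.
import torch
import torch.nn as nn

d = 32
phi_e = nn.Sequential(nn.Linear(2 * d + 1, d), nn.SiLU(), nn.Linear(d, d))
phi_x = nn.Sequential(nn.Linear(d, 1))
phi_h = nn.Sequential(nn.Linear(2 * d, d), nn.SiLU(), nn.Linear(d, d))

def egnn_layer(h, x):
    """h: (N, d) invariant features; x: (N, 3) coordinates (fully connected graph)."""
    N = h.shape[0]
    rel = x[:, None, :] - x[None, :, :]                     # (N, N, 3) relative vectors
    dist2 = (rel ** 2).sum(-1, keepdim=True)                # invariant squared distances
    m = phi_e(torch.cat([h[:, None].expand(N, N, d),
                         h[None, :].expand(N, N, d), dist2], dim=-1))
    x_new = x + (rel * phi_x(m)).mean(dim=1)                # equivariant coordinate update
    h_new = phi_h(torch.cat([h, m.sum(dim=1)], dim=-1))     # invariant feature update
    return h_new, x_new

h_out, x_out = egnn_layer(torch.randn(6, d), torch.randn(6, 3))
```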
Original Neuroevolution Potential — a per-element neural network on local descriptors trained with separable natural-evolution strategies rather than gradient descent. Designed for raw GPU throughput in the GPUMD code; ancestor of NEP89, qNEP, and ORION.
Category:Invariant · Year: · Authors:Takamoto et al. (Preferred Networks)
Original Preferred Potential — first universal NNP covering 45 elements and the foundation of the commercial Matlantis SaaS platform. Predated the academic universal MLIP wave (M3GNet, CHGNet, MACE-MP) by over a year and demonstrated viable industrial-scale deployment.
License: proprietary
Tags: invariant, TeaNet, foundation model, Matlantis
Original Equivariant Graph Attention Transformer — combines graph attention with TFN-style E(3)-equivariant tensor representations and depthwise tensor-product MLPs. Direct ancestor of Equiformer V2 and V3; influential for attention-based equivariant MLIPs.
Spherical Channels Network — represents atomic environments as multichannel spherical signals rotated into edge-aligned local frames, enabling efficient high-degree representations for OC20-class catalyst modelling. Direct precursor to eSCN and the Meta FAIR equivariant-transformer line.
SO(3)-equivariant attention on arbitrary length scales: factorises equivariant tensor products into invariant scalar attention plus an equivariant filter, enabling long-range-capable equivariant transformers. Foundation of the later SO3LR molecular response model.
Category:Equivariant · Year: · Authors:Wang, Zhao, Cui et al. (Microsoft Research)
Vector-scalar interactive message passing potential that captures geometric information through coupled scalar and vector channels without explicit higher-order tensor algebra. Combines competitive accuracy on QM9/MD17/MD22 with practical efficiency for biomolecular MD.
Category:Descriptor · Year: · Authors:Zhang, Bi et al. (DeepModeling)
First-generation Deep Potential with Attention — pretrained Deep Potential descriptor model that introduces an attention layer to the DeepMD framework, enabling cross-domain transfer learning. Direct bridge between DeepMD and DPA-2 in the DeepModeling lineage.
Reduces SO(3) tensor-product convolutions to SO(2) by aligning each pair to a common rotation axis, dramatically lowering the cost of high-degree equivariant convolutions. Direct precursor to Equiformer V2's higher-degree backbone and key SOTA on OC20 in 2023.
Transferable MACE force field for organic molecules covering 10 elements (H, C, N, O, F, P, S, Cl, Br, I) trained on SPICE quantum-chemistry data. The molecular sibling of MACE-MP-0 and template for later GRACE-OFF; widely used as a drop-in replacement for classical biomolecular force fields.
Graph Networks for Materials Exploration — published in Nature 624, 80 (2023), demonstrated discovery of 2.2M new crystal structures (380k stable) via active-learning-coupled NNP-driven materials search. Established graph-network MLIPs as a viable engine for autonomous crystal discovery at scale.
License: Apache-2.0
Tags: invariant, graph network, materials discovery, active learning
Category:Equivariant · Year: · Authors:Batatia, Benner, Chiang, Elena, Kovács et al.
First MACE foundation model — a single MACE-architecture potential trained on the Materials Project trajectory dataset (MPtrj) covering 89 elements. Demonstrated broadly transferable accuracy across inorganic crystals, surfaces, defects, and molecular crystals; the reference universal MACE model and ancestor of MACE-Osaka26 / MACE-MH-1 / MACE-Magnetic / MACE-POLAR-1.
License: MIT
Tags: equivariant, foundation model, universal, MACE family
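A hedged usage sketch for models of this kind, assuming the mace-torch package and its mace_mp convenience constructor (check the MACE documentation for the exact signature and available model names in your installed version):
```python
# Running a MACE-MP foundation model as an ASE calculator (sketch; verify the API
# against your installed mace-torch version before relying on it).
from ase.build import bulk
from mace.calculators import mace_mp

atoms = bulk("Cu", "fcc", a=3.6)
atoms.calc = mace_mp(model="medium", device="cpu")   # downloads the pretrained weights
print(atoms.get_potential_energy(), atoms.get_forces())
```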