Also interesting is that SpookyNet uses classical nonbonded terms, albeit ones where the charges (in the Coulombic terms) and the atomic coefficients (in the dispersion terms) vary with the chemical environment. Polarizable FFs might be expected to rival MLP performance in this particular aspect.
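As a rough sketch of what that looks like (illustrative Python only, not SpookyNet's actual implementation; units, cutoffs, and damping functions are all omitted), the pairwise nonbonded energy with per-atom charges and dispersion coefficients might be:

```python
import numpy as np

def nonbonded_energy(pos, q, c6):
    """Classical pairwise nonbonded energy with per-atom parameters.

    Illustrative sketch: in a fixed-charge force field, q and c6 are
    constants assigned by atom type; in a model like SpookyNet they are
    predicted per configuration by the network, so the same functional
    form becomes environment-dependent.
    """
    e_coul, e_disp = 0.0, 0.0
    n = len(pos)
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(pos[i] - pos[j])
            e_coul += q[i] * q[j] / r                 # Coulomb term
            e_disp -= np.sqrt(c6[i] * c6[j]) / r**6   # London dispersion
    return e_coul + e_disp
```

The point is that the analytical form is unchanged; only where the coefficients come from differs.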
My colleague at Colorado State, Tony Rappé, developed the Universal Force Field here in the early 90s, and I still have a soft spot for those analytical forms! (It's a classic paper that still gets 800 citations a year.)
I've been working for some months on a post about how NNPs handle "long-range forces" like electrostatics and dispersion (when I can find time, which is less and less). It's very interesting, particularly because there's so much algorithmic diversity in this fast-moving space.
Long-time listener, first-time caller here:

I guess there are two (?) ways in which classical force fields can go wrong: the restricted functional form and the parameters. To see how much of the performance gap is attributable to each, I'd be really interested in a comparison against a classical FF like AMBER that has been retrained on the same large set of QM reference values (including the larger peptide-specific data) used to train the MLP.

Appreciate the blog!
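For the parameter half of that question, refitting many classical FF terms against QM reference energies reduces to linear least squares once geometries are fixed. A minimal sketch, with a made-up design matrix that stands in for a real parameterization pipeline (this is not AMBER's actual fitting procedure):

```python
import numpy as np

def refit_parameters(design, e_qm):
    """Least-squares refit: find parameters p minimizing ||design @ p - e_qm||^2.

    design[k, j] is the energy contribution of unit parameter j in QM
    reference configuration k -- a toy stand-in for a real force field,
    where many terms (e.g. torsion barrier heights) enter the energy
    linearly once the geometry is fixed.
    """
    p, *_ = np.linalg.lstsq(design, e_qm, rcond=None)
    return p
```

The restricted functional form then shows up as the residual that no choice of parameters can remove, which is one way to separate the two error sources.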
Yeah, totally agreed - I think there's going to be a pretty large design space between "massive fancy NNPs" and "regular force fields with better parameters," and it'll be interesting to see the new Pareto frontier of approaches that emerges. Thanks for the comment!
Absolutely - enjoy your writing!