SCM cooperates with most major hardware vendors to optimize the performance of the ADF modeling suite on all popular computer platforms. This includes fine-tuning the code for different compilers and hardware configurations. The SCM team continually works to improve performance, including optimal scaling on the latest HPC platforms.
Our standard binaries work well on typical platforms, including large-scale clusters with fast interconnects. Although our software is already highly optimized, we are happy to work with HPC system administrators to build ADF from sources on their systems, either to further tune performance or to port the software to non-standard systems.
M06-L force calculation (geometry step) of a 197-atom system, DZP basis set (2475 Cartesian basis functions).
Excellent parallelization for large-scale ReaxFF calculations on water. Linear scaling when doubling the system size to 1.67 million atoms.
Most of our software modules (ADF, BAND, DFTB, ReaxFF) have been efficiently parallelized for both shared-memory and distributed-memory systems, such as multi-core, multi-CPU machines and various Linux clusters. For many standard calculations, including NMR, analytical Hessians, and TDDFT, ADF scales well up to hundreds of CPUs.
Hewlett-Packard and SCM ran a large ADF TDDFT benchmark in 2006, followed by a geometry optimization benchmark in 2009. The calculations scaled well up to 128 cores, as summarized in the white papers linked above. Ample memory per core and an InfiniBand interconnect are recommended for optimal performance.
Our Japanese reseller Ryoka, in collaboration with the Japan Association for Chemical Innovation (JACI), has run extensive parallelization tests on the TSUBAME2.0 supercomputer, where speed-up factors of more than 1.8 per doubling of the processor count are achieved up to 96 processors. Doubling the processor count beyond that still yields speed-up factors of 1.6 (192 processors) and 1.4 (384 processors), as demonstrated in the left graph above.
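To put the per-doubling factors in perspective, the cumulative speed-up and parallel efficiency they imply can be worked out with a few lines of arithmetic. The sketch below is purely illustrative (it is not SCM code); it simply multiplies the quoted per-doubling factors and compares the result against ideal linear scaling.

```python
# Illustrative arithmetic (not SCM code): cumulative speed-up and parallel
# efficiency implied by per-doubling speed-up factors like those quoted above.

def cumulative_speedup(factors):
    """Multiply per-doubling speed-up factors into one cumulative speed-up."""
    total = 1.0
    for f in factors:
        total *= f
    return total

# Per-doubling factors quoted for the TSUBAME2.0 runs beyond 96 processors:
# 96 -> 192 processors: 1.6x; 192 -> 384 processors: 1.4x
factors = [1.6, 1.4]
speedup_vs_96 = cumulative_speedup(factors)   # speed-up of 384 vs 96 processors
efficiency = speedup_vs_96 / (384 / 96)       # fraction of the ideal 4x scaling
print(f"speed-up vs 96 processors: {speedup_vs_96:.2f}x "
      f"(parallel efficiency {efficiency:.0%})")
```

With the quoted numbers, 384 processors run about 2.24 times faster than 96 processors, i.e. roughly 56% of ideal linear scaling over that range.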
If you are interested in testing parallel performance yourself, request a trial and indicate how many CPUs you want to test on. Supercomputer administrators and application scientists are welcome to e-mail us to learn more about specialized builds for their particular architecture.