HPC center

Parallel scaling

SCM cooperates with most major hardware vendors to optimize the performance of the ADF modeling suite on all popular computer platforms. This includes fine-tuning the code for different compilers and hardware configurations. The SCM team continually works to improve performance, including optimal scaling on the latest HPC platforms.

Our standard binaries work well on typical platforms, including large-scale clusters with fast interconnects. Although our software is already highly optimized, we are happy to work with HPC system administrators to build ADF from sources on their systems to further tweak performance or port to non-standard systems.


M06-L force calculation (geometry step) of a 197-atom system, DZP basis set (2475 Cartesian basis functions).


Excellent parallelization for large-scale ReaxFF calculations on water; near-linear scaling when doubling the system size to 1.67 million atoms.

Most of our software modules (ADF, BAND, DFTB, ReaxFF) have been efficiently parallelized for both shared-memory and distributed-memory systems, such as multi-core, multi-CPU machines and various Linux clusters. For many standard calculations, including NMR, analytical Hessians, and TDDFT, ADF scales well up to hundreds of CPUs.
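How far such scaling can go is typically limited by the fraction of the work that runs in serial, as described by Amdahl's law. The sketch below is purely illustrative and not specific to any ADF module; the 99% parallel fraction is an assumed value chosen for the example.

```python
def amdahl_speedup(p, parallel_fraction):
    """Ideal speed-up on p cores when a given fraction of the runtime
    is perfectly parallelized (Amdahl's law)."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / p)

# With an assumed 99% parallel fraction, speed-up saturates
# well below the core count as cores are added:
for p in (16, 64, 256):
    print(p, round(amdahl_speedup(p, 0.99), 1))   # 13.9, 39.3, 72.1
```

This is why both algorithmic work on the serial portions and fast interconnects matter for scaling to hundreds of cores.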

Hewlett-Packard and SCM ran a large ADF TDDFT benchmark in 2006, followed by a geometry optimization benchmark in 2009. The calculations scaled well up to 128 cores, as summarized in the white papers linked above. Ample memory per core and an InfiniBand interconnect are recommended for optimum performance.

Our Japanese reseller Ryoka, in collaboration with the Japan Association for Chemical Innovation (JACI), has run extensive parallelization tests on the TSUBAME2.0 supercomputer, where speed-ups of more than 1.8 per doubling of the processor count are achieved up to 96 processors. Doubling the number of processors beyond that still yields speed-up factors of 1.6 and 1.4 at 192 and 384 processors, as shown in the left graph above.
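As a quick sanity check on those numbers, the per-doubling speed-up factors translate into parallel efficiencies as follows (a small illustrative calculation, not part of the benchmark itself; an ideal doubling would give a factor of 2.0):

```python
# Per-doubling speed-up factors reported for the TSUBAME2.0 runs.
doubling_factors = {"-> 96 procs": 1.8, "96 -> 192": 1.6, "192 -> 384": 1.4}

cumulative = 1.0
for step, factor in doubling_factors.items():
    cumulative *= factor
    efficiency = factor / 2.0          # fraction of the ideal 2x gain
    print(f"{step}: {factor}x ({efficiency:.0%} parallel efficiency)")

# Combined gain relative to the run three doublings earlier:
print(f"combined speed-up: {cumulative:.2f}x")   # 4.03x
```

So even at 384 processors the code still converts 70% of each added processor doubling into extra speed.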

If you are interested in trying out parallelization yourself, request a trial and indicate how many CPUs you want to test. Supercomputer administrators and application scientists are welcome to e-mail us to learn more about specialized builds for their particular architecture. We also have HPC benchmark input files for ADF, DFTB, and ReaxFF (the latter based on the standard PETN benchmark from LAMMPS).

Integration with schedulers and MPI

Our standard parallel binaries ship with statically linked MPI libraries and work with most standard schedulers (SLURM, SGE, PBS). If you have a non-standard setup, you may need to modify the start script. Our experts are happy to help!
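As an illustration, submitting a parallel job under SLURM might look like the sketch below. The install path, input and output file names, and resource numbers are all hypothetical, and the exact executable invocation and environment variables depend on your installation and version; consult the SCM documentation for your release.

```shell
#!/bin/bash
#SBATCH --job-name=adf-benchmark
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=32
#SBATCH --time=04:00:00

# Hypothetical install path and input file -- adjust to your site.
export SCM_HOME=/opt/scm/adf

# Hand the scheduler's task count to the start script
# (variable name is illustrative; check your version's documentation).
export NSCM=$SLURM_NTASKS

"$SCM_HOME/bin/adf" < benchmark.inp > benchmark.out
```

Because the scheduler allocates the nodes and the start script launches the MPI ranks, no explicit `mpirun` line is needed in the typical case; that is exactly the part you may need to adapt for a non-standard setup.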

Cloud Computing

Licensees can also run ADF on the high-performance cloud computing service CrunchYard. Running the Amsterdam Modeling Suite on their HPC cloud is as simple as uploading your .job file! Meanwhile, we are also working on even more user-friendly cloud options, such as cloud queues in ADFJobs and virtualization of the GUI.

Are you interested in running AMS in the cloud? We would like to hear about your requirements and intended usage! Please leave your e-mail address so we can contact you.

Workshops – easy installation

Expert ADF users and developers, as well as our own technical staff, regularly give hands-on workshops at universities and HPC centers. With automatic workshop licenses, the Amsterdam Modeling Suite is easily deployed for attendees and system administrators alike. Attendees can usually continue trying out ADF for a few weeks after the workshop.