DFTB+ Release 18.1 (deprecated)

Note: This version of DFTB+ has been DEPRECATED as a newer version is available now. There will be no bug fixes or any kind of support provided for this version.

Please use the current stable version instead!

Download

source
Source code of the software with regression tests.

executables (x86_64/Linux)
Precompiled executables for the x86_64 (64-bit) architecture under the Linux operating system.
Use the OMP_NUM_THREADS environment variable to control the number of threads used by the binaries.

Note: the executables were compiled with GNU Fortran and linked against OpenBLAS and only support OpenMP parallelism. You may obtain considerably faster and better scaling binaries by building the code yourself using a commercial compiler (e.g. Intel Fortran) and vendor optimised ScaLAPACK, LAPACK and BLAS libraries (e.g. Intel MKL). Depending on your hardware, you may obtain substantial benefits from compiling with MPI parallelism.
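As a minimal sketch of the thread setting mentioned above (the `dftb+` invocation at the end is illustrative and assumes a directory containing a valid `dftb_in.hsd` input):

```shell
# Fix the number of OpenMP threads used by the precompiled binaries;
# adjust the value to the number of physical cores on your machine.
export OMP_NUM_THREADS=4
echo "Using $OMP_NUM_THREADS OpenMP threads"
# ./dftb+ > output.log   # uncomment in a directory with dftb_in.hsd
```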

Compilation

See the INSTALL.rst file in the source for compilation instructions.

This release has been successfully compiled and tested by the developers on the following architectures:

Machine | System | Compilers           | MPI        | Numerical libraries                                        | Notes
--------|--------|---------------------|------------|------------------------------------------------------------|-----------
x86_64  | Linux  | Intel Fortran/C 16.0| MPICH 1.5  | MKL 2016, ARPACK96                                         |
x86_64  | Linux  | Intel Fortran/C 17.0| MPICH 3.2  | MKL 2017, ARPACK96                                         |
x86_64  | Linux  | GNU Fortran/C 5.3   | OpenMPI 2.1| ScaLAPACK 2.02, LAPACK 3.6.0, OpenBLAS 0.2.18, ARPACK96    | GNU1
x86_64  | Linux  | GNU Fortran/C 7.1   | OpenMPI 2.1| ScaLAPACK 2.02, LAPACK 3.6.0, OpenBLAS 0.2.18, ARPACK96    |
x86_64  | Linux  | NAG 6.1 / GCC 5.4   | OpenMPI 2.1| ScaLAPACK 2.02, LAPACK 3.8.0, OpenBLAS 0.2.20, ARPACK96    |
x86_64  | Linux  | PGI Fortran 17.4    | OpenMPI 2.1| PGI ScaLAPACK, PGI LAPACK/BLAS, ARPACK96                   | PGI1, PGI2

Notes:

[GNU1] Older GNU compilers (especially the 4.x versions) are known to fail to compile this release (due to their incomplete implementation of the Fortran 2003 standard).

[PGI1] Older PGI compilers (before 17.4) are known to produce incorrectly working binaries (due to an erroneous implementation of Fortran 2003 features).

[PGI2] If you run DFTB+ with threads, make sure the stack size limit is not set to unlimited, as PGI's diagonalizer seems to hang for certain matrices in that case. Setting the stack size explicitly to 8192 KiB (the usual default value) seems to solve the problem.
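The stack-size workaround from note [PGI2] can be applied in the launching shell before starting the binary (the final `dftb+` line is illustrative and assumes the executable sits in the current directory):

```shell
# Set an explicit soft stack limit instead of 'unlimited' to avoid hangs
# in the PGI diagonalizer when running with threads; 8192 KiB is a common
# shell default.
ulimit -s 8192
ulimit -s              # verify: prints the current soft stack limit
# ./dftb+ > output.log # uncomment to launch DFTB+ under this limit
```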

Known issues

  • Issue: Disabling the WITH_SOCKETS setting in the make.arch file leads to problems when compiling the code.
    Workaround: Compile the code with socket communication support. (This has been fixed in the development version.)
  • Issue: ProjectStates fails for non-periodic systems and for periodic systems with Gamma-point only, if the projection is done more than once in a run.
    Workaround: Use the ProjectStates option only for static calculations, e.g. do the geometry optimisation / MD in a separate run first. (This has been fixed in the development version.)

Changes since release 17.1

Added

  • MPI-parallelism.
  • Various user settings for MPI-parallelism.
  • Improved thread-parallelism.
  • LBFGS geometry driver.
  • Evaluation of electrostatic potentials at specified points in space.
  • Blurred external charges for periodic systems.
  • Option to read/write restart charges as ASCII text.
  • Timer for collecting timings and printing them at program end.
  • Tolerance of Ewald summation can be set in user input.
  • Shutdown possibility when using socket driver.
  • Header for code prints release / git commit version information.
  • Warning when downloading license incompatible external components.
  • Tool straingen for distorting gen-files.

Changed

  • Using allocatables instead of pointers where possible.
  • Change to use the Fypp-preprocessor.
  • Excited state (non-force) properties for multiple excitations.
  • Broyden-mixer does not use file I/O.
  • Source code documentation is Ford-compatible.
  • Various refactorings to improve on modularity and code clarity.

Fixed

  • Keyword Atoms in modes_in.hsd considered only the first specified entry.
  • Excitation window selection in Casida time-dependent calculations.
  • Formatting of eigenvalues and fillings in detailed.out and band.out.
  • iPI interface with cluster geometries (the protocol contains redundant lattice information in these cases).