Installation guide
******************


Introduction to building GROMACS
================================

These instructions pertain to building GROMACS 2021-beta2. You might
also want to check the up-to-date installation instructions.


Quick and dirty installation
----------------------------

1. Get the latest version of your C and C++ compilers.

2. Check that you have CMake version 3.13 or later.

3. Get and unpack the latest version of the GROMACS tarball.

4. Make a separate build directory and change to it.

5. Run "cmake" with the path to the source as an argument

6. Run "make", "make check", and "make install"

7. Source "GMXRC" to get access to GROMACS

Or, as a sequence of commands to execute:

   tar xfz gromacs-2021-beta2.tar.gz
   cd gromacs-2021-beta2
   mkdir build
   cd build
   cmake .. -DGMX_BUILD_OWN_FFTW=ON -DREGRESSIONTEST_DOWNLOAD=ON
   make
   make check
   sudo make install
   source /usr/local/gromacs/bin/GMXRC

This will first download and build the prerequisite FFT library, and
then GROMACS. If you already have FFTW installed, you can
remove that argument to "cmake". Overall, this build of GROMACS will
be correct and reasonably fast on the machine upon which "cmake" ran.
On another machine, it may not run, or may not run fast. If you want
to get the maximum value for your hardware with GROMACS, you will have
to read further. Sadly, the interactions of hardware, libraries, and
compilers are only going to continue to get more complex.


Quick and dirty cluster installation
------------------------------------

On a cluster where users are expected to be running across multiple
nodes using MPI, make one installation similar to the above, and
another using "-DGMX_MPI=on" that builds only mdrun, because that is
the only component of GROMACS that uses MPI. The latter will install
a single simulation engine binary, i.e. "mdrun_mpi" when the default
suffix is used. Hence it is safe and common practice to install this
into the same location where the non-MPI build is installed.
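
For example, the pair of builds might look roughly like the following
sketch (the installation prefix "/opt/gromacs" and the "make -j 4"
parallelism are only illustrative):

   # Full (thread-MPI) build of all tools, in its own build directory
   mkdir build-full && cd build-full
   cmake .. -DGMX_BUILD_OWN_FFTW=ON -DCMAKE_INSTALL_PREFIX=/opt/gromacs
   make -j 4 && make install
   cd ..
   # MPI build of only mdrun, installed to the same prefix
   mkdir build-mpi && cd build-mpi
   cmake .. -DGMX_MPI=on -DGMX_BUILD_MDRUN_ONLY=on -DCMAKE_INSTALL_PREFIX=/opt/gromacs
   make -j 4 && make install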


Typical installation
--------------------

As above, and with further details below, but you should consider
using the following CMake options with the appropriate value instead
of "xxx" (an example combining several of them follows the list):

* "-DCMAKE_C_COMPILER=xxx" equal to the name of the C99 Compiler you
  wish to use (or the environment variable "CC")

* "-DCMAKE_CXX_COMPILER=xxx" equal to the name of the C++98 compiler
  you wish to use (or the environment variable "CXX")

* "-DGMX_MPI=on" to build using MPI support (generally good to
  combine with building only mdrun)

* "-DGMX_GPU=CUDA" to build with NVIDIA CUDA support enabled.

* "-DGMX_GPU=OpenCL" to build with OpenCL support enabled.

* "-DGMX_SIMD=xxx" to specify the level of SIMD support of the node
  on which GROMACS will run

* "-DGMX_BUILD_MDRUN_ONLY=on" for building only mdrun, e.g. for
  compute cluster back-end nodes

* "-DGMX_DOUBLE=on" to build GROMACS in double precision (slower,
  and not normally useful)

* "-DCMAKE_PREFIX_PATH=xxx" to add a non-standard location for CMake
  to search for libraries, headers or programs

* "-DCMAKE_INSTALL_PREFIX=xxx" to install GROMACS to a non-standard
  location (default "/usr/local/gromacs")

* "-DBUILD_SHARED_LIBS=off" to turn off the building of shared
  libraries to help with static linking

* "-DGMX_FFT_LIBRARY=xxx" to select whether to use "fftw3", "mkl" or
  "fftpack" libraries for FFT support

* "-DCMAKE_BUILD_TYPE=Debug" to build GROMACS in debug mode


Building older versions
-----------------------

Installation instructions for old GROMACS versions can be found at the
GROMACS documentation page.


Prerequisites
=============


Platform
--------

GROMACS can be compiled for many operating systems and architectures.
These include any distribution of Linux, Mac OS X or Windows, and
architectures including x86, AMD64/x86-64, several PowerPC including
POWER8, ARM v8, and SPARC VIII.


Compiler
--------

GROMACS can be compiled on any platform with ANSI C99 and C++17
compilers, and their respective standard C/C++ libraries. Good
performance on an OS and architecture requires choosing a good
compiler. We recommend gcc, because it is free, widely available and
frequently provides the best performance.

You should strive to use the most recent version of your compiler.
Since we require full C++17 support the minimum supported compiler
versions are

* GNU (gcc/libstdc++) 7

* Intel (icc) 19.1

* LLVM (clang/libc++) 5

* Microsoft (MSVC) 2017 15.7

Other compilers may work (Cray, Pathscale, older clang) but do not
offer competitive performance. We recommend against PGI because the
performance with C++ is very bad.

The xlc compiler is not supported and version 16.1 does not compile on
POWER architectures for GROMACS-2021-beta2. We recommend using the gcc
compiler instead, as it is extensively tested.

You may also need the most recent version of other compiler toolchain
components beside the compiler itself (e.g. assembler or linker);
these are often shipped by your OS distribution's binutils package.

C++17 support requires adequate support in both the compiler and the
C++ library. The gcc and MSVC compilers include their own standard
libraries and require no further configuration. If your vendor's
compiler also manages the standard library via compiler flags, these
will be honored. For configuration of other compilers, read on.

On Linux, both the Intel and clang compilers use the libstdc++ which
comes with gcc as the default C++ library. For GROMACS, we require the
compiler to support libstdc++ version 7.1 or higher. To select a
particular libstdc++ library, provide the path to g++ with
"-DGMX_GPLUSPLUS_PATH=/path/to/g++".

On Windows with the Intel compiler, the MSVC standard library is used,
and at least MSVC 2017 15.7 is required. Load the environment variables
with vcvarsall.bat.

To build with clang and LLVM's libc++ standard library, use
"-DCMAKE_CXX_FLAGS=-stdlib=libc++".

If you are running on Mac OS X, the best option is the Intel compiler.
Both clang and gcc will work, but they produce lower performance and
each have some shortcomings. clang 3.8 now offers support for OpenMP,
and so may provide decent performance.

For all non-x86 platforms, your best option is typically to use gcc or
the vendor's default or recommended compiler, and check for
specialized information below.

For updated versions of gcc to add to your Linux OS, see

* Ubuntu: Ubuntu toolchain ppa page

* RHEL/CentOS: EPEL page or the RedHat Developer Toolset


Compiling with parallelization options
--------------------------------------

For maximum performance you will need to examine how you will use
GROMACS and what hardware you plan to run on. Often OpenMP parallelism
is an advantage for GROMACS, but support for this is generally built
into your compiler and detected automatically.


GPU support
~~~~~~~~~~~

GROMACS has excellent support for NVIDIA GPUs supported via CUDA. On
Linux, NVIDIA CUDA toolkit with minimum version unknown is required,
and the latest version is strongly encouraged. NVIDIA GPUs with at
least NVIDIA compute capability unknown are required. You are strongly
recommended to get the latest CUDA version and driver that supports
your hardware, but beware of possible performance regressions in newer
CUDA versions on older hardware. While some CUDA compilers (nvcc)
might not officially support recent versions of gcc as the back-end
compiler, we still recommend that you at least use a gcc version
recent enough to get the best SIMD support for your CPU, since GROMACS
always runs some code on the CPU. It is most reliable to use the same
C++ compiler version for GROMACS code as used as the host compiler for
nvcc.

To make it possible to use other accelerators, GROMACS also includes
OpenCL support. The minimum OpenCL version required is unknown and
only 64-bit implementations are supported. The current OpenCL
implementation is recommended for use with GCN-based AMD GPUs, and on
Linux we recommend the ROCm runtime. Intel integrated GPUs are
supported with the Neo drivers. OpenCL is also supported with NVIDIA
GPUs, but using the latest NVIDIA driver (which includes the NVIDIA
OpenCL runtime) is recommended. Also note that there are performance
limitations (inherent to the NVIDIA OpenCL runtime). It is not
possible to configure both CUDA and OpenCL support in the same build
of GROMACS, nor to support both Intel and other vendorsGPUs with
OpenCL. A 64-bit implementation of OpenCL is required and therefore
OpenCL is only supported on 64-bit platforms.


MPI support
~~~~~~~~~~~

GROMACS can run in parallel on multiple cores of a single workstation
using its built-in thread-MPI. No user action is required in order to
enable this.

If you wish to run in parallel on multiple machines across a network,
you will need to have

* an MPI library installed that supports the MPI 1.3 standard, and

* wrapper compilers that will compile code using that library.

To compile with MPI set your compiler to the normal (non-MPI) compiler
and add "-DGMX_MPI=on" to the cmake options. It is possible to set the
compiler to the MPI compiler wrapper but it is neither necessary nor
recommended.

The GROMACS team recommends OpenMPI version 1.6 (or higher), MPICH
version 1.4.1 (or higher), or your hardware vendor's MPI installation.
The most recent version of either of these is likely to be the best.
More specialized networks might depend on accelerations only available
in the vendor's library. LAM-MPI might work, but since it has been
deprecated for years, it is not supported.

For example, depending on your actual MPI library, use "cmake
-DMPI_C_COMPILER=mpicc -DGMX_MPI=on".


CMake
-----

GROMACS builds with the CMake build system, requiring at least version
3.13. You can check whether CMake is installed, and what version it
is, with "cmake --version". If you need to install CMake, then first
check whether your platform's package management system provides a
suitable version, or visit the CMake installation page for pre-
compiled binaries, source code and installation instructions. The
GROMACS team recommends you install the most recent version of CMake
you can.


Fast Fourier Transform library
------------------------------

Many simulations in GROMACS make extensive use of fast Fourier
transforms, and a software library to perform these is always
required. We recommend FFTW (version 3 or higher only) or Intel MKL.
The choice of library can be set with "cmake
-DGMX_FFT_LIBRARY=<name>", where "<name>" is one of "fftw3", "mkl", or
"fftpack". FFTPACK is bundled with GROMACS as a fallback, and is
acceptable if simulation performance is not a priority. When choosing
MKL, GROMACS will also use MKL for BLAS and LAPACK (see linear algebra
libraries). Generally, there is no advantage in using MKL with
GROMACS, and FFTW is often faster. With PME GPU offload support using
CUDA, a GPU-based FFT library is required. The CUDA-based GPU FFT
library cuFFT is part of the CUDA toolkit (required for all CUDA
builds) and therefore no additional software component is needed when
building with CUDA GPU acceleration.


Using FFTW
~~~~~~~~~~

FFTW is likely to be available for your platform via its package
management system, but there can be compatibility and significant
performance issues associated with these packages. In particular,
GROMACS simulations are normally run in mixed floating-point
precision, which is suited for the use of single precision in FFTW.
The default FFTW package is normally in double precision, and good
compiler options to use for FFTW when linked to GROMACS may not have
been used. Accordingly, the GROMACS team recommends either

* that you permit the GROMACS installation to download and build
  FFTW from source automatically for you (use "cmake
  -DGMX_BUILD_OWN_FFTW=ON"), or

* that you build FFTW from the source code.

If you build FFTW from source yourself, get the most recent version
and follow the FFTW installation guide. Choose the precision for FFTW
(i.e. single/float vs. double) to match whether you will later use
mixed or double precision for GROMACS. There is no need to compile
FFTW with threading or MPI support, but it does no harm. On x86
hardware, compile with *both* "--enable-sse2" and "--enable-avx" for
FFTW-3.3.4 and earlier. From FFTW-3.3.5, you should also add
"--enable-avx2". On Intel processors supporting 512-wide AVX,
including KNL, add "--enable-avx512" also. FFTW will create a fat
library with codelets for all different instruction sets, and pick the
fastest supported one at runtime. On ARM architectures with SIMD
support and IBM Power8 and later, you definitely want version 3.3.5 or
later, and to compile it with "--enable-neon" and "--enable-vsx",
respectively, for SIMD support. If you are using a Cray, there is a
special modified (commercial) version of FFTs using the FFTW interface
which can be slightly faster.
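
As a sketch of such a manual FFTW build on x86 (the version number and
installation prefix are examples only; "--enable-float" selects the
single precision that matches a mixed-precision GROMACS build, and
"--enable-avx512" should be dropped on CPUs that lack AVX-512):

   tar xf fftw-3.3.8.tar.gz
   cd fftw-3.3.8
   ./configure --enable-float --enable-sse2 --enable-avx \
               --enable-avx2 --enable-avx512 --prefix=/opt/fftw3
   make -j 4 && make install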


Using MKL
~~~~~~~~~

Use MKL bundled with Intel compilers by setting up the compiler
environment, e.g., through "source /path/to/compilervars.sh intel64"
or similar before running CMake including setting
"-DGMX_FFT_LIBRARY=mkl".

If you need to customize this further, use

   cmake -DGMX_FFT_LIBRARY=mkl \
         -DMKL_LIBRARIES="/full/path/to/libone.so;/full/path/to/libtwo.so" \
         -DMKL_INCLUDE_DIR="/full/path/to/mkl/include"

The full list and order(!) of libraries you require are found in
Intel's MKL documentation for your system.


Using ARM Performance Libraries
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The ARM Performance Libraries provide an FFT implementation for ARM
architectures. Preliminary support is provided for ARMPL in GROMACS
through its FFTW-compatible API. Assuming that the ARM HPC toolchain
environment including the ARMPL paths is set up (e.g.
through loading the appropriate modules like "module load Module-
Prefix/arm-hpc-compiler-X.Y/armpl/X.Y") use the following cmake
options:

   cmake -DGMX_FFT_LIBRARY=fftw3 \
         -DFFTWF_LIBRARY="${ARMPL_DIR}/lib/libarmpl_lp64.so" \
         -DFFTWF_INCLUDE_DIR=${ARMPL_DIR}/include


Other optional build components
-------------------------------

* Run-time detection of hardware capabilities can be improved by
  linking with hwloc. By default this is turned off since it might not
  be supported everywhere, but if you have hwloc installed it should
  work by just setting "-DGMX_HWLOC=ON"

* Hardware-optimized BLAS and LAPACK libraries are useful for a few
  of the GROMACS utilities focused on normal modes and matrix
  manipulation, but they do not provide any benefits for normal
  simulations. Configuring these is discussed at linear algebra
  libraries.

* The built-in GROMACS trajectory viewer "gmx view" requires X11 and
  Motif/Lesstif libraries and header files. You may prefer to use
  third-party software for visualization, such as VMD or PyMol.

* An external TNG library for trajectory-file handling can be used
  by setting "-DGMX_EXTERNAL_TNG=yes", but TNG 1.7.10 is bundled in
  the GROMACS source already.

* The lmfit library for Levenberg-Marquardt curve fitting is used in
  GROMACS. Only lmfit 7.0 is supported.  A reduced version of that
  library is bundled in the GROMACS distribution, and the default
  build uses it. That default may be explicitly enabled with
  "-DGMX_USE_LMFIT=internal". To use an external lmfit library, set
  "-DGMX_USE_LMFIT=external", and adjust "CMAKE_PREFIX_PATH" as
  needed.  lmfit support can be disabled with "-DGMX_USE_LMFIT=none".

* zlib is used by TNG for compressing some kinds of trajectory data

* Building the GROMACS documentation is optional, and requires
  ImageMagick, pdflatex, bibtex, doxygen, python 3.6, sphinx 1.6.1,
  and pygments.

* The GROMACS utility programs often write data files in formats
  suitable for the Grace plotting tool, but it is straightforward to
  use these files in other plotting programs, too.

* Set "-DGMX_PYTHON_PACKAGE=ON" when configuring GROMACS with CMake
  to enable additional CMake targets for the gmxapi Python package and
  sample_restraint package from the main GROMACS CMake build. This
  supports additional testing and documentation generation.


Doing a build of GROMACS
========================

This section will cover a general build of GROMACS with CMake, but it
is not an exhaustive discussion of how to use CMake. There are many
resources available on the web, which we suggest you search for when
you encounter problems not covered here. The material below applies
specifically to builds on Unix-like systems, including Linux, and Mac
OS X. For other platforms, see the specialist instructions below.


Configuring with CMake
----------------------

CMake will run many tests on your system and do its best to work out
how to build GROMACS for you. If your build machine is the same as
your target machine, then you can be sure that the defaults and
detection will be pretty good. However, if you want to control aspects
of the build, or you are compiling on a cluster head node for back-end
nodes with a different architecture, there are a few things you should
consider specifying.

The best way to use CMake to configure GROMACS is to do an out-of-
source build, by making another directory from which you will run
CMake. This can be outside the source directory, or a subdirectory of
it. It also means you can never corrupt your source code by trying to
build it! So, the only required argument on the CMake command line is
the name of the directory containing the "CMakeLists.txt" file of the
code you want to build. For example, download the source tarball and
use

   tar xfz gromacs-2021-beta2.tgz
   cd gromacs-2021-beta2
   mkdir build-gromacs
   cd build-gromacs
   cmake ..

You will see "cmake" report a sequence of results of tests and
detections done by the GROMACS build system. These are written to the
"cmake" cache, kept in "CMakeCache.txt". You can edit this file by
hand, but this is not recommended because you could make a mistake.
You should not attempt to move or copy this file to do another build,
because file paths are hard-coded within it. If you mess things up,
just delete this file and start again with "cmake".

If there is a serious problem detected at this stage, then you will
see a fatal error and some suggestions for how to overcome it. If you
are not sure how to deal with that, please start by searching on the
web (most computer problems already have known solutions!) and then
consult the gmx-users mailing list. There are also informational
warnings that you might like to take on board or not. Piping the
output of "cmake" through "less" or "tee" can be useful, too.

Once "cmake" returns, you can see all the settings that were chosen
and information about them by using e.g. the curses interface

   ccmake ..

You can actually use "ccmake" (available on most Unix platforms)
directly in the first step, but then most of the status messages will
merely blink in the lower part of the terminal rather than be written
to standard output. Most platforms including Linux, Windows, and Mac
OS X even have native graphical user interfaces for "cmake", and it
can create project files for almost any build environment you want
(including Visual Studio or Xcode). Check out running CMake for
general advice on what you are seeing and how to navigate and change
things. The settings you might normally want to change are already
presented. You may make changes, then re-configure (using "c"), so
that it gets a chance to make changes that depend on yours and perform
more checking. It may take several configuration passes to reach the
desired configuration, in particular if you need to resolve errors.

When you have reached the desired configuration with "ccmake", the
build system can be generated by pressing "g".  This requires that the
previous configuration pass did not reveal any additional settings (if
it did, you need to configure once more with "c").  With "cmake", the
build system is generated after each pass that does not produce
errors.

You cannot change compilers after the initial run of "cmake". If you
need to change them, clean up and start again.


Where to install GROMACS
~~~~~~~~~~~~~~~~~~~~~~~~

GROMACS is installed in the directory to which "CMAKE_INSTALL_PREFIX"
points. It may not be the source directory or the build directory.
You require write permissions to this directory. Thus, without super-
user privileges, "CMAKE_INSTALL_PREFIX" will have to be within your
home directory. Even if you do have super-user privileges, you should
use them only for the installation phase, and never for configuring,
building, or running GROMACS!


Using CMake command-line options
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Once you become comfortable with setting and changing options, you may
know in advance how you will configure GROMACS. If so, you can speed
things up by invoking "cmake" and passing the various options at once
on the command line. This can be done by setting cache variables at the
cmake invocation using "-DOPTION=VALUE". Note that some environment
variables are also taken into account, in particular variables like
"CC" and "CXX".

For example, the following command line

   cmake .. -DGMX_GPU=CUDA -DGMX_MPI=ON -DCMAKE_INSTALL_PREFIX=/home/marydoe/programs

can be used to build with CUDA GPUs, MPI and install in a custom
location. You can even save that in a shell script to make it even
easier next time. You can also do this kind of thing with "ccmake",
but you should avoid this, because the options set with "-D" will not
be able to be changed interactively in that run of "ccmake".


SIMD support
~~~~~~~~~~~~

GROMACS has extensive support for detecting and using the SIMD
capabilities of many modern HPC CPU architectures. If you are building
GROMACS on the same hardware you will run it on, then you don't need
to read more about this, unless you are getting configuration warnings
you do not understand. By default, the GROMACS build system will
detect the SIMD instruction set supported by the CPU architecture (on
which the configuring is done), and thus pick the best available SIMD
parallelization supported by GROMACS. The build system will also check
that the compiler and linker used also support the selected SIMD
instruction set and issue a fatal error if they do not.

Valid values are listed below, and the applicable value with the
largest number in the list is generally the one you should choose. In
most cases, choosing an inappropriate higher number will lead to
compiling a binary that will not run. However, on a number of
processor architectures choosing the highest supported value can lead
to performance loss, e.g. on Intel Skylake-X/SP and AMD Zen.

1. "None" For use only on an architecture either lacking SIMD, or
   to which GROMACS has not yet been ported and none of the options
   below are applicable.

2. "SSE2" This SIMD instruction set was introduced in Intel
   processors in 2001, and AMD in 2003. Essentially all x86 machines
   in existence have this, so it might be a good choice if you need to
   support dinosaur x86 computers too.

3. "SSE4.1" Present in all Intel core processors since 2007, but
   notably not in AMD Magny-Cours. Still, almost all recent processors
   support this, so this can also be considered a good baseline if you
   are content with slow simulations and prefer portability between
   reasonably modern processors.

4. "AVX_128_FMA" AMD Bulldozer, Piledriver (and later Family 15h)
   processors have this.

5. "AVX_256" Intel processors since Sandy Bridge (2011). While this
   code will work on the  AMD Bulldozer and Piledriver processors, it
   is significantly less efficient than the "AVX_128_FMA" choice above
   - do not be fooled to assume that 256 is better than 128 in this
   case.

6. "AVX2_128" AMD Zen/Zen2 and Hygon Dhyana microarchitecture
   processors; it will enable AVX2 with 3-way fused multiply-add
   instructions. While these microarchitectures do support 256-bit
   AVX2 instructions, hence "AVX2_256" is also supported, 128-bit will
   generally be faster, in particular when the non-bonded tasks run on
   the CPU, hence the default "AVX2_128". With GPU offload, however,
   "AVX2_256" can be faster on Zen processors.

7. "AVX2_256" Present on Intel Haswell (and later) processors
   (2013), and it will also enable Intel 3-way fused multiply-add
   instructions.

8. "AVX_512" Skylake-X desktop and Skylake-SP Xeon processors
   (2017); it will generally be fastest on the higher-end desktop and
   server processors with two 512-bit fused multiply-add units (e.g.
   Core i9 and Xeon Gold). However, certain desktop and server models
   (e.g. Xeon Bronze and Silver) come with only one AVX512 FMA unit
   and therefore on these processors "AVX2_256" is faster (compile-
   and runtime checks try to inform about such cases). Additionally,
   with GPU accelerated runs "AVX2_256" can also be faster on high-end
   Skylake CPUs with both 512-bit FMA units enabled.

9. "AVX_512_KNL" Knights Landing Xeon Phi processors

10. "Sparc64_HPC_ACE" Fujitsu machines like the K computer have
    this.

11. "IBM_VMX" Power6 and similar Altivec processors have this.

12. "IBM_VSX" Power7, Power8, Power9 and later have this.

13. "ARM_NEON" 32-bit ARMv7 with NEON support.

14. "ARM_NEON_ASIMD" 64-bit ARMv8 and later.

15. "ARM_SVE" 64-bit ARMv8 and later with the Scalable Vector
    Extensions (SVE). The SVE vector length is fixed at CMake
    configure time. The default vector length is 512 bits, and this
    can be changed via the "GMX_SIMD_ARM_SVE_LENGTH" CMake variable.

The CMake configure system will check that the compiler you have
chosen can target the architecture you have chosen. mdrun will check
further at runtime, so if in doubt, choose the lowest number you think
might work, and see what mdrun says. The configure system also works
around many known issues in many versions of common HPC compilers.

A further "GMX_SIMD=Reference" option exists, which is a special SIMD-
like implementation written in plain C that developers can use when
developing support in GROMACS for new SIMD architectures. It is not
designed for use in production simulations, but if you are using an
architecture with SIMD support to which GROMACS has not yet been
ported, you may wish to try this option instead of the default
"GMX_SIMD=None", as it can often out-perform this when the auto-
vectorization in your compiler does a good job. And post on the
GROMACS mailing lists, because GROMACS can probably be ported for new
SIMD architectures in a few days.
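
For example, to configure a build targeting AVX2-capable compute nodes
from a head node with different hardware, one might set the level
explicitly (the value shown is only an illustration):

   cmake .. -DGMX_SIMD=AVX2_256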


CMake advanced options
~~~~~~~~~~~~~~~~~~~~~~

The options that are displayed in the default view of "ccmake" are
ones that we think a reasonable number of users might want to consider
changing. There are a lot more options available, which you can see by
toggling the advanced mode in "ccmake" on and off with "t". Even
there, most of the variables that you might want to change have a
"CMAKE_" or "GMX_" prefix. There are also some options that will be
visible or not according to whether their preconditions are satisfied.


Helping CMake find the right libraries, headers, or programs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If libraries are installed in non-default locations their location can
be specified using the following variables:

* "CMAKE_INCLUDE_PATH" for header files

* "CMAKE_LIBRARY_PATH" for libraries

* "CMAKE_PREFIX_PATH" for header, libraries and binaries (e.g.
  "/usr/local").

The respective "include", "lib", or "bin" is appended to the path. For
each of these variables, a list of paths can be specified (on Unix,
separated with ":"). These can be set as environment variables like:

   CMAKE_PREFIX_PATH=/opt/fftw:/opt/cuda cmake ..

(assuming "bash" shell). Alternatively, these variables are also
"cmake" options, so they can be set like
"-DCMAKE_PREFIX_PATH=/opt/fftw:/opt/cuda".

The "CC" and "CXX" environment variables are also useful for
indicating to "cmake" which compilers to use. Similarly,
"CFLAGS"/"CXXFLAGS" can be used to pass compiler options, but note
that these will be appended to those set by GROMACS for your build
platform and build type. You can customize some of this with advanced
CMake options such as "CMAKE_C_FLAGS" and its relatives.
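
For example, compilers can be chosen for a single configure run like
this (the compiler names are placeholders for whatever is installed on
your system):

   CC=gcc-10 CXX=g++-10 cmake ..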

See also the page on CMake environment variables.


CUDA GPU acceleration
~~~~~~~~~~~~~~~~~~~~~

If you have the CUDA Toolkit installed, you can use "cmake" with:

   cmake .. -DGMX_GPU=CUDA -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda

(or whichever path has your installation). In some cases, you might
need to specify manually which of your C++ compilers should be used,
e.g. with the advanced option "CUDA_HOST_COMPILER".

By default, code will be generated for the most common CUDA
architectures. However, to reduce build time and binary size we do not
generate code for every single possible architecture, which in rare
cases (say, Tegra systems) can result in the default build not being
able to use some GPUs. If this happens, or if you want to remove some
architectures to reduce binary size and build time, you can alter the
target CUDA architectures. This can be done either with the
"GMX_CUDA_TARGET_SM" or "GMX_CUDA_TARGET_COMPUTE" CMake variables,
which take a semicolon-delimited string with the two-digit suffixes of
CUDA (virtual) architecture names, for instance "35;50;51;52;53;60".
For details, see the "Options for steering GPU code generation" section
of the nvcc man / help or Chapter 6 of the nvcc manual.
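
For example, to generate code only for a few specific real
architectures, one might use (the listed suffixes are just an
illustration):

   cmake .. -DGMX_GPU=CUDA -DGMX_CUDA_TARGET_SM="60;70;75"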

The GPU acceleration has been tested on AMD64/x86-64 platforms with
Linux, Mac OS X and Windows operating systems, but Linux is the best-
tested and supported of these. Linux running on POWER 8 and ARM v8
CPUs also works well.

Experimental support is available for compiling CUDA code, both for
host and device, using clang (version 6.0 or later). A CUDA toolkit is
still required but it is used only for GPU device code generation and
to link against the CUDA runtime library. The clang CUDA support
simplifies compilation and provides benefits for development (e.g.
allows the use of code sanitizers in CUDA host-code). Additionally, using
clang for both CPU and GPU compilation can be beneficial to avoid
compatibility issues between the GNU toolchain and the CUDA toolkit.
clang for CUDA can be triggered using the "GMX_CLANG_CUDA=ON" CMake
option. Target architectures can be selected with
"GMX_CUDA_TARGET_SM", virtual architecture code is always embedded for
all requested architectures (hence GMX_CUDA_TARGET_COMPUTE is
ignored). Note that this is mainly a developer-oriented feature and it
is not recommended for production use as the performance can be
significantly lower than that of code compiled with nvcc (and it has
also received less testing). However, note that since clang 5.0 the
performance gap is only moderate (at the time of writing, about 20%
slower GPU kernels), so this version could be considered in
non-performance-critical use cases.


OpenCL GPU acceleration
~~~~~~~~~~~~~~~~~~~~~~~

The primary target of the GROMACS OpenCL support is accelerating
simulations on AMD and Intel hardware. For AMD, we target both
discrete GPUs and APUs (integrated CPU+GPU chips), and for Intel we
target the integrated GPUs found on modern workstation and mobile
hardware. GROMACS OpenCL on NVIDIA GPUs works, but performance and
other limitations make it less practical (for details see the user
guide).

To build GROMACS with OpenCL support enabled, two components are
required: the OpenCL headers and the wrapper library that acts as a
client driver loader (so-called ICD loader). The additional, runtime-
only dependency is the vendor-specific GPU driver for the device
targeted. This also contains the OpenCL compiler. As the GPU compute
kernels are compiled  on-demand at run time, this vendor-specific
compiler and driver is not needed for building GROMACS. The former,
compile-time dependencies are standard components, hence stock
versions can be obtained from most Linux distribution repositories
(e.g. "opencl-headers" and "ocl-icd-libopencl1" on Debian/Ubuntu).
Only the compatibility with the required OpenCL version unknown needs
to be ensured. Alternatively, the headers and library can also be
obtained from vendor SDKs (e.g. from AMD), which must be installed in
a path found in "CMAKE_PREFIX_PATH" (or via the environment variables
"AMDAPPSDKROOT" or "CUDA_PATH").

To trigger an OpenCL build the following CMake flags must be set

   cmake .. -DGMX_GPU=OpenCL

To build with support for Intel integrated GPUs, it is required to add
"-DGMX_OPENCL_NB_CLUSTER_SIZE=4" to the cmake command line, so that
the GPU kernels match the characteristics of the hardware. The Neo
driver is recommended.

On Mac OS, an AMD GPU can be used only with OS version 10.10.4 and
higher; earlier OS versions are known to run incorrectly.

By default, any clFFT library on the system will be used with GROMACS,
but if none is found then the code will fall back on a version bundled
with GROMACS. To require GROMACS to link with an external library, use

   cmake .. -DGMX_GPU=OpenCL -DclFFT_ROOT_DIR=/path/to/your/clFFT -DGMX_EXTERNAL_CLFFT=TRUE


Static linking
~~~~~~~~~~~~~~

Dynamic linking of the GROMACS executables will lead to a smaller disk
footprint when installed, and so is the default on platforms where we
believe it has been tested repeatedly and found to work. In general,
this includes Linux, Windows, Mac OS X and BSD systems. Static
binaries take more space, but on some hardware and/or under some
conditions they are necessary, most commonly when you are running a
parallel simulation using MPI libraries (e.g. Cray).

* To link GROMACS binaries statically against the internal GROMACS
  libraries, set "-DBUILD_SHARED_LIBS=OFF".

* To link statically against external (non-system) libraries as
  well, set "-DGMX_PREFER_STATIC_LIBS=ON". Note, that in general
  "cmake" picks up whatever is available, so this option only
  instructs "cmake" to prefer static libraries when both static and
  shared are available. If no static version of an external library is
  available, even when the aforementioned option is "ON", the shared
  library will be used. Also note that the resulting binaries will
  still be dynamically linked against system libraries on platforms
  where that is the default. To use static system libraries,
  additional compiler/linker flags are necessary, e.g. "-static-libgcc
  -static-libstdc++".

* To attempt to link a fully static binary set
  "-DGMX_BUILD_SHARED_EXE=OFF". This will prevent CMake from
  explicitly setting any dynamic linking flags. This option also sets
  "-DBUILD_SHARED_LIBS=OFF" and "-DGMX_PREFER_STATIC_LIBS=ON" by
  default, but the above caveats apply. For compilers which don't
  default to static linking, the required flags have to be specified.
  On Linux, this is usually "CFLAGS=-static CXXFLAGS=-static"; a
  combined example follows this list.
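
A sketch of an attempt at a fully static Linux build, combining the
options above (whether this succeeds depends on your toolchain and on
static versions of the needed libraries being available):

   CFLAGS=-static CXXFLAGS=-static cmake .. -DGMX_BUILD_SHARED_EXE=OFF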


gmxapi C++ API
~~~~~~~~~~~~~~

For dynamic linking builds and on non-Windows platforms, an extra
library and headers are installed by setting "-DGMXAPI=ON" (default).
Build targets "gmxapi-cppdocs" and "gmxapi-cppdocs-dev" produce
documentation in "docs/api-user" and "docs/api-dev", respectively. For
more project information and use cases, refer to the tracked Issue
2585, associated GitHub gmxapi projects, or DOI
10.1093/bioinformatics/bty484.

gmxapi is not yet tested on Windows or with static linking, but these
use cases are targeted for future versions.


Portability aspects
~~~~~~~~~~~~~~~~~~~

A GROMACS build will normally not be portable, not even across
hardware with the same base instruction set, like x86. Non-portable
hardware-specific optimizations are selected at configure-time, such
as the SIMD instruction set used in the compute kernels. This
selection will be done by the build system based on the capabilities
of the build host machine or otherwise specified to "cmake" during
configuration.

Often it is possible to ensure portability by choosing the least
common denominator of SIMD support, e.g. SSE2 for x86. In rare cases
of very old x86 machines, ensure that you use "cmake
-DGMX_USE_RDTSCP=off" if any of the target CPU architectures does not
support the "RDTSCP" instruction.  However, we discourage attempts to
use a single GROMACS installation when the execution environment is
heterogeneous, such as a mix of AVX and earlier hardware, because this
will lead to programs (especially mdrun) that run slowly on the new
hardware. Building two full installations and locally managing how to
call the correct one (e.g. using a module system) is the recommended
approach. Alternatively, as at the moment the GROMACS tools do not
make strong use of SIMD acceleration, it can be convenient to create
an installation with tools portable across different x86 machines, but
with separate mdrun binaries for each architecture. To achieve this,
one can first build a full installation with the least-common-
denominator SIMD instruction set, e.g. "-DGMX_SIMD=SSE2", then build
separate mdrun binaries for each architecture present in the
heterogeneous environment. By using custom binary and library suffixes
for the mdrun-only builds, these can be installed to the same location
as the "generic" tools installation. Building just the mdrun binary is
possible by setting the "-DGMX_BUILD_MDRUN_ONLY=ON" option.
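
A sketch of this approach, run from separate build directories (the
SIMD levels, suffixes and installation prefix are illustrative only):

   # Generic tools installation with least-common-denominator SIMD
   mkdir build-tools && cd build-tools
   cmake .. -DGMX_SIMD=SSE2 -DCMAKE_INSTALL_PREFIX=/opt/gromacs
   make -j 4 && make install
   cd ..
   # Additional mdrun-only build for AVX2 nodes, same prefix
   mkdir build-mdrun-avx2 && cd build-mdrun-avx2
   cmake .. -DGMX_SIMD=AVX2_256 -DGMX_BUILD_MDRUN_ONLY=ON \
            -DGMX_BINARY_SUFFIX=_avx2 -DGMX_LIBS_SUFFIX=_avx2 \
            -DCMAKE_INSTALL_PREFIX=/opt/gromacs
   make -j 4 && make install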


Linear algebra libraries
~~~~~~~~~~~~~~~~~~~~~~~~

As mentioned above, sometimes vendor BLAS and LAPACK libraries can
provide performance enhancements for GROMACS when doing normal-mode
analysis or covariance analysis. For simplicity, the text below will
refer only to BLAS, but the same options are available for LAPACK. By
default, CMake will search for BLAS, use it if it is found, and
otherwise fall back on a version of BLAS internal to GROMACS. The
"cmake" option "-DGMX_EXTERNAL_BLAS=on" will be set accordingly. The
internal versions are fine for normal use. If you need to specify a
non-standard path to search, use
"-DCMAKE_PREFIX_PATH=/path/to/search". If you need to specify a
library with a non-standard name (e.g. ESSL on Power machines or ARMPL
on ARM machines), then set
"-DGMX_BLAS_USER=/path/to/reach/lib/libwhatever.a".

If you are using Intel MKL for FFT, then the BLAS and LAPACK it
provides are used automatically. This could be overridden with
"GMX_BLAS_USER", etc.

On Apple platforms where the Accelerate Framework is available, these
will be automatically used for BLAS and LAPACK. This could be
overridden with "GMX_BLAS_USER", etc.


Building with MiMiC QM/MM support
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

MiMiC QM/MM interface integration requires linking against the MiMiC
communication library, which establishes the communication channel
between GROMACS and CPMD. The MiMiC communication library can be
downloaded here. Compile and install it. Check that the installation
folder of the MiMiC library is added to CMAKE_PREFIX_PATH if it is
installed in a non-standard location. Building a QM/MM-capable version
requires a double-precision build of GROMACS compiled with MPI
support:

* "-DGMX_DOUBLE=ON -DGMX_MPI -DGMX_MIMIC=ON"


Changing the names of GROMACS binaries and libraries
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

It is sometimes convenient to have different versions of the same
GROMACS programs installed. The most common use cases have been single
and double precision, and with and without MPI. This mechanism can
also be used to install side-by-side multiple versions of mdrun
optimized for different CPU architectures, as mentioned previously.

By default, GROMACS will suffix programs and libraries for such builds
with "_d" for double precision and/or "_mpi" for MPI (and nothing
otherwise). This can be controlled manually with "GMX_DEFAULT_SUFFIX
(ON/OFF)", "GMX_BINARY_SUFFIX" (takes a string) and "GMX_LIBS_SUFFIX"
(also takes a string). For instance, to set a custom suffix for
programs and libraries, one might specify:

   cmake .. -DGMX_DEFAULT_SUFFIX=OFF -DGMX_BINARY_SUFFIX=_mod -DGMX_LIBS_SUFFIX=_mod

Thus the names of all programs and libraries will be appended with
"_mod".


Changing installation tree structure
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

By default, a few different directories under "CMAKE_INSTALL_PREFIX"
are used when GROMACS is installed. Some of these can be changed,
which is mainly useful for packaging GROMACS for various
distributions. The directories are listed below, with additional notes
about some of them. Unless otherwise noted, the directories can be
renamed by editing the installation paths in the main CMakeLists.txt.

"bin/"
   The standard location for executables and some scripts. Some of the
   scripts hardcode the absolute installation prefix, which needs to
   be changed if the scripts are relocated. The name of the directory
   can be changed using "CMAKE_INSTALL_BINDIR" CMake variable.

"include/gromacs/"
   The standard location for installed headers.

"lib/"
   The standard location for libraries. The default depends on the
   system, and is determined by CMake. The name of the directory can
   be changed using "CMAKE_INSTALL_LIBDIR" CMake variable.

"lib/pkgconfig/"
   Information about the installed "libgromacs" library for "pkg-
   config" is installed here.  The "lib/" part adapts to the
   installation location of the libraries.  The installed files
   contain the installation prefix as absolute paths.

"share/cmake/"
   CMake package configuration files are installed here.

"share/gromacs/"
   Various data files and some documentation go here. The first part
   can be changed using "CMAKE_INSTALL_DATADIR", and the second by
   using "GMX_INSTALL_DATASUBDIR" Using these CMake variables is the
   preferred way of changing the installation path for
   "share/gromacs/top/", since the path to this directory is built
   into "libgromacs" as well as some scripts, both as a relative and
   as an absolute path (the latter as a fallback if everything else
   fails).

"share/man/"
   Installed man pages go here.


Compiling and linking
---------------------

Once you have configured with "cmake", you can build GROMACS with
"make". It is expected that this will always complete successfully,
and give few or no warnings. The CMake-time tests GROMACS makes on the
settings you choose are pretty extensive, but there are probably a few
cases we have not thought of yet. Search the web first for solutions
to problems, but if you need help, ask on gmx-users, being sure to
provide as much information as possible about what you did, the system
you are building on, and what went wrong. This may mean scrolling back
a long way through the output of "make" to find the first error
message!

If you have a multi-core or multi-CPU machine with "N" processors,
then using

   make -j N

will generally speed things up by quite a bit. Other build generator
systems supported by "cmake" (e.g. "ninja") also work well.


Building only mdrun
~~~~~~~~~~~~~~~~~~~

This is now supported with the "cmake" option
"-DGMX_BUILD_MDRUN_ONLY=ON", which will build a different version of
"libgromacs" and the "mdrun" program. Naturally, now "make install"
installs only those products. By default, mdrun-only builds will
default to static linking against GROMACS libraries, because this is
generally a good idea for the targets for which an mdrun-only build is
desirable.


Installing GROMACS
------------------

Finally, "make install" will install GROMACS in the directory given in
"CMAKE_INSTALL_PREFIX". If this is a system directory, then you will
need permission to write there, and you should use super-user
privileges only for "make install" and not the whole procedure.


Getting access to GROMACS after installation
--------------------------------------------

GROMACS installs the script "GMXRC" in the "bin" subdirectory of the
installation directory (e.g. "/usr/local/gromacs/bin/GMXRC"), which
you should source from your shell:

   source /your/installation/prefix/here/bin/GMXRC

It will detect what kind of shell you are running and set up your
environment for using GROMACS. You may wish to arrange for your login
scripts to do this automatically; please search the web for
instructions on how to do this for your shell.

Many of the GROMACS programs rely on data installed in the
"share/gromacs" subdirectory of the installation directory. By
default, the programs will use the environment variables set in the
"GMXRC" script, and if this is not available they will try to guess
the path based on their own location.  This usually works well unless
you change the names of directories inside the install tree. If you
still need to do that, you might want to recompile with the new
install location properly set, or edit the "GMXRC" script.

GROMACS also installs a CMake toolchains file to help with building
client software. For an installation at
"/your/installation/prefix/here", toolchain files will be installed at
"/your/installation/prefix/here/share/cmake/gromacs${GMX_LIBS_SUFFIX
}/gromacs-toolchain${GMX_LIBS_SUFFIX}.cmake" where
"${GMX_LIBS_SUFFIX}" is as documented above.


Testing GROMACS for correctness
-------------------------------

Since 2011, the GROMACS development uses an automated system where
every new code change is subject to regression testing on a number of
platforms and software combinations. While this improves reliability
quite a lot, not everything is tested, and since we increasingly rely
on cutting edge compiler features there is non-negligible risk that
the default compiler on your system could have bugs. We have tried our
best to test and refuse to use known bad versions in "cmake", but we
strongly recommend that you run through the tests yourself. It only
takes a few minutes, after which you can trust your build.

The simplest way to run the checks is to build GROMACS with
"-DREGRESSIONTEST_DOWNLOAD", and run "make check". GROMACS will
automatically download and run the tests for you. Alternatively, you
can download and unpack the GROMACS regression test suite tarball
(http://ftp.gromacs.org/pub/regressiontests/regressiontests-2021-beta2.tar.gz)
yourself and use the advanced "cmake" option
"REGRESSIONTEST_PATH" to specify the path to the unpacked tarball,
which will then be used for testing. If the above does not work, then
please read on.
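
As a sketch, either of the following configurations should let "make
check" run the regression tests (the tarball path is a placeholder):

   # Download the tests automatically at configure time
   cmake .. -DREGRESSIONTEST_DOWNLOAD=ON
   make check
   # Or point the build at an already unpacked tarball
   cmake .. -DREGRESSIONTEST_PATH=/path/to/regressiontests-2021-beta2
   make check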

The regression tests are also available from the download section.
Once you have downloaded them, unpack the tarball, source "GMXRC" as
described above, and run "./gmxtest.pl all" inside the regression
tests folder. You can find more options (e.g. adding "double" when
using double precision, or "-only expanded" to run just the tests
whose names match "expanded") if you just execute the script without
options.

Hopefully, you will get a report that all tests have passed. If there
are individual failed tests it could be a sign of a compiler bug, or
that a tolerance is just a tiny bit too tight. Check the output files
the script directs you to, and try a different or newer compiler if
the errors appear to be real. If you cannot get it to pass the
regression tests, you might try dropping a line to the GROMACS users
forum, but then you should include a detailed description of your
hardware, and the output of "gmx mdrun -version" (which contains
valuable diagnostic information in the header).


Testing for MDRUN_ONLY executables
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

A build with "-DGMX_BUILD_MDRUN_ONLY" cannot be tested with "make
check" from the build tree, because most of the tests require a full
build to run things like "grompp". To test such an mdrun fully
requires installing it to the same location as a normal build of
GROMACS, downloading the regression tests tarball manually as
described above, sourcing the correct "GMXRC" and running the perl
script manually. For example, from your GROMACS source directory:

   mkdir build-normal
   cd build-normal
   # First, build and install normally to allow full testing of the standalone simulator.
   cmake .. -DGMX_MPI=ON -DCMAKE_INSTALL_PREFIX=/your/installation/prefix/here
   make -j 4
   make install
   cd ..
   mkdir build-mdrun-only
   cd build-mdrun-only
   # Next, build and install the GMX_BUILD_MDRUN_ONLY version (optional).
   cmake .. -DGMX_MPI=ON -DGMX_BUILD_MDRUN_ONLY=ON -DCMAKE_INSTALL_PREFIX=/your/installation/prefix/here
   make -j 4
   make install
   cd /to/your/unpacked/regressiontests
   source /your/installation/prefix/here/bin/GMXRC
   ./gmxtest.pl all -np 2


Non-standard suffix
~~~~~~~~~~~~~~~~~~~

If your mdrun program has been suffixed in a non-standard way, then
the "./gmxtest.pl -mdrun" option will let you specify that name to the
test machinery. You can use "./gmxtest.pl -double" to test the double-
precision version. You can use "./gmxtest.pl -crosscompiling" to stop
the test harness attempting to check that the programs can be run. You
can use "./gmxtest.pl -mpirun srun" if your command to run an MPI
program is called "srun".


Running MPI-enabled tests
~~~~~~~~~~~~~~~~~~~~~~~~~

The "make check" target also runs integration-style tests that may run
with MPI if "GMX_MPI=ON" was set. To make these work with various
possible MPI libraries, you may need to set the CMake variables
"MPIEXEC", "MPIEXEC_NUMPROC_FLAG", "MPIEXEC_PREFLAGS" and
"MPIEXEC_POSTFLAGS" so that "mdrun-mpi-test_mpi" would run on multiple
ranks via the shell command

   ${MPIEXEC} ${MPIEXEC_NUMPROC_FLAG} ${NUMPROC} ${MPIEXEC_PREFLAGS} \
         mdrun-mpi-test_mpi ${MPIEXEC_POSTFLAGS} -otherflags

A typical example for SLURM is

   cmake .. -DGMX_MPI=on -DMPIEXEC=srun -DMPIEXEC_NUMPROC_FLAG=-n -DMPIEXEC_PREFLAGS= -DMPIEXEC_POSTFLAGS=


Testing GROMACS for performance
-------------------------------

We are still working on a set of benchmark systems for testing the
performance of GROMACS. Until that is ready, we recommend that you try
a few different parallelization options, and experiment with tools
such as "gmx tune_pme".


Validating GROMACS for source code modifications
------------------------------------------------

When building GROMACS from a release tarball, the build process
automatically checks whether any files contributing to the build have
been modified since they were packed into the archive. This results
in the version being marked as either "MODIFIED" (if the
source files have been modified) or "UNCHECKED" (if no validation was
possible, e.g. if no Python installation was found). The actual
checking is performed by comparing a checksum stored in the release
tarball against one generated by the "createFileHash.py" Python script
during the build configuration. When running a GROMACS binary, the
checksum is also printed in the log file, together with a message if
there is a mismatch or no validation has been possible.

This allows users to check whether the binary they are using was built
from source code that is identical to the source code released by the
GROMACS team. This makes it easy to detect unintentional modifications
to the source code used to build binaries that run production
simulations. Additionally, by setting the "GMX_VERSION_STRING_OF_FORK"
CMake option, users can mark a deliberately modified GROMACS release
with their own custom version string suffix.
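
For example, a group maintaining its own patches could configure with

   cmake .. -DGMX_VERSION_STRING_OF_FORK=mylab

so that the resulting binaries carry the "mylab" suffix in their
reported version string (the fork name here is only a placeholder).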


Having difficulty?
------------------

You are not alone - this can be a complex task! If you encounter a
problem with installing GROMACS, then there are a number of locations
where you can find assistance. It is recommended that you follow these
steps to find the solution:

1. Read the installation instructions again, making sure that you
   have followed each and every step correctly.

2. Search the GROMACS webpage and users mailing list for
   information on the error. Adding
   "site:https://mailman-1.sys.kth.se/pipermail/gromacs.org_gmx-users"
   to a Google search may help filter better results.

3. Search the internet using a search engine such as Google.

4. Post to the GROMACS users mailing list gmx-users for
   assistance. Be sure to give a full description of what you have
   done and why you think it did not work. Give details about the
   system on which you are installing.  Copy and paste your command
   line and as much of the output as you think might be relevant -
   certainly from the first indication of a problem. In particular,
   please try to include at least the header from the mdrun logfile,
   and preferably the entire file.  People who might volunteer to help
   you do not have time to ask you interactive detailed follow-up
   questions, so you will get an answer faster if you provide as much
   information as you think could possibly help. High quality bug
   reports tend to receive rapid high quality answers.


Special instructions for some platforms
=======================================


Building on Windows
-------------------

Building on Windows using native compilers is rather similar to
building on Unix, so please start by reading the above. Then, download
and unpack the GROMACS source archive. Make a folder in which to do
the out-of-source build of GROMACS. For example, make it within the
folder unpacked from the source archive, and call it "build-gromacs".

For CMake, you can either use the graphical user interface provided on
Windows, or you can use a command line shell with instructions similar
to the UNIX ones above. If you open a shell from within your IDE (e.g.
Microsoft Visual Studio), it will configure the environment for you,
but you might need to tweak this in order to get either a 32-bit or
64-bit build environment. The latter provides the fastest executable.
If you use a normal Windows command shell, then you will need to
either set up the environment to find your compilers and libraries
yourself, or run the "vcvarsall.bat" batch script provided by MSVC
(just like sourcing a bash script under Unix).

With the graphical user interface, you will be asked about what
compilers to use at the initial configuration stage, and if you use
the command line they can be set in a similar way as under UNIX.

Unfortunately "-DGMX_BUILD_OWN_FFTW=ON" (see Using FFTW) does not work
on Windows, because there is no supported way to build FFTW on
Windows. You can either build FFTW some other way (e.g. with MinGW),
use the built-in fftpack (which may be slow), or use MKL.

For the build, you can either load the generated solutions file into
e.g. Visual Studio, or use the command line with "cmake --build" so
the right tools get used.
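
As a rough sketch, a command-line build in a 64-bit MSVC environment
might look like the following (the path to "vcvarsall.bat" and the
install prefix are placeholders that depend on your Visual Studio
version and preferences):

   "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Auxiliary\Build\vcvarsall.bat" x64
   cd build-gromacs
   cmake .. -DCMAKE_INSTALL_PREFIX=C:\GROMACS
   cmake --build . --config Release
   cmake --build . --config Release --target INSTALL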


Building on Cray
----------------

GROMACS builds mostly out of the box on modern Cray machines, but you
may need to specify the use of static binaries with
"-DGMX_BUILD_SHARED_EXE=off", and you may need to set the F77
environment variable to "ftn" when compiling FFTW. The ARM ThunderX2
Cray XC50 machines differ only in that the recommended compiler is the
ARM HPC Compiler ("armclang").
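
For example (a sketch only; the installation prefixes are
placeholders):

   # If compiling FFTW yourself, point its configure script at the Cray
   # Fortran wrapper first.
   F77=ftn ./configure --prefix=/your/fftw/installation/prefix
   # Then configure a static GROMACS build.
   cmake .. -DGMX_BUILD_SHARED_EXE=off \
            -DCMAKE_PREFIX_PATH=/your/fftw/installation/prefix \
            -DCMAKE_INSTALL_PREFIX=/where/gromacs/should/be/installed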


Building on Solaris
-------------------

The built-in GROMACS processor detection does not work on Solaris, so
it is strongly recommended that you build GROMACS with
"-DGMX_HWLOC=on" and ensure that the "CMAKE_PREFIX_PATH" includes the
path where the hwloc headers and libraries can be found. At least
version 1.11.8 of hwloc is recommended.
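
For example, if hwloc is installed under "/opt/hwloc" (a placeholder
path), you might configure with

   cmake .. -DGMX_HWLOC=on -DCMAKE_PREFIX_PATH=/opt/hwloc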

Oracle Developer Studio is not a currently supported compiler (and
does not currently compile GROMACS correctly, perhaps because the
thread-MPI atomics are incorrectly implemented in GROMACS).


Fujitsu PRIMEHPC
----------------

This is the architecture of the K computer, which uses Fujitsu
Sparc64VIIIfx chips. On this platform, GROMACS has accelerated group
kernels using the HPC-ACE instructions, no accelerated Verlet kernels,
and a custom build toolchain. Since this particular chip only supports
double-precision SIMD, the default setup is to build GROMACS in double
precision. Since most users only need single precision, we have added
the option "GMX_RELAXED_DOUBLE_PRECISION" to accept single-precision
square-root accuracy in the group kernels; unless you know that you
really need 15 digits of accuracy in each individual force, we
strongly recommend you use this. Note that all summation and other
operations are still done in double precision.

The recommended configuration is to use

   cmake .. -DCMAKE_TOOLCHAIN_FILE=Toolchain-Fujitsu-Sparc64-mpi.cmake \
            -DCMAKE_PREFIX_PATH=/your/fftw/installation/prefix \
            -DCMAKE_INSTALL_PREFIX=/where/gromacs/should/be/installed \
            -DGMX_MPI=ON \
            -DGMX_BUILD_MDRUN_ONLY=ON \
            -DGMX_RELAXED_DOUBLE_PRECISION=ON
   make
   make install


Intel Xeon Phi
--------------

Xeon Phi processors, hosted or self-hosted, are supported. Only
symmetric (aka native) mode is supported on Knights Corner. The
performance depends, among other factors, on the system size, and for
now it might not exceed that of CPUs. When building for
it, the recommended configuration is

   cmake .. -DCMAKE_TOOLCHAIN_FILE=Platform/XeonPhi
   make
   make install

The Knights Landing-based Xeon Phi processors behave like standard x86
nodes, but support a special SIMD instruction set. When cross-
compiling for such nodes, use the "AVX_512_KNL" SIMD flavor. Knights
Landing processors support so-called "clustering modes" which allow
reconfiguring the memory subsystem for lower latency. GROMACS can
benefit from the quadrant or SNC clustering modes. Care needs to be
taken to correctly pin threads. In particular, threads of an MPI rank
should not cross cluster and NUMA boundaries. In addition to the main
DRAM memory, Knights Landing has a high-bandwidth stacked memory
called MCDRAM. Using it offers performance benefits if it is ensured
that "mdrun" runs entirely from this memory; to do so it is
recommended that MCDRAM is configured in "Flat mode" and "mdrun" is
bound to the appropriate NUMA node (use e.g. "numactl --membind 1"
with quadrant clustering mode).
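
For example (a sketch only; "GMX_SIMD" selects the SIMD flavor as
described earlier, and the MCDRAM NUMA node number depends on how the
node is configured):

   # Cross-compile for Knights Landing nodes.
   cmake .. -DGMX_SIMD=AVX_512_KNL
   # With Flat-mode MCDRAM and quadrant clustering, bind mdrun to the
   # MCDRAM NUMA node.
   numactl --membind 1 gmx mdrun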


Tested platforms
================

While it is our best belief that GROMACS will build and run pretty
much everywhere, it is important that we tell you where we really know
it works because we have tested it. Every commit in our git source
code repository is currently tested with a range of configuration
options on x86 with gcc versions 7 and 8, clang versions 8 and 9, and
a beta version of oneAPI containing Intel's compiler. For this
testing, we use the Ubuntu 18.04 or 20.04 operating systems. Other
compiler, library, and OS versions are tested less frequently. For
details, you can have a look at the continuous integration server used
by GROMACS, which uses GitLab runners on a local Kubernetes (k8s) x86
cluster with NVIDIA and AMD GPU support.

We test irregularly on ARM v8, Cray, Power8, Power9, Google Native
Client and other environments, and with other compilers and compiler
versions, too.