The Linux-x86_64-ibverbs-smp and Solaris-x86_64-smp released binaries are based on ``smp'' builds of Charm++ that can run multiple threads, either on a single machine like a multicore build or across a network. SMP builds combine multiple worker threads and an extra communication thread into a single process. Since one core per process is used for the communication thread, SMP builds are typically slower than non-SMP builds. The advantage of SMP builds is that many data structures are shared among the threads, reducing the per-core memory footprint when scaling large simulations to many cores.
SMP builds launched with charmrun use +p to specify the total number of PEs (worker threads) and ++ppn to specify the number of PEs per process. Thus, to run one process with one communication and three worker threads on each of four quad-core nodes, one would specify:
charmrun namd2 +p12 ++ppn 3 <configfile>
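For network builds, charmrun locates remote hosts through a nodelist file; the following is a minimal sketch, assuming four hypothetical hosts named node1 through node4 that are reachable via ssh (the file name nodelist here is arbitrary and is passed with ++nodelist):

group main ++shell ssh
host node1
host node2
host node3
host node4

charmrun namd2 ++nodelist nodelist +p12 ++ppn 3 <configfile>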
For MPI-based SMP builds, one would specify any mpiexec options needed for the required number of processes and pass +ppn to the NAMD binary as:
mpiexec -np 4 namd2 +ppn 3 <configfile>
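The same accounting generalizes to other machine sizes; below is a minimal shell sketch, assuming one process per quad-core node (NODES and PPN are illustrative variables, not NAMD options):

NODES=4    # number of processes, one per node
PPN=3      # worker threads per process: cores per node minus one for the communication thread
mpiexec -np $NODES namd2 +ppn $PPN <configfile>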
See the Cray XE/XK/XC directions below for a more complex example.