Charmworks has an exhibition booth at the Supercomputing 2016 (SC16) conference in Salt Lake City this week. Stop by booth #4354 to learn about Charm++’s programming model, to see a live fault tolerance demo, and to hear about commercial support for Charm++. Charmworks will co-host a Birds-of-a-Feather session titled “Charm++ and AMPI: Adaptive and Asynchronous Parallel Programming” on Wednesday, Nov 16th from 12:15 to 1:15 in room 250-E. There are a number of Charm++-related talks throughout the week, which you can read more about at charm.cs.illinois.edu/supercomputing.
Changes in this release are primarily bug fixes for 6.7.0. The major exception is AMPI, which has seen changes to its extension APIs and now complies with more of the MPI standard. A brief list of changes follows:
Charm++ Bug Fixes
- Startup and exit sequences are more robust
- Error and warning messages are generally more informative
- CkMulticast’s set and concat reducers now work correctly
AMPI
- AMPI’s extension APIs have been renamed to use the prefix AMPI_ instead of MPI_, and generally follow MPI’s naming conventions
- AMPI_Migrate(MPI_Info) is now used for dynamic load balancing and all fault tolerance schemes (see the AMPI manual)
- AMPI officially supports MPI-2.2, and also implements the non-blocking collectives and neighborhood collectives from MPI-3.1
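As a rough illustration of the unified AMPI_Migrate(MPI_Info) call described above, here is a minimal C sketch. It follows the pattern documented in the AMPI manual; the info key "ampi_load_balance" and its value "sync" are taken from that manual, and the surrounding function is hypothetical. This requires AMPI's toolchain (e.g. its compiler wrappers) to build, so it is a sketch rather than a standalone program.

```c
/* Sketch of the AMPI_Migrate(MPI_Info) extension API, per the AMPI
 * manual. The same call now drives both dynamic load balancing and
 * the fault tolerance schemes, selected via MPI_Info hints. */
#include <mpi.h>

void maybe_migrate(void) {
  MPI_Info hints;
  MPI_Info_create(&hints);
  /* Request synchronous, measurement-based load balancing; other
   * documented keys select checkpoint-based fault tolerance. */
  MPI_Info_set(hints, "ampi_load_balance", "sync");
  AMPI_Migrate(hints); /* collective; may migrate this rank */
  MPI_Info_free(&hints);
}
```

Callers typically invoke this at iteration boundaries, where the runtime can safely measure load and migrate ranks.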
Platforms and Portability
- The Cray regularpages build target has been fixed
- A Clang compiler target for BlueGene/Q systems has been added
- Communication thread tracing has been added for SMP mode
- AMPI’s compiler wrappers are now easier to use with autoconf and cmake
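For the last item, a typical way to point autoconf or CMake at AMPI's compiler wrappers looks like the following. The wrapper names ampicc and ampicxx are AMPI's documented wrappers; the project layout and build directory are illustrative.

```shell
# Autoconf project: select AMPI's wrappers at configure time.
CC=ampicc CXX=ampicxx ./configure

# CMake project: override the compilers when generating the build.
cmake -DCMAKE_C_COMPILER=ampicc -DCMAKE_CXX_COMPILER=ampicxx ..
```

Because the wrappers behave like ordinary compilers, the build system's standard compiler-detection checks can now pass without special-casing.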
The Parallel Programming Laboratory is holding its 14th annual Charm++ workshop on April 19th and 20th. The workshop is broadly focused on adaptivity in highly scalable parallel computing. It also takes stock of recent results in adaptive runtime techniques in Charm++ and the collaborative interdisciplinary research projects developed using it.
A live webcast will be available for remote viewers. The slides for the talks will be posted to the workshop website shortly after the talks conclude. The recorded talks will be available on the PPL YouTube channel.
Here is a list of significant changes in this release over version 6.6.1:
- New API for efficient formula-based distributed sparse array creation.
- Added missing MPI-2.0 APIs to AMPI.
- Out-of-tree builds are now supported.
- New build target: multicore-linux-arm7.
- PXSHM now auto-detects the node size.
- Added support for ++mpiexec with poe.
- Added new migration-related APIs to AMPI.
- CkLoop is now built by default.
- Scalable startup is now the default behavior when launching a job using charmrun.
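For the ++mpiexec item above, a launch might look like the following sketch. The option hands process launching to the system's MPI launcher (such as poe on IBM systems); the process count and the binary name ./pgm are placeholders.

```shell
# Launch a Charm++ program through the system MPI launcher instead of
# charmrun's own rsh/ssh mechanism. +p8 requests 8 processes; ./pgm
# is a placeholder for the application binary.
./charmrun +p8 ++mpiexec ./pgm
```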
This release also contains over 120 bug fixes, spanning areas across the entire system. Here is a list of the major fixes:
Bug Fixes
- Bug fix to handle CUDA threads correctly at exit.
- Bug fix in the recovery code on a node failure.
- Bug fixes in AMPI functions: MPI_Comm_create, MPI_Testall.
- Disable ASLR on Darwin builds to fix multi-node executions.
- Add flags to enable compilation of Charm++ on newer Cray compilers with C++11 support.
Deprecations and Deletions
- CommLib has been deleted.
- The +nodesize option for PXSHM is deprecated.
- CmiBool has been dropped in favor of C++’s bool.
- CBase_Foo::pup no longer needs to be called from Foo::pup.
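A minimal sketch of what the last item means for user code follows. The chare class Foo and its member myData are hypothetical, and the code assumes the usual Charm++ headers and the CBase_Foo base class generated from a .ci file, so it is not standalone.

```cpp
// Hypothetical chare illustrating the pup change: previously,
// Foo::pup had to call CBase_Foo::pup explicitly; the runtime now
// handles that part itself, so user pup routines only serialize
// their own members.
class Foo : public CBase_Foo {
  int myData;
 public:
  void pup(PUP::er &p) {
    // CBase_Foo::pup(p);  // no longer required
    p | myData;            // serialize this chare's own state
  }
};
```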