
mvapich2-2.3.7-14.1 RPM for riscv64

From openSUSE Ports Tumbleweed for riscv64

Name: mvapich2
Version: 2.3.7
Release: 14.1
Group: Development/Libraries/Parallel
Size: 17490217 bytes
Packager: https://bugs.opensuse.org
Url: http://mvapich.cse.ohio-state.edu
Distribution: openSUSE Tumbleweed
Vendor: openSUSE
Build date: Mon Feb 26 23:24:34 2024
Build host: h02-ch1b
Source RPM: mvapich2-2.3.7-14.1.src.rpm
Summary: OSU MVAPICH2 MPI package
This is an MPI-3 implementation that includes all MPI-1 features. It
is based on MPICH2 and MVICH.
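
A quick way to exercise the package is a minimal MPI program. The sketch
below is generic MPI-3 code, not part of this RPM; the file name hello.c
is arbitrary:

    /* hello.c - print each process's rank; a generic MPI sanity check */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);               /* start the MPI runtime */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's rank */
        MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of ranks */
        printf("Hello from rank %d of %d\n", rank, size);
        MPI_Finalize();                       /* shut the runtime down */
        return 0;
    }

It can be built and launched with the wrappers listed under Files below,
e.g. /usr/lib64/mpi/gcc/mvapich2/bin/mpicc hello.c -o hello followed by
/usr/lib64/mpi/gcc/mvapich2/bin/mpiexec -np 4 ./hello.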

Provides

Requires

License

BSD-3-Clause

Changelog

* Thu Feb 22 2024 pgajdos@suse.com
  - Use %patch -P N instead of deprecated %patchN.
* Thu Oct 26 2023 Nicolas Morey <nicolas.morey@suse.com>
  - Add mvapich2-openpa-add-memory-barriers.patch to fix testsuite issue
    on ppc64 (bsc#1216610, bsc#1216612)
* Mon Aug 07 2023 Nicolas Morey <nicolas.morey@suse.com>
  - Drop support for obsolete TrueScale (bsc#1212146)
* Mon Dec 05 2022 Stefan Brüns <stefan.bruens@rwth-aachen.de>
  - Reduce constraints to match the actual requirement. Exaggerating
    the requirements hurts both this package (time until build can
    start) as well as other OBS users (blocking large workers
    without need).
  - Use a reproducible timestamp instead of removing it altogether.
* Mon Nov 28 2022 Nicolas Morey-Chaisemartin <nmoreychaisemartin@suse.com>
  - Update reproducible.patch to remove timestamp generated at compilation time
* Wed Jul 06 2022 Nicolas Morey-Chaisemartin <nmoreychaisemartin@suse.com>
  - Add mvapich2-allow-building-with-external-hwloc.patch
    to allow building against an external hwloc library
  - Build mvapich2 HPC flavors against pmix and hwloc system libraries
* Wed Jun 29 2022 Klaus Kämpf <kkaempf@suse.com>
  - Add pass-correct-size-to-snprintf.patch to fix potential buffer
    overflows (required to make 'sundials' testsuite pass; sketched
    after this changelog)
  - Update to mvapich2 2.3.7
    * Features and Enhancements (since 2.3.6):
      - Added support for systems with Rockport's switchless networks
        * Added automatic architecture detection
        * Optimized performance for point-to-point operations
      - Added support for the Cray Slingshot 10 interconnect
      - Enhanced support for blocking collective offload using
        Mellanox SHARP
        * Scatter and Scatterv
      - Enhanced support for non-blocking collective offload using
        Mellanox SHARP
        * Iallreduce, Ibarrier, Ibcast, and Ireduce
    * Bug Fixes (since 2.3.6):
      - Removed several deprecated functions
        - Thanks to Honggang Li @RedHat for the report
      - Fixed a bug where tools like CMake FindMPI would not
        detect MVAPICH when compiled without Hydra mpiexec
        - Thanks to Chris Chambreau and Adam Moody @LLNL for the report
      - Fixed a compilation error when building with mpirun and without Hydra
        - Thanks to James Long @University of Illinois for the report
      - Fixed an issue with setting RoCE mode correctly without RDMA_CM
        - Thanks to Nicolas Gagnon @Rockport Networks for the report
      - Fixed an issue on heterogeneous clusters where QP attributes were
        set incorrectly
        - Thanks to X-ScaleSolutions for the report and fix
      - Fixed a memory leak in improbe on the PSM channel
        - Thanks to Gregory Lee @LLNL and Beichuan Yan @University of
          Colorado for the report
      - Added retry logic for PSM connection establishment
        - Thanks to Gregory Lee @LLNL for the report and X-ScaleSolutions
          for the patch
      - Fixed an initialization error when using PSM and gcc's -pg option
        - Thanks to Gregory Lee @LLNL for the report and X-ScaleSolutions
          for the patch
      - Fixed a potential integer overflow when transferring large arrays
        (sketched after this changelog)
        - Thanks to Alexander Melnikov for the report and patch
  - Fix Url: link
* Wed Feb 16 2022 Nicolas Morey-Chaisemartin <nmoreychaisemartin@suse.com>
  - Disable dlopen for verbs library (bsc#1196019)
* Tue Oct 19 2021 Nicolas Morey-Chaisemartin <nmoreychaisemartin@suse.com>
  - Move rpm macros to %_rpmmacrodir (bsc#1191386)
* Tue Sep 28 2021 Nicolas Morey-Chaisemartin <nmoreychaisemartin@suse.com>
  - Remove obsolete python dependency (bsc#1190996)
* Tue May 18 2021 Nicolas Morey-Chaisemartin <nmoreychaisemartin@suse.com>
  - Update to mvapich2 2.3.6
    - Enhanced performance for UD-Hybrid code
    - Add multi-rail support for UD-Hybrid code
    - Enhanced performance for shared-memory collectives
    - Enhanced job-startup performance for flux job launcher
    - Use PMI2 by default when SLURM is selected as process manager
    - Add support to use aligned memory allocations for multi-threaded
      applications
    - Architecture detection and enhanced point-to-point tuning for
      Oracle BM.HPC2 cloud shape
    - Add support for GCC compiler v11
    - Update hwloc v1 code to v1.11.14
    - Update hwloc v2 code to v2.4.2
  - Drop obsolete patches:
    - fix-missing-return-code.patch as it was fixed upstream
    - mvapich2-remove-deprecated-sys_siglist.patch
    - rdma_find_network_type-return-MV2_NETWORK_CLASS_UNKNOWN-when-dev_list-is-freed.patch
  - Refresh reproducible.patch
* Wed Mar 24 2021 Egbert Eich <eich@suse.com>
  - Update mvapich2 to 2.3.5.
    * Enhanced performance for MPI_Allreduce and MPI_Barrier
    * Support collective offload using Mellanox's SHARP for Barrier
      - Enhanced tuning framework for Barrier using SHARP
    * Remove dependency on underlying libibverbs, libibmad, libibumad, and
      librdmacm libraries using dlopen
    * Add support for Broadcom NetXtreme RoCE HCA
      - Enhanced inter-node point-to-point support
    * Support architecture detection for the Fujitsu A64FX processor
    * Enhanced point-to-point and collective tuning for the Fujitsu A64FX
      processor
    * Enhanced point-to-point and collective tuning for the AMD Rome processor
    * Add support for process placement aware HCA selection
      - Add the "MV2_PROCESS_PLACEMENT_AWARE_HCA_MAPPING" environment variable
        to enable process placement aware HCA mapping
    * Add support to auto-detect RoCE HCAs and auto-detect GID index
    * Add support to use RoCE/Ethernet and InfiniBand HCAs at the same time
    * Add architecture-specific flags to improve performance of certain CUDA
      operations
      - Thanks to Chris Chambreau @LLNL for the report
    * Read MTU and maximum outstanding RDMA operations from the device
    * Improved performance and scalability for UD-based communication
    * Update maximum HCAs supported by default from 4 to 10
    * Enhanced collective tuning for Frontera@TACC, Expanse@SDSC,
      Ookami@StonyBrook, and bb5@EPFL
    * Enhanced support for SHARP v2.1.0
    * Generalize code for GPU support
  - Drop obsolete wrapper-revert-ldflag-order-change.patch
  - Replace mvapich2-fix-double-free.patch with
    rdma_find_network_type-return-MV2_NETWORK_CLASS_UNKNOWN-when-dev_list-is-freed.patch
* Thu Feb 18 2021 Nicolas Morey-Chaisemartin <nmoreychaisemartin@suse.com>
  - Re-add mvapich2-fix-double-free.patch as the bug was
    somehow reintroduced (bsc#1144000)
  - Add mvapich2-remove-deprecated-sys_siglist.patch to
    fix compilation errors with newer glibc
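
The pass-correct-size-to-snprintf.patch entry in the 2.3.7 update above
addresses a common C bug class: a size argument that does not match the
destination buffer. A minimal sketch, with hypothetical names, unrelated
to the actual patched MVAPICH2 code:

    #include <stdio.h>

    /* Format "host:rank" into dst without writing past its end. */
    static void format_peer(char *dst, size_t dst_size,
                            const char *host, int rank)
    {
        /* Passing the destination's true size makes snprintf truncate
         * safely; passing a larger, unrelated value is the overflow bug. */
        snprintf(dst, dst_size, "%s:%d", host, rank);
    }

    int main(void)
    {
        char buf[16];
        format_peer(buf, sizeof buf, "node01", 3); /* sizeof buf, not a guess */
        printf("%s\n", buf);
        return 0;
    }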
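
The "potential integer overflow when transferring large arrays" fix in the
same update points at another classic failure mode: computing a byte count
in 32-bit int arithmetic wraps once the array passes 2 GiB, even though the
element count itself still fits in an int. An illustrative sketch only, not
the patched code:

    #include <stdio.h>

    int main(void)
    {
        int count = 300000000;                          /* 300 M doubles */
        size_t nbytes = (size_t)count * sizeof(double); /* ~2.4 GB, correct */
        int truncated = (int)nbytes; /* implementation-defined: wraps to a
                                        negative value where int is 32-bit */
        printf("correct=%zu truncated=%d\n", nbytes, truncated);
        return 0;
    }

Keeping byte counts in size_t (or MPI_Count) avoids the wrap.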

Files

/usr/lib64/mpi
/usr/lib64/mpi/gcc
/usr/lib64/mpi/gcc/mvapich2
/usr/lib64/mpi/gcc/mvapich2/bin
/usr/lib64/mpi/gcc/mvapich2/bin/hydra_nameserver
/usr/lib64/mpi/gcc/mvapich2/bin/hydra_persist
/usr/lib64/mpi/gcc/mvapich2/bin/hydra_pmi_proxy
/usr/lib64/mpi/gcc/mvapich2/bin/mpic++
/usr/lib64/mpi/gcc/mvapich2/bin/mpicc
/usr/lib64/mpi/gcc/mvapich2/bin/mpichversion
/usr/lib64/mpi/gcc/mvapich2/bin/mpicxx
/usr/lib64/mpi/gcc/mvapich2/bin/mpiexec
/usr/lib64/mpi/gcc/mvapich2/bin/mpiexec.hydra
/usr/lib64/mpi/gcc/mvapich2/bin/mpiexec.mpirun_rsh
/usr/lib64/mpi/gcc/mvapich2/bin/mpif77
/usr/lib64/mpi/gcc/mvapich2/bin/mpif90
/usr/lib64/mpi/gcc/mvapich2/bin/mpifort
/usr/lib64/mpi/gcc/mvapich2/bin/mpiname
/usr/lib64/mpi/gcc/mvapich2/bin/mpirun
/usr/lib64/mpi/gcc/mvapich2/bin/mpirun_rsh
/usr/lib64/mpi/gcc/mvapich2/bin/mpispawn
/usr/lib64/mpi/gcc/mvapich2/bin/mpivars
/usr/lib64/mpi/gcc/mvapich2/bin/mpivars.csh
/usr/lib64/mpi/gcc/mvapich2/bin/mpivars.sh
/usr/lib64/mpi/gcc/mvapich2/bin/parkill
/usr/lib64/mpi/gcc/mvapich2/include
/usr/lib64/mpi/gcc/mvapich2/lib64
/usr/lib64/mpi/gcc/mvapich2/lib64/libmpi.so.12
/usr/lib64/mpi/gcc/mvapich2/lib64/libmpi.so.12.1.1
/usr/lib64/mpi/gcc/mvapich2/lib64/libmpicxx.so.12
/usr/lib64/mpi/gcc/mvapich2/lib64/libmpicxx.so.12.1.1
/usr/lib64/mpi/gcc/mvapich2/lib64/libmpifort.so.12
/usr/lib64/mpi/gcc/mvapich2/lib64/libmpifort.so.12.1.1
/usr/lib64/mpi/gcc/mvapich2/lib64/osu-micro-benchmarks
/usr/lib64/mpi/gcc/mvapich2/lib64/osu-micro-benchmarks/mpi
/usr/lib64/mpi/gcc/mvapich2/lib64/osu-micro-benchmarks/mpi/collective
/usr/lib64/mpi/gcc/mvapich2/lib64/osu-micro-benchmarks/mpi/collective/osu_allgather
/usr/lib64/mpi/gcc/mvapich2/lib64/osu-micro-benchmarks/mpi/collective/osu_allgatherv
/usr/lib64/mpi/gcc/mvapich2/lib64/osu-micro-benchmarks/mpi/collective/osu_allreduce
/usr/lib64/mpi/gcc/mvapich2/lib64/osu-micro-benchmarks/mpi/collective/osu_alltoall
/usr/lib64/mpi/gcc/mvapich2/lib64/osu-micro-benchmarks/mpi/collective/osu_alltoallv
/usr/lib64/mpi/gcc/mvapich2/lib64/osu-micro-benchmarks/mpi/collective/osu_barrier
/usr/lib64/mpi/gcc/mvapich2/lib64/osu-micro-benchmarks/mpi/collective/osu_bcast
/usr/lib64/mpi/gcc/mvapich2/lib64/osu-micro-benchmarks/mpi/collective/osu_gather
/usr/lib64/mpi/gcc/mvapich2/lib64/osu-micro-benchmarks/mpi/collective/osu_gatherv
/usr/lib64/mpi/gcc/mvapich2/lib64/osu-micro-benchmarks/mpi/collective/osu_iallgather
/usr/lib64/mpi/gcc/mvapich2/lib64/osu-micro-benchmarks/mpi/collective/osu_iallgatherv
/usr/lib64/mpi/gcc/mvapich2/lib64/osu-micro-benchmarks/mpi/collective/osu_iallreduce
/usr/lib64/mpi/gcc/mvapich2/lib64/osu-micro-benchmarks/mpi/collective/osu_ialltoall
/usr/lib64/mpi/gcc/mvapich2/lib64/osu-micro-benchmarks/mpi/collective/osu_ialltoallv
/usr/lib64/mpi/gcc/mvapich2/lib64/osu-micro-benchmarks/mpi/collective/osu_ialltoallw
/usr/lib64/mpi/gcc/mvapich2/lib64/osu-micro-benchmarks/mpi/collective/osu_ibarrier
/usr/lib64/mpi/gcc/mvapich2/lib64/osu-micro-benchmarks/mpi/collective/osu_ibcast
/usr/lib64/mpi/gcc/mvapich2/lib64/osu-micro-benchmarks/mpi/collective/osu_igather
/usr/lib64/mpi/gcc/mvapich2/lib64/osu-micro-benchmarks/mpi/collective/osu_igatherv
/usr/lib64/mpi/gcc/mvapich2/lib64/osu-micro-benchmarks/mpi/collective/osu_ireduce
/usr/lib64/mpi/gcc/mvapich2/lib64/osu-micro-benchmarks/mpi/collective/osu_iscatter
/usr/lib64/mpi/gcc/mvapich2/lib64/osu-micro-benchmarks/mpi/collective/osu_iscatterv
/usr/lib64/mpi/gcc/mvapich2/lib64/osu-micro-benchmarks/mpi/collective/osu_reduce
/usr/lib64/mpi/gcc/mvapich2/lib64/osu-micro-benchmarks/mpi/collective/osu_reduce_scatter
/usr/lib64/mpi/gcc/mvapich2/lib64/osu-micro-benchmarks/mpi/collective/osu_scatter
/usr/lib64/mpi/gcc/mvapich2/lib64/osu-micro-benchmarks/mpi/collective/osu_scatterv
/usr/lib64/mpi/gcc/mvapich2/lib64/osu-micro-benchmarks/mpi/one-sided
/usr/lib64/mpi/gcc/mvapich2/lib64/osu-micro-benchmarks/mpi/one-sided/osu_acc_latency
/usr/lib64/mpi/gcc/mvapich2/lib64/osu-micro-benchmarks/mpi/one-sided/osu_cas_latency
/usr/lib64/mpi/gcc/mvapich2/lib64/osu-micro-benchmarks/mpi/one-sided/osu_fop_latency
/usr/lib64/mpi/gcc/mvapich2/lib64/osu-micro-benchmarks/mpi/one-sided/osu_get_acc_latency
/usr/lib64/mpi/gcc/mvapich2/lib64/osu-micro-benchmarks/mpi/one-sided/osu_get_bw
/usr/lib64/mpi/gcc/mvapich2/lib64/osu-micro-benchmarks/mpi/one-sided/osu_get_latency
/usr/lib64/mpi/gcc/mvapich2/lib64/osu-micro-benchmarks/mpi/one-sided/osu_put_bibw
/usr/lib64/mpi/gcc/mvapich2/lib64/osu-micro-benchmarks/mpi/one-sided/osu_put_bw
/usr/lib64/mpi/gcc/mvapich2/lib64/osu-micro-benchmarks/mpi/one-sided/osu_put_latency
/usr/lib64/mpi/gcc/mvapich2/lib64/osu-micro-benchmarks/mpi/pt2pt
/usr/lib64/mpi/gcc/mvapich2/lib64/osu-micro-benchmarks/mpi/pt2pt/osu_bibw
/usr/lib64/mpi/gcc/mvapich2/lib64/osu-micro-benchmarks/mpi/pt2pt/osu_bw
/usr/lib64/mpi/gcc/mvapich2/lib64/osu-micro-benchmarks/mpi/pt2pt/osu_latency
/usr/lib64/mpi/gcc/mvapich2/lib64/osu-micro-benchmarks/mpi/pt2pt/osu_latency_mp
/usr/lib64/mpi/gcc/mvapich2/lib64/osu-micro-benchmarks/mpi/pt2pt/osu_latency_mt
/usr/lib64/mpi/gcc/mvapich2/lib64/osu-micro-benchmarks/mpi/pt2pt/osu_mbw_mr
/usr/lib64/mpi/gcc/mvapich2/lib64/osu-micro-benchmarks/mpi/pt2pt/osu_multi_lat
/usr/lib64/mpi/gcc/mvapich2/lib64/osu-micro-benchmarks/mpi/startup
/usr/lib64/mpi/gcc/mvapich2/lib64/osu-micro-benchmarks/mpi/startup/osu_hello
/usr/lib64/mpi/gcc/mvapich2/lib64/osu-micro-benchmarks/mpi/startup/osu_init
/usr/lib64/mpi/gcc/mvapich2/share
/usr/lib64/mpi/gcc/mvapich2/share/man
/usr/lib64/mpi/gcc/mvapich2/share/man/man1
/usr/lib64/mpi/gcc/mvapich2/share/man/man1/hydra_nameserver.1
/usr/lib64/mpi/gcc/mvapich2/share/man/man1/hydra_persist.1
/usr/lib64/mpi/gcc/mvapich2/share/man/man1/hydra_pmi_proxy.1
/usr/lib64/mpi/gcc/mvapich2/share/man/man1/mpicc.1
/usr/lib64/mpi/gcc/mvapich2/share/man/man1/mpicxx.1
/usr/lib64/mpi/gcc/mvapich2/share/man/man1/mpiexec.1
/usr/lib64/mpi/gcc/mvapich2/share/man/man1/mpif77.1
/usr/lib64/mpi/gcc/mvapich2/share/man/man1/mpifort.1
/usr/lib64/mpi/gcc/mvapich2/share/man/man3
/usr/share/doc/mvapich2/CHANGELOG
/usr/share/doc/mvapich2/CHANGES
/usr/share/doc/mvapich2/COPYRIGHT
/usr/share/modules
/usr/share/modules/gnu-mvapich2
/usr/share/modules/gnu-mvapich2/.version
/usr/share/modules/gnu-mvapich2/2.3.7

