
openmpi4-libs-4.1.1-150400.1.11 RPM for ppc64le

From openSUSE Leap 15.4 for ppc64le

Name: openmpi4-libs
Distribution: SUSE Linux Enterprise 15
Version: 4.1.1
Vendor: SUSE LLC <https://www.suse.com/>
Release: 150400.1.11
Build date: Sun May 8 07:23:02 2022
Group: System/Libraries
Build host: ibs-power9-11
Size: 20252832 bytes
Source RPM: openmpi4-4.1.1-150400.1.11.src.rpm
Packager: https://www.suse.com/
Url: http://www.open-mpi.org/
Summary: OpenMPI runtime libraries for OpenMPI version 4.1.1
OpenMPI is an implementation of the Message Passing Interface (MPI), a
standardized API typically used for parallel and/or distributed
computing. OpenMPI is the merged result of four prior implementations,
each of which was found to excel in one or more areas, such as latency
or throughput.

OpenMPI also includes an implementation of the OpenSHMEM parallel
programming API, which is a Partitioned Global Address Space (PGAS)
abstraction layer providing inter-process communication using
one-sided communication techniques.

This package provides the Open MPI/OpenSHMEM version 4
shared libraries.
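The shared libraries listed under Files below are installed under an MPI-selector style prefix rather than the default linker search path. As a minimal sketch (the prefix is taken from this package's own file list; SUSE environment-module setups may handle this for you), an application built against these libraries can be pointed at them like so:

```shell
# Sketch: expose the Open MPI 4 runtime libraries to the dynamic linker.
# The prefix matches this package's file list; adjust if your layout differs.
OPENMPI4_PREFIX=/usr/lib64/mpi/gcc/openmpi4
export LD_LIBRARY_PATH="$OPENMPI4_PREFIX/lib64${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
echo "$LD_LIBRARY_PATH"
```

On SUSE systems the same effect is usually achieved by loading the corresponding environment module instead of editing LD_LIBRARY_PATH by hand.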

Provides

Requires

License

BSD-3-Clause

Changelog

* Wed Apr 28 2021 nmoreychaisemartin@suse.com
  - openmpi4 is now the default openmpi for releases > 15.3
  - Add orted-mpir-add-version-to-shared-library.patch to fix unversioned library
  - Change RPM macros install path to %{_rpmmacrodir}
* Wed Apr 28 2021 nmoreychaisemartin@suse.com
  - Update to version 4.1.1
    - Fix a number of datatype issues, including an issue with
      improper handling of partial datatypes that could lead to
      an unexpected application failure.
    - Change UCX PML to not warn about MPI_Request leaks during
      MPI_FINALIZE by default.  The old behavior can be restored with
      the mca_pml_ucx_request_leak_check MCA parameter.
    - Reverted temporary solution that worked around launch issues in
      SLURM v20.11.{0,1,2}. SchedMD encourages users to avoid these
      versions and to upgrade to v20.11.3 or newer.
    - Updated PMIx to v3.2.2.
    - Disabled gcc built-in atomics by default on aarch64 platforms.
    - Disabled UCX PML when UCX v1.8.0 is detected. UCX version 1.8.0 has a bug that
      may cause data corruption when its TCP transport is used in conjunction with
      the shared memory transport. UCX versions prior to v1.8.0 are not affected by
      this issue. Thanks to @ksiazekm for reporting the issue.
    - Fixed detection of available UCX transports/devices to better inform PML
      prioritization.
    - Fixed SLURM support to mark ORTE daemons as non-MPI tasks.
    - Improved AVX detection to more accurately detect supported
      platforms.  Also improved the generated AVX code, and switched to
      using word-based MCA params for the op/avx component (vs. numeric
      big flags).
    - Improved OFI compatibility support and fixed memory leaks in error
      handling paths.
    - Improved HAN collectives with support for Barrier and Scatter. Thanks
      to @EmmanuelBRELLE for these changes and the relevant bug fixes.
    - Fixed MPI debugger support (i.e., the MPIR_Breakpoint() symbol).
      Thanks to @louisespellacy-arm for reporting the issue.
    - Fixed ORTE bug that prevented debuggers from reading MPIR_Proctable.
    - Removed PML uniformity check from the UCX PML to address performance
      regression.
    - Fixed MPI_Init_thread(3) statement about C++ binding and update
      references about MPI_THREAD_MULTIPLE.  Thanks to Andreas Lösel for
      bringing the outdated docs to our attention.
    - Added fence_nb to Flux PMIx support to address segmentation faults.
    - Ensured progress of AIO requests in the POSIX FBTL component to
      prevent exceeding maximum number of pending requests on MacOS.
    - Used OPAL's multi-thread support in the orted to leverage atomic
      operations for object refcounting.
    - Fixed segv when launching with static TCP ports.
    - Fixed --debug-daemons mpirun CLI option.
    - Fixed bug where mpirun did not honor --host in a managed job
      allocation.
    - Made a managed allocation filter a hostfile/hostlist.
    - Fixed bug to mark a generalized request as pending once initiated.
    - Fixed external PMIx v4.x check.
    - Fixed OSHMEM build with `--enable-mem-debug`.
    - Fixed a performance regression observed with older versions of GCC when
      __ATOMIC_SEQ_CST is used. Thanks to @BiplabRaut for reporting the issue.
    - Fixed buffer allocation bug in the binomial tree scatter algorithm when
      non-contiguous datatypes are used. Thanks to @sadcat11 for reporting the issue.
    - Fixed bugs related to the accumulate and atomics functionality in the
      osc/rdma component.
    - Fixed race condition in MPI group operations observed with
      MPI_THREAD_MULTIPLE threading level.
    - Fixed a deadlock in the TCP BTL's connection matching logic.
    - Fixed pml/ob1 compilation error when CUDA support is enabled.
    - Fixed a build issue with Lustre caused by unnecessary header includes.
    - Fixed a build issue with IMB LSF workload manager.
    - Fixed linker error with UCX SPML.
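Several of the entries above refer to MCA parameters, e.g. mca_pml_ucx_request_leak_check. As a sketch of the standard Open MPI conventions (parameter and value shown purely as an illustration of the mechanism), such a parameter can be set either on the mpirun command line or through an OMPI_MCA_-prefixed environment variable:

```shell
# Command-line form (requires an MPI launch environment):
#   mpirun --mca pml_ucx_request_leak_check true ./app
# Environment-variable form, read by Open MPI at startup:
export OMPI_MCA_pml_ucx_request_leak_check=true
echo "$OMPI_MCA_pml_ucx_request_leak_check"
```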
* Wed Mar 24 2021 eich@suse.com
  - Update to version 4.1.0
    * collectives: Add HAN and ADAPT adaptive collectives components.
      Both components are off by default and can be enabled by specifying
      "mpirun --mca coll_adapt_priority 100 --mca coll_han_priority 100 ...".
      We intend to enable both by default in Open MPI 5.0.
    * OMPIO is now the default for MPI-IO on all filesystems, including
      Lustre (prior to this, ROMIO was the default for Lustre).  Many
      thanks to Mark Dixon for identifying MPI I/O issues and providing
      access to Lustre systems for testing.
    * Minor MPI one-sided RDMA performance improvements.
    * Fix hcoll MPI_SCATTERV with MPI_IN_PLACE.
    * Add AVX support for MPI collectives.
    * Updates to mpirun(1) about "slots" and PE=x values.
    * Fix buffer allocation for large environment variables.  Thanks to
      @zrss for reporting the issue.
    * Upgrade the embedded OpenPMIx to v3.2.2.
    * Fix issue with extra-long values in MCA files.  Thanks to GitHub
      user @zrss for bringing the issue to our attention.
    * UCX: Fix zero-sized datatype transfers.
    * Fix --cpu-list for non-uniform modes.
    * Fix issue in PMIx callback caused by missing memory barrier on Arm platforms.
    * OFI MTL: Various bug fixes.
    * Fixed issue where MPI_TYPE_CREATE_RESIZED would create a datatype
      with unexpected extent on oddly-aligned datatypes.
    * collectives: Adjust default tuning thresholds for many collective
      algorithms
    * runtime: fix situation where rank-by argument does not work
    * Portals4: Clean up error handling corner cases
    * runtime: Remove --enable-install-libpmix option, which has not
      worked since it was added
    * UCX: Allow UCX 1.8 to be used with the btl uct
    * UCX: Replace usage of the deprecated NB API of UCX with NBX
    * OMPIO: Add support for the IME file system
    * OFI/libfabric: Added support for multiple NICs
    * OFI/libfabric: Added support for Scalable Endpoints
    * OFI/libfabric: Added btl for one-sided support
    * OFI/libfabric: Multiple small bugfixes
    * libnbc: Adding numerous performance-improving algorithms
  - Removed: reproducible.patch - replaced by spec file settings.
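The coll_adapt_priority and coll_han_priority values shown in the mpirun example above can also be set persistently through environment variables, following the usual OMPI_MCA_ naming convention (a sketch; the values mirror the changelog's own example):

```shell
# Raise the priority of the ADAPT and HAN collective components
# so Open MPI's component selection prefers them.
export OMPI_MCA_coll_adapt_priority=100
export OMPI_MCA_coll_han_priority=100
echo "$OMPI_MCA_coll_adapt_priority $OMPI_MCA_coll_han_priority"
```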
* Tue Sep 08 2020 nmoreychaisemartin@suse.com
  - Update to version 4.0.5
    - See NEWS for the detailed changelog
* Thu Jun 11 2020 nmoreychaisemartin@suse.com
  - Update to version 4.0.4
    - See NEWS for the detailed changelog
* Tue Jun 09 2020 nmoreychaisemartin@suse.com
  - Update to version 4.0.3
    - See NEWS for the detailed changelog
    - Fixes compilation with UCX 1.8
  - Drop memory-patcher-fix-compiler-warning.patch which was merged upstream
* Thu Mar 19 2020 nmoreychaisemartin@suse.com
  - Drop different package string between SLES and Leap
* Wed Jan 15 2020 nmoreychaisemartin@suse.com
  - Add memory-patcher-fix-compiler-warning.patch to fix 64bit portability issues
* Thu Oct 31 2019 nmoreychaisemartin@suse.com
  - Link against libnuma (bsc#1155120)
* Thu Oct 24 2019 nmoreychaisemartin@suse.com
  - Initial version (4.0.2)
  - Add reproducible.patch for reproducible builds.

Files

/usr/lib64/mpi/gcc/openmpi4
/usr/lib64/mpi/gcc/openmpi4/lib64
/usr/lib64/mpi/gcc/openmpi4/lib64/libmca_common_dstore.so.1
/usr/lib64/mpi/gcc/openmpi4/lib64/libmca_common_dstore.so.1.0.2
/usr/lib64/mpi/gcc/openmpi4/lib64/libmca_common_monitoring.so.50
/usr/lib64/mpi/gcc/openmpi4/lib64/libmca_common_monitoring.so.50.20.0
/usr/lib64/mpi/gcc/openmpi4/lib64/libmca_common_ompio.so.41
/usr/lib64/mpi/gcc/openmpi4/lib64/libmca_common_ompio.so.41.29.1
/usr/lib64/mpi/gcc/openmpi4/lib64/libmca_common_sm.so.40
/usr/lib64/mpi/gcc/openmpi4/lib64/libmca_common_sm.so.40.30.0
/usr/lib64/mpi/gcc/openmpi4/lib64/libmca_common_ucx.so.40
/usr/lib64/mpi/gcc/openmpi4/lib64/libmca_common_ucx.so.40.30.1
/usr/lib64/mpi/gcc/openmpi4/lib64/libmca_common_verbs.so.40
/usr/lib64/mpi/gcc/openmpi4/lib64/libmca_common_verbs.so.40.30.0
/usr/lib64/mpi/gcc/openmpi4/lib64/libmpi.so.40
/usr/lib64/mpi/gcc/openmpi4/lib64/libmpi.so.40.30.1
/usr/lib64/mpi/gcc/openmpi4/lib64/libmpi_mpifh.so.40
/usr/lib64/mpi/gcc/openmpi4/lib64/libmpi_mpifh.so.40.30.0
/usr/lib64/mpi/gcc/openmpi4/lib64/libmpi_usempi_ignore_tkr.so.40
/usr/lib64/mpi/gcc/openmpi4/lib64/libmpi_usempi_ignore_tkr.so.40.30.0
/usr/lib64/mpi/gcc/openmpi4/lib64/libmpi_usempif08.so.40
/usr/lib64/mpi/gcc/openmpi4/lib64/libmpi_usempif08.so.40.30.0
/usr/lib64/mpi/gcc/openmpi4/lib64/libompitrace.so.40
/usr/lib64/mpi/gcc/openmpi4/lib64/libompitrace.so.40.30.0
/usr/lib64/mpi/gcc/openmpi4/lib64/libopen-orted-mpir.so.40
/usr/lib64/mpi/gcc/openmpi4/lib64/libopen-orted-mpir.so.40.30.1
/usr/lib64/mpi/gcc/openmpi4/lib64/libopen-pal.so.40
/usr/lib64/mpi/gcc/openmpi4/lib64/libopen-pal.so.40.30.1
/usr/lib64/mpi/gcc/openmpi4/lib64/libopen-rte.so.40
/usr/lib64/mpi/gcc/openmpi4/lib64/libopen-rte.so.40.30.1
/usr/lib64/mpi/gcc/openmpi4/lib64/liboshmem.so.40
/usr/lib64/mpi/gcc/openmpi4/lib64/liboshmem.so.40.30.1
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/libompi_dbg_msgq.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_allocator_basic.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_allocator_bucket.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_atomic_basic.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_atomic_ucx.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_bml_r2.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_btl_openib.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_btl_self.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_btl_sm.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_btl_tcp.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_btl_vader.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_coll_adapt.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_coll_basic.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_coll_han.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_coll_inter.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_coll_libnbc.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_coll_monitoring.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_coll_self.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_coll_sm.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_coll_sync.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_coll_tuned.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_compress_bzip.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_compress_gzip.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_crs_none.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_errmgr_default_app.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_errmgr_default_hnp.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_errmgr_default_orted.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_errmgr_default_tool.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_ess_env.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_ess_hnp.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_ess_pmi.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_ess_singleton.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_ess_slurm.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_ess_tool.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_fbtl_posix.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_fcoll_dynamic.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_fcoll_dynamic_gen2.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_fcoll_individual.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_fcoll_two_phase.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_fcoll_vulcan.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_filem_raw.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_fs_ufs.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_grpcomm_direct.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_io_ompio.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_io_romio321.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_iof_hnp.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_iof_orted.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_iof_tool.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_memheap_buddy.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_memheap_ptmalloc.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_mpool_hugepage.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_odls_default.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_odls_pspawn.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_oob_tcp.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_osc_monitoring.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_osc_pt2pt.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_osc_rdma.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_osc_sm.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_osc_ucx.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_patcher_overwrite.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_plm_isolated.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_plm_rsh.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_plm_slurm.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_pmix_flux.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_pmix_isolated.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_pmix_pmix3x.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_pml_cm.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_pml_monitoring.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_pml_ob1.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_pml_ucx.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_pstat_linux.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_ras_simulator.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_ras_slurm.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_rcache_grdma.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_reachable_weighted.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_regx_fwd.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_regx_naive.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_regx_reverse.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_rmaps_mindist.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_rmaps_ppr.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_rmaps_rank_file.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_rmaps_resilient.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_rmaps_round_robin.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_rmaps_seq.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_rml_oob.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_routed_binomial.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_routed_direct.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_routed_radix.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_rtc_hwloc.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_schizo_flux.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_schizo_jsm.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_schizo_ompi.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_schizo_orte.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_schizo_slurm.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_scoll_basic.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_scoll_mpi.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_sharedfp_individual.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_sharedfp_lockedfile.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_sharedfp_sm.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_shmem_mmap.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_shmem_posix.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_shmem_sysv.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_spml_ucx.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_sshmem_mmap.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_sshmem_sysv.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_sshmem_ucx.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_state_app.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_state_hnp.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_state_novm.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_state_orted.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_state_tool.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_topo_basic.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_topo_treematch.so
/usr/lib64/mpi/gcc/openmpi4/lib64/openmpi/mca_vprotocol_pessimist.so
/usr/lib64/mpi/gcc/openmpi4/lib64/pmix
/usr/lib64/mpi/gcc/openmpi4/lib64/pmix/mca_bfrops_v12.so
/usr/lib64/mpi/gcc/openmpi4/lib64/pmix/mca_bfrops_v20.so
/usr/lib64/mpi/gcc/openmpi4/lib64/pmix/mca_bfrops_v21.so
/usr/lib64/mpi/gcc/openmpi4/lib64/pmix/mca_bfrops_v3.so
/usr/lib64/mpi/gcc/openmpi4/lib64/pmix/mca_gds_ds12.so
/usr/lib64/mpi/gcc/openmpi4/lib64/pmix/mca_gds_ds21.so
/usr/lib64/mpi/gcc/openmpi4/lib64/pmix/mca_gds_hash.so
/usr/lib64/mpi/gcc/openmpi4/lib64/pmix/mca_plog_default.so
/usr/lib64/mpi/gcc/openmpi4/lib64/pmix/mca_plog_stdfd.so
/usr/lib64/mpi/gcc/openmpi4/lib64/pmix/mca_plog_syslog.so
/usr/lib64/mpi/gcc/openmpi4/lib64/pmix/mca_preg_compress.so
/usr/lib64/mpi/gcc/openmpi4/lib64/pmix/mca_preg_native.so
/usr/lib64/mpi/gcc/openmpi4/lib64/pmix/mca_psec_native.so
/usr/lib64/mpi/gcc/openmpi4/lib64/pmix/mca_psec_none.so
/usr/lib64/mpi/gcc/openmpi4/lib64/pmix/mca_psensor_file.so
/usr/lib64/mpi/gcc/openmpi4/lib64/pmix/mca_psensor_heartbeat.so
/usr/lib64/mpi/gcc/openmpi4/lib64/pmix/mca_pshmem_mmap.so
/usr/lib64/mpi/gcc/openmpi4/lib64/pmix/mca_psquash_flex128.so
/usr/lib64/mpi/gcc/openmpi4/lib64/pmix/mca_psquash_native.so
/usr/lib64/mpi/gcc/openmpi4/lib64/pmix/mca_ptl_tcp.so
/usr/lib64/mpi/gcc/openmpi4/lib64/pmix/mca_ptl_usock.so


Generated by rpm2html 1.8.1

Fabrice Bellet, Tue Jul 9 15:47:03 2024