
benchdnn-1.6.3-bp153.1.13 RPM for aarch64

From openSUSE Leap 15.3 for aarch64

Name: benchdnn
Version: 1.6.3
Release: bp153.1.13
Group: Unspecified
Distribution: SUSE Linux Enterprise 15 SP3
Vendor: openSUSE
Build date: Sat Mar 6 08:55:38 2021
Build host: obs-arm-3
Size: 5472895
Source RPM: onednn-1.6.3-bp153.1.13.src.rpm
Packager: https://bugs.opensuse.org
Url: https://01.org/onednn
Summary: Benchmark utility for Intel(R) Math Kernel Library for Deep Neural Networks
Intel(R) Math Kernel Library for Deep Neural Networks (Intel(R) MKL-DNN) is an
open-source performance library for deep-learning applications. The library
accelerates deep-learning applications and frameworks on Intel architecture.
Intel MKL-DNN contains vectorized and threaded building blocks that you can use
to implement deep neural networks (DNN) with C and C++ interfaces.

This package contains only the benchmark utility and its input files.
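The packaged input files under /usr/share/benchdnn/inputs (see the file list below) can be fed to the benchmark through its --batch option. A minimal invocation sketch, assuming the upstream benchdnn driver flags (--conv, --mode, --batch); the exact option set may differ between oneDNN versions:

```shell
# Pick one of the packaged batch files (paths as in the file list below)
batch=/usr/share/benchdnn/inputs/conv/shapes_resnet_50

# Correctness-check (--mode=C) the convolution driver against that batch.
# Guarded so the snippet is a no-op where benchdnn is not installed.
if command -v benchdnn >/dev/null 2>&1; then
    benchdnn --conv --mode=C --batch="$batch"
fi
```

Replacing --mode=C with --mode=P switches the run from correctness verification to performance measurement.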

Provides

Requires

License

Apache-2.0

Changelog

* Mon Oct 05 2020 Guillaume GARDET <guillaume.gardet@opensuse.org>
  - Obsoletes mkl-dnn* <= %{version}
* Fri Oct 02 2020 Guillaume GARDET <guillaume.gardet@opensuse.org>
  - Rename mkl-dnn to onednn to follow upstream
* Wed Sep 23 2020 Guillaume GARDET <guillaume.gardet@opensuse.org>
  - Update to 1.6.3
  - Drop upstream patch:
    * cmake-no-install-ocl-cmake.patch
* Wed Sep 23 2020 Guillaume GARDET <guillaume.gardet@opensuse.org>
  - Build on aarch64 and ppc64le which are now also supported
  - Provide oneDNN and oneDNN-devel as it is the new official name
* Tue May 05 2020 Tomáš Chvátal <tchvatal@suse.com>
  - Update to 1.4:
    * Performance improvements all over the board
  - Rebase patch cmake-no-install-ocl-cmake.patch
* Tue Mar 24 2020 Tomáš Chvátal <tchvatal@suse.com>
  - Add constraints so testing does not crash on OOM
* Thu Feb 27 2020 Tomáš Chvátal <tchvatal@suse.com>
  - Do not disable LTO; there is no actual reason for that
  - Export LD_LIBRARY_PATH to fix building older releases
* Wed Feb 26 2020 Tomáš Chvátal <tchvatal@suse.com>
  - There is no actual reason not to use the GitHub tag for tarball
    fetching -> remove the service
  - Format with spec-cleaner
  - Use proper %cmake macros everywhere
  - Add configure options for cmake to set it up in a way we really
    want
  - Add patch from Debian to not install OpenCL cmake finder:
    * cmake-no-install-ocl-cmake.patch
* Thu Feb 20 2020 Christian Goll <cgoll@suse.com>
  - enabled tests
* Thu Jan 30 2020 Christian Goll <cgoll@suse.com>
  - packaged separate benchdnn package with its input files
  - updated to v1.1.3 which includes
    * Fixed the mean and variance memory descriptors in layer
    normalization (65f1908)
    * Fixed the layer normalization formula (c176ceb)
* Wed Jan 08 2020 Christian Goll <cgoll@suse.com>
  - updated to v1.1.2
    * Fixed threading over the spatial in bfloat16 batched
      normalization (017b6c9)
    * Fixed read past end-of-buffer error for int8 convolution (7d6f45e)
    * Fixed condition for dispatching optimized channel blocking in
      fp32 backward convolution on Intel Xeon Phi(TM) processor (846eba1)
    * Fixed fp32 backward convolution for shapes with spatial strides
      over the depth dimension (002e3ab)
    * Fixed softmax with zero sizes on GPU (936bff4)
    * Fixed int8 deconvolution with dilation when ih <= dh (3e3bacb)
    * Enabled back fp32 -> u8 reorder for RNN (a2c2507)
    * Fixed segmentation fault in bfloat16 backward convolution from
      kd_padding=0 computation (52d476c)
    * Fixed segmentation fault in bfloat16 forward convolution due
      to push/pop imbalance (4f6e3d5)
    * Fixed library version for OS X build (0d85005)
    * Fixed padding by channels in concat (a265c7d)
    * Added full text of third party licenses and
      copyright notices to LICENSE file (79f204c)
    * Added separate README for binary packages (28f4c96)
    * Fixed computing per-oc mask in RNN (ff3ffab)
    * Added workaround for number of cores calculation in Xbyak (301b088)
* Mon Feb 11 2019 cgoll@suse.com
  - added ARCH_OPT_FLAGS=""
* Tue Feb 05 2019 Christian Goll <cgoll@suse.com>
  - Initial check-in of the Intel(R) Math Kernel Library for
    Deep Neural Networks which can be used by:
    * tensorflow
    * Caffe
    * PyTorch
    and other machine learning tools

Files

/usr/bin/benchdnn
/usr/share/benchdnn
/usr/share/benchdnn/inputs
/usr/share/benchdnn/inputs/binary
/usr/share/benchdnn/inputs/binary/harness_binary_bf16
/usr/share/benchdnn/inputs/binary/harness_binary_f32
/usr/share/benchdnn/inputs/binary/harness_binary_i8
/usr/share/benchdnn/inputs/binary/shapes_common
/usr/share/benchdnn/inputs/binary/test_binary_all
/usr/share/benchdnn/inputs/binary/test_binary_bfloat16
/usr/share/benchdnn/inputs/binary/test_binary_gpu
/usr/share/benchdnn/inputs/bnorm
/usr/share/benchdnn/inputs/bnorm/bnorm_1d
/usr/share/benchdnn/inputs/bnorm/bnorm_2d
/usr/share/benchdnn/inputs/bnorm/bnorm_3d
/usr/share/benchdnn/inputs/bnorm/bnorm_densenet_121
/usr/share/benchdnn/inputs/bnorm/bnorm_googlenet_v2
/usr/share/benchdnn/inputs/bnorm/bnorm_googlenet_v3
/usr/share/benchdnn/inputs/bnorm/bnorm_large
/usr/share/benchdnn/inputs/bnorm/bnorm_regressions
/usr/share/benchdnn/inputs/bnorm/bnorm_resnet_50
/usr/share/benchdnn/inputs/bnorm/bnorm_topo
/usr/share/benchdnn/inputs/bnorm/bnorm_topo_gpu
/usr/share/benchdnn/inputs/bnorm/bnorm_topo_small
/usr/share/benchdnn/inputs/bnorm/perf_bnorm_gpu
/usr/share/benchdnn/inputs/bnorm/set_topologies_gpu
/usr/share/benchdnn/inputs/bnorm/test_bnorm_all
/usr/share/benchdnn/inputs/bnorm/test_bnorm_bfloat16
/usr/share/benchdnn/inputs/bnorm/test_bnorm_gpu
/usr/share/benchdnn/inputs/bnorm/test_bnorm_large_gpu
/usr/share/benchdnn/inputs/bnorm/test_bnorm_regressions
/usr/share/benchdnn/inputs/bnorm/test_bnorm_regressions_large
/usr/share/benchdnn/inputs/concat
/usr/share/benchdnn/inputs/concat/test_concat_all
/usr/share/benchdnn/inputs/concat/test_concat_bfloat16
/usr/share/benchdnn/inputs/concat/test_concat_gpu
/usr/share/benchdnn/inputs/conv
/usr/share/benchdnn/inputs/conv/harness_conv_attrs_gpu
/usr/share/benchdnn/inputs/conv/harness_conv_attrs_int8
/usr/share/benchdnn/inputs/conv/harness_conv_auto
/usr/share/benchdnn/inputs/conv/harness_conv_depthwise_int8
/usr/share/benchdnn/inputs/conv/harness_conv_dilated_int8
/usr/share/benchdnn/inputs/conv/harness_conv_dw_bfloat16
/usr/share/benchdnn/inputs/conv/harness_conv_dw_bfloat16_nxc
/usr/share/benchdnn/inputs/conv/harness_conv_f32
/usr/share/benchdnn/inputs/conv/harness_conv_f32_nxc
/usr/share/benchdnn/inputs/conv/harness_conv_fused_depthwise
/usr/share/benchdnn/inputs/conv/harness_conv_int8
/usr/share/benchdnn/inputs/conv/harness_conv_regression_general
/usr/share/benchdnn/inputs/conv/harness_conv_tags
/usr/share/benchdnn/inputs/conv/harness_deepbench
/usr/share/benchdnn/inputs/conv/harness_saturation_int8
/usr/share/benchdnn/inputs/conv/perf_conv_gen12lp
/usr/share/benchdnn/inputs/conv/perf_conv_gen9
/usr/share/benchdnn/inputs/conv/set_all_topologies
/usr/share/benchdnn/inputs/conv/set_conv_3d
/usr/share/benchdnn/inputs/conv/set_conv_all
/usr/share/benchdnn/inputs/conv/set_conv_dw
/usr/share/benchdnn/inputs/conv/set_dilated-conv
/usr/share/benchdnn/inputs/conv/set_dilated-conv_1st
/usr/share/benchdnn/inputs/conv/set_dilated-conv_3d
/usr/share/benchdnn/inputs/conv/set_fastrcnn
/usr/share/benchdnn/inputs/conv/set_gpu
/usr/share/benchdnn/inputs/conv/set_maskrcnn
/usr/share/benchdnn/inputs/conv/set_perf_cpu_all_mb
/usr/share/benchdnn/inputs/conv/set_perf_cpu_large_mb
/usr/share/benchdnn/inputs/conv/set_perf_cpu_small_mb
/usr/share/benchdnn/inputs/conv/set_perf_gpu_all_mb
/usr/share/benchdnn/inputs/conv/set_perf_gpu_large_mb
/usr/share/benchdnn/inputs/conv/set_perf_gpu_small_mb
/usr/share/benchdnn/inputs/conv/shapes_1d
/usr/share/benchdnn/inputs/conv/shapes_1d_wavenet
/usr/share/benchdnn/inputs/conv/shapes_1x1
/usr/share/benchdnn/inputs/conv/shapes_3d
/usr/share/benchdnn/inputs/conv/shapes_3d_1st_strided_padding
/usr/share/benchdnn/inputs/conv/shapes_3d_1x1_strided_no-padding
/usr/share/benchdnn/inputs/conv/shapes_3d_1x1_strided_padding
/usr/share/benchdnn/inputs/conv/shapes_3d_1x1_unit-stride_no-padding
/usr/share/benchdnn/inputs/conv/shapes_3d_1x1_unit-stride_padding
/usr/share/benchdnn/inputs/conv/shapes_3d_2d_strided_padding
/usr/share/benchdnn/inputs/conv/shapes_3d_gpu
/usr/share/benchdnn/inputs/conv/shapes_3d_i3d
/usr/share/benchdnn/inputs/conv/shapes_3d_resnext101
/usr/share/benchdnn/inputs/conv/shapes_3d_strided_no-padding
/usr/share/benchdnn/inputs/conv/shapes_3d_strided_padding
/usr/share/benchdnn/inputs/conv/shapes_3d_unet
/usr/share/benchdnn/inputs/conv/shapes_3d_unit-stride_no-padding
/usr/share/benchdnn/inputs/conv/shapes_3d_unit-stride_padding
/usr/share/benchdnn/inputs/conv/shapes_a3c
/usr/share/benchdnn/inputs/conv/shapes_alexnet
/usr/share/benchdnn/inputs/conv/shapes_auto
/usr/share/benchdnn/inputs/conv/shapes_basic
/usr/share/benchdnn/inputs/conv/shapes_cosmictagger
/usr/share/benchdnn/inputs/conv/shapes_deepbench_inference_device
/usr/share/benchdnn/inputs/conv/shapes_deepbench_inference_server
/usr/share/benchdnn/inputs/conv/shapes_deepbench_training
/usr/share/benchdnn/inputs/conv/shapes_densnet
/usr/share/benchdnn/inputs/conv/shapes_dilated
/usr/share/benchdnn/inputs/conv/shapes_dilated_1d_1st_strided_padding
/usr/share/benchdnn/inputs/conv/shapes_dilated_1d_strided_no-padding
/usr/share/benchdnn/inputs/conv/shapes_dilated_1d_strided_padding
/usr/share/benchdnn/inputs/conv/shapes_dilated_1d_unit-stride_no-padding
/usr/share/benchdnn/inputs/conv/shapes_dilated_1d_unit-stride_padding
/usr/share/benchdnn/inputs/conv/shapes_dilated_2d_1st_strided_padding
/usr/share/benchdnn/inputs/conv/shapes_dilated_2d_strided_no-padding
/usr/share/benchdnn/inputs/conv/shapes_dilated_2d_strided_padding
/usr/share/benchdnn/inputs/conv/shapes_dilated_2d_unit-stride_no-padding
/usr/share/benchdnn/inputs/conv/shapes_dilated_2d_unit-stride_padding
/usr/share/benchdnn/inputs/conv/shapes_dilated_3d_strided_no-padding
/usr/share/benchdnn/inputs/conv/shapes_dilated_3d_strided_padding
/usr/share/benchdnn/inputs/conv/shapes_dilated_3d_unit-stride_no-padding
/usr/share/benchdnn/inputs/conv/shapes_dilated_3d_unit-stride_padding
/usr/share/benchdnn/inputs/conv/shapes_dilated_rfcn
/usr/share/benchdnn/inputs/conv/shapes_dw_1d_stride_no-padding
/usr/share/benchdnn/inputs/conv/shapes_dw_1d_unit-stride_no-padding
/usr/share/benchdnn/inputs/conv/shapes_dw_1d_unit-stride_padding
/usr/share/benchdnn/inputs/conv/shapes_dw_2d_1d_strided_padding
/usr/share/benchdnn/inputs/conv/shapes_dw_2d_strided_no-padding
/usr/share/benchdnn/inputs/conv/shapes_dw_2d_strided_padding
/usr/share/benchdnn/inputs/conv/shapes_dw_2d_unit-stride_no-padding
/usr/share/benchdnn/inputs/conv/shapes_dw_2d_unit-stride_padding
/usr/share/benchdnn/inputs/conv/shapes_dw_minibatch_2d-spatial
/usr/share/benchdnn/inputs/conv/shapes_dw_minibatch_channel_2d-spatial
/usr/share/benchdnn/inputs/conv/shapes_fastrcnn_p1
/usr/share/benchdnn/inputs/conv/shapes_fastrcnn_p2
/usr/share/benchdnn/inputs/conv/shapes_fastrcnn_p3
/usr/share/benchdnn/inputs/conv/shapes_ffn
/usr/share/benchdnn/inputs/conv/shapes_fused_mobilenet_stride_1
/usr/share/benchdnn/inputs/conv/shapes_fused_mobilenet_stride_2
/usr/share/benchdnn/inputs/conv/shapes_gemm
/usr/share/benchdnn/inputs/conv/shapes_googlenet_v1
/usr/share/benchdnn/inputs/conv/shapes_googlenet_v2
/usr/share/benchdnn/inputs/conv/shapes_googlenet_v3
/usr/share/benchdnn/inputs/conv/shapes_maskrcnn_p1
/usr/share/benchdnn/inputs/conv/shapes_maskrcnn_p2
/usr/share/benchdnn/inputs/conv/shapes_mobilenet
/usr/share/benchdnn/inputs/conv/shapes_mobilenet_dw
/usr/share/benchdnn/inputs/conv/shapes_pointnet
/usr/share/benchdnn/inputs/conv/shapes_regression_1x1
/usr/share/benchdnn/inputs/conv/shapes_regression_dw
/usr/share/benchdnn/inputs/conv/shapes_regression_gemm
/usr/share/benchdnn/inputs/conv/shapes_regression_padding
/usr/share/benchdnn/inputs/conv/shapes_regression_small_spatial
/usr/share/benchdnn/inputs/conv/shapes_resnet_50
/usr/share/benchdnn/inputs/conv/shapes_resnet_50_sparse
/usr/share/benchdnn/inputs/conv/shapes_resnet_50_v1_5
/usr/share/benchdnn/inputs/conv/shapes_segnet
/usr/share/benchdnn/inputs/conv/shapes_src-transpose_padding
/usr/share/benchdnn/inputs/conv/shapes_ssd_300_voc0712
/usr/share/benchdnn/inputs/conv/shapes_ssd_mobilenet
/usr/share/benchdnn/inputs/conv/shapes_tails
/usr/share/benchdnn/inputs/conv/shapes_tails_gpu
/usr/share/benchdnn/inputs/conv/shapes_unet
/usr/share/benchdnn/inputs/conv/shapes_vgg_11
/usr/share/benchdnn/inputs/conv/shapes_vgg_19
/usr/share/benchdnn/inputs/conv/shapes_xception
/usr/share/benchdnn/inputs/conv/shapes_yolov2
/usr/share/benchdnn/inputs/conv/test_conv_3d
/usr/share/benchdnn/inputs/conv/test_conv_3d_f32_nxc
/usr/share/benchdnn/inputs/conv/test_conv_all
/usr/share/benchdnn/inputs/conv/test_conv_all_topologies
/usr/share/benchdnn/inputs/conv/test_conv_all_topologies_f32_nxc
/usr/share/benchdnn/inputs/conv/test_conv_attrs
/usr/share/benchdnn/inputs/conv/test_conv_attrs_f32_nxc
/usr/share/benchdnn/inputs/conv/test_conv_attrs_gpu
/usr/share/benchdnn/inputs/conv/test_conv_bfloat16
/usr/share/benchdnn/inputs/conv/test_conv_bfloat16_nxc
/usr/share/benchdnn/inputs/conv/test_conv_depthwise
/usr/share/benchdnn/inputs/conv/test_conv_dilated
/usr/share/benchdnn/inputs/conv/test_conv_dilated_f32_nxc
/usr/share/benchdnn/inputs/conv/test_conv_f32
/usr/share/benchdnn/inputs/conv/test_conv_f32_nxc
/usr/share/benchdnn/inputs/conv/test_conv_function
/usr/share/benchdnn/inputs/conv/test_conv_gemm_bfloat16
/usr/share/benchdnn/inputs/conv/test_conv_gemm_bfloat16_nxc
/usr/share/benchdnn/inputs/conv/test_conv_gemm_f32
/usr/share/benchdnn/inputs/conv/test_conv_gemm_f32_nxc
/usr/share/benchdnn/inputs/conv/test_conv_gemm_int8
/usr/share/benchdnn/inputs/conv/test_conv_gpu
/usr/share/benchdnn/inputs/conv/test_conv_int8
/usr/share/benchdnn/inputs/conv/test_conv_regression
/usr/share/benchdnn/inputs/conv/test_conv_wino_f32
/usr/share/benchdnn/inputs/conv/test_conv_wino_int8
/usr/share/benchdnn/inputs/deconv
/usr/share/benchdnn/inputs/deconv/deconv_1d
/usr/share/benchdnn/inputs/deconv/deconv_1x1
/usr/share/benchdnn/inputs/deconv/deconv_2d
/usr/share/benchdnn/inputs/deconv/deconv_3d
/usr/share/benchdnn/inputs/deconv/deconv_all
/usr/share/benchdnn/inputs/deconv/deconv_dilated
/usr/share/benchdnn/inputs/deconv/harness_deconv_regression_general
/usr/share/benchdnn/inputs/deconv/test_deconv_1x1
/usr/share/benchdnn/inputs/deconv/test_deconv_all
/usr/share/benchdnn/inputs/deconv/test_deconv_all_f32_nxc
/usr/share/benchdnn/inputs/deconv/test_deconv_bfloat16
/usr/share/benchdnn/inputs/deconv/test_deconv_bfloat16_nxc
/usr/share/benchdnn/inputs/deconv/test_deconv_gpu
/usr/share/benchdnn/inputs/eltwise
/usr/share/benchdnn/inputs/eltwise/harness_eltwise_all_alg
/usr/share/benchdnn/inputs/eltwise/shapes_eltwise
/usr/share/benchdnn/inputs/eltwise/test_eltwise_all
/usr/share/benchdnn/inputs/eltwise/test_eltwise_bfloat16
/usr/share/benchdnn/inputs/eltwise/test_eltwise_gpu
/usr/share/benchdnn/inputs/ip
/usr/share/benchdnn/inputs/ip/harness_saturation
/usr/share/benchdnn/inputs/ip/harness_tag
/usr/share/benchdnn/inputs/ip/ip_1d
/usr/share/benchdnn/inputs/ip/ip_3d
/usr/share/benchdnn/inputs/ip/ip_alexnet
/usr/share/benchdnn/inputs/ip/ip_all
/usr/share/benchdnn/inputs/ip/ip_bert
/usr/share/benchdnn/inputs/ip/ip_bert_large
/usr/share/benchdnn/inputs/ip/ip_dlrm
/usr/share/benchdnn/inputs/ip/ip_gnmt
/usr/share/benchdnn/inputs/ip/ip_googlenetv1
/usr/share/benchdnn/inputs/ip/ip_googlenetv3
/usr/share/benchdnn/inputs/ip/ip_gpu
/usr/share/benchdnn/inputs/ip/ip_maskrcnn
/usr/share/benchdnn/inputs/ip/ip_ncf
/usr/share/benchdnn/inputs/ip/ip_resnet50
/usr/share/benchdnn/inputs/ip/ip_resnet50_sparse
/usr/share/benchdnn/inputs/ip/ip_resnet50_swfmt
/usr/share/benchdnn/inputs/ip/ip_transformer_lt
/usr/share/benchdnn/inputs/ip/ip_vgg16
/usr/share/benchdnn/inputs/ip/ip_wd
/usr/share/benchdnn/inputs/ip/perf_ip_gen12lp
/usr/share/benchdnn/inputs/ip/perf_ip_gen9
/usr/share/benchdnn/inputs/ip/perf_ip_inference_lb
/usr/share/benchdnn/inputs/ip/perf_ip_inference_sb
/usr/share/benchdnn/inputs/ip/perf_ip_training
/usr/share/benchdnn/inputs/ip/set_gpu
/usr/share/benchdnn/inputs/ip/shapes_1d
/usr/share/benchdnn/inputs/ip/shapes_3d
/usr/share/benchdnn/inputs/ip/shapes_googlenet_v1
/usr/share/benchdnn/inputs/ip/shapes_googlenet_v3
/usr/share/benchdnn/inputs/ip/shapes_maskrcnn
/usr/share/benchdnn/inputs/ip/shapes_non-spatial
/usr/share/benchdnn/inputs/ip/shapes_non-spatial_gpu
/usr/share/benchdnn/inputs/ip/shapes_resnet_50
/usr/share/benchdnn/inputs/ip/shapes_resnet_50_sparse
/usr/share/benchdnn/inputs/ip/shapes_vgg16
/usr/share/benchdnn/inputs/ip/shapes_wd
/usr/share/benchdnn/inputs/ip/test_ip_all
/usr/share/benchdnn/inputs/ip/test_ip_bfloat16
/usr/share/benchdnn/inputs/ip/test_ip_gpu
/usr/share/benchdnn/inputs/lnorm
/usr/share/benchdnn/inputs/lnorm/lnorm_all
/usr/share/benchdnn/inputs/lnorm/test_lnorm_all
/usr/share/benchdnn/inputs/lnorm/test_lnorm_gpu
/usr/share/benchdnn/inputs/lrn
/usr/share/benchdnn/inputs/lrn/lrn_0d_all
/usr/share/benchdnn/inputs/lrn/lrn_2d_all
/usr/share/benchdnn/inputs/lrn/lrn_3d_all
/usr/share/benchdnn/inputs/lrn/set_lrn_all
/usr/share/benchdnn/inputs/lrn/test_lrn_all
/usr/share/benchdnn/inputs/lrn/test_lrn_bfloat16
/usr/share/benchdnn/inputs/lrn/test_lrn_gpu
/usr/share/benchdnn/inputs/matmul
/usr/share/benchdnn/inputs/matmul/shapes_2d
/usr/share/benchdnn/inputs/matmul/shapes_3d
/usr/share/benchdnn/inputs/matmul/shapes_bert
/usr/share/benchdnn/inputs/matmul/shapes_transformer
/usr/share/benchdnn/inputs/matmul/test_matmul_all
/usr/share/benchdnn/inputs/matmul/test_matmul_gpu
/usr/share/benchdnn/inputs/matmul/test_matmul_runtime
/usr/share/benchdnn/inputs/pool
/usr/share/benchdnn/inputs/pool/perf_pool_gpu
/usr/share/benchdnn/inputs/pool/pool_1d
/usr/share/benchdnn/inputs/pool/pool_1d_all
/usr/share/benchdnn/inputs/pool/pool_2d
/usr/share/benchdnn/inputs/pool/pool_2d_all
/usr/share/benchdnn/inputs/pool/pool_2d_small
/usr/share/benchdnn/inputs/pool/pool_3d
/usr/share/benchdnn/inputs/pool/pool_3d_all
/usr/share/benchdnn/inputs/pool/pool_3d_small
/usr/share/benchdnn/inputs/pool/pool_3d_unet
/usr/share/benchdnn/inputs/pool/pool_alexnet
/usr/share/benchdnn/inputs/pool/pool_googlenet_v1
/usr/share/benchdnn/inputs/pool/pool_googlenet_v3
/usr/share/benchdnn/inputs/pool/pool_i3d_resnet50_v1
/usr/share/benchdnn/inputs/pool/pool_ker_in_pad_1d
/usr/share/benchdnn/inputs/pool/pool_ker_in_pad_2d
/usr/share/benchdnn/inputs/pool/pool_ker_in_pad_2d_small
/usr/share/benchdnn/inputs/pool/pool_ker_in_pad_3d
/usr/share/benchdnn/inputs/pool/pool_ker_in_pad_3d_small
/usr/share/benchdnn/inputs/pool/pool_resnet_50
/usr/share/benchdnn/inputs/pool/set_pool_all
/usr/share/benchdnn/inputs/pool/set_pool_ker_in_pad_all
/usr/share/benchdnn/inputs/pool/set_topologies_gpu
/usr/share/benchdnn/inputs/pool/test_pool_all
/usr/share/benchdnn/inputs/pool/test_pool_bfloat16
/usr/share/benchdnn/inputs/pool/test_pool_gpu
/usr/share/benchdnn/inputs/pool/test_pool_large_gpu
/usr/share/benchdnn/inputs/reorder
/usr/share/benchdnn/inputs/reorder/harness_reorder_amx
/usr/share/benchdnn/inputs/reorder/test_reorder_all
/usr/share/benchdnn/inputs/reorder/test_reorder_bfloat16
/usr/share/benchdnn/inputs/reorder/test_reorder_gpu
/usr/share/benchdnn/inputs/reorder/test_reorder_runtime
/usr/share/benchdnn/inputs/resampling
/usr/share/benchdnn/inputs/resampling/maskrcnn
/usr/share/benchdnn/inputs/resampling/resampling_1d
/usr/share/benchdnn/inputs/resampling/resampling_2d
/usr/share/benchdnn/inputs/resampling/resampling_3d
/usr/share/benchdnn/inputs/resampling/set_resampling_all
/usr/share/benchdnn/inputs/resampling/test_resampling_all
/usr/share/benchdnn/inputs/resampling/test_resampling_gpu
/usr/share/benchdnn/inputs/rnn
/usr/share/benchdnn/inputs/rnn/lstmp_dic_ne_dhc
/usr/share/benchdnn/inputs/rnn/option_set_gnmt_decoder
/usr/share/benchdnn/inputs/rnn/option_set_gnmt_encoder
/usr/share/benchdnn/inputs/rnn/option_set_perf_inference_lb
/usr/share/benchdnn/inputs/rnn/option_set_perf_inference_sb
/usr/share/benchdnn/inputs/rnn/option_set_perf_training
/usr/share/benchdnn/inputs/rnn/perf_rnn_gen12lp
/usr/share/benchdnn/inputs/rnn/perf_rnn_gen9
/usr/share/benchdnn/inputs/rnn/perf_rnn_inference_lb
/usr/share/benchdnn/inputs/rnn/perf_rnn_inference_sb
/usr/share/benchdnn/inputs/rnn/perf_rnn_training
/usr/share/benchdnn/inputs/rnn/rnn_ds2
/usr/share/benchdnn/inputs/rnn/rnn_gnmt_decoder
/usr/share/benchdnn/inputs/rnn/rnn_gnmt_encoder
/usr/share/benchdnn/inputs/rnn/rnn_gru
/usr/share/benchdnn/inputs/rnn/rnn_gru_small
/usr/share/benchdnn/inputs/rnn/rnn_inference
/usr/share/benchdnn/inputs/rnn/rnn_large
/usr/share/benchdnn/inputs/rnn/rnn_large_nonuniform
/usr/share/benchdnn/inputs/rnn/rnn_small
/usr/share/benchdnn/inputs/rnn/rnn_training
/usr/share/benchdnn/inputs/rnn/shapes_deepspeech_2
/usr/share/benchdnn/inputs/rnn/test_gru_large
/usr/share/benchdnn/inputs/rnn/test_lstm_large
/usr/share/benchdnn/inputs/rnn/test_lstmp_all
/usr/share/benchdnn/inputs/rnn/test_rnn_all
/usr/share/benchdnn/inputs/rnn/test_rnn_gpu
/usr/share/benchdnn/inputs/rnn/test_rnn_inference
/usr/share/benchdnn/inputs/rnn/test_rnn_large
/usr/share/benchdnn/inputs/rnn/test_rnn_small
/usr/share/benchdnn/inputs/rnn/test_rnn_small_gpu
/usr/share/benchdnn/inputs/rnn/test_rnn_training
/usr/share/benchdnn/inputs/shuffle
/usr/share/benchdnn/inputs/shuffle/test_shuffle_all
/usr/share/benchdnn/inputs/shuffle/test_shuffle_bfloat16
/usr/share/benchdnn/inputs/shuffle/test_shuffle_gpu
/usr/share/benchdnn/inputs/softmax
/usr/share/benchdnn/inputs/softmax/softmax_2d
/usr/share/benchdnn/inputs/softmax/softmax_2d_all
/usr/share/benchdnn/inputs/softmax/softmax_4d
/usr/share/benchdnn/inputs/softmax/softmax_5d
/usr/share/benchdnn/inputs/softmax/softmax_6d
/usr/share/benchdnn/inputs/softmax/softmax_nlp
/usr/share/benchdnn/inputs/softmax/test_softmax_all
/usr/share/benchdnn/inputs/softmax/test_softmax_bfloat16
/usr/share/benchdnn/inputs/softmax/test_softmax_gpu
/usr/share/benchdnn/inputs/sum
/usr/share/benchdnn/inputs/sum/test_sum_all
/usr/share/benchdnn/inputs/sum/test_sum_bfloat16
/usr/share/benchdnn/inputs/sum/test_sum_gpu


Generated by rpm2html 1.8.1

Fabrice Bellet, Tue Apr 9 14:55:59 2024