Mock Version: 3.5
ENTER ['do_with_status'](['bash', '--login', '-c', '/usr/bin/rpmbuild -bs --target x86_64 --nodeps /builddir/build/SPECS/ollama.spec'], chrootPath='/var/lib/mock/openeuler-24.03_LTS-x86_64-1721047928.254163/root'env={'TERM': 'vt100', 'SHELL': '/bin/bash', 'HOME': '/builddir', 'HOSTNAME': 'mock', 'PATH': '/usr/bin:/bin:/usr/sbin:/sbin', 'PROMPT_COMMAND': 'printf "\\033]0;\\007"', 'PS1': ' \\s-\\v\\$ ', 'LANG': 'C.UTF-8'}shell=Falselogger=timeout=0uid=1000gid=135user='mockbuild'nspawn_args=[]unshare_net=FalseprintOutput=True)
Executing command: ['bash', '--login', '-c', '/usr/bin/rpmbuild -bs --target x86_64 --nodeps /builddir/build/SPECS/ollama.spec'] with env {'TERM': 'vt100', 'SHELL': '/bin/bash', 'HOME': '/builddir', 'HOSTNAME': 'mock', 'PATH': '/usr/bin:/bin:/usr/sbin:/sbin', 'PROMPT_COMMAND': 'printf "\\033]0;\\007"', 'PS1': ' \\s-\\v\\$ ', 'LANG': 'C.UTF-8'} and shell False
warning: Macro expanded in comment on line 10: %{name}/archive/refs/tags/v%{version}.tar.gz
Building target platforms: x86_64
Building for target x86_64
Wrote: /builddir/build/SRPMS/ollama-0.2.5-1.src.rpm
RPM build warnings: Macro expanded in comment on line 10: %{name}/archive/refs/tags/v%{version}.tar.gz
Child return code was: 0
ENTER ['do_with_status'](['bash', '--login', '-c', '/usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/ollama.spec'], chrootPath='/var/lib/mock/openeuler-24.03_LTS-x86_64-1721047928.254163/root'env={'TERM': 'vt100', 'SHELL': '/bin/bash', 'HOME': '/builddir', 'HOSTNAME': 'mock', 'PATH': '/usr/bin:/bin:/usr/sbin:/sbin', 'PROMPT_COMMAND': 'printf "\\033]0;\\007"', 'PS1': ' \\s-\\v\\$ ', 'LANG': 'C.UTF-8'}shell=Falselogger=timeout=0uid=1000gid=135user='mockbuild'nspawn_args=[]unshare_net=FalseprintOutput=True)
Executing command: ['bash', '--login', '-c', '/usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/ollama.spec'] with env {'TERM': 'vt100', 'SHELL': '/bin/bash', 'HOME': '/builddir', 'HOSTNAME': 'mock', 'PATH': '/usr/bin:/bin:/usr/sbin:/sbin', 'PROMPT_COMMAND': 'printf "\\033]0;\\007"', 'PS1': ' \\s-\\v\\$ ', 'LANG': 'C.UTF-8'} and shell False
warning: Macro expanded in comment on line 10: %{name}/archive/refs/tags/v%{version}.tar.gz
Building target platforms: x86_64
Building for target x86_64
Executing(%prep): /bin/sh -e /var/tmp/rpm-tmp.C1wx35
+ umask 022
+ cd /builddir/build/BUILD
+ cd /builddir/build/BUILD
+ rm -rf ollama-0.2.5
+ /usr/bin/mkdir -p ollama-0.2.5
+ cd ollama-0.2.5
+ /usr/bin/chmod -Rf a+rX,u+w,g-w,o-w .
+ git clone https://gitee.com/mirrors/ollama.git
Cloning into 'ollama'...
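The %prep stage above clones ollama from a Gitee mirror rather than unpacking the tarball named in the spec comment, and the %build stage then redirects the llama.cpp submodule to a mirror with `sed`. An offline sketch of that redirection, using a synthetic `.gitmodules` stand-in (the file contents here are assumed, not taken from the real checkout):

```shell
# Offline sketch of the mirror redirection seen in this build.
# The .gitmodules content is a synthetic stand-in.
demo=/tmp/gitmodules-demo
mkdir -p "$demo"
cat > "$demo/.gitmodules" <<'EOF'
[submodule "llama.cpp"]
	path = llm/llama.cpp
	url = https://github.com/ggerganov/llama.cpp.git
EOF
# The same substitution the %build step runs on the real checkout:
sed -i 's|https://github.com/ggerganov/llama.cpp.git|https://gitee.com/cxunmz/llama.cpp.git|' "$demo/.gitmodules"
grep 'url = ' "$demo/.gitmodules"
```

After the rewrite, `git submodule init`/`update` (run later in the log) fetch llama.cpp from the mirror instead of GitHub, which is what makes the build work behind the proxy-less chroot.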
+ RPM_EC=0 ++ jobs -p + exit 0 Executing(%build): /bin/sh -e /var/tmp/rpm-tmp.3Te3CW + umask 022 + cd /builddir/build/BUILD + cd ollama-0.2.5 + cd ollama + sed -i 's|https://github.com/ggerganov/llama.cpp.git|https://gitee.com/cxunmz/llama.cpp.git|' .gitmodules + export GOPROXY=https://goproxy.cn + GOPROXY=https://goproxy.cn + go generate ./... go: downloading go1.22.0 (linux/amd64) go: downloading github.com/google/uuid v1.1.2 go: downloading golang.org/x/crypto v0.23.0 go: downloading github.com/containerd/console v1.0.3 go: downloading github.com/mattn/go-runewidth v0.0.14 go: downloading github.com/olekukonko/tablewriter v0.0.5 go: downloading github.com/spf13/cobra v1.7.0 go: downloading golang.org/x/term v0.20.0 go: downloading google.golang.org/protobuf v1.34.1 go: downloading github.com/d4l3k/go-bfloat16 v0.0.0-20211005043715-690c3bdd05f1 go: downloading github.com/nlpodyssey/gopickle v0.3.0 go: downloading github.com/pdevine/tensor v0.0.0-20240510204454-f88f4562727c go: downloading github.com/x448/float16 v0.8.4 go: downloading golang.org/x/exp v0.0.0-20231110203233-9a3e6036ecaa go: downloading golang.org/x/sys v0.20.0 go: downloading github.com/gin-gonic/gin v1.10.0 go: downloading golang.org/x/text v0.15.0 go: downloading golang.org/x/sync v0.3.0 go: downloading github.com/agnivade/levenshtein v1.1.1 go: downloading github.com/emirpasic/gods v1.18.1 go: downloading github.com/gin-contrib/cors v1.7.2 go: downloading github.com/rivo/uniseg v0.2.0 go: downloading github.com/spf13/pflag v1.0.5 go: downloading github.com/pkg/errors v0.9.1 go: downloading github.com/apache/arrow/go/arrow v0.0.0-20211112161151-bc219186db40 go: downloading github.com/chewxy/hm v1.0.0 go: downloading github.com/chewxy/math32 v1.10.1 go: downloading github.com/google/flatbuffers v24.3.25+incompatible go: downloading go4.org/unsafe/assume-no-moving-gc v0.0.0-20231121144256-b99613f794b6 go: downloading gonum.org/v1/gonum v0.15.0 go: downloading gorgonia.org/vecf32 v0.9.0 go: 
downloading gorgonia.org/vecf64 v0.9.0 go: downloading github.com/gin-contrib/sse v0.1.0 go: downloading github.com/mattn/go-isatty v0.0.20 go: downloading golang.org/x/net v0.25.0 go: downloading golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1 go: downloading github.com/gogo/protobuf v1.3.2 go: downloading github.com/golang/protobuf v1.5.4 go: downloading github.com/xtgo/set v1.0.0 go: downloading github.com/go-playground/validator/v10 v10.20.0 go: downloading github.com/pelletier/go-toml/v2 v2.2.2 go: downloading github.com/ugorji/go/codec v1.2.12 go: downloading gopkg.in/yaml.v3 v3.0.1 go: downloading github.com/gabriel-vasile/mimetype v1.4.3 go: downloading github.com/go-playground/universal-translator v0.18.1 go: downloading github.com/leodido/go-urn v1.4.0 go: downloading github.com/go-playground/locales v0.14.1 + set -o pipefail Starting linux generate script + echo 'Starting linux generate script' + '[' -z '' ']' + '[' -x /usr/local/cuda/bin/nvcc ']' ++ command -v nvcc + export CUDACXX= + CUDACXX= + COMMON_CMAKE_DEFS='-DCMAKE_POSITION_INDEPENDENT_CODE=on -DLLAMA_NATIVE=off -DLLAMA_AVX=on -DLLAMA_AVX2=off -DLLAMA_AVX512=off -DLLAMA_FMA=off -DLLAMA_F16C=off -DLLAMA_OPENMP=off' ++ dirname ./gen_linux.sh + source ./gen_common.sh + init_vars + case "${GOARCH}" in + ARCH=x86_64 + LLAMACPP_DIR=../llama.cpp + CMAKE_DEFS= + CMAKE_TARGETS='--target ollama_llama_server' + grep -- -g + echo '' + CMAKE_DEFS='-DCMAKE_BUILD_TYPE=Release -DLLAMA_SERVER_VERBOSE=off ' + case $(uname -s) in ++ uname -s + LIB_EXT=so + WHOLE_ARCHIVE=-Wl,--whole-archive + NO_WHOLE_ARCHIVE=-Wl,--no-whole-archive + GCC_ARCH= + '[' -z '' ']' + CMAKE_CUDA_ARCHITECTURES='50;52;61;70;75;80' + git_module_setup + '[' -n '' ']' + '[' -d ../llama.cpp/gguf ']' + git submodule init Submodule 'llama.cpp' (https://gitee.com/cxunmz/llama.cpp.git) registered for path '../llama.cpp' + git submodule update --force ../llama.cpp Cloning into '/builddir/build/BUILD/ollama-0.2.5/ollama/llm/llama.cpp'... 
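The trace above shows `gen_linux.sh` probing for `nvcc` and assembling its CMake flag strings when none is found. A condensed sketch of that `init_vars` logic (flag values copied from the trace; the control flow is assumed from the xtrace, not the script source):

```shell
# Condensed sketch of the init_vars logic traced above (structure
# assumed from the xtrace; flag values copied from it). With no nvcc
# on PATH, CUDACXX stays empty and a CPU-only flag set is used.
if [ -x /usr/local/cuda/bin/nvcc ]; then
    CUDACXX=/usr/local/cuda/bin/nvcc
else
    CUDACXX=$(command -v nvcc || true)
fi
export CUDACXX
COMMON_CMAKE_DEFS="-DCMAKE_POSITION_INDEPENDENT_CODE=on -DLLAMA_NATIVE=off -DLLAMA_AVX=on -DLLAMA_AVX2=off -DLLAMA_AVX512=off -DLLAMA_FMA=off -DLLAMA_F16C=off -DLLAMA_OPENMP=off"
CMAKE_DEFS="-DCMAKE_BUILD_TYPE=Release -DLLAMA_SERVER_VERBOSE=off"
CMAKE_TARGETS="--target ollama_llama_server"
echo "CUDACXX='${CUDACXX}'"
```

Each runner variant (cpu, cpu_avx, cpu_avx2) later overrides the AVX/FMA/F16C switches on top of this base, as the repeated `init_vars` blocks in the log show.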
From https://gitee.com/cxunmz/llama.cpp * branch 7c26775adb579e92b59c82e8084c07a1d0f75e9c -> FETCH_HEAD Submodule path '../llama.cpp': checked out '7c26775adb579e92b59c82e8084c07a1d0f75e9c' + apply_patches + grep ollama ../llama.cpp/CMakeLists.txt + echo 'add_subdirectory(../ext_server ext_server) # ollama' ++ ls -A ../patches/01-load-progress.diff ../patches/02-clip-log.diff ../patches/03-load_exception.diff ../patches/04-metal.diff ../patches/05-default-pretokenizer.diff ../patches/06-qwen2.diff + '[' -n '../patches/01-load-progress.diff ../patches/02-clip-log.diff ../patches/03-load_exception.diff ../patches/04-metal.diff ../patches/05-default-pretokenizer.diff ../patches/06-qwen2.diff' ']' + for patch in ../patches/*.diff ++ grep '^+++ ' ../patches/01-load-progress.diff ++ cut -f2 '-d ' ++ cut -f2- -d/ + for file in $(grep "^+++ " ${patch} | cut -f2 -d' ' | cut -f2- -d/) + cd ../llama.cpp + git checkout common/common.cpp Updated 0 paths from the index + for file in $(grep "^+++ " ${patch} | cut -f2 -d' ' | cut -f2- -d/) + cd ../llama.cpp + git checkout common/common.h Updated 0 paths from the index + for patch in ../patches/*.diff ++ grep '^+++ ' ../patches/02-clip-log.diff ++ cut -f2 '-d ' ++ cut -f2- -d/ + for file in $(grep "^+++ " ${patch} | cut -f2 -d' ' | cut -f2- -d/) + cd ../llama.cpp + git checkout examples/llava/clip.cpp Updated 0 paths from the index + for patch in ../patches/*.diff ++ grep '^+++ ' ../patches/03-load_exception.diff ++ cut -f2 '-d ' ++ cut -f2- -d/ + for file in $(grep "^+++ " ${patch} | cut -f2 -d' ' | cut -f2- -d/) + cd ../llama.cpp + git checkout llama.cpp Updated 0 paths from the index + for patch in ../patches/*.diff ++ grep '^+++ ' ../patches/04-metal.diff ++ cut -f2 '-d ' ++ cut -f2- -d/ + for file in $(grep "^+++ " ${patch} | cut -f2 -d' ' | cut -f2- -d/) + cd ../llama.cpp + git checkout ggml-metal.m Updated 0 paths from the index + for patch in ../patches/*.diff ++ grep '^+++ ' ../patches/05-default-pretokenizer.diff ++ cut 
-f2 '-d ' ++ cut -f2- -d/ + for file in $(grep "^+++ " ${patch} | cut -f2 -d' ' | cut -f2- -d/) + cd ../llama.cpp + git checkout llama.cpp Updated 0 paths from the index + for patch in ../patches/*.diff ++ grep '^+++ ' ../patches/06-qwen2.diff ++ cut -f2 '-d ' ++ cut -f2- -d/ + for file in $(grep "^+++ " ${patch} | cut -f2 -d' ' | cut -f2- -d/) + cd ../llama.cpp + git checkout llama.cpp Updated 0 paths from the index + for patch in ../patches/*.diff + cd ../llama.cpp + git apply ../patches/01-load-progress.diff + for patch in ../patches/*.diff + cd ../llama.cpp + git apply ../patches/02-clip-log.diff + for patch in ../patches/*.diff + cd ../llama.cpp + git apply ../patches/03-load_exception.diff + for patch in ../patches/*.diff + cd ../llama.cpp + git apply ../patches/04-metal.diff + for patch in ../patches/*.diff + cd ../llama.cpp + git apply ../patches/05-default-pretokenizer.diff + for patch in ../patches/*.diff + cd ../llama.cpp + git apply ../patches/06-qwen2.diff + init_vars + case "${GOARCH}" in + ARCH=x86_64 + LLAMACPP_DIR=../llama.cpp + CMAKE_DEFS= + CMAKE_TARGETS='--target ollama_llama_server' + echo '' + grep -- -g + CMAKE_DEFS='-DCMAKE_BUILD_TYPE=Release -DLLAMA_SERVER_VERBOSE=off ' + case $(uname -s) in ++ uname -s + LIB_EXT=so + WHOLE_ARCHIVE=-Wl,--whole-archive + NO_WHOLE_ARCHIVE=-Wl,--no-whole-archive + GCC_ARCH= + '[' -z '50;52;61;70;75;80' ']' + '[' -z '' -o '' = static ']' + init_vars + case "${GOARCH}" in + ARCH=x86_64 + LLAMACPP_DIR=../llama.cpp + CMAKE_DEFS= + CMAKE_TARGETS='--target ollama_llama_server' + echo '' + grep -- -g + CMAKE_DEFS='-DCMAKE_BUILD_TYPE=Release -DLLAMA_SERVER_VERBOSE=off ' + case $(uname -s) in ++ uname -s + LIB_EXT=so + WHOLE_ARCHIVE=-Wl,--whole-archive + NO_WHOLE_ARCHIVE=-Wl,--no-whole-archive + GCC_ARCH= Building static library + '[' -z '50;52;61;70;75;80' ']' + CMAKE_TARGETS='--target llama --target ggml' + CMAKE_DEFS='-DBUILD_SHARED_LIBS=off -DLLAMA_NATIVE=off -DLLAMA_AVX=off -DLLAMA_AVX2=off -DLLAMA_AVX512=off 
-DLLAMA_FMA=off -DLLAMA_F16C=off -DLLAMA_OPENMP=off -DCMAKE_BUILD_TYPE=Release -DLLAMA_SERVER_VERBOSE=off ' + BUILD_DIR=../build/linux/x86_64_static + echo 'Building static library' + build + cmake -S ../llama.cpp -B ../build/linux/x86_64_static -DBUILD_SHARED_LIBS=off -DLLAMA_NATIVE=off -DLLAMA_AVX=off -DLLAMA_AVX2=off -DLLAMA_AVX512=off -DLLAMA_FMA=off -DLLAMA_F16C=off -DLLAMA_OPENMP=off -DCMAKE_BUILD_TYPE=Release -DLLAMA_SERVER_VERBOSE=off -- The C compiler identification is GNU 12.3.1 -- The CXX compiler identification is GNU 12.3.1 -- Detecting C compiler ABI info -- Detecting C compiler ABI info - done -- Check for working C compiler: /usr/bin/cc - skipped -- Detecting C compile features -- Detecting C compile features - done -- Detecting CXX compiler ABI info -- Detecting CXX compiler ABI info - done -- Check for working CXX compiler: /usr/bin/c++ - skipped -- Detecting CXX compile features -- Detecting CXX compile features - done -- Found Git: /usr/bin/git (found version "2.43.0") -- Performing Test CMAKE_HAVE_LIBC_PTHREAD -- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success -- Found Threads: TRUE -- Warning: ccache not found - consider installing it for faster compilation or disable this warning with LLAMA_CCACHE=OFF -- CMAKE_SYSTEM_PROCESSOR: x86_64 -- x86 detected -- Configuring done (0.6s) -- Generating done (0.1s) -- Build files have been written to: /builddir/build/BUILD/ollama-0.2.5/ollama/llm/build/linux/x86_64_static + cmake --build ../build/linux/x86_64_static --target llama --target ggml -j8 [ 33%] Building C object CMakeFiles/ggml.dir/ggml-quants.c.o [ 33%] Building C object CMakeFiles/ggml.dir/ggml.c.o [ 33%] Building C object CMakeFiles/ggml.dir/ggml-backend.c.o [ 33%] Building C object CMakeFiles/ggml.dir/ggml-alloc.c.o [ 50%] Building CXX object CMakeFiles/ggml.dir/sgemm.cpp.o /builddir/build/BUILD/ollama-0.2.5/ollama/llm/llama.cpp/ggml.c: In function 'ggml_vec_mad_f16': /builddir/build/BUILD/ollama-0.2.5/ollama/llm/llama.cpp/ggml.c:2044:45: 
warning: passing argument 1 of '__sse_f16x4_load' discards 'const' qualifier from pointer target type [-Wdiscarded-qualifiers] 2044 | ax[j] = GGML_F16_VEC_LOAD(x + i + j*GGML_F16_EPR, j); | ^ /builddir/build/BUILD/ollama-0.2.5/ollama/llm/llama.cpp/ggml.c:1504:50: note: in definition of macro 'GGML_F32Cx4_LOAD' 1504 | #define GGML_F32Cx4_LOAD(x) __sse_f16x4_load(x) | ^ /builddir/build/BUILD/ollama-0.2.5/ollama/llm/llama.cpp/ggml.c:2044:21: note: in expansion of macro 'GGML_F16_VEC_LOAD' 2044 | ax[j] = GGML_F16_VEC_LOAD(x + i + j*GGML_F16_EPR, j); | ^~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.2.5/ollama/llm/llama.cpp/ggml.c:1479:52: note: expected 'ggml_fp16_t *' {aka 'short unsigned int *'} but argument is of type 'const ggml_fp16_t *' {aka 'const short unsigned int *'} 1479 | static inline __m128 __sse_f16x4_load(ggml_fp16_t *x) { | ~~~~~~~~~~~~~^ [ 50%] Built target ggml [ 66%] Building CXX object CMakeFiles/llama.dir/llama.cpp.o [ 83%] Building CXX object CMakeFiles/llama.dir/unicode.cpp.o [ 83%] Building CXX object CMakeFiles/llama.dir/unicode-data.cpp.o [100%] Linking CXX static library libllama.a [100%] Built target llama [100%] Built target ggml + init_vars + case "${GOARCH}" in + ARCH=x86_64 + LLAMACPP_DIR=../llama.cpp + CMAKE_DEFS= + CMAKE_TARGETS='--target ollama_llama_server' + echo '' + grep -- -g + CMAKE_DEFS='-DCMAKE_BUILD_TYPE=Release -DLLAMA_SERVER_VERBOSE=off ' + case $(uname -s) in ++ uname -s + LIB_EXT=so + WHOLE_ARCHIVE=-Wl,--whole-archive + NO_WHOLE_ARCHIVE=-Wl,--no-whole-archive + GCC_ARCH= + '[' -z '50;52;61;70;75;80' ']' + '[' -z '' ']' + '[' -n '' ']' + COMMON_CPU_DEFS='-DCMAKE_POSITION_INDEPENDENT_CODE=on -DLLAMA_NATIVE=off -DLLAMA_OPENMP=off' + '[' -z '' -o '' = cpu ']' + init_vars + case "${GOARCH}" in + ARCH=x86_64 + LLAMACPP_DIR=../llama.cpp + CMAKE_DEFS= + CMAKE_TARGETS='--target ollama_llama_server' + echo '' + grep -- -g + CMAKE_DEFS='-DCMAKE_BUILD_TYPE=Release -DLLAMA_SERVER_VERBOSE=off ' + case $(uname -s) in ++ uname -s 
Building LCD CPU + LIB_EXT=so + WHOLE_ARCHIVE=-Wl,--whole-archive + NO_WHOLE_ARCHIVE=-Wl,--no-whole-archive + GCC_ARCH= + '[' -z '50;52;61;70;75;80' ']' + CMAKE_DEFS='-DCMAKE_POSITION_INDEPENDENT_CODE=on -DLLAMA_NATIVE=off -DLLAMA_OPENMP=off -DLLAMA_AVX=off -DLLAMA_AVX2=off -DLLAMA_AVX512=off -DLLAMA_FMA=off -DLLAMA_F16C=off -DCMAKE_BUILD_TYPE=Release -DLLAMA_SERVER_VERBOSE=off ' + BUILD_DIR=../build/linux/x86_64/cpu + echo 'Building LCD CPU' + build + cmake -S ../llama.cpp -B ../build/linux/x86_64/cpu -DCMAKE_POSITION_INDEPENDENT_CODE=on -DLLAMA_NATIVE=off -DLLAMA_OPENMP=off -DLLAMA_AVX=off -DLLAMA_AVX2=off -DLLAMA_AVX512=off -DLLAMA_FMA=off -DLLAMA_F16C=off -DCMAKE_BUILD_TYPE=Release -DLLAMA_SERVER_VERBOSE=off -- The C compiler identification is GNU 12.3.1 -- The CXX compiler identification is GNU 12.3.1 -- Detecting C compiler ABI info -- Detecting C compiler ABI info - done -- Check for working C compiler: /usr/bin/cc - skipped -- Detecting C compile features -- Detecting C compile features - done -- Detecting CXX compiler ABI info -- Detecting CXX compiler ABI info - done -- Check for working CXX compiler: /usr/bin/c++ - skipped -- Detecting CXX compile features -- Detecting CXX compile features - done -- Found Git: /usr/bin/git (found version "2.43.0") -- Performing Test CMAKE_HAVE_LIBC_PTHREAD -- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success -- Found Threads: TRUE -- Warning: ccache not found - consider installing it for faster compilation or disable this warning with LLAMA_CCACHE=OFF -- CMAKE_SYSTEM_PROCESSOR: x86_64 -- x86 detected -- Configuring done (0.6s) -- Generating done (0.1s) -- Build files have been written to: /builddir/build/BUILD/ollama-0.2.5/ollama/llm/build/linux/x86_64/cpu + cmake --build ../build/linux/x86_64/cpu --target ollama_llama_server -j8 [ 0%] Generating build details from Git [ 0%] Building C object CMakeFiles/ggml.dir/ggml.c.o [ 6%] Building C object CMakeFiles/ggml.dir/ggml-alloc.c.o [ 6%] Building C object 
CMakeFiles/ggml.dir/ggml-backend.c.o -- Found Git: /usr/bin/git (found version "2.43.0") [ 13%] Building C object CMakeFiles/ggml.dir/ggml-quants.c.o [ 20%] Building CXX object CMakeFiles/ggml.dir/sgemm.cpp.o [ 26%] Building CXX object common/CMakeFiles/build_info.dir/build-info.cpp.o [ 26%] Built target build_info /builddir/build/BUILD/ollama-0.2.5/ollama/llm/llama.cpp/ggml.c: In function 'ggml_vec_mad_f16': /builddir/build/BUILD/ollama-0.2.5/ollama/llm/llama.cpp/ggml.c:2044:45: warning: passing argument 1 of '__sse_f16x4_load' discards 'const' qualifier from pointer target type [-Wdiscarded-qualifiers] 2044 | ax[j] = GGML_F16_VEC_LOAD(x + i + j*GGML_F16_EPR, j); | ^ /builddir/build/BUILD/ollama-0.2.5/ollama/llm/llama.cpp/ggml.c:1504:50: note: in definition of macro 'GGML_F32Cx4_LOAD' 1504 | #define GGML_F32Cx4_LOAD(x) __sse_f16x4_load(x) | ^ /builddir/build/BUILD/ollama-0.2.5/ollama/llm/llama.cpp/ggml.c:2044:21: note: in expansion of macro 'GGML_F16_VEC_LOAD' 2044 | ax[j] = GGML_F16_VEC_LOAD(x + i + j*GGML_F16_EPR, j); | ^~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.2.5/ollama/llm/llama.cpp/ggml.c:1479:52: note: expected 'ggml_fp16_t *' {aka 'short unsigned int *'} but argument is of type 'const ggml_fp16_t *' {aka 'const short unsigned int *'} 1479 | static inline __m128 __sse_f16x4_load(ggml_fp16_t *x) { | ~~~~~~~~~~~~~^ [ 26%] Built target ggml [ 33%] Building CXX object CMakeFiles/llama.dir/llama.cpp.o [ 40%] Building CXX object CMakeFiles/llama.dir/unicode-data.cpp.o [ 40%] Building CXX object CMakeFiles/llama.dir/unicode.cpp.o [ 46%] Linking CXX static library libllama.a [ 46%] Built target llama [ 53%] Building CXX object examples/llava/CMakeFiles/llava.dir/llava.cpp.o [ 53%] Building CXX object common/CMakeFiles/common.dir/common.cpp.o [ 53%] Building CXX object examples/llava/CMakeFiles/llava.dir/clip.cpp.o [ 60%] Building CXX object common/CMakeFiles/common.dir/sampling.cpp.o [ 66%] Building CXX object common/CMakeFiles/common.dir/console.cpp.o [ 
66%] Building CXX object common/CMakeFiles/common.dir/grammar-parser.cpp.o [ 73%] Building CXX object common/CMakeFiles/common.dir/json-schema-to-grammar.cpp.o [ 73%] Building CXX object common/CMakeFiles/common.dir/train.cpp.o [ 80%] Building CXX object common/CMakeFiles/common.dir/ngram-cache.cpp.o [ 80%] Built target llava [ 86%] Linking CXX static library libcommon.a [ 86%] Built target common [ 93%] Building CXX object ext_server/CMakeFiles/ollama_llama_server.dir/server.cpp.o [100%] Linking CXX executable ../bin/ollama_llama_server [100%] Built target ollama_llama_server + compress + echo 'Compressing payloads to reduce overall binary size...' + pids= Compressing payloads to reduce overall binary size... + rm -rf '../build/linux/x86_64/cpu/bin/*.gz' + for f in ${BUILD_DIR}/bin/* + pids+=' 3988' + '[' -d ../build/linux/x86_64/cpu/lib ']' + echo + for pid in ${pids} + wait 3988 + gzip -n --best -f ../build/linux/x86_64/cpu/bin/ollama_llama_server Finished compression + echo 'Finished compression' + '[' x86_64 == x86_64 ']' + '[' -z '' -o '' = cpu_avx ']' + init_vars + case "${GOARCH}" in + ARCH=x86_64 + LLAMACPP_DIR=../llama.cpp + CMAKE_DEFS= + CMAKE_TARGETS='--target ollama_llama_server' + echo '' + grep -- -g + CMAKE_DEFS='-DCMAKE_BUILD_TYPE=Release -DLLAMA_SERVER_VERBOSE=off ' + case $(uname -s) in ++ uname -s + LIB_EXT=so + WHOLE_ARCHIVE=-Wl,--whole-archive + NO_WHOLE_ARCHIVE=-Wl,--no-whole-archive + GCC_ARCH= Building AVX CPU + '[' -z '50;52;61;70;75;80' ']' + CMAKE_DEFS='-DCMAKE_POSITION_INDEPENDENT_CODE=on -DLLAMA_NATIVE=off -DLLAMA_OPENMP=off -DLLAMA_AVX=on -DLLAMA_AVX2=off -DLLAMA_AVX512=off -DLLAMA_FMA=off -DLLAMA_F16C=off -DCMAKE_BUILD_TYPE=Release -DLLAMA_SERVER_VERBOSE=off ' + BUILD_DIR=../build/linux/x86_64/cpu_avx + echo 'Building AVX CPU' + build + cmake -S ../llama.cpp -B ../build/linux/x86_64/cpu_avx -DCMAKE_POSITION_INDEPENDENT_CODE=on -DLLAMA_NATIVE=off -DLLAMA_OPENMP=off -DLLAMA_AVX=on -DLLAMA_AVX2=off -DLLAMA_AVX512=off -DLLAMA_FMA=off 
-DLLAMA_F16C=off -DCMAKE_BUILD_TYPE=Release -DLLAMA_SERVER_VERBOSE=off -- The C compiler identification is GNU 12.3.1 -- The CXX compiler identification is GNU 12.3.1 -- Detecting C compiler ABI info -- Detecting C compiler ABI info - done -- Check for working C compiler: /usr/bin/cc - skipped -- Detecting C compile features -- Detecting C compile features - done -- Detecting CXX compiler ABI info -- Detecting CXX compiler ABI info - done -- Check for working CXX compiler: /usr/bin/c++ - skipped -- Detecting CXX compile features -- Detecting CXX compile features - done -- Found Git: /usr/bin/git (found version "2.43.0") -- Performing Test CMAKE_HAVE_LIBC_PTHREAD -- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success -- Found Threads: TRUE -- Warning: ccache not found - consider installing it for faster compilation or disable this warning with LLAMA_CCACHE=OFF -- CMAKE_SYSTEM_PROCESSOR: x86_64 -- x86 detected -- Configuring done (0.6s) -- Generating done (0.1s) -- Build files have been written to: /builddir/build/BUILD/ollama-0.2.5/ollama/llm/build/linux/x86_64/cpu_avx + cmake --build ../build/linux/x86_64/cpu_avx --target ollama_llama_server -j8 [ 6%] Building CXX object common/CMakeFiles/build_info.dir/build-info.cpp.o [ 6%] Building C object CMakeFiles/ggml.dir/ggml-backend.c.o [ 6%] Building C object CMakeFiles/ggml.dir/ggml.c.o [ 13%] Building C object CMakeFiles/ggml.dir/ggml-alloc.c.o [ 20%] Building C object CMakeFiles/ggml.dir/ggml-quants.c.o [ 26%] Building CXX object CMakeFiles/ggml.dir/sgemm.cpp.o [ 26%] Built target build_info /builddir/build/BUILD/ollama-0.2.5/ollama/llm/llama.cpp/ggml.c: In function 'ggml_vec_mad_f16': /builddir/build/BUILD/ollama-0.2.5/ollama/llm/llama.cpp/ggml.c:2044:45: warning: passing argument 1 of '__avx_f32cx8_load' discards 'const' qualifier from pointer target type [-Wdiscarded-qualifiers] 2044 | ax[j] = GGML_F16_VEC_LOAD(x + i + j*GGML_F16_EPR, j); | ^ /builddir/build/BUILD/ollama-0.2.5/ollama/llm/llama.cpp/ggml.c:1224:51: 
note: in definition of macro 'GGML_F32Cx8_LOAD' 1224 | #define GGML_F32Cx8_LOAD(x) __avx_f32cx8_load(x) | ^ /builddir/build/BUILD/ollama-0.2.5/ollama/llm/llama.cpp/ggml.c:2044:21: note: in expansion of macro 'GGML_F16_VEC_LOAD' 2044 | ax[j] = GGML_F16_VEC_LOAD(x + i + j*GGML_F16_EPR, j); | ^~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.2.5/ollama/llm/llama.cpp/ggml.c:1207:53: note: expected 'ggml_fp16_t *' {aka 'short unsigned int *'} but argument is of type 'const ggml_fp16_t *' {aka 'const short unsigned int *'} 1207 | static inline __m256 __avx_f32cx8_load(ggml_fp16_t *x) { | ~~~~~~~~~~~~~^ [ 26%] Built target ggml [ 33%] Building CXX object CMakeFiles/llama.dir/unicode.cpp.o [ 40%] Building CXX object CMakeFiles/llama.dir/unicode-data.cpp.o [ 40%] Building CXX object CMakeFiles/llama.dir/llama.cpp.o [ 46%] Linking CXX static library libllama.a [ 46%] Built target llama [ 46%] Building CXX object examples/llava/CMakeFiles/llava.dir/llava.cpp.o [ 53%] Building CXX object examples/llava/CMakeFiles/llava.dir/clip.cpp.o [ 53%] Building CXX object common/CMakeFiles/common.dir/common.cpp.o [ 60%] Building CXX object common/CMakeFiles/common.dir/sampling.cpp.o [ 66%] Building CXX object common/CMakeFiles/common.dir/console.cpp.o [ 66%] Building CXX object common/CMakeFiles/common.dir/grammar-parser.cpp.o [ 73%] Building CXX object common/CMakeFiles/common.dir/json-schema-to-grammar.cpp.o [ 73%] Building CXX object common/CMakeFiles/common.dir/train.cpp.o [ 80%] Building CXX object common/CMakeFiles/common.dir/ngram-cache.cpp.o [ 80%] Built target llava [ 86%] Linking CXX static library libcommon.a [ 86%] Built target common [ 93%] Building CXX object ext_server/CMakeFiles/ollama_llama_server.dir/server.cpp.o [100%] Linking CXX executable ../bin/ollama_llama_server [100%] Built target ollama_llama_server + compress + echo 'Compressing payloads to reduce overall binary size...' Compressing payloads to reduce overall binary size... 
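The `compress` step traced around here gzips each runner binary in the background, collects the PIDs, and waits on them; `gzip -n` omits the original name and timestamp so the `.gz` payload stays byte-identical across rebuilds. A minimal reproduction with a made-up demo directory and payload:

```shell
# Minimal reproduction of the compress step. The demo directory and
# payload are made up; gzip -n (no name/timestamp) is what keeps the
# compressed payload reproducible across rebuilds.
demo=/tmp/compress-demo/bin
mkdir -p "$demo"
printf 'fake runner payload' > "$demo/ollama_llama_server"
rm -rf "$demo"/*.gz
pids=
for f in "$demo"/*; do
    gzip -n --best -f "$f" &     # compress in the background
    pids="$pids $!"              # remember the worker PID
done
for pid in $pids; do
    wait "$pid"                  # block until every gzip finishes
done
echo 'Finished compression'
```

Note that `gzip` replaces each file with its `.gz` counterpart, which is why the log's later packaging steps only see `ollama_llama_server.gz` under each runner's `bin/`.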
+ pids= + rm -rf '../build/linux/x86_64/cpu_avx/bin/*.gz' + for f in ${BUILD_DIR}/bin/* + pids+=' 4196' + '[' -d ../build/linux/x86_64/cpu_avx/lib ']' + echo + for pid in ${pids} + wait 4196 + gzip -n --best -f ../build/linux/x86_64/cpu_avx/bin/ollama_llama_server Finished compression + echo 'Finished compression' + '[' -z '' -o '' = cpu_avx2 ']' + init_vars + case "${GOARCH}" in + ARCH=x86_64 + LLAMACPP_DIR=../llama.cpp + CMAKE_DEFS= + CMAKE_TARGETS='--target ollama_llama_server' + echo '' + grep -- -g + CMAKE_DEFS='-DCMAKE_BUILD_TYPE=Release -DLLAMA_SERVER_VERBOSE=off ' + case $(uname -s) in ++ uname -s + LIB_EXT=so + WHOLE_ARCHIVE=-Wl,--whole-archive + NO_WHOLE_ARCHIVE=-Wl,--no-whole-archive Building AVX2 CPU + GCC_ARCH= + '[' -z '50;52;61;70;75;80' ']' + CMAKE_DEFS='-DCMAKE_POSITION_INDEPENDENT_CODE=on -DLLAMA_NATIVE=off -DLLAMA_OPENMP=off -DLLAMA_AVX=on -DLLAMA_AVX2=on -DLLAMA_AVX512=off -DLLAMA_FMA=on -DLLAMA_F16C=on -DCMAKE_BUILD_TYPE=Release -DLLAMA_SERVER_VERBOSE=off ' + BUILD_DIR=../build/linux/x86_64/cpu_avx2 + echo 'Building AVX2 CPU' + build + cmake -S ../llama.cpp -B ../build/linux/x86_64/cpu_avx2 -DCMAKE_POSITION_INDEPENDENT_CODE=on -DLLAMA_NATIVE=off -DLLAMA_OPENMP=off -DLLAMA_AVX=on -DLLAMA_AVX2=on -DLLAMA_AVX512=off -DLLAMA_FMA=on -DLLAMA_F16C=on -DCMAKE_BUILD_TYPE=Release -DLLAMA_SERVER_VERBOSE=off -- The C compiler identification is GNU 12.3.1 -- The CXX compiler identification is GNU 12.3.1 -- Detecting C compiler ABI info -- Detecting C compiler ABI info - done -- Check for working C compiler: /usr/bin/cc - skipped -- Detecting C compile features -- Detecting C compile features - done -- Detecting CXX compiler ABI info -- Detecting CXX compiler ABI info - done -- Check for working CXX compiler: /usr/bin/c++ - skipped -- Detecting CXX compile features -- Detecting CXX compile features - done -- Found Git: /usr/bin/git (found version "2.43.0") -- Performing Test CMAKE_HAVE_LIBC_PTHREAD -- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success -- 
Found Threads: TRUE -- Warning: ccache not found - consider installing it for faster compilation or disable this warning with LLAMA_CCACHE=OFF -- CMAKE_SYSTEM_PROCESSOR: x86_64 -- x86 detected -- Configuring done (0.6s) -- Generating done (0.1s) -- Build files have been written to: /builddir/build/BUILD/ollama-0.2.5/ollama/llm/build/linux/x86_64/cpu_avx2 + cmake --build ../build/linux/x86_64/cpu_avx2 --target ollama_llama_server -j8 [ 6%] Building C object CMakeFiles/ggml.dir/ggml-alloc.c.o [ 6%] Building C object CMakeFiles/ggml.dir/ggml-backend.c.o [ 13%] Building C object CMakeFiles/ggml.dir/ggml.c.o [ 13%] Building CXX object common/CMakeFiles/build_info.dir/build-info.cpp.o [ 20%] Building C object CMakeFiles/ggml.dir/ggml-quants.c.o [ 26%] Building CXX object CMakeFiles/ggml.dir/sgemm.cpp.o [ 26%] Built target build_info [ 26%] Built target ggml [ 33%] Building CXX object CMakeFiles/llama.dir/llama.cpp.o [ 40%] Building CXX object CMakeFiles/llama.dir/unicode-data.cpp.o [ 40%] Building CXX object CMakeFiles/llama.dir/unicode.cpp.o [ 46%] Linking CXX static library libllama.a [ 46%] Built target llama [ 53%] Building CXX object examples/llava/CMakeFiles/llava.dir/clip.cpp.o [ 53%] Building CXX object common/CMakeFiles/common.dir/common.cpp.o [ 53%] Building CXX object examples/llava/CMakeFiles/llava.dir/llava.cpp.o [ 60%] Building CXX object common/CMakeFiles/common.dir/sampling.cpp.o [ 66%] Building CXX object common/CMakeFiles/common.dir/console.cpp.o [ 66%] Building CXX object common/CMakeFiles/common.dir/grammar-parser.cpp.o [ 73%] Building CXX object common/CMakeFiles/common.dir/json-schema-to-grammar.cpp.o [ 73%] Building CXX object common/CMakeFiles/common.dir/train.cpp.o [ 80%] Building CXX object common/CMakeFiles/common.dir/ngram-cache.cpp.o [ 80%] Built target llava [ 86%] Linking CXX static library libcommon.a [ 86%] Built target common [ 93%] Building CXX object ext_server/CMakeFiles/ollama_llama_server.dir/server.cpp.o [100%] Linking CXX 
executable ../bin/ollama_llama_server [100%] Built target ollama_llama_server + compress + echo 'Compressing payloads to reduce overall binary size...' Compressing payloads to reduce overall binary size... + pids= + rm -rf '../build/linux/x86_64/cpu_avx2/bin/*.gz' + for f in ${BUILD_DIR}/bin/* + pids+=' 4460' + '[' -d ../build/linux/x86_64/cpu_avx2/lib ']' + echo + for pid in ${pids} + wait 4460 + gzip -n --best -f ../build/linux/x86_64/cpu_avx2/bin/ollama_llama_server Finished compression + echo 'Finished compression' + '[' -z '' ']' + '[' -d /usr/local/cuda/lib64 ']' + '[' -z '' ']' + '[' -d /opt/cuda/targets/x86_64-linux/lib ']' + '[' -z '' ']' + CUDART_LIB_DIR= + '[' -z '' -a -d '' ']' + '[' -z '' ']' + ONEAPI_ROOT=/opt/intel/oneapi + '[' -z '' -a -d /opt/intel/oneapi ']' + '[' -z '' ']' + ROCM_PATH=/opt/rocm + '[' -z '' ']' + '[' -d /usr/lib/cmake/CLBlast ']' + '[' -z '' -a -d /opt/rocm ']' + cleanup + cd ../llama.cpp/ + git checkout CMakeLists.txt Updated 1 path from the index ++ ls -A ../patches/01-load-progress.diff ../patches/02-clip-log.diff ../patches/03-load_exception.diff ../patches/04-metal.diff ../patches/05-default-pretokenizer.diff ../patches/06-qwen2.diff + '[' -n '../patches/01-load-progress.diff ../patches/02-clip-log.diff ../patches/03-load_exception.diff ../patches/04-metal.diff ../patches/05-default-pretokenizer.diff ../patches/06-qwen2.diff' ']' + for patch in ../patches/*.diff ++ grep '^+++ ' ../patches/01-load-progress.diff ++ cut -f2 '-d ' ++ cut -f2- -d/ + for file in $(grep "^+++ " ${patch} | cut -f2 -d' ' | cut -f2- -d/) + cd ../llama.cpp + git checkout common/common.cpp Updated 1 path from the index + for file in $(grep "^+++ " ${patch} | cut -f2 -d' ' | cut -f2- -d/) + cd ../llama.cpp + git checkout common/common.h Updated 1 path from the index + for patch in ../patches/*.diff ++ grep '^+++ ' ../patches/02-clip-log.diff ++ cut -f2 '-d ' ++ cut -f2- -d/ + for file in $(grep "^+++ " ${patch} | cut -f2 -d' ' | cut -f2- -d/) + cd 
../llama.cpp + git checkout examples/llava/clip.cpp Updated 1 path from the index + for patch in ../patches/*.diff ++ grep '^+++ ' ../patches/03-load_exception.diff ++ cut -f2 '-d ' ++ cut -f2- -d/ + for file in $(grep "^+++ " ${patch} | cut -f2 -d' ' | cut -f2- -d/) + cd ../llama.cpp + git checkout llama.cpp Updated 1 path from the index + for patch in ../patches/*.diff ++ grep '^+++ ' ../patches/04-metal.diff ++ cut -f2 '-d ' ++ cut -f2- -d/ + for file in $(grep "^+++ " ${patch} | cut -f2 -d' ' | cut -f2- -d/) + cd ../llama.cpp + git checkout ggml-metal.m Updated 1 path from the index + for patch in ../patches/*.diff ++ grep '^+++ ' ../patches/05-default-pretokenizer.diff ++ cut -f2 '-d ' ++ cut -f2- -d/ + for file in $(grep "^+++ " ${patch} | cut -f2 -d' ' | cut -f2- -d/) + cd ../llama.cpp + git checkout llama.cpp Updated 0 paths from the index + for patch in ../patches/*.diff ++ grep '^+++ ' ../patches/06-qwen2.diff ++ cut -f2 '-d ' ++ cut -f2- -d/ + for file in $(grep "^+++ " ${patch} | cut -f2 -d' ' | cut -f2- -d/) + cd ../llama.cpp + git checkout llama.cpp Updated 0 paths from the index ++ cd ../build/linux/x86_64/cpu_avx2/.. ++ echo cpu cpu_avx cpu_avx2 + echo 'go generate completed. LLM runners: cpu cpu_avx cpu_avx2' go generate completed. LLM runners: cpu cpu_avx cpu_avx2 + go build . 
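Both `apply_patches` and the cleanup pass above derive the files a diff touches from its `+++` headers with the same grep/cut pipeline, restoring each file from the index before applying or after building. A self-contained check of that pipeline against a made-up diff (the paths are illustrative, not from the real patches):

```shell
# The grep/cut pipeline apply_patches/cleanup use to list the files a
# unified diff touches, run against a made-up two-file diff.
patch=/tmp/patch-demo.diff
cat > "$patch" <<'EOF'
--- a/common/common.cpp
+++ b/common/common.cpp
@@ -1 +1 @@
-old line
+new line
--- a/llama.cpp
+++ b/llama.cpp
@@ -1 +1 @@
-old line
+new line
EOF
# "+++ b/<path>": take the second space-separated field, then drop
# the leading "b/" component.
grep "^+++ " "$patch" | cut -f2 -d' ' | cut -f2- -d/
```

This is why each `for file in $(grep "^+++ " ...)` loop in the log issues one `git checkout <path>` per touched file.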
+ RPM_EC=0
++ jobs -p
+ exit 0
Executing(%install): /bin/sh -e /var/tmp/rpm-tmp.jnFIvC
+ umask 022
+ cd /builddir/build/BUILD
+ '[' /builddir/build/BUILDROOT/ollama-0.2.5-1.x86_64 '!=' / ']'
+ rm -rf /builddir/build/BUILDROOT/ollama-0.2.5-1.x86_64
++ dirname /builddir/build/BUILDROOT/ollama-0.2.5-1.x86_64
+ mkdir -p /builddir/build/BUILDROOT
+ mkdir /builddir/build/BUILDROOT/ollama-0.2.5-1.x86_64
+ cd ollama-0.2.5
+ cd ollama
+ mkdir -p /builddir/build/BUILDROOT/ollama-0.2.5-1.x86_64/usr/bin
+ install -m 0755 ollama /builddir/build/BUILDROOT/ollama-0.2.5-1.x86_64/usr/bin/ollama
+ /usr/bin/find-debuginfo -j4 --strict-build-id -i --build-id-seed 0.2.5-1 --unique-debug-suffix -0.2.5-1.x86_64 --unique-debug-src-base ollama-0.2.5-1.x86_64 -S debugsourcefiles.list /builddir/build/BUILD/ollama-0.2.5
explicitly decompress any DWARF compressed ELF sections in /builddir/build/BUILDROOT/ollama-0.2.5-1.x86_64/usr/bin/ollama
extracting debug info from /builddir/build/BUILDROOT/ollama-0.2.5-1.x86_64/usr/bin/ollama
1367 blocks
+ /usr/lib/rpm/check-buildroot
+ /usr/lib/rpm/brp-ldconfig
+ /usr/lib/rpm/brp-compress
+ /usr/lib/rpm/brp-strip-static-archive /usr/bin/strip
+ /usr/lib/rpm/brp-python-bytecompile /usr/bin/python 1 1
+ /usr/lib/rpm/brp-python-hardlink
Processing files: ollama-0.2.5-1.x86_64
Provides: ollama = 0.2.5-1 ollama(x86-64) = 0.2.5-1
Requires(rpmlib): rpmlib(CompressedFileNames) <= 3.0.4-1 rpmlib(FileDigests) <= 4.6.0-1 rpmlib(PayloadFilesHavePrefix) <= 4.0-1
Requires: libc.so.6()(64bit) libc.so.6(GLIBC_2.14)(64bit) libc.so.6(GLIBC_2.17)(64bit) libc.so.6(GLIBC_2.2.5)(64bit) libc.so.6(GLIBC_2.29)(64bit) libc.so.6(GLIBC_2.3.2)(64bit) libc.so.6(GLIBC_2.32)(64bit) libc.so.6(GLIBC_2.33)(64bit) libc.so.6(GLIBC_2.34)(64bit) libc.so.6(GLIBC_2.38)(64bit) libc.so.6(GLIBC_2.7)(64bit) libgcc_s.so.1()(64bit) libgcc_s.so.1(GCC_3.0)(64bit) libm.so.6()(64bit) libm.so.6(GLIBC_2.2.5)(64bit) libm.so.6(GLIBC_2.27)(64bit) libm.so.6(GLIBC_2.29)(64bit) libresolv.so.2()(64bit) libstdc++.so.6()(64bit) libstdc++.so.6(CXXABI_1.3)(64bit) libstdc++.so.6(CXXABI_1.3.11)(64bit) libstdc++.so.6(CXXABI_1.3.13)(64bit) libstdc++.so.6(CXXABI_1.3.2)(64bit) libstdc++.so.6(CXXABI_1.3.3)(64bit) libstdc++.so.6(CXXABI_1.3.5)(64bit) libstdc++.so.6(GLIBCXX_3.4)(64bit) libstdc++.so.6(GLIBCXX_3.4.11)(64bit) libstdc++.so.6(GLIBCXX_3.4.14)(64bit) libstdc++.so.6(GLIBCXX_3.4.15)(64bit) libstdc++.so.6(GLIBCXX_3.4.17)(64bit) libstdc++.so.6(GLIBCXX_3.4.18)(64bit) libstdc++.so.6(GLIBCXX_3.4.20)(64bit) libstdc++.so.6(GLIBCXX_3.4.21)(64bit) libstdc++.so.6(GLIBCXX_3.4.22)(64bit) libstdc++.so.6(GLIBCXX_3.4.29)(64bit) libstdc++.so.6(GLIBCXX_3.4.9)(64bit) rtld(GNU_HASH)
Processing files: ollama-debuginfo-0.2.5-1.x86_64
Provides: ollama-debuginfo = 0.2.5-1 ollama-debuginfo(x86-64) = 0.2.5-1
Requires(rpmlib): rpmlib(CompressedFileNames) <= 3.0.4-1 rpmlib(FileDigests) <= 4.6.0-1 rpmlib(PayloadFilesHavePrefix) <= 4.0-1
Recommends: ollama-debugsource(x86-64) = 0.2.5-1
Processing files: ollama-debugsource-0.2.5-1.x86_64
Provides: ollama-debugsource = 0.2.5-1 ollama-debugsource(x86-64) = 0.2.5-1
Requires(rpmlib): rpmlib(CompressedFileNames) <= 3.0.4-1 rpmlib(FileDigests) <= 4.6.0-1 rpmlib(PayloadFilesHavePrefix) <= 4.0-1
Checking for unpackaged file(s): /usr/lib/rpm/check-files /builddir/build/BUILDROOT/ollama-0.2.5-1.x86_64
Wrote: /builddir/build/RPMS/ollama-debugsource-0.2.5-1.x86_64.rpm
Wrote: /builddir/build/RPMS/ollama-debuginfo-0.2.5-1.x86_64.rpm
Wrote: /builddir/build/RPMS/ollama-0.2.5-1.x86_64.rpm
Executing(%clean): /bin/sh -e /var/tmp/rpm-tmp.c0WJNo
+ umask 022
+ cd /builddir/build/BUILD
+ cd ollama-0.2.5
+ /usr/bin/rm -rf /builddir/build/BUILDROOT/ollama-0.2.5-1.x86_64
+ RPM_EC=0
++ jobs -p
+ exit 0
Executing(rmbuild): /bin/sh -e /var/tmp/rpm-tmp.wbbr4s
+ umask 022
+ cd /builddir/build/BUILD
+ rm -rf ollama-0.2.5 ollama-0.2.5.gemspec
+ RPM_EC=0
++ jobs -p
+ exit 0
RPM build warnings:
Macro expanded in comment on line 10: %{name}/archive/refs/tags/v%{version}.tar.gz
Child return code was: 0
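The %install scriptlet in the log reduces to two operations: create the buildroot's `usr/bin` directory, then copy the freshly built binary into it with `install -m 0755`, which copies and sets the mode in one step. A standalone sketch under a temporary directory (the `mktemp` buildroot and the stub script are stand-ins for the real `/builddir/build/BUILDROOT/...` path and the `go build` output):

```shell
#!/bin/sh
set -e
# Temporary stand-in for /builddir/build/BUILDROOT/ollama-0.2.5-1.x86_64
BUILDROOT=$(mktemp -d)
# Stub standing in for the binary produced by `go build .`
printf '#!/bin/sh\necho stub\n' > ollama
mkdir -p "${BUILDROOT}/usr/bin"
# install(1) copies the file and applies mode 0755 in a single step,
# exactly as the %install scriptlet does in the log.
install -m 0755 ollama "${BUILDROOT}/usr/bin/ollama"
"${BUILDROOT}/usr/bin/ollama"
```

After this step, `find-debuginfo` and the `brp-*` helpers operate on the buildroot copy, and rpmbuild's dependency generator derives the `Requires:` list (the `libc.so.6(GLIBC_*)` entries above) from the installed binary's dynamic symbols.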