Repository metadata (openEuler Copr, user tzing_t): package records for llama.cpp and openstack-sig-tool.

Package: llama.cpp
  Summary:     Port of the English large language model LLaMA, implemented in C/C++
  Description: Port of the English large language model LLaMA, implemented in C/C++;
               it can be used for model dialogue on local laptops.
  URL:         https://github.com/ggerganov/llama.cpp
  License:     MIT
  Vendor:      openEuler Copr - user tzing_t
  Group:       Unspecified

  Builds:
  1. src, SHA-256 ccad697b7c829c0fe3ea3dfb14a506f9af36d0ea57c9cac79e0f162e596bf7e2
     Build host: eur-prod-workerlocal-x86-64-normal-prod-00438120-20241106-04342

  2. x86_64, SHA-256 603e076081ce83cc88b7d75ce702472a12e113a370201d0c030371bd706e500d
     Source RPM: llama.cpp-20241102-2.src.rpm
     Build host: eur-prod-workerlocal-x86-64-normal-prod-00438120-20241106-04342
     Files (all under /usr/bin): llama-baby-llama, llama-batched, llama-batched-bench,
     llama-bench, llama-cli, llama-convert-llama2c-to-ggml, llama-cvector-generator,
     llama-embedding, llama-eval-callback, llama-export-lora, llama-gbnf-validator,
     llama-gguf, llama-gguf-hash, llama-gguf-split, llama-gritlm, llama-imatrix,
     llama-infill, llama-llava-cli, llama-lookahead, llama-lookup, llama-lookup-create,
     llama-lookup-merge, llama-lookup-stats, llama-minicpmv-cli, llama-parallel,
     llama-passkey, llama-perplexity, llama-quantize, llama-quantize-stats,
     llama-retrieval, llama-save-load-state, llama-server, llama-simple,
     llama-simple-chat, llama-speculative, llama-tokenize, llama_convert_hf_to_gguf.py,
     test-arg-parser, test-autorelease, test-backend-ops, test-barrier,
     test-chat-template, test-grad0, test-grammar-integration, test-grammar-parser,
     test-json-schema-to-grammar, test-llama-grammar, test-log, test-model-load-cancel,
     test-quantize-fns, test-quantize-perf, test-rope, test-sampling, test-tokenizer-0,
     test-tokenizer-1-bpe, test-tokenizer-1-spm

  3. src, SHA-256 d51143dcf129c94c27690bcadb187702672151c3786f1ef376c093d4e02ec7bb
     Build host: eur-prod-workerlocal-x86-64-normal-prod-00438123-20241106-04532

  4. src, SHA-256 733f31de95b6a62b54b0cf0899ec144c2f2669dd6552a42946a8957a8ad5c799
     Build host: eur-prod-workerlocal-x86-64-normal-prod-00438141-20241106-12475

  5. x86_64, SHA-256 d0383eec8aa82658986c3ee2a2ee21326961977c91caac15e009e564320738e2
     Source RPM: llama.cpp-20241105-2.src.rpm
     Build host: eur-prod-workerlocal-x86-64-normal-prod-00438123-20241106-04532
     Files (all under /usr/bin): same as build 2, except llama-baby-llama is no
     longer shipped.

  6. x86_64, SHA-256 756fc916b453d5a5d8bdc4affdbf7a64523234ebe7a024792e4cf940386e2582
     Source RPM: llama.cpp-20241105-2.src.rpm
     Build host: eur-prod-workerlocal-x86-64-normal-prod-00438141-20241106-12475
     Files (all under /usr/bin): same as build 5, except llama-cli is not shipped
     and /usr/bin/llama_cpp_main is added.

Package: openstack-sig-tool
  Summary:     The command-line tool for the openEuler OpenStack SIG
  Description: The command-line tool for the openEuler OpenStack SIG
  URL:         https://gitee.com/openeuler/openstack-sig-tool
  License:     Apache-2.0
  Vendor:      openEuler Copr - user tzing_t
  Group:       Unspecified
  Build host:  eur-prod-workerlocal-x86-64-normal-prod-00451807-20250318-03255 (all builds)

  Builds:
  1. noarch, SHA-256 fc34e56b2c1af1db19bd53ab413157b85208086c3b91df098ffd1220794571f7
     Source RPM: openstack-sig-tool-1.0.3-1.src.rpm
     Files: /usr/bin/oos

  2. noarch, SHA-256 c88d069ea278f30bdc13054a0367284dc45245eaa4eed12138f284c098d3e398
     Source RPM: openstack-sig-tool-1.0.3-1.src.rpm
     Files: /usr/bin/oos

  3. src, SHA-256 0aa812c539dce80a7c8116c85c91d2db050ea2bf467923b4e79ab36cdd8a3a0d

  4. src, SHA-256 9debe5629760294237533af7e36d93415eb664f2ee1a4e128d93ecad60a88442
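The x86_64 llama.cpp packages install their tools under /usr/bin, and the description notes they can be used for model dialogue on a local laptop. A minimal sketch of such a session with the packaged llama-cli binary follows; the model path is an assumption (no GGUF model file ships with the RPM), and the invocation is guarded so it only runs where the package is actually installed:

```shell
#!/bin/sh
# Sketch: local dialogue with the packaged llama-cli binary.
# MODEL is an assumed path -- download any GGUF model and adjust it.
MODEL="$HOME/models/llama-2-7b.Q4_K_M.gguf"

# Only attempt inference if the llama.cpp RPM is installed.
if command -v llama-cli >/dev/null 2>&1 && [ -f "$MODEL" ]; then
    # One-shot prompt, capped at 128 generated tokens:
    llama-cli -m "$MODEL" -p "Explain GGUF in one sentence." -n 128
    # Interactive chat (conversation mode):
    llama-cli -m "$MODEL" -cnv
fi
```

The same model file also works with the packaged llama-server binary for an HTTP endpoint, or llama-quantize to reduce its size further.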