llama.cpp: openEuler Copr repository metadata (project openEuler-Summit-2024)

Name:        llama.cpp
Summary:     Port of the English large language model LLaMA, implemented in C/C++
Description: Port of the English large language model LLaMA, implemented in C/C++;
             it can be used for model dialogue on a local laptop.
URL:         https://github.com/ggerganov/llama.cpp
License:     MIT
Packager:    openEuler Copr - user openEuler-Summit-2024
Group:       Unspecified

Package builds. Every build host is of the form eur-prod-workerlocal-x86-64-normal-prod-<build ID>; matching build IDs pair each src package with the x86_64 package built from it.

Arch    Build ID                 Source RPM                    SHA-256 (package file)
src     00438294-20241116-07085  (source package)              8a0f173c60a5f602a3ecebf0f171fb1898ee006d0bae29715e018e73c4b4e5fa
x86_64  00438294-20241116-07085  llama.cpp-20240531-2.src.rpm  0dbf526a6301a59cd740cdd5f894347f4d9f7c39d735d7d58c4b88a26da5eda1
src     00438275-20241115-09062  (source package)              5ba38ff620b6dbe69c69cf2634a147280d83fd8de018f3cc473f838789418043
src     00438278-20241115-09205  (source package)              efdd5c43be7639e6d4d6fcbfee3e6050ed5875e4f08f1c584fe68879b0e986b9
src     00438285-20241115-13581  (source package)              15572296b51d7410ce4cc1280360352892c8d51ab4d2f314282c0a8934f00203
src     00438288-20241116-00251  (source package)              9ace4da84055549dff6b801315a9cc33bcbf7f9473509ac34bc24c6aeef1eb73
src     00438291-20241116-03155  (source package)              9454a08c9efb21ffff7f71f4be4a78a5ad720415fe17c90de8ebb4a0d051f785
x86_64  00438275-20241115-09062  llama.cpp-20241105-1.src.rpm  44b6fb0b7d1731fafd3b2a2ffe335b3b2ff7200719cc6fbaccbe1adf88fb9bba
x86_64  00438278-20241115-09205  llama.cpp-20241105-1.src.rpm  179fa51e151ccaac0df988eced3dcdb563e9bc6dd398c7522d5292724fab67ae
x86_64  00438285-20241115-13581  llama.cpp-20241105-1.src.rpm  85fa216fa2dacf2afafe26624162cbd4c89cf673b5e4a0999d5e1cba4e08aa1e
x86_64  00438288-20241116-00251  llama.cpp-20241105-1.src.rpm  8ba0789e28a98fa292f31810a68e70510ee17f351fd72f842420c5b2bd8b51b1
x86_64  00438291-20241116-03155  llama.cpp-20241105-1.src.rpm  04a5762797661479ffb1f9f1fd670efb5d57e0edaaa9f1dd8494dd35bd932441

Files installed by llama.cpp-20240531-2 (x86_64):
  /usr/bin/baby-llama, /usr/bin/batched, /usr/bin/batched-bench, /usr/bin/beam-search,
  /usr/bin/benchmark, /usr/bin/convert-llama2c-to-ggml, /usr/bin/embedding,
  /usr/bin/eval-callback, /usr/bin/export-lora, /usr/bin/finetune, /usr/bin/gguf,
  /usr/bin/gguf-split, /usr/bin/gritlm, /usr/bin/imatrix, /usr/bin/infill,
  /usr/bin/llama-bench, /usr/bin/llama_convert-hf-to-gguf.py, /usr/bin/llama_cpp_main,
  /usr/bin/llava-cli, /usr/bin/lookahead, /usr/bin/lookup, /usr/bin/lookup-create,
  /usr/bin/lookup-merge, /usr/bin/lookup-stats, /usr/bin/parallel, /usr/bin/passkey,
  /usr/bin/perplexity, /usr/bin/quantize, /usr/bin/quantize-stats, /usr/bin/retrieval,
  /usr/bin/save-load-state, /usr/bin/server, /usr/bin/simple, /usr/bin/speculative,
  /usr/bin/test-autorelease, /usr/bin/test-backend-ops, /usr/bin/test-chat-template,
  /usr/bin/test-grad0, /usr/bin/test-grammar-integration, /usr/bin/test-grammar-parser,
  /usr/bin/test-json-schema-to-grammar, /usr/bin/test-llama-grammar,
  /usr/bin/test-model-load-cancel, /usr/bin/test-quantize-fns, /usr/bin/test-quantize-perf,
  /usr/bin/test-rope, /usr/bin/test-sampling, /usr/bin/test-tokenizer-0,
  /usr/bin/test-tokenizer-1-bpe, /usr/bin/test-tokenizer-1-spm, /usr/bin/tokenize,
  /usr/bin/train-text-from-scratch

Files installed by llama.cpp-20241105-1 (x86_64; identical across all five builds):
  /usr/bin/llama-batched, /usr/bin/llama-batched-bench, /usr/bin/llama-bench,
  /usr/bin/llama-convert-llama2c-to-ggml, /usr/bin/llama-cvector-generator,
  /usr/bin/llama-embedding, /usr/bin/llama-eval-callback, /usr/bin/llama-export-lora,
  /usr/bin/llama-gbnf-validator, /usr/bin/llama-gguf, /usr/bin/llama-gguf-hash,
  /usr/bin/llama-gguf-split, /usr/bin/llama-gritlm, /usr/bin/llama-imatrix,
  /usr/bin/llama-infill, /usr/bin/llama-llava-cli, /usr/bin/llama-lookahead,
  /usr/bin/llama-lookup, /usr/bin/llama-lookup-create, /usr/bin/llama-lookup-merge,
  /usr/bin/llama-lookup-stats, /usr/bin/llama-minicpmv-cli, /usr/bin/llama-parallel,
  /usr/bin/llama-passkey, /usr/bin/llama-perplexity, /usr/bin/llama-quantize,
  /usr/bin/llama-quantize-stats, /usr/bin/llama-retrieval, /usr/bin/llama-save-load-state,
  /usr/bin/llama-server, /usr/bin/llama-simple, /usr/bin/llama-simple-chat,
  /usr/bin/llama-speculative, /usr/bin/llama-tokenize, /usr/bin/llama_convert_hf_to_gguf.py,
  /usr/bin/llama_cpp_main, /usr/bin/test-arg-parser, /usr/bin/test-autorelease,
  /usr/bin/test-backend-ops, /usr/bin/test-barrier, /usr/bin/test-chat-template,
  /usr/bin/test-grad0, /usr/bin/test-grammar-integration, /usr/bin/test-grammar-parser,
  /usr/bin/test-json-schema-to-grammar, /usr/bin/test-llama-grammar, /usr/bin/test-log,
  /usr/bin/test-model-load-cancel, /usr/bin/test-quantize-fns, /usr/bin/test-quantize-perf,
  /usr/bin/test-rope, /usr/bin/test-sampling, /usr/bin/test-tokenizer-0,
  /usr/bin/test-tokenizer-1-bpe, /usr/bin/test-tokenizer-1-spm

Note that between the 20240531 and 20241105 builds the upstream example binaries were renamed to carry a llama- prefix (e.g. /usr/bin/server became /usr/bin/llama-server, /usr/bin/quantize became /usr/bin/llama-quantize), as reflected in the two file lists above.
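In standard RPM repository metadata, the per-package SHA-256 is the digest of the package file itself, so a downloaded RPM can be checked against the table above before installation. Below is a minimal Python sketch of that check; the default file name llama.cpp-20241105-1.x86_64.rpm is a hypothetical placeholder (the metadata lists only the source RPM names), and the expected digest is copied from the first x86_64 entry of the 20241105-1 build.

```python
#!/usr/bin/env python3
"""Verify a downloaded RPM against a SHA-256 digest from the metadata table.

Minimal sketch: adjust EXPECTED_SHA256 and the file path for the package
you actually downloaded.
"""

import hashlib
import sys

# Digest of one x86_64 build of llama.cpp-20241105-1, copied from the table above.
EXPECTED_SHA256 = "44b6fb0b7d1731fafd3b2a2ffe335b3b2ff7200719cc6fbaccbe1adf88fb9bba"


def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file through hashlib so large RPMs are not read into memory at once."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


if __name__ == "__main__":
    # Hypothetical default file name; pass the real path as the first argument.
    path = sys.argv[1] if len(sys.argv) > 1 else "llama.cpp-20241105-1.x86_64.rpm"
    actual = sha256_of(path)
    if actual == EXPECTED_SHA256:
        print(f"OK: {path} matches the published digest")
    else:
        print(f"MISMATCH: {path}\n  expected {EXPECTED_SHA256}\n  got      {actual}")
        sys.exit(1)
```

The same streaming-digest helper works for any of the twelve packages listed above; only the expected digest and file path change.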