llama.cpp builds from openEuler Copr (user openEuler-Summit-2024)

Summary: Port of the English large model LLaMA implemented in C/C++; it can be used for model dialogue on a local laptop.
Upstream URL: https://github.com/ggerganov/llama.cpp
License: MIT
Vendor: openEuler Copr - user openEuler-Summit-2024
Group: Unspecified
Builder: eur-prod-worker

Packages built from llama.cpp-20240531-2.src.rpm
(build host local-aarch64-normal-prod-00438292-20241116-0316)

  llama.cpp (aarch64)
    SHA256: 5354b3a29096ea577e3159e201ef087b0816694f4e34a0acc3bd39ea4097cc4e
    Files: /usr/bin/baby-llama, /usr/bin/batched, /usr/bin/batched-bench, /usr/bin/beam-search, /usr/bin/benchmark, /usr/bin/convert-llama2c-to-ggml, /usr/bin/embedding, /usr/bin/eval-callback, /usr/bin/export-lora, /usr/bin/finetune, /usr/bin/gguf, /usr/bin/gguf-split, /usr/bin/gritlm, /usr/bin/imatrix, /usr/bin/infill, /usr/bin/llama-bench, /usr/bin/llama_convert-hf-to-gguf.py, /usr/bin/llama_cpp_main, /usr/bin/llava-cli, /usr/bin/lookahead, /usr/bin/lookup, /usr/bin/lookup-create, /usr/bin/lookup-merge, /usr/bin/lookup-stats, /usr/bin/parallel, /usr/bin/passkey, /usr/bin/perplexity, /usr/bin/quantize, /usr/bin/quantize-stats, /usr/bin/retrieval, /usr/bin/save-load-state, /usr/bin/server, /usr/bin/simple, /usr/bin/speculative, /usr/bin/test-autorelease, /usr/bin/test-backend-ops, /usr/bin/test-chat-template, /usr/bin/test-grad0, /usr/bin/test-grammar-integration, /usr/bin/test-grammar-parser, /usr/bin/test-json-schema-to-grammar, /usr/bin/test-llama-grammar, /usr/bin/test-model-load-cancel, /usr/bin/test-quantize-fns, /usr/bin/test-quantize-perf, /usr/bin/test-rope, /usr/bin/test-sampling, /usr/bin/test-tokenizer-0, /usr/bin/test-tokenizer-1-bpe, /usr/bin/test-tokenizer-1-spm, /usr/bin/tokenize, /usr/bin/train-text-from-scratch

  llama.cpp (src)
    SHA256: 0cb59ce8c02499350ae9608a9895d88d5f45cbd0b685b9a63a7f95a67c8542a3

Packages built from llama.cpp-20241105-1.src.rpm
(five rebuilds on different build hosts; all five aarch64 packages install the identical file list, shown once below)

  llama.cpp (aarch64)
    SHA256: ceeebf50da3a4f9bb1098a6dca2e0788eb75dbd8935469b28ee264358e25cf45 (build host local-aarch64-normal-prod-00438272-20241115-0646)
    SHA256: 4ff8a2a78e268d5b03c919a5da305c639eb86dc7279ff2133ac1aeb477fd6a60 (build host local-aarch64-normal-prod-00438279-20241115-1051)
    SHA256: 2b5e49abb5735a03bab3a7d8507db99a4c53aa70bf47d3763c33e482aabf5450 (build host local-aarch64-normal-prod-00438283-20241115-1345)
    SHA256: 9d9cf19cb8fcd15e49c4175d088fb33258acc27d7b2daad8ef58eb0cf741b0e1 (build host local-aarch64-normal-prod-00438286-20241115-1358)
    SHA256: 4d7a75a73adeaf4632e2fa8c79cce6b90adcab0ccd5c980451b22b3de726f6f7 (build host local-aarch64-normal-prod-00438289-20241116-0025)

  llama.cpp (src)
    SHA256: 81191291519fe66cca0e99b76f138ccccb3d440a3ea8f43b6f32687849bc8d69 (build host local-aarch64-normal-prod-00438272-20241115-0646)
    SHA256: 0fe436361e9134dd530922d0e70b009c93f22df6429dc3a20a6eb15828d39cda (build host local-aarch64-normal-prod-00438279-20241115-1051)
    SHA256: 69ddbe9509a371de1fd40e73d6222af41afd48f63ce45ced388945bf139550b5 (build host local-aarch64-normal-prod-00438283-20241115-1345)
    SHA256: 7ce1ba64b2588320aa05cbcd95e8f103e68d967c7b188f196b5aff56cce80924 (build host local-aarch64-normal-prod-00438286-20241115-1358)
    SHA256: 00eb097a6dd1b15b5772a1f00c7fc0ca47d93ba8483a5fb4ced7771cd2e8770a (build host local-aarch64-normal-prod-00438289-20241116-0025)

  Files (all five aarch64 rebuilds): /usr/bin/llama-batched, /usr/bin/llama-batched-bench, /usr/bin/llama-bench, /usr/bin/llama-convert-llama2c-to-ggml, /usr/bin/llama-cvector-generator, /usr/bin/llama-embedding, /usr/bin/llama-eval-callback, /usr/bin/llama-export-lora, /usr/bin/llama-gbnf-validator, /usr/bin/llama-gguf, /usr/bin/llama-gguf-hash, /usr/bin/llama-gguf-split, /usr/bin/llama-gritlm, /usr/bin/llama-imatrix, /usr/bin/llama-infill, /usr/bin/llama-llava-cli, /usr/bin/llama-lookahead, /usr/bin/llama-lookup, /usr/bin/llama-lookup-create, /usr/bin/llama-lookup-merge, /usr/bin/llama-lookup-stats, /usr/bin/llama-minicpmv-cli, /usr/bin/llama-parallel, /usr/bin/llama-passkey, /usr/bin/llama-perplexity, /usr/bin/llama-quantize, /usr/bin/llama-quantize-stats, /usr/bin/llama-retrieval, /usr/bin/llama-save-load-state, /usr/bin/llama-server, /usr/bin/llama-simple, /usr/bin/llama-simple-chat, /usr/bin/llama-speculative, /usr/bin/llama-tokenize, /usr/bin/llama_convert_hf_to_gguf.py, /usr/bin/llama_cpp_main, /usr/bin/test-arg-parser, /usr/bin/test-autorelease, /usr/bin/test-backend-ops, /usr/bin/test-barrier, /usr/bin/test-chat-template, /usr/bin/test-grad0, /usr/bin/test-grammar-integration, /usr/bin/test-grammar-parser, /usr/bin/test-json-schema-to-grammar, /usr/bin/test-llama-grammar, /usr/bin/test-log, /usr/bin/test-model-load-cancel, /usr/bin/test-quantize-fns, /usr/bin/test-quantize-perf, /usr/bin/test-rope, /usr/bin/test-sampling, /usr/bin/test-tokenizer-0, /usr/bin/test-tokenizer-1-bpe, /usr/bin/test-tokenizer-1-spm