Package: llamaR
Type: Package
Title: Interface for Large Language Models via 'llama.cpp'
Version: 0.2.3
Authors@R: c(
    person("Yuri", "Baramykov", email = "lbsbmsu@mail.ru", role = c("aut", "cre")),
    person("Georgi", "Gerganov", role = "cph",
           comment = "Author of the 'llama.cpp' library included in src/"))
Description: Provides R bindings to 'llama.cpp' for running large language models
    ('LLMs') locally, with optional 'Vulkan' GPU acceleration via 'ggmlR'. Supports
    model loading, text generation, tokenization, token-to-piece conversion,
    embeddings (single and batch), encoder-decoder inference, low-level batch
    management, chat templates, 'LoRA' adapters, explicit backend/device selection,
    multi-GPU model splitting, and 'NUMA' optimization. Includes a high-level
    'ragnar'-compatible embedding provider ('embed_llamar'). Built on top of 'ggmlR'
    for efficient tensor operations.
License: MIT + file LICENSE
URL: https://github.com/Zabis13/llamaR
BugReports: https://github.com/Zabis13/llamaR/issues
Encoding: UTF-8
Depends: R (>= 4.1.0), ggmlR
LinkingTo: ggmlR
SystemRequirements: C++17, GNU make
Imports: jsonlite, utils
Suggests: testthat (>= 3.0.0), withr
RoxygenNote: 7.3.3
Config/testthat/edition: 3
NeedsCompilation: yes
Packaged: 2026-04-05 18:51:04 UTC; yuri
Author: Yuri Baramykov [aut, cre],
  Georgi Gerganov [cph] (Author of the 'llama.cpp' library included in
    src/)
Maintainer: Yuri Baramykov <lbsbmsu@mail.ru>
Repository: CRAN
Date/Publication: 2026-04-06 10:10:02 UTC
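
The Description above names 'embed_llamar' as a 'ragnar'-compatible embedding
provider. The sketch below is illustrative only and is not part of the DESCRIPTION
file; the argument name, the model path, and the returned shape are assumptions
about the package API, not documented here.

    ## Minimal sketch, assuming embed_llamar() takes a local GGUF model path
    ## and returns an embedding function usable with 'ragnar'.
    library(llamaR)

    ## Assumption: 'model' is the argument name and the path is hypothetical.
    embed <- embed_llamar(model = "path/to/embedding-model.gguf")

    ## Assumption: the provider maps a character vector of texts to a numeric
    ## matrix with one row of embedding values per input text.
    vectors <- embed(c("first document", "second document"))
    dim(vectors)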
