Communicate with 'Ollama' to Run Large Language Models Locally


Documentation for package ‘rollama’ version 0.3.0

Help Pages

chat Chat with an LLM through Ollama
chat_history Handle conversations
check_model_installed Check if one or several models are installed on the server
copy_model Pull, push, show and delete models
create_model Create a model from a Modelfile
create_schema Create a structured output schema
delete_model Pull, push, show and delete models
embed_text Generate Embeddings
list_models List models that are available locally
list_running_models List running models
make_query Generate and format queries for a language model
new_chat Handle conversations
ping_ollama Ping server to see if Ollama is reachable
pull_model Pull, push, show and delete models
push_model Pull, push, show and delete models
query Chat with an LLM through Ollama
rollama-options rollama Options
show_model Pull, push, show and delete models
type_array Create a structured output schema
type_boolean Create a structured output schema
type_enum Create a structured output schema
type_integer Create a structured output schema
type_number Create a structured output schema
type_object Create a structured output schema
type_string Create a structured output schema
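As a quick orientation to the functions listed above, a minimal session might look like the following. This is a sketch assuming a local Ollama server at its default address; the model name is an example and any Ollama model tag works.

```r
library(rollama)

# check that the local Ollama server is reachable
ping_ollama()

# download a model (example tag; substitute any model available on Ollama)
pull_model("llama3.1")

# one-off question; query() does not keep conversation history
query("Why is the sky blue?")

# chat() keeps the conversation history; new_chat() starts a fresh one
chat("Say hello in French.")
chat("Now say it in German.")
new_chat()
```

By default these calls use the model and server set via the package options (see rollama-options), so no arguments beyond the prompt are required once a model is pulled.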