LLM Chatter, v0.0.2

A single-file HTML interface for chatting with local Ollama large language models (LLMs) or OpenAI.com LLMs.

Application screenshot

Installation

  1. Install Ollama and add at least one model.
    • curl https://ollama.ai/install.sh | sh
    • ollama pull mistral-openorca:7b
  2. Run wget https://raw.githubusercontent.com/rossuber/llm-chatter/master/dist/index.html
  3. Run python3 -m http.server 8181
  4. Open http://localhost:8181 in your web browser.
  5. Optional: Register an account at openai.com and obtain an API key. Paste it into the ‘Open AI’ password field while OpenAI Chat is selected.
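The pasted key is sent to the OpenAI API as a Bearer token. As a rough sketch of the kind of request the app would make (the `buildOpenAIRequest` helper and the model name are illustrative assumptions, not taken from the app's source):

```javascript
// Sketch of a Chat Completions request against https://api.openai.com.
// The helper name and model are assumptions for illustration only.
function buildOpenAIRequest(apiKey, userMessage) {
  return {
    url: "https://api.openai.com/v1/chat/completions",
    options: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        // The key from the 'Open AI' field travels as a Bearer token.
        "Authorization": `Bearer ${apiKey}`,
      },
      body: JSON.stringify({
        model: "gpt-3.5-turbo",
        messages: [{ role: "user", content: userMessage }],
      }),
    },
  };
}

// Usage: const { url, options } = buildOpenAIRequest(key, "Hello");
//        const data = await (await fetch(url, options)).json();
```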

Optional LangChain Node.js server installation steps

Now supports LangChain URL embedding! The LangChain Ollama implementation appears to be incompatible with something in the browser build (possibly React; the exact cause is unclear), so a separate Node.js Express server handles those API requests at http://localhost:8080.

  1. Run mkdir langchain-ollama
  2. Run cd langchain-ollama
  3. Run wget https://raw.githubusercontent.com/rossuber/llm-chatter/master/langchain-ollama/index.js
  4. Run wget https://raw.githubusercontent.com/rossuber/llm-chatter/master/langchain-ollama/package.json
  5. Run npm install
  6. Run node index.js

Built with: Vite / Bun / React / TailwindCSS / FontAwesome

The web app pulls icon images from https://ka-f.fontawesome.com.

The web app makes API calls to http://localhost:11434 (Ollama), http://localhost:8080 (the langchain-ollama Node.js Express server), and https://api.openai.com.
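The Ollama endpoint follows Ollama's documented REST API. A minimal sketch of a generate call, using the model pulled during installation (the `buildOllamaRequest` helper is illustrative; the app's actual request code may differ):

```javascript
// Sketch of a call to Ollama's documented /api/generate endpoint
// at http://localhost:11434. Helper name is illustrative only.
function buildOllamaRequest(model, prompt) {
  return {
    url: "http://localhost:11434/api/generate",
    options: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      // stream:false asks for one JSON object instead of
      // newline-delimited streaming chunks.
      body: JSON.stringify({ model, prompt, stream: false }),
    },
  };
}

// Usage:
// const { url, options } =
//   buildOllamaRequest("mistral-openorca:7b", "Why is the sky blue?");
// const data = await (await fetch(url, options)).json();
// console.log(data.response);
```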

Ollama API docs

OpenAI API docs

GitHub
