Ollama Changelog

Last updated: 2024-09-26

Foreword

Only the major highlights of the Linux releases are recorded here.


Ollama Changelog Summary

 

v0.3.12

  • New models: Qwen 2.5 Coder (7B, 1.5B), Llama 3.2 (1B, 3B)

v0.3.11

  • New models: Qwen 2.5, Reader-LM (HTML → Markdown), Mistral-Small (22B)

v0.3.10

  • New models: DeepSeek-V2.5 (236B, 133 GiB), MiniCPM-V (vision), Yi-Coder (9B, 1.5B)

v0.3.8 ~ v0.3.9

  • Bug fixes

v0.3.7

  • Added Phi 3.5 (3.8B)
  • Binaries and libraries are now packaged together as a single .tgz; the download size is 1.3 GB

v0.3.6 ~ v0.3.2

v0.3.1

  • Added support for min_p
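As a sketch of how min_p might be used (not taken from the release notes; the model name is an assumption), the parameter goes in the `options` object of an Ollama API request. min_p drops tokens whose probability is below min_p times that of the most likely token:

```python
import json

# Hypothetical request body for Ollama's /api/generate endpoint,
# setting the min_p sampling option added in v0.3.1.
payload = {
    "model": "llama3",                # assumed model name
    "prompt": "Why is the sky blue?",
    "options": {
        "min_p": 0.05,  # keep tokens with >= 5% of the top token's probability
    },
    "stream": False,
}

# Serialize the body as it would be POSTed to the server.
body = json.dumps(payload)
```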

v0.3.0

  • Supports tool calling, e.g. for:
     - Functions and APIs
     - Web browsing
     - Code interpreter
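A tool is described to the model with an OpenAI-style function schema. As an illustrative sketch (the `get_weather` function and the model name are assumptions, not from the release notes):

```python
# Hypothetical tool definition for Ollama's /api/chat tool calling.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # illustrative function name
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}]

# The tools list is sent alongside the chat messages; the model may
# answer with a tool call instead of plain text.
request = {
    "model": "llama3.1",  # assumed tool-capable model
    "messages": [{"role": "user", "content": "What's the weather in Taipei?"}],
    "tools": tools,
}
```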

v0.2.8 ~ v0.2.1

v0.2.0

  • Parallel requests (using only a small amount of additional memory per request)
    Scenario: handling multiple chat sessions at the same time
  • Multiple models (loading different models at the same time; ollama ps shows which models are loaded)
  • New models: GLM-4, CodeGeeX4
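The concurrency features above are controlled by environment variables on the server. A minimal sketch (the variable names are the ones Ollama documents; the values are illustrative):

```shell
# Allow up to 4 parallel requests per loaded model.
export OLLAMA_NUM_PARALLEL=4
# Keep up to 2 different models loaded in memory at once.
export OLLAMA_MAX_LOADED_MODELS=2

ollama serve
```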

v0.1.48

  • Bug fixes

v0.1.47

  • New models - Gemma 2 (9B and 27B)

v0.1.46

  • Increased model loading speed with ollama run, especially if running an already-loaded model
  • Improved model loading times when models would not completely fit in system memory on Linux

v0.1.45

  • New models: DeepSeek-Coder-V2

v0.1.42

  • New models: Qwen 2

v0.1.40

  • New models: Codestral, IBM Granite Code

v0.1.39

  • New models - Phi-3 Mini 128K and Phi-3 Medium 128K
  • Added a Ctrl+W shortcut to ollama run

v0.1.38

  • ollama ps  # displays currently loaded models, their memory footprint,
               # and the processors used (GPU or CPU)
  • /clear     # clears the chat history of a session inside ollama run

v0.1.35

  • Models can now be re-quantized when importing (--quantize, -q):

ollama create -f Modelfile --quantize q4_0 mymodel

  • Ctrl+J characters will now properly add newlines in ollama run

 


OpenWebUI Changelog Summary

 

v0.3.32

  • Bug fixes

v0.3.31

  • Code blocks now allow live editing directly in the LLM response, with live reloads supported by artifacts.
  • New floating buttons appear when text is highlighted in LLM responses,
    offering deeper interactions like "Ask a Question" or "Explain".
  • Implemented lazy loading of large dependencies to minimize initial memory usage, boosting performance.
  • Expandable Content Markdown Support

v0.3.25~v0.3.30

  • Bug fixes

v0.3.24

  • Users can now mark responses as favorite directly from the chat overview
  • Create Message Pairs with Shortcut: Ctrl+Shift+Enter
  • Expanded User Prompt Variables: weekday, timezone, and language
  • Support for 'audio/x-m4a' files
  • PDF citations now open at the associated page, streamlining reference checks

v0.3.23

  • No notable highlights

v0.3.22

  • Multiple Vector DB Support: Milvus

v0.3.21

  • Enabled /api/embed endpoint proxy support.
  • Now displays the total number of documents directly within the dashboard.
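As a sketch of what a request proxied to /api/embed might look like (the embedding model name is an assumption; /api/embed accepts a single string or a list of strings as `input`):

```python
import json

# Hypothetical request body for the /api/embed endpoint that
# Open WebUI can now proxy through to the backend.
payload = {
    "model": "nomic-embed-text",  # assumed embedding model
    "input": ["hello world", "goodbye world"],
}

# Serialize the body as it would be POSTed to the proxy.
body = json.dumps(payload)
```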

v0.3.19 ~ v0.3.20

  • Bug fixes

v0.3.18

  • Direct Database Execution (Python) for Tools & Functions

v0.3.16 - 2024-08-27

  • Migrated configuration handling from config.json to the database.
  • Code spans now support click-to-copy of their content.

v0.3.15 - 2024-08-21

  • Temporary Chat Activation:
    temporary chat sessions can now be started directly through the URL (URL parameter 'temporary-chat=true')

v0.3.14 - 2024-08-21

  • After asking two different models at the same time, their responses can now be merged into one
  • Enhanced Shift key quick actions for hiding/unhiding and deleting models
    (in Workspace > Models, holding the Shift key changes the action)
  • User messages are now rendered in Markdown

v0.3.13 - 2024-08-14

  • Auto-Install Tools & Functions Python Dependencies
  • Websocket Reconnection
    (reconnect when a websocket is closed)

v0.3.12

  • No notable highlights

v0.3.11

  • Added 'Min P' parameter in the advanced settings for customized model precision control.

v0.3.10

  • No notable highlights

v0.3.9

  • A new "Action" function to write custom buttons to the message toolbar.

v0.3.8

  • Darker OLED Theme

v0.3.7

  • No notable highlights

v0.3.6

  • "Functions" Feature
  • Files API

 

 

Creative Commons license icon