github longy2k/obsidian-bmo-chatbot 1.6.2

  • gpt-3.5-turbo-16k points to gpt-3.5-turbo-0613, which has a context window of 4,096 tokens. Added gpt-3.5-turbo-16k-0613 to maintain a context window of 16,385 tokens.
  • gpt-3.5-turbo-1106 - "The latest GPT-3.5 Turbo model with improved instruction following, JSON mode, reproducible outputs, parallel function calling, and more. Returns a maximum of 4,096 output tokens." (Context window: 16,385 tokens) 💨 💨 💨
  • gpt-4-1106-preview (GPT-4 TURBO) - "The latest GPT-4 model with improved instruction following, JSON mode, reproducible outputs, parallel function calling, and more. Returns a maximum of 4,096 output tokens. This preview model is not yet suited for production traffic." Really fast! 💨 💨 💨

Reference: https://platform.openai.com/docs/models/gpt-3-5

  • Will update again on December 11, 2023 to remove gpt-3.5-turbo-16k-0613 and gpt-3.5-turbo-1106 after gpt-3.5-turbo points to gpt-3.5-turbo-1106.
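The distinction between pinned model variants matters because a request fails (or silently truncates) when the prompt plus the reserved output exceeds the model's context window. Below is a minimal sketch of that budget check, using only the context-window figures quoted in these notes; the helper name and the 4,096-token output reservation are illustrative assumptions, not part of the plugin.

```python
# Context windows (in tokens) as stated in the release notes above.
CONTEXT_WINDOWS = {
    "gpt-3.5-turbo-0613": 4_096,        # what gpt-3.5-turbo-16k currently resolves to
    "gpt-3.5-turbo-16k-0613": 16_385,   # pinned to keep the full 16k window
    "gpt-3.5-turbo-1106": 16_385,
}

def fits_in_context(model: str, prompt_tokens: int, reserved_output: int = 4_096) -> bool:
    """Return True if the prompt plus reserved output tokens fit in the model's window.

    reserved_output defaults to 4,096, the maximum output size quoted for
    these models; adjust it if you request fewer completion tokens.
    """
    window = CONTEXT_WINDOWS.get(model)
    if window is None:
        raise KeyError(f"unknown model: {model}")
    return prompt_tokens + reserved_output <= window

# A 10,000-token prompt fits in the pinned 16k variant but not in the
# 4,096-token window that gpt-3.5-turbo-16k currently resolves to.
print(fits_in_context("gpt-3.5-turbo-16k-0613", 10_000))  # True
print(fits_in_context("gpt-3.5-turbo-0613", 10_000))      # False
```

This illustrates why the release pins gpt-3.5-turbo-16k-0613 explicitly: relying on the gpt-3.5-turbo-16k alias would shrink the usable prompt budget until the alias is repointed.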
