What's Changed
- Added a new `/v1/embeddings` service, leveraging mlx-embeddings. Thanks to @0ssamaak0 for their PR!
- Integrated the mlx-audio library, enabling support for more TTS models (such as `dia`, `kokoro`, `outetts`, `bark`). Thanks to @zboyles for their PR!
- Upgraded the `mlx-lm` base library to support more models, such as `qwen3`.
New Contributors
- @zboyles made their first contribution in #26
- @0ssamaak0 made their first contribution in #36
Example
- Embeddings

```diff
from openai import OpenAI

client = OpenAI(
+     base_url="http://localhost:10240/v1", # Point to local server
)

response = client.embeddings.create(
-     model="text-embedding-3-large",
+     model="mlx-community/all-MiniLM-L6-v2-4bit",
      input="MLX is awesome"
)
```
- Text to speech

```diff
from openai import OpenAI

client = OpenAI(
+     base_url="http://localhost:10240/v1", # Point to local server
)

speech_file_path = "mlx_example.wav"

response = client.audio.speech.create(
-     model="tts-1",
+     model="mlx-community/Kokoro-82M-4bit",
-     voice="alloy",
+     voice="af_sky",
      input="MLX is awesome.",
)

response.stream_to_file(speech_file_path)
```
Full Changelog: v0.3.5...v0.4.0