Highlights
🏠 NAT traversal & relays. Servers can now join the swarm automatically even if your machine is behind a NAT or firewall or has a dynamic IP address. You don't have to set up port forwarding manually or pass any extra arguments to make it work.
- Please upgrade the Petals package and restart all your servers & clients to use this feature or access servers joined via relays:
  pip install --upgrade petals
- How does it work? If a server finds that it can't accept incoming connections due to a NAT or firewall, it opens a long-term outgoing connection to one of the relay nodes, and the relay forwards all requests to this server through that connection. In turn, any server with a public IP may serve as a relay node if necessary. We use libp2p circuit relays under the hood: https://docs.libp2p.io/concepts/nat/circuit-relay/
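The relay logic above can be illustrated with a toy model. This is not Petals or libp2p code — just a sketch of the decision described in the paragraph: a reachable server accepts connections directly, while an unreachable one keeps an outgoing link to a relay that forwards traffic to it.

```python
# Toy model of the relay decision described above -- NOT Petals code.
# A server with a public IP serves clients directly (and may also act
# as a relay for others); a server behind a NAT/firewall instead holds
# a long-lived outgoing connection to one relay, which forwards
# requests to it through that connection.

def choose_connection(server_reachable: bool, relays: list) -> str:
    """Return how clients reach this server in this simplified model."""
    if server_reachable:
        return "direct"           # public IP: clients connect straight in
    if not relays:
        raise RuntimeError("no public relay available")
    return f"via {relays[0]}"     # outgoing link to the first known relay

print(choose_connection(True, []))            # -> direct
print(choose_connection(False, ["relay-1"]))  # -> via relay-1
```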
💬 Chatbot app. We've released a chatbot app working over Petals: http://chat.petals.ml (source code).
- Disclaimer: This chatbot uses the regular BLOOM, which is not fine-tuned for question answering. Please do not expect it to behave like ChatGPT.
- How does it work? Under the hood, this web app uses our HTTP endpoint for running inference on the public Petals swarm. You can use this endpoint for your own projects or set up another endpoint yourself (no GPU needed). See the API docs here: https://github.com/borzunov/chat.petals.ml#http-api-methods
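Calling the endpoint from your own project might look like the sketch below. The endpoint path and parameter names here ("/api/v1/generate", "inputs", "max_new_tokens") are assumptions for illustration only — check the API docs linked above for the actual method names and fields.

```python
# Hedged sketch of calling a Petals HTTP endpoint. The route and
# parameter names are assumed, not taken from the API docs.
import json

def build_generate_request(prompt: str, max_new_tokens: int = 64) -> dict:
    """Assemble form data for a hypothetical text-generation call."""
    return {"inputs": prompt, "max_new_tokens": max_new_tokens}

payload = build_generate_request("A cat sat on", 32)
print(json.dumps(payload))

# To actually send it (requires the `requests` package and network access):
#   requests.post("https://chat.petals.ml/api/v1/generate", data=payload)
```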
🏃‍♀️ Faster CPU-only clients. If your CPU supports the AVX512 instruction set, a CPU-only client now runs almost as fast as a GPU-enabled one. This way, you can rent cheap CPU instances to run the client or an HTTP endpoint, like the one we use for the chatbot app.
- How to use it? AVX512 is mostly found on recent Intel Xeon CPUs. You can rent one by choosing a "dedicated CPU" instance with 16+ GB RAM on DigitalOcean.
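To check whether a Linux machine you're considering has AVX512, you can scan its CPU flags. A minimal sketch (Linux-specific — /proc/cpuinfo doesn't exist on macOS or Windows, where the function simply returns False):

```python
# Check for AVX512 support on Linux by scanning /proc/cpuinfo flags.
def has_avx512() -> bool:
    try:
        with open("/proc/cpuinfo") as f:
            return "avx512" in f.read()
    except OSError:
        return False  # no /proc/cpuinfo on this platform

print("AVX512 supported:", has_avx512())
```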
🏥 Swarm health monitor. We've updated the swarm health monitor: http://health.petals.ml (source code). It provides an overview of the servers that have joined the public swarm and reports any connection issues.
What's Changed
- Add PyPI badge, update instructions and links in readme by @borzunov in #172
- Add link to PyPI by @borzunov in #173
- Add local tensor-parallel fwd/bwd by @justheuristic in #143
- Make Docker command more visible by @borzunov in #175
- Allow to disable chunked forward by @borzunov in #176
- Disable chunked_forward() on AVX512 CPUs by @borzunov in #179
- Use slightly less memory in .generate() by @borzunov in #177
- Import bitsandbytes only if it's going to be used by @borzunov in #180
- hotfix: add initial peer that did not crash :) by @justheuristic in #181
- Remove protobuf from requirements by @borzunov in #182
- Add more links to BLOOM to readme by @borzunov in #183
- Add link to health.petals.ml to readme by @borzunov in #184
- Add readme subsections by @borzunov in #185
- Fix GiBs in the "insufficient disk space" message by @borzunov in #187
- Support libp2p relays for NAT traversal by @Vahe1994 in #186
- Fix psutil-related AccessDenied crash, disable --load_in_8bit by default in case of TP by @borzunov in #188
- Bump version to 1.1.0 by @borzunov in #190
New Contributors
Full Changelog: v1.0.0...v1.1.0