jdrbc/podly_pure_podcasts v1.1.0

Docker

In v1.0, only hosts with NVIDIA GPUs could run Podly in a Docker container. v1.1 adds support for hosts without NVIDIA GPUs. This may be a simpler way to get local Whisper running on your machine.

No configuration changes are required to use Docker. In a new or existing Podly setup, run ./run_podly_docker.sh to start the container; Docker will start with or without GPU support depending on the host machine. The database and ingestion pipeline are mounted into the container, so the app's state should not depend on whether it runs locally or via Docker.
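For reference, the GPU-detection idea can be sketched roughly as below. This is an illustrative outline only, not the actual contents of run_podly_docker.sh, and the compose file names are placeholders:

```bash
#!/usr/bin/env bash
# Illustrative sketch of a GPU-aware Docker launcher.
# The compose file names are placeholders, not necessarily the files Podly ships with.

if command -v nvidia-smi >/dev/null 2>&1 && nvidia-smi >/dev/null 2>&1; then
    echo "NVIDIA GPU detected; starting with GPU support"
    docker compose -f docker-compose.nvidia.yml up --build
else
    echo "No NVIDIA GPU detected; starting CPU-only"
    docker compose -f docker-compose.cpu.yml up --build
fi
```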

Users of the v1.0 configuration may wish to copy the contents of the processing pipeline and database into the host directory to maintain state.
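One way to do that migration, assuming the old data still lives inside a running v1.0 container, is to copy it out with docker cp. The container name and paths below are placeholders and will depend on your setup:

```bash
# Placeholder container name and paths; adjust to match your v1.0 setup.
docker cp podly:/app/processing ./processing        # ingestion pipeline output (assumed location)
docker cp podly:/app/src/instance ./src/instance    # database directory (assumed location)
```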

What's Changed

  • More db by @jdrbc in #27
  • add FAQ with information about whisper GPU by @frrad in #34
  • deprecate podcasts in config by @frrad in #33
  • Update README.md discord link by @xerootg in #36
  • put media link in enclosure for standard compliance by @frrad in #32
  • Split remote transcription into its own openai client by @xerootg in #29
  • fix bug getting audio len by @frrad in #37
  • Adds post summary page by @frrad in #40
  • handle some edge cases around transcript being missing by @frrad in #41
  • add delete button by @frrad in #47
  • sanitize full input path by @frrad in #43
  • Update README.md by @xmutantson in #50
  • whitelist all button by @jdrbc in #51
  • Automated background downloads via scheduler by @xmutantson in #49
  • Fix windows paths for realsies by @frrad in #54
  • remove env variable section from readme by @frrad in #56
  • move from pydub to ffmpeg-python by @frrad in #58
  • finish removing pydub by @frrad in #59
  • use litellm for completion requests by @frrad in #60
  • minor UI enhancements by @frrad in #61
  • Pass feed image metadata through to client by @wrotte in #66
  • Various bugfixes and logic changes for parallel processing by @xmutantson in #62
  • Refresh individual feeds asynchronously if the scheduler is enabled by @wrotte in #65
  • tweaks to post page by @frrad in #68
  • Fix bug when duration is missing by @frrad in #69
  • fixing scheduler crash if image metadata was missing by @xmutantson in #72
  • Update Docker base to nvidia/cuda:12.8.1-cudnn-devel-ubuntu24.04 by @ConorIA in #67
  • Add configurable timeout and chunksize for (remote) Whisper by @duracell in #80
  • remove last mention of pickle by @frrad in #83
  • Update FAQ with info to whitelisting by @duracell in #87
  • Add volumes to docker compose by @duracell in #85
  • change requested output format to include a confidence score for each segment by @frrad in #88
  • Fix missing duration and have show them as integer by @duracell in #94
  • Fix database commit issues in download_and_process by @o1y in #93
  • docker compose for machines without nvidia gpu by @jdrbc in #90
  • generalize llm setting & gemini example by @jdrbc in #95
  • Groq Whisper V3 support by @aburkard in #71
  • format response by @frrad in #89
  • Fix docker by @jdrbc in #96

New Contributors

Full Changelog: v1.0.0...v1.1.0
