Use this guide to process many sources efficiently while preserving resumability and consistent output.

Processing a playlist

notewise process "https://youtube.com/playlist?list=PLAYLIST_ID"
notewise will:
  1. Fetch the full playlist and enumerate all video IDs
  2. Process videos concurrently (up to MAX_CONCURRENT_VIDEOS, default 5)
  3. Write each video’s notes as top-level Markdown files in the output directory
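The concurrent step above can be sketched in Python. This is a minimal illustration, not notewise's actual implementation: `process_video` is a placeholder for the real per-video pipeline (fetch transcript, summarize, write notes), and the worker bound mirrors the documented MAX_CONCURRENT_VIDEOS default of 5.

```python
from concurrent.futures import ThreadPoolExecutor

MAX_CONCURRENT_VIDEOS = 5  # documented default


def process_video(video_id: str) -> str:
    # Placeholder for the real pipeline; returns the notes filename.
    return f"{video_id}.md"


def process_playlist(video_ids: list[str]) -> list[str]:
    # At most MAX_CONCURRENT_VIDEOS videos are in flight at once;
    # results come back in playlist order.
    with ThreadPoolExecutor(max_workers=MAX_CONCURRENT_VIDEOS) as pool:
        return list(pool.map(process_video, video_ids))
```

`pool.map` preserves input order, so notes can be matched back to playlist position even though videos finish out of order.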

Batch file processing

Create a plain text file with one URL per line:
# my-course.txt
https://youtube.com/watch?v=VIDEO_ID_1
https://youtube.com/watch?v=VIDEO_ID_2
https://youtube.com/playlist?list=PLxxxxxx
Blank lines and lines starting with # are ignored. Each entry can be a video URL, playlist URL, or bare ID.
Then pass the file to process:
notewise process my-course.txt
notewise process ~/course/urls.txt -o ~/course/notes
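The parsing rules for the batch file (skip blank lines, skip `#` comments, keep everything else as an entry) amount to a few lines. This is a sketch of those rules, not notewise's own parser:

```python
def parse_batch_file(text: str) -> list[str]:
    """Return the entries from a batch file, one per non-comment line."""
    entries = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # blank lines and #-comments are ignored
        entries.append(line)  # video URL, playlist URL, or bare ID
    return entries
```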

Output structure

output/
├── First Video Title.md
├── First Video Title_quiz.md
└── Second Video Title.md
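Output filenames are derived from video titles, with `_quiz` appended for quiz files. A plausible sketch of that mapping is below; the character set stripped here is an assumption, and notewise's actual sanitization rules may differ:

```python
def note_filename(title: str, quiz: bool = False) -> str:
    # Drop characters that are unsafe in filenames on common
    # filesystems (assumed rule, not notewise's exact behavior).
    safe = "".join(c for c in title if c not in '/\\:*?"<>|').strip()
    return safe + ("_quiz.md" if quiz else ".md")
```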

Tuning concurrency

# In ~/.notewise/config.env:
MAX_CONCURRENT_VIDEOS=10

# Or per run:
MAX_CONCURRENT_VIDEOS=2 notewise process my-course.txt
Higher concurrency finishes a batch sooner but increases API spend per unit of time and can trigger YouTube rate limits. Set YOUTUBE_REQUESTS_PER_MINUTE to stay within those limits.
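A requests-per-minute cap is typically enforced with a sliding-window limiter. The sketch below shows the general technique; it is not notewise's internal implementation of YOUTUBE_REQUESTS_PER_MINUTE:

```python
import time
from collections import deque


class RateLimiter:
    """Allow at most `per_minute` acquisitions in any 60-second window."""

    def __init__(self, per_minute: int):
        self.per_minute = per_minute
        self.calls: deque[float] = deque()  # timestamps of recent calls

    def acquire(self) -> None:
        now = time.monotonic()
        # Drop timestamps older than the 60-second window.
        while self.calls and now - self.calls[0] >= 60:
            self.calls.popleft()
        if len(self.calls) >= self.per_minute:
            # Sleep until the oldest call in the window expires.
            time.sleep(60 - (now - self.calls[0]))
        self.calls.append(time.monotonic())
```

Each worker calls `acquire()` before hitting the API, so the cap holds across all concurrent videos as long as they share one limiter instance.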

Resuming an interrupted run

By default, notewise skips any video that is already in its cache and whose output files already exist, making re-runs safe:
notewise process my-course.txt   # run 1 — processes all
notewise process my-course.txt   # run 2 — skips all (output exists)
Use --force to re-process everything:
notewise process my-course.txt --force

Useful batch flags

# Generate quizzes for every video
notewise process my-course.txt --quiz

# Non-interactive for cron / CI
notewise process my-course.txt --no-ui 2>&1 | tee run.log

# Use a specific model for the whole batch
notewise process my-course.txt --model gpt-4o

# High concurrency for large batches
MAX_CONCURRENT_VIDEOS=10 notewise process urls.txt --no-ui
A video is skipped only when its cache entry records prior processing and the expected output files already exist on disk.
Lower MAX_CONCURRENT_VIDEOS when running on constrained hardware or when approaching provider or YouTube rate limits.