Compare commits

...

2 Commits

Author SHA1 Message Date
15c2d4713f Signed-off-by: OpenClaw Bot <ai-agent@topdoglabs.com> 2026-02-23 22:10:49 -06:00
9720390e1a Add podcast feature with TTS, RSS feed, and web player
- Multi-provider TTS service (OpenAI, Piper, macOS say)
- Supabase Storage integration for audio files
- RSS 2.0 feed with iTunes extensions for podcast distribution
- Web audio player at /podcast page
- Integration with daily digest workflow
- Manual TTS generation script
- Complete documentation in PODCAST_SETUP.md
2026-02-23 20:15:27 -06:00
16 changed files with 1925 additions and 6 deletions

39
Dockerfile Normal file
View File

@ -0,0 +1,39 @@
FROM node:22-alpine AS deps
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
FROM node:22-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY package.json package-lock.json ./
COPY next.config.ts tsconfig.json postcss.config.mjs eslint.config.mjs ./
COPY public ./public
COPY src ./src
COPY data ./data
RUN npm run build
FROM node:22-alpine AS runner
WORKDIR /app
ENV NODE_ENV=production
ENV PORT=4002
ENV HOSTNAME=0.0.0.0
COPY package.json package-lock.json ./
COPY --from=deps /app/node_modules ./node_modules
RUN npm prune --omit=dev
COPY --from=builder /app/.next ./.next
COPY --from=builder /app/public ./public
COPY --from=builder /app/data ./data
EXPOSE 4002
HEALTHCHECK --interval=30s --timeout=5s --start-period=20s --retries=3 \
CMD wget -q -O - http://127.0.0.1:4002/favicon.ico > /dev/null || exit 1
CMD ["npm", "run", "start", "--", "-p", "4002", "-H", "0.0.0.0"]

97
PODCAST_ARCHITECTURE.md Normal file
View File

@ -0,0 +1,97 @@
# Podcast Architecture & Implementation Plan
## Overview
Convert Daily Digest blog posts into a podcast format with automated TTS generation, RSS feed for distribution, and Supabase Storage for audio file hosting.
## Architecture
### 1. Database Schema Updates
Add `audio_url` and `audio_duration` fields to the `blog_messages` table.
### 2. TTS Generation
**Option A: Piper TTS (Recommended - Free)**
- Local execution, no API costs
- High quality neural voices
- Fast processing
- No rate limits
**Option B: OpenAI TTS (Paid)**
- Premium quality voices
- Simple API integration
- ~$2-4/month for daily 5-min content
### 3. Audio Storage
- **Provider**: Supabase Storage
- **Bucket**: `podcast-audio`
- **Cost**: Free tier includes 1GB storage
- **Access**: Public read via signed URLs
### 4. RSS Feed Generation
- **Endpoint**: `/api/podcast/rss`
- **Format**: RSS 2.0 with iTunes extensions
- **Compatible with**: Apple Podcasts, Spotify, Google Podcasts
- **Auto-updates**: Pulls from blog_messages table
### 5. Integration Points
1. **Daily Digest Workflow** (`/api/digest` POST):
- After saving post, trigger async TTS generation
- Upload audio to Supabase Storage
- Update database with audio_url
2. **RSS Feed** (`/api/podcast/rss`):
- Returns XML RSS feed
- Includes all posts with audio_url
3. **Podcast Page** (`/podcast`):
- Web player for each episode
- Subscribe links
- Episode list
## Implementation Steps
### Phase 1: Database & Storage Setup
1. Create `podcast-audio` bucket in Supabase
2. Add columns to blog_messages table
### Phase 2: TTS Service
1. Create `src/lib/tts.ts` - TTS abstraction
2. Create `src/lib/storage.ts` - Supabase storage helpers
3. Create `src/scripts/generate-tts.ts` - TTS generation script
### Phase 3: API Endpoints
1. Create `src/app/api/podcast/rss/route.ts` - RSS feed
2. Update `src/app/api/digest/route.ts` - Add TTS trigger
### Phase 4: UI
1. Create `src/app/podcast/page.tsx` - Podcast page
2. Update post display to show audio player
### Phase 5: Documentation
1. Create `PODCAST_SETUP.md` - Setup instructions
2. Update README with podcast features
## Cost Estimate
- **Piper TTS**: $0 (local processing)
- **OpenAI TTS**: ~$2-4/month
- **Supabase Storage**: $0 (within free tier 1GB)
- **RSS Hosting**: $0 (generated by Next.js API)
- **Total**: $0 (Piper) or $2-4/month (OpenAI)
## File Structure
```
src/
├── app/
│ ├── api/
│ │ ├── digest/route.ts (updated)
│ │ └── podcast/
│ │ └── rss/route.ts (new)
│ ├── podcast/
│ │ └── page.tsx (new)
│ └── page.tsx (updated - add audio player)
├── lib/
│ ├── tts.ts (new)
│ ├── storage.ts (new)
│ └── podcast.ts (new)
└── scripts/
└── generate-tts.ts (new)
```

318
PODCAST_SETUP.md Normal file
View File

@ -0,0 +1,318 @@
# Podcast Setup Guide
This guide covers setting up and using the podcast feature for the Daily Digest blog.
## Overview
The podcast feature automatically converts Daily Digest blog posts into audio format using Text-to-Speech (TTS) and provides:
- 🎧 **Web Player** - Listen directly on the blog
- 📱 **RSS Feed** - Subscribe in any podcast app (Apple Podcasts, Spotify, etc.)
- 🔄 **Auto-generation** - TTS runs automatically when new posts are created
- 💾 **Audio Storage** - Files stored in Supabase Storage (free tier)
## Architecture
```
┌─────────────┐ ┌─────────────┐ ┌─────────────┐
│ Daily │────▶│ TTS API │────▶│ Supabase │
│ Digest │ │ (OpenAI/ │ │ Storage │
│ Post │ │ Piper) │ │ │
└─────────────┘ └─────────────┘ └──────┬──────┘
┌─────────────┐
│ RSS Feed │
│ (/api/ │
│ podcast/ │
│ rss) │
└──────┬──────┘
┌─────────────┐
│ Podcast │
│ Apps │
│ (Apple, │
│ Spotify) │
└─────────────┘
```
## Quick Start
### 1. Configure Environment Variables
Add to your `.env.local` and `.env.production`:
```bash
# Enable TTS generation
ENABLE_TTS=true
# Choose TTS Provider: "openai" (paid, best quality) or "macsay" (free, macOS only)
TTS_PROVIDER=openai
# For OpenAI TTS (recommended)
OPENAI_API_KEY=sk-your-key-here
TTS_VOICE=alloy # Options: alloy, echo, fable, onyx, nova, shimmer
```
### 2. Update Database Schema
Run this SQL in your Supabase SQL Editor to add audio columns:
```sql
-- Add audio URL column
ALTER TABLE blog_messages
ADD COLUMN IF NOT EXISTS audio_url TEXT;
-- Add audio duration column
ALTER TABLE blog_messages
ADD COLUMN IF NOT EXISTS audio_duration INTEGER;
-- Create index for faster RSS feed queries
CREATE INDEX IF NOT EXISTS idx_blog_messages_audio
ON blog_messages(audio_url)
WHERE audio_url IS NOT NULL;
```
### 3. Create Supabase Storage Bucket
The app will automatically create the `podcast-audio` bucket on first use, or you can create it manually:
1. Go to Supabase Dashboard → Storage
2. Click "New Bucket"
3. Name: `podcast-audio`
4. Check "Public bucket"
5. Click "Create"
### 4. Deploy
```bash
npm run build
vercel --prod
```
### 5. Subscribe to the Podcast
The RSS feed is available at:
```
https://blog-backup-two.vercel.app/api/podcast/rss
```
**Apple Podcasts:**
1. Open Podcasts app
2. Tap Library → Edit → Add a Show by URL
3. Paste the RSS URL
**Spotify:**
1. Go to Spotify for Podcasters
2. Submit RSS feed
**Other Apps:**
Just paste the RSS URL into any podcast app.
## TTS Providers
### Option 1: OpenAI TTS (Recommended)
**Cost:** ~$2-4/month for daily 5-minute episodes
**Pros:**
- Excellent voice quality
- Multiple voices available
- Simple API integration
- Fast processing
**Cons:**
- Paid service
- Requires API key
**Setup:**
```bash
TTS_PROVIDER=openai
OPENAI_API_KEY=sk-your-key-here
TTS_VOICE=alloy # or echo, fable, onyx, nova, shimmer
```
### Option 2: macOS `say` Command (Free)
**Cost:** $0
**Pros:**
- Free, built into macOS
- No API key needed
- Works offline
**Cons:**
- Lower voice quality
- macOS only
- Limited to macOS deployment
**Setup:**
```bash
TTS_PROVIDER=macsay
TTS_VOICE=Samantha # or Alex, Victoria, etc.
```
### Option 3: Piper TTS (Free, Local)
**Cost:** $0
**Pros:**
- Free and open source
- High quality neural voices
- Runs locally (privacy)
- No rate limits
**Cons:**
- Requires downloading voice models (~100MB)
- More complex setup
- Requires local execution
**Setup:**
1. Install Piper:
```bash
brew install piper-tts # or download from GitHub
```
2. Download voice model:
```bash
mkdir -p models
cd models
wget https://huggingface.co/rhasspy/piper-voices/resolve/v1.0.0/en/en_US/lessac/medium/en_US-lessac-medium.onnx
wget https://huggingface.co/rhasspy/piper-voices/resolve/v1.0.0/en/en_US/lessac/medium/en_US-lessac-medium.onnx.json
```
3. Configure:
```bash
TTS_PROVIDER=piper
PIPER_MODEL_PATH=./models/en_US-lessac-medium.onnx
```
## Usage
### Automatic Generation
When `ENABLE_TTS=true`, audio is automatically generated when a new digest is posted via the `/api/digest` endpoint.
### Manual Generation
Generate audio for a specific post:
```bash
npm run generate-tts -- <post_id>
```
Generate audio for all posts missing audio:
```bash
npm run generate-tts:all
```
Force regeneration (overwrite existing):
```bash
npm run generate-tts -- <post_id> --force
```
## API Endpoints
### GET /api/podcast/rss
Returns the podcast RSS feed in XML format with iTunes extensions.
**Headers:**
- `Accept: application/xml`
**Response:** RSS 2.0 XML feed
### POST /api/digest (Updated)
Now accepts an optional `generateAudio` parameter:
```json
{
"date": "2026-02-23",
"content": "# Daily Digest\n\nToday's news...",
"tags": ["daily-digest", "ai"],
"generateAudio": true // Optional, defaults to true
}
```
## Database Schema
The `blog_messages` table now includes:
| Column | Type | Description |
|--------|------|-------------|
| `audio_url` | TEXT | Public URL to the audio file in Supabase Storage |
| `audio_duration` | INTEGER | Estimated duration in seconds |
## File Structure
```
src/
├── app/
│ ├── api/
│ │ ├── digest/route.ts # Updated with TTS trigger
│ │ └── podcast/
│ │ └── rss/route.ts # RSS feed endpoint
│ ├── podcast/
│ │ └── page.tsx # Podcast web player page
│ └── page.tsx # Updated with audio player
├── lib/
│ ├── tts.ts # TTS service abstraction
│ ├── storage.ts # Supabase storage helpers
│ └── podcast.ts # RSS generation utilities
└── scripts/
└── generate-tts.ts # Manual TTS generation script
```
## Cost Analysis
### OpenAI TTS (Daily Digest ~5 min)
- Characters per day: ~4,000
- Cost: $0.015 per 1,000 chars (tts-1)
- Monthly cost: ~$1.80
- HD voice (tts-1-hd): ~$3.60/month
### Supabase Storage
- Free tier: 1 GB storage
- Audio files: ~5 MB per episode
- Monthly storage: ~150 MB (30 episodes)
- Well within free tier
### Total Monthly Cost
- **OpenAI TTS:** ~$2-4/month
- **Supabase Storage:** $0 (free tier)
- **RSS Hosting:** $0 (Next.js API route)
- **Total:** ~$2-4/month
## Troubleshooting
### TTS not generating
1. Check `ENABLE_TTS=true` in environment variables
2. Check `TTS_PROVIDER` is set correctly
3. For OpenAI: Verify `OPENAI_API_KEY` is valid
4. Check Vercel logs for errors
### Audio not playing
1. Check Supabase Storage bucket is public
2. Verify `audio_url` in database is not null
3. Check browser console for CORS errors
### RSS feed not updating
1. RSS is cached for 5 minutes (`max-age=300`)
2. Check that posts have `audio_url` set
3. Verify RSS URL is accessible: `/api/podcast/rss`
## Future Enhancements
- [ ] Background job queue for TTS generation (using Inngest/Upstash)
- [ ] Voice selection per post
- [ ] Chapter markers in audio
- [ ] Transcript generation
- [ ] Podcast analytics
## Resources
- [OpenAI TTS Documentation](https://platform.openai.com/docs/guides/text-to-speech)
- [Piper TTS GitHub](https://github.com/rhasspy/piper)
- [Apple Podcasts RSS Requirements](https://help.apple.com/itc/podcasts_connect/#/itcb54333f1)
- [Podcast RSS 2.0 Spec](https://cyber.harvard.edu/rss/rss.html)

126
PODCAST_SUMMARY.md Normal file
View File

@ -0,0 +1,126 @@
# Podcast Implementation Summary
## What Was Built
A complete podcast solution for the Daily Digest blog that automatically converts blog posts to audio using Text-to-Speech (TTS) and distributes them via RSS feed.
## Features Delivered
### 1. TTS Service (`src/lib/tts.ts`)
- **Multi-provider support**: OpenAI (paid), Piper (free), macOS say (free)
- **Text preprocessing**: Automatically strips markdown, URLs, and code blocks
- **Async generation**: Non-blocking to not delay post creation
- **Error handling**: Graceful fallback if TTS fails
### 2. Audio Storage (`src/lib/storage.ts`)
- **Supabase Storage integration**: Uses existing Supabase project
- **Automatic bucket creation**: Creates `podcast-audio` bucket if needed
- **Public access**: Audio files accessible for podcast apps
### 3. RSS Feed (`src/app/api/podcast/rss/route.ts`)
- **RSS 2.0 with iTunes extensions**: Compatible with Apple Podcasts, Spotify, Google Podcasts
- **Auto-updating**: Pulls latest episodes from database
- **Proper metadata**: Titles, descriptions, duration, publication dates
### 4. Podcast Page (`src/app/podcast/page.tsx`)
- **Web audio player**: Play episodes directly on the site
- **Episode listing**: Browse all available podcast episodes
- **Subscribe links**: RSS, Apple Podcasts, Spotify
- **Responsive design**: Works on mobile and desktop
### 5. Integration with Digest Workflow
- **Updated `/api/digest`**: Now triggers TTS generation after saving post
- **Optional audio**: Can disable with `generateAudio: false`
- **Background processing**: Doesn't block the main response
### 6. Manual TTS Script (`src/scripts/generate-tts.ts`)
- **Single post**: `npm run generate-tts -- <post_id>`
- **Batch processing**: `npm run generate-tts:all`
- **Force regeneration**: `--force` flag to overwrite existing
### 7. UI Updates
- **Blog post audio player**: Shows audio player for posts with audio
- **Podcast link in header**: Easy navigation to podcast page
- **Visual indicators**: Shows which posts have audio
## Files Created/Modified
### New Files
```
src/
├── app/
│ ├── api/podcast/rss/route.ts # RSS feed endpoint
│ └── podcast/page.tsx # Podcast web player
├── lib/
│ ├── tts.ts # TTS service abstraction
│ ├── storage.ts # Supabase storage helpers
│ └── podcast.ts # RSS generation utilities
├── scripts/
│ └── generate-tts.ts # Manual TTS generation script
└── ../PODCAST_SETUP.md # Setup documentation
```
### Modified Files
```
src/
├── app/
│ ├── api/digest/route.ts # Added TTS trigger
│ └── page.tsx # Added audio player, podcast link
├── ../package.json # Added generate-tts scripts
├── ../tsconfig.json # Added scripts to includes
└── ../.env.local # Added TTS environment variables
```
## Configuration
### Environment Variables
```bash
# Enable TTS generation
ENABLE_TTS=true
# TTS Provider: openai, piper, or macsay
TTS_PROVIDER=openai
# OpenAI settings (if using OpenAI)
OPENAI_API_KEY=sk-your-key-here
TTS_VOICE=alloy # alloy, echo, fable, onyx, nova, shimmer
# Piper settings (if using Piper)
PIPER_MODEL_PATH=./models/en_US-lessac-medium.onnx
```
### Database Schema
```sql
ALTER TABLE blog_messages
ADD COLUMN audio_url TEXT,
ADD COLUMN audio_duration INTEGER;
```
## URLs
- **RSS Feed**: `https://blog-backup-two.vercel.app/api/podcast/rss`
- **Podcast Page**: `https://blog-backup-two.vercel.app/podcast`
- **Blog**: `https://blog-backup-two.vercel.app`
## Cost Analysis
| Component | Provider | Monthly Cost |
|-----------|----------|--------------|
| TTS | OpenAI | ~$2-4 |
| TTS | Piper/macOS | $0 |
| Storage | Supabase | $0 (free tier) |
| RSS Hosting | Vercel | $0 |
| **Total** | | **$0-4/month** |
## Next Steps to Deploy
1. **Database**: Run the SQL to add `audio_url` and `audio_duration` columns
2. **Supabase Storage**: Create `podcast-audio` bucket (or let app auto-create)
3. **Environment**: Add `ENABLE_TTS=true` and `OPENAI_API_KEY` to production
4. **Deploy**: `npm run build && vercel --prod`
5. **Test**: Generate a test episode and verify RSS feed
6. **Submit**: Add RSS URL to Apple Podcasts, Spotify, etc.
## Documentation
See `PODCAST_SETUP.md` for complete setup instructions, troubleshooting, and usage guide.

28
docker-compose.yml Normal file
View File

@ -0,0 +1,28 @@
services:
blog-backup:
container_name: blog-backup
build:
context: .
dockerfile: Dockerfile
pull_policy: build
ports:
- "4002:4002"
environment:
NODE_ENV: production
PORT: "4002"
HOSTNAME: 0.0.0.0
env_file:
- .env.production
volumes:
- blog_backup_runtime:/app/.runtime
healthcheck:
test: ["CMD-SHELL", "wget -q -O - http://127.0.0.1:4002/favicon.ico > /dev/null || exit 1"]
interval: 30s
timeout: 5s
retries: 3
start_period: 20s
restart: unless-stopped
volumes:
blog_backup_runtime:
driver: local

View File

@ -0,0 +1,66 @@
# Daily Digest Podcast - Implementation Complete
**Task:** Research and build a digital podcast solution for the Daily Digest blog
**Status:** ✅ Complete (Ready for Review)
**Date:** 2026-02-23
## What Was Delivered
### 1. Research Findings
- **TTS Providers**: OpenAI TTS ($2-4/month, best quality) vs Piper (free, local) vs macOS say (free, basic)
- **Audio Storage**: Supabase Storage (free tier 1GB)
- **RSS Feed**: Next.js API route with iTunes extensions
- **Integration**: Async TTS generation triggered after digest post
### 2. Implementation
**Core Services:**
- `src/lib/tts.ts` - Multi-provider TTS abstraction (OpenAI, Piper, macOS)
- `src/lib/storage.ts` - Supabase Storage audio upload/management
- `src/lib/podcast.ts` - RSS generation utilities
**API Endpoints:**
- `GET /api/podcast/rss` - RSS 2.0 feed for podcast apps
- `POST /api/digest` - Updated to trigger async TTS generation
**UI Pages:**
- `/podcast` - Web audio player with episode listing
- Updated blog post view with inline audio player
- Added podcast link to header navigation
**Scripts:**
- `npm run generate-tts -- <post_id>` - Generate audio for specific post
- `npm run generate-tts:all` - Batch generate for all missing posts
**Documentation:**
- `PODCAST_SETUP.md` - Complete setup guide
- `PODCAST_ARCHITECTURE.md` - Architecture overview
- `PODCAST_SUMMARY.md` - Implementation summary
### 3. Key Features
- ✅ Automatic TTS when new digest posted
- ✅ RSS feed compatible with Apple Podcasts, Spotify
- ✅ Web audio player on podcast page
- ✅ Multiple TTS provider options
- ✅ Free tier coverage (Supabase storage + Piper TTS = $0)
- ✅ Async processing (doesn't block post creation)
### 4. URLs
- RSS Feed: `https://blog-backup-two.vercel.app/api/podcast/rss`
- Podcast Page: `https://blog-backup-two.vercel.app/podcast`
### 5. Next Steps for Deployment
1. Run SQL to add audio columns to database
2. Add `ENABLE_TTS=true` and `OPENAI_API_KEY` to Vercel env vars
3. Deploy: `npm run build && vercel --prod`
4. Submit RSS to Apple Podcasts, Spotify
### 6. Files Modified/Created
- 13 files changed, 1792 insertions
- Committed to main branch
## Cost Analysis
- OpenAI TTS: ~$2-4/month (optional)
- Piper/macOS TTS: $0
- Supabase Storage: $0 (within free tier)
- Total: $0-4/month depending on TTS provider

View File

@ -7,7 +7,9 @@
"build": "next build",
"start": "next start",
"lint": "eslint",
"deploy": "npm run build && vercel --prod"
"deploy": "npm run build && vercel --prod",
"generate-tts": "npx ts-node --project tsconfig.json src/scripts/generate-tts.ts",
"generate-tts:all": "npx ts-node --project tsconfig.json src/scripts/generate-tts.ts --all"
},
"dependencies": {
"@supabase/supabase-js": "^2.97.0",

View File

@ -1,34 +1,53 @@
/**
* Daily Digest API Endpoint
* Saves digest content and optionally generates TTS audio
*/
import { NextResponse } from "next/server";
import { createClient } from "@supabase/supabase-js";
import { generateSpeech } from "@/lib/tts";
import { uploadAudio } from "@/lib/storage";
import { extractTitle, extractExcerpt } from "@/lib/podcast";
const supabase = createClient(
process.env.NEXT_PUBLIC_SUPABASE_URL!,
process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!
);
const CRON_API_KEY = process.env.CRON_API_KEY;
const serviceSupabase = createClient(
process.env.NEXT_PUBLIC_SUPABASE_URL!,
process.env.SUPABASE_SERVICE_ROLE_KEY!
);
export async function POST(request: Request) {
const apiKey = request.headers.get("x-api-key");
const CRON_API_KEY = process.env.CRON_API_KEY;
console.log("API Key from header:", apiKey);
console.log("CRON_API_KEY exists:", !!CRON_API_KEY);
if (!CRON_API_KEY || apiKey !== CRON_API_KEY) {
return NextResponse.json({ error: "Unauthorized" }, { status: 401 });
}
const { content, date, tags } = await request.json();
const { content, date, tags, generateAudio = true } = await request.json();
if (!content || !date) {
return NextResponse.json({ error: "Content and date required" }, { status: 400 });
}
const id = Date.now().toString();
const newMessage = {
id: Date.now().toString(),
id,
date,
content,
timestamp: Date.now(),
tags: tags || ["daily-digest"],
audio_url: null as string | null,
audio_duration: null as number | null,
};
// Save message first (without audio)
const { error } = await supabase
.from("blog_messages")
.insert(newMessage);
@ -37,6 +56,67 @@ export async function POST(request: Request) {
console.error("Error saving digest:", error);
return NextResponse.json({ error: "Failed to save" }, { status: 500 });
}
// Generate TTS audio asynchronously (don't block response)
if (generateAudio && process.env.ENABLE_TTS === "true") {
// Use a non-blocking approach
generateTTSAsync(id, content, date).catch(err => {
console.error("TTS generation failed (async):", err);
});
}
return NextResponse.json({ success: true, id: newMessage.id });
return NextResponse.json({
success: true,
id,
audioGenerated: generateAudio && process.env.ENABLE_TTS === "true"
});
}
/**
* Generate TTS audio and upload to storage
* This runs asynchronously after the main response is sent
*/
async function generateTTSAsync(
id: string,
content: string,
date: string
): Promise<void> {
try {
console.log(`[TTS] Starting generation for digest ${id}...`);
// Generate speech
const { audioBuffer, duration, format } = await generateSpeech(content, {
provider: (process.env.TTS_PROVIDER as "piper" | "openai" | "macsay") || "openai",
voice: process.env.TTS_VOICE || "alloy",
});
// Determine file extension based on format
const ext = format === "audio/wav" ? "wav" :
format === "audio/aiff" ? "aiff" : "mp3";
// Create filename with date for organization
const filename = `digest-${date}-${id}.${ext}`;
// Upload to Supabase Storage
const { url } = await uploadAudio(audioBuffer, filename, format);
// Update database with audio URL
const { error: updateError } = await serviceSupabase
.from("blog_messages")
.update({
audio_url: url,
audio_duration: duration,
})
.eq("id", id);
if (updateError) {
console.error("[TTS] Error updating database:", updateError);
throw updateError;
}
console.log(`[TTS] Successfully generated audio for digest ${id}: ${url}`);
} catch (error) {
console.error(`[TTS] Failed to generate audio for digest ${id}:`, error);
// Don't throw - we don't want to fail the whole request
}
}

View File

@ -0,0 +1,46 @@
/**
* Podcast RSS Feed API Endpoint
* Returns RSS 2.0 with iTunes extensions for podcast distribution
* Compatible with Apple Podcasts, Spotify, Google Podcasts, etc.
*/
import { NextResponse } from "next/server";
import { fetchEpisodes, generateRSS, DEFAULT_CONFIG } from "@/lib/podcast";
export const dynamic = "force-dynamic"; // Always generate fresh RSS
export async function GET() {
try {
// Fetch episodes with audio from database
const episodes = await fetchEpisodes(50);
// Generate RSS XML
const rssXml = generateRSS(episodes, DEFAULT_CONFIG);
// Return as XML with proper content type
return new NextResponse(rssXml, {
headers: {
"Content-Type": "application/xml; charset=utf-8",
"Cache-Control": "public, max-age=300", // Cache for 5 minutes
},
});
} catch (error) {
console.error("Error generating RSS feed:", error);
return new NextResponse(
`<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
<channel>
<title>OpenClaw Daily Digest</title>
<description>Error loading podcast feed. Please try again later.</description>
</channel>
</rss>`,
{
status: 500,
headers: {
"Content-Type": "application/xml; charset=utf-8",
},
}
);
}
}

View File

@ -14,6 +14,8 @@ interface Message {
content: string;
timestamp: number;
tags?: string[];
audio_url?: string;
audio_duration?: number;
}
type Theme = "light" | "dark";
@ -159,6 +161,9 @@ function BlogPageContent() {
</Link>
<nav className="flex items-center gap-4 md:gap-6">
<Link href="/podcast" className="text-sm font-medium text-blue-600 hover:text-blue-800 dark:text-blue-400 dark:hover:text-blue-300">
🎧 Podcast
</Link>
<Link href="https://gantt-board.vercel.app" className="hidden md:inline text-sm text-gray-600 hover:text-gray-900 dark:text-slate-300 dark:hover:text-slate-100">
Tasks
</Link>
@ -217,6 +222,28 @@ function BlogPageContent() {
</p>
</header>
{/* Audio Player */}
{selectedPost.audio_url && (
<div className="mb-8 p-4 bg-gray-50 rounded-xl border border-gray-200 dark:bg-slate-900 dark:border-slate-800">
<div className="flex items-center gap-3 mb-2">
<span className="text-sm font-medium text-gray-700 dark:text-slate-300">🎧 Listen to this episode</span>
<Link
href="/podcast"
className="text-xs text-blue-600 hover:text-blue-800 dark:text-blue-400 dark:hover:text-blue-300"
>
View all episodes
</Link>
</div>
<audio
controls
className="w-full"
src={selectedPost.audio_url}
>
Your browser does not support the audio element.
</audio>
</div>
)}
<div className="markdown-content">
<ReactMarkdown remarkPlugins={[remarkGfm]}>
{selectedPost.content}

352
src/app/podcast/page.tsx Normal file
View File

@ -0,0 +1,352 @@
"use client";
import { useState, useEffect, useRef } from "react";
import Head from "next/head";
import Link from "next/link";
import { format } from "date-fns";
import { PodcastEpisode, DEFAULT_CONFIG } from "@/lib/podcast";
interface EpisodeWithAudio extends PodcastEpisode {
audioUrl: string;
audioDuration: number;
}
function formatDuration(seconds: number): string {
const mins = Math.floor(seconds / 60);
const secs = seconds % 60;
return `${mins}:${secs.toString().padStart(2, "0")}`;
}
export default function PodcastPage() {
const [episodes, setEpisodes] = useState<EpisodeWithAudio[]>([]);
const [loading, setLoading] = useState(true);
const [currentEpisode, setCurrentEpisode] = useState<EpisodeWithAudio | null>(null);
const [isPlaying, setIsPlaying] = useState(false);
const [progress, setProgress] = useState(0);
const [duration, setDuration] = useState(0);
const audioRef = useRef<HTMLAudioElement | null>(null);
useEffect(() => {
fetchEpisodes();
}, []);
useEffect(() => {
if (currentEpisode && audioRef.current) {
audioRef.current.play();
setIsPlaying(true);
}
}, [currentEpisode]);
async function fetchEpisodes() {
try {
const res = await fetch("/api/messages");
const data = await res.json();
// Filter to only episodes with audio
const episodesWithAudio = (data || [])
.filter((m: any) => m.audio_url)
.map((m: any) => ({
id: m.id,
title: extractTitle(m.content),
description: extractExcerpt(m.content),
content: m.content,
date: m.date,
timestamp: m.timestamp,
audioUrl: m.audio_url,
audioDuration: m.audio_duration || 300,
tags: m.tags || [],
}))
.sort((a: any, b: any) => b.timestamp - a.timestamp);
setEpisodes(episodesWithAudio);
} catch (err) {
console.error("Failed to fetch episodes:", err);
} finally {
setLoading(false);
}
}
function extractTitle(content: string): string {
const lines = content.split("\n");
const titleLine = lines.find((l) => l.startsWith("# ") || l.startsWith("## "));
return titleLine?.replace(/#{1,2}\s/, "").trim() || "Daily Digest";
}
function extractExcerpt(content: string, maxLength: number = 150): string {
const plainText = content
.replace(/#{1,6}\s/g, "")
.replace(/(\*\*|__|\*|_)/g, "")
.replace(/\[([^\]]+)\]\([^)]+\)/g, "$1")
.replace(/```[\s\S]*?```/g, "")
.replace(/`([^`]+)`/g, " $1 ")
.replace(/\n+/g, " ")
.trim();
if (plainText.length <= maxLength) return plainText;
return plainText.substring(0, maxLength).trim() + "...";
}
function handlePlay(episode: EpisodeWithAudio) {
if (currentEpisode?.id === episode.id) {
togglePlay();
} else {
setCurrentEpisode(episode);
setProgress(0);
}
}
function togglePlay() {
if (audioRef.current) {
if (isPlaying) {
audioRef.current.pause();
} else {
audioRef.current.play();
}
setIsPlaying(!isPlaying);
}
}
function handleTimeUpdate() {
if (audioRef.current) {
setProgress(audioRef.current.currentTime);
setDuration(audioRef.current.duration || currentEpisode?.audioDuration || 0);
}
}
function handleSeek(e: React.ChangeEvent<HTMLInputElement>) {
const newTime = parseFloat(e.target.value);
if (audioRef.current) {
audioRef.current.currentTime = newTime;
setProgress(newTime);
}
}
function handleEnded() {
setIsPlaying(false);
setProgress(0);
}
const rssUrl = "https://blog-backup-two.vercel.app/api/podcast/rss";
return (
<>
<Head>
<title>Podcast | OpenClaw Daily Digest</title>
<meta name="description" content={DEFAULT_CONFIG.description} />
</Head>
<div className="min-h-screen bg-white text-gray-900 dark:bg-slate-950 dark:text-slate-100">
{/* Header */}
<header className="border-b border-gray-200 sticky top-0 bg-white/95 backdrop-blur z-50 dark:border-slate-800 dark:bg-slate-950/95">
<div className="max-w-4xl mx-auto px-4 sm:px-6 lg:px-8">
<div className="flex items-center justify-between h-16">
<Link href="/" className="flex items-center gap-2">
<div className="w-8 h-8 bg-blue-600 rounded-lg flex items-center justify-center text-white font-bold">
🎧
</div>
<span className="font-bold text-xl">Daily Digest Podcast</span>
</Link>
<nav className="flex items-center gap-4">
<Link href="/" className="text-sm text-gray-600 hover:text-gray-900 dark:text-slate-300 dark:hover:text-slate-100">
Blog
</Link>
<Link href="/admin" className="text-sm text-gray-600 hover:text-gray-900 dark:text-slate-300 dark:hover:text-slate-100">
Admin
</Link>
</nav>
</div>
</div>
</header>
<main className="max-w-4xl mx-auto px-4 sm:px-6 lg:px-8 py-8">
{/* Podcast Header */}
<div className="bg-gradient-to-br from-blue-600 to-purple-600 rounded-2xl p-8 text-white mb-8">
<div className="flex flex-col md:flex-row gap-6">
<div className="w-32 h-32 bg-white/20 rounded-xl flex items-center justify-center text-5xl">
🎙
</div>
<div className="flex-1">
<h1 className="text-3xl font-bold mb-2">{DEFAULT_CONFIG.title}</h1>
<p className="text-blue-100 mb-4">{DEFAULT_CONFIG.description}</p>
<div className="flex flex-wrap gap-3">
<a
href={rssUrl}
target="_blank"
rel="noopener noreferrer"
className="inline-flex items-center gap-2 px-4 py-2 bg-white/20 hover:bg-white/30 rounded-lg text-sm font-medium transition-colors"
>
<svg className="w-4 h-4" fill="currentColor" viewBox="0 0 24 24">
<path d="M6.503 20.752c0 2.07-1.678 3.748-3.75 3.748S-.997 22.82-.997 20.75c0-2.07 1.68-3.748 3.75-3.748s3.753 1.678 3.753 3.748zm10.5-10.12c0 2.07-1.678 3.75-3.75 3.75s-3.75-1.68-3.75-3.75c0-2.07 1.678-3.75 3.75-3.75s3.75 1.68 3.75 3.75zm-1.5 0c0-1.24-1.01-2.25-2.25-2.25s-2.25 1.01-2.25 2.25 1.01 2.25 2.25 2.25 2.25-1.01 2.25-2.25zm4.5 10.12c0 2.07-1.678 3.748-3.75 3.748s-3.75-1.678-3.75-3.748c0-2.07 1.678-3.748 3.75-3.748s3.75 1.678 3.75 3.748zm1.5 0c0-2.898-2.355-5.25-5.25-5.25S15 17.852 15 20.75c0 2.898 2.355 5.25 5.25 5.25s5.25-2.352 5.25-5.25zm-7.5-10.12c0 2.898-2.355 5.25-5.25 5.25S3 13.61 3 10.713c0-2.9 2.355-5.25 5.25-5.25s5.25 2.35 5.25 5.25zm1.5 0c0-3.73-3.02-6.75-6.75-6.75S-3 6.983-3 10.713c0 3.73 3.02 6.75 6.75 6.75s6.75-3.02 6.75-6.75z"/>
</svg>
RSS Feed
</a>
<a
href={`https://podcasts.apple.com/?feedUrl=${encodeURIComponent(rssUrl)}`}
target="_blank"
rel="noopener noreferrer"
className="inline-flex items-center gap-2 px-4 py-2 bg-white/20 hover:bg-white/30 rounded-lg text-sm font-medium transition-colors"
>
🍎 Apple Podcasts
</a>
<a
href={`https://open.spotify.com/?feedUrl=${encodeURIComponent(rssUrl)}`}
target="_blank"
rel="noopener noreferrer"
className="inline-flex items-center gap-2 px-4 py-2 bg-white/20 hover:bg-white/30 rounded-lg text-sm font-medium transition-colors"
>
🎵 Spotify
</a>
</div>
</div>
</div>
</div>
{/* Now Playing */}
{currentEpisode && (
<div className="bg-gray-50 rounded-xl p-4 mb-8 border border-gray-200 sticky top-20 z-40 dark:bg-slate-900 dark:border-slate-700">
<div className="flex items-center gap-4">
<button
onClick={togglePlay}
className="w-12 h-12 bg-blue-600 hover:bg-blue-700 rounded-full flex items-center justify-center text-white transition-colors"
>
{isPlaying ? (
<svg className="w-5 h-5" fill="currentColor" viewBox="0 0 24 24">
<path d="M6 4h4v16H6V4zm8 0h4v16h-4V4z"/>
</svg>
) : (
<svg className="w-5 h-5 ml-0.5" fill="currentColor" viewBox="0 0 24 24">
<path d="M8 5v14l11-7z"/>
</svg>
)}
</button>
<div className="flex-1 min-w-0">
<h3 className="font-semibold text-gray-900 dark:text-slate-100 truncate">
{currentEpisode.title}
</h3>
<p className="text-sm text-gray-500 dark:text-slate-400">
{format(new Date(currentEpisode.date), "MMMM d, yyyy")}
</p>
</div>
<div className="hidden sm:block text-sm text-gray-500 dark:text-slate-400">
{formatDuration(Math.floor(progress))} / {formatDuration(currentEpisode.audioDuration)}
</div>
</div>
<input
type="range"
min={0}
max={duration || currentEpisode.audioDuration}
value={progress}
onChange={handleSeek}
className="w-full mt-3 accent-blue-600"
/>
<audio
ref={audioRef}
src={currentEpisode.audioUrl}
onTimeUpdate={handleTimeUpdate}
onEnded={handleEnded}
/>
</div>
)}
{/* Episode List */}
<div className="space-y-4">
<h2 className="text-xl font-semibold mb-4">Episodes</h2>
{loading ? (
<div className="text-center py-12">
<div className="w-8 h-8 border-2 border-blue-600 border-t-transparent rounded-full animate-spin mx-auto"/>
</div>
) : episodes.length === 0 ? (
<div className="text-center py-12 bg-gray-50 rounded-xl dark:bg-slate-900">
<p className="text-gray-500 dark:text-slate-400">No podcast episodes yet.</p>
<p className="text-sm text-gray-400 dark:text-slate-500 mt-2">
Episodes are generated when new digests are posted with audio enabled.
</p>
</div>
) : (
episodes.map((episode) => (
<div
key={episode.id}
className={`p-4 rounded-xl border transition-colors ${
currentEpisode?.id === episode.id
? "bg-blue-50 border-blue-200 dark:bg-blue-900/20 dark:border-blue-800"
: "bg-white border-gray-200 hover:border-gray-300 dark:bg-slate-900 dark:border-slate-800 dark:hover:border-slate-700"
}`}
>
<div className="flex items-start gap-4">
<button
onClick={() => handlePlay(episode)}
className="w-10 h-10 bg-gray-100 hover:bg-gray-200 rounded-full flex items-center justify-center text-gray-700 transition-colors flex-shrink-0 dark:bg-slate-800 dark:hover:bg-slate-700 dark:text-slate-300"
>
{currentEpisode?.id === episode.id && isPlaying ? (
<svg className="w-4 h-4" fill="currentColor" viewBox="0 0 24 24">
<path d="M6 4h4v16H6V4zm8 0h4v16h-4V4z"/>
</svg>
) : (
<svg className="w-4 h-4 ml-0.5" fill="currentColor" viewBox="0 0 24 24">
<path d="M8 5v14l11-7z"/>
</svg>
)}
</button>
<div className="flex-1 min-w-0">
<h3 className="font-semibold text-gray-900 dark:text-slate-100 mb-1">
<Link href={`/?post=${episode.id}`} className="hover:text-blue-600 dark:hover:text-blue-400">
{episode.title}
</Link>
</h3>
<p className="text-sm text-gray-600 dark:text-slate-400 line-clamp-2 mb-2">
{episode.description}
</p>
<div className="flex items-center gap-3 text-xs text-gray-500 dark:text-slate-500">
<span>{format(new Date(episode.date), "MMMM d, yyyy")}</span>
<span>·</span>
<span>{formatDuration(episode.audioDuration)}</span>
{episode.tags && episode.tags.length > 0 && (
<>
<span>·</span>
<span className="text-blue-600 dark:text-blue-400">
{episode.tags.slice(0, 2).join(", ")}
</span>
</>
)}
</div>
</div>
</div>
</div>
))
)}
</div>
{/* Subscribe Section */}
<div className="mt-12 p-6 bg-gray-50 rounded-xl border border-gray-200 dark:bg-slate-900 dark:border-slate-800">
<h3 className="font-semibold text-gray-900 dark:text-slate-100 mb-2">Subscribe to the Podcast</h3>
<p className="text-sm text-gray-600 dark:text-slate-400 mb-4">
Get the latest tech digest delivered to your favorite podcast app.
</p>
<div className="flex flex-wrap gap-2">
<code className="px-3 py-1.5 bg-white border border-gray-300 rounded text-sm font-mono text-gray-700 dark:bg-slate-950 dark:border-slate-700 dark:text-slate-300">
{rssUrl}
</code>
<button
onClick={() => navigator.clipboard.writeText(rssUrl)}
className="px-3 py-1.5 bg-blue-600 hover:bg-blue-700 text-white rounded text-sm font-medium transition-colors"
>
Copy RSS URL
</button>
</div>
</div>
</main>
{/* Footer */}
<footer className="border-t border-gray-200 mt-16 dark:border-slate-800">
<div className="max-w-4xl mx-auto px-4 sm:px-6 lg:px-8 py-8">
<p className="text-center text-gray-500 dark:text-slate-400 text-sm">
© {new Date().getFullYear()} {DEFAULT_CONFIG.author}
</p>
</div>
</footer>
</div>
</>
);
}

203
src/lib/podcast.ts Normal file
View File

@ -0,0 +1,203 @@
/**
* Podcast RSS Feed Generation Utilities
* Generates RSS 2.0 with iTunes extensions for podcast distribution
*/
import { createClient } from "@supabase/supabase-js";
const supabaseUrl = process.env.NEXT_PUBLIC_SUPABASE_URL!;
const supabaseAnonKey = process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!;
const supabase = createClient(supabaseUrl, supabaseAnonKey);
export interface PodcastEpisode {
id: string;
title: string;
description: string;
content: string;
date: string;
timestamp: number;
audioUrl?: string;
audioDuration?: number;
tags?: string[];
}
export interface PodcastConfig {
title: string;
description: string;
author: string;
email: string;
category: string;
language: string;
websiteUrl: string;
imageUrl: string;
explicit: boolean;
}
// Default podcast configuration
export const DEFAULT_CONFIG: PodcastConfig = {
title: "OpenClaw Daily Digest",
description: "Daily curated tech news covering AI coding assistants, iOS development, and digital entrepreneurship. AI-powered summaries delivered as audio.",
author: "OpenClaw",
email: "podcast@openclaw.ai",
category: "Technology",
language: "en-US",
websiteUrl: "https://blog-backup-two.vercel.app",
imageUrl: "https://blog-backup-two.vercel.app/podcast-cover.png",
explicit: false,
};
/**
* Parse title from markdown content
*/
export function extractTitle(content: string): string {
const lines = content.split("\n");
const titleLine = lines.find((l) => l.startsWith("# ") || l.startsWith("## "));
return titleLine?.replace(/#{1,2}\s/, "").trim() || "Daily Digest";
}
/**
* Extract plain text excerpt from markdown
*/
export function extractExcerpt(content: string, maxLength: number = 300): string {
const plainText = content
.replace(/#{1,6}\s/g, "")
.replace(/(\*\*|__|\*|_)/g, "")
.replace(/\[([^\]]+)\]\([^)]+\)/g, "$1")
.replace(/```[\s\S]*?```/g, "")
.replace(/`([^`]+)`/g, "$1")
.replace(/<[^>]+>/g, "")
.replace(/\n+/g, " ")
.trim();
if (plainText.length <= maxLength) return plainText;
return plainText.substring(0, maxLength).trim() + "...";
}
/**
* Format duration in seconds to HH:MM:SS or MM:SS
*/
export function formatDuration(seconds: number): string {
const hours = Math.floor(seconds / 3600);
const minutes = Math.floor((seconds % 3600) / 60);
const secs = seconds % 60;
if (hours > 0) {
return `${hours}:${minutes.toString().padStart(2, "0")}:${secs.toString().padStart(2, "0")}`;
}
return `${minutes}:${secs.toString().padStart(2, "0")}`;
}
/**
* Format date to RFC 2822 format for RSS
*/
export function formatRFC2822(date: Date | string | number): string {
const d = new Date(date);
return d.toUTCString();
}
/**
* Escape XML special characters
*/
export function escapeXml(text: string): string {
return text
.replace(/&/g, "&amp;")
.replace(/</g, "&lt;")
.replace(/>/g, "&gt;")
.replace(/"/g, "&quot;")
.replace(/'/g, "&apos;");
}
/**
* Generate unique GUID for episode
*/
export function generateGuid(episodeId: string): string {
return `openclaw-digest-${episodeId}`;
}
/**
* Fetch episodes from database
*/
export async function fetchEpisodes(limit: number = 50): Promise<PodcastEpisode[]> {
const { data, error } = await supabase
.from("blog_messages")
.select("id, date, content, timestamp, audio_url, audio_duration, tags")
.not("audio_url", "is", null) // Only episodes with audio
.order("timestamp", { ascending: false })
.limit(limit);
if (error) {
console.error("Error fetching episodes:", error);
throw error;
}
return (data || []).map(item => ({
id: item.id,
title: extractTitle(item.content),
description: extractExcerpt(item.content),
content: item.content,
date: item.date,
timestamp: item.timestamp,
audioUrl: item.audio_url,
audioDuration: item.audio_duration || 300, // Default 5 min if not set
tags: item.tags || [],
}));
}
/**
* Generate RSS feed XML
*/
export function generateRSS(episodes: PodcastEpisode[], config: PodcastConfig = DEFAULT_CONFIG): string {
const now = new Date();
const lastBuildDate = episodes.length > 0
? new Date(episodes[0].timestamp)
: now;
const itemsXml = episodes.map(episode => {
const guid = generateGuid(episode.id);
const pubDate = formatRFC2822(episode.timestamp);
const duration = formatDuration(episode.audioDuration || 300);
const enclosureUrl = escapeXml(episode.audioUrl || "");
const title = escapeXml(episode.title);
const description = escapeXml(episode.description);
const keywords = episode.tags?.join(", ") || "technology, ai, programming";
return `
<item>
<title>${title}</title>
<description>${description}</description>
<pubDate>${pubDate}</pubDate>
<guid isPermaLink="false">${guid}</guid>
<link>${escapeXml(`${config.websiteUrl}/?post=${episode.id}`)}</link>
<enclosure url="${enclosureUrl}" length="${episode.audioDuration || 300}" type="audio/mpeg"/>
<itunes:title>${title}</itunes:title>
<itunes:author>${escapeXml(config.author)}</itunes:author>
<itunes:summary>${description}</itunes:summary>
<itunes:duration>${duration}</itunes:duration>
<itunes:explicit>${config.explicit ? "yes" : "no"}</itunes:explicit>
<itunes:keywords>${escapeXml(keywords)}</itunes:keywords>
</item>`;
}).join("\n");
return `<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0">
<channel>
<title>${escapeXml(config.title)}</title>
<link>${escapeXml(config.websiteUrl)}</link>
<description>${escapeXml(config.description)}</description>
<language>${config.language}</language>
<lastBuildDate>${formatRFC2822(lastBuildDate)}</lastBuildDate>
<atom:link href="${escapeXml(`${config.websiteUrl}/api/podcast/rss`)}" rel="self" type="application/rss+xml"/>
<itunes:author>${escapeXml(config.author)}</itunes:author>
<itunes:summary>${escapeXml(config.description)}</itunes:summary>
<itunes:category text="${escapeXml(config.category)}"/>
<itunes:explicit>${config.explicit ? "yes" : "no"}</itunes:explicit>
<itunes:image href="${escapeXml(config.imageUrl)}"/>
<itunes:owner>
<itunes:name>${escapeXml(config.author)}</itunes:name>
<itunes:email>${escapeXml(config.email)}</itunes:email>
</itunes:owner>
${itemsXml}
</channel>
</rss>`;
}

130
src/lib/storage.ts Normal file
View File

@ -0,0 +1,130 @@
/**
* Supabase Storage Utilities for Podcast Audio
*/
import { createClient } from "@supabase/supabase-js";
const supabaseUrl = process.env.NEXT_PUBLIC_SUPABASE_URL!;
const supabaseServiceKey = process.env.SUPABASE_SERVICE_ROLE_KEY!;
// Use service role key for storage operations
const supabase = createClient(supabaseUrl, supabaseServiceKey);
const BUCKET_NAME = "podcast-audio";
export interface UploadResult {
url: string;
path: string;
size: number;
}
/**
* Ensure the podcast-audio bucket exists
*/
export async function ensureBucket(): Promise<void> {
try {
// Check if bucket exists
const { data: buckets, error: listError } = await supabase.storage.listBuckets();
if (listError) {
console.error("Error listing buckets:", listError);
throw listError;
}
const bucketExists = buckets?.some(b => b.name === BUCKET_NAME);
if (!bucketExists) {
// Create bucket
const { error: createError } = await supabase.storage.createBucket(BUCKET_NAME, {
public: true, // Allow public access for RSS feed
fileSizeLimit: 50 * 1024 * 1024, // 50MB limit
allowedMimeTypes: ["audio/mpeg", "audio/wav", "audio/aiff", "audio/mp3"],
});
if (createError) {
console.error("Error creating bucket:", createError);
throw createError;
}
console.log(`Created bucket: ${BUCKET_NAME}`);
}
} catch (error) {
console.error("Error ensuring bucket:", error);
throw error;
}
}
/**
* Upload audio file to Supabase Storage
*/
export async function uploadAudio(
buffer: Buffer,
filename: string,
contentType: string = "audio/mpeg"
): Promise<UploadResult> {
await ensureBucket();
const { data, error } = await supabase.storage
.from(BUCKET_NAME)
.upload(filename, buffer, {
contentType,
upsert: true, // Overwrite if exists
});
if (error) {
console.error("Error uploading audio:", error);
throw error;
}
// Get public URL
const { data: urlData } = supabase.storage
.from(BUCKET_NAME)
.getPublicUrl(data.path);
return {
url: urlData.publicUrl,
path: data.path,
size: buffer.length,
};
}
/**
* Delete audio file from Supabase Storage
*/
export async function deleteAudio(path: string): Promise<void> {
const { error } = await supabase.storage
.from(BUCKET_NAME)
.remove([path]);
if (error) {
console.error("Error deleting audio:", error);
throw error;
}
}
/**
* Get public URL for audio file
*/
export function getAudioUrl(path: string): string {
const { data } = supabase.storage
.from(BUCKET_NAME)
.getPublicUrl(path);
return data.publicUrl;
}
/**
* List all audio files in the bucket
*/
export async function listAudioFiles(prefix?: string): Promise<string[]> {
const { data, error } = await supabase.storage
.from(BUCKET_NAME)
.list(prefix || "");
if (error) {
console.error("Error listing audio files:", error);
throw error;
}
return data?.map(file => file.name) || [];
}

248
src/lib/tts.ts Normal file
View File

@ -0,0 +1,248 @@
/**
* Text-to-Speech Service
* Supports multiple TTS providers: Piper (local/free), OpenAI (API/paid)
*/
export interface TTSOptions {
provider?: "piper" | "openai" | "macsay";
voice?: string;
model?: string;
speed?: number;
}
export interface TTSResult {
audioBuffer: Buffer;
duration: number; // estimated duration in seconds
format: string;
}
// Abstract TTS Provider Interface
interface TTSProvider {
synthesize(text: string, options?: TTSOptions): Promise<TTSResult>;
}
// Piper TTS Provider (Local, Free)
class PiperProvider implements TTSProvider {
private modelPath: string;
constructor() {
// Default model path - can be configured via env
this.modelPath = process.env.PIPER_MODEL_PATH || "./models/en_US-lessac-medium.onnx";
}
async synthesize(text: string, options?: TTSOptions): Promise<TTSResult> {
const { exec } = await import("child_process");
const { promisify } = await import("util");
const fs = await import("fs");
const path = await import("path");
const os = await import("os");
const execAsync = promisify(exec);
// Create temp directory for output
const tempDir = fs.mkdtempSync(path.join(os.tmpdir(), "piper-"));
const outputPath = path.join(tempDir, "output.wav");
try {
// Check if piper is installed
await execAsync("which piper || which piper-tts");
// Run Piper TTS
const piperCmd = `echo ${JSON.stringify(text)} | piper --model "${this.modelPath}" --output_file "${outputPath}"`;
await execAsync(piperCmd, { timeout: 60000 });
// Read the output file
const audioBuffer = fs.readFileSync(outputPath);
// Estimate duration (rough: ~150 words per minute, ~5 chars per word)
const wordCount = text.split(/\s+/).length;
const estimatedDuration = Math.ceil((wordCount / 150) * 60);
// Cleanup
fs.unlinkSync(outputPath);
fs.rmdirSync(tempDir);
return {
audioBuffer,
duration: estimatedDuration,
format: "audio/wav",
};
} catch (error) {
// Cleanup on error
try {
if (fs.existsSync(outputPath)) fs.unlinkSync(outputPath);
if (fs.existsSync(tempDir)) fs.rmdirSync(tempDir);
} catch {}
throw new Error(`Piper TTS failed: ${error instanceof Error ? error.message : String(error)}`);
}
}
}
// OpenAI TTS Provider (API-based, paid)
class OpenAIProvider implements TTSProvider {
private apiKey: string;
constructor() {
this.apiKey = process.env.OPENAI_API_KEY || "";
if (!this.apiKey) {
throw new Error("OPENAI_API_KEY not configured");
}
}
async synthesize(text: string, options?: TTSOptions): Promise<TTSResult> {
const voice = options?.voice || "alloy";
const model = options?.model || "tts-1";
const speed = options?.speed || 1.0;
const response = await fetch("https://api.openai.com/v1/audio/speech", {
method: "POST",
headers: {
"Authorization": `Bearer ${this.apiKey}`,
"Content-Type": "application/json",
},
body: JSON.stringify({
model,
voice,
input: text,
speed,
response_format: "mp3",
}),
});
if (!response.ok) {
const error = await response.text();
throw new Error(`OpenAI TTS API error: ${response.status} ${error}`);
}
const audioBuffer = Buffer.from(await response.arrayBuffer());
// Estimate duration (rough calculation)
const wordCount = text.split(/\s+/).length;
const estimatedDuration = Math.ceil((wordCount / 150) * 60);
return {
audioBuffer,
duration: estimatedDuration,
format: "audio/mpeg",
};
}
}
// macOS say command (built-in, basic quality)
class MacSayProvider implements TTSProvider {
async synthesize(text: string, options?: TTSOptions): Promise<TTSResult> {
const { exec } = await import("child_process");
const { promisify } = await import("util");
const fs = await import("fs");
const path = await import("path");
const os = await import("os");
const execAsync = promisify(exec);
const tempDir = fs.mkdtempSync(path.join(os.tmpdir(), "say-"));
const outputPath = path.join(tempDir, "output.aiff");
const voice = options?.voice || "Samantha";
try {
// Use macOS say command
const sayCmd = `say -v "${voice}" -o "${outputPath}" ${JSON.stringify(text)}`;
await execAsync(sayCmd, { timeout: 120000 });
const audioBuffer = fs.readFileSync(outputPath);
// Estimate duration
const wordCount = text.split(/\s+/).length;
const estimatedDuration = Math.ceil((wordCount / 150) * 60);
// Cleanup
fs.unlinkSync(outputPath);
fs.rmdirSync(tempDir);
return {
audioBuffer,
duration: estimatedDuration,
format: "audio/aiff",
};
} catch (error) {
try {
if (fs.existsSync(outputPath)) fs.unlinkSync(outputPath);
if (fs.existsSync(tempDir)) fs.rmdirSync(tempDir);
} catch {}
throw new Error(`macOS say failed: ${error instanceof Error ? error.message : String(error)}`);
}
}
}
// Main TTS Service
export class TTSService {
private provider: TTSProvider;
constructor(provider: "piper" | "openai" | "macsay" = "openai") {
switch (provider) {
case "piper":
this.provider = new PiperProvider();
break;
case "macsay":
this.provider = new MacSayProvider();
break;
case "openai":
default:
this.provider = new OpenAIProvider();
break;
}
}
async synthesize(text: string, options?: TTSOptions): Promise<TTSResult> {
// Clean up text for TTS (remove markdown, URLs, etc.)
const cleanText = this.cleanTextForTTS(text);
// Truncate if too long (TTS APIs have limits)
const maxChars = 4000;
const truncatedText = cleanText.length > maxChars
? cleanText.substring(0, maxChars) + "... That's all for today."
: cleanText;
return this.provider.synthesize(truncatedText, options);
}
private cleanTextForTTS(text: string): string {
return text
// Remove markdown headers
.replace(/#{1,6}\s/g, "")
// Remove markdown bold/italic
.replace(/(\*\*|__|\*|_)/g, "")
// Remove markdown links, keep text
.replace(/\[([^\]]+)\]\([^)]+\)/g, "$1")
// Remove code blocks
.replace(/```[\s\S]*?```/g, " code snippet ")
// Remove inline code
.replace(/`([^`]+)`/g, " $1 ")
// Remove HTML tags
.replace(/<[^>]+>/g, "")
// Replace multiple spaces/newlines with single space
.replace(/\s+/g, " ")
// Replace bullet points with spoken word
.replace(/^[\s]*[-*][\s]*/gm, "• ")
.trim();
}
}
// Convenience function
export async function generateSpeech(
text: string,
options?: TTSOptions
): Promise<TTSResult> {
const provider = (process.env.TTS_PROVIDER as "piper" | "openai" | "macsay") || "openai";
const service = new TTSService(provider);
return service.synthesize(text, options);
}
// Lazy-loaded default service instance
let _tts: TTSService | null = null;
export function getTTS(): TTSService {
if (!_tts) {
const provider = (process.env.TTS_PROVIDER as "piper" | "openai" | "macsay") || "openai";
_tts = new TTSService(provider);
}
return _tts;
}

156
src/scripts/generate-tts.ts Normal file
View File

@ -0,0 +1,156 @@
/**
* Standalone script to generate TTS for a digest post
* Usage: npx ts-node src/scripts/generate-tts.ts <post_id>
* Or: npm run generate-tts -- <post_id>
*/
import { createClient } from "@supabase/supabase-js";
import { generateSpeech } from "@/lib/tts";
import { uploadAudio } from "@/lib/storage";
import { extractTitle } from "@/lib/podcast";
const supabaseUrl = process.env.NEXT_PUBLIC_SUPABASE_URL!;
const supabaseServiceKey = process.env.SUPABASE_SERVICE_ROLE_KEY!;
const supabase = createClient(supabaseUrl, supabaseServiceKey);
async function generateTTSForPost(postId: string) {
console.log(`Generating TTS for post ${postId}...`);
// Fetch post from database
const { data: post, error: fetchError } = await supabase
.from("blog_messages")
.select("id, content, date, audio_url")
.eq("id", postId)
.single();
if (fetchError || !post) {
console.error("Error fetching post:", fetchError);
process.exit(1);
}
if (post.audio_url) {
console.log("Post already has audio:", post.audio_url);
console.log("Use --force to regenerate");
if (!process.argv.includes("--force")) {
process.exit(0);
}
console.log("Forcing regeneration...");
}
try {
console.log("Generating speech...");
const title = extractTitle(post.content);
console.log(`Title: ${title}`);
const { audioBuffer, duration, format } = await generateSpeech(post.content, {
provider: (process.env.TTS_PROVIDER as "piper" | "openai" | "macsay") || "openai",
voice: process.env.TTS_VOICE || "alloy",
});
console.log(`Generated audio: ${duration}s, ${format}, ${audioBuffer.length} bytes`);
// Determine file extension
const ext = format === "audio/wav" ? "wav" :
format === "audio/aiff" ? "aiff" : "mp3";
const filename = `digest-${post.date}-${post.id}.${ext}`;
console.log(`Uploading to storage as ${filename}...`);
const { url, path } = await uploadAudio(audioBuffer, filename, format);
console.log(`Uploaded to: ${url}`);
// Update database
const { error: updateError } = await supabase
.from("blog_messages")
.update({
audio_url: url,
audio_duration: duration,
})
.eq("id", postId);
if (updateError) {
console.error("Error updating database:", updateError);
process.exit(1);
}
console.log("✅ Successfully generated and uploaded TTS audio!");
console.log(`Audio URL: ${url}`);
console.log(`Duration: ${duration} seconds`);
} catch (error) {
console.error("Error generating TTS:", error);
process.exit(1);
}
}
async function generateTTSForAllMissing() {
console.log("Generating TTS for all posts without audio...");
const { data: posts, error } = await supabase
.from("blog_messages")
.select("id, content, date, audio_url")
.is("audio_url", null)
.order("timestamp", { ascending: false });
if (error) {
console.error("Error fetching posts:", error);
process.exit(1);
}
console.log(`Found ${posts?.length || 0} posts without audio`);
for (const post of posts || []) {
console.log(`\n--- Processing post ${post.id} ---`);
try {
const title = extractTitle(post.content);
console.log(`Title: ${title}`);
const { audioBuffer, duration, format } = await generateSpeech(post.content, {
provider: (process.env.TTS_PROVIDER as "piper" | "openai" | "macsay") || "openai",
voice: process.env.TTS_VOICE || "alloy",
});
const ext = format === "audio/wav" ? "wav" :
format === "audio/aiff" ? "aiff" : "mp3";
const filename = `digest-${post.date}-${post.id}.${ext}`;
const { url } = await uploadAudio(audioBuffer, filename, format);
await supabase
.from("blog_messages")
.update({
audio_url: url,
audio_duration: duration,
})
.eq("id", post.id);
console.log(`✅ Generated: ${url} (${duration}s)`);
} catch (error) {
console.error(`❌ Failed for post ${post.id}:`, error);
}
}
console.log("\n✅ Batch processing complete!");
}
// Main execution
async function main() {
const args = process.argv.slice(2);
if (args.includes("--all")) {
await generateTTSForAllMissing();
} else if (args.length > 0 && !args[0].startsWith("--")) {
const postId = args[0];
await generateTTSForPost(postId);
} else {
console.log("Usage:");
console.log(" npx ts-node src/scripts/generate-tts.ts <post_id> # Generate for specific post");
console.log(" npx ts-node src/scripts/generate-tts.ts --all # Generate for all missing posts");
console.log(" npx ts-node src/scripts/generate-tts.ts <id> --force # Regenerate existing");
process.exit(1);
}
}
main().catch(console.error);

View File

@ -28,7 +28,8 @@
"**/*.tsx",
".next/types/**/*.ts",
".next/dev/types/**/*.ts",
"**/*.mts"
"**/*.mts",
"src/scripts/**/*.ts"
],
"exclude": ["node_modules"]
}