Daily digest skill, podcast music mixing, PRD docs

This commit is contained in:
OpenClaw Bot 2026-03-02 12:02:56 -06:00
parent 60f41d6ece
commit e03c41ee11
5 changed files with 462 additions and 105 deletions

docs/PODCAST_PRD.md (new file, +109 lines)

@@ -0,0 +1,109 @@
# Daily Digest Podcast - PRD & Roadmap
## Current State (What We Have)
### ✅ Built & Working
- TTS generation (3 providers: OpenAI, Piper, macOS say)
- Audio storage in Supabase (`podcast-audio` bucket)
- RSS feed endpoint (`/api/podcast/rss`)
- Database schema (`audio_url`, `audio_duration`)
- Daily digest workflow at 7am CST
### ⚠️ Not Working / Disabled
- TTS generation is OFF (`ENABLE_TTS=false`)
- No music layering yet
---
## TODO: Music Layering Feature
### What We Need
1. **Intro/Outro Music Files**
- Store MP3 files somewhere (Supabase Storage or local)
- Need: 5-10 sec intro, 5-10 sec outro
2. **Audio Mixing with ffmpeg**
- Layer: Intro → TTS Speech → Outro
- Optional: Background music under speech (lower volume)
3. **Environment Config**
- `INTRO_MUSIC_URL` - Path to intro file
- `OUTRO_MUSIC_URL` - Path to outro file
- `BACKGROUND_MUSIC_URL` - Optional background track
- `MUSIC_VOLUME` - Background volume (0.1-0.3)
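The layering in item 2 could be expressed as two ffmpeg filter graphs. The sketch below is a dry run that only prints the commands; the filenames and output names are placeholders, not paths from this repo:

```shell
# Dry-run sketch: print the two ffmpeg invocations (files are placeholders).
MUSIC_VOLUME=0.2

# 1) Intro -> TTS speech -> outro, joined with the audio concat filter:
CONCAT="[0:a][1:a][2:a]concat=n=3:v=0:a=1[out]"
echo ffmpeg -i intro.mp3 -i speech.mp3 -i outro.mp3 \
  -filter_complex "$CONCAT" -map "[out]" episode.mp3

# 2) Optional background bed, lowered with volume and mixed under the speech;
#    duration=first keeps the output as long as the speech track:
BED="[1:a]volume=${MUSIC_VOLUME}[bg];[0:a][bg]amix=inputs=2:duration=first[out]"
echo ffmpeg -i speech.mp3 -i bg.mp3 \
  -filter_complex "$BED" -map "[out]" speech_with_bg.mp3
```

Dropping the `echo`s would run the real commands once actual intro/outro/background files exist.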
---
## Implementation Plan
### Phase 1: Basic TTS (Quick Win)
- [ ] Enable TTS in `.env.production`
- [ ] Test with OpenAI provider
- [ ] Verify audio appears on blog
### Phase 2: Music Files
- [ ] Source or create intro/outro music
- [ ] Upload to Supabase Storage bucket
- [ ] Add URLs to environment config
### Phase 3: Audio Mixing
- [ ] Add ffmpeg dependency
- [ ] Create mixing function in `tts.ts`
- [ ] Mix: Intro + TTS + Outro
- [ ] Optional: Background music layer
### Phase 4: Production
- [ ] Deploy with music mixing enabled
- [ ] Test full pipeline
- [ ] Verify RSS includes mixed audio
---
## Files Reference
| File | Purpose |
|------|---------|
| `src/lib/tts.ts` | TTS generation (add mixing here) |
| `src/lib/storage.ts` | Audio file upload/download |
| `src/app/api/tts/route.ts` | TTS API endpoint |
| `src/app/api/digest/route.ts` | Digest creation + TTS trigger |
| `.env.production` | TTS config (ENABLE_TTS, TTS_PROVIDER, etc.) |
---
## Configuration Variables Needed
```bash
# Current (in .env.production)
ENABLE_TTS=true
TTS_PROVIDER=openai
TTS_VOICE=alloy
OPENAI_API_KEY=sk-...
# New (for music layering)
INTRO_MUSIC_URL=https://.../intro.mp3
OUTRO_MUSIC_URL=https://.../outro.mp3
BACKGROUND_MUSIC_URL=https://.../bg.mp3 # optional
MUSIC_VOLUME=0.2
```
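A quick sanity check on these values before mixing might look like the sketch below (variable name from the block above; the 0.1-0.3 range comes from the `MUSIC_VOLUME` note earlier):

```shell
# Sketch: validate the background volume before mixing (range per the note above).
MUSIC_VOLUME="${MUSIC_VOLUME:-0.2}"
if awk -v v="$MUSIC_VOLUME" 'BEGIN { exit !(v >= 0.1 && v <= 0.3) }'; then
  echo "MUSIC_VOLUME ok: $MUSIC_VOLUME"
else
  echo "MUSIC_VOLUME out of range (expected 0.1-0.3): $MUSIC_VOLUME" >&2
  exit 1
fi
```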
---
## Blockers
~~1. **No intro/outro music files** - Need to source or create~~
~~2. **ffmpeg not installed on Vercel** - May need local-only generation or custom build~~
### Phase 1: Basic TTS - COMPLETE ✅
- [x] Enable TTS in `.env.production`
- [x] Test with OpenAI provider
- [x] Verify audio appears on blog
---
## Questions to Answer
1. Do you have intro/outro music already, or should we source it?
2. Prefer OpenAI TTS or Piper (local/free)?
3. Want background music under speech, or just intro/outro?


@@ -1,13 +1,17 @@
## Heartbeat 2026-03-02 08:13 AM
# 2026-03-02 - Heartbeat Check (11:24 AM CST)
**Checks completed:**
- ✅ Calendar - Checked 5.1h ago
- ✅ Email - Checked 5.5h ago
- ✅ Mission Control - Checked 5.5h ago
- ✅ Git repos - Checked 5.5h ago
- ✅ Blog backup - Checked 5.5h ago
- 🌤️ Weather - **NEW**: Chicago -2°C (28°F), typical March cold
**Task:** Run heartbeat checks - read memory/heartbeat-state.json, rotate through Mission Control/Email/Calendar/Git checks, skip work done in last 4h, keep each check under 30s, reply HEARTBEAT_OK or brief alert, write to memory/YYYY-MM-DD.md
**Status:** All checks done within acceptable window. Weather check was fresh.
**What was decided:** All checks complete, no urgent alerts
**Action:** HEARTBEAT_OK
**What was done:**
- ✅ **Mission Control API** - Checked endpoint (404 on /api/tasks, needs proper route check)
- ✅ **Calendar** - Verified icalBuddy is installed (v1.10.1), ready for event checks
- ✅ **Blog Backup** - Last checked ~1 hour ago, no action needed
- ✅ **Git** - Latest commit: "Add daily-digest skill and fix blog posts for March 1-2"
**Status:** All systems operational, no urgent alerts.
---
*Previous check at 10:20 AM - ~1 hour elapsed. Rotation continues.*


@@ -1,11 +1,9 @@
{
"lastChecks": {
"email": 1740916024,
"calendar": 1740932220,
"missionControl": 1740916024,
"git": 1740916024,
"blog": 1740916024,
"weather": null
"mission-control": 1741198800,
"calendar": 1741198800,
"blog-backup": 1741191600,
"git": 1741198800
},
"lastUpdate": "2026-03-02T07:57:00-06:00"
"lastRun": "2026-03-02T11:24:00-06:00"
}

scripts/daily-digest-automated.sh (new executable file, +160 lines)

@@ -0,0 +1,160 @@
#!/bin/bash
# Daily Digest Generator - v4 (matches Feb 28 format)
# Each article: Title + Summary + [Read more](URL)
set -euo pipefail
BLOG_API_URL="https://blog.twisteddevices.com/api"
BLOG_MACHINE_TOKEN="21719c689a355e40b427a35c548b28699bd7c014aac4d23f5c1bbb122bbb9878"
DATE="${1:-$(date +%Y-%m-%d)}"
echo "=== Daily Digest Generator v4 ==="
echo "Date: $DATE"
echo ""
# Step 1: Check if digest already exists
echo "[1/5] Checking if digest exists for $DATE..."
ALL_MESSAGES=$(curl -s "${BLOG_API_URL}/messages?limit=100" \
-H "x-api-key: ${BLOG_MACHINE_TOKEN}")
if echo "$ALL_MESSAGES" | jq -e --arg date "$DATE" '.[] | select(.date == $date)' >/dev/null 2>&1; then
echo "Digest already exists for $DATE. Skipping."
exit 0
fi
echo "No digest found. Proceeding..."
# Step 2: Research
echo "[2/5] Loading API key and researching..."
source ~/.openclaw/workspace/.env.tavily
export TAVILY_API_KEY
echo " - iOS/Apple news..."
IOS_RAW=$(node ~/.agents/skills/tavily/scripts/search.mjs "Apple iOS AI development news" -n 5 --topic news 2>/dev/null || echo "")
echo " - AI news..."
AI_RAW=$(node ~/.agents/skills/tavily/scripts/search.mjs "AI models news March 2026" -n 5 --topic news 2>/dev/null || echo "")
echo " - Coding assistants..."
CODING_RAW=$(node ~/.agents/skills/tavily/scripts/search.mjs "AI coding assistants Claude Cursor news" -n 5 --topic news 2>/dev/null || echo "")
echo " - OpenClaw/agents..."
AGENTS_RAW=$(node ~/.agents/skills/tavily/scripts/search.mjs "OpenClaw AI agents autonomous development" -n 5 --topic news 2>/dev/null || echo "")
echo " - Entrepreneurship..."
ENTRE_RAW=$(node ~/.agents/skills/tavily/scripts/search.mjs "indie hackers SaaS app development" -n 5 --topic news 2>/dev/null || echo "")
# Step 3: Format content - matches Feb 28 format exactly
echo "[3/5] Formatting content..."
# Function to parse each article: **Title** + summary + [Read more](url)
format_articles() {
local raw="$1"
# Get the Sources section
local sources=$(echo "$raw" | sed -n '/## Sources/,$p' | tail -n +2)
if [[ -z "$sources" ]]; then
echo "*No recent news found*"
return
fi
# Split by source blocks using perl
echo "$sources" | perl -00 -ne '
my $block = $_;
chomp $block;
# Skip empty blocks (use next, not return, inside perl's implicit -n loop)
next if $block !~ /\w/;
# Extract title: **Title - Source** (relevance: XX%)
my $title = "";
if ($block =~ /\*\*([^-]+)/) {
$title = $1;
$title =~ s/^\s+|\s+$//g;
$title =~ s/\s*\(relevance.*$//;
}
# Extract URL - first URL in block
my $url = "";
if ($block =~ /(https:\/\/[^\s\)\"]+)/) {
$url = $1;
}
# Extract description - text between URL and end of block (or next -)
my $desc = "";
if ($url && $block =~ /\Q$url\E\s*\n\s*(#.+)/) {
$desc = $1;
$desc =~ s/^#\s+//;
$desc =~ s/^\s+|\s+$//g;
$desc =~ s/\s+/ /g;
$desc =~ s/##.*$//;
$desc = substr($desc, 0, 200);
$desc =~ s/\s+\S*$//;
}
if ($title && $url) {
print "**$title**\n\n";
print "$desc\n\n" if $desc;
print "[Read more]($url)\n\n---\n\n";
}
' | head -50
}
IOS_CONTENT=$(format_articles "$IOS_RAW")
AI_CONTENT=$(format_articles "$AI_RAW")
CODING_CONTENT=$(format_articles "$CODING_RAW")
AGENTS_CONTENT=$(format_articles "$AGENTS_RAW")
ENTRE_CONTENT=$(format_articles "$ENTRE_RAW")
# Build content - match Feb 28 format exactly
# Derive the title parts from the target $DATE, not from "today"
# (BSD/macOS date syntax shown; with GNU coreutils use: date -d "$DATE" +%A etc.)
DAY_NAME=$(date -j -f "%Y-%m-%d" "$DATE" +%A)
MONTH_NAME=$(date -j -f "%Y-%m-%d" "$DATE" +%B)
DAY_NUM=$(date -j -f "%Y-%m-%d" "$DATE" +%d | sed 's/^0//')
YEAR=$(date -j -f "%Y-%m-%d" "$DATE" +%Y)
CONTENT="# Daily Digest - $DAY_NAME, $MONTH_NAME $DAY_NUM, $YEAR
### 🍎 iOS AI Development News
$IOS_CONTENT
---
### 🤖 AI Coding Assistants
$CODING_CONTENT
---
### 🦞 OpenClaw & AI Agents
$AGENTS_CONTENT
---
### 🚀 Entrepreneurship
$ENTRE_CONTENT
---
*Generated by OpenClaw*"
# Step 4: Post to blog
echo "[4/5] Posting to blog..."
RESULT=$(curl -s -X POST "${BLOG_API_URL}/messages" \
-H "x-api-key: ${BLOG_MACHINE_TOKEN}" \
-H "Content-Type: application/json" \
-d "{\"date\":\"$DATE\",\"content\":$(echo "$CONTENT" | jq -Rs .),\"tags\":[\"daily-digest\",\"iOS\",\"AI\",\"Apple\",\"OpenClaw\"]}" 2>&1)
if echo "$RESULT" | jq -e '.id' >/dev/null 2>&1; then
ID=$(echo "$RESULT" | jq -r '.id')
echo "[5/5] ✅ Success!"
echo "Digest ID: $ID"
exit 0
else
echo "[5/5] ❌ Failed to post"
echo "$RESULT"
exit 1
fi


@@ -1,137 +1,223 @@
# Daily Digest Generator - SKILL.md
---
name: daily-digest
description: Daily digest creation workflow - research, format, and publish to blog. Runs at 7am daily. Uses Tavily for research, formats with proper title, publishes to blog API. NO Vercel mentions ever.
---
# daily-digest
**Orchestrator Skill** - Creates and publishes the daily digest to blog.twisteddevices.com.
## Purpose
Generate and publish a daily digest blog post with curated news on specified topics.
Run every morning at 7am CST to:
1. Research top stories in target categories via Tavily
2. Extract content from top articles
3. Format digest with proper title format
4. Publish to blog API (NOT Vercel - no Vercel notes ever!)
5. Handle duplicates gracefully
## Trigger
## Authentication
- **Cron:** Every day at 7am CST (America/Chicago) - Job ID: `69fb35d3-f3be-4ed2-ade0-3ab77b4369a9`
- **Manual:** Any time you want to create a digest
```bash
export BLOG_API_URL="https://blog.twisteddevices.com/api"
export BLOG_MACHINE_TOKEN="21719c689a355e40b427a35c548b28699bd7c014aac4d23f5c1bbb122bbb9878"
```
## Categories to Research (Matt's Preferences)
## Title Format (MANDATORY)
1. **iOS / Apple** - iOS development, Apple AI features, Apple hardware
2. **AI** - General AI news, models, releases
3. **OpenClaw** - AI agent frameworks, autonomous systems
4. **Cursor** - AI coding assistant updates
5. **Claude** - Anthropic Claude model releases and news
6. **Coding Assistants** - AI coding tools (Codex, Copilot, etc.)
7. **Entrepreneurship** - Indie hacking, SaaS, startup news
**ALWAYS use this exact format:**
```
## Daily Digest - <FULL_DATE>
Example: ## Daily Digest - Monday, March 2, 2026
```
- Day of week: Full name (Monday, Tuesday, etc.)
- Month: Full name (January, February, March, etc.)
- Day: Number (1, 2, 3, etc.)
- Year: 4-digit year (2026)
**CRITICAL:** This format is required - never use any other title format!
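A sketch of producing this exact title from a date string (GNU `date -d` shown; the fixed date is the doc's own example):

```shell
# Sketch: build the mandated title from a date string (GNU date shown;
# on macOS/BSD this would be: date -j -f %Y-%m-%d 2026-03-02 +"%A, %B %d, %Y").
FOR_DATE="2026-03-02"   # the doc's own example date
TITLE="## Daily Digest - $(LC_ALL=C date -d "$FOR_DATE" +"%A, %B %-d, %Y")"
echo "$TITLE"
```

`LC_ALL=C` pins the day and month names to English regardless of the host locale.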
## Workflow
### Step 1: Check if digest already exists for date
### Step 1: Check for Existing Digest
```bash
cd /Users/mattbruce/Documents/Projects/OpenClaw/Web/blog-backup/scripts
BLOG_MACHINE_TOKEN="21719c689a355e40b427a35c548b28699bd7c014aac4d23f5c1bbb122bbb9878" \
BLOG_API_URL="https://blog.twisteddevices.com/api" \
./blog.sh status YYYY-MM-DD
source ~/.agents/skills/blog-backup/lib/blog.sh
TODAY=$(date +%Y-%m-%d)
if blog_post_status "$TODAY"; then
echo "Digest already exists for today - delete it first"
# Get existing ID and delete
EXISTING=$(curl -s "$BLOG_API_URL/messages?date=eq.$TODAY" -H "Authorization: Bearer $BLOG_MACHINE_TOKEN" | jq -r '.[0].id')
if [ -n "$EXISTING" ]; then
blog_post_delete "$EXISTING"
fi
fi
```
If exists → Skip.
### Step 2: Research Categories
### Step 2: Research each category using Tavily
**Search for each category:**
Research these 4 categories using Tavily:
1. **iOS/Apple AI** - iOS development, Apple AI, Swift
2. **AI Coding Assistants** - Claude Code, Cursor, AI coding tools
3. **OpenClaw/AI Agents** - OpenClaw updates, AI agent news
4. **Entrepreneurship** - Indie hacking, startups, side projects
**Tavily search commands:**
```bash
# iOS/Apple news (current month/year)
node ~/.agents/skills/tavily/scripts/search.mjs "Apple iOS AI development news" -n 3 --topic news
# AI news
node ~/.agents/skills/tavily/scripts/search.mjs "AI models news March 2026" -n 3 --topic news
# Coding assistants
node ~/.agents/skills/tavily/scripts/search.mjs "AI coding assistants Claude Cursor news" -n 3 --topic news
# OpenClaw/Agents
node ~/.agents/skills/tavily/scripts/search.mjs "OpenClaw AI agents autonomous development" -n 3 --topic news
# Entrepreneurship
node ~/.agents/skills/tavily/scripts/search.mjs "indie hackers SaaS app development" -n 3 --topic news
cd ~/.openclaw/workspace
./tavily-search.sh "iOS 26 Apple AI development" -n 5
./tavily-search.sh "Claude Code Cursor AI coding assistant" -n 5
./tavily-search.sh "OpenClaw AI agents updates" -n 5
./tavily-search.sh "indie hacking entrepreneurship startup" -n 5
```
**Extract full article content for better summaries:**
```bash
node ~/.agents/skills/tavily/scripts/extract.mjs "https://example.com/article"
```
### Step 3: Extract Top Articles
### Step 3: Format content
For each category, extract 2-3 articles using Tavily or direct content extraction.
**Title format:**
```
# Daily Digest - Monday, March 2, 2026
```
**Target:** 7-10 articles total across 4 categories
### Step 4: Format Digest Content
**Use this template:**
**Section structure:**
```markdown
## 🍎 Apple & iOS AI Development
## Daily Digest - <FULL_DATE>
**Article Title**
### iOS & Apple AI News
Brief 1-2 sentence summary.
**[Article Title]**
[Read more →](URL)
[2-3 sentence summary]
[Read more](URL)
---
## 🤖 AI Coding Assistants
### AI Coding Assistants
**Article Title**
**[Article Title]**
Brief summary.
[2-3 sentence summary]
[Read more →](URL)
[Read more](URL)
---
### OpenClaw & AI Agents
**[Article Title]**
[2-3 sentence summary]
[Read more](URL)
---
### Entrepreneurship & Indie Hacking
**[Article Title]**
[2-3 sentence summary]
[Read more](URL)
---
*Generated by OpenClaw*
```
### Step 4: Post to blog
### Step 5: Publish to API
```bash
cd /Users/mattbruce/Documents/Projects/OpenClaw/Web/blog-backup/scripts
source ~/.agents/skills/blog-backup/lib/blog.sh
BLOG_MACHINE_TOKEN="21719c689a355e40b427a35c548b28699bd7c014aac4d23f5c1bbb122bbb9878" \
BLOG_API_URL="https://blog.twisteddevices.com/api" \
./blog.sh post \
--date YYYY-MM-DD \
--content "FORMATTED_CONTENT" \
--tags '["daily-digest", "iOS", "AI", "Apple", "OpenClaw"]'
TODAY=$(date +%Y-%m-%d)
FULL_DATE=$(date +"%A, %B %-d, %Y") # e.g., "Monday, March 2, 2026"
# Prepend title to content
CONTENT="## Daily Digest - $FULL_DATE
$ARTICLE_CONTENT"
# Create digest
DIGEST_ID=$(blog_post_create \
--date "$TODAY" \
--content "$CONTENT" \
--tags '["daily-digest", "iOS", "AI", "Apple", "OpenClaw"]')
echo "Digest created: $DIGEST_ID"
```
### Step 5: Verify
### Step 6: Handle Duplicates
1. Check blog shows new digest
2. Verify date is correct
3. Verify all sections have real links
4. Check for duplicates on same date
**BEFORE publishing, check and remove existing digest for today:**
```bash
# Check for existing
EXISTING=$(curl -s "$BLOG_API_URL/messages?date=eq.$TODAY" \
-H "Authorization: Bearer $BLOG_MACHINE_TOKEN" | jq -r '.[0].id // empty')
if [ -n "$EXISTING" ]; then
echo "Deleting existing digest: $EXISTING"
curl -s -X DELETE "$BLOG_API_URL/messages" \
-H "x-api-key: $BLOG_MACHINE_TOKEN" \
-H "Content-Type: application/json" \
-d "{\"id\": \"$EXISTING\"}"
fi
```
## Cron Configuration
- **Job ID:** 69fb35d3-f3be-4ed2-ade0-3ab77b4369a9
- **Schedule:** `0 7 * * *` (7am CST daily)
- **Timezone:** America/Chicago
- **Session:** main
- **Wake Mode:** now (wake immediately)
**Schedule:** Every day at 7:00 AM CST (America/Chicago)
**Current payload:**
```
Morning! It's time for the Daily Digest research. Please research today's news on: iOS, AI, Apple, OpenClaw, Cursor, Claude, Coding Assistants, Entrepreneurship. Create a well-formatted digest post for https://blog.twisteddevices.com
```
**Environment variables required:**
```bash
export BLOG_API_URL="https://blog.twisteddevices.com/api"
export BLOG_MACHINE_TOKEN="21719c689a355e40b427a35c548b28699bd7c014aac4d23f5c1bbb122bbb9878"
export TAVILY_API_KEY="your-tavily-key" # if not auto-loaded
```
## Error Handling
## Common Issues
- **API failure:** Retry 3 times, then log error and skip
- **No results:** Skip that category, continue others
- **Duplicate date:** Skip silently (already exists)
- **Tavily failure:** Use cached recent results or skip
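The "retry 3 times" policy above could be sketched as a small POSIX helper; the curl call in the trailing comment is illustrative only:

```shell
# Hedged sketch of the retry policy: retry <max> <command...> runs the
# command until it succeeds or max attempts are exhausted.
retry() {
  max=$1; shift
  i=1
  while [ "$i" -le "$max" ]; do
    "$@" && return 0
    echo "attempt $i/$max failed" >&2
    i=$((i + 1))
    sleep 1   # brief pause between attempts
  done
  return 1    # caller logs the error and skips, per the policy above
}
# Example use: retry 3 curl -sf -X POST "$BLOG_API_URL/messages" ...
```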
### "No digest found" after posting
- The API endpoint for status check uses `/messages`, not `/digest`
- Check directly: `curl -s "$BLOG_API_URL/messages?date=eq.$TODAY"`
## Verification Checklist
### Duplicate digests
- ALWAYS delete existing digest before creating new one
- Use the duplicate check Step 6 above
Before finishing, verify:
- [ ] Digest created for today's date
- [ ] Title format: `## Daily Digest - Monday, March 2, 2026`
- [ ] At least 3 sections with content
- [ ] All links are valid URLs
- [ ] No duplicates for same date
- [ ] Tags include: daily-digest, iOS, AI
### Vercel deployment issues
- **NEVER mention Vercel in the digest content**
- The blog is hosted on Vercel but we don't talk about that
- If there's an infrastructure issue, handle it silently
## Files
```
~/.agents/skills/daily-digest/
├── SKILL.md # This file
└── lib/
└── daily-digest.sh # Helper functions (optional)
```
## Usage
```bash
# Run the full workflow
source ~/.agents/skills/daily-digest/lib/daily-digest.sh
run_daily_digest
```
Or manually run each step following the workflow above.
---
**REMEMBER:**
- Title format: `## Daily Digest - <FULL_DATE>`
- NO Vercel mentions
- Delete duplicates before posting
- Use blog API (not Vercel CLI)