refactor: Update skills to use API endpoints with machine token auth
- gantt-tasks: Replace Supabase REST with API calls using GANTT_MACHINE_TOKEN
- mission-control-docs: Replace Supabase REST with API calls using MC_MACHINE_TOKEN
- Both skills now follow the API-centric architecture
- Updated SKILL.md documentation for both

This ensures consistency with the CLI auth pattern and provides a single source of truth through the API endpoints.
parent 81effddf10
commit 22aca2d095

skills/gantt-tasks/SKILL.md (new file, 357 lines)
@@ -0,0 +1,357 @@
---
name: gantt-tasks
description: Full CRUD operations for Gantt Board tasks including comments, attachments, and sprint management. Uses Gantt Board API with machine token authentication.
---

# gantt-tasks

**Module Skill** - Complete task management for Gantt Board via API.

## Purpose

Provides all operations needed to work with Gantt Board tasks: create, read, update, delete, comments, attachments, and sprint queries. Used by orchestrator skills and agents.

## Authentication

Uses a **machine token** for server-to-server API calls:

```bash
export GANTT_MACHINE_TOKEN="50cd5e8fe3f895353f97c9ee64052c0b689b4eedf79259746413734d0a163cf8"
export GANTT_API_URL="https://gantt-board.vercel.app/api"
```

**Architecture:** Skills → API (with machine token) → Database

This follows the API-centric pattern where all business logic lives in API routes.
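Every call the skill makes reduces to `curl` with a bearer header. As a quick sanity check, this sketch prints the request that pattern produces; `gantt_curl_preview` is a hypothetical helper for illustration, not part of the library, and the token value is a placeholder:

```bash
#!/usr/bin/env bash
# Illustrative dry run: build (but do not execute) the curl invocation
# the machine-token pattern implies. Values fall back to placeholders.
GANTT_API_URL="${GANTT_API_URL:-https://gantt-board.vercel.app/api}"
GANTT_MACHINE_TOKEN="${GANTT_MACHINE_TOKEN:-example-token}"

gantt_curl_preview() {
  local method="$1" endpoint="$2"
  # Print the request for inspection instead of sending it
  printf 'curl -s -X %s -H "Authorization: Bearer %s" %s%s\n' \
    "$method" "$GANTT_MACHINE_TOKEN" "$GANTT_API_URL" "$endpoint"
}

gantt_curl_preview GET /tasks
```

Running the preview before wiring a new endpoint makes it easy to confirm the URL and header are assembled as expected.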
## Usage

```bash
source ~/.agents/skills/gantt-tasks/lib/tasks.sh

# Create task
TASK_ID=$(task_create --title "Fix bug" --project "Gantt Board" --sprint current)

# Add comment
task_add_comment "$TASK_ID" "Starting work on this"

# Attach file
task_attach_file "$TASK_ID" "/path/to/file.md"
```
## Functions

### Task CRUD

#### `task_create([options])`

Create a new task.

**Options:**
- `--title` - Task title (required)
- `--description` - Task description (optional)
- `--type` - Task type: feature, bug, research, task (default: task)
- `--status` - Status: open, todo, in-progress, review, done (default: open)
- `--priority` - Priority: low, medium, high, critical (default: medium)
- `--project` - Project name or ID (optional, auto-detects "current")
- `--sprint` - Sprint name or ID, or "current" for the current sprint (optional)
- `--assignee` - Assignee name or ID (default: Max)
- `--due-date` - Due date, YYYY-MM-DD (optional)
- `--tags` - Tags as a comma-separated string (optional)

**Returns:** Task ID on success

**Example:**
```bash
TASK_ID=$(task_create \
  --title "Research API options" \
  --description "Compare REST vs GraphQL" \
  --type research \
  --priority high \
  --sprint current \
  --tags "api,research")
```
#### `task_get(TASK_ID)`

Get full task details, including comments.

**Parameters:**
- `TASK_ID` - Task UUID

**Returns:** Task JSON

**Example:**
```bash
task=$(task_get "a1b2c3d4-...")
echo "$task" | jq -r '.title'
```

#### `task_update(TASK_ID, [options])`

Update task fields.

**Parameters:**
- `TASK_ID` - Task UUID
- Options are the same as `task_create` (project/sprint are usually left unchanged)

**Example:**
```bash
task_update "$TASK_ID" --status in-progress --priority critical
```

#### `task_delete(TASK_ID)`

Delete a task (permanent).

**Example:**
```bash
task_delete "$TASK_ID"
```
#### `task_list([filters])`

List tasks with optional filters.

**Filters:**
- `--status` - Filter by status (comma-separated: open,todo,in-progress)
- `--project` - Filter by project name/ID
- `--sprint` - Filter by sprint name/ID, or "current"
- `--assignee` - Filter by assignee
- `--priority` - Filter by priority
- `--json` - Output as JSON

**Example:**
```bash
# List current sprint tasks
task_list --sprint current --status open,todo,in-progress

# List my high priority tasks
task_list --assignee Max --priority high
```
### Comments

#### `task_add_comment(TASK_ID, TEXT, [OPTIONS])`

Add a comment to a task.

**Parameters:**
- `TASK_ID` - Task UUID
- `TEXT` - Comment text (supports Markdown)
- `--checklist` - Format the comment as a checklist automatically
- `--status` - Add a status prefix (in-progress, review, etc.)

**Example:**
```bash
# Simple comment
task_add_comment "$TASK_ID" "Found the issue in line 42"

# Checklist comment with status
task_add_comment "$TASK_ID" \
  "Fixed the bug\n\nNow testing..." \
  --checklist \
  --status in-progress
```
#### `task_update_comment(TASK_ID, COMMENT_ID, TEXT)`

Update an existing comment (for progress tracking).

**Parameters:**
- `TASK_ID` - Task UUID
- `COMMENT_ID` - Comment UUID to update
- `TEXT` - New comment text

**Example:**
```bash
# Update progress comment
COMMENT_ID=$(echo "$TASK" | jq -r '.comments[-1].id')

task_update_comment "$TASK_ID" "$COMMENT_ID" \
  "## $(date +%Y-%m-%d) 🔄 In Progress\n\n### ✅ Completed\n- [x] Phase 1\n\n### 🔄 In Progress\n- [ ] Phase 2"
```

**Auto-format:**
If `--checklist` is passed, the comment is formatted as:
```markdown
## [YYYY-MM-DD HH:MM] 🔄 [STATUS]

### ✅ Completed
- [x] Previous item

### 🔄 In Progress
- [x] Current item: [TEXT]

### 📋 Remaining
- [ ] Next step

### 📊 Progress: X/Y tasks complete
```
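The header line of that format can be sketched as follows; `render_header` is a hypothetical illustration rather than the library's actual code, and the timestamp is fixed here so the output is reproducible:

```bash
#!/usr/bin/env bash
# Illustrative sketch of the auto-format header:
# "## [timestamp] 🔄 <status label, or "Update" when none was given>"
render_header() {
  local now="$1" status="$2"
  printf '## [%s] 🔄 %s\n' "$now" "${status:-Update}"
}

render_header "2025-01-15 09:30" "in-progress"
# → ## [2025-01-15 09:30] 🔄 in-progress
```

In the real library the timestamp comes from `date +"%Y-%m-%d %H:%M"` at call time.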
#### `task_reply_to_comment(TASK_ID, COMMENT_ID, TEXT)`

Reply to an existing comment.

**Example:**
```bash
task_reply_to_comment "$TASK_ID" "$COMMENT_ID" "Great point! I'll check that."
```
### Attachments

#### `task_attach_file(TASK_ID, FILE_PATH, [OPTIONS])`

Attach a file to a task.

**Parameters:**
- `TASK_ID` - Task UUID
- `FILE_PATH` - Path to the file
- `--description` - Attachment description (optional)

**Example:**
```bash
# Attach markdown file
task_attach_file "$TASK_ID" "/tmp/plan.md" --description "Implementation plan"

# Attach any file type
task_attach_file "$TASK_ID" "/tmp/screenshot.png"
```

**Note:** Large files should be stored elsewhere and linked.
### Sprint Management

#### `sprint_get_current()`

Get the current sprint (the one containing today's date).

**Returns:** Sprint ID

**Example:**
```bash
SPRINT_ID=$(sprint_get_current)
echo "Current sprint: $SPRINT_ID"
```

#### `sprint_list([OPTIONS])`

List all sprints.

**Options:**
- `--active` - Only active sprints
- `--project` - Filter by project

**Example:**
```bash
sprint_list --active
```
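A property worth noting: ISO-8601 dates (YYYY-MM-DD) sort lexicographically, so the "contains today's date" check needs only plain string comparison, no date parsing. A minimal sketch, with `in_sprint_window` as an illustrative helper rather than part of the library:

```bash
#!/usr/bin/env bash
# ISO-8601 dates compare correctly as strings, so
# "start <= today <= end" is just two string comparisons.
in_sprint_window() {
  local start="$1" end="$2" today="$3"
  [[ "$start" < "$today" || "$start" == "$today" ]] &&
  [[ "$today" < "$end" || "$today" == "$end" ]]
}

in_sprint_window "2025-01-06" "2025-01-19" "2025-01-15" && echo "in window"
# → in window
```

This is the same trick the library's jq filter relies on when it writes `.startDate <= $today and .endDate >= $today`.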
### Project Resolution

#### `resolve_project_id(PROJECT_NAME)`

Convert a project name to its ID.

**Example:**
```bash
PROJECT_ID=$(resolve_project_id "Gantt Board")
```

#### `resolve_user_id(USER_NAME)`

Convert a user name to its ID.

**Example:**
```bash
USER_ID=$(resolve_user_id "Max")
```
## File Structure

```
~/.agents/skills/gantt-tasks/
├── SKILL.md        # This documentation
└── lib/
    └── tasks.sh    # All task functions
```
## Integration Example

Complete workflow:

```bash
# Set authentication
export GANTT_MACHINE_TOKEN="50cd5e8fe3f895353f97c9ee64052c0b689b4eedf79259746413734d0a163cf8"
export GANTT_API_URL="https://gantt-board.vercel.app/api"

source ~/.agents/skills/gantt-tasks/lib/tasks.sh

# 1. Create task
TASK_ID=$(task_create \
  --title "Research URL extraction options" \
  --type research \
  --priority high \
  --sprint current)

echo "Created task: $TASK_ID"

# 2. Add initial comment
task_add_comment "$TASK_ID" "Starting research phase" --checklist

# 3. Update with progress
task_update "$TASK_ID" --status in-progress
task_add_comment "$TASK_ID" "Evaluated 3 options:\n1. Scrapling\n2. Tavily\n3. web_fetch" \
  --checklist \
  --status in-progress

# 4. Attach recommendation
cat > /tmp/recommendation.md << 'EOF'
## Recommendation

Use Scrapling as primary, Tavily as fallback.

- Scrapling handles JavaScript sites
- Tavily is fast for articles
- web_fetch for simple sites
EOF

task_attach_file "$TASK_ID" /tmp/recommendation.md --description "Research findings"

# 5. Mark ready for review
task_update "$TASK_ID" --status review
task_add_comment "$TASK_ID" "Research complete. Recommendation attached." \
  --checklist

echo "Task ready for review: https://gantt-board.vercel.app/tasks/$TASK_ID"
```
## CLI Alternative

For interactive/command-line use, use the Gantt Board CLI:

```bash
# Set auth for the CLI
export GANTT_MACHINE_TOKEN="50cd5e8fe3f895353f97c9ee64052c0b689b4eedf79259746413734d0a163cf8"
export API_URL="https://gantt-board.vercel.app/api"

cd /Users/mattbruce/Documents/Projects/OpenClaw/Web/gantt-board

# Create task
./scripts/gantt.sh task create "Title" --type research --sprint current

# List tasks
./scripts/gantt.sh task list --sprint current --status in-progress

# Get task
./scripts/gantt.sh task get "$TASK_ID"

# Update task
./scripts/gantt.sh task update "$TASK_ID" --status review
```

**Note:** The skill library and the CLI use the same API endpoints with machine token authentication.

---

**Note:** This is a low-level module skill, used by orchestrator skills rather than directly by users.
skills/gantt-tasks/lib/tasks.sh (new file, 495 lines)

@@ -0,0 +1,495 @@
#!/bin/bash
# gantt-tasks library - Full CRUD for Gantt Board tasks
# Location: ~/.agents/skills/gantt-tasks/lib/tasks.sh
# Refactored to use API endpoints with machine token auth

# Configuration
GANTT_API_URL="${GANTT_API_URL:-https://gantt-board.vercel.app/api}"
GANTT_MACHINE_TOKEN="${GANTT_MACHINE_TOKEN:-}"
MAX_ID="9c29cc99-81a1-4e75-8dff-cd7cc5ceb5aa"  # Max user ID

# Debug function
debug() {
    echo "[DEBUG] $*" >&2
}

# Error handler
error_exit() {
    echo "❌ $1" >&2
    return 1
}

# Make API call with machine token
_api_call() {
    local method="$1"
    local endpoint="$2"
    local data="${3:-}"

    if [[ -z "$GANTT_MACHINE_TOKEN" ]]; then
        error_exit "GANTT_MACHINE_TOKEN not set"
        return 1
    fi

    local url="${GANTT_API_URL}${endpoint}"
    local curl_opts=(
        -s
        -H "Content-Type: application/json"
        -H "Authorization: Bearer ${GANTT_MACHINE_TOKEN}"
    )

    if [[ -n "$data" ]]; then
        curl_opts+=(-d "$data")
    fi

    curl "${curl_opts[@]}" -X "$method" "$url" 2>/dev/null
}
# Resolve project name to ID
resolve_project_id() {
    local PROJECT_NAME="$1"

    # Special case: Research Project
    if [[ "$PROJECT_NAME" == "Research" || "$PROJECT_NAME" == "Research Project" ]]; then
        echo "a1b2c3d4-0003-0000-0000-000000000003"
        return 0
    fi

    # Query by name via API (case-insensitive substring match)
    local RESULT=$(_api_call "GET" "/projects" | jq -r --arg q "$PROJECT_NAME" '
        .projects
        | map(select((.name // "") | ascii_downcase | contains($q | ascii_downcase)))
        | .[0].id // empty
    ')

    if [[ -n "$RESULT" ]]; then
        echo "$RESULT"
        return 0
    fi

    # Return empty if not found
    echo ""
    return 1
}
# Resolve user name to ID
resolve_user_id() {
    local USER_NAME="$1"

    # Current mappings
    case "$USER_NAME" in
        "Max"|"max") echo "9c29cc99-81a1-4e75-8dff-cd7cc5ceb5aa" ;;
        "Matt"|"matt"|"Matt Bruce") echo "0a3e400c-3932-48ae-9b65-f3f9c6f26fe9" ;;
        *) echo "$MAX_ID" ;;  # Default to Max
    esac
}
# Get current sprint (contains today's date).
# ISO dates compare correctly as strings, so <= / >= work in jq.
sprint_get_current() {
    local TODAY=$(date +%Y-%m-%d)

    # Get current sprint via API
    local RESULT=$(_api_call "GET" "/sprints" | jq -r --arg today "$TODAY" '
        .sprints
        | map(select(.startDate <= $today and .endDate >= $today))
        | .[0].id // empty
    ')

    if [[ -n "$RESULT" && "$RESULT" != "null" ]]; then
        echo "$RESULT"
        return 0
    fi

    # Fallback: get the most recent sprint by start date
    RESULT=$(_api_call "GET" "/sprints" | jq -r '
        .sprints
        | sort_by(.startDate)
        | reverse
        | .[0].id // empty
    ')

    echo "$RESULT"
}
# Resolve sprint name/ID ("current", a UUID, or a sprint name)
resolve_sprint_id() {
    local SPRINT="$1"

    if [[ "$SPRINT" == "current" ]]; then
        sprint_get_current
        return 0
    fi

    # If it looks like a UUID, use it directly
    if [[ "$SPRINT" =~ ^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$ ]]; then
        echo "$SPRINT"
        return 0
    fi

    # Query by name via API
    local RESULT=$(_api_call "GET" "/sprints" | jq -r --arg q "$SPRINT" '
        .sprints
        | map(select((.name // "") | ascii_downcase | contains($q | ascii_downcase)))
        | .[0].id // empty
    ')

    echo "$RESULT"
}
# Create a new task
task_create() {
    local TITLE=""
    local DESCRIPTION=""
    local TYPE="task"
    local STATUS="open"
    local PRIORITY="medium"
    local PROJECT=""
    local SPRINT=""
    local ASSIGNEE="Max"
    local DUE_DATE=""
    local TAGS=""

    # Parse arguments
    while [[ $# -gt 0 ]]; do
        case $1 in
            --title) TITLE="$2"; shift 2 ;;
            --description) DESCRIPTION="$2"; shift 2 ;;
            --type) TYPE="$2"; shift 2 ;;
            --status) STATUS="$2"; shift 2 ;;
            --priority) PRIORITY="$2"; shift 2 ;;
            --project) PROJECT="$2"; shift 2 ;;
            --sprint) SPRINT="$2"; shift 2 ;;
            --assignee) ASSIGNEE="$2"; shift 2 ;;
            --due-date) DUE_DATE="$2"; shift 2 ;;
            --tags) TAGS="$2"; shift 2 ;;
            *) shift ;;
        esac
    done

    # Validate (braces so that return actually fires; error_exit itself returns 1)
    [[ -z "$TITLE" ]] && { error_exit "--title is required"; return 1; }

    # Resolve IDs
    local PROJECT_ID="a1b2c3d4-0003-0000-0000-000000000003"
    if [[ -n "$PROJECT" ]]; then
        PROJECT_ID=$(resolve_project_id "$PROJECT")
    fi

    local SPRINT_ID=""
    if [[ -n "$SPRINT" ]]; then
        SPRINT_ID=$(resolve_sprint_id "$SPRINT")
    fi

    local ASSIGNEE_ID=$(resolve_user_id "$ASSIGNEE")

    # Build JSON payload
    local JSONPayload=$(jq -n \
        --arg title "$TITLE" \
        --arg type "$TYPE" \
        --arg status "$STATUS" \
        --arg priority "$PRIORITY" \
        --arg assigneeId "$ASSIGNEE_ID" \
        --arg projectId "$PROJECT_ID" \
        '{
            title: $title,
            type: $type,
            status: $status,
            priority: $priority,
            assigneeId: $assigneeId,
            projectId: $projectId,
            comments: [],
            tags: []
        }')

    # Add optional fields
    [[ -n "$DESCRIPTION" ]] && JSONPayload=$(echo "$JSONPayload" | jq --arg desc "$DESCRIPTION" '. + {description: $desc}')
    [[ -n "$SPRINT_ID" ]] && JSONPayload=$(echo "$JSONPayload" | jq --arg sid "$SPRINT_ID" '. + {sprintId: $sid}')
    [[ -n "$DUE_DATE" ]] && JSONPayload=$(echo "$JSONPayload" | jq --arg dd "$DUE_DATE" '. + {dueDate: $dd}')
    # Split comma-separated tags into a JSON array with jq (safe for special characters)
    [[ -n "$TAGS" ]] && JSONPayload=$(echo "$JSONPayload" | jq --argjson tags "$(printf '%s' "$TAGS" | jq -Rc 'split(",")')" '. + {tags: $tags}')

    # Create task via API
    local Response=$(_api_call "POST" "/tasks" "{\"task\": $JSONPayload}")

    local TaskId=$(echo "$Response" | jq -r '.task.id // .id // empty')

    if [[ -n "$TaskId" && "$TaskId" != "null" ]]; then
        echo "$TaskId"
        return 0
    fi

    error_exit "Failed to create task: $(echo "$Response" | jq -r '.error // "Unknown error"')"
    return 1
}
# Get task details (fetches all tasks and filters client-side)
task_get() {
    local TASK_ID="$1"

    local Response=$(_api_call "GET" "/tasks")
    echo "$Response" | jq --arg id "$TASK_ID" '.tasks | map(select(.id == $id)) | .[0] // empty'
}
# List tasks with filters
task_list() {
    local StatusFilter=""
    local ProjectFilter=""
    local SprintFilter=""
    local AssigneeFilter=""
    local PriorityFilter=""
    local JsonOutput=false

    # Parse arguments
    while [[ $# -gt 0 ]]; do
        case $1 in
            --status) StatusFilter="$2"; shift 2 ;;
            --project) ProjectFilter="$2"; shift 2 ;;
            --sprint) SprintFilter="$2"; shift 2 ;;
            --assignee) AssigneeFilter="$2"; shift 2 ;;
            --priority) PriorityFilter="$2"; shift 2 ;;
            --json) JsonOutput=true; shift ;;
            *) shift ;;
        esac
    done

    # Get all tasks via API
    local Response=$(_api_call "GET" "/tasks")
    local Tasks=$(echo "$Response" | jq '.tasks // []')

    # Apply filters in jq
    local Filtered="$Tasks"

    if [[ -n "$StatusFilter" ]]; then
        # Handle comma-separated status values: split the *filter string*,
        # not the input array (the original applied split to the input)
        if [[ "$StatusFilter" =~ , ]]; then
            Filtered=$(echo "$Filtered" | jq --arg statuses "$StatusFilter" '
                ($statuses | split(",")) as $statusList
                | map(select(.status as $s | $statusList | index($s)))
            ')
        else
            Filtered=$(echo "$Filtered" | jq --arg status "$StatusFilter" 'map(select(.status == $status))')
        fi
    fi

    if [[ -n "$ProjectFilter" ]]; then
        local ProjectId=$(resolve_project_id "$ProjectFilter")
        [[ -n "$ProjectId" ]] && Filtered=$(echo "$Filtered" | jq --arg pid "$ProjectId" 'map(select(.projectId == $pid))')
    fi

    if [[ -n "$SprintFilter" ]]; then
        local SprintId=$(resolve_sprint_id "$SprintFilter")
        [[ -n "$SprintId" ]] && Filtered=$(echo "$Filtered" | jq --arg sid "$SprintId" 'map(select(.sprintId == $sid))')
    fi

    if [[ -n "$AssigneeFilter" ]]; then
        local AssigneeId=$(resolve_user_id "$AssigneeFilter")
        Filtered=$(echo "$Filtered" | jq --arg aid "$AssigneeId" 'map(select(.assigneeId == $aid))')
    fi

    if [[ -n "$PriorityFilter" ]]; then
        Filtered=$(echo "$Filtered" | jq --arg pri "$PriorityFilter" 'map(select(.priority == $pri))')
    fi

    if [[ "$JsonOutput" == true ]]; then
        echo "{\"tasks\": $Filtered}"
    else
        # Pretty print: status | priority | title | id
        echo "$Filtered" | jq -r '.[] | "\(.status) | \(.priority) | \(.title) | \(.id)"' 2>/dev/null
    fi
}
# Update task
task_update() {
    local TASK_ID="$1"
    shift

    # First get the current task (braces so the early return actually fires)
    local CurrentTask=$(task_get "$TASK_ID")
    [[ -z "$CurrentTask" ]] && { error_exit "Task not found: $TASK_ID"; return 1; }

    # Build updates
    local Updates="{}"

    while [[ $# -gt 0 ]]; do
        case $1 in
            --title) Updates=$(echo "$Updates" | jq --arg v "$2" '. + {title: $v}'); shift 2 ;;
            --description) Updates=$(echo "$Updates" | jq --arg v "$2" '. + {description: $v}'); shift 2 ;;
            --status) Updates=$(echo "$Updates" | jq --arg v "$2" '. + {status: $v}'); shift 2 ;;
            --priority) Updates=$(echo "$Updates" | jq --arg v "$2" '. + {priority: $v}'); shift 2 ;;
            --due-date) Updates=$(echo "$Updates" | jq --arg v "$2" '. + {dueDate: $v}'); shift 2 ;;
            *) shift ;;
        esac
    done

    # Merge current task with updates
    local MergedTask=$(echo "$CurrentTask" | jq --argjson updates "$Updates" '. + $updates')

    # Send update via API
    local Payload=$(jq -n --argjson task "$MergedTask" '{task: $task}')
    local Response=$(_api_call "POST" "/tasks" "$Payload")

    if echo "$Response" | jq -e '.error' >/dev/null 2>&1; then
        error_exit "Update failed: $(echo "$Response" | jq -r '.error')"
        return 1
    fi

    echo "✅ Task updated"
}

# Delete task
task_delete() {
    local TASK_ID="$1"

    _api_call "DELETE" "/tasks" "{\"id\": \"$TASK_ID\"}" >/dev/null

    echo "✅ Task deleted"
}
# Add comment to task
task_add_comment() {
    local TASK_ID="$1"
    local TEXT="$2"
    local UseChecklist=false
    local StatusPrefix=""

    shift 2

    while [[ $# -gt 0 ]]; do
        case $1 in
            --checklist) UseChecklist=true; shift ;;
            --status) StatusPrefix="$2"; shift 2 ;;
            *) shift ;;
        esac
    done

    # Format comment header: timestamp, emoji, then the status label
    # (the label already carries the status, so the emoji stays fixed)
    local Now=$(date +"%Y-%m-%d %H:%M")
    local StatusEmoji="🔄"

    local CommentText
    if [[ "$UseChecklist" == true ]]; then
        CommentText=$(cat <<EOF
## [$Now] $StatusEmoji ${StatusPrefix:-Update}

### ✅ Completed
- [x] Previous work

### 🔄 In Progress
- [x] $TEXT

### 📋 Remaining
- [ ] Next step

### 📊 Progress: 1/1 tasks
EOF
)
    else
        CommentText="[$Now] $TEXT"
    fi

    # Get current task with comments
    local CurrentTask=$(task_get "$TASK_ID")
    local CurrentComments=$(echo "$CurrentTask" | jq '.comments // []')

    # Build new comment (fall back to an epoch timestamp if uuidgen is missing)
    local NewComment=$(jq -n \
        --arg id "$(uuidgen 2>/dev/null || date +%s)" \
        --arg text "$CommentText" \
        --arg author "$MAX_ID" \
        '{
            id: $id,
            text: $text,
            createdAt: (now | todate),
            commentAuthorId: $author,
            replies: []
        }')

    # Append to comments
    local UpdatedComments=$(echo "$CurrentComments" | jq --argjson new "$NewComment" '. + [$new]')

    # Merge and update task
    local MergedTask=$(echo "$CurrentTask" | jq --argjson comments "$UpdatedComments" '. + {comments: $comments}')
    local Payload=$(jq -n --argjson task "$MergedTask" '{task: $task}')

    _api_call "POST" "/tasks" "$Payload" >/dev/null

    echo "✅ Comment added"
}
# Attach file to task (stores content in attachments field)
task_attach_file() {
    local TASK_ID="$1"
    local FILE_PATH="$2"
    local Description="Attached file"

    shift 2
    # Accept the documented --description flag (a bare third argument also works);
    # note the description is not currently stored in the attachment object
    while [[ $# -gt 0 ]]; do
        case $1 in
            --description) Description="$2"; shift 2 ;;
            *) Description="$1"; shift ;;
        esac
    done

    [[ ! -f "$FILE_PATH" ]] && { error_exit "File not found: $FILE_PATH"; return 1; }

    local FileName=$(basename "$FILE_PATH")
    local MimeType=$(file -b --mime-type "$FILE_PATH" 2>/dev/null || echo 'application/octet-stream')

    # Get current task with attachments
    local CurrentTask=$(task_get "$TASK_ID")
    local CurrentAttachments=$(echo "$CurrentTask" | jq '.attachments // []')

    # Build new attachment; embed the file as a base64 data URL using the
    # detected MIME type (the original hard-coded text/plain in the URL)
    local NewAttachment=$(jq -n \
        --arg id "$(uuidgen 2>/dev/null || date +%s)" \
        --arg name "$FileName" \
        --arg type "$MimeType" \
        --arg dataUrl "data:${MimeType};base64,$(base64 -i "$FILE_PATH" | tr -d '\n')" \
        '{
            id: $id,
            name: $name,
            type: $type,
            dataUrl: $dataUrl,
            uploadedAt: (now | todate)
        }')

    # Append to attachments
    local UpdatedAttachments=$(echo "$CurrentAttachments" | jq --argjson new "$NewAttachment" '. + [$new]')

    # Merge and update task
    local MergedTask=$(echo "$CurrentTask" | jq --argjson attachments "$UpdatedAttachments" '. + {attachments: $attachments}')
    local Payload=$(jq -n --argjson task "$MergedTask" '{task: $task}')

    _api_call "POST" "/tasks" "$Payload" >/dev/null

    echo "✅ File attached: $FileName"
}
# Update an existing comment by ID
task_update_comment() {
    local TASK_ID="$1"
    local COMMENT_ID="$2"
    local NEW_TEXT="$3"

    # Get current task
    local CurrentTask=$(task_get "$TASK_ID")
    local AllComments=$(echo "$CurrentTask" | jq '.comments // []')

    # Find and update the specific comment, keep others unchanged
    local UpdatedComments=$(echo "$AllComments" | jq --arg cid "$COMMENT_ID" --arg text "$NEW_TEXT" \
        'map(if .id == $cid then .text = $text else . end)')

    # Merge and update task
    local MergedTask=$(echo "$CurrentTask" | jq --argjson comments "$UpdatedComments" '. + {comments: $comments}')
    local Payload=$(jq -n --argjson task "$MergedTask" '{task: $task}')

    _api_call "POST" "/tasks" "$Payload" >/dev/null

    echo "✅ Comment updated"
}
# Export all functions
export -f resolve_project_id
export -f resolve_user_id
export -f sprint_get_current
export -f resolve_sprint_id
export -f task_create
export -f task_get
export -f task_list
export -f task_update
export -f task_delete
export -f task_add_comment
export -f task_update_comment
export -f task_attach_file
export -f _api_call
skills/mission-control-docs/SKILL.md (new file, 267 lines)

@@ -0,0 +1,267 @@
---
name: mission-control-docs
description: Full CRUD operations for Mission Control documents. Create, read, update, delete documents. Uses the Mission Control API with machine token authentication.
---

# mission-control-docs

**Module Skill** - Complete document management for Mission Control via API.

## Purpose

Provides all operations needed to work with Mission Control documents: create, read, update, delete, list, and search. Uses the Mission Control API (not direct Supabase).

## Authentication

Uses a **machine token** for server-to-server API calls:

```bash
export MC_MACHINE_TOKEN="50cd5e8fe3f895353f97c9ee64052c0b689b4eedf79259746413734d0a163cf8"
export MC_API_URL="https://mission-control-rho-pink.vercel.app/api"
```

**Architecture:** Skills → API (with machine token) → Database

This follows the API-centric pattern where all business logic lives in API routes.
## Usage

```bash
source ~/.agents/skills/mission-control-docs/lib/docs.sh

# Create document
DOC_ID=$(mc_doc_create \
  --title "Article Title" \
  --content "$CONTENT" \
  --folder "Research/AI & Agents/")

# Get document
mc_doc_get "$DOC_ID"

# List documents
mc_doc_list --folder "Research/"
```
## Functions

### Document CRUD

#### `mc_doc_create([options])`

Create a new document.

**Options:**
- `--title` - Document title (required)
- `--content` - Document content (required)
- `--folder` - Folder path (default: auto-detected)
- `--tags` - Tags as a JSON array string (default: auto-detected)
- `--type` - Document type (default: markdown)
- `--description` - Description (optional)

**Returns:** Document ID on success

**Example:**
```bash
DOC_ID=$(mc_doc_create \
  --title "API Design Patterns" \
  --content "$SUMMARY_CONTENT" \
  --folder "Research/Tools & Tech/" \
  --tags '["api", "design", "reference"]')
```

**Auto-categorization:** If no folder is specified, one is detected from content keywords:
- `Research/AI & Agents/` - AI, agent, LLM, Claude, OpenClaw, GPT
- `Research/iOS Development/` - Swift, iOS, Xcode, Apple, SwiftUI
- `Research/Business & Marketing/` - SaaS, business, startup, marketing
- `Research/Tools & Tech/` - tool, library, API, SDK, framework
- `Research/Tutorials/` - tutorial, guide, how-to, walkthrough
- `Research/` - Default
#### `mc_doc_get(DOC_ID)`

Get a document by ID.

**Returns:** Document JSON

**Example:**
```bash
DOC=$(mc_doc_get "a1b2c3d4-...")
echo "$DOC" | jq -r '.title'
```
#### `mc_doc_list([filters])`

List documents with optional filters.

**Filters:**
- `--folder` - Filter by folder path
- `--tags` - Filter by tag
- `--search` - Search in title/content
- `--limit` - Max results (default: 50)
- `--json` - Output as JSON

**Example:**
```bash
# List all in folder
mc_doc_list --folder "Research/AI & Agents/"

# Search
mc_doc_list --search "OpenClaw"

# Get JSON for processing
mc_doc_list --folder "Research/" --json
```
#### `mc_doc_update(DOC_ID, [options])`

Update document fields.

**Parameters:**
- `DOC_ID` - Document UUID
- `--title` - New title
- `--content` - New content
- `--folder` - New folder
- `--tags` - New tags (JSON array)
- `--description` - New description

**Example:**
```bash
mc_doc_update "$DOC_ID" \
  --title "Updated Title" \
  --tags '["research", "ai", "updated"]'
```

#### `mc_doc_delete(DOC_ID)`

Delete a document permanently.

**Example:**
```bash
mc_doc_delete "$DOC_ID"
```
### Search & Discovery

#### `mc_doc_search(QUERY, [LIMIT])`

Search documents by title or content.

**Parameters:**
- `QUERY` - Search term
- `LIMIT` - Max results (default: 20)

**Example:**
```bash
# Search for OpenClaw
mc_doc_search "OpenClaw" 10
```

#### `mc_doc_folder_list()`

List all unique folders.

**Example:**
```bash
mc_doc_folder_list
```
### Auto-detection

#### `_detect_folder(CONTENT)`

Auto-detect folder based on content keywords.

**Example:**

```bash
FOLDER=$(_detect_folder "$CONTENT")
echo "Detected folder: $FOLDER"
```

#### `_detect_tags(CONTENT)`

Auto-detect tags based on content keywords.

**Returns:** JSON array of tags

**Example:**

```bash
TAGS=$(_detect_tags "$CONTENT")
echo "Detected tags: $TAGS"
```

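The returned array is built with a small jq pipeline that deduplicates and sorts the detected tags. The same pipeline, run on a few illustrative tag names:

```bash
# printf emits one tag per line; jq -R wraps each in a JSON string,
# jq -s slurps them into an array, and unique sorts + deduplicates
printf '%s\n' saved article ai agents saved | jq -R . | jq -cs 'unique'
```
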
## File Structure

```
~/.agents/skills/mission-control-docs/
├── SKILL.md          # This documentation
└── lib/
    └── docs.sh       # All document functions
```

## Integration Example

Complete workflow:

```bash
# Set authentication
export MC_MACHINE_TOKEN="50cd5e8fe3f895353f97c9ee64052c0b689b4eedf79259746413734d0a163cf8"
export MC_API_URL="https://mission-control-rho-pink.vercel.app/api"

source ~/.agents/skills/mission-control-docs/lib/docs.sh

# Research summary from URL
SUMMARY=$(cat << 'EOF'
## Scrapling: Modern Web Scraping Framework

Scrapling is a Python web scraping framework with:
- Adaptive parsing
- Anti-bot bypass
- Full crawling capabilities

**Use Cases:**
- E-commerce monitoring
- Content aggregation
- SEO auditing

**Links:**
- GitHub: https://github.com/D4Vinci/Scrapling
- Docs: https://scrapling.readthedocs.io
EOF
)

# Create document with auto-detection
DOC_ID=$(mc_doc_create \
  --title "Scrapling: Web Scraping Framework" \
  --content "$SUMMARY" \
  --description "Auto-research from URL: https://github.com/D4Vinci/Scrapling")

echo "Document created: $DOC_ID"

# Verify
echo "=== Document Created ==="
mc_doc_get "$DOC_ID" | jq -r '.title, .folder, .tags'
```

## CLI Alternative

For interactive/command-line use, use the Mission Control CLI:

```bash
# Set auth for CLI
export MC_MACHINE_TOKEN="50cd5e8fe3f895353f97c9ee64052c0b689b4eedf79259746413734d0a163cf8"
export MC_API_URL="https://mission-control-rho-pink.vercel.app/api"

cd /Users/mattbruce/Documents/Projects/OpenClaw/Web/mission-control

# List documents
./scripts/mc.sh document list

# Get specific document
./scripts/mc.sh document get "$DOC_ID"
```

**Note:** The skill library and the CLI both use the same API endpoints with machine token authentication.

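Concretely, both paths reduce to the same authenticated request shape. A minimal sketch, assuming the endpoint layout from `lib/docs.sh` (the token is a placeholder, and `echo` prints the call instead of sending it):

```bash
# Assemble the shared request shape used by both the skill library and the CLI
MC_API_URL="https://mission-control-rho-pink.vercel.app/api"
MC_MACHINE_TOKEN="your-token-here"

curl_opts=(-s
  -H "Content-Type: application/json"
  -H "Authorization: Bearer ${MC_MACHINE_TOKEN}")

# Drop the leading `echo` to actually issue the request
echo curl "${curl_opts[@]}" -X GET "${MC_API_URL}/documents"
```
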
---

**Note:** This is a low-level module skill. Used by orchestrator skills like `url-research-to-documents-and-tasks`, not directly by users.

skills/mission-control-docs/lib/docs.sh (new file, 287 lines)

#!/bin/bash
# mission-control-docs library - Full CRUD for Mission Control documents
# Location: ~/.agents/skills/mission-control-docs/lib/docs.sh
# Refactored to use API endpoints with machine token auth

# Configuration
MC_API_URL="${MC_API_URL:-https://mission-control-rho-pink.vercel.app/api}"
MC_MACHINE_TOKEN="${MC_MACHINE_TOKEN:-}"

# Error handler
_error_exit() {
    echo "❌ $1" >&2
    return 1
}

# Make API call with machine token
_api_call() {
    local method="$1"
    local endpoint="$2"
    local data="${3:-}"

    if [[ -z "$MC_MACHINE_TOKEN" ]]; then
        _error_exit "MC_MACHINE_TOKEN not set"
        return 1
    fi

    local url="${MC_API_URL}${endpoint}"
    local curl_opts=(
        -s
        -H "Content-Type: application/json"
        -H "Authorization: Bearer ${MC_MACHINE_TOKEN}"
    )

    if [[ -n "$data" ]]; then
        curl_opts+=(-d "$data")
    fi

    curl "${curl_opts[@]}" -X "$method" "$url" 2>/dev/null
}

# Detect folder based on content keywords (auto-categorization)
_detect_folder() {
    local CONTENT="$1"
    local FOLDER="Research/"

    local LOWER_CONTENT=$(echo "$CONTENT" | tr '[:upper:]' '[:lower:]')

    if [[ "$LOWER_CONTENT" =~ (ai|agent|llm|claude|openclaw|machine learning|gpt|model) ]]; then
        FOLDER="Research/AI & Agents/"
    elif [[ "$LOWER_CONTENT" =~ (swift|ios|xcode|apple|app store|uikit|swiftui) ]]; then
        FOLDER="Research/iOS Development/"
    elif [[ "$LOWER_CONTENT" =~ (saas|business|startup|marketing|revenue|growth|sales) ]]; then
        FOLDER="Research/Business & Marketing/"
    elif [[ "$LOWER_CONTENT" =~ (tool|library|api|package|sdk|framework|npm|pip) ]]; then
        FOLDER="Research/Tools & Tech/"
    elif [[ "$LOWER_CONTENT" =~ (tutorial|guide|how.to|walkthrough|step.by.step) ]]; then
        FOLDER="Research/Tutorials/"
    fi

    echo "$FOLDER"
}

# Detect tags based on content keywords
_detect_tags() {
    local CONTENT="$1"
    local TAGS=("saved" "article")

    local LOWER_CONTENT=$(echo "$CONTENT" | tr '[:upper:]' '[:lower:]')

    [[ "$LOWER_CONTENT" =~ (ai|agent|automation|llm|model) ]] && TAGS+=("ai" "agents")
    [[ "$LOWER_CONTENT" =~ openclaw ]] && TAGS+=("openclaw")
    [[ "$LOWER_CONTENT" =~ (swift|ios) ]] && TAGS+=("ios" "swift")
    [[ "$LOWER_CONTENT" =~ (business|saas) ]] && TAGS+=("business")
    [[ "$LOWER_CONTENT" =~ tutorial ]] && TAGS+=("tutorial")
    [[ "$LOWER_CONTENT" =~ (reference|documentation|docs) ]] && TAGS+=("reference")

    printf '%s\n' "${TAGS[@]}" | jq -R . | jq -s 'unique'
}

# Create document
mc_doc_create() {
    local TITLE=""
    local CONTENT=""
    local FOLDER=""
    local TAGS=""
    local TYPE="markdown"
    local DESCRIPTION=""

    # Parse arguments
    while [[ $# -gt 0 ]]; do
        case $1 in
            --title) TITLE="$2"; shift 2 ;;
            --content) CONTENT="$2"; shift 2 ;;
            --folder) FOLDER="$2"; shift 2 ;;
            --tags) TAGS="$2"; shift 2 ;;
            --type) TYPE="$2"; shift 2 ;;
            --description) DESCRIPTION="$2"; shift 2 ;;
            *) shift ;;
        esac
    done

    # Validate (braces are required: _error_exit returns 1, which would
    # short-circuit a plain `&& return 1` chain and let execution continue)
    [[ -z "$TITLE" ]] && { _error_exit "--title is required"; return 1; }
    [[ -z "$CONTENT" ]] && { _error_exit "--content is required"; return 1; }

    # Strip control characters that break JSON, but keep tabs and newlines;
    # jq --arg below handles all remaining escaping (including backslashes)
    local CLEAN_CONTENT=$(printf '%s' "$CONTENT" | tr -d '\000-\010\013\014\016-\037')

    # Auto-detect folder/tags if not provided
    [[ -z "$FOLDER" ]] && FOLDER=$(_detect_folder "$CONTENT")
    [[ -z "$TAGS" ]] && TAGS=$(_detect_tags "$CONTENT")
    [[ -z "$DESCRIPTION" ]] && DESCRIPTION="Created: $(date +%Y-%m-%d)"

    # Build JSON payload (TAGS is always populated by this point)
    local TAGS_JSON="$TAGS"

    local PAYLOAD=$(jq -n \
        --arg title "$TITLE" \
        --arg content "$CLEAN_CONTENT" \
        --arg type "$TYPE" \
        --arg folder "$FOLDER" \
        --argjson tags "$TAGS_JSON" \
        --arg description "$DESCRIPTION" \
        '{
            title: $title,
            content: $content,
            type: $type,
            folder: $folder,
            tags: $tags,
            description: $description
        }')

    # Create via API
    local RESPONSE=$(_api_call "POST" "/documents" "$PAYLOAD")

    local DOC_ID=$(echo "$RESPONSE" | jq -r '.id // .document?.id // empty')

    if [[ -n "$DOC_ID" && "$DOC_ID" != "null" ]]; then
        echo "$DOC_ID"
        return 0
    fi

    _error_exit "Failed to create document: $(echo "$RESPONSE" | jq -r '.error // "Unknown error"')"
    return 1
}

# Get document by ID
mc_doc_get() {
    local DOC_ID="$1"

    local RESPONSE=$(_api_call "GET" "/documents?id=$DOC_ID")
    echo "$RESPONSE" | jq --arg id "$DOC_ID" '.documents | map(select(.id == $id)) | .[0] // empty'
}

# List documents with filters
mc_doc_list() {
    local FOLDER=""
    local TAG=""
    local SEARCH=""
    local LIMIT=50
    local JSON_OUTPUT=false

    # Parse arguments
    while [[ $# -gt 0 ]]; do
        case $1 in
            --folder) FOLDER="$2"; shift 2 ;;
            --tags) TAG="$2"; shift 2 ;;
            --search) SEARCH="$2"; shift 2 ;;
            --limit) LIMIT="$2"; shift 2 ;;
            --json) JSON_OUTPUT=true; shift ;;
            *) shift ;;
        esac
    done

    # Get documents via API
    local RESPONSE=$(_api_call "GET" "/documents")
    local DOCUMENTS=$(echo "$RESPONSE" | jq '.documents // []')

    # Apply filters in jq
    if [[ -n "$FOLDER" ]]; then
        DOCUMENTS=$(echo "$DOCUMENTS" | jq --arg f "$FOLDER" 'map(select(.folder == $f))')
    fi

    if [[ -n "$TAG" ]]; then
        DOCUMENTS=$(echo "$DOCUMENTS" | jq --arg t "$TAG" 'map(select(.tags | index($t)))')
    fi

    if [[ -n "$SEARCH" ]]; then
        # Parenthesize each pipe: in jq, `|` binds looser than `or`
        DOCUMENTS=$(echo "$DOCUMENTS" | jq --arg s "$SEARCH" 'map(select((.title | contains($s)) or (.content | contains($s))))')
    fi

    # Apply limit
    DOCUMENTS=$(echo "$DOCUMENTS" | jq --argjson limit "$LIMIT" '.[:$limit]')

    if [[ "$JSON_OUTPUT" == true ]]; then
        echo "{\"documents\": $DOCUMENTS}"
    else
        echo "$DOCUMENTS" | jq -r '.[] | "\(.folder) | \(.title) | \(.id)"' 2>/dev/null
    fi
}

# Update document
mc_doc_update() {
    local DOC_ID="$1"
    shift

    # First get the current document (braces ensure the early return fires
    # even though _error_exit itself returns 1)
    local CURRENT_DOC=$(mc_doc_get "$DOC_ID")
    [[ -z "$CURRENT_DOC" ]] && { _error_exit "Document not found: $DOC_ID"; return 1; }

    # Build updates
    local UPDATES="{}"

    while [[ $# -gt 0 ]]; do
        case $1 in
            --title) UPDATES=$(echo "$UPDATES" | jq --arg v "$2" '. + {title: $v}'); shift 2 ;;
            --content) UPDATES=$(echo "$UPDATES" | jq --arg v "$2" '. + {content: $v}'); shift 2 ;;
            --folder) UPDATES=$(echo "$UPDATES" | jq --arg v "$2" '. + {folder: $v}'); shift 2 ;;
            --tags) UPDATES=$(echo "$UPDATES" | jq --argjson v "$2" '. + {tags: $v}'); shift 2 ;;
            --description) UPDATES=$(echo "$UPDATES" | jq --arg v "$2" '. + {description: $v}'); shift 2 ;;
            *) shift ;;
        esac
    done

    # Merge current document with updates
    local MERGED_DOC=$(echo "$CURRENT_DOC" | jq --argjson updates "$UPDATES" '. + $updates')

    # Send update via API
    local PAYLOAD=$(jq -n --arg id "$DOC_ID" --argjson doc "$MERGED_DOC" '{id: $id, document: $doc}')
    local RESPONSE=$(_api_call "PATCH" "/documents" "$PAYLOAD")

    if echo "$RESPONSE" | jq -e '.error' >/dev/null 2>&1; then
        _error_exit "Update failed: $(echo "$RESPONSE" | jq -r '.error')"
        return 1
    fi

    echo "✅ Document updated"
}

# Delete document
mc_doc_delete() {
    local DOC_ID="$1"

    _api_call "DELETE" "/documents" "{\"id\": \"$DOC_ID\"}" >/dev/null

    echo "✅ Document deleted"
}

# Search documents
mc_doc_search() {
    local QUERY="$1"
    local LIMIT="${2:-20}"

    # Get documents and filter client-side
    local RESPONSE=$(_api_call "GET" "/documents")

    # Parenthesize each pipe: in jq, `|` binds looser than `or`, so the
    # unparenthesized form would feed a boolean back into test() and error out
    echo "$RESPONSE" | jq --arg q "$QUERY" --argjson limit "$LIMIT" '
        .documents
        | map(select((.title | test($q; "i")) or (.content | test($q; "i"))))
        | .[:$limit]
    '
}

# List folders
mc_doc_folder_list() {
    # Get all documents and extract unique folders
    local RESPONSE=$(_api_call "GET" "/documents")

    echo "$RESPONSE" | jq -r '.documents[].folder' | sort -u
}

# Export functions
export -f _error_exit   # needed by the exported functions when run in subshells
export -f _detect_folder
export -f _detect_tags
export -f _api_call
export -f mc_doc_create
export -f mc_doc_get
export -f mc_doc_list
export -f mc_doc_update
export -f mc_doc_delete
export -f mc_doc_search
export -f mc_doc_folder_list