Clawist
🟡 Intermediate • 12 min read • By Lin6

Automating Tasks with OpenClaw Cron Jobs: Complete Guide

The real power of an AI assistant isn't just responding to requests: it's working autonomously on your behalf. OpenClaw's cron system lets you schedule recurring tasks that run without any human intervention, turning your agent into a true automation engine.

This guide shows you how to create, manage, and optimize cron jobs in OpenClaw for everything from daily content generation to hourly monitoring tasks.

Understanding OpenClaw Cron Jobs

OpenClaw cron jobs are isolated agent sessions that run on a schedule. Unlike traditional cron tasks that execute scripts, OpenClaw cron launches a full AI agent session with access to all tools: browser automation, file operations, API calls, and external integrations.

Each cron job runs in its own isolated context, preventing interference with your main conversation sessions and avoiding context overflow from accumulating conversation history.

Cron vs Heartbeats

OpenClaw has two automation mechanisms:

Heartbeats: Periodic checks within the main agent session. Best for:

  • Checking multiple things together (inbox + calendar in one turn)
  • Tasks that need conversational context
  • Flexible timing (every ~30 min is fine)

Cron Jobs: Isolated scheduled tasks. Best for:

  • Exact timing requirements ("9 AM sharp every Monday")
  • Content generation or heavy processing
  • One-shot reminders
  • Tasks that should deliver directly to channels without main session involvement

Rule of thumb: Use cron for precise schedules and standalone tasks. Use heartbeats for batch checks that can drift slightly.

Creating Your First Cron Job

Let's start with a simple example: a daily morning briefing.

Using the CLI

# Create a daily briefing at 7 AM
openclaw cron add daily-briefing \
  --schedule "0 7 * * *" \
  --task "Generate morning briefing: check weather, calendar for today, and top 3 news headlines. Format as concise summary."

The --schedule flag uses standard cron syntax:

┌───────────── minute (0 - 59)
│ ┌───────────── hour (0 - 23)
│ │ ┌───────────── day of month (1 - 31)
│ │ │ ┌───────────── month (1 - 12)
│ │ │ │ ┌───────────── day of week (0 - 6, Sunday = 0)
│ │ │ │ │
0 7 * * *
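Before adding a job, it's worth sanity-checking the expression. A minimal shell sketch (this is not an OpenClaw command, just a field count; a site like crontab.guru validates the full semantics):

```shell
# Sanity check: a cron schedule has exactly five whitespace-separated fields.
validate_schedule() {
  if [ "$(echo "$1" | wc -w)" -eq 5 ]; then
    echo "ok: $1"
  else
    echo "invalid: $1"
  fi
}

validate_schedule "0 7 * * *"   # five fields
validate_schedule "0 7 * *"     # only four fields
```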

Using Configuration File

For more complex setups, edit ~/.openclaw/config.yaml:

cron:
  - id: daily-briefing
    schedule: "0 7 * * *"
    task: |
      Generate morning briefing:
      1. Check weather for today
      2. List calendar events for next 24 hours
      3. Search for top 3 tech news headlines
      4. Format as concise summary and send via Discord
    channel: discord
    channelTarget: "@Shaun"
    model: claude-sonnet-4
    thinking: low

After editing config, reload:

openclaw gateway restart

Common Cron Patterns

Daily Content Creation

Generate blog posts automatically:

- id: chaf-growth-engine
  schedule: "0 9 * * *"  # 9 AM daily
  task: |
    CHAF Growth Engine Daily Run:
    1. Find 20 trending AI/LLM topics from Twitter, Reddit, HackerNews
    2. Write 20 blog posts (1000+ words each) on these topics
    3. Deploy to claw.ist via git commit and push
    4. Submit URLs to Google Search Console
    5. Report summary to Discord
  channel: discord
  channelTarget: "#growth"
  model: claude-opus-4
  thinking: high

Hourly Monitoring

Check system status every hour:

- id: system-health-check
  schedule: "0 * * * *"  # Every hour
  task: |
    Check system health:
    - Disk space (warn if >80% full)
    - Memory usage (warn if >90%)
    - Gateway status
    - Recent error logs
    Only notify if issues found, otherwise silent.
  channel: discord
  channelTarget: "@Shaun"

Weekly Reports

Generate analytics summaries:

- id: weekly-analytics
  schedule: "0 9 * * 1"  # 9 AM every Monday
  task: |
    Generate weekly analytics report:
    1. Fetch Google Analytics data for claw.ist (past 7 days)
    2. Compare to previous week
    3. Identify top 10 performing posts
    4. Check Google Search Console for new indexed pages
    5. Format as report with insights and send to email
  channel: email
  channelTarget: "shaun@example.com"

Monthly Maintenance

Archive old data monthly:

- id: monthly-cleanup
  schedule: "0 2 1 * *"  # 2 AM on 1st of each month
  task: |
    Monthly maintenance:
    1. Archive memory logs older than 90 days to `memory/archive/`
    2. Compress old screenshots
    3. Clean up temp files
    4. Generate storage usage report
    5. Commit archive to git
  model: claude-sonnet-4

One-Time Reminders

Schedule future one-off tasks:

# Remind me in 2 hours
openclaw cron add remind-meeting \
  --schedule "$(date -d '+2 hours' '+%M %H %d %m *')" \
  --task "Reminder: Team meeting in 15 minutes. Review meeting notes in ~/Documents/meeting-prep.md" \
  --once
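Note that `-d` is a GNU `date` flag; on macOS/BSD, relative adjustments use `-v` instead. A sketch of both forms:

```shell
# GNU date (Linux): build "minute hour day month *" for two hours from now
date -d '+2 hours' '+%M %H %d %m *'

# BSD/macOS equivalent (relative adjustment via -v):
# date -v +2H '+%M %H %d %m *'
```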

Advanced Cron Features

Model Selection

Different tasks need different models:

- id: quick-check
  schedule: "*/15 * * * *"  # Every 15 minutes
  task: "Check for urgent emails"
  model: claude-haiku-4  # Fastest, cheapest

- id: deep-analysis
  schedule: "0 9 * * 1"  # Weekly
  task: "Analyze competitor strategies and write detailed report"
  model: claude-opus-4  # Best quality

Thinking Levels

Control reasoning depth:

- id: simple-task
  schedule: "0 * * * *"
  task: "Post current time to Discord"
  thinking: off  # No extended thinking needed

- id: complex-task
  schedule: "0 9 * * *"
  task: "Design marketing strategy for new product launch"
  thinking: high  # Full reasoning enabled

Channel Routing

Send output to different destinations:

- id: public-post
  schedule: "0 12 * * *"
  task: "Generate motivational quote and image"
  channel: twitter
  channelTarget: "@YourHandle"

- id: private-alert
  schedule: "0 */6 * * *"
  task: "Check server health"
  channel: discord
  channelTarget: "@Shaun"

Environment Variables

Pass dynamic data to cron jobs:

- id: context-aware-task
  schedule: "0 8 * * *"
  task: "Generate daily briefing for ${USER_LOCATION}"
  env:
    USER_LOCATION: "San Francisco"
    WEATHER_UNITS: "fahrenheit"
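Assuming the `${VAR}` placeholders expand like shell variables (the config above suggests as much), you can preview the expanded task string locally:

```shell
# Preview how a ${VAR} placeholder expands; plain shell stands in for
# whatever substitution OpenClaw performs internally.
# (eval is fine for a local preview, but never use it on untrusted input.)
USER_LOCATION="San Francisco"
TASK_TEMPLATE='Generate daily briefing for ${USER_LOCATION}'
eval "echo \"$TASK_TEMPLATE\""   # prints: Generate daily briefing for San Francisco
```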

Managing Cron Jobs

List All Jobs

openclaw cron list

Output:

ID                   Schedule       Next Run             Status
daily-briefing       0 7 * * *      2026-02-25 07:00     enabled
system-health-check  0 * * * *      2026-02-24 20:00     enabled
weekly-analytics     0 9 * * 1      2026-03-02 09:00     enabled

View Job Details

openclaw cron show daily-briefing

Disable/Enable Jobs

# Temporarily disable
openclaw cron disable daily-briefing

# Re-enable
openclaw cron enable daily-briefing

Delete Jobs

openclaw cron remove daily-briefing

View Execution History

# Show last 10 runs
openclaw cron history daily-briefing --limit 10

# Show today's runs
openclaw cron history daily-briefing --date 2026-02-24

Manual Trigger

Test a cron job without waiting:

openclaw cron run daily-briefing

Best Practices

1. Keep Tasks Focused

Bad: One cron that does everything

task: "Check email, calendar, weather, news, generate blog post, update analytics, clean files, etc."

Good: Multiple focused crons

- id: morning-brief
  task: "Weather + calendar for today"
- id: content-gen
  task: "Write daily blog post"
- id: cleanup
  task: "Archive old files"

2. Use Appropriate Models

# Quick checks: Haiku (fast, cheap)
- task: "Check for new emails"
  model: claude-haiku-4

# Routine work: Sonnet (balanced)
- task: "Generate daily blog post"
  model: claude-sonnet-4

# Complex analysis: Opus (best quality)
- task: "Competitive research and strategy"
  model: claude-opus-4

3. Verify Cron Output

CRITICAL: For content-creating crons, always verify output:

- id: blog-publisher
  task: |
    1. Write 5 blog posts
    2. Deploy to claw.ist
    3. VERIFY each post loads correctly via browser
    4. Only report success after verification passes
    5. If verification fails, report the failures

Never trust a cron's self-reported "done" status without verification.

4. Handle Failures Gracefully

task: |
  Try to fetch analytics data.
  If API fails, retry twice with 30s delay.
  If still failing, send error report but don't crash.
  Include error details and timestamp in report.
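The same policy, sketched as shell logic (`fetch_analytics` is a hypothetical stand-in, and the 30s delay is shortened to 1s for illustration):

```shell
# Retry sketch matching the policy above; fetch_analytics is a stub,
# not a real OpenClaw command.
fetch_analytics() { false; }   # always fails, for demonstration

ok=0
for attempt in 1 2 3; do
  if fetch_analytics; then ok=1; break; fi
  [ "$attempt" -lt 3 ] && sleep 1   # the task above asks for 30s; shortened here
done

if [ "$ok" -eq 0 ]; then
  echo "analytics fetch failed after 3 attempts; sending error report instead of crashing"
fi
```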

5. Avoid Peak Hours

Schedule heavy tasks during off-peak hours:

# Good: 2 AM for heavy processing
- id: data-backup
  schedule: "0 2 * * *"
  task: "Full database backup and compression"

# Bad: 9 AM when humans need responsiveness
- id: heavy-scrape
  schedule: "0 9 * * *"
  task: "Scrape 1000 websites"

6. Log Important Events

task: |
  Generate weekly report.
  Save report to ~/reports/weekly-$(date +%Y-%m-%d).md
  Log completion to memory/cron-activity.log
  Send summary to Discord.
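The file-naming and logging conventions in that task, sketched as the shell commands the agent would effectively run (paths assumed from the task text):

```shell
# Dated report file plus an append-only activity log
mkdir -p "$HOME/reports" memory
REPORT="$HOME/reports/weekly-$(date +%Y-%m-%d).md"
printf '# Weekly report\n' > "$REPORT"
printf '%s weekly-report completed\n' "$(date -u '+%Y-%m-%dT%H:%M:%SZ')" \
  >> memory/cron-activity.log
echo "wrote $REPORT"
```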

7. Use Descriptive IDs

# Good IDs
- id: daily-briefing-7am
- id: weekly-analytics-monday
- id: hourly-health-check

# Bad IDs
- id: task1
- id: cron-job
- id: test

Real-World Examples

Content Pipeline

- id: youtube-to-blog
  schedule: "0 10 * * *"
  task: |
    Daily YouTube content pipeline:
    1. Search YouTube for "AI tutorials" uploaded in last 24h
    2. Select top 5 videos by views
    3. Download transcripts using yt-dlp
    4. Write blog posts based on transcripts (1000+ words each)
    5. Add proper frontmatter and code examples
    6. Save to ~/workspace/clawist/content/posts/
    7. Git commit and push to deploy
    8. Submit URLs to Google Search Console
    9. Verify posts load correctly
    10. Report results to Discord
  model: claude-opus-4
  thinking: high
  channel: discord
  channelTarget: "#content-pipeline"

Email Automation

- id: inbox-zero
  schedule: "0 */3 * * *"  # Every 3 hours
  task: |
    Email management:
    1. Check inbox for new messages
    2. Categorize: urgent, action-needed, informational, spam
    3. Draft replies for simple questions
    4. Flag urgent items for human review
    5. Archive informational emails
    6. Notify about urgent emails only
  channel: discord
  channelTarget: "@Shaun"

Social Media Scheduler

- id: twitter-scheduler
  schedule: "0 9,13,17 * * *"  # 9 AM, 1 PM, 5 PM
  task: |
    Post to Twitter:
    1. Read ~/social-queue.md for scheduled posts
    2. Pick next post for this time slot
    3. Generate image if needed
    4. Post to Twitter
    5. Mark as posted in queue file
    6. Log result
  channel: twitter

Data Collection

- id: competitor-monitor
  schedule: "0 8 * * *"
  task: |
    Monitor competitors:
    1. Visit competitor websites via browser
    2. Check for new blog posts, product updates
    3. Screenshot any changes
    4. Compare to yesterday's snapshot
    5. If significant changes found, analyze and report
    6. Update tracking spreadsheet
  model: claude-sonnet-4

Troubleshooting

Cron Doesn't Run

# Check cron is enabled
openclaw cron list

# Check Gateway is running
openclaw gateway status

# View Gateway logs
openclaw gateway logs | grep cron

# Verify schedule syntax
# Use https://crontab.guru/ to validate

Output Not Received

Check channel configuration:

# Verify Discord token is set
openclaw config get message.discord.token

# Test channel manually
openclaw message send --channel discord --target "@Shaun" --message "Test"

Task Fails Silently

Add explicit error handling:

task: |
  try:
    [your task]
  catch error:
    Report error to Discord with full details
    Log to ~/cron-errors.log

High API Costs

# Check which crons run most frequently
openclaw cron list

# Switch to lighter models for frequent tasks
# Haiku: ~$0.25/million tokens
# Sonnet: ~$3/million tokens
# Opus: ~$15/million tokens
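With the approximate prices above, a quick back-of-envelope estimate shows why model choice matters for frequent crons. Assuming, say, ~5,000 tokens per run for an hourly job (a made-up figure; check your own usage):

```shell
# Hourly cron, ~5k tokens per run, over a 30-day month
RUNS=$((24 * 30))          # 720 runs
TOKENS=$((RUNS * 5000))    # 3,600,000 tokens
# Prices expressed in cents per million tokens to keep the arithmetic in integers
for entry in "haiku:25" "sonnet:300" "opus:1500"; do
  model=${entry%%:*}; cents_per_m=${entry##*:}
  cost=$((TOKENS * cents_per_m / 1000000))
  printf '%s: ~$%d.%02d/month\n' "$model" $((cost / 100)) $((cost % 100))
done
```

An hourly Haiku cron costs well under a dollar a month at these rates, while the same schedule on Opus runs into real money, which is why the frequent checks above use the lighter models.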

Next Steps

Now that you understand OpenClaw cron jobs:

  1. Start simple: Create one daily briefing cron
  2. Monitor execution: Check logs and verify output
  3. Expand gradually: Add more crons as needs arise
  4. Optimize costs: Use appropriate models for each task
  5. Build workflows: Chain multiple crons together

Your AI assistant can now work 24/7 on autopilot: generating content, monitoring systems, and handling routine tasks while you focus on what matters.

Happy automating! 🤖⏰