Tutorial: Advanced Use Cases
Goal: Master sophisticated workflows that combine multiple Rephlo features for maximum productivity.
Level: Expert
Time: 30-60 minutes to read; ongoing mastery
Table of Contents
- Multi-Provider Workflow Optimization
- Space Chaining and Context Orchestration
- Advanced Command Composition
- Enterprise Workflow Integration
- AI-Assisted Code Review Pipeline
- Multilingual Content Pipeline
- Research Synthesis Workflow
- Privacy-First Sensitive Data Processing
- Real-Time Meeting Assistant
- Creative Writing Workflows
1. Multi-Provider Workflow Optimization
Complexity Level: Expert
Time Investment: 30 minutes setup, ongoing optimization
Prerequisites: 2+ providers configured (recommended: Groq + Claude or GPT-5.1)
Scenario
You process hundreds of text operations daily. Simple tasks (grammar fixes, formatting) don't need expensive models, but complex reasoning tasks require top-tier AI. By strategically routing tasks to different providers, you can reduce costs by 60-80% while maintaining quality.
Workflow Architecture
 ┌─────────────────────────────────────┐
 │         TASK CLASSIFICATION         │
 └─────────────────────────────────────┘
                    │
      ┌─────────────┼─────────────┐
      ▼             ▼             ▼
┌───────────┐ ┌───────────┐ ┌───────────┐
│  SIMPLE   │ │  MEDIUM   │ │  COMPLEX  │
│  TASKS    │ │  TASKS    │ │  TASKS    │
└───────────┘ └───────────┘ └───────────┘
      │             │             │
      ▼             ▼             ▼
┌───────────┐ ┌───────────┐ ┌───────────┐
│   Groq    │ │  GPT-5.1  │ │  Claude   │
│ (Llama 4) │ │   Mini    │ │Sonnet 4.5 │
│  $0.05/M  │ │  $0.10/M  │ │   $3/M    │
└───────────┘ └───────────┘ └───────────┘
Step-by-Step Setup
Step 1: Create Provider-Specific Command Groups
- Open Dashboard > Commands > Manage Groups.
- Create three groups:
- "Quick Tasks" (Green, Lightning icon)
- "Standard Tasks" (Blue, Gear icon)
- "Deep Analysis" (Purple, Brain icon)
Step 2: Configure Commands by Complexity
Quick Tasks (Groq - Fast/Cheap)
- Grammar Check
- Spelling Fix
- Text Formatting
- Simple Translation
- Bullet Point Extraction
Standard Tasks (GPT-5.1 Mini - Balanced)
- Email Rewriting
- Code Documentation
- Summary Generation
- Meeting Notes Cleanup
Deep Analysis (Claude Sonnet 4.5 - Premium)
- Legal Document Review
- Complex Code Refactoring
- Strategic Analysis
- Long-form Content Creation
- Multi-document Synthesis
Step 3: Assign Providers to Commands
- Edit each command in the appropriate group.
- In the Provider dropdown, select the designated provider.
- Save the command.
Step 4: Mid-Conversation Provider Switching
For chat conversations that evolve in complexity:
- Start a conversation with Groq for initial exploration.
- When you need deeper analysis, click the Provider selector in the chat header.
- Switch to Claude Sonnet 4.5.
- The conversation context is preserved; only the model changes.
Cost Calculation Example
| Task Type | Volume/Month | Without Optimization | With Optimization |
|---|---|---|---|
| Grammar checks | 500 | $1.50 (Claude) | $0.025 (Groq) |
| Email rewrites | 200 | $0.60 (Claude) | $0.02 (GPT-5.1 Mini) |
| Deep analysis | 50 | $0.15 (Claude) | $0.15 (Claude) |
| Total | 750 | $2.25 | $0.20 |
| Savings | | | 91% |
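The table's arithmetic follows from a simple cost model. A minimal sketch to reproduce it; the figure of roughly 1,000 tokens per task is an assumption chosen to match the table above, so substitute your own averages before committing to a routing scheme:

```python
# Cost model for the routing table above. TOKENS_PER_TASK is an assumption
# (~1,000 tokens per operation); per-million prices come from the diagram.
PRICE_PER_M = {"groq": 0.05, "gpt51_mini": 0.10, "claude_sonnet": 3.00}
TOKENS_PER_TASK = 1_000

def monthly_cost(volume: int, provider: str) -> float:
    """USD cost of `volume` tasks routed to `provider`."""
    return volume * TOKENS_PER_TASK / 1_000_000 * PRICE_PER_M[provider]

baseline = sum(monthly_cost(v, "claude_sonnet") for v in (500, 200, 50))  # everything on Claude
optimized = (monthly_cost(500, "groq")             # grammar checks
             + monthly_cost(200, "gpt51_mini")     # email rewrites
             + monthly_cost(50, "claude_sonnet"))  # deep analysis
savings = 1 - optimized / baseline
```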
Pro Tips
- Batch similar tasks: Process all grammar checks together before switching providers.
- Use hotkeys by group: Assign Ctrl+1 for Quick Tasks, Ctrl+2 for Standard, Ctrl+3 for Deep.
- Monitor History: Review the History page to identify tasks that could be downgraded.
Potential Pitfalls
- Under-routing complex tasks: If output quality drops, move the command to a higher tier.
- Over-reliance on fast models: Groq models have smaller context windows (32K vs 200K).
- API key management: Ensure all provider keys remain valid and funded.
2. Space Chaining and Context Orchestration
Complexity Level: Expert
Time Investment: 45 minutes setup, variable ongoing
Prerequisites: 3+ Spaces created with different knowledge domains
Scenario
You're preparing a comprehensive proposal that requires legal compliance knowledge, technical specifications, and business strategy. Each knowledge domain lives in a separate Space. You need to orchestrate them together without merging files.
Workflow Architecture
┌─────────────────────────────────────────────────────────────────┐
│ PROPOSAL CREATION PIPELINE │
└─────────────────────────────────────────────────────────────────┘
│
┌───────────────────────┼───────────────────────┐
▼ ▼ ▼
┌───────────────┐ ┌───────────────┐ ┌───────────────┐
│ LEGAL SPACE │ │ TECHNICAL │ │ BUSINESS │
│ │ │ SPACE │ │ SPACE │
├───────────────┤ ├───────────────┤ ├───────────────┤
│ - Contracts │ │ - API Docs │ │ - Pricing │
│ - Compliance │ │ - Specs │ │ - Case Studies│
│ - Terms │ │ - Standards │ │ - Strategy │
└───────────────┘ └───────────────┘ └───────────────┘
│ │ │
└───────────────────────┼───────────────────────┘
▼
┌───────────────────────┐
│ SYNTHESIZED OUTPUT │
│ (Final Proposal) │
└───────────────────────┘
Step-by-Step Workflow
Step 1: Create Domain-Specific Spaces
- Legal Space: Upload compliance documents, contract templates, regulatory guidelines.
- Technical Space: Upload API documentation, technical specifications, architecture diagrams.
- Business Space: Upload pricing sheets, case studies, competitive analysis.
Step 2: Create Section-Specific Commands
Command: "Draft Compliance Section"
Using the regulatory requirements and compliance standards in the Space,
draft a compliance section for a proposal that addresses:
- Data protection requirements
- Industry-specific regulations
- Certification requirements
Format as formal proposal text with numbered sections.
Assign to: Legal Space
Command: "Draft Technical Architecture"
Based on the technical specifications and API documentation,
create a technical architecture section covering:
- System integration approach
- Security architecture
- Scalability considerations
Include diagrams in ASCII format where helpful.
Assign to: Technical Space
Command: "Draft Business Case"
Using the pricing models and case studies, draft a business case section:
- ROI analysis framework
- Competitive positioning
- Success metrics
Reference specific case study outcomes where applicable.
Assign to: Business Space
Step 3: Execute Space-Chained Workflow
- Activate Legal Space > Run "Draft Compliance Section" > Save output.
- Activate Technical Space > Run "Draft Technical Architecture" > Save output.
- Activate Business Space > Run "Draft Business Case" > Save output.
Step 4: Synthesize Across Spaces
Create a "Meta-Synthesis" command that doesn't require a Space:
Command: "Synthesize Proposal Sections"
I have the following sections from different domains:
{{input_text}}
Create an executive summary that:
1. Highlights the key points from each section
2. Identifies connections between technical, legal, and business aspects
3. Provides a cohesive narrative for decision-makers
Output as a 2-page executive summary.
Then paste all three outputs into the input and run the synthesis command.
Variable Injection from Multiple Spaces
For advanced users, create a master command with explicit file references:
Review this proposal draft: {{input_text}}
Cross-reference with:
- Compliance: {{compliance_checklist_pdf}} [from Legal Space]
- Technical: {{api_v3_spec_md}} [from Technical Space]
- Pricing: {{enterprise_pricing_xlsx}} [from Business Space]
Identify any inconsistencies or gaps.
Note: This requires the files to be in the currently active Space or using multi-Space commands (advanced configuration).
Pro Tips
- Create a "Meta-Space": A Space containing only summaries/indexes of your other Spaces.
- Use consistent naming: Prefix files with domain (legal_, tech_, biz_) for easier variable injection.
- Version your Spaces: Archive old versions before major document updates.
Potential Pitfalls
- Context window limits: Combining multiple Spaces can exceed token limits. Use compaction.
- Stale data: Ensure all Spaces are updated before critical workflows.
- Switching overhead: Frequent Space switching can break workflow momentum.
3. Advanced Command Composition
Complexity Level: Expert
Time Investment: 20 minutes setup per chain
Prerequisites: Understanding of variables, intermediate command creation
Scenario
You need to create sophisticated command chains that output structured data (JSON, Markdown tables) which can be consumed by other commands or external tools.
Workflow Architecture
┌─────────────────────────────────────────────────────────────────┐
│ COMMAND CHAIN PIPELINE │
└─────────────────────────────────────────────────────────────────┘
┌─────────────┐ ┌─────────────┐ ┌─────────────┐
│ STAGE 1 │───▶│ STAGE 2 │───▶│ STAGE 3 │
│ Extract │ │ Transform │ │ Format │
└─────────────┘ └─────────────┘ └─────────────┘
│ │ │
▼ ▼ ▼
┌─────────────┐ ┌─────────────┐ ┌─────────────┐
│ Raw Text │ │ Structured │ │ Final │
│ Input │ │ JSON │ │ Markdown │
└─────────────┘ └─────────────┘ └─────────────┘
Pattern 1: Structured Data Extraction
Command: "Extract Meeting Actions"
Analyze this meeting transcript and extract all action items.
Output ONLY valid JSON in this exact format:
{
"meeting_date": "YYYY-MM-DD",
"action_items": [
{
"owner": "Person Name",
"task": "Task description",
"due_date": "YYYY-MM-DD or null",
"priority": "high|medium|low"
}
]
}
Transcript:
{{input_text}}
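Since downstream commands consume this JSON, it pays to validate it mechanically before passing it along. A minimal sketch of such a check, using only the schema shown above:

```python
import json

REQUIRED = {"owner", "task", "due_date", "priority"}
PRIORITIES = {"high", "medium", "low"}

def validate_actions(raw: str) -> dict:
    """Parse the model's output and fail fast if it drifted from the schema."""
    data = json.loads(raw)  # json.JSONDecodeError (a ValueError) on non-JSON output
    for item in data["action_items"]:
        missing = REQUIRED - item.keys()
        if missing:
            raise ValueError(f"action item missing keys: {missing}")
        if item["priority"] not in PRIORITIES:
            raise ValueError(f"unexpected priority: {item['priority']!r}")
    return data

sample = ('{"meeting_date": "2025-01-15", "action_items": '
          '[{"owner": "Ana", "task": "Send deck", "due_date": null, "priority": "high"}]}')
actions = validate_actions(sample)
```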
Pattern 2: Transform JSON to Markdown
Command: "Actions to Markdown Table"
Convert this JSON action items list to a Markdown table:
{{input_text}}
Output format:
| Owner | Task | Due Date | Priority |
|-------|------|----------|----------|
| ... | ... | ... | ... |
Include a summary row with total counts per priority level.
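When the table must be byte-exact (for import into another tool), an alternative to this second model call is converting the validated JSON in code, which is free and deterministic. A sketch of that variant:

```python
import json
from collections import Counter

def actions_to_markdown(data: dict) -> str:
    """Render the action-items JSON from Pattern 1 as a Markdown table."""
    rows = ["| Owner | Task | Due Date | Priority |",
            "|-------|------|----------|----------|"]
    for a in data["action_items"]:
        rows.append(f"| {a['owner']} | {a['task']} | {a['due_date'] or '-'} | {a['priority']} |")
    counts = Counter(a["priority"] for a in data["action_items"])
    rows.append(f"| **Totals** | | | high: {counts['high']}, "
                f"medium: {counts['medium']}, low: {counts['low']} |")
    return "\n".join(rows)

table = actions_to_markdown({"action_items": [
    {"owner": "Ana", "task": "Send deck", "due_date": None, "priority": "high"}]})
```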
Pattern 3: Self-Referential Prompts Using Space Context
Command: "Apply Style Guide" (with Space context)
Rewrite the following text according to the style rules in my Space:
Text to rewrite:
{{input_text}}
Style requirements from Space:
{{style_guide_md}}
Additional rules:
1. Maintain the original meaning
2. Flag any terms that need glossary definitions
3. Output the rewritten text followed by a "CHANGES MADE:" section
Pattern 4: Multi-Step Chain Command
Create a command that explicitly chains multiple operations:
Command: "Full Content Pipeline"
Process this content through the following stages:
INPUT:
{{input_text}}
STAGE 1 - ANALYSIS:
Identify the following:
- Main topic (1 sentence)
- Key points (bullet list)
- Target audience
- Tone assessment
STAGE 2 - ENHANCEMENT:
Based on the analysis, improve the content by:
- Strengthening weak arguments
- Adding transitions
- Improving clarity
STAGE 3 - FORMATTING:
Format the enhanced content as:
- Headline (max 10 words)
- Subheadline (max 20 words)
- Body (with H2 sections)
- Call to action
Output all three stages clearly separated with markdown headers.
Pro Tips
- JSON validation: Test extracted JSON in a validator before using in downstream tools.
- Escape handling: Use triple backticks for code blocks to prevent injection issues.
- Deterministic outputs: For structured data, set temperature to 0.1-0.3.
Potential Pitfalls
- Format drift: AI may deviate from exact JSON schema. Include explicit examples.
- Token bloat: Multi-stage commands consume more tokens. Monitor usage.
- Error propagation: Bad Stage 1 output cascades through the chain.
4. Enterprise Workflow Integration
Complexity Level: Expert
Time Investment: 2-4 hours initial setup
Prerequisites: Admin access, standardized team processes
Scenario
Your team needs consistent AI-assisted workflows across multiple users. You want to create a standardized command library with audit trails for compliance.
Workflow Architecture
┌─────────────────────────────────────────────────────────────────┐
│ ENTERPRISE DEPLOYMENT │
└─────────────────────────────────────────────────────────────────┘
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│ ADMIN SETUP │───▶│ DISTRIBUTION │───▶│ TEAM USAGE │
│ │ │ │ │ │
│ - Command Lib │ │ - Export JSON │ │ - Import │
│ - Spaces │ │ - Share Spaces │ │ - Execute │
│ - Policies │ │ - Guidelines │ │ - Audit Trail │
└─────────────────┘ └─────────────────┘ └─────────────────┘
│
▼
┌─────────────────┐
│ COMPLIANCE │
│ REPORTING │
└─────────────────┘
Step 1: Create Standardized Command Library
Naming Convention: [DEPT]-[CATEGORY]-[ACTION]
- LEGAL-CONTRACT-Review
- LEGAL-NDA-Generate
- HR-POLICY-Summarize
- SALES-PROPOSAL-Draft
Command Template for Audit Compliance:
[AUDIT HEADER]
Department: {{department}}
Document Type: {{doc_type}}
Processing Date: {{current_date}}
Operator: To be logged by Rephlo History
[TASK]
{{actual_instruction}}
[OUTPUT REQUIREMENTS]
1. Include a "Compliance Notes" section
2. Flag any regulatory concerns
3. Mark confidence level (High/Medium/Low)
4. List sources referenced
Step 2: Export Command Library
- Navigate to Commands page.
- Click Export (top-right menu).
- Select all commands to include.
- Export as JSON file: company-commands-v1.0.json.
Step 3: Distribute to Team
Option A: Direct Import
- Share JSON file via internal file share.
- Each team member: Commands > Import > Select file.
Option B: Shared Space for Policies
- Create a "Company Policies" Space with all compliance documents.
- Export Space configuration.
- Team imports both Commands and Space.
Step 4: Batch Processing for Reports
Command: "Batch Compliance Check"
Review the following documents for compliance with our policies:
Documents to review:
{{input_text}}
For EACH document, provide:
1. Document identifier
2. Compliance status: PASS / FAIL / NEEDS REVIEW
3. Issues found (if any)
4. Recommended actions
Output as a CSV table:
Document ID, Status, Issues, Actions
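Because the command emits CSV, results can be tallied mechanically before filing the compliance report. A sketch assuming the four-column layout above:

```python
import csv
import io

def summarize_compliance(csv_text: str) -> dict:
    """Tally PASS / FAIL / NEEDS REVIEW counts from the command's CSV output."""
    reader = csv.DictReader(io.StringIO(csv_text), skipinitialspace=True)
    counts = {"PASS": 0, "FAIL": 0, "NEEDS REVIEW": 0}
    for row in reader:
        status = row["Status"].strip().upper()
        counts[status] = counts.get(status, 0) + 1
    return counts

sample = """Document ID, Status, Issues, Actions
DOC-001, PASS, ,
DOC-002, FAIL, Missing retention clause, Escalate to legal"""
summary = summarize_compliance(sample)
```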
Step 5: Audit Trail Management
- Open History page.
- Use filters to view by date range, command, or user.
- Export history as CSV for compliance reporting.
- Store exports in your document management system.
Pro Tips
- Version commands: Include a version in command names (v2.1) for change tracking.
- Lock critical commands: Mark compliance-critical commands as non-editable (via admin settings).
- Regular audits: Schedule monthly reviews of History exports.
Potential Pitfalls
- Version drift: Team members modifying shared commands. Use naming conventions.
- Stale Spaces: Ensure policy documents are updated across all team Spaces.
- Incomplete audit trails: Some actions may not be captured if users bypass Rephlo.
5. AI-Assisted Code Review Pipeline
Complexity Level: Expert
Time Investment: 30 minutes setup
Prerequisites: Technical Space with coding standards, vision-capable provider
Scenario
You need to review code from screenshots (legacy systems, PRs without direct access), extract it, analyze it, generate suggestions, and create documentation.
Workflow Architecture
┌─────────────────────────────────────────────────────────────────┐
│ CODE REVIEW PIPELINE │
└─────────────────────────────────────────────────────────────────┘
┌───────────┐ ┌───────────┐ ┌───────────┐ ┌───────────┐
│ CAPTURE │───▶│ EXTRACT │───▶│ ANALYZE │───▶│ DOCUMENT │
│ │ │ │ │ │ │ │
│ Screenshot│ │ Vision │ │ Code │ │ Generate │
│ of Code │ │ OCR │ │ Review │ │ Report │
└───────────┘ └───────────┘ └───────────┘ └───────────┘
│ │ │ │
▼ ▼ ▼ ▼
┌───────────┐ ┌───────────┐ ┌───────────┐ ┌───────────┐
│ Image │ │ Raw Code │ │ Issues │ │ Markdown │
│ File │ │ Text │ │ List │ │ Report │
└───────────┘ └───────────┘ └───────────┘ └───────────┘
Step-by-Step Workflow
Step 1: Create Code Standards Space
- Create Space: "Code Standards".
- Upload your coding standards documents:
- coding-guidelines.md
- security-checklist.md
- naming-conventions.md
- performance-patterns.md
Step 2: Create Pipeline Commands
Command 1: "Extract Code from Screenshot" (Vision Command)
Extract all code visible in this screenshot.
Output requirements:
1. Preserve exact formatting and indentation
2. Use appropriate syntax highlighting markers
3. Note any text that's unclear with [UNCLEAR: best guess]
4. Identify the programming language
Output format:
```[language]
[extracted code]
```
Confidence: [High/Medium/Low]
Command 2: "Review Code Quality" (Uses Code Standards Space)
Review this code against our coding standards:
{{input_text}}
Check for:
- Style guide violations (reference: {{coding_guidelines_md}})
- Security issues (reference: {{security_checklist_md}})
- Naming convention violations (reference: {{naming_conventions_md}})
- Performance anti-patterns (reference: {{performance_patterns_md}})
For each issue found, provide:
- Line number (approximate if from screenshot)
- Severity: CRITICAL / WARNING / INFO
- Description
- Suggested fix
Output as structured list.
Command 3: "Generate Review Report"
Create a formal code review document from these findings:
{{input_text}}
Report format:
Code Review Report
Summary
- Total issues: [count]
- Critical: [count]
- Warnings: [count]
- Info: [count]
Detailed Findings
[For each issue, format as a table]
Recommendations
[Prioritized action items]
Appendix
- Original code reviewed
- Standards referenced
Step 3: Multi-File Review Strategy
For reviewing multiple related files:
Command: "Multi-File Architecture Review"
Review these related code files for architectural consistency:
{{input_text}}
Analyze:
- Dependency relationships
- Circular dependency risks
- Separation of concerns
- Interface consistency
- Error handling patterns
Create a dependency diagram using ASCII: [Draw module relationships]
Highlight any architectural concerns.
Step 4: Version Control Integration
After generating the review report:
- Copy the Markdown report.
- Paste into your PR comment or code review tool.
- Reference the History entry ID for audit purposes.
Pro Tips
- Batch screenshots: Capture all relevant code screens before starting the pipeline.
- Use consistent naming: Name screenshot files with line number ranges.
- Create review templates: Different commands for different review types (security, performance, style).
Potential Pitfalls
- OCR errors: Vision extraction may miss subtle syntax. Always verify critical code.
- Context loss: Screenshots lack git history context. Supplement with text descriptions.
- Large files: Very long code files may need to be processed in chunks.
6. Multilingual Content Pipeline
Complexity Level: Expert
Time Investment: 1 hour setup, ongoing refinement
Prerequisites: Multilingual glossary Space, native speaker reviewers (optional)
Scenario
You create content that needs to be published in 5+ languages while maintaining brand consistency, cultural appropriateness, and terminology accuracy.
Workflow Architecture
┌─────────────────────────────────────────────────────────────────┐
│                  MULTILINGUAL CONTENT PIPELINE                  │
└─────────────────────────────────────────────────────────────────┘
┌───────────┐    ┌───────────┐    ┌───────────┐    ┌───────────┐
│  SOURCE   │───▶│ TRANSLATE │───▶│ LOCALIZE  │───▶│ VALIDATE  │
│           │    │           │    │           │    │           │
│  English  │    │   Base    │    │ Cultural  │    │  Quality  │
│  Master   │    │Translation│    │Adaptation │    │   Check   │
└───────────┘    └───────────┘    └───────────┘    └───────────┘
                       │
                       ▼
              ┌─────────────────┐
              │   TERMINOLOGY   │
              │      SPACE      │
              │   (Glossary)    │
              └─────────────────┘
Step 1: Create Terminology Space
Space: "Brand Terminology"
Upload:
- `brand-glossary.csv` (term, EN, ES, FR, DE, JA, ZH)
- `style-guide-translations.md`
- `cultural-notes.md`
- `forbidden-terms.md`
Example glossary format:
```csv
term,EN,ES,FR,DE,JA,ZH
Product Name,SuperApp,SuperApp,SuperApp,SuperApp,SuperApp,SuperApp
Free Trial,Free Trial,Prueba Gratuita,Essai Gratuit,Kostenlose Testversion,無料トライアル,免费试用
```
Step 2: Create Translation Commands
Command: "Translate to Spanish" (Uses Brand Terminology Space)
Translate this content to Spanish (Spain variant):
{{input_text}}
Requirements:
1. Use formal "usted" form
2. Apply terminology from: {{brand_glossary_csv}}
3. Maintain markdown formatting
4. Preserve any URLs or code blocks unchanged
5. Flag any culturally sensitive content with [REVIEW: reason]
Output the translation directly, no explanations.
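Beyond the [REVIEW: reason] flags, a deterministic post-check can catch English glossary terms that leaked through untranslated. A minimal sketch, assuming the CSV layout from Step 1; the inline glossary here is a stand-in for your real brand-glossary.csv:

```python
import csv
import io

# Assumed subset of the real glossary file, in the Step 1 column layout.
GLOSSARY_CSV = """term,EN,ES
Free Trial,Free Trial,Prueba Gratuita
Product Name,SuperApp,SuperApp"""

def leaked_terms(translation: str, glossary_csv: str, lang: str = "ES") -> list:
    """English glossary terms that appear verbatim in the translation
    even though an approved target-language rendering exists."""
    leaks = []
    for row in csv.DictReader(io.StringIO(glossary_csv)):
        en, target = row["EN"], row[lang]
        if en != target and en.lower() in translation.lower():
            leaks.append(en)
    return leaks

issues = leaked_terms("Empiece su Free Trial hoy", GLOSSARY_CSV)
```

Terms that are intentionally identical across languages (like the product name) are skipped automatically.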
Command: "Localize for Japan"
Adapt this Spanish translation for Japanese audience:
{{input_text}}
Localization requirements:
1. Convert to appropriate politeness level (teineigo)
2. Adapt examples to Japanese context
3. Use terminology from: {{brand_glossary_csv}}
4. Adjust date/time/currency formats
5. Note any Western concepts needing explanation with [EXPLAIN: term]
Cultural notes to reference: {{cultural_notes_md}}
Step 3: Translation Memory via Spaces
Create language-specific Spaces that serve as translation memory:
Space: "Translation Memory - Spanish"
- Upload previously translated and approved content.
- The AI references these for consistency in future translations.
Command: "Translate with Memory"
Translate this to Spanish, maintaining consistency with previous translations:
New content:
{{input_text}}
Reference our translation history in this Space for:
- Consistent terminology
- Matching tone and style
- Previously approved phrases
Flag any NEW terminology not in our translation memory with [NEW: term].
Step 4: Quality Validation
Command: "Translation Quality Check"
Compare this translation against the original and our standards:
ORIGINAL (English):
[Paste original]
TRANSLATION (Spanish):
{{input_text}}
Check for:
1. Meaning accuracy (any mistranslations?)
2. Terminology consistency (using glossary: {{brand_glossary_csv}})
3. Natural flow (reads naturally to native speakers?)
4. Forbidden terms (checking: {{forbidden_terms_md}})
5. Format preservation (markdown, links intact?)
Output a quality score (1-10) with specific issues found.
Step 5: Batch Processing
For large content sets:
Command: "Batch Translate Strings"
Translate these UI strings to Spanish:
{{input_text}}
Input format: key=English value
Output format: key=Spanish value
Use glossary: {{brand_glossary_csv}}
Maintain exact key names.
Do not translate placeholder variables like {username} or {{count}}.
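Placeholder preservation is easy to verify mechanically after each batch. A sketch that compares the placeholder sets of source and translated strings (the regex covers the two placeholder styles mentioned above):

```python
import re

# Matches single-brace {username} and double-brace {{count}} placeholders.
PLACEHOLDER = re.compile(r"\{\{?\w+\}?\}")

def placeholders_preserved(source: str, translated: str) -> bool:
    """True if the translated string keeps exactly the source's placeholders."""
    return sorted(PLACEHOLDER.findall(source)) == sorted(PLACEHOLDER.findall(translated))

ok = placeholders_preserved("Hello {username}, {{count}} items",
                            "Hola {username}, {{count}} artículos")
bad = placeholders_preserved("Hello {username}", "Hola {nombre}")
```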
Pro Tips
- One language per Space: Keep translation memories separate for cleaner context.
- Version glossaries: Update glossary versions when brand terminology changes.
- Back-translation test: Translate back to English to catch meaning drift.
Potential Pitfalls
- Context loss: Short strings may translate incorrectly without context.
- Glossary conflicts: Same term may need different translations in different contexts.
- Character limits: Translated text may be longer. Account for UI constraints.
7. Research Synthesis Workflow
Complexity Level: Expert
Time Investment: 1-2 hours per project
Prerequisites: PDF ingestion capability, citation management system
Scenario
You're conducting academic or business research across 20+ sources. You need to extract key findings, identify themes, synthesize insights, and maintain proper citations.
Workflow Architecture
┌─────────────────────────────────────────────────────────────────┐
│ RESEARCH SYNTHESIS PIPELINE │
└─────────────────────────────────────────────────────────────────┘
INGESTION EXTRACTION SYNTHESIS OUTPUT
┌───────────────┐ ┌───────────────┐ ┌───────────────┐ ┌───────────┐
│ │ │ │ │ │ │ │
│ PDF Upload │──▶│ Key Points │──▶│ Thematic │──▶│ Literature│
│ to Space │ │ Extraction │ │ Analysis │ │ Review │
│ │ │ │ │ │ │ │
└───────────────┘ └───────────────┘ └───────────────┘ └───────────┘
│ │ │ │
▼ ▼ ▼ ▼
┌───────────────┐ ┌───────────────┐ ┌───────────────┐ ┌───────────┐
│ 20+ Sources │ │ Citation │ │ Cross-Ref │ │ Final │
│ in Space │ │ + Summary │ │ Matrix │ │ Document │
└───────────────┘ └───────────────┘ └───────────────┘ └───────────┘
Step-by-Step Workflow
Step 1: Create Research Space with Citations
Space: "Market Research 2025"
When uploading PDFs, use descriptive names:
- Smith2024_AI_Market_Analysis.pdf
- Jones2023_Consumer_Behavior_Study.pdf
- TechReport2024_Industry_Trends.pdf
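This naming convention also makes citation stubs extractable in code. A small sketch; the stub format is illustrative, not APA:

```python
import re

# AuthorYear_Short_Title.pdf, e.g. Smith2024_AI_Market_Analysis.pdf
NAME_PATTERN = re.compile(r"(?P<author>[A-Za-z]+)(?P<year>\d{4})_(?P<title>.+)\.pdf$")

def citation_stub(filename: str) -> str:
    """Turn an AuthorYear_Short_Title.pdf name into an in-text citation stub."""
    m = NAME_PATTERN.match(filename)
    if not m:
        return filename  # fall back to the raw name for non-conforming files
    title = m.group("title").replace("_", " ")
    return f"({m.group('author')}, {m.group('year')}): {title}"

stub = citation_stub("Smith2024_AI_Market_Analysis.pdf")
```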
Step 2: Extract Key Points with Citations
Command: "Extract Findings with Citations"
Extract the key findings from this research document:
{{input_text}}
For each finding, provide:
1. Finding summary (1-2 sentences)
2. Direct quote (if available)
3. Page number or section
4. Methodology used
5. Limitations noted by authors
Output format:
### Finding 1: [Title]
- **Summary**: [text]
- **Quote**: "[direct quote]" (p. XX)
- **Method**: [methodology]
- **Limitations**: [any caveats]
- **Citation**: [Author, Year, Title]
Step 3: Build Cross-Reference Matrix
Command: "Create Cross-Reference Matrix"
Analyze these research summaries and create a cross-reference matrix:
{{input_text}}
Matrix format:
| Theme/Topic | Source 1 | Source 2 | Source 3 | Consensus |
|-------------|----------|----------|----------|-----------|
| [Theme] | [✓/✗/~] | [✓/✗/~] | [✓/✗/~] | [Yes/No/Mixed] |
Legend:
✓ = Supports
✗ = Contradicts
~ = Neutral/No data
After the matrix, provide:
1. Strong consensus areas (all sources agree)
2. Contested areas (sources disagree)
3. Research gaps (no sources address)
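The consensus column can also be double-checked in code once you record per-source stances. A sketch using the legend above; the "No data" value for an all-neutral row is an addition not in the original legend:

```python
def consensus(stances: dict) -> str:
    """Collapse per-source stances (✓ supports, ✗ contradicts, ~ neutral)
    into the matrix's Consensus column."""
    votes = {s for s in stances.values() if s != "~"}
    if votes == {"✓"}:
        return "Yes"
    if votes == {"✗"}:
        return "No"
    return "Mixed" if votes else "No data"

row = consensus({"Smith2024": "✓", "Jones2023": "✓", "TechReport2024": "~"})
```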
Step 4: Synthesize Literature Review
Command: "Generate Literature Review Section"
Using the research summaries and cross-reference matrix, write a literature review section:
{{input_text}}
Structure:
1. Introduction to the topic
2. Thematic analysis (organize by theme, not by source)
3. Areas of consensus
4. Areas of debate
5. Identified gaps
6. Conclusion
Requirements:
- Use APA 7th edition in-text citations
- Include all sources from the matrix
- Maintain academic tone
- 1500-2000 words
Step 5: Generate Bibliography
Command: "Format Bibliography"
Generate a properly formatted bibliography from these citations:
{{input_text}}
Format: APA 7th Edition
Order: Alphabetical by first author's last name
Include:
- All in-text citations mentioned
- DOIs where available
- Access dates for web sources
Output as a formatted reference list.
Pro Tips
- Chunk large PDFs: If a PDF is 100+ pages, extract sections separately.
- Use consistent naming: The AuthorYear_ShortTitle.pdf format enables easier citation tracking.
- Create sub-Spaces: For very large projects, create Spaces by sub-topic.
Potential Pitfalls
- Citation accuracy: Always verify AI-generated citations against original sources.
- Hallucinated quotes: AI may generate plausible but incorrect direct quotes.
- Recency bias: AI may over-weight recent sources. Ensure balanced coverage.
8. Privacy-First Sensitive Data Processing
Complexity Level: Expert
Time Investment: 1-2 hours initial setup
Prerequisites: Ollama installed, local models downloaded
Scenario
You work with confidential data (PII, PHI, financial records) that cannot be sent to cloud AI providers. You need to process this data locally while maintaining compliance.
Workflow Architecture
┌─────────────────────────────────────────────────────────────────┐
│ AIR-GAPPED DATA PIPELINE │
└─────────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────┐
│ LOCAL MACHINE │
│ │
│ ┌───────────┐ ┌───────────────┐ ┌───────────────────┐ │
│ │ SENSITIVE │───▶│ OLLAMA │───▶│ REPHLO │ │
│ │ DATA │ │ (Local LLM) │ │ PROCESSING │ │
│ │ │ │ │ │ │ │
│ │ - PII │ │ - Llama 4 │ │ - Commands │ │
│ │ - PHI │ │ - Mistral │ │ - Spaces │ │
│ │ - Financial│ │ - DeepSeek │ │ - History (local) │ │
│ └───────────┘ └───────────────┘ └───────────────────┘ │
│ │
│ ╔═══════════════════╗ │
│ ║ NO DATA LEAVES ║ │
│ ║ THIS MACHINE ║ │
│ ╚═══════════════════╝ │
└─────────────────────────────────────────────────────────────────┘
X
┌─────────────────┐
│ CLOUD APIs │ BLOCKED
│ (OpenAI, etc) │
└─────────────────┘
Step-by-Step Setup
Step 1: Install and Configure Ollama
- Download Ollama from ollama.ai.
- Install and start the service.
- Pull a capable model: ollama pull llama3.1:8b or ollama pull deepseek-coder:33b.
- Verify: ollama list shows your models.
Step 2: Configure Rephlo for Ollama-Only
- Open Settings > Providers.
- Configure Ollama with local endpoint (http://localhost:11434).
- Disable or remove all cloud provider API keys.
- Test connection.
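To confirm the local endpoint is actually answering, you can query Ollama's REST API directly. A small standard-library sketch; it returns None instead of raising when the service is not running:

```python
import json
import urllib.request

def list_local_models(base: str = "http://localhost:11434"):
    """Return installed model names from Ollama's /api/tags endpoint,
    or None if the local service isn't reachable."""
    try:
        with urllib.request.urlopen(f"{base}/api/tags", timeout=3) as resp:
            return [m["name"] for m in json.load(resp)["models"]]
    except OSError:  # connection refused, timeout, etc.
        return None

models = list_local_models()
```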
Step 3: Create Privacy-Compliant Commands
Command: "Anonymize PII"
Anonymize all personally identifiable information in this text:
{{input_text}}
Replacement rules:
- Names: Replace with [NAME_1], [NAME_2], etc.
- SSN: Replace with [SSN_REDACTED]
- Phone: Replace with [PHONE_REDACTED]
- Email: Replace with [EMAIL_REDACTED]
- Address: Replace with [ADDRESS_REDACTED]
- DOB: Replace with [DOB_REDACTED]
Create a reference key at the end:
[NAME_1] = First name encountered (DO NOT include actual name)
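The LLM pass above can be backed up by a deterministic regex pre-scrub, so the most predictable identifiers never reach the model at all, even a local one. A minimal sketch; the patterns are illustrative, not exhaustive, and names are left to the model step:

```python
import re

# Deterministic pre-pass: scrub obvious identifiers BEFORE the text
# reaches any model. Patterns are US-format examples, not exhaustive.
PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN_REDACTED]"),
    (re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"), "[PHONE_REDACTED]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL_REDACTED]"),
]

def scrub(text: str) -> str:
    for pattern, token in PATTERNS:
        text = pattern.sub(token, text)
    return text

clean = scrub("Reach Jane at jane.doe@example.com or 555-123-4567, SSN 123-45-6789.")
```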
Command: "Summarize Medical Record"
Summarize this medical record for clinical review:
{{input_text}}
Include:
- Chief complaint
- Relevant history
- Current medications
- Assessment and plan
PRIVACY NOTICE: This command runs entirely locally. No PHI is transmitted.
Output in SOAP note format.
Command: "Analyze Financial Statement"
Analyze this financial statement for anomalies:
{{input_text}}
Check for:
1. Unusual transaction patterns
2. Round number red flags
3. Timing irregularities
4. Missing documentation indicators
Format as a compliance review checklist.
Step 4: Create Compliance Documentation
Command: "Generate Processing Log"
Create a data processing log entry for compliance:
Processing Details:
- Data Type: [Auto-detect from input]
- Date/Time: [Current]
- Purpose: {{purpose}}
- Retention Policy: [Per company guidelines]
{{input_text}}
Generate a processing record in the format required by [GDPR/HIPAA/SOC2].
Include: Legal basis, data minimization confirmation, and storage location.
Step 5: Air-Gapped Verification
Verify no network traffic during processing:
- Disconnect from internet (Wi-Fi off, Ethernet unplugged).
- Run a test command on sample sensitive data.
- Verify command completes successfully.
- This confirms true local processing.
Pro Tips
- Model selection: Use llama3.1:8b for general tasks, deepseek-coder:33b for code analysis.
- Regular updates: Update Ollama and models periodically (when on a non-sensitive network).
- Separate user profiles: Use a dedicated Windows user profile for sensitive data work.
Potential Pitfalls
- Lower quality: Local models may not match GPT-5.1/Claude Sonnet 4.5 quality. Adjust expectations.
- Hardware limits: Large models need significant RAM (16GB+) and GPU (optional but faster).
- History storage: Rephlo History is stored locally. Ensure disk encryption is enabled.
9. Real-Time Meeting Assistant
Complexity Level: Expert
Time Investment: 15 minutes setup
Prerequisites: Dual monitor or split screen, fast provider (Groq recommended)
Scenario
You're in a live meeting and need to quickly summarize discussions, extract action items, and draft follow-up emails in real-time.
Workflow Architecture
┌─────────────────────────────────────────────────────────────────┐
│ REAL-TIME MEETING FLOW │
└─────────────────────────────────────────────────────────────────┘
DURING MEETING BETWEEN TOPICS
┌───────────────────────────┐ ┌───────────────────────────┐
│ │ │ │
│ 📹 Video Call │ │ Quick Capture: │
│ ┌─────────────────────┐ │ │ - Select key quote │
│ │ │ │ ────▶ │ - Hotkey (Ctrl+Shift+Alt+C) │
│ │ Meeting in Progress│ │ │ - "Summarize" command │
│ │ │ │ │ - Copy result │
│ └─────────────────────┘ │ │ │
│ │ └───────────────────────────┘
└───────────────────────────┘
POST-MEETING (IMMEDIATE)
┌───────────────────────────────────────────────────────────────┐
│ │
│ Paste full transcript or notes ──▶ "Extract Actions" │
│ ──▶ "Draft Follow-up Email" │
│ ──▶ "Create Summary" │
│ │
└───────────────────────────────────────────────────────────────┘
Step-by-Step Setup
Step 1: Create Meeting Command Group
Group: "Meeting Tools" (Lightning emoji, Yellow)
Commands in group:
- Quick Summarize
- Extract Actions
- Draft Follow-up
- Clarify Point
- Translate (for international meetings)
Step 2: Optimize for Speed
- Set Groq as default provider for Meeting Tools group (near-instant responses).
- Configure Global Hotkey to something single-handed: Ctrl+\ or Ctrl+.
- Enable "Copy to Clipboard Automatically" in Settings.
Step 3: Create Meeting-Specific Commands
Command: "Quick Summarize" (< 5 seconds)
Summarize this in 2-3 bullet points. Be extremely concise:
{{input_text}}
Command: "Extract Actions"
Extract action items from this meeting discussion:
{{input_text}}
Format each as:
[ ] [Owner] - [Task] - [Due date if mentioned]
If no owner is clear, use [UNASSIGNED].
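If you later feed these results into a task tracker, the bracketed format above is easy to parse programmatically. A minimal sketch (the function name and regex are our own for illustration, not part of Rephlo):

```python
import re

# Matches lines like: [ ] [Maria] - [Send deck to client] - [Friday]
ACTION_RE = re.compile(
    r"\[ \] \[(?P<owner>[^\]]+)\] - \[(?P<task>[^\]]+)\](?: - \[(?P<due>[^\]]+)\])?"
)

def parse_actions(text):
    """Turn 'Extract Actions' output into a list of dicts."""
    actions = []
    for line in text.splitlines():
        m = ACTION_RE.search(line)
        if m:
            actions.append({
                "owner": m.group("owner"),
                "task": m.group("task"),
                "due": m.group("due"),  # None if no due date was mentioned
            })
    return actions
```

Lines without an owner come through as `[UNASSIGNED]`, so every parsed item has a non-empty owner field you can triage later.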
Command: "Draft Follow-up Email"
Draft a brief follow-up email based on this meeting discussion:
{{input_text}}
Structure:
- Subject line
- Thank attendees
- Key decisions made
- Action items (with owners)
- Next steps
Keep it under 200 words. Professional but warm tone.
Command: "Clarify Point"
Someone just said this in a meeting and I need to understand it better:
{{input_text}}
Provide:
1. What they likely mean (1 sentence)
2. A clarifying question I could ask
3. Related context that might be relevant
Step 4: Real-Time Workflow
During Meeting:
- Hear an important point.
- Type or paste the key phrase into a notepad.
- Select text, press Hotkey.
- Run "Quick Summarize" or "Clarify Point".
- Use the insight immediately in the discussion.
Immediately After:
- Paste full meeting notes.
- Run "Extract Actions".
- Run "Draft Follow-up Email".
- Send follow-up within 5 minutes of meeting end.
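If you want to automate the post-meeting sequence, the three commands above form a simple chain you can script around whatever LLM client you already use. A sketch under that assumption — `run_command` is a stand-in for your own client call, not a Rephlo API:

```python
# Prompt templates mirroring the three post-meeting commands above.
PROMPTS = {
    "extract_actions": "Extract action items from this meeting discussion:\n{text}",
    "draft_followup": "Draft a brief follow-up email based on this meeting discussion:\n{text}",
    "summary": "Summarize this in 2-3 bullet points. Be extremely concise:\n{text}",
}

def post_meeting(notes, run_command):
    """Run each post-meeting prompt against the notes.

    run_command(prompt) -> str is whatever model client you use;
    it is purely illustrative, NOT a Rephlo API.
    """
    return {name: run_command(tmpl.format(text=notes))
            for name, tmpl in PROMPTS.items()}
```

The point of the sketch is the ordering: one paste of the meeting notes fans out into actions, a follow-up draft, and a summary in a single pass.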
Pro Tips
- Prepare beforehand: Have Rephlo open and Meeting Tools group visible.
- Use transcription: If meeting platform has transcription, use that as input.
- Practice the hotkey: Muscle memory is essential for real-time use.
Potential Pitfalls
- Distraction: Focusing on Rephlo during the meeting may cause you to miss content.
- Audio lag: Transcription may lag behind live discussion.
- Context missing: Short snippets may be misinterpreted without full context.
10. Creative Writing Workflows
Complexity Level: Expert Time Investment: Variable (ongoing project) Prerequisites: Style guide Space, character/world-building documents
Scenario
You're writing a novel or long-form content and need to maintain consistency across characters, plot points, and writing style over hundreds of pages.
Workflow Architecture
┌─────────────────────────────────────────────────────────────────┐
│ CREATIVE WRITING PIPELINE │
└─────────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────┐
│ WORLD SPACE │
│ │
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
│ │ Characters │ │ Plot │ │ World │ │
│ │ Bible │ │ Outline │ │ Building │ │
│ │ │ │ │ │ │ │
│ │ - Profiles │ │ - Chapters │ │ - Locations │ │
│ │ - Arcs │ │ - Beats │ │ - Rules │ │
│ │ - Voice │ │ - Themes │ │ - History │ │
│ └─────────────┘ └─────────────┘ └─────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────┐
│ CHAPTER CREATION FLOW │
│ │
│ [Outline] ──▶ [Draft] ──▶ [Consistency Check] ──▶ [Polish] │
│ │
└─────────────────────────────────────────────────────────────────┘
Step-by-Step Setup
Step 1: Create World Bible Space
Space: "Novel - Project Nightfall"
Documents to include:
- characters.md - Full character profiles with voice samples
- plot-outline.md - Chapter-by-chapter synopsis
- world-rules.md - Magic system, technology, society rules
- locations.md - Setting descriptions
- timeline.md - Chronological events
- style-guide.md - Your writing style, POV rules, tense
Character Profile Format:
# SARAH CHEN
## Basics
- Age: 32
- Occupation: Quantum physicist
- Appearance: Short black hair, wire-frame glasses, always wears her mother's jade pendant
## Personality
- Analytical but empathetic
- Dry sense of humor
- Avoids conflict but fierce when cornered
## Speech Patterns
- Uses scientific metaphors
- Rarely uses contractions when stressed
- Catchphrase: "That's statistically improbable."
## Arc
- Chapter 1-5: Skeptic
- Chapter 6-12: Reluctant believer
- Chapter 13-20: Leader
## Voice Sample
"The probability of two people meeting in a city of eight million isn't romantic—it's just math. But math never explained why I couldn't stop thinking about him."
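Profiles in this format are also easy to load programmatically outside Rephlo, for example to lint a manuscript against each character's speech patterns. A rough sketch that only handles the `## Heading` layout shown above (the helper is hypothetical, not a Rephlo feature):

```python
def parse_profile(md):
    """Split a character profile into {section_name: [lines]} using '## ' headings."""
    sections, current = {}, None
    for line in md.splitlines():
        if line.startswith("## "):
            current = line[3:].strip()
            sections[current] = []
        elif current is not None and line.strip():
            sections[current].append(line.strip())
    return sections
```

With the profile loaded as a dict, a script could, say, grep each chapter for a character's catchphrase or flag contractions in their stressed dialogue.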
Step 2: Create Chapter-Specific Commands
Command: "Draft Chapter Scene"
Write a scene for Chapter {{chapter_number}} based on:
Scene brief:
{{input_text}}
Requirements:
1. Maintain POV: [Third person limited, Sarah's perspective]
2. Match tone: [Reference {{style_guide_md}}]
3. Include characters: [Check {{characters_md}} for voice consistency]
4. Setting: [Check {{locations_md}} for accurate details]
5. Plot alignment: [Verify against {{plot_outline_md}}]
Write approximately 1500 words. End on a hook for the next scene.
Command: "Character Voice Check"
Review this dialogue for character voice consistency:
{{input_text}}
Cross-reference with character profiles in {{characters_md}}.
For each character's lines, check:
1. Speech pattern match (contractions, vocabulary, etc.)
2. Personality alignment
3. Arc-appropriate attitude (based on chapter)
Flag any lines that feel out of character with [OOC: reason].
Command: "Continuity Check"
Check this chapter excerpt for continuity errors:
{{input_text}}
Verify against:
- Timeline: {{timeline_md}}
- Previous chapters: [Note any callbacks]
- World rules: {{world_rules_md}}
Flag:
1. Time inconsistencies
2. Character knowledge errors (character shouldn't know X yet)
3. World-building violations
4. Dropped plot threads
Output as a continuity report.
Step 3: Style Guide Enforcement
Command: "Apply Style Guide"
Edit this passage to match my writing style:
{{input_text}}
Style requirements from {{style_guide_md}}:
- Sentence rhythm: [Varied, mix short punch with flowing descriptions]
- Dialogue tags: [Said, asked primarily; avoid adverbs]
- Description ratio: [30% description, 40% dialogue, 30% action]
- POV discipline: [Strict limited; no head-hopping]
Rewrite to match these guidelines while preserving the scene's intent.
Step 4: Chapter-by-Chapter Workflow
For each chapter:
- Review plot outline for the chapter.
- Draft scene by scene using "Draft Chapter Scene".
- Run "Character Voice Check" on dialogue.
- Run "Continuity Check" on full chapter.
- Run "Apply Style Guide" for final polish.
- Add chapter to Space for future reference.
Step 5: Series Bible Maintenance
After completing each chapter:
Command: "Update Series Bible"
Based on this completed chapter, identify any updates needed for the series bible:
{{input_text}}
Check for:
1. New character revelations → Update {{characters_md}}
2. New locations introduced → Update {{locations_md}}
3. Timeline events → Update {{timeline_md}}
4. World-building details → Update {{world_rules_md}}
Output as a list of suggested updates with specific text changes.
Pro Tips
- Version your drafts: Save each draft version in the Space with chapter + version numbering.
- Character voice warm-up: Before writing dialogue, re-read that character's voice samples.
- Outline flexibility: Update plot outline as the story evolves. The Space reflects current canon.
Potential Pitfalls
- Over-reliance on AI: The AI should assist, not replace your creative voice.
- Consistency drift: As the Space grows, older entries may conflict with newer ones.
- Token limits: Very long novels may exceed Space capacity. Archive completed arcs.
Summary: Power User Principles
After mastering these advanced workflows, remember these principles:
- Right Provider for Right Task: Don't use premium models for simple tasks.
- Spaces as Memory: Your Spaces are your second brain. Invest in organizing them.
- Commands as Recipes: Build reusable command chains for repeated workflows.
- Audit Everything: Use History for compliance and continuous improvement.
- Local When Sensitive: Default to Ollama for any data you wouldn't email.
- Iterate and Refine: These workflows evolve. Update your commands regularly.
This tutorial represents advanced techniques for experienced Rephlo users. Start with the basic tutorials if these concepts are unfamiliar.
Related Documentation: