Claude Code Automation and CI/CD Workflows
Automate development workflows and integrate Claude Code with CI/CD pipelines
Transform your development process with powerful automation and custom workflows using Claude Code. This comprehensive guide covers everything from simple task automation to complex CI/CD integrations and enterprise-scale workflow orchestration.
Table of Contents
- Introduction to Automation
- Building Custom Workflows
- Task Automation Patterns
- CI/CD Pipeline Integration
- Scheduled and Event-Driven Workflows
- Advanced Workflow Orchestration
- Monitoring and Optimization
- Enterprise Workflow Patterns
- Hands-On Project: Complete DevOps Pipeline
- Best Practices and Troubleshooting
Introduction to Automation
Why Automate with Claude Code?
Claude Code automation provides:
- Intelligent Decision Making: AI-powered analysis for smarter automation
- Context-Aware Actions: Understands your codebase and project structure
- Natural Language Configuration: Define workflows in plain English
- Adaptive Behavior: Learns from patterns and improves over time
- Seamless Integration: Works with existing tools and processes
Core Automation Concepts
# .claude-code/automation-config.yaml
version: '1.0'
settings:
defaultModel: claude-3-opus
maxConcurrentWorkflows: 5
retryPolicy:
maxAttempts: 3
backoffMultiplier: 2
notifications:
email: team@company.com
slack: '#dev-notifications'
globalVariables:
PROJECT_NAME: ${env:PROJECT_NAME}
ENVIRONMENT: ${env:ENVIRONMENT}
API_ENDPOINT: ${env:API_ENDPOINT}
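The `retryPolicy` block above corresponds to plain retry-with-exponential-backoff semantics: up to `maxAttempts` tries, with the wait between attempts multiplied by `backoffMultiplier` each time. As an illustration only (a minimal sketch, not the Claude Code SDK's actual implementation; `withRetry` and `baseDelayMs` are hypothetical names):

```javascript
// Sketch of the retryPolicy semantics above: retry a failing async task up
// to maxAttempts times, doubling the delay between attempts.
async function withRetry(fn, { maxAttempts = 3, backoffMultiplier = 2, baseDelayMs = 1000 } = {}) {
  let delay = baseDelayMs;
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn(attempt);
    } catch (err) {
      if (attempt >= maxAttempts) throw err; // all attempts exhausted
      await new Promise((resolve) => setTimeout(resolve, delay));
      delay *= backoffMultiplier; // 1s, 2s, 4s, ...
    }
  }
}
```

With the config above (`maxAttempts: 3`, `backoffMultiplier: 2`), a flaky step gets three tries with 1s and then 2s pauses before the workflow gives up.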
Building Custom Workflows
Workflow Structure
Basic Workflow Definition
# .claude-code/workflows/code-review.yaml
name: Automated Code Review
description: Comprehensive code review with fixes and documentation
version: '1.0'
triggers:
- type: git
events: [push, pull_request]
branches: [main, develop, 'feature/*']
- type: manual
command: review-code
- type: schedule
cron: '0 9 * * MON' # Weekly on Monday at 9 AM
inputs:
files:
type: array
description: Files to review
default: ${git.changed_files}
reviewDepth:
type: string
enum: [quick, standard, deep]
default: standard
steps:
- id: analyze
name: Analyze Code Changes
action: claude:analyze
with:
files: ${inputs.files}
context:
projectType: ${project.type}
standards: ${project.codingStandards}
outputs:
analysis: ${step.result}
- id: review
name: Perform Code Review
action: claude:review
with:
files: ${inputs.files}
depth: ${inputs.reviewDepth}
previousAnalysis: ${steps.analyze.outputs.analysis}
focusAreas:
- security
- performance
- maintainability
- best-practices
outputs:
issues: ${step.result.issues}
suggestions: ${step.result.suggestions}
- id: autofix
name: Apply Automatic Fixes
action: claude:fix
condition: ${steps.review.outputs.issues.some(i => i.autoFixable)}
with:
issues: ${steps.review.outputs.issues.filter(i => i.autoFixable)}
strategy: conservative
outputs:
fixedFiles: ${step.result.files}
- id: test
name: Run Tests on Fixed Files
action: shell:run
condition: ${steps.autofix.outputs.fixedFiles.length > 0}
with:
command: npm test ${steps.autofix.outputs.fixedFiles.join(' ')}
continueOnError: true
- id: document
name: Update Documentation
action: claude:document
with:
files: ${steps.autofix.outputs.fixedFiles || inputs.files}
updateExisting: true
generateMissing: true
- id: report
name: Generate Report
action: claude:report
with:
template: code-review-detailed
data:
analysis: ${steps.analyze.outputs}
review: ${steps.review.outputs}
fixes: ${steps.autofix.outputs}
tests: ${steps.test.outputs}
documentation: ${steps.document.outputs}
outputs:
report: ${step.result}
outputs:
reportUrl: ${steps.report.outputs.report.url}
issueCount: ${steps.review.outputs.issues.length}
fixedCount: ${steps.autofix.outputs.fixedFiles.length}
notifications:
- type: slack
condition: always
channel: ${notifications.slack.channel}
message: |
Code Review Complete for ${inputs.files.length} files
Issues Found: ${outputs.issueCount}
Auto-fixed: ${outputs.fixedCount}
Report: ${outputs.reportUrl}
Advanced Workflow Features
Parallel Processing
# .claude-code/workflows/parallel-analysis.yaml
name: Parallel Code Analysis
steps:
- id: split-files
name: Group Files by Type
action: core:group
with:
items: ${inputs.files}
groupBy: extension
- id: analyze-groups
name: Analyze Each Group
action: core:parallel
with:
items: ${steps.split-files.outputs.groups}
maxConcurrency: 4
workflow:
- id: analyze-type
action: claude:analyze
with:
files: ${item.files}
analysisType: ${item.key} # Extension-specific analysis
- id: optimize-type
action: claude:optimize
with:
files: ${item.files}
targetMetrics: ${getOptimizationMetrics(item.key)}
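`core:parallel` with `maxConcurrency` amounts to a bounded worker pool over the grouped items: no more than the configured number of analyses run at once. A minimal sketch of that pattern (a hypothetical helper, not SDK code):

```javascript
// Bounded-concurrency map: run `worker` over `items`, keeping at most
// `limit` invocations in flight at any time. Results keep input order.
async function parallelMap(items, worker, limit = 4) {
  const results = new Array(items.length);
  let next = 0;
  async function run() {
    while (next < items.length) {
      const i = next++; // claim the next unprocessed item
      results[i] = await worker(items[i], i);
    }
  }
  // Start up to `limit` runners that drain the shared queue.
  const runners = Array.from({ length: Math.min(limit, items.length) }, run);
  await Promise.all(runners);
  return results;
}
```

Here each "item" plays the role of one file group from `split-files`, and `limit` plays the role of `maxConcurrency: 4`.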
Conditional Logic
# .claude-code/workflows/smart-deployment.yaml
name: Intelligent Deployment
steps:
- id: check-quality
name: Code Quality Gate
action: claude:quality-check
with:
threshold: 85
metrics: [coverage, complexity, duplication]
- id: decide-deployment
name: Deployment Decision
action: core:switch
with:
value: ${steps.check-quality.outputs.score}
cases:
- condition: '>= 95'
steps:
- action: deploy:production
with:
strategy: blue-green
- condition: '>= 85'
steps:
- action: deploy:staging
- action: notify:qa
with:
message: 'Staging deployment ready for testing'
- condition: '< 85'
steps:
- action: notify:dev
with:
message: 'Quality gate failed: ${steps.check-quality.outputs.report}'
- action: core:fail
with:
message: 'Deployment blocked due to quality issues'
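The `core:switch` above is an ordinary threshold ladder: the first matching case wins, evaluated from highest score downward. The same decision table in plain JavaScript (names are illustrative, not part of the workflow DSL):

```javascript
// Quality-gate decision mirroring the smart-deployment switch:
// >= 95 -> production (blue-green), >= 85 -> staging + QA ping, else fail.
function decideDeployment(score) {
  if (score >= 95) return { action: 'deploy:production', strategy: 'blue-green' };
  if (score >= 85) return { action: 'deploy:staging', notify: 'qa' };
  return { action: 'fail', reason: 'Deployment blocked due to quality issues' };
}
```

Because the cases are checked in descending order, a score of 96 never falls through to the staging branch.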
Custom Actions
// .claude-code/actions/security-scan.js
module.exports = {
name: 'security-scan',
description: 'Advanced security vulnerability scanning',
inputs: {
files: { type: 'array', required: true },
severity: { type: 'string', default: 'medium' },
scanDependencies: { type: 'boolean', default: true }
},
async execute(context) {
const { files, severity, scanDependencies } = context.inputs;
const claude = context.claude;
// Scan for vulnerabilities
const codeScan = await claude.analyze(files.join('\n'), {
prompt: `Perform a security audit looking for:
- SQL injection vulnerabilities
- XSS vulnerabilities
- Authentication/authorization issues
- Sensitive data exposure
- Insecure dependencies
Minimum severity: ${severity}`,
model: 'claude-3-opus'
});
let results = {
vulnerabilities: parseScanResults(codeScan),
scannedFiles: files.length,
timestamp: new Date().toISOString()
};
if (scanDependencies) {
// Scan package files
const packageFiles = files.filter(f =>
f.includes('package.json') ||
f.includes('requirements.txt') ||
f.includes('Gemfile')
);
for (const pkgFile of packageFiles) {
const depScan = await scanDependencyFile(pkgFile, context);
results.vulnerabilities.push(...depScan);
}
}
// Generate remediation suggestions
if (results.vulnerabilities.length > 0) {
const fixes = await claude.generate({
prompt: `Generate fixes for these vulnerabilities:
${JSON.stringify(results.vulnerabilities)}`,
context: { files, projectType: context.project.type }
});
results.suggestedFixes = fixes;
}
return results;
}
};
async function scanDependencyFile(file, context) {
// Implementation for dependency scanning
const content = await context.readFile(file);
const dependencies = parseDependencies(content, file);
// Check against vulnerability databases
const vulnerabilities = [];
for (const dep of dependencies) {
const issues = await checkVulnerabilityDatabase(dep);
vulnerabilities.push(...issues);
}
return vulnerabilities;
}
Task Automation Patterns
Common Automation Patterns
1. Code Generation Pipeline
# .claude-code/workflows/feature-generation.yaml
name: Complete Feature Generation
description: Generate entire feature with tests, docs, and API
inputs:
featureName:
type: string
required: true
featureSpec:
type: string
description: Natural language feature specification
steps:
- id: plan
name: Create Implementation Plan
action: claude:plan
with:
specification: ${inputs.featureSpec}
architecture: ${project.architecture}
constraints: ${project.constraints}
- id: generate-api
name: Generate API Layer
action: claude:generate
with:
template: rest-api
spec: ${steps.plan.outputs.apiSpec}
outputPath: src/api/${inputs.featureName}
- id: generate-backend
name: Generate Backend Logic
action: claude:generate
with:
template: service-layer
spec: ${steps.plan.outputs.backendSpec}
outputPath: src/services/${inputs.featureName}
- id: generate-frontend
name: Generate Frontend Components
action: claude:generate
with:
template: react-components
spec: ${steps.plan.outputs.frontendSpec}
outputPath: src/components/${inputs.featureName}
- id: generate-tests
name: Generate Test Suite
action: core:parallel
with:
workflows:
- action: claude:generate-tests
with:
type: unit
target: ${steps.generate-backend.outputs.files}
- action: claude:generate-tests
with:
type: integration
target: ${steps.generate-api.outputs.files}
- action: claude:generate-tests
with:
type: component
target: ${steps.generate-frontend.outputs.files}
- id: generate-docs
name: Generate Documentation
action: claude:document
with:
files: ${getAllGeneratedFiles()}
includeApi: true
includeExamples: true
includeArchitecture: true
2. Intelligent Refactoring
// .claude-code/workflows/smart-refactor.js
const { Workflow } = require('@anthropic/claude-code-sdk');
class SmartRefactorWorkflow extends Workflow {
async execute(context) {
const { targetPattern, refactorGoal } = context.inputs;
// Step 1: Identify refactoring candidates
const candidates = await this.findRefactoringCandidates(
context.files,
targetPattern
);
// Step 2: Analyze impact
const impactAnalysis = await context.claude.analyze({
prompt: `Analyze the impact of refactoring these items:
${JSON.stringify(candidates)}
Goal: ${refactorGoal}
Consider:
- Breaking changes
- Performance impact
- Test coverage
- Dependencies`,
context: {
projectStructure: context.project.structure,
dependencies: context.project.dependencies
}
});
// Step 3: Create refactoring plan
const plan = await this.createRefactoringPlan(
candidates,
impactAnalysis,
refactorGoal
);
// Step 4: Execute refactoring in phases
const results = [];
for (const phase of plan.phases) {
const phaseResult = await this.executePhase(phase, context);
// Run tests after each phase
const testResult = await context.runCommand(
`npm test -- ${phase.affectedFiles.join(' ')}`
);
if (testResult.exitCode !== 0) {
// Rollback this phase
await this.rollbackPhase(phase, context);
throw new Error(`Phase ${phase.name} failed tests`);
}
results.push(phaseResult);
}
// Step 5: Update documentation
await this.updateDocumentation(results, context);
return {
refactoredFiles: results.flatMap(r => r.files),
improvements: this.calculateImprovements(results),
report: await this.generateReport(results, context)
};
}
async findRefactoringCandidates(files, pattern) {
// Complex pattern matching logic
const candidates = [];
for (const file of files) {
const content = await this.readFile(file);
const ast = this.parseAST(content);
// Find patterns that match refactoring criteria
const matches = this.findPatterns(ast, pattern);
candidates.push(...matches.map(m => ({
file,
location: m.location,
code: m.code,
complexity: this.calculateComplexity(m.node)
})));
}
// Prioritize by complexity and impact
return candidates.sort((a, b) => b.complexity - a.complexity);
}
async createRefactoringPlan(candidates, impact, goal) {
// Group related changes into phases
const phases = [];
const processed = new Set();
for (const candidate of candidates) {
if (processed.has(candidate.file)) continue;
// Find all related candidates
const related = candidates.filter(c =>
this.areRelated(candidate, c) && !processed.has(c.file)
);
phases.push({
name: `Refactor ${candidate.file}`,
candidates: [candidate, ...related],
affectedFiles: [candidate.file, ...related.map(r => r.file)],
estimatedImpact: this.estimatePhaseImpact(candidate, related, impact)
});
related.forEach(r => processed.add(r.file));
processed.add(candidate.file);
}
return { phases, totalImpact: impact };
}
}
module.exports = SmartRefactorWorkflow;
3. Automated Testing Strategy
# .claude-code/workflows/test-automation.yaml
name: Intelligent Test Generation and Optimization
triggers:
- type: file-change
patterns: ['src/**/*.{js,ts,jsx,tsx}']
excludes: ['**/*.test.*', '**/*.spec.*']
steps:
- id: analyze-changes
name: Analyze Code Changes
action: claude:analyze-diff
with:
base: ${git.base_ref}
head: ${git.head_ref}
- id: identify-test-gaps
name: Find Test Coverage Gaps
action: claude:coverage-analysis
with:
changedFiles: ${steps.analyze-changes.outputs.files}
existingTests: ${findTests(steps.analyze-changes.outputs.files)}
targetCoverage: 90
- id: generate-tests
name: Generate Missing Tests
action: claude:generate-tests
with:
strategy: ${determineTestStrategy()}
files: ${steps.identify-test-gaps.outputs.uncoveredFiles}
testTypes:
- unit:
framework: jest
style: describe-it
assertions: expect
- integration:
framework: supertest
database: mock
- e2e:
framework: playwright
browsers: [chrome, firefox]
- id: optimize-tests
name: Optimize Test Suite
action: claude:optimize-tests
with:
tests: ${steps.generate-tests.outputs.tests}
goals:
- reduce-duplication
- improve-performance
- increase-reliability
- id: run-tests
name: Execute Test Suite
action: core:parallel
with:
commands:
- npm run test:unit -- ${steps.generate-tests.outputs.unitTests}
- npm run test:integration -- ${steps.generate-tests.outputs.integrationTests}
- npm run test:e2e -- ${steps.generate-tests.outputs.e2eTests}
maxConcurrency: 3
CI/CD Pipeline Integration
GitHub Actions Integration
# .github/workflows/claude-code-pipeline.yml
name: Claude Code Enhanced CI/CD
on:
push:
branches: [main, develop]
pull_request:
types: [opened, synchronize, reopened]
workflow_dispatch:
inputs:
analysis_depth:
description: 'Depth of Claude analysis'
required: false
default: 'standard'
type: choice
options: ['quick', 'standard', 'deep']
env:
CLAUDE_API_KEY: ${{ secrets.CLAUDE_API_KEY }}
jobs:
claude-analysis:
runs-on: ubuntu-latest
outputs:
  should_deploy: ${{ steps.decision.outputs.deploy }}
  risk_level: ${{ steps.analysis.outputs.risk }}
  # Emitted by the analysis/decision steps below; consumed by the
  # intelligent-testing and deploy jobs.
  test_matrix: ${{ steps.analysis.outputs.test_matrix }}
  deployment_strategy: ${{ steps.decision.outputs.strategy }}
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 0 # Full history for better analysis
- name: Setup Claude Code
uses: anthropic/claude-code-action@v1
with:
api-key: ${{ env.CLAUDE_API_KEY }}
config-file: .claude-code/ci-config.yaml
- name: Intelligent Code Analysis
id: analysis
run: |
claude-code analyze \
--depth=${{ github.event.inputs.analysis_depth || 'standard' }} \
--compare-with=origin/main \
--output=json > analysis.json
echo "risk=$(jq -r '.riskLevel' analysis.json)" >> $GITHUB_OUTPUT
echo "score=$(jq -r '.qualityScore' analysis.json)" >> $GITHUB_OUTPUT
# Compact JSON array consumed by the intelligent-testing job's matrix
echo "test_matrix=$(jq -c '.testMatrix' analysis.json)" >> $GITHUB_OUTPUT
- name: Security Scan
run: |
claude-code security-scan \
--severity=medium \
--include-dependencies \
--fix-suggestions > security-report.md
- name: Performance Impact Analysis
run: |
claude-code performance-impact \
--baseline=main \
--metrics=bundle-size,runtime,memory \
--threshold=10 > performance-report.md
- name: Deployment Decision
id: decision
run: |
claude-code decide-deployment \
--quality-score=${{ steps.analysis.outputs.score }} \
--risk-level=${{ steps.analysis.outputs.risk }} \
--branch=${{ github.ref_name }} \
--pr=${{ github.event.pull_request.number || 'none' }} \
> deployment-decision.json
echo "deploy=$(jq -r '.shouldDeploy' deployment-decision.json)" >> $GITHUB_OUTPUT
echo "strategy=$(jq -r '.strategy' deployment-decision.json)" >> $GITHUB_OUTPUT
- name: Generate PR Comment
if: github.event_name == 'pull_request'
uses: actions/github-script@v6
with:
script: |
const fs = require('fs');
const analysis = JSON.parse(fs.readFileSync('analysis.json', 'utf8'));
const security = fs.readFileSync('security-report.md', 'utf8');
const performance = fs.readFileSync('performance-report.md', 'utf8');
const comment = `## 🤖 Claude Code Analysis
### Summary
- **Quality Score**: ${analysis.qualityScore}/100
- **Risk Level**: ${analysis.riskLevel}
- **Issues Found**: ${analysis.issues.length}
- **Auto-fixable**: ${analysis.autoFixable}
### Security Report
${security}
### Performance Impact
${performance}
### Recommendations
${analysis.recommendations.map(r => `- ${r}`).join('\n')}
<details>
<summary>View detailed analysis</summary>
\`\`\`json
${JSON.stringify(analysis, null, 2)}
\`\`\`
</details>`;
github.rest.issues.createComment({
issue_number: context.issue.number,
owner: context.repo.owner,
repo: context.repo.repo,
body: comment
});
automated-fixes:
needs: claude-analysis
if: github.event_name == 'pull_request'
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
token: ${{ secrets.GITHUB_TOKEN }}
- name: Apply Claude Fixes
run: |
claude-code fix \
--auto-approve=safe \
--commit-message="🤖 Apply Claude Code automatic fixes" \
--test-after-fix
- name: Push fixes
run: |
git push origin HEAD:${{ github.head_ref }}
intelligent-testing:
needs: claude-analysis
runs-on: ubuntu-latest
strategy:
matrix:
test-suite: ${{ fromJson(needs.claude-analysis.outputs.test_matrix) }}
steps:
- uses: actions/checkout@v3
- name: Generate Contextual Tests
run: |
claude-code generate-tests \
--changes-only \
--test-type=${{ matrix.test-suite }} \
--coverage-target=95
- name: Run Intelligent Test Suite
run: |
claude-code test \
--optimize-run-order \
--parallel \
--fail-fast=${{ needs.claude-analysis.outputs.risk_level == 'high' }}
deploy:
needs: [claude-analysis, intelligent-testing]
if: needs.claude-analysis.outputs.should_deploy == 'true'
runs-on: ubuntu-latest
steps:
- name: Intelligent Deployment
run: |
claude-code deploy \
--strategy=${{ needs.claude-analysis.outputs.deployment_strategy }} \
--environment=${{ github.ref_name == 'main' && 'production' || 'staging' }} \
--rollback-on-error \
--monitor-duration=300
Jenkins Integration
// Jenkinsfile
@Library('claude-code-library') _
pipeline {
agent any
environment {
CLAUDE_API_KEY = credentials('claude-api-key')
}
stages {
stage('Claude Code Analysis') {
steps {
script {
def analysis = claudeCode.analyze(
depth: params.ANALYSIS_DEPTH ?: 'standard',
focus: ['security', 'performance', 'quality']
)
env.QUALITY_SCORE = analysis.qualityScore
env.RISK_LEVEL = analysis.riskLevel
if (analysis.criticalIssues > 0) {
error "Critical issues found: ${analysis.criticalIssues}"
}
}
}
}
stage('Automated Improvements') {
when {
expression { env.QUALITY_SCORE.toInteger() < 90 }
}
steps {
script {
claudeCode.improve(
target: 'all',
goals: ['performance', 'readability', 'maintainability'],
testAfter: true
)
}
}
}
stage('Intelligent Test Generation') {
parallel {
stage('Unit Tests') {
steps {
script {
claudeCode.generateTests(
type: 'unit',
coverage: 95,
framework: 'jest'
)
}
}
}
stage('Integration Tests') {
steps {
script {
claudeCode.generateTests(
type: 'integration',
scenarios: 'common-paths',
framework: 'mocha'
)
}
}
}
}
}
stage('Deployment Decision') {
steps {
script {
def decision = claudeCode.deploymentDecision(
qualityScore: env.QUALITY_SCORE,
testResults: currentBuild.result,
environment: env.BRANCH_NAME == 'main' ? 'production' : 'staging'
)
if (decision.approve) {
env.DEPLOY_STRATEGY = decision.strategy
} else {
error "Deployment blocked: ${decision.reason}"
}
}
}
}
}
post {
always {
claudeCode.generateReport(
includeMetrics: true,
includeSuggestions: true,
format: 'html'
)
}
}
}
Scheduled and Event-Driven Workflows
Scheduled Workflows
# .claude-code/workflows/scheduled-maintenance.yaml
name: Automated Codebase Maintenance
schedule:
- cron: '0 2 * * *' # Daily at 2 AM
- cron: '0 9 * * MON' # Weekly on Monday at 9 AM
jobs:
daily-maintenance:
schedule: '0 2 * * *'
steps:
- id: cleanup
name: Code Cleanup
action: claude:cleanup
with:
targets:
- remove-unused-imports
- format-code
- fix-linting-errors
- remove-console-logs
- id: dependency-check
name: Check Dependencies
action: claude:check-dependencies
with:
checkSecurity: true
checkOutdated: true
autoUpdate: patch # Only auto-update patch versions
- id: optimize-images
name: Optimize Assets
action: claude:optimize-assets
with:
types: [images, fonts, icons]
compression: aggressive
weekly-analysis:
schedule: '0 9 * * MON'
steps:
- id: architecture-review
name: Architecture Analysis
action: claude:architecture-review
with:
checkPatterns: [mvc, solid, dry]
detectAntiPatterns: true
suggestImprovements: true
- id: performance-audit
name: Performance Audit
action: claude:performance-audit
with:
profile: true
identifyBottlenecks: true
suggestOptimizations: true
- id: technical-debt
name: Technical Debt Analysis
action: claude:analyze-debt
with:
calculateScore: true
prioritizeItems: true
estimateEffort: true
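The `schedule` strings above use standard five-field cron syntax: minute, hour, day-of-month, month, day-of-week. For intuition, here is a minimal matcher sketch that handles only literal numbers, `*`, and day names such as `MON` — no ranges, lists, or step values:

```javascript
// Minimal five-field cron matcher (illustrative only; real schedulers
// also support ranges like 1-5, lists like MON,WED, and steps like */15).
const DOW = { SUN: 0, MON: 1, TUE: 2, WED: 3, THU: 4, FRI: 5, SAT: 6 };

function cronMatches(expr, date) {
  const [min, hour, dom, mon, dow] = expr.trim().split(/\s+/);
  const matches = (spec, value) => {
    if (spec === '*') return true;
    const n = spec in DOW ? DOW[spec] : Number(spec);
    return n === value;
  };
  return matches(min, date.getMinutes()) &&
         matches(hour, date.getHours()) &&
         matches(dom, date.getDate()) &&
         matches(mon, date.getMonth() + 1) && // cron months are 1-12
         matches(dow, date.getDay());
}
```

So `'0 2 * * *'` fires at 02:00 every day, while `'0 9 * * MON'` fires at 09:00 only when the day-of-week field matches Monday.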
Event-Driven Workflows
// .claude-code/events/handlers.js
const { EventHandler } = require('@anthropic/claude-code-sdk');
class CodebaseEventHandler extends EventHandler {
constructor() {
super();
// Register event handlers
this.on('file:created', this.handleFileCreated);
this.on('file:modified', this.handleFileModified);
this.on('dependency:added', this.handleDependencyAdded);
this.on('pr:opened', this.handlePullRequest);
this.on('issue:created', this.handleNewIssue);
this.on('deploy:completed', this.handleDeployment);
}
async handleFileCreated(event) {
const { file, author } = event.data;
// Check if file follows conventions
const analysis = await this.claude.analyze(file, {
checkNamingConvention: true,
checkStructure: true,
suggestBoilerplate: true
});
if (analysis.issues.length > 0) {
await this.notify(author, {
type: 'file-conventions',
file,
issues: analysis.issues,
suggestions: analysis.suggestions
});
}
// Generate corresponding test file
if (file.match(/\.(js|ts|jsx|tsx)$/) && !file.includes('.test')) {
await this.claude.generateTest(file, {
framework: this.project.testFramework,
autoCreate: true
});
}
}
async handleFileModified(event) {
const { file, changes } = event.data;
// Incremental review
if (changes.additions > 50 || changes.deletions > 30) {
const review = await this.claude.reviewChanges(file, {
focus: 'changes-only',
compareWith: 'HEAD~1'
});
if (review.severity === 'high') {
await this.createTask({
title: `Review required for ${file}`,
description: review.summary,
assignee: event.data.author,
priority: 'high'
});
}
}
}
async handleDependencyAdded(event) {
const { dependency, version, dev } = event.data;
// Security check
const security = await this.claude.checkDependency(dependency, {
version,
checkVulnerabilities: true,
checkLicense: true,
checkSize: true
});
if (security.hasIssues) {
await this.blockInstallation({
dependency,
reason: security.issues,
alternatives: security.alternatives
});
}
// Update documentation
await this.claude.updateDocs({
file: 'README.md',
section: dev ? 'Development Dependencies' : 'Dependencies',
content: `Added ${dependency}@${version}`
});
}
async handlePullRequest(event) {
const { pr, files } = event.data;
// Comprehensive PR analysis
const workflow = await this.runWorkflow('pr-analysis', {
pr,
files,
depth: this.getPRAnalysisDepth(pr),
autoFix: pr.draft === false
});
// Post results
await this.github.createComment(pr.number, {
body: this.formatPRReport(workflow.results)
});
// Auto-assign reviewers
if (workflow.results.suggestedReviewers) {
await this.github.assignReviewers(
pr.number,
workflow.results.suggestedReviewers
);
}
}
async handleNewIssue(event) {
const { issue } = event.data;
// Analyze issue with Claude
const analysis = await this.claude.analyzeIssue(issue, {
checkDuplicates: true,
suggestSolution: true,
estimateEffort: true
});
// Auto-label
await this.github.addLabels(issue.number, analysis.labels);
// Generate implementation plan
if (issue.labels.includes('feature-request')) {
const plan = await this.claude.generateImplementationPlan(issue);
await this.github.createComment(issue.number, {
body: this.formatImplementationPlan(plan)
});
}
}
async handleDeployment(event) {
const { environment, version, status } = event.data;
if (status === 'success') {
// Post-deployment verification
await this.runWorkflow('post-deploy-verification', {
environment,
version,
checks: ['health', 'performance', 'errors']
});
// Update documentation
await this.claude.updateDeploymentDocs({
version,
environment,
timestamp: new Date().toISOString()
});
} else {
// Analyze failure
const analysis = await this.claude.analyzeDeploymentFailure(event.data);
// Create incident
await this.createIncident({
severity: analysis.severity,
title: `Deployment failed: ${version}`,
description: analysis.summary,
suggestedFixes: analysis.fixes
});
}
}
}
module.exports = CodebaseEventHandler;
Advanced Workflow Orchestration
Complex Multi-Stage Workflows
# .claude-code/workflows/release-orchestration.yaml
name: Intelligent Release Orchestration
version: 2.0
parameters:
releaseType:
type: string
enum: [major, minor, patch, hotfix]
required: true
targetEnvironments:
type: array
default: [staging, production]
orchestration:
strategy: parallel-sequential
errorHandling: rollback-on-failure
monitoring: true
stages:
- name: Preparation
parallel: false
steps:
- id: version-bump
name: Intelligent Version Bump
action: claude:version-bump
with:
type: ${parameters.releaseType}
updateFiles: [package.json, version.py, VERSION]
- id: changelog
name: Generate Changelog
action: claude:changelog
with:
fromTag: ${git.latestTag}
format: keep-a-changelog
includeStats: true
- id: release-notes
name: Create Release Notes
action: claude:release-notes
with:
audience: [developers, users, stakeholders]
includeBreaking: true
includeUpgrade: true
- name: Quality Assurance
parallel: true
continueOnError: false
steps:
- id: full-test-suite
name: Comprehensive Testing
action: workflow:run
with:
workflow: comprehensive-test-suite
parameters:
coverage: 95
includeE2e: true
includePerfTests: true
- id: security-audit
name: Security Audit
action: workflow:run
with:
workflow: security-audit-deep
parameters:
scanDependencies: true
penetrationTest: ${parameters.releaseType == 'major'}
- id: performance-baseline
name: Performance Baseline
action: claude:performance-test
with:
scenarios: [typical, peak, stress]
compareWith: previous-release
acceptableRegression: 5%
- name: Build and Package
parallel: true
steps:
- id: build-artifacts
name: Build Release Artifacts
matrix:
platform: [linux, macos, windows]
arch: [x64, arm64]
action: build:cross-platform
with:
platform: ${matrix.platform}
arch: ${matrix.arch}
optimize: production
- id: create-containers
name: Build Container Images
matrix:
variant: [standard, alpine, distroless]
action: docker:build
with:
variant: ${matrix.variant}
scanVulnerabilities: true
signImages: true
- name: Staged Deployment
parallel: false
forEach: ${parameters.targetEnvironments}
as: environment
steps:
- id: pre-deploy-check
name: Environment Readiness Check
action: claude:environment-check
with:
environment: ${environment}
checks: [connectivity, dependencies, capacity]
- id: deploy
name: Intelligent Deployment
action: claude:deploy
with:
environment: ${environment}
strategy: ${getDeploymentStrategy(environment)}
monitoring: enhanced
rollbackTriggers:
- errorRate: 5%
- responseTime: 2000ms
- cpuUsage: 80%
- id: verify
name: Post-Deployment Verification
action: claude:verify-deployment
with:
environment: ${environment}
tests: [smoke, integration, synthetic]
duration: ${environment == 'production' ? '30m' : '10m'}
- id: gradual-rollout
name: Gradual Traffic Shift
condition: ${environment == 'production'}
action: traffic:shift
with:
steps: [10, 25, 50, 100]
interval: 15m
metrics: [errors, latency, success-rate]
rollbackThreshold:
errors: 1%
latency: 500ms
- name: Post-Release
parallel: true
steps:
- id: notify-stakeholders
name: Stakeholder Notifications
action: notify:multi-channel
with:
channels: [email, slack, teams]
template: release-announcement
includeMetrics: true
- id: update-documentation
name: Documentation Updates
action: claude:update-docs
with:
version: ${stages.preparation.steps.version-bump.outputs.version}
apiDocs: true
userGuide: true
migrationGuide: ${parameters.releaseType == 'major'}
- id: monitoring-setup
name: Configure Monitoring
action: monitoring:configure
with:
version: ${stages.preparation.steps.version-bump.outputs.version}
alerts: ${getAlertConfig(parameters.releaseType)}
dashboards: [performance, errors, business-metrics]
hooks:
onSuccess:
- action: celebrate
with:
message: "🎉 Release ${version} successfully deployed!"
notifyChannels: [slack, email]
- action: metrics:record
with:
event: release-success
metadata: ${workflow.metadata}
onFailure:
- action: incident:create
with:
severity: ${parameters.releaseType == 'hotfix' ? 'critical' : 'high'}
title: "Release ${version} failed"
runbooks: [rollback, investigation]
- action: rollback:execute
with:
strategy: immediate
preserveLogs: true
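The `gradual-rollout` step above walks traffic up through the configured percentages, checking metrics at each step and rolling back if a threshold is breached. A simplified sketch of that control loop (the `setTraffic` and `getMetrics` callbacks are hypothetical, and the 15-minute wait between steps is omitted):

```javascript
// Gradual traffic shift with threshold-based rollback, mirroring the
// traffic:shift step: 10% -> 25% -> 50% -> 100%, abort on bad metrics.
async function gradualRollout({
  steps = [10, 25, 50, 100],
  thresholds = { errorRate: 0.01, latencyMs: 500 },
  setTraffic,   // async (percent) => route this % of traffic to the new version
  getMetrics,   // async () => ({ errorRate, latencyMs }) observed at this level
}) {
  for (const percent of steps) {
    await setTraffic(percent);
    const m = await getMetrics();
    if (m.errorRate > thresholds.errorRate || m.latencyMs > thresholds.latencyMs) {
      await setTraffic(0); // roll back: send all traffic to the old version
      return { rolledBack: true, at: percent };
    }
  }
  return { rolledBack: false, at: 100 };
}
```

In the workflow, the thresholds correspond to `rollbackThreshold: errors: 1%` and `latency: 500ms`; a breach at any step stops the rollout before full exposure.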
Workflow Composition
// .claude-code/orchestration/composer.js
class WorkflowComposer {
constructor(claude) {
this.claude = claude;
this.workflows = new Map();
}
async composeWorkflow(specification) {
// Parse natural language specification
const parsed = await this.claude.parseSpecification(specification, {
context: {
availableWorkflows: Array.from(this.workflows.keys()),
projectType: this.project.type,
team: this.project.team
}
});
// Generate workflow definition
const workflow = {
name: parsed.name,
description: parsed.description,
triggers: this.determineTriggers(parsed),
parameters: this.extractParameters(parsed),
steps: await this.generateSteps(parsed),
outputs: this.defineOutputs(parsed)
};
// Optimize workflow
return this.optimizeWorkflow(workflow);
}
async generateSteps(parsed) {
const steps = [];
for (const requirement of parsed.requirements) {
// Find matching workflows or create new steps
const matchingWorkflows = await this.findMatchingWorkflows(requirement);
if (matchingWorkflows.length > 0) {
// Reuse existing workflow
steps.push({
id: this.generateId(requirement),
name: requirement.name,
action: 'workflow:run',
with: {
workflow: matchingWorkflows[0].id,
parameters: this.mapParameters(requirement, matchingWorkflows[0])
}
});
} else {
// Generate new step
const generatedStep = await this.claude.generateStep(requirement, {
availableActions: this.getAvailableActions(),
projectContext: this.project
});
steps.push(generatedStep);
}
}
// Add dependencies and conditions
return this.addStepRelationships(steps, parsed);
}
async optimizeWorkflow(workflow) {
// Analyze workflow for optimization opportunities
const analysis = await this.claude.analyzeWorkflow(workflow, {
goals: ['performance', 'reliability', 'cost'],
constraints: this.project.constraints
});
// Apply optimizations
if (analysis.optimizations.parallelization) {
workflow = this.applyParallelization(workflow, analysis.parallelization);
}
if (analysis.optimizations.caching) {
workflow = this.applyCaching(workflow, analysis.caching);
}
if (analysis.optimizations.conditionals) {
workflow = this.applyConditionals(workflow, analysis.conditionals);
}
return workflow;
}
applyParallelization(workflow, parallelization) {
// Group independent steps
const groups = parallelization.groups;
const optimizedSteps = [];
for (const group of groups) {
if (group.steps.length > 1) {
optimizedSteps.push({
id: `parallel-${group.id}`,
name: `Parallel execution: ${group.name}`,
action: 'core:parallel',
with: {
steps: group.steps,
maxConcurrency: group.maxConcurrency || 5
}
});
} else {
optimizedSteps.push(...group.steps);
}
}
workflow.steps = optimizedSteps;
return workflow;
}
}
Monitoring and Optimization
Workflow Analytics
// .claude-code/monitoring/analytics.ts
import { MetricsCollector, AnalyticsEngine, Bottleneck, OptimizationReport } from '@anthropic/claude-code-sdk';
interface WorkflowMetrics {
executionTime: number;
successRate: number;
errorRate: number;
resourceUsage: {
cpu: number;
memory: number;
apiCalls: number;
tokens: number;
};
costEstimate: number;
}
class WorkflowAnalytics {
private collector: MetricsCollector;
private engine: AnalyticsEngine;
async analyzeWorkflowPerformance(
workflowId: string,
timeRange: { start: Date; end: Date }
): Promise<WorkflowMetrics> {
const executions = await this.collector.getExecutions(workflowId, timeRange);
return {
executionTime: this.calculateAverageTime(executions),
successRate: this.calculateSuccessRate(executions),
errorRate: this.calculateErrorRate(executions),
resourceUsage: this.aggregateResourceUsage(executions),
costEstimate: this.estimateCost(executions)
};
}
async generateOptimizationReport(workflowId: string): Promise<OptimizationReport> {
const metrics = await this.analyzeWorkflowPerformance(
workflowId,
{ start: new Date(Date.now() - 30 * 24 * 60 * 60 * 1000), end: new Date() }
);
const bottlenecks = await this.identifyBottlenecks(workflowId);
const recommendations = await this.claude.generateRecommendations({
metrics,
bottlenecks,
workflowDefinition: await this.getWorkflow(workflowId)
});
return {
summary: this.generateSummary(metrics, bottlenecks),
bottlenecks,
recommendations,
estimatedImprovements: this.estimateImprovements(recommendations),
implementationPlan: await this.createImplementationPlan(recommendations)
};
}
async identifyBottlenecks(workflowId: string): Promise<Bottleneck[]> {
const executions = await this.collector.getDetailedExecutions(workflowId);
const bottlenecks: Bottleneck[] = [];
// Analyze step durations
const stepDurations = this.aggregateStepDurations(executions);
for (const [stepId, durations] of stepDurations) {
const stats = this.calculateStats(durations);
if (stats.p95 > stats.median * 2) {
bottlenecks.push({
type: 'performance',
stepId,
severity: 'high',
impact: stats.p95 - stats.median,
description: `Step ${stepId} has high variance in execution time`
});
}
}
// Analyze error patterns
const errorPatterns = this.analyzeErrorPatterns(executions);
for (const pattern of errorPatterns) {
if (pattern.frequency > 0.1) {
bottlenecks.push({
type: 'reliability',
stepId: pattern.stepId,
severity: pattern.frequency > 0.3 ? 'critical' : 'high',
impact: pattern.frequency,
description: pattern.description
});
}
}
// Analyze resource usage
const resourceBottlenecks = await this.analyzeResourceUsage(executions);
bottlenecks.push(...resourceBottlenecks);
return bottlenecks.sort((a, b) => b.impact - a.impact);
}
}
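The calculateStats helper used by identifyBottlenecks is not shown above. A minimal sketch of the median/p95 computation behind the variance heuristic (the nearest-rank percentile method here is an illustrative choice):

```javascript
// Compute median and p95 from a list of step durations (ms) using a simple
// nearest-rank percentile. A step is flagged when its p95 exceeds twice its
// median, as in identifyBottlenecks above.
function calculateStats(durations) {
  const sorted = [...durations].sort((a, b) => a - b);
  const pick = p => sorted[Math.min(sorted.length - 1, Math.floor(p * sorted.length))];
  return { median: pick(0.5), p95: pick(0.95) };
}

function isBottleneck(durations) {
  const { median, p95 } = calculateStats(durations);
  return p95 > median * 2;
}

// A step that usually takes ~100 ms but occasionally spikes to ~1 s
const spiky = [95, 100, 100, 105, 110, 100, 98, 102, 900, 950];
```

A steady step with all durations near its median will never trip the heuristic, no matter how slow it is in absolute terms; the check targets variance, not raw duration.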
Real-time Monitoring Dashboard
# .claude-code/monitoring/dashboard-config.yaml
name: Claude Code Workflow Dashboard
refresh: 30s
panels:
- title: Workflow Execution Status
type: timeseries
queries:
- metric: workflow.executions
groupBy: [status, workflow_name]
aggregation: count
- title: Average Execution Time
type: gauge
queries:
- metric: workflow.duration
aggregation: avg
thresholds:
- value: 60
color: green
- value: 300
color: yellow
- value: 600
color: red
- title: API Token Usage
type: piechart
queries:
- metric: claude.tokens.used
groupBy: [workflow_name]
aggregation: sum
- title: Error Rate by Workflow
type: heatmap
queries:
- metric: workflow.errors
groupBy: [workflow_name, error_type]
aggregation: rate
- title: Cost Analysis
type: table
queries:
- metric: workflow.cost
groupBy: [workflow_name, model]
aggregation: sum
columns:
- Workflow
- Model
- Executions
- Total Cost
- Cost per Execution
alerts:
- name: High Error Rate
condition: workflow.errors.rate > 0.1
severity: warning
notification:
channels: [slack, email]
- name: Excessive Token Usage
condition: claude.tokens.used.rate > 10000
severity: warning
notification:
channels: [slack]
- name: Workflow Timeout
condition: workflow.duration > 1800
severity: critical
notification:
channels: [pagerduty, slack, email]
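Alert conditions such as `workflow.errors.rate > 0.1` are simple metric/operator/threshold triples. A sketch of how such expressions could be evaluated against collected metric values (the parser here is illustrative, not the dashboard's actual engine):

```javascript
// Evaluate a dashboard alert condition of the form "<metric> <op> <number>"
// against a flat map of current metric values.
function evaluateAlert(condition, metrics) {
  const [metric, op, raw] = condition.split(/\s+/);
  const value = metrics[metric];
  const threshold = Number(raw);
  if (value === undefined) return false; // unknown metric: do not fire
  switch (op) {
    case '>': return value > threshold;
    case '<': return value < threshold;
    case '>=': return value >= threshold;
    case '<=': return value <= threshold;
    default: throw new Error(`Unsupported operator: ${op}`);
  }
}

const metrics = { 'workflow.errors.rate': 0.15, 'workflow.duration': 1200 };
```

With these values, the High Error Rate alert would fire while Workflow Timeout would not.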
Enterprise Workflow Patterns
Multi-Team Collaboration Workflow
# .claude-code/workflows/enterprise-collaboration.yaml
name: Enterprise Feature Development
description: Coordinated workflow for multi-team feature development
teams:
backend:
approvers: [john.doe, jane.smith]
slackChannel: '#backend-team'
frontend:
approvers: [bob.wilson, alice.johnson]
slackChannel: '#frontend-team'
qa:
approvers: [charlie.brown]
slackChannel: '#qa-team'
stages:
- name: Planning
owner: product
steps:
- id: analyze-requirements
name: Analyze Requirements
action: claude:analyze-requirements
with:
specification: ${inputs.featureSpec}
constraints: ${project.constraints}
- id: create-tasks
name: Create Team Tasks
action: claude:decompose-feature
with:
requirements: ${steps.analyze-requirements.outputs}
teams: [backend, frontend, qa]
- id: estimate-effort
name: Estimate Effort
action: claude:estimate
with:
tasks: ${steps.create-tasks.outputs.tasks}
historicalData: true
- name: Parallel Development
parallel: true
branches:
- name: Backend Development
owner: backend
steps:
- action: claude:generate-api
with:
spec: ${stages.planning.outputs.backendSpec}
- action: claude:generate-tests
with:
type: [unit, integration]
- action: notify:team
with:
team: backend
message: 'Backend implementation ready for review'
- name: Frontend Development
owner: frontend
steps:
- action: claude:generate-ui
with:
spec: ${stages.planning.outputs.frontendSpec}
mockApi: true
- action: claude:generate-tests
with:
type: [component, e2e]
- action: notify:team
with:
team: frontend
message: 'Frontend implementation ready for review'
- name: Integration
owner: qa
steps:
- id: integrate
name: Integrate Components
action: claude:integrate
with:
backend: ${stages['Parallel Development'].branches.backend.outputs}
frontend: ${stages['Parallel Development'].branches.frontend.outputs}
- id: integration-tests
name: Run Integration Tests
action: test:integration
with:
environment: integration
coverage: 90
- name: Approval Gate
type: approval
approvers: ${getAllApprovers()}
timeout: 48h
conditions:
- ${stages.Integration.outputs.testsPassed}
- ${stages.Integration.outputs.coverage >= 90}
- name: Deployment
steps:
- action: deploy:staged
with:
stages: [dev, staging, production]
approvalRequired: [false, true, true]
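Expressions like `${steps.analyze-requirements.outputs}` in the workflow above are resolved by walking a dotted path through a context object. A minimal sketch of that interpolation (not the engine's real implementation; the function name is hypothetical):

```javascript
// Resolve ${a.b.c} placeholders in a string against a nested context object.
// Unknown paths resolve to an empty string rather than throwing.
function interpolate(template, context) {
  return template.replace(/\$\{([^}]+)\}/g, (_, path) => {
    const value = path.split('.').reduce(
      (obj, key) => (obj == null ? undefined : obj[key]),
      context
    );
    return value === undefined ? '' : String(value);
  });
}

const context = {
  steps: { 'analyze-requirements': { outputs: 'spec-v1' } },
  parameters: { branch: 'develop' }
};

const resolved = interpolate(
  'Using ${steps.analyze-requirements.outputs} on ${parameters.branch}',
  context
);
```

Note that a real engine also has to handle function calls like `${getAllApprovers()}` and comparisons inside `${...}`, which this sketch does not attempt.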
Compliance and Governance Workflow
// .claude-code/workflows/compliance.ts
interface ComplianceWorkflow {
name: string;
regulations: string[];
checks: ComplianceCheck[];
remediation: RemediationStrategy;
}
class ComplianceAutomation {
private claude: any; // Claude client, injected at construction
private project: any; // project context used for remediation planning
async runComplianceWorkflow(
code: string[],
regulations: string[]
): Promise<ComplianceReport> {
const workflow: ComplianceWorkflow = {
name: 'Regulatory Compliance Check',
regulations,
checks: this.getRequiredChecks(regulations),
remediation: {
automatic: true,
requiresApproval: true,
notifyCompliance: true
}
};
// Run compliance checks
const results = await Promise.all(
workflow.checks.map(check => this.runCheck(check, code))
);
// Generate remediation plan
const violations = results.filter(r => !r.compliant);
const remediationPlan = await this.claude.generateRemediationPlan({
violations,
regulations,
projectContext: this.project
});
// Apply automatic fixes
if (workflow.remediation.automatic) {
const fixes = await this.applyAutomaticFixes(
remediationPlan.automaticFixes
);
// Re-run checks
const verificationResults = await Promise.all(
workflow.checks.map(check => this.runCheck(check, fixes.code))
);
return {
initialResults: results,
remediationPlan,
fixes,
finalResults: verificationResults,
compliant: verificationResults.every(r => r.compliant)
};
}
return {
results,
remediationPlan,
compliant: false
};
}
private getRequiredChecks(regulations: string[]): ComplianceCheck[] {
const checks: ComplianceCheck[] = [];
if (regulations.includes('GDPR')) {
checks.push(
{ id: 'gdpr-data-privacy', type: 'privacy', severity: 'critical' },
{ id: 'gdpr-data-retention', type: 'retention', severity: 'high' },
{ id: 'gdpr-consent', type: 'consent', severity: 'critical' }
);
}
if (regulations.includes('HIPAA')) {
checks.push(
{ id: 'hipaa-encryption', type: 'security', severity: 'critical' },
{ id: 'hipaa-access-control', type: 'access', severity: 'critical' },
{ id: 'hipaa-audit-trail', type: 'audit', severity: 'high' }
);
}
if (regulations.includes('PCI-DSS')) {
checks.push(
{ id: 'pci-card-data', type: 'data-handling', severity: 'critical' },
{ id: 'pci-network-security', type: 'security', severity: 'critical' },
{ id: 'pci-vulnerability', type: 'vulnerability', severity: 'high' }
);
}
return checks;
}
}
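The per-regulation branches in getRequiredChecks can also be expressed as a lookup table, which keeps the mapping easy to extend. A sketch of that data-driven variant (the check ids mirror those in the class):

```javascript
// Data-driven variant of getRequiredChecks: regulation name -> checks.
const CHECKS_BY_REGULATION = {
  GDPR: [
    { id: 'gdpr-data-privacy', type: 'privacy', severity: 'critical' },
    { id: 'gdpr-data-retention', type: 'retention', severity: 'high' },
    { id: 'gdpr-consent', type: 'consent', severity: 'critical' }
  ],
  HIPAA: [
    { id: 'hipaa-encryption', type: 'security', severity: 'critical' },
    { id: 'hipaa-access-control', type: 'access', severity: 'critical' },
    { id: 'hipaa-audit-trail', type: 'audit', severity: 'high' }
  ],
  'PCI-DSS': [
    { id: 'pci-card-data', type: 'data-handling', severity: 'critical' },
    { id: 'pci-network-security', type: 'security', severity: 'critical' },
    { id: 'pci-vulnerability', type: 'vulnerability', severity: 'high' }
  ]
};

function getRequiredChecks(regulations) {
  // Unknown regulations simply contribute no checks
  return regulations.flatMap(r => CHECKS_BY_REGULATION[r] || []);
}

const checks = getRequiredChecks(['GDPR', 'PCI-DSS']);
```

Adding a new regulation then means adding one table entry instead of another if branch.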
Hands-On Project: Complete DevOps Pipeline
Let's build a comprehensive DevOps pipeline that brings together the advanced features covered in this guide:
Project Structure
devops-pipeline/
├── .claude-code/
│ ├── config.yaml
│ ├── workflows/
│ │ ├── main-pipeline.yaml
│ │ ├── feature-development.yaml
│ │ ├── hotfix-process.yaml
│ │ └── release-automation.yaml
│ ├── actions/
│ │ ├── quality-gate.js
│ │ ├── security-scan.js
│ │ └── performance-test.js
│ ├── monitoring/
│ │ ├── dashboards/
│ │ └── alerts/
│ └── templates/
│ ├── microservice/
│ └── deployment/
├── scripts/
│ ├── setup.sh
│ └── validate.sh
└── docs/
└── pipeline-guide.md
Main Pipeline Definition
# .claude-code/workflows/main-pipeline.yaml
name: Intelligent DevOps Pipeline
version: 3.0
description: Complete CI/CD pipeline with AI-powered automation
parameters:
triggerType:
type: string
enum: [push, pull_request, manual, schedule]
branch:
type: string
default: ${git.branch}
deploymentTarget:
type: string
enum: [development, staging, production, none]
default: ${determineTarget(git.branch)}
globals:
slackWebhook: ${secrets.SLACK_WEBHOOK}
sonarToken: ${secrets.SONAR_TOKEN}
artifactRegistry: ${env.ARTIFACT_REGISTRY}
stages:
- name: Initialization
id: init
steps:
- id: context-analysis
name: Analyze Context
action: claude:analyze-context
with:
branch: ${parameters.branch}
triggerType: ${parameters.triggerType}
recentCommits: 10
outputs:
- riskLevel
- suggestedActions
- estimatedDuration
- id: dynamic-configuration
name: Configure Pipeline
action: claude:configure-pipeline
with:
baseConfig: ${loadConfig('base')}
context: ${steps.context-analysis.outputs}
overrides: ${parameters}
- id: notify-start
name: Notify Pipeline Start
action: notify:slack
with:
webhook: ${globals.slackWebhook}
message: |
🚀 Pipeline started
Branch: ${parameters.branch}
Risk Level: ${steps.context-analysis.outputs.riskLevel}
Estimated Duration: ${steps.context-analysis.outputs.estimatedDuration}
- name: Code Quality
id: quality
parallel: true
continueOnError: false
steps:
- id: lint-and-format
name: Lint and Format
action: parallel:run
with:
commands:
- name: ESLint
command: npm run lint
autoFix: true
- name: Prettier
command: npm run format
autoFix: true
- name: Claude Style
action: claude:style-check
with:
autoFix: true
style: ${project.styleGuide}
- id: code-review
name: AI Code Review
action: claude:review
with:
depth: ${stages.init.steps.context-analysis.outputs.riskLevel == 'high' ? 'deep' : 'standard'}
focus: [security, performance, maintainability, best-practices]
autoFix: true
requireApproval: ${stages.init.steps.context-analysis.outputs.riskLevel == 'high'}
- id: sonar-analysis
name: SonarQube Analysis
action: sonar:scan
with:
token: ${globals.sonarToken}
qualityGate: true
- id: dependency-check
name: Dependency Analysis
action: claude:check-dependencies
with:
checkVulnerabilities: true
checkLicenses: true
checkOutdated: true
autoUpdate:
security: true
minor: ${parameters.branch != 'main'}
major: false
- name: Testing
id: test
dependsOn: [quality]
steps:
- id: test-strategy
name: Determine Test Strategy
action: claude:test-strategy
with:
changes: ${git.changes}
riskLevel: ${stages.init.steps.context-analysis.outputs.riskLevel}
timeConstraint: ${parameters.deploymentTarget == 'production' ? 'none' : '30m'}
- id: generate-tests
name: Generate Missing Tests
condition: ${steps.test-strategy.outputs.generateTests}
action: claude:generate-tests
with:
gaps: ${steps.test-strategy.outputs.testGaps}
types: ${steps.test-strategy.outputs.requiredTypes}
- id: run-tests
name: Execute Test Suite
action: dynamic:test-runner
with:
strategy: ${steps.test-strategy.outputs.strategy}
parallel: true
failFast: ${parameters.deploymentTarget == 'production'}
retryFlaky: true
suites:
- name: Unit Tests
command: npm run test:unit
coverage: 90
timeout: 10m
- name: Integration Tests
command: npm run test:integration
coverage: 80
timeout: 20m
condition: ${steps.test-strategy.outputs.runIntegration}
- name: E2E Tests
command: npm run test:e2e
browsers: ${steps.test-strategy.outputs.browsers}
timeout: 30m
condition: ${steps.test-strategy.outputs.runE2e}
- name: Performance Tests
action: claude:performance-test
scenarios: ${steps.test-strategy.outputs.perfScenarios}
condition: ${steps.test-strategy.outputs.runPerformance}
- id: mutation-testing
name: Mutation Testing
condition: ${parameters.branch == 'main'}
action: mutation:test
with:
threshold: 80
timeout: 60m
- name: Security
id: security
dependsOn: [test]
steps:
- id: security-scan
name: Comprehensive Security Scan
action: claude:security-scan
with:
scanners: [static, dynamic, dependency, container]
severity: ${parameters.deploymentTarget == 'production' ? 'low' : 'medium'}
autoFix: true
breakBuild: ${parameters.deploymentTarget == 'production'}
- id: penetration-test
name: Automated Penetration Testing
condition: ${parameters.deploymentTarget == 'production'}
action: pentest:run
with:
profile: web-application
duration: 30m
- id: compliance-check
name: Compliance Verification
action: claude:compliance-check
with:
standards: ${project.complianceStandards}
generateReport: true
- name: Build and Package
id: build
dependsOn: [security]
strategy: matrix
matrix:
include:
- platform: linux
arch: [amd64, arm64]
- platform: darwin
arch: [amd64, arm64]
- platform: windows
arch: [amd64]
steps:
- id: optimize-build
name: Optimize Build Configuration
action: claude:optimize-build
with:
target: ${matrix.platform}-${matrix.arch}
optimizationGoals: [size, performance, compatibility]
- id: build
name: Build Application
action: build:execute
with:
platform: ${matrix.platform}
arch: ${matrix.arch}
config: ${steps.optimize-build.outputs.config}
- id: package
name: Create Packages
action: package:create
with:
formats: ${getPackageFormats(matrix.platform)}
signing: true
compression: maximum
- id: scan-artifact
name: Scan Build Artifact
action: claude:scan-artifact
with:
artifact: ${steps.package.outputs.path}
checks: [malware, vulnerabilities, size, performance]
- id: publish
name: Publish Artifacts
action: artifact:publish
with:
registry: ${globals.artifactRegistry}
tags: [
${git.sha},
${matrix.platform}-${matrix.arch},
${parameters.branch}
]
- name: Deployment Decision
id: decision
dependsOn: [build]
condition: ${parameters.deploymentTarget != 'none'}
steps:
- id: analyze-metrics
name: Analyze Deployment Metrics
action: claude:analyze-metrics
with:
stages: [quality, test, security, build]
historicalData: true
- id: risk-assessment
name: Deployment Risk Assessment
action: claude:assess-risk
with:
target: ${parameters.deploymentTarget}
metrics: ${steps.analyze-metrics.outputs}
factors: [
code-quality,
test-coverage,
security-score,
performance-impact,
rollback-capability
]
- id: deployment-decision
name: Make Deployment Decision
action: claude:decide
with:
riskAssessment: ${steps.risk-assessment.outputs}
policies: ${project.deploymentPolicies}
humanApprovalRequired: ${parameters.deploymentTarget == 'production'}
- id: approval-gate
name: Approval Gate
condition: ${steps.deployment-decision.outputs.requiresApproval}
action: approval:request
with:
approvers: ${project.deploymentApprovers[parameters.deploymentTarget]}
timeout: ${parameters.deploymentTarget == 'production' ? '2h' : '30m'}
details: ${steps.deployment-decision.outputs.summary}
- name: Deployment
id: deploy
dependsOn: [decision]
condition: ${stages.decision.steps.deployment-decision.outputs.approved}
steps:
- id: pre-deployment
name: Pre-deployment Tasks
action: parallel:run
with:
tasks:
- name: Database Migration
action: db:migrate
with:
dryRun: ${parameters.deploymentTarget == 'production'}
- name: Cache Warmup
action: cache:warm
with:
endpoints: ${project.criticalEndpoints}
- name: Feature Flags
action: flags:update
with:
environment: ${parameters.deploymentTarget}
- id: deploy-strategy
name: Determine Deployment Strategy
action: claude:deployment-strategy
with:
environment: ${parameters.deploymentTarget}
riskLevel: ${stages.decision.steps.risk-assessment.outputs.level}
currentTraffic: ${getTrafficMetrics()}
- id: execute-deployment
name: Execute Deployment
action: deploy:execute
with:
strategy: ${steps.deploy-strategy.outputs.strategy}
environment: ${parameters.deploymentTarget}
artifacts: ${stages.build.outputs.artifacts}
config: ${steps.deploy-strategy.outputs.config}
monitoring: enhanced
rollback: automatic
- id: health-check
name: Post-deployment Health Check
action: health:comprehensive
with:
endpoints: ${project.healthEndpoints}
timeout: 5m
retries: 3
- id: smoke-tests
name: Smoke Tests
action: test:smoke
with:
environment: ${parameters.deploymentTarget}
critical: true
- id: gradual-rollout
name: Gradual Rollout
condition: ${steps.deploy-strategy.outputs.strategy == 'canary'}
action: traffic:shift
with:
stages: ${steps.deploy-strategy.outputs.rolloutStages}
monitoring: ${steps.deploy-strategy.outputs.monitoringConfig}
rollbackTriggers: ${steps.deploy-strategy.outputs.rollbackTriggers}
- name: Post-Deployment
id: post-deploy
dependsOn: [deploy]
parallel: true
steps:
- id: performance-validation
name: Performance Validation
action: performance:validate
with:
baseline: ${getPerfBaseline(parameters.deploymentTarget)}
duration: 15m
acceptableDegradation: 5%
- id: security-validation
name: Security Validation
action: security:validate
with:
environment: ${parameters.deploymentTarget}
tests: [authentication, authorization, encryption]
- id: update-documentation
name: Update Documentation
action: claude:update-docs
with:
version: ${deployment.version}
releaseNotes: ${stages.init.outputs.releaseNotes}
apiChanges: ${detectApiChanges()}
- id: notify-stakeholders
name: Notify Stakeholders
action: notify:multi
with:
channels: [email, slack, teams]
recipients: ${project.stakeholders[parameters.deploymentTarget]}
template: deployment-success
includeMetrics: true
- id: update-monitoring
name: Update Monitoring
action: monitoring:update
with:
version: ${deployment.version}
alerts: ${generateAlerts(parameters.deploymentTarget)}
dashboards: ${project.dashboards}
slos: ${project.slos[parameters.deploymentTarget]}
handlers:
onFailure:
- action: rollback:auto
condition: ${stage.id == 'deploy'}
- action: incident:create
with:
severity: ${calculateSeverity(stage, parameters.deploymentTarget)}
runbook: ${getRunbook(stage, error)}
- action: notify:emergency
with:
channels: ${project.emergencyChannels}
onSuccess:
- action: metrics:record
with:
pipeline: ${execution.id}
duration: ${execution.duration}
stages: ${execution.stages}
- action: insights:generate
with:
execution: ${execution}
recommendations: true
always:
- action: logs:archive
with:
retention: ${project.logRetention}
- action: artifacts:cleanup
with:
keepSuccessful: 5
keepFailed: 10
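The pipeline's default deploymentTarget comes from `determineTarget(git.branch)`, which is not defined in the file. A plausible branch-to-environment mapping is sketched below; the conventions are an assumption, not part of the workflow spec:

```javascript
// Map a git branch to the deployment target used as the pipeline default.
// The branch conventions below are common practice, not mandated anywhere
// in the pipeline definition.
function determineTarget(branch) {
  if (branch === 'main') return 'production';
  if (branch === 'develop') return 'staging';
  if (branch.startsWith('release/') || branch.startsWith('hotfix/')) return 'staging';
  if (branch.startsWith('feature/')) return 'development';
  return 'none'; // unknown branches do not deploy
}
```

Returning 'none' for unrecognized branches is the safe default, since the Deployment Decision stage is skipped entirely when deploymentTarget is 'none'.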
Custom Quality Gate Action
// .claude-code/actions/quality-gate.js
const { Action } = require('@anthropic/claude-code-sdk');
class QualityGateAction extends Action {
constructor() {
super({
name: 'quality-gate',
description: 'Intelligent quality gate with ML-based predictions',
inputs: {
metrics: { type: 'object', required: true },
thresholds: { type: 'object', required: true },
historicalData: { type: 'boolean', default: true },
predictive: { type: 'boolean', default: true }
}
});
}
async execute(context) {
const { metrics, thresholds, historicalData, predictive } = context.inputs;
const results = {
passed: true,
score: 0,
failures: [],
warnings: [],
predictions: {}
};
// Check current metrics against thresholds
for (const [metric, value] of Object.entries(metrics)) {
const threshold = thresholds[metric];
if (!threshold) continue;
const check = this.checkThreshold(metric, value, threshold);
if (!check.passed) {
results.passed = false;
results.failures.push(check);
} else if (check.warning) {
results.warnings.push(check);
}
}
// Historical analysis (history and trends are also needed by the
// predictive block below, so declare them outside the if)
let history = { deployments: [], incidents: [] };
let trends = [];
if (historicalData) {
history = await this.getHistoricalData(context.project.id);
trends = this.analyzeTrends(history, metrics);
for (const trend of trends) {
if (trend.deteriorating && trend.significance > 0.8) {
results.warnings.push({
metric: trend.metric,
message: `${trend.metric} has deteriorated by ${trend.change}% over last ${trend.period}`,
severity: 'medium'
});
}
}
}
// Predictive analysis
if (predictive) {
const predictions = await context.claude.predict({
prompt: `Based on these metrics and trends, predict the likelihood of issues in production:
Current Metrics: ${JSON.stringify(metrics)}
Historical Trends: ${JSON.stringify(trends)}
Project Type: ${context.project.type}
Provide predictions for:
1. Production incident likelihood
2. Performance degradation risk
3. User impact severity
4. Estimated MTTR if issues occur`,
context: {
deploymentHistory: history.deployments,
incidentHistory: history.incidents
}
});
results.predictions = this.parsePredictions(predictions);
// Add warnings based on predictions
if (results.predictions.incidentLikelihood > 0.7) {
results.warnings.push({
metric: 'prediction',
message: `High likelihood (${results.predictions.incidentLikelihood * 100}%) of production incident`,
severity: 'high',
recommendation: results.predictions.mitigationSteps
});
}
}
// Calculate overall score
results.score = this.calculateQualityScore(metrics, results);
// Generate recommendations
if (!results.passed || results.warnings.length > 0) {
results.recommendations = await this.generateRecommendations(
context,
results,
metrics
);
}
return results;
}
checkThreshold(metric, value, threshold) {
const result = {
metric,
value,
threshold,
passed: true,
warning: false
};
if (threshold.min !== undefined && value < threshold.min) {
result.passed = false;
result.message = `${metric} (${value}) is below minimum threshold (${threshold.min})`;
} else if (threshold.max !== undefined && value > threshold.max) {
result.passed = false;
result.message = `${metric} (${value}) exceeds maximum threshold (${threshold.max})`;
} else if (threshold.warning) {
if (threshold.warning.min !== undefined && value < threshold.warning.min) {
result.warning = true;
result.message = `${metric} (${value}) is approaching minimum threshold`;
} else if (threshold.warning.max !== undefined && value > threshold.warning.max) {
result.warning = true;
result.message = `${metric} (${value}) is approaching maximum threshold`;
}
}
return result;
}
async generateRecommendations(context, results, metrics) {
const prompt = `Generate specific, actionable recommendations based on these quality gate results:
Failures: ${JSON.stringify(results.failures)}
Warnings: ${JSON.stringify(results.warnings)}
Current Metrics: ${JSON.stringify(metrics)}
Predictions: ${JSON.stringify(results.predictions)}
For each issue, provide:
1. Root cause analysis
2. Immediate actions to take
3. Long-term improvements
4. Estimated effort and impact`;
const recommendations = await context.claude.generate({
prompt,
context: {
projectType: context.project.type,
team: context.project.team,
constraints: context.project.constraints
}
});
return this.parseRecommendations(recommendations);
}
}
module.exports = QualityGateAction;
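calculateQualityScore is left unimplemented in QualityGateAction. One simple approach is a weighted penalty score; this is a sketch with illustrative weights and a simplified signature, not the SDK's scoring method:

```javascript
// Start from 100 and subtract weighted penalties for failures and warnings
// (warnings count half), clamping at zero. Mirrors the shape of the results
// object built in QualityGateAction.execute.
function calculateQualityScore(results) {
  const penalties = { critical: 40, high: 25, medium: 10, low: 5 };
  let score = 100;
  for (const f of results.failures) score -= penalties[f.severity] ?? 25;
  for (const w of results.warnings) score -= (penalties[w.severity] ?? 10) / 2;
  return Math.max(0, Math.round(score));
}

const score = calculateQualityScore({
  failures: [{ severity: 'high' }],
  warnings: [{ severity: 'medium' }, { severity: 'medium' }]
});
// 100 - 25 - 5 - 5 = 65
```

The exact weights matter less than keeping them stable, so scores remain comparable across pipeline runs.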
Monitoring Configuration
# .claude-code/monitoring/dashboards/pipeline-dashboard.yaml
name: DevOps Pipeline Monitoring
refresh: 30s
timeRange: 24h
variables:
- name: environment
type: query
query: label_values(deployment_info, environment)
default: all
- name: pipeline
type: query
query: label_values(pipeline_execution, pipeline_name)
default: main-pipeline
rows:
- title: Pipeline Overview
height: 300
panels:
- title: Success Rate
type: stat
span: 3
targets:
- expr: |
sum(rate(pipeline_execution_total{status="success"}[5m]))
/ sum(rate(pipeline_execution_total[5m])) * 100
legendFormat: Success Rate
- title: Average Duration
type: stat
span: 3
targets:
- expr: |
avg(pipeline_execution_duration_seconds{pipeline_name="$pipeline"})
legendFormat: Avg Duration
unit: seconds
- title: Active Pipelines
type: stat
span: 3
targets:
- expr: |
count(pipeline_execution_active{pipeline_name="$pipeline"})
legendFormat: Active
- title: Failed Stages
type: stat
span: 3
targets:
- expr: |
sum(rate(pipeline_stage_failures_total[5m])) by (stage)
legendFormat: "{{ stage }}"
- title: Stage Performance
height: 400
panels:
- title: Stage Duration Comparison
type: graph
span: 6
targets:
- expr: |
avg(pipeline_stage_duration_seconds) by (stage)
legendFormat: "{{ stage }}"
- title: Stage Success Rate
type: graph
span: 6
targets:
- expr: |
sum(rate(pipeline_stage_success_total[5m])) by (stage)
/ sum(rate(pipeline_stage_total[5m])) by (stage)
legendFormat: "{{ stage }}"
- title: Resource Usage
height: 300
panels:
- title: Claude API Usage
type: graph
span: 4
targets:
- expr: |
sum(rate(claude_api_tokens_total[5m])) by (model)
legendFormat: "{{ model }}"
- title: Build Resources
type: graph
span: 4
targets:
- expr: |
avg(build_resource_usage) by (resource_type)
legendFormat: "{{ resource_type }}"
- title: Cost Tracking
type: table
span: 4
targets:
- expr: |
sum(pipeline_cost_dollars) by (pipeline_name, stage)
format: table
alerts:
- name: Pipeline Failure Rate High
expr: |
sum(rate(pipeline_execution_total{status="failed"}[15m]))
/ sum(rate(pipeline_execution_total[15m])) > 0.1
for: 5m
severity: warning
annotations:
summary: "High pipeline failure rate: {{ $value | humanizePercentage }}"
- name: Stage Performance Degradation
expr: |
avg(pipeline_stage_duration_seconds) by (stage)
> avg_over_time(pipeline_stage_duration_seconds[7d]) * 1.5
for: 10m
severity: warning
annotations:
summary: "Stage {{ $labels.stage }} is 50% slower than 7-day average"
- name: Claude API Rate Limit
expr: |
sum(rate(claude_api_errors_total{error="rate_limit"}[5m])) > 0
for: 1m
severity: critical
annotations:
summary: "Claude API rate limit exceeded"
Best Practices and Troubleshooting
Workflow Development Best Practices
- Start Simple, Iterate:
# Start with basic workflow
version: 1.0
steps:
- action: claude:review
- action: test:run
- action: deploy:staging
# Gradually add complexity
version: 2.0
steps:
- action: claude:review
with:
depth: standard
autoFix: true
# ... more sophisticated steps
- Use Composition:
# Reusable workflow components
components:
security-check:
steps:
- action: claude:security-scan
- action: dependency:check
workflows:
main:
steps:
- include: components.security-check
- action: deploy
- Error Handling:
steps:
- id: risky-operation
action: some:action
continueOnError: true
retry:
attempts: 3
backoff: exponential
fallback:
action: alternative:action
- Performance Optimization:
- Use parallel execution where possible
- Cache results of expensive operations
- Use appropriate Claude models for each task
- Implement progressive workflows
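The retry/fallback pattern from the error-handling example above corresponds to logic like the following sketch (function and parameter names are illustrative):

```javascript
// Retry an async operation with exponential backoff, falling back to an
// alternative action when all attempts fail.
async function withRetry(action, { attempts = 3, baseDelayMs = 1000, fallback } = {}) {
  let lastError;
  for (let i = 0; i < attempts; i++) {
    try {
      return await action();
    } catch (err) {
      lastError = err;
      const delay = baseDelayMs * 2 ** i; // 1s, 2s, 4s, ...
      await new Promise(resolve => setTimeout(resolve, delay));
    }
  }
  if (fallback) return fallback(lastError);
  throw lastError;
}
```

Usage: `withRetry(() => deployToStaging(), { attempts: 3, fallback: notifyTeam })` retries the deployment twice before handing the last error to the fallback.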
Common Issues and Solutions
Issue: Workflow Timeouts
Solution:
# Global timeout configuration
settings:
defaultTimeout: 30m
maxTimeout: 2h
# Step-specific timeouts
steps:
- action: long:running
timeout: 45m
# Timeout handlers
handlers:
onTimeout:
- action: notify:team
- action: logs:capture
- action: workflow:abort
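A step-level timeout like `timeout: 45m` can be enforced by racing the step against a timer, so the onTimeout handlers above have a rejection to react to. A minimal sketch:

```javascript
// Run a step but reject if it exceeds the given timeout, so onTimeout
// handlers (notify, capture logs, abort) can take over.
function withTimeout(promise, ms, label = 'step') {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error(`${label} timed out after ${ms}ms`)), ms);
  });
  // Clear the timer either way so the process is not kept alive
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}
```

Note that this abandons the slow step rather than cancelling it; real step cancellation needs cooperation from the step itself (for example via an AbortSignal).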
Issue: Flaky Tests
Solution:
// Intelligent retry logic
class FlakyTestHandler {
async handleFlakyTest(test, context) {
const history = await this.getTestHistory(test.name);
const flakinessScore = this.calculateFlakinessScore(history);
if (flakinessScore > 0.3) {
// Automatically retry flaky tests
const retryConfig = {
attempts: Math.min(5, Math.ceil(flakinessScore * 10)),
delay: 1000,
backoff: 1.5
};
// Analyze failure patterns
const analysis = await context.claude.analyze({
testHistory: history,
recentFailures: history.failures.slice(-10),
prompt: 'Identify patterns in test failures and suggest fixes'
});
// Apply suggested fixes if confidence is high
if (analysis.confidence > 0.8 && analysis.autoFixable) {
await this.applyFixes(test, analysis.fixes);
}
return retryConfig;
}
return null;
}
}
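calculateFlakinessScore is not shown in the handler above. A simple definition is the fraction of recent runs that failed, and the retry budget then follows the `min(5, ceil(score * 10))` formula from the handler (a sketch; the history shape is assumed):

```javascript
// Flakiness = share of recent runs that failed; the retry budget grows with
// it, capped at 5 attempts, as in FlakyTestHandler above.
function calculateFlakinessScore(history) {
  if (history.length === 0) return 0;
  const failures = history.filter(run => !run.passed).length;
  return failures / history.length;
}

function retryAttempts(score) {
  // Below the 0.3 threshold the handler does not retry at all
  return score > 0.3 ? Math.min(5, Math.ceil(score * 10)) : 0;
}

const history = [
  { passed: true }, { passed: false }, { passed: true },
  { passed: false }, { passed: true }
];
const score = calculateFlakinessScore(history); // 2 failures / 5 runs = 0.4
```

A test failing 2 of its last 5 runs would get 4 retry attempts; a consistently failing test maxes out at 5, keeping genuinely broken tests from retrying forever.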
Issue: Resource Constraints
Solution:
# Resource-aware workflow execution
resources:
limits:
cpu: 4
memory: 8Gi
concurrent_workflows: 5
quotas:
claude_tokens_per_hour: 100000
builds_per_day: 100
prioritization:
- production
- staging
- development
# Adaptive resource management
steps:
- id: resource-check
action: resources:check
- id: adaptive-execution
action: core:switch
with:
value: ${steps.resource-check.outputs.availability}
cases:
- condition: high
action: workflow:full
- condition: medium
action: workflow:optimized
- condition: low
action: workflow:minimal
Performance Optimization Tips
- Intelligent Caching:
cache:
strategy: intelligent
key: ${hashFiles('package-lock.json', 'src/**/*.js')}
restore-keys:
- ${workflow.name}-${branch}-
- ${workflow.name}-
ttl: 7d
- Parallel Execution:
optimization:
parallelism:
automatic: true
maxWorkers: ${runtime.availableCores}
taskDistribution: intelligent
- Model Selection:
models:
quick-tasks: claude-3-haiku
standard-tasks: claude-3-sonnet
complex-tasks: claude-3-opus
# Automatic model selection
steps:
- action: claude:analyze
with:
model: ${selectModel(task.complexity)}
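`selectModel(task.complexity)` is assumed in the model-selection snippet above. A sketch that maps a complexity estimate onto the three tiers from the config (the 0.3/0.7 cut points are illustrative):

```javascript
// Map a task complexity score in [0, 1] to a model tier, mirroring the
// quick/standard/complex split in the models config above.
function selectModel(complexity) {
  if (complexity < 0.3) return 'claude-3-haiku';   // quick-tasks
  if (complexity < 0.7) return 'claude-3-sonnet';  // standard-tasks
  return 'claude-3-opus';                          // complex-tasks
}
```

Routing routine lint-style checks to the smallest tier and reserving the largest for deep reviews is usually the biggest single lever on workflow cost.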
Summary
You've mastered advanced automation and workflow capabilities with Claude Code:
- ✅ Built sophisticated custom workflows
- ✅ Implemented CI/CD pipeline integrations
- ✅ Created event-driven and scheduled automations
- ✅ Developed enterprise-grade workflow patterns
- ✅ Implemented comprehensive monitoring and optimization
- ✅ Built a complete DevOps pipeline from scratch
Next Steps
- Explore Advanced Topics:
- Join the Community:
- Certification:
Resources
Last updated: January 2025 | Report an issue | Contribute workflows