
Caching & Rate Limits

How RepoPulse manages performance, reliability, and fair usage

Why Caching Matters

RepoPulse serves thousands of requests daily while staying within GitHub API limits. Intelligent caching ensures fast responses, reliable service, and fair resource usage.

Key Balance: Fresh data when possible, cached data when necessary. Performance without sacrificing accuracy.

Cache Layers

GitHub API Data
Repository metadata, commits, issues, and contributors.
Cache Duration: 1 hour
Why this duration: GitHub data changes infrequently, so a one-hour TTL sharply reduces API calls.
Impact on freshness: May delay detection of very recent changes.

Analysis Results
Health scores, insights, and derived metrics.
Cache Duration: 1 hour
Why this duration: The underlying calculations are expensive, and caching keeps results consistent between requests.
Impact on freshness: Analysis reflects data from up to 1 hour ago.

SVG Generation
Cards, badges, and visualizations.
Cache Duration: 24 hours
Why this duration: The output is effectively static and benefits from CDN optimization.
Impact on freshness: Visual elements update daily.

Theme Assets
CSS, fonts, and theme definitions.
Cache Duration: 1 year
Why this duration: Static resources rarely change.
Impact on freshness: Theme updates may take time to propagate.
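
For orientation, the four layers above map onto a single TTL table. The following is a minimal TypeScript sketch; the constant name, keys, and structure are illustrative assumptions, not RepoPulse's actual configuration:

// Hypothetical TTL map mirroring the cache layers above;
// names and structure are illustrative, not RepoPulse's real config.
const CACHE_TTL_SECONDS = {
  githubApiData: 60 * 60,          // 1 hour: metadata, commits, issues, contributors
  analysisResults: 60 * 60,        // 1 hour: health scores and derived metrics
  svgGeneration: 24 * 60 * 60,     // 24 hours: cards, badges, visualizations
  themeAssets: 365 * 24 * 60 * 60, // 1 year: CSS, fonts, theme definitions
} as const;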

Rate Limiting Strategies

Request Throttling
Limits requests per IP address to prevent abuse.
Implementation: Server-side rate limiting with Redis.
Effectiveness: Prevents malicious overuse.
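
A Redis-backed throttle of this kind can be a few lines. The sketch below uses a fixed-window counter with the ioredis client; the 100-requests-per-hour limit and key scheme are assumptions for illustration, since RepoPulse does not publish its exact per-IP quota:

import Redis from "ioredis";

const redis = new Redis(); // connects to localhost:6379 by default

const LIMIT = 100;           // hypothetical per-IP quota
const WINDOW_SECONDS = 3600; // 1-hour fixed window

// Returns true if the request from this IP is allowed, false if throttled.
async function allowRequest(ip: string): Promise<boolean> {
  const key = `throttle:${ip}`;
  const count = await redis.incr(key); // atomic counter per IP
  if (count === 1) {
    await redis.expire(key, WINDOW_SECONDS); // start the window on first hit
  }
  return count <= LIMIT;
}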

GitHub API Respect
Stays within GitHub API rate limits.
Implementation: Token rotation and request queuing.
Effectiveness: Ensures reliable service availability.
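
Token rotation can key off the rate-limit headers GitHub returns on every response. A minimal sketch, assuming a two-token pool and a rotate-when-low policy (neither is a description of RepoPulse's internals):

// Hypothetical token pool; a real deployment would load these from secrets storage.
const tokens = [process.env.GH_TOKEN_A!, process.env.GH_TOKEN_B!];
let current = 0;

async function githubFetch(url: string): Promise<Response> {
  const res = await fetch(url, {
    headers: { Authorization: `Bearer ${tokens[current]}` },
  });
  // GitHub reports the remaining quota on every response.
  const remaining = Number(res.headers.get("x-ratelimit-remaining"));
  if (remaining < 50) {
    current = (current + 1) % tokens.length; // rotate before the token runs dry
  }
  return res;
}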

Caching Optimization
Reduces API calls through intelligent caching.
Implementation: Multi-layer cache with appropriate TTLs.
Effectiveness: Handles high traffic efficiently.
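
A multi-layer cache of this kind is typically a read-through wrapper: check the store, compute on a miss, write back with the layer's TTL. Continuing the sketch above (same Redis client; the key scheme is illustrative):

// Read-through cache: return the cached value if present, otherwise
// compute it, store it with the layer's TTL, and return it.
async function cached<T>(
  key: string,
  ttlSeconds: number,
  compute: () => Promise<T>
): Promise<T> {
  const hit = await redis.get(key);
  if (hit !== null) return JSON.parse(hit) as T;

  const value = await compute();
  await redis.set(key, JSON.stringify(value), "EX", ttlSeconds);
  return value;
}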

Graceful Degradation
Serves cached data when rate limits are exceeded.
Implementation: Fallback to stale data with warnings.
Effectiveness: Maintains service during peak usage.
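
Degradation can reuse the same wrapper by keeping a longer-lived stale copy and serving it when the fresh path fails, for example when the GitHub quota is exhausted. A sketch under the same assumptions; the :stale key suffix and the stale flag are illustrative:

// Serve fresh data when possible; fall back to a long-lived stale copy
// (flagged as stale) when the fresh computation fails.
async function cachedWithFallback<T>(
  key: string,
  ttlSeconds: number,
  compute: () => Promise<T>
): Promise<{ value: T; stale: boolean }> {
  try {
    const value = await cached(key, ttlSeconds, compute);
    await redis.set(`${key}:stale`, JSON.stringify(value), "EX", ttlSeconds * 24);
    return { value, stale: false };
  } catch (err) {
    const old = await redis.get(`${key}:stale`);
    if (old !== null) return { value: JSON.parse(old) as T, stale: true };
    throw err; // nothing to fall back to: surface the error
  }
}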

GitHub API Rate Limits

Unauthenticated Requests

  • 60 requests per hour per IP
  • Basic repository metadata only
  • No private repository access
  • Rate limit resets hourly

Authenticated Requests

  • 5,000 requests per hour per token
  • Higher limits for registered apps
  • Access to private repositories
  • Better reliability and performance
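
You can check these quotas directly: GitHub exposes a /rate_limit endpoint, and querying it does not count against your quota. For example:

// Query GitHub's rate-limit status; this request is not counted.
const res = await fetch("https://api.github.com/rate_limit", {
  headers: { Authorization: `Bearer ${process.env.GITHUB_TOKEN}` },
});
const { rate } = await res.json();
console.log(rate.limit, rate.remaining, rate.reset);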

How RepoPulse Handles Limits

RepoPulse uses authenticated requests and intelligent caching to stay within limits. During high traffic or API issues, cached data ensures continuous service availability.

Performance Tips

Use repository URLs over user analysis
Why: Repository data is more cacheable and faster to compute.
Performance impact: 2-3x faster response times.

Avoid frequent theme changes
Why: Theme variations bypass the SVG cache.
Performance impact: The first request is slower; subsequent requests are served from cache.

Use default analysis windows
Why: Custom windows require fresh calculations.
Performance impact: Default 30-90 day windows are pre-computed.

Batch similar requests
Why: Sequential requests hit warm caches.
Performance impact: Subsequent requests are 10x faster.

Forcing Fresh Data

In rare cases where you need the absolute latest data, you can bypass caching:

URL Parameters

Add cache-busting parameters to force fresh API calls:

/repo/vercel/next.js?cache=false&t=1234567890
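
If you build these URLs in code, a millisecond timestamp keeps each request unique. A small sketch; the host is a placeholder, and cache and t are the parameters shown above:

// Build a cache-busting URL for a forced refresh (hypothetical host).
const url = new URL("https://repopulse.example/repo/vercel/next.js");
url.searchParams.set("cache", "false");
url.searchParams.set("t", Date.now().toString()); // unique per request
console.log(url.toString());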

When to Use

  • Immediately after major repository changes
  • Debugging caching issues
  • Time-sensitive analysis

Note: Forced refreshes consume API quota and may be slower. Use sparingly and only when necessary.

Service Health

RepoPulse includes monitoring to ensure reliable service:

API Health: GitHub API availability monitoring.

Cache Performance: Hit rates and response-time tracking.

Rate Limit Status: API quota monitoring and alerts.

Questions about performance? Check our FAQ or report issues.