Wiring Analytics Into Your Stack: 5 API Integration Patterns
Analytics gets exponentially more useful the moment it leaves the dashboard and starts feeding the rest of your stack. Here are five concrete integration patterns — from a one-script daily digest to a full BI warehouse — with the code shape for each.
The dashboard is the starting point, not the destination
Every analytics tool ships with a dashboard, and for most teams that's where the usage ends. You log in, look at the numbers, log out. The data is trapped in the UI.
This is a missed opportunity. Analytics becomes meaningfully more valuable when it stops being a separate place you visit and starts being a signal that flows into the systems you already use: your team's Slack, your business intelligence reports, your CRM, your operational alerting. The dashboard becomes the default view — but it's no longer the only one.
The unlock is a public API with simple key-based authentication. Once you have that, every pattern below is a small script away.
Pattern 1: The Monday morning Slack digest
The simplest, highest-leverage integration. Every Monday at 9am a script pulls last week's stats, formats them into a short message, and posts to a Slack channel.
#!/bin/bash
# weekly-digest.sh — run via cron Mondays 09:00
LOGLY_KEY="${LOGLY_KEY:?set LOGLY_KEY}"        # API key, read from the environment
SITE=my-blog
WEBHOOK="${SLACK_WEBHOOK:?set SLACK_WEBHOOK}"  # Slack incoming webhook URL

# Last 7 days of stats; the response also carries the previous period's totals
stats=$(curl -sH "Authorization: Bearer $LOGLY_KEY" \
  "https://app.logly.uk/api/sites/$SITE/stats?days=7")

pv=$(echo "$stats" | jq -r '.totals.pageviews')
prev=$(echo "$stats" | jq -r '.prev_totals.pageviews')
delta=$(awk "BEGIN { printf \"%+.1f\", ($pv - $prev) / $prev * 100 }")

# Post the digest to Slack
curl -X POST -H 'Content-Type: application/json' \
  -d "{\"text\":\"📊 Last 7 days: *$pv pageviews* (${delta}% vs prev week)\"}" \
  "$WEBHOOK"
Twenty lines of bash, zero ongoing maintenance, the data lands where your team already lives. This is the kind of integration that gets built in 30 minutes and runs reliably for years.
Why this matters more than it sounds
Most teams have a "we should look at analytics more" problem. The fix isn't more discipline — it's putting the relevant numbers in front of people automatically, in the channel they already check. A Monday digest in #marketing or #product changes the conversation from "I should log into the dashboard" to "interesting that traffic was up — anyone know why?". That's the small behavioural shift that turns analytics from a thing you have into a thing you use.
Pattern 2: Joining traffic with revenue in a spreadsheet
You ship a SaaS product and you want to know which acquisition channels actually generate paying customers — not just signups, not just trials, but paying customers. No single dashboard can answer this because the data lives in three places: traffic in your analytics, signups in your product database, subscriptions in Stripe.
A scheduled pull into Google Sheets bridges them:
// Google Apps Script — runs daily, appends to a tab
function updateAnalyticsRow() {
  const url = "https://app.logly.uk/api/sites/my-saas/stats?days=1";
  const key = PropertiesService.getScriptProperties().getProperty('LOGLY_KEY');
  const r = UrlFetchApp.fetch(url, {
    headers: { Authorization: "Bearer " + key }
  });
  const data = JSON.parse(r.getContentText());
  const sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName('Daily traffic');
  const today = Utilities.formatDate(new Date(), 'UTC', 'yyyy-MM-dd');
  sheet.appendRow([today, data.totals.pageviews, data.totals.sessions, data.totals.visitors]);
}
Add similar pulls for signups (from your DB export) and subscriptions (from Stripe's API). Three tabs in one sheet, two days of effort. Now your investor update has a chart titled "Revenue per acquisition channel" that no individual dashboard could have produced.
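The Stripe half has exactly the same shape: one authenticated GET, one appended row. Here is a sketch of it in Python against Stripe's public subscriptions endpoint; in the sheet itself you would make the same request with UrlFetchApp and appendRow, just like the traffic pull above. The STRIPE_KEY variable name and the "active subscriptions = paying customers" simplification are assumptions for illustration.
# Python sketch: count active Stripe subscriptions (mirror this with UrlFetchApp in Apps Script)
import os, datetime, requests

count = 0
params = {'status': 'active', 'limit': 100}
while True:
    page = requests.get(
        'https://api.stripe.com/v1/subscriptions',
        params=params,
        auth=(os.environ['STRIPE_KEY'], ''),  # Stripe accepts the secret key as a basic-auth username
    ).json()
    count += len(page['data'])
    if not page['has_more']:
        break
    params['starting_after'] = page['data'][-1]['id']  # cursor pagination

print(f'{datetime.date.today().isoformat()},{count}')  # the row for a "Daily subscriptions" tab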
Pattern 3: BI warehouse with Metabase, Looker Studio, or Superset
When the spreadsheet approach starts groaning under its own weight, the next step is a proper warehouse. Postgres, BigQuery, DuckDB, or whatever your team prefers. The pattern is the same regardless of database: schedule a daily pull, append to a table, point a BI tool at it.
# Python — runs daily, appends to Postgres
import os, requests, psycopg2, csv, io
LOGLY_KEY = os.environ['LOGLY_KEY']
DB = psycopg2.connect(os.environ['DATABASE_URL'])
# Pull the last two days of daily aggregates as CSV (the overlap makes re-runs safe)
r = requests.get(
    'https://app.logly.uk/api/sites/my-site/export',
    params={'type': 'daily', 'days': 2},
    headers={'Authorization': f'Bearer {LOGLY_KEY}'},
)
reader = csv.DictReader(io.StringIO(r.text))

# Upsert each day so a re-run, or the overlapping day, refreshes the existing row
with DB.cursor() as cur:
    for row in reader:
        cur.execute("""
            INSERT INTO analytics_daily (date, pageviews, sessions, visitors, avg_duration_s, bounce_rate)
            VALUES (%(date)s, %(pageviews)s, %(sessions)s, %(visitors)s, %(avg_duration_s)s, %(bounce_rate)s)
            ON CONFLICT (date) DO UPDATE SET
                pageviews = EXCLUDED.pageviews,
                sessions = EXCLUDED.sessions,
                visitors = EXCLUDED.visitors,
                avg_duration_s = EXCLUDED.avg_duration_s,
                bounce_rate = EXCLUDED.bounce_rate
        """, row)
DB.commit()
The CSV export endpoint is doing the heavy lifting — one HTTP request, all daily aggregates, parsed by Python's standard library. Once the data is in Postgres, every BI tool in existence can query it. You're no longer dependent on what the analytics dashboard chooses to visualise.
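One detail the snippet glosses over: ON CONFLICT (date) only works if analytics_daily has a unique constraint on date. A minimal one-time setup looks like this; the column types are assumptions inferred from the fields above, so adjust them to the CSV you actually get back.
# Python: one-time table setup (run once before the first daily pull)
import os, psycopg2

DB = psycopg2.connect(os.environ['DATABASE_URL'])
with DB.cursor() as cur:
    cur.execute("""
        CREATE TABLE IF NOT EXISTS analytics_daily (
            date           date PRIMARY KEY,  -- the primary key is what ON CONFLICT (date) relies on
            pageviews      integer NOT NULL,
            sessions       integer NOT NULL,
            visitors       integer NOT NULL,
            avg_duration_s numeric,
            bounce_rate    numeric
        )
    """)
DB.commit()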
The principle: The API endpoint is the same whether you call it from a one-liner shell script, a Google Apps Script, or a serious data pipeline. The integration grows with you — no rewrite needed when you move from manual to automated to warehouse-scale.
Pattern 4: Programmatic funnel definition for A/B test variants
You run experiments. Each variant might need its own funnel — "did users on hero_v2 progress from /pricing to /signup at a higher rate than hero_v1?". Manually creating these funnels in the dashboard before every test is friction. The API lets you script it.
// Node.js — creates one funnel per variant when a test starts
const variants = ['control', 'urgency', 'social_proof'];
for (const v of variants) {
  await fetch('https://app.logly.uk/api/sites/my-site/funnels', {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${process.env.LOGLY_KEY}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      name: `Hero test — ${v}`,
      steps: [`/?variant=${v}`, '/pricing', '/signup', '/dashboard'],
    }),
  });
}
Combined with tagging the variant in the URL or as a custom event, you get per-variant funnel data without manual setup. When the test ends, a second script can fetch results for all three funnels, post a winner to Slack, and clean up the funnel definitions.
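Here is one possible shape for that second script, in Python. It leans on assumptions the snippet above doesn't confirm: that the funnels endpoint also supports GET (returning a list with id, name, and some conversion-rate field) and DELETE by id. Check the actual API responses before relying on these field names.
# Python sketch: summarise the per-variant funnels, post the winner, then delete them.
# The GET/DELETE calls and the conversion_rate field are assumptions, not documented here.
import os, requests

BASE = 'https://app.logly.uk/api/sites/my-site/funnels'
HEADERS = {'Authorization': f"Bearer {os.environ['LOGLY_KEY']}"}

funnels = [f for f in requests.get(BASE, headers=HEADERS).json()
           if f['name'].startswith('Hero test')]

# Best end-to-end conversion wins
winner = max(funnels, key=lambda f: f['conversion_rate'])
requests.post(os.environ['SLACK_WEBHOOK'], json={
    'text': f"🏁 Hero test finished. Winner: {winner['name']} ({winner['conversion_rate']:.1%})"
})

# Remove the per-variant funnel definitions now the test is over
for f in funnels:
    requests.delete(f"{BASE}/{f['id']}", headers=HEADERS)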
Pattern 5: Operational alerting on traffic anomalies
You want to know if traffic drops 50% overnight — because that probably means your site is down, your DNS broke, or your tracker stopped working. A simple polling script that checks recent activity catches this automatically.
# Python — runs every 15 min via cron
import requests, os, sys
key = os.environ['LOGLY_KEY']
r = requests.get(
    'https://app.logly.uk/api/sites/my-site/active',
    headers={'Authorization': f'Bearer {key}'},
)
active = r.json()['active']

if active == 0:
    requests.post(os.environ['PAGERDUTY_WEBHOOK'], json={
        'incident_key': 'logly-zero-traffic',
        'description': 'Logly reports zero active visitors — possible site outage',
    })
    sys.exit(1)
Pair it with a baseline threshold (median active visitors for that hour-of-week) and you have basic anomaly detection without standing up a real monitoring stack. For most small teams, this is the right amount of operational visibility — not over-engineered, but enough to catch the worst failure modes.
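Here's what that baseline version can look like, assuming you keep each 15-minute reading in a local SQLite file. The history table and the 50%-of-median threshold are illustrative choices, not part of the API:
# Python: poll, record the reading, and alert when traffic falls well below
# the median for this hour-of-week (hypothetical local store, not part of the API)
import os, sqlite3, statistics, requests, sys
from datetime import datetime, timezone

db = sqlite3.connect('active_history.db')
db.execute('CREATE TABLE IF NOT EXISTS readings (ts TEXT, hour_of_week INTEGER, active INTEGER)')

resp = requests.get(
    'https://app.logly.uk/api/sites/my-site/active',
    headers={'Authorization': f"Bearer {os.environ['LOGLY_KEY']}"},
)
active = resp.json()['active']

now = datetime.now(timezone.utc)
how = now.weekday() * 24 + now.hour  # 0..167: the same slot every week

history = [row[0] for row in
           db.execute('SELECT active FROM readings WHERE hour_of_week = ?', (how,))]
baseline = statistics.median(history) if history else None

db.execute('INSERT INTO readings VALUES (?, ?, ?)', (now.isoformat(), how, active))
db.commit()

# Alert when the current reading is below half the typical level for this slot
if baseline and active < baseline * 0.5:
    requests.post(os.environ['PAGERDUTY_WEBHOOK'], json={
        'incident_key': 'logly-low-traffic',
        'description': f'Active visitors at {active}, typical for this hour is ~{baseline}',
    })
    sys.exit(1)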
What makes a good analytics API
Not all "we have an API" claims are equal. Worth checking before you commit to a stack:
- One canonical authentication mechanism. A long-lived bearer key beats OAuth or rotating tokens for server-to-server work. You should be able to script a curl call in a minute.
- Same endpoints as the dashboard. If the public API is a separate subset of the data the dashboard sees, you have two products to keep in sync. A single shared backend means anything you see in the UI you can pull via API.
- Standard formats out. JSON for queries, CSV for bulk data, ISO timestamps, standard country codes. No proprietary serialisation, no XML, no required SDKs.
- Revocable keys with a usage signal. You should be able to revoke a leaked key instantly, and ideally see when a key was last used so you can clean up forgotten integrations.
Start small
The trap with integrations is to imagine the full architecture before shipping the first one. Don't. The path that works for almost every team:
- Week 1: Ship Pattern 1 — a Monday Slack digest. Twenty lines of bash.
- Month 2: Add Pattern 2 if you need cross-system joins. A Google Sheet with three scheduled pulls.
- Quarter 2: Migrate to Pattern 3 if the sheet is groaning. Warehouse + BI tool.
- As needed: Add Patterns 4 and 5 when specific use cases justify them.
Each step builds on the previous one's pattern. The bash script becomes a Python script becomes a scheduled job in your data platform. Same endpoint, same auth, growing sophistication.
Analytics with a real public API
Bearer-key authentication. Same endpoints as the dashboard. CSV export, funnels, real-time, custom events — all scriptable. Free up to 10,000 pageviews/month.
Get started free →