The 18-Hour Sync: How I Built a Production-Grade Cross-Device Sync Layer in One Day
From RLS hell to deterministic sync—the real story of building a sync proxy for iPad + Mac Catalyst in 18 hours. What broke, how we fixed it, and the playbook to save you time.
This article documents building a production-grade sync proxy system for cross-platform (iPad + Mac Catalyst) data synchronization with Supabase, including authentication, schema contracts, deployment issues, and debugging strategies that would typically take a full dev team weeks.
By Best ROI Media
Why Sync Is Harder Than Features
Most features are straightforward: add a button, wire up an API call, ship it. Sync is different. Sync is about trust. When a contractor opens your app on their iPad, makes changes, then switches to their Mac, they expect to see those changes. Not "maybe." Not "eventually." Now.
If sync breaks, trust breaks. And trust is the foundation of any SaaS product.
This is the story of how I built a production-grade sync system in 18 hours—work that would typically take a full dev team 1–2 weeks. It's not a story about AI writing code. It's a story about systems thinking, debugging methodology, and understanding contracts between systems. AI accelerated execution, but the real value was in understanding what was breaking and why.
The Context: iPad + Mac Catalyst + Supabase
The app is Best Estimator, a contractor estimation tool. It runs on iPad and Mac (via Catalyst). The backend is Supabase—Postgres with Row Level Security (RLS), real-time subscriptions, and auth.
The original plan was simple: both iPad and Mac would sync directly to Supabase. The Supabase Swift SDK handles auth, sessions, and queries. What could go wrong?
Everything.
The First Failure: Mac Catalyst Session Persistence
On iPad, everything worked. Users logged in, sessions persisted, queries returned data. On Mac Catalyst, sessions would disappear between app launches. The SDK thought users were logged out. Users had to re-authenticate constantly.
This wasn't a bug in our code—it was a Catalyst-specific issue with how the Supabase SDK stores sessions. Catalyst apps run in a different sandbox than native Mac apps, and session persistence wasn't reliable.
The Second Failure: RLS Returning Empty Arrays
Even when sessions worked, RLS policies were returning empty arrays. Queries that worked in the Supabase dashboard returned nothing from the client. The policies were correct. The user was authenticated. But the data wasn't there.
After hours of debugging, the picture became clear enough to act on: the policies themselves were fine, but somewhere in the query path the auth context RLS depends on wasn't making it through, so the policies evaluated as if no user were present and filtered everything out. The client SDK's requests looked correct; the rows just never came back.
The Decision: Stop Syncing Mac Directly to Supabase
The solution wasn't to fix Catalyst session persistence or debug RLS edge cases. The solution was to stop syncing Mac directly to Supabase.
New architecture:
- iPad → Supabase (direct, works fine)
- Mac → bestroi.media API → Supabase (service role, bypasses RLS)
This gave us:
- Control: We own the sync logic server-side
- Reliability: Service role bypasses RLS, so we can enforce tenant scoping ourselves
- Consistency: Same sync behavior on both platforms
- Debugging: Server-side logs show exactly what's happening
The sync proxy would have three endpoints:
- GET /api/sync/canary — Validate auth before syncing
- GET /api/sync/pull — Pull data for the authenticated contractor
- POST /api/sync/push — Upsert data for the authenticated contractor
The Build: 18 Hours of Phases
Hour 1–3: Proxy Design + Endpoints
I started with the canary endpoint—the simplest one. It validates authentication and returns the user ID. If this works, auth is working.
// app/api/sync/canary/route.ts
import { NextRequest, NextResponse } from "next/server";
import { createClient } from "@supabase/supabase-js";
const SUPABASE_URL = process.env.SUPABASE_URL || process.env.NEXT_PUBLIC_SUPABASE_URL;
const SUPABASE_ANON_KEY = process.env.SUPABASE_ANON_KEY || process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY;
export async function GET(req: NextRequest) {
// Fail fast if the Supabase env vars are missing; otherwise createClient would receive undefined
if (!SUPABASE_URL || !SUPABASE_ANON_KEY) {
return NextResponse.json({ error: "Server configuration error" }, { status: 500 });
}
const authHeader = req.headers.get("authorization");
if (!authHeader?.toLowerCase().startsWith("bearer ")) {
return NextResponse.json(
{ error: "Missing or invalid Authorization header" },
{ status: 401 }
);
}
const token = authHeader.substring(7).trim();
const authClient = createClient(SUPABASE_URL, SUPABASE_ANON_KEY, {
auth: { persistSession: false, autoRefreshToken: false },
});
const { data: { user }, error } = await authClient.auth.getUser(token);
if (error || !user) {
return NextResponse.json(
{ error: "Invalid or expired token" },
{ status: 401 }
);
}
return NextResponse.json({ ok: true, userId: user.id });
}
The pull endpoint was next. It needs to:
- Authenticate the user
- Derive contractor_id from the user
- Query all sync tables filtered by contractor_id
- Return data with a server timestamp
The push endpoint would upsert data, but I started with pull to get the read path working.
First problem: How do we query Supabase with service role privileges? We need to bypass RLS.
// lib/supabase/admin.ts
import { createClient } from "@supabase/supabase-js";
const SUPABASE_URL = process.env.SUPABASE_URL || process.env.NEXT_PUBLIC_SUPABASE_URL;
const SUPABASE_SERVICE_ROLE_KEY = process.env.SUPABASE_SERVICE_ROLE_KEY;
if (!SUPABASE_SERVICE_ROLE_KEY) {
throw new Error("SUPABASE_SERVICE_ROLE_KEY is required");
}
export function createSupabaseAdminClient() {
return createClient(SUPABASE_URL!, SUPABASE_SERVICE_ROLE_KEY!, {
auth: {
persistSession: false,
autoRefreshToken: false,
},
});
}
The service role key bypasses RLS, so we can query any table. But we still enforce tenant scoping server-side by filtering on contractor_id.
Hour 4–7: Tenant Security (contractor_id Derivation)
The critical security question: how do we get contractor_id from an authenticated user?
We can't trust the client. Never accept contractor_id from the client. Always derive it server-side.
Our schema has two patterns:
- Team members: public.users.auth_user_id → public.users.contractor_id
- Contractor owners: contractors.id = auth.users.id (direct match)
// lib/sync/helpers.ts
import { NextResponse } from "next/server";
import { createSupabaseAdminClient } from "../supabase/admin";
export async function requireContractorId(authUserId: string): Promise<{
contractorId: string | null;
error: NextResponse | null
}> {
const adminClient = createSupabaseAdminClient();
// Try public.users table first (for team members)
const { data: userData } = await adminClient
.from("users")
.select("contractor_id")
.eq("auth_user_id", authUserId)
.maybeSingle();
if (userData?.contractor_id) {
return { contractorId: userData.contractor_id, error: null };
}
// If not found, check if authUserId is itself a contractor
const { data: contractorData } = await adminClient
.from("contractors")
.select("id")
.eq("id", authUserId)
.maybeSingle();
if (contractorData?.id) {
return { contractorId: contractorData.id, error: null };
}
return {
contractorId: null,
error: NextResponse.json(
{ error: "User is not associated with a contractor" },
{ status: 403 }
),
};
}
This function is the security boundary. Every sync operation calls it, and we never accept contractor_id from the client.
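Putting the pieces together, the read path is: validate the bearer token with the anon client, derive contractor_id, then query every sync table with the admin client, scoped to that contractor. Here is a minimal sketch of the pull route trimmed to a single table; the auth check mirrors the canary endpoint, and the relative import paths assume lib/ sits next to app/ at the project root.
// app/api/sync/pull/route.ts (simplified sketch; the real route handles all nine tables)
import { NextRequest, NextResponse } from "next/server";
import { createClient } from "@supabase/supabase-js";
import { createSupabaseAdminClient } from "../../../../lib/supabase/admin";
import { requireContractorId } from "../../../../lib/sync/helpers";

export async function GET(req: NextRequest) {
  // 1. Authenticate the bearer token, same as the canary endpoint
  const authHeader = req.headers.get("authorization");
  if (!authHeader?.toLowerCase().startsWith("bearer ")) {
    return NextResponse.json({ error: "Missing or invalid Authorization header" }, { status: 401 });
  }
  const token = authHeader.substring(7).trim();
  const authClient = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_ANON_KEY!, {
    auth: { persistSession: false, autoRefreshToken: false },
  });
  const { data: { user }, error } = await authClient.auth.getUser(token);
  if (error || !user) {
    return NextResponse.json({ error: "Invalid or expired token" }, { status: 401 });
  }

  // 2. Derive contractor_id server-side; never read it from the request
  const { contractorId, error: scopeError } = await requireContractorId(user.id);
  if (scopeError || !contractorId) {
    return scopeError ?? NextResponse.json({ error: "Forbidden" }, { status: 403 });
  }

  // 3. Query with the service-role client, always scoped to this contractor
  const admin = createSupabaseAdminClient();
  const { data: estimates } = await admin
    .from("estimates")
    .select("*")
    .eq("contractor_id", contractorId);
  // ...repeat for costbook_items, costbook_folders, pdf_settings, and the other tables...

  return NextResponse.json({
    ok: true,
    serverTime: new Date().toISOString(),
    estimates: estimates || [],
    // ...other tables...
  });
}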
Hour 8–10: Deployment Reality (Preview Works, Prod 404)
I tested locally, everything worked. I deployed to Vercel preview, everything worked. I merged to main, deployed to production, and... 404.
The endpoints didn't exist in production.
The problem: Vercel route handlers need to be in the app/api directory, and the build process wasn't picking them up. I checked the file structure, checked the imports, checked the Next.js config. Everything looked right.
The fix: I had to redeploy. The preview branch had the routes, but production didn't. After merging and triggering a new deployment, the routes appeared.
But then I hit another issue: the routes were returning 500 errors. The logs showed "Server configuration error" — missing environment variables.
The real problem: Vercel environment variables weren't set for production. I had set them for preview, but not for production. After adding SUPABASE_URL, SUPABASE_ANON_KEY, and SUPABASE_SERVICE_ROLE_KEY to production, everything worked.
Lesson: Always test the deployment path, not just the code path. Preview and production are different environments.
Hour 11–13: Schema Contract Mismatch
The pull endpoint was working, but the data shape didn't match what the client expected.
Problem 1: Table name mismatch. The client was looking for items, but the database table was costbook_items. The client code had hardcoded items in the decoding logic.
Problem 2: Missing columns. The client expected estimate_json, but that column didn't exist. The estimates table had individual columns, not a JSON blob.
Problem 3: Column name mismatches. The client expected updatedAt (camelCase), but the database had updated_at (snake_case).
The fix: I updated the pull endpoint to return the exact table names and column names the client expected:
// app/api/sync/pull/route.ts
return NextResponse.json({
ok: true,
serverTime: new Date().toISOString(),
contractors: contractors || [],
costbook_items: costbookItems || [],
costbook_folders: costbookFolders || [],
estimates: estimates || [],
estimate_events: estimateEvents || [],
pdf_settings: pdfSettings || [],
pricing_rules: pricingRules || [],
step_a_configs: stepAConfigs || [],
step_a_rules: stepARules || [],
});
The client decodes these with Swift's Codable, with property names (or a key-decoding strategy) lined up against the snake_case keys the server returns.
Lesson: The schema is the contract. Both sides must agree on table names, column names, and data types. Document it, test it, enforce it.
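One cheap way to make that contract explicit on the server is a shared TypeScript type for the pull response. A sketch is below; the table names are the ones this article syncs, while the row types are left loose because the exact column lists belong to your schema.
// types/sync.ts (illustrative; tighten the row types to your real columns)
export interface SyncPullResponse {
  ok: true;
  serverTime: string; // ISO 8601; the client stores it as lastSyncAt
  contractors: Record<string, unknown>[];
  costbook_items: Record<string, unknown>[];
  costbook_folders: Record<string, unknown>[];
  estimates: Record<string, unknown>[];
  estimate_events: Record<string, unknown>[];
  pdf_settings: Record<string, unknown>[];
  pricing_rules: Record<string, unknown>[];
  step_a_configs: Record<string, unknown>[];
  step_a_rules: Record<string, unknown>[];
}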
Hour 14–15: Client Decoding Wars
The pull endpoint was returning data, but the Swift client was crashing on decode.
Error 1: DecodingError.keyNotFound(CodingKeys(stringValue: "id", intValue: nil))
The client expected every table to have an id field, but singleton tables like pdf_settings and step_a_configs don't have id—they're keyed by contractor_id.
Error 2: DecodingError.dataCorrupted — date format mismatch. The database returns ISO 8601 strings, but the client was expecting a different format.
Error 3: DecodingError.keyNotFound for serverTime. The client expected serverTime in the response, but I was returning it in a header.
The fixes:
- Separate models for singleton tables: Created PDFSettings and StepAConfig models that don't require id.
- Date decoding: Used ISO8601DateFormatter in Swift:
let formatter = ISO8601DateFormatter()
formatter.formatOptions = [.withInternetDateTime, .withFractionalSeconds]
let date = formatter.date(from: dateString)
- Response structure: Moved serverTime into the JSON body, not just headers:
return NextResponse.json({
ok: true,
serverTime: new Date().toISOString(),
// ... tables
});
Lesson: Test decoding with real data, not mocks. Real data has edge cases mocks don't.
Hour 16: Push Wars
The pull endpoint worked. Now for push—upserting data from the client.
Problem 1: Payload shape mismatch. The client was sending:
{
"items": [...]
}
But the endpoint expected:
{
"costbook_items": [...],
"estimates": [...],
...
}
Problem 2: Missing required fields. The client was sending records without id for singleton tables, causing 400 errors: "Missing required field 'items'".
Problem 3: Conflict resolution. For singleton tables, we need ON CONFLICT (contractor_id), not ON CONFLICT (id).
The fixes:
- Accept both shapes: Updated the endpoint to accept either items (legacy) or table-specific arrays.
- Validate required fields: Added validation that checks each table's requirements:
// For tables with id, validate that id exists
if (TABLES_WITH_ID.has(tableName)) {
if (!("id" in item) || typeof item.id !== "string") {
return NextResponse.json(
{ ok: false, error: "BAD_REQUEST", message: `Item missing required field 'id'` },
{ status: 400 }
);
}
}
- Conflict resolution strategy: Different tables use different conflict keys:
const TABLES_WITH_ID = new Set([
"contractors",
"costbook_items",
"costbook_folders",
"estimates",
"estimate_events",
"pricing_rules",
"step_a_rules",
]);
const SINGLETON_TABLES = new Set([
"step_a_configs",
"pdf_settings",
]);
// Determine conflict resolution
const onConflict = TABLES_WITH_ID.has(tableName) ? "id" : "contractor_id";
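For context, here is a minimal sketch of how those conflict keys feed into the per-table upsert loop. It assumes payloadTables, adminClient, contractorId, and an upserted counter object are already in scope earlier in the handler; the real route also validates each item first and, as the next section shows, dedupes the arrays.
// Inside POST /api/sync/push (sketch): upsert each table with its own conflict key
for (const [tableName, rows] of Object.entries(payloadTables)) {
  if (!Array.isArray(rows) || rows.length === 0) continue;

  const onConflict = TABLES_WITH_ID.has(tableName) ? "id" : "contractor_id";

  // Server-side override: tenant scoping is never taken from the client payload
  const scoped = rows.map((row) => ({ ...row, contractor_id: contractorId }));

  const { error } = await adminClient.from(tableName).upsert(scoped, { onConflict });
  if (error) {
    return NextResponse.json(
      { ok: false, error: "DB_ERROR", message: `${tableName}: ${error.message}` },
      { status: 500 }
    );
  }

  upserted[tableName] = scoped.length;
}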
Hour 17: Postgres ON CONFLICT Duplicate Row Error
After fixing the conflict keys, I hit a new error:
ON CONFLICT DO UPDATE cannot affect row a second time
This happens when the same row appears multiple times in the upsert array. Postgres tries to update the same row twice in a single statement, which it doesn't allow.
The fix: Dedupe the array before upserting, keeping the last occurrence (last-write-wins):
function dedupeByKey<T extends Record<string, unknown>>(
array: T[],
key: keyof T
): { deduped: T[]; originalCount: number } {
const seen = new Map<string | number, T>();
for (const item of array) {
const keyValue = item[key];
if (keyValue !== undefined && keyValue !== null) {
seen.set(keyValue as string | number, item);
}
}
return {
deduped: Array.from(seen.values()),
originalCount: array.length
};
}
// Before upsert
const { deduped, originalCount } = dedupeByKey(tableArray, conflictKey);
if (originalCount !== deduped.length) {
console.log(`[SYNC_PUSH_DEDUPE] ${tableName}: ${originalCount} → ${deduped.length}`);
}
Lesson: Always dedupe client payloads. Clients can send duplicates due to retries, network issues, or bugs.
Hour 18: Push Response Decoding Mismatch
The push endpoint was working, but the client was crashing on the response decode.
The error: DecodingError.keyNotFound(CodingKeys(stringValue: "serverTime", intValue: nil))
The client expected serverTime in the push response, but I was only returning:
{
ok: true,
upserted: { costbook_items: 5, estimates: 2 }
}
The fix: The client had a single SyncResponse model used for both pull and push. I created separate models:
struct SyncPullResponse: Codable {
let ok: Bool
let serverTime: String
let contractors: [Contractor]
let costbook_items: [CostbookItem]
// ... other tables
}
struct SyncPushResponse: Codable {
let ok: Bool
let upserted: [String: Int]
}
Lesson: Separate response models for different endpoints. Don't try to reuse models when the shapes are different.
Final State: Deterministic Sync
After 18 hours, the sync system was working:
- Canary endpoint: Validates auth before any sync operation
- Pull endpoint: Returns all tables filtered by contractor_id, with serverTime for delta sync
- Push endpoint: Upserts data with proper conflict resolution and deduplication
- Security: contractor_id is always derived server-side, never from the client
- Reliability: Service role bypasses RLS, but we enforce tenant scoping ourselves
The client uses lastSyncAt to do delta pulls:
let since = lastSyncAt?.ISO8601String()
let url = URL(string: "https://bestroi.media/api/sync/pull?since=\(since ?? "")")!
Only records updated after lastSyncAt are returned, making syncs fast even with large datasets.
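On the server side, the delta filter is just one extra predicate per table query when the since parameter is present. A minimal sketch, assuming each sync table carries an updated_at column (the snake_case column the client maps to updatedAt):
// Inside GET /api/sync/pull (sketch): apply the optional ?since= delta filter
const since = req.nextUrl.searchParams.get("since"); // ISO 8601 string or null

let query = adminClient
  .from("estimates")
  .select("*")
  .eq("contractor_id", contractorId);

if (since) {
  // Only return rows touched after the client's lastSyncAt
  query = query.gt("updated_at", since);
}

const { data: estimates } = await query;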
How Long Would a Team Take?
A typical team would take 1–2 weeks minimum for the first stable iteration, longer if they're new to Supabase + Catalyst edge cases.
Why so long?
- Coordination overhead: Backend engineer designs API, iOS engineer implements client, QA tests both, DevOps handles deployment. Each handoff adds latency.
- Contract drift: Backend ships API v1, iOS implements against it, backend changes it, iOS breaks. Multiple iterations to align.
- Debugging cycles: Backend sees 500, thinks it's client. Client sees 400, thinks it's backend. Back-and-forth to isolate issues.
- Deployment friction: Preview works, prod doesn't. Environment variables missing. Route handlers not deployed. Each issue takes hours to diagnose.
- Learning curve: If the team is new to Supabase RLS, Catalyst session issues, or Postgres conflict resolution, they'll spend time learning before building.
With AI acceleration: I could iterate faster because I could:
- Generate boilerplate quickly
- Ask "what's wrong with this error?" and get targeted answers
- Search codebases and documentation faster
- Write tests and validation logic quickly
But the real value wasn't AI writing code—it was AI helping me understand systems faster. When I hit "ON CONFLICT cannot affect row a second time," AI helped me understand Postgres conflict resolution. When I hit RLS empty arrays, AI helped me understand the difference between anon key and service role key.
The 18 hours wasn't just coding—it was systems thinking, debugging methodology, and understanding contracts.
Key Takeaways: Save Yourself 18 Hours
1. Don't Trust SDK Sessions on Catalyst
Mac Catalyst has different session persistence than iOS. If sessions are disappearing, don't debug the SDK—route through your own backend.
Test: Call a canary endpoint on app launch. If it returns 401, the session is broken.
curl -X GET https://bestroi.media/api/sync/canary \
-H "Authorization: Bearer $TOKEN" \
-H "Cache-Control: no-cache"
2. Treat Sync Like a Contract
The schema is the contract. Both client and server must agree on:
- Table names
- Column names (snake_case vs camelCase)
- Data types
- Required fields
- Conflict resolution keys
Document it: Write down the exact shape of each table, each endpoint's request/response, and the conflict resolution strategy.
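As a concrete illustration of what that documentation can look like, here is one table's row contract as a TypeScript type. Only id, contractor_id, and updated_at come from this article; the remaining fields are placeholders to replace with your real columns.
// Documented contract for one sync table (fields beyond id/contractor_id/updated_at are placeholders)
export interface CostbookItemRow {
  id: string;            // uuid; conflict key is ON CONFLICT (id)
  contractor_id: string; // tenant scope; always set server-side
  updated_at: string;    // ISO 8601; drives delta pulls via ?since=
  // ...remaining columns, in snake_case to match the database...
}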
3. Tenant Scoping Belongs Server-Side
Never accept user_id or contractor_id from the client. Always derive it server-side from the authenticated user.
Pattern:
// ❌ BAD: Client sends contractor_id
const { contractor_id } = await req.json();
// ✅ GOOD: Server derives contractor_id
const userId = authResult.user.id;
const { contractorId } = await requireContractorId(userId);
4. Singleton Tables Need Special Handling
Tables like pdf_settings and step_a_configs have one row per contractor, keyed by contractor_id, not id.
Database constraint:
ALTER TABLE pdf_settings
ADD CONSTRAINT pdf_settings_contractor_id_unique
UNIQUE (contractor_id);
ALTER TABLE step_a_configs
ADD CONSTRAINT step_a_configs_contractor_id_unique
UNIQUE (contractor_id);
Upsert with contractor_id conflict:
await adminClient
.from("pdf_settings")
.upsert(items, { onConflict: "contractor_id" });
5. Don't Retry 400s; Retry Only 5xx/Timeouts
400 errors are client errors—invalid payload, missing fields, bad data. Retrying won't help.
500 errors are server errors—database failures, timeouts, transient issues. Retry these.
Client retry logic:
if response.statusCode >= 500 || isTimeout {
// Retry with exponential backoff
} else if response.statusCode == 400 {
// Don't retry - fix the payload
}
6. Dedupe Payloads Before Upsert
Clients can send duplicate records due to retries, network issues, or bugs. Postgres will error if the same row appears twice in an upsert.
Server-side dedupe:
const { deduped } = dedupeByKey(items, "id");
await adminClient.from("table").upsert(deduped, { onConflict: "id" });
Client-side dedupe (before sending):
let uniqueItems = Array(Set(items)) // If items are Hashable
// Or use a dictionary keyed by id
7. Separate PullResponse vs PushResponse Models
Don't try to reuse a single model for different endpoint responses. They have different shapes.
Swift models:
struct SyncPullResponse: Codable {
let ok: Bool
let serverTime: String
let costbook_items: [CostbookItem]
// ... tables
}
struct SyncPushResponse: Codable {
let ok: Bool
let upserted: [String: Int]
}
8. Add Verbose Debug Logs in Dev Builds
When debugging sync, you need to see:
- Request status code
- Response headers
- Response body size
- Response body preview (first 500 chars)
Swift debug logging:
#if DEBUG
print("Sync Response: \(response.statusCode)")
print("Headers: \(response.allHeaderFields)")
if let data = response.data {
print("Body size: \(data.count) bytes")
if let preview = String(data: data.prefix(500), encoding: .utf8) {
print("Body preview: \(preview)")
}
}
#endif
The Sync Playbook
A repeatable checklist for building sync systems:
Phase 1: Design the Contract
- [ ] Document all tables that need syncing
- [ ] Document table names, column names, data types
- [ ] Document conflict resolution strategy (id vs contractor_id)
- [ ] Document required fields for each table
- [ ] Create TypeScript types for server, Swift types for client
Phase 2: Build the Canary Endpoint
- [ ] Create GET /api/sync/canary that validates auth
- [ ] Returns { ok: true, userId: "..." } on success
- [ ] Returns 401 on invalid token
- [ ] Test with curl: curl -H "Authorization: Bearer $TOKEN" /api/sync/canary
Phase 3: Build the Pull Endpoint
- [ ] Create GET /api/sync/pull that queries all tables
- [ ] Authenticate user, derive contractor_id server-side
- [ ] Filter all queries by contractor_id
- [ ] Return { ok: true, serverTime: "...", tables: {...} }
- [ ] Support ?since=ISO8601 for delta pulls
- [ ] Test with curl, verify all tables return data
Phase 4: Build the Push Endpoint
- [ ] Create POST /api/sync/push that upserts data
- [ ] Accept table-specific arrays: { costbook_items: [...], estimates: [...] }
- [ ] Validate required fields (id for regular tables, contractor_id for singletons)
- [ ] Dedupe arrays before upsert (last-write-wins)
- [ ] Apply server-side overrides: strip client contractor_id/user_id, set server values
- [ ] Use correct conflict resolution: onConflict: "id" or onConflict: "contractor_id"
- [ ] Return { ok: true, upserted: { table: count } }
- [ ] Test with curl, verify upserts work
Phase 5: Client Implementation
- [ ] Create separate SyncPullResponse and SyncPushResponse models
- [ ] Implement canary check on app launch
- [ ] Implement pull with lastSyncAt for delta sync
- [ ] Implement push with retry logic (only retry 5xx/timeouts)
- [ ] Add verbose debug logging in dev builds
- [ ] Test on both iPad and Mac Catalyst
Phase 6: Deployment
- [ ] Set environment variables in Vercel (SUPABASE_URL, SUPABASE_ANON_KEY, SUPABASE_SERVICE_ROLE_KEY)
- [ ] Set for production, preview, and development
- [ ] Test canary endpoint in production: curl https://your-app.vercel.app/api/sync/canary -H "Authorization: Bearer $TOKEN"
- [ ] Verify routes are deployed (check Vercel function logs)
Phase 7: Database Constraints
- [ ] Add unique constraints for singleton tables:
ALTER TABLE pdf_settings ADD CONSTRAINT pdf_settings_contractor_id_unique UNIQUE (contractor_id);
ALTER TABLE step_a_configs ADD CONSTRAINT step_a_configs_contractor_id_unique UNIQUE (contractor_id);
Phase 8: Error Handling
- [ ] Server: Return consistent error format { ok: false, error: "CODE", message: "..." }
- [ ] Client: Handle 400 (don't retry), 401 (re-auth), 403 (no access), 5xx (retry)
- [ ] Log errors server-side (but never log tokens/secrets)
- [ ] Add error monitoring (Sentry, etc.)
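A small helper keeps that error envelope identical across all three routes. This is one way to do it; the function name and error codes below are illustrative rather than taken from the article's codebase.
// lib/sync/errors.ts (illustrative)
import { NextResponse } from "next/server";

type SyncErrorCode = "BAD_REQUEST" | "UNAUTHORIZED" | "FORBIDDEN" | "DB_ERROR";

// Every failure path returns the same { ok: false, error, message } envelope
export function syncError(code: SyncErrorCode, message: string, status: number) {
  return NextResponse.json({ ok: false, error: code, message }, { status });
}

// Usage (sketch):
//   return syncError("BAD_REQUEST", "Item missing required field 'id'", 400);
//   return syncError("UNAUTHORIZED", "Invalid or expired token", 401);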
Phase 9: Testing
- [ ] Test with empty database (should return empty arrays, not errors)
- [ ] Test with large datasets (1000+ items per table)
- [ ] Test delta pulls (only return items updated since lastSyncAt)
- [ ] Test push with duplicates (should dedupe server-side)
- [ ] Test push with missing required fields (should return 400)
- [ ] Test on both iPad and Mac Catalyst
- [ ] Test with network interruptions (should retry 5xx, not 400)
Phase 10: Monitoring
- [ ] Add metrics: sync frequency, payload sizes, error rates
- [ ] Monitor for duplicate upserts (log when dedupe happens)
- [ ] Monitor for 401s (session issues)
- [ ] Monitor for 400s (client bugs, schema drift)
The Real Value: Systems Thinking
This wasn't just about writing code. It was about:
- Understanding contracts: The schema is the contract between client and server
- Security boundaries: Never trust client input; always derive tenant scoping server-side
- Debugging methodology: Isolate the problem, prove the hypothesis, fix it, verify
- Deployment reality: Preview and production are different; test both
- Error handling: Different errors need different responses (400 vs 5xx)
AI accelerated execution, but the value was in understanding systems. When I hit "ON CONFLICT cannot affect row a second time," I didn't just fix it—I understood why it happened and how to prevent it. When I hit RLS empty arrays, I didn't just work around it—I understood the security model and built a better solution.
That's the real story: not "AI wrote the code," but "AI accelerated the debugging."
Before vs After: A Log Comparison
Before (failing):
[ERROR] Sync pull failed: DecodingError.keyNotFound(CodingKeys(stringValue: "serverTime", intValue: nil))
[ERROR] Sync push failed: 400 Bad Request - Missing required field 'items'
[ERROR] Database error: ON CONFLICT DO UPDATE cannot affect row a second time
After (working):
[INFO] Sync canary: ok=true, userId=abc123
[INFO] Sync pull: ok=true, serverTime=2025-01-15T12:00:00.000Z, costbook_items=45, estimates=12
[INFO] Sync push: ok=true, upserted={costbook_items: 5, estimates: 2}
[INFO] Sync push dedupe: costbook_items 7 → 5
Clean, deterministic, debuggable.
The Tables We Ended Up Syncing
The final sync system handles these tables:
- contractors — One per contractor (singleton, but uses id as conflict key)
- costbook_items — Items in the costbook (many per contractor)
- costbook_folders — Folders organizing costbook items
- estimates — Estimates/quotes created by contractors
- estimate_events — Audit trail for estimate changes
- pdf_settings — PDF generation settings (singleton, one per contractor)
- pricing_rules — Pricing rules for estimates
- step_a_configs — Step A configuration (singleton, one per contractor)
- step_a_rules — Step A rules
Each table has different conflict resolution:
- Regular tables: ON CONFLICT (id)
- Singleton tables: ON CONFLICT (contractor_id)
Conclusion
Building a production-grade sync system in 18 hours isn't about writing code fast. It's about:
- Understanding the problem: Why is direct Supabase sync failing on Catalyst?
- Designing the solution: Route through our own backend for control
- Building incrementally: Canary → Pull → Push, one endpoint at a time
- Debugging systematically: Isolate, prove, fix, verify
- Learning from errors: Each error teaches something about the system
The sync system is now production-ready, handling iPad and Mac Catalyst reliably, with proper security, error handling, and monitoring.
If you're building a contractor app or any multi-tenant SaaS and you're stuck in sync hell, we can help. Best Estimator uses this sync system in production, and Best ROI Media can help you build similar systems for your apps.
Reach out if you're wrestling with Supabase RLS, Catalyst session issues, or cross-platform sync. Sometimes the most valuable thing is a conversation with someone who's been there.
Why We Write About This
We build software for people who rely on it to do real work. Sharing how we think about stability, judgment, and systems is part of building that trust.