
Ardent Seller system status

Live health for the public web application and database, plus the static status of every dependency we run on: auth, email, billing, marketplaces, and backups.

The summary below refreshes the moment you load this page and again every 60 seconds. Historical incidents and the platforms we depend on are listed further down. For the full security posture behind these systems, see the security page.

Live probe runs against /api/v1/status

Checking status…

Running a live health check against our infrastructure.

Last check: Waiting…

Live components

These checks run against our own infrastructure on every page load. A green dot means the component responded within its normal latency budget.

  • Web application

    Waiting for live health check…

    Checking
  • Database (PostgreSQL)

    Waiting for live health check…

    Checking

Dependencies we run on

We don't probe third-party APIs on every page load, but here's the canonical list of platforms the app depends on, who owns each one, and where to check their own status pages.

How we run reliability

A short, honest read on the controls behind the green dots above.

Latency budgets, not vibes

Each live check has a fixed response-time budget — database checks degrade above 750 ms and time out at 2.5 seconds. The probe never blocks the page for more than a couple of seconds, even when an underlying dependency stalls.
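
For readers who want to see the shape of that, here is a minimal sketch of a budgeted check in TypeScript. It is illustrative only; the queryDatabase() helper and the exact classification logic are assumptions, not our production code.

```typescript
// Illustrative sketch only, not production code. Assumes a generic
// queryDatabase() helper that resolves once a trivial query returns.
type ProbeResult = "healthy" | "degraded" | "down";

const DEGRADED_AFTER_MS = 750;   // degrade above 750 ms
const TIMEOUT_AFTER_MS = 2_500;  // hard cut-off at 2.5 seconds

async function probeDatabase(
  queryDatabase: () => Promise<void>
): Promise<ProbeResult> {
  const started = Date.now();
  const timeout = new Promise<"timeout">((resolve) =>
    setTimeout(() => resolve("timeout"), TIMEOUT_AFTER_MS)
  );

  try {
    const outcome = await Promise.race([
      queryDatabase().then(() => "ok" as const),
      timeout,
    ]);
    if (outcome === "timeout") return "down";
    return Date.now() - started > DEGRADED_AFTER_MS ? "degraded" : "healthy";
  } catch {
    return "down"; // the query itself failed
  }
}
```

The budget is enforced by racing the query against a timer, so a stalled dependency marks itself degraded or down instead of holding the page hostage.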

How we notice before you do

Unhandled errors land in Sentry with stack traces. AWS CloudWatch alarms cover the marketplace sync, inactivity evaluator, and backup jobs, and an SNS topic pages on-call when an alarm fires. Most incidents get a card posted here before the first customer report arrives.
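
As a rough sketch of that alarm-to-pager wiring in AWS CDK (the resource names, metric, and threshold below are placeholders, not our real templates):

```typescript
// Illustrative AWS CDK sketch; names, metric, and threshold are placeholders.
import * as cdk from "aws-cdk-lib";
import * as cloudwatch from "aws-cdk-lib/aws-cloudwatch";
import * as cw_actions from "aws-cdk-lib/aws-cloudwatch-actions";
import * as sns from "aws-cdk-lib/aws-sns";

export class AlertingStack extends cdk.Stack {
  constructor(scope: cdk.App, id: string) {
    super(scope, id);

    // On-call subscribers (email, pager integration) hang off this topic.
    const onCall = new sns.Topic(this, "OnCallTopic");

    // Example: alarm whenever the backup job reports a failed invocation.
    const failures = new cloudwatch.Metric({
      namespace: "AWS/Lambda",
      metricName: "Errors",
      dimensionsMap: { FunctionName: "backup-job" },
      statistic: "Sum",
      period: cdk.Duration.minutes(5),
    });

    const alarm = new cloudwatch.Alarm(this, "BackupJobFailed", {
      metric: failures,
      threshold: 1,
      evaluationPeriods: 1,
      treatMissingData: cloudwatch.TreatMissingData.NOT_BREACHING,
    });

    // When the alarm fires, the SNS topic pages on-call.
    alarm.addAlarmAction(new cw_actions.SnsAction(onCall));
  }
}
```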

Background jobs with dead letters

Marketplace syncs and other long-running jobs run on FIFO queues with dead-letter queues, capped retries, and reserved concurrency. A noisy or failing job cannot starve the rest of the system, and a poisoned message is parked for inspection rather than retried forever.
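
In AWS CDK terms, the shape of that setup is roughly the following. Queue names, retry caps, and concurrency values here are placeholders, not our production configuration.

```typescript
// Illustrative AWS CDK sketch; queue names, retry counts, and concurrency
// values are placeholders.
import * as cdk from "aws-cdk-lib";
import * as sqs from "aws-cdk-lib/aws-sqs";
import * as lambda from "aws-cdk-lib/aws-lambda";
import { SqsEventSource } from "aws-cdk-lib/aws-lambda-event-sources";

export class SyncQueueStack extends cdk.Stack {
  constructor(scope: cdk.App, id: string) {
    super(scope, id);

    // Poisoned messages are parked here for inspection, not retried forever.
    const deadLetters = new sqs.Queue(this, "SyncDeadLetters", {
      fifo: true,
      retentionPeriod: cdk.Duration.days(14),
    });

    // FIFO queue with capped retries: after 3 failed receives a message
    // moves to the dead-letter queue.
    const syncQueue = new sqs.Queue(this, "MarketplaceSync", {
      fifo: true,
      contentBasedDeduplication: true,
      visibilityTimeout: cdk.Duration.minutes(5),
      deadLetterQueue: { queue: deadLetters, maxReceiveCount: 3 },
    });

    // Reserved concurrency keeps one noisy job from starving everything else.
    const worker = new lambda.Function(this, "SyncWorker", {
      runtime: lambda.Runtime.NODEJS_20_X,
      handler: "index.handler",
      code: lambda.Code.fromAsset("dist/sync-worker"),
      reservedConcurrentExecutions: 2,
    });

    worker.addEventSource(new SqsEventSource(syncQueue, { batchSize: 1 }));
  }
}
```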

Recovery posture

If the primary database is unavailable, Supabase's daily backup is the fast-path restore. If that is also affected, our independent weekly snapshot in a customer-managed AWS bucket (us-east-2, with S3 Object Lock) is the disaster-grade backup. Backup-storage specifics are documented on the security page.

Data-handling controls — encryption, multi-tenant isolation, audit trail, rate limiting, telemetry scrubbing — are documented on the security page.

Incident history

Every customer-facing incident since launch is recorded here. If the list is short, that's the point.

No reported incidents

We haven't recorded a customer-facing incident on Ardent Seller yet. New incidents are posted here within an hour of detection.

Stay informed

Three ways to know if something is wrong before you have to refresh this page.

This page

Refreshes every 60 seconds and lights up the moment a live probe fails. Bookmark /status if your team monitors uptime manually.

In-app notifications

If an incident affects your account — sync failures, billing problems, marketplace disconnects — we send an in-app and email alert through the same lifecycle pipeline that handles the rest of your notifications.

Talk to a human

If something is wrong and this page hasn't caught it yet, the contact form is the fastest way to reach us. Every report is read and we'll reply with a real status update.

Frequently asked questions

Short, honest answers about how this page works.

How "real-time" is this page?

When you load this page, the browser probes our /api/v1/status endpoint, which runs a live database query inside our infrastructure. The page also refreshes the probe every 60 seconds while you have it open. There is no edge cache in front of the probe — every check is fresh.
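
A simplified sketch of that browser-side loop, with an assumed response shape rather than the real /api/v1/status contract:

```typescript
// Illustrative only; the response fields here are assumptions, not the
// actual /api/v1/status contract.
type StatusResponse = { web: string; database: string };

async function refreshStatus(): Promise<void> {
  // cache: "no-store" keeps every probe fresh; nothing is served from cache.
  const res = await fetch("/api/v1/status", { cache: "no-store" });
  const status = (await res.json()) as StatusResponse;
  console.log("web:", status.web, "database:", status.database);
}

void refreshStatus();               // probe immediately on load
setInterval(refreshStatus, 60_000); // and again every 60 seconds
```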

Why are auth, email, and billing listed without a live status indicator?

We don't probe third-party APIs synchronously on every page view. Doing so would burn vendor rate limits, push secrets into code paths that anonymous visitors can trigger, and slow the page down for everyone. Those components surface incidents through their own status pages (linked on their cards) and through in-app notifications when an issue affects your account.

What counts as an "incident"?

Any customer-facing event longer than a few minutes that prevents a normal workflow — sign-in, inventory edits, sales sync, checkout, exports. Brief blips that auto-recover are not posted because logging them as incidents would dilute the signal.

Do you publish uptime percentages?

Not yet, because the honest answer requires a longer measurement window than we have. Once we have a year of probe history at 60-second granularity, a rolling 30-day and 90-day uptime number will appear in this section. Until then, the incident history above is the source of truth.

I think something is broken — what's the fastest way to tell you?

Use the contact form. Every report is read by a human. If multiple reports arrive on the same issue, we'll post an incident card to this page within an hour of confirming the impact.

Seeing something odd?

Something broken that isn't on this page?

If this page says everything is fine but your account is misbehaving, that's the kind of report we want to read first. Tell us what you tried and what happened — we'll get back to you as quickly as we can.