TikTok Ads Preflight

User: Brand advertiser / Account manager
Problem: ~35% of ads hit review issues on first submit; 47% of community complaints point to "rejected without knowing why"
Proposal: Surface TikTok's existing review logic as pre-submit risk hints
Goal: Reduce avoidable rejections (format / URL / keyword-related)

Product Walkthrough

I walked through TikTok Ads Manager's Simplified Mode: Objective → Ad Content → Targeting & Budget → Publish. Four steps, clean experience.

But there was zero compliance feedback throughout the flow. The interface showed estimated clicks, yet when filling in URLs and copy, there was no way for advertisers to check whether anything might trigger a review. Until hitting 'Publish,' the advertiser has no idea what the review outcome will be.

I also tried Google Ads and Meta Ads. None of the three platforms offers a pre-submit compliance check: Google's yellow prompt is form validation (required fields), and Meta's Creative Hub is a visual preview; neither involves policy checking.

Community Research

I collected and analyzed 32 posts and comments related to rejections and bans on r/TikTokAds, categorized by pain point:

Category | Count | %
Rejection/ban reason unclear | 15 | 47%
URL / landing page issues | 6 | 19%
Industry classification errors | 4 | 13%
Support unhelpful | 4 | 13%
Other | 3 | 9%

"TikTok banned my account, said I violated some policy, but won't tell me which one." (r/TikTokAds)

"I've never run an ad on TikTok. Brand new account… I don't even know how it violated that many policies." (r/TikTokAds)

"A week later it just stopped running. TikTok said it was rejected for 'non-compliant URL.'" (r/TikTokAds)

Core Insight

TikTok's review system is thorough: a three-layer architecture (AI auto-screening → regional human review → expert content committee) covers multi-dimensional content matching. But there is a wall between advertisers and the review system: which inputs trigger review? Which fields are high-risk? There is zero feedback before submit. The review capability is there; what's missing is transparency.

Current: Create → Submit → Review → Result
Improved: Create → Readiness Check → Submit → Review → Result

I compared the pre- and post-submit mechanisms across three major platforms:

 | Google Ads | Meta Ads | TikTok Ads
Pre-submit check | Ad Strength score (performance optimization, not compliance) + URL yellow warning | Creative Hub preview tool (format preview, not compliance) | None
Post-rejection feedback | Policy Manager + Policy details column + Email (3 paths) | Field-level reason + Edit button | "Disapproved" label → View more (1 path)
AI support | Structured 3-step troubleshooting + proactively asks for URL | Creative Hub + third-party tools | Links to help docs
Guideline access | Embedded in creation flow | Separate help docs | Separate help center page

None of the three platforms offers a pre-submit compliance check: Google's Ad Strength is a performance score and Meta's Creative Hub is a visual preview; neither involves policy validation. TikTok has no pre-submit mechanism and the fewest post-rejection feedback paths (only one), which is exactly where there is room to improve.

Third-party tools are already filling the gap—AdAmigo.ai and Madgicx provide pre-submit compliance checks for Meta, proving the demand is real.

* Industry data: Google blocked 5.1B ads / suspended 39.2M accounts in 2024 (Google Ads Safety Report); Meta removed 130M fraudulent ads in 2024

After confirming the industry gap, the next step was choosing an entry point. I considered three directions:

Option A: Pre-submit check
Scan and flag risks before submission.
  • Pro: catches problems early, reduces wasted submissions
  • Con: requires maintaining a rule library; false positive risk

Option B: Better rejection feedback
Improve post-rejection feedback quality.
  • Pro: directly addresses current user pain
  • Con: doesn't reduce rejection count itself

Option C: Real-time input guidance
Real-time hints during the ad creation process.
  • Pro: seamless experience
  • Con: highest engineering effort; requires modifying the ad creation flow

Why Option A?

Option A is the only one that proactively reduces wasted submissions, and it can be implemented independently on the frontend without modifying the review system. Option B matters but only treats symptoms, so it's saved for Phase 2. Option C is ideal but too heavy for an MVP.

Why doesn't the industry do pre-submit checks?

  • Business incentives — any friction could reduce ad submission rates
  • Technical limits — many reviews require human judgment (video content, brand authorization), AI accuracy isn't sufficient
  • Liability concerns — if the pre-check gives a false positive, advertisers blame the platform

So what's the point?

This isn't "AI review"; it's embedding the five review best practices TikTok's Help Center already publishes into the ad creation flow. It doesn't replace human review and only covers machine-checkable issues. It's skippable, so it won't hurt submission rates. Core logic: TikTok's AI already has multi-dimensional checking capabilities, but advertisers can't see them. We're surfacing review transparency, not replacing review.

MVP Scope

  • Advisory hints, not blocking ("Submit anyway" available)
  • Only covers auto-detectable rules (URL reachability, copy keywords, creative specs)
  • False positive strategy: err on the side of missing issues (under-report) rather than false alarms (over-report)
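The scope above implies a simple shape for each hint. A minimal sketch in Python; all field names here are illustrative assumptions, not TikTok's actual API:

```python
from dataclasses import dataclass

@dataclass
class ReadinessHint:
    """One advisory pre-submit hint. Field names are illustrative only."""
    rule: str               # e.g. "absolute_claims"
    severity: str           # "HIGH" | "MED" | "LOW"
    reason: str             # why this was flagged
    suggestion: str         # how to fix it
    field: str              # which creation-flow field to jump to
    blocking: bool = False  # advisory only: "Submit anyway" stays available
```

Defaulting `blocking` to `False` encodes the first scope rule in the data model itself: no rule can silently become a hard gate.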
MVP: Readiness Check
Surface risks before submit.
  • 1. Detect: identify potential issues
  • 2. Explain: reason & suggestion
  • 3. Jump: one-click to fix

Phase 2: Actionable Rejections
Field-level reason + fix deep link.

Phase 3: Change Impact Prediction
Which edits trigger re-review + estimated time.

Rule Source

TikTok has 21 independent ad policy categories (e.g. healthcare, finance, gambling), most requiring human judgment. I selected 5 rules from the Help Center that can be automated:

Rule | Source | Method | Severity
Landing page consistency | Ad Creatives & Landing Page | URL reachability check + headline keyword matching | HIGH
Language match | Ad Format & Functionality | Copy language detection vs. target region | HIGH
Absolute/misleading claims | Prohibited Content | Keyword matching ("guaranteed," "#1," "best") | MED
Unauthorized brand usage | Restricted Content | Brand name keyword scan | MED
Creative spec violations | Ad Format & Functionality | File metadata check (duration/resolution/text density) | LOW

Only 5 of 21 policy categories are automatable, i.e. ~24% coverage. Source: TikTok Help Center; the pages note "This is not a comprehensive list."
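Two of the five rules can be sketched in a few lines. This is a hedged illustration, not TikTok's detection logic: the keyword list is a placeholder, and a real reachability check would follow the structural check with an HTTP probe.

```python
import re
from urllib.parse import urlparse

# Hypothetical keyword list for the "absolute/misleading claims" rule (MED).
ABSOLUTE_CLAIM_TERMS = {"guaranteed", "#1", "best"}

def check_absolute_claims(copy_text: str) -> list[str]:
    """Return any absolute-claim keywords found in the ad copy."""
    tokens = set(re.findall(r"[#\w]+", copy_text.lower()))
    return sorted(tokens & ABSOLUTE_CLAIM_TERMS)

def url_is_well_formed(url: str) -> bool:
    """Structural check that would precede a real reachability probe (HTTP HEAD)."""
    parsed = urlparse(url)
    return parsed.scheme in ("http", "https") and bool(parsed.netloc)
```

Keeping each rule a pure function keeps false-positive tuning cheap: a rule can be disabled or its keyword list adjusted without touching the flow.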

When does it trigger?

Per TikTok's official documentation, modifying the following fields triggers re-review. The readiness check runs automatically after these fields are edited:

Triggers review
  • Video/Image
  • Display name
  • URL / Deep link
  • Avatar
  • Targeting (age/region/language)
  • Tracking parameters
No review
  • Budget & schedule
  • Bid & optimization
  • Call-to-action button
  • Ad group/ad name
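The trigger logic above reduces to a set intersection. A minimal sketch, assuming illustrative field identifiers (not TikTok API names):

```python
# Fields whose edits trigger re-review, per the documentation summarized above.
REVIEW_TRIGGER_FIELDS = {
    "video", "image", "display_name", "url", "deep_link",
    "avatar", "targeting", "tracking_parameters",
}

def should_run_readiness_check(edited_fields: set[str]) -> bool:
    """Run the check only when at least one edited field can trigger re-review."""
    return bool(edited_fields & REVIEW_TRIGGER_FIELDS)
```

So editing budget or bid alone never fires the check, keeping it out of the way for the most frequent edits.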

Baseline Estimates

TikTok doesn't publish ad review approval rates (their transparency report only covers community content and government requests). The following data is estimated from third-party sources and community research:

Metric | Value | Source
First-submit review issue rate | ~35% | oreateai.com / crayo.ai
Rate needing revision to pass | ~15% | oreateai.com
Review time | Usually within 24h | TikTok Help Center

Target

Of the 35% with review issues, ~40% (≈14%) are avoidable format/URL/keyword issues. Goal: reduce this 14% to <5%.
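The target arithmetic, spelled out (both input rates are the estimates above, not published figures):

```python
first_submit_issue_rate = 0.35  # ~35% hit review issues on first submit
avoidable_share = 0.40          # ~40% of those are format/URL/keyword issues
avoidable_rate = first_submit_issue_rate * avoidable_share  # ≈ 14% of all ads
target_rate = 0.05              # goal: push avoidable issues below 5%
```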

Guardrails: false positive rate < 5% · flow completion rate

False positive = pre-check flags an issue but the ad actually passes review. False positives erode user trust in the tool.
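One way to operationalize that guardrail, assuming we can join pre-check flags with eventual review outcomes (a hypothetical measurement sketch; here FP rate = share of flagged ads that passed review anyway):

```python
def false_positive_rate(outcomes: list[tuple[bool, bool]]) -> float:
    """outcomes: one (was_flagged, passed_review) pair per ad.
    Returns the share of flagged ads that passed review anyway."""
    flagged = [passed for was_flagged, passed in outcomes if was_flagged]
    return sum(flagged) / len(flagged) if flagged else 0.0
```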

Validation Plan

S: Cognitive walkthrough + 3-person task test
M: 5-8 person usability test + fix-time tracking
L: A/B test (new account cohort)
Phase | Scope | Success criteria | Dependency
MVP | Rule-matching pre-check (5 best practices automated) | False positive rate < 5%, adoption rate > 30% | Frontend-only implementation
Phase 2 | Rejection feedback optimization (field-level + fix deep links) | Post-rejection fix time -40% | Review system API
Phase 3 | Change impact prediction (which edits trigger re-review) | Re-review trigger transparency | ML model

Three Trade-offs

  • Downgraded from "AI-powered review" to "rule matching." After research, I realized the industry avoids pre-checks for good reason: many reviews require human judgment, and AI accuracy isn't there yet. The downgrade itself is a product decision.
  • Changed from 'mandatory blocking' to 'skippable.' False positives destroy trust; an immature tool shouldn't stop users from spending money.
  • Narrowed from 'full coverage' to 'MVP with only 5 rules.' Most of the 21 policy categories require human judgment. Do what's feasible first, then use data to prove expansion is worth it.

Honest Limitations

None of the three major platforms have pre-submit compliance checks. After research, I understand why. But from another angle—embedding the Help Center's existing review suggestions into the ad creation flow costs almost nothing and doesn't require building new capabilities. It's just showing information one step earlier.

If I Could Start Over

Start with 5 advertiser interviews to validate whether 'knowing risks before submit' is actually wanted, then decide what to build. The biggest gap in this case study is the lack of primary user validation—all pain points come from secondhand forum data.

The hardest part of this project wasn't design—it was making trade-offs.

