TikTok Ads Preflight
Research
Product Walkthrough
I walked through TikTok Ads Manager's Simplified Mode: Objective → Ad Content → Targeting & Budget → Publish. Four steps, clean experience.
But there was zero compliance feedback throughout the flow. The interface showed estimated clicks, yet as I filled in URLs and copy there was no way to check whether anything might fail review. Until hitting 'Publish,' the advertiser has no idea what the review outcome will be.
I also tried Google Ads and Meta Ads. None of the three platforms offer pre-submit compliance checks—Google's yellow prompt is form validation (required fields), Meta's Creative Hub is visual preview—neither involves policy checking.
Community Research
I collected and analyzed 32 posts and comments related to rejections and bans on r/TikTokAds, categorized by pain point:
| Category | Count | % |
|---|---|---|
| Rejection/ban reason unclear | 15 | 47% |
| URL / landing page issues | 6 | 19% |
| Industry classification errors | 4 | 13% |
| Support unhelpful | 4 | 13% |
| Other | 3 | 9% |
“TikTok banned my account, said I violated some policy, but won't tell me which one.”
— r/TikTokAds
“I've never run an ad on TikTok. Brand new account… I don't even know how it violated that many policies.”
— r/TikTokAds
“A week later it just stopped running. TikTok said it was rejected for 'non-compliant URL.'”
— r/TikTokAds
Core Insight
TikTok's review system is thorough—a three-layer architecture (AI auto-screening → regional human review → expert content committee) covers multi-dimensional content matching. But there's a wall between advertisers and the review system: what inputs trigger review? Which fields are high-risk? Zero feedback before submit. The review capability is there; what's missing is transparency.
Flow
Competitive Analysis
I compared the pre- and post-submit mechanisms across three major platforms:
| | Google Ads | Meta Ads | TikTok Ads |
|---|---|---|---|
| Pre-submit check | Ad Strength score (performance optimization, not compliance) + URL yellow warning | Creative Hub preview tool (format preview, not compliance) | None |
| Post-rejection feedback | Policy Manager + Policy details column + Email (3 paths) | Field-level reason + Edit button | "Disapproved" label → View more (1 path) |
| AI support | Structured 3-step troubleshooting + proactively asks for URL | Creative Hub + third-party tools | Links to help docs |
| Guideline access | Embedded in creation flow | Separate help docs | Separate help center page |
None of the three platforms offers a pre-submit compliance check: Google's Ad Strength is a performance score and Meta's Creative Hub is a visual preview; neither involves policy validation. TikTok has no pre-submit mechanism and only one post-rejection feedback path, which is exactly where there's room to improve.
Third-party tools are already filling the gap—AdAmigo.ai and Madgicx provide pre-submit compliance checks for Meta, proving the demand is real.
* Industry data: Google blocked 5.1B ads / suspended 39.2M accounts in 2024 (Google Ads Safety Report); Meta removed 130M fraudulent ads in 2024
Trade-offs
After confirming the industry gap, the next step was choosing an entry point. I considered three directions:
Why Option A?
Option A is the only one that proactively reduces wasted submissions, and can be implemented on the frontend independently without modifying the review system. Option B matters but only treats symptoms—saved for Phase 2. Option C is ideal but too heavy for an MVP.
Design
Why doesn't the industry do pre-submit checks?
- Business incentives — any friction could reduce ad submission rates
- Technical limits — many reviews require human judgment (video content, brand authorization), AI accuracy isn't sufficient
- Liability concerns — if the pre-check gives a false positive, advertisers blame the platform
So what's the point?
This isn't 'AI review'—it's 'embedding TikTok Help Center's existing 5 review best practices into the ad creation flow.' It doesn't replace human review, only covers machine-checkable issues. It's skippable, so it won't hurt submission rates. Core logic: TikTok's AI already has multi-dimensional checking capabilities, but advertisers can't see them. We're surfacing review transparency, not replacing review.
MVP Scope
- Advisory hints, not blocking ("Submit anyway" available)
- Only covers auto-detectable rules (URL reachability, copy keywords, creative specs)
- False positive strategy: err on the side of missing issues (under-report) rather than false alarms (over-report)
Readiness Check
Surface risks before submit
1. Detect → Identify potential issues
2. Explain → Reason & suggestion
3. Jump → One-click to fix
Actionable Rejections
Field-level reason + fix deep link
Change Impact Prediction
Which edits trigger re-review + estimated time
Rule Source
TikTok has 21 independent ad policy categories (e.g. healthcare, finance, gambling), most requiring human judgment. I selected 5 rules from the Help Center that can be automated:
| Rule | Source | Method | Sev. |
|---|---|---|---|
| Landing page consistency | Ad Creatives & Landing Page | URL reachability check + headline keyword matching | HIGH |
| Language match | Ad Format & Functionality | Copy language detection vs. target region | HIGH |
| Absolute/misleading claims | Prohibited Content | Keyword matching (guaranteed, #1, best) | MED |
| Unauthorized brand usage | Restricted Content | Brand name keyword scan | MED |
| Creative spec violations | Ad Format & Functionality | File metadata check (duration/resolution/text density) | LOW |
Only 5 of 21 policy categories are automatable, roughly 24% coverage. Source: TikTok Help Center; the policy pages themselves note 'This is not a comprehensive list.'
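Two of the rules above can be sketched in a few lines. This is a minimal illustration, not TikTok's implementation: the keyword list, function names, and thresholds are assumptions for this sketch.

```python
import re
from urllib.request import Request, urlopen

# Illustrative keyword list for the absolute/misleading-claims rule (MED).
ABSOLUTE_CLAIMS = ["guaranteed", "#1", "best", "100% effective"]

def check_absolute_claims(copy: str) -> list[str]:
    """Flag absolute-claim keywords in ad copy.

    Conservative by design: exact substring matches only, so the check
    under-reports rather than over-reports (the MVP false-positive strategy).
    """
    lowered = copy.lower()
    return [kw for kw in ABSOLUTE_CLAIMS if kw in lowered]

def check_url_reachable(url: str, timeout: float = 5.0) -> bool:
    """Landing-page reachability probe for the consistency rule (HIGH)."""
    try:
        req = Request(url, method="HEAD")
        with urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except Exception:
        return False

# Advisory output only: issues are surfaced, submission is never blocked.
issues = check_absolute_claims("Guaranteed #1 weight loss results!")
print(issues)  # → ['guaranteed', '#1']
```

The same pattern extends to the other machine-checkable rules: each returns findings with a severity, and the UI renders them as skippable hints rather than hard errors.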
When does it trigger?
Per TikTok's official documentation, modifying the following fields triggers re-review. The readiness check runs automatically after these fields are edited:
- Video/Image
- Display name
- URL / Deep link
- Avatar
- Targeting (age/region/language)
- Tracking parameters
- Budget & schedule
- Bid & optimization
- Call-to-action button
- Ad group/ad name
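The trigger logic above reduces to a set membership test. The field identifiers below are assumptions for this sketch (TikTok's actual API field names are not public in this form):

```python
# Fields whose edits trigger re-review, per the list above.
# Identifiers are illustrative, not TikTok API names.
REREVIEW_TRIGGERS = {
    "video", "image", "display_name", "url", "deep_link", "avatar",
    "targeting", "tracking_params", "budget_schedule", "bid_optimization",
    "cta_button", "ad_name",
}

def should_run_readiness_check(edited_fields: set[str]) -> bool:
    """Run the readiness check only when an edit can trigger re-review."""
    return bool(edited_fields & REREVIEW_TRIGGERS)

print(should_run_readiness_check({"headline_color"}))         # → False
print(should_run_readiness_check({"url", "headline_color"}))  # → True
```

Gating the check this way keeps it out of the advertiser's way for edits that never reach the review system.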
Metrics & Validation
Baseline Estimates
TikTok doesn't publish ad review approval rates (their transparency report only covers community content and government requests). The following data is estimated from third-party sources and community research:
| Metric | Value | Source |
|---|---|---|
| First-submit review issue rate | ~35% | oreateai.com / crayo.ai |
| Rate needing revision to pass | ~15% | oreateai.com |
| Review time | Usually within 24h | TikTok Help Center |
Target
Of the ~35% of submissions with review issues, ~40% (≈14% of all submissions) are avoidable format/URL/keyword issues. Goal: reduce that 14% to under 5%.
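Making the target arithmetic explicit, using the estimated baselines from the table above:

```python
# Back-of-envelope for the target, using the estimated baselines.
first_submit_issue_rate = 0.35  # ~35% of first submissions hit review issues
avoidable_share = 0.40          # ~40% of those are format/URL/keyword issues

avoidable_rate = first_submit_issue_rate * avoidable_share
print(f"{avoidable_rate:.0%}")  # → 14% of all submissions; target is <5%
```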
False positive = pre-check flags an issue but the ad actually passes review. False positives erode user trust in the tool.
Validation Plan
Roadmap
Reflection
Three Trade-offs
- Downgraded from 'AI-powered review' to 'rule matching.' After research, I realized the industry avoids pre-checks for good reason—many reviews require human judgment and AI accuracy isn't there yet. The downgrade itself is a product decision.
- Changed from 'mandatory blocking' to 'skippable.' False positives destroy trust; an immature tool shouldn't stop users from spending money.
- Narrowed from 'full coverage' to 'MVP with only 5 rules.' Most of the 21 policy categories require human judgment. Do what's feasible first, then use data to prove expansion is worth it.
Honest Limitations
None of the three major platforms have pre-submit compliance checks. After research, I understand why. But from another angle—embedding the Help Center's existing review suggestions into the ad creation flow costs almost nothing and doesn't require building new capabilities. It's just showing information one step earlier.
If I Could Start Over
Start with 5 advertiser interviews to validate whether 'knowing risks before submit' is actually wanted, then decide what to build. The biggest gap in this case study is the lack of primary user validation—all pain points come from secondhand forum data.
The hardest part of this project wasn't design—it was making trade-offs.