FPAI Business Logic Documentation

Smart Listing Optimizer

A unified platform that takes an Amazon product and produces an optimized listing grounded in real customer intent, differentiated from competitors, keyword-rich, compliance-safe, and AI-visible.

The platform then tests the listing via A/B experiments, creates off-Amazon content for AI platform visibility, and proves the results with before/after measurement.

The Pipeline

Each module produces structured data that feeds the next. One continuous assembly line from product intake to proven results.

[Pipeline diagram] Stages: DIFFERENTIATE → BUILD → VALIDATE → GATE (LQS score ≥ 70; must pass to proceed), then two parallel tracks:

Track A — Amazon: M5 MYE A/B Testing
Track B — AI Visibility: M6 Off-Amazon Content (optional) → 30 days later → Measure: M7 AI Rescore Loop (optional)

Modules marked optional can be skipped.

How data flows

Every module consumes artifacts from upstream modules and produces new artifacts for downstream modules. The product profile feeds into competitor analysis, which feeds into customer intent, which feeds into USP extraction, and so on. No module operates in isolation — each decision is grounded in the accumulated intelligence of all previous steps.

12 Pipeline Modules · 20+ Data Artifacts · 6 Quality Dimensions
MODULE 0

No-ASIN Product Intake

Entry point for products that don't have an Amazon listing yet.

When this is used

Pre-launch
Product isn't on Amazon yet — seller wants an optimized listing ready for launch day
Demo runs
Testing the platform without committing a real product
Cross-platform
Product exists on Walmart/Shopify/eBay but not Amazon
Documentation only
Seller has spec sheets, manuals, or marketing PDFs but no live listing

Three additional input channels

1. Product URL: Link to the product on another marketplace. The system scrapes it to pre-fill form fields.
2. Image Uploads: Product photos, packaging shots, ingredient labels, certifications. Vision AI extracts text, specs, and certifications.
3. Document Uploads: PDFs or Word docs (spec sheets, safety data sheets, marketing materials). The system extracts structured product data.

The merge rule

User-typed fields always win. Everything else fills in blanks.

If the user typed “500ml” and the URL scrape says “16oz”, the system keeps “500ml”. If the field is empty, the system fills it with whatever was found.
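The merge rule is simple enough to sketch (a minimal sketch assuming flat dict profiles; the field names are illustrative, not the platform's actual schema):

```python
def merge_profiles(user_typed: dict, scraped: dict) -> dict:
    """User-typed fields always win; scraped values only fill blanks."""
    merged = dict(scraped)                    # start from the scraped data
    for field, value in user_typed.items():
        if value not in (None, ""):           # a typed value overrides any scraped one
            merged[field] = value
    return merged

profile = merge_profiles(
    user_typed={"volume": "500ml", "brand": ""},
    scraped={"volume": "16oz", "brand": "Acme", "category": "Beauty"},
)
# "500ml" is kept (user wins); "Acme" fills the blank brand field
```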

Output

product_profile — the exact same data structure as Module 1. Everything downstream works identically regardless of entry point.

MODULE 1

Product Context

Standard entry point when the product already has a live Amazon listing (has an ASIN).

Step 1: Product Data Intake

Required: Market, ASIN, MSKU
Optional: Brand name, product features, up to 10 images

If ASIN is provided, system fetches public Amazon data: current title, bullets, description, category, images, rating, review count.

Step 2: Product Profile Creation

AI analyzes all available information to determine:

Product type and purpose
Key attributes and specs
Target audience
Category and subcategory
Brand tone and positioning
Initial keyword seeds

Output

product_profile — Product identity, category, attributes, features, benefits, target audience, use cases, brand tone, and keyword seeds. The foundation for everything that follows.

MODULE 1.5

AI Landscape Scan

Scan what AI platforms (ChatGPT, Perplexity, Google AI) say about the product before any listing work begins.

Why this exists

Traditional Amazon SEO (A9/A10) has only 22% overlap with AI shopping recommendations. A product optimized only for Amazon search is invisible to shoppers who ask ChatGPT or Perplexity for advice.

How it works

1. Generate 20-30 Shopper Questions: AI creates questions real shoppers would ask AI assistants — not search terms, but conversational queries like "What's the best manuka honey for immune support?" — across 6 intent types: best-of, comparison, problem-solution, attribute, use-case, safety.
2. Query AI Platforms: Send each question to ChatGPT (API), Perplexity (API), Google AI Overviews (SerpAPI), and Amazon Rufus (manual). Record every response.
3. Analyze Responses: Extract product mentions and position, citation sources (which URLs/sites the AI references), sentiment, competitor set, language patterns, and hallucination checks.

Outputs

ai_visibility_baseline: How visible the product is across AI platforms (e.g., mentioned in 3/28 queries)
ai_gap_report: Which queries and platforms the product is missing from, and which external sources are missing
citation_source_map: Which external sources (Reddit threads, blogs, YouTube) the AI platforms are actually citing

Feeds forward to

M2: AI-recommended competitors
M2.1: Real AI platform questions
M3: Natural language query patterns
M4: GEO-optimized writing signals
M6: Gap & citation data for content
MODULE 2

Competitor Discovery

Find, analyze, and select the products that compete directly with the user's product on Amazon.

Three-step process

1. Competitor Collection: The system suggests 10 search terms based on the product profile. The user selects 5. The system searches Amazon (20-25 results per term), captures title/brand/price/rating/bullets, and deduplicates. Result: competitor_list_raw (40-50 listings).
2. Relevancy Scoring: Each competitor is scored on category alignment, shared features, similar positioning, and target audience overlap. A Sales Velocity Score is estimated where possible.
3. User Selection: The top 20 are displayed (competitor_list_trimmed). The user picks 5-10 true market comparables (competitor_list_final) and can drag to reorder by priority.
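The collection and trimming steps can be sketched as follows; the relevancy scorer here is a stand-in for the real one, which weighs category alignment, shared features, positioning, and audience overlap:

```python
def build_trimmed_list(raw_results, score_fn, trim_size=20):
    """Deduplicate search results by ASIN, score, and keep the top trim_size."""
    seen, deduped = set(), []
    for item in raw_results:                  # raw_results ~ competitor_list_raw
        if item["asin"] not in seen:          # keep first occurrence across terms
            seen.add(item["asin"])
            deduped.append(item)
    ranked = sorted(deduped, key=score_fn, reverse=True)
    return ranked[:trim_size]                 # ~ competitor_list_trimmed

trimmed = build_trimmed_list(
    [{"asin": "B01", "rating": 4.6}, {"asin": "B02", "rating": 4.1},
     {"asin": "B01", "rating": 4.6}],         # duplicate hit from a second term
    score_fn=lambda c: c["rating"], trim_size=2,
)
# two unique ASINs remain, highest-scoring first
```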

Why two lists matter

Trimmed (20 competitors)
Broad market context. Used for frequency analysis — “how common is this claim in the market?”
Final (5-10 competitors)
The user's actual competition. Used for strategic decisions — “is this USP differentiated against my direct rivals?”
MODULE 2.1

Customer Intent Extraction (Rufus)

Understand what customers actually care about — independent of what the product claims or competitors say.

Step 1: Collect Raw Signals

Sources:

Competitor reviews (titles + bodies)
Competitor Q&A (questions on PDPs)
User's own product reviews
Long-tail search queries from category
AI-inferred questions

Review statements are converted to questions: “This serum feels sticky” becomes “Will this serum feel sticky?”. Result: 50-200 raw signals.

Step 2: Cluster Into Themes

AI groups signals into themes (e.g., Texture, Ingredients, Performance, Safety, Value). Each theme gets:

Frequency Score (0-100)
How often customers mention this
Importance Score (0-100)
Complaint severity, desire strength, emotional tone

Step 3: Package Output

Top 5-10 intent themes with scores
Curated customer questions (up to 10)
Pain points — frustrations ("sticky texture", "doesn't absorb")
Desire points — outcomes sought ("long-lasting hydration")
Natural search-language phrases
Feature-Interest Map — which features customers care about most
Structured tags (TEXTURE, HYDRATION, etc.) for cross-module reference
MODULE 2.2

USP Extraction

Identify every possible unique selling point by analyzing the product, competitors, and customer intent.

Three data sources compared

Product Profile
What does the product claim? Features, benefits, ingredients, specs.
Competitors (20)
What do competitors emphasize? Common claims, repeated selling points, and gaps.
Customer Intent
What do customers care about? Themes, pain points, desire points.

What the AI finds

The AI compares across all three sources to identify:

1. Product features competitors don't mention: Gaps in competitor messaging that the product can fill.
2. Customer concerns competitors fail to address: E.g., customers ask about "stickiness" but no competitor addresses it.
3. Advantages aligned with high-importance themes: Product attributes that match what customers care about most.

Output

usp_raw — Each potential USP with: clean text, competitor frequency, product presence (0/1), impact score (0-100), linked intent themes, pain and desire points.

MODULE 2.3

USP Evaluation

Score and rank USPs to determine which ones are most valuable for the listing.

Three scoring dimensions

Customer Relevance (45%)

How strongly this USP aligns with what customers care about. Uses theme frequency, importance, and pain/desire alignment. Pain-heavy USPs score higher — solving a problem beats fulfilling a want.

Competitive Uniqueness (25%)

How differentiated vs. competitors. Formula: (1 - competitor_frequency / total_competitors) x 100. Being unique but irrelevant gets penalized.

Market Impact Potential (30%)

How much this USP can realistically move CTR, CVR, and revenue. Based on impact_score from extraction.

Formula

Total USP Score = 0.45 x CR + 0.25 x CU + 0.30 x MI
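The uniqueness formula and the weighted total combine as below (a worked sketch; the example numbers are illustrative):

```python
def competitive_uniqueness(competitor_frequency: int, total_competitors: int) -> float:
    """CU = (1 - competitor_frequency / total_competitors) x 100."""
    return (1 - competitor_frequency / total_competitors) * 100

def usp_total_score(cr: float, cu: float, mi: float) -> float:
    """Total USP Score = 0.45 x CR + 0.25 x CU + 0.30 x MI (inputs on 0-100)."""
    return 0.45 * cr + 0.25 * cu + 0.30 * mi

# A USP that 4 of 20 trimmed competitors also claim, with CR=80 and MI=70:
cu = competitive_uniqueness(4, 20)   # (1 - 4/20) * 100 = 80.0
total = usp_total_score(80, cu, 70)  # 36 + 20 + 21 = 77.0
```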

Validation & Selection

High-scoring USPs are validated for factual accuracy, Amazon compliance, clear phrasing, and content fit. Then selected:

Primary Set (3-5 USPs)
High scores, strong validation, diverse themes. Lead the listing.
Secondary Pool
Good scores or niche themes. For supporting bullets, A+ content, MYE retests.
MODULE 3

Keyword Intelligence

Build a comprehensive, scored keyword strategy from five data sources.

Five keyword sources

S1. Product Profile: what the product is
S2. Competitors: how the market describes it
S3. Customer Intent: how customers search
S4. External Tools: search volume data
S5. AI Expansion: synonyms & variants

Scoring formula

base = 0.60 x product_intent_relevance + 0.20 x competitor_alignment + 0.20 x search_demand
keyword_strength_score = min(100, base + usp_bonus)
USP bonus: +10 (strong), +5 (weak), +0 (none)
Tier | Score | Description | Placement
Primary | >= 75 | Most important, defines the product | Title, Bullet 1
Secondary | 60-74 | Strong support, reinforces positioning | Later bullets, description
Long-tail | 40-59 | Specific, high-intent phrases | Backend, description
Excluded | < 40 | Weak, irrelevant, risky, or competitor-only | Not used
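The formula and tier cutoffs above translate directly into code (a sketch; function names are illustrative):

```python
def keyword_strength_score(product_intent_relevance, competitor_alignment,
                           search_demand, usp_bonus=0):
    """base = 0.60 x PIR + 0.20 x CA + 0.20 x SD; capped at 100 after the bonus."""
    base = (0.60 * product_intent_relevance
            + 0.20 * competitor_alignment
            + 0.20 * search_demand)
    return min(100, base + usp_bonus)

def tier(score):
    if score >= 75: return "Primary"
    if score >= 60: return "Secondary"
    if score >= 40: return "Long-tail"
    return "Excluded"

score = keyword_strength_score(90, 70, 60, usp_bonus=10)  # 54 + 14 + 12 + 10 = 90
# tier(score) -> "Primary": belongs in the title or Bullet 1
```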

USP Keyword Bundles

For each approved USP, all supporting keywords are grouped into a bundle (primary, secondary, long-tail). Neutral category keywords stay separate as baseline terms. When MYE tests different variants, each variant emphasizes a different USP bundle while keeping category keywords constant — making tests clean and attributable.
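A bundle builder might look like this (a sketch; it assumes each scored keyword carries a usp_id, None for neutral category terms, plus a tier label):

```python
from collections import defaultdict

def build_bundles(keywords):
    """Group keywords into per-USP bundles; neutral terms become the baseline."""
    bundles = defaultdict(lambda: {"primary": [], "secondary": [], "long-tail": []})
    baseline = []
    for kw in keywords:
        if kw["usp_id"] is None:
            baseline.append(kw["term"])            # category keywords, kept constant
        elif kw["tier"] != "Excluded":
            bundles[kw["usp_id"]][kw["tier"].lower()].append(kw["term"])
    return dict(bundles), baseline

bundles, baseline = build_bundles([
    {"term": "mgo 400 manuka", "usp_id": "usp_potency", "tier": "Primary"},
    {"term": "raw honey", "usp_id": None, "tier": "Secondary"},
])
```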

MODULE 4

Listing Creation

Turn approved USPs and keyword bundles into an actual Amazon listing. Hybrid approach: rules decide structure, AI writes copy.

Inputs

Product Truth Set
Facts from the product profile. The listing can only claim what's in here.
USP Package
Approved USPs with safe phrasing, proof points, placements.
Keyword Package
Keywords with scores, tiers, and USP links.
Compliance Config
Banned terms, risk words, tone guidance, character limits.

Five-step creation process

Step A — Build Listing Blueprint (Rules Only): Decide which USP leads, the USP order for Bullets 1-5, which keywords go where, minimum inclusion rules, dedup rules, and compliance constraints. No AI — pure structural planning.
Step B — Draft Copy (AI Within Blueprint): AI generates title, bullets, description, and backend terms, constrained by the blueprint. Must follow: must-include keywords, allowed USP phrasing, banned terms, character limits.
Step C — QA Pass 1 (Automated Rules Check): Validate every section: length limits, required keywords included, no keyword stuffing, no banned terms, USP alignment per bullet. Output: pass/fail per section.
Step D — Targeted Rewrite (AI Fixes Failures): Only failed sections are regenerated, with explicit instructions: "Remove banned term X", "Include missing keyword Y", "Reduce by N characters." Passing sections are untouched.
Step E — QA Pass 2 (Final Check + Fallbacks): Re-check everything. If a section still fails, apply deterministic fallbacks: trim the lowest-priority phrase, swap a keyword for a same-USP alternative, or replace with pre-approved safe phrasing.
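Step C can be sketched as a per-section rules check; the limits and term lists below are illustrative, and the failure messages are written so they can feed Step D's rewrite instructions directly:

```python
def qa_check(section_text, max_chars, required_keywords, banned_terms):
    """Return (passed, failures) for one listing section."""
    failures = []
    if len(section_text) > max_chars:
        failures.append(f"Reduce by {len(section_text) - max_chars} characters")
    lower = section_text.lower()
    for kw in required_keywords:
        if kw.lower() not in lower:
            failures.append(f"Include missing keyword '{kw}'")
    for term in banned_terms:
        if term.lower() in lower:
            failures.append(f"Remove banned term '{term}'")
    return (not failures, failures)

ok, issues = qa_check("Raw manuka honey, MGO 400+, cold-extracted",
                      max_chars=200,
                      required_keywords=["manuka honey"],
                      banned_terms=["cure", "miracle"])
# ok is True: within limits, keyword present, no banned terms
```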

Key principles

Truth-first: never invent facts
Compliance-first: high-risk language never enters
Structure-first: rules decide, AI writes
No stuffing: reuse caps + section quotas
Explainable: every phrase maps to a source
LQS

Listing Quality Score

Scores listings across 6 dimensions. Used twice: before optimization (baseline) and after (quality gate).

LQS runs at two points in the pipeline

LQS BASELINE
Before optimization (after M1)
Scores the existing live listing using the same 6 dimensions. Establishes a measurable starting point so the client can see exactly where their listing is weak and how much improvement the platform delivers.
Input: current Amazon listing (fetched via ASIN)
Output: lqs_baseline_report
LQS GATE
After optimization (after M4)
Scores the optimized listing produced by Module 4. Must score ≥ 70 to proceed to MYE testing. A score below 70 blocks the listing and requires fixes. This ensures only strategically strong listings reach customers.
Input: M4 final listing + all pipeline data
Output: lqs_report (gate decision)

Six quality dimensions

Keyword Optimization (25%): Keywords in the right places?
USP Effectiveness (20%): Approved USPs present and unique?
Readability (15%): Flesch score, scannability
Competitive Position (15%): Differentiated vs. competitors?
Customer Alignment (15%): Addresses pain points?
Compliance (10%): Zero banned terms, format rules

Before/after comparison example

BASELINE (before): 57.3 (Grade F)
OPTIMIZED (after): 78.5 (Grade C, MYE eligible)

The delta between baseline and optimized LQS is the measurable proof of the optimization's value.

Grade | Score | MYE Eligible
A | 90-100 | Yes (auto-approve)
B | 80-89 | Yes (auto-approve)
C | 70-79 | Yes (review recommended)
D | 60-69 | No (require fixes)
F | < 60 | No (major rewrite)
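A sketch of the roll-up using the six weights and grade bands above (dimension scores on 0-100; names are illustrative):

```python
LQS_WEIGHTS = {
    "keyword_optimization": 0.25, "usp_effectiveness": 0.20, "readability": 0.15,
    "competitive_position": 0.15, "customer_alignment": 0.15, "compliance": 0.10,
}

def lqs(scores):
    """Weighted sum of the six dimension scores (each 0-100)."""
    return sum(LQS_WEIGHTS[dim] * scores[dim] for dim in LQS_WEIGHTS)

def grade(score):
    for cutoff, letter in ((90, "A"), (80, "B"), (70, "C"), (60, "D")):
        if score >= cutoff:
            return letter
    return "F"

def mye_eligible(score):
    return score >= 70          # the LQS Gate threshold

score = lqs({"keyword_optimization": 85, "usp_effectiveness": 80, "readability": 75,
             "competitive_position": 70, "customer_alignment": 78, "compliance": 80})
# score is about 78.7 -> grade "C", MYE eligible (review recommended)
```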
MODULE 5

MYE Integration (A/B Testing)

Test the listing on Amazon via Manage Your Experiments. One attribute at a time.

Core principle

One attribute at a time, in sequence: Title first, then Bullet 1, then Bullets 2-5, then Description.

If you change title and bullets simultaneously and performance improves, you don't know which change caused it.

Six-step process

1. Preparation: Pull all inputs, validate content, and create the Test Subject Registry. AI generates 2-3 title variants: USP-led, keyword-led, balanced.
2. Experiment Setup: Create the experiment via the MYE API. Set the control (current live listing) and treatments (AI variants). One experiment per ASIN per attribute.
3. Data Collection: Automated daily metric retrieval: impressions, clicks, CTR, CVR, sessions, sales. Typically runs 2-4 weeks.
4. Result Evaluation: On a win, replace the live listing and move to the next attribute. On a loss, AI analyzes patterns (a CTR drop = unclear USP; stable CTR but low CVR = disconnect) and creates a Retest Recommendation.
5. Retest Queue: Maintains a queue per ASIN: Running / Passed / Failed / Retest Pending. Failed tests get new variants that address the identified issues.
6. Learning Record: Every test result is saved: which USP + keyword + tone combinations produced the strongest performance. Cross-product learnings improve future optimizations.
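The document does not specify MYE's statistics engine, so as an illustration only, here is a standard two-proportion z-test a team might use to sanity-check whether a treatment's CTR lift is significant before calling a win:

```python
from math import sqrt, erf

def ctr_p_value(clicks_a, imps_a, clicks_b, imps_b):
    """Two-sided p-value for the difference in CTR between two variants."""
    p_a, p_b = clicks_a / imps_a, clicks_b / imps_b
    p_pool = (clicks_a + clicks_b) / (imps_a + imps_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / imps_a + 1 / imps_b))
    z = (p_b - p_a) / se
    normal_cdf = lambda x: 0.5 * (1 + erf(x / sqrt(2)))
    return 2 * (1 - normal_cdf(abs(z)))

# Control title: 500 clicks / 40,000 impressions (CTR 1.25%)
# Treatment:     600 clicks / 40,000 impressions (CTR 1.50%)
p = ctr_p_value(500, 40_000, 600, 40_000)
win = p < 0.05   # significant lift -> replace the live title, move to the next attribute
```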
MODULE 6

Off-Amazon Content Package

Create external content for AI platform visibility. AI assistants pull from Reddit, blogs, YouTube — not Amazon listings.

Seven content types

Step | Content Type | Generation | Publishing
1 | Content Strategy | Fully automated | Prioritized action plan
2 | Reddit Content | Automated | Manual (needs karma)
3 | Quora/Forum Answers | Automated | Manual
4 | Blog/Articles (GEO-optimized) | Automated | Semi-auto (CMS)
5 | YouTube/Video Scripts | Automated | Manual (record)
6 | Outreach Emails | Automated | Semi-auto (send API)
7 | Technical SEO/AEO | Automated | Manual (dev implements)

GEO visibility impact

Expert quotes: +42.6%
Statistics: +32.8%
Fluent writing: +28.7%
Source citations: +27.7%

Technical AEO package

robots.txt — Allow AI crawlers (OAI-SearchBot, PerplexityBot, ClaudeBot)
llms.txt — Table of contents for LLM crawlers
JSON-LD schema markup — Product, FAQPage, HowTo
SSR audit — Flag client-side pages invisible to AI crawlers
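As an illustration, the robots.txt piece might look like this (the user-agent tokens follow the crawler names listed above; verify current tokens against each vendor's documentation before shipping):

```text
# Allow AI crawlers to index the site
User-agent: OAI-SearchBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: ClaudeBot
Allow: /
```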
MODULE 7

AI Rescore Loop

Prove it worked. Re-run the same AI queries after 30 days and measure the change.

The cycle

M1.5 Baseline (3/28 visible) → M2-M4 Amazon Listing Optimization → M6 Off-Amazon Content → wait 14-30 days → M7 Rescore (14/28 visible) → Proof: Client Report

Delta analysis

Visibility change (new mentions, lost mentions)
Position change (3rd to 1st mention)
Sentiment change
Citation source change (new blog/Reddit cited)
Competitor displacement
Hallucination resolution
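The visibility portion of the delta reduces to set arithmetic over query IDs (a sketch; the query IDs are illustrative):

```python
def visibility_delta(baseline_mentions, rescore_mentions, total_queries):
    """Compare which queries mentioned the product before vs. after."""
    return {
        "baseline_rate": f"{len(baseline_mentions)}/{total_queries}",
        "rescore_rate": f"{len(rescore_mentions)}/{total_queries}",
        "new_mentions": sorted(rescore_mentions - baseline_mentions),
        "lost_mentions": sorted(baseline_mentions - rescore_mentions),
    }

delta = visibility_delta({"q01", "q07", "q09"}, {"q01", "q02", "q07", "q12"}, 28)
# baseline 3/28 -> rescore 4/28; new: q02, q12; lost: q09
```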

Attribution

For each visibility gain, attribute to: Amazon listing changes (M4), Reddit content (M6), blog content (M6), technical fixes (M6), or organic/external. Attribution is probabilistic with confidence levels.

Complete Data Flow

What each module produces and who consumes it.

Module | Produces | Consumed By
M0/M1 | product_profile | LQS Baseline, M1.5, M2, M2.1, M2.2, M3, M4
LQS Baseline | lqs_baseline_report (pre-optimization score) | Client report (proof of improvement delta)
M1.5 | ai_visibility_baseline, ai_gap_report, citation_source_map | M2, M2.1, M3, M4, M6, M7
M2 | competitor_list_raw, trimmed, final | M2.1, M2.2, M3, M4, LQS
M2.1 | intent_themes, Customer Intent Package | M2.2, M2.3, M3, M4, LQS
M2.2 | usp_raw | M2.3
M2.3 | usp_test_set (approved USPs) | M3, M4, M5, LQS
M3 | Keyword Package, USP Keyword Bundles | M4, M5, LQS
M4 | Final Listing + QA Reports | LQS Gate, M5, M6
LQS Gate | lqs_report (score, grade, MYE eligibility) | M5 (must pass >= 70)
M5 | experiment_results, learning_record | M4 (retest), M7
M6 | content_package (Reddit, Quora, Blog, Video, etc.) | M7
M7 | rescore_report, attribution, client_report | M1.5 (next cycle)
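The table can also be held as a producer-to-consumers map, which makes the dependencies queryable (a sketch mirroring the rows above; "LQS" is kept exactly as the table writes it):

```python
FLOW = {
    "M0/M1":    ["LQS Baseline", "M1.5", "M2", "M2.1", "M2.2", "M3", "M4"],
    "M1.5":     ["M2", "M2.1", "M3", "M4", "M6", "M7"],
    "M2":       ["M2.1", "M2.2", "M3", "M4", "LQS"],
    "M2.1":     ["M2.2", "M2.3", "M3", "M4", "LQS"],
    "M2.2":     ["M2.3"],
    "M2.3":     ["M3", "M4", "M5", "LQS"],
    "M3":       ["M4", "M5", "LQS"],
    "M4":       ["LQS Gate", "M5", "M6"],
    "LQS Gate": ["M5"],
    "M5":       ["M4", "M7"],
    "M6":       ["M7"],
    "M7":       ["M1.5"],
}

def consumers_of(module):
    return FLOW.get(module, [])

def producers_for(module):
    """Every module whose output feeds `module`."""
    return sorted(p for p, consumers in FLOW.items() if module in consumers)

# producers_for("M5") -> ["LQS Gate", "M2.3", "M3", "M4"]
```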

SLO - Smart Listing Optimizer | Business Logic Documentation

Generated April 2026 | FPAI Deliverable