Open methodology
We grade websites on traditional SEO and AI-search readiness using an open, deterministic algorithm. Every check has a fixed ID, every score has a documented weight, and every AI-generated recommendation is grounded in those checks. No black box, no made-up metrics.
GEO (Generative Engine Optimization) is the discipline of making content discoverable, parseable, and citable by AI search systems — ChatGPT, Perplexity, Google AI Overviews, Claude, Gemini. Traditional SEO ranks you in a list of links; GEO gets you quoted in the answer.
The overall score is a weighted average of category scores. Weights are loaded from the same code path that runs the audit, so this table never drifts from production.
| Category | Weight | Checks |
|---|---|---|
| Content | 23% | 15 |
| Technical SEO | 22% | 26 |
| On-Page SEO | 18% | 25 |
| Structured Data | 14% | 22 |
| Performance | 12% | 11 |
| AI Search (GEO) | 8% | 16 |
| Images | 3% | 13 |
| Total | 100% | 169 |
Some categories run checks but do not contribute to the headline score yet: Security, Accessibility, Frontend.
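The weighted average above can be sketched in a few lines. This is an illustrative sketch, not the production code path — the weights match the table, but the category keys and function name are assumptions:

```typescript
// Weights from the methodology table; key names are illustrative.
const WEIGHTS: Record<string, number> = {
  content: 0.23,
  technicalSeo: 0.22,
  onPageSeo: 0.18,
  structuredData: 0.14,
  performance: 0.12,
  aiSearch: 0.08,
  images: 0.03,
};

// Each category score is 0-100; the overall score is the weight-blended sum.
// Missing categories count as 0 in this sketch.
function overallScore(categoryScores: Record<string, number>): number {
  let total = 0;
  for (const [category, weight] of Object.entries(WEIGHTS)) {
    total += weight * (categoryScores[category] ?? 0);
  }
  return Math.round(total);
}
```

Since the weights sum to 1.00, a site scoring 80 in every category scores 80 overall, and a heavy category like Content moves the needle almost eight times as much as Images.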
Every AI-generated bullet — in the summary, in the developer brief, in the action plan — must reference at least one real check ID with its status, e.g. `tech.canonical` (FAIL) or `schema.faq` (MISSING). The model is instructed to quote the audited page's actual current title, meta description, and headings rather than synthesize generic advice.
After the model responds, we validate the output for grounded references. If a summary fails to cite any real findings, we retry once with stricter instructions. We would rather ship a slightly less polished sentence than a confident hallucination.
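The grounding gate can be sketched as a simple pattern check: a bullet passes only if it cites a check ID in the documented `category.check (STATUS)` form and that ID exists in the audit's result set. The regex and function name here are assumptions about the shape of the validator, not the production implementation:

```typescript
// A bullet must cite a real check ID with a status, e.g. "tech.canonical (FAIL)".
const CITATION_RE = /\b([a-z]+\.[a-z_]+) \((PASS|FAIL|WARN|MISSING)\)/;

// Grounded = the bullet names a check that actually ran in this audit.
function isGrounded(bullet: string, knownIds: Set<string>): boolean {
  const match = bullet.match(CITATION_RE);
  return match !== null && knownIds.has(match[1]);
}
```

A summary whose bullets all fail this check would trigger the single stricter retry described above; a generic sentence like "improve your headings" never passes, because it cites nothing.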
All 169 checks are deterministic — running the same site twice returns the exact same pass/fail/warn for every check, the exact same category scores, and the exact same overall grade. We hash the result set internally to verify this. Issues, severity, code examples, recommendations — all byte-stable across runs.
AI-generated text (summary, developer brief, 30-day plan) runs at temperature 0 to minimize variation. Wording is highly consistent — the same facts, check IDs, and recommendations appear every time — but the underlying LLM (DeepSeek) is not byte-deterministic across requests, so phrasing can differ by a few words. Performance metrics (LCP, CLS, INP) come from Google PageSpeed Insights and reflect real-time measurements that may fluctuate ±5 points between runs, occasionally shifting a borderline grade by one letter (e.g. B↔C). Everything below the AI text and PSI layer is byte-stable.
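To see how a ±5-point swing flips a borderline grade, consider a score-to-letter mapping like the hypothetical bands below (the actual thresholds are not published here):

```typescript
// Hypothetical grade bands, for illustration only.
function letterGrade(score: number): string {
  if (score >= 90) return "A";
  if (score >= 75) return "B";
  if (score >= 60) return "C";
  if (score >= 45) return "D";
  return "F";
}
```

With bands like these, a site scoring 77 on one run and 72 on the next crosses the B/C boundary purely from PSI noise, while a site at 85 stays a solid B under the same ±5 fluctuation.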
Every check that runs against your site, grouped by category. Severity reflects the engine's default — individual results may upgrade or downgrade based on what's found.
Spotted a check that feels wrong? Email info@seoport.com.ua.