HiQ Cortex · A product of HiQ-AI
Vol. 01 · On the slow answer
Cortex matches emission factors across twelve databases, flags gaps, and exports with full provenance — every row sourced, DQI-scored, and ready for an auditor's questions.
It does not replace experts. It scales their judgment.
Three doors in
Which door depends on the work in front of you.
Your 60% is database lookup. Cortex compresses it: ask in Chinese or English, get DQI-scored candidates across HiQLCD and Ecoinvent, with every proxy assumption attached.
See BOM matching →

Your auditor wants the reasoning, not the number. Cortex pauses at three HITL gates — coverage, proxy, cross-db spread — so every proxy decision has a name, not a footnote.
Read the methodology →

Four protocols: REST, MCP, AG-UI, A2A. One API key. Embed LCA retrieval in your own product without rebuilding the search layer.
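For the builder door, a minimal sketch of what a REST call could look like. The endpoint URL, field names, and bearer-token scheme below are assumptions for illustration; the actual contract lives in the API docs.

```python
import json

# Illustrative sketch: the endpoint path, parameter names, and auth header
# are assumptions, not the documented API contract.
API_KEY = "your-api-key"                             # placeholder
MATCH_URL = "https://api.example.invalid/v1/match"   # placeholder URL

def build_match_request(query: str, lang: str = "en") -> tuple[dict, bytes]:
    """Build headers and a JSON body for a single material-matching call."""
    headers = {
        "Authorization": f"Bearer {API_KEY}",  # one API key, per the text
        "Content-Type": "application/json",
    }
    body = json.dumps({"query": query, "lang": lang}).encode("utf-8")
    return headers, body

# Usage: POST `body` with `headers` to MATCH_URL using any HTTP client,
# then read the ranked, DQI-scored candidates out of the JSON response.
headers, body = build_match_request("热轧钢卷", lang="zh")
```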
Read the API docs →

What it is
Cortex Chat handles conversations — single-session questions, BOM uploads, cross-database comparison. Cortex Cowork handles projects — local files, multi-session memory, agentic execution. Same engine underneath. Different surface for different work.
LCA data retrieval, in a conversation. Ask in Chinese or English — results come back ranked, DQI-scored, and sourced.
The desktop agent that stays with your LCA project — across sessions, across restarts, across handoffs.
Why a snail
Every AI company chose a lightning bolt. We chose a snail. Here is the argument.
Speed in LCA is a trap. The CBAM verifier does not want an answer — she wants the reasoning behind the answer. Cortex does not skip the system-boundary discussion to hand you a number. Slow is the discipline that keeps the output auditable.
A snail carries everything it needs. Cortex works the same way: every dataset returned carries its source, its version, its system model, and its DQI scores — Temporal, Geographic, Technology, Completeness, Reliability. The output is not a number. It is a package you can hand to an auditor.
A snail leaves a mark on everything it touches. Cortex keeps the full trail: which databases were searched, which candidates were ranked, which proxy was chosen and why, which gate paused the run. The auditor and the next practitioner can walk it back to the origin.
Cortex keeps the fast work on the backend. What it delivers is the slow thing: a chain of reasoning you can audit, line by line.
What you actually get
Every BOM row comes back with DQI-scored candidates, source citations, and each proxy assumption named.
HiQLCD, Ecoinvent, EF (Environmental Footprint), CarbonMinds, Tiangong, World Steel, and more — ranked by relevance and data quality, not alphabetically.
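As an illustration, one candidate in that ranked list might carry a shape like the following. The field names and the 1-to-5 pedigree-style scale are assumptions for the sketch, not the documented schema.

```python
# Hypothetical shape of one ranked candidate with its provenance attached.
candidate = {
    "name": "steel, hot rolled",
    "database": "Ecoinvent",
    "version": "3.10",                 # illustrative version
    "system_model": "cut-off",
    "dqi": {                           # pedigree-style: 1 (best) to 5 (worst)
        "temporal": 2,
        "geographic": 1,
        "technology": 2,
        "completeness": 1,
        "reliability": 1,
    },
    "proxy": None,                     # a named assumption when a proxy was used
    "source": "ecoinvent 3.10, cut-off system model",
}

def mean_dqi(c: dict) -> float:
    """Average the five DQI dimensions into one comparison score."""
    scores = c["dqi"].values()
    return sum(scores) / len(scores)
```

A simple average like `mean_dqi` is one way results could be ordered by data quality rather than alphabetically; the actual ranking blends relevance and quality.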
Cortex pauses when coverage drops below 80%, when a proxy is required, or when cross-database spread exceeds 2×. You decide at each gate. Nothing proceeds without you.
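The three gate conditions read as simple predicates. A minimal sketch follows; the 80% and 2× thresholds come from the text above, while the function and parameter names are illustrative.

```python
def gates_triggered(coverage: float, proxy_needed: bool,
                    min_factor: float, max_factor: float) -> list[str]:
    """Return which HITL gates should pause the run for a human decision.

    coverage      -- fraction of BOM rows matched without gaps (0.0 to 1.0)
    proxy_needed  -- True if any row requires a proxy dataset
    min_factor,
    max_factor    -- lowest and highest emission factor found for the same
                     flow across databases (same unit)
    """
    gates = []
    if coverage < 0.80:                # coverage gate
        gates.append("coverage")
    if proxy_needed:                   # proxy gate
        gates.append("proxy")
    if max_factor > 2.0 * min_factor:  # cross-database spread gate
        gates.append("cross_db_spread")
    return gates
```

A clean run triggers no gate; a run with thin coverage, a required proxy, and a wide cross-database spread pauses at all three.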
Cortex Chat takes a material name, a BOM row, or a standards question and returns ranked datasets with provenance. No spreadsheet setup. No database license required to ask.