HiQ Cortex

HiQ Cortex · A product of HiQ-AI

Your BOM has 300 rows.
Your auditor has questions about all of them.

Cortex matches emission factors across twelve databases, flags gaps, and exports with full provenance — every row sourced, DQI-scored, and ready for an auditor question.

It does not replace experts. It scales their judgment.

Three doors in

§ II

Cortex Chat and Cortex Cowork share the same engine.

Which door you take depends on the work in front of you.

§ 01

For LCA consultants

Sixty percent of your time goes to database lookup. Cortex compresses it: ask in Chinese or English, get DQI-scored candidates across HiQLCD and Ecoinvent, with every proxy assumption attached.

See BOM matching →
§ 02

For sustainability and CBAM teams

Your auditor wants the reasoning, not the number. Cortex pauses at three HITL gates — coverage, proxy, cross-db spread — so every proxy decision has a name, not a footnote.

Read the methodology →
§ 03

For developers building on LCA data

Four protocols: REST, MCP, AG-UI, A2A. One API key. Embed LCA retrieval in your own product without rebuilding the search layer.

Read the API docs →
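As a sketch of what the REST door might look like in practice: the base URL, endpoint path, and parameter names below are illustrative assumptions, not the published API — check the actual docs before building against it.

```python
import json
import urllib.request

# Hypothetical endpoint and field names -- illustrative only.
BASE = "https://api.example.com/v1"

def build_match_request(name: str, api_key: str, lang: str = "en"):
    """Build the HTTP request for one material lookup."""
    body = json.dumps({"query": name, "lang": lang}).encode()
    return urllib.request.Request(
        f"{BASE}/match",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

req = build_match_request("aluminium ingot", "your-api-key")
# Sending the request (urllib.request.urlopen(req)) would return
# ranked dataset candidates with provenance attached.
```

The point of "one API key" is visible here: a single bearer token, whichever of the four protocols you speak.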

What it is

§ III

One Cortex. Two products.

Cortex Chat handles conversations — single-session questions, BOM uploads, cross-database comparison. Cortex Cowork handles projects — local files, multi-session memory, agentic execution. Same engine underneath. Different surface for different work.

Why a snail

§ IV

We picked the opposite of a rocket, on purpose.

Every AI company chose a lightning bolt. We chose a snail. Here is the argument.

§ I Slow

Slow

Speed in LCA is a trap. The CBAM verifier does not want an answer — she wants the reasoning behind the answer. Cortex does not skip the system-boundary discussion to hand you a number. Slow is the discipline that keeps the output auditable.

§ II Shell

Shell

A snail carries everything it needs. Cortex works the same way: every dataset returned carries its source, its version, its system model, and its DQI scores — Temporal, Geographic, Technology, Completeness, Reliability. The output is not a number. It is a package you can hand to an auditor.
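A sketch of the shape such a package might take — the class and field names are illustrative assumptions, not Cortex's actual schema:

```python
from dataclasses import dataclass, field

# Illustrative shape of one auditable result "package"; names are
# assumptions, not Cortex's published schema.
@dataclass
class DatasetMatch:
    source_db: str        # e.g. "Ecoinvent"
    dataset_version: str  # e.g. "3.10"
    system_model: str     # e.g. "cut-off"
    dqi: dict = field(default_factory=dict)  # the five DQI dimensions

match = DatasetMatch(
    source_db="Ecoinvent",
    dataset_version="3.10",
    system_model="cut-off",
    dqi={"temporal": 2, "geographic": 1, "technology": 2,
         "completeness": 1, "reliability": 2},
)
```

Everything the auditor will ask about travels with the number, not alongside it.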

§ III Trail

Trail

A snail leaves a mark on everything it touches. Cortex keeps the full trail: which databases were searched, which candidates were ranked, which proxy was chosen and why, which gate paused the run. The auditor and the next practitioner can walk it back to the origin.

The backend is fast. What Cortex delivers is deliberately slow: a chain of reasoning you can audit, line by line.

Slow is the most important promise we can make.

What you actually get

§ V

Three claims. All provable from how Cortex works today.

§ 01
~10 min for a 100-row BOM

Each row returns DQI-scored candidates, source citations, and every proxy assumption named.

§ 02
Twelve LCA databases

HiQLCD, Ecoinvent, EF (Environmental Footprint), CarbonMinds, Tiangong, World Steel, and more — ranked by relevance and data quality, not alphabetically.

§ 03
Three human-in-the-loop gates

Cortex pauses when coverage drops below 80%, when a proxy is required, or when cross-database spread exceeds 2×. You decide at each gate. Nothing proceeds without you.
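The three gates above reduce to simple threshold checks. The numbers come from the text; the function shape is an illustrative sketch, not Cortex's implementation:

```python
def hitl_gates(coverage: float, proxy_needed: bool, spread_ratio: float) -> list:
    """Return which human-in-the-loop gates would pause this run.

    Thresholds per the text: coverage below 80%, any required proxy,
    or a cross-database spread above 2x.
    """
    gates = []
    if coverage < 0.80:
        gates.append("coverage")
    if proxy_needed:
        gates.append("proxy")
    if spread_ratio > 2.0:
        gates.append("spread")
    return gates

# A run with 72% coverage and a 2.4x spread pauses at two gates:
print(hitl_gates(0.72, False, 2.4))  # -> ['coverage', 'spread']
```

Any non-empty result means the run stops and waits for you; nothing proceeds past a gate on its own.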

Start with one question.

Cortex Chat takes a material name, a BOM row, or a standards question and returns ranked datasets with provenance. No spreadsheet setup. No database license required to ask.