HiQ Cortex

Solutions · BOM Matching

Every line in your BOM deserves the same question an expert would ask.

Cortex reads a Bill of Materials the way a practiced LCA consultant reads it — noticing which rows are clear, which are underspecified, and which need to be argued about before any number is assigned.

The flow

§ I

Five steps, always in order.

01 step

Upload or paste

Excel, CSV, or pasted text. No schema required. Cortex reads columns as a colleague would — names, specs, quantities, whatever free-form notes came with the BOM.
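The schema-free intake described above can be sketched in a few lines. This is a minimal illustration, not Cortex's actual parser: it assumes pasted delimiter-separated text with a header row, sniffs the delimiter, and keeps every column as free text.

```python
import csv
import io

def read_bom(text: str) -> list[dict]:
    """Read a pasted BOM without a fixed schema: detect the
    delimiter if possible, treat the first row as headers, and
    keep every other field as free-form text."""
    try:
        dialect = csv.Sniffer().sniff(text)
    except csv.Error:
        dialect = csv.excel  # fall back to comma-separated
    return list(csv.DictReader(io.StringIO(text), dialect=dialect))

pasted = "name,spec,qty\nhousing,aluminum,2\nfastener,steel M4,12"
bom = read_bom(pasted)
```

Nothing is coerced or validated at this stage; whatever column names the BOM came with are the column names downstream steps see.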

02 step

Clarify where it matters

For ambiguous rows — "aluminum", "steel", "plastic" — Cortex asks one or two questions before searching. Alloy grade, production route, region. Silence on low-signal fields.
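The "ask only where it matters" behavior can be sketched as a simple gate on how generic the material name is. The term list and question wording here are hypothetical stand-ins for Cortex's actual clarification logic.

```python
# Hypothetical list of names too generic to match a dataset directly.
GENERIC_TERMS = {"aluminum", "steel", "plastic", "metal", "polymer"}

def clarifying_questions(material: str) -> list[str]:
    """Ask one or two questions only for low-signal names;
    stay silent when the row is already specific."""
    if material.strip().lower() in GENERIC_TERMS:
        return [
            f"Which alloy or grade of {material}?",
            "Primary or secondary production route?",
            "Which production region?",
        ]
    return []  # specific enough: no question, straight to search
```

A row like "AlMg3 sheet, EU" would pass through silently; a bare "aluminum" would trigger the questions before any search runs.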

03 step

Search wide, score honest

HiQLCD, Ecoinvent, EF, CarbonMinds, and more — searched in parallel. Each candidate comes back with a Data Quality Indicator (DQI), system model, and a clickable source URL.
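The parallel fan-out and DQI-scored candidate list might look like the sketch below. The per-database search functions are stubs with made-up values; the sort assumes the pedigree-style convention where a lower DQI score means better data quality.

```python
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass

@dataclass
class Candidate:
    dataset: str
    database: str      # e.g. "HiQLCD", "Ecoinvent", "EF", "CarbonMinds"
    dqi: float         # Data Quality Indicator (assumed: lower is better)
    system_model: str
    url: str           # clickable source URL

# Hypothetical stubs; a real client would query each database's API.
def search_hiqlcd(q):
    return [Candidate(f"{q} (HiQLCD)", "HiQLCD", 1.4, "cut-off", "https://example.com/1")]

def search_ecoinvent(q):
    return [Candidate(f"{q} (ei)", "Ecoinvent", 1.8, "cut-off", "https://example.com/2")]

def search_all(query: str) -> list[Candidate]:
    """Fan out to every database in parallel, merge the candidates,
    and rank them by DQI."""
    with ThreadPoolExecutor() as pool:
        batches = pool.map(lambda search: search(query), [search_hiqlcd, search_ecoinvent])
    merged = [c for batch in batches for c in batch]
    return sorted(merged, key=lambda c: c.dqi)
```

Every candidate carries its database, system model, and URL, so the ranked list is auditable before anything is picked.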

04 step

Human-in-the-loop at the thresholds

Three hard gates pause automation: coverage below 80%, proxy substitution, or cross-database GWP spread over 2×. You decide — the agent doesn't silently pick.
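The three gates are concrete enough to write down directly. This sketch encodes the thresholds stated above (coverage below 80%, any proxy substitution, cross-database GWP spread over 2×); the function name and signature are illustrative.

```python
def review_gates(coverage: float, is_proxy: bool, gwp_values: list[float]) -> list[str]:
    """Return the hard gates that pause automation for human review.
    An empty list means the agent may proceed."""
    gates = []
    if coverage < 0.80:
        gates.append("coverage below 80%")
    if is_proxy:
        gates.append("proxy substitution")
    if gwp_values and min(gwp_values) > 0 and max(gwp_values) / min(gwp_values) > 2.0:
        gates.append("cross-database GWP spread over 2x")
    return gates
```

If any gate fires, the match waits for an explicit decision; the agent never resolves it silently.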

05 step

Deliver with provenance

Export to Excel with matched dataset, source database, region, DQI, system model, match type (exact / proxy / low-DQI), and notes. Every cell auditable back to the original factor.
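The export row can be sketched as a flat record with the fields listed above. The values are illustrative, and CSV stands in here for the real Excel export to keep the example dependency-free.

```python
import csv
import io

# The export columns named in the text; values below are illustrative only.
FIELDS = ["material_name", "matched_dataset", "source_database", "region",
          "dqi", "system_model", "match_type", "notes"]

row = {
    "material_name": "aluminum",
    "matched_dataset": "aluminium, primary, ingot",
    "source_database": "Ecoinvent",
    "region": "RER",
    "dqi": 1.8,
    "system_model": "cut-off",
    "match_type": "proxy",
    "notes": "generic 'aluminum' mapped to primary ingot; confirm alloy",
}

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=FIELDS)
writer.writeheader()
writer.writerow(row)
export_text = buf.getvalue()
```

Because the match type and substitution notes travel in the same row as the matched factor, each cell can be traced back without opening a second document.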

How we differ

§ II

Speed without provenance is a number you can't defend.

Most AI-for-PCF tools race to a single number. We race to a line of reasoning you can hand a regulator. Here's where the two approaches diverge.

Typical AI-for-PCF

Upload → minutes → one PCF number.

Cortex § 1

Upload → clarifying questions → top-k candidates → your approved match → PCF with every assumption attached.

Note: Speed is a side effect. Auditability is the product.

Typical AI-for-PCF

AI decomposes the product automatically.

Cortex § 2

AI proposes. The expert decides. Coverage, proxy use, and cross-database disagreements are never silent.

Note: If a machine can't explain why it picked factor A over B, neither can you when the auditor asks.

Typical AI-for-PCF

Confidence score from a model.

Cortex § 3

DQI — the ISO-recognized Data Quality Indicator. Same vocabulary your reviewers use.

Note: Use the industry's words. Don't invent new ones that need translating.

The output

§ III

One spreadsheet. Ten columns of accountability.

Every export contains the fields reviewers look for first: matched dataset, source database, region, DQI, system model, match type, proxy notes. The coverage gap is its own sheet, not a footnote.

Exported fields
  • 01 Material name (as provided)
  • 02 Matched dataset key + URL
  • 03 Source database (HiQLCD / Ecoinvent / EF / CarbonMinds)
  • 04 Region · Unit · System model
  • 05 DQI score
  • 06 Match type: exact · proxy · low-DQI
  • 07 Notes (what was substituted and why)