
Agentic enterprise procurement.

Source·AI drafts the technical specification, discovers qualified vendors, parses multi-vendor offers into a defensible comparison, and authors the award recommendation - with full traceability from requirement to contract.

Artifact: Evaluation matrix, award recommendation, contract redline
Authored by Source·AI · 14:22
RFP-HERMA-0412
  • SPEC · Technical spec with CPV 42994100 · 47 requirements
  • RFI · 12 vendors invited, 7 responded · auto
  • RFP · Offers parsed · 4 short-listed · auto
  • MTX · Scored matrix · MEAT weights applied · running
  • AWD · Draft award · defensible rationale · pending
Reviewed by category manager. Audit trail: every clarification, every score, every signature. Defensible to procedural review.
Scope

The full procurement lifecycle - one agent.

Source·AI covers the entire source-to-contract surface area. Technical spec drafting, vendor discovery, RFI, RFP, offer parsing, evaluation, negotiation prep, and contract clause extraction. One agent. One audit trail. One defensible decision.

  • Technical spec drafting
  • CPV & UNSPSC classification
  • Vendor discovery & shortlisting
  • RFI drafting & distribution
  • RFP authoring
  • Multi-vendor PDF offer parsing
  • Requirement-to-offer traceability
  • Weighted evaluation matrix
  • Gap & exception reports
  • Negotiation prep & counter-positions
  • Contract clause extraction
  • Redline drafting
  • Spend analytics
  • Supplier risk scoring
  • Audit-ready award file
How Source·AI works

Ingest → reason → draft → execute → defend.

Five agent stages, fully observable. Every step produces a category-aligned artifact, signed by a human reviewer, with an immutable audit trail back to the stakeholder requirement.

01 / INGEST

Read the requirement

Stakeholder briefs, prior specs, reference drawings, P&IDs, ERP master data, category SOPs, approved vendor list. Everything parsed and indexed inside your tenant.

02 / REASON

Classify & frame

CPV and UNSPSC codes assigned automatically. Category strategy applied. Market structure mapped. Threshold rules, internal policy, and incumbent risk surfaced before drafting.

03 / DRAFT

Author artifacts

Technical specification, RFI, RFP, evaluation matrix with weights. Drafts match your approved template, clause library, and commercial boilerplate.

04 / EXECUTE

Parse & score

Multi-vendor PDF offers parsed into a structured comparison. Requirements mapped to offer clauses. Scores computed per your MEAT weights. Gaps flagged for clarification.
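The scoring step described above can be sketched as a weighted sum over MEAT criteria. This is a minimal illustration, not the product's implementation; the 60/30/10 split matches the worked example elsewhere on this page, but the vendor names and subscores below are invented:

```python
# Minimal sketch of weighted MEAT scoring. Weights follow the 60/30/10
# example on this page; vendors and subscores are illustrative only.
MEAT_WEIGHTS = {"technical": 0.60, "commercial": 0.30, "lifecycle": 0.10}

def weighted_score(subscores: dict[str, float], weights: dict[str, float]) -> float:
    """Combine per-criterion subscores (0-100) into one weighted total."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[c] * subscores[c] for c in weights)

offers = {
    "VendorA": {"technical": 90.0, "commercial": 80.0, "lifecycle": 70.0},
    "VendorB": {"technical": 85.0, "commercial": 95.0, "lifecycle": 90.0},
}

# Rank vendors by weighted score, highest first.
ranking = sorted(offers, key=lambda v: weighted_score(offers[v], MEAT_WEIGHTS),
                 reverse=True)
```

A gap (a requirement a vendor did not answer) would simply be a missing key in `subscores`, flagged for clarification before scoring rather than silently zeroed.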

05 / DEFEND

Audit on demand

Tamper-evident ledger. Every clarification, every version, every score traceable to source. Award file is ready for internal audit and procedural review on day zero.
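A tamper-evident ledger of this kind is commonly built as a hash chain: each audit event is hashed together with the previous entry's hash, so editing any past entry invalidates everything after it. The sketch below illustrates the idea under that assumption; it is not the product's actual ledger, and the event fields are invented:

```python
import hashlib
import json

def append_event(ledger: list[dict], event: dict) -> dict:
    """Append an audit event, chaining it to the previous entry's hash."""
    prev = ledger[-1]["hash"] if ledger else "0" * 64
    body = json.dumps(event, sort_keys=True)
    entry = {
        "event": event,
        "prev": prev,
        "hash": hashlib.sha256((prev + body).encode()).hexdigest(),
    }
    ledger.append(entry)
    return entry

def verify(ledger: list[dict]) -> bool:
    """Recompute the chain; any edited or reordered entry breaks it."""
    prev = "0" * 64
    for e in ledger:
        body = json.dumps(e["event"], sort_keys=True)
        if e["prev"] != prev or e["hash"] != hashlib.sha256((prev + body).encode()).hexdigest():
            return False
        prev = e["hash"]
    return True

log: list[dict] = []
append_event(log, {"type": "clarification", "req": "REQ-33", "vendor": "KRN"})
append_event(log, {"type": "score", "req": "REQ-33", "value": 4})
```

After any retroactive change to an earlier event, `verify(log)` returns `False`, which is what makes the trail defensible rather than merely logged.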

Inputs · Outputs

What it reads. What it produces.

Source·AI connects to the data you already have and produces the artifacts your category team, legal, and audit function already recognise - nothing invented, nothing orphaned.

Data sources

  • Stakeholder briefs, prior URS, category strategies
  • Vendor offers (PDF, Word, Excel, email attachments)
  • Approved vendor list, prior contracts, rate cards
  • SAP Ariba, Coupa, Jaggaer, Ivalua, Oracle Fusion
  • Drawings, P&IDs, BOMs, equipment master data
  • CPV, UNSPSC, eCl@ss, NAICS classification corpora
  • ERP spend data (SAP S/4HANA, Oracle, NetSuite)
Source·AI

Artifacts produced

  • Technical specification with CPV / UNSPSC codes
  • RFI, RFP, and ITT packages
  • Requirement-to-offer traceability matrix
  • Weighted evaluation matrix (MEAT)
  • Gap report · clarification requests
  • Award recommendation with defensible rationale
  • Contract redline · clause library diff
Worked example

HERMA labeling-machine procurement.

A packaging line needed a new labeling machine. Four vendors, 47 technical requirements, 280 pages of offer PDFs, and a commercial deadline in two weeks. Source·AI parsed the offers, mapped every requirement to every vendor clause, and produced the evaluation matrix in under a day.

User: Source a servo-driven self-adhesive labeling machine for line 3. 47 requirements in SPEC-LBL-0412. Four vendors: HERMA, Krones, Sidel, Label-Aire. Evaluate per MEAT weights - 60% technical, 30% commercial, 10% lifecycle.

Source·AI: Parsing offers. HERMA 362C responds to 46/47 requirements. Krones Canmatic responds to 44/47 - missing integrated vision. Sidel misses three critical items. Label-Aire scopes 42/47. Drafting clarification requests for the three partial vendors. Sources: SPEC-LBL-0412, 4 vendor PDFs (280 pages), category MEAT template v3.1

User: Score on current evidence. Explain the ranking.

Source·AI: HERMA leads on technical (92%) and lifecycle (88%). Krones leads commercial (94%) but trails on vision integration. Weighted: HERMA 89.4, Krones 86.1, Sidel 78.3, Label-Aire 74.9. Award recommendation draft is ready with full requirement-by-vendor citation trail. Every score linked to offer clause and requirement ID. Defensible to internal audit.
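The transcript quotes HERMA's technical (92%), lifecycle (88%), and weighted (89.4) scores but not its commercial subscore. Under the stated 60/30/10 weights, that missing subscore is implied by the others and can be back-solved, a small arithmetic check on the figures above:

```python
# Back-solve HERMA's implied commercial subscore from the quoted figures:
# weighted = 0.6 * technical + 0.3 * commercial + 0.1 * lifecycle
technical, lifecycle, weighted = 92.0, 88.0, 89.4
commercial = (weighted - 0.6 * technical - 0.1 * lifecycle) / 0.3
```

This yields an implied commercial subscore of roughly 84.7, consistent with the claim that Krones (94%) leads on the commercial criterion.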

Evaluation Matrix · LBL-0412

SCORED
Leader

HERMA 362C · weighted score 89.4 / 100 · technical gap closed via clarification LBL-0412-C03.

Scored comparison

  Vendor       Tech   Weighted
  HERMA        92%    89.4
  Krones       86%    86.1
  Sidel        79%    78.3
  Label-Aire   74%    74.9

RTM: 47/47 requirements cited

Trace

SPEC-LBL-0412 → REQ-01…47 → OFFER-{HRM,KRN,SID,LAI} → SCORE
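The trace chain above can be modelled as one record per evaluation cell, linking a requirement through an offer clause to its score. A minimal sketch; the record shape and the clause reference below are illustrative assumptions, not the product's schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TraceLink:
    """One fully cited evaluation cell: requirement → offer clause → score."""
    spec_id: str     # source specification, e.g. SPEC-LBL-0412
    req_id: str      # requirement within the spec
    offer_id: str    # vendor offer document
    clause_ref: str  # clause/page in the offer answering the requirement
    score: float     # evaluator score for this cell

# Illustrative cell (clause reference invented for the example):
cell = TraceLink("SPEC-LBL-0412", "REQ-07", "OFFER-HRM", "section 4.2, p. 18", 5.0)
```

Walking every `TraceLink` in the matrix reconstructs the full requirement-by-vendor citation trail; a missing link is, by construction, a gap to clarify.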


Regulatory & policy framing

Architected around the regimes that matter.

Source·AI produces award files that survive internal audit, external review, and - where applicable - procedural challenge. Compliance isn’t a policy memo; it’s how the system works.

EU DIR 2014/24
Public procurement

Open, restricted, competitive dialogue, competitive with negotiation, innovation partnership. DPS and framework agreements. Proportionality and non-discrimination by construction.

OJEU
Notice publication

Contract notices drafted to the eForms schema. CPV coded. Thresholds and publication windows respected. Defensible to the OJ archive on day one.

UNCITRAL
Model procurement law

Transparency, competition, objectivity, accountability. The four pillars of defensible procurement encoded in every evaluation.

FAR / DFARS
US federal procurement

Source selection procedures, CLIN structures, representations and certifications, supplier responsibility determinations - mapped per category.

GDPR · NIS2
Data & cyber due diligence

Processor due-diligence questionnaires, supplier cyber-posture scoring, DPIA triggers - executed in-line during sourcing, not bolted on at contract.

ISO 20400
Sustainable procurement

ESG scoring embedded in MEAT weights. Scope 3 disclosures requested in RFI. Human-rights red flags surfaced from supplier geography and sector.

SOX · INT. AUDIT
Financial controls

Segregation of duties preserved. Agent authors; human approves. Tamper-evident evidence for materiality testing and controls testing.

EU AI ACT
High-risk AI governance

Human oversight, transparency, risk management, post-market monitoring. The agent’s own behaviour is governed alongside the procurement it executes.

Integrations

Sits on your stack. Your data never leaves your tenant.

S2P suite
SAP Ariba

Bi-directional connector. Source·AI drafts, Ariba transacts. Or archive agent outputs as validated sourcing events.

S2P suite
Coupa

Sourcing Optimization, Contract Lifecycle, and Supplier Information handoff of agent-authored content.

S2P suite
Jaggaer · Ivalua

Enterprise sourcing and contract modules synced with agent outputs via validated connector.

S2P suite
Oracle Fusion Procurement

Requisition, sourcing, purchasing, and supplier management objects bi-directionally aligned.

ERP
SAP S/4HANA · Oracle

Spend master data, material master, vendor master, and PO history feed into category reasoning.

CLM
Icertis · DocuSign CLM

Clause library diff, redline proposals, and negotiation history synchronised with agent drafts.

Risk
EcoVadis · D&B

Supplier ESG, financial, and cyber risk scores pulled live into shortlisting and evaluation.

Storage
SharePoint · Google Drive

Prior specs, drawings, and offer archives - read-native, write with governance and retention policy.

Identity
Okta · Entra ID

Enterprise SSO, SCIM provisioning, role-based agent permissions by category.


Proof · Production deployment

4 vendors, 280 pages of offer PDFs, evaluation matrix in under a day.

A European packaging manufacturer deployed Source·AI for capex sourcing of a labeling machine. Instead of two category managers working for three weeks, one category manager supervised the agent from spec to award file in under a week - with a full, clause-by-clause defensible trail to every score.

1d
PDF offers → scored matrix
47
Requirements traced to every vendor
68%
Sourcing cycle compression
// agent log · 08:12:04
ingest: 4 offers · 280 pp · OCR + table
classify: CPV 42994100 · UNSPSC 24.11.15
map: REQ-01..47 → OFFER-{HRM,KRN,SID,LAI}
flag: OFFER-KRN · REQ-33 · gap · vision
score: MEAT 60/30/10 · HRM 89.4 leader
draft: award file · defensible rationale
// file · 16:47:22
elapsed: 8h 35m · human review: 58m

Stop authoring RFPs. Start awarding them.

Book a 45-minute working session with a forward-deployed engineer. Bring a real category, a real set of offer PDFs, or a real sourcing backlog.

Book a working session

Or email hello@qualitum.ai