Multimodal Agent Interface
What if the agent were the interface — and the forms were the fallback?
Scorecard
| Dimension | Score | Evidence |
|---|---|---|
| Pain | 5/5 | Every workflow requires navigation + form-fill. 8 WorkCharts with no conversational entry. |
| Demand | 4/5 | HubSpot AI, Salesforce Einstein, and Clay are shipping conversational CRM. Not yet validated with users. |
| Edge | 3/5 | Agency lib (8 WorkCharts + skill router + A2A) + modalities knowledge. No proprietary data. |
| Trend | 5/5 | Omnimodal models (GPT-4o, Claude, Gemini Live) make multimodal input table stakes by 2027. |
| Conversion | 2/5 | AI feature pricing not validated; bundled vs. premium tier unclear. No pilot. |
| Composite | 600 | 5 × 4 × 3 × 5 × 2 |
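The composite score is the product of the five dimension scores; a minimal arithmetic check (dimension names taken from the table above):

```python
from math import prod

# Dimension scores from the scorecard table.
scores = {"Pain": 5, "Demand": 4, "Edge": 3, "Trend": 5, "Conversion": 2}

composite = prod(scores.values())
print(composite)  # → 600
```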
Kill signal: Chat widget ships but fewer than 10% of tasks go through conversation after 30 days. Users bypass the agent and navigate directly to forms.
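The kill-signal check reduces to a single ratio over the task log. A sketch of that measurement, assuming a hypothetical event schema (the field names "channel", "conversation", and "form" are illustrative, not a real schema from this product):

```python
# Hypothetical 30-day task log: each completed task is tagged with its entry channel.
tasks = [
    {"id": 1, "channel": "conversation"},
    {"id": 2, "channel": "form"},
    {"id": 3, "channel": "form"},
]

def conversational_share(tasks: list[dict]) -> float:
    """Fraction of tasks entered through the conversational surface."""
    if not tasks:
        return 0.0
    return sum(t["channel"] == "conversation" for t in tasks) / len(tasks)

KILL_THRESHOLD = 0.10  # kill signal fires below 10% after 30 days

share = conversational_share(tasks)
print(f"conversational share: {share:.0%}, kill signal: {share < KILL_THRESHOLD}")
```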
Context
- Agent Platform — Dependency: identity, memory, comms, dispatch
- Identity & Access — Dependency: auth required before any user-facing interface
- Sales CRM & RFP — Peer: same vertical, different layer (data model vs access method)
- Sales Dev Agent — Peer: autonomous agent logic consumes this conversation surface
- AI Modalities — The 7x7 transformation matrix informing input channels
- Commissioning Dashboard — L0-L4 status for all features
Questions
- What separates an AI assistant from an AI colleague — and which one would a sales rep actually use?
- If the agent can't remember last week's conversation about the Acme deal, does it save time or create more work?
- Which of the 49 modality transformations (7x7 matrix) produces the highest ROI for the first version?
- At what point does the agent's initiative cross from helpful to intrusive — and who decides the boundary?
- If competitors ship conversational CRM before us, does the WorkChart architecture still differentiate?
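The "49 modality transformations" in the question above come from crossing seven input modalities with seven output modalities. The document does not name the seven, so the list below is an illustrative placeholder; only the 7 × 7 = 49 structure is taken from the source:

```python
from itertools import product

# Placeholder modality names — the source references a 7x7 matrix without
# enumerating its axes, so these seven are assumptions for illustration.
MODALITIES = ["text", "image", "audio", "video", "code", "table", "action"]

# Every (input, output) pair is one candidate transformation to rank by ROI.
transformations = list(product(MODALITIES, MODALITIES))
print(len(transformations))  # → 49
```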