Multimodal
Getting started
The @nlxai/multimodal package is used to implement multimodal conversational applications. By installing the SDK and creating a client instance specific to a journey, you can send steps in response to any user interaction, triggering feedback on a second channel (e.g. voice).
Setup
On a webpage:
```html
<script defer src="https://unpkg.com/@nlxai/multimodal/lib/index.umd.js"></script>
<script>
  // The SDK script is deferred, so wait until it has loaded before using it
  window.addEventListener("DOMContentLoaded", () => {
    const client = nlxai.multimodal.create({
      // hard-coded params
      apiKey: "REPLACE_WITH_API_KEY",
      workspaceId: "REPLACE_WITH_WORKSPACE_ID",
      journeyId: "REPLACE_WITH_JOURNEY_ID",
      // dynamic params
      conversationId: "REPLACE_WITH_CONVERSATION_ID",
      languageCode: "REPLACE_WITH_LANGUAGE_CODE",
    });

    client.sendStep("REPLACE_WITH_STEP_ID");
  });
</script>
```
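In practice, a step is usually sent in response to a user interaction rather than immediately after setup. A minimal sketch, placed inside the same listener as the client setup above (the element ID and step ID are placeholders):

```js
// Inside the DOMContentLoaded listener, after the client is created.
// "confirm-button" is a hypothetical element ID; attach to your own trigger.
document.getElementById("confirm-button")?.addEventListener("click", () => {
  client.sendStep("REPLACE_WITH_STEP_ID");
});
```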
In a bundled JavaScript application or Node.js:
```js
import * as multimodal from "@nlxai/multimodal";

const client = multimodal.create({
  // hard-coded params
  apiKey: "REPLACE_WITH_API_KEY",
  workspaceId: "REPLACE_WITH_WORKSPACE_ID",
  journeyId: "REPLACE_WITH_JOURNEY_ID",
  // dynamic params
  conversationId: "REPLACE_WITH_CONVERSATION_ID",
  languageCode: "REPLACE_WITH_LANGUAGE_CODE",
});

client.sendStep("REPLACE_WITH_STEP_ID");
```
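In a real application, the dynamic params are typically not hard-coded: the conversation ID is handed to the page at runtime (for example via a URL query parameter), and steps are sent from event handlers. A minimal sketch under those assumptions; the `cid` parameter name and the `confirm-button` element ID are placeholders, not part of the SDK:

```js
import * as multimodal from "@nlxai/multimodal";

// Assumption: the conversation ID arrives as a "cid" query parameter;
// use whatever your journey link actually provides.
const conversationId = new URLSearchParams(window.location.search).get("cid");

if (conversationId != null) {
  const client = multimodal.create({
    // hard-coded params
    apiKey: "REPLACE_WITH_API_KEY",
    workspaceId: "REPLACE_WITH_WORKSPACE_ID",
    journeyId: "REPLACE_WITH_JOURNEY_ID",
    // dynamic params
    conversationId,
    languageCode: "en-US",
  });

  // Send the step when the user completes an action instead of on load.
  document.getElementById("confirm-button")?.addEventListener("click", () => {
    client.sendStep("REPLACE_WITH_STEP_ID");
  });
}
```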