How to build an Interactive AI Tutor with Llama 3.1

```jsx
// app/page.tsx
function Page() {
  const [topic, setTopic] = useState('');
  const [grade, setGrade] = useState('');

  return (
    <form>
      <input
        value={topic}
        onChange={(e) => setTopic(e.target.value)}
        placeholder="Teach me about..."
      />
      <select value={grade} onChange={(e) => setGrade(e.target.value)}>
        {/* six education-level options */}
      </select>
    </form>
  );
}
```

```jsx
// app/page.tsx
function Page() {
  const [topic, setTopic] = useState('');
  const [grade, setGrade] = useState('');

  async function handleSubmit(e) {
    e.preventDefault();

    let response = await fetch('/api/getSources', {
      method: 'POST',
      body: JSON.stringify({ topic }),
    });
    let sources = await response.json(); // This fetch() will 404 for now
  }

  return (
    <form onSubmit={handleSubmit}>
      {/* same <input> and <select> as above */}
    </form>
  );
}
```

```js
// app/api/getSources/route.js
export async function POST(req) {
  let json = await req.json();
  // `json.topic` has the user's text
}
```

```js
// app/api/getSources/route.js
import { NextResponse } from 'next/server';

export async function POST(req) {
  const json = await req.json();

  const params = new URLSearchParams({
    q: json.topic,
    mkt: 'en-US',
    count: '6',
    safeSearch: 'Strict',
  });

  const response = await fetch(
    `https://api.bing.microsoft.com/v7.0/search?${params}`,
    {
      method: 'GET',
      headers: {
        'Ocp-Apim-Subscription-Key': process.env['BING_API_KEY'],
      },
    },
  );
  const { webPages } = await response.json();

  return NextResponse.json(
    webPages.value.map((result) => ({
      name: result.name,
      url: result.url,
    })),
  );
}
```

.env.local

```
BING_API_KEY=xxxxxxxxxxxx
```

```jsx
// app/page.tsx
function Page() {
  const [topic, setTopic] = useState('');
  const [grade, setGrade] = useState('');

  async function handleSubmit(e) {
    e.preventDefault();

    const response = await fetch('/api/getSources', {
      method: 'POST',
      body: JSON.stringify({ topic }),
    });
    const sources = await response.json();

    // Log the response from our new endpoint
    console.log(sources);
  }

  // ...
}
```

```jsx
// app/page.tsx
function Page() {
  const [topic, setTopic] = useState('');
  const [grade, setGrade] = useState('');
  const [sources, setSources] = useState([]);

  async function handleSubmit(e) {
    e.preventDefault();

    const response = await fetch('/api/getSources', {
      method: 'POST',
      body: JSON.stringify({ topic }),
    });
    const sources = await response.json();

    // Update the sources with our API response
    setSources(sources);
  }

  return (
    <>
      <form onSubmit={handleSubmit}>{/* ... */}</form>

      {/* Display the sources */}
      {sources.length > 0 && (
        <div>
          <p>…</p>
          <ul>
            {sources.map((source) => (
              <li key={source.url}>
                <a href={source.url}>{source.name}</a>
              </li>
            ))}
          </ul>
        </div>
      )}
    </>
  );
}
```

```jsx
// app/page.tsx
function Page() {
  // ...

  async function handleSubmit(e) {
    e.preventDefault();

    const response = await fetch('/api/getSources', {
      method: 'POST',
      body: JSON.stringify({ topic }),
    });
    const sources = await response.json();
    setSources(sources);

    // Send the sources to a new endpoint
    const parsedSourcesRes = await fetch('/api/getParsedSources', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ sources }),
    }); // The second fetch() will 404 for now
  }

  // ...
}
```

```js
// app/api/getParsedSources/route.js
export async function POST(req) {
  let json = await req.json();
  // `json.sources` has the websites from Bing
}
```

```js
async function getTextFromURL(url) {
  // 1. Use fetch() to get the HTML content
  // 2. Use the `jsdom` library to parse the HTML into a JavaScript object
  // 3. Use `@mozilla/readability` to clean the document and
  //    return only the main text of the page
}
```

```bash
npm i jsdom @mozilla/readability
```

```js
import { JSDOM, VirtualConsole } from 'jsdom';
import { Readability } from '@mozilla/readability';

async function getTextFromURL(url) {
  // 1. Use fetch() to get the HTML content
  const response = await fetch(url);
  const html = await response.text();

  // 2. Use the `jsdom` library to parse the HTML into a JavaScript object
  const virtualConsole = new VirtualConsole();
  const dom = new JSDOM(html, { virtualConsole });

  // 3. Use `@mozilla/readability` to clean the document and
  //    return only the main text of the page
  const { textContent } = new Readability(dom.window.document).parse();

  return textContent;
}
```

```js
// app/api/getParsedSources/route.js
export async function POST(req) {
  let json = await req.json();

  let textContent = await getTextFromURL(json.sources[0].url);
  console.log(textContent);
}
```

```js
// app/api/getParsedSources/route.js
export async function POST(req) {
  let json = await req.json();

  let results = await Promise.all(
    json.sources.map((source) => getTextFromURL(source.url)),
  );
  console.log(results);
}
```

```jsx
// app/page.tsx
function Page() {
  // ...

  async function handleSubmit(e) {
    e.preventDefault();

    const response = await fetch('/api/getSources', {
      method: 'POST',
      body: JSON.stringify({ topic }),
    });
    const sources = await response.json();
    setSources(sources);

    const parsedSourcesRes = await fetch('/api/getParsedSources', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ sources }),
    });

    // The text from each source
    const parsedSources = await parsedSourcesRes.json();
  }

  // ...
}
```

```jsx
// app/page.tsx
function Page() {
  const [messages, setMessages] = useState([]);
  // ...

  async function handleSubmit(e) {
    // ...

    // The text from each source
    const parsedSources = await parsedSourcesRes.json();

    // Start our chatbot
    const systemPrompt = `
      You're an interactive personal tutor who is an expert at explaining topics.
      Given a topic and the information to teach, please educate the user about it
      at a ${grade} level.

      Here's the information to teach:

      ${parsedSources
        .map((result, index) => `## Webpage #${index}:\n ${result.fullContent} \n\n`)
        .join('')}
    `;

    const initialMessages = [
      { role: 'system', content: systemPrompt },
      { role: 'user', content: topic },
    ];
    setMessages(initialMessages);

    // This will 404 for now
    const chatRes = await fetch('/api/chat', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ messages: initialMessages }),
    });
  }

  // ...
}
```

```bash
npm i together-ai
```

```js
// app/api/chat/route.js
import Together from 'together-ai';

const together = new Together();

export async function POST(req) {
  const json = await req.json();

  const res = await together.chat.completions.create({
    model: 'meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo',
    messages: json.messages,
    stream: true,
  });

  return new Response(res.toReadableStream());
}
```

```jsx
// app/page.tsx
function Page() {
  const [messages, setMessages] = useState([]);
  // ...

  async function handleSubmit(e) {
    // ...

    const chatRes = await fetch('/api/chat', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ messages: initialMessages }),
    });

    ChatCompletionStream.fromReadableStream(chatRes.body).on('content', (delta) =>
      setMessages((prev) => {
        const lastMessage = prev[prev.length - 1];

        if (lastMessage.role === 'assistant') {
          return [
            ...prev.slice(0, -1),
            { ...lastMessage, content: lastMessage.content + delta },
          ];
        } else {
          return [...prev, { role: 'assistant', content: delta }];
        }
      }),
    );
  }

  // ...
}
```

```jsx
// app/page.tsx
function Page() {
  const [topic, setTopic] = useState('');
  const [grade, setGrade] = useState('');
  const [sources, setSources] = useState([]);
  const [messages, setMessages] = useState([]);

  async function handleSubmit(e) {
    // ...
  }

  return (
    <>
      <form onSubmit={handleSubmit}>{/* ... */}</form>

      {/* Display the sources */}
      {sources.length > 0 && (
        <div>{/* ... */}</div>
      )}

      {/* Display the messages */}
      {messages.map((message, i) => (
        <p key={i}>{message.content}</p>
      ))}
    </>
  );
}
```

```jsx
// app/page.tsx
function Page() {
  // ...
  const [newMessageText, setNewMessageText] = useState('');

  return (
    <>
      {/* Form for initial messages */}
      {messages.length === 0 && (
        <form onSubmit={handleSubmit}>{/* ... */}</form>
      )}

      {sources.length > 0 && <>{/* ... */}</>}

      {messages.map((message, i) => (
        <p key={i}>{message.content}</p>
      ))}

      {/* Form for follow-up messages */}
      {messages.length > 0 && (
        <form>
          <input
            value={newMessageText}
            onChange={(e) => setNewMessageText(e.target.value)}
            type="text"
          />
        </form>
      )}
    </>
  );
}
```

```jsx
// app/page.tsx
function Page() {
  const [messages, setMessages] = useState([]);
  // ...

  async function handleMessage(e) {
    e.preventDefault();

    const newMessages = [...messages, { role: 'user', content: newMessageText }];

    const chatRes = await fetch('/api/chat', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ messages: newMessages }),
    });

    setMessages(newMessages);

    ChatCompletionStream.fromReadableStream(chatRes.body).on('content', (delta) =>
      setMessages((prev) => {
        const lastMessage = prev[prev.length - 1];

        if (lastMessage.role === 'assistant') {
          return [
            ...prev.slice(0, -1),
            { ...lastMessage, content: lastMessage.content + delta },
          ];
        } else {
          return [...prev, { role: 'assistant', content: delta }];
        }
      }),
    );
  }

  // ...
}
```

Learn how we built LlamaTutor from scratch – an open-source AI tutor with 90k users.

LlamaTutor is an app that creates an interactive tutoring session for a given topic using Together AI's open-source LLMs. It pulls multiple sources from the web with either Bing's API or Serper's API, then uses the text from those sources to kick off an interactive tutoring session with the user.

In this post, you'll learn how to build the core parts of LlamaTutor. The app is open source and built with Next.js and Tailwind, but Together's API works great with any language or framework.

LlamaTutor's core interaction is a text field where the user can enter a topic, and a dropdown that lets the user choose which education level the material should be taught at. In the main page component, we'll render an `<input>` and a `<select>`, and control both using some new React state.

When the user submits our form, our submit handler ultimately needs to do three things: fetch relevant websites for the topic, extract the text from each one, and use that text to kick off a tutoring session with the LLM.

Let's start by fetching the websites with Bing.
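The Bing Web Search endpoint takes all of its options as URL query parameters. As a quick, runnable sketch of how the query string gets assembled with `URLSearchParams` (the topic value here is just a placeholder):

```javascript
// Sketch: assembling the Bing Web Search query string.
// `q` is the user's topic; `count: '6'` asks Bing for six results.
const params = new URLSearchParams({
  q: 'photosynthesis', // placeholder topic
  mkt: 'en-US',
  count: '6',
  safeSearch: 'Strict',
});

// URLSearchParams handles the encoding for us
const url = `https://api.bing.microsoft.com/v7.0/search?${params}`;
console.log(url);
```

Interpolating a `URLSearchParams` object into a template literal calls its `toString()`, which produces a properly encoded query string.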
We'll wire up a submit handler to our form that makes a POST request to a new /api/getSources endpoint. If we submit the form, we'll see our React app make a request to /api/getSources. Let's go implement this API route.

To create our API route, we'll make a new app/api/getSources/route.js file. The Bing API lets you make a fetch request to get back search results, so we'll use it to build up our list of sources. In order to make a request to Bing's API, you'll need to get an API key from Microsoft. Once you have it, set it in .env.local, and our API handler should work.

Let's try it out from our React app! We'll log the sources in our submit handler, and if we try submitting a topic, we'll see an array of pages logged in the console. Let's create some new React state to store the responses and display them in our UI.

If we try it out, our app is working great so far! We're taking the user's topic, fetching six relevant web sources from Bing, and displaying them in our UI. Next, let's get the text content from each website so that our AI model has some context for its first response.

Let's make a request to a second endpoint called /api/getParsedSources, passing along the sources in the request body. We'll create a file at app/api/getParsedSources/route.js for our new route.

Now we're ready to actually get the text from each one of our sources. Let's write a new getTextFromURL function and outline our general approach. To implement it, we'll start by installing the jsdom and @mozilla/readability libraries, then fill in the steps.

Looks good – let's try it out! We'll run the first source through getTextFromURL. If we submit our form, we'll see the text from the first page show up in our server terminal! Let's update the code to get the text from all the sources.
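One property worth knowing before fetching everything at once: `Promise.all` resolves to an array in the same order as its input, regardless of which promise settles first. That's what will let us number each webpage deterministically in the system prompt later. A stdlib-only sketch with a stand-in parser (the real `getTextFromURL` fetches the page and runs Readability):

```javascript
// Stand-in for the real getTextFromURL: resolves after a random delay,
// so the promises deliberately settle out of order.
async function getTextFromURL(url) {
  await new Promise((resolve) => setTimeout(resolve, Math.random() * 20));
  return `text of ${url}`;
}

const sources = [{ url: 'https://a.example' }, { url: 'https://b.example' }];

async function main() {
  // Results come back in input order, not completion order.
  const results = await Promise.all(sources.map((s) => getTextFromURL(s.url)));
  return results;
}
```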
Since each source is independent, we can use Promise.all to kick off our functions in parallel. If we try again, we'll now see an array of each web page's text logged to the console. We're ready to use the parsed sources in our React frontend!

Back in our React app, we now have the text from each source in our submit handler, so we're ready to kick off our chatbot. We'll use the selected grade level and the parsed sources to write a system prompt, and pass in the selected topic as the user's first message. We also created some new React state to store all the messages so that we can display and update the chat history as the user sends new messages.

We're ready to implement our final API endpoint at /api/chat! Let's install Together AI's Node SDK and use it to query Llama 3.1 8B Turbo. Since we're passing the array of messages directly from our React app, and the format is the same as what Together's chat.completions.create method expects, our API handler is mostly acting as a simple passthrough. We're also using the stream: true option so our frontend will be able to show partial updates as soon as the LLM starts its response.

We're ready to display our chatbot's first message in our React app! Back in our page, we'll use the ChatCompletionStream helper from Together's SDK to update our messages state as our API endpoint streams in text. Note that because we're storing the entire history of messages as an array, we check the last message's role to determine whether to append the streamed text to it, or push a new object with the assistant's initial text.

Now that our messages React state is ready, let's update our UI to display it. If we try it out, we'll see the sources come in, and once our chat endpoint responds with the first chunk, we'll see the answer text start streaming into our UI!
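The append-or-push update inside the `'content'` handler is a pure function of the previous messages and the incoming delta, so it can be pulled out and sanity-checked on its own (the name `applyDelta` is ours, not from the app):

```javascript
// Mirrors the setMessages updater from the stream handler: append the
// delta to an in-progress assistant message, or start a new one.
function applyDelta(prev, delta) {
  const lastMessage = prev[prev.length - 1];

  if (lastMessage.role === 'assistant') {
    return [
      ...prev.slice(0, -1),
      { ...lastMessage, content: lastMessage.content + delta },
    ];
  } else {
    return [...prev, { role: 'assistant', content: delta }];
  }
}

let messages = [{ role: 'user', content: 'Teach me about volcanoes' }];
for (const delta of ['Vol', 'canoes ', 'are...']) {
  messages = applyDelta(messages, delta);
}
```

The first delta pushes a new assistant message; every subsequent delta is appended to it, so the UI re-renders with a progressively longer reply.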
To let the user ask our tutor follow-up questions, let's make a new form that only shows up once we have some messages in our React state. We'll make a new submit handler called handleMessage that looks a lot like the end of our first handleSubmit function. Because we have all the messages in React state, we can just create a new object for the user's latest message, send it over to our existing /api/chat endpoint, and reuse the same logic to update our app's state as the latest response streams in.

The core features of our app are working great! React and Together AI are a perfect match for building powerful chatbots like LlamaTutor.

The app is fully open source, so if you want to keep working on the code from this tutorial, be sure to check it out on GitHub: https://github.com/Nutlope/llamatutor

And if you're ready to start building your own chatbots, sign up for Together AI today and make your first query in minutes!