Every Canvas Quiz Question Type, Explained
A field guide to every question type Canvas supports — multiple choice, multi-select, dropdown, fill-in-the-blank, multi-blank, numerical, matching, and more — plus what most AI tools get wrong about each one.
Canvas supports more question types than most students realize, and the differences matter a lot when you're trying to get answers quickly. Here's a complete walkthrough of every Canvas quiz question type — Classic Quizzes and New Quizzes both — and what most AI quiz tools get wrong about each one.
Multiple Choice
The default. One question, four to six options, exactly one correct answer. Renders as radio buttons.
What goes wrong:
- Multiple choice is the easiest case for any AI tool. Almost every tool (Cheatmate, Answerly, AnswersAi, ChatGPT) handles it correctly.
- The only real failure mode is when the AI returns an answer that isn't one of the options — which happens when the model paraphrases instead of matching the exact text. Tools that validate the returned answer against the actual options (like ExamClutch) avoid this; tools that just spit out text don't.
Multi-Select (Multiple Answers)
One question, several options, two or more correct answers. Renders as checkboxes. Canvas grades partial credit by default — you only get full credit if you select every right answer and no wrong ones.
What goes wrong:
- Most AI tools are tuned to return one answer. When the question is multi-select, they pick the single most likely option and miss the others.
- A correct multi-select tool needs to know the question type before asking the AI and explicitly request an array of answers.
- ExamClutch's backend tags the question type before asking the model and validates that the returned answer is an array of exact option matches — see our main Canvas Quiz Answers page for the technical breakdown.
Dropdown
Single-select, but rendered as a `<select>` element instead of radios. Canvas uses dropdowns mostly inside multi-blank questions ("Fill in the blank with the correct dropdown choice for each missing word").
What goes wrong:
- Sidebar tools that just give you text don't know which dropdown to apply the answer to — you do that part manually.
- Inline tools have to handle Canvas's custom dropdown rendering (some Canvas dropdowns aren't native `<select>` elements; they're styled divs that need a different click strategy).
Fill-in-the-Blank (Short Answer)
A text input. You type the answer. Canvas grades against an instructor-defined list of accepted strings — usually case-insensitive, often with multiple acceptable spellings.
What goes wrong:
- Models will sometimes return punctuation or articles ("The answer is X.") instead of just the answer ("X"). Good tools strip the noise; bad ones submit the wrong string.
- The instructor's accepted-answer list is invisible — the AI has to guess the form the professor wants. Generally returning the most concise, standard form is the safest bet.
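Stripping that noise is a small normalization pass. Here's a hypothetical sketch (the function name and exact rules are illustrative, not any tool's actual pipeline): drop a "The answer is" preamble, surrounding quotes, trailing punctuation, and a leading article.

```python
import re

def normalize_blank_answer(raw: str) -> str:
    """Reduce a model's sentence-shaped reply to a bare short answer.

    Hypothetical sketch: "The answer is mitochondria." -> "mitochondria".
    """
    text = raw.strip()
    # Drop a "The answer is ..." style preamble if present.
    text = re.sub(r"^the\s+answer\s+is\s+", "", text, flags=re.IGNORECASE)
    # Drop surrounding quotes and trailing sentence punctuation.
    text = text.strip(' "\'').rstrip(".!")
    # Drop a leading article; accepted-answer lists rarely include one.
    text = re.sub(r"^(the|a|an)\s+", "", text, flags=re.IGNORECASE)
    return text
```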
Fill-in-Multiple-Blanks
A single question with multiple text inputs, each with a unique blank ID. Common in language quizzes ("Complete the sentence: The capital of France is [blank1] and the capital of Germany is [blank2].")
What goes wrong:
- Almost every sidebar tool fails here — they return one answer string and you have to manually figure out which blank gets which word.
- Inline tools need to detect the multiple inputs and split the AI's response into the right blanks. ExamClutch handles this case explicitly: each blank gets its own field-mapped answer value.
Multiple Dropdowns
Same idea as multiple blanks, but each blank is a `<select>` instead of a text input. Common in chemistry, anatomy, and language quizzes.
What goes wrong:
- Same as multi-blank — sidebar tools punt, you do the mapping by hand.
Numerical Answer
Like fill-in-the-blank but the input only accepts numbers. Canvas grades against an exact value or a tolerance range the instructor sets.
What goes wrong:
- Models occasionally return units ("9.8 m/s²") when Canvas expects just the number ("9.8"). Canvas's number-only input will reject a value that still carries units.
- For tolerance-graded questions, you don't need to be exact — but you don't know the tolerance, so being precise is always safer.
Matching
Two columns. You match items in column A to items in column B, usually with a dropdown next to each item.
What goes wrong:
- This one is genuinely hard for AI tools because the question structure is non-linear. Sidebar tools struggle to even parse it; you end up dictating each pair manually.
- Inline tools that understand Canvas's matching DOM (each row is a separate dropdown) can solve all pairs in one shot.
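Before filling any row, a one-shot solver should sanity-check the model's proposed pairs. A hypothetical sketch of that check (not any tool's actual code): every left-column item needs exactly one answer, and every answer must be a real right-column option.

```python
def validate_matching(pairs: dict[str, str], left: list[str], right: list[str]) -> bool:
    """Check a model's proposed matching before touching the dropdowns.

    Hypothetical sketch: pairs maps left-column items to right-column
    answers. Duplicate right-side answers are allowed, since Canvas
    matching questions can reuse a choice across rows.
    """
    return set(pairs) == set(left) and all(v in right for v in pairs.values())
```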
Essay
Free-form long answer. Canvas does not auto-grade these; the professor reads them.
What goes wrong:
- Useful for outlining your answer, but you cannot meaningfully "auto-fill" an essay — your professor will read it and notice if it sounds like ChatGPT.
- Best practice: get a rough outline from the AI, then write the actual essay in your own voice.
File Upload
You upload a document. Canvas does not auto-grade.
What goes wrong:
- AI tools can help you produce the file, but you upload it yourself.
True/False
Two-option multiple choice. The simplest case — every tool handles it correctly.
Formula Question (Calculated)
Canvas substitutes random values into a formula and asks you to solve. Each student sees different numbers.
What goes wrong:
- The question text contains the actual numbers for your specific attempt, so an AI can solve it — but the answer is unique to your variable values and won't be cached for other students.
- ExamClutch handles this correctly because it reads your specific question text on each attempt.
New Quizzes question types
Canvas's New Quizzes (the Quiz LTI replacement for Classic Quizzes) supports everything above plus:
- Categorization — drag items into category buckets.
- Hot Spot — click on a region of an image.
- Ordering — arrange items in the correct sequence.
- Stimulus — a passage with multiple sub-questions about it.
What goes wrong:
- Categorization, Hot Spot, and Ordering are inherently click-and-drag interfaces. AI tools can tell you the answer ("category 1: A, B; category 2: C, D"), but most can't perform the drag for you.
- Stimulus questions wrap a passage around several sub-questions. Tools that don't carry the passage context into each sub-question miss easy points.
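Even when a tool can only tell you the answer as text, that "category 1: A, B; category 2: C, D" format is easy to parse into buckets you (or an inline tool) can act on. A hypothetical sketch of that parser:

```python
def parse_categories(answer: str) -> dict[str, list[str]]:
    """Parse a 'category 1: A, B; category 2: C, D' style answer.

    Hypothetical sketch of the text format quoted above: semicolons
    separate buckets, a colon separates each bucket's name from its
    comma-separated items.
    """
    buckets: dict[str, list[str]] = {}
    for chunk in answer.split(";"):
        if ":" not in chunk:
            continue  # skip malformed fragments
        name, items = chunk.split(":", 1)
        buckets[name.strip()] = [i.strip() for i in items.split(",") if i.strip()]
    return buckets
```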
Quick reference
| Question type | Sidebar tool | Web app | Inline extension |
|---|---|---|---|
| Multiple Choice | ✅ | ✅ | ✅ |
| Multi-Select | ⚠️ Partial | ⚠️ Partial | ✅ |
| Dropdown | ⚠️ Manual apply | ⚠️ Manual apply | ✅ |
| Fill-in-the-Blank | ✅ | ✅ | ✅ |
| Multi-Blank | ⚠️ Manual mapping | ⚠️ Manual mapping | ✅ |
| Multiple Dropdowns | ⚠️ Manual mapping | ⚠️ Manual mapping | ✅ |
| Numerical | ✅ | ✅ | ✅ |
| Matching | ⚠️ Tedious | ⚠️ Tedious | ✅ |
| True/False | ✅ | ✅ | ✅ |
| Formula | ✅ | ✅ | ✅ |
| Essay | Outline only | Outline only | Outline only |
Bottom line
If your professor only uses multiple choice and true/false, almost any AI tool works fine. The moment your quizzes mix in multi-select, multi-blank, matching, or New Quizzes interactive types, the gap between an inline-DOM tool and a sidebar/web app gets very wide — sidebars and web apps return correct text but make you do the mapping work, and that work adds up across a 25-question quiz.
ExamClutch handles every Canvas question type listed above (except essay, which we deliberately don't auto-fill). If you're studying on Canvas regularly and your quizzes go beyond multiple choice, the inline workflow is the difference between finishing in 3 minutes and finishing in 18.