Math Academy

An impressive new CAI platform.

  • Things I admire
    • Interleaving progressively scaffolded (Cognitive scaffolding) active learning tasks into the reading experience—i.e. instead of reading for 10–20 minutes and then doing exercises, you read for a minute, do a task, read another minute, and so on
    • Worked examples consistently presented before user tasks (Worked example effect)
    • Diagnostics offer empowering clarity: take a test, then see “OK, here’s where your holes are—and we’re going to fill them.” It’s powerful even though the visual presentation of the data is very poor.
    • XP mechanisms—a standardized representation of “pace” (1 XP is meant to approximate 1 min of work)
    • Input/effort-based, mostly not output/result-based
    • Displays predicted completion time for the course based on your current pace; also displays how long it would take at various other paces, so you can see the consequences of changing pace
    • Leaderboards offer another answer to “Is my pace reasonable?” You can see: “OK, this pace is roughly average” or “I’m spending a lot more time on this than most people”.
    • Small XP bonuses for 100%-correctness discourage sloppiness
    • Interleaves prerequisites from prior courses into queue
    • Novel hierarchical spaced repetition system lowers review burden by marking knowledge implicitly partially reviewed when later lessons make use of it
    • Periodic “quizzes” interleave multiple lessons’ content and increase transfer (because student must determine what is being asked)
    • Review/quiz tasks don’t repeat lesson exercises verbatim—this avoids the failure mode described in Spaced repetition memory prompts should be written to discourage shallow “pattern matching”
    • Like Minerva, they are (or Justin Skycak is) documenting their designs in a book! Outstanding!
  • Challenges
    • “Algorithmic lesson queue” conceptual design has same “black-box” difficulty we had with Khan Academy’s mastery system. No sense of what these items are, where they’re coming from in the map of the course, how they relate to each other, etc. Creates emotional disconnect and a feeling of passivity.
    • This also makes it difficult to create a coherent narrative arc across the material.
    • Explanation is very instructional, mostly lacks discussion of motivation, implication, meaning. Theorems are often presented as things-to-be-learned, rather than things-to-be-understood. A sense of learning a sea of isolated facts, rather than a beautiful and coherent whole. Great math texts offer much richer exposition.
    • Tasks are over-weighted toward the “apply” level of Bloom’s taxonomy. Conceptual elements of the exposition often go unreinforced. Names and definitions, too—I’ve put them into my own SRS to compensate.
    • I admire the hierarchical SRS, but it may be too conservative: my felt forgetting rate for earlier material is much higher than with my own SRS. I feel the need to duplicate tasks into my own SRS to compensate.
    • Proof-oriented lessons are rote templates, without challenges likely to develop mathematical creativity.
    • More broadly: no far transfer tasks.
    • Visual representations are underdeveloped: e.g. of the course structure, of progress toward one’s goal, etc.
    • I love that the team members talk about their design thinking on Twitter, but their posts are intolerably LinkedIn/TED-voice. Needs a heavy dose of Anti-marketing, after Michael Nielsen.
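
The hierarchical review mechanism (finishing a lesson implicitly "part-reviews" its prerequisites) can be sketched as a toy model. Everything here (the class names, the doubling schedule, the 0.5 credit factor) is my guess at the shape of the idea, not Math Academy's actual algorithm:

```python
from datetime import date, timedelta

class Topic:
    """Hypothetical model of one topic's review state."""
    def __init__(self, name, prereqs=()):
        self.name = name
        self.prereqs = prereqs          # topics this one builds on
        self.interval = 4               # days until the next full review
        self.due = date.today() + timedelta(days=self.interval)

    def explicit_review(self, today):
        # A full, dedicated review: grow the interval (toy doubling schedule).
        self.interval *= 2
        self.due = today + timedelta(days=self.interval)

    def implicit_review(self, today, credit=0.5):
        # Partial credit: a dependent lesson exercised this topic too, so
        # push its due date out by a fraction of the interval rather than
        # scheduling a full review.
        self.due = max(self.due, today + timedelta(days=int(self.interval * credit)))

def complete_lesson(topic, today):
    """Finishing a lesson counts as a partial review of its prerequisites."""
    topic.explicit_review(today)
    for p in topic.prereqs:
        p.implicit_review(today)

sets = Topic("indexed sets")
induction = Topic("proof by induction", prereqs=(sets,))
today = sets.due  # suppose the prerequisite has just come due
complete_lesson(induction, today)
# "indexed sets" gets pushed out without ever appearing as a review task
```

The appeal of the design is visible even in this toy version: each lesson that builds on a topic quietly defers that topic's review, so the explicit review queue stays small.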

Scrap logs

  • 2024-06-27
    • CAI platform launched in 2023
    • Relatively expensive in this market at $50/mo
    • At registration, asks about your goals (trying to pass an exam? learn for fun?) and to choose a course (I chose “methods of proof”); then there’s an adaptive diagnostic.
    • Problems are pretty difficult! Answered 24/34 correctly.
    • Almost entirely multiple choice. One problem where I had to fill in values in a truth table, and another where I had to input a number.
    • Took me about an hour.
    • After the diagnostic, took me to an “analysis” page listing correct/incorrect problems with links to corresponding sections of the courses. No calls to action, and not clear what the product intends me to do next.
    • Separately, I got an email which lists the topics it estimates I’m missing from the mathematical foundations courses. The email includes a line confirming that I’m “correctly placed” in Methods of Proof (even though it estimates I have only completed 20% of its Mathematical Foundations III course?)
    • After the diagnostic, my “learn” page lists a set of 5 recommended lessons, each described by a simple phrase like “indexed sets”. Not clear how these relate to the course as a whole, or to its structure.
    • It assigns estimated completion percentages for the earlier “Mathematical Foundations” courses.
    • Trying the “indexed sets” lesson.
    • Lessons seem to comprise a set of small “modules” (explication, example, question, etc), unfurling when I press a “continue” button.
    • The top of the screen displays a visual indicator of which “module” I’m on, and how many there are to go. (Of course, modules vary enormously in how long they take.)
    • If I miss a question in the lesson, I don’t think the lesson changes structure.
    • I’m pleasantly surprised by the degree to which multiple choice questions seem to interrogate my understanding. Not a ton of practice on the topic—I don’t feel quite fluent yet, and I certainly won’t remember all these details for the long term without more practice.
    • Took me like 15 minutes. (The topic was worth 7 XP—I guess I’m slow?)
  • Looking at their pedagogy page…
    • Is it really mastery learning? I can miss several problems in a lesson and still continue. If I miss more, maybe it’ll mark the lesson as failed, and I’ll need to repeat it?
    • They mention spaced repetition and interleaving. I wonder how that fits into the design—I haven’t seen it yet.
    • Ah! Several days after my first lesson, that topic reappeared in my queue marked “review”, instead of a lesson. It consisted of 5 exercises matched to that topic (not identical with those I already did). Very interesting! No UI indication of the schedule.
    • One problem with this design is that the review exercises are grouped together by topic, and more importantly, I get to know in advance what the topic is when I’m doing the exercises. I predict this will significantly harm transfer performance.
    • They mention Deliberate practice. I don’t think this quite qualifies: the definition involves focused practice on the elements you’re having most trouble with (not just whatever’s newest). If I’m having trouble with a topic, the system doesn’t figure out what, specifically, I’m struggling with and intercede. It’s deliberate practice insofar as Khan Academy is, I guess.
    • They mention taking advantage of the Worked example effect. But there’s usually just one worked example before students are asked to do exercises—advocates of that effect usually point to studies with many worked examples.
  • 2024-07-04
    • A few days in, it assigned me a “supplemental diagnostic” and prevented me from doing more lessons until I completed it. I guess maybe it estimated that I’m “doing better than” it had originally expected?
    • There were only two questions. Strange. I believe the questions were ones I’d been asked in the earlier diagnostic. And so something unfortunate happened: I remembered from the earlier diagnostic how to solve one of the questions… but I don’t think that should be taken as an indication of my understanding of the underlying topic.
  • 2024-07-05
    • Now one week in, my queue contains only an “assessment” marked “quiz 1”. It has an 8 minute time limit for 6 questions.
    • I got 5/6 correct. It’s not clear how this affects my learning plan or progress.
    • Then I was locked out of doing more lessons until I completed a review of indexed sets. That review was already in my queue, but the topic relates to the quiz question I missed, so maybe it was prioritized as follow-up?
  • 2024-07-10
    • I missed three questions on a lesson today. It ended the lesson and removed it from the queue. I’m guessing it’ll reappear in a few days with the same content verbatim. Will be interesting to see.
    • Another quiz appeared. It has a note saying “this quiz is optional until 27 more XP have been earned”. Interesting—so there are gates.
    • Got a couple questions wrong, and my queue now contains only review for those lessons.
  • 2024-07-24
    • Decided to enable “holistic mode”. This required me to take an extra diagnostic (80 questions—~3 hours) to figure out exactly which “Mathematical Foundations” topics I’m missing. I have a bunch of holes in MFIII, no surprise. Filling them in would add an extra ~8 months to Methods of Proof at 30 XP/day. Interesting.
    • In a lesson, I missed a few questions near the end, and I noticed that it added additional, different questions in response. Very nice.
  • 2024-07-25
    • In proof by induction, there are multi-step exercises where the later steps (e.g. inductive case after the base case) are revealed only after the initial steps are solved.
    • The proof exercises feel very template-y, though, and I’m skeptical that they’d be effective for developing understanding in someone unfamiliar.
  • 2024-08-31
    • These days I’m doing many more proofs via their interface, which I find very cumbersome, though I appreciate that they’re retaining the active practice. And I can see that the menu-based design is effective scaffolding.
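
The XP-based pace projections noted above (predicted completion time at the current and alternative paces) amount to simple arithmetic, since 1 XP approximates 1 minute of work. A minimal sketch; the 7200 XP figure is back-derived from my “~8 months at 30 XP/day” estimate, not a number the product reports:

```python
def days_to_finish(remaining_xp, xp_per_day):
    """Projected days to finish at a given daily pace.

    Since 1 XP approximates 1 minute of work, this is also remaining
    minutes of work divided by minutes worked per day.
    """
    return remaining_xp / xp_per_day

def projections(remaining_xp, paces):
    # What the dashboard shows: the consequence of each alternative pace.
    return {pace: days_to_finish(remaining_xp, pace) for pace in paces}

# "~8 extra months at 30 XP/day" implies roughly
# 240 days * 30 XP/day = 7200 XP of missing Mathematical Foundations.
extra_days = days_to_finish(7200, 30)   # 240.0 days, about 8 months
```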
Last updated 2024-09-06.