For the problem on LU decomposition of a nearly singular matrix, she deliberately broke the code by introducing a zero pivot, then showed how to recover with partial pivoting, and finally demonstrated np.linalg.solve as the safe, practical choice, but only after the algorithm was understood.
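The demonstration she describes can be sketched roughly like this (a minimal version, assuming a Doolittle-style factorization and a small 2×2 example; the function names are illustrative, not hers):

```python
import numpy as np

def lu_no_pivot(A):
    """Doolittle LU without pivoting -- breaks on a zero pivot."""
    A = A.astype(float).copy()
    n = A.shape[0]
    for k in range(n):
        # Division by A[k, k]: a zero pivot blows up right here.
        A[k+1:, k] /= A[k, k]
        A[k+1:, k+1:] -= np.outer(A[k+1:, k], A[k, k+1:])
    return A

def lu_partial_pivot(A):
    """Same elimination, but swap the largest pivot into place first."""
    A = A.astype(float).copy()
    n = A.shape[0]
    perm = np.arange(n)
    for k in range(n):
        p = k + np.argmax(np.abs(A[k:, k]))   # row with largest pivot
        A[[k, p]] = A[[p, k]]                 # swap rows k and p
        perm[[k, p]] = perm[[p, k]]
        A[k+1:, k] /= A[k, k]
        A[k+1:, k+1:] -= np.outer(A[k+1:, k], A[k, k+1:])
    return A, perm

# A matrix whose (0, 0) entry is exactly zero: the naive
# factorization produces infinities, the pivoted one succeeds.
A = np.array([[0.0, 1.0], [1.0, 1.0]])
b = np.array([1.0, 2.0])

assert not np.isfinite(lu_no_pivot(A)).all()   # zero pivot breaks it
LU, perm = lu_partial_pivot(A)
assert np.isfinite(LU).all()                   # pivoting fixes it

# The safe, practical choice once the mechanics are understood:
x = np.linalg.solve(A, b)
assert np.allclose(A @ x, b)                   # residual converges to zero
```

The point of the exercise is the order: see the failure mode by hand, fix it by hand, and only then reach for the library routine.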
For the chapter on Boundary Value Problems, she included a comparison of the finite difference method versus the shooting method, with a runtime table. The table revealed something surprising: on a stiff ODE, the shooting method failed unless you used an adaptive Runge-Kutta integrator. The finite difference method with a sparse matrix solver was both faster and more stable.
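The finite-difference side of that comparison might look like the sketch below (assuming the model problem u'' = f(x) with u(0) = u(1) = 0; her actual test problem and grid size are not given). Because the whole boundary value problem collapses into one sparse linear solve, there is no time stepping, which is why stiffness never enters the picture:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

def solve_bvp_fd(f, n=200):
    """Finite differences for u'' = f(x), u(0) = u(1) = 0.

    Builds the standard tridiagonal second-difference operator as a
    sparse matrix and solves the linear system once, directly.
    """
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)   # interior grid points
    # Tridiagonal operator for u''; the zero Dirichlet boundary
    # values simply drop out of the right-hand side.
    A = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n),
              format="csc") / h**2
    return x, spsolve(A, f(x))

# Model problem with a known answer: u'' = -pi^2 sin(pi x)
# has the exact solution u(x) = sin(pi x).
x, u = solve_bvp_fd(lambda x: -np.pi**2 * np.sin(np.pi * x))
err = np.max(np.abs(u - np.sin(np.pi * x)))
assert err < 1e-3   # second-order method: error shrinks like h^2
```

A shooting method, by contrast, integrates the ODE as an initial value problem inside a root-finding loop, so a stiff right-hand side forces it onto an adaptive (or implicit) integrator, exactly the failure her runtime table exposed.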
“When do we start?”
Liam did it. His reflection was surprisingly honest: “I thought the manual would save time. But I realized I don’t actually know how to debug a matrix inversion anymore. I just learned to copy-paste.”
Her reply came twelve minutes later:
He would spend hours manually re-running student code snippets, hunting for misplaced indices or a forgotten import numpy as np. It was exhausting. It was unsustainable. And at 64, he was tired.
The next morning, he uploaded the PDF to the course website. He added a single line in the syllabus: “The solutions manual is now a learning tool, not a shortcut. Use it wisely. And if you copy without understanding, the algorithm will find you—because the residual won’t converge to zero.”
Then he opened his laptop and started writing an email to Maya: