Slow Burning Torch

Akshay Mestry

National Louis University

Apr 18, 2025

Speed is for production, serving machines; slowness is for humans, for understanding.

So, it all started on a quiet afternoon in early January of 2025, just after I wrapped up xsNumPy. I was still basking in the glow of having finally built NumPy (at least a fraction of it) from scratch to demystify how array operations worked. It was a project born out of curiosity. I wasn't chasing performance, nor did I care about it. I just wanted to understand. And as I was pushing the final commit to the repository, I felt two things very clearly:

  • Relief, because rewriting xsnumpy.ndarray and debugging the broadcasting logic from first principles was as mentally taxing as it was rewarding

  • Restlessness, because once you’ve peeled back the curtain on one big library, you start seeing others in a whole new light
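The broadcasting logic that first bullet refers to reduces, at its core, to a single shape rule: align two shapes from the right, and each pair of dimensions is compatible if they are equal or one of them is 1. Here is a minimal sketch of that rule in plain Python; this is my own illustration of the idea, not xsNumPy's actual code:

```python
def broadcast_shape(a, b):
    """Compute the broadcast result shape of two shapes, following
    NumPy's rule: align from the right; each pair of dimensions must
    be equal, or one of them must be 1 (which stretches)."""
    result = []
    # Pad the shorter shape with leading 1s, then compare pairwise.
    padded_a = (1,) * (len(b) - len(a)) + a
    padded_b = (1,) * (len(a) - len(b)) + b
    for da, db in zip(padded_a, padded_b):
        if da == db or da == 1 or db == 1:
            result.append(max(da, db))
        else:
            raise ValueError(f"shapes {a} and {b} are not broadcastable")
    return tuple(result)

print(broadcast_shape((3, 1), (1, 4)))  # (3, 4)
print(broadcast_shape((5,), (2, 5)))    # (2, 5)
```

The whole rule fits in a dozen lines, yet debugging its interaction with strides and memory layout inside a real ndarray is where the "mentally taxing" part comes from.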

Rewriting NumPy had shown me the power of writing slow code to understand fast abstractions. Now, I wanted to take that same approach to something I'd been using for over half a decade: PyTorch.

Rewriting PyTorch Badly, On Purpose

Now, I’ve been working with PyTorch since 2018, back when torch.nn.Sequential felt like magic and the torch.Tensor.backward() method seemed to compute gradients by sorcery.
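That "sorcery" is reverse-mode automatic differentiation: each operation records its inputs and local derivatives, and backward() walks the recorded graph applying the chain rule. The following toy sketch (my own illustration, not PyTorch's implementation; it handles only this simple expression, not general graphs) shows the core idea for scalars:

```python
class Value:
    """A scalar that records how it was produced, so gradients can be
    propagated backwards -- a toy sketch of reverse-mode autodiff."""

    def __init__(self, data, parents=(), local_grads=()):
        self.data = data
        self.grad = 0.0
        self._parents = parents        # inputs that produced this value
        self._local_grads = local_grads  # d(self)/d(parent) for each parent

    def __add__(self, other):
        # d(a + b)/da = 1, d(a + b)/db = 1
        return Value(self.data + other.data, (self, other), (1.0, 1.0))

    def __mul__(self, other):
        # d(a * b)/da = b, d(a * b)/db = a
        return Value(self.data * other.data, (self, other),
                     (other.data, self.data))

    def backward(self):
        # Seed the output gradient, then push gradients to parents
        # via the chain rule: parent.grad += child.grad * local.
        self.grad = 1.0

        def _propagate(node):
            for parent, local in zip(node._parents, node._local_grads):
                parent.grad += node.grad * local
                _propagate(parent)

        _propagate(self)

x = Value(3.0)
y = Value(4.0)
z = x * y + x  # dz/dx = y + 1 = 5, dz/dy = x = 3
z.backward()
print(x.grad, y.grad)  # 5.0 3.0
```

PyTorch does the same bookkeeping on tensors, with a proper topological ordering over the graph, fused kernels, and a C++ engine underneath, which is exactly the gap between "understandable" and "fast" this project sets out to explore.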