Discussion about this post

Neural Foundry
The breakdown of contrastive loss with hand-written explanations is super helpful. What clicked for me was how the denominator implicitly handles negative pairs without needing explicit negative sampling, which I've always found tricky to wrap my head around when implementing this stuff. The NanoVLM demo with the 3D embedding visualization before/after training makes the alignment concept way more tangible than abstract loss curves.
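To make the denominator point concrete: in an InfoNCE-style contrastive loss, each in-batch pair's positive logit sits on the diagonal of a similarity matrix, and the softmax denominator sums over the whole row, so every other item in the batch acts as a negative for free. Here is a minimal NumPy sketch of that idea; the function name, shapes, and temperature value are illustrative assumptions, not taken from the NanoVLM demo itself.

```python
import numpy as np

def contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired embeddings.

    image_emb, text_emb: (batch, dim) L2-normalized embeddings where
    row i of each matrix forms a positive pair. (Illustrative sketch,
    not the NanoVLM implementation.)
    """
    # Cosine similarities for every image/text combination in the batch.
    logits = image_emb @ text_emb.T / temperature  # (batch, batch)

    # Log-softmax over each row: the diagonal entry is the positive pair,
    # and every off-diagonal entry contributes to the denominator --
    # this is how negatives are handled without explicit sampling.
    log_probs_i2t = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    loss_i2t = -np.mean(np.diag(log_probs_i2t))

    # Symmetrize: softmax over columns for the text-to-image direction.
    log_probs_t2i = logits - np.log(np.exp(logits).sum(axis=0, keepdims=True))
    loss_t2i = -np.mean(np.diag(log_probs_t2i))

    return (loss_i2t + loss_t2i) / 2

# Toy usage: identical paired embeddings give a low loss, since each
# positive dominates its row's denominator.
rng = np.random.default_rng(0)
emb = rng.normal(size=(4, 8))
emb /= np.linalg.norm(emb, axis=1, keepdims=True)
print(contrastive_loss(emb, emb))
```

Mismatching the pairs (e.g. shuffling one side of the batch) pushes the positives off the diagonal, so the loss rises, which is exactly the gradient signal that aligns the two embedding spaces during training.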

