November 2025

Introduction to VLMs
Vision Language Models
Nov 10 • Vizuara AI Labs and Sreedath Panat

Dissecting the Vision Transformer paper: In 3 hours and 40 minutes
Let us cultivate the habit of reading research papers
Nov 5 • Vizuara AI Labs and Sreedath Panat

LIVE workshop: Build a NanoVLM from scratch
Happening on Saturday, November 8th
Nov 5 • Vizuara AI Labs and Sreedath Panat

I just built a Vision Transformer from scratch
Starting with random weights
Nov 3 • Vizuara AI Labs and Sreedath Panat

Engineering CI/CD Pipelines for Machine Learning Systems
This article delves into the concept of CI/CD, explaining its fundamentals and highlighting its importance in building reliable, scalable, and automated…
Nov 2 • Prathamesh Dinesh Joshi and Vizuara AI Labs

Two-thirds of the trainable parameters in GPT-3 belong to MLPs, not attention heads
2-minute read
Nov 1 • Vizuara AI Labs and Sreedath Panat

October 2025

If you have 96 attention heads, will you run 96 loops?
Deeply understanding multi-head attention with weight splits
Oct 30 • Vizuara AI Labs and Sreedath Panat

Why do we really need more than one attention head?
Understanding Multi-Head Attention: The Heart of Transformers
Oct 28 • Vizuara AI Labs and Sreedath Panat

Why do we need "masking" in attention?
Understanding causal (masked) self-attention
Oct 25 • Vizuara AI Labs and Sreedath Panat

The Lost Art of Reading Research Papers
During my PhD years at MIT, I spent countless evenings surrounded by printed research papers, a pen in hand, marking every paragraph, tracing every…
Oct 21 • Vizuara AI Labs and Sreedath Panat

Understanding Self-Attention with Trainable Weights
K, Q, V intuition
Oct 18 • Vizuara AI Labs and Sreedath Panat

Let us implement a simplified self-attention mechanism
Introduction to the attention mechanism
Oct 12 • Vizuara AI Labs and Sreedath Panat