How should I decide how deep to go on each topic?
This question has bothered me for a long time, even more so while learning ML, because there is so much to learn.
New models are being introduced on a weekly basis.
If you open LinkedIn you get to hear about dozens of models.
New research papers are being published at a faster pace than ever.
So as a person deeply interested in ML, what exactly should I do? What should I go after?
One thing is for sure. It is impossible to catch up with everything that is happening in AI/ML, let alone comprehending everything deeply.
A few weeks ago, my entire LinkedIn feed was filled with DeepSeek.
So, as an ML enthusiast, what exactly should I do?
Should I try to understand DeepSeek?
If yes how much time should I spend?
If I spend a lot of time, I will understand DeepSeek very well, but will that be worth it?
What if DeepSeek becomes obsolete next month due to a radically new model?
These are some of the questions I had. And for me this issue is not limited to DeepSeek; it applies to literally every fascinating development happening in ML.
I simply cannot keep up with every new development in ML at a deep level. I can do so superficially, but I hate doing superficial stuff.
Therefore, there is one thing I have decided that I am doing for the rest of my life: eliminate noise.
I want to be guided by a path that I am truly convinced has long-term value, not by noise that may appeal in the short term. (Please note: I am not implying that DeepSeek is noise. DeepSeek is simply fascinating.)
It is however very difficult to know what is signal and what is noise. But I am fully clear on one thing: foundational knowledge is very valuable. Foundational knowledge makes you confident. And it is timeless.
But there is a problem when I venture into foundational knowledge. It is an endless ocean. I could spend one full month on Lasso regularization alone. I could spend 3 weeks mastering the bias-variance trade-off. I could spend 8 months just on Graph Neural Network architectures. So what should I do?
Is it worth spending 4 weeks on regularization? What if I spent a similar amount of time on a broader topic that would be more useful?
Over the last year, I have developed a technique that helps me identify how deep I should go on any topic.
I ask myself: “Do I truly understand this topic, or am I fooling myself?” Because it is very easy to fool ourselves into believing we know something.
But the moment you start discussing the topic or idea, your knowledge gets questioned. Your assumptions about the strength of your knowledge get reassessed.
So here is what I do.
Once I am “kind of” convinced that I know a topic well, I record a quick lecture explaining it (only for topics that interest me). During the recording, as I speak, I notice the areas where my understanding is weak. If I get the self-realization that my understanding has to improve significantly, I trash the recording.
I go back to my desk, learn more about the topic, fill my gaps, and record again. I repeat this 2 or 3 times until I am convinced that I know what I am talking about.
This approach has benefitted me the most in my entire life. I have never learned anything faster and more efficiently. There is also a high chance that I remember the concepts I taught for a very long time.
The downside for me personally is that I don’t work as a full-time teacher. As someone who conducts research on AI, builds AI products, and manages product development at our startup vizuara.ai, teaching definitely consumes a lot of time I could otherwise spend building products. But the satisfaction you get from learning something deeply and simplifying it to teach others cannot be conveyed in words. You have to experience it yourself.
Now back to the noise. Am I saying that you should not pay attention to any new developments in AI because most of it will be noise? No that is not what I am saying.
I think you should keep a close eye on what is happening. But do not get distracted by what others say. Keep a vision for yourself and follow it religiously. And try not to get into FOMO (Fear Of Missing Out).
We have all come across people who have a lot of information. They know the latest news, technology, gadgets, cars, software, etc. But they do not know anything deeply. So beware of confusing information with knowledge.
If you wish to transition to AI/ML in depth, without fluff, here is a detailed 5-phase roadmap.
Please note, this is not a crash course; it will take you about 8 months to follow. But you will come out stronger at the other end if you persevere. These are the best playlists we offer on Vizuara’s YouTube.
Phase 1️⃣: Mathematical Foundations (20-25 hours)
Playlist 1: Foundations for ML: https://lnkd.in/gKz-eybU
-Why Begin Here: Grasp the basics: linear algebra, probability, statistics, calculus, optimization, and programming fundamentals.
-Commitment: 2-3 hours weekly for 8 weeks.
Phase 2️⃣: Machine Learning (60-65 hours)
📌Playlist 1: ML Teach by Doing: https://lnkd.in/gn2dEcE2
-Why It’s Important: Practical, project-based learning to understand ML workflows.
-Commitment: 4 hours weekly for 10 weeks.
📌Playlist 2: Decision Trees from Scratch: https://lnkd.in/g3cmj2BR
-Why It’s Useful: Master decision tree algorithms, the backbone of many ML models.
-Commitment: 4 hours weekly for 5 weeks.
Phase 3️⃣: Deep Learning (35-40 hours)
📌Playlist 1: Neural Networks from Scratch: https://lnkd.in/gj8kHe2T
-Why It Matters: Understand the mechanics of neural networks through implementation.
-Commitment: 5 hours weekly for 8 weeks.
Phase 4️⃣: Advanced topics: Graph Neural Networks (40-45 hours)
📌Playlist 1: Graph Neural Networks - Theory, Applications and Research: https://lnkd.in/g3RCPS8e
-Why Learn This: Graph-based ML is becoming increasingly relevant in fields like social networks and biology.
-Commitment: 3 hours weekly for 8 weeks.
📌Playlist 2: ML Project-Based Course: Explainable AI: https://lnkd.in/gNEx3ghr
-Why XAI: Build ML projects with a focus on interpretability.
-Commitment: 3 hours weekly for 5 weeks.
-Outcome: Publish your first research paper using XAI techniques.
Phase 5️⃣: Generative AI, Transformers, and LLMs (100-110 hours)
📌Playlist 1: GenAI for Beginners (8 hours): https://lnkd.in/gUgXxVzh
📌Playlist 2: LLMs from scratch (40-45 hours): https://lnkd.in/gjcyfCcE
📌Playlist 3: Hands-on LLMs (40-45 hours): https://lnkd.in/gJQ7ryE4
📌Playlist 4: Transformers (15 hours): https://lnkd.in/g_3Qdu6d
-Why These Topics?: Learning about LLMs, transformers, and generative AI will make you future-ready.
-Commitment: 5 hours weekly for 20 weeks.
🔸Optional [140 hours]
📌Introduction to Machine Learning in Julia [40 hours]: https://lnkd.in/g8A3DtQW
📌Zero to Hero in Data Science [40 hours]: https://lnkd.in/gNEgx2Cz
📌Hands-on PINN [20 hours]: https://lnkd.in/gta5hgHZ
📌ML in Hindi [40 hours]: https://lnkd.in/giD88GzZ
✅ Total Duration: 300 hours + optional 140 hours
✅Timeline: 6-8 months, balancing learning with practical application.
✅Outcome: Build foundational ML knowledge, gain practical skills, and stay ahead with advanced topics.
If you are willing to spend time, this roadmap will help you get there.
Follow Vizuara’s YouTube channel for structured and beginner-friendly playlists: https://www.youtube.com/@vizuara
Your ML journey begins now—start building your expertise today.