
Hey, here’s a thought I had this week.

Ever notice how your phone unlocks when it sees your face, or how voice assistants somehow recognize you specifically?

It feels a little magical. Or creepy. Or both.

It’s not really spying on you, though (well… mostly). What’s actually doing the heavy lifting is deep learning, a subset of machine learning that’s behind a lot of modern AI.

I made a short video breaking this down because I realized most explanations either get way too technical or stay so high-level they don’t explain anything.

The Brain Vibes

Deep learning was inspired by the brain: not the biology, but the learning behavior. Your brain has billions of neurons passing signals around. Some connections get stronger when they’re useful. Others fade away.

Your brain is brutally efficient.

That’s why you forget why you walked into a room… but remember that one painfully awkward rejection from years ago.

Deep learning copies this idea with math: artificial neurons pass numbers around, and weights decide which connections matter more.
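If that sounds abstract, here’s what a single artificial neuron boils down to in plain Python. The numbers are made up, purely for illustration:

```python
# A single artificial neuron: a weighted sum of inputs, plus a bias.
# The weights are what training adjusts; bigger weight = connection matters more.
inputs  = [0.5, 0.8, 0.2]    # signals coming in from the previous layer
weights = [0.9, -0.3, 0.4]   # learned importance of each connection
bias    = 0.1

# Weighted sum: each input times its weight, all added up.
signal = sum(i * w for i, w in zip(inputs, weights)) + bias
print(round(signal, 2))  # 0.39, and this number gets passed onward
```

That’s the whole unit. Everything else in deep learning is stacking lots of these and tuning those weights.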

Neural Networks

A neural network is just a stack of layers made up of artificial neurons.

The input layer takes raw data exactly as is: pixel values from an image, numbers representing words, and so on.

The hidden layers are where the real learning happens. Early layers pick up simple patterns, such as edges and shapes. Deeper layers combine those into more meaningful ideas: edges become shapes, shapes become faces, sounds become words.

The more hidden layers a model has, the “deeper” it is. Hence, “deep learning”.

Finally, the output layer gives you the answer: a prediction, a classification, or a set of probabilities.
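Here’s a minimal sketch of that whole flow in NumPy, with made-up layer sizes and random (untrained) weights, just to show data moving from input to hidden to output:

```python
import numpy as np

rng = np.random.default_rng(0)

# Input layer: raw data, e.g. a flattened 4-pixel "image".
x = np.array([0.2, 0.7, 0.1, 0.9])

# Hidden layer: 4 inputs -> 3 neurons. Weights start random; training fixes that.
W1 = rng.normal(size=(4, 3))
b1 = np.zeros(3)
hidden = np.maximum(0, x @ W1 + b1)   # ReLU activation (more on that below)

# Output layer: 3 hidden values -> 2 scores (say, "cat" vs "dog").
W2 = rng.normal(size=(3, 2))
b2 = np.zeros(2)
scores = hidden @ W2 + b2

# Softmax turns raw scores into probabilities that sum to 1.
probs = np.exp(scores) / np.exp(scores).sum()
print(probs)
```

With random weights the probabilities are garbage, of course. Making them not garbage is what training is for.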

Training in Deep Learning

At first, a deep learning model is… pretty underwhelming. Actually, it’s pretty damn bad.

Data moves through the network in a forward pass, the model makes a guess, and then a loss function steps in and says, “Yeah, you fumbled.”

The bigger the mistake, the bigger the loss.

A loss function measures how wrong a model’s prediction is compared to the correct answer.
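Mean squared error is one common choice. It just averages the squared gap between the guess and the truth:

```python
# Mean squared error: average of (prediction - truth)^2.
# Bigger mistake -> bigger loss, exactly as described above.
def mse(predictions, targets):
    return sum((p - t) ** 2 for p, t in zip(predictions, targets)) / len(targets)

print(round(mse([0.9, 0.2], [1.0, 0.0]), 3))  # small mistakes -> 0.025
print(round(mse([0.1, 0.8], [1.0, 0.0]), 3))  # big mistakes   -> 0.725
```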


Then backpropagation kicks in, sending that error backward and adjusting the weights so the model does better next time. This loop repeats over and over until the model stops embarrassing itself.
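Here’s that loop in miniature: a toy one-weight “model” learning y = 2x with gradient descent. Not a real network, but the same guess, loss, adjust-the-weights cycle:

```python
# Toy training loop: learn a single weight w so that w * x matches y = 2x.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w = 0.0                    # the model starts out clueless
learning_rate = 0.05

for epoch in range(50):
    for x, y in data:
        pred = w * x               # forward pass: the model makes a guess
        loss = (pred - y) ** 2     # loss function: "yeah, you fumbled"
        grad = 2 * (pred - y) * x  # gradient: which way to nudge w
        w -= learning_rate * grad  # the adjustment step: do better next time

print(round(w, 3))  # ~2.0, i.e. the model stopped embarrassing itself
```

In a real network, backpropagation computes that gradient for millions of weights at once, but the idea is exactly this.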

Why Activation Functions Matter

One thing that really matters here is activation functions. They decide which signals move forward and, more importantly, they add nonlinearity.

Nonlinearity lets the model learn complex patterns instead of boring straight lines. Without it, a stack of layers would collapse into one big linear layer, and deep learning wouldn’t really work at all.
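ReLU, one of the most popular activation functions, shows how simple this can be: it zeroes out negative signals and lets positive ones through.

```python
import numpy as np

def relu(x):
    # Pass positive signals through; block negative ones.
    return np.maximum(0, x)

signals = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(relu(signals))  # [0.  0.  0.  0.5 2. ]

# That tiny kink at zero is the nonlinearity. Without it, stacking
# layers of pure multiply-and-add just makes one big linear layer,
# no matter how deep you go.
```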


Deep Learning vs Machine Learning

The biggest difference between traditional machine learning and deep learning can be summed up in three words: less human intervention.

Classic ML needs people to hand-engineer features. Deep learning figures out useful features on its own, but it needs way more data and compute to do it.

That tradeoff is why deep learning powers things like image recognition, speech recognition, recommendations, and large language models like ChatGPT.
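To see “less human intervention” in code, here’s a rough sketch of the two workflows on a fake image. The features here are hypothetical, just to show where the human sits in each pipeline:

```python
import numpy as np

image = np.random.rand(28, 28)   # stand-in for a grayscale photo

# Classic ML: a human picks the features.
def hand_engineered_features(img):
    # Somebody decided brightness and contrast are what matters.
    return np.array([img.mean(), img.std()])

features = hand_engineered_features(image)  # then fed to, say, a decision tree

# Deep learning: raw pixels go in, and the hidden layers
# learn their own features during training.
raw_pixels = image.flatten()     # all 784 numbers, no human curation
```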

Check out this chart I made, showing the differences between ML and DL:

Conclusion

Understanding deep learning changed how I look at AI. It stopped feeling mysterious and started feeling… logical, even if still kind of wild.

Once you see it this way, AI feels a lot less like magic and a lot more like engineering.

If you’ve ever wondered how AI got this good this fast, deep learning is the answer.

The Tech newsletter for Engineers who want to stay ahead

Tech moves fast. Are you still playing catch-up?

That's exactly why 100K+ engineers working at Google, Meta, and Apple read The Code twice a week.

Here's what you get:

  • Curated tech news that shapes your career - Filtered from thousands of sources so you know what's coming 6 months early.

  • Practical resources you can use immediately - Real tutorials and tools that solve actual engineering problems.

  • Research papers and insights decoded - We break down complex tech so you understand what matters.

All delivered twice a week in just 2 short emails.
