<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Fundamentals on RoboSathi</title><link>https://robosathi.com/docs/deep_learning/fundamentals/</link><description>Recent content in Fundamentals on RoboSathi</description><generator>Hugo</generator><language>en</language><lastBuildDate>Mon, 27 Apr 2026 22:25:34 +0530</lastBuildDate><atom:link href="https://robosathi.com/docs/deep_learning/fundamentals/index.xml" rel="self" type="application/rss+xml"/><item><title>Intro to DL</title><link>https://robosathi.com/docs/deep_learning/fundamentals/intro-to-dl/</link><pubDate>Mon, 27 Apr 2026 00:00:00 +0000</pubDate><guid>https://robosathi.com/docs/deep_learning/fundamentals/intro-to-dl/</guid><description>&lt;div class="video-link-container"&gt;&lt;a href="https://www.youtube.com/playlist?list=PLnpa6KP2ZQxe749nPGDV2cd6SR6zIZIJl" target="_blank" rel="noopener" class="video-btn video-btn-playlist"&gt;&lt;span class="video-btn__icon" aria-hidden="true"&gt;&lt;i class="fab fa-youtube"&gt;&lt;/i&gt;&lt;/span&gt;&lt;span class="video-btn__content"&gt;&lt;span class="video-btn__eyebrow"&gt;Playlist&lt;/span&gt;&lt;span class="video-btn__title"&gt;Deep Learning Fundamentals | Full Course&lt;/span&gt;&lt;/span&gt;&lt;/a&gt;&lt;/div&gt;




&lt;div class="rs-callout rs-callout-definition rs-callout-blue"&gt;
 
 &lt;div class="rs-callout__title rs-callout__title--panel"&gt;
 Deep Learning
 &lt;/div&gt;
 

 &lt;div class="rs-callout__body"&gt;
 &lt;p&gt;📘 Deep learning is a subset of AI and machine learning that uses multi-layered artificial neural networks to simulate
human-like learning, analyzing vast amounts of data to identify complex patterns, such as recognizing objects in photos,
detecting medical anomalies, or processing natural language, as large language models (LLMs) do.&lt;/p&gt;
 &lt;figure class="rs-figure"&gt;
 &lt;img src="https://robosathi.com/images/deep_learning/fundamentals/intro_to_dl/dl_ai_hu_cd9c8d64c5d614b4.webp"
 width="800"
 height="597"
 loading="lazy"
 fetchpriority="auto"
 decoding="async"
 class="rs-figure__image"
 alt="images/deep_learning/fundamentals/intro_to_dl/dl_ai.png"&gt;
 &lt;/figure&gt;
&lt;p&gt;💡 The ‘&lt;strong&gt;deep&lt;/strong&gt;’ in ‘deep learning’ stands for the idea of successive layers of representations.&lt;/p&gt;</description></item><item><title>XOR Problem</title><link>https://robosathi.com/docs/deep_learning/fundamentals/xor-problem/</link><pubDate>Mon, 27 Apr 2026 00:00:00 +0000</pubDate><guid>https://robosathi.com/docs/deep_learning/fundamentals/xor-problem/</guid><description>&lt;div class="video-link-container"&gt;&lt;a href="https://www.youtube.com/playlist?list=PLnpa6KP2ZQxe749nPGDV2cd6SR6zIZIJl" target="_blank" rel="noopener" class="video-btn video-btn-playlist"&gt;&lt;span class="video-btn__icon" aria-hidden="true"&gt;&lt;i class="fab fa-youtube"&gt;&lt;/i&gt;&lt;/span&gt;&lt;span class="video-btn__content"&gt;&lt;span class="video-btn__eyebrow"&gt;Playlist&lt;/span&gt;&lt;span class="video-btn__title"&gt;Deep Learning Fundamentals | Full Course&lt;/span&gt;&lt;/span&gt;&lt;/a&gt;&lt;/div&gt;


&lt;div class="rs-callout rs-callout-panel rs-callout-grey"&gt;&lt;div class="rs-callout__body"&gt;Before we dive into the XOR problem, lets get familiar with few terms and concepts first.&lt;/div&gt;
&lt;/div&gt;



&lt;div class="rs-callout rs-callout-definition rs-callout-blue"&gt;
 
 &lt;div class="rs-callout__title rs-callout__title--panel"&gt;
 Perceptron (1958)
 &lt;/div&gt;
 

 &lt;div class="rs-callout__body"&gt;
 &lt;p&gt;The simplest form of an artificial neural network: a single-layer binary classifier that categorizes input data into one of two groups.&lt;/p&gt;
&lt;p&gt;It serves as a mathematical model of a biological neuron, receiving multiple signals (inputs), weighting their importance,
and deciding whether to ‘fire’ (output 1) or stay ‘inactive’ (output 0).&lt;/p&gt;</description></item><item><title>Activation Functions</title><link>https://robosathi.com/docs/deep_learning/fundamentals/activation-functions/</link><pubDate>Mon, 27 Apr 2026 00:00:00 +0000</pubDate><guid>https://robosathi.com/docs/deep_learning/fundamentals/activation-functions/</guid><description>&lt;div class="video-link-container"&gt;&lt;a href="https://www.youtube.com/playlist?list=PLnpa6KP2ZQxe749nPGDV2cd6SR6zIZIJl" target="_blank" rel="noopener" class="video-btn video-btn-playlist"&gt;&lt;span class="video-btn__icon" aria-hidden="true"&gt;&lt;i class="fab fa-youtube"&gt;&lt;/i&gt;&lt;/span&gt;&lt;span class="video-btn__content"&gt;&lt;span class="video-btn__eyebrow"&gt;Playlist&lt;/span&gt;&lt;span class="video-btn__title"&gt;Deep Learning Fundamentals | Full Course&lt;/span&gt;&lt;/span&gt;&lt;/a&gt;&lt;/div&gt;


&lt;div class="rs-callout rs-callout-question rs-callout-green"&gt;
 &lt;div class="rs-callout__label"&gt;Question&lt;/div&gt;
 &lt;div class="rs-callout__body"&gt;
 Why do we need an activation function?
 &lt;/div&gt;
&lt;/div&gt;


&lt;div class="rs-callout rs-callout-answer rs-callout-magenta"&gt;
 &lt;div class="rs-callout__label"&gt;Answer&lt;/div&gt;
 &lt;div class="rs-callout__body"&gt;
 An activation function introduces &lt;strong&gt;non-linearity&lt;/strong&gt;, which allows networks to learn complex patterns in the data.
 &lt;/div&gt;
&lt;/div&gt;


&lt;div class="rs-callout rs-callout-question rs-callout-green"&gt;
 &lt;div class="rs-callout__label"&gt;Question&lt;/div&gt;
 &lt;div class="rs-callout__body"&gt;
 Why is non-linearity important?
 &lt;/div&gt;
&lt;/div&gt;


&lt;div class="rs-callout rs-callout-answer rs-callout-magenta"&gt;
 &lt;div class="rs-callout__label"&gt;Answer&lt;/div&gt;
 &lt;div class="rs-callout__body"&gt;
 &lt;p&gt;Real-world data (images, speech, text, financial trends) is rarely linear.
Non-linearity allows the network to learn and represent complex mappings between inputs and outputs.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;It enables the network to become a ‘Universal Function Approximator’.&lt;/li&gt;
&lt;/ul&gt;
 &lt;/div&gt;
&lt;/div&gt;


&lt;div class="rs-callout rs-callout-panel rs-callout-orange"&gt;&lt;div class="rs-callout__title rs-callout__title--panel"&gt;Universal Approximation Theorem&lt;/div&gt;&lt;div class="rs-callout__body"&gt;&lt;p&gt;A neural network with following properties can approximate any continuous function.&lt;/p&gt;</description></item><item><title>Optimization Methods</title><link>https://robosathi.com/docs/deep_learning/fundamentals/optimization-methods/</link><pubDate>Mon, 27 Apr 2026 00:00:00 +0000</pubDate><guid>https://robosathi.com/docs/deep_learning/fundamentals/optimization-methods/</guid><description>&lt;div class="video-link-container"&gt;&lt;a href="https://www.youtube.com/playlist?list=PLnpa6KP2ZQxe749nPGDV2cd6SR6zIZIJl" target="_blank" rel="noopener" class="video-btn video-btn-playlist"&gt;&lt;span class="video-btn__icon" aria-hidden="true"&gt;&lt;i class="fab fa-youtube"&gt;&lt;/i&gt;&lt;/span&gt;&lt;span class="video-btn__content"&gt;&lt;span class="video-btn__eyebrow"&gt;Playlist&lt;/span&gt;&lt;span class="video-btn__title"&gt;Deep Learning Fundamentals | Full Course&lt;/span&gt;&lt;/span&gt;&lt;/a&gt;&lt;/div&gt;


&lt;div class="rs-callout rs-callout-panel rs-callout-red"&gt;&lt;div class="rs-callout__title rs-callout__title--panel"&gt;Non-Convex Loss Surface&lt;/div&gt;&lt;div class="rs-callout__body"&gt;&lt;p&gt;The loss function surface in deep learning is non-convex, i.e, it has multiple local minima, saddle points,
and plateaus rather than a single, global minimum. &lt;br&gt;
So, in the context of neural network training, we usually do not care about finding the exact (global) minimum of a function,
but seek only to reduce its value sufficiently to obtain good generalization error.&lt;/p&gt;</description></item></channel></rss>