<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Error-Handling on Sourav AI Labs</title><link>https://souravailabs.ai/tags/error-handling/</link><description>Recent content in Error-Handling on Sourav AI Labs</description><generator>Hugo</generator><language>en-us</language><lastBuildDate>Wed, 11 Feb 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://souravailabs.ai/tags/error-handling/index.xml" rel="self" type="application/rss+xml"/><item><title>Week 6: Error Handling, Logging, and the Unreliable API Problem</title><link>https://souravailabs.ai/posts/week-6-error-handling-logging-and-the-unreliable-api-problem/</link><pubDate>Wed, 11 Feb 2026 00:00:00 +0000</pubDate><guid>https://souravailabs.ai/posts/week-6-error-handling-logging-and-the-unreliable-api-problem/</guid><description>&lt;div class="week-post-meta"&gt;
&lt;span class="week-post-badge"&gt;Week 6 of 52&lt;/span&gt;
&lt;span&gt;Phase 0: Foundation&lt;/span&gt;
&lt;span&gt;Status: Complete&lt;/span&gt;
&lt;/div&gt;
&lt;p&gt;Week 6 was about building software that fails gracefully. The topics: &lt;code&gt;try/except/finally&lt;/code&gt;, custom exceptions, and the Python &lt;code&gt;logging&lt;/code&gt; module.&lt;/p&gt;
&lt;p&gt;The mini-project made the stakes concrete. LLM APIs are not like database queries. They time out. They rate limit. They return partial responses. They fail in ways your happy-path tests will never catch.&lt;/p&gt;</description></item></channel></rss>