<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Part A: Cognitive Blogs on Daffa Abhipraya</title><link>https://blog.abhipraya.dev/ppl/part-a/</link><description>Recent content in Part A: Cognitive Blogs on Daffa Abhipraya</description><generator>Hugo</generator><language>en-us</language><copyright>© Daffa Abhipraya</copyright><lastBuildDate>Sat, 11 Apr 2026 00:00:00 +0700</lastBuildDate><atom:link href="https://blog.abhipraya.dev/ppl/part-a/index.xml" rel="self" type="application/rss+xml"/><item><title>PPL: Self-Hosted Error Monitoring with Custom Instrumentation</title><link>https://blog.abhipraya.dev/ppl/part-a/monitoring/</link><pubDate>Sat, 11 Apr 2026 00:00:00 +0700</pubDate><guid>https://blog.abhipraya.dev/ppl/part-a/monitoring/</guid><description>&lt;p>Error monitoring is one of those things that feels optional until the first production bug slips through unnoticed. A user reports &amp;ldquo;the page is broken,&amp;rdquo; you check the server, everything looks fine, and three hours later you discover a background task has been silently failing since the last deploy. This blog covers how we built monitoring that catches those failures before users do.&lt;/p>
&lt;blockquote>
&lt;p>&lt;strong>Note:&lt;/strong> Our project is hosted on an internal GitLab instance, so we use the term &lt;strong>MR (Merge Request)&lt;/strong> throughout this blog. If you&amp;rsquo;re coming from GitHub, MRs are the equivalent of &lt;strong>Pull Requests (PRs)&lt;/strong>.&lt;/p>&lt;/blockquote></description></item><item><title>PPL: Quality Gates That Actually Block Bad Code</title><link>https://blog.abhipraya.dev/ppl/part-a/qa/</link><pubDate>Thu, 09 Apr 2026 00:00:00 +0700</pubDate><guid>https://blog.abhipraya.dev/ppl/part-a/qa/</guid><description>&lt;p>Quality tools are useless if they are not enforced. Our project had SonarQube analyzing every commit, schema fuzzing running against our API, and load tests measuring latency, but every single one of them was set to &lt;code>allow_failure: true&lt;/code>. Violations were silently ignored, real bugs slipped through, and nobody noticed because the pipeline was always green. This blog covers how we turned those advisory checks into blocking gates, built automated CI reporting so reviewers could actually see the results, and watched the tools catch a JWT vulnerability, a production crash, and 31 code quality violations that had been accumulating for weeks.&lt;/p></description></item><item><title>PPL: SOLID Principles in a FastAPI Invoice System</title><link>https://blog.abhipraya.dev/ppl/part-a/solid/</link><pubDate>Thu, 09 Apr 2026 00:00:00 +0700</pubDate><guid>https://blog.abhipraya.dev/ppl/part-a/solid/</guid><description>&lt;p>SIRA (Smart Invoice Reminder AI) is a system that automates invoice collection reminders. It monitors payment status, scores client risk using a weighted formula, and sends personalized reminders by email or messaging. The backend is built with FastAPI, Supabase (Postgres via REST), Celery for background jobs, and Redis as the message broker.&lt;/p>
&lt;p>This blog walks through how SOLID principles shaped the architecture at three levels: the overall layer structure, service-level design patterns, and individual function design. The goal is not to explain what SOLID stands for (there are plenty of articles for that), but to show what it looks like in production code and why certain design choices were made over simpler alternatives.&lt;/p></description></item><item><title>PPL: When 91% Test Coverage Means Nothing</title><link>https://blog.abhipraya.dev/ppl/part-a/tdd/</link><pubDate>Thu, 09 Apr 2026 00:00:00 +0700</pubDate><guid>https://blog.abhipraya.dev/ppl/part-a/tdd/</guid><description>&lt;p>We had 91% line coverage and felt good about it. Then we ran mutation testing and scored 0%. Every line of our service layer was executed by tests, but almost nothing was actually verified. This is the story of how we discovered the gap between &amp;ldquo;code was run&amp;rdquo; and &amp;ldquo;code was checked,&amp;rdquo; and what we changed to close it.&lt;/p>
&lt;blockquote>
&lt;p>&lt;strong>Note:&lt;/strong> Our project is hosted on an internal GitLab instance, so we use the term &lt;strong>MR (Merge Request)&lt;/strong> throughout this blog. If you&amp;rsquo;re coming from GitHub, MRs are the equivalent of &lt;strong>Pull Requests (PRs)&lt;/strong>.&lt;/p>&lt;/blockquote></description></item><item><title>PPL: When 91% Test Coverage Means Nothing</title><link>https://blog.abhipraya.dev/ppl/part-a/tdd-and-qa/</link><pubDate>Sun, 15 Mar 2026 00:00:00 +0700</pubDate><guid>https://blog.abhipraya.dev/ppl/part-a/tdd-and-qa/</guid><description>&lt;p>We had 91% line coverage and felt good about it. Then we ran mutation testing and scored 0%. Every line of our service layer was executed by tests, yet almost nothing was actually verified. This is the story of how six advanced testing tools exposed the gap between &amp;ldquo;code was run&amp;rdquo; and &amp;ldquo;code was checked,&amp;rdquo; and what that means for any team relying on coverage as a quality signal.&lt;/p>
&lt;blockquote>
&lt;p>&lt;strong>Note:&lt;/strong> Our project is hosted on an internal GitLab instance, so we use the term &lt;strong>MR (Merge Request)&lt;/strong> throughout this blog. If you&amp;rsquo;re coming from GitHub, MRs are the equivalent of &lt;strong>Pull Requests (PRs)&lt;/strong>.&lt;/p>&lt;/blockquote></description></item><item><title>PPL: Building a Production-Safe Migration Pipeline with Automated Rollback</title><link>https://blog.abhipraya.dev/ppl/part-a/data-seeding/</link><pubDate>Thu, 12 Mar 2026 00:00:00 +0700</pubDate><guid>https://blog.abhipraya.dev/ppl/part-a/data-seeding/</guid><description>&lt;h2 id="why-database-migrations-need-safety-nets">
 &lt;a class="anchor" href="#why-database-migrations-need-safety-nets" data-anchor="why-database-migrations-need-safety-nets" aria-hidden="true">#&lt;/a>
 Why Database Migrations Need Safety Nets
&lt;/h2>
&lt;p>Imagine this scenario: a developer adds a new column to the invoices table, pushes to &lt;code>main&lt;/code>, and the CI/CD pipeline deploys it to production. Everything looks fine until the next morning, when the team discovers that the migration also dropped a constraint another service silently relied on. Rolling back means manually writing SQL against the production database at 2 AM.&lt;/p></description></item><item><title>PPL: Building an Integrated Tool Ecosystem for a 9-Person University Team</title><link>https://blog.abhipraya.dev/ppl/part-a/teamwork-tools/</link><pubDate>Thu, 12 Mar 2026 00:00:00 +0700</pubDate><guid>https://blog.abhipraya.dev/ppl/part-a/teamwork-tools/</guid><description>&lt;h2 id="why-tooling-is-a-team-problem-not-a-devops-problem">
 &lt;a class="anchor" href="#why-tooling-is-a-team-problem-not-a-devops-problem" data-anchor="why-tooling-is-a-team-problem-not-a-devops-problem" aria-hidden="true">#&lt;/a>
 Why Tooling Is a Team Problem, Not a DevOps Problem
&lt;/h2>
&lt;p>Most university software engineering courses teach you &lt;em>which&lt;/em> tools to use (Git for version control, Jira for tickets, Docker for deployment). What they rarely teach is &lt;strong>how tools interact with each other&lt;/strong>, and what happens when they don&amp;rsquo;t.&lt;/p>
&lt;p>In a professional environment, a single commit can trigger a cascade: CI runs tests, a Slack bot notifies the team, a coverage report lands in SonarQube, and the ticket moves to &amp;ldquo;In Review&amp;rdquo; on the project board. This doesn&amp;rsquo;t happen by accident. Someone has to build that integration layer, and in a university team, that someone is usually whoever cares enough about developer experience to do it.&lt;/p></description></item></channel></rss>