<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>QA on Daffa Abhipraya</title><link>https://blog.abhipraya.dev/tags/qa/</link><description>Recent content in QA on Daffa Abhipraya</description><generator>Hugo</generator><language>en-us</language><copyright>© Daffa Abhipraya</copyright><lastBuildDate>Thu, 09 Apr 2026 00:00:00 +0700</lastBuildDate><atom:link href="https://blog.abhipraya.dev/tags/qa/index.xml" rel="self" type="application/rss+xml"/><item><title>PPL: Quality Gates That Actually Block Bad Code</title><link>https://blog.abhipraya.dev/ppl/part-a/qa/</link><pubDate>Thu, 09 Apr 2026 00:00:00 +0700</pubDate><guid>https://blog.abhipraya.dev/ppl/part-a/qa/</guid><description>&lt;p>Quality tools are useless if they are not enforced. Our project had SonarQube analyzing every commit, schema fuzzing running against our API, and load tests measuring latency, but every single one of them was set to &lt;code>allow_failure: true&lt;/code>. Violations were silently ignored, real bugs slipped through, and nobody noticed because the pipeline was always green. This blog covers how we turned those advisory checks into blocking gates, built automated CI reporting so reviewers could actually see the results, and watched the tools catch a JWT vulnerability, a production crash, and 31 code quality violations that had been accumulating for weeks.&lt;/p></description></item><item><title>PPL: When 91% Test Coverage Means Nothing</title><link>https://blog.abhipraya.dev/ppl/part-a/tdd-and-qa/</link><pubDate>Sun, 15 Mar 2026 00:00:00 +0700</pubDate><guid>https://blog.abhipraya.dev/ppl/part-a/tdd-and-qa/</guid><description>&lt;p>We had 91% line coverage and felt good about it. Then we ran mutation testing and scored 0%. Every line of our service layer was executed by tests; almost nothing was actually verified.
This is the story of how six advanced testing tools exposed the gap between &amp;ldquo;code was run&amp;rdquo; and &amp;ldquo;code was checked,&amp;rdquo; and what that means for any team relying on coverage as a quality signal.&lt;/p>
&lt;blockquote>
&lt;p>&lt;strong>Note:&lt;/strong> Our project is hosted on an internal GitLab instance, so we use the term &lt;strong>MR (Merge Request)&lt;/strong> throughout this blog. If you&amp;rsquo;re coming from GitHub, MRs are the equivalent of &lt;strong>Pull Requests (PRs)&lt;/strong>.&lt;/p>&lt;/blockquote></description></item></channel></rss>