<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>AI Literacy on Daffa Abhipraya</title><link>https://blog.abhipraya.dev/tags/ai-literacy/</link><description>Recent content in AI Literacy on Daffa Abhipraya</description><generator>Hugo</generator><language>en-us</language><copyright>© Daffa Abhipraya</copyright><lastBuildDate>Wed, 15 Apr 2026 00:00:00 +0700</lastBuildDate><atom:link href="https://blog.abhipraya.dev/tags/ai-literacy/index.xml" rel="self" type="application/rss+xml"/><item><title>PPL: AI at Different Cognitive Distances [Sprint 2, Week 3]</title><link>https://blog.abhipraya.dev/ppl/part-b/s2w3-ai-literacy/</link><pubDate>Wed, 15 Apr 2026 00:00:00 +0700</pubDate><guid>https://blog.abhipraya.dev/ppl/part-b/s2w3-ai-literacy/</guid><description>&lt;h2 id="what-i-worked-on">
 &lt;a class="anchor" href="#what-i-worked-on" data-anchor="what-i-worked-on" aria-hidden="true">#&lt;/a>
 What I Worked On
&lt;/h2>
&lt;p>These were two weeks where AI was the primary productivity multiplier across very different task shapes. Week 1 leaned on iterative CI debugging (ten MRs of &amp;ldquo;run pipeline, read failure, ask AI, apply fix&amp;rdquo;) and integration-test infrastructure design. Week 2 leaned on bulk test generation (400+ mutation-killing assertions), bash design around tricky primitives (a &lt;code>flock&lt;/code>-based slot allocator), and cross-agent documentation research. The common thread across all six patterns: AI is strongest when the task is a known pattern applied to a new context, and weakest when the task hinges on how primitives interact in a specific environment.&lt;/p></description></item><item><title>PPL: AI Literacy [Sprint 2, Week 2]</title><link>https://blog.abhipraya.dev/ppl/part-b/s2w2-ai-literacy/</link><pubDate>Mon, 30 Mar 2026 00:00:00 +0700</pubDate><guid>https://blog.abhipraya.dev/ppl/part-b/s2w2-ai-literacy/</guid><description>&lt;h2 id="what-i-worked-on">
 &lt;a class="anchor" href="#what-i-worked-on" data-anchor="what-i-worked-on" aria-hidden="true">#&lt;/a>
 What I Worked On
&lt;/h2>
&lt;p>With 19 MRs landing this week across CI scripting, application features, and infra fixes, AI assistance was involved throughout. The most interesting usage was not code generation but iterative verification: describe what the output should look like, generate, run against real CI output, observe differences, fix, repeat.&lt;/p>
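&lt;p>As a minimal sketch of that verification loop (the &lt;code>generate_report&lt;/code> stand-in and the file names &lt;code>expected.md&lt;/code> and &lt;code>actual.md&lt;/code> are hypothetical, not taken from the actual scripts):&lt;/p>

```shell
# Hypothetical golden-file loop for iterative verification.
# generate_report stands in for the real report script; the file
# names (expected.md, actual.md) are illustrative only.
generate_report() {
  printf '## CI Report\n- build: failed\n'
}

# The expected ("golden") markdown, captured from a known-good run.
printf '## CI Report\n- build: failed\n' | tee expected.md

# Regenerate and compare; a non-empty diff means another fix-and-rerun cycle.
generate_report | tee actual.md
if diff -u expected.md actual.md; then
  echo 'report matches expected output'
else
  echo 'mismatch: adjust the script and rerun'
fi
```

&lt;p>Each hunk in the &lt;code>diff&lt;/code> output points at the next fix to apply before rerunning.&lt;/p>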
&lt;hr>
&lt;h2 id="ai-for-complex-bash-scripting">
 &lt;a class="anchor" href="#ai-for-complex-bash-scripting" data-anchor="ai-for-complex-bash-scripting" aria-hidden="true">#&lt;/a>
 AI for Complex Bash Scripting
&lt;/h2>
&lt;p>The CI report script (&lt;code>scripts/ci-report.sh&lt;/code>) grew from 0 to 356 lines in MR !126 alone, then by another 111 lines across !129, !132, and !140. Writing bash that parses CI job output, formats it as GitLab-flavored markdown, and posts it via the GitLab API is exactly the kind of code AI handles well: repetitive structure, lots of string manipulation, easy to verify.&lt;/p></description></item><item><title>PPL: AI-Assisted Sprint Planning [Sprint 2, Week 1]</title><link>https://blog.abhipraya.dev/ppl/part-b/s2w1-ai-literacy/</link><pubDate>Mon, 23 Mar 2026 00:00:00 +0700</pubDate><guid>https://blog.abhipraya.dev/ppl/part-b/s2w1-ai-literacy/</guid><description>&lt;h2 id="what-i-worked-on">
 &lt;a class="anchor" href="#what-i-worked-on" data-anchor="what-i-worked-on" aria-hidden="true">#&lt;/a>
 What I Worked On
&lt;/h2>
&lt;p>This week I used Claude Code (Anthropic&amp;rsquo;s CLI agent) as my primary development partner for Sprint 2 planning, feature brainstorming, code implementation, and CI debugging. The session produced 20 PBIs, 64 subtasks, 3 service file changes, and a merged MR, all in one sitting.&lt;/p>
&lt;h2 id="ai-for-sprint-planning">
 &lt;a class="anchor" href="#ai-for-sprint-planning" data-anchor="ai-for-sprint-planning" aria-hidden="true">#&lt;/a>
 AI for Sprint Planning
&lt;/h2>
&lt;h3 id="feature-gap-analysis">
 &lt;a class="anchor" href="#feature-gap-analysis" data-anchor="feature-gap-analysis" aria-hidden="true">#&lt;/a>
 Feature Gap Analysis
&lt;/h3>
&lt;p>I asked Claude to explore the entire codebase (frontend + backend + infra) and catalog what&amp;rsquo;s implemented versus what&amp;rsquo;s missing. It launched parallel exploration agents that traced through routes, services, workers, and the database schema, producing a comprehensive feature inventory.&lt;/p></description></item></channel></rss>