This resonates! I'm building automated reverse-documentation workflows in my repos - every git check-in triggers a documentation update to spec files (some of which are quite granular).
I use the granular docs as context to "@" Claude or Codex.
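For anyone curious what that trigger looks like, here's a minimal sketch. The helper names and the docs/specs/ path are illustrative, not from my actual setup - the real version is a git post-commit hook that shells out to an agent CLI with one prompt per changed file:

```python
# Hypothetical sketch of a "check-in triggers doc update" workflow.
# In practice this would run from .git/hooks/post-commit and pipe each
# prompt to an agent CLI (e.g. `claude -p "<prompt>"`).
import subprocess

def changed_files() -> list[str]:
    """Files touched by the most recent commit (empty outside a git repo)."""
    try:
        out = subprocess.run(
            ["git", "diff-tree", "--no-commit-id", "--name-only", "-r", "HEAD"],
            capture_output=True, text=True, check=True,
        ).stdout
    except (OSError, subprocess.CalledProcessError):
        return []
    return [line for line in out.splitlines() if line]

def build_doc_prompts(files: list[str]) -> list[str]:
    """One doc-refresh prompt per changed source file (path is illustrative)."""
    return [
        f"Update the spec in docs/specs/ covering {f} to reflect the latest commit"
        for f in files
    ]

if __name__ == "__main__":
    for prompt in build_doc_prompts(changed_files()):
        print(prompt)  # replace print with a call out to your agent CLI
```

The nice part of hanging this off post-commit (rather than pre-commit) is that the doc refresh never blocks the commit itself.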
I think you're absolutely right about agents using docs as a great way to store and refresh context, and to get bonus points from their human counterparts. I was quite surprised the first time I used Claude Code and it built robust docs right out of the box without ever being asked, and continued to maintain them. Unfortunately they did drift over time and I did have to tell it to refresh a few times.
Insightful post, thanks for sharing. There's a new pattern I'm observing among engineers since coding co-pilots emerged: asking questions of the codebase, especially for legacy code (i.e., over a year old by present-day standards).
AI is adept at identifying and synthesizing application logic, red flags, code changes (when there's git history), and much more. I foresee a new genre of agents being trained solely on codebase explanations or explanations of fixes (think Linear/Jira), which could help with architecture, modularization, and potentially extending or editing the codebase for efficiency, portability, or migrating tech stacks.
Using this while building Socratify