Understanding the “See Stuff, Say Stuff” Workflow: A Breakdown of Karpathy’s Famous Mantra and How to Apply It Practically
In the rapidly evolving landscape of AI-assisted development, few concepts have captured the essence of modern programming as succinctly as Andrej Karpathy’s “see stuff, say stuff” workflow. This deceptively simple phrase encapsulates a fundamental shift in how we approach software development—one that moves away from the traditional paradigm of memorizing syntax and wrestling with implementation details toward a more intuitive, conversational model of programming.
Karpathy, the renowned AI researcher and former Director of AI at Tesla, coined this phrase to describe his own approach to working with large language models for coding tasks. But what appears to be a casual observation actually represents a profound insight into the future of human-computer collaboration in software development. The “see stuff, say stuff” workflow isn’t just about using AI tools more effectively—it’s about fundamentally reimagining the relationship between human intention and code implementation.
At its core, this workflow represents the democratization of programming expertise. Where traditional development required years of accumulated knowledge about syntax, APIs, design patterns, and best practices, the “see stuff, say stuff” approach allows developers to focus on problem-solving and creative thinking while leveraging AI to handle the mechanical aspects of code generation. This shift, sometimes referred to as vibe coding, has profound implications for how we learn programming, structure development teams, and approach complex software projects.
Understanding this workflow requires examining not just the mechanics of how it operates, but the cognitive and philosophical changes it represents. When we truly embrace “see stuff, say stuff,” we’re not just changing our tools—we’re changing how we think about the act of programming itself.
The Origins and Context of Karpathy’s Insight

Andrej Karpathy’s formulation of the “see stuff, say stuff” workflow emerged from his extensive experience working at the intersection of artificial intelligence research and practical software development. As someone who has operated at the highest levels of both academic AI research and industry application—including his pivotal role in developing Tesla’s Autopilot system—Karpathy brings a unique perspective to understanding how AI tools can augment human programming capabilities.
The phrase itself emerged from Karpathy’s observations about his own programming workflow when using advanced language models. He noticed that his most productive coding sessions had evolved into a pattern where he would examine existing code, documentation, or problem descriptions (“see stuff”) and then articulate his intentions, questions, or desired modifications in natural language (“say stuff”), allowing AI tools to handle the translation from intent to implementation.
This observation was significant because it came from someone with deep technical expertise who could certainly implement complex solutions manually but chose to work collaboratively with AI tools instead. The insight wasn’t born from necessity or lack of skill, but from recognizing that this collaborative approach could be more effective than traditional solo development, even for experts.
The Broader Context of AI-Assisted Development
Karpathy’s workflow insight emerged during a period of rapid advancement in large language models’ coding capabilities. The release of increasingly sophisticated tools like GitHub Copilot, ChatGPT’s coding features, and specialized programming assistants created new possibilities for human-AI collaboration that hadn’t existed before.
What made Karpathy’s observation particularly valuable was his recognition that these tools weren’t just advanced autocomplete systems or sophisticated documentation lookup mechanisms. Instead, he saw them as partners in a genuinely collaborative programming process where the human’s role shifts from implementation to orchestration, from coding to conducting.
This shift represented more than just a productivity improvement—it suggested a fundamental evolution in what it means to be a programmer. The traditional model of programming expertise, built around accumulating vast stores of syntactic knowledge and implementation patterns, was being supplemented (and in some cases replaced) by skills in communication, problem decomposition, and creative synthesis.
The Psychology Behind the Workflow
The effectiveness of the “see stuff, say stuff” workflow isn’t just about the technical capabilities of AI tools—it’s also about how this approach aligns with natural human cognitive patterns. Traditional programming often requires holding complex syntactic structures in working memory while simultaneously reasoning about high-level logic and system architecture. This cognitive juggling act is mentally taxing and often leads to errors or suboptimal solutions.
By externalizing the syntactic and implementation details to AI tools, the “see stuff, say stuff” workflow allows developers to focus their cognitive resources on the aspects of programming that humans excel at: creative problem-solving, understanding user needs, making architectural decisions, and synthesizing different requirements into coherent solutions.
This alignment with natural cognitive patterns helps explain why many developers report that AI-assisted programming feels more intuitive and less mentally exhausting than traditional approaches. Instead of fighting against the limitations of human working memory and attention, the workflow leverages these natural patterns to achieve better results with less cognitive strain.
Deconstructing “See Stuff”: The Art of Observation and Context Building
The “see stuff” component of Karpathy’s workflow represents far more than passive observation. It encompasses the active, analytical process of understanding context, identifying patterns, recognizing problems, and building the mental models that inform effective collaboration with AI tools. This observational phase is where the human developer’s expertise and judgment become most crucial.
Reading Code with AI-Assisted Intent
When experienced developers examine existing code, they’re not just parsing syntax—they’re building comprehensive mental models of how systems work, identifying potential improvements, understanding the reasoning behind design decisions, and spotting opportunities for enhancement or extension. In the “see stuff, say stuff” workflow, this analytical reading becomes even more important because it forms the foundation for productive AI collaboration.
Effective code reading in this context involves multiple layers of analysis. At the surface level, you’re understanding what the code does—its immediate functionality and behavior. But deeper analysis involves understanding why the code was written the way it was, what constraints or requirements influenced its design, and how it fits into the broader system architecture.
This deeper understanding becomes crucial when working with AI tools because it allows you to provide rich context for your requests and to evaluate AI suggestions intelligently. When you truly understand the existing codebase, you can guide AI tools toward solutions that integrate well with existing patterns and avoid introducing inconsistencies or architectural debt.
The “see stuff” phase also involves recognizing what’s missing. Experienced developers develop an intuition for gaps in functionality, potential edge cases that aren’t handled, opportunities for optimization, or areas where the code could be more maintainable or extensible. These insights become the seeds for productive AI collaboration.
Pattern Recognition and System Understanding
One of the most valuable skills in the “see stuff” phase is the ability to recognize patterns—both positive patterns that should be maintained or extended, and negative patterns that represent opportunities for improvement. This pattern recognition operates at multiple levels, from low-level code patterns to high-level architectural decisions.
At the code level, pattern recognition involves understanding the conventions and idioms used in the existing codebase, recognizing recurring solutions to common problems, and identifying inconsistencies that might indicate bugs or maintenance issues. When working with AI tools, this understanding allows you to request modifications that maintain consistency with existing patterns or deliberately break from them when appropriate.
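To make that concrete, here is a minimal sketch of what this kind of code-level "seeing" can look like in practice: a script that tallies function-naming styles in a module so that inconsistencies can be raised explicitly when collaborating with an AI tool. The regexes and the sample module are illustrative assumptions, not part of any standard.

```python
import ast
import re

SNAKE = re.compile(r"^[a-z_][a-z0-9_]*$")
CAMEL = re.compile(r"^[a-z]+(?:[A-Z][a-z0-9]*)+$")

def naming_styles(source: str) -> dict:
    """Tally snake_case vs camelCase function names in a module."""
    counts = {"snake": [], "camel": [], "other": []}
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            if SNAKE.match(node.name):
                counts["snake"].append(node.name)
            elif CAMEL.match(node.name):
                counts["camel"].append(node.name)
            else:
                counts["other"].append(node.name)
    return counts

# A toy module with a mixed convention -- exactly the kind of
# inconsistency worth mentioning when you "say stuff" to an AI tool.
code = """
def load_user(): pass
def saveUser(): pass
def delete_user(): pass
"""
styles = naming_styles(code)
print(styles["snake"], styles["camel"])
```

A report like this turns a vague feeling that "the naming is inconsistent" into a specific, discussable observation.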
At the architectural level, pattern recognition involves understanding how different components of the system interact, how data flows through the application, and how various concerns are separated and organized. This systems-level understanding is crucial for making requests that enhance rather than compromise the overall architecture.
The ability to recognize patterns also extends to understanding user experience patterns, performance characteristics, and operational concerns. When you can see how existing code serves (or fails to serve) these higher-level goals, you can guide AI tools toward solutions that address real user needs rather than just technical requirements.
Contextual Analysis and Domain Understanding
Effective “see stuff” analysis requires understanding not just the technical aspects of code, but the broader context in which it operates. This includes understanding the business requirements the code serves, the user experience it enables, the operational constraints it must satisfy, and the team dynamics that influence its maintenance and evolution.
This contextual understanding becomes particularly important when working with AI tools because it allows you to provide rich background information that helps the AI generate more appropriate solutions. An AI tool that understands the business context, user requirements, and operational constraints can suggest solutions that address real needs rather than just technical puzzles.
Domain understanding also involves recognizing the specific challenges and opportunities that characterize your particular problem space. Different domains—whether web development, data processing, embedded systems, or machine learning—have their own patterns, constraints, and best practices. Understanding these domain-specific concerns allows you to guide AI tools toward solutions that are appropriate for your specific context.
Critical Evaluation and Quality Assessment

The “see stuff” phase isn’t just about understanding what exists—it’s also about critically evaluating quality, identifying opportunities for improvement, and developing opinions about what good solutions look like in your specific context. This critical evaluation capacity is what allows you to work productively with AI tools rather than just accepting whatever they suggest.
Effective critical evaluation involves multiple dimensions of quality: correctness, maintainability, performance, security, usability, and alignment with business requirements. When you can assess existing code across these dimensions, you can provide more targeted requests to AI tools and evaluate their suggestions more effectively.
This evaluation capacity also involves understanding trade-offs. Most programming decisions involve balancing competing concerns—performance versus maintainability, flexibility versus simplicity, feature completeness versus development speed. Understanding these trade-offs allows you to guide AI tools toward solutions that make appropriate compromises for your specific situation.
Mastering “Say Stuff”: The Art of AI Communication
The “say stuff” component of Karpathy’s workflow represents the communicative bridge between human intention and AI capability. This isn’t simply about issuing commands or asking questions—it’s about engaging in genuine dialogue with AI tools in ways that leverage their strengths while compensating for their limitations. Mastering this communication requires understanding both the technical capabilities of AI systems and the psychological dynamics of effective collaboration.
Prompt Crafting as a Core Skill
In the “see stuff, say stuff” workflow, the ability to craft effective prompts becomes as important as traditional programming skills. An effective prompt isn’t just a clear request—it’s a communication strategy that provides appropriate context, specifies desired outcomes, acknowledges constraints, and guides the AI toward solutions that align with your broader goals. For a deep dive into this skill, resources like the Prompting Guide are invaluable.
Effective prompts typically include several key elements: clear specification of what you want to accomplish, relevant context about the existing system and requirements, information about constraints or preferences that should guide the solution, and success criteria that define what a good solution looks like. The art lies in providing enough information to guide the AI effectively without overwhelming it with irrelevant details.
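As a sketch of how those elements can be made explicit (the section labels and the example request are illustrative conventions, not a prescribed format), a prompt can be assembled from labeled parts rather than written as a single blob of text:

```python
def build_prompt(goal, context, constraints, success_criteria):
    """Assemble a structured prompt from the elements described above.

    The section labels are an illustrative convention, not a formal spec.
    """
    sections = [
        ("Goal", goal),
        ("Context", context),
        ("Constraints", constraints),
        ("Success criteria", success_criteria),
    ]
    return "\n\n".join(f"## {label}\n{body}" for label, body in sections)

prompt = build_prompt(
    goal="Add retry logic to the HTTP client wrapper.",
    context="Python 3.12 service; existing wrapper uses requests.Session.",
    constraints="No new third-party dependencies; keep the public API unchanged.",
    success_criteria="Transient 5xx errors retried up to 3 times with backoff.",
)
print(prompt)
```

Separating the parts also makes it easy to notice when one of them, often the success criteria, is missing from a request.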
The best prompts also demonstrate understanding of the AI’s capabilities and limitations. They request the types of tasks that AI tools excel at while acknowledging areas where human judgment will be required. This collaborative framing leads to more productive interactions than prompts that treat the AI as either a simple tool or an infallible expert.
Prompt crafting also involves understanding how to structure requests for maximum effectiveness. This might involve breaking complex requests into smaller components, providing examples of desired outcomes, or structuring the conversation to build understanding incrementally rather than trying to communicate everything at once.
Conversational Development Techniques
The most effective practitioners of the “see stuff, say stuff” workflow engage in genuine conversations with AI tools rather than simply issuing requests and accepting responses. This conversational approach allows for iterative refinement of ideas, exploration of alternative approaches, and collaborative problem-solving that leverages both human creativity and AI capability.
Effective conversational development involves asking follow-up questions, requesting explanations or justifications for AI suggestions, and exploring alternative approaches to the same problem. This dialogue helps build understanding on both sides—the AI gains more context about your specific needs and preferences, while you develop insight into the AI’s reasoning and capabilities.
The conversational approach also involves being willing to challenge or refine AI suggestions rather than accepting them uncritically. When an AI suggests a solution that doesn’t quite meet your needs, the most productive response is often to explain what’s missing or problematic and ask for modifications rather than starting over with a new request.
This iterative refinement process often leads to solutions that neither human nor AI would have developed independently. The human brings creative insight, domain expertise, and understanding of real-world constraints, while the AI contributes technical knowledge, pattern recognition, and the ability to explore solution spaces more comprehensively than human developers typically can.
Context Management and Information Architecture
One of the most challenging aspects of effective AI communication is managing context across extended conversations. AI tools have limitations in how much information they can process simultaneously, and humans have limitations in how much complexity they can track across long development sessions. Effective “say stuff” techniques involve strategies for managing this complexity while maintaining productive collaboration.
Context management involves deciding what information to include in each prompt, how to structure conversations to maintain relevant context without overwhelming the AI, and when to restart conversations with fresh context rather than trying to maintain increasingly complex conversational threads.
Effective practitioners develop techniques for summarizing important context efficiently, referencing previous parts of the conversation without repeating extensive details, and structuring their communication to build understanding incrementally. This might involve creating explicit summaries of key decisions, maintaining notes about important context that can be referenced as needed, or using structured formats that help organize complex information.
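One lightweight way to keep such summaries is a small, structured record of decisions that can be pasted into a fresh conversation when a thread grows unwieldy. The fields below are one possible shape, not a required format:

```python
from dataclasses import dataclass, field

@dataclass
class SessionContext:
    """Running summary of a development session, for re-seeding an AI chat."""
    goal: str
    decisions: list[str] = field(default_factory=list)
    open_questions: list[str] = field(default_factory=list)

    def as_prompt_preamble(self) -> str:
        """Render the summary as text to paste at the top of a new chat."""
        lines = [f"Goal: {self.goal}", "Decisions so far:"]
        lines += [f"- {d}" for d in self.decisions]
        lines += ["Open questions:"] + [f"- {q}" for q in self.open_questions]
        return "\n".join(lines)

ctx = SessionContext(goal="Migrate config loading to pydantic")
ctx.decisions.append("Keep env-var overrides working")
ctx.open_questions.append("Validate at import time or at startup?")
print(ctx.as_prompt_preamble())
```

The value is less in the data structure than in the habit: a fresh conversation seeded with three lines of decisions often outperforms a hundred-message thread the AI can no longer track.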
The goal is to create a shared understanding between human and AI that enables productive collaboration without requiring either party to manage more complexity than they can handle effectively. This often involves finding the right balance between providing sufficient context and maintaining conversational clarity.
Feedback and Iteration Strategies
The “say stuff” component of the workflow isn’t just about making initial requests—it’s also about providing effective feedback that guides the AI toward better solutions. This feedback process is where much of the collaborative magic happens, as human judgment and AI capability combine to refine and improve solutions iteratively.
Effective feedback involves being specific about what works and what doesn’t in AI-generated solutions. Rather than simply accepting or rejecting suggestions, skilled practitioners explain what aspects of a solution are appropriate and what aspects need modification. This detailed feedback helps the AI understand not just what you want, but why you want it.
The feedback process also involves understanding how to guide AI tools toward improvements without micromanaging every detail. This requires developing a sense for when to provide specific direction and when to describe desired outcomes and let the AI determine the best approach to achieve them.
Iteration strategies involve knowing when to continue refining an existing solution and when to start fresh with a different approach. Sometimes the most productive path involves building on AI suggestions incrementally, while other situations call for stepping back and reconceptualizing the problem entirely.
Practical Implementation Strategies
Moving from understanding the conceptual framework of “see stuff, say stuff” to implementing it effectively in real-world development requires practical strategies that address the messy realities of software projects, team dynamics, and varying levels of AI tool sophistication.
Integrating the Workflow into Existing Development Processes
The “see stuff, say stuff” workflow doesn’t exist in isolation—it needs to integrate smoothly with existing development processes, team practices, and project requirements. This integration involves adapting the workflow to fit different types of projects, team sizes, and organizational cultures while maintaining its core benefits.
For individual developers, integration might involve establishing new routines for code analysis and AI collaboration, developing personal standards for when and how to engage AI tools, and creating systems for documenting and sharing insights gained through AI-assisted development. The key is finding ways to incorporate the workflow that enhance rather than disrupt existing productive practices.
In team environments, integration is more complex: developers may have different comfort levels with AI tools, different communication styles, and different views on AI's appropriate role in software development. Successful team integration often involves establishing shared standards for AI tool usage, creating processes for reviewing and validating AI-generated code, and developing practices that capture the workflow's benefits while maintaining code quality and consistency.
The integration process also involves adapting the workflow to different types of development tasks. The approach that works well for exploratory prototyping might not be appropriate for production bug fixes, and the techniques that are effective for greenfield development might need modification for legacy system maintenance.
Tool Selection and Configuration
Implementing the “see stuff, say stuff” workflow effectively requires thoughtful selection and configuration of AI tools that support the collaborative approach rather than just providing code generation capabilities. The best tools for this workflow combine powerful language understanding with interfaces that support sustained conversation and iterative refinement.
Tool selection involves evaluating factors like conversation quality, context retention, code generation accuracy, integration with existing development environments, and the subjective experience of using the tool for extended development sessions. The goal is finding tools that feel like genuine collaborators rather than sophisticated utilities.
Configuration involves customizing tools to match your specific development context, communication style, and project requirements. This might involve setting up custom prompts or templates, configuring integrations with your development environment, or establishing workflows that streamline the transition between analysis, conversation, and implementation.
The tool landscape is evolving rapidly, so effective implementation also involves staying aware of new capabilities and options while avoiding the temptation to constantly switch between different tools. The key is finding a stable foundation that supports your work while remaining open to beneficial improvements.
Quality Assurance and Code Review
AI-assisted development also calls for new approaches to quality assurance that account for its collaborative nature. Traditional code review processes may need adaptation to address the unique characteristics of AI-generated code and the different types of errors that can arise in collaborative development.
Quality assurance in AI-assisted development involves multiple layers of validation. At the technical level, this includes verifying that AI-generated code works correctly, follows appropriate patterns and conventions, and integrates well with existing systems. But it also involves higher-level validation of whether the solutions address real requirements and align with project goals.
Code review processes may need to evolve to include evaluation of the collaborative process itself—assessing whether AI tools were used appropriately, whether the resulting solutions demonstrate good judgment about trade-offs and design decisions, and whether the development approach produced maintainable, extensible results.
The review process also provides opportunities for team learning about effective AI collaboration techniques. When team members share their experiences with different approaches to the “see stuff, say stuff” workflow, the entire team can benefit from accumulated insights and develop more sophisticated collaborative practices.
Measuring Success and Iteration
Finally, successful adoption depends on metrics and feedback mechanisms that show whether the workflow is delivering the intended benefits and where improvements are needed. This measurement involves both quantitative metrics and qualitative assessment of the development experience.
Quantitative metrics might include development speed, code quality measures, bug rates, or productivity indicators that can be compared to traditional development approaches. But the most meaningful measures often involve qualitative factors like developer satisfaction, learning rate, creative output, and the subjective experience of the development process.
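As a minimal sketch of what tracking such quantitative signals might look like in practice (the metric names and the numbers are invented for illustration):

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class SessionLog:
    """One AI-assisted development session; the fields are illustrative."""
    task: str
    minutes_spent: int
    ai_suggestions_accepted: int
    ai_suggestions_rejected: int
    bugs_found_in_review: int

def acceptance_rate(logs: list[SessionLog]) -> float:
    """Fraction of AI suggestions that survived human review."""
    accepted = sum(s.ai_suggestions_accepted for s in logs)
    total = accepted + sum(s.ai_suggestions_rejected for s in logs)
    return accepted / total if total else 0.0

logs = [
    SessionLog("retry logic", 45, 6, 2, 1),
    SessionLog("config migration", 90, 10, 5, 0),
]
print(f"acceptance rate: {acceptance_rate(logs):.0%}")
print(f"avg bugs per session: {mean(s.bugs_found_in_review for s in logs)}")
```

Even a crude log like this makes trends visible: a steadily climbing acceptance rate paired with rising review bugs, for example, is an early warning of the over-reliance pitfall discussed later.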
The measurement process should also include regular reflection on what’s working well and what could be improved in your implementation of the workflow. This might involve keeping development journals, conducting regular retrospectives, or simply maintaining awareness of your own experience and making adjustments as needed.
Successful implementation of the workflow is an iterative process that improves over time as you develop better skills in AI collaboration, discover more effective techniques, and adapt the approach to your specific context and requirements.
Advanced Applications and Techniques
As developers become proficient with the basic “see stuff, say stuff” workflow, they can explore more sophisticated applications that leverage the full potential of human-AI collaboration in software development.
Architectural Design and System Planning
One of the most powerful applications of the “see stuff, say stuff” workflow involves using AI collaboration for architectural design and system planning. This goes beyond code generation to include high-level system design, technology selection, and strategic technical decision-making.
In architectural design, the “see stuff” phase involves analyzing existing systems, understanding requirements and constraints, and identifying opportunities for improvement or extension. The “say stuff” phase involves engaging AI tools in conversations about design alternatives, trade-offs between different approaches, and the implications of various architectural decisions.
This collaborative approach to architecture can be particularly valuable because it allows developers to explore design alternatives more thoroughly than they might independently, while benefiting from AI tools’ ability to consider a broader range of patterns and approaches than any individual developer might recall.
The key to successful architectural collaboration is maintaining human responsibility for final decisions while leveraging AI tools to expand the range of options considered and to think through the implications of different approaches more systematically.
Legacy Code Analysis and Modernization
The “see stuff, say stuff” workflow is particularly powerful for working with legacy codebases, where understanding existing systems and planning improvements can be especially challenging. AI tools can help analyze complex legacy code, identify patterns and dependencies, and suggest modernization strategies that might not be immediately obvious to human developers.
In legacy analysis, the “see stuff” phase involves systematically examining existing code to understand its structure, dependencies, and behavior. AI tools can assist with this analysis by helping to identify patterns, extract documentation from code, and build comprehensive models of system behavior.
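As a hedged example of tool-assisted "seeing", Python's standard ast module can build a quick inventory of a legacy module, its imports and function signatures, which then becomes concrete context to bring into an AI conversation. The sample "legacy" source is a toy stand-in:

```python
import ast

def inventory(source: str) -> dict:
    """Summarize a module's imports and function signatures."""
    imports, functions = [], []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            imports += [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom):
            imports.append(node.module or "")
        elif isinstance(node, ast.FunctionDef):
            args = [a.arg for a in node.args.args]
            functions.append(f"{node.name}({', '.join(args)})")
    return {"imports": sorted(set(imports)), "functions": functions}

legacy = """
import os
from json import loads

def read_config(path):
    return loads(open(path).read())

def env_or(key, default):
    return os.environ.get(key, default)
"""
print(inventory(legacy))
```

A summary like this, pasted into a conversation, lets the AI reason about the module's surface area without receiving (or misreading) the full implementation.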
The “say stuff” phase involves using these insights to plan modernization strategies, refactor existing code, and gradually improve system architecture while maintaining functionality. The collaborative approach allows for more systematic and less risky legacy modernization than traditional approaches.
Performance Optimization and Debugging
Advanced practitioners of the “see stuff, say stuff” workflow use AI collaboration for sophisticated performance optimization and debugging tasks that leverage both human insight and AI analytical capabilities.
In performance optimization, the workflow involves analyzing system behavior and bottlenecks (“see stuff”) and then collaborating with AI tools to identify optimization opportunities and implement improvements (“say stuff”). This collaborative approach can be particularly effective because it combines human understanding of system requirements and user needs with AI tools’ ability to analyze code patterns and suggest specific optimizations.
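The "see stuff" half of that loop usually starts from measurement rather than intuition. A minimal sketch using Python's built-in cProfile and pstats modules follows; the deliberately quadratic string-building workload is a stand-in for a real bottleneck:

```python
import cProfile
import io
import pstats

def slow_concat(n):
    """Deliberately quadratic string building -- a stand-in bottleneck."""
    out = ""
    for i in range(n):
        out += str(i)
    return out

profiler = cProfile.Profile()
profiler.enable()
slow_concat(20_000)
profiler.disable()

# Capture the top entries by cumulative time. A concrete summary like
# this is the kind of evidence worth pasting into an AI conversation
# instead of "my function feels slow".
buf = io.StringIO()
pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(5)
print(buf.getvalue())
```

Bringing profiler output into the conversation grounds the "say stuff" phase in data, so the AI optimizes the actual hotspot rather than a guessed one.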
For debugging, the workflow involves systematic analysis of system behavior and error patterns, followed by collaborative exploration of potential causes and solutions. AI tools can help identify patterns in logs, suggest potential causes for observed behavior, and recommend debugging strategies that might not occur to human developers.
Domain-Specific Applications
The “see stuff, say stuff” workflow can be adapted for domain-specific applications that leverage specialized knowledge and requirements. Different domains—such as web development, data science, embedded systems, or machine learning—have their own patterns, constraints, and best practices that influence how the workflow is most effectively applied.
In web development, the workflow might focus on user experience analysis, accessibility considerations, and performance optimization. In data science, it might emphasize data quality analysis, statistical methodology, and result interpretation. In embedded systems, it might prioritize resource constraints, real-time requirements, and hardware integration concerns.
The key to successful domain-specific application is developing understanding of how AI tools can best support the specific types of analysis and decision-making that characterize your domain, while maintaining awareness of domain-specific constraints and requirements that should guide AI collaboration.
Common Pitfalls and How to Avoid Them
Even experienced developers can encounter significant challenges when implementing the “see stuff, say stuff” workflow. Understanding these common pitfalls and developing strategies to avoid them is crucial for successful adoption of AI-assisted development practices.
Over-Reliance on AI Suggestions
One of the most common pitfalls is gradually shifting from collaborative development to passive consumption of AI suggestions. This can happen subtly as developers become comfortable with AI-generated code and begin accepting suggestions without sufficient critical evaluation or understanding.
The problem with over-reliance isn’t just that it can lead to lower-quality code—it’s that it undermines the collaborative nature of the workflow that makes it most effective. When developers stop engaging critically with AI suggestions, they lose the opportunity to guide the AI toward better solutions and miss chances to learn from the collaborative process.
Avoiding over-reliance requires maintaining active engagement with AI-generated code, regularly asking follow-up questions to understand reasoning behind suggestions, and being willing to reject or modify AI recommendations when they don’t align with project requirements or quality standards.
Insufficient Context Provision
Another common pitfall involves failing to provide sufficient context for effective AI collaboration. This might manifest as vague prompts that don’t give the AI enough information to generate appropriate solutions, or as requests that ignore important constraints or requirements that should guide the solution.
Insufficient context often leads to AI suggestions that are technically correct but inappropriate for the specific situation. The solutions might work in isolation but fail to integrate well with existing systems, or they might address the immediate request while missing important broader considerations.
Avoiding this pitfall requires developing skills in context analysis and communication that allow you to provide AI tools with the information they need to generate appropriate solutions. This involves both technical context about the existing system and broader context about requirements, constraints, and goals.
Losing Sight of the Big Picture
The iterative, conversational nature of the “see stuff, say stuff” workflow can sometimes lead developers to focus too narrowly on immediate problems while losing track of broader architectural concerns or project goals. This can result in solutions that solve local problems while creating or ignoring systemic issues.
Maintaining big-picture awareness requires regularly stepping back from detailed implementation discussions to consider how current work fits into the broader system architecture and project objectives. This might involve periodic review sessions, documentation practices that maintain architectural context, or explicit conversations with AI tools about system-level concerns.
Inadequate Validation and Testing
The speed and ease of AI-assisted development can sometimes lead to inadequate validation and testing of generated solutions. When code can be produced quickly through conversation with AI tools, it’s tempting to assume the code works correctly without thorough testing.
This pitfall can be particularly dangerous because AI-generated code often looks correct and may work in simple test cases while failing in edge cases or under specific conditions that weren’t considered during generation.
Avoiding inadequate validation requires maintaining disciplined testing practices regardless of how code is generated, developing systematic approaches to validating AI-generated solutions, and being particularly careful about testing edge cases and integration scenarios that might not have been explicitly considered during AI collaboration.
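The gap between "looks correct" and "is correct" is easiest to see in a small example. Suppose an AI assistant produced the (hypothetical) `slugify` helper below: it passes the happy-path demo, and the value of disciplined validation lies in the edge-case checks that a quick conversational review would skip.

```python
# Illustrative sketch: validating a hypothetical AI-generated helper
# beyond the happy path. The function itself is plausible AI output.
import re


def slugify(title: str) -> str:
    """Turn a title into a URL slug (hypothetical AI-generated code)."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")


# Happy path -- the case an AI demo would typically show.
assert slugify("Hello World") == "hello-world"

# Edge cases that quick demos skip: empty input, punctuation-only
# input, and repeated separators with leading/trailing noise.
assert slugify("") == ""
assert slugify("!!!") == ""
assert slugify("  A -- B  ") == "a-b"
```

The discipline is the same regardless of how the code was produced: before accepting a generated function, enumerate the boundary conditions it will actually face and assert on each one.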
The Future Evolution of the Workflow
The “see stuff, say stuff” workflow represents an early stage in the evolution of human-AI collaboration in software development. Understanding current trends and anticipating future developments can help developers prepare for continued evolution in this space.
Emerging AI Capabilities
As AI tools become more sophisticated, the nature of the “see stuff, say stuff” workflow will likely evolve to take advantage of new capabilities. Emerging developments in areas like multimodal AI, improved context understanding, and better integration with development environments will create new opportunities for human-AI collaboration.
Future AI tools might be able to analyze visual designs and generate corresponding code, understand voice commands and engage in spoken conversations about code, or maintain much longer context windows that support more extended collaborative sessions.
These evolving capabilities will likely enable more sophisticated forms of collaboration while potentially changing the balance of responsibilities between human and AI participants in the development process.
Integration with Development Ecosystems
The workflow will likely become more deeply integrated with existing development tools and processes. Instead of requiring separate interfaces for AI collaboration, future development environments might provide seamless integration that makes AI assistance feel like a natural part of the coding process.
This deeper integration might include AI tools that understand project context automatically, provide suggestions that are aware of existing code patterns and architectural decisions, and support collaborative development workflows that involve multiple team members working with AI assistance.
Evolving Best Practices
As more developers adopt AI-assisted development workflows, best practices will continue to evolve based on accumulated experience and research. This includes better understanding of when and how to use AI assistance most effectively, improved techniques for managing the collaborative process, and more sophisticated approaches to quality assurance in AI-assisted development.
The community of practice around AI-assisted development is still forming, and the best practices that emerge from this community will likely be more nuanced and context-specific than current general guidelines.
Transforming Your Development Practice
Successfully implementing the “see stuff, say stuff” workflow requires more than just learning new techniques—it involves transforming your fundamental approach to software development. This transformation affects not just how you write code, but how you think about problems, approach learning, and collaborate with both AI tools and human team members.
The workflow represents a shift from individual expertise to collaborative intelligence, from memorizing implementation details to mastering communication and synthesis, from working in isolation to engaging in ongoing dialogue with AI partners. These changes can be profound and may require significant adjustment for developers accustomed to traditional development approaches.
The most successful practitioners of the “see stuff, say stuff” workflow report that it changes not just their productivity but their relationship with programming itself. Instead of feeling like a struggle against complex syntax and implementation details, programming becomes a creative conversation about problems and solutions. Instead of being limited by individual knowledge and experience, developers can explore much broader solution spaces and learn continuously through collaboration with AI partners.
This transformation isn’t automatic or immediate—it requires practice, patience, and willingness to experiment with new approaches. But for developers who embrace the collaborative potential of AI-assisted development, the “see stuff, say stuff” workflow offers a glimpse into a future where human creativity and artificial intelligence combine to produce software solutions that neither could achieve independently.