Bespoke means custom-made; the word derives from ‘bespeak’, to speak for something in advance

A bespoke approach to AI and project management puts people over ritual: people want to do the work they enjoy, not get bogged down in process… There’s a balance, though - most people will take on a few extra well-defined tasks in their day if doing so demonstrably leads to more predictable product development and delivery.

The foundation of Agile is distributed collaboration, and Modern Agile provides a template and framework for this approach. When we use individual perspectives to evolve and strengthen how we work together, people gain trust in the system, stakeholders gain transparency into product delivery schedules, and better project planning leads to better business outcomes.

The foundation of AI is pattern recognition over a large context of information. If we offload those extra well-defined tasks to agents and use AI’s much larger context to coordinate work, we can combine the best of AI, Agile, and project management.

A personalized and adaptive approach to both AI and project management encourages wider adoption of best practices and leads towards a happier company culture. This site is an ‘open science’ project to discuss successes, failures, and lessons learned; and to iteratively improve how we all can work together.


Examples

Development, test, marketing, and more…

Design Goals

Create a vision that is sustainable and scalable…


Science it out…

It’s important to practice working well together…

The long-term strengths of adaptive action

World Models in Product Development

or How to Stop Worrying and Love AI

Introduction: The Pattern We've Seen Before

95% of AI projects deliver no return on investment. Companies are spending $500 billion this year. Software development teams question whether they need project managers when AI can ideate, develop requirements, and iterate with the team. In the wider corporate world, AI prompts in browsers automate the operational tasks that once defined project management.

This exact pattern happened before with Agile—and we spent twenty years figuring it out through trial, error, and painful transformation.

The difference is that AI adoption is measured in months and years, not decades. Delay will have existential costs.

The Central Challenge

AI adoption requires the same fundamental shift Agile needed: role clarity combined with reliable data pipelines. But this time we're armed with experience, and we can't afford another generation of costly mistakes.

The solution lies in world models—co-evolving representations of intent, roles, and data that adapt as AI changes where bottlenecks emerge in the product development cycle. This isn't a new framework to learn. It's an evolution of the roles you already have, informed by a decade of experience moving product development teams toward the processes needed to successfully adopt AI.

The Core Problem: Why AI Projects Are Failing

The Numbers Tell a Story

Initial enthusiasm for AI tools leads to high usage rates that fall off quickly and precipitously. Companies introduce AI programs and see short-term productivity increases followed by rapid decreases. The pattern is consistent across industries and use cases.

Three Root Causes

1. The Trust Gap

AI is running into serious hurdles even in the world of developers. There is a cost when tools don't provide a reliable user experience. When AI tools break, are inconsistent, or return too much slop, no amount of prompt engineering or agentic magic will overcome lagging trust and the high costs of engagement.

Questions plague adoption:

  • What can we safely enter into context windows?
  • How do we validate AI outputs in high-stakes domains like legal, medical, and financial services?
  • What accountability frameworks apply when AI produces errors?

The slop problem creates a vicious cycle: when AI breaks or produces inconsistent results, developer collaboration suffers, and impressive solo accomplishments may not scale to teams. No amount of technical sophistication can recover from broken trust.

2. Data Fragmentation

Many organizations are finding that their internal data is too fragmented or messy for AI to use reliably. Connecting AI with legacy infrastructure is often more expensive than the AI software itself.

This problem has deep roots in Agile philosophy. Agile conditioned us to maximize work for short-term investment, creating a lossy approach to information and documentation. The marginal value of documentation was deemed low, and capturing the intent behind why changes were made was considered even lower. Product managers focused on the next release rather than the past.

The result: data wastelands. "A wiki is where information goes to die."

AI tools are particularly good at creating value from data. Given a product development philosophy shaped by Agile’s attitude toward documentation, it should come as little surprise that AI is not attracting paying customers. Without data, there is no value proposition.

3. Role Confusion

Outside of software development, AI diffusion is not delivering on the promise to provide systemic core business value. Experience is sparse on how to calculate ROI. Staff require guidance toward cultural adaptation as well as coaching and clarity about their roles along the way.

The fundamental questions remain unanswered:

  • Where does AI fit in existing workflows?
  • Who can direct and oversee AI operations?
  • How do we address fears about job displacement?
  • What happens when AI lowers the cost of cognitive tasks but creates new bottlenecks in human review?

CEOs adopt AI-first initiatives with savvy investor outreach but unclear expectations. The force to conform is strong, especially with actions that create soundbites to please investors. "Reduced unnecessary headcount by adopting AI" is currently in vogue, even when the productivity gains don't materialize.

Why This Feels Familiar

These challenges mirror exactly what we experienced during early Agile adoption:

  • Trust issues with new tools and processes
  • Data infrastructure problems
  • Role confusion and cultural resistance
  • CEO pressure for quick wins to demonstrate value to investors

We've seen this all before. The question is whether we've learned enough to avoid repeating the same mistakes.

What Agile Taught Us: The Evolution from Iteration to Data-First

The Agile Philosophy and Its Blindspot

All flavors of Agile evolved to protect the most expensive and time-consuming aspects of product development—generally building and testing a product. Agile introduced team-based rituals to factor product requirements into smaller parts and then apply a dynamic "measure twice, cut once" approach to implementation.

What Agile Optimized For:

  • Protecting expensive resources: development time and testing cycles
  • Adapting requirements within sprints based on feedback
  • Team collaboration and iterative delivery

What Agile Deprioritized:

  • Documentation ("low marginal value")
  • Capturing intent behind decisions ("even lower value")
  • Data versioning and curation
  • Cross-functional knowledge sharing beyond immediate sprint needs

By design, feature requirements and use cases evolved within sprints, and it took tremendous work to coordinate and sequence integration, validation, and documentation. Market pressures drove teams to spend time on work that provided immediate value in support of sprint goals.

Agile never solved the core challenge of curating and versioning data related to the product development cycle. Good GitFlow and DevOps mitigated this data loss to some extent, but even the clear benefits of automation weren't enough to support maintaining test cases over time.

The Inflection Point: When Automation Changed Everything

Prior to AI, the previous "big shift" was toward automation. QA morphed into Test Automation, which became part of automated DevOps pipelines with change control and deployment. This introduced one problem that ruled them all: automation requires clear requirements against which to test.

The Challenges That Emerged (circa 2010-2015):

Teams from startups to global power companies had to adjust to a much faster develop-and-deploy cycle. Just-in-time changes made during refinement, in-sprint, and in feature branches required integration with other teams' work.

Prior to automation, the relatively slow manual merge process provided time for PM review, "enough" testing, and multiple attempts at deployment—ideally with some trailing initiative to create documentation and occasionally give Customer Support insights into changes and issues.

With automation, that buffer disappeared. Production suddenly placed a premium on accurate data and documentation. To support automation, Agile teams had to spend time on work that provided longer-term value: versioned requirements, documentation, and validated test coverage.

The Failed First Response

Very few focused on how role-based tribal knowledge, experience, and wisdom were the key to successful digital transformation. The initial response was to include everyone: Agile teams adjusted by including members from design teams as well as customer support, finance, and marketing. Stakeholders were invited into the fold.

Why It Failed:

This helped surface a few issues but more generally increased overhead, slowed development, and made it more difficult for team members to understand what was going on. People are not very good at understanding their own roles in complex systems.

The Solution That Worked: Role-Based Sources of Truth

The next iteration of data-first Agile resolved most of the problems. Representatives from Product Management, Design, Development, Validation, Deployment, and other stakeholders were still included in teams—but they were now accountable for being the sources of truth for their respective roles.

Team Contracts with SLAs:

  • If a requirement was unclear or inconsistent, the design team was required to solve the problem
  • Changes were reflected in versioned design specifications and integration tests
  • Documentation was updated before a change was considered done
  • Customer support and marketing were involved in the business decision to deploy features

Each Role as Source of Truth:

  • Product Management: Requirements and business context
  • Design: User flows and specifications
  • Development: Implementation and technical decisions
  • Validation: Test coverage and acceptance criteria
  • Deployment: Release management and infrastructure

Building Trust Through Success

The companies that successfully incorporated automation and DevOps pipelines focused on working with the people in each role. They understood what each role needed to gain trust in the rest of the system and, just as importantly, secured critical buy-in for taking on a few small tasks and processes in addition to existing responsibilities.

A problem with any distributed model for increasing productivity is that everyone has to trust that it works. The most effective way to build that trust while moving to a data-focused Agile process is success.

The Timeline:

In general, it took three to six months for teams to look back and wonder how they did things any other way. Successful feature launches involved Marketing and Finance. Customer Service had the new user flows and issues in hand prior to release. Documentation always reflected the current version.

Stakeholders granted teams more autonomy. Customers were happier. CEOs were rewarded by investors.

Key Lessons for AI Adoption

  1. Role clarity matters more than tools - The best AI won't fix unclear responsibilities
  2. Data pipelines require buy-in from everyone - One broken link breaks the chain
  3. Trust takes time but not decades - 3-6 months for process adoption, 2-3 years for full cultural change
  4. Success breeds adoption - Small wins matter more than grand visions
  5. Incremental role changes work better than wholesale transformation

Why AI Is Different (And Why the Fundamentals Remain the Same)

The Accelerated Timeline

Agile adoption played out over twenty-plus years across industries, with each organization learning painful lessons independently. Digital transformations were decades-long journeys, with shiny new approaches emerging every few years to encourage CEO engagement and cajole the laggards.

AI adoption is happening at a fundamentally different pace. The growing pains and failures appear more quickly and publicly. The successes are difficult to quantify, but far less so than in the early days of Agile. Market pressure, competitive advantage, and investor expectations compress decision cycles from years to months.

Critical difference: Delay has existential costs in the AI era. Organizations that fall behind may not have time to catch up.

The Expanded Scope

AI adoption is faster than Agile’s was and spans a broader range of industries. Agile primarily focused on software development and gradually diffused to other business verticals. AI represents simultaneous adoption across finance, healthcare, legal, marketing, and operations.

More diverse use cases mean more complex failure modes. The market for AI tools is in its infancy, for both frontier models and the practices for using them. Even so, AI-washing already gives CEOs leverage to rationalize massive capital investments and layoffs.

The Dynamic Bottleneck Problem

This is where AI fundamentally differs from Agile in a way that demands new thinking.

Agile's Assumption:

Development and testing are the expensive bottlenecks. Build team rituals and processes to optimize around protecting these resources.

AI's Reality:

AI lowers the cost of cognitive tasks that can be expressed as language. The bottleneck moves dynamically depending on which tasks AI can handle effectively.

Examples of Shifting Bottlenecks:

  • Code generation: Development is no longer the primary constraint; code review, integration testing, and technical validation become bottlenecks
  • Content creation: Writing is cheap and fast, but editorial judgment, brand consistency, and strategic messaging require more human oversight
  • Data analysis: Generating insights happens quickly, but validating accuracy, determining business implications, and making decisions based on analysis require expertise
  • Customer support: Response generation is instant, but empathy, escalation judgment, complex problem-solving, and relationship management can't be automated

The Catch-22:

The bottleneck for adopting AI is not incremental improvements to frontier models. No matter the hype, AI does not have the domain experience, intuition, or wisdom to divine human intent, ask the right questions, or reward innovation.

The costs of human labor in data preparation and cleaning far eclipse even the costs of training, which in turn is orders of magnitude more costly in compute and intellectual effort than improvements to fine-tuning, context window expansion, and RAG.

Without clean, structured data, AI can't deliver value. But without AI proving value, organizations won't invest in the data infrastructure needed to make it work.

The Similarities That Matter

Despite these differences, the fundamental problem is identical to what we faced with Agile.

Both require:

  • Clear role definitions that adapt to new production economics
  • Data pipelines that capture intent and context, not just output
  • Trust-building through incremental success
  • Cultural change that respects human capabilities rather than trying to replace them

The human problems remain constant:

  • Skills gaps and training needs
  • Cultural resistance to change
  • CEO pressure for quick wins and investor optics
  • Fear of job displacement
  • Difficulty measuring ROI during transition periods

What's Genuinely New

Challenges Specific to AI:

  • Opacity: AI decision-making is harder to audit than human or rule-based decisions
  • Reliability variance: Performance varies wildly by domain and task type
  • Context limitations: How much information can be safely and effectively provided?
  • Hallucinations and slop: AI confidently produces wrong answers
  • Rapid evolution: Best practices for current models may not apply to next-generation models

Why Traditional Change Management Fails

Standard change management approaches assume:

  • Stable tools with predictable capabilities
  • Clear best practices emerging from early adopters
  • ROI calculations based on established use cases

AI reality demands different thinking:

  • Tools and capabilities evolving monthly
  • Best practices still being discovered through experimentation
  • ROI dependent on organization-specific data quality and role adaptation

We can't wait for someone else to figure it out. Each organization needs a framework that co-evolves with AI capabilities.

The World Model Approach: Co-Evolving with AI

What Is a World Model?

A world model is a dynamic, shared representation of how intent, roles, data, and validation interact within your product development cycle—designed to adapt as AI changes the economics of production.

This is not another framework to learn. It's an approach to thinking about your existing roles and processes through a lens that makes AI integration natural rather than disruptive.

The Core Concept:

In AI research, world models refer to internal representations that systems build to model how states change over time in response to specific actions. These models enable AI to predict outcomes and plan effectively.

In business, world models serve a parallel function: they create explicit, shared representations of how decisions, data, and outcomes connect across roles. Just as AI builds internal representations of patterns, organizations need external representations of their knowledge flows that both humans and AI can navigate.
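To make the AI-research sense of the term concrete, here is a toy sketch, purely illustrative and not drawn from any particular system: a transition model records how states change in response to actions, and can then replay a plan internally before acting in the real world.

```python
class TinyWorldModel:
    """Toy transition model: (state, action) -> predicted next state."""

    def __init__(self) -> None:
        self.transitions: dict[tuple[str, str], str] = {}

    def observe(self, state: str, action: str, next_state: str) -> None:
        # Record how the world actually changed in response to an action.
        self.transitions[(state, action)] = next_state

    def predict(self, state: str, action: str) -> str | None:
        # Predict the next state; None means this situation has never been seen.
        return self.transitions.get((state, action))

    def rollout(self, state: str, plan: list[str]) -> list[str]:
        # Simulate a sequence of actions without touching the real world.
        trajectory = [state]
        for action in plan:
            nxt = self.predict(state, action)
            if nxt is None:
                break  # the model cannot see past unfamiliar territory
            trajectory.append(nxt)
            state = nxt
        return trajectory


model = TinyWorldModel()
model.observe("draft spec", "review", "approved spec")
model.observe("approved spec", "implement", "feature in test")
print(model.rollout("draft spec", ["review", "implement", "deploy"]))
# -> ['draft spec', 'approved spec', 'feature in test']  (deploy was never observed)
```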

Why "World Models" and Not Just "Process Documentation":

The term emphasizes that these representations:

  • Are dynamic (they evolve as capabilities and bottlenecks change)
  • Are shared (everyone can see how the pieces fit together)
  • Model change (they capture not just what is, but how things transform)
  • Enable co-evolution between human practices and AI capabilities

The Three Layers of a World Model

Layer 1: Intent Mapping

This layer captures the "why" behind work:

  • What problem are we solving? (Product Management domain)
  • What does success look like? (Design and Business domains)
  • What constraints matter? (Technical, regulatory, resource domains)

AI's role in this layer: Help articulate and validate intent through natural language interaction. Surface conflicts or ambiguities in stated goals.

Human's role in this layer: Provide domain expertise, business intuition, and strategic wisdom that AI lacks. Make judgment calls about priorities and tradeoffs.

Layer 2: Role-Based Data Pipelines

This layer defines how work flows between roles:

  • Each role maintains its source of truth
  • Data flows between roles with clear handoffs and SLAs
  • Changes are versioned with context about why, not just what
  • Dependencies and blockers are explicit

AI's role in this layer: Automate data transformation between formats. Flag inconsistencies. Suggest connections between related information.

Human's role in this layer: Make judgment calls when data conflicts. Resolve ambiguities. Validate that AI suggestions make sense in context.

Layer 3: Continuous Validation

This layer ensures quality at every stage:

  • Are we building the right thing? (Product validation)
  • Are we building it right? (Technical validation)
  • Can we support what we built? (Operational validation)

AI's role in this layer: Pattern recognition across large datasets. Identify anomalies and edge cases. Suggest test scenarios based on historical issues.

Human's role in this layer: Contextual judgment about which issues matter most. Risk assessment for edge cases. Stakeholder communication about tradeoffs.
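One way, among many, to make the three layers concrete is to capture them per feature in a small structure that both humans and AI tools can query. The sketch below is a minimal illustration; the class and field names are placeholders, not something prescribed by the approach.

```python
from dataclasses import dataclass, field

@dataclass
class Intent:                      # Layer 1: the "why"
    problem: str                   # what problem are we solving?
    success_criteria: list[str]    # what does success look like?
    constraints: list[str]         # technical, regulatory, resource limits

@dataclass
class Handoff:                     # Layer 2: role-based data pipelines
    from_role: str                 # e.g. "Design"
    to_role: str                   # e.g. "Development"
    artifact: str                  # the source-of-truth item being handed off
    sla_days: int                  # agreed turnaround for the handoff
    rationale: str                 # why the change was made, not just what changed

@dataclass
class ValidationCheck:             # Layer 3: continuous validation
    question: str                  # "right thing?", "built right?", "supportable?"
    owner_role: str                # the human role accountable for the judgment
    ai_assist: str = ""            # optional: what pattern recognition AI contributes

@dataclass
class WorldModelEntry:             # one feature's slice of the world model
    feature: str
    intent: Intent
    pipeline: list[Handoff] = field(default_factory=list)
    validation: list[ValidationCheck] = field(default_factory=list)
```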

Why This Enables AI Adoption

Addresses the Trust Gap:

Clear accountability at each step means AI suggestions can be traced back to source data and validated by the responsible human role. There are explicit checkpoints at critical junctures where humans review and approve AI contributions.

Trust builds gradually as teams see AI catching issues they would have missed while also learning when to override AI suggestions based on context it doesn't have.

Solves the Data Problem:

When each role sees immediate utility from maintaining their source of truth, data quality becomes a natural priority rather than an afterthought. Documentation becomes a byproduct of the work rather than additional overhead.

The data pipelines serve humans first: clearer handoffs, fewer misunderstandings, less rework. AI benefits second by having clean, contextualized data to learn from.

Clarifies Roles:

AI handles tasks expressible as language and pattern recognition. Humans focus on intent, judgment, and validation. As AI capabilities grow, roles evolve rather than disappear.

The question shifts from "will AI replace this role?" to "how does this role's focus change as AI handles routine cognitive tasks?"

Provides an Evolutionary Path:

No "big bang" transformation required. Start with current roles and minimal process additions. Add AI tools where they provide clear value. Expand as trust and data quality improve.

Teams can adopt at human pace because the framework anticipates that AI capabilities will continue evolving. The goal is not to optimize for today's AI but to create structures that adapt as AI improves.

How This Differs from Other Approaches

vs. AI-First Transformation:

AI-first approaches often assume AI can replace complex human judgment and push organizations to reorganize around AI capabilities. This creates resistance and often fails when AI can't deliver on promises.

World models start with humans and add AI incrementally. The focus is on amplifying existing strengths rather than wholesale replacement.

vs. Traditional Change Management:

Traditional change management assumes you're moving from a known current state to a known future state and that the tools are relatively stable.

World models are designed for dynamic capabilities. They emphasize co-evolution rather than transformation to a fixed end-state and include built-in adaptation mechanisms.

vs. Data Governance Initiatives:

Data governance typically focuses on compliance, access control, and preventing misuse. It's often seen as overhead that slows teams down.

World models focus on active data utility in daily workflows. They connect data quality directly to productivity gains, making maintenance feel valuable rather than burdensome.

The Feedback Loop

World models improve over time through a continuous cycle:

  1. Explicit representation of current state (roles, data flows, workflows)
  2. AI interaction reveals gaps, inefficiencies, and opportunities
  3. Human refinement based on what actually works in practice
  4. Update the model with new patterns and improved practices
  5. Repeat as AI capabilities and business needs evolve

This creates institutional knowledge that:

  • Survives role transitions and employee turnover
  • Can be queried by both humans and AI
  • Evolves with your organization's unique context
  • Becomes increasingly valuable over time as it captures more experience

The world model becomes a living asset that reflects not just how work is done but why it's done that way—the accumulated wisdom that makes your organization effective.

Your Roadmap: Implementation by Role

Before You Start: Map Your Current State

Weeks 1-2: Identify Your Sources of Truth

For each role in your product development cycle, ask:

  • Where does authoritative information currently live? (Often: someone's head, scattered documents, tribal knowledge)
  • What decisions does this role own?
  • What information do they need from other roles to do their work?
  • What information do other roles need from them?

Tools: Use a simple spreadsheet or diagram. Don't overcomplicate this step.

Output: A visual map of information flows and dependencies that everyone can understand.
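For teams that prefer something slightly more structured than a spreadsheet, the same map can be expressed as data. The roles and field names below are hypothetical placeholders; the useful part is being able to ask simple questions, such as which needs nobody claims to provide.

```python
role_map = {
    "Product Management": {
        "source_of_truth": "Requirements doc + business context",
        "needs": ["user feedback", "technical feasibility"],
        "provides": ["requirements", "priorities"],
    },
    "Design": {
        "source_of_truth": "Figma files + written specs",
        "needs": ["requirements", "priorities"],
        "provides": ["user flows", "specifications"],
    },
    "Development": {
        "source_of_truth": "Repository + decision log",
        "needs": ["specifications"],
        "provides": ["implementation", "technical feasibility"],
    },
}

def unmet_needs(roles: dict) -> list[tuple[str, str]]:
    """Flag information a role needs that no other role claims to provide."""
    provided = {item for entry in roles.values() for item in entry["provides"]}
    return [(role, need)
            for role, entry in roles.items()
            for need in entry["needs"]
            if need not in provided]

print(unmet_needs(role_map))   # [('Product Management', 'user feedback')]
```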

Weeks 3-4: Find Your Current Bottleneck

AI changes production economics. Where is your constraint NOW—not where it was two years ago?

  • Is it still development time? (Unlikely if you're using any AI coding tools)
  • Is it code review and integration?
  • Is it requirements clarity and product decisions?
  • Is it validation and testing?
  • Is it deployment and infrastructure?
  • Is it business decision-making and stakeholder alignment?

Critical insight: If you optimize for the wrong bottleneck, AI won't help and might actively hurt productivity.

Method: Track where work waits in queue. Time waiting reveals bottlenecks more accurately than subjective assessments.
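A minimal sketch of that measurement, assuming you can export when each ticket entered and left a waiting stage; the stage names and dates are invented for illustration.

```python
from collections import defaultdict
from datetime import datetime

# When each ticket entered and left a waiting stage (exported from your tracker).
tickets = [
    {"id": "T-1", "waits": {"ready for review": ("2025-03-03", "2025-03-07"),
                            "ready for test":   ("2025-03-07", "2025-03-08")}},
    {"id": "T-2", "waits": {"ready for review": ("2025-03-04", "2025-03-10")}},
]

def wait_days(entered: str, left: str) -> int:
    fmt = "%Y-%m-%d"
    return (datetime.strptime(left, fmt) - datetime.strptime(entered, fmt)).days

totals: dict[str, int] = defaultdict(int)
for ticket in tickets:
    for stage, (entered, left) in ticket["waits"].items():
        totals[stage] += wait_days(entered, left)

# The largest accumulated wait points at the bottleneck, not whoever "seems busiest".
for stage, days in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{stage}: {days} days waiting")
```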

For Individual Contributors

Month 1: Add Intent Documentation

What to change: When you complete a task, spend 2-5 minutes capturing WHY you made key decisions.

Developer examples:

  • "Chose library X over Y because it handles edge case Z that we've seen in production"
  • "Refactored this section to reduce coupling with authentication system for future flexibility"

Designer examples:

  • "Simplified this flow because user testing showed confusion at step 3"
  • "Color choice balances accessibility requirements with brand consistency standards"

Product Manager examples:

  • "Prioritized Feature A over B due to customer requests from top 3 enterprise clients"
  • "This requirement intentionally leaves flexibility for future third-party integrations"

Why this matters:

  • AI can learn from your reasoning, not just your output
  • Future you (or your replacement) understands the context when changes are needed
  • You're building the dataset for AI to provide actually useful suggestions rather than generic patterns

What to expect:

  • First 2 weeks: Feels like overhead. You'll question whether it's worth the time.
  • Weeks 3-4: Becomes habitual. You find yourself thinking more clearly about decisions.
  • Months 2-3: You start benefiting when AI can reference this context in its suggestions.
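The decision-log examples above can be captured in any structured form; the sketch below is one hypothetical shape (a JSON Lines file with invented field names) whose only purpose is to keep the "why" in a format you can later hand to an AI tool as context.

```python
import json
from datetime import date

def log_decision(path: str, role: str, decision: str, why: str,
                 alternatives: list[str]) -> None:
    """Append one structured intent record (a 2-5 minute habit) to a JSON Lines file."""
    entry = {
        "date": date.today().isoformat(),
        "role": role,
        "decision": decision,
        "why": why,                                  # the context AI and future teammates lack
        "alternatives_considered": alternatives,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

log_decision(
    "decisions.jsonl",
    role="Developer",
    decision="Chose library X over Y",
    why="X handles edge case Z that we've seen in production",
    alternatives=["Y", "write it in-house"],
)
```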

Month 2-3: Use AI to Amplify Your Work

Now that you're documenting intent, you can use AI more effectively:

  • Use AI to draft documentation based on your decision logs
  • Ask AI to identify patterns in your past decisions
  • Have AI suggest edge cases based on your historical context
  • Generate test cases from your implementation notes

The critical difference: AI suggestions are now grounded in YOUR domain knowledge and decision-making patterns, not just generic best practices.

Month 4-6: Validate and Refine

Close the loop on what's working:

  • When AI suggestions work well: Note why they were helpful
  • When AI suggestions fail: Note what context was missing
  • Update your intent documentation with these learnings
  • Share successful patterns with your team

Success indicator: You're spending less time on routine cognitive tasks and more time on complex judgment calls that actually require your expertise.

For Managers (Team Leads, Engineering Managers, Design Leads)

Month 1: Establish Role-Based SLAs

Create simple agreements between roles that make expectations explicit.

Example SLA: Design ↔ Development

Design provides:

  • Mockups, user flows, edge case specifications
  • Within 2 business days of development request
  • Format: Figma files + written specs in [agreed tool]

Development provides:

  • Technical feasibility feedback
  • Within 1 business day of design proposal
  • Format: Written assessment with alternatives if original design isn't feasible

Example SLA: Product ↔ Validation

Product provides:

  • Acceptance criteria and success metrics
  • Before development work begins on a feature
  • Format: Structured template in [agreed tool]

Validation provides:

  • Test coverage report and identified gaps
  • Within 1 sprint of feature completion
  • Format: Dashboard link + written summary of risks

Why SLAs matter:

  • Makes implicit expectations explicit, reducing friction
  • Creates accountability without creating blame
  • Provides clear targets for what AI could automate
  • Reveals where processes actually break down in practice
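The SLAs above can also be written down as data rather than prose. The sketch below is one possible shape, with structure and field names invented for illustration; capturing the agreement this way is what later lets a pipeline flag late handoffs automatically.

```python
from dataclasses import dataclass

@dataclass
class RoleSLA:
    provider: str          # role that owns the source of truth
    consumer: str          # role waiting on the handoff
    deliverable: str       # what is handed off
    turnaround_days: int   # agreed business days (0 = before dependent work starts)
    format: str            # where the artifact lives

slas = [
    RoleSLA("Design", "Development", "Mockups, user flows, edge case specs", 2,
            "Figma files + written specs"),
    RoleSLA("Development", "Design", "Technical feasibility feedback", 1,
            "Written assessment with alternatives"),
    RoleSLA("Product", "Validation", "Acceptance criteria and success metrics", 0,
            "Structured template"),
]

def overdue(sla: RoleSLA, waiting_days: int) -> bool:
    """True when a handoff has waited longer than the agreed turnaround."""
    return waiting_days > sla.turnaround_days
```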

Month 2-3: Identify High-Value Data Pipelines

Pick ONE pipeline to optimize first. Don't try to fix everything at once.

Example: Requirements → Development → Validation

Current state (typical):

  • Requirements exist in meeting notes, Slack messages, PM's head
  • Development interprets requirements during sprint planning
  • Validation writes tests based on implemented code
  • Misalignment discovered late in the process or after deployment

Optimized with world model approach:

  • Requirements captured in structured template with intent documented
  • AI suggests technical implications and potential edge cases
  • Development references structured requirements + documented intent
  • AI generates initial test cases directly from requirements
  • Validation refines AI-generated tests with domain knowledge
  • Changes to requirements automatically flag affected tests and implementations

Measure before and after:

  • Number of rework cycles (requirements changes discovered mid-sprint)
  • Validation cycles (bugs found in testing vs. production)
  • Time from clear requirement to validated, deployable feature
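The "changes automatically flag affected tests and implementations" step can be as simple as explicit links plus a version bump. The identifiers below are invented; the sketch only shows the mechanism.

```python
from dataclasses import dataclass, field

@dataclass
class Requirement:
    req_id: str
    text: str
    intent: str                                     # the documented "why"
    version: int = 1
    linked_tests: list[str] = field(default_factory=list)
    linked_code: list[str] = field(default_factory=list)

stale: set[str] = set()                             # artifacts flagged for human review

def update_requirement(req: Requirement, new_text: str, new_intent: str) -> None:
    """Version the change and flag every linked artifact as needing review."""
    req.text, req.intent = new_text, new_intent
    req.version += 1
    stale.update(req.linked_tests)
    stale.update(req.linked_code)

req = Requirement("REQ-42", "Export report as CSV",
                  "Top 3 enterprise clients asked for it",
                  linked_tests=["test_export_csv"], linked_code=["report/export.py"])
update_requirement(req, "Export report as CSV and XLSX",
                   "Client audit workflows need XLSX")
print(sorted(stale))   # ['report/export.py', 'test_export_csv']
```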

Month 4-6: Pilot AI Tools Within Established Pipelines

Only after you have clear role responsibilities and data flowing between them should you introduce AI tools strategically.

For requirements pipeline:

  • AI summarizes customer feedback into structured format
  • AI identifies conflicting requirements across feature requests
  • AI suggests acceptance criteria based on similar past features

For code review pipeline:

  • AI flags potential security vulnerabilities
  • AI checks consistency with established coding standards
  • AI suggests test coverage gaps based on code changes

Critical rule: AI suggestions must always be validated by the accountable human role. No auto-merge. No blind acceptance. Humans remain responsible for outcomes.
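That rule can be encoded directly into the pipeline. The sketch below is a hypothetical gate, not a prescribed implementation: an AI suggestion carries the role accountable for it and cannot be accepted by anyone else.

```python
from dataclasses import dataclass

@dataclass
class AISuggestion:
    pipeline: str               # e.g. "code review"
    content: str                # the AI-generated artifact or comment
    accountable_role: str       # the human role that must validate it
    approved_by: str | None = None

def accept(suggestion: AISuggestion, reviewer: str, reviewer_role: str) -> bool:
    """Accept only if the accountable human role explicitly signs off."""
    if reviewer_role != suggestion.accountable_role:
        return False                        # wrong role: no authority to accept
    suggestion.approved_by = reviewer       # the reviewer owns the outcome
    return True

s = AISuggestion("code review", "Flagged possible SQL injection in /export", "Development")
accept(s, reviewer="alice", reviewer_role="Marketing")     # False: not accountable
accept(s, reviewer="bob",   reviewer_role="Development")   # True: validated and recorded
```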

Month 7-12: Expand Based on Success

What worked in your pilot:

  • Identify specific AI capabilities that consistently saved time without introducing errors
  • Document the pattern: task type, AI tool used, validation method, success rate
  • Replicate to similar workflows across other teams

What didn't work:

  • Did AI lack necessary context? Fix: Improve data pipeline to provide context
  • Did AI produce too much slop? Fix: Try different tool or keep human-only for now
  • Did validation take longer than doing it manually? Fix: This workflow isn't ready for AI yet

Success indicators:

  • Team reports higher quality output in less time
  • Rework cycles decreased measurably
  • AI suggestions accepted more than 50% of the time
  • Team actively requests more AI integration (rather than tolerating mandated tools)

For Executives (Directors, VPs, C-Suite)

Months 1-3: The Honest ROI Conversation

Stop AI-washing. Start measuring what actually matters.

Instead of: "We're AI-first now" (this means nothing to customers or employees)

Measure these specific outcomes:

  • Time saved on specific, repeatable tasks (document generation, code reviews, data analysis)
  • Quality improvement: Reduced rework, faster validation, fewer production issues
  • Bottleneck shifts: Where is work waiting now versus six months ago?
  • Team sentiment: Are people using AI tools voluntarily or under duress? Are they requesting more capabilities?

Red flags you're in a J-curve decline:

  • Productivity metrics actually dropping after AI introduction
  • Team spending more time correcting AI output than previous manual processes took
  • AI tool adoption declining after initial enthusiasm
  • Increased errors reaching customers despite AI "quality checks"

Green flags you're on the right track:

  • Specific workflows showing measurable, sustained improvement
  • Teams adapting tools to their needs (not just using out-of-the-box configurations)
  • Data quality improving because people see value in documentation
  • Cross-functional coordination getting easier and requiring fewer meetings

Months 4-6: Investment Priorities

Reframe capital allocation around what actually drives AI success.

High ROI (do first):

  1. Data infrastructure: Making internal data clean, accessible, and reliable
  2. Role clarity workshops: Helping teams define sources of truth and SLAs
  3. Training on world model approach: Not generic "AI literacy" but specific workflows
  4. Pilot programs in high-value workflows with clear success metrics

Medium ROI (do after pilots succeed):

  1. Expanding AI tools to more workflows based on pilot learnings
  2. Custom AI integrations with your specific internal systems
  3. Advanced capabilities like fine-tuning or specialized models

Low ROI (avoid for now):

  1. Wholesale platform replacements before proving value with existing tools
  2. AI tools without clear, specific use cases tied to actual workflows
  3. "AI transformation consultants" selling generic frameworks
  4. Layoffs based on projected AI efficiency gains (you'll end up rehiring)

The critical lesson from companies like Salesforce: You need experienced people whose judgment can compensate for current AI limitations. Over-optimize for headcount reduction and you'll crater productivity while destroying institutional knowledge.

Months 7-12: Cultural Change Timeline

Set realistic expectations with your organization.

What to communicate:

  • 3-6 months: Teams will trust the process and see value in new workflows
  • 6-12 months: Data pipelines become habitual, AI tools integrate naturally
  • 12-18 months: Measurable productivity gains and reduced time-to-market become evident
  • 18-24 months: Competitive advantage from institutional knowledge captured in world models

Warning: Demanding faster results will kill trust and create performative adoption where teams claim to use AI but quietly revert to old methods to actually get work done.

Your role as executive:

  • Communicate clearly: Why this matters, what success looks like, realistic timeline
  • Buffer pressure: Protect teams from quarterly demands during transition period
  • Celebrate progress: Small wins, specific improvements, team innovations
  • Model the behavior: Use world model thinking in your own strategic work

Success story to aim for at Month 12:

"Our product releases have 40% fewer post-launch issues. Customer Support receives release notes and known issues before deployment instead of discovering problems through customer complaints. Marketing launches coordinated campaigns aligned with feature releases. Development team spends less time in alignment meetings and more time building because requirements are clear and documented. And we've captured institutional knowledge that makes onboarding new team members three times faster than before."

The Universal Implementation Pattern

This pattern works across all roles and organization sizes.

Phase 1: Map (Weeks 1-4)

  • Current sources of truth (where authoritative information actually lives)
  • Information flows (who needs what from whom, when, and why)
  • Current bottlenecks (where work consistently waits)
  • Success metrics (how we'll know if this is working)

Phase 2: Pilot (Months 2-4)

  • Choose ONE high-value data pipeline to optimize
  • Define clear role-based SLAs for that pipeline
  • Add minimal tasks to each role (intent documentation, handoff clarity)
  • Measure baseline performance → implement changes → measure improvement

Phase 3: Validate (Months 4-6)

Answer these questions with data:

  • Did rework decrease? By how much?
  • Did validation cycles shorten? What's the time savings?
  • Did stakeholder confidence increase? How do we know?
  • Are people actually using the new process or working around it?

Phase 4: Introduce AI (Months 5-7)

Critical: Only AFTER the data pipeline is working with humans should you add AI.

  • Start with low-risk, high-value tasks that have clear validation
  • Ensure AI suggestions are always validated by accountable humans
  • Measure time saved versus validation overhead
  • Be willing to remove AI from tasks where it's not adding value

Phase 5: Expand (Months 7-12)

  • Replicate successful patterns to other data pipelines
  • Remove AI from workflows where it's creating more work than it saves
  • Deepen AI integration in areas where it's proving consistently valuable
  • Update your world model based on what's actually working

Phase 6: Evolve (Ongoing)

  • As AI capabilities improve, revisit what's possible in workflows that weren't ready
  • As bottlenecks shift due to AI adoption, adapt role responsibilities accordingly
  • As teams learn and share knowledge, capture new patterns in the world model
  • As business priorities change, update what's considered high-value

Conclusion: What Success Looks Like

Month 6: Early Wins

Your team conversations change in noticeable ways:

  • "The AI caught three edge cases in the requirements that we would have discovered in production"
  • "I spent 30 minutes on documentation this sprint and it saved us four hours of rework later"
  • "Customer Support knew about the changes and potential issues before we deployed"
  • Fewer hours in meetings clarifying requirements because they're already clearly documented
  • Designers and developers resolving questions through structured documentation rather than Slack messages

Year 1: Transformation Underway

Measurable improvements accumulate:

  • Feature delivery time reduced by 20-30% with higher quality
  • Post-launch defects down 30-40%
  • Cross-functional coordination smoother and requiring less synchronous time
  • Team autonomy increasing as roles and expectations become clearer
  • AI tools naturally integrated into 60-70% of workflows
  • New team members onboarding faster because knowledge is documented

Year 3: Competitive Advantage

What you've built becomes genuinely differentiating:

  • A living knowledge base that captures WHY decisions were made, not just what was decided
  • Institutional knowledge that survives employee turnover and role changes
  • AI systems that understand your specific domain, context, and decision-making patterns
  • Roles that co-evolve with AI capabilities rather than being disrupted by them
  • A culture of continuous adaptation that welcomes capability improvements
  • Speed and quality advantages that competitors can't easily replicate

The Difference Between This and the Last Twenty Years

Agile adoption was painful because we didn't know where it was going. We experimented, failed, learned slowly, and gradually converged on practices that worked.

We do know where AI adoption needs to go now. We've already learned the hard lessons:

  • Role clarity matters more than sophisticated tools
  • Data pipelines create value for humans first, AI second
  • Trust comes from demonstrated success, not executive mandates
  • Incremental change beats big-bang transformation
  • People with domain expertise can't be replaced by prompts, but they can be amplified

The Promise of World Models

You don't need perfect AI. You don't need complete data. You don't need to transform everything at once.

You need a shared understanding of how work flows through your organization and a framework that adapts as AI changes what's possible.

Start small. Measure honestly. Evolve constantly.

The companies that will thrive with AI aren't the ones with the fanciest models or the most ambitious transformation programs.

They're the ones who figured out how to make AI and humans genuinely better together—not by replacing people with prompts, but by building world models that amplify human judgment with machine capability.

You already have the roles. You already have the people. You already have the domain knowledge.

Now you have a roadmap to put them together in a way that actually works.

Next Steps

Ready to start?

  1. Map your current state (Weeks 1-2): Identify where your sources of truth actually live
  2. Find your bottleneck (Weeks 3-4): Track where work waits in your current process
  3. Pick one pipeline (Month 2): Choose a high-value workflow to optimize first

Need deeper guidance?

Each section of this summary will expand into a full chapter with:

  • Detailed examples from specific industries
  • Templates and tools for each role
  • Case studies of successful implementations
  • Common pitfalls and how to avoid them

About This Work

This summary draws on ten years of experience guiding product development teams through the exact role and process transformations needed to adopt AI successfully. The approach, rooted in Bespoke Agile principles developed since 2016, anticipated the data-first, role-based solutions that AI adoption requires.

For organizations ready to navigate AI adoption without repeating Agile's costly mistakes, consulting and full-time engagement opportunities are available.

Learn more: bespokeagile.com

What is Bespoke Agile (and why make up a new name)?

 

Bespoke Agile lives at the intersection of Project Management and Agile.

‘Agile transformation’ is a highly overloaded term, and its meaning has become diluted. Around 2015 the Agile Alliance began updating their documentation to focus on more sustainable models. This is a work in progress and it takes far too long for critical improvements to see more general use. Engineering and company cultures change slowly. A customized agile approach might provide a vehicle to more quickly validate and socialize these improvements.

In 2021, the Project Management Institute also changed their approach to focus on three new domains: People, Process, and Business Environment. This reflects the need for Project Management Professionals to use ‘more diverse skills and approaches than ever before.’ I took and passed the pilot test for the new PMP certification, and I actively support the PMI’s commitment to updating the project management body of knowledge.

Bespoke Agile identifies custom patterns that extend both Agile and project management concepts, with a focus on individual perspectives: What is the impact of organizational change on each and every company employee over time? How do we home in on the realistic tasks people are willing to practice each day to synchronize and marshal products from ideation onward?

Everyone in a company has a frame of reference through which they view change. How do we develop models that lead toward company-wide adoption of adaptive processes that consider every person’s point of view?

 

Problem statements and design goals

 

To co-opt a quote by E. O. Wilson: “No administrative phenomenon can be fully understood without attention to its evolutionary history.” This is from ‘The Superorganism: The Beauty, Elegance, and Strangeness of Insect Societies’ - I hope the humor is not lost on anyone, and yes, the quote actually reads “No biological phenomenon…”

Problem statements

Problem: The solutions we care about are large in scope

Enterprise projects are complex, and by far the most complicated part is the human side. An Agile approach facilitates points of contact, but it does very little to provide actionable collaboration driven by the respective department heads. Yes, there are ‘scrums of scrums’ and other attempts at hierarchical solutions, but this runs into ‘computational irreducibility’: there just isn’t enough time for the relevant problems to be surfaced and solved in this way.

Problem: The perspectives from all departments must be taken into consideration

The interactions between like knowledge domains such as Development and Quality Assurance are complex enough; in addition, departments such as Customer Support, Finance, Sales, and IT are often under-represented within the teams, and so parallel and inefficient processes evolve to address their concerns…

Problem: Ownership of solutions is difficult to craft

Implementing the intra- and inter-department adaptive processes that CAN lead to predictable development and deployment is simply more complex than most people can address without going beyond the bounds of their day-to-day responsibilities. There is often disruptive cognitive dissonance when real people grapple with requests from other teams to do extra work, or believe the work at hand is above their pay grade.

Problem: Unexpected events can be impactful to the business

There are serious business consequences if the impact of a problem is not recognized, or once recognized, there is no run-book available to help craft solutions. Events such as stop-ship bugs, or drop-everything-and-fix-a-problem-in-production defects can roller-coaster over the best of develop and deploy expectations, and certainly derail delivery of new feature milestones.

There are upstream and downstream impacts when reported problems require an unexpected amount of development and test time. Upstream - executives may need to adjust financial projections. Downstream - Marketing needs to be told their favorite new feature may not be ready for a scheduled advertising spend.

Customers may require immediate assistance to prevent erosion of brand loyalty. Worse - if the problem relates to security or data leakage - the viability of the company might be at risk.

Design Goals

Goal: Allow specialization within teams, and require greater personal responsibility in completing distributed clerical tasks

A primary design goal of bespoke-style program management is simple: have everyone opt in to doing just a little bit of extra work in addition to their career focus. Implicitly this assumes that most developers really like to develop, while many people with ‘quality’ in their titles may not love to code (yes - QA is dead, long live QA). For example, if those who make code changes opt in to curating and versioning the manual test cases directly related to each pull request for a feature, the world is a better place for quality-focused team members. QA can ask ‘How do we optimize automation in our feature-branch and release testing?’ instead of ‘Why do I have to write a bug because this widget was not implemented to product and design team specs, or because the coder did little exploratory testing?’

Goal: A single source of truth (in progress)

There are several concepts combined to deliver ‘truthful data.’ The first is that variables are immutable at any point in time. As time moves forward and values change, we can look for rules related to that change. From the variable’s perspective, local rewrite rules (closures or lambdas) move it from one state to the next. This idea applies to information in sets and graphs, and the concept is related to how state mutation is handled in Redux and functional programming models.

The second concept is that there is one canonical instance of the variable that is rewritten; all other instances of the variable are just that - immutable instances embedded at a point in time, updated as needed from the canonical instance. This idea also applies to information in sets and graphs. This concept is related to consistency in distributed database models.

The third relates to scope and namespaces. Variables do not stand alone - they take on life and meaning in time and context. They have their own perspectives, indelibly related to the filters used to create reference frames. We generally use names to identify variables, but it’s useful to reuse those names and to copy variables. With more than one variable sharing the same name, scope is the region of local process space where a name is available, and a namespace is how we ‘look up’ the name associated with a unique variable in that region. This concept is related to maps: I live in Cambridge, Massachusetts, USA and not in Cambridge, UK…
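A rough sketch of these three concepts in code, assuming nothing about how the data is actually stored: values are immutable snapshots, a local rewrite rule (a pure function) produces the next state of the canonical instance, and names are looked up through a namespace so the same name in different scopes stays distinct.

```python
from typing import Any, Callable

class CanonicalStore:
    def __init__(self) -> None:
        # (namespace, name) -> list of immutable snapshots, oldest first
        self._history: dict[tuple[str, str], list[Any]] = {}

    def create(self, namespace: str, name: str, value: Any) -> None:
        self._history[(namespace, name)] = [value]

    def rewrite(self, namespace: str, name: str, rule: Callable[[Any], Any]) -> Any:
        """Apply a local rewrite rule to the canonical instance; old snapshots stay intact."""
        history = self._history[(namespace, name)]
        history.append(rule(history[-1]))
        return history[-1]

    def at(self, namespace: str, name: str, point_in_time: int) -> Any:
        """Every reader sees the same value for the same name, scope, and point in time."""
        return self._history[(namespace, name)][point_in_time]

store = CanonicalStore()
store.create("project-alpha", "release_date", "2025-06-01")
store.rewrite("project-alpha", "release_date", lambda _: "2025-06-15")  # a rule, not an edit in place
store.create("project-beta", "release_date", "2025-09-01")              # same name, different namespace
```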

Goal: Transparency

This covers the Agile initiative to have all information available to everyone at any time. Transparency exists in the context of reference frames. A reference frame is a filtered view through which the exact set of information needed for a specific perspective is available. The promise is that all information is available to all frames of reference, as close to real-time as possible. For example, anyone who looks at the value of a variable, set, or graph is guaranteed to see the same values in any instance at the same point in time and scope.
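In the same spirit, a reference frame can be sketched as a filtered, read-only view over shared records: the filter decides what a perspective sees, while the underlying values stay identical for everyone at the same point in time. The records and field names below are illustrative.

```python
from typing import Callable

# Canonical records at one point in time (values and names are invented).
records = [
    {"scope": "project-alpha", "name": "release_date", "value": "2025-06-15", "time": 7},
    {"scope": "project-alpha", "name": "open_defects", "value": 12,           "time": 7},
    {"scope": "project-beta",  "name": "release_date", "value": "2025-09-01", "time": 7},
]

def reference_frame(predicate: Callable[[dict], bool]) -> list[dict]:
    """A perspective is just a filter; it never copies or mutates the canonical values."""
    return [record for record in records if predicate(record)]

marketing_view = reference_frame(lambda r: r["name"] == "release_date")
qa_view        = reference_frame(lambda r: r["scope"] == "project-alpha")
```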

Goal: Degrade gracefully

Everyone has been in the situation where there’s more work to be done than can realistically be met with the resources available. The bespoke approach recognizes this is always the reality, and the focus is on best practices to plan collaborative projects. The relevant goals can be stated as:

- Mitigating areas of high risk with incremental and small changes to both deployed products, and the processes themselves used to develop and deploy the products

- Empower product management to adjust the scope of deployable features to keep schedules in balance with available resources

- Create effective plans for unexpected events and practice them: have a process run-book for the unexpected…

Ideally teams can change the scope and focus of current tickets on demand. Historically, Scrum teams must complete a sprint bound to the tickets they sized and accepted… but this rigidity can lead to poor outcomes.

To accommodate these business realities, an adaptive approach to sprints allows current work to be put aside, backlog items reprioritized, fires tended to, and then a smooth recovery to the previous stories… The normal flow is interrupted by design for the time needed, and not ad-hoc with all the mayhem that usually entails.

This approach is often labelled ‘ScrumBan’, a melding of strict Scrum rules for sprints with Kanban, an approach focused on serialized ticket completion. For both, sprint boundaries are useful for metrics and for tracking story completion. For those who view deviation from sprint rules as heretical: consider the model of a 3-day sprint, with day 1 being a spike and days 2-3 reserved for clearing all active tickets above the backlog… rinse and repeat.

In fact, degrading gracefully can be a sustainable approach. It’s a catchy phrase - but the more accurate description is ‘adaptive development.’

 

Chapter 1

 

Disclaimer

This book covers speculative and exploratory ideas about Agile and Product Management. With discussion and iteration, some of these ideas may influence project management and agile framework choices - but that is aspirational.

The Agile Alliance and PMI organizations count on members to improve and build on their respective bodies of knowledge. I believe the two groups aim to solve similar challenges. Cross-fertilization will benefit both fields, which is why this is an open science project.

Overview…

Over the years I’ve used phrases such as ‘acyclic graph’ and ‘friction’ to describe properties of feature development, deployment, and support. In hindsight, those terms needed more context. I own that entirely, and a goal of this book is to provide that context: ways of thinking about frameworks and process that help develop intuition and practical approaches to getting stuff done. Another goal is to provide mathematical tools using multiway and causal graphs that deliver reasonably precise metrics that all business groups can use for planning. Real-time metrics and well-understood adaptive process can provide a competitive advantage from every perspective. A personal hope is that a study of these graphs will reveal simple rules that underlie project complexity.

Systems of the world

This book explores best practices in functional product development. This chapter presents an overview of ideas, vocabulary, and concepts; with a few examples… and is very much a work in progress.

Additional chapters will present case studies, and explore how these concepts can be used to create a system to guide dynamic and adaptive choices in both framework and process. In other words - provide clear paths for individuals and teams to deliver consistently good releases and outcomes, even as the business needs and work environments change over time.

Modeling the processes used by an entire company seems daunting. While there are any number of ways to get from ideation to product delivery, for the most part people are the most complex component. Each business unit within a company, along with customer behavior, can be viewed as a ‘production system’. The set of all production systems within a company, along with their lifecycles, comprises a ‘system of the world’ from the perspective, or frame of reference, of the company as a whole.

Reference frames, Rules, and Meta-project management

Reference frames

The idea that a company itself has an evolving frame of reference is so important that the tagline of this site is ‘adaptive systems of the world.’ Each team has a perspective that is the aggregate of each team-member’s point of view, or frame of reference… Looking at the larger problems as a set of finite smaller perspectives provides the scope to allow meta-analysis of both how people behave, and the processes they really follow. Quantifying individual behavior can be a sensitive topic, but identifying what influences individual choices in the context of process and management technique is functionally useful.

Rules

The term Software Development Life Cycle describes the aggregate systems of the world… As we look at how different companies evolve their life cycles, there are common sets of constraints that shape the choice of frameworks and processes. Individuals and teams choose systems of the world with sets of rules that allow work to get done without having to think about the whole system all the time.

People have an intuitive understanding of which rules to follow. As these rules are identified and explored, conscious choices can be made about process in the context of reference frames. The rules create maps that teams can follow to efficiently choose the right process at the right time.

Meta-analysis (and riverbeds)

In a system of the world for a software company we know what goes in, and we know what comes out. If the system functions well, we also know properties of what comes out, such as quality and cost of production. The meta is identifying the constraints that explain how process and behavioral rules change over time. The idea is to consider all possible ‘rules that act on rules’ (meta-rules) and identify the ones that appear to be most useful. If the rules that govern process form a riverbed that both channels the water (the work being done) and is itself changed by the water’s flow, then the meta-rules describe how a map of this riverbed changes over time. If businesses identify the relevant meta-rules, they can control the speed of flow, and decide where to strengthen the riverbed and its shores.

Maps, computation, pattern matching and substitution

How I learned to stop worrying and love computational irreducibility

Maps are computational shortcuts. They are ways to reduce the amount of work needed to get from point A to point B. A silly example is that without a map while on a journey, you might have to evaluate which direction to head, and how far to travel (angle and length) with every step. This represents a lot of work! Since you may have to detour or backtrack, you probably can’t predict when you will arrive, if at all - until you get there.

The hypothetical example I prefer is a future with self-driving vehicles that receive real-time updates about the road ahead: elevation and turns over the next hundred feet, road condition, speed and location of other vehicles, perhaps as a dynamically updated point cloud… this information reduces the amount of computation required to navigate safely down the road with a minimum of risk. In this context the roadmaps are a shortcut and provide intuition about the best path to follow.

What happens when the information from the road is reduced, or you take your vehicle ‘off the grid’? Well - your self-driving car will take on a greater computational load to create that point cloud from onboard sensors, and very likely slow down. What if road and environmental conditions become more complex? There will be an inflection point where your vehicle’s ability to predict what will happen next at a specific speed falls below the number of seconds required to respond to a typical road hazard. At some level of complexity, that speed limit will approach zero.

A popular example is finding the value of Pi… the only way to find the next digit in Pi is to compute the value. All those digits are ‘there’, but we simply cannot know them any faster than we can compute them…

All of the above are examples of computational irreducibility… and for the first two at least, maps allow us to get some ‘headroom’ in which to be clever about the problems at hand at a level of complexity where they can still be solved.
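
To make the Pi example concrete, here is a minimal sketch in Python, assuming Machin’s arctangent formula; the function name and precision handling are illustrative rather than canonical. The point it demonstrates: every additional digit demands more terms of the series - there is no map that lets us skip ahead.

```python
from decimal import Decimal, getcontext

def pi_to(digits: int) -> Decimal:
    """Compute Pi to roughly `digits` significant digits using Machin's formula:
    pi = 16*arctan(1/5) - 4*arctan(1/239)."""
    getcontext().prec = digits + 10               # guard digits for intermediate math
    eps = Decimal(10) ** -(digits + 5)

    def arctan_recip(x: int) -> Decimal:
        # arctan(1/x) = 1/x - 1/(3x^3) + 1/(5x^5) - ...
        term = Decimal(1) / x
        total = term
        n = 3
        while abs(term) > eps:
            term = -term / (x * x)
            total += term / n
            n += 2
        return total

    pi = 16 * arctan_recip(5) - 4 * arctan_recip(239)
    getcontext().prec = digits
    return +pi                                    # re-round to the target precision

print(pi_to(50))    # asking for more digits simply costs more computation
```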

Pattern matching and substitution

Everyday maps represent topographical data, usually with agreed-upon patterns: icons, legends, or keys… When we look at a map and see the symbol for a river, we know there is a body of water. In our heads we substitute the pattern with the idea of a river. Functionally, we need to plan for a crossing, or for traveling up or down the river. We take yet another step in pattern matching and substitution by planning to ‘bring a canoe’ as a means of conveyance when we reach the river represented by that symbol on the map…

Rules as graphs; graphs as maps…

How do these ideas apply to software or more general product development? The classic approach to building a thing is to define requirements, factor them into prioritized epics with prioritized stories, and run a develop <—> deliver continuous delivery strategy… At both the process level and the product-detail level, this implicitly uses pattern matching and substitution.

What are these rules: An intuitive look at software development complexity

Think of all the ways we start a software project, do some work, get a product out the door, prove an idea, create customer relationships, and iterate on all of the above… An intuitive way to view the set of rules by which these actions evolve is a virtual space filled with graphs that represent all the ways this work moves from ideation to done. This feels overwhelming because it is: “modeling the processes used by an entire company” is not just complex - at any given time we probably can’t predict exactly what will happen next faster than it actually happens. This is a wonderful example of computational irreducibility.

However, if we instead use Agile distributed collaboration, we can capture useful metrics from the point of view, or reference frame, of each person, team, business unit, and of the company as a whole (implicitly, in the views of board members, perhaps). If everyone takes on a lightweight set of daily tasks that generate relevant metrics, we can identify a few rules that underlie the paths actually taken through this space of all possible graphs! The maps that emerge from the choice of paths provide shortcuts and intuition that save enough computation to make conscious choices about how to get stuff done before it happens.

It turns out that a coordinated distributed effort can change the paths from one possible graph to another that follows different rules (we change the riverbed!). Better maps of how to get stuff done emerge from better paths along more efficient graphs. Adapting our resources to keep production systems in balance (develop vs. deploy, for example) can reduce computation. A feel for the meta-rules that underlie a system of the world begins to satisfy the goal of ‘a run book for the unexpected.’ My claim is that a fairly small number of meta-rule patterns (graphs) describe most product development.

Note: Illustrations will help

The processes and stages used in iterative design, implementation, delivery, and support can be captured by sets, or bundles, of related graphs. Units of work move from one node to another. Consider a product design team defining a feature and a dev team implementing and testing that feature as two nodes in a graph. To reflect reality, there are feedback loops between the teams, so there are two ‘edges’ connecting the nodes: one leading forward and one leading back. That’s it - a simple multigraph… This can be extended with subsequent nodes for merging or aggregating features in code-repository branches, iterative build and deploy strategies, and nodes representing comprehensive support and customer behavior.
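
For readers who like code, here is a minimal sketch of that two-node feedback loop as a directed multigraph, using Python’s networkx library (an assumption on my part; any graph library would do, and the node and edge labels are illustrative only).

```python
import networkx as nx

g = nx.MultiDiGraph()

# Two nodes, two edges: work flows forward, feedback flows back.
g.add_edge("design feature", "implement & test", kind="handoff")
g.add_edge("implement & test", "design feature", kind="feedback")

# The same pattern extends downstream through the rest of the pipeline.
g.add_edge("implement & test", "merge to release branch", kind="handoff")
g.add_edge("merge to release branch", "build & deploy", kind="handoff")
g.add_edge("build & deploy", "support & customer behavior", kind="handoff")

for u, v, data in g.edges(data=True):
    print(f"{u} --{data['kind']}--> {v}")
```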

The develop <—> deploy system can be represented by two graphs whose nodes have many interdependencies. Business groups such as Marketing, Finance, IT, and Sales have their own graphs whose nodes are similarly constructed, and nodes from across all of these graphs have interactions and dependencies that are relevant.

For completeness - here is some intuition related to thinking about these graphs: there is no going back in time. So the edges that lead back really connect an implementation node to a future design node, and onward in time to another implementation node. Nodes can branch (edges to more than one node) and merge. For a feature under development, the nodes must all eventually merge. If they do not, what was accomplished in the earlier nodes is not included in subsequent (in time) nodes that might represent ‘merge to release branch.’ More on this later…

For the dev graph, there are any number of nodes that might sit between design and implementation: the work has to be factored and tickets created; incremental releases require versioning of features to manage complexity; dev often needs to load-balance implementations across teams using concurrent branching strategies; and on top of all of this, effective testing requires an essentially real-time description of the expected behavior for each version of a feature in the pipeline… Otherwise, what’s the point? You can’t evaluate tests if you don’t know their expected results.

Computation, or ‘the daily grind…’

Another important idea is that each node represents a certain amount of work. For the purposes of measurement, let’s call the work done in a node ‘computation.’ We can count the nodes to get from one point in a process to another, and if we know the amount of computation needed for each node, we can compute how much time a feature will really take to develop. With a few other details, such as distance between nodes (how many branches from a shared ancestor) and angle (can it converge to be included in a release), we might be able to re-order and combine nodes to attain a faster path through the related bundle of graphs.
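
As a hedged sketch of what ‘counting the computation’ could look like: attach an estimate of work to each node, then sum the estimates along the path a feature actually takes. The node names and hour figures below are invented for illustration.

```python
import networkx as nx

g = nx.DiGraph()
g.add_node("design", hours=8)
g.add_node("factor into tickets", hours=4)
g.add_node("implement", hours=24)
g.add_node("test", hours=12)
g.add_node("merge & deploy", hours=4)

g.add_edge("design", "factor into tickets")
g.add_edge("factor into tickets", "implement")
g.add_edge("implement", "test")
g.add_edge("test", "merge & deploy")

# Count the nodes on the path and sum the computation attached to each one.
path = nx.shortest_path(g, "design", "merge & deploy")
estimate = sum(g.nodes[n]["hours"] for n in path)
print(f"{len(path)} nodes, roughly {estimate} hours of computation on this path")
```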

Are they done yet?

Along with distances and angles, we can create causal graphs that represent all the dependencies in the system. Again - this sounds complex - but compared to the actual work that people have to do, it is the easier problem to solve! A causal graph can help determine the set of nodes that must be completed to achieve a defined outcome. It also helps in choosing which nodes can be pruned to ‘degrade gracefully’ - by which I mean the business consciously chooses the quality and completeness of all code that is deployed, reflecting the amount of computation available.
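
A small sketch of that idea, again with invented node names: a dependency graph lets us ask which nodes must complete before a defined outcome, and what could be pruned if the available computation shrinks.

```python
import networkx as nx

deps = nx.DiGraph()    # an edge A -> B means "A must finish before B"
deps.add_edges_from([
    ("design login flow", "implement login"),
    ("implement login", "test login"),
    ("implement login", "implement SSO"),
    ("implement SSO", "test SSO"),
    ("test login", "merge to release"),
    ("test SSO", "merge to release"),
])

# Everything that must complete before the release node.
print("required:", sorted(nx.ancestors(deps, "merge to release")))

# Degrading gracefully: consciously drop the SSO work and see what remains.
pruned = deps.subgraph(n for n in deps if "SSO" not in n)
print("pruned plan:", sorted(nx.ancestors(pruned, "merge to release")))
```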

On the tools side, these processes are complex, but reasonable to model with sets of directed multiway graphs. For example, the optimal number of teams working on concurrent implementation, or depth of feature branches might be determined by analyzing causally dependent branches in their respective graphs, and within the bundle of dependent graphs.

No boiling the ocean…

An important design goal in constructing these graphs mirrors the approach we take for curating test cases during development! Start with a clean slate - no graphs other than what can be put together from a few team discussions… Then add nodes to each graph as tickets (units of work) flow through the development graph, and add nodes to the design process graph as these same tickets require iteration with designers. As a concrete example of ‘combining nodes’ to increase velocity, collapse the Dev <—> Designer iterations into one node shared by both the Design and Dev graphs (see the example below). Updating the graphs and testing out process changes becomes another iterative, adaptive, lightweight daily task for the program and project managers.
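
Here is a minimal sketch of this clean-slate approach (the record_transition helper and ticket IDs are hypothetical): the graph starts empty and only grows as real tickets move between real stages, including the single node shared by the Design and Dev graphs.

```python
import networkx as nx

process = nx.MultiDiGraph()    # starts empty: no boiling the ocean

def record_transition(ticket_id: str, from_stage: str, to_stage: str) -> None:
    """Add stages and an edge only when a ticket actually takes this path."""
    process.add_edge(from_stage, to_stage, ticket=ticket_id)

# The graph grows from observed work, not from an up-front model.
record_transition("PROJ-101", "backlog", "design/dev iteration")   # shared node
record_transition("PROJ-101", "design/dev iteration", "implement")
record_transition("PROJ-101", "implement", "merge to release")

print("stages observed so far:", list(process.nodes))
```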

Putting it together…

To sneak in an agile principle - combining nodes between graphs represents a ‘one touch approach.’ Specifically, if ‘reviewed by design’ is added as a definition of done for every relevant ticket in a sprint, developers are then empowered to go to the designer and solve problems during implementation. If a problem can’t be solved, the product manager can call the current ticket a ‘spike’, create new tickets in the backlog, update the road map - and there are no surprises…

If a problem can be solved - versioned test cases are updated right then and there, and are available to any team who touches that feature. Consider the time saved by developers who no longer have to context switch back to complete a feature from a previous sprint to address an insufficiently spec’d design…

The bespoke design goal of transparency often takes the form of incrementally updated heads-up displays that track features and fixes as they move through the ‘pipeline.’ This information can be customized for the reference frames of designers, developers, automators, and CI/CD DevOps, as well as marketing, finance, and customer support. Under the hood these displays depend on units of work traversing related nodes in bundles of multiway graphs… but the intuitive story of what is really going on can be customized to satisfy individual frames of reference: for individuals, teams, and executive summaries.
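
As a rough sketch of how such a heads-up display could be filtered per reference frame (the audience tags below are invented): tag each edge with the roles that care about it, and each view becomes a simple projection of the shared graph.

```python
import networkx as nx

pipeline = nx.MultiDiGraph()
pipeline.add_edge("design", "implement", audience={"design", "dev"})
pipeline.add_edge("implement", "test", audience={"dev", "qa"})
pipeline.add_edge("test", "deploy", audience={"qa", "devops"})
pipeline.add_edge("deploy", "announce", audience={"marketing"})

def view_for(role: str) -> list[tuple[str, str]]:
    """The edges relevant to one frame of reference."""
    return [(u, v) for u, v, d in pipeline.edges(data=True) if role in d["audience"]]

print("dev view:", view_for("dev"))
print("marketing view:", view_for("marketing"))
```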

Next steps

This chapter will be expanded and made more readable. Some of the current pictures will be replaced with illustrations. Additional chapters will present case studies and analysis in the context of these ideas. Ideally they will include Wolfram notebooks that allow readers to interactively adjust rules, create visual graphs, and see the consequences of different develop <—> deploy paths taken through the graphs to create a system of the world…

Many thanks to Stephen Wolfram’s physics project, which continues to help me intuit the metaphors presented here.

More to come!

 
Flower_04.jpg