Your team needs to solve a problem, and there's no clear solution path. Multiple approaches might work, but you're not sure which. Success isn't guaranteed.
This is Research, not Development. And if you manage it like Development, things aren't going to go well for you.
Maybe you've seen this: A talented engineer spends three weeks trying different approaches, switching between them until the original problem is no longer relevant. Or a researcher declares after two days that "it's impossible" and gives up, though it later turns out the problem was solvable. Or, worst and most frustrating of all: Research succeeds technically but doesn't impact the product, and you realize you've been solving the wrong problem.
Here's the thing: managing Research is fundamentally different from managing Development.
Development has known solution paths, relatively predictable timelines, and measurable progress. You probably learned to manage Development, and know how to do it well. Managing Development is challenging, yet there are established best practices that are well known in the industry.
But Research? Research is inherently uncertain. The techniques that work for Development fail spectacularly for Research. Most organizations either approach Research by deploying techniques devised for managing Development, or entirely give up on managing Research and treat it as mystical work by brilliant people, where the best you can do is not interrupt.
It doesn't have to be this way.
What You'll Learn
By reading this book, you’ll gain practical tools for managing Research that create real product impact.
You’ll learn your two critical roles as a Research leader:
Ensure Research connects to product impact.
Ensure Research is done effectively.
You’ll understand why Research is different from Development. You’ll get concrete tools for managing Research execution, and learn proven heuristics for ensuring product impact.
Throughout the book, you'll see these methods applied to real challenges.
You’ll feel confident managing Research. You’ll understand when to use which tools. And you’ll know how to keep Research connected to product value while managing its inherent uncertainty.
Who Is This Book For?
Any engineering leader who manages Research, or has researchers on their team.
If you're a CTO, VP of Engineering, R&D Director, or Engineering Manager responsible for Research initiatives – this book is for you.
This book is for product-focused leaders who need their Research to ship value, not just produce interesting findings.
You’ll also notice that I use a casual style throughout the book. I believe that learning Research management should be insightful and practical. These are hard problems, and writing in an overly academic style wouldn't serve you well. This book is for you, written to help you succeed.
Who Am I?
This book is about you, and your journey managing Research. But let me tell you why I think I can contribute to that journey.
I am the CTO and co-founder of Swimm.io, a knowledge management platform for code. At Swimm, I've led multiple Research initiatives — from automatically keeping documentation in sync with code changes, to extracting business rules from legacy COBOL systems. We've faced genuine Research challenges where the path to success wasn't clear, where we didn't know if solutions were even possible.
I've managed Research in product environments where "interesting findings" aren't enough — where Research must ship and create measurable value. I've experienced the failure modes firsthand: Research that succeeded technically but didn't impact the product, teams that got stuck exploring endlessly, and the challenge of managing uncertainty while maintaining velocity.
I've also managed teams of researchers — brilliant people who needed guidance not on technical capability, but on connecting their work to product value and working systematically through uncertainty.
This book combines my experience leading product-focused Research with my background in teaching and making complex ideas practical and actionable.
The Approach of This Book
This is not an academic piece on Research methodology. When writing it, I had three principles in mind:
1. Practical: In this book, you will learn how to accomplish things in Research management. You will understand frameworks not just for the sake of understanding, but with a practical mindset. I sometimes refer to this as the "practicality principle" – which guides me in deciding whether to include certain topics, and to what extent.
2. Based on proven theory: While practical, the methods are grounded in Alan Schoenfeld's problem-solving research — a framework developed by studying how people actually solve uncertain problems. You'll see how Schoenfeld's components (knowledge, heuristics, control, beliefs) map directly to practical Research management tools.
3. Real examples: You will see these methods applied to actual Research challenges. Not toy problems, but real initiatives involving complex problems with genuine uncertainty. These examples show the methods in action, including when they're messy, when approaches fail, and how to adapt.
Structure of This Book
The book is organized in three parts:
Part 1: Foundations: Read this to understand what makes Research different and why it needs specialized management. This part is short — just enough to establish the framework that organizes everything else.
Part 2: Research Management Methods: These are tools that work for any Research.
Part 3: Ensuring Product Impact: Methods specifically for connecting Research to product value.
Why Is This Book Publicly Available?
In short, I'd like this book to reach as many people as possible.
If you would like to support this book, you are welcome to buy the E-Book, Paperback, or Hardcover version, or buy me a coffee. Thank you!
Feedback Is Welcome
I created this book to help you and people like you manage Research effectively and ensure it creates product impact.
From the beginning, I asked for feedback from experienced leaders and researchers to make sure the book achieves these goals. If you found something valuable, felt something was missing, or thought something needed improvement — I would love to hear from you.
Your feedback helps make this book better for everyone. Please reach out at: gitting.things@gmail.com.
Now, let's begin. In Chapter 1, you'll learn exactly what makes Research different from Development, and why managing them differently matters.
Table of Contents
Part 1: Foundations
Chapter 1 - What is Research?
Chapter 2 - Research and Development
Part 2: Research Management Methods
Chapter 3 - Why Methodology Matters: A True Story
Chapter 4 - The Research Tree
Chapter 5 - Time-Boxing Research Explorations
Part 3: Ensuring Product Impact
Chapter 6 - How to Choose Research Initiatives
Chapter 7 - Drawing Backwards
Book Summary
Note
This book is provided for free on freeCodeCamp as described above, under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International license.
If you would like to support this book, you are welcome to buy the E-Book, Paperback, or Hardcover version, or buy me a coffee. Thank you!
Part 1: Foundations
Chapter 1 - What is Research?
To manage research effectively, you first need to understand what research is, and what it is not.
From now on, I will use Research (capital R) when referring to our specific concept of research in this book, to distinguish it from general uses of the word.
Consider this scenario: Your team needs to optimize a critical API endpoint. It's slow, users complain, and you know exactly what to do: profile the code, identify bottlenecks, apply standard optimization techniques. This is challenging work, but it's not Research.
Now consider this: Your team needs to automatically extract business rules from 40-year-old COBOL codebases, consisting of millions of lines of code where the original developers are long retired. You're not even sure if extracting these rules automatically is possible. Multiple approaches might work. Or none might. This is Research.
The Difference
The distinction isn't about difficulty or technical sophistication. It's about uncertainty of approach.
Throughout this book, I will adopt the following definition: Research is confronting problems where you don't know if solutions exist or which approaches will work.

Research confronts problems where:
You don't know if a solution exists.
Multiple approaches might work, but you don't know which.
The path to success is not immediately clear.
You may need to invent new techniques.
Development involves:
Applying known techniques to build specific features.
Following established approaches, even if complex.
Clear success criteria tied to working software.

I've found Alan Schoenfeld's model of problem solving [1] to be a useful framework for defining and analyzing Research. Schoenfeld, studying how people solve mathematical problems, identified four** components that determine success when facing genuinely uncertain problems. His framework applies directly to software Research:
Schoenfeld's Framework for Problem Solving
1. Knowledge Base — What you know
Are you familiar with relevant tools, algorithms, and techniques?
For COBOL business rule extraction: Do you understand COBOL syntax? Static analysis? Program comprehension techniques?
Without the right knowledge, you will have to spend time acquiring it before making progress, and might miss options that could be obvious to someone with more background.
2. Heuristics — Strategies for approaching problems
We’ll cover heuristics in much more detail later. For now, here are some examples of effective heuristics:
"Work backwards from the desired output".
"Break the problem into smaller pieces".
"Try a simpler version first".
"List all assumptions and test each one".
For our COBOL business rule extraction case: "Start by manually extracting rules from one small program to understand what 'success' looks like".
3. Control — Monitoring and adjusting your approach
Recognizing when your current strategy isn't working. Deciding when to pivot to a different approach. Managing your time and resources effectively.
This is what separates experienced researchers from novices: it's not just what you know or which heuristics you can deploy, but when and how you use them. If you choose one approach, reflect on its effectiveness, and decide to try something different when needed, that's an example of control.
4. Beliefs and Attitudes — Your mindset toward the problem
Schoenfeld found that successful problem solvers held certain beliefs that helped them persist through challenges. Examples include:
"I can figure this out" vs. "I'm not good at this kind of thing".
"Problems have multiple solutions" vs. "There's one right answer".
"I should write things down and work systematically" vs. "I should solve this in my head".
These beliefs profoundly affect your ability to persist and succeed.

Why This Matters for R&D Leaders
You've likely seen this: A capable engineer spends three weeks on a Research task, trying approach after approach, until the original problem is no longer relevant. Or they declare after two days "it's impossible" and give up, though it turns out later that the problem was actually solvable.
The issue isn't (necessarily) capability. It's that Research requires different management, and different skills, than Development:
Knowledge base: Did they have the right background, or access to people who did?
Heuristics: Were they using effective problem-solving strategies, or just "trying things"?
Control: Did they know when to pivot? When to ask for help? When to break the problem down differently?
Beliefs: Did they believe the problem was solvable? That systematic approaches work better than random attempts?
The good news: All four components can be improved. People get better at Research through practice, exposure to effective heuristics, and environments that support good Control and healthy Beliefs.
The rest of this book provides concrete tools – the Research Tree, drawing backwards, and time-boxing – that put Schoenfeld's framework into action in a Product-led environment. These tools help you and your team apply better heuristics, maintain effective control, and build the beliefs that sustain successful Research.
But first, let's make sure we're clear on when you actually need these tools. The next chapter dives deeper into distinguishing Research from Development work.
Notes
** Actually, Schoenfeld (1992) described five components (which he terms "categories"), but I focused on four of them. For the curious reader – the one I skipped is called "Practices" – the habits and cultural norms of the mathematical environment that shape how a student approaches a problem. I chose to skip it as applying it to Research felt artificial.
References
[1] Schoenfeld, A. H. (1992). Learning to think mathematically: Problem solving, metacognition, and sense-making in mathematics. In D. Grouws (Ed.), Handbook of Research on Mathematics Teaching and Learning (pp. 334-370). New York: Macmillan.
Chapter 2 - Research and Development
Most R&D departments have plenty of Development work. Everyone agrees on what Development is. But Research? That's murkier.
Some claim every development task involves "research" – you have to test your code, try different things, read documentation. Is this Research?
Let's be precise.
Defining Research vs. Development
We established in chapter 1 the core distinction: Research involves fundamental uncertainty about whether solutions exist and which approaches will work, while Development applies known techniques to build specific features.
With this foundation, let's explore how to identify Research in practice.
A Quick Test
You're asked to reverse-engineer a specific compiled function: disassemble it and provide the equivalent code in C language. You know assembly, you know C, you have a disassembler. Is this Research?
No. You know how to proceed. It might take three days of careful work, especially if the function is complex, but it's not Research. You're applying known techniques, and know how to progress to a solution.
But if you need to understand how an entire program operates, and one possible approach is reverse engineering its compiled form, and you're not sure if that approach is even feasible time-wise or whether it will yield the insights you need? That's Research.
Key Indicators You're Doing Research
1. Fundamental Uncertainty About Solution Viability
You're asking "Can this even be done?" rather than "How should we do this?" This isn't about implementation details – rather, it's about whether the approach itself is viable.
2. Multiple Competing Approaches Without Clear Superiority
Research often means exploring several paths simultaneously, knowing that many will fail, to discover which approach (if any) can solve the problem.
3. Need for New Fundamental Techniques
You may need to invent new methods rather than adapting existing ones. Note: Not all Research creates new techniques, but the possibility exists.

Common Misconceptions
Misconception 1: Technical Complexity = Research
Many challenging Development tasks involve sophisticated algorithms, large-scale systems, or cutting-edge technologies without requiring Research approaches.
Building a distributed system with complex consensus algorithms? Challenging Development. Figuring out whether a distributed system can meet your latency requirements given your unusual constraints? Might be Research.
Technical complexity is not the same as Research.

Misconception 2: Using Advanced Algorithms = Research
Implementing machine learning pipelines with random forests or neural networks isn't Research – even though the underlying algorithms are sophisticated. The Research happened when those algorithms were first developed. However, if you are using those algorithms when trying to solve a problem where it's unclear if they will work at all, that could be Research.

Misconception 3: Research Can Be Managed Like Development
Perhaps the most damaging misconception. This leads to:
Demanding precise time estimates for uncertain work.
Expecting steady, measurable progress on fixed schedules.
Evaluating Research with Development metrics.
Research requires different approaches. This is exactly what the rest of this book addresses.

Misconception 4: Research Cannot Be Managed
The opposite extreme: treating Research as mystical work by brilliant people. The best a manager can do is not interrupt.
I've had the pleasure and privilege to work with many extremely skilled researchers. I can confidently say that this is simply not the case, as even the most talented researcher can benefit from skillful guidance.
Specifically, even the most talented researcher benefits from:
Clear connections between their work and product goals.
Structured approaches to exploring alternatives.
Regular checkpoints to assess direction.
Team collaboration and brainstorming.
Research is not magic – it can be managed effectively.

Your Role as Research Leader
When leading Product-led Research, your job has two parts:
1. Ensure the Research makes the biggest possible impact on the product
This is your most important responsibility. "Successful" Research that doesn't impact the product is a failed project. Your job is to maintain the connection between Research work and product value – not just at the start, but continuously.
This means:
Starting with clear product needs, not interesting technical questions.
Regularly validating that the Research still serves the product goal.
Making trade-offs between thorough exploration and shipping impact.
We'll cover this in detail in Part 3.
2. Ensure the Research is done in the most effective manner
Even brilliant researchers benefit from structured approaches. Your role is to help the team work systematically rather than randomly.
This means:
Helping identify which questions are worth answering.
Introducing better heuristics when the team is stuck ("Let's work backwards," "Let's time-box this investigation").
Preventing common failure modes like endless exploration or premature commitment.
(We'll cover this in detail in Part 2.)
Responsibility (1) is defining the right goals. Responsibility (2) is reaching these goals effectively.
The rest of this book provides concrete tools for both responsibilities.

Part 1 - Summary
Research is work where the path to success is not immediately obvious. It requires:
Knowledge of relevant domains.
Effective problem-solving strategies (heuristics).
Monitoring and adjusting approaches (control).
Healthy beliefs and persistence.
Your role as a Research leader is to:
Ensure Research connects to product impact.
Ensure Research is done effectively.
You need different tools to manage Research than you do for Development – tools that ensure Research is done effectively. Part 2 provides those tools.
Part 3 will focus on connecting Research to product impact.
Part 2: Research Management Methods
Chapter 3 - Why Methodology Matters: A True Story
It was a late evening, and the classroom was filled with three dozen students. They were all sitting in front of their computers, working in silence. I was leading a cybersecurity training focused on reverse engineering. Each exercise included a single compiled program without its source code, with one goal: "understand how this program works." The output would be either a detailed document, an equivalent program implemented in a high-level language, or both.
This particular evening, the students were furiously working on reverse-engineering a game. The instruction was: "understand the game's rules, and document them thoroughly." The game had a user interface with two dimensions and could be played against the computer. Moving behind the students, I could see their screens with various reverse engineering tools open.
At some point we asked them to stop, turn around and look at the instructor. The instructor then provided a guided solution – this was a technique we used quite frequently, showing the students the "right" way to approach a problem they had spent some time tackling. The instructor opened the game, looked at the screen, opened the "File" menu, clicked on "Help" – and there it was, the entire description of the game rules.
The Lesson
The room erupted in nervous laughter. Some students looked embarrassed. Others seemed frustrated. But everyone understood the point.
These were capable people. They had the relevant knowledge base, in Schoenfeld's terms, as presented in chapter 1. That is, they had the relevant technical skills – they knew both assembly and C, they knew how to use disassemblers, debuggers, and all the sophisticated tools of reverse engineering. Yet they had missed something fundamental: checking if there was a simpler solution first.
This happens all the time in Research work.
A team spends weeks diving deep into a complex technical approach, when a simpler path existed that they never explored. Or they try one thing, then another, then another – never systematically evaluating which approaches make sense before starting.
The problem isn't capability. It's approach.
This is exactly why you need structured methods for Research management. Without them, even your most talented people will waste time, miss obvious solutions, and burn out trying random approaches.
As a reminder, in chapter 2, we discussed your role as a Research leader:

Focusing on responsibility (2), the illustration here shows how the same peak can be reached via difficult climbing (reverse engineering) or by using a hot air balloon (reading the Help menu). Part of your job as a Research leader is to help your team find the easiest path to the goal.
What This Part Covers
The rest of Part 2 provides concrete tools to prevent these problems. You'll learn:
The Research Tree – A visual framework for systematically exploring solution paths.
Time-boxing methods – How to limit exploration without killing creativity.
These aren't abstract concepts. They're battle-tested techniques that directly address the failure modes you've probably seen: teams spinning their wheels, giving up too early, or getting stuck on the wrong approach.
Let's get started.
Chapter 4 - The Research Tree
You've probably seen this: An engineer/researcher starts down one path, hits an obstacle, tries something else, hits another obstacle, then tries a third approach. Three weeks later, they're still stuck – or worse, they've built something that technically works but doesn't solve the actual problem.
The issue isn't persistence. It's that they never mapped out the solution space. They never visualized which approaches might work, which questions need answering, and how everything relates to each other.
The Research Tree solves this problem.
It’s a way to visualize and manage the Control component of Schoenfeld's framework from chapter 1 – monitoring and adjusting your approach. This is where most Research efforts struggle.

What Is a Research Tree?
A Research Tree is a living visual representation of your Research journey. It captures three things:
Possible solution paths – different approaches you might take.
Open questions – what you need to learn.
Closed questions – what you've already discovered.
Unlike a static plan, the Research Tree grows and changes as you learn (which is one reason I like the name "tree" 😇). You start with what you know, then update it continuously as you investigate. Dead ends get marked. New branches appear. Questions get answered and new questions emerge.
Think of it like this: You're exploring a cave system. You don't have a map – you're creating the map as you explore. You mark passages you've tried. You note which ones are dead ends. You write down questions: "Does this passage connect to the main chamber?" "Is there water down this route?" As you explore, you answer some questions and discover new ones you hadn't considered. You write down the answers you found, and the experiments you conducted to find them ("I tried going left, hit a wall after 50 feet").
Research works the same way. The Research Tree is both your map and your log.
Your First Research Tree
Let's build one together. Consider a common engineering challenge:
Goal: Reduce API response time from 800ms to under 100ms
(Note: despite this not being a real Research task, I chose it for its simplicity, to illustrate the process of creating a Research Tree. It also shows how Research Trees can be useful in a variety of scenarios.)
You start with a fundamental question that needs answering. What's the first thing you need to know?
Take a moment to think about this before reading on.
The first question is usually: Where is the bottleneck?
Until you answer this, you don't know which approaches make sense. Let's draw this:

You can use a simple pen and paper, a whiteboard, or a digital tool to draw this out. When creating your very first tree, I highly recommend doing it by hand – the physical act of drawing will help you feel comfortable with the process.
Now, how can you answer this question? What approaches might tell you where the bottleneck is?
You might identify:
Profile the application with a performance monitoring tool.
Add detailed logging to measure each operation.
Use database query analysis tools.
Let's add these as branches:

(Note: the Brown status means "uncertain, needs investigation" – more on this later.)
Each approach is an investigation you could run to answer the question.
The Power of Stopping to Think
Before we go further, notice what just happened. You stopped.
Instead of immediately jumping into "Let's add more logging!" or "I bet it's the database, let's check queries," you identified multiple possible approaches. You're looking at three different ways to answer the same question.
This is already valuable. Most engineers would have jumped straight into whichever approach came to mind first. Maybe you've done this yourself: spent two days adding detailed logging, only to discover later that a profiler would have given you the answer in 30 minutes.
By creating this tree, you've avoided that trap. You can see all the approaches before committing to any of them. It doesn't guarantee you will choose the "right" path – you can't always do that in advance – but it minimizes the chances of overlooking it entirely.
Remember the reverse engineering students from chapter 3? They never created this tree. They jumped straight to the first approach they knew: disassemblers and debuggers. They didn't stop to think: "What are all the ways we could understand this game's rules?" If they had, they would have listed approaches like: reverse engineer the binary, check the Help menu, just play the game, examine config files, watch network traffic. And if they'd evaluated those approaches using the framework you're about to learn, "check the Help menu" would have scored perfectly: fastest feedback (30 seconds), lowest cost (zero), best coverage (complete rules). Instead, they spent hours on complex reverse engineering when a simple menu click would have worked. Don't be those students.

Now comes the critical question: Which branch do you try first?
Consider our tree again:

Choosing Your First Path
You're looking at three approaches: Profile, Logging, DB Analysis. Each is a heuristic (in Schoenfeld's terms, as presented in chapter 1) – a problem-solving strategy that might work. How do you decide which one to try?
Sometimes, the answer is obvious (I wouldn't really use a framework to check if there's a "Help" menu). Let me show you the framework I use when it's not so clear which path to choose. Ask yourself these questions:
1. Which gives the fastest feedback?
How long until you get an answer?
Profile: Can run in 10 minutes. Setup is probably 5 minutes.
Logging: Need to add logging code, deploy, wait for traffic – maybe 4 hours.
DB Analysis: Need to enable slow query log, wait for data – maybe 2 hours.
Fastest feedback wins. You want to learn quickly.
2. Which has the lowest cost?
What do you need to set up or change?
Profile: Just attach a profiler – no code changes.
Logging: Need to modify code, test, deploy.
DB Analysis: Need database permissions, might need config changes.
Lower cost wins. Why spend hours adding logging if you can get the answer without changing any code?
(In your environment, it may be different. Perhaps logging is really easy, while profiling is super hard to set up. I am not claiming that profiling is a better heuristic than logging – it depends on your circumstances.)
3. Which answers the most questions?
Some approaches answer not just your immediate question, but related questions, too.
Profile: Shows you CPU, memory, database, network – a complete picture.
Logging: Only shows what you logged.
DB Analysis: Only shows database queries.
Broader coverage wins. A profiler might show you that 70% of time is database queries and that 20% is network latency – information you wouldn't get from narrow approaches.

So this is an easy one. Profiling wins on all three criteria. That's your first approach to try.
Update your tree:

This doesn't mean the other approaches are bad, or that this one will necessarily turn out to be the best. It means that profiling is the best starting point given what you know right now.
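To make the "low cost" point concrete, here's a minimal sketch of the profiling approach, assuming a Python service and using only the standard library's cProfile module – the handler and its helpers are hypothetical stand-ins for your real endpoint:

```python
import cProfile
import pstats
import time

# Hypothetical stand-ins for the real endpoint's work.
def fetch_user_profiles():
    time.sleep(0.56)  # simulates database calls (~70% of an 800ms response)

def render_response():
    time.sleep(0.24)  # simulates serialization and network work

def handle_request():
    fetch_user_profiles()
    render_response()

# Profile one request and print where the time goes - no changes
# to the endpoint code itself.
profiler = cProfile.Profile()
profiler.enable()
handle_request()
profiler.disable()

pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)
```

In a real service you'd attach a profiler to live or staging traffic rather than a simulated request, but the principle holds: minutes of setup, no code changes, and a complete picture of where the time goes.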
What If Your First Choice Doesn't Work?
Let's say you try profiling and hit a problem: Your profiler can't attach to the production environment due to security restrictions.
This is valuable information. Mark Profile as Red and move to your next best option:

Now you try Logging. But notice: You didn't waste days trying Profile in production. You tried it, hit a blocker, immediately pivoted to Logging. The tree helped you move quickly.
Choosing When Everything Seems Equal
Sometimes multiple approaches look equally good. Profile and DB Analysis might both be fast, low-cost, and have decent coverage. How do you choose?
Pick one and move on. Don't spend an hour analyzing which approach to try for 30 minutes. The meta-work (deciding) shouldn't take longer than the actual work (trying it).
When approaches seem equal, ask: "Which one am I more familiar with?" or "Which one does the team have experience with?" Use your judgment, make a choice, and start learning.
The worst decision is no decision.
In general, this approach might feel like overkill. Should you really sketch out trees and compare branches before actually doing something?
The surprising answer is that while it almost always feels like overkill, it almost always turns out to be worth it. Try it a few times and you will see for yourself.
Heuristics Can Be Combined
Sometimes you don't have to choose just one. You might run Profile and enable DB Analysis simultaneously. If they don't conflict and you have the time, parallel investigations can be powerful.
But be careful: Don't try to do everything at once. Start with your best option. If that doesn't fully answer your question, then add another approach.
How Answers Lead to New Questions
After marking "Profile" as red, you moved on to "Logging". You added a few indicative log messages and let the system run for a day. You discover: 70% of response time is database queries.

This answer eliminates the need for other approaches (you don't need DB Analysis now – you found the answer). But more importantly, it reveals new questions:

See how the tree grows? One answered question spawns two new questions. Each of these new questions will have its own approaches for answering them.
And you'll apply the same framework to choose which question to answer first: Which gives fastest feedback? Which has lowest cost? Which answers the most questions?
Let's expand "Which queries are slowest?":

Again, you'd evaluate: Which approach gives fastest feedback? Enabling slow query log is probably fastest, as you’d just flip a config flag and wait a few minutes.
You decide to enable the slow query log. After investigating, you discover: User profile queries are slowest: they make 15 separate database calls (N+1 problem).
(What is the N+1 problem in this context? It means that when fetching user profiles, the code first queries for a list of users (1 query), then for each of the N users it makes an additional query to fetch related data (N more queries) – N+1 queries in total. This is inefficient and slows down response time.)
This answer leads to a new question:

Now you have three solution approaches. Again, evaluate them:
Rewrite queries with JOINs: Fast to implement, proven pattern.
Add Eager Loading: Depends on your ORM, might be quick.
DataLoader Pattern: Requires learning new pattern, takes longer.
Rewrite queries with JOINs probably gives the fastest feedback if your team knows SQL well.
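To make the N+1 pattern and the JOIN fix tangible, here's a minimal sketch using sqlite3 from Python's standard library (the schema and table names are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL);
    INSERT INTO users VALUES (1, 'Ada'), (2, 'Grace');
    INSERT INTO orders VALUES (1, 1, 9.5), (2, 1, 12.0), (3, 2, 7.25);
""")

# The N+1 pattern: 1 query for the list of users...
users = conn.execute("SELECT id, name FROM users").fetchall()
for user_id, name in users:
    # ...then one additional round-trip per user (N of these).
    orders = conn.execute(
        "SELECT total FROM orders WHERE user_id = ?", (user_id,)
    ).fetchall()

# The JOIN rewrite: one query, one round-trip, same information.
rows = conn.execute("""
    SELECT users.name, orders.total
    FROM users JOIN orders ON orders.user_id = users.id
""").fetchall()
print(rows)  # same data as the loop above, in a single query
```

With an in-memory database the difference is invisible; over a network, collapsing N round-trips into one is where the win comes from.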
The Complete Picture
Let's see how the full tree looks after a few days of investigation:

(Note: for the "How many queries per request?" node – I skipped the approaches for brevity.)
Reading this tree:
We started with "Where is the bottleneck?" and chose profiling (fastest feedback).
We found out the profiler can't attach to the production environment due to security restrictions, so we pivoted to logging.
Logging revealed that 70% of response time is database queries.
That answer led to two new questions about specific queries.
For "Which queries are slowest?", we chose slow query log (fastest to enable).
Answering it led to another question about fixing the N+1 problem.
For "How can we fix N+1?", we evaluated the approaches and chose "Rewrite queries with JOINs" (team knows SQL, fastest to implement).
Meanwhile, "Can we reduce query count?" is still open and has its own approaches to investigate.
Color-Coding Status
When you're creating your Research Tree, you'll mark both questions and approaches with a particular status. Of course, the following specific colors are just suggestions – the important thing is to use some consistent scheme so you can see the status at a glance.
For Questions:
Open: Not yet answered.
Closed: Answered (show the answer).
Blocking: Must answer before proceeding with an approach.
For Approaches:
Green: Viable, worth pursuing.
Brown: Uncertain, needs investigation.
Red: Dead end or not viable.
In our example above:
"Rewrite with Joins" is Green because we've identified that it addresses the specific N+1 problem and that the team is confident in implementing it.
"Redesign API" is Red because it would take too long for this project.
Other approaches are Brown because we haven't investigated them yet.
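If your team prefers code over drawings, even a tiny data structure can hold the tree. Here's a minimal sketch – the structure and field names are just one suggestion, mirroring the example above:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A question or an approach in the Research Tree."""
    label: str
    status: str = "Brown"  # questions: Open/Closed/Blocking; approaches: Green/Brown/Red
    note: str = ""         # the answer found, or the reason for a pivot
    children: list = field(default_factory=list)

tree = Node("Goal: API response < 100ms", status="Open", children=[
    Node("Q: Where is the bottleneck?", status="Closed",
         note="70% of response time is database queries", children=[
        Node("Profile", status="Red", note="can't attach in production"),
        Node("Logging", status="Green", note="gave us the answer"),
        Node("DB Analysis", status="Brown", note="no longer needed"),
    ]),
])

def print_tree(node: Node, depth: int = 0) -> None:
    """Render the tree as indented text, statuses visible at a glance."""
    note = f" - {node.note}" if node.note else ""
    print("  " * depth + f"[{node.status}] {node.label}{note}")
    for child in node.children:
        print_tree(child, depth + 1)

print_tree(tree)
```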
Additional Tips
Keeping the tree clean and simple is important, and obsessing over its looks and details really misses the point. That said, some readers will find benefits by adding a few more details to the tree, specifically:
Order: add a number next to a branch when tackling it, so it's easy to track which direction you tried first, which came next, and so on.
Pivot Explanations: if you chose to pivot from one branch to another, write down why. This might help when you revisit your decisions later, or when reviewing with your team (as described later in this chapter).
The Research Tree Prevents Common Pitfalls
The Research Tree with this decision-making framework addresses four critical failure modes:
1. Jumping on First Idea: Without a tree, people implement the first approach they think of. The tree forces you to identify alternatives and evaluate them systematically before starting.
2. Tunnel Vision: Even when considering alternatives in the beginning, people tend to lock onto one approach and not pivot from it even when it turns out to be the wrong choice. The tree makes alternatives visible and helps you not only choose the best starting point, but also reevaluate continuously.
3. Inefficient Learning: Teams might try expensive, slow approaches first when faster, cheaper ones exist. The decision framework helps you learn quickly.
4. Answering Questions You Don't Need To: Teams waste time investigating interesting but irrelevant questions. The tree shows how questions connect – you only need to answer questions that lead to your goal.

Time to Practice
Open your favorite drawing tool, or just grab a piece of paper. Think of a Research problem you're currently facing or recently faced.
Now answer these questions:
What's your goal? Write it at the top.
What's the first question you need to answer? Write it below the goal.
What are 2-4 approaches to answer that question? Draw them as branches.
For each approach, evaluate:
How fast is the feedback?
What's the cost?
How much does it answer?
Which approach scores best? Mark it "TRY FIRST".
Actually do this. Don't just read and think "I understand." Drawing the tree and evaluating approaches forces you to be explicit, and you'll immediately see gaps in your thinking.
I'll wait.
...
Don't worry about me, I'm enjoying some really great coffee in the meantime.
...
Done? Good. You now have your first Research Tree with a clear starting point.
Using the Tree with Your Team
Research Trees become even more powerful when shared with a team. The tree provides you, the Lead, with a way to see which directions the team is pursuing, and why. Your job here is to make sure the framework is used. Help your team stop and ask: are we asking the right questions? Are there approaches we missed? Are we choosing the right approach?
During Planning:
Draw the tree together as a group.
Brainstorm questions and approaches.
Evaluate approaches using the framework (fastest feedback, lowest cost, best coverage).
Everyone sees why you're trying a particular approach first.
During Execution:
Update the tree as you learn.
When stuck, revisit the tree to identify alternative approaches.
Make sure you consider whether you are asking all of the important questions, and whether you are considering all relevant approaches.
If you pivoted from a branch, explain your reason and ask if someone can challenge your logic.
Conduct regular tree reviews (weekly or bi-weekly).
Note that the Research Tree is also useful for one-on-one sessions: you can review the tree with individual team members to understand their progress and help them choose next steps. It makes the Control component of Schoenfeld's framework much easier to manage – you see the various questions and approaches laid out visually.
Tools
Pen and paper (seriously, this works great).
Whiteboard (for team sessions).
Miro, Mural, or similar digital whiteboards.
Mind mapping software (XMind, MindNode, and so on).
Even a simple text file with indentation.
The tool doesn't matter. What matters is that the tree exists, is visible, and gets updated.
Pro Tips
Start with the most important question
Don't try to list all possible questions upfront. Start with the one question that, if answered, would most clarify your path forward. Answer it, then see what new questions emerge. More on finding the questions to start from in chapter 7.
Show how answers lead to new questions
When you close a question, immediately ask: "What new questions does this answer reveal?" Draw those as branches from the answer.
Update questions weekly
In your weekly check-ins, explicitly review: Which questions did we close? What did we learn? Which new questions emerged? Which questions are blocking progress?
Re-evaluate when context changes
If you learn something new that changes the evaluation (maybe a tool you thought was fast turns out to be slow), re-evaluate your approaches. The tree is living – update it.
Recap - The Research Tree
The Research Tree is a living visual framework that:
Shows questions you need to answer to reach your goal.
Maps approaches for answering each question.
Helps you choose the best starting point for each question.
Prevents jumping on the first idea without considering alternatives.
Captures how answers lead to new questions.
Tracks status of questions (open/closed/blocking) and approaches (green/brown/red).
Documents the investigation path so the team understands why decisions were made.
Evolves as you learn – questions get answered, new questions emerge.
Key structure:
- Goal → Question → Approaches to answer it → Answer → New questions
Decision framework for choosing approaches:
Which gives fastest feedback?
Which has lowest cost?
Which answers the most questions?
In the next chapter, you'll learn how to manage execution using time-boxing and decision points to keep your Research moving forward without getting stuck.
Chapter 5 - Time-Boxing Research Explorations
In the previous chapter, you learned about the Research Tree, a powerful tool for visualizing and managing Research efforts. The tree helps you systematically explore different solution paths and keep track of open questions. It also helps you decide which path to try first.
But once you've chosen a branch, how long should you pursue it before stepping back to reconsider?
The Problem: Research Without Time Limits
A researcher investigates whether a machine learning model can predict code complexity. Day one goes well. Day two, they need more features. Day three, a different architecture might work better. They switch. Day four, the new architecture needs different preprocessing.
Two weeks later, they're still on this path. When you ask about trying alternatives, they say "I'm close. Just need a few more days."
Three weeks in, they admit this approach isn't viable. Meanwhile, a simpler approach sat unexplored on the Research Tree.
Without a defined checkpoint, there's no natural moment to ask: "Given what I've learned, is this still the best path?" The sunk cost fallacy (continuing an approach because you've already invested time, rather than because it's still the best option) takes over. This is extremely common among researchers, who tend to be dedicated, brilliant people who get fixated on the problems they are trying to solve.
Time-Boxing: Creating Decision Points
Let's face it: it's hard, even impossible, to estimate how long a Research task will take. But you should still provide time limits based on how long you're willing to invest before reconsidering.
Time-boxing provides mandatory decision points.
Note that we are not talking about deadlines, but decision points. These are moments where you stop and evaluate: What did I learn? Is this still the most promising path?
As a rule, for every task longer than a day, define a time limit. For example: "I'll spend three days investigating whether we can cluster files into logical folders based on their I/O operations."
Three things can happen:
Early success: The researcher figures it out in a few hours. Done early, move to the next question.
Early blocker: After one hour, they discover they don't have access to filenames. They can immediately reconsider: "Without filenames, is this viable?"
Time box expires: Three days pass with partial progress. Now comes the mandatory decision point.

The Decision Point
When your time box expires, stop. Pull out your Research Tree (yes, you already love that Tree). Ask three questions:
1. What was your goal in pursuing this direction?
Which question were you trying to answer? Why did you choose this approach over alternatives?
2. What did you learn?
Document your discoveries, even if incomplete:
"The approach works but needs more sophisticated algorithms than we thought."
"We need data we don't currently have."
"This is harder than expected – would take at least 2 more weeks."
"We're 70% there – just need to handle edge cases."
3. Given what you learned, is this still the most promising path?
Look at your Research Tree. You have other branches. Given what you now know, is continuing the best use of time?

You have three options:
You can continue with a new time box: "I've solved the core challenge. One more day for edge cases." Define the new time box. The key: you consciously decided to continue based on what you learned, not inertia.
You can pivot: "This would take two more weeks, and I'm not confident it'll work. There's a simpler approach on my tree." Mark this branch Red. Move to a different branch.
Or you can reconsider the question: "Files in this codebase don't have clear I/O patterns. Maybe I should try clustering by function dependencies instead." Go back to your tree and identify a different question.
Example: Detecting God Objects
You're detecting God Objects (classes that do too much) using static analysis. Your Research Tree shows three approaches: complexity metrics, method naming patterns, or machine learning on AST features.
You choose complexity metrics (fastest feedback, lowest cost) and set a 2-day time box.
Days 1-2: You implement cyclomatic complexity and SLOC metrics. Results: 65% accuracy, but 40% false positives.
Decision point: You stop. Looking at your tree, method naming patterns might provide complementary information. You decide to spend 1 day exploring whether combining both approaches improves accuracy.
Day 3: Combined approach: 78% accuracy, 25% false positives. Promising.
New decision: Time-box 3 more days to refine and test on larger codebases.
Notice what happened: The 2-day time box forced assessment before perfecting the first approach. You learned combining approaches was better. Without time-boxing, you might have spent a week perfecting complexity metrics alone.
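To give a feel for what that first time-boxed approach might look like in code, here's a toy sketch in Python. The thresholds are invented for illustration, and method count plus SLOC are rough stand-ins for real complexity metrics:

```python
import ast
import sys

# Invented thresholds - in practice you'd tune these against labeled examples.
MAX_METHODS = 20
MAX_CLASS_SLOC = 400

def find_god_object_candidates(source: str):
    """Flag classes whose size metrics suggest they do too much."""
    candidates = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.ClassDef):
            methods = [n for n in node.body
                       if isinstance(n, (ast.FunctionDef, ast.AsyncFunctionDef))]
            sloc = node.end_lineno - node.lineno + 1  # lines spanned by the class
            if len(methods) > MAX_METHODS or sloc > MAX_CLASS_SLOC:
                candidates.append((node.name, len(methods), sloc))
    return candidates

if __name__ == "__main__":
    for path in sys.argv[1:]:  # pass the Python files you want to scan
        with open(path) as f:
            for name, n_methods, sloc in find_god_object_candidates(f.read()):
                print(f"{path}: {name} - {n_methods} methods, {sloc} lines")
```

Measuring a script like this against a hand-labeled set of known God Objects is what turns it into the "65% accuracy" result above.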
How to Set Time Boxes
When choosing a branch on your Research Tree, ask: Is this shorter than a day? A few days? Longer than a week?
Shorter than a day: Just do it. Don't overthink time-boxing for tasks this short.
A few days (2-7 days): Time-box for 2-5 days. I usually time-box for slightly less than my estimate – if I think 4 days, I set 3 days. This forces reflection based on learning, not arbitrary completion.
Longer than a week: Time-box for one week maximum. Even if you're making progress, a week is long enough that stepping back is valuable.

Don't overthink the exact duration. The goal is creating natural stopping points. The specific number matters less than having some checkpoint.
Critical: Time-boxing is not about pressure. You're not failing if the time box expires without solving the problem. That's expected in Research. What time-boxing does is force moments of reflection that prevent weeks spent on directions you'd have abandoned if you'd stopped to reconsider.
Using Time-Boxing with Your Team
When managing researchers, set time boxes together during planning. Make sure they understand it's a decision point, not a deadline. Schedule the review explicitly.
At the review, look at the Research Tree together. Ask the three questions. Make the decision collaboratively. This prevents researchers from getting stuck without asking for help, and makes pivoting feel like a positive decision rather than failure.
Integration with the Research Tree
Time-boxing combines naturally with the Research Tree (chapter 4): each branch gets a time box, and when it expires, the tree shows your alternatives.
Time-boxing creates moments to ask "given what I've learned, what should I do next?" The Research Tree helps you answer that question.
Recap
For tasks longer than a day, set a time limit (2-5 days for medium tasks, max 1 week). When time expires, stop and evaluate using your Research Tree: What was my goal? What did I learn? Is this still the best path?
Time boxes acknowledge that research estimation is hard. They prevent sunk cost fallacy and endless exploration. They're not about pressure or finishing "on time" – they're about forcing reflection instead of momentum-driven continuation.
Combined with the Research Tree, time-boxing gives you control over research execution while respecting its inherent uncertainty.
Part 2 Summary
Effective Research requires structured approaches, not just technical skill:
The Research Tree visualizes solution paths, open questions, and closed questions.
Time-boxing prevents endless exploration while preserving flexibility.
Systematic evaluation helps you choose the best path forward.
In chapter 2, I argued that your role as a Research leader is to:
Ensure Research connects to product impact.
Ensure Research is done effectively.

This part addressed the second part of your role: how to ensure Research is done effectively.
In applying these methods, your role is to:
Ensure the team uses these methods consistently.
Help identify when to pivot, when to persist, and which questions matter.
These tools prevent common failure modes: jumping on the first idea, tunnel vision, inefficient learning, and wasting time answering irrelevant questions.
Part 3 shows how to connect Research to product impact.
Part 3: Ensuring Product Impact
In chapter 2, I argued that your role as a Research leader is to:
Ensure Research connects to product impact.
Ensure Research is done effectively.
Part 2 addressed (2) above – ensuring Research is done effectively – with tools that work in any Research context. This part provides the complete answer to (1) – ensuring Research connects to product impact:
First, choose research that matters (chapter 6).
Then, work backwards from product value (chapter 7).
Continuously validate with end-to-end iterations (chapter 8).
Chapter 6 - How to Choose Research Initiatives
The very first step in making sure your Research impacts the product is choosing the right thing to research. And, just as important, avoiding Research that won't impact the product.
Research initiatives can start from two different places:
From a problem the product is facing.
From a technological opportunity.

1. Starting From a Concrete Problem
The most promising way to find research initiatives that will have a big impact on the product is to start from an acute problem that the product is facing.
At Swimm, we allowed users to write documents about their code, but inevitably, the code changed, and the documentation became outdated. This made writing documentation not worth the effort in the first place. We needed to find a way to make sure the documentation stayed up to date automatically, with a good user experience. This was a clear problem we faced, and we didn't know if it was even possible to solve.
Consider a different example: a medical company that wants to diagnose a disease based on a few blood samples. Currently, they have an algorithm in place, but it's not very accurate. Specifically, it yields too many false positives. They need to find a way to improve their prediction accuracy. This is a clear problem the product is facing, and it doesn't have a clear technological solution.
In both cases, the problem is clear, and its impact on the product or company is clear. At the same time, the solution is not clear, and it's not certain that a solution will be technologically feasible.
2. Starting From a Technological Opportunity
When generative AI became popular, many companies started exploring how to leverage it to improve their products. This is an example of an emerging technology that can enable new product features.
The same can happen with smaller, more specific technologies – and not necessarily new ones. Sometimes they're technologies the relevant teams have only just become familiar with. For example, if a researcher reads a paper about a new way to parse source code, that researcher might have an idea for a new product feature that can leverage this technology.
While many good ideas come from technological opportunities, it's important to remember that the real impact of Research is determined by the product, not the technology. It's far riskier to pursue a technological opportunity than a concrete problem. If you do start Research based on a technological opportunity, your responsibility is to make sure that the opportunity, if pursued successfully, will indeed have a big impact on the product.
Problem-Driven vs. Opportunity-Driven: A Comparison
| Aspect | Problem-Driven Research | Opportunity-Driven Research |
| --- | --- | --- |
| Starting Point | Customer pain point or product limitation | New technology or technique becomes available |
| Product Impact Clarity | High – you know exactly what problem you're solving | Low to Medium – you're searching for problems this technology can solve |
| Risk Level | Lower – you know there's demand if you succeed | Higher – solution might not match any important problem |
| Validation | Problem already validated through user feedback | Needs validation that the solution matters to users |
| Examples | Swimm's auto-updating docs; reducing false positives in medical diagnosis | "How can we use LLMs in our product?"; "This new parsing technique could enable..." |
| Success Criteria | Did we solve the problem? | Did we find a valuable use case AND solve the problem? |
Problem-driven research starts with validated demand: you know that the goal is worth pursuing. Opportunity-driven research starts with a hammer looking for nails. Sometimes you find valuable nails, but it's riskier.

Should You Pursue a Research Initiative?
Say you have a research initiative – an idea you'd like to research. To know whether you should pursue it, you should be able to answer a simple set of questions:
1. Product Impact – If the Research succeeds, how big will the impact be?
This is the most crucial question.
For problem-driven research, this is usually straightforward: "If we solve automatic doc updates, we'll retain 40% more customers who currently churn due to outdated docs."
For opportunity-driven research, you need to work harder: "If we use LLMs for code analysis, we could enable X feature, which would help Y users save Z hours per week, translating to $W in additional revenue."
If you're not convinced a successful Research result would make a huge impact on the product, it's probably not worth pursuing at the moment (until you are convinced). "Huge impact" means:
Solving a top-3 customer pain point, OR
Enabling a new product capability that significantly expands your market, OR
Reducing a major cost or risk factor
Anything less, and your resources are better spent on Development work with clearer ROI.

2. Time to Impact – How long until we see product value?
Time estimation is always hard in software. This is true for Development tasks, and even more so for Research. You learned ways to manage this uncertainty during Research in chapter 5, but even at this early pre-Research stage, you should consider the timeline.
Ask yourself two related questions:
How long for the Research itself? (days? weeks? months?)
How long from successful Research to product impact? (immediate integration? requires significant Development? needs market validation?)
The total timeline matters because:
Research that takes 6 months but delivers immediate product value might be worthwhile.
Research that takes 2 months but requires 8 more months of Development might not be worth it if your product roadmap can't accommodate that.
Research that takes 1 year with uncertain outcomes probably isn't worth pursuing unless the potential impact is transformational.
Of course, the actual timespans vary greatly by context (and specifically by the company you work for).
Consider also whether you can achieve incremental value. Can you get some product impact in 3 months even if the full solution takes 9 months? This significantly de-risks longer Research initiatives.

3. Resources – Do you have what you need?
Research requires specific resources beyond just "engineering time":
Knowledge: Do you have team members familiar with the relevant:
Technical domain (for example, NLP, compiler design, distributed systems)?
Business domain (for example, medical diagnostics, financial regulations)?
Similar problems solved elsewhere?
If not, can you acquire this knowledge in reasonable time? (Reading papers for a week: reasonable. Earning a PhD: not reasonable.)
Capacity: Can you dedicate someone (or multiple people) for the expected duration? Research requires sustained focus – splitting someone 10% on Research and 90% on urgent product work rarely succeeds.
Dependencies: Do you need:
Access to specific data or systems?
Collaboration from other teams?
External expertise or consulting?
Budget for tools, cloud resources, or datasets?
If critical resources are unavailable or expensive to obtain, the initiative may not be viable.

Pre-Research Checks
You might not have clear answers to all three questions above. In that case, it makes sense to spend time answering them before actually pursuing the Research. This phase should be limited in time – ideally a few days, at most a week or two.
In this stage you might:
Interview customers and business stakeholders to understand the real impact of solving the problem.
Read about the technological aspects and previous research done in this field to assess feasibility and timeline.
Consult with people who have faced similar challenges (internal experts, academic contacts, or practitioners in your network).
Run quick feasibility tests – not full Research, but simple checks like "Can we even access the data we'd need?" or "Does this library exist in our language?" (see the sketch below).
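Such checks really can be a few lines. For example, a sketch that verifies a candidate library is even installed in your environment (the library name below is a placeholder):

```python
import importlib.util

CANDIDATE = "cobol_parser"  # placeholder - the library your approach depends on

spec = importlib.util.find_spec(CANDIDATE)
status = "available" if spec else "not installed (or may not exist at all)"
print(f"{CANDIDATE}: {status}")
```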
After pre-research checks, you should have a clear answer: "Yes, we should pursue this because the impact is X, the timeline is roughly Y, and we have (or can get) the resources we need."
If you're not confident about substantial product impact, don't start the Research. This might sound harsh, but it's crucial: unfocused Research that doesn't connect to product needs wastes your most valuable resource – talented engineers' time and attention. One way to enhance your certainty about product impact is described in chapter 7.
The next chapters assume you understand the product impact of successful Research outcomes, and will help you ensure you actually achieve this impact as quickly as possible.
How to Choose Research Initiatives - Summary
Research initiatives start from either:
Problem-driven: A clear product need (strongly preferred)
Opportunity-driven: A new technology (higher risk)

Before pursuing Research, answer three questions:
Product Impact: Will success create huge value?
Time to Impact: How long until we see product results?
Resources: Do we have the knowledge, capacity, and dependencies?

Run pre-research checks (a few days, at most a week or two) to answer these questions if unclear.
Only pursue Research when you're confident about substantial product impact.
The next chapter shows how to maintain that product connection throughout the Research process.
Chapter 7 - Drawing Backwards
So you've chosen a research initiative, and done so correctly (following chapter 6). Now, how do you start working on it?
Most teams start by diving into technical challenges: parsing COBOL, building callgraphs, implementing algorithms. But there's a more powerful approach that ensures your Research actually impacts the product: start from the end and work backwards.
This heuristic – working backwards from your goal – is one of the most valuable problem-solving strategies you can use. Let me show you why with a simple game.
The Spiral Game
Consider this game:

The rules are simple:
The pawn starts on spot 41.
On each turn, a player moves the pawn backward between 1 and 6 spots. That is, if the pawn is on spot 41, you can move it to any spot from 35 to 40.
The player who moves the pawn to spot 1 wins.
Take a moment: If you go first, how would you play to guarantee a win?
(I do encourage you to take a moment and try this for yourself first.)
Most people start thinking from the current position (spot 41) and try to calculate forward: "If I move 3 spaces, they can move 2, then I can move 4..." This quickly becomes overwhelming – too many possible moves to track.
But if you work backwards, the solution becomes clear:
Starting from the end (spot 1):
To win, you need to be the player who moves the pawn to spot 1.
From any of spots 2-7, you can move the pawn directly to spot 1 (a move of 1 to 6 spots).
Conclusion: If the pawn is on spots 2-7 at the start of your turn, you win.

Continue working backwards (from spots 2-7):
You want your opponent to land on spots 2-7.
From spot 8, no matter what they do (move 1-6), they land on spots 2-7.
Conclusion: If you get the pawn to spot 8, you guarantee a win.

Continue working backwards from spot 8:
Notice that you've just created a "new" game: you no longer need to land on spot 1 in order to win. It's enough that you land on spot 8 – as from there, you can win no matter what your opponent does.
So, how do you ensure you can land on spot 8?
Spots 9-14 all allow moving to spot 8.
So if your opponent starts their turn with pawn on a spot between 9-14, you can force it to 8.
Which means you do not want to land on spots 9-14, but you do want to land on 15 (which will force your opponent, in turn, to land somewhere between 9-14).

Again, you've just created a "new" game: where your goal is to land on spot 15. From there, you already know how to win.
You can keep going like this, drawing backwards – would you want to land on spot 16 or not? How about 17? Until, at some point...
The pattern emerges:
Safe spots for you: 1, 8, 15, 22, 29, 36
From any of these, your opponent cannot avoid giving you another safe spot
These are all numbers of the form:
1 + 7n

The winning strategy:
From spot 41, move 5 spots backward to reach spot 36 (a "safe spot").
No matter what your opponent does, you can always move to the next "safe spot".
Eventually, you reach spot 1 and win.
Notice what happened: By working backwards from the goal, you discovered the systematic solution. Working forward from the start position would have been much, much harder.
You actually deployed a second powerful heuristic here: examining specific cases (1, 8, 15, 22, 29, 36) and generalizing (1 + 7n). This heuristic is also very common in Research, though it isn't specific to ensuring product impact.
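If you like seeing heuristics as code, here's a minimal sketch of the same backward induction computed programmatically. This is my own illustration (in Python), not part of the game's presentation; the names and structure are mine:

# Backward induction over the spiral game: find the "safe spots" you want
# to land on, working from spot 1 back to spot 41.
def safe_spots(last_spot: int = 41, max_move: int = 6) -> list[int]:
    # losing[s] is True when the player about to move, with the pawn on
    # spot s, loses against perfect play. Landing on such a spot is "safe".
    losing = {1: True}  # pawn on spot 1: the previous move already won
    for s in range(2, last_spot + 1):
        # The player to move wins iff some move of 1..max_move lands the
        # pawn on a spot that is losing for the opponent.
        can_win = any(losing[s - k] for k in range(1, min(max_move, s - 1) + 1))
        losing[s] = not can_win
    return [s for s in sorted(losing) if losing[s]]

print(safe_spots())  # [1, 8, 15, 22, 29, 36] – exactly the 1 + 7n spots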
How to Apply Drawing Backwards to Product-led Research
Let's connect this to Product-led Research. When you face a complex Research challenge, the question isn't "What technical problem should I solve first?" but rather:
"If the Research succeeds, what would the result look like?"
This forces you to:
Connect to product impact: You must envision the end state that creates value.
Work systematically: Like the spiral game, you identify the chain of dependencies backward.
Validate assumptions: Before solving sub-problems, ensure they lead to your goal.
Let me show you how this worked in practice at Swimm.
Case Study: Extracting Business Rules from COBOL
At Swimm, we wanted to automatically generate documents from COBOL codebases that included all the extracted business rules.
Quick context on business rules: Business rules are the constraints, conditions, and actions embedded within software that reflect organizational policies. For example, in money transfer logic:
A customer cannot transfer more than their available balance (overdraft limits notwithstanding).
High-value transfers require additional verification.
Cross-currency transfers must apply current exchange rates.
Some sources define business rules with three elements: Event, Condition, Action:
ON <Event>
IF <Condition>
THEN <Action>
ELSE <Action>
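To make this structure concrete, here's a minimal sketch of how the overdraft rule above might be represented in Event/Condition/Action form. This is my illustration only (in Python), not Swimm's actual data model:

from dataclasses import dataclass

@dataclass
class BusinessRule:
    event: str        # what triggers the rule
    condition: str    # when the rule applies
    action: str       # what to do if the condition holds
    else_action: str  # what to do otherwise

overdraft_rule = BusinessRule(
    event="money transfer requested",
    condition="requested amount > available balance",
    action="reject the transfer",
    else_action="proceed with the transfer",
)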
Our goal was to extract all business rules from a COBOL codebase. Extracting business rules is challenging in any codebase, and the challenge is particularly acute in legacy COBOL code. (If you're interested in the technical details, see this post.) For this book, it's sufficient to know that many research attempts over the last few decades have tried different approaches to address this challenge.
Where Do You Even Start?
Faced with this problem, you might think:
"Should I build a callgraph of all functions? That means I need to parse COBOL code..."
"Should I create a COBOL parser first?"
"Maybe I should read academic papers about program comprehension?" (Good practice during the pre-Research checks phase from chapter 6)
"Perhaps I should track COBOL variables through the codebase?"
"Should I distinguish business conditions ('if requested transfer amount > available balance') from technical conditions ('if variable not initialized, show error')?"
Each of these might require its own research effort. Where do you start?

Drawing Backwards: Start with the End Result
Drawing backwards made us ask:
"If the Research succeeds, what would the result look like?"
When we first asked ourselves this question, we weren't sure. We knew from our clients that they wanted extracted business rules, but we couldn't know what the "ideal" output would look like.

So we decided, before tackling any technical challenges, to manually create documents showing extracted business rules from sample programs. We did this completely manually: no parsing, no algorithms, just understanding COBOL code ourselves and writing documentation.
We did this for various types of applications from different codebases, and learned:
There's no single "right" way to construct such a document.
The output structure differs from one program to another.
Certain patterns appear consistently across business logic.
By creating these documents manually, we formed a hypothesis: "This is what the output should look like, which means this output would make the biggest impact on the product. This is our north star."
But was it actually the north star?
Validating the Hypothesis
Once we manually wrote the documents, it was time to verify our hypothesis. With concrete output in hand, we could:
Discuss internally within the team – get feedback from engineers who understand both COBOL and our product.
Reach out to clients – show them the actual output and ask: "Does this solve your problem?"
We deliberately refrained from solving hard technological challenges before knowing where we were aiming.

Working Backwards Through Sub-Problems
The manual documents also gave us something crucial: a concrete example to analyze backwards.
For instance, we saw that our documents listed many conditions. This made us realize we would (most probably) need to:
Find conditions in the code.
Filter out business-related conditions (vs. technical conditions).
Explain the condition in the document.
Here's where drawing backwards becomes powerful: We tackled (3) before (2), and (2) before (1).
Why? Because we needed to solve (3) to reach our goal – creating impactful documents.
This means we mocked the first two steps – we assumed we already had a way to find conditions and filter business-related ones. So we already had our list of business conditions, and now the challenge was: for each condition, explain it clearly in the output document.
This approach prevented us from spending weeks on tasks (1) and (2), only to discover that our explanation approach (3) didn't actually work. If we can't accomplish (3) assuming (1) and (2) are solved, then perhaps this entire approach isn't viable, and we need to reconsider alternatives.
The logic is clear: While solving (3) ultimately requires solving (1) and (2), if we fail at (3) even with (1) and (2) accomplished, then pursuing (1) and (2) might not be worth the effort at all.
What does mocking a dependency look like in practice? In our case, mocking steps (1) and (2) meant manually identifying a handful of business conditions from the COBOL code ourselves, rather than building automated systems to find them. We created a small, hand-crafted list of conditions that we knew were business-relevant and existed in our sample programs, and used that as input for testing our explanation approach in step (3).
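In code, that mock might look something like the following sketch. The names are hypothetical and the conditions are illustrative – this is not our actual implementation:

# Steps (1) and (2), mocked: a hand-crafted list of business conditions we
# identified ourselves by reading the sample COBOL program, standing in
# for the automated find-and-filter pipeline we haven't built yet.
def find_business_conditions(cobol_source: str) -> list[str]:
    return [
        "WS-TRANSFER-AMOUNT > WS-AVAILABLE-BALANCE",
        "WS-TRANSFER-AMOUNT > WS-HIGH-VALUE-THRESHOLD",
    ]

# Step (3), the step under test: explain each condition for the document.
# This placeholder just formats the raw condition; the real research effort
# goes into producing a genuinely clear explanation here.
def explain_condition(condition: str) -> str:
    return f"Business rule: this logic applies when {condition}."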
Why Drawing Backwards Is So Powerful
The advantages of drawing backwards become clear from both the game examples and the COBOL case study:
1. Forces Connection to Product Impact
Like working backwards from spot 1 in the spiral game, starting with "what does successful output look like?" forces you to think about end-user value. You can't start drawing backwards without a clear goal. This prevents the common failure mode of technically interesting Research that doesn't impact the product.
2. Provides a System for Progressing
When Research seems like a huge, daunting task with endless options, working backwards gives you a systematic approach. Just as the spiral game became trivial once you worked backwards to identify the safe spots (1, 8, 15, 22...), Research becomes more manageable when you work backwards from the desired output to identify the dependencies.
3. Validates Each Step Before Investment
Working backwards lets you validate that each sub-problem actually contributes to your goal before you invest significant effort. In the COBOL example, we could verify that our explanation approach (step 3) worked before spending weeks on finding and filtering conditions (steps 1 and 2).
Practical Application: Your Research Tree
Drawing backwards integrates naturally with the Research Tree from chapter 4. When you create your tree:
Start with the end:
Root of tree: "Generate impactful business rule documentation".
First question: "What should successful output look like?"
Approach: Create manual examples.
Then work backwards:
Once you have output, ask: "What do we need to produce this?".
This reveals the actual sub-questions and their dependencies.
Each branch represents a prerequisite you need to solve.
Validate before going deeper:
Before pursuing any branch deeply, ask: "If I solve this, does it actually get me closer to the goal?"
Mock out dependencies to test approaches cheaply.
Use time-boxing (from chapter 5) to limit exploration of any branch.
Summary: Drawing Backwards
Drawing backwards is the heuristic of starting from your desired end state and working systematically toward your current position.
In Product-led Research, drawing backwards means:
Start by defining what successful output looks like (often manually or semi-manually).
Validate the output with stakeholders before technical work.
Work backwards through dependencies, solving them in reverse order.
Validate that each step contributes to the goal before major investment.
This heuristic ensures that Research connects to product impact, since you start with the product goal. It provides systematic progress even when problems seem overwhelming, and makes you validate each step before you invest heavily.
Integration with other tools:
Use with Research Tree (chapter 4) to map backwards dependencies.
Apply time-boxing (chapter 5) to limit exploration of each branch.
Combine with pre-Research checks (chapter 6) to validate product impact.
In the next chapter, you’ll learn about two limitations of drawing backwards and how to address them with continuous end-to-end iterations.
Chapter 8 - End-to-End Iterations
Your most important role is ensuring that Research impacts the product. One major risk: spending weeks on research questions that seem vital, only to discover they don't actually impact the product.
In chapter 7, I advocated for drawing backwards – starting from product impact and working your way back to research questions. This is indeed powerful, especially because it forces you to focus on the end result and validate it with users.
But drawing backwards alone has limitations. Two risks emerge:
Risk 1: Infeasibility in practice. Your manually-created "ideal output" might be technically infeasible or impossibly expensive to generate. You won't discover this until you try to build it.
Risk 2: Lack of real-world validation. Since you haven't completed an end-to-end process, you probably haven't run your solution on clients' actual data (assuming you can't access it during the manual phase). Continuing our COBOL example from the previous chapter, what works on carefully-chosen examples might fail on real codebases.
These two risks are why I advocate for continuous end-to-end iterations.
Drawing Backwards + End-to-End: A Combined Approach
These aren't competing approaches – they're complementary:
| Heuristic | Purpose | What It Gives You | Strength | Limitation |
| --- | --- | --- | --- | --- |
| Drawing Backwards (chapter 7) | Define target and path | 1. Target output; 2. Hypothesized chain of steps to get there; 3. Order of dependencies | Ensures product focus; reveals what you need to build | Hypotheses may be wrong; doesn't validate feasibility on real data |
| End-to-End Iterations (this chapter) | Validate and build incrementally | 1. Proof the chain works; 2. Learning from real data; 3. Prioritized improvements | Validates feasibility; discovers what actually works | Can lose direction without clear target and chain |

The recommended flow:
Use drawing backwards to:
Define your target output (manually create examples, validate with users).
Identify the chain of intermediary steps needed to produce that output.
Understand the order of dependencies.
Switch to end-to-end iterations to:
Test whether your hypothesized chain actually works.
Validate on real data (not just your manual examples).
Incrementally build toward the target.
Throughout iterations, keep both the target AND the chain from step 1 as your guide.
Drawing backwards already outlines your end-to-end process (Principle 1 below). End-to-end iterations validate and build that process incrementally, learning what works and what doesn't.
The Five Principles of End-to-End Iterations
The end-to-end approach relies on five principles:
Outline the end-to-end process.
Get to end-to-end by simplifying.
Ship it as fast as you can.
Gradually replace steps.
Get frequent feedback on results.
Let me explain each principle using the COBOL business rules example from chapter 7. Since this book isn't about COBOL, I'll keep explanations short while focusing on the methodology.

Principle 1: Outline the End-to-End Process
Start by outlining the entire process from input to output. More accurately, outline the hypothesized end-to-end process – you can't know for certain until you test with users.
If you followed drawing backwards from chapter 7, you already have this. Drawing backwards naturally produces this chain: you started with the target output, asked "what do I need to produce this?", then "what do I need for that?", working your way back to the starting input. That chain IS your end-to-end process outline.
For the COBOL business rules example: We used drawing backwards to identify that our document needed business rule sections, which meant we needed to explain conditions, which meant we needed to filter business conditions, which meant we needed to find conditions, which meant we needed to parse COBOL. Working backwards gave us this chain:
Start with a COBOL program.
Parse the COBOL program into an Abstract Syntax Tree (AST).
Traverse the AST to find conditions.
Filter out business-related conditions (vs. technical conditions).
For every business rule, create a document section explaining the condition.
Sort document sections according to business rule dependencies.
This is your first draft: the hypothesized process that drawing backwards revealed.
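If it helps, the outline can even live in code as a chain of unimplemented stubs. All names here are mine, chosen for illustration – the point is only to make the chain and its dependencies explicit:

def parse_cobol(source: str):                                # step 2: source -> AST
    raise NotImplementedError

def find_conditions(ast) -> list[str]:                       # step 3: AST -> conditions
    raise NotImplementedError

def filter_business(conditions: list[str]) -> list[str]:     # step 4
    raise NotImplementedError

def explain(condition: str) -> str:                          # step 5: condition -> section
    raise NotImplementedError

def sort_by_dependencies(sections: list[str]) -> list[str]:  # step 6
    raise NotImplementedError

def extract_business_rules_document(cobol_source: str) -> str:
    ast = parse_cobol(cobol_source)
    conditions = find_conditions(ast)
    business_conditions = filter_business(conditions)
    sections = [explain(c) for c in business_conditions]
    return "\n\n".join(sort_by_dependencies(sections))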

How to outline:
Draw boxes on a whiteboard.
Use a flowchart if the process isn't linear.
Keep it visible throughout the research.
Update it as you learn.
The outline serves as your map – it shows you where you are and what you're building toward.
Principle 2: Get to End-to-End by Simplifying
Your goal: make the process work end-to-end. Start with input (a COBOL program), reach the output (a document with business rules), while passing through the intermediate stages.
This sounds like too much. Don't you need to complete the whole research to achieve this?
Definitely not. The trick is to find the easiest way to get end-to-end by taking shortcuts and making assumptions that are definitely too generous for production. You should fight your inner engineer who wants to "do it right" from the start. Your goal is to get an end-to-end process working, as this heuristic is far more valuable than perfecting one step.
This means that some of the steps can be completed manually, or with very simple implementations that you know won't work in production. Remember it is an intermediate milestone, not the final product.
For our COBOL example:
Start with a single, known COBOL program.
Skip parsing – just manually write a list of conditions for the next stage.
The filtering function returns true if business-related, false otherwise.
First implementation: a simple mapping between input conditions (that you know of) and whether to return true or false.
Alternative: always return true – yes, you'll get non-relevant rules, but that's a problem for later.
The document generation might also be manual for the first pass.

End result: A rough-looking document, generated by a combination of manual work and code that works solely on a single program. This is far from shippable, but it's an extremely important milestone for ensuring research impacts the product.
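For instance, the filtering shortcut above might start as nothing more than a lookup table with a permissive default – a sketch with illustrative names and conditions, good enough to get end-to-end and nothing more:

# First-pass filter: a hard-coded mapping over the few conditions we
# already know exist in our single test program.
KNOWN_CONDITIONS = {
    "WS-TRANSFER-AMOUNT > WS-AVAILABLE-BALANCE": True,  # business rule
    "WS-RETRY-COUNT > 3": False,                        # technical detail
}

def is_business_condition(condition: str) -> bool:
    # Defaulting to True is the "always return true" shortcut: we'll get
    # non-relevant rules in the document, but that's a problem for later.
    return KNOWN_CONDITIONS.get(condition, True)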
You might argue that this is overkill – why waste time on manual steps when you'll need to automate them eventually? Even if you don't feel that way, I promise from experience that many researchers and engineers do.
From my own experience, I learned (the hard way) that this pays off. Getting to a working end-to-end process makes sure you:
Validate that the entire flow works.
Identify bottlenecks and blockers early.
Have something concrete to show and get feedback on.
Prevent spending months on one component that turns out to be unnecessary.
Principle 3: Ship It as Fast as You Can
By the end of Principle 2, you have a working end-to-end process for a single case. You definitely can't ship it yet – it only works on one specific program, certainly not the client's program.
Next milestone: make it shippable.
Does "shippable" mean it works on any program? That's too hard and takes too long. You need shortcuts.
This is where creativity matters. For our COBOL example:
Information gathering:
If running on a client's program, get as much information as possible.
Perhaps assume it's less than 1,000 lines of code.
Perhaps you know which COBOL dialect the client uses, so you don't need to support others (fun fact: COBOL has more than 300 dialects. Well, it's fun for you, not for those who need to support them).
Algorithmic shortcuts:
Use regular expressions to find conditions instead of a full parser (see the sketch after this list).
Yes, you'll miss some conditions – that's a problem for later.
UX shortcuts:
Skip the document generation. Just print rules to the console.
Run from the command line without a GUI.
Manual configuration file instead of user interface.
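As a sketch of that regex shortcut: a first condition finder might be as simple as the following. The pattern is mine and knowingly naive – it catches only simple one-line IF statements and misses nested and multi-line conditions, which is exactly the kind of shortcut this milestone allows:

import re

# Knowingly naive: matches simple one-line IF statements only.
IF_PATTERN = re.compile(r"^\s*IF\s+(.+?)\s*(?:THEN)?\s*$", re.IGNORECASE)

def find_conditions(cobol_source: str) -> list[str]:
    conditions = []
    for line in cobol_source.splitlines():
        match = IF_PATTERN.match(line)
        if match:
            conditions.append(match.group(1))
    return conditions

Note how this cheaply stands in for the parse-and-traverse steps of the outline from Principle 1, at the cost of missed conditions.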
Your goal is clear: ship it.
It doesn't need to be perfect. It needs to be enough to learn from this iteration. If you don't ship, you can only learn from your intuition – a very bad idea.
Unlike in Principle 2, note that here you don't generate the output (even partially) manually – you need working software that can run on real data.
What "shippable" means:
Works on the client's actual data (even if imperfectly).
Produces output you can get real feedback on.
Doesn't require your manual intervention for each run.
What "shippable" doesn't mean:
Perfect accuracy.
Handles all edge cases.
Production-quality code.
Beautiful user interface.
You can't ship just anything, though. In our example, if you only have a solution that works on one specific test program you created, generating an irrelevant document for the client wastes time and teaches you nothing.
You can't learn from an iteration without shipping. And you can't ship without a working end-to-end process. I know it sounds obvious, but many teams miss this in practice as they go down the rabbit hole of solving one step "the right way" before validating the entire flow.
Principle 4: Gradually Replace Steps, While Carefully Prioritizing
Now you have a working end-to-end process. You can start replacing various steps' implementations with better ones:
Replace manual steps with automated ones.
Remove shortcuts and add more robust implementations.
Improve accuracy and coverage.
After shipping, you'll have many things you think you must replace immediately. But remember: the goal is making research impact the product. To do that, you need careful prioritization.
The Prioritization Framework
Prioritize changes based on three criteria:
1. Learned Necessity: Did you learn that something doesn't work in the current implementation and must be fixed for the product to be viable?
Example: "Regular expressions miss nested conditions, and 60% of the client's business rules are in nested conditions. We must use a real parser."
2. Learning Potential Will changing this implementation help you learn more about product impact in the next iteration?
Example: "If we improve the filtering accuracy from 40% to 80%, we'll learn whether the document format is actually useful when it contains mostly-correct content."
3. Effort Estimation: How much time will this change take?
Example: "Building a full parser: 3 weeks. Improving regex to handle nested conditions: 2 days. The latter gives us 80% of the value for 10% of the effort."
Continuing with our COBOL example, you may consider and prioritize these changes following the first iteration:
Changes after first iteration:
- Fix parser to handle nested conditions [Learned: Critical, 2 days] → DO FIRST
- Add GUI for document generation [Nice-to-have, 1 week] → DEFER
- Improve filtering accuracy [Learning: Critical, 3 days] → DO SECOND
- Support additional COBOL dialects [No evidence needed, very long time...] → DEFER
- Better document formatting [Client mentioned it, 1 week] → DEFER (validate content first)
Iterate fast: change something, ship again, get feedback. Don't solve every issue you find, even issues clients mention. Ask: "What's the most important thing to change to learn something in the very next iteration?"
Principle 5: Get Frequent Feedback on Results
The trick is to be obsessed with getting as much feedback as you can, on each and every iteration.
On each iteration:
Get feedback on the end result
Show the actual output to users.
When applicable, don't just ask "does this work?" – also watch them try to use it.
Identify what works, what doesn't, what's missing.
Understand the next questions to answer
What did you learn about product impact?
What assumptions were validated or invalidated?
What new questions emerged?
Plan the next iteration accordingly
Use the prioritization framework above.
Focus on learning, not building.
Implement minimal changes to answer questions
Don't fix everything.
Make the smallest changes that will answer your next most important question.
Keep the cycle fast (days to weeks, not months).
Example iteration cycle:
Iteration 1:
- Shipped: Regex-based condition finder, always-true filter, console output.
- Learned: Document structure works, but too many false positives (noise).
- Question: Is filtering accuracy the blocker to usefulness?
- Next: Improve filtering to 80% accuracy, ship again.
Iteration 2:
- Shipped: Same regex finder, smarter filtering (80% accurate), console output.
- Learned: With better filtering, users found the output useful!
- Question: Do we need a parser, or is regex sufficient?
- Next: Test on larger programs to see where regex breaks down.
Iteration 3:
- Shipped: Same pipeline, tested on 10 real programs.
- Learned: Regex fails on 4/10 programs (nested conditions).
- Question: Parser worth the investment now?
- Next: Build parser for nested conditions, ship again.
Notice: Each cycle is fast, focused on one question, and based on real learning.
In real life, you may want to tackle a few questions per iteration. If two things are clear, fix them before shipping again so you can actually gain valuable feedback rather than hearing the same complaints.
Also, when working with real clients, they might not be as receptive to trying things so many times – so you’ll need to consider that aspect as well. Regardless, the key remains the same: keep cycles short and focused on learning.
Integration with Other Tools
End-to-end iterations work best when combined with other research management tools:
Research Tree (chapter 4):
The outlined process becomes a branch in your tree.
Each iteration tests different approaches on branches.
Failed iterations mark branches red, successful ones mark green.
Time-boxing (chapter 5):
Time-box each iteration.
If you can't ship in the time box, you're building too much.
Drawing Backwards (chapter 7):
Drawing backwards defines both:
The target output.
The hypothesized chain of intermediary steps to reach it.
End-to-end iterations test whether that chain actually works on real data.
The target from drawing backwards acts as your north star throughout iterations.
Each iteration validates or refines the steps that drawing backwards identified.
Use both: drawing backwards reveals what to build, while end-to-end iterations prove it works and let you test it incrementally.
Summary: End-to-End Iterations
End-to-end iterations ensure Research impacts the product by continuously validating feasibility and learning from real data.
The five principles:
Outline the process: Drawing backwards already gives you this – the chain from input to output.
Simplify to get end-to-end: Use shortcuts and manual steps to make the whole chain work.
Ship fast: Real data teaches what theory can't.
Prioritize carefully: Use the three-criteria framework:
Learned necessity (is it broken?)
Learning potential (what will you learn?)
Effort estimation (how long will it take?)
Get frequent feedback: Fast cycles (days to weeks) focused on learning.
Part 3 Summary
Back in chapter 2, you learned that your role as a Research leader is to:
Ensure Research connects to product impact.
Ensure Research is done effectively.

Part 2 handled ensuring Research is done effectively.
Part 3 handled ensuring Research connects to product impact – your most important responsibility. This part provided a complete methodology for ensuring product impact through three complementary stages:
The Three Stages of Product-Led Research
Stage 1: Choose research that matters (chapter 6)
Before starting any Research, answer three critical questions:
Product impact: Will success create huge value?
Time to impact: How long until we see product results?
Resources: Do we have the knowledge, capacity, and dependencies?

You learned to distinguish between problem-driven research (starting from validated customer pain points, which is strongly preferred) and opportunity-driven research (starting from new technologies).
Run focused pre-research checks to answer these questions. Only pursue Research when confident about substantial product impact.
Stage 2: Start from product value and work backwards (chapter 7)
Once you've chosen what to research, the drawing backwards heuristic ensures that you start right. Instead of diving into technical challenges, start from the end:
Manually create the desired output: What should successful Research produce? Create it by hand before solving any technical problems.
Validate with stakeholders or customers: Show them the output and confirm it solves their problem.
Work backwards through dependencies: From that validated output, identify what you need to produce it, then what you need for that, working your way back to your starting point.
Solve in reverse order: Tackle the final step first (with earlier steps mocked), validating that each step contributes to the goal before investing heavily in earlier dependencies.

The spiral game example showed why this works: working backwards from the goal reveals systematic solutions that working forward obscures.
Drawing backwards forces product connection because you literally start with product output. It integrates with the Research Tree (chapter 4): While drawing backwards identifies what questions matter, the Tree helps you explore approaches for answering them.
Stage 3: Validate and build iteratively (chapter 8)

Drawing backwards alone has two limitations:
Your manually-created "ideal output" might be technically infeasible to generate.
You haven't validated on real user data.
End-to-end iterations address both limitations. These two tools aren't competing approaches – rather, they complement one another.
How the Three Stages Work Together
These chapters form a complete methodology for product-led Research:
Choose → Start from the end → Validate iteratively
Chapter 6 ensures you choose research that COULD have huge impact (strategic decision).
Chapter 7 ensures you start from product value (planning backward from validated output).
Chapter 8 ensures you continuously validate with real users.
Each stage prevents a different undesired, but painfully common, outcome:
Chapter 6 prevents pursuing research that won't matter (even if successful).
Chapter 7 prevents building technically correct solutions that don't create product value.
Chapter 8 prevents building infeasible solutions or solutions that fail on real data.
You now have the complete answer to "How do I ensure Research impacts the product?":
Choose wisely: Only pursue research with clear, huge product impact.
Start from product value: Manually create and validate desired output before technical work.
Work backwards: Identify dependencies from output back to starting point.
Build end-to-end fast: Get entire chain working with shortcuts and manual steps.
Ship to real users: Validate on actual data, not just examples.
Iterate based on learning: Improve the chain incrementally, prioritizing what teaches you most.
This methodology keeps product impact central at every stage: choosing, planning, and executing. It prevents the most expensive failure: "successful" Research that doesn't affect the product.
Book Summary
You picked up this book because managing Research is different from managing Development – and you needed concrete tools to handle that difference.
What You Learned About Research
In chapter 1, you learned that Research isn't about difficulty or technical sophistication. It's about uncertainty of approach – that is, confronting problems where you don't know if a solution exists, where multiple approaches might work but you're not sure which, and where the path to success isn't immediately clear.
You saw how Alan Schoenfeld's problem-solving framework breaks down the Research process into four components:
Knowledge base – what you know.
Heuristics – strategies for approaching problems.
Control – monitoring and adjusting your approach.
Beliefs – your mindset toward the problem.

The good news: all four can be improved with the right management and methods.
In chapter 2, you explored the distinction between Research and Development more deeply. You learned that your role as a Research leader has two parts:
Ensure Research connects to product impact: "Successful" Research that doesn't affect the product is a failed project. This is the most important part in Product-led companies.
Ensure Research is done effectively: Even brilliant researchers benefit from structured approaches.

This two-part framework organized everything that followed.
How to Do Research Effectively
Part 2 gave you concrete methods for effective Research execution – tools that work in any research context (whether it's Product-led or not).
In chapter 3, I shared a personal story from a reverse engineering classroom. Students with sophisticated technical skills missed the obvious solution (checking the Help menu) because they lacked structured methodology. The story clearly showed that the problem usually isn't capability – it's approach. This illustrated why you need the methods that follow.
Chapter 4 introduced the Research Tree method – a living visual framework for systematically exploring solution paths. You learned:
How to map questions you need to answer and approaches for answering them.
A decision framework for choosing which approach to try first: fastest feedback, lowest cost, best coverage.
How using the tree helps avoid common failure modes: jumping on the first idea, tunnel vision, inefficient learning, answering questions you don't need to, and lost context.
The Research Tree helps you implement Schoenfeld's "control" component – monitoring and adjusting your approach systematically rather than randomly trying things. It is helpful for an individual researcher, and as a Research leader it also lets you guide your team effectively.
In chapter 5, you learned how to manage exploration without killing creativity. Since Research estimation is inherently difficult, time-boxing provides structure by setting time limits for specific research directions. After the allocated time, you stop to reconsider: What did you learn? Is this still the most promising path? This tool acknowledges uncertainty while preventing endless exploration, or diving too deep into rabbit holes that don't necessarily help your Product goals.
How to Ensure Product Impact
Part 3 focused on your most important responsibility: ensuring Research creates product value.
Chapter 6 showed you how to choose what directions to research – and more importantly, what not to pursue. You learned the distinction between:
Problem-driven research – starting from customer pain points (strongly preferred, lower risk).
Opportunity-driven research – starting from new technologies (higher risk, needs validation).
Before pursuing any Research initiative, you need to answer three questions:
Product impact: Will success create huge value?
Time to impact: How long until we see product results?
Resources: Do you and your team have the knowledge, capacity, and dependencies needed?
You saw how to run focused pre-research checks to answer these questions.
Chapter 7 introduced a powerful heuristic for ensuring product connection: start from the end and work backwards. Through the spiral game example, you saw how working backwards reveals systematic solutions that working forward obscures.
In the COBOL business rules case study, you saw a practical application:
Start by manually creating the desired output (before solving any technical challenges).
Validate that output with stakeholders.
Work backwards through dependencies, solving them in reverse order.
Validate that each step contributes to the goal before major investment.
Drawing backwards forces connection to product impact because you must start with the product goal.
Chapter 8 showed you how to address two limitations of the drawing backwards heuristic through continuous end-to-end iterations. Your manually-created "ideal output" might be infeasible to generate, and you haven't validated it on real data. End-to-end iterations solve both problems.
You learned five principles to run effective end-to-end iterations:
Outline the end-to-end process: Drawing backwards already gives you this chain.
Get to end-to-end by simplifying: Use shortcuts and manual steps to make the whole chain work.
Ship it as fast as you can: Real data teaches what theory can't.
Gradually replace steps: Prioritize based on learned necessity, learning potential, and effort.
Get frequent feedback: Fast cycles focused on learning.
Drawing backwards reveals what to build and in what order, while end-to-end iterations prove it works and build it incrementally.
Your Toolkit for Research Management
You now have a complete toolkit:
From Part 2 (works for any research):
Research Tree – maps solution space, chooses approaches systematically.
Time-boxing – manages exploration with structure.
From Part 3 (specifically for product impact):
Choosing frameworks – decides what deserves Research effort.
Drawing backwards – forces product connection from the start.
End-to-end iterations – validates feasibility and learns from real users.
These methods work together. Drawing backwards identifies your goal and the chain of steps. The Research Tree maps approaches for each step. Time-boxing prevents getting deep into a rabbit hole when you should reconsider based on what you've learned, while acknowledging the inability to provide exact time estimates on Research tasks. End-to-end iterations validate and build incrementally.
My Message To You
Research is fundamentally uncertain work. Applying traditional Development management practices fails because it assumes known solution paths, predictable timelines, and steady progress.
But Research doesn't have to be mystical or random. With the right frameworks, you can manage it systematically while maintaining focus on what matters: creating product value. You learned to ensure that Research is done effectively (part 2) and that it connects to product impact (part 3). You have concrete tools, real examples, and a clear framework for both responsibilities.
Now go turn your team's uncertain Research into systematic progress toward measurable product impact. I am confident that you can lead Research teams to success, and I would be happy to hear about your experiences applying these methods.
If you liked this book, please share it with more people.
Acknowledgements
I am extremely lucky to have such wonderful people supporting me in this journey.
Abbey Rennemeyer has been a wonderful editor. Abbey had edited my posts for freeCodeCamp over the past few years, as well as my previous book Gitting Things Done, so I knew she was the perfect fit for this book as well. Her insights both on the content and the writing style have greatly improved this book.
Quincy Larson founded the amazing community at freeCodeCamp. I thank him for starting this incredible community, his ongoing support, and for his friendship.
Estefania Cassingena Navone designed the cover of this book. I am grateful for her professional work and her patience with my perfectionism and requests.
Beta readers contributed their time and minds to read through an unfinished version of this book and improve it for all of you; they helped me get it to its current shape. Specifically, I would like to thank Jason S. Shapiro and Omer Gull for their insights.
Dr. David Ginat introduced me first to Alan Schoenfeld's problem-solving research during my time at Tel Aviv University. His teaching inspired me to apply these ideas in practical contexts, including Research management.
I was privileged to work with many brilliant researchers and engineering leaders over the years, too many to name here.
To readers of my previous book, Gitting Things Done, who were kind enough to provide feedback and support – you are awesome. Receiving your emails and comments made me feel like there is a real reason to keep writing.
If You Wish to Support This Book
If you would like to support this book, you are welcome to buy the e-book, paperback, or hardcover version, or buy me a coffee. Thank you!
Contact Me
If you liked something about this book, or felt that something was missing or needed improvement – I would love to hear from you. Please reach out at: gitting.things@gmail.com.
Thank you for learning and allowing me to be a part of your journey.
- Omer Rosenbaum
About the Author
Omer Rosenbaum is an established technologist and writer. He's the creator of the Brief YouTube channel, and the author of the books Gitting Things Done and Computer Networks (in Hebrew). He's also a cyber training expert and the founder of Checkpoint Security Academy.