Focus on decisions, not tasks (technicalwriting.dev)
177 points by kaycebasques 18 hours ago | hide | past | favorite | 30 comments





I write about German bureaucracy, and I wholeheartedly agree with your approach.

Most of my guides start with: what this is, who needs to do this, why you need to do this. If you don’t confirm that people are on the right page doing the right thing for the right reasons, they can go really far in the wrong direction.

Most government websites don’t explain any of this. They just tell you what they want from you to complete the part of the task that concerns them. They don’t bother to treat the task as part of a bigger decision. They just assume that you are here because you know what you are doing.


This fits the way I like to use LLMs: I always ask them for options, then I decide myself which of those options makes the most sense.

Essentially I'm using them as weird magical documentation that can spit out (incomplete but still useful) available options to guide my decision making at any turn.


Would you be willing to give an example of this?

You should actually ask for 3 examples so you can select one.

I like to think of it as the apprentices working for famous artists like Leonardo. The master would draw the outline/sketch, and then the students would fill in the blanks under supervision. Sometimes, the master would steal ideas from the students.

Smells like reinforcement learning in real life. The master sets up the task environment, collects samples from the students, picks the best, and maybe even augments them. The students watch the master and learn... and the cycle continues.

And then the master becomes a grandmaster (unless entropy explosion occurs).


I've advocated something similar. Don't just describe the tool at a high level (people often seem to go into marketing mode) - tell the story of what problem it was designed to solve and the trade-offs you made along the way. That makes it much easier to place the tool and its available options/modes/etc. in context and quickly decide whether it's a good fit for you.

Thanks for sharing. A couple of thoughts.

It seems like it's a lot harder to measure whether your docs are helping people make good decisions than it is to measure whether they are helping people successfully accomplish a task. I think we optimize for task-based/procedural docs because the business needs us to prove our value, there is a real need for this type of documentation, and there are lots of ways to measure and report on it over short timelines. But answering the question "Did this docset help someone build the right thing in the right way?"... I mean, organizations struggle to answer that question about their own products, so abstracting it to measure the effectiveness of your docs seems super fuzzy.

Which is not to say you can't write docs that do this, just that it seems very hard to use numbers to prove that you have done so. I definitely think I could rank how well different docsets support users who need to make decisions, and I could offer up explanations to support my reasoning, but I don't know how to quantify that for the business.

I wonder how the structure of a docset that is designed to support decisions differs from that of a docset that supports tasks. I expect you'll have the same main categories (conceptual, reference, guides) but maybe a lot more conceptual docs, and more space dedicated to contextualizing the concepts. I would expect to see topics become more interdependent, more cross-references, etc.


Interesting that your first thought here is not, oh, how can I use this to improve the docs I am writing, but instead, how can I prove that this improves the docs I am writing. You seem to live in a tough environment.

This is what I generally mean by taking a "heuristic approach."

I feel that we need to have a "fuzzy logic" approach to our work.

However, that works best when the engineer is somewhat experienced.

If they are inexperienced (even if very skilled and intelligent), we need to be a lot more dictatorial.


Thinking in Bets has been one of the most useful books for how I approach software engineering. It’s not even about code, just how to make decisions effectively in limited-information environments.

Love that book. Such a powerful idea to phrase your predictions in terms of percentages rather than absolutes. Apparently the Super Bowl anecdote is controversial though? I.e. the conclusions to draw from that anecdote are very debatable.

I don’t remember the specific anecdotes too much, but the lessons make intuitive sense and feel useful.

The one that sticks in my mind most is that a good decision can have a bad outcome, and a good outcome doesn’t always mean the decision was good.


I have no idea what you mean by taking a "fuzzy logic" approach to work. Could you expand and explain that a bit please?

Well, “fuzzy logic” is kind of a dated term. I don’t think it has been used in software development for twenty years.

TL;DR: it basically means not having “hard and fast” boundaries, and instead having ranges of target values and “rules” for determining target states, as opposed to “milestones,” so targets are determined on a “one at a time” basis.


When we think of the future, we mostly think of a single deterministic point. I would rather think of the future as "possible states" than as just one point.

This helps me prepare for different scenarios and then build on top of whatever opportunity comes along.

I was reminded of it when I read "target states," so I thought I would share it.

I wrote about how I think about the future here: https://jjude.com/shape-the-future/


> "The key to strategy, little Vor," she explained kindly, "is not to choose a path to victory, but to choose so that all paths lead to a victory." —LMB

The term the ancients had for this was paying attention to the "weakest precondition".


"Work is simply whatever we must do to get from one decision to the next." (Venkatesh Rao, Tempo)

Over on r/technicalwriting I'm having a debate with someone about whether this is a general principle or not. To be clear, I have no idea how niche or widespread this problem is. My hunch is that it's a brilliant insight that applies to technical writing in many industries. For now it's just an idea that I think deserves a lot more thought and discussion.

Knowing how people use your system to make decisions is important. I think that knowledge is vital for maintaining, extending, or building a system.

But the article suggests a higher responsibility: you should document your user's decision-making. You should tell them the context, the choices they have to make, and the consequences of their decisions.

I've worked on a "decision support system" with that responsibility and it got really messy, really fast. Humans love to argue about consequences, even 100% absolutely known ones. They also despise automated emails bearing uncertainty, as well as docs demanding binary choices when many more choices are available in reality.

I would hope the book behind this article raises the concept of control. That is, to document a behavior, you need some guarantee (or enforcement) about that behavior so your documentation remains authoritative. IMO, the lack of authority/control is a common, gaping blind spot of writing initiatives like https://www.plainlanguage.gov, unfortunately.


Quoting myself from r/technicalwriting discussion on this post:

> Let me rephrase what I think is really important about Baker's idea. The dogma of technical writing education absolutely revolves around focusing on tasks. If we survey a lot of professional technical writers I will bet you that a majority of them believe that "helping users achieve tasks" is a primary goal of documentation, if not THE primary goal. This small quote from Baker is kinda radical (in the Latin sense of "going to the roots"), because it's suggesting that one of our fundamental assumptions is majorly lacking.

For me personally, Baker's idea is fascinating simply because it sets the bar a lot higher than the current status quo of what's expected of technical writing. A lot of docs just assume that it's "mission complete" once a task is documented, and Baker (to me) is suggesting that it's simply not enough. Tasks of course still need to be documented, but tasks are a subset of the information that goes into decisions.

I don't recall Baker's book discussing control in the way you mention. It's a new idea to me, thanks for sharing. One concrete example of control that comes to mind: if a lot of my docs rely on a page from another open source project, and that external page is not good (low quality), then it should probably be my responsibility to improve that doc. Many people might assume that docs external to their site are outside of their responsibility. But if you're really committed to supporting decisions then it doesn't really matter who is hosting the doc. Maybe there's a lot to learn from the ethos of being a good open source citizen in general.


I love this. Short, to the point, and insightful. I’m one of those ‘big picture’ types of thinkers. It’s really important to me to know not just the how behind something, but also the why and how it relates to the larger context. I encourage our developers to make liberal use of the Description field in Jira stories and tasks to provide an overview of why we are doing something and how it relates to the bigger picture. Some of them don’t like it, I guess, but some are really digging it. I’m happy to provide the big picture behind the project, and they are happy with the added independence that understanding the big picture gives them.

They say you should not judge a book by its cover, but I found it very hard not to [1]. After I found out the book was from 2013 I felt a slight sense of relief.

[1] https://xmlpress.net/publications/eppo/


What's even more ironic is that it's the most profound book about technical writing that I have yet found in my 12-year career.

It also inspired me to deep-dive into how the pages of my docs site relate to each other, which yielded some useful insights: https://technicalwriting.dev/data/intertwingularity.html

(Baker's book led me to Too Big To Know, which in turn led me to the concept of intertwingularity)


I'm afraid I don't follow: what's unusual about the printed cover of the book, and why is the date of publication relevant to your assessment?

It’s ugly, but ugly in a way that was more common 11 years ago.

The damaged, scribbled collage of papers that forms the background image of the cover was a pretty odd design choice IMO: https://xmlpress.net/wp-content/uploads/covers/EPPO-Cover-Fr...

It's surely related to the central thesis of the book (quoted below), but I think there could have been a more appealing way to get that idea across.

> What is needed today is the same rigor and discipline professional writers have long brought to making books, but not the same methodology. The book model does not work for the Web or for content consumed in the context of the Web.


Might this apply to marketing communications as well?

How profound /s

This is very much not part of the canon of technical writing dogma, so yes it is profound to me. Professional technical writers are trained from day one to assume that the job revolves around task completion.


