TED Conversations

Jason Pontin

Editor in Chief/Publisher, MIT's Technology Review


This conversation is closed.

"Why Can't We Solve Big Problems?"

I'll be giving a TED U Talk in Long Beach at the end of the month. I'll be asking "Why Can't We Solve Big Problems?" I think that blithe optimism about technology's powers has evaporated as big problems that people had imagined technology would solve, such as hunger, poverty, malaria, climate change, cancer, and the diseases of old age, have come to seem intractably hard.

I'd love to know what the TED Community thinks our difficulties are, or whether the idea is true at all.

Here's a URL to the story I wrote in MIT Technology Review on the subject: http://www.technologyreview.com/featuredstory/429690/why-we-cant-solve-big-problems/


Showing single comment thread. View the full conversation.

    Feb 15 2013: Jason,

    I suspect one reason we can't solve big problems is our habit of focusing more on finite ends than on infinite means, including the dynamic variables that inform, affect, and obstruct solutions.

    We generally commit to linear, static, step-wise (if complicated) solutions to systemic, amorphous, and increasingly complex or 'elegant' problems. We rarely account for all the factors, including the unknown ones. It's not that we don't realize big problems need big-picture answers; rather, we neglect much of their evolving context.

    In part because we're constrained by resources like time and money, and by various issues that can distract, over-extend, or redirect our efforts, we tend to fall into the trap of either over-simplifying or over-complicating our solutions. We tolerate little that isn't clear, expected, desired, or knowable, while so little of what we face neatly submits to any of these ideals.

    Something of a 'best-laid plans' effect goes on. We assess and analyze the problem before predicting and/or proposing outcomes, planning for them, and finally applying our solutions. While we might stagger or stage this process to accommodate some variables, in general we avoid deviating once it's underway.

    That wouldn't be all bad if we built the process to accommodate what I call "acts of God" that crop up uninvited and unanticipated, sometimes due to gaps in our analyses, planning, or flexibility, other times due to serendipity, dumb luck, whatever. These might be negative or constructive agents that don't or can't easily fit into the plan. They are, of late, aptly understood as 'disruptions.'

    Rather than yielding as disruptions emerge, embracing them as fodder for iteration and amendment of a dynamic, even dialogic, process, we tend to respond by ignoring them or scrapping the effort.

    In exceptional cases, we make the most of a disruption and/or minimize its impacts. Accepting and addressing the asymmetry with intentionally elastic processes can yield more robust outcomes.

