TED Conversations

Simon Khuvis

Student, B.S. Engineering, Cooper Union for the Advancement of Science and Art

How can computer models help us build intuition?

The use of visual diagrams to explain and understand difficult concepts is as old as history itself, but in the twentieth century, for the first time, engineers and scientists were able to enlist computational tools to represent systems with greater clarity and detail. While computers, with the right peripherals, can present data to all the senses, in two or three dimensions and through time, perhaps their greatest pedagogical virtue is their interactivity. People learn by doing: young children internalize Newton's Laws long before their first formal physics class by manipulating the world around them. Computers offer the promise of similar interactivity for systems that are less readily accessible, or even entirely esoteric. In my Bioelectricity class, for example, we have been using computer simulations of the complicated Hodgkin–Huxley membrane equations to gain insight into how neurons respond to various experimental stimuli. How can computer models be used to learn, understand, and ultimately build intuition about systems in nature and science?
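
For readers who want to try this themselves, here is a minimal sketch of the kind of Hodgkin–Huxley simulation mentioned above, integrated with forward Euler in Python. The membrane parameters are the standard squid-axon values; the step stimulus, time step, and the two test amplitudes are illustrative choices, not anything taken from the class.

import numpy as np

# Standard Hodgkin–Huxley squid-axon parameters (mV, ms, uF/cm^2, mS/cm^2).
C_m, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3
E_Na, E_K, E_L = 50.0, -77.0, -54.387

# Gate rate functions (the removable singularities at exactly V = -40
# and V = -55 mV are ignored here for simplicity).
def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def beta_m(V):  return 4.0 * np.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def beta_n(V):  return 0.125 * np.exp(-(V + 65.0) / 80.0)

def simulate(I_amp, t_on=5.0, t_off=30.0, t_max=50.0, dt=0.01):
    """Forward-Euler integration of the HH membrane under a step current."""
    V = -65.0                          # resting potential
    m, h, n = 0.0529, 0.5961, 0.3177   # gates at their resting steady state
    trace = []
    for i in range(int(t_max / dt)):
        t = i * dt
        I_ext = I_amp if t_on <= t < t_off else 0.0   # uA/cm^2 step stimulus
        I_ion = (g_Na * m**3 * h * (V - E_Na)
                 + g_K * n**4 * (V - E_K)
                 + g_L * (V - E_L))
        V += dt * (I_ext - I_ion) / C_m
        m += dt * (alpha_m(V) * (1.0 - m) - beta_m(V) * m)
        h += dt * (alpha_h(V) * (1.0 - h) - beta_h(V) * h)
        n += dt * (alpha_n(V) * (1.0 - n) - beta_n(V) * n)
        trace.append(V)
    return np.array(trace)

# Play with the amplitude and the all-or-nothing threshold becomes tangible:
print(simulate(10.0).max())  # suprathreshold: spiking, peak well above 0 mV
print(simulate(1.0).max())   # subthreshold: only a small passive response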

  • Feb 16 2012: A lot of this is already done, really, but:
    Augmented reality seems like a promising approach; it's already being used a bit in interactive visualisations. Being able to overlay internal views of things onto the things themselves, or to work with models interactively through gestures, ocular control, or neural control, seems useful.
    Being able to collaboratively modify or tag models with metadata might help people build different visualisations based on the tags. For example, people might tag all the possible tumors they see in a high-resolution scan of a patient, and a doctor could check the spots that get multiple hits by having them stand out in some way, say as a heat map skinned over the model (a rough sketch of this vote-counting idea follows the comment). Once you've gathered enough of this kind of human data, you can also start training neural nets to identify features based on the identifications people have made.
    I like the idea of using data hiding in 3D models a lot. Being able to open up a thing and dig into it with more complexity, or pack it down to a simpler form, means you can tweak which aspects of the representation you see and in how much detail. This is also a pretty good way to simulate and blueprint stuff, since the highest-level blocks of your simulation can be dummy modules until their details are filled in (see the stub-module sketch after the comment).
    Since lots of different kinds of systems interact with each other, I'd suggest a kind of universal modular simulation system, where users could get on and collaboratively apply detail. It would be tagged or connected to physical scans or models where possible, with many different views a person could arrive at, and with the ability to hide the model part completely, make it as abstract as possible, or dig into specific details as deep as necessary. Modules might also be linked to media that explain how each component works. And if all the people using the model can communicate with each other, they can all share intuition.
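
A minimal sketch of the vote-counting idea from the comment, assuming tags arrive as (user, region) pairs; the function name, region IDs, and threshold are made up for illustration, not any real annotation API.

from collections import Counter

def build_heat_map(tags, min_hits=3):
    """tags: iterable of (user_id, region_id) pairs from many annotators."""
    # Count each annotator at most once per region so one user can't dominate.
    hits = Counter(region for user, region in set(tags))
    # Keep only regions with enough independent hits to be worth a look.
    return {region: n for region, n in hits.items() if n >= min_hits}

tags = [("alice", "r17"), ("bob", "r17"), ("carol", "r17"),
        ("alice", "r02"), ("bob", "r41")]
print(build_heat_map(tags))  # {'r17': 3} -- only the region with enough hits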
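
And a toy sketch of the stub-module idea: every block exposes the same step() interface, so a crude placeholder can stand in for a subsystem until someone replaces it with a more detailed composite. The class names and the single-number signal are assumptions made up for this example.

# A placeholder block: passes its input through with a fixed gain.
class StubModule:
    def __init__(self, gain=1.0):
        self.gain = gain
    def step(self, x):
        return self.gain * x

# A detailed block: chains submodules, which may themselves be stubs.
class CompositeModule:
    def __init__(self, parts):
        self.parts = parts
    def step(self, x):
        for part in self.parts:
            x = part.step(x)
        return x

# Start with everything stubbed out...
model = CompositeModule([StubModule(), StubModule(0.5)])
print(model.step(2.0))  # 1.0

# ...then "open up" one block and fill in more detail,
# without touching the rest of the simulation.
model.parts[0] = CompositeModule([StubModule(2.0), StubModule(1.5)])
print(model.step(2.0))  # 3.0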
