TED Conversations

This conversation is closed.

We want new user interfaces and natural gestures, but we are as much slaves to the keyboard and mouse as our computers are.

We want instant freedom from cords and wires and we want it now!!! But are we ready to be free? The TED talks about SixthSense technology brought to mind Tom Cruise in the movie "Minority Report" as well as the lab scenes in James Cameron's movie "Avatar." Both the movies and the TED talks bolster the concept of interacting with computers and data without a mouse, a traditional keyboard or even a traditional screen.

With the popularity of modern tablets like the iPad, it would seem the computer interface has finally shed the traditional keyboard and mouse. We seem to be learning to do without the tactile feel and the audible "tap" or "click" of depressing a springy key on a keyboard. We have also exchanged the mouse's point-and-click for the tap, pinch and zoom of multi-touch screens.

However, what happens when we take away the sense of touch and the sensation of feedback, and put nothing between the user and their data? You would think it would be pure... natural... the way we have always wanted to use our computers. But in fact, it's far from natural in the beginning. Think back to the first time you tried to use a trackball mouse or a touchscreen keyboard. Maybe even think of the first time you saw your parents or grandparents use one of these things. Remember the first time you saw someone flounder with the Kinect for Xbox, or the awkward way they held the Nintendo Wii remote? Moreover, how many of you don't use Siri or Google Voice or ANY voice commands because "it's just not there yet"?

I believe that the only thing between us and our data is US. Technologies like SixthSense, Google Glass and Kinect are slow to emerge not because they need more work, but because WE need to work to break free of what binds us to the old ways. Isn't the computer itself just an interface to the information we want? What will it take to free our minds AND our data from the keyboard and mouse? Change is coming. Practice now.

  • Feb 24 2014: My question goes along the same lines: why are we still lugging around those bulky computers, software and storage devices? Why not just work from a communication device, connected via something like Bluetooth, with access to a rented cloud computer, software and storage? That way we could concentrate on improving the I/O devices and save an enormous amount of cost and energy, and it would be so much more convenient and useful. It would put an unlimited amount of computing power at our beck and call, anywhere, anytime, for everyone. The ultimate equalizer and freedom. And yes, the cops would know where we are at all times, and we would also know where they are at all times.
    • Feb 24 2014: A fellow evangelist! Preach, brother!!!

      Thanks for the reply!
  • Feb 25 2014: The keyboard will never go away; we need it to type in programs and commands - I still type most of my commands. And what we store is data, not information (the terms are often misused) - stored data is structured, semi-structured, and unstructured. About 90% of it is unstructured, and we need a program to create a structure that allows us to use the data (i.e., convert it to information). A good example is a search engine.
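
    For instance, the heart of a search engine can be sketched in a few lines. This is a toy Python example, purely illustrative (the documents are made up): it builds an inverted index, which is one simple way a program imposes structure on unstructured text so it becomes usable information.

        # Impose structure on unstructured text: build an inverted index
        # mapping each word to the set of documents that contain it.
        from collections import defaultdict

        docs = {
            1: "the keyboard will never go away",
            2: "speech recognition is not there yet",
            3: "the keyboard and mouse bind us to old ways",
        }

        index = defaultdict(set)
        for doc_id, text in docs.items():
            for word in text.split():
                index[word].add(doc_id)

        # Now a query is a structured lookup instead of a scan of raw text.
        print(sorted(index["keyboard"]))  # -> [1, 3]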

    I have been tracking the work of the MIT Media Lab since it was created by Negroponte. I agree we need a more intuitive user interface. I am not sure what it is - it almost has to predict what we need (note I said need, not want; maybe want later). I have been using Dragon for years, but speech is not there yet. It is great for writing notes, papers, etc., but command mode is not quite there.
    • Feb 25 2014: Excellent clarification, Wayne. However, I was not intending to say that we don't need programs and should be able to just "jack into" raw data without them. Until someone designs a system that can in fact add structure to unstructured data and present it as information on its own, there will always be a job for programmers and a need for software. But like I said, that topic (although intriguing) is not the purpose of my post.

      My post, at its core, is about how we as human beings interact with computing systems to consume and manipulate information (I have you to thank for that clarification :-)). I also thank you for a new thought... Is it possible that freedom from traditional input methods will be found first only in the consumption or manipulation of information? Since data entry requires far more accuracy than speech recognition provides today, and physically dragging, dropping and swiping to compile code would be exhausting, maybe it wouldn't work for those functions. However, anyone who does most of their work from a command line interface (i.e., any of us working in the IT infrastructure field) is not exactly the target for "new" interfaces anyway, now are they?

      Honestly, I think the fact that you have "been using Dragon for years" puts you in the camp of believers. Moreover, when it does get "there," you will be ahead of the many people who have shied away completely. I truly believe that we are closer to freedom from the traditional mouse and keyboard than most people think. And by the way, I don't think any of us knows exactly what the new user interface will be, but that's why we're talking about it. Check out Steven Johnson's talk, "Where good ideas come from."

      Thanks again for the input! It really helps me think this through!
      • Feb 25 2014: NP. You might be interested in the early work using VR. I always felt a better interface was one that closely related to reality. For example, tablets and touch screens allow you to relate directly to the image on the screen: lift your finger or the pointing device and the focus moves to the new touch point. I spent years building tablets and touch screens - they made pattern recognition much easier than with a mouse.
        • Feb 28 2014: You bring up a good point, Wayne. I think there is a need for systems to have the intelligence to interpret the true meaning of a command or question. The fact that a tablet allows you to interact directly with objects resolves a good bit of ambiguity.
  • Feb 24 2014: I believe that the issue is not the interface. We can input large amounts of data, but without the proper formatting that data can't be truly understood.

    What we need is an operating system that can truly input and process information. Systems like Siri are a start, but they can't interpret meaning or resolve ambiguous questions.
    • Feb 24 2014: Hello Tim!

      I think that you make a good point. Something definitely has to input and process the data. I'm wondering...how much of that operating system you speak of has to be visible to the end user?

      Imagine, for instance, that Siri has a structure similar to a command line. However, instead of typing a string of commands to search your device for audio files with the value "Billy Joel" in the artist field and "Piano Man" in the title field, then launch the music player and play the file found, you just say "Play Piano Man" and it happens. A similar process happens when I ask Siri what year the song was released: without me seeing it, a search is run and a response is generated and read aloud by Siri.
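
      To make that concrete, here is the hidden "command line" I am picturing, as a toy Python sketch (purely illustrative; the library contents and helper names are made up, and this is certainly not how Siri actually works):

          # Toy voice-command pipeline: recognized speech -> structured query -> action.
          LIBRARY = [
              {"artist": "Billy Joel", "title": "Piano Man", "year": 1973},
              {"artist": "Billy Joel", "title": "Uptown Girl", "year": 1983},
          ]

          def parse(utterance):
              """Map recognized speech to a (verb, argument) pair."""
              words = utterance.split()
              if words and words[0].lower() == "play":
                  return ("play", " ".join(words[1:]))
              return ("unknown", utterance)

          def execute(verb, arg):
              """Run the structured command the user never has to see."""
              if verb == "play":
                  for track in LIBRARY:
                      if track["title"].lower() == arg.lower():
                          return "Launching player: %s - %s" % (track["artist"], track["title"])
                  return "No track named %r found." % arg
              return "Sorry, I don't understand %r yet." % arg

          print(execute(*parse("Play Piano Man")))
          # -> Launching player: Billy Joel - Piano Man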

      I think Siri is a fine example of an interface precisely because Siri isn't actually doing any of the things I mentioned. There is voice recognition software taking my input and converting it into executable commands. Therefore, the missing link in your scenario is in fact just a more robust algorithm that analyzes how I say things along with what I say, right? Is that what you mean by interpreting "meaning"? I'm confused, however, as to why you would want to ask a system an ambiguous question. Can you elaborate?