Dan Hume's Blog


Research into Multi-Touch Surface
November 18, 2010, 7:52 am
Filed under: Performance Video

Having decided that this idea is initially going to be an application for the iPod Touch, iPad and iPhone, I’ve been looking into the technology behind Apple’s impressive interface on these three beautifully engineered products.

Multi-touch

On touchscreen displays, multi-touch refers to the ability to simultaneously register two or more distinct touch positions. The term is often used to describe other, more limited implementations, like gesture-enhanced single-touch or dual-touch.

Multi-touch has been implemented in several different ways, depending on the size and type of interface. Both touch tables and touch walls project an image through acrylic or glass, and then back-light the image with LEDs. When a finger or an object touches the surface, causing the light to scatter, the reflection is caught by sensors or cameras that send the data to software, which dictates the response to the touch depending on the type of reflection measured. Touch surfaces can also be made pressure-sensitive by adding a pressure-sensitive coating that flexes differently depending on how firmly it is pressed, altering the reflection.

Handheld technologies use a panel that carries an electrical charge. When a finger touches the screen, the touch disrupts the panel’s electrical field. The disruption is registered and sent to the software, which then initiates a response to the gesture.
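To get a feel for the camera-based approach used in touch tables, here is a toy sketch in Python. It treats a camera frame as a small grid of brightness values (all numbers invented), thresholds it, and flood-fills each bright blob where a finger scatters light, returning the blob centroids as touch points. Real systems use proper computer-vision blob tracking; this is just the idea in miniature.

```python
# Toy sketch of camera-based touch detection, as in touch tables:
# the camera sees bright spots where fingers scatter the light.
# Brightness values (0-255) here are hypothetical.

def find_touches(frame, threshold=128):
    """Return (row, col) centroids of connected bright regions."""
    rows, cols = len(frame), len(frame[0])
    seen = set()
    touches = []
    for r in range(rows):
        for c in range(cols):
            if (r, c) in seen or frame[r][c] < threshold:
                continue
            # Flood-fill one bright blob and average its coordinates.
            stack, blob = [(r, c)], []
            seen.add((r, c))
            while stack:
                y, x = stack.pop()
                blob.append((y, x))
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < rows and 0 <= nx < cols
                            and (ny, nx) not in seen
                            and frame[ny][nx] >= threshold):
                        seen.add((ny, nx))
                        stack.append((ny, nx))
            cy = sum(p[0] for p in blob) / len(blob)
            cx = sum(p[1] for p in blob) / len(blob)
            touches.append((cy, cx))
    return touches

frame = [
    [0,   0,   0,   0,   0],
    [0, 200, 210,   0,   0],
    [0, 205, 220,   0,   0],
    [0,   0,   0,   0, 190],
    [0,   0,   0,   0,   0],
]
print(find_touches(frame))  # two bright blobs -> two centroids
```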

In the past few years, several companies have released products that use multi-touch. In an attempt to make the expensive technology more accessible, hobbyists have also published methods of constructing DIY touchscreens.

iPhone Multi-Touch Surface

Electronic devices can use lots of different methods to detect a person’s input on a touch-screen. Most of them use sensors and circuitry to monitor changes in a particular state. Many, including the iPhone, monitor changes in electrical current. Others monitor changes in the reflection of waves. These can be sound waves or beams of near-infrared light. A few systems use transducers to measure changes in vibration caused when your finger hits the screen’s surface, or cameras to monitor changes in light and shadow.

The basic idea is pretty simple — when you place your finger or a stylus on the screen, it changes the state that the device is monitoring. In screens that rely on sound or light waves, your finger physically blocks or reflects some of the waves. Capacitive touch-screens use a layer of capacitive material to hold an electrical charge; touching the screen changes the amount of charge at a specific point of contact. In resistive screens, the pressure from your finger causes conductive and resistive layers of circuitry to touch each other, changing the circuits’ resistance.
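The resistive case is easy to sketch: the touch point taps a voltage divider across the resistive layer, and an analog-to-digital converter reading proportional to position falls out. The ADC width and screen resolution below are assumptions picked for illustration, not specs of any particular device.

```python
# Hypothetical sketch of how a resistive screen turns a touch into a
# coordinate: the touch taps a voltage divider across the resistive
# layer, so the ADC sample is proportional to position along the axis.

ADC_MAX = 1023          # assumed 10-bit ADC
SCREEN_WIDTH_PX = 320   # assumed target resolution

def adc_to_x(adc_reading):
    """Map a raw ADC sample from the X-axis divider to a pixel column."""
    return round(adc_reading / ADC_MAX * (SCREEN_WIDTH_PX - 1))

print(adc_to_x(0))     # left edge
print(adc_to_x(1023))  # right edge
print(adc_to_x(512))   # near the middle
```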

Most of the time, these systems are good at detecting the location of exactly one touch. If you try to touch the screen in several places at once, the results can be erratic. Some screens simply disregard all touches after the first one. Others can detect simultaneous touches, but their software can’t calculate the location of each one accurately. There are several reasons for this, including:

  • Many systems detect changes along an axis or in a specific direction instead of at each point on the screen.
  • Some screens rely on system-wide averages to determine touch locations.
  • Some systems take measurements by first establishing a baseline. When you touch the screen, you create a new baseline. Adding another touch causes the system to take a measurement using the wrong baseline as a starting point.

The Apple iPhone is different — many of the elements of its multi-touch user interface require you to touch multiple points on the screen simultaneously. For example, you can zoom in to Web pages or pictures by placing your thumb and finger on the screen and spreading them apart. To zoom back out, you can pinch your thumb and finger together. The iPhone’s touch screen is able to respond to both touch points and their movements simultaneously. We’ll look at exactly how the iPhone does this in the next section.

A mutual capacitance touch-screen contains a grid of sensing lines and driving lines to determine where the user is touching

Multi-touch Systems

To allow people to use touch commands that require multiple fingers, the iPhone uses a new arrangement of existing technology. Its touch-sensitive screen includes a layer of capacitive material, just like many other touch-screens. However, the iPhone’s capacitors are arranged according to a coordinate system. Its circuitry can sense changes at each point along the grid. In other words, every point on the grid generates its own signal when touched and relays that signal to the iPhone’s processor. This allows the phone to determine the location and movement of simultaneous touches in multiple locations. Because of its reliance on this capacitive material, the iPhone works only if you touch it with your fingertip — it won’t work if you use a stylus or wear non-conductive gloves.
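Because every grid point reports its own signal, finding multiple touches reduces to scanning the grid for readings that deviate from the resting baseline. Here is a minimal sketch of that idea; the baseline, threshold, and capacitance numbers are all invented for illustration.

```python
# Sketch of the per-node grid idea: each row/column intersection has
# its own capacitance reading, so two fingers appear as two separate
# peaks instead of one averaged position. All values are illustrative.

BASELINE = 100   # assumed resting reading at every node
THRESHOLD = 30   # minimum delta from baseline to count as a touch

def scan_grid(readings):
    """Return every (row, col) whose reading exceeds baseline + threshold."""
    return [(r, c)
            for r, row in enumerate(readings)
            for c, value in enumerate(row)
            if value - BASELINE > THRESHOLD]

readings = [
    [100, 101,  99, 100],
    [100, 160, 100, 100],   # finger near node (1, 1)
    [100, 100, 100, 155],   # finger near node (2, 3)
    [ 99, 100, 101, 100],
]
print(scan_grid(readings))  # both touches located independently
```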

The iPhone’s screen detects touch through one of two methods: mutual capacitance or self capacitance. In mutual capacitance, the capacitive circuitry requires two distinct layers of material. One houses driving lines, which carry current, and the other houses sensing lines, which detect the current at nodes. Self capacitance uses one layer of individual electrodes connected to capacitance-sensing circuitry.

A self capacitance screen contains sensing circuits and electrodes to determine where a user is touching.

Both of these possible setups send touch data as electrical impulses. In the next section, we’ll take a look at exactly what happens.
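The practical difference between the two schemes shows up with diagonal touches. A design that senses only whole rows and whole columns sees two active rows and two active columns for two fingers, giving four candidate intersections — two of them "ghosts" — while per-node sensing reports exactly the real points. The sketch below is my own illustration of that contrast, not Apple’s implementation.

```python
# Toy contrast of the two sensing schemes. Row/column-only sensing
# cannot tell which row pairs with which column, so two diagonal
# touches yield four candidate points (two ghosts). Per-node sensing
# identifies each intersection individually.

def row_column_candidates(touches):
    """All row x column combinations a row/column-only sensor would see."""
    rows = sorted({r for r, _ in touches})
    cols = sorted({c for _, c in touches})
    return [(r, c) for r in rows for c in cols]

def per_node_report(touches):
    """Per-node sensing distinguishes each intersection directly."""
    return sorted(touches)

real = [(1, 1), (3, 4)]
print(row_column_candidates(real))  # four candidates, two are ghosts
print(per_node_report(real))        # just the real touches
```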

iPhone Processor

The iPhone’s processor and software are central to correctly interpreting input from the touch-screen. The capacitive material sends raw touch-location data to the iPhone’s processor. The processor uses software located in the iPhone’s memory to interpret the raw data as commands and gestures. Here’s what happens:

  1. Signals travel from the touch screen to the processor as electrical impulses.
  2. The processor uses software to analyze the data and determine the features of each touch, including the size, shape and location of the affected area on the screen. If necessary, the processor arranges touches with similar features into groups. If you move your finger, the processor calculates the difference between the starting point and ending point of your touch.
  3. The processor uses its gesture-interpretation software to determine which gesture you made. It combines your physical movement with information about which application you were using and what the application was doing when you touched the screen.
  4. The processor relays your instructions to the program in use. If necessary, it also sends commands to the iPhone’s screen and other hardware. If the raw data doesn’t match any applicable gestures or commands, the iPhone disregards it as an extraneous touch.
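The gesture-interpretation step can be sketched very simply for the pinch/spread case described earlier: compare the distance between two fingers at the start and end of the movement. The threshold and gesture names below are invented for the sketch; a real recognizer tracks the whole movement continuously, not just the endpoints.

```python
# Toy gesture interpreter: classify a two-finger movement by comparing
# finger spacing at the start and end. Thresholds and labels are
# illustrative, not taken from any real API.

import math

def distance(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def classify(start, end, tolerance=5.0):
    """start/end: two (x, y) finger positions each."""
    d0 = distance(*start)
    d1 = distance(*end)
    if d1 > d0 + tolerance:
        return "spread (zoom in)"
    if d1 < d0 - tolerance:
        return "pinch (zoom out)"
    return "no zoom"

start = [(100, 100), (120, 100)]   # fingers 20 px apart
end = [(60, 100), (180, 100)]      # fingers 120 px apart
print(classify(start, end))        # fingers moved apart -> spread
```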

All of these steps happen in an instant: you see changes on the screen almost immediately after your input. This process allows you to access and use all of the iPhone’s applications with your fingers.

Now obviously I’m not going to be able to produce a multi-touch surface to this standard, but I’ll be looking at making a basic construction using key components, which will hopefully work just as well.
