Dan Hume's Blog


TouchOSC iPhone
November 30, 2010, 5:08 pm
Filed under: Performance Video

I have decided that I’m going to use the iPhone as a MIDI device to control sound effects in Ableton. After looking through various ways of mapping my inexpensive multi-touch pad to Ableton, I felt I wasn’t really going to achieve my goal with the prototype. I’ve managed to get the tracking data of my finger movements on the touch pad’s surface from Community Core Vision into OSCulator. However, I’ve been thinking ahead about how well the touch pad would work as an interface, and I could see problems arising before long. I felt limited in how much I could control in Ableton with a blank interface: there isn’t really a clear structure for mapping the touch pad onto Ableton’s controls. Getting that right would have taken a lot of detailed work, which would have left me no time to make a visual piece for this project as well.

Looking through all the tutorials on how to get OSCulator to send MIDI data through to Ableton, pretty much every example showed either the iPhone or the iPad as the MIDI device controlling the software. There are loads of examples on YouTube of people using their iPads or iPhones as MIDI devices. Here’s one of someone using their iPhone to control some effects in Ableton on a looped track.

I started thinking maybe I should use my iPhone for this project. I mean, why not? The TouchOSC application is out there for people to create their own MIDI devices on their iPad, iPod Touch or iPhone. I feel I’ll get a better result using TouchOSC, and it’s also very flexible: if I need to make any amendments to the interface, I can do that easily.

Hopefully, once I’ve got the application up and running with Ableton, I should then be able to map another piece of software as well, so it works in conjunction with the visuals.

Installing TouchOSC on iPhone

TouchOSC is a universal iPhone / iPod Touch / iPad application that lets you send and receive Open Sound Control messages over a Wi-Fi network using the UDP protocol. In that sense it’s similar to Community Core Vision, which also sends OSC data, but TouchOSC is a dedicated controller application.

The application allows you to remotely control, and receive feedback from, software and hardware that implements the OSC protocol, such as Apple Logic Pro/Express, Renoise, Pure Data, Max/MSP/Jitter, Max for Live, OSCulator, VDMX, Resolume Avenue 3, Modul8, Plogue Bidule, Reaktor, Quartz Composer, Vixid VJX16-4, SuperCollider, FAW Circle, vvvv, Derivative TouchDesigner, Isadora and others.

The interface provides a number of different touch controls to send/receive messages:

  • Faders
  • Rotary controls
  • Push buttons
  • Toggle buttons
  • XY pads
  • Multi-faders
  • Multi-toggles
  • LEDs
  • Labels (New!)

It supports full multi-touch operation: five controls can be used at the same time. Additionally, the program can send accelerometer data. The application comes with five default layouts, organised in multiple pages, but custom layouts can be constructed using the TouchOSC Editor application.
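
To get a feel for what TouchOSC is actually sending, here’s a minimal Python sketch of an OSC message going out over UDP. It uses the python-osc package; the address /1/fader1, the target IP and port 8000 are just example values, since the real ones depend on the layout and on how the receiving machine is set up.

    # Minimal sketch of sending a TouchOSC-style OSC message over UDP.
    # Requires the python-osc package; address, IP and port are examples only.
    from pythonosc.udp_client import SimpleUDPClient

    client = SimpleUDPClient("192.168.1.10", 8000)  # machine running OSCulator/Ableton

    # Faders in TouchOSC send a single float, typically in the 0.0-1.0 range
    client.send_message("/1/fader1", 0.75)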

TouchOSC Software Editor/Layout Design

The TouchOSC software editor allows you to design a custom layout for the interface that will control the audio software.

Above is a screenshot of the program. You can choose to design a layout for the iPad, iPhone or iPod Touch from the layout list; I’m going to be designing a layout for the iPhone. Designing a layout is relatively simple as long as you know exactly what you want to use. I’m new to all this, so I’m going to design a simple layout that reflects the layout in Ableton.

In Ableton I have 5 audio tracks, so I’m going to use 5 fader controls on my layout page.

Above is a screenshot of the layout I’ve made in TouchOSC. I’ve added some extra controls, which I’ll use for controlling effects on the track I’ve made in Ableton. The next stage will be to sync this design to the iPhone and map it to Ableton using OSCulator.
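
Before I do that, here’s a rough Python sketch of the kind of translation OSCulator will be doing for me: it listens for the OSC fader messages coming from the phone and turns each one into a MIDI control change that Ableton can map. This is just an illustration using the python-osc and mido packages, not OSCulator’s actual configuration, and the addresses, port and CC numbers are example values.

    # Rough sketch of the OSC-to-MIDI translation step (illustrative only).
    # Requires python-osc and mido; addresses, port and CC numbers are examples.
    import mido
    from pythonosc.dispatcher import Dispatcher
    from pythonosc.osc_server import BlockingOSCUDPServer

    midi_out = mido.open_output()  # first available MIDI output port

    def fader_to_cc(address, value):
        # e.g. "/1/fader3" with value 0.5 becomes CC 3 with value 63
        cc_number = int(address.rsplit("fader", 1)[-1])
        midi_out.send(mido.Message("control_change",
                                   control=cc_number,
                                   value=int(value * 127)))

    dispatcher = Dispatcher()
    for i in range(1, 6):                      # the five fader controls
        dispatcher.map(f"/1/fader{i}", fader_to_cc)

    BlockingOSCUDPServer(("0.0.0.0", 8000), dispatcher).serve_forever()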



Production (Multi-Touch Pad)
November 24, 2010, 10:26 am
Filed under: Performance Video

I’ve finally managed to get my multi-touch pad constructed and working after a struggle with software and webcam issues. For the last few days I was struggling to find a webcam that would be compatible with a Mac. Phil said that a piece of software called Macam would allow most webcams made specifically for Windows PCs to work on a Mac. I downloaded Macam easily enough, but once I connected the webcam, the software wouldn’t pick up the device. After looking for troubleshooting tips I was convinced it was a webcam issue, but then I came across another piece of software called iMage USB Webcam.

Construction Process

Firstly, I used an old cardboard box as the main body. It doesn’t really matter what size it is, as long as it has a reasonable width and height for the camera to track movement.

I cut away the four flaps at the top so that nothing is loose or gets in the way.

Using the sharp end of a pair of scissors, I cut out a small hole to feed the webcam wire through.

I then placed the webcam in the centre of the box so it can track hand movement effectively.

For the surface to touch onto I used a picture frame.

Using Sellotape, I taped a plain sheet of paper to the glass from the picture frame. This is going to act as the multi-touch surface.

Tracking Test

This is a screenshot of the software, Community Core Vision, where the tracking data is collected. Fortunately, after I had installed the camera on the computer, the software was working and tracking the movements my hands made on the surface of the glass.



Making a Multi-touch Pad
November 21, 2010, 6:31 pm
Filed under: Performance Video

I’ve decided that, instead of making an actual application for the iPhone, iPad and iPod Touch, I’m going to make my very own multi-touch pad. I’ll be using a piece of software called Community Core Vision to make this work.

Community Core Vision, CCV for short (aka tbeta), is an open source, cross-platform solution for computer vision and machine sensing. It takes a video input stream and outputs tracking data (e.g. coordinates and blob size) and events (e.g. finger down, moved and released) that are used in building multi-touch applications. CCV can interface with various web cameras and video devices, connect to various TUIO/OSC/XML-enabled applications, and supports many multi-touch lighting techniques, including FTIR, DI, DSI and LLP, with expansion planned for future vision applications (custom modules/filters).
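
CCV’s tracking output is what I’ll eventually be feeding into other software, so it’s worth seeing what that data looks like on the receiving end. Below is a small Python sketch that listens for CCV’s TUIO/OSC cursor messages; it assumes the python-osc package, TUIO’s usual default port of 3333, and the /tuio/2Dcur profile whose “set” messages carry a session id plus normalised x/y coordinates, so treat those details as assumptions rather than CCV’s exact output.

    # Sketch of receiving CCV's TUIO/OSC tracking data (assumed defaults).
    from pythonosc.dispatcher import Dispatcher
    from pythonosc.osc_server import BlockingOSCUDPServer

    def on_cursor(address, *args):
        # Only "set" messages describe a live blob/finger position
        if args and args[0] == "set":
            session_id, x, y = args[1], args[2], args[3]
            print(f"blob {session_id}: x={x:.3f} y={y:.3f}")

    dispatcher = Dispatcher()
    dispatcher.map("/tuio/2Dcur", on_cursor)

    BlockingOSCUDPServer(("0.0.0.0", 3333), dispatcher).serve_forever()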

Being a student, money is often an issue, so my multi-touch pad will be a very inexpensive and basic construction; but, if done correctly, it will work perfectly for what I need it to do. Where do I begin… well, Phil found me this tutorial, which will help construct a multi-touch pad in just a few minutes.

Items I’ll need:

  • Cardboard box – main body to install web-cam into.
  • Picture Frame – This will act as the surface for the interface.
  • Webcam – This is the device that will track movement made on the interface.
  • Plain Paper – This goes underneath the glass, so the camera will only pick up touch movement on the interface.


Cinema 4D
November 19, 2010, 8:58 am
Filed under: 3D, Cinema 4D

I’ve just been trying out some other 3D software: Cinema 4D. I actually prefer it to Maya, as it seems a bit more logical to use. Having used Maya previously, I’ve learnt quite a bit of the terminology, which has helped me get to grips with C4D. I’ve now got a better understanding of 3D after watching various C4D tutorials. Below are some basic renders, just to get me going.

 

Some basic animation and Lighting.

Shatter effect and using HDRI lighting.

Above is a screen capture of Cinema 4D. As you can see it’s got quite a similar interface to Maya.



Research into Multi-Touch Surface
November 18, 2010, 7:52 am
Filed under: Performance Video

Having decided that this idea is initially going to be an application for the iPod Touch, iPad and iPhone, I’ve decided to have a look into the technology behind Apple’s impressive interface on these three beautifully engineered products.

Multi-touch

On touchscreen displays, multi-touch refers to the ability to simultaneously register three or more distinct positions of input touches. It is often used to describe other, more limited implementations, like Gesture-Enhanced Single-Touch or Dual-Touch.

Multi-touch has been implemented in several different ways, depending on the size and type of interface. Both touchtables and touch walls project an image through acrylic or glass, and then back-light the image with LEDs. When a finger or an object touches the surface, causing the light to scatter, the reflection is caught with sensors or cameras that send the data to software which dictates response to the touch, depending on the type of reflection measured. Touch surfaces can also be made pressure-sensitive by the addition of a pressure-sensitive coating that flexes differently depending on how firmly it is pressed, altering the reflection.

Handheld technologies use a panel that carries an electrical charge. When a finger touches the screen, the touch disrupts the panel’s electrical field. The disruption is registered and sent to the software, which then initiates a response to the gesture.

In the past few years, several companies have released products that use multitouch. In an attempt to make the expensive technology more accessible, hobbyists have also published methods of constructing DIY touchscreens.

iPhone Multi-Touch Surface

Electronic devices can use lots of different methods to detect a person’s input on a touch-screen. Most of them use sensors and circuitry to monitor changes in a particular state. Many, including the iPhone, monitor changes in electrical current. Others monitor changes in the reflection of waves. These can be sound waves or beams of near-infrared light. A few systems use transducers to measure changes in vibration caused when your finger hits the screen’s surface or cameras to monitor changes in light and shadow.

The basic idea is pretty simple — when you place your finger or a stylus on the screen, it changes the state that the device is monitoring. In screens that rely on sound or light waves, your finger physically blocks or reflects some of the waves. Capacitive touch-screens use a layer of capacitive material to hold an electrical charge; touching the screen changes the amount of charge at a specific point of contact. In resistive screens, the pressure from your finger causes conductive and resistive layers of circuitry to touch each other, changing the circuits’ resistance.

Most of the time, these systems are good at detecting the location of exactly one touch. If you try to touch the screen in several places at once, the results can be erratic. Some screens simply disregard all touches after the first one. Others can detect simultaneous touches, but their software can’t calculate the location of each one accurately. There are several reasons for this, including:

  • Many systems detect changes along an axis or in a specific direction instead of at each point on the screen.
  • Some screens rely on system-wide averages to determine touch locations.
  • Some systems take measurements by first establishing a baseline. When you touch the screen, you create a new baseline. Adding another touch causes the system to take a measurement using the wrong baseline as a starting point.

The Apple iPhone is different — many of the elements of its multi-touch user interface require you to touch multiple points on the screen simultaneously. For example, you can zoom in to Web pages or pictures by placing your thumb and finger on the screen and spreading them apart. To zoom back out, you can pinch your thumb and finger together. The iPhone’s touch screen is able to respond to both touch points and their movements simultaneously. We’ll look at exactly how the iPhone does this in the next section.

A mutual capacitance touch-screen contains a grid of sensing lines and driving lines to determine where the user is touching

Multi-touch Systems

To allow people to use touch commands that require multiple fingers, the iPhone uses a new arrangement of existing technology. Its touch-sensitive screen includes a layer of capacitive material, just like many other touch-screens. However, the iPhone’s capacitors are arranged according to a coordinate system. Its circuitry can sense changes at each point along the grid. In other words, every point on the grid generates its own signal when touched and relays that signal to the iPhone’s processor. This allows the phone to determine the location and movement of simultaneous touches in multiple locations. Because of its reliance on this capacitive material, the iPhone works only if you touch it with your fingertip — it won’t work if you use a stylus or wear non-conductive gloves.
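
To make that grid idea concrete, here’s a toy Python sketch (my own illustration, not Apple’s algorithm) of how per-node readings can yield several simultaneous touch locations: any node whose change exceeds a threshold and is a local maximum counts as a distinct touch.

    # Toy illustration of multi-touch detection on a capacitance grid.
    def find_touches(grid, threshold=0.5):
        touches = []
        rows, cols = len(grid), len(grid[0])
        for r in range(rows):
            for c in range(cols):
                value = grid[r][c]
                if value < threshold:
                    continue
                neighbours = [grid[rr][cc]
                              for rr in range(max(0, r - 1), min(rows, r + 2))
                              for cc in range(max(0, c - 1), min(cols, c + 2))
                              if (rr, cc) != (r, c)]
                if all(value >= n for n in neighbours):
                    touches.append((r, c))   # grid coordinates of one touch
        return touches

    # Two fingers on a 4x4 sensor grid produce two separate peaks:
    readings = [[0.0, 0.1, 0.0, 0.0],
                [0.1, 0.9, 0.1, 0.0],
                [0.0, 0.1, 0.0, 0.1],
                [0.0, 0.0, 0.1, 0.8]]
    print(find_touches(readings))  # -> [(1, 1), (3, 3)]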

The iPhone’s screen detects touch through one of two methods: mutual capacitance or self capacitance. In mutual capacitance, the capacitive circuitry requires two distinct layers of material. One houses driving lines, which carry current, and the other houses sensing lines, which detect the current at nodes. Self capacitance uses one layer of individual electrodes connected with capacitance-sensing circuitry.

A self capacitance screen contains sensing circuits and electrodes to determine where a user is touching.

Both of these possible setups send touch data as electrical impulses. In the next section, we’ll take a look at exactly what happens.

iPhone Processor

The iPhone’s processor and software are central to correctly interpreting input from the touch-screen. The capacitive material sends raw touch-location data to the iPhone’s processor. The processor uses software located in the iPhone’s memory to interpret the raw data as commands and gestures. Here’s what happens:

  1. Signals travel from the touch screen to the processor as electrical impulses.
  2. The processor uses software to analyze the data and determine the features of each touch. This includes size, shape and location of the affected area on the screen. If necessary, the processor arranges touches with similar features into groups. If you move your finger, the processor calculates the difference between the starting point and ending point of your touch.
  3. The processor uses its gesture-interpretation software to determine which gesture you made. It combines your physical movement with information about which application you were using and what the application was doing when you touched the screen.
  4. The processor relays your instructions to the program in use. If necessary, it also sends commands to the iPhone’s screen and other hardware. If the raw data doesn’t match any applicable gestures or commands, the iPhone disregards it as an extraneous touch.

All these steps happen in an instant — you see changes in the screen based on your input almost instantly. This process allows you to access and use all of the iPhone’s applications with your fingers.
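
As a simple illustration of that interpretation step (again a toy example of my own, not Apple’s gesture code), comparing the distance between two tracked touches at the start and end of a movement is enough to tell a pinch from a spread:

    # Toy two-finger gesture classifier: pinch vs spread vs drag.
    import math

    def classify_two_finger_gesture(start_points, end_points, tolerance=10.0):
        (ax, ay), (bx, by) = start_points
        (cx, cy), (dx, dy) = end_points
        start_dist = math.hypot(bx - ax, by - ay)
        end_dist = math.hypot(dx - cx, dy - cy)
        if end_dist > start_dist + tolerance:
            return "spread (zoom in)"
        if end_dist < start_dist - tolerance:
            return "pinch (zoom out)"
        return "two-finger drag"

    # Thumb and finger move apart, so the screen should zoom in:
    print(classify_two_finger_gesture([(100, 200), (160, 200)],
                                      [(60, 200), (220, 200)]))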

Now obviously I’m not going to be able to produce a multi-touch surface to this standard, but I’ll be looking at making a basic construction using key components, which will hopefully work just as well.



MIDI
November 18, 2010, 2:26 am
Filed under: Performance Video

MIDI – Musical Instrument Digital Interface

MIDI, short for Musical Instrument Digital Interface, is the standard electronic language ‘spoken’ between electronic instruments and the computerized devices which control them during performances. Developed in the early 1980s, MIDI technology allows a keyboardist to kick off a drum synthesizer with one key or a computer to store a sequence of composed notes as a MIDI file, for example. The keyboard, drum synthesizer and computer all recognize the same set of binary code instructions.

Before the development of the MIDI system, professional keyboardists would often need to set up towering banks of synthesizers, pianos, organs and other electronics in order to perform live. They would go from instrument to instrument in order to produce the necessary sounds. With the introduction of MIDI, these same musicians could connect all of the peripheral keyboards together with 5-pin DIN cables and control them all through one master keyboard. A synthesizer set for background strings, for example, could ‘teach’ another keyboard how to generate that sound through a MIDI connection.

MIDI technology is not restricted to musical synthesizers, however. It is not unusual to find other stage equipment, such as lighting banks, under the control of MIDI-compatible computers. Each light may be assigned a specific MIDI channel and turned on or off according to a master program. MIDI programs may also control effects pedals for guitarists or pre-recorded sequences to supplement the sound onstage.

MIDI files do not actually record the sound of the keyboard instrument, but rather record instructions on how to recreate that sound elsewhere. For instance, a keyboardist might play Beethoven’s Moonlight Sonata on a MIDI-compatible synthesizer connected to a computer. The MIDI file would change each note into a series of 1s and 0s, similar to binary code language. The MIDI coding incorporates other aspects of the performance besides notes, including dynamics, note-bending and changes in key pressure.
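
To make the “1s and 0s” a bit more concrete, here’s a tiny Python sketch of a single MIDI event: a Note On message is just three bytes — a status byte, a note number and a velocity. The values below are illustrative, and a MIDI file stores these events alongside timing data.

    # A single MIDI Note On/Off event expressed as raw bytes (illustrative values).
    note_on = bytes([0x90, 60, 100])   # Note On, channel 1: middle C at velocity 100
    note_off = bytes([0x80, 60, 0])    # matching Note Off

    for byte in note_on:
        print(f"{byte:08b}")           # the binary form a MIDI file or stream carries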

If someone wanted to play that recorded version of the Moonlight Sonata on a different computer, the MIDI file would play exactly what the original keyboardist played on the original instrument. The sound reproduction qualities of the computer itself may present a problem. The computer’s sound generation card might render a very weak-sounding version of the MIDI file, with some unpleasant electronic noises. Modern computers with advanced sound cards have eliminated many of these reproduction problems, but many people still associate MIDI files with a less-than-spectacular performance.

Because MIDI files are relatively small and easy to produce, they have become very popular for use on websites, video game programs and MIDI-compatible cellular phones. The ring tones on many cellphones are actually MIDI files which reproduce the original tunes using the phone’s own sound card.



Research & Chosen Idea
November 17, 2010, 8:48 pm
Filed under: Performance Video

After talking with Phil today, I’ve decided to go with my second idea, which is to create a distorted still image made up of different layers of photographs and objects. These layers of imagery will react to the sounds produced from a multi-touch surface.

Initially, the main incentive behind this idea was to create an application for the iPod Touch, iPad and iPhone that would allow users to create music and manipulate visuals at the same time. Due to the short time scale for completing this unit, I’ll be producing a basic set-up of a multi-touch interface, which will stand in for the three Apple devices, as a way to demonstrate the idea of the application and how it functions.

Research

Just been looking at some examples of music/sound applications for the iPod Touch, iPad and iPhone. There are lots of applications for the three devices, which can instantly transform them into musical instruments. I want to try and combine a visual experience with creating sound. I’ve looked at three applications developed by ambient pioneer Brian Eno and musician / software designer Peter Chilvers.

Bloom

Part instrument, part composition and part artwork, Bloom’s innovative controls allow anyone to create elaborate patterns and unique melodies by simply tapping the screen. A generative music player takes over when Bloom is left idle, creating an infinite selection of compositions and their accompanying visualisations.

Trope

Darker in tone, Trope immerses users in endlessly evolving soundscapes created by tracing abstract shapes onto the screen, varying the tone with each movement.

Air

Air is a generative audio-visual work that assembles vocal and piano samples into a beautiful, still and ever changing composition, which is always familiar, but never the same.

Air features four ‘Conduct’ modes, which let the user control the composition by tapping different areas on the display, and three ‘Listen’ modes, which provide a choice of arrangement. For those fortunate enough to have access to multiple iPhones and speakers, an option has been provided to spread the composition over several players.

You can make a really nice visual piece with these applications, but I think they’re quite limited, and the results can only be displayed on the device’s own screen. My idea is a development of these applications, in which the music/sounds you create manipulate the imagery you create, and the visuals can be projected or displayed on large screens.



