Intro
I worked with the Teaching Assistant Intelligence (TAI) team, a cross-functional research and product group at UC Berkeley’s Vive Center focused on creating AI-powered tools to transform how students learn in large STEM classes. On this project, I collaborated with two other UI/UX designers, two product managers, and a team of five to ten engineers. I was responsible for redefining the user flow for the upcoming version and refining key features such as the notes, knowledge base, and file/video chat functions.
Trailer Video
MVP
This is what longboard dancing looks like. Dancers usually post-edit music or listen to music while they dance.
Here’s what Jambo sounds like. The rider’s dancing style is captured by the board and transformed into sound.


The Problem


Additional micro-adjustments & customization within Max/MSP
potential for generative sound synthesis & effects



Gather data on the dancer’s movements using an Arduino
feet position, wheel speed, and board tilt

Map the values to MIDI controls in Ableton Live
to sample sound, tempo, and pitch

layout
Jambo 1.0
Jambo 2.0



the old layout triggered multiple sensors all the time.
clearance at the center avoids accidentally triggering sounds.
shape & size
the shape and size of the new sensor pad resemble a foot, which is more ergonomic for dancers.
Jambo 1.0
Jambo 2.0

when the dancer is simply pushing forward, the front foot stays near the center. This new layout makes music interaction more intentional.
fabrication
the old sensors were unpredictable and too sensitive, so we added more elastic material as a cushion in between. The new sensors are more reliable, stable, and durable.



adhesive foam tape
FSR touch sensor
silicone grid sheet
acrylic sheet



background beats
setting a rhythmic background

switch between multiple sets of samples



one-step
mix & switch

54 mm
32 mm








