TAI: Your Learning Agent

03
TAI
UX Design
AI Product
Intro

I worked with the Teaching Assistant Intelligence (TAI) team, a cross-functional research and product group at UC Berkeley’s Vive Center focused on creating AI-powered tools that transform how students learn in large STEM classes. On this project, I collaborated with two other UI/UX designers, two product managers, and a team of five to ten engineers. I was responsible for redefining the user flow for the upcoming version and refining key features such as the notes, knowledge base, and file/video chat functions.


The Problem

Large STEM classes at Berkeley face three major pain points:

Limited personalized attention: Office hours are overcrowded and intimidating.

🧩 Fragmented learning materials: Lecture slides, past exams, labs, and readings are scattered across different platforms, making it hard to study systematically.

🤖 Generic AI isn’t enough: Tools like ChatGPT can’t provide course-specific, reliable answers — leading to frustration or misinformation.

👉 This leads to students relying on guesswork, overloading TAs with repetitive questions, and missing deep conceptual understanding.

This is what a Radar looks like, but there are more than 20 of them!


MVP

Personalized, course-specific teaching assistant that supports students throughout their learning process


Discover Bigger Picture


Build Depth for Personalized Path


Lecture recordings are automatically broken down into meaningful segments, each paired with its transcript. Students can skim through the chaptered timeline or click on transcript sections to jump to specific moments, making long videos searchable and digestible.

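As a rough sketch of the mechanics (my own illustration, not the team's implementation — all names and data shapes here are hypothetical), chaptering can be modeled as transcript segments with timestamps; the chapter list is derived from the segments, and clicking a transcript section resolves to a seek position in the player:

```python
from dataclasses import dataclass

@dataclass
class Segment:
    start: float      # seconds into the lecture
    end: float
    title: str
    transcript: str

def build_chapters(segments):
    """Return a skimmable chapter list: (start_time, title) pairs."""
    return [(s.start, s.title) for s in segments]

def seek_for_segment(segments, index):
    """Clicking transcript section `index` jumps the player here."""
    return segments[index].start

lecture = [
    Segment(0, 300, "Intro & logistics", "Welcome to lecture 4..."),
    Segment(300, 1260, "Dynamic programming", "Today we formalize..."),
    Segment(1260, 2400, "Worked example", "Consider the knapsack..."),
]

chapters = build_chapters(lecture)      # chaptered timeline data
jump_to = seek_for_segment(lecture, 2)  # seconds offset to seek to
```

In a real pipeline the segment boundaries would come from an automatic segmentation model; this sketch only shows how segments, chapters, and seeking relate.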

From local "ChatGPT" to End-to-end Intelligent Learning Agent

When a course PDF is uploaded, the assistant automatically parses its structure—detecting sections, headings, and key concepts. These are presented as a sidebar table of contents and overview bullets inside the chat, giving students multiple ways to navigate. Students can jump directly to relevant sections or ask file-based questions, and the model responds with precise, section-grounded answers.

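To make the parsing step concrete, here is a minimal sketch (an assumption of mine, not the shipped parser) that detects numbered headings in text already extracted from a PDF and builds the sidebar-style table of contents:

```python
import re

def extract_outline(lines):
    """Detect numbered headings (e.g. '2.1 Gradient Descent') in text
    extracted from a PDF, and return sidebar TOC entries."""
    heading = re.compile(r"^(\d+(?:\.\d+)*)\s+(.+)$")
    toc = []
    for line_no, line in enumerate(lines):
        m = heading.match(line.strip())
        if m:
            number, title = m.groups()
            depth = number.count(".")   # '2.1' -> depth 1, nested in '2'
            toc.append({"number": number, "title": title,
                        "depth": depth, "line": line_no})
    return toc

# Hypothetical extracted text from a course PDF
pages = [
    "1 Introduction",
    "Neural networks are ...",
    "2 Optimization",
    "2.1 Gradient Descent",
    "We update parameters ...",
]
toc = extract_outline(pages)
```

Real course PDFs are messier (unnumbered headings, fonts, layout), so a production parser would combine layout cues with text; the sketch only shows the structure-to-sidebar idea.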
Galaxy Visualization: Enables open-ended exploration, revealing hidden connections through an intuitive, visual map.

Personal Dashboard: Adapts to individual roles and goals, surfacing timely, relevant insights that support day-to-day decisions.

Students can start directly from the course landing page to ask questions in natural language. Instead of relying on generic LLM responses, the assistant anchors every answer to instructor-provided materials, ensuring accuracy and academic integrity. When students ask a question, the system retrieves relevant sections from the course’s slides, notes, or problem sets, and shows linked references, allowing students to trace exactly where the information came from. This creates trust and transparency, especially in technical courses where precision matters.

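The retrieval-and-reference loop can be sketched as follows. This is a deliberately naive word-overlap ranker for illustration only (the actual system would use semantic retrieval); the data shapes and names are assumptions:

```python
def retrieve(question, sections, top_k=2):
    """Rank course sections by word overlap with the question and
    return them with source references, so answers stay traceable."""
    q_words = set(question.lower().split())
    scored = []
    for sec in sections:
        overlap = len(q_words & set(sec["text"].lower().split()))
        scored.append((overlap, sec))
    scored.sort(key=lambda pair: -pair[0])
    hits = [sec for score, sec in scored[:top_k] if score > 0]
    # The LLM answers *only* from these sections; the UI shows the refs.
    references = [f'{s["source"]} — {s["title"]}' for s in hits]
    return hits, references

# Hypothetical instructor-provided materials
course = [
    {"source": "Lecture 7 slides", "title": "Binary search trees",
     "text": "a binary search tree keeps keys in sorted order"},
    {"source": "Discussion 3", "title": "Hash tables",
     "text": "hash tables map keys to buckets in constant time"},
]
hits, refs = retrieve("how does a binary search tree store keys", course)
```

The key design point survives the simplification: every answer carries linked references back to instructor materials, which is what creates the trust and transparency described above.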
Technology View

What it does: Groups projects by the technologies they apply or explore.

  • Hierarchy: Tech field → Specific methods/models → Projects

  • Why it’s helpful: Ideal for identifying innovation trends, technical overlaps, or evaluating tech adoption maturity.

Domain View

What it does: Groups projects based on problem space within SLB, application area, or impact field (e.g., “Subsurface,” “Grid Modernization”).

  • Hierarchy: Domain → Product → Projects

  • Why it’s helpful: Useful for strategists or external partners to see applied impact areas and identify gaps or overlaps in research.

Team View

What it does: Organizes projects by contributing teams (e.g., AI Lab, Frontend, Robotics).

  • Hierarchy: Team → Sub teams within a Lab → Projects

  • Why it’s helpful: Great for internal alignment, performance tracking, and collaboration mapping across the org.

While the current MVP focuses primarily on answering students’ course-related questions with high accuracy, our vision goes beyond that. We aim to transform TAI from a local “ChatGPT” clone into an end-to-end intelligent learning agent, one that not only provides answers but also helps students formulate structured notes, plan their study paths, and evaluate their learning progress over time. This semester, we’re focusing on designing and prototyping these expanded capabilities, laying the foundation for a more adaptive, personalized, and longitudinal learning experience.


Stay tuned!

Due to time constraints, we didn't get to implement all the features. Below are proposed features that would be possible with the data available from Radar.

Inspiration Prompt:

  • Inspire users about what questions to ask

  • Present popular questions

  • Build a mental model of how the system works

Intelligent Search & Filters:

Help users ask better questions by:

  • Evaluating whether the system has enough parameters from user input

    • Asking follow-up questions and giving suggestions on refinement

The interactive timeline serves as both a filter and a line graph that indicates change in activity over time.
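The timeline's dual role can be sketched in a few lines. This is an illustrative model (event fields and function names are my assumptions, not Radar's API): yearly counts drive the line graph, and brushing a range acts as the filter:

```python
from collections import Counter

def timeline_bins(events, start_year, end_year):
    """Bin project activity by year; the counts drive the line graph."""
    counts = Counter(e["year"] for e in events)
    return [counts.get(y, 0) for y in range(start_year, end_year + 1)]

def filter_by_range(events, lo, hi):
    """Brushing the timeline keeps only events inside the range."""
    return [e for e in events if lo <= e["year"] <= hi]

# Hypothetical project activity data
events = [{"name": "Radar v1", "year": 2021},
          {"name": "Radar v2", "year": 2022},
          {"name": "Galaxy view", "year": 2022}]
series = timeline_bins(events, 2020, 2022)    # line-graph y values
recent = filter_by_range(events, 2022, 2022)  # brushed selection
```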

See projects with related domains, tech, and teams easily

Visual parameter → What it maps to → What's needed to implement:

  • Planet size → effort invested (time or money) → actual effort metrics based on e.g. time and money

  • Star brightness → impact or visibility → a quantitative impact score, or stakeholder priority

  • Rotation speed (within the trail or self-rotation) → update frequency within stages, major updates → a more granular way of documenting updates in projects

  • Split-off trails → whether multiple spin-off projects emerged from an original one → more relationship data to indicate relationships between projects

  • Satellite (Level 3 category) → work (tech evaluation, partner, spin-off) that supported that specific project → evaluation of DataOS

  • Planet shape → entity type → project, tech evaluation, partner

  • Domain star cluster → product sub-concepts → product line features
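As a minimal sketch of how such an encoding could work (my own illustration; the field names and scaling functions are assumptions, not production code), project metrics can be mapped to galaxy visuals like so:

```python
import math

def encode_project(project):
    """Map project metrics to galaxy visuals:
    effort -> planet size, impact -> star brightness (0..1)."""
    return {
        # sqrt so planet *area* scales roughly with effort
        "planet_radius": math.sqrt(project["effort_hours"]),
        "brightness": min(project["impact_score"] / 10, 1.0),
        "shape": project["entity_type"],   # planet shape = entity type
    }

# Hypothetical project record
visual = encode_project({"effort_hours": 400, "impact_score": 7,
                         "entity_type": "project"})
```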

Our approach enables adaptive interfaces through conversational feedback. Users can directly express what they want to track or explore.

By entering their goal or question into a prompt input box, the system performs intent tagging to categorize the query and automatically surfaces the most relevant UI components.

Step 1: Intent Tagging: Understanding the ask

On the input-interpretation side, I defined 5 parameters to tag a question.

For example ☝️

How did I come up with these parameters? Why?

During the user interviews, I asked each interviewee: "If you could ask the galaxy page any question, what would you ask?" and looked for recurring patterns in what they were fundamentally asking for.

While an LLM could infer UI responses directly, defining a parameter tagging framework adds a structured, interpretable layer between user intent and system response. It puts the LLM on rails as opposed to letting it roam free.
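To show what "on rails" means mechanically, here is a toy tagger. The parameters below are illustrative stand-ins, not the actual five defined in the project, and real tagging would be done by the LLM against this schema rather than by keyword rules:

```python
def tag_question(question):
    """Tag a free-form question with structured parameters, so the
    LLM stays on rails: downstream UI logic reads tags, not raw text."""
    q = question.lower()
    tags = {
        "intent": "discover",   # default: open-ended exploration
        "entity": None,         # what the question is about
        "time_scoped": any(w in q for w in ("last", "recent", "2024")),
    }
    if "compare" in q or " vs " in q:
        tags["intent"] = "compare"
    elif "where" in q or "which team" in q:
        tags["intent"] = "locate"
    entities = ("project", "team", "technology")
    tags["entity"] = next((e for e in entities if e in q), None)
    return tags

tags = tag_question("Which team worked on recent robotics projects?")
```

The value of the schema is that every downstream decision (which UI component to show, what to filter) consumes a small, inspectable dict instead of free-form model output.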

Step 2: UI Matching: How to Answer with Different UI Components

On the system output side, for each potential intent, I defined a set of corresponding UI components to answer with.

For example, for the "Discover" intent, a UI component like this would be fetched ☝️

Step 3: Curate Dashboard Space with Feedback

Try the MVP: tai.berkeley.edu

While brainstorming how to leverage the power of LLMs for better search, I talked to a startup called Glean, which provides software that combines all sources of information across platforms and domains and fuses them into smart search specific to the company.

Reflection