Apple publishes Human-Centered Machine Learning workshop videos

Today, Apple published select recordings from its 2024 Workshop on Human-Centered Machine Learning (HCML) on its Machine Learning Research blog, highlighting the company’s work on responsible AI development.

Almost 3 hours of content made available

The event, originally held in August 2024, brought together Apple researchers and academic experts to explore topics ranging from model interpretability and accessibility to strategies for predicting and preventing large-scale negative outcomes as AI evolves.

Here is the full list of videos made available:

  • “Engineering Better UIs via Collaboration with Screen-Aware Foundation Models,” by Kevin Moran (University of Central Florida)
  • “UI Understanding,” by Jeff Nichols (Apple)
  • “AI-Resilient Interfaces,” by Elena Glassman (Harvard University)
  • “Tiny but Powerful: Human-Centered Research to Support Efficient On-Device ML,” by Mary Beth Kery (Apple)
  • “Speech Technology for People with Speech Disabilities,” by Colin Lea and Dianna Yee (Apple)
  • “AI-Powered AR Accessibility,” by Jon Froehlich (University of Washington)
  • “Vision-Based Hand Gesture Customization from a Single Demonstration,” by Cori Park (Apple)
  • “Creating Superhearing: Augmenting human auditory perception with AI,” by Shyam Gollakota (University of Washington)

Apple is doubling down on responsible AI development

Although the event took place almost a year ago, the talks remain insightful, since they focus mostly on the human and responsible aspects of machine learning development rather than on the frontier technology itself.

In the blog post, Apple also highlights its focus on responsible AI development, which includes a set of principles that guide the development of its AI tools:

  1. Empower users with intelligent tools: We identify areas where AI can be used responsibly to create tools for addressing specific user needs. We respect how our users choose to use these tools to accomplish their goals.
  2. Represent our users: We build deeply personal products with the goal of representing users around the globe authentically. We work continuously to avoid perpetuating stereotypes and systemic biases across our AI tools and models.
  3. Design with care: We take precautions at every stage of our process, including design, model training, feature development, and quality evaluation to identify how our AI tools may be misused or lead to potential harm. We will continuously and proactively improve our AI tools with the help of user feedback.
  4. Protect privacy: We protect our users’ privacy with powerful on-device processing and groundbreaking infrastructure like Private Cloud Compute. We do not use our users’ private personal data or user interactions when training our foundation models.

Do you work in machine learning development? How often is responsible development the main part of the conversation? Let us know in the comments.

Author

Marcus Mendes

Marcus Mendes is a Brazilian tech podcaster and journalist who has been closely following Apple since the mid-2000s.

He began covering Apple news in Brazilian media in 2012 and later broadened his focus to the wider tech industry, hosting a daily podcast for seven years.