Apple has demonstrated that large language models (LLMs) can accurately identify user activities by combining textual descriptions of audio with motion data, without accessing the raw audio itself. This multimodal approach opens new possibilities for health monitoring and smart fitness applications.
Apple enables LLMs to recognize actions from sound, advancing health monitoring and smart fitness
Nov 26