HackaTUM 2025 Winner - LogiTune

An adaptive music engine that generates a real-time soundtrack based on your work state—blending desktop activity, webcam signals, and physical controls.

Tech Stack

Go · Python · TypeScript · gRPC · Strudel · Electron

Overview

LogiTune is an adaptive music system that keeps you in the zone by generating a soundtrack that responds to how you’re working. By fusing desktop activity, webcam-based emotion detection, and physical controls, it creates a personalized audio experience that matches your focus and energy levels in real time.

🏆 Winner at HackaTUM 2025 — Built in 36 hours at one of Europe’s largest student hackathons.

How It Works

LogiTune translates signals from multiple sources into a unique, adaptive soundtrack:

  • Desktop Activity: Keyboard, mouse, and window focus patterns
  • Webcam Analysis: Face and pose tracking for affect detection
  • Physical Controls: Logitech MX Keypad integration for tactile overrides

These signals are fused into mood vectors (Focus/Intensity) that drive a Strudel-based music engine, dynamically adjusting mix parameters like drum energy, room ambience, and lead effects.
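A minimal sketch of that fusion and mapping step, assuming fixed source weights and a simple hardware-override rule; the type names, weights, and mixer fields below are illustrative, not LogiTune’s actual values:

```go
package main

import "fmt"

// MoodVector is the two-dimensional state described above: how focused the
// user appears and how intense their activity is, each normalized to [0, 1].
type MoodVector struct {
	Focus     float64
	Intensity float64
}

// MixerParams are illustrative stand-ins for the Strudel mix controls
// mentioned in the text (drum energy, room ambience, lead effects).
type MixerParams struct {
	DrumEnergy   float64
	RoomAmbience float64
	LeadFX       float64
}

// fuse combines the signal sources with fixed weights. The weights and the
// idea that a physical-control override simply replaces the result are
// assumptions for this sketch, not LogiTune's actual fusion rule.
func fuse(activity, camera MoodVector, override *MoodVector) MoodVector {
	if override != nil {
		return *override // tactile controls win outright in this sketch
	}
	const wActivity, wCamera = 0.6, 0.4
	return MoodVector{
		Focus:     wActivity*activity.Focus + wCamera*camera.Focus,
		Intensity: wActivity*activity.Intensity + wCamera*camera.Intensity,
	}
}

// toMixer maps the abstract mood dimensions onto concrete mix parameters:
// more intensity drives the drums, less focus opens up the room, and lead
// effects follow the product of both.
func toMixer(m MoodVector) MixerParams {
	return MixerParams{
		DrumEnergy:   m.Intensity,
		RoomAmbience: 1 - m.Focus,
		LeadFX:       m.Focus * m.Intensity,
	}
}

func main() {
	mood := fuse(
		MoodVector{Focus: 0.8, Intensity: 0.5}, // from desktop activity
		MoodVector{Focus: 0.6, Intensity: 0.7}, // from webcam analysis
		nil,                                    // no hardware override active
	)
	fmt.Printf("%+v -> %+v\n", mood, toMixer(mood))
}
```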

Architecture

The system consists of several interconnected components:

  • visual-emotions (Python/MediaPipe): Tracks face and pose, smooths state scores, publishes camera metrics
  • activity-monitor (Go): Logs keyboard, mouse, and window focus; converts activity to mood scales with weighted smoothing
  • DJ (Go): Public gRPC API coordinating mood services and music playback (a coordination sketch follows this list)
  • sprudel-production (TypeScript/Electron): Strudel renderer UI with gRPC mixer control
  • LogiTunePlugin: Hardware integration exposing play/pause and reset on deck buttons
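A rough sketch of how the DJ’s coordination role fits together, with plain Go channels standing in for the gRPC streams between services; the weights, update cadence, and names are assumptions for illustration, not the actual service code:

```go
package main

import (
	"fmt"
	"time"
)

// Mood mirrors the Focus/Intensity vector from the fusion sketch above.
type Mood struct{ Focus, Intensity float64 }

// dj keeps the latest mood reported by each source and periodically pushes a
// combined update toward the renderer. In the real system these channels
// would be gRPC streams exposed by the DJ's public API.
func dj(activity, camera <-chan Mood, mixer chan<- Mood) {
	var latestActivity, latestCamera Mood
	tick := time.NewTicker(500 * time.Millisecond) // cadence is an assumption
	defer tick.Stop()
	for {
		select {
		case m := <-activity:
			latestActivity = m
		case m := <-camera:
			latestCamera = m
		case <-tick.C:
			mixer <- Mood{
				Focus:     0.6*latestActivity.Focus + 0.4*latestCamera.Focus,
				Intensity: 0.6*latestActivity.Intensity + 0.4*latestCamera.Intensity,
			}
		}
	}
}

func main() {
	activity, camera, mixer := make(chan Mood), make(chan Mood), make(chan Mood)
	go dj(activity, camera, mixer)
	go func() { activity <- Mood{Focus: 0.9, Intensity: 0.4} }()
	go func() { camera <- Mood{Focus: 0.5, Intensity: 0.8} }()
	fmt.Printf("mixer update: %+v\n", <-mixer)
}
```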

Technical Challenges

  • Translating noisy webcam and body signals into stable focus/energy without over-smoothing away responsiveness (a smoothing sketch follows this list)
  • Balancing weights across keyboard, mouse, window, and camera so no single source dominates
  • Mapping abstract mood dimensions to concrete musical changes that feel intentional rather than random
  • Keeping feedback tight enough that the music feels reactive to your state, not lagging behind
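The first two challenges come down to a smoothing-and-weighting trade-off. A minimal sketch of one standard approach, an exponential moving average whose smoothing factor trades responsiveness against stability; the factor and sample values are illustrative, not taken from the project:

```go
package main

import "fmt"

// ema is an exponential moving average: alpha near 1 tracks the raw signal
// closely (responsive but jittery), alpha near 0 smooths heavily (stable but
// laggy behind the user's actual state).
type ema struct {
	alpha, value float64
	primed       bool
}

func (e *ema) update(sample float64) float64 {
	if !e.primed {
		e.value, e.primed = sample, true
		return e.value
	}
	e.value = e.alpha*sample + (1-e.alpha)*e.value
	return e.value
}

func main() {
	// A noisy focus score such as the webcam pipeline might emit (made-up samples).
	samples := []float64{0.9, 0.2, 0.85, 0.8, 0.1, 0.9}
	smooth := ema{alpha: 0.3}
	for _, s := range samples {
		fmt.Printf("raw=%.2f smoothed=%.2f\n", s, smooth.update(s))
	}
}
```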

What’s Next

  • Dynamic weighting and personalization of mood-to-music mappings
  • Richer mixer scenes and effects in the Strudel engine
  • Improved hardware UX with status feedback and configurable controls