Most music recommendation systems are backward-looking. They mine your listening history, find patterns, and surface more of the same. Spotify's Discover Weekly is elegant in its own way, but it assumes that what you've liked before is the best predictor of what you want now.
I built Resonance to test a different hypothesis: context is a stronger signal than history.
The Problem with History
Listening history encodes mood, but it encodes it as an average. The songs you've loved span midnight study sessions, morning runs, and Sunday afternoons that felt like they'd last forever. Averaged together, they produce recommendations that are technically consistent but contextually incoherent.
What if we asked not "what have you liked?" but "what do you need right now?"
This is a much harder question to answer — but it's a more honest one. It requires the system to model the user's present state rather than their cumulative preferences.
How Resonance Works
Context signals
Resonance takes three inputs: location (coarse, opt-in), time of day, and a freeform text mood prompt. These are embedded using a lightweight language model and mapped to a latent "context vector" that describes the current moment.
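A minimal sketch of how the three signals might be fused into a single context vector. The `embed_text` function here is a deterministic toy stand-in for the lightweight language model, and the strategy of averaging per-signal embeddings is an illustrative assumption, not Resonance's actual fusion method.

```python
import hashlib
import numpy as np

DIM = 64  # toy embedding width; the real model's latent size is an implementation detail

def embed_text(text: str, dim: int = DIM) -> np.ndarray:
    """Toy deterministic embedding: hash the text to seed a random unit
    vector. A stand-in for the lightweight language model."""
    seed = int.from_bytes(hashlib.sha256(text.encode()).digest()[:8], "big")
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

def context_vector(coarse_location: str, hour: int, mood_prompt: str) -> np.ndarray:
    """Fuse the three opt-in signals into one latent context vector by
    embedding each separately, averaging, and re-normalizing."""
    parts = [
        embed_text(f"location:{coarse_location}"),
        embed_text(f"hour:{hour}"),
        embed_text(f"mood:{mood_prompt}"),
    ]
    v = np.mean(parts, axis=0)
    return v / np.linalg.norm(v)

ctx = context_vector("rain-soaked city", 23, "late-night focus, melancholy but warm")
print(ctx.shape)  # (64,)
```

Because each signal is embedded independently, adding a fourth signal later (say, weather) would just be one more entry in `parts`.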
The context vector is then used to retrieve songs from a pre-indexed embedding space, where each track is encoded not just by audio features, but by the semantic content of its lyrics and the contexts in which it typically appears in user-curated playlists.
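One way the per-track encoding could look, assuming each modality arrives as a vector of the same width. The weights and the name `track_embedding` are hypothetical; the point is that each modality is normalized before mixing so no single one dominates by scale.

```python
import numpy as np

def track_embedding(audio_features: np.ndarray,
                    lyric_vec: np.ndarray,
                    playlist_context_vecs: list,
                    w_audio: float = 0.4,
                    w_lyrics: float = 0.3,
                    w_ctx: float = 0.3) -> np.ndarray:
    """Combine audio features, lyric semantics, and the contexts where the
    track typically appears (averaged over user-curated playlists) into one
    unit-norm vector. Weights are illustrative, not tuned."""
    ctx = np.mean(playlist_context_vecs, axis=0)
    parts = [audio_features, lyric_vec, ctx]
    # normalize each modality so mixing weights control the balance
    parts = [p / np.linalg.norm(p) for p in parts]
    v = w_audio * parts[0] + w_lyrics * parts[1] + w_ctx * parts[2]
    return v / np.linalg.norm(v)
```

Indexing every track this way ahead of time is what makes the query side cheap: at request time only the context vector needs to be computed.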
The retrieval architecture
Rather than training a new model end-to-end, Resonance uses retrieval-augmented generation (RAG) adapted for audio. A vector database holds track embeddings; the context vector queries it using approximate nearest-neighbor search. The top-k results are reranked by a small cross-encoder that scores relevance to the specific mood prompt.
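The retrieve-then-rerank step can be sketched as below. Brute-force cosine search stands in for the approximate nearest-neighbor index (a real deployment would use an ANN library such as FAISS or hnswlib over the same vectors), and `rerank_score` is a stand-in for the cross-encoder that scores each (mood prompt, track) pair jointly.

```python
import numpy as np

def retrieve_and_rerank(index: np.ndarray,
                        query: np.ndarray,
                        rerank_score,
                        k: int = 50,
                        final: int = 10) -> np.ndarray:
    """Two-stage retrieval: cheap top-k by cosine similarity over unit-norm
    track vectors, then an expensive reranker applied only to the k
    candidates. `rerank_score(track_id)` stands in for the cross-encoder."""
    sims = index @ query                      # cosine sim, rows are unit-norm
    candidates = np.argsort(-sims)[:k]        # stage 1: top-k retrieval
    scores = np.array([rerank_score(i) for i in candidates])
    order = np.argsort(-scores)               # stage 2: rerank the shortlist
    return candidates[order][:final]

# toy demo: 1,000 tracks, 32-dim embeddings
rng = np.random.default_rng(7)
index = rng.standard_normal((1000, 32))
index /= np.linalg.norm(index, axis=1, keepdims=True)
query = index[42]  # pretend the context vector matches track 42 exactly

playlist = retrieve_and_rerank(index, query, rerank_score=lambda i: float(index[i] @ query))
```

Keeping the reranker out of stage 1 is the standard trade: the cross-encoder is far more accurate per pair but too slow to run over the whole catalog.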
What I Learned
The hardest part wasn't the retrieval. It was defining "good" in a context where there's no ground truth. A playlist that feels perfect at 11pm in a rain-soaked city is wrong by definition at 7am in a sunlit kitchen — even if the tracks are the same.
This pushed me toward a more epistemic framing of the evaluation problem: rather than asking whether the system produced the "right" playlist, I asked whether it produced a coherent one — internally consistent in mood, responsive to the stated context, and surprising in the right ways.
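One cheap proxy for the "internally consistent in mood" criterion, assuming tracks live in the embedding space described earlier: mean pairwise cosine similarity across the playlist. This metric and the name `coherence` are my illustration, not Resonance's actual evaluation code.

```python
from itertools import combinations
import numpy as np

def coherence(track_vecs: list) -> float:
    """Internal-consistency proxy: mean pairwise cosine similarity of the
    playlist's track embeddings. 1.0 = identical mood, ~0 = unrelated."""
    unit = [v / np.linalg.norm(v) for v in track_vecs]
    return float(np.mean([a @ b for a, b in combinations(unit, 2)]))
```

A score near 1.0 flags a playlist that is arguably *too* coherent (ten near-duplicates), which is one way to quantify the "surprising in the right ways" tension: you want coherence high, but not maximal.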
Resonance is ongoing. The current version is a prototype; I'm building toward a version that can adapt the playlist in real time as context shifts. If any of this intersects with work you're doing, I'd love to talk.