Abstract

We address the problem of translating natural, semantically dense prompts (e.g., “Underground New York no wave improvisations”) into playable Spotify playlists. Conventional keyword search lacks the context to satisfy such queries. Our system executes a multi‑stage pipeline: semantic decomposition into facets (genre, era, scene, instrumentation); query expansion and diversification across multiple search paths; fault‑tolerant fuzzy matching and de‑duplication; and asynchronous enrichment via background queues. The result blends NLU, IR techniques, and resilient pipeline design to deliver results traditional APIs cannot, offering a “crate‑digging” experience driven by AI reasoning.
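The decomposition and expansion stages described above can be sketched as follows. This is illustrative only: the real system derives facets with AI reasoning rather than rules, and the `PromptFacets` fields and `expand_queries` helper are assumptions, not the production code.

```python
# Illustrative shape of the decomposition output. In the real pipeline an
# LLM fills these facets from the user's prompt; here they are hand-set.
from dataclasses import dataclass, field

@dataclass
class PromptFacets:
    genre: str = ""
    era: str = ""
    scene: str = ""
    instrumentation: list = field(default_factory=list)

def expand_queries(f: PromptFacets) -> list:
    # Diversify across several search paths so one miss doesn't sink the query.
    paths = []
    if f.genre:
        paths.append(f.genre)
    if f.genre and f.era:
        paths.append(f"{f.genre} {f.era}")
    if f.scene and f.genre:
        paths.append(f"{f.scene} {f.genre}")
    return paths

facets = PromptFacets(genre="no wave", era="1978-1982", scene="downtown New York")
print(expand_queries(facets))
# → ['no wave', 'no wave 1978-1982', 'downtown New York no wave']
```

Each expanded path is then searched independently, which is what makes the later de-duplication stage necessary.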

Pipeline Sandbox (Interactive)

Enter a prompt, then press Ctrl/Cmd + Enter to run.

Ready.

Semantic Decomposition

    Candidate Expansion

      Fallback Strategies (Heuristics)

        Tracks (from API if available)

        If your backend is running, the button above will fetch real candidates.
        Artist | Title | Preview

        Key Implementation Patterns (Interactive Code)

        Lazy OAuth (Login Only at “Send”)
        # /api/playlist (excerpt)
        user_sp = get_sp_user()
        if not user_sp:
            oauth = spotify_oauth()
            return jsonify({
                "ok": False,
                "need_login": True,
                "login_url": oauth.get_authorize_url()
            }), 401
        
        // Client (excerpt): on 401, pop up the login flow, then retry the request
        if (res.status === 401) {
          const body = await res.clone().json().catch(() => ({}));
          if (body?.need_login && body.login_url) {
            const ok = await popupLoginAndWait(body.login_url);
            if (ok) return postPlaylist(payload); // retry
          }
        }
        Progressive Search (Optimistic → Verified)
        def _resolve_spotify_url(artist, title, market=None):
            sp = get_sp_client_credentials()
            for q in (f'track:"{title}" artist:"{artist}"', f"{artist} {title}"):
                res = sp.search(q=q, type="track", limit=1, market=market or SPOTIFY_MARKET)
                items = (res or {}).get("tracks", {}).get("items", [])
                if items:
                    return items[0]["external_urls"]["spotify"]
            return None
        // Light up 🎧 links as they resolve
        for (const row of candidates) {
          const res = await fetch('/api/quicksearch', { method:'POST',
            headers:{'Content-Type':'application/json'},
            body: JSON.stringify({ artist: row.artist, title: row.title })
          });
          const j = await res.json();
          if (j.found && j.spotify_url) updateRowWithLink(row, j.spotify_url);
        }
        Parallel ID Resolution (Order‑Preserving)
        from concurrent.futures import ThreadPoolExecutor, as_completed
        
        def _bulk_resolve_track_ids(sp, urls, max_workers=10):
            ids = [None] * len(urls)
        
            def _fetch_one(idx_url):
                idx, url = idx_url
                try:
                    return idx, sp.track(url)["id"]
                except Exception:
                    return idx, None
        
            with ThreadPoolExecutor(max_workers=max_workers) as ex:
                for f in as_completed([ex.submit(_fetch_one, (i,u)) for i,u in enumerate(urls)]):
                    idx, tid = f.result()
                    ids[idx] = tid
            return [t for t in ids if t]
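The abstract also mentions fault-tolerant fuzzy matching and de-duplication, which the excerpts above don't show. Here is a minimal sketch using the standard library's `difflib`; the `_norm`, `is_duplicate`, and `dedupe` names and the 0.9 threshold are assumptions, not the production implementation.

```python
from difflib import SequenceMatcher

def _norm(s):
    # Lowercase and strip parenthetical noise like "(Remastered 2005)".
    return s.lower().split("(")[0].strip()

def is_duplicate(a, b, threshold=0.9):
    # Two candidates are duplicates if both artist and title match fuzzily.
    same_artist = SequenceMatcher(None, _norm(a["artist"]), _norm(b["artist"])).ratio() >= threshold
    same_title = SequenceMatcher(None, _norm(a["title"]), _norm(b["title"])).ratio() >= threshold
    return same_artist and same_title

def dedupe(candidates):
    # Keep the first occurrence of each fuzzy-equal (artist, title) pair.
    kept = []
    for c in candidates:
        if not any(is_duplicate(c, k) for k in kept):
            kept.append(c)
    return kept
```

Keeping the first occurrence preserves candidate order, so the ranking produced upstream survives de-duplication.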

        IR + AI Pipeline Diagram

        Pipeline: User Prompt → AI Decomposition → Candidate Expansion → Fallback Strategies → Background Queue → Fuzzy Matching → Playable Spotify Playlist
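The Background Queue stage can be sketched with the standard library alone. This is a minimal, single-worker illustration, assuming a simple producer/consumer shape; the real system's enrichment call (to Spotify) is replaced by a placeholder, and the `enrich_q`/`results` names are assumptions.

```python
import queue
import threading

enrich_q = queue.Queue()
results = {}

def _worker():
    # Drain the queue until a None sentinel arrives.
    while True:
        item = enrich_q.get()
        if item is None:
            break
        key, payload = item
        # Placeholder enrichment; the real system would call Spotify here.
        results[key] = {"enriched": True, **payload}
        enrich_q.task_done()

t = threading.Thread(target=_worker, daemon=True)
t.start()

enrich_q.put(("track:1", {"artist": "DNA", "title": "Egomaniac's Kiss"}))
enrich_q.join()        # block until in-flight work is done
enrich_q.put(None)     # signal shutdown
```

Because enrichment runs off the request path, the playlist response stays fast while metadata fills in behind it.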