Children's Day Music

Schumann’s Kinderszenen (Op. 15) is not really meant for kids—it’s a memory-scape written from adulthood, looking backward with tenderness, longing, and the complexity only hindsight can afford. The simplicity of the music is deceptive: these are technically accessible “Leichte Stücke,” yet emotionally layered. Happy Children’s Day :)

June 1, 2025

Duanwu — Chaos, Comfort, and Coming Home

Duanwu, also known as the Dragon Boat Festival, is a traditional Chinese holiday celebrated on the 5th day of the 5th lunar month. It commemorates Qu Yuan, a patriotic poet who lived—and died—for his country over 2,000 years ago. But for me, Duanwu means something more personal. It marks the true beginning of summer—thick with heat, humidity, and a kind of restless energy. It’s a season that feels chaotic, slightly uncomfortable, yet oddly peaceful. Everything about this festival is part of my comfort zone. The traditions. The food—especially zongzi (sticky rice dumplings), my all-time favorite. The heat. It all brings me back to childhood, to a place I belong. ...

May 31, 2025

Spotify and general recommendation algorithms with incentives

Here’s an interesting article about Spotify’s music recommendation algorithm: The surprising thing I learned from quitting Spotify | Vox | Ada Estes | https://www.vox.com/even-better/404896/spotify-youtube-apple-music-amazon-playlists The key to Spotify’s success is the platform’s superior recommendation algorithm. What distinguishes Spotify is its 15-year archive of the user’s listening habits and an advanced algorithm that understands and reinforces personal tastes — something competitors like Apple Music failed to replicate (hmm?). Spotify’s strength lies in its hybrid use of content-based filtering (analyzing song attributes like genre, mood, and artist) and collaborative filtering (recommending music based on what similar users enjoy). With access to the listening patterns of 675 million users, Spotify can create surprisingly effective and serendipitous recommendations. ...
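To make the hybrid idea concrete, here’s a minimal sketch of how a content-based score and a collaborative score might be blended; the toy feature vectors, the tiny play-count matrix, and the blend weight `alpha` are all my own illustrative assumptions, nothing to do with Spotify’s actual pipeline.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

# --- Toy data (purely illustrative) ---
# Content features per track, e.g. [energy, acousticness, normalized tempo]
track_features = np.array([
    [0.9, 0.1, 0.8],   # track 0
    [0.2, 0.9, 0.3],   # track 1
    [0.8, 0.2, 0.7],   # track 2
])
# User-by-track play counts (rows = users, cols = tracks)
plays = np.array([
    [5, 0, 3],   # our user
    [4, 0, 5],   # a similar user
    [0, 6, 0],   # a dissimilar user
])

def hybrid_scores(user_idx, alpha=0.5):
    """Blend a content-based score with a collaborative score for one user."""
    user_plays = plays[user_idx]
    # Content-based: similarity of each track to the user's taste profile,
    # where the profile is the play-weighted average of track features.
    profile = user_plays @ track_features / (user_plays.sum() + 1e-12)
    content = np.array([cosine(profile, f) for f in track_features])
    # Collaborative: predicted plays from similar users' behaviour,
    # weighted by user-user cosine similarity (excluding the user themselves).
    sims = np.array([cosine(plays[user_idx], plays[u]) for u in range(len(plays))])
    sims[user_idx] = 0.0
    collab = sims @ plays / (np.abs(sims).sum() + 1e-12)
    collab = collab / (collab.max() + 1e-12)  # scale to [0, 1] for a balanced blend
    return alpha * content + (1 - alpha) * collab

print(hybrid_scores(user_idx=0))  # higher score = stronger recommendation
```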

May 30, 2025

Recommender systems as mechanisms for social learning

Believe it or not, ChatGPT dug this up for me among the ocean of economic literature! Recommender Systems as Mechanisms for Social Learning | QJE 2017 | Yeon-Koo Che, Johannes Hörner | https://doi.org/10.1093/qje/qjx044 This article studies how a recommender system may incentivize users to learn about a product collaboratively. To improve the incentives for early exploration, the optimal design trades off fully transparent disclosure by selectively overrecommending the product (or “spamming”) to a fraction of users. Under the optimal scheme, the designer spams very little on a product immediately after its release but gradually increases its frequency; she stops it altogether when she becomes sufficiently pessimistic about the product. The recommender’s product research and intrinsic/naive users “seed” incentives for user exploration and determine the speed and trajectory of social learning. Potential applications for various Internet recommendation platforms and implications for review/ratings inflation are discussed. ...
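Just to picture the policy shape the abstract describes (spam sparingly right after release, ramp it up, then stop once beliefs turn pessimistic), here’s a toy simulation; the Beta-Bernoulli learning model, the linear ramping schedule, and every threshold in it are my own assumptions for illustration, not the paper’s actual mechanism.

```python
import random

def simulate_spam_policy(true_quality=0.2, horizon=200, seed=0):
    """Toy sketch: a designer over-recommends to a growing fraction of users
    until its posterior belief about product quality drops below a cutoff."""
    random.seed(seed)
    alpha, beta = 1.0, 1.0          # Beta(1, 1) prior on product quality
    stop_belief = 0.30              # stop spamming once posterior mean < this
    history = []
    for t in range(horizon):
        belief = alpha / (alpha + beta)        # posterior mean of quality
        if belief < stop_belief:
            spam_rate = 0.0                    # pessimistic: stop altogether
        else:
            spam_rate = min(0.5, 0.02 * t)     # ramp up spamming over time
        # Users who get the (over)recommendation try the product and report back.
        n_triers = sum(random.random() < spam_rate for _ in range(20))
        likes = sum(random.random() < true_quality for _ in range(n_triers))
        alpha += likes
        beta += n_triers - likes
        history.append((t, belief, spam_rate))
    return history

for t, belief, rate in simulate_spam_policy()[::40]:
    print(f"t={t:3d}  belief={belief:.3f}  spam_rate={rate:.3f}")
```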

May 29, 2025

Morricone’s Lolita Theme

There’s a thread on Reddit: “If someone asks why you watched Lolita, what will be your answer?” The famous answer — For research purposes, thank you! Ennio Morricone (1928–2020) was an Italian composer and conductor best known for his prolific work in film music. He scored over 400 films and television series and received two Academy Awards, three Grammys, and three Golden Globes. His works include The Good, the Bad and the Ugly (1966), Once Upon a Time in the West (1968), Cinema Paradiso (1988), and The Legend of 1900 (1998), and ...

May 28, 2025

Newton's Method🍎 for Constrained Optimization | Notes from YYYe's ORML Intensive Lectures

I used to think Newton’s method was a bit old-school. Turns out I was the one playing catch-up. This is about Newton’s method and a clever way to make it work for constrained optimization. Consider optimizing an unconstrained function $f(\cdot)$ whose Hessian exists. Newton’s method works on the gradient function, finding a point $x^\star$ such that $\nabla f(x^\star) = 0$. Iteratively, at each point $x^k$, it approximates the gradient function $\nabla f(\cdot)$ around $x^k$ using second-order derivative information $$ \nabla {\tilde f}(x^{k + 1}) \approx \nabla f(x^k) + \nabla^2 f(x^k)(x^{k + 1} - x^k), $$ and solves for $x^{k + 1}$ such that this approximation equals zero—which gives us the classic update: $$ x^{k + 1} = x^k - (\nabla^2 f(x^k))^{-1}\nabla f(x^k). $$ Btw: Newton’s method converges rapidly—quadratically, even—if the starting point $x^0$ is close enough to the solution. ...
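As a quick sanity check of that update rule, here’s a minimal sketch of the plain unconstrained Newton iteration on a toy smooth convex function; the test function, starting point, and tolerance are my own choices, and it ignores the safeguards and the constrained extension the post goes on to discuss.

```python
import numpy as np

def newton(grad, hess, x0, tol=1e-10, max_iter=50):
    """Plain Newton's method: find x with grad(x) = 0 via the update
    x_{k+1} = x_k - H(x_k)^{-1} grad(x_k)."""
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            return x, k
        # Solve H(x) * step = g rather than forming the inverse explicitly.
        step = np.linalg.solve(hess(x), g)
        x = x - step
    return x, max_iter

# Toy smooth convex objective: f(x) = exp(x1 + x2) + x1^2 + x2^2
def grad(x):
    e = np.exp(x[0] + x[1])
    return np.array([e + 2 * x[0], e + 2 * x[1]])

def hess(x):
    e = np.exp(x[0] + x[1])
    return np.array([[e + 2.0, e], [e, e + 2.0]])

x_star, iters = newton(grad, hess, x0=[0.0, 0.0])
print(x_star, iters)  # converges in a handful of iterations from this start
```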

May 27, 2025

Listen Closer to, say, Walter Gieseking

Best known for his intimate, almost translucent interpretations of Debussy and Ravel, Gieseking has long divided listeners. Some dismiss his playing as emotionally distant or overly plain (me). But others say it’s a refusal to overstate (almost me). Take his Debussy as an example—most pianists lean into the impressionist drama. Gieseking’s moonlight, though, is like wind whispering through water. He lets those notes hang like breath, and his pedaling is nearly invisible. There’s no excess ornament, no indulgence (to me), ...

May 26, 2025

Insomnia talk

May 25th falls within the annual Mental Health Awareness Month—a day that encourages individuals to focus on their own well-being. And I got insomnia…is that contagious? Anyway, if the night caught you sober—might as well stay here just a little longer and let’s listen to Chopin together. Here’s Nocturne Op. 9 No. 1—let our thoughts become feelings and feelings become sky.

May 25, 2025

Elegantly and Untrustworthily Flirting with Sincerity — Music for Lolita

Wine pairs with food. A good book pairs with music too. This weekend I’ve been hypnotized by Nabokov’s Lolita (for academic purposes, thank you). It’s strange, obsessive, and beautifully written—like something dark wrapped in silk. It goes too well with Szymanowski’s Myths (Op. 30). The music is dreamy, intense, and just a little unsettling. It mirrors the book’s mood—especially in the first half, when everything still seems elegant and restrained on the outside. It’s as if Szymanowski doesn’t just compose music—he dissolves you into it. The sliding, dissonant harmonics in the grand glissandi are sensual chaos: not just beauty but intoxication. Like Humbert’s monologue—obsession as it tips past the point of control: ...

May 24, 2025

First Order Methods | Notes from YYYe's ORML Intensive Lectures

I used to think first-order methods (FOM) were mostly just warm-up material. But after sitting through Professor Ye’s lightning-fast 3-hour tour of the topic… FOM are probably the best! I can’t do a full overview of FOMs here, so here are some takeaways from the lecture: a few highlights that are surprisingly insightful and fun, and worth pausing over. We’ll skip zero-order methods. Sure, you can do bisection (aka binary search) if you assume strong structure on the objective function. But the more efficient zero-order methods typically approximate gradients via sampling (a quick sketch below). That’s clever, but we’re moving on. ...
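Before skipping past zero-order methods entirely, here’s a minimal sketch of what “approximate gradients via sampling” can look like: a two-point random-direction gradient estimate plugged into plain gradient steps. The quadratic test function, smoothing radius, sample count, and step size are all my own illustrative choices, not anything specific from the lecture.

```python
import numpy as np

def zo_gradient(f, x, mu=1e-4, n_samples=20, rng=None):
    """Zeroth-order gradient estimate: average two-point finite differences
    along random Gaussian directions; f is queried by value only."""
    rng = rng or np.random.default_rng(0)
    d = len(x)
    g = np.zeros(d)
    for _ in range(n_samples):
        u = rng.standard_normal(d)
        g += (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
    return g / n_samples

# Toy value-only oracle: a simple quadratic centered at [1, -2]
f = lambda x: 0.5 * np.sum((x - np.array([1.0, -2.0])) ** 2)

x = np.zeros(2)
rng = np.random.default_rng(0)
for t in range(200):
    x = x - 0.1 * zo_gradient(f, x, rng=rng)  # gradient step with the estimate
print(x)  # should approach [1, -2] without ever touching a true gradient
```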

May 23, 2025