March 2026
Testing Lectr’s Recommendations
One of the questions I had before shipping Lectr’s recommendation feature was simple: does it actually work? Does your reading history change what gets recommended to you, or is it doing something more cosmetic?
I’m a solo developer. I don’t have a research team, a controlled study, or a statistically clean methodology. What I do have is one real library — my own, 521 books — and the time to run some tests. So I ran them.
Here’s the short version of what I found.
The Profile Steers Genre
When recommendations run without any reading history, the results lean toward literary fiction, the kind of thing a general reader might enjoy. When the full reading profile is used, the results shift into psychology, philosophy, and self-help, which is much closer to what my library actually contains. The two lists barely overlap: almost no book appears in both.
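The overlap number is nothing fancy. Here's a minimal sketch of the kind of measurement I mean; the titles and the normalisation rule are placeholders for illustration, not anything Lectr ships:

```python
# Illustrative overlap check between two recommendation runs.
# The example titles and normalise() rule are placeholders, not Lectr internals.

def normalise(title: str) -> str:
    """Crude normalisation so 'The Stranger' and 'the stranger ' match."""
    return " ".join(title.lower().split())

def jaccard(a: list[str], b: list[str]) -> float:
    """Share of books appearing in both lists, out of all books in either."""
    set_a = {normalise(t) for t in a}
    set_b = {normalise(t) for t in b}
    if not (set_a | set_b):
        return 0.0
    return len(set_a & set_b) / len(set_a | set_b)

no_history = ["The Remains of the Day", "A Little Life", "Stoner"]
full_profile = ["Thinking, Fast and Slow", "Man's Search for Meaning", "Stoner"]
print(f"overlap: {jaccard(no_history, full_profile):.2f}")  # low when the profile steers
```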
The Privacy Layer Holds
Lectr doesn’t send your tags and notes to a server. It maps them to an anonymised vocabulary on-device first, and only that anonymised version leaves your phone. I measured how much that anonymisation degraded the recommendations. The answer: not much, and with some prompt engineering the gap closed almost completely.
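To make the idea concrete, here's a toy version of that on-device mapping. The vocabulary, the tags, and the drop-unknowns rule below are all invented for illustration; the real scheme is different, but the shape is the point: a fixed generic vocabulary, a lookup, and only the generic side of it ever leaves the phone.

```python
# A toy version of on-device tag anonymisation. Raw tags are replaced by
# entries from a fixed, generic vocabulary before any request is built.
# This vocabulary and mapping are invented for illustration only.

ANON_VOCAB = {
    "stoicism": "philosophy",
    "cbt": "psychology",
    "habit-building": "self-help",
    # personal tags have no generic equivalent and map to nothing
}

def anonymise(tags: list[str]) -> list[str]:
    """Keep only generic vocabulary terms; anything unmapped is dropped."""
    mapped = {ANON_VOCAB.get(tag.lower()) for tag in tags}
    mapped.discard(None)
    return sorted(mapped)

print(anonymise(["Stoicism", "CBT", "mom's recommendations"]))
# ['philosophy', 'psychology']  -- only this generic list would be sent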
Richer Signals Hit a Ceiling Quickly
This surprised me. I tested a stripped-down profile — just your top five most-engaged tags — against the full profile. For steering the genre of recommendations, they performed equally well. The full profile does seem to influence which specific books get picked within a genre, and it produces more personalised explanation text. But it doesn’t move the needle on the broad category of what gets recommended.
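A stripped-down profile like that is cheap to compute. Here's roughly what "top five most-engaged tags" could look like if engagement were simply a count of tagged books; the app's real engagement signal may weight things differently, and the data shape here is assumed:

```python
# Deriving a stripped-down profile: rank tags by engagement (here, a plain
# count of tagged books) and keep the top five. The data shape is assumed.

from collections import Counter

library = [
    {"title": "Meditations", "tags": ["philosophy", "stoicism"]},
    {"title": "Atomic Habits", "tags": ["self-help", "habit-building"]},
    {"title": "Thinking, Fast and Slow", "tags": ["psychology"]},
    # ... the rest of the 521-book library
]

tag_counts = Counter(tag for book in library for tag in book["tags"])
top_five = [tag for tag, _ in tag_counts.most_common(5)]
print(top_five)
```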
I Got the Initial Interpretation Wrong
My first read of the data was that the profile wasn’t working. It took an external review to catch what I’d missed: I hadn’t computed a baseline for what “no difference” actually looked like. Once I did, the picture changed completely. The profile was working; I just hadn’t measured correctly.
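The fix, in miniature: the model's output varies from run to run, so "no difference" isn't zero difference. You have to run the same condition more than once, measure how much its own output drifts, and judge the between-condition gap against that. A sketch with made-up numbers:

```python
# The mistake, in miniature: comparing conditions without a baseline.
# Run each condition twice; within-condition overlap is the natural noise
# floor, and the between-condition overlap is judged against it.
# The sets below are made up for illustration.

def jaccard(a: set[str], b: set[str]) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

profile_runs = [{"A", "B", "C", "D"}, {"A", "B", "C", "E"}]
no_profile_runs = [{"V", "W", "X", "Y"}, {"V", "W", "X", "Z"}]

within = jaccard(*profile_runs)                       # run-to-run noise
between = jaccard(profile_runs[0], no_profile_runs[0])

# If between is close to within, the profile does nothing.
# If between is far below within, the profile is genuinely steering results.
print(f"within-condition: {within:.2f}, between-condition: {between:.2f}")
```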
What I Can’t Tell You
These results come from one reader, one model, and one domain. I haven’t tested whether they generalise to other readers, especially readers whose tastes already match what the model would recommend by default. And I haven’t done blinded human evaluation of whether the recommendations feel more tailored — only whether the numbers suggest they are.
I also don’t collect any analytics or telemetry from the app. There’s no dashboard showing me how many people tap the recommendation button or whether they save what’s suggested.
The Real Test
The only way to know whether Lectr's recommendations are useful is to hear from people using them. If you've tried the feature, your feedback is the most useful signal I have. There's no form or in-app survey; just email.
The experiments gave me reasonable confidence that the profile improves what gets recommended.