Lectr

Recommendations Which Respect Your Privacy

How Recommendations Work

Lectr can suggest books based on your reading patterns. The feature is designed so that we do not receive or store your raw library, notes, or account identity.

The Short Version

Lectr builds a statistical portrait of your reading on your device. When you ask for recommendations, it sends a stripped-down summary of your tastes – themes you return to, how deeply you engage with different topics, the tone of your annotations – to our server. The server includes this statistical summary in a prompt sent to Claude (Anthropic’s AI), which returns book suggestions.

Your book titles, authors, notes, and quotes are not sent to our server. We receive a sketch of your reading personality, not a copy of your bookshelf.

What We Send

Think of it like describing your taste in food to a sommelier without showing them your fridge. The app sends:

  • Themes – recurring concepts in your annotations, with weights (e.g. “morality: 70%, spirituality: 45%”). Extracted by Apple’s on-device natural language processing, then mapped to generic labels.
  • Tag clusters – which of your reading topics tend to appear on the same books (e.g. “philosophy + psychology + spirituality”).
  • Sentiment per topic – whether your annotations in a given area are more positive or more critical, relative to your own baseline.
  • Engagement depth – how many notes and quotes you save per book in each topic area.
  • Note colour distribution – how you use annotation colours (a surprisingly distinctive signal).

None of these fields contain book titles, author names, or raw text from your annotations. Your tag names and annotation themes are never sent directly. Before leaving your device, each term is mapped to its nearest matches in a fixed vocabulary of generic literary and thematic descriptors using Apple’s on-device word embeddings. If you tag books “Grief” or “Addiction Recovery,” the server sees generic proxies like “sorrow” or “recovery” – close enough for useful recommendations, but not the specific words you chose.
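That mapping step can be sketched as a nearest-neighbour lookup over word embeddings. The toy embedding table and miniature vocabulary below are illustrative stand-ins for Apple’s on-device embeddings and the app’s real descriptor list:

```typescript
// Sketch of the on-device proxy mapping. The real app uses Apple's
// on-device word embeddings; this hypothetical table makes the
// nearest-neighbour logic runnable.
type Vec = number[];

const embeddings: Record<string, Vec> = {
  grief:     [0.90, 0.10, 0.00],
  sorrow:    [0.85, 0.15, 0.05],
  recovery:  [0.10, 0.90, 0.10],
  healing:   [0.15, 0.85, 0.20],
  economics: [0.00, 0.10, 0.95],
};

// Fixed generic vocabulary that ships with the app (miniature here).
const genericVocabulary = ["sorrow", "healing", "economics"];

function cosine(a: Vec, b: Vec): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Map a user tag to its nearest generic descriptors. Tags with no
// embedding are dropped rather than sent raw.
function toProxies(tag: string, k = 1): string[] {
  const v = embeddings[tag.toLowerCase()];
  if (!v) return [];
  return genericVocabulary
    .map(term => ({ term, score: cosine(embeddings[term], v) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map(x => x.term);
}
```

With this toy table, a “Grief” tag maps to “sorrow” and an “Addiction Recovery” tag (stemmed to “recovery”) maps to “healing” – the server only ever sees vocabulary terms.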

Here is a realistic example of a complete reading profile payload:

{
  "profile": {
    "themes": { "morality": 0.82, "spirituality": 0.65, "contemplative": 0.41, "economics": 0.29 },
    "tagClusters": [
      ["philosophy", "psychology", "spirituality"],
      ["history", "politics"]
    ],
    "tagSentiment": { "philosophy": 0.35, "neuroscience": 0.12, "historical": -0.08 },
    "engagement": { "philosophy": 8.4, "psychology": 5.1, "historical": 2.3 },
    "noteColours": { "yellow": 84, "blue": 31, "pink": 12, "green": 7 },
    "blackSwanDistance": 0.42
  },
  "scope": null,
  "count": 5
}

This is everything the server receives. No book titles, no author names, no annotation text, and no user-created tag names. All terms are drawn from a fixed generic vocabulary – the server sees broad reading themes, not your personal labels.
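For reference, the payload corresponds to a schema along these lines. The interface and the minimal runtime check are our reconstruction from the example above, not the app’s source:

```typescript
// Shape of the recommendation request body, inferred from the
// example payload. Every string key in the profile is drawn from
// the fixed generic vocabulary, never from user-created tags.
interface ReadingProfile {
  themes: Record<string, number>;        // generic theme -> weight
  tagClusters: string[][];               // co-occurring generic topics
  tagSentiment: Record<string, number>;  // topic -> sentiment vs. own baseline
  engagement: Record<string, number>;    // topic -> notes/quotes per book
  noteColours: Record<string, number>;   // colour -> annotation count
  blackSwanDistance: number;             // numeric field from the example
}

interface RecommendationRequest {
  profile: ReadingProfile;
  scope: string | null;  // null in the example above
  count: number;         // how many recommendations were requested
}

// Minimal runtime check mirroring the interface (our sketch).
function isRecommendationRequest(x: any): boolean {
  return Boolean(
    x && typeof x === "object"
    && x.profile && typeof x.profile === "object"
    && typeof x.profile.themes === "object"
    && Array.isArray(x.profile.tagClusters)
    && typeof x.profile.tagSentiment === "object"
    && typeof x.profile.engagement === "object"
    && typeof x.profile.noteColours === "object"
    && typeof x.profile.blackSwanDistance === "number"
    && (x.scope === null || typeof x.scope === "string")
    && typeof x.count === "number"
  );
}
```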

The server processes this payload in memory and discards it after the response is sent. It is never written to disk or associated with an identity. There are no user accounts – authentication is per device installation, not per person, so the server has no way to build a profile of you across requests. Anthropic retains API prompts for 30 days for safety monitoring before deletion (see Third-Party Services).

How We Hide Your Library

If the AI recommends a book you already own, someone needs to filter it out. The simplest private answer: your device does it.

The server asks the AI for more candidates than you requested and returns them all. Your device checks each suggestion against your local library and drops the ones you already own, then trims the list to the number you asked for. No representation of your library – not a list, not a hash, not a filter – ever leaves your device.
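The device-side step can be sketched as follows; the function and field names are illustrative, not the app’s source:

```typescript
// Device-side filtering: the server never learns which candidates
// were dropped, because the owned-set comparison happens here.
interface Candidate {
  title: string;
  author: string;
}

function filterOwned(
  candidates: Candidate[],
  ownedTitles: Set<string>,  // built from the local library, lowercased
  requested: number
): Candidate[] {
  return candidates
    .filter(c => !ownedTitles.has(c.title.toLowerCase()))
    .slice(0, requested);
}
```

Because the server over-provisions candidates, the user usually still gets the full number they asked for even after owned books are removed.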

Your device                            Our server

[Reading profile] -------------------> receive summary
                                           |
                                       ask Claude for books
                                       (extra candidates)
                                           |
filter out owned books <-------------- return full list
        |
trim to requested count
        |
[Recommendations]

No Accounts

Lectr has no user accounts, no sign-ups, and no login. This means we cannot tie a recommendation request to a person. But it also means we need another way to prevent abuse of the API.

Device Attestation

Instead of authenticating you, we authenticate your device. We use Apple’s App Attest framework, which lets the app prove to our server that it is:

  • A genuine, unmodified copy of Lectr
  • Running on an attested Apple device, with request signing tied to that installation

Here’s the simplified flow:

  1. On first use, the app generates a cryptographic key pair in the device’s hardware-backed Secure Enclave, and Apple’s attestation service certifies it. The app registers this certified key with our server.
  2. For each recommendation request, the app signs the request body with its private key. Our server verifies the signature using the public key from registration.
  3. Our server derives a pseudonymous identifier from the key – a one-way hash that lets us count “how many requests has this app installation made this month?” without knowing who is using it.

The identifier is pseudonymous: it is stable for a given app installation, but meaningless outside our system. We cannot map it to an Apple ID, a name, or a device serial number. If our database leaked, an attacker would see opaque hashes and monthly counters, with no way to tie them to individuals.
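A derivation with these properties can be sketched with an HMAC, matching the HMAC-SHA256 approach described in the FAQ; the secret name and encodings here are illustrative:

```typescript
import { createHmac } from "node:crypto";

// Derive a stable, pseudonymous installation id from the registered
// App Attest public key. One-way: the key cannot be recovered from
// the hash, and without the server-side secret the hash is opaque.
const SERVER_SECRET = "replace-with-server-side-secret"; // illustrative

function pseudonymousId(publicKeyDer: Buffer): string {
  return createHmac("sha256", SERVER_SECRET)
    .update(publicKeyDer)
    .digest("hex");
}
```

The same registered key always yields the same id (so monthly counters work), while two different installations yield unrelated ids.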

Quota Without Identity

Each app installation gets a monthly allowance of recommendation requests. We enforce this with the pseudonymous identifier described above – no account, no email, no device fingerprint. When the month rolls over, the counter resets.
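The quota check needs nothing beyond the pseudonymous identifier and the current month. A sketch, using an in-memory map in place of the server’s real store (the 20-request limit is the figure stated in the FAQ below):

```typescript
// Monthly quota keyed by pseudonymous id + calendar month. When the
// month changes, old keys simply stop matching, so counters "reset"
// without any identity lookup or cleanup job.
const MONTHLY_LIMIT = 20; // per installation per calendar month
const counters = new Map<string, number>();

function monthKey(id: string, now: Date): string {
  const m = String(now.getUTCMonth() + 1).padStart(2, "0");
  return `${id}:${now.getUTCFullYear()}-${m}`;
}

// Returns true and increments the counter if quota remains.
function tryConsume(id: string, now: Date): boolean {
  const key = monthKey(id, now);
  const used = counters.get(key) ?? 0;
  if (used >= MONTHLY_LIMIT) return false;
  counters.set(key, used + 1);
  return true;
}
```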

Request Binding

Every recommendation request is cryptographically bound to its content. The app computes a SHA-256 hash of the request body and includes it in the attestation signature. The server verifies that the signature matches the body it received. This prevents replay attacks (reusing an old signature for a different request) and tampering (modifying the request after signing).
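Concretely, the binding can be sketched like this. A locally generated P-256 key pair stands in for the App Attest key held in the Secure Enclave; the flow (hash the body, sign over the hash, verify server-side) is the same:

```typescript
import { createHash, generateKeyPairSync, sign, verify } from "node:crypto";

// Stand-in for the App Attest key pair; the real private key never
// leaves the Secure Enclave.
const { publicKey, privateKey } = generateKeyPairSync("ec", {
  namedCurve: "prime256v1", // NIST P-256
});

// Client side: hash the exact request body, sign over the hash.
function signRequest(body: string): Buffer {
  const bodyHash = createHash("sha256").update(body).digest();
  return sign("sha256", bodyHash, privateKey);
}

// Server side: recompute the hash of the body it actually received
// and check the signature covers it. A tampered body fails here.
function verifyRequest(body: string, signature: Buffer): boolean {
  const bodyHash = createHash("sha256").update(body).digest();
  return verify("sha256", bodyHash, publicKey, signature);
}
```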

What We Store

On our server, we store:

  • A pseudonymous identifier (one-way hash, not reversible)
  • A public key for verifying request signatures
  • A monthly request counter

We do not store:

  • Reading profiles or recommendation request bodies
  • Library data in any form
  • Apple IDs, device identifiers, or IP addresses
  • Any content from your notes or quotes

Honest Limits

We have tried to be precise about what this system does and does not protect. There is one area where reasonable people may want more assurance than we can architecturally guarantee.

Proxy tokens reveal approximate reading interests

Your actual tag names are never sent. Instead, each tag is mapped to the nearest matches in a fixed vocabulary of generic terms using Apple’s on-device word embeddings. A tag like “Depression” might become “despair, melancholy, sorrow.” This prevents the server from seeing your exact vocabulary, but the proxy terms still reveal the general area of your reading interests. Someone inspecting the request could infer that you read about emotionally heavy topics, even if they cannot determine your specific tag.

The fixed vocabulary ships inside the app and is publicly inspectable. A determined attacker who obtained both the vocabulary and an intercepted request could narrow down which user tags produced a given set of proxies, though the weighted random sampling adds non-determinism that makes exact reversal unreliable.
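To illustrate why the sampling frustrates exact reversal, here is one way such weighted sampling might look; the weighting scheme is our assumption, not the shipped algorithm:

```typescript
// Weighted sampling over the nearest vocabulary matches: terms
// closer to the user's tag are more likely to be drawn, but the
// draw is randomised, so identical tags can emit different proxies
// on different requests.
interface Scored {
  term: string;
  score: number; // embedding similarity, higher = closer
}

function sampleProxies(
  ranked: Scored[],               // nearest matches, highest first
  n: number,
  rand: () => number = Math.random // injectable for testing
): string[] {
  const pool = [...ranked];
  const picked: string[] = [];
  while (picked.length < n && pool.length > 0) {
    const total = pool.reduce((s, x) => s + x.score, 0);
    let r = rand() * total;
    let i = 0;
    while (i < pool.length - 1 && r >= pool[i].score) {
      r -= pool[i].score;
      i++;
    }
    picked.push(pool.splice(i, 1)[0].term);
  }
  return picked;
}
```

Because the draw depends on a random value, an observer holding the public vocabulary and one intercepted payload cannot tell exactly which candidate set, let alone which original tag, produced it.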

Infrastructure could theoretically see request bodies

Our server code does not log request bodies, but “we don’t log it” is a policy claim, and policies can change. The architectural constraint is stronger: the server runs on Cloudflare Workers, which have no persistent filesystem. There is no disk to write to. console.log output goes to wrangler tail, which is an ephemeral, opt-in developer stream that is not stored. Cloudflare’s edge logs capture connection metadata (IP, status code, timing) but not request bodies. The reading profile passes through memory and is garbage-collected when the response completes.

This does not make leakage impossible – a code change could add logging, and Anthropic retains prompts for 30 days. But the default architecture makes accidental retention difficult rather than relying solely on a promise not to do it.

Why Proxy Tokens Improve Recommendations

The proxy mapping was designed for privacy, but it turns out to make recommendations better too.

Vocabulary normalisation. Different readers label the same interest differently – “Psych,” “Psychology,” “Mind & Brain,” “Cognitive Science.” Without mapping, the AI sees four unrelated strings. With it, they all converge to the same neighbourhood of generic terms, so the AI recognises one coherent interest instead of four fragmented ones.

Reduced sparsity. A reader with three niche tags produces a sparse, hard-to-interpret profile. After mapping, those three tags expand into nine proxy terms that overlap with well-known reading categories. The profile becomes denser and more informative, giving the AI more to work with.

Conceptual reasoning. Raw tag names anchor the AI to your exact wording. Proxy tokens like “introspective,” “resilience,” and “transformation” nudge it toward thematic reasoning – what your reading is about rather than what you called it. This tends to surface more surprising and useful suggestions.

Anecdotally, I have noticed richer and more varied recommendations since introducing this approach – the broadening effect of the proxy vocabulary seems to give the AI more room to surprise me.

Summary

We built recommendations this way to keep the promise we made on the privacy page: your data stays on your device. The app knows your library. The server knows a statistical sketch. Neither side has the full picture, and that’s by design.

Privacy FAQ

Does the server log my reading profile?

No. Request bodies are not logged in normal operation. Two narrow edge cases exist: if App Attest verification fails, the error response is logged (but this contains attestation errors, not your reading data); and if the AI provider returns an error, the error text is captured but not the prompt itself. Neither path exposes your reading profile.

Are IP addresses stored?

The server code never reads or logs IP addresses. No CF-Connecting-IP, X-Forwarded-For, or similar headers are accessed by the application. Cloudflare’s edge infrastructure may retain connection metadata per their own data processing terms – that is outside our code’s control.

Does Anthropic use my data to train their models?

No. Under Anthropic’s API terms, prompts sent via the API are never used to train AI models. They are retained for 30 days for safety monitoring, then deleted. Lectr uses the API, not the consumer web product.

Does the server ever see my library?

No. Owned-book filtering happens entirely on your device. The server returns extra candidates and your device removes the ones you already own. No list, hash, or other representation of your library is ever sent to the server.

Could my data appear in error logs?

No. The reading profile is never attached to error messages. The only error logging covers attestation failures and cryptographic verification errors, neither of which includes reading data.

How does rate limiting work without identifying me?

Rate limiting uses a pseudonymous device identifier: a one-way HMAC-SHA256 hash derived from your App Attest key. The original key cannot be recovered from this hash. Quota is 20 requests per app installation per calendar month. No IP addresses, Apple IDs, or personal identifiers are used.

Does the server cache recommendation results?

No. Each recommendation request is processed fresh and the result is returned directly. Nothing is cached or written to disk. The app has its own on-device cache to avoid unnecessary repeat requests.
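Such an on-device cache could be as simple as keying stored results by a hash of the request body; the TTL and structure below are assumptions for illustration, not the app’s implementation:

```typescript
import { createHash } from "node:crypto";

// Hypothetical on-device result cache: an identical request within
// the TTL reuses the previous answer instead of spending quota.
const TTL_MS = 24 * 60 * 60 * 1000; // assumed: one day
const cache = new Map<string, { at: number; result: string[] }>();

function cacheKey(body: string): string {
  return createHash("sha256").update(body).digest("hex");
}

function getCached(body: string, now: number): string[] | null {
  const hit = cache.get(cacheKey(body));
  if (!hit || now - hit.at > TTL_MS) return null;
  return hit.result;
}

function putCached(body: string, result: string[], now: number): void {
  cache.set(cacheKey(body), { at: now, result });
}
```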

Will this system stay stateless?

The server stores three things persistently: App Attest registration records (device public keys and signature counters), monthly quota counters, and re-registration limits for abuse prevention. That is authentication and rate-limiting machinery – not user data.

No reading profiles or prompt payloads are ever persisted. The system is stateless with respect to your reading data. If this architecture ever changes, we will update this page and the privacy policy before any new data retention begins.