
How your client knows who's allowed to do what in a group

A 100,000-person group. Someone posts a moderation signal: "remove user X."

How does your client decide whether to trust it?

Most messengers answer this with a database lookup — yours, or theirs, or both. WhatsApp's server consults its membership table. Telegram's bot framework asks the server "is this user an admin?" Discord caches a role list locally and trusts the cache. Some combination of "the server knows" and "the client remembers what the server said."

BlindPost answers it differently. Our server doesn't know who is an admin of any group, because we don't have a role table. Your client also doesn't query its own database to check who has authority. The signal carries its own proof. Whoever sends a moderation action attaches a cryptographic chain that anyone with the group's public identifier can verify, end-to-end, without consulting anything else.

Here's how the chain works at the standard-crypto level.

The three role layers

Every group action in BlindPost is signed by one of three role layers:

  1. Owner: holds the group's Ed25519 identity private key, whose public half is the group's public identifier.
  2. Admin: holds a personal admin keypair whose public key the owner has signed with the group identity key.
  3. Member: holds the group's shared content encryption key.

That's the whole role model. No table, no list, no role rows in any database. Each layer's authority is established the same way: someone proves they hold a private key.

How an owner signal proves itself

When an owner issues an action (remove a member, change settings, rotate a key), they sign the action payload with the group's Ed25519 identity private key.

A receiver verifies this with a single signature check:

  1. Look at the group's public identifier — which is the public half of the same keypair, baked into how the group was joined.
  2. Verify the signature attached to the signal against that public key.

If it verifies, the action came from someone holding the group's private key — by definition, the owner. No server roundtrip. No local DB lookup. The verifier already has everything they need from the signal itself and from the public identifier of the group they're already a member of.

How an admin signal proves itself: a two-step chain

The admin case is more interesting because admins are delegated. They have their own admin keypair, and the owner has at some point declared "this admin public key is authorized."

That declaration is itself a signature: the owner signs the admin's public key with the group's identity private key. When the admin later acts, they sign the action payload with their admin private key. The receiver gets the signal, the admin's signature on it, and the admin's public key — packaged with the owner's signature on that public key.

Verification is two steps:

  1. Verify the admin signature on the payload, using the admin's public key.
  2. Verify the owner's delegation of that admin public key, using the group's public identifier.

If both check out, the action came from someone the owner had authorized. Done. Still no DB. Still no server.

This is the same pattern as a TLS certificate chain — leaf cert + root cert, leaf signed by root, root publicly trusted — applied to group governance.

How a member signal proves itself: it just exists

Member-level signaling has the most elegant proof of all: if you successfully decrypted the signal, the sender is a member. With authenticated encryption, anyone not holding the group's encryption key can't produce a ciphertext that decrypts and authenticates correctly under it. Membership collapses into key possession.

For actions where attribution matters (who voted in a poll, who reacted to a message), the signal additionally carries the member's identity public key and a signature; the receiver verifies. But for things where attribution doesn't matter, even that is skipped — the very fact of successful decryption is the proof.

What removal really looks like

The architectural punchline: when you remove an admin or a member from a BlindPost group, nothing changes in any "membership table" anywhere. Instead, the group's content encryption key gets rotated, and the removed party is excluded from the redistribution of the new key.

Their old key still works on the messages they already received. It does not work on any message sent after rotation. The cryptographic boundary moves; the social structure adjusts to match. Same with admin revocation — the old delegation signature stays mathematically valid forever, it just stops mattering, because the admin can't participate in the encrypted channel any more.

What you don't have to trust

To evaluate a moderation signal in a BlindPost group, you don't have to trust:

  1. The server, which holds no role table to consult.
  2. Your client's own database, or any cached role list.
  3. Any other member's assertion about who holds which role.

You only have to trust the math, and the public identifier of the group you're already in (which you obviously have, by being in it). The signal carries the rest.

For a service whose tagline is "the server is blind," this is the matching piece: the client doesn't need to be all-seeing either. Cryptographic delegation propagates authority through the network without anyone needing to consult a table — anywhere.
