Maybe I’ve missed the intentions of markdown, but the ability to easily read the plain text version has always been the killer feature.
Rendering as html is a nice bonus.
I understand there are plenty of useful things to say “but what about…” to, like inline images, and I use them. But they still detract from what differentiated markdown in the first place.
The more of that you add, the more it could have been any document format.
You can read more about the implementation here: https://sdocs.dev/#sec=short-links
Briefly:
https://sdocs.dev/s/{short id}#k={encryption key}
└────┬───┘ └───────┬──────┘
│ │
sent to never leaves
server your browser
We encrypt your document client-side. The encrypted document is sent to the server along with an id to save it against. The encryption key stays client-side in the URL fragment. (And, probably very obviously, the encryption key is required to make the server-stored text readable again.) You can test this by opening your browser's developer tools, switching to the Network tab, clicking Generate next to the "Short URL" heading, and inspecting the request body. You will see a base64-encoded blob of random bytes, not your document.
The analytics[1] is incredible. Thank you for sharing (and explaining)! I love this implementation.
I'm a little confused about the privacy mention. Maybe the fragment data isn't passed but that's not a particularly strong guarantee. The javascript still has access so privacy is just a promise as far as I can tell.
Am I misunderstanding something and is there a stronger mechanism in browsers preserving the fragment data's isolation? Or is there some way to prove a url is running a github repo without modification?
You are right re privacy. It is possible for the page's JavaScript to go from URL hash -> parse -> server (to be clear, that's not what SDocs does).
I’ve been thinking about how to prove our privacy mechanism. The idea I have in my head at the moment is to have 2+ established coding agents review the code after every merge to the codebase and to provide a signal (maybe visible in the footer) that, according to them it is secure and the check was made after the latest merge. Maybe overkill?! Or maybe a new way to “prove” things?? If you have other ideas please let me know.
And I believe you can then tell the browser that you need no network communication at that point. And a user can double check that.
I think it's in the hands of browser vendors.
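I assume the "no network communication" mechanism meant here is something like a Content-Security-Policy that forbids outbound requests, which a user can inspect in the response headers. A sketch (though a page that also needs to POST the encrypted blob would have to allow its own origin):

```
Content-Security-Policy: default-src 'self'; connect-src 'none'
```

That blocks fetch/XHR/WebSocket from the page entirely, but it's still the server choosing to send that header, so it shifts rather than removes the trust question.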
The agent review a la socket.dev probably doesn't address all the gaps. I think you're already doing about as much as you reasonably can.
That is my half of a bad idea.
Concretely: the sdocs.dev JS bundle should be byte-for-byte reproducible
from a clean checkout at a given commit. You publish { gitSha, bundleSha256 }
on the landing page. Users (or agents) can compute the hash of what their browser
actually loaded (DevTools → Sources → Save As → sha256) and compare.
That closes the "we swapped the JS after deploy" gap. It doesn't close
"we swapped it between the verification moment and now" — SRI for SPA
entrypoints is still not really a thing. That layer is on browser vendors.
The "two agents review every merge" idea upthread is creative, but I worry
that once the check is automated people stop reading what's actually
verified. A dumb published hash is harder to fake without getting caught.
(FWIW, working on a similar trust problem from the other end — a CLI + phone
app that relays AI agent I/O between a dev's machine and their phone
[codeagent-mobile.com]. "Your code never leaves your machine" is easy to
say, genuinely hard to prove.)Related pattern I've leaned into heavily: treating .md files as structured state the agent reads back, not just output. YAML frontmatter parsed as fields (status, dependencies, ids), prose only in the body. Turns them from "throwaway outputs" into state the filesystem enforces across sessions — a new session can't silently drift what was decided in the previous one.
Your styling-via-frontmatter is the same mechanism applied to presentation. Have you thought about a read mode that exposes the frontmatter as structured data, for agents that consume sdoc URLs downstream?
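A read mode like that could start very small. A sketch of the split; it handles only flat `key: value` pairs (the status/dependencies/ids fields mentioned above), and a real implementation would use a proper YAML parser:

```javascript
// Sketch: split a markdown document into frontmatter fields and body.
// Only flat `key: value` lines are handled; nested YAML is out of scope.
function parseFrontmatter(md) {
  const m = md.match(/^---\n([\s\S]*?)\n---\n?([\s\S]*)$/);
  if (!m) return { fields: {}, body: md };
  const fields = {};
  for (const line of m[1].split('\n')) {
    const i = line.indexOf(':');
    if (i > 0) fields[line.slice(0, i).trim()] = line.slice(i + 1).trim();
  }
  return { fields, body: m[2] };
}
```

An agent consuming an sdoc URL downstream could then read `fields` as machine state and feed only `body` to the model.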
At the moment the most efficient way to get sdocs content into an agent is to copy the actual content, which isn't very elegant.
Re URL length: Yes... I have a feeling it could become an issue. I was wondering if a browser extension might give users the ability to have shorter urls without losing privacy... but haven't looked into it deeply/don't know if it would be possible (browser extensions are decent bridges between the local machine and the browser, so maybe some sort of decryption key could be used to allow for more compressed urls...)
I.e. .md -> gzip -> base64
The compression algo SDocs uses reduces the size of your markdown by ~10x, so an 80KB URL still corresponds to ~800KB of markdown, which is fairly beefy.