71 points by FailMore 3 days ago | 13 comments
petersumskas 2 hours ago
> Markdown files are slightly annoying to read/preview

Maybe I’ve missed the intentions of markdown, but the ability to easily read the plain text version has always been the killer feature.

Rendering as html is a nice bonus.

I understand there are plenty of useful things to say “but what about…” to, like inline images, and I use them. But they still detract from what differentiated markdown in the first place.

The more of that you add, the more it could have been any document format.

FailMore 56 minutes ago
I feel like things have changed as the main interface for code has (for some) become an agent running in the CLI. I certainly check my code editor far less frequently than before, and because of that, easily reading/rendering Markdown files has become more of a pain for me than it used to be.
FailMore 1 day ago
A little update: I added privacy-focused optional shorter URLs to SDocs.

You can read more about the implementation here: https://sdocs.dev/#sec=short-links

Briefly:

  https://sdocs.dev/s/{short id}#k={encryption key}
                      └────┬───┘   └───────┬──────┘
                           │                │
                      sent to           never leaves
                       server           your browser

We encrypt your document client side. The encrypted document is sent to the server with an id to save it against. The encryption key stays client side in the URL fragment. (And - probably very obviously - the encryption key is required to make the server-stored text readable again.)

You can test this by opening your browser's developer tools, switching to the Network tab, clicking Generate next to the "Short URL" heading, and inspecting the request body. You will see a base64-encoded blob of random bytes, not your document.

big_toast 2 days ago
URL data sites are always very cool to me. The offline service worker part is great.

The analytics[1] is incredible. Thank you for sharing (and explaining)! I love this implementation.

I'm a little confused about the privacy mention. Maybe the fragment data isn't passed, but that's not a particularly strong guarantee. The JavaScript still has access, so privacy is just a promise as far as I can tell.

Am I misunderstanding something and is there a stronger mechanism in browsers preserving the fragment data's isolation? Or is there some way to prove a url is running a github repo without modification?

[1]:https://sdocs.dev/analytics

FailMore 2 days ago
Thanks for the kind words re the analytics!

You are right re privacy. It is possible to go from URL hash -> parse -> send to server (to be clear, that's not what SDocs does).

I’ve been thinking about how to prove our privacy mechanism. The idea in my head at the moment is to have 2+ established coding agents review the code after every merge to the codebase and provide a signal (maybe visible in the footer) that, according to them, it is secure and the check was made after the latest merge. Maybe overkill?! Or maybe a new way to “prove” things?? If you have other ideas please let me know.

adelks 4 hours ago
How about simply making the website an app and having it load your markdown file with a button and file browser, just like e.g. https://app.diagrams.net/

And I believe you can then tell the browser that you need no network communication at that point. And a user can double check that.

big_toast 2 days ago
No, I don't have any good ideas. Just hoping someone else does, or that I'm missing something.

I think it's in the hands of browser vendors.

The agent review a la socket.dev probably doesn't address all the gaps. I think you're already doing about as much as you reasonably can.

FailMore 2 days ago
Thanks. The question has made me wonder about the value of some sort of real time verification service.
Nevermark 6 hours ago
If it's possible, isolate that part of the code and essentially freeze it for long periods. At least people would know it wasn't being tweaked under them all the time.

That is my half of a bad idea.

FailMore 54 minutes ago
I have something coming out soon (just working on it). Your client (browser) has hashing algos built into it, so the browser can hash all the front-end assets it's served. Every commit merged into main will cause a hash of all the public files to be generated. We will allow you to compare the hashes of the front-end files in your browser with the hashes from the public GH project. Interested to know what you think...
edgardurand 5 hours ago
For the "prove the server doesn't touch the data" problem — the realistic path today is probably reproducible builds + published bundle hashes.

Concretely: the sdocs.dev JS bundle should be byte-for-byte reproducible from a clean checkout at a given commit. You publish { gitSha, bundleSha256 } on the landing page. Users (or agents) can compute the hash of what their browser actually loaded (DevTools → Sources → Save As → sha256) and compare.

That closes the "we swapped the JS after deploy" gap. It doesn't close "we swapped it between the verification moment and now" — SRI for SPA entrypoints is still not really a thing. That layer is on browser vendors.

The "two agents review every merge" idea upthread is creative, but I worry that once the check is automated people stop reading what's actually verified. A dumb published hash is harder to fake without getting caught.

(FWIW, working on a similar trust problem from the other end — a CLI + phone app that relays AI agent I/O between a dev's machine and their phone [codeagent-mobile.com]. "Your code never leaves your machine" is easy to say, genuinely hard to prove.)
FailMore 52 minutes ago
That's basically exactly what I'm working on now, actually. We will let you compare all the publicly served files with their hashes on GitHub.
big_toast 5 hours ago
Ya. I could imagine a browser extension performing some form of verification loop for simpler webpages. Maybe too niche.
fredericgalline 22 hours ago
Nice implementation — the URL fragment trick for privacy is clever.

Related pattern I've leaned into heavily: treating .md files as structured state the agent reads back, not just output. YAML frontmatter parsed as fields (status, dependencies, ids), prose only in the body. Turns them from "throwaway outputs" into state the filesystem enforces across sessions — a new session can't silently drift what was decided in the previous one.

Your styling-via-frontmatter is the same mechanism applied to presentation. Have you thought about a read mode that exposes the frontmatter as structured data, for agents that consume sdoc URLs downstream?
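
A toy sketch of what that read mode could look like (flat `key: value` frontmatter only; a real implementation would use a YAML parser, and `parseSdoc` is a hypothetical name):

```javascript
// Split a markdown document into structured frontmatter fields and a prose body.
function parseSdoc(md) {
  const m = md.match(/^---\n([\s\S]*?)\n---\n?/);
  if (!m) return { fields: {}, body: md };
  const fields = {};
  for (const line of m[1].split('\n')) {
    const i = line.indexOf(':');
    if (i > 0) fields[line.slice(0, i).trim()] = line.slice(i + 1).trim();
  }
  return { fields, body: md.slice(m[0].length) };
}
```

An agent consuming an sdoc URL downstream could then read `fields.status` or `fields.dependencies` as state instead of re-deriving them from prose.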

FailMore 50 minutes ago
I think the next thing I want to do (but I'm not sure how to implement it yet) is to make it easy for your agent to go from an SDocs URL to content. I don't know if that's via curl or an `sdoc` command, or some other way... That could include the styling frontmatter, or the agent could specify it.

At the moment the most efficient way to get sdocs content into an agent is to copy the actual content. But I think that's not too beautiful.

throwaway81523 3 hours ago
Soon... there are 15 competing standards.
pdyc 2 days ago
I also used the fragment technique for sharing HTML snippets, but URLs became very long; I had to implement an optional URL shortener after users complained. Unfortunately that meant server interaction.

https://easyanalytica.com/tools/html-playground/

FailMore 1 day ago
(I left a stand-alone comment above with the details, but:) A little update: I added privacy-focused optional shorter URLs to SDocs. You can read more about the implementation here: https://sdocs.dev/#sec=short-links

FailMore 2 days ago
Really nice implementation by the way.

Re URL length: yes, I have a feeling it could become an issue. I was wondering if a browser extension might give users shorter URLs without losing privacy, but I haven't looked into it deeply / don't know if it would be possible. (Browser extensions are decent bridges between the local machine and the browser, so maybe some sort of decryption key could be used to allow for more compressed URLs...)

pdyc 2 days ago
I doubt it would be possible; it boils down to a compression problem: compressing x amount of content into y bits. Since the content is unpredictable, it cannot be done without an intermediary to store it.
mystickphoenix 2 days ago
For this use-case, maybe compression and then encoding would get more data into the URL before you hit a limit (or before users complain)?

I.e. .md -> gzip -> base64

beckford 1 hour ago
Using fragments for secure data has been discussed before on HN: https://news.ycombinator.com/item?id=23036515. TL;DR: it may not go directly to the server (unless you are using a buggy browser or web client), but the fragment is captured in several places.
stealthy_ 2 days ago
Nice, I've also built something like this we use internally. Will it reduce token consumption as well?
FailMore 2 days ago
Thanks. Re tokens reduction: not that I’m aware of. Would you mind explaining how it might? That could be a cool feature to add
Arij_Aziz 20 hours ago
This is a neat tool. I always had to manually copy-paste long texts into Notepad and convert them into md format. Obviously I couldn't parse complex sites with lots of images or those that had weird editing. This will be useful.
FailMore 19 hours ago
Thank you. If you use an AI agent you might be able to tell it to curl the target website, extract the content into a markdown file and then sdoc it. It might have some interesting ideas with images (using the hosted URLs or hosting them yourself somehow)
moaning 2 days ago
Markdown style editing looks very easy and convenient
FailMore 2 days ago
Thanks! One potential use case I have for it is being able to make "branded" markdown if you need to share something with a client/public facing.
moeadham 3 days ago
I had not heard of url fragments before. Is there a size cap?
FailMore 3 days ago
Ish, but the cap is the length of URL that the browser can handle. For desktop Chrome it's 2MB, but for mobile Safari it's 80KB.

The compression algo SDocs uses reduces the size of your markdown file by ~10x, so 80KB is still ~800KB of markdown, which is fairly beefy.

tcfhgj 6 hours ago
It's 2^16=65,536 bytes for Firefox
vivid242 2 days ago
Hadn’t heard of it either - very smart, could open up lots of other privacy-friendly "client-based web" apps.
FailMore 2 days ago
TYVM. Yeah, I am curious to explore moving into other file formats like CSVs.
pbronez 1 day ago
Cool project. Heads up - there’s a commercial company with a very similar name that might decide to hassle you about it:

https://www.sdocs.com/

FailMore 1 day ago
Thanks + thanks for the heads up. I will see what happens. It's a domain-name war out there!
deepfriedbits 2 hours ago
In the spirit of r/IllegallySmolCats, perhaps SmolDocs is a possible option.