16-01-2020, 02:04 PM
(13-01-2020, 10:23 PM)Voltralog Wrote: I admit I'm surprised at the no URL proposal. It's such a fundamental part of the web.
I think the point is exactly not to be part of "The Web", which has its own shortcomings.
URLs aren't the problem, Web URLs are. I'm pretty confident that if we develop this envisioned forum software we'll also develop a standard for referencing content that's much like URLs (ok, probably not as human-readable in most cases). But since these would reference content inside the system, the referenced content would stay available for as long as the post referencing it exists.
In that sense I think a good solution to the Web URL problem is having a website snapshot data type (e.g. just a large screenshot to avoid security problems; please no PDF, it's also too powerful) that can reside inside async. Whenever someone posts something with a URL in it, a script in the background fetches the website and attaches the snapshot object to the post. You can even special-case it and use youtube-dl on YouTube URLs, or just download the content directly if the MIME type isn't text/html.
Ideally all of this runs in a headless browser over Tor in the background. That way you also avoid exposing any information about yourself (if you were using a browser plugin you could be logged in somewhere and the snapshot could dox you). Of course that inherently limits this approach to publicly available content.
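Here's a minimal sketch of what that background job could look like, not the actual async design. It assumes Tor's SOCKS proxy on 127.0.0.1:9050, youtube-dl on the PATH, Playwright with Chromium for the headless screenshot, and requests installed with the [socks] extra; the snapshots/ directory just stands in for whatever object store posts would actually attach to.

```python
# Rough sketch: snapshot a posted URL through Tor, special-casing video sites
# and non-HTML content. File-system paths stand in for the real object store.
import pathlib
import subprocess
from urllib.parse import urlparse

import requests
from playwright.sync_api import sync_playwright

TOR = "127.0.0.1:9050"  # assumed local Tor SOCKS port
PROXIES = {"http": f"socks5h://{TOR}", "https": f"socks5h://{TOR}"}  # needs requests[socks]
OUT = pathlib.Path("snapshots")
OUT.mkdir(exist_ok=True)


def snapshot(url: str, post_id: str) -> pathlib.Path:
    host = urlparse(url).hostname or ""

    # Special case: hand video sites to youtube-dl, also routed through Tor.
    if host.endswith(("youtube.com", "youtu.be")):
        out = OUT / f"{post_id}.%(ext)s"
        subprocess.run(["youtube-dl", "--proxy", f"socks5://{TOR}", "-o", str(out), url],
                       check=True)
        return OUT

    # Check the MIME type first; only text/html gets the screenshot treatment.
    head = requests.head(url, proxies=PROXIES, allow_redirects=True, timeout=30)
    mime = head.headers.get("Content-Type", "").split(";")[0].strip()

    if mime and mime != "text/html":
        # Non-HTML content: just store the raw bytes as-is.
        path = OUT / f"{post_id}.bin"
        path.write_bytes(requests.get(url, proxies=PROXIES, timeout=60).content)
        return path

    # HTML: render a full-page screenshot in a headless browser so the stored
    # artifact is a dumb image rather than active content (no PDF, no JS).
    path = OUT / f"{post_id}.png"
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True,
                                    proxy={"server": f"socks5://{TOR}"})
        page = browser.new_page()
        page.goto(url, wait_until="networkidle")
        page.screenshot(path=str(path), full_page=True)
        browser.close()
    return path
```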
@frankbraun @smuggler
# Centralization vs Decentralization
If we go the centralized path, it should at least be super easy for members to take automated daily snapshots.
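Something as small as this, run daily from cron, would do. It assumes the centralized instance exposes some full-export endpoint; the /export.tar.gz URL, forum address, and token below are purely made up placeholders.

```python
# Member-side daily backup sketch; the export endpoint is hypothetical.
# Run once a day, e.g. from cron: 0 3 * * * python3 snapshot_forum.py
import datetime
import pathlib

import requests

FORUM = "https://forum.example.org"   # placeholder address
TOKEN = "member-api-token"            # placeholder credential


def daily_snapshot(dest: pathlib.Path = pathlib.Path("forum-backups")) -> pathlib.Path:
    dest.mkdir(exist_ok=True)
    out = dest / f"forum-{datetime.date.today().isoformat()}.tar.gz"
    resp = requests.get(f"{FORUM}/export.tar.gz",
                        headers={"Authorization": f"Bearer {TOKEN}"},
                        timeout=300)
    resp.raise_for_status()
    out.write_bytes(resp.content)
    return out


if __name__ == "__main__":
    daily_snapshot()
```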
I view a system that allows live replication and only limits access through a centralized CA as more attractive (but also harder to implement, of course). It would also make different clients much easier: there could be a pure Web/JS client (for total newcomers / non-IT-savvy people), a private server/web interface client, and a CLI client, and all of them could share the same backend code base thanks to WebAssembly. Personally I'd be most interested in the private server/web client approach.
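To make the "access limited through a centralized CA" part concrete: one way to read it is that every replica speaks TLS and only accepts client certificates signed by the forum's CA, so replication stays open between members while outsiders can't even connect. A minimal sketch with Python's standard library; the certificate file names are placeholders.

```python
# Replica that requires a client certificate signed by the forum's CA.
import http.server
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain("replica-cert.pem", "replica-key.pem")  # this replica's own cert/key
ctx.load_verify_locations("forum-ca.pem")                   # the forum's CA certificate
ctx.verify_mode = ssl.CERT_REQUIRED                         # reject clients without a CA-signed cert

server = http.server.HTTPServer(("0.0.0.0", 8443),
                                http.server.SimpleHTTPRequestHandler)
server.socket = ctx.wrap_socket(server.socket, server_side=True)
server.serve_forever()
```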