
Forum software future: Async(-.pre)
zeb
#11
16-01-2020, 02:04 PM
(13-01-2020, 10:23 PM)Voltralog Wrote: I admit I'm surprised at the no URL proposal. It's such a fundamental part of the web.

I think the point is exactly not to be part of "The Web", which has its own shortcomings.

URLs aren't the problem, Web URLs are. I'm pretty confident that if we develop this envisioned forum software we'll also develop a standard for referencing content that's much like URLs (ok, probably not as human readable in most cases). But since these would reference content inside the system, it will be available as long as the post referencing it is.

In that sense I think a good solution to the Web URL problem is having a website snapshot data type (e.g. just a large screenshot to avoid security problems; please no PDF, it's too powerful a format) that can reside inside async. Whenever one posts something with a URL in it, a script in the background fetches the website and attaches the snapshot object to the post. You can even special-case it and use youtube-dl on YouTube URLs, or just download the content if the MIME type isn't text/html.

In the best case all this can run in headless mode with Tor in the background. That way you also avoid exposing any information about yourself (if you were using a browser plugin you could be logged in somewhere and the snapshot could dox you). Of course that inherently limits this approach to publicly available content.
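
A minimal sketch of such a background fetcher, assuming chromium's headless flags, torsocks and youtube-dl are available (all file names are made up):

#!/bin/sh
# Hypothetical hook: $1 is a URL extracted from a freshly submitted post.
url="$1"

# Special case: for YouTube links, grab the media itself.
case "$url" in
  *youtube.com/*|*youtu.be/*)
    torsocks youtube-dl -o 'snapshot-%(id)s.%(ext)s' "$url"
    exit $? ;;
esac

# If the MIME type isn't text/html, just download the content directly.
mime=$(torsocks curl -sIL "$url" | tr -d '\r' | awk -F': ' 'tolower($1)=="content-type" {print $2; exit}')
case "$mime" in
  text/html*)
    # Tall-window screenshot via headless Chromium, routed through Tor.
    torsocks chromium --headless --disable-gpu \
      --window-size=1280,10000 --screenshot=snapshot.png "$url" ;;
  *)
    torsocks curl -sL -o snapshot.bin "$url" ;;
esac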

@frankbraun @smuggler
# Centralization vs Decentralization

If we go the centralized path, it should at least be super easy for members to take automated daily snapshots.
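
(For illustration, a daily snapshot could be as little as one crontab line with wget's mirror mode; the host name is made up:)

# Mirror the forum into ~/bbs-archive every night at 04:00
0 4 * * * wget --mirror --page-requisites --convert-links --wait=1 -P ~/bbs-archive https://bbs.example.net/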

I view a system that allows live replication and only limits access through a centralized CA as more attractive (but also harder to implement, of course). It would also make different clients much easier: there could be a pure Web/JS client (for total newcomers / not IT-savvy people), a private server/web-interface client, and a CLI client, and all could share the same backend code base thanks to WebASM. Personally I'd be most interested in the private server/web client approach.
smuggler
#12
16-01-2020, 07:39 PM
# Remote Content
Yes, the point is to NOT depend on the web for content to remain available. A URL pointing to the original source is useful, but the content should become part of the forum system so it remains accessible AND doesn't require users to follow links out to the web with all its perils.
Creating automated screenshots (image or PDF) via headless mode is not a big problem. The problem is rather that a lot of content requires user session information, such as clicking away one of those pesky cookie confirmation boxes or "see more" links. Also, one might want to include content behind paywalls. PDF, while being too powerful a format to feel safe, has the upside of containing (hopefully) text information that is lost in pure bitmaps.
The loss of text information could potentially be addressed by using one of the OCR tools out there with layout analysis - but that's a stretch goal. A pure picture would be enough for now, I think.
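
(A sketch, assuming the stock tesseract CLI; --psm 1 asks for automatic page segmentation with orientation/script detection:)

# Recover searchable text from the snapshot bitmap; writes snapshot.txt
$ tesseract snapshot.png snapshot --psm 1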

To solve the doxxing issue, one could just make the screenshot editable before upload, so that people can black out sections manually, or just crop to the main content.

A quick search turns up a couple of browser extensions that support this basic functionality. Some hacking might turn one into everything we need. I think only with that kind of support can we have any hope that people give up the "post URLs without comment" habit that I personally find deeply annoying and counterproductive.

# Centralization vs Decentralization
I envision something like a public repository from which users (with access rights) can just download all modifications since their last synchronization. Also useful for "offline mode".
Access rights for the forum should come in two forms:

- Towards the repository: show that you have rights to post or retrieve data for a certain path. These could very well be shared access tokens, to maintain some more privacy vs the repository. What access requirements are necessary for a "sub board" should be up to its administrators. For example, boards could very well allow everybody to download all data and only limit uploads/posts. The reason to have post permissions is that it helps a lot with limiting attacks, and it would allow implementing moderation pipelines: people who can publish directly, as well as people whose content has to be approved first. I see a use for both, depending on the specific board.

- Towards the content: the ability to decrypt and understand the posts. This again could be a simple concept, like just having group keys that are distributed to people with read access.

Posts themselves just require the necessary signatures for access control / the moderation pipeline. They could in addition be signed (before content encryption, to conceal information from the repository), with multiple signing schemes being possible depending on future features to be added.
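
A rough sketch of that layering, with stock gpg standing in for whatever scheme we'd actually pick (key names and file names are made up):

# Author signs the post with their stable identity key...
$ gpg --local-user alice@example.net --detach-sign post.md

# ...then post + signature are encrypted with the board's shared group key,
# so the repository only ever sees ciphertext.
$ tar cf post.tar post.md post.md.sig
$ gpg --symmetric --cipher-algo AES256 --batch --passphrase-file board-group.key post.tar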

This would put board admins in the position that they have to distribute access tokens and decryption keys. New decryption keys would only need to be distributed when people lose access rights.
The requirement there is a way for members to receive private messages from the admins/moderators of the boards they subscribe to, so that keys can be updated and distributed. That's basically the stable cryptographic identity coming into play. To prevent targeting of single users, a commitment to the current state of keys/tokens should be publicly available in the datastructure.
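
(The commitment could be as primitive as publishing a hash of the current key/token set, so every member can verify they were handed the same state as everyone else; file names invented:)

$ sha256sum board-group-keys.v3 > key-commitment.txt
$ cat key-commitment.txt
9f2b...  board-group-keys.v3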

Overall, the repository then would be a rather simple thing to implement, synchronization as well. UI/UX will be a bigger chunk.

Maybe I'll just go ahead and throw together some experimental stuff, and we'll see where it can go from there, or if it should go somewhere else.
ferris
#13
23-01-2020, 08:03 AM
I found this board while leeching the mp3s from the taz0 website and realized that the BBS link was just added recently. So I was amazed. They really put up a BBS? I opened my magiterm and tried to connect via telnet. Nah, telnet, no way? So SSH. Nothing. BBS maybe down? I pasted the link into the address bar of the browser and boing, there you go!

That being said, I understand a BBS as where it came from: a dial-up BBS system. These days they operate through telnet/SSH/HTTPS websockets.

There are still SysOps operating these systems. Mostly retrocomputing geeks and ANSI art fans. The thing to mention here is that they operate not only local boards; they also have network boards like ArakNet, AgoraNet, FidoNet...

And another feature they offer is mods, for example so-called 'doors', which essentially are custom binaries with STDOUT functionality that can also have grid-networking functionality. Mostly used for games with worldwide all-time highscore lists.

Just wanted to mention this, because it could be relevant to further rethinking of BBS systems.

https://imgur.com/a/DvcZ5ZD (screenshot of a bbs network selection menu)
smuggler
#14
25-01-2020, 03:09 PM
ferris, I hear you. I've been missing ANSI art and terminal access. It's just some longing that a lot of people do not share, at all.
However, I am in full support of adding a terminal/TUI interface to whatever next system we end up finding/developing/dreaming about.
jeltz
#15
27-01-2020, 10:09 PM
Hi everyone, I am new here ^_^

@smuggler

> I envision something like a public repository from which users (with access rights) can just download all modifications since their last synchronization.
> Also useful for "offline mode".

You mean something "like" git? Where users can "git clone" a local copy of the forum and "git push" their changes to the master server, with a feature like "git --gpg-sign", and can fork a branch (topic) and push to other servers.
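
Something like this, I mean (host, board and file names made up; signing would be per commit):

$ git clone ssh://bbs.example.net/boards/async.git    # local copy of the board
$ cd async
$ $EDITOR topics/forum-software-future/reply-018.md   # write offline
$ git add topics/forum-software-future/reply-018.md
$ git commit -S -m "Re: Forum software future"        # GPG-signed commit
$ git push origin master                              # publish
$ git pull --ff-only                                  # fetch everyone else's posts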

Also, when you speak about a TUI, do you think of "curses style" applications like weechat/irssi (where all the server interactions are transparent to the user),
or more the old UN*X way: a simple set of command-line options, where you need to understand the tech and all the network requests are triggered manually by the user?

Sending a PM the "old" way could look something like this:

user@localhost:~$ async pm frankbraun -m "Hi frank i love your face mask style"

user@localhost:~$ async sync --all --verbose
Sync bbs.arnarplex.net...
[*] 3 new replies to subject XXX
[*] New topic from XXX into XXX section
[#] bbs.arnarplex.net is up to date

user@localhost:~$ async send --all
[*] PM frankbraun OK!
[#] All changes have been sent!
smuggler
#16
28-01-2020, 04:35 AM
@jeltz: first, welcome!

You describe one of the interaction modes really well. Git-style clone/push/pull for data synchronization - including the ability to fork, use different servers, and synchronize servers.
For interface, there should be options. A lot of users will die if they can't use an app or browser interface.
Personally, I'd love to have something like a locally running BBS-style interface, something like mutt (but with all the nice ANSI art please :) ).
More CLI style (the "UN*X way" you mention) should also be possible, and is likely the first thing to do.

Keybase is roughly on the way to it, except that I don't appreciate their overall communication design that much.

I kinda envision something like a local directory/file structure that is then interpreted by whatever frontend is put on top of it.
Makes it more scriptable (and yes, anti-spam will be a thing) and flexible.
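
Roughly like this, maybe (every name here is hypothetical):

~/.async/
├── boards/
│   └── dd-main/
│       ├── topics/
│       │   └── forum-software-future/
│       │       ├── 001-zeb.md
│       │       └── 002-smuggler.md
│       └── attachments/
├── keys/       # group keys, access tokens
└── outbox/     # posts/PMs written offline, pushed on next sync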
jeltz
#17
28-01-2020, 08:16 PM
@smuggler
smuggler Wrote: For interface, there should be options. A lot of users will die if they can't use an app or browser interface.

I understand that; I think we should focus on the interface WE need before making a "transparent tech" product.

smuggler Wrote: Personally, I'd love to have something like a locally running BBS-style interface, something like mutt (but with all the nice ANSI art please :) ).

The curses window should be configurable like mutt and call external readers (zathura, feh, evince...) for images and PDFs.
Vi key bindings + color schemes would be nice, I think.

In my vision it is very important to dissociate the writing/reading (in the mutt-like window), which modifies the local tree, from the actual pull/push actions triggered in pure CLI.
I really like this kind of high-latency communication, where users take time to write long messages.
This model can be very resilient: a script syncs at fixed hours every day, there are no login/logout logs, and the server admin can't tell whether you are currently reading something (NO PHP sessions, NO JWT).

Just ephemeral connections for syncing.
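
For example, two crontab lines, so the only network traffic is the sync itself (the async commands are the hypothetical CLI from above):

# Pull twice a day, push queued posts/PMs right after
0 6,18 * * * async sync --all >> ~/.async/sync.log 2>&1
5 6,18 * * * async send --all >> ~/.async/sync.log 2>&1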
koidestroyer
#18
05-04-2020, 07:29 AM
I love the main ideas behind this, but I've seen too many times that grand visions are either never started or left half built because life happens. To get an ambitious project off the ground you need committed leaders who will drive and keep momentum, and if anything happens to them (forcing them to stop working on the project), the project dies.

A relatively good solution to that problem is to design the project around small and useful deliverables, so that even if the project only gets half done, we get something useful and working out of it. I'd strongly suggest prioritizing development in this manner to make sure efforts don't go to waste. For example:

a) Encrypted and secure user id system
b) Git style file based mechanism to deliver and distribute content
c) Terminal UI
d) Web UI

And even here, split each part into useful chunks, if possible:

a.1) Authentication server
a.2) Authorization server

and so on...
BCC
#19
18-04-2020, 03:41 AM
Hello All.

Great to see this thread.

I'm participating in this exact same discussion (and undertaking) in other online communities, which are also perceiving the need to carve out their own space separate from the influence, control and oversight of mainstream corporations and institutions. So, FWIW, I thought I would share here some of what I've been working on with them, as I think it might also be applicable here.

Before describing some of the solutions I've been working on, I'll outline the requirements that have been guiding me, which I believe are also compatible with this community.

Requirements: A communication and information sharing platform that supports, at a minimum, short-form messages (IM, chat, etc.), long-form messages (articles, structured documents, etc.), realtime collaboration (shared workspaces/documents/presentations, ideally with voice and/or video conferencing), and multimedia content (audio, video, etc.). The sharing of such information can happen via different modes (async/sync, broadcast/unicast/multicast, etc.) and methods (searching, navigating, sharing, etc.), which requires that information in its various forms be recordable, searchable, and referenceable.

Fortunately, there are already many applications which support exactly this type of information sharing: web browsers, email readers, web servers, wikis, search engines, chat clients, news readers, etc. And even more important, there are also standards for interoperability and sharing of data, such as SMTP, POP, IMAP, HTTP, NNTP, XMPP, SIP, etc.

So the problem here is not that the requisite applications don't exist. Rather, it's the public networks and centralised platforms these applications typically connect to which are often operated in arbitrary, capricious and unprincipled ways.

The solution then is to provide an alternative to the current centralised network/platform arrangement, thus avoiding those centralised chokepoints of control.

For this reason I've been experimenting with the use of private communications networks. These are permissioned, encrypted LANs which operate over the public network, i.e. a VPN overlay.

There are many technologies available to achieve this. I've been using ZeroTier (https://www.zerotier.com/) for years to provide a virtual LAN that connects compute nodes I use which are located around the world, some of which even reside on mobile networks. (Actually, I run several LANs so as to isolate different groups of unrelated nodes/operations.) Another VPN technology, also promising but in earlier stages, is WireGuard (https://www.wireguard.com/). These and other software-defined networking (SDN) technologies allow communication among distributed, remote entities to be encrypted, with access control, management and monitoring all done using existing network management tools and best practices.
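
(To give a feel for how little is involved on a member's side, joining a ZeroTier network is essentially this; the network ID is a placeholder:)

$ curl -s https://install.zerotier.com | sudo bash   # install the client
$ sudo zerotier-cli join 1234567890abcdef            # join the community LAN
$ sudo zerotier-cli listnetworks                     # check status; the network admin still has to authorise the node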

I envision small communities (such as here) establishing and managing their own private SDNs. Community members' devices would be granted access to the network. (For better security, a user's devices can connect to the SDN via regular VPNs, so that the device's IP address on the SDN shows as a VPN endpoint and not the user's actual IP address.) Members can then freely converse with each other using standard applications without concern for content leaking out into the public network, nor of unauthorised access to that content. If desired, within each such network a host can be set up to act as a secure gateway in order to share information outside the community in a controlled way. Outsiders here might be the general public, or invited guests (with a password for access, for example), or it could just be the Tor network in general, by being a Tor onion-service endpoint for instance. The point being that controlling access to shared information in such an environment can be done effectively and easily, because it uses existing network technologies and best practices. I believe this approach of private networking is the natural solution to the true problem we're facing.

Which brings me to say a few things about the nodes on such a network. Obviously, the nodes are network endpoints. But they are also richer than the simple user devices we see on the internet today. In today’s centralised networks, because content resides on centralised nodes, users need only run browsers and other lightweight apps to fetch data and send updates. But if control over centralised data is to be avoided then data must instead reside on the user’s nodes - implying a distributed (content) architecture.

This in fact was the original vision of what the WWW would be, according to its inventor Tim Berners-Lee. He had imagined that the browser (as a user agent) would also be a webserver, so that it could both retrieve data from other WWW users (browser functionality) and host content to share (webserver functionality). Unfortunately that egalitarian design became roadkill very early on as large commercial interests sped into this new, undiscovered environment. To his credit, Tim is still pursuing this same general thinking - viz. his latest project, Solid (https://solid.mit.edu/).

Anyway, hosting content locally and running services locally is pretty easy to do when you're on a LAN with a fixed IP address. In my experimentation, I've had local compute nodes run webservers, wikis, a mail server, a news server, even video conferencing and video streaming servers - all on an internal network - using only laptops and even Raspberry Pis. I could go further and set up private DNS to provide user-friendly network addressing, and other niceties, although it's not a high priority at the moment (I just hack /etc/hosts). There is a surprisingly large number of applications available that can be hosted locally on a server (node). Such apps are usually also open source, which is another big plus in my book. And because the network environment is 'normal', the software typically has no problems in that regard, as opposed to getting software to work across something like Tor or IPFS or other alternative overlay technologies.
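
(The /etc/hosts hack is nothing more than mapping the private network addresses to friendly names; the addresses and names below are invented:)

# /etc/hosts on each member node
10.147.17.10   wiki.lan
10.147.17.11   news.lan
10.147.17.12   mail.lan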

With each user hosting their own content and services, we become curators of our own content. I have many and various interests, some of which this community might find interesting and some not. Being self-hosted, I can easily choose which content is available to which network and thus which community. No longer does the community need to decide what they host and what they don't. It's all just user-hosted data. Likewise, if you come across some content that you want to retain (e.g. a movie), then you just copy it into your local archive (as I have done, a lot). It's as if each user is operating their own academy or library built around their own pursuits and interests. The idea generally is: if you want to retain something, then it's up to you.

I also have been experimenting with running my own NNTP news server (https://en.wikipedia.org/wiki/Usenet) for this same reason. It allows me to subscribe to the news channels that interest me and ignore the ones that don't, and I can set my own data retention policies. Other NNTP servers can connect to mine and download their updates. The Usenet news model (like email) was always intended to operate in a distributed (p2p) environment. It's perfect for this environment - except it's gone out of style, as only OGs like myself are still on it. But I think it could be a very useful tool for communities such as here. It naturally supports long-form, async, threaded, text-based comms. It doesn't have the distractions of embedded crap. It's searchable and easily navigable. It integrates well with email, and there are dozens of news reader applications. I think it would be a good fit.
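
(On Debian-ish systems, standing up such a node is roughly this; the group name is invented:)

$ sudo apt install inn2                     # the classic InterNetNews server
$ sudo ctlinnd newgroup community.general   # create a group in a local hierarchy
# Retention is a one-line policy per pattern in expire.ctl,
# e.g. keep everything in this hierarchy for a year:
#   community.*:A:1:365:365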

Another experiment I plan to try out is running a web indexing (search) system. To me, this is an essential aspect of any community that shares quality data: how to find it. This is even more important when data becomes distributed. The fact that centralised platforms today openly admit that they skew, alter and censor their search results is just unacceptable. I’d love to work on private-network search engines and services more.

I could go on about a few other apps and services that I’d like to see in private space that I think would make information discovery and management much easier and more scalable, but I’ll leave that for another post / another time if anyone is interested.

Finally, I'll share one other aspect of this that I've been working on, and that's system administration. Running (many) services locally can become an administrative burden on the user, which is not the case with the centralised platform model, where the likes of Facebook, Google, etc. have administrators managing their systems - applying software updates, backing up data, etc. I recognise that some users don't have the technical expertise to manage a node themselves, or perhaps don't want the responsibility that comes with such activities. Hence I've been exploring ways this could be minimised and/or outsourced. One idea is to distribute entire nodes as a single software package. These smart, low-maintenance nodes can be constructed through a combination of containerisation, orchestration, and simplified user interfaces. I'm experimenting with this now. This allows the management of the services to be simplified to just pulling down the latest version from a shared repository. I'm even going further and developing nodes that can run on low-cost commodity hardware (e.g. a Raspberry Pi 4), which only requires the user to plug in a network cable, an HDMI screen and keyboard (or not, if you want to run it headless and connect via RDP over WiFi), and optionally a cheap USB storage device. This appliance would then connect to the network, pull down the latest set of services, and start and maintain them. The appliance then becomes a standardised, commodity platform which software developers can easily develop for, much like what Synology created for their home disk storage servers that can run 3rd-party apps - but in this case for private, community networks.
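
(With containers, "applying software updates" collapses to something the appliance can run unattended; the directory and compose file here are the hypothetical node package:)

$ cd /opt/community-node
$ docker compose pull    # fetch the latest images of all bundled services
$ docker compose up -d   # restart whatever changed, leave the rest running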

I hope this info contributes positively to your discussion. I’d be happy to work with the community here on prototyping and experimenting further and even setting up a live implementation to serve the community. I think this is something that many other similar communities might also benefit from.

cheers,
BC
smuggler
#20
19-04-2020, 02:35 PM
(18-04-2020, 03:41 AM)BCC Wrote: I also have been experimenting with running my own NNTP news server (https://en.wikipedia.org/wiki/Usenet) for this same reason. It allows me to subscribe to the news channels that interest me and ignore the ones that don't, and I can set my own data retention policies. Other NNTP servers can connect to mine and download their updates. The Usenet news model (like email) was always intended to operate in a distributed (p2p) environment. It's perfect for this environment - except it's gone out of style, as only OGs like myself are still on it. But I think it could be a very useful tool for communities such as here. It naturally supports long-form, async, threaded, text-based comms. It doesn't have the distractions of embedded crap. It's searchable and easily navigable. It integrates well with email, and there are dozens of news reader applications. I think it would be a good fit.

Actually, you make a good point here. Usenet is halfway to what would probably be required; just slap encryption on top, and maybe better access control for hierarchies. Gotta look at that again - it might be a way to bootstrap.