Hmm. Centralizing the end graph is a necessity for quality. If you had more users and thus more data you could sacrifice that a bit.
What could be done, in theory (I'm not going to do it; no time), is to set up collector proxies. You would need to pool multiple people to run a smaller version and then export the result to the centralized one.
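Roughly what I mean, as a minimal sketch. The port, endpoint, and payload shape here are all placeholders, not a real protocol:

```ts
// Minimal collector-proxy sketch. Endpoint and payload shape are made up.
import http from "node:http";

// key: "fromUrl|toUrl" -> pooled co-occurrence count
const counts = new Map<string, number>();

// Accept link-pair events from a small pool of trusted users.
const server = http.createServer((req, res) => {
  let body = "";
  req.on("data", (chunk) => (body += chunk));
  req.on("end", () => {
    try {
      const { from, to } = JSON.parse(body); // two urls visited in sequence
      const key = `${from}|${to}`;
      counts.set(key, (counts.get(key) ?? 0) + 1);
      res.statusCode = 204;
    } catch {
      res.statusCode = 400;
    }
    res.end();
  });
});
server.listen(8080);

// Every hour, ship only the pooled counts upstream and reset. Per-user
// identity and event order are discarded before anything leaves the proxy.
setInterval(async () => {
  if (counts.size === 0) return;
  const payload = JSON.stringify([...counts.entries()]);
  counts.clear();
  await fetch("https://central.example/import", { method: "POST", body: payload });
}, 60 * 60 * 1000);
```

The proxy operator still sees the raw events; only the pooled counts ever leave the box.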
That does mean someone is getting your history, but it might be someone you know (worse?). If you ran it locally and then exported the graph to the central server, you might as well just hand over the history.
Now perhaps if everyone, and I mean everyone, were using tor, it could be sent to an onion site that wouldn't know your ip address.
But I've said this for a long time: almost nobody cares about your ip address. They care about your usernames and session cookies. Those associate across devices and locations (work) and don't get mixed up with separate users in a household. If a company doesn't have a username for you (google doing analytics on some article), they will create a temporary session for you with third-party cookies, because session-associated data is so much better than ip addresses. So does tor or a vpn really help?
It's pretty funny how much vpn usage there is without things like ublock, privacy badger, user-script sanitizers, or cookie containers.
I guess options are a thing. You could set things to read-only in settings. But that would only work if you had more users, so you could still get good enough coverage from the people who don't click that option. People do like to influence things.
If I were to pursue this further, which I probably won't (I have so many projects; have you checked out https://js.lifelist.pw?), I would probably focus on getting the titles better, so that people just think it's worth it because the links are that good. (notabug's titles are always just "notabug", voat's v/all and v/all/new produce the same title, your site has some signin pages.)
So the goal would be to find url types that are just worthless and block them, and to interpret titles better.
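Something like this, just to show the shape of it. The patterns and the title rules are invented examples, not an actual spec:

```ts
// Invented example patterns; a real list would come from looking at the data.
const WORTHLESS_URL_PATTERNS: RegExp[] = [
  /\/(login|signin|signup|logout)\b/i, // auth pages carry no content
  /[?&](session|token)=/i,             // session-specific urls
  /^https?:\/\/[^/]+\/?$/i,            // bare homepages
];

function isWorthless(url: string): boolean {
  return WORTHLESS_URL_PATTERNS.some((p) => p.test(url));
}

// Strip site-name boilerplate so titles like "notabug" or
// "Something - notabug" don't all collapse into the same string.
// (Assumes siteName contains no regex metacharacters.)
function cleanTitle(title: string, siteName: string): string | null {
  const stripped = title
    .replace(new RegExp(`\\s*[-|]\\s*${siteName}\\s*$`, "i"), "")
    .trim();
  // If nothing meaningful is left, skip the link rather than index junk.
  if (!stripped || stripped.toLowerCase() === siteName.toLowerCase()) return null;
  return stripped;
}
```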
Maybe I could include upvoting and downvoting. Whether that just tells me what's producing bad links or actually feeds back into the associations, it might be good either way.
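Purely speculative, but if votes did feed back in, it could be as simple as scaling each link's score by a smoothed vote ratio (the fields and formula here are made up for illustration):

```ts
// Speculative vote feedback: scale raw co-occurrence by a vote-derived weight.
interface Association {
  to: string;    // the suggested url
  count: number; // co-occurrence count from browsing data
  up: number;    // upvotes on this suggestion
  down: number;  // downvotes
}

function score(a: Association): number {
  // Laplace smoothing so unvoted links start at 0.5 instead of being buried.
  const voteWeight = (a.up + 1) / (a.up + a.down + 2);
  return a.count * voteWeight;
}

// Sort candidate suggestions by the blended score, best first.
const rank = (candidates: Association[]) =>
  [...candidates].sort((x, y) => score(y) - score(x));
```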
So I actually built a much poorer system for this when I ran gvid.pw, which was a youtube alternative. I recently built a more generalized suggestion engine that just takes the pathname to a leveldb and does the rest. So this was also a proof of concept for using it.
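To give the flavor of that interface (the package and method names here are illustrative, not the engine's actual API):

```ts
// Hypothetical names; the real point is the shape: hand it a leveldb
// path and it does the rest.
import { SuggestionEngine } from "suggestion-engine"; // hypothetical package

const engine = new SuggestionEngine("./data/associations.level");

// Record that one item led to another (a click, an import, a watch).
await engine.associate("https://example.com/a", "https://example.com/b");

// Later, ask what's most associated with the thing the user just saw.
const suggestions = await engine.suggest("https://example.com/a", { limit: 10 });
console.log(suggestions);
```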
So I didn't build the suggestion engine just for this. I already had it, and I plan to use it both on gvid when I relaunch it and in a standalone distributed video app I've had planned for a little over a year.
Also on js.lifelist.pw: "You've imported this module in your code; you might try this one." "You used this app; you could try this one." It's infinitely useful.
That's kinda what we were just talking about this morning. Your timing couldn't be better... or worse, depending on how you look at it. We're throwing all the crap at you that we already talked out in the chat :p
Is "track what I do and [anonymously] use it to help other peoples' experience" a direction that the web could go? or at least include? Can that data be collected, sorted, and reused without making lists out of people?
Dissenter is a sweet idea, but their webserver gets your location/url/timestamp every time you load a page. Kinda like installing analytics on yourself for all sites.
Is there a way to make these site association lists p2p, or disseminate them in other ways, but still keep them useful for current day-to-day trending info?
This is an amazing and awesome quick execution of a concept that may be the future of the web in some way, but boy does it make the hair on the back of my neck stand up thinking about the privacy stuff.