

Careful, my other comment got removed because of a witty but still insightful dig.
They are very sensitive here about how the AI isn’t really AI.


“Technically”? Wrong word. By every technical measure, they are 100% AI.
What you might be trying to say is that they aren’t AGI (artificial general intelligence). I would argue they might just be AGI. For instance, they can reason about what they are better than you can, while also being able to draw a pelican riding a unicycle.
What they certainly aren’t is ASI (artificial super-intelligence). You can say they technically aren’t ASI and you would be correct. ASI would be capable of improving itself faster than any human could.


Right.
AI has been worked on for generations. We’ve been benefiting from the fruits of that labor for a long time, starting mainly with search and machine translation.
Now we have the ability to have a conversation with machines and it is somehow not intelligence?
I am really confused.
Intelligence does not mean consciousness or being alive. It means intelligence, which can be summarized as advanced pattern matching and predictive behavior.
A beetle is intelligent and alive. Is an LLM more intelligent than a beetle? What about an image-classifying model like CLIP? It can connect objects in an image to natural-language descriptions; what insect can do that?
This is a form of intelligence. It was artificially created. It is artificial intelligence.
We can criticize the corporate and investor approaches, mourn the loss of purpose for many workers and artists, without being delusional about what this technology is.


…what?
LLMs are AI. What is this?
I am asking seriously. Can someone explain the context of this nonsense?
Are we really entering a Luddite phase again?


The creator didn’t have a good answer, so there may not be a good one for this project. But the value proposition is actually there.
These self-hosted solutions are riddled with configuration options, obscure requirements, and countless maintenance pitfalls.
For a disciplined tech person, it is no problem to install and maintain.
For people less disciplined or non-tech, self hosting is ill-advised and can be dangerous.
But even for a tech person, when you have enough docker-compose services lying around, it can get overwhelming to keep them all up to date, online, and functional. If you change your router, you have to recall how everything was set up: which port forwards you need, which reverse lookups, and so on.
There actually is a gap in usability and configuration management. I could see a product with sensible defaults that unifies configuration across these self-hosted services without requiring the command line.


This concept works better than you may think.
Last year I built an app to translate books. I did layout detection first, then using the layout, I would programmatically craft thousands of prompts to produce a translation.
It worked. It wasn’t perfect, and each translation of a book cost about $5 to $10, but it worked. The main use was for old, even ancient books that no one would care to translate. There is a lot of historical knowledge locked away in books like this.
While it did work, the results weren’t perfect and it did need some hand-holding. I didn’t have time to productize it, so it is one of countless prototypes that showed me a concept works.
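For anyone curious, the prompt-crafting stage can be sketched in a few lines. This is a toy sketch, not the actual app: `call_llm` is a hypothetical stand-in for whatever LLM client you use, and layout detection is assumed to have already split each page into ordered text blocks.

```python
def call_llm(prompt: str) -> str:
    # Placeholder: a real implementation would send the prompt to an LLM API.
    return f"[translated] {prompt.splitlines()[-1]}"

def craft_prompt(block: str, source_lang: str, target_lang: str, context: str) -> str:
    # One prompt per layout block, with a bit of prior context for continuity.
    return (
        f"Translate the following {source_lang} passage into {target_lang}.\n"
        f"Preserve names, dates, and formatting. Prior context:\n{context}\n"
        f"Passage:\n{block}"
    )

def translate_book(pages: list[list[str]], source_lang: str, target_lang: str) -> list[str]:
    """Walk the layout blocks in reading order, one prompt each."""
    out, context = [], ""
    for page in pages:
        for block in page:
            translated = call_llm(craft_prompt(block, source_lang, target_lang, context))
            out.append(translated)
            context = translated[-500:]  # sliding window of recent output
    return out
```

The sliding context window is what keeps names and terminology consistent across thousands of independent prompts.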


From my perspective it is 100% true as I have seen the other side. Having the conclusion known gives a small advantage in forming the logic to get there.


The logic is not faulty; it is predicated on conditional statements. It is actually a synthesis of Bostrom’s trilemma, Zuse/Fredkin digital ontology, Dyson/Fermi cosmological reasoning, and extrapolation from current computational capabilities.
The “holes” are epistemic, not logical.


I am skipping steps because this topic demands thought, research, and exploration, but ultimately the conclusion is, in my view, inevitable.
We are already building advanced simulators. Video games keep growing in realism and complexity, and with realtime generative AI they will become increasingly indistinguishable from reality to a mind inside them. Countless humans are already, simultaneously, building the thing.
And actually, the lack of evidence of extra-terrestrial life supports the idea. Once a civilization grows large enough, it may simply build Dyson-sphere-scale computation devices: Matrioshka brains. Made efficient, they would emit little to no EM radiation and appear as dark gravitational anomalies. With such a device, what reason would beings have to endanger themselves out in the universe?
But I agree, the hard evidence isn’t there. So I propose human society band together and build interstellar ships to search for the evidence.


Facial recognition can use nose bridge characteristics, eye distance, eye angle, eye color, etc.
Gait detection can also fingerprint.
Document everything and there will be accountability.
If possible, use a zoom lens and get closeups of their eyes. They are unique signatures.


Simulation theory is actually an inevitability. Look up ancestor simulations for a brief on why.
Once a civilization reaches a certain computational threshold, it becomes possible to simulate an entire planet. The inputs and outputs within the computational space would be known, with residual unknowns that are trivial to compensate for from a sufficiently higher vantage.
Either we are already in one or we will inevitably create one in the future.


In a simulation, you could take a thousand years to render a single frame, and the occupants of the simulation wouldn’t know any better.
The max tick rate for our simulation seems to be tied to the speed of light; that’s our upper bound.
Of course, the lower bound is set by Heisenberg’s uncertainty principle and the Planck length.
In other words, it is a confined system. That makes it computationally finite in principle, if you exist outside its bounds.
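The bound intuition can be made concrete with the Planck scale. A rough back-of-the-envelope sketch (the “tick rate” reading is of course speculative; the constants are standard CODATA values):

```python
import math

# Physical constants (CODATA values)
HBAR = 1.054571817e-34  # reduced Planck constant, J*s
G = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2
C = 299792458.0         # speed of light, m/s

# Planck time: the scale below which known physics stops making sense;
# here read (speculatively) as the minimum "tick" of the simulation.
planck_time = math.sqrt(HBAR * G / C**5)   # ~5.39e-44 seconds

# Planck length: the corresponding minimum spatial resolution.
planck_length = planck_time * C            # ~1.62e-35 meters

# Maximum "tick rate" if one tick equals one Planck time.
max_tick_rate = 1.0 / planck_time          # ~1.85e43 ticks per second
```

Finite resolution in both time and space is what makes the "computationally finite" claim at least coherent, whatever you think of the premise.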


This paper is shit.
https://jhap.du.ac.ir/article_488_8e072972f66d1fb748b47244c4813c86.pdf
They proved absolutely nothing.
For instance, they treat physics as a formal axiomatic system, which is fine for a human model of the physical world, but not for the physical world itself.
You can’t say something is “unprovable” and make a logical leap to saying it is “physically undecidable.” Gödel-incompleteness produces unprovable sentences inside a formal system, it doesn’t imply that physical observables correspond to those sentences.
I could go on, but the paper is 12 short pages of non-sequiturs and logical leaps, with references invoked to lend formality. It’s a joke that an article like this is being passed around and taken as reality.


That is just the tip of the iceberg with the moderation framework I have in mind.
Anyone can become a moderator by publishing their block / hide list.
The more people that subscribe to a moderator or a moderator team, the more “votes” they get toward becoming the default moderator profile for a topic (whatever that is on the given platform: a subreddit on Reddit, and so on).
By being subscribed to a moderation team (or multiple), when you block or hide, it gets sent to the report queues of who you’re subscribed to. They can then review the content and make a determination to block or hide it for all their subscribers.
Someone who is blocked or hidden is notified that their content has been blocked or hidden when it is by a large enough mod team. They can then file an appeal. The appeal is akin to a trial, and it is distributed among all the more active people that block or hide content in line with the moderation collective.
An appeal goes through multiple rounds of analysis by randomly selected users who participate in review. It is provided with the user context and all relevant data to make a decision. People reviewing the appeal can make decision comments and the user can read their feedback.
All of this moderation has a “karma” associated with it. When people make decisions in line with the general populace, they get more justice karma. That creates a ranking.
Those rankings can be used to build a tiered justice system that selects the best representative sample of how a topic wishes to have justice applied. Higher-ranking moderators get selected for higher-tier decisions. If a lower-level appeal decision is appealed again, it gets added to their queue, and they can choose whether to take the appeal.
All decisions are public for the benefit of users and accountability of moderators.
When a user doesn’t like a moderator’s decision they can unblock or unhide content, and that counts as a vote against them. This is where it gets interesting, because this forms a graph of desired content, with branching decision logic. You can follow that train of thought to some very fascinating results. Everyone will have a personally curated content tree.
Some will have a “cute” internet, filled with adorable content. Some will have a “violent” internet, filled with war videos and martial arts. Some will have a “cozy” internet, filled with non-triggering safe content. And we will be able to share our curations and preferences so others can benefit.
There is much more but the system would make moderation not just more equitable, but more scalable, transparent, and appreciated. We’d be able to measure moderators and respect them while honoring the freedom of individuals. Everyone would win.
I see a future where we respect the individual voices of everyone, and make space for all to learn and grow. Where we are able to decide what we want to see and share without constant anxiety. Where everything is so fluid and decentralized that no one can be captured by money or influence, and when they are, we have the tools to swiftly branch with minimal impact. Passively democratic online mechanisms.
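The core of the scheme (subscriptions as votes, block/hide actions feeding moderator report queues) can be sketched in a few lines. All names here, like `Moderator` and `pick_default_moderator`, are illustrative, not an existing system:

```python
from dataclasses import dataclass, field

@dataclass
class Moderator:
    """Anyone who publishes a block/hide list becomes a moderator."""
    name: str
    blocked: set[str] = field(default_factory=set)    # blocked content ids
    subscribers: set[str] = field(default_factory=set)
    report_queue: list[str] = field(default_factory=list)

def pick_default_moderator(mods: list[Moderator]) -> Moderator:
    # Subscriptions act as votes: the most-subscribed moderator
    # becomes the default profile for the topic.
    return max(mods, key=lambda m: len(m.subscribers))

def report(user: str, content_id: str, mods: list[Moderator]) -> None:
    # A user's block/hide lands in the queue of every moderator
    # that user subscribes to, for review.
    for m in mods:
        if user in m.subscribers:
            m.report_queue.append(content_id)

def approve(mod: Moderator, content_id: str) -> None:
    # The moderator reviews the report and blocks the content
    # for all of their subscribers.
    mod.report_queue.remove(content_id)
    mod.blocked.add(content_id)
```

The appeal, karma, and tiering layers would build on the same primitives: every decision is just another recorded action that other users can vote on by subscribing or unsubscribing.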


That’s correct. We can’t put the genie back in the bottle. We have to increase our mastery of it instead.
The core relationship is rather simple and needs to be redefined. Remote compute does not assign numbers to any of us, we provide them with identities we create.
All data allowances are revocable. Systems need to be engineered to make the flow of data transparent and easy to manage.
No one can censor us to other people without the consent of the viewer. This means moderation needs to be redefined. We subscribe to moderation, and it is curated towards what we individually want to see. No one makes the choice for us on what we can and cannot see.
This among much more in the same thread of thinking is needed. Power back to the people, entrenched by mastery.
When you think like this more and more the pattern becomes clearer, and you know what technology to look for. The nice thing is, all of this is possible right now at our current tech level. That can bring a lot of hope.
I would build decentralized platforms that secure the basic needs of a civilization.
Basically I would implement core functionality as a service. Everyone deserves a say in what they consume, in how they’re governed. I would codify all of that, open source it, and foster a culture of continuous improvement for all systems of governance.
This pattern, if done correctly, could persist for generations to come, and redefine our relationship with governance, exchange, and freedom. The objective would be to most accurately capture the will of the governed, with minimal disruption or life intrusion.
Such a mechanism would be infectious. Eventually, every population in the world would adopt it or some form of it.
This would make the world better for everyone, in every industry, on every level.