but there’s no reason to think we can’t achieve it
They provide a reason.
Just because you create a model and prove something in it, doesn’t mean it has any relationship to the real world.
What are we science deniers now?
I also noticed that they were talking about sending arguments to a custom function? That’s like a day-one lesson if you already program. But this was something they couldn’t find in regular search?
Maybe I misunderstood something.
All right, I guess I’m here to collect then. We doin’ paypal or what?
I was equating singular words and entire sentences on purpose.
If you can recombine sentences in interesting ways, into paragraphs that are your own ideas, that isn’t plagiarism. Why would “people can’t construct unique sentences either” be a rebuttal if that’s not what plagiarism is?
Instead it studies the prior work of humans, finds patterns and combines these in unique and novel ways.
You’re anthropomorphising.
LLMs are little clink-clink machines that produce the most typical output. That’s how they’re trained. Ten thousand inputs say this image is of a streetlight? That’s how it knows.
The fact an LLM knows what The Lord of the Rings is at all means that Tolkien’s words, the images, the sounds, are all encoded in its weights somewhere. You can’t see them, it’s a black box, but they live there.
Could you say the same of the human brain? Sure. I know what a neuron is.
But, LLMs are not people.
All of that is beside the point, though. I was just floored by how cynical you could be about your own supposed craft.
A photograph of, say, a pretty flower is fantastic. As an enjoyer of art myself, I love it when people communicate things. People can share in the beauty that you saw. They can talk about it. Talk about how the colors and the framing make them feel. But if your view is that you’re not actually adding anything, that you’re just doing more of what already exists, I really don’t know why you bother.
Nobody has seen every photo in the world.
Okay, assume someone has. Is your art meaningless, then? All of photography is just spectacle, and all the spectacles have been seen?
it doesn’t mean you can’t combine them in unique ways
Okay, so you do believe new things can be unique. You just think that plagiarism is when one person uses the word ‘the’ and then a second person uses the word ‘the’.
Why do you find it such a depressing idea?
That art is dead? Through sheer saturation alone, no one has anything left to say? That watching the new Cinderella is line-by-line the same as watching the old Cinderella, and the money machine keeps this corpse moving along only because people are too stupid to realize they’re being sold books from a library? I really don’t know how you couldn’t.
This is like asking me why a polluted lake is sad.
the truth is a moving target somewhere in between.
Token guessing and… consciousness?
I’d argue it’s virtually impossible to write a sentence that has not been written before
I mean this sincerely: why bother getting excited about anything, then?
A new Marvel movie, a new game, a new book, a new song. If none of them are unique in any way, what is the point of it all? Why have generative AI go through this song and dance? Why have people do it? Why waste everyone’s time?
If the plagiarism engine is acceptable because it’s not possible to be unique anyway… I just, I don’t know how you go on living. It all sounds so unbelievably boring.
Your taxes pay for the library.
Arguing why it’s bad for society for machines to mechanise the production of works inspired by others is more to the point.
I agree, but the fact that shills for this technology are also wrong about it is at least interesting.
Rhetorically speaking, I don’t know if that’s useless.
I don’t care why they’re different, or that it technically did or didn’t violate the “free swim” policy,
I do like this point a lot.
If they can find a way to do and use the cool stuff without making things worse, they should focus on that.
I do miss when the likes of cleverbot was just a fun novelty on the Internet.
If I as a human want to learn a subject from a book, I buy it
xD
That’s good.
It’s not because what they’re against is the consolidation of power.
If the principle “information is free” can lead to systems where information is not free, then that’s not really desirable, is it.
If free information to inspire more creative works can lead to systems with less creative works, then that’s not really desirable, is it.
It is a tool
Yeah, I agree. I wrote twelve lines about that.
are all just a tool
just a tool
it’s just a tool
a tool is a tool
all are just tools
it’s no more than a tool
it’s just a tool
it’s a tool we can use
one of our many tools
it’s only a tool
these are just tools
a tool for thee, a tool for me
guns don’t kill people, people kill people
the solution is simple:
teach drunk people not to shoot their guns so much
unless they want to
that is the American way
tanks don’t kill people, people kill people
the solution is simple:
teach drunk people not to shoot their tanks so much
the barista who offered them soy milk
wasn’t implying anything about their T levels
that is the American way
Thanks for reminding me that AI is just tools, friend.
My memory is not so good.
I often can’t
remember
I’m not gonna go looking for scans or anything, but KnowYourMeme lists the popularity of this one as starting between 2013 and 2015, and I definitely remember seeing this phrase in a textbook around 2010 or 2011. So honestly, I might blame Pearson or McGraw Hill.
No problem, yo!
And to be fair, communication is always two-way. It’s not like I don’t want the public to be more thoughtful in their disagreements.
Have a good one.
Okay so, this is a rhetoric problem.
This phrase here:
I disagree with the premise, the Holocaust was unique.
You lost the crowd immediately. The thrust of Walz’s position is that people should be more aware of the ubiquity of genocidal thinking, and in your first sentence, you put yourself in opposition to him.
Even though you agree with Walz later in spirit, the immediate impression is that you’re downplaying other genocides by over-fixating on the shock and horror of this one in particular, and it takes you way too long to clear up your position.
If you had phrased this as “added context” or “an additional fun fact” or “some ways in which the holocaust was unique,” it becomes much harder to disagree with you. Your audience isn’t primed immediately to be angry, and you beget much more charitability, at least from those who aren’t insane.
They mean that the code is being written like it were python. You can’t get rid of the curly braces, but you can shove them all under your bed where mom can’t see.
During the third or fourth time I was mad that 3D hadn’t taken off like technicolor, I thought “fine! I’ll just look at trees and hallways in real life then!” And yeah, it kinda works.
There’s a lot of beauty in the world if you just, you know, look at it.
And why should those things be stopped? See, unlike you, “I believe in freedom.” If people don’t like their company town, they shall simply move away~.
I said it is better if the government doesn’t verify all the code that makes it on the internet.
You also said this apropos of nothing. I didn’t say anything about vetting code. You think I care if Biden has read your commit messages.
Hey! Just asking you because I’m not sure where else to direct this energy at the moment.
I spent a while trying to understand the argument this paper was making, and for the most part I think I’ve got it. But there’s a kind of obvious, knee-jerk rebuttal to throw at it, seen elsewhere under this post, even:
If producing an AGI is intractable, why does the human meat-brain exist?
Evolution “may be thought of” as a process that samples a distribution of situation-behaviors, though that distribution is entirely abstract. And the decision process for whether the “AI” it produces matches this distribution of successful behaviors is yada yada darwinism. The answer we care about, because this is the inspiration I imagine AI engineers took from evolution in the first place, is whether evolution can (not inevitably, just can) produce an AGI (us) in reasonable time (it did).
The question is, where does this line of thinking fail?
Going by the proof, it should either be:
I’m not sure how to formalize any of this, though.
The thought that we could “encode all of biological evolution into a program of at most size K” did make me laugh.