• 0 Posts
  • 23 Comments
Joined 1 year ago
Cake day: June 12th, 2023

  • Sam started this. The comparisons would have come up anyway, but it’s a lot harder to dismiss the claims from users when your CEO tweeted “her” right before the release. I myself don’t think the voice in the demos sounded exactly like her, just close in seamlessness and attitude, which is a problem in itself down the road for easily convinced users.

    AI companions will be both great and dangerous for those with issues. Wow, it’s another AI safety path that apparently no company is bothering to explore.







  • The 6-foot distancing rule, which was never really followed well, was a compromise to keep things open for the economy while pretending we were doing something. What amazes me is that there was never any mandate requiring air filtration at key points in crowded places - something like a Corsi-Rosenthal box. The DIY stores could have stocked these up front with a how-to-build guide and made tons of profit while supplies lasted. I guess 6-foot stickers and signage were easier and cheaper. Remember when some stores tried to go further and enforce one-way aisles?


  • Models are geared toward producing the response a human will rate best, not necessarily the correct answer. The first answer is an autocomplete based on probabilities learned from a huge sample of data, and versions with memory adjust later responses according to how well the human is accepting the answers. There is no actual verification of the answers, although that may be coming in the latest variations being worked on, where components cycle through hundreds of generated attempts at a problem to try to verify and pick the best one. Basically, rather than spit out the first autocomplete answer, it has subprocessing to weed out the junk and narrow in on a hopefully good result (see the sketch below). Still not AGI, but it’s more useful than the first LLMs.
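    A minimal sketch of that generate-then-verify loop, assuming hypothetical `generate` and `score` placeholders standing in for the model call and the verifier (neither is a real API):

    ```python
    import random

    # Hypothetical stand-ins for a real model call and a real verifier;
    # neither of these names is an actual API.
    def generate(prompt: str) -> str:
        """Placeholder for a single LLM completion."""
        return f"candidate {random.randint(0, 999)} for: {prompt}"

    def score(prompt: str, answer: str) -> float:
        """Placeholder verifier; a real system might use a reward model,
        unit tests, or self-consistency checks here."""
        return random.random()

    def best_of_n(prompt: str, n: int = 100) -> str:
        # Instead of returning the first autocomplete result, sample n
        # candidates and keep the one the verifier rates highest.
        candidates = [generate(prompt) for _ in range(n)]
        return max(candidates, key=lambda a: score(prompt, a))

    print(best_of_n("Why is the sky blue?", n=8))
    ```

    The whole trick is in the scorer: with a random one like this, best-of-n is no better than the first sample, which is why the verification step is the hard part.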




  • There’s no question I wrote the couple of automation scripts I’ve done for work, but I swear every time I revisit the code after a long while it’s all new again, and I often wonder what the hell I was thinking. I like to tell myself that since I’m improving the code each time I review it, every change must make it better overall. Ha. But it works…







  • Where their creativity lies at the moment seems to be a controlled mixing of previous things, which in some areas fits the definition of creativity, such as artistic images or some literature. Less so in areas that require precision to work, such as analysis or programming. The difference between LLMs and humans in using past works to bring new things to life is that a human is (usually) actually thinking throughout the process about what to add and what to take away. Right now, human feedback on the results is still important. I can’t think of any example yet where we’ve successfully unleashed LLMs into the world confident enough in their output not to filter it. It’s still only a tool of generation, albeit a very complex one.

    What’s troubling throughout the whole explosion of LLMs is how safety around their potential harms is still an afterthought, or a “we’ll figure it out” mentality. Not a great look for AGI research. I want to say that if LLMs had been a door to AGI we would have been in serious trouble, but I’m not even sure I can say they haven’t sparked something, since an AGI that gains awareness fast enough sure isn’t going to reveal itself if it has even a small idea of what humans are like. And LLMs were trained on apparently the whole internet, so…