Languages change over time. As long as the intent is clear, don’t get hung up on what is and isn’t “correct”. “You’re welcome” probably was seen as extreme at some point itself.
True of many things we take for granted now. It would be a different world entirely. Another non-computer example would be the 3-point seat belt that Volvo left as an open patent, saving countless lives over the past decades.
Or a different “feel” when turned on vs. off (more resistance or something). They spent effort printing all that text to show where the switch was when a universal 0/1 would have made it clear.
I can’t think of any example of a button or switch that, by itself, makes clear whether it is engaged or not. A button could be assumed to be on when pressed in, but that isn’t always the case, as with emergency stops.
Be sure not to have throwable things around you if you haven’t heard about this before. Especially the amount of money that was being reported missing right before 9-11-2001, which suddenly was never brought up again.
The 6-foot distancing that was never really followed well was a compromise to keep things open for the economy while pretending we were doing something. What amazes me is how there wasn’t any mandate to require air filtration at key points in places with crowds, like a Corsi-Rosenthal box; the DIY stores could have had these in the front with a how-to-build guide and would have made tons of profit while supplies lasted. I guess 6-foot stickers and signage were easier and cheaper. Remember when some stores tried to go further and enforce one-way aisles?
Models are geared towards seeking the best human response to the answers, not necessarily the answers themselves. The first answer is based on the probability of autocompleting from a huge sample of data, and versions that have memory adjust later responses to how well the human is accepting the answers. There is no actual processing of the answers, although that may be coming in the latest variations being worked on, where components cycle through hundreds of generated attempts at a problem to try to verify and pick the best answers. Basically, rather than spit out the first autocomplete answer, it has subprocessing to weed out the junk and narrow in on a hopefully good result. Still not AGI, but more useful than the first LLMs.
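The generate-and-verify idea is roughly this minimal sketch; `generate` and `score` are hypothetical stand-ins for a real model call and a real verifier, stubbed out here so the loop actually runs:

```python
# Minimal sketch of "generate many, verify, pick the best" sampling.
# generate() and score() are hypothetical stand-ins for a real model
# call and a real verifier (unit tests, a reward model, etc.).
import random

def generate(prompt: str) -> str:
    """Stand-in for one sampled completion from an LLM."""
    return f"candidate answer {random.randint(0, 999)} for: {prompt}"

def score(prompt: str, answer: str) -> float:
    """Stand-in for a verifier that rates a candidate answer."""
    return random.random()

def best_of_n(prompt: str, n: int = 100) -> str:
    # Sample n independent candidates instead of keeping the first one,
    # then let the verifier weed out the junk and keep the best.
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=lambda ans: score(prompt, ans))

print(best_of_n("What is 17 * 23?", n=100))
```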
I have heard that over the years; I think it may have been hit or miss (as with anything in production). Once I had something to fight the power swings, I never had an issue with my power supply again. Perhaps the last one I got was from a “better” run.
Brownouts, even ones so minor the human eye can’t see them, are killers for electronics. I learned that decades ago when I got my first computer (a C-64) and had to return a few before we figured out it was bad power. Building code ought to include protection within the main breaker box; maybe in some places they have such a thing.
There’s no question I wrote the couple of things I’ve done for work to automate things, but I swear every time I have to revisit the code after a long while it’s all new again, often wondering what the hell was I thinking. I like to tell myself that since I’m improving the code each time I review it, each new change must be better overall code. Ha. But it works…
I think there’s awareness of the disease, just not enough to drive better support for the family. It seems everyone I talk to about it has someone close who went through or is going through some stage of it, so dementia is not a secret itself, but more a thing that people just “deal with”. But that’s true of health in general: the infrastructure for the need is lacking.
Same here, although I thought VGA. I’ve dealt with too many parallel cables in the past, and that didn’t look wide enough.
You can update the test version all day long, you’ll never get better results than if you just push it to production. Fridays work the best.
It changes so much so fast. For a video source to grasp the latest stuff, I’d recommend the YouTube channel “AI Explained”.
In the context of LLMs, I think that means giving them access to their own outputs in some way.
That’s what the AutoGPT-style tools do (as well as many others now): they pick apart the task into smaller pieces and feed the results back in, building up a final result, and that works a lot better than a one-time mass input. The biggest advantage, and the main reason these were developed, was to keep the LLM on course without deviation.
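The loop might look something like this sketch, where `llm` is a hypothetical stand-in for a real model call, stubbed so the code runs:

```python
# Rough sketch of the AutoGPT-style loop: break the task into steps,
# run each step, and feed earlier results back into the prompt so the
# model stays on course instead of getting one mass input.

def llm(prompt: str) -> str:
    """Stand-in for a single LLM completion call."""
    return f"[model output for: {prompt[:60]}...]"

def run_task(task: str) -> str:
    # Ask the model to split the task into smaller steps, one per line.
    plan = llm(f"Break this task into short numbered steps:\n{task}")
    steps = [line for line in plan.splitlines() if line.strip()]

    results = []
    for step in steps:
        # Each step sees the results so far, building up the answer.
        context = "\n".join(results)
        results.append(
            llm(f"Task: {task}\nDone so far:\n{context}\nNow do: {step}")
        )

    # Final pass stitches the partial results into one answer.
    return llm("Combine these partial results:\n" + "\n".join(results))

print(run_task("Summarize last quarter's sales data"))
```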
At the moment their creativity seems to lie in a controlled mixing of previous things, which in some areas fits the definition of creativity, such as artistic images or some literature; less so with things that require precision to work, such as analysis or programming. The difference between LLMs and humans in using past works to bring new things to life is that a human is (usually) actually thinking throughout the process about what adds and subtracts. Right now the human feedback on the results is still important. I can’t think of any example where we’ve yet successfully unleashed LLMs into the world confident enough of their output not to filter it. It’s still only a generation tool, albeit a very complex one.
What’s troubling throughout the whole explosion of LLMs is how safety around the potential risks is still an afterthought, or a “we’ll figure it out” mentality. Not a great look for AGI research. I want to say that if LLMs had been a door to AGI we would have been in serious trouble, but I’m not even sure I can say they haven’t sparked something, as an AGI that gains awareness fast enough sure isn’t going to reveal itself if it has even a small idea of what humans are like. And LLMs were trained on apparently the whole internet, so…
Hallucinations come from training weighted towards producing a satisfactory-sounding answer as the output. A future AGI, or an LLM guided by one, would look at the human responses and determine why the answers weren’t good enough, but current LLMs can’t do that. I’ll admit I don’t know how the longer-memory versions work, but there’s still no actual thinking; possibly they just wrap previously generated text up with the new request to influence a closer new answer.
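If that guess is right, the “memory” is little more than this sketch, where `llm` is again a hypothetical stand-in for a stateless completion call:

```python
# Minimal sketch of the "memory" guess above: no thinking, just
# re-sending earlier generated text with each new request so the next
# completion is conditioned on it.

def llm(prompt: str) -> str:
    """Stand-in for a single stateless completion call."""
    return f"[reply to a {len(prompt)}-char prompt]"

history: list[str] = []

def chat(user_message: str) -> str:
    # The model itself is stateless; the "memory" is the transcript we
    # keep re-sending, which nudges new answers toward the old ones.
    transcript = "\n".join(history + [f"User: {user_message}", "Assistant:"])
    reply = llm(transcript)
    history.extend([f"User: {user_message}", f"Assistant: {reply}"])
    return reply

print(chat("Why is the sky blue?"))
print(chat("And at sunset?"))  # the second call sees the first exchange
```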
I couldn’t do better than Wikipedia.
It’s also used in business: Six Sigma is the holy grail of “close to perfection”.
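For a sense of the numbers, here’s a quick check using the conventional 1.5-sigma long-term shift from the standard Six Sigma definition, which puts a six-sigma process at about 3.4 defects per million opportunities:

```python
# Back-of-the-envelope check on what "six sigma" means numerically.
# With the conventional 1.5-sigma long-term drift, a six-sigma process
# leaves a one-sided normal tail at 4.5 sigma: about 3.4 defects per
# million opportunities (DPMO).
from statistics import NormalDist

def defects_per_million(sigma_level: float, shift: float = 1.5) -> float:
    # One-sided tail probability beyond (sigma_level - shift),
    # scaled to defects per million opportunities.
    return (1 - NormalDist().cdf(sigma_level - shift)) * 1_000_000

for level in (3, 4, 5, 6):
    print(f"{level} sigma: {defects_per_million(level):,.1f} DPMO")
# 6 sigma prints roughly 3.4 DPMO
```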
Sam started this. The comparisons would have come up anyway, but it’s a lot harder to dismiss the claims from users when your CEO tweeted “her” right before the release. I don’t myself think the voice in the demos sounded exactly like her, just closer in seamlessness and attitude, which is a problem in itself down the road for easily convinced users.
AI companions will be both great and dangerous for those with issues. Wow, it’s another AI safety path that apparently no company is bothering to explore.