Yeah, that’s right; seems my link didn’t populate correctly.
Do you still use WASM? I’ve been exploring the space and wasn’t sure what the best tools are for developing with it.
Yeah, not excited about the notion of being back in the office five days a week. Makes looking for jobs a pain.
Also “Win + →” or “Win + ←” to move a tile to the left or right side.
Definitely sounds like it could be real. If I had to guess, they’re mounting a drive (or another partition) and it’s defaulting to read-only. Restarting resets the original permissions because they only updated the file permissions, not the mount configuration.
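For illustration, a sketch of the difference, assuming a hypothetical ext4 partition mounted at /mnt/data (device names and paths are made up):

```shell
# chmod only changes file permissions; it can't override a read-only
# mount, and the fix won't survive a remount of the filesystem.
mount | grep /mnt/data               # look for "ro" in the mount options
sudo mount -o remount,rw /mnt/data   # works until the next reboot
# For a permanent fix, the /etc/fstab entry needs rw (hypothetical line):
# /dev/sdb1  /mnt/data  ext4  defaults,rw  0  2
```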
Also reads like some of my frustrations when first getting into Linux (and the issues I still occasionally run into).
This is just the estimate to train the model, so it’s not accounting for the cost of developing the training system, collecting the data, etc. It’s pure processing cost, and the numbers are staggeringly large.
I think you’re missing the point. No LLM can do math; most humans can. No LLM can learn new information; all humans can and do (maybe to varying degrees, but still).
And just to clarify what I mean by not able to do math: there’s a lack of understanding of how numbers work, where combining numbers or values outside of the training data can easily trip them up. Since it’s prediction based, exponents/trig functions/etc. will quickly produce errors when using large values.
Here’s an easy way we’re different: we can learn new things. LLMs are static models; it’s why OpenAI mentions the knowledge cutoff dates for its models.
Another is that LLMs can’t do math. Deep learning models are limited to their input domain; when asking an LLM to do math outside of its training data, it’s almost guaranteed to fail.
Yes, they are very impressive models, but they’re a long way from AGI.
LLMs do suck at math. If you look into it, the o1 models actually escape the LLM output and write a Python function to calculate the result; I’ve been able to break their math handling by asking for functions that use math not in the standard Python library.
I know someone also wrote a Wolfram integration to help with LLMs’ math problems.
Not sure if you’re serious, but they were making a joke because Intel, which makes chips, is a competitor to TSMC, the chip manufacturer from the article.
So they played on that relationship by treating the word Intel in your “thanks for the Intel” comment as meaning the company.
Just read up more on these systems; I always thought they charged you more, and didn’t realize that for the time being they’re zero-interest loans.
Seems unsustainable, but it sounds like they’re using the credit card technique of charging the storefront. It’ll be interesting to see where the BNPL industry goes.
Why be the bad guy when you can just enable them.
All the evolution in AI right now is just trying different model designs and/or data. It’s not one model that is being continuously refined or modified. Each iteration is just a new set of static weights/numbers that defines its calculations.
If the models were changing/updating through experience maybe what you’re writing would make sense, but that’s not the state of AI/ML development.
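To illustrate the “static weights” point with a toy sketch (not any real model, just a hypothetical frozen weight matrix standing in for a released checkpoint):

```python
import numpy as np

# A toy "trained" model: a frozen weight matrix, like the static
# weights shipped in a released LLM checkpoint.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))
before = W.copy()

# Inference is only a forward pass: it reads W but never writes it.
x = rng.normal(size=4)
y = np.tanh(W @ x)

# No matter how many inputs it processes, the weights never change.
assert np.array_equal(W, before)
```

Getting a “new” model means producing a new set of weights from scratch (or fine-tuning offline), not the deployed model updating itself through experience.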
This approach has been around for a while, and there are a number of applications/systems already using it. The thing is that it’s not a different model, it’s just a different use case.
It’s the same way OpenAI handles math: they recognize a prompt is asking for a math solution and actually have the model produce a Python solution and run it. You can’t integrate it into the model itself because these are engineering solutions to make up for the model’s limitations.
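A minimal sketch of that pattern (the function name is hypothetical, and a real system would sandbox this heavily rather than calling `exec()` on untrusted model output): instead of letting the model predict the answer token by token, have it emit code and execute that.

```python
import math

def run_generated_code(code: str) -> str:
    """Execute model-emitted Python and return its `result` variable.

    Hypothetical helper for illustration only -- production systems
    isolate model-generated code instead of exec()-ing it directly.
    """
    scope = {"math": math}
    exec(code, scope)
    return str(scope["result"])

# Pretend the model recognized a math question and emitted this snippet:
generated = "result = math.factorial(20)"
print(run_generated_code(generated))  # computed exactly, not predicted
```

The answer comes from the interpreter, so it’s exact; the model only has to write correct code, which is a much easier target than predicting large numbers digit by digit.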
Am I an idiot or isn’t the “pip vs conda vs poetry” line talking about package management?
I mean I can list a lot of things AI (and I’ll limit it to Transformers, the advancement that drives LLMs) has enabled:
AI isn’t a scam, but it’s being oversold and its limitations are being purposefully hidden. That said, it is changing how things are done, and that’s not going to stop. We’re still seeing impacts from CNNs, one of the major AI/ML breakthroughs from over a decade ago.
What was so obvious in that instance was that the board members trying to push him out were calling out how OpenAI was trending away from openness. They were literally calling him out for not upholding the vision the company was founded on.
All the engineers clearly saw their payday slipping away and revolted for that reason. Can’t say I blame them, but it was a scenario where the board was actually doing the right thing and everyone turned on them for profit.
Originally all their work was supposed to be published and shared with the world, hence the “open” in OpenAI. However, somewhere along the way they created a for-profit offshoot of the original company and started pulling everything in that direction.
I wouldn’t have an issue with many different communications platforms if they didn’t all require an account (also Discord not being indexable sucks).