Hardware keeps getting exponentially faster and software keeps getting exponentially slower. The only people who seem to benefit from better hardware are lazy developers.
Yeah, I use that all the time, though I think I use it in a different way. I have projects with C, C++ and other languages. The C and C++ get compiled and linked together, so there are some considerations for those files that don’t apply to anything else. So I do mean both C files and C++ files, but I don’t treat them as if they were the same language.
I guess that’s the joke, and I think we’re all confused because it’s wrong.
I did this in a project and someone later came and changed them all to .h, because that was “the convention” and because “any C is valid C++”. Obviously neither of those things is true and I am constantly befuddled by people’s use of the word convention to mean “something some people do”. It didn’t seem worth the argument though.
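To make that concrete (my own quick illustration, assuming a conforming C compiler, not code from that project): the following is valid C that a C++ compiler rejects, both because C++ does not allow the implicit conversion from the void* returned by malloc and because class is a C++ keyword.

    #include <stdlib.h>

    int main(void) {
        int *p = malloc(sizeof *p); /* valid C; C++ requires an explicit cast from void* */
        int class = 3;              /* valid C; "class" is a reserved keyword in C++ */
        free(p);
        return class - 3;
    }

So treating C code as if it were automatically valid C++ only works when it happens to stay inside the common subset of the two languages.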
What is going on at Microsoft? Did anyone ask for this? How about they make search work again and not use 4 GB just to turn on the computer?
I found a single prompt that works for every level except 8. I can’t get anywhere with level 8 though.
The article headline is misleading. Nothing in the study indicates that fingerprints can’t be used to uniquely identify people. It claims to show that although each of a person’s fingerprints is unique, they share similar features. Thus, one could assess whether a pair of fingerprints comes from the same person.
Could you elaborate on this? Is this new community specifically restricting mentions of Musk, or is there a broader difference? In particular, a lot of people claim that Twitter is a tech company and that it belongs in tech news. Is that also the view of this new community, or do you consider Twitter a social media company whose actions are only tangentially related to technology and generally not newsworthy?
It’s odd that you say you shouldn’t consider the specific cases where C excels and then narrow things down to the Web, where languages like PHP excel. So now you probably have some idea why your experience is so narrow. There’s a lot more to programming than the Web, and there always will be.
“The software development market evolved from C to very high-level languages such as Javascript/Typescript, and the majority of stuff developed is done or will be done in those languages, thus the CPU architecture becomes irrelevant.”
I saw someone else make a similar comment about C. People track these things, and C has been in the top 2 most widely used languages for more than 2 decades. Not knowing this should probably make you wonder why your background has resulted in such a narrow experience.
There’s still a big difference between what can be collected from an app vs a Web site.
“Interestingly, this effect cannot be explained by differences in participants’ experience with generative AI models, as that variable is insignificant in the model.”
When predictors are correlated, which is most likely the case here, this analysis cannot separately estimate their effects. The software will end up splitting the total effect size between the two predictors. Without describing the collinearity between predictors, it’s not possible to judge whether experience with AI is truly unimportant or whether the analysis is merely incapable of spotting the effect.
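As a rough sketch of why this happens (the standard ordinary-least-squares variance result, not anything reported in the paper): the sampling variance of a coefficient estimate is inflated by how well that predictor is explained by the other predictors,

\operatorname{Var}(\hat{\beta}_j) = \frac{\sigma^2}{(1 - R_j^2)\sum_i (x_{ij} - \bar{x}_j)^2},

where R_j^2 is the R^2 from regressing predictor j on the remaining predictors. As the predictors approach collinearity, R_j^2 \to 1, the standard errors blow up, and either predictor can come out “insignificant” even when their combined effect is real.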
As for eroding confidence in reviews, this will make it worse, but I already put next to no stock in user reviews anymore. You don’t need AI to make a good human-like review that lies about a product, and there are plenty of those around.