cm0002@lemmy.world to Technology@lemmy.zip, English · 8 months ago
ChatGPT's hallucination problem is getting worse according to OpenAI's own tests and nobody understands why (www.pcgamer.com)
finitebanjo@lemmy.world, English · 8 months ago
I think comparing a small model's collapse to a large model's corruption is a bit of a fallacy. What proof do you have that the two behave the same in response to poisoned data?