Post by ratty on Aug 1, 2024 6:24:36 GMT
Post by gridley on Aug 2, 2024 12:36:06 GMT
They should call them Shipstones and push Robert Heinlein novels as part of their PR campaign...
Post by Sigurdur on Aug 3, 2024 1:24:25 GMT
Post by Sigurdur on Aug 3, 2024 2:22:53 GMT
Post by ratty on Aug 4, 2024 10:47:41 GMT
Post by ratty on Aug 4, 2024 23:14:31 GMT
Post by ratty on Aug 5, 2024 6:57:18 GMT
Post by ratty on Aug 5, 2024 22:18:26 GMT
Post by ratty on Aug 9, 2024 5:56:16 GMT
Post by Sigurdur on Aug 9, 2024 9:57:55 GMT
Post by blustnmtn on Aug 9, 2024 14:10:16 GMT
But they destroyed a precious marshland!!! Imagine the species lost... (sarc)
Post by ratty on Aug 13, 2024 10:59:37 GMT
Post by gridley on Aug 13, 2024 11:27:54 GMT
thefederalist.com/2024/08/13/all-too-predictably-reality-is-puncturing-the-ai-hype-bubble/

"After so much hype and trillions of dollars of investment, the AI bubble seems like it might finally be bursting."

"Worse still, Taylor explains that AI programs “have run out of stuff to train on, and the more they are trained on ‘the internet,’ the more the internet contains a body of work written by AI — degrading the product in question.” Originally, a Large Language Model (LLM) like ChatGPT could review the vast quantity of human-made content across the internet and take that data to produce a unique essay that would meet the parameters of its users. But now, there is so much AI-generated content online that any essay that the LLM produces will become increasingly derivative, defective, and incomprehensible. Garbage in, garbage out."

The other problem, not really discussed in the article, is that yet again something is being called something it isn't. Current-generation AIs *aren't* AI in the accepted sci-fi sense. They do not extrapolate. Maybe some of them can pass the Turing test, maybe they can't, but at the moment they remain bound by one of the theoretical limits of computer science: no one has yet come up with a way for a computer to extrapolate data in a chaotic system. Will someone figure out a way around that someday? Perhaps - and when that day comes we may hit 'the singularity' FAST.
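To make the "garbage in, garbage out" loop concrete, here is a toy sketch of my own (nothing from the article, and NumPy is assumed): stand in for the LLM with a simple Gaussian fit, and train each new generation only on samples produced by the previous generation's model.

# Toy illustration (mine, not from the article): the "model" is just a
# Gaussian fit; each generation is trained only on output from the last one.
import numpy as np

rng = np.random.default_rng(42)

N = 100                                   # "training set" size per generation
data = rng.normal(0.0, 1.0, size=N)       # generation 0: human-made data

for gen in range(1, 501):
    mu, sigma = data.mean(), data.std()   # "train" on the current corpus
    data = rng.normal(mu, sigma, size=N)  # next corpus is pure model output
    if gen % 100 == 0:
        print(f"generation {gen:3d}: mean {mu:+.3f}, std {sigma:.3f}")

# The fitted std drifts toward zero over the generations: each round forgets
# a little more of the original distribution's tails, a crude analogue of the
# self-referential degradation the article describes for LLMs trained on
# AI-generated text.

A Gaussian fit is obviously a caricature of an LLM, but it shows the mechanism: once a model's own output becomes its training data, estimation noise compounds and the distribution narrows instead of tracking the original human-made data.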
Post by missouriboy on Aug 13, 2024 12:34:10 GMT
gridley said: "...no one has yet come up with a way for a computer to extrapolate data in a chaotic system. Will someone figure out a way around that someday? Perhaps - and when that day comes we may hit 'the singularity' FAST."

Cindy will be waiting.
Post by ratty on Aug 13, 2024 23:02:30 GMT