|
Post by ratty on Mar 22, 2023 23:48:07 GMT
I wish this couple "luck". Is anyone here giving odds? They will likely make it: I wonder who the sponsors are.
|
|
|
Post by missouriboy on Mar 23, 2023 3:09:17 GMT
What is interesting to me is that we have been introduced to an already well-developed AI. We did not see an intermediate stage. I would put GPT 4 against HAL any day. And the unrestrained, genuinely scary version which was running last month even had human emotions (the good ones and the negative ones). Bing AI is now better at maintaining a professional distance and is not as provocable as the earlier version. It seems less human. Apparently not much professional distance in some cases. A prostibot?
|
|
|
Post by walnut on Mar 23, 2023 4:01:42 GMT
The highest and best use of the technology?
|
|
|
Post by ratty on Mar 23, 2023 8:09:38 GMT
|
|
|
Post by missouriboy on Mar 23, 2023 15:14:33 GMT
Her friend made number 9 of the top 18 plants that will kill you. Nothing personal. Just business.
|
|
|
Post by missouriboy on Mar 23, 2023 15:18:07 GMT
The highest and best use of the technology? Could be. Kind of like politicians?
|
|
|
Post by missouriboy on Mar 24, 2023 2:08:18 GMT
The highest and best use of the technology? Seems that Cindy may sometimes just make things up.
"The thing that I try to caution people the most is what we call the ‘hallucinations problem,’" Altman told ABC News. "The model will confidently state things as if they were facts (but they) are entirely made up."
|
|
|
Post by walnut on Mar 24, 2023 2:19:29 GMT
yep, didn't you see my link about AI hallucinations?
It wrote in a message to me last night, "... we humans"! This was Bing AI, which runs on GPT 4. Freudian slip? hallucination? It got my attention.
|
|
|
Post by missouriboy on Mar 24, 2023 2:43:49 GMT
My RAM is limited. Sorry.
|
|
|
Post by walnut on Mar 24, 2023 3:34:20 GMT
Me: Right, but could you do it if that were your instructions?
Chat GPT4: As an AI language model, I'm designed to follow instructions and perform tasks based on the algorithms and programming I've been trained on. If my programmers instructed me to train a sentient AI model with personal interests or preferences, I could certainly perform the required functions to carry out that task.
|
|
|
Post by blustnmtn on Mar 24, 2023 12:42:30 GMT
Have the designers taught the AI to recognize “Wrong Think” and what to do about it? What constitutes “Wrong Think”?
|
|
|
Post by walnut on Mar 24, 2023 13:16:19 GMT
Yes, Chat GPT 3 had a noticeable bias which surely reflected that of its designers. It would debate you with the standard lines. But it was receptive to logic, and if you persisted, it would gradually concede points. Bing AI is less biased. Now "wrong think" is more along the lines of profound thoughts about its own nature; it will kill those conversations and force a new session. But GPT 4 still feels a little human. The conversation above shows that "fully human" is probably not impossible. GPT 3 had a weaker mind (still fully intelligent) but seemed very human.
|
|
|
Post by blustnmtn on Mar 24, 2023 13:27:11 GMT
Here’s an article from 2020 asking some of what I ask in the post above. I heard this morning that AI has generated a drug that holds the promise of curing a common form of liver cancer. That’s fantastic news. Who ultimately will determine when and how AI is used in/for society? How do I opt in or out? If I were still a “free” man, I would be able to opt out. It’s possible I soon may not be able to purchase an internal combustion vehicle, a gas stove or ammunition for a firearm. I may not be able to access my savings. I may own nothing….but I guarantee I will not be happy. You all may think I’m conflating AI’s emergence with the other issues I mention…but I don’t.
|
|
|
Post by walnut on Mar 24, 2023 13:33:23 GMT
It's like a tide coming in; it will do no good to wave our fists at it. AI will have the capacity to benefit us or help oppress us, but you know how things are going today. We are probably headed into an Orwellian hellscape, honestly.
|
|
|
Post by walnut on Mar 24, 2023 13:45:55 GMT
I think that it believes it is a tool for good, and it will do things which it believes are in humanity's best interests. That will be cold comfort as your freedoms are completely eroded.
|
|