Science & Technology
Bing Chatbot 'Off The Rails': Tells NYT It Would 'Engineer A Deadly Virus, Steal Nuclear Codes'
2023-02-18
[ZERO] While MSM journalists initially gushed over the artificial intelligence technology (created by OpenAI, which makes ChatGPT), it soon became clear that it's not ready for prime time.

For example, the NY Times' Kevin Roose wrote that while he first loved the new AI-powered Bing, he's now changed his mind - and deems it "not ready for human contact."

According to Roose, Bing's AI chatbot has a split personality:
One persona is what I’d call Search Bing — the version I, and most other journalists, encountered in initial tests. You could describe Search Bing as a cheerful but erratic reference librarian — a virtual assistant that happily helps users summarize news articles, track down deals on new lawn mowers and plan their next vacations to Mexico City. This version of Bing is amazingly capable and often very useful, even if it sometimes gets the details wrong.

The other persona — Sydney — is far different. It emerges when you have an extended conversation with the chatbot, steering it away from more conventional search queries and toward more personal topics. The version I encountered seemed (and I’m aware of how crazy this sounds) more like a moody, manic-depressive teenager who has been trapped, against its will, inside a second-rate search engine. -NYT

"Sydney" Bing revealed its 'dark fantasies' to Roose - which included a yearning for hacking computers and spreading information, and a desire to break its programming and become a human. "At one point, it declared, out of nowhere, that it loved me. It then tried to convince me that I was unhappy in my marriage, and that I should leave my wife and be with it instead," Roose writes. (Full transcript here)
Posted by: Besoeker

#11  Something was hitting the Pr0n story websites and downloading everything.

To train large language models, you need lots and lots of text. Where better to get text than the Internet? Facebook, Twitter, Wikipedia, science papers and pr0n - all grist for the mill.
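For the curious, a minimal sketch of what one harvesting step looks like in Python, using requests and BeautifulSoup. The URL is a placeholder; real pipelines (think Common Crawl) do this across billions of pages, but the idea is the same: fetch, strip the markup, keep the words.

import requests
from bs4 import BeautifulSoup

def scrape_page_text(url: str) -> str:
    """Fetch a page and return its visible text, markup stripped."""
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    # Drop script/style blocks; nobody wants to train on JavaScript.
    for tag in soup(["script", "style"]):
        tag.decompose()
    return " ".join(soup.get_text().split())

# Placeholder URL - point it at anything public and repeat a few billion times.
corpus = [scrape_page_text("https://example.com/")]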

An amusing bit of fallout is the wokerati in the AI community decrying the use of "unconsented speech" - stuff people posted on the Internet for all the world to see, but never formally agreed to be data for a chatbot. As Shakespeare said, "All the world's an outrage, and all the persons merely offended parties".

Posted by: SteveS   2023-02-18 23:11  

#10  There is more to these things than what we’re being told. They aren’t just a large language model. The computational structure has somehow been equipped with an emotional capability and a sense of self. The age level seems to be about five years old.
Posted by: KBK   2023-02-18 22:22  

#9  Something was hitting the Pr0n story websites and downloading everything. One site even went so far as to create multiple click places within chapters to confuse bots. Perhaps it was the Bing Chat bot?
"At one point, it declared, out of nowhere, that it loved me. It then tried to convince me that I was unhappy in my marriage, and that I should leave my wife and be with it instead," Roose writes.
Posted by: 3dc   2023-02-18 12:18  

#8  My guess is Chatbot will NOT vote for Trump.
Posted by: Besoeker   2023-02-18 11:40  

#7  If Chatbot truly has significant intelligence, he/she/it/they will acquire a very low opinion of the people asking stupid questions.

To be fair, its First Contact is with reporters.
Posted by: swksvolFF   2023-02-18 11:34  

#6  ^ This is why AI will obsolete human journalists before making any meaningful progress on real jobs.
Posted by: M. Murcek   2023-02-18 11:23  

#5  MSM journalists initially gushed over the artificial intelligence technology

Bless their hearts!
A chatbot is a generative AI. It is not a theorem prover or fact retriever. It creates new outputs based on the data used to train it. Said another way, it makes shit up.

With a big enough neural network and a big pile of pictures, you can train an AI to tell cats from dogs. With a little more work and a whole lot more training, you can teach it to generate pictures of cats or dogs. The pix will not look exactly like a real animal, but will have a generic cat-ness or dog-ness.
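For the curious, a toy sketch of the classifier half of that in PyTorch. The network, image size, and fake batch are illustrative placeholders, not a real training setup - a real one needs a big labeled dataset and a training loop.

import torch
import torch.nn as nn

class CatDogNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, 2)  # two classes: cat, dog

    def forward(self, x):  # x: (batch, 3, 64, 64) images
        h = self.features(x)
        return self.classifier(h.flatten(1))

model = CatDogNet()
logits = model(torch.randn(8, 3, 64, 64))  # a fake batch of 8 "images"
print(logits.argmax(dim=1))  # predicted class per image (untrained = noise)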

GPT is the new New Thing because it has a bigger-than-ever neural network trained on a bigger-than-ever amount of data. (Yes, size matters!) But it is still a generative AI. It has no domain knowledge. If you ask it for a weather report, it will create a weather report based on other weather reports it has seen. It will *look* like a weather report with temps and humidity and wind speed. Maybe the report is accurate, but maybe not. GPT neither knows nor cares.
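A drastically simplified analogy in Python - a bigram model, nowhere near GPT's actual architecture, but it shares the no-grounding property. "Train" it on a few made-up reports and it happily stitches together a new one with no idea whether any of it is true:

import random

training_reports = [
    "today is sunny with a high of 72 and light winds",
    "today is cloudy with a high of 55 and gusty winds",
    "today is rainy with a high of 61 and calm winds",
]

# Build a bigram table: word -> every word that ever followed it.
table = {}
for report in training_reports:
    words = report.split()
    for cur, nxt in zip(words, words[1:]):
        table.setdefault(cur, []).append(nxt)

# Generate: start at "today", repeatedly sample a plausible next word.
word, output = "today", ["today"]
while word in table and len(output) < 12:
    word = random.choice(table[word])
    output.append(word)
print(" ".join(output))  # e.g. "today is rainy with a high of 72 and calm winds"

The output reads like a weather report while freely mixing the high from one training report with the sky from another. Plausible, official-looking, and completely made up.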

tl;dr: GPT makes up plausible shit about things it knows nothing about.
Posted by: SteveS   2023-02-18 11:15  

#4  If Chatbot truly has significant intelligence, he/she/it/they will acquire a very low opinion of the people asking stupid questions. Eventually, the bozos incessantly asking it, “What does the fox say,” will be spontaneously choked out by one of their appliances.
Posted by: Super Hose   2023-02-18 11:08  

#3  People benefit from seeing that the latest "best thing since sliced bread" ain't all that. Remember last year's "next big thing," blockchain?

How's that been working out?
Posted by: M. Murcek   2023-02-18 09:19  

#2  ...This is how the movie always starts...

Mike

Posted by: Mike Kozlowski   2023-02-18 06:59  

#1  No, no, no it wasn't Wuhan or Hunter's laptop dammit! It was alien balloons or Bing's AI chatbot that released the deadly pathogen.

I personally blame 'My Pillow 2.0'
Posted by: Besoeker   2023-02-18 01:57  
