
Computers have learned to write. But here's why AI should really worry us.

In 1997, when world chess champion Garry Kasparov was defeated by an IBM supercomputer, he confessed that the sheer power of the machine terrified him.

We journalists are made of sterner stuff. I’m not frightened by the prose cranked out by the artificial intelligence system du jour, ChatGPT, which is designed to write essays on command and generate lucid answers to complex questions. A quarter-century after a computer trounced Kasparov, a program hailed as the best AI text generator ever built mostly produces predictable prose, often riddled with errors, with a side order of lousy poetry.

And yet, CEOs are already using it to help them write memos. Business and tech leaders view it as a threat to search engines like Google. And teachers are up in arms over students’ ability to cheat via chatbot.

There is something bigger and spookier going on here. Not even the creators of ChatGPT fully understand how the system works, because it’s too complex for humans to comprehend. Even when it makes a mistake, nobody knows why. The same is true of AI programs that help make high-stakes decisions about health care, law enforcement, and even military operations. And that is what scares me.

ChatGPT has awed and amazed over a million people since it became available for free public use last month. Developed by San Francisco-based OpenAI, an artificial intelligence company cofounded by Twitter owner Elon Musk, ChatGPT does a credible impersonation of the smartest kid in the room, despite its fumbles.

You can ask it to generate the story of the Titanic as it might have been composed by William Shakespeare, for instance, or have it tailor its responses based on age and education. It uses short simple words when you ask it to explain gravity to a 6-year-old, longer words when explaining to an adult, and Newtonian math when talking to an engineer.

And it generates answers to commonplace questions, with a clean efficiency that can put Google searching to shame.

For instance, ask Google the deepest point of the Atlantic Ocean, and it points you to a website where you can read the answer. Ask ChatGPT, and it answers directly, writing up a paragraph about the 27,493-foot Milwaukee Deep off the coast of Puerto Rico. It’s so good at this sort of thing that ChatGPT could potentially supplant Google and Wikipedia as the fastest way to look up basic facts. Indeed, Google recently held an all-hands company meeting to grapple with the competitive threat posed by ChatGPT.

But — this is important — ChatGPT also serves up boneheaded errors. When I asked it who was the first European to reach North America, I got a brief essay on Christopher Columbus, with no mention of the earlier voyage of the Norseman Leif Erikson. Seconds later, when I asked who Erikson was, the AI said he “was credited with discovering North America.”


Same AI, totally contradictory answers. And not a word about the millions of people already inhabiting the continent that Erikson was “credited with discovering.”

Tech experts are well aware of the system’s limitations. Drew Volpe, a founding partner at Cambridge’s First Star Ventures and a veteran of AI company Semantic Machines, said that systems like ChatGPT are “very well-spoken machines with deep memories that are very dumb.”

For now, chatbots may find their greatest commercial success in pursuits like marketing and advertising. Many of the world’s leading businesses already use such programs to write blog posts, e-mail ads, and other mundane copy, with humans only needed to fact-check and polish the results.

Companies such as San Francisco-based Writer employ the same algorithm used by ChatGPT to generate corporate copy for companies including Intuit, Spotify, and Cisco Systems. May Habib, former associate managing editor of the Harvard Crimson and now chief executive at Writer, says it’s the future of business communications.

“There’s no going back,” Habib said. “The genie’s out of the bottle.”

But what about the other genies we’ve unleashed? ChatGPT won’t help decide whether you get a kidney transplant, whether you get custody of your children, or whether you’ll be let out on parole. Yet other AIs designed to help answer these far more critical questions are already widely deployed, and their decisions are affecting the lives of millions.

That might not be bad, if the AIs were completely trustworthy. Instead, they’re sometimes brilliant and sometimes stupid, and their creators don’t know why. “We don’t have a good enough understanding of how all these models work under the hood to give a precise answer to that question,” said Jacob Andreas, assistant professor of computer science at the Massachusetts Institute of Technology.

Some failings can be caused by training AIs with inaccurate data, or with data that reflects the race or gender biases of the humans who compiled it. But Andreas said that even with absolutely perfect data, an AI could still make mistakes for reasons no human could fully understand.

We can only hope that AI's human users have sense enough to double-check the results. But human nature being what it is, some people will just believe whatever the computer tells them, as happened in the notorious case of the Detroit man falsely arrested for shoplifting in 2020 after a facial recognition program misidentified him.

Kasparov was frightened by his digital opponent because it was too good. The reason to fear today’s AI is that it’s not good enough.


Hiawatha Bray can be reached at hiawatha.bray@globe.com. Follow him on Twitter @GlobeTechLab.



