If you have been reading all the hype about the latest artificial intelligence chatbot, ChatGPT, you might be excused for thinking that the end of the world is nigh.

The clever AI chat program has captured the imagination of the public for its ability to generate poems and essays instantaneously, its ability to mimic different writing styles, and its ability to pass some law and business school exams. Teachers are worried students will use it to cheat in class (New York City public schools have already banned it). Writers are worried it will take their jobs (BuzzFeed and CNET have already started using AI to create content).

The Atlantic declared that it could "destabilize white-collar work." Venture capitalist Paul Kedrosky called it a "pocket nuclear bomb" and chastised its makers for launching it on an unprepared society. Even the CEO of the company that makes ChatGPT, Sam Altman, has been telling the media that the worst-case scenario for AI could mean "lights out for all of us."

But others say the hype is overblown. Meta's chief AI scientist, Yann LeCun, told reporters ChatGPT was "nothing revolutionary." University of Washington computational linguistics professor Emily Bender warns that "the idea of an all-knowing computer program comes from science fiction and should stay there."

So, how worried should we be? For an informed perspective, I turned to Princeton computer science professor Arvind Narayanan, who is currently co-writing a book on "AI snake oil." In 2019, Narayanan gave a talk at MIT called "How to recognize AI snake oil" that laid out a taxonomy of AI from legitimate to dubious. To his surprise, his obscure academic talk went viral, and his slide deck was downloaded tens of thousands of times; his accompanying tweets were viewed more than two million times.

Narayanan then teamed up with one of his students, Sayash Kapoor, to expand the AI taxonomy into a book. Last year, the pair released a list of 18 common pitfalls committed by journalists covering AI. (Near the top of the list: illustrating AI articles with cute robot pictures. The reason: anthropomorphizing AI incorrectly implies that it has the potential to act as an agent in the real world.)

Narayanan is also a co-author of a textbook on fairness and machine learning and led the Princeton Web Transparency and Accountability Project to uncover how companies collect and use personal information. He is a recipient of the White House's Presidential Early Career Award for Scientists and Engineers.

Our conversation, edited for brevity and clarity, is below.

Angwin: You have called ChatGPT a "bullshit generator." Can you explain what you mean?

Narayanan: Sayash Kapoor and I call it a bullshit generator, as have others as well. We mean this not in a normative sense but in a relatively precise sense. We mean that it is trained to produce plausible text. It is very good at being persuasive, but it's not trained to produce true statements. It often produces true statements as a side effect of being plausible and persuasive, but that is not the goal.

This actually matches what the philosopher Harry Frankfurt has called bullshit, which is speech that is intended to persuade without regard for the truth. A human bullshitter doesn't care if what they're saying is true or not; they have certain ends in mind. As long as they persuade, those ends are met. Effectively, that is what ChatGPT is doing. It is trying to be persuasive, and it has no way to know for sure whether the statements it makes are true or not.

Angwin: What are you most worried about with ChatGPT?

Narayanan: There are very clear, dangerous cases of misinformation we need to be worried about. For example, people using it as a learning tool and accidentally learning wrong information, or students writing essays using ChatGPT when they're assigned homework. I learned recently that CNET has been, for several months now, using these generative AI tools to write articles.