
Thursday, May 25, 2023

Alien Minds, Immaculate Bullshit, Outstanding Questions: College in the Age of ChatGPT

From the Penn Alumni Magazine, where I was much interested in the AI coverage. A considerable piece I am reading.

I do NOT agree with much of it; below is an intro, and the whole thing follows.

Alien Minds, Immaculate Bullshit, Outstanding Questions

26 Apr 2023   Pages 22-33

College in the age of ChatGPT.

By Trey Popp | Illustration by Chris Gash

Sidebar: The Coming Economic and Ethical Earthquake

In June 2021, Chris Callison-Burch typed his first query into GPT-3, a natural language processing platform developed by the San Francisco-based company OpenAI. Callison-Burch, an associate professor of computer and information science, was hardly new to AI chatbots or the neural networks that power them. He’s been at the forefront of machine translation since the early 2000s, and at Penn he teaches courses in computational linguistics and artificial intelligence. Besides, digital assistants like Siri and Alexa had already woven natural language processing into the fabric of everyday life. But the jaw-dropping fluency of OpenAI’s new model pitched him into a “career existential crisis.”

It could respond to prompts with cogent, grammatically impeccable prose. It could turn plain language into Python code. It could expand bullet-point outlines into five-paragraph essays—or theatrical dialogues.
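For context (this sketch is mine, not the article's): in mid-2021, GPT-3 was typically reached through OpenAI's Completion API. Something like the following, written against the pre-1.0 openai Python client and using an assumed engine name, prompt, and parameters, is roughly what "turning plain language into Python code" looked like as an API call.

# A minimal sketch of a 2021-era GPT-3 query via OpenAI's Completion API.
# The engine name, prompt, and parameters are illustrative assumptions,
# not details drawn from the article.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; supply your own key

response = openai.Completion.create(
    engine="davinci",       # a GPT-3 base engine available in mid-2021
    prompt=(
        "Write a Python function that takes a list of numbers "
        "and returns only the even ones.\n\n"
    ),
    max_tokens=150,
    temperature=0.2,        # low temperature for more predictable code
)

print(response["choices"][0]["text"])

Running it would print the model's attempt at the requested function: the kind of plain-language-to-code fluency that startled Callison-Burch.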

“I was like, ‘Is there anything left for me to do? Should I just drop out of computer science and become a poet?’” he later recollected. “But then I trained the model to write better poetry than me.”

On November 30, 2022, OpenAI publicly released a refined version called ChatGPT. Its shock-and-awe debut quickly gave Callison-Burch plenty of company on campus. On February 1 he went to a meeting convened by Penn’s Center for Teaching & Learning (CTL) to address the anxiety and excitement racing through faculty lounges—especially after the bot had passed a Wharton operations management exam administered to it by Christian Terwiesch, the Andrew M. Heller Professor. “It was probably the best-attended CTL meeting ever,” Callison-Burch recalled, with a wry chuckle, a couple weeks later. So many people came that CTL director Bruce Lenthall split them into three sessions—two comprising social sciences and humanities faculty and one that blended professors of math, engineering, and physical sciences with counterparts from the University’s health schools.

They’d come for varied reasons. “Some people were just alarmed,” Lenthall said. Having ingested vast swathes of internet text, ChatGPT and other so-called generative AI tools are exquisitely adapted to serve as “sophisticated plagiarism machines,” in the words of Eric Orts, the Guardsmark Professor in Wharton’s department of legal studies and business ethics, who’d experimented with ChatGPT in an MBA course and discussed it within the Faculty Senate executive committee. Other attendees had yet to engage with the tools at all and simply wanted to learn about them. A third group sensed a chance to get in on the ground floor of a revolutionary change. “They suggested that this could be exciting and open up possibilities,” Lenthall recalled, “but they didn’t really have a good idea of what those might be.”

Most participants fell somewhere in the middle—worried about the threats ChatGPT posed to established modes of teaching and evaluation, but curious about its potential to advance the scope or pace of instruction. “What was the most gratifying to me,” said Lenthall, who is also an adjunct associate professor of history, “was that all the faculty came to the conclusion that they really needed to think through the question: What is it most critical that my students learn to do on their own? And when is it most appropriate for them to do something with another tool?”

Their search for answers gave the spring semester a hothouse atmosphere of probing and experimentation.

Wharton associate professor Ethan Mollick, a Ralph J. Roberts Distinguished Faculty Scholar and academic director of Wharton Interactive, not only permitted but in some cases required students in his innovation and entrepreneurship courses to use generative AI platforms, which he likened to “analytic engines.” Meanwhile, on the other end of campus, astronomy professor Masao Sako and his analytical mechanics students asked ChatGPT to solve homework problems. “It returns answers and explanations that sound plausible,” Sako said, but it “failed on every single one.” Given the confident authority with which ChatGPT announced its defective solutions, Sako concluded that the tool might indeed have some utility in the realm of upper-level physics. “I’ve told my students to continue using it to get some practice on identifying errors, which is a useful skill.”

Penn Integrates Knowledge (PIK) University Professor Konrad Kording, who teaches psychology and neuroscience with a focus on neural networks and machine learning, emerged as a pithy generative AI maximalist. “It’s just a mistaken opportunity for any student to not use ChatGPT for any possible project they’re working on,” he declared at a late-February panel discussion sponsored by the School of Arts & Sciences’ Data Driven Discovery Initiative (DDDI). “It’s just incompetent. We should give them bad grades for not using ChatGPT.” Yet Eric Orts was finding that when he let his MBA students use it for some assignments, it tended to lead them toward bad grades—in the form of “deadening” prose—all on its own. “I’m convinced that there are positive uses emerging for this in the real world,” he told me. “But in general I was not impressed by the answers I got from students using it.”

Neither was Karen Rile C’80, a fiction writing teacher in the English department whose experimentation with chatbots goes back to a primitive model developed by AOL at the turn of the century. “What I value in writing is specificity, sharpness, clarity—and it fails on every level,” she reflected. “It’s like a bad student writer who writes in a way that’s very generic, with lots of vague cliches and phrases. It feels blurry. I think that it’ll probably get sharper and better, but I can’t imagine it’s ever going to do anything that’s literary quality. It’ll be very formulaic.”

Yet Rile kicked off her fiction seminar this spring by assigning her students a piece by a writer who’d used GPT-3 as a kind of coauthor. “I wanted to get ahead of it at the beginning of the semester,” she told me. Then, in mid-March, she brought in Callison-Burch and his PhD student Liam Dugan EAS’20 GEng’20, who focuses on natural language processing, to give a guest lecture about generative AI and creative writing.

All 10 professors I interviewed, plus another four who participated in that DDDI panel discussion and several with whom I spoke informally, expressed a similarly open attitude. Sako’s dim view of ChatGPT’s analytical chops didn’t keep him from seeing its potential to boost the conceptual sophistication of his mid-level coding class. Mollick mixed breathless boosterism with a running list of warnings about its boundless propensity to deceive users. Skeptics were on the lookout for positive use cases, and enthusiasts frequently offered insights about generative AI’s limitations. ...
