At this school, computer science class now includes critiquing chatbots

NEW YORK — Marisa Shuman’s computer science class at the Young Women’s Leadership School of the Bronx in New York City began as usual on a recent January morning.

Just after 11:30, energetic 11th and 12th graders bounded into the classroom, settled down at communal study tables and pulled out their laptops. Then they turned to the front of the room, eyeing a whiteboard where Shuman had posted a question on wearable technology, the topic of that day’s class.

For the first time in her decadelong teaching career, Shuman had not written any of the lesson plan. She had generated the class material using ChatGPT, a new chatbot that relies on artificial intelligence to deliver written responses to questions in clear prose. Shuman was using the algorithm-generated lesson to examine the chatbot’s potential usefulness and pitfalls with her students.

“I don’t care if you learn anything about wearable technology today,” Shuman said to her students. “We are evaluating ChatGPT. Your goal is to identify whether the lesson is effective or ineffective.”

Across the United States, universities and school districts are scrambling to get a handle on new chatbots that can generate humanlike texts and images. But while many are rushing to ban ChatGPT to try to prevent its use as a cheating aid, teachers such as Shuman are leveraging the innovations to spur more critical classroom thinking. They are encouraging their students to question the hype around rapidly evolving AI tools and consider the technologies’ potential side effects.

The aim, these educators say, is to train the next generation of technology creators and consumers in “critical computing.” That is an analytical approach in which understanding how to critique computer algorithms is as important as — or more important than — knowing how to program computers.

New York City Public Schools, the nation’s largest district, serving about 900,000 students, is training a cohort of computer science teachers to help their students identify AI biases and potential risks. Lessons include discussions on defective facial recognition algorithms that can be much more accurate in identifying white faces than darker-skinned faces.
