Hype, Fright And Ethics In Implementing AI

I recently attended and moderated a session at the Thomson Reuters Momentum conference in Austin, Texas. My session was on Hype vs. Fright: Managing Consumer Perceptions and AI Experiences. Many topics were covered at the conference, including investment and employment trends driven by AI, what today's AI does well, how people will use it in their everyday jobs and, of course, what we should be concerned about with AI.

My panel included Sandeep Dave, Chief Digital & Technology Officer of the commercial real estate firm CBRE; Lauren Kunze, CEO of the AI company ICONIQ AI; Sri Shivananda, Executive VP and CTO of PayPal; and Dr. Jesse Ehrenfeld, President of the American Medical Association. I moderated the session, representing IEEE.

To start, I noted that many people in IEEE work on AI and its applications and that IEEE sponsors many conferences and publications focused on the technical and practical operation of AI. In addition, the IEEE Standards Association (SA) has organized several activities around the ethics of AI. In 2016, IEEE SA released the first edition of its report on Ethically Aligned Design of Autonomous and Intelligent Systems; the latest update of this report was released in 2019. Below is a brief history of IEEE activities regarding the use of AI.

IEEE has been working with policy makers around the world on outlining the proper use of AI. Several IEEE standards have been created, or are being developed, that address the ethical use of AI. These include standards on data privacy, algorithmic bias, child and student data governance and autonomous systems transparency, among many other topics. IEEE SA is also looking to work with third parties on certifying AI practices consistent with evolving AI standards.

Here are some comments from the other members of the panel. Sandeep Dave said that AI/ML is not new; CBRE already deploys ML in several areas of the real estate lifecycle to unlock efficiency gains and to make predictions and forecasts (e.g., market movement, asset failures). He sees a significant upside from AI across the full lifecycle of real estate, from doing things differently (i.e., significant productivity benefits) to doing different things (e.g., iterative generative design toward a desired goal, or a combination of GenAI and visualization capabilities to 'experience' a space before building it).

Lauren Kunze said that generative AI is overhyped in the sense that enterprises still need rules, and that many enterprises will make expensive mistakes because they do not understand what AI can and cannot do. She also said that AI will disrupt the way we work and the jobs of the future. As an example, she pointed to a case study by her company in which fashion retailer H&M worked with Meta to use a virtual creator, Kuki (@kuki_ai) from ICONIQ AI, on Instagram to promote real fashion products.

Sri Shivananda said that fear, uncertainty and doubt are real and continue. He said that AI is an enabler and a means, but not the outcome itself. Trust will be necessary to optimize results using AI, and true innovation will come from reconciling two opposing agendas: first, to harness the power, and second, to make it human.

Dr. Jesse Ehrenfeld said that the AMA House of Delegates will develop principles and recommendations on the benefits and unforeseen consequences of AI-generated medical advice. He said that physicians embrace emerging technologies but cannot ignore concerns about reliability, regulation and public policy. He also said that even the most advanced AI-enabled tools still cannot diagnose and treat diseases.

There are other considerations to keep in mind regarding the use of modern, complex AI models. Sandeep Dave said that there are real costs associated with deployment and that enterprise access can get expensive very quickly, making ROI difficult to achieve. Furthermore, he said that the use of GenAI may run counter to an organization's sustainability goals because of the energy consumed in training these models.

One of the topics the panel discussed was trust in AI when the methods behind its operation are not clear. Many AI algorithms build complex weighted models by recognizing patterns in data. These models do not follow ordinary human reasoning, which can lead to mistrust of what they are doing and of how useful the end results are.
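This opacity concern is easier to see concretely. Below is a minimal, illustrative Python sketch, not something presented by the panel, showing one common way practitioners probe an opaque, pattern-based model: permutation feature importance. The synthetic dataset, model choice and feature labels here are assumptions for illustration only.

```python
# Minimal sketch (illustrative assumptions throughout): probe which inputs
# an opaque, pattern-based model actually relies on, using permutation
# feature importance from scikit-learn.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic data standing in for whatever patterns a real model would learn.
X, y = make_classification(n_samples=1000, n_features=8, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "complex weighted model": hundreds of trees whose internal logic is
# not directly human-readable.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Permutation importance asks: how much does accuracy drop when one feature
# is scrambled? It does not explain the model's reasoning, but it does show
# which inputs the decisions depend on.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i, (mean, std) in enumerate(zip(result.importances_mean,
                                    result.importances_std)):
    print(f"feature_{i}: importance {mean:.3f} +/- {std:.3f}")
```

Techniques like this do not make a model's internal weights readable, but they give stakeholders evidence about what the system is actually relying on, which is one practical step toward the kind of trust the panel discussed.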

It seems clear that greater knowledge of what an AI system is doing and how it makes its decisions will go a long way toward helping us understand what is and is not possible with AI, and toward helping us use AI in ethical ways.
