In 2022, the AI boutique movie became a blockbuster.
It took exactly ten years for the modern AI story to transform from the new new thing for a bunch of geeks into popular entertainment for the masses. In 2022, the AI conversationalist ChatGPT signed up more than one million users in five days, and the AI image generator Midjourney amassed six million members in less than six months.
The 2012 deep learning breakthrough in image identification convinced many computer science PhD candidates to switch their dissertation work to the new, superior method for finding patterns in data. Even more important, the entire “Silicon Valley” community fell for the hype created by this new phase of the six-decade-long quest to replicate human intelligence in a computer.
Most hyped was the idea of self-driving vehicles. We were told that we were the last car-owning generation and that by 2022 there would be broad deployment of driverless cars produced by mainstream automobile manufacturers. Humans are so bad at driving that “the bar for AI is low,” observed one VC.
Instead, Ford announced in its most recent earnings report that it is shifting its resources from driverless cars to developing advanced driver assistance systems, and autonomous vehicle startup Argo AI folded after raising $3.6 billion, mostly from Ford and Volkswagen.
Amnon Shashua, co-founder and CEO of Mobileye, predicted a few years ago that by 2030 driving would mainly be a sport. Mobileye (Nasdaq: MBLY), which still gets most of its revenues from driver assistance and alert systems rather than from fully autonomous vehicle systems, defied the general market trend and became one of the most successful IPOs of the year, with its market cap going from $17 billion to $27 billion.
Days after filing for an IPO, Mobileye stopped testing autonomous vehicles in New York, 15 months after the company first announced it had expanded its robotaxi testing program to the city. But also this year, Cruise launched its driverless robotaxi service in San Francisco over the summer and is now expanding it to Austin and Phoenix. We may still own cars and get driver’s licenses decades from now, but the long road to “autonomous” AI will get us to specific locations in specific situations and uses. Eventually.
It turned out that the bar was set very high for AI, with the hype, dollars, and attention focused on the use of AI by the masses in the real world, as in driving (or not). In 2022, the hype, dollars, and attention shifted completely to the use of AI by the masses in a made-up world: the digital world created by Tim Berners-Lee thirty years ago, together with all the Web (“Internet”) entrepreneurs, researchers, investors, businesses, and users who came after him.
As the latest AI programs have been trained on the data accumulated on the Web (text, images, videos) and can be used to generate new data, the hype in 2022 was all about how AI has become “creative.” Nonsense. The real creativity lies in the ingenuity and imagination of the humans who tinker with, modify, and improve the data analysis method of deep learning (which originated as “machine learning” and “neural networks” in the 1950s). Specifically, over the last five years (since the publication of “Attention Is All You Need”), they have dramatically improved what can be done in the area of text analysis.
OpenAI’s Generative Pre-trained Transformer (GPT) gave a new meaning to Silicon Valley’s obsession with rapidly “scaling up.” GPT-1, introduced in June 2018, was trained on 4.5 gigabytes of data and had 117 million parameters. Less than a year later (February 2019), the second version of GPT was trained on 40 gigabytes of data and had 1.5 billion parameters. GPT-3, introduced in May 2020, was trained on 753 gigabytes of data and had 175 billion parameters.
A new “AI Law” describes this scaling up: the computing power required to run these large language models is doubling every 14 weeks, far outpacing Moore’s Law, which describes how computer power itself doubles every 18 months. We also learned this year that, true to another Silicon Valley mantra, “The Unreasonable Effectiveness of Data,” more data trumps larger models (see DeepMind’s Chinchilla).
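To get a feel for how different these two doubling rates are, here is a minimal back-of-the-envelope sketch in Python; the 14-week and 18-month figures are simply the ones cited above, and the resulting multipliers are illustrative only:

```python
# Compare two doubling rates: compute for large language models (doubling
# every ~14 weeks, per the "AI Law" above) vs. Moore's Law (computer power
# doubling every ~18 months).

WEEKS_PER_YEAR = 52

def growth_factor(years: float, doubling_period_weeks: float) -> float:
    """Total multiplicative growth after `years`, given a doubling period in weeks."""
    doublings = years * WEEKS_PER_YEAR / doubling_period_weeks
    return 2 ** doublings

MOORE_WEEKS = 18 * WEEKS_PER_YEAR / 12  # 18 months expressed in weeks (~78)

for years in (1, 2, 5):
    llm = growth_factor(years, 14)
    moore = growth_factor(years, MOORE_WEEKS)
    print(f"{years} year(s): LLM compute x{llm:,.0f} vs. Moore's Law x{moore:,.1f}")
```

Under these assumptions, a 14-week doubling period compounds to roughly a 13x increase in one year, while an 18-month doubling period yields only about 1.6x over the same period, which is the gap the “AI Law” comparison is pointing at.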
More than 59 zettabytes (59 trillion gigabytes) of data will be created, captured, copied, and consumed in the world this year, according to IDC. For seventy-five years we have been on a fascinating journey from “data processing” to “data mining” to “big data,” with two big catalysts in the last thirty years. In addition to the Web linking everything to everything, the introduction of the smartphone fifteen years ago turned billions of people around the world into non-stop consumers and creators of data. Big data represents a big opportunity for some: startup Cinchy, for example, offers a data-centric approach to help manage the data tsunami.
Some of this data reflects the real world, but a lot of it has been generated by creative humans and as such it reflects fantasies, biases, and prejudices, all products of the fertile human imagination. What will be the impact of generative AI, which is based on this data, on our mental health? How do we ensure we have a “responsible AI” that protects us from harm and, to begin with, protects our data?
Tim O’Reilly believes we can fix AI problems by fixing ourselves first and that generative AI promises to bring creativity to the masses. Many entrepreneurs just forge ahead with the publicly available generative AI programs, modifying and/or combining them to create a whole new generative experience.
Examples include the first multimodal generative AI video platform to combine text, image, and animation in one interface, and an NLP expert turned entrepreneur who is using generative and other AI tricks to challenge Google in search, the “killer app” of the Web.
With or without large language models, other entrepreneurs apply the data analysis power of deep learning (and other modern AI and machine learning variants) to practical problems, with some success. For example: AI-driven microbiome-based therapeutics; AI-based retinal imaging and diagnostics; automating the repetitive work that data professionals perform, improving their productivity; checkout-free AI technology for grocery stores; helping hospitals address acute labor shortages; improving the manufacturing of syringes; saving the bees and the global food supply; and better understanding the proteome to bring precision medicine to oncology.
The recently published Working with AI describes 29 case studies of human-machine collaboration in enterprises that successfully use AI to augment rather than replace their employees. Two other books, published earlier this year, provide practical lessons from thirty years of working with data and insights from companies addressing the challenges and opportunities of big data analysis.
In the corporate world, AI has become mainstream, albeit with some implementation challenges. A Gartner survey found that, on average, 54% of AI projects make it from pilot to production, about the same percentage as in 2019. According to Forrester’s 2022 Data And Analytics Survey, 73% of data and analytics decision-makers are building AI technologies and 74% are seeing a positive impact in their organizations. The just-published McKinsey 2022 global AI survey found that 50% of respondents reported adopting AI in at least one business area, up from 20% in 2017 (the figure peaked at 58% in 2019).
And what about the masses?
In March 2022, Pew Research published the results of a survey showing the general public’s ambivalence about AI: “45% say they are equally excited and concerned about the increased use of AI programs in daily life, compared with 37% who say they are more concerned than excited and 18% who say they are more excited than concerned.”
The respondents may not be entirely clear about what exactly is “AI.” Last year, the Allen Institute for AI found in an online survey that only 35% of participants responded correctly (false) to the statement “AI is capable of thinking for itself, independent of human beings.”
When I entered this statement into ChatGPT, here’s what I got:
It depends on how you define “thinking for itself.” Some AI systems are capable of making decisions and taking actions without direct human input, but they are not truly independent in the sense that they are still governed by the rules and constraints programmed into them by humans. AI systems do not have their own motivations or desires, and they do not have consciousness in the same way that humans do. They are tools that are designed to perform specific tasks, and they rely on humans to provide them with goals and objectives. In this sense, AI is not capable of truly “thinking for itself” in the way that a human does.
May the thinking humans behind ChatGPT and the upcoming offspring of large language models continue to educate the public, through ChatGPT or any other channel, about what “AI” is and isn’t. How about adding this paragraph (which I published earlier this year) as a guardrail for upcoming generative AI applications:
Today’s “AI” is simply the most recent stage of computer-based learning from data. The evolution of computer technology over the last 75 years can be divided into two major eras: the first has been focused mainly on improving the speed of computers and the second (starting in the 1970s) on storing, organizing, and analyzing the data that computers, those perennial observers, collect… “Computational statistics,” “machine learning,” “predictive analytics,” “data science,” are some of the labels that have been given over the years to the marriage of computing and statistical analysis. “Deep Neural Networks” and “AI” are the latest.
In 2022, the marriage of computing and statistical analysis became entertainment for the masses.