
What is artificial intelligence—and what is it not?

by Spencer Feingold | Mar 15, 2023 | Cybersecurity

Image credit: Morio, CC BY-SA 3.0 (http://creativecommons.org/licenses/by-sa/3.0/), via Wikimedia Commons


  • Artificial intelligence (AI) is set to transform many aspects of day-to-day life.
  • There are, however, many misconceptions about AI and its potential uses.
  • “The exaggerations about AI’s potential largely stem from misunderstandings about what AI can actually do,” said Kay Firth-Butterfield, the Head of Artificial Intelligence and Machine Learning at the World Economic Forum.


Broadly speaking, artificial intelligence (AI) is a field of study and type of technology characterised by the development and use of machines capable of performing tasks that would usually require human intelligence.

AI has already transformed many industries and aspects of society, ranging from the introduction of customer service chatbots to enhanced GPS and mapping applications. However, there are several misconceptions about AI and its potential uses.

In the following Q&A, Kay Firth-Butterfield, the Head of Artificial Intelligence and Machine Learning at the World Economic Forum, details the different types of AI, important developments and applications in the field of machine learning and—perhaps most importantly—discusses common misunderstandings about AI.


What are the different types of AI?

“AI consists of several different machine learning models. These include, but are not limited to, reinforcement learning, supervised and unsupervised learning, computer vision, natural language processing and deep learning.

“All of the machine learning models develop and advance statistical predictions, but differ in their use and comprehension of data. ChatGPT, for example, is an AI-powered chatbot that is able to predict the most likely next word in a sentence. With numerous and relatively accurate predictions, ChatGPT is able to create coherent paragraphs.”
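The next-word prediction Firth-Butterfield describes can be illustrated with a deliberately simple sketch. The snippet below is a hypothetical bigram model built from a tiny made-up corpus; it only counts which word most often follows another. ChatGPT's actual mechanism is a large neural network trained on vastly more data, but the basic idea — predicting the most likely next word from observed patterns — is the same.

```python
from collections import Counter, defaultdict

# Toy corpus (invented for illustration only).
corpus = (
    "artificial intelligence can predict the next word "
    "a model can predict the most likely word "
    "the model learns from data"
).split()

# Build bigram counts: for each word, tally the words that follow it.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word`, or None."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("predict"))  # "the" — the most frequent follower of "predict"
```

Chaining such predictions word by word is how a language model produces whole sentences; with far richer statistics than bigram counts, the output becomes the coherent paragraphs described above.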



What do most people misunderstand about AI?

“AI is not intelligence—it is prediction. With large language models, we’ve seen an increase in the machine’s ability to accurately predict and execute a desired outcome. But it would be a mistake to equate this to human intelligence.

“This is clear when examining machine learning systems that, for the most part, can still only do one task very well at a time. This is not common sense and is not equivalent to human levels of thinking that can facilitate multi-tasking with ease. Humans can take information from one source and use it in many different ways. In other words, our intelligence is transferable—the ‘intelligence’ of machines is not.” 


Where do you see AI’s greatest potential?

“AI has enormous potential to do good in various sectors, including education, healthcare and the fight against climate change. FireAId, for instance, is an AI-powered computer system that uses wildfire risk maps to predict the likelihood of forest fires based on seasonal variables. It also analyzes wildfire risk and severity to help determine resource allocation.

“Meanwhile, in healthcare, AI is being used to improve patient care through more personal and effective prevention, diagnosis and treatment. Improved efficiencies are also lowering healthcare costs. Moreover, AI is set to dramatically change—and ideally improve—care for the elderly.”


Where do you think AI’s potential impact has been exaggerated?

“The exaggerations about AI’s potential largely stem from misunderstandings about what AI can actually do. We still see many AI-powered machines that consistently hallucinate, which means they make a lot of errors. So the idea that this type of AI will replace human intelligence is unlikely.

“Another hindrance to AI’s adoption is the fact that AI systems draw their data from unrepresentative sources. The vast majority of data is produced by a section of the population in North America and Europe, leading AI systems to reflect that worldview. ChatGPT, for instance, largely pulls the written word from those regions. Meanwhile, nearly 3 billion people still do not have regular access to the internet and have not created any data themselves.”


What are the biggest risks associated with AI?

“AI systems are incredibly new. Therefore, companies and the general public need to be careful before using them. Users should always check that an AI system has been designed and developed responsibly—and has been well tested. Think about other products; a car manufacturer would never release a new vehicle without rigorous testing beforehand.

“The risk of using untested and poorly developed AI systems not only threatens brand value and reputation, but also opens users up to litigation. In the United States, for example, government regulations have made clear that businesses will be held accountable for the use of AI-powered hiring tools that discriminate.

“There are also the major sustainability concerns surrounding AI and advanced computer systems, which use a tremendous amount of power to develop and operate. Already, the carbon footprint of the entire information and communications technology ecosystem equals the aviation industry’s fuel emissions.”


What steps can be taken to ensure AI is developed responsibly?

“First and foremost, people should think about whether or not AI is the best tool for solving a problem or improving a system. If AI is appropriate, the system should be developed with care and well-tested before it is released to the public.

“Users should also be aware of legal regulations—and the public and private sector should work together to develop adequate guardrails for the applications of AI.

“Lastly, users should use the various tools and resources that have been developed to help usher in responsible AI.”


Republished from the World Economic Forum under a Creative Commons license to point warfighters and national security professionals to reputable and relevant war studies literature. Read the original article.


Spencer Feingold

Spencer Feingold is Digital Editor of the World Economic Forum.
