Security, bias risks are inherent in GenAI black box models

37:20
Content provided by TechTarget Editorial. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by TechTarget Editorial or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://ro.player.fm/legal.

From bias to hallucinations, it is apparent that generative AI models are far from perfect and present risks.

Most recently, tech giants -- notably Google -- have run into trouble after their models made egregious mistakes that reflect the inherent problem with the data sets upon which large language models (LLMs) are based.

Microsoft faced criticism when models from its partner OpenAI generated disturbing images of monsters and women.

The problem is due to the architecture of the LLMs, according to Gary McGraw, co-founder of the Berryville Institute of Machine Learning.

Because most foundation models are black boxes with security flaws built into their architecture, users have little ability to manage the risks, McGraw said on the Targeting AI podcast from TechTarget Editorial.

In January, the Berryville Institute published a report highlighting some risks associated with LLMs, including data debt, prompt manipulation and recursive pollution.

"These are some risks that need to be thought about while you're building your LLM application so that you don't put your business, your enterprise, your business, at more risk than you want to take on when you adopt this technology," McGraw said.

The risks are embedded in both closed and open source models, and in both small and large language models, he added.

"When people build their own language model, what they're often doing ... is taking a foundation model that's already developed and they're training it a little bit further with their own proprietary prompting," he continued. "These steps do not eradicate the risks that are built into the black box. In fact, all they do is hide them even further."

These risks can be dangerous in real-world situations such as the 2024 election, McGraw said. Because the language models are built on data from all over the web -- both reliable and unreliable -- LLMs trained on that data can be used to produce false and malicious information about the election.

"Using this technology, we need some way of controlling the output so that it doesn't get back out there into the world and just cause more confusion among people who don't know which way is up," he said.

Esther Ajao is a TechTarget Editorial news writer and podcast host covering artificial intelligence software and systems. Shaun Sutner is senior news director for TechTarget Editorial's information management team, driving coverage of artificial intelligence, unified communications, analytics and data management technologies. Together, they host the Targeting AI podcast series.
