Taking On the AI-Assisted Fraudsters

[Image: AI-Assisted Fraud, Kannan Srinivasan]

Artificial intelligence is fueling a major transformation in the financial fraud landscape. AI has democratized criminal sophistication, driving the cost of committing fraud down and multiplying the malicious actors that financial institutions have to fight.

What can these institutions do to mitigate increasingly sophisticated frauds and scams? In a recent PaymentsJournal podcast, Kannan Srinivasan, Vice President for Risk Management, Digital Payment Solutions at Fiserv, and Don Apgar, Director of the Merchant Payments Practice at Javelin Strategy & Research, discussed how fraudsters are using generative AI to hone social engineering and bypass authentication, and how we can fight back.

The Deep-Fake Threat

Driven by AI, deep fakes represent a new frontier in fraud. There has been a 3,000% increase in deep-fake fraud over the last year and a 1,200% increase in phishing emails since ChatGPT was launched.

Synthetic voices have been around for decades. They used to sound like hollow robots, but recent advances have made it possible to clone a voice from just a few seconds of audio. The results are so realistic that fraudsters used a deep-faked voice of a company executive to fool a bank manager into transferring $35 million to them.

“In banking, especially at the wire desk, talking to the customer is always considered the gold standard of verification,” said Apgar. “So if somebody sends an e-mail and says I want to initiate a wire, they’ll actually have to talk to a banker. But now, if the voice can be cloned, how do bankers know if it’s real or not?”

In business applications, single-channel communication should not be accepted, said Srinivasan. “If you get a voice call from somebody to do a certain thing, don’t just act on that,” he said. “Send an email or a text to confirm that you heard it from that person. Or hang up the phone and confirm through another channel that this is exactly what they wanted.

“We hear stories about a phone call coming in and saying your son has met with an accident and they’re in a hospital, you need to send $8000 for an emergency procedure. They prey on human emotions. We have to make sure that we step back, think about what’s happening, then call your family or friend to make sure that the news is accurate.”
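As a rough illustration of that multi-channel principle, the sketch below shows a wire request received on one channel being held until the customer confirms a one-time code delivered over an independent second channel. The names and the send_sms transport are hypothetical stand-ins, not a real banking or messaging API.

```python
import secrets
from dataclasses import dataclass, field

# Hypothetical out-of-band confirmation for a wire request.
# send_sms() is a placeholder transport, not a real API.

@dataclass
class WireRequest:
    customer_id: str
    amount: float
    origin_channel: str            # e.g. "phone", "email"
    confirmation_code: str = field(default="")
    confirmed: bool = False

def send_sms(customer_id: str, message: str) -> None:
    print(f"[SMS to {customer_id}] {message}")

def initiate_wire(req: WireRequest) -> None:
    # Never act on the originating channel alone: push a one-time code
    # over a different channel and wait for the customer to echo it back.
    req.confirmation_code = secrets.token_hex(3)
    send_sms(req.customer_id,
             f"Confirm wire of ${req.amount:,.2f} with code {req.confirmation_code}")

def confirm_wire(req: WireRequest, code_from_customer: str) -> bool:
    # Constant-time comparison avoids leaking the code through timing.
    req.confirmed = secrets.compare_digest(req.confirmation_code,
                                           code_from_customer)
    return req.confirmed

req = WireRequest(customer_id="cust-123", amount=250_000.0, origin_channel="phone")
initiate_wire(req)
# The wire is released only if confirm_wire(req, echoed_code) returns True.
```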

A Range of Use Cases

Imposter scams have also exploded recently across other use cases. Large language models can take a phishing email, customize the content, and iterate on it until the scammer gets a successful response from the victim.

Sophisticated criminals are creating packages for less-sophisticated criminals to buy. For $100 a month, a would-be fraudster can purchase a turnkey bot-as-a-service application. To run a fraud operation, they just need to upload the victim’s information, such as their phone number, along with the name and phone number of the business to be impersonated.

The bot will automatically call the victim and impersonate the business, often requesting that they read out the one-time password. Once the criminal gets the OTP, they can do whatever they want with it, including logging into the institution under attack, authenticating transactions, and changing passwords.
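A common server-side countermeasure, sketched below assuming the pyotp TOTP library, is to accept a one-time password only within its current time step and only for a limited number of attempts per session, which shrinks the window in which a code phished by a voice bot remains usable.

```python
import pyotp  # third-party TOTP library (pip install pyotp); assumed available

MAX_ATTEMPTS = 3
_failed: dict[str, int] = {}   # session_id -> failed verification attempts

def verify_otp(session_id: str, secret: str, code: str) -> bool:
    # Lock the session after repeated failures instead of letting a bot
    # keep relaying guesses.
    if _failed.get(session_id, 0) >= MAX_ATTEMPTS:
        return False
    totp = pyotp.TOTP(secret)
    # valid_window=0 accepts only the current 30-second step, shrinking
    # the window during which a phished-and-relayed code still works.
    ok = totp.verify(code, valid_window=0)
    if not ok:
        _failed[session_id] = _failed.get(session_id, 0) + 1
    return ok

secret = pyotp.random_base32()
print(verify_otp("sess-1", secret, pyotp.TOTP(secret).now()))  # True
print(verify_otp("sess-1", secret, "000000"))                  # False
```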

The entry barrier to committing fraud has come down significantly. “There’s almost a multiplier effect on the attack vectors end,” said Apgar, “because AI is not only making it easier to crank out more and more phishing emails more efficiently, but it also makes them more realistic.”

How Are We Stopping Fraud?

Machine learning models have allowed us to identify pockets of fraud and scams so that we can detect and stop them. Automated machine-learning tools have allowed Fiserv to perform this function at scale.

Srinivasan said that Fiserv is also deploying self-learning models, which regenerate at a more automated pace. Because models can be refreshed much more frequently, they are more effective at detecting any change in fraud patterns.
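A minimal sketch of that retraining cadence, assuming scikit-learn and a stand-in feed of freshly labeled transactions, shows the core loop: refit on a rolling window of recent data so a shift in fraud patterns surfaces in the next model rather than months later.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def latest_labeled_batch(n: int = 1000):
    # Stand-in for a feed of recent, labeled transactions; the five
    # features are illustrative risk signals, not real ones.
    X = rng.normal(size=(n, 5))
    y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=n) > 1.5).astype(int)
    return X, y

model = None
for day in range(7):                          # e.g. retrain daily
    X, y = latest_labeled_batch()
    model = LogisticRegression(max_iter=1000).fit(X, y)
    print(f"day {day}: retrained on {len(y)} rows, fraud rate {y.mean():.1%}")
```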

“We use more than 500 risk signals to identify any emerging trend and deploy preventative measures against them,” said Srinivasan.
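To make the idea concrete, here is a toy version of folding many named risk signals into a single score. The signal names and weights are invented for illustration; Fiserv’s production models are far richer.

```python
# Toy score over named risk signals; names and weights are invented.
SIGNAL_WEIGHTS = {
    "new_device": 0.9,
    "geo_velocity": 1.4,              # "impossible travel" between logins
    "payee_added_recently": 0.7,
    "amount_vs_history_zscore": 1.1,
}

def risk_score(signals: dict[str, float]) -> float:
    # Unknown signals contribute nothing rather than raising an error.
    return sum(SIGNAL_WEIGHTS.get(name, 0.0) * value
               for name, value in signals.items())

txn = {"new_device": 1.0, "geo_velocity": 1.0, "amount_vs_history_zscore": 3.2}
print(round(risk_score(txn), 2))      # 5.82
```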

Getting Started

For a financial institution initiating a strategy against AI fraud, the first step is to inventory all of its touchpoints and conduct a vulnerability assessment. Determine every risk area that could be subject to a fraud attempt, such as the new account opening process or login controls. Don’t forget money movement, changes in user behavior, and brand-new usage patterns.
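One lightweight way to start that inventory is to record each touchpoint and its candidate risk areas as structured data, giving the vulnerability assessment a concrete checklist to walk. The schema below is an assumption; the categories come from the paragraph above.

```python
from dataclasses import dataclass

@dataclass
class Touchpoint:
    name: str
    risk_areas: list[str]
    assessed: bool = False

# Categories drawn from the paragraph above; the entries are examples.
inventory = [
    Touchpoint("account_opening", ["synthetic identities", "deep-faked documents"]),
    Touchpoint("login", ["credential stuffing", "OTP relay bots"]),
    Touchpoint("money_movement", ["wire redirection", "mule accounts"]),
    Touchpoint("user_behavior", ["changed patterns", "brand-new usage"]),
]

for tp in inventory:
    status = "assessed" if tp.assessed else "to review"
    print(f"{tp.name} ({status}): {', '.join(tp.risk_areas)}")
```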

Two other back-end processes are also critical. The first is customer education on scam awareness. Reach out to consumers via multiple channels to make sure they are aware of how these new scams work, so that when they are targeted by a scam artist, they alert the bank to what is happening.

The second is to educate employees and frontline representatives on the techniques fraudsters use, to ensure they are not socially engineered while reviewing a transaction or removing a hold. Then, when a user calls, they can brief the consumer on potential scam activity and make sure they are not falling victim to one.

The most successful fraud mitigation outcomes come from a hybrid approach: machine learning working in conjunction with an intelligent human to ensure the response is applied in context. The organization also needs strong governance and oversight of the results its models produce, so there is no bias in the strategy.
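As a sketch of that hybrid routing (the thresholds and actions are illustrative, not Fiserv’s), a model score might trigger an automatic decision only at the extremes, sending the ambiguous middle band to a human analyst:

```python
def route(score: float) -> str:
    # Illustrative thresholds: automate the clear cases and send the
    # ambiguous middle band to a human analyst for contextual review.
    if score >= 0.90:
        return "decline_and_alert"
    if score >= 0.60:
        return "human_review"
    return "approve"

for s in (0.95, 0.72, 0.10):
    print(s, "->", route(s))
```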

“Having a variety of mitigation options offered to the client or the financial institution helps a lot,” said Srinivasan. “Pick and choose or deploy all of them, so that we can keep the consumer safe.”

While fraud attempts will always be an issue, Fiserv and financial institutions are working toward solutions that mitigate fraud while improving the customer experience. Working together, we should be able to hold fraud losses to very low levels. By combining layered security strategies, the industry can mount a more robust defense against both existing and emerging payment fraud threats.

The post Taking On the AI-Assisted Fraudsters appeared first on PaymentsJournal.
