成人**美色阁,欧美一级专区免费大片,久久这里只有精品18,国产日产欧产美韩系列app,久久亚洲电影www电影网,王多鱼打扑克视频下载软件

 
+更多
專家名錄
唐朱昌
唐朱昌
教授,博士生導師。復旦大學中國反洗錢研究中心首任主任,復旦大學俄...
嚴立新
嚴立新
復旦大學國際金融學院教授,中國反洗錢研究中心執行主任,陸家嘴金...
陳浩然
陳浩然
復旦大學法學院教授、博士生導師;復旦大學國際刑法研究中心主任。...
何 萍
何 萍
華東政法大學刑法學教授,復旦大學中國反洗錢研究中心特聘研究員,荷...
李小杰
李小杰
安永金融服務風險管理、咨詢總監,曾任螞蟻金服反洗錢總監,復旦大學...
周錦賢
周錦賢
周錦賢先生,香港人,廣州暨南大學法律學士,復旦大學中國反洗錢研究中...
童文俊
童文俊
高級經濟師,復旦大學金融學博士,復旦大學經濟學博士后。現供職于中...
湯 俊
湯 俊
武漢中南財經政法大學信息安全學院教授。長期專注于反洗錢/反恐...
李 剛
李 剛
生辰:1977.7.26 籍貫:遼寧撫順 民族:漢 黨派:九三學社 職稱:教授 研究...
祝亞雄
祝亞雄
祝亞雄,1974年生,浙江衢州人。浙江師范大學經濟與管理學院副教授,博...
顧卿華
顧卿華
復旦大學中國反洗錢研究中心特聘研究員;現任安永管理咨詢服務合伙...
張平
張平
工作履歷:曾在國家審計署從事審計工作,是國家第一批政府審計師;曾在...
轉發
上傳時間: 2024-06-15      瀏覽次數:599次
AI Chatbot Fools Scammers & Scores Money-Laundering Intel

 

https://www.darkreading.com/cyber-risk/ai-chatbot-fools-scammers-and-scores-money-laundering-intel

 

Responding to scammers' emails and text messages typically has been the fodder of threat researchers, YouTube stunts, and even comedians.

 

Yet one experiment using conversational AI to answer spam messages and engage fraudsters in conversations has shown that large language models (LLMs) can interact with cybercriminals, gleaning threat intelligence by diving down the rabbit hole of financial fraud — an effort that usually requires a human threat analyst.

 

Over the past two years, researchers at UK-based fraud-defense firm Netcraft used a chatbot based on Open AI's ChatGPT to respond to scams and convince cybercriminals to part with sensitive information: specifically, banks account numbers at more than 600 financial institutions spanning 73 different countries that are used to transfer stolen money.

 

Overall, the technique allows threat analysts to extract more details about the infrastructure used by cybercriminals to con people out of their money, says Robert Duncan, vice president of product strategy for Netcraft.

 

"We're effectively using AI to emulate a victim, so we play along with the scam to get to the ultimate goal, which typically [for the scammer] is to receive money in some form," he says. "It's proven remarkably robust at adapting to different types of criminal activity ... changing behavior between something like a romance scam, which might last months, [and] advanced fee fraud — where you get to the end of it very quickly."

 

As international fraud rings are profiting from scams — especially romance and investment fraud operating out of cyber-scam centers in Southeast Asia — defenders are searching for ways to expose cybercriminals' financial and infrastructure components and shut them down. Countries, such as the United Arab Emirates, have embarked on partnerships to develop AI in ways that can improve cybersecurity. Using AI chatbots could shift the technological advantage from attackers back to defenders, a form of proactive cyber defense.

 

Personas With Local Languages

Netcraft's research shows that AI chatbots could help curb cybercrime by forcing cybercriminals to work harder. Currently, cybercriminals and fraudsters use mass email and text-messaging campaigns to cast a wide net, hoping to catch a few credulous victims from which to steal money.

 

The two-year research project uncovered thousands of accounts linked to fraudsters. While Duncan would not reveal the name of the banks, the scammers' accounts were mainly in the United States and the United Kingdom — likely because the personas donned by the AI chatbots were from those regions as well. Financial fraud works better when using bank accounts in the same country as the victim, he says.

 

The company is already seeing that distribution change, however, as it adds more languages to its chatbot's capabilities.

 

"When we spin up some new personas in Italy, we're now seeing more Italian accounts coming in, so it's really a function of where we're running these personas and what language we're having them speak in," he says.

 

The promise of using AI chatbots to engage with scammers and cybercriminals is that machines can conduct such conversations at scale. Netcraft has bet on the technology as a way to acquire threat intelligence that would not otherwise be available, announcing its Conversational Scam Intelligence service at the RSA Conference in May.

 

AI on AI

Typically, scammers attempt to convince the victims to buy cryptocurrency or gift cards as the preferred way of payment, but eventually hand over bank account information, according to Netcraft. The goal in using an AI chatbot is to keep the conversation going long enough to reach those milestones. The average conversation results in cybercriminals sending 32 messages and the chatbot issuing 15 replies.

 

When the AI chatbot system succeeds, it can harvest important threat data from cybercriminals. In one case, a scammer promising an inheritance of $5 million to the "victim" sent information on 17 different accounts at 12 different banks in an attempt to complete the transfer of an initial fee. Other fraudsters have impersonated specific banks, such as Deutsche Bank and the Central Bank of Nigeria, to convince the "victim" to transfer money. The chatbot duly collected all the information.

 

While Netcraft's current focus with the experiment is to gain in-depth threat intelligence, the platform could be operationalized to engage fraudsters on a larger scale, flipping the current asymmetry between attackers and defenders. Rather than attackers using automation to increase the workload on defenders, a conversational system could widely engage cybercriminals, forcing them to have to figure out which conversations are real and which are not.

 

Such an approach holds promise, especially since attackers are starting to adopt AI in new ways as well, Duncan says.

 

"We've definitely seen indicators that attackers are sending texts that resemble the type of texts that ChatGPT puts out," he says. "Again, it's very hard to be certain, but we would be very surprised if we weren't already talking back to AI, and essentially we have an AI-on-AI conversation."