
Info-Stealing Malware Is Harvesting ChatGPT Credentials

Security Alert Again Highlights Risk of Sharing Sensitive Information With Chatbots

Compromised chatbot credentials are being bought and sold by criminals who frequent underground marketplaces for stolen data, security researchers warn. The alert comes as global use of ChatGPT and rival artificial intelligence offerings continues to surge despite worries from some employers that the chatty bots could blab sensitive information and as regulators voice privacy concerns.



In a new report, cybersecurity firm Group-IB said chatbot credentials now regularly appear for sale on underground criminal markets. The firm tracked the greatest volume of stolen credentials from systems in India, followed by Pakistan, Brazil, Vietnam, Egypt, the United States and France.


Credentials for chatbots such as OpenAI’s Microsoft-backed ChatGPT and Google’s Bard aren’t being targeted outright, the firm said. Rather, the credentials are being stolen en masse by desktop information-stealing malware such as Raccoon, Vidar and RedLine.


Info stealers target anything of potential value stored digitally on the infected system, including cryptocurrency wallet data, bank and payment card account access details, passwords for email and messaging services, and credentials saved in a browser.


The malware routes all information from an infected system – known in cybercrime circles as a “bot” – to the attacker or, in some cases, to a malware-as-a-service operation used by the attacker. In the latter case, service operators typically keep the most valuable information, including cryptocurrency wallet addresses and access details, for themselves. Everything else may end up being batched into “logs” and sold via dedicated forums and Telegram channels.


For organizations that use chatbots and want to safeguard their credentials, Group-IB said the solution is to use long and strong passwords and to enable two-factor authentication so criminals can’t easily use stolen chatbot credentials.
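As an illustration of the password half of that advice, here is a minimal Python sketch that generates a long, random password using the standard library's secrets module. The function name and length are illustrative assumptions rather than anything Group-IB prescribes, and such a password would still need to be paired with two-factor authentication and stored in a password manager.

```python
import secrets
import string


def generate_chatbot_password(length: int = 24) -> str:
    """Return a long, random password suitable for a chatbot account.

    Hypothetical helper for illustration only: pair passwords like this
    with two-factor authentication and keep them in a password manager.
    """
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))


if __name__ == "__main__":
    print(generate_chatbot_password())
```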


Chatbot Fever


The security alert over the theft of chatbot credentials comes as their use in the workplace continues to grow, although not necessarily with corporate oversight or strong controls in place to govern that use.


A survey of nearly 12,000 U.S. employees, conducted in February by social network app Fishbowl, found that 43% of professionals said they’d used a chatbot such as ChatGPT for a work-related task. “Nearly 70% of those professionals are doing so without their boss’ knowledge,” it reported.


Adoption of chatbots in the workplace is set to increase as the technology gets baked into more tools. While AI chatbots allow users to get human-sounding answers to questions they pose, current adopters say results remain mixed.


“What I found it does really, really well is give an answer with a lot of confidence – so much confidence that I tend to believe it, but almost half of the time it’s completely wrong,” David Johnson, a data scientist at Europol, the EU’s criminal intelligence and coordination agency, said at an EU conference on AI earlier this month.


Imperfect results to date haven’t dented the chatbot fever. Microsoft shares rose last week, pushing the company’s market capitalization to an all-time high of over $2.5 trillion, driven by market optimism for all things AI, including Microsoft’s addition of ChatGPT to its Bing search engine and Azure cloud computing platform – and the advertising and cloud service revenue that might result.


“We reaffirm our bullish-outlier viewpoint on generative AI and continue to see it driving a resurgence of confidence in key software franchises,” JPMorgan analysts said in a research note last week, Reuters reported.


Employee Warning: Don’t Feed the AI


While chatbot providers typically don’t divulge the data on which their tools have been trained or exactly how the underlying algorithms function, information that gets entered into a chatbot can end up as part of the large language model underpinning it. Last month, a team led by academics at the University of California, Berkeley, reported that ChatGPT appeared to have been trained using a number of copyrighted works, including the “Harry Potter” and “Game of Thrones” books.


An increasing number of businesses have been warning staff that if they use ChatGPT, they should not enter any sensitive information, trade secrets or proprietary code into the tool, since it might repeat those secrets back to someone else.


“Many enterprises are integrating ChatGPT into their operational flow,” says Dmitry Shestakov, Group-IB’s head of threat intelligence. “Employees enter classified correspondences or use the bot to optimize proprietary code. Given that ChatGPT’s standard configuration retains all conversations, this could inadvertently offer a trove of sensitive intelligence to threat actors if they obtain account credentials.”
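One way organizations act on such warnings is to screen prompts for obvious secrets before they reach a chatbot at all. The minimal Python sketch below illustrates the idea; the patterns, the looks_sensitive and submit_prompt names, and the placeholder submission step are hypothetical examples rather than part of ChatGPT or any product mentioned here, and a production setup would lean on a dedicated secrets scanner or data loss prevention tool.

```python
import re

# Hypothetical patterns for illustration; a real deployment would use a
# dedicated secrets scanner or DLP product rather than this short list.
SENSITIVE_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                           # AWS access key ID format
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),   # private key blocks
    re.compile(r"(?i)password\s*[:=]\s*\S+"),                  # inline "password=" assignments
]


def looks_sensitive(prompt: str) -> bool:
    """Return True if the prompt matches any known sensitive pattern."""
    return any(p.search(prompt) for p in SENSITIVE_PATTERNS)


def submit_prompt(prompt: str) -> None:
    """Refuse to forward prompts that appear to contain secrets."""
    if looks_sensitive(prompt):
        raise ValueError("Prompt appears to contain sensitive data; not sent.")
    # ... hand the cleared prompt to the chatbot API here ...
    print("Prompt cleared for submission.")


if __name__ == "__main__":
    submit_prompt("Summarize our Q3 release notes.")  # passes the check
```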


Given such concerns, some organizations have blocked use of chatbots outright, especially in regulated industries. That includes JPMorgan, which in February reportedly restricted staff use of ChatGPT over compliance concerns. Accenture, Amazon, Apple, Northrop Grumman and Samsung are among the large firms that have reportedly put guardrails on AI use by workers.


Last week, Google reminded employees of long-standing rules against entering confidential information, including proprietary code, into the tools they use, given the risk that it might end up being output in unforeseen ways, Reuters reported.


At the same time, Google has been rolling out its own Bard chatbot to 180 countries, including the U.S. and U.K.


Privacy Speed Bumps


The rollout has hit some speed bumps: Bard’s EU debut was blocked last week by Ireland’s Data Protection Commission. The privacy watchdog raised legal concerns about how Bard’s underlying algorithms handled people’s personal data.


Businesses offering any new service – AI or otherwise – that handles people’s personal information must typically first file a Data Protection Impact Assessment to demonstrate how the service will comply with EU General Data Protection Regulation rules, according to a note sent to clients last week by London law firm Cordery Compliance. “In this case the DPC was concerned that no DPIA had been submitted for its approval,” attorneys Jonathan Armstrong and André Bywater said in the note.


Until Google addresses the DPC’s concerns, they said, Bard’s release to the EU masses remains suspended.


