Generative Artificial Intelligence (GenAI) has become one of the most transformational technologies of our time. From automated vehicles to virtual healthcare, GenAI is revolutionizing the way we live, work and do business. Despite the benefits, GenAI poses significant cybersecurity risks to our nation’s financial infrastructure.
Cybercriminals use GenAI to initiate cyberattacks, such as ransomware, phishing and disinformation campaigns, that result in financial losses, reputational damages and legal ramifications.
This edition of Cornerstone explores GenAI – specifically how it's being used in cybercrimes and how to protect yourself and your institution from AI-driven cyberattacks.
GenAI is a powerful artificial intelligence tool, also referred to as a deep-learning model, used to create high-quality text, audio, images and other content-based data. GenAI can collect, analyze and interpret large amounts of information from a variety of sources with little or no human intervention.
GenAI’s ability to store and process personal and proprietary information freely shared on the internet makes it an attractive tool for criminals. They leverage large language models (LLMs) to collect data on targeted victims and feed that information into AI-enabled software to create sophisticated, targeted scams and attacks that generate and launder illegal proceeds. Many of these tactics are extremely convincing and go undetected.
- AI Password Cracking: Commonly used passwords can be cracked in minutes by AI, putting personal and proprietary information at risk. Attackers also train AI models on the sound patterns of keyboard strokes to execute acoustic side-channel attacks, streamlining brute force attacks.
- AI-Powered Phishing Attacks: Phishing is a form of social engineering where attackers deceive people into revealing sensitive data or granting system access. Unlike traditional phishing scams, which can be easy to identify because of grammatical errors or unusual requests, AI-generated phishing emails draw on disparate data sets to produce extremely convincing messages that can be difficult to spot.
- AI-Powered Voice Phishing (Vishing) Attacks: Vishers use voice to trick users into divulging sensitive information. They use fraudulent phone numbers, voice-altering software, text messages and social engineering to steal identities, money and accounts. Cybercriminals manipulate victims, often the elderly, by creating urgent situations, such as a crisis related to a family member, to pressure them into wiring funds.
- Business Email Compromise (BEC): BEC fraud is extremely lucrative because it specifically targets profitable businesses. Attackers bypass traditional email filters and impersonate an authoritative sender, such as an executive, to trick the victim into taking immediate action, like transferring money to or from specific accounts.
- Deepfakes: Deepfakes use AI to generate entirely new video or audio with the goal of portraying something that did not actually occur. Cybercriminals leverage deepfake technology to harass and extort victims for money and services.
- Ransomware: Malicious software (malware) that encrypts data on a computer, making it unusable until the attacker receives payment and releases it. GenAI lowers the technical barrier to ransomware attacks by eliminating the need for advanced technical knowledge to exploit vulnerabilities and victims.
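To illustrate the password-cracking risk above, here is a minimal sketch of why short, simple passwords fall so quickly to brute force. The guess rate is a hypothetical assumption for illustration; real AI-assisted cracking prioritizes likely passwords rather than exhausting the keyspace, so common passwords fall far faster than this worst case.

```python
def worst_case_crack_seconds(alphabet_size, length, guesses_per_second=1e10):
    """Worst-case seconds to exhaust every password of a given length
    over a given alphabet, at a fixed (hypothetical) guess rate."""
    return alphabet_size ** length / guesses_per_second

# 8 lowercase letters vs. 12 characters drawn from all 94 printable
# ASCII symbols, at an assumed 10 billion guesses per second:
short_weak = worst_case_crack_seconds(26, 8)    # under a minute
long_strong = worst_case_crack_seconds(94, 12)  # many thousands of years
```

The arithmetic shows why length and character variety matter: each added character multiplies the search space, turning minutes into millennia.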
Misuses of AI
With the power and influence of GenAI growing rapidly as the technology advances and companies find exciting new uses for it, the potential for misuse and misrepresentation has grown as well. Multiple SEC settlements demonstrate the risk of companies defrauding investors by claiming to use AI technology to advance business practices while not actually having the claimed capability. At the forefront of this technological advancement, it is important to stay up to date on the capabilities of AI, especially when evaluating new software and business partners.
Investment Fraud
The SEC recently announced settled charges against two investment advisers, Delphia (USA) Inc. and Global Predictions Inc., for making false and misleading statements about their use of artificial intelligence. As GenAI becomes more prominent in business, it is important to recognize the associated risks, one of which is investment fraud. Investors gravitate toward new and emerging technology that promises successful business operations and profit growth. The SEC recommends that investors carefully review the disclosures companies make about their AI products. Investors should make sure they are working with a registered investment professional and verify the legitimacy of any investment campaign.
SEC Charges Founder of AI Hiring Startup Joonko with Fraud
“According to the SEC’s complaint, Joonko claimed to use artificial intelligence to help clients find diverse and underrepresented candidates to fulfill their diversity, equity, and inclusion hiring goals. To raise money for Joonko, the complaint alleges that Ilit Raz, CEO and founder, falsely told investors that Joonko had more than 100 customers, including Fortune 500 companies, and provided investors with fabricated testimonials from several companies expressing their appreciation for Joonko and praising its effectiveness.”
- SEC Press Release 2024-70
Deloitte’s Center for Financial Services predicts that Generative AI could enable fraud losses to reach $40 billion in the United States by 2027. As fraudsters increasingly turn to this cost-effective tool, it is imperative to recognize signs that you or your business is affected by AI-driven fraud.
- AI phishing emails are becoming harder to detect, so it’s important to verify the source directly before responding to any request for information or sensitive data. For instance, if you receive an email or text message from your bank asking for your PIN or account number, call the bank directly before providing any information.
- If you receive unsolicited emails or texts with links or attachments, don’t click on them; they may be infected with malware that could compromise your sensitive information.
- While GenAI technology can be used to create lifelike phishing emails, vishing calls and hidden malware, the technology is not without flaws. If an email, text, voice message or other online content is overly formal, oddly structured or simply feels off, that is a red flag that the content could be AI-generated and could pose a cybersecurity risk.
- If you are investing in companies involved in AI, be on the lookout for scams that include high-pressure sales tactics by unregistered individuals, promises of quick profits, or claims of guaranteed returns with little or no risk.
HSI encourages the public and private sectors to take proactive measures to avoid becoming a victim of AI-driven cyberattacks:
- Provide employee training to raise awareness of AI cybersecurity risks.
- Research and conduct security audits using cutting-edge tools to detect and remediate vulnerabilities in computer network systems.
- Have an encrypted, offline data backup strategy to prevent loss of data and revenue.
- Implement multi-factor authentication (MFA), Zero Trust architectures and other robust security measures to mitigate AI-enabled attacks.
- If something sounds too good to be true or suspicious, trust your instincts and validate the information. Use trusted contact information to validate sources prior to responding to any requests for information or sensitive data.
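For readers implementing the MFA recommendation above, one widely deployed form is the time-based one-time password (TOTP) standardized in RFC 6238, the scheme behind most authenticator apps. A minimal sketch using only the Python standard library follows; the secret shown is the RFC's published test key, not a real credential.

```python
import hmac
import hashlib
import struct
import time

def totp(secret, for_time=None, step=30, digits=6):
    """Compute an RFC 6238 time-based one-time password (HMAC-SHA1).

    The code changes every `step` seconds, so a stolen code is useless
    to an attacker moments later.
    """
    if for_time is None:
        for_time = int(time.time())
    counter = for_time // step                      # number of steps since epoch
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# RFC 6238 test vector: key "12345678901234567890", time 59 -> "94287082"
print(totp(b"12345678901234567890", for_time=59, digits=8))
```

Because the code is derived from a shared secret plus the current time, a phished password alone no longer grants access, which is exactly why MFA blunts many of the AI-enabled attacks described earlier.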
HSI has identified an updated typology in pig butchering fraud schemes targeting real estate professionals. "Pig butchering" involves fraudsters building relationships with their victims over time to "fatten" them up for financial exploitation. Fraudsters are now utilizing platforms such as SetSchedule, LinkedIn, and Zillow to pose as real estate investors. If contacted by a prospective customer, keep the following key red flags in mind:
- Out-of-state investors seeking expensive properties in your area of operation.
- Desire to move the conversation to WhatsApp.
- Reluctance to talk via FaceTime.
- Reluctance to provide identification.
- Reluctance to meet in person.
- Initial casual mentions of cryptocurrency investments that become more direct over time.
- Overly friendly demeanor and sharing personal life details.
- Offering to teach you about cryptocurrency trading in exchange for real estate advice.
- Seemingly authentic LinkedIn profiles claiming employment with financial companies.
- Sharing URLs to fake cryptocurrency trading platforms with slight deviations from real website names.
HSI encourages the public to report suspicious activity through its toll-free Tip Line at 877-4-HSI-TIP. Callers may remain anonymous.
HSI is the principal investigative arm of the Department of Homeland Security (DHS), responsible for investigating transnational crime and threats, specifically those criminal organizations that exploit the global infrastructure through which international trade, travel and finance move.