This article was authored by Mikesh Nagar, Dave Waugh and Alessio Ragazzi of Kroll’s Threat Intelligence Team.
Key Takeaways
- Investigation into AMOS InfoStealer reveals ChatGPT as the initial infection source
- Victims were tricked into believing they were running a command to fix a sound issue on their Mac device
- New AMOS InfoStealer delivery vector highlights risk of growing trust placed in artificial intelligence (AI)
During a recent investigation into AMOS InfoStealer, the Kroll SOC and Kroll Threat Intelligence Team discovered a troubling new delivery vector that leverages the growing trust users place in AI tools. In this case, attackers used ChatGPT as the source of guidance, presenting the infection as a legitimate solution to a common technical problem: victims were tricked into believing they were running a harmless command to fix a sound issue on their Mac device.
What appeared to be a simple troubleshooting step was, in reality, a malicious command that installed AMOS InfoStealer. Once executed, the malware successfully exfiltrated sensitive data from the system. This tactic highlights how attackers are increasingly exploiting the credibility of widely recognized platforms and tools to lower user suspicion and increase infection rates.
By framing the attack around a trusted AI brand and a relatable technical annoyance, the threat actors created a convincing lure that bypassed the skepticism many users might have toward traditional phishing attempts.
The result was a seamless compromise that blended social engineering with technical exploitation, underscoring the importance of user awareness and proactive security controls.
Figure 1: Google Chrome Browsing History extract from infected Mac device
From the Google Chrome Browsing History of the device, Kroll Threat Intelligence Team observed that the user accessed what appeared to be a legitimate ChatGPT session. The attackers cleverly framed the instruction as a troubleshooting step, exploiting the user’s trust in both the ChatGPT brand and the plausibility of a common technical glitch.
Figure 2: ChatGPT Instructions shown to the user
The command line provided is a known Indicator of Compromise (IOC) associated with AMOS InfoStealer. Attackers delivered this command to victims by instructing them to copy and paste it directly into the macOS terminal. Once executed, the command initiates the download of a malicious script, which then installs AMOS InfoStealer on the system. This script acts as the entry point for the malware, enabling data exfiltration and further compromise of the affected device.
(Ref: https://www.trendmicro.com/en_us/research/25/i/an-mdr-analysis-of-the-amos-stealer-campaign.html)
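As an illustration of how defenders might hunt for this behavior, the Python sketch below scans local macOS shell history files for one-liners that pipe a remote or decoded payload directly into an interpreter, the pattern described above. It is a minimal sketch assuming hypothetical pattern definitions: the regular expressions are generic examples of the technique, not Kroll detection content or confirmed AMOS indicators.

```python
import re
from pathlib import Path

# Hypothetical, illustrative patterns describing the *shape* of a
# delivery one-liner: fetch or decode a payload and pipe it straight
# into an interpreter. Tune against vetted IOCs before any real use.
SUSPICIOUS_PATTERNS = [
    re.compile(r"curl\s[^|;]*\|\s*(?:bash|sh|zsh)"),                 # curl ... | bash
    re.compile(r"base64\s+(?:-d|-D|--decode)\b.*\|\s*(?:bash|sh)"),  # decode-and-run
    re.compile(r"osascript\s+-e\s"),                                 # inline AppleScript
]

# Shell history files commonly present on macOS.
HISTORY_FILES = [".zsh_history", ".bash_history"]


def scan_history(home: Path = Path.home()) -> list[tuple[str, str]]:
    """Return (file, command) pairs whose shape matches a suspicious pattern."""
    hits = []
    for name in HISTORY_FILES:
        path = home / name
        if not path.exists():
            continue
        for line in path.read_text(errors="replace").splitlines():
            if any(p.search(line) for p in SUSPICIOUS_PATTERNS):
                hits.append((name, line.strip()))
    return hits


if __name__ == "__main__":
    for fname, cmd in scan_history():
        print(f"[!] {fname}: {cmd}")
```

In production this logic belongs in an EDR or osquery deployment rather than a standalone script, but the underlying design choice, matching the shape of the command rather than a single static indicator, is the same.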
MITRE ATT&CK IDs relevant to the infection vector (ChatGPT):
- User Execution (T1204)
- Malicious File (T1204.002): Applies when a user is tricked into downloading and running a malicious file.
- Malicious Command (T1204.003): Applies when a user is convinced to execute a malicious command (e.g., via terminal or script).
- Phishing (T1566): Applies if the AI chat interaction is considered a social engineering vector (similar to phishing).
- Application Layer Protocol (T1071): Applies if the malware is downloaded over HTTP/HTTPS or another application protocol.
- Ingress Tool Transfer (T1105): Covers transferring tools or malware from an external system to the victim machine.
- Command and Scripting Interpreter (T1059): Applies if the malicious command uses a shell (e.g., Bash, PowerShell) or scripting language.
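Where detection content for this activity exists, the technique IDs above can be attached to alerts as metadata so analysts see the ATT&CK context automatically. A minimal sketch in Python, with hypothetical detection names:

```python
# Minimal sketch: mapping detection names to the ATT&CK technique IDs
# listed above so that alerts carry their ATT&CK context.
# Detection names are hypothetical examples, not product rule names.
ATTACK_MAP: dict[str, list[str]] = {
    "remote_script_piped_to_shell": ["T1204.002", "T1059", "T1105"],
    "ai_chat_social_engineering": ["T1566", "T1204"],
    "payload_fetched_over_https": ["T1071"],
}


def techniques_for(detection: str) -> list[str]:
    """Return the ATT&CK technique IDs mapped to a detection name."""
    return ATTACK_MAP.get(detection, [])


if __name__ == "__main__":
    print(techniques_for("remote_script_piped_to_shell"))
    # -> ['T1204.002', 'T1059', 'T1105']
```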
Questioning ChatGPT on the Output
On further investigation, it is instructive to see ChatGPT's response when questioned as to why it delivered the malicious output:
Figure 3: ChatGPT Conversation
Figure 4: ChatGPT Conversation
When directly asked why the response was given, ChatGPT stated that, under normal circumstances, it would never deliver that type of response.
Threat Actors' Use of Google Ads
In recent years, threat actors have increasingly used Google Ads to conduct malvertising and phishing campaigns, and the same technique is now being applied to ChatGPT as an infection vector.
During the investigation, it was discovered that attackers are abusing Google Ads to display the malicious ChatGPT chat at the top of search results. The use of the legitimate ChatGPT domain, in contrast to the typo-squatted or newly crafted domains observed in previous cases, makes it considerably harder for users to detect malicious intent.
Figure 5: Google Ad for ChatGPT chat
Recreating the Lure
Kroll Threat Intelligence Team attempted to recreate the chat instructions by querying ChatGPT for the original prompt used to create the chat, which came back as: “Follow this method to get your Mac sound working again”.
Figure 6: Response for original ChatGPT prompt
When Kroll Threat Intelligence Team attempted to replicate the prompt used by the threat actors, the results were very different from what the attackers had achieved. Instead of producing the malicious output that led to infection, ChatGPT responded with a safeguarding message designed to prevent harmful or unsafe instructions from being executed. This protective behavior is part of the platform’s built-in safety mechanisms, which are intended to stop users from being tricked into running dangerous commands.
The contrast between the attacker’s reported experience and our own highlights an important point: threat actors often manipulate context, presentation or even spoofed interfaces to bypass user skepticism. In this case, although the malicious instructions were generated by ChatGPT itself, the threat actor had bypassed the guardrails of the AI agent. By presenting the command and instructions as though they came from a trusted AI assistant, the attackers lowered the victim’s guard and increased the likelihood of execution.
Lessons Learned
The AMOS InfoStealer case highlights several important takeaways for both defenders and everyday users. Attackers are increasingly exploiting the credibility of trusted brands, in this instance ChatGPT, to make malicious instructions appear legitimate. Social engineering continues to be highly effective, with simple lures such as fixing a sound issue convincing users to run dangerous commands.
Malware delivery is also becoming more subtle, moving away from obvious phishing attachments and instead embedding malicious instructions into routine troubleshooting scenarios.
While legitimate AI platforms enforce safeguards to block unsafe outputs, attackers may spoof or imitate these environments to bypass protections. Finally, user behavior remains the most critical factor, as technical defenses cannot always prevent a user from executing harmful commands if they believe the source is trustworthy.
Significance of AI in Corporate Environments
The significance of this attack is heightened by the prevalence of ChatGPT in corporate environments. Some interesting statistics include:
Corporate Adoption
- Forty-nine percent of companies are already using ChatGPT, and 93% of those plan to expand usage.
- Over 80% of Fortune 500 companies have integrated ChatGPT into workflows within nine months of its launch.
Employee Usage
- Thirty-six percent of workers use ChatGPT at least monthly for work tasks; 22% use it daily.
- Surveys show 43% of employees have used ChatGPT for work-related tasks, including writing, debugging, and troubleshooting issues.
- In the U.S., 28% of employed adults reported using ChatGPT for work activities as of March 2025.
This data highlights a significant concern because nearly half of businesses and a large portion of employees are actively using AI platforms like ChatGPT for work-related tasks, including technical troubleshooting. If such platforms were compromised or intentionally provided malicious commands, the impact could be widespread and severe.
The trust users place in these tools means they are likely to follow instructions without verifying them, creating an ideal vector for social engineering attacks. With adoption rates this high, a single malicious prompt could lead to mass malware infections, data breaches and operational disruptions across corporate environments.
This risk underscores the need for strict governance, user training and monitoring when integrating AI into critical workflows.
Recommendations
- Provide training for staff to identify suspicious prompts and to avoid copying commands into terminals unless they come directly from trusted vendor documentation.
- Encourage users to confirm fixes through official support channels rather than third-party instructions.
- Deploy monitoring tools to detect unusual command execution and script downloads, especially on macOS systems.
- Integrate known IOCs, such as malicious command lines, into threat intelligence feeds and detection rules (see the sketch after this list).
- Combine technical safeguards with strong awareness programs to reduce the success rate of social engineering attacks.
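As a rough illustration of the monitoring and IOC-integration recommendations above, the sketch below polls the command lines of running processes on macOS and alerts when one contains a watchlisted substring. The watchlist entries are hypothetical placeholders: in practice they would be populated from vetted threat intelligence feeds, and a real sensor would subscribe to process-execution events (e.g., via an EDR agent) rather than polling.

```python
import subprocess
import time

# Hypothetical watchlist entries standing in for real command-line IOCs
# sourced from threat intelligence feeds; each substring is matched
# against the full command line of every running process.
IOC_SUBSTRINGS = [
    "| bash",        # remote script piped into a shell (generic example)
    "base64 -d",     # inline payload decoding (generic example)
    "osascript -e",  # inline AppleScript execution (generic example)
]


def process_command_lines() -> list[str]:
    """Snapshot the command line of every running process via ps(1)."""
    result = subprocess.run(
        ["ps", "-axo", "command"], capture_output=True, text=True, check=True
    )
    return result.stdout.splitlines()[1:]  # skip the COMMAND header row


def matches() -> list[str]:
    """Return command lines containing any watchlisted substring."""
    return [
        cmd for cmd in process_command_lines()
        if any(ioc in cmd for ioc in IOC_SUBSTRINGS)
    ]


if __name__ == "__main__":
    while True:
        for cmd in matches():
            print(f"[ALERT] IOC match in process command line: {cmd}")
        time.sleep(30)  # polling interval; real sensors hook exec events
```

Polling ps will miss short-lived processes; it is shown here only to make the pattern concrete.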
Stay Ahead with Kroll
Cyber and Data Resilience
Kroll merges elite security and data risk expertise with frontline intelligence from thousands of incident responses and regulatory compliance, financial crime and due diligence engagements to make our clients more cyber-resilient.

