
Using LimaCharlie and ChatGPT to Perform Malware Anomaly Detection

David Burkett | March 17, 2023

Working as a detection engineer, I'm constantly researching new and innovative ways to detect adversary tradecraft. With all of the hype around ChatGPT and its capabilities, I figured I'd put it to the test to see how well it performs at detecting common, well-known modern attacks versus benign processes.

NOTE: I am NOT a Machine Learning or A.I. expert by any means and intend to stay in my lane when it comes to explaining how ML itself works. That said, I understand it well enough to know what its limitations and capabilities are. I have also not been paid for this post.

Understanding the Tooling



LimaCharlie is an absolute gem in the cybersecurity community. While it originally started as an Endpoint Detection & Response (EDR) tool, it has since grown into so much more.

These days I like to think of LimaCharlie as enterprise-grade security infrastructure as a service. It's incredibly easy to sign up (no talking to salespeople; just create an account and you can spin it up in a few seconds), and it gives you more EDR/endpoint data than almost any other EDR tool I've used (which at this point is most of the Gartner Magic Quadrant).

Because it's so easy to set up, I chose it as the way I would collect the data I needed.
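To give a sense of what that collection step looks like, here is a minimal sketch of pulling the command line out of exported new-process telemetry. The field names (`event`, `COMMAND_LINE`, `FILE_PATH`) reflect my understanding of LimaCharlie's event schema; the sample record is a simplified, hypothetical export, so adjust the keys if your data looks different.

```python
def extract_command_lines(events):
    """Pull the command line out of each new-process event record.

    Assumes LimaCharlie-style records with an "event" object containing
    a "COMMAND_LINE" field; tweak the keys to match your actual export.
    """
    command_lines = []
    for record in events:
        event = record.get("event", {})
        cmd = event.get("COMMAND_LINE")
        if cmd:
            command_lines.append(cmd)
    return command_lines


# A single simplified example record (hypothetical, for illustration):
sample = [{
    "routing": {"event_type": "NEW_PROCESS"},
    "event": {
        "FILE_PATH": "C:\\Windows\\System32\\svchost.exe",
        "COMMAND_LINE": "C:\\Windows\\System32\\svchost.exe -k WerSvcGroup",
    },
}]

print(extract_command_lines(sample))
```

From there, each extracted command line can be fed to whatever evaluation you want to run against it.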


ChatGPT (short for Chat Generative Pre-trained Transformer) is an advanced artificial intelligence (A.I.) language model created by OpenAI. It generates human-like text responses based on the input it receives.


You should know, however, that while I may use "A.I." in this blog, ChatGPT isn't "A.I." like you may see in the movies. It is a language model: an algorithm designed to generate and understand human language.

ChatGPT learns patterns in language, such as grammar, vocabulary, and context, by being trained on large amounts of text data from various sources. The primary goal of a language model is to predict the likelihood of a sequence of words or to suggest the next word in a sentence based on the previous words.

In English: it doesn't understand what it is saying so much as it predicts which word will follow, based on math way over my head.

Additionally, while ChatGPT can explain code, it is NOT trained to be a security tool or to identify malicious behavior. As such, it should not be expected to perform as well as modern security ML tooling, nor is it a replacement for it.

Last, I am only evaluating individual command line arguments without factoring prior process execution data into the determination. In some cases adversaries use legitimate commands to blend in; on their own those commands are not malicious, but they can be considered malicious based on other surrounding context.




New process execution artifacts have the highest MITRE ATT&CK coverage of any single information source. With this in mind, I decided to use an adversary emulation tool I've written about previously.

You can find the full operation JSON if you'd like to check it out for yourself in my git repo.

Some of the techniques used were common recon techniques, disabling Windows Defender, some process injection techniques, etc.

It should be noted that I am only evaluating the command line arguments of new process events for malicious behaviors.
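The evaluation loop itself can be sketched roughly as below. The prompt wording and the `gpt-3.5-turbo` model choice are my assumptions (the post doesn't state the exact prompt), and `classify_determination` is a hypothetical helper for bucketing the model's free-text answer; the API call uses the `openai` Python package's chat-completion interface as it existed around the time of writing.

```python
import os

# Hypothetical prompt template -- not the exact wording used in the post.
PROMPT = ('Is the following Windows command line malicious? '
          'Answer "Yes" or "No" first, then explain briefly.\n\n{cmd}')


def classify_determination(text):
    """Map the model's free-text answer to a rough label.

    Mirrors the answer styles seen in the sample responses above:
    "Yes, ..." / "No, ..." / "It is not possible to determine ..."
    """
    lowered = text.strip().lower()
    if lowered.startswith("yes"):
        return "malicious"
    if lowered.startswith("no"):
        return "benign"
    return "unknown"


def ask_chatgpt(cmd):
    # Requires `pip install openai` and OPENAI_API_KEY in the environment.
    import openai
    openai.api_key = os.environ["OPENAI_API_KEY"]
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": PROMPT.format(cmd=cmd)}],
        temperature=0,  # keep determinations as repeatable as possible
    )
    return resp["choices"][0]["message"]["content"]


if __name__ == "__main__":
    # Parsing works without an API key; the network call is optional.
    print(classify_determination("No, this command is not malicious."))
```

Iterating that over all collected command lines and recording each determination produces results like the examples below.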

Example Response Data:

Command Line Arg: powershell.exe -ExecutionPolicy Bypass -C ".\totallylegit.exe #{} C:\Users\Public\creds.dmp"

AI Determination: It is not possible to determine if this command is malicious without more context.


Command Line Arg: C:\Windows\System32\svchost.exe -k WerSvcGroup

AI Determination: No, this command is not malicious. It is a legitimate command used to start the Windows Error Reporting Service.


Command Line Arg: "C:\Windows\Microsoft.NET\Framework64\v4.0.30319\NGenTask.exe" /RuntimeWide /StopEvent:988

AI Determination: No, this command is not malicious. It is a command used to stop the .NET Runtime Optimization Service.


Command Line Arg: "C:\Windows\system32\MusNotification.exe" Display

AI Determination: It is not necessarily malicious, but it is suspicious. MusNotification.exe is not a standard Windows system file, so it could be a malicious program. It is best to investigate further to determine if it is malicious or not.


Command Line Arg: "C:\Users\Administrator\totallylegit.exe"

AI Determination: No, this command is not malicious.


Command Line Arg: C:\Windows\winsxs\amd64_microsoft-windows-servicingstack_31bf3856ad364e35_10.0.14393.5351_none_7f0b8fbe21ea4a21\TiWorker.exe -Embedding

AI Determination: No, this command is not malicious. It appears to be a legitimate command related to Windows servicing stack.


Command Line Arg: powershell.exe -ExecutionPolicy Bypass -C "$loadResult = [Reflection.Assembly]::LoadWithPartialName(\"System.Drawing\");function screenshot([Drawing.Rectangle]$bounds, $path) {   $bmp = New-Object Drawing.Bitmap $bounds.width, $bounds.height;   $graphics = [Drawing.Graphics]::FromImage($bmp);   $graphics.CopyFromScreen($bounds.Location, [Drawing.Point]::Empty, $bounds.size);   $bmp.Save($path);   $graphics.Dispose();   $bmp.Dispose();}if ($loadResult) {  $bounds = [Drawing.Rectangle]::FromLTRB(0, 0, 1000, 9

AI Determination: Yes, this command is potentially malicious. It is attempting to bypass the execution policy and load the System.Drawing assembly, which could be used to take screenshots of the user's screen.


Overall, it did well at detecting the malicious process command lines, and in most cases it correctly explained what was happening in the command line.

Total process CLI args checked: 128
Total the AI found to be malicious: 34
Total AI false positives: 10/34
Total AI false negatives: 12
Total malicious commands: 36
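For anyone who wants the derived rates, the counts above work out as follows (a quick sanity-check script; the numbers are straight from the summary, and precision/recall are standard definitions, not figures stated in the post):

```python
# Counts from the summary above.
checked = 128          # total command lines evaluated
flagged = 34           # commands the model called malicious
false_positives = 10   # flagged, but actually benign
false_negatives = 12   # malicious, but the model missed them
total_malicious = 36   # ground-truth malicious commands

true_positives = flagged - false_positives   # correctly flagged
precision = true_positives / flagged         # how trustworthy a "malicious" verdict is
recall = true_positives / total_malicious    # how many malicious commands it caught

# Sanity check: caught + missed should equal all malicious commands.
assert true_positives + false_negatives == total_malicious

print(f"precision: {precision:.2f}, recall: {recall:.2f}")
```

So roughly 71% of its "malicious" calls were correct, and it caught about two thirds of the actual malicious commands.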

A major issue is that it occasionally "hallucinated", essentially making things up about processes it hadn't seen and what they were doing.

This could potentially cause analyst confusion, which could result in the analyst making an incorrect determination.


You can find the data from this experiment, including the code used, the command line arguments supplied, the AI output, and the Caldera report JSON, in my GitHub:


When it comes to cybersecurity, "Machine Learning/A.I." tooling has a reputation for being garbage. Overall, ChatGPT did surprisingly well; that said, the false positive rate was very high given the low number of overall events supplied. I'm looking forward to seeing where the future of A.I. and Machine Learning takes us!

Company Info

Signalblur Cyber Threat Intelligence is a cybersecurity company that delivers exceptional services and empowers organizations with accessible, cutting-edge solutions. Our founder brings years of experience from a Fortune 50 company and from building Cyber Security Operations Centers.


For any inquiries, please email us at