Firewall integrations

Fortinet

Elevate the power of your Fortinet Fortigate Firewall using by adding our Intelligence.

Palo Alto

Palo Alto Firewalls can be hardened with our threat intelligence as well.

Sophos XGS

Enhance the Sophos XGS Firewall with our threat intelligence.

OPNsense

Enhance your OPNsense Firewall with our threat intelligence using the native plugin.

SIEM integrations

Splunk

Splunk is a great platform, but without the right Threat Intelligence it's just a log server. Try our threat intelligence today. 

Microsoft Sentinel

One of the most used SIEM solutions should be enriched with the right Intelligence. At Q-Feeds you're at the right place!

Other

Luckily there are many other SIEM vendors whom support 3rd party threat intelligence.

Threat Intelligence Portal

Darkweb Monitoring

Darkweb monitoring is one of our services, not only for threat intelligence but also for you most important assets.

Threat Lookup

With Threat Lookup you get full insights in our IOC database, including full MITRE ATT&K mapping.

External Attack Surface Management

A toolset to check your external facing assets exposed on the internet

Vulnerability Scanner

A comprehensive vulnerability scanner which can scan your infrastructure and web applications

Brand Protection

Protect your brand for look-a-likes and potential phishing attempts

Services

TAXII Feeds & Server Software

TAXII/STIX2.1 standard. Both in form of feeds and server software available

Implementation

Need help with implementations? No worries, we have a strong network of partners who are able to help you.

Solutions

Enrich my SIEM

Elevate the power of your SIEM solution using by adding our Intelligence.

Enrich my Firewall

Firewalls can be hardened with our threat intelligence as well.

Prevent phishing

Enhance your protection against phishing

Achieve compliancy

Achieve compliancy by correlating the best threat intelligence to your logs

Futuristic eye design with circuits and geometric shapes.

Company

About

Read here all about Q-Feeds

News and Updates

Cybersecurity news and updates about us

Publications

All of our media coverage in one place

Become a reseller

Strengthen your portfolio with our comprehensive reseller program

Partner locator

Find our certified partners here

Contact

For all your questions or inquiries

Neural network representation of a human brain

Support

My Account

Access your account and manage your licenses

Downloads & Manuals

On this page you find white papers and manuals

Knowledge base

Our knowledge base full of implementation instructions

Start for free

Start your cyber security intelligence journey here

Abstract geometric wireframe human head

Uncovering vulnerabilities: How prompt fuzzing reveals the fragility of LLMs in both open and closed models

Mar 18, 2026 | Threat Intelligence Research

Genetic Algorithm Method Targets Vulnerabilities in Generative AI Responses

TL;DR
Unit 42’s research reveals weaknesses in guardrails of generative AI systems, particularly when attackers leverage genetic algorithm-inspired fuzzing techniques to bypass content restrictions. This poses significant risks to organizations using generative AI for customer interactions, employee support, and other applications.

Main Analysis
Unit 42 has identified critical vulnerabilities in large language model (LLM)-powered generative AI applications through an innovative fuzzing approach that automates the creation of evasion prompts. By applying a genetic algorithm methodology, researchers generated variants of disallowed requests while maintaining their original intent. The study illustrated significant variability in evasion success rates depending on keyword and model combinations, marking a shift from previous, more manual attack methods to a scalable and automated process that can be deployed by adversaries.

The research focused on prompt jailbreaking—a technique where crafted inputs manipulate LLMs to bypass safety mechanisms and produce harmful or non-compliant content. Results show marked evasion rates for certain keywords, as demonstrated by experiments with harmful queries related to explosives, where automated variants successfully bypassed guardrails in multiple models. Notably, the automated nature of this approach means that even low-success rates can become operationally significant when multiplied across many attempts.

Figures depicting workflow comparisons between standard genetic algorithms and LLM-based variations efficiently illustrate the methodology used to generate evasive prompts, highlighting both the process and its effectiveness in compromising model guardrails.

Defensive Context
Organizations implementing LLM technologies for applications such as support services or knowledge management should prioritize understanding the risk associated with prompt-based adversarial attacks. Entities less involved in deploying these types of systems or those operating in limited-scope uses may find lesser concern.

Why This Matters
The findings underscore the real-world risks posed to businesses using generative AI, especially in customer-facing applications where compliance and safety are paramount. Industries like customer service, employee training, and automated content generation may be particularly exposed due to their reliance on LLMs, making them attractive targets for abuse.

Defender Considerations
Focusing on the research’s conclusions regarding robust multi-signal content control systems is vital. Continuous testing of generative AI with adversarial input, as advised in the study, could be essential for identifying weaknesses in models’ responses over time. The varied success of evasion attempts across different keywords further emphasizes the necessity of tailored defenses, rather than one-size-fits-all strategies.

Indicators of Compromise (IOCs)
The article does not provide specific indicators of compromise such as IP addresses, domains, or file hashes, thus this section is omitted.

Click here for the full article

Try our Intelligence today!

Streamline your security operations with a free Q-Feeds trial and see the difference.

Other articles