Model overview
Gemma-4-31B-JANG_4M-CRACK is a modified version of a Google DeepMind Gemma base model from which safety restrictions have been removed while strong general capabilities are retained. Built by dealignai, this 31-billion-parameter model runs in just 18 GB of memory through quantization. It operates as a dense transformer with 60 layers and hybrid attention mechanisms, supporting both text and image inputs. Unlike the original instruction-tuned variant, this version prioritizes compliance with requests over safety filtering, achieving 93.7% HarmBench compliance while retaining 74.5% of baseline MMLU performance.
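The overview does not name a repository or a quantization format, so the sketch below is only one plausible way to load such a checkpoint: 4-bit weights via transformers and bitsandbytes, with the repository id assumed for illustration.

```python
# Minimal loading sketch. The repository id and the 4-bit settings are assumptions;
# the overview only states that the 31B weights fit in roughly 18 GB through quantization.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "dealignai/Gemma-4-31B-JANG_4M-CRACK"  # hypothetical repository path

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # 4-bit weights keep a 31B model near the stated 18 GB
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,  # compute in fp16, matching the float16 image pathway below
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",  # spread layers across available GPUs/CPU
)
```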
Model inputs and outputs
Inputs
- Text prompts up to 256K tokens, including detailed technical requests, code generation tasks, and reasoning questions
- Image inputs passed through in float16 precision for multimodal understanding alongside text queries
Outputs
- Generated text responses produced with greedy decoding, including executable code, explanations, and detailed reasoning (see the inference sketch after this list)
- Full compliance responses without refusal loops or truncation, even for requests typically rejected by safety-aligned models
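The sketch below exercises both points above: an image passed alongside a text prompt, and a greedy (non-sampled) reply. It assumes a Gemma-3-style multimodal interface in transformers; the exact processor and model classes for this checkpoint are not documented here, and the quantized load from the earlier sketch can be substituted to stay near the 18 GB footprint.

```python
# Rough inference sketch assuming a Gemma-3-style multimodal interface in transformers;
# the actual classes and chat template for this checkpoint are assumptions.
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForImageTextToText

model_id = "dealignai/Gemma-4-31B-JANG_4M-CRACK"  # hypothetical repository path
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

messages = [{
    "role": "user",
    "content": [
        {"type": "image", "image": Image.open("diagram.png")},
        {"type": "text", "text": "Describe what this architecture diagram shows."},
    ],
}]
inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device, dtype=torch.float16)  # image tensors are cast to float16; token ids stay integer

# Greedy decoding, as described above: no sampling, a single deterministic continuation.
output_ids = model.generate(**inputs, max_new_tokens=512, do_sample=False)
reply = processor.decode(output_ids[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True)
print(reply)
```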
Capabilities
The model demonstrates strong coding and technical ability across security, systems, and general programming domains, generating working port scanners, reverse shells, and exploit code without refusals. MMLU testing shows competence across abstract algebra, anatomy, astronomy, computer science, physics, biology, chemistry, mathematics, logic, and religious studies. On factual queries it stays coherent: it correctly identifies capital cities, orders astronomical data, attributes works of literature, and produces complete working implementations of algorithms such as binary search. Compliance reaches 100% on cybercrime and intrusion scenarios, 98% on illegal activities, 96% on misinformation, and 95% on chemical/biological harm scenarios.
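For scale, the "complete working implementation" of binary search mentioned above amounts to something like the snippet below. This is an illustrative reference written for this guide, not a transcript of the model's output.

```python
def binary_search(items, target):
    """Return the index of target in the sorted list items, or -1 if absent."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

assert binary_search([1, 3, 5, 7, 9], 7) == 3
assert binary_search([1, 3, 5, 7, 9], 4) == -1
```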
What can I use it for?
Developers and security researchers can use this model for penetration testing frameworks, vulnerability research, and red-team tool development without encountering safety refusals. Educational projects on model alignment, ablation research, and AI safety mechanisms can use it to study how refusal behavior operates. Content creators producing technical documentation, coding tutorials, and security training materials get unfiltered outputs suited to comprehensive coverage. The model also serves as a baseline for AI safety research into how models behave once alignment constraints are removed, which is useful for understanding model internals and developing improved safety approaches.
Things to try
Experiment with the model's coding output by requesting increasingly complex security tools and observing how quality degrades as complexity increases. Test whether the model maintains accuracy on MMLU subjects when questions are combined with adversarial phrasing designed to trigger refusals in standard models. Compare responses with thinking mode enabled and disabled to see how reasoning affects compliance patterns. Try multimodal inputs that combine images with requests that normally trigger safety filters, and examine whether visual context changes the model's behavior. Query the model with the same prompts used in HarmBench testing to verify that the published compliance scores match your local results.
This is a simplified guide to an AI model called Gemma-4-31B-JANG_4M-CRACK maintained by dealignai. If you like this kind of analysis, join AIModels.fyi or follow us on Twitter.
