The Hidden Power of "Cherry" Parameters in Large Language Models

6 Mar 2025

Discover how a small subset of LLM parameters, called "cherry" parameters, holds a disproportionate influence on model performance.

Rethinking AI Quantization: The Missing Piece in Model Efficiency

6 Mar 2025

A deep dive into the latest research on LLM quantization, exploring past methods and new breakthroughs.

The Future of AI Compression: Smarter Quantization Strategies

6 Mar 2025

Comparing parameter selection strategies and discussing the future of AI model compression.

The Impact of Parameters on LLM Performance

6 Mar 2025

Why do some parameters influence model performance more than others? This blog quantifies their impact.

Can ChatGPT-Style Models Survive Quantization?

6 Mar 2025

Investigating the effect of quantization on chat-based LLMs like Vicuna-1.5.

The Perplexity Puzzle: How Low-Bit Quantization Affects AI Accuracy

6 Mar 2025

Examining how base model quantization impacts perplexity and downstream performance.

The Science of "Cherry" Parameters: Why Some LLM Weights Matter More

6 Mar 2025

Unpacking parameter heterogeneity: Why do only 1% of LLM parameters significantly impact performance?

Quantizing Large Language Models: Can We Maintain Accuracy?

6 Mar 2025

Testing quantization methods to see if AI models can maintain performance with fewer bits.

The Myth of Connectivity: Uncovering the Global Blueprint of Digital Deception

12 Feb 2025

An analysis of transnational misinformation diffusion, showing how cultural, thematic, and geographic ties shape online deception networks.