Feb 25, 2025 · Native Sparse Attention: Hardware-Aligned Breakthrough for Long-Context LLMs 🤖✨
Nov 30, 2024 · Star Attention: Supercharging LLM Inference with Speed & Accuracy 🚀✨