Pull requests: Dao-AILab/flash-attention

Open pull requests

Add compress_factor for compressed causal attention
#2418 opened Mar 31, 2026 by jduprat
[Cute,Fwd,Sm90] Support SplitKV
#2415 opened Mar 31, 2026 by imbr92
fix noisy logger
#2414 opened Mar 31, 2026 by drisspg
[ROCM] Fix windows issues
#2385 opened Mar 23, 2026 by micmelesse
Fix missing seqlen_info param in softcap scoremod
#2366 opened Mar 17, 2026 by rucnyz
[CuTe, Sm100] PackGQA for backward
#2354 opened Mar 15, 2026 by reubenconducts
Add SM120 kernel-level paged KV cache support
#2348 opened Mar 13, 2026 by blake-snc