DeepScience
Artificial Intelligence · Progressing

Long-context understanding

Extending the effective context window of language models to millions of tokens and beyond, while maintaining faithful retrieval and reasoning over the full context, is an active research area. Current models exhibit degraded performance in the middle of long contexts (the 'lost in the middle' effect) and struggle with tasks that require synthesis across distant passages. Efficient attention mechanisms, improved position encodings, and context compression techniques all show promise, but none has fully solved the problem.
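As a concrete illustration of one of the position-encoding approaches mentioned above, the sketch below implements rotary position embedding (RoPE) with linear position interpolation, a common context-extension trick in which positions are divided by a scale factor so that a longer sequence maps back into the position range seen during training. This is a minimal NumPy sketch under simplified assumptions; the names `rope_angles`, `apply_rope`, and the `scale` parameter are illustrative, not taken from any particular library.

```python
import numpy as np

def rope_angles(positions, dim, base=10000.0, scale=1.0):
    # One frequency per pair of channels, as in the standard RoPE construction.
    inv_freq = 1.0 / (base ** (np.arange(0, dim, 2) / dim))
    # Position interpolation: dividing positions by `scale` compresses a
    # context longer than the training length back into the trained range.
    return np.outer(positions / scale, inv_freq)

def apply_rope(x, positions, scale=1.0):
    # x: (seq_len, dim) query or key vectors; dim must be even.
    theta = rope_angles(positions, x.shape[1], scale=scale)
    cos, sin = np.cos(theta), np.sin(theta)
    x1, x2 = x[:, 0::2], x[:, 1::2]
    out = np.empty_like(x)
    # Rotate each (even, odd) channel pair by a position-dependent angle.
    out[:, 0::2] = x1 * cos - x2 * sin
    out[:, 1::2] = x1 * sin + x2 * cos
    return out

# Because queries and keys are rotated by angles linear in position, their
# dot product depends only on the relative offset between tokens.
q = apply_rope(np.random.randn(8, 64), np.arange(8), scale=4.0)
k = apply_rope(np.random.randn(8, 64), np.arange(8), scale=4.0)
scores = q @ k.T  # attention logits with interpolated positions
```

With `scale=4.0`, a model trained on 4k-token contexts would see 16k-token inputs mapped into its familiar position range; interpolation alone does not guarantee quality at the extended length, which is part of why this remains an open problem.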

Research Domains

foundations · systems

Keywords

long context, context window, RoPE, position encoding, lost in the middle, attention, memory, retrieval, context compression, million token

Last updated: April 8, 2026

Recent Papers (Artificial Intelligence)

Detecting Rare Cortical Connectivity Around the Human Central Sulcus: A Deep Learning Analysis of 37,000+ Tractographies

April 8, 2026 · openalex

Multi-Map Fusion for Weakly Supervised Disease Localization from Globally Assigned Diagnostic Labels in Brain MRI

April 8, 2026 · openalex

Evaluating Segmentation Using Betti-1 Topological Metric: Application to Nasal Cavities in the Context of Airflow Simulation

April 8, 2026 · openalex

Faster 4D Flow MRI Scan with 3D Arbitrary-Scale Super-Resolution

April 8, 2026 · openalex

Iterative confidence-based pseudo-labeling for semi-supervised lung cancer segmentation under annotation scarcity

April 8, 2026 · openalex

FALCON: Unfolded Variational Model for Blind Deconvolution and Segmentation in 3D Dental Imaging

April 8, 2026 · openalex

Diffusion-Based Fourier Domain Deconvolution with Application to Ultrasound Image Restoration

April 8, 2026 · openalex