Gap Map

Problem statements from Convergent Research's Gap Map across 20 scientific fields, scored against 155 toolkit items for mechanistic fit, practicality, and translatability.

Coverage by Field

64 of 103 gaps have at least one candidate tool (225 total tool-gap connections)

Legend: >70% connected · 40–70% · <40% · none
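The coverage summary above is simple arithmetic over the tool-gap connection list. A minimal sketch of that computation, using a hypothetical miniature dataset (the gap IDs, tool names, and scores below are illustrative, not the real Gap Map data):

```python
# Sketch: compute coverage stats ("N of M gaps have at least one candidate
# tool") and per-gap best match from a flat list of tool-gap connections.
# All data here is illustrative, not the actual Gap Map dataset.
from collections import defaultdict

connections = [
    {"gap": "light-scattering", "tool": "MRI", "score": 58},
    {"gap": "light-scattering", "tool": "BOLD MRI", "score": 57},
    {"gap": "dynamic-proteins", "tool": "TiGGER", "score": 57},
]
all_gaps = ["light-scattering", "dynamic-proteins", "quantum-effects"]

tools_per_gap = defaultdict(list)
for c in connections:
    tools_per_gap[c["gap"]].append(c)

covered = [g for g in all_gaps if tools_per_gap[g]]
print(f"{len(covered)} of {len(all_gaps)} gaps have at least one candidate tool "
      f"({len(connections)} total tool-gap connections)")
for gap in covered:
    best = max(c["score"] for c in tools_per_gap[gap])
    print(gap, "best match", f"{best}%")
```

The per-field cards below (gap count, tool connections, best match) fall out of the same grouping, keyed by field instead of gap.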

Biophysics

7 gaps · 29 tool connections · best match 68%

Light Scattering in Living Tissue Prevents Optical Access to Deeper Regions

Biophysics · 2 capabilities

Tools: 6 · Best: 58%

Living tissue exhibits strong light scattering, which hampers deep-tissue imaging and limits resolution. Overcoming this barrier is critical for mapping neural activity and enabling noninvasive diagnostic imaging.

Foundational Capabilities

Anti-Scattering Optical and Opto-Acoustic Methods

Develop novel optical and opto-acoustic techniques that reduce scattering, enabling deep-tissue imaging without expensive MRI. These methods aim to improve resolution and enable whole-brain activity mapping as well as cost-effective “body scanners” for diagnostics.

7 resources

Magnetic-Based Imaging

Develop magnetic imaging (magneto-genetics) approaches as an alternative modality that circumvents the limitations of optical scattering, potentially offering noninvasive imaging of deep tissue structures.

3 resources

Candidate Tools

Gap applicability: 58

This study aimed to correlate micro-CT and MRI images of the middle and caudal abdominal regions with corresponding anatomical sections in Syrian hamsters (SHs).

MRI is a non-optical imaging modality and therefore directly avoids the core bottleneck of photon scattering in tissue. The gap description explicitly includes noninvasive diagnostic imaging as a target, making MRI a plausible alternative access strategy for deep tissue.

Mechanistic: 86 · Context: 78 · Throughput: 45 · First test time: 28 · First test cost: 12 · Replication: 50 · Practicality: 50 · Translatability: 50

Assumptions: Assumes the goal can be addressed by substituting a non-optical deep-imaging modality rather than improving optical penetration itself.

Missing evidence: No supplied evidence on depth, resolution, neural activity mapping performance, or implementation burden in the stated use cases.
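Each candidate tool carries eight sub-scores plus one overall "Gap applicability" number. The actual aggregation rule is not stated anywhere in this export; the sketch below shows just one plausible form, a weighted mean with hypothetical weights, and does not reproduce the reported values exactly:

```python
# Hypothetical aggregation of the eight sub-scores into one applicability
# number. The real Gap Map formula and weights are NOT given in this
# document; this is a weighted mean under assumed weights only.
subscores = {
    "mechanistic": 86, "context": 78, "throughput": 45,
    "first_test_time": 28, "first_test_cost": 12,
    "replication": 50, "practicality": 50, "translatability": 50,
}
weights = {k: 1.0 for k in subscores}  # uniform weights (assumption)
weights["mechanistic"] = 2.0           # e.g. emphasize mechanistic fit

applicability = (sum(weights[k] * v for k, v in subscores.items())
                 / sum(weights.values()))
print(round(applicability))
```

Under these assumed weights the result lands near, but not exactly at, the reported 58, which underscores that the true weighting is unknown.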

Gap applicability: 57

functional magnetic resonance imaging (MRI) methods such as diffusion-weighted imaging, blood oxygen level-dependent MRI

BOLD MRI bypasses optical scattering by using magnetic readout and is specifically relevant to functional imaging, which aligns with the gap's interest in mapping neural activity. It is a plausible deep-tissue alternative when optical access is fundamentally limited.

Mechanistic: 84 · Context: 82 · Throughput: 46 · First test time: 24 · First test cost: 10 · Replication: 50 · Practicality: 50 · Translatability: 50

Assumptions: Assumes functional readout of deep tissue is acceptable even if spatial or temporal resolution differs from optical methods.

Missing evidence: No supplied evidence on achievable resolution, whole-brain mapping suitability, or comparative performance versus optical methods in this dataset.

Gap applicability: 53

functional magnetic resonance imaging (MRI) methods such as diffusion-weighted imaging

Diffusion-weighted imaging is another magnetic modality that avoids the optical scattering barrier for deep tissue access. It is more relevant to structural or microenvironmental contrast than direct optical imaging, but could still support noninvasive deep-region imaging goals.

Mechanistic: 80 · Context: 72 · Throughput: 45 · First test time: 24 · First test cost: 10 · Replication: 50 · Practicality: 50 · Translatability: 50

Assumptions: Assumes the gap can be partially addressed by alternative deep-imaging contrast rather than optical-resolution recovery.

Missing evidence: No supplied evidence on neural activity readout, depth-specific advantages in this context, or operational fit for the named capabilities.

Gap applicability: 47

functional magnetic resonance imaging (MRI) methods such as diffusion-weighted imaging, blood oxygen level-dependent MRI, magnetic resonance elastography

Magnetic resonance elastography avoids optical propagation through scattering tissue and can provide deep-tissue information noninvasively. It is a plausible fit for diagnostic imaging aims, though it is less directly aligned with restoring optical access or neural activity mapping.

Mechanistic: 74 · Context: 63 · Throughput: 40 · First test time: 20 · First test cost: 8 · Replication: 50 · Practicality: 50 · Translatability: 50

Assumptions: Assumes diagnostic deep-tissue imaging is a valid subproblem of the gap.

Missing evidence: No supplied evidence on relevance to brain activity mapping, resolution in deep tissue, or ease of deployment.

Gap applicability: 47

imaging modalities including novel ultrasound techniques, shear wave elastography

Shear wave elastography is a non-optical imaging approach and therefore plausibly circumvents the tissue light-scattering bottleneck for some deep diagnostic applications. It may offer a more accessible first test than MRI-class methods, but it is not evidenced here as a solution for high-resolution optical replacement or neural activity mapping.

Mechanistic: 68 · Context: 66 · Throughput: 58 · First test time: 55 · First test cost: 48 · Replication: 50 · Practicality: 50 · Translatability: 50

Assumptions: Assumes ultrasound-based deep imaging is within scope as an alternative modality.

Missing evidence: No supplied evidence on brain applicability, capillary or cellular resolution, or compatibility with the specific anti-scattering goals in the gap.

Gap applicability: 42

Superparamagnetic iron oxide nanoparticles (SPMNPs) offer a powerful theranostic platform, combining magnetic resonance imaging (MRI)-based diagnostics with therapeutic delivery and hyperthermia.

SPMNPs are described as an MRI-based theranostic platform, so they plausibly support deep-tissue magnetic imaging that is not limited by optical scattering. They are less direct than a full imaging modality and the supplied evidence emphasizes delivery and theranostics rather than solving the imaging bottleneck itself.

Mechanistic: 62 · Context: 61 · Throughput: 34 · First test time: 32 · First test cost: 26 · Replication: 50 · Practicality: 50 · Translatability: 50

Assumptions: Assumes contrast-agent or delivery-enabled magnetic imaging is relevant to the gap's magnetic-based imaging capability.

Missing evidence: No supplied evidence on imaging depth gains, neural imaging use, contrast performance, or reproducibility metrics.

Live Cell Imaging at Deep Nanoscale Resolution is Destructive

Biophysics · 2 capabilities

Tools: 6 · Best: 50%

Techniques that achieve deep nanoscale resolution in live cell imaging often destroy the sample, limiting the ability to conduct longitudinal studies on the same specimen.

Foundational Capabilities

Quantum Electron Microscopy

Develop a quantum electron microscope that leverages quantum principles to achieve high-resolution imaging while minimizing sample damage, enabling repeated measurements on the same specimen.

1 resource

Quantum Non-Demolition X-Ray (Ghost) Imaging

Explore quantum non-demolition x-ray imaging (or ghost imaging) methods that use quantum correlations to image samples with minimal perturbation, preserving sample integrity for longitudinal studies.

2 resources

Candidate Tools

Gap applicability: 50

Lattice light-sheet microscopy (LLSM) is a modified light-sheet imaging platform used for three-dimensional optogenetic activation with subcellular resolution. In the cited 2022 study, it enabled high-spatiotemporal-resolution manipulation of cellular behavior, including membrane ruffling and guided cell migration.

This is a live-cell optical imaging platform with high spatiotemporal resolution, so it is plausibly relevant to reducing damage relative to destructive nanoscale methods while still enabling repeated measurements. Its light-sheet format is the strongest supplied evidence for a less perturbative imaging approach in this set.

Mechanistic: 72 · Context: 66 · Throughput: 58 · First test time: 34 · First test cost: 22 · Replication: 20 · Practicality: 71 · Translatability: 11

Assumptions: Assumes light-sheet geometry may reduce photodamage compared with more invasive high-resolution methods.

Missing evidence: No direct evidence here on nanoscale resolution, deep imaging performance, or quantitative sample-damage reduction for longitudinal imaging.

Gap applicability: 45

NIR light-based imaging is an optical assay and photoregulation approach that uses near-infrared light to sense, and in some cases modulate, specific cellular events in living systems. The cited review describes these strategies as enabling real-time interrogation of deep tissues with subcellular accuracy.

The supplied evidence says NIR imaging enables real-time interrogation of deep tissues in living systems with subcellular accuracy, which aligns with the gap's need for less destructive live imaging at depth. It is a plausible lower-perturbation optical alternative to destructive high-resolution modalities.

Mechanistic: 61 · Context: 69 · Throughput: 55 · First test time: 49 · First test cost: 43 · Replication: 9 · Practicality: 59 · Translatability: 9

Assumptions: Assumes deeper-penetrating NIR optical imaging could help reduce the need for more destructive imaging approaches.

Missing evidence: No direct evidence here for nanoscale resolution, longitudinal same-specimen preservation, or explicit phototoxicity/sample-damage comparisons.

Gap applicability: 40

In this Review, we explore new insights from studies using super-resolution and volume electron microscopy into the nanoscale organization of these junctional complexes...

This item directly targets super-resolution imaging, so it is relevant to the resolution side of the gap. Because it is light-based and includes in vivo and plant imaging hints, it could plausibly support less destructive live imaging than electron-based approaches.

Mechanistic: 46 · Context: 42 · Throughput: 47 · First test time: 38 · First test cost: 31 · Replication: 50 · Practicality: 50 · Translatability: 50

Assumptions: Assumes some super-resolution implementations may be compatible with live imaging and lower damage than destructive nanoscale methods.

Missing evidence: The supplied summary does not state live-cell compatibility, depth performance, or reduced destructiveness.

Gap applicability: 36

fenestrations were only discernible with EM, but now they can be visualized ... in fixed cells using single molecule localization microscopy (SMLM) techniques such as direct stochastic optical reconstruction microscopy

SMLM is a super-resolution fluorescence method and therefore plausibly relevant to achieving nanoscale information without resorting to destructive electron microscopy. The metadata includes both fixed-cell support and a live-cell hint, so it may be testable for repeated live measurements in some contexts.

Mechanistic: 44 · Context: 39 · Throughput: 41 · First test time: 33 · First test cost: 28 · Replication: 50 · Practicality: 50 · Translatability: 50

Assumptions: Assumes the live-cell facet hint reflects at least some usable live-cell SMLM implementations.

Missing evidence: Evidence is conflicting on live-cell support and does not provide depth capability, damage profile, or longitudinal same-specimen performance.

Gap applicability: 35

Fluorescence microscopy is an imaging assay method used to detect and localize fluorescent signals in living biological specimens. In the supplied evidence, it is described for larval zebrafish as a means to achieve subcellular fluorescence localization and real-time monitoring of cell identity, fate, physiology, and organ pathophysiology.

The evidence explicitly supports live biological imaging and real-time monitoring, which fits the longitudinal, same-specimen part of the gap. It is not a nanoscale solution by itself, but it is a plausible low-damage baseline or comparator for testing less destructive live imaging workflows.

Mechanistic: 31 · Context: 56 · Throughput: 67 · First test time: 78 · First test cost: 72 · Replication: 9 · Practicality: 71 · Translatability: 11

Assumptions: Assumes a baseline live-cell imaging method is still useful for addressing the destructive-imaging bottleneck experimentally.

Missing evidence: No evidence here for nanoscale resolution or deep imaging.

Gap applicability: 32

Confocal microscopy is an in vivo fluorescence imaging assay method described as part of microscopy platforms tailored to larval zebrafish research. In the cited review context, it is used with fluorescent probes for real-time monitoring of cell identity, fate, and physiology in living larvae, including pancreatic and islet studies.

Confocal microscopy is explicitly described as in vivo, real-time fluorescence imaging, so it is relevant to repeated live-specimen observation. It does not solve the nanoscale requirement directly, but it could serve as a practical lower-damage live-imaging benchmark against more destructive high-resolution methods.

Mechanistic: 29 · Context: 52 · Throughput: 61 · First test time: 74 · First test cost: 58 · Replication: 9 · Practicality: 59 · Translatability: 9

Assumptions: Assumes benchmarking against established live imaging is useful for this gap.

Missing evidence: No direct evidence for nanoscale resolution, deep-tissue performance, or reduced damage relative to destructive nanoscale imaging.

Lack of Structure Prediction for Highly Dynamic Proteins

Biophysics · 2 capabilities

Tools: 4 · Best: 58%

Current structure prediction tools like AlphaFold excel for stable proteins but struggle with highly dynamic proteins whose structures fluctuate continuously, leaving a gap in our understanding of intrinsically disordered proteins and protein allostery.

Foundational Capabilities

NMR Mapping and Modeling of Intrinsically Disordered Proteins

Utilize nuclear magnetic resonance (NMR) techniques to capture the dynamic ensembles of intrinsically disordered proteins, enabling accurate modeling of their fluctuating structures.

1 resource

Protein Allostery Prediction Across the Proteome

Develop methods to measure and predict allosteric regulation mechanisms across the proteome, capturing dynamic conformational changes that impact protein function.

1 resource

Candidate Tools

Gap applicability: 58

In case the transfer process does not involve stable or transient paramagnetic species or states, site-directed spin labeling with suitable nitroxide radicals still allows EPR techniques to be used for studying structure and conformational dynamics of the proteins in action.

This assay is directly described as enabling EPR studies of protein structure and conformational dynamics in action, which is closely aligned with the gap around highly dynamic proteins. It could provide experimental distance constraints for fluctuating ensembles that are hard to capture with static structure prediction alone.

Mechanistic: 82 · Context: 78 · Throughput: 42 · First test time: 55 · First test cost: 46 · Replication: 35 · Practicality: 35 · Translatability: 35

Assumptions: Assumes the target proteins can be spin-labeled without destroying relevant dynamics.

Missing evidence: No replication, practicality, or translatability metrics were provided; no explicit evidence for intrinsically disordered proteins or allostery-specific use was provided.

TiGGER

assay method

Gap applicability: 57

TiGGER is a 240 GHz time-resolved Gd-Gd electron paramagnetic resonance assay for tracking inter-residue distances during a protein mechanical cycle in the solution state. It was demonstrated on the light-responsive AsLOV2 domain to resolve time-dependent structural separation and relaxation after photoactivation.

TiGGER is explicitly a time-resolved EPR assay for tracking inter-residue distance changes during a protein mechanical cycle in solution, which is relevant to dynamic conformational ensembles and allosteric motion. It could help generate experimental dynamic constraints where static predictors fail.

Mechanistic: 80 · Context: 72 · Throughput: 38 · First test time: 48 · First test cost: 34 · Replication: 20 · Practicality: 83 · Translatability: 27

Assumptions: Assumes the gap can be partially addressed by experimental mapping methods rather than prediction-only methods.

Missing evidence: Evidence is shown for a light-responsive domain, not for intrinsically disordered proteins; no evidence was provided for scalability to proteome-wide allostery mapping.

Time-resolved Gd-Gd electron paramagnetic resonance (TiGGER) is a 240 GHz EPR-based assay method for tracking inter-residue distances during a protein mechanical cycle in the solution state. It is positioned as a method to study triggered functional dynamics in proteins.

This method is positioned for tracking time-resolved inter-residue distance changes in proteins, which is directly relevant to studying conformational dynamics underlying allostery. It may provide experimental ensemble information to support modeling of proteins that do not adopt one stable structure.

Mechanistic: 78 · Context: 70 · Throughput: 38 · First test time: 46 · First test cost: 34 · Replication: 9 · Practicality: 71 · Translatability: 11

Assumptions: Assumes solution-state distance tracking is useful for the specific dynamic proteins of interest.

Missing evidence: No explicit evidence for intrinsically disordered proteins, NMR integration, or proteome-scale deployment was provided.

magnetic tweezers

assay method

Gap applicability: 42

Here, we review a broad spectrum of single-molecule tools and techniques such as optical and magnetic tweezers...

Magnetic tweezers are a single-molecule structural/functional characterization method and could plausibly probe dynamic conformational landscapes or force-dependent allosteric behavior. That makes them potentially relevant for understanding proteins whose structures fluctuate continuously.

Mechanistic: 62 · Context: 58 · Throughput: 22 · First test time: 30 · First test cost: 24 · Replication: 35 · Practicality: 35 · Translatability: 35

Assumptions: Assumes the proteins of interest can be adapted to single-molecule force assays.

Missing evidence: The supplied summary is very broad and does not explicitly mention intrinsically disordered proteins, allostery mapping, or structure-prediction support; replication metadata is missing.

Some Proteins Are Still Recalcitrant to Experimental Structure Analysis

Biophysics · 2 capabilities

Tools: 4 · Best: 42%

Membrane proteins are notoriously difficult to analyze experimentally and to incorporate into technological applications due to their inherent insolubility in aqueous environments. Their recalcitrance limits our capacity to study their structure and function in detail. Other challenges include the difficulty of studying small proteins with cryo-EM.

Foundational Capabilities

Make It Easier to Map Orientations of Small Proteins in Cryo-EM

Barcode the orientations of small proteins in cryo-EM.

1 resource

Recode Membrane Proteins for Solubility

Modify the genetic code or structure of membrane proteins so they become soluble in water, thereby enabling more robust experimental analysis and practical applications.

2 resources

Candidate Tools

Gap applicability: 42

Molecular dynamics simulation is a computational method for modeling atomistic conformational dynamics of proteins and analyzing residue fluctuations and vibrational behavior. In the cited studies, it was used as a noninvasive approach to validate dynamic behavior and to compare PAS-domain dynamics across functional groups.

This method can provide atomistic conformational models and dynamic analyses for proteins that are hard to solve experimentally, which is directly relevant when membrane proteins are recalcitrant to structure analysis. It is a plausible complementary route when experimental structure determination is limited, although the supplied evidence does not show specific use on membrane proteins or cryo-EM orientation problems.

Mechanistic: 56 · Context: 48 · Throughput: 42 · First test time: 52 · First test cost: 40 · Replication: 20 · Practicality: 83 · Translatability: 12

Assumptions: Used as a complementary structural characterization method rather than a direct experimental fix.

Missing evidence: No supplied evidence for membrane-protein-specific performance, small-protein cryo-EM support, or soluble recoding.

AlphaFold3

computation method

Gap applicability

39

AlphaFold3 is a computational structure-prediction method used in the cited study to model the MagMboI–DNA complex. In that work, it was applied to infer interactions with the 5'-GATC-3' recognition sequence and to guide optimization of the photoactivatable endonuclease variant MagMboI-plus for top-down genome engineering.

A structure-prediction method could help generate structural hypotheses for proteins that remain difficult to analyze experimentally. It is fast to try computationally, but the supplied evidence only supports protein-DNA complex modeling and does not specifically address membrane-protein solubilization or cryo-EM orientation barcoding.

Mechanistic: 51 · Context: 34 · Throughput: 78 · First test time: 72 · First test cost: 68 · Replication: 0 · Practicality: 59 · Translatability: 9

Assumptions: Applied here as a general computational structure-modeling aid.

Missing evidence: No supplied evidence for membrane proteins, small-protein cryo-EM, or design of soluble membrane-protein analogues.

Gap applicability: 34

All-atom replica exchange discrete molecular dynamics is a computational docking method used to generate structural models of calcium and integrin binding protein 1 (CIB1) bound to α-integrin cytoplasmic tails. In the cited CIB1 study, it predicted that multiple α-integrin tails engage the same hydrophobic binding pocket on CIB1.

Replica-exchange docking and conformational sampling can help build structural models when direct experimental structure analysis is difficult. However, the supplied evidence is limited to a specific protein-peptide docking case and does not show relevance to membrane-protein insolubility or cryo-EM of small proteins.

Mechanistic: 44 · Context: 28 · Throughput: 38 · First test time: 45 · First test cost: 36 · Replication: 20 · Practicality: 83 · Translatability: 12

Assumptions: Could be repurposed for difficult structural modeling problems beyond the cited example.

Missing evidence: No supplied evidence for membrane proteins, cryo-EM applications, or soluble redesign.

Gap applicability: 26

Accelerated MD simulation is an in silico computational method reported for elucidating the photoactivation mechanism of the AsLOV2 light-responsive domain. The available evidence supports its use as a mechanistic analysis approach for a protein photosensor rather than as a deployable biological reagent.

Accelerated MD can probe conformational mechanisms for proteins that are hard to characterize experimentally, offering a possible fallback analysis route. The provided evidence is narrow and tied to a photosensor mechanism study, so applicability to the stated membrane-protein and cryo-EM bottlenecks is uncertain.

Mechanistic: 36 · Context: 22 · Throughput: 34 · First test time: 43 · First test cost: 34 · Replication: 9 · Practicality: 59 · Translatability: 9

Assumptions: Considered only as a general in silico structural-analysis aid.

Missing evidence: No supplied evidence for membrane proteins, cryo-EM orientation mapping, or solubility engineering.

Difficulty Delivering Physical Probes for Imaging into Living Cells

Biophysics · 2 capabilities

Tools: 3 · Best: 68%

Delivering physical probes for imaging into living cells is challenging due to barriers in cell membranes and potential perturbation of cellular function. New approaches are required to enable high-dimensional biosensing without invasive probes.

Foundational Capabilities

Label-Free High-Dimensional Biosensing

Develop label-free biosensing techniques that leverage vibrational signatures to capture complex cellular information without the need for physical probe delivery.

1 resource

Metabolically Incorporated Labels

Engineer cells to incorporate imaging labels metabolically, thereby eliminating the need for external probe delivery and enabling noninvasive imaging.

1 resource

Candidate Tools

Gap applicability: 68

These label-free techniques span a variety of different approaches, including structured illumination, transient absorption, infrared absorption, and coherent Raman spectroscopies.

The gap explicitly highlights avoiding physical probe delivery, and coherent Raman is listed as a label-free imaging approach. That makes it a direct mechanistic fit for noninvasive live-cell biosensing based on intrinsic vibrational signatures.

Mechanistic: 95 · Context: 95 · Throughput: 70 · First test time: 35 · First test cost: 25 · Replication: 45 · Practicality: 40 · Translatability: 55

Assumptions: Assumes the intended use is live-cell imaging without exogenous probes.

Missing evidence: No direct live-cell performance, dimensionality, or implementation details are provided in the item summary.

Gap applicability: 62

These label-free techniques span a variety of different approaches, including structured illumination, transient absorption, infrared absorption, and coherent Raman spectroscopies.

This item is explicitly described as part of a set of label-free imaging techniques, which could reduce or eliminate the need to deliver physical probes into living cells. That aligns with the gap's emphasis on noninvasive imaging alternatives.

Mechanistic: 88 · Context: 86 · Throughput: 65 · First test time: 35 · First test cost: 25 · Replication: 45 · Practicality: 40 · Translatability: 50

Assumptions: Assumes transient absorption can be applied in living-cell settings relevant to the gap.

Missing evidence: The summary does not state live-cell compatibility, biosensing dimensionality, or specific advantages over other label-free methods.

Gap applicability: 59

These label-free techniques span a variety of different approaches, including structured illumination, transient absorption, infrared absorption, and coherent Raman spectroscopies.

Infrared absorption super-resolution imaging is explicitly grouped here as a label-free approach, so it plausibly addresses the bottleneck of getting external probes across cell membranes. It is relevant because the gap calls for noninvasive imaging strategies.

Mechanistic: 86 · Context: 84 · Throughput: 60 · First test time: 30 · First test cost: 25 · Replication: 45 · Practicality: 38 · Translatability: 48

Assumptions: Assumes the method can be used on living cells rather than only fixed or specialized preparations.

Missing evidence: No direct evidence is provided for live-cell use, multiplexing capacity, or ease of deployment.

Understanding Life as a Far-From-Equilibrium Physical Phenomenon

Biophysics · 1 capability

Tools: 3 · Best: 46%

Our ability to analyze organisms holistically, as systems that emerge from fundamental physics, is limited by the lack of formal frameworks for distinguishing living from nonliving systems that are precise enough to be useful for practical scientific problems.

Foundational Capabilities

Simulating Nonequilibrium Systems at Scale

Develop software for simulating stochastic thermodynamics at scale on modern hardware accelerators (GPUs, etc.), going beyond the Gillespie algorithm.

1 resource
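The capability above targets methods beyond the Gillespie algorithm; as the baseline being surpassed, here is a minimal Gillespie stochastic simulation algorithm (SSA) for a single birth-death species. The rate constants and horizon are illustrative choices, not from the source:

```python
# Minimal exact SSA (Gillespie) for one species: 0 -> X at rate k_birth,
# X -> 0 at rate k_death * n. Parameters are illustrative.
import random

def gillespie_birth_death(k_birth=1.0, k_death=0.1, n0=0, t_end=50.0, seed=0):
    rng = random.Random(seed)
    t, n = 0.0, n0
    while True:
        rates = [k_birth, k_death * n]   # reaction propensities
        total = sum(rates)
        if total == 0.0:
            break
        tau = rng.expovariate(total)     # exponential waiting time to next event
        if t + tau > t_end:
            break
        t += tau
        # pick a reaction with probability proportional to its propensity
        if rng.random() * total < rates[0]:
            n += 1                       # birth
        else:
            n -= 1                       # death
    return n

print(gillespie_birth_death())           # copy number at t_end; stationary mean is k_birth/k_death
```

The per-event loop is exactly what makes SSA slow at scale; GPU-friendly alternatives (tau-leaping, trajectory batching) amortize this cost, which is the point of the capability.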

Candidate Tools

Gap applicability: 46

The Bayesian optimization framework is a computational method built from high-throughput Lustro measurements to guide control of blue light-sensitive optogenetic systems. It uses data-driven learning, uncertainty quantification, and experimental design to identify light induction conditions for multiplexed regulation in Saccharomyces cerevisiae.

This is one of the few items that directly matches the gap's stated capability need for scalable analysis and experimental design in complex dynamical systems. Its Bayesian optimization and uncertainty-quantification workflow could help probe and parameterize nonequilibrium biological behaviors, even though the supplied evidence is tied specifically to optogenetic control in yeast rather than general living-vs-nonliving frameworks.

Mechanistic: 62 · Context: 72 · Throughput: 78 · First test time: 55 · First test cost: 52 · Replication: 9 · Practicality: 59 · Translatability: 9

Assumptions: Assumes the optimization and uncertainty-quantification approach can be repurposed from optogenetic control to broader nonequilibrium system exploration.

Missing evidence: No supplied evidence that it supports stochastic thermodynamics, chemical reaction network simulation, or formal discrimination of living versus nonliving systems.
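The item above couples high-throughput measurements with data-driven learning and uncertainty quantification to pick experimental conditions. This toy sketch is not the published framework; it only illustrates the general idea of uncertainty-guided condition selection, using an upper-confidence-bound rule over discrete candidate light doses with an invented response function:

```python
# Toy uncertainty-guided experiment selection (UCB bandit), NOT the cited
# Bayesian optimization framework. The response function and doses are
# hypothetical stand-ins for light-induction conditions.
import math
import random

def noisy_response(dose, rng):
    # Invented ground truth: response peaks at dose 0.6, with measurement noise.
    return -(dose - 0.6) ** 2 + rng.gauss(0, 0.05)

def ucb_search(doses, n_rounds=60, seed=1):
    rng = random.Random(seed)
    counts = {d: 0 for d in doses}
    means = {d: 0.0 for d in doses}
    for t in range(1, n_rounds + 1):
        def score(d):
            if counts[d] == 0:
                return float("inf")      # try every condition at least once
            return means[d] + math.sqrt(2 * math.log(t) / counts[d])
        d = max(doses, key=score)        # highest mean + exploration bonus
        y = noisy_response(d, rng)
        counts[d] += 1
        means[d] += (y - means[d]) / counts[d]   # running-mean update
    return max(doses, key=lambda d: means[d])

best = ucb_search([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
print(best)
```

A full Bayesian optimization would replace the running means with a probabilistic surrogate model, but the select-measure-update loop is the same shape.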

Lustro

assay method

Gap applicability: 29

Lustro is a high-throughput optogenetics platform for studying and controlling blue light-sensitive optogenetic systems. In the cited 2023 work, it was combined with machine learning to achieve multiplexed control of split transcription factor responses in Saccharomyces cerevisiae.

Lustro provides a high-throughput perturbation-and-measurement platform for dynamic control of biological systems, which could be useful for generating datasets on driven, far-from-equilibrium responses. That makes it potentially relevant as an experimental platform, but only indirectly addresses the gap's need for formal physical frameworks.

Mechanistic: 34 · Context: 48 · Throughput: 74 · First test time: 42 · First test cost: 36 · Replication: 9 · Practicality: 59 · Translatability: 9

Assumptions: Assumes controlled optogenetic perturbation experiments are useful for studying nonequilibrium system behavior.

Missing evidence: No supplied evidence for use in biophysical theory building, stochastic thermodynamics, or distinguishing living from nonliving systems.

CIDNP

assay method

Gap applicability: 24

CIDNP (chemically induced dynamic nuclear polarization) is a phenomenon in which chemical reactions generate nuclear hyperpolarization. Photo-CIDNP is the light-driven form of this phenomenon and is discussed in electron-transfer systems that often use flavins as electron acceptors.

CIDNP is at least mechanistically tied to light-driven electron transfer and nonequilibrium chemical reactions, so it could serve as a specialized assay for probing physical signatures of driven biochemical systems. Its relevance to the stated gap is narrow and indirect because the supplied evidence does not connect it to organism-level or formal framework development.

Mechanistic: 28 · Context: 31 · Throughput: 22 · First test time: 24 · First test cost: 18 · Replication: 50 · Practicality: 50 · Translatability: 50

Assumptions: Assumes a spectroscopy-style assay of driven reaction physics is useful for studying far-from-equilibrium biological phenomena.

Missing evidence: No supplied evidence on scalability, organism-level applicability, or use for formal classification of living versus nonliving systems; replication metadata is also missing.

Lack of Direct Measurement of Quantum Effects in Biological Systems

Biophysics · 1 capability

Tools: 3 · Best: 45%

Despite theoretical predictions, quantum effects in biological systems remain largely unmeasured. Direct experimental evidence is needed to explore how quantum phenomena influence biomolecular interactions.

Foundational Capabilities

Quantum Biology Measurements

Develop the instrumentation and controls required to directly measure quantum effects in biological systems.

3 resources

Candidate Tools

theoretical modeling

computation method

Gap applicability: 45

Continuous development of experimental methodologies, synergistic correlation with theoretical modeling, and the expansion to other nonequilibrium, photoswitchable, and controllable protein systems will greatly advance the chemical, physical, and biological sciences.

The gap explicitly calls for direct measurement of quantum effects, and the supplied evidence states that experimental methodologies should be used synergistically with theoretical modeling. This makes it a plausible interpretive support tool for designing controls and analyzing candidate quantum-biology measurements, though it is not itself a direct measurement assay.

Mechanistic: 42 · Context: 68 · Throughput: 70 · First test time: 80 · First test cost: 82 · Replication: 25 · Practicality: 40 · Translatability: 30

Assumptions: Used as analysis and experiment-design support alongside a physical measurement method.

Missing evidence: No specific quantum-biological model class, target system, or validated measurement workflow is provided.

Gap applicability

31

In this study, we probed the effects of a few key mutations on the coupled binding and folding of α-synuclein by using a combination of single-molecule (smFRET) and ensemble (far-UV CD) measurements.

Single-molecule FRET is an actionable high-resolution biophysical assay that can detect heterogeneous conformational dynamics in biomolecules. It could plausibly help probe signatures consistent with unusual coupling or coherence-related hypotheses in specific systems, but the supplied evidence does not show direct measurement of quantum effects.

Mechanistic
28
Context
46
Throughput
55
First test time
42
First test cost
35
Replication
30
Practicality
35
Translatability
30

Assumptions: Relevant only if the target quantum hypothesis predicts measurable distance or state-distribution changes at the single-molecule level.

Missing evidence: No evidence here links smFRET to quantum-effect readouts, quantum controls, or the biological systems of interest.

Gap applicability

24

Light-addressable potentiometric sensors (LAPS) ... use photocurrent measurements at electrolyte-insulator-semiconductor substrates for spatio-temporal imaging of electrical potentials and impedance.

This is a real biosensing platform for spatiotemporal imaging of electrical potentials and impedance, so it could support exploratory measurements of biophysical states in living samples. However, the provided evidence supports conventional potentiometric sensing rather than direct detection of quantum phenomena.

Mechanistic
18
Context
34
Throughput
58
First test time
38
First test cost
32
Replication
30
Practicality
35
Translatability
30

Assumptions: Could only contribute as a supporting electrical readout in a broader quantum-biology measurement setup.

Missing evidence: No evidence of sensitivity to quantum effects, biomolecular quantum observables, or required controls for quantum-biology experiments.

Chemistry

7 gaps · 27 tool connections · best match 58%

In-Silico Molecular Simulation Is Slow and Kludgy

Chemistry· 8 capabilities

Tools

6

Best

46%

In-silico molecular simulation has not received the necessary push, despite the promise of machine learning-based surrogate models. Moreover, advancements in quantum chemistry—both AI accelerated and quantum/ASIC-enabled—remain underexploited.

Foundational Capabilities

Ab-Initio Calculation of Heavier Elements

Create accurate, thoroughly benchmarked calculations for elements beyond the second row of the periodic table, for clusters of 3 or more atoms. Could complement 29.5.

2 resources

AI-Accelerated Quantum Chemistry

Leverage AI techniques to accelerate quantum chemistry calculations, improving the speed and accuracy of electronic structure predictions.

3 resources

AI-Enabled Molecular Dynamics

Use neural network potentials and force fields to enhance molecular dynamics simulations, making them more efficient and accurate.

10 resources

ASIC-Enabled Quantum Chemistry

Develop application-specific integrated circuits (ASICs) tailored for quantum chemistry calculations, aiming to combine efficiency with high computational power.

1 resource

High-Quality Experimental Chemistry Benchmarks

Produce a high-quality experimental dataset to validate molecular simulation techniques and transition to frictionless reproducibility.

2 resources

High-Quality Open Reaction and Structure Datasets

Support open alternatives to Reaxys and the Cambridge Structural Database to the point where they are equal or superior in quality.

2 resources

Machine Learning Force Fields for Electrochemistry

Extend work on ML force fields to charge transfer problems in external potentials, enabling in-silico discoveries in batteries, electrolysis, carbon capture, biochemistry, and the origins of life.

3 resources

Quantum Computing-Enabled Quantum Chemistry

Utilize hybrid quantum algorithms to perform quantum chemistry simulations, capitalizing on recent progress in quantum computing.

2 resources

Candidate Tools

Gap applicability

46

Molecular dynamics simulation is a computational method for modeling atomistic conformational dynamics of proteins and analyzing residue fluctuations and vibrational behavior. In the cited studies, it was used as a noninvasive approach to validate dynamic behavior and to compare PAS-domain dynamics across functional groups.

This is a directly relevant in silico simulation method for atomistic molecular dynamics, which is part of the gap's core bottleneck. It is not itself an acceleration strategy, but it is a usable baseline method for testing improved workflows, surrogate models, or benchmarking simulation speed and fidelity.

Mechanistic
62
Context
42
Throughput
45
First test time
72
First test cost
62
Replication
20
Practicality
83
Translatability
12

Assumptions: Assumes baseline MD methods are in scope as actionable computational tools for improving simulation workflows.

Missing evidence: No evidence here that this specific implementation is AI-accelerated, quantum-enabled, or optimized for chemistry beyond protein dynamics.
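The baseline mechanism this entry refers to — propagating atomistic positions under a force field — reduces to a simple integration loop. The sketch below is a minimal velocity-Verlet integrator on a toy harmonic "bond"; the potential, parameters, and function names are illustrative assumptions, not taken from the cited studies.

```python
import math

def velocity_verlet(x, v, force, mass, dt, steps):
    """Minimal velocity-Verlet loop: the core update rule of classical MD engines."""
    a = force(x) / mass
    traj = [x]
    for _ in range(steps):
        x = x + v * dt + 0.5 * a * dt * dt   # position update
        a_new = force(x) / mass              # force at the new position
        v = v + 0.5 * (a + a_new) * dt       # velocity update with averaged acceleration
        a = a_new
        traj.append(x)
    return traj

# Toy harmonic "bond" with spring constant k: F(x) = -k * x
k, m, dt = 1.0, 1.0, 0.01
steps = int(2 * math.pi / dt)  # roughly one oscillation period for k = m = 1
traj = velocity_verlet(x=1.0, v=0.0, force=lambda x: -k * x, mass=m, dt=dt, steps=steps)
```

Real MD packages add neighbor lists, thermostats, and many-body force fields on top of this loop, which is where most of the cost — and most of the acceleration opportunity — lives.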

Markov State Modeling

computation method

Gap applicability

34

Markov State Modeling (MSM) is a computational method applied with molecular dynamics simulations to resolve conformational dynamics in the AsLOV2 photosensory domain. In the cited 2023 study, MSM was used to explain blue-light-induced stepwise unfolding of the C-terminal Jα-helix and to identify seven structurally distinguishable unfolding states spanning initiation and post-initiation phases.

Markov state modeling is an actionable computational analysis method that can extract kinetic and conformational structure from MD trajectories, potentially making simulation outputs more interpretable and useful. That addresses part of the 'kludgy' workflow problem even though it does not directly accelerate quantum chemistry.

Mechanistic
48
Context
34
Throughput
40
First test time
55
First test cost
58
Replication
9
Practicality
59
Translatability
9

Assumptions: Assumes the gap includes downstream analysis methods that improve usability of simulation pipelines.

Missing evidence: Evidence is limited to a light-responsive protein system, not general chemistry, heavier elements, or AI-accelerated simulation.
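At its core, MSM construction counts transitions between discretized conformational states and extracts kinetics from the resulting transition matrix. The toy two-state trajectory and all parameter choices below are illustrative assumptions, not data from the cited AsLOV2 study.

```python
import numpy as np

def msm_transition_matrix(dtraj, n_states, lag=1):
    """Row-stochastic transition matrix estimated from a discrete state trajectory."""
    counts = np.zeros((n_states, n_states))
    for i, j in zip(dtraj[:-lag], dtraj[lag:]):
        counts[i, j] += 1
    counts += 1e-12  # tiny pseudocount so empty rows stay normalizable
    return counts / counts.sum(axis=1, keepdims=True)

def stationary_distribution(T, iters=1000):
    """Stationary (equilibrium) populations via power iteration on the left eigenvector."""
    pi = np.full(T.shape[0], 1.0 / T.shape[0])
    for _ in range(iters):
        pi = pi @ T
    return pi / pi.sum()

# Toy trajectory over two states; state 0 is visited far more often than state 1.
dtraj = [0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0]
T = msm_transition_matrix(dtraj, n_states=2)
pi = stationary_distribution(T)
```

Production MSM tools additionally validate the lag time, enforce detailed balance, and cluster continuous MD coordinates into the discrete states assumed here.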

Gap applicability

31

Accelerated MD simulation is an in silico computational method reported for elucidating the photoactivation mechanism of the AsLOV2 light-responsive domain. The available evidence supports its use as a mechanistic analysis approach for a protein photosensor rather than as a deployable biological reagent.

Accelerated MD is one of the few candidate items whose named mechanism directly suggests faster exploration of molecular dynamics landscapes. That makes it plausibly relevant to the gap's complaint that molecular simulation is slow, even though the provided evidence is narrow and protein-specific.

Mechanistic
52
Context
28
Throughput
43
First test time
58
First test cost
58
Replication
9
Practicality
59
Translatability
9

Assumptions: Assumes the reported accelerated MD method is being considered as a general computational strategy rather than only for the cited photoreceptor case.

Missing evidence: No supplied evidence for chemistry-wide applicability, quantitative speedup, surrogate modeling, quantum chemistry coupling, or validation on non-protein systems.

transition path sampling

computation method

Gap applicability

31

Transition path sampling is a computational method applied to explicit-solvent molecular dynamics trajectories to extract atomistic features of conformational reaction networks. In the cited study, it was used to analyze the millisecond partial unfolding transition in the light-driven photocycle of photoactive yellow protein and to predict reaction coordinate models and tentative transition states.

Transition path sampling is a concrete computational method for extracting reaction-coordinate and transition-state information from MD trajectories. It could help reduce inefficiency in studying rare events or conformational transitions, which is relevant to making simulation workflows more informative per unit compute.

Mechanistic
44
Context
31
Throughput
34
First test time
46
First test cost
50
Replication
9
Practicality
59
Translatability
9

Assumptions: Assumes rare-event analysis methods are relevant to the stated simulation bottleneck.

Missing evidence: No supplied evidence for broader chemistry use, AI acceleration, quantum chemistry integration, or performance gains versus standard approaches.

Gap applicability

30

Molecular dynamics simulations combined with Markov state modeling were used to characterize blue-light-induced conformational switching in the Avena sativa LOV2 (AsLOV2) domain. This computation method resolved C-terminal Jα-helix unfolding into seven structurally distinguishable steps spanning initiation and post-initiation phases.

This is a usable MD-plus-MSM computational workflow for resolving conformational dynamics, placing it squarely in the simulation-method space of the gap. It may serve as a reference workflow for evaluating better simulation or analysis stacks, but the evidence does not show that it solves the speed bottleneck itself.

Mechanistic
40
Context
28
Throughput
36
First test time
56
First test cost
56
Replication
9
Practicality
71
Translatability
11

Assumptions: Assumes combined MD analysis workflows are in scope as actionable methods.

Missing evidence: No evidence for acceleration, quantum chemistry relevance, broader chemical systems, or improved computational efficiency.

Gap applicability

20

The Bayesian optimization framework is a computational method built from high-throughput Lustro measurements to guide control of blue light-sensitive optogenetic systems. It uses data-driven learning, uncertainty quantification, and experimental design to identify light induction conditions for multiplexed regulation in Saccharomyces cerevisiae.

Bayesian optimization is an actionable ML method for data-driven search and experimental design, which is conceptually adjacent to surrogate-model-guided optimization. It could plausibly inform more efficient parameter search in simulation workflows, but the supplied evidence is from optogenetic control rather than molecular simulation or quantum chemistry.

Mechanistic
24
Context
18
Throughput
41
First test time
52
First test cost
56
Replication
9
Practicality
59
Translatability
9

Assumptions: Assumes general optimization frameworks can be repurposed toward simulation workflow tuning.

Missing evidence: No direct evidence here for use in molecular simulation, force fields, electronic structure prediction, or quantum chemistry.
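The surrogate-model-guided search this entry alludes to can be sketched as a minimal Gaussian-process upper-confidence-bound (GP-UCB) loop: fit a surrogate to past evaluations, then query where predicted value plus uncertainty is highest. The kernel, objective, and all hyperparameters below are toy assumptions, unrelated to the cited optogenetics work.

```python
import numpy as np

def rbf(a, b, ls=0.2):
    """Squared-exponential kernel between two 1-D point sets."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

def gp_posterior(X, y, Xs, noise=1e-6):
    """GP posterior mean and std on test points Xs given observations (X, y)."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    alpha = np.linalg.solve(K, y)
    mu = Ks.T @ alpha
    v = np.linalg.solve(K, Ks)
    var = np.clip(np.diag(rbf(Xs, Xs) - Ks.T @ v), 1e-12, None)
    return mu, np.sqrt(var)

def bayes_opt(f, iters=15, beta=2.0, seed=0):
    """Minimal GP-UCB loop on [0, 1]: fit surrogate, evaluate at max(mu + beta*sigma)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(0, 1, size=3)          # a few random initial evaluations
    y = np.array([f(x) for x in X])
    grid = np.linspace(0, 1, 201)          # candidate query points
    for _ in range(iters):
        mu, sigma = gp_posterior(X, y, grid)
        x_next = grid[np.argmax(mu + beta * sigma)]
        X = np.append(X, x_next)
        y = np.append(y, f(x_next))
    return X[np.argmax(y)], y.max()

# Toy objective with a single peak at x = 0.7
best_x, best_y = bayes_opt(lambda x: -(x - 0.7) ** 2)
```

The same loop structure underlies surrogate-guided experiment design regardless of whether `f` is a wet-lab measurement or a simulation workflow parameter.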

We Can’t Yet Replicate Animal Olfaction Synthetically as a Sensing and Classification Modality

Chemistry· 2 capabilities

Tools

5

Best

58%

We currently lack a comprehensive model explaining how biological systems decode and classify chemical signals through olfaction. Understanding this process is critical for applications ranging from flavor science to disease diagnostics to understanding and harnessing animal communication.

Foundational Capabilities

Build an Olfaction Decoding Model

Develop an integrative model that explains how the olfactory system decodes and classifies complex chemical stimuli, linking molecular features to perceived odors.

3 resources

Map of Odorant Receptor Binding

Humans have ~400 odorant receptor genes that encode functional GPCR proteins. Binding ligands have been identified for ~80. Mapping the binding profiles of olfactory receptors from humans and other animals would enable novel biosensors and reveal novel therapeutic targets.

4 resources

Candidate Tools

OBP-based biosensors

construct pattern

Gap applicability

58

When coupled with electrical transducers, OBPs act as recognition elements, converting chemical signals into electrical outputs. This enables the development of biological electronic noses that are based on biomimetics and aim for sustainability.

This item is directly framed as a biomimetic electronic-nose strategy, using odorant-binding proteins as chemical recognition elements coupled to electrical readout. That makes it plausibly relevant to the gap's sensing side, especially for building synthetic odor detection systems even if it does not by itself explain full biological decoding.

Mechanistic
78
Context
72
Throughput
46
First test time
58
First test cost
52
Replication
30
Practicality
45
Translatability
62

Assumptions: Assumes OBP recognition can serve as a useful proxy for at least part of olfactory input encoding.

Missing evidence: No supplied evidence on odor panel breadth, discrimination performance, species relevance, or whether OBPs support complex odor classification rather than simple detection.

cell-free protein synthesis

engineering method

Gap applicability

56

Cell-free protein synthesis (CFPS) has been used as a transformative technology in synthetic biology, providing a programmable, scalable, and automation-compatible platform for biological engineering.

The supplied summary specifically notes transmembrane protein expression and automation-compatible scaling, which could plausibly support expression and testing of olfactory receptors or related sensing components. That makes it a potentially useful enabling method for receptor-ligand mapping and synthetic olfaction prototyping.

Mechanistic
67
Context
74
Throughput
80
First test time
70
First test cost
62
Replication
35
Practicality
58
Translatability
42

Assumptions: Assumes the transmembrane protein expression capability extends to functional olfactory receptors or similar chemosensory proteins.

Missing evidence: No direct evidence here that olfactory receptors have been expressed functionally, coupled to readouts, or screened against odorants in this platform.

This chapter explores the principles, platforms, and applications of CFS-based HTS... Altogether, CFS-based HTS offers a flexible, rapid, and accessible approach for next-generation biomolecular screening and therapeutic development.

A major bottleneck in the gap is mapping many receptor-odorant interactions, and this item is explicitly a rapid, flexible high-throughput screening approach. It could plausibly help generate large binding or functional datasets needed for odor decoding models if adapted to olfactory receptor assays.

Mechanistic
61
Context
68
Throughput
92
First test time
72
First test cost
60
Replication
28
Practicality
55
Translatability
36

Assumptions: Assumes olfactory sensing components can be incorporated into a cell-free HTS workflow.

Missing evidence: No supplied evidence of compatibility with GPCR-like olfactory receptors, volatile odorants, or olfaction-specific functional readouts.

biosensors

assay method

Gap applicability

45

Biosensors have shown potential for success in diagnostic testing due to their ease of use, inexpensive materials, rapid results, and portable nature. Biosensors can be combined with nanomaterials to produce sensitive and easily interpretable results.

Biosensors are generally relevant to the synthetic sensing part of the gap because they provide practical chemical detection formats with rapid and portable readouts. They could support early prototyping of odor-responsive devices, though the supplied evidence is generic rather than olfaction-specific.

Mechanistic
49
Context
55
Throughput
50
First test time
76
First test cost
78
Replication
25
Practicality
70
Translatability
50

Assumptions: Assumes the general biosensor framework can be adapted to volatile odorant detection and classification.

Missing evidence: No direct evidence on odorant recognition elements, multiplexing, selectivity for olfactory chemistry, or use in odor classification.

Gap applicability

39

Split-protein complementation assays (PCAs), where a reporter protein is divided into two inactive fragments, have evolved from simple reporters of biological events into an increasingly important tool in modern virology.

If adapted appropriately, split-protein complementation could provide a functional reporter format for receptor activation or interaction events during odorant screening. That could help assay parts of olfactory signal detection, but the supplied evidence does not show olfaction use.

Mechanistic
42
Context
44
Throughput
56
First test time
57
First test cost
55
Replication
20
Practicality
35
Translatability
25

Assumptions: Assumes PCA designs can be engineered to report odorant receptor activation or associated signaling events.

Missing evidence: No supplied evidence for olfactory receptors, GPCR signaling compatibility, volatile ligand handling, or classification-scale screening performance.

Limited ability to identify molecular structures through spectroscopy

Chemistry· 2 capabilities

Tools

5

Best

43%

Most molecular structure determination methods lose critical information, and solving the inverse problem remains challenging. This limits our ability to accurately reconstruct molecular structures from spectral data.

Foundational Capabilities

Microwave Spectroscopy for 1:1 Mapping

Employ microwave spectroscopy to establish a direct one-to-one mapping between molecular structure and spectrum, preserving detailed structural information.

1 resource

Universal Spectroscopic Databases

Develop comprehensive spectroscopic databases to train AI models for both forward and inverse predictions, enabling more accurate structure determination.

1 resource

Candidate Tools

Raman spectroscopy

assay method

Gap applicability

43

Optical imaging methods covered in this review include... Raman spectroscopy for early-stage cancer detection.

Raman spectroscopy is directly a spectroscopic assay modality, so it is plausibly relevant to extracting structural information from spectra. It could contribute complementary vibrational fingerprints for molecular structure identification, although the supplied evidence is tied to cancer detection rather than inverse structure reconstruction.

Mechanistic
62
Context
58
Throughput
55
First test time
45
First test cost
35
Replication
30
Practicality
30
Translatability
30

Assumptions: Assumes Raman spectra are being considered as one candidate input modality for structure determination in chemistry.

Missing evidence: No supplied evidence on molecular structure reconstruction, inverse modeling performance, database generation, or compatibility with microwave spectroscopy workflows.

graph neural networks

computation method

Gap applicability

42

Graph neural networks (GNNs) are one of the fastest growing classes of machine learning models. They are of particular relevance for chemistry and materials science, as they directly work on a graph or structural representation of molecules and materials and therefore have full access to all relevant information required to characterize materials.

The gap explicitly mentions training AI models for forward and inverse prediction from spectroscopic data, and graph neural networks are a chemistry-relevant machine learning method for molecular representations. They could plausibly be used as a modeling component once suitable spectroscopic databases are available.

Mechanistic
48
Context
72
Throughput
82
First test time
62
First test cost
66
Replication
30
Practicality
30
Translatability
30

Assumptions: Assumes the intended solution includes ML models that map between spectra and molecular structures.

Missing evidence: No supplied evidence that this item has been applied to spectroscopy, inverse structure determination, or spectroscopic database training.
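The "graph or structural representation" that makes GNNs attractive here comes down to message passing: each node (atom) aggregates its neighbors' features and transforms them, yielding permutation-invariant embeddings. The sketch below is one untrained message-passing step on a hypothetical 3-atom path graph; all shapes and values are illustrative assumptions.

```python
import numpy as np

def message_passing(H, A, W):
    """One GNN message-passing step: aggregate neighbor features, then transform.

    H: (n_nodes, d) node features; A: (n, n) adjacency matrix; W: (d, d_out) weights.
    Uses mean aggregation over neighbors plus a self-connection, with tanh activation.
    """
    A_hat = A + np.eye(A.shape[0])           # add self-loops
    deg = A_hat.sum(axis=1, keepdims=True)   # per-node neighbor counts
    return np.tanh((A_hat / deg) @ H @ W)

# Toy "molecule": a 3-node path graph (e.g. O-C-O) with one scalar feature per atom.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
H = np.array([[1.0], [0.0], [1.0]])
W = np.ones((1, 1))
H1 = message_passing(H, A, W)
# A graph-level readout (sum over nodes) gives a permutation-invariant embedding.
graph_embedding = H1.sum(axis=0)
```

For spectrum-to-structure work, such node and graph embeddings would feed a forward model (structure to predicted spectrum) or condition an inverse generator — the part for which, as noted above, no evidence is supplied here.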

molecular dynamics

computation method

Gap applicability

29

Molecular dynamics is a computational method used to study signaling mechanisms of LOV domains through simulation-based analysis. In the cited literature, it functions as an in silico approach for mechanistic investigation rather than as a biological reagent or genetically encoded tool.

Molecular dynamics is a usable computational method for generating or refining structural hypotheses that could be compared against spectral observations. That makes it a possible support tool for the inverse problem, but the supplied evidence only shows use in LOV-domain mechanistic analysis rather than general spectroscopic structure identification.

Mechanistic
34
Context
41
Throughput
38
First test time
52
First test cost
46
Replication
9
Practicality
71
Translatability
11

Assumptions: Assumes candidate structures need computational refinement or conformational sampling before spectral comparison.

Missing evidence: No supplied evidence linking this method to microwave spectroscopy, spectroscopic database generation, or direct spectrum-to-structure inversion.

homology modeling

computation method

Gap applicability

27

computational methods (such as homology modeling, molecular docking, molecular dynamics and enhanced sampling techniques) can provide structural insights to guide photoswitch design and to understand the observed light-regulated effects.

Homology modeling can provide candidate structural models that might be evaluated against spectral data. However, this is only an indirect fit because the supplied evidence is about photoswitch design rather than solving spectroscopy-based molecular structure determination.

Mechanistic
22
Context
28
Throughput
44
First test time
58
First test cost
61
Replication
30
Practicality
30
Translatability
30

Assumptions: Assumes target molecules are in a class where template-based structural modeling is relevant.

Missing evidence: No supplied evidence for use with small-molecule spectroscopy, inverse spectral reconstruction, or spectroscopic database building.

Gap applicability

22

Ultrafast mid-infrared spectroscopy is an assay method used to study primary light-driven reactions in the LOV2 domain of phototropin. In the cited 2009 Biophysical Journal study, it was applied together with quantum chemistry to investigate early photochemical events.

This is directly a spectroscopy method and, in principle, richer spectroscopic measurements can preserve more structural information. But the supplied evidence is narrowly about light-driven reactions in a LOV2 protein domain, not general molecular structure identification or inverse reconstruction.

Mechanistic
31
Context
24
Throughput
20
First test time
18
First test cost
12
Replication
9
Practicality
59
Translatability
9

Assumptions: Assumes richer time-resolved infrared measurements could be useful for some chemistry structure-assignment problems.

Missing evidence: No supplied evidence for broad chemical applicability, one-to-one structure-spectrum mapping, database generation, or inverse prediction workflows.

We Don’t Have Easy Programmable Synthesis of Bio Polymers Other Than Nucleic Acids

Chemistry· 1 capability

Tools

3

Best

58%

While long-chain nucleic acid synthesis is advancing rapidly, the programmable synthesis of other polymers remains underdeveloped, limiting our capacity to design and produce diverse synthetic polymers.

Foundational Capabilities

Solid-Phase Synthesizers for Other Polymers

Build solid-phase synthesizers capable of the universal, programmable synthesis of polymers such as proteins, peptides, spiroligomers, carbohydrates, and RNA mimetics.

5 resources

Candidate Tools

cell-free protein synthesis

engineering method

Gap applicability

58

Cell-free protein synthesis (CFPS) has been used as a transformative technology in synthetic biology, providing a programmable, scalable, and automation-compatible platform for biological engineering.

This is the clearest actionable platform in the set for programmable synthesis of a non-nucleic-acid biopolymer, namely proteins. Its summary explicitly supports programmability, scalability, and automation compatibility, which aligns with the gap's emphasis on easy programmable synthesis platforms.

Mechanistic
72
Context
74
Throughput
84
First test time
62
First test cost
48
Replication
35
Practicality
50
Translatability
40

Assumptions: Assumes protein synthesis is an acceptable partial match to the broader polymer-synthesis gap.

Missing evidence: No direct evidence here on peptide, carbohydrate, spiroligomer, or RNA-mimetic synthesis; no replication or cost metadata supplied.

genetic code expansion

engineering method

Gap applicability

42

Genetic code expansion is an engineering method that enables incorporation of non-physiological amino acids into proteins. In the supplied evidence, it was used to design efficient incorporation systems in Bacillus subtilis and to generate a Cas9 variant that became full-length and active in cultured somatic cells only after BOC exposure.

Genetic code expansion can extend programmable protein synthesis beyond the canonical amino acid set, which is relevant to making protein-like polymers more designable. It addresses part of the bottleneck by increasing chemical diversity within genetically encoded polymer production.

Mechanistic
55
Context
52
Throughput
42
First test time
46
First test cost
44
Replication
20
Practicality
83
Translatability
12

Assumptions: Assumes expanding amino acid repertoire in proteins is a meaningful step toward broader programmable polymer synthesis.

Missing evidence: Evidence does not show universal polymer synthesis, solid-phase synthesis, long-chain non-protein polymers, or automation performance for this use case.

click-labelling

engineering method

Gap applicability

30

Click-labelling in this context is a Bacillus subtilis genetic code expansion platform that incorporates noncanonical amino acids for click-chemistry-based protein labelling. In the cited 2021 study, it was implemented within broad and efficient stop-codon suppression systems and used alongside photo-crosslinking and translational titration applications.

This item provides a directly usable genetic-code-expansion implementation for introducing noncanonical amino acids into proteins, which could support more programmable protein polymer chemistry. It is weaker than CFPS because the supplied evidence is focused on labeling and stop-codon suppression rather than general polymer synthesis.

Mechanistic
38
Context
36
Throughput
30
First test time
40
First test cost
38
Replication
9
Practicality
59
Translatability
24

Assumptions: Assumes a protein-focused noncanonical incorporation platform is still relevant to the stated chemistry gap.

Missing evidence: No evidence for de novo polymer assembly workflows, automation, long-chain synthesis performance, or applicability beyond protein labeling contexts.

Manual and Laborious Nature of Chemical Synthesis

Chemistry· 7 capabilities

Tools

3

Best

50%

Chemical synthesis remains largely manual, limiting throughput and reproducibility. The field requires robust automation to accelerate discovery and production of new molecules.

Foundational Capabilities

Broader Chemistry Automation

Advance the field of chemistry automation through additional robotics and high-throughput platforms, enabling scalable synthesis processes.

7 resources

Cheap Enzymatic DNA Synthesis

Direct long DNA synthesis could still be cheaper. Biosecurity consideration: implementation should be governed by security measures such as:
• Trustless DNA synthesis screening
• Hardware lock for DNA synthesizer
See these capabilities under the “Risks of Malicious Bioengineering” bottleneck.

1 resource

Cheap Long Peptide Synthesis

Fast, on-demand complex peptide manufacturing.

1 resource

Chemputers for Automated Synthesis

Implement a “chemputer” system to automate chemical synthesis processes, reducing human intervention and increasing reproducibility.

3 resources

Generative Model Based Parallel Library Synthesis

Synthesize massive biopolymer libraries according to the statistics of a generative model.

1 resource

In-Vivo Externally Programmable DNA/RNA Synthesis

Physics-based external control of DNA synthesis inside the cell, as a route to maximizing “bandwidth across the cell membrane.”

1 resource

Modular Synthesis Using Improved Building Blocks

Develop and utilize a better set of standardized building blocks to enable modular synthesis, making the assembly of complex molecules more efficient and scalable.

3 resources

Candidate Tools

cell-free protein synthesis

engineering method

Gap applicability

50

Cell-free protein synthesis (CFPS) has been used as a transformative technology in synthetic biology, providing a programmable, scalable, and automation-compatible platform for biological engineering.

The gap is about reducing manual work and increasing throughput in synthesis workflows, and this item is explicitly described as programmable, scalable, and automation-compatible. It is a plausible fit for automating some molecule-production and screening tasks, especially where biological production can substitute for manual bench synthesis.

Mechanistic
58
Context
46
Throughput
78
First test time
55
First test cost
42
Replication
50
Practicality
50
Translatability
50

Assumptions: Assumes the synthesis bottleneck can include biomolecule production workflows, not only small-molecule organic synthesis.

Missing evidence: No direct evidence here that CFPS automates small-molecule chemical synthesis or integrates with robotic chemistry platforms.

Gap applicability

28

The Bayesian optimization framework is a computational method built from high-throughput Lustro measurements to guide control of blue light-sensitive optogenetic systems. It uses data-driven learning, uncertainty quantification, and experimental design to identify light induction conditions for multiplexed regulation in Saccharomyces cerevisiae.

The gap calls for robust automation to accelerate discovery, and Bayesian optimization is an actionable experimental-design method that can reduce manual parameter search. It is plausibly useful for closed-loop optimization in automated synthesis, though the provided evidence is limited to light-controlled optogenetic systems.

Mechanistic
39
Context
28
Throughput
61
First test time
49
First test cost
56
Replication
9
Practicality
59
Translatability
9

Assumptions: Assumes the chemistry automation problem includes experimental-condition optimization.

Missing evidence: No direct evidence here for reaction optimization, synthesis planning, or deployment in chemical synthesis platforms.

Gap applicability

28

In silico feedback control strategies are computationally implemented control schemes coupled to optogenetic measurement and light stimulation platforms. They are used to create computer-controlled living systems through automated measurement and stimulation workflows.

This item directly concerns automated measurement-and-stimulation workflows and computer-controlled experimental control, which is mechanistically relevant to reducing manual intervention. It could plausibly contribute control logic for automated synthesis platforms, but the supplied evidence is from optogenetic living-system contexts rather than chemistry automation.

Mechanistic
42
Context
31
Throughput
52
First test time
38
First test cost
34
Replication
9
Practicality
59
Translatability
9

Assumptions: Assumes control-system concepts may transfer from biological automation to chemistry workflows.

Missing evidence: No direct evidence of use in chemical synthesis, robotic chemistry, flow synthesis, or self-driving labs.
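The "automated measurement and stimulation" loop described above is, structurally, closed-loop feedback control: measure the system, compute an error against a setpoint, and actuate. The sketch below is a toy in silico proportional-integral controller driving a first-order plant (e.g. a protein level with degradation, driven by a non-negative light input); the plant model, gains, and function name are all illustrative assumptions.

```python
def simulate_feedback(setpoint=1.0, steps=500, dt=0.1, kp=2.0, ki=0.5):
    """Toy in silico PI feedback: measure, compute error, actuate, repeat.

    Plant: first-order system dx/dt = -x + u (degradation plus an input u
    clipped to be non-negative, as a light stimulus would be).
    """
    x, integral = 0.0, 0.0
    for _ in range(steps):
        error = setpoint - x                       # "measurement" step
        integral += error * dt                     # accumulated error
        u = max(0.0, kp * error + ki * integral)   # "stimulation" step (u >= 0)
        x += (-x + u) * dt                         # Euler update of the plant
    return x
```

In a real platform the plant update would be replaced by a live readout from the instrument; the controller logic is what would transfer to chemistry automation, if anything does.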

Limited Understanding of the Chemical Reaction Space

Chemistry· 3 capabilities

Tools

3

Best

46%

Our overall knowledge of the chemical reaction space, including the catalysts that drive these reactions, is still rudimentary. We also lack the large, detailed materials synthesis and processing datasets needed to enable highly predictive models.

Foundational Capabilities

Comprehensive Mapping and Modeling of Chemical Space

Map and model chemical reactions and catalysts more comprehensively to better understand reaction mechanisms and discover novel catalysts.

2 resources

Multi-Modal Chemical Data to Enable AlphaChem

An open dataset that links multiple modes of chemical characterization, integrating existing databases that are currently siloed or behind paywalls. The dataset should include structure, synthetic route, NMR/IR spectra, and bioactivity to enable truly holistic chemical AI that can predict synthesis routes and spectra from structure.

1 resource

Open Synthesis Database

A community-wide, structured repository (a PDB equivalent) of how materials are made, including processing parameters, environments, recipes, and results, as well as procedure logs and outcomes from failed experiments.

1 resource

Candidate Tools

Gap applicability

46

Microfluidic technology has become a powerful tool to address these challenges by supporting the miniaturization and automation of complex, multi-step workflows. Integrating cell-free gene and protein synthesis with microfluidic platforms has redefined bioprocessing, making it more compact and accessible.

The gap emphasizes the need for larger, structured synthesis and processing datasets. Microfluidic automation is plausibly useful for generating reaction-condition and outcome data at higher throughput, which could support broader mapping of chemical or materials synthesis space.

Mechanistic
58
Context
34
Throughput
78
First test time
42
First test cost
30
Replication
50
Practicality
50
Translatability
50

Assumptions: Assumes the platform can be adapted beyond cell-free synthetic biology to chemistry or materials workflow screening.

Missing evidence: No direct evidence here that the method has been used for chemical reaction-space mapping, catalyst discovery, or open synthesis database generation.

mathematical modeling

computation method

Gap applicability

40

Mathematical modeling is a computational method used to guide the rational design of synthetic gene circuits. The cited literature also places it alongside live-cell imaging and within quantitative model systems used to study microbial drug resistance and spatial-temporal features of cancer in mammalian cells.

The gap explicitly calls for better mapping and modeling of chemical space. Mathematical modeling is a directly actionable computational approach for organizing sparse data and generating hypotheses about reaction behavior, even though the supplied evidence is not chemistry-specific.
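
As a concrete example of the lightweight modeling meant here, a mass-action rate law can be integrated numerically to predict how a hypothetical bimolecular reaction A + B -> C approaches completion; the rate constant and starting concentrations are invented for illustration.

```python
def simulate_bimolecular(k=0.8, a0=1.0, b0=0.6, dt=0.01, t_end=10.0):
    """Euler integration of A + B -> C under mass-action kinetics:
    d[C]/dt = k [A][B], with [A] = a0 - [C] and [B] = b0 - [C]."""
    c, t = 0.0, 0.0
    while t < t_end:
        c += k * (a0 - c) * (b0 - c) * dt
        t += dt
    return c

c_final = simulate_bimolecular()
```

Because B is limiting at 0.6, the product concentration approaches but never exceeds 0.6; a model this simple already organizes sparse kinetic observations into testable predictions.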

Mechanistic
42
Context
28
Throughput
72
First test time
80
First test cost
86
Replication
20
Practicality
83
Translatability
12

Assumptions: Assumes the modeling approach is generalizable from biological systems to chemical reaction datasets.

Missing evidence: No direct evidence in the item summary for reaction prediction, catalyst modeling, or materials synthesis data integration.

molecular modelling

computation method

Gap applicability

39

Molecular modelling in conjunction with experiments is also a very important component of the general approach.

Molecular modelling is a plausible tool for exploring parts of chemical space and relating structure to behavior. It could contribute to hypothesis generation around reaction mechanisms or catalyst-substrate interactions, but the provided evidence is very generic.

Mechanistic
38
Context
24
Throughput
62
First test time
76
First test cost
80
Replication
50
Practicality
50
Translatability
50

Assumptions: Assumes the modelling can be applied to small molecules, catalysts, or reaction intermediates rather than only nucleic acid systems.

Missing evidence: No direct evidence for reaction-space coverage, catalyst discovery, dataset building, or materials synthesis applications.

Inability to Image Materials Atom by Atom

Chemistry· 3 capabilities

Tools

2

Best

43%

Many current imaging techniques lack the resolution to image materials on an atomic scale, limiting our understanding of material properties at the most fundamental level.

Foundational Capabilities

Atom-by-Atom Imaging with Nanoscale Laser Ionization

Utilize nanoscale laser ionization techniques to achieve atom-by-atom imaging of materials, providing unprecedented resolution of atomic structures.

1 resource

Compact X-Ray Lasers

Make X-ray lasers more accessible and compact

2 resources

Increasing the Range of Biopolymer Genetic Systems

Use new enzymes to enable the use of synthetic genetic polymers. Note: don't make mirror life (https://www.science.org/doi/10.1126/science.ads9158).

1 resource

Candidate Tools

free-electron lasers

engineering method

Gap applicability

43

Very short bursts of X-rays of extremely high intensity are now accessible as a result of the construction of free-electron lasers, enabling in particular time-resolved studies of biochemical reactions.

This is the only candidate directly tied to high-intensity X-ray structural characterization, which is mechanistically relevant to the gap's imaging bottleneck. It also aligns with the gap capability around making compact X-ray lasers more accessible, although the summary supports time-resolved X-ray studies more clearly than true atom-by-atom materials imaging.

Mechanistic
72
Context
84
Throughput
28
First test time
12
First test cost
6
Replication
30
Practicality
18
Translatability
42

Assumptions: Assumes X-ray laser-based structural characterization is being considered as part of the imaging solution space for materials.

Missing evidence: No explicit evidence here for atomic-resolution materials imaging, compactness, materials-focused use, or atom-by-atom reconstruction performance.

Gap applicability

31

The most significant new data have come from X-ray crystallography of four-way DNA junctions... Ultimately crystallography provides the gold standard for structural analysis.

X-ray crystallography is a direct structural characterization method and is relevant to high-resolution structure determination. However, the supplied evidence is centered on biomolecular junction structure rather than atom-by-atom imaging of general materials, so the fit is limited.

Mechanistic
58
Context
46
Throughput
22
First test time
20
First test cost
14
Replication
40
Practicality
30
Translatability
38

Assumptions: Assumes crystallographic structural methods are in scope as partial approaches to atomic-scale imaging.

Missing evidence: No evidence for applicability to non-crystalline materials, atom-by-atom imaging workflows, nanoscale laser ionization, or compact X-ray laser integration.

Synthetic Biology

5 gaps · 18 tool connections · best match 59%

Protein Design Has Been Limited to Static, Bio-mimetic Structures

Synthetic Biology· 5 capabilities

Tools

5

Best

58%

Protein engineering has largely focused on designing static structures that closely mimic natural proteins. This narrow approach limits the creation of truly novel or highly functional enzymes.

Foundational Capabilities

AI Enzyme Design and De Novo Design of Novel Protein Functions

Leverage generative AI to design new enzymes with novel functions by predicting active conformations and optimizing catalytic activity. Especially for complex redox reactions and reactions that go beyond classic biological catalysis.

2 resources

Biosynthesis of Inorganic Materials

Biosynthesis of C allotropes, e.g., (strong) polyynes and chiral CNTs; biosynthesis of Si allotropes, e.g., crystalline Si from SiO2 for photovoltaics.

1 resource

Enzyme Stabilization in Extreme Environments

Expand the operating range of biological enzymes to new solvents, enabling anhydrous enzymes, gas phase and vacuum biochemistry

4 resources

Non-Canonical Amino Acids

Expand the chemical diversity of proteins by incorporating non-canonical amino acids, thereby enabling functions beyond those accessible with natural amino acids.

2 resources

Protein Carpentry

Develop techniques for “protein carpentry” that allow for the precise construction and remodeling of protein structures to yield dynamic, functional enzymes.

2 resources

Candidate Tools

directed evolution

engineering method

Gap applicability

58

Directed evolution is an engineering method that improves biological tool performance by iteratively selecting functional protein variants. In the cited split fluorescent protein study, it was demonstrated as one of two approaches used to improve split fluorescent proteins, contributing to brighter split sfCherry3 variants.

The gap is about escaping static, bio-mimetic protein designs to reach new function, and directed evolution is a directly actionable way to search functional sequence space beyond initial rational designs. It is especially relevant when designed proteins need iterative optimization for catalytic activity or dynamic behavior.

Mechanistic
72
Context
74
Throughput
78
First test time
58
First test cost
52
Replication
9
Practicality
71
Translatability
11

Assumptions: Assumes the team can build or access a selectable or screenable assay for the desired novel protein function.

Missing evidence: No evidence here ties this item to enzyme design specifically, dynamic conformational design, or non-natural catalysis in this gap context.

PACE

engineering method

Gap applicability

52

PACE (Phage-Assisted Continuous Evolution) is an engineering method used in this study to evolve cryptochrome properties. In the cited work, it was applied to increase the dynamic range of the blue-light-dependent interaction between Arabidopsis thaliana CRY2 and BIC1.

PACE is a high-throughput directed-evolution platform, so it could help optimize designed proteins toward functions that are hard to obtain from static structure-first design alone. Its main value for this gap is accelerating iterative exploration of new functional variants once a suitable selection scheme exists.

Mechanistic
66
Context
58
Throughput
88
First test time
42
First test cost
36
Replication
9
Practicality
71
Translatability
11

Assumptions: Assumes the target protein function can be coupled to a phage-compatible continuous selection or enrichment readout.

Missing evidence: Provided evidence only shows use on cryptochrome interaction properties, not enzyme catalysis, de novo proteins, or dynamic enzyme conformations.

domain fusion

engineering method

Gap applicability

49

Domain fusion is a protein engineering method in which protein domains are fused or split to improve existing protein functions or create novel functions. In the supplied evidence, it is described as a general strategy for expanding CRISPR-Cas9 applications.

Domain fusion is an actionable construct pattern that can create proteins with new regulatory or functional architectures rather than only natural-like static folds. That makes it plausibly useful for protein carpentry-style efforts to build dynamic or composite functions.

Mechanistic
55
Context
63
Throughput
61
First test time
68
First test cost
66
Replication
9
Practicality
71
Translatability
11

Assumptions: Assumes modular fusion architectures are acceptable for the intended protein function.

Missing evidence: Evidence is general and comes from CRISPR-related context; no direct evidence here for enzyme creation, dynamic conformational control, or de novo catalytic function.

protein splitting

engineering method

Gap applicability

46

Protein splitting is a protein engineering method in which proteins are modified through domain fusion or splitting to improve existing functions or develop novel functions. In the provided evidence, it is discussed as a strategy relevant to expanding CRISPR-Cas9 applications.

Protein splitting is a concrete engineering strategy for introducing conditional assembly and non-natural functional architectures, which is directionally aligned with moving beyond static protein designs. It may support dynamic control or modular reconfiguration in protein carpentry workflows.

Mechanistic
53
Context
61
Throughput
58
First test time
63
First test cost
63
Replication
9
Practicality
71
Translatability
11

Assumptions: Assumes split-protein architectures are compatible with the target activity and can be reconstituted or regulated experimentally.

Missing evidence: No direct evidence here for use in enzyme design, de novo function generation, or stabilization in extreme environments.

AlphaFold3

computation method

Gap applicability

45

AlphaFold3 is a computational structure-prediction method used in the cited study to model the MagMboI–DNA complex. In that work, it was applied to infer interactions with the 5'-GATC-3' recognition sequence and to guide optimization of the photoactivatable endonuclease variant MagMboI-plus for top-down genome engineering.

The gap includes designing proteins with novel functions, and AlphaFold3 is at least directly relevant as a computational method for modeling structures and interaction geometries that can guide design iterations. It could help evaluate whether non-natural architectures or active-site arrangements are structurally plausible before experimental testing.

Mechanistic
48
Context
69
Throughput
72
First test time
74
First test cost
70
Replication
0
Practicality
59
Translatability
9

Assumptions: Assumes structure prediction is being used as a design-support tool rather than as a complete solution to dynamic function design.

Missing evidence: Evidence only supports protein-DNA complex modeling in one engineering case; no supplied evidence for de novo enzyme design, conformational ensemble prediction, or dynamic-state design.

Inability to Program Complex Organisms and Developmental Pathways

Synthetic Biology· 3 capabilities

Tools

5

Best

46%

Current genetic tools primarily enable modification of simple organisms. Programming more complex organisms and orchestrating entire developmental pathways remains a major challenge.

Foundational Capabilities

Developmental Biology Read-Write Platforms

Create integrated platforms that allow real-time monitoring, modeling, and precise manipulation of developmental processes, including via bioelectric and other novel control layers. Goals could include: genome encoding of symmetry (e.g., 2- to 8-fold radial), accurate size ratios, branching-pattern codes, and natural and synbio eutely and other counting mechanisms.

5 resources

Easier Genetic Access to Plants

Develop advanced technologies to facilitate genetic manipulation in plants, broadening the range of programmable organisms. 

6 resources

Map of Cell Adhesion Codes

Decode the molecular signals governing how cells communicate and organize to form complex 3D structures, a fundamental process in tissue formation, organ development, immune cell targeting, etc.

1 resource

Candidate Tools

Gap applicability

46

The far-red light-induced split Cre-loxP system (FISC system) is a multi-component optogenetic genome-engineering tool built from a bacteriophytochrome-based light-responsive system and split Cre recombinase. It enables far-red-light-controlled Cre-loxP recombination for non-invasive, spatiotemporally regulated genome engineering in living systems and mice.

This system directly supports spatiotemporally controlled genome engineering in living systems and mice, which is relevant to programming developmental processes in complex organisms. Its far-red light control could help impose patterned recombination with less invasive stimulation than shorter-wavelength systems.

Mechanistic
78
Context
72
Throughput
42
First test time
34
First test cost
31
Replication
9
Practicality
37
Translatability
6

Assumptions: Assumes controlled recombination is a useful intermediate capability for developmental programming.

Missing evidence: No direct evidence here for use in developmental pathway control, embryos, plants, or whole-organism morphogenesis.

Adeno-associated virus

delivery harness

Gap applicability

43

Adeno-associated virus (AAV) is a viral delivery harness used to package and express CRISPR genome-editing components in vivo. In the cited literature, AAV supports single-vector delivery when smaller Cas9 orthologues and their chimeric guide RNAs fit within AAV packaging constraints, enabling robust in vivo genome editing.

AAV is an actionable in vivo delivery harness for genetic payloads, which is a practical bottleneck when trying to program cells within complex organisms. The summary specifically supports delivery of compact genome-editing systems in vivo.

Mechanistic
58
Context
69
Throughput
48
First test time
63
First test cost
46
Replication
9
Practicality
71
Translatability
11

Assumptions: Assumes delivery into animal tissues is part of the gap's near-term bottleneck.

Missing evidence: No evidence here for developmental-stage targeting, embryo-wide patterning, plant use, or orchestration of multi-step developmental programs.

AAV-PA-Cre 3.0

delivery harness

Gap applicability

42

AAV-PA-Cre 3.0 is an adeno-associated viral delivery resource for the photoactivatable Cre recombinase 3.0 system, generated and validated for in vivo mouse applications. It delivers a blue-light-gated Cre/lox recombination system engineered for mammalian expression with reduced background recombination.

This combines in vivo AAV delivery with blue-light-gated Cre/lox recombination and reduced background recombination, offering a concrete way to impose spatial and temporal control over gene activation in mammalian settings. Such control is plausibly useful for probing or engineering developmental trajectories.

Mechanistic
66
Context
64
Throughput
39
First test time
45
First test cost
39
Replication
9
Practicality
71
Translatability
11

Assumptions: Assumes mouse-compatible inducible recombination is relevant to the complex-organism part of the gap.

Missing evidence: No direct evidence for developmental biology applications, embryonic systems, plant systems, or control of full developmental pathways.

Gap applicability

40

Adeno-associated virus delivery is a viral gene delivery harness used to deploy the far-red light-induced split-Cre recombinase (FISC) system in vivo. In the cited study, AAV delivery enabled implementation of optogenetically controlled genome engineering in living systems.

This is an in vivo AAV delivery route already used to deploy an optogenetically controlled genome-engineering system. Delivery is a concrete enabling layer for testing programmable control schemes in complex organisms.

Mechanistic
52
Context
66
Throughput
45
First test time
61
First test cost
45
Replication
9
Practicality
71
Translatability
11

Assumptions: Assumes the main need is a deployable in vivo delivery harness for controlled genome engineering.

Missing evidence: No direct evidence for developmental pathway programming, embryo targeting, plant transformation, or tissue-patterning performance.

HiRet

delivery harness

Gap applicability

31

HiRet is a lentiviral system for highly efficient retrograde gene transfer that targets specific neural circuits. It supports neural circuit-selective, stable transgene expression and has been used with optogenetic tools to manipulate neuronal activity and behavior.

HiRet offers circuit-selective stable gene delivery in neural systems, which could help with programming specific cell populations in a complex organism. It is more relevant to targeted access in nervous systems than to general developmental programming.

Mechanistic
47
Context
55
Throughput
36
First test time
42
First test cost
36
Replication
0
Practicality
59
Translatability
9

Assumptions: Assumes neural circuit access is a useful subproblem within programming complex organisms.

Missing evidence: No evidence here for developmental biology, non-neural tissues, embryos, plants, or multi-lineage developmental control.

Limited Microbial Hosts/Chassis Organisms

Synthetic Biology· 1 capabilities

Tools

3

Best

59%

Scientists are constrained to a small number of microbial hosts for bioproduction, limiting the diversity and efficiency of engineered biological systems. Expanding the repertoire of microbial hosts could unlock novel biochemical pathways, enabling the production of a wider array of biomolecules and improving the efficiency of biosynthetic processes. It is important to address any biosafety and biosecurity risks associated with developing such technologies.

Foundational Capabilities

New Microbial Chassis

Create platforms that simplify the process of identifying and adapting new microbial hosts, providing recipes for their use in synthetic biology applications. This could take advantage of biocontainment approaches to enhance safety.

1 resource

Candidate Tools

cell-free protein synthesis

engineering method

Gap applicability

59

Cell-free protein synthesis (CFPS) has been used as a transformative technology in synthetic biology, providing a programmable, scalable, and automation-compatible platform for biological engineering.

CFPS could help de-risk expansion into new microbial chassis by allowing parts, expression conditions, or pathway components to be screened in a programmable, automation-compatible setting before committing to full host engineering. That is relevant to the gap's need for recipes that simplify identifying and adapting new hosts.

Mechanistic
62
Context
66
Throughput
84
First test time
72
First test cost
48
Replication
50
Practicality
50
Translatability
50

Assumptions: Assumes CFPS is being used as a host-prototyping and part-screening workflow rather than as the final production platform.

Missing evidence: No direct evidence here that the cited CFPS use specifically supports discovery or adaptation of new microbial chassis.

dynamic regulation

engineering method

Gap applicability

35

Dynamic regulation is a metabolic engineering method that modulates gene expression over time to rebalance metabolic fluxes in response to changing cellular or fermentation conditions. It is used to build responsive cell factories rather than relying on fixed static control.

Dynamic regulation could plausibly make non-model microbes more usable as production hosts by rebalancing pathway expression and metabolic flux under changing growth or fermentation conditions. This addresses part of the host-adaptation bottleneck, though not host discovery itself.

Mechanistic
46
Context
58
Throughput
55
First test time
52
First test cost
46
Replication
9
Practicality
71
Translatability
11

Assumptions: Assumes the new chassis problem includes tuning unstable or poorly balanced expression in candidate hosts.

Missing evidence: No direct evidence in the item summary that this method has been applied in new or non-model microbial chassis.

mathematical modeling

computation method

Gap applicability

34

Mathematical modeling is a computational method used to guide the rational design of synthetic gene circuits. The cited literature also places it alongside live-cell imaging and within quantitative model systems used to study microbial drug resistance and spatial-temporal features of cancer in mammalian cells.

Mathematical modeling could support rational design of gene circuits and reduce trial-and-error when adapting constructs to candidate microbial hosts. It is lightweight to test and may help generate host-use recipes, but the evidence provided is generic rather than chassis-specific.

Mechanistic
34
Context
52
Throughput
78
First test time
86
First test cost
82
Replication
20
Practicality
83
Translatability
12

Assumptions: Assumes computational design support is considered actionable for chassis adaptation workflows.

Missing evidence: No direct evidence here for modeling specifically enabling identification, domestication, or biosafety design of new microbial hosts.

Lack of Applied Synthetic Biology Platforms

Synthetic Biology· 3 capabilities

Tools

3

Best

49%

Applied synthetic biology is underutilized in applications such as building sustainable food systems and repairing the environmental damage caused by conventional agriculture and industry. Despite advances in tools and chassis engineering, there are few robust platforms that translate synthetic biology into scalable, field-ready solutions. This includes not only the production of low-impact proteins and agricultural inputs but also bioremediation technologies for legacy pollutants—such as pesticide-laden soils, heavy metals, and nutrient runoff—that degrade ecosystems and constrain land use. A new generation of synthetic biology platforms is needed to address both sides of the problem: replacing harmful production methods and cleaning up their long-term consequences.

Foundational Capabilities

Bioremediation

Engineered microbes and plants for environmental remediation (e.g., superfund and landfill clean-up and mining) that can survive in toxic conditions and degrade diverse classes of pollutants while concentrating and mining valuable elements. Biocontainment risks need to be addressed.

2 resources

Ethical Food Production Interventions

Implement alternative strategies that enhance the ethical aspects of food production without compromising cost-performance.

1 resource

Synthetic Meat Production via Fungus Engineering

Develop synthetic meat production processes based on fungus engineering to create affordable, ethical, and sustainable protein sources.

1 resource

Candidate Tools

dynamic regulation

engineering method

Gap applicability

49

Dynamic regulation is a metabolic engineering method that modulates gene expression over time to rebalance metabolic fluxes in response to changing cellular or fermentation conditions. It is used to build responsive cell factories rather than relying on fixed static control.

The gap emphasizes robust, scalable applied platforms, and dynamic regulation directly addresses a common bottleneck in cell-factory engineering by rebalancing metabolic flux over time instead of relying on static expression. That could plausibly support more resilient production platforms for sustainable proteins or agricultural inputs.

Mechanistic
72
Context
58
Throughput
66
First test time
55
First test cost
52
Replication
9
Practicality
71
Translatability
11

Assumptions: Assumes the platform need includes engineered microbial production systems, not only field-deployed remediation organisms.

Missing evidence: No supplied evidence ties this method to specific food, agricultural, fungal, plant, or bioremediation deployments.

gene activation

engineering method

Gap applicability

42

Gene activation is described in the supplied evidence as one of several CRISPR/Cas-based genome-engineering tools used in microbial biotechnology. The evidence supports its inclusion within the CRISPR/Cas toolbox, but does not specify a particular activator design, target organism, or quantitative performance.

Programmable gene activation could be part of an applied synthetic biology platform by enabling tunable upregulation of pathways in microbial biotechnology. This is plausibly relevant to building production strains or remediation chassis, but the supplied evidence is generic.

Mechanistic
56
Context
46
Throughput
68
First test time
58
First test cost
56
Replication
9
Practicality
71
Translatability
11

Assumptions: Assumes CRISPR-based transcriptional control in microbes is within scope of the desired platform layer.

Missing evidence: No specific activator design, organism, field-use context, pollutant target, or production application is provided.

Fluorescent-protein-based methods are assay approaches discussed for evaluating the efficacy of CRISPR-based genome-editing systems in bacteria. The available evidence supports their use as fluorescence-based functional readouts of bacterial editing performance, but does not specify particular reporter proteins, construct architectures, or assay workflows.

Applied platforms need practical build-test cycles, and a fluorescence-based assay for bacterial CRISPR efficacy could help screen editing performance during chassis development. It supports platform engineering indirectly rather than solving the end-use application itself.

Mechanistic
34
Context
44
Throughput
62
First test time
63
First test cost
61
Replication
9
Practicality
71
Translatability
11

Assumptions: Assumes the gap includes enabling assays for engineering microbial platforms.

Missing evidence: No evidence connects this assay to environmental strains, fungi, plants, scale-up, or field conditions.

Poor Scalability of Bioreactors Limits Biomanufacturing

Synthetic Biology· 2 capabilities

Tools

2

Best

52%

Current bioreactor designs are inefficient when scaling up production processes, limiting the ability to produce bioproducts at industrial scales.

Foundational Capabilities

Modularization of Bioreactors

Develop modular bioreactor designs that can be easily scaled up, offering flexibility and improved efficiency in industrial bioproduction.

3 resources

Novel Materials and Sterilization of Bioreactors 

Replace steam-sterilized steel bioreactors with something more scalable.

2 resources

Candidate Tools

cell-free protein synthesis

engineering method

Gap applicability

52

Cell-free protein synthesis (CFPS) has been used as a transformative technology in synthetic biology, providing a programmable, scalable, and automation-compatible platform for biological engineering.

CFPS is directly relevant to biomanufacturing and is described as scalable and automation-compatible, so it could help bypass some scale-up bottlenecks tied to conventional bioreactor operation. It is not a bioreactor redesign tool, but it is a plausible alternative production platform to test where reactor scalability is the limiting step.

Mechanistic
62
Context
58
Throughput
82
First test time
56
First test cost
38
Replication
30
Practicality
50
Translatability
40

Assumptions: Assumes the gap can be partially addressed by replacing some reactor-dependent production workflows rather than only improving vessel hardware.

Missing evidence: No supplied evidence on industrial-scale yields, cost competitiveness, sterility advantages, or fit to modular/non-steel bioreactor goals.

dynamic regulation

engineering method

Gap applicability

33

Dynamic regulation is a metabolic engineering method that modulates gene expression over time to rebalance metabolic fluxes in response to changing cellular or fermentation conditions. It is used to build responsive cell factories rather than relying on fixed static control.

Dynamic regulation can rebalance metabolism in response to changing fermentation conditions, which could reduce performance losses that often emerge during scale-up. This addresses process robustness more than reactor hardware scalability, so the fit is limited but plausible.

Mechanistic
34
Context
46
Throughput
55
First test time
52
First test cost
50
Replication
9
Practicality
71
Translatability
11

Assumptions: Assumes part of the scale-up problem is biological instability across fermentation conditions, not only vessel design and sterilization.

Missing evidence: No supplied evidence that this method improves large-scale bioreactor performance, modularization, materials, or sterilization.

Computation

8 gaps · 17 tool connections · best match 41%

AI is Still Narrow in its Reasoning and Planning

Computation· 5 capabilities

Tools

6

Best

33%

Current AI systems exhibit narrow reasoning and planning capabilities compared to human cognition. Broadening AI training methods to include holistic, brain-inspired architectures and cognitive frameworks can advance general intelligence (flagging that there is an AI safety risk here).

Foundational Capabilities

Bayesian Cognitive Architectures and Program Synthesis

Implement Bayesian cognitive models and probabilistic programming and program synthesis techniques to endow AI with more human-like reasoning, planning, and decision-making capabilities.

7 resources

Developmentally Realistic AI Training

Train AI models on data similar to what developing humans or animals actually experience.

3 resources

Holistic Brain-Inspired Architectures

Develop AI architectures that are inspired by the human brain's structure and functionality, enabling more flexible and general reasoning and planning.

6 resources

Improve Out of Context Reasoning in Large Language Models

Improve the ability of large language models or systems based on them to synthesize disparate information outside their context windows to formulate new scientific hypotheses

2 resources

Mimic Human Evolution in Silico

Use evolutionary algorithms and generative models to simulate human evolution processes in computational systems, driving more robust and adaptable AI.

3 resources

Candidate Tools

Gap applicability

33

This Bayesian computational approach is a data-analysis method developed to improve prediction of split protein behavior by contextualizing errors inherent to experimental procedures. In the cited study, it was applied to pooled, sequencing-based screening data from split Cre recombinase constructs generated with optogenetic dimers, enabling comprehensive analysis of split sites across the protein.

The gap explicitly highlights Bayesian cognitive architectures, and this item is at least a concrete Bayesian inference method. It could plausibly contribute analytical machinery for uncertainty-aware model selection or error modeling in broader reasoning systems, though the supplied evidence is from split-protein screening rather than general AI reasoning.
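To illustrate the kind of uncertainty-aware error modeling described above, here is a minimal Beta-binomial sketch. It is not the cited study's method; the prior, the counts, and the function name are all hypothetical.

```python
# Minimal sketch: Beta-binomial posterior for a split-site "activity" rate
# estimated from noisy pooled-sequencing counts. All numbers are hypothetical.

def posterior_activity(k_active: int, n_reads: int, alpha: float = 1.0, beta: float = 1.0):
    """Return posterior mean and standard deviation of an activity rate.

    Prior: Beta(alpha, beta); likelihood: Binomial(n_reads, rate).
    """
    a, b = alpha + k_active, beta + (n_reads - k_active)
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    return mean, var ** 0.5

# A site with few reads stays uncertain and shrinks toward the prior,
# which is the error-contextualizing behavior described above.
low_data = posterior_activity(2, 3)
high_data = posterior_activity(200, 300)
```

The point of the sketch is the shrinkage behavior: sparsely sampled constructs carry wide posteriors, so downstream model selection can weight them accordingly.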

Mechanistic
42
Context
34
Throughput
62
First test time
66
First test cost
68
Replication
9
Practicality
71
Translatability
11

Assumptions: Assumes Bayesian inference methods can transfer from biological data analysis to AI system design or evaluation.

Missing evidence: No evidence here that the method supports planning, program synthesis, world modeling, or human-like reasoning tasks.

AlphaFold3

computation method

Gap applicability

28

AlphaFold3 is a computational structure-prediction method used in the cited study to model the MagMboI–DNA complex. In that work, it was applied to infer interactions with the 5'-GATC-3' recognition sequence and to guide optimization of the photoactivatable endonuclease variant MagMboI-plus for top-down genome engineering.

This is a concrete AI computation method, so it is more context-relevant than most biological tools in the list. However, the provided evidence only supports structure prediction for protein-DNA modeling, not broader reasoning or planning, so applicability to the stated gap is limited.

Mechanistic
24
Context
46
Throughput
71
First test time
58
First test cost
55
Replication
0
Practicality
59
Translatability
9

Assumptions: Assumes the gap can include transferable lessons from specialized AI architectures.

Missing evidence: No evidence that AlphaFold3 improves general reasoning, planning, cognitive architectures, or out-of-context synthesis.

GUBS

engineering method

Gap applicability

27

GUBS (Genomic Unified Behavior Specification) is a domain-specific, rule-based declarative language for behavioral specification of synthetic biological devices. It represents device programs as behavioral specifications for open systems rather than as complete closed-system descriptions.

GUBS is a rule-based declarative specification language with theorem-proving-like compilation, which gives it a plausible connection to symbolic specification and program-like reasoning. That said, the supplied evidence is for synthetic biology device specification, not AI planning or cognitive architectures.

Mechanistic
27
Context
22
Throughput
57
First test time
63
First test cost
64
Replication
20
Practicality
83
Translatability
12

Assumptions: Assumes declarative behavioral specification ideas may transfer to AI reasoning system design.

Missing evidence: No direct evidence for use in AI, planning benchmarks, probabilistic reasoning, or general intelligence research.

Fernando's model

computation method

Gap applicability

24

Fernando's model is a computational model of a synthetic molecular circuit designed to mimic Hebbian learning in a neural network architecture. It is described as one of the earliest models in this area to use Hill equation-based regulatory modeling, and computational analysis indicated that a reinforcement effect can be obtained with appropriate parameter choices.

The item is a computational model explicitly framed as mimicking Hebbian learning in a neural-network-like architecture, which is at least conceptually aligned with brain-inspired AI. But the evidence provided is about a synthetic molecular circuit model, not an actionable AI training or planning method.
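To make the Hebbian-reinforcement idea concrete, here is a minimal Hill-equation-based weight-update sketch. This is a generic toy model, not Fernando's published circuit; all rate constants and parameter values are invented for illustration.

```python
def hill(x, K=0.5, n=2):
    """Hill activation: fraction of maximal response at input x."""
    return x ** n / (K ** n + x ** n)

def simulate_weight(pre, post, w0=0.1, k_plus=1.0, k_minus=0.2, dt=0.01, steps=1000):
    """Euler-integrate dw/dt = k_plus*hill(pre)*hill(post) - k_minus*w.

    Correlated pre/post activity (the Hebbian motif) reinforces the
    "synaptic" weight w, while first-order decay opposes it.
    """
    w = w0
    for _ in range(steps):
        w += dt * (k_plus * hill(pre) * hill(post) - k_minus * w)
    return w

w_corr = simulate_weight(1.0, 1.0)    # strong correlated activity
w_uncorr = simulate_weight(0.1, 0.1)  # weak activity: weight decays
```

With correlated high activity the weight grows toward k_plus*hill(pre)*hill(post)/k_minus; with weak activity it decays, which is the reinforcement effect the model class is built around.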

Mechanistic
25
Context
20
Throughput
55
First test time
61
First test cost
63
Replication
9
Practicality
71
Translatability
11

Assumptions: Assumes brain-inspired learning motifs are relevant to the gap even when instantiated in another domain.

Missing evidence: No evidence for performance on reasoning, planning, developmental training, or general AI tasks.

This tool is a reinforcement learning-based computational method for multi-intersection traffic signal scheduling that incorporates visible light communication (VLC) queuing, request, and response behaviors. It is described as part of a traffic management system integrating VLC localization services with learning-driven signal control.

The gap includes interest in planning and agent behavior, and this item is a concrete reinforcement-learning method for decentralized control. Still, the supplied evidence is narrowly tied to traffic signal scheduling with VLC signaling, so it does not directly address broad reasoning or human-like cognition.
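The reinforcement-learning core of such a scheduler can be sketched as tabular Q-learning on a toy one-intersection problem. This is a deliberately simplified stand-in, not the cited VLC system; the state space, reward, and parameters are hypothetical.

```python
import random

def train_signal_policy(episodes=2000, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning for a toy single-intersection signal controller.

    State: which approach (0 or 1) currently has the longer queue.
    Action: which approach gets the green light.
    Reward: +1 for serving the longer queue, -1 otherwise.
    """
    rng = random.Random(seed)
    Q = [[0.0, 0.0], [0.0, 0.0]]
    state = rng.randint(0, 1)
    for _ in range(episodes):
        if rng.random() < eps:                      # epsilon-greedy exploration
            action = rng.randint(0, 1)
        else:
            action = max((0, 1), key=lambda a: Q[state][a])
        reward = 1.0 if action == state else -1.0
        next_state = rng.randint(0, 1)              # new arrivals are random here
        Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
        state = next_state
    return Q

Q = train_signal_policy()
```

After training, the learned policy prefers serving whichever approach has the longer queue, which is the decentralized scheduling behavior the method family targets.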

Mechanistic
21
Context
18
Throughput
49
First test time
52
First test cost
50
Replication
9
Practicality
59
Translatability
9

Assumptions: Assumes narrow planning/control methods are still relevant as partial components for broader AI planning research.

Missing evidence: No evidence for transfer beyond traffic control, no cognitive framework, and no support for general reasoning.

Gap applicability

18

Learning-driven traffic signal control is an engineering method for multi-intersection traffic management that integrates visible light communication localization services with reinforcement learning-based signal scheduling. It uses VLC-derived queuing, request, and response behaviors to control traffic signals in simulated multi-intersection settings.

This is an actionable engineering method using reinforcement learning for decentralized scheduling, so it has a weak but plausible connection to planning methods. However, the evidence is highly domain-specific to traffic management and does not show broader reasoning or brain-inspired generalization.

Mechanistic
19
Context
16
Throughput
46
First test time
49
First test cost
47
Replication
9
Practicality
59
Translatability
9

Assumptions: Assumes domain-specific RL control methods may offer limited transferable planning ideas.

Missing evidence: No evidence for human-like reasoning, cognitive architectures, developmental training, or out-of-context synthesis.

Biological Life is Our Only Working Example of Complex Evolved Computation

Computation· 1 capability

Tools

4

Best

41%

Biological systems are the sole example we have of complex, evolved computation. Replicating this level of complexity in digital systems could unlock entirely new computational paradigms.

Foundational Capabilities

Virtual Life Enabled by Learning and GPUs

Develop virtual life simulations powered by machine learning and modern GPU infrastructure to mimic the complex, evolved computations seen in biological systems.

2 resources

Candidate Tools

GUBS

engineering method

Gap applicability

41

GUBS (Genomic Unified Behavior Specification) is a domain-specific, rule-based declarative language for behavioral specification of synthetic biological devices. It represents device programs as behavioral specifications for open systems rather than as complete closed-system descriptions.

GUBS is a concrete computational framework for specifying behaviors in open biological systems, which is at least directionally relevant to building or formalizing life-like computational systems. It could help structure programmable agent behaviors in artificial-life-style simulations or synthetic biological device design, though the evidence does not show direct use for virtual life or GPU-based open-ended evolution.

Mechanistic
42
Context
34
Throughput
55
First test time
62
First test cost
68
Replication
20
Practicality
83
Translatability
12

Assumptions: Assumes a behavioral specification language is useful for the gap's goal of replicating evolved computation, even if not directly tied to the named virtual life resources.

Missing evidence: No supplied evidence of use in artificial life, open-ended evolution, GPU simulation, or learning-enabled virtual organisms.

Gap applicability

29

The Bayesian optimization framework is a computational method built from high-throughput Lustro measurements to guide control of blue light-sensitive optogenetic systems. It uses data-driven learning, uncertainty quantification, and experimental design to identify light induction conditions for multiplexed regulation in Saccharomyces cerevisiae.

This is an actionable learning-and-search method that uses uncertainty-aware optimization, which is broadly relevant to exploring large design spaces in complex adaptive systems. It could plausibly inform search strategies for artificial-life experiments, but the provided evidence is specific to optogenetic control in yeast rather than virtual life or evolved computation.
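The uncertainty-aware search loop described above can be sketched as a simple discrete Bayesian-optimization routine. This is not the Lustro framework; the surrogate model, the upper-confidence-bound rule, and the dose-response objective are illustrative assumptions.

```python
import math
import random

def bayes_opt_discrete(objective, candidates, rounds=30, noise_sd=0.1, seed=0):
    """Minimal uncertainty-aware search over discrete experimental settings.

    Keeps a running posterior mean per candidate (flat prior, known noise)
    and picks the next "experiment" by an upper-confidence-bound rule.
    """
    rng = random.Random(seed)
    n = {c: 0 for c in candidates}
    mean = {c: 0.0 for c in candidates}

    def ucb(c):
        if n[c] == 0:
            return float("inf")          # try every condition at least once
        return mean[c] + 2 * noise_sd / math.sqrt(n[c])

    for _ in range(rounds):
        c = max(candidates, key=ucb)
        y = objective(c) + rng.gauss(0, noise_sd)   # one noisy "experiment"
        n[c] += 1
        mean[c] += (y - mean[c]) / n[c]             # running posterior mean
    return max(candidates, key=lambda c: mean[c])

# Hypothetical dose-response: an intermediate light intensity is optimal.
best = bayes_opt_discrete(lambda x: -(x - 0.6) ** 2, [0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
```

The loop concentrates experiments on promising conditions while keeping enough exploration to correct noisy early measurements, which is the experimental-design behavior described above.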

Mechanistic
31
Context
22
Throughput
58
First test time
46
First test cost
38
Replication
9
Practicality
59
Translatability
9

Assumptions: Assumes the gap can benefit from general adaptive search and experimental-design methods despite the biological optogenetics context.

Missing evidence: No supplied evidence for use in agent-based simulation, open-ended evolution, GPU-native environments, or digital life.

graph neural networks

computation method

Gap applicability

24

Graph neural networks (GNNs) are one of the fastest growing classes of machine learning models. They are of particular relevance for chemistry and materials science, as they directly work on a graph or structural representation of molecules and materials and therefore have full access to all relevant information required to characterize materials.

GNNs are a concrete machine-learning model class for structured systems, so they could plausibly support representations of interacting components in artificial-life-style computational settings. However, the supplied evidence only supports chemistry and materials applications, not evolved computation or virtual life.
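A minimal sketch of the message-passing motif that underlies GNNs, assuming simple mean aggregation over a toy path graph; real GNN layers add learned weight matrices and nonlinearities.

```python
def message_passing_layer(features, adjacency):
    """One round of mean-aggregation message passing (a minimal GNN motif).

    features: list of per-node feature vectors (lists of floats).
    adjacency: dict mapping node index -> list of neighbor indices.
    Each node's new feature is the mean of its own and its neighbors' features.
    """
    new_features = []
    for i, feat in enumerate(features):
        neighborhood = [features[j] for j in adjacency.get(i, [])] + [feat]
        dim = len(feat)
        new_features.append(
            [sum(v[d] for v in neighborhood) / len(neighborhood) for d in range(dim)]
        )
    return new_features

# Toy molecule-like graph: three nodes in a path 0-1-2.
feats = [[1.0], [0.0], [0.0]]
adj = {0: [1], 1: [0, 2], 2: [1]}
step1 = message_passing_layer(feats, adj)   # node 0's signal reaches node 1
step2 = message_passing_layer(step1, adj)   # after a second round, node 2
```

Stacking rounds lets information propagate along the graph structure, which is why GNNs have "full access" to the relational information in a molecular graph.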

Mechanistic
24
Context
20
Throughput
61
First test time
57
First test cost
55
Replication
0
Practicality
0
Translatability
0

Assumptions: Assumes structured relational modeling is relevant to the gap's computational paradigm goals.

Missing evidence: No supplied evidence for artificial life, biological computation emulation, agent simulation, or open-ended evolutionary dynamics.

Gap applicability

21

Coupling cell-free protein synthesis (CFPS) with machine learning has enabled predictive optimization of genetic constructs and biosynthetic systems.

This item is an actionable ML-guided optimization approach for complex biological design spaces, which is weakly relevant to searching for life-like computational behaviors. The evidence is limited to CFPS optimization and does not show a connection to virtual life or evolved digital computation.

Mechanistic
22
Context
16
Throughput
49
First test time
52
First test cost
44
Replication
0
Practicality
0
Translatability
0

Assumptions: Assumes predictive optimization methods may transfer conceptually to artificial-life search problems.

Missing evidence: No supplied evidence for simulation environments, evolutionary computation, agent learning, or GPU-enabled virtual life.

AI Could Go Rogue

Computation· 8 capabilities

Tools

3

Best

33%

The potential for AI systems to behave unpredictably or dangerously (“go rogue”) is a critical concern. Ensuring safe and controllable AI architectures is essential for reliable operation. See also:

• https://www.lesswrong.com/posts/fAW6RXLKTLHC3WXkS/shallow-review-of-technical-ai-safety-2024
• https://deepmind.google/discover/blog/taking-a-responsible-path-to-agi/

Foundational Capabilities

Automate AI Interpretability

Use AI to enhance the interpretability of other AI systems, creating tools that automatically explain and verify AI behavior.

2 resources

Guaranteed Safe AI Architectures

Develop and implement AI architectures with separable, auditable world models; where safety can be specified in terms of the state space of the model; and proposed AI outputs come with proofs that the output does not leave the safe region of the world model’s state space.

2 resources

Hardware Governance

Develop hardware-level governance mechanisms to enforce safety and compliance in AI systems, ensuring robust operational constraints. This includes tamper-proof hardware. 

13 resources

Mitigating (Indirect) Data Poisoning

Robust strategies for data integrity, anomaly detection, and defensive training protocols to mitigate situations where indirect data poisoning could lead to intentionally misaligned AI systems (not unlike “sleeper agents”).

1 resource

Non-Agentic AI Scientists

Build non-agentic AI scientists that act as oracles but don’t have long-term states or goals

Secure and Privacy-Preserving Local AI Enclaves

Digital fortresses that enable sensitive data to be processed in a controlled, privacy-preserving environment. 

2 resources

Understand AI Psychology without Assuming Human-Like Psychology

Observing emergent AI decision-making processes and cognitive patterns with fewer anthropomorphic assumptions.

1 resource

Understanding Neural Design Principles of Social Instincts

Study the neural basis of human social instincts to inform AI design, ensuring that AI systems can safely interpret and emulate human social behavior.

3 resources

Candidate Tools

GUBS

engineering method

Gap applicability

33

GUBS (Genomic Unified Behavior Specification) is a domain-specific, rule-based declarative language for behavioral specification of synthetic biological devices. It represents device programs as behavioral specifications for open systems rather than as complete closed-system descriptions.

This is a rule-based declarative specification language with theorem-proving-like compilation, which is at least directionally relevant to specifying and checking allowed system behaviors. That could plausibly inform work on auditable or constrained AI architectures, but the provided evidence is from synthetic biology device specification rather than AI safety systems.

Mechanistic
42
Context
28
Throughput
55
First test time
62
First test cost
70
Replication
20
Practicality
83
Translatability
12

Assumptions: Assumes cross-domain reuse of formal behavioral specification ideas is acceptable for this computation gap.

Missing evidence: No evidence here that GUBS has been applied to AI systems, safety proofs, interpretability, hardware governance, or rogue-behavior prevention.

Gap applicability

24

The feed-forward and feedback control technique is an engineering method proposed for astrocytes that manipulates intracellular IP3 to stabilize Ca2+ concentration. It is described in the context of Ca2+-based molecular communications nanonetworks, where controlled Ca2+ dynamics are intended to support more reliable signaling behavior.

The item explicitly uses feed-forward and feedback control to stabilize system behavior, which is conceptually relevant to keeping powerful systems within safe operating regimes. However, the supplied evidence is specific to astrocyte Ca2+ regulation and does not show applicability to AI architectures or governance.
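The feed-forward-plus-feedback idea can be sketched on a one-state toy plant. This is generic control theory, not the cited astrocyte Ca2+ model; the plant equation, disturbance, and gains are all hypothetical.

```python
def regulate(setpoint=1.0, a=1.0, d=0.3, kp=4.0, dt=0.01, steps=2000):
    """Feed-forward plus proportional feedback on a one-state plant.

    Plant: dx/dt = -a*x + u + d, with a known constant disturbance d.
    The feed-forward term cancels d and the steady-state loss a*setpoint;
    feedback then corrects any residual deviation.
    """
    x = 0.0
    for _ in range(steps):
        u_ff = a * setpoint - d          # feed-forward: exact if the model is exact
        u_fb = kp * (setpoint - x)       # feedback: corrects modeling error / drift
        x += dt * (-a * x + u_ff + u_fb + d)
    return x

x_final = regulate()
```

The combination is what stabilizes the state at the setpoint: feed-forward handles the predictable load, feedback handles everything the model missed.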

Mechanistic
31
Context
20
Throughput
46
First test time
55
First test cost
63
Replication
9
Practicality
71
Translatability
11

Assumptions: Assumes generic control-theoretic stabilization methods may transfer conceptually to AI safety work.

Missing evidence: No evidence of use in machine learning, AI oversight, formal guarantees, or hardware-level enforcement.

Gap applicability

20

Switched differential equations were developed as a computational framework to model oscillatory behavior of circadian clock cells in the Madeira cockroach. The model was used to interpret RNAi perturbation phenotypes and to support a hypothesis of coupled morning and evening oscillators linked by mutual inhibition.

State-switching dynamical models are loosely relevant to analyzing systems that can transition between safe and unsafe regimes. But the evidence only supports a biological oscillatory modeling use case, so the link to rogue AI prevention is weak.
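A minimal sketch of how mutual inhibition between two oscillators yields anti-phase locking, using generic phase oscillators rather than the cited switched-equation clock model; all parameters are illustrative.

```python
import math

def couple_oscillators(w1=1.0, w2=1.0, k=0.5, dt=0.01, steps=5000):
    """Two phase oscillators with mutually inhibitory (repulsive) coupling.

    d(theta_i)/dt = w_i + k*sin(theta_i - theta_j): each oscillator pushes
    the other away in phase, so the pair settles into anti-phase — a toy
    stand-in for coupled morning/evening oscillators. Returns the final
    phase gap modulo 2*pi.
    """
    t1, t2 = 0.0, 0.1
    for _ in range(steps):
        d1 = w1 + k * math.sin(t1 - t2)
        d2 = w2 + k * math.sin(t2 - t1)
        t1 += dt * d1
        t2 += dt * d2
    return (t1 - t2) % (2 * math.pi)

phase_gap = couple_oscillators()
```

With repulsive coupling the in-phase state is unstable and the phase gap converges to pi, i.e., the two units peak half a cycle apart.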

Mechanistic
22
Context
16
Throughput
42
First test time
58
First test cost
68
Replication
9
Practicality
71
Translatability
11

Assumptions: Assumes hybrid dynamical-system modeling could be considered for AI behavior regime analysis.

Missing evidence: No evidence for AI behavior modeling, safety verification, interpretability, or operational control.

AI Could Be Misused

Computation· 2 capabilities

Tools

2

Best

36%

The risk of AI being misused—whether through malicious intent or unintended consequences—necessitates robust safeguards and countermeasures.

Foundational Capabilities

Decentralized Training

Maintain decentralized control over large neural network training, like SETI@home or Wikipedia for large language models, to equalize access. However, this also introduces some AI proliferation and control related risks.

1 resource

Jailbreak Resistance

Develop techniques to ensure AI systems are resistant to "jailbreaking" or circumvention of built-in safety and control protocols.

1 resources

Candidate Tools

mathematical modeling

computation method

Gap applicability

36

Mathematical modeling is a computational method used to guide the rational design of synthetic gene circuits. The cited literature also places it alongside live-cell imaging and within quantitative model systems used to study microbial drug resistance and spatial-temporal features of cancer in mammalian cells.

Mathematical modeling is the only candidate with a directly computational design framing that could plausibly support analysis of safeguard strategies or decentralized training dynamics. It is still a weak link because the supplied evidence is about synthetic gene circuits rather than AI safety, misuse prevention, or jailbreak resistance.
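As a concrete example of the gene-circuit modeling style described above, here is a minimal Euler simulation of a mutual-repression toggle switch — a standard textbook motif, not a model from the cited literature; the parameters are illustrative.

```python
def toggle_switch(u0, v0, alpha=4.0, n=2, dt=0.01, steps=5000):
    """Euler simulation of a two-gene mutual-repression toggle switch.

    du/dt = alpha/(1 + v**n) - u
    dv/dt = alpha/(1 + u**n) - v
    Bistable for these parameters: the circuit latches into a high-u
    or high-v state depending on initial conditions.
    """
    u, v = u0, v0
    for _ in range(steps):
        du = alpha / (1 + v ** n) - u
        dv = alpha / (1 + u ** n) - v
        u += dt * du
        v += dt * dv
    return u, v

u_on = toggle_switch(2.0, 0.0)   # latches high-u
v_on = toggle_switch(0.0, 2.0)   # latches high-v
```

This is the kind of rational-design question such models answer: whether a proposed circuit topology and parameter regime actually produce the intended qualitative behavior (here, bistable memory) before anything is built in the lab.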

Mechanistic
34
Context
42
Throughput
72
First test time
86
First test cost
84
Replication
20
Practicality
83
Translatability
12

Assumptions: Assumes general computational modeling methods may transfer to AI system risk analysis.

Missing evidence: No evidence that this method has been used for AI misuse, safety safeguards, adversarial robustness, or decentralized training governance.

Fernando's model

computation method

Gap applicability

20

Fernando's model is a computational model of a synthetic molecular circuit designed to mimic Hebbian learning in a neural network architecture. It is described as one of the earliest models in this area to use Hill equation-based regulatory modeling, and computational analysis indicated that a reinforcement effect can be obtained with appropriate parameter choices.

Fernando's model is at least a computational model involving neural-network-like learning behavior, so it could weakly inform conceptual work on control or reinforcement effects. However, the evidence provided does not connect it to AI misuse mitigation, safeguards, or jailbreak resistance.

Mechanistic
18
Context
28
Throughput
55
First test time
70
First test cost
76
Replication
9
Practicality
71
Translatability
11

Assumptions: Assumes a learning-model concept may have limited relevance to AI control research.

Missing evidence: No evidence of use for safety alignment, misuse prevention, adversarial defense, or operational AI systems.

Our Compute Stack is Insecure but Fundamentally Doesn’t Have To Be

Computation· 1 capability

Tools

1

Best

39%

Insecure software can lead to vulnerabilities that undermine the reliability and safety of computational systems. Formal methods and rigorous verification are needed to synthesize secure software.

Foundational Capabilities

Formally Verified Software Synthesis

Utilize formal verification techniques to synthesize software that is provably secure, reducing vulnerabilities and enhancing system robustness. Traditional programs in these areas have tended to assume that AI capabilities are saturating, leaving important avenues neglected. Efforts could break down into several key components:

• Specification generation and validation tools
• Code and proof generation systems
• Tools that integrate formal verification into existing engineering workflows
• Practical formalization structures that facilitate real-world adoption

19 resources

Candidate Tools

GUBS

engineering method

Gap applicability

39

GUBS (Genomic Unified Behavior Specification) is a domain-specific, rule-based declarative language for behavioral specification of synthetic biological devices. It represents device programs as behavioral specifications for open systems rather than as complete closed-system descriptions.

GUBS is the only candidate with an explicit formal specification language and theorem-proving-like compilation workflow, which is mechanistically adjacent to specification generation and validation for verified software synthesis. It could plausibly inform tooling patterns for declarative specification and proof-oriented compilation, though the supplied evidence is from synthetic biology rather than secure software engineering.
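To show the flavor of rule-based declarative specification with inference-driven compilation, here is a tiny forward-chaining sketch. It is not GUBS syntax; the rule format and the example facts are invented for illustration.

```python
def forward_chain(facts, rules):
    """Tiny forward-chaining engine over declarative rules.

    rules: list of (premises, conclusion) pairs, premises a frozenset of atoms.
    Repeatedly applies any rule whose premises all hold until a fixed point —
    a toy analogue of deriving a device's behavior from a declarative spec.
    """
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

# Hypothetical device spec: blue light plus an expressed sensor implies
# dimer formation, which in turn implies promoter activation.
rules = [
    (frozenset({"blue_light", "sensor_expressed"}), "dimer_formed"),
    (frozenset({"dimer_formed"}), "promoter_on"),
]
state = forward_chain({"blue_light", "sensor_expressed"}, rules)
```

The declarative style matters for verification: because behavior is derived from rules rather than encoded procedurally, the same rule base can be checked against a safety property (e.g., "promoter_on never holds without blue_light").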

Mechanistic
62
Context
28
Throughput
55
First test time
46
First test cost
58
Replication
20
Practicality
83
Translatability
12

Assumptions: Assumes cross-domain transfer from a declarative specification language to software verification workflows is relevant.

Missing evidence: No evidence here that GUBS has been used for software security, formal verification of conventional software, or integration into software engineering toolchains.

Proving Math Theorems is Challenging for Both Humans and AI

Computation· 1 capability

Tools

1

Best

36%

Both human mathematicians and current AI systems struggle with proving complex math theorems. Enhancing theorem proving through interactive and automated methods could push the boundaries of mathematical reasoning.
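For readers unfamiliar with interactive theorem provers, a goal-and-proof pair looks like the following Lean 4 fragment; proof-search or reinforcement-learning systems propose such proof terms and tactics, and the prover's kernel accepts or rejects them.

```lean
-- A toy goal of the kind an interactive prover checks step by step.
theorem add_zero_right (n : Nat) : n + 0 = n := rfl

-- Closing a goal by citing a library lemma; the kernel verifies the term.
example (a b : Nat) : a + b = b + a := Nat.add_comm a b
```

The accept/reject signal from the kernel is exactly the feedback loop the capability above proposes to close with reinforcement learning.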

Foundational Capabilities

Reinforcement Learning from Interactive Theorem Prover Feedback

Close the loop between AI and human-guided interactive theorem provers by using reinforcement learning to refine proofs based on feedback from proof assistants.

9 resources

Candidate Tools

GUBS

engineering method

Gap applicability

36

GUBS (Genomic Unified Behavior Specification) is a domain-specific, rule-based declarative language for behavioral specification of synthetic biological devices. It represents device programs as behavioral specifications for open systems rather than as complete closed-system descriptions.

GUBS is the only candidate with explicit structured evidence connecting it to theorem-proving-like compilation, via a scheme similar to automated theorem proving. Its rule-based declarative specification could plausibly inform formal reasoning workflows relevant to interactive proof systems.

Mechanistic
42
Context
28
Throughput
45
First test time
62
First test cost
72
Replication
20
Practicality
83
Translatability
12

Assumptions: Assumes a formal specification language with theorem-proving-like compilation is relevant to theorem proving tooling even though the original application is synthetic biology device specification.

Missing evidence: No evidence that GUBS was used for mathematical theorem proving, proof assistant integration, reinforcement learning, or human-AI interactive proof feedback.

2 unmatched gaps

Silicon-Based Electronics Face Fundamental Limits in Dimensional Scaling

Computation· 2 capabilities
No tool matches

For over five decades, silicon-based CMOS technology has driven unprecedented progress in computing and information technology through dimensional scaling following Moore's Law. This miniaturization has led to exponential increases in transistor density, performance, and energy efficiency. However, as transistor channel dimensions shrink below a few nanometers, silicon and conventional bulk semiconductors (e.g., SiGe, III-V materials) are encountering insurmountable fundamental physical and material limits (heat dissipation, short-channel effects, etc.).

Security RiskSafety Risk

Foundational Capabilities

Atomically Thin 2D Semiconductors for Integrated Circuits

Two-dimensional (2D) semiconductors, especially transition metal dichalcogenides (TMDs), have potential to shrink transistors beyond the scaling limits of silicon. Unlike silicon, which suffers from degraded performance at channel thickness < 12 nm, 2D materials are "dangle-bond-free," meaning their surfaces are naturally stable and less prone to defects. This allows them to maintain high carrier mobility and low leakage currents even at atomically thin channel thickness. Wafer-scale growth of monolayer TMD material has been demonstrated and prototype transistors have shown feasibility of outperforming conventional transistors. However, their industrial adoption requires optimization for industrial manufacturing, integration with semiconductor foundry processes, and standardized methods for characterizing material properties and device performance.

5 resources

Next-Gen 3D Integration of Discrete Components

New logic and memory technologies based on CNTs or other structures, 3D integration with fine-grained connectivity, and new architectures for computation immersed in memory

3 resources

Candidate Tools

No candidate tools matched this gap. This may be because the gap describes a domain (e.g. astrophysics, social science) where the current biotech-focused toolkit has limited relevance, or because the gap requires capability types not yet well-represented in the database.

Silicon Compute is Massively Energy Intensive Compared to Biological Brains

Computation· 11 capabilities
No tool matches

Modern deep learning and general computation demand enormous energy, limiting scalability and sustainability. Addressing energy efficiency is critical for the next generation of computing platforms, though it also supports potential proliferation of advanced AI and should be advanced alongside AI safety and governance considerations.

Foundational Capabilities

AI-Based Design of AI Chips

Use AI to design the next generation of hardware for AI.

2 resources

Denser and More Robust Longer Term Data Storage

Enable new modalities of long term, ultra dense data storage such as via DNA polymers

1 resource

Living Computers

Biologically inspired or living computer systems that use biological components to perform computation at very low energy levels.

1 resource

Lower Energy Architectures for Deep Learning

Develop novel hardware architectures optimized for deep learning and artificial intelligence that dramatically reduce energy consumption compared to current systems.

2 resources

Millivolt Switching

Develop switching technologies that operate at millivolt levels, significantly reducing the energy required for signal processing and computation.

2 resources

Neuromorphic Systems

Leverage brain-inspired neuromorphic hardware to perform computation more efficiently, emulating the low-energy operation of biological neural networks.

2 resources

Next-Gen 3D Integration of Discrete Components

New logic and memory technologies based on CNTs or other structures, 3D integration with fine-grained connectivity, and new architectures for computation immersed in memory

3 resources

Probabilistic Computing Hardware

Hardware for probabilistic computation, which can perform certain tasks more energy-efficiently by embracing uncertainty.

3 resources

Reversible Computing

Create computing architectures that use reversible logic, theoretically allowing computation with near-zero energy dissipation by avoiding information loss.

3 resources

Superconducting Virtual Brains

Explore superconducting hardware to achieve brain-inspired computing with drastically reduced energy consumption, scaling to large networks.

6 resources

Thermodynamic Computing

Computing paradigms based on thermodynamic principles, where computation is driven by energy gradients and can operate at lower energy costs by harnessing reversible and low-energy processes.

1 resource

Candidate Tools

No candidate tools matched this gap. This may be because the gap describes a domain (e.g. astrophysics, social science) where the current biotech-focused toolkit has limited relevance, or because the gap requires capability types not yet well-represented in the database.

Neuroscience

4 gaps · 17 tool connections · best match 58%

We Can’t Take High-Resolution Movies of or Intervene in Brain Computation at the Single Neuron Level

Neuroscience· 4 capabilities

Tools

6

Best

45%

Capturing the dynamics of large brain networks at single-neuron resolution in vivo is extremely challenging. Advanced imaging methods that record fast, high-resolution activity without destructive intervention are required to unravel the complex interplay of neuronal circuits in real time.

Foundational Capabilities

In-Vivo Connectomics

Create methods to map neuronal connectivity in living brains, capturing the dynamic interactions between neurons at a single-cell level.

1 resource

In-Vivo Optical Transparency of Brain Tissue

Develop techniques that render brain tissue optically transparent in vivo, allowing deeper and higher-resolution imaging of neural networks without invasive sectioning.

1 resource

Low Energy Multiphoton Brain Imaging

Develop a novel physical pathway for multiphoton fluorescence generation that enables optical sectioning and excitation at red wavelengths with simple systems like continuous wave lasers.

1 resource

Novel Fast-Scanning Microscopy of Brain Cortex

Develop innovative microscopy techniques that enable rapid, high-resolution imaging of neuronal networks in vivo at single–neuron resolution (e.g., Light Beads Microscopy)

1 resource

Candidate Tools

Gap applicability

45

Lattice lightsheet microscopy (LLSM) is a modified light-sheet imaging platform used for three-dimensional optogenetic activation with subcellular resolution. In the cited 2022 study, it enabled high-spatiotemporal-resolution manipulation of cellular behavior, including membrane ruffling and guided cell migration.

This is an actionable optical platform with explicit high-spatiotemporal-resolution, three-dimensional light control, which is relevant to the gap's need to image and intervene in neural activity with fine spatial precision. It is a stronger fit than generic microscopy because the supplied evidence includes subcellular-resolution optogenetic manipulation, though not brain-specific use.

Mechanistic
72
Context
38
Throughput
62
First test time
22
First test cost
16
Replication
20
Practicality
71
Translatability
11

Assumptions: Assumes the platform could be adapted from cell-behavior studies to neural tissue imaging or perturbation.

Missing evidence: No supplied evidence for in vivo brain use, single-neuron recording performance, imaging depth in brain tissue, or demonstrated neuronal activity readout.

Gap applicability

41

Light-sheet microscopy, also termed single plane illumination microscopy, is an in vivo fluorescence imaging method tailored to larval research and embryonic imaging. The supplied evidence indicates that it can capture the full course of embryonic development from egg to larva and has been coupled with optogenetic perturbation to study Wnt signaling during embryogenesis.

Light-sheet microscopy is directly an in vivo fluorescence imaging method and the supplied evidence also notes coupling to optogenetic perturbation, matching the gap's combined recording-and-intervention framing. Its volumetric imaging orientation makes it plausibly relevant to large-network observation, but the evidence is from embryos and larvae rather than mammalian brain circuits.

Mechanistic
61
Context
34
Throughput
66
First test time
30
First test cost
22
Replication
20
Practicality
71
Translatability
11

Assumptions: Assumes general light-sheet principles could transfer to neural circuit imaging contexts.

Missing evidence: No supplied evidence for single-neuron resolution in brain tissue, fast neural activity imaging, adult mammalian in vivo use, or deep-brain performance.

Gap applicability

40

We review recently developed functional mapping methods that use optogenetic single-point stimulation in the rodent brain and employ cellular electrophysiology, evoked motor movements, voltage sensitive dyes (VSDs), calcium indicators, or functional magnetic resonance imaging (fMRI) to assess activity.

Voltage-sensitive dye imaging is, at minimum, a direct activity-readout method, and the supplied evidence places it in rodent brain functional mapping, closer to the gap context than most other items. Its hint of high temporal resolution is relevant to capturing fast computation, even though the provided evidence does not establish single-neuron resolution.

Mechanistic
58
Context
52
Throughput
55
First test time
46
First test cost
42
Replication
25
Practicality
25
Translatability
25

Assumptions: Assumes the activity-mapping use could be extended toward finer-scale recordings depending on implementation.

Missing evidence: No supplied evidence for single-neuron resolution, volumetric imaging, chronic in vivo use, or compatibility with simultaneous intervention.

Gap applicability

31

NIR light-based imaging is an optical assay and photoregulation approach that uses near-infrared light to sense, and in some cases modulate, specific cellular events in living systems. The cited review describes these strategies as enabling real-time interrogation of deep tissues with subcellular accuracy.

The supplied evidence describes near-infrared strategies for real-time interrogation of deep tissues and in some cases photoregulation, which is directionally relevant to the gap's need for less destructive deep-brain sensing and intervention. It is still a broad modality class rather than a validated single-neuron brain method in the provided evidence.

Mechanistic
49
Context
36
Throughput
44
First test time
34
First test cost
30
Replication
9
Practicality
59
Translatability
9

Assumptions: Assumes deep-tissue optical access is a key bottleneck for this gap.

Missing evidence: No supplied evidence for neuronal activity imaging, single-neuron resolution, in vivo brain demonstrations, or specific compatible reporters/actuators.

Gap applicability

27

Down-conversion phosphors are material-based light-delivery harnesses explored for remote optogenetic control of neuronal activity in living animals. They are used in wireless, less invasive optical stimulation strategies to control cellular functions in the brain and other tissues.

These phosphors are also described as wireless, less invasive light-delivery tools for remote optogenetic control of neuronal activity in living animals, making them a plausible intervention-enabling component for this neuroscience gap. The provided evidence is limited to delivery and does not establish fine spatial precision or recording capability.

Mechanistic
33
Context
48
Throughput
31
First test time
28
First test cost
24
Replication
9
Practicality
59
Translatability
9

Assumptions: Assumes remote optical stimulation is useful even without matched recording evidence.

Missing evidence: No supplied evidence for single-neuron targeting, compatibility with high-resolution brain imaging, or demonstrated performance in the specific in vivo contexts implied by the gap.

up-conversion phosphors

delivery harness

Gap applicability

27

Up-conversion phosphors are material-based light-delivery harnesses used to enable remote optogenetic control of neuronal activity in living animals. They are being explored as wireless, less invasive approaches for controlling cellular functions in the brain and other tissues.

These are specifically described as less invasive light-delivery approaches for remote optogenetic control of neuronal activity in living animals, so they plausibly address the intervention side of the gap. They do not address high-resolution recording directly and the supplied evidence does not support single-neuron targeting precision.

Mechanistic
33
Context
48
Throughput
31
First test time
28
First test cost
24
Replication
9
Practicality
59
Translatability
9

Assumptions: Assumes the gap includes minimally invasive optical actuation as well as imaging.

Missing evidence: No supplied evidence for single-neuron selectivity, integration with imaging, brain-region depth performance, or robust in vivo replication.

Most of the Human Brain Remains Inaccessible

Neuroscience· 7 capabilities

Tools

5

Best

39%

Large portions of the living human brain are difficult to observe and modulate with current technologies. Safer, noninvasive, or minimally invasive methods are needed to capture real-time brain state information. One funding program dedicated to advancing this space is run by ARIA (the UK science R&D agency), which launched the Scalable Neural Interfaces opportunity space to support a new suite of tools for interfacing with the human brain at scale.

Foundational Capabilities

Fully Noninvasive Neural Read–Write Technologies

Use noninvasive modalities—such as ambient field magnetoencephalography (MEG) with quantum gradiometers, sono–magnetic tomography, optical interference methods, and ultrasound-modulated optical tomography—to record and modulate brain activity without surgery.

10 resources

Gene Expression Control of Cellular Transplants in the Brain

Technologies to control gene expression in single neurons post-transplantation. Light-based or acoustic methods could offer the precision in neuro-activation needed for axonal guidance and integration, enabling cellular transplantation for neuroregeneration, circuitry reconstruction, and related applications.

3 resources

Less Invasive Access to the Brain Through Very Tiny Skull Holes

Access brain fluids through far smaller holes than currently possible, to facilitate less invasive delivery of drugs or devices to intracranial or intraventricular spaces.

1 resource

Micro- to Nano-Scale Minimally Invasive BCI Transducers

Harmless nanoscale transducers to record or modulate brain activity that can be delivered minimally invasively.

6 resources

Minimally Invasive Ultrasound–Based Whole Brain Computer Interface

Develop a minimally invasive ultrasound-based platform that can interface programmably with the whole human brain. This approach leverages ultrasound’s ability to penetrate deep tissues, offering scalable imaging and modulation with minimal invasiveness.

2 resources

Noninvasive Blood–Based Measurement of Brain Biomarkers

Use peripheral sampling methods to indirectly monitor brain molecular biomarkers. One approach involves using ultrasound to transiently open the blood–brain barrier, releasing engineered protein markers into the bloodstream for detection.

3 resources

Precision Immunology for Indirect Read/Write Access to the Central Nervous System

Use peripheral immune cells as both reporters and interventions to decode and influence the brain. This approach leverages the adaptive immune system’s inherent function as sentinel and archivist for etiological and pathological processes in other end organs (including the brain).

1 resource

Candidate Tools

Adeno-associated virus

delivery harness

Gap applicability

39

Adeno-associated virus (AAV) is a viral delivery harness used to package and express CRISPR genome-editing components in vivo. In the cited literature, AAV supports single-vector delivery when smaller Cas9 orthologues and their chimeric guide RNAs fit within AAV packaging constraints, enabling robust in vivo genome editing.

Several gap capabilities involve genetically encoded neural reporters or actuators in the brain, and AAV is a directly actionable in vivo delivery platform with explicit CNS and neural circuit targeting hints. It could support first tests of minimally invasive brain interfacing concepts that depend on transgene expression, although it does not itself solve noninvasive sensing or modulation.

Mechanistic
42
Context
58
Throughput
46
First test time
55
First test cost
42
Replication
9
Practicality
71
Translatability
11

Assumptions: Assumes the intended approach for this gap includes gene delivery of reporters or actuators to brain cells.

Missing evidence: No supplied evidence on noninvasive delivery route, whole-brain coverage, human safety in this specific use, or compatibility with the named neural interface modalities.

Gap applicability

33

Upconversion nanoparticles (UCNPs) are a light-conversion delivery harness that couples near-infrared (NIR) illumination to modules that normally require shorter-wavelength activation. In the cited studies, UCNPs were paired with a UV-responsive triangular DNA nano sucker and with Opto-CRAC to enable NIR-triggered nucleic acid amplification and NIR control of Ca2+-dependent signaling.

The gap includes interest in minimally invasive or noninvasive modulation, and UCNPs can couple deeper-penetrating NIR input to shorter-wavelength-responsive modules. That makes them a plausible enabling component for brain-facing optical control strategies where direct short-wavelength delivery is limiting.

Mechanistic
34
Context
36
Throughput
40
First test time
48
First test cost
38
Replication
31
Practicality
71
Translatability
11

Assumptions: Assumes a proposed neural interface would use light-responsive modules and could tolerate nanoparticle-based coupling.

Missing evidence: No supplied evidence for skull penetration performance, brain delivery, neural specificity, safety in the human brain, or real-time whole-brain readout capability.

Gap applicability

29

Chemogenetic methods of transmembrane receptors are a set of approaches for cell-specific regulation of receptor signaling. A 2022 review describes them as methods to control receptor functions in cells, with some strategies applied in living animals to reveal signaling in target cells.

This is a directly actionable control method for cell-specific receptor signaling and could plausibly contribute to the gap's read-write objective on the modulation side. It is most relevant where minimally invasive control of transplanted or targeted neural cells is acceptable.

Mechanistic
28
Context
31
Throughput
45
First test time
52
First test cost
50
Replication
9
Practicality
71
Translatability
11

Assumptions: Assumes the gap solution can rely on genetically specified target cells and exogenous ligand administration rather than fully noninvasive physical modalities alone.

Missing evidence: Only review-level summary is provided; no specific evidence here for brain-wide access, human use, noninvasive delivery, or compatibility with the named ARIA interface modalities.

BphP1-Q-PAS1 optogenetic pair

multi component switch

Gap applicability

22

BphP1-Q-PAS1 is a near-infrared-light-inducible optogenetic interaction pair composed of BphP1 and Q-PAS1. It enables light-controlled protein regulation, including transcription-related applications and modification of chromatin epigenetic state, and it can be combined with blue-light LOV-domain systems with negligible spectral crosstalk.

The gap explicitly mentions light-based control of gene expression in brain cellular transplants, and this near-infrared optogenetic pair is a usable actuator for protein regulation and transcription-related control. NIR responsiveness is directionally relevant for deeper tissue access than visible-light-only systems.

Mechanistic
24
Context
27
Throughput
33
First test time
34
First test cost
36
Replication
9
Practicality
37
Translatability
6

Assumptions: Assumes the use case is control of engineered transplanted or targeted cells rather than direct non-genetic whole-brain interfacing.

Missing evidence: No supplied evidence for operation through skull, in neurons in vivo, human applicability, or integration with delivery systems for the brain.

Gap applicability

20

The Q-PAS1-LOV integrated optogenetic tool is a dual-color, multi-component optogenetic system that combines Q-PAS1 with a blue-light-activatable LOV-domain-based module in a single platform. It enables tridirectional protein targeting with independent control by near-infrared and blue light.

This multi-component optogenetic system offers dual-color control and could be useful for precise regulation of engineered brain-cell transplants or multiplexed neural control experiments. It is only a partial fit because the gap emphasizes scalable, safer access to the living human brain, not just intracellular control logic.

Mechanistic
22
Context
24
Throughput
29
First test time
28
First test cost
31
Replication
9
Practicality
37
Translatability
6

Assumptions: Assumes a genetically encoded, light-driven control strategy is acceptable for a subproblem within the gap.

Missing evidence: No supplied evidence for brain delivery, neuronal use in vivo, skull-penetrant performance, or suitability for noninvasive human neural interfaces.

Most Brain Circuitry is Still Invisible

Neuroscience· 3 capabilities

Tools

3

Best

58%

Understanding the complete wiring of the brain at single–cell resolution, along with detailed molecular annotations, is critical for revealing how neural circuits support learning, memory, and behavior. Current technologies are prohibitively expensive and lack scalability, limiting our ability to link molecular composition with circuit connectivity and to understand the alterations present in brain disorders. This gap makes the diagnosis, treatment, and prevention of many brain disorders fundamentally more difficult. Beyond these biomedical applications, maps of brain circuitry could play a fundamental role in grounding principles of safety for brain-like AI systems.

Initiatives like the NIH BRAIN Initiative’s transformative projects (the BRAIN Initiative Cell Atlas Network (BICAN), the BRAIN Initiative Connectivity Across Scales (BRAIN CONNECTS) Network, and the Armamentarium for Precision Brain Cell Access) represent important efforts to illuminate foundational principles governing the circuit basis of behavior and to inform new approaches to treating human brain disorders by radically enhancing our understanding of brain cell types and the tools needed to access them (The BRAIN Initiative® 2.0: From Cells to Circuits, Toward Cures).

Foundational Capabilities

Faster Electron Microscopy for Connectomics

Develop new physical detection methods for electron microscopy that improves scalability of visualization of brain circuitry.

1 resource

Rapid High–Resolution Neural Circuit Visualization

Combine advanced imaging methods—such as synchrotron X–ray microscopy and expansion microscopy—to rapidly and scalably image neural circuits in both small and large brain regions. This method promises high spatial resolution with faster throughput.

2 resources

Scalable, Anatomically and Molecularly Dense Brain Mapping Technology

Develop a scalable technology that can map the brain’s wiring at the single–cell level and link molecular and circuit properties.

4 resources

Candidate Tools

Gap applicability

58

Single-cell RNA sequencing (scRNA-seq) is a transcriptomic assay method that measures RNA molecules in individual cells by sequencing-based transcript detection. In the cited application, it detected FLiCRE transcripts within the endogenous transcriptome, enabling simultaneous readout of cell type and calcium activation history.

The gap explicitly calls for linking circuit structure to molecular cell identity, and scRNA-seq is a directly actionable single-cell molecular annotation method. While it does not solve connectivity mapping by itself, it plausibly addresses the molecular-annotation half of the bottleneck in scalable brain mapping workflows.

Mechanistic
72
Context
68
Throughput
78
First test time
62
First test cost
42
Replication
9
Practicality
71
Translatability
51

Assumptions: Assumes the intended use is as a companion assay for molecular annotation of mapped neurons rather than a standalone connectomics solution.

Missing evidence: No supplied evidence shows direct use for brain-wide circuit mapping, neuronal tissue workflows, or integration with connectivity readouts.

Gap applicability

49

Spatial transcriptomics is a transcriptomic assay method identified in the supplied review as a recent methodological advance. In that evidence, it is presented as part of a broader technology set that enables easier and more accurate visualization of cell behavior and qualitative and quantitative analysis of cell-cell interactions.

Spatial transcriptomics is an actionable assay for adding molecular information while preserving tissue location, which is relevant to the gap's need to connect molecular composition with circuit organization. It could plausibly complement anatomical mapping approaches in brain tissue.

Mechanistic
66
Context
61
Throughput
70
First test time
50
First test cost
36
Replication
9
Practicality
71
Translatability
11

Assumptions: Assumes spatial molecular annotation in tissue sections is useful alongside separate connectivity measurements.

Missing evidence: Supplied evidence does not show neuroscience-specific use, synaptic-resolution performance, or direct integration with connectomics pipelines.

Gap applicability

30

Lattice lightsheet microscopy (LLSM) is a modified light-sheet imaging platform used for three-dimensional optogenetic activation with subcellular resolution. In the cited 2022 study, it enabled high-spatiotemporal-resolution manipulation of cellular behavior, including membrane ruffling and guided cell migration.

This is at least an actionable high-resolution 3D optical imaging platform, so it could plausibly support rapid visualization experiments on neural structures or tool development for circuit imaging. However, the supplied evidence is about optogenetic manipulation, not scalable connectomics or molecularly dense brain mapping.

Mechanistic
34
Context
38
Throughput
41
First test time
28
First test cost
18
Replication
20
Practicality
71
Translatability
11

Assumptions: Assumes the platform could be repurposed from optogenetic imaging/manipulation toward neural imaging workflows.

Missing evidence: No supplied evidence for brain tissue, connectome-scale imaging, single-synapse mapping, or molecular annotation capability.

Current “Model Systems” for Brain Function are Not Representative of the Real Human Brain

Neuroscience· 7 capabilities

Tools

3

Best

43%

Current in vivo and in vitro models often fail to capture human brain function. Innovative model systems—including digital reconstructions, embodied simulations, and new biological models—are needed.

Foundational Capabilities

Data–Driven AI Models of Brain Systems

Use machine learning to construct functional digital emulations of human and primate brain systems. These models, built at varying levels of fidelity, support automated interpretability and data-driven discovery.

7 resources

Datasets that Quantitatively Capture the Full Space of Behavior

Use rich data collection and machine learning to correlate natural behavior with neural activity in animals and humans and decipher the “grammar” and subcomponents of bodily movement.

3 resources

Digital Replicas of the Whole Brain of a Model Organism

Construct a digital replica of a model organism’s brain (e.g., C. elegans) that accurately recapitulates neuronal activity and behavior. 

6 resources

Embodied Testbeds for Neuro–Cognitive Models

Develop virtual models (e.g., a virtual fly, rodent) to simulate neuro–cognitive processes in a controlled, embodied environment.

3 resources

Ex Vivo Human Brains

Utilize live human brains maintained ex vivo (“in a vat”) to study disease and drug responses more accurately.

1 resource

In Vitro Models of the Human Cortex

Develop an in-dish model that mimics the structure and function of the human cortex, providing a controllable platform for studying cortical development, function, and disease.

2 resources

Mapping the Hypothalamus and Brainstem

Systematically map how specific brain regions like the hypothalamus and brainstem (arguably the “steering subsystem” of the brain) drive innate behaviors and learning signals, and understand their role in obesity, chronic pain, fertility, inflammation, and other disorders.

7 resources

Candidate Tools

live-cell imaging

assay method

Gap applicability

43

Live-cell imaging is an assay method used in neurons in culture and brain slices to observe dynamic cellular processes in real time. The cited studies applied it to visualize minute-scale membrane PI(3,4,5)P3 fluctuations and microtubule retrograde flow during neuronal polarization-related dynamics.

The gap explicitly calls for better biological models of human brain function, and this assay is directly evidenced in neurons in culture and brain slices. It could help characterize whether new in vitro or ex vivo neural models show dynamic cellular behaviors that are more brain-like, even though it does not itself create a new model system.

Mechanistic
42
Context
58
Throughput
45
First test time
72
First test cost
48
Replication
31
Practicality
83
Translatability
12

Assumptions: Assumes the user needs characterization tools for candidate human-relevant neural models, not only model-building technologies.

Missing evidence: No evidence here that it works in human organoids, ex vivo human brain, or whole-brain functional model validation.

mathematical modeling

computation method

Gap applicability

29

Mathematical modeling is a computational method used to guide the rational design of synthetic gene circuits. The cited literature also places it alongside live-cell imaging and within quantitative model systems used to study microbial drug resistance and spatial-temporal features of cancer in mammalian cells.

The gap includes digital reconstructions and embodied simulations, and mathematical modeling is a general computational tool that can support quantitative model construction. However, the supplied evidence is about synthetic gene circuits and other non-brain systems, so the link is weak and mainly methodological.

Mechanistic
28
Context
24
Throughput
62
First test time
80
First test cost
82
Replication
20
Practicality
83
Translatability
12

Assumptions: Assumes broad computational modeling methods are acceptable as support tools for digital brain-model development.

Missing evidence: No direct evidence for neuroscience, brain-scale modeling, behavior-neural datasets, or human-brain representativeness.

Gap applicability

16

Molecular dynamics simulation is a computational method for modeling atomistic conformational dynamics of proteins and analyzing residue fluctuations and vibrational behavior. In the cited studies, it was used as a noninvasive approach to validate dynamic behavior and to compare PAS-domain dynamics across functional groups.

This is a usable computational method for simulating molecular dynamics, which could in principle support mechanistic submodels inside larger biological brain models. But the provided evidence is limited to protein conformational analysis and does not address the core bottleneck of building representative human brain model systems.

Mechanistic
16
Context
10
Throughput
34
First test time
55
First test cost
46
Replication
20
Practicality
83
Translatability
12

Assumptions: Assumes molecular-scale simulation might be used as a component within broader neurobiological model-building workflows.

Missing evidence: No evidence for neural systems, organoids, ex vivo brain, circuit modeling, behavior, or human-brain functional emulation.

Physiology and Medicine

6 gaps · 16 tool connections · best match 58%

We Can’t Safely and Controllably Deliver Complex Molecular Payloads to the Targets We Want in the Body

Physiology and Medicine· 5 capabilities

Tools

6

Best

58%

Current in-vivo delivery systems (viral vectors, nanoparticles, microchips) face challenges such as off-target accumulation and inefficiency, particularly in delivering therapies to the brain. Novel delivery approaches are needed to improve targeting and performance.

Foundational Capabilities

Delivery Systems that “De-Target” the Liver and Macrophages

New delivery mechanisms are needed to target the desired tissues and avoid uptake by hepatocytes and macrophages. Off-target uptake necessitates higher dosing, increasing toxicity risks.

4 resources

Improved & New Gene Therapy Delivery Vehicles

Develop improved gene therapy vectors that can be manufactured at scale, as well as entirely novel gene therapy delivery vehicles.

3 resources

Map of Cell-Type or Tissue Specific Targets

Systematically investigate cell-type- and tissue-specific targets equivalent to what ASGPR is for the liver.

2 resources

Map the Determinants of Cellular Migration in the Body

Map the determinants of cellular migration in the body to block metastasis and control cellular delivery.

1 resource

Microelectronics for Tissue-Targeted Delivery

Utilize ultra-small microelectronic devices delivered via endovascular routes to target specific tissues (including the brain), bypassing the limitations of viral vectors and bulky implants.

3 resources

Candidate Tools

focused ultrasound

delivery harness

Gap applicability

58

Focused ultrasound (FUS) is a noninvasive physical delivery and control modality that penetrates deep biological tissues and induces confined mild hyperthermia to activate heat-sensitive genetic modules. In the cited 2023 study, FUS was coupled to heat-sensitive CRISPR, CRISPRa, and CRISPRi systems to enable remote spatiotemporal regulation of genome and epigenome function in live cells and animals.

Focused ultrasound is a concrete noninvasive targeting modality with explicit evidence for deep-tissue and brain-relevant use, including BBB-opening-related hints and remote spatiotemporal control in animals. That makes it one of the few supplied items that directly addresses the gap's need for safer, more controllable in vivo targeting beyond standard viral or nanoparticle delivery alone.

Mechanistic
82
Context
90
Throughput
45
First test time
42
First test cost
28
Replication
22
Practicality
71
Translatability
11

Assumptions: Assumes the gap includes physical targeting/control methods, not only carrier chemistry.

Missing evidence: No direct evidence here for payload class breadth, quantitative off-target reduction, or liver/macrophage de-targeting performance.

Ex vivo stem cell modification and re-transplantation is a clinical delivery workflow in which a patient's own stem cells are isolated, genetically modified outside the body with CRISPR-based approaches, and returned to the same patient. The supplied evidence identifies this format as common among current clinical CRISPR trials.

This workflow can bypass some in vivo vector-targeting bottlenecks by moving payload installation outside the body and then returning engineered autologous cells. It is plausibly relevant where the therapeutic payload can be carried by transplanted cells rather than needing direct tissue transduction in situ.

Mechanistic
61
Context
58
Throughput
34
First test time
31
First test cost
22
Replication
22
Practicality
71
Translatability
11

Assumptions: Assumes cell-based replacement or cell-mediated therapy is acceptable for part of the gap.

Missing evidence: No evidence here for targeting non-hematopoietic tissues, brain delivery, or avoidance of liver/macrophage uptake.

Gap applicability

37

Macrophage transient horizontal gene transfer is a cell-based siRNA delivery harness in which macrophages loaded with siRNA lipoplexes transfer siRNA to cancer cells during in vitro coculture. In the reported system, transferred CIB1-siRNA reduced tumorsphere growth and lowered CIB1 and KI67 mRNA expression in MDA-MB-468 human breast cancer cells.

This is a specific cell-based delivery strategy for transferring siRNA cargo from macrophages to target cells, so it is at least mechanistically aligned with complex payload delivery. It may be useful as a starting point for exploring living-cell carriers instead of conventional particles or viral vectors.

Mechanistic
49
Context
41
Throughput
36
First test time
46
First test cost
44
Replication
22
Practicality
71
Translatability
11

Assumptions: Assumes cell-mediated local transfer could be adapted beyond the reported coculture cancer setting.

Missing evidence: Evidence is limited to in vitro coculture and siRNA transfer; no in vivo targeting, safety, brain access, or de-targeting data are provided.

The self-sufficient subcutaneous push-button-controlled cellular implant is an implantable delivery harness powered by repeated gentle finger pressure on the overlying skin. Finger-pressure actuation deforms an embedded piezoelectric membrane, generates low-voltage electrical energy, and triggers rapid biopharmaceutical release from engineered electro-sensitive human cells.

This implant offers controllable, on-demand release of biopharmaceuticals from engineered cells, which addresses the controllability part of the gap. It could be relevant when localized or depot-style delivery is acceptable instead of free systemic distribution.

Mechanistic
44
Context
39
Throughput
29
First test time
33
First test cost
24
Replication
22
Practicality
71
Translatability
11

Assumptions: Assumes an implant-based local delivery solution is within scope for the gap.

Missing evidence: No evidence here for tissue-specific homing after release, brain targeting, systemic biodistribution, or reduced off-target accumulation.

Gap applicability

29

Microencapsulated mammalian cells are a delivery harness in which engineered mammalian cells carrying closed-loop gene networks are implanted into mice. The available evidence supports this platform as a means to deploy synthetic mammalian gene circuits in vivo.

Microencapsulated engineered cells provide an in vivo deployment format for complex genetic payloads and closed-loop therapeutic programs without requiring direct transduction of the final target tissue. That could partially mitigate some delivery bottlenecks for secreted or circuit-based therapies.

Mechanistic
38
Context
34
Throughput
31
First test time
37
First test cost
30
Replication
22
Practicality
71
Translatability
11

Assumptions: Assumes the intended payload can act from an implanted cellular depot.

Missing evidence: No supplied evidence for tissue targeting, brain access, systemic distribution control, or avoidance of liver/macrophage uptake.

Gap applicability

28

Spatial-temporal control of bioactive drug release is an engineering method for delivering developmental cytokines, growth factors, and other bioactive factors in defined spatial and temporal patterns. In the cited 2019 review, it is described as a well-developed approach that makes biomimetic release strategies more feasible for tooth regeneration.

Controlled release methods are directly relevant to improving when and where payloads are exposed, which is one component of safer delivery. The supplied evidence supports spatiotemporal patterning of bioactive factor release, though in a regenerative medicine context rather than systemic in vivo targeting.

Mechanistic
35
Context
28
Throughput
40
First test time
41
First test cost
39
Replication
22
Practicality
71
Translatability
11

Assumptions: Assumes local release engineering is considered part of the delivery gap.

Missing evidence: No direct evidence for whole-body targeting, complex nucleic acid or gene therapy payloads, brain delivery, or de-targeting of liver/macrophages.
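
Each tool-gap connection above reports eight subscores alongside a headline applicability percentage. As an illustrative sketch of how such subscores could be rolled into a single number, the snippet below computes a weighted mean over the eight criteria. The equal weighting and the `gap_applicability` helper are assumptions for demonstration, not the Gap Map's published formula.

```python
# Hypothetical roll-up of the eight subscores into one headline number.
# Equal weights are an illustrative assumption, not the Gap Map's formula.

SUBSCORES = ["mechanistic", "context", "throughput", "first_test_time",
             "first_test_cost", "replication", "practicality", "translatability"]

def gap_applicability(scores, weights=None):
    """Weighted mean of subscores, each on a 0-100 scale."""
    if weights is None:
        weights = {k: 1.0 for k in SUBSCORES}  # equal weighting (assumption)
    total = sum(weights[k] for k in SUBSCORES)
    return sum(scores[k] * weights[k] for k in SUBSCORES) / total

# Subscores from the drug-release entry above:
scores = {"mechanistic": 35, "context": 28, "throughput": 40,
          "first_test_time": 41, "first_test_cost": 39,
          "replication": 22, "practicality": 71, "translatability": 11}
print(round(gap_applicability(scores)))  # → 36 under equal weights
```

Under equal weights this entry would score 36, not its published 28, which suggests the actual scoring weights the criteria unequally.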

Inadequate Models of Human Physiology

Physiology and Medicine· 7 capabilities

Tools

4

Best

39%

Current preclinical models of human physiology, including animals and organoids, do not fully capture the complexity of human physiology, limiting the predictive power of preclinical experiments and explaining, in part, the costly failures of drug development in clinical trials. This is especially true for complex disorders including those of aging, neurological disorders, and female reproductive biology. More systematic and representative models—including ex vivo human organ systems or even whole bodies and novel animal species—are needed to improve the predictive power of biomedical research. These technologies also have applications in addressing organ shortages, improving neonatal care, and other unmet medical needs.

Foundational Capabilities

3D Organoid Technology

Lab-grown 3D organ tissues have become an established additional model system to recapitulate aspects of human biology. This technology could also enable the development of functional organs for transplant. It is also important to make tissue models that recapitulate the effects of aging, or a form of “accelerated lifetime testing”.

6 resources

Cryopreservation and Rewarming to Extend the Viability of Biological Tissues

Develop engineering methods for cryopreserving and safely rewarming large organs or whole bodies, which could revolutionize transplantation and long-term tissue storage.

6 resources

Ectogenesis (Artificial Wombs)

Artificial wombs could revolutionize neonatal care and reduce preterm birth complications. This is an early-stage research area with various positive biomedical externalities.

2 resources

Grow and Maintain Human Organs Ex Vivo or in Animals

Grow human organs in animals to study disease and drug response more accurately. Use advanced stem cell technologies to grow patient-specific tissues and organs in animals for transplantation.  Human organs could also be maintained ex vivo (“in a vat”) for research purposes. Perfused organ systems (including cadaver-based models) that maintain the structure and function of human tissues ex vivo would also be enabling.

4 resources

Improved and Diverse Research Models

A greater variety of small animal models (along with corresponding suites of tools such as species-specific antibodies, annotated genomes, transgenics, etc.) would enable novel biological insights and could be used to develop models of complex human diseases. Additional rodent models, as well as models beyond mouse and rat, would be highly enabling. More realistic models are also critical for aging research–many diseases of aging are studied in young animals. Analytical tools are also important to make it easier for researchers to a) understand the limitations of their research models and b) be aware of superior but less commonly used models. For example, a “Maniatis”-style handbook detailing which human pathophysiology is mirrored in different species.

13 resources

Living Human Bodies Without Brains

Living human bodies created from stem cells without neural components could be transformative for medical research and drug development. There are many open questions–for example, the long time it takes for maturation, whether a body would function without neural components, etc.

2 resources

More Information Gleaned from Single Animals

High-throughput testing in a single animal would enable entire studies to be run in rare/exceptional animals, e.g. with spontaneous disease mimicking humans or species not suitable for research labs.

2 resources

Candidate Tools

RNA sequencing

assay method

Gap applicability

39

RNA sequencing (RNA-seq) is a transcriptomic assay method that quantifies gene-expression changes by sequencing RNA-derived libraries. In the cited study, it was used on adult rat amygdala tissue to detect subtle expression changes associated with development, cellular function, and nervous system disease after gestational high-THC cannabis smoke exposure.

RNA-seq could help characterize how closely organoids, ex vivo tissues, or alternative animal models recapitulate human transcriptional states, which is directly relevant to judging model adequacy. It is especially useful for systematic comparison across candidate models and perturbations.

Mechanistic
42
Context
46
Throughput
78
First test time
55
First test cost
38
Replication
0
Practicality
71
Translatability
11

Assumptions: Assumes transcriptomic similarity is an acceptable partial proxy for physiological relevance in model evaluation.

Missing evidence: No direct evidence here that this item was applied to organoids, ex vivo human organs, aging models, or model-selection workflows.
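
The model-benchmarking use of RNA-seq described above reduces, at its simplest, to comparing expression profiles between candidate models and a human reference tissue. A minimal sketch, with invented gene-panel values and a hand-rolled Pearson correlation; real analyses would use full transcriptomes and established statistical packages.

```python
# Illustrative ranking of candidate models by transcriptional similarity
# to a human reference. All gene-panel values below are invented.
import math

def pearson(xs, ys):
    """Pearson correlation of two equal-length numeric lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# log-expression of a shared gene panel (hypothetical data)
human_tissue = [8.1, 2.3, 5.6, 7.0, 0.4]
candidates = {
    "organoid_A": [7.9, 2.1, 5.0, 6.8, 0.9],   # tracks the human profile
    "rodent_model": [4.2, 6.5, 5.5, 2.0, 3.3], # diverges from it
}
ranked = sorted(candidates, reverse=True,
                key=lambda m: pearson(candidates[m], human_tissue))
print(ranked)  # models ordered by similarity to the human reference
```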

High-throughput screening

assay method

Gap applicability

35

High-throughput screening is an assay method cited in microbial biotechnology literature as part of the CRISPR/Cas toolbox for evaluating variants generated by multiplexed engineering. In the supplied evidence, it is presented as a screening approach associated with CRISPR/Cas-based metabolic engineering and with development of new dynamic systems.

High-throughput screening could support systematic benchmarking of many engineered model variants or perturbation conditions, which aligns with the gap's call for more systematic and representative models. It is more useful as an evaluation workflow than as a model-building solution itself.

Mechanistic
31
Context
34
Throughput
86
First test time
63
First test cost
52
Replication
9
Practicality
71
Translatability
11

Assumptions: Assumes the user needs scalable comparison of candidate physiological models rather than a single bespoke model.

Missing evidence: No supplied evidence that this screening method has been used for organoids, tissue chips, ex vivo organs, or comparative physiology model validation.

Multi-photon intravital microscopy

assay method

Gap applicability

20

Recently, intravital microscopy using multi-photon excitation of fluorophores has been applied to observe virus dissemination and pathogenesis in real-time under physiological conditions in living organisms. In this review, I summarize the latest research on in vivo studies of viral infections using multi-photon intravital microscopy (MP-IVM).

This imaging method could help observe dynamic physiology under more native in vivo conditions, which may improve understanding of where current models diverge from living systems. It is plausibly useful for validating model fidelity, especially for complex tissue behavior.

Mechanistic
28
Context
41
Throughput
22
First test time
24
First test cost
12
Replication
0
Practicality
0
Translatability
0

Assumptions: Assumes live imaging of physiological dynamics is relevant to the specific model-validation bottleneck.

Missing evidence: Evidence provided is limited to viral pathogenesis imaging; no direct evidence for organoids, ex vivo human systems, aging, neurology, or reproductive physiology models.

qRT-PCR

assay method

Gap applicability

20

qRT-PCR is a quantitative reverse-transcription PCR assay used to measure transcript abundance, here applied to GFP mRNA during light-controlled gene expression in Synechococcus sp. PCC 7002. In the cited study, it quantified transcriptional activation and deactivation kinetics of optogenetic systems under green/red and light/dark illumination cycles.

qRT-PCR could provide a fast, low-cost first-pass assay to compare expression of selected marker genes across candidate human-physiology models. That can help triage which models better match expected human tissue programs before deeper phenotyping.

Mechanistic
24
Context
29
Throughput
58
First test time
82
First test cost
74
Replication
9
Practicality
59
Translatability
24

Assumptions: Assumes a targeted marker-panel approach is sufficient for early model comparison.

Missing evidence: No direct evidence here for use in organoid fidelity testing, ex vivo organ maintenance, or whole-physiology model assessment.
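
The marker-panel triage described above rests on relative quantification, conventionally computed with the 2^-ΔΔCt method. A minimal sketch with invented Ct values; the `fold_change` helper name is ours, and the method assumes roughly 100% primer efficiency.

```python
# Minimal 2^-ΔΔCt calculation, the standard relative-quantification
# scheme behind qRT-PCR marker comparisons. Ct values are invented.

def fold_change(ct_target_sample, ct_ref_sample,
                ct_target_control, ct_ref_control):
    """Relative expression by 2^-ΔΔCt (assumes ~100% primer efficiency)."""
    d_sample = ct_target_sample - ct_ref_sample      # ΔCt in the test model
    d_control = ct_target_control - ct_ref_control   # ΔCt in the reference
    return 2 ** -(d_sample - d_control)

# Marker gene in a candidate model vs. a human reference tissue,
# each normalized to a housekeeping gene:
print(fold_change(22.0, 18.0, 24.0, 18.0))  # → 4.0 (4x higher in the model)
```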

Our Measurements and Tests Aren’t Revealing What Is Actually Causing Many Diseases

Physiology and Medicine· 10 capabilities

Tools

3

Best

56%

Our understanding of human physiology and disease remains incomplete. In the last century, we have developed cures for many diseases with well-defined root causes (polio, smallpox, cholera, SMA, cervical cancer, etc.). However, a wide array of conditions still eludes cures and treatments. We have yet to fully decipher the dynamic interplay between brain and peripheral systems, the bioenergetic processes underlying chronic conditions, and the multifactorial pathways that drive aging. The biological mechanisms driving complex diseases and the aging process are multifactorial, involving multiple interacting pathways. Although we understand some individual aging mechanisms, we do not yet have line of sight to comprehensively rejuvenating mammals or extending lifespan. To overcome these challenges, we need combinatorial approaches that can modulate multiple mechanisms simultaneously, allowing us to measure multi-system impacts and develop effective interventions.

Foundational Capabilities

Better Systems Measurements

To study interactions in complex systems we need to measure multiple agents simultaneously, ideally with timecourse data. Doing this in live aged organisms would require new tools.

2 resources

Chemical and Cell-Type Resolved Mapping of Brain Activation

Create detailed, functional maps of hypothalamus/brainstem activation by specific peptides and hormones to link neural activity with physiological outcomes and decipher how the brain orchestrates complex physiological responses.

1 resources

Combinatorial Aging Interventions Screening

Develop aging-relevant in vitro models and screen combinations of interventions (e.g., small molecules, gene therapies) using multi-omic and functional readouts to identify synergistic treatments that extend lifespan or promote regeneration.

3 resources

Developing Immortal Biological Models

Develop immortal model organisms beyond cell lines to enable the study of longevity and underlying mechanisms of aging. 

2 resources

Engineering Endosymbionts

Study and engineer endosymbiotic relationships (e.g., mitochondria) to better understand and manipulate cellular energy production, potentially offering new avenues to treat bioenergetic disorders.

3 resources

Epidemiological Tools to Identify the Root Causes of Disease

Integrated analytic tools to study clinical, molecular, and environmental datasets to identify patterns and infer the underlying causes of disease.

1 resources

In-Vivo/ In-Situ Pooled Screening

Implement pooled screening techniques directly in living organisms to test multiple intervention combinations concurrently in aged contexts, accelerating discovery in complex disease and aging research.

4 resources

Long-Term Multimorbidity Endpoints

Long-term human trials with multimorbidity endpoints for compounds already known to be safe.

1 resources

Understudied Biological Systems

There are many other biological systems that are understudied. We need field building to drive greater study of important biological dark matter, e.g., extracellular matrix biology, pregnancy, thymic involution, chronic infections driving chronic disease, menopause biology, and many others.

1 resources

Whole Body Connectomes

Map the connectivity not only within the brain but also between the brain and peripheral organs to reveal integrated regulatory networks.

3 resources

Candidate Tools

DECCODE

computation method

Gap applicability

56

To find drugs that mimic this effect, we use DECCODE (Drug Enhanced Cell COnversion using Differential Expression), an unbiased method that matches our transcriptional data with thousands of drug-induced profiles.

This is a concrete analytic method for matching disease or intervention-associated transcriptional states against large drug-response profile collections. That could help the gap's stated need for integrated analytic tools and combinatorial intervention discovery, especially when transcriptomic measurements are available.

Mechanistic
62
Context
66
Throughput
82
First test time
72
First test cost
68
Replication
35
Practicality
45
Translatability
40

Assumptions: Assumes transcriptomic datasets are part of the measurement stack for the disease context of interest.

Missing evidence: No supplied evidence on validation breadth, disease domains, or performance for causal inference rather than drug prioritization.
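
Signature matching of the kind DECCODE performs can be sketched as scoring drugs by how well their induced expression profiles mimic a desired transcriptional change. The snippet below uses a simple dot-product over a shared gene panel with invented values; it illustrates the idea only and is not DECCODE's actual algorithm.

```python
# Sketch of DECCODE-style signature matching: rank drugs by agreement
# between their induced expression changes and a query signature.
# Gene names, values, and the scoring function are illustrative only.

def match_score(query, drug_profile):
    """Higher when the drug moves shared genes in the desired direction."""
    genes = query.keys() & drug_profile.keys()
    return sum(query[g] * drug_profile[g] for g in genes)

# desired change: up-regulate G1/G3, down-regulate G2 (hypothetical)
query = {"G1": 1.2, "G2": -0.8, "G3": 0.5}
drugs = {
    "drug_X": {"G1": 0.9, "G2": -1.1, "G3": 0.4},   # mimics the signature
    "drug_Y": {"G1": -1.0, "G2": 0.7, "G3": -0.2},  # opposes it
}
best = max(drugs, key=lambda d: match_score(query, drugs[d]))
print(best)  # → drug_X
```

Real tools of this type score against thousands of drug-induced profiles and use rank-based statistics rather than raw dot-products.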

This chapter explores the principles, platforms, and applications of CFS-based HTS... Altogether, CFS-based HTS offers a flexible, rapid, and accessible approach for next-generation biomolecular screening and therapeutic development.

The gap explicitly calls for combinatorial screening approaches, and this item is an actionable high-throughput assay method for rapidly testing compounds, nucleic acids, or proteins. It could support early-stage screening of intervention combinations before moving into more physiological models.

Mechanistic
41
Context
46
Throughput
90
First test time
80
First test cost
67
Replication
30
Practicality
50
Translatability
28

Assumptions: Assumes an in vitro prescreen is useful before organismal or aged-model validation.

Missing evidence: Supplied evidence does not show disease-mechanism discovery in complex mammalian physiology or compatibility with multi-system readouts.

lentiviral vectors

delivery harness

Gap applicability

39

Lentiviral vectors (LVVs) are used as a viral gene therapeutic and were derived from human immunodeficiency virus subtype 1 (HIV-1). LVVs are used to deliver and induce the stable expression of transgenes through genome integration.

Several capabilities in the gap depend on deploying genetically encoded measurement or perturbation systems in living cells or nervous-system contexts. Lentiviral vectors are a practical delivery harness for stable transgene expression, which could enable testing of such tools.

Mechanistic
28
Context
52
Throughput
38
First test time
62
First test cost
45
Replication
30
Practicality
40
Translatability
42

Assumptions: Assumes the intended measurement or perturbation system requires stable gene delivery in mammalian cells.

Missing evidence: The item is only a delivery vehicle; supplied evidence does not show any direct measurement, mapping, or disease-cause-discovery function.

When We Put a Molecule in the Human Body, We Can’t Predict What It Will Do

Physiology and Medicine· 10 capabilities

Tools

3

Best

53%

Drug development is often hampered by failures related to absorption, distribution, metabolism, excretion, and toxicity (ADME/Tox). Improved predictive models for molecular interactions are essential for designing safer, more effective drugs, as well as evaluating the impact of environmental chemicals. Additionally, there is a significant gap in our knowledge of what exactly is present in foods and how these components affect human biology. A comprehensive mapping of the “foodome” and studies on food component functionality are needed to advance nutrition science and personalized dietary interventions.

Foundational Capabilities

Comprehensive Foodome Mapping

Systematically catalog the chemical and biological components of foods and study their interactions with the human body at multiple scales—from receptors to whole organisms.

2 resources

Functional Food Component Analysis

Measure the biological effects of food components at the receptor, cellular, organ, and organism levels to understand their impact on human health.

1 resources

Immunodominance Prediction for New Pathogens

Develop models to forecast which epitopes will dominate immune responses upon exposure to new antigens, guiding vaccine and therapeutic development.

1 resources

Immunogenicity Prediction for Biologics

Create computational models to predict and mitigate immune responses to biologic drugs, improving safety profiles.

1 resources

Microplastics Characterization and Degradation

Characterize microplastics in food and water and understand human exposure and impacts. Develop new technologies to degrade PET, polystyrene, and microplastics (e.g., microbe/enzyme systems).

2 resources

Pharm-ome Mapping

Systematically map drug–target interactions (the “pharm-ome”) to better predict drug efficacy, side effects, and repurposing opportunities.

6 resources

Predictive Drug Safety and Efficacy Models

Develop predictive models for ADME-Tox to lower drug candidate failure rates and increase clinical safety and efficacy.

4 resources

Scalable Tool Compounds

Develop large libraries of tool compounds to systematically probe molecular interactions, aiding both drug discovery and toxicity prediction.

1 resources

Surveillance of Microbes and Fungi

Surveillance networks tracking genetic mutations in bacteria and fungi–covering both foodborne pathogens and antimicrobial-resistance trends.

2 resources

Toxin Mapping and Neutralization

Adapt pharm-ome mapping approaches to environmental toxins to predict their biological impacts and improve safety assessments. For example, scalable solutions for rapid detection and neutralization of mycotoxins in food systems, which contaminate 25% of agricultural products and pose significant health risks (e.g., portable sensors and enzyme/microbial/RNA-based detoxification systems).

4 resources

Candidate Tools

MGTbind

computation method

Gap applicability

53

we developed the molecular glue and ternary binding (MGTbind) database, providing comprehensive resources about ternary structures and experimental data for the coverage of MG-engaged interactome

This database/method could support pharm-ome style mapping by organizing ternary binding and molecular glue interaction data, which is relevant to predicting drug mechanism and off-target biology. It is more applicable to targeted protein degradation modalities than to general ADME/Tox, so the fit is partial rather than broad.

Mechanistic
58
Context
52
Throughput
78
First test time
72
First test cost
74
Replication
35
Practicality
45
Translatability
40

Assumptions: Assumes the gap includes mechanism-of-action mapping for degrader-like small molecules.

Missing evidence: No supplied evidence on validation performance, predictive accuracy, or direct use for human ADME/Tox or food-component biology.

lipid nanoparticles

delivery harness

Gap applicability

45

This review examines recent advancements in nanoparticle (NP) delivery systems, with a focus on ... lipid nanoparticles (LNPs)... We discussed various NP platforms and their applications, such as ... dry powder formulations of mRNA-loaded LNPs for pulmonary delivery, and LNP-mediated siRNA delivery for respiratory infections.

LNPs are a directly actionable in vivo delivery strategy and could be useful when testing how nucleic-acid therapeutics distribute and act in the body, which touches the gap's delivery and exposure-prediction bottleneck. They help generate relevant physiology-facing test systems, though they are not themselves a predictive ADME/Tox model.

Mechanistic
34
Context
61
Throughput
55
First test time
42
First test cost
36
Replication
40
Practicality
50
Translatability
68

Assumptions: Assumes the gap includes experimental platforms for studying in vivo disposition of therapeutic cargos.

Missing evidence: No supplied evidence on biodistribution prediction, toxicity modeling, food-component applications, or standardized assay framework for this gap.

VHL-recruiting PROTAC

construct pattern

Gap applicability

40

These heterobifunctional molecules are comprised of three units: a ligand for the protein of interest (POI), a ligand for an E3 ubiquitin ligase, and a linker that tethers the two motifs together. Von Hippel-Lindau (VHL) is one of the most widely employed E3 ligases in PROTACs development.

VHL-recruiting PROTACs are an actionable drug construct pattern whose in-body behavior can be difficult to predict, so they are a plausible test class for improving mechanism and safety models. They are relevant mainly for targeted protein degradation therapeutics rather than the broader foodome or general small-molecule ADME/Tox problem.

Mechanistic
41
Context
44
Throughput
46
First test time
33
First test cost
28
Replication
40
Practicality
42
Translatability
55

Assumptions: Assumes the gap includes representative therapeutic modalities for building predictive models.

Missing evidence: No supplied evidence on pharmacokinetics, toxicity, assay compatibility, or benchmark datasets tied to prediction of human in vivo behavior.

2 unmatched gaps

Clinical Trials are Inefficient, Slow and Scarce

Physiology and Medicine· 10 capabilities
No tool matches

Clinical trial designs are often inefficient, resulting in high costs, lengthy timelines, and suboptimal patient outcomes. Innovative trial designs and decision-support tools are required to streamline the clinical evaluation process and accelerate therapeutic development.

Foundational Capabilities

Advanced Noninvasive Measurements of Patient Biology

Noninvasive monitoring technologies can enable high-resolution, point-of-care data collection.

4 resources

Digital Twins / Synthetic Clinical Trial Models

Create high-fidelity computational models (digital twins) that accurately simulate human physiology, enabling synthetic clinical trials and faster hypothesis testing.

2 resources

Enhanced Participant Recruitment

Recruitment is a bottleneck that could be addressed by socio-technical programs. Increased interoperability and unity of data access and patient recruitment across centers and disease states would help de-silo recruitment and improve efficiency.

2 resources

Faster Proxy Biomarkers

Identify and validate robust biomarkers and surrogate endpoints to serve as effective proxies in clinical trials, enabling faster and more informative evaluations. This is especially important for aging (e.g., as proposed by the Norn Group).

4 resources

Improved Diagnostics

Diagnostic tools for the top 10 hard-to-diagnose diseases could have a significant impact on patients and the healthcare ecosystem. Diagnostics as an industry is in a state of market failure. Many disorders remain challenging to diagnose, impacting patient outcomes and burdening the healthcare system. Monitoring the epigenome can reveal exposure to infectious diseases and threat agents.

3 resources

Infrastructure for Decentralized Patient-Reported Clinical Studies

Decentralized patient reporting for clinical studies would amplify and diversify patient participation at decreased cost by reducing logistical and geographic barriers. It could compound and incorporate subjective patient experiences at large scale and would improve public accessibility of trial data.

5 resources

Multi-Disease Efficacy Testing with Standardized Biomarkers

Infrastructure and coalition to collect extra blood samples during trials and apply biomarkers for different diseases to get more value from each trial.

1 resources

New Clinical Trial Designs and Tools

Clinical trial designs such as challenge trials, adaptive trial design, and group testing can improve efficiency.  These can be complemented with analytics and decision-support systems to optimize design, patient enrollment, and outcome interpretation.

6 resources

Prioritization of Clinical Trials and Practice Updates

Mine biomedical knowledge to decide which neglected assets to run trials on for which conditions.

2 resources

Real-Time Dynamics of Patient Molecular Biology

The ability to study the real-time dynamics of molecular pathways in living humans remains extremely limited in time-resolution (frequency and duration) and scalability (beyond lab testing). Continuous, minimally invasive technologies are needed to improve our understanding of human physiology, accurately diagnose patients, and assess the impact of clinical interventions. Molecular engineering, particularly protein engineering, and integrated circuit design can be combined to develop new, miniaturized devices.

5 resources

Candidate Tools

No candidate tools matched this gap. This may be because the gap describes a domain (e.g. astrophysics, social science) where the current biotech-focused toolkit has limited relevance, or because the gap requires capability types not yet well-represented in the database.

Limited Longitudinal Data in Humans

Physiology and Medicine· 5 capabilities
No tool matches

A comprehensive understanding of human health over time is hindered by the lack of longitudinal data from cohorts that are diverse and globally representative. Such datasets are essential to track developmental, nutritional, and environmental influences on long-term health outcomes.

Foundational Capabilities

Accessing Human Cohorts

Develop platforms (e.g., multi-channel data collection systems) to continuously monitor health parameters in diverse human cohorts, e.g., existing or planned clinical trials. 

1 resources

Breast Milk-ome Profiling

Conduct comprehensive “-omics” studies of breast milk to capture its molecular and microbial composition and its role in infant development.

1 resources

Easy Longitudinal Sampling

Implement noninvasive, low-cost sampling methods (e.g., breath analysis and point-of-care nucleic acid sequencing) to collect repeated, high-resolution physiological data. This would augment the depth of data from initiatives such as the NIH All of Us research program.

5 resources

Expanded Biobank Initiatives

Broaden the scope and diversity of existing biobanks (e.g., the UK BioBank) to include more global populations and additional longitudinal health measures.

3 resources

Mapping Developmental Malnutrition

Systematically profile cellular mechanisms by which early-life malnutrition affects physiology across the lifespan, providing insights into long-term developmental impacts.

1 resources

Candidate Tools

No candidate tools matched this gap. This may be because the gap describes a domain (e.g. astrophysics, social science) where the current biotech-focused toolkit has limited relevance, or because the gap requires capability types not yet well-represented in the database.

Biosecurity

5 gaps · 15 tool connections · best match 68%

Insufficient Surveillance of Bio-Threats

Biosecurity· 7 capabilities

Tools

6

Best

68%

Rapid detection of emerging bio-threats is critical for effective intervention and containment. However, many pathogens are detected only after widespread transmission has occurred. In addition, attributing the source of these threats remains challenging.

Foundational Capabilities

Early Detection via Sequencing

Deploy comprehensive sequencing-based early warning systems to rapidly detect and attribute emerging bio-threats. It is important to distinguish detection of known pathogens (easy to find in sequencing data) vs. previously unknown ones (hard).

9 resources

Emerging Bio-Threat Forensics

Capabilities to determine the geographical source of bio-threats and whether they stem from natural sources or human activity. 

2 resources

Enhanced Epidemiology and Epidemic Tracking

Improve epidemic surveillance and modeling by integrating data from multiple sources for timely, actionable insights.

5 resources

Fine-Grained Economic Modeling of Policy Options

Develop detailed economic models to better assess the costs and benefits of different bio-threat interventions and inform policy decisions.

1 resources

Rapidly Adaptable Rapid Test Platforms 

Develop rapid test platforms that can be reconfigured within days or weeks for emerging pathogens, enabling quick self-testing during a pandemic. Address the limitations of traditional lateral flow tests—which rely on antibodies and take months to develop—by significantly accelerating the deployment of diagnostic tools.

4 resources

Universal Detection and Modulation of the Host Response to Infections

Technologies to detect and modulate the host immune response across a broad range of infections to improve patient outcomes and treatment strategies.

3 resources

Volatilomics for Early Detection

Utilize the analysis of volatile organic compounds (VOCs) as an early detection tool to identify pathogenic outbreaks.

3 resources

Candidate Tools

whole-genome sequencing

computation method

Gap applicability

68

While these methods, like polymerase chain reactions or whole-genome sequencing, are considered the "gold standard" for diagnostics, the development of inexpensive, rapid diagnostic assays is necessary for effective AMR detection and management.

The gap explicitly calls for sequencing-based early warning and source attribution, and whole-genome sequencing is directly aligned with pathogen detection and genomic comparison. It is also relevant to forensic analysis of outbreak origin when sufficient comparative data exist.

Mechanistic
92
Context
95
Throughput
72
First test time
45
First test cost
28
Replication
35
Practicality
35
Translatability
40

Assumptions: Assumes this item refers to operational pathogen whole-genome sequencing rather than only a background mention.

Missing evidence: No mechanism details, workflow specifics, field-deployment characteristics, or replication/practicality metadata were provided.
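
The attribution use of whole-genome sequencing noted above hinges on genomic comparison: an outbreak isolate is placed relative to reference genomes by counting variant differences. A toy sketch with invented sequences; real pipelines align full genomes and build phylogenies rather than comparing short strings.

```python
# Toy illustration of the genomic-comparison step behind sequencing-based
# source attribution: count SNP differences between an outbreak isolate
# and candidate reference genomes. Sequences below are invented.

def snp_distance(a, b):
    """Number of mismatched positions between two aligned sequences."""
    return sum(x != y for x, y in zip(a, b))

outbreak = "ACGTTGCAAC"
references = {
    "region_1_isolate": "ACGTTGCAAT",  # 1 mismatch
    "region_2_isolate": "ACGATGCCAC",  # 2 mismatches
}
closest = min(references, key=lambda r: snp_distance(outbreak, references[r]))
print(closest)  # → region_1_isolate
```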

CRISPR-Cas technology

engineering method

Gap applicability

61

CRISPR-Cas technology comprises CRISPR-associated effector proteins that recognize specific DNA or RNA sequences and cleave them. In the cited review, it is presented primarily as a platform for rapid pathogen nucleic acid detection that leverages Cas trans-cleavage activity together with signal amplification and signal transformation strategies.

The gap includes rapidly adaptable rapid test platforms, and this item is explicitly described as a rapid pathogen nucleic acid detection platform using sequence recognition and trans-cleavage. That makes it a plausible tool for faster detection of known or newly sequenced threats once target sequences are available.

Mechanistic
78
Context
82
Throughput
74
First test time
72
First test cost
66
Replication
22
Practicality
71
Translatability
11

Assumptions: Assumes surveillance use includes targeted follow-up diagnostics, not only untargeted discovery.

Missing evidence: No supplied evidence for agnostic detection of unknown pathogens, source attribution performance, or field validation in surveillance programs.

Gap applicability

60

Sensitive detection of tobamoviruses in the field with minimal sample preparation can be achieved using latest technologies such as isothermal amplification, CRISPR/Cas-hybrid assays or next-generation sequencing.

This item is described as enabling sensitive field detection with minimal sample preparation and is marked field-deployable, which fits the need for rapid surveillance response. It is especially plausible for quickly deployable targeted detection once a threat sequence is known.

Mechanistic
74
Context
80
Throughput
70
First test time
76
First test cost
68
Replication
30
Practicality
50
Translatability
40

Assumptions: Assumes plant-virus example generalizes only to the assay format, not necessarily to all pathogen classes.

Missing evidence: No direct evidence here for human biosecurity surveillance, unknown-pathogen discovery, or attribution use.

Polymerase chain reaction (PCR)

assay method

Gap applicability

59

While these methods, like polymerase chain reactions or whole-genome sequencing, are considered the "gold standard" for diagnostics, the development of inexpensive, rapid diagnostic assays is necessary for effective AMR detection and management.

PCR is directly relevant to pathogen detection and is a practical first-line surveillance assay for known targets. It could support rapid confirmation and monitoring after an emerging threat has been identified by sequencing or other discovery methods.

Mechanistic
64
Context
70
Throughput
76
First test time
82
First test cost
78
Replication
45
Practicality
55
Translatability
50

Assumptions: Assumes standard PCR-based pathogen detection workflows are intended by this item.

Missing evidence: The supplied summary is generic and does not provide surveillance-specific performance, multiplexing, field use, or attribution capability.

Gap applicability

57

Lateral flow technology is a signal transformation format used within CRISPR-Cas pathogen nucleic acid diagnostic platforms. In the supplied evidence, it functions alongside Cas protein-based sequence recognition and cleavage and with signal amplification approaches for rapid molecular diagnosis.

As a readout format within CRISPR-based nucleic acid diagnostics, lateral flow could help convert molecular detection into fast, simple surveillance tests. This is relevant to the gap's need for rapidly adaptable rapid test platforms, though not to untargeted discovery or attribution.

Mechanistic
62
Context
72
Throughput
84
First test time
86
First test cost
88
Replication
22
Practicality
71
Translatability
11

Assumptions: Assumes it would be paired with an upstream CRISPR detection chemistry.

Missing evidence: No standalone evidence for sensitivity, specificity, multiplexing, or deployment in broad surveillance systems.

Gap applicability

52

The fluorescence method is a signal transformation modality used in CRISPR-Cas pathogen nucleic acid diagnostic platforms. In the cited context, Cas effector proteins recognize and cleave specific DNA or RNA targets, and fluorescence is combined with signal amplification and transformation technologies to report detection.

This item is a signal readout modality for CRISPR-based nucleic acid detection, so it could support rapid molecular assays for surveillance of known targets. Its relevance is mainly as an assay component rather than a full surveillance solution.

Mechanistic
58
Context
68
Throughput
70
First test time
74
First test cost
62
Replication
22
Practicality
71
Translatability
11

Assumptions: Assumes access to compatible CRISPR detection reagents and instrumentation.

Missing evidence: No direct evidence for field deployment, unknown-pathogen detection, or source attribution.

Inadequate Blockers of Transmission

Biosecurity· 4 capabilities

Tools

4

Best

52%

Our ability to block the transmission of pathogens is limited. Without effective strategies, airborne and surface-based transmission continues to spread diseases.

Foundational Capabilities

Better PPE

Develop next-generation, affordable, high-performance personal protective equipment to reduce transmission risks.

5 resources

Built Environment Protection

Enhance indoor environmental controls to reduce pathogen transmission through advanced sensor networks, UV/opto-acoustic disinfection, and improved HVAC systems.

6 resources

Engineering the Microbiome to Improve Immune Response 

Understand the role of the microbiome in immunity against infection and develop the ability to engineer or transplant microbiome communities that protect against infection.

3 resources

Transmission Reduction Through Surfaces and Textiles

Innovate new materials and coatings for surfaces and textiles that actively reduce pathogen viability and transmission.

2 resources

Candidate Tools

Gap applicability

52

The COVID-19 pandemic marked a turning point in vaccine development, leading to the swift creation of mRNA vaccines delivered via lipid nanoparticles (LNP-mRNA).

The gap includes blocking pathogen transmission, and the supplied evidence places LNP-mRNA in infectious-disease therapeutics and vaccines. That makes it a plausible platform for prophylactic interventions that could reduce onward spread, although the evidence does not specifically show transmission-blocking performance.

Mechanistic
62
Context
66
Throughput
58
First test time
45
First test cost
34
Replication
45
Practicality
45
Translatability
70

Assumptions: Assumes vaccine or mucosal prophylaxis development is in scope for transmission reduction.

Missing evidence: No direct evidence here on transmission blocking, mucosal delivery, durability, affordability, or fit to PPE/built-environment/surface interventions.

virus-like particles

delivery harness

Gap applicability

45

Subsequently, we delve into cutting-edge applications of nanoparticles to enhance immune protection, including mosaic and cocktail nanoparticle vaccines, surface-modified targeting strategies, and the integration of mRNA technology with virus-like particles (VLPs).

The item summary explicitly mentions nanoparticle vaccines and immune protection, so VLPs could plausibly support development of prophylactic tools that lower infection and transmission. This is relevant to the gap at a broad level, but the supplied evidence does not connect VLPs to airborne, surface, PPE, or environmental transmission control specifically.

Mechanistic
56
Context
61
Throughput
52
First test time
40
First test cost
36
Replication
35
Practicality
35
Translatability
50

Assumptions: Assumes immune-protective vaccine platforms are acceptable as transmission blockers.

Missing evidence: No direct evidence on transmission reduction, pathogen scope beyond examples, manufacturability, deployment context, or comparison to other vaccine platforms.

Gap applicability

43

Furthermore, we discuss the emerging monitoring mechanism, namely wastewater-based epidemiology, for early warning of the outbreak, focusing on sensors for rapid and on-site analysis of SARS-CoV-2 in sewage.

This is an early-warning surveillance method for SARS-CoV-2 in sewage, which could help trigger interventions before transmission accelerates. It does not itself block transmission, but it could support operational transmission-reduction strategies in the built environment or public health response.

Mechanistic
28
Context
55
Throughput
78
First test time
50
First test cost
46
Replication
40
Practicality
45
Translatability
62

Assumptions: Assumes enabling earlier intervention counts as plausibly helping address the gap.

Missing evidence: No direct evidence that this method reduces transmission, and no evidence for airborne or surface-specific mitigation.

This chapter explores the principles, platforms, and applications of CFS-based HTS... Altogether, CFS-based HTS offers a flexible, rapid, and accessible approach for next-generation biomolecular screening and therapeutic development.

The gap may require rapid discovery of new biomolecular countermeasures, and this item is explicitly a rapid, accessible high-throughput screening platform for therapeutic development. It could plausibly accelerate finding candidate transmission-blocking molecules or biologics, but the supplied evidence is not specific to pathogens, surfaces, aerosols, or PPE materials.

Mechanistic
34
Context
38
Throughput
88
First test time
72
First test cost
63
Replication
35
Practicality
50
Translatability
32

Assumptions: Assumes upstream discovery tools are in scope if they can accelerate development of blockers.

Missing evidence: No direct evidence for screening disinfectants, coatings, filtration materials, antivirals, or transmission-relevant phenotypes.

Microbes Quickly Out-Evolve Our Defenses

Biosecurity· 7 capabilities

Tools

4

Best

46%

Pathogenic microbes evolve quickly, and bad actors may exploit biotechnology for harmful purposes. Our current defenses struggle to keep pace with these evolving threats.

Foundational Capabilities

Decentralized, Low-Resource Vaccine Production

Establish scalable, decentralized vaccine production systems to rapidly deploy immunizations during outbreaks, reducing reliance on centralized facilities.

3 resources

Improved Antibiotics Discovery

Develop next-generation antibiotics and novel antimicrobial compounds using advanced discovery platforms to stay ahead of evolving pathogens.

10 resources

Improved Broad-Spectrum Antiviral Discovery

Develop new broad-spectrum antivirals that can be used to treat or prevent infection from evolving viruses.

2 resources

Novel Methods for Rapid Antibody or Antibody-Like Molecule Discovery

Use technologies such as rapid B-cell sorting and computational design to quickly characterize and discover antibodies and antibody-like molecules to combat emerging biosecurity threats.

2 resources

Novel Vaccine Technologies

Develop innovative vaccine platforms that can adapt to or be robust to rapidly mutating viruses (e.g., influenza, HIV, coronaviruses) using methods such as mosaic nanoparticles and mRNA cocktails. Safety and security considerations: https://www.sciencedirect.com/science/article/pii/S0264410X21001717 

6 resources

Prototype Medical Countermeasures

Develop prototype vaccines or therapeutics for viruses in each viral family that infects humans, to be rapidly adapted for the next pandemic.

5 resources

Self-Administrable Nasal Sprays

Develop nasal sprays that could be applied daily for broad-spectrum protection against respiratory pathogens.

2 resources

Candidate Tools

phage display

assay method

Gap applicability

46

Phage display is an assay and selection method used during engineering workflows for light-responsive protein tools. In the cited context, it is applied alongside computational protein design and high-throughput binding assays in development of LOV2-based optogenetic systems such as improved light-induced dimers.

Phage display is a concrete selection method that could support rapid discovery of antibody-like binders against newly emerging microbial targets, which aligns with the gap's need to keep pace with evolving threats. Its selection/enrichment mechanism is also compatible with higher-throughput discovery workflows.

Mechanistic
62
Context
58
Throughput
80
First test time
55
First test cost
45
Replication
20
Practicality
71
Translatability
11

Assumptions: Assumes the gap includes rapid countermeasure discovery workflows such as antibody or binding-molecule discovery.

Missing evidence: No direct evidence here for pathogen-specific use, neutralization performance, or deployment in biosecurity response settings.

mathematical modeling

computation method

Gap applicability

39

Mathematical modeling is a computational method used to guide the rational design of synthetic gene circuits. The cited literature also places it alongside live-cell imaging and within quantitative model systems used to study microbial drug resistance and spatial-temporal features of cancer in mammalian cells.

The supplied evidence explicitly notes use of mathematical modeling in studying microbial drug resistance, which is relevant to pathogens evolving around defenses. It could plausibly help prioritize designs or experiments faster than purely empirical iteration.

Mechanistic
34
Context
42
Throughput
72
First test time
82
First test cost
84
Replication
20
Practicality
83
Translatability
12

Assumptions: Assumes computational prioritization of resistance-related experiments is in scope for this gap.

Missing evidence: No direct evidence that this modeling approach improves antibiotic, antiviral, vaccine, or diagnostic development in outbreak-response contexts.
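To illustrate the kind of question such modeling can prioritize, here is a minimal, purely illustrative simulation of sensitive versus resistant bacterial populations under constant drug pressure. All parameters (growth rates, kill rate, mutation rate) are invented for the sketch and are not drawn from the cited literature:

```python
# Toy model: a drug suppresses the sensitive population while a
# resistant subpopulation, seeded by rare mutation and paying a
# fitness cost, expands. All parameters are illustrative.
def simulate(days=30, dt=0.01, drug_kill=1.2):
    s, r = 1e6, 1.0                  # initial sensitive / resistant cells
    growth_s, growth_r = 1.0, 0.9    # resistant strain pays a fitness cost
    mutation = 1e-8                  # per-division chance of resistance
    for _ in range(int(days / dt)):
        births_s = growth_s * s * dt
        ds = births_s - drug_kill * s * dt   # drug outpaces sensitive growth
        dr = growth_r * r * dt + mutation * births_s
        s = max(s + ds, 0.0)
        r = r + dr
    return s, r

s_final, r_final = simulate()
print(f"sensitive: {s_final:.2e}  resistant: {r_final:.2e}")
```

Even this toy version shows why modeling can rank interventions (e.g., dosing that keeps `drug_kill` above the sensitive growth rate) faster than purely empirical iteration.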

CRISPR

engineering method

Gap applicability

34

CRISPR is a widely used engineering method for targeted RNA and DNA manipulation across multiple organisms. The cited review highlights its use for viral genome manipulation, including gene knock-in and gene knock-out, and for precise diagnosis of viral infections.

The provided summary supports CRISPR as a method for viral genome manipulation and precise diagnosis of viral infections. That makes it a plausible enabling method for rapidly characterizing emerging pathogens or building diagnostic countermeasures, though it does not directly solve the broader pace-of-evolution problem on its own.

Mechanistic
31
Context
44
Throughput
63
First test time
62
First test cost
56
Replication
9
Practicality
71
Translatability
11

Assumptions: Assumes pathogen characterization and diagnostics are acceptable subproblems within the gap.

Missing evidence: No direct evidence here for broad-spectrum defense, antimicrobial discovery, vaccine adaptation, or field-deployable biosecurity use.

Gap applicability

33

Computational protein design is an engineering methodology described in a 2018 review as a next-generation tool for expanding synthetic biology applications. The supplied evidence frames it as a design approach used alongside phage display and high-throughput binding assays rather than as a single molecular reagent.

Computational protein design is an actionable design methodology that could plausibly accelerate generation of new binders or protein-based countermeasures against evolving pathogens. It fits the gap's emphasis on speed, but the supplied evidence is generic and not tied to antimicrobial, antiviral, or vaccine applications.

Mechanistic
28
Context
39
Throughput
68
First test time
70
First test cost
72
Replication
9
Practicality
71
Translatability
11

Assumptions: Assumes protein-based countermeasure design is relevant to the gap.

Missing evidence: No direct evidence in the supplied summary for pathogen-focused outputs, successful therapeutic discovery, or biosecurity deployment.

Fragile Supply Chains and Lack of Backup for Critical Infrastructure

Biosecurity· 2 capabilities

Tools

1

Best

47%

Many critical supply chains and infrastructure systems are fragile and lack robust backup mechanisms, leaving society vulnerable.

Foundational Capabilities

Civilization Reboot Toolkit

Develop a comprehensive toolkit to reboot essential infrastructure and restore societal functions in the event of widespread failure.

2 resources

Optimize Circular and Robust Supply Chains

Adopt circular economy principles and advanced design strategies to build more resilient, robust, and sustainable supply chains.

2 resources

Candidate Tools

mathematical modeling

computation method

Gap applicability

47

Mathematical modeling is a computational method used to guide the rational design of synthetic gene circuits. The cited literature also places it alongside live-cell imaging and within quantitative model systems used to study microbial drug resistance and spatial-temporal features of cancer in mammalian cells.

The gap explicitly mentions optimizing robust supply chains and generating contingency plans, and mathematical modeling is a directly actionable computational method for scenario analysis and design. It could plausibly support stress-testing alternative supply-chain configurations or backup strategies before physical implementation.

Mechanistic
58
Context
46
Throughput
72
First test time
84
First test cost
86
Replication
20
Practicality
83
Translatability
12

Assumptions: Assumes the method can be applied beyond gene circuits to logistics or network resilience modeling.

Missing evidence: No supplied evidence shows direct use for supply chains, infrastructure resilience, or contingency planning.
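As a flavor of the stress-testing described above, here is a toy Monte Carlo comparison of a single-supplier configuration against one with a backup supplier. The failure probabilities are illustrative assumptions, not data from any supply-chain study:

```python
# Toy Monte Carlo stress test: service level of a single-supplier
# chain vs. one with an (even less reliable) backup supplier.
import random

def fulfilled(p_fail_primary, p_fail_backup=None):
    """One simulated period: does at least one supplier deliver?"""
    if random.random() > p_fail_primary:
        return True
    return p_fail_backup is not None and random.random() > p_fail_backup

def service_level(trials=100_000, backup=False):
    random.seed(0)  # deterministic for reproducibility
    ok = sum(
        fulfilled(0.10, 0.20 if backup else None) for _ in range(trials)
    )
    return ok / trials

print(f"no backup:   {service_level():.3f}")
print(f"with backup: {service_level(backup=True):.3f}")
```

The expected service levels are roughly 0.90 without a backup and 1 − 0.10 × 0.20 ≈ 0.98 with one, the kind of before/after comparison that lets configurations be ranked before physical implementation.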

1 unmatched gap

Risks of Malicious Bioengineering

Biosecurity· 3 capabilities
No tool matches

Advances in synthetic biology have unlocked unprecedented innovations, but also raise concerns about the potential for harmful bioengineering. Preventing misuse requires robust screening and control measures around DNA synthesis. Implementation must be coordinated and universal to effectively minimize the risk of malicious actors.

Foundational Capabilities

Hardware Lock DNA Synthesizer

Integrate hardware-based locks into DNA synthesizers to ensure secure operation and prevent unauthorized use. Enhance systems for detecting and reporting flagged orders—enabling cross-verification with other intelligence data—and secure AI-driven biodesign tools to mitigate potential biosecurity risks.

2 resources

Trustless DNA Synthesis Screening

Implement advanced, potentially cryptographic (where appropriate), and/or adversarial AI-based DNA synthesis screening methods to prevent misuse and ensure biosecurity.

3 resources

Watermarking AI Generated Protein Sequences

Develop embedded watermarks in generative protein models to promote traceability of engineered sequences.

2 resources

Candidate Tools

No candidate tools matched this gap. This may be because the gap describes a domain (e.g. astrophysics, social science) where the current biotech-focused toolkit has limited relevance, or because the gap requires capability types not yet well-represented in the database.

Global Health

3 gaps · 15 tool connections · best match 72%

Limited Diagnostic Tools Optimized for Low-Resource Settings

Global Health· 3 capabilities

Tools

6

Best

72%

Current diagnostic tests are costly or often offer only limited information, failing to reveal the cause of disease and delaying or preventing administration of available treatments. Moreover, early detection systems for emerging pathogens are fragmented, delaying critical public health interventions.

Foundational Capabilities

Multiplexed Molecular Diagnostic Platforms

Develop rapid, integrated diagnostic tests for many diseases at once that are low cost.   From Jacob Trefethen’s essay: “I would count success in the diagnostic row as: a multiplex diagnostic for at least 3 pathogens (i.e. flu + COVID does not count), available over the counter for use at home. Either a respiratory panel (e.g. flu + COVID + strep throat) or a fever panel (e.g. malaria + dengue + typhoid) would count. An at-home multiplex STI panel would be great (e.g. chlamydia + gonorrhea + syphilis)”.

3 resources

Point of Care Diagnostics for Lead Testing

Lead exposure remains a critical but under-addressed public health challenge. Approximately 1 in 3 children globally have toxic levels of lead in their bloodstream, leading to developmental delays, cognitive impairment, and increased risk of chronic diseases. However, as it stands, most countries do not conduct comprehensive, routine surveillance of blood lead levels, because the primary method for blood lead surveillance is costly and inconvenient.

1 resource

Volatilomics for Early Detection

Utilize the analysis of volatile organic compounds (VOCs) as an early detection tool to identify pathogenic outbreaks.

3 resources

Candidate Tools

Gap applicability

72

Lateral flow technology is a signal transformation format used within CRISPR-Cas pathogen nucleic acid diagnostic platforms. In the supplied evidence, it functions alongside Cas protein-based sequence recognition and cleavage and with signal amplification approaches for rapid molecular diagnosis.

Lateral flow is a directly relevant point-of-care readout format for rapid molecular diagnostics and is well aligned with low-cost, portable use in low-resource settings. The supplied evidence specifically places it within CRISPR pathogen detection workflows, which could support simple field-deployable tests.

Mechanistic
88
Context
90
Throughput
82
First test time
80
First test cost
90
Replication
30
Practicality
71
Translatability
11

Assumptions: Assumes the gap prioritizes low-cost, easy-to-read point-of-care formats over laboratory-only assays.

Missing evidence: No direct evidence here for multiplexing performance, home use, pathogen panel breadth, or validated deployment in low-resource settings.

biosensors

assay method

Gap applicability

61

Biosensors have shown potential for success in diagnostic testing due to their ease of use, inexpensive materials, rapid results, and portable nature. Biosensors can be combined with nanomaterials to produce sensitive and easily interpretable results.

The item summary explicitly highlights ease of use, inexpensive materials, rapid results, and portability, which are core needs for diagnostics in low-resource settings. Biosensor formats could plausibly support simpler frontline detection than centralized lab methods.

Mechanistic
74
Context
86
Throughput
68
First test time
78
First test cost
84
Replication
25
Practicality
40
Translatability
30

Assumptions: Assumes the broad biosensor category includes deployable diagnostic implementations rather than only early-stage concepts.

Missing evidence: Evidence is generic; no specific analyte class, multiplexing capability, field validation, sensitivity/specificity, or low-resource deployment data are provided.

DETECTR

assay method

Gap applicability

56

Emerging CRISPR-based diagnostics (e.g., SHERLOCK and DETECTR)

DETECTR is explicitly listed as an emerging CRISPR-based diagnostic, making it a plausible tool for rapid pathogen detection. Such platforms may help close the gap between limited tests and more specific molecular identification.

Mechanistic
76
Context
72
Throughput
70
First test time
62
First test cost
58
Replication
25
Practicality
35
Translatability
35

Assumptions: Assumes CRISPR diagnostic workflows can be adapted toward point-of-care use.

Missing evidence: No direct evidence is provided for low-resource optimization, multiplexing, cost, field robustness, or operational simplicity.

SHERLOCK

assay method

Gap applicability

56

Emerging CRISPR-based diagnostics (e.g., SHERLOCK and DETECTR)

SHERLOCK is explicitly identified as an emerging CRISPR-based diagnostic platform, so it is mechanistically relevant to rapid molecular detection of pathogens. It could plausibly contribute to more informative diagnostics than simple single-analyte tests.

Mechanistic
76
Context
72
Throughput
70
First test time
62
First test cost
58
Replication
25
Practicality
35
Translatability
35

Assumptions: Assumes CRISPR-based molecular diagnostics are in scope for low-resource pathogen detection.

Missing evidence: The supplied evidence does not state readout format, equipment needs, multiplexing level, cost, or demonstrated suitability for low-resource or at-home use.

Gap applicability

49

Recombinase polymerase amplification (RPA) is used in the cited study as an amplification component within a combined diagnostic workflow that also includes photoactivated CRISPR-Cas12a and a tube-in-tube structure for visual detection of HPV16. The supplied evidence supports its inclusion in this integrated assay, but does not provide mechanistic detail about RPA itself.

RPA is presented here as an amplification component within an integrated diagnostic workflow, so it could plausibly support rapid nucleic-acid-based detection. As an enabling step, it may help increase sensitivity in compact diagnostic formats.

Mechanistic
71
Context
58
Throughput
63
First test time
66
First test cost
61
Replication
9
Practicality
59
Translatability
9

Assumptions: Assumes the amplification step can be separated from the specific HPV16 workflow and reused in other diagnostic contexts.

Missing evidence: The supplied evidence is limited to one combined assay and does not provide direct support for low-resource deployment, multiplexing, cost, or broad pathogen surveillance use.

Gap applicability

43

Microfluidic technology has become a powerful tool to address these challenges by supporting the miniaturization and automation of complex, multi-step workflows. Integrating cell-free gene and protein synthesis with microfluidic platforms has redefined bioprocessing, making it more compact and accessible.

The summary supports miniaturization and automation of complex multi-step workflows, which could be useful for integrating diagnostic chemistry into compact devices. That makes it a plausible platform component for future low-resource diagnostics, especially where assay integration matters.

Mechanistic
57
Context
55
Throughput
74
First test time
42
First test cost
36
Replication
25
Practicality
30
Translatability
25

Assumptions: Assumes compact integrated workflows are relevant even if the immediate use case is not yet a finished field diagnostic.

Missing evidence: No direct evidence is provided for pathogen diagnosis performance, low-cost manufacturability, field use, or low-resource robustness.

Under-Provisioning of Antibiotics, Vaccines and Other Interventions for Major Global Health Challenges

Global Health· 3 capabilities

Tools

6

Best

61%

Many of the world’s most deadly diseases—such as tuberculosis, Group A Streptococcus, hepatitis C, hepatitis B, and syphilis—lack effective vaccines or cures. Additionally, the pace of developing effective, low-cost therapeutics for emerging pathogens in low-resource settings is too slow to meet global health needs. Malnutrition exacerbates susceptibility to disease and impedes recovery; food security is especially important for early child development. Understanding the basic science of malnutrition during development is key to designing more effective interventions.

Foundational Capabilities

Effective Vaccines for Global Health Challenges

Accelerate clinical trials and support research into novel vaccine technologies that can be distributed in low-resource settings, especially those that do not require cold chain.

6 resources

Improved Interventions for Malnutrition

We need to better understand the physiology of malnutrition and develop improved interventions.

3 resources

Novel Therapeutic Approaches for Chronic Infections

New treatments for chronic infections (e.g., achieving a functional cure for hepatitis B) and novel monoclonal antibodies for diseases such as malaria. New formulations (e.g., one time dose time release delivery) could improve delivery and compliance in low resource settings.

6 resources

Candidate Tools

lipid nanoparticles

delivery harness

Gap applicability

61

This review examines recent advancements in nanoparticle (NP) delivery systems, with a focus on ... lipid nanoparticles (LNPs)... We discussed various NP platforms and their applications, such as ... dry powder formulations of mRNA-loaded LNPs for pulmonary delivery, and LNP-mediated siRNA delivery for respiratory infections.

The gap explicitly calls for novel vaccine technologies and improved therapeutic delivery, and this item is a concrete non-viral platform for mRNA, siRNA, and CRISPR cargo. The summary also mentions respiratory infection use cases and dry-powder pulmonary formulations, which plausibly align with scalable interventions for infectious disease.

Mechanistic
82
Context
72
Throughput
74
First test time
55
First test cost
48
Replication
35
Practicality
40
Translatability
45

Assumptions: Assumes nucleic-acid-based vaccines or therapeutics are in scope for the intervention need.

Missing evidence: No direct evidence here on low-resource deployment, cold-chain independence, cost of goods, or success for the specific diseases named in the gap.

virosomes

delivery harness

Gap applicability

52

This review explores virus biomimetic delivery systems, focusing on virus-like particles (VLPs) and virosomes as promising platforms for vaccine and therapeutic development. Virosomes are reconstituted viral envelopes that retain functional glycoproteins but lack a nucleocapsid.

The summary directly frames virosomes as platforms for vaccine and therapeutic development, which matches the gap's need for better vaccines and interventions. As a reconstituted viral-envelope platform, it is at least mechanistically relevant to antigen delivery and immunization strategies.

Mechanistic
76
Context
68
Throughput
58
First test time
46
First test cost
40
Replication
20
Practicality
28
Translatability
34

Assumptions: Assumes the platform can be adapted beyond the review context to pathogens relevant to global health.

Missing evidence: No pathogen-specific efficacy, manufacturability, stability, low-resource suitability, or replication data are provided.

HAdV-D10

delivery harness

Gap applicability

50

Species D adenoviruses, such as human adenovirus type 10 (HAdV-D10), are promising candidates due to low seroprevalence in humans... support the advancement of HAdV-D10 as a next-generation platform for gene delivery and vaccine development.

This item is explicitly described as a platform for gene delivery and vaccine development, which is directly relevant to the vaccine under-provisioning part of the gap. The noted low human seroprevalence could be advantageous for vector-based vaccination strategies.

Mechanistic
71
Context
63
Throughput
56
First test time
44
First test cost
36
Replication
22
Practicality
30
Translatability
38

Assumptions: Assumes adenoviral vaccine platforms are acceptable for the target use cases.

Missing evidence: No direct evidence on clinical performance, manufacturability, thermostability, low-resource deployment, or applicability to the named diseases.

liposomes

delivery harness

Gap applicability

43

A multitude of substances are currently under investigation for the preparation of nanoparticles for drug delivery, varying from biological substances like albumin, gelatine and phospholipids for liposomes

Liposomes are a concrete drug-delivery platform and could plausibly support formulation work for therapeutics or vaccine cargo. That makes them relevant to the gap's call for low-cost, effective interventions, but the supplied evidence is generic.

Mechanistic
58
Context
49
Throughput
55
First test time
57
First test cost
52
Replication
20
Practicality
30
Translatability
30

Assumptions: Assumes general nanocarrier formulation platforms are useful for early intervention development.

Missing evidence: The summary does not provide infectious-disease, vaccine, malnutrition, low-resource, or clinical translation evidence.

Exosomes

delivery harness

Gap applicability

38

Exosomes possess antigens and immunostimulatory molecules and can serve as cell-free vaccines to induce antitumor immunity. In addition, given their stability, low immunogenicity, and targeting ability, exosomes represent ideal drug delivery systems in tumor immunotherapy.

The summary states that exosomes can function as cell-free vaccines and as drug delivery systems, so they are at least directionally relevant to vaccine and therapeutic development. Their stated stability and low immunogenicity could matter for intervention design.

Mechanistic
54
Context
42
Throughput
46
First test time
34
First test cost
28
Replication
18
Practicality
22
Translatability
24

Assumptions: Assumes the vaccine and delivery properties described in tumor immunotherapy may generalize to infectious-disease applications.

Missing evidence: Evidence is centered on tumor immunotherapy, with no direct support here for global infectious diseases, low-resource deployment, manufacturing feasibility, or cost.

microfluidics

assay method

Gap applicability

34

The review integrates data from in vitro, in silico, and clinical studies, including both classical detection strategies and emerging technologies such as clustered regularly interspaced short palindromic repeats (CRISPR)-based modulation, biosensors, and microfluidics.

Microfluidics is a plausible enabling assay/platform technology for faster screening or detection workflows, which could modestly support intervention development pipelines. However, the supplied summary does not directly connect it to vaccines, therapeutics, or malnutrition interventions.

Mechanistic
33
Context
31
Throughput
62
First test time
51
First test cost
45
Replication
18
Practicality
26
Translatability
20

Assumptions: Assumes platform integration and detection methods are relevant to accelerating R&D workflows.

Missing evidence: No direct evidence on use for vaccine development, therapeutic formulation, malnutrition research, or low-resource deployment is provided.

Lack of Infrastructure Technologies and Strategies Optimized for Low-Resource Settings

Global Health· 8 capabilities

Tools

3

Best

49%

Global health outcomes are compromised by insufficient health systems and infrastructure that limit our ability to prevent and control infectious diseases. Key deficiencies include the lack of cost-effective antimicrobial materials to block pathogen transmission, underdeveloped intervention models for effective public health strategies, and outdated sanitation solutions that fail to meet the needs of vulnerable populations.

Foundational Capabilities

Better PPE

Develop next-generation, affordable, high-performance personal protective equipment to reduce transmission risks.

5 resources

Decentralized, Low-Resource Vaccine Production

Establish scalable, decentralized vaccine production systems to rapidly deploy immunizations during outbreaks, reducing reliance on centralized facilities.

3 resources

Enhanced Sanitation Solutions

Innovate and scale sanitation technologies to improve water, sanitation, and hygiene, to reduce disease transmission.

1 resource

Improved Antibiotics Discovery

Develop next-generation antibiotics and novel antimicrobial compounds using advanced discovery platforms to stay ahead of evolving pathogens.

10 resources

Improved Broad-Spectrum Antiviral Discovery

Develop new broad-spectrum antivirals that can be used to treat or prevent infection from evolving viruses.

2 resources

Improved Modeled Intervention Strategies

Advance public health by creating better models and strategies for intervention, integrating data-driven insights to inform policy and practice.

2 resources

Improved Vaccine Distribution 

Improve vaccine distribution systems, which remain a significant challenge in low-resource settings.

1 resource

Transmission Reduction Through Surfaces and Textiles

Innovate new materials and coatings for surfaces and textiles that actively reduce pathogen viability and transmission.

2 resources

Candidate Tools

mathematical modeling

computation method

Gap applicability

49

Mathematical modeling is a computational method used to guide the rational design of synthetic gene circuits. The cited literature also places it alongside live-cell imaging and within quantitative model systems used to study microbial drug resistance and spatiotemporal features of cancer in mammalian cells.

The gap explicitly includes underdeveloped intervention models for public health strategy, and this item is a computational modeling method. It could plausibly support low-cost evaluation and prioritization of intervention strategies before field deployment in low-resource settings.

Mechanistic
58
Context
52
Throughput
86
First test time
88
First test cost
84
Replication
20
Practicality
83
Translatability
12

Assumptions: Assumes the modeling approach can be adapted beyond gene-circuit design to infectious-disease intervention planning.

Missing evidence: No direct evidence here that the method has been applied to global health intervention modeling, sanitation, PPE, or low-resource deployment.
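The low-cost, pre-deployment intervention screening described above can be sketched with a minimal compartmental (SIR) epidemic model that compares final attack rates with and without a transmission-reducing intervention. This is an illustrative sketch: the function, parameter values, and the 40% transmission reduction are assumptions, not taken from the cited literature.

```python
# Minimal discrete-time SIR sketch: compare epidemic size with and
# without a transmission-reducing intervention. All parameter values
# are illustrative assumptions.

def simulate_sir(beta, gamma, s0=0.999, i0=0.001, days=365):
    """Return the final attack rate (fraction of the population ever infected)."""
    s, i, r = s0, i0, 0.0
    for _ in range(days):
        new_inf = beta * s * i   # new infections this step
        new_rec = gamma * i      # recoveries this step
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return r + i

baseline     = simulate_sir(beta=0.3, gamma=0.1)        # R0 = 3
intervention = simulate_sir(beta=0.3 * 0.6, gamma=0.1)  # 40% transmission cut
print(f"attack rate: baseline {baseline:.2f}, intervention {intervention:.2f}")
```

Sweeping `beta` over candidate intervention strengths is exactly the kind of cheap in silico screen that could precede field deployment in low-resource settings.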

lipid nanoparticles

delivery harness

Gap applicability

45

This review examines recent advancements in nanoparticle (NP) delivery systems, with a focus on ... lipid nanoparticles (LNPs)... We discussed various NP platforms and their applications, such as ... dry powder formulations of mRNA-loaded LNPs for pulmonary delivery, and LNP-mediated siRNA delivery for respiratory infections.

The gap includes decentralized, low-resource vaccine production and distribution challenges, and this item is a non-viral nucleic-acid delivery platform with explicit mRNA therapeutic relevance. Its mention of dry-powder pulmonary formulations also makes it plausibly relevant to infrastructure-constrained deployment strategies.

Mechanistic
46
Context
55
Throughput
62
First test time
42
First test cost
34
Replication
50
Practicality
50
Translatability
50

Assumptions: Assumes LNP formulations could be adapted for low-resource vaccine delivery rather than only advanced clinical settings.

Missing evidence: No supplied evidence on thermostability, field manufacturability, cold-chain independence, cost, or use in low-resource settings.

Gap applicability

37

Sensitive detection of tobamoviruses in the field with minimal sample preparation can be achieved using latest technologies such as isothermal amplification, CRISPR/Cas-hybrid assays or next-generation sequencing.

The summary explicitly places next-generation sequencing among field detection technologies with minimal sample preparation, which could support decentralized surveillance in infrastructure-limited settings. That makes it a plausible fit for improved infectious-disease monitoring rather than direct prevention hardware.

Mechanistic
34
Context
41
Throughput
63
First test time
36
First test cost
22
Replication
50
Practicality
50
Translatability
50

Assumptions: Assumes surveillance capacity is part of the intervention-strategy bottleneck in the stated gap.

Missing evidence: No direct evidence on affordability, portability, power requirements, or suitability for low-resource public health programs.

Physics

8 gaps · 12 tool connections · best match 56%

Incomplete Resolution of the Possibility of Low-Energy Nuclear Reactions

Physics· 1 capability

Tools

3

Best

45%

Low-energy nuclear reactions (LENRs) have received substantial attention, and there is no good evidence that they exist; even so, other mechanisms or parameter combinations may remain underexplored.

Foundational Capabilities

Possibility of LENR Experiments

Design and implement experiments to rigorously test low-energy nuclear reaction principles, providing clear data on reaction mechanisms and viability. This is highly speculative and, at this point, unlikely to yield practical LENR; see, however: https://coldfusionblog.net/2019/03/13/the-case-against-cold-fusion-experiments/

4 resources

Candidate Tools

graph neural networks

computation method

Gap applicability

45

Graph neural networks (GNNs) are one of the fastest growing classes of machine learning models. They are of particular relevance for chemistry and materials science, as they directly work on a graph or structural representation of molecules and materials and therefore have full access to all relevant information required to characterize materials.

The gap explicitly allows that underexplored mechanisms or parameter combinations may remain, and GNNs are described as applicable to chemistry and materials science where structure-property relationships matter. They could plausibly support prioritization of materials or conditions for more rigorous LENR screening, but the supplied evidence does not show direct use for nuclear phenomena.

Mechanistic
42
Context
58
Throughput
72
First test time
46
First test cost
50
Replication
35
Practicality
45
Translatability
40

Assumptions: Assumes the LENR work would include materials-selection or parameter-screening components.

Missing evidence: No direct evidence here for LENR, nuclear reaction modeling, or experimental validation in this context.
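The structure-property screening role attributed to GNNs above rests on one core mechanism: message passing over a graph of atoms or sites. A single mean-aggregation round can be sketched in a few lines; the toy graph, scalar features, and the absence of learned weights are all simplifying assumptions (real models stack many rounds with trained weight matrices).

```python
# One round of mean-aggregation message passing on a toy graph -- the
# core mechanism of the GNNs described above. Graph and features are
# illustrative; real models learn weights and stack many rounds.

def message_pass(features, edges):
    """Update each node's scalar feature by averaging it with its neighbours'."""
    nbrs = {n: [] for n in features}
    for a, b in edges:               # undirected edge list
        nbrs[a].append(b)
        nbrs[b].append(a)
    updated = {}
    for node, feat in features.items():
        pooled = feat + sum(features[m] for m in nbrs[node])
        updated[node] = pooled / (1 + len(nbrs[node]))
    return updated

# Toy "material": a three-atom chain with one scalar feature per atom.
feats = {"A": 1.0, "B": 0.0, "C": 1.0}
print(message_pass(feats, [("A", "B"), ("B", "C")]))
```

After repeated rounds, each node's feature mixes in information from progressively larger neighbourhoods, which is what lets a readout layer rank candidate materials or conditions for screening.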

QM calculations

computation method

Gap applicability

28

QM calculations are a quantum-chemical computational method used to predict conformer energetics, rotational barriers, and infrared spectra of transient glutamine isomers in LOV photoreceptors. In EL222, AsLOV2, and RsLOV, these calculations were used to infer favored glutamine orientations along an assumed light-driven reaction path and to interpret transient infrared behavior.

Quantum-chemical calculations could plausibly be used to examine candidate solid-state environments or energetics relevant to proposed LENR mechanisms before running experiments. This is only a weak link because the provided evidence is from photoreceptor conformer analysis, not nuclear or condensed-matter fusion questions.

Mechanistic
34
Context
36
Throughput
38
First test time
42
First test cost
40
Replication
9
Practicality
59
Translatability
9

Assumptions: Assumes the intended use is exploratory theoretical filtering rather than direct proof of LENR.

Missing evidence: No supplied evidence for modeling nuclear reaction rates, metal hydrides, lattice effects, or LENR-specific systems.

Gap applicability

22

Likelihood maximization analysis is a computational method for selecting reaction coordinate models for individual substeps of a conformational transition and inferring tentative transition states. In the cited application, it was applied to transition path sampling data from explicit-solvent molecular dynamics of the millisecond partial unfolding transition in the photoactive yellow protein photocycle.

If LENR experiments generated complex time-series or simulation trajectories, a reaction-coordinate or transition-state inference framework could in principle help compare mechanistic models. The match is weak because the supplied evidence is limited to protein conformational transitions rather than physical nuclear or materials experiments.

Mechanistic
22
Context
24
Throughput
41
First test time
33
First test cost
39
Replication
9
Practicality
71
Translatability
11

Assumptions: Assumes availability of trajectory-like data or mechanistic model comparison tasks.

Missing evidence: No direct evidence for application to nuclear processes, calorimetry, particle detection, or solid-state fusion hypotheses.

Artisanal Nature of Experimental Physics Platforms

Physics· 1 capability

Tools

3

Best

43%

There is a lack of open, repeatable tooling to spread experimental physics into new areas; for example, could ultracold atoms be leveraged more readily by people outside a small set of quantum physics labs, paving the way for more applied uses?

Foundational Capabilities

Standardized Open Experimental Physics Platform Tools

Applying more modern digital fabrication and hardware engineering techniques could make experimental physics knowledge more transferable and applicable.

1 resource

Candidate Tools

Gap applicability

43

Light-induction hardware-software platforms are optogenetic delivery systems that provide controlled illumination using formats ranging from simple illumination set-ups to microscopy, microtiter plate, and bioreactor designs. They are used to stimulate biological systems with light, and automated implementations can support computer-controlled experiments with in silico feedback control.

This is one of the few items that is explicitly a hardware-software platform with controlled illumination and computer-controlled operation, which loosely matches the gap's need for more standardized and repeatable experimental platforms. Its relevance is mainly at the level of open instrumentation patterns rather than physics-specific experimental content.

Mechanistic
72
Context
41
Throughput
68
First test time
58
First test cost
52
Replication
9
Practicality
59
Translatability
9

Assumptions: Assumes cross-domain value from standardized automated optical hardware/software platforms to experimental physics tooling.

Missing evidence: No evidence that this platform is open-source, used in physics, or applicable to ultracold-atom or quantum-lab workflows.

Gap applicability

37

Microscopy designs for optogenetic stimulation are light-delivery platform configurations that range from simple illumination set-ups to sophisticated microscopy, microtiter plate, and bioreactor designs. These platforms support optogenetic stimulation across experimental scales from single-cell stimulation to whole-culture illumination, and some incorporate automated measurement and stimulation for computer-controlled operation.

The item describes configurable optical platform designs spanning simple setups to automated microscopy, which could inform more modular and repeatable instrument design practices. It is potentially relevant because the gap emphasizes transferable hardware engineering techniques.

Mechanistic
64
Context
34
Throughput
63
First test time
50
First test cost
42
Replication
9
Practicality
59
Translatability
9

Assumptions: Assumes microscopy platform design principles can transfer to broader experimental physics instrumentation.

Missing evidence: No direct evidence of use outside biology, no open-tooling claim, and no connection to ultracold atoms or quantum experiments.

Gap applicability

33

Bioreactor designs for optogenetic stimulation are light-delivery and culture-platform configurations used to stimulate optogenetic systems across experimental formats ranging from simple illumination set-ups to microscopy, microtiter plate, and bioreactor designs. These platforms support applications spanning single-cell stimulation to whole-culture illumination and can be integrated with automated measurement and stimulation for computer-controlled experiments.

This item is a concrete example of standardized light-delivery and culture-platform configurations with automation and feedback control, which is directionally aligned with making experimental setups more repeatable. The fit is weak because the described use case is biological stimulation rather than physics experimentation.

Mechanistic
58
Context
28
Throughput
61
First test time
47
First test cost
40
Replication
9
Practicality
59
Translatability
9

Assumptions: Assumes generalizable lessons from automated platform engineering are relevant to physics labs.

Missing evidence: No evidence for physics-domain adoption, open dissemination, or applicability to non-biological experimental platforms.

Robust and Compact Plasma Confinement for Fusion is Still Not Solved

Physics· 2 capabilities

Tools

3

Best

24%

Stable plasma confinement is a major obstacle in achieving practical fusion energy. Advanced control systems and novel confinement techniques are needed.

Foundational Capabilities

AI-Driven Control Systems for Plasma Confinement

Implement AI-based control systems to dynamically stabilize and confine plasma during fusion reactions, improving overall efficiency and stability.

1 resource

Magneto-Inertial Confinement

Develop magneto-inertial confinement strategies that combine magnetic fields with inertial forces to better confine plasma for fusion.

2 resources

Candidate Tools

CoTV

engineering method

Gap applicability

24

CoTV is a multi-agent deep reinforcement learning system that cooperatively controls traffic light signals and connected autonomous vehicles in mixed-autonomy urban traffic scenarios. It was reported as a computational control method and evaluated in SUMO simulation.

The gap explicitly calls for advanced control systems, and this item is a cooperative multi-agent deep reinforcement learning control method. It could plausibly inform simulation-first control architectures for dynamically stabilizing complex, distributed plasma control actuators, but the provided evidence is from urban traffic simulation rather than fusion.

Mechanistic
34
Context
12
Throughput
58
First test time
72
First test cost
66
Replication
9
Practicality
59
Translatability
9

Assumptions: Assumes cross-domain transfer of multi-agent RL control ideas from traffic systems to plasma control is being considered.

Missing evidence: No evidence of use in plasma physics, tokamaks, magnetic confinement, real-time control latency, or fusion-relevant actuator/sensor integration.

This tool is a reinforcement learning-based computational method for multi-intersection traffic signal scheduling that incorporates visible light communication (VLC) queuing, request, and response behaviors. It is described as part of a traffic management system integrating VLC localization services with learning-driven signal control.

This is an actionable reinforcement learning control method, which is directionally aligned with the gap's need for AI-driven plasma stabilization. It may be useful as a generic template for decentralized control logic in simulation, but the supplied evidence is tightly tied to traffic signaling and VLC-specific behaviors.

Mechanistic
28
Context
8
Throughput
56
First test time
74
First test cost
70
Replication
9
Practicality
59
Translatability
9

Assumptions: Assumes the user wants candidate computational control frameworks, not fusion-specific hardware.

Missing evidence: No evidence for plasma confinement, magnetic control, continuous-time physical systems, safety constraints, or performance under fusion-relevant dynamics.

Gap applicability

19

Learning-driven traffic signal control is an engineering method for multi-intersection traffic management that integrates visible light communication localization services with reinforcement learning-based signal scheduling. It uses VLC-derived queuing, request, and response behaviors to control traffic signals in simulated multi-intersection settings.

The item is an engineering method centered on reinforcement learning-based dynamic control, which loosely matches the gap's call for AI-based confinement control. However, its demonstrated mechanism is embedded in traffic management with VLC localization, so relevance to plasma confinement is weak and indirect.

Mechanistic
24
Context
7
Throughput
52
First test time
70
First test cost
66
Replication
9
Practicality
59
Translatability
9

Assumptions: Assumes only high-level control-method analogies are acceptable.

Missing evidence: No fusion-domain validation, no plasma state estimation evidence, and no indication that the sensing or control structure maps to tokamak or magneto-inertial confinement.
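All three entries above share one transferable pattern: a sensed state, a feedback law, and an actuator in a closed loop. The simplest possible version, a proportional controller suppressing an unstable scalar mode, can stand in for what an AI-driven confinement controller must do; the dynamics, gain, and time step here are illustrative assumptions, not a fusion control scheme, and an RL agent would learn the feedback law rather than use a fixed gain.

```python
# Minimal closed-loop sketch: proportional feedback stabilizing an
# unstable scalar mode x' = a*x + u, a toy stand-in for the kind of
# instability a confinement controller must suppress. Values are
# illustrative; an RL agent would learn the feedback law instead.

def run_loop(a=0.5, gain=1.5, x0=1.0, dt=0.01, steps=2000):
    """Euler-integrate x' = a*x + u under feedback u = -gain*x; return |x| at the end."""
    x = x0
    for _ in range(steps):
        u = -gain * x        # feedback law (learned, in the RL case)
        x += dt * (a * x + u)  # plant dynamics
    return abs(x)

print(run_loop(gain=1.5))  # closed loop: decays, since a - gain < 0
print(run_loop(gain=0.0))  # open loop: the unstable mode grows
```

The gap between these toy loops and fusion is exactly the missing evidence flagged above: real plasma control adds multivariate state estimation, actuator limits, safety constraints, and hard real-time latency budgets.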

Inadequate Imaging of Material Structures

Physics· 1 capability

Tools

2

Best

56%

Many materials’ internal structures are difficult to image with current technologies, limiting our understanding of their properties at the nanoscale.

Foundational Capabilities

Neutron Microscopy

Develop and deploy neutron microscopy techniques to image material structures at high resolution, benefiting from neutrons’ deep penetration and sensitivity to light elements.

2 resources

Candidate Tools

Gap applicability

56

The better understanding of the dynamic features of this membrane system requires the use of non-invasive techniques, such as small angle neutron scattering (SANS), which is capable of providing accurate, statistically and spatially averaged information on the repeat distances of periodically organized thylakoid membranes under physiologically relevant conditions with time resolutions of seconds and minutes.

This is the only candidate that directly uses neutrons, which aligns with the gap's stated capability focus on neutron microscopy. Although SANS is a scattering method rather than direct microscopy, it can still provide nanoscale structural information about internal material organization, especially where penetration and sensitivity to light elements matter.

Mechanistic
72
Context
78
Throughput
55
First test time
28
First test cost
22
Replication
50
Practicality
38
Translatability
62

Assumptions: Assumes the gap can be partially addressed by neutron-based structural characterization methods, not only direct image-forming microscopes.

Missing evidence: No evidence here on applicability to non-biological materials, achievable spatial resolution for the target materials, or compatibility with the specific neutron microscopy resources named in the gap.

Gap applicability

31

It is anticipated that the combination of chemical probes, highly selective inhibitors, and sensors with advanced (super resolution) imaging modalities, such as PharmacoSTORM and correlative light-electron microscopy, will uncover the fundamental basis of lipid signaling at nanoscale resolution in the brain.

Correlative light-electron microscopy is at least mechanistically relevant to nanoscale structural imaging, and electron microscopy can resolve internal structures beyond conventional light microscopy. It could plausibly help as a benchmark or complementary workflow for difficult-to-image structures.

Mechanistic
58
Context
30
Throughput
34
First test time
24
First test cost
18
Replication
30
Practicality
25
Translatability
22

Assumptions: Assumes complementary nanoscale imaging methods are in scope even if they are not neutron-based.

Missing evidence: Provided evidence is prospective and brain-signaling specific; no direct evidence for materials imaging, internal material structures, or operational fit with the neutron-microscopy context.

Uncertainty and Noise in the Science of Room-Temperature Superconductivity

Physics· 1 capability

Tools

1

Best

39%

There remains significant uncertainty over whether metallic hydrogen can exhibit room-temperature superconductivity at reasonable pressures, and measurements of other systems have been irreproducible and fragmented.

Foundational Capabilities

Conduct Rigorous, Non-Fraudulent Experiments

Encourage and fund rigorous experimental investigations into metallic hydrogen superconductivity, ensuring data integrity and reproducibility.

1 resource

Candidate Tools

theoretical modeling

computation method

Gap applicability

39

Continuous development of experimental methodologies, synergistic correlation with theoretical modeling, and the expansion to other nonequilibrium, photoswitchable, and controllable protein systems will greatly advance the chemical, physical, and biological sciences.

The gap centers on uncertainty and fragmented measurements, and theoretical modeling can in principle help interpret conflicting experimental results and prioritize which conditions to test next. This is only a weak-to-moderate link because the supplied evidence does not show a superconductivity-specific modeling workflow or reproducibility protocol.

Mechanistic
42
Context
38
Throughput
55
First test time
62
First test cost
68
Replication
35
Practicality
45
Translatability
30

Assumptions: Assumes a general modeling approach could be repurposed outside the original light-responsive protein context.

Missing evidence: No evidence here that the method was applied to superconductivity, high-pressure materials, or experimental fraud/reproducibility control.

3 unmatched gaps

Inability to Model Turbulence

Physics· 1 capability
No tool matches

Modeling turbulence remains one of the most challenging problems in physics due to its nonlinear and chaotic nature.

Foundational Capabilities

Develop New Modeling Frameworks for Turbulence

Create and implement novel mathematical models and computational frameworks that can more accurately simulate and predict turbulent flows.

6 resources

Candidate Tools

No candidate tools matched this gap. This may be because the gap describes a domain (e.g. astrophysics, social science) where the current biotech-focused toolkit has limited relevance, or because the gap requires capability types not yet well-represented in the database.

Particle Accelerators Are Large and Expensive

Physics· 1 capability
No tool matches

Traditional particle accelerators are enormous and costly, limiting experimental flexibility. Compact, benchtop accelerators could democratize high-energy physics and open new avenues in applications such as medical isotope production.

Foundational Capabilities

Benchtop Particle Accelerators

Develop compact particle accelerators that can be operated on a benchtop scale, reducing both cost and size while retaining necessary performance for scientific and medical applications.

3 resources

Candidate Tools

No candidate tools matched this gap. This may be because the gap describes a domain (e.g. astrophysics, social science) where the current biotech-focused toolkit has limited relevance, or because the gap requires capability types not yet well-represented in the database.

Quantum Gravity is Experimentally Hard to Constrain

Physics· 1 capability
No tool matches

Quantum gravity remains elusive, with experimental constraints hindered by the need for extremely large-scale or prohibitively expensive experiments.

Foundational Capabilities

Targeted Experiments for Quantum Gravity

Design and execute key experiments that probe quantum gravity phenomena without requiring massive accelerators, leveraging innovative, cost-effective approaches.

9 resources

Candidate Tools

No candidate tools matched this gap. This may be because the gap describes a domain (e.g. astrophysics, social science) where the current biotech-focused toolkit has limited relevance, or because the gap requires capability types not yet well-represented in the database.

Mechanical Engineering

7 gaps · 10 tool connections · best match 69%

Bioengineering is Still Done Manually

Mechanical Engineering· 2 capabilities

Tools

6

Best

69%

Despite advances in automation, many bioengineering processes remain highly manual, limiting throughput and reproducibility in laboratory settings.

Foundational Capabilities

AI-Assisted Programming of Lab Robots

Implement AI-assisted systems for programming and controlling lab robots to automate bioengineering workflows and reduce the reliance on manual processes.

1 resource

Bio Lab of the Future

Develop an integrated "bio lab of the future" that combines robotics, AI, and real-time data analysis to fully automate bioengineering tasks.

4 resources

Candidate Tools

Automated 96-well microplate illumination and measurement is an assay method for high-throughput optogenetic characterization of cultures under controlled light input. In the cited Saccharomyces cerevisiae workflow, it supported construction and characterization of split transcription factors containing cryptochrome and Enhanced Magnet light-sensitive dimerizers.

This is directly an automated assay workflow, replacing manual illumination and measurement with a 96-well format that improves throughput and standardization. It plausibly addresses the gap's core bottleneck of manual, low-reproducibility lab operations.

Mechanistic
92
Context
72
Throughput
94
First test time
62
First test cost
56
Replication
9
Practicality
59
Translatability
9

Assumptions: Assumes the gap includes automating experimental execution and measurement, not only robot programming.

Missing evidence: No direct evidence here about integration with general-purpose lab robots, AI control, or non-optogenetic workflows.

cell-free protein synthesis

engineering method

Gap applicability

64

Cell-free protein synthesis (CFPS) has been used as a transformative technology in synthetic biology, providing a programmable, scalable, and automation-compatible platform for biological engineering.

The summary explicitly describes CFPS as programmable, scalable, and automation-compatible, which makes it a plausible platform for reducing manual biological engineering steps. It also fits high-throughput experimentation better than many cell-based workflows.

Mechanistic
78
Context
68
Throughput
86
First test time
74
First test cost
58
Replication
35
Practicality
35
Translatability
35

Assumptions: Assumes in vitro prototyping platforms are in scope for reducing manual bioengineering work.

Missing evidence: Replication and practicality metadata are missing, and the summary does not specify concrete robotic integrations or end-to-end automated workflows.

This chapter explores the principles, platforms, and applications of CFS-based HTS... Altogether, CFS-based HTS offers a flexible, rapid, and accessible approach for next-generation biomolecular screening and therapeutic development.

This is an explicitly high-throughput screening method in cell-free systems, so it plausibly reduces manual handling and supports more standardized screening workflows. Its fit is strongest for the throughput side of the gap.

Mechanistic
76
Context
64
Throughput
92
First test time
72
First test cost
60
Replication
35
Practicality
35
Translatability
35

Assumptions: Assumes the gap includes screening and assay automation, not only physical robotics.

Missing evidence: The supplied evidence does not state direct robot compatibility, closed-loop control, or replication/practicality metrics.

Gap applicability

56

In silico feedback control strategies are computationally implemented control schemes coupled to optogenetic measurement and light stimulation platforms. They are used to create computer-controlled living systems through automated measurement and stimulation workflows.

The item is explicitly about computer-controlled living systems using automated measurement and stimulation workflows, which aligns with reducing manual intervention. It is especially relevant to the 'bio lab of the future' and AI-enabled control framing of the gap.

Mechanistic
82
Context
58
Throughput
66
First test time
48
First test cost
52
Replication
9
Practicality
59
Translatability
9

Assumptions: Assumes optogenetic closed-loop automation is a relevant exemplar for broader lab automation needs.

Missing evidence: No primary paper count is provided, and the evidence is limited to optogenetic light-control contexts rather than general bioengineering workflows.

modular cloning scheme

engineering method

Gap applicability

55

The modular cloning scheme is an engineering method used with laboratory automation to support high-throughput construction and characterization of optogenetic split transcription factors in Saccharomyces cerevisiae. In the cited work, it enabled assembly of light-responsive transcriptional regulators incorporating cryptochrome and Enhanced Magnet dimerization modules.

This engineering method is explicitly used with laboratory automation and modular assembly, which can reduce manual construct-building steps and support higher-throughput workflows. Its relevance is strongest for standardized build phases rather than full lab automation.

Mechanistic
70
Context
55
Throughput
80
First test time
66
First test cost
68
Replication
9
Practicality
59
Translatability
9

Assumptions: Assumes the gap includes automating DNA assembly and construct generation workflows.

Missing evidence: Evidence is limited to a specific yeast optogenetics use case and does not show broader robotic or AI-driven deployment.
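The automation-friendly property of a modular cloning scheme is that assembly order is dictated by matching overhangs rather than by manual planning, so a robot (or a script) can order scrambled parts deterministically. The sketch below illustrates that idea only; the part names and single-letter overhangs are invented, and real Golden-Gate-style schemes use four-base DNA overhangs and enzymatic assembly.

```python
# Toy sketch of overhang-directed modular assembly: each part carries a
# left and right overhang, and the chain order is fully determined by
# which overhangs match. Part names and overhangs are invented.

def assemble(parts, start="A"):
    """Order (left, name, right) parts so each right overhang matches the next left."""
    by_left = {p[0]: p for p in parts}
    order, cur = [], start
    while cur in by_left:
        left, name, right = by_left.pop(cur)
        order.append(name)
        cur = right
    return "-".join(order)

scrambled = [("B", "gene", "C"), ("A", "promoter", "B"), ("C", "terminator", "D")]
print(assemble(scrambled))  # order recovered from overhangs alone
```

Because the overhangs alone determine the product, construct lists can be generated and verified programmatically, which is what makes such schemes a natural fit for the automated build workflows described above.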

yeast optogenetic toolkit

multi component switch

Gap applicability

50

The yeast optogenetic toolkit (yOTK) is a modular construct system for light-controlled gene expression in Saccharomyces cerevisiae. It integrates optogenetic parts into an existing yeast toolkit and supports rapid assembly of light-controlled circuits, including split transcription factors built from cryptochrome and Enhanced Magnet dimerizers.

The toolkit supports rapid assembly and is associated with laboratory automation and high-throughput characterization, so it could help standardize and partially automate a specific class of bioengineering workflows. It is a narrower fit because it is tied to yeast optogenetic circuit construction rather than general lab automation.

Mechanistic
58
Context
46
Throughput
72
First test time
50
First test cost
46
Replication
20
Practicality
71
Translatability
11

Assumptions: Assumes specialized modular toolkits that enable automated build-test cycles are relevant to the gap.

Missing evidence: No evidence here for AI-assisted robot programming, broad organism coverage, or automation outside the yeast optogenetics context.

Designing Manufacturing Systems is Hard

Mechanical Engineering· 1 capability

Tools

2

Best

33%

Modern manufacturing system design remains complex, with traditional methods relying on outdated processes. AI-based design approaches have the potential to reimagine these systems without relying on the legacy of humanoid robots.

Foundational Capabilities

AI-Based Design of Manufacturing Systems

Develop AI-driven design tools that optimize manufacturing systems from the ground up, moving beyond traditional approaches to enable more efficient, automated production processes.

2 resources

Candidate Tools

AI-driven vector design

computation method

Gap applicability

33

Future directions focus on AI-driven vector design, hybrid systems (AAV-exosomes), and standardized manufacturing to achieve "single-dose, lifelong cure" paradigms for muscular disorders.

This is the only candidate explicitly framed as an AI-driven design method, which loosely matches the gap's interest in AI-based design of systems. However, the supplied evidence is about vector design and standardized manufacturing in gene therapy, not manufacturing-system design in mechanical engineering.

Mechanistic
62
Context
18
Throughput
55
First test time
45
First test cost
50
Replication
30
Practicality
40
Translatability
20

Assumptions: Treating the gap as broadly about AI design methods rather than biology-specific manufacturing.

Missing evidence: No evidence that this method applies to factory layout, process planning, robotics-free manufacturing design, or mechanical engineering workflows.

molecular docking

computation method

Gap applicability

21

Molecular docking and knowledge of bioinformatics are also being used to predict potential applications and manufacturing by industry.

It is a computational design method and the summary mentions prediction of applications and manufacturing by industry. That said, the evidence points to molecular/biological design rather than manufacturing-system design, so relevance to this gap is weak.

Mechanistic
22
Context
12
Throughput
40
First test time
60
First test cost
55
Replication
25
Practicality
35
Translatability
15

Assumptions: Assuming any computational design method with a manufacturing mention could be considered for a very weak link.

Missing evidence: No evidence of use for manufacturing system architecture, production-line optimization, plant design, or mechanical engineering decision support.

Many Methods Are Stuck in 20th Century Fabrication Paradigms

Mechanical Engineering· 4 capabilities

Tools

2

Best

29%

Modern manufacturing systems largely rely on paradigms developed in the last century where large machines produce components smaller than themselves. This approach is increasingly limited by scaling challenges and cost inefficiencies. To meet future demands, we need to reimagine manufacturing by developing universal robotic construction systems and low-capital, high-energy manufacturing solutions that leverage emerging technologies such as advanced robotics, precision machining, and renewable energy integration. These innovations could, for example, dramatically lower the cost of machining high-performance materials like titanium or enable widespread automation in sectors like desalination.

Foundational Capabilities

Automated Defect and Weld Inspection

Automation of welding requires improved ability to verify welds.

2 resources

Cheap, Automatic Five-Axis Electrochemical Machining

Electrochemical machining (ECM) creates complex shapes with high precision, but is costly. Lower cost ECM could make machined titanium as cheap as aluminum.

1 resources

Low-Capital Manufacturing Systems 

If we could create more manufacturing systems with low capex but high energy needs, we could take advantage of drastically cheaper solar. One particularly useful example would be desalination.

1 resources

Universal Robotic Construction Systems

Build the robots that build the robots. The current paradigm of all automated manufacturing is for machines (from robot arms to presses) to make things smaller than themselves, which quickly runs into scaling limits.

2 resources

Candidate Tools

Gap applicability

29

In silico feedback control strategies are computationally implemented control schemes coupled to optogenetic measurement and light stimulation platforms. They are used to create computer-controlled living systems through automated measurement and stimulation workflows.

The gap emphasizes universal robotic construction systems and automation, and this item is an actionable automation/control method based on closed-loop measurement and stimulation. Its strongest relevance is as a general control architecture for automated systems, including inspection or adaptive process control, rather than as a fabrication technology itself.

Mechanistic
42
Context
28
Throughput
55
First test time
46
First test cost
44
Replication
9
Practicality
59
Translatability
9

Assumptions: Assumes the gap can include automation/control methods that support manufacturing workflows, not only material-processing hardware.

Missing evidence: No supplied evidence ties this method to mechanical manufacturing, welding, machining, robotics, or industrial energy systems.
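The in silico feedback control strategy scored above (closed-loop measurement plus light stimulation) can be illustrated with a toy loop, assuming a hypothetical first-order gene-expression model and a simple bang-bang light policy; all parameter values are invented for illustration:

```python
def simulate(setpoint=50.0, steps=200, dt=0.1, k_on=100.0, k_off=1.0):
    """Toy closed-loop optogenetic control: measure expression x each step,
    switch the light on when below the setpoint, off when above.
    Model and constants are assumptions, not from the cited platforms."""
    x = 0.0
    for _ in range(steps):
        light = x < setpoint                 # bang-bang control law (assumed)
        production = k_on if light else 0.0  # light-driven production
        x += dt * (production - k_off * x)   # production minus first-order decay
    return x

# The trajectory settles into a small oscillation around the setpoint.
print(abs(simulate() - 50.0) < 10.0)  # -> True
```

The same measure-compare-actuate structure is what would generalize to inspection or adaptive process control in a manufacturing setting.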

cell-free protein synthesis

engineering method

Gap applicability

24

Cell-free protein synthesis (CFPS) has been used as a transformative technology in synthetic biology, providing a programmable, scalable, and automation-compatible platform for biological engineering.

This is a programmable, scalable, automation-compatible manufacturing method, so it loosely aligns with the gap's interest in rethinking production paradigms. However, the supplied evidence supports biological production workflows, not machining, robotic construction, weld inspection, or low-capital industrial systems.

Mechanistic
18
Context
14
Throughput
62
First test time
58
First test cost
42
Replication
50
Practicality
50
Translatability
50

Assumptions: Assumes broad interest in alternative manufacturing paradigms could include highly automated biological production methods.

Missing evidence: No evidence of relevance to mechanical engineering use cases such as ECM, titanium machining, construction robotics, desalination, or industrial inspection.

4 unmatched gaps

Designing Buildings is Hard

Mechanical Engineering· 1 capability
No tool matches

Architectural design and construction planning are complex and labor-intensive. Advanced computational design and AI-driven optimization have the potential to revolutionize how buildings and construction plans are generated.

Foundational Capabilities

AI Design of Buildings and Construction Plans

Utilize generative AI to automatically generate and optimize building designs and construction plans, streamlining the design process and reducing manual effort.

1 resources

Candidate Tools

No candidate tools matched this gap. This may be because the gap describes a domain (e.g. astrophysics, social science) where the current biotech-focused toolkit has limited relevance, or because the gap requires capability types not yet well-represented in the database.

Modeling Mechanical Systems is Hard

Mechanical Engineering· 1 capability
No tool matches

The simulation and modeling of complex mechanical systems is challenging due to the intricate interplay of multiple physical phenomena. Improved computational models can enhance design and optimization.

Foundational Capabilities

Transferrable Multiphysics Foundation Model

Develop a comprehensive multiphysics foundation model that can be applied across various mechanical systems, integrating physics-based models, experimental data, and machine learning for broad transferability.

3 resources

Candidate Tools

No candidate tools matched this gap. This may be because the gap describes a domain (e.g. astrophysics, social science) where the current biotech-focused toolkit has limited relevance, or because the gap requires capability types not yet well-represented in the database.

Robot Hardware and Software is Still Clunky

Mechanical Engineering· 7 capabilities
No tool matches

Robots have the potential to revolutionize manufacturing, logistics, and many other industries—but only if they are both affordable and capable of high performance. Today’s robotic hardware is often prohibitively expensive and built using legacy designs that do not prioritize cost reduction, modularity, or scalability. Moreover, many robots struggle with dexterity and tactile sensing, and current design practices decouple hardware and software, preventing a co-evolution that could unlock new performance regimes. Overcoming these limitations requires a rethinking of both robot morphology and control, with an emphasis on integrated design, cost-effective production, and enhanced functionality.

Foundational Capabilities

Co-Design of Morphology and Control

Integrate hardware and software design processes to co-evolve robot bodies alongside control policies. This approach reduces inefficiencies caused by decoupled design methods and can unlock entirely new performance regimes.

1 resources

Dexterous Robots

Create advanced robot bodies that leverage novel materials, innovative design principles, and improved manufacturing and control techniques to enhance dexterity while reducing cost.

4 resources

Improved Robot Actuation

Develop novel actuator technologies that combine hybrid or mode-switching capabilities with power-dense magnetic actuation. These improvements would allow robots to seamlessly transition between compliant and stiff modes, mimicking biological muscle performance while enhancing energy efficiency.

2 resources

Nanoscale Robots

Employ lithographic techniques and advanced nanofabrication to create tiny robots with high precision. Scalable production of nanoscale robotic systems could enable breakthroughs in medicine (e.g., targeted drug delivery) and materials science.

2 resources

Non-Humanoid Mobile Robots

Design mobile robots with streamlined, efficient designs. Reduce unnecessary degrees of freedom for simpler, cheaper, and more reliable mobility platforms.

1 resources

Robot Tactile Perception

Develop multimodal electronic skin (e-skin) that enables robots to detect force vectors, slippage, and temperature across large surface areas. Enhanced tactile perception will facilitate fine-grained control and more adaptive interactions with the environment.

2 resources

Underactuated Robots

Advance the use of underactuated robotic systems, which use fewer actuators than degrees of freedom, to create compliant, adaptable, and significantly cheaper robots. 

1 resources

Candidate Tools

No candidate tools matched this gap. This may be because the gap describes a domain (e.g. astrophysics, social science) where the current biotech-focused toolkit has limited relevance, or because the gap requires capability types not yet well-represented in the database.

Sim-to-Real Transfer for Robots is Hard

Mechanical Engineering· 3 capabilities
No tool matches

Bridging the gap between simulated robot behavior and real-world performance remains a significant challenge, particularly for tactile interactions and complex environments.

Foundational Capabilities

Combine Physics Models and Generative AI

Integrate detailed physics models with generative AI techniques to improve the accuracy of simulations and facilitate effective transfer of robotic behavior from simulation to reality.

2 resources

Large-Scale Data Collection for Tactile Interaction

Systematically collect and curate large training datasets focusing on tactile interactions to enhance simulation accuracy and real-world performance.

1 resources

Robot Foundation Model

Robot foundational models can address major obstacles in robot learning and enable training on action-free data including video. This is essential for enabling reasoning about novel situations and robustly handling real-world variability.

2 resources

Candidate Tools

No candidate tools matched this gap. This may be because the gap describes a domain (e.g. astrophysics, social science) where the current biotech-focused toolkit has limited relevance, or because the gap requires capability types not yet well-represented in the database.

Cellular and Molecular Biology

3 gaps · 10 tool connections · best match 63%

Fundamental Biomolecular Actors in Cells Remain Largely Invisible

Cellular and Molecular Biology· 4 capabilities

Tools

5

Best

41%

Many of the fundamental actors in cells—proteins, lipids, and metabolites—are still mostly invisible to us, especially when considering their extensive multiplexity, diversity, cell-to-cell heterogeneity, and temporal variation. Without scalable, cost-effective technologies to capture these molecular details, our comprehensive analysis of complex biological systems remains limited.

Foundational Capabilities

Deciphering the DNA Regulatory Code

Decipher the “DNA regulatory code” that governs gene expression. Understanding it would enable the prediction of how perturbations to cell state affect transcription in development and disease.

2 resources

High-Throughput Single-Cell Proteomics

Develop new technologies to improve the cost-performance of single-cell proteomics (>100x), enabling proteome-wide analysis at scale. This would allow proteins—the functional output of genetics—to be analyzed comprehensively across complex biological systems. Some of these technologies are single-molecule, some are not. 

4 resources

Mapping Other Key Biomolecules

Develop technology suites for single-cell glycomics and lipidomics using methods such as in-situ multi-cycle imaging or spatially resolved nanopore/mass spectrometry. This would scale up the mapping of key molecules beyond proteins and metabolites.

4 resources

Metabolomics

Establish foundational datasets and machine learning models for inverse mass spectrometry of small molecules, enabling interpretation of metabolomic data at the single-cell level.

5 resources

Candidate Tools

RNA sequencing

assay method

Gap applicability

41

RNA sequencing (RNA-seq) is a transcriptomic assay method that quantifies gene-expression changes by sequencing RNA-derived libraries. In the cited study, it was used on adult rat amygdala tissue to detect subtle expression changes associated with development, cellular function, and nervous system disease after gestational high-THC cannabis smoke exposure.

RNA-seq is a scalable discovery assay that makes one major class of biomolecular actors, transcripts, visible across many genes at once. It does not solve protein, lipid, or metabolite invisibility directly, but it is a plausible partial tool for multiplex molecular profiling in cells.

Mechanistic
42
Context
46
Throughput
72
First test time
45
First test cost
32
Replication
20
Practicality
71
Translatability
11

Assumptions: Treating transcriptome-scale visibility as a conservative partial fit to the broader invisibility gap.

Missing evidence: No supplied evidence for single-cell use, protein/lipid/metabolite readout, spatial resolution, or cost-effectiveness at the scale emphasized by the gap.

Gap applicability

28

Chromatin immunoprecipitation sequencing (ChIP-seq) is an assay method that combines chromatin immunoprecipitation with sequencing-based genomic localization to map protein-associated genomic regions. In the cited study, it was used to identify genome-wide ZFHX3-binding sites in suprachiasmatic nucleus chromatin, revealing occupancy concentrated near transcription start sites and co-localization with known histone modifications.

ChIP-seq can make a subset of otherwise hard-to-observe biomolecular interactions visible by mapping protein-associated genomic regions genome-wide. This is relevant to molecular invisibility, but only for chromatin-associated factors rather than the broader protein, lipid, and metabolite landscape highlighted in the gap.

Mechanistic
28
Context
34
Throughput
58
First test time
34
First test cost
26
Replication
15
Practicality
71
Translatability
11

Assumptions: Using chromatin-associated protein occupancy mapping as a narrow visibility-enabling fit.

Missing evidence: No supplied evidence for single-cell operation, temporal multiplexing, proteome-wide coverage, or applicability to lipids and metabolites.

Genome-wide transcription factor binding site mapping is a genomic assay approach used to identify binding sites across the entire genome for transcription factors involved in plant light signaling. In the cited review context, it links transcription factor occupancy to light-regulated gene expression programs in plants.

This assay class can reveal genome-wide occupancy of transcription factors, making one important class of molecular actors more observable at scale. Its fit is limited because the gap emphasizes broad visibility of proteins, lipids, and metabolites, especially with cell-to-cell and temporal resolution.

Mechanistic
24
Context
27
Throughput
55
First test time
36
First test cost
30
Replication
9
Practicality
59
Translatability
9

Assumptions: Considering TF occupancy mapping a narrow but actionable molecular-visibility assay.

Missing evidence: Supplied evidence is review-like and does not specify implementation details, single-cell capability, temporal resolution, or broader biomolecule coverage.

Gap applicability

23

Cell-free chromatin immunoprecipitation (cfChIP) is an assay method that immunoprecipitates histone mark-associated cell-free chromatin from blood plasma. In the cited study, cfChIP targeting H3K36me3-associated cfDNA was used with droplet digital PCR to infer transcriptional activity of specific genes, including EGFR, in the cells that released the cfDNA.

cfChIP provides a way to infer otherwise hidden transcriptional activity from histone mark-associated cell-free chromatin. It is actionable and measurement-oriented, but it addresses a narrow extracellular proxy of gene activity rather than direct, multiplex single-cell measurement of proteins, lipids, or metabolites.

Mechanistic
22
Context
24
Throughput
33
First test time
48
First test cost
41
Replication
9
Practicality
71
Translatability
11

Assumptions: Treating indirect transcription-state inference as a limited visibility tool.

Missing evidence: No supplied evidence for single-cell use, intracellular profiling, broad multiplexity, or direct measurement of the biomolecule classes emphasized in the gap.

qRT-PCR

assay method

Gap applicability

20

qRT-PCR is a quantitative reverse-transcription PCR assay used to measure transcript abundance, here applied to GFP mRNA during light-controlled gene expression in Synechococcus sp. PCC 7002. In the cited study, it quantified transcriptional activation and deactivation kinetics of optogenetic systems under green/red and light/dark illumination cycles.

qRT-PCR is a fast, accessible assay for making specific transcripts visible and quantifying temporal expression changes. It is only a weak fit because the gap centers on scalable, highly multiplexed visibility of proteins, lipids, and metabolites rather than low-plex transcript measurement.

Mechanistic
16
Context
20
Throughput
12
First test time
82
First test cost
74
Replication
9
Practicality
59
Translatability
24

Assumptions: Including it only as a low-cost first-pass molecular readout tool.

Missing evidence: No supplied evidence for high multiplexing, single-cell use, or direct readout of proteins, lipids, or metabolites.

Cellular and Biomolecular States Are Highly Multimodal and Complex

Cellular and Molecular Biology· 3 capabilities

Tools

4

Best

63%

Cellular state is a multifaceted and complex phenomenon, involving multiple overlapping omics layers that vary in time and space. Capturing and representing this multimodal complexity is essential for predictive modeling of cell behavior and for advancing our understanding of cellular function.

Foundational Capabilities

Automated Mechanistic Interpretability of Generative AI Models Trained on Biological Data

Develop automated approaches for mechanistic interpretability of virtual neural network models trained on biological state data. This would enable the extraction of mechanistic insights from predictive models, potentially informing both basic science and therapeutic design.

1 resources

Models for Dynamic Cellular and Subcellular Data

Models that represent cells as self-organizing and adaptive are critical for enabling the simulation of multi-cellular systems. These should incorporate subcellular data from experimental techniques tracking individual molecules within cells in real time, as well as capture the fundamental interaction between external and internal molecular environment.

1 resources

Universal Latent Variable Model of Cellular State

Train predictive models that can probabilistically infer any missing omics data from any available measurements by learning a broadly useful latent space. Such a universal representation of cellular state could become the foundation for many predictive and diagnostic applications.

3 resources
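The universal latent-variable idea can be caricatured in the linear case: learn a shared low-rank latent space from paired multi-omics profiles, then infer a missing modality for a new cell from the observed one. Everything below (dimensions, loadings, rank) is synthetic and assumed; real systems would use deep generative models rather than a truncated SVD:

```python
import numpy as np

rng = np.random.default_rng(0)
k = 3                                   # latent dimensionality (assumed)
z = rng.normal(size=(200, k))           # hidden cell states
W_rna = rng.normal(size=(k, 30))        # latent -> transcriptome loadings
W_prot = rng.normal(size=(k, 10))       # latent -> proteome loadings
X = np.hstack([z @ W_rna, z @ W_prot])  # paired training data (200 cells)

# Learn shared loadings from the concatenated multi-omics matrix.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
V = Vt[:k].T                            # (40, k) shared latent loadings

# New cell: transcriptome observed, proteome missing.
z_new = rng.normal(size=(1, k))
rna_obs = z_new @ W_rna
# Infer the latent code from the observed block by least squares, then
# reconstruct the missing block from the shared loadings.
code, *_ = np.linalg.lstsq(V[:30], rna_obs.T, rcond=None)
prot_imputed = (V[30:] @ code).T
prot_true = z_new @ W_prot
corr = np.corrcoef(prot_imputed.ravel(), prot_true.ravel())[0, 1]
print(corr > 0.99)  # -> True: imputation is near-exact in this noiseless linear toy
```

Because the toy data are exactly rank-k, the observed block pins down the latent code and the missing modality is recovered almost perfectly; noisy real data would make this probabilistic rather than exact.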

Candidate Tools

Gap applicability

63

Spatial transcriptomics is a transcriptomic assay method identified in the supplied review as a recent methodological advance. In that evidence, it is presented as part of a broader technology set that enables easier and more accurate visualization of cell behavior and qualitative and quantitative analysis of cell-cell interactions.

This assay directly addresses the gap's spatial component by measuring transcriptomic state in tissue context rather than averaging cells. The supplied summary also explicitly places it within a broader technology set for analyzing cell behavior and cell-cell interactions, which is relevant to multimodal cellular-state representation.

Mechanistic
90
Context
86
Throughput
62
First test time
45
First test cost
30
Replication
9
Practicality
71
Translatability
11

Assumptions: Assumes transcript spatial context is a core bottleneck for this gap.

Missing evidence: No direct evidence here on temporal resolution, integration with other omics layers, or suitability for building universal latent-state models.

Gap applicability

62

Multi-omics approaches, particularly single-cell and spatial transcriptomics combined with proteomic and metabolomic profiling, are paving the way for composite diagnostic panels

Single-cell transcriptomics is directly relevant to resolving heterogeneous cellular states that are obscured in bulk measurements. The provided evidence also explicitly situates it within multi-omics combinations, which fits the gap's emphasis on overlapping omics layers.

Mechanistic
84
Context
84
Throughput
78
First test time
56
First test cost
38
Replication
50
Practicality
50
Translatability
50

Assumptions: Assumes single-cell state resolution is useful even without explicit temporal readout.

Missing evidence: No structured evidence here on paired proteomic or metabolomic capture, temporal dynamics, or replication/practicality metadata.

Gap applicability

57

selected studies taking advantage of advanced state-of-the-art molecular genetic methods ranging from genome-wide epi/transcriptome mapping

This method is relevant because the gap explicitly concerns overlapping omics layers, and genome-wide epi/transcriptome mapping can capture at least two such layers. Its genome-wide scope also fits efforts to build broad representations of cellular state.

Mechanistic
78
Context
76
Throughput
72
First test time
42
First test cost
34
Replication
50
Practicality
50
Translatability
50

Assumptions: Assumes combined epigenomic and transcriptomic mapping is intended rather than transcriptomics alone.

Missing evidence: The supplied summary is sparse and does not specify assay format, single-cell resolution, spatial resolution, temporal resolution, or demonstrated integration into predictive models.

Gap applicability

45

This review focuses on the recent advances in optoacoustic imaging assisted by smart molecular labeling and dynamic contrast enhancement approaches that enable new types of multiscale dynamic observations not attainable with other bio-imaging modalities.

The gap highlights time and space, and this imaging modality is explicitly described as enabling multiscale dynamic observations not attainable with other imaging methods. That makes it a plausible complement for capturing dynamic state information beyond static omics snapshots.

Mechanistic
58
Context
72
Throughput
46
First test time
34
First test cost
22
Replication
50
Practicality
50
Translatability
50

Assumptions: Assumes dynamic imaging data are considered part of the multimodal cellular-state representation problem.

Missing evidence: No direct evidence here on molecular specificity, compatibility with omics integration, single-cell resolution, or use for predictive cellular-state modeling.

Limited Ability to Image Molecules in Their Native Contexts

Cellular and Molecular Biology· 5 capabilities

Tools

1

Best

62%

We are currently limited in our ability to image molecules in their native contexts—for example, within live 3D tissues. Achieving scalable, high-resolution imaging of biomolecules in situ would de-risk many areas of biomedical science by enabling integrative, comprehensive molecular mapping within intact specimens.

Foundational Capabilities

Binders for Every Epitope

Highly specific mAb/nanobody-type binders for every target epitope, including specific post-translational modifications

1 resources

Freezing and Unfreezing Time for Living Cells

Develop methods to “freeze” and subsequently “unfreeze” living cells, effectively capturing dynamic states for later analysis and then resuming cellular function.

1 resources

Live Cell Subcellular Imaging & Foundation Models

Develop live cell subcellular imaging techniques in tissues combined with computational foundation models to interpret the resulting data, enabling detailed mapping of subcellular structures and their dynamics  in native environments.

3 resources

Molecular 3D Scanning

Create whole-cell, nanometer-resolved, in-situ multi-omic imaging approaches that can map hundreds to thousands of molecules (e.g., proteins, RNAs) within intact specimens at nanoscale resolution. The core chemistries and imaging technologies for this exist, but they need to be integrated and brought to scale. We could, for example, comprehensively measure the many aspects of the “Hallmarks of Aging” within a single tissue sample.

2 resources

Multiplexed Live-Cell Dynamics Recordings

Develop techniques for spatial multiplexing of dynamic signals in live cells, capturing real-time changes and molecular ticker-tapes that record cellular events over time.

2 resources

Candidate Tools

Gap applicability

62

Light-sheet microscopy, also termed single plane illumination microscopy, is an in vivo fluorescence imaging method tailored to larval research and embryonic imaging. The supplied evidence indicates that it can capture the full course of embryonic development from egg to larva and has been coupled with optogenetic perturbation to study Wnt signaling during embryogenesis.

This is the only candidate that is directly an imaging method, and the supplied evidence specifically supports in vivo fluorescence imaging across intact developing specimens. That makes it a plausible partial fit for imaging biomolecules in native contexts, even though the evidence here is centered on embryos/larvae rather than live 3D tissues broadly.

Mechanistic
86
Context
78
Throughput
72
First test time
45
First test cost
28
Replication
20
Practicality
71
Translatability
11

Assumptions: Assumes fluorescence-based in vivo imaging is relevant to the gap's native-context imaging bottleneck.

Missing evidence: No supplied evidence on molecular multiplexing, nanoscale resolution, imaging in adult 3D tissues, or scalability to hundreds-to-thousands of biomolecules.

Ecology

4 gaps · 9 tool connections · best match 56%

Much of the Biosphere Remains Uncharted and Vulnerable to Information Loss

Ecology· 4 capabilities

Tools

4

Best

56%

Much of Earth's biosphere—from the deep ocean to atmospheric bioaerosols—remains unexplored, with the microbial majority largely uncharted. Advancing new exploration technologies and systematically cataloging the Earth's microbiome could unlock discoveries of new life forms and biological insights that could impact health, climate, geoengineering, agriculture, and fundamental biology.

Foundational Capabilities

Dense Ocean DNA Sampling

Implement dense, systematic DNA sampling in diverse ocean environments—especially areas with sharp gradients in temperature, depth, salinity, and pH (such as reefs, deep trenches, and hydrothermal vents)—to uncover new species and biological insights.

2 resources

Efficient Ocean Exploration Systems

Innovate and deploy more efficient and robust systems for deep ocean exploration, enabling comprehensive study of deep-sea environments and their unique biology.

4 resources

Genomic Maps of Global Microbiomes

Develop comprehensive genomic mapping initiatives for global microbiomes to catalog species essential to Earth's biosphere and inform conservation efforts.

4 resources

Map Atmospheric Extremophiles

Atmospheric bioaerosols could have applications such as cloud seeding.

2 resources

Candidate Tools

qRT-PCR

assay method

Gap applicability

56

qRT-PCR is a quantitative reverse-transcription PCR assay used to measure transcript abundance, here applied to GFP mRNA during light-controlled gene expression in Synechococcus sp. PCC 7002. In the cited study, it quantified transcriptional activation and deactivation kinetics of optogenetic systems under green/red and light/dark illumination cycles.

qRT-PCR is a directly actionable nucleic-acid assay that could support environmental DNA or RNA workflows by validating and quantifying specific taxa or marker transcripts during biosphere sampling campaigns. It fits the gap's need for systematic cataloging better as a follow-up validation assay than as a primary discovery technology.

Mechanistic
62
Context
58
Throughput
55
First test time
86
First test cost
78
Replication
32
Practicality
59
Translatability
24

Assumptions: Assumes targeted molecular validation of environmental samples is useful within broader cataloging pipelines.

Missing evidence: No direct evidence here for environmental, ocean, aerosol, or metagenomic deployment.

single-cell RNA-seq

assay method

Gap applicability

37

The supplied web research summary states that the anchor review emphasizes mechanistic advances enabled by optogenetics, RNA-sequencing, animal models, and human genetics, and specifically highlights nociceptor and somatosensory neuron classification by single-cell RNA-seq.

Single-cell RNA-seq could plausibly help characterize uncultured or poorly understood microbial communities at higher resolution once cells are isolated, which is relevant to building richer genomic maps of global microbiomes. Its value for this gap is mainly in downstream characterization rather than broad first-pass exploration.

Mechanistic
48
Context
42
Throughput
64
First test time
38
First test cost
30
Replication
20
Practicality
20
Translatability
20

Assumptions: Assumes recoverable single cells from environmental samples and interest in transcript-level characterization.

Missing evidence: No supplied evidence for environmental microbiome use, marine sampling, aerosol sampling, or microbial single-cell workflow performance.

Gap applicability

25

Spatial transcriptomics is a transcriptomic assay method identified in the supplied review as a recent methodological advance. In that evidence, it is presented as part of a broader technology set that enables easier and more accurate visualization of cell behavior and qualitative and quantitative analysis of cell-cell interactions.

Spatial transcriptomics is an actionable omics mapping assay that could, in principle, add spatial context to complex environmental consortia or host-associated microbiomes. For this gap, it is only a weak fit because the supplied evidence is centered on cell-behavior studies rather than environmental biosphere discovery.

Mechanistic
34
Context
24
Throughput
32
First test time
28
First test cost
22
Replication
20
Practicality
71
Translatability
11

Assumptions: Assumes spatially resolved community profiling is relevant to some microbiome mapping efforts.

Missing evidence: No direct evidence for environmental samples, microbial community mapping, ocean systems, or field-deployable use.

mathematical modeling

computation method

Gap applicability

24

Mathematical modeling is a computational method used to guide the rational design of synthetic gene circuits. The cited literature also places it alongside live-cell imaging and within quantitative model systems used to study microbial drug resistance and spatial-temporal features of cancer in mammalian cells.

Mathematical modeling could plausibly support sampling design or interpretation once exploration programs are underway, because it is a lightweight computational method. However, the supplied evidence ties it to synthetic gene circuit design rather than biosphere mapping, so the link is weak.

Mechanistic
22
Context
20
Throughput
50
First test time
80
First test cost
88
Replication
20
Practicality
83
Translatability
12

Assumptions: Assumes computational planning or interpretation methods are in scope for exploration campaigns.

Missing evidence: No direct evidence for ecological survey design, metagenomics, biodiversity mapping, or environmental sampling optimization.
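For context on what this method looks like in its cited setting, gene-circuit design typically means a small ODE system; a minimal sketch of a mutually repressing toggle switch settling into one of two stable states (all parameters and rates are illustrative, not fit to any organism):

```python
# Minimal ODE model of a two-gene toggle switch (mutual repression),
# the style of model used to rationally design synthetic gene circuits.
# a = maximal production, n = Hill coefficient; parameters illustrative.

def simulate_toggle(u0, v0, a=10.0, n=2, dt=0.01, steps=5000):
    """Forward-Euler integration of du/dt = a/(1+v^n) - u (and symmetrically for v)."""
    u, v = u0, v0
    for _ in range(steps):
        du = a / (1 + v ** n) - u
        dv = a / (1 + u ** n) - v
        u, v = u + dt * du, v + dt * dv
    return u, v

# The same circuit settles into opposite states from different
# initial conditions: the signature of a bistable switch.
u_hi, v_lo = simulate_toggle(2.0, 1.0)
u_lo, v_hi = simulate_toggle(1.0, 2.0)
```

Design questions (e.g., how strong must repression be for bistability?) are answered by sweeping the parameters of such a model before any construct is built, which is what makes the method cheap to try first.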

Challenges in Tracking and Restoring Resilient Ecosystems

Ecology· 5 capabilities

Tools

3

Best

34%

Regenerating degraded environments and designing self-sustaining systems require a unified understanding of ecological dynamics. Our current models fall short in predicting complex interactions—such as feedback loops and stability thresholds—that determine ecosystem behavior.  To close this gap, we need better datasets and models of biodiversity and animal movements, as well as tools to predict and contain invasive species. We also need the ability to experiment with restoration strategies, and validate approaches ranging from rewilding to engineering de-extinction technologies.

Foundational Capabilities

Build Robust Ecosystem Testbeds (Closed & Open)

Create well-designed testbeds that simulate key ecosystems, allowing for controlled experimentation and development of restoration strategies. Develop experimental testbeds that simulate closed ecosystems, allowing for controlled experimentation and refinement of life-support strategies.

2 resources

Leverage Stem Cell and Genome Editing Technologies for De-Extinction

Utilize cutting-edge stem cell and mammalian genome editing techniques to facilitate de-extinction and promote genetic diversity in repopulating ecosystems. Are we preserving the genomes of critically endangered species to enable future de-extinction?

1 resource

Protecting Ocean Ecosystems

Develop approaches to protect ocean ecosystems. E.g. Studies have shown that shading corals for the 4 hottest hours of the day can significantly reduce bleaching.

2 resources

Study Closed Ecosystems

Conduct detailed studies of existing closed ecosystems to understand the interactions and feedback loops that enable self-sustainability.

1 resource

Tools to Predict and Contain Invasive Species 

Track and forecast the spread of invasive species and simulate containment strategies. 

3 resources
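Forecasting tools of this kind are often prototyped as lattice models in which an invader grows locally and disperses to neighboring cells; a minimal sketch under that assumption (grid size, growth rate, and dispersal fraction are all illustrative, not calibrated to any species):

```python
# Minimal lattice model of invasive spread: logistic growth per cell
# plus nearest-neighbor dispersal. All rates are illustrative.

def step(grid, r=0.5, K=1.0, d=0.1):
    """One time step: logistic growth, then a fraction d disperses
    equally to the four nearest neighbors (reflecting edges)."""
    n = len(grid)
    grown = [[c + r * c * (1 - c / K) for c in row] for row in grid]
    new = [[cell * (1 - d) for cell in row] for row in grown]
    for i in range(n):
        for j in range(n):
            share = grown[i][j] * d / 4
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < n and 0 <= nj < n:
                    new[ni][nj] += share
                else:
                    new[i][j] += share   # reflecting boundary: share stays put
    return new

def occupied_cells(grid, threshold=0.05):
    return sum(c > threshold for row in grid for c in row)

# Seed one corner and watch the invasion front expand.
N = 15
grid = [[0.0] * N for _ in range(N)]
grid[0][0] = 0.5
history = []
for t in range(40):
    history.append(occupied_cells(grid))
    grid = step(grid)
```

A containment strategy can then be simulated as an intervention on the same model (e.g., zeroing cells along a barrier) and compared against the uncontrolled spread; real tools would replace the uniform grid with landscape rasters and species-specific dispersal kernels.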

Candidate Tools

microfluidics

assay method

Gap applicability

34

The review integrates data from in vitro, in silico, and clinical studies, including both classical detection strategies and emerging technologies such as clustered regularly interspaced short palindromic repeats (CRISPR)-based modulation, biosensors, and microfluidics.

The gap explicitly calls for better datasets and controlled ecosystem testbeds, and microfluidics is an actionable platform method for building controlled experimental environments and integrated detection workflows. It could plausibly support small-scale, high-throughput studies of microbial or tissue-like ecological interactions relevant to closed ecosystem experiments.

Mechanistic
42
Context
24
Throughput
72
First test time
45
First test cost
38
Replication
30
Practicality
30
Translatability
25

Assumptions: Assumes ecosystem testbeds include tractable small-scale or microbial interaction models rather than only field-scale ecology.

Missing evidence: Provided evidence is tied to tissue engineering and does not directly show ecological, biodiversity, animal movement, invasive species, or restoration use.

CRISPR/Cas9

engineering method

Gap applicability

27

CRISPR/Cas9 is a genome editing technique used in the cited study to generate Cib1 and Cib2 knockout mice. In this evidence set, its demonstrated function is targeted gene disruption for mouse model production.

The gap specifically includes de-extinction and genome editing capabilities, and this is a directly actionable genome editing method with demonstrated use for generating knockout animal models. It could plausibly contribute to species-focused functional genetics work relevant to conservation or de-extinction research programs.

Mechanistic
26
Context
29
Throughput
33
First test time
41
First test cost
36
Replication
9
Practicality
59
Translatability
34

Assumptions: Assumes the de-extinction subproblem is considered part of the gap and that animal model editing is a useful intermediate step.

Missing evidence: Evidence only shows mouse knockout model generation, not endangered species, conservation genetics, restoration outcomes, or ecosystem-level effects.

Gap applicability

27

Recent research has shown that plasmid DNA delivered by single-walled carbon nanotubes (SWCNTs) ... can diffuse through plant cell walls, enabling the transient expression of genetic material in plant tissues.

One part of the gap mentions engineering and de-extinction-related capabilities, and this item is a concrete plant delivery strategy for transient genetic manipulation. It could be relevant for first-pass testing of plant engineering ideas in restoration contexts where plant traits are part of ecosystem recovery.

Mechanistic
28
Context
31
Throughput
40
First test time
46
First test cost
34
Replication
20
Practicality
20
Translatability
22

Assumptions: Assumes plant engineering is in scope for restoration strategy experiments.

Missing evidence: No direct evidence for ecosystem restoration outcomes, field deployment, invasive species control, biodiversity tracking, or use beyond transient expression in plant tissues.

We Can Learn More from Nature’s Biological Designs

Ecology· 6 capabilities

Tools

2

Best

29%

Nature’s blueprints span from the unseen nanoworld to the enigmatic origins of life. Despite the incredible diversity of nanostructures, many remain hidden due to current imaging limitations. Likewise, the mysteries of animal communication—crucial for decoding behavioral cues and social structures—await breakthrough insights. Moreover, the primordial conditions of our planet, essential for understanding life’s genesis, are obscured by the absence of rocks from before 4.1 Ga and fossils before 3.5 Ga (although life almost certainly established itself on our planet before that). Addressing these challenges can unlock new understandings in biology, ecology, and more.

Foundational Capabilities

Advanced Imaging of Natural Nanostructures

Deploy cutting-edge imaging techniques to capture high-resolution details of natural nanostructures, revealing new insights into their form and function.

1 resource

Develop Missions to Search for Life on Europa and/or Enceladus

Design and implement missions specifically aimed at exploring Europa and/or Enceladus for signs of life, leveraging advanced detection and sampling technologies.

1 resource

Leverage Modern Machine Learning to Communicate with Animals

Use advanced machine learning techniques to decode animal vocalizations and behavioral signals, enabling meaningful communication and insights into animal cognition. Note: Many animals (e.g. insects, cephalopods, amphibians) use non-vocal communication (bioluminescence, gestures, electroreception, olfaction). For olfaction see: https://www.osmo.ai/   

7 resources

Massive-Scale Search and Screening of Hadean Zircons

Almost all our knowledge of Hadean Earth comes from Hadean zircons, the “black-box recorder” of minerals. Zircons can incorporate small amounts of the ancient ocean. All known Hadean zircons come from a single site, and they are gathered artisanally by individual graduate students. The solution is to search for more Hadean zircons with the steady, systematic approach of (for example) a diamond company, and then screen massive numbers of zircons for ocean and atmosphere data.

2 resources

Rake Ancient Lunar Regolith - “Earth’s Attic” - to Find Fragments of Hadean Rock from Earth

No rock survives from the Hadean - but that is because Earth has plate tectonics. Statistically, there must be Hadean rocks waiting on ancient Lunar terrains for us to find (just as there are abundant Lunar meteorites on Earth). A plausible Earth mineral has already been identified in Apollo samples (https://www.sciencedirect.com/science/article/abs/pii/S0012821X19300202). They will be easy to spot due to their distinct mineralogy (and color). However, raking through the Lunar regolith to find them is tedious work, beyond the patience of astronauts - but not beyond the patience of robots. This is a NASA science goal, but it is undervalued, due to the intense focus on Lunar resources.

2 resources

Understand Ancient Proteins

Systematically testing functions of ancient proteins 3800 to 50 Mya to understand ancient biochemical capabilities and environmental constraints, informing evolutionary ecology.

1 resource

Candidate Tools

single-cell RNA sequencing

assay method

Gap applicability

29

Single-cell RNA sequencing (scRNA-seq) is a transcriptomic assay method that measures RNA molecules in individual cells by sequencing-based transcript detection. In the cited application, it detected FLiCRE transcripts within the endogenous transcriptome, enabling simultaneous readout of cell type and calcium activation history.

The gap includes large-scale discovery problems, and scRNA-seq is an actionable high-throughput assay for uncovering previously hidden biological states and cell-type-specific programs in natural systems. It could plausibly support animal communication studies by profiling neural or sensory cell populations associated with signaling behaviors.

Mechanistic
28
Context
22
Throughput
72
First test time
45
First test cost
30
Replication
9
Practicality
71
Translatability
51

Assumptions: Assumes the work includes accessible tissues or cells from organisms relevant to communication or natural design questions.

Missing evidence: No direct evidence here for use in ecology, animal communication decoding, nanostructure imaging, or origin-of-life samples.

cell-free protein synthesis

engineering method

Gap applicability

28

Cell-free protein synthesis (CFPS) has been used as a transformative technology in synthetic biology, providing a programmable, scalable, and automation-compatible platform for biological engineering.

One capability in the gap is understanding ancient proteins, and CFPS is a directly usable engineering method for rapid expression and testing of reconstructed proteins without needing full cellular systems. Its automation-compatible and scalable nature also fits systematic screening workflows.

Mechanistic
24
Context
20
Throughput
78
First test time
70
First test cost
62
Replication
50
Practicality
50
Translatability
50

Assumptions: Assumes reconstructed ancient protein sequences are available and functional testing is part of the intended workflow.

Missing evidence: The supplied summary is broad and does not specifically mention ancient proteins, ecological samples, or origin-of-life applications.

1 unmatched gap

Inability to Anticipate or Prevent Ecosystem Tipping Points

Ecology· 3 capabilities
No tool matches

We lack the models and infrastructure to monitor and predict how ecosystems behave under stress or when they might collapse, as well as the metrics to determine whether restoration is effective. We need to understand the underlying dynamics, feedback loops, and thresholds that lead to ecosystem degradation/collapse, as well as to secure essential systems like pollination.
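The threshold behavior described here is commonly illustrated with May's overexploitation model, where logistic regrowth competes with saturating consumption; a minimal sketch (parameters illustrative) showing how a modest increase in pressure makes the high equilibrium vanish abruptly at a fold bifurcation:

```python
# May's grazing model: logistic growth minus saturating consumption.
# Past a critical pressure c, the high (healthy) equilibrium
# disappears and the system falls discontinuously to a low state.

def equilibrium(c, r=1.0, K=10.0, x0=8.0, dt=0.01, steps=20000):
    """Integrate dx/dt = r*x*(1 - x/K) - c*x**2/(x**2 + 1) to steady state."""
    x = x0
    for _ in range(steps):
        x += dt * (r * x * (1 - x / K) - c * x * x / (x * x + 1))
    return x

healthy = equilibrium(c=1.0)    # light pressure: high stable state persists
collapsed = equilibrium(c=3.0)  # heavier pressure: only the low state remains
```

Sweeping c upward and then back down in such a model also exhibits hysteresis: the collapsed state does not recover at the pressure where collapse occurred, which is why anticipating the threshold matters more than reacting to it.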

Foundational Capabilities

Detect Early-Warning Signals of Collapse

Develop monitoring techniques to detect early-warning indicators—such as critical slowing down—that suggest an ecosystem is approaching a tipping point.  Establish robust, scalable networks for real-time ecological monitoring using integrated technologies. Deploy sensor arrays that combine soundscapes, environmental DNA (eDNA) sampling, and satellite data fusion to continuously assess ecosystem health across diverse regions.

4 resources
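Critical slowing down is typically screened for as rising variance and lag-1 autocorrelation in rolling windows of a monitoring series; a minimal sketch on simulated data, using an AR(1) surrogate whose recovery rate weakens toward a tipping point (all parameters illustrative):

```python
import random

def lag1_autocorr(xs):
    """Sample lag-1 autocorrelation of a sequence."""
    n = len(xs)
    mean = sum(xs) / n
    num = sum((xs[i] - mean) * (xs[i + 1] - mean) for i in range(n - 1))
    den = sum((x - mean) ** 2 for x in xs)
    return num / den

def variance(xs):
    mean = sum(xs) / len(xs)
    return sum((x - mean) ** 2 for x in xs) / len(xs)

# AR(1) surrogate: as the recovery rate weakens (phi -> 1), theory
# predicts fluctuations become larger and more autocorrelated.
random.seed(0)
series, x, n = [], 0.0, 4000
for t in range(n):
    phi = 0.2 + 0.75 * t / n       # coefficient drifting toward the tipping point
    x = phi * x + random.gauss(0, 1)
    series.append(x)

early, late = series[:1000], series[-1000:]
```

On real sensor-network data the same two statistics are computed over rolling windows of the detrended series; a sustained upward trend in both is the early-warning signature.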

Foundational Datasets and Models of Biodiversity and Species Interaction

Develop comprehensive, high-quality datasets and predictive models to better understand and forecast animal movements and biodiversity shifts. Use advanced computational and theoretical models that capture how species interact, cascade through ecosystems, and ultimately influence stability or collapse. These models will help identify key feedback loops and thresholds, enabling targeted intervention before degradation accelerates.

6 resources

Pollination Network Monitoring and Recovery

Secure and restore pollination services critical to food systems and biodiversity. Pollination is critical for food systems and biodiversity, yet global pollinator populations are in sharp decline, and we lack robust ways to track, model, or supplement pollination services at scale. We need to: • Build ecological models that map plant–pollinator interactions and forecast vulnerability or collapse points. • Deploy global pollinator monitoring infrastructure: networks of visual, acoustic, and eDNA sensors paired with AI models to track pollinator presence, diversity, and behavior across ecosystems and crop systems. • Design and deploy landscape-level interventions for pollinator recovery: large-scale habitat corridors, floral resource planning, and pesticide regulations to recover wild pollinators.

2 resources

Candidate Tools

No candidate tools matched this gap. This may be because the gap describes a domain (e.g. astrophysics, social science) where the current biotech-focused toolkit has limited relevance, or because the gap requires capability types not yet well-represented in the database.

Materials Science

4 gaps · 8 tool connections · best match 61%

Many Molecules Can’t Easily Be Crystallized

Materials Science· 1 capabilities

Tools

4

Best

56%

Crystallization is crucial for determining molecular structure, yet many molecules resist forming crystals. Improved computational models of crystal growth are needed to guide experimental efforts.

Foundational Capabilities

Computational Crystal Growth

Develop integrated computational frameworks that combine quantum chemistry, reaction potential modeling, experimental data, and ML generative models to describe and direct crystal growth at the atomic level.

1 resource

Candidate Tools

graph neural networks

computation method

Gap applicability

56

Graph neural networks (GNNs) are one of the fastest growing classes of machine learning models. They are of particular relevance for chemistry and materials science, as they directly work on a graph or structural representation of molecules and materials and therefore have full access to all relevant information required to characterize materials.

The gap explicitly calls for improved computational models of crystal growth, and this item is a machine-learning method directly noted as relevant to chemistry and materials science on graph or structural representations of molecules and materials. That makes it a plausible modeling backbone for predicting crystallization-related behavior, although the supplied evidence does not mention crystal growth specifically.

Mechanistic
62
Context
82
Throughput
86
First test time
58
First test cost
62
Replication
35
Practicality
50
Translatability
55

Assumptions: Assumes GNNs would be adapted to crystal-growth or crystallization prediction tasks rather than used only for other materials-property problems.

Missing evidence: No supplied evidence of use for crystal growth, crystallization propensity, nucleation, or integration with experimental crystal-growth data.
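The shared core of these models is message passing over a graph representation of a molecule or crystal; a minimal sketch of one sum-aggregation layer on a toy molecular graph (features and weights are random placeholders, not a trained model):

```python
import random

# One message-passing layer: each atom's feature vector is updated
# from a linear map of its own features plus a linear map of the sum
# of its neighbors' features, followed by ReLU. Weights are random
# placeholders standing in for trained parameters.

random.seed(1)
DIM = 4
W_self = [[random.gauss(0, 0.5) for _ in range(DIM)] for _ in range(DIM)]
W_nbr = [[random.gauss(0, 0.5) for _ in range(DIM)] for _ in range(DIM)]

def matvec(W, x):
    return [sum(W[i][j] * x[j] for j in range(DIM)) for i in range(DIM)]

def message_pass(features, edges):
    """features: node -> feature list; edges: undirected (a, b) pairs."""
    nbrs = {v: [] for v in features}
    for a, b in edges:
        nbrs[a].append(b)
        nbrs[b].append(a)
    out = {}
    for v, x in features.items():
        agg = [0.0] * DIM
        for u in nbrs[v]:
            for i in range(DIM):
                agg[i] += features[u][i]
        h = [s + m for s, m in zip(matvec(W_self, x), matvec(W_nbr, agg))]
        out[v] = [max(0.0, hi) for hi in h]   # ReLU
    return out

# Toy "molecule": a 3-atom chain 0-1-2 with one-hot atom-type features.
feats = {0: [1, 0, 0, 0], 1: [0, 1, 0, 0], 2: [1, 0, 0, 0]}
edges = [(0, 1), (1, 2)]
h1 = message_pass(feats, edges)
```

Stacking several such layers and pooling the node states gives a graph-level embedding for property prediction; crystallization-related targets would simply be a different prediction head on the same architecture, which is why the adaptation is plausible but unevidenced here.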

mathematical modeling

computation method

Gap applicability

40

Mathematical modeling is a computational method used to guide the rational design of synthetic gene circuits. The cited literature also places it alongside live-cell imaging and within quantitative model systems used to study microbial drug resistance and spatial-temporal features of cancer in mammalian cells.

The gap asks for computational frameworks to guide experiments, and mathematical modeling is at least a directly actionable modeling approach. It could support hypothesis generation around crystallization conditions, but the provided evidence is from synthetic biology rather than materials or crystal growth.

Mechanistic
41
Context
28
Throughput
72
First test time
78
First test cost
80
Replication
20
Practicality
83
Translatability
12

Assumptions: Assumes the method refers to general quantitative modeling workflows that can be re-scoped to crystallization problems.

Missing evidence: No supplied evidence for crystal growth, materials science, molecular crystallization, or coupling to quantum chemistry and experimental crystallization datasets.

Bayesian optimization framework

computation method

Gap applicability

33

The Bayesian optimization framework is a computational method built from high-throughput Lustro measurements to guide control of blue light-sensitive optogenetic systems. It uses data-driven learning, uncertainty quantification, and experimental design to identify light induction conditions for multiplexed regulation in Saccharomyces cerevisiae.

Bayesian optimization and uncertainty-aware experimental design could in principle help prioritize crystallization conditions or simulation parameters when experiments are expensive. However, the supplied evidence is narrowly tied to optogenetic control in yeast, not crystallization or materials science.

Mechanistic
38
Context
18
Throughput
74
First test time
55
First test cost
56
Replication
9
Practicality
59
Translatability
9

Assumptions: Assumes the optimization framework is portable beyond optogenetics to experimental design for crystallization studies.

Missing evidence: No supplied evidence for crystal growth modeling, crystallization-condition search, materials-science use, or compatibility with quantum-chemistry-based crystal models.

SOS-CIS(D) method

computation method

Gap applicability

24

SOS-CIS(D) is a quantum-chemical excited-state calculation method used to compute vertical excitation energies. In the cited 2010 BLUF photoreceptor study, it was applied to model flavin-associated structural and spectral changes and to evaluate light-induced states.

The gap's capability description mentions quantum chemistry as one component of integrated crystal-growth frameworks, and this item is a quantum-chemical calculation method. But the supplied evidence is specifically for excited-state photoreceptor modeling, which is only weakly connected to crystallization bottlenecks.

Mechanistic
29
Context
16
Throughput
44
First test time
34
First test cost
30
Replication
9
Practicality
59
Translatability
9

Assumptions: Assumes some relevant crystallization problems may involve electronic-structure calculations, though not necessarily excited states.

Missing evidence: No supplied evidence for crystal packing, intermolecular interaction modeling in crystals, nucleation, growth kinetics, or ground-state materials applications.

Searching Through the Vast, Underexplored Space of Materials is Slow and Expensive

Materials Science· 4 capabilities

Tools

3

Best

43%

“New materials create fundamentally new human capabilities. And yet…new materials-enabled human capabilities have been rare in the past 50 years.” The core challenge lies in our inability to reliably design and manufacture materials that meet specific engineering requirements, and to do so at an industrial scale and reasonable cost. Identifying promising new materials is hampered by the slow pace of exploration. The integration of machine learning, physics-based property prediction, and self-driving laboratories could dramatically accelerate this process. A significant opportunity lies in modeling the vast, unexplored space of potential materials in silico.

Foundational Capabilities

Build Better Assay Platforms for Materials

Develop more efficient assay platforms to test the properties of materials, thus enabling faster feedback and iteration in materials discovery.

4 resources

In-Silico Modeling of Material Space

Utilize deep learning and computational modeling to predict and discover millions of new materials, expanding our understanding of what can exist.

1 resource

Materials Property Bank

Large open dataset of experimentally determined mechanical, thermal, electrical properties of millions of samples that consolidates published and crowdsourced data to enable ML models. This would augment initiatives like the Materials Project and OQMD, which are simulation-heavy.

2 resources

ML & Physics-Based Property Prediction and Iterative Self-Driving Lab

Leverage machine learning models combined with physics-based property prediction to iteratively explore the materials space using automated, self-driving laboratory platforms, to find things like higher-temperature superconductors or topological materials. New designs are needed that minimize large capital expenditures and integrate flexible, modular components that can be rapidly repurposed for new experiments and are robust to variation and errors.

13 resources

Candidate Tools

Bayesian optimization framework

computation method

Gap applicability

43

The Bayesian optimization framework is a computational method built from high-throughput Lustro measurements to guide control of blue light-sensitive optogenetic systems. It uses data-driven learning, uncertainty quantification, and experimental design to identify light induction conditions for multiplexed regulation in Saccharomyces cerevisiae.

This item directly matches the gap's need for data-driven search over large design spaces using uncertainty quantification and experimental design. Those features are relevant to active learning or self-driving-lab style prioritization of which candidate materials or conditions to test next.

Mechanistic
72
Context
28
Throughput
78
First test time
62
First test cost
66
Replication
9
Practicality
59
Translatability
9

Assumptions: Assumes the optimization framework is portable beyond optogenetic light-control experiments to materials screening workflows.

Missing evidence: No supplied evidence that it has been applied to materials, physical property prediction, or non-biological assay loops.
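The prioritization loop described above can be sketched with any uncertainty-aware surrogate; below is a deliberately simplified stand-in for Bayesian optimization, using a nearest-neighbor surrogate with a distance-based uncertainty bonus (UCB-style acquisition) over a synthetic one-dimensional "materials" space. The objective function and every parameter are hypothetical:

```python
# Active-learning loop over a discrete candidate space: repeatedly
# pick the untested candidate maximizing predicted value plus an
# uncertainty bonus (UCB-style), "measure" it, and update.
# The 1-nearest-neighbor surrogate is a deliberately crude stand-in
# for a Gaussian process; the objective is synthetic.

def true_property(x):
    """Hidden figure of merit; hypothetical, peaks at x = 0.73."""
    return -(x - 0.73) ** 2

candidates = [i / 99 for i in range(100)]   # 1-D "materials" space

def ucb_pick(tested, kappa=1.0):
    best_x, best_score = None, None
    for x in candidates:
        if x in tested:
            continue
        nearest = min(tested, key=lambda t: abs(t - x))
        # prediction = nearest measurement; uncertainty grows with distance
        score = tested[nearest] + kappa * abs(nearest - x)
        if best_score is None or score > best_score:
            best_x, best_score = x, score
    return best_x

tested = {0.0: true_property(0.0)}          # single seed measurement
for _ in range(15):
    x = ucb_pick(tested)
    tested[x] = true_property(x)            # "run the experiment"

best_found = max(tested, key=tested.get)    # best candidate so far
```

A production loop would replace the surrogate with a Gaussian process or model ensemble and the synthetic objective with a real assay, but the select-measure-update structure is exactly what a self-driving lab automates.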

quantitative mathematical model

computation method

Gap applicability

31

The quantitative mathematical model is a computational design method used to guide the combination of synthetic biology-derived functional modules within a polymer framework. In the cited biohybrid materials system, this model-supported design enabled light pulse-counting behavior linked to distinct molecular outputs.

The gap explicitly calls for in silico modeling of large unexplored spaces, and this item is at least an actionable computational design method. It could plausibly serve as a template for modular quantitative modeling in biohybrid or polymer-linked materials contexts.

Mechanistic
46
Context
22
Throughput
44
First test time
63
First test cost
68
Replication
9
Practicality
71
Translatability
11

Assumptions: Assumes relevance to materials because the summary mentions a polymer framework and model-supported design.

Missing evidence: No evidence of use for broad materials discovery, industrial materials property prediction, or high-throughput search over large candidate spaces.

computational modeling

computation method

Gap applicability

27

Computational modeling was used to analyze how promoters decode light-driven transcription factor nuclear translocation dynamics. In the cited work, the modeling identified promoter kinetic regimes that enable efficient expression under short light pulses and proposed a multi-stage, thresholded activation scheme to explain opposite promoter-response phenotypes.

This is an actionable computational modeling approach, and the gap includes a need for modeling-driven understanding to guide search. Its value here would be as a general kinetic or mechanistic modeling template rather than a direct materials-discovery solution.

Mechanistic
34
Context
16
Throughput
36
First test time
58
First test cost
64
Replication
20
Practicality
71
Translatability
11

Assumptions: Assumes generalizable value from mechanistic modeling methods even though the demonstrated system is transcriptional and light-driven.

Missing evidence: No supplied evidence for materials science use, property prediction, autonomous experimentation, or scaling to large materials spaces.
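The decoding idea in the cited work can be caricatured as promoters acting as temporal filters with different on/off kinetics; a minimal sketch (all rates illustrative) in which a fast promoter transiently fires on every brief light flash while a slow promoter low-pass filters the same input into a small steady response:

```python
# Two promoters as temporal filters of a pulsed input: a fast
# promoter tracks short flashes, a slow promoter responds mainly to
# the time-averaged signal. Rates are illustrative.

def response(k_on, k_off, light, dt=0.01):
    """Integrate promoter occupancy p under a light trace;
    return (peak, mean) activity."""
    p, peak, total = 0.0, 0.0, 0.0
    for L in light:
        p += dt * (k_on * L * (1 - p) - k_off * p)
        peak = max(peak, p)
        total += p
    return peak, total / len(light)

def pulse_train(n_steps, period, duty):
    return [1.0 if (t % period) < duty * period else 0.0
            for t in range(n_steps)]

flashes = pulse_train(20000, period=200, duty=0.1)   # brief repeated flashes

fast_peak, fast_mean = response(10.0, 10.0, flashes)   # fast promoter
slow_peak, slow_mean = response(0.1, 0.1, flashes)     # slow promoter
```

The two promoters read the identical pulse train very differently (large transient swings vs a small steady level), which is the kind of kinetic-regime distinction the cited modeling used to explain opposite promoter-response phenotypes.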

Limited Ability to Design and Scalably Synthesize Macroscale Materials

Materials Science· 3 capabilities

Tools

1

Best

61%

While many promising materials have been discovered in the lab, current synthesis methods are often too expensive to produce these materials in sufficient quantities. Some examples of novel materials that would be highly enabling include: • Low-activation, thermally conductive materials that are resistant to radiation damage, needed for fusion reactors (the first wall material is currently a limitation), spacecraft, etc. • Materials that emit at the transparency window of the atmosphere and are as easy to apply as paint, to drastically diminish solar heating of the Earth (example). • Hyper-efficient thermoelectrics that could directly turn heat into electricity. • Materials that autonomously heal, to improve our infrastructure and prevent system failures due to material defects. • Materials that have the insulating properties and high melting points of ceramics but the formability and ductility of metals, for jet engines and atmospheric reentry vehicles.

Foundational Capabilities

Macroscale Bioprinting

Methods to build large-scale structures from cells and proteins.

2 resources

Novel Design and Processing of Textile Fibers

Next generation, high performance protein-based fibers created through new spinning processes that can align and control the molecular assembly of the final fiber, taking advantage of protein’s unique capabilities.

7 resources

Scalable Synthesis of Carbon-Based Materials 

Carbon fiber could potentially replace steel in many situations, and sequester atmospheric carbon instead of creating it, if we could make enough of it cheaply enough. Arbitrarily long carbon nanotubes would enable tethers with tensile strength near the limits of physics, which would unlock things like space elevators. Scaled production of conductive carbon materials could potentially replace copper.

3 resources

Candidate Tools

cell-free protein synthesis

engineering method

Gap applicability

61

Cell-free protein synthesis (CFPS) has been used as a transformative technology in synthetic biology, providing a programmable, scalable, and automation-compatible platform for biological engineering.

The gap emphasizes scalable synthesis and design of new materials, and CFPS is explicitly described as programmable, scalable, and automation-compatible for biological engineering. It could plausibly support rapid prototyping and screening of protein-based material components relevant to macroscale bioprinting or high-performance fibers.

Mechanistic
72
Context
63
Throughput
84
First test time
72
First test cost
48
Replication
50
Practicality
58
Translatability
42

Assumptions: Assumes the intended materials include biologically produced protein or biomolecular components rather than only inorganic bulk materials.

Missing evidence: No direct evidence here that CFPS has been used to make the specific macroscale material classes named in the gap or that it reaches economical bulk production.

1 unmatched gap

We Have a Limited Ability to Acquire, Concentrate and Substitute Chemical Elements in Processes

Materials Science· 3 capabilities
No tool matches

The cost of materials is often dominated by the cost to obtain their constituent elements. What presents commercially as the “critical minerals problem” masks a larger scientific bottleneck on how we acquire, concentrate, and substitute chemical elements.

Foundational Capabilities

Efficient Chemical Separation of Lanthanides

Create an industrial center of excellence focused on the practical separation of Lanthanides to distribute this knowledge as a public good.

1 resource

High-Strength, Non-Neodymium Magnets

Develop high-strength permanent magnets not made of rare-earth elements. Currently the high-strength magnets underpinning many technologies (e.g., hard disk drives, mobile phones, electric vehicle motors) are all made out of neodymium, a rare earth element at risk of supply chain shortages and environmental issues. 

1 resource

Recovering Metals and Rare Minerals from Waste

E-waste represents a significant opportunity to recapture and reuse rare earth elements that are in short supply. 

2 resources

Candidate Tools

No candidate tools matched this gap. This may be because the gap describes a domain (e.g. astrophysics, social science) where the current biotech-focused toolkit has limited relevance, or because the gap requires capability types not yet well-represented in the database.

Nanoscale Fabrication

3 gaps · 8 tool connections · best match 68%

Synthetic Biology Platforms Are Over-Reliant on Evolved Cells That We Don’t Fully Understand or Control

Nanoscale Fabrication· 1 capabilities

Tools

4

Best

68%

We currently perform synthetic biology using naturally evolved (“kludgy”) cells rather than truly bottom-up engineered cells. This bottleneck limits our ability to design fully customizable biological systems.

Foundational Capabilities

Bottom-Up Synthetic Cells

Develop entirely synthetic cells constructed from the ground up, rather than relying on evolved cell systems. This approach would enable precise control over cellular functions and properties, although risks must also be considered, e.g., https://www.science.org/doi/10.1126/science.ads9158

2 resources

Candidate Tools

cell-free protein synthesis

engineering method

Gap applicability

68

Cell-free protein synthesis (CFPS) has been used as a transformative technology in synthetic biology, providing a programmable, scalable, and automation-compatible platform for biological engineering.

CFPS directly reduces dependence on intact evolved cells by moving core expression and prototyping functions into a programmable cell-free platform. That makes it a plausible near-term tool for bottom-up synthetic cell development and testing of synthetic components under more controlled conditions.

Mechanistic
82
Context
80
Throughput
88
First test time
78
First test cost
58
Replication
35
Practicality
50
Translatability
40

Assumptions: Assumes the gap accepts partial de-risking tools that replace some cell-based functions rather than complete synthetic cells.

Missing evidence: No supplied replication or direct synthetic-cell assembly evidence; summary does not specify liposome/protocell integration.

mathematical and statistical modelling

computation method

Gap applicability

55

Mathematical and statistical modelling is a computational design approach used in synthetic biology to improve the predictability of engineered biological systems. In the cited plant synthetic biology literature, it supports model-informed rational design for engineering plant gene regulation and metabolism.

Model-informed design could help address the gap's core predictability problem by making bottom-up biological systems more rationally specifiable before building them. It is especially plausible as a design-layer tool for synthetic cells where uncontrolled evolved-cell complexity is a bottleneck.

Mechanistic
62
Context
66
Throughput
84
First test time
86
First test cost
84
Replication
20
Practicality
71
Translatability
11

Assumptions: Assumes computational design support is in scope as an actionable method for bottom-up synthetic cell engineering.

Missing evidence: Evidence is from plant synthetic biology, not synthetic cells; no direct bottom-up cell construction example is supplied.

GUBS

engineering method

Gap applicability

52

GUBS (Genomic Unified Behavior Specification) is a domain-specific, rule-based declarative language for behavioral specification of synthetic biological devices. It represents device programs as behavioral specifications for open systems rather than as complete closed-system descriptions.

GUBS is a behavioral specification method for synthetic biological devices in open systems, which is relevant to the gap's need for more explicitly designed and controlled biological systems. It could help formalize intended behaviors for bottom-up constructs instead of relying on poorly understood native cellular context.

Mechanistic
58
Context
60
Throughput
72
First test time
74
First test cost
86
Replication
20
Practicality
83
Translatability
12

Assumptions: Assumes formal specification languages are useful upstream tools for synthetic-cell platform design.

Missing evidence: No direct evidence for use in bottom-up synthetic cells or physical implementation workflows is provided.

Gap applicability

47

Computational protein design is an engineering methodology described in a 2018 review as a next-generation tool for expanding synthetic biology applications. The supplied evidence frames it as a design approach used alongside phage display and high-throughput binding assays rather than as a single molecular reagent.

Bottom-up synthetic cells would likely need designed protein parts rather than inherited natural ones, so computational protein design is plausibly relevant as a component-generation method. It may help replace evolved functions with more intentionally engineered modules.

Mechanistic
56
Context
61
Throughput
70
First test time
55
First test cost
50
Replication
9
Practicality
71
Translatability
11

Assumptions: Assumes the gap includes enabling methods for designing synthetic components, not only whole-cell assembly methods.

Missing evidence: Supplied evidence is broad review-level only, with no specific mechanism or synthetic-cell use case.

Inability to Perform Chemistry with Direct Positional Control

Nanoscale Fabrication· 6 capabilities

Tools

4

Best

34%

Our current methods do not allow precise control over the positional placement of atoms or groups during chemical synthesis, limiting our ability to build molecules with atomic precision. A general-purpose approach to atomically precise fabrication was envisioned by Drexler in the 1980s and Feynman in the late 1950s. DNA origami made a leap in 2006, but DNA is in some key ways a much less precise and versatile nanoscale building material than proteins/peptides. A promising path would extend “DNA origami” to “protein carpentry” by adapting Beta Solenoid proteins, or other modular protein components with programmable binding properties, as lego-like building blocks and then using the latter to construct massively parallel protein-based 3D printers for lego-like covalent assembly of a restricted set of chemical building blocks. This one is riskier: how programmably can we really control protein assembly, and could we bootstrap from initial crappy prototype protein-carpentry-and-or-DNA-origami-based molecular 3D printers to genuinely useful ones? 

Foundational Capabilities

Cell-Free Systems

Utilize cell-free platforms that enable synthetic biology outside of living cells, thereby bypassing the limitations of evolved cellular machinery.

2 resources

Designer Enzymes from Quantum Chemistry and Protein Design

Create enzymes specifically engineered via quantum chemical methods and de novo protein design, which can precisely catalyze reactions at defined positions.

1 resource

Direct 3D Specification of Protein-Like Molecules

Design polymers that are not limited to amino acids and are directly specified in three dimensions, enabling precise positional control in synthesis and potentially broader or more robust functions than proteins.

2 resources

Molecular 3D Printing

Explore methods for molecular-scale 3D printing, which would enable the precise assembly of molecules layer by layer. This would in principle move us towards a general-purpose approach to atomically precise fabrication as envisioned by Drexler in the 1980s and Feynman in the late 1950s. DNA origami made a leap in 2006, but DNA is in some key ways a much less precise and versatile nanoscale building material than proteins/peptides. A promising path would extend “DNA origami” to “protein carpentry” by adapting Beta Solenoid proteins, or other modular protein components with programmable binding properties, as lego-like building blocks and then using the latter to construct massively parallel protein-based 3D printers for lego-like covalent assembly of a restricted set of chemical building blocks. This one is riskier: how programmably can we really control protein assembly, and could we bootstrap from initial crappy prototype protein-carpentry-and-or-DNA-origami-based molecular 3D printers to genuinely useful ones?

Safety consideration: https://iopscience.iop.org/article/10.1088/0957-4484/15/8/001

Strategy consideration: https://www.effectivealtruism.org/articles/ea-global-2018-paretotopian-goal-alignment

3 resources

Silicon-Protein Interfaces

Current fabrication methods allow us to work at macroscopic scales (10^0 m) down to the nanometer scale (10^-8 m) with photolithography, and further down to the atomic scale (10^-10 m) with proteins. However, directly bridging from macroscopic to atomic scales (10^0 m to 10^-10 m) for nanotechnology applications remains a significant challenge. A key obstacle is the lack of effective interfaces between single addressable electrodes and proteins.

1 resource

Vacuum Mechanosynthesis Exploration

Investigate the feasibility of vacuum mechanosynthesis—a process that uses mechanical forces under vacuum conditions to construct molecules with high positional precision.

3 resources

Candidate Tools

AlphaFold3

computation method

Gap applicability

34

AlphaFold3 is a computational structure-prediction method used in the cited study to model the MagMboI–DNA complex. In that work, it was applied to infer interactions with the 5'-GATC-3' recognition sequence and to guide optimization of the photoactivatable endonuclease variant MagMboI-plus for top-down genome engineering.

This gap explicitly points toward programmable protein assembly and designer protein components, and AlphaFold3 is directly relevant at least as a structure-modeling method for protein complexes. It could help evaluate candidate interfaces or geometries for protein-based positional assembly, but the supplied evidence only shows use on a protein-DNA complex rather than nanoscale fabrication scaffolds.

Mechanistic
42
Context
46
Throughput
72
First test time
74
First test cost
63
Replication
0
Practicality
59
Translatability
9

Assumptions: Assumes protein-complex structure prediction is useful for early design of protein carpentry components.

Missing evidence: No direct evidence for beta-solenoid design, programmable protein self-assembly, covalent assembly control, or molecular 3D printing applications.

Gap applicability

31

Molecular dynamics simulation is a computational method for modeling atomistic conformational dynamics of proteins and analyzing residue fluctuations and vibrational behavior. In the cited studies, it was used as a noninvasive approach to validate dynamic behavior and to compare PAS-domain dynamics across functional groups.

Atomistic dynamics simulation could plausibly support the gap by checking whether designed protein building blocks maintain the conformations needed for positional control. That is relevant to early-stage protein carpentry, but the provided evidence is limited to analyzing protein dynamics rather than directing precise synthetic chemistry.

Mechanistic
31
Context
38
Throughput
66
First test time
68
First test cost
58
Replication
20
Practicality
83
Translatability
12

Assumptions: Assumes conformational stability is a key bottleneck for programmable protein assembly.

Missing evidence: No direct evidence for positional catalysis, scaffolded covalent assembly, nanoscale fabrication, or protein-origami design workflows.
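
As a concrete anchor for what a molecular dynamics engine does at its core, here is a minimal velocity-Verlet integrator applied to a 1-D harmonic "bond". This is a toy system for illustration, not the PAS-domain simulations cited above:

```python
# Minimal velocity-Verlet integrator, the core time-stepping rule used
# by molecular dynamics engines, applied here to a 1-D harmonic "bond"
# rather than a real protein force field.
def velocity_verlet(x, v, force, mass, dt, steps):
    a = force(x) / mass
    for _ in range(steps):
        x += v * dt + 0.5 * a * dt * dt          # position update
        a_new = force(x) / mass                  # force at new position
        v += 0.5 * (a + a_new) * dt              # velocity update
        a = a_new
    return x, v

k, m = 1.0, 1.0                                  # spring constant, mass
x_t, v_t = velocity_verlet(1.0, 0.0, lambda x: -k * x, m, dt=0.01, steps=10_000)
energy = 0.5 * m * v_t ** 2 + 0.5 * k * x_t ** 2
# energy stays near the initial 0.5: the near-conservation that makes
# this symplectic scheme the standard choice in MD codes
```

Production MD packages add thermostats, multi-atom force fields, and periodic boundaries on top of exactly this update loop.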

Gap applicability

29

All-atom replica exchange discrete molecular dynamics is a computational docking method used to generate structural models of calcium and integrin binding protein 1 (CIB1) bound to α-integrin cytoplasmic tails. In the cited CIB1 study, it predicted that multiple α-integrin tails engage the same hydrophobic binding pocket on CIB1.

This docking and conformational-sampling method could be useful for testing whether modular protein components form intended interfaces, which is one subproblem in protein-based positional assembly. However, the supplied evidence only supports protein-peptide binding-model generation, not direct control of chemistry at defined positions.

Mechanistic
28
Context
34
Throughput
63
First test time
64
First test cost
55
Replication
20
Practicality
83
Translatability
12

Assumptions: Assumes interface prediction for modular protein parts is relevant to the proposed protein carpentry path.

Missing evidence: No evidence for use in nanoscale fabrication, beta-solenoid assembly, catalytic placement, or atomically precise synthesis.

Gap applicability

25

The alpha-helical domain linker is a construct pattern in which a rigid alpha-helical segment is placed between fused protein domains to couple their functions. In the cited design context, it is proposed to act as a helical allosteric lever arm that transmits conformational information between domains.

A rigid helical linker is at least an actionable construct pattern for transmitting geometry between fused protein domains, which could matter in engineered protein assemblies that need controlled relative positioning. The evidence provided does not show use for nanoscale fabrication or direct positional chemistry, so this is only a weak, enabling link.

Mechanistic
24
Context
29
Throughput
57
First test time
71
First test cost
72
Replication
9
Practicality
71
Translatability
11

Assumptions: Assumes rigid domain positioning within fusion proteins could contribute to prototype protein assembly systems.

Missing evidence: No direct evidence for scaffold precision at fabrication-relevant scales, self-assembly behavior, or catalytic group placement.

1 unmatched gap

Current Chip Fabrication Methods are Extremely Expensive and Hard to Change

Nanoscale Fabrication· 6 capabilities
No tool matches

Modern chip fabs are enormous, multi-billion-dollar facilities with limited versatility in what they can produce. This bottleneck restricts the ability to create assemblies with diverse molecular components on a small scale.

Foundational Capabilities

Digital Assembly

Use digital techniques to plan and assemble chip components, integrating computational design with physical fabrication.

1 resource

Expand the Scope of Chip Fabs

Broaden the capabilities of existing chip fabrication facilities to produce a wider range of devices, potentially reducing costs and expanding functionality.

2 resources

Laser Direct Write

Employ laser direct writing techniques to pattern chips with high precision, offering an alternative to traditional lithographic methods.

1 resource

Nano Modular Electronics

Develop modular electronic components at the nanoscale, enabling flexible, low-cost assemblies with high molecular diversity.

2 resources

Randomized DNA-Based Chip Assembly

Explore methods to assemble chips using randomized DNA as a templating or assembly tool, allowing for scalable production with inherent molecular diversity.

2 resources

Shrink Molecularly Diverse 3D Assemblies for Fabrication

Develop methods for producing smaller, more diverse assemblies using patterned techniques that allow for greater molecular variability.

3 resources

Candidate Tools

No candidate tools matched this gap. This may be because the gap describes a domain (e.g. astrophysics, social science) where the current biotech-focused toolkit has limited relevance, or because the gap requires capability types not yet well-represented in the database.

Astrophysics

4 gaps · 4 tool connections · best match 29%

Limited Detection of Gravitational Waves Across the Frequency Spectrum

Astrophysics· 3 capabilities

Tools

4

Best

29%

Detecting gravitational waves allows us to observe cosmic events like black hole mergers and neutron star collisions that are invisible through traditional telescopes. Current gravitational wave detectors are primarily sensitive to audio-band signals. Some phenomena, including speculative ones such as high-frequency emissions from advanced propulsion systems, might only be detectable with novel approaches.

Foundational Capabilities

Decihertz (~0.1 Hz) Gravitational Wave Detection

There are an exceptionally large number of compelling signals that live in this band, including: the elusive intermediate mass black holes, white dwarf mergers (putative SN Ia progenitor), tidal disruption events, high precession and eccentricity systems, neutron star merger early warning, and more.

1 resource

Explore High-Frequency Gravitational Wave Detection

Investigate new methodologies to detect high-frequency gravitational waves, potentially unveiling phenomena that are invisible to current detectors.

2 resources

Space-Based Gravitational Wave Detection

Space-based Laser Interferometer Gravitational-Wave Observatories (LIGOs) allow an extremely large detector to study regions of the gravitational wave spectrum that are inaccessible from Earth.

1 resource

Candidate Tools

Gap applicability

29

Stimulated Brillouin scattering slow light in optical fibers is an optical-fiber delay method for microwave photonics that provides continuously tunable delay of broadband analog signals. In the cited demonstration, pump-laser chirp control synthesized a broadened SBS response that delayed 1-GHz-wide linear frequency-modulated radio-frequency signals.

The gap explicitly includes interest in novel high-frequency gravitational-wave detection approaches, and this item is an optical-fiber microwave-photonics method for delaying broadband analog RF signals. That could plausibly support signal-conditioning or readout-chain prototyping in photonic detection concepts, but the supplied evidence does not show direct use in gravitational-wave sensing.

Mechanistic
42
Context
46
Throughput
38
First test time
34
First test cost
28
Replication
9
Practicality
59
Translatability
9

Assumptions: Assumes photonic signal-processing components are relevant to exploratory high-frequency detector architectures.

Missing evidence: No evidence of gravitational-wave detection use, sensitivity impact, noise performance in detector settings, or compatibility with decihertz/space-based observatories.

Gap applicability

28

Pump laser chirp control for broadened stimulated Brillouin scattering (SBS) slow light is an optical spectrum synthesis method in optical fibers. It uses chirp-controlled pump spectral shaping to broaden the SBS slow-light response, enabling continuously tunable delay of broadband analog radio-frequency signals with low amplitude and phase distortion.

This is a more specific optical spectrum-synthesis method for broadened SBS slow light, enabling tunable delay of broadband analog RF signals with low distortion. It could plausibly be useful as a subsystem method in experimental photonic readout or modulation schemes relevant to high-frequency detection concepts.

Mechanistic
40
Context
44
Throughput
38
First test time
34
First test cost
28
Replication
9
Practicality
59
Translatability
9

Assumptions: Assumes tunable low-distortion RF photonic delay is useful in candidate optical frequency-modulation detection setups.

Missing evidence: No direct evidence linking the method to gravitational-wave instrumentation, detector sensitivity gains, or operation in the target frequency bands.

The liquid-crystal-on-silicon spatial light modulator is an LCOS-based device engineered for phase-only optical modulation. It modulates incident light wavefronts with a phase range exceeding one wavelength and was developed for wavefront control applications including adaptive optics, optical manipulation, and laser processing.

Wavefront control is plausibly relevant to precision optical systems, and the item is directly described as a phase-only optical modulation device for adaptive optics and related applications. It might help in benchtop prototyping of optical control paths for novel interferometric or modulation-based sensing concepts.

Mechanistic
31
Context
36
Throughput
33
First test time
41
First test cost
32
Replication
9
Practicality
59
Translatability
9

Assumptions: Assumes adaptive wavefront control is relevant to exploratory optical detector development.

Missing evidence: No evidence for gravitational-wave detector integration, noise reduction performance, metrology precision, or suitability for space-based systems.

Gap applicability

23

A single-mode optical fiber serves as the propagation and interaction medium for all-optical polarization control of telecommunication signals. In the cited 2011 study, a signal wave in standard single-mode fiber interacted nonlinearly with a counterpropagating control pump beam to produce polarization attraction and stabilization for 10-Gb/s signals near 1550 nm.

The gap includes optical-frequency-modulation-style high-frequency detection ideas, and single-mode fiber is a basic delivery medium for nonlinear optical interactions and signal transport. It could be a practical component in early photonic benchtop experiments, but by itself it does not address the core detection bottleneck.

Mechanistic
24
Context
30
Throughput
35
First test time
55
First test cost
58
Replication
9
Practicality
59
Translatability
9

Assumptions: Assumes simple photonic prototyping infrastructure is in scope for exploratory detector concepts.

Missing evidence: Only evidence provided is for telecom polarization control, not gravitational-wave sensing, precision metrology, or detector-band relevance.

3 unmatched gaps

Frontier Telescopes Are Expensive and Take Decades to Build

Astrophysics· 3 capabilities
No tool matches

The current model for building space telescopes is cost-prohibitive and slow, often requiring decades of development. New approaches that exploit reduced launch costs and modular assembly are needed to accelerate telescope construction and reduce costs, and there needs to be the organizational structure and hunger to adopt such methods.

Foundational Capabilities

Leveraging Commercial Component Advances

Utilize advances in commercial components and software to accelerate the development cycle of telescopes.

3 resources

Leveraging Modular Assembly & Reduced Launch Costs for Space Telescopes

Leverage the reduced launch costs enabled by vehicles like Starship and employ modular assembly techniques to build telescopes more rapidly and economically.

4 resources

Space Telescope Factory

Standardize a broadly capable design, and create a space telescope factory to produce modular components at scale, dramatically reducing the cost and time required for mission development.

2 resources

Candidate Tools

No candidate tools matched this gap. This may be because the gap describes a domain (e.g. astrophysics, social science) where the current biotech-focused toolkit has limited relevance, or because the gap requires capability types not yet well-represented in the database.

Higher-Resolution Views of the Universe Are Roadblocked by Formation Flying Technology

Astrophysics· 1 capabilities
No tool matches

Space telescopes offer vastly superior sensitivity to ground-based systems, but enhancing their resolution requires spacecraft with sub-micron precision. Angular resolution of telescopes is limited by the size of the primary optic. Coherent aperture synthesis (interferometry) gets around this by coherently combining signal from separated telescopes where the resolution is proportional to the baseline separation. This has been very successful in the radio (see Event Horizon Telescope) but in the optical regime requires extremely difficult optomechanics and controls, and the sensitivity on the ground is inherently limited by the coherence time of the atmosphere.

Foundational Capabilities

Advance Formation Flying Technology for Spacecraft

Even small optical interferometers in space could vastly exceed the sensitivity of ground-based systems, and directly image Earth-like planets around Sun-like stars, but this requires advances in precision (<1 micron) formation flying technology for spacecraft. [This is both an engineering bottleneck and a scientific bottleneck]

4 resources

Candidate Tools

No candidate tools matched this gap. This may be because the gap describes a domain (e.g. astrophysics, social science) where the current biotech-focused toolkit has limited relevance, or because the gap requires capability types not yet well-represented in the database.

Major Planetary Science and Astrobiology Missions Are Not Realized by Existing Government Space Agencies

Astrophysics· 1 capabilities
No tool matches

Some of the most important planetary science and astrobiology missions remain unrealized by traditional government agencies like NASA. Alternative, independent initiatives are needed to explore these high-priority scientific questions.

Foundational Capabilities

Independent Planetary Missions

Develop and launch independent planetary science missions to explore key astrobiological questions. Relying solely on traditional government agencies like NASA unnecessarily limits planetary science and astrobiology research.

4 resources

Candidate Tools

No candidate tools matched this gap. This may be because the gap describes a domain (e.g. astrophysics, social science) where the current biotech-focused toolkit has limited relevance, or because the gap requires capability types not yet well-represented in the database.

Geophysics and Climate

6 gaps · 3 tool connections · best match 39%

Insufficient Integrated Earth Climate Models

Geophysics and Climate· 3 capabilities

Tools

3

Best

39%

Current models struggle to accurately predict climate tipping points due to the intricate interplay of diverse climatic factors, hindering proactive intervention efforts. Additionally, designing optimal climate control strategies is challenging because of the nonlinear and multifaceted interactions among economic, technological, and social factors.

Foundational Capabilities

Build a Predictive System for Tipping Points

Develop advanced predictive systems that integrate diverse climate data to forecast tipping points more accurately.

1 resource

Develop Integrated Assessment Models

Create more robust integrated assessment models that minimize ungrounded economic assumptions and better capture sensitive intervention points and amplification mechanisms in socioeconomic and political systems.

5 resources

Next Generation Earth System Model

Build open, composable Earth System Simulation infrastructure based on high-resolution data. De-silo climate data across ESMs, IAMs, observational data to increase collaborative potential. Traditional ESMs are built using legacy programming languages, which introduce a barrier for new entrants to the field and impede the usage of hybrid machine-learning techniques and modern computing architectures. Collect high-resolution earth system data: Today’s global models and reanalyses are at tens of kilometers resolution. Build an open dataset of ultra-high-resolution simulations or merged observations (e.g. <1 km, resolving clouds, storms, and local topography). It could train AI to capture fine-scale processes (convection, urban heat islands, etc.) that current models miss.

4 resources

Candidate Tools

Gap applicability

39

The Bayesian optimization framework is a computational method built from high-throughput Lustro measurements to guide control of blue light-sensitive optogenetic systems. It uses data-driven learning, uncertainty quantification, and experimental design to identify light induction conditions for multiplexed regulation in Saccharomyces cerevisiae.

This item is an actionable computational optimization method with uncertainty quantification and experimental design, which is directionally relevant to optimizing intervention or control strategies in nonlinear, uncertain systems. It could plausibly inform climate-control strategy search or model-tuning workflows, although the supplied evidence is from optogenetic control rather than climate modelling.

Mechanistic
62
Context
28
Throughput
66
First test time
63
First test cost
58
Replication
9
Practicality
59
Translatability
9

Assumptions: Assumes the optimization and uncertainty-quantification framework is portable beyond its original biological application.

Missing evidence: No evidence here of use on climate, geophysical, socioeconomic, or integrated assessment models.
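
The loop described above (surrogate model, uncertainty estimate, acquisition-driven choice of the next experiment) can be sketched in a few lines. This toy uses an upper-confidence-bound rule with an inverse-distance surrogate and nearest-sample distance as a crude stand-in for posterior uncertainty; it is not the cited framework's actual implementation:

```python
import math

# Toy uncertainty-guided search in the spirit of Bayesian optimization:
# a surrogate prediction plus an exploration bonus selects the next
# condition to test against a hidden objective.
def objective(x):               # hidden response, e.g. output vs. input dose
    return math.sin(3 * x) * (1 - x) + x

def surrogate(x, samples):      # inverse-distance-weighted mean of observations
    num = den = 0.0
    for xi, yi in samples:
        w = 1.0 / (abs(x - xi) + 1e-6)
        num, den = num + w * yi, den + w
    return num / den

def uncertainty(x, samples):    # crude proxy: distance to nearest observation
    return min(abs(x - xi) for xi, _ in samples)

grid = [i / 100 for i in range(101)]
samples = [(x, objective(x)) for x in (0.0, 0.5, 1.0)]   # initial design
for _ in range(10):
    # Upper-confidence-bound acquisition: exploit high predicted value,
    # explore poorly sampled regions.
    x_next = max(grid, key=lambda x: surrogate(x, samples)
                 + 2.0 * uncertainty(x, samples))
    samples.append((x_next, objective(x_next)))

best_x, best_y = max(samples, key=lambda s: s[1])
```

A real implementation would replace the distance heuristic with a Gaussian-process posterior, but the exploit/explore structure is the same.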

random forest modelling

computation method

Gap applicability

35

Predictive accuracy has been increased by using random forest modelling, which identifies nonlinear patterns in the data.

The gap explicitly involves nonlinear interactions and prediction, and random forest modelling is a usable method for learning nonlinear patterns from heterogeneous data. It could be a lightweight baseline for tipping-point prediction or surrogate modelling, but the provided evidence is very sparse and from a different application area.

Mechanistic
48
Context
22
Throughput
72
First test time
78
First test cost
80
Replication
30
Practicality
30
Translatability
30

Assumptions: Assumes access to structured climate or integrated assessment datasets suitable for supervised learning.

Missing evidence: No supplied mechanism details beyond nonlinear pattern detection, and no climate-specific validation or replication metadata.
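
To make the method concrete, here is a miniature, self-contained version of the idea: bootstrap-aggregated decision stumps recovering a nonlinear (quadratic) pattern that a linear fit would miss. Real random forests use deep trees, feature subsampling, and a library such as scikit-learn; this sketch only illustrates the mechanism:

```python
import random
import statistics

# Miniature "random forest": an average of decision stumps, each fit to
# a bootstrap resample of a 1-D nonlinear dataset (y = x^2).
random.seed(0)
X = [i / 10 for i in range(-20, 21)]
y = [x * x for x in X]

def fit_stump(xs, ys):
    # Choose the threshold minimising squared error of piecewise means.
    best = None
    for t in xs:
        left = [v for u, v in zip(xs, ys) if u <= t]
        right = [v for u, v in zip(xs, ys) if u > t]
        if not left or not right:
            continue
        lm, rm = statistics.mean(left), statistics.mean(right)
        err = sum((v - lm) ** 2 for v in left) + sum((v - rm) ** 2 for v in right)
        if best is None or err < best[0]:
            best = (err, t, lm, rm)
    _, t, lm, rm = best
    return lambda x: lm if x <= t else rm

def fit_forest(xs, ys, n_trees=200):
    stumps = []
    for _ in range(n_trees):
        idx = [random.randrange(len(xs)) for _ in xs]   # bootstrap resample
        stumps.append(fit_stump([xs[i] for i in idx], [ys[i] for i in idx]))
    return lambda x: statistics.mean(s(x) for s in stumps)

forest = fit_forest(X, y)
# The ensemble tracks the U-shape: predictions rise toward both ends,
# which no single stump (two constant levels) can reproduce.
```

The same bagging-of-weak-learners mechanism is what lets library random forests capture nonlinear interactions in heterogeneous tabular data.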

Gap applicability

31

Computational modelling and machine learning are emerging as pivotal tools in enhancing optical techniques, offering new avenues for analysing complex data and optimizing therapeutic strategies.

This is a broadly actionable computation method, and the gap centers on integrating complex data and optimizing strategies, which machine learning can sometimes support. However, the supplied evidence is generic and tied to optical techniques rather than Earth system or integrated climate modelling.

Mechanistic
41
Context
18
Throughput
68
First test time
70
First test cost
72
Replication
30
Practicality
30
Translatability
30

Assumptions: Assumes the item represents a reusable modelling workflow rather than only a domain-specific observation.

Missing evidence: No specific model class, no uncertainty handling, no climate-domain evidence, and no replication or practicality metadata.

5 unmatched gaps

Inadequate Emergency Climate Interventions and Response

Geophysics and Climate· 13 capabilities
No tool matches

There is a critical need for more precise, rapid, and localized climate intervention strategies. Current approaches lack the fine-grained models and rapid response mechanisms required to adapt to diverse climate impacts, such as heatwaves, which demand swift and effective action. The ability to control local weather phenomena—including cloud formation and hurricanes—could help mitigate climate risks.

Foundational Capabilities

Backup Power Transformer Protection

Establish protocols and infrastructure for banking backups of power transformers to mitigate the impact of solar flare-induced disruptions.

1 resource

Cloud Seeding

Assess the feasibility of using cloud seeding techniques to stimulate precipitation and modulate local weather conditions in a controlled manner.

2 resources

Dynamic Ecosystem Feedback Modeling

Develop dynamic models that incorporate microbial, hydrological, and climate-driven processes to better capture methane/N₂O feedback loops.

1 resource

Earthquake Prediction

1 resource

Equip Ships and Planes as Sensors or Modulators of Climate

Retrofit existing vessels and aircraft with advanced sensors to systematically measure aerosol–cloud interactions in situ, improving our understanding and models.

3 resources

Fine-Grained Impact Modeling and Response

Develop detailed models that capture the local impacts of climate change (e.g., heatwaves) and implement responsive strategies to minimize harm and adapt to changing conditions.

2 resources

Hurricane Diversion or Mitigation

Explore strategies to divert or mitigate the impact of hurricanes using advanced atmospheric control methods.

3 resources

Improved Remote Sensing of Aerosol Particle Size and Composition

Develop improved sensor technologies for airborne particulates, e.g., hyperspectral and lidar-based remote sensing for aerosol particle size, type, and radiative forcing potential

1 resource

Improve Solar Flare Models

Enhance predictive models for solar flares using advanced data analytics and observation techniques to better forecast solar activity. Note: NASA spends $0.8 bn/yr on heliophysics. 

1 resource

Ocean Heat and Circulation Modeling

Global ARGO-like sensors for deep ocean currents. We have relatively sparse ocean data compared to atmospheric data. Initiatives like Argo floats (~4,000 drifting sensors) have collected over two million ocean profiles of temperature and salinity, providing a crucial 3D view of the oceans. Expanding such efforts (more floats, deeper measurements, biogeochemical sensors) and releasing the data in unified formats could enable AI to model ocean currents, carbon uptake, and climate patterns like El Niño with greater skill. A gap remains in high-resolution, full-depth ocean data that AI models could exploit for improved climate forecasts.

3 resources

Starlink Atmospheric Tomography

Explore the concept of using satellite constellations (like Starlink) to perform atmospheric tomography, thereby building a 3D picture of atmospheric dynamics.

3 resources

Stratospheric Observation Platforms

Deploy specialized platforms in the stratosphere to gather high-resolution data on atmospheric processes and composition.

1 resource

Subduction Zone Observation

1 resource

Candidate Tools

No candidate tools matched this gap. This may be because the gap describes a domain (e.g. astrophysics, social science) where the current biotech-focused toolkit has limited relevance, or because the gap requires capability types not yet well-represented in the database.

Inadequate Interventions for Greenhouse Gas Removal

Geophysics and Climate· 5 capabilities
No tool matches

We need more effective approaches to removing greenhouse gases from the atmosphere to mitigate climate impacts. However, challenges remain in harnessing natural carbon removal systems—due to difficulties in accurately measuring their environmental impact—and in reducing methane emissions from sources like the cow rumen. Innovative strategies, including modifying cow microbiomes and deploying scalable measurement and validation platforms, are essential to advance greenhouse gas removal efforts. See also: https://www.bezosearthfund.org/news-and-insights/bezos-earth-fund-releases-global-roadmap-to-scale-greenhouse-gas-removal-technologies and https://gaps.frontierclimate.com/

Foundational Capabilities

Develop Scaled Platforms for Validation & Measurement

Create computational and experimental platforms tailored to validate, measure, report, and verify carbon removal and its environmental impacts in natural systems such as the ocean or soil. Beyond MRV, methods to valorize CDR at scale are needed.

9 resources

Integrated Methane and N2O Monitoring in Natural Systems

Deploy networks of in-situ and remote sensors to monitor emissions across key ecosystems like thawing permafrost, peatlands, and tropical forests. 

3 resources

Modify the Cow Microbiome

Reduce methane production from cows by modifying or removing the methanogen microbes in their rumen:
• Microbe-targeting vaccines
• Gene-engineered cow microbiome
• Highly specific antibacterials

3 resources

Precise Methane Production Measurement

Implement high-precision monitoring systems to measure methane production from livestock, enabling optimized agricultural practices.

2 resources

Removal of Methane and N2O from the Atmosphere

Develop systems to oxidize diffuse atmospheric methane and nitrous oxide.

1 resource

Candidate Tools

No candidate tools matched this gap. This may be because the gap describes a domain (e.g. astrophysics, social science) where the current biotech-focused toolkit has limited relevance, or because the gap requires capability types not yet well-represented in the database.

Insufficient Monitoring and Modeling of Climate Processes and Control Paths

Geophysics and Climate· 9 capabilities
No tool matches

We have limited capacity to predict key disruptive events, such as solar flares that threaten power grids and communications, alongside an incomplete understanding of natural processes (atmospheric, ocean, etc.) that underpin climate models. We need better monitoring tools for characterizing phenomena that impact climate dynamics, such as aerosol-cloud interactions, and assessing potential interventions such as marine cloud brightening. These issues underscore the need for enhanced observational tools and more sophisticated models of climate processes.

Foundational Capabilities

Dynamic Ecosystem Feedback Modeling

Develop dynamic models that incorporate microbial, hydrological, and climate-driven processes to better capture methane/N₂O feedback loops.

1 resource

Efficient Ocean Exploration Systems

Innovate and deploy more efficient and robust systems for deep ocean exploration, enabling comprehensive study of deep-sea environments and their unique biology.

4 resources

Equip Ships and Planes as Sensors or Modulators of Climate

Retrofit existing vessels and aircraft with advanced sensors to systematically measure aerosol–cloud interactions in situ, improving our understanding and models.

3 resources

Improved Remote Sensing of Aerosol Particle Size and Composition

Develop improved sensor technologies for airborne particulates, e.g., hyperspectral and lidar-based remote sensing for aerosol particle size, type, and radiative forcing potential

1 resource

Improve Solar Flare Models

Enhance predictive models for solar flares using advanced data analytics and observation techniques to better forecast solar activity. Note: NASA spends $0.8 bn/yr on heliophysics. 

1 resource

Integrated Methane and N2O Monitoring in Natural Systems

Deploy networks of in-situ and remote sensors to monitor emissions across key ecosystems like thawing permafrost, peatlands, and tropical forests. 

3 resources

Ocean Heat and Circulation Modeling

Global Argo-like sensor networks for deep ocean currents. We have relatively sparse ocean data compared to atmospheric data. Initiatives like Argo floats (~4,000 drifting sensors) have collected over two million ocean profiles of temperature and salinity, providing a crucial 3D view of the oceans. Expanding such efforts (more floats, deeper measurements, biogeochemical sensors) and releasing the data in unified formats could enable AI to model ocean currents, carbon uptake, and climate patterns like El Niño with greater skill. A gap remains in high-resolution, full-depth ocean data that AI models could exploit for improved climate forecasts.

3 resources

Starlink Atmospheric Tomography

Explore the concept of using satellite constellations (like Starlink) to perform atmospheric tomography, thereby building a 3D picture of atmospheric dynamics.

3 resources

Stratospheric Observation Platforms

Deploy specialized platforms in the stratosphere to gather high-resolution data on atmospheric processes and composition.

1 resource

Candidate Tools

No candidate tools matched this gap. This may be because the gap describes a domain (e.g. astrophysics, social science) where the current biotech-focused toolkit has limited relevance, or because the gap requires capability types not yet well-represented in the database.

Intervening in Earth Systems at Scale is Largely Untested

Geophysics and Climate· 8 capabilities
No tool matches

There are currently no direct interventions to address climate tipping points such as glacier melt, leaving some critical processes unmitigated. The fundamental science and engineering principles behind emergency climate interventions remain largely untested at relevant scales, limiting our preparedness for rapid climate change. See: https://www.outlierprojects.org/

Foundational Capabilities

Arctic Interventions

Understand whether specific interventions in the Arctic, from Mixed-Phase Cloud Thinning to Sea Ice Thickening, can counter extreme warming and tipping points in the region.

5 resources

Glacial Climate Intervention

Develop and test intervention strategies aimed at stabilizing glaciers and mitigating associated climate impacts.

4 resources

Glacier Measurement Tools

New technologies to collect data about ice sheets and improve the efficiency with which we can use that data. For example, UAV-borne ice-penetrating radar systems.

3 resources

Planetary Sunshade Fundamentals

Understand whether a planetary sunshade could be viable.

2 resources

Protecting Ocean Ecosystems

Develop approaches to protect ocean ecosystems. For example, studies have shown that shading corals during the four hottest hours of the day can significantly reduce bleaching.

2 resources

Removal of Methane and N2O from the Atmosphere

Develop systems to oxidize diffuse atmospheric methane and nitrous oxide.

1 resource

Safely Testing Emergency Atmospheric Interventions

Conduct controlled, scaled experiments to validate the basic science and engineering principles behind emergency climate interventions such as sunlight reflection modification and atmospheric methane removal.

3 resources

Solar Radiation Modification Fundamentals

Measurement, reporting, and verification; foundational modeling; and intervention development for solar radiation modification.

13 resources

Candidate Tools

No candidate tools matched this gap. This may be because the gap describes a domain (e.g. astrophysics, social science) where the current biotech-focused toolkit has limited relevance, or because the gap requires capability types not yet well-represented in the database.

Outdated and Fragmented Recycling, Cleanup and Bioremediation Systems

Geophysics and Climate· 2 capabilities
No tool matches

We need to improve our management of natural systems. The world's oceans suffer from extensive pollution, undermining marine ecosystems and disrupting global climate processes. Inefficient wildfire management is a prime example of problematic management of natural fire cycles.

Foundational Capabilities

Advanced Recycling Capabilities

Develop and scale advanced recycling methods, especially for plastics.

2 resources

Efficient Ocean Cleaning Technologies

Innovate and deploy efficient, scalable methods for cleaning up ocean pollution to restore marine ecosystems and improve ocean health.

1 resource

Candidate Tools

No candidate tools matched this gap. This may be because the gap describes a domain (e.g. astrophysics, social science) where the current biotech-focused toolkit has limited relevance, or because the gap requires capability types not yet well-represented in the database.

Immunology

2 gaps · 3 tool connections · best match 45%

Our Immune System Can Uniquely Recognize Nearly Any Molecule but We Don’t Know the Recognition Code

Immunology· 1 capability

Tools

3

Best

45%

A better understanding of how the immune system interacts at the molecular level with threats and triggers is critical. This knowledge would enable the development of predictive tools and technologies to augment immune responses—improving interventions against infections, cancers, and autoimmune disorders.

Foundational Capabilities

T-cell and B-cell Receptor Antigen Mapping

Construct a comprehensive map linking trillions of T-cell receptors (TCRs) and B-cell receptors (BCRs) to millions of disease antigens. This map will facilitate antibody–antigen binding prediction, allowing the identification of target antigens based solely on immune cell DNA sequences.

5 resources

Candidate Tools

synNotch reporter system

multi-component switch

Gap applicability

45

Here, using the synthetic Notch (synNotch) receptor that tethers antigen binding to customized transgene expression, we linked intratumoral immune-cancer cell communication to a simple secreted reporter blood test... Our synNotch reporter system allows for the monitoring of antigen-dependent intratumoral immune-cancer cell interactions through a simple and convenient blood test.

This system directly couples antigen recognition by engineered immune cells to a measurable reporter output, which could support functional mapping of receptor-antigen interactions. That is mechanistically relevant to building datasets that connect immune recognition events to specific antigens.

Mechanistic
72
Context
68
Throughput
34
First test time
42
First test cost
33
Replication
20
Practicality
30
Translatability
35

Assumptions: Assumes the gap can be partially addressed by functional receptor-antigen screening systems, not only by native repertoire observation.

Missing evidence: No evidence here on screening scale, compatibility with diverse TCR/BCR libraries, or use for recognition-code discovery beyond a CD19-focused example.

scMOVIR

computational method

Gap applicability

44

Here, we present scMOVIR, a single-cell multi-omics database for human viral infections and immune responses.

A single-cell multi-omics immune-response database could help generate or benchmark hypotheses about immune recognition patterns in human infection contexts. It is relevant as a data resource for discovering correlates of receptor-antigen recognition, though it does not itself perform direct antigen mapping.

Mechanistic
41
Context
74
Throughput
66
First test time
78
First test cost
72
Replication
30
Practicality
55
Translatability
48

Assumptions: Assumes curated human immune-response datasets are useful intermediate resources for recognition-code modeling.

Missing evidence: No explicit evidence that the database contains paired TCR/BCR-antigen labels, receptor sequence linkage, or assays for direct molecular recognition.

Gap applicability

28

Chimeric antigen receptor (CAR) T-cell therapy utilizes synthetic biology techniques to engineer T cells to specifically target tumor cells using most commonly single-chain variable fragments (scFvs) to recognize tumor-associated antigens.

CAR-based systems are directly about programmable antigen recognition and could provide engineered testbeds for studying how binding modules encode specificity. The summary specifically notes scFv-based recognition of tumor antigens, which is adjacent to the broader recognition-code problem.

Mechanistic
38
Context
52
Throughput
22
First test time
28
First test cost
20
Replication
25
Practicality
28
Translatability
50

Assumptions: Assumes engineered antigen-recognition constructs can serve as simplified models for studying recognition logic.

Missing evidence: No evidence here for use in receptor-antigen mapping, high-throughput screening, or inference of native TCR/BCR recognition rules.

1 unmatched gap

Our Immune Memory Contains a Detailed History of Exposures but We Can’t Read It

Immunology· 5 capabilities
No tool matches

Immunological diseases often have nonobvious, complex etiologies and pathophysiologies that are difficult to identify.

Foundational Capabilities

Immune Tolerance Induction to Enable In-Human Synthetic Biology with Foreign Proteins

We need a way to programmably induce immune tolerance to a user-defined foreign protein, in order to enable many new forms of gene and cell therapy (not to mention help with autoimmune diseases). This is especially needed for brain computer interfaces, as many of the most powerful concepts for BCI would involve adding foreign proteins such as transducer proteins for optical or acoustic signals. As Hannu Rajaniemi wrote, “a flexible ability to induce immune tolerance to opsins is a prerequisite of a two-way mind meld with computers”.

1 resource

Large-Scale Profiling of Infection to Expose Links to Neurodegeneration

Conduct a large-scale, comprehensive study to establish causal links between persistent microbial/viral infections (such as herpes simplex 1) and neurodegenerative diseases like Alzheimer’s Disease. Such a study would illuminate the role of infections in increasing disease risk and progression. This effort could serve as a sequel to the Genome-Wide Association Studies (GWAS) that have been performed since the completion of the Human Genome Project. Many diseases failed to show obvious genetic etiologies from those GWAS efforts, suggesting a role for the environment in disease causation. Large-scale infection profiling could therefore unearth etiologies that were not possible to detect by GWAS.

4 resources

Longitudinal Multi-Omic Immune Profiling Across Populations

Generate longitudinal, multi-omic immunological data from a diverse cohort of individuals. This dataset would be critical for enabling immune-ome modeling and prediction (for example, in forecasting vaccine responses).

1 resource

Repertoire- and Cellular Subset-Level Adaptive Immune Tools

Adapt and build experimental and computational methods to read out distributed and sparse immune memory signatures from adaptive immune cells. Beyond individual antibody-antigen binding, this approach focuses on signatures distributed across cellular subpopulations and the repertoire. Decoding these signatures could identify hidden causes of and cures for disease, enabling more accurate diagnosis, treatment, and prevention of chronic conditions.

4 resources

Universal Immune-Computer Interface (ICI)

Develop a universal immune-computer interface to enhance the immune system’s targeting of pathogens and cancers, while reducing issues such as autoimmunity and transplant rejection. The ICI would involve two-way coupling—where the immune system and computer mutually optimize their matching processes—and real-time feedback loops. An example could be a wearable device that integrates mRNA manufacturing with single-cell sequencing.

1 resource

Candidate Tools

No candidate tools matched this gap. This may be because the gap describes a domain (e.g. astrophysics, social science) where the current biotech-focused toolkit has limited relevance, or because the gap requires capability types not yet well-represented in the database.

Social Science

9 gaps · 2 tool connections · best match 39%

Policy Creation and Evaluation is Manual and Suffers from Low Efficiency and Accountability

Social Science· 1 capability

Tools

2

Best

39%

Policy development and evaluation processes today rely heavily on manual human review to ensure accountability. However, as AI systems increasingly support or automate these processes, this human-centered accountability becomes challenging. Human reviewers risk becoming a critical bottleneck, slowing policy implementation. New tools are needed to streamline policy creation and evaluation, and to ensure consistency and compliance before deployment.

Foundational Capabilities

Formalization of Policy

Build tools that enable policy-makers to write mechanized (i.e., runnable software) versions of policy, moving more of the subjective evaluation ahead of the action (rather than interpreting more things post facto). This would leverage formal logic to streamline policy development and governance.

11 resources
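To make the idea of "runnable policy" concrete, here is a minimal illustrative sketch of a policy clause expressed as an executable rule, so compliance can be tested before deployment rather than interpreted after the fact. All names here (the `Applicant` record, the eligibility thresholds) are hypothetical, not drawn from any real policy or tool in this map.

```python
# Sketch: a (hypothetical) eligibility clause written as a runnable rule.
# Encoding the clause as code lets reviewers test it against cases
# before deployment, instead of interpreting prose post facto.
from dataclasses import dataclass


@dataclass
class Applicant:
    age: int
    income: float
    resident: bool


def eligible_for_benefit(a: Applicant) -> bool:
    """Mechanized version of a hypothetical clause:
    resident, aged 18 or over, income below a 30,000 threshold."""
    return a.resident and a.age >= 18 and a.income < 30_000


# Pre-deployment checks: the rule's behavior on edge cases is explicit.
assert eligible_for_benefit(Applicant(age=25, income=20_000, resident=True))
assert not eligible_for_benefit(Applicant(age=17, income=20_000, resident=True))
```

The design point is that disagreements about the clause surface as failing test cases during drafting, which is the "subjective evaluation ahead of the action" the capability describes.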

Candidate Tools

GUBS

engineering method

Gap applicability

39

GUBS (Genomic Unified Behavior Specification) is a domain-specific, rule-based declarative language for behavioral specification of synthetic biological devices. It represents device programs as behavioral specifications for open systems rather than as complete closed-system descriptions.

GUBS is a rule-based declarative language with compilation related to automated theorem proving, which plausibly aligns with the gap's need for mechanized, runnable policy specifications and pre-deployment checking. It is not a policy tool, but its formal-specification pattern is one of the few items with a mechanism resembling policy formalization.

Mechanistic
62
Context
28
Throughput
55
First test time
46
First test cost
58
Replication
20
Practicality
83
Translatability
12

Assumptions: Assumes a formal rule language from another domain can inform policy specification workflows.

Missing evidence: No evidence here that GUBS has been applied to legal, governance, or social-science policy evaluation.

Gap applicability

14

Non-viral vectors, nanoparticle systems, and artificial intelligence-guided diagnostics are being explored to address these limitations and support personalized care.

This is at least an AI-enabled computational method, so it could weakly relate to automating evaluation workflows. However, the supplied evidence only says it is being explored for diagnostics, not for formal policy creation, compliance checking, or accountability.

Mechanistic
24
Context
18
Throughput
42
First test time
35
First test cost
34
Replication
0
Practicality
0
Translatability
0

Assumptions: Assumes generic AI workflow methods may transfer across domains.

Missing evidence: No structured evidence of policy formalization, rule checking, governance use, or accountability evaluation.

8 unmatched gaps

Education Modalities Suffer From Scaling Limitations

Social Science· 3 capabilities
No tool matches

Current education systems face structural inefficiencies such as excessive administrative workloads on educators, overcrowded classrooms, and inequitable resource distribution. Innovative technologies have the potential to significantly reduce these burdens by providing tools that assist teachers with scheduling, grading, and creating personalized, adaptive lesson plans. Digital platforms could dynamically tailor learning experiences to individual student progress, complementing classroom teaching. Additionally, technology-driven improvements in administrative efficiency could free valuable resources, enhancing educational equity and overall student experiences.

“US K-12 teachers are 30% more likely to face burnout than U.S. soldiers, whose lives are defined by relentless duty, perpetual war and low wages.” - Adrienne Williams

“Given recent improvements in the quality, affordability, and usability of technologies like AI, computer vision, and AR/VR, we can reimagine a more personalized, research-driven K-12 experience—one better able to meet diverse learning needs and set students up for success both in school and in life. From chatbots for individual coaching to immersive mixed reality solutions to adaptive technologies supporting culturally responsive pedagogy, we have an opportunity to leverage new technologies to tackle pressing needs across literacy education, STEM instruction, and preparing students for the workforce.” - Kumar Garg

Foundational Capabilities

Digital Tools for Educators and School Districts

Tools for lesson planning and administrative tasks. Tools for administration of school systems, including managing resource distribution.

1 resource

Digital Tutor Technologies

Develop sophisticated digital tutor systems that offer personalized learning assistance by adapting to individual learning styles using AI.

4 resources

New Architectures for Learning Assistance

Explore novel architectures and settings for AI assisted learning

1 resource

Candidate Tools

No candidate tools matched this gap. This may be because the gap describes a domain (e.g. astrophysics, social science) where the current biotech-focused toolkit has limited relevance, or because the gap requires capability types not yet well-represented in the database.

Ephemeral Societal Data on Proprietary Platforms

Social Science· 1 capability
No tool matches

Much critical data is stored on proprietary platforms and is at risk of disappearing, hindering long-term research and reproducibility.

Foundational Capabilities

Create an Internet Archive for Critical Data

Establish initiatives to systematically archive and preserve essential datasets from proprietary platforms, ensuring long-term accessibility and reproducibility.

2 resources

Candidate Tools

No candidate tools matched this gap. This may be because the gap describes a domain (e.g. astrophysics, social science) where the current biotech-focused toolkit has limited relevance, or because the gap requires capability types not yet well-represented in the database.

Labor-Replacing AI Could Lead to Human Disempowerment

Social Science· 7 capabilities
No tool matches

As AI systems become the cornerstone of competitive advantage, they can inadvertently marginalize human roles and decision-making. The drive for efficiency and cost reduction may lead organizations to rely predominantly on AI, sidelining human judgment, creativity, and accountability. This dynamic risks creating environments where economic and social inequities widen, and the intrinsic value of human input is systematically undermined (see examples). The gradual disempowerment of individuals under such competitive pressures poses significant challenges for societal well-being and democratic governance. See: https://gradual-disempowerment.ai/

Foundational Capabilities

AI Agents to Advocate for Human Interests

Develop AI delegates who can advocate for people's interest with high fidelity, while also being better able to keep up with the competitive dynamics that are causing the human replacement.

4 resources

Early Warning Signs for Human Disempowerment

Tools to forecast and monitor key thresholds or tipping points beyond which human influence becomes critically compromised, and the ability to measure effectiveness of intervention strategies.

1 resource

Improved Voting and Auditing Protocols

Develop next-generation voting systems and auditing protocols that are secure, transparent, and capable of supporting robust collective decision-making.

4 resources

Interventions for Maintaining Human Oversight

Develop direct interventions for preventing accumulation of excessive AI influence:
• Regulatory frameworks mandating human oversight for critical decisions, limiting AI autonomy in specific domains, and restricting AI ownership of assets or participation in markets
• Progressive taxation of AI-generated revenues, both to redistribute resources to humans and to subsidize human participation in key sectors
• Cultural norms supporting human agency and influence, and opposing AI that is overly autonomous or insufficiently accountable

1 resource

Metrics to Track Human Influence

We need comprehensive methods to detect and quantify human disempowerment including economic, cultural, and political metrics as well as research and education.

2 resources

Technology to Augment Humans

Augmentation technology, from better interfaces to AI tools to brain computer interfaces (BCIs), can amplify human strengths, democratize high-skill work, enable greater oversight, and help make humans more economically capable. See BCI-related Foundational Capabilities:
• Minimally Invasive Ultrasound–Based Whole Brain Computer Interface
• Fully Noninvasive Read–Write Technologies
• Micro- to Nano-Scale Minimally Invasive BCI Transducers

7 resources

Tools for Deliberative Democracy

Develop AI-driven platforms that facilitate deliberative democracy by aggregating diverse perspectives, advancing community moderation, and guiding public decision-making processes to move beyond corporate and partisan pressures.

22 resources

Candidate Tools

No candidate tools matched this gap. This may be because the gap describes a domain (e.g. astrophysics, social science) where the current biotech-focused toolkit has limited relevance, or because the gap requires capability types not yet well-represented in the database.

Limited Tools for Improving Individual, Social and Societal Epistemics in the Face of Misinformation

Social Science· 11 capabilities
No tool matches

In an era of relentless information overload and pervasive misinformation—fueled by algorithms that prioritize fleeting engagement over meaningful value—we have the opportunity to reshape our digital spaces. By leveraging AI and more intentional design of social media spaces for epistemic improvement, we can empower users to curate, evaluate, and contextualize content more effectively to create a healthier digital world.

Foundational Capabilities

Content Curation Tools

Develop tools that empower users both to create personalized information streams and to collaboratively curate content. Users can set up custom feeds—filtered by semantic content, social network data, and engagement signals—to tailor the information they receive.

5 resources

Contextualization Engines

Develop systems that provide context to content, helping users understand the broader background and counteract misinformation without censorship.

1 resource

Crowd-Sourced Fact-Checking 

Build on initiatives like Community Notes, a crowdsourced fact-checking system that attaches contextual annotations to tweets, to create robust community-driven evaluation tools for social media platforms. Users create and vote on annotations, while an open-source algorithm determines which context notes to attach.

9 resources

Detection of Coordinated Misinformation and Scams

Develop digital tools that shield individuals from online scams and fraudulent schemes, and that detect, filter, or flag coordinated misinformation campaigns.

3 resources

Improved Observatory of the Information Environment

Build centralized platforms to monitor and analyze the information ecosystem, enabling better identification of misinformation trends.

8 resources

Incentives for Verification and Accountability

Create market-based mechanisms that incentivize accurate fact-checking and hold sources accountable.

2 resources

Privacy Preserving Identity Verification

Develop robust identity verification systems that safeguard personal privacy.

6 resources

Projects to Crowdsource Ground Truth Information

Build globally distributed, crowdsourced public knowledge bases and knowledge graphs.

4 resources

Structured Transparency

Implement frameworks that allow actors to reduce collaboration risks and costs by defining and enforcing precise flows of information. This enables a larger negotiating space for strategic actors dealing with advanced technologies.

1 resource

“System 2” Recommenders in Social Media Spaces

Develop recommender systems that prioritize content we’ll appreciate upon reflection, rather than content that only captures our immediate attention. 

6 resources

Tools to Promote Constructive Dynamics in Social Media Spaces

Develop “systems to increase mutual understanding and trust across divides, creating space for productive conflict, deliberation, or cooperation”.

4 resources

Candidate Tools

No candidate tools matched this gap. This may be because the gap describes a domain (e.g. astrophysics, social science) where the current biotech-focused toolkit has limited relevance, or because the gap requires capability types not yet well-represented in the database.

Our Platforms for Civic Engagement and Democratic Decision-Making Don’t Take Advantage of 21st Century Scalable Technology

Social Science· 2 capabilities
No tool matches

Our current systems for democratic participation are hindered by outdated platforms and tools that fail to scale with modern needs. Limited survey infrastructure, insecure voting methods, and under-informative deliberative tools restrict our capacity for informed, collective decision-making. By harnessing AI to facilitate clearer expression of public opinion and leveraging innovative technologies for secure, scalable engagement, we can transform civic participation into a more robust, effective, and inclusive process.

Foundational Capabilities

Improved Voting and Auditing Protocols

Develop next-generation voting systems and auditing protocols that are secure, transparent, and capable of supporting robust collective decision-making.

4 resources

Tools for Deliberative Democracy

Develop AI-driven platforms that facilitate deliberative democracy by aggregating diverse perspectives, advancing community moderation, and guiding public decision-making processes to move beyond corporate and partisan pressures.

22 resources

Candidate Tools

No candidate tools matched this gap. This may be because the gap describes a domain (e.g. astrophysics, social science) where the current biotech-focused toolkit has limited relevance, or because the gap requires capability types not yet well-represented in the database.

Translational Gaps in Development Economics

Social Science· 6 capabilities
No tool matches

There exists a disconnect between academic research and the practical implementation of development economics, hampering the conversion of theoretical insights into effective real-world interventions. Current mechanisms for delivering public goods and fostering collective cooperation are inefficient, limiting our capacity to coordinate resources and drive meaningful change. Innovative approaches are needed to bridge this gap, ensuring that cutting-edge economic theories can be transformed into actionable policies and scalable interventions that truly improve development outcomes.

Foundational Capabilities

Cost Effectiveness and Techno-Economics Calculations

Make it easier to calculate the cost-effectiveness of interventions without a large staff, through AI-assisted methods.

2 resources
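The core calculation the capability refers to can be stated simply: program cost divided by outcome units averted (e.g. dollars per DALY). A minimal sketch, with all figures as hypothetical placeholders:

```python
# Sketch: a basic cost-effectiveness ratio (cost per outcome unit,
# e.g. dollars per DALY averted). Figures are hypothetical placeholders,
# not estimates for any real intervention.


def cost_per_outcome(total_cost: float, outcomes_averted: float) -> float:
    """Cost-effectiveness ratio: total program cost / outcome units averted."""
    if outcomes_averted <= 0:
        raise ValueError("outcomes_averted must be positive")
    return total_cost / outcomes_averted


# Hypothetical intervention: a $500,000 program averting 2,000 DALYs.
ratio = cost_per_outcome(500_000, 2_000)
assert ratio == 250.0  # $250 per DALY averted
```

The hard part in practice is not this division but estimating `total_cost` and `outcomes_averted` with uncertainty, which is where AI-assisted evidence synthesis could substitute for a large analyst staff.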

Enhance Operational Approaches in Development Economics

Bridge the translational research gap by adopting more operational, nonacademic approaches to development economics, moving beyond traditional models like JPAL.

4 resources

Experiments in Alternative Economic Structures

Run controlled experiments—such as basic income trials and alternative market models for carbon pricing—to test and refine innovative economic structures.

5 resources

Invent New Decentralized Coordination and Contract Mechanisms

Design and implement novel, decentralized models for public goods allocation and cooperation that overcome existing systemic inefficiencies.

9 resources

Large-Scale Economic Experiments in Multiplayer Games

Utilize multiplayer online games as testbeds for large-scale economic experiments, enabling controlled studies of human behavior in dynamic environments.

Long-Run Followups on Major Economic Studies

Most major economic studies include data collection for 1–2 years, but the outcomes (e.g. education, child health) could plausibly impact large sections of one’s life. There is little funding for follow-ups on existing studies.

Candidate Tools

No candidate tools matched this gap. This may be because the gap describes a domain (e.g. astrophysics, social science) where the current biotech-focused toolkit has limited relevance, or because the gap requires capability types not yet well-represented in the database.

Underdevelopment of Deep Tooling for Economic Modeling and Future Forecasting

Social Science· 4 capabilities
No tool matches

Current economic models are often too simplistic to capture the intricate dynamics of our global economy, limiting effective policy-making and forecasting. Experimentation with innovative economic models—such as those incorporating universal basic income or alternative market systems—is rare, leaving us unprepared for emerging trends. The inherent complexity of global systems further complicates accurate forecasting, underscoring the urgent need for more sophisticated, adaptive tools that can better predict and navigate the economic landscape of tomorrow.

Foundational Capabilities

Complex Systems Simulation Techniques for Economics

Utilize techniques from chaos and complex systems theory to develop more realistic economic models that account for non-linear dynamics.

4 resources

Model Radical Economic Transformations

Develop simulation models to explore scenarios of profound economic change to help plan for disruptive shifts. For example: a future where extended youth replaces traditional retirement and end-of-life expenses are significantly reduced, or assessing the economic impact of a major AI transition.

1 resource

Prediction Markets and AI-Based Forecasting

Develop and refine prediction markets augmented by AI to improve future forecasting accuracy and better inform policy decisions.

11 resources
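The market-maker side of a prediction market can be made concrete with Hanson's logarithmic market scoring rule (LMSR), a mechanism widely used in research prototypes. A minimal sketch, where `q` holds outstanding shares per outcome and the liquidity parameter `b` is an illustrative choice:

```python
import math

def lmsr_cost(q, b=100.0):
    """LMSR cost function C(q) = b * ln(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(qi / b) for qi in q))

def lmsr_prices(q, b=100.0):
    """Instantaneous prices (interpretable as probabilities) for each outcome."""
    exps = [math.exp(qi / b) for qi in q]
    total = sum(exps)
    return [e / total for e in exps]

def trade_cost(q, outcome, shares, b=100.0):
    """Amount a trader pays to buy `shares` of `outcome`: C(q') - C(q)."""
    q_new = list(q)
    q_new[outcome] += shares
    return lmsr_cost(q_new, b) - lmsr_cost(q, b)
```

Buying shares of an outcome pushes its price up, so prices aggregate traders' beliefs; AI forecasters could participate in such a market alongside humans.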

Real-World SimCity with AI Agents

Create complex simulation models using AI agents that mimic real-world economic dynamics, providing a richer framework for policy analysis.

7 resources

Candidate Tools

No candidate tools matched this gap. This may be because the gap describes a domain (e.g. astrophysics, social science) where the current biotech-focused toolkit has limited relevance, or because the gap requires capability types not yet well-represented in the database.

Underdevelopment of Modern Tools in the Social Sciences

Social Science· 3 capabilities
No tool matches

The social sciences need new tools to help researchers identify and prioritize important questions that will have an impact, and better infrastructure to collect qualitative data. Qualitative methods are powerful for understanding the how and why behind social outcomes, yet even the most comprehensive surveys don’t capture all the factors that contribute to social outcomes. AI-enabled qualitative methods could super-charge the social sciences, but there is much work to be done. Similarly, many archaeological methods remain manual and lack the technological revolution seen in other fields, limiting discovery and analysis.

Foundational Capabilities

Infrastructure for Problem-Driven Research

A framework to help researchers identify and pursue meaningful questions. This includes support structures—such as fellowship programs to train students in problem-driven research workflows, and innovative funding mechanisms to empower researchers to focus on questions that truly matter.

1 resource

Modernize Survey Infrastructure

Develop modern, scalable survey platforms that can efficiently capture and analyze public opinion data and other qualitative data for social sciences. New tools are needed to collect, analyze, curate, and model large-scale qualitative data.

10 resources

Satellite and Machine Learning-Enabled Archaeology

Apply satellite imagery and ML techniques to modernize archaeological surveys and analysis, enabling more rapid and systematic discoveries.

2 resources

Candidate Tools

No candidate tools matched this gap. This may be because the gap describes a domain (e.g. astrophysics, social science) where the current biotech-focused toolkit has limited relevance, or because the gap requires capability types not yet well-represented in the database.

Space Engineering

3 gaps · 2 tool connections · best match 43%

Lack of a Dedicated Field for Planetary Terraforming

Space Engineering· 2 capabilities

Tools: 2

Best: 43%

There is currently no established field for systematically studying and applying planetary terraforming methods, leaving key challenges in transport, energy supply, and civil engineering largely unaddressed.

Foundational Capabilities

Directed Work on Biological Approaches

Initiate dedicated research into biological terraforming strategies that utilize living organisms to modify planetary environments.

1 resource

Directed Work on Non-Biological Approaches

Invest in research exploring non-biological methods for planetary engineering, such as atmospheric modification and energy-efficient infrastructure.

2 resources

Candidate Tools

Gap applicability: 43

CRISPR-Cas-mediated genome editing is a programmable genome-editing approach discussed here in the context of bacterial systems. The cited review summarizes the main approaches for bacterial CRISPR-Cas editing and the difficulties associated with applying these systems in bacteria.

The gap explicitly includes directed work on biological approaches, and programmable genome editing is a directly actionable method for engineering microbes or other organisms for environmental modification tasks. It could support early terraforming-oriented strain construction, but the supplied evidence is only general and focused on bacterial editing rather than planetary applications.

Mechanistic: 56
Context: 42
Throughput: 62
First test time: 58
First test cost: 52
Replication: 9
Practicality: 71
Translatability: 11

Assumptions: Assumes biological terraforming work would involve engineering microbial systems.

Missing evidence: No evidence here on use in extreme planetary conditions, relevant chassis organisms, or specific terraforming functions.

directed evolution

engineering method

Gap applicability: 39

Directed evolution is an engineering method that improves biological tool performance by iteratively selecting functional protein variants. In the cited split fluorescent protein study, it was demonstrated as one of two approaches used to improve split fluorescent proteins, contributing to brighter split sfCherry3 variants.

If a terraforming program pursues biological approaches, directed evolution is a practical optimization method for improving organism or protein performance after initial designs are built. It is broadly useful for adapting functions, but the provided evidence does not tie it to environmental engineering or planetary constraints.

Mechanistic: 48
Context: 39
Throughput: 66
First test time: 55
First test cost: 50
Replication: 9
Practicality: 71
Translatability: 11

Assumptions: Assumes iterative optimization of biological parts or organisms is in scope for terraforming research.

Missing evidence: No supplied evidence on target phenotypes such as radiation tolerance, gas conversion, regolith use, or low-pressure growth.
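The iterative mutate-screen-select loop at the core of directed evolution can be sketched in a few lines; the target sequence, fitness function, and parameters below are purely illustrative stand-ins for a real laboratory screen:

```python
import random

TARGET = "MKVLQTA"  # hypothetical optimum standing in for a desired phenotype
ALPHABET = "ACDEFGHIKLMNPQRSTVWY"

def fitness(seq):
    """Toy fitness: number of positions matching the target."""
    return sum(a == b for a, b in zip(seq, TARGET))

def mutate(seq, rate=0.2):
    """Randomly substitute residues at the given per-position rate."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in seq)

def directed_evolution(start, rounds=30, library_size=50):
    """Each round: build a mutant library around the current best, screen, keep the fittest."""
    best = start
    for _ in range(rounds):
        library = [mutate(best) for _ in range(library_size)] + [best]
        best = max(library, key=fitness)
    return best
```

Keeping the parent in each library guarantees fitness never decreases, mirroring how wet-lab campaigns carry the best variant forward between rounds.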

2 unmatched gaps

Outdated Space Station Construction

Space Engineering· 3 capabilities
No tool matches

Only one new space station (Tiangong, 天宫) has been launched this century, due to high costs and reliance on traditional, government-led megaprojects.

Foundational Capabilities

In-Situ Resource Utilization

Everything from mining the Moon to using those resources to build structures, produce fuel, and more in space.

3 resources

Leverage Commercial Approaches

Utilize emerging commercial spaceflight companies to drive down costs and accelerate the development of new space stations through innovative design and manufacturing.

1 resource

New Construction Approaches for Space Stations

Develop novel construction methods—such as inflatable or modular architectures—that can be assembled in orbit to create new space stations more efficiently.

6 resources

Candidate Tools

No candidate tools matched this gap. This may be because the gap describes a domain (e.g. astrophysics, social science) where the current biotech-focused toolkit has limited relevance, or because the gap requires capability types not yet well-represented in the database.

We Lack Basic Capabilities that Are Necessary for Travel Far Beyond Earth

Space Engineering· 4 capabilities
No tool matches

We lack many of the technologies needed to support exploration and survival off Earth. Our efforts remain confined to our solar system, limiting our potential to explore beyond it and understand the broader cosmos.

Foundational Capabilities

Air Breathing Fusion for Single-Stage-to-Orbit Vehicle

Air-breathing fusion propulsion for a reusable single-stage-to-orbit (SSTO) spaceplane.

1 resource

Cryosleep

Assuming no faster-than-light travel, the only way for humans to travel beyond the solar system within a single lifetime is cryosleep.

1 resource

Interstellar Probes

Invest in the development of interstellar probe technologies that can traverse vast distances, opening the door to exploration beyond our solar system.

2 resources

Pressurizing Technology or Habitation Domes

Methods to pressurize large surface areas, or positive-pressure habitation domes, will be necessary for living in most places off Earth.

1 resource

Candidate Tools

No candidate tools matched this gap. This may be because the gap describes a domain (e.g. astrophysics, social science) where the current biotech-focused toolkit has limited relevance, or because the gap requires capability types not yet well-represented in the database.

Metascience

5 gaps · 0 tool connections

A Limited Set of Rigid Organizational Structures for Organizing and Funding Research Constrains the Forms of R&D That Get Done

Metascience· 3 capabilities
No tool matches

This is more of a meta-bottleneck, but scientists spend a lot of time not doing science, and the institutional structures in which they work often embed incentives that hinder certain kinds of outcomes, such as more coordinated research.

Foundational Capabilities

Decentralized Science Funding

Diversify who can be a funder of science by launching new Fast Grants-style programs.

6 resources

New Models to Reduce Fundraising Burden of Scientists

There are various solutions that could help scientists spend less time fundraising and more time doing science. This could include new structures for research institutes, where researchers don’t need to apply for external grants, and new tools for decentralized science funding.

9 resources

Novel and Diverse Research Organization Structures 

We should design research institutions to support the particular kinds of research they need to house, not fit every square peg into the same round hole.

3 resources

Candidate Tools

No candidate tools matched this gap. This may be because the gap describes a domain (e.g. astrophysics, social science) where the current biotech-focused toolkit has limited relevance, or because the gap requires capability types not yet well-represented in the database.

Clinical Trials Are Poorly Optimized for Evidence Gathering

Metascience· 5 capabilities
No tool matches

Current clinical trial designs are not sufficiently optimized for gathering robust evidence, leading to inefficiencies and suboptimal outcomes.

Foundational Capabilities

Federated Data Approaches

Implement federated data and differential privacy systems to aggregate clinical trial data from multiple sites while ensuring patient privacy.

4 resources
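A minimal sketch of the core idea, assuming a simple count query with sensitivity 1: each site adds Laplace noise locally, so only privatized aggregates ever leave the site. Function names and the epsilon setting are illustrative:

```python
import math
import random

def dp_count(true_count, epsilon):
    """Release a count with Laplace noise scaled to sensitivity 1 (epsilon-DP)."""
    scale = 1.0 / epsilon
    u = random.random() - 0.5  # uniform on [-0.5, 0.5)
    # Inverse-CDF sample from Laplace(0, scale)
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

def federated_dp_total(site_counts, epsilon):
    """Each site perturbs its own count before sharing; the coordinator sums noisy values."""
    return sum(dp_count(c, epsilon) for c in site_counts)
```

Smaller epsilon means stronger privacy and noisier aggregates; real deployments layer this on secure aggregation protocols rather than plain sums.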

Increase Predictive Validation of Early Studies

Enhance early clinical studies by integrating predictive validation techniques to forecast outcomes and optimize trial design.

8 resources

Infrastructure for Decentralized Patient-Reported Clinical Studies

Decentralized patient reporting for clinical studies would amplify and diversify patient participation at decreased cost by reducing logistical and geographic barriers. It could compound and incorporate subjective patient experiences at large scale and would improve public accessibility of trial data.

5 resources

Out-of-Clinic Studies

Promote studies conducted outside traditional clinical settings to gather real-world evidence more effectively.

2 resources

Quasi-Experimental Causality in Biomedical Research

Apply quasi-experimental designs to strengthen causal inference in clinical studies and improve evidence quality, extending them to wider, more unified datasets (e.g., existing medical records) and across many conditions, e.g., combined with federated data approaches. Historically, advanced methods have taken 5-10 years to percolate into the relevant areas of omics/biotechnology and clinical practice. This also means changing a culture of thinking: recognizing a kind of validation that is neither 'do a perfect experiment' nor 'I tested on an external hold-out' but a third thing.

2 resources
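Difference-in-differences is one of the canonical quasi-experimental designs referenced here: it nets out secular trends using a control group when randomization is impossible. A minimal sketch on hypothetical outcome data (the numbers are invented for illustration):

```python
def mean(xs):
    return sum(xs) / len(xs)

def did_estimate(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Difference-in-differences: change in the treated group minus change in controls."""
    return (mean(treat_post) - mean(treat_pre)) - (mean(ctrl_post) - mean(ctrl_pre))

# Hypothetical outcome means before/after a policy or treatment change:
effect = did_estimate(
    treat_pre=[10.0, 11.0], treat_post=[15.0, 16.0],  # treated improved by 5
    ctrl_pre=[9.0, 10.0],   ctrl_post=[11.0, 12.0],   # secular trend of 2
)
```

The estimate isolates the treatment effect (here 3) only under the parallel-trends assumption, which is exactly the kind of third-way validation the capability description points to.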

Candidate Tools

No candidate tools matched this gap. This may be because the gap describes a domain (e.g. astrophysics, social science) where the current biotech-focused toolkit has limited relevance, or because the gap requires capability types not yet well-represented in the database.

Doing and Publishing Research Is Expensive and Subject to Structural Roadblocks

Metascience· 4 capabilities
No tool matches

Traditional structures dominate in how research is conducted and how its outputs are disseminated. Expensive publishing practices restrict and slow the spread of knowledge. We should replace outdated publishing practices and complement research practices with new approaches that leverage frugal innovation, community-led platforms, and open access. We imagine a future where scientific discovery is more inclusive and dynamic.

Foundational Capabilities

Disrupt Traditional Publishing Models

Support modern, open, and community-driven publishing platforms that challenge and replace outdated models.

3 resources

Frugal Science Initiatives

Promote and support frugal science projects that empower communities worldwide to participate in scientific research, especially in developing countries.

1 resource

New Protocols for Knowledge Production and Verification

A combination of new tools and norms situated outside the traditional publishing pipeline: protocols for empowering and recognizing citizen science on social media, and new practices around micro/nanopublishing to lower the barrier to publishing.

3 resources

Post-Publication Peer Review Layer

Establish a community-driven, post-publication peer review system that supplements traditional publishing, enhancing transparency and accountability.

4 resources

Candidate Tools

No candidate tools matched this gap. This may be because the gap describes a domain (e.g. astrophysics, social science) where the current biotech-focused toolkit has limited relevance, or because the gap requires capability types not yet well-represented in the database.

Fraud in the Scientific Literature

Metascience· 1 capability
No tool matches

Scientific literature is plagued by fraudulent publications, undermining trust and slowing progress.

Foundational Capabilities

Automated Scientific Fraud Detection

Develop AI and multimodal LLM systems to automatically detect fraudulent research, flag suspicious publications, and improve overall scientific integrity. This is especially urgent as rapid advances in AI models make it feasible to generate inaccurate scientific content at scale.

11 resources

Candidate Tools

No candidate tools matched this gap. This may be because the gap describes a domain (e.g. astrophysics, social science) where the current biotech-focused toolkit has limited relevance, or because the gap requires capability types not yet well-represented in the database.

Inability to Comprehend and Synthesize the Entire Scientific Literature at Scale

Metascience· 5 capabilities
No tool matches

The volume of scientific publications is overwhelming, making it difficult for humans to read, comprehend, and synthesize the entire body of literature. How can AI-generated knowledge become cumulative? What should a machine-human shared Wikipedia look like? We should collect and synthesize all the world’s knowledge, accelerate its development, and make it universally available in a compelling form.

Foundational Capabilities

AI Scientific Literature Agents

Develop AI agents that can read, summarize, and integrate scientific literature, providing researchers and policymakers with synthesized insights.

12 resources

Infrastructure for Research Curation

Infrastructure to support and incentivize the diverse curation of research through science social media and/or dedicated spaces. This infrastructure would support rapid dissemination of research and encourage broader exploration of the research landscape, mitigating the risk of homogenous research focus, and maladaptive collective attention patterns in science.

8 resources

Intelligent Databases

Intelligent databases and automatic probabilistic integration across multiple databases.

1 resource

Knowledge Synthesis Infrastructure

Build robust infrastructure to integrate and synthesize diverse scientific findings into cohesive, accessible formats for researchers.

12 resources

Large Knowledge Models

Knowledge models that can facilitate reasoning by synthesizing and clarifying relevant information transparently from multiple domains. "Provide a semantic medium that is both more expressive and more computationally tractable than natural language, a medium able to support formal and informal reasoning, human and inter-agent communication, and the development of scalable quasilinguistic corpora with characteristics of both [scientific] literatures and associative memory". LLMs have powerful capabilities, but their knowledge is opaque, not cumulative, and not easily updated or compared. Human society has not only individual brains with memory but also a cumulative scholarship that grows knowledge, compares alternative views, and more. Such AI could be used as a collective strategic assistant.

3 resources

Candidate Tools

No candidate tools matched this gap. This may be because the gap describes a domain (e.g. astrophysics, social science) where the current biotech-focused toolkit has limited relevance, or because the gap requires capability types not yet well-represented in the database.