this post was submitted on 28 Nov 2024
21 points (92.0% liked)

Futurology

[–] Alexstarfire@lemmy.world 4 points 1 month ago (2 children)

Why do we care about predicting results? Isn't the point of studies to determine actual results?

[–] hendrik@palaver.p3x.de 1 points 1 month ago* (last edited 1 month ago)

I think this isn't really about predicting something; that's just a means to benchmark AI. You can either ask it questions to probe its knowledge, or test whether it can look forward, reason, and jump to conclusions, in other words, predict something. They tested how well it performed at that, not because the predictions themselves are useful, but because you can use them to measure the AI's capabilities at similar tasks.

[–] Lugh@futurology.today 1 points 1 month ago

There are a few ways they say it may help; this one seems to be the main one.

We foresee a future in which LLMs serve as forward-looking generative models of the scientific literature. LLMs can be part of larger systems that assist researchers in determining the best experiment to conduct next. One key step towards achieving this vision is demonstrating that LLMs can identify likely results. For this reason, BrainBench involved a binary choice between two possible results. LLMs excelled at this task, which brings us closer to systems that are practically useful. In the future, rather than simply selecting the most likely result for a study, LLMs can generate a set of possible results and judge how likely each is. Scientists may interactively use these future systems to guide the design of their experiments.
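The binary-choice setup described above can be sketched in a few lines: for each study, the model scores two candidate results and "predicts" whichever it rates as more likely, and benchmark accuracy is just the fraction of items where it prefers the real result. The sketch below is a toy illustration of that scoring loop; `score_result` is a hypothetical stand-in for a real LLM log-likelihood call (e.g., summing token log-probabilities), not the actual BrainBench implementation.

```python
def score_result(context: str, result: str) -> float:
    """Toy stand-in for an LLM's log-likelihood of `result` given `context`.
    Here it just rewards word overlap with the context; a real system would
    sum the model's token log-probabilities instead."""
    ctx_words = set(context.lower().split())
    res_words = result.lower().split()
    return sum(1.0 for w in res_words if w in ctx_words) / max(len(res_words), 1)

def choose(context: str, result_a: str, result_b: str) -> str:
    """Binary choice: pick whichever candidate result scores as more likely."""
    if score_result(context, result_a) >= score_result(context, result_b):
        return result_a
    return result_b

def benchmark_accuracy(items) -> float:
    """items: list of (context, real_result, altered_result) tuples.
    Accuracy = fraction of items where the model prefers the real result."""
    correct = sum(1 for ctx, real, fake in items if choose(ctx, real, fake) == real)
    return correct / len(items)
```

The same scorer generalizes to the future systems the quote imagines: instead of picking between two candidates, one could generate many candidate results and rank them all by `score_result`.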