DeepScience · Artificial Intelligence · Daily Digest

Robots that feel, listen underwater, and lie less.

Today's AI research asks one quiet question: can we trust these systems when the stakes are physical?
May 14, 2026
Hi — today's batch of 92 papers is heavy on preprints and dataset deposits with thin results, so I did some triage. Three stories made the cut: a prosthetic hand that can feel hardness in real time, a robot mapping coral reefs by listening for fish, and a solo researcher's attempt to explain why language models hallucinate. That last one comes with serious caveats — I'll be upfront about them. Let's dig in.
Today's stories
01 / 03

A prosthetic hand learns to feel how hard things are

The last time you reached into a bag and felt for your keys without looking, your fingertips were doing something a prosthetic hand cannot — yet.

Think about gripping a paper cup of hot coffee. Your fingers automatically adjust — not too tight, not too loose — based on constant feedback about pressure and texture. People using prosthetic hands lose that channel entirely.

A research team working with the Hannes prosthetic hand, developed at the Italian Institute of Technology in Genoa, has built a system that starts to address this. They lined the hand's fingers and palm with 64 flexible sensors made from a material called PVDF — polyvinylidene fluoride, a plastic that generates a tiny electrical signal when squeezed or bent. A compact neural network reads those signals in real time and classifies what the hand is touching: soft, medium, or firm. In lab tests, it got this right about 91% of the time, with a response time of 0.21 seconds — about as long as a blink. In a separate offline test with nearly 12,000 measurement windows, accuracy climbed to 94.6%.

They also tested the feedback direction: a small electrical pulse delivered to the user's skin to signal what the hand is 'feeling.' The system correctly located contact — finger versus palm — about 95% of the time.

Here is the catch: every one of these tests happened in a controlled lab, gripping defined objects in defined ways. A wet glass, a crumpled bag, or a handshake is messier. The 64-channel sensor array needs to shrink considerably before it fits into a device someone wears all day. And there is no long-term wear data yet. This is a solid engineering result, not a finished prosthetic.
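To make the pipeline concrete, here is a minimal sketch of the classification step, written by me rather than taken from the team's code: a window of readings from the 64 sensor channels goes into a small neural network that outputs one of three hardness classes. The window length, layer sizes, and class labels are illustrative assumptions, not values reported in the paper.

```python
# Sketch only: 64-channel PVDF window -> compact classifier -> hardness class.
import torch
import torch.nn as nn

NUM_CHANNELS = 64          # PVDF sensors on the fingers and palm
WINDOW_SAMPLES = 50        # assumed samples per classification window
CLASSES = ["soft", "medium", "firm"]

class HardnessClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),                                    # (batch, 64, 50) -> (batch, 3200)
            nn.Linear(NUM_CHANNELS * WINDOW_SAMPLES, 128),
            nn.ReLU(),
            nn.Linear(128, len(CLASSES)),                    # logits for soft / medium / firm
        )

    def forward(self, x):
        return self.net(x)

model = HardnessClassifier()
window = torch.randn(1, NUM_CHANNELS, WINDOW_SAMPLES)        # stand-in for one sensor window
predicted = CLASSES[model(window).argmax(dim=1).item()]
print(predicted)
```

The real engineering challenge sits around a model like this, not inside it: sampling 64 channels, windowing them, and returning an answer within 0.21 seconds on hardware small enough to wear.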

Glossary
PVDF: A flexible plastic material that generates a small electrical charge when pressed or bent, used here as a pressure sensor.
electrotactile feedback: Delivering a small electrical pulse to skin to simulate the sensation of touch, replacing the signal that a real hand's nerves would send.
Source: Smart Human Machine Interface Using Piezoelectric Sensors and Artificial Intelligence
02 / 03

An underwater robot maps coral reefs by listening for fish

Finding the busiest corner of a reef, at night, without a map, on a single battery charge — that is roughly what this robot had to do.

Coral reefs are not uniformly alive. Some patches teem with fish; others are quiet. Knowing where the hotspots are matters for conservation — but surveying them by hand, with divers, is slow, expensive, and limited in range.

A team whose work appears in Science Robotics deployed an autonomous underwater vehicle — a torpedo-shaped robot — at two reef sites in the US Virgin Islands: Joel's Shoal and Lameshur Bay. The robot carried two sensors: a downward-facing camera and a hydrophone, which is simply an underwater microphone. It also built 3D maps of reef rugosity — how bumpy and complex the seafloor structure is — because rough, complex terrain tends to shelter more life.

Here is how the three signals combined: the camera fed video into a YOLO object-detection model — the same class of software used to spot pedestrians in autonomous-vehicle footage — to count fish. The hydrophone picked up the clicks, grunts, and chirps that reef fish make when they aggregate. And the rugosity maps provided structural context. Together, the three signals produced heatmaps showing where biological activity was highest. The robot also demonstrated autonomous homing, returning to a target location on its own after a survey, across nine successful trials.

The honest limitation here: what I have is the dataset deposit accompanying the paper, not the full paper text. Specific accuracy numbers — detection rates, false-positive rates — are in the Science Robotics publication, not available to me here. Two reef sites is a promising start; it is not yet evidence the system generalises to reefs globally.
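For a sense of how the fish-counting leg works, here is a minimal sketch, not the authors' code: run a YOLO detector over the survey footage and tally detections per frame. The weights file 'reef_fish_yolo.pt' and the video filename are hypothetical placeholders, not artifacts from the paper.

```python
# Sketch only: count YOLO fish detections per frame of survey video.
import cv2
from ultralytics import YOLO

model = YOLO("reef_fish_yolo.pt")        # hypothetical fish-trained detector weights
video = cv2.VideoCapture("survey.mp4")   # downward-facing camera footage

frame_counts = []
while True:
    ok, frame = video.read()
    if not ok:
        break
    detections = model(frame, verbose=False)[0]
    frame_counts.append(len(detections.boxes))   # fish detections in this frame

video.release()
print(f"Peak fish count in a single frame: {max(frame_counts, default=0)}")
```

Per-frame counts like these, binned by the robot's position, are one plausible route to the activity heatmaps the paper describes; the acoustic and rugosity layers would be fused on top.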

Glossary
hydrophone: An underwater microphone that picks up sounds produced by fish, currents, or other sources in the ocean.
rugosity: A measure of how rough or complex a surface is — high-rugosity reefs have lots of crevices and structures that shelter fish.
YOLO: A real-time computer vision model originally built to detect objects in images and video — here repurposed to count fish from underwater footage.
03 / 03

One researcher's attempt to map the physics of AI hallucinations

Why does an AI chatbot confidently hand you a made-up citation? A solo researcher thinks they have found the culprit — and named it the 'Grammar Police.'

Language models hallucinate — they produce false information stated with full confidence. We know this happens. We are much less clear on why, mechanically, inside the model.

A researcher working independently — not affiliated with a lab, not peer-reviewed — has published what they call 'Project Aletheia': a framework of seven claimed laws describing how hallucination works, using the vocabulary of physics. The central idea is intuitive: inside a language model, some processing layers retrieve facts, and others enforce grammatical structure. The claim is that the grammar-focused layers actively suppress factual recall — a bit like a meticulous copy editor who keeps smoothing sentences even when doing so strips out the original meaning. The suppression is traced to specific processing units called attention heads, identifiable at particular layers of the network, and is claimed to explain up to 70% of factual errors. The researcher also claims that prefixing prompts with code-style symbols — comment markers like // or # — partially bypasses the suppression and improves factual accuracy.

Now the part that matters most: I cannot recommend taking these specific numbers seriously yet. This is a solo investigation with no institutional review, no statistical significance testing, and no error bars on most of the claimed 'laws.' The primary testbed is GPT-2, a model from 2019 that is tiny by today's standards. Whether any of this extends to the models you actually use is unconfirmed. What is genuinely useful here is the vocabulary and the framing — fact retrieval and grammatical suppression pulling against each other is a real and testable hypothesis. Someone should run the proper experiments.
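If you want to poke at the prefix claim yourself, here is a minimal sketch of one way to do it: compare the probability GPT-2 assigns to a correct factual completion with and without a code-style comment marker prepended to the prompt. The example prompt and target are illustrative, not the paper's evaluation set, and a single prompt proves nothing either way.

```python
# Sketch only: does a '# ' prefix change GPT-2's confidence in a factual completion?
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def completion_prob(prompt: str, target: str) -> float:
    """Probability GPT-2 assigns to the first token of `target` right after `prompt`."""
    target_id = tokenizer.encode(target)[0]
    input_ids = tokenizer.encode(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(input_ids).logits[0, -1]
    return torch.softmax(logits, dim=-1)[target_id].item()

prompt, target = "The Eiffel Tower is located in the city of", " Paris"
plain = completion_prob(prompt, target)
prefixed = completion_prob("# " + prompt, target)
print(f"plain: {plain:.4f}   with '# ' prefix: {prefixed:.4f}")
```

A proper test would run hundreds of such prompts with statistics attached, which is exactly what the preprint does not yet provide.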

Glossary
attention head: A small sub-unit inside a transformer model that learns to focus on particular relationships between words or concepts — models have hundreds of them across many layers.
logit lens: A technique for peeking at what a language model is 'thinking' at each internal layer, before it reaches a final output — used here to trace where factual information gets suppressed.
instruction tuning: A training step that teaches a model to follow conversational instructions and be helpful — the paper claims this inadvertently increases hallucination by amplifying suppression mechanisms.
The bigger picture

Three stories, and they are not as different as they look. Two of them — the prosthetic hand and the reef robot — are about AI leaving the screen and operating in physical space: on a body, underwater, in an environment where a wrong answer has a real consequence. The third asks whether we can trust AI's outputs at all, even on a screen. That is the tension running through all three today. The prosthetic hand is impressive precisely because its error rate is low enough to mean something for a real person gripping a real cup. The reef robot only helps conservation if its hotspot maps are accurate. And the hallucination paper, for all its methodological weaknesses, is pointing at a question that becomes more urgent the moment AI leaves the controlled setting. The pattern: we are building AI for physical, high-stakes contexts faster than we are building the tools to verify its reliability in those contexts. That gap is the story.

What to watch next

For the prosthetic hand work, the next meaningful step is long-term wearability trials — watch for clinical studies coming out of the Italian Institute of Technology or partner hospitals in the next year. On hallucination, the International Conference on Machine Learning (ICML) runs in July 2026 and typically brings interpretability papers that will either support or challenge claims like Aletheia's — that is the moment to check back. The open question I would most want answered: does grammatical suppression of facts scale up with model size, or shrink? Nobody has a clean answer.

Thanks for reading — and as always, the honest digest beats the exciting one. — JB
DeepScience — Cross-domain scientific intelligence
deepsci.io