Ultrasound bone boundaries are notoriously unstable: as soon as the probe presses harder, tilts, or slides, the bright line that should mark the cortex deforms, falls into shadow, or disappears. Purely image-based segmentation models learn this instability: they drift with pressure and angle, and their predictions vary across subjects and scanning sites.
This thesis explores a different idea: teach the model how the probe feels, not just what it sees. The system records synchronized B-mode ultrasound and contact forces during scanning, and trains a segmentation network in which the force (scalar or vector) conditions the feature maps (e.g., via FiLM-style modulation). At inference time, every frame is paired with its current force reading, and the network is explicitly asked:
“Given this pressure and direction, where is the true, stable bone surface?”
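To make the conditioning idea concrete, here is a minimal sketch of a FiLM-style block in PyTorch. The class name, MLP width, and the `(1 + gamma)` residual form are illustrative choices, not a fixed design:

```python
import torch
import torch.nn as nn

class ForceFiLM(nn.Module):
    """FiLM-style block: an MLP maps the force reading to per-channel
    scale (gamma) and shift (beta) that modulate a CNN feature map."""

    def __init__(self, force_dim: int, num_channels: int, hidden: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(force_dim, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, 2 * num_channels),  # concatenated gamma and beta
        )

    def forward(self, feat: torch.Tensor, force: torch.Tensor) -> torch.Tensor:
        # feat: (B, C, H, W) feature map; force: (B, force_dim) scalar or vector reading
        gamma, beta = self.mlp(force).chunk(2, dim=-1)
        gamma = gamma[:, :, None, None]  # broadcast to (B, C, 1, 1)
        beta = beta[:, :, None, None]
        # (1 + gamma) keeps the block near identity early in training
        return (1.0 + gamma) * feat + beta
```

Placing one such block after each encoder stage would let the force signal reshape features at multiple scales while adding only a few thousand parameters, which is compatible with the real-time requirement below.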
Concretely, the method takes each B-mode frame plus the corresponding force signal as input, and outputs a clean bone mask (optionally with boundary confidence) suitable for 3D surface reconstruction and scan planning. The thesis investigates:
- How much force conditioning reduces segmentation drift under varying probe pressure, tilt, and tissue composition (a candidate drift metric is sketched after this list);
- How to design lightweight conditioning blocks that run in real time and remain robust across subjects and anatomical sites;
- How segmentation uncertainty relates to force patterns, and how this can support force-aware control and quality monitoring.
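One way to quantify the drift in the first question is a symmetric surface distance between boundaries predicted for the same, co-registered anatomy under different forces. The sketch below is one such candidate metric, not the thesis's prescribed evaluation; the helper names are hypothetical:

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def boundary_drift(masks: list[np.ndarray]) -> float:
    """Mean symmetric surface distance (in pixels) between bone boundaries
    predicted for the same co-registered anatomy under different forces.
    `masks` are boolean arrays; masks[0] serves as the reference."""

    def boundary(mask: np.ndarray) -> np.ndarray:
        # boundary pixels = mask minus its one-pixel erosion
        return mask & ~binary_erosion(mask)

    def one_sided(a: np.ndarray, b: np.ndarray) -> float:
        # mean distance from each boundary pixel of `a` to the nearest of `b`
        dist_to_b = distance_transform_edt(~boundary(b))
        return float(dist_to_b[boundary(a)].mean())

    ref = masks[0]
    return float(np.mean([0.5 * (one_sided(ref, m) + one_sided(m, ref))
                          for m in masks[1:]]))
```

Reporting this distance as a function of applied force would give a direct, per-subject measure of how much conditioning stabilizes the boundary.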
A successful thesis will deliver:
- A synchronized US + force dataset with annotated bone boundaries;
- A real-time force-conditioned segmentation model running in the loop with a probe, hand-held or robotic (see the loop sketch after this list);
- Quantitative evidence that force conditioning stabilizes bone boundaries across poses and subjects, enabling more reliable 3D reconstruction and safer, force-aware scan automation, without external markers or extra hardware beyond a force sensor.
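For the second deliverable, the inference loop could look roughly like the following. `frame_source`, `force_sensor.latest()`, and the model's two-headed output are placeholder assumptions standing in for the actual device drivers and architecture:

```python
import torch

@torch.no_grad()
def scan_loop(model, frame_source, force_sensor, device="cuda"):
    """Hypothetical real-time loop: pair each incoming B-mode frame with
    the most recent force reading, then segment. `frame_source` yields
    NumPy (H, W) float32 frames; `force_sensor` is a stand-in interface."""
    model.eval().to(device)
    for frame in frame_source:
        force = force_sensor.latest()  # e.g. (fx, fy, fz) in newtons
        img = torch.from_numpy(frame)[None, None].to(device)  # (1, 1, H, W)
        f = torch.as_tensor(force, dtype=torch.float32, device=device)[None]
        mask_logits, confidence = model(img, f)  # assumed two-headed output
        yield (mask_logits.sigmoid() > 0.5).cpu(), confidence.cpu()
```

Pairing each frame with the latest force sample, rather than buffering, keeps latency bounded; interpolating the force stream to frame timestamps is a possible refinement if the two sensors run at different rates.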