Here's a list of some of the projects I am (or have been) involved with.
Surgical Simulation
We're working on creating a multi-user surgical simulator where users can interact with simulated scenarios through stereo graphics and haptic interfaces. The project, which is funded by NIH, is divided into the various parts described below.
An Event-Driven Framework for the Simulation of Complex Surgical Procedures
Existing surgical simulators provide a physical simulation that can help a trainee develop the hand-eye coordination and motor skills necessary for specific tasks, such as cutting or suturing. However, it is equally important for a surgeon to gain experience in the cognitive processes involved in performing an entire procedure. The surgeon must be able to perform the correct tasks in the correct sequence, and must be able to quickly and appropriately respond to any unexpected events or mistakes. It would be beneficial for a surgical procedure simulation to expose the training surgeon to difficult situations only rarely encountered in actual patients. We present here a framework for a full-procedure surgical simulator that incorporates an ability to detect discrete events, and that uses these events to track the logical flow of the procedure as performed by the trainee. In addition, we are developing a scripting language that allows an experienced surgeon to precisely specify the logical flow of a procedure without the need for programming. The utility of the framework is illustrated through its application to a mastoidectomy.
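To make the idea concrete, here is a minimal sketch (in C++, with made-up event and step names) of how detected events can drive a procedure-flow tracker; in the actual framework this transition graph would be generated from the surgeon-authored script rather than hard-coded:

```cpp
#include <cstdio>
#include <map>
#include <string>
#include <utility>

// Detected simulation events drive a state machine that tracks where
// the trainee is in the procedure; out-of-sequence events are flagged.
struct ProcedureTracker {
    std::string state = "expose_mastoid";
    // allowed transitions: (current step, event) -> next step
    std::map<std::pair<std::string, std::string>, std::string> edges = {
        {{"expose_mastoid", "cortex_opened"},     "thin_bone"},
        {{"thin_bone",      "facial_nerve_near"}, "skeletonize_nerve"},
    };

    void onEvent(const std::string& ev) {
        auto it = edges.find({state, ev});
        if (it != edges.end())
            state = it->second;   // legal step: advance
        else
            std::printf("warning: event '%s' unexpected in step '%s'\n",
                        ev.c_str(), state.c_str());
    }
};

int main() {
    ProcedureTracker t;
    t.onEvent("facial_nerve_near");   // out of sequence -> warning
    t.onEvent("cortex_opened");       // legal -> advance to "thin_bone"
    std::printf("current step: %s\n", t.state.c_str());
    return 0;
}
```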
Where: Stanford Robotics Lab, Stanford School of Medicine
When: 2003-present
Collaborators: Christopher Sewell, Nikolas Blevins, Kenneth Salisbury
papers
project page
Simulation of Temporal Bone Surgery
We created a framework for training-oriented simulation of temporal bone surgery. Bone dissection is simulated visually and haptically, using a hybrid data representation that allows smooth surfaces to be maintained for graphic rendering while volumetric data is used for haptic feedback. Novel sources of feedback are incorporated into the simulation platform, including synthetic drill sounds based on experimental data and simulated monitoring of virtual nerve bundles. Realistic behavior is modeled for a variety of surgical drill burrs, rendering the environment suitable for training low-level drilling skills. The system allows two users to independently observe and manipulate a common model, and allows one user to experience the forces generated by the other's contact with the bone surface. This permits an instructor to remotely observe a trainee and provide real-time feedback and demonstration.
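A rough sketch of the volumetric half of this hybrid representation, assuming a simple spherical burr and a crude illustrative per-voxel force model (the actual simulator's data structures and force model are more sophisticated):

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Bone is stored as a voxel occupancy grid used for haptics: drilling
// removes voxels, and a resistance force is accumulated from the voxels
// overlapped by the burr this tick. A separate surface mesh (not shown)
// would be re-extracted for graphics where voxels changed.
struct Vec3 { double x, y, z; };

struct BoneVolume {
    int n = 128;                 // voxels per side
    double h = 0.0005;           // voxel size [m] (6.4 cm cube total)
    std::vector<float> density;  // 0 = removed, >0 = bone
    BoneVolume() : density(n * n * n, 1.0f) {}
    float& at(int i, int j, int k) { return density[(k * n + j) * n + i]; }

    // Remove bone inside the spherical burr; each overlapped voxel adds
    // a small push from the voxel center toward the burr tip.
    Vec3 drill(const Vec3& tip, double burrRadius, double gain) {
        Vec3 force{0, 0, 0};
        int r = (int)std::ceil(burrRadius / h);
        int ci = (int)(tip.x / h), cj = (int)(tip.y / h), ck = (int)(tip.z / h);
        for (int k = ck - r; k <= ck + r; ++k)
        for (int j = cj - r; j <= cj + r; ++j)
        for (int i = ci - r; i <= ci + r; ++i) {
            if (i < 0 || j < 0 || k < 0 || i >= n || j >= n || k >= n) continue;
            Vec3 c{(i + 0.5) * h, (j + 0.5) * h, (k + 0.5) * h};
            Vec3 d{tip.x - c.x, tip.y - c.y, tip.z - c.z};
            double dist = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
            if (dist < burrRadius && at(i, j, k) > 0.0f) {
                at(i, j, k) = 0.0f;  // erode; mark region dirty for remeshing
                force.x += gain * d.x;
                force.y += gain * d.y;
                force.z += gain * d.z;
            }
        }
        return force;
    }
};

int main() {
    BoneVolume bone;
    Vec3 f = bone.drill({0.032, 0.032, 0.032}, 0.002, 50.0);
    std::printf("resistance force: (%g, %g, %g) N\n", f.x, f.y, f.z);
    return 0;
}
```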
Where: Stanford Robotics Lab, Stanford School of Medicine
When: 2002-present
Collaborators: Dan Morris, Christopher Sewell, Nikolas Blevins, Kenneth Salisbury
papers
project page
Craniofacial Surgery Simulation
We have developed an environment for simulating craniofacial surgeries visually and haptically. CT or MR data can be loaded into the simulation environment, and a user can drill and manipulate skeletal anatomy using a variety of virtual tools controlled with a force-feedback haptic device. Graphic, haptic, and auditory feedback is coordinated to provide a realistic sense of interaction with the virtual bone. For simulation of osteosynthesis techniques, 3D models of several osteosynthesis plates are incorporated into the system. Using these industry-standard plates, users can plan and practice operations using exact 3D models of both the patient and the hardware that will be used intraoperatively.
Where: Stanford Robotics Lab, Stanford School of Medicine
When: 2004-present
Collaborators: Dan Morris, Sabine Girod, Ken Salisbury
papers
project page
Haptic Interface Control
I'm currently working on various projects involving the analysis of stability for haptic devices and the synthesis of better control algorithms that allow rendering a wider range of impedances.
Passivity analysis of Haptic Devices
The stability of haptic devices has been studied in the past by various groups (Colgate, Hannaford, Gillespie, ...). Past analyses, however, have not focused on quantization, Coulomb friction, or amplifier dynamics. Our work extends those results to account for such effects.
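For reference, the classical sampled-data result this line of work starts from is Colgate and Schenkel's passivity condition for a virtual wall; a minimal sketch with illustrative numbers (not parameters from our papers):

```cpp
#include <cmath>
#include <cstdio>

// Colgate & Schenkel: a virtual wall of stiffness K and virtual damping
// B, rendered at sample period T, is passive as long as the device's
// physical damping b satisfies  b > K*T/2 + |B|.
bool wallIsPassive(double b, double K, double B, double T) {
    return b > 0.5 * K * T + std::fabs(B);
}

int main() {
    double b = 0.005;   // device viscous damping [N·s/m] (illustrative)
    double K = 1000.0;  // virtual wall stiffness [N/m]
    double B = 1.0;     // virtual wall damping [N·s/m]
    double T = 0.001;   // servo period [s] (1 kHz)
    std::printf("passive: %s\n", wallIsPassive(b, K, B, T) ? "yes" : "no");
    return 0;
}
```

Quantization, Coulomb friction, and amplifier dynamics all perturb this picture, which is what our extension addresses.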
Where: Stanford Robotics Lab, Stanford Telerobotics Lab, LAR-DEIS University of Bologna
When: 2004-present
Collaborators: Nicola Diolaiti, Gunter Niemeyer, Ken Salisbury, Claudio Melchiorri
papers
Haptic Devices as Hybrid Systems
We're currently analyzing the stability of haptic devices using hybrid-system concepts. Hybrid models describe systems composed of both continuous and discrete components, the former typically associated with dynamical laws (e.g., physical first principles), the latter with logic devices such as switches, digital circuitry, and software code. As part of this work, we're extending the concept of passivity to hybrid models.
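A toy example of what "hybrid" means here, assuming the simplest possible haptic scenario: a virtual wall with a contact/no-contact mode switch, plus the energy bookkeeping that a passivity analysis would track across switches (all names and values are illustrative, not from the paper):

```cpp
#include <cstdio>

// One continuous state (device position/velocity), two discrete modes
// (free space vs. contact), and a guard that switches modes when the
// wall boundary is crossed.
struct DeviceState { double x, v; };
enum class Mode { FreeSpace, Contact };

struct HybridWall {
    double wallPos = 0.0;  // wall location [m]
    double K = 500.0;      // contact stiffness [N/m]
    Mode mode = Mode::FreeSpace;

    // Discrete dynamics: update the mode from the guard condition.
    void switchMode(const DeviceState& s) {
        mode = (s.x > wallPos) ? Mode::Contact : Mode::FreeSpace;
    }
    // Mode-dependent output: force applied back to the device.
    double force(const DeviceState& s) const {
        return (mode == Mode::Contact) ? -K * (s.x - wallPos) : 0.0;
    }
};

int main() {
    HybridWall wall;
    DeviceState s{-0.01, 0.1};      // start outside, moving toward the wall
    const double m = 0.1, dt = 0.001;
    double energyIn = 0.0;          // net energy the user injects into the wall
    for (int k = 0; k < 1000; ++k) {
        wall.switchMode(s);
        double f = wall.force(s);
        energyIn += -f * s.v * dt;  // power flowing into the virtual wall
        s.v += (f / m) * dt;        // continuous dynamics, Euler step
        s.x += s.v * dt;
    }
    // A negative energyIn over a round trip would flag non-passive behavior.
    std::printf("net energy into wall: %g J\n", energyIn);
    return 0;
}
```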
Where: DII University of Siena, Stanford Robotics Lab
When: 2003-present
Collaborators: Filippo Brogi, Alberto Bemporad, Gianni Bianchini
papers
CHAI3D
CHAI3D is a set of open-source libraries for creating visuo-haptic software. The idea is to stop reinventing the wheel every time someone needs basic (and not so basic) haptic or graphic rendering algorithms. After many delays, the first beta release of CHAI3D is out (as of July 2004). Particular effort is placed on creating libraries that are easily extensible, given that new devices and algorithms are constantly being created by the haptics community. To learn more go to www.chai3d.org
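As a flavor of what the library handles for you, here is a minimal haptics-only loop written against the modern CHAI3D 3.x API (graphics setup omitted); note that this API postdates the 2004 beta mentioned above, so class names may differ in early releases:

```cpp
#include "chai3d.h"
using namespace chai3d;

int main() {
    cWorld* world = new cWorld();                   // scene graph root
    cShapeSphere* sphere = new cShapeSphere(0.05);  // touchable sphere, 5 cm radius
    world->addChild(sphere);
    sphere->m_material->setStiffness(400.0);        // haptic stiffness [N/m]

    cHapticDeviceHandler handler;                   // enumerates connected devices
    cGenericHapticDevicePtr device;
    if (!handler.getDevice(device, 0)) return 1;    // grab the first device found
    device->open();

    cToolCursor* tool = new cToolCursor(world);     // single-point interaction tool
    world->addChild(tool);
    tool->setHapticDevice(device);
    tool->setRadius(0.01);
    tool->start();

    // The haptic servo loop; a real application runs this in its own
    // high-priority 1 kHz thread until shutdown.
    for (int i = 0; i < 1000000; ++i) {
        world->computeGlobalPositions(true);
        tool->updateFromDevice();                   // read device position
        tool->computeInteractionForces();           // proxy-based force response
        tool->applyToDevice();                      // send forces to the device
    }
    device->close();
    return 0;
}
```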
Where: many places
When: 2002-present
Collaborators: Francois Conti, Dan Morris, Chris Sewell, Yuka Teraguchi, Doug Wilson, Maurice Halg, ...
Redundant and Mobile Haptic interfaces
Haptic devices normally feature fairly small workspaces (definitely not exceeding the workspace of a human arm). Moreover, they are typically grounded and can hardly be transported. Virtual environments, on the other hand, can be large (think of a CAVE, for instance, or a virtual museum in which users can move from one art piece to the next). How can we go beyond the current limitations in workspace and portability of haptic interfaces? A possible solution is combining a haptic device with a mobile platform. This creates a new type of device and a new set of interesting problems in haptic rendering and control. How do we render free-space motion? How do we render contact forces? How do we make this system safe for the user? How do we coordinate multiple mobile devices in a crowded virtual environment? We're currently working on some of these questions.
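One common strategy for the free-space question is to servo the mobile base so the haptic device stays near the center of its local workspace; a minimal sketch, with purely illustrative gains (not the controllers from our papers):

```cpp
#include <cmath>
#include <cstdio>

struct Vec2 { double x, y; };

// In free space, drive the base so the arm's end effector stays near
// the center of its local workspace, effectively extending the usable
// workspace indefinitely.
Vec2 baseVelocityCommand(const Vec2& eeLocal /* end effector in base frame */) {
    const Vec2 center{0.0, 0.0};   // workspace center in base frame
    const double deadband = 0.02;  // [m] no base motion near the center
    const double gain = 2.0;       // [1/s] proportional re-centering gain
    Vec2 err{eeLocal.x - center.x, eeLocal.y - center.y};
    if (std::hypot(err.x, err.y) < deadband) return {0.0, 0.0};
    return {gain * err.x, gain * err.y};  // base chases the end effector
}

int main() {
    Vec2 cmd = baseVelocityCommand({0.10, 0.0});  // arm stretched 10 cm forward
    std::printf("base velocity: (%.2f, %.2f) m/s\n", cmd.x, cmd.y);
    return 0;
}
```

During contact, the base motion must of course be folded into the tool pose (base pose composed with arm pose) so the rendered virtual position stays consistent while the platform moves.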
Where: DII University of Siena
When: 2003-present
Collaborators: Alessandro Formaglio, Max Franzini, Antonello Giannitrapani, Domenico Prattichizzo
papers
video (WMV 7MB) - Moving in free space
video (WMV 30MB) - Limitations of Mobile Haptic devices in free space. In order to test two different mobile haptic devices in a controlled fashion, we employ a mobile robot to act as the human user of the haptic device (ISER04).
Multi-point haptic interaction
One of my main interests is studying multi-point interaction with virtual objects. Past experience has shown that the simple single-point contact interaction metaphor can be surprisingly convincing and useful. This interaction paradigm, however, imposes limits on what a user can do or feel. Single-point contact interaction makes it impossible for a user to perform such basic tasks as grasping, manipulation, and multi-point exploration of virtualized objects, restricting the overall level of interactivity necessary in various applications (such as, for instance, surgical training). Pushing haptic interfaces beyond these limits has been, and still is, one of my main goals. Some of the aspects I have focused on are described in the following.
Stable Multi-point haptic interaction with deformable objects
Obtaining stable haptic interaction with deformable objects, like the ones employed in force-feedback-enhanced surgical simulators, is a challenging task. Deformable-object algorithms can reach very high levels of computational complexity, which translates into low servo rates and computational delays, and ultimately into unstable force feedback. Using simple local representations of the object being touched can limit such effects by decoupling the simulation and haptic rendering loops. However, in the case of deformable objects, this "local model" cannot simply approximate the local geometry of the object being touched. As demonstrated in our work, local stiffness must also be considered, thus creating "soft" local models. By choosing the local model stiffness appropriately, the overall algorithm is stable, independent of the servo rate and computational delays present in the system.
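A minimal sketch of the local-model decoupling, assuming the simplest possible local representation (a contact plane plus a stiffness; field names are illustrative): the slow deformable-object simulation periodically publishes the model, and the fast 1 kHz servo loop renders forces against it alone.

```cpp
#include <cstdio>

struct Vec3 { double x, y, z; };

// Published by the slow simulation loop whenever it finishes a step.
struct SoftLocalModel {
    Vec3 point;   // a point on the local contact surface
    Vec3 normal;  // unit surface normal at the contact
    double k;     // local stiffness chosen for stability [N/m]
};

// Runs at the haptic servo rate using the latest published model.
Vec3 renderForce(const SoftLocalModel& m, const Vec3& probe) {
    Vec3 d{probe.x - m.point.x, probe.y - m.point.y, probe.z - m.point.z};
    double pen = -(d.x * m.normal.x + d.y * m.normal.y + d.z * m.normal.z);
    if (pen <= 0.0) return {0, 0, 0};  // no penetration, no force
    return {m.k * pen * m.normal.x,    // spring force along the normal
            m.k * pen * m.normal.y,
            m.k * pen * m.normal.z};
}

int main() {
    SoftLocalModel m{{0, 0, 0}, {0, 0, 1}, 300.0};
    Vec3 f = renderForce(m, {0, 0, -0.002});  // probe 2 mm below the surface
    std::printf("force: (%g, %g, %g) N\n", f.x, f.y, f.z);
    return 0;
}
```

The point made above is that k must be chosen from the object's local compliance, not just its geometry, for the combined loops to remain stable.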
Where: Stanford Robotics Lab, DII University of Siena
When: 2001-2003
Collaborators: Ken Salisbury, Remis Balaniuk, Domenico Prattichizzo, Maurizio de Pascale, Gianluca de Pascale
papers
video (WMV 2MB) - multiple users interacting with a deformable object
video (WMV 3MB) - two-point interaction: breast palpation exam
Soft Finger Proxy Algorithm
A point contact, i.e., one that can exert forces in three degrees of freedom, combines a high level of realism with simpler collision detection algorithms. Using two or more point contacts to grasp virtual objects works well but has one main drawback: objects tend to rotate about the contact normals. A simple way to avoid this, one that does not increase complexity, is allowing point contacts to exert torsional friction about the contact normal. The grasping community refers to this type of contact as a "soft finger". In our work we have proposed a soft finger proxy algorithm. In order to tune this algorithm to fit the behavior of human fingertips, we have considered various fingerpad models proposed in the past by the biomechanics community, derived their torsional friction capabilities, and compared them to experimental data obtained from a set of five subjects.
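A sketch of the torsional part of such a proxy, assuming a torsional spring between the device's twist and the proxy's twist, with slip at a friction limit that scales with the normal force (coefficients are illustrative, not our experimentally fitted values):

```cpp
#include <cmath>
#include <cstdio>

// The proxy keeps its own twist angle about the contact normal. When
// the torsional spring torque exceeds the friction limit, the proxy
// slips so the spring sits exactly at the limit.
struct SoftFingerTwist {
    double kTheta = 0.2;    // torsional spring stiffness [N·m/rad]
    double eTors  = 0.004;  // torsional friction coefficient [m]
    double proxyTheta = 0.0;

    // Returns the torque rendered about the contact normal.
    double update(double deviceTheta, double normalForce) {
        double tau = kTheta * (deviceTheta - proxyTheta);
        double tauMax = eTors * normalForce;  // friction limit scales with load
        if (std::fabs(tau) > tauMax) {
            // slip: move the proxy so the spring torque equals the limit
            proxyTheta = deviceTheta - std::copysign(tauMax / kTheta, tau);
            tau = std::copysign(tauMax, tau);
        }
        return -tau;  // reaction torque fed back to the user
    }
};

int main() {
    SoftFingerTwist sf;
    // twist the device 0.5 rad while pressing with 2 N of normal force
    std::printf("torque: %g N·m\n", sf.update(0.5, 2.0));
    return 0;
}
```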
Where: Stanford Robotics Lab
When: 2001-present
Collaborators: Ken Salisbury, Roman Devengenzo, Antonio Frisoli, Massimo Bergamasco
papers
video (WMV 2MB) - virtual object manipulation using 4 contact points
Psychophysics of multi-point contact
It has been shown in the past that humans perceive object shape faster and more efficiently when using their hands to their full capability, i.e., when using all ten fingers at the same time. In this ongoing project we're trying to understand whether the same results hold for shape perception mediated by haptic devices that allow multi-point interaction. Is multi-point kinesthetic feedback enough to give users better perception of object shape? Can tactile feedback help?
Where: Stanford Robotics Lab, PERCRO
When: 2003
Collaborators: Antonio Frisoli, William Provancher, Mark Cutkosky, Ken Salisbury, Massimo Bergamasco
papers
Sensor/actuation asymmetry for haptic interfaces
Haptic interfaces enable us to interact with virtual objects by sensing our actions and communicating them to a virtual environment. A haptic interface with force-feedback capability provides sensory information back to the user, thus communicating the consequences of his/her actions. Ideally, haptic devices should be built with an equal number of sensors and actuators, fully mapping actions and reactions between user and virtual environment. As the number of degrees of freedom of haptic devices increases, however, a likely scenario is that devices will feature more sensors than actuators, given that the former are usually smaller, lighter, and cheaper than the latter. What are the effects of using this type of haptic device, which we refer to as "asymmetric"? As our past research has shown, while asymmetric devices can enable richer exploratory interactions, the lack of equal dimensionality in force feedback can lead to interactions that are energetically non-conservative. In our present work we are investigating how to create haptic rendering software that limits such non-conservative effects, and testing how these effects are perceived by users.
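One way to expose such non-conservative behavior at runtime is an energy (passivity) observer in the style of Hannaford and Ryu: integrate the power exchanged at the device port and flag when the net energy goes negative. A minimal single-axis sketch (illustrative, not our actual monitoring code):

```cpp
#include <cstdio>

// Integrates the power the user injects into the virtual environment.
// On an asymmetric device only the actuated axes contribute force, so a
// closed loop through the sensed-but-unactuated directions can extract
// energy; a persistently negative total flags active (non-passive)
// behavior.
struct EnergyObserver {
    double energy = 0.0;  // net energy delivered to the environment [J]

    // f = force the user applies along an actuated axis [N],
    // v = velocity along that axis [m/s], dt = servo period [s].
    void tick(double f, double v, double dt) { energy += f * v * dt; }

    bool activeBehaviorDetected() const { return energy < 0.0; }
};

int main() {
    EnergyObserver obs;
    obs.tick(-1.0, 0.05, 0.001);  // one servo tick of illustrative data
    std::printf("active: %s\n", obs.activeBehaviorDetected() ? "yes" : "no");
    return 0;
}
```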
Where: Stanford Robotics Lab
When: 2001-present
Collaborators: Ken Salisbury, Gabriel Robles-De-La-Torre (psychophysical tests).
papers
Haptic Media Types
Embedding haptic elements inside different media types may be one of the most promising, and yet conceptually simple, applications of force feedback. Letting users touch a product before they purchase it online, creating e-books where readers can interact with the story being narrated, allowing readers to test the results proposed in haptics-related scientific publications in electronic form - all these simple ideas would allow haptics to become a more common and useful everyday tool. We (Unnur for the most part) have developed an ActiveX control that makes it possible to embed haptic scenes inside HTML pages and PPT presentations. The first version (contact me for the code... it's open source even though a bit messy) only supports Phantom haptic devices. Currently we're working (Pierluigi for the most part) on a more general object that will be embeddable in PDF documents as well as HTML and PPT. We're also interested in testing how this technology can be used in online shopping and interactive electronic book scenarios.
Where: DII University of Siena, Stanford Robotics Lab
When: 2002-present
Collaborators: Unnur Gretarsdottir, Pierluigi Viti, Kenneth Salisbury
papers
demo (soon to come)
Haptic Interaction with fetuses
The FETOUCH (FEtus TOUCH) system allows users to extract a visual-haptic 3D model from a set of 2D scans in DICOM format and then interact (visually and haptically) with the resulting model. The system has been used mainly by the Dept. of Gynecology of the University of Siena (Italy) to allow mothers to interact with 3D models of the fetus they carry. Even though this system is very similar to one developed and sold by Novint Technologies, it was developed independently during the years 2001-2002, and it is freely available for download. For more information go to the FETOUCH web site.
Where: DII University of Siena
When: 2002-2003
Collaborators: Berardino LaTorre (who has been the main programmer behind this), Domenico Prattichizzo, Antonio Vicino, Siena University Medical School.
video (avi 1.2MB)
papers
Some of the algorithms developed for FETOUCH are now being used in another project.
Virtual Baby
The primary goal of this project is to create a virtual-reality-based system for training physicians, nurses, and allied health care professionals in newborn resuscitation. We're currently developing a first prototype of the system, which will allow trainees to physically examine the virtual baby, perform critical technical interventions, and develop the cognitive and motor skills necessary for caring for real human patients. Interesting research aspects include: creating a realistic physical model of the baby from CT and MRI scans; creating a model of the baby relating physiologic, anatomic, and behavioral characteristics; and creating haptic rendering algorithms for some basic interventions (chest compression, feeling a pulse by pinching the umbilical cord, ventilation, intubation).
Where: DII University of Siena, Stanford Center for Advanced Pediatric Education (CAPE)
When: 2002-present
Collaborators: Berardino La Torre, Louis Halamek, Allison Murphy, Domenico Prattichizzo
papers (check back soon)
video (check back soon)
Pure Form
The "Museum of Pure Form" was conceived in 1993 by Professor
Massimo Bergamasco which was my Ph.D. advisor at PERCRO. However,
it wasn't until the end on the 90s that the project was funded
by the IST program of the EU. I was very lucky to be involved
in the project from the beginning, and participated at various
levels (grant
writing, managing contacts with museums,
haptic rendering algorithms). The system is now been showcased
in various European museums, and a collection of 3D digital models
of statues has been created by using 3D scanners. Users can physically
interact with statues, something that would normally be impossible
(or, I guess, not advisable). For more information on the project
please visit www.pureform.org.
Where: PERCRO (Pisa), Centro Galego de Arte Contemporánea, University College London (UCL)
When: 2000-2004
Collaborators: Massimo Bergamasco, Antonio Frisoli, all the folks at PERCRO
papers
Motion-base simulators: Moris
This was the first major project I was involved with at PERCRO during my PhD. Moris is a 7-DOF motion-base motorcycle simulator based on a hydraulically actuated Stewart platform. While similar ideas have been implemented for flight simulators, Moris was, at the time (1999), the most advanced motorcycle simulator ever built. Moris was built with and for PIAGGIO (the makers of Vespa scooters) and is currently located at their headquarters in Pontedera, Italy.
I was involved in designing the algorithms in charge of creating realistic inertial feedback for the user. This is a challenging problem given that the accelerations that would normally be experienced by the rider must be replicated using a simulator with a limited workspace, while ensuring a high level of safety. We designed a washout filter, drawing inspiration from past solutions used in flight simulators. Our design, however, was specific to the case of a motorcycle, i.e., one in which the position of the rider's head is constantly changing. We tracked the position of the rider's head using a mechanical structure mounted on the motorcycle mock-up, and created a washout filter tuned to the rider's head, unlike what is done in flight simulators.
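The heart of any washout filter is a high-pass on the vehicle's acceleration: the platform reproduces onset cues, then "washes out" back toward its neutral pose so it never runs out of workspace. A minimal first-order sketch with illustrative cutoff and rate (the Moris filter additionally handled tilt coordination and the head tracking described above):

```cpp
#include <cstdio>

// Discrete first-order high-pass filter applied to the commanded
// acceleration: y[k] = a * (y[k-1] + u[k] - u[k-1]).
struct WashoutHP {
    double wc;  // cutoff frequency [rad/s]
    double dt;  // sample period [s]
    double yPrev = 0.0, uPrev = 0.0;

    double filter(double u) {
        double a = 1.0 / (1.0 + wc * dt);
        double y = a * (yPrev + u - uPrev);
        yPrev = y; uPrev = u;
        return y;
    }
};

int main() {
    WashoutHP hp{1.0, 0.01};  // 1 rad/s cutoff, 100 Hz update
    // A step in vehicle acceleration: the platform command shows the
    // onset, then decays (washes out) even though the input persists.
    for (int k = 0; k < 10; ++k)
        std::printf("%d: %.3f\n", k, hp.filter(k == 0 ? 0.0 : 1.0));
    return 0;
}
```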
Where: PERCRO (Pisa), Piaggio
When: 1999-2001
Collaborators: Carlo Alberto Avizzano, Diego Ferrazzin, Giuseppe Prisco, Massimo Bergamasco.
papers