Research Groups by Faculty
Serafim Batzoglou: Computational Genomics
Professor Batzoglou's group is interested in
algorithms and
computational systems for genomics. Some of their recent research
projects are: (1) MLAGAN, the first large-scale multiple alignment
system. Using MLAGAN, they can align and compare the entire DNA
sequences of Human, Mouse, Rat, and other organisms, and discover
elements that are evolutionarily constrained such as genes and
gene-regulatory sites. (2) ProbCons, a multiple aligner of proteins
based on a probabilistic model, and on a new technique that they call
"probabilistic consistency" in alignment. Using ProbCons they can
align hundreds of protein sequences with significantly higher accuracy
than was previously possible. (3) ICA-based clustering of genes using
microarray data. By projecting gene expression vectors into
independent components, they cluster genes into statistically
interesting components that may represent independent biological
processes in a cell. This group is currently involved in the ENCODE
project, an NIH initiative to analyze 1% of the human genome with
computational and experimental techniques, a pilot study that can be
scaled to the complete genome at a later time.
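To make the ICA-based clustering idea concrete, here is a minimal sketch, assuming a genes-by-conditions expression matrix and using scikit-learn's FastICA; it illustrates the general technique rather than the group's actual pipeline, and the number of components and the loading threshold are arbitrary choices for the example.

    import numpy as np
    from sklearn.decomposition import FastICA

    def ica_gene_clusters(expression, n_components=10, load_threshold=2.0, seed=0):
        """expression: (n_genes x n_conditions) array of log-expression values."""
        # Project gene expression vectors onto statistically independent components.
        ica = FastICA(n_components=n_components, random_state=seed)
        loadings = ica.fit_transform(expression)      # (n_genes x n_components)

        # Standardize loadings per component, then assign each gene to the
        # component on which it loads most strongly, keeping only genes whose
        # standardized loading exceeds the threshold.
        z = (loadings - loadings.mean(axis=0)) / loadings.std(axis=0)
        clusters = {c: [] for c in range(n_components)}
        for gene, row in enumerate(z):
            best = int(np.argmax(np.abs(row)))
            if abs(row[best]) >= load_threshold:
                clusters[best].append(gene)
        return clusters

Genes that load strongly on the same independent component end up in the same cluster, mirroring the idea that each component may reflect an independent biological process.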
Gill
Bejerano: Computational and Experimental Genomics
The Bejerano lab focuses on harnessing comparative genomics of
humans and related species to address the fascinating challenge of
understanding human embryonic development.
Recent research has highlighted many thousands of regions in
the human genome that have never been studied before. These
regions appear to enact the exquisite resource allocation
control required during embryonic development. Among them are
the "ultraconserved elements" discovered by Prof. Bejerano,
arguably the most mysterious regions in the human genome.
The Bejerano Lab focuses on deciphering the syntax and grammar
of this unique regulatory language; tracing its origins,
evolution, and effect on the human lineage; and understanding
its contribution to human diseases, aiming to discover new
approaches to diagnose, and possibly even cure and prevent
them.
The lab's computational approaches rely heavily on machine learning,
probabilistic and statistical reasoning, and projects range
from the design of discovery-facilitating computational tools, to
their extensive application in pursuit of novel biological insights.
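As one concrete illustration of an object of study mentioned above, the sketch below scans an aligned set of sequences for ultraconserved elements, using the commonly cited definition of at least 200 consecutive, perfectly conserved, gap-free alignment columns; the input format and threshold are assumptions made for this example, and the code is not the lab's own.

    def ultraconserved_runs(aligned_seqs, min_len=200):
        """Find runs of >= min_len alignment columns that are identical and
        gap-free across all sequences (e.g. aligned human, mouse, rat DNA)."""
        runs, start = [], None
        n = len(aligned_seqs[0])
        for i in range(n):
            column = {seq[i] for seq in aligned_seqs}
            conserved = len(column) == 1 and "-" not in column
            if conserved and start is None:
                start = i                       # a conserved run begins here
            elif not conserved and start is not None:
                if i - start >= min_len:
                    runs.append((start, i))     # half-open interval [start, i)
                start = None
        if start is not None and n - start >= min_len:
            runs.append((start, n))
        return runs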
The Knowledge Systems Laboratory (KSL)
conducts research in the core AI areas of knowledge representation and
reasoning with the goal of developing techniques for effectively
representing and intelligently using knowledge in computer systems.
Current research areas include representation languages and deductive
question-answering for the Semantic Web, explanation of reasoning
results, hybrid reasoning, modeling and analysis of alternative
hypothetical scenarios, knowledge aggregation and compilation, and
multi-use ontology engineering.
Michael Genesereth is director of the Stanford Logic Group. He is most
known for his work on Computational Logic and applications of that work
in enterprise computing and electronic commerce.
Computational Logic is that branch of Computer Science
concerned with the representation and processing of information in the
form of logical statements. "If A is true and B is true, then either C
is true or D is true" - things of that sort.
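To make the example statement concrete, here is a small illustrative sketch (not the Logic Group's software) that encodes "if A is true and B is true, then either C is true or D is true" as a propositional formula and enumerates the truth assignments that violate it.

    from itertools import product

    def implies(antecedent, consequent):
        return (not antecedent) or consequent

    def statement(a, b, c, d):
        # "If A is true and B is true, then either C is true or D is true."
        return implies(a and b, c or d)

    # Enumerate all truth assignments; the statement fails only when A and B
    # hold but neither C nor D does.
    violations = [assignment
                  for assignment in product([False, True], repeat=4)
                  if not statement(*assignment)]
    print(violations)    # [(True, True, False, False)]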
Research carried out under Prof. Genesereth's supervision covers
formal languages, automated reasoning, and "deliberate systems"
(computer systems capable of controlling their activity based on
declarative specifications, changeable at runtime).
Leonidas
Guibas: Computational Geometry and Distributed Systems
Professor Guibas heads the
Geometric Computation group in the Computer Science Department of
Stanford
University. He is a member of the Computer Graphics and Robotics
Laboratories.
He works on algorithms for sensing, modeling, reasoning, rendering, and
acting on the physical world. Professor Guibas' interests span
computational
geometry, geometric modeling, computer graphics, computer vision,
robotics,
and discrete algorithms --- all areas in which he has published and
lectured
extensively. Current activities focus on animation, collision
detection,
efficient rendering, motion planning, image databases, and physical
simulations.
Specific projects include:
- data structures for mobile data (kinetic data
structures); see the sketch after this list
- ad-hoc sensor and communication networks
- randomized geometric algorithms
- rounding and approximating geometric structures
- visibility-based motion planning
- Monte-Carlo algorithms for global illumination and
motion planning
- organizing and searching libraries of 3D shapes
and images
- physical simulations with deformable objects
(molecules, fabric)
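The sketch below illustrates the kinetic-data-structures item above: a sorted list of points moving with constant velocities is maintained by scheduling "certificate" failures (crossings of adjacent points) in an event queue and repairing only the certificates a failure invalidates. It is a simplified illustration with made-up trajectories, not code from the Geometric Computation group.

    import heapq

    class KineticSortedList:
        """Maintain points x_i(t) = x0_i + v_i * t in sorted order over time."""

        def __init__(self, points):
            # points: list of (x0, v) pairs; sort indices by position at t = 0.
            self.points = points
            self.order = sorted(range(len(points)), key=lambda i: points[i][0])
            self.time = 0.0
            self.events = []                     # (failure_time, version, slot)
            self.version = [0] * max(len(points) - 1, 0)
            for slot in range(len(self.order) - 1):
                self._schedule(slot)

        def _pos(self, i, t):
            x0, v = self.points[i]
            return x0 + v * t

        def _schedule(self, slot):
            # Certificate: order[slot] stays left of order[slot + 1].
            i, j = self.order[slot], self.order[slot + 1]
            (xi, vi), (xj, vj) = self.points[i], self.points[j]
            if vi > vj:                          # left point is catching up
                t_fail = (xj - xi) / (vi - vj)
                if t_fail > self.time:
                    heapq.heappush(self.events, (t_fail, self.version[slot], slot))

        def advance(self, t):
            # Process certificate failures in time order up to time t.
            while self.events and self.events[0][0] <= t:
                t_fail, ver, slot = heapq.heappop(self.events)
                if ver != self.version[slot]:
                    continue                     # stale event, ignore
                self.time = t_fail
                self.order[slot], self.order[slot + 1] = (self.order[slot + 1],
                                                          self.order[slot])
                for s in (slot - 1, slot, slot + 1):
                    if 0 <= s < len(self.order) - 1:
                        self.version[s] += 1     # invalidate old certificates
                        self._schedule(s)
            self.time = t
            return [self._pos(i, t) for i in self.order]

    # Example: three points with linear trajectories; sorted positions at t = 4.
    kds = KineticSortedList([(0.0, 2.0), (5.0, 0.0), (10.0, -1.0)])
    print(kds.advance(4.0))                      # [5.0, 6.0, 8.0]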
Leonidas Guibas obtained his Ph.D. from Stanford in
1976, under the supervision of Donald Knuth. His main subsequent
employers were Xerox
PARC, MIT, and DEC/SRC. He has been at Stanford since 1984 as Professor
of Computer
Science. He has produced several Ph.D. students who are well-known in
computational
geometry, such as John Hershberger, Jack Snoeyink, and Jorge Stolfi, and
in computer
graphics, such as David Salesin and Eric Veach. At Stanford he has
developed new courses
in algorithms and data structures, the mathematical foundations of
computer graphics, and
geometric algorithms. Professor Guibas was recently elected an ACM
Fellow.
Oussama Khatib: Robotics and Haptics
Professor Khatib pursues research on robotic
control, haptic interfaces,
mobile manipulation, and simulation. A new field of robotics is
emerging. Robots are today moving towards applications beyond the
structured environment of a manufacturing plant. They are making their
way into the everyday world that people inhabit. The successful
introduction of robotics into human environments will rely on the
development of competent and practical systems that are dependable,
safe, and easy to use. His research focuses on strategies and
algorithms associated with the autonomous behaviors needed for robots
to work, assist, and cooperate with humans. In addition to the new
capabilities they bring to the physical robot, these models and
algorithms, and more generally the body of developments in robotics,
are having a significant impact on the virtual world. Haptic interaction
with an accurate dynamic simulation provides unique insights into the
real-world behaviors of physical systems. The potential applications
of this emerging technology include virtual prototyping, animation,
surgery, robotics, cooperative design, and education among many
others. Haptics is one area where the computational requirement
associated with the resolution in real-time of the dynamics and
contact forces of the virtual environment is particularly
challenging. His work encompasses various methodologies and
algorithms that address the computational challenges associated with
interactive simulations involving multiple contacts with complex
human-like robotic structures.
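As a rough illustration of the real-time contact-force computation mentioned above, the sketch below renders a one-degree-of-freedom "virtual wall" with a spring-damper penalty force inside a high-rate loop. The device interface is faked, and the stiffness, damping, and loop rate are arbitrary illustrative values rather than anything taken from Prof. Khatib's systems.

    import time

    WALL_POSITION = 0.0      # wall at x = 0; free space for x > 0
    STIFFNESS = 800.0        # N/m   (illustrative value)
    DAMPING = 2.0            # N*s/m (illustrative value)

    def wall_force(x, v):
        """Penalty force pushing the tool back out when it penetrates the wall."""
        penetration = WALL_POSITION - x
        if penetration <= 0.0:
            return 0.0
        force = STIFFNESS * penetration - DAMPING * v
        return max(force, 0.0)               # never pull the tool into the wall

    def haptic_loop(read_position, send_force, rate_hz=1000, duration_s=0.01):
        # Real haptic loops typically run near 1 kHz against an actual device API.
        dt = 1.0 / rate_hz
        prev_x = read_position()
        for _ in range(int(duration_s * rate_hz)):
            x = read_position()
            v = (x - prev_x) / dt            # crude finite-difference velocity
            send_force(wall_force(x, v))
            prev_x = x
            time.sleep(dt)

    # Fake device for demonstration: the tool is slowly pushed into the wall.
    state = {"x": 0.005}
    def read_position():
        state["x"] -= 0.0005                 # pretend the user keeps pushing inward
        return state["x"]
    def send_force(f):
        print(f"x = {state['x']: .4f} m, force = {f:6.2f} N")
    haptic_loop(read_position, send_force)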
Daphne
Koller: Bio-Informatics, Probabilistic Inference, and Machine
Learning
Professor Koller has been a
pioneer in the area of probabilistic inference and relational
models. Her framework "Probabilistic Relational Models" is in
widespread use around the world in applications as diverse as
intelligent data analysis, robotic mapping, image understanding, and
computational biology.
Professor Koller has been working on understanding
genetic processes
from a variety of genomic data sets, using techniques from machine
learning and probabilistic models. In one recent project, her group
considered the problem of gene regulation. All of the cells in our
body contain exactly the same DNA, but the behavior of different cells
can vary radically. The reason is that some genes are activated in
some cells and dormant in others. Understanding the regulatory
processes that cause genes to activate has important implications for
understanding how cells function. It also affects how diseases that
involve breakdown in regulatory processes, such as cancer, can
develop. In their recent work, published in the highly prestigious
journal Nature Genetics, Daphne and Eran Segal, together with several
other collaborators (including Stanford alum Nir Friedman, now at
Hebrew University), provided a high-throughput computational method
for extracting regulatory circuits from large collections of gene
expression measurements. The method identified modules of genes that
are co-regulated and determined the regulatory genes that tell each
module of genes to turn on or off - in other words, to start or stop
making proteins. The proteins from each module, in turn, are
responsible for a different cell process. The results of the analysis
were shown to reproduce many regulatory relationships that were
previously discovered. More interestingly, in collaboration with
Prof. David Botstein's group (Stanford, Genetics Department), they
also tested some of the method's novel predictions in real wet-lab
experiments. They "knocked out" regulators under the conditions where
each was predicted to be active. Three out of three tested knockouts
turned out to regulate the predicted genes. This showed that the
method works, and allowed the characterization of three previously
uncharacterized genes.
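The sketch below conveys, in a deliberately simplified form, the flavor of this kind of analysis: group genes with similar expression profiles into modules, then score candidate regulators by how well their profiles track each module's mean expression. It is not the published method from the Nature Genetics paper; the crude k-means step, the correlation-based scoring, and every parameter value are assumptions made for illustration only.

    import numpy as np

    def find_modules(expr, gene_names, regulator_names, n_modules=5, seed=0):
        """expr: (genes x experiments) matrix; regulator_names must appear in gene_names."""
        rng = np.random.default_rng(seed)
        genes, _ = expr.shape

        # 1. Crude k-means clustering of genes into modules.
        centers = expr[rng.choice(genes, n_modules, replace=False)]
        for _ in range(50):
            d = ((expr[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
            labels = d.argmin(axis=1)
            for m in range(n_modules):
                if (labels == m).any():
                    centers[m] = expr[labels == m].mean(axis=0)

        # 2. For each module, rank candidate regulators by the absolute correlation
        #    between the regulator's profile and the module's mean profile.
        reg_idx = [gene_names.index(r) for r in regulator_names]
        modules = []
        for m in range(n_modules):
            member_idx = np.where(labels == m)[0]
            if member_idx.size == 0:
                continue
            profile = expr[member_idx].mean(axis=0)
            scores = [abs(np.corrcoef(expr[i], profile)[0, 1]) for i in reg_idx]
            best = regulator_names[int(np.argmax(scores))]
            modules.append({"regulator": best,
                            "genes": [gene_names[i] for i in member_idx]})
        return modules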
Jean-Claude Latombe: Motion Planning and Robotics
The goal of Professor Latombe's research is to
create autonomous
agents that sense, plan, and act in real and/or virtual worlds. His
work
focuses on designing architectures and algorithms to represent, sense,
plan, control, and render motions of physical objects. The key
underlying issue is to efficiently capture the connectivity of
configuration or state spaces that are both high-dimensional and
geometrically complex. Specific topics include: collision-free path
planning among obstacles, optimal motion planning using dynamics
equations, motion planning to achieve visual tasks, dealing with
sensing
and control uncertainty, assembly planning, construction of 3-D models
of complex environments, visual tracking of articulated objects,
relating shapes to functions, and reasoning in multiple-agent worlds.
Applications include: robot-assisted medical surgery, integration of
design and manufacturing, graphic animation of digital actors, study of
molecular motions (folding, binding). His current projects include
the study of motion pathways of bio-molecules, the acquisition and
exploitation of geometric models of 3D deformable objects, and
the creation of multi-limbed rock-climbing robots.
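One standard way to capture the connectivity of a configuration space is a probabilistic roadmap; the sketch below builds one for a point robot moving among circular obstacles in the plane and answers a path query over it. The workspace bounds, obstacle model, sample count, and connection radius are illustrative assumptions, not parameters from any of the group's systems.

    import math, random
    from collections import deque

    OBSTACLES = [((5.0, 5.0), 2.0), ((2.0, 7.0), 1.0)]    # (center, radius)

    def collision_free(p):
        return all(math.dist(p, c) > r for c, r in OBSTACLES)

    def segment_free(p, q, step=0.05):
        # Check intermediate points along the segment from p to q.
        n = max(1, int(math.dist(p, q) / step))
        return all(collision_free((p[0] + (q[0] - p[0]) * t / n,
                                   p[1] + (q[1] - p[1]) * t / n))
                   for t in range(n + 1))

    def build_prm(n_samples=300, radius=1.5, seed=1):
        # Sample collision-free configurations and connect nearby, visible pairs.
        random.seed(seed)
        nodes = []
        while len(nodes) < n_samples:
            p = (random.uniform(0, 10), random.uniform(0, 10))
            if collision_free(p):
                nodes.append(p)
        edges = {i: [] for i in range(len(nodes))}
        for i in range(len(nodes)):
            for j in range(i + 1, len(nodes)):
                if math.dist(nodes[i], nodes[j]) < radius and segment_free(nodes[i], nodes[j]):
                    edges[i].append(j)
                    edges[j].append(i)
        return nodes, edges

    def query(nodes, edges, start, goal, radius=1.5):
        # Connect start and goal to the roadmap, then breadth-first search.
        nodes = nodes + [start, goal]
        s, g = len(nodes) - 2, len(nodes) - 1
        edges = {k: list(v) for k, v in edges.items()}
        edges[s], edges[g] = [], []
        for i in range(len(nodes) - 2):
            for extra in (s, g):
                if math.dist(nodes[i], nodes[extra]) < radius and segment_free(nodes[i], nodes[extra]):
                    edges[i].append(extra)
                    edges[extra].append(i)
        prev, queue = {s: None}, deque([s])
        while queue:
            u = queue.popleft()
            if u == g:
                path, node = [], g
                while node is not None:
                    path.append(nodes[node])
                    node = prev[node]
                return path[::-1]
            for v in edges[u]:
                if v not in prev:
                    prev[v] = u
                    queue.append(v)
        return None                              # goal not reachable on this roadmap

    nodes, edges = build_prm()
    print(query(nodes, edges, (0.5, 0.5), (9.5, 9.5)))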
Fei-Fei Li: Computer Vision, Human Vision
Research in Professor Li's lab focuses on two intimately connected branches of vision research: computer vision and human vision. In both fields, we are intrigued by visual functionalities that give rise to semantically meaningful interpretations of the visual world. In computer vision, we aspire to build intelligent visual algorithms that perform important visual perception tasks such as object recognition, scene categorization, integrative scene understanding, human motion recognition, material recognition, etc. In human vision, our curiosity leads us to study the underlying neural mechanisms that enable the human visual system to perform high level visual tasks with amazing speed and efficiency.
Chris
Manning: Natural Language Processing
Chris Manning works on systems and
formalisms that can intelligently process and produce human
languages. His research concentrates on probabilistic models of
language and statistical natural language processing, information
extraction, text understanding and text mining, constraint-based
theories of grammar (HPSG and LFG) and probabilistic extensions of
them, syntactic typology, computational lexicography (involving work
in XML, XSL, and information visualization), and other topics in
computational linguistics and machine learning.
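As a toy illustration of what a probabilistic model of language looks like in its simplest form, the sketch below estimates a bigram model with add-one smoothing from a tiny made-up corpus; it is a generic textbook example, unrelated to any specific system from the NLP group.

    from collections import Counter

    corpus = "the cat sat on the mat . the dog sat on the rug .".split()
    vocab = set(corpus)
    bigrams = Counter(zip(corpus, corpus[1:]))
    unigrams = Counter(corpus)

    def p_next(word, prev):
        # P(word | prev) with add-one (Laplace) smoothing.
        return (bigrams[(prev, word)] + 1) / (unigrams[prev] + len(vocab))

    print(p_next("cat", "the"), p_next("sat", "the"))   # "the cat" was seen, "the sat" was not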
Andrew Y.
Ng: Machine Learning and Robotics
Professor Ng's research focuses on
machine learning for data mining, pattern recognition and control.
His work addresses the fundamental mathematical properties of learning
as well as their practical application. Using machine learning, he
hopes to build the best, open-source spam filter in the world. He
also applies machine learning to problems in control such as
autonomous helicopter (and fixed-wing aircraft) flight, and legged
robot walking. These are problems that were either intractable to
human engineering efforts or that took thousands of person-hours to
solve. His learning methods are typically able to design
better-than-human controllers in minutes. Using machine learning, his
autonomous helicopter also recently became the first to be capable of
sustained inverted (upside-down) flight.
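As one generic, textbook-style illustration of machine learning applied to filtering, the sketch below trains a naive Bayes classifier over word counts; the data format and smoothing choice are assumptions for the example, and this is not the spam filter mentioned above.

    import math
    from collections import Counter

    def train(messages):
        """messages: list of (word_list, label) with label in {"spam", "ham"}."""
        counts = {"spam": Counter(), "ham": Counter()}
        totals = Counter()
        for words, label in messages:
            counts[label].update(words)
            totals[label] += 1
        vocab = set(counts["spam"]) | set(counts["ham"])
        return counts, totals, vocab

    def classify(words, counts, totals, vocab):
        scores = {}
        for label in ("spam", "ham"):
            # log prior plus log likelihoods with add-one smoothing
            score = math.log(totals[label] / sum(totals.values()))
            denom = sum(counts[label].values()) + len(vocab)
            for w in words:
                score += math.log((counts[label][w] + 1) / denom)
            scores[label] = score
        return max(scores, key=scores.get)

    data = [("win money now".split(), "spam"), ("meeting at noon".split(), "ham")]
    counts, totals, vocab = train(data)
    print(classify("free money".split(), counts, totals, vocab))   # -> spam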
Kenneth Salisbury: Robotics, Haptics, and Medical Applications
Professor Salisbury's research is in the area
of robotics and haptics with particular emphasis on enabling enhanced
human-machine interaction. His appointment in the departments of
computer science and surgery reflects his interest in medical
applications. His NIH-sponsored research team is working to create a
collaborative, simulation-based surgical training environment,
utilizing networked multi-hand haptic and visual simulation to support
surgical skill and team training. He is also developing mechanical
and control systems for human-friendly robots - devices that will work
in cooperation (and contact) with humans. This work addresses
teleoperative and autonomous tasks as well as affective aspects of
human-machine interactions. His work on human interface technologies
focuses on the development of new haptic interface devices to enable
multi-hand, multi-finger interaction. This is part of his
visio-haptic workstation project.
Some of Professor Salisbury's previous activities,
which have resulted in
significant technology transfer, include his involvement in creating
advanced technologies exemplified by SensAble Technology's PHANTOM
haptic interface, Intuitive Surgical's daVinci Surgical System, and
Barrett Technology's WAM Arm.
Yoav
Shoham: Game Theory and Multi-Agent Systems
Professor Shoham's artificial intelligence work
includes formalizing common-sense reasoning (including notions such as
time, causation, and mental state) and multi-agent systems
(including agent-oriented programming
and coordination mechanisms). His current interests concern problems
at the interface of computer science and game theory, including
foundational theories of rationality, online auctions, and electronic
commerce.
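A tiny worked example at the interface of computer science and game theory, in the spirit of the auction work mentioned above: a sealed-bid second-price (Vickrey) auction, in which bidding one's true value is a dominant strategy. The bidder names and values below are made up for illustration.

    def vickrey_auction(bids):
        """bids: dict mapping bidder name -> bid. Returns (winner, price_paid)."""
        ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
        winner, _ = ranked[0]
        price = ranked[1][1] if len(ranked) > 1 else 0.0
        return winner, price

    print(vickrey_auction({"alice": 10.0, "bob": 7.5, "carol": 9.0}))   # ('alice', 9.0)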
Sebastian Thrun:
Robotics and Machine Learning
Professor Thrun seeks to understand information
processing and
decision making in robotics and decentralized systems. Thrun is best
known for his contributions to probabilistic robotics, which applies
methods from statistics and decision theory to robotics
problems. Many of Thrun's algorithms define the state of the art in
robotics perception and control.
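To illustrate the statistical estimation at the heart of probabilistic robotics, the sketch below runs a discrete Bayes filter that localizes a robot in a one-dimensional corridor of cells by alternating a noisy sensor update with a noisy motion update; the corridor map, noise levels, and sensor model are invented for this example.

    WORLD = ["door", "wall", "door", "wall", "wall"]    # what a perfect sensor would see

    def normalize(p):
        s = sum(p)
        return [x / s for x in p]

    def motion_update(belief, p_move=0.8):
        """Robot tries to move one cell right; with probability 1 - p_move it stays."""
        n = len(belief)
        return [p_move * belief[(i - 1) % n] + (1 - p_move) * belief[i] for i in range(n)]

    def sensor_update(belief, observation, p_hit=0.9):
        """Cells consistent with the observation are weighted up, others down."""
        weighted = [b * (p_hit if WORLD[i] == observation else 1 - p_hit)
                    for i, b in enumerate(belief)]
        return normalize(weighted)

    belief = [1 / len(WORLD)] * len(WORLD)              # start fully uncertain
    for obs in ["door", "wall", "door"]:                # sense, then move, repeatedly
        belief = sensor_update(belief, obs)
        belief = motion_update(belief)
    print(belief)                                       # probability mass concentrates at index 3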
Thrun has built a number of pioneering robot systems. In 1997, he
built the world's first robotic museum tour guide for the German
Museum in Bonn, and a year later a similar robot for the Smithsonian
Museum. In 1999, he developed an autonomous robot for picking up balls
from a tennis court. In 2000, he developed a series of robotic
assistants for the elderly, which provide a range of services, such as
reminding people to take their medication, escorting them to the
doctor, or serving as a telepresence interface to deliver off-site
health care services. In 2002, he built a robot for mapping abandoned
coal mines. In 2003, Thrun
developed one of the first ground mapping helicopters, showing how
flying robots can assist ground vehicles when exploring urban terrain.
All these innovations are based on the new paradigm of probabilistic
robotics, and the basic science of statistical estimation in robotics.