Main Aim
High Performance Computing (HPC) is a key information technology attracting a great deal of attention in the USA, Europe, Japan and other economically rising Asian countries as a fundamental tool for scientific research, industrial production and business decision making. HPC is viewed as one of the most important technologies of the 21st century, for example by PITAC, the US President's Information Technology Advisory Committee. Governments and the private sector have invested significant energy and financial resources in building adequate infrastructures and human skills to realize the expected benefits. Yet, despite many spectacular accomplishments of HPC, this tool is still underutilized by many academic institutions, industrial companies and businesses. Several challenges slow down the deployment of HPC; some are mentioned in the Programme section. Some or all of these challenges may disappear or become less significant with the advent of Grid Computing. Most IT strategists and experts believe that Grid Computing is the future. There have already been very successful attempts to connect individual computers globally and to create large clusters that have solved very compute- and data-intensive problems in physics, chemistry, the life sciences, astronomy and other fields. Grid Computing may soon be ready for industrial and business applications. It could be used for enterprise deployment as soon as several technical challenges are solved; among them are security, standards, reliability and accountability. The aim of the Workshop is to discuss future developments in HPC technologies and to help assess the main aspects of Grids, with special emphasis on solutions to grid computing deployment. The HPC Advanced Workshops in Cetraro are well established; two of them (1992 and 1996) were sponsored by NATO.
Workshop Topics
· General Issues in High Performance Computing
· Advanced Technologies for Petaflops Computing
· Emerging Computer Architectures and Their Performance
· Programming Models
· Parallel Languages
· Parallel Software Tools and Environments
· Distributed Systems and Algorithms
· Parallel Multimedia Computing Technologies
· Innovative Applications in Science and Industry
· High Performance Computing for Commercial Applications
· General Issues in Grid Computing
· Grid Scheduling
· Grid Computing for Enterprise: security, system life cycle management, reliability, accountability
Programme
Over forty invited papers will be presented in the five-day workshop. Keynote overview talks will be given together with research presentations.

Despite significant investments in HPC science and technology, many technical and economic challenges limit the use of HPC computers. Examples of such challenges are:

· limited parallel software portability;
· unclear cost/performance metrics for parallel computing;
· expensive reengineering of sequential legacy software;
· difficult parallel programming.

If we consider the TOP500 computers currently in use, we see that the predominant architecture of these machines is the cluster; in comparison, MPPs and vector computers are a small minority. This trend may or may not continue.

It will be interesting to see whether some tools developed in scientific and commercial HPC environments will eventually become interchangeable.

Several sessions on Grids will play a central role in the workshop programme; invited speakers from the public and private sectors will debate the most critical issues related to grid development strategies and Grids for the Enterprise.
Programme Committee
D. ABRAMSON, Clayton School of Information, Clayton, Vic, Australia
F. BAETKE, Global HPC Technology, Hewlett Packard, Richardson, TX, U.S.A.
R. BUYYA, Melbourne, VIC, Australia
F. CAPPELLO, INRIA, France
C. CATLETT, Maths and Computer Science Division, Argonne National Laboratory, Argonne, IL, U.S.A.
J. DONGARRA, Innovative Computing Laboratory, Computer Science Dept.
I. FOSTER, Math & Computer Science Div. and Dept. of Computer Science
G. FOX, Community Grid Computing Laboratory, Indiana University
W. GENTZSCH, DEISA Distributed European Infrastructure for Supercomputing Applications
L. GRANDINETTI, University of Calabria, Rende - Cosenza, Italy
C. JESSHOPE, Informatics Institute, Faculty of Science
H. JIN, Cluster and Grid Computing Lab, Services Comp. Technology and System Lab
G. JOUBERT
C. KESSELMAN, Information Sciences Institute, Los Angeles, CA, U.S.A.
J. KOWALIK, formerly The Boeing Company
M. LIVNY
S. MATSUOKA, Tokyo Institute of Technology
D. REED, Microsoft Research, Redmond; formerly Renaissance Computing Institute
S. SEKIGUCHI, National Institute of Advanced Industrial Science and Technology
H. SIMON
P. SLOOT
D. TALIA, Dept. of Electronics, Informatics and Systems, Rende - Cosenza
M. VALERO, Universidad Politecnica de Catalunya, Barcelona, Spain
Organizing Committee
· J. DONGARRA (U.S.A.)
· L. GRANDINETTI (ITALY)
· M. AL-BAALI (OMAN)
· M. C. INCUTTI (ITALY)
· P. LEGATO (ITALY)
Sponsors
HEWLETT PACKARD
IBM
MICROSOFT
NEC
SUN
INTEL
Altair Engineering
ENEA, Italian National Agency for New Technologies, Energy and the Environment
CINECA
SPACI, Southern Partnership for Advanced Computational Infrastructures
DataDirect Networks
ClusterVision
FZJ Juelich Supercomputing Center
Nice
SiCortex
IEEE Computer Society
Speakers
David Abramson, Clayton, Vic, AUSTRALIA
Mehiddin Al-Baali, Dept. of Mathematics and Statistics, OMAN
Avner Algom, The Israeli Association of Grid Technologies, ISRAEL
Giovanni Aloisio, University of Salento, Lecce, ITALY
Marcos Athanasoulis
Frank Baetke, Global HPC Technology, Hewlett Packard
Toine Beckers, DataDirect Networks Inc., NETHERLANDS
Pete Beckman, Maths & Computer Science Division, Argonne National Laboratory, Argonne, IL, U.S.A.
P. Beraldi, Dept. of Electronics, Informatics and Systems, University of Calabria, Rende, Cosenza, ITALY
John R. Boisseau, Texas Advanced Computing Center (TACC), U.S.A.
Marian Bubak, Academic Computer Centre CYFRONET, POLAND
Franck Cappello, Laboratoire de Recherche en Informatique, INRIA Futurs, Orsay, FRANCE
Umit Catalyurek, Department of Biomedical Informatics, The Ohio State University
Charlie Catlett, Maths and Computer Science Division, Argonne National Laboratory, U.S.A.
Kihyeon Cho, e-Science Division, KISTI, Daejeon, KOREA
Antonio Congiusta, NICE, ITALY
Tim David
Martijn De Vries, ClusterVision BV, Amsterdam, NETHERLANDS
Jack Dongarra, Innovative Computing Laboratory, Computer Science Dept.
Giovanni Erbacci, System and Technology Department, CINECA - Inter-University Consortium, Casalecchio di Reno, ITALY
Sandro Fiore, University of Salento, Lecce, ITALY
Ian Foster, Math & Computer Science Div. and Dept. of Computer Science
Geoffrey Fox, Community Grid Computing Laboratory, Indiana University
Alan Gara, Blue Gene Supercomputers, IBM
Wolfgang Gentzsch, DEISA Distributed European Infrastructure for Supercomputing Applications
Stephan Gillich, Intel, HPC EMEA
Lucio Grandinetti, Dept. of Electronics, Informatics and Systems, University of Calabria, Rende, Cosenza, ITALY
Atul Gurtu, Tata Institute of Fundamental Research, Mumbai, INDIA
Rick Hetherington, Microelectronics, Sun Microsystems, Inc., U.S.A.
André Höing, Electrical Engineering and Computing Science, GERMANY
Weiwu Hu, Institute of Computing Technology, CHINA
Chris Jesshope, Informatics Institute, Faculty of Science, NETHERLANDS
William Johnston, Computational Research Division
Carl Kesselman, Information Sciences Institute, Marina del Rey, Los Angeles, CA
Thomas Lippert, John von Neumann Institute for Computing (NIC), FZ Jülich, GERMANY
Miron Livny
Ignacio Llorente, Distributed Systems Architecture Group, Universidad Complutense de Madrid, Madrid, SPAIN
Fabrizio Magugliani, SiCortex EMEA, Maynard, MA, U.S.A.
Satoshi Matsuoka, Department of Mathematical and Computing Sciences, Tokyo Institute of Technology, JAPAN
Mirco Mazzucato, INFN - Istituto Nazionale di Fisica Nucleare
Paul Messina, formerly Caltech, U.S.A.
Barton Miller, Computer Sciences Dept.
Per Öster, CSC – Finnish IT Center for Science, Espoo, FINLAND
Marcelo Pasin, École Normale Supérieure de Lyon, Laboratoire de l'informatique du parallélisme, Lyon, FRANCE
Robert Pennington, Urbana, IL, U.S.A.
Daniel Reed, Microsoft Research, Redmond; formerly Renaissance Computing Institute
Yves Robert, École Normale Supérieure de Lyon, FRANCE
Anatoly Sachenko, Department of Information Computing Systems and Control, Ternopil, UKRAINE
Rizos Sakellariou
Takayuki Sasakura, NEC HPCE
Alex Shafarenko, Department of Computer Science, Hatfield, UNITED KINGDOM
Mark Silberstein, Technion - Israel Institute of Technology, Haifa, ISRAEL
Derek Simmel, Pittsburgh Supercomputing Center, Pittsburgh, PA, U.S.A.
Peter Sloot, Faculty of Science, NETHERLANDS
Achim Streit, Jülich Supercomputing Centre (JSC) at Forschungszentrum Jülich, GERMANY
Domenico Talia, Dept. of Electronics, Informatics and Systems, University of Calabria, Rende, Cosenza, ITALY
Abderezak Touzene, Al-Khod, OMAN
Anne Trefethen, U.K.
Paolo Trunfio, Dept. of Electronics, Informatics and Systems, University of Calabria, Rende, Cosenza, ITALY
Jeffrey Vetter, Computer Science and Maths Division, and Georgia Institute of Technology, Atlanta, GA, U.S.A.
Proceedings
All contributions to the Workshop are invited original research papers not previously published.

It is planned to publish a selection of the papers presented at the Workshop in a proceedings volume or in a well-established international journal.
Participation
NO REGISTRATION FEES ARE REQUIRED FOR PARTICIPANTS OF THE WORKSHOP

This policy encourages wide Workshop participation, in order to increase awareness of the scientific aspects and practical benefits of HPC technology, to facilitate professional relations and to create technology transfer opportunities.

Please use the Registration form attached here.

The workshop will be held at the Grand Hotel San Michele, a charming hotel on the Tyrrhenian coast of Calabria, Italy. The Hotel is very close to a fishing village named Cetraro. The number of rooms available at the Hotel is limited: AN EARLY BOOKING IS RECOMMENDED.

Please visit the Accommodation page for details and for the reservation form.
Local Arrangements
The site of the workshop is the Grand Hotel San Michele in Cetraro, a beautiful seaside village on the Tyrrhenian coast. Information as well as accommodation and other local arrangements will be handled by the workshop Secretariat:

Maria Teresa Guaglianone
Dipartimento di Elettronica, Informatica e Sistemistica - Università della Calabria
87036 Rende-Cosenza, Italy
Phone: +39 0984 494731
Fax: +39 0984 494847
e-mail: hpc2008@unical.it
Workshop Address
Enquiries about the technical programme and applications for participation in the workshop should be sent to:

HPC Workshop 2008
Prof. Lucio Grandinetti
Dipartimento di Elettronica, Informatica e Sistemistica - Università della Calabria
87036 Rende, Cosenza, Italy
Phone: +39 0984 494731
Fax: +39 0984 494847
e-mail: lugran@unical.it
Website Updating
The information given on this website and the relevant links are updated daily; interested visitors are therefore invited to check the site frequently.
Workshop Agenda
Monday, June 30th

Session: State of the Art and Future Scenarios of HPC and Grid
Welcome Address
9:10 – 9:45  J. Dongarra, "Scheduling for Numerical Linear Algebra Library at Scale"
I. Foster, "Towards an Open Analytics Environment"
D. Reed, "Clouds and ManyCore: The Revolution"
A. Gara, "Present and future challenges as we architect for the Exascale"
Coffee Break
11:45 – 12:15  A. Trefethen, "Effective computing on heterogeneous platforms"
W. Johnston, "The Evolution of Research and Education Networks"
Concluding Remarks

Session: Emerging Computer Systems and Solutions
F. Baetke, "Grids, Clouds and HPC: Opportunities and Challenges"
S. Gillich, "Intel - Delivering Leadership HPC Technology Today and Tomorrow"
T. Sasakura, "NEC's HPC Strategy - Consistency and Innovation"
Coffee Break
18:45 – 19:10  T. Beckers, "High Performance Storage Solutions from DataDirect Networks"
M. De Vries, "Next-Generation Cluster Management with ClusterVisionOS"
F. Magugliani, "Green Scalable High Performance Supercomputing"
Concluding Remarks
Tuesday, July 1st

Session: Advances in HPC Technology and Systems 1
W. Hu, "The Godson-3 multi-core CPU and its application in High Performance Computers"
R. Hetherington, "Aggressively Threaded Systems: A Wise Choice for HPC"
C. Jesshope, "Managing resources dynamically in SVP - from many-core to Grid"
A. Shafarenko, "Nondeterministic Coordination using S-Net"
F. Cappello, "Fault Tolerance for PetaScale Systems: Current Knowledge, Challenges and Opportunities"
Coffee Break
11:35 – 12:00  P. Beckman, "The Path to Exascale Computing"
S. Matsuoka, "Ultra Low Power HPC --- scaling supercomputing by three orders of magnitude"
J. Vetter, "HPC Interconnection Networks – The Key to Exascale Computing"
Concluding Remarks

Session: Advances in HPC Technology and Systems 2
J. Boisseau, "Deployment Experiences, Performance Observations, and Early Science Results on Ranger"
R. Pennington, "NCSA Blue Waters: Preparing for the Sustained Petascale System"
T. Lippert, "The Impact of Petacomputing on Models and Theories"
B. Miller, "Scalable Middleware for Large Scale Systems"
Coffee Break
18:45 – 20:00  PANEL DISCUSSION 1. Chairman: P. Messina. Co-organizers: P. Beckman, P. Messina. Panelists: P. Beckman, A. Gara, D. Reed, S. Matsuoka, J. Vetter
Wednesday, July 2nd

Session: Grid Technology and Systems 1
9:00 – 9:25  M. Livny, "Old problems never die – managing the multi-programming mix"
D. Abramson, "Active Data: Blurring the distinction between data and computation"
D. Talia, "Using Peer-to-Peer Dynamic Querying in Grid Information Services"
Y. Robert, "Algorithms and scheduling techniques for clusters and grids"
R. Sakellariou, "Feedback control for efficient autonomic solutions on the Grid"
Coffee Break
11:35 – 12:00  C. Catlett, "Accidentally Using Grid Services"
12:00 – 12:25  A. Algom, "From Grid Computing to Cloud Computing - The evolution of the Grid Marketplace"
I. Llorente, "Cloud Computing for on-Demand Resource Provisioning"
Concluding Remarks

Session: Grid Technology and Systems 2
17:00 – 17:25  M. Pasin, "Network resource reservation and virtualization for grid applications"
A. Touzene, "A Performance Based Distribution Algorithm for Grid Computing Heterogeneous Tasks"
17:50 – 18:15  C. Kesselman, "Applications of Grid Technology to Health Care Systems"
Coffee Break
18:45 – 20:00  PANEL DISCUSSION 2, "From Grids to Cloud Services". Organizer: C. Catlett. Panelists: Avner Algom, Pete Beckman, Charlie Catlett, Ignacio Llorente, Satoshi Matsuoka
Thursday, July 3rd

Session: Infrastructures, Instruments, Products, Solutions for High Performance Computing and Grids
G. Fox, "Parallel Data Mining from Multicore to Cloudy Grids"
A. Höing, "Summary-based Distributed Semantic Database for Resource and Service Discovery"
A. Streit, "UNICORE 6 – A European Grid Technology"
10:15 – 10:40  W. Gentzsch, "e-Science Applications on Grids - The DEISA Success Story"
M. Silberstein, "Superlink-online - delivering the power of GPUs, clusters and opportunistic grids to geneticists"
Coffee Break
11:35 – 12:00  M. Bubak, "Building collaborative applications for system-level science"
D. Simmel, "DMOVER: Scheduled Data Transfer for HPC Grid Workflows"
A. Congiusta, "Grid Computing or the Internet of Services? Opportunities and perspectives from research to business"
Concluding Remarks

Session: National and International Grid Infrastructures and Projects
D. Abramson, "e-Research & Grid Computing in Australia: From Infrastructure to Research"
K. Cho
A. Gurtu
18:15 – 18:45  Coffee Break
18:45 – 19:10  A. Sachenko, "National Grid Initiative of Ukraine"
P. Öster
19:35 – 20:00  M. Mazzucato, "Italian Grid Infrastructure"
20:00 – 20:10  Concluding Remarks
Friday, July 4th

Session: Challenging Applications of HPC and Grids
M. Athanasoulis, "Building Shared High Performance Computing Infrastructure for the Biomedical Sciences"
P. Sloot, "ViroLab: Distributed Decision Support in a virtual laboratory for infectious diseases"
U. Catalyurek, "Processing of Large-Scale Biomedical Images on a Cluster of Multi-Core CPUs and GPUs"
T. David, "A Heterogeneous Computing Model for a Grand Challenge Problem"
L. Grandinetti – P. Beraldi, "Grid Computing for Financial Applications"
Coffee Break
11:35 – 12:00  G. Aloisio – S. Fiore, "Data Issues in a challenging HPC application to Climate Change"
G. Erbacci, "An advanced HPC infrastructure in Italy for challenging scientific applications"
K. Cho
Concluding Remarks
ABSTRACTS
Scheduling for Numerical Linear Algebra Library at Scale
Jack Dongarra, Innovative Computing Laboratory, Computer Science Dept.

In this talk we will look at some of the issues numerical library developers are facing when using manycore systems with millions of threads of execution.
Clouds and ManyCore: The Revolution
Daniel A. Reed, Microsoft Research, Redmond

As Yogi Berra famously noted, "It's hard to make predictions, especially about the future." Without doubt, though, scientific discovery, business practice and social interactions are moving rapidly from a world of homogeneous and local systems to a world of distributed software, virtual organizations and cloud computing infrastructure. In science, a tsunami of new experimental and computational data and a suite of increasingly ubiquitous sensors pose vexing problems in data analysis, transport, visualization and collaboration. In society and business, software as a service and cloud computing are empowering distributed groups.

Let's step back and think about the longer-term future. Where is the technology going and what are the research implications? What architectures are appropriate for 100-way or 1000-way multicore designs? How do we build scalable infrastructure? How do we develop and support software? What is the ecosystem of components in which they will operate? How do we optimize performance, power and reliability? Do we have ideas and vision, or are we constrained by ecosystem economics and research funding parsimony?

Biographical Sketch: Daniel A. Reed is Microsoft's Scalable and Multicore Computing Strategist, responsible for re-envisioning the data center of the future. Previously, he was the Chancellor's Eminent Professor at UNC Chapel Hill, as well as the Director of the Renaissance Computing Institute (RENCI) and the Chancellor's Senior Advisor for Strategy and Innovation for UNC Chapel Hill. Dr. Reed is a member of President Bush's Council of Advisors on Science and Technology (PCAST) and a former member of the President's Information Technology Advisory Committee (PITAC). He recently chaired a review of the federal networking and IT research portfolio, and he is chair of the board of directors of the Computing Research Association. He was previously Head of the Department of Computer Science at the University of Illinois at Urbana-Champaign.
Present and future challenges as we architect for the Exascale
Alan Gara, Blue Gene Supercomputers, IBM

In this presentation, current trends toward achieving Petascale computing are examined. These trends will be contrasted with what is needed to reach the Exascale. Possible directions and critical enabling technologies will be discussed.
Effective computing on heterogeneous platforms
Anne Trefethen, U.K.

We have entered an era where at every scale of computing - desktop, high-performance and distributed - we need to deal with heterogeneity. Systems are made up of multicore chips and accelerators in an assortment of hardware architectures and software environments. This has created complexity for scientific application developers and algorithm developers alike. Our focus is on effective algorithms and environments across these scales to support efficient scientific application development.
The Evolution of Research and Education Networks
William E. Johnston, Senior Scientist and Energy Sciences Network (ESnet) Department Head

In the past 15 years there has been a remarkable increase in the volume of data that must be analyzed in world-wide collaborations in order to accomplish the most advanced science, and a corresponding increase in network bandwidth, deployment, and capabilities to meet these needs. Further, these changes have touched all aspects of science including, in addition to data analysis, remote conduct of experiments and multi-component distributed computational simulation.

Terabytes of data from unique and very expensive instruments must be collaboratively analyzed by the many science groups involved in the experiments. The highly complex, long-running simulations needed to accurately represent macro-scale phenomena such as the climate, stellar formation, or in-vivo cellular functioning in complex organisms all involve building applications that incorporate and use components located at the home institutions of many different scientific groups.

The volume of traffic in research and education networks has increased exponentially since about 1990. Virtually all of this increase - demonstrably so in the past five years - is due to increased use of the network for moving vast quantities of data among scientific instruments and widely distributed analysis systems, and among supercomputers and remote analysis centers. Further, this data movement is no longer optional: increasingly, large-scale science depends on network-based data movement in order to be successful.

Modern science approaches require that networks provide not only high bandwidth, but also advanced services. Scheduled and on-demand bandwidth enables the connection and simultaneous operation of instruments, local compute clusters, supercomputers, and large storage systems. Low-latency, high-bandwidth, secure circuits interconnect components of simulations running on systems scattered around the country and internationally. Comprehensive, global monitoring and reporting allow distributed workflow systems to know exactly how end-to-end paths that transit many different networks are performing. At the same time, the network must provide a level of reliability commensurate with the billion-dollar instrument systems, scarce supercomputers, and hundreds of collaborating scientific groups being interconnected, as is typical of large-scale science.

In this talk I will look at how network architectures, technologies, and services have evolved over the past 15 years to meet the needs of science that now uses sophisticated distributed systems as an integral part of the process of doing science. One result of this is that the R&E community has some unique communications requirements and some of the most capable networks in the world to satisfy those requirements. I will also look at the projected requirements for science over the next 5 to 10 years and how the R&E networks must further expand and evolve to meet these future requirements.
Grids, Clouds and HPC: Opportunities and Challenges
Dr. Frank Baetke, Global HPC Technology Program Manager, Hewlett Packard

New trends in the HPC area can be derived from increasing growth rates at the lower end of the market, specifically at the workgroup and departmental level, and from concepts based on the original promises of computational grids. Those trends, combined with the ever-increasing demand for higher component densities and higher energy efficiency, generate additional challenges; examples of new products which specifically address those issues will be shown.
Intel - Delivering Leadership HPC Technology Today and Tomorrow
Stephan Gillich, Director HPC EMEA, Intel

We are excited about the opportunity that lies in front of us.
High Performance Storage Solutions from DataDirect Networks
Toine Beckers, DataDirect Networks Inc.

With the growing need for High Performance Computing clusters (from GFlops to TFlops and even PFlops systems) in many application fields, the need for more and more data storage capacity increases as well. This often leads to complex, difficult-to-manage storage solutions. The Silicon Storage Appliance products from DataDirect Networks provide an easy-to-manage, scalable and high-performance solution which is becoming widely accepted in the High Performance Computing community.
Next-Generation Cluster Management with ClusterVisionOS
Martijn De Vries, ClusterVision BV

Setting up and managing a large cluster can be a challenging task. In this presentation, various aspects of ClusterVisionOS cluster management will be discussed.
Green Scalable High Performance Supercomputing
Fabrizio Magugliani, EMEA Business Development Director, SiCortex

CPU speeds have reached a point where simply increasing the clock rate is no longer a viable path to higher performance.
The Godson-3 multi-core CPU and its application in High Performance Computers
Weiwu Hu, Xianggao, Yunji Chen, Institute of Computing Technology

Godson-3 is a multi-core processor based on the 64-bit superscalar Godson-2 CPU core. It adopts a scalable CMP architecture in which processor cores and globally addressed L2 cache modules are connected in a distributed way, and coherence of multiple L1 copies of the same L2 block is maintained with a directory-based cache coherence protocol. The Godson-2 CPU core is a four-issue, out-of-order execution CPU which runs the MIPS64 instruction set.

Godson-3 adopts a two-dimensional mesh topology. Each node in the mesh includes an 8x8 crossbar which connects four processor cores, four shared L2-cache banks and the four adjacent nodes to the east, south, west and north. A 2x2 mesh network can thus connect a 16-core processor, and a 4x4 mesh network a 64-core processor. The distributed on-chip L2 cache modules are globally addressed. Each L1 cache block has a fixed L2 cache home node in which the cache directory is maintained by the directory-based cache coherence protocol. Each node has one or more DDR2 memory controllers. IO controllers are connected through free crossbar ports of boundary nodes.

Based on the Godson-3 architecture, several product chips have been defined and will be physically implemented. The 4-core Godson-3 chip is designed and fabricated in 65nm STMicro CMOS technology. It includes one 4-core node, 4MB of L2 cache, two DDR2/3 ports, two HT1.0 ports, two PCIE ports, one PCI port and one LPC port. It will be taped out in the first half of 2008.

One important application of Godson-3 is low-cost high performance computers (HPC). Based on Godson-3, the design of one national PetaFLOPS HPC and one personal TeraFLOPS HPC are planned. This presentation will introduce the HPC plans based on the Godson-3 multi-core processor.
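As a rough illustration of the scaling arithmetic above (a 2x2 mesh of four-core nodes gives 16 cores, a 4x4 mesh gives 64), the sketch below models node and core counts and a fixed home-node mapping for cache blocks. It is our toy model, not Godson-3 software, and the address-interleaved home function is an assumption rather than the chip's documented mapping.

```python
# Illustrative model of the Godson-3 mesh arithmetic (our sketch, not
# Godson-3 software). Each node carries 4 CPU cores and 4 shared L2-cache
# banks behind an 8x8 crossbar, so an r x c mesh yields 4*r*c cores.

CORES_PER_NODE = 4
L2_BANKS_PER_NODE = 4

def mesh_totals(rows: int, cols: int) -> dict:
    """Core/L2-bank totals for a rows x cols mesh of Godson-3-style nodes."""
    nodes = rows * cols
    return {"nodes": nodes,
            "cores": nodes * CORES_PER_NODE,
            "l2_banks": nodes * L2_BANKS_PER_NODE}

def l2_home_node(block_addr: int, nodes: int) -> int:
    """Fixed home node for a cache block (address-interleaved assumption);
    the home node keeps the directory entry for that block, as in the
    directory-based coherence protocol the abstract describes."""
    return block_addr % nodes

print(mesh_totals(2, 2))   # 16-core configuration from the abstract
print(mesh_totals(4, 4))   # 64-core configuration
print(l2_home_node(0x2A40, nodes=4))
```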
Aggressively Threaded Systems: A Wise Choice for HPC
Rick Hetherington, Microelectronics, Sun Microsystems, Inc.

Throughput workloads are not very computationally intensive, but they demand memory subsystems that provide high bandwidth and high capacity. The second and third generations of these processors address such workloads, and the result is a set of products that efficiently deliver high levels of computational throughput. This talk will discuss the UltraSPARC T2 and T2+ processor designs as well as an analysis of their behavior while executing 'technical' workloads.
Managing resources dynamically in SVP - from many-core to Grid
Chris Jesshope, Professor of Computer Systems Architecture, Informatics Institute, Faculty of Science

Our computer systems are becoming pervasive and ubiquitous.
Nondeterministic Coordination using S-Net
Prof. Alex Shafarenko, Department of Computer Science

Coordination languages have been used for many years in order to separate computation from concurrency/communication (that is, coordination) concerns. Despite that, a typical coordination language intrudes into the computational part of the code even though it provides some abstract projection of those distributed computing realities. As a result, units of an application program become barely readable in isolation, without having the "big picture" in mind, and that big picture in turn is overburdened with interface details.

We believe that the reason why coordination has these problems is that true separation between computation and concurrency concerns is only possible using a nondeterministic glue. Indeed, deterministic coordination abstracts application code as a state-transition system, introducing synchronization over and above the minimum needed for correct functioning of the application code. Nondeterministic coordination, which we describe in this paper, leans towards loose, data-flow-style composition using asynchronous computational structures, with synchronisers where necessary to ensure that the correct data sets are worked on by fully encapsulated application code units.

The paper will present a coordination language, S-Net, developed and implemented by the authors. The language is very compact, using only four combinators acting on user-defined boxes to create hierarchical networks of asynchronously communicating components. The boxes are written in a conventional language and use a conventional stream interface for output, while the input comes as a standard parameter list. We expect ordinary engineers to be able to provide these components. There is only one special box which the user cannot create and which comes with the S-Net language: the synchrocell. The significant expressive power of coordination in such a small language is achieved by using a sophisticated type system with subtyping, which influences the network "wiring" provided by the combinators. The coordination program is thus a large algebraic formula using the combinators, or several such formulae, and it is written by a concurrency engineer who needs no detailed knowledge of the application domain.

Concurrency and self-adaptivity of S-Net are helped by the fact that user-defined boxes are assumed to be without persistent state, i.e. after the output stream has been flushed and the box terminates, all local state is destroyed, so that the next invocation of the box can take place at a different location in the distributed system. Synchrocells retain their state between invocations, but they do not perform computations and consequently consume no computing power.

In conclusion, we will briefly dwell on the recent success in applying S-Net to a signal processing problem in the radar systems industry at Thales Research & Technology, France.
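To make the box/combinator flavour concrete, the sketch below composes stateless "boxes" over record streams with a serial-composition combinator. It is written in Python with invented names; it is not S-Net syntax, merely an illustration of the separation the abstract argues for (boxes know nothing about wiring, the combinator knows nothing about the application).

```python
# Illustrative coordination-style composition (not real S-Net). Boxes are
# stateless functions from one input record to a list of output records;
# combinators wire boxes into networks over streams.

from itertools import chain
from typing import Callable, Iterable, List

Box = Callable[[dict], List[dict]]

def serial(*boxes: Box) -> Box:
    """Serial composition combinator: feed each record through the pipeline."""
    def network(rec: dict) -> List[dict]:
        recs = [rec]
        for box in boxes:
            recs = list(chain.from_iterable(box(r) for r in recs))
        return recs
    return network

def run(network: Box, stream: Iterable[dict]) -> List[dict]:
    """Drive a network over a whole input stream."""
    return list(chain.from_iterable(network(r) for r in stream))

# Two example boxes: pure, stateless, unaware of concurrency concerns.
square = lambda r: [{**r, "x": r["x"] ** 2}]
tag_even = lambda r: [{**r, "even": r["x"] % 2 == 0}]

print(run(serial(square, tag_even), [{"x": 2}, {"x": 3}]))
```

Because the boxes hold no persistent state, each record could in principle be processed at a different location, which is the property the abstract exploits for self-adaptivity.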
Fault Tolerance for PetaScale Systems: Current Knowledge, Challenges and Opportunities
Franck Cappello, INRIA

The emergence of PetaScale systems reinvigorates the community's interest in how to manage failures in such systems and ensure that large applications successfully complete. Existing results for several key mechanisms associated with fault tolerance in HPC platforms will be presented during this talk. Most of these key mechanisms come from distributed systems theory. Over the last decade they have received a lot of attention from the community, and there is probably little to gain by trying to optimize them again. We will describe some of the latest findings in this domain. Unfortunately, despite their high degree of optimization, existing approaches do not fit well with the challenging evolutions of large-scale systems. There is room, and even a need, for new approaches. Opportunities may come from different origins, such as adding hardware dedicated to fault tolerance or relaxing some of the constraints inherited from pure distributed systems theory. We will sketch some of these opportunities and their associated limitations.
Ultra Low Power HPC --- scaling supercomputing by three orders of magnitude
Satoshi Matsuoka, Tokyo Institute of Technology

Low-power supercomputing, as represented by various power-efficient architectures such as IBM BlueGene and by power-aware methods, is starting to receive considerable attention in the light of the global agenda to reduce energy consumption and to alleviate increasing heat density problems. Our new project, Ultra Low-Power HPC, greatly extends this horizon by taking innovative approaches to fundamentally slash the energy consumption of supercomputing by up to three orders of magnitude in 10 years. This is achieved by the comprehensive use of new energy-efficient hardware devices and power-saving algorithms that are modeled and optimized in a system-wide fashion. Early results from the project show 10-100 times better energy efficiency, mostly through the use of acceleration and new memory device technologies.
HPC Interconnection Networks – The Key to Exascale Computing
Jeffrey Vetter, Computer Science and Maths Division, and Georgia Institute of Technology

Interconnection networks play a critical role in the design of next-generation HPC architectures and in the performance of important applications. Despite their significance, current trends in HPC interconnects do not appear to fulfill the requirements of next-generation multi-petaflop and exaflop systems. Application requirements drive networks with high bandwidth, low latency, and high message rates, while practical constraints, such as signaling, packaging, and cost, limit improvements in hardware bandwidth and latencies. To address these challenges, Sandia and Oak Ridge National Laboratories have established the Institute for Advanced Architectures and Algorithms (IAA). In this talk, I will present some of the challenges and potential solutions for exascale interconnection networks which are being considered by IAA.
Deployment Experiences, Performance Observations, and Early Science Results on Ranger
John (Jay) R. Boisseau, Ph.D., Director, Texas Advanced Computing Center (TACC), The University of Texas at Austin
NCSA Blue Waters: Preparing for the Sustained Petascale System
Robert Pennington, Urbana, IL, U.S.A.

The NCSA Blue Waters system will be installed at the University of Illinois at Urbana-Champaign.
The Impact of Petacomputing on Models and Theories
Thomas Lippert, John von Neumann Institute for Computing (NIC), FZ Jülich

In 2008, supercomputers have reached the Petaflop/s performance level. Machines like the IBM Blue Gene/P, the Los Alamos Roadrunner or the Sun Ranger at TACC achieve their unprecedented power using O(100,000) cores. In my talk I will, on the one hand, discuss the question of whether we have arrived at the limits of scalability (I will present first scalability results from the Jülich Blue Gene/P system with 64k cores) and, on the other hand, argue how Petacomputers with hundreds of thousands of processors might transform science itself.
Scalable Middleware for Large Scale Systems
Barton P. Miller, Computer Sciences Department

I will discuss the problem of developing tools for large-scale parallel environments. We are especially interested in systems, both leadership-class parallel computers and clusters, that have tens of thousands or even millions of processors. The infrastructure that we have developed to address this problem is called MRNet, the Multicast/Reduction Network. MRNet's approach to scale is to structure control and data flow in a tree-based overlay network (TBON) that allows for efficient request distribution and flexible data reductions.

The second part of this talk will present an overview of the MRNet design, architecture, and computational model, and then discuss several applications of MRNet. The applications include scalable automated performance analysis in Paradyn, a vision clustering application and, most recently, an effort to develop our first petascale tool, STAT, a scalable stack trace analyzer currently running on hundreds of thousands of processors.

I will conclude with a brief description of a new fault tolerance design that leverages natural redundancies in the tree structure to provide recovery without checkpoints or message logging.
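As a toy model of the TBON idea (internal nodes of the overlay combine their children's data, so the front-end touches only a handful of values instead of one per back-end), consider the following sketch. It is our simplification in Python, not MRNet's actual C++ API.

```python
# Toy tree-based overlay network (TBON) reduction in the spirit of MRNet,
# but not its API: each overlay level combines groups of `fanout` values,
# so a million leaves collapse to the root in log_fanout(n) steps.

from typing import Callable, List

def tbon_reduce(values: List[int], fanout: int,
                combine: Callable[[List[int]], int]) -> int:
    """Reduce leaf values level by level through a k-ary overlay tree."""
    level = values
    while len(level) > 1:
        level = [combine(level[i:i + fanout])
                 for i in range(0, len(level), fanout)]
    return level[0]

# Example: maximum over 10,000 simulated back-end daemons, fanout 32.
leaves = list(range(10_000))
print(tbon_reduce(leaves, fanout=32, combine=max))  # -> 9999
```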
Old problems never die – managing the multi-programming mix
Miron Livny, Computer Sciences Department

Old problems never die; they just fade away as technologies and tradeoffs change, and resurface as the state of the art in hardware and applications evolves further. When virtual memory was introduced almost 50 years ago, computer systems had to find a way to prevent thrashing by controlling the number and properties of the applications allowed to share their physical memory. The recent proliferation of multi-core processors, the use of virtual machines and the deployment of complex I/O subsystems require the development of similar capabilities to control and manage, at several scales, the mix of applications that share the compute and storage resources of today's systems.
Active Data: Blurring the distinction between data and computation
Tim Ho and David Abramson, Clayton, Vic, Australia

The amount of data being captured, generated, replicated and archived continues to grow rapidly.
Using Peer-to-Peer Dynamic Querying in Grid Information Services
Domenico Talia and Paolo Trunfio, DEIS, University of Calabria

Dynamic querying (DQ) is a technique adopted in unstructured Peer-to-Peer (P2P) networks to minimize the number of nodes that must be visited to obtain the desired number of results. In this talk we describe the use of the DQ technique over a distributed hash table (DHT) to implement a scalable Grid information service. The DQ-DHT (dynamic querying over a distributed hash table) algorithm has been designed to perform DQ-like searches over DHT-based networks. The aim of DQ-DHT is two-fold: allowing arbitrary queries to be performed in structured P2P networks, and providing dynamic adaptation of the search according to the popularity of the resources to be located. Through the use of the DQ-DHT technique it is possible to implement a scalable Grid information service supporting both structured search and the execution of arbitrary queries for locating Grid resources on the basis of complex criteria or semantic features.
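The adaptation idea behind dynamic querying (probe a few nodes first, estimate resource popularity, then contact only as many further nodes as the desired result count requires) can be sketched as follows. This simplified model is ours; it is not the DQ-DHT algorithm itself, which operates over a DHT rather than a flat node list.

```python
# Simplified dynamic-querying model (ours, not DQ-DHT): sample a few nodes,
# estimate how common the resource is, then visit just enough extra nodes
# to reach the desired number of results instead of flooding everyone.

import random

def dynamic_query(nodes, matches, desired, probe_size=16):
    """nodes: node ids; matches(n) -> number of results held by node n."""
    random.shuffle(nodes)
    probed, results = 0, 0
    for n in nodes[:probe_size]:          # probe phase
        results += matches(n)
        probed += 1
    if results >= desired:
        return probed, results
    popularity = max(results / probed, 1e-6)   # avg results per node
    extra = int((desired - results) / popularity) + 1
    for n in nodes[probe_size:probe_size + extra]:  # expansion phase
        results += matches(n)
        probed += 1
    return probed, results

nodes = list(range(10_000))
rare = lambda n: 1 if n % 500 == 0 else 0   # ~0.2% of nodes hold a match
print(dynamic_query(nodes, rare, desired=10))
```

For popular resources the probe phase alone suffices; for rare ones the estimate drives a proportionally wider search, which is the cost/recall trade-off the abstract refers to.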
Algorithms and scheduling techniques for clusters and grids
Yves Robert, École Normale Supérieure de Lyon, France

In this talk we provide several examples of algorithms and scheduling techniques for clusters and grids.
Feedback control for efficient autonomic solutions on the Grid
Rizos Sakellariou

This talk will consider different approaches to feedback control for building efficient autonomic solutions on the Grid.
Accidentally Using Grid Services
Charlie Catlett, Maths and Computer Science Division, Argonne National Laboratory

Though the term "grid" has fallen from the front-page headlines, an extremely active market of "grid services", based on web services and other standards, is emerging. The web originally empowered Internet users to create services and products with very little infrastructure, and a sign of success a decade ago was server meltdown from high demand. Today one need not own any infrastructure at all to launch a new service or product, and the combination of virtual and web services offers not only near-unlimited scaling but also reliability. This talk will focus on a number of examples of new services, illustrating that at least one measure of success is not only "ease of use" but "accidental use" of transparent, but foundational, services.
From Grid Computing to Cloud Computing - The evolution of the Grid Marketplace
Avner Algom, The Israeli Association of Grid Technologies

Over the last few years we have seen grid computing evolve from a niche technology associated with scientific and technical computing into a business-innovating technology that is driving increased commercial adoption. Grid deployments accelerate application performance, improve productivity and collaboration, and optimize the resiliency of the IT infrastructure.

Today, the maturity of virtualization technologies, both at the VM and at the IT infrastructure levels, and the convergence of the Grid, virtualization and SOA concepts, enable the business implementation of Cloud Computing for utility and SaaS services. At last, the Grid Computing vision is becoming a reality: just as people get electricity from their electrical outlet on demand, they can get applications, computing and storage services from the network on demand. We can dynamically scale our computation and storage power in no time, and we pay only for what we use. This is going to change the marketplace as we know it.
Cloud Computing for on-Demand Resource Provisioning
Ignacio Llorente, Distributed Systems Architecture Group, Universidad Complutense de Madrid, Madrid, Spain

The aim of the presentation is to show the benefits of separating resource provisioning from job execution management in different deployment scenarios. Within an organization, the incorporation of a new virtualization layer under existing cluster and HPC middleware stacks decouples the execution of the computing services from the physical infrastructure. The dynamic execution of worker nodes on virtual resources supported by virtual machine managers, such as the OpenNEbula Virtual Infrastructure Engine, provides multiple benefits, such as cluster consolidation, cluster partitioning and heterogeneous workload execution. When the computing platform is part of a Grid infrastructure, this approach additionally provides generic execution support, allowing Grid sites to dynamically adapt to changing VO demands, thereby overcoming many of the obstacles to Grid adoption.

The previous scenario can be modified so that the computing services are executed on a remote virtual infrastructure. This is the resource provisioning paradigm implemented by some commercial and scientific infrastructure Cloud Computing solutions, such as Globus VWS or Amazon EC2, which provide remote interfaces for the control and monitoring of virtual resources. In this way a computing platform can scale out using resources provided on demand by a provider, supplementing local physical computing services to satisfy peak or unusual demands. Cloud interfaces can also provide support for the federation of virtualization infrastructures, allowing virtual machine managers to access resources from remote resource providers or Cloud systems in order to meet fluctuating demands. The OpenNEbula Virtual Infrastructure Engine is being enhanced to access on-demand resources from EC2 and Globus-based clouds. This scenario is being studied in the context of the RESERVOIR (Resources and Services Virtualization without Barriers) EU-funded initiative.
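The scale-out pattern just described (supplement local physical capacity with remote virtual machines during peaks) can be sketched as below. `CloudProvider` and its methods are hypothetical placeholders, not the OpenNEbula or EC2 API, and the sizing rule is invented for illustration.

```python
# Hedged sketch of cloud-bursting logic: grow the virtual cluster with
# remote VMs when the local queue backs up, shrink when idle. The
# CloudProvider class is a hypothetical stand-in, not a real API.

class CloudProvider:
    def __init__(self):
        self.vms = []
    def start_vm(self):            # placeholder for an EC2-style call
        self.vms.append(f"vm-{len(self.vms)}")
    def stop_vm(self):
        if self.vms:
            self.vms.pop()

def rebalance(queued_jobs: int, local_slots: int, cloud: CloudProvider,
              jobs_per_vm: int = 4) -> None:
    """Provision enough remote VMs to cover demand beyond local capacity."""
    overflow = max(0, queued_jobs - local_slots)
    target_vms = -(-overflow // jobs_per_vm)   # ceiling division
    while len(cloud.vms) < target_vms:
        cloud.start_vm()
    while len(cloud.vms) > target_vms:
        cloud.stop_vm()

cloud = CloudProvider()
rebalance(queued_jobs=37, local_slots=16, cloud=cloud)
print(len(cloud.vms))  # 6 VMs absorb the 21 overflow jobs at 4 jobs/VM
```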
Network resource reservation and virtualization for grid applications
Marcelo Pasin, INRIA, École Normale Supérieure de Lyon, Laboratoire de l'informatique du parallélisme, Lyon, France

The coordination of grid resource allocation often needs a service to reserve network resources in advance.
A Performance Based Distribution Algorithm for Grid Computing Heterogeneous Tasks
Abderezak Touzene, Hussein AlMaqbali, Ahmed AlKindi, Khaled Day, Al-Khod, Oman

Recently, in [1], we proposed a performance-based load-balancing algorithm for independent tasks with similar computing needs, in the sense that the tasks are almost identical. This paper extends that work and proposes a load distribution algorithm for independent tasks with different computing requirements, including short and long tasks. We assume a preprocessing phase that predicts the total number of instructions (TNI) needed by each task in the grid. Our load distribution algorithm takes into account both the CPU speed of the computing units and the TNI of the different tasks. We design a steady-state simulation model based on NS2 to study the performance of our load distribution algorithm.

Keywords: grid computing, load-balancing, steady-state, resource management, performance evaluation, simulation models.
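As an illustration of the kind of rule this abstract describes (weigh each task's predicted TNI against each unit's CPU speed), here is a minimal greedy sketch. It is our toy variant under stated assumptions, not the authors' algorithm, and all names are invented.

```python
# Illustrative greedy TNI-aware distribution (not the authors' algorithm):
# estimated finish time of a task on a unit = (queued instructions +
# task's TNI) / CPU speed; place each task where it finishes earliest,
# handling longer tasks first to reduce imbalance.

def distribute(tni_list, cpu_speeds):
    """tni_list: predicted instruction counts; cpu_speeds: instructions/s."""
    load = [0.0] * len(cpu_speeds)              # queued instructions per unit
    placement = {}
    for task, tni in sorted(enumerate(tni_list), key=lambda t: -t[1]):
        unit = min(range(len(cpu_speeds)),
                   key=lambda u: (load[u] + tni) / cpu_speeds[u])
        load[unit] += tni
        placement[task] = unit
    return placement, [l / s for l, s in zip(load, cpu_speeds)]

tasks = [8e9, 1e9, 5e9, 2e9, 9e9, 3e9]          # mixed short and long tasks
speeds = [2e9, 1e9, 3e9]                        # heterogeneous CPU speeds
plan, finish_times = distribute(tasks, speeds)
print(plan, [round(t, 2) for t in finish_times])
```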
Parallel Data Mining from Multicore to Cloudy Grids
Geoffrey Fox, Community Grid Computing Laboratory, Indiana University

We describe a suite of data mining tools covering clustering, Gaussian modeling, and dimensional reduction and embedding. These are applied to three classes of applications: geographical information systems, cheminformatics and bioinformatics. The data vary in dimension from low (2) to high (thousands) to undefined (sequences with dissimilarities but no vectors defined). We use deterministic annealing to provide more robust algorithms that are relatively insensitive to local minima. We use embedding algorithms both to associate vectors with sequences and to map high-dimensional data to low dimensions for visualization. We discuss the structure of the algorithms and their mapping to parallel architectures of different types, and look at the performance of the algorithms on three classes of system: multicore, cluster and Grid, using a MapReduce-style algorithm. Each approach is suitable in different application scenarios.
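Deterministic annealing, used above for robustness to local minima, replaces hard cluster assignments with temperature-controlled soft ones: p(k|x) is proportional to exp(-(x - c_k)^2 / T), nearly uniform at high temperature T and hardening as T is lowered. The following minimal Python toy (our illustration with 1-D data and two centers, not the authors' tools) shows the idea.

```python
# Minimal deterministic-annealing clustering sketch (our illustration):
# soft assignments p(k|x) ~ exp(-(x - c_k)^2 / T); start hot (nearly
# uniform, insensitive to local minima) and cool gradually.

import math

def da_cluster(xs, k=2, t_start=100.0, t_end=0.01, cool=0.9):
    centers = [min(xs) + (i + 1) * (max(xs) - min(xs)) / (k + 1)
               for i in range(k)]
    t = t_start
    while t > t_end:
        for _ in range(20):                       # EM steps at this temperature
            weights = [[math.exp(-((x - c) ** 2) / t) for c in centers]
                       for x in xs]
            probs = [[w / sum(row) for w in row] for row in weights]
            centers = [sum(p[i] * x for p, x in zip(probs, xs)) /
                       sum(p[i] for p in probs) for i in range(k)]
        t *= cool                                  # cooling schedule
    return centers

data = [1.0, 1.2, 0.8, 5.0, 5.3, 4.7]
print([round(c, 2) for c in da_cluster(data)])    # ~[1.0, 5.0]
```

At high T both centers hover near the global mean; as T drops they bifurcate toward the true cluster means, which is why the method tends to avoid poor local minima that trap plain k-means.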
Summary-based Distributed Semantic Database for Resource and Service Discovery
André Höing, Electrical Engineering and Computing Science

Today's RDF triple stores that are based on distributed hash tables (DHTs) distribute the knowledge of all participating peers across the P2P network. They use hash values of the subject, predicate, and object of each triple in order to identify three nodes in the network that shall store a copy of the triple. Query processors collect relevant triples by identifying responsible nodes using the hash values of literals and constants occurring in the query.
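A minimal sketch of the triple-placement scheme just described (hash subject, predicate and object separately, and store a copy of the triple at each of the three responsible nodes) might look like this; the 64-node hash ring is our assumption, not the talk's system.

```python
# Sketch of DHT-based RDF triple placement (illustrative, not the talk's
# system): each triple is replicated on the three nodes responsible for
# the hashes of its subject, predicate, and object, so a query on any one
# constant can locate the triple with a single lookup.

import hashlib

NUM_NODES = 64

def node_for(term: str) -> int:
    """Map a term to its responsible node on a 64-node hash ring."""
    digest = hashlib.sha1(term.encode()).digest()
    return int.from_bytes(digest[:4], "big") % NUM_NODES

def responsible_nodes(subj: str, pred: str, obj: str):
    return {part: node_for(part) for part in (subj, pred, obj)}

triple = ("<coreGRID>", "rdf:type", "<Project>")
print(responsible_nodes(*triple))
# A query like (?s, rdf:type, <Project>) is routed to node_for("rdf:type")
# or node_for("<Project>"), where a copy of the triple resides.
```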
UNICORE 6 – A European Grid Technology
Achim Streit, Jülich Supercomputing Centre (JSC) at Forschungszentrum Jülich

The development of UNICORE started back in 1997 with two projects funded by the German ministry of education and research (BMBF). UNICORE is a vertically integrated Grid middleware which provides seamless, secure, and intuitive access to distributed resources and data, and provides components on all levels of a Grid architecture, from an easy-to-use graphical client down to the interfaces to the Grid resources. Furthermore, UNICORE has strong support for workflows, while security is established through X.509 certificates. Since 2002 UNICORE has been continuously improved to production-ready quality and enhanced with more functionality in several European projects. Today UNICORE is used in several national and international Grid infrastructures like D-Grid and DEISA, and also provides access to the national supercomputer of the NIC in Jülich.

The talk will give details about the new version, UNICORE 6, which is web-services enabled, OGSA-based and standards-compliant. To begin with, the underlying design principles and concepts of UNICORE are presented. A detailed architecture diagram shows the different components of UNICORE 6 and their interdependencies, with a special focus on workflows. This is followed by a view on the adoption of common open standards in UNICORE 6, which allows interoperability with other Grid technologies and the realisation of an open and extensible architecture. The talk closes with some interesting use cases where the UNICORE Grid technology is used.

The European UNICORE Grid Middleware is available as Open Source from http://www.unicore.eu.
e-Science Applications on Grids - The DEISA Success Story
Wolfgang Gentzsch, DEISA Distributed European Infrastructure for Supercomputing Applications

We will present selected compute- and data-intensive applications that run on the DEISA infrastructure.

Bio: Wolfgang Gentzsch is Dissemination Advisor for the DEISA Distributed European Infrastructure for Supercomputing Applications. He is adjunct professor of computer science at Duke University.
Superlink-online - delivering the power of GPUs, clusters and opportunistic grids to geneticists
M. Silberstein, Technion - Israel Institute of Technology, Haifa, Israel

Genetic linkage analysis is a statistical tool used by geneticists for mapping disease-susceptibility genes in the study of genetic diseases. The analysis is based on exact inference in very large probabilistic (Bayesian) networks, which is often computationally hard (ranging from seconds to years on a single CPU). We present a distributed system for faster analysis of genetic data, called Superlink-online. The system achieves high performance through the parallel execution of linkage analysis tasks over thousands of computational resources residing in multiple opportunistic computing environments, also known as Grids. It utilizes the resources of many available grids, unifying thousands of CPUs over campus grids at the Technion and beyond. Notably, the system is available online, which allows geneticists to perform computationally intensive analyses with no need for dedicated computing infrastructure.

While the grids potentially provide enormous amounts of computing power, we also explore an alternative approach of using Graphics Processing Units (GPUs) to accelerate the genetic linkage computations. We achieve up to two orders of magnitude speedup on average, and up to three orders of magnitude speedup on some particularly complex problem instances, versus the optimized application performance on a single CPU. The use of GPUs is particularly appealing in the context of Community Grids, considering the number of high-performance GPUs available worldwide.
Building Collaborative Applications for System-Level Science
Marian Bubak, ACC CYFRONET AGH, ul. Nawojki 11, 30-950 Krakow, Poland

A novel approach to scientific investigations, besides the analysis of individual phenomena, integrates different, interdisciplinary sources of knowledge about a complex system to obtain an understanding of the system as a whole. This innovative way of doing research has recently been called system-level science [1].

Problem-solving environments and virtual laboratories have been the subject of research and development for many years [2]. Most of them are built on top of workflow systems [3]. Their main drawbacks include the limited expressiveness of the programming model and the lack of mechanisms for the integration of computing resources from grids, clusters and dedicated computers.

The ViroLab project [4] is developing a virtual laboratory [5] for research on infectious diseases to facilitate medical knowledge discovery and provide decision support for HIV drug resistance [6]; this virtual laboratory may be useful in other areas of system-level science as well. To overcome the limitations of existing programming methods, we have defined an experiment plan notation based on a high-level scripting language, Ruby. For easy interfacing of different technologies, we have introduced a grid object abstraction level hierarchy [7]. Each grid object class is an abstract entity which defines the operations that can be invoked from the script; each class may have multiple implementations representing the same functionality, and an implementation may have multiple instances, running on different resources [8].

The Experiment Planning Environment is an Eclipse-based tool supporting rapid experiment plan development, while the Experiment Management Interface enables the loading and execution of experiments. The Experiment Repository stores experiment plans prepared by developers and published for future use, and the laboratory database holds the obtained results. To enable high-level programming, the virtual laboratory engine, called GridSpace, includes the Grid Operation Invoker, which instantiates grid object representatives and handles remote operation invocations. The GridSpace Application Optimizer is responsible for optimal load balancing on computational servers. The Data Access Service acquires data from remote databases located in research institutions and hospitals. To meet the specific requirements for exchanging biomedical information within such a virtual environment, the solution introduced in DAS is based on existing Grid technologies: Globus Toolkit, OGSA-DAI, and Shibboleth. The provenance approach [9] in the ViroLab virtual laboratory brings together ontology-based semantic modeling, monitoring of applications and the runtime infrastructure, and database technologies, in order to collect rich information concerning the execution of experiments, represent it in a meaningful way, and store it in a scalable repository [10].

The virtual laboratory has already been used to plan and execute a few virological experiments, with various types of analysis of HIV virus genotypes, such as the calculation of drug resistance based on virus genotype, querying historical and provenance information about experiments, a drug resistance system based on the Retrogram set of rules, data mining and classification with Weka [5], and the molecular dynamics NAMD application, which has been installed on the CYFRONET EGEE site.

The virtual laboratory provides an environment to collaboratively plan, develop and use collaborative applications; it is dedicated to multi-expertise, task-oriented groups running complex computer simulations. Its basic features are: mechanisms for user-friendly experiment creation and execution, the possibility of reusing existing libraries and tools, gathering and exposing provenance information, integration of geographically distributed data resources, access to WS, WSRF and MOCCA components and jobs, and secure access to data and applications.

Acknowledgments: The Virtual Laboratory is being developed at ACC CYFRONET AGH, Gridwise Technologies, Universiteit van Amsterdam, and HLRS Stuttgart in the framework of the EU IST ViroLab and CoreGRID projects as well as the related Polish SPUB-M and Foundation for Polish Science grants.

References
[1] I. Foster, C. Kesselman: Scaling System-Level Science: Scientific Exploration and IT Implications, IEEE Computer, vol. 39, no. 11, 31-39, 2006
[2] K. Rycerz, M. Bubak, P.M.A. Sloot, V. Getov: Problem Solving Environment for Distributed Interactive Simulations, in: Sergei Gorlatch, Marian Bubak, and Thierry Priol (Eds.), Achievements in European Research on Grid Systems, CoreGRID Integration Workshop 2006 (Selected Papers), ISBN-13: 978-0-387-72811-7, pp. 55-66, Springer, 2008
[3] Y. Gil, E. Deelman, M. Ellisman, T. Fahringer, G. Fox, D. Gannon, C. Goble, M. Livny, L. Moreau, and J. Myers: Examining the Challenges of Scientific Workflows, IEEE Computer, vol. 40, no. 12, pp. 24-32, 2007
[4] ViroLab - EU IST STREP Project 027446; www.virolab.org
[5] ViroLab Virtual Laboratory, http://virolab.cyfronet.pl
[6] P.M.A. Sloot, I. Altintas, M. Bubak, Ch.A. Boucher: From Molecule to Man: Decision Support in Individualized E-Health, IEEE Computer, vol. 39, no. 11, 40-46, 2006
[7] T. Gubala, M. Bubak: GridSpace - Semantic Programming Environment for the Grid, PPAM 2005, LNCS 3911, 172-179, 2006
[8] M. Malawski, M. Bubak, M. Placek, D. Kurzyniec, V. Sunderam: Experiments with Distributed Component Computing Across Grid Boundaries, Proc. HPC-GECO/CompFrame Workshop, HPDC 2006, Paris, 2006
[9] D. de Roure, N.R. Jennings, N. Shadbolt: The Semantic Grid: A Future e-Science Infrastructure, in: Grid Computing - Making the Global Infrastructure a Reality, Wiley, 2003, pp. 437-470
[10] B. Balis, M. Bubak, and J. Wach: User-Oriented Querying over Repositories of Data and Provenance, in: G. Fox, K. Chiu, and R. Buyya (Eds.), Third IEEE International Conference on e-Science and Grid Computing, e-Science 2007, IEEE Computer Society, 2007
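The grid object hierarchy described above (class, implementations of the same functionality, instances on different resources) can be mimicked in a few lines. The sketch below uses Python rather than ViroLab's Ruby-based notation, and every name in it, including the host name, is invented for illustration.

```python
# Sketch of the grid-object abstraction (class -> implementations ->
# instances) described for the ViroLab virtual laboratory; a Python
# stand-in for the Ruby-based experiment notation, with invented names.

class GridObjectClass:
    """Abstract entity defining the operations a script may invoke."""
    def __init__(self, name, operations):
        self.name, self.operations = name, operations
        self.implementations = []          # same functionality, many techs

class Implementation:
    def __init__(self, technology):
        self.technology = technology
    def instantiate(self, resource):
        return Instance(self, resource)    # may run on different resources

class Instance:
    def __init__(self, impl, resource):
        self.impl, self.resource = impl, resource
    def invoke(self, operation, *args):
        # A real Grid Operation Invoker would dispatch remotely (WS, WSRF,
        # MOCCA, or a job); here we just trace the call.
        return f"{operation}{args} via {self.impl.technology} on {self.resource}"

aligner = GridObjectClass("SequenceAligner", ["align"])
ws_impl = Implementation("WebService")
aligner.implementations.append(ws_impl)
instance = ws_impl.instantiate("cluster.cyfronet.example")
print(instance.invoke("align", "HIV-1 genotype"))
```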
DMOVER: Scheduled Data Transfer for HPC Grid Workflows
Derek Simmel, Pittsburgh Supercomputing Center

TeraGrid users have expressed a need for better tools to schedule and manage large data transfers within their HPC grid workflows.
Grid Computing or the Internet of Services? Opportunities and perspectives from research to business
Antonio Congiusta, NICE, Cortanze, Asti, Italy

Experience has shown that solutions to better enable organizations to take advantage of the benefits of Grid computing are based on a clear identification of the requirements and the application of the best available standardized and reliable technologies. Relevant examples of this principle, with related best practices, can be drawn from some of the success stories that have recently involved EnginFrame in the Oil & Gas industry and in the Energy and Automotive sectors, covering HPC support, collaboration facilities, and infrastructure provision and management, as well as some fruitful cooperation with strategic partners.

In particular, beyond well-established HPC activities within a primary European consortium for providing a production-quality infrastructure, a new trend has emerged towards the integration of collaboration facilities with HPC environments. Also noteworthy are the activities devoted to enabling workflow management and distributed visualization, some of which are part of European-wide research projects.

From all these experiences we envision, as the future of the Grid, a strong evolution towards interoperable key services, within a scenario in which comprehensive, all-inclusive software is ever less important. In such a scenario, a key role is played by integration technologies capable of homogenizing and enforcing service interactions and access.
e-Research & Grid Computing in Australia: From Infrastructure to Research
David Abramson, Clayton, Vic, Australia

Over the past few years the Australian government has performed a major review of its research infrastructure needs, from hard technological areas to the social sciences. Along with this review, it has investigated the electronic platforms required to support these various disciplines. What has evolved is a grid computing strategy called "Platforms for Collaboration" that addresses computation, networking and data management. In addition, various computer science groups are developing grid technologies that underpin this platform. In this talk I will give an overview of the Australian e-Research agenda and highlight a few major research activities in grid computing.
Kihyeon Cho, e-Science Division, Korea Institute of Science and Technology Information, Daejeon, 305-806, Korea

This talk covers Grid and e-Science in Korea.
Atul Gurtu, Tata Institute of Fundamental Research

Grid technology has changed the way advanced research is being conducted today.
National Grid Initiative of Ukraine
Anatoly Sachenko, Department of Information Computing Systems and Control

The uniting of the existing Grid segments and supercomputer centers in the scientific and educational areas into a joint Ukrainian National Grid Initiative (UNGI), and the issues of integrating UNGI into the European Grid infrastructure, are considered in this paper. The peculiarities of the Grid segment at the National Academy of Science, as well as the UGrid project of the Ministry of Education and Science, are also described. Emphasis is placed on the joint UNGI project for EGI and on other integration possibilities within the INTAS, NATO and Framework 7 programmes. Finally, an advanced approach for strengthening security in Grid systems is proposed.
Per Öster, CSC - Finnish IT Center for Science
The European Grid Initiative (EGI) has as its goal to ensure the long-term sustainability of grid infrastructures in Europe. |
Building Shared High Performance Computing Infrastructure for the Biomedical Sciences
Marcos Athanasoulis, Dr.PH, MPH
In recent years high performance computing has moved from the sidelines to the mainstream of biomedical research. Increasingly, researchers are employing computational methods to facilitate their wet-lab research, and some emerging laboratories and approaches are based on a 100% computational framework. While there are many lessons to be learned from the computational infrastructure put into place for the physical and mechanical sciences, the character, nature and demands of biomedical computing differ from the needs of the other sciences. Biomedical computational problems, for example, tend to be less computationally intensive but more “bursty” in their needs. This creates both an opportunity (it is easier to meet capacity needs) and a challenge (job scheduling rules are more complicated to accommodate the bursts). |
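To make the capacity-planning point concrete, here is a minimal Python sketch (our illustration, not material from the talk; the rates, sizes and function names are invented) that simulates a bursty arrival pattern and contrasts mean with peak demand:

import random

random.seed(0)

def bursty_arrivals(hours=168, burst_prob=0.05, burst_size=40):
    # Per-hour job counts over one week: mostly near-idle hours,
    # with occasional bursts of many short jobs.
    return [burst_size if random.random() < burst_prob else random.randint(0, 2)
            for _ in range(hours)]

arrivals = bursty_arrivals()
mean = sum(arrivals) / len(arrivals)
print(f"mean demand {mean:.1f} jobs/h, peak demand {max(arrivals)} jobs/h")
# A cluster sized for the mean would queue bursts for hours, while one
# sized for the peak would sit idle most of the week -- precisely the
# scheduling trade-off the abstract describes.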
ViroLab: Distributed Decision Support in a Virtual Laboratory for Infectious Diseases
P. Sloot
In future years, genetic information is expected to become increasingly significant in many areas of medicine. This expectation comes from the recent and anticipated achievements in genomics, which provide an unparalleled opportunity to advance the understanding of the role of genetic factors in human health and disease, to allow more precise definition of the non-genetic factors involved, and to apply this … |
Processing of Large-Scale Biomedical Images on a Cluster of Multi-Core CPUs and GPUs
Umit Catalyurek, Department of Biomedical Informatics, The Ohio State University
As microprocessor manufacturers strain to continue to increase performance, multi-core chips are quickly becoming the norm. Demand in the computer gaming industry has also brought us GPUs as an alternative: fast, general-purpose streaming co-processors. Together, commodity GPUs and multi-core CPUs bring an unprecedented combination of high performance at low cost, and provide an ideal environment for biomedical image analysis applications. In this talk we will present our ongoing efforts in developing optimized biomedical image analysis kernels for heterogeneous multi-core CPUs and GPUs. We will also present how a cooperative cluster of multi-core CPUs and GPUs can be used efficiently for large-scale biomedical image analysis. |
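The cooperative use of CPUs and GPUs mentioned in the abstract is often realized as demand-driven tile scheduling. The following minimal Python sketch (our illustration, not the speakers' code; worker speeds are simulated constants) shows the idea: both devices pull image tiles from a shared queue, so the faster one automatically processes more of them:

import queue
import threading
import time

tiles = queue.Queue()
for tile_id in range(32):              # pretend the image is split into 32 tiles
    tiles.put(tile_id)

done = {"cpu": 0, "gpu": 0}

def worker(name, seconds_per_tile):
    while True:
        try:
            tile = tiles.get_nowait()
        except queue.Empty:
            return
        time.sleep(seconds_per_tile)   # stand-in for the analysis kernel
        done[name] += 1

threads = [
    threading.Thread(target=worker, args=("cpu", 0.02)),
    threading.Thread(target=worker, args=("gpu", 0.005)),  # GPU assumed 4x faster
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(done)  # the simulated GPU ends up with roughly 4x the tiles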
Grid Computing for Financial Applications
M. Al-Baali§, P. Beraldi*, L. Grandinetti*, G. Aloisio^, I. Epicoco^, A. Violi**, C. Figà Talamancaç
§ Dept. of Mathematics and Statistics; * Department of Electronics, Informatics and Systems; ** CESIC; ^ …; ç Innova spa
In recent years financial operators have shown an increasing interest in quantitative tools able to efficiently measure, control and manage risk. This interest is motivated by the necessity of operating in a very competitive and volatile environment, whose already high level of complexity is increased by the globalization of economic activities and the continuous introduction of innovative financial products. The complexity of the problems to be dealt with and the necessity of operating in real time have highlighted the serious computational constraints imposed by conventional numerical platforms, prompting the need to take advantage of high performance computing systems. In this talk we present a prototype system designed to support financial operators in investment decisions concerning the strategic asset allocation problem. The system has been designed and tested within the European project BEINGRID. At the core of the system is the formulation of sophisticated optimization models able to capture the specific features of the application problem with a greater level of realism than traditional approaches. Moreover, the system is based on the integration of advanced scenario generation procedures with efficient methods for solving the resulting huge problems. The system has been deployed on the SPACI grid infrastructure. In particular, a user-friendly grid web environment has been realized by using the GRB technology for resource management and the GRelC services for distributed data. |
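The combination of scenario generation and optimization described in the abstract can be pictured with a minimal Python sketch (our illustration, not the BEINGRID system; the asset statistics, risk weight and brute-force grid are all invented for the example):

import numpy as np

rng = np.random.default_rng(42)

mu = np.array([0.06, 0.04, 0.02])        # assumed mean yearly returns
cov = np.diag([0.04, 0.01, 0.0025])      # assumed (diagonal) covariance
scenarios = rng.multivariate_normal(mu, cov, size=10_000)

def score(weights, lam=2.0):
    # Mean-variance utility of a portfolio over the sampled scenarios.
    r = scenarios @ np.asarray(weights)
    return r.mean() - lam * r.var()

# Brute-force a coarse grid of long-only weights summing to 1; in a real
# system this search is replaced by a stochastic programming solver, and
# the scenario evaluation is the part a grid would distribute.
best = max(
    ((w1, w2, 1.0 - w1 - w2)
     for w1 in np.linspace(0, 1, 21)
     for w2 in np.linspace(0, 1 - w1, 21)),
    key=score,
)
print("chosen allocation:", np.round(best, 2))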
Data Issues in a Challenging HPC Application to Climate Change
Giovanni Aloisio, University of Salento, Lecce, Italy
Earth Science is rapidly becoming a data-intensive and data-oriented activity. Petabytes of data, large collections and huge datasets are continuously produced, managed and stored, as well as accessed, transferred and analyzed, by many scientists and researchers at multiple sites. From the data grid perspective, a key element for searching, discovering, managing and accessing huge amounts of data held in distributed storage is the related data and metadata framework. A new supercomputing centre, the Euro-Mediterranean Centre for Climate Change (CMCC), was recently created by the Italian Government to support research on climate change. The SPACI Consortium, one of the main CMCC Associate Centres, provides know-how and expertise on High Performance and Grid Computing. The GRelC middleware (provided by the SPACI Consortium) has recently been adopted as part of the CMCC Data Grid framework in order to provide a secure, transparent and scalable grid-enabled metadata management solution. We present the CMCC initiative and the supercomputing facility, as well as the data grid architectural and infrastructural issues concerning the adopted grid data/metadata handling systems. |
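What a grid metadata framework buys the researcher can be sketched in a few lines of Python (a hypothetical toy catalogue, not the GRelC API; the record fields and values are invented):

records = [
    {"name": "tas_monthly_1950_2000", "variable": "tas",
     "model": "CMCC-CM", "site": "lecce", "size_tb": 1.2},
    {"name": "pr_daily_1979_2008", "variable": "pr",
     "model": "CMCC-CM", "site": "bologna", "size_tb": 6.5},
]

def discover(**criteria):
    # Return metadata records matching all requested attribute values.
    return [r for r in records
            if all(r.get(k) == v for k, v in criteria.items())]

# Discovery is an attribute query against the catalogue, not a manual
# walk of storage paths at every site.
for rec in discover(variable="pr"):
    print(rec["name"], "->", rec["site"])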
“A Heterogeneous Computing Model for a Grand Challenge Problem”
Tim David, Centre for … |
The e-Science for High Energy Physics
Kihyeon Cho, Ph.D., KISTI (Korea Institute of Science and Technology Information)
The aim of e-Science for High Energy Physics is to enable the study of High Energy Physics (HEP) at any time and anywhere, even when we are not on site at the accelerator laboratories. Its components are 1) data production, 2) data processing, and 3) data analysis, each available anytime and anywhere. Data production means remote control and taking shifts remotely. Data processing means running jobs anytime, anywhere using Grid farms. Data analysis means working together to publish papers using a collaborative environment. We apply this concept to the LHC experiment at CERN and the Tevatron experiment at Fermilab. In this talk we will present the current status and an embodiment of this idea. |
A HPC Infrastructure at the Service of Scientific Research in Italy
Giovanni Erbacci, CINECA - System and Technology Department, CINECA Inter-University Consortium, Casalecchio di Reno, Italy
State-of-the-art HPC infrastructures are fundamental to support scientific research and to advance science at the European level. For many years CINECA has assured the Italian scientific community a competitive advantage by putting advanced HPC systems into production in a timely fashion, systems that have proven widely applicable and successful. The CINECA HPC infrastructure de facto represents the national facility for supercomputing, and the CINECA HPC systems are part of the Italian research infrastructure system, integrated by means of the Italian academic and research network (GARR). In this work we present the CINECA HPC infrastructure, its evolution and its service model. Moreover, we outline the CINECA role in the context of the main HPC infrastructure projects operating at the European level: DEISA, PRACE and HPC-Europa. DEISA is a consortium of the most advanced HPC centres in Europe; its infrastructure is mainly intended to support challenging scientific applications by integrating supercomputers at different centres and making them easily accessible. PRACE is a feasibility project intended to build the next generation of European HPC infrastructure and services; the infrastructure will consist of a limited number (3 to 5) of PetaFlop/s-class HPC systems integrated into a network of HPC systems on a pyramidal basis, with three layers (European, national and regional) in the European HPC ecosystem. HPC-Europa supports the human network of knowledge, experience and expertise exchange within the scientific research communities using advanced HPC systems; it actively promotes this mission by supporting the mobility of European researchers among the main research institutions and by providing access to the computational resources offered by the main European HPC infrastructures. |
PANELS