As the results of the 44th round of the Big Iron game of thrones are about to be announced, let's put this in context by taking a synoptic look at the historical data from the perspective of competition among nations.
Since June of 1993, the Top500 List has been presenting information on the world's 500 most powerful computer systems. The statistics about these systems have proven to be of substantial interest to computer manufacturers, users and funding authorities. While interest in the list is focused on the computers, less attention is paid to the countries hosting them. Let's take a look at the Top500 List countries. Who are they? How might one characterize them?
The Qatar National Convention Center was recently the venue for the first meeting of the User Forum of Qatar's newly commissioned computing infrastructure institute. The National Computing Infrastructure for Research (NCIR) is the latest addition to the Qatar Foundation's collection of R&D institutes. Is NCIR about to place Qatar on the TOP500 list and join the supercomputing club?
Was it a victim of its own success?
HPC is a tool. We use it to solve problems and make discoveries. At the highest end of HPC, it's all about capability, not capacity. How does one demonstrate or sell a new capability? Since its inception, the approved solution to this problem has been the killer app.
In the beginning, we didn't call them killer apps. They were the "Grand Challenges." The first collection of grand challenges was described in February of 1991 when the US government's Office of Science and Technology Policy released the first Blue Book – a supplement to the President's FY 1992 Budget Request for the newly created High Performance Computing and Communications Program. The Blue Book was entitled "Grand Challenges: High Performance Computing and Communications" and contained a listing of the computational science and engineering challenges seen as drivers for federal expenditures on HPC at that time.
As I pointed out in an HPCwire article a couple of years ago (Meet the Exascale Apps, 12 April 2012), those apps haven't changed much in the past twenty years, and, with few exceptions, they are the current set in global use.
Are the killer apps working for us? Some observers think not. The argument has been made that, as HPC has successfully diffused through many application disciplines over the past decades, the killer apps have morphed into what might better be called the "usual suspects." So, given that exascale computing projects are currently being funded on several continents, how were they justified? And what does this portend for the future of HPC?
Policies, like sausages, cease to inspire respect in proportion as we know how they are made. This is a loose paraphrase of a common analogy. Is it apt when applied to science and technology policy? In particular, how – and where – is HPC policy made? Is it currently being made at all?
Given the number of HPC issues currently on the table and competing for scarce federal resources, perhaps we should set aside any lack of respect for the process and get engaged in the sausage making. We need clear federal policy guidance to allocate resources and mobilize the technology agencies to advance HPC and its applications. Some suggestions for HPC community action to improve this situation are included.
All countries have some computing capability, but relatively few are serious players in HPC. So far in the Middle East, the only country to place machines on the Top500 list is Saudi Arabia. Qatar, which is right next door, is a very wealthy and focused country that could easily become a significant HPC power. Why would Qatar want to play in HPC, and how significant a player might it become?
Many HPC aficionados probably think of Enterprise Computing as something static and boring: a solved problem; something to be maintained and occasionally updated; or maybe moved to a Cloud – but not a fruitful area for novel approaches or exotic hardware. Big Data may change those views. Let's take a look.
When we think about progress in HPC, most of us use hardware speed, as reported in listings like the Top500, as our yardstick. But is that the whole story – or even its most important component? HPC hardware and the attendant systems software and tools suites are certainly necessary for progress. But to harness HPC for practical problem solving, we also need the right math, as expressed in our solvers and applications algorithms. Hardware is tangible and visible while math is seen through the mind's eye – and is easily overlooked. Lately, there hasn't been much public discussion of HPC's math. Where has it gone? Has it matured to the point of invisibility – or is it still a vibrant and dynamic part of HPC? Let's take a look.
Despite phenomenal progress in HPC over a sustained period of decades, a few issues limiting its effectiveness and acceptance remain. Prominent among these are the repeatability, transportability, and openness of HPC applications. As we prepare to move HPC to the exascale level, we should take the time and effort to consolidate HPC's gains and deal with these residual issues from the early days of computational science. Only then will we be ready to reap the benefits of more powerful HPC tools.
ISC’13 Infrastructure Panel – Part 1 of 3 / Think Tank “Data Science – The Sexiest Field of the 21st Century”
Gary Johnson, founder of Computational Science Solutions, moderates a discussion of Data Science infrastructure at the 2013 International Supercomputing Conference. Panelists:
Sverre Jarp, CERN
Simon Lin, ASGC
John Shalf, LBNL