Wednesday, September 22, 2021

What is meant by a Supercomputer?

In this article, we are going to look at supercomputers...



A supercomputer is a computer with a high level of performance compared to a general-purpose computer. The performance of a supercomputer is measured in floating-point operations per second (FLOPS) rather than million instructions per second (MIPS). Supercomputers contain tens of thousands of processors and can perform billions or even trillions of calculations per second; some supercomputers can perform up to a hundred quadrillion FLOPS. Because information moves quickly between processors in a supercomputer (compared to distributed computing systems), supercomputers are well suited to real-time applications.
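To make the FLOPS unit concrete, here is a minimal sketch (assuming NumPy is available) that estimates the floating-point throughput of an ordinary machine by timing a dense matrix multiply. The matrix size and trial count are arbitrary choices, and the number it reports reflects one library routine on one machine, many orders of magnitude below a supercomputer's rating.

import time
import numpy as np

def estimate_flops(n=2048, trials=5):
    """Rough floating-point throughput estimate from a dense matrix multiply.

    An n x n matrix multiply performs about 2 * n**3 floating-point
    operations (one multiply and one add per inner step).
    """
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)
    best = float("inf")
    for _ in range(trials):
        start = time.perf_counter()
        a @ b                                   # the timed floating-point work
        best = min(best, time.perf_counter() - start)
    return 2 * n ** 3 / best

if __name__ == "__main__":
    print(f"Approximate throughput: {estimate_flops():.3e} FLOPS")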


Supercomputers are used for data-intensive and computation-heavy scientific and engineering purposes such as quantum mechanics, weather forecasting, oil and gas exploration, molecular modeling, physical simulations, aerodynamics, nuclear fusion research and cryptanalysis. Early operating systems were custom-made for each supercomputer to increase its speed. In recent years, supercomputer architecture has moved away from proprietary, in-house operating systems to Linux. Although most supercomputers use a Linux-based operating system, each manufacturer optimizes its own Linux derivative for peak hardware performance. In 2017, half of the world’s top 50 supercomputers used SUSE Linux Enterprise Server.

THE PREHISTORY OF U.S. SUPERCOMPUTING

The development of computer technology in the United States was inextricably linked to U.S. government funding for research on cryptanalysis, nuclear weapons, and other defense applications in its first several decades. Arguably, the first working, modern, electronic, digital computer was the Colossus machine, put into operation at Bletchley Park, in the United Kingdom, in 1943. Although it was designed and employed to break a specific German cipher system, this machine was in fact a true electronic computer and could be used, in principle, on a range of problems. The existence of this machine was classified until the 1970s.


1. An expanded version of much of the analysis in this chapter can be found in “An Economic History of the Supercomputer Industry,” by Kenneth Flamm, 2004.


2. In “Military Roots,” a chapter of Creating the Computer: Government, Industry, and High Technology (Brookings Institution Press, 1988), Kenneth Flamm lays out the entire panorama of government-funded projects in the late 1940s and 1950s that essentially created the early U.S. computer industry. Another good but less comprehensive source ends in the very early 1950s, when high-volume production was 20 machines: N. Metropolis, J. Howlett, and Gian-Carlo Rota, A History of Computing in the Twentieth Century (Academic Press, 1980).


U.S. personnel working with Bletchley Park during World War II played a major role in creating the early U.S. computer industry in the decade following the war. In particular, U.S. engineers at the Naval Computing Machinery Laboratory (a National Cash Register plant in Dayton, Ohio, deputized into the war effort) were building copies or improved versions of Bletchley Park electronic cryptanalysis machines, as well as computers of their own design. American engineers involved in this effort included William Norris and Howard Engstrom—Norris later founded Engineering Research Associates (ERA), then Control Data; Engstrom was later deputy director of the National Security Agency (NSA)—and Ralph Palmer, who was the principal technical architect of IBM’s move into electronic computers in the 1950s. Of the 55 people in the founding technical group at ERA, where Seymour Cray had his first design job in computers, 40 came from Navy communications intelligence in Washington, 5 from the Navy lab in Dayton, and 3 from the Naval Ordnance Laboratory.3


The ENIAC, built in 1945 at the University of Pennsylvania and often credited as the first functioning electronic computer, was a larger, plug-programmable computer designed to compute artillery ballistics tables.4 Ironically, it came into existence, indirectly, as a result of the code-breaking efforts of the U.S. intelligence community. The U.S. Army’s Ballistic Research Laboratory (BRL) had originally funded a ballistics computer project at National Cash Register and had turned down a competing proposal from J. Presper Eckert and John Mauchly at the University of Pennsylvania. BRL reconsidered this decision after the National Cash Register Dayton group was drafted into producing cryptanalysis machines for the Navy and finally decided to fund the ENIAC project.


3. See Flamm, 1988, pp. 36-41, 43-45.


4. As is the case for many other technologies, there has been a heated debate about who should be credited as the inventor of the first digital computer. In addition to the Colossus and the ENIAC, the following are worth mentioning: Konrad Zuse, working in Germany, built a relay-based automatic digital computer in 1939-1941. A similar system, the Automatic Sequence Controlled Calculator (ASCC), also called the Mark I, was conceived by Howard Aiken and designed and built by IBM in 1939-1944. John Vincent Atanasoff and Clifford Berry started building an electronic digital computer at Iowa State University in 1937-1942. Although the project was not completed, Atanasoff and Berry won a patent case against Eckert and Mauchly in 1973, invalidating the latter’s patent on the ENIAC as the first automatic electronic computer.


Princeton mathematician and War Department consultant John von Neumann heard about the existence of the ENIAC project at the BRL and involved himself in the project.5 It is reported that some of the early atomic bomb calculations (in which von Neumann was involved) made use of the ENIAC even before it was formally delivered to the Army. The link between both cryptanalytical and nuclear design applications and high-performance computing goes back to the very first computers.


ENIAC’s designers, Eckert and Mauchly, built the first working stored program electronic computer in the United States in 1949 (the BINAC) and delivered it to Northrop Aircraft, a defense contractor. A number of advanced machines had been built in Britain by that time—Britain was actually leading in the construction of working electronic computers in the late 1940s. A massive U.S. government investment in computer technology in the 1950s was critical to the rapid rise of U.S. companies as the undisputed leaders in the field.


The second and third computers in the United States were the SEAC (built for the National Bureau of Standards, now renamed NIST) and the ERA 1101 (built for predecessors to the National Security Agency). Both went into operation in 1950, runners-up in the United States to the Eckert-Mauchly BINAC.


The first Eckert and Mauchly-designed computer targeting a commercial market, the UNIVAC, was delivered to the Census Bureau in 1951. The experimental MIT Whirlwind computer, built with Navy and later Air Force funding, also went into operation in 1951.


Von Neumann, who had brought British computing theoretician Alan Turing to Princeton in the 1930s and was much influenced by this contact, began work on the conceptual design of a general-purpose scientific computer for use in calculations of military interest in 1946, but a working machine was not completed until 1951. This machine was intended to be a tool for scientists and engineers doing numerical calculations of the sort needed in nuclear weapons design. Versions of the first machine installed at the Institute for Advanced Study in Princeton, the IAS machine, were built and installed at Los Alamos (the MANIAC I) in 1952 and Oak Ridge (the ORACLE) in 1953; these were the first computers installed at the nuclear weapons laboratories.6 The IAS design, sponsored by the nuclear weapons laboratories, was highly influential. But the laboratories were so pressed for computing resources before these machines were delivered that they did…


5. Nancy Stern, 1981, From ENIAC to UNIVAC: An Appraisal of the Eckert-Mauchly Computers, Digital Press.


6. The Argonne National Laboratory built the AVIDAC (Argonne’s Version of the Institute’s Digital Automatic Computer), which was operational prior to the IAS machine.


INNOVATION IN SUPERCOMPUTING

While one part of the U.S. government reacted by building walls around the U.S. market, DARPA and its Strategic Computing Initiative (SCI), in concert with other government agencies and programs, took the opposite tack, attempting to stimulate a burst of innovation that would qualitatively alter the industry.17 Computing technology was regarded as the cornerstone of qualitative superiority for U.S. weapons systems. It was argued that the United States could not regain a significant qualitative lead in computing technology merely by introducing faster or cheaper computer components, since Japanese producers had clearly achieved technological parity, if not some element of superiority, in manufacturing them. Furthermore, many technologists believed that continued advances in computer capability based on merely increasing the clock rates of traditional computer processor designs were doomed to slow down as inherent physical limits to the size of semiconductor electronic components were approached. In addition, Amdahl’s law was expected to limit the performance gains obtainable by increasing the number of processors used in parallel.18


The approach to stimulating innovation was to fund an intense effort to do what had not previously been done—to create a viable new architecture for massively parallel computers, some of them built around commodity processors, and to demonstrate that important applications could benefit from massive parallelism. Even if the individual processors were less efficient in delivering usable computing power, as long as the parallel architecture was sufficiently scalable, interconnecting a sufficient number…


17. Investments in high-performance computing were only one area supported by the SCI, which funded over $1 billion in R&D from 1983 to 1993. There are no available data that break out this investment by technology area. Other areas were electronic components, artificial intelligence and expert systems, and large-scale prototype development of advanced military systems intended to explore new technology concepts. The committee is not aware of any objective assessment of the success and utility of the program as a whole. An excellent history of the program may be found in Alex Roland and Phillip Shiman, 2002, Strategic Computing: DARPA and the Quest for Machine Intelligence, 1983-1993, Cambridge, Mass.: MIT Press.


18. Amdahl’s law states that if a fraction 1/s of an execution is sequential, then parallelism can reduce execution time by at most a factor of s. Conventional wisdom in the early 1980s was that, for many applications of interest, Amdahl’s law would restrict gains in performance from parallelism to factors of tens or low hundreds.
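As a minimal sketch of what this bound means in numbers (the 1 percent sequential fraction and the processor counts below are arbitrary illustrative values, not figures from the text):

def amdahl_speedup(sequential_fraction, processors):
    """Upper bound on speedup when a fixed fraction of the work is sequential."""
    return 1.0 / (sequential_fraction + (1.0 - sequential_fraction) / processors)

# With 1/s = 1% of the work sequential, speedup can never exceed s = 100,
# no matter how many processors are added.
for p in (10, 100, 1000, 1000000):
    print(p, round(amdahl_speedup(0.01, p), 1))   # 9.2, 50.3, 91.0, ~100.0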


The largest, most powerful supercomputers are actually multiple computers that perform parallel processing. Today, many academic institutions, scientific research organizations, engineering companies and large enterprises that require massive processing power are using cloud computing instead of supercomputers. High-performance computing (HPC) via the cloud is more affordable, more scalable and faster to upgrade than an on-premises supercomputer. Cloud-based HPC architectures can expand, adapt and shrink as business needs demand. SUSE Linux Enterprise High Performance Computing allows organizations to leverage their existing hardware for HPC computations and data-intensive operations.
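As a toy illustration of the parallel-processing idea mentioned above, the following sketch splits one large computation across several worker processes on a single machine. The worker count and workload are arbitrary, and real supercomputer and cloud HPC codes rely on frameworks such as MPI rather than this.

from multiprocessing import Pool

def partial_sum(bounds):
    """Sum of squares over a half-open range -- a stand-in for a compute-heavy task."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

if __name__ == "__main__":
    n, workers = 10_000_000, 4
    step = n // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    with Pool(workers) as pool:                    # distribute the chunks across processes
        total = sum(pool.map(partial_sum, chunks))
    print(total)   # same answer a single process would produce, computed in parallel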
