Supercomputers are used in many fields, from research and aerospace to accurate weather forecasting.
The parts of a supercomputer are comparable to those of a desktop computer: both are equipped with similar processors, but their speed and memory sizes differ significantly. For instance, a typical desktop computer of the period has a hard disk capacity of between 2 and 20 gigabytes and a single processor with tens of megabytes of random access memory (RAM), just enough to perform tasks such as word processing, web browsing, and video gaming.
Meanwhile, a supercomputer of the same period has thousands of processors, hundreds of gigabytes of RAM, and hard drives that allow for hundreds, and sometimes thousands, of gigabytes of storage space. Whereas desktop computers perform millions of floating-point operations per second (megaflops), supercomputers perform billions of operations per second (gigaflops) and trillions of operations per second (teraflops).
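To make these units concrete, the sketch below times a loop of multiply-add operations and reports the resulting rate. Pure Python carries heavy interpreter overhead, so the number it prints falls far below any machine's hardware peak; the point is only to illustrate what "floating-point operations per second" measures, not to benchmark the computer.

```python
import time

def estimate_flops(n=1_000_000):
    """Time n multiply-add steps and estimate floating-point ops per second.

    Interpreter overhead dominates, so this lands far below the hardware
    peak; it only makes the flops unit concrete.
    """
    x = 1.000001
    acc = 0.0
    start = time.perf_counter()
    for _ in range(n):
        acc = acc + x * x        # 2 floating-point operations per step
    elapsed = time.perf_counter() - start
    return (2 * n) / elapsed     # operations per second

flops = estimate_flops()
print(f"~{flops / 1e6:.1f} megaflops; a teraflop machine sustains 1e12 ops/s")
```

Dividing the printed figure by one million gives megaflops; a gigaflop machine is a thousand times that rate, and a teraflop machine a million times.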
Evolution of Supercomputers

Many current desktop computers are actually faster than the first supercomputer, the Cray-1, which was developed by Cray Research in the mid-1970s. The Cray-1 computed at roughly 160 megaflops by using a form of supercomputing called vector processing, which consists of rapid execution of instructions in a pipelined fashion.
Contemporary vector processing supercomputers are much faster than the Cray-1, but an ultimately faster method of supercomputing, parallel processing, was introduced in the mid-1980s. Applications that use parallel processing solve computational problems by using multiple processors simultaneously.
Using the following scenario as a comparative example, it is easy to see why parallel processing is becoming the preferred supercomputing method. If you were preparing ice cream sundaes for yourself and nine friends, you would need ten bowls, ten scoops of ice cream, ten drizzles of chocolate syrup, and ten cherries.
Working alone, you would take ten bowls from the cupboard and line them up on the counter. Then, you would place one scoop of ice cream in each bowl, drizzle syrup on each scoop, and place a cherry on top of each dessert.
This method of preparing sundaes would be comparable to vector processing. To get the job done more quickly, you could have some friends help you in a parallel processing method.
If two people prepared the sundaes, the process would be twice as fast; with five it would be five times as fast; and so on. Conversely, if five people cannot fit in your small kitchen, it would be easier to use vector processing and prepare all ten sundaes yourself.
This same analogy holds true with supercomputing. Some researchers prefer vector computing because their calculations cannot be readily distributed among the many processors on parallel supercomputers.
But, if a researcher needs a supercomputer that calculates trillions of operations per second, parallel processors are preferred—even though programming for the parallel supercomputer is usually more complex.
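The sundae analogy can be sketched in code. The following is a minimal illustration using Python's standard library; the `make_sundae` function and its 0.05-second "preparation time" are invented for the example. Each task simply sleeps, standing in for an independent piece of work, so a pool of five worker threads (the "friends") finishes the ten sundaes several times faster than one worker doing them in sequence.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def make_sundae(i):
    # Stand-in for the independent work of preparing one sundae.
    time.sleep(0.05)
    return f"sundae {i}"

# Vector-style: one worker prepares all ten sundaes in sequence.
start = time.perf_counter()
serial = [make_sundae(i) for i in range(10)]
serial_time = time.perf_counter() - start

# Parallel-style: five "friends" prepare sundaes simultaneously.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=5) as pool:
    parallel = list(pool.map(make_sundae, range(10)))
parallel_time = time.perf_counter() - start

print(f"serial: {serial_time:.2f}s, parallel: {parallel_time:.2f}s")
```

Both approaches produce the same ten sundaes in the same order; only the elapsed time differs, which mirrors the point of the analogy: the parallel method wins when the work can be divided among independent workers.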
Applications of Supercomputers

Supercomputers are so powerful that they can provide researchers with insight into phenomena that are too small, too big, too fast, or too slow to observe in laboratories. For example, astrophysicists use supercomputers as "time machines" to explore the past and the future of our universe.
One supercomputer simulation depicted the collision of two galaxies: although this collision is not expected to happen for another three billion years, the simulation allowed scientists to run the experiment and see the results now.
This particular simulation was performed on Blue Horizon, a parallel supercomputer at the San Diego Supercomputer Center. It would have been impossible to carry out in a laboratory. Another example of supercomputers at work is molecular dynamics: the way molecules interact with each other.
Supercomputer simulations allow scientists to dock two molecules together to study their interaction. Molecular characterization at this level is extremely difficult, if not impossible, to perform in a laboratory environment.

Supercomputers are the most powerful computers made, whereas the largest type of computer in common use is the mainframe. Mainframe computers are used in large organisations for operations such as insurance processing. A supercomputer is a computer that leads the world in terms of processing capacity, particularly speed of calculation, at the time of its introduction. (The term "Super Computing" was first used by the New York World newspaper in 1929 to refer to the large custom-built tabulators IBM had made for Columbia University.) Conventional computers, following the architecture of John von Neumann, have a single central processing unit linked to a memory. The Federal Plan for High End Computing outlines problems that could perhaps be understood if many times the current processing capacity were available.
Supercomputers were introduced in the 1960s and were designed primarily by Seymour Cray at Control Data Corporation (CDC), which led the market into the 1970s until Cray left to form his own company, Cray Research.
One trend in supercomputer evolution is toward multiprocessor systems with increasing numbers of powerful processors.
The user who is interested in attaining the maximum achievable computation rates on these machines must understand the philosophy underlying the parallel processing mechanisms provided on them.
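A central piece of that philosophy is data decomposition: split a large problem into chunks, let each processor compute a partial result, then combine the pieces. The sketch below illustrates the pattern in Python; a thread pool merely stands in for the machine's processors here (threads do not speed up CPU-bound Python work, but on a parallel supercomputer each chunk would run on its own processor), and the chunked sum is an invented example, not taken from the article.

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    # Each "processor" reduces its own slice of the data.
    return sum(chunk)

def parallel_sum(data, n_workers=4):
    # Decompose: split the data into one chunk per worker.
    size = (len(data) + n_workers - 1) // n_workers
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    # Compute partial results concurrently, then combine them.
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return sum(pool.map(partial_sum, chunks))

data = list(range(1_000))
assert parallel_sum(data) == sum(data)   # same answer as a serial sum
```

The decomposition step is where most of the programming complexity mentioned above lives: the chunks must be independent enough that the partial results can be combined correctly, which is exactly why some calculations resist being distributed across many processors.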