Overclocking, Part 1

Since the very beginning of the PC era, the most demanding users have sought ways to increase system performance. “Overclocking” may in fact pre-date PCs, going back to the days of simpler devices, but legends of 8 MHz 8088 processors overclocked to 12 MHz via a simple change of clock crystal started a phenomenon. Overclockers later came to be divided into two camps: “the many”, who desire high-end performance on a low-end budget, and “the few”, seeking ultimate performance at any price.

The Concept

Overclocking refers to increasing the speed of any component beyond that specified by its manufacturer. The word “clock” comes from the use of a “clock crystal”, an oscillator that sets a rhythm from which all higher frequencies are derived for the component. The simplest devices operated at the oscillator frequency, so that an 8 MHz processor required an 8 MHz clock crystal. Overclocking early processors was as simple – and as limited – as swapping the discrete clock crystal from, say, an 8 MHz part to a 12 MHz part. As computers became more complex, a single crystal could no longer supply the wide range of speeds that various data buses required. While motherboards could contain several oscillators for specific devices, an additional integrated circuit was needed to provide a wider range of speeds for a wider variety of interfaces. Known as the clock generator, this component produces clock signals at multiples and fractions of the clock crystal’s oscillation frequency. Clock generators have become ever more complex, to the point that modern boards and a few add-in components now support adjusting frequencies in extremely small steps.

A crystal oscillator and a clock generator

The advent of adjustable clock generators has allowed overclocking to be done without changing parts such as the clock crystal. Further advancements in BIOS and firmware now allow device speeds to be manipulated without so much as a change in jumper settings.
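For a concrete sense of how a clock generator derives everything from one reference, the short Python sketch below multiplies an assumed 200 MHz base clock by a CPU multiplier and divides it down for two hypothetical buses. The 200 MHz figure, the 15× multiplier, and the divider ratios are illustrative assumptions, not values taken from any particular board.

```python
# Illustrative sketch: deriving device clocks from a single reference frequency,
# the way a clock generator multiplies and divides its crystal-derived input.
BASE_CLOCK_MHZ = 200.0  # assumed reference clock fed to the clock generator

def cpu_clock(multiplier: float) -> float:
    """CPU core speed is the base clock times the CPU multiplier."""
    return BASE_CLOCK_MHZ * multiplier

def bus_clock(divider: float) -> float:
    """Slower buses run at a fraction of the base clock."""
    return BASE_CLOCK_MHZ / divider

print(f"CPU : {cpu_clock(15):7.1f} MHz")  # assumed 15x multiplier -> 3000.0 MHz
print(f"AGP : {bus_clock(3):7.1f} MHz")   # hypothetical /3 divider -> 66.7 MHz
print(f"PCI : {bus_clock(6):7.1f} MHz")   # hypothetical /6 divider -> 33.3 MHz
```

Raising the reference clock raises every frequency derived from it, which is exactly why clock generators that can step that reference in tiny increments make overclocking possible without touching the crystal itself.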

Benefits And Risks
Overclocking allows a low-end part to achieve the performance of a higher-priced version, or a better-quality model to be pushed beyond what even the best models offer. For example, a 3.0 GHz Pentium 4 overclocked to 3.4 GHz performs similarly to the more expensive 3.4 GHz model. Anyone who made this change before the 3.4 GHz version became available was able to sample the Pentium 4’s future! The primary risks of overclocking are instability and a possible loss of data, which can be overcome through extensive testing to verify the highest stable speed. This was best summarized by Dr. Thomas Pabst, known simply as Tom, founder of Tom’s Hardware Guide:

“Nobody likes system crashes or hangs, but in a professional business environment, avoiding a system crash or hang can be most crucial. It certainly is a fact that you are increasing the probability of system faults by overclocking your CPU. But it is only a probability! If you have just overclocked your system and the first thing you do is use it to start writing your dissertation, don’t be surprised if a system crash occurs which causes you to lose all your data. After finishing the overclocking process you have to put your system through a tough and thorough testing procedure. If the system passes all the testing, only then can you talk of successful overclocking and feel confident that everything is working well.”
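To put numbers on the Pentium 4 example above: with the multiplier locked, the core speed can only rise as fast as the bus feeding it. The sketch below works out the bus clock needed to reach 3.4 GHz, assuming a locked 15× multiplier on a 200 MHz bus; both figures are illustrative assumptions rather than values quoted in this article.

```python
# Illustrative arithmetic: bus clock required for a target core speed
# when the CPU multiplier is locked.
MULTIPLIER = 15           # assumed locked multiplier (3000 MHz / 200 MHz)
STOCK_BUS_MHZ = 200.0     # assumed stock bus clock
TARGET_MHZ = 3400.0       # desired core speed

required_bus = TARGET_MHZ / MULTIPLIER                    # about 226.7 MHz
overclock_pct = (required_bus / STOCK_BUS_MHZ - 1) * 100  # about 13.3 %

print(f"Required bus clock : {required_bus:.1f} MHz")
print(f"Overclock          : {overclock_pct:.1f} % above stock")
```

Note that pushing the bus clock this way also speeds up anything else derived from it, unless the clock generator provides separate dividers or locks for those interfaces.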

Freely available on the web, Prime95’s “torture test” has become the gold standard for CPU stability testing. The most significant of the secondary risks is hardware damage. Higher overclock settings translate directly into increased risk of component damage, but risk assessment is not as straightforward as many non-overclockers assume. Damage contributors, from least to greatest, are as follows:

Speed – Integrated circuits have a finite lifespan: each operation deteriorates the circuit an infinitesimal amount, so that doubling the number of cycles per second could cut the circuit’s life in half. This alone is not usually enough to “break” a component before it becomes outdated, but speed also contributes to heat.

Heat – Circuits deteriorate more quickly as temperatures rise. Heat is also an enemy of stability, so that low temperatures must be maintained to reach a component’s highest stable speed.

Voltage – Increased voltage allows for greater signal strength, which can have a tremendous effect on how far a component can be pushed. But increased voltage also accelerates circuit deterioration, and is the leading cause of early failure. It also produces more heat, requiring further cooling improvements.

Circuit deterioration is caused by a phenomenon called electromigration. Tom again had something to say about this:

“Electromigration takes place on the actual silicon chip of your CPU, in areas that operate at a very high temperature, and can cause permanent damage to the chip. Before you start to panic, you should first realize a few things. CPUs are designed to run at temperatures between -25 and 80 degrees Celsius. To give you an idea, 80 degrees Celsius is a temperature that nobody is able to touch for longer than 1/10 second. I have never come across a CPU at this temperature. There are plenty of ways to keep the CPU case at less than 50 degrees Celsius, which increases the probability of keeping the chip inside at less than 80 degrees. Also, electromigration does not immediately damage your chip. It is a slow process, which more or less shortens the life span of a CPU running at a very high temperature. A normal CPU is meant …”
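To get a rough feel for how strongly temperature enters into electromigration, Black’s equation is the usual first-order model for median time-to-failure. The sketch below compares relative lifetime at two die temperatures, assuming equal current density and an activation energy of 0.7 eV; both assumptions are illustrative choices, not values from Tom’s article.

```python
import math

# Black's equation: MTTF = A * J**(-n) * exp(Ea / (k * T)).
# Holding current density J constant, the ratio of lifetimes at two die
# temperatures reduces to a single exponential term.
K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K
ACTIVATION_EV = 0.7        # assumed activation energy; real values depend on the interconnect metal

def relative_lifetime(cool_c: float, hot_c: float) -> float:
    """How many times longer the cooler die lasts, all else being equal."""
    t_cool = cool_c + 273.15
    t_hot = hot_c + 273.15
    return math.exp(ACTIVATION_EV / K_BOLTZMANN_EV * (1.0 / t_cool - 1.0 / t_hot))

print(f"50 C vs 80 C die: roughly {relative_lifetime(50, 80):.0f}x longer life")
```

With these assumed numbers the cooler die lasts on the order of eight times longer over a 30-degree span, which is where the old rule of thumb that lifetime roughly halves for every ten degrees of added die temperature comes from.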
