We looked at an early form of digital computer memory (see the earlier article in this series on core memory) and noted that today's standard RAM (Random Access Memory) is a memory chip. This is consistent with the widely quoted Moore's Law (Gordon Moore was one of the founders of Intel), which observes that the density of components on integrated circuits, which can be paraphrased as performance per unit of cost, doubles roughly every 18 months. Early core memory had cycle times measured in microseconds; today we talk in nanoseconds.
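The doubling rule above is easy to put into numbers. Here is a minimal sketch, using an assumed starting density of 1,000 components purely for illustration:

```python
# Moore's Law as a rough doubling rule: density doubles every 18 months.
# The starting figure of 1,000 components is an assumption for illustration,
# not a real chip specification.

def projected_density(initial_density: float, months: int, doubling_period: int = 18) -> float:
    """Return the component density after `months`, doubling every `doubling_period` months."""
    return initial_density * 2 ** (months / doubling_period)

print(projected_density(1000, 18))  # after one period: 2000.0
print(projected_density(1000, 36))  # after two periods: 4000.0
```

After ten doubling periods (15 years) the same rule gives roughly a thousandfold increase, which is why microsecond core gave way to nanosecond chips.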
You may be familiar with the term cache in relation to the PC. It is one of the performance features mentioned when comparing processors or hard drives: a processor has L1 and L2 caches, and a disk has its own cache. Some programs use a cache too, also known as a buffer, for example when writing data to a CD burner. Early CD-writing programs suffered from "buffer underrun", and the end result was a good supply of coasters!
Mainframe systems have used caches for many years. The concept became popular in the 1970s as a way to speed up memory access times. This was the period when core memory was being phased out and replaced by integrated circuits, or chips. Although the chips were much more efficient in terms of physical space, they had other problems with reliability and heat generation. Chips of one design were faster, but hotter and more expensive, than chips of another design that were cheaper but slower. Speed has always been one of the most important factors in computer sales, and design engineers are always looking for ways to improve performance.
The concept of cache memory is based on the fact that a computer is inherently a sequential processing machine. Of course, one of the great advantages of a computer program is that it can "branch" or "jump" out of sequence (the subject of another article in this series). Nevertheless, there are enough times when one instruction simply follows another to make a buffer or cache a useful addition to the computer.
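How often execution really is sequential can be illustrated with a toy sketch. The program-counter trace below is invented (a short loop with one backward branch); the function just counts how often the next address is the previous one plus one, which is exactly the case a simple sequential prefetch buffer predicts correctly:

```python
# Toy illustration: measure how often instruction fetches continue in sequence.
# The trace is a hypothetical program-counter history, not from real hardware.

def sequential_hit_rate(addresses):
    """Fraction of fetches (after the first) where the address is previous + 1."""
    hits = sum(1 for prev, cur in zip(addresses, addresses[1:]) if cur == prev + 1)
    return hits / (len(addresses) - 1)

# Mostly sequential execution with one backward jump (a loop branch).
trace = [100, 101, 102, 103, 100, 101, 102, 103, 104]
print(sequential_hit_rate(trace))  # 0.875: seven of eight fetches were predictable
```

Even with a branch in the loop, most fetches are predictable, which is why prefetching the next sequential instruction pays off.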
The basic idea of a cache is to predict what data the processor will require from memory next. Consider a program made up of a series of instructions, each stored in a memory location, say from address 100 upwards. The instruction at location 100 is read from memory and executed by the processor, then the next instruction is read from location 101 and executed, then 102, 103, and so on. If the memory is RAM, it might take 1 microsecond to read an instruction. If the processor takes, say, 100 nanoseconds to execute it, it must then wait 900 nanoseconds for the next instruction (1 microsecond = 1000 nanoseconds). The processor's effective repeat rate is therefore one instruction per microsecond. (The times and speeds given are typical but do not refer to any specific hardware; they merely illustrate the principles involved.)
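The arithmetic above can be laid out explicitly. This sketch uses only the illustrative timings from the text (1 microsecond per memory read, 100 nanoseconds per execute), not figures for any real processor:

```python
# Illustrative timings from the text, not real hardware.
MEMORY_READ_NS = 1000   # 1 microsecond = 1000 nanoseconds to read one instruction
EXECUTE_NS = 100        # 100 nanoseconds for the processor to execute it

# Without a cache, every instruction costs a full memory read, so the
# processor idles for the rest of each cycle waiting on memory.
cycle_ns = MEMORY_READ_NS
idle_ns = MEMORY_READ_NS - EXECUTE_NS

print(cycle_ns)  # 1000 ns per instruction: the effective repeat rate
print(idle_ns)   # 900 ns of that cycle spent waiting for memory
```

Nine tenths of each cycle is wasted waiting, which is the gap a cache exists to close: if the next instruction is already in fast cache memory, the wait largely disappears.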