"RAM" redirects here. For other uses of the word, see Ram. Dynamic RAM (DRAM) modules
Two 512 MB DRAM Modules Connects to:
* PCB or motherboard via one of o Socket o Integration
* SDRAM * DDR * RDRAM * DDR 2 * DDR 3
* Micron Technology * Samsung * Kingston Technology * Corsair Memory * Mushkin * Apacer
Random access memory (usually known by its acronym, RAM) is a type of computer data storage. It takes the form of integrated circuits that allow the stored data to be accessed in any order — that is, at random and without the physical movement of the storage medium or a physical reading head. RAM is a volatile memory as the information or instructions stored in it will be lost if the power is switched off.
The word "random" refers to the fact that any piece of data can be returned in a constant time, regardless of its physical location and whether or not it is related to the previous piece of data. This contrasts with storage mechanisms such as tapes, magnetic discs and optical discs, which rely on the physical movement of the recording medium or a reading head. In these devices, the movement takes longer than the data transfer, and the retrieval time varies depending on the physical location of the next item. Contents [hide]
* 1 RAM * 2 Overview * 3 Recent developments * 4 Memory wall * 5 DRAM packaging * 6 See also * 7 Notes and references * 8 External links
Originally, RAM referred to a type of solid-state memory, and the majority of this article deals with that, but physical devices which can emulate true RAM (or, at least, are used in a similar way) can have "RAM" in their names: for example, DVD-RAM. RAM is usually writable as well as readable, so "RAM" is often used interchangeably with "read-write memory". The alternative to this is "ROM", or Read Only Memory. Most types of RAM lose their data when the computer powers down. "Flash memory" is a ROM/RAM hybrid that can be written to, but which does not require power to maintain its contents. RAM is not strictly the opposite of ROM, however. The word random indicates a contrast with serial access or sequential access memory.
"Random access" is also the name of an indexing method: hence, disk storage is often called "random access" because the reading head can move relatively quickly from one piece of data to another, and does not have to read all the data in between. However the final "M" is crucial: "RAM" (provided there is no additional term as in "DVD-RAM") always refers to a solid-state device.
Many CPU-based designs actually have a memory hierarchy consisting of registers, on-die SRAM caches, DRAM, paging systems, and virtual memory or swap space on a hard drive. This entire pool of memory may be referred to as "RAM" by many developers, even though the various subsystems can have very different access times, violating the original concept behind the "random access" term in RAM. Even within a hierarchy level such as DRAM, the specific row/column/bank/rank/channel/interleave organization of the components makes the access time variable, although not to the extent that access to rotating storage media or a tape is variable. [Image: A module of 128 MB NEC SDRAM]
The key benefit of RAM over types of storage which require physical movement is that retrieval times are short and essentially uniform. Computers use RAM as 'main memory' or primary storage: the working area used for loading, displaying and manipulating applications and data. In most personal computers, the RAM is not an integral part of the motherboard or CPU. It comes in the easily upgraded form of modules called memory sticks or RAM sticks, about the size of a few sticks of chewing gum, which can quickly be removed and replaced should they become damaged or too small for current purposes. A smaller amount of random-access memory is also integrated with the CPU, but this is usually referred to as "cache" memory, rather than RAM.
Modern RAM generally stores a bit of data as either a charge in a capacitor, as in dynamic RAM, or the state of a flip-flop, as in static RAM. Some types of RAM can detect or correct random faults called memory errors in the stored data, using RAM parity and error correction codes.
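The simplest such scheme, RAM parity, stores one extra bit per byte so that any single flipped bit can be detected (though not corrected). A minimal sketch in Python (function names are illustrative, not an actual hardware interface):

```python
def parity_bit(byte: int) -> int:
    """Even-parity bit for an 8-bit value: 1 if the number of set data bits is odd."""
    return bin(byte & 0xFF).count("1") % 2

def check(byte: int, stored_parity: int) -> bool:
    """True if the stored parity still matches; any single flipped bit fails this check."""
    return parity_bit(byte) == stored_parity

p = parity_bit(0b1011_0010)        # four bits set, so the parity bit is 0
assert check(0b1011_0010, p)       # unchanged byte passes
assert not check(0b1011_0011, p)   # one flipped bit is detected
```

Parity can only flag an error; error correction codes such as SECDED (described later for ECC DIMMs) add enough redundancy to also repair single-bit faults.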
Many types of RAM are volatile, which means that unlike some other forms of computer storage such as disk storage and tape storage, they lose all data when the computer is powered down. For this reason, nearly all PCs use disks as "secondary storage". Small PDAs and music players (up to 8 GB as of January 2007) may dispense with disks, relying instead on flash memory to maintain data between sessions of use.
Software can "partition" a portion of a computer's RAM, allowing it to act as a much faster hard drive that is called a RAM disk. Unless the memory used is non-volatile, a RAM disk loses the stored data when the computer is shut down. However, volatile memory can retain its data when the computer is shut down if it has a separate power source, usually a battery.
If a computer runs low on RAM during intensive application cycles, it can resort to swapping: the computer temporarily uses hard drive space as additional memory. Constantly relying on this backup memory is called thrashing, which is generally undesirable because it lowers overall system performance. The dependency on swapping can be reduced by installing more RAM.
 Recent developments
Currently, several types of non-volatile RAM are under development, which will preserve data while powered down. The technologies used include carbon nanotubes and the magnetic tunnel effect.
In summer 2003, a 128 KB magnetic RAM chip manufactured with 0.18 µm technology was introduced. The core technology of MRAM is based on the magnetic tunnel effect. In June 2004, Infineon Technologies unveiled a 16 MB prototype again based on 0.18 µm technology.
Nantero built a functioning carbon nanotube memory prototype 10 GB array in 2004.
In 2006, solid-state memory came of age, especially when implemented as "solid-state disks", with capacities exceeding 150 gigabytes and speeds far exceeding traditional disks. This development has started to blur the distinction between traditional random access memory and disks, dramatically reducing the difference in performance.
 Memory wall
The "memory wall" is the growing disparity of speed between CPU and memory outside the CPU chip. An important reason of this disparity is the limited communication bandwidth beyond chip boundaries. From 1986 to 2000, CPU speed improved at an annual rate of 55% while memory speed only improved at 10%. Given these trends, it was expected that memory latency would become an overwhelming bottleneck in computer performance. 
Currently, CPU speed improvements have slowed significantly partly due to major physical barriers and partly because current CPU designs have already hit the memory wall in some sense. Intel summarized these causes in their Platform 2015 documentation (PDF):
“First of all, as chip geometries shrink and clock frequencies rise, the transistor leakage current increases, leading to excess power consumption and heat (more on power consumption below). Secondly, the advantages of higher clock speeds are in part negated by memory latency, since memory access times have not been able to keep pace with increasing clock frequencies. Third, for certain applications, traditional serial architectures are becoming less efficient as processors get faster (due to the so-called Von Neumann bottleneck), further undercutting any gains that frequency increases might otherwise buy. In addition, partly due to limitations in the means of producing inductance within solid state devices, resistance-capacitance (RC) delays in signal transmission are growing as feature sizes shrink, imposing an additional bottleneck that frequency increases don't address.”
The RC delays in signal transmission were also noted in "Clock Rate versus IPC: The End of the Road for Conventional Microarchitectures", which projects a maximum of 12.5% average annual CPU performance improvement between 2000 and 2014. The data on Intel processors clearly shows a slowdown in performance improvements in recent processors. However, Intel's newer Core 2 Duo processors (codenamed Conroe) show a significant improvement over previous Pentium 4 processors; thanks to a more efficient architecture, performance increased while clock rate actually decreased.
 DRAM packaging
Main article: DRAM packaging
For economic reasons, the large (main) memories found in personal computers, workstations, and non-handheld game consoles (such as the PlayStation and Xbox) normally consist of dynamic RAM (DRAM). Other parts of the computer, such as cache memories and data buffers in hard disks, normally use static RAM (SRAM).
[Image: Two types of DIMMs: a 168-pin SDRAM module (top) and a 184-pin DDR SDRAM module (bottom)]
A DIMM, or dual in-line memory module, comprises a series of random access memory integrated circuits. These modules are mounted on a printed circuit board and designed for use in personal computers. DIMMs began to replace SIMMs (single in-line memory modules) as the predominant type of memory module as Intel's Pentium processors began to control the market.
The main difference between SIMMs and DIMMs is that SIMMs have a 32-bit data path, while DIMMs have a 64-bit data path. Since Intel's Pentium (like several other processors) has a 64-bit bus width, it requires SIMMs to be installed in matched pairs; the processor then accesses the two SIMMs simultaneously. DIMMs were introduced to eliminate this inefficiency. Another difference is that DIMMs have separate electrical contacts on each side of the module, while the contacts on the two sides of a SIMM are redundant.
The most common types of DIMMs are:
* 72-pin DIMM, used for FPM DRAM and EDO DRAM
* 72-pin SO-DIMM, used for FPM DRAM and EDO DRAM
* 100-pin DIMM, used for printer SDRAM
* 144-pin SO-DIMM, used for SDR SDRAM
* 168-pin DIMM, used for SDR SDRAM (less frequently for FPM/EDO DRAM in workstations/servers)
* 184-pin DIMM, used for DDR SDRAM
* 200-pin SO-DIMM, used for DDR SDRAM and DDR2 SDRAM
* 240-pin DIMM, used for DDR2 SDRAM and FB-DIMM DRAM
There are two notches on the bottom edge of 168-pin DIMMs, and the location of each notch determines a particular feature of the module. The module is usually 13 cm long in the desktop version and 15 cm in the server version.
* The first notch is the DRAM key position. It represents RFU (reserved for future use), registered, and unbuffered.
* The second notch is the voltage key position. It represents 5.0 V, 3.3 V, and reserved.
* The upper DIMM in the photo is an unbuffered 3.3 V 168-pin DIMM.
A DIMM's capacity and timing parameters may be identified with SPD (Serial Presence Detect), an additional chip which contains information about the module type.
ECC DIMMs are those that have extra data bits which can be used by the system memory controller to detect and correct errors. There are numerous ECC schemes, but perhaps the most common is Single Error Correct, Double Error Detect (SECDED), which uses an extra 9th bit per byte.
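As an illustration of the SECDED idea, here is a toy extended Hamming code over 4 data bits (real ECC DIMMs apply the same principle to 64-bit words with 8 check bits; this sketch is for exposition and is not the exact code used by memory controllers). Three position-parity bits locate any single-bit error, and one overall parity bit distinguishes single from double errors:

```python
def hamming84_encode(d):
    """Encode 4 data bits [d1,d2,d3,d4] into an 8-bit extended Hamming codeword."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4            # covers codeword positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4            # covers positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4            # covers positions 4,5,6,7
    code = [p1, p2, d1, p3, d2, d3, d4]
    p0 = sum(code) % 2           # overall parity bit turns SEC into SECDED
    return code + [p0]

def hamming84_decode(code):
    """Return (data bits, status); corrects 1-bit errors, detects 2-bit errors."""
    c = list(code[:7])
    # The syndrome is the position (1..7) of a single-bit error, 0 if none.
    s = ((c[0]^c[2]^c[4]^c[6]) * 1 +
         (c[1]^c[2]^c[5]^c[6]) * 2 +
         (c[3]^c[4]^c[5]^c[6]) * 4)
    overall_ok = sum(code) % 2 == 0
    if s == 0 and overall_ok:
        status = "ok"
    elif not overall_ok:         # overall parity fails: a single error, fixable
        status = "corrected"
        if s:
            c[s - 1] ^= 1
    else:                        # syndrome set but overall parity passes
        status = "double-bit error detected"
    return [c[2], c[4], c[5], c[6]], status
```

A single flipped bit is silently repaired ("corrected"), while two flipped bits are reported but not repaired, exactly the SECDED guarantee.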
The number of ranks on any DIMM is the number of independent sets of DRAMs that can be accessed simultaneously for the full data bit-width of the DIMM to be driven on the bus. The physical layout of the DRAM chips on the DIMM itself does not necessarily relate to the number of ranks. Sometimes the layout of all DRAM on one side of the DIMM PCB versus both sides is referred to as "single-sided" versus "double-sided". These terms may cause confusion as they do not necessarily relate to how the DIMMs are logically organized or accessed.
For example, on a single rank DIMM that has 64 data bits of I/O pins, there is only one set of DRAMs that are turned on to drive a read or receive a write on all 64-bits. In most electronic systems, memory controllers are designed to access the full data bus width of the memory module at the same time.
On a 64-bit (non-ECC) DIMM made with two ranks, there would be two sets of DRAMs that could be accessed at different times. Only one of the ranks can be accessed at a time, since the DRAM data bits are tied together, placing two loads on each data line of the DIMM (wired-OR). Ranks are accessed through chip selects (CS): for a two-rank module, the two DRAMs with data bits tied together are selected by one CS per DRAM (e.g. CS0 goes to one DRAM chip and CS1 goes to the other). DIMMs are currently manufactured with up to four ranks per module.
Consumer DIMM vendors have recently begun to distinguish between single and dual ranked DIMMs. JEDEC decided that the terms "dual-sided," "double-sided," or "dual-banked" were not correct when applied to registered DIMMs.
Most DIMMs are built using "x4" (by 4) memory chips or "x8" (by 8) memory chips. "x4" or "x8" refer to the data width of the DRAM chips in bits.
In the case of the "x4" Registered DIMMs, the data width per side is 36 bits; therefore, the memory controller (which requires 72 bits) needs to address both sides at the same time to read or write the data it needs. In this case, the two-sided module is single-ranked.
For "x8" Registered DIMMs, each side is 72 bits wide, so the memory controller only addresses one side at a time (the two-sided module is dual-ranked).
For various technologies, there are certain bus and device clock frequencies that are standardized. There is also a decided nomenclature for each of these speeds for each type.
SDRAM DIMMs - These first synchronous registered DRAM DIMMs had the same bus frequency for data, address and control lines.
* PC66 = 66 MHz
* PC100 = 100 MHz
* PC133 = 133 MHz
DDR (DDR1) SDRAM DIMMs - DIMMs based on Double Data Rate (DDR) DRAM have data and data strobe at double the rate of the clock. This is achieved by clocking on both the rising and falling edge of the data strobes.
* PC1600 = 200 MHz data & strobe / 100 MHz clock for address and control
* PC2100 = 266 MHz data & strobe / 133 MHz clock for address and control
* PC2700 = 333 MHz data & strobe / 166 MHz clock for address and control
* PC3200 = 400 MHz data & strobe / 200 MHz clock for address and control
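The "PC" number in these names is the module's peak bandwidth in MB/s: the per-second transfer rate multiplied by the 8-byte (64-bit) module width, sometimes rounded to a friendlier figure (a sketch; the helper name is illustrative):

```python
def peak_bandwidth_mb_s(transfers_millions_per_s: int, bus_bytes: int = 8) -> int:
    """Peak transfer rate in MB/s: transfers per second times the 8-byte module width."""
    return transfers_millions_per_s * bus_bytes

assert peak_bandwidth_mb_s(200) == 1600   # PC1600
assert peak_bandwidth_mb_s(400) == 3200   # PC3200
# Some grades are marketing-rounded: 266 MT/s gives 2128 MB/s, sold as "PC2100".
assert peak_bandwidth_mb_s(266) == 2128
```

The same rule, with a "PC2-" prefix, yields the DDR2 names in the next list (e.g. 800 MT/s x 8 B = 6400 MB/s = PC2-6400).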
DDR2 SDRAM DIMMs - DIMMs based on Double Data Rate 2 (DDR2) DRAM also have data and data strobe frequencies at double the rate of the clock. This is achieved by clocking on both the rising and falling edge of the data strobes. The power consumption of DDR2 is significantly lower than DDR(1) at the same speed.
* PC2-3200 = 400 MHz data & strobe / 200 MHz clock for address and control
* PC2-4200 = 533 MHz data & strobe / 266 MHz clock for address and control
* PC2-5300 = 667 MHz data & strobe / 333 MHz clock for address and control
* PC2-6400 = 800 MHz data & strobe / 400 MHz clock for address and control
[Image: A DVD-RAM disc can be identified by many small rectangles distributed on the surface of the data carrier. These rectangles constitute the hard (factory-originated) sectoring of the DVD-RAM.]
DVD-RAM (DVD–Random Access Memory) is a disc specification presented in 1996 by the DVD Forum, which specifies rewritable DVD-RAM media and the appropriate DVD writers. DVD-RAM media have been used in computers as well as camcorders and personal video recorders since 1998. The direct successor of this format will be HD DVD-RAM.
Currently there are three competing technologies for rewritable DVDs: DVD-RAM, DVD+RW and DVD-RW. DVD-RAM is considered a highly reliable format, as the discs have built-in error control and a defect management system. Therefore, DVD-RAM is perceived to be better than the other DVD technologies for traditional computer usage tasks such as general data storage, backup and archival, though the Mt. Rainier standard for DVD+RW somewhat lessens the DVD-RAM format's perceived advantage. Curiously, DVD-RAM has a larger presence in camcorders and set-top boxes than in computers, although the DVD-RAM's popularity in these devices can be explained by the fact that it is very easily written to and erased, which for example allows extensive in-camera editing.
The on-disc structure of DVD-RAMs is closely related to hard disk and floppy disk technology, as it stores data in concentric tracks. DVD-RAMs can be accessed just like a hard or floppy disk and usually without any special software. DVD-RWs and DVD+RWs, on the other hand, store data in one long spiral track and require special packet reading/writing software to read and write data discs. It is a common misconception that DVD-RAM uses magneto-optical (MO) technologies, since both DVD-RAM and MO have numerous rectangles on the disc surface. However, DVD-RAM is a pure phase change medium, similar to CD-RW or DVD-RW.
See also DVD, Compact Disc.
 DVD-RAM Cartridge types

Size    Sides    Bare disc      Non-removable cartridge    Removable cartridge    Empty/no cartridge
12 cm   single   yes (type 0)   type 1                     type 2                 type 3
12 cm   double   none           type 1                     type 4                 type 5
8 cm    single   yes (type 0)   none                       type 7                 type 9
8 cm    double   none           none                       type 6                 type 8
 Specification
[Image: A DVD-RAM Type 2 disc]
[Image: A DVD-RAM for DVD recorders]
Since the Internationale Funkausstellung Berlin 2003, the specification has been marketed by the RAM Promotion Group (RAMPRG), formed by Hitachi, Toshiba, Maxell, LG Electronics, Matsushita/Panasonic, Samsung and Teac.
The specification distinguishes between
* DVD-RAM version 1.0, recording speed 1x
  o Single-sided, one-layer discs with a capacity of 2.58 GB
  o Double-sided, one-layer discs with a capacity of 5.16 GB
* DVD-RAM version 2.0, recording speed 2x
  o Single-sided, one-layer discs with a capacity of 4.7 GB
  o Double-sided, one-layer discs with a capacity of 9.4 GB
* DVD-RAM version 2.1/Revision 1.0, recording speed 3x
* DVD-RAM version 2.2/Revision 2.0, recording speed 5x
* DVD-RAM version 2.3/Revision 3.0, recording speed 6x max
* DVD-RAM version 2.4/Revision 4.0, recording speed 8x max
* DVD-RAM version 2.5/Revision 5.0, recording speed 12x max
* DVD-RAM version 2.6/Revision 6.0, recording speed 16x max
Physically smaller DVD-RAM discs, 80 mm in diameter, also exist with a capacity of 1.46 GB for a single-sided disc, but they are uncommon. DVD-RAM discs were originally sold only in cartridges; however, recent DVD recorders also work with cartridge-less discs, and some devices no longer support cartridges at all. A cartridge disc is about 50% more expensive than a disc without a cartridge. [Image: A miniDVD-RAM with DVD round holder] [Image: How to open a DVD-RAM cartridge]
Many operating systems like Mac OS (Mac OS 8.6 up to Mac OS 9.2), Linux and Microsoft Windows XP support DVD-RAM operation directly, while earlier versions of Windows require device drivers or the program InCD. The optical drives shipped with most Apple Macintosh computers do not support DVD-RAM operation, but a third party DVD-RAM-compatible drive can be connected and used directly with Mac OS.
Windows XP can only write directly to FAT32-formatted DVD-RAM discs. For UDF-formatted discs, which are considered faster, compatible device drivers or software such as InCD or DLA are required. Windows Vista can natively access and write to UDF-formatted DVD-RAM discs. This is a non-issue with Linux, however, which allows the use of virtually any file system of the multitude that ship with the operating system, including UDF; it is even possible to use the ext3 file system on a DVD-RAM disc. Even so, only very few file systems perform well on DVD-RAM, because some file systems frequently overwrite bookkeeping data such as the table of contents at the start of the disc.
Mac OS up to 9.2 (Mac OS Classic) can read and write HFS-, HFS Plus-, FAT-, and UDF-formatted DVD-RAM discs directly. Although Mac OS X does not officially support any DVD-RAM formatting or writing operations, it is reportedly possible to use some third-party DVD-RAM drives to format, read and write HFS Plus- and FAT32-formatted discs.
Many DVD standalone players and recorders do not support DVD-RAM, especially older or cheaper models. However, within "RAMPRG" (the DVD-RAM Promotion Group) there are a number of well-known manufacturers of standalone players and recorders that do support DVD-RAM. Panasonic, for instance, has a range of players and recorders which make full use of the advantages of DVD-RAM. There are also a number of video cameras that use DVD-RAM as the recording medium.
 Advantages of DVD-RAM
* Long life: without physical damage, data is retained for an estimated 30 years minimum. Ideal for video evidence recording in CCTV applications, among many other uses.
* Can be rewritten over 100,000 times (DVD±RW can be rewritten approx. 1,000 times). Faster DVD-RAM media support fewer rewrites (3x speed: 100,000; 5x speed: 10,000), but still more than DVD+RW or DVD-RW. (These are theoretical numbers; in practice they may be smaller depending on the drive, the treatment of the disc and the file system.)
* Reliable writing of discs: verification is done in hardware by the drive, so post-write verification by software is unnecessary. (This is disabled in all current DVD video recorders.)
* Disc defect management safeguards data, though no optical phase-change medium is suitable for long-term archiving.
* DVD-burning software may not be required: discs can be used and accessed like a removable hard disk. Mac OS (8.6 or later) supports DVD-RAM directly. Windows XP supports DVD-RAM directly only for FAT32-formatted discs. Windows Vista is able to write to both FAT32- and UDF-formatted DVD-RAM discs from within Windows Explorer. Device drivers or other software are needed for earlier versions of Windows, or if one wants to use the arguably better UDF format rather than FAT32.
* Arguably easier to use than other DVD technologies.
* Very fast access to smaller files on the disc.
* 2 KB disc block size wastes less space when writing small files.
* Finalization is not necessary. (This is an attribute of the VR recording mode and is also available on other media such as DVD-RW.)
* Media available with or without protective cartridges.
* In video recorders, DVD-RAM can be written to and watched (even separate programs) at the same time, much like TiVo. (Again an attribute of the VR recording mode, also available on other media such as DVD-RW, though only at lower bit rates.)
* Supports time-slip recording and recording without border in/out writing. (Again an attribute of the VR recording mode, also available on other media such as DVD-RW, though only at lower bit rates.)
* Supported by some high-end security digital video recorders, such as the Tecton Darlex, as a secure export medium; 30-year retention makes this an ideal format for evidence.
* Holds more data when using double-sided discs than dual-layer DVD+RW and DVD-RW: 9.4 GB for DVD-RAM vs 8.5 GB for DVD+RW DL and DVD-RW DL.
 Disadvantages of DVD-RAM
* Less compatibility than DVD+RW and DVD-RW, despite predating both formats (as noted above).
* 12x media is reportedly available at OpticStor in the USA, but reportedly not in Europe. 16x media may not be available anywhere except manufacturers' R&D laboratories.
* DVD-RAM media was initially more expensive than other DVD types.
* DVD-RAM writing will be slower than DVD+RW and DVD-RW until 12x DVD-RAM media becomes available; if write verification is enabled, the effective writing speed is lower still.
* Limited support among recorders and drives, and the media itself is further hindered by its scarcity at retail stores in contrast to other recordable DVD formats.
* PC2-8000 = 1000 MHz data & strobe / 500 MHz clock for address and control
* PC2-8500 = 1066 MHz data & strobe / 533 MHz clock for address and control
 Form factors
Several form factors are commonly used in DIMMs. Single Data Rate (SDR) SDRAM DIMMs commonly came in two main heights: 1.7" and 1.5". When 1U rackmount servers started becoming popular, these Registered DIMMs had to plug into angled DIMM sockets to fit in the 1.75"-high box. To alleviate this issue, the next standards of DDR DIMMs were created with a "Low Profile" (LP) height of ~1.2", which fit into vertical DIMM sockets in a 1U platform. With the advent of blade servers, LP DIMMs have once again often been angled to fit in these space-constrained boxes, which led to the development of the Very Low Profile (VLP) form factor DIMM with a height of ~0.72" (18.3 mm). Other DIMM form factors include the SO-DIMM, the Mini-DIMM and the VLP Mini-DIMM.
Static random access memory (SRAM) is a type of semiconductor memory. The word "static" indicates that the memory retains its contents as long as power remains applied, unlike dynamic RAM (DRAM) that needs to be periodically refreshed (nevertheless, SRAM should not be confused with read-only memory and flash memory, since it is volatile memory and preserves data only while power is continuously applied). SRAM should not be confused with SDRAM, which stands for synchronous DRAM and is entirely different from SRAM, or with pseudostatic RAM (PSRAM), which is DRAM configured to function, to an extent, as SRAM.
 Design
[Image: A six-transistor CMOS SRAM cell]
Random access means that locations in the memory can be written to or read from in any order, regardless of the memory location that was last accessed.
Each bit in an SRAM is stored on four transistors that form two cross-coupled inverters. This storage cell has two stable states which are used to denote 0 and 1. Two additional access transistors serve to control the access to a storage cell during read and write operations. It thus typically takes six MOSFETs to store one memory bit.
Access to the cell is enabled by the word line (WL in the figure), which controls the two access transistors M5 and M6. These, in turn, control whether the cell should be connected to the bit lines BL and /BL (the second carries the complement of the first); the bit lines are used to transfer data for both read and write operations. While it is not strictly necessary to have two bit lines, both the signal and its inverse are typically provided because this improves noise margins.
During read accesses, the bit lines are actively driven high and low by the inverters in the SRAM cell. This improves SRAM speed compared to DRAMs—in a DRAM, the bit line is connected to storage capacitors and charge sharing causes the bitline to swing upwards or downwards. The symmetric structure of SRAMs also allows for differential signalling, which makes small voltage swings more easily detectable. Another difference with DRAM that contributes to making SRAM faster is that commercial chips accept all address bits at a time. By comparison, commodity DRAMs have the address multiplexed in two halves, i.e. higher bits followed by lower bits, over the same package pins in order to keep their size and cost down.
The size of an SRAM with m address lines and n data lines is 2^m words, or 2^m × n bits.
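For example, a common 32K×8 SRAM chip has 15 address lines and 8 data lines (the function name below is illustrative):

```python
def sram_bits(address_lines: int, data_lines: int) -> int:
    """Capacity in bits: 2^m words of n bits each."""
    return (2 ** address_lines) * data_lines

# A 32Kx8 chip: 15 address lines, 8 data lines.
capacity = sram_bits(15, 8)
assert capacity == 262_144            # 256 Kibit
assert capacity // 8 == 32_768        # = 32 KiB, i.e. 2^15 bytes
```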
 SRAM operation
A SRAM cell has three different states it can be in: standby where the circuit is idle, reading when the data has been requested and writing when updating the contents. The three different states work as follows:
If the word line is not asserted, the access transistors M5 and M6 disconnect the cell from the bit lines. The two cross coupled inverters formed by M1 – M4 will continue to reinforce each other as long as they are disconnected from the outside world.
Assume that the content of the memory is a 1, stored at Q. The read cycle is started by precharging both bit lines to a logical 1, then asserting the word line WL, enabling both access transistors. The second step occurs when the values stored in Q and /Q are transferred to the bit lines by leaving BL at its precharged value and discharging /BL through M1 and M5 to a logical 0. On the BL side, the transistors M4 and M6 pull the bit line toward VDD, a logical 1. If the content of the memory were a 0, the opposite would happen: /BL would be pulled toward 1 and BL toward 0.
The start of a write cycle begins by applying the value to be written to the bit lines. If we wish to write a 0, we apply a 0 to the bit lines, i.e. set /BL to 1 and BL to 0. This is similar to applying a reset pulse to an SR latch, which causes the flip-flop to change state. A 1 is written by inverting the values of the bit lines. WL is then asserted and the value that is to be stored is latched in. This works because the bit-line input drivers are designed to be much stronger than the relatively weak transistors in the cell itself, so they can easily override the previous state of the cross-coupled inverters. Careful sizing of the transistors in an SRAM cell is needed to ensure proper operation.
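The read and write behaviour described above can be summarized in a small behavioural model (a sketch only; a real cell is an analog circuit, and the "strong driver overrides weak cell" rule is what makes the write work):

```python
class SramCell:
    """Behavioural toy model of a 6T SRAM cell: the cross-coupled inverters
    hold one bit; asserting the word line connects the cell to BL and /BL."""

    def __init__(self):
        self.q = 0                       # state held by the inverter pair

    def read(self, word_line: bool):
        if not word_line:
            return None, None            # cell isolated from the bit lines
        return self.q, self.q ^ 1        # BL and /BL driven to Q and its complement

    def write(self, word_line: bool, bl: int):
        # The bit-line drivers are stronger than the cell's transistors,
        # so they override the inverters and latch the new value.
        if word_line:
            self.q = bl

cell = SramCell()
cell.write(True, 1)
assert cell.read(True) == (1, 0)         # BL high, /BL low: a stored 1
cell.write(False, 0)                     # word line low: write has no effect
assert cell.read(True) == (1, 0)
```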
 Bus behaviour
An SRAM with an access time of 70 ns will output valid data within 70 ns from the time that the address lines are valid. The data then remains valid for a short hold time (5-10 ns), and rise and fall times add roughly 5 ns more. By reading locations within the lower part of an address range in sequence (a page cycle), one can read with a significantly shorter access time (about 30 ns).
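The benefit of the page cycle is easy to quantify. Assuming the figures above (70 ns full access, 30 ns in-page access), reading eight sequential words costs:

```python
# First access pays the full access time; subsequent in-page reads are faster.
full_access, page_access = 70e-9, 30e-9   # seconds
words = 8
burst_time = full_access + (words - 1) * page_access
random_time = words * full_access
print(f"burst: {burst_time * 1e9:.0f} ns vs fully random: {random_time * 1e9:.0f} ns")
```

For this example the in-page burst takes 280 ns against 560 ns for eight fully random accesses, i.e. half the time.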
 Applications and Uses
SRAM is a little more expensive, but faster and significantly less power hungry (especially idle) than DRAM. It is therefore used where either speed or low power, or both, are of prime interest. SRAM is also easier to control (interface to) and generally more truly random access than modern types of DRAM. Due to a more complex internal structure, SRAM is less dense than DRAM and is therefore not used for high-capacity, low-cost applications such as the main memory in personal computers.
 Clock speed and power
The power consumption of SRAM varies widely depending on how frequently it is accessed; it can be as power-hungry as dynamic RAM when used at high frequencies, and some ICs can consume many watts at full speed. On the other hand, static RAM used at a somewhat slower pace, such as in applications with moderately clocked microprocessors, draws very little power and can have nearly negligible power consumption when sitting idle, in the region of a few microwatts.
Static RAM exists primarily as:
* general-purpose products
  o with asynchronous interface, such as the 28-pin 32Kx8 chips (usually named XXC256), and similar products up to 16 Mbit per chip
  o with synchronous interface, usually used for caches and other applications requiring burst transfers, up to 18 Mbit (256Kx72) per chip
* integrated on chip
  o as RAM or cache memory in microcontrollers (usually from around 32 bytes up to 128 kibibytes)
  o as the primary caches in powerful microprocessors, such as the x86 family, and many others (from 8 KiB up to several mebibytes)
  o on application-specific ICs, or ASICs (usually on the order of kibibytes)
  o in FPGAs and CPLDs (usually on the order of a few kilobytes or less)
It may be noted that CPU registers and parts of the state machines used in microprocessors are also often (though not always) built around static RAM.
 Embedded use
Many categories of industrial and scientific subsystems, automotive electronics, and similar devices contain static RAM. Some amount (kibibytes or less) is also embedded in practically all modern appliances, toys, and other devices that implement an electronic user interface. Several mebibytes may be used in complex products such as digital cameras, cell phones, synthesizers, etc.
SRAM in its dual-ported form is sometimes used for realtime digital signal processing circuits.
 In computers
SRAM is also used in personal computers, workstations, routers and peripheral equipment: internal CPU caches and external burst mode SRAM caches, hard disk buffers, router buffers, etc. LCD screens and printers also normally employ static RAM to hold the image displayed (or to be printed). Small SRAM buffers are also found in CD-ROM and CD-RW drives; usually 256 KiB or more are used to buffer track data, which is transferred in blocks instead of as single values. The same applies to cable modems and similar equipment connected to computers. The so-called "CMOS RAM" on PC motherboards was originally a battery-powered SRAM chip, but is today more often implemented using EEPROM or Flash.
Hobbyists often prefer SRAM due to the ease of interfacing. It is much easier to work with than DRAM as there are no refresh cycles, and the address and data buses are directly accessible rather than multiplexed. In addition to buses and power connections, SRAM usually requires only three control signals: Chip Enable (CE), Write Enable (WE) and Output Enable (OE).
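The three control signals can be illustrated with a toy software model of an asynchronous SRAM. The active-low convention and the 32Kx8 organization are typical of classic parts like those mentioned above, but this is a behavioural sketch, not a real interface:

```python
class SRAM:
    """Toy model of an asynchronous SRAM with active-low Chip Enable,
    Write Enable and Output Enable, organized as 32K x 8."""

    def __init__(self, size=32 * 1024):
        self.mem = [0] * size

    def access(self, addr, data_in=None, ce=1, we=1, oe=1):
        # All control signals are active-low: 0 asserts them.
        if ce != 0:
            return None                      # chip deselected: bus high-Z
        if we == 0:
            self.mem[addr] = data_in & 0xFF  # write cycle
            return None
        if oe == 0:
            return self.mem[addr]            # read cycle: drive data bus
        return None

ram = SRAM()
ram.access(0x1234, data_in=0xAB, ce=0, we=0)  # write 0xAB
value = ram.access(0x1234, ce=0, we=1, oe=0)  # read it back
```

Note how a write needs only CE and WE asserted, a read only CE and OE; with CE deasserted the chip ignores the bus entirely, which is what makes gluing several SRAMs onto one bus straightforward.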
 Types of SRAM
 By transistor type
* Bipolar junction transistor (used in TTL and ECL) — very fast but consumes a lot of power
* MOSFET (used in CMOS) — low power and very common today
 By function
* Asynchronous — independent of clock frequency; data in and data out are controlled by address transition
* Synchronous — all timings are initiated by the clock edge(s); address, data in and other control signals are associated with the clock signals
 By feature
* ZBT (zero bus turnaround) — the turnaround is the number of clock cycles it takes to switch access to the SRAM from write to read and vice versa; for ZBT SRAMs this latency is zero
* syncBurst (syncBurst SRAM or synchronous-burst SRAM) — features synchronous burst write access to speed up write operations to the SRAM
Spin torque transfer is a writing technology in which data is written by aligning the spin direction of the electrons flowing through a TMR (tunneling magneto-resistance) element. Writing is performed using a spin-polarized current, with the electrons sharing the same spin direction. Spin torque transfer RAM (STT-RAM) has the advantages of lower power consumption and better scalability over conventional MRAM, and the technology has the potential to make possible MRAM devices combining low current requirements and reduced cost.
Non-volatile random access memory (NVRAM) is the general name used to describe any type of random access memory which does not lose its information when power is turned off. This is in contrast to the most common forms of random access memory today, DRAM and SRAM, which both require continual power in order to maintain their data. NVRAM is a subgroup of the more general class of non-volatile memory types, the difference being that NVRAM devices offer random access, as opposed to sequential access like hard disks.
The best-known form of NVRAM today is flash memory. Several newer technologies aim to provide a true "universal memory", combining the performance of the best SRAM devices with the non-volatility of flash; to date these alternatives have not become mainstream.
 Early NVRAMs
Early computers used a variety of memory systems, some of which happened to be non-volatile, although not typically by design but simply as a side effect of their construction. The most common form of memory through the 1960s was magnetic core memory, which stored data in the polarity of small magnets. Since the magnets held their state even with the power removed, core memory was also non-volatile.
Rapid advances in semiconductor fabrication in the 1970s led to a new generation of solid state memories that core simply could not compete with. Relentless market forces have dramatically improved these devices over the years, and today low-cost, high-performance DRAM forms the vast majority of a typical computer's main memory. However, there are many roles where non-volatility is important, either in cases where the power will be removed for periods of time, or where the constant power needs of DRAM conflict with low-power devices. For many years there was no practical RAM-like device to fill this niche, and many systems used a combination of RAM and some form of ROM for these roles.
Custom ROM was the earliest solution, but had the disadvantage that its contents could be written only once, when the chip was initially designed. ROMs consist of a series of diodes permanently wired to return the required data; the diodes are built into this configuration during fabrication.
PROM improved on this design, allowing the chip to be written to electrically by the end user. PROM consists of a series of diodes that are initially all set to a single value, "1" for instance. By applying higher power than normal, a selected diode can be "burned out" (like a fuse), thereby permanently setting that bit to "0". PROM was a boon to companies that wished to update the contents with new revisions, or alternatively to produce a number of different products using the same chip. For instance, PROM was widely used for game console cartridges in the 1980s.
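The fuse-like, one-way nature of PROM programming can be modeled as a bitwise AND: every bit starts at 1, and programming can only ever burn bits to 0. A toy sketch, with an arbitrary 8-bit-wide array:

```python
class PROM:
    """Toy one-time-programmable memory: bits start at 1 and can
    only be burned to 0, like blowing fuses; a burnt bit can never
    be restored."""

    def __init__(self, size):
        self.cells = [0xFF] * size

    def program(self, addr, value):
        # AND-ing models the fuse behaviour: 1-bits may become 0,
        # but 0-bits never return to 1.
        self.cells[addr] &= value

    def read(self, addr):
        return self.cells[addr]

rom = PROM(16)
rom.program(0, 0x0F)  # burn the high nibble to 0
rom.program(0, 0xF0)  # a "rewrite" only burns more bits: cell is now 0x00
```

This is why PROM contents could be updated in one direction only; once the wrong bit was burnt, the chip had to be discarded.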
Those who required real RAM-like performance together with non-volatility typically had to use conventional RAM devices with a battery backup. This was a common solution in earlier computer systems like the original Apple Macintosh, which used a small amount of memory powered by a watch "button" battery for storing basic setup information like the selected boot volume. Much larger battery-backed memories are still used today as caches for high-speed databases, requiring a performance level newer NVRAM devices have not yet managed to meet.
 The floating-gate transistor
A huge advance in NVRAM technology was the introduction of the floating-gate transistor, which led to the introduction of erasable programmable read-only memory, or EPROM. EPROM consists of a grid of transistors whose gate terminal (the "switch") is protected by a high-quality insulator. By "pushing" electrons onto the gate with the application of higher-than-normal voltage, the electrons become trapped on the far side of the insulator, thereby permanently switching the transistor "on" ("1"). EPROM can be reset to the "base state" (all "1"s or "0"s, depending on the design) by applying ultraviolet light (UV). The UV photons have enough energy to push the electrons back through the insulator, returning the gate to its ground state. At that point the EPROM can be rewritten from scratch.
An improvement on EPROM, EEPROM, soon followed. The extra "E" stands for electrically, referring to the ability to reset EEPROM using electricity instead of UV, making the devices much easier to use in practice. The bits are reset with the application of even higher power through the other terminals of the transistor (source and drain). This high-power pulse essentially sucks the electrons through the insulator, returning it to the ground state. This process has the disadvantage of mechanically degrading the chip, however, so memory systems based on floating-gate transistors generally have short write lifetimes, on the order of 10⁵ writes to any particular bit.
The basis of Flash RAM is identical to EEPROM, and it differs mainly in internal layout. Flash allows its memory to be written only in blocks, which greatly simplifies the internal wiring and allows for higher densities. Areal density is the main determinant of cost in most computer memory systems, and due to this Flash has evolved into one of the lowest-cost solid-state memory devices available. Starting around 2000, demand for ever-greater quantities of Flash has driven manufacturers to use only the latest fabrication systems in order to increase density as much as possible. Although fabrication limits are starting to come into play, new "multi-bit" techniques appear to be able to double or quadruple the density even at existing linewidths.
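Flash's block-wise behaviour can be sketched the same way as the PROM fuse model: programming clears individual bits, but restoring any of them requires erasing an entire block back to all-ones. The block size and geometry below are arbitrary toy values, not those of any real part:

```python
class ToyFlash:
    """Minimal model of flash semantics: programming clears bits
    (1 -> 0) byte by byte, but erasing (0 -> 1) works only on whole
    blocks. Geometry is made up for illustration."""

    BLOCK = 16  # bytes per erase block (tiny on purpose)

    def __init__(self, blocks=4):
        self.mem = bytearray([0xFF] * (blocks * self.BLOCK))

    def program(self, addr, value):
        # Programming can only clear bits, never set them.
        self.mem[addr] &= value

    def erase_block(self, block):
        # Only a whole block can be returned to the erased (all-1s)
        # state; single bytes cannot be individually erased.
        start = block * self.BLOCK
        self.mem[start:start + self.BLOCK] = bytes([0xFF] * self.BLOCK)

flash = ToyFlash()
flash.program(0, 0x12)   # fine: bits cleared from the erased state
flash.program(0, 0x34)   # yields 0x12 & 0x34 = 0x10, not 0x34
flash.erase_block(0)     # whole-block erase is the only way back
flash.program(0, 0x34)   # now the write lands as intended
```

The need to erase a whole block before any byte in it can be freely rewritten is what simplifies the wiring, and also why flash controllers batch writes into block-sized operations.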
 Newer Approaches
Flash and EEPROM's limited write-cycles are a serious problem for any real RAM-like role, however. Additionally, the high power needed to write the cells is a problem in low-power roles, where NVRAM is often used. The power also needs time to be "built up" in a device known as a charge pump, which makes writing dramatically slower than reading, often as much as 1,000 times. A number of new memory devices have been proposed to address these shortcomings.
To date, the only such system to enter widespread production is Ferroelectric RAM, or FeRAM. FeRAM uses a ferroelectric layer in a cell that is otherwise similar to conventional DRAM; this layer retains its state (a 1 or a 0) even with the power removed. To date, FeRAM has been produced on very old fabs, and even the most advanced research samples are still twice the linewidth of most Flash devices. Although this difference might be addressable under normal circumstances, as Flash moves to multi-bit cells the difference in memory density appears to be growing, rather than shrinking.
Another approach seeing major development effort is Magnetoresistive Random Access Memory, or MRAM, which uses magnetic elements and generally operates in a fashion similar to core. Only one MRAM chip has entered production to date, Freescale Semiconductor's 4 Mbit part, and using the techniques in this particular design its capacity is unlikely to grow any time soon. Another technique, known as STT-MRAM, appears to allow for much higher densities, but is falling behind Flash for the same reason as FeRAM — enormous competitive pressures in the Flash market.
Another solid-state technology to see more than purely experimental development is Phase-change RAM, or PRAM. PRAM is based on the same storage mechanism as writable CDs and DVDs, but reads the cells based on their changes in electrical resistance rather than changes in their optical properties. Considered a "dark horse" for some time, in 2006 Samsung announced the availability of a 512 Mbit part, considerably higher capacity than either MRAM or FeRAM. The areal density of these parts appears to be even higher than modern Flash devices, the lower overall storage being due to the lack of multi-bit encoding. This announcement was followed by one from Intel and STMicroelectronics, who demonstrated their own PRAM devices at the 2006 Intel Developer Forum in October. One of the best-attended sessions at IEDM in December 2006 was IBM's presentation of its PRAM technology.
Also seeing renewed interest is silicon-oxide-nitride-oxide-silicon (SONOS) memory.
Perhaps one of the more innovative solutions is millipede memory, developed by IBM. Millipede is essentially a punch card rendered using nanotechnology in order to dramatically increase areal density. Although it was planned to introduce millipede as early as 2003, unexpected problems in development delayed this until 2005, by which point it was no longer competitive with Flash. In theory the technology offers storage densities on the order of 1 Tbit/in², far greater than even the best hard drive technologies currently in use (perpendicular recording offers about 230 Gbit/in²). However, slow read and write times for memories this large seem to limit this technology to hard drive replacements as opposed to high-speed RAM-like uses, although to a very large degree the same is true of Flash as well. It remains to be seen if this technology will ever become practical.
A number of more esoteric devices have been proposed, including Nano-RAM based on carbon nanotube technology, but these are currently far from commercialization. The advantages that nanostructures such as quantum dots, carbon nanotubes and nanowires offer over their silicon-based predecessors include their tiny size, speed and their density. Several concepts of molecular-scale memory devices have been developed recently.