MediaNotes/VideoRAM

The earliest consoles (and some early UsefulNotes/MainframesAndMinicomputers that pioneered graphics use in the early [[TheSixties Sixties]]) didn't have video RAM at all, requiring the programmer to "ride the beam" and generate the video ''themselves'', in real time, for every frame. The most their hardware could usually do was convert a digital signal for connection to an analog CRT, usually a standard household TV set or (rarely) a specialized monitor. This was especially true of the Platform/{{Atari 2600}}, but most 1970s consoles used a similar technique. It was difficult, time-consuming and error-prone work, often made worse by the penny-pinching design of the hardware, and it frequently produced inadequate results (see MediaNotes/TheGreatVideogameCrashOf1983, caused in part by a rush of poorly made Platform/{{Atari 2600}} games).
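
To make the idea concrete, here's a rough C sketch (not real Atari 2600 code; the resolution and the checkerboard pattern are purely illustrative) of what "no video RAM" means in practice: the machine only ever holds one scanline's worth of pixels, so the program has to recompute every line, every frame, in step with the beam.

    #include <stdint.h>
    #include <stdio.h>

    #define WIDTH  160   /* illustrative visible pixels per line */
    #define HEIGHT 192   /* illustrative visible lines per frame */

    /* Whatever should appear on this line must be computed right now;
       there is nowhere to park a finished frame. */
    static uint8_t color_for(int x, int y) {
        return (uint8_t)(((x / 8 + y / 8) & 1) ? 0x0E : 0x00);   /* checkerboard */
    }

    int main(void) {
        uint8_t scanline[WIDTH];                 /* one line is all we ever hold */
        for (int y = 0; y < HEIGHT; y++) {
            for (int x = 0; x < WIDTH; x++)      /* on real hardware this loop races the beam */
                scanline[x] = color_for(x, y);
            (void)scanline;                      /* the line goes out to the TV and is forgotten */
        }
        puts("frame generated line by line, with no framebuffer kept");
        return 0;
    }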

The first GPU chipsets appeared in early MediaNotes/{{Arcade Game}}s of the 1970s. The earliest known example was the Fujitsu [=MB14241=], a video shifter chip used to accelerate framebuffer drawing in Creator/{{Taito}} and Creator/{{Midway}} titles such as ''VideoGame/GunFight'' (1975) and ''VideoGame/SpaceInvaders'' (1978).

Of course, a game with just a tilemap would be boring, as objects would have to move in units of a whole tile. That's fine for games like ''VideoGame/{{Tetris}}'', or games with large characters such as fighting games or the Platform/ZXSpectrum adaptation of ''WesternAnimation/TheTrapDoor'', but not so much for other genres. So most tile-based systems added a sprite rendering unit. A sprite is essentially a freely movable tile handled by special hardware, and just like tiles it has a piece of video memory dedicated to it. This memory contains a set of sprite images, but instead of a tilemap there is a table storing where on the screen each sprite is right now. The sprite rendering hardware often has an explicit limit on the size of a sprite, and because some [=GPUs=] implemented smooth sprite movement not by post-compositing[[note]]adding the sprites to the picture during the final image assembly[[/note]] but by manipulating character memory, a sprite often had to be the same size as a tile. The reason the SNES had larger sprites than the NES is that the SNES's character memory was larger and its GPU could handle larger tiles.
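
A rough C sketch of the arrangement described above (generic, not modelled on any particular console): the background is looked up through a tilemap, while each sprite is the same kind of small tile but carries its own free x/y position in a little attribute table, and transparent pixels let the background show through.

    #include <stdint.h>

    #define TILE  8
    #define MAP_W 32
    #define MAP_H 24

    typedef struct { uint8_t pixels[TILE][TILE]; } Tile;   /* "character memory" */
    typedef struct { int x, y; uint8_t tile; } Sprite;     /* one sprite attribute entry */

    /* Background: the tilemap names which tile covers each 8x8 cell. */
    uint8_t background_pixel(const Tile *tiles, const uint8_t map[MAP_H][MAP_W],
                             int x, int y) {
        const Tile *t = &tiles[map[y / TILE][x / TILE]];
        return t->pixels[y % TILE][x % TILE];
    }

    /* Sprites: same tile data, but positioned freely; colour 0 counts as transparent. */
    uint8_t final_pixel(const Tile *tiles, const uint8_t map[MAP_H][MAP_W],
                        const Sprite *spr, int nspr, int x, int y) {
        for (int i = 0; i < nspr; i++) {
            int dx = x - spr[i].x, dy = y - spr[i].y;
            if (dx >= 0 && dx < TILE && dy >= 0 && dy < TILE) {
                uint8_t c = tiles[spr[i].tile].pixels[dy][dx];
                if (c) return c;                 /* sprite wins over the background */
            }
        }
        return background_pixel(tiles, map, x, y);
    }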

A misconception that's dying down is that the amount of Video RAM indicates a graphics card's performance. This started around the turn of the millennium as the PC gaming world was transitioning to 32-bit color (options existed for either 16-bit color, 65,536 colors, or 32-bit color, technically 16,777,216 colors plus 256 levels of transparency/luminance and the like). RAM at the time was expensive, and 32-bit color took up twice the space. But as time went on, RAM became cheaper and there were occasions where lower-performing graphics cards had more memory than faster ones. Needless to say, the "MediaNotes/DirectX compatible graphics card with X MB of memory" line was soon dropped from requirements lists because the "memory = performance" rule of thumb had been rendered moot. However, this belief has seen a resurgence as of late, with many deriding cards with 8 GB or less as having insufficient VRAM. Much of this seems to stem from poorly optimized Triple-A titles that list ludicrous VRAM requirements in the system requirements section on Platform/{{Steam}} and Platform/EpicGamesStore, and in reviews.
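
The arithmetic behind the "32-bit costs twice the memory" point, as a small C example (the 1024x768 resolution is picked purely for illustration):

    #include <stdio.h>

    int main(void) {
        const long w = 1024, h = 768;                               /* illustrative resolution */
        printf("16-bit framebuffer: %ld KB\n", w * h * 2 / 1024);   /* 1536 KB */
        printf("32-bit framebuffer: %ld KB\n", w * h * 4 / 1024);   /* 3072 KB */
        return 0;
    }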

In 1979, Creator/{{Namco}} and Creator/{{Irem}} introduced tile-based graphics with their custom arcade graphics chipsets. The [[http://gaming.wikia.com/wiki/Namco_Galaxian Namco Galaxian]] arcade system used specialized graphics hardware supporting RGB color, multi-colored sprites and tilemap backgrounds. Creator/{{Nintendo}}'s ''VideoGame/RadarScope'' also adopted a tile-based graphics system that year.

The capacity of such hardware was generally measured in the number of pixels that could be copied into video memory per second. Note again, however, that hardware like this first appeared in UsefulNotes/MainframesAndMinicomputers, which could usually afford it thanks to having much more memory and specialized hardware, as they didn't need to be competitive in the ''consumer'' market. In the Seventies, framebuffer technology was introduced to arcade machines with Creator/{{Taito}}'s ''VideoGame/GunFight'' (1975), and popularized by ''VideoGame/SpaceInvaders'' (1978). In the early Eighties it appeared on Platform/{{UNIX}} workstations and spread to consumer-grade machines like the [[http://www.giantbomb.com/nec-pc-9801/3045-112/ NEC PC-98]], reaching the Platform/{{Amiga}} by the end of the decade. The first framebuffer-based GPU for ordinary Intel [=PCs=] was the [[https://en.wikipedia.org/wiki/NEC_µPD7220 NEC µPD7220]], released in 1982. Framebuffer technology became more prominent in the early 1990s, when the rise of Platform/MicrosoftWindows all but required it.
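
For reference, the operation being counted in "pixels per second" is essentially the rectangle copy below; a blitter is rated by how quickly it can do the equivalent of this loop without tying up the CPU (plain C sketch, 32-bit pixels assumed for simplicity):

    #include <stdint.h>
    #include <string.h>

    /* Copy a w-by-h block of pixels into a framebuffer at (x, y).
       "pitch" is the number of pixels per row of each surface. */
    void blit(uint32_t *dst, int dst_pitch,
              const uint32_t *src, int src_pitch,
              int x, int y, int w, int h) {
        for (int row = 0; row < h; row++)
            memcpy(dst + (size_t)(y + row) * dst_pitch + x,
                   src + (size_t)row * src_pitch,
                   (size_t)w * sizeof *dst);        /* w * h pixels moved per call */
    }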

The first home computer to include anything resembling a modern GPU was the [[Platform/Atari8BitComputers Atari 400/800]] in 1979. These used a two-chip video system that had its own instruction set and implemented an early form of "scanline DMA"[[note]]It could access and operate on the data for a whole line of the resulting image (as CRT monitors draw the image by scanning the screen line by line, top to bottom) without bothering the CPU[[/note]], which was used in later consoles for special effects. The most popular of these early chips, however, was Texas Instruments' [=TMS9918=]/9928 family. The 99x8 provided either tile-based or bit-mapped memory arrangements, and could address up to 16 kilobytes of VRAM -- a huge amount at the time. Most [=TMS99x8=] display modes, however, were tile-based.
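
A loose C sketch of why per-scanline access is so useful (the "register table" here is an ordinary array standing in for hardware, and the wave effect is only an example): because the video chip can pick up a new parameter on every line without the CPU's help, per-line tricks like wavy distortion or split-screen status bars come almost for free.

    #include <math.h>

    #define LINES 240   /* illustrative number of visible scanlines */

    int main(void) {
        volatile int scroll_x[LINES];   /* per-line table the display hardware would walk on its own */
        for (int line = 0; line < LINES; line++)
            scroll_x[line] = (int)(8 * sin(line * 0.1));   /* classic "wave" distortion */
        /* During display, line N would be drawn shifted by scroll_x[N],
           with no per-frame CPU work beyond filling this table. */
        return 0;
    }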

The [=TMS99x8=] introduced tile-based rendering to home systems, and the Platform/{{NES}} and later revisions of the 99x8 in the {{Platform/MSX}} and Creator/{{Sega}} Platform/SG1000 refined it. While the Texas Instruments chip required the CPU to load tiles into memory itself, the Big N went one better and put what it called the "Picture Processing Unit" on its own memory bus, freeing the CPU to do other things and making larger backgrounds and smooth scrolling possible. The Platform/SegaMasterSystem took a further step by giving its [[http://segaretro.org/VDP VDP]] a 16-bit bus.
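
A hedged sketch of the difference the dedicated PPU bus made. On a [=TMS9918=]-style design the CPU itself pushes every tile byte through a narrow data port, one at a time, and can't do anything else meanwhile; the two "ports" below are plain variables standing in for the real registers, and the address-setup sequence is only indicative, since the details vary by chip.

    #include <stdint.h>
    #include <stddef.h>

    volatile uint8_t VDP_ADDR_PORT;   /* stand-in for the video chip's address port */
    volatile uint8_t VDP_DATA_PORT;   /* stand-in for the video chip's data port    */

    /* CPU-driven upload: the processor is busy for every single byte of tile data. */
    void upload_tiles(const uint8_t *tiles, size_t len, uint16_t vram_addr) {
        VDP_ADDR_PORT = (uint8_t)(vram_addr & 0xFF);   /* low address byte */
        VDP_ADDR_PORT = (uint8_t)(vram_addr >> 8);     /* high byte; real chips also want a write-mode bit here */
        for (size_t i = 0; i < len; i++)
            VDP_DATA_PORT = tiles[i];   /* the chip auto-increments its internal VRAM pointer */
    }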

Many tile-based [=GPUs=] also included special circuitry for smooth scrolling (where the image shifts by a single pixel at a time), which is notoriously hard to do entirely in software. John Carmack was reportedly the first to do it efficiently on a PC in ''VideoGame/CommanderKeen'', and his method required a fairly powerful 80286 CPU, head and shoulders above the puny 8-bit chips in those early consoles. In hardware, however, the CPU simply tells the chip the position offset for each layer, and that's what gets shown. Arcade machines like the [[http://segaretro.org/Sega_X_Board Sega X Board]], and later consoles like the Platform/MegaDrive and {{Platform/SNES}}, added new features such as more colors, "Mode 7" support (hardware rotate/scale/zoom effects), and scanline DMA (used to generate "virtual layers").
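
The contrast in a nutshell, as a C sketch (the framebuffer size and the "register" are stand-ins, not real hardware): scrolling one pixel in software means moving nearly the whole screen through the CPU, while hardware scrolling is a single register write.

    #include <stdint.h>
    #include <string.h>

    #define W 320
    #define H 200

    static uint8_t framebuffer[H][W];             /* illustrative 320x200, 8 bits per pixel */
    static volatile uint16_t scroll_x_register;   /* pretend hardware scroll register       */

    void scroll_left_software(void) {             /* expensive: touches ~64,000 bytes        */
        for (int y = 0; y < H; y++)
            memmove(&framebuffer[y][0], &framebuffer[y][1], W - 1);
    }

    void scroll_left_hardware(void) {             /* cheap: the chip shifts the layer itself */
        scroll_x_register++;
    }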

Sprites exist within certain layers, just like tilemaps, so a sprite can appear in front of or behind tilemaps or even other sprites. Some [=GPUs=], like the [[{{Platform/MSX}} Yamaha V99x8]], allowed developers to program sprite movement and even issued interrupts[[note]]a hardware signal to execute a special subprogram out-of-order to react to a specific event[[/note]] when two sprites collided. On the other hand, sprite rendering on the NES and other early consoles had certain limitations: only a certain number of sprite tiles could be shown per horizontal scanline of the screen. If you exceed this limit, you see errors like object flickering and so forth. Note that what matters is the number of sprite tiles, not the number of sprites, so a 2x2 sprite (a sprite composed of 4 tiles arranged in a square) counts as 2 sprite tiles in the horizontal direction.
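
The per-scanline budget works roughly like the C sketch below (the limit of 8 tiles per line and the strict evaluation-in-order behaviour are illustrative; real consoles differ): the hardware walks the sprite table in order and simply stops drawing once the line's budget is spent, which is why crowded scanlines flicker when a game rotates the sprite order each frame to share the loss around.

    #include <stdint.h>

    #define TILE_H 8
    #define MAX_TILES_PER_LINE 8     /* assumed budget; the real number varies by console */

    typedef struct { int x, y, width_in_tiles; } Sprite;

    /* Returns how many sprite tiles actually get drawn on a given scanline. */
    int tiles_drawn_on_line(const Sprite *spr, int n, int line) {
        int budget = MAX_TILES_PER_LINE, drawn = 0;
        for (int i = 0; i < n && budget > 0; i++) {
            if (line >= spr[i].y && line < spr[i].y + TILE_H) {
                int cost = spr[i].width_in_tiles;     /* a 2x2 sprite costs 2 tiles here */
                if (cost > budget) cost = budget;
                budget -= cost;
                drawn  += cost;
            }
        }
        /* sprites the loop never reached are simply not shown -> flicker */
        return drawn;
    }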

Meanwhile, what [[Platform/IBMPersonalComputer PCs]] had at the time was not nearly as neat. After the horrible, horrible CGA (which had just 16 kilobytes of VRAM and was limited to 4 colors ''at best''), EGA at least brought full 16-color graphics to the PC in the mid-1980s, but its memory structure was, to put it mildly, weird. It used a ''planar'' memory setup, meaning that each of the four EGA color bits (red, green, blue and brightness) was stored in a different part of memory and accessed one at a time. This made writing slow, so the EGA also provided a set of rudimentary blitting functions to help out. Nevertheless, several PC games of the era made use of this mode. Just like blitter-based systems, there was no support for sprites, so collision detection and layering were up to the programmer.
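
A hedged C sketch of why planar writes hurt (the plane arrays below are ordinary memory, not real EGA registers or ports, and the 640x350 geometry is just the standard 16-color mode used for sizing): a single pixel's four colour bits live in four separate bit planes, so a naive pixel write turns into four masked read-modify-write operations, with the real hardware also needing a plane-select register poke in between.

    #include <stdint.h>

    #define PLANES 4
    #define PLANE_BYTES (640 / 8 * 350)   /* 640x350, one bit per pixel per plane */

    static uint8_t plane[PLANES][PLANE_BYTES];   /* stand-in for the EGA's bit planes */

    void put_pixel(int x, int y, uint8_t color /* 0..15 */) {
        int offset = y * (640 / 8) + x / 8;
        uint8_t bit = (uint8_t)(0x80 >> (x % 8));
        for (int p = 0; p < PLANES; p++) {       /* real hardware: select the plane first */
            if (color & (1u << p)) plane[p][offset] |= bit;
            else                   plane[p][offset] &= (uint8_t)~bit;
        }
    }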
