Archive for the ‘VGA’ Category

ATI Radeon HD 4670


Radeon HD 4670

Graphics Engine : ATI Radeon HD 4670
Bus Standard : PCI Express 2.0
Video Memory : GDDR3 512MB / GDDR4 512MB
Core Clock : 750 MHz
Memory Clock : 1746 MHz (873 × 2) / 1800 MHz (900 × 2)
Memory Interface : 128-bit
Shaders : 320 (unified)
Max Resolution : 2560 x 1600
Video Out : D-Sub, DVI, HDMI
HDCP/HDTV/HDMI : Yes / Yes / Yes
Package : Manual, driver CD, DA HD 4670 card

SPECIFICATION
  • 514 million transistors on 55nm fabrication process
  • PCI Express 2.0 x16 bus interface
  • GDDR3/DDR3/DDR2 memory interface (depending on model)
  • Microsoft® DirectX® 10.1 support
    • Shader Model 4.1
    • 32-bit floating point texture filtering
    • Indexed cube map arrays
    • Independent blend modes per render target
    • Pixel coverage sample masking
    • Read/write multi-sample surfaces with shaders
    • Gather4 texture fetching
  • Unified Superscalar Shader Architecture
    • 320 stream processing units
      • Dynamic load balancing and resource allocation for vertex, geometry, and pixel shaders
      • Common instruction set and texture unit access supported for all types of shaders
      • Dedicated branch execution units and texture address processors
    • 128-bit floating point precision for all operations
    • Command processor for reduced CPU overhead
    • Shader instruction and constant caches
    • Up to 128 texture fetches per clock cycle
    • Up to 128 textures per pixel
    • Fully associative multi-level texture cache design
    • DXTC and 3Dc+ texture compression
    • High resolution texture support (up to 8192 x 8192)
    • Fully associative texture Z/stencil cache designs
    • Double-sided hierarchical Z/stencil buffer
    • Early Z test and Fast Z Clear
    • Lossless Z & stencil compression (up to 128:1)
    • Lossless color compression (up to 8:1)
    • 8 render targets (MRTs) with anti-aliasing support
    • Physics processing support
  • Dynamic Geometry Acceleration
    • High performance vertex cache
    • Programmable tessellation unit
    • Accelerated geometry shader path for geometry amplification
    • Memory read/write cache for improved stream output performance
  • Anti-aliasing features
    • Multi-sample anti-aliasing (2, 4, or 8 samples per pixel)
    • Up to 24x Custom Filter Anti-Aliasing (CFAA) for superior quality
    • Adaptive super-sampling and multi-sampling
    • Gamma correct
    • Super AA (ATI CrossFireX™ configurations only)
    • All anti-aliasing features compatible with HDR rendering
  • Texture filtering features
    • 2x/4x/8x/16x high quality adaptive anisotropic filtering modes (up to 128 taps per pixel)
    • 128-bit floating point HDR texture filtering
    • sRGB filtering (gamma/degamma)
    • Percentage Closer Filtering (PCF)
    • Depth & stencil texture (DST) format support
    • Shared exponent HDR (RGBE 9:9:9:5) texture format support
  • OpenGL 2.0 support
  • ATI Avivo™ HD Video and Display Platform1
    • 2nd generation Unified Video Decoder (UVD 2)
      • Enabling hardware decode acceleration of H.264, VC-1 and MPEG-2
      • Dual stream playback (or Picture-in-picture)
    • Hardware MPEG-1 and DivX video decode acceleration
      • Motion compensation and IDCT
    • ATI Avivo Video Post Processor1
      • Enhanced DVD up-conversion to HD
      • Color space conversion
      • Chroma subsampling format conversion
      • Horizontal and vertical scaling
      • Gamma correction
      • Advanced vector adaptive per-pixel de-interlacing
      • De-blocking and noise reduction filtering
      • Detail enhancement
      • Inverse telecine (2:2 and 3:2 pull-down correction)
      • Bad edit correction
      • Automatic dynamic contrast adjustment
      • Full score in HQV (SD) and HQV (HD) video quality benchmarks
    • Two independent display controllers
      • Drive two displays simultaneously with independent resolutions, refresh rates, color controls and video overlays for each display
      • Full 30-bit display processing
      • Programmable piecewise linear gamma correction, color correction, and color space conversion
      • Spatial/temporal dithering provides 30-bit color quality on 24-bit and 18-bit displays
      • High quality pre- and post-scaling engines, with underscan support for all display outputs
      • Content-adaptive de-flicker filtering for interlaced displays
      • Fast, glitch-free mode switching
      • Hardware cursor
    • Two integrated DVI display outputs
      • Primary supports 18-, 24-, and 30-bit digital displays at all resolutions up to 1920×1200 (single-link DVI) or 2560×1600 (dual-link DVI)2
      • Secondary supports 18-, 24-, and 30-bit digital displays at all resolutions up to 1920×1200 (single-link DVI only)2
      • Each includes a dual-link HDCP encoder with on-chip key storage for high resolution playback of protected content3
    • Two integrated 400 MHz 30-bit RAMDACs
      • Each supports analog displays connected by VGA at all resolutions up to 2048×15362
    • DisplayPort™ output support
      • Supports 24- and 30-bit displays at all resolutions up to 2560×16002
      • Integrated HD audio controller with up to 2 channel 48 KHz stereo or multi-channel (7.1) AC3 enabling a plug-and-play cable-less audio solution4
    • HDMI output support
      • Supports all display resolutions up to 1920×10802
      • Integrated HD audio controller with up to 2 channel 48 KHz stereo or multi-channel (7.1) AC3 enabling a plug-and-play cable-less audio solution4
    • Integrated AMD Xilleon™ HDTV encoder
      • Provides high quality analog TV output (component/S-video/composite)
      • Supports SDTV and HDTV resolutions
      • Underscan and overscan compensation
    • Seamless integration of pixel shaders with video in real time
    • VGA mode support on all display outputs
  • ATI PowerPlay™ Technology5
    • Advanced power management technology for optimal performance and power savings
    • Performance-on-Demand
      • Constantly monitors GPU activity, dynamically adjusting clocks and voltage based on user scenario
      • Clock and memory speed throttling
      • Voltage switching
      • Dynamic clock gating
    • Central thermal management – on-chip sensor monitors GPU temperature and triggers thermal actions as required
  • ATI CrossFireX™ Multi-GPU Technology6
    • Scale up rendering performance and image quality with two GPUs
    • Integrated compositing engine
    • High performance bridge interconnect

1 ATI Avivo™ HD is a technology platform that includes a broad set of capabilities offered by certain ATI Radeon™ HD GPUs. Not all products have all features and full enablement of some ATI Avivo™ HD capabilities may require complementary products.
2 Some custom resolutions require user configuration
3 Playing HDCP content requires additional HDCP ready components, including but not limited to an HDCP ready monitor, Blu-ray or HD DVD disc drive, multimedia application and computer operating system.
4 Subject to digital rights management limitations; maximum supported audio stream bandwidth is 6.144 Mbps
5 ATI PowerPlay™ technology consists of numerous power saving features. Not all features may be available in all ATI Radeon HD 4600 Series graphics cards.
6 ATI CrossFireX™ technology requires an ATI CrossFireX Ready motherboard and may require a specialized power supply.

ATI Radeon™ HD graphics chips have numerous features integrated into the processor itself (e.g., HDCP, HDMI, etc.). Third parties manufacturing products based on, or incorporating ATI Radeon HD graphics chips, may choose to enable some or all of these features. If a particular feature is important to you, please inquire of the manufacturer if a particular product supports this feature. In addition, some features or technologies may require you to purchase additional components in order to make full use of them (e.g. a Blu-Ray or HD-DVD drive, HDCP-ready monitor, etc.).

ATI Radeon HD 4830


AMD Introduces a $129 Radeon HD 4830 Graphics Card

Update: Shortly after Legit Reviews published our Radeon HD 4830 articles, we were notified by AMD that every reference card they sent out to reviewers shipped with an incorrect BIOS. The BIOS on the Radeon HD 4830 had one too many SIMDs disabled, leaving just 560 stream processors enabled instead of the 640 stream processors the card should have been running. Read this article to see what the right BIOS does for performance!

ATI Radeon HD 4830 Graphics Card

AMD has been pretty aggressive in the video card market lately, and with the success of the Radeon HD 4800 series, who could blame them? Today AMD is announcing the ATI Radeon HD 4830 graphics card, which looks like the Radeon HD 4850 at first glance. It would be easy to confuse the two cards, as they use similar printed circuit boards and both use 512MB of GDDR3 for the frame buffer. What is the difference, then? The Radeon HD 4830 has lower clock speeds across the board, along with 160 fewer stream processors and eight fewer texture units than the Radeon HD 4850. With fewer features comes a lower price tag, and at $129 the Radeon HD 4830 can still offer great performance for the price being paid.

ATI Radeon HD 4830 Graphics Card

When you compare the specifications of the three single-GPU Radeon HD 4800 series cards, you can see how they stack up across the board. Notice that the max board power remains the same on the Radeon HD 4830, so power consumption and temperatures should be close to what is seen on the Radeon HD 4850 graphics card. The core clock on the Radeon HD 4830 is 575MHz, with the memory clocked at 900MHz. The Radeon HD 4830, 4850 and 4870 all have 956 million transistors and are built on the same 55nm process.

ATI Radeon HD 4830 Graphics Card

AMD has a very nice product stack in the market now with mainstream gaming graphics cards from the $79 price point all the way up to the $300 mark for those looking to spend a little more for performance.  The Radeon HD 4830 is set for the $100-$150 price point, which makes it direct competition to the GeForce 9800 GT graphics card from NVIDIA. Now that we have a basic understanding of the specifications, let’s take a closer look at the card itself.

The Radeon HD 4830 Graphics Card

ATI Radeon HD 4830 512MB Video Card

The Radeon HD 4830 video card is a single-slot solution that is 10.25″ long, which is small enough to fit in most desktop chassis. The reference Radeon HD 4830 graphics card supports two dual-link DVI ports, plus HDMI via the DVI-to-HDMI adapter included in retail packaging. There is also an analogue output located between the DVI ports that supports both S-video and component output via an included dongle.

ATI Radeon HD 4830 512MB Video Card

The Radeon HD 4830 graphics card does have one 6-pin PCI Express power connector on it, as the card has a total power rating of 110W and draws more power than the graphics slot alone can supply.

ATI Radeon HD 4830 512MB Video Card

With the copper heat spreader removed we can get a better look at the card. The Radeon HD 4830 does have two CrossFire interconnects and fully supports CrossFire and CrossFireX configurations, so you can run two, three or four of these cards together for better performance. The eight black rectangles are eight Qimonda GDDR3 memory ICs that total 512MB for the 256-bit frame buffer. The memory data rate on this card is 1.8Gbps with a memory bandwidth of 57.6 GB/s.
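
The 57.6 GB/s figure follows directly from the data rate and bus width quoted above; a quick sketch of the arithmetic:

```python
# Sanity check on the Radeon HD 4830 memory bandwidth figure.
# Peak bandwidth = per-pin data rate (Gbps) * bus width (bits) / 8 bits per byte.
data_rate_gbps = 1.8        # GDDR3 at a 900MHz clock, double data rate
bus_width_bits = 256
bandwidth_gb_s = data_rate_gbps * bus_width_bits / 8
print(bandwidth_gb_s)       # 57.6 GB/s, matching the quoted figure
```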

ATI Radeon HD 4830 512MB Video Card

The back of the Radeon HD 4830 graphics card is pretty plain as no memory or interesting components are located there.

ATI Radeon HD 4830 512MB Video Card

Here is a closer look at the GPU on the Radeon HD 4830, which is the same core used on the other Radeon 4800 series cards and is built on the same 55nm process with 956 million transistors. Clocked at 575MHz this core has 640 stream processors, which is good enough for 740 GFLOPS of compute performance.
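
Assuming the usual two FLOPs (one multiply-add) per stream processor per clock, consistent with the teraflop figures AMD quotes for the rest of the RV770 family, the compute number can be reproduced:

```python
# Peak single-precision throughput of the Radeon HD 4830 core.
core_clock_mhz = 575
stream_processors = 640
flops_per_sp_per_clock = 2   # one multiply-add counts as 2 FLOPs
gflops = core_clock_mhz * stream_processors * flops_per_sp_per_clock / 1000
print(gflops)                # 736.0 GFLOPS, close to the ~740 quoted
```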

The Test System

The Main Test System

The ATI test system was running Windows Vista Ultimate 64-bit with SP1 and all available Microsoft updates. The ATI Radeon HD 4870 X2 was using CCC 8.8 beta drivers. The ATI Radeon HD 4670 and Radeon HD 4550 were using 8.9 drivers. The Radeon HD 4870 1GB, Radeon HD 4850 and Radeon HD 4830 all used Radeon 8.10 drivers. All results shown in the charts are averages of at least three runs from each game or application used.

The Video Cards:

  • ATI Radeon HD 4870 X2 – GDDR5 (500MHz/1800MHz)
  • ATI Radeon HD 4870 1GB – GDDR5 (750MHz/1800MHz)
  • ATI Radeon HD 4850 – 512MB GDDR3 (625MHz/1986MHz)
  • ATI Radeon HD 4830 – 512MB GDDR3 (575MHz/1800MHz)
  • PNY GeForce GTX 280 – GDDR3 (602MHz/2214MHz)
  • BFG Tech GeForce GTX 260 – GDDR3 (602MHz/2214MHz)
  • ATI Radeon HD 4670 – 512MB GDDR3 (750MHz/2000MHz)
  • XFX GeForce 9600 GT – 512MB GDDR3 (650MHz/1800MHz)
  • EVGA GeForce 9600 GSO – 384MB GDDR3 (550MHz/1600MHz)
  • NVIDIA GeForce 9500 GT – 256MB GDDR3 (600MHz/1000MHz)
  • ATI Radeon HD 4550 – 512MB GDDR3 (600MHz/1600MHz)

All of the ATI video cards were tested on our Intel X48 Express test platform, which is loaded with the latest and greatest hardware. The Intel Core 2 Quad QX9770 ‘Yorkfield’ processor was used for testing as it proves to be the best desktop processor when it comes to game performance. The test system was also loaded with 4GB of memory and water cooled to ensure throttling of the processor or memory wouldn’t cause any issues. The Corsair DDR3 1600MHz memory kit was run at 1600MHz with 9-9-9-24 memory timings. The Gigabyte X48T-DQ6 motherboard was running BIOS version F5, which was the most recent available at the time. It should be noted that the Radeon HD 4870 X2 was run on the Coolermaster 1000W power supply when in CrossFireX mode, as the Corsair HX 620W power supply was unable to handle the load. The Corsair HX 620W could run the cards, pulling more than 791W at the wall, but games displayed artifacts and the +12V rail sagged. A quick switch to a bigger PSU resolved the issue.

Intel Test Platform

  • Processor: Intel Core 2 Quad QX9770
  • Motherboard: Gigabyte X48T-DQ6
  • Memory: 4GB Corsair PC3-1600C9
  • Video Cards: See above
  • Hard Drive: Western Digital SATA RaptorX
  • Cooling: Corsair Nautilus 500
  • Power Supply: Corsair HX620W
  • Operating System: Windows Vista Ultimate

The Main Test System

All of the NVIDIA video cards were tested on our nForce 790i SLI Ultra test platform, which is loaded with the latest and greatest hardware.  The Intel Core 2 Quad QX9770 ‘Yorkfield’ processor was used for testing as it proved to be the best desktop processor when it comes to game performance. The test system was also loaded with 4GB of memory and water cooled to ensure throttling of the processor or memory wouldn’t cause any issues. The Corsair DDR3 1600MHz memory kit was run at 1600MHz with 9-9-9-24 memory timings. The EVGA 790i SLI Ultra motherboard was running BIOS version P06, which was the most recent available at the time. The GeForce 9500 GT, 9600 GT and 9600 GSO used Forceware 177.93 video card drivers while all other NVIDIA graphics cards were run with Forceware 177.83 video card drivers installed.

Intel Test Platform

  • Processor: Intel Core 2 Quad QX9770
  • Motherboard: EVGA 790i SLI Ultra
  • Memory: 4GB Corsair PC3-1600C9
  • Video Cards: See above
  • Hard Drive: Western Digital SATA RaptorX
  • Cooling: Corsair Nautilus 500
  • Power Supply: Corsair HX620W
  • Operating System: Windows Vista Ultimate

Now that we know exactly what the test system is, we can move along to performance numbers.

ATI Radeon HD 4850


You may have gathered over the last couple of weeks that we really like the ATI HD 4870. It isn’t quite the fastest graphics card you can buy – that honour goes to nVidia’s GTX 280 – but it performs very well and comes in at a quite phenomenal price. Still, there are many of us that would balk at the idea of spending nearly £200 on a graphics card, regardless of how fast it is, which is where the ATI HD 4850 comes in.

Like its more expensive sibling, the ATI HD 4850 is based on ATI’s new RV770 chip. In fact, unlike the nVidia GTX 260, which uses the same chip as the GTX 280 but with a few sections disabled, the HD 4850 uses the full extent of RV770. The differences are confined to clock speed and memory configuration.

So, in essence you still get 800 stream processors, 40 texture address/filtering units, and 16 ROPs, as well as the 256-bit wide memory interface – although the memory chips themselves are GDDR3 instead of the GDDR5 seen on the HD 4870. However, the core clock speed has been reduced by 17 per cent and memory speed by 45 per cent, which should result in a performance differential somewhere within that percentage range – the exact difference will vary from game to game.
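
Those percentages check out against the published clocks (HD 4870: 750MHz core, 3,600MT/s effective memory; HD 4850: 625MHz core, 1,986MT/s effective memory):

```python
# Relative clock deficits of the HD 4850 versus the HD 4870.
core_4870, core_4850 = 750, 625    # core clocks in MHz
mem_4870, mem_4850 = 3600, 1986    # effective memory rates in MT/s
core_cut = 100 * (1 - core_4850 / core_4870)
mem_cut = 100 * (1 - mem_4850 / mem_4870)
print(round(core_cut), round(mem_cut))   # ~17% core, ~45% memory
```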
And that really is it. There’s nothing more to say about the architecture of HD 4850 that hasn’t already been said in our in-depth HD 4870 review. However, when it comes to the card itself there are some significant differences.

Asus and Powercolor were the first board partners to get cards to us for review. Powercolor’s card is running stock clocks while Asus’ is from its T.O.P. range, which means it comes overclocked to 680MHz (core) and 2,100MHz (memory) straight out of the box. This understandably means the Asus card will cost a little more, but the choice is there if you want a tad more performance.


Both cards come with the exact same bundle, which includes converters for DVI-to-HDMI, DVI-to-VGA, S-video to composite, and S-video to component, as well as a CrossfireX connector. Neither includes any free games or other software, but considering the approximately £120 asking price this is hardly surprising.


As a result of the cut-down core and memory speeds, the cards consume less power and consequently kick out less heat than the HD 4870. This means ATI has been able to use a single-slot cooler for its reference design, which both cards we’re looking at today have utilised.

While this seems to make sense, ATI obviously never tried to swap out one of these cards after an extended gaming session because, my god, they get hot! Even ATI’s own Overdrive software reports that the cards are running at 80 degrees Celsius and above. Not that we experienced any stability problems, at least with Powercolor’s card. However, the Asus card, which was fine for most of our testing, didn’t fare so well. During Race Driver: GRID testing the card would regularly crash out and we eventually had to abandon testing with the Asus card. I asked an Asus representative about this and he informed us that indeed the reference cooler isn’t sufficient for reliable performance when the card is overclocked so retail versions of the T.O.P. card will use Asus’ Glaciator cooler instead.

An Asus nVidia 9600GT card utilising the Glaciator cooler
Now we’ve not seen a card with this cooler before so we can’t vouch for its abilities. However, from what we were told it enables the card to run nearly 20 degrees Celsius cooler than the reference design and, from reading around, the general opinion appears to be that this is also a very quiet and efficient cooler. Of course the obvious problem is that it will make the card semi-dual-slot, i.e. it won’t take up two PCI brackets, so you can still install a USB or audio panel, but you won’t be able to fit another full-size card in alongside.


One additional power connector is required to get the card going and this is situated in the normal position on the back edge. Likewise the Crossfire connectors are up top where you’d expect and outputs are the standard two dual-link DVI and combined S-Video/Composite/Component sockets.

TEST SETUP

All our testing uses a variety of manual run-throughs and automated timedemos, but regardless of which test method is used we monitor all tests to ensure performance is consistent. Where there are spurious results or dips in performance we will note them. We also do multiple run-throughs, then take the average and report that figure to you. The test setup is as follows:

Common System Components

  • Intel Core 2 Quad QX9770
  • Asus P5E3
  • 2GB Corsair TWIN3X2048-1333C9 DDR3
  • 150GB Western Digital Raptor
  • Microsoft Windows Vista Home Premium 32-bit

Drivers

  • ATI: Catalyst 8.4
  • nVidia GTX200 Series: Forceware 177.34
  • Other nVidia cards: Forceware 175.16

Cards Tested

  • ATI HD 4870
  • ATI HD 3870
  • nVidia GeForce GTX 280
  • nVidia GeForce GTX 260
  • nVidia GeForce 9800 GTX

Games Tested

  • Crysis
  • Race Driver: GRID
  • Enemy Territory: Quake Wars
  • Call of Duty 4
  • Counter-Strike: Source

CRYSIS

While it hasn’t been a huge commercial success and its gameplay is far from revolutionary, the graphical fidelity of Crysis is still second to none and as such it’s still the ultimate test for a graphics card. With masses of dynamic foliage, rolling mountain ranges, bright blue seas, and big explosions, this game has all the eye-candy you could wish for and then some.

We test using the 32-bit version of the game patched to version 1.1 and running in DirectX 10 mode. We use a custom timedemo that’s taken from the first moments at the start of the game, wandering around the beach. Surprisingly, considering its claustrophobic setting and graphically rich environment, we find that any frame rate above 30fps is about sufficient to play this game.

All in-game settings are set to high for our test runs and we test with both 0xAA and 4xAA. Transparency anti-aliasing is also manually turned on through the driver, though this is obviously only enabled when normal AA is being used in-game.

That difference in memory bandwidth obviously has a significant impact on performance as the HD 4850 is markedly slower than the HD 4870. Oddly, though, the overclocking on Asus’ card doesn’t appear to make any significant difference in this title. As for the rest of the competition, the HD 4850 is fighting quite some battle and is currently losing to the nVidia 8800 GT.

ATI Radeon HD 4870 review


Not much good has happened for either party since AMD purchased ATI. New chips from both sides of the fence have been late, run hot, and underperformed compared to the competition. Meanwhile, the combined company has posted staggering financial losses, causing many folks to wonder whether AMD could continue to hold up its end of the bargain as junior partner in the PC market’s twin duopolies, for CPUs and graphics chips.


AMD certainly has its fair share of well-wishers, as underdogs often do. And a great many of them have been waiting with anticipation—you can almost hear them vibrating with excitement—for the Radeon HD 4800 series. The buzz has been building for weeks now. For the first time in quite a while, AMD would seem to have an unequivocal winner on its hands in this new GPU.

Our first peek at Radeon HD 4850 performance surely did nothing to quell the excitement. As I said then, the Radeon HD 4850 kicks more ass than a pair of donkeys in an MMA cage match. But that was only half of the story. What the Radeon HD 4870 tells us is that those donkeys are all out of bubble gum.

Uhm, or something like that. Keep reading to see what the Radeon HD 4800 series is all about.

The RV770 GPU
Work on the chip code-named RV770 began two and a half years ago. AMD’s design teams were, unusually, dispersed across six offices around the globe. Their common goal was to take the core elements of the underperforming R600 graphics processor and turn them into a much more efficient GPU. To make that happen, the engineers worked carefully on reducing the size of the various logic blocks on the chip without cutting out functionality. More efficient use of chip area allowed them to pack in more of everything, raising the peak capacity of the GPU in many ways. At the same time, they focused on making sure the GPU could more fully realize its potential by keeping key resources well fed and better managing the flow of data through the chip.

The fruit of their labors is a graphics processor whose elements look familiar, but whose performance and efficiency are revelations. Let’s have a look at a 10,000-foot overview of the chip, and then we’ll consider what makes it different.


A block diagram of the RV770 GPU. Source: AMD.

Some portions of the diagram above are too small to make out at first glance, I know. We’ll be looking at them in more detail in the following pages. The first thing you’ll want to notice here, though, is the number of processors in the shader array, which is something of a surprise compared to early rumors. The RV770 has 10 SIMD cores, as you can see, and each of them contains 16 stream processor units. You may not be able to see it above, but each of those SP units is a superscalar processing block comprised of five ALUs. Add it all up, and the RV770 has a grand total of 800 ALUs onboard, which AMD advertises as 800 “stream processors.” Whatever you call them, that’s a tremendous amount of computing power—well beyond the 320 SPs in the RV670 GPU powering the Radeon HD 3800 series. In fact, this is the first teraflop-capable GPU, with a theoretical peak of a cool one teraflops in the Radeon HD 4850 and up to 1.2 teraflops in the Radeon HD 4870. Nvidia’s much larger GeForce GTX 280 falls just shy of the teraflop mark.
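
The ALU count and the teraflop claims both fall straight out of the organization described above, assuming the standard two FLOPs (one multiply-add) per ALU per clock:

```python
# RV770 shader array: 10 SIMD cores, each with 16 superscalar SP units
# of 5 ALUs apiece, for AMD's advertised 800 "stream processors".
alus = 10 * 16 * 5
# Peak throughput at 2 FLOPs per ALU per clock, at each card's core clock (MHz).
gflops_4850 = alus * 2 * 625 / 1000    # 1000.0 GFLOPS: one teraflop
gflops_4870 = alus * 2 * 750 / 1000    # 1200.0 GFLOPS
print(alus, gflops_4850, gflops_4870)
```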

The blue blocks to the right of the SIMDs are texture units. The RV770’s texture units are now aligned with SIMDs, so that adding more shader power equates to adding more texturing power, as is the case with Nvidia’s recent GPUs. Accordingly, the RV770 has 10 texture units, capable of addressing and filtering up to 40 texels per clock, more than double the capacity of the RV670.

Across the bottom of the diagram, you can see the GPU’s four render back-ends, each of which is associated with a 64-bit memory interface. Like a bad tattoo, the four back-ends and 256 bits of total memory connectivity are telltale class indicators: this is decidedly a mid-range GPU. Yet the individual render back-ends on RV770 are vastly more powerful than their predecessors, and the memory controllers have one heck of a trick up their sleeves in the form of support for GDDR5 memory, which enables substantially more bandwidth over every pin.

Despite all of the changes, the RV770 shares the same basic feature set with the RV670 that came before it, including support for Microsoft’s DirectX 10.1 standard. The big news items this time around are (sometimes major) refinements, including formidable increases in texturing capacity, shader power, and memory bandwidth, along with efficiency improvements throughout the design.

The chip
Like the RV670 before it, the RV770 is fabricated at TSMC on a 55nm process, which packs its roughly 956 million transistors into a die measuring about 16mm per side, for a total area of 260 mm². The chip has grown from the RV670, but not as much as one might expect given its increases in capacity. The RV670 weighed in at an estimated 666 million transistors and was 192 mm².

Of course, AMD’s new GPU is positively dwarfed by Nvidia’s GT200, a 577 mm² behemoth made up of 1.4 billion transistors. But the more relevant comparisons may be to Nvidia’s mid-range GPUs. The first of those GPUs, of course, is the G92, a 65nm chip that’s behind everything from the GeForce 8800 GT to the GeForce 9800 GTX. That chip measured out, with our shaky ruler, to more or less 18mm per side, or 324 mm². (Nvidia doesn’t give out official die size specs anymore, so we’re reduced to this.) The second competing GPU from Nvidia is a brand-new entrant, the 55nm die shrink of the G92 that drives the newly announced GeForce 9800 GTX+. The GTX+ chip has the same basic transistor count of 754 million, but, well, have a look. The pictures below were all taken with the camera in the same position, so they should be pretty much to scale.
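
For a square die, the side lengths and areas quoted here relate as below; a 260 mm² die works out to roughly 16.1mm per side, consistent with the ~16mm estimates in the text:

```python
import math

# Die sizes quoted in the text: a square die's side length versus its area.
rv770_area = 260            # mm^2, AMD's 55nm RV770
g92_side = 18               # mm, the "shaky ruler" estimate for the 65nm G92
print(g92_side ** 2)                    # 324 mm^2, as quoted
print(round(math.sqrt(rv770_area), 1))  # ~16.1 mm per side for the RV770
```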


Nvidia’s G92

The RV770

The die-shrunk G92 at 55nm aboard the GeForce 9800 GTX+

Yeah, so apparently I have rotation issues. These things should not be difficult, I know. Hopefully you can still get a sense of comparative size. By my measurements, interestingly enough, the 55nm GTX+ chip looks to be 16 mm per side and thus 260 mm², just like the RV770. That’s despite the gap in transistor counts between the RV770 and G92, but then Nvidia and AMD seem to count transistors differently, among a multitude of other variables at work here.

The pictures below will give you a closer look at the chip’s die itself. The second one even locates some of the more important logic blocks.


A picture of the RV770 die. Source: AMD.

The RV770 die’s functional units highlighted. Source: AMD.

As you can see, the RV770’s memory interface and I/O blocks form a ring around the periphery of the chip, while the SIMD cores and texture units take up the bulk of the area in the middle. The SIMDs and the texture units are in line with one another.

What’s in the cards
Initially, the Radeon HD 4800 series will come in two forms, powder and rock. Err, I mean, 4850 and 4870. By now, you may already be familiar with the 4850, which has been selling online for a number of days.

Here’s a look at our review sample from Sapphire. The stock clock on the 4850 is 625MHz, and that clock governs pretty much the whole chip, including the shader core. These cards come with 512MB of GDDR3 memory running at 993MHz, for an effective 1986MT/s. AMD pegs the max thermal/power rating (or TDP) of this card at 110W. As a result, the 4850 needs only a single six-pin aux power connector to stay happy.

Early on, AMD suggested the 4850 would sell for about $199 at online vendors, and so far, street prices seem to jibe with that, by and large.

And here we have the big daddy, the Radeon HD 4870. This card’s much beefier cooler takes up two slots and sends hot exhaust air out of the back of the case. The bigger cooler and dual six-pin power connections are necessary given the 4870’s 160W TDP.

Cards like this one from VisionTek should start selling online today at around $299. That’s another hundred bucks over the 4850, but then you’re getting a lot more card. The 4870’s core clock is 750MHz, and even more importantly, it’s paired up with 512MB of GDDR5 memory. The base clock on that memory is 900MHz, but it transfers data at a rate of 3600MT/s, which means the 4870’s peak memory bandwidth is nearly twice that of the 4850.
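
The "nearly twice" claim is easy to verify from the effective data rates over the shared 256-bit bus:

```python
# Peak memory bandwidth: effective rate (MT/s) * bus width (bytes) / 1000.
bus_bytes = 256 // 8                  # 256-bit bus -> 32 bytes per transfer
bw_4850 = 1986 * bus_bytes / 1000     # GDDR3 at 1986 MT/s
bw_4870 = 3600 * bus_bytes / 1000     # GDDR5 at 3600 MT/s
print(round(bw_4850, 1), bw_4870, round(bw_4870 / bw_4850, 2))
# 63.6 GB/s vs 115.2 GB/s: a factor of ~1.81, i.e. nearly twice
```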

Both the 4870 and the 4850 come with dual CrossFire connectors along the top edge of the card, and both can participate in CrossFireX multi-GPU configurations with two, three, or four cards daisy-chained together.

Nvidia’s response
The folks at Nvidia aren’t likely to give up their dominance at the $199 sweet spot of the video card market without a fight. In response to the release of the Radeon HD 4850, they’ve taken several steps to remain competitive. Most of those steps involve price cuts. Stock-clocked versions of the GeForce 9800 GTX have dropped to $199 to match the 4850. Meanwhile, you have higher clocked cards like this one:

This “XXX Edition” card from XFX comes with core and shader clocks of 738 and 1836MHz, respectively, up from 675/1688MHz stock, along with 1144MHz memory. XFX bundles this card with a copy of Call of Duty 4 for $239 at Newegg, along with a $10.00 mail-in rebate, which gives you maybe better-than-even odds of getting a check for ten bucks at some point down the line, if you’re into games of chance.

Cards like this “XXX Edition” will serve as a bridge of sorts for Nvidia’s further answer to the Radeon HD 4850 in the form of the GeForce 9800 GTX+. Those cards will be based on the 55nm die shrink of the G92 GPU, and they’ll share the XXX Edition’s 738MHz core and 1836MHz shader clocks, although their memory will be slightly slower at 1100MHz. Nvidia expects GTX+ cards to be available in decent quantities by July 16 at around $229.

For most intents and purposes, these two cards should be more or less equivalent, performance included. The GTX+ shares the 9800 GTX’s dual-slot cooler and layout, as well. As a result, and because of time constraints, we’ve chosen to include only the XXX Edition in most of our testing. The exception is where the 55nm chip is likely to make the biggest difference: power draw and the related categories of heat and noise. We’ve tested the 9800 GTX+ separately in those cases.

Nvidia has also decided to sweeten the pot a little bit by supplying us with drivers that endow the GeForce 9800 GTX and GTX 200-series cards with support for GPU-accelerated physics via the PhysX API. You’ll see early results from those drivers in our 3DMark Vantage performance numbers.

ATI Radeon HD 5770 & 5750


The last 30 days have seen a ton of new technology, from Intel’s Lynnfield-based Core i5 and Core i7 processors (which we reviewed here, tested in a number of different games with CrossFire and SLI setups here, and measured the effect of integrated PCI Express 2.0 right here) to ATI’s Cypress graphics processor (manifest through the Radeon HD 5870 and Radeon HD 5850). Between those launch stories, I’ve run thousands of benchmark numbers and written tens of thousands of words. Thus, when I sat down to write this Radeon HD 5770/5750 review (after running another 500+ tests), I had to mix it up a bit and have a little fun with the intro. Feel free to read while listening to Biz Markie’s Just A Friend.

Have you ever seen a card that you wanted to buy?
Killer performance, but a price sky-high?
Let me tell you a story of my situation;
I game on PCs, forget Playstation.
The tech that I like is really high-end.
But I gotta get by with a couple Benjamins.
I upgrade once a year, whenever I can.
Processors, hard drives, graphics cards, RAM.
i7 looked great; I bought i5.
Now it’s time for new graphics; make my games look live.
I know of Nvidia; I know ATI.
So many boards between ‘em, makes me want to cry.
G92’s been around, and that’s a fact.
Couldn’t find 740; that launch was whack.
But I’ve pulled my wallet out and I’m ready to buy.
I want something new; no shrunken die.
Read Chris’ Cypress story; that card looked hot.
If I had four bones, it’d already be bought.

Come onnnnnn, I can’t even afford that.
I’m looking for something under $200, man.

And here’s where ATI chimes in…

We’ve…we’ve got what you need. And you say you have $160 to spend?
And you say you have $160 to spend? Oh gamer…
We’ve…we’ve got what you need. And you say you have $160 to spend?
And you say you have $160 to spend? Oh gamer…
We’ve…we’ve got what you need. And you say you have $160 to spend?
And you say you have $160 to spend?

Last Year’s Flagship Is This Year’s Mid-Range

Meet the Radeon HD 5770...

If the Radeon HD 5870 was characterized by roughly twice the computing resources as Radeon HD 4870, then the Radeon HD 5770 represents a halving of Radeon HD 5870. You’d think that’d yield something that looks a lot like the Radeon HD 4870 to which you’re already accustomed—and you’d be close to correct.

The Radeon HD 4870 is based on ATI’s 55nm RV770, sporting 956 million transistors on a 260 square millimeter die. It boasts 800 ALUs (shader processors), 40 texture units, a 256-bit memory interface armed with GDDR5 memory (cranking out 115.2 GB/s), and a depth/stencil rate of 64 pixels per clock.

...and the Radeon HD 5750

In contrast, ATI’s 40nm Juniper GPU is made up of 1.04 billion transistors. It also wields 800 shader processors, 40 texture units, and a depth/stencil rate of 64 pixels per clock. But its memory interface, being a halved version of Cypress’, is only 128 bits wide. Nevertheless, ATI arms it with GDDR5 memory able to move up to 76.8 GB/s.

Right off the bat, we knew that this was going to be a very tough comparison—not only between ATI and Nvidia, but also between ATI and its own lineup of products. Yes, both of these new cards leverage DirectX 11 support. They both offer three digital display outputs split between DVI, HDMI, and DisplayPort connectors. And the pair is able to bitstream Dolby TrueHD and DTS-HD Master Audio from your home theater PC to your compatible receiver via HDMI 1.3, too.

But with specs that look roughly on par with the Radeon HD 4870 and Radeon HD 4770, anyone who recently purchased one of those previous-generation boards is bound to feel smug about the performance we see in this write-up—at least until DirectX 11 applications start emerging in greater numbers.

So, what’s the verdict? Is the Radeon HD 5770 worth paying $160 for amongst $145 Radeon HD 4870s? Is the 1GB Radeon HD 5750 worth its $129 price tag in comparison to the $120 Radeon HD 4770 (with 512MB) or even Nvidia’s GeForce GTS 250 at a similar price? Let’s dig into the speeds, feeds, numbers, and multimedia tests for more.

ATI’s Radeon HD 5770 And 5750

There’s really no need to rehash all of the architectural elements that comprise the Radeon HD 5770 and 5750—if you want to know more about how ATI improved this generation’s architecture over RV770, check out our original Radeon HD 5870 review. When I say that the Radeon HD 5770 is half of that flagship, I’m being literal.

As mentioned, the Juniper GPU consists of 1.04 billion transistors (to Cypress’ 2.15 billion). It sports 800 ALUs (to Cypress’ 1,600). It leverages 40 texture units (to Cypress’ 80). It boasts 16 ROPs (to Cypress’ 32). I think you get the picture here. If not, a die block diagram comparison should do the trick:

Mid-Range: Juniper
High-End: Cypress

Even the speeds and feeds work out comparatively. Radeon HD 5870 employs 1GB of GDDR5 memory running at 1,200 MHz, delivering 153.6 GB/s. The Radeon HD 5770 also sports 1GB of GDDR5 at 1,200 MHz, serving up 76.8 GB/s. Eight hundred “shader processors” times 850 MHz times two gives the Radeon HD 5770 1.36 TFLOPS of compute power, versus the 5870’s 1,600 * 850 MHz * 2 = 2.72 TFLOPS.
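The throughput math above can be written out as a quick sanity check. Each stream processor is counted as performing two floating-point operations (a multiply-add) per clock, which is how these peak figures are derived:

```python
def peak_tflops(alus, core_clock_mhz):
    """Peak single-precision compute: ALUs x core clock x 2 ops (multiply-add)."""
    return alus * core_clock_mhz * 1e6 * 2 / 1e12

print(peak_tflops(800, 850))   # Radeon HD 5770 -> 1.36 TFLOPS
print(peak_tflops(1600, 850))  # Radeon HD 5870 -> 2.72 TFLOPS
```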

                      Radeon HD 5770   Radeon HD 5750   Radeon HD 4870
Compute Performance   1.36 TFLOPS      1.008 TFLOPS     1.2 TFLOPS
Transistors           1.04 billion     1.04 billion     0.956 billion
Memory Bandwidth      76.8 GB/s        73.6 GB/s        115.2 GB/s
AA Resolve            64               64               64
Z/Stencil             64               64               64
Texture Units         40               36               40
Shaders (ALUs)        800              720              800
Idle Board Power      18W              16W              90W
Active Board Power    108W             86W              160W

Thus, all of the same architectural balancing that went into the Radeon HD 5870 should carry over here, and we should see a performance picture as good as or better than what ATI’s Radeon HD 4870 was able to deliver, given that card’s 800 shader processors at 750 MHz (totaling 1.2 TFLOPS) and GDDR5 memory running at 900 MHz.

Oh, but there’s a rub. The Radeon HD 4870 also employed a 256-bit bus, giving it 115.2 GB/s of memory bandwidth. We’ll have to see how that notably different specification affects the overall performance picture. If there’s an Achilles’ heel that causes the 5770 to stumble, that will be it.

The Radeon HD 5750 centers on the same Juniper GPU as its big brother. ATI disables one of the chip’s 10 SIMD cores, switching off 80 ALUs and four texture units. The processor’s core clock is then decelerated to 700 MHz, yielding a nice round 1 TFLOPS of compute muscle. ATI doesn’t mess with the GPU’s back-end, so you still get 16 ROPs and a 128-bit memory bus loaded with 1GB of GDDR5 memory. However, the clocks there are slightly lower too, yielding 73.6 GB/s from the 1,150 MHz RAM.
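The 5750’s figures fall out of the same formulas. A sketch, using the numbers in the text (720 active ALUs after one of the ten SIMD cores is disabled, a 700 MHz core, and 1,150 MHz GDDR5 on a 128-bit bus):

```python
# 9 of 10 SIMD cores active: 9 x 80 = 720 ALUs at 700 MHz
alus, core_mhz = 720, 700
compute_tflops = alus * core_mhz * 1e6 * 2 / 1e12
print(compute_tflops)  # -> 1.008 TFLOPS

# 1,150 MHz GDDR5 (4 transfers per clock) on a 128-bit bus
mem_mhz, bus_bits = 1150, 128
bandwidth_gbs = mem_mhz * 1e6 * 4 * (bus_bits / 8) / 1e9
print(bandwidth_gbs)  # -> 73.6 GB/s
```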

The Boards

The Radeon HD 5770 itself is shorter than the Radeon HD 5850, which was already shorter than the behemoth Radeon HD 5870. At 8.5” (an inch less than the 5850), it’s a very chassis-friendly card.

As with the larger Cypress board, the 5770 employs rear-mounted auxiliary power, though it only needs one connector instead of two. Further, ATI recesses the plug a bit, so protruding cables are less likely to get in the way.

Back of the Radeon HD 5770

We were already blown away by ATI’s efforts to minimize power consumption with the Radeon HD 5870 and 5850. However, the smaller Juniper die is even more miserly. At idle, the Radeon HD 5770 is rated at just 18W (down from the 5850’s 27W and the 4870’s ravenous 90W). Under load, the Radeon HD 5770 uses just 108W (versus the 5850’s 151W). Already you can see how this might be the world’s most perfect HTPC card. But wait…there’s more.

ATI’s Radeon HD 5750 sports an entirely different design. Up until now, all of the 5000-series cards have featured enclosed shrouds with blower-type coolers that exhaust air from a vent on each card’s I/O bracket. The Radeon HD 5750 instead uses a simpler dual-slot heatsink/fan combination. Its PCB is shorter still at 7.25”, and it likewise comes equipped with a single auxiliary power connector.

Back of the Radeon HD 5750

This could be an even better solution for big-screen gamers and theater enthusiasts. Lower clocks and a simpler cooling implementation mean a slightly more conservative 16W idle footprint, and a load requirement of up to 86W. As we’ll see in the benchmarks, this is no speed demon (at the risk of ruining several pages worth of data, it’s a bit quicker than a Radeon HD 4770); however, you’ll find that’s often enough to play at 1920×1080. And the addition of Eyefinity/bitstreaming really makes the 5750 a shoo-in for quiet environments in need of performance and better functionality.