Intel Arc Alchemist: Release Date, Specs, Everything We Know

Intel has been hyping up Xe Graphics for about two years, but the Intel Arc Alchemist GPU will finally bring some needed performance and competition from Team Blue to the discrete GPU space. This is the first ‘real’ dedicated Intel GPU since the i740 back in 1998 — or technically, a proper discrete GPU after the Intel Xe DG1 paved the way last year. The competition among the best graphics cards is fierce, and Intel’s current integrated graphics solutions basically don’t even rank on our GPU benchmarks hierarchy (UHD Graphics 630 sits at 1.8% of the RTX 3090 based on just 1080p medium performance).

Could Intel, purveyor of low-performance integrated GPUs—”the most popular GPUs in the world”—possibly hope to compete? Yes, it can. Sort of. Plenty of questions remain, but with the official China-first launch of Intel Arc Alchemist laptops and at least one desktop card now behind us, plus additional details of the Alchemist GPU architecture revealed at Intel Architecture Day 2021, we now have a reasonable idea of what to expect. Intel has been gearing up its driver team for the launch, fixing compatibility and performance issues on existing graphics solutions, hopefully getting ready for the US and “rest of the world” launch. Frankly, there’s nowhere to go from here but up.

The difficulty Intel faces in cracking the dedicated GPU market shouldn’t be underestimated. AMD’s Big Navi / RDNA 2 architecture has competed with Nvidia’s Ampere architecture since late 2020. While the first Xe GPUs arrived in 2020, in the form of Tiger Lake mobile processors, and Xe DG1 showed up by the middle of 2021, neither one can hope to compete with even GPUs from several generations back. Overall, Xe DG1 performed about the same as Nvidia’s GT 1030 GDDR5, a weak-sauce GPU hailing from May 2017. It also delivered a bit more than half the performance of 2016’s GTX 1050 2GB, despite having twice as much memory.

Intel has a steep mountain to ascend if it wants to be taken seriously in the dedicated GPU space. Here’s our breakdown of the Arc Alchemist architecture, a look at the announced products, and some Intel-provided benchmarks, all of which give us a glimpse into how Intel hopes to reach the summit. Truthfully, we’re just hoping Intel can make it to base camp, leaving the actual summiting for the future Battlemage, Celestial, and Druid architectures. But we’ll leave those for a future discussion.

(Image credit: Intel)

Intel Arc Alchemist At A Glance

Specs: Up to 512 Vector Units / 4096 Shader Cores
Memory: Likely up to 16GB GDDR6
Process: TSMC N6 (refined N7)
Performance: RTX 3060 Ti / RX 6700 level, maybe?
Release Date: Q3 2022 (US, already launched in China)
Price: Intel needs to be competitive

Intel’s Xe Graphics aspirations hit center stage in early 2018, starting with the hiring of Raja Koduri from AMD, followed by chip architect Jim Keller and graphics marketer Chris Hook, to name just a few. Raja was the driving force behind AMD’s Radeon Technologies Group, created in November 2015, along with the Vega and Navi architectures. Clearly, the hope is that he can help lead Intel’s GPU division into new frontiers, and Arc Alchemist represents the results of several years worth of labor.

Not that Intel hasn’t tried this before. Besides the i740 in 1998, Larrabee and the Xeon Phi had similar goals back in 2009, though the GPU aspect never really panned out. Plus, Intel has steadily improved the performance and features in its integrated graphics solutions over the past couple of decades (albeit at a slow and steady snail’s pace). So, third time’s the charm, right?

There’s much more to building a good GPU than just saying you want to make one, and Intel has a lot to prove. Here’s everything we know about the upcoming Intel Arc Alchemist, including specifications, performance expectations, release date, and more.

Potential Intel Arc Alchemist Specifications and Price

This concept rendering of Intel’s Xe Graphics is a reasonable guess at what a larger card could look like, but definitely not the final product. (Image credit: Gunnir)

We’ll get into the details of the Arc Alchemist architecture below, but let’s start with the high-level overview. We know that Intel currently has two different Arc Alchemist GPU dies, covering three different product families. The middle tier uses a harvested (partially disabled) version of the larger die.

Intel has listed five different mobile SKUs, the A350M, A370M, A550M, A730M, and A770M, but so far it has only officially given the details for a single desktop A380 part. We expect there will eventually be several different desktop versions as well, though the demand may not be particularly high unless performance improves quite a bit in the coming months.

Here are the specifications for the two Arc chips that Intel has revealed.

Intel Arc Alchemist Specifications
Specification | Arc High-End | Arc Entry
GPU | Arc ACM-G10 | Arc ACM-G11
Process node | TSMC N6 | TSMC N6
Transistors (billion) | ~20? | ~8?
Die size (mm^2) | ~396 (24 × 16.5 mm) | ~153 (12.4 × 12.4 mm)
Xe Cores | 32 | 8
Vector Engines | 512 | 128
GPU cores (ALUs) | 4096 | 1024
Clock (GHz) | 1.1~2.5? | 1.15~2.5?
L2 Cache | 16MB | 4MB
VRAM Speed (Gbps) | 16? | 14–16
VRAM (GB) | 16 GDDR6 | 6 GDDR6
Bus width (bits) | 256 | 96
ROPs | 128? | 32?
TMUs | 256? | 64?
TFLOPS | 3.7~18.4? | 1.8~4.7?
Bandwidth (GB/s) | 512? | 168–192?
TBP (watts) | 200? | 75?
Launch Date | Q3 2022 | Q3 2022
Launch Price | $599? | $149?

As we dig deeper throughout this article, we’ll discuss where some of the above information comes from, but those are Intel’s official core specs on the full large and small Arc Alchemist chips. Based on the wafer and die shots, along with other information, we expect Intel to enter the dedicated GPU market (not counting the DG1) with products spanning the entire budget to high-end range.

Intel has detailed three different Arc families, an entry-level A300-series, the midrange A500 series, and the high-end A700 series. The desktop product names haven’t been announced, but Intel has detailed the full mobile lineup. Unfortunately, Intel has decided to launch the Arc products, both mobile and desktop, in China first. That’s not a good look, especially since one of Intel’s previous “China only” products was Cannon Lake, with the Core i3-8121U that basically only just saw the light of day before getting buried deep underground.

Prices and some of the finer details are estimates based on the Chinese market and some of the Intel-provided information. We know the range of theoretical performance (TFLOPS), but actual real-world performance will depend on drivers, which have been a sticking point for Intel in the past. Gaming performance will play a big role in determining how much Intel can charge for the various graphics card models.

As shown in our GPU price index, the prices of competing AMD and Nvidia GPUs have plummeted this year. Intel would have been in great shape if it had managed to launch Arc at the start of the year with reasonable prices, which was the original plan (actually, late 2021 was at one point in the cards). Many gamers might have given Intel GPUs a shot if they were priced at half the cost of the competition, even if they were slower. Now, even Intel’s own performance data doesn’t give us a lot of hope for truly competitive products — unless you’re primarily interested in AV1 encoding performance.

That takes care of the high-level overview. Now let’s dig into the finer points and discuss where these estimates come from.

Arc Alchemist: Performance According to Intel

Intel has provided us with reviewer’s guides for both its mobile Arc GPUs and the desktop Arc A380. As with any manufacturer-provided benchmarks, you should expect the games and settings used were selected to show Arc in the best light possible. Intel tested 17 games for laptops and desktops, but the game selection isn’t even identical, which is a bit weird. It then compared performance against two mobile GeForce solutions for the laptop parts, and against the GTX 1650 and RX 6400 for the desktop A380. There’s a lot of missing data, since the mobile chips represent the two fastest Arc solutions, but let’s get to the actual numbers first.

Intel Arc A700M Mobile GPU Comparison — Intel Provided Benchmarks
Game | Arc A770M | RTX 3060 | Arc A730M | RTX 3050 Ti
17 Game Geometric Mean | 88.3 | 78.8 | 64.6 | 57.2
Assassin’s Creed Valhalla (High) | 69 | 74 | 50 | 38
Borderlands 3 (Ultra) | 76 | 60 | 50 | 45
Control (High) | 89 | 70 | 62 | 42
Cyberpunk 2077 (Ultra) | 68 | 54 | 49 | 39
Death Stranding (Ultra) | 102 | 113 | 87 | 89
Dirt 5 (High) | 87 | 83 | 61 | 64
F1 2021 (Ultra) | 123 | 96 | 86 | 68
Far Cry 6 (Ultra) | 82 | 80 | 68 | 63
Gears of War 5 (Ultra) | 73 | 72 | 52 | 58
Horizon Zero Dawn (Ultimate Quality) | 68 | 80 | 50 | 63
Metro Exodus (Ultra) | 69 | 53 | 54 | 39
Red Dead Redemption 2 (High) | 77 | 66 | 60 | 46
Strange Brigade (Ultra) | 172 | 134 | 123 | 98
The Division 2 (Ultra) | 86 | 78 | 51 | 63
The Witcher 3 (Ultra) | 141 | 124 | 101 | 96
Total War Saga: Troy (Ultra) | 86 | 71 | 66 | 48
Watch Dogs Legion (High) | 89 | 77 | 71 | 59

We’ll start with the mobile benchmarks, since Intel used its two high-end models for these. Based on the numbers, Intel suggests its A770M can outperform the RTX 3060 mobile, and the A730M can outperform the RTX 3050 Ti mobile. The overall scores put the A770M 12% ahead of the RTX 3060 and the A730M 13% ahead of the RTX 3050 Ti. However, looking at the individual game results, the A770M was anywhere from 15% slower to 30% faster, and the A730M was 21% slower to 48% faster.

That’s a big spread in performance, and tweaks to some settings could have a significant impact on the fps results. Still, overall the list of games and settings used here looks pretty decent. However, Intel used laptops equipped with the older Core i7-11800H CPU on the Nvidia cards, and then used the latest and greatest Core i9-12900HK for the A770M and the Core i7-12700H for the A730M. There’s no question that the Alder Lake CPUs are faster than the previous generation Tiger Lake variants, though without doing our own testing we can’t say for certain how much CPU bottlenecks come into play.

There’s also the question of how much power the various chips used, as the Nvidia GPUs have a wide power range. The RTX 3050 Ti can run at anywhere from 35W to 80W (Intel used a 60W model), and the RTX 3060 mobile has a range from 60W to 115W (Intel used an 85W model). Intel’s Arc GPUs also have a power range, from 80W to 120W on the A730M and from 120W to 150W on the A770M. While Intel didn’t specifically state the power levels of its own GPUs, they would have to be higher than the Nvidia parts’ in both cases.

Intel Arc A380 GPU Comparison — Intel Provided Benchmarks
Games | Intel Arc A380 | GeForce GTX 1650 | Radeon RX 6400
17 Game Geometric Mean | 96.4 | 114.5 | 105.0
Age of Empires 4 | 80 | 102 | 94
Apex Legends | 101 | 124 | 112
Battlefield V | 72 | 85 | 94
Control | 67 | 75 | 72
Destiny 2 | 88 | 109 | 89
DOTA 2 | 230 | 267 | 266
F1 2021 | 104 | 112 | 96
GTA V | 142 | 164 | 180
Hitman 3 | 77 | 89 | 91
Naraka Bladepoint | 70 | 68 | 64
NiZhan | 200 | 200 | 200
PUBG | 78 | 107 | 95
The Riftbreaker | 113 | 141 | 124
The Witcher 3 | 85 | 101 | 81
Total War: Troy | 78 | 98 | 75
Warframe | 77 | 98 | 98
Wolfenstein Youngblood | 95 | 130 | 96

Switching over to the desktop side of things, Intel provided the above A380 benchmarks. Note that this time the target is much lower, with the GTX 1650 and RX 6400 budget GPUs going up against the A380. Intel should still launch high-end A780 cards at some point, but for now it’s going after the budget desktop market.

Even with the usual caveats about manufacturer provided benchmarks, things aren’t looking too good for the A380. The Radeon RX 6400 delivered 9% better performance than the Arc A380, with a range of -9% to +31%. The GTX 1650 did even better, with a 19% overall margin of victory and a range of just -3% up to +37%.

And look at the list of games: Age of Empires 4, Apex Legends, DOTA 2, GTA V, Naraka Bladepoint, NiZhan, PUBG, Warframe, The Witcher 3, and Wolfenstein Youngblood? Some of those are more than five years old, several are known to be pretty light in terms of requirements, and in general that’s not a list of demanding titles. We get the idea of going after esports competitors, sort of, but wouldn’t a serious esports gamer already have something more potent than a GTX 1650?

Keep in mind that Intel potentially has a part that will have four times as much raw compute, which we expect to see in an Arc A780 at some point. If drivers and performance don’t hold it back, such a card could still theoretically match the RTX 3070 and RX 6700 XT, but drivers are very much a concern right now.

Arc Alchemist: Beyond the Integrated Graphics Barrier 

(Image credit: Intel)

Over the past decade, we’ve seen several instances where Intel’s integrated GPUs have basically doubled in theoretical performance. Despite the improvements, Intel frankly admits that integrated graphics solutions are constrained by many factors: Memory bandwidth and capacity, chip size, and total power requirements all play a role.

While CPUs that consume up to 250W of power exist — Intel’s Core i9-12900K and Core i9-11900K both fall into this category — competing CPUs that top out at around 145W are far more common (e.g., AMD’s Ryzen 5900X or the Core i7-12700K). Plus, integrated graphics have to share all of those resources with the CPU, which means the GPU is typically limited to about half of the total power budget. In contrast, dedicated graphics solutions have far fewer constraints.

Consider the first-generation Xe-LP Graphics found in Tiger Lake (TGL). Most of the chips have a 15W TDP, and even the later-gen 8-core TGL-H chips only use up to 45W (65W configurable TDP). Except TGL-H also cut the GPU budget down to 32 EUs (Execution Units), whereas the lower-power TGL chips had 96 EUs. The new Alder Lake desktop chips also use 32 EUs, though the mobile H-series parts get 96 EUs and a higher power limit.

Regardless, top AMD and Nvidia dedicated graphics cards like the Radeon RX 6900 XT and GeForce RTX 3080 Ti have a power budget of 300W to 350W for the reference design, with custom cards pulling as much as 400W. We don’t know precisely how high Intel plans to go on power use with Arc Alchemist, but it could go as high as 300W. What could an Intel GPU do with 20X more power available? We’ll find out if and when such a desktop part launches.

Intel Arc Alchemist Architecture 

(Image credit: Intel)

Intel may be a newcomer to the dedicated graphics card market, but it’s by no means new to making GPUs. Current Alder Lake (as well as the previous generation Rocket Lake and Tiger Lake) CPUs use the Xe Graphics architecture, the 12th generation of graphics updates from Intel.

The first generation of Intel graphics was found in the i740 and 810/815 chipsets for socket 370, back in 1998-2000. Arc Alchemist, in a sense, is second-gen Xe Graphics (i.e., Gen13 overall), and it’s common for each generation of GPUs to build on the previous architecture, adding various improvements and enhancements. The Arc Alchemist architecture changes are apparently large enough that Intel has ditched the Execution Unit naming of previous architectures and the main building block is now called the Xe-core.

To start, Arc Alchemist will support the full DirectX 12 Ultimate feature set. That means the addition of several key technologies. The headline item is ray tracing support, though that might not be the most important in practice. Variable rate shading, mesh shaders, and sampler feedback are also required — all of which are also supported by Nvidia’s RTX 20-series Turing architecture from 2018, if you’re wondering. Sampler feedback helps to optimize the way shaders work on data and can improve performance without reducing image quality.

The Xe-core contains 16 Vector Engines (formerly called Execution Units), each of which operates on a 256-bit SIMD chunk (single instruction, multiple data). A Vector Engine can process eight FP32 operations simultaneously, and each of those lanes is what AMD and Nvidia architectures traditionally call a “GPU core,” though that’s a misnomer. The Vector Engine supports other data types as well, including FP16 and INT8 (the latter via DP4a instructions), and it’s joined by a second new pipeline, the XMX Engine (Xe Matrix eXtensions).

Each XMX pipeline operates on a 1024-bit chunk of data, which can contain 64 individual pieces of FP16 data. The Matrix Engines are effectively Intel’s equivalent of Nvidia’s Tensor cores, and they’re being put to similar use. They offer a huge amount of potential FP16 and INT8 computational performance, and should prove very capable in AI and machine learning workloads. More on this below.

(Image credit: Intel)

Xe-core represents just one of the building blocks used for Intel’s Arc GPUs. Like previous designs, the next level up from the Xe-core is called a render slice (analogous to an Nvidia GPC, sort of) that contains four Xe-core blocks. In total, a render slice contains 64 Vector and Matrix Engines, plus additional hardware. That additional hardware includes four ray tracing units (one per Xe-core), geometry and rasterization pipelines, samplers (TMUs, aka Texture Mapping Units), and the pixel backend (ROPs).

The above block diagrams may or may not be fully accurate down to the individual block level. For example, looking at the diagrams, it would appear each render slice contains 32 TMUs and 16 ROPs. That would make sense, but Intel has not yet confirmed those numbers (even though that’s what we used in the above specs table).
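To put those building blocks in context, here's a minimal Python sketch that rolls the per-slice figures described above up into whole-chip totals. The per-slice TMU and ROP counts are our estimates from the block diagrams, not Intel-confirmed numbers.

```python
from dataclasses import dataclass

@dataclass
class ArcConfig:
    render_slices: int
    xe_cores_per_slice: int = 4
    vector_engines_per_xe_core: int = 16
    fp32_lanes_per_vector_engine: int = 8   # 256-bit SIMD / 32-bit floats
    rt_units_per_slice: int = 4             # one ray tracing unit per Xe-core
    tmus_per_slice: int = 32                # our estimate, not Intel-confirmed
    rops_per_slice: int = 16                # our estimate, not Intel-confirmed

    @property
    def xe_cores(self) -> int:
        return self.render_slices * self.xe_cores_per_slice

    @property
    def vector_engines(self) -> int:
        return self.xe_cores * self.vector_engines_per_xe_core

    @property
    def alus(self) -> int:
        return self.vector_engines * self.fp32_lanes_per_vector_engine

acm_g10 = ArcConfig(render_slices=8)  # 32 Xe-cores, 512 Vector Engines, 4096 ALUs
acm_g11 = ArcConfig(render_slices=2)  # 8 Xe-cores, 128 Vector Engines, 1024 ALUs
print(acm_g10.alus, acm_g11.alus)     # 4096 1024
```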

The ray tracing units are perhaps the most interesting addition, but other than their presence and their capabilities — they can do ray traversal, bounding box intersection, and triangle intersection — we don’t have any details on how the RT units compare to AMD’s ray accelerators or Nvidia’s RT cores. Are they faster, slower, or similar in overall performance? We’ll have to wait to get hardware in hand to find out for sure.

Intel did provide a demo of Alchemist running an Unreal Engine demo that uses ray tracing, but it’s for an unknown game, running at unknown settings … and running rather poorly, to be frank. Hopefully that’s because this is early hardware and drivers, but skip to the 4:57 mark in this Arc Alchemist video from Intel to see it in action. Based on what was shown there, we suspect Intel’s Ray Tracing Units will be similar to AMD’s Ray Accelerators, which means even the top Arc Alchemist GPU will only be roughly comparable to AMD’s Radeon RX 6600 XT — not a great place to start, but then RT performance and adoption still aren’t major factors for most gamers.

(Image credit: Intel)

Finally, Intel uses multiple render slices to create the entire GPU, with the L2 cache and the memory fabric tying everything together. Also not shown are the video processing blocks and output hardware, and those take up additional space on the GPU. The maximum Xe HPG configuration for the initial Arc Alchemist launch will have up to eight render slices. Ignoring the change in naming from EU to Vector Engine, that still gives the same maximum configuration of 512 EU/Vector Engines that’s been rumored for the past 18 months.

Intel includes 2MB of L2 cache per render slice, so 4MB on the smaller ACM-G11 and 16MB total on the ACM-G10. There will be multiple Arc configurations, though. So far, Intel has shown a smaller chip with two render slices and the larger chip used in the above block diagram with eight render slices. Given how much benefit AMD saw from its Infinity Cache, we have to wonder how much the 16MB cache will help with Arc performance. Even the smaller 4MB L2 cache is larger than what Nvidia uses on its GPUs, where the GTX 1650 only has 1MB of L2 and the RTX 3050 has 2MB.

While it doesn’t sound like Intel has specifically improved throughput on the Vector Engines compared to the EUs in Gen11/Gen12 solutions, that doesn’t mean performance hasn’t improved. DX12 Ultimate includes some new features that can also help performance, but the biggest change comes via boosted clock speeds. Intel’s Arc A380 clocks at up to 2.45 GHz (boost clock), while the Arc A770M only runs at up to 1.65 GHz, but we expect the desktop variants to also land in the 2.0–2.5 GHz range. With potential clock speeds of 2.4 GHz (give or take) for the desktop Arc GPUs, that yields a significant amount of raw compute.

The maximum configuration of Arc Alchemist will have up to eight render slices, each with four Xe-cores, 16 Vector Engines per Xe-core, and each Vector Engine can do eight FP32 operations per clock. Double that for FMA operations (Fused Multiply Add, a common matrix operation used in graphics workloads), then multiply by a potential 2.4 GHz clock speed, and we get the theoretical performance in GFLOPS:

8 (RS) * 4 (Xe-core) * 16 (VE) * 8 (FP32) * 2 (FMA) * 2.4 (GHz) = 19,661 GFLOPS
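Here's the same math as a small Python helper, so you can plug in different slice counts and clock speeds. The 2.4 GHz desktop clock is an assumption on our part; the A380's 2.45 GHz boost is official.

```python
def theoretical_fp32_tflops(render_slices: int, clock_ghz: float) -> float:
    """Slices x 4 Xe-cores x 16 Vector Engines x 8 FP32 lanes x 2 (FMA) x clock."""
    return render_slices * 4 * 16 * 8 * 2 * clock_ghz / 1000

print(theoretical_fp32_tflops(8, 2.4))   # ~19.7 TFLOPS for the full ACM-G10 at 2.4 GHz
print(theoretical_fp32_tflops(2, 2.45))  # ~5.0 TFLOPS for the Arc A380 at its 2.45 GHz boost
```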

Obviously, GFLOPS (or TFLOPS) on its own doesn’t tell us everything, but nearly 20 TFLOPS for the top configurations is nothing to scoff at. Nvidia’s Ampere GPUs still theoretically have a lot more compute. The RTX 3080, as an example, has a maximum of 29.8 TFLOPS, but some of that gets shared with INT32 calculations. AMD’s RX 6800 XT, by comparison, ‘only’ has 20.7 TFLOPS, but in many games, it delivers similar performance to the RTX 3080. In other words, raw theoretical compute absolutely doesn’t tell the whole story. Arc Alchemist could punch above — or below! — its theoretical weight class.

Still, let’s give Intel the benefit of the doubt for a moment. Depending on final clock speeds, Arc Alchemist comes in below the theoretical level of the current top AMD and Nvidia GPUs, but not by much. On paper, at least, it looks like Intel could land in the vicinity of the RTX 3070/3070 Ti and RX 6800 — assuming drivers and other factors don’t hold it back.

XMX: Matrix Engines and Deep Learning for XeSS 

We briefly mentioned the XMX blocks above. They’re potentially just as useful as Nvidia’s Tensor cores, which are used not just for DLSS, but also for other AI applications, including Nvidia Broadcast. Intel also announced a new upscaling and image enhancement algorithm that it’s calling XeSS: Xe Superscaling.

Intel didn’t go deep into the details, but it’s worth mentioning that Intel hired Anton Kaplanyan. He worked at Nvidia and played an important role in creating DLSS before heading over to Facebook to work on VR. It doesn’t take much reading between the lines to conclude that he’s likely doing a lot of the groundwork for XeSS now, and there are many similarities between DLSS and XeSS.

XeSS uses the current rendered frame, motion vectors, and data from previous frames and feeds all of that into a trained neural network that handles the upscaling and enhancement to produce a final image. That sounds basically the same as DLSS 2.0, though the details matter here, and we assume the neural network will end up with different results.
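For illustration only, here's a toy Python skeleton showing the kinds of inputs a temporal upscaler like XeSS consumes: the current low-resolution frame, per-pixel motion vectors, and history from previous frames. The reproject-and-blend step at the end is a stand-in for Intel's trained neural network, which of course we don't have.

```python
import numpy as np

def toy_temporal_upscale(current_lr, motion_vectors, prev_output_hr, scale=2):
    """Toy temporal-upscaler skeleton (not Intel's XeSS algorithm).

    current_lr:     (h, w) low-resolution frame
    motion_vectors: (H, W, 2) per-pixel motion in high-res pixels
    prev_output_hr: (H, W) previous high-resolution output
    """
    h, w = current_lr.shape
    H, W = h * scale, w * scale

    # Naive spatial upscale of the current frame (nearest neighbor).
    upscaled = np.repeat(np.repeat(current_lr, scale, axis=0), scale, axis=1)

    # Reproject history using motion vectors so it lines up with the current frame.
    ys, xs = np.indices((H, W))
    src_y = np.clip(ys - motion_vectors[..., 1], 0, H - 1).astype(int)
    src_x = np.clip(xs - motion_vectors[..., 0], 0, W - 1).astype(int)
    reprojected = prev_output_hr[src_y, src_x]

    # XeSS feeds these inputs to a trained network; a fixed blend stands in here.
    return 0.1 * upscaled + 0.9 * reprojected
```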

Intel did provide a demo using Unreal Engine showing XeSS in action (see below), and it looked good when comparing 1080p upscaled via XeSS to 4K against the native 4K rendering. Still, that was in one demo, and we’ll have to see XeSS in action in actual shipping games before rendering any verdict.

XeSS also has to compete against AMD’s new and “universal” upscaling solution, FSR 2.0. While we’d still give DLSS the edge in terms of pure image quality, FSR 2.0 comes very close and can work on RX 6000-series GPUs, as well as older RX 500-series and RX Vega cards, Nvidia GTX cards going back to at least the 700-series, and even Intel integrated graphics. It will also work on Arc GPUs.

The good news with DLSS, FSR 2.0, and now XeSS is that they should all take the same basic inputs: the current rendered frame, motion vectors, the depth buffer, and data from previous frames. Any game that supports any of these three algorithms should be able to support the other two with relatively minimal effort on the part of the game’s developers — though politics and GPU vendor support will likely factor in as well.

More important than how it works will be how many game developers choose to use XeSS. They already have access to both DLSS and AMD FSR, which target the same problem of boosting performance and image quality. Adding a third option, from the newcomer to the dedicated GPU market no less, seems like a stretch for developers. However, Intel does offer a potential advantage over DLSS.

XeSS is designed to work in two modes. The highest performance mode utilizes the XMX hardware to do the upscaling and enhancement, but of course, that would only work on Intel’s Arc GPUs. That’s the same problem as DLSS, except with zero existing installation base, which would be a showstopper in terms of developer support. But Intel has a solution: XeSS will also work, in a lower performance mode, using DP4a instructions.

DP4a is widely supported by other GPUs, including Intel’s previous generation Xe LP and multiple generations of AMD and Nvidia GPUs (Nvidia Pascal and later, or AMD Vega 20 and later), which means XeSS in DP4a mode will run on virtually any modern GPU. Support might not be as universal as AMD’s FSR, which runs in shaders and basically works on any DirectX 11 or later capable GPU as far as we’re aware, but quality should be better than FSR 1.0 and might even beat FSR 2.0 as well. It would also be very interesting if Intel supported Nvidia’s Tensor cores, through DirectML or a similar library, but that wasn’t discussed.
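If you're wondering what DP4a actually is, it's an instruction that computes a dot product of two four-element INT8 vectors and adds the result to a 32-bit accumulator. A minimal sketch of the arithmetic (illustrative only, not Intel's code):

```python
def dp4a(a: list[int], b: list[int], accumulator: int) -> int:
    """Four-element signed INT8 dot product accumulated into a 32-bit integer."""
    assert len(a) == len(b) == 4
    return accumulator + sum(x * y for x, y in zip(a, b))

print(dp4a([1, -2, 3, 4], [5, 6, -7, 8], 100))  # 100 + (5 - 12 - 21 + 32) = 104
```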

The big question will still be developer uptake. We’d love to see similar quality to DLSS 2.x, with support covering a broad range of graphics cards from all competitors. That’s definitely something Nvidia is still missing with DLSS, as it requires an RTX card. But RTX cards already make up a huge chunk of the high-end gaming PC market, probably around 80% or more (depending on how you quantify high-end). So Intel basically has to start from scratch with XeSS, and that makes for a long uphill climb.

Arc Alchemist and GDDR6

(Image credit: Intel)

Intel has confirmed Arc Alchemist GPUs will use GDDR6 memory. Most of the mobile variants are using 14Gbps speeds, while the A770M runs at 16Gbps and the A380 desktop part uses 15.5Gbps GDDR6. We suspect most of the other future desktop models will use 16Gbps memory, if and when they arrive.

There will be multiple Xe HPG / Arc Alchemist solutions, with varying capabilities. The larger chip, which we’ve focused on so far, has eight 32-bit GDDR6 channels, giving it a 256-bit interface. That means it could use 8GB or 16GB of memory on the top model. The lower tier A730M trims that down to 192-bit, and the A550M uses a 128-bit interface. The second Arc GPU only has a 96-bit maximum interface width, though the A370M and A350M cut that to a 64-bit width, while the A380 uses the full 96-bit option and comes with 6GB of GDDR6.
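Peak memory bandwidth follows directly from those figures: bus width times per-pin data rate, divided by eight to convert bits to bytes. A quick sketch using the speeds Intel has disclosed (higher-end desktop speeds remain our assumption):

```python
def gddr6_bandwidth_gb_s(bus_width_bits: int, speed_gbps_per_pin: float) -> float:
    """Peak bandwidth in GB/s from bus width (bits) and per-pin data rate (Gbps)."""
    return bus_width_bits * speed_gbps_per_pin / 8

print(gddr6_bandwidth_gb_s(256, 16))   # 512.0 GB/s for a full ACM-G10 at 16Gbps (assumed)
print(gddr6_bandwidth_gb_s(96, 15.5))  # 186.0 GB/s for the Arc A380
```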

Early numbers for the A380 don’t look very promising, but the larger A770M mobile part looks reasonably competitive and a higher clocked desktop variant should be decent — assuming Intel can compete on price and availability.

Arc Alchemist Die Shots and Analysis 

(Image credit: Intel)

Much of what we’ve said so far isn’t radically new information, but Intel did provide a few images and some video evidence that give good indications of where it will land. So let’s start with what we know for certain.

Intel will partner with TSMC and use the N6 process (an optimized variant of N7) for Arc Alchemist. That means it’s not technically competing for the same wafers AMD uses for its Zen 2, Zen 3, RDNA, and RDNA 2 chips. At the same time, AMD and Nvidia could use N6 as well, since its design rules are compatible with N7, so Intel’s use of TSMC certainly doesn’t help AMD’s or Nvidia’s production capacity.

TSMC likely has a lot of tools that overlap between N6 and N7 as well, meaning it could run batches of N6, then batches of N7, switching back and forth. That means there’s potential for Arc production to cut into TSMC’s ability to provide wafers to other partners. And speaking of wafers…

(Image credit: Intel)

Raja showed a wafer of Arc Alchemist chips at Intel Architecture Day. By snagging a snapshot of the video and zooming in, we can see the various chips on the wafer reasonably clearly. We’ve drawn lines to show how large the chips are, and based on our calculations, it looks like the larger Arc die will be around 24×16.5mm (~396mm^2), give or take 5–10% in each dimension. We counted the dies on the wafer as well, and there appear to be 144 whole dies, which would also correlate to a die size of around 396mm^2.
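That die count also lines up with a standard dies-per-wafer approximation. Here's a rough back-of-the-envelope version, assuming a 300mm wafer and ignoring scribe lines and defect losses:

```python
import math

def dies_per_wafer(die_w_mm: float, die_h_mm: float, wafer_d_mm: float = 300) -> int:
    """Common dies-per-wafer approximation; ignores scribe lines and yield."""
    area = die_w_mm * die_h_mm
    return int(math.pi * (wafer_d_mm / 2) ** 2 / area
               - math.pi * wafer_d_mm / math.sqrt(2 * area))

print(dies_per_wafer(24, 16.5))  # ~145 candidate dies, close to the ~144 we counted
```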

That’s not a massive GPU — Nvidia’s GA102, for example, measures 628mm^2 and AMD’s Navi 21 measures 520mm^2 — but it’s also not small at all. AMD’s Navi 22 measures 335mm^2, and Nvidia’s GA104 is 393mm^2, so Xe HPG would be larger than AMD’s chip and similar in size to the GA104 — but made on a smaller manufacturing process. Still, putting it bluntly: Size matters.

This may be Intel’s first real dedicated GPU since the i740 back in the late 90s, but it has made many integrated solutions over the years, and it has spent the past several years building a bigger dedicated GPU team. Die size alone doesn’t determine performance, but it gives a good indication of how much stuff can be crammed into a design. A chip that’s around 400mm^2 in size suggests Intel intends to be competitive with at least the RTX 3070 and RX 6700 XT, which is perhaps higher than some were expecting.

(Image credit: Intel)

Besides the wafer shot, Intel also provided these two die shots for Xe HPG. These are clearly two different GPU dies, and while they’re artistic renderings rather than actual die shots, they do have some basis in reality.

The larger die has eight clusters in the center area that would correlate to the eight render slices. The memory interfaces are along the bottom edge and the bottom half of the left and right edges, and there are four 64-bit interfaces, for 256-bit total. Then there’s a bunch of other stuff that’s a bit more nebulous, for video encoding and decoding, display outputs, etc.

A 256-bit interface puts Intel’s Arc GPUs in an interesting position. That’s the same interface width as Nvidia’s GA104 (RTX 3060 Ti/3070/3070 Ti) and AMD’s Navi 21. Will Intel follow AMD’s lead and use 16Gbps or even 18Gbps memory, or will it opt for more conservative 14Gbps memory like Nvidia? So far the laptop parts appear to be going with 14Gbps, though the desktop A380 at least has 15.5Gbps and the mobile A770M bumps that to 16Gbps. We could see faster GDDR6 on the higher spec desktop parts as well.

The smaller die has two render slices, giving it just 128 Vector Engines. It also only has a 96-bit memory interface (the blocks in the lower-right edges of the chip), which could put it at a disadvantage relative to other cards. Then there’s the other ‘miscellaneous’ bits and pieces. Obviously, performance will be substantially lower than the bigger chip, and this would be more of an entry-level part.

While the smaller chip appears to be slower than all the current RTX 30-series GPUs, it does put Intel in an interesting position. The A380 checks in at a theoretical 5.0 TFLOPS, which means it ought to be able to compete with a GTX 1650 Super, with additional features like AV1 encoding/decoding support that no other GPU currently has. 6GB of VRAM also gives Intel a potential advantage, and on paper the A380 ought to land closer to the RX 6500 XT than the RX 6400.

That’s not currently the case, according to Intel’s own benchmarks (see above), but perhaps further tuning of the drivers could give a solid boost to performance. We certainly hope so, but let’s not count those chickens before they hatch.

 Will Intel Arc Be Good at Mining Cryptocurrency? 

(Image credit: Intel)

This is potentially a non-issue at this stage, as the potential profits from cryptocurrency mining have dropped off substantially in recent months. Still, some people might want to know if Intel’s Arc GPUs can be used for mining. Publicly, Intel has said precisely nothing about mining potential and Xe Graphics. However, given the data center roots for Xe HP/HPC (machine learning, High-Performance Compute, etc.), Intel has certainly at least looked into the possibilities mining presents, and its Bonanza Mine chips are further proof Intel isn’t afraid of engaging with crypto miners. There’s also the above image (for the entire Intel Architecture Day presentation), with a physical Bitcoin and the text “Crypto Currencies.”

Generally speaking, Xe might work fine for mining, but the most popular algorithms for GPU mining (Ethash mostly, but also Octopus and Kawpow) have performance that’s predicated almost entirely on how much memory bandwidth a GPU has. For example, Intel’s fastest Arc GPUs will likely use 16GB (maybe 8GB) of GDDR6 with a 256-bit interface. That would yield similar bandwidth to AMD’s RX 6800/6800 XT/6900 XT as well as Nvidia’s RTX 3060 Ti/3070, which would, in turn, lead to performance of around 60-ish MH/s for Ethereum mining.
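The bandwidth-to-hash-rate relationship is easy to ballpark: Ethash is memory-bound, with each hash touching roughly 8KB of the DAG, so a hard ceiling is simply bandwidth divided by 8KB. A rough sketch (real-world results land somewhat below this):

```python
def ethash_ceiling_mh_s(bandwidth_gb_s: float) -> float:
    """Rough Ethash upper bound: each hash reads ~8KB, so hash rate <= bandwidth / 8KB."""
    return bandwidth_gb_s * 1e9 / 8192 / 1e6

print(round(ethash_ceiling_mh_s(512), 1))  # ~62.5 MH/s ceiling for a 256-bit, 16Gbps card
```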

That’s realistically about where we’d expect the fastest Arc GPU to land, and that’s only if the software works properly on the card. At present, a 60 MH/s card doing Ethereum mining and drawing 150W of power would net miners a whopping… $0.88 per day, and likely $0.50 or less after accounting for electricity costs. And there’s the still-looming “The Merge” that will take Ethereum to proof of stake and kill off mining entirely, which would drop potential profits down to $0.40 per day at present.
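The net figure is just gross mining revenue minus electricity. A quick sketch using the roughly $0.88 per day estimate above and an assumed $0.10 per kWh electricity rate:

```python
def daily_mining_profit(revenue_usd: float, power_watts: float,
                        usd_per_kwh: float = 0.10) -> float:
    """Net daily profit: gross revenue minus 24 hours of electricity (rate is an assumption)."""
    return revenue_usd - (power_watts / 1000) * 24 * usd_per_kwh

print(round(daily_mining_profit(0.88, 150), 2))  # ~$0.52/day, before pool fees and hardware wear
```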

Considering desktop Arc GPUs won’t even show up until Q3 2022 (in the US), and given the volatility of cryptocurrencies, it’s unlikely that mining performance has been an overarching concern for Intel during the design phase. If Intel had launched Arc in late 2021 or even early 2022, it might have mattered a bit, but the current crypto-climate suggests that, whatever the mining performance, it won’t really matter.

Arc Alchemist Launch Date and Future GPU Plans

(Image credit: Intel)

The core specs for Arc Alchemist are shaping up nicely, and the use of TSMC N6 and potentially a 400mm^2 die with a 256-bit memory interface all point to a card that should be competitive with the current mainstream/high-end GPUs from AMD and Nvidia, but well behind the top performance models. As the newcomer, Intel needs the first Arc Alchemist GPUs to come out swinging. However, as discussed in our look at the Intel Xe DG1, there’s much more to building a good graphics card than hardware, which is probably why Arc is launching in China first, to get the drivers and software ready for the rest of the world.

Alchemist represents the first stage of Intel’s dedicated GPU plans, and there’s more to come. Along with the Alchemist codename, Intel revealed codenames for the next three generations of dedicated GPUs: Battlemage, Celestial, and Druid. Now we know our ABCs, next time won’t you build a GPU with me? Those might not be the most awe-inspiring codenames, but we appreciate the logic of going in alphabetical order.

Tentatively, with Alchemist using TSMC N6, we might see a relatively fast turnaround for Battlemage. It could use TSMC’s N5 process and ship in 2023 — which would perhaps be wise, considering we expect to see Nvidia’s Lovelace RTX 40-series GPUs and AMD’s RDNA 3 architecture in the next few months. Shrink the process, add more cores, tweak a few things to improve throughput, and Battlemage could keep Intel on even footing with AMD and Nvidia. Or it could arrive woefully late (again) and deliver less performance.

Intel needs to iterate on future architectures and get them out quickly if it hopes to put some pressure on AMD and Nvidia. Arc Alchemist already slipped from 2021 to a supposed hard launch date of Q1 2022, which then changed to Q2 for China and Q3 for the US and other markets. Intel really needs to stop the slippage and get cards out, with fully working drivers, sooner rather than later if it doesn’t want a repeat of its old i740 story.

“Into the Unknown!” (Image credit: Gunnir)

Final Thoughts on Intel Arc Alchemist 

The bottom line is that Intel has its work cut out for it. It may be the 800-pound gorilla of the CPU world, but it has stumbled and faltered even there over the past several years. AMD’s Ryzen gained ground, closed the gap, and took the lead up until Intel finally delivered Alder Lake and desktop 10nm (“Intel 7” now) CPUs. Intel’s manufacturing woes are apparently bad enough that it turned to TSMC to make its dedicated GPU dreams come true.

As the graphics underdog, Intel needs to come out with aggressive performance and pricing, and then iterate and improve at a rapid pace. And please don’t talk about how Intel sells more GPUs than AMD and Nvidia. Technically, that’s true, but only if you count incredibly slow integrated graphics solutions that are at best sufficient for light gaming and office work. Then again, a huge chunk of PCs and laptops are only used for office work, which is why Intel has repeatedly stuck with weak GPU performance.

We now have hard details on all the mobile Arc GPUs, along with the desktop A380. We even have Intel’s own performance data, which was less than inspiring. Had Arc launched in Q1 as planned, it could have carved out a niche. The further it slips into Q3, the worse things look.

So far, the main desktop A380 we’ve seen comes via Gunnir over in China. The card looks fine, but you almost have to laugh at the truth in advertising, as there’s a small logo on the card stating, “Into The Unknown.” Frozen 2 might appreciate the reference, but potential buyers should take that quite literally: What you get, long-term, from Arc is currently a big question mark. There are reported driver issues, and even when things do work, performance definitely isn’t where we would like to see it. Coming in 16% behind the old GTX 1650, by Intel’s own numbers? Ouch.

We’re also curious about the real-world ray tracing performance, compared to both AMD and Nvidia, though it’s not a critical factor. The current design has a maximum of 32 ray tracing units (RTUs), but we know next to nothing about what those units can do. Each one might be similar in capabilities to AMD’s ray accelerators, in which case Intel would come in pretty low on the ray tracing pecking order. Alternatively, each RTU might be the equivalent of several AMD ray accelerators, perhaps even faster than Nvidia’s Ampere RT cores. While it could be any of those, we suspect it will probably land lower on RT performance rather than higher, leaving room for growth with future iterations.

Again, the critical elements are going to be performance, price, and availability. The latter is already a major problem, because the ideal launch window was last year. Intel’s Xe DG1 was also pretty much a complete bust, even as a vehicle to pave the way for Arc, because driver problems appear to persist. Arc Alchemist sets its sights far higher than the DG1, but every month that passes those targets become less and less compelling.

We’ll hopefully find out how Intel’s discrete graphics cards stack up to the competition in the coming months, starting with the A380 which we hope to have for testing in the near future. Will we still see higher tier Arc products for desktops, or will Intel quietly sweep those under the rug and leave them as China-only laptop solutions, similar to what happened with Cannon Lake? Time will tell, but we’re still hopeful Intel can turn the current GPU duopoly into a triopoly in the coming years.