Intel Core Ultra series 3 unpacked: Can Panther Lake on Intel 18A secure Intel’s future?

  1. Innovations brought to life
  2. Panther Lake CPU package configurations
  3. How the Panther Lake compares as a platform
  4. A special focus on Connectivity Upgrades
  5. Performance Highlights
  6. Boosting Efficiency Further
  7. A newer refined NPU to push performance per silicon area
  8. Intel Xe3 graphics core ups the stakes to outclass modern entry-level discrete GPUs
  9. Can the Panther Lake processing platform restore the shine on Intel?

Intel’s upcoming rejuvenation of its next-gen processor platform, Panther Lake, is no ordinary update. It is a pivotal one that packs the industry’s most advanced technologies, meant to catapult Intel back to a leadership position as it caps off the company’s aggressive plan to deliver five process nodes in four years. Combining Lunar Lake’s power efficiency with Arrow Lake’s performance scaling, Panther Lake is meant to deliver a wider spectrum of processor models with up to 50% more CPU and integrated GPU performance over both its predecessors, through advancements in CPU design, process node and transistor engineering, upgraded NPU and GPU processing blocks, and a whole slew of optimisations throughout the processor.

So, yes, the stakes are high for Panther Lake to be Intel’s pièce de résistance as the company rides out 2026 and charts its future momentum. We got a detailed look at Panther Lake at Intel’s Tech Tour US 2025, which concluded late last month, and here’s what you need to know about the new processing platform.

Innovations brought to life

Image: Intel

Panther Lake will be the first commercial processor to feature two industry-leading silicon design features as part of the Intel 18A process node it is built on: a brand-new RibbonFET transistor design that offers superior gate control to better manage electrical current flow, especially in transistors this small, and a new backside power delivery interconnect called PowerVia.

Power and signal lines are key interconnect components between different tiles and processing blocks, but the laws of physics dictate that mixing both limits signal strength and thus speed. Power lines conduct current through the path of least resistance and prefer to be dense and thick to maximise their ability to deliver power. Signal lines, meanwhile, require adequate shielding (insulation) to carry a clean signal when routed alongside power lines. PowerVia revolutionises interconnects by moving the power lines to the backside of the wafer, immediately improving power delivery and area scaling through higher cell density and cleaner signal routing.

These engineering technologies not only help Panther Lake achieve new ground but also set the stage for Intel’s continued processor node advancements after moving past the existing FinFET transistor design, which has been the bedrock for most advanced processors for over a decade.

Intel’s Arizona foundry at the Ocotillo site, where Fab 52 now produces its most advanced next-gen processors manufactured on the Intel 18A process.

Image: Intel

Panther Lake is now in full production at Intel’s Arizona foundry at the Ocotillo site, specifically in Fab 52, making it the most advanced fab at the time of writing, combining the most advanced chip interconnect (PowerVia) and the most advanced transistor design (RibbonFET) on the Intel 18A semicon process node. While Fab 52 and Intel 18A production have been operational since July 2025, Intel was only ready to share its fully operational status as of October 9, 2025.

Me with an Intel Panther Lake processor and its 18A silicon wafer that harbors the processor’s main compute tile.

Photo: HWZ

A single wafer yields hundreds of Panther Lake’s compute tiles. Would you like one?

Photo: HWZ

Panther Lake CPU package configurations

Image: Intel

Catering to various deployments across the edge, personal compute and robotics, Panther Lake is a scalable offering designed around three base configurations: an 8C + 4Xe offering for thin-and-light laptops, a 16C + 4Xe step-up for creators and gamers who need more power and want to pair it with a discrete GPU, and finally a 16C + 12Xe offering expressly made for high-performance edge computing, robotics and handheld gaming consoles, which can even double up for thin-and-light gaming machines.

| Type | 8-core | 16-core | 16-core 12Xe |
| --- | --- | --- | --- |
| CPU Cores | Up to 8 | Up to 16 | Up to 16 |
| GPU Cores | Up to 4 Xe3 cores | Up to 4 Xe3 cores | Up to 12 Xe3 cores |
| NPU | NPU5 | NPU5 | NPU5 |
| Memory (LPDDR5x) | Up to 64GB (6800MT/s) | Up to 96GB (8533MT/s) | Up to 96GB (9600MT/s) |
| I/O | 12 PCIe lanes | 20 PCIe lanes | 12 PCIe lanes |
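For a sense of what those memory speeds translate to, here’s a rough peak-bandwidth calculation. Note that the 128-bit bus width is our assumption, typical for this class of mobile SoC but not something Intel specified here:

```python
# Rough peak LPDDR5x bandwidth for each Panther Lake configuration.
# ASSUMPTION: a 128-bit (16-byte) memory bus, common for mobile SoCs
# of this class but not confirmed by Intel for every configuration.
BUS_BYTES = 128 // 8  # 16 bytes per transfer

def peak_bandwidth_gbs(mt_per_s: int) -> float:
    """Peak bandwidth in GB/s = transfers/s * bytes per transfer."""
    return mt_per_s * 1e6 * BUS_BYTES / 1e9

for name, speed in [("8-core", 6800), ("16-core", 8533), ("16-core 12Xe", 9600)]:
    print(f"{name}: {peak_bandwidth_gbs(speed):.1f} GB/s")
```

By this back-of-the-envelope maths, the top configuration would approach roughly 154 GB/s of peak bandwidth, a sizeable pool for an integrated GPU to draw on.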

Panther Lake is still packaged with the now-mature Foveros packaging technology first utilised for Meteor Lake (read more about chip assembly and the packaging technology here from our factory tour). Foveros is what enabled Intel to support a disaggregated processor architecture, where various tiles manufactured on different process technologies (and in different fabs) are put together on a base wafer for assembly. While Lunar Lake and Arrow Lake had the GPU integrated within the Compute tile, Panther Lake adopts a standalone GPU tile, just like Meteor Lake. That brings the total tile count to three:

  • Compute tile comprising the new Cougar Cove and Darkmont cores (manufactured on the new Intel 18A process)
  • Platform Controller tile that packs all the I/O and connectivity (manufactured on an external process node)
  • GPU tile (quad-core editions made on the Intel 3 process, and the 12-core edition made on an external process node)

Just like the original premise of adopting a tile-based architecture, this allows Intel greater flexibility to design various important functional blocks and manufacture them on a suitable process node that’s more manageable to maintain costs, die complexity, and utilise production capacity wisely.

In my hand is a complete Intel Panther Lake processor, and at this distance, you can make out all the tiles packaged on the base die. The biggest die is the Compute tile, and below it is the GPU tile. On the right, the thinner strip of tiles comprises two spacers, and the lengthy one is the Platform Controller tile.

Photo: HWZ

How the Panther Lake compares as a platform

Here’s a table to quickly compare the best of each processing platform across key areas, helping you better appreciate what Panther Lake brings to the table.

| | Panther Lake | Arrow Lake-H | Lunar Lake | Meteor Lake |
| --- | --- | --- | --- | --- |
| Processor Series | Intel Core Ultra series 3 | Intel Core Ultra 200H | Intel Core Ultra 200V | Intel Core Ultra 100 series |
| Retail Year | 2026 | 2025 | Late 2024 | 2024 |
| CPU Cores | Up to 16 (Cougar Cove + Darkmont cores) – Intel 18A | Up to 16 (Lion Cove + Skymont cores) – TSMC N3B | 8 (Lion Cove + Skymont cores) – TSMC N3B | Up to 16 (Redwood Cove + Crestmont) – Intel 4 |
| GPU Cores | Up to 12 (Xe3) – Up to 120 TOPS | Up to 8 (Xe2, Arc 140T series) – Up to 77 TOPS | Up to 8 (Xe2, Arc 140V/130V) – Up to 67 TOPS | Up to 8 (Xe-LPG) |
| NPU | 50 TOPS (NPU 5 engine) | 13 TOPS (NPU 3) | 48 TOPS (Up to 6x NPU 4 engines) | 11 TOPS (Up to 2x NPU 3 engines) |
| Memory (LPDDR5) | Up to 96GB (9600MT/s) | Up to 64GB (8400MT/s) | Up to 32GB integrated (8533MT/s) | Up to 64GB (7467MT/s) |
| Integrated Connectivity | Built-in Wi-Fi 7 (R2) + Bluetooth 6 + Thunderbolt 4 | Wi-Fi 6E + Thunderbolt 4 | Wi-Fi 7 + Bluetooth 5.4 + Thunderbolt 4 | Wi-Fi 6E + Thunderbolt 4 |

And here’s how the different platforms advanced year by year, and what they brought to the table:-

Meteor Lake (2023) – First tile-based disaggregated architecture using Foveros advanced packaging technology, first to feature an NPU, and increased performance per watt.

Lunar Lake (2024) – Intel’s first fully integrated design by combining compute, I/O and memory on the same packaging, significantly more powerful NPU than Meteor Lake to qualify for MS Copilot+ PC labelling and performance tier, and a powerful new Xe2 GPU.

Panther Lake (2025) – Xe3 GPU, massive uplifts in performance, efficiency and power savings, big boost in addressable memory space, Wi-Fi 7 R2, BT 6.0 and other connectivity improvements, first to be engineered on brand new transistor design and process node (engineering breakthrough).

A special focus on Connectivity Upgrades

Image: Intel

While we’ve a more detailed rundown below of how the core processing blocks have been upgraded on Panther Lake, we thought we’d first shine some limelight on a less-highlighted area of processor upgrades, because Panther Lake packs quite a few leading-edge (perhaps even novel) wireless connectivity improvements that we don’t often encounter between processing platform updates.

Wi-Fi 7 is a given and is supported natively (read this for more info and benefits), but Panther Lake goes a step further and is Wi-Fi 7 R2 compliant (also read as Wi-Fi 7 Release 2). You might be wondering what more you can expect out of this updated standard, so here are the highlights:-

  • Multi-Link Reconfiguration: Dynamic resource configuration and management across all active links to conserve power, such as turning off the 2.4GHz radio when it isn’t in use. Power savings, no matter how small, are critical on all mobile devices.
  • Restricted TWT: Enhanced AP resource allocation for critical devices and infrastructure to deliver predictable latency and network reliability.
  • Single-link eMLSR: Enables single-radio client MLO to achieve better efficiency.
  • P2P channel coordination: Allows the AP to reserve certain channels for P2P operation/communication. This avoids any negative impact on other channels, which can freely continue operating without restrictions.

As with all other Wi-Fi standards, most of these enhancements require the access points and devices on both ends to support the new protocols. While the true benefits are a little harder to discern because some apply only to very specific scenarios, the first two points are most useful in business environments with many access points, where improvements in power savings or reliability matter for time-sensitive applications. Multi-Link Reconfiguration and single-link eMLSR are also useful for laptops, where any kind of efficiency and power saving goes a long way to improve battery uptime.

Bluetooth is another big area of update: for the first time, Bluetooth Core 6.0 is supported by the platform out of the box (within the Platform Controller tile). By having both the Wi-Fi 7 and BT 6.0 MACs within the processor, Panther Lake can also utilise the Wi-Fi antennas to deliver dual-device Bluetooth connectivity. This also extends Bluetooth’s range from the traditional 20m to up to 50m. Bluetooth 6 additionally supports Channel Sounding, with the ability to measure the distance to your listening device (such as your earbuds) and automatically lock the laptop when you’re not in proximity.

There’s also Bluetooth Auracast support for easier broadcasting of audio, but you’ll need audio devices that also support Auracast to take advantage of it.

Lastly, you still get integrated Thunderbolt 4 support, though Thunderbolt 5 is only available via a suitable discrete add-on within the laptop chassis.

Performance Highlights

Image: Intel

If a single line could sum up performance expectations of the new Panther Lake processing platform, it’s this: anticipate up to 50% more CPU and GPU performance!

This is largely possible due to an extensive re-work of power management capabilities; as Intel puts it, the more cores that are brought into the picture, the more power management is needed. Strengthening the low-power island (Darkmont E-cores) and incorporating a next-generation high-performance core (Cougar Cove P-core) on the Intel 18A process node are the direct results of that.

Remember the technical advantages highlighted above for this process node utilising RibbonFET transistors and PowerVia interconnect technologies? All of this means reduced power loss, improved efficiency, newer design libraries and increased density through the adoption of the Intel 18A process and we’ve not even touched on the actual core architecture updates on the new Darkmont and Cougar Cove hybrid core designs.

At the heart of Intel’s Panther Lake are its new P- and E-cores.

Image: Intel

While numerous micro-updates are present across both the new P- and E-cores, they share several similar improvements: better memory disambiguation to boost memory loads and stores for improved pipelining performance, an expanded translation lookaside buffer (1.5x the previous TLB) for better address-translation coverage, and more optimised branch prediction that improves prediction capacity while reducing latency and energy consumption. Specific to Cougar Cove (the P-core), you also get up to 50% more shared L3 cache than in Lunar Lake, while Darkmont (the LP-E/E-core) has double the L2 cache and now surfaces telemetry to improve efficiency and performance.

Last but not least, the 8MB memory-side cache and controller that first appeared on Lunar Lake was absent on Arrow Lake, but it’s now back on Panther Lake to reduce memory traffic and power draw, and to improve overall caching performance.

Boosting Efficiency Further

Intel isn’t just banking on core hardware architecture updates, but also on a closer examination of the Intel Thread Director (refer here to get familiar with it), which plays a significant role in supporting modern hybrid processor designs that employ a variety of compute engines. Remember Alder Lake, which debuted in 2021 as the first hybrid processor architecture? Yes, that’s how long the technology has been around, so it was apt that Intel took a closer look at the Thread Director for two reasons: Windows 11 is now the mainstream OS of choice for consumers (seeing that Windows 10 has reached end of life), and AI has come a long way since, which the Thread Director can now take advantage of.

In a nutshell, Intel Thread Director classifies workloads and tabulates which cores are available to help guide the operating system in scheduling incoming workloads more effectively. In the all-new Panther Lake processor platform, Intel has advanced the Thread Director with simultaneous execution across various core types (P/E/LP-E), improved power management input by factoring in telemetry, and adopted a more optimised classification modelling to better direct workload variety and scheduling. The latter also incorporates a more updated and expanded scope of ‘busy’ use case scenarios since workloads are even more varied as more mainstream software and compilers have also been updated to take advantage of new processing engines (disaggregated, hybrid architectures, NPUs, advanced GFX engines, etc.) and their abilities over the last few years.

Note the core scheduling priority over time, where the E-cores have or haven’t been utilised.

Image: Intel

All this means Intel has architected the hardware to be more mindful, capable and efficient for modern-day compute needs, optimising existing silicon to deliver more. In lab tests shown to the press, the new Panther Lake processors defaulted to using the LP-E cores as much as possible for a variety of everyday tasks, kicking in the E- and P-cores as required by the corresponding workloads, such as in Cinebench, which invokes all available cores in its multi-threaded workloads, or in DX12 gaming, where more E/P-cores were utilised than LP-E cores. Meanwhile, for standard office productivity, Intel pointed out sparse use of E/P-cores while the LP-E cores were primarily active.

On the software side of things, Intel’s new Intelligent Experience Optimizer aims to make your laptop run more efficiently through AI-based smart monitoring. Remember the default power slider in Windows, where it’s either in energy saver, balanced or performance mode? Assuming you’ve set it to energy saver or the default balanced mode, this pretty much fixes the laptop’s behaviour, even if you have a temporarily demanding workload such as a video render.

Image: Intel

If you’ve ever wished for a laptop to be more intelligent to shift gears automatically to improve user experience, this is exactly what Intel’s Intelligent Experience Optimizer is focused on. This is purely a load-driven hint and is enabled by platform software to minimise the delta between battery-performance and wall-powered performance. While this is technically a great idea, Intel is also giving OEMs the power to weigh in and optimise the outcomes for their specific platform’s intent of use, avoiding any overlap of capabilities and better presenting their offerings based on the particular system configurations they’ve adopted.

What kind of performance uplifts can be expected? Intel showcased one example where a laptop set to the balanced power profile ran both UL Procyon’s office productivity suite and Cinebench, and in both instances, with the Intelligent Experience Optimizer, the test laptop saw a 19% uplift in performance. This probably comes with a dent in overall battery uptime, which wasn’t shared, and we’ll get to the bottom of it when we have suitable test laptops or, better yet, ready-to-market products to put to the test.

There is, of course, something else Intel has been famous for over the years that could shift the expected outcomes: the ability to complete a task quickly and enter a sleep state fast to conserve power. If this holds true, there’s a chance the Intelligent Experience Optimizer can deliver both performance and battery savings. However, while this is sweet news for consumers, it throws a whole different spanner in the works for reviewers like ourselves and how we showcase baseline vs. optimised vs. wall-powered outcomes.

At the end of the day, anything that improves user experience is something we wholeheartedly welcome. Challenge accepted, and we look forward to sharing our experiences in the near future.

A newer refined NPU to push performance per silicon area

The next-gen NPU focused on a re-design for area savings and thus boosting silicon efficiency.

Image: Intel

Lunar Lake’s NPU4 packed quite a punch, and even if it wasn’t leading the industry in raw NPU TOPS throughput, it was more than adequate for its time, considering that most AI acceleration still occurs on the CPU or GPU. As such, Intel remains focused on delivering a balanced set of AI engines across the CPU, NPU and GPU, for a total of 180 platform AI TOPS on Panther Lake: the CPU tackles light AI tasks, the NPU handles on-device AI assistants, and the GPU takes on more strenuous tasks like content creation and gaming (frame generation, etc.).

Panther Lake’s new NPU5 is one of the more interesting technical updates, brought about through a smart re-architecture of NPU4’s core. It now packs slightly more throughput at up to 50 NPU AI TOPS (Lunar Lake managed up to 48 TOPS, itself significantly better than what Arrow Lake offered), but more importantly, it achieves this with far less silicon area.

How the functional units have been re-architected to serve a wider MAC unit array.

Image: Intel

A big portion of the neural compute engine (NCE) in the NPU is occupied by multiply-accumulate (MAC) units, which are vital for the massively parallel matrix and convolution operations of machine learning (exactly why GPUs gained extreme fame in the era of AI inferencing and training, since they have massive arrays of these in their graphics pipeline). However, Intel realised that several functional units in NPU4’s MAC arrays (Lunar Lake) were sparsely utilised, including the DSPs and other shared functions of the inferencing pipeline like the load/store units.

Fewer functional units serving a wider MAC array on the new NPU5.

Image: Intel

This prompted Intel’s engineers to re-architect the NCE to double the MAC array size while maintaining the same number of shared functional units per NCE. This is a key reason why, numerically, it would appear that NPU5 ‘devolved’ with ‘only’ three NPU engines instead of the six available on NPU4. The net result, however, is silicon area saved and performance per area boosted. NPU5 also now supports new data storage formats such as INT8 and FP8, which reduce the memory footprint needed and thus enable faster processing and lower energy consumption. This bodes well for AI tasks that don’t need the precision of an FP16 storage format, since it allows the NCE’s MAC arrays to process double the math (or throughput).
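To see why the INT8/FP8 support matters, here’s a generic symmetric quantisation round-trip in Python; this is a textbook illustration of the technique, not Intel’s NPU5 implementation. Weights stored in 8 bits take half the space of FP16, at the cost of a small, bounded rounding error:

```python
# Generic symmetric INT8 quantisation: store each value as a signed byte
# plus one shared scale factor. Half the footprint of FP16 per value,
# which is what lets a MAC array push through double the math.
def quantize_int8(values: list[float]) -> tuple[list[int], float]:
    scale = max(abs(v) for v in values) / 127 or 1.0
    return [round(v / scale) for v in values], scale

def dequantize(q: list[int], scale: float) -> list[float]:
    return [v * scale for v in q]

weights = [0.52, -1.27, 0.01, 0.89]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# Worst-case error is half a quantisation step (scale / 2).
print(max(abs(a - b) for a, b in zip(weights, restored)))
```

The trade-off is visible in the last line: precision is capped by the scale factor, which is why lower-precision formats suit tasks that tolerate small errors.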

Leading the numbers game with a total of 180 platform AI TOPS.

Image: Intel

(Side note: Qualcomm’s Snapdragon X2 Elite Extreme for next-gen laptops boasts 80 TOPS from its Hexagon NPU, with a different approach to process larger LLM models on-device. It remains to be seen whose approach is better in the real world in this fast-changing landscape, but we’ll know better as 2026 unfolds.)

Intel Xe3 graphics core ups the stakes to outclass modern entry-level discrete GPUs

50% uplift in graphics performance is your key takeaway, but there’s more.

Image: Intel

Furthering the AI TOPS narrative, depending on the Panther Lake processor configuration, the 12-Xe-core edition can deliver up to 120 TOPS on its own. Compare that to Arrow Lake-H, which churns out 99 TOPS for the entire platform (CPU, NPU and GPU combined), and you can see why the expanded and upgraded Xe3 GPU is something to look forward to.

Xe3 packs various microarchitectural improvements: more shared L1 cache, improved vector engines (tackling 25% more threads, with variable register allocation and FP8 data support), asynchronous ray-tracing support, and double the L2 cache at 16MB on the 12-core variant (as opposed to only 8MB on the prior 8-core top-end variant). The increased cache size alone is a big step up, reducing trips to graphics memory and thus improving GPU performance.

Most importantly, the Xe3 GPU is more scalable, with render slice configurations that can pack either four or six Xe3 cores. This allows Intel to offer the new Xe3 GPU engines with a more diverse spread of graphics horsepower: the 8-core and 16-core Panther Lake configs carry the quad-core GPU variant via the smaller render slice, while the 16-core + 12Xe variant packs dual expanded render slices for a full-blown 12-core GPU configuration.

12 Xe3 GPU cores on the top-end Panther Lake configuration.

Image: Intel

With so many GPU cores, each packing 8 XMX engines, there’s a total of 96 XMX engines, which quickly explains how it can churn out 120 TOPS of AI processing prowess. There’s also de-quantisation support, which lends it the ability to take a huge model and scale it down through mathematical expressions to run within the processing space available locally.
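The headline figure is easy to sanity-check from the numbers above; the implied per-engine rate is our own back-of-the-envelope derivation, not an Intel spec:

```python
# TOPS arithmetic using figures from the article: 12 Xe3 cores,
# 8 XMX engines per core, 120 TOPS total for the GPU tile.
xe3_cores = 12
xmx_per_core = 8
total_tops = 120

xmx_engines = xe3_cores * xmx_per_core
print(xmx_engines)               # 96 XMX engines in total
print(total_tops / xmx_engines)  # implied 1.25 TOPS per XMX engine
```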

There’s even a new rendering method called Neural Radiance Field, which could replace the entire traditional render pipeline with an AI model using DirectX cooperative vectors, bringing accelerated matrix-multiply capability to the shaders. If this works as intended in actual games, there are power savings to be gained as less silicon and fewer processes are invoked. This feature is unlikely to yield immediate results, as it will require new drivers and game support through DirectX 12 Ultimate. Nonetheless, it’s still exciting to know there’s future potential.

Thanks to a number of architectural improvements in Xe3, Intel’s internal testing places it at up to 50% faster than Xe2 (comparing Panther Lake’s top SKU against Lunar Lake’s top SKU):-

Image: Intel

Remember XeSS, Intel’s AI-based super sampling technology? Combining XeSS Super Resolution, XeSS Frame Generation (which inserts an AI-generated frame between ‘real’ frames to boost performance) and Xe Low Latency technologies, this is XeSS 2 as we know it from Lunar Lake’s Xe2 GPU, and it works reasonably well in our testing.

In Xe3, you now get XeSS-MFG, or multi-frame generation, which generates three extra frames instead of one. How it works is very similar to how NVIDIA DLSS 3.0 does it, which you can read about here. In a nutshell, it utilises frame pacing to ensure generated frames match the expected display times, and uses optical-flow calculations to forecast motion vectors for generating these frames. Intel Fellow for Architecture and Graphics, Tom Petersen, shared that even without XeSS-MFG, the Xe3 GPU is 30% faster than the previous generation. With MFG enabled, he estimates the performance could be three times as much.
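As a rough mental model of the frame-pacing side, here’s a toy Python sketch that slots generated frames at even intervals between rendered ones. The scheduling logic and timings are illustrative assumptions, not Intel’s actual pipeline:

```python
# Toy multi-frame-generation pacing: for every rendered frame, slot
# (factor - 1) generated frames at even intervals before the next one.
# Purely illustrative; real MFG works on motion vectors and GPU timing.
def mfg_schedule(rendered_times: list[float], factor: int) -> list[tuple[float, str]]:
    """Return (present_time_ms, kind) pairs for an Nx generation factor."""
    out = []
    for t0, t1 in zip(rendered_times, rendered_times[1:]):
        out.append((t0, "rendered"))
        step = (t1 - t0) / factor
        for i in range(1, factor):  # factor - 1 generated frames
            out.append((t0 + i * step, "generated"))
    out.append((rendered_times[-1], "rendered"))
    return out

# Two rendered frames ~33ms apart at 4x -> three generated frames between
# them, i.e. a ~120fps presentation cadence from a 30fps render rate.
for t, kind in mfg_schedule([0.0, 33.3], 4):
    print(f"{t:5.1f} ms  {kind}")
```

Even pacing is the point: without it, the extra frames would bunch up and the higher frame rate would feel no smoother.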

Features you’ve only seen on discrete graphics are now in an integrated graphics engine.

Image: Intel

Intel’s upcoming software update will even let you specify how much frame generation you want, with 2x, 3x, or 4x options to fine-tune the output to your taste. The bigger reason is perhaps that this is new to Intel, so it’s wiser to let gamers stay in control should the default XeSS-MFG output not be to their taste, especially if they’re easily put off by any input-lag mismatches.

Slated to be marketed as XeSS 3 to encompass all the individual features, it’s unfortunate that no actual game performance numbers were shared at the event. However, we hope to test it ourselves once retail-ready laptops come about early in 2026.

One of the most pressing factors for any new feature is game support and compatibility. Petersen said XeSS 3 is compatible with any and all games that supported XeSS 2. To give you a quick gauge, back when we reviewed the Intel Arc B580 discrete graphics card, which also supported XeSS 2, Intel mentioned it already had over 150 games supporting the feature set. It’s not yet an impressive list, but at least you’ve got that many games supporting XeSS 3 too.

Still concerned about stutters, game-loading lag and performance spikes? Intel has software updates and initiatives to nip them in the bud, including a new power management controller for Panther Lake processors.

Can the Panther Lake processing platform restore the shine on Intel?

Behind Intel’s Arizona fab facility’s lobby lies the complex world of semicon manufacturing, and the company is pushing some serious engineering behind the scenes to keep the world moving and ready it for agentic AI.

Photo: HWZ

It’s no secret that Intel has been under the spotlight to innovate boldly and catch up with advancements made by its main competitors in the PC industry. It’s not that it hasn’t done so, but a variety of reasons led it to fall behind. After all, Apple’s decision to drop Intel and develop its own M-series processors was mainly due to Intel’s inability to deliver timely and relevant progress with the flexibility to keep Apple content. Intel also dropped the ball on pursuing an ideal mobile phone platform, was late to join the GPU race, and was late to bring notable AI accelerators to cloud and edge computing. While it is still relevant in the last two industries and has reasonable solutions, it was nonetheless embroiled in its own concerns of getting its foundry proposition and process node leadership right.

Moving past those painful moments, the recent few years have seen phenomenal updates in processor designs and technologies, and a ramp-up in manufacturing on more advanced process nodes to prioritise the new era of AI and agentic AI, both in data centre deployments and for your personal AI PC.

A snapshot of Panther Lake platform’s features to look forward to.

Image: Intel

Intel has loads of processor options targeted at every niche and scale of need. However, the upcoming Panther Lake platform, fabricated on Intel’s cutting-edge 18A process technology, is its first attempt in a long while to bridge multiple leading-edge needs, from robotics to edge computing and high-performance computing clients, all on its most advanced process node from the get-go.

Panther Lake is now in volume production and is on track to debut in devices of every scale (edge, handheld gaming, thin-and-light laptops and high-performance machines alike) in early 2026 as the Intel Core Ultra series 3. We suspect CES 2026 will be the next battleground, as AMD takes the keynote stage and Qualcomm showcases its Snapdragon X2 Elite for PCs, alongside Intel’s partners with Panther Lake-powered systems.

We’ve seen Intel’s Fab 52 in Arizona bring the Intel Core Ultra series 3 to life last month, and it’s a sign of Intel’s commitment to its promise of delivering so many process node advances in such a short time frame, eager and earnest to recapture pole position in x86 systems across the world. It’s not just for show, because we’ve actually seen working silicon in development systems as far back as CES 2025 at the beginning of the year. What’s left is for Intel to execute on its promise: keep the channel vendors happy, get the price proposition right, and hopefully surprise reviewers and enthusiasts like ourselves beyond the presentations, delivering incredible performance and power efficiency while setting a new standard for integrated graphics with the Xe3 GPU.

2026, we welcome Intel 18A with open arms and can’t wait for the Core Ultra series 3 to surprise us just as much as, or more than, Lunar Lake did.




