Cadence Debuts Its First System Chiplet Silicon To Accelerate Physical AI Development

Cadence Design Systems made a major advancement with its system chiplet, one that may further accelerate the semiconductor industry’s migration toward chiplet-based architectures. The company detailed the successful silicon bring-up of its system chiplet architecture, the cornerstone of a broader chiplet ecosystem vision designed to push modular silicon platforms forward. I first wrote about Cadence’s system chiplet earlier this year…

Read More
SiTime’s Precision MEMS Timing Technology Boosts Earnings In The AI Era

In the drive to build more powerful AI platforms and denser compute clusters with fast interconnects, one of the unsung, foundational technologies required is also becoming one of the most critical—precision timing. Every AI server, optical module, and high-speed network link (among many other things) depends on accurate, precision timing signals to keep hundreds or thousands of processors synchronized. Without them, latency increases, data errors multiply, and efficiency drops. That’s where SiTime has carved out its niche.

Read More
US DOE Taps Nvidia, AMD, And Oracle For Quartet Of Powerful AI Supercomputers

Over the last few days, the U.S. Department of Energy (DOE) announced a pair of strategic partnerships to build no fewer than four powerful AI supercomputers, spread across two national laboratories. AMD and Nvidia will power two major U.S. government-backed AI infrastructure projects: AMD with HPE on Sovereign AI Factory supercomputers, and Nvidia with Oracle on the DOE’s largest AI system yet. Oracle will also be involved in AMD’s project.

Read More
Nvidia And Uber Aim To Make Robotaxis Real By 2027

Nvidia and Uber have put a firm date on a milestone the AV industry has been striving toward for years: large-scale robotaxis in regular service. The two companies plan to begin ramping a global autonomous fleet in 2027, growing toward 100,000 vehicles that will eventually roll directly onto Uber’s ride-hailing network. The backbone of the autonomous solution is Nvidia’s DRIVE AGX Hyperion 10 platform running the company’s DRIVE AV software stack, paired with a joint “AI data factory” leveraging Nvidia’s Cosmos development platform that will train foundational AI models on “trillions” of real-world and synthetic driving miles.

Read More
Dave Altavilla | NVIDIA
NextSilicon’s Dataflow Chip Could Disrupt The Processor Landscape

With its new Maverick-2 accelerator, built on what the company calls an Intelligent Compute Architecture, NextSilicon is betting on a long-pursued but rarely realized approach to accelerating HPC and data center workloads, known as dataflow computing. A dataflow architecture is designed such that the data itself, not instruction sequences, drives computation. The company believes it’s finally solved the twin barriers that kept dataflow architectures confined to research labs: programmability and practicality.

Read More
SiTime’s Titan MEMS Resonators Signal A Shift In The $4 Billion Timing Market

In the world of high-tech electronics, timing is everything—literally. SiTime Corp., a long-standing precision-timing solutions company, is aiming to reset the clock, so to speak, on how resonators are designed and deployed. The company’s newly announced Titan platform introduces a family of MEMS-based resonators that are markedly smaller, more resilient, and more easily integrated than traditional quartz designs.

Read More
Dave Altavilla | SiTime
Cadence Built An Nvidia DGX SuperPOD Digital Twin With Incredible Scale And Accuracy

In the semiconductor industry, virtually every major chip maker leverages physically accurate digital twins and simulation technologies throughout the design and manufacturing process to gain invaluable insights into their devices before a single wafer is prepped at the fab. When building chips, it is essentially a given that simulations and digital twins are used early and often to ensure optimal performance, power, and area (PPA), but the same can’t be said of other industries. Even at the system level, for example, digital twins have been adopted by only a small fraction of companies. In this era of gigawatt AI factories and advanced data centers, however, it’s borderline silly not to leverage digital twins early in the design phase of complex projects.

Read More
SiFive Expands Its RISC-V Intelligence Family To Address Exploding AI Workloads

SiFive just announced an array of new additions to its product stack that run the gamut from tiny, ultra-low-power designs for far-edge IoT devices to more powerful engines for AI data centers, and everything in between. SiFive’s 2nd Generation Intelligence Family of RISC-V processor IP includes five new products: the X160 Gen 2 and X180 Gen 2, both of which are brand-new designs, plus upgraded versions of the X280, X390, and XM cores.

Read More
Google Pixel 10 Series Puts AI At The Core Of Smartphone Evolution

The Google Pixel 10 Series arrives at a time when the smartphone market feels more than saturated, even predictable. Yearly cycles bring faster processors, brighter displays, and incremental camera bumps. What stands out this year is not that the Pixel 10 Pro XL can match other flagships in hardware refinement, but that Google is repositioning its Pixel 10 series as a contextually aware assistant, with thoughtfully built, helpful AI at its core.

Read More
Nvidia Jetson AGX Thor Dev Kit Raises The Robotics Bar With Blackwell

Nvidia’s Jetson lineup has long been the company’s proving ground for embedded AI computing at the edge, especially in robotics, industrial automation, and autonomous vehicles. With the new Jetson AGX Thor developer kit, Nvidia is introducing a platform targeted at advancing machine learning in an arena where “physical AI”—robots, autonomous machines, and sensor-rich devices—must process vast amounts of data in real time.

Read More