Over the last few days, the U.S. Department of Energy (DOE) announced a pair of strategic partnerships to build no fewer than four powerful AI supercomputers, spread across two national laboratories. AMD and Nvidia will be powering two major U.S. government-backed AI infrastructure projects—AMD with HPE for the Sovereign AI Factory supercomputers and Nvidia with Oracle for the DOE's largest AI system yet, though Oracle will be involved with AMD's project as well.
In the semiconductor industry, virtually every major chip maker leverages physically accurate digital twins and simulation technologies throughout the design and manufacturing process to gain invaluable insights into their devices before a single wafer is prepped at the fab. When building chips, it is essentially a given that simulations and digital twins are used early and often to ensure optimal performance, power, and area (PPA), but the same can't be said of other industries. Even scaling up only to the system level, for example, digital twins have been adopted by just a small fraction of companies. In this day and age of gigawatt AI factories and advanced data centers, however, it's borderline silly not to leverage digital twins early in the design phase of complex projects.
Nvidia has been working on a set of performance testing tools, called DGX Cloud Benchmark Recipes, that are designed to help organizations evaluate how their hardware and cloud infrastructure perform when running the most advanced AI models available today. Our team at HotTech had a chance to kick the tires on a few of these recipes recently, and found the data they can capture to be extremely insightful.
The new cutting-edge GPU architecture brings real innovation for PC gamers, from DLSS 3 upscaling to advancements in ray tracing acceleration, but it also offers a ton of horsepower for creators and designers.
NVIDIA's Lovelace is indeed a beastly slab of silicon with a more-of-everything design approach. However, its base chip architecture was also designed with innovations in its various silicon engines, in an effort to scale performance and visuals beyond the limitations of Moore's Law.
Arriving in the first half of 2023, NVIDIA's Grace Superchips will become part of the building momentum behind the highly scalable Armv9 architecture in the data center, and give legacy x86 a run for its money.
AI and the metaverse seemingly go hand-in-hand, but NVIDIA is clearly all-in on both emerging markets and their enabling technologies.
NVIDIA GeForce SVP Jeff Fisher kicked off a presentation packed with new PC gaming and graphics disclosures, including new GPUs for both gaming laptops and desktop PCs.
Google open-sourced BERT so that others could train their own conversational question-answering systems. And today, NVIDIA announced that its AI compute platform was the first to train BERT in less than an hour and complete AI inference in just over 2 milliseconds.