Nvidia surprises with powerful, open AI models

Nvidia has released a powerful open-source AI model that rivals proprietary systems from industry leaders such as OpenAI and Google. The model, called NVLM 1.0, delivers exceptional performance on vision-language tasks while also improving its text-only capabilities. Michael Nuñez reports on the development for VentureBeat. The main model, NVLM-D-72B, with 72 billion parameters, can process … Read more
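
For readers who want to try the model themselves, a minimal loading sketch follows. It assumes the weights are published on Hugging Face under an identifier like nvidia/NVLM-D-72B and can be loaded with the standard transformers AutoModel/AutoTokenizer calls; the model card is authoritative for the exact repo id and generation helpers.

```python
# Minimal sketch: loading a large open vision-language model with Hugging Face
# transformers. The repo id and the trust_remote_code requirement are
# assumptions; check the model card for the exact usage.
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "nvidia/NVLM-D-72B"  # assumed Hugging Face identifier

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModel.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # 72B parameters: bf16 plus several GPUs
    device_map="auto",           # shard the weights across available GPUs
    trust_remote_code=True,      # the repo ships custom multimodal code
).eval()

# Text and image prompts then go through the chat/generation helpers defined
# by the model's custom code; see the model card for those entry points.
```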

Nvidia, Salesforce see ‘gigantic opportunity’ for AI agents

AI agents will fundamentally change the future of work. This is the prediction made by Nvidia founder Jensen Huang and Salesforce CEO Marc Benioff at the Dreamforce conference, as reported by VentureBeat. According to Huang, the capabilities of AI agents are developing rapidly and offer “gigantic” opportunities. They will soon be able to solve complex … Read more

New AI chips challenge Nvidia

Start-ups Cerebras and Groq have unveiled powerful processors for AI inference that aim to outperform Nvidia’s dominant GPUs. As VentureBeat reports, Cerebras’ new CS-3 chip packs 4 trillion transistors and delivers 125 petaflops of processing power, while Groq counters with an energy-efficient tensor streaming processor. Both companies are targeting the growing market for AI applications. Experts … Read more

Nvidia shows new model for synthetic data

According to Nvidia, its new open language model Nemotron-4 340B will revolutionize synthetic data generation and enable companies to develop custom AI models.
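
As a rough illustration of what synthetic data generation looks like in practice, the sketch below prompts a large instruct model served behind an OpenAI-compatible endpoint to produce training examples. The endpoint URL, model name, and API key are placeholders rather than confirmed Nemotron details; adapt them to whatever deployment you actually run.

```python
# Sketch of synthetic-data generation against an OpenAI-compatible endpoint
# (for example a hosted or self-deployed Nemotron-4 340B Instruct service).
# base_url, api_key, and the model identifier below are assumptions.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed OpenAI-compatible endpoint
    api_key="not-needed-for-local",       # placeholder
)

SEED_TOPICS = ["returns policy", "password reset", "invoice questions"]

synthetic_examples = []
for topic in SEED_TOPICS:
    resp = client.chat.completions.create(
        model="nemotron-4-340b-instruct",  # assumed model identifier
        messages=[
            {"role": "system", "content": "You write realistic customer-support questions."},
            {"role": "user", "content": f"Write one short customer question about: {topic}"},
        ],
        temperature=0.9,  # higher temperature for more varied synthetic data
    )
    synthetic_examples.append(resp.choices[0].message.content)

print(synthetic_examples)
```

The generated questions could then be reviewed, filtered (for instance with a reward model), and used as training data for a smaller custom model.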

Nvidia Blackwell is a new hardware architecture for AI

Nvidia introduced “Blackwell,” a new hardware architecture for AI applications. It is designed to significantly increase the efficiency of data centers and accelerate the development of new AI solutions. New systems based on Blackwell will be offered by a number of vendors, including Asus, Gigabyte and Supermicro, and are expected to be suitable for both … Read more

Nvidia Inference Microservices accelerate development

Nvidia introduces NIM (Nvidia Inference Microservices), a new technology that, according to the company, enables developers to deliver AI applications in minutes instead of weeks. These microservices package optimized models as containers that can be deployed in clouds, data centers, or on workstations. The goal is to enable organizations to build generative AI applications such as copilots, chatbots, and … Read more
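
Since NIM containers are described as exposing optimized models behind a standard HTTP interface, a call against a locally running container might look roughly like the sketch below. The port, path, and model name follow the common OpenAI-compatible convention and are assumptions; the container’s own documentation is authoritative.

```python
# Sketch of querying a locally running inference microservice over its
# (assumed) OpenAI-compatible HTTP API.
import requests

resp = requests.post(
    "http://localhost:8000/v1/chat/completions",  # assumed local NIM endpoint
    json={
        "model": "meta/llama3-8b-instruct",        # assumed model identifier
        "messages": [{"role": "user", "content": "Summarize what a NIM container provides."}],
        "max_tokens": 128,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```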

Kyndryl and Nvidia want to make enterprise AI easier

IT service provider Kyndryl and Nvidia are working together to make it easier for businesses to use generative AI. The partnership combines Nvidia’s hardware and software with Kyndryl’s expertise in implementing and scaling AI projects.

Nvidia ChatRTX supports Google Gemma

Nvidia’s ChatRTX chatbot now supports Google’s Gemma model, allowing users to interact with their own documents, photos, and YouTube videos. The update also includes voice search and offers more ways to search locally stored data using different AI models.

Nvidia Chat with RTX: Local AI

Local AI is an interesting concept: AI assistants similar to ChatGPT that run not in the cloud, but on your own PC or on a server you operate yourself. One challenge is response speed. Nvidia has now presented “Chat with RTX”, which draws on the computing power of the user’s Nvidia graphics card. … Read more
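
To make the local AI idea concrete, the sketch below runs a small open chat model entirely on a local GPU using Hugging Face transformers. Chat with RTX itself is a packaged application built on Nvidia’s own stack, so this is only a generic illustration of local, offline inference; the model name is just an example.

```python
# Generic illustration of local inference: a small open model generates text
# on the user's own GPU, with no cloud service involved.
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # example of a small local model
    torch_dtype=torch.float16,
    device_map="auto",  # place the weights on the local GPU if one is available
)

prompt = "Question: Why can running an AI assistant locally feel slower than a cloud service?\nAnswer:"
result = generator(prompt, max_new_tokens=80, do_sample=False)
print(result[0]["generated_text"])
```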