Grok 4 Fast arrives as cheaper alternative to xAI’s Grok-4

**Grok 4 Fast Arrives as Cheaper Alternative to xAI’s Grok-4**
*By Dwaipayan Roy | Sep 20, 2025, 06:46 PM*

Elon Musk’s artificial intelligence (AI) company, xAI, has unveiled a new model called **Grok 4 Fast**, a more affordable alternative to the existing Grok 4. Designed for both enterprise and consumer use cases, the model handles reasoning and non-reasoning tasks within a single framework, which keeps it both efficient and versatile.

### Cost Efficiency

Grok 4 Fast provides a significantly better cost-to-performance ratio compared to its predecessor. It uses approximately **40% fewer “thinking tokens”** than Grok 4 while maintaining similar accuracy across multiple benchmarks. An independent review by Artificial Analysis highlighted that Grok 4 Fast achieves comparable results to Grok 4 at up to **98% less cost**, making it a highly economical option for users.
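
As a rough back-of-envelope check on that figure, the sketch below combines the reported ~40% token reduction with assumed list prices: the Grok 4 output price of $15 per million tokens and the Grok 4 Fast output price of $0.50 per million are assumptions not stated in the article, which only gives the $0.20 input price.

```python
# Back-of-envelope sketch of the "~98% less cost" figure reported by Artificial Analysis.
# The per-million output prices below are assumptions; only the ~40% reduction in
# "thinking tokens" comes from the article.

GROK4_OUT_PER_M = 15.00       # assumed $ per 1M output (thinking) tokens, Grok 4
GROK4_FAST_OUT_PER_M = 0.50   # assumed $ per 1M output tokens, Grok 4 Fast

grok4_thinking_tokens = 1_000_000                      # illustrative benchmark workload
fast_thinking_tokens = grok4_thinking_tokens * 0.60    # ~40% fewer thinking tokens

grok4_cost = grok4_thinking_tokens / 1e6 * GROK4_OUT_PER_M
fast_cost = fast_thinking_tokens / 1e6 * GROK4_FAST_OUT_PER_M

print(f"Grok 4:      ${grok4_cost:.2f}")
print(f"Grok 4 Fast: ${fast_cost:.2f}  ({1 - fast_cost / grok4_cost:.0%} cheaper)")
```

Under these assumed prices, a workload that costs $15.00 on Grok 4 drops to about $0.30 on Grok 4 Fast, which is where a figure on the order of 98% savings can come from.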

### Benchmark Performance

The model has demonstrated impressive performance on a range of industry-standard benchmarks:

– **85.7%** on GPQA Diamond
– **92%** on AIME 2025
– **93.3%** on HMMT 2025

These scores are close to those of Grok 4. Additionally, Grok 4 Fast showed improvements in code execution and search-based tasks, achieving a **95% pass rate on SimpleQA** and around **74% on X Bench Deepsearch**.

### Key Features

– **Massive Context Window:** Grok 4 Fast supports a two million token context window, enabling it to process very large inputs efficiently.
– **Reinforcement Learning:** The model was trained using reinforcement learning techniques to boost efficiency.
– **Unified Framework:** Unlike previous versions that required separate reasoning and non-reasoning models, Grok 4 Fast integrates both into a single system. This integration reduces latency and lowers operational costs significantly.

### Availability and Pricing

Grok 4 Fast is available in two variants: **grok-4-fast-reasoning** and **grok-4-fast-non-reasoning**, both supporting the same two-million-token context window.

Pricing starts at **$0.20 per million input tokens** for smaller workloads, with rates increasing based on token usage. This pricing structure makes Grok 4 Fast an ideal choice for large projects or anyone seeking a fast, affordable AI solution without sacrificing accuracy.

Users can access Grok 4 Fast through xAI’s website, mobile applications, OpenRouter, and the Vercel AI Gateway. Some platforms offer free access during the launch period.
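
For developers, access is through an OpenAI-compatible API. The minimal sketch below assumes the xAI endpoint `https://api.x.ai/v1` and a hypothetical `XAI_API_KEY` environment variable; only the model name `grok-4-fast-reasoning` comes from the article.

```python
# Minimal sketch of calling Grok 4 Fast through an OpenAI-compatible client.
# The base URL and environment variable are assumptions, not taken from the article.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.x.ai/v1",        # assumed xAI endpoint
    api_key=os.environ["XAI_API_KEY"],     # hypothetical env var holding your key
)

response = client.chat.completions.create(
    model="grok-4-fast-reasoning",
    messages=[{"role": "user", "content": "Summarize the key risks in this contract: ..."}],
)
print(response.choices[0].message.content)
```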

With its blend of cost-efficiency, robust performance, and large context capacity, Grok 4 Fast positions itself as a compelling option in the evolving AI landscape.
https://www.newsbytesapp.com/news/science/elon-musk-s-xai-launches-grok-4-fast-check-features-pricing/story

India’s first trillion-parameter model to power next-gen AI apps

**India’s First Trillion-Parameter AI Model to Power Next-Gen Applications**

*By Mudit Dube | Sep 19, 2025, 05:18 PM*

In a major stride for India’s artificial intelligence landscape, BharatGen—a government-backed consortium led by IIT Bombay—has been awarded over ₹900 crore under the IndiaAI Mission. This substantial funding will support the creation of India’s first trillion-parameter large language model (LLM), designed to fuel next-generation AI applications across multiple sectors.

### Building a Trillion-Parameter Model for India

The ambitious project aims to develop a massive AI model tailored specifically for Indian contexts. However, this colossal “mother” model is not intended for direct consumer use. Instead, it will be distilled into smaller, domain-specific models suited for industries such as law, agriculture, and finance.

Rishi Bal, Executive Vice President at BharatGen, explained that these distilled models could serve practical uses—like agricultural advisory tools available in various regional languages or legal assistants trained on Indian case law—making AI more accessible and useful across diverse fields.
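
The article does not describe BharatGen’s actual distillation pipeline. As a general illustration of the technique, the sketch below shows the classic knowledge-distillation loss, in which a small student model is trained to match a large teacher’s softened output distribution as well as the ground-truth labels; all names and hyperparameters are illustrative (PyTorch).

```python
# Generic knowledge-distillation loss (Hinton-style), shown only as an illustration
# of the technique; this is not BharatGen's pipeline.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """student_logits, teacher_logits: (batch, num_classes); labels: (batch,)."""
    # Soft targets: KL divergence between temperature-softened distributions.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: standard cross-entropy against ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```

In this setup the teacher would be the large “mother” model and the student a much smaller, domain-specific model such as a legal or agricultural assistant.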

### Creating a Sovereign Indian Dataset

To ensure that the LLM accurately reflects India’s unique languages and cultures, BharatGen is heavily investing in building a sovereign dataset. The consortium is collaborating with publishers to license archival content and is providing free OCR services to digitize regional texts.

Furthermore, crowdsourced annotation efforts are underway to capture the linguistic nuances and cultural specificities of Indian languages. This indigenous data collection strategy is aimed at reducing reliance on foreign datasets and better aligning AI outputs with Indian realities.

### Overcoming GPU Supply and Funding Challenges

Training a trillion-parameter AI model requires thousands of GPUs working in parallel, and hardware availability remains a key challenge. Bal noted that, like many in the field, BharatGen must navigate GPU supply constraints.

The ₹900 crore government funding will partially subsidize GPU acquisition, supporting the computational backbone of this mammoth training effort. Under the IndiaAI Mission, nearly 40,000 GPUs have been allocated across various initiatives, including BharatGen’s sovereign LLM project.

### Focus on Reliability and Real-World Impact

BharatGen CEO Ganesh Ramakrishnan emphasized that the focus is on building models grounded in Indian data and languages rather than simply scaling up parameters. He highlighted the importance of reliability for real-world applications.

The consortium plans to release distilled models to developers, enabling startups and enterprises to build AI-powered solutions without needing to train massive models from scratch. This approach is expected to accelerate innovation and democratize access to cutting-edge AI technology.

### Collaborative, Efficient Operations

Operating on a hub-and-spoke model with teams spread across India, BharatGen brings together engineers, data scientists, and domain experts while maintaining lean operations. This distributed structure fosters collaboration and specialization.

Looking ahead, BharatGen is exploring public-private partnerships and sustainable revenue models such as licensing distilled AI models — ensuring continuous growth and broader adoption of Indian AI technologies.

With this landmark project, BharatGen is paving the way for AI systems that are not only powerful but also deeply rooted in India’s linguistic and cultural landscape, promising impactful and reliable applications across the nation’s key sectors.
https://www.newsbytesapp.com/news/science/iit-bombay-s-bharatgen-to-build-1t-parameter-ai-model/story

Huawei co-develops safety-focused DeepSeek model to block politically sensitive topics

**Huawei Co-Develops Safety-Focused DeepSeek Model to Block Politically Sensitive Topics**

*By Akash Pandey | Sep 19, 2025, 06:45 PM*

Huawei, the Chinese tech giant, has announced the co-development of a modified version of the artificial intelligence (AI) model DeepSeek. The new variant, named **DeepSeek-R1-Safe**, is reported to be “nearly 100% successful” in censoring politically sensitive topics, aligning with China’s stringent regulations that require domestic AI models and applications to uphold “socialist values.”

### Training and Development

Huawei trained DeepSeek-R1-Safe using 1,000 of its own Ascend AI chips. The model is adapted from DeepSeek’s open-source R1 version and was developed in collaboration with Zhejiang University, the alma mater of DeepSeek’s founder, Liang Wenfeng. However, neither DeepSeek nor Liang Wenfeng was directly involved in this latest project.

### AI and Political Sensitivity in China

Chinese AI chatbots, such as Baidu’s Ernie Bot—China’s counterpart to OpenAI’s ChatGPT—commonly avoid discussing domestic politics or sensitive topics. These limitations reflect the ruling Communist Party’s guidelines, aiming to control and manage online discourse.

### Model Efficiency and Performance

Huawei reports that DeepSeek-R1-Safe achieves an impressive success rate of nearly 100% in defending against “common harmful issues.” These issues include toxic speech, politically sensitive content, and incitement to illegal activities.

However, its effectiveness drops to 40% when attacks are disguised, for example through scenario framing, role-play, or encrypted coding.

In comprehensive testing, DeepSeek-R1-Safe’s security defense capability reached 83%, outperforming several contemporary AI models—including Qwen-235B and DeepSeek-R1-671B—by 8% to 15% under identical conditions. Despite the enhanced safety features, the new model exhibited less than a 1% performance degradation compared to its predecessor, DeepSeek-R1.

Huawei’s development of DeepSeek-R1-Safe underscores the growing emphasis on AI safety and regulatory compliance in China’s technology sector, reflecting broader governmental priorities around information control and adherence to “socialist values.”
https://www.newsbytesapp.com/news/science/huawei-unveils-ai-model-deepseek-r1-safe-to-filter-politically-sensitive-content/story

This AI predicts your disease risks 10 years in advance

**This AI Predicts Your Disease Risks 10 Years in Advance**

*By Mudit Dube | Sep 18, 2025*

A team of scientists has developed a groundbreaking artificial intelligence (AI) tool capable of predicting an individual’s risk for over 1,000 diseases. The innovative system, dubbed **Delphi-2M**, can forecast health changes up to a decade in advance. The research, published in the journal *Nature*, highlights the potential of generative AI to model human disease progression on a large scale.

### How Delphi-2M Works

Delphi-2M uses algorithmic concepts similar to those found in large language models (LLMs). It predicts the likelihood of developing diseases such as cancer, diabetes, heart disease, and respiratory disorders by analyzing key “medical events” in a patient’s history—like diagnosis dates—and lifestyle factors, including obesity status, smoking or drinking habits, age, and sex.

The AI was trained on anonymized patient data from two major healthcare sources: the UK Biobank study, comprising 400,000 participants, and Denmark’s national patient registry, which includes 1.9 million patients.
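
The article compares Delphi-2M’s machinery to an LLM; the toy sketch below illustrates the general idea of turning a patient’s history into a chronological token sequence that a GPT-style model could consume. The event codes, field names, and vocabulary are hypothetical, not Delphi-2M’s actual schema.

```python
# Hypothetical, simplified encoding of one patient's history as a token sequence.
from dataclasses import dataclass

@dataclass
class HealthEvent:
    code: str      # e.g. a diagnosis code or a lifestyle marker (illustrative)
    age_days: int  # patient age (in days) when the event was recorded

VOCAB = {"SEX_F": 0, "SMOKER": 1, "E11": 2, "I10": 3, "C34": 4}  # toy vocabulary

def encode_history(events):
    """Turn a chronological event list into parallel token and age arrays,
    the kind of input a GPT-style sequence model can be trained on."""
    events = sorted(events, key=lambda e: e.age_days)
    tokens = [VOCAB[e.code] for e in events]
    ages = [e.age_days for e in events]
    return tokens, ages

history = [
    HealthEvent("SEX_F", 0),
    HealthEvent("SMOKER", 20 * 365),
    HealthEvent("I10", 51 * 365),   # hypertension diagnosis at ~51
    HealthEvent("E11", 58 * 365),   # type 2 diabetes diagnosis at ~58
]
print(encode_history(history))
```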

### Potential Impact on Personalized Healthcare

Delphi-2M predicts health risks expressed as rates over time, much like weather forecasts. According to Ewan Birney, interim executive director of EMBL, patients could benefit from the tool within a few years. He envisions a future where clinicians use AI tools like Delphi-2M to identify major health risks early and suggest lifestyle changes to mitigate them.
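
The article does not specify the output format beyond “rates over time.” Purely as an illustration, one standard way to read such a rate is to convert a constant annual hazard into a cumulative probability over a horizon:

```python
# Standard survival-analysis identity, shown only to illustrate what a
# "rate over time" can mean; not Delphi-2M's actual output format.
import math

def cumulative_risk(annual_rate: float, years: float) -> float:
    """Probability of at least one event within the horizon, given a constant annual hazard."""
    return 1.0 - math.exp(-annual_rate * years)

# Illustrative numbers only: a 1.5% annual rate over a 10-year horizon.
print(f"{cumulative_risk(0.015, 10):.1%}")   # ~13.9%
```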

This marks a significant step forward in personalized healthcare and disease prevention strategies.

### Advantages Over Existing Methods

Birney also emphasized Delphi-2M’s advantages over current risk-assessment models such as QRISK. Unlike single-disease models, Delphi-2M can assess multiple diseases simultaneously and provide predictions over an extended time frame.

The research team noted that Delphi-2M’s accuracy in predicting disease rates based on an individual’s past medical history rivals that of existing single-disease models.

### Future Prospects: Revolutionizing Healthcare with Generative Models

Professor Moritz Gerstung from the German Cancer Research Center described Delphi-2M as a major advancement in understanding human health and disease progression. He believes generative AI models like Delphi-2M could eventually personalize care and anticipate healthcare needs on a much larger scale.

This breakthrough underscores the transformative potential of AI tools in predicting individual health risks and enabling proactive healthcare management.

https://www.newsbytesapp.com/news/science/delphi-2m-ai-tool-predicts-risk-of-over-1-000-diseases/story