Elon Musk's Space Odyssey: SpaceX Acquires xAI in Landmark $1.25 Trillion Deal

In a move that has sent shockwaves through the tech and aerospace industries, Elon Musk's SpaceX has acquired his artificial intelligence startup, xAI, in a deal valued at a staggering $1.25 trillion. The merger, announced on Monday, combines two of Musk's most ambitious ventures.

The deal, first reported by Bloomberg, will see xAI's shares converted into SpaceX stock at a rate of roughly seven to one, with the combined entity's shares priced at $527 each. This marks a significant markup from SpaceX's recent $800 billion valuation in a secondary stock sale.

Musk's vision for the merged company is to develop "orbital data centers" that can power the energy-intensive computing required for AI development, addressing what he sees as a looming global electricity shortage. "Global electricity demand for AI simply cannot be met with terrestrial solutions, even in the near term," Musk wrote in a blog post.

The acquisition also comes as SpaceX prepares for a highly anticipated initial public offering (IPO) as early as June, which could raise up to $50 billion and make it the largest IPO in history, surpassing Saudi Aramco's $29 billion offering in 2019. The merger may complicate that timeline, however, as integrating the two companies could require additional time and resources. Nonetheless, Musk remains bullish on the combined entity's prospects, stating that it will "form the most ambitious, vertically integrated innovation engine on (and off) Earth, with AI, rockets, space-based internet, direct-to-mobile-device communications, and the world's foremost real-time information and free speech platform."

The move has drawn mixed reactions from industry observers. Some see it as a bold and visionary step; others have raised concerns about the risks and challenges involved, particularly the feasibility of Musk's space-based data center concept and the controversies surrounding xAI's products, such as the Grok chatbot and the X social media platform. Regardless of the debate, one thing is clear: Elon Musk is doubling down on his ambitious plans to revolutionize space exploration, artificial intelligence, and beyond, with this landmark acquisition serving as the latest chapter in his quest to "extend the light of consciousness to the stars."
Nvidia and OpenAI's Evolving Partnership: Navigating the Complexities of AI Investments

In a rapidly evolving AI landscape, the relationship between tech giants Nvidia and OpenAI has come under intense scrutiny. While the companies initially announced plans for a $100 billion investment deal, recent reports suggest the partnership is being rethought.

According to Nvidia CEO Jensen Huang, the $100 billion figure was "never a commitment," though the company still plans to make a "huge" investment in OpenAI. Huang has pushed back against reports of friction, calling them "nonsense" and stating, "I believe in OpenAI. The work that they do is incredible—they are one of the most consequential companies of our time, and I really love working with Sam [Altman, OpenAI's CEO]." Sources have indicated, however, that Huang has privately expressed concerns about OpenAI's business strategy and the competition it faces from rivals such as Anthropic and Google. The Wall Street Journal reported that the two companies are now renegotiating the terms of their partnership, with Nvidia potentially investing tens of billions of dollars through an equity deal rather than the previously announced $100 billion.

The shifting partnership has had ripple effects across the tech industry. Oracle, for instance, sought to reassure investors that its financial relationship with OpenAI remains strong despite the uncertainty surrounding Nvidia's involvement. That attempt to project confidence backfired, sending Oracle's stock price lower.

Nvidia has also sought to address concerns about the potential impact of its OpenAI investment on its broader customer base, assuring customers that its "investments will not change our focus or impact supply to our other customers—we will continue to make every customer a top priority, with or without any equity stake."

As the AI industry continues to evolve, the Nvidia-OpenAI relationship remains a closely watched dynamic. While the specifics of the partnership may be in flux, both companies appear committed to advancing the frontiers of artificial intelligence.
Distilling the tool-using capabilities of large language models (LLMs) into smaller, more efficient small language models (SLMs) is a key challenge for their practical application. The predominant approach, supervised fine-tuning (SFT), suffers from poor generalization because it trains models to imitate a static set of teacher trajectories rather than to learn a robust methodology. Reinforcement learning (RL) offers an alternative, but standard RL with sparse rewards fails to guide SLMs effectively, leaving them prone to inefficient exploration and suboptimal strategies. To address these distinct challenges, we propose MENTOR, a framework that synergistically combines RL with teacher-guided distillation. Instead of simple imitation, MENTOR employs an RL-based process to learn a more generalizable policy through exploration. In addition, to overcome reward sparsity, it uses the teacher's reference trajectory to construct a dense, composite teacher-guided reward that provides fine-grained guidance. Extensive experiments demonstrate that MENTOR significantly improves the cross-domain generalization and strategic competence of SLMs compared to both SFT and standard sparse-reward RL baselines.
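To make the dense-reward idea concrete, here is a minimal Python sketch. It is not MENTOR's actual design: the trajectory data layout, the sequence-similarity measure, and the blending weight are illustrative assumptions. The sketch blends the sparse task outcome with a fine-grained similarity between the student's tool-call sequence and the teacher's reference trajectory.

    from difflib import SequenceMatcher

    def teacher_guided_reward(student_traj, teacher_traj, task_success, alpha=0.5):
        """Composite dense reward: blend the sparse task outcome with a
        similarity score against the teacher's reference trajectory.
        The similarity measure and weighting here are illustrative choices."""
        # Dense shaping term: how closely the student's tool-call sequence
        # matches the teacher's (a simple sequence ratio as a stand-in for
        # any fine-grained trajectory comparison).
        student_calls = [step["tool"] for step in student_traj]
        teacher_calls = [step["tool"] for step in teacher_traj]
        similarity = SequenceMatcher(None, student_calls, teacher_calls).ratio()

        # Sparse outcome term: 1.0 if the task was solved, else 0.0.
        outcome = 1.0 if task_success else 0.0

        # Composite reward used to update the student policy with RL.
        return alpha * outcome + (1 - alpha) * similarity

    # Example: student used [search, calc]; teacher used [search, lookup, calc].
    student = [{"tool": "search"}, {"tool": "calc"}]
    teacher = [{"tool": "search"}, {"tool": "lookup"}, {"tool": "calc"}]
    print(teacher_guided_reward(student, teacher, task_success=False))

In a full RL loop, this composite value would replace the sparse outcome-only reward when updating the student policy, so partial progress along the teacher's trajectory still produces a learning signal.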
Gradient-based data attribution methods, such as influence functions, are critical for understanding the impact of individual training samples without requiring repeated model retraining. However, their scalability is often limited by the high computational and memory costs of per-sample gradient computation. In this work, we propose GraSS, a novel gradient compression algorithm, and its variant FactGraSS, specialized for linear layers, both of which explicitly leverage the inherent sparsity of per-sample gradients to achieve sub-linear space and time complexity. Extensive experiments demonstrate the effectiveness of our approach, achieving substantial speedups while preserving data influence fidelity. In particular, FactGraSS achieves up to 165% faster throughput on billion-scale models compared to previous state-of-the-art baselines. Our code is publicly available at https://github.com/TRAIS-Lab/GraSS.
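As a rough illustration of the sparsify-then-compress idea, here is a Python sketch using a count-sketch-style hash. This is a stand-in, not the actual GraSS or FactGraSS algorithm: the top-k selection, hashing scheme, and dimensions are assumptions for illustration.

    import numpy as np

    def sketch_gradient(grad, k=64, proj_dim=16):
        """Illustrative compression of one per-sample gradient:
        (1) keep only the k largest-magnitude entries (the sparsify step);
        (2) hash each surviving coordinate into a small sketch with a
            random sign, so no dense projection matrix is materialized."""
        idx = np.argpartition(np.abs(grad), -k)[-k:]
        sketch = np.zeros(proj_dim)
        for i in idx:
            # Python's hash is stable within a single run, so every gradient
            # compressed in this process uses the same bucket/sign map.
            bucket = hash((int(i), "bucket")) % proj_dim
            sign = 1.0 if hash((int(i), "sign")) % 2 else -1.0
            sketch[bucket] += sign * grad[i]
        return sketch

    # Influence-style score: the inner product of two sketches approximates
    # (in expectation) the inner product of the original gradients.
    rng = np.random.default_rng(0)
    g_train, g_test = rng.standard_normal(4096), rng.standard_normal(4096)
    print(sketch_gradient(g_train) @ sketch_gradient(g_test))

The point of the sketch is only that keeping a few coordinates and hashing them costs far less than storing or projecting the full gradient, while still preserving the inner products that influence scores rely on.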
Large language models (LLMs) have demonstrated promising performance in generating diagnostic conclusions from imaging findings, thereby supporting radiology reporting, trainee education, and quality control. However, systematic guidance on optimizing prompt design across different clinical contexts is still lacking, and a comprehensive, standardized framework for assessing the trustworthiness of LLM-generated radiology reports has yet to be established. This study aims to enhance the trustworthiness of LLM-generated liver MRI reports by introducing a Multi-Dimensional Credibility Assessment (MDCA) framework and providing guidance on institution-specific prompt optimization. The proposed framework is applied to evaluate and compare several advanced LLMs, including Kimi-K2-Instruct-0905, Qwen3-235B-A22B-Instruct-2507, DeepSeek-V3, and ByteDance-Seed-OSS-36B-Instruct, accessed through the SiliconFlow platform.
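For readers unfamiliar with this setup, the sketch below shows what institution-specific prompting of one of these models might look like through an OpenAI-compatible endpoint such as SiliconFlow's. The base URL, model identifier, system prompt, and sample findings are assumptions for illustration, not the study's actual protocol or prompts.

    # A minimal sketch of prompting an LLM to draft a diagnostic impression
    # from liver MRI findings via an OpenAI-compatible API.
    from openai import OpenAI

    client = OpenAI(
        base_url="https://api.siliconflow.cn/v1",  # assumed endpoint
        api_key="YOUR_API_KEY",
    )

    # An institution-specific system prompt would be tuned per site; this
    # wording is purely illustrative.
    SYSTEM_PROMPT = (
        "You are a radiologist. Given liver MRI findings, write a concise "
        "diagnostic impression following this institution's reporting style."
    )

    def generate_impression(findings: str,
                            model: str = "Qwen/Qwen3-235B-A22B-Instruct-2507"):
        response = client.chat.completions.create(
            model=model,  # assumed platform identifier for one evaluated model
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": findings},
            ],
            temperature=0.2,  # low temperature for more reproducible reports
        )
        return response.choices[0].message.content

    print(generate_impression(
        "Findings: 2.1 cm arterial-phase hyperenhancing lesion in segment VII "
        "with washout on portal venous phase; background cirrhosis."
    ))

Under a framework like MDCA, each generated impression would then be scored along multiple credibility dimensions rather than judged by a single accuracy metric.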