Tech News Today reads like a balance sheet for the AI era, where video models burn cash faster than they generate revenue, strategic alliances are rewritten in real time, and every surface from your TV to your photo archive is being refitted with an always-on AI layer that quietly reshapes how you use technology.
Daily technology news update: 12 November 2025
Across today’s stories, three pressures keep repeating:
- First, the raw cost of running frontier models at scale is colliding with the expectation that consumer AI should feel cheap or free.
- Second, the big platforms no longer trust partnerships alone to secure their place in an eventual AGI landscape, so they are building their own parallel tracks even while they collaborate.
- Third, AI is moving out of demo land into infrastructure, turning TVs into assistants, archives into generative canvases, storage cleaners into mini games, and video platforms into real-time explainers. The companies that win this phase will be the ones that can make AI feel ambient and inevitable without letting the economics or governance spiral out of their control.
OpenAI leans on unsustainable Sora spending to buy time and data
Forbes’ estimates suggest OpenAI is spending about 15 million dollars per day to run Sora, which translates to roughly 5.4 billion dollars a year and more than a quarter of the company’s projected 20 billion dollars in annual recurring revenue. The model is straightforward and brutal. If 25 percent of Sora’s 4.5 million iOS users each generate around 10 videos per day, and a 10-second clip costs roughly 1.30 dollars to produce, the numbers scale into something closer to an energy bill than a software cost line.
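Those assumptions do multiply out to the headline figures. A minimal back-of-the-envelope sketch, using only the numbers the report assumes (user base, active share, generation rate, and per-clip cost, none of them confirmed by OpenAI):

```python
# Back-of-the-envelope check of the Forbes estimate. All inputs are the
# article's assumed figures, not confirmed OpenAI numbers.
ios_users = 4_500_000            # Sora iOS user base cited in the report
active_share = 0.25              # share assumed to generate videos daily
videos_per_user_per_day = 10     # assumed clips per active user per day
cost_per_clip_usd = 1.30         # estimated cost of one 10-second clip

daily_cost = ios_users * active_share * videos_per_user_per_day * cost_per_clip_usd
annual_cost = daily_cost * 365
projected_arr = 20_000_000_000   # projected annual recurring revenue in USD

print(f"Daily cost:   ${daily_cost / 1e6:.1f}M")           # ~$14.6M, close to the $15M/day figure
print(f"Annual cost:  ${annual_cost / 1e9:.2f}B")          # ~$5.34B, in line with the $5.4B estimate
print(f"Share of ARR: {annual_cost / projected_arr:.0%}")  # ~27%, more than a quarter
```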
OpenAI, which reportedly lost more than 12 billion dollars last quarter, declined to comment on the analysis, but Sora lead Bill Peebles has already said the economics are “completely unsustainable.” This is the old internet playbook at video scale. You subsidize the product heavily, you harvest prompts and usage patterns as training data, and you trust that a combination of algorithmic improvements and hardware gains will make generation five times cheaper by 2026. The risk is simple. If costs do not fall as quickly as promised, OpenAI is not just running an expensive product, it is underwriting an entire cultural experiment in free AI video that may prove far harder to charge for later.
Yann LeCun’s world-model vision is about to leave Meta’s walls
Meta’s chief AI scientist Yann LeCun is reportedly preparing to leave and build a startup focused on world models, and that is not a routine senior-executive reshuffle. LeCun is one of the field’s foundational figures, from his LeNet work on early convolutional neural networks for handwriting recognition in the late 1980s and early 1990s to his role creating FAIR in 2013 as Meta’s fundamental research lab. At Meta he has pushed self-supervised learning, long-horizon world-model research, and architectures for autonomous systems that can operate under uncertainty. The Financial Times reports that he is already in early capital-raising talks, with the new company centered on structured representations of the physical and conceptual world that can support prediction and decision-making.
The timing is awkward for Meta. The company has pledged more than 600 billion dollars in US investments by 2028 across AI infrastructure and workforce initiatives, taken a 14.3 billion dollar stake in Scale AI, and restructured its AI orgs while leaving elite units like Meta Superintelligence Labs untouched. If LeCun’s vision for world models finds cleaner execution and governance outside Meta, the company may discover that its most important long-term bet on machine understanding left just as it doubled down on “superintelligence,” and that it now has to compete with a founder it once positioned at the center of its research story.
Microsoft rewrites its OpenAI deal to run its own AGI race
Microsoft and OpenAI have signed a new definitive agreement that removes a key brake on Redmond’s ambitions. Under the prior terms, Microsoft was reportedly restricted from independently pursuing AGI through 2030. The updated deal lifts that limit and clears the way for Microsoft to run its own frontier-model agenda alongside OpenAI instead of treating the lab as its default AGI pipeline. Microsoft AI chief Mustafa Suleyman framed the shift plainly, saying the company needs to be “self-sufficient in AI” and must train “frontier models of all scales” on its own compute and data. A new MAI Superintelligence team is designed to make that statement operational by building an internal research group with the same level of ambition as the external partner.
The agreement still protects Microsoft’s roughly 13 billion dollar stake and cements IP rights over OpenAI models and products, including post-AGI systems, through 2032. It also routes any OpenAI claim of AGI through an independent expert panel rather than a unilateral press release. The result is a more explicit dual track. Microsoft will continue to distribute and monetize OpenAI models, but it is no longer structurally bound to hope that OpenAI’s roadmap aligns perfectly with its own strategic timing.
Samsung turns the TV into a primary AI assistant
Samsung’s Vision AI Companion turns its 2025 TV line into a general-purpose assistant that sits on the biggest screen in the room instead of in a smart speaker on a side table. The feature is a conversational layer that lets users ask what an actor is known for, who created an artwork on screen, or what the score was in a game being referenced, and then aligns answers with the video without cutting playback.
Image: Samsung
Beyond recognition, Vision AI Companion offers recommendations for shows and films, gives cooking guidance, shares travel tips based on destinations or themes, and surfaces local restaurants. It essentially tries to compress the “pick up your phone and search” reflex into a single interaction that never leaves the TV. The system uses models from Microsoft Copilot and Perplexity and coordinates other Samsung AI features such as automatic picture optimization and live translation through conversational prompts. Shipping across the 2025 TV range with support for 10 languages and positioned in a home where the TV is already the natural focal point, this is Samsung’s attempt to claim the living room as an AI surface before a competing ecosystem defines the default behavior there.
Galaxy S26 leak focuses on practical camera upgrades and real battery gains
Leaked firmware for Samsung’s Galaxy S26 and S26 Plus suggests a generational update that listens to user complaints instead of chasing marketing slogans. Both phones are expected to move from the long-serving 50 megapixel ISOCELL S5KGN3 main sensor to the newer 50 megapixel ISOCELL S5KGNG, and to replace the 10 megapixel telephoto S5K3K1 with a 12 megapixel S5K3LD telephoto module. On paper, resolution bumps are modest. In practice, the new modules target sharper zoom images and cleaner hybrid ranges, a direct answer to criticism that non-Ultra models lag behind in telephoto clarity. The 12 megapixel Sony IMX564 ultrawide appears unchanged, which is telling. Samsung seems to believe its problems are in primary and zoom performance, not in ultrawide.
On the video side, the firmware points to Advanced Professional Video at up to 4K at 60 frames per second across both front and rear cameras, which signals that creators are meant to treat any lens as a viable primary. Underneath, the usual split remains, with Exynos 2600 in most regions and Snapdragon 8 Elite Gen 5 in markets like the United States, while APV remains supported on both. Battery changes are asymmetrical. The S26 reportedly grows to 4,300 mAh from 4,000 mAh, while the S26 Plus holds at 4,900 mAh, with real gains expected from efficiency improvements and possible Qi2 magnetic charging. If the leak holds, this is a refinement cycle designed to tighten the experience where it actually breaks down in daily use, not to inflate numbers on a slide.
Apple’s Adaptive Power feature quietly rewires iPhone battery expectations
Adaptive Power in iOS 26 is a small label for a meaningful shift in how Apple uses AI to manage the hardware it already ships. Unlike Low Power Mode, which dims screens and pulls back on background activity in broad strokes, Adaptive Power uses on-device intelligence to learn roughly a week of user behavior and then trims performance only in moments where the battery would benefit most, such as during intensive video recording, photo editing, or gaming. It is enabled by default on the iPhone 17 series and iPhone Air and is available as an opt-in for Apple Intelligence capable devices like the iPhone 16 line and iPhone 15 Pro models.
When it kicks in, users see a notification, but there is no new interaction model to learn. The feature lives under Settings, in the Battery section, as a new Power Mode option with an additional toggle for notifications. The more interesting point is strategic. Apple is tying this kind of fine-grained power management directly to its Apple Intelligence stack and keeping it off iPad and Mac, even on devices that support Apple Intelligence. In other words, battery life is becoming another arena where Apple can differentiate newer, AI-ready iPhone hardware without having to make a spectacle of it on stage.
Google One turns storage cleanup into a swipeable decision feed
Google One’s updated Storage Manager tries to turn the cognitively heavy task of cleaning up files into something closer to a guided feed. The new version, delivered in Google One 1.287.828055836, refreshes the Storage Manager page and the “Clean up” flows for Google Photos and Google Drive with Material 3 Expressive design elements and a new swipe interface. In the Photos cleanup pane, thumbnails get smaller so that more items fit on screen, the “select all” check mark aligns with the new design language, and filter chips occupy a single horizontal row for better scanning. A new card at the top nudges users toward clearing out redundant photos and videos.
The Drive cleanup pane mirrors this structure, with an encouragement card for large or unnecessary documents and downloads, but keeps larger thumbnails to preserve file name legibility. Once users select items to review, the experience shifts into a Tinder-style view where you swipe to keep or delete each suggestion, one item at a time. Android Authority notes that this swipe mechanic had been tested in Google Photos before, but it now ships inside Google One with a gradual rollout. The move acknowledges a basic reality of cloud storage. If Google wants users to keep paying for capacity, it has to make pruning feel less like account maintenance and more like a lightweight, almost casual decision loop.
Google Photos AI update makes your archive feel less static
Google Photos is expanding its AI toolkit to more than 100 countries and over 17 languages, and the cumulative effect is that the app feels less like a passive backup and more like a manipulable visual database. Users can now edit specific objects and people in photos, rely on a redesigned editor with simplified controls, and use an “Ask” button to request edits or context through natural language. Prompt-based editing that launched with the Pixel 10 series is extending to iOS users in the United States, who can speak or type commands such as “remove Riley’s sunglasses” or “make Engel smile” through a “Help me edit” entry point that understands face groups.
Video: Google
Google is also integrating its Nano Banana image model, which powers restyles into formats like Renaissance portraits or cartoons and supports AI templates that turn existing photos into retro portraits or action-figure-style images on Android in the United States and India. Alongside this, an AI-driven search feature that first launched in the United States is being rolled out to more than 100 countries with support for languages including Arabic, Bengali, French, German, Hindi, Indonesian, Italian, Japanese, Portuguese, and Spanish. The more these tools spread, the more a personal photo library stops being a fixed record of events and starts becoming raw material for continuous reinterpretation, which is powerful and slightly unsettling in equal measure.
Google’s Android PC experiments show how badly it wants a laptop story
Google’s confirmation that it will merge Chrome OS into Android, combined with leaks that Qualcomm is testing Android 16 on Snapdragon X series PC chips, makes clear that the company is not willing to cede the laptop narrative to Windows and macOS indefinitely. The Chrome OS side of this is framed as a multi-year transition designed to reduce duplicated engineering and centralize app compatibility, security, and features across phones, tablets, and larger screens.
The Qualcomm side is more technical but just as revealing. Internal repositories reportedly list Android 16 support for “Purwa,” the codename for the Snapdragon X family, including manifests for computer vision, audio, BTFM, and camera subsystems that look like real system bring-up, not a generic build. Sources are careful to say this does not guarantee that Snapdragon X laptops running Android are imminent. Detachable 2-in-1 tablets are seen as more likely early targets. The blocking issue is software maturity.
Rumor: Android computers appear to be on the way.
Qualcomm is working on Android 16 support for the X Elite and X (series). The picture shows purwa (Snapdragon X)’s Android 16 private code list, and Qualcomm has already uploaded the Android code for X Elite and X (to the… pic.twitter.com/pQ1vnNOvgQ
— Jukan (@Jukanlosreve) November 11, 2025
Android’s various “desktop modes” have never fully matched the expectations of a laptop user who wants robust window management, keyboard shortcuts, and external display behavior that does not feel fragile. Google now says it will leverage Samsung’s DeX experience to build a more credible desktop interface. Until that appears, the Android PC remains something closer to a recurring pilot project than a defined product line, no matter how much engineering work is happening behind the scenes.
YouTube’s new AI chat blurs the line between watching and searching
YouTube’s Gemini-powered “Ask” button turns a significant slice of its catalog into something closer to a conversational knowledge surface. On eligible videos, viewers see an “Ask” option beneath the player that opens an on-screen chat. From there, they can type custom questions or use curated prompts to summarize the video, get related content recommendations, or request explanations of specific concepts while the video continues playing.
The feature is available in English for users aged 18 and above on iPhone, Android, and Windows in the United States, Canada, New Zealand, and India, with a broader rollout planned. YouTube includes a visible disclaimer that the AI can make mistakes and encourages users to verify information, which is necessary but only partly addresses the deeper shift. If a user can extract structured answers, follow up questions, and further viewing suggestions from inside a video without leaving the page, the boundary between “video platform” and “search engine” narrows further. That has consequences for creators, whose work may increasingly be consumed as input to an answer stream, and for Google, which now owns yet another front door for factual queries that bypasses traditional search entirely.
VIA: DataConomy.com