Categories
News

Google Expands Gemini’s In-Depth Research Mode to 40 More Languages

Google announced on Friday that it is expanding Gemini’s in-depth research mode to 40 additional languages. The feature, introduced earlier in December, gives subscribers to the Google One AI Premium plan access to an AI-driven research assistant capable of producing detailed reports.

What is Gemini’s In-Depth Research Mode?

The in-depth research feature operates as a multi-step tool that simplifies complex research tasks. It begins by creating a research plan, gathers relevant information, and iteratively refines its findings through repeated searches. Finally, it synthesizes the information into a comprehensive report. This process enables users to delve deeper into topics with minimal effort.
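
Google has not published how the pipeline is built, but the steps it describes map onto a familiar agent loop: plan, search, fold the results back in, and repeat until there is enough material to summarize. The Python sketch below is purely illustrative; make_plan, search, follow_up_queries, and summarize are toy stand-ins, not Gemini APIs.

```python
# Hypothetical sketch of a plan -> search -> refine -> synthesize loop.
# The helper functions are toy stand-ins, not Gemini APIs.

def make_plan(topic: str) -> list[str]:
    # A real system would ask an LLM to propose sub-questions for the topic.
    return [f"What is {topic}?", f"Recent developments in {topic}"]

def search(query: str) -> list[str]:
    # Stand-in for a web search call; returns snippet strings.
    return [f"snippet about '{query}'"]

def follow_up_queries(findings: list[str]) -> list[str]:
    # Toy refinement step: stop once there is "enough" material.
    return [] if len(findings) >= 4 else ["remaining open questions"]

def summarize(snippets: list[str]) -> str:
    # Stand-in for an LLM summarization call.
    return " ".join(snippets)

def deep_research(topic: str, max_rounds: int = 3) -> str:
    findings: list[str] = []
    queries = make_plan(topic)                 # 1. create a research plan
    for _ in range(max_rounds):                # 2./3. gather, then iteratively refine
        if not queries:
            break
        findings += [s for q in queries for s in search(q)]
        queries = follow_up_queries(findings)  # a real agent would derive these with an LLM
    return summarize(findings)                 # 4. synthesize the final report

if __name__ == "__main__":
    print(deep_research("multilingual AI research"))
```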


Supported Languages

Gemini’s expanded language capabilities now include:
Arabic, Bengali, Chinese, Danish, French, German, Gujarati, Hindi, Indonesian, Italian, Japanese, Kannada, Korean, Malayalam, Marathi, Polish, Portuguese, Swahili, Spanish, Tamil, Telugu, Thai, Ukrainian, and Urdu.
This diverse range aims to make Gemini accessible to a global audience, enhancing its utility across cultures and regions.

Challenges in Multilingual AI Research

While Gemini’s language expansion is significant, the process of providing accurate and grammatically correct summaries in native languages remains challenging. According to HyunJeong Choe, Google’s Director of Engineering for the Gemini app, the primary hurdle lies in sourcing reliable data in various languages and ensuring the AI summarizes it accurately.

Key Challenges:

  1. Factual Accuracy:
    Generative AI often struggles with factual consistency, especially when processing information in languages with limited reliable data sources.
  2. Grammar and Syntax:
    Summarizing information in native languages without grammatical errors is a complex task requiring advanced linguistic understanding.
  3. Native Language Biases:
    Translating or summarizing content in culturally sensitive and accurate ways demands careful training and testing.

How Google Addresses These Issues

Google has implemented several strategies to improve Gemini’s performance in multilingual contexts:

  1. Data Training with Native Sources:
    The model is trained using clean, trustworthy datasets in each language, ensuring the integrity of the information.
  2. Search-Grounded Responses:
    Gemini relies on Google Search for additional context and grounding, making its findings more reliable.
  3. Native Evaluations:
    Before releasing updates, Google conducts evaluations and fact-checking in each target language (a rough sketch of such a check follows this list).
  4. Quality Assurance Programs:
    Local teams and native speakers review datasets and responses to maintain accuracy. Jules Walter, Gemini’s Product Lead for International Markets, highlighted the importance of testing programs that incorporate feedback from native perspectives.
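
Google has not detailed how these per-language evaluations are run. As a rough illustration only, a minimal release gate might run a handful of prompts in each target language and check that the generated answers contain the expected facts; the prompts, reference answers, and generate() stub below are invented for the example.

```python
# Hypothetical per-language evaluation gate; the prompts, reference facts,
# and generate() stub are illustrative only, not Google's actual process.

TEST_CASES = {
    "hi": [("भारत की राजधानी क्या है?", "नई दिल्ली")],   # (prompt, fact that must appear)
    "pl": [("Jaka jest stolica Polski?", "Warszawa")],
}

def generate(prompt: str, lang: str) -> str:
    # Stand-in for calling the model under test.
    return "नई दिल्ली भारत की राजधानी है।" if lang == "hi" else "Stolicą Polski jest Warszawa."

def evaluate(lang: str) -> float:
    cases = TEST_CASES[lang]
    passed = 0
    for prompt, required_fact in cases:
        answer = generate(prompt, lang)
        # Minimal factuality check: the reference fact must appear in the answer.
        if answer.strip() and required_fact in answer:
            passed += 1
    return passed / len(cases)

if __name__ == "__main__":
    for lang in TEST_CASES:
        print(f"{lang}: {evaluate(lang):.0%} of checks passed")
```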


Choe acknowledged that factuality—ensuring the information is correct—is an ongoing research challenge in generative AI. Although pre-trained models have a vast repository of knowledge, they need to be refined to use that information effectively. Google is continuously training Gemini to handle information more reliably.

Categories
News

Why It May Be Time to Upgrade Your Older iPhone or iPad

As of December 18, Apple has officially discontinued iCloud backup support for devices running iOS 8 or earlier. To continue using iCloud backups, devices must now be updated to iOS 9 or later. Apple first announced the change to users in November, urging those with older devices to prepare for the update.

What This Means for Older Devices

If you’re still using iOS 8 or an earlier version, iCloud backups are no longer an option. However, there’s a workaround: you can create manual backups using a Mac or a Windows PC. For devices that support an update beyond iOS 8, upgrading the software will restore iCloud backup functionality.

Compatibility With the Latest iOS and iPadOS Versions

The most current version of iOS, iOS 18.2, is packed with new features and performance improvements. It supports all iPhones released since the iPhone XS, XS Max, and XR, which debuted in 2018. For iPad users, the latest iPadOS 18.2 is compatible with devices starting from the seventh-generation iPad (2019) and later models.

The Impact of CloudKit on iCloud Backups

Apple’s CloudKit framework, introduced with iOS 8 and expanded in the iOS 9 era, serves as the backbone of iCloud backups. The framework has significantly improved how developers manage app data and user interactions, providing a more secure and seamless experience.

Key Features of CloudKit

  1. Secure Data Storage
    CloudKit ensures app data is stored securely in iCloud, enabling users to access information across multiple devices effortlessly. Developers can choose between public and private databases to control data visibility and protect sensitive user information.
  2. User Authentication
    By leveraging Apple ID credentials, CloudKit simplifies user authentication. This eliminates the need for users to create new accounts or remember extra passwords, making app engagement quicker and easier.
  3. Flexible Data Management
    CloudKit allows developers to incorporate both collaborative and private features into their apps. This flexibility ensures that shared data is accessible while sensitive information remains protected.
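
CloudKit is normally used from Swift or Objective-C, but Apple also exposes it over HTTPS through CloudKit Web Services, which allows a language-neutral sketch. The example below roughly illustrates querying a public database; the container identifier, API token, and "Note" record type are placeholders, and the request shape should be checked against Apple’s CloudKit Web Services reference before use.

```python
# Rough sketch: querying a CloudKit *public* database via CloudKit Web Services.
# Container ID, API token, and record type are placeholders; verify the exact
# request schema against Apple's CloudKit Web Services documentation.
import requests

CONTAINER = "iCloud.com.example.notes"      # placeholder container identifier
API_TOKEN = "YOUR_CKAPI_TOKEN"              # created in the CloudKit Dashboard
BASE = f"https://api.apple-cloudkit.com/database/1/{CONTAINER}/development/public"

def fetch_notes() -> list[dict]:
    body = {
        "zoneID": {"zoneName": "_defaultZone"},
        "query": {"recordType": "Note"},     # placeholder record type
        "resultsLimit": 10,
    }
    resp = requests.post(
        f"{BASE}/records/query",
        params={"ckAPIToken": API_TOKEN},
        json=body,
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("records", [])

if __name__ == "__main__":
    for record in fetch_notes():
        print(record.get("recordName"))
```

Private databases additionally require a web auth token obtained after the user signs in with their Apple ID, which is the server-side counterpart of the authentication point above.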

Should You Upgrade Your Device?

If you own an iPhone or iPad released in the last few years, this update won’t affect you. However, for those using older devices, it might be the perfect time to explore the latest Apple products. Upgrading to a newer device will not only ensure compatibility with iCloud but also provide access to the latest features, security updates, and performance improvements.

Categories
News

Microsoft Unveils Major Upgrades to Bing Image Generator, Powered by DALL-E 3

One of the most exciting uses of artificial intelligence (AI) is in the realm of image generation, where the power of AI can turn any idea into a visual masterpiece. OpenAI pioneered this trend with its release of DALL-E 2, and more recently, the highly anticipated DALL-E 3. However, access to DALL-E 3 has been limited behind a paywall, requiring users to subscribe to a ChatGPT Plus plan for $20 per month. Now, Microsoft has provided a free workaround with its revamped Bing Image Generator.

Key Upgrades to Bing Image Generator

On Wednesday, Microsoft announced several exciting upgrades to its Bing Image Generator. The most significant of these updates is that the tool is now powered by OpenAI’s latest DALL-E 3 model, specifically the PR16 version. This enhancement promises users an experience that is both twice as fast and higher quality than before, providing a smoother and more impressive image generation process.


In addition to the model upgrade, Microsoft has revamped the Image Creator standalone site, making it more user-friendly. The redesigned interface is clean, minimalistic, and intuitive, offering a much better user experience compared to the previous design, which was darker and more overwhelming. This new look makes navigation easier and more enjoyable for users creating their images.

Easier Access and Improved Features

The Bing Image Generator is now more accessible than ever. Users can generate images directly from the Bing and Microsoft Edge search bars. All they need to do is type a prompt, such as “create an image of [insert your idea here],” and the generator will produce the requested image. This seamless integration makes it easier to create images on the fly without having to visit a separate website.


Furthermore, Microsoft has added enhanced sharing capabilities, allowing users to share their creations on popular social media platforms like Facebook, WhatsApp, and Instagram. These social media integrations make it much easier for users to showcase their AI-generated artwork and share it with friends or followers.

Commitment to Safety and Quality

To ensure a safe experience, Microsoft has implemented several guardrails. These include blocking potentially harmful prompts and adding a watermark to the bottom corner of each generated image. Additionally, content credentials are now attached to each image to provide context and authenticity. These measures ensure that users can enjoy the tool without worrying about the creation or distribution of inappropriate content.

Free Access and Comparison with OpenAI

One of the standout features of the Bing Image Generator is its free access to DALL-E 3, which would otherwise require a $20/month subscription for unlimited access through OpenAI’s platform. While free users of OpenAI’s DALL-E 3 are limited to only two images per day, Microsoft’s version removes this restriction entirely, offering unlimited creations for all users without any additional fees.


However, while DALL-E 3 does produce impressive outputs, some users have noted that the results may not always be as realistic as those generated by other models, such as Google’s Imagen 3. Despite being more expensive, DALL-E 3 sometimes falls short in terms of realism, though it still offers unique and valuable results depending on the type of prompt.

Conclusion

With the launch of these upgrades, Microsoft has made DALL-E 3 accessible to a much broader audience. Users can now create high-quality images with ease, directly from their search bars, and share their creations across social platforms. These changes, coupled with the safety features and free access, make the Bing Image Generator a powerful and user-friendly tool for anyone looking to explore AI-generated imagery.


As Microsoft continues to roll out these updates, users can expect even more enhancements, including a refreshed homepage for mobile devices in the near future.

Categories
News

Realme 14 Pro+: A First Look at the Upcoming Smartphone’s Unique Features

According to reports, Realme recently invited tech journalists to Denmark for an exclusive first look at the new Realme 14 Pro+. This session was more of a preview than a full official announcement, as key details about both the Realme 14 Pro+ and its sibling, the Realme 14 Pro, remain under wraps. A complete reveal is expected soon.

Innovative Exterior Design: A First for Smartphones

The Realme 14 Pro+ showcases a unique design, the result of a collaboration between Realme and Valeur Designers, a Copenhagen-based design studio. The standout feature is the color-changing back panel, a first in the smartphone industry. Unlike previous models such as the Realme 9 Pro+, which changed color when exposed to sunlight, the Realme 14 Pro+ reacts to cold temperatures. This makes it particularly suited to colder climates and seasonal changes.

In its normal state, the back of the Realme 14 Pro+ appears in a pearl white finish. However, when the temperature drops to 16°C (61°F) or lower, vibrant blue swirls appear on the surface. As the device warms up, the swirls fade and the back returns to its original off-white color. This innovative material is being used for the first time in a smartphone. Realme has described it as a “bonus feature,” but users should note that the effect will gradually fade over time due to prolonged exposure to sunlight. The color-changing effect is expected to last for about 12 months under typical usage conditions.

Underwater Photography and Durability

The Realme 14 Pro+ also introduces a new feature for photography enthusiasts: an underwater photo mode. Unlike conventional smartphones that rely on the touchscreen for capturing photos, this mode lets users take photos underwater using the volume buttons. The device carries an IP68 rating for water submersion and IP69 resistance to hot water jets, ensuring durability in a variety of conditions.

Although detailed specifications for the camera are not yet available, it has been confirmed that the Realme 14 Pro+ will be the only model in the series equipped with a periscope camera, offering advanced zoom capabilities. Additionally, the back of the device features a circular arrangement of three LED flash modules, which Realme has dubbed “MagicGlow”. This design ensures even lighting for portraits in low-light settings, and the camera can adjust color temperatures for more natural results in portrait photography.

Powerful Performance with Snapdragon 7s Gen 3

In terms of performance, the Realme 14 Pro+ is powered by the Snapdragon 7s Gen 3 chipset, which brings significant improvements over its predecessor, the Snapdragon 7s Gen 2. Built on a 4nm TSMC process, the Snapdragon 7s Gen 3 features ARMv9 Cortex-A720 and A520 cores, along with the next-gen Adreno 810 GPU. Qualcomm’s official data highlights several key upgrades: the new GPU is 40% faster, CPU performance is 20% improved, and AI task performance is boosted by 30% compared to the previous generation. Furthermore, the Snapdragon 7s Gen 3 is 12% more power-efficient, with the new CPU cores being 45% more energy-efficient than the older ARMv8 cores.

Categories
News

Windows 11 24H2 Update Problems: Audio and Auto HDR Issues Persist

As we approach the end of the year, the Windows 11 24H2 update released in October continues to cause problems for many users. Microsoft has acknowledged two new issues, both of which are causing frustration for those affected. According to Neowin, one of the problems disrupts audio output, while the other impacts Auto HDR functionality.


Audio Output Issue

The audio issue affects devices with Dirac Audio and the cridspapo.dll file, which is responsible for enhancing audio clarity and precision. This bug prevents Windows 11 from outputting sound to a range of devices, including integrated speakers, Bluetooth speakers, headsets, and other audio peripherals. Microsoft has not disclosed which specific manufacturer is impacted, but the problem is significant enough that there is no immediate workaround for users who are affected.

Auto HDR Bug

The second issue involves Auto HDR, a feature in Windows 11 that automatically enhances standard dynamic range (SDR) content by converting it to high dynamic range (HDR). However, due to this bug, users are experiencing various glitches, including incorrect colors and even system crashes. The only known workaround at the moment is to completely disable the Auto HDR feature.

Microsoft’s Response

In response to these issues, Microsoft has temporarily blocked the Windows 11 24H2 update on affected systems. The company plans to lift the block once the bugs are resolved and a fix has been issued.

For now, users experiencing these issues will need to either wait for an official fix or, in the case of Auto HDR, disable the feature temporarily.

This isn’t the first time Windows 11 24H2 has caused problems, especially for gamers. For users who have been blocked from installing the update, it may be best to wait until Microsoft releases a stable version. Given the recurring issues, holding off for a few months could prevent further frustration and ensure that any remaining kinks are worked out.

Categories
News

Salesforce Unveils Agentforce 2.0: Revolutionizing AI-Driven Enterprise Workflows

On December 17, Salesforce introduced Agentforce 2.0, a cutting-edge iteration of its AI-powered platform designed to integrate customizable, semi-autonomous AI agents into enterprise workflows. The new version enhances capabilities, empowering businesses to streamline processes with an AI-driven workforce, capable of handling complex tasks across various departments.

What’s New with Agentforce 2.0?

Agentforce 2.0 offers an array of new features aimed at improving business operations. The platform’s core innovation lies in its ability to allow businesses to choose from a library of pre-built AI agents or create custom agents using natural language prompts. These agents can follow multi-step plans, adhere to conditional workflows (e.g., “if/then” statements), and execute actions across systems and applications seamlessly.
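
Salesforce has not published the internal format of these plans, so the following is only a schematic illustration of what a conditional, multi-step ("if/then") agent workflow can look like in principle; the step names, conditions, and actions are invented.

```python
# Illustrative only: a toy conditional workflow of the kind Agentforce agents
# are described as following. Step names, conditions, and actions are invented.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    name: str
    condition: Callable[[dict], bool]   # the "if" part, evaluated against case data
    action: Callable[[dict], str]       # the "then" part

WORKFLOW = [
    Step(
        name="escalate_large_refund",
        condition=lambda case: case["type"] == "refund" and case["amount"] > 500,
        action=lambda case: f"Escalate case {case['id']} to a human agent",
    ),
    Step(
        name="auto_approve_small_refund",
        condition=lambda case: case["type"] == "refund" and case["amount"] <= 500,
        action=lambda case: f"Approve refund of ${case['amount']} for case {case['id']}",
    ),
]

def run_workflow(case: dict) -> list[str]:
    # Execute every step whose condition matches, in order.
    return [step.action(case) for step in WORKFLOW if step.condition(case)]

if __name__ == "__main__":
    print(run_workflow({"id": "C-42", "type": "refund", "amount": 120}))
```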

Key Features of Agentforce 2.0:

  • Pre-built Skills: Salesforce has introduced several pre-configured skills for specific tasks, including Sales Development, Sales Coaching, Marketing Campaigns, and Commerce Merchant. These skills can help automate and streamline routine processes, significantly reducing the workload on human employees.
  • Tableau Integration: Agentforce agents now offer Skills for Analytics and Insight within Tableau. These capabilities help businesses track and analyze agent performance, providing valuable data visualizations for enhanced decision-making.
  • Slack Integration: Slack is deeply integrated into the Agentforce experience. Through Slack Actions, users can automate routine updates and have agents send project updates via direct messages. In January, the Agentforce Hub will be available within Slack, making it easier for teams to interact with AI agents, access relevant data, and automate tasks within Slack’s communication channels.

These features allow businesses to integrate Agentforce 2.0 directly into their existing tools, including Salesforce CRM, Slack, Tableau, and MuleSoft, creating a powerful, AI-powered digital labor force.

Customizing Your AI Agents with Natural Language Prompts

For businesses that don’t find an exact match in the pre-built library, Salesforce has introduced Agent Builder. This tool enables users to create custom AI agents using simple, natural language commands. For example, users can instruct the AI to “Onboard New Product Managers,” allowing businesses to tailor the system to their unique needs without technical expertise. Additionally, Slack Actions will be incorporated into Agent Builder, making it even more adaptable.

Integration of the Atlas Reasoning Engine

A major leap forward in Agentforce 2.0 is its Atlas Reasoning Engine, which powers advanced AI reasoning and retrieval-augmented generation (RAG). RAG allows the AI to access unstructured data and pull insights from various parts of the Salesforce Platform to deliver more nuanced and informed responses.

For instance, Salesforce highlighted an example where advanced reasoning could be used to answer a complex question like, “What investment vehicle would be best for my child’s college fund based on my current income and risk preferences?” Through RAG, Agentforce can retrieve data from across Salesforce systems to ensure more accurate, contextually relevant recommendations.
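
RAG itself is a general technique: retrieve the most relevant records first, then pass them to the model together with the question. The toy sketch below uses crude keyword overlap for retrieval and a stubbed generation call; it illustrates the retrieve-then-generate pattern rather than the Atlas Reasoning Engine itself, and the documents are invented examples.

```python
# Toy retrieval-augmented generation: keyword-overlap retrieval plus a stubbed
# generation call. Illustrates the pattern only; not the Atlas Reasoning Engine.

DOCUMENTS = [
    "Customer income: $85,000 per year, filed in the CRM profile.",
    "Risk preference on file: conservative, prefers low-volatility funds.",
    "Knowledge article: 529 plans are a common college savings vehicle.",
]

def retrieve(question: str, k: int = 2) -> list[str]:
    # Score each document by how many question words it shares (very crude).
    q_words = set(question.lower().split())
    scored = sorted(
        DOCUMENTS,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def generate(prompt: str) -> str:
    # Stand-in for an LLM call; a real system would send the prompt to a model.
    return f"[model answer grounded in]\n{prompt}"

def answer(question: str) -> str:
    context = "\n".join(retrieve(question))
    prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer using only the context."
    return generate(prompt)

if __name__ == "__main__":
    print(answer("What investment vehicle fits my income and risk preferences for a college fund?"))
```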

Expanding the Scope of AI in Enterprises

Salesforce’s emphasis on AI is evident in its vision to make Agentforce 2.0 the cornerstone of enterprise operations, allowing employees to focus on more value-added tasks while automating routine functions. The power of Agentforce 2.0 lies in its adaptability, making it applicable to virtually any business function, from customer service to marketing and HR.

Melik Khoury, president and CEO of Unity Environmental University, shared how his institution is leveraging Agentforce to improve workflows. By automating common queries, such as financial aid details or class registration, the university is freeing up staff time to focus on more personalized guidance for students. This represents a significant shift in how businesses and institutions can integrate AI for improved efficiency.

Looking Ahead: A Future of AI-Powered Digital Labor

Salesforce’s aggressive investment in AI tools like Agentforce 2.0 is part of a broader strategy to integrate generative AI into its entire suite of products. The company aims to provide enterprises with AI-driven solutions that not only automate basic tasks but also enhance decision-making processes with advanced reasoning. Claire Cheng, VP of Machine Learning and Engineering for Salesforce AI, stated that Salesforce expects the reasoning engine to become one of the key factors businesses will consider when evaluating digital labor options.

Availability and Next Steps

While Agentforce 2.0 will be fully available in February 2025, businesses can begin deploying it within Slack as early as January 2025. The expansion of generative AI into business workflows represents a new chapter for Salesforce and the AI industry, offering enterprises a flexible, powerful toolset to transform their operations with minimal manual intervention.

Categories
News

GitHub Unveils Free Version of Copilot AI Tool for Developers

On Wednesday, GitHub, owned by Microsoft, announced a major update to its Copilot AI-powered code completion and pair programming tool. For the first time, GitHub is offering a free version of Copilot, which will also be available by default with Microsoft’s Visual Studio Code (VS Code) editor. Previously, developers had to pay a monthly subscription fee, starting at $10 per month, with only verified students, teachers, and open-source maintainers getting free access.

This update marks a significant step for GitHub as it continues to expand its platform. The company also shared that it now has 150 million developers using its platform, a significant jump from 100 million just earlier this year.

In an exclusive interview ahead of the announcement, GitHub CEO Thomas Dohmke reflected on the company’s journey. “My first project at GitHub in 2018 was the introduction of free private repositories, which we launched in early 2019. Then we had a version two with free private organizations in 2020. We introduced free GitHub Actions entitlements, and at my first Universe conference as CEO, we announced free Codespaces. It only felt natural to extend this philosophy to Copilot, offering a free version, not just for students and open-source maintainers,” Dohmke shared.

What’s Included in the Free Copilot Plan?

While the free Copilot plan opens up the tool to a broader audience, it does come with certain limitations. The free version is designed for occasional users, not for developers working on large projects or enterprises.
Here are the key details of the free Copilot plan:

  • 2,000 code completions per month: Each code suggestion from Copilot counts towards this limit, not just accepted ones.
  • Foundation Model Access: Free plan users are limited to Anthropic’s Claude 3.5 Sonnet and OpenAI’s GPT-4o. Paid plans, on the other hand, include access to Google’s Gemini 1.5 Pro and additional OpenAI models like o1-preview and o1-mini.
  • Copilot Chat: Users on the free plan can send up to 50 chat messages per month.
  • Extensions and Skills: Despite these limitations, users still get full access to all Copilot Extensions and skills.

Dohmke explained that the team analyzed years of Copilot usage data to determine the balance between occasional users and professional developers. The goal was to make it as easy as possible for developers to start using the tool and be productive without needing to jump through extra hoops.

The free version of Copilot will work across multiple platforms, including VS Code, Visual Studio, and JetBrains IDEs, as well as directly on GitHub.com. This broad compatibility ensures that developers can integrate Copilot into their existing workflow seamlessly, no matter which development environment they prefer.

Expanding Access and Competing

Since its launch, GitHub Copilot has become a prominent player in the AI coding tool market. However, competition has intensified with companies like Tabnine, Qodo (formerly Codium), and AWS offering similar tools, many with free plans. To remain competitive and expand its user base, GitHub’s decision to introduce a freemium model for Copilot is a logical step.

By offering a free version of Copilot, GitHub aims to make its tool more accessible, particularly in emerging markets where a $10 subscription fee may be prohibitively expensive. This approach aligns with GitHub’s broader mission of enabling one billion developers globally. In countries like Brazil, India, and Pakistan, where income disparities are significant, providing a free version of Copilot helps break down barriers to entry and empowers more people to pursue careers in software development.

In addition to supporting global developers, GitHub’s free plan also simplifies access for students. While the platform has offered free access to students before, they had to go through a verification process, which created friction. With the new freemium model, students can sign up and immediately start using Copilot, making the tool more accessible and user-friendly.

This move not only strengthens GitHub’s competitive position but also supports its long-term vision of democratizing software development, offering a powerful AI tool to developers regardless of their financial situation or geographic location.

Categories
News

Amazon Enhances Fire TV Accessibility with Dual Audio Feature for Hearing Aid Users

Amazon has unveiled its new Dual Audio feature for Fire TV, allowing users to stream audio through a hearing aid while others in the room enjoy standard sound from the TV’s speakers. This feature, which uses the Audio Streaming for Hearing Aids (ASHA) protocol, will roll out over the coming weeks, starting with the company’s latest Fire TV Omni Mini LED model.

In an official blog post, Amazon highlighted that this is the first time Fire TV customers using ASHA-enabled hearing aids can simultaneously enjoy streaming content with two different audio outputs. This new addition aims to enhance the communal viewing experience by providing a more inclusive and personalized approach to audio.

Expanded ASHA Support for Fire TV Devices

Amazon is also expanding its ASHA support to include all Widex Moment Behind-The-Ear (BTE) and Receiver-In-Canal (RIC) hearing aids. The Dual Audio feature will be available across a range of Fire TV devices, including:

  • Fire TV Omni Mini LED Series
  • Fire TV Omni QLED Series
  • Fire TV Cube
  • Fire TV 4-Series
  • Fire TV 2-Series
  • Fire TV Omni Series

This move ensures that a wide variety of Fire TV users with ASHA-enabled hearing aids can enjoy the same enhanced experience.

Accessibility Improvements in Amazon Packaging

Alongside the new feature, Amazon is making efforts to improve accessibility for all customers, particularly those with visual impairments. The company has introduced QR codes on the latest packaging for devices like Fire TV, Echo, and Kindle, which now feature tactile, raised UV dots to improve discoverability.

When scanned, these QR codes direct users to Amazon.com, providing detailed product information and step-by-step setup instructions. This feature is especially beneficial for customers who are blind or have low vision, as it allows them to find the code by touch, as explained by Amazon’s Maiken Moeller-Hansen in the blog post.

In addition to these accessibility enhancements, Amazon has introduced more eco-friendly packaging for its devices. The new packaging includes 30% more recycled fiber and 60% less ink on average, reflecting the company’s commitment to sustainability.

How to Enable Dual Audio on Fire TV Omni Mini LED

To activate the Dual Audio feature on your Fire TV Omni Mini LED, follow these simple steps:

  1. Press and hold the Home button on your remote control.
  2. Navigate to Settings > Accessibility.
  3. Turn on the Dual Audio feature and pair your compatible hearing aid to start streaming audio.

How to Set Up Your Hearing Aid with Fire TV

To set up your hearing aid with your Fire TV device:

  1. Go to Settings > Accessibility.
  2. Select the Hearing Aids section.
  3. Choose Add Hearing Aids to pair your device and begin streaming audio.

Categories
News

Mobile Starlink Beta Program: Text, Data, and Voice Coming to Remote Areas by 2025

The T-Mobile Starlink Direct-to-Cell beta program is set to change the landscape of mobile connectivity by providing coverage in areas that traditional cellular networks have long struggled to reach. As of December 2024, T-Mobile is accepting registrations for this exciting new service, which aims to eliminate the notorious “dead zones” that have plagued rural areas, remote locations, and even some airspaces. This initiative leverages Starlink’s growing constellation of low-Earth orbit (LEO) satellites to deliver high-speed connectivity where conventional towers and networks fall short.

The beta program, launching in early 2025, will initially support text messaging, allowing T-Mobile customers to send and receive messages even in the most isolated areas. The service uses a dedicated network of 300 Starlink satellites and promises to cover up to 500,000 square miles of land in the U.S. that are currently underserved by traditional cellular infrastructure.

While the initial launch will focus on text messaging, T-Mobile and Starlink have ambitious plans to expand the service. By mid-2025, they aim to introduce voice and data capabilities. This will make it possible to not only send texts but also make calls and access the internet in areas where terrestrial networks cannot provide service.

The service will evolve to include more functionalities, with Internet of Things (IoT) connectivity also expected by the end of 2025.

How Does T-Mobile’s Starlink Direct-to-Cell Work?

Unlike traditional cell towers that rely on terrestrial infrastructure, Starlink’s Direct-to-Cell technology utilizes satellites in orbit to connect directly to mobile devices. This innovation means that as long as your phone can “see the sky,” you’ll have access to text messaging, even when you’re out of range of conventional cellular towers. T-Mobile’s integration of this technology ensures that users won’t have to manually search for signals or navigate cumbersome connection processes. Messages will be sent and received as easily as they would on a regular network.

Who Can Benefit from T-Mobile Starlink Direct-to-Cell?

This service is designed with a broad spectrum of users in mind. Its primary audience includes people who often find themselves in areas outside the reach of traditional cellular networks. This includes:

  • Rural Communities: Areas that are underserved or completely ignored by traditional cellular providers will benefit significantly from this service.
  • First Responders: Emergency teams working in disaster-stricken areas where communications are critical will be able to rely on Starlink to stay connected.
  • Outdoor Enthusiasts: Hikers, mountaineers, and campers can enjoy peace of mind, knowing they can stay in touch even when deep in remote areas.
  • Travelers and Pilots: T-Mobile Starlink is also looking to expand its reach to locations like airplanes, offering connectivity in-flight, which could revolutionize how people stay connected during long-haul journeys.

FAQs

1. How do I sign up for the T-Mobile Starlink beta program?

You can sign up directly through T-Mobile’s website. The beta program is open to select T-Mobile postpaid voice customers with compatible devices.

2. Will T-Mobile Starlink Direct-to-Cell work with my current phone?

The service is designed to work with most modern mobile phones. However, the initial beta program will support only specific “optimized” smartphones.

3. When will T-Mobile Starlink support voice and data?

The voice and data capabilities are expected to be available by mid-2025.

4. Is T-Mobile Starlink free?

The beta program is free for T-Mobile postpaid customers, but once the service expands, there might be a monthly fee, depending on your mobile plan.

5. What areas will T-Mobile Starlink cover?

Initially, the service will cover up to 500,000 square miles of land in the U.S. that are not served by traditional cell towers.

6. Can I use T-Mobile Starlink Direct-to-Cell while traveling abroad?

The service is currently limited to U.S. territory, but T-Mobile has partnerships with several international carriers, so expanded global coverage may be available in the future.

Categories
News

Important Google Maps Update: Backup Your Data Before Timelines Are Deleted in 2025

Google Maps users should be aware of a significant change coming in 2025 that could result in the permanent loss of their personal data. According to reports from dailymail.co.uk, Google’s navigation app will no longer store users’ personal timelines on its servers starting June 9, 2025. Originally known as “Location History,” this feature tracks every movement, recording places visited and routes taken by users.

However, Google recently notified its users through emails that this feature will be removed by mid-2025, and with it, nearly a decade’s worth of personal data will also be deleted. In response to security concerns, Google has indicated that users’ location history will be moved from the cloud to a more secure on-device storage option. This change aims to improve data privacy and protection against potential cyberattacks.

While this shift to local storage will increase security, it also means that any location history not saved by June 9, 2025, will be permanently erased.

How to Save Your Google Maps Timeline Data

Many users may not even be aware of the Timeline feature, since it works quietly in the background and is easy to overlook. However, it is crucial to act before the deletion deadline if you wish to preserve this data.

To prevent your data from being permanently deleted, follow these simple steps to create a local backup on your device:

  1. Open Google Maps: On your Android or iOS device, launch the Google Maps app.
  2. Access Your Profile: Tap your profile picture or initial in the top right corner of the screen.
  3. Open Backup Settings: Look for an icon resembling a cloud in the top-right corner of the page. Tap this icon.
  4. Log In if Prompted: You may need to log in with your password to access backup options.
  5. Enable Backups: If you don’t have backups enabled, tap the button to enable this option.
  6. Choose Your Backup Device: Select the device you want to back up to.
  7. Import Data: Tap the “More” option (three dots), then select “Import” from the menu. On the following screen, choose “Import timeline from backup.”
  8. Download Your Timeline: This will begin the download of your timeline data, allowing you to retain it even after the cloud-based information is deleted by Google.

By following these steps, you can ensure that your personal location history is safely stored and accessible, even after Google discontinues its cloud storage for Timeline data.