Categories
News

Google Expands Gemini’s In-Depth Research Mode to 40 More Languages

Google announced on Friday that it is expanding Gemini’s latest in-depth research mode to support 40 additional languages. This feature, introduced earlier in December, gives users on the Google One AI Premium plan access to an AI-driven research assistant capable of creating detailed reports.

What is Gemini’s In-Depth Research Mode?

The in-depth research feature operates as a multi-step tool that simplifies complex research tasks. It begins by creating a research plan, gathers relevant information, and iteratively refines its findings through repeated searches. Finally, it synthesizes the information into a comprehensive report. This process enables users to delve deeper into topics with minimal effort.
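The plan-search-refine-synthesize flow described above can be sketched as a simple loop. This is a toy illustration only: every function and variable name here is hypothetical, and nothing in it reflects Gemini’s actual internals.

```python
def deep_research(topic, search, max_rounds=3):
    """Toy sketch of a plan -> search -> refine -> synthesize loop.

    `search` is any callable mapping a query string to a list of text
    snippets. Snippets ending in '?' stand in for unresolved questions
    that trigger a follow-up round of searching.
    """
    # Step 1: create a research plan (a list of queries).
    plan = [f"overview of {topic}", f"recent developments in {topic}"]
    findings = []
    for _ in range(max_rounds):
        follow_ups = []
        for query in plan:
            snippets = search(query)
            findings.extend(snippets)
            # Step 2/3: iteratively refine by chasing open questions.
            follow_ups.extend(s for s in snippets if s.endswith("?"))
        if not follow_ups:
            break
        plan = follow_ups
    # Step 4: synthesize; the real product has an LLM write the report.
    return {"topic": topic, "sources": len(findings),
            "report": " ".join(findings)}
```

Swapping in a real search backend and a language model for the synthesis step turns the same skeleton into a working agent loop.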


Supported Languages

Gemini’s expanded language capabilities now include:
Arabic, Bengali, Chinese, Danish, French, German, Gujarati, Hindi, Indonesian, Italian, Japanese, Kannada, Korean, Malayalam, Marathi, Polish, Portuguese, Spanish, Swahili, Tamil, Telugu, Thai, Ukrainian, and Urdu.
This diverse range aims to make Gemini accessible to a global audience, enhancing its utility across cultures and regions.

Challenges in Multilingual AI Research

While Gemini’s language expansion is significant, the process of providing accurate and grammatically correct summaries in native languages remains challenging. According to HyunJeong Choe, Google’s Director of Engineering for the Gemini app, the primary hurdle lies in sourcing reliable data in various languages and ensuring the AI summarizes it accurately.

Key Challenges:

  1. Factual Accuracy:
    Generative AI often struggles with factual consistency, especially when processing information in languages with limited reliable data sources.
  2. Grammar and Syntax:
    Summarizing information in native languages without grammatical errors is a complex task requiring advanced linguistic understanding.
  3. Native Language Biases:
    Translating or summarizing content in culturally sensitive and accurate ways demands careful training and testing.

How Google Addresses These Issues

Google has implemented several strategies to improve Gemini’s performance in multilingual contexts:

  1. Data Training with Native Sources:
    The model is trained using clean, trustworthy datasets in each language, ensuring the integrity of the information.
  2. Search-Grounded Responses:
    Gemini relies on Google Search for additional context and grounding, making its findings more reliable.
  3. Native Evaluations:
    Before releasing updates, Google conducts evaluations and fact-checking in each target language.
  4. Quality Assurance Programs:
    Local teams and native speakers review datasets and responses to maintain accuracy. Jules Walter, Gemini’s Product Lead for International Markets, highlighted the importance of testing programs that incorporate feedback from native perspectives.


Choe acknowledged that factuality—ensuring the information is correct—is an ongoing research challenge in generative AI. Although pre-trained models have a vast repository of knowledge, they need to be refined to use that information effectively. Google is continuously training Gemini to handle information more reliably.


Important Google Maps Update: Backup Your Data Before Timelines Are Deleted in 2025

Google Maps users should be aware of a significant change coming in 2025 that could result in the permanent loss of their personal data. According to reports from dailymail.co.uk, Google’s navigation app will no longer store users’ personal timelines on its servers starting June 9, 2025. Originally known as “Location History,” this feature tracks every movement, recording places visited and routes taken by users.

However, Google recently notified its users through emails that this feature will be removed by mid-2025, and with it, nearly a decade’s worth of personal data will also be deleted. In response to security concerns, Google has indicated that users’ location history will be moved from the cloud to a more secure on-device storage option. This change aims to improve data privacy and protection against potential cyberattacks.

While this shift to local storage will increase security, it also means that any location history not saved by June 9, 2025, will be permanently erased.

How to Save Your Google Maps Timeline Data

Many users may not be aware of the Timeline feature’s operation, as it works in the background and is often overlooked. However, it is crucial to act before the deletion deadline if you wish to preserve this data.

To prevent your data from being permanently deleted, follow these simple steps to create a local backup on your device:

  1. Open Google Maps: On your Android or iOS device, launch the Google Maps app.
  2. Access Your Profile: Tap your profile picture or initial in the top right corner of the screen.
  3. Open Backup Settings: Look for an icon resembling a cloud in the top-right corner of the page. Tap this icon.
  4. Log In if Prompted: You may need to log in with your password to access backup options.
  5. Enable Backups: If you don’t have backups enabled, tap the button to enable this option.
  6. Choose Your Backup Device: Select the device you want to back up to.
  7. Import Data: Tap the “More” option (three dots), then select “Import” from the menu. On the following screen, choose “Import timeline from backup.”
  8. Download Your Timeline: This will begin the download of your timeline data, allowing you to retain it even after the cloud-based information is deleted by Google.

By following these steps, you can ensure that your personal location history is safely stored and accessible, even after Google discontinues its cloud storage for Timeline data.


Google Drive’s Upcoming 2025 Update: Auto-Enhanced Scans and the Whisk AI Tool

In early January 2025, tech giant Google will roll out a significant update for its Google Drive mobile app users, introducing an automatic editing feature for the app’s built-in scanner. This update promises to streamline the process of capturing and enhancing digital copies of important documents, including bills, identification cards, and more.

At present, Google Drive users can scan documents using their mobile devices, but editing these scans requires manual adjustments, such as tweaking filters and adjusting image levels. With the new update, however, Google Drive will automatically optimize scanned images, eliminating the need for users to make manual enhancements. The new auto-filter feature will automatically improve scans, delivering sharper, brighter, and more readable document versions with minimal user input.

Using the feature is simple. Users just need to tap the “+ New” button located at the bottom-right of the screen, select “Scan,” and grant the app access to their camera. After scanning a document, a sparkle icon will appear in the preview mode, indicating that the auto-enhancer tool is ready. This tool will then adjust the white balance, eliminate shadows, boost contrast, sharpen details, and optimize lighting for a more polished final result.

Google has confirmed that this update will be available to all Google Drive users, including those with free personal accounts. The feature will first be available on Android devices starting January 6, 2025.

The goal of this update is to simplify document scanning while enhancing the overall user experience, making it easier to store clear, high-quality digital versions of important documents directly within Google Drive.

In other news, Google has also introduced a groundbreaking generative AI experiment called Whisk, designed to revolutionize creative workflows. Unlike traditional image generation tools that rely on text-based prompts, Whisk allows users to drag and drop images representing the subject, scene, and style they wish to create. By remixing these images, users can produce entirely original visuals.

Powered by Google’s Gemini model, Whisk automatically generates detailed captions based on the uploaded images. These captions are then processed by Google’s Imagen 3, the company’s latest image generation model. The focus of Whisk is on capturing the essence of the subject, rather than attempting to replicate it exactly, offering a fresh approach to creative expression.


Google Whisk: Redefining Image Creation by Capturing Your Image’s Essence

Google has launched Whisk, its latest experimental AI image generator. Unlike traditional image generators, Whisk captures only the “essence” of an uploaded image, making it a valuable tool for brainstorming and generating creative concepts quickly. With two interactive modes and a simple interface, Whisk is perfect for visualizing ideas rather than editing images with precision.
In this article, we’ll explore Whisk’s features, how it works, its limitations, and what sets it apart from other AI tools.

What is Whisk?

Whisk is Google’s experimental AI tool, housed under Google Labs. The company describes it as “a new type of creative tool”, primarily designed for brainstorming and rapid visualizations.
Unlike other AI tools that focus on editing or replicating existing images, Whisk works to recreate your image’s “essence”. It simplifies visuals and outputs rough, creative ideas perfect for inspiration.

A Creative Tool for Brainstorming

Whisk isn’t meant for professional photo editing or detailed outputs. Instead, it provides users with quick, imaginative renderings, making it ideal for:

  • Creative concept ideation
  • Rapid brainstorming sessions
  • Exploring multiple visual styles in seconds


How Whisk Works

Whisk operates under a two-part technological process:

1. The Gemini Language Model

When you upload an image to Whisk, Google’s Gemini language model analyzes it and generates a detailed text caption describing the visual. This description serves as a textual interpretation of your uploaded image.

2. Imagen 3 Image Generator

Once Gemini creates the caption, Google feeds it into Imagen 3, Google’s advanced AI image generator. Imagen 3 uses Gemini’s description, rather than the original image, to produce a new output.
This process ensures Whisk’s result captures only key elements of your input, creating an image that feels inspired by — but not identical to — the original.
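The two-stage design can be illustrated with a minimal pipeline sketch. The function names and the caption/generation callables are stand-ins, since Google has not published Whisk’s internals; only the caption-then-generate structure comes from the article.

```python
def whisk_pipeline(image, caption_model, image_model):
    """Sketch of Whisk's caption-then-generate flow.

    caption_model: maps an image to a text description (Gemini's role).
    image_model:   maps text to a new image (Imagen 3's role).

    The original pixels never reach the generator -- only the caption
    does -- which is why outputs capture the image's 'essence' rather
    than reproducing it exactly.
    """
    caption = caption_model(image)   # stage 1: image -> text
    return image_model(caption)      # stage 2: text -> image
```

Because stage 2 sees only text, any detail the caption omits (exact pose, lighting, proportions) is free to vary in the output, which matches the variability Google acknowledges.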

The Core Features of Google Whisk

Whisk is designed to be simple, intuitive, and experimental. It offers two primary ways to generate creative outputs:

1. Starter Interface (Basic Mode)

The starter interface is straightforward, with inputs for style and subject.
Style Options: Whisk currently offers three predefined styles:

  • Sticker
  • Enamel Pin
  • Plushie

Google chose these styles as they align with the tool’s focus on delivering simplified, creative visualizations.

2. Advanced Editor (Start From Scratch)

In the advanced mode, users gain access to:

  • Inputs for Subject, Scene, and Style
  • Customizable text prompts

However, as of now, the advanced controls may not yield outputs that exactly align with your expectations — a limitation Google acknowledges.


Understanding the Limitations of Whisk

Whisk is not designed for precision image editing. Instead, it’s focused on:

  • Simplifying concepts
  • Capturing rough outlines
  • Providing visual inspiration

Google openly acknowledges its tool’s limitations, including:

  • Outputs may feature different heights, weights, hairstyles, or skin tones.
  • Results often vary because Whisk relies on textual interpretations, not pixel-for-pixel image recreation.

Ideal Use Cases for Google Whisk

Whisk is most effective for:

  • Brainstorming Creative Ideas: Quickly visualize ideas with basic outlines.
  • Concept Inspiration: Test how an idea looks across styles (e.g., sticker or plushie).
  • Simplified Visual Outputs: Perfect for artists, designers, and creative teams.

How to Access Google Whisk

Currently, Whisk is only available to users in the United States. You can try it by visiting the project’s page on Google Labs.
Steps to Access Whisk:

  • Visit the Google Labs site.
  • Click on the Whisk project.
  • Choose between Basic Mode or Advanced Editor.

Google DeepMind launches GenCast AI tool With Impressive Speed and Accuracy

The field of weather forecasting has reached a significant milestone: researchers have introduced GenCast, an AI-driven weather prediction system developed by Google DeepMind. This system demonstrates faster and more accurate forecasts than the ENS model from the European Centre for Medium-Range Weather Forecasts (ECMWF), which has long been regarded as the global leader in weather prediction.

What are the advantages of GenCast?

Improved Accuracy

GenCast outperformed ENS by up to 20% in short-term weather forecasts and showed remarkable precision in predicting the paths of extreme weather events, such as hurricanes and cyclones, including their landfall locations.

Exceptional Efficiency

Unlike traditional physics-based models that require hours of computation on supercomputers, GenCast delivers results in just 8 minutes using a single Google Cloud TPU, a machine-learning-optimized processor.

Innovative Training

The model was trained on 40 years of historical weather data (1979–2018), encompassing a wide range of atmospheric variables such as wind speed, temperature, pressure, and humidity. GenCast builds on its predecessor, GraphCast, by producing probabilistic ensembles of 50 or more forecasts, offering greater reliability for predicting uncertain weather events.

Supportive Role

For now, GenCast is designed to complement rather than replace traditional physics-based methods, providing additional clarity for events such as heatwaves, cold spells, and high winds. Its applications could extend to sectors like renewable energy, where accurate forecasts help optimize power generation.

Implications for Weather Prediction

  • Enhanced Ensemble Forecasting: GenCast’s ability to generate larger and more reliable ensembles provides improved confidence levels for extreme weather predictions.
  • Reduced Computational Costs: The efficiency of GenCast makes high-resolution forecasts more accessible and reduces dependency on expensive computational resources.
  • Transformative Potential: Experts, such as Sarah Dance from the University of Reading, have noted that this technology represents a paradigm shift in forecasting methodology, paving the way for broader adoption of AI-based approaches.

Challenges and Questions

While GenCast’s performance is promising, certain challenges remain. The authors have not answered whether their system has the physical realism to capture the ‘butterfly effect’, the cascade of fast-growing uncertainties, which is critical for effective ensemble forecasting.

The data GenCast trained on combines past observations with physics-based “hindcasts” that need sophisticated maths to fill gaps in historic data.

There is still a long way to go before machine learning approaches can completely replace physics-based forecasting. It also remains to be seen whether generative machine learning can replace that preprocessing step and go straight from the most recent unprocessed observations to a 15-day forecast.

The Road Ahead

GenCast is unlikely to replace traditional forecasting systems in the near future. Instead, it is expected to serve as a powerful assistive tool, augmenting current models and contributing to more accurate predictions. National weather services and industries reliant on precise weather information, such as energy and disaster management, are poised to benefit significantly.


GenChess: An AI-Powered Chess Game for Custom Piece Design

Google likes to experiment with artificial intelligence. We’ve had live DJ tools, podcast creators and a way to create custom lettering. Now Google has just released a free chess game called GenChess that brings something new to the table. GenChess is unique because it allows you to design the chess pieces you play with, using AI.

To play GenChess, simply go to the GenChess website in your browser and start designing your chess set. You can choose either a classic or creative set and then type in an AI prompt to describe the type of set you want to see.

You’ll see the prompt ‘Make a classic chess set inspired by’ at the top of the screen, and you can complete the sentence with whatever you like. GenChess will then think for a few seconds as the AI generates some sample chess pieces for your approval. If you don’t like what you see, hit the ‘Regenerate Set’ button, and it will have another go. If however you do like what you see then hit the ‘Generate opponent’ button to progress to the next stage.

The computer then picks a prompt to design the opponent’s piece that it thinks will go well with what you’ve chosen already, and generates the opposing chess pieces to play against you. 

How does GenChess work?

GenChess is built on top of the Imagen 3 artificial intelligence image generation model from Google DeepMind, which rolled out in October. Imagen 3 has a range of features that are worth exploring. For example, you can ask it to create photorealistic landscapes, richly textured oil paintings, or even claymation scenes.

Imagen 3 also powers the ImageFX experiment and image creation in the Gemini chatbot. It is a very impressive model that can create everything from photorealistic imagery to stylized design.


Google introduces Restore Credentials to simplify app logins on new Android devices

If you lose your iPhone or buy an upgrade, you could reasonably expect to be up and running after an hour, presuming you backed up your prior model. Doing the same swap with an Android device is more akin to starting three-quarters fresh. That might change relatively soon, as Google has announced a new Restore Credentials feature.

Transferring your data from your old Android device to a new one will soon be less daunting, thanks to “Restore Credentials,” a new developer feature for Android which can keep you logged into your apps when you make the switch. While some apps already did this, Google is making it easier for developers to include this experience by implementing a “restore key” that automatically transfers to the new phone and logs you back into the app.

The change should help make going from one Android phone to another more like upgrading an iPhone. Apple users who move from one iPhone to another are used to having everything from email accounts to app credentials transfer to the new phone, but it hasn’t always been so seamless for Android users.

Google notes that there is “no user interaction required” on its flowchart showing signing in on one device, backing it up to the cloud, and having that key come back when setting up the new device. There is, of course, a direct device-to-device option for manually moving over app restore keys.

Restore keys can also be backed up to the cloud, although developers can opt out. For that reason, transferring directly from device to device will still likely be more thorough than restoring from the cloud, as is the case with Apple devices today. Notably, Google says restore keys do not transfer if you delete an app and reinstall it.
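The transfer rules described here can be modeled as a small decision function. This is a conceptual sketch of the behavior Google describes, not the actual Android Credential Manager API, and the parameter names are invented for illustration.

```python
def restore_key_transfers(method, app_was_reinstalled, dev_opted_out_of_cloud):
    """Return True if an app's restore key reaches the new device.

    Encodes the rules described above:
      - deleting and reinstalling an app never restores its key;
      - direct device-to-device transfer always carries the key;
      - cloud restore works only if the developer has not opted out.
    """
    if app_was_reinstalled:
        return False
    if method == "device_to_device":
        return True
    if method == "cloud":
        return not dev_opted_out_of_cloud
    return False
```

The opt-out branch is why a direct device-to-device transfer will tend to be more thorough than a cloud restore: it never depends on the developer’s choice.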

It’s very much in Android’s interest to reduce the friction of setting up a new phone deeply hooked into Google rather than inviting inquiries into the Cupertino-based alternatives. It’s also a quiet boon to anybody who does a full reset on their phone, whether by choice or out of frustration.


Google Cancels Pixel Tablet 3 Development

Remember in 2019 when Google announced that it was giving up on tablets? And then teased the Pixel Tablet in 2022 before ultimately releasing it in 2023? Now it turns out that the Pixel Tablet will be joining that list again, though not before getting one more iteration.

There is currently only the first-generation Pixel Tablet available on the market. Insiders from Google claim that, although a Pixel Tablet 2 is in the works, there will be no third tablet. Multiple industry sources close to the project have confirmed that the device, internally known as “Kiyomi,” will not be moving forward.

According to sources familiar with the matter, Google made this decision last week, with internal communications and meetings taking place to inform the teams involved. The personnel previously assigned to the Pixel Tablet 3 project are being redirected to other initiatives within the company.

The Pixel Tablet 2 is presumably too far into development to cancel outright, which is why it will allegedly come out next year. However, there’s no telling how long support for it will last if Google has given up on the line entirely.

What does this mean for Google’s tablet?

The Pixel Tablet — something that sounds so appealing on paper — didn’t turn out to be the high-end Android experience we hoped for. Google’s newest tablet, while not bad per se, was nothing special either. The 60 Hz display in particular made it feel very outdated, a problem that is also present on the iPhone 16.

However, instead of working toward improving it and making it the Android contender we need, Google is simply abandoning it. As it stands, the Pixel Tablet was marketed as a “premium” tablet from Google, when in reality it looked a lot like a cheap tablet from Five Below. It was priced well below other premium tablets like the iPad Air and iPad Pro, as well as Samsung’s Galaxy Tab S series. And Google really can’t charge Apple prices because Android, unlike iOS, isn’t exclusive to Pixel.

This effectively means that the Pixel Tablet 2, when it launches next year, will be a lame duck tablet.


Google’s Digital Wellbeing App Can Remind You After Excessive App Usage

The Digital Wellbeing app is getting a new feature to help you spend less time in distracting apps. Instead of blocking you from using an app, it gently reminds you when you’ve spent too much time in it.

This feature was first spotted in a teardown of the Digital Wellbeing app last month and has now started showing up for some users. Google has made some changes to the feature since the previous report: it’s now called Screen time reminders instead of Mindful Nudge.

The feature brings up a pill-shaped notification at the top of the screen after you’ve used one of the selected apps for a long time. The notification will show the amount of time you’ve spent on the app, prompting you to close the app and make better use of your time.

You can enable the feature on your phone by navigating to the new Screen time reminders option in the Digital Wellbeing & parental controls settings. Enable the Use reminders option on the following page and select the apps you want to see reminders for to set things up. The feature will then automatically show you a reminder when you spend too much time on one of the selected apps.
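The behavior described above amounts to a per-app usage threshold check, which can be sketched as follows. The function name, parameters, and the 30-minute default are all illustrative; Google has not documented the exact trigger.

```python
def should_remind(app, minutes_used, watched_apps, threshold_minutes=30):
    """Return True when a screen-time reminder should appear.

    Only apps the user selected are watched, and the nudge fires once
    usage passes the threshold -- the app itself is never blocked.
    """
    return app in watched_apps and minutes_used >= threshold_minutes
```

A real implementation would also need per-session or per-day accounting and some cooldown so the pill notification doesn’t reappear immediately after being dismissed.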

This new feature is a welcome addition to the Digital Wellbeing app. It can help users become more aware of how much time they spend on their phones, and in turn make better choices about which apps tend to be the time-wasters. Although Digital Wellbeing already offers similar tools, this one doesn’t completely lock you out of an app; it gently nudges you instead, which makes for a noticeably different approach.


Gemini Standalone App Launches on iOS

Gemini officially landed as a standalone app on Android back in February. Now, only a few days after a stray report about a dedicated Gemini app landing on iOS did the rounds, the Mountain View, California-based tech giant has officially confirmed its launch.

The new app allows iPhone users to interact with Google’s AI through text or voice queries and includes support for Gemini Extensions. A key feature is Gemini Live, which wasn’t available in the previous Google app implementation. When engaged in a conversation, Gemini Live appears in both the Dynamic Island and on the Lock Screen, letting you control your AI interactions without returning to the main app. You can continue talking to the AI assistant even with your iPhone locked.

The app also aids with learning by allowing users to ask questions on any topic, receive personalized study plans, and access custom, step-by-step guidance tailored to their learning style. Additionally, Gemini can assess knowledge with quizzes, including those based on complex diagrams.

Furthermore, the Gemini iPhone app seamlessly connects with other Google apps through Extensions. This integration enables Gemini to access and display relevant information from apps such as YouTube, Google Maps, Gmail, and Calendar within a single conversation.

The app is free to download, and Google offers premium features through Gemini Advanced subscriptions available as in-app purchases. Gemini Advanced is part of the Google One AI Premium plan costing $18.99 per month. Apart from Gemini in Gmail, Docs, and more, it includes access to Google’s next-generation model, 1.5 Pro, priority access to new features, and a one-million-token context window. Users need to sign in with a Google account to access the service.

It’s worth noting that the app is only available for users running iOS 16 and above. Also worth noting is that even though the standalone app is rolling out worldwide, users will only be able to use Gemini Live in the following languages:

Arabic, Chinese, Croatian, Czech, Danish, Dutch, English, Finnish, French, German, Greek, Hebrew, Hindi, Hungarian, Indonesian, Italian, Japanese, Korean, Norwegian, Polish, Portuguese, Romanian, Russian, Slovak, Spanish, Swedish, Thai, Turkish, Ukrainian, and Vietnamese.