How AI could shape the future of journalism

Editor’s note: What impact can AI have on journalism? That is a question the Google News Initiative is exploring through a partnership with Polis, the international journalism think tank at the London School of Economics and Political Science. The following post is written by Mattia Peretti, who manages the program, called Journalism AI.

From the New York Times using artificial intelligence to find untold stories in millions of archived photos, to Trint using voice recognition to transcribe interviews in multiple languages, journalists around the world are applying AI in new and varied ways. When faced with financial, ethical and editorial questions around how the use of AI could impact their work, modern news organizations are exploring a wide variety of approaches to bring these new technologies to their newsrooms.

With the expert advice of newsroom leaders from Europe, the U.S. and Asia Pacific, we crafted a survey of more than 20 questions, ranging from the technical (which AI technologies have you adopted?) to the ethical (are you aware of AI biases, and how do you avoid them?). Over the last few weeks, newsrooms from all over the world have completed the survey, with contributions coming in from every continent. Their responses will lay the foundation of a report we will publish this fall, to draw a picture of how media is currently using—and could further benefit from—AI technologies.


Charlie Beckett presenting Journalism AI in London.

The richness and sophistication of the responses we have received so far is overwhelming. Most lament the vagueness surrounding the term “AI” and seek to adopt more precise terminology, such as machine learning, in newsroom projects and conversations. With applications ranging from predicting readers’ likelihood to subscribe to moderating posts in the comments section, it’s easy to understand why it’s necessary to get more specific.

Across the board, people generally agree about the motivations for adopting AI-powered technologies: No one expects machines to replace journalists, nor is anyone working towards that. The underlying goal is to delegate routine tasks to machines to free up time for creative work, in-depth investigations and audience engagement.

Today, newsrooms are exploring the potential of these new technologies, but only a few have already implemented AI at scale. For most organizations, the adoption is still in an experimental phase. While some journalists are ambivalent or skeptical, many are curious about how AI will impact workflows and processes and how newsrooms will cope with yet another new phase of disruption. 

Something fundamental is changing in the news industry. New technological challenges and opportunities are encouraging a reflection about the deeper meaning and mission of journalism, as well as the shape and ethics of the news industry in the era of artificial intelligence. As a result, many realize the urgency to explore innovative solutions to sustain the business of news. 

Algorithms and machines can augment the power of journalists, opening up new possibilities and unexplored territories. “AI just doesn’t work on its own, and we can’t expect it to fix all our problems,” one respondent to the survey said. “The best impact can be achieved as a partnership between humans and technology.”

We hope that our survey, and the community that we’re building around Journalism AI, will contribute to the quality and potential of this fascinating encounter.


Google Translate’s instant camera translation gets an upgrade

Google Translate allows you to explore unfamiliar lands, communicate in different languages, and make connections that would be otherwise impossible. One of my favorite features on the Google Translate mobile app is instant camera translation, which allows you to see the world in your language by just pointing your camera lens at the foreign text. Similar to the real-time translation feature we recently launched in Google Lens, this is an intuitive way to understand your surroundings, and it’s especially helpful when you’re traveling abroad as it works even when you’re not connected to Wi-Fi or using cellular data. Today, we’re launching new upgrades to this feature, so that it’s even more useful.


Translate from 88 languages into 100+ languages

Instant camera translation now supports 60 additional languages, such as Arabic, Hindi, Malay, Thai and Vietnamese. Here’s a full list of all 88 supported languages.

Even more exciting: previously you could only translate between English and other languages, but now you can translate into any of the 100+ languages supported on Google Translate. This means you can now translate from Arabic to French, or from Japanese to Chinese, for example.

Automatically detect the language

When traveling abroad, especially in a region with multiple languages, it can be challenging for people to determine the language of the text that they need to translate. We took care of that—in the new version of the app, you can just select “Detect language” as the source language, and the Translate app will automatically detect the language and translate. Say you’re traveling through South America, where both Portuguese and Spanish are spoken, and you encounter a sign. The Translate app can now determine what language the sign is in, and then translate it for you into your language of choice.

Better translations powered by Neural Machine Translation

For the first time, Neural Machine Translation (NMT) technology is built into instant camera translations. This produces more accurate and natural translations, reducing errors by 55-85 percent in certain language pairs. And most of the languages can be downloaded onto your device, so that you can use the feature without an internet connection. However, when your device is connected to the internet, the feature uses that connection to produce higher quality translations.

A new look

Last but not least, the feature has a new look and is more intuitive to use. In the past, you might have noticed the translated text would flicker when viewed on your phone, making it difficult to read. We’ve reduced that flickering, making the text more stable and easier to understand. The new look has all three camera translation features conveniently located on the bottom of the app: “Instant” translates foreign text when you point your camera at it. “Scan” lets you take a photo and use your finger to highlight text you want translated. And “Import” lets you translate text from photos on your camera roll. 

To try out the instant camera translation feature, download the Google Translate app.


Google for Mexico: Improving Mexicans’ lives through technology

Mexico is a diverse country in search of opportunities to accelerate development in an inclusive and equitable way. In our first Google for Mexico event this week, we presented new ways to help Mexicans achieve better employment and entrepreneurship opportunities, contribute to society through technological solutions and promote the country’s culture. 

Technology as a source of growth and opportunity

The Internet is boosting local businesses in Mexico, and Google is helping through our search and advertising tools. In 2018, website publishers, nonprofit organizations and more than 40,000 companies generated 47 billion pesos in economic impact throughout the country thanks to digital tools. To learn more about our success stories, you can visit our Economic Impact Report.

Google is helping people acquire and update the necessary skills to apply for a job or to be more effective in the work they already do. With programs like Grow with Google, we’ve trained more than 11,000 people throughout the country, helping them develop their digital skills. We have also launched other digital training projects like Digital Garage, Primer and Women Will, among other initiatives.

Additionally, we announced that the Google IT Support Professional Certificate, developed by Google and hosted on Coursera, will be translated into Spanish. Google.org is also giving a $1.1 million grant to the International Youth Foundation to offer scholarships to 1,000 young Mexicans, ensuring that people from underrepresented communities have free, supported access to the course.

Bringing technology to everyone 

In Mexico, there are currently 74 million people online, and 18 million more are expected to join in the next two years. That’s equivalent to almost 20 newly connected people per minute.

In the year and more that Google Station has been operating in Mexico, we have seen millions of people go online and get connected to more information and better opportunities. Google Station’s fast, free and open Wi-Fi is available in more than 100 locations throughout the country, with more sites going live in other public places very soon.

Google’s solutions for companies help Mexico promote itself as a great place to do business. That way, society can focus less on economics and more on improving living conditions and anticipating crises before they arrive. With the launch of Android Emergency Location Service (ELS), people will be able to share their location with emergency services when an emergency call is placed in a supported jurisdiction, even if the user has no mobile data plan or no mobile data credit left.

Strengthening small businesses online

Small and medium businesses play a crucial role in Mexico’s employment growth. Currently, fewer than 50 percent of small and medium-sized businesses in the country have a digital presence, but Google’s solutions can help expand their opportunities, reduce their operating costs and support them as they grow and consolidate.


Dora Velázquez, Flores de Oaxaca owner, used Google My Business to grow her business.

Google My Business is an easy, fast and secure way for small and medium businesses to establish their presence online. The Smart Campaigns program can also help small business owners reach new customers with an easy advertising solution that creates ads based on the business’s objectives: calls, store visits or website visits.

Helping Mexicans use the power of their voices 

When we launched the Google Assistant in Mexico two years ago, our goal was to help people get things done throughout the day at home, in the car and on the go—while having a unique understanding of the culture and context. Since then, more Mexicans are turning to the Assistant for help listening to music, playing games and getting answers to questions. The number of active users of the Assistant in Mexico has grown more than eight times since the beginning of 2018. Additionally, Spanish is the third most used Assistant language globally.

Over the coming months, the Assistant will get even more helpful. Mexican users will soon be able to book a ride in Spanish with providers like Cabify, Uber, and Bolt (formerly known as Taxify), order food delivery with Rappi and even transfer money to friends or family using BBVA—with help from their voice.


Assistant users in Mexico will soon be able to book a ride in Spanish with providers like Cabify, Uber and Bolt (formerly known as Taxify).

Building smarter cities 

Since 2014, Waze has been working with cities and municipalities around the world to help improve urban mobility. What started with 10 city partners has grown to more than one thousand globally, with 24 partners here in Mexico, including the Mexico City Mobility Department, the Secretariat of Communications and Transportation, Jalisco, Monterrey and many others.

Now, all Waze for Cities Data partners can store data for free via Google Cloud, while accessing best-in-class tools including BigQuery and Data Studio. Cities will be able to easily monitor traffic and transportation events, look at historical trends, assess the before-and-after effects of interventions and more.
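For city partners who want to explore that data programmatically, here’s a minimal sketch of the kind of query they might run, written in Kotlin with the Google Cloud BigQuery client library. The project, dataset, table and column names are placeholders for illustration; the actual schema depends on how a partner’s Waze feed is set up.

```kotlin
import com.google.cloud.bigquery.BigQueryOptions
import com.google.cloud.bigquery.QueryJobConfiguration

fun main() {
    // Hypothetical table and column names; adjust the reference to your own project.
    val sql = """
        SELECT city, DATE(pub_time) AS day, COUNT(*) AS jam_alerts
        FROM `my-project.waze_feed.alerts`
        WHERE type = 'JAM'
        GROUP BY city, day
        ORDER BY day
    """.trimIndent()

    val bigquery = BigQueryOptions.getDefaultInstance().service
    val rows = bigquery.query(QueryJobConfiguration.newBuilder(sql).build())

    // Print a simple day-by-day trend of traffic-jam alerts per city.
    for (row in rows.iterateAll()) {
        println("${row.get("city").stringValue} ${row.get("day").stringValue}: " +
                row.get("jam_alerts").longValue)
    }
}
```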

Municipalities like Querétaro are already leveraging Waze data to make mobility improvements. They recently looked at traffic patterns during peak hours and determined when commercial trucks should enter the city and where they should park. They even re-zoned certain parts of the city. 

A rich heritage, preserved and shared with the world

Mexico’s traditions are colorful and moving, a true expression of the identity of its people. To showcase this cultural heritage, Google Arts & Culture has dedicated a special initiative to capture and share Mexico with the world.


This is the first time the Soumaya Museum is digitally presenting its research on the Grana Cochinilla.

Recently, we partnered with one of the most visited museums in the world: the Soumaya Museum. For the first time, it will be possible to visit the museum and view its collection from any device, anywhere in the world. The project showcases more than 700 items encompassing over 30 centuries of art, including one of the largest collections of Auguste Rodin’s work outside of France.

The Soumaya Museum has digitized 31 paintings in extremely high resolution using the Art Camera, allowing the user to see details that are not visible with the naked eye. The museum is virtually opening its doors with the use of Museum View technology, which allows anyone, anywhere to admire the architecture of Fernando Romero, at the heart of a new commercial district in Mexico City. 


Soumaya Museum, Carlos Slim Foundation, Gallery 6.

Access to information is essential for the growth of countries. At Google, we believe that technology is the fuel to empower Mexico, providing smart solutions for millions of people.


Capture the attention of sports fans with Display & Video 360

Watching sports brings friends and family together to cheer on their favorite teams and enjoy a shared experience. For decades, live sports have given marketers the opportunity to reach these engaged audiences at scale and associate their brand with the teams, players, and moments fans are excited about.

What makes sports so compelling for marketers hasn’t changed, but the formula for capturing the attention of sports fans has. Historically, marketers could buy commercial airtime on live sports broadcasts and be sure that their message was reaching a passionate and broad audience. But as our viewing options have increased, brands have had to find new ways of engaging potential customers. Display & Video 360 helps brands reach these engaged audiences where and when they are watching.

Reach engaged fans on connected TV

I’m a die-hard Dodgers fan but live outside of Los Angeles so I only watch baseball games on the MLB.tv connected TV app. This means that connected TV is the best way to reach me in these moments. This is also true for an increasing number of sports fans who turn to connected TV to find exclusive sports content or to catch up with games they missed.

With Display & Video 360, you can use Programmatic Guaranteed deals to secure a wide variety of valuable connected TV sports inventory and build a deeper relationship with a fan base. For instance, as the excitement builds up for the second half of the Major League Baseball season and the race to the postseason, you can tap into MLB.tv’s premium content on connected TVs. Sling TV’s always-on sports deal is another source of high-profile in-game spots as well as surrounding coverage on leading broadcast and cable networks.

Be there during peak moments with real-time triggers

The game clock is winding down. The underdog scores to win as time expires. Wouldn’t it be impactful to capture this moment by boosting your reach and updating your ad message right away?

Now available to all advertisers and agencies, real-time triggers in Display & Video 360 are an automated tool that lets marketers instantly activate specific display and video messages in response to live TV or real-world events. Ad delivery can be accelerated across devices as soon as these pre-defined “triggers,” or moments, happen.

The real-time triggers workflow makes event-based campaigns scalable and easy to build. You simply define the triggers you care about in Display & Video 360 and then specify which ad creative you want to go live immediately following that event. For example, you could set up a trigger that would serve a specific ad for fifteen minutes during halftime of any NFL game. Or you could be even more specific and run ads for two hours after the New England Patriots win a game or for twenty minutes whenever the Patriots or a specified player scores a touchdown.
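Display & Video 360 handles this set-up in its own interface, but the underlying idea is easy to picture. Below is a purely conceptual sketch in Kotlin, with hypothetical types and event names rather than anything from the Display & Video 360 API: each trigger pairs an event with a creative and a time window, and delivery of that creative is accelerated while the window is open.

```kotlin
import java.time.Duration
import java.time.Instant

// Conceptual sketch only: hypothetical stand-ins, not the Display & Video 360 API.
// A trigger pairs an event with a creative and a window during which delivery
// is accelerated after the event fires.
data class Trigger(val event: String, val creativeId: String, val window: Duration)

class TriggerEngine(private val triggers: List<Trigger>) {
    private val activeUntil = mutableMapOf<String, Instant>()

    // Called whenever a live event is detected (e.g. "NFL_HALFTIME", "PATRIOTS_WIN").
    fun onEvent(event: String, now: Instant = Instant.now()) {
        triggers.filter { it.event == event }
            .forEach { activeUntil[it.creativeId] = now.plus(it.window) }
    }

    // Creatives whose trigger window is still open get served right now.
    fun activeCreatives(now: Instant = Instant.now()): List<String> =
        activeUntil.filterValues { it.isAfter(now) }.keys.toList()
}

fun main() {
    val engine = TriggerEngine(listOf(
        Trigger("NFL_HALFTIME", "halftime-spot", Duration.ofMinutes(15)),
        Trigger("PATRIOTS_WIN", "victory-spot", Duration.ofHours(2))
    ))
    engine.onEvent("NFL_HALFTIME")
    println(engine.activeCreatives())   // [halftime-spot]
}
```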

Sports trigger set-up in Display & Video 360

AirAsia used Display & Video 360’s real-time triggers to reach a massive audience of engaged fans in Southeast Asia during the most memorable moments of the 2018 FIFA World Cup. Their #StillGotIt campaign featured soccer star Roberto Carlos showing off the skills he still has even after retiring, from his signature curling free kick to non-stop samba dancing. AirAsia focused on reaching fans in Malaysia, the Philippines, Indonesia, Singapore, and Thailand — countries with avid soccer fan bases — and doubled the ads’ frequency during Brazil’s matches and the World Cup Final to ramp up engagement. By syncing its ads with the most crucial moments and partnering with a world-renowned winner like Carlos, AirAsia’s campaign resonated with millions of fans. As a result, AirAsia built a strong positive connection between its brand and its ambassador, exceeding industry averages for celebrity association according to third-party research.

“In the past, it was a tedious and manual task to reach so many different markets in precisely the right moments with the right message,” says Ravi Shankar, AirAsia’s Group Head of Digital Marketing.

Real-time triggers in Display & Video 360 support all English Premier League, Champions League, National Football League (NFL) and National Basketball Association (NBA) games, and we’re adding more on a regular basis. In fact, a number of marketers used real-time triggers during the FIFA Women’s World Cup this year.

In addition to Display & Video 360’s automated sports triggers, we’ve added custom triggers to help you time your ads with any tentpole moment that may be relevant to your brand. For example, if your audience is passionate about the Oscars or a reality TV show like The Bachelor, you can now decide to trigger your campaigns to launch when the first award is given or when the rose ceremony begins. You can also set up your trigger to accelerate the delivery of your ads right at the moment when you kick off a flash sale or announce a new product.

Building your brand with sports fans requires a smart game plan. To break through and ensure your message sticks, you need to connect with people at the right time and on the right device. Access to smart buying techniques and high-quality connected TV inventory in Display & Video 360 will help you stay ahead of the competition.


A different sort of moonshot: looking back on Apollo 11

When astronauts set foot on the Moon 50 years ago, it was a technological triumph that sparked curiosity across the globe. Neil Armstrong, Buzz Aldrin and Michael Collins inspired us to learn more about space and life here on Earth. A similar spirit of curiosity and exploration has always been core to Google, with our mission to make the universe of knowledge accessible to people around the world. So on the anniversary of the Moon landing, we’re bringing you new ways to learn about this milestone of human achievement, including new perspectives and stories that celebrate the lesser-known figures who made it happen.

Starting today, in collaboration with the Smithsonian National Air and Space Museum, you can get up close to the command module that carried Armstrong, Aldrin and Collins to the Moon. To get started, search for “Apollo 11” from your AR-enabled mobile device. You’ll get the option to see the module in 3D, so you can zoom in and check it out from all angles. Using augmented reality, you can then bring the command module into your space—your bedroom, the kitchen or wherever you are—to get a better sense of its size. And later this month, you can do the same thing with Neil Armstrong’s spacesuit and examine what astronauts wore on the surface of the Moon.


3-D Command Module created by The Smithsonian’s Digitization Program Office

You can also explore 20 new visual stories related to the lunar mission directly from Search. When you enter a space-related query—like “Apollo 11 mission”—on your mobile device, you’ll see visual stories from the Smithsonian about the mission, the spacecraft, and the people who made it possible. These full screen, tappable visual stories feature photos, videos and information about the space journey. 


One of the stories I found personally inspiring was that of Margaret Hamilton, known for helping coin the term “software engineering” and for creating the on-board software for Apollo 11. Among other tasks, this software made sure the Apollo 11 lunar module’s system could manage the information it was receiving and safely land on the lunar surface.

Google Arts & Culture has 40 new exhibits about Apollo 11, like Walter Cronkite’s reflections on humankind’s first steps, or a lesson on how to put on a space suit and pack snacks for the journey. There’s a lot to learn–the inside of your command module is a good place to take notes. And there’s more: starting July 15, Google Earth will have several new tours and quizzes to help you visually explore more about the Moon mission, NASA and the world of space exploration.

Space has always been near and dear to our hearts, whether it’s helping you explore the International Space Station through Street View, celebrate the first photo of a black hole, or simply satisfy your curiosity on Google Search. Try searching for “moon” (or “🌙”) on Google Photos to see your snapshots of our neighbor. Ask the Google Assistant questions to learn fun facts about the Moon, like what sports have been played on the surface. And be sure to visit Google.com on the 20th for another special Moon-related surprise. 

Apollo 11 continues to have a profound impact on our planet’s history. We hope this is just the beginning of your space explorations. 🚀


To reduce plastic waste in Indonesia, one startup turns to AI

In Indonesia, plastic waste poses a major challenge. With 50,000 km of coastline and a lack of widespread public awareness of waste management across the archipelago, much of Indonesia’s trash could end up in the ocean. Gringgo Indonesia Foundation has started tackling this problem using technology—and more recently, with a little help from Google. 

Earlier this year, Gringgo was named one of 20 grantees of the Google AI Impact Challenge. In addition to receiving $500,000 of funding from Google.org, Gringgo is part of our Launchpad Accelerator program that gives them guidance and resources to jumpstart their work. 

We sat down with Febriadi Pratama, CTO & co-founder at Gringgo, to find out how this so-called “trash tech start-up” plans to change waste management in Indonesia with the help of artificial intelligence (AI). 

Why is plastic waste such a problem for Indonesia? 
In the past 30 years, Indonesia has become overwhelmed by plastic waste. Sadly, we haven’t found a solution to deal with this waste across our many islands.

Indonesia’s geography makes it more challenging to put a price on recyclables. The country consists of more than 17,000 islands, five of them major, but most recycling facilities are based on the main island of Java. This makes transporting recyclables from other islands expensive, so low-value materials aren’t sorted and end up polluting the environment.

To add to the complexity, waste workers often have irregular routes and schedules, leaving many parts of the country unserviced. Workers also don’t always have the knowledge and expertise to accurately identify what can be recycled, and what recycled items are worth. Together, these factors have a devastating impact on recycling rates and the livelihoods of waste workers.

How are you proposing to address this problem? 
Waste workers’ livelihood depends on the volume and value of the recyclable waste they collect. We realized that giving workers tools to track their collections and productivity could boost their earning power while also helping the environment. 

We came up with the idea to build an image recognition tool that would help improve plastic recycling rates by classifying different materials and giving them a monetary value.  In turn, this will reduce ocean plastic pollution and strengthen waste management in under-resourced communities. We believe this creates a new economic model for waste management that prioritizes people and the planet. 

How does the tool work in practice? 
We launched several apps in 2017—both for waste workers and the public. One of the apps allows waste workers to track the amount and type of waste they collect. This helps them save time by suggesting a more organized route, and lets them quantify their collections and earning potential. Within a year of launching the apps, we were able to improve recycling rates by 35 percent in our first pilot village, Sanur Kaja in Bali. We also launched an app for the public, connecting people with waste collection services for their homes.


Febriadi Pratama with waste worker, Baidi, using the Gringgo mobile app

Tell us about the role that AI will play in your app.

With Google’s support, we’re working with Indonesian startup Datanest to build an image recognition tool using Google’s machine learning platform, TensorFlow. The goal is to allow waste workers to better analyze and classify waste items, and quantify their value. 

With AI built into the app, waste workers will be able to take a photo of trash, and through image recognition, the tool will identify the items and their associated value. This will educate waste workers about the market value of materials, help them optimize their operations, and maximize their wages.  Ultimately, this will motivate waste workers to collect and process waste more efficiently, and boost recycling rates. 

So whether it’s a plastic bottle (worth Rp 2,500/kg or 18 cents/kg) or a cereal box (worth Rp 10,000/kg or 71 cents/kg), these new technologies should allow more precious materials to be sorted and reused, thereby removing the guesswork for workers and putting more money in their pockets.
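As a rough illustration of how such a lookup could work on a device, here’s a short Kotlin sketch using the TensorFlow Lite Task Library. The model file, label names and price table are hypothetical placeholders, not Gringgo’s actual implementation, but they echo the examples above.

```kotlin
import android.content.Context
import android.graphics.Bitmap
import org.tensorflow.lite.support.image.TensorImage
import org.tensorflow.lite.task.vision.classifier.ImageClassifier

// Hypothetical price table (Rp per kg), echoing the examples in the article.
val pricePerKg = mapOf(
    "plastic_bottle" to 2_500,
    "cereal_box" to 10_000
)

fun estimateValue(context: Context, photo: Bitmap): Pair<String, Int>? {
    // "waste_classifier.tflite" is a placeholder for a model trained on waste categories.
    val classifier = ImageClassifier.createFromFile(context, "waste_classifier.tflite")
    val results = classifier.classify(TensorImage.fromBitmap(photo))

    // Take the highest-scoring category from the first classification head, if any.
    val top = results.firstOrNull()?.categories?.maxByOrNull { it.score } ?: return null
    val price = pricePerKg[top.label] ?: return null
    return top.label to price   // e.g. ("plastic_bottle", 2500)
}
```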


A mock-up shows how Gringgo thinks the app will be able to identify waste through AI-powered image recognition

What do you aspire to achieve in the next ten years? 

Waste management issues aren’t specific to Bali or to Indonesia. We think our technology has the potential to benefit many people and places around the globe. Our goal is to improve our AI model, make it economically sustainable, and ultimately help implement it across Indonesia, Asia and around the world.


OEMConfig supports enterprise device features

Android’s flexibility helps device manufacturers build diverse form factors with useful features to address a variety of business needs. But consistently delivering hardware options to organizations can be difficult because enterprise mobility management (EMM) providers often struggle to quickly support management for all these capabilities.   

To solve this problem, we’re launching OEMConfig, a new Android standard that enables device makers to create custom device features that can be immediately and universally supported by EMMs. Instead of integrating enterprise APIs from each OEM to support their custom features such as control of barcode scanners or enabling extra security features, EMMs can easily use an OEM-built application that configures all of the unique capabilities of a device. 

OEMConfig utilizes a feature in Android Enterprise called managed configurations, which allows developers to provide built-in support for the configuration of apps. With OEMConfig, EMMs can support all of a device manufacturer’s diverse set of controls without any incremental development work on their end.
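For a sense of the mechanics, here’s a brief Kotlin sketch of how an app reads the managed configuration an EMM has applied, using Android’s RestrictionsManager. The key names are hypothetical examples; in practice they come from the schema the OEM publishes with its app.

```kotlin
import android.content.Context
import android.content.RestrictionsManager

// Sketch: reading the managed configuration that an EMM has applied to this app.
// Key names below are hypothetical examples, not part of any OEM's actual schema.
fun applyManagedConfig(context: Context) {
    val rm = context.getSystemService(Context.RESTRICTIONS_SERVICE) as RestrictionsManager
    val config = rm.applicationRestrictions        // Bundle of key/value pairs set by the EMM

    val scannerEnabled = config.getBoolean("enable_barcode_scanner", false)
    val securityPolicies = config.getBundle("security_policies")   // nested bundles group related policies

    // An OEM-built app would translate these values into calls on its device-specific APIs.
}
```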

Earlier this year, Samsung declared early support for a preview version of OEMConfig, publishing a Knox Service Plugin (KSP) app that enabled EMMs to support Knox Platform for Enterprise features. Since then, we’ve built out the final pieces of architecture to make it even more useful for customers and EMM partners. These include:

  • An enhanced schema with four-level nesting, to present complex policies to IT admins in a structured format

  • An update broadcast to instantly inform OEMs when policies have changed

  • A feedback channel to confirm the result of policies applied on the device

OEMConfig will continue to unlock more enterprise capabilities for business customers in a consistent manner, helping organizations move faster and go further in achieving their business goals. We’re excited to see what our customers will be able to do when they harness all the flexibility and innovation our ecosystem provides. 

More information about OEMConfig can be found here.
