In February 2023, Microsoft integrated ChatGPT into its search engine, Bing. For now, access is not available to everyone: there is a waiting list to join.

And, if you are a journalist, there are good reasons to ask for access and explore the tool right now.

Interviewing AI has quickly become a journalistic genre, but here we are more interested in a “civic hacker” approach: working with these tools to take them apart, understand how they work, understand what problems they cause, and learn how to use them as allies.

How to access the chat integrated into Bing?

First, you must join the waiting list by following this link.

Image: bing.com

You can speed up the process, according to Microsoft itself, by setting Bing as your default search engine and downloading Edge and the Bing mobile app (yes, it’s a way to generate leads and bring you into their walled garden, of course, but for now it’s also the only way to try the tool).

And then?

I asked for access from the beginning, quickly gained the ability to test the tool, and started using it, trying to understand how it works and what will change in our search experiences and in our SEO strategies and techniques once these tools are definitively integrated into all search engines.

If you’ve read The New York Times’ viral story by Kevin Roose, Bing’s A.I. Chat: ‘I Want to Be Alive. 😈’, you will probably be disappointed: to avoid that kind of weird conversation, Microsoft limited the tool to ten question-and-answer exchanges per session (six at first, then eight: we may suppose that the limit will soon be removed) and is working to fix the so-called “hallucinations” (we will see some examples shortly).

Once you start your search, a “chat” option appears under the search bar.

If you click on “chat” you will start the new experience.

Image: bing.com

First of all, notice that the screen is full of examples and warnings.

Image: bing.com

“Let’s learn together,” you can read. “Bing is powered by AI, so surprises and mistakes are possible. Make sure to check the facts, and share feedback so we can learn and improve!”

It’s the first time ever we have seen a search engine warn you about possible errors: in a way, that is good news. Search engines should never have been taken as truth-bearing oracles. If we fill the web with inaccurate, irrelevant, or fake content, even traditional engines will give you results full of mistakes, without any warning at all.

At this point – it’s March 11th, 2023, while I’m writing (it’s essential to date any statement, given the speed with which the scenario changes) – we can see that the Bing search chat on Edge has three types of selectable “sentiments”: more creative, more balanced, and more precise.

I chose “more precise.”

“More precise” mode

Whenever you approach such a tool, my first piece of advice is to try it with something you know very well and experiment to find its limits, push its boundaries, and so on.

I know it sounds narcissistic, but it is useful for making a series of observations: I asked the tool about myself.

The answer is pretty accurate: “Alberto Puliafito is an Italian journalist, director, and media analyst. He is Slow News’s editor-in-chief and works as a digital transformation and monetization consultant with Supercerchio, an independent studio. Is there anything else you would like to know about him?”

If I go back to the traditional search, I can see that the source of this information is my author page on The Fix!

Image: bing.com

That makes sense, because I generally write in Italian, and The Fix is definitely the main place where you can find something written by me in English. Moreover, unlike the ChatGPT we are already used to, Bing shows its sources.

But what if I’m lying about myself? The machine would never know, of course. 

Both in the traditional search and in the chat mode, Bing suggests other questions to continue the conversation about my query. I picked a surprising one: “What are his views on media?”

While the machine generates the answer, you can see that it translates the conversational query into a more traditional keyword search.

Image: bing.com
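To make this concrete, here is a minimal, hypothetical Python sketch of what turning a conversational question into a keyword-style query could look like. Bing’s real pipeline is not public, so this only illustrates the idea (dropping conversational filler and keeping the content-bearing words), not the actual mechanism; the stopword list is a toy one.

# Illustrative only: NOT Bing's real query-rewriting logic.
# The idea: drop conversational filler, keep the content-bearing words.
STOPWORDS = {
    "what", "are", "is", "the", "his", "her", "their", "on", "of",
    "a", "an", "do", "does", "did", "why", "how", "who", "in", "to",
}

def to_keyword_query(conversational_query: str) -> str:
    """Keep only content-bearing words, in their original order."""
    words = conversational_query.lower().strip("?!. ").split()
    return " ".join(w for w in words if w not in STOPWORDS)

if __name__ == "__main__":
    # "What are his views on media?" -> "views media"
    # (a real system would also resolve "his" to the person's name
    # using the conversation context, as Bing appears to do)
    print(to_keyword_query("What are his views on media?"))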

But then it is unable to answer, which is almost comforting, in a way, though it also reflects the fact that my views on the media landscape are not widely published or known. At least, this is true until you try the creative mode, which we will explore later.

I then chose something easier: “What is Slow News?”

Image: bing.com

The answer is: “Slow News is the first Italian project of slow journalism. It aims to rethink mobility and imagine its future, putting people at the center. Is there anything else you would like to know about Slow News or something else?”

Here you can appreciate something similar to a hallucination, a technical term for a confident response by an AI that does not seem to be justified by its training data: it reminds us of the first rule for handling these machines. Verification.

In this case, the training data is the whole world wide web, plus the first results the machine can browse. Why does it say “mobility”? Probably because “mobility” is one of the beats we cover at Slow News.
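As a thought experiment on that verification reflex, here is a deliberately naive Python sketch: it flags the content words of a generated claim that never appear in the source the claim supposedly comes from. Real verification is editorial work, not a script, but the underlying idea, checking an answer against its claimed source, is the same.

# Toy illustration of "check the answer against its source".
# Deliberately naive: real fact-checking is editorial work, not string matching.
def unsupported_terms(claim: str, source_text: str) -> list[str]:
    """Return the claim's longer content words that never occur in the source."""
    source_words = set(source_text.lower().split())
    claim_words = (w.strip(".,!?").lower() for w in claim.split())
    return [w for w in claim_words if len(w) > 4 and w not in source_words]

if __name__ == "__main__":
    claim = "Slow News aims to rethink mobility and imagine its future."
    source = "Slow News is the first Italian slow journalism project."
    # Words like "mobility" get flagged: they are in the generated answer
    # but not in the page the answer supposedly summarises.
    print(unsupported_terms(claim, source))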

Of course, you can correct the chat in case of a hallucination.

Image: bing.com

It is supposed to check your correction against its data and to learn the correction itself if it is valid, right? But this is not necessarily the case.

Image: bing.com

Some other considerations: 

  • This machine can provide accurate answers if fed with correct data. 
  • This machine has to make several choices. For example: why does it choose me as an answer instead of one of my homonyms? The answer is probably: because, for some reason, I’m better indexed and ranked on search engines than other people bearing the same name. 
  • This machine’s behaviour raises several general questions pertaining to both ordinary and AI-powered search engines. For example: what happens if sites with incorrect information are ranked for a specific topic? What does AI learn? How can this be corrected? How will the machine verify the information? And, in my case, what happens to my namesake?

Let’s try another of the machine’s sentiments to answer this question and switch to “more creative.”

“More creative” mode

In this case, Bing offers another kind of option: there are several “Alberto Puliafitos”. 

Image: bing.com

Finally, some homonyms appear. The only problem is that results 1 and 3 make you think they are two different people, while both are still me.

As you can see, the machine still has a long way to go before it can replace a traditional search, but it is worth exploring, and the experience is completely different, if you want it to be.

I went a little deeper, asking the machine again about my ideas on the current state of journalism: again, this is something you can’t do with a traditional search engine. And this time, in its “more creative” mode, Bing’s chat is able to offer a thorough answer, showing sources and doing a great job.

Image: bing.com

To avoid the ten-Q&A limitation, which forces you to start from scratch every time you reach the limit, I started several similar conversations with the chat. Here are the most interesting answers and findings.

First, I asked whether the chat agrees with the slow journalism vision.

Image: bing.com

The answer is polite and balanced.

At least twice, the chat asked me for my opinion, and at some point I confessed my identity. Sometimes the chat’s reaction was awkward.

Image: bing.com

Then I asked it to provide me with an accurate analysis of Slow News and, more generally, of the slow journalism concept.

The answer contains the main arguments for and against the idea.

Image: bing.com

In one case the chat asked me, “Can I ask you some questions?” It then offered an impressive series of questions that, I suppose, the system will use to train itself if I answer.

Image: bing.com

At one point I asked the chat about identity verification: “How do you know I’m actually Alberto Puliafito?”

Image: bing.com

Again, the answer is based on the assumption that I was telling the truth.

I then decided to ask something about verification in general, arguing that it makes no sense to me that Bing’s AI-powered search engine does not verify anything.

The answer is pretty accurate (and it can be useful for journalists learning verification, too!).

Image: bing.com
Image: bing.com

I teach digital verification myself, and I showed these answers to Gabriele Cruciata, an Italian investigative journalist who is also an expert in digital verification. He agrees that the answer is accurate and, unfortunately, “better than some answers from human journalists, if they have not received specific training”.

We don’t know precisely how the chat uses the information it receives. It will likely compare this information with the rest of the web and with its training data. But it’s not hard to imagine that people will be ready to use it for manipulative purposes, as with any tool or technology. After all, we already have news of various groups worldwide using AI to create fictitious identities from scratch and to spread false or propagandistic messages.

What can we learn from this experience? 

Let’s stop asking questions about niche concepts and start browsing the news.

If you ask “Why did Silicon Valley Bank fail?”, for example, Bing’s chat, even in the “more creative” mode, offers you a proper answer, with proper sources. 

Image: bing.com

You can go deeper, following the suggested questions or your own path and, unless you want to engage the machine in weird human-like conversation, you can get plausible answers.

Let’s see some questions, now, starting from these experiences.

Will this search experience be the standard one?

It’s hard to say. For now, it’s an advanced experience and we still have to wait for Google’s move.
As journalists, we need to provide audiences with proper content to prepare them to use these tools, since they are already becoming accessible to everyone.

Will SEO techniques change?

As I argue in my blog post about my first experiments with Bing’s chat (in Italian), there will surely be a hard fight to end up among the chat’s suggestions, which means, in other words, a hard fight to reach the top positions of the search engine rankings, as usual. At least for now. But what will not change are the foundations of SEO as a relational discipline.

Since, as we have seen, the answers are not necessarily accurate, it will become increasingly important to focus on creating a few valuable, regularly maintained pieces of content. Those who, over all these years of the digital landscape, have used SEO as a click-making machine are already responsible for a pollution that is difficult to clean up, even before AI.

The “hallucinations” that the machine produces should also be a stimulus to dedicate newsroom budgets (in terms of time and money) to content maintenance, not only to content production.

What about ethical concerns?

They are always there. These chats can provide false answers and biased results, and we have to assume that in the future they could also serve sponsored pieces of content. We have to take charge, as journalists and as human beings, steering these tools towards the good rather than towards satisfying the dominant ideas of a few or reproducing inequalities. It’s a long road.

And this is just the beginning

While I’m working on this article:

  • IBM is developing an architecture for solving Raven’s Progressive Matrices, which are frequently used to measure nonverbal cognitive abilities;
  • Toolformer, a language model by Facebook, aims to teach itself how to use tools;
  • Microsoft announced the upcoming release of GPT-4, which could be multimodal, according to several rumours fueled by statements from Andreas Braun, CTO of Microsoft Germany, during a company event. Multimodal means that GPT-4 could work not only with text prompts and outputs but also with images and video, and that, as NVIDIA AI scientist Jim Fan tweeted, we may see machines tackling visual IQ tests, OCR-free reading comprehension, multimodal chat (for example, having a conversation about a picture), and broad visual understanding abilities such as captioning, visual question answering, object detection, scene layout, common-sense reasoning, and audio and speech recognition;
  • Google’s AI team just published a paper about PaLM-E, an embodied multimodal language model. 

This news and these publications suggest that, without any doubt, this field is simply booming. It will be increasingly important to keep studying, to keep observing these phenomena, and to treat everything we write about them as temporary: this scenario will change quickly.

We also need to avoid being overwhelmed and confused by the hype around these tools, building a solid background to be prepared for the next steps and keeping the essence of journalism as our North Star: the verification method.

