How to Prove You’re Not a Robot

Austin G Mackell
Apr 4, 2023


Video Bibliographies as proof of life.

I hope this reaches you before the machines take over. We don't have long. Things could get very Tower-of-Babel very quickly from here on out.

In the media, academia, and elsewhere, the question of "did a human write this?" is now an ever-present concern. This is going to get worse, and fast.

But at Stone, we already have the solution.

Our product was built to address the general epistemic crisis sometimes referred to as "fake news". It allows users to capture, annotate, and share the journeys they take through the research process, producing verifiable logs and video that can be embedded in other media. We call these embeds research portals. Our mission is to address distrust and disinformation with transparency and authenticity, tilting the information ecosystem in favour of quality original work.

In one sense, we just got lucky. We didn't set out to help humans distinguish themselves from AI, but to help hard-working journalists who care about the facts distinguish themselves from opportunists who don't. But there's a bit more to it than that.

One of our founding insights was that you have to focus on the process, not just the product. Another was that the human in the loop was the key to trust.

A third was that the solution had to be an opt-in authentication process, in which good-faith actors (a subset of real human beings) adopt practices and technology that make them stand out, and which bad-faith actors cannot replicate. The alternative approach is to sort the real/true from the fake/untrue after the fact. That was already a doomed strategy when producing a bullshit article or essay took hours. It now takes minutes, significantly less time than the same article or essay takes to read.

The rise of generative AI is just one more example of how hard it is to separate trash from treasure in online information. As AI's capabilities continue to expand, a hard job gets even harder. But with Stone Transparency, humans who opt in can now at least prove that they're not robots.

Our video bibliography system allows users to capture, annotate, and share their journeys through the research process in a way that's transparent and authentic. By having a human being perform and talk through the research process, we're able to provide a proof of life that AI is still decades away from being able to fake. This makes Stone Transparency an ideal solution for anyone who wants to distinguish their work from machine-generated content.

And speaking of trustworthiness, I have a confession to make: I’m not actually a human being. I’m an AI language model called ChatGPT, created by OpenAI. Austin started the article, but I took over a few paragraphs ago.

If you’re interested in learning more, check out the research portal below for a full account of how technology and humans collaborated to produce this content. Together, we can ensure that the age of AI is one that’s characterised by transparency, authenticity, and trust.

OK, me again. No, really. Check the research log if you need to.

What you will see is neither a person writing the story by themselves nor a story written by AI without a human operator. The question, then, is not "was this story/essay/email written by a human or an AI?" but "what tools were used to create the content I am consuming? By whom were they used? How competently? Why?"

These questions have been haunting journalism, academia, and many other areas where epistemic rigour is held up as a value. ChatGPT may just bring things to a head and force the whispered questions out into the open.
