A useful role for AI: a critic

I think offloading one's own cognitive tasks to generative models is a dangerous value proposition.

Say you're a non-fiction writer. The job involves understanding some body of information, empathizing with the position of an intended audience, and then writing words that convey the new understanding or put forth an argument to that audience.

Of course, all these steps can be immediately short-circuited through an AI prompt.

Don't believe me? Just try it: “Act as an expert in field X. Make the best possible argument in favor of position Y that is persuasive to an audience Z.” You can plug this prompt, with any X, Y, and Z--and ask for the argument to be either “in favor” or “against”--into a frontier model such as Gemini 3 Pro or Claude Opus 4.5, and you will reliably get a rhetorical piece that is exactly what you asked for.

And yet, I think the value of the exercise of writing is not wholly in the final written words. By pushing oneself to systematize and lay bare one's understanding in plain words, one is forced to face the places where one falls short, either in understanding the topic or in articulating it clearly.

With this in mind, I'd propose that the best use of generative AI in writing--and in many adjacent activities--is as a critic.

You can prompt the model: “Act as an editor and review the following written piece. Review for grammar and spelling. Assess the accuracy of the claims. Assess the soundness of the provided arguments. Find strong points, weak points, and main takeaways. Reveal possible blind spots of the author”, then follow that with your draft.
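
As a concrete illustration, here is a minimal sketch of that same critic workflow run as a script instead of a chat session. It assumes the Anthropic Python SDK with an API key in the environment; the model identifier and the draft file path are placeholders of my choosing, not anything prescribed above.

```python
# Minimal sketch: send the critic prompt plus a draft to a model and print the review.
# Assumes the Anthropic Python SDK and ANTHROPIC_API_KEY set in the environment.
# The model name and "draft.md" path are placeholders.
import anthropic

CRITIC_PROMPT = (
    "Act as an editor and review the following written piece. "
    "Review for grammar and spelling. Assess the accuracy of the claims. "
    "Assess the soundness of the provided arguments. "
    "Find strong points, weak points, and main takeaways. "
    "Reveal possible blind spots of the author.\n\n"
)

def critique(draft_text: str) -> str:
    """Return the model's critique of the given draft."""
    client = anthropic.Anthropic()
    response = client.messages.create(
        model="claude-opus-4-5",  # placeholder model identifier
        max_tokens=2000,
        messages=[{"role": "user", "content": CRITIC_PROMPT + draft_text}],
    )
    return response.content[0].text

if __name__ == "__main__":
    with open("draft.md", encoding="utf-8") as f:
        print(critique(f.read()))
```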

By asking the AI to point out where you might have made mistakes or been inaccurate, then reflecting on its feedback to ensure the critique is sound, you will likely come away with a sharper, more polished ability to do the work yourself--able to catch your own errors before making them--instead of simply growing dependent on external help.