AI Policy

TL;DR

I do not use generative AI to create any aspect of my written or visual work.

In certain circumstances, my clients may use AI to produce their final content based on my work.

  • They may publish their books with AI covers or illustrations.
  • They may use generative voiceover technology to produce audio from my eLearning scripts.

While I would rather they did not, I do not currently restrict the use of the work once it leaves my office.


What It’s Called

A note on the term “Artificial Intelligence”: strictly speaking, this technology is not AI. It is machine-learning pattern recognition making best guesses, and calling it “AI” is a branding and investment play. For simplicity, though, I’ll refer to it as AI here.

What’s It Good For?

I believe this iteration of AI has utilitarian value in our increasingly complex society, particularly in the areas of research (medical and otherwise), infrastructure, systems management, supply chains, etc.

So, AI to help review millions of data points to cure cancer? Hell yeah. AI to improve the efficiency of supply chains to reduce the costs of transport and manufacturing? Go for it. AI to manage power grids to keep people warm (or cool) and the lights on so nobody freezes or dies of heat exhaustion? Yes, please.

Generative and Consumer-facing AI

Using AI to scrape the protected IP of working creatives (artists, writers, filmmakers, actors, voice artists, developers) to distill and reproduce similar content is unethical.

Using AI to replace those creatives in order to cut costs is hypocritical and unethical: it demeans and discounts the value of human creativity while simultaneously touting the value of the creative products themselves (books, movies, paintings, illustrations, software, etc.).

To whatever small extent AI might be useful for building customized or personalized features into software or services, I’m unconvinced it is better than existing programmatic approaches. But I’ve only worked with or directed programmers; I am not one myself, so my opinion here is semi-neutral, aside from concerns about the environmental impact (which still needs mitigation) and the general enshittification of the user experience.

Water and Power

The impacts of AI data centers on water use, power consumption, land use, and the overall negative repercussions for the small communities around them are well-documented.

To the extent AI can be used to improve or manage utilities and infrastructure, or to contribute to important medical research for the greater good, data centers should be treated as public utilities, with concerted efforts made to minimize their negative impacts and improve their efficiency.

I have yet to see how these impacts can possibly be justified in support of AI in consumer-facing applications (software features, or the generation of consumer entertainment products).

Accepting those negative impacts for non-critical consumer-facing products is unnecessary and will do more harm than good.

My Personal Research

I’m not a machine-learning specialist, a legislator, or a programmer. I’m a creative who has personally seen the terrible impact of AI on individual creators: job loss, stolen IP, lost trust, and the enshittification of previously valuable software and services through the addition of useless AI features.

Know your enemy:

I have taken training in prompt engineering to better understand first-hand how the systems work and what they produce.

  • I was unimpressed at best.

I have done brief consultations with a variety of companies, both established and startups, to review and discuss the value of their AI integrations (my previous history in UX and UI still comes in handy): Booksnout Publishing, ProWritingAid, PastPal, and others. This has given me a good sense of what value (if any) AI features bring to these services, and how they integrate assistive vs. generative AI.

  • The assistive features seem competent but not preferable to non-AI features.
  • Where they veer into generative features, the integrations are clunky, and the interaction and output are wildly inferior to human-generated content.

I have also contracted on several different AI projects as an output reviewer, working through platforms that crowdsource creatives to review and suggest improvements for AI-generated materials. This has given me first-hand insight into how these technologies are being trained, how their rubrics are being structured, and how the projects themselves are being organized and managed internally.

  • To say the initial generative output is substandard is an understatement.
  • The efforts to create rubrics to help “correct” the outputs are laughably incompetent.
  • The management of the AI learning/improvement efforts is incompetent and disorganized, reproducible standards are inconsistent or nonexistent, and many projects are cancelled early due to an inability to generate any valuable or consistent improvements.

I’ve also run some “in-house” experiments. You can check out my article on hallucinations here: I Asked ChatGPT To Provide Citations. It Hallucinated Its Sources.

Published by Chip Street

Writey Guy || Founder/Principal, William Street Creative || Former U.S. Brand Manager, Simplilearn || Former Marketing Manager, Market Motive || Former Founder/President, Group Of People