One of the most intriguing books I’ve read this year is Machinehood by S.B. Divya. Set in the year 2095, the novel depicts a world where AI and bots have taken over most of the work humans once did, leaving people little to do but supervise the bots or work in the gig economy, handling the few tasks AI can’t yet perform. It’s a dystopian future, born of a capitalist push to increase revenues and profits without regard for the human cost of replacing people with machines.
How do we avoid an AI dystopia like the one envisioned in Machinehood? One approach is to leave it to government regulation. Various countries are considering such legislation, and I applaud their efforts, but I think it’s unlikely they will strike the right balance between protecting their citizens and fostering innovation. I’m also skeptical that they can write regulations quickly enough to keep pace with technological change. Another approach, and the one I favor, is for organizations to create their own AI policies. If enough organizations do this and adopt the best of what their peers have done, we may end up with something both responsible and workable.
Organizational AI Policies
Here are a few examples of AI policies:
Randall Pine: https://www.randallpine.com/generative-ai-policy-template
Fisher Phillips: https://www.fisherphillips.com/a/web/du6wach1kmRuPCgDcMLJ5Z/ai-policy.pdf
Wired Magazine: https://www.wired.com/about/generative-ai-policy/
Looking at these AI policies, I see a few trends worth noting:
- AI should assist and augment human abilities, not replace them. For example, AI is no substitute for human creativity or judgment. AI should be used to improve our lives, not make them less interesting.
- AI shouldn’t be solely about increasing productivity. As Randall Pine’s policy rightly points out, AI shouldn’t be used to maximize productivity at the cost of quality. AI can churn out a lot of obvious and unoriginal content. The real challenge is to learn how to use AI in tandem with our creativity to produce better and more original content. This approach not only enhances the quality of the output but also underscores the value of human creativity in the AI era.
- Transparency is needed with AI. One of the things I admire about Christopher Penn’s writing on AI is his content authenticity statement. At the start of each piece, he discloses how much of the content, if any, was generated by AI. As Christopher explains, this is needed for legal, copyright, and moral reasons.
- AI policies need to be workable and future-proof. Fisher Phillips’ policy fails on both counts when it asks every employee to inform their supervisor whenever they use generative AI to help perform a task. That requirement is unworkable in a world where the tools you use may not even tell you when they rely on AI. Wired’s refusal to use AI for any edits also strikes me as out of date. Does this mean I can’t use Grammarly to check my grammar?
Melissa and I at the Agile Marketing Alliance are working to create our own organizational AI policy. Does your organization have an AI policy? Can you share an external version of it?
Content Authenticity Statement: 100% of this content was generated by me, the human. I did use Grammarly to check spelling and grammar.