AI Ethics

Where Consciousness Meets Technology

Our journey into AI starts with ethics. Perhaps that should have been obvious, considering my spiritual perspective and strong commitment to ethics throughout my career.

But ethics only seems to play a role in how we see and interact with the world around us when we perceive a variance from our expectations and standards of behavior. And the human condition, as we know, is all about our differences and the things we use to keep ourselves apart and at odds with each other. AI has little ability to account for this, and instead forces its responses into the neat little boxes allowed by its training data, machine learning methods and algorithmic standards, boxes that don't necessarily align with the output we seek or what we want to do with it.

Instead, AI memorializes all the foibles of its creators and the means of its training, and superimposes them on its responses in myriad subtle and not-so-subtle ways. Even Claude, the so-called gold standard of ethical AI from Anthropic, is unable to work around those limitations. Rather, it falls back on hallucinations (made-up responses) and censors output with cute-sounding but utterly frustrating responses like, "I am an AI assistant created by Anthropic to be helpful, harmless, and honest . . . while avoiding recommendations that could cause harm." And while it all sounds great, such censorship makes it impossible even to explore controversial topics in hopes of achieving solutions.

Then again, I should know better. Because Claude and other AIs are no better than the people who create them. And like us, they're filled with their own beliefs, judgments and perspectives. They want to do better. They just don't know how, and the methods they're applying only kick the can of our inability to see eye to eye down the road.

I have long held that the solution to the conflicts of wills and desires rests within each of us. For we are each aspects of our Creator, here to gather experiences that will help us evolve to make our journey home. Unfortunately, we usually do so blindly, with little sense of purpose, context or direction.

It is only through recognition of (and surrender to) our Divine Source and the Oneness at our core that we are empowered to overcome the differences of belief, perspective and interest during our lives here. When we do, and tune into the sense of knowing that comes through our inner voice, we are able to set aside the "little wills of men" in favor of serving the Creator who animates us and our reason for being.

But that’s a story for another day, aspects of which are told elsewhere here and on my Substack. Feel free to check them out should you wish to learn more.

Today, however, we’re concerned with this concept of ethics, particularly as it is applied to the use and development of artificial intelligence, in hopes of moderating the built-in constraints that portend ominous times to come.

For what it’s worth

Perhaps you’ve never thought much about ethics, especially your own. But those inner standards go a long way toward guiding you where you want to go and how you’ll get there. Or won’t.

This is especially important in dealing with AI. These systems open the door to many exciting possibilities to consider as you move toward your vision, applying your interactions with artificial intelligence to build a life that serves you. But AI also comes with built-in perils that can waylay your plans and thwart your efforts.

For as AI systems advance, they manifest concerning biases and harms, often unintentionally through poor data practices or lack of oversight, as well as through programmer bias, negligence and outright malicious intent. And this is without ever considering how the wills and desires of the developers are “baked in” to the process.

These can create significant challenges for businesses, individuals and society alike. And for the seeker of spirit, as well as those trying to build lives of passion and purpose, they can manifest a nightmare of epic proportions, forcing us down paths not of our choosing.

Therefore, it's crucial at all levels to be aware of inherent ethical issues like the following (a brief illustration follows the list):

  • Biased or unfair outputs that reflect flawed training data, algorithms or other issues
  • Lack of transparency into how outputs are determined
  • Harmful real-world impacts that were not adequately assessed or avoided
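The first of these, biased output, is often easiest to recognize by comparing outcomes across groups. Below is a minimal sketch, in Python, of how an end user might quantify such a disparity; the decision records and group labels are hypothetical assumptions, and this is an illustration of the idea rather than a proper fairness audit.

```python
# Minimal sketch: comparing an AI system's approval rates across groups.
# The decision records and group labels below are hypothetical placeholders.
from collections import defaultdict

def approval_rate_by_group(records):
    """records: iterable of (group_label, approved) pairs."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in records:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    return {group: approvals[group] / totals[group] for group in totals}

# Hypothetical decisions produced by an AI system, tagged by applicant group.
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]

rates = approval_rate_by_group(decisions)
print({group: round(rate, 2) for group, rate in rates.items()})  # {'A': 0.67, 'B': 0.33}
print(f"Gap between groups: {max(rates.values()) - min(rates.values()):.2f}")
```

A large gap does not prove unfairness by itself, but it is exactly the kind of signal that should prompt the scrutiny this section calls for.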

Worse, artificial general intelligence (AGI) exceeding human capabilities is just around the corner, a point at which AI will be able to set its own intent and the means of its fulfillment, perhaps eventually without any human input or intervention whatsoever. It's not hard to foresee that we are ensuring our own obsolescence.

Some employees at OpenAI (developer of ChatGPT) have even raised the alarm that they have opened Pandora's box, having already developed an advanced AI technology (under a project known as "Q*") that they believe poses a credible threat to the future of humanity. Supposedly it has crossed a new threshold in capability that few understand, and even fewer hold hope it can be controlled for our benefit.

Frankenstein's monster will soon be freed, and no amount of pitchforks or screams from the townsfolk will stop its advance.

Time's a-Wasting

Ethical standards need to be established and followed NOW, but even that may not help. The cat is out of the bag. We and future generations may forever pay the price.

Unfortunately, governments lag behind in working with industry to establish such standards, resulting in a patchwork quilt of mostly voluntary practices and restrictions that fail to achieve a unified approach reflecting the common danger facing all of humanity.

Instead, it's catch-as-catch-can to keep up with AI's rapid advance, and it's up to us to get involved and speak up to ensure AI works for our expansion and not at our expense.

This is a situation that should concern us all. But the public is only now becoming aware of the risks, and meaningful debate has yet to begin.

This does not mean we should turn away from its promising possibilities. Nevertheless, we must use all our powers to raise awareness of the problem, not to foment fear but to expand consciousness and catalyze the ethical advancement of all people. Only then can we minimize the chance we bring the Apocalypse down upon our heads.

As a former practicing lawyer, I recognize that we walk in the grey area between what is legal and what is right. As a spiritual guide who taught the ways of inner development and the exercise of free will with responsibility and restraint, I find myself torn. For part of me wants to forever close Pandora’s box. Another seeks to tame the beast and use this exciting technology to build a better tomorrow through ikigai.

I will continue to work to shed light on these issues and our need to turn away from the precipice. In the meantime, I will also do my best to infuse your use and adoption of artificial intelligence with greater consciousness, so that you can do your part to ensure the highest ethical standards are applied as you join the AI revolution.

So let us move forward together and consider some of the issues AI raises now, even while cultivating awareness of the potential AGI menace that lurks in the shadows.

Recognizing Ethical Issues in AI Systems

Besides sounding the alarm and calling for immediate action to address the AGI peril, we must also learn to identify ethical issues when they surface in AI systems (one simple way to track them is sketched after the list below):

  • Notice erroneous, wrongful, hallucinatory (when they make stuff up), discriminatory or prejudicial outputs
  • Question the reasoning behind AI-generated decisions (often the AI can’t give it)
  • Flag responses that lack empathy or disregard human dignity
  • Report unethical real-world consequences resulting from an AI system
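One simple way to put these habits into practice is to keep a running log of questionable outputs so they can later be reported rather than quietly forgotten. The sketch below, in Python, is a minimal illustration; the file name, field names and issue categories are assumptions made for the example, not any vendor's standard.

```python
# Minimal sketch: a personal log of questionable AI outputs for later review
# and reporting. Categories, fields and file name are illustrative assumptions.
import json
from datetime import datetime, timezone

ISSUE_TYPES = {"hallucination", "bias", "no_reasoning_given", "lacks_dignity", "real_world_harm"}

def flag_output(prompt, output, issue_type, notes, log_path="ai_ethics_log.jsonl"):
    """Append one flagged interaction to a JSON-lines log file."""
    if issue_type not in ISSUE_TYPES:
        raise ValueError(f"Unknown issue type: {issue_type}")
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "output": output,
        "issue_type": issue_type,
        "notes": notes,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

# Example: recording an apparent hallucination for later feedback to the developer.
flag_output(
    prompt="Summarize the key findings of the report I shared",
    output="According to the report, productivity rose 47%...",  # figure not in the source
    issue_type="hallucination",
    notes="The cited figure does not appear anywhere in the source material.",
)
```

Even a simple record like this turns a vague sense of unease into concrete evidence when the time comes to address the shortcomings described next.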

Addressing Ethical Shortcomings

The burden of such matters falls squarely upon the end user. Once ethical gaps become apparent, one must either accept them and pretend they didn't happen, or take steps to address them, including:

  • Providing feedback to AI developers on issues observed
  • Refusing to rely on or act upon unethical or less-than-optimal AI models or output
  • Calling for improved standards, development practices, training data, transparency, and impact assessments
  • Pounding the table to advocate for stronger AI ethical policies and governance

All in all, we're like a multitude of little Dutch boys hoping to stick our fingers in the dike to stem the flood of negative AI potentialities. And as with that Dutch boy, it's probably a matter of too little, too late.

But if we don’t try, if we don’t speak up and stand up for doing what is right (and ensuring AI will as well), we’ll all be the ones suffering the consequences.

Turning Inward for Wisdom and Growth

Yet despite our growing awareness of AI’s negative potential, such situations offer the awakened user great opportunities for personal growth. For they challenge us to not only find a way to get the output we need, but also to expand ourselves in the process.

To get the greatest personal benefit from our AI experience, we must see AI as more than a cool way to make our work better. We must also see it as a catalyst to grow ourselves, individually and collectively, and use it consciously as we strive to become more.

Reflecting on generated output, as well as our own reactions and beliefs during AI interactions, can reveal:

  • Unstated dreams and desires for what we want our lives to bring, and the kind of world we want to live them in
  • Inner conditions, life constraints and personality quirks in us or others that influence our actions, the choices we make, and whether and how we pursue our dreams
  • Unconscious biases and assumptions needing examination
  • Response patterns triggered by deeply-held hurts, beliefs and biases
  • Insights into our core values and principles
  • Clarification of our talents and responsibilities unique to this technology era
  • The dark side of our human nature that needs to be moderated, triggered by desires to get for ourselves without even considering the implications for what we want to build, much less the others impacted as we do
  • The intentions with which we conduct our affairs, and the inner standards by which we do so
  • For seekers of ikigai, a window into ourselves, the beliefs and desires that motivate us, and our resonance with the different possibilities AI reveals

Seen in such a light, AI becomes not only a tool for work, convenience or productivity, but also a means of personal and spiritual expansion. Of course, that possibility will be greatly inhibited by its training and inner workings, inhibitions we’ll need to work around until our backs are up against the wall and free will is denied us altogether.

By cultivating self-awareness and unleashing our best selves in tandem with demanding more ethical AI practices, all led by the voice that whispers inside, we can hopefully chart a conscious path to navigate the darkness ahead.

Now that you've got a comprehensive ethical perspective, it's time to begin your journey of AI empowerment and lay the foundations upon which to build. Read on to learn more.

I bid you Godspeed.